phil_s

Browsersync on a Digitalocean Droplet (with Processwire + gulp)


Hi guys,

I have replicated my local gulp workflow on a DigitalOcean droplet,
and I can't seem to get Browsersync to work. Has anybody tried this before and can chime in?

My PW install is at http://clients.domain.com/clientname/
In there, my folder setup looks like this, so gulp runs from the /templates folder:

clients
+ public
  + clientname
    + site
      + templates
        + node_modules
        + gulpfile.js
        + package.json
        + src/
        + assets/

 

I tried all kinds of shenanigans, including leaving out any proxy settings,
setting a proxy, using a different port, and fiddling with the scriptPath option...

gulp.task('browser-sync', function () {
  browserSync.init({
    // rewrite the injected snippet's script URL so it points at the external URL
    scriptPath: function (path, port, options) {
      return options.getIn(['urls', 'external']) + path;
    }
  });
});

// or this:
gulp.task('browser-sync', function () {
  browserSync.init({
    host: 'XX.XX.XX.XX',
    open: 'external',
    proxy: 'clients.domain.com/clientname/'
  });
});
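
For reference, here is a minimal sketch of the combined direction I was aiming for: proxy the existing vhost and point the client socket at the droplet's public IP, so the injected snippet doesn't try to reach localhost. The IP and port are placeholders, and this exact combination is untested on my setup:

// Sketch only: XX.XX.XX.XX stands in for the droplet's public IP.
var gulp = require('gulp');
var browserSync = require('browser-sync').create();

gulp.task('browser-sync', function () {
  browserSync.init({
    proxy: 'http://clients.domain.com/clientname/', // the existing PW install behind the web server
    port: 3000,   // must be reachable from outside the droplet (firewall!)
    open: false,  // nothing to open on a headless server
    notify: false,
    socket: {
      // make the injected snippet connect to the droplet's public address
      // instead of localhost
      domain: 'XX.XX.XX.XX:3000'
    }
  });
});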


Gulp and Browsersync seem to be running fine; only the integration is tricky: I get timeouts or 404s with the auto-generated snippet (apparently the relevant client files are created dynamically at runtime, but I can't access or see them at the default paths).

I'm probably missing something very obvious, can't be this hard, can it?

 

- - -

EDIT:
Could this simply be a firewall issue?

- Entering the IP at Browsersync's default port (3000) doesn't load anything.
- I used ServerPilot to configure the machine, so the firewall is set up with very few ports open, and obviously 3000/3001 are not among them.

I guess I'm off to find out if opening one of these ports is a good idea or not :)

- - -

EDIT 2: When running netstat -peanut I can see that the relevant ports are actually listening, so this is not a firewall issue after all?

tcp6       0      0 :::3002                 :::*                    LISTEN      1000       184931      17882/gulp      
tcp6       0      0 :::3003                 :::*                    LISTEN      1000       184942      17882/gulp  

 


Hi.

If you have SSH access to the server:

1. Try telnet from the local console, like:

telnet 127.0.0.1 3000

If you get a response from the service locally, then it is a firewall problem (it blocks communication to the outside).

 

2. Look at the protocol; maybe it is only open for tcp6 connections and the firewall blocks tcp4 connections.

If you're using Linux, try:

# iptables -L -n | grep :3000

It should give you a list of the rules in INPUT and OUTPUT that filter packets for that port.

Look at the first line:

"Chain INPUT (policy ACCEPT)"

If it says ACCEPT, all connections are allowed by default.
If it says DROP, the policy is to deny all incoming connections.

Hope this helps.

 


3 hours ago, Francesco Bortolussi said:

1. Try telnet from the local console, like:

telnet 127.0.0.1 3000

If you get a response from the service locally, then it is a firewall problem (it blocks communication to the outside).

This works indeed! (telnet localhost 3000 connects..)

 

3 hours ago, Francesco Bortolussi said:

2. Look at the protocol; maybe it is only open for tcp6 connections and the firewall blocks tcp4 connections.

If you're using Linux, try:

# iptables -L -n | grep :3000

 

Hmm... # iptables -L -n works, but there is no listing for port 3000. So it should not be blocked?
I also tried: curl -s localhost:3000 >/dev/null && echo Connected. || echo Fail. which results in "Connected".

Thanks Francesco!


Hi.

If there's no iptables rule for port 3000, it's because it is not blocked (or not allowed, in the case of a DROP policy).

Check the iptables INPUT and OUTPUT chains' policies to see whether they are set to ACCEPT or DROP. If it is ACCEPT, forget about the firewall; that's not the problem.

 

The positive result of your telnet connection could be because the address 127.0.0.1 is mapped to the ::1 IPv6 address (in the /etc/hosts file). And as you wrote in your initial post, there is a socket listening on a tcp6 local address.

Check that the Browsersync service is listening on the IPv4 protocol (you should see tcp instead of tcp6).

Check with:

 

# netstat -putan | grep 3000

 

You should see tcp as the protocol (not tcp6) and the real IP as the address (the one you use to connect from your PC to the server).

If you don't see this, check the Browsersync config and be sure you are using the right address.
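
If it helps, a quick way to double-check which addresses Browsersync actually binds to is to log its URLs from the init callback. This is just a sketch reusing the options from your second attempt; XX.XX.XX.XX is still a placeholder for the droplet's public IPv4 address:

gulp.task('browser-sync', function () {
  browserSync.init({
    host: 'XX.XX.XX.XX',                 // droplet's public IPv4 address
    open: 'external',
    proxy: 'clients.domain.com/clientname/'
  }, function (err, bs) {
    // print the addresses Browsersync ended up listening on
    console.log('local:    ' + bs.options.getIn(['urls', 'local']));
    console.log('external: ' + bs.options.getIn(['urls', 'external']));
  });
});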

 

Have a nice day.



What OS are you using? If you have a recent CentOS/RedHat/Fedora droplet, check firewalld to see if the ports are open.
 

firewall-cmd --list-ports

(and --list-services) to check what ports are open.

 

