
PW Sites in Production on Nginx


netcarver


11 hours ago, netcarver said:

@wbmnfktr Thank you, gleaned a lead from that old thread.

Everyone: I'm still interested in seeing live PW sites running on Nginx, so please post if you know of any.

One of my clients' websites is running on Nginx: https://ricardo-vargas.com
It works, but I think a better approach in most cases is to run Nginx as a reverse proxy with Apache serving the files, like I did for my other client: https://www.brightline.org. You get the benefits of easier configuration and better performance (compared to running Apache alone).
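
Roughly, that kind of setup looks like the sketch below. This is a minimal, untested example only; the domain, document root and the local port Apache listens on are placeholders, not the actual configuration of either site.

server {
    listen 80;
    server_name example.com;                  # placeholder domain
    root /var/www/example.com;                # docroot shared with Apache (assumption)

    # let Nginx serve static assets straight from disk
    location ~* \.(?:css|js|jpe?g|png|gif|svg|ico|woff2?)$ {
        expires 30d;
        access_log off;
    }

    # hand everything else to Apache, which applies ProcessWire's .htaccess rules
    location / {
        proxy_pass http://127.0.0.1:8080;     # Apache listening on a local port (assumption)
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}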


Maybe I will have one in a few days or a week, as just yesterday I uploaded a new project to a custom staging server. The customer uses their own server, where I could configure Apache for PW, but Nginx seems to sit in front of it as a load balancer. I'm not sure how this works, as I had no time to look around or ask, but maybe it's a similar setup to the one Sergio mentioned?

(When I reloaded the Apache service during the installation and hit F5 too early in the browser, I got an Nginx "Bad Gateway" message.)

The site will go live next week, I think.


6 hours ago, Sergio said:

You get the benefits of easier configuration and better performance (compared to running Apache alone).

I've always wondered why this results in better performance. I've read it's better at serving static files? I have used RunCloud/ServerPilot, which use Nginx as a reverse proxy, but I haven't really taken the time to test for speed or to understand whether ProcessWire takes advantage of this.


@pwired I'm not sure you really know what you are talking about, but I definitely know that I really dislike this sort of post.

Please go back to kind posts only.

And if you want to answer this post, please do it via PM. (I'm heading into the weekend now and can only answer on Monday next week, but I definitely will.)


9 hours ago, elabx said:

I've always wondered why this results in better performance. I've read it's better at serving static files? I have used RunCloud/ServerPilot, which use Nginx as a reverse proxy, but I haven't really taken the time to test for speed or to understand whether ProcessWire takes advantage of this.

Nginx's performance advantages over Apache were built on three factors: modern multiprocessing in the server, a lot less overhead due to reduced functionality, and in-memory caching. Over the last five years, Apache has greatly reduced that gap by adopting Nginx's multiprocessing approach (one keyword there is the event MPM), so Apache isn't spending most of its time spinning up and tearing down whole server instances anymore. File system access has greatly improved with solid state disks, too. Apache still has a lot more functionality, and its distributed config file approach, most prominently the ability to make configuration changes with a .htaccess file inside the web directories, hurts performance. Its dynamic module loading and the dozens of pre-installed modules most distributions ship also take up processing time and memory.

Nowadays, Apache can be stripped down a lot and compiled to go head to head with Nginx, though few actually bother to do that, since it also means removing functionality one might need in the future. A stock Apache is usually still quite a bit slower and reaches its limits sooner (by about a factor of two). This becomes an issue under heavy load or on slow machines.

Where Nginx still shines brightly is load balancing. Apache can do it too, but with Nginx it is straightforward and well documented, having been a core feature for a long time.
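
As a rough illustration of how little Nginx needs for that (the backend addresses are made up, and a real setup would add health checks, sticky sessions or TLS on top):

upstream pw_backends {
    least_conn;                      # send new requests to the backend with the fewest active connections
    server 10.0.0.11:8080;           # hypothetical application servers running Apache/PHP
    server 10.0.0.12:8080;
    server 10.0.0.13:8080 backup;    # only used when the others are unreachable
}

server {
    listen 80;
    server_name example.com;         # placeholder

    location / {
        proxy_pass http://pw_backends;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}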

 

For those interested in a bit of (highly subjective) history: for a long time (read: the eighties and nineties), the classic forking mechanism common on *nix OSes was the way to do multiprocessing in network servers, and therefore in Apache too. This meant spawning a full copy of the server process and initializing it, then tearing it down when the request was done. Apache brought a small revolution to that approach by implementing preforking, i.e. keeping spare server instances around to fulfill requests with little delay. After a while, other approaches appeared as faster multiprocessing primitives became part of common operating systems, like multithreading, which is supported by Apache's "worker" multiprocessing module (MPM).

There were, however, big caveats with using other MPMs. Since file systems used to be slow, sometimes awfully so, in the old days, and since the classic CGI approach of starting an executable from the file system, supplying it with information through environment variables and standard input and capturing its standard output was a security nightmare - even without thinking about shared hosting - nifty programmers embedded full language interpreters inside Apache modules. mod_perl and mod_php became the big thing, the latter coming to dominate the web after a few years. These interpreters, though, often had memory leaks and issues with thread isolation, meaning at best that an error in one thread tore down numerous other sessions, and at worst that the server had a propensity for information leaks, remote code execution and privilege escalation attacks - the former security nightmare squared. Thus, these tightly integrated interpreters more or less locked their users into the classic prefork approach, where every instance is its own, basically independent process.

With PHP, as the market leader, not evolving in that regard, things were frozen for quite some time. This was when Nginx conquered the market, first by serving static HTML and associated resources with lightning speed (CMSes generating static HTML were still a big thing for a while), but soon by taking care of all the static stuff while handing the dynamic requests off to Apache and caching parts of its responses in memory. Eventually, though, PHP got a fresh boost and grew stable enough for its engine to re-use interpreter instances. It was easier to contain things inside an interpreter-only process instead of dealing with all the server peculiarities, so FastCGI daemons finally became stable, well known and widely used, and suddenly the need to have the language interpreter embedded in the web server fell away. Apache got leaner and Nginx more flexible. Caching servers like Varnish became popular, since it was suddenly relatively easy to build a fast, layered caching solution with a combination of Nginx, Varnish and a full-fledged web server like Apache or IIS, able to serve thousands of highly dynamic and media-rich pages per minute.

About that time, SSL grew in importance too, and hosting providers learned to love Nginx as a means to route domains to changing backends and provide fast and easily configurable SSL endpoint termination.

Over recent years, Nginx has gained other features, like generic TCP load balancing, that set it apart from other servers and make it more of a one-stop solution for modern web applications. It also boosts its popularity that Nginx is often the first (or the first major) web server to ship emerging technologies, making the front pages and pulling in early adopters, HTTP/2 being one of the most prominent examples.


For a while I tested LiteSpeed. It worked fast and was simple: I did not have to change anything on the server except turn it on through a plugin in the control panel (DirectAdmin). The only downside was the monthly cost, but the speed was great.

Here is a link to a page that compares Apache, Nginx and LiteSpeed: https://www.litespeedtech.com/products/litespeed-web-server/compare-litespeed-apache-nginx

Just found OpenLiteSpeed as well, the open source version of LiteSpeed: https://openlitespeed.org/

 


  • 2 weeks later...

Hi everyone! Has anyone configured Nginx to serve WebP images through some rules, just like the .htaccess rules proposed in this blog post? I am currently using a RunCloud setup with Nginx + .htaccess and I can't get images served as WebP, so I am going to guess the issue is that static files are getting served through Nginx.
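
For reference, the closest generic Nginx pattern I've found looks roughly like this (untested on my setup; it assumes the .webp variant sits next to the original image with the extension swapped, and that the map block goes into the http context):

# http {} context: pick a ".webp" suffix when the browser advertises WebP support
map $http_accept $webp_suffix {
    default   "";
    "~*webp"  ".webp";
}

# server {} block: for jpg/png requests, try the sibling .webp file first
location ~* ^(?<basename>.+)\.(?:jpe?g|png)$ {
    add_header Vary Accept;
    try_files ${basename}${webp_suffix} $uri =404;
}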


  • 3 months later...

Most of the stuff in the .htaccess is actually not needed to run ProcessWire, but to prevent access to places nobody should snoop around in. This only becomes a problem if there is indeed a file in such a location that gets accessed directly. In common usage those things rarely pop up, because who tries to read e.g. /site/config.php or even non-PHP files of the installation?
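
If you do want to cover those cases under Nginx, a few deny rules go a long way. This is only a sketch, far from a complete translation of the .htaccess:

location ~ ^/(site(-[^/]+)?|wire)/config(-dev)?\.php$ { deny all; }    # /site/config.php and friends
location ~ ^/site(-[^/]+)?/assets/(cache|backups|logs)/ { deny all; }  # internal asset directories
location ~ /\.(ht|git) { deny all; }                                   # .htaccess, .git and similar dotfiles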


@LostKobrakai I thought .htaccess was needed for rewrites to work as well?

I found your gist. Is this code compatible with PW 3.0? Maybe you could indicate where to put the code?

I'm using UpCloud with RunCloud. Below are the contents of /etc/nginx-rc/:

conf.d               extra.d               fastcgi_params          koi-win          mime.types.default  proxy.conf           uwsgi_params
default_server.conf  fastcgi.conf          fastcgi_params.default  main-extra.conf  nginx.conf          scgi_params          uwsgi_params.default
dhparam.pem          fastcgi.conf.default  koi-utf                 mime.types       nginx.conf.default  scgi_params.default  win-utf

 


  • 3 months later...
On 1/11/2020 at 1:16 AM, Peter said:

I found your gist. Is this code compatible with PW 3.0?

After looking through it, it should be compatible with 3.x. Most paths haven't changed since 2.x.

But in 3.x there are lots of new .htaccess rules. I wonder if anyone has already "translated" them to Nginx?
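
The core rewrite itself is short, at least - roughly the following (a sketch from memory, untested; the PHP-FPM socket path is an assumption and will differ per setup):

location / {
    # anything that isn't an existing file or directory goes to ProcessWire,
    # which reads the request path from the "it" GET parameter
    try_files $uri $uri/ /index.php?it=$uri&$args;
}

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php/php-fpm.sock;    # socket path is an assumption
}

It's the long list of access-control rules in the 3.x .htaccess that would still need translating.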

