Posts posted by gurkendoktor

  1. Do you have a certificate for both https://example.com and https://www.example.com?

      RewriteCond %{HTTPS} off
      RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

    might conflict with

     RewriteCond %{HTTP_HOST} !^www\. [NC]
     RewriteRule ^ http://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

    In the last line you should change http:// to https://; as written, it redirects to the HTTP version, which is then caught again by the previous rule. Mind you: a redirect is always a new request, so the rules are processed again. This can lead to confusion and "unforeseen behaviour" which is actually technically correct – just not what you intended. 
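
    A fixed version of that second rule (same as the original, just redirecting straight to https so it isn't caught by the first rule again) might look like this:

```apacheconf
# redirect non-www hosts directly to the https www-less variant,
# so the request is not bounced back to http and re-matched by the first rule
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^ https://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
```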

    So what happens with the above rules when you enter:

    http://example.com => gets redirected to https://example.com by the first rule; the next request triggers the second rule (missing "www") and redirects to http://www.example.com, which is then caught by the first rule again. The request after that should actually go through – unless there is something here we don't see.

    https://example.com => gets redirected to http://www.example.com by the second rule and caught by the first rule again on the next request.

    http://www.example.com should simply be redirected to https://www.example.com without any problem – unless there is something we don't see here.

    https://www.example.com should be left untouched and serve the site immediately. The first two cases are worth improving (this is a reason why Chrome caches these redirects – each one is another TCP round trip and more load on your server).

    Except there is something we don't see here.

    Or just try Mike's version ;)

  2. Quote

    The browser cache has no impact on your logged in status.

    Chrome does cache redirects, though, for performance reasons. So if you put a 301 (Moved Permanently) in your config and mess it up, it will still look wrong in Chrome even after you have corrected it. So you might want to clear the cache as well, just in case.

  3. I personally love it for the modularity and interchangeability of containers (and thus: features). So swapping PHP for HHVM is an easy task. Vagrant is (imo) too "monolithic", but I think a little easier to set up, especially with puphpet et al. Docker on the other hand can be quite cumbersome if you want your own containers, recompile apps, etc.

    • Like 1
  4. Who is still using vagrant? Serious question. I'm interested.

    I mean, there has been a lot of improvement to address some of the mentioned issues. OTOH, there are also newer technologies like Docker.

  5. Some of the people who are using this are not in a position to afford "proper" web devs, which is fair. They have a different business, and "a website" is just one of their marketing channels. So what these builders are competing with is Facebook, Instagram etc. (even wordpress.com) – and once these people have a proper and thriving business and their budget and needs increase, we will gladly help them. Right? 

    Or to put it differently: what smartphone cameras are capable of these days is amazing. The cost of a DSLR is ridiculously low. Still, people hire photographers. 

    Anyone who sees their business model threatened by these generators should probably rethink it.

    • Like 4
  6. I have my company website running with http/2 on DO with nginx. I didn't use concatenation or any frontend asset post-processing, as the CSS is quite small and the gzipping is done by the server. 

    • Like 1
  7. But in which folder I have to unzip the files? And how I do install this?

    Well, it's not THAT easy. Check here: https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-elasticsearch-on-ubuntu-14-04

    I suggest you read about the way ElasticSearch works to understand what is possible and what is not, and how to adapt it to your needs. It seems easy, but it is a mighty thing and as such can easily cause more trouble than it's worth. 
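
    Once it is installed and running, a quick smoke test against its HTTP API looks roughly like this (a sketch: it assumes a node listening on localhost:9200, and "pages" / "title" are placeholder index and field names):

```shell
# simple query-string search against a hypothetical "pages" index
curl 'http://localhost:9200/pages/_search?q=title:foo'
```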

    • Like 5
  8. To start with, with DigitalOcean you can choose the location of your droplet, or spread your droplets across the world. You can activate / provision / switch off droplets via API. You pay by time and not by month, so if you need more resources at peaks, you just activate more. 

    They serve different usage. HE is for "hosting", DO is for scaling infrastructure that can be programmatically coordinated. docker-machine supports DigitalOcean, but not Host Europe.
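
    Provisioning a droplet with docker-machine is then a one-liner (a sketch; the access token and the droplet name "pw-droplet" are placeholders):

```shell
# create a DigitalOcean droplet with Docker pre-installed;
# $DO_TOKEN is a placeholder for your DigitalOcean API token
docker-machine create --driver digitalocean \
    --digitalocean-access-token $DO_TOKEN \
    pw-droplet
```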

    Besides, with DO you have a lot of choice between various OSes and pre-configured systems, for example for trying things out. And it comes without Plesk, which to me is an advantage ;)

    For you, HE is probably the best option. For others, it's something else ;)

    • Like 3
  9. As far as I understood Varnish, it dismisses all kinds of cookies by default. Makes sense :)

    There's also a danger of decreasing performance instead of increasing it if you aren't careful with includes, as you may be introducing round trips to the content server or trading a few lines of PHP code for separate request parsing and additional file IO. Even if you speed up nine out of ten requests, the one that has to fetch every dynamic part from scratch may simply take too long in the user's eyes.

    Ok, given you get 90% of your content from the cache (e.g. varnish), then the backend would have WAY more headroom to quickly process and deliver the dynamic results. The additional round-trip delay should add up to somewhat below 4ms, and given that you need to do some processing and / or DB queries in the background, it's negligible.

    I think whether or not you want to use SSI / ESI depends a lot on your caching strategy, which depends a lot on the architecture. I see the benefits and the excitement, but you should always ask yourself if the extra effort is worth it. Rule of thumb: the more visitors you have, the more you want a cache and the more efficient the cache is. If you have one visitor every 5 minutes (approx. 200/day), you have to use long cache lifetimes, and still every 2nd user experiences a small delay as the cache needs to be revalidated and updated – just for the entry to go stale after 5 minutes without a single visitor having seen it. Depending on your hardware, 5000/day shouldn't cause any problems – IF they come well distributed and not all of them in 10 minutes. Even that shouldn't be a problem if you know how to set up your web server and DB server for this kind of traffic, but it will probably be very slow otherwise. In that scenario a cache would still make sense and save your a** for these very 10 minutes ;)
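
    To put rough numbers on that (a quick sketch using the figures from above – 5000 requests/day and a 90% cache hit rate are assumptions from the post, not measurements):

```python
# Back-of-the-envelope numbers for the caching scenario above.
requests_per_day = 5000
hit_rate = 0.90  # assumed fraction of requests served from the cache

# With a cache in front, the backend only sees the misses.
backend_per_day = requests_per_day * (1 - hit_rate)
print(f"backend requests/day: {backend_per_day:.0f}")  # 500

# Worst case from the post: all requests arrive within 10 minutes.
peak_total_rps = requests_per_day / (10 * 60)
peak_backend_rps = peak_total_rps * (1 - hit_rate)
print(f"peak: {peak_total_rps:.1f} req/s total, "
      f"{peak_backend_rps:.2f} req/s on the backend")  # 8.3 total, 0.83 backend
```

    So even a badly distributed day of traffic shrinks to well under one backend request per second once 90% of it is answered from the cache.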

  10. varnish ESI with processwire sounds like a fun field to experiment with. makes a lot of sense if you have high-traffic sites with dynamic (i.e. user-generated or personalized) content which puts quite some load on the server, and static segments that never change. only thing you need is SSL offloading, as varnish doesn't do SSL. Also, in HTTP/2 environments, I'm pretty sure it will cause some funny side effects.
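
    The varnish side of such an experiment could be sketched like this (a minimal, untested example; the Surrogate-Control handshake is one common convention, not the only way to do it):

```vcl
# VCL 4.0 – enable ESI processing for responses the backend marks as ESI-capable
sub vcl_backend_response {
    if (beresp.http.Surrogate-Control ~ "ESI/1.0") {
        unset beresp.http.Surrogate-Control;
        set beresp.do_esi = true;
    }
}
```

    In the page markup, the static shell then pulls each dynamic fragment with something like <esi:include src="/user/bar" /> (the path is a placeholder).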

  11. Actually, specifically DB backup should be easy, as you have access to it anyway. All things server-related should be covered by tools like icinga / nagios or a 3rd-party system like New Relic. The data these services provide you could, however, put in your admin area – together with PW-related data.

    I think there is an easy-update module already, and afaik it also shows when modules are out of date. But go ahead, better is the enemy of good :)

  12. Hmm, there are message boards and everything, and a small community. You might be able to get "direct support", but in case of bugs or feature requests I suggest opening an issue on GitHub. This is the problem with new things: there is not a solution to every newly discovered problem yet :)

    • Like 1
  13. I would suggest splitting this. In my opinion, all the points except 4 can be in the backend. For 4.) there are dedicated monitoring tools which measure exactly this: availability, load, status of services, memory consumption, … you name it. This again can be made available via API. 

    Besides, you should _always_ be able to update. ALWAYS. Whether you feel you need it or not. One day there might be a breach no one saw coming, and you do want to be able to install the newest version in the least time possible. So in your own interest: always update, always make your modules update-friendly, and always keep your hands off fiddling around in the core.

    • Like 1
  14. should be, but see pwFoo's and my posts about image uploading. mysql is not the problem, as this is a task for a process further "in the back", such as PHP or HHVM. I'm currently traveling; as soon as I'm back I'll have a look and post an answer to this (approx. weekend)

  15. "low traffic" is not "production" ;) 

    but yes, you're right: you want no downtime, however much traffic you have. in terms of reliability I really don't have that much experience with caddy yet. I don't know how much you know about the init/upstart or supervisord processes of unix / linux. if your answer is "what?" – stick to apache. otherwise you can at least duct-tape a nice environment which should be quite reliable.

    If you only want "automatic SSL", you can install the letsencrypt client (https://letsencrypt.org/). It comes with Apache integration, which saves you a lot of hassle generating certificates and updating your config – it runs more or less automatically.
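
    For example (a sketch, assuming the client – nowadays called certbot – and its Apache plugin are installed; the domain names are placeholders):

```shell
# obtain a certificate and let the client edit the Apache vhost config for you
certbot --apache -d example.com -d www.example.com
```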

    Quote

    I want to host both websites on the same droplet and will need SSL for simple forms.

    I suggest you run all your websites on SSL – not only the forms, but the complete site. On low-traffic sites it doesn't matter that you put a bit more load on the server. What you (and even more so your audience / visitors) gain from it is well worth it.
