gurkendoktor

Members

  • Posts: 61
  • Joined
  • Last visited
  • Days Won: 1

gurkendoktor last won the day on December 27 2015

gurkendoktor had the most liked content!

Recent Profile Visitors: 3,275 profile views

gurkendoktor's Achievements: Full Member (4/6)

Reputation: 85

Community Answers: 1

  1. Just choose one and stick with it. While you ponder and tinker, use 302. Once you're sure and live, use 301. After a while everything will automatically point to the canonical URL, as that is the one that gets distributed and listed in $search_engine.
  2. You can always see the headers (and thus your redirects) by using curl -vL http://your.domain/ at the command line (if you have cURL installed). At the same time you can check the server logs (access log), and you will see which requests are coming in.
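    A minimal sketch of that workflow (the log paths are assumptions; adjust them to your distro and server):

      # with -v, the response headers of every hop go to stderr:
      curl -vL http://your.domain/ 2>&1 | grep '^<'

      # watch the requests arrive in the access log:
      tail -f /var/log/apache2/access.log    # or /var/log/nginx/access.log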
  3. Do you have a certificate for both https://example.com and https://www.example.com?

    These rules:

      RewriteCond %{HTTPS} off
      RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

    might conflict with these:

      RewriteCond %{HTTP_HOST} !^www\. [NC]
      RewriteRule ^ http://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

    In the last line you should change http:// into https://, because as written it redirects to the HTTP version, which is then caught again by the previous rule:

      RewriteRule ^ https://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

    Mind you: a redirect is always a new request, so the rules are processed again. This can lead to confusion and "unforeseen behaviour" which is actually technically correct – just not what you intended. So here is what happens with the rules above when you enter:

    • http://example.com => gets rewritten to https://example.com by the first rule; the next request triggers the second rule (missing "www") and rewrites to http://www.example.com, which then gets caught by the first rule again. The request after that should actually go through – maybe there is something here we don't see.
    • https://example.com => gets rewritten to http://www.example.com and caught by the first rule again upon the next request.
    • http://www.example.com => should be plainly rewritten to https://www.example.com without any problem, unless there is something we don't see here.
    • https://www.example.com => should be left untouched and give you the site immediately.

    The first two chains you should improve (this is a reason why Chrome caches these redirects: each hop is another TCP round trip and more load on your server). Or just try Mike's version.
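    To watch the chain (and spot a loop) from the command line, a one-liner like this should do (example.com is a placeholder):

      # print only the status line and Location header of each hop:
      curl -sIL http://example.com/ | grep -iE '^(HTTP/|Location:)'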
  4. Chrome does cache redirects, though, for performance reasons. So if you put a 301 (Moved Permanently) in your config and mess it up, it will still look wrong in Chrome even after you have corrected it. So you might want to clear the browser cache as well, just in case.
  5. Vagrant

    I personally love Docker for the modularity and interchangeability of containers (and thus: features). So swapping PHP for HHVM is an easy task. Vagrant is (imo) too "monolithic", but I think a little easier to set up, especially with puphpet et al. Docker, on the other hand, can be quite cumbersome if you want your own containers, re-compile apps etc. pp.
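    A sketch of that interchangeability – the image names and tags are assumptions, so check Docker Hub for current ones:

      # same code base, two swappable runtimes:
      docker run -d -p 8080:80 -v "$PWD":/var/www/html php:7-apache
      docker run -d -p 8081:80 -v "$PWD":/var/www hhvm/hhvm-proxygen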
  6. Vagrant

    Who is still using Vagrant? Serious question, I'm interested. I mean, there has been a lot of improvement in it to address some of the mentioned issues. OTOH, there are also newer technologies like Docker.
  7. Thank you Ryan for sharing this with us. It is really interesting to see the challenges in scaling and how you solved them. Thank you so much for all your work, and big thanks to Jan for helping you make the PW infrastructure even more performant and reliable.
  8. Some of the people who are using this are not in a position to afford "proper" web devs, which is fair. They have a different business, and "a website" is just one of their marketing channels. So what these builders are competing with is Facebook, Instagram etc. (even wordpress.com) – and once these people have a proper and thriving business and their budget and needs increase, we will gladly help them. Right? Or to put it differently: what smartphone cameras are capable of these days is amazing, and the cost of a DSLR is ridiculously low. Still, people hire photographers. Anyone who sees their business model threatened by these generators should probably rethink it.
  9. I have my company website running with HTTP/2 on DO with nginx. I didn't use concatenation or any frontend asset post-processing, as the CSS is quite small and gzipping is done by the server.
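    A quick way to verify that the server actually negotiates HTTP/2 (requires a curl built with HTTP/2 support; the domain is a placeholder):

      curl -sI --http2 https://example.com/ | head -n 1
      # HTTP/2 200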
  10. Well, it's not THAT easy. Check here: https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-elasticsearch-on-ubuntu-14-04 I suggest you read about the way ElasticSearch works to understand what is possible and what is not, and how to adapt it to your needs. It seems easy, but it is a mighty thing, and as such can easily cause more trouble than it's worth.
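    Once installed, a quick sanity check – ElasticSearch listens on port 9200 by default and answers with cluster info as JSON:

      curl -s http://localhost:9200/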
  11. To start with, with DigitalOcean you can choose the location of your droplet, or spread your droplets across the world. You can activate / provision / switch off droplets via API. You pay by time and not by month, so if you need more resources at peaks, you just activate more. They serve different purposes: HE is for "hosting", DO is for a scaling infrastructure that can be programmatically coordinated. docker-machine supports DigitalOcean, but not Host Europe. Besides, with DO you have a lot of choice among various OSes and pre-configured systems, for example for trying things out. And it comes without Plesk, which to me is an advantage. For you, HE is probably the best option. For others, it's something else.
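    For example, docker-machine's digitalocean driver provisions a droplet through the DO API – a sketch; the token and droplet name are placeholders:

      docker-machine create --driver digitalocean \
        --digitalocean-access-token "$DO_TOKEN" \
        example-droplet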
  12. When I first read about AMP, I thought it was writing HTML like it's 1999. There's more to it.
  13. Which user / group and permissions does this folder have? And which user does your PHP process run as (ps aux)?
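    Both checks from the command line – the path and process names are placeholders:

      ls -ld /path/to/folder
      ps aux | grep -E 'php|apache|nginx' | grep -v grep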
  14. As far as I understood Varnish, it dismisses all kinds of cookies by default. Makes sense.

    Ok, given you get 90% of your content from the cache (e.g. Varnish), the backend has WAY more headroom to quickly process and deliver the dynamic results. The additional round-trip delay should add up to somewhat below 4 ms, and given that you need to do some processing and / or DB queries in the background, it's negligible.

    I think whether or not you want to use SSI / ESI depends a lot on your caching strategy, which depends a lot on the architecture. I see the benefits and the excitement, but you should always ask yourself if the extra effort is worth it. Rule of thumb: the more visitors you have, the more you want a cache and the more efficient the cache is. If you have one visitor every 5 minutes (approx. 200/day), then you have to use long cache lifetimes, and still every 2nd user experiences a small delay as the cache needs to be revalidated and updated – just to go stale after 5 minutes without a single visitor in that time.

    Depending on your hardware, 5000/day shouldn't cause any problems – IF they come well distributed and not all of them in 10 minutes. Even then it shouldn't be a problem, given that you know how to set up your web server and DB server for this kind of traffic; it will probably be very slow, however. If you have this kind of scenario, a cache would still make sense and save your a** for these very 10 minutes.
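    To get a feeling for what the cache buys you, you could compare response times and watch the hit / miss counters (counter names as in Varnish 4+; the domain is a placeholder):

      curl -s -o /dev/null -w 'total: %{time_total}s\n' https://example.com/
      varnishstat -1 -f MAIN.cache_hit -f MAIN.cache_miss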