Everything posted by gurkendoktor

  1. Just choose one and stick with it. While you're still pondering and tinkering, use a 302. Once you're sure and live, use a 301. After a while everything will automatically point to the canonical URL, as that is the one that gets distributed and listed in $search_engine.
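
    A minimal Apache sketch of the two variants (example.com is a placeholder; this assumes mod_alias is enabled):

        # while you're still tinkering: temporary redirect
        Redirect 302 / https://www.example.com/
        # once you're sure and live: permanent redirect
        Redirect 301 / https://www.example.com/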
  2. You can always see the headers (and thus your redirects) by running curl -vL http://your.domain/ at the command line (if you have cURL installed). At the same time you can check the server logs (access log), where you will see which site is being accessed.
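
    For example (illustrative output; curl prefixes the request with ">" and the response with "<", and -v writes to stderr):

        $ curl -vL http://your.domain/ 2>&1 | grep -E '^[<>] '
        > GET / HTTP/1.1
        < HTTP/1.1 301 Moved Permanently
        < Location: https://www.your.domain/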
  3. Do you have a certificate for both https://example.com and https://www.example.com?

    This rule

        RewriteCond %{HTTPS} off
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

    might conflict with this one:

        RewriteCond %{HTTP_HOST} !^www\. [NC]
        RewriteRule ^ http://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

    You should change the last line to https://, because as written it redirects to the HTTP version, which is then caught again by the previous rule. Mind you: a redirect is always a new request, so the rules are processed again from the top. This can lead to confusion and "unforeseen behaviour" which is technically correct – just not what you intended. So what happens with the rules above:

    http://example.com gets rewritten to https://example.com by the first rule; the next request triggers the second rule (missing "www") and is rewritten to http://www.example.com, which is then caught by the first rule again. The request after that should actually go through – if it doesn't, there is something here we don't see.

    https://example.com gets rewritten to http://www.example.com and is caught by the first rule again on the next request.

    http://www.example.com should be plainly rewritten to https://www.example.com without any problem – unless there is something we don't see here.

    https://www.example.com should be left untouched and serve the site immediately.

    The first two chains you should improve: every extra redirect is another TCP round trip and more load on your server (which is also why Chrome caches these redirects). Or just try Mike's version.
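
    A minimal sketch of a single-hop version (assuming mod_rewrite is enabled; example.com stands for your domain) that sends every non-canonical combination straight to https://www. in one redirect:

        RewriteEngine On
        # match if HTTPS is off OR the host is missing "www"
        RewriteCond %{HTTPS} off [OR]
        RewriteCond %{HTTP_HOST} !^www\. [NC]
        # capture the host without any leading "www." into %1
        RewriteCond %{HTTP_HOST} ^(?:www\.)?(.+)$ [NC]
        RewriteRule ^ https://www.%1%{REQUEST_URI} [L,R=301]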
  4. Chrome does cache redirects, though, for performance reasons. So if you put a 301 (Moved Permanently) in your config and mess it up, it will still look wrong in Chrome even after you have corrected it. So you might want to clear the cache as well, just in case.
  5. Vagrant

    I personally love it for the modularity and interchangeability of containers (and thus: features). So swapping PHP for HHVM is an easy task, as sketched below. Vagrant is (imo) too "monolithic", but I think a little easier to set up, especially with PuPHPet et al. Docker, on the other hand, can be quite cumbersome if you want your own containers, need to re-compile apps, etc.
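
    A minimal docker-compose sketch of that kind of swap (the service layout and image tags are illustrative):

        version: "2"
        services:
          web:
            image: nginx
            ports: ["80:80"]
            depends_on: [app]
          app:
            # swap this single line for hhvm/hhvm to exchange PHP for HHVM
            image: php:7-fpm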
  6. Vagrant

    Who is still using Vagrant? Serious question, I'm interested. I mean, there has been a lot of improvement in it to address some of the issues mentioned. OTOH, there are also new technologies like Docker.
  7. Thank you, Ryan, for sharing this with us. It is really interesting to see the challenges in scaling and how you solved them. Thank you so much for all your work, and big thanks to Jan for helping you make the PW infrastructure even more performant and reliable.
  8. Some of the people who are using this are not in a position to afford "proper" web devs, which is fair. They have a different business, and "a website" is just one of their marketing channels. So what these builders are competing with is Facebook, Instagram etc. (even wordpress.com) – and once these people have a proper and thriving business and their budget and needs increase, we will gladly help them. Right? Or to put it differently: what smartphone cameras are capable of these days is amazing, and the cost of DSLRs is ridiculously low. Still, people hire photographers. Anyone who sees their business model threatened by these generators should probably rethink it.
  9. I have my company website running with HTTP/2 on DO with nginx. I didn't use concatenation or any frontend asset post-processing, as the CSS is quite small and the gzipping is done by the server.
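
    A minimal nginx sketch of such a setup (domain, paths and gzip types are placeholders; HTTP/2 needs nginx 1.9.5+ built with the http_v2 module):

        server {
            listen 443 ssl http2;
            server_name example.com;
            ssl_certificate     /etc/ssl/example.com.crt;
            ssl_certificate_key /etc/ssl/example.com.key;
            # let the server handle compression instead of a build step
            gzip on;
            gzip_types text/css application/javascript;
            root /var/www/example.com;
        }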
  10. Well, it's not THAT easy. Check here: https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-elasticsearch-on-ubuntu-14-04 I suggest you read about the way ElasticSearch works to understand what is possible and what is not, and how to adapt it to your needs. It seems easy, but it is a mighty thing and as such can easily cause more trouble than it's worth.
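
    To get a first feel for how it works, you can talk to a local instance directly over HTTP (a sketch; 9200 is the default port, the index and field names are illustrative):

        # index a document
        curl -XPUT 'localhost:9200/pages/page/1' -d '{"title": "Hello world"}'
        # full-text search for it
        curl 'localhost:9200/pages/_search?q=title:hello'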
  11. To start with, with DigitalOcean you can choose the location of your droplet, or spread your droplets across the world. You can activate / provision / switch off droplets via API. You pay by time and not by month, so if you need more resources at peak times, you just activate more. They target different use cases: HE is for "hosting", DO is for a scaling infrastructure that can be coordinated programmatically. docker-machine supports DigitalOcean, but not Host Europe. Besides, with DO you have a lot of choice between various OSes and pre-configured systems, for example to try things out. And it comes without Plesk, which to me is an advantage. For you, HE is probably the best option. For others, it's something else.
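
    For example, provisioning a Docker-ready droplet from the command line with docker-machine (a sketch; $DO_TOKEN stands for your DigitalOcean API token, the machine name is made up):

        docker-machine create --driver digitalocean \
            --digitalocean-access-token $DO_TOKEN \
            pw-droplet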
  12. When I first read about AMP, I thought it was writing HTML like it's 1999. There's more to it, though.
  13. Which user / group and permissions does this folder have? And which user does your PHP process belong to (ps aux)? You can check both as shown below.
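
    A quick way to check from the shell (the folder path is a placeholder):

        # owner, group and permission bits of the folder itself
        ls -ld /path/to/site/assets
        # which user the PHP process runs as (first column)
        ps aux | grep -E 'php|php-fpm'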
  14. As far as I understand Varnish, it dismisses all kinds of cookies by default. Makes sense.

    OK, given that you get 90% of your content from the cache (e.g. Varnish), the backend has WAY more headroom to quickly process and deliver the dynamic results. The additional round-trip delay should add up to somewhat below 4 ms, and given that you need to do some processing and / or DB queries in the background, it's negligible.

    I think whether or not you want to use SSI / ESI depends a lot on your caching strategy, which depends a lot on the architecture. I see the benefits and the excitement, but you should always ask yourself if the extra effort is worth it. Rule of thumb: the more visitors you have, the more you want a cache and the more efficient the cache is. If you have one visitor every 5 minutes (approx. 200/day), then you have to use long cache lifetimes, and still every 2nd user experiences a small delay as the cache needs to be revalidated and updated – only for the entry to go stale after 5 minutes without a single visitor having come by in the meantime.

    Depending on your hardware, 5000/day shouldn't cause any problems – IF they come well distributed and not all of them within 10 minutes. Even then it shouldn't be a problem, given that you know how to set up your web server and DB server for this kind of traffic; it will probably be very slow, however. If you have this kind of scenario, a cache would still make sense and save your a** for these very 10 minutes.
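
    As a concrete example, a small VCL 4.0 sketch that strips cookies from requests for static assets, so Varnish can cache those even on a cookie-heavy site (the extension list is illustrative):

        sub vcl_recv {
            # cookies prevent caching by default, so drop them for static files
            if (req.url ~ "\.(css|js|png|jpe?g|gif|svg|woff2?)$") {
                unset req.http.Cookie;
            }
        }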
  15. Varnish ESI with ProcessWire sounds like a fun field to experiment with. It makes a lot of sense if you have high-traffic sites with dynamic (i.e. user-generated or personalized) content which puts quite some load on the server, plus static segments that never change. The only thing you need is SSL offloading in front of it, as Varnish doesn't do SSL. Also, in HTTP/2 environments, I'm pretty sure it will cause some funny side effects.
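
    A minimal sketch of how ESI looks in practice (the include URL is made up): the backend emits an include tag for the dynamic fragment, and the VCL tells Varnish to process it:

        <!-- in the HTML delivered by the backend -->
        <esi:include src="/blocks/personalized-greeting" />

    together with:

        # VCL 4.0: enable ESI processing for HTML responses
        sub vcl_backend_response {
            if (beresp.http.Content-Type ~ "text/html") {
                set beresp.do_esi = true;
            }
        }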
  16. Yes, I use it both as the CSS property background-image and in img tags.
  17. I'm running a site in production on PHP 7 with no problems. It depends a lot on the modules you use, I guess; the core should be no problem at all.
  18. Actually, specifically the DB backup should be easy, as you have access to the database anyway – see the sketch below. All things server-related should be covered by tools like Icinga / Nagios or a 3rd-party system like New Relic. The data these services provide you could, however, surface in your admin area – together with PW-related data. I think there is an easy-update module already, and afaik it also shows when modules are out of date. But go ahead; better is the enemy of good.
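
    For the DB part, a nightly dump can be as simple as a cron entry (a sketch; user, credentials, database name and paths are placeholders):

        # /etc/cron.d/pw-backup – dump and compress the database every night at 03:15
        15 3 * * * backup mysqldump -u pw_user -p'secret' pw_db | gzip > /var/backups/pw_db-$(date +\%F).sql.gz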
  19. Hmm, there are message boards and everything, and a small community. You might be able to get "direct support", but in case of bugs or feature requests I suggest opening an issue on GitHub. This is the problem with new things: there is not yet a solution for every newly discovered problem.
  20. I would suggest splitting this up. In my opinion, all the points except 4 can live in the backend. For 4) there are dedicated monitoring tools which measure exactly this: availability, load, status of services, memory consumption, … you name it. This data, again, can be made available via API. Besides, you should _always_ be able to update. ALWAYS. Whether you feel you need it or not. One day there might be a breach no one saw coming, and you want to be able to install the newest version in the least time possible. So in your own interest: always update, always make your modules update-friendly, and always keep your hands off fiddling around in the core.
  21. Should be, but see pwFoo's and my posts about image uploading. MySQL is not the problem, as this is a task for a process further "in the back", such as PHP or HHVM. I'm currently travelling; as soon as I'm back I'll have a look and post an answer to this (approx. at the weekend).
  22. There is also a forum thread about letsencrypt: https://processwire.com/talk/topic/8338-free-ssl-from-q2-2015/
  23. "Low traffic" is not "production", but yes, you're right: you want no downtime, however much traffic you have. In terms of reliability I really don't have that much experience with Caddy yet. I don't know how much you know about the init/upstart or supervisord processes of Unix / Linux. If your answer is "what?" – stick to Apache. Otherwise you can at least duct-tape a nice environment together which should be quite reliable. If you only want "automatic SSL", you can install the letsencrypt client (https://letsencrypt.org/). It comes with Apache integration, which saves you a lot of hassle generating certificates and updating your config – it runs more or less automatically (see the sketch below). I suggest you run all your websites on SSL – and not only the forms, but the complete site. On low-traffic sites it doesn't matter that you put a bit more load on the server, and what you gain from it (and even more so your audience / visitors) is well worth it.
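
    With the official client, obtaining a certificate and wiring it into an Apache vhost is roughly (a sketch; the domains are placeholders, and the exact invocation depends on how you installed the client):

        # fetches a certificate and updates the Apache config in one go
        ./letsencrypt-auto --apache -d example.com -d www.example.com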
  24. Thank you! I didn't notice it so much, but I'll give it another thought.