
All Activity


  1. Past hour
  2. A very nice, simple tracker – thank you, David. I have one issue: I just realized that pages with tracking activated are "edited" on each hit by "guest" (field phits) – that's expected, of course. In combination with the Changelog module – where I want to list pages last edited by editors only – the "edited by guest" entries make it difficult to filter out what my editor colleagues have changed recently. Maybe it's worth adding an option to make the field save "quiet"? https://processwire.com/api/ref/pages/save-field/ (a sketch of this quiet-save idea follows after this list)
  3. Today
  4. For a while I tested LiteSpeed; it worked fast and simple, and I did not have to change anything on the server except turn it on through a plugin in the control panel (DirectAdmin). The only downside was the cost per month, but the speed was great. Here is a link to a page that compares Apache, Nginx and LiteSpeed: https://www.litespeedtech.com/products/litespeed-web-server/compare-litespeed-apache-nginx I also just found OpenLiteSpeed, the open-source version of LiteSpeed: https://openlitespeed.org/
  5. Manjaro is rocking cool!
  6. Ah ok, thanks for the clarification there. Guess the optimist in me was hanging around 🙂 Good to know - thanks!
  7. Sorry, I've got nothing for this. At this point my suggestion would be to keep those packages in the repository. It's the only way that works seamlessly with the module installer in Admin, and I believe it makes more sense for your module than any of the alternatives I can think of. Anything else would reduce your potential user base significantly. I've gone the other way with some of my projects, but those have mainly been site profiles, so the context is quite different. While I agree that a native Composer installation integration could be nice, we're not quite there yet – at this point requiring Composer may make sense for hard-core developer-oriented projects, not much else. Go for it! I've been putting out PHP 7.1+ content for a while now, and so far no complaints. It's perhaps worth noting that PHP 7.1 is well on its way to being obsolete as well: it's already past the active support phase, and in just over two months it will stop receiving security updates. While certain organisations may backport security fixes for 7.1 for a while after that, officially it's really close to the end of its lifespan. Folks should already be using (or at the very least actively updating to) 7.2 – or preferably 7.3 🙂 Again: go for it 🙂 I don't think this will affect a particularly large portion of users (last time I polled this was in 2017, but even back then ~55% of those responding were already on 3.0); 2.8 itself hasn't been updated for almost three years now, and users of legacy versions can always keep using one of the solutions that work for such legacy setups.
  8. Sure... but my composer knowledge is quite limited. All I know is how to install something with it. 😉 I guess @teppo has more knowledge here and maybe there is a solution or something.
  9. @wbmnfktr – That's why I need some more info – I'm not sure whether any additional Composer support has been added. It would be great if I could trigger a Composer install during module installation from the UI. 🙂
  10. Ooooh... That wouldn't work on about 99% of my clients' hosting plans, and even my own hosting plans are limited on this point.
  11. I'd also like to drop support for ProcessWire 2.8. Any objections?
  12. @teppo – in regards to Jumplinks 2 and Composer support: the module utilises several Composer packages, which at present are kept in the module's vendor directory and committed to source control. Just to be 100% sure: if I were to drop these from source control, would that force users to install via Composer? If so, I'd need to leave them in there, unless you know of another way 🙂 @all – I'm putting some time aside today to continue working on Jumplinks 2 – specifically frontend stuff. I'm cool with sticking with jQuery for the time being, though I imagine at some point I would move over to something like Svelte. No point in doing it now though, as pretty much all of the functionality I need (excluding bulk actions) has already been implemented. On the topic of dependencies, I'm inclined to push the minimum PHP version to the oldest one that still receives security support, which at this point is PHP 7.1 (a sketch of declaring such requirements in the module info follows after this list). In my experience, it's always been best to only support maintained versions of PHP. This also allows me to bump illuminate/database from 5.4 to 5.8 (I imagine there are a bunch of fixes). I don't know what the implications of this are in terms of the various hosting providers people use, but I'd be quite shocked if providers were only supporting unmaintained versions of PHP.
  13. What I had to do was create two websites with WebFaction, one using HTTP and one using HTTPS. Go to Domains/Websites, then Websites. In the HTTPS site, just use the standard ProcessWire .htaccess and make sure you've got the site set to use HTTPS. Create another site with the same domain name (a static/CGI/PHP template will do), but make sure it is using HTTP (not HTTPS), and add the code you posted to the .htaccess file of that HTTP site.
  14. These columns are sorted in 1.5.56. However, instead of text, I've gone with varchar(512), which should be more than long enough. :)
  15. So true. It's a nice idea, but CSS content is indeed problematic. Sometimes you want to add otherwise meaningless (visual) content (like icons) with CSS content and it turns out that some screen readers read them out loud, which can be quite confusing. There's no aria-hidden for CSS content, so reliably hiding said content from screen readers can be a real hassle. On the other hand there are times when you actually want to provide content for screen readers with this technique – yet it turns out that some of them will happily disregard it. In my experience it's almost never a good idea to add content with CSS 😅
  16. Hey @teppo – no problem at all 🙂 Probably a very good idea; happy for you to submit a merge request. Thanks!
  17. @ryan maybe garli.js is of some help...
  18. Nginx's performance advantages over Apache were built on three factors: modern-day multiprocessing in the server, a lot less overhead due to reduced functionality, and memory caching. Over the last five years, Apache has greatly reduced that gap by adapting Nginx's multiprocessing approach (one keyword there is the event MPM module), so Apache isn't spending most of its time spinning up and tearing down whole server instances anymore. File system access has greatly improved with solid state disks, too. Apache still has a lot more functionality, and its distributed config file approach – most prominently the ability to make configuration changes with a .htaccess file inside the web directories – hurts performance. Its dynamic module loading approach and the dozens of pre-installed modules most distributions ship also take up processing time and memory. Nowadays, Apache can be stripped down a lot and compiled to go head to head with Nginx, though few actually care to do that, since it also means removing functionality one might need in the future. A stock Apache is usually still quite a bit slower and reaches its limits faster (by about a factor of 2). This becomes an issue under heavy load or on slow machines. Where Nginx still shines brightly is load balancing. Apache can do it too, but with Nginx it is straightforward and well documented, having been there for a long time. For those interested in a bit of (highly subjective) history: for a long time (read: the eighties and nineties), the classic forking mechanism that was common on *nix OSes was the way to do multiprocessing in network servers, and therefore in Apache too. This meant spawning a full copy of the server process and initializing it, then tearing it down when the request was done. Apache brought a small revolution to that approach by implementing preforking, i.e. keeping spare server instances around to fulfill requests with little delay. After a while, other approaches followed as faster multiprocessing mechanisms became part of common operating systems, like multithreading, which is supported by Apache's "worker" multiprocessing module (MPM). There were, however, big caveats with using other MPMs. Since file systems used to be slow, sometimes awfully so, in the old days, and since the classic CGI approach of starting an executable from the file system, supplying it with information through environment variables and standard input, and capturing its standard output was a security nightmare – even without thinking about shared hosting – nifty programmers included full language interpreters inside Apache modules. mod_perl and mod_php became the big thing, the latter coming to dominate the web after a few years. These interpreters, though, often had memory leaks and issues with thread isolation, meaning at best that an error in one thread tore down numerous other sessions, and at worst that the server had a propensity for information leaks, remote code execution and privilege escalation attacks – the former security nightmare squared. Thus, these tightly integrated interpreters more or less locked their users into the classic prefork approach, where every instance is its own, basically independent process. With PHP as the market leader not evolving in that regard, things were frozen for quite some time.
This was when Nginx conquered the market, first by serving static HTML and associated resources with lightning speed (CMSes generating static HTML were still a big thing for a while), but soon by taking care of all the static stuff while handing the dynamic things off to Apache and caching parts of its responses in memory. Eventually, though, PHP got a fresh boost and grew stable enough for its engine to re-use interpreter instances. It was easier to contain things inside an interpreter-only process instead of dealing with all the server peculiarities, so FastCGI daemons finally became stable, known and used, and suddenly the need to have the language interpreter contained in the web server fell away. Apache got leaner and Nginx more flexible. Caching servers like Varnish became popular, since it suddenly was relatively easy to build a fast, nice, layered caching solution with a combination of Nginx, Varnish and a full-fledged web server like Apache or IIS, able to serve thousands of highly dynamic and media-rich pages per minute. Around that time, SSL grew in importance too, and hosting providers learned to love Nginx as a means to route domains to changing backends and provide fast and easily configurable SSL endpoint termination. Over the last few years, Nginx has gained other features, like generic TCP load balancing, that set it apart from other servers and make it more of a one-stop solution for modern web applications. It also boosts its popularity that Nginx is often the first (or the first major) web server to ship emerging technologies, making the front pages and pulling in early adopters – HTTP/2 being one of the most prominent examples.
  19. Yesterday
  20. @teppo's argument here is solid, and I thought... cool... just add some details to those links with CSS magic: a[target=_blank]:after { content: " (opens in a new window)"; } BUT... not all screen readers support that feature... all I found were mixed results. https://www.powermapper.com/tests/screen-readers/content/css-generated-content/ Just in case, for those who had a similar idea. 😁
  21. We use Nginx, but the ProcessWire site is not public.
  22. @pwired I'm not sure you really know what you are talking about, but I definitely know that I really dislike this sort of post. Please go back to posting kindly. And if you want to reply to me about this post, please do it via PM. (I'm off for the weekend now and can only answer on Monday next week, but I definitely will.)
  23. I've always wondered: why does this give better performance? I've read it's better at serving static files? I have used RunCloud/ServerPilot, which use Nginx as a reverse proxy, but I haven't really taken the time to test for speed or to understand whether ProcessWire takes advantage of this.
  24. Nginx? Why not take the overengineering to the next level and compile PW to C++?
  25. Maybe I will have one in a few days or a week, as just yesterday I uploaded a new project to a custom staging environment, where the customer uses their own server on which I could configure Apache for PW, but Nginx seems to be in front of it as a load balancer. I'm not sure how this works, as I had no time to look around or ask, but maybe this is a similar setup to the one Sergio mentioned? (When I reloaded the Apache service during the installation and hit F5 too early in the browser, I got an Nginx "Bad Gateway" message.) The site will go live next week, I think.
  26. @Sergio Thanks for posting those links! Anyone else?
  27. One of my clients' websites is running on Nginx: https://ricardo-vargas.com It works, but I think a better approach in most cases is to run Nginx as a reverse proxy with Apache serving the files, like I did for my other client: https://www.brightline.org. You get the benefits of easier configuration and better performance (compared to running Apache alone).
  28. A little tweak that you can add to your config.php on DEV to support bootstrapping of PW (in that case there is no SERVER_NAME set and you need to define the host manually):

     <?php
     if(!defined("PROCESSWIRE")) die();

     // make bootstrapping possible: fall back to a user-defined 'host' constant
     // when SERVER_NAME is not available
     $const = get_defined_constants(true);
     $host = $const['user']['host'] ?? $_SERVER['SERVER_NAME'];

     $config->dbUser = 'root';
     $config->dbPass = '';
     $config->httpHosts = [$_SERVER['HTTP_HOST'] ?? $host]; // no HTTP_HOST on the command line
     $config->debug = true;

     switch($host) {
       case 'www.foo.bar':
         $config->dbName = 'foo';
         $config->userAuthSalt = 'bar';
         break;
     }
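For completeness, here is a minimal sketch of a matching bootstrap script, assuming the user-defined 'host' constant checked in the config tweak above; the file name cron.php, the host value and the page lookup are placeholders, not part of the original post:

<?php
// Hypothetical bootstrap script (e.g. a cron.php placed in the site root).
// Define the 'host' constant before including index.php so the config tweak
// above can pick the matching per-host settings.
define('host', 'www.foo.bar');
include __DIR__ . '/index.php';              // boots ProcessWire
echo wire('pages')->get('/')->title . "\n";  // the API is now available via wire()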
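Regarding the quiet-save idea in item 2 above: a minimal sketch of what the tracker could do when recording a hit, in a template or hook context where $page and $pages are available, assuming a page field named phits and that saveField() accepts the same 'quiet' option as $pages->save() (see the linked API page). This is an illustration, not the module's actual code:

<?php namespace ProcessWire;

// Increment the hit counter without updating the page's modified date/user,
// so editor-focused changelogs stay clean.
$page->of(false);                                      // output formatting off before saving
$page->phits = (int) $page->phits + 1;                 // count the hit
$pages->saveField($page, 'phits', ['quiet' => true]);  // 'quiet' should skip modified user/time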
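Regarding the raised requirements discussed in item 12 above: a minimal sketch of how a ProcessWire module could declare them through the standard 'requires' key in getModuleInfo(). The class name and version strings are placeholders taken from the discussion, not actual Jumplinks code:

<?php namespace ProcessWire;

/**
 * Hypothetical example module showing how raised minimum versions
 * could be declared so the module installer enforces them.
 */
class JumplinksRequirementsExample extends WireData implements Module {

  public static function getModuleInfo() {
    return [
      'title'    => 'Jumplinks Requirements Example',
      'summary'  => 'Illustration of declaring minimum PHP and ProcessWire versions.',
      'version'  => '2.0.0',
      // the installer refuses to install when these requirements are not met
      'requires' => ['PHP>=7.1.0', 'ProcessWire>=3.0.0'],
    ];
  }

  public function init() {
    // no runtime behaviour needed for this illustration
  }
}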