Everything posted by teppo

  1. This sounds a lot like the issue in https://github.com/processwire/processwire-issues/issues/1802, especially since @ryan specifically mentioned fixing an issue where title!= was misbehaving by finding too many results.
  2. "If it works, don't touch it" 🙂 Seriously speaking though, this module works well and does what it needs to do. So yes, if you want to restrict access by branch then it is a good option, regardless of which core version you are using.
  3. Version 1.0.0 of the MarkupMenu module released: https://github.com/teppokoivula/MarkupMenu/releases/tag/1.0.0. This release includes a relatively minor, but also potentially breaking change: a new argument $root was added to the MarkupMenu::renderArrayItem() method, between some of the existing params. Since this method is hookable, this could be a problem for existing code, so I figured it was best to increment the major version.
  4. Here's a tiny module that I originally built for my own needs, but figured that someone else might find it useful as well: Textformatter Iframe Embed. It is a textformatter that (by default) turns this:

     <p>iframe/https://www.domain.tld/path/</p>

     ... into this:

     <iframe class="TextformatterIframeEmbed" src="https://www.domain.tld/path/"></iframe>

     Embed tag (iframe/) and iframe tag markup can be customized via the module configuration screen.
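A rough sketch of the kind of replacement such a textformatter performs (hypothetical function name and regex for illustration only; the module's actual implementation and configuration options may differ):

```php
<?php
// Turn <p>iframe/URL</p> paragraphs into iframe elements:
function formatIframeEmbeds(string $html): string {
    return preg_replace(
        '#<p>\s*iframe/(https?://[^\s<]+)\s*</p>#i',
        '<iframe class="TextformatterIframeEmbed" src="$1"></iframe>',
        $html
    );
}

echo formatIframeEmbeds('<p>iframe/https://www.domain.tld/path/</p>');
// <iframe class="TextformatterIframeEmbed" src="https://www.domain.tld/path/"></iframe>
```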
  5. I'm assuming that this problem was resolved already? Just in case: it sounds like your home page wasn't throwing Wire404Exception "properly", e.g. by providing Wire404Exception::codeFunction as the second argument or by calling wire404() 🙂
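For reference, the two approaches mentioned above look roughly like this inside a template file (a sketch; check the API docs for the core version you're running):

```php
// Either throw the exception yourself, with codeFunction as the second argument:
throw new Wire404Exception('Page not found', Wire404Exception::codeFunction);

// ... or use the wire404() helper, which throws the same exception for you:
wire404();
```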
  6. @szabesz, this should be possible with the latest version of the module. There's a new hookable method ProcessChangelogHooks::getPageEventDetails:

     $wire->addHookAfter('ProcessChangelogHooks::getPageEventDetails', function(HookEvent $event) {
         // getPageEventDetails can return null to prevent saving duplicate entries
         if ($event->return === null) return;
         $event->return = array_merge($event->return, [
             'Custom key' => 'Custom value',
         ]);
     });
  7. Hacked website

     Since classes/HomePage.php is part of the default site profile (blank profile) for recent ProcessWire versions, this makes it sound like the site might've been updated and possibly reinstalled. Would be interesting to know what those new files in wire were, e.g. if they were also files added by an update. Same goes for those new files in the root dir as well.

     The site shouldn't update on its own, of course, so that still doesn't explain what might've happened in the first place. Your site didn't have install.php available, by any chance? That, combined with write permission for the web server user, could be one potential gotcha.

     This sounds like something that @ryan might want to look into, just in case. With the information we currently have available it is not possible to figure out much more.
  8. The symptoms make it sound like your .htaccess file is somehow erroneous or Apache is not processing its rules correctly. I would start debugging from there. You could, for example, try to access one of the files that are normally protected by .htaccess rules, just to see if those rules are working as expected. (The root URL working properly is a typical sign of this as well: PHP is working and requests for the index.php file are apparently being processed, but .htaccess rules are not passing requests for non-root paths to it.)
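As a concrete sanity check (example.com is a placeholder; site/config.php is one of the files that the default ProcessWire .htaccess blocks):

```shell
# With working .htaccess rules this should return 403 Forbidden,
# not a 200 response with the file contents:
curl -I https://example.com/site/config.php
```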
  9. The only solution I can think of right now would be handling the cookie check server-side, which in most cases is honestly a waste of time and resources. My solution is to leave noscript versions out, and use JS similar to what PrivacyWire (and probably all other sensible tools) does: by default the script is disabled (e.g. has type="text/plain" or something along those lines) and only after consent has been given does it get swapped with the actual type.
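A minimal sketch of that pattern (hypothetical markup, data attribute, and function names; not PrivacyWire's actual implementation):

```html
<!-- Inert until consent: browsers ignore scripts with unknown types -->
<script type="text/plain" data-consent="statistics" src="https://example.com/analytics.js"></script>

<script>
// Called once the user grants consent for a category. Note that flipping the
// type attribute in place is not enough; the element has to be replaced with
// a fresh script element for the browser to actually load and execute it.
function activateConsentScripts(category) {
  var selector = 'script[type="text/plain"][data-consent="' + category + '"]';
  document.querySelectorAll(selector).forEach(function (inert) {
    var script = document.createElement('script');
    if (inert.src) {
      script.src = inert.src;
    } else {
      script.textContent = inert.textContent;
    }
    inert.replaceWith(script);
  });
}
</script>
```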
  10. As a free solution for ProcessWire I would definitely recommend PrivacyWire. In my opinion it is the only viable solution at the moment. When that's not an option or more automation is needed, we tend to use Cookiebot. It's a paid service (they do have a free tier for small scale and limited requirements), but there are a few things that can make it worth the cost:

      - It scans the site (as you mentioned) automatically and creates a list/table of used cookies, as well as identifies any cases where cookies are set without prior consent. At least here in Finland a list of cookies used on a site (including whether they are first or third party cookies, what they are used for, and for how long they are stored) is required. While one can of course keep such a table up to date manually, well... let's just say that especially for large and media-rich sites it's a whole lot of work.
      - It has an automatic block mode that at least tries to automatically prevent the browser from loading anything that can set cookies. With PrivacyWire (for example) you'll have to modify the markup of embedded script tags, iframe elements, etc. in order to load them only after consent has been given.
      - It automatically generates and stores per-user consent history. At least here in Finland that is a necessary thing: site owners must be able to produce proof that the user has indeed given consent to storing data, and a separate registry is a must (said proof has to be available even if the user has cleared their cookies, localstorage, etc.)
      - With paid plans it is very customizable. For example we use a modified version of generaxion/cookiebot-accessible-template, since many of our sites need to be accessible.

      There are other services that do similar things, and I believe that some are cheaper than Cookiebot, but I have not had a chance or any need to properly dig into said other services.

      I'm only familiar with official guidelines and legislation as they have been implemented here in Finland, and also IANAL. Even with GDPR the actual implementation (and how that implementation is interpreted by officials) can vary from country to country 🙂
  11. As someone who has spent considerable time dealing with similar issues, submitting error reports and PRs, etc. in the context of WordPress plugins, I can say that it's simply a general issue with dependencies. Popularity of the CMS matters very little, if at all. Relative popularity of the module/plugin matters more, though even that is not a 100% guarantee.

      You can avoid some of this by only using the dependencies you truly require, and choosing the ones to use carefully: see if they are actively maintained, whether that is by regular code updates or via the support forum. That's what I do myself: only use the dependencies I truly require, and choose them carefully 🙂 There's no silver bullet here.

      If the module author is still active you can submit error reports via the support forum and/or GitHub. If the author is no longer active, someone (including you) could fork the module and make fixes there, but unless the new maintainer is going to really commit to maintaining the module, it's not a good long term solution. For modules that are no longer actively maintained, my recommendation would be to move on: find something else that gets the job done. Or better yet, don't use a module. Again, you should only use the modules that you truly need.

      Alternatives for the modules you've mentioned:

      - Instead of MarkupSEO you could look into Seo Maestro, but since the author (Wanze) has stated that they are no longer actively involved with ProcessWire, it's questionable whether this is a good long term solution either. We use Markup Metadata, which does at least some of the same tasks; it's also less involved/opinionated, which means that it's easier to maintain.
      - Instead of AllInOneMinify I would really recommend looking into ProCache. It's not free, but if it's for a commercial purpose (or long term personal projects etc.) it is, in my opinion, well worth the cost. It also does a lot for your site's performance that AllInOneMinify won't do.
  12. Yes. The current approach makes use of ProcessWire's database export feature and I'm not sure if it can do this (from what I can tell this is not really what it was meant to do; it was rather intended to store the dump "permanently"), so this might need to be a different command. Just like flydev mentioned above (native:restore). Personally I don't see much reason to use the ProcessWire db export method in this context, unless it does something special that mysqldump can't do (?).

      Various reasons for this. Resetting passwords is one, but I've also had to remove users matching specific conditions or temporarily disable their accounts, update categories for posts (e.g. a main category is removed and all posts that belonged to it need to be connected to a new one), automate some content modification for a number of pages matching specific criteria, etc. In WP it's not as easy to create "bootstrap scripts" as it is in PW, so WP-CLI is (in my opinion) often the easiest way to do any sort of bulk edit 🙂

      This is where differences between WP and PW matter. WP has a built-in cron task queue: the core maintains a list of registered tasks (some from core itself, others registered via hooks from theme(s) or plugins). Each task has a set recurrence (interval), a precalculated time for the next run, and a hook function to trigger when the task is due. WP-CLI can be used to trigger a specific task, run all tasks that are due now, etc.

      By default tasks are executed "lazily", similar to what ProcessWire does when using Lazy Cron, but typically you would not want to rely on that or slow down page views for regular users. Instead you'd set up a real cron job to run e.g. once per minute. That cron job then executes "wp cron event run --due-now --quiet". I don't say this often, but in this area WP is, in my opinion, ahead of PW. The cron setup makes a lot of sense, and it's really easy to keep track of active tasks 🙂
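The cron setup described above, in crontab form (the path is a placeholder; the WP-CLI command itself is the one quoted above):

```shell
# Run all due WP-Cron events once per minute:
* * * * * cd /var/www/example-site && wp cron event run --due-now --quiet >/dev/null 2>&1
```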
  13. Fair 🙂 If PHP has write access to directories containing executable code, this opens a huge can of worms: a malicious or vulnerable module could allow attackers to write code and wreak serious havoc on the target system. The same thing could happen if there's a flaw in site code that allows an attacker to write or download their own executable files.

      You could argue that a malicious or vulnerable module could cause similar problems anyway, but in my experience it is often easier to slip in code that writes code than code that does evil things. At the very least it's another attack vector. I'm aware that even the installer suggests that we allow PHP to write to the modules directory, but in my opinion that should not be done unless absolutely necessary. And I have yet to come across a situation where that would be the case 🙂
  14. - Flushing caches and rewrite rules after each deploy
      - Exporting the database from production and importing it locally (sometimes, very rarely, the other way around)
      - Editing (and sometimes just listing and filtering) users, posts/pages, rewrite rules, plugins...
      - Managing (listing and running) registered cron jobs
      - Managing translations

      Those are the most common things I use it for, ordered from most common to least common 🙂

      Looking at the commands currently available for RockShell, "db:pull" and "db:restore" seem like they could be most useful for me personally. I'm not entirely convinced about the method used there, though, considering that it needs to store temporary dump file(s) on the disk. Seems to me that using mysqldump and redirecting the output to local disk has certain benefits over this, such as eliminating the need for temporary files, and thus also eliminating any chance of multiple simultaneous pulls causing conflicts.

      "pw:users" could be useful as well, but if there's no way to filter users or define which fields to display, the use cases seem somewhat limited. In comparison WP-CLI makes it possible to filter users by specific fields (very handy on larger setups), and there are commands for creating users, setting field values, making bulk edits, etc. (Considering the big picture, instead of grouping "users" under "pw" as a single command, it might make more sense to define "users" or "user" as a namespace and "list" as a single command under that namespace.)
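To illustrate the mysqldump approach referred to above, a rough sketch (host, user, and database names are placeholders):

```shell
# Dump the remote database over SSH, streaming straight to a local file;
# no temporary dump files are left on the remote server:
ssh user@production.example.com "mysqldump --single-transaction example_db" > dump.sql

# Import into the local database:
mysql example_db_local < dump.sql
```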
  15. Thanks for elaborating! When you put it like that, it definitely seems like a lot 🙂 In my case (using the core approach) the process boils down to this:

      1. Translate the word in admin
      2. Click the "view" link to display current CSV content
      3. Manually copy and paste the new string from the CSV content in admin to the existing CSV file bundled with the module
      4. git add + commit

      A manual step is required to copy the data, and another manual step is required to update translations for a module on another site, but to me personally it doesn't feel like a very big deal. Additionally I would never allow PHP to write anything directly into the modules directory; in my opinion it is a very risky thing to do, so that option is out of the question 🙂

      Anyway, since the core has a system for bundling translations with modules, it would make sense to see how it could be improved so that perhaps we don't need "competing" solutions. From this thread alone I've picked a couple of ideas that would (in my opinion) make sense as core additions, and I'm planning to open requests or submit PRs for those.
  16. It does look cleaner, especially if there can be multiple dashes in the command, making the syntax somewhat ambiguous. The only CLI tool I'm really familiar with is WP-CLI, and there the namespace is separated by a space, e.g. "wp db export dump.sql" or "wp db import dump.sql".

      Anyway, just wanted to drop in to say that I'm really happy to hear that RockShell is alive and actively developed. I've not used it myself yet (and I never really got into wireshell, typically just ran commands through Tracy's console or via one-off PHP files that bootstrap PW), but I've been using WP-CLI a lot recently. Probably should dig into RockShell as well 🙂
  17. EFS has been getting considerably faster in recent years, but yes, there is always latency. Whether it's noticeable/problematic, I don't really know. To be clear, using EFS for session files is not something I've ever done myself.
  18. By the way, how do you currently handle site assets? Are those shared between instances?
  19. There are basically two options here:

      - Use separate session storage. That could be MySQL (in which case you may need to beef it up even more), or it could be something else, e.g. Redis. Redis is relatively easy to configure as session storage: set up a Redis server, make sure that you have the Redis extension installed for PHP, and point PHP to the Redis server via settings in php.ini.
      - Store files on a shared disk. From what you're describing it sounds like the disk is not shared in this case, which is not going to work for session files. With AWS it would typically mean EFS, i.e. making sure that the location where session files are stored is mounted on EFS.

      If you were already using SessionHandlerDB, the easiest approach would probably be to keep using it, unless the scalability issue is basically instantaneous. Just make sure that session data is cleared automatically, so that the size of the session table won't become a bottleneck. The key thing here is that if you're, for example, using Ubuntu (or a similar setup) then PHP may have session garbage collection disabled by default, with cleanup instead handled via a cron job. This job will not automatically apply to SessionHandlerDB, so you would need to either a) tweak PHP settings and enable normal garbage collection (as Bernhard mentions in issue https://github.com/processwire/processwire-issues/issues/1760) or b) set up a custom cron job or something like that to clean up the SessionHandlerDB database table manually.
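Assuming the phpredis extension is installed, pointing PHP at Redis for session storage is roughly a matter of these php.ini settings (host and port are examples):

```ini
session.save_handler = redis
session.save_path = "tcp://127.0.0.1:6379"
```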
  20. This repository doesn't include third party module translations; it covers the core and core modules only. We've got a relatively up-to-date FormBuilder translation package floating around. I'll give it a quick read-through, see if it needs updating, and post it somewhere (most likely the FB support forum).
  21. How and why does it suck? Serious question: I've started bundling translations with my modules, and so far it seems like a nice option. Would be interesting to hear other opinions as well, though 🙂
  22. If you stick with DB sessions you should see this pretty soon. If sessions are not cleared properly, data will likely just keep accumulating. I'm not sure if stale data remains viewable in the admin, so you might want to take a peek at the database as well, just in case.
  23. First of all, there are other benefits than performance, such as the fancy process view that SessionHandlerDB comes with (if I recall correctly; it's been a long time since I've tried it). Also if you have multiple servers but a shared database, it may be easier to use DB sessions. (We use Redis in part for this reason.)

      I don't have any numbers, and I'm afraid those wouldn't be very helpful. Performance depends on a number of factors and thus numbers from one use case may not be applicable to another. That being said, in my personal experience for typical use cases disk is often faster and has a smaller chance of running into scalability issues; I've never had scalability issues with disk based sessions, but I've run into them multiple times using database sessions. Though again, that's just my experience from the setups and use cases I've worked with 🙂

      Overall, comparing session files stored on local disk vs. MySQL/MariaDB that stores data on the same disk, I would expect the database to have more overhead; it has to do much more than just read a file from the disk, after all. But then again the database can make use of in-memory caching to mitigate such issues. And of course if your database is on a separate machine (or a faster disk) that would again change things, though that's also where latency due to connections and data transfer may step into the picture.

      Finally, the native PHP session handling mechanism is in some ways less likely to cause issues, especially compared to something you've cooked up yourself. (Just for the record, PHP has built-in support for storing sessions in Redis, so I would consider that "native".) It should probably be noted, though, that if you let PHP handle garbage collection, that is likely to cause some amount of overhead; the approach that Ubuntu takes (a separate cron job) does not suffer from this, at least not in the same way.

      My personal preference for session storage is Redis (which, again in my experience, is also the fastest option of those mentioned here) and if that's not available, then disk 🙂
  24. Just wondering if it might be related to this: https://github.com/processwire/processwire-issues/issues/1760. Basically on some systems (mainly Ubuntu) sessions are cleared in a somewhat non-standard way, which works for disk based sessions but not for SessionHandlerDB. The site running slow could mean that the sessions table has grown so large that queries take a very long time. If that's the case, you may want to apply the solution that Bernhard suggested in the aforementioned issue. Or alternatively disable SessionHandlerDB. (Just for the record: in most cases I would advise against database sessions, unless there's a specific reason for them. Disk is usually, in my experience at least, faster and also tends to have fewer quirks. If disk based session storage is not an option, I would look into setting up Redis for session storage.)
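For reference, Debian/Ubuntu disables PHP's probability-based session garbage collection (session.gc_probability is set to 0) in favor of a distro cron job that only handles file-based sessions. Re-enabling it, roughly along the lines suggested in the linked issue, means something like this in php.ini (values are examples):

```ini
; Run session GC on roughly 1% of requests:
session.gc_probability = 1
session.gc_divisor = 100
```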
  25. It depends: at least here in Finland an IP address is considered personal data, since it can be used to identify a specific person. Thus any request to a server that passes along an IP address... is also passing along personal data.