Everything posted by teppo

  1. The only solution I can think of right now would be handling the cookie check server-side, which in most cases is honestly a waste of time and resources. My solution is to leave noscript versions out and use JS similar to what PrivacyWire (and probably all other sensible tools) does: by default the script is disabled (e.g. has type="text/plain" or something along those lines), and only after consent has been given is that swapped for the actual type.
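The type-swap approach can be sketched roughly like this (a minimal illustration with my own naming, not PrivacyWire's actual markup; the data-consent attribute and the function name are made up for the example):

```html
<!-- Blocked by default: browsers won't execute a script with a non-JS type -->
<script type="text/plain" data-consent="statistics" src="/js/analytics.js"></script>

<script>
// Hypothetical callback, run once the user grants consent. Note that simply
// changing the type attribute in place is not enough; the browser only
// executes a script element when a new one is inserted into the document.
function activateConsentScripts() {
  document.querySelectorAll('script[type="text/plain"][data-consent]').forEach(function (blocked) {
    var script = document.createElement('script');
    script.src = blocked.src;
    script.type = 'text/javascript';
    blocked.replaceWith(script);
  });
}
</script>
```

The same idea applies to iframes (e.g. storing the real URL in a data attribute and only setting src after consent).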
  2. As a free solution for ProcessWire I would definitely recommend PrivacyWire. In my opinion it is the only viable solution at the moment. When that's not an option or more automation is needed, we tend to use Cookiebot. It's a paid service (they do have a free tier for small scale and limited requirements), but there are a few things that can make it worth the cost:
     • It scans the site (as you mentioned) automatically and creates a list/table of used cookies, as well as identifies any cases where cookies are set without prior consent. At least here in Finland a list of cookies used on a site — including whether they are first or third party cookies, what they are used for, and for how long they are stored — is required. While one can of course keep such a table up to date manually, well... let's just say that especially for large and media-rich sites it's a whole lot of work.
     • It has an automatic block mode that at least tries to automatically prevent the browser from loading anything that can set cookies. With PrivacyWire (for example) you'll have to modify the markup of embedded script tags, iframe elements, etc. in order to load them only after consent has been given.
     • It automatically generates and stores per-user consent history. At least here in Finland that is a necessary thing: site owners must be able to produce proof that the user has indeed given consent to storing data, and a separate registry is a must (said proof has to be available even if the user has cleared their cookies, localstorage, etc.)
     • With paid plans it is very customizable. For example we use a modified version of generaxion/cookiebot-accessible-template, since many of our sites need to be accessible.
     There are other services that do similar things, and I believe that some are cheaper than Cookiebot, but I have not had a chance or any need to properly dig into them.
     I'm only familiar with official guidelines and legislation as implemented here in Finland, and IANAL. Even with GDPR the actual implementation (and how that implementation is interpreted by officials) can vary from country to country.
  3. As someone who has spent considerable time dealing with similar issues, submitting error reports and PRs, etc. in the context of WordPress plugins, I can say that it's simply a general issue with dependencies. The popularity of the CMS matters very little, if at all. The relative popularity of the module/plugin matters more, though even that is not a 100% guarantee. You can avoid some of this by only using the dependencies you truly require, and choosing the ones you use carefully: see if they are actively maintained, whether that is via regular code updates or via the support forum. There's no silver bullet here. If the module author is still active you can submit error reports via the support forum and/or GitHub. If the author is no longer active, someone (including you) could fork the module and make fixes there, but unless the new maintainer really commits to maintaining the module, it's not a good long term solution. For modules that are no longer actively maintained, my recommendation would be to move on — find something else that gets the job done. Or better yet, don't use a module at all. Again, you should only use the modules that you truly need. Alternatives for the modules you've mentioned:
     • Instead of MarkupSEO you could look into Seo Maestro, but since the author (Wanze) has stated that they are no longer actively involved with ProcessWire, it's questionable whether this is a good long term solution either. We use Markup Metadata, which does at least some of the same tasks; it's also less involved/opinionated, which means that it's easier to maintain.
     • Instead of AllInOneMinify I would really recommend looking into ProCache. It's not free, but if it's for a commercial purpose (or long term personal projects etc.) it is — in my opinion — well worth the cost. It also does a lot for your site's performance that AllInOneMinify won't do.
  4. Yes. The current approach makes use of ProcessWire's database export feature and I'm not sure if it can do this (from what I can tell this is not really what it was meant to do; it was rather intended to store the dump "permanently"), so this might need to be a different command, just like flydev mentioned above (native:restore). Personally I don't see much reason to use the ProcessWire db export method in this context, unless it does something special that mysqldump can't do.
     There are various reasons for this. Resetting passwords is one, but I've also had to remove users matching specific conditions or temporarily disable their accounts, update categories for posts (e.g. a main category is removed and all posts that belonged to it need to be connected to a new one), automate content modifications for a number of pages matching specific criteria, etc. In WP it's not as easy to create "bootstrap scripts" as it is in PW, so WP-CLI is (in my opinion) often the easiest way to do any sort of bulk edit.
     This is where the differences between WP and PW matter. WP has a built-in cron task queue: the core maintains a list of registered tasks (some from the core itself, others registered via hooks from the theme(s) or plugins). Each task has a set recurrence (interval), a precalculated time for the next run, and a hook function to trigger when the task is due. WP-CLI can be used to trigger a specific task, run all tasks that are due now, etc. By default tasks are executed "lazily", similar to what ProcessWire does when using Lazy Cron, but typically you would not want to rely on that or slow down page views for regular users. Instead you'd set up a real cron job to run e.g. once per minute. That cron job then executes "wp cron event run --due-now --quiet". I don't say this often, but in this area WP is — in my opinion — ahead of PW. The cron setup makes a lot of sense, and it's really easy to keep track of active tasks.
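To make that concrete, the real cron job typically looks something like this (a sketch; the site path is a placeholder):

```shell
# Run all due WP-Cron events once per minute via a real cron job
* * * * * cd /var/www/example.com && wp cron event run --due-now --quiet
```

In wp-config.php you would then usually set define('DISABLE_WP_CRON', true); so that regular page views no longer trigger the lazy cron themselves.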
  5. Fair enough. If PHP has write access to directories containing executable code, this opens a huge can of worms: a malicious or vulnerable module could allow attackers to write code and wreak serious havoc on the target system. The same thing could happen if there's a flaw in site code that allows an attacker to write or download their own executable files. You could argue that a malicious or vulnerable module could cause similar problems anyway, but in my experience it is often easier to slip in code that writes code than code that does evil things directly. At the very least it's another attack vector. I'm aware that even the installer suggests that we allow PHP to write to the modules directory, but in my opinion that should not be done unless absolutely necessary, and I have yet to come across a situation where that would be the case.
  6. • Flushing caches and rewrite rules after each deploy
     • Exporting the database from production and importing it locally (sometimes, very rarely, the other way around)
     • Editing (and sometimes just listing and filtering) users, posts/pages, rewrite rules, plugins...
     • Managing (listing and running) registered cron jobs
     • Managing translations
     Those are the most common things I use it for, ordered from most common to least common.
     Looking at the commands currently available for RockShell, "db:pull" and "db:restore" seem like they could be the most useful for me personally. I'm not entirely convinced about the method used there, though, considering that it needs to store temporary dump file(s) on the disk. It seems to me that using mysqldump and redirecting the output to the local disk has certain benefits over this — such as eliminating the need for temporary files, and thus also eliminating any chance of multiple simultaneous pulls causing conflicts. "pw:users" could be useful as well, but if there's no way to filter users or define which fields to display, the use cases seem somewhat limited. In comparison, WP-CLI makes it possible to filter users by specific fields (very handy on larger setups), and there are commands for creating users, setting field values, making bulk edits, etc. (Considering the big picture, instead of grouping "users" under "pw" as a single command, it might make more sense to define "users" or "user" as a namespace and "list" as a single command under that namespace.)
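The mysqldump alternative I'm referring to, i.e. streaming the dump straight to the local machine so that no temporary files are left on the server, could look roughly like this (hostnames, database names, and user names are placeholders):

```shell
# Dump the remote database over SSH, compress in transit, and write
# directly to local disk; no temporary dump file is created on the server
ssh deploy@production.example.com \
  "mysqldump --single-transaction --quick example_db | gzip" > dump.sql.gz

# Import into the local database
gunzip -c dump.sql.gz | mysql example_db_local
```

Since each pull streams into its own local file, concurrent pulls can't clobber a shared temp file on the server.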
  7. Thanks for elaborating! When you put it like that, it definitely seems like a lot. In my case (using the core approach) the process boils down to this:
     1. Translate the word in admin
     2. Click the "view" link to display the current CSV content
     3. Manually copy and paste the new string from the CSV content in admin to the existing CSV file bundled with the module
     4. git add + commit
     A manual step is required to copy the data, and another manual step is required to update translations for a module on another site, but to me personally it doesn't feel like a very big deal. Additionally I would never allow PHP to write anything directly into the modules directory — in my opinion it is a very risky thing to do, so that option is out of the question.
     Anyway, since the core has a system for bundling translations with modules, it would make sense to see how it could be improved so that perhaps we don't need "competing" solutions. From this thread alone I've picked up a couple of ideas that would (in my opinion) make sense as core additions, and I'm planning to open requests or submit PRs for those.
  8. It does look cleaner, especially if there can be multiple dashes in the command, making the syntax somewhat ambiguous. The only CLI tool I'm really familiar with is WP-CLI, and there the namespace is separated by a space, e.g. "wp db export dump.sql" or "wp db import dump.sql". Anyway, just wanted to drop in to say that I'm really happy to hear that RockShell is alive and actively developed. I've not used it myself yet (and I never really got into wireshell; I typically just ran commands through Tracy's console or via one-off PHP files that bootstrap PW), but I've been using WP-CLI a lot recently. I probably should dig into RockShell as well.
  9. EFS has been getting considerably faster in recent years, but yes, there is always latency. Whether it's noticeable/problematic, I don't really know. To be clear, using EFS for session files is not something I've ever done myself.
  10. By the way, how do you currently handle site assets? Are those shared between instances?
  11. There are basically two options here:
     • Use separate session storage. That could be MySQL (in which case you may need to beef it up even more), or it could be something else, e.g. Redis. Redis is relatively easy to configure as session storage: set up a Redis server, make sure that you have the Redis extension installed for PHP, and point PHP to the Redis server via settings in php.ini.
     • Store files on a shared disk. From what you're describing it sounds like the disk is not shared in this case, which is not going to work for session files. With AWS it would typically mean EFS, i.e. making sure that the location where session files are stored is mounted on EFS.
     If you were already using SessionHandlerDB, the easiest approach would probably be to keep using it, unless the scalability issue is basically instantaneous. Just make sure that session data is cleared automatically, so that the size of the session table won't become a bottleneck. The key thing here is that if you're, for example, using Ubuntu (or a similar setup), then PHP may have session garbage collection disabled by default, with cleanup handled via a cron job instead. That job will not automatically apply to SessionHandlerDB, so you would need to either a) tweak PHP settings and enable normal garbage collection (as Bernhard mentions in issue https://github.com/processwire/processwire-issues/issues/1760) or b) set up a custom cron job or something like that to clean up the SessionHandlerDB database table manually.
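Assuming the phpredis extension is installed, the php.ini side of the Redis option boils down to two settings (host and port here are placeholders):

```ini
; Store PHP sessions in Redis via the phpredis extension
session.save_handler = redis
session.save_path = "tcp://127.0.0.1:6379"
```

After a PHP-FPM/Apache restart, all instances pointing at the same Redis server share session data, so it doesn't matter which instance serves a given request.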
  12. This repository doesn't include third party module translations (core and core modules only). We've got a relatively up-to-date FormBuilder translation package floating around. I'll give it a quick read-through, see if it needs updating, and post it somewhere (most likely the FB support forum).
  13. How and why does it suck? Serious question — I've started bundling translations with my modules, and so far it seems like a nice option. Would be interesting to hear other opinions as well, though.
  14. If you stick with DB sessions you should see this pretty soon. If sessions are not cleared properly, data will likely just keep accumulating. I'm not sure if stale data remains viewable in admin, so you might want to take a peek at the database as well, just in case.
  15. First of all, there are other benefits than performance — such as the fancy process view that SessionHandlerDB comes with (if I recall correctly; it's been a long time since I've tried it). Also, if you have multiple servers but a shared database, it may be easier to use DB sessions. (We use Redis in part for this reason.)
     I don't have any numbers, and I'm afraid those wouldn't be very helpful anyway: performance depends on a number of factors, and thus numbers from one use case may not be applicable to another. That being said, in my personal experience for typical use cases disk is often faster and has a smaller chance of running into scalability issues; I've never had scalability issues with disk based sessions, but I've run into them multiple times using database sessions. Though again, that's just my experience from the setups and use cases I've worked with.
     Overall, comparing session files stored on a local disk vs. MySQL/MariaDB storing data on the same disk, I would expect the database to have more overhead; it has to do much more than just read a file from the disk, after all. But then again the database can make use of in-memory caching to mitigate such issues. And of course if your database is on a separate machine (or a faster disk) that would again change things, though that's also where latency due to connections and data transfer may step into the picture.
     Finally, the native PHP session handling mechanism is in some ways less likely to cause issues, especially compared to something you've cooked up yourself. (Just for the record, PHP has built-in support for storing sessions in Redis, so I would consider that "native".) It should probably be noted, though, that if you let PHP handle garbage collection, that is likely to cause some amount of overhead; the approach that Ubuntu takes (a separate cron job) does not suffer from this, at least not in the same way.
     My personal preference for session storage is Redis — which, again in my experience, is also the fastest option of those mentioned here — and if that's not available, then disk.
  16. Just wondering if it might be related to this: https://github.com/processwire/processwire-issues/issues/1760. Basically on some systems (mainly Ubuntu) sessions are cleared in a somewhat non-standard way, which works for disk based sessions but not for SessionHandlerDB. The site running slow could mean that the sessions table has grown so large that queries take a very long time. If that's the case, you may want to apply the solution that Bernhard suggested in the aforementioned issue, or alternatively disable SessionHandlerDB. (Just for the record: in most cases I would advise against database sessions, unless there's a specific reason for them. Disk is usually — in my experience at least — faster and also tends to have fewer quirks. If disk based session storage is not an option, I would look into setting up Redis for session storage.)
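If a bloated sessions table does turn out to be the culprit, one possible stopgap is clearing stale rows on a schedule. A rough sketch (the table name "sessions" and the "ts" timestamp column match SessionHandlerDB's default schema as far as I can tell, but the interval, TTL, and credentials are placeholders; verify against your own setup):

```shell
# Hypothetical cron job: purge SessionHandlerDB rows older than 30 minutes
*/10 * * * * mysql -u dbuser -p'secret' pw_db -e "DELETE FROM sessions WHERE ts < (NOW() - INTERVAL 30 MINUTE)"
```

The fix suggested in the linked issue (enabling PHP's own garbage collection) is the cleaner long term solution; this is just a way to recover quickly.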
  17. It depends: at least here in Finland an IP address is considered personal data, since it can be used to identify a specific person. Thus any request to a server that passes along the IP address... is also passing along personal data.
  18. Probably as many answers as there are answerers, but from my personal point of view: I use a select few (proven) libraries from project to project, and always as few as possible. In total I typically use 3-4 libraries: carousels, foldable mobile navigation, pull-out bars, and modals. Those are the "hard problems"; most other things are easy enough to build case by case, and would often require custom code anyway to look/work exactly as designed. (I don't do a lot of super complex animation/visualization/modeling in a typical project.) In my case frameworks with all the bells and whistles rarely do exactly what I want (or the client or another designer wants), so I just end up hacking them, replacing parts, etc. One exception is PWA/mobile apps; for those I use Vuetify (and design accordingly).
  19. This is something that I may need as well, so I will look into that, soon(ish) I hope.
     Thanks for the report, I'll look into this. Sounds like paths are somehow messed up.
     Just to be clear, are these consecutive revisions for a field where there are no actual changes between those revisions? If so, it definitely sounds like a bug. It is not intentional to store a revision when/where there are no changes in it. Which version of VersionControl is this on, and are you able to reproduce this easily or does it occur randomly / rarely? What type of field is this? Does the error occur on API use, or while editing the page via admin? Any chance that you might've come across these on a "blank" setup, just to rule out conflicts caused by hooks, other modules, etc.?
     It's been a while since I touched the code related to this, but based on a quick look it seems that VersionControl only stores data if $page->isChanged($page_field->name) returns true, so basically it sounds like ProcessWire itself is saying that the field has changed. There's only one exception to this, but that is only applicable when module configuration is saved/changed, so it would be odd if that was the cause. Do you have any idea why ProcessWire might report these fields as changed, even if they aren't? I don't think I've run into this particular problem myself, though isChanged() has always felt somewhat "quirky" to me, or rather I've never quite understood where and when it works or doesn't work.
     This also seems odd. It would make sense if the order had changed, but here it sounds like something is marking the field as "changed" even though it has not. It might have something to do with your first point, though at this point I'm just guessing. Again, this is not something I've observed myself, as far as I can recall.
     That is a good point. For 2.x I did consider redesigning the interface, but if I recall correctly, I figured that the current interface feels better for most use cases. There may have been technical issues as well, though I can't quite remember what they might've been. I'd be happy to give this a shot again some time, though I'm not sure about the timeline.
  20. Hey @snck, This should be fixed in the latest version of the module, 0.35.4. I'm not entirely sure of the circumstances causing this issue, but the warning was pretty clear, so it should be fine now.
  21. If I'm reading this correctly, the confusion is all about what "age of cache" means, right? Currently "age of cache" refers to the expiration time of the cache, not to the time it was stored, while you expected it to mean the time the cache was stored. I've always assumed age to refer to the expiration time, but you are correct — it is not unreasonable at all to expect age to refer to the time of storage. In terms of documentation, something along the lines of "Optionally specify expiration time" and "If cache exists but has expired, then blank is returned" would perhaps be more in line with actual behaviour, but generally speaking the rule of thumb is that in this context "age" is about "expiration time" rather than "storage time". To be clear, I'm quite certain that the way the function works and the way it calculates expiration is exactly how it was meant to work — and even if it wasn't, changing it at this point would be a major breaking change in terms of behaviour for existing setups.
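In API terms, the point above is that the second argument relates to expiration rather than storage time. A quick sketch using the core $cache (WireCache) API; the cache name and variable names are placeholders:

```php
<?php namespace ProcessWire;

// Store a value that expires one hour from now
$cache->save('report-data', $data, 3600);

// Returns the cached value while it is still valid; if the cache
// exists but has expired, a blank value is returned instead
$value = $cache->get('report-data', 3600);
```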
  22. This is true, with a couple of small twists:
     • SearchEngine supports "pinning" specific template(s) to the top of the list, or alternatively grouping results by template. These require making slight modifications (adding extra rules) to the query (DatabaseQuerySelect object) generated by ProcessWire.
     • In the dev branch of the module there is a work in progress "sort by relevance" feature, which also modifies the query. This is based on MySQL natural language full-text search, so it's still up to the database to decide how relevant each result really is.
     Sorting results by number of matches, giving some fields more "weight" than others, etc. are not currently things that this module does, though I have occasionally considered whether they should be. The main issue here is that it would require different storage and search mechanisms, so it's a lot of work, and additionally it would raise a few rather complicated issues (e.g. handling permissions, which is something that we currently get "for free" by relying on selectors). Not sure how sensible that would be, all things considered. It might make more sense to use SE to feed data to a separate search tool, or ditch SE altogether for that sort of use case.
  23. Just wanted to say that this is indeed a very nice update! For an ongoing project I'm relying heavily on WireCache for caching due to the nature of the site: most users are logged in and there is a ton of traffic at very specific times. Keen to implement any performance enhancements, and Redis has been at the top of my list. Definitely interested in using (or developing, if need be) a Redis WireCache module. Sure, we have CacheRedis already, but a drop-in solution would be even better. (... not to mention that core support for drop-in caching modules is one instance where WP has been ahead of PW. Not that it matters all that much, but still happy to be able to tick that box.)
  24. Is the admin in English or translated to another language? Just wondering if it could be an issue with the translation, e.g. a translated version that doesn't have %s in it. The line itself looks fine to me.
  25. That's going to be a problem. ProcessPageView::execute is responsible for calling ProcessPageView::renderPage or ProcessPageView::renderNoPage, which are the two methods responsible for catching and handling the 404 exception. In this case you're preventing the core from doing what it would normally do. Instead of looking for workarounds (which may not exist), it's easier to just handle segments in templates, or define allowed segments (as discussed above). If you need to serve a page from a URL below the home page that doesn't match an actual page, and this needs to happen in a module, it's better to leave URL segments out of the equation (just keep them disabled) and instead use URL hooks or hook into ProcessPageView::pageNotFound.
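For reference, the URL hook route mentioned above could look something like this (a minimal sketch; the path and output are placeholders, and URL hooks require ProcessWire 3.0.173 or later):

```php
<?php namespace ProcessWire;

// In site/init.php or module init: serve a virtual URL without relying
// on URL segments; ProcessWire calls this when no actual page matches
// the requested path
$wire->addHook('/virtual-page/', function($event) {
    return "Hello from a URL hook";
});
```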