
teppo

PW-Moderators
Posts posted by teppo

  1. Here's a tiny module that I originally built for my own needs, but figured that someone else might find it useful as well: Textformatter Iframe Embed. It is a textformatter that (by default) turns this:

    <p>iframe/https://www.domain.tld/path/</p>

    ... into this:

    <iframe class="TextformatterIframeEmbed" src="https://www.domain.tld/path/"></iframe>

    Both the embed tag (iframe/) and the iframe markup can be customized via the module configuration screen.

    • Like 7
  2. On 2/18/2022 at 10:26 PM, szabesz said:

    Yes, that was the issue. Some time ago I added some template-code-based "short url" logic to the home page using URL segments, so that was when the RSS feed stopped working.

    Now I am wondering how else I should go about implementing "short urls", something like this: example.com/xyz-promotion

    Do you think that I can use this https://github.com/apeisa/ProcessRedirects/releases for such a thing? I've never seen/used this module before, that's why I'm asking.

    I'm assuming that this problem was resolved already? Just in case: it sounds like your home page wasn't throwing Wire404Exception "properly", e.g. by providing Wire404Exception::codeFunction as the second argument or by calling wire404() 🙂
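    For reference, a rough sketch of what "properly" could look like in home template code that uses URL segments. The short_url field and the segment handling here are made-up examples for illustration, not details from the actual site:

```php
<?php namespace ProcessWire;

// home.php (sketch): resolve a "short url" segment, or bail out with a 404
if ($input->urlSegment1) {
    // hypothetical field name; adjust to however the target page is stored
    $target = $pages->get("short_url={$input->urlSegment1}");
    if (!$target->id) {
        // throwing this lets ProcessWire render its regular 404 page,
        // which keeps things like an RSS feed on the home page working
        throw new Wire404Exception();
        // ... or, on recent core versions, simply: wire404();
    }
    $session->redirect($target->url);
}
```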

    • Like 1
  3. @szabesz, this should be possible with the latest version of the module. There's a new hookable method ProcessChangelogHooks::getPageEventDetails:

    $wire->addHookAfter('ProcessChangelogHooks::getPageEventDetails', function(HookEvent $event) {

        // getPageEventDetails can return null to prevent saving duplicate entries
        if ($event->return === null) return;

        $event->return = array_merge($event->return, [
            'Custom key' => 'Custom value',
        ]);
    });

    • Like 1
  4. Since classes/HomePage.php is part of the default site profile (blank profile) for recent ProcessWire versions, this makes it sound like the site might've been updated and possibly reinstalled.

    Would be interesting to know what those new files in wire were, e.g. if they were also files added by an update. Same goes for those new files in root dir as well.

    The site shouldn't update on its own, of course, so that still doesn't explain what might've happened in the first place.

    Your site didn't have install.php available, by any chance? That, combined with write permission for the web server user, could be one potential gotcha.

    This sounds like something that @ryan might want to look into, just in case. With the information we currently have available it is not possible to figure out much more.

    • Like 3
    The symptoms make it sound like your .htaccess file is somehow erroneous or Apache is not using/processing its rules correctly. I would start debugging from there. You could, for example, try to access one of the files that are normally protected by .htaccess rules, just to see if those rules are working as expected.

    (Root URL working properly is a typical sign of this as well: PHP is working and requests for the index.php file are apparently being processed, but .htaccess rules are not passing requests for non-root paths to it.)

    • Like 1
  6. 3 hours ago, DrQuincy said:

    However, many scripts, such as Google Analytics, also give you a noscript image or iframe to install. How do you handle this? I presume they are still taking people's IPs and using them for tracking, and therefore it's personal info under GDPR.

    The only solution I can think of right now would be handling the cookie check server-side, which in most cases is honestly a waste of time and resources. My solution is to leave the noscript versions out and use JS similar to what PrivacyWire (and probably all other sensible tools) does: by default the script is disabled (e.g. has type="text/plain" or something along those lines), and only after consent has been given is that swapped for the actual type.

    • Like 2
  7. As a free solution for ProcessWire I would definitely recommend PrivacyWire. In my opinion it is the only viable solution at the moment.

    When that's not an option or more automation is needed, we tend to use Cookiebot. It's a paid service (they do have a free tier for small scale and limited requirements), but there are a few things that can make it worth the cost:

    • It scans the site (as you mentioned) automatically and creates a list/table of used cookies, as well as identifies any cases where cookies are set without prior consent. At least here in Finland a list of cookies used on a site — including whether they are first or third party cookies, what they are used for, and for how long they are stored — is required. While one can of course keep such a table up to date manually, well... let's just say that especially for large and media-rich sites it's a whole lot of work.
    • It has an automatic block mode that at least tries to automatically prevent the browser from loading anything that can set cookies. With PrivacyWire (for example) you'll have to modify the markup of embedded script tags, iframe elements, etc. in order to load them only after consent has been given.
    • It automatically generates and stores per-user consent history. At least here in Finland that is a necessity — site owners must be able to produce proof that the user has indeed given consent to storing data, and a separate registry is a must (said proof has to be available even if the user has cleared their cookies, localStorage, etc.)
    • With paid plans it is very customizable. For example, we use a modified version of generaxion/cookiebot-accessible-template, since many of our sites need to be accessible.

    There are other services that do similar things, and I believe that some are cheaper than Cookiebot, but I have not had a chance or any need to properly dig into said other services.

    I'm only familiar with official guidelines and legislation as it has been implemented here in Finland, and also IANAL. Even with GDPR the actual implementation (and how that implementation is interpreted by officials) can vary from country to country 🙂

    • Like 9
  8. 10 hours ago, tires said:

    I'm afraid this is a general problem with modules on older websites, isn't it?
    Or a general problem of a CMS with less popularity?

    As someone who has spent considerable time dealing with similar issues, submitting error reports and PRs, etc. in the context of WordPress plugins, I can say that it's simply a general issue with dependencies. Popularity of the CMS matters very little, if at all. Relative popularity of the module/plugin matters more, though even that is not a 100% guarantee.

    You can avoid some of this by only using the dependencies you truly require, and choosing the ones to use carefully: see if they are actively maintained, whether that is by regular code updates or via the support forum.

    10 hours ago, tires said:

    How do you deal with this?

    Only using the dependencies I truly require, and choosing the ones to use carefully 🙂

    There's no silver bullet here. If the module author is still active, you can submit error reports via the support forum and/or GitHub. If the author is no longer active, someone (including you) could fork the module and make fixes there, but unless the new maintainer is going to really commit to maintaining the module, it's not a good long term solution.

    For modules that are no longer actively maintained, my recommendation would be to move on — find something else that gets the job done. Or better yet, don't use a module. Again, you should only use the modules that you truly need.

    Alternatives for the modules you've mentioned:

    • Instead of MarkupSEO you could look into Seo Maestro, but since the author (Wanze) has stated that they are no longer actively involved with ProcessWire, it's questionable whether this is a good long term solution either. We use Markup Metadata, which does at least some of the same tasks; it's also less involved/opinionated, which means that it's easier to maintain.
    • Instead of AllInOneMinify I would really recommend looking into ProCache. It's not free, but if it's for a commercial purpose (or long term personal projects etc.) it is — in my opinion — well worth the cost. It also does a lot for your site's performance that AllInOneMinify won't do.
    • Like 11
  9. 22 hours ago, bernhard said:

    You mean to stream the output of mysqldump directly to the local dev environment?

    Yes.

    The current approach makes use of ProcessWire's database export feature, and I'm not sure if it can do this (from what I can tell this is not really what it was meant to do; it was rather intended to store the dump "permanently"), so this might need to be a different command, just like flydev mentioned above (native:restore). Personally I don't see much reason to use the ProcessWire db export method in this context, unless it does something special that mysqldump can't do (?).

    22 hours ago, bernhard said:
    • Editing users/posts/pages etc: Never had that need, I'd be happy to hear examples. The only thing that I needed was changing/resetting user's passwords, which is possible via user:pass

    Various reasons for this. Resetting passwords is one, but I've also had to remove users matching specific conditions or temporarily disable their accounts, update categories for posts (e.g. a main category is removed and all posts that belonged to it need to be connected to a new one), automate some content modification for a number of pages matching specific criteria, etc.

    In WP it's not as easy to create "bootstrap scripts" as it is in PW, so WP-CLI is (in my opinion) often the easiest way to do any sort of bulk edit 🙂

    22 hours ago, bernhard said:
    • Cron jobs? How does that work? I'm doing that via my server control panel.

    This is where differences between WP and PW matter.

    WP has a built-in cron task queue: the core maintains a list of registered tasks (some from the core itself, others registered via hooks from themes or plugins). Each task has a set recurrence (interval), a precalculated time for its next run, and a hook function to trigger when the task is due. WP-CLI can be used to trigger a specific task, run all tasks that are due now, and so on.

    By default tasks are executed "lazily", similar to what ProcessWire does when using Lazy Cron, but typically you would not want to rely on that or slow down page views for regular users. Instead you'd set up a real cron job to run e.g. once per minute. That cron job then executes "wp cron event run --due-now --quiet".
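    In practice the crontab entry for this is a one-liner along these lines (the site path and the once-per-minute schedule are just examples, adjust to your install):

```shell
# run all due WP-Cron tasks once per minute via WP-CLI
# (the cd path is an example; point it at your WP install)
* * * * * cd /var/www/example.com && wp cron event run --due-now --quiet
```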

    I don't say this often, but in this area WP is — in my opinion — ahead of PW. The cron setup makes a lot of sense, and it's really easy to keep track of active tasks 🙂

    • Like 1
    • Thanks 1
  10. 3 hours ago, bernhard said:

    Which means 100 more manual steps on 100 other sites, or if there are 3 steps for each site, that's 300 manual steps. The example is a little contrived of course, but I just hate these kinds of tasks and I'm used to "git pull and it works"

    Fair 🙂

    3 hours ago, bernhard said:

    Could you please explain why that is so risky or what the risk exactly is?

    If PHP has write access to directories containing executable code, it opens a huge can of worms: a malicious or vulnerable module could allow attackers to write code and wreak serious havoc on the target system. The same could happen if there's a flaw in site code that allows an attacker to write or download their own executable files. You could argue that a malicious or vulnerable module could cause similar problems anyway, but in my experience it is often easier to slip in code that writes code than code that does evil things directly.

    At the very least it's another attack vector.

    I'm aware that even the installer suggests that we allow PHP to write to the modules directory, but in my opinion that should not be done unless absolutely necessary. And I have yet to come across a situation where that would be the case 🙂

    • Like 2
  11. 6 hours ago, bernhard said:

    What did you use it for?

    1. Flushing caches and rewrite rules after each deploy
    2. Exporting database from production and importing it locally (sometimes, very rarely, the other way around)
    3. Editing (and sometimes just listing and filtering) users, posts/pages, rewrite rules, plugins...
    4. Managing (listing and running) registered cron jobs
    5. Managing translations

    Those are the most common things I use it for, ordered from most common to least common 🙂

    Looking at the commands currently available for RockShell, "db:pull" and "db:restore" seem like they could be the most useful for me personally. I'm not entirely convinced about the method used there, though, considering that it needs to store temporary dump file(s) on disk. It seems to me that using mysqldump and redirecting the output to the local disk has certain benefits over this, such as eliminating the need for temporary files, and thus also eliminating any chance of multiple simultaneous pulls causing conflicts.
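    To illustrate, the kind of streaming pull I have in mind is roughly this. The host, database names, and user are placeholders, and this is a sketch of the general technique, not something RockShell currently does:

```shell
# stream a dump from production straight into the local database;
# no temporary dump file is written on either end
ssh deploy@production.example.com \
  "mysqldump --single-transaction --quick prod_db | gzip" \
  | gunzip \
  | mysql local_db
```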

    "pw:users" could be useful as well, but if there's no way to filter users or define which fields to display, the use cases seem somewhat limited. In comparison WP-CLI makes it possible to filter users by specific fields (very handy on larger setups), and there are commands for creating users, setting field values, making bulk edits, etc. (Considering the big picture, instead of grouping "users" under "pw" as a single command, it might make more sense to define "users" or "user" as a namespace and "list" as a single command under that namespace.)

  12. On 6/30/2023 at 5:05 PM, bernhard said:

    @teppo I already wrote what I don't like about the process. Imagine you have a module and you add one translatable string to that module. The steps necessary to push that translation into your modules folder are:

    Steps necessary when using RockLanguage and when RockLanguage is setup for your project (you have to do that only once per project):

    • Once on the file translation screen, translate the text into the desired language and save
    • git commit

    Now imagine you pushed your translation and then you add another word the other day... What would you have to do?

    I intentionally did not use the forums spoiler feature and I intentionally did not use "same as above" to indicate how much (unnecessary and repeated) work the current process puts on the developer.

    Imagine when using RockLanguage:

    • translate the word
    • git commit

    Thanks for elaborating! When you put it like that, it definitely seems like a lot 🙂

    In my case (using the core approach) the process boils down to this:

    1. Translate the word in admin
    2. Click the "view" link to display current CSV content
    3. Manually copy and paste the new string from the CSV content in admin to the existing CSV file bundled with the module
    4. git add + commit

    A manual step is required to copy the data, and another is required to update translations for the module on another site, but to me personally it doesn't feel like a very big deal. Additionally, I would never allow PHP to write anything directly into the modules directory; in my opinion it is a very risky thing to do, so that option is out of the question 🙂

    Anyway, since the core has a system for bundling translations with modules, it would make sense to see how it could be improved so that perhaps we don't need "competing" solutions. From this thread alone I've picked a couple of ideas that would (in my opinion) make sense as core additions, and I'm planning to open requests or submit PRs for those.

    • Like 1
    • Thanks 1
  13. 1 hour ago, bernhard said:

    Thx for your screenshot! Looks like we should maybe change the syntax for commands from dash (db-pull) to colon (db:pull) to get the grouping?

    It does look cleaner, especially if there can be multiple dashes in a command name, which makes the current syntax somewhat ambiguous. The only CLI tool I'm really familiar with is WP-CLI, and there the namespace is separated by a space, e.g. "wp db export dump.sql" or "wp db import dump.sql".

    Anyway, just wanted to drop in to say that I'm really happy to hear that RockShell is alive and actively developed. I've not used it myself yet (and I never really got into wireshell; I typically just ran commands through Tracy's console or via one-off PHP files that bootstrap PW), but I've been using WP-CLI a lot recently. I should probably dig into RockShell as well 🙂

    • Like 1
  14. 16 hours ago, flydev said:

    Wouldn't the system be too slow with EFS? It's a network drive, isn't it? Curious to see how locks are managed with this setup.

    EFS has been getting considerably faster in recent years, but yes, there is always latency. Whether it's noticeable/problematic, I don't really know.

    To be clear, using EFS for session files is not something I've ever done myself.

    • Like 1
    • Thanks 1
  15. There are basically two options here:

    1. Use separate session storage. That could be MySQL (in which case you may need to beef it up even more), or it could be something else, e.g. Redis. Redis is relatively easy to configure as session storage: set up a Redis server, make sure that you have the Redis extension installed for PHP, and point PHP to the Redis server via settings in php.ini.
    2. Store files on a shared disk. From what you're describing, it sounds like the disk is not shared in this case, which is not going to work for session files. With AWS this would typically mean EFS, i.e. making sure that the location where session files are stored is mounted on EFS.
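    For the Redis option, the php.ini side of it boils down to something like this (the host and port are typical defaults, adjust to your setup):

```ini
; requires the phpredis extension to be installed and enabled
session.save_handler = redis
session.save_path = "tcp://127.0.0.1:6379"
```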

    If you were already using SessionHandlerDB, the easiest approach would probably be to keep using it, unless the scalability issue is more or less immediate. Just make sure that session data is cleared automatically, so that the size of the sessions table won't become a bottleneck.

    The key thing here is that if you're, for example, using Ubuntu (or a similar setup), PHP may have session garbage collection disabled by default, with cleanup handled via a cron job instead. That job will not automatically apply to SessionHandlerDB, so you would need to either a) tweak PHP settings and enable normal garbage collection (as Bernhard mentions in issue https://github.com/processwire/processwire-issues/issues/1760) or b) set up a custom cron job or something similar to clean up the SessionHandlerDB database table manually.
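    Option b can be as simple as a scheduled query along these lines. The sessions table and ts column match SessionHandlerDB's defaults as far as I remember, and the database name and 24-hour window are just examples; check all of these against your own setup:

```shell
# crontab entry (sketch): hourly purge of session rows older than one day
0 * * * * mysql your_pw_db -e "DELETE FROM sessions WHERE ts < NOW() - INTERVAL 1 DAY"
```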

    • Like 1
  16. 6 hours ago, MSP01 said:

    Finnish translations seem to be currently located here:

    https://github.com/apeisa/Finnish-ProcessWire

    This repository doesn't include third party module translations; it only covers the core and core modules.

    On 5/16/2023 at 12:01 AM, lpa said:

    Is there translation for the FormBuilder module anywhere, then?

    We've got a relatively up-to-date FormBuilder translation package floating around. I'll give it a quick read-through, see if it needs updating, and post it somewhere (most likely the FB support forum).

    • Like 1
  17. 8 hours ago, Nishant said:

    Not sure if the session table had grown large, as after uninstalling the SessionHandlerDB module the table is also removed from the database. But last time I checked under the ProcessWire panel, it was showing somewhere around 200 active sessions.

    If you stick with DB sessions you should see this pretty soon. If sessions are not cleared properly, data will likely just keep accumulating.

    I'm not sure if stale data remains viewable in admin, so you might want to take a peek at the database as well, just in case.

    • Like 1
  18. 10 hours ago, bernhard said:

    This is interesting. I thought it would be the other way round. I thought that ryan wrote that somewhere, and also that since SessionHandlerDB is newer, there must have been a reason to build it, and it would therefore be preferable over disk sessions.

    First of all, there are benefits other than performance, such as the fancy process view that SessionHandlerDB comes with (if I recall correctly; it's been a long time since I last tried it). Also, if you have multiple servers but a shared database, it may be easier to use DB sessions. (We use Redis in part for this reason.)

    I don't have any numbers, and I'm afraid those wouldn't be very helpful: performance depends on a number of factors, and thus numbers from one use case may not be applicable to another. That being said, in my personal experience, for typical use cases disk is often faster and has a smaller chance of running into scalability issues; I've never had scalability issues with disk-based sessions, but I've run into them multiple times with database sessions. Though again, that's just my experience from the setups and use cases I've worked with 🙂

    Overall, comparing session files stored on local disk vs. MySQL/MariaDB that stores data on the same disk, I would expect database to have more overhead; it has to do much more than just read a file from the disk, after all. But then again database can make use of in-memory caching to mitigate such issues. And of course if your database is on a separate machine (or a faster disk) that would again change things, though that's also where latency due to connections and data transfer may step into the picture.

    Finally, the native PHP session handling mechanism is in some ways less likely to cause issues, especially compared to something you've cooked up yourself. (Just for the record, PHP can store sessions in Redis via the phpredis extension's session handler, so I would consider that "native".) It should probably be noted, though, that if you let PHP handle garbage collection, that is likely to cause some amount of overhead; the approach that Ubuntu takes (a separate cron job) does not suffer from this, at least not in the same way.

    My personal preference for session storage is Redis — which, again in my experience, is also the fastest option of those mentioned here — and if that's not available then disk 🙂

    • Like 5
    Just wondering if it might be related to this: https://github.com/processwire/processwire-issues/issues/1760. Basically, on some systems (mainly Ubuntu) sessions are cleared in a somewhat non-standard way, which works for disk-based sessions but not for SessionHandlerDB. The site running slow could mean that the sessions table has grown so large that queries take a very long time.

    If that's the case, you may want to apply the solution that Bernhard suggested in aforementioned issue. Or alternatively disable SessionHandlerDB.

    (Just for the record: in most cases I would advise against database sessions, unless there's a specific reason for them. Disk is usually — in my experience at least — faster and also tends to have fewer quirks. If disk based session storage is not an option, I would look into setting up Redis for session storage.)

    • Like 4
  20. 1 hour ago, AndZyk said:

    Since TailwindCSS covers the CSS part of components, what do you use for making the components interactive? Like for example a carousel, accordion etc.

    Probably as many answers as there are answerers, but from my personal point of view: I use a select few (proven) libraries from project to project, and always as few as possible.

    In total I typically use 3-4 libraries: for carousels, foldable mobile navigation, pull-out bars, and modals. Those are the "hard problems"; most other things are easy enough to build case by case, and often would require custom code anyway to look and work exactly as designed.

    (I don't do a lot of super complex animation/visualization/modeling in a typical project.)

    In my case, frameworks with all the bells and whistles rarely do exactly what I want (or what the client or another designer wants), so I just end up hacking them, replacing parts, etc. One exception is PWA/mobile apps; for those I use Vuetify (and design accordingly) 🙂

    • Like 3
  21. On 3/28/2023 at 12:52 PM, Didjee said:

    Hi @teppo, first of all, thank you for this wonderful module! Do you see a possibility that ProFields Table will be supported in the future? I am certainly willing to sponsor (part of) the development costs for this!

    This is something that I may need as well, so I will look into it, soon(ish) I hope 🙂

    On 4/12/2023 at 2:00 PM, prestoav said:

    Thanks for the great module!

    Sadly, I'm now seeing an error after updating my MAMP install to PHP 8.0.8. See errors below; other version numbers for this install are:

    Thanks for the report, I'll look into this. Sounds like paths are somehow messed up.

    1 hour ago, adrian said:

    1) We often find lots of revisions listed for a field that simply return "There is no difference between these revisions". I don't really understand why a revision is stored if there were no changes made. It seems like unnecessary DB bloat, but more importantly, it makes it very hard to find the revisions where changes actually are made. Do you think this behaviour could be changed so that no revision is stored?

    Just to be clear, are these consecutive revisions for a field where there are no actual changes between those revisions? If so, it definitely sounds like a bug. It is not intentional to store a revision when/where there are no changes in it.

    Which version of VersionControl is this, and are you able to reproduce it easily, or does it occur randomly/rarely? What type of field is it? Does the error occur on API use, or while editing the page via admin?

    Any chance that you might've come across these on a "blank" setup, just to rule out conflicts caused by hooks, other modules, etc.?

    It's been a while since I touched the code related to this, but based on a quick look it seems that VersionControl only stores data if $page->isChanged($page_field->name) returns true, so basically it sounds like ProcessWire itself is saying that the field has changed. There's only one exception to this, but that is only applicable to when module configuration is saved/changed, so it would be odd if that was the cause.

    Do you have any idea why ProcessWire might report these fields as changed, even if they aren't? I don't think I've run into this particular problem myself, though isChanged() has always felt somewhat "quirky" to me, or rather I've never quite understood where and when it works or doesn't work 🙂
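    If you want to dig into this, a quick way to see what ProcessWire itself considers changed is hooking right before save. This is just a debugging sketch; the "changed-fields" log name is made up:

```php
<?php namespace ProcessWire;

// site/ready.php (sketch): log which fields PW reports as changed on save
$wire->addHookBefore('Pages::saveReady', function(HookEvent $event) {
    $page = $event->arguments(0);
    // getChanges() returns the names of fields/properties marked as changed
    $changes = implode(', ', $page->getChanges());
    $event->wire()->log->save('changed-fields', "page {$page->id}: {$changes}");
});
```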

    1 hour ago, adrian said:

    2) We often see changes to page reference fields (checkboxes in this case, but I expect the inputfield type probably doesn't matter) where changes are recorded, but in reality they weren't actually changed - why would they be recorded as being removed and then added again?

    This also seems odd. It would make sense if the order had changed, but here it sounds like something is marking the field as "changed" even though it has not. It might have something to do with your first point, though at this point I'm just guessing. Again, this is not something I've observed myself, as far as I can recall.

    1 hour ago, adrian said:

    3) When you have a page template where the width of fields is a small percentage (25%, 33%, etc), the interface for viewing the revision history is quite awkward, with lots of horizontal and vertical scrolling required. I was wondering whether you think it might be a better experience to load the revisions in a PW panel (pw-panel - https://processwire.com/blog/posts/pw-3.0.15/)

    That is a good point. For 2.x I did consider redesigning the interface, but if I recall correctly, I figured that current interface feels better for most use cases. There may have been technical issues as well, though I can't quite remember what they might've been.

    I'd be happy to give this a shot again some time, though not sure about the timeline 🙂

  22. Hey @snck,

    This should be fixed in the latest version of the module, 0.35.4. I'm not entirely sure of the circumstances causing this issue, but the warning was pretty clear, so it should be fine now.

    • Like 1