Jonathan Lahijani

Everything posted by Jonathan Lahijani

  1. Based on driving myself completely insane with noHooks for the last 2 years, and based on what Ryan specifically said here: ... I completely agree with Ryan. The noHooks option should be absolutely avoided. Seriously, if you use it in advanced cases like I have been doing, you will hit every WTF issue known to man. It is not made for developer use, even though it gives off that vibe, and I think that is a mistake. I will write more about this in depth soon, but at least in my situation, my goal was to ultimately update a page and all of its descendant repeaters (repeaters within repeaters) only after the page and its descendant repeaters have been saved completely first, without hook interference. Using noHooks basically messes everything up (saying it's been frustrating dealing with it is an understatement) and there are unintended consequences everywhere! The correct way to do what I described is to do something like this, which took forever to figure out (every line has a specific reasoning behind it):

     ```php
     //
     // /site/classes/OrderPage.php
     //
     class OrderPage extends Page {
       public function getAggregateRootPage() {
         return $this;
       }
       public function finalize() {
         // your code to finalize the page, such as populating an order_total field, etc.
       }
     }

     //
     // /site/classes/OrderLineItemsRepeaterPage.php
     //
     class OrderLineItemsRepeaterPage extends RepeaterPage {
       public function getAggregateRootPage() {
         return $this->getForPage();
       }
     }

     //
     // /site/init.php or /site/ready.php (doesn't matter)
     //
     wire()->set('finishedPages', new PageArray());

     // hook 1: use Pages::saved only to build the list of pages to finalize
     wire()->set('finishedPagesHook', wire()->addHookAfter('Pages::saved', function(HookEvent $event) {
       $page = $event->arguments('page');
       if(!method_exists($page, 'getAggregateRootPage')) return;
       wire('finishedPages')->add($page->getAggregateRootPage()); // duplicated pages won't get stored
     }, [ 'priority' => 1000 ]));

     // hook 2: use ProcessWire::finished to finalize the pages 🤌
     wire()->addHookBefore('ProcessWire::finished', function(HookEvent $event) {
       wire()->removeHook(wire('finishedPagesHook'));
       foreach(wire('finishedPages') as $finishedPage) {
         $finishedPage->finalize();
       }
     }, [ 'priority' => 1001 ]);
     ```

     When I demo my system one day, which is way more complicated than the example code above, it will become clear.

     TL;DR: don't use noHooks when saving a page. DON'T! Instead, keep a log of which pages need to be finalized, and act on those pages in a ProcessWire::finished hook, which is when you can be absolutely sure the coast is clear. I wish I had known about that hook earlier. If you follow those simple rules, you don't have to think about CLI vs. non-CLI, AJAX vs. non-AJAX, whether the current page process implements WirePageEditor, uncache, getFresh, saved vs. saveReady, before vs. after hook, hook priority, editing a repeater on a page vs. editing the repeater "directly", where the hook should go (it should be in init.php/ready.php or in the init()/ready() method of a module), etc. Note: there's a whole other aspect to this in terms of locking a page to prevent multiple saves (like if a page was being saved by an automated script and the same page was being saved by an editor in the GUI).
  2. Awesome. I use BunnyCDN as well (previously KeyCDN), configured with ProCache, mainly because they publish the list of IPs their edge servers use when they need to pull assets from your website (and then serve them). While this post isn't directly related to your module, it matters if you are using WireRequestBlocker, for the reasons I explained here (post only viewable if you have access). Long story short, if a CDN makes a request to a URL on your site that matches a blocking rule in WireRequestBlocker (rare, but it's bound to happen by accident), then the IP of that particular BunnyCDN edge server will get blocked. Then, if a visitor to your site is being sent assets from that edge server, it will error, because the CDN was never able to obtain them while the edge server was blocked. This is why the site might look fine in California but broken in New York, for example, as users are being sent assets from different edge servers. To prevent this from happening, I have a cron job set up (it runs every 24 hours) to grab the list of BunnyCDN edge-server IPs and insert it into WireRequestBlocker's "IP addresses to whitelist" field. This is a function that can do what I described above:

      ```php
      function bunnycdnWhitelistIps() {
        if(!wire('modules')->isInstalled('WireRequestBlocker')) return false;

        // fetch the BunnyCDN edge-server list
        $url = 'https://bunnycdn.com/api/system/edgeserverlist';
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        curl_setopt($ch, CURLOPT_TIMEOUT, 30);
        $response = curl_exec($ch);
        $httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);
        if($httpCode !== 200 || $response === false) {
          throw new WireException("Error fetching data from BunnyCDN API. HTTP code: $httpCode");
        }

        // parse the JSON response
        $data = json_decode($response, true);
        if(json_last_error() !== JSON_ERROR_NONE) {
          throw new WireException("Invalid JSON response.");
        }

        // extract the IP addresses into an array
        $ipAddresses = [];
        if(is_array($data)) {
          foreach($data as $ip) {
            if(filter_var($ip, FILTER_VALIDATE_IP)) {
              $ipAddresses[] = $ip;
            }
          }
        }

        // remove duplicates and sort
        $ipAddresses = array_unique($ipAddresses);
        sort($ipAddresses);

        // save the list into WireRequestBlocker's whitelist setting
        $data = wire('modules')->getModuleConfigData('WireRequestBlocker');
        $data['goodIps'] = implode("\n", $ipAddresses);
        wire('modules')->saveModuleConfigData('WireRequestBlocker', $data);
      }
      ```
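     As a side note, the fetch-and-parse portion of a function like the one above can be isolated into a pure helper so it is testable without calling the API. This is only a sketch; `parseEdgeServerIps` is a name I made up, not part of WireRequestBlocker or BunnyCDN:

```php
<?php
// Hypothetical helper: turn the raw JSON body returned by the edge-server
// endpoint into a sorted, de-duplicated list of valid IP addresses.
function parseEdgeServerIps(string $json): array {
    $data = json_decode($json, true);
    if(json_last_error() !== JSON_ERROR_NONE || !is_array($data)) return [];
    $ips = [];
    foreach($data as $ip) {
        // keep only entries that are valid IPv4/IPv6 addresses
        if(is_string($ip) && filter_var($ip, FILTER_VALIDATE_IP)) $ips[] = $ip;
    }
    $ips = array_unique($ips);
    sort($ips); // sort() also reindexes the array
    return $ips;
}
```

     Splitting it out this way also means the cron job can log exactly which step failed: the HTTP fetch or the parse.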
  3. Thanks for this module. I've been using it a lot recently, given that the web app I'm building makes heavy use of roles and permissions, and I appreciate the "helicopter view" the module gives you. Recently, I've been using access rules on fields within the context of a template. Unfortunately, ProcessWire doesn't offer a comparable "helicopter view" of those settings, which means you have to go into each template and click on a field to bring up the modal, then go to the Access tab to see what the settings are. Now imagine having to do that for dozens of fields. I wonder if you've dealt with that, and whether a feature like that makes sense for this module (or if it's out of scope).
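     In the meantime, a rough API-side workaround is to loop over all templates and dump each field's access settings in template context. This is only a sketch based on my understanding of the Field access properties (useRoles, viewRoles, editRoles) and Fieldgroup::getFieldContext(); verify it against your ProcessWire version before relying on it:

```php
<?php
// Sketch: a "helicopter view" of field access settings from the API side.
// Not part of the module; call from a template file or a bootstrap script.
function dumpFieldAccess(): void {
    foreach(wire('templates') as $template) {
        foreach($template->fieldgroup as $field) {
            // get the field with this template's (fieldgroup) context applied
            $f = $template->fieldgroup->getFieldContext($field);
            if(!$f || !$f->useRoles) continue; // skip fields without access control
            echo "{$template->name}.{$f->name}\n";
            echo "  view roles: " . implode(', ', (array) $f->viewRoles) . "\n";
            echo "  edit roles: " . implode(', ', (array) $f->editRoles) . "\n";
        }
    }
}
```

     The output lists role IDs rather than names; mapping them through wire('roles') would make it friendlier.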
  4. @bernhard Is there a way for ProcessWire to know if it was executed from RockShell? Does RockShell leave some sort of signature that could be detected? Right now I'm working around this by putting this in the handle() method: $this->wire()->config->isRockShell = true; Somewhat related: does it make sense to have RockShell put ProcessWire in CLI mode by default? Currently it doesn't do that, and my assumption was that it would. I'm not sure of the pros/cons of doing that, but I assume you've given it some thought.
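     For what it's worth, until RockShell exposes an official flag, a generic SAPI check can at least tell CLI from web execution. This is plain PHP, not a RockShell or ProcessWire API:

```php
<?php
// Generic check: true when PHP is running from the command line
// (e.g. under RockShell or cron), false under a web server SAPI like fpm.
function isCliMode(): bool {
    return PHP_SAPI === 'cli' || PHP_SAPI === 'phpdbg';
}
```

     It won't distinguish RockShell from any other CLI entry point, though, which is why a dedicated signature (like the isRockShell config flag workaround) would still be useful.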
  5. @bernhard Here's a function that solves the original problem of how to MOVE (not copy!) repeater items from one page to another, preserving IDs. I tested it and I believe I accounted for everything, but I recommend testing it more before using it in production.

      ```php
      // move the repeater items from fromPage to toPage
      // the same repeater field must be assigned to both pages
      // note: fromPage and toPage can be repeater page items as well, since they are technically pages
      function moveRepeaterItems(string $fieldName, Page|RepeaterPage $fromPage, Page|RepeaterPage $toPage): void {

        // checks
        if(!wire('fields')->get($fieldName)) {
          throw new WireException("Field '$fieldName' does not exist.");
        }
        if(!$fromPage->id) {
          throw new WireException("From page does not exist.");
        }
        if(!$toPage->id) {
          throw new WireException("To page does not exist.");
        }
        if(!$fromPage->hasField($fieldName)) {
          throw new WireException("From page does not have field '$fieldName'.");
        }
        if(!$toPage->hasField($fieldName)) {
          throw new WireException("To page does not have field '$fieldName'.");
        }
        if($toPage->get($fieldName)->count('include=all,check_access=0')) {
          throw new WireException("To page already has items in field '$fieldName'.");
        }

        // store the parent_id
        $parent_id = wire('database')->query("SELECT parent_id FROM field_{$fieldName} WHERE pages_id = '{$fromPage->id}'")->fetchColumn();

        // delete the potential (and likely) existing toPage data placeholder
        // prevents this error: Integrity constraint violation: 1062 Duplicate entry '1491109' for key 'PRIMARY' in /wire/core/WireDatabasePDO.php:783
        // remember, this will be empty since we checked above that there are no items in the toPage field
        wire('database')->query("DELETE FROM `field_{$fieldName}` WHERE `pages_id` = '{$toPage->id}'");

        // update the record in table 'field_$fieldName' where pages_id=$fromPage->id and change the pages_id to $toPage->id
        wire('database')->query("UPDATE `field_{$fieldName}` SET `pages_id` = '{$toPage->id}' WHERE `pages_id` = '{$fromPage->id}'");

        // update the record in table 'pages' where id=$parent_id: change the name from 'for-page-{$fromPage->id}' to 'for-page-{$toPage->id}'
        wire('database')->query("UPDATE `pages` SET `name` = 'for-page-{$toPage->id}' WHERE `id` = '{$parent_id}'");
      }

      // example
      moveRepeaterItems(
        fieldName: 'order_line_items',
        fromPage: $pages->get("/orders/foo/"),
        toPage: $pages->get("/orders/bar/")
      );
      ```
  6. A quick note: Keep in mind that the clone will not occur (ProcessPageEdit::processSubmitAction is never executed) if there's a required field on the page being cloned that has not been populated and/or the page is statusFlagged.
  7. I've only played around with the new admin theme and haven't committed to it yet. That was one thing I noticed as well and I feel the color difference in the original theme definitely makes it easier to visually separate nested fields.
  8. Is it possible to put TracyDebugger in Development mode regardless of the settings in the "Access permission" section? That is, is there a $config setting that can force Development mode and override whatever is in Access permission? I have a special case where I want Development mode that none of the Access permission options are quite flexible enough for. To be specific, I want it enabled in ProcessWire CLI mode (but only on my dev server), which means I can't use user/role-based or IP-based detection. I also don't want to use "Force isLocal", because that would enable it for both CLI and GUI mode. I don't want it in GUI mode in this particular case, since my dev server is technically publicly accessible and that could lead to TracyDebugger being used as a hacking vector.
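     In case it helps anyone with a similar setup: if I understand the module correctly, TracyDebugger settings can also be overridden from /site/config.php via a $config->tracy array, so something like the following might work. The 'outputMode' key is my assumption from the docs, so double-check the exact key names before relying on this:

```php
<?php
// /site/config.php (sketch, unverified): force Tracy's development mode
// only when ProcessWire is running from the command line
if(PHP_SAPI === 'cli') {
    $config->tracy = [
        'outputMode' => 'development', // web (GUI) requests stay untouched
    ];
}
```

     The appeal of gating on the SAPI instead of "Force isLocal" is exactly the case described above: CLI gets Development mode while publicly reachable GUI requests keep the restrictive settings.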
  9. https://x.com/adamwathan/status/1559250403547652097
  10. I looked at DaisyUI, but it relies on @apply under the hood, which the creator of Tailwind said he wishes he could uninvent, so that didn't sit well with me.
  11. @bernhard Within the Tailwind ecosystem, my goal was to find the "framework" that had the best JavaScript components (the typical things like accordions, tabs, etc.). I started with Tailwind UI, but that was geared for headless; however, behind the scenes in their demos they had a hidden Alpine.js-based solution. That was hacky, but I used it for a little bit. Then Alpine.js itself had their premium components, and since the two pair well, I went with that for a while, but it didn't feel right in the way that UIkit does. Then I found Preline and played with that for a while; a couple years ago it was good, but not as good as Flowbite. So I stuck with Flowbite for the last 2 years. A couple months ago I revisited Preline and they've made incredible progress, so much so that I feel it's "ahead" of Flowbite. Then, a month or two ago, the folks at Tailwind finally released non-headless / vanilla JS components officially for Tailwind UI, which I haven't experimented with yet, but I'll probably switch to that if it makes sense (which I'm sure it will): https://tailwindcss.com/blog/vanilla-js-support-for-tailwind-plus Also, I've been thinking about eventually switching back to vanilla CSS at some point because of how much progress it's made in the last 15 years. I stopped writing vanilla CSS when Bootstrap 2 came out, and ever since then I've gone from one framework to another (Bootstrap 2 -> Foundation -> Bootstrap 3 -> UIkit 2 -> UIkit 3 -> Tailwind). I love UIkit, but I feel it's antiquated now and not taking advantage of all the cool new features of CSS. I also came to dislike being softly "jailed" into their way of doing things. And I like the idea of not using a build step, so vanilla CSS is probably what I'll settle on when I'm ready.
  12. I really wish I knew about this issue 2 years ago. Like... really really wish.
  13. @ryan, can you pretty please look at this issue that has to do with a sub-selector bug that occurs when there are about 1500+ pages? https://github.com/processwire/processwire-issues/issues/2084 It would be nice to have that fixed before the next master version.
  14. Just wanted to say this module is working really well under a lot of load. 👍
  15. Thank you very much. This adjustment also fixes admin emails being sent when errors occur. Awesome!
  16. Ack! You're right. I did this API integration a year ago and I forgot about the 'auth' setting in my routes file. Thanks for mentioning this.
  17. @Sebi Is there a reason there is not a simple "API-key based authentication" auth-type, meaning to communicate with the API, you just need an API key (without having to deal with sessions or JWTs)? Would you allow a pull request to add this?
  18. Callbacks were introduced in this update (ProcessWire 3.0.235), but I feel this was an important update that wasn't talked about much. Analogous to Rails callbacks.
  19. That worked... I replaced the try/catch block with just $app->run(); The error appeared in the console and was logged to the 'errors' log. I wonder if it can be refactored a bit? Ideally I'd like to use my own try/catch in my own RockShell command, but with the current approach RockShell's try/catch will override it, right?
  20. Basically, I'm running a RockShell command and I get this error due to a logic bug in my own code: Call to a member function getProduction() on null. However, because the error isn't very specific, I'm not sure where in my codebase it's occurring (which file and which line), and it doesn't get logged to ProcessWire's 'errors' log. If I run the same code that's inside my RockShell command "outside" of RockShell (using php on the command line), it will also error, but with details about it, like this:

      PHP Fatal error: Uncaught Error: Call to a member function getProduction() on null in /path/to/pw/site/classes/BlahPage.php:210 Stack trace: ...

      Is it possible to log errors to the error log?
  21. @bernhard Somewhere in my codebase, an exception is occurring. It's not RockShell specific, but I'm using RockShell to run a background job and at some point this exception occurs. However, the exception details are very minimal in RockShell's output (just a single line), which doesn't give me enough information to track it down. I would have thought the exception would get placed in ProcessWire's own logs, but it doesn't. Any thoughts on how I can get the exceptions logged to help me debug?
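     One way to avoid losing the file and line in the meantime is to catch throwables inside the command itself and write them to ProcessWire's 'errors' log before re-throwing. The formatter below is a hypothetical helper of my own; the wire('log')->save() call in the usage comment is the standard WireLog API:

```php
<?php
// Hypothetical helper: render a throwable with class, message, file and line,
// which is exactly the detail that a bare one-line report drops.
function formatThrowable(\Throwable $e): string {
    return sprintf('%s: %s in %s:%d', get_class($e), $e->getMessage(), $e->getFile(), $e->getLine());
}

// Sketch of use inside a command's handle() method (not runnable standalone):
// try {
//     $this->doWork(); // hypothetical: your command's actual logic
// } catch(\Throwable $e) {
//     wire('log')->save('errors', formatThrowable($e)); // goes to the 'errors' log
//     throw $e; // re-throw so RockShell's own reporting stays intact
// }
```

     Appending $e->getTraceAsString() to the saved message would capture the full stack trace as well.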
  22. I need to set up monitoring on my servers to detect this. I didn't realize DigitalOcean has a metrics tool. Going to set that up now. I will look into MariaDB using too much memory in general though. Thanks for the tip.
  23. Just a heads up for anyone using DigitalOcean and sending out email over SMTP on port 587: DigitalOcean recently started blocking this port on "new" droplets. I put "new" in quotes because that's not accurate: I have a droplet from months before their announced change and it still got blocked. I didn't realize this until one of my clients brought it up. Good job DO! /s I use WireMailSmtp and power it with Mailgun's SMTP credentials on port 587. I've been doing it this way for a long time, although using Mailgun's direct API (which WireMailgun uses) is preferable and would avoid this issue; I will start taking that approach with new and existing sites. Using SMTP is convenient, however. Anyway, I'm not the only one complaining: https://www.digitalocean.com/community/questions/smtp-587-ports-is-closed An easy fix, for now at least, is to use port 2525, which is not blocked and which Mailgun also supports: https://www.mailgun.com/blog/email/which-smtp-port-understanding-ports-25-465-587/
  24. They are droplets. One droplet is Ubuntu 22.04 and the other is 24.04. I don't think the droplets themselves have the same hardware specs. There wasn't a similarity between the sites that would possibly explain it, at least not an obvious one.
  25. (I'm putting this in the Dev Talk forum since I don't think this is ProcessWire specific.) I have a ProcessWire site on a DigitalOcean server using Ubuntu 24.04 with MariaDB and PHP. This site doesn't receive much traffic. On 4/1/2025 MariaDB crashed. Here's what systemctl said:

      > systemctl status mariadb.service
      × mariadb.service - MariaDB 10.11.11 database server
           Loaded: loaded (/usr/lib/systemd/system/mariadb.service; enabled; preset: enabled)
           Active: failed (Result: exit-code) since Wed 2025-04-02 06:31:12 UTC; 9s ago
         Duration: 1month 5d 13h 27min 44.368s
             Docs: man:mariadbd(8)
                   https://mariadb.com/kb/en/library/systemd/
          Process: 752113 ExecStartPre=/usr/bin/install -m 755 -o mysql -g root -d /var/run/mysqld (code=exited, status=0/SUCCESS)
          Process: 752115 ExecStartPre=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS)
          Process: 752118 ExecStartPre=/bin/sh -c [ ! -e /usr/bin/galera_recovery ] && VAR= || VAR=`/usr/bin/galera_recovery`; [ $? -eq 0 ] && systemctl set->
          Process: 752178 ExecStart=/usr/sbin/mariadbd $MYSQLD_OPTS $_WSREP_NEW_CLUSTER $_WSREP_START_POSITION (code=exited, status=1/FAILURE)
         Main PID: 752178 (code=exited, status=1/FAILURE)
           Status: "MariaDB server is down"
              CPU: 178ms

      Apr 02 06:31:12 myserver1 mariadbd[752178]: 2025-04-02 6:31:12 0 [ERROR] InnoDB: Failed to read log at 73992704: I/O error
      Apr 02 06:31:12 myserver1 mariadbd[752178]: 2025-04-02 6:31:12 0 [ERROR] InnoDB: Plugin initialization aborted with error Generic error
      Apr 02 06:31:12 myserver1 mariadbd[752178]: 2025-04-02 6:31:12 0 [Note] InnoDB: Starting shutdown...
      Apr 02 06:31:12 myserver1 mariadbd[752178]: 2025-04-02 6:31:12 0 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
      Apr 02 06:31:12 myserver1 mariadbd[752178]: 2025-04-02 6:31:12 0 [Note] Plugin 'FEEDBACK' is disabled.
      Apr 02 06:31:12 myserver1 mariadbd[752178]: 2025-04-02 6:31:12 0 [ERROR] Unknown/unsupported storage engine: InnoDB
      Apr 02 06:31:12 myserver1 mariadbd[752178]: 2025-04-02 6:31:12 0 [ERROR] Aborting
      Apr 02 06:31:12 myserver1 systemd[1]: mariadb.service: Main process exited, code=exited, status=1/FAILURE
      Apr 02 06:31:12 myserver1 systemd[1]: mariadb.service: Failed with result 'exit-code'.
      Apr 02 06:31:12 myserver1 systemd[1]: Failed to start mariadb.service - MariaDB 10.11.11 database server.

      Not sure why that happened, and this is a recently built ProcessWire site (no legacy cruft). I thought simply restarting MariaDB would fix it, but it didn't. Not even restarting the server fixed it. After Googling and ChatGPTing, this post (and ChatGPT) recommended removing /var/lib/mysql/ib_logfile0 (and ib_logfile1), then restarting. The replies to that post suggest that while it works, it's dangerous for reasons related to potentially losing data. Given that this site is not mission critical, that wasn't a concern, and the suggested approach worked. Ok.

      ---

      Now today, on a totally different site and server (still DigitalOcean) with the same Ubuntu version and LAMP stack, MariaDB crashed in the same way. I ran the same "systemctl status mariadb.service" command on that server and the output was basically identical. Restarting MariaDB worked in this case, so it was much easier to fix. Again, this site doesn't receive much traffic.

      ---

      I'm wondering what's going on here? While these sites do get the typical hack-bots trying to exploit WordPress or insecure env files and things like that, I don't think that's the story here, even though they do tend to hammer the server. It has to be one of the following reasons, logically speaking:

      • a bug with MariaDB and/or Ubuntu?
      • a bug with ProcessWire?
      • an issue with DigitalOcean's underlying hardware? Unlikely.
      • overloading of server resources by hackbots or even AI crawlers? I looked at my logs and that didn't seem immediately plausible.
      • something code-related on my end? Possible, because they are both running my home-grown ecommerce module.

      Anyone else have issues with MySQL or MariaDB crashing for odd reasons?