Everything posted by wbmnfktr

  1. What about the core module Page Path History? It keeps track of page URLs when they change and redirects from the old URLs to the new ones. ProcessWire is quite good at managing old URLs, at least for those created within your ProcessWire instance; a migration is something entirely different. There is even a management area under the Settings tab of each page. With this module there is no real need to add redirects manually anymore, unless you need redirects for URLs that never existed in this instance. (See the short sketch below for checking that it's installed.)
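     Something along these lines (an untested sketch; newer ProcessWire versions may already have the module installed by default) should check for the module and install it when missing:

         <?php namespace ProcessWire;
         // install the core Page Path History module when it's missing
         $modules = wire('modules');
         if(!$modules->isInstalled('PagePathHistory')) {
             $modules->install('PagePathHistory'); // ships with the ProcessWire core
         }

     Once installed, old URLs of renamed or moved pages redirect automatically and can be reviewed per page under Settings.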
  2. I received the markup but couldn't access any data somehow. Using $rockfrontend every time wouldn't save me that much time, so I tried using it the .latte way. That did the trick. Thanks for the hint!
  3. Can someone please give me a hint on how to use RockFrontend with Latte in combination with RepeaterMatrix? I just can't find a way to show the content of my blocks (RepeaterMatrix types).

         {* home.latte *}
         <ul n:if="$page->blocks">
             {foreach $page->blocks as $block}
                 <li>
                     {include '../blocks/' . $block->type . '.latte'}
                 </li>
             {/foreach}
         </ul>

         {* oneOfTheBlocks.latte *}
         <div>
             {* NOPE *}
             {$block->headline}
         </div>
         <div>
             {* NOPE *}
             {$page->headline}
         </div>
  4. What exactly does this mean? In case this means "5 people updating pages and the host goes down"... yeah, you really should look for a more reliable host. In case this means "we have 50-100 users visiting the site to look up the news"... you might want to look into optimizing the page or invest in ProCache. Can you give us a number (ballpark ranges are fine) of the visitors and page impressions we are talking about here? Low traffic can mean anything between 100/month and 50k/month.
  5. Just stumbled over this... https://www.slant.co/topics/5409/~php-cms
  6. @bernhard you might want to check this one out as it works perfectly well with the free ChatGPT 3.5 (GPT-4 is acting weird with this prompt). https://flowgpt.com/prompt/qGQmSnF-MDsfhDhpfzZEM
     - Copy the whole prompt
     - Send it and wait a moment
     - Explain what you want to accomplish
     - Answer questions or add details if needed
     - Ask GPT to write the code
     You have to get used to it, as it's quite verbose sometimes.
  7. I was just looking for the Page Autosave and Live Preview module to re-install it in a project, yet I couldn't find it anymore. I probably missed something somewhere. Right?
  8. Please check your installed and active extensions - from password managers to VPN and privacy-related ones. Most of them are off/disabled in incognito mode, so they don't interfere there.
  9. 90s and early 2000s House/Dance/Techno Remixes #oldschool
  10. Do you have more details on that setup? I have an old machine running that kind of dev setup but never noticed that issue at all. Which ProcessWire version are you running?
  11. Based on my recent post and your answers in Module: ProcessWire Core Upgrade, I decided to stop hijacking that thread and open this one instead. @bernhard asked some very good questions there, and here are some of my answers.

      First and foremost: I keep all of my "Automation Hooks" in /site/ready.php, and they are not present in all of my projects. I mostly use them in larger long-term projects and in projects I have put significant effort into - see the samples below. I considered building a module for each project so that I could keep it around, fork it, alter it... that kind of dev approach. But that didn't work out as expected, so everything moved back to ready.php, which actually works best for me. One file in plain sight, right next to everything else. For these tasks I have my own collection of snippets: I grab the appropriate snippet, tweak it, drop it in, test it, and go from there. No overly complicated workflow at all.

      But why should I care about module changes (see the thread above) if I'm not going to update them right away? Because I'm curious about what's going on in each project. I can see which modules are installed, which are in use, and which could be upgraded the next time there is a release. I rarely update modules or anything else outside of my development environment - I learned my lessons here.

      This isn't something I'd use in a project where 99% of the code consists of echo/foreach, like some of the smaller sites I built. I honestly don't care about those, because their code base and modules will almost certainly never be touched again. Projects like the following are a totally different thing - some of these "Automation Hooks" are in place there. To give you an idea of why this makes my life easier, here is what I "automate" across my projects:

      - Checking for news updates (weekly): No updates in the last 2 weeks? Remind the client to say something on their website and suggest doing so on social media, Google Business, and everywhere else too.
      - Checking event count (weekly): Only 3 events left? Tell the client to check this and update the event pages, or at least post something in the news to tell visitors what's going on.
      - Checking newsletters (weekly): No newsletter in the last 2 weeks? You get the deal - the client is notified to either send a newsletter or hand over content so someone else can do it.
      - Checking lunch deals (daily): Same as above. A restaurant runs out of lunch deals and is notified via email right away.
      - Archiving old stuff (weekly): Short-term content, like lunch deals, doesn't have to stay around until the end of days. Archive it under another branch of pages, or delete/trash it. Out of sight, out of mind.
      - Checking for TODOs (weekly): This is a special one, as I sometimes add a notes field to content pages where I write down things I need to do on that specific page. I look for all pages with content in this field, put everything into a mail, and go from there. It's something like a TODO app, but directly in the project - no Jira, Asana, or Todoist needed.

      There are plenty of other options available. In my case it varies depending on the specialty, the project's maintenance budget, or the overall scope of my tasks. I typically use these "Automation Hooks" to learn more about a project and stay informed. Sometimes it's for the client, but it's all automated. A minimal sketch of one such hook follows below.

      There have been projects performing comparable automated tasks for over a year now, and the customer is thrilled to receive an email every now and then with a list of things he should do. WINNER WINNER! I get paid to remind him, and he gets things done. You could put anything you want in those: file counts, database size, login errors, PHP version, PW version, and so on. Ping @flydev @monollonom @horst
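      To make this more concrete, here is a minimal sketch of one such hook in /site/ready.php - the weekly "no news in 2 weeks" reminder. The "news" template name and the email addresses are placeholders, and it assumes LazyCron is installed:

         <?php namespace ProcessWire;
         // /site/ready.php - remind the client when no news has been posted for 2 weeks
         $wire->addHookAfter('LazyCron::everyWeek', function(HookEvent $event) {
             // newest page using the (placeholder) "news" template
             $latest = $event->wire('pages')->get('template=news, sort=-created');
             if($latest->id && $latest->created > strtotime('-2 weeks')) return; // recent news exists
             wireMail()
                 ->to('client@example.com')      // placeholder recipient
                 ->from('noreply@example.com')
                 ->subject('Time for a news update on ' . $event->wire('config')->httpHost)
                 ->body("There has been no news post in the last two weeks.\nMaybe publish something on the website and share it on social media and Google Business as well.")
                 ->send();
         });

      The other checks (event count, newsletters, lunch deals, archiving, TODOs) follow the same pattern: a LazyCron interval, a selector, and a wireMail() notification.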
  12. This is brilliant! Just minor tweaks. You already did 99.9% of the job. Awesome! Thank you.

         wire()->addHookAfter("LazyCron::every4Weeks", function(HookEvent $event) {
             $checker = $event->modules->get("ProcessWireUpgradeCheck");
             if(!$checker) return;
             $upgrades = $checker->getModuleVersions(true); // we only want modules with new versions
             if(!count($upgrades)) return;
             $subject = "There are " . count($upgrades) . " modules to update";
             $body = "Hi!\n\nAn upgrade is available for these modules on " . wire('config')->httpHost . ":\n\n";
             foreach($upgrades as $name => $info) {
                 $body .= "- $name ($info[remote])\n";
             }
             $body .= "\nHead to " . $event->pages->get("process=ProcessWireUpgrade")->httpUrl . " to upgrade."; // not sure if this `get` would work?
             $mail = wireMail();
             $mail->from("wwwuser@example.com")
                 ->to("admin@example.com")
                 ->subject($subject)
                 ->body($body)
                 ->send();
         });

      Just added this to an existing site. Works perfectly fine.

      Follow-up: added a public recipe for all: https://processwire.recipes/recipes/automate-module-upgrade-check/
  13. I am working on, and thinking about, automating quite a few tasks in ProcessWire and just stumbled upon the thought of adding updates to my list of possible tasks. Does anyone know whether it's possible to hook LazyCron into ProcessWireUpgrades (or hook into it directly) to look for updates with this module - including updating the directory listings - and go from there, for example by sending an email with all available updates/changes? As far as I can tell, the answer is probably: NO. Any ideas or thoughts? First of all, my main goal is to receive an email once in a while for each instance in which updates are available. No automated updates (I am not an adrenaline enthusiast)! Oh, and yes... please feel free to "steal" this idea for a module if you like, or add a recipe for it.
  14. Kosheen... almost an Oldie/Classic now.
  15. @bernhard so... I just installed a new instance of Umami on Railway and locally, yet the newest version doesn't work with your module anymore - for now. Same issue as before. I found something regarding server headers, yet... that didn't fix it. I might look into it in the next few days when I find time for it.
  16. Did you enable that feature? https://umami.is/docs/enable-share-url If so... I'm not sure what the issue could be. Are all SSL certificates valid and is everything publicly available? Give Railway a try. https://umami.is/docs/running-on-railway
  17. He is so right with this! A few days ago I saw someone mentioning ProcessWire in a comment under a WordPress-related ad on Instagram.
  18. Great talk and a very good comparison with Laravel and Symfony. I wish I could have seen the faces, or read their thoughts, the moment you said... See here: https://youtu.be/ncS36UqaBvc?t=400
  19. Never heard of StackPathCDN but... THANKS for this module!
  20. Yet again... this sounds super AWESOME!
  21. Wow... is all I can say for the moment. What amount of traffic or hits/second are you expecting for that kind of setup? I built and ran pretty cheap and simple setups that handled up to about 30-50k hits*/day without noticeable issues - ok, those sites were ProCached and running behind the Cloudflare CDN (free tier), yet... it worked out. They probably could have handled even more. None of my projects here scale horizontally, vertically, or in any other direction compared to your setup. It's not within your league of setups by any measure - but here is how I built something back in the day that scaled very well (see the domain-sharding sketch after this post):

      - JS files came from sub[1-3].domain.tld; the absolutely necessary parts were inlined; custom JS pulled in via file_get_contents came from external sources
      - CSS files came from sub[1-3].domain.tld; almost all (critical) CSS was inlined; custom CSS pulled in via file_get_contents came from external sources
      - IMGs came from assets[1-3].domain.tld
      - Cloudflare took care of GZIP compression and caching the output (not sure about Brotli)
      - ProCache took care of the heavy load before anything else, as 95% of the site(s) were cached (pre-cached by running a site scraper after each release) with a very long lifetime
      - Asset and file handling were kind of static and strict, without many options for custom solutions (which wasn't really necessary for those sites), as the overall page setups were minimal and simple (blog style, minimal differences)
      - Files like JS, CSS, and IMGs came from other services, not my host; actually, everything on a subdomain came from other services, because the hosting was too cheap to handle lots of requests - I used GitHub, Zeit (which is Vercel now, I guess), and some other services I can't remember for that

      It was a bl**dy hell to make that work back then (BUT I had to save money I didn't have) - those were also some of my very first real projects with ProcessWire (among my first 10 public projects ever, and most of them were my own). Nowadays that setup would probably still be annoying in some parts, yet more feasible and easier to handle, with far better results. My issues back then were limited database and webserver connections (those went over the limit pretty fast) at my hosting companies (HostN*n, Dream***, Host***, Blue***, A2***, and such - super cheap), so I split all assets out to other services and served them via subdomains. In the very early days I paid something like 0.99 USD/month for those sites, later 2.99 USD and even later 8.99 USD. They only became faster and faster. About a year before selling/shutting down those projects I paid about 60 USD/month/project. STEEP! Still, almost the same setups could easily handle more than double or triple the hits*/day nowadays, with far better pagespeed results than ever before. To this day I'm happy with this kind of setup for my projects. The moment a project reaches at least 50k+ hits*/day I return to it, but with today's methods and services. What I use nowadays (for whatever reason - you will find out):

      - webgo
      - IONOS
      - Hetzner
      - Plusline Server
      - Netlify
      - Vercel
      - Cloudflare Pages
      - Cloudflare CDN
      - Cloudinary
      - PlanetScale
      - Railway
      - Supabase

      * real hits/users/sessions - no fake requests
      ** paid plans for super-high-traffic sites, otherwise free tiers
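      Just to illustrate the domain-sharding part of that old setup, a rough sketch in plain PHP: pick a subdomain deterministically per asset so the same file always maps to the same host and browser caching stays effective. The assets[1-3].domain.tld hostnames and the shard count are placeholders.

         <?php namespace ProcessWire;
         // map an asset path to one of assets1.domain.tld ... assets3.domain.tld
         function shardAssetUrl(string $path, int $shards = 3): string {
             $n = (abs(crc32($path)) % $shards) + 1; // deterministic shard per path
             return "https://assets{$n}.domain.tld{$path}";
         }
         // usage in a template file, e.g.:
         // echo '<img src="' . shardAssetUrl($page->image->url) . '" alt="">';

      With HTTP/2 and a CDN in front, this kind of sharding matters far less today than it did back then.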
  22. Could you provide a sample image? And... could you please export the field so I can import it here and compare it? My testing with a 1 KB JPG worked perfectly fine in PW 3.0.210. Additional note: please open the image you are trying to upload and save/export it again as JPG or PNG. I bet there is something wrong with the file itself.
  23. Where does TinyMCE come from this time? You said before that you are still using CKEditor (which is fine). However, PagePathHistory is one possible cause, yet it shouldn't be a problem here - you would know about it from the Settings tab. I can't really tell where to look now without knowing or seeing anything of the setup or code. My next steps would be exporting a site profile and creating a totally new instance, maybe even on another setup, and then trying Duplicator and doing the same. I have faced a ton of weird issues in the past, and 95% of the time it was either my fault or a weird setup/hosting issue.
  24. Ok... so, to give you a small update on this and the reason why I thought about this feature in the first place: in my case, this time, the "client"* really needed to be sure that all enabled/installed modules are core/Ryan modules. It's not about Ryan himself, more about 3rd-party issues. My "client"* needs a solution for a closed environment. All software, even for websites - or in this case something similar to an intranet - needs to be audited. A website setup will be worked out in the future (hopefully), so every partner/affiliate should be able to clone a repo from GitHub (based on a master repo with all modules, settings, and defaults audited - a site profile, ready to use, to be clear here). They trust ProcessWire and believe in Ryan, and therefore only wanted to go with his modules in the first place; 3rd-party modules are fine, BUT they needed to know about them prior to installing them ANYWHERE, or, when in the testing stage, needed to know which ones are "3rd party" so they can go into an audit.

      As of now we are already a step further, and the "client"* has looked into more and more - maybe already most - of the modules we could need, with his own DEV team. To make it clear: they looked for things in the modules like base64 encodings, external resources, external scripts, hidden Google Analytics (and similar) calls, and what not. Their DEV team is awesome... yes, they dug through a lot and were happy so far - even with 3rd-party/community modules. There were some issues they already fixed in some modules - I can't say which, as I really don't know. But I got an "OK! Let's go." this week. A sketch of how non-core modules could be listed for such an audit follows below.

      * THE CLIENT: The client is a new company of an old friend of mine whom I worked with for 10+ years, and they have to audit any software they use for their clients. They used Dr***l, Ty**3, and some custom solutions in the last few years and weren't that happy. A dummy/MVP setup of mine brought them to ProcessWire with "huge success" (their words). But to be able to release it to more of their real clients, they need to be able to assure some things... which brings us back to my initial thought and the question we are working on right now. To make it even more transparent: it's within the financial/automobile/insurance niche, so they are super strict with their contractors and partners (my client/friend).

      TL;DR: ProcessWire in a niche software solution, ready to be cloned from GitHub, but with audited modules and without any [shady-ish] scripts or whatever.
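      As a follow-up, a small sketch of how the non-core modules could be listed for such an audit. It assumes the verbose module info exposes a 'core' flag - verify that against your ProcessWire version before relying on it:

         <?php namespace ProcessWire;
         // list installed 3rd-party (non-core) modules for an audit report
         $modules = wire('modules');
         foreach($modules as $module) {
             $info = $modules->getModuleInfoVerbose($module);
             if(!empty($info['core'])) continue; // skip core/Ryan modules
             echo "{$info['name']} v{$info['version']}\n";
         }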