Leaderboard
Popular Content
Showing content with the highest reputation since 03/13/2026 in all areas
-
I've been working on sites where the standard sitemap setup started showing cracks: XML generated on every request eating memory, no way to regenerate automatically without writing custom hooks, and no visibility into what actually ended up in the file. So I built Sitemap — a module that generates static XML files to disk, splits output by template name, and handles the full lifecycle from generation to search engine notification.

What it does differently:

- Writes files to disk instead of rendering in memory — no RAM spike on large sites
- Pages are fetched in chunks of 500 with uncacheAll(), so memory stays flat regardless of page count (a sketch of this pattern follows below)
- Each template gets its own named file: sitemap-product.xml, sitemap-blog.xml, etc. — the index at sitemap.xml references them all
- Auto-regeneration via LazyCron with a configurable interval; the LazyCron hook slot is chosen dynamically to match what you configured (every hour, every 6 hours, daily, etc.) rather than always using everyHour
- A needs_regen flag is set whenever a page is saved, trashed, or deleted — visible in the admin dashboard
- IndexNow support: after generation, all URLs are submitted to api.indexnow.org in batches of 10,000
- Sitemap: directive written directly to the physical robots.txt on save and on generate
- Lock file prevents concurrent generation
- Admin dashboard at Setup > Sitemap showing file count, URL count, total size, and last generated time
- Settings stored in a dedicated DB table (sitemap_settings, name/value, MEDIUMTEXT) rather than the module's data field — avoids the serialized config size limit when template settings grow large

Image sitemap extension and hreflang alternate links for multilanguage sites are both supported.

GitHub: https://github.com/mxmsmnv/Sitemap

Still v1.0.0, so feedback is very welcome — especially from anyone running it on a site with 10k+ pages.

15 points
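For anyone curious about the chunked-fetching pattern mentioned above, here is a minimal sketch of how it can be done with the ProcessWire API. The selector, chunk size, and the echoed output are illustrative assumptions, not the module's actual code.

```php
<?php namespace ProcessWire;

// Sketch: iterate over a large page set in fixed-size chunks, releasing
// memory after each batch. Selector and output are illustrative only.
$chunkSize = 500;
$start = 0;

do {
    // Load only the next batch of pages
    $batch = $pages->find("template=product, start=$start, limit=$chunkSize");

    foreach($batch as $p) {
        // Write one <url> entry per page (simplified)
        echo "<url><loc>" . $p->httpUrl . "</loc></url>\n";
    }

    $count = $batch->count();
    $start += $chunkSize;

    // Clear the in-memory page cache so memory stays flat
    $pages->uncacheAll();

} while($count === $chunkSize);
```

The key point is the uncacheAll() call after each batch: without it, ProcessWire keeps every loaded page in memory, and a large site would eventually exhaust PHP's memory limit.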
-
Hey everyone, I've been building an e-commerce project and needed to show personalized content based on visitor location — shipping availability, regional pricing, state-level compliance notices. Nothing like this existed in the PW ecosystem, so I built it.

What it does

Detects country, region and city from the visitor IP using MaxMind GeoLite2 databases (free). Result is cached in session. Exposes $geoip as a wire variable — available in every template automatically, just like $page or $user.

```php
// That's it. No setup, no require, just use it.
if ($geoip->inCountry('US')) {
    echo $page->us_content;
}
```

API

```php
// Boolean checks — accept single value or array
$geoip->inCountry('US')
$geoip->inCountry(['US', 'CA', 'GB'])
$geoip->inRegion('GA')                 // ISO 3166-2 subdivision code
$geoip->inRegion(['GA', 'NJ', 'NY'])
$geoip->inCity('Atlanta')

// Inline conditional with optional fallback
echo $geoip->showIf('countryCode', 'US', $page->us_block, $page->global_block);
echo $geoip->showIf('regionCode', ['GA', 'NJ', 'NY'], $page->northeast_promo);
echo $geoip->showIf('continent', 'Europe', $page->eu_gdpr_notice);

// Single field
$geoip->getField('countryCode')  // "US"
$geoip->getField('regionCode')   // "GA"
$geoip->getField('city')         // "Atlanta"
$geoip->getField('timezone')     // "America/New_York"

// Full array
$geo = $geoip->detect();
// ip, country, countryCode, continent, region, regionCode,
// city, zip, lat, lon, timezone, corrected, status
```

Combining conditions

```php
// Country + region
if ($geoip->inCountry('US') && $geoip->inRegion('CA')) {
    echo $page->california_prop65_notice;
}

// Logged-in + location
if ($user->isLoggedIn() && $geoip->inCountry('US')) {
    echo $page->us_member_block;
}

// Time-of-day in visitor's timezone
$tz = $geoip->getField('timezone') ?: 'UTC';
$hour = (int) (new DateTime('now', new DateTimeZone($tz)))->format('H');
if ($geoip->inCountry('US') && $hour >= 9 && $hour < 17) {
    echo 'Our US office is open right now.';
}

// Pre-select shipping dropdown (Vivino-style)
$selectedCountry = $geoip->getField('countryCode') ?: 'US';
$selectedState = $geoip->getField('regionCode') ?: '';
```

User location correction

Frontend widget lets visitors fix an incorrectly detected location. Stored per-IP in the DB, applied on subsequent requests. You can also build your own UI — just POST to /?geoip_action=correct with country_code, region_code, city.

Setup

The Composer package and databases live in site/assets/GeoIP/ — not in the module directory, so they survive updates.

```
cd /path/to/site/assets/GeoIP/ && composer require geoip2/geoip2
```

Then drop GeoLite2-City.mmdb (or GeoLite2-Country.mmdb) in the same folder. Free download from maxmind.com. The module config page shows the exact path and command for your server.

Admin panel

Setup → GeoIP — lookup log with country/region/city, corrections manager, manual IP lookup tool.

GitHub: https://github.com/mxmsmnv/GeoIP
License: MIT
Requires: ProcessWire 3.0.200+, PHP 8.2+

Feedback welcome — especially if you're doing anything geo-based with ProcessWire.

Maxim

11 points
-
The toughest challenge so far? Being our own client. Actually, the website we launched back then was just a placeholder, and the plan was to quickly replace it with a proper portfolio website. But as is often the case, it ended up taking a little longer than expected. A year later, we've finally done it: our new website is live! https://konkat.studio/

The goal of the new website is to showcase our work and better communicate our services. The site is bilingual and was built using ProcessWire and PageGrid. More on that later. In addition to the website, we've also evolved our visual identity and logo. KONKAT (from concatenation) stands for linking individual elements into a functional whole. Our new branding makes this connection visible. In our case, we combine strategy, design, and technology into a unified process. The logo mark communicates this as well; as most of you probably know, the += operator in JavaScript joins elements and assigns the result.

It took us some iterations to get the design right, but once the design was done, development was pretty straightforward. Most of the time was spent preparing the content for the projects, and that is also where PageGrid was super useful, since it allowed us to design the layout and content of each project individually.

Backend view: Managing project content and layouts with PageGrid.

PageGrid also significantly sped up development, as we built all other pages using only its core blocks. For the projects overview, for instance, we used the datalist block to automatically generate the listing from our project pages, working perfectly out of the box without any custom logic. We also added some custom code where it made sense, e.g. the scroll animation on the homepage was just a bit easier to achieve with custom code (it uses native CSS sticky).

Backend view: Using PageGrid's inline editing to update some text on the English version of our services page.

Another great thing is that PageGrid takes care of lazy loading images and videos (using the famous lazysizes js plugin) and caches its content automatically. As a result, we got a 100 on the Google Lighthouse test on desktop and 99 on mobile without any extra optimizations (we are not using Markup Cache or ProCache for this site).

Backend view: Editing a thumbnail on the homepage.

If you have any further questions regarding our workflow or process, feel free to ask. I will do my best to answer them. Also, please let us know if you find any bugs; since the website is brand new, there are probably some we haven't caught yet! We also welcome any feedback you may have.

Best, Jan & Diogo (KONKAT)

9 points
-
There may be a time when you need to create a page reference field that uses the Select inputfield to select repeater pages. Let's say I have a repeater field called "order_line_items" and I want to create a page reference field called "order_line_item" that allows me to select a repeater item (which is a page) of the "order_line_items" repeater field.

Repeater pages are a bit different from regular pages in that their "parent" is a container admin page associated with the page in which it exists (dig into /admin/repeaters/order_line_items/ in your page tree to see what I mean). So when you are configuring your page reference field, you can't really choose a Parent. However, when configuring your field, your instinct would be to choose the Template of "repeater_order_line_items". Then, because you need extra precision in what pages are actually available for selection (rather than all of them across all pages), your instinct will be to implement custom PHP code:

```php
$wire->addHookAfter('InputfieldPage::getSelectablePages', function($event) {
    if($event->object->hasField == 'order_line_item') {
        $event->return = $event->pages->find('your selector here');
    }
});
```

The problem with that approach is that even though you have defined the custom PHP code and the select field correctly shows the selectable repeater pages, behind the scenes ProcessWire has still loaded EVERY SINGLE REPEATER page that has the "repeater_order_line_items" template (you can see this in TracyDebugger's pages-loaded list)! Your site will definitely be slower as a result, dramatically so if you have thousands or tens of thousands of repeater pages of that template.

I hit this issue years ago (2018) and I thought it was a bug. I discussed it with Ryan and it's technically not a bug, but kind of the way ProcessWire works, which is beyond this tutorial. While you can circumvent this using the PageAutocomplete field, I don't like the ergonomics of that field in certain situations. I want the good old select field.

The solution is to NOT select anything for the "Template" when configuring your field. So in my example, I chose "repeater_order_line_items", but instead it should be left blank. Now the field will rely only on the hook code, and all the unnecessary page loads are eliminated. (A hypothetical example of what such a selector might look like follows below.)

9 points
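To make the hook concrete, here is a hypothetical selector that restricts the selectable repeater items to those belonging to the page being edited. It relies on the fact that repeater items live under an admin container page named "for-page-{id}"; the exact selector you need will depend on your own requirements, so treat this as a sketch rather than the tutorial's prescribed solution.

```php
<?php namespace ProcessWire;

// Hypothetical sketch: limit selectable repeater items to the ones owned
// by the page currently being edited.
$wire->addHookAfter('InputfieldPage::getSelectablePages', function(HookEvent $event) {
    if($event->object->hasField != 'order_line_item') return;

    // The page being edited is passed as the first argument to the method
    $editedPage = $event->arguments(0);

    // Repeater items are children of an admin page named "for-page-{id}";
    // include=hidden because repeater pages are typically hidden
    $event->return = $event->pages->find(
        "template=repeater_order_line_items, parent.name=for-page-{$editedPage->id}, include=hidden"
    );
});
```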
-
I made a small module called ProcessSiteSettings and thought I'd share it here in case it's useful to someone.

ProcessSiteSettings is a lightweight ProcessWire module for storing and editing global site values in one place. There are already similar modules available, so this is simply another free option with a focus on quick setup, usability, and helpful template snippets.

It adds a Settings item to the ProcessWire admin menu and creates one central page for global site values like:

- copyright text
- footer content
- contact info
- social links
- SEO / AI summary text
- images
- repeaters
- Repeater Matrix
- page references
- other regular PW fields

Of course, add your own fields of choice to it, or delete the ones you don't need!

It uses a normal template + page approach instead of a fake config form, so it works much more naturally with ProcessWire fields. It also includes a small helper box on the settings edit screen that shows ready-to-use template snippets for each field. It tries to generate smarter examples depending on the field type, including more complex fields like images, repeaters, matrix items, multi-value fields, etc.

Settings menu: editing values, adding/deleting custom fields, and a shortcut to Fields for quick creation of new ones. A quick helper for showing everything on the frontend!

Example usage in templates:

```php
<?= siteSettings('ss_copyright_text'); ?>
<?= siteSettings()->ss_footer_text; ?>
<?= $siteSettings->ss_contact_email; ?>
```

The admin is in English and the module is translation-ready. Just install the module, and that's it! Just sharing it here in case someone finds it useful.

Download here: ProcessSiteSettings.zip

8 points
-
@adrian Thanks for bringing this up. It seemed like x-user came here to troll, with an expectation that ProcessWire should ignore and blacklist anything having to do with AI. That doesn't seem realistic. But it did make me wonder: are there any other CMS projects taking this approach? It seems unlikely. I imagine we're not too many years away from the point where a CMS project can't compete if it's not involved in the AI space in some way or another.

I also think that the AI changes are coming whether we like it or not. So we can either jump on and grow, and make things better, or get left behind, and perhaps get left without a job. If the only people using AI and voting with their wallets are those who dismiss environmental concerns, then there's no incentive for these companies to do better. X-user would make a greater difference to the world by being an AI user that cares and chooses companies based on their values. And I think that's what we all should do. Whereas abandoning anything having to do with AI does nothing to improve the direction of AI and seems a little like self-sabotage. In the future, and with users that care, there will be pressure on AI companies to do things right. For example, when they build that next data center, they will also build a giant solar array or wind farm to power it. Depending on coal and gas plants for electricity is not sustainable, and now it's more costly than solar. Coal and gas are EOL'd. It may be that the power demands of AI push us towards sustainable solutions faster than otherwise, and we need that as quickly as possible.

My opinion: We can't dismiss AI and complain. We have to participate and push for better solutions when there are opportunities to do so. If we sit out, there will be no such opportunities. The environmental problems were here long before AI. As I understand it, the root of it is power generation. The US (at least) is not solving the power generation problem in a way that can overcome the politics, corruption and outright stupidity. But I also think that it's very likely AI will be in some way responsible for the solutions to these problems. There are so many problems to solve that are bigger than any of us have answers for. And if there are solutions for these problems, I have no doubt they'll come with the help of AI in some fashion. From my perspective, blacklisting AI solves nothing; instead it abandons the problems and gives up.

6 points
-
Hello ProcessWire forums, I am sharing a new page I developed for cybersecurity and DevOps expert Julie Tsai. Built with ProcessWire, it includes use of ProModules FormBuilder for the contact form and ProFields for the soon-to-be-launched Blog section, where a Repeater Matrix controls the flow of content. Always enjoy working with this CMF and learning all that it's capable of … each time something new emerges.

https://julietsai.net/

5 points
-
Unlike Chrome's preload feature, it only fetches pages from the current site, and you can disable it:

```js
mu.init({
  prefetch: false,
});
```

4 points
-
MediaHub update....

TL;DR: MediaHub fields can now detect and import images used on the same page. A per-field import button scans other image fields on the same page and intelligently matches against the MediaHub library. It includes deduplication, perceptual hashing, and confidence badges. This saves significant manual effort when transitioning from standard image fields to MediaHub fields — i.e., you can run both in tandem while evaluating, or until you're ready to switch.

I made the jump from building MediaHub to implementing it on a real client site. I ran into issues, and those pain points led to new features. It's a different experience switching from testing with Instagram images to deploying on a 15+-year-old client ProcessWire site — a significant commercial site that can't afford downtime. It features blog posts, staff photos, services, and the usual content you'd expect on a professional services site. Having more on the line meant I approached it with greater scrutiny, taking things slowly — up to a point.

My approach:

1. Add a MediaHub field on every page beneath the existing images field.
2. Import each image individually (tedious, but reassuring).
3. Add a script that outputs the MediaHub image first, falling back to a standard image if the MH API had issues (a sketch of this fallback pattern follows below).
4. Apply data-type=mediahub to the HTML so I could quickly identify which images had yet to be ported.

Step 2 became tedious once I'd confirmed the core functionality was solid. I already have a global import function that scans a site and imports existing images. But I wanted something different for this workflow. If I were an agency porting an entire site, what would be the most useful feature? How would I migrate one page at a time and confirm it was working, rather than relying on the global import?

The answer was a localised import button on the MediaHub field itself. Pressing the import button scans existing image fields on the same page and opens a modal containing a list of images available to add to both MediaHub and the MediaHub field. It doesn't yet support Matrix pages, though it works correctly within a Matrix field.

The modal assesses whether each asset already exists in MediaHub. Avoiding duplicate images is a core principle of MediaHub, so getting this right mattered. It handles most cases correctly. The one gap: images with different crops are treated as separate images — technically accurate, but better crop detection would be more useful in practice. That's next on the list.

4 points
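For anyone attempting a similar gradual migration, here is a minimal sketch of the fallback pattern from step 3. The field names ('mediahub_image', 'images') and the assumption that both values expose a ->url property are my own illustrative assumptions; MediaHub's actual value type may differ.

```php
<?php namespace ProcessWire;

// Illustrative sketch only: render the MediaHub image when available,
// otherwise fall back to the first image of the standard images field.
function renderPageImage(Page $page): string {
    $img = $page->get('mediahub_image');

    if($img && $img->url) {
        // MediaHub image available: mark it so ported images are easy to spot
        return "<img src='{$img->url}' data-type='mediahub' alt=''>";
    }

    // Fall back to the standard image field
    $images = $page->get('images');
    $fallback = $images && count($images) ? $images->first() : null;

    return $fallback ? "<img src='{$fallback->url}' alt=''>" : '';
}

// Usage in a template file: echo renderPageImage($page);
```

The data-type attribute makes it trivial to scan rendered pages (or the DOM inspector) and see at a glance which images have already been ported.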
-
@mattgs Like Adrian, I also consider myself very environmentally conscious. I've not spent much time learning AI, in part because I thought it was problematic for a lot of reasons. But I don't think we're likely to stop these AI companies, so that's why I thought I should try things out with a company that seems to have more ethics than the others. Anthropic seems to have a mission for AI safety and sustainability. I hope it's legit. And as far as I can tell, the other companies don't, which I find concerning. But I'm also not as up-to-speed as you are on the issues you brought up, so I'll have to look closer as well as check out the video you mentioned (do you have a link?).

I'm also aware that a project like ProcessWire gets executed millions upon millions of times every month (or day?) throughout the world, and every execution consumes energy. So I've always been very interested in optimization and making ProcessWire use as little time and energy as possible to do its work. The updates that we've been focusing on here are aimed directly at that. So perhaps AI is using a little energy to find optimizations and bugs in PW, but that single brief code review session reduces the energy usage of every ProcessWire execution going forward. This is a case where AI is likely saving a lot more energy than it consumes, indirectly, by making ProcessWire use less energy. Some of the optimizations and bugs it's found have been there since the beginning, and likely would never have been identified otherwise.

4 points
-
I just came across this library and thought I'd share. Are any of you using this in your ProcessWire sites? It looks very similar to what I've read about HTMX, but I have yet to test either properly. I just dropped the following into the head of my portfolio site and now navigation is as smooth as butter! No page load flicker, and I didn't modify anything else. The only bug on my site was when I tried to load the landing page in another language, but that may be related to the URL segment, or the way in which PW organises translated pages? Not sure. I haven't learned enough about AJAX to know if this would interfere with existing contact forms without customisation (?). Curious to hear what you think, or if you're implementing this, HTMX or similar on your sites. Alpine AJAX looks like another interesting alternative.

```html
<!-- 1. uJS Script - Include the script -->
<script src="https://unpkg.com/@digicreon/mujs/dist/mu.min.js"></script>

<!-- 2. Initialize -->
<script>mu.init();</script>
```

https://mujs.org/

3 points
-
@ryan I have an active ProFields license, but I have no access to that board. Can you please provide up-to-date information on what we can expect? I need to tell a client about the state of draft/version management in ProcessWire, and I do not want to provide outdated information.

3 points
-
I saw it on HN last week and it caught my eye as well: https://news.ycombinator.com/item?id=47285876

I will be playing around with it at some point, as I love the hypermedia approach of doing things (big HTMX fan). I like that it can do what they call "patch modes" (what other libraries call "islands"?). HTMX can do that too, but it feels off with the "OOB" approach. That has always bothered me, but I think they are addressing that in HTMX v4. I've played with Alpine-Ajax and Datastar as well, but muJS seems like it best aligns with the way I think.

3 points
-
@mattgs This is a very friendly community as a whole. But to be fair, both of you guys posted kind of unfriendly messages. And both of you make good points too. There are some big challenges in the world right now, none of us are perfect, and we've all got to do the best we can, where we can. So I think it's good that you guys have these environmental concerns, and we all should, and it's good to communicate these, share and learn, while also being friendly.

3 points
-
@teppo Thanks, good to hear that the more I use it, the more I'll be blown away. I've been using AI ever since ChatGPT first came out, but primarily just for technical questions and such. For instance, a couple months ago Claude helped me figure out how to reduce static pressure in our HVAC system by rebuilding (DIY) the return plenum and filter rack, and it was super helpful. I posed the same questions to GPT and Gemini, but they weren't nearly as helpful. This week is the first that I've gotten into collaboration with the actual code. Adrian showed me all the things Claude Code had recommended for the PageFinder, and I found myself really liking what it had found and suggested... Seemed like we were on the same page, just like with the HVAC work.

The other thing is that I've found it a little overwhelming with all these models (GPT, Grok, Gemini, Claude, etc.), with big changes almost weekly, and wondering whether these companies were ethical and ones I'd even want to be putting money towards. Then I learned about why they created Anthropic in the first place, and last week heard how they were sticking to their ethics and wouldn't cross their red lines despite government pressure. Sounds like integrity to me, something that is hard to find with big companies. That opened my comfort level and clarified for me that Anthropic's Claude Code was a good place to dive deeper with this stuff.

3 points
-
Hey @gebeer

Yes, it's pretty wild and fun to watch sites being built by AI. Frightening too, obviously! I created an entire blog last week without touching the admin. Templates, fields, test content, pages etc. created in about 5 mins. And 4 minutes of that were typing my instructions. But you could always save it as a 'recipe' and reuse it.

No worries, I don't mind questions. 🙂 Yep, the JSON schema approach is a wrapper around the PW fields and templates API. It can create/update fields (type, label, description, inputfield settings) and templates (fields, family settings like allowChildren, urlSegments, etc.); I've sketched below roughly what such a wrapper can do via the core API. Re. roles/permissions or module install/uninstall, I didn't need those. My workflow is quite simple, thankfully, and doesn't involve clients updating their sites. I haven't used RockMigrations, but knowing Bernhard's reputation, I imagine RockMigrations is significantly more comprehensive in that regard.

Regarding the question about AppAPI vs HTTP API, I had to ask the robots about that one and the reply is comprehensive but possibly not what you were asking. See below, but you're probably already familiar with the AppAPI stuff. BTW the module isn't commercial, and anyone can try it. I want to clean up the repo before opening it up for wider use. Is the above any help to you? Did it answer your questions?

2 points
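For readers wondering what a JSON-schema-to-API wrapper like this might look like underneath, here is a minimal sketch using the public ProcessWire fields/templates API (run e.g. from site/ready.php, where the API variables are in scope). The schema shape and the field/template names are my own assumptions; the module's actual format isn't published in this thread.

```php
<?php namespace ProcessWire;

// Hypothetical schema shape; the module's real format may differ
$schema = json_decode('{
    "fields": [
        {"name": "post_body", "type": "FieldtypeTextarea", "label": "Body"}
    ],
    "templates": [
        {"name": "blog-post", "fields": ["title", "post_body"]}
    ]
}', true);

// Create (or fetch) each field via the core API
foreach($schema['fields'] as $def) {
    $field = $fields->get($def['name']) ?: new Field();
    $field->type = $modules->get($def['type']);
    $field->name = $def['name'];
    $field->label = $def['label'];
    $fields->save($field);
}

// Create each template with its own fieldgroup
foreach($schema['templates'] as $def) {
    $fg = $fieldgroups->get($def['name']) ?: new Fieldgroup();
    $fg->name = $def['name'];
    foreach($def['fields'] as $fieldName) $fg->add($fieldName);
    $fieldgroups->save($fg);

    $template = $templates->get($def['name']) ?: new Template();
    $template->name = $def['name'];
    $template->fieldgroup = $fg;
    $templates->save($template);
}
```

The appeal of the schema approach is that the same JSON document doubles as a reusable "recipe": feed it back in on another install and you get the same fields and templates.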
-
It looks really nice and congrats on the (re-)launch! A small bug I noticed, though, on the homepage with the scroll animation (using Firefox on macOS): konkat.mp4

2 points
-
Beautiful website. 👏

2 points
-
Did you all notice that hovering menu items on their website preloads the content? Not sure that's a great idea. It makes things feel snappy, but it makes me think of Chrome's preload feature and its privacy concerns: https://www.malwarebytes.com/blog/product/2026/02/chrome-preloading-could-be-leaking-your-data-and-causing-problems-in-browser-guard

2 points
-
Just released v1.4. Here's what has changed/was added in the last versions:

- 1.1: Added a companion Lister Pro (https://processwire.com/store/lister-pro/) action to translate pages using page actions.
- 1.2: Added backwards compatibility with PHP 8.1.
- 1.3: Bug fixes only.
- 1.4: Added support for the option to choose the current user's language as the source language, and made sourceLanguage and targetLanguage(s) hookable to allow altering them at runtime if needed (a hedged sketch of such a hook is below).

2 points
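Since the module's class name isn't given in this excerpt, the following is only a hypothetical illustration of how a hookable sourceLanguage method could be altered at runtime; substitute the module's real class name and check its docs for the actual hook signature.

```php
<?php namespace ProcessWire;

// Hypothetical sketch: 'AutoTranslate' stands in for the module's real
// class name, and the exact hook signature is an assumption based on the
// changelog above.
$wire->addHookAfter('AutoTranslate::sourceLanguage', function(HookEvent $event) {
    $languages = $event->wire()->languages;

    // Example: force the default language as the source at runtime
    $event->return = $languages->getDefault();
});
```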
-
I tested on a site with ~12,000 URLs and it seems to work fine 🙂

Right now, every user with page-edit permission can use the module, but there might be good reasons to restrict it to certain roles. Maybe you could add a "sitemap" permission to make this more granular? (A sketch of how that could look is below.)

2 points
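For reference, a custom permission like the suggested one can be declared in a module's getModuleInfo() and checked at runtime. This is a generic sketch of the standard ProcessWire pattern, not the Sitemap module's actual code; the class and permission description are placeholders.

```php
<?php namespace ProcessWire;

// Generic sketch of the standard custom-permission pattern for a Process
// module; names and strings are illustrative only.
class ProcessSitemapExample extends Process {

    public static function getModuleInfo() {
        return [
            'title' => 'Sitemap Example',
            'version' => 1,
            // Users need this permission (via a role) to access the Process page
            'permission' => 'sitemap',
            // Installing the module installs this permission automatically
            'permissions' => [
                'sitemap' => 'Generate and manage XML sitemaps',
            ],
        ];
    }

    public function ___execute() {
        // An explicit runtime check for finer-grained control
        if(!$this->wire()->user->hasPermission('sitemap')) {
            throw new WirePermissionException('No permission to manage the sitemap');
        }
        return 'Sitemap dashboard output here';
    }
}
```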
-
Update: v1.1.6 is now released with a configurable export path!

You can now set custom paths in the module settings, as you requested: Setup → Modules → Context → Configure → Export Path

For Junie AI integration: .junie/skills/docs

Important for CloudPanel/Nginx users: since .htaccess doesn't work on Nginx, I recommend using an absolute path outside the web root for maximum security: /home/lqrs/context-exports/

The module now detects absolute paths (starting with /) and uses them as-is (roughly the check sketched below).

2 points
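The absolute-path detection presumably boils down to something like the following; this is my guess at the logic, not the module's actual source.

```php
<?php namespace ProcessWire;

// Sketch: resolve the configured export path. An absolute path (leading '/')
// is used verbatim; anything else is treated as relative to the site root.
function resolveExportPath(string $configured, string $siteRoot): string {
    $path = trim($configured);
    if(str_starts_with($path, '/')) {
        return rtrim($path, '/') . '/';  // absolute: use as-is
    }
    return rtrim($siteRoot, '/') . '/' . trim($path, '/') . '/';
}

// e.g. resolveExportPath('/home/lqrs/context-exports/', $config->paths->root);
```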
-
I am sure many of you have seen @Ex-user's comments about AI data centres in the ProcessWire 3.0.257 – Core updates thread. It seems like we have lost them from the community, but I do think a Pub topic about the darker side of AI is worthwhile. I have watched the video that was posted, and here is another one worth a watch. I must admit that Claude Code is making me much more productive, so it's a strange situation to be in. I do worry about all of the environmental and human health concerns raised in that other video, and all of the societal ones in this video, so I think it's important that we are at least aware of these things when we talk about AI in ProcessWire. Let's keep the dialog respectful and productive.

2 points
-
Wish I had more time to put into this, but for now just a few random thoughts, sorry in advance for the long rant:

I, too, do see the issues that AI is causing (or at least some of them). But this train is not easy to stop. Programming is just one area it is affecting, but in this context I am personally leaning towards the conclusion that AI may well decimate the whole concept of humans writing code for a job. And if things continue to evolve at this pace, I don't think it is going to be a decades-long process. A few years ago I tried to create a module for ProcessWire from scratch using ChatGPT, and it was a miserable failure. Now Claude is at a stage where I don't think I can truly justify writing code myself from a productivity (or quality) point of view. AI has also made the devops part of my work quite different from what it used to be, and I see no evidence of things slowing down in the near future.

For us who work in IT, and more specifically programming / development, it seems to me that in the big picture there are a couple of options: get a new job that isn't (yet) as tightly coupled with AI, or keep up with the changes. Also, I wholeheartedly agree with a lot of what Ryan has written in this thread; it's quite a bit easier to influence things positively from the inside 🙂

There will no doubt be some cases where AI is not going to be as prominent, at least for a while. But it seems to me that those are either somewhat niche, or specialized cases. Gamers and the game industry, for example, have been pushing hard against using AI, which I completely understand. ... and of course I may be wrong, and this whole thing may come crashing down any moment. Predicting the future is not easy.

By the way, it would be interesting to hear about ways to make AI use less of a problem. A co-worker mentioned https://github.com/rtk-ai/rtk, which is a Rust tool that claims to reduce AI token consumption by as much as 60-90%. Someone correct me if I'm wrong, but e.g. cutting your token use to half should also cut your energy consumption to half, right? (For the record, I have not yet tested RTK properly, so can't say if it works that well.)

2 points
-
This week on the dev branch we've got several commits with various core improvements and fixes. @adrian has been using Claude Code to suggest core optimizations (focused mostly on the PageFinder) and so he sent the suggestions to me. (PageFinder is the brains behind the $pages->find() method, and many others.) I took the suggestions and coded them into our PageFinder, but didn't want to mess with what was already working well, so put them in a new class named PageFinder2, at least temporarily. If running the latest dev branch, you can enable PageFinder2 by adding the following to your /site/config.php:

```php
$config->PageFinder('version', 2);
```

The most significant changes are: using subqueries for subselectors rather than separate independent queries; reusing PageFinder instances (keeping a pool of typically 1-3 PageFinders rather than creating a new one for each $pages->find() operation); and lots of in_array() calls converted to isset() lookups, which should technically be faster (still the case in PHP 8? I'm not sure; see the sketch below). There were some other changes as well.

Theoretically these changes should make PageFinder even faster than it already is. I did quite a bit of testing and found that for the most part it performs the same as PageFinder v1. But then I came across a rather complex selector that translated to a much faster PageFinder operation, nearly twice as fast, and that convinced me it was worthwhile. While PageFinder v2 is not consistently faster than v1, there are some situations where it can be a lot faster. I'm not totally clear on what those situations are just yet, but I'll be doing more testing. In other situations it also can use a lot fewer queries, though that doesn't necessarily translate to a performance difference. But on the whole, all of Claude's suggestions were quite good, regardless of performance improvements.

I was pretty impressed with what Claude Code had suggested, so decided to install it on my computer too. I've found it's particularly good at finding bugs. I'll ask it to do a code review on a core file, and it always has good suggestions. It uses ProcessWire terminology too. For instance, it pointed me to an object that wasn't properly "wired to the ProcessWire instance", and that's something you'd only ever hear in ProcessWire land.

Claude Code also helped with improvements to our DatabaseQuery* classes, PagesVersions module, Wire base class, NullPage class, and minor updates to the PagesLoader* classes. I'm not having it write any code just yet, but am having it suggest where improvements can be made. I like to code. I asked it how it knew so much about ProcessWire, and it said that it stays up-to-date with the forums, the website, API docs, and GitHub repo.

Thanks to @adrian and @Jan V. for recommending it to me (Jan V. uses it to manage this webserver). I can see how it's going to be a big help to ProcessWire with its suggestions and ideas, and I'm already learning a lot from it. And if you get a chance to try the updated PageFinder, please let me know how it works for you. Thanks for reading and have a great weekend!

2 points
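To illustrate the in_array() to isset() change mentioned above: the trick is to store the allowed values as array keys once, so each membership test becomes a constant-time hash lookup instead of a linear scan. A generic sketch, not the actual PageFinder code:

```php
<?php

$op = '<='; // example value being tested

// Before: in_array() scans the list linearly, O(n) per call
$operators = ['=', '!=', '<', '>', '<=', '>='];
$isOperator = in_array($op, $operators, true);

// After: store values as keys once, then each membership test is a
// constant-time hash lookup via isset()
$operatorSet = ['=' => true, '!=' => true, '<' => true, '>' => true, '<=' => true, '>=' => true];
$isOperator = isset($operatorSet[$op]);

var_dump($isOperator); // bool(true) in both cases
```

The difference per call is tiny, but in hot paths like selector parsing, where the same checks run thousands of times per request, it adds up.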
-
This great module by @Robin S should work for your use case: https://processwire.com/modules/connect-page-fields/

2 points
-
Chris Ferdinandi has a valuable opinion on the subject. I would love to agree with him, but I honestly don't know if I can afford to: https://gomakethings.com/training-your-replacement/

While you're at it, check out his other posts and subscribe to the newsletter; he is a very insightful guy 🙂

2 points
-
There are hundreds and hundreds of things we do every day that harm the environment, even things labeled as eco-friendly. What's unfair is characterizing this community as disappointing for using those things, when the person making that assessment is also poisoning the environment simply by living in a modern society.

@Ex-user, we are indeed a friendly community, and if you've been here a while, you'll know that, but we're also people free to express our opinions. There's no censorship here, and comments aren't blocked. If you don't like a comment, perhaps we won't like some of yours either, but that's okay: that's how it should be. This is my last comment in this thread, but I had to say it because it's not fair. I might even be wrong, but it's my personal opinion, not the community's.

2 points
-
Seriously? Is this why you're disappointed with our community? Are you even an adult person? If you're so worried about the environment, you should start by quitting programming yourself, dude.

Producing a single new laptop generates approximately 331 kg of CO2 emissions, while desktops create up to 948 kg of CO2. The manufacturing process accounts for 75%-85% of this impact, consuming 1,200 kg of water and 239 kg of fossil fuels. Globally, electronics contribute significantly to 62 million tonnes of annual e-waste. The software industry, part of the broader ICT sector, is responsible for approximately 2% to 4% of global greenhouse gas emissions, a figure comparable to the entire aviation industry. These emissions stem from both the energy consumed during software operation and the "embodied carbon" from manufacturing hardware.

Key environmental impacts of laptops, PCs, and software development:

- Carbon footprint: Manufacturing a new laptop produces over 300 kg of CO2.
- Resource intensity: Creating one computer requires 1.5 tons of water, 48 pounds of chemicals, and 530 pounds of fossil fuels.
- E-waste generation: Small IT equipment (laptops, phones) generates 11 billion pounds of global e-waste annually.
- Toxicity: Improperly discarded computers leak toxic heavy metals, including mercury, lead, and chromium, into the environment.
- Manufacturing vs. use: For battery-powered devices like laptops, 80% of total emissions occur during production, not during usage.
- Industry impact: The ICT sector is responsible for roughly 3.7% to 3.9% of global greenhouse gas emissions.
- Growth projection: Emissions from this sector are expected to rise significantly, potentially reaching 14% of global emissions by 2040.
- Development impact: Creating a single, light software feature can produce about 60 kg of CO2, while a "heavy-duty" feature can generate 300 kg or more.
- Key drivers: Major contributors include data center energy consumption, network infrastructure, and the energy used by developers' machines.

The rapid replacement cycle (typically 3 years) is driving these figures, with e-waste expected to reach 82 million tonnes by 2030. Only 17.4% to 22.3% of global e-waste is formally recycled, with the rest ending up in landfills, often polluting soil and groundwater in developing countries.

Ryan's reasoning makes much more sense than yours. We should repeat politically correct slogans less like a parrot and use a little more common sense and human reasoning.

2 points
-
One of the reasons I no longer use ChatGPT for anything - I don't want to get political here, but IYKYK. I would love to boycott Google and Amazon completely as well. I do my best on these fronts, but it's basically impossible.

2 points
-
Yes, it is a completely new hosting package. The error reporting is a good idea!

1 point
-
Hey, I've started to create a Media Hub for ProcessWire. [Edit... the newest updates to the UI are in later posts] Screenshots attached. Obviously a few UI improvements are needed 🙂

One of my clients requested a centralised media manager. I thought it'd be fun to give it a go and learn some stuff. I know that with a self-built module, it'll always be maintained, and I have an active interest in evolving it.

Shout out to @markus-th, who just announced he is doing something similar with WireMedia.

1 point
-
Thanks. Yep, I'm leaning that way too. Also, making the user load all the images, even with pagination, lazy loading, caching, etc., isn't ideal.

1 point
-
I prefer this one, along with the option to configure which source is shown by default (preferably the first tab is always the default, if possible). A cleaner, more organized interface is better. If image sources are merged, even when page-context images are listed first, how can I tell the actual source? In some cases that does matter. Avoiding confusion up front is the better solution, in my opinion.

1 point
-
Hey, a UI/UX question if anyone wants to chip in. Does this make sense?

Rich text editors (normal flow): You click the Add Image button in TinyMCE, and the Select Image window opens. You see a list of images associated with that page (only).

Rich text editors (with MediaHub): With MediaHub, I'm no longer restricted to showing only the images attached to a page's fields. But that raises a UX question: clients are used to seeing just a handful of images tied to the page they're editing. Presenting hundreds of Library assets upfront and asking them to sort and filter isn't an improvement if it just creates a sorting and filtering burden.

Is the better approach two tabs: "On this page" for the assets already attached to the current page, and "MediaHub" (or "Library") for the full collection? Or should I only display the entire library with the user's page images positioned at the top for convenience?

Cheers, P

1 point
-
Hello community, to my surprise and delight, my module "FrontendAutoReload", which was previously only published on GitHub and used exclusively by me, was presented in ProcessWire Weekly Newsletter #545. And because I haven't encountered any problems using it so far, I'm in the process of submitting it to the module repository.

It's basically a little helper for ProcessWire template development, which automatically hits the refresh button in your browser each time a change in the template folder is detected. Not groundbreaking, but a convenient time-saver. I would therefore be delighted if it could also be of some help to other developers.

For information on how to use it, check out FrontendAutoReload on GitHub. If you have any questions, don't hesitate to ask them.

1 point
-
Just finding this module now, and I am loving it! First off, belated congratulations are in order, and thanks a lot for making this module available @digitalbricks!

1 point
-
@szabesz I forgot to cancel my trial yesterday, lol. But so far it's been working well for me, albeit slower than what I'm used to with Claude. That being said, I'm on the cheapest plan. For me at least, there's a cancel button under Settings > Subscription > Cancel plan... Here's hoping it works next month if I change my mind.

1 point
-
Thanks for reporting, @monollonom! It seems like our small animation on the logo breaks the page in Firefox. Until we figure this out, I've removed it for Firefox with @-moz-document url-prefix(). Not as nice, but at least the page works.

Edit: the animation works in Firefox now 👍

1 point
-
I guess it's https://julietsai.net @protro. Nice cursor blob!

1 point
-
Hey @szabesz, apologies - I had replied to people privately, so you are probably thinking I am ignoring requests for info. I'll DM you now.

1 point
-
Hello @Peter Knight, we're really impressed with your work! We're currently using Media Manager, but unfortunately it's no longer being developed, so we're urgently looking for a modern alternative. Media Hub could be the solution. As an agency, we'd also like to use it commercially – do you have any thoughts on this?

1 point
-
New link to mPDF info here: https://mpdf.github.io/; the old manual at http://mpdf1.com/manual/index.php?tid=256 has been offline for a while now. Thanks @Wanze for this module!

1 point
-
Like you said, a TextFormatter formats on output and thus should leave the original data intact (otherwise this could lead to unexpected behaviours). Best would be to create a module and hook before Pages::save, for example, to sanitize and convert those safe links (a sketch of such a hook is below).

1 point
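Here is a minimal sketch of that kind of hook. The 'body' field name and the http-to-https rewrite are stand-ins for whatever sanitizing/converting logic is actually needed; only the Pages::save hook point itself comes from the advice above.

```php
<?php namespace ProcessWire;

// site/ready.php — runs before every page save
$wire->addHookBefore('Pages::save', function(HookEvent $event) {
    $page = $event->arguments(0);
    if(!$page->template->hasField('body')) return;

    $body = $page->body;

    // Example conversion: upgrade plain http:// links to https://
    $converted = str_replace('href="http://', 'href="https://', $body);

    if($converted !== $body) $page->body = $converted;
});
```

Because the conversion happens before save, the stored data itself is fixed once, rather than being reformatted on every render the way a TextFormatter would.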
-
Some updates to the MediaHub module

I've added an import tool that lets you pull in all the existing images on your ProcessWire site without having to re-upload anything manually. If you've been running PW for a while before installing Media Hub, you'll probably have images spread across dozens of pages and fields, and doing that by hand is nobody's idea of a good time.

The new Import tab (v104) scans every image field on your site and shows you what's out there in a nice table with thumbnails, dimensions, file sizes, which page and field each image belongs to, and, crucially, flags potential duplicates. So you can spot that the same hero image has been uploaded to 12 different pages and just import one copy. You can filter by field, template, or search by name, then select what you want and batch import the lot with a progress bar. There's also an Import by URL option if you've got images sitting on your server or hosted elsewhere that you want to bring in directly.

Nothing groundbreaking, just one of those quality-of-life things that makes the difference between a module people install and a module people actually use.

1 point
-
Media Hub update

The Media Hub now has a folder view:

- Folders is the default name for the view. It can be renamed to Gallery, Collections, Assets, Media, etc.
- Media assets can exist in one or more folders, so essentially they operate like tags.
- Clicking a folder name will filter the main MediaHub by items within that folder (and optionally subfolders).
- On an asset detail view, you can quickly add and remove an association to a folder.
- Support for folder nesting, renaming, etc. is working.

1 point
-
At the end of 2025, the relaunch of the Kubota Brabender Technology website went live. KBT is a global leader in feeding technology and bulk solids handling. The technical production was developed by me, Olaf Gleba. The graphic design is supplied by C&G: Strategische Kommunikation GmbH.

Homepage: https://www.kubota-bt.com

The site uses a lot of asynchronous calls in several sections:

1. The product search is ajax-driven and also preserves entered filter options while browsing the product pages (and all other pages) with session.storage.
2. Because there are many global partners and locations, it is necessary to allow narrowing down the selection for proper contacts.
3. The media page holds all available downloads and media content. This includes magazines, videos, terms of contract and works standard specifications. The videos are implemented with help of the HTML Dialog element. Works standards documents are supplied/collected from one source (the product page) to avoid duplication.
4. Backend: As most of the time, I provided content modules that fit the client's needs.

This and that:

- PrivacyWire (@joshua) as CCM (just a few cookies to handle Matomo and external movie content)
- Uses the SearchEngine module (@teppo) to handle (multilanguage) site search
- Wire Mail SMTP (@horst) to deliver automated e-mails
- Heavy use of Fieldtype AssistedURL (fork by @adrian) to provide language-dependent, local file linking (fieldname_[de|en] approach)
- Distribution of concatenated/minified CSS and JavaScript is cache-busted (happens within my development environment; no modules like AIOM etc. involved)
- The site uses a bunch of modules provided by the ProFields package (for example the Repeater Matrix and Table fieldtypes)

1 point
-
Ok, found it out... In the init function of my module, I call:

```php
$this->modules->get('JqueryUI')->use('modal');
```

Then I simply add «pw-modal» as a class to my link to the field editor and voilà, it opens in a modal window... The link for a new field looks like this:

```html
<a href="/processwire/setup/field/edit?id=new&fieldgroup_id=new&modal=1&process_template=1&name=myTestField" class="pw-modal addNewField">New Field</a>
```

1 point