Search the Community
Showing results for 'files'.
-
Hi, I think Analytics for MH would be a really useful addition for power users. Some of you mentioned you manage thousands of images. Maybe with MH that might be less, given the shared image concept. I'll be starting shortly, so if you have any requests, just add them here. Phase 1 will be an emphasis on data vs big dashboards etc. Metrics we could surface…

# MediaHub Analytics – Metric Ideas

## Asset Inventory & Volume
- Total asset count (all time)
- Asset count by type (image, video, document, audio, etc.)
- Coloured storage usage bar by type (like the iCloud bar)
- Total storage consumed, broken down by type
- Average file size by type
- Largest single assets (top 10)
- Assets exceeding a defined file size threshold

## Usage & Engagement
- Top 1 and top 9 most-used assets (by placement count)
- Assets used on the most pages
- Assets used more than once (vs. unique placements)
- Most-used asset by type (e.g. most-used video)
- Assets referenced in TinyMCE/rich text fields vs. structured fields
- Pages with the most assets total
- Pages with the most images specifically
- Pages with the highest asset variety (mixed types)

## Waste & Orphan Detection
- Unused assets (uploaded but placed nowhere)
- Assets uploaded but never used in a TinyMCE field specifically
- Assets that were used but the page/entry has since been deleted
- Duplicate or near-duplicate filenames
- Assets with no alt text or metadata

## Crops & Transforms
- Images with the most crop variants
- Images with crops defined but never rendered
- Images with no crops defined at all
- Most common crop ratios/dimensions used across the hub

## Age & Freshness
- Recently added (last 7, 30, 90 days)
- Oldest assets in the hub
- Oldest assets that have never been used
- Assets not updated or replaced in over X months
- Upload velocity over time (assets added per week/month)

## Content Quality & Hygiene
- Assets missing required metadata (title, alt text, caption, tags)
- Images below recommended resolution for their usage context
- Assets with broken or missing source files
- Assets with no focal point set (if supported)
- Untagged or uncategorised assets

## People & Process
- Assets uploaded per user/author
- Which users upload the most unused assets
- Upload activity by day of week or time of day
- Most active uploaders in the last 30 days

## Search & Filter Behaviour
- Most searched terms (by frequency)
- Most searched terms with zero results
- Most used filters (type, date, tag, label, etc.)
- Filter combinations used most often together
- Searches that result in no action (user searches but doesn't select anything)
- Most abandoned searches (searched, filtered, then left)

## Collections & Folders
- Largest collections by asset count
- Largest collections by total storage size
- Most nested / deepest folder structures
- Collections with the most unused assets inside them
- Empty collections (created but never populated)
- Collections that haven't been updated in over X months
- Most viewed or accessed collections
- Collections with assets shared across the most pages

## Labels (Library / Storage Organisation)
- Asset count per label
- Storage volume per label
- Labels with the most unused assets
- Labels with no assets assigned (orphan labels)
- Most combined labels (which labels appear together most)
- Unlabelled assets (no label assigned at all)

## Tags (Display / Website Facing)
- Most used tags by asset count
- Tags applied to assets that are never actually used on the website
- Assets with the most tags applied
- Untagged assets
- Tags that are never searched or filtered by visitors
- Tag overlap – assets sharing the same tag cluster (useful for spotting redundancy)
- Most used tag per asset type (e.g. most common image tag vs. video tag)

-----

**Other high-value metrics for power users:** duplicate detection, metadata completeness scoring, upload velocity trends, and per-user waste ratios (who's uploading assets that never get used). The type of useful info that might surface process problems rather than just content problems.
-
Ok, we are back in business. Stemplates (Free) is now working more cleanly with the 3rd party module. This won't be an issue again for any other module that relies on template names to function. I had to make a few other changes, but Stemplates is better for it. Here's the updated list:

✅ completely non-destructive
✅ doesn't modify your templates or fields
✅ doesn't touch system templates (admin, repeaters, etc.)
✅ doesn't alter your workflow (if anything, it simplifies it)
✅ free from manual aliases, no mapping files, no rewrite rules to maintain
✅ template files follow your renames automatically (no manual moves, no copy-paste, no backup file shuffle)
✅ third-party modules that reference template names keep working after a rename
✅ API calls using the old template name continue to work transparently
✅ every rename and every config update is logged to Setup → Logs → stemplates, so you always have a full audit trail
ℹ️ adds a Setup → Stemplates admin page for browsing your folders (purely additive, you can ignore it)
ℹ️ writes to the database only when you rename a template, and only to keep other modules' template pickers in sync
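To make the "API calls using the old template name continue to work" point concrete, here is a small hypothetical illustration (the template names are invented for the example; this is not code from the module):

<?php namespace ProcessWire;
// Suppose a template was renamed from "blog-post" to "post" after its file
// moved into a blog/ subfolder. A selector using the old name still works:
$articles = $pages->find("template=blog-post, limit=10");
// ...and so does fetching the template object by its old name:
$t = $templates->get("blog-post");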
-
Caveat 01

For transparency, I should note that the only files presenting issues were include files where the path was not absolute. The solution was to change:

include("includes/get-widget.inc");

to:

include($config->paths->templates . "includes/get-widget.inc");

You need to do this for every include and require that uses a relative path. But it's a one-time change, and once it's updated, the template location does not matter.
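As a minimal sketch of that one-time change (the second include filename here is hypothetical, added only to show the pattern repeating across every include and require):

<?php namespace ProcessWire;
// Before (relative paths break once the template file moves into a subfolder):
// include("includes/get-widget.inc");
// require("includes/_head.inc"); // hypothetical second include

// After: anchor every include/require to the templates path, so the physical
// location of the template file no longer matters.
include($config->paths->templates . "includes/get-widget.inc");
require($config->paths->templates . "includes/_head.inc"); // hypothetical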
-
Hey everyone. I have a new Module in the works. It's 99.9% 75% ready for general release, but already running on my own sites for weeks. [Edit: see post about bug re. 3rd party module]

If you've ever opened /site/templates/ on a project that's been running for a year, you know the feeling. 20-50 PHP files with no structure, no grouping - an alphabetical avalanche. I usually get so far by namespacing all my files, but sometimes I wish for more organisation.

Stemplates lets you organise your templates into folders. That's real directories on the filesystem - the way you're used to working. So, instead of leaving everything in a flat directory, you can go from this…

site/templates/
├── account-dashboard.php
├── account-billing.php
├── shipping-methods.php
├── shipping-tracking.php
├── blog-index.php
├── blog-post.php
└── blog-category.php

to this…

site/templates/
├── account/
│   ├── dashboard.php
│   └── billing.php
├── shipping/
│   ├── methods.php
│   └── tracking.php
└── blog/
    ├── index.php
    ├── post.php
    └── category.php

I've been running it on my own sites without issues for a while, and it takes just minutes to set up, even on a large site. Setup takes even less time if you're using AI/MCP. Even better, Stemplates is:

✅ completely non-destructive
✅ doesn't touch your database
✅ doesn't modify your templates or fields
✅ doesn't change anything in the admin UI
✅ doesn't alter your workflow
✅ free from manual aliases, no mapping files, no rewrite rules to maintain
✅ doesn't touch system templates (admin, repeaters, etc.)

It also works with page classes and supports nested subfolders (50 levels tested).

Understandably, I was reluctant to mess around with such a fundamental part of my sites, so a few safeguards exist...

- Migrate one template at a time at your own pace - no big switchover required
- Your existing flat templates keep working untouched, alongside any you've already moved
- If a file can't be found in its subfolder, ProcessWire falls back to its normal flat-folder behaviour automatically - the site doesn't break
- Uninstall cleanly at any time.

Stemplates Free is undergoing a slight rework - available now, DM me for access.

Stemplates Pro (coming soon) takes Stemplates even further. More soon, but honestly, Stemplates (Free) will take care of 99% of your new template -> folders world.

Thanks for reading!
Peter
-
Sorry for the confusion Soma. I meant that the post on Tuesday was my first for a long time. Everything has been running fine with my PW sites for years, so I haven't been here, and that's what I meant. But now I have this problem ..and others. For now I am concentrating on solving the one I posted above on Tuesday. I'm going back to basics and trying to find out why it works on my localhost but not on the live server. A clean install of the latest PW 3.0.255 blank site profile does work on the live server. During that I saw that the SQL storage engine is InnoDB by default (a good thing), but my sites were always MyISAM. I think you're right, though - it shouldn't matter when importing a DB. The undefined wireRenderFile error is strange because I can find the function in the core files on the live server, so the file is not missing. I will continue playing and report back any progress. Thank you Soma for your interest.
-
The memory feature is now active in AgentTools and it's enabled by default. I have to say it makes the experience a whole lot better. Also, we've updated it so that agents can now decide which API.md files they want to receive, and whether or not they want sitemaps/schema, rather than us sending stuff they may or may not need. So the "Include extra context" setting in Engineer has now been removed, since it's no longer necessary. More coming by Friday.
-
Maybe some core files are missing on the server? It can happen that uploading a complete site to a new server misses a file or two.
-
PromptWire 1.7.0 is out with 4 new MCP tools, bringing the total to 40. The latest tools are listed below. I haven't 100% fully tested a site sync yet, so always back up your existing site, database and files.

pw_site_compare compares your local and remote sites across pages, schema, and template/module files. Pages are matched by URL path rather than database ID, so it works reliably across environments with different auto-increment sequences. You can exclude templates (e.g. user, role, licence pages) to focus the diff on what you actually intend to deploy.

pw_site_sync (my favourite) orchestrates a full deployment in one operation: compare, back up the target, enable maintenance mode, push schema, push pages with their file/image assets, push template and module files, disable maintenance. It runs in dry-run mode by default so you see the full plan before anything is touched. Scope can be narrowed to just pages, schema, or files.

pw_maintenance toggles maintenance mode on local, remote, or both sites. A styled 503 page is served to visitors with appropriate Retry-After and noindex headers. Superusers and the PromptWire API bypass it, so you can verify changes and keep the agent working during a deployment.

pw_backup creates database dumps (using ProcessWire's WireDatabaseBackup) and zip archives of site/templates and site/modules. You can list, restore, or delete backups from either environment. The backup directory is auto-protected with .htaccess so SQL dumps are never web-accessible.

HTTPS enforcement. The API endpoint rejects plain HTTP with a 403 before the API key is checked. PROMPTWIRE_ALLOW_HTTP in config-promptwire.php bypasses this for local dev only.

Autoload change. The module is now autoloaded to intercept front-end requests during maintenance mode. The cost is a single file_exists() check per page load.

Documentation updated across all pages at peterknight.digital/docs/promptwire/v1/
-
PromptWire 1.6.0 is out with 7 new MCP tools for database inspection, log reading, and cache management. The total is now 36 tools. The latest are:

DATABASE TOOLS

pw_db_schema – Inspect database tables. Without arguments it lists all tables with engines, row counts, and sizes. Pass a table name for detailed columns, types, keys, and indexes. Example output:

table: pages
columns:
  id           int(10) unsigned  PRI  auto_increment
  parent_id    int(11) unsigned  MUL
  templates_id int(11) unsigned  MUL
  name         varchar(128)      MUL
  status       int(10) unsigned  MUL
  modified     timestamp         MUL
  created      timestamp         MUL
indexes: PRIMARY, name_parent_id (unique), parent_id, templates_id, status

pw_db_query – Execute read-only SELECT queries. Only SELECT, SHOW, and DESCRIBE are allowed; mutations are blocked. A LIMIT is auto-injected if you don't include one. Example output:

SELECT id, name, templates_id, status FROM pages ORDER BY id DESC LIMIT 5

id   | name              | templates_id | status
1703 | v1-15-4           | 72           | 1
1702 | test-05-brief-... | 59           | 1025
1701 | nyhavn-lufthavn   | 59           | 1025

pw_db_explain – Run EXPLAIN on a SELECT query for performance analysis. Useful for diagnosing slow queries and confirming index usage.

pw_db_counts – Quick overview of data volume. Shows row counts for core ProcessWire tables (pages, fields, templates, modules, caches) and the 20 largest field data tables. Example output:

Core tables: pages: 605 | fields: 76 | templates: 53 | modules: 126
Top field tables: field_title: 599 | field_pkd_mediahub_mime: 254 | field_pkd_mediahub_image: 168

LOG TOOLS

pw_logs – List available log files, or read and filter entries from a specific log. Filter by level (error, warning, info) and text pattern. Example output:

errors      138.5 KB  2026-04-20
exceptions  101.9 KB  2026-04-21
media-hub    38.4 KB  2026-04-21
modules      30.9 KB  2026-04-21
session      22.0 KB  2026-04-21

pw_last_error – Retrieve the most recent error from the error and exception logs. No parameters needed. Example output:

2026-04-21 15:47:56 [exceptions] SQLSTATE[42S22]: Column not found: 1054 Unknown column 'field_images.width'

CACHE TOOL

pw_clear_cache – Clear ProcessWire caches by target: all, modules, templates, compiled, or wire-cache. Useful after deploying changes or when things feel stale. Example output:

target: modules
cleared: [modules]
success: true

All 7 tools work via both local CLI and remote HTTP API, so they're available whether you're working against a local dev site or a remote production server.
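For anyone wondering how a read-only guard like the one pw_db_query describes can be implemented, here is a generic, self-contained sketch (not PromptWire's actual code) that whitelists SELECT/SHOW/DESCRIBE and auto-injects a LIMIT, assuming you hand it a plain PDO connection:

<?php
// Generic illustration of a read-only query guard with auto-injected LIMIT.
// Not PromptWire's actual implementation.
function runReadOnlyQuery(\PDO $pdo, string $sql, int $defaultLimit = 50): array {
    $query = ltrim($sql);
    // Allow only statements that cannot mutate data
    if (!preg_match('/^(SELECT|SHOW|DESCRIBE)\b/i', $query)) {
        throw new \RuntimeException('Only SELECT, SHOW and DESCRIBE are allowed');
    }
    // Auto-inject a LIMIT on SELECTs that do not already have one
    if (preg_match('/^SELECT\b/i', $query) && !preg_match('/\bLIMIT\s+\d+/i', $query)) {
        $query = rtrim($query, "; \t\r\n") . " LIMIT {$defaultLimit}";
    }
    return $pdo->query($query)->fetchAll(\PDO::FETCH_ASSOC);
}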
-
Hi. I've recently moved a long-standing site to a new server and now I get "wireRenderFile is undefined" on the home page. I checked on my localhost server and it runs fine there. I'm using Markup Regions, and the home page, just like all my pages, uses:

<div id="ajax-content" pw-replace>
<?=wireRenderFile('_ajax-home.php', array('id' => $page->id))?>
</div>

I did a clean install of 3.0.255 and it runs OK. Replaced the site files and imported my DB as usual, but it gives a red error screen. The error I'm getting:

Hmm… Fatal Error: Uncaught Error: Call to undefined function wireRenderFile() in site/templates/home.php:3
#0 wire/core/TemplateFile.php (328): require()
#1 wire/core/Wire.php (413): TemplateFile->___render()
#2 wire/core/WireHooks.php (1018): Wire->_callMethod('___render', Array)
#3 wire/core/Wire.php (484): WireHooks->runHooks(Object(TemplateFile), 'render', Array)
#4 wire/modules/PageRender.module (547): Wire->__call('render', Array)
#5 wire/core/Page.php (3152): PageRender->render(Object(HomePage), Array)
#6 wire/core/Wire.php (416): Page->___renderPage(Array)
#7 wire/core/WireHooks.php (1018): Wire->_callMethod('___renderPage', Array)
#8 wire/core/Wire.php (484): WireHooks->runHooks(Object(HomePage), 'renderPage', Array)
#9 wire/core/Page.php (3097): Wire->__call('renderPage', Array)
#10 wire/core/Wire.php (413): Page->___render()
#11 wire/core/WireHooks.php (1018): Wire->_callMethod('___render', Array)
#12 wire/core/Wire.php (484): WireHooks->runHooks(Object(HomePage), 'render', Array)
#13 wire/modules/Process/ProcessPageView.module (193): Wire->__call('render', Array)
#14 wire/modules/Process/ProcessPageView.module (114): ProcessPageView->renderPage(Object(HomePage), Object(PagesRequest))
#15 wire/core/Wire.php (416): ProcessPageView->___execute(true)
#16 wire/core/WireHooks.php (1018): Wire->_callMethod('___execute', Array)
#17 wire/core/Wire.php (484): WireHooks->runHooks(Object(ProcessPageView), 'execute', Array)
#18 index.php (55): Wire->__call('execute', Array)
#19 {main}
thrown (line 3 of site/templates/home.php)

I can log in to the admin back end OK. Any attempt to view a page from the admin gives the error above, as they all use the basic-page template with the same markup region. Like I said, it all works fine on my localhost on 3.0.229. Any ideas? Thanks.
-
Well, it's been about a year in the making, but v5 is finally available. I have upgraded a lot of sites to it now without issues, but I would still caution you to be ready to revert (or delete) the module files if the namespace changes cause any issues - there were a lot early on.

The two new banner features are: the Console panel can now run very long-running scripts (there is a one-hour limit just so that broken scripts don't run forever), which is great for massive batch modifications or the like. The Dumps Recorder panel (either manually loaded or via Enable Guest Dumps) now live-polls for new dumps, so you don't need to continually reload the page to see the entries as they are logged.

Have fun!

Breaking Changes
- Minimum requirements bumped to ProcessWire 3 and PHP 7.1
- Removed legacy Tracy 2.5.x core branch and FireLogger support
- Panel DOM IDs now include `ProcessWire-` prefix – update any custom CSS/JS targeting panel IDs

Namespace Support
- Full `namespace ProcessWire` support across all panels and POST processing files
- Autoloader bridge for seamless non-namespaced to namespaced module migration
- Third-party panels bridged automatically via `class_alias()`

Security
- Comprehensive security hardening: XSS sanitization, CSRF protection on all panels, directory traversal fixes, CSP nonces on all inline scripts, cookie SameSite enforcement, and input sanitization

New Features
- Console – long-running scripts automatically switch to background polling, surviving gateway timeouts; session locks released so you can continue browsing while scripts run; storage migrated to IndexedDB
- PW Version Switcher – extracted into its own class with automatic version reverting on failure
- Dumps Recorder live polling – live-polls for new dumps from other users and guest sessions
- File Editor BlueScreen integration – exception page links open in the built-in file editor
- Various smaller additions across Diagnostics, API Explorer, Request Info, and PW Info panels

Bug Fixes
- PHP 8.x compatibility fixes for `htmlspecialchars()`, `trim()`, and `isset` null handling
- Fixed Console panel snippet and polling issues, including cache-busting for CDN/proxy environments
- Fixed File Editor not opening files linked from BlueScreen exception pages
- Fixed Adminer URL/namespace issues and thumbnail viewer path handling
- Fixed Debug Mode panel to use modern API methods instead of deprecated ones
- Fixed API Explorer "What's New" section and reflection errors for hooked methods
- Fixed "unsaved changes" false positive when saving pages in PW admin
- Windows path and line ending fixes

Performance
- Session lock contention reduced
- Various loop and query optimizations
-
There has been a lot of ProcessWire work covered this week! Here's a summary:

1. The AgentTools module has been upgraded with "Site Engineer", an AI agent now built into your admin. You can ask it questions, have it create migrations, or have it make other web development updates to your site by going to Setup > Agent Tools > Engineer. To enable the Engineer, you need an Anthropic API key, an OpenAI API key, or an OpenAI-compatible API key (apparently several other providers use the OpenAI key standard). You can optionally put Engineer in "read-only" mode, which is what I do for production sites. In read-only mode, it answers questions and provides you with code for making updates yourself. But if read-only mode is not enabled, then it can act as your web developer and make changes directly, which is what I use with development sites. AgentTools provides full context to Engineer on your site's pages, fields and templates. If using ProcessWire 3.0.258 (or newer), it also provides the new API.md files to help AI know how to best work with all of ProcessWire's Fieldtypes. If you are having Engineer create or manipulate fields on your site, it's a good idea to have 3.0.258 for the API.md support. Engineer also supports prompt caching for up to 1 hour in order to limit token usage.

2. AgentTools now has JSON site-map generation features for AI agents. This enables an AI agent to see the full scope of your site. A second site-map feature focuses on all your site's templates and fields, essentially providing the full site schema to the AI agent.

3. In the core, we've added API.md files for all 18 of ProcessWire's core Fieldtypes, except for the comments and cache Fieldtypes, so far. In order to facilitate this, and to facilitate AI agent accessibility, all of ProcessWire's Fieldtypes now have their own directories as well.

4. After looking at all the API.md files, it became clear that there was plenty of room for improvement in the APIs of several Fieldtypes, so there have been major core updates to several Fieldtypes, as well as the Fields, Templates and Fieldgroups classes.

5. A Fieldtype testing framework has been built, which tests the full scope of 20 ProcessWire Fieldtypes (all the core ones, plus FieldtypeRepeaterMatrix and FieldtypeTable). It tests field creation, manipulation, traversal (where applicable), sorting (where applicable), searching with selectors, and more. I'll be uploading the testing framework to GitHub soon as well.

6. The new testing framework identified some bugs, which have been fixed. Most notable were selector matching bugs in FieldtypeFloat and FieldtypeDatetime.

7. There has been some refactoring in ProFields FieldtypeRepeaterMatrix and FieldtypeTable, plus API.md files have been generated for both. New versions should be ready soon. In fact, that applies to all of the ProFields, and I hope to cover FieldtypeCombo and FieldtypeCustom next week.

8. ProcessWire 3.0.258 has a whole lot of improvements, changes and fixes in it. Here's the commit log: https://github.com/processwire/processwire/commits/dev/

9. Back to working on PagesVersionsPro (and I have been for a few weeks), but more on that later.

10. There's probably more, but that's all I can remember at the moment. Thanks for reading and have a great weekend!

Basic examples of using Engineer for migrations:
-
TrackingScripts Module

Manage and inject tracking scripts (Google Analytics, Google Ads, Facebook Pixel, custom code) into site pages, with optional PrivacyWire consent integration and robots.txt/llms.txt file management.

Features
- Google Analytics (GA4) – inject gtag.js with Measurement ID
- Google Ads – inject gtag.js with Ads conversion ID
- Facebook Pixel – inject Pixel tracking code with noscript fallback
- Custom code – free-form textareas for any third-party scripts (head and/or body)
- PrivacyWire integration – when enabled, scripts are injected with data-category attributes and type="text/plain" so they only load after user consent
- robots.txt & llms.txt – edit and auto-generate both files from the admin; content is written to the site root on save
- Per-service controls – enable/disable, position (head or body), and consent category for each service independently
- ID validation – regex validation for GA (G-), Ads (AW-), and Pixel (numeric) IDs before injection
- Admin-only exclusion – scripts are never injected on admin or form-builder templates

Files

site/modules/TrackingScripts/
├── TrackingScripts.info.php – module metadata
├── TrackingScripts.module.php – main module (hooks, script injection)
├── TrackingScriptsConfig.php – module configuration (ModuleConfig)
├── ProcessTrackingScriptsConfig.info.php – Process module metadata
└── ProcessTrackingScriptsConfig.module – admin UI for non-superusers

Installation
1. Copy the TrackingScripts folder into /site/modules/
2. In the admin go to Modules → Refresh, then install TrackingScripts
3. Optionally install ProcessTrackingScriptsConfig – this adds a Setup → Tracking Scripts page that allows non-superuser roles to edit the configuration. Assign the tracking-scripts-config permission to any role that needs access.

Configuration

Go to Modules → Configure → TrackingScripts (superuser) or Setup → Tracking Scripts (any user with permission).

Google Analytics
| Field | Description |
| --- | --- |
| Enable | Activate/deactivate injection |
| Measurement ID | GA4 ID, e.g. G-XXXXXXXXXX |
| Position | Inject in <head> or before </body> |
| PrivacyWire Category | Consent category (default: Statistics) |

Google Ads
| Field | Description |
| --- | --- |
| Enable | Activate/deactivate injection |
| Ads ID | e.g. AW-XXXXXXXXX |
| Position | Inject in <head> or before </body> |
| PrivacyWire Category | Consent category (default: Marketing) |

Facebook Pixel
| Field | Description |
| --- | --- |
| Enable | Activate/deactivate injection |
| Pixel ID | Numeric ID, e.g. 123456789012345 |
| Position | Inject in <head> or before </body> |
| PrivacyWire Category | Consent category (default: Marketing) |

Custom Tracking Code

Two free-form textareas for any additional third-party code:
- Custom Code – Head: injected before </head>
- Custom Code – Body: injected before </body>

PrivacyWire Integration

When enabled, all tracking scripts are rendered with PrivacyWire-compatible attributes:

<script type="text/plain" data-type="text/javascript" data-category="statistics" class="require-consent" src="..."></script>

This ensures scripts only execute after the user gives consent for the corresponding cookie category. Requires the PrivacyWire module to be installed and active.

Robots.txt & LLMs.txt

Edit the content of both files directly from the admin. On save, the files are written to (or removed from) the site root:
- /robots.txt – search engine crawler directives
- /llms.txt – LLM/AI bot directives

If a textarea is left empty, the corresponding file is deleted from the site root.

How It Works

The module hooks into Page::render (priority 100) to inject scripts via str_replace on </head> and </body>. This means:
- No template modifications required
- Works on all front-end pages automatically
- Runs before PrivacyWire (priority 101), so consent attributes are in place when PrivacyWire processes the page

The robots.txt and llms.txt files are written via a hook on Modules::saveConfig, triggered whenever the module configuration is saved from either the module config screen or the Process admin page.

ProcessTrackingScriptsConfig (Admin UI)

A Process module that mirrors the full TrackingScripts configuration under Setup → Tracking Scripts.

Purpose
Allows non-superuser roles to manage tracking scripts without access to the Modules admin.

Permission
The module registers the permission tracking-scripts-config. To grant access:
1. Go to Access → Roles
2. Edit the desired role
3. Check tracking-scripts-config
4. Save

How it works
- Reads and writes the same configuration data as TrackingScripts via $modules->getConfig() / $modules->saveConfig()
- Changes from either location (Modules → Configure or Setup → Tracking Scripts) are reflected in both
- Saving triggers the same Modules::saveConfig hook, so robots.txt/llms.txt files are written automatically

Requirements
- ProcessWire 3.0.110+
- PHP 7.2+
- PrivacyWire (optional, for consent integration)

License
Licensed under the MIT License.
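As a rough sketch of the injection mechanism described under "How It Works" (this is not the module's actual source; the Measurement ID and the placement in site/ready.php are just assumptions for the example):

<?php namespace ProcessWire;
// e.g. in site/ready.php - hook after Page::render and inject a
// PrivacyWire-compatible script tag before </head>. The module itself is
// described as registering its hook at priority 100 so it runs before
// PrivacyWire (priority 101).
wire()->addHookAfter('Page::render', function(HookEvent $event) {
    $page = $event->object;
    if ($page->template->name === 'admin') return; // never inject on admin pages

    $html = $event->return;
    if (strpos($html, '</head>') === false) return;

    // Only executes after the visitor consents to the "statistics" category
    $script = '<script type="text/plain" data-type="text/javascript"'
            . ' data-category="statistics" class="require-consent"'
            . ' src="https://www.googletagmanager.com/gtag/js?id=G-XXXXXXXXXX"></script>';

    $event->return = str_replace('</head>', $script . '</head>', $html);
});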
-
I tweaked the new Admin Theme KONKAT a bit to my needs via the config files provided and have been using it for three months or so. Now it is the opposite: when I work on older projects, I wish I could switch to the new layout. Changing stuff you have been used to for months/years takes time to adapt to. Now I couldn't even tell what exactly bothered me about the initial KONKAT theme when I tried it the first time. So for me it was just the human reflex of "BAHH, looks different, want my old design back".
-
Good morning @da²,

I have wrapped my head around your problem and I have found a way to check file uploads against the value of an input field by using a custom validation.

In the following example you have 2 form fields:
- Field 1 is a text input where you have to enter the name of a file (for example "myfile") without the extension
- Field 2 is the file upload field where you upload a ZIP folder with different files.

Validation check

In this simple example it will be checked whether a file with the filename entered in field 1 is present among the files uploaded in field 2. In other words: check if the uploaded ZIP folder contains a file with the given filename ("myfile" in this case). If yes, then the form submission is successful, otherwise an error will be shown at field 1.

This is only a simple scenario to demonstrate how a custom validation could be done. You have to write your own validation rules.

Now let's write some code for this example. The first part is the custom validation rule. Please add this code to your site/init.php. If this file does not exist, please create it first.

<?php namespace ProcessWire;

\Valitron\Validator::addRule('checkFilenameInZip', function ($field, $value, array $params) {
    $fieldName = $params[0]; // field name of the upload field
    $form = $params[1]; // grab the form object

    // get all the files from the given ZIP folder as an array
    $zipfiles = $form->getUploadedZipFilesForValidation($fieldName);

    // now run the validation against the value of this input field ($value) and the files inside the ZIP folder
    if (!is_null($zipfiles)) {
        $fileNames = [];
        foreach ($zipfiles as $zipfile) {
            $fileNames[] = pathinfo($zipfile, PATHINFO_FILENAME);
        }
        return in_array($value, $fileNames);
    }
    return true;
}, 'must be a wrong filename. The ZIP folder does not contain a file with the given name.');

I have named the validation rule "checkFilenameInZip" in this case. The important thing is that you have to add 2 values to the $params variable:
1. The name of the file upload field
2. The form object itself

To get all the files that were uploaded via the upload field, you can use one of these methods of the form object:
- getUploadedZipFilesForValidation (outputs only files of a ZIP file)
- getUploadedFilesForValidation (outputs all files of a file upload field)

Important: to use these 2 new methods you have to update to the latest version of FrontendForms (2.3.14)! All ZIP files will be extracted automatically. Nested ZIP files are supported too.
Now let's create the form:

$form = new \FrontendForms\Form('testform');
$form->setMaxAttempts(0); // set 0 for DEV
$form->setMaxTime(0); // set 0 for DEV
$form->setMinTime(0); // set 0 for DEV
$form->useAjax(false); // set false for DEV

// input field containing a filename
$filename = new \FrontendForms\InputText('filename');
$filename->setLabel('Name of file');
$filename->setNotes('Please enter the name of the file without extension.');
$filename->setRule('required');
// here is the new custom validation rule: enter the name of the file upload field as the first parameter and the form object as the second parameter
$filename->setRule('checkFilenameInZip', 'fileupload1', $form);
$form->add($filename);

// the file upload field
$file1 = new \FrontendForms\InputFile('fileupload1');
$file1->setRule('required');
$file1->setLabel('Multiple files upload');
$form->add($file1);

$button = new \FrontendForms\Button('submit');
$button->setAttribute('value', 'Send');
$form->add($button);

if ($form->isValid()) {
    // do what you want
    // $extractedFiles = $form->getUploadedFiles(true); // returns all uploaded files including extracted ZIP folder files
    // $nonExtractedFiles = $form->getUploadedFiles(); // returns all uploaded files but ZIP files are not extracted
    // $values = $form->getValues(); // returns all submitted form values
}

echo $form->render();

Copy this code and paste it inside a template for testing purposes. Upload a ZIP folder and enter the name of one of its files inside the "filename" input field. If the file exists in your ZIP folder, the validation will succeed.

Please note: the extraction of the ZIP files is done only once, during the validation process. Afterwards, inside the isValid() block, you can get the extracted files from the temporary upload folder via the getUploadedFiles() method. Use the return value of this method to do further work with the uploaded files.

Take this example as a starting point for your own validation rules and let me know if it works for you. I think this is the only way to get what you need.

Another hint: use browser validation in FrontendForms to get client-side validation first, by enabling it in the backend.
-
Hi, thanks for the explanation. I'll try to explain my actual case; it's more complex and would be easier to manage if I could just invalidate a form field manually.

The form has a zip field, and the zip file contains several files. To keep the explanation short: I extract the zip and validate its content, but the validation also requires knowing the values of the other fields in the form. I think this is the first problem: accessing the other fields (only if they are validated by the form) while validating the zip field.

So I have a field "A" that the form should validate (basic validation, like "required"); then, while validating the zip content (another field) and extracting its data, I check field "A" again against the zip data. If there's an issue I must invalidate field "A" (even though I'm processing the zip field) and display an error on this "A" field. In one sentence: I only know that field "A" is wrong when processing the zip field.

Also, this validator seems to make things more complex. I don't want to extract/validate the zip 2 times: once in the validator, and once again when the form is fully validated (in the isValid() branch) to save the values in the database. So I would have to refactor the code to store the validated data somewhere and reuse it in the isValid() branch, which is extra complicated work I want to avoid. This is also not a light process for the CPU: the zip can be 500 MB and contain a lot of files to parse (some of which are also zips to extract again), so it's a heavy process and I don't like the idea of doing it twice.

Telling the form "this field is not valid and should display this error message" is way simpler and doesn't require extra code: I just have to return an error code from the high-level class that processes the data and invalidate one field or another. If you have an idea to avoid processing the zip 2 times without adding extra work, it would be welcome. I hope my explanations are clear (I'm not sure).

Currently I'm displaying the error message on the form result page, and in case of error the user has to fill in the whole form again. My client is OK with this since this form is mainly used by himself, but I'm not happy, because this is also a public form that a visitor could use, and having the whole form reset in case of error is bad design.

This is the actual code in the template PHP:

$form = new UploadSolveForm();

if ($form->isValid()) {
    $uploadFilePath = $form->getSolveFilePath();
    $tagsText = trim($form->getValue('tagsFinalInput'));
    $tags = $tagsText ? explode(',', $tagsText) : [];

    $parseCommand = new ParseSolveUploadCommand(
        $uploadFilePath,
        trim($form->getValue('groupName')),
        $tags,
        RoomType::from($form->getValue('room')),
        boolval($form->getValue('isSpaceKo')),
        $form->getValue('subtreeVilainPosition') ? TablePosition::from($form->getValue('subtreeVilainPosition')) : null,
        $form->getValue('subtreeVilainAction') ? PlayerAction::from($form->getValue('subtreeVilainAction')) : null,
        $form->getValue('subtreeHeroPosition') ? TablePosition::from($form->getValue('subtreeHeroPosition')) : null
    );

    if (!$parseCommand->execute()) {
        // I would like to do:
        // $errorCode = $parseCommand->getErrorCode();
        // if ($errorCode == ParseSolveUploadCommand::SOME_ERROR) {
        //     $form->setError('a field name', 'errorMessage');
        // } else if ($errorCode == ParseSolveUploadCommand::SOME_OTHER_ERROR) {
        //     $form->setError('another field name', 'errorMessage');
        // }

        if ($parseCommand->getUserErrorMessage())
            NoticeManager::add($parseCommand->getUserErrorMessage(), NoticeType::ERROR); // Will display an alert box on the form result page
        else
            NoticeManager::add(__("Une erreur s'est produite, merci de contacter un administrateur.", COMMON_TRANSLATION_DOMAIN), NoticeType::ERROR);
    } else {
        $parseCommand->getReport()->noticeUser();
    }
}

Thank you for your interest.
-
Hello,

A small utility I built for my own workflow: export any page directly from the editor as a clean Markdown file. Useful for documentation, content migration, and feeding page content to AI tools.

GitHub: https://github.com/mxmsmnv/PageMarkdown

What it does:
- Adds an Export to Markdown button to the page edit form
- Smart HTML conversion – CKEditor content (tables, lists, headings, links, bold/italic) → standard Markdown
- Supports ProFields: Table, Combo, Repeater Matrix (with type labels and nested structure)
- Images and files render as Markdown image/link syntax
- Page references render as links or titles
- MapMarker, Email, URL, Color fields all handled
- Configurable: toggle field labels as headings, ignore lists per field/type, datetime format, empty HTML cleanup

Requirements: ProcessWire 3.0+, PHP 8.0+
MIT License.
-
If you enable WebP, it creates variants for all images. However, Ryan's .htaccess code from here: https://processwire.com/blog/posts/webp-images-and-more/#webp-image-strategies-in-processwire doesn't rewrite GIFs - I presume this is because they can be animated.

RewriteRule ^(.*?)(site/assets/files/)([0-9]+)/(.*)\.(jpe?g|png)(.*)$ /$1$2$3/$4.webp [L]

Given this, it seems inefficient to create WebP variants for GIFs. Does the core have a built-in way to disable this, or is it by design that this isn't offered/done automatically?
-
@ukyo Thanks for your awesome work with those modules, really impressive what you are building, and it's a big help for improving the AI friendliness of ProcessWire. The AgentTools module readme is now linking to your boost project as well. Glad you like the API.md files. Admittedly it was not my idea, but I asked Claude what would be helpful and he said these API.md files, plus an abbreviated sitemap json file so that it can get a big picture overview of a PW installation at a glance. That sitemap feature was actually added to the AgentTools module today. Several API.md files have been added to the core today as well. For Fieldtypes that don't have their own directory, they are in a combined /wire/modules/Fieldtype/API.md file. We're also adding dedicated Field classes specific to each Fieldtype, which will improve field documentation but also allow for custom field API methods separate from the Fieldtype (where useful).
-
Hey folks, we at frameless Media often develop across multiple devices - laptop, tablet, sometimes even from a phone with an AI coding assistant. Git is our single source of truth, but getting those changes onto a staging or production server has always been annoying. Especially on shared hosting where there's no SSH, no git, and git-based FTP via YAML configs is more hassle than it's worth. We also frequently need to test new modules directly on shared hosting environments where the server setup differs from our local machines. Manually uploading files after every push? No thanks. So we built GitSync.

TL;DR:
✅ Link any installed module to its GitHub repo
✅ See all branches and their latest commits
✅ One-click sync - only changed files are downloaded
✅ GitHub Webhook support - auto-sync on every push
✅ Works on shared hosting - no git, no SSH, no cron
✅ Private repo support via GitHub Token

What's the difference to ProcessUpgrade? ProcessUpgrade is great for updating published modules from the PW modules directory. But it tracks releases, not branches. During development, when you're pushing to `develop` or `feature/xyz` ten times a day, you need something different. That's where GitSync comes in.

How it works
1. Install the module, add your GitHub Token (optional for public repos)
2. Go to GitSync > Add Module, pick any installed module from the dropdown
3. GitSync searches GitHub for matching repositories automatically
4. Link the module to a repo + branch - done

From now on, you can sync with one click. GitSync compares file hashes locally and remotely (using the same SHA1 blob hashing that git uses internally) and only downloads what actually changed. No full re-downloads, minimal API usage.

Want it fully automatic? Set up a GitHub Webhook - enter a secret in the module config, point the webhook to `https://yoursite.com/gitsync-webhook/`, and every push triggers an automatic sync. The module overview shows a webhook badge on auto-synced modules so you always know what's wired up.

The real power: remote development with AI

You're on the train, phone in hand, chatting with Claude via the Claude app. Claude writes code and commits to a feature branch on GitHub. GitSync picks up the webhook and syncs the module to your dev server. Automatically. You open the edited webpage on your phone, check the result, give feedback, iterate. The entire development loop without ever opening a laptop.

This works just as well for teams: multiple developers push to GitHub from different machines, and the staging server always reflects the latest state - no manual deploys, no SSH sessions, no FTP. We've been using a prototype internally for a few weeks now and it's become part of our daily workflow - especially the webhook auto-sync is something we don't want to miss anymore. As proof of concept, we built the public release entirely as described above.

Technical details for the curious

The differential sync works like git itself: every file's content is hashed as `sha1("blob {size}\0{content}")`. GitHub's Trees API returns these hashes for the entire branch in a single request. GitSync computes the same hash locally. Matching hash = identical file = skip.

Requirements
ProcessWire >= 3.0 and PHP >= 7.4 with cURL

Module and Docs
GitHub: https://github.com/frameless-at/GitSync
Module Directory: https://processwire.com/modules/git-sync/

Would love to hear your thoughts, ideas, and edge cases we might not have considered!

Cheers, Mike
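The blob-hash comparison described above is easy to reproduce. Here is a minimal, self-contained sketch (not GitSync's actual code; the file path and remote hash value are just placeholders) showing how a local file's git blob SHA1 can be computed and compared with a hash from GitHub's Trees API:

<?php
// Compute the git blob SHA1 of a local file: sha1("blob {size}\0{content}")
function gitBlobSha1(string $path): string {
    $content = file_get_contents($path);
    return sha1("blob " . strlen($content) . "\0" . $content);
}

$localHash  = gitBlobSha1('site/modules/GitSync/GitSync.module.php'); // placeholder path
$remoteHash = 'e69de29bb2d1d6434b8b29ae775ad8c2e48c5391';             // placeholder hash from the Trees API

// Matching hash = identical file = nothing to download
echo $localHash === $remoteHash ? "identical - skip\n" : "changed - download\n";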
-
Hi @ryan and the ProcessWire community,

Thank you for starting this amazing discussion. Like many of you, I've been deeply exploring how to make AI agents more effective within the ProcessWire ecosystem. Hearing about the new Agent Tools and the API.md initiative is incredibly exciting!

ProcessWire's predictability and clear architecture make it exceptionally pattern-friendly for AI agents. Building on this exact philosophy, I have been developing two complementary open-source packages: processwire-console and processwire-boost. I wanted to share my architecture and findings, as they align perfectly with the goals discussed here.

1. API.md vs. AGENTS.md (Data Documentation vs. Orchestration)

@ryan your idea of adding API.md files to Fieldtype modules is brilliant and absolutely necessary. It solves the issue of the AI not knowing the exact CRUD syntax for specific module APIs. However, as @szabesz noted regarding directory structures (.agents/ vs global contexts), managing when the AI reads this context is equally critical. If we feed everything to the AI at once, we waste tokens and dilute the context window.

I see API.md and AGENTS.md as completely complementary:
- API.md (The Knowledge): native, module-level API documentation focusing purely on syntax and dataset interaction.
- AGENTS.md / map.json (The Librarian & Routing): placed in the project root, this acts as a trigger-based router. Instead of holding documentation, it lists installed modules and triggers (e.g. "Use when working with HTMX components... read site/modules/Htmx/AGENTS.md" or "Working with Repeater? read site/modules/FieldtypeRepeater/API.md").
- mcp.json: the configuration for integrating ProcessWire's context securely into Model Context Protocol (MCP) servers locally.

By combining ProcessWire's native API.md documents with a root-level AGENTS.md / map.json router, we can keep the AI deeply focused. It drastically reduces token usage and limits hallucinations, because the AI only reads the specific API.md when it is actively working on that module's scope.

2. Giving AI "Hands": The Console & Migrations

While having readable documentation is half the battle, the other half is allowing the AI to safely interact with the system. I see that AgentTools natively introduces a very cool migration runner (--at-migrations-apply) and a dedicated CLI file for agents. This is a massive step forward in preventing AI from executing dangerous ad-hoc scripts.

To build on this paradigm, my processwire-console package reimagines this CLI experience using a full Symfony Console architecture. This gives both developers and AI agents strict, typed, and predictable commands, along with a dedicated migration architecture. Instead of editing a single agent script, the AI can run independent commands like:

php vendor/bin/wire make:migration AddBlogFields
php vendor/bin/wire migrate:status
php vendor/bin/wire migrate

This provides a Laravel/Symfony-style schema migration system that AI agents understand natively. It allows them to scaffold entire schemas predictably without breaking the production environment.

processwire-boost: to give the AI safe, read-only oversight, I integrated an MCP (Model Context Protocol) server over JSON-RPC. Agents can natively execute tools like pw_schema_read (to understand the exact templates/fields currently installed) or pw_query to fetch ProcessWire data securely before deciding how to proceed.
Repositories & Working Examples

If you're interested in giving these tools a spin or looking at how the AI components communicate with each other, they are open-source here:
- processwire-boost: https://github.com/trk/processwire-boost
- processwire-console: https://github.com/trk/processwire-console

For a real-world example of a module that seamlessly integrates with this AI context architecture, you can check out the Htmx module. It not only includes its own localized AGENTS.md, but it also actively extends processwire-console by injecting its own CLI commands. For example, AI agents can scaffold native UI components directly from the terminal:

php vendor/bin/wire make:htmx-ui Card

Htmx Module: https://github.com/trk/Htmx

@ryan your approach with API.md in the core modules is the final missing piece. When ProcessWire natively exposes its capabilities clearly through text, architectures like processwire-boost with intelligent context indexing will allow agents to seamlessly crawl, understand, and reliably act upon the CMS with unprecedented accuracy.
-
@ryan It seems Claude is your real co-worker now. Does that mean we can expect major developments sooner? I'm thinking of things like official nginx support, additional databases, organizing template files into subdirectories, and so on.
-
We made the PR and will let you and Claudia decide how you want to handle a fallback. Also, we renamed .agents to agents, so FTP transfer works reliably.

It has the format I posted for templates/fields/roles/permissions. And it has various wrapper functions for things like creating pages and roles, installing modules, etc. These can live in various locations, like site/migrate.php, or inside a Foo module in Foo.migrate.php or the Foo.module.php itself, or anywhere you call $rm->migrate(). It handles dependencies between migrations gracefully. Files it doesn't handle, as far as I know.

@Peter Knight has a module for page content creation with AI that he is working on. That one does rich text content and images, I think.

Yes, that's very unfortunate indeed. I went with .agents because it's supposed to become the standard and many tools already support it. So I do the symlinking ritual until that is sorted out (if ever).

Someone made a CLI for this at https://github.com/runkids/skillshare
I haven't tried it yet, but it looks pretty impressive.
-
@gebeer Thanks! Sounds like Claudia would like a PR for the ddev update if available. For the skills stuff, thanks for explaining it all. I'll look forward to having a closer look at your commits, but one thing I noticed so far is that your version has the option to install the skill files off the PW install root in a .agents dir. But the only place PW can assume is writable is off /site/assets/. So the .agents dir off the root would work in some installs and not in others.

Thanks for the example RockMigrations file. It looks to me like the same format that the core Pages Export/Import module uses, except that that module stores them JSON-encoded. While we're calling the ones generated by Claude with the AgentTools module "migrations", they really are just repeatable logic. So the logic can be about creating/updating/deleting some pages/fields/templates, or perhaps something else.

How does the RockMigrations format work when you need some logic as part of the migration, such as creating a page, then creating another page that references that page (FieldtypePage)? You could do this with the pages export/import, but you would have to run the JSON through more than once to do it.

Also, how does it handle files? Handling files is something AgentTools does not yet do.