All Activity
- Past hour
-
That’s a good set. Thanks for the feedback. Re: the top-10 limit, I’m thinking of dashboard space, where the dashboard contains rolled-up data for many metrics. But you’ll be able to click through to a detailed page listing the top X. In the meantime, if you view the library in table mode and sort by size, you’ll get the same data.
- Today
-
szabesz started following JSON+LD Schema module, Tracy and AI/LLMs, and Analytics for MediaHub - taking requests
-
This workflow would be really powerful, indeed. Using the MCP server as a tool, the agent could use it on its own, but also the developer could ask the LLM/agent in the console, as Adrian mentions:
-
I would give these (and related stats, listings) high priority. (BTW, why just 10?)
-
@psy Thanks! Note that the link above points to /talk/topic/13598-jsonld-schema-module/?do=getNewComment. Is that a mistake, perhaps? Another note: I find it confusing that you edit the initial post and also clear the trace of its past state(s). This way it looks like information from 2016, but it is clearly not, and the discussion following the first post also looks odd and confusing to a newcomer.
-
I'm not getting it done as quickly as I'd like; most of the modules were written in February and March, when I had free time. Regarding the topic: yes and no. I want to standardize everything so that it looks native, everywhere and always. The UI and UX should also be convenient, so that when you open the Tracy dashboard at 3 AM it doesn't hit you like a bright light. Nothing more. @Ivan Gretsky
-
Ivan Gretsky started following PW 3.0.259 – Core updates, Important / Interesting Updates, and Tracy and AI/LLMs
-
@maximus Seeing the speed with which you produce cool new modules, maybe you could come up with a custom theme CSS pretty quickly:
- https://forum.nette.org/cs/31690-dark-mode-pro-tracy-css (translate from Czech)
- https://github.com/nette/tracy/blob/ab7b1d19c0de130bed6de5b30c9ad7884131b465/src/Tracy/Debugger.php#L107-L111
It is already possible to switch the debugger output to dark natively with Debugger::$dumpTheme = 'dark'; (https://tracy.nette.org/en/dumper), but as I understand it, that is already the default.
-
I have read this a couple of times, but as I am not as fluent in AI dev, I couldn't quite grasp the whole workflow idea. So let me explain it as I understood it and make my points. I think most devs develop either in an IDE like VS Code with an AI extension, Zed and such, or in a CLI with Claude Code, OpenCode, etc. They want their agents to be able to test a certain URL and see if there are any errors or warnings. I guess that's already possible by telling agents where the Tracy logs are and how to read them; AI has, or could easily gain, access to those anyway. An MCP server could be an intermediary here, but does it help with anything? Another way would be to use the newly introduced CLI modules, which could query a URL and return it along with debug info from Tracy. Maybe AI agents can now debug in the browser, but that seems like an overly complicated scenario. Am I misunderstanding something?
- Yesterday
-
CLI modules sound great, can't wait to play around with that! Two things I hope ProcessWire will eventually tackle natively (and also the things I currently think are kind of its weak points) are scheduled tasks and queues. For reference: https://laravel.com/docs/13.x/scheduling (or, pardon my French, https://developer.wordpress.org/plugins/cron/ and https://developer.wordpress.org/cli/commands/cron/) and https://laravel.com/docs/13.x/queues. I would assume that Jonathan was thinking of something similar, but I won't try to speak for him 🙂
-
What is the best way to move a ProcessWire site to a new server in 2026? I have backups from my old sites - almost 2 years old by now... - and have installed empty ProcessWire sites on my new server. Do the PW versions still have to be the same for importing the old templates, database, etc. to work? How can I find out from my backups what version of PW they were on, and could I still find that version and downgrade to it? Or are there now other ways to rebuild the old sites?
-
Thanks @maximus. I think I managed to achieve what I wanted by replacing the /wire folder from the installation package with a symlink to one shared /wire folder, created with the following command:

ln -s /var/www/wire /var/www/html/oneofmywebsites/wire

Then install as normal with its own database. That seems to work fine, right? Any drawbacks to watch out for? Why would 'symlinks to /wire' be something to avoid - 'can cause issues with __DIR__ resolution in some PW internals'?
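For anyone wondering what that __DIR__ caveat means in practice: PHP resolves symlinks when computing __FILE__/__DIR__, so code inside a symlinked /wire sees the shared real path rather than the per-site path. A minimal standalone sketch (hypothetical temp-dir paths, not ProcessWire code) demonstrating the behaviour:

```php
<?php
// Create a file in a real directory, symlink to that directory,
// then include the file *through* the symlink.
$real = sys_get_temp_dir() . '/wire-real';
$link = sys_get_temp_dir() . '/site/wire';
@mkdir($real, 0777, true);
@mkdir(dirname($link), 0777, true);
file_put_contents("$real/probe.php", '<?php return __DIR__;');
@unlink($link);
symlink($real, $link);

// __DIR__ inside the included file has symlinks resolved: it reports
// the shared real directory (.../wire-real), not .../site/wire.
$seen = include "$link/probe.php";
echo $seen, PHP_EOL;
echo $seen === realpath($real) ? "resolved to real path\n" : "kept symlink path\n";
```

So any PW internal that builds a path relative to __DIR__ inside /wire would compute it against /var/www/wire, not your site's document root, which is presumably where the warning comes from.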
-
A lot of things have changed in the world, and I do not quite like many of them... But what is happening here right now with ProcessWire is just amazing! Ryan finally found himself a companion to work with, and they are both doing really well! I was kind of concerned about PW development, but not anymore.
-
Peter Knight started following PW 3.0.259 – Core updates
-
Amazing stuff. And support for goose turd green!! 😀
-
Hey everyone! Pushed a big update to WireWall today.

The main addition is a dashboard module — install ProcessWireWall alongside the main module and you get a live stats page at Admin → Setup → WireWall. It shows blocked/allowed counts, a 24-hour chart, top block reasons, top countries, top IPs, active bans with countdown timers, and a recent events table. Works in both light and dark admin themes since it reads PW CSS variables.

Also rewrote the settings page from scratch — went from 15+ scattered fieldsets down to 10 logical sections. City and subdivision blocking options now only show up if you actually have GeoLite2-City.mmdb installed, which cleans things up a lot.

A few security fixes in this release too: proxy headers like CF-Connecting-IP are now validated against Cloudflare's published IP ranges before being trusted (previously any client could spoof them), unserialize() in the cache layer got hardened, and some overly broad AJAX bypass patterns were tightened up. Silent 404 mode now throws ProcessWire's native 404 page instead of plain text.

GitHub: https://github.com/mxmsmnv/WireWall
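For the curious, the "validate proxy headers against published IP ranges" fix boils down to a CIDR membership check on REMOTE_ADDR before trusting CF-Connecting-IP. A rough sketch of the idea — the function names and the abbreviated range list here are illustrative, not WireWall's actual code (the real list comes from https://www.cloudflare.com/ips/):

```php
<?php
// Return true if IPv4 address $ip falls inside CIDR block $cidr.
function ipInCidr(string $ip, string $cidr): bool {
    [$subnet, $bits] = explode('/', $cidr);
    $mask = -1 << (32 - (int) $bits);
    return (ip2long($ip) & $mask) === (ip2long($subnet) & $mask);
}

// Illustrative subset of Cloudflare's published IPv4 ranges.
$cloudflareRanges = ['173.245.48.0/20', '103.21.244.0/22', '104.16.0.0/13'];

// Only honour the CF-Connecting-IP header when the direct peer
// (REMOTE_ADDR) is a verified Cloudflare proxy; otherwise the header
// is client-controlled and spoofable.
function trustedClientIp(string $remoteAddr, ?string $cfHeader, array $ranges): string {
    foreach ($ranges as $cidr) {
        if (ipInCidr($remoteAddr, $cidr)) {
            return $cfHeader ?: $remoteAddr; // proxy verified, header usable
        }
    }
    return $remoteAddr; // not Cloudflare: ignore the header
}

echo trustedClientIp('104.16.1.2', '203.0.113.9', $cloudflareRanges), PHP_EOL;   // 203.0.113.9
echo trustedClientIp('198.51.100.7', '203.0.113.9', $cloudflareRanges), PHP_EOL; // 198.51.100.7
```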
-
@Jonathan Lahijani tell me more about the queue system?
- Last week
-
Jonathan Lahijani started following PW 3.0.259 – Core updates
-
Wow. These "ProcessWire as a web application framework"-type updates are coming in strong! Migrations, CLI, tests, AI. One can wish for maybe a first-party queue system too. 😄
-
ryan started following PW 3.0.259 – Core updates
-
This week we have ProcessWire 3.0.259, which includes several improvements, but my favorite is the addition of a new module type called "CliModule", short for "Command Line Interface Module". CliModules are modules that can be run from the command line. To list the available actions from command line modules, type "php index.php" in ProcessWire's installation directory. If "php" is not in your path, you'll have to type "/path/to/php index.php" instead, or add it to your path. Here's example output on my installation:

As you can see above, I've got AgentTools, WireTests and an example "Hello World" CliModule showing the available command line options. If I want to execute one of the commands, I just type what it indicates. For example, here I will run `php index.php test FieldtypeText` and here's the output:

Here's a simple example of a CliModule:

```php
<?php namespace ProcessWire;

class HelloWorldCli extends WireData implements Module, CliModule {

  public static function getModuleInfo() {
    return [
      'title' => 'Hello World CLI module',
      'description' => 'Just an example',
      'version' => 1,
      'cli' => 'hello', // Example: php index.php hello
    ];
  }

  public function executeCli(array $args) {
    $command = $args[0] ?? '';
    $name = isset($args[1]) ? $args[1] : 'friend';
    if($command === 'hi') {
      echo "Hello there $name!";
    } else if($command === 'bye') {
      echo "Goodbye $name, see you later!";
    } else {
      echo "Specify 'hi' or 'bye' optionally followed by a name";
    }
  }

  public function getCliCommands() {
    return [
      'hi' => 'Say hello',
      'bye' => 'Say goodbye',
    ];
  }
}
```

For more details on the CliModule format, see wire/core/CliModule.php

Improvements have continued with the AgentTools module. This week we added:
- New multi-model support: You can now configure multiple different agents in the module and choose which one you'd like to use from the Engineer screen. Details
- New agent-memory support: Now when you make a request of the Engineer, it remembers it for follow-up questions and changes. It keeps a conversation history for context of what you are working on. Details
- New support for subagents: This enables any of the agents to launch additional agents when/where it helps to do so. For instance, specialist agents, or lower-cost agents for simple jobs, and who knows what else. Claude requested the feature and also implemented it, so I'll be interested to see how it gets used. Details
- New agents configuration screen where you can define up to 10 agents (that's plenty, right?). Details

Also new this week is WireTests, a new testing suite module for ProcessWire. This first version focuses on testing all of ProcessWire's Fieldtype modules (including a few ProFields ones as well), but it's easy to add tests for any kind of module type. So we'll be adding more tests and improving existing tests as this module moves forward. For details head on over to: WireTests

Thanks for reading and have a great weekend!
-
Maybe it would be helpful to have an AI prompt (like Ryan's Agent Tools) built-in to the Console panel so you can prompt your way to a script, or ask it to fix/extend an existing script?
-
Peter Knight started following Analytics for MediaHub - taking requests
-
Hi, I think Analytics for MediaHub would be a really useful addition for power users. Some of you mentioned you manage thousands of images. Maybe with MediaHub that might be less, given the shared image concept. I'll be starting shortly, so if you have any requests, just add them here. Phase 1 will be an emphasis on data vs big dashboards etc. Metrics we could surface…

# MediaHub Analytics — Metric Ideas

## Asset Inventory & Volume
- Total asset count (all time)
- Asset count by type (image, video, document, audio, etc.)
- Coloured storage usage bar by type (like the iCloud bar)
- Total storage consumed, broken down by type
- Average file size by type
- Largest single assets (top 10)
- Assets exceeding a defined file size threshold

## Usage & Engagement
- Top 1 and top 9 most-used assets (by placement count)
- Assets used on the most pages
- Assets used more than once (vs. unique placements)
- Most-used asset by type (e.g. most-used video)
- Assets referenced in TinyMCE/rich text fields vs. structured fields
- Pages with the most assets total
- Pages with the most images specifically
- Pages with the highest asset variety (mixed types)

## Waste & Orphan Detection
- Unused assets (uploaded but placed nowhere)
- Assets uploaded but never used in a TinyMCE field specifically
- Assets that were used but the page/entry has since been deleted
- Duplicate or near-duplicate filenames
- Assets with no alt text or metadata

## Crops & Transforms
- Images with the most crop variants
- Images with crops defined but never rendered
- Images with no crops defined at all
- Most common crop ratios/dimensions used across the hub

## Age & Freshness
- Recently added (last 7, 30, 90 days)
- Oldest assets in the hub
- Oldest assets that have never been used
- Assets not updated or replaced in over X months
- Upload velocity over time (assets added per week/month)

## Content Quality & Hygiene
- Assets missing required metadata (title, alt text, caption, tags)
- Images below recommended resolution for their usage context
- Assets with broken or missing source files
- Assets with no focal point set (if supported)
- Untagged or uncategorised assets

## People & Process
- Assets uploaded per user/author
- Which users upload the most unused assets
- Upload activity by day of week or time of day
- Most active uploaders in the last 30 days

## Search & Filter Behaviour
- Most searched terms (by frequency)
- Most searched terms with zero results
- Most used filters (type, date, tag, label, etc.)
- Filter combinations used most often together
- Searches that result in no action (user searches but doesn’t select anything)
- Most abandoned searches (searched, filtered, then left)

## Collections & Folders
- Largest collections by asset count
- Largest collections by total storage size
- Most nested / deepest folder structures
- Collections with the most unused assets inside them
- Empty collections (created but never populated)
- Collections that haven’t been updated in over X months
- Most viewed or accessed collections
- Collections with assets shared across the most pages

## Labels (Library / Storage Organisation)
- Asset count per label
- Storage volume per label
- Labels with the most unused assets
- Labels with no assets assigned (orphan labels)
- Most combined labels (which labels appear together most)
- Unlabelled assets (no label assigned at all)

## Tags (Display / Website Facing)
- Most used tags by asset count
- Tags applied to assets that are never actually used on the website
- Assets with the most tags applied
- Untagged assets
- Tags that are never searched or filtered by visitors
- Tag overlap — assets sharing the same tag cluster (useful for spotting redundancy)
- Most used tag per asset type (e.g. most common image tag vs. video tag)

-----

**Other high-value metrics for power users:** duplicate detection, metadata completeness scoring, upload velocity trends, and per-user waste ratios (who’s uploading assets that never get used). The type of useful info that might surface process problems rather than just content problems.
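Many of the inventory metrics above reduce to a single pass over the asset files, grouping by type. As a standalone sketch of the "total storage consumed, broken down by type" idea (the directory path and extension-to-type map are hypothetical, not MediaHub API):

```php
<?php
// Coarse extension-to-type map (illustrative, extend as needed).
$typeByExt = [
    'jpg' => 'image', 'jpeg' => 'image', 'png' => 'image', 'webp' => 'image',
    'mp4' => 'video', 'mov' => 'video',
    'pdf' => 'document', 'docx' => 'document',
    'mp3' => 'audio',
];

// Walk a directory tree and sum file sizes per asset type.
function storageByType(string $dir, array $typeByExt): array {
    $totals = [];
    $it = new RecursiveIteratorIterator(
        new RecursiveDirectoryIterator($dir, FilesystemIterator::SKIP_DOTS)
    );
    foreach ($it as $file) {
        if (!$file->isFile()) continue;
        $ext = strtolower($file->getExtension());
        $type = $typeByExt[$ext] ?? 'other';
        $totals[$type] = ($totals[$type] ?? 0) + $file->getSize();
    }
    arsort($totals); // largest consumer first, iCloud-bar style
    return $totals;
}

// Example: print_r(storageByType('/path/to/site/assets/files', $typeByExt));
```

The same loop, counting instead of summing, gives "asset count by type" and "average file size by type" for free.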
-
Hi everyone, in particular @ryan @Peter Knight @ukyo @gebeer @maximus, who seem to have been most AI-active lately. I've just added dai() and bdai() dumping calls so that objects and arrays are rendered in a plain text format more friendly to LLMs, but I am curious what AI/LLM integration features you think would be most useful? Claude suggested an MCP server - here is its plan. Does this sound useful? Any other ideas?

Two processes, loosely coupled:

```
┌──────────────────┐                     ┌──────────────────────────┐
│ Claude / Cursor  │     stdio MCP       │  TracyDebugger site      │
│ client           │ ─────────────────►  │                          │
│                  │     HTTP + token    │  tracy-ai/* endpoints    │
└──────────────────┘ ◄─────────────────  └──────────────────────────┘
```

MCP server — a tiny program the agent launches over stdio (the MCP transport). Ships as a sibling module (TracyDebuggerMCP/) or a standalone npm package the user npx's.

TracyDebugger HTTP endpoints — new authenticated tracy-ai/* routes inside the ProcessWire site. The MCP server is just a thin translator between MCP tool calls and these HTTP requests.

The MCP server holds no site logic. It's a dumb adapter. All the real work (reading panels, redacting secrets, rendering plaintext) stays inside TracyDebugger where the ProcessWire API is available.

What the agent sees

A handful of tools in the MCP catalog:
- tracy_export_bundle(preset: "debug" | "performance" | "template" | "full")
- tracy_get_request_info()
- tracy_get_last_errors(limit: int = 10)
- tracy_get_slow_queries(limit: int = 10)
- tracy_get_template_schema(template: string)
- tracy_list_dumps()
- tracy_run_console(code: string) ← gated, opt-in only

Every tool returns the scrubbed plaintext/JSON produced by AIExport — same output Phase 2's "Copy" button produces.

Config on the site

New module-config section:
- aiExportHTTPEndpointEnabled (default off)
- aiExportMCPToken — a random token generated once per site, shown to the user to paste into their MCP client config
- aiExportAllowConsoleExec (default off) — gates tracy_run_console
- aiExportAllowedIPs — optional whitelist

Config on the client

User's ~/.config/claude/mcp.json or equivalent:

```json
{
  "mcpServers": {
    "tracy": {
      "command": "npx",
      "args": ["-y", "tracy-mcp"],
      "env": {
        "TRACY_URL": "https://mysite.test",
        "TRACY_TOKEN": "<paste token from module config>"
      }
    }
  }
}
```

The agent launches the MCP server locally; the MCP server talks to the site over HTTPS with the token.

Auth
- Per-site token (generated in module config, rotateable).
- Token sent as Authorization: Bearer … header on every HTTP call.
- Optional IP whitelist on the site side.
- tracy_run_console additionally requires aiExportAllowConsoleExec=true — otherwise the MCP server gets a 403 and reports "console execution disabled for this site" to the agent.

Example flow — agent debugging an error
1. User in Claude: "Why is /about/team throwing a 500?"
2. Agent calls tracy_export_bundle(preset: "debug").
3. MCP server hits GET https://mysite.test/tracy-ai/export?bundle=debug with the bearer token.
4. Site responds with scrubbed JSON: request info, PW info, last error with stack, slow queries, recent PW logs.
5. Agent reads the traceback, sees TemplateFile.php:123 Undefined index "featured_image", asks tracy_get_template_schema(template: "team").
6. Site responds with the template's fields — no featured_image field exists.
7. Agent suggests the fix, possibly calls tracy_run_console (if enabled) to verify.

No human pasting. Agent pulls what it needs on demand, scoped by the tool it calls.

What ships where
- In TracyDebugger itself: the tracy-ai/* HTTP endpoints + auth + token config + AIExport (already built in Phase 1, extended in Phase 2).
- In the MCP server (separate repo, ~200 lines): tool definitions, HTTP calls, response shaping for MCP.

This separation matters because the MCP server can be installed independently of the site, and the site is still useful without it (you can hit tracy-ai/export with curl directly).

Footprint on production: zero unless you explicitly enable it. The endpoints, token, and MCP config are all opt-in behind module settings.

That's the shape. The main design choices worth confirming before building:
1. Token-only auth, or also require the existing Tracy access? — i.e., should the agent's token have to belong to an allowed Tracy dev user, or is a separate machine token fine? I'd lean separate machine token for agents; reusing session auth is awkward over stdio.
2. Read-only by default? — I strongly recommend yes. tracy_run_console is the only write path and should be a separate opt-in.
3. Does the MCP server live in this repo or a separate repo? — I'd say separate. Different language, different release cadence, and the site works without it.
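The auth piece of the plan is small: compare the bearer token in constant time before serving any tracy-ai/* request. A sketch of what that gate might look like on the site side (function name, header handling, and the token value are made up for illustration; hash_equals() is real PHP and avoids leaking token bytes through timing differences):

```php
<?php
// Reject the request unless it carries the site's machine token.
function checkBearerToken(array $headers, string $expectedToken): bool {
    $auth = $headers['Authorization'] ?? '';
    if (strncmp($auth, 'Bearer ', 7) !== 0) return false;
    // Constant-time comparison of the presented token vs. the stored one.
    return hash_equals($expectedToken, substr($auth, 7));
}

$expected = 'tr_0123456789abcdef'; // hypothetical per-site token from module config

var_dump(checkBearerToken(['Authorization' => "Bearer $expected"], $expected)); // true
var_dump(checkBearerToken(['Authorization' => 'Bearer wrong-token'], $expected)); // false
var_dump(checkBearerToken([], $expected)); // false
```

The optional IP whitelist and the aiExportAllowConsoleExec gate would just be additional checks in front of this one.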
-
- You can now cancel a long-running console panel script.
- There are new dai() and bdai() methods which dump the contents of objects etc. in a plain text format that is more friendly for consumption by LLMs.
-
Ha! I love discovering new stuff like that and then feeling, how did I live without it? I think the tree drawer is only available since one of the later UI themes.
-
I added the ProcessWire namespace as shown below, and this seems to have fixed the "wireRenderFile not found" error; the page now displays. I don't know why it works without it on my localhost. Perhaps it's something to do with this forum post I found, which gave me the idea to try adding the namespace: https://processwire.com/talk/topic/11815-undefined-variable-pw-3-wirerenderfile-use-compiled-file/#comment-109884

```php
<?php namespace ProcessWire; ?>
<div id="ajax-content" pw-replace>
  <?=wireRenderFile('_ajax-home.php', array('id' => $page->id))?>
</div>
```
-
I know a reasonable amount about the output formatting basics, but I keep finding it behaving unreliably, so I'm looking to improve my understanding or find fixes.

Example: a template ('basic page') contains a Page Table Next (ptn) field. One of the templates in use in the ptn field has a repeater field. I'm rendering with Latte, and I want variables holding Page/PageArray objects exposed to my templates to have of(TRUE).

Let's say $thePage holds the basic page in question. There are two overarching contexts, front end and back end. On admin screens pages default to of(FALSE); on the front end, of(TRUE). This is already a bit of a problem that needs a workaround, since Page Table Next renders the front-end output in the back end. But the question is, after $thePage->of($bool), what is the output formatting state of:
- $thePage->ptn->first ?
- $thePage->ptn->first->repeaterField->... ?

It's not reliably $bool. I've sometimes tried to bolster the reliability by also calling $wire->pages->of($bool), but I still find it's not always as expected. In the case of linked pages, as above, how does an instantiated page object know whether its output formatting should be on or off? Where is it inherited from? None of the following appear reliably correct (I could be wrong, there's a lot of combos):
- It could be from the $thePage Page object.
- It could be from $pages.
- It could be from the back/front end context. (Worst case, since then you have to explicitly call of() on every page you reference, which makes a real mess of templating, requiring a temporary variable to store the page so you can make the call before using a property.)
- It could be to do with one of the first two options at the time the referenced page is loaded, which would account for the unreliability in the case that rendering involves a process where of() is called in turn with FALSE and TRUE...

Can anyone help? I guess ideally what I'm after is $thePage->setOutputFormattingOnSelfAndAllReferencedPages($bool).

(Aside: yes, I'm aware of using $thePage->getFormatted() and getUnformatted(), but if you have to rely on these you can't use more convenient forms like ->each() or ->get() etc.)
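For what it's worth, the hypothetical helper at the end could be approximated by walking the field values yourself. This is only a sketch against my understanding of the API ($page->of(), getUnformatted(), and iterating $page->template->fieldgroup are real ProcessWire calls; the recursion policy and function name are my own invention, and it won't cover pages that get re-loaded lazily later):

```php
<?php namespace ProcessWire;

// Hypothetical helper: set output formatting on a page and on every
// Page / PageArray value reachable through its fields, a couple of
// levels deep (so repeater items inside a PTN item get covered too).
// NOT core API - just a sketch of the idea.
function setOfDeep(Page $page, bool $of, int $depth = 2): void {
    if($depth < 0) return;
    $page->of($of);
    foreach($page->template->fieldgroup as $field) {
        // getUnformatted() so reading the value doesn't itself
        // depend on the formatting state we're trying to set.
        $value = $page->getUnformatted($field->name);
        if($value instanceof Page) {
            setOfDeep($value, $of, $depth - 1);
        } else if($value instanceof PageArray) {
            foreach($value as $p) setOfDeep($p, $of, $depth - 1);
        }
    }
}

// Usage: setOfDeep($thePage, true); // then hand $thePage to Latte
```

No idea whether this survives all the combos you describe, but it at least makes the "explicit of() on every referenced page" workaround a one-liner.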