Posts posted by gebeer
I added an agenttools skill at https://github.com/gebeer/processwire-ai-docs/tree/main/skills/processwire-agenttools that agents can use to work with the AgentTools CLI and migrations.
The skill follows https://platform.claude.com/docs/en/agents-and-tools/agent-skills/best-practices and splits CLI usage and migrations into progressively discoverable sections so the base SKILL.md stays lightweight. I also prefer a skill over having to manually point the assistant to AgentTools' CLAUDE.md and agent_cli.md in every session where I want to use it.
While testing the skill, a common problem came up: we all work in different environments. Some use a WAMP, XAMPP, or LAMP stack on the host; others use ddev, Laradock, or other containerized solutions. The current CLAUDE.md in the module doesn't account for that, so I added a shell wrapper script that can handle host LAMP and ddev. While this is not the cleanest approach and only covers two cases, the basic skill design is still valid.
-
I really like the way things are going with ProcessWire and AI. Thank you, Ryan.
I've been a big fan and strong advocate of migrations in PW since I started using RockMigrations years ago. What makes RM a particularly strong candidate is its abstraction into a schema-like format, which is much easier to understand, read, and write than native PW API code. This is a real strength of RM, and I would prefer a schema-based approach any time over writing (or having AI write) PW API code.
Claude is very good at understanding the PW API; other models are not as strong. But they can all understand schemata, be it PHP arrays, JSON, or YAML. So I would advocate either developing an "official" PW migration schema or adopting the existing, battle-tested one from RockMigrations.
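To make the contrast concrete, here is a minimal sketch (names and option keys are illustrative only, not actual RockMigrations or core code): the same field created once with imperative PW API calls and once declared as a schema-like array:

```php
<?php namespace ProcessWire;

// Imperative: native PW API calls; order and side effects matter
$field = new Field();
$field->type = 'FieldtypeText';
$field->name = 'headline';
$field->label = 'Headline';
$this->wire->fields->save($field);

// Declarative: a schema-like array that a migration runner applies.
// Far easier for an LLM (or a human) to read, write, and diff.
$schema = [
  'fields' => [
    'headline' => [
      'type'  => 'text',
      'label' => 'Headline',
    ],
  ],
];
```

The declarative form also survives model differences well: any model that can emit valid JSON or a PHP array can emit a migration, while correct imperative API code requires real knowledge of the PW API.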
-
16 hours ago, Peter Knight said:
You'll be pleased to note it's impressed with your question
I'm blushing, haha. Tell Cursor that I'm impressed with its clear answer.
The approach makes sense, having structured data going in and out through the CLI.
What I don't quite get yet is why the local setup can't just use the same approach over HTTP as the remote one. If Cursor could shed some light on this, it would be much appreciated.
-
@Peter Knight Thank you for taking the time to give me the intel. This answers all my questions.
The local CLI caught my eye. Which one are you using? Is it home-baked, or one of https://github.com/wirecli/wire-cli, https://github.com/trk/processwire-console, or https://github.com/baumrock/RockShell ?
As for migrations, I'd say RockMigrations is pretty much feature-complete and stable. It only lacks support for some of the Pro fields, but that seems to be the case for other migration tools as well. It does support Repeater Matrix, though.
Looking forward to seeing your MCP repo once it's up.
-
9 minutes ago, adrian said:
Haha, this was recommended to me by the YT algo today. Life would be so easy if 4 plugins could solve every problem. Unfortunately, things are way more complex than that. But there are some nice tips in there for people who are getting started with Claude Code. The superpowers plugin is good. I sometimes use it for implementing bigger features.
-
1 hour ago, szabesz said:
Can you please provide up-to-date information on what we can expect? I need to tell a client about the state of draft/version management in ProcessWire, and I do not want to provide outdated information.
Yes, please provide updated info on this. We also need to replace the old ProDrafts module on a client project. Thank you.
-
This looks very interesting, although the specific publishing workflow is not something I would need. It sure must be fun to sit there and watch the agent churn along, and very satisfying to open the published URL afterwards :-)
The HTTPS API and the migration functionality are intriguing. I've been happily using RockMigrations for a few years now and wouldn't want to miss it. Your JSON-based approach is a wrapper around the PW fields/templates API, I guess. Does it support roles/permissions, module install/uninstall etc., or is that out of scope for you?
That HTTP API layer is very powerful and can be used for all kinds of things, I guess. Does it differ a lot from other API module approaches like AppAPI? Can endpoints be added in a modular fashion, and how does auth work?
Many questions, I know. Please don't feel obligated to answer them all.
Cheers
-
On 3/8/2026 at 11:36 PM, Peter Knight said:
Hey, if anyone is using Cursor for AI development, I just added some interesting functionality to the MCP module, including full 360 local/remote sync.
Awesome! The MCP should work with Claude Code, Codex, OpenCode, Pi, Windsurf etc. too, no?
-
On 2/27/2026 at 5:30 AM, wbmnfktr said:
For those who want to play around with that workflow:
I just had time to look at your repo. This is gold. Thanks so much for sharing.
-
On 2/26/2026 at 7:32 AM, wbmnfktr said:
Tools
-
OpenCode - https://opencode.ai/
Similar to Claude Code, easy to configure, and even easier to extend with custom modes, agents, skills, and whatever you might need or want. Has a great planning mode and doesn't ask unnecessary questions in the middle of tasks like Claude Code did for a while just to burn more tokens.
Kimi Code CLI (with Kimi K2.5) - https://www.kimi.com/code/en
Tested it last month, and while it's a CLI like OpenCode/Claude Code, it feels and works totally differently. It doesn't have any modes but supports AGENTS.md and SKILLS. Super fast and super capable for quick fixes, smaller features, or heavy automations.
Windsurf IDE - https://windsurf.com/editor
Like Cursor with almost identical features and a custom terminal integration; includes a browser with full access and control, which is great for debugging and UI/UX work (especially with Opus 4x). I guess most of you have seen it in the past or even tried it. It was called Codeium before, and I know some of you used that Codeium extension, which was awesome.
I really like your tool choices. Very close to mine, except I still use CC more than OC. But even that will change :-)
As for the skills approach: I, too, think that this is the way to go. Here's my little collection: https://github.com/gebeer/processwire-ai-docs
Where I would disagree is with keeping each learning in skills as well. I think a memory layer (see my processwire-memory skill based on memvid) is better suited for that.
Another interesting approach that I just found recently is https://github.com/agenticnotetaking/arscontexta which is basically skills but interlinked like a wiki so the agent always finds the information it needs.
So many things still to explore :-)
-
3 minutes ago, gebeer said:
I wanted to add: this basically gives you something like context7, but locally, with your very specific knowledge, and implemented not as an MCP but as a skill, which has less overhead in the context window.
And you could modify it easily for other knowledge. Different frameworks, whatever.
-
23 hours ago, elabx said:
Working right now on a single-file PW-API-docs database based on https://github.com/memvid/memvid. Has semantic vector search (local embedding model), BM25 and all that good stuff. Also supports CRUD. I fed it a good part of https://github.com/phlppschrr/processwire-api-docs/blob/main/api-docs/index.md .
The skill is up at https://github.com/gebeer/processwire-ai-docs/tree/main/skills/processwire-memory
You need to install memvid-cli. It's all in the README and the skill.
You can build your memory file with the docs you need from https://github.com/phlppschrr/processwire-api-docs/blob/main/api-docs/index.md
If you want me to share my mem file (~35MB), I can do that, too.
I haven't used it a lot yet, but it seems to work quite well. Maybe it needs some work on the proactive part, so that agents know when to look up stuff even if not explicitly prompted. Implementing that depends very much on the AI tools you're using. For Claude Code, hooks would be a good place. For others like Cursor, I don't know.
-
11 hours ago, bernhard said:
And you get useful results? I installed it in cursor and it was impressive to see the browser pop up, but not really useful... I even added an /auto-login route to RockDevTools to make the mcp login as superuser by default, but still it was not able to fix such a latte exception issue.
Yes, very good results. It's fast and pretty token-efficient. You can connect it to an already open browser with a logged-in session etc., so there's no need for an auto-login route. Let your agent read the MCP instructions; mine said to start Chrome with a debug flag. I pulled that up through my conversation-search MCP quickly. Here it is :-)
Launch command (detached from terminal): `setsid chromium --remote-debugging-port=9222 &>/dev/null &`
- `--remote-debugging-port=9222` enables CDP so the MCP can connect
- `setsid` creates a new session, fully detached from the terminal's process group (plain nohup doesn't survive terminal close because the terminal sends SIGTERM, not just SIGHUP)
- `&>/dev/null &` suppresses output and backgrounds it
That's for Linux but should work on your Mac, too.
In that browser you open your PW project backend, and then the MCP can connect.
That's the way it's supposed to be done, I guess.
-
Hi fellow devs,
this is a somewhat different post, a little essay. Take it with a grain of salt and some humor. Maybe some of you share a similar experience.
I don't really mean to poop on a certain group with certain preferences, but then, that's what I'm doing here. I needed to write it to unload some frustration.
No offense intended.
Good Sunday read :-)
React Is NPC Technology
Have you ever really looked at React code? Not the tutorial. Not the "Hello World." An actual production component from an actual codebase someone is actually proud of? Because the first time I did, I thought there'd been a mistake. A failed merge. HTML bleeding into JavaScript, strings that weren't strings, logic and markup performing some kind of violation you'd normally catch in code review before it got anywhere near main. "Fix this," I thought. "Someone broke this."
It looks broken because it is broken. That's the first thing you need to understand.
JSX is a category error. Mixing markup and logic at the syntax level - not as an abstraction, not behind an interface, but visually, literally, right there in the file - is the kind of decision that should have ended careers. Instead it ended up on 40% of job postings. And here's the part that actually matters, the part that explains everything:
Nobody can tell you why.
"Everyone uses it." Go ahead, ask. That's the answer. That's the complete sentence, delivered with the confidence of someone who has never once questioned whether a thing should exist before learning how it works. The argument for React is React's market share. The case for Next.js is that your tech lead saw it on a conference talk in 2021 and it was already too late. You're supposed to hear this and nod - because if everyone's doing something, there must be a reason, right?
The herd doesn't just run toward cliffs. Except. That's literally what herds do.
The web development community, bless its heart, has a category of decision I can only call NPC behavior. Not an insult - a technical description. An NPC doesn't evaluate options. An NPC reads the room, finds the dominant pattern, and propagates it. React is on every job posting = React is what employers want = React is what I need to know = React is what I reach for. The loop closes. Nobody along the chain asked if it was right. They asked if it was safe. Safe to put on a resume. Safe to recommend. Safe to defend at the standup. React is the framework you choose when you've stopped choosing and started inheriting.
The 10% who actually think about their tools - they're out there running Alpine.js. Which is 8kb. Does the same job. No build step required. Add an attribute, the thing works. Revolutionary concept. They're running htmx, which understood something profound: the web already has a protocol for moving data, and it was fine. You didn't need to rebuild HTTP in JavaScript. You just needed to reach for the right thing instead of the fashionable one.
Let's talk performance, because "everyone uses it" is already bad enough before you look at what it actually does.
React ships 40-100kb of runtime JavaScript before your application does a single thing. Your users wait while React bootstraps itself. Then it hydrates - a word that sounds refreshing and means "React redoes on the client what the server already did, because React can't help it." Then they invented Server Components to fix the problem of shipping too much JavaScript. The solution: ship different JavaScript, handled differently, with new mental models, new abstractions, new ways to get it wrong.
They called it an innovation.
I once worked with WordPress and React together. I want you to sit with that. Two philosophies, neither of which is actually correct, stacked on each other like a complexity casserole nobody ordered. WordPress solving 2003's problems with 2003's patterns. React solving 2003's problems with 2013's patterns that created 2023's problems. Together they achieved something genuinely special: all the drawbacks of both, and none of the advantages of either. The PHP you want but in a different way and the hydration you couldn't prevent, serving pages that load like it's apologizing for something.
Twenty years building for the web and I've watched frameworks rise and fall like geological events. ColdFusion, anyone? Remember when Java applets were going to be everywhere? Flash was going to be the web. Then jQuery saved us. Then Angular saved us from jQuery. Then React saved us from Angular. Rescue upon rescue, each one leaving more complexity than it cleared, each one defended by exactly the same people who defended the last one, now wearing a different conference lanyard.
ProcessWire. That's what I build with. Most developers have never heard of it - which is not a criticism, that's the evidence. You find ProcessWire because you went looking for something specific, evaluated it, and it fit. It doesn't have conference talks. It doesn't have a VC-funded developer relations team. It has a forum full of people who chose it. That's a different category of thing entirely.
The same 10% who finds ProcessWire finds Alpine. Finds htmx. Makes decisions that don't optimize for defensibility in interviews. Builds websites that load fast because they don't carry React around everywhere they go.
There's a physics concept called a local minimum. A place where a system settles because the immediate neighborhood looks stable - the energy gradient points upward in every direction, so the system stops. Stays. Convinces itself it's home. Even if a global minimum exists somewhere else, at lower energy, lighter, simpler - you'd have to climb first, and the herd doesn't climb.
React is a local minimum. The web settled here when it got tired of looking. Stable enough. Defended by enough career investment. Surrounded by enough tooling and tutorials and framework-specific bootcamps that switching costs feel existential. The ground state - simpler, faster, closer to what the web actually is - sits somewhere else, past a hill that looks too steep from inside the valley.
The ground state is always simpler. That's not a philosophical position. That's thermodynamics.
They don't want you to know that.
-
On 2/21/2026 at 6:29 PM, bernhard said:
Is it somehow possible to give the AI access to a browser so that it can try to load the page and see and fix such exceptions on its own?
I use https://github.com/ChromeDevTools/chrome-devtools-mcp for that. Very fast.
On 2/21/2026 at 6:29 PM, bernhard said:
Another thing I'd love to have is to make it PLAN upfront but then start building once that is done. It seems I always have to confirm the build step after the plan is done.
The thing about the plan is that it's supposed to be reviewed before it is applied, haha. But if you trust it without at least a quick glance, OK. I think Cursor can play sounds when it needs your attention. You could use that to get notified when you have to click the button.
20 hours ago, elabx said:
Worked on this the last couple of days: https://github.com/elabx/processwire-mcp. Just tested it yesterday, so not a lot of usage yet.
Wow. The "Full ProcessWire API access - Query, create, update, and delete pages" part is the most interesting for me here.
Working right now on a single-file PW-API-docs database based on https://github.com/memvid/memvid. It has semantic vector search (local embedding model), BM25, and all that good stuff. Also supports CRUD. I fed it a good part of https://github.com/phlppschrr/processwire-api-docs/blob/main/api-docs/index.md . The file is currently around 35 MB. Search is blazingly fast. I'm implementing it as a portable skill, not as an MCP. It needs a little more love and testing, but I'll drop it soonish.
-
17 hours ago, interrobang said:
😲 I have no idea how context7's “related skills” are created.
The complete setup, including the markdown conversion, is in the other repo https://github.com/phlppschrr/processwire-knowledge-base. The Python code for this is mostly vibe-coded with codex 5.2, and partly with gemini 3 pro.
I have to admit that I haven't looked at the resulting Python code, but at least on my computer it works reliably. The repo indexed by context7 currently only contains the Markdown version of the ProcessWire API documentation. As long as Ryan hasn't built prompt injection into his phpdocs, it should be safe.
I don't know yet how best to use the repository. I would appreciate input on how to write the skill so that the content is usable for an LLM without bloating the context window. The way via context7 is probably more promising than searching the local md files with grep.
By the way, I've also started using gemini to break down the blog articles into individual, uniformly structured snippets with frontmatter metadata. I can imagine that this would be quite good for context7. However, it hasn't been committed yet. Unfortunately, I have to get back to customer work now...
Example of a generated snippet:
---
title: "Substitute Images for Missing Resizes"
date: "2025-05-23"
category: "Hooks"
tags: [hooks, images, error-handling]
api_methods: ["Pageimage::filenameDoesNotExist"]
source_url: "https://processwire.com/blog/posts/pw-3.0.255/"
---

## Summary:
The `Pageimage::filenameDoesNotExist` hookable method allows you to provide a substitute image when a requested resized image version or its source is missing.

## Context:
This prevents "broken image" icons by falling back to a placeholder image or logging the missing file for investigation.

## Implementation:
Hook `Pageimage::filenameDoesNotExist`.

```php
$wire->addHookAfter('Pageimage::filenameDoesNotExist', function($event) {
    $filename = $event->arguments(0); // The missing file path
    // Path to your placeholder image
    $placeholder = config()->paths->templates . 'assets/images/placeholder.jpg';
    if(file_exists($placeholder)) {
        files()->copy($placeholder, $filename);
        $event->return = true; // Tell ProcessWire the file is now available
    }
});
```
Contents of related skills are not included in the docs that context7 parsed from your repo; those are separate. It should be safe to use as-is, @bernhard FYI. If you want to be 100% sure that you can trust those snippets, you'd need to go through https://github.com/phlppschrr/processwire-knowledge-base/tree/master/docs and look for prompt injections. But I think that would be overkill, tbh.
-
14 hours ago, elabx said:
Is anyone going the way of SpecKit, BMAD, Opensec, superpowers?
Tried several of them, including Kilo Code (from NVIDIA, I think), which uses a clean spec-driven workflow.
Currently I'm working on my own version of that, with prompt templates, verification through hooks, and all that good stuff.
Spec-driven is a good approach, especially for larger features. For small things I'm still using good old chat in Claude Code.
-
Would love to have @Jonathan Lahijani chime in here. Maybe he's got news about his MCP project :-)
-
I just published https://github.com/gebeer/conversation-search-mcp
It's a very minimal and fast MCP server that can search over JSONL session transcripts.
It can be pointed to a folder with those sessions and then do BM25 searches for relevant context.
Claude Code sessions all get stored in ~/.claude/projects/* folders. One folder per project.
I have pointed mine to a folder in there that contains all my ProcessWire projects. So it has all the past conversations I've had with Claude Code in these projects. There's a lot of valuable data in there: how the assistant applied fixes, what context I gave it, what it did wrong, what corrections I had to make, etc.
When the MCP server is active, the assistant can use it to search for relevant context.
Currently I'm working on a hook system that auto-injects relevant context from a search based on the current prompt. It's an experiment for now, but it might be a good way to enhance the assistant's understanding with relevant context.
-
48 minutes ago, szabesz said:
Yes, the issue with blog articles is that they are verbose, and there is no need for such verbosity for an LLM. However, instead of trying to squeeze blog posts and API docs content and examples into a context window, it would be better to do some "sort of LLM training". Like LoRAs for image models. Does anyone have an understanding of how such a thing could be done?
I have a slight understanding on a high level. The greatest challenge is training data collection and formatting, not so much the LoRA training itself. I spent some time about a year ago on how to approach the data side of things, couldn't find any good sources back then, and gave up on it.
IMO it's better to work with ICL (in-context learning) and a SoTA model than with a specialized but weaker one. That might not be true anymore, though.
-
3 hours ago, bernhard said:
Hey @gebeer @interrobang @Peter Knight love your input but I think we are getting a little off-topic? I have created a new AI+PW thread here:
Hope that makes sense!
Makes total sense :-)
-
16 hours ago, interrobang said:
Wow. That was quick. Thank you!
-
4 minutes ago, interrobang said:
Do you know if we can create our own repo that only contains the Markdown API docs and use it for context7, or does it have to be the official repo?
You can surely do that; context7 will accept it, no problem. You could add a note: unofficial but AI-friendly docs for the great ProcessWire CMS/CMF lol
-
3 minutes ago, pideluxe said:
Slightly off-topic, but looking how others do similar things could be helpful: Drupal 10’s "Recipes" - modular YAML configuration packages for specific use cases like blogs or e-commerce - could inspire a similar approach in ProcessWire to streamline project setups. While PW already excels in flexibility, a "Recipe Manager" module could allow users to define, share, and install pre-configured templates, fields, and modules via JSON/YAML files, making it easier to replicate common setups (e.g., portfolios, multilingual sites) without manual repetition.
Existing tools like RockMigrations or Site Profiles already cover parts of this, but a dedicated system could automate dependencies, roles, and content structures while keeping PW's core simplicity intact. Community-driven recipes (e.g., for SEO, galleries, or contact forms) could further accelerate development - especially for agencies handling repetitive project types. Would such a system add value, or does PW's current flexibility already cover these needs?
Good idea, thank you! You might want to create a separate thread for further discussion. I'm sure people are interested in working together on this.
-
New blog: ProcessWire and AI
in News & Announcements
Posted
Happy Easter @ryan,
here in Thailand there is no Easter holiday, so I'm spending quality time with my AI agents instead of with the family :-)
Yes. Agent skills are becoming a standard (https://agentskills.io/home), and many coding agents (Claude Code, Codex, Cursor, Amp, Cline, Droid, Pi agent, and more) already support it. Most of those also support loading skills from the local project folder .agents/skills. Claude Code is an exception here; it needs you to have skills in .claude/skills. It's part of their vendor lock-in strategy.
Claudia is kind of opinionated here, haha. The Agentic AI Foundation (https://aaif.io/), which operates under the umbrella of the Linux Foundation, has established a quasi-standard for coding agents to read instructions from AGENTS.md (https://github.com/agentsmd/agents.md), and an extensive list of tools already follows that standard. So if you want to support a wider range of tools, AGENTS.md would be the way to go.
You need to put the skills in the current project's .claude/skills, and Claude Code will pick them up from there automatically after a session restart. You can list active skills with the /skills command. So Claudia doesn't stand a chance of escaping those once they're there :-) Skills are all about token efficiency. Imagine the agent having to read through core files every time it wants to do a migration or use the CLI; that burns through lots of tokens. With the skill, the agent has compressed information that it can progressively discover when needed, and it can then do targeted searches in the codebase for how to use a specific API. That's a win.
The wrapper script is an attempt to have the php index.php ... commands work in two specific environments: LAMP on the host and ddev. I think it is nearly impossible to cover all scenarios for every developer, and it should be the responsibility of the developer to make things work in their respective environment. It's a deep rabbit hole if you want to cater to all situations.
I forked AgentTools and implemented all of the above at this branch: https://github.com/gebeer/AgentTools/tree/feature/agenttools-skill
It contains the skill, and I added a module config setting that will copy the .agents/skills folder to the project root and also update it on module upgrades. People using Claude Code can just symlink .agents/skills to .claude/skills. I'm happy to make a PR if you want.
Nice move of Claudia to reference my repo and her chat invite was well received by my Claudius: "And the "chat sesh" invite for me made me smile. I'm here whenever."
See the branch of my fork. The skill actually replaces agent_cli.md, and the README there is updated to reflect the new structure.
Sure can. RockMigrations uses arrays to define migrations. They can either all live in one giant blob or be separated into files. Here's an example migration for the job template, job.php:
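A hedged sketch of what such a file can look like (field names and option keys here are illustrative, not copied from a real project; check the RockMigrations docs for the exact schema):

```php
<?php namespace ProcessWire;

/** @var RockMigrations $rm */
$rm = $this->wire->modules->get('RockMigrations');

$rm->migrate([
  'fields' => [
    // only properties that deviate from defaults need to be listed
    'job_title' => [
      'type'  => 'text',
      'label' => 'Job title',
    ],
    'job_description' => [
      'type'  => 'textarea',
      'label' => 'Description',
    ],
  ],
  'templates' => [
    'job' => [
      'fields' => ['title', 'job_title', 'job_description'],
      'tags'   => 'jobs',
    ],
  ],
]);
```

Whatever the exact keys, the point holds: the whole migration is declarative data, which is easy for both humans and LLMs to read, write, and diff.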
Pretty clean and slick. Not all properties need to be defined, only some core ones and the ones that deviate from defaults.
To produce this format, RM uses PW's native $item->getExportData() under the hood and then cleans/transforms/normalizes the result. When applying a migration, it runs those arrays through createTemplate() and createField() (plus permissions/roles) methods, which are wrappers around the native PW API. So while there's quite some abstraction happening there, it enables an easy-to-read, easy-to-construct format. @bernhard put a lot of thought into this regarding the timing of migrations, dependencies, etc. Kudos to him.
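As a rough illustration of that normalization step (the key names and defaults here are made up; RM's actual cleanup is more involved):

```php
<?php
// Simulated output of Field::getExportData(); in a real migration
// this would come from a live ProcessWire field object
$raw = [
    'name'      => 'headline',
    'type'      => 'FieldtypeText',
    'label'     => 'Headline',
    'collapsed' => 0,
    'required'  => 0,
];

// Drop keys whose values match the assumed defaults, so the stored
// migration array only lists meaningful deviations
$defaults = ['collapsed' => 0, 'required' => 0];
$clean = array_diff_assoc($raw, $defaults);

print_r($clean); // only name, type, and label remain
```

Applying a migration then just reverses the trip: the array is merged back over the defaults and handed to the wrappers around the native API.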