bernhard Posted 20 hours ago

Hey everyone, I've noticed that AI-related discussions are popping up more and more across the forum, but they're scattered across different threads and often go off-topic (guilty as charged). So I thought it's time we create a dedicated place to collect our experiences, tools, and workflows around using AI with ProcessWire.

Why this thread? There are several existing discussions that touch on the topic:

- My recent post about Cursor turned into a broader AI conversation that drifted off-topic (link)
- There's a thread about MCP (Model Context Protocol) and ProcessWire (link)
- @gebeer started a thread about creating better Markdown documentation for ProcessWire - IMHO it was more of a request than a howto (link)

All of these are related, but none of them serve as a central hub for the bigger question: how do we best leverage AI in our day-to-day ProcessWire development?

What I'd love to collect here:

- What's your current setup? Which AI tools are you using (Cursor, GitHub Copilot, Claude, ChatGPT, something else)? How did you integrate them into your workflow?
- What works well? Where does AI genuinely save you time with ProcessWire? Module development, migrations, frontend templating, debugging, writing selectors, documentation...?
- What doesn't work (yet)? Where do the current AI tools fall short when it comes to PW specifically? Is it the lack of training data, the API structure, something else?
- Context & documentation: How do you feed ProcessWire knowledge to your AI? Custom rules, project documentation, Markdown exports of the PW docs, MCP servers?
- Tips & tricks: Any prompts, configurations, or workflows that made a real difference for you?

Looking forward to your input!
bernhard Posted 19 hours ago (Author)

20 hours ago, interrobang said:
23 hours ago, gebeer said: You can surely do that. context7 will accept it no problem. could add a note: unofficial but AI-friendly docs for the great ProcessWire CMS/CMF lol
Done. https://context7.com/phlppschrr/processwire-api-docs

This is a quote from the other thread, but I think it fits better here. @interrobang this looks impressive. Would you mind sharing more info about how it was built, how it can be used, and how we can make sure we don't get prompt-injected something in Chinese ^^
interrobang Posted 17 hours ago

2 hours ago, bernhard said: This is a quote from the other thread, but I think it fits better here. @interrobang this looks impressive. Would you mind sharing more info about how it was built, how it can be used, and how we can make sure we don't get prompt-injected something in Chinese ^^

😲 I have no idea how context7's "related skills" are created. The complete setup, including the markdown conversion, is in the other repo: https://github.com/phlppschrr/processwire-knowledge-base. The Python code for this is mostly vibe-coded with codex 5.2, and partly with gemini 3 pro. I have to admit that I haven't looked at the resulting Python code, but at least on my computer it works reliably.

The repo indexed by context7 currently contains only the Markdown version of the ProcessWire API documentation. As long as Ryan hasn't built prompt injection into his phpdocs, it should be safe.

I don't know yet how to best use the repository. I would appreciate input on how to write the skill so the content is usable for an LLM without bloating the context window. The route via context7 is probably more promising than searching the local md files with grep.

By the way, I've also started using gemini to break the blog articles down into individual, uniformly structured snippets with frontmatter metadata. I can imagine that this would be quite good for context7. However, it hasn't been committed yet. Unfortunately, I have to get back to customer work now...

Example of a generated snippet:

---
title: "Substitute Images for Missing Resizes"
date: "2025-05-23"
category: "Hooks"
tags: [hooks, images, error-handling]
api_methods: ["Pageimage::filenameDoesNotExist"]
source_url: "https://processwire.com/blog/posts/pw-3.0.255/"
---

## Summary:
The `Pageimage::filenameDoesNotExist` hookable method allows you to provide a substitute image when a requested resized image version or its source is missing.

## Context:
This prevents "broken image" icons by falling back to a placeholder image or logging the missing file for investigation.

## Implementation:
Hook `Pageimage::filenameDoesNotExist`.

```php
$wire->addHookAfter('Pageimage::filenameDoesNotExist', function($event) {
    $filename = $event->arguments(0); // The missing file path

    // Path to your placeholder image
    $placeholder = config()->paths->templates . 'assets/images/placeholder.jpg';

    if(file_exists($placeholder)) {
        files()->copy($placeholder, $filename);
        $event->return = true; // Tell ProcessWire the file is now available
    }
});
```
szabesz Posted 17 hours ago

18 minutes ago, interrobang said: I've also started using gemini to break the blog articles down into individual, uniformly structured snippets with frontmatter metadata.

Yes, the issue with blog articles is that they are verbose, and there is no need for such verbosity for an LLM. However, instead of trying to squeeze blog posts, API docs content, and examples into a context window, it would be better to do some sort of "LLM training", like LoRAs for image models. Does anyone have an understanding of how such a thing could be done?
gebeer Posted 16 hours ago

48 minutes ago, szabesz said: Yes, the issue with blog articles is that they are verbose, and there is no need for such verbosity for an LLM. However, instead of trying to squeeze blog posts, API docs content, and examples into a context window, it would be better to do some sort of "LLM training", like LoRAs for image models. Does anyone have an understanding of how such a thing could be done?

A slight understanding, on a high level. The greatest challenge is training data collection and formatting, not so much the LoRA training itself. I spent some time about a year ago on how to approach the data side of things, couldn't find any good sources back then, and gave up on it. IMO it's better to work with ICL (In-Context Learning) and a SoTA model than to have a specialized but weaker one. That might not be true anymore, though.
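To give a feel for the "data collection and formatting" part gebeer mentions: fine-tuning a chat model (LoRA or otherwise) typically wants many examples in a JSONL chat format roughly like the line below. This is a sketch in the common OpenAI-style messages schema; the exact fields depend on the training stack, and the ProcessWire Q&A pair is made up:

```json
{"messages": [{"role": "user", "content": "How do I get the latest blog posts in ProcessWire?"}, {"role": "assistant", "content": "$posts = $pages->find(\"template=blog-post, sort=-date, limit=10\");"}]}
```

The hard part is producing thousands of such high-quality pairs from forum posts and docs, not running the training itself.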
gebeer Posted 16 hours ago

I just published https://github.com/gebeer/conversation-search-mcp

It's a very minimal and fast MCP server that can search over JSONL session transcripts. It can be pointed at a folder with those sessions and then do BM25 searches for relevant context. Claude Code sessions all get stored in ~/.claude/projects/* folders, one folder per project. I have pointed mine to a folder in there that contains all my ProcessWire projects, so it has all past conversations I had with claude code in these projects. There's a lot of valuable data in there: how the assistant applied fixes, what context I gave it, what it did wrong, what corrections I had to make, etc.

When the MCP server is active, the assistant can use it to search for relevant context. Currently I'm working on a hook system that auto-injects relevant context from a search based on the current prompt. It's an experiment for now, but it might be a good way to enhance the assistant's understanding with relevant context.
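For readers wondering what "BM25 over JSONL transcripts" means in practice, here is a rough, self-contained PHP sketch of the ranking idea. This is an illustration only, not gebeer's implementation (the server itself isn't PHP, and the "text" key is an assumption about the transcript rows):

```php
<?php
// Illustrative BM25 search over JSONL transcript files (NOT the actual
// conversation-search-mcp code). Assumes each JSONL row may carry a
// "text" field; falls back to the raw line otherwise.

function tokenize(string $text): array {
    preg_match_all('/[a-z0-9]+/', strtolower($text), $m);
    return $m[0];
}

function bm25Search(string $dir, string $query, int $limit = 5): array {
    $k1 = 1.2; $b = 0.75; // standard BM25 constants

    // Load every line of every .jsonl file as one "document"
    $docs = [];
    foreach (glob($dir . '/*.jsonl') as $file) {
        foreach (file($file, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line) {
            $row = json_decode($line, true);
            $text = is_array($row) ? (string)($row['text'] ?? $line) : $line;
            $docs[] = ['file' => $file, 'text' => $text, 'tokens' => tokenize($text)];
        }
    }
    $N = count($docs);
    if ($N === 0) return [];
    $avgLen = array_sum(array_map(fn($d) => count($d['tokens']), $docs)) / $N;

    // Document frequency of each query term
    $terms = array_unique(tokenize($query));
    $df = array_fill_keys($terms, 0);
    foreach ($docs as $d) {
        foreach ($terms as $t) {
            if (in_array($t, $d['tokens'], true)) $df[$t]++;
        }
    }

    // Score every document against the query
    $scores = [];
    foreach ($docs as $i => $d) {
        $tf = array_count_values($d['tokens']);
        $len = count($d['tokens']);
        $score = 0.0;
        foreach ($terms as $t) {
            if (empty($tf[$t])) continue;
            $idf = log(1 + ($N - $df[$t] + 0.5) / ($df[$t] + 0.5));
            $score += $idf * $tf[$t] * ($k1 + 1)
                    / ($tf[$t] + $k1 * (1 - $b + $b * $len / $avgLen));
        }
        if ($score > 0) $scores[$i] = $score;
    }
    arsort($scores);

    $results = [];
    foreach (array_slice($scores, 0, $limit, true) as $i => $score) {
        $results[] = [
            'score' => round($score, 3),
            'file'  => $docs[$i]['file'],
            'text'  => $docs[$i]['text'],
        ];
    }
    return $results;
}

// Example usage (the project folder name is hypothetical):
// print_r(bm25Search(getenv('HOME') . '/.claude/projects/my-pw-site', 'image resize hook'));
```

BM25 scores a document higher when it contains rare query terms often, with a penalty for long documents, which is why it works well for pulling the few relevant turns out of hundreds of transcripts.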
gebeer Posted 16 hours ago

Would love to have @Jonathan Lahijani chime in here. Maybe he's got news about his MCP project :-)
elabx Posted 16 hours ago

3 hours ago, bernhard said: What's your current setup?

Used Cursor for a few months! Now using Claude inside Cursor lol. Why? To be honest, it's sometimes difficult to actually grasp and put into words, but I'd just say I go with "feel", and right now Opus just feels really nice. Also, in any AI tool, what's invaluable now is using MCPs: Figma, Notion, Gitlab, Chrome. As of today I'm testing ddev-claude-code, to just let it run wild in docker.

Question for ddev users: has anyone found a project that lets you manage multiple worktrees of the same project, but at the same time copies anything related to the PHP project (in the case of ProcessWire, the site/files) and overrides the ddev name, so you get site-dev1.ddev.site, site-dev2.ddev.site?

3 hours ago, bernhard said: What doesn't work (yet)?

No AI agent catches that a FieldtypeOptions field evaluates "truthy" - you gotta check $field->id 🤣 (see the sketch after this post). Maybe I should find a way to include something like context7, but I do fear prompt injection (aha, but letting claude run wild on its own is fine? haha). To be honest, it now always feels like what "doesn't work" is just my own boundaries of time and multitasking lol.

3 hours ago, bernhard said: Context & documentation:

I am making a ProcessWire MCP inspired by the threads around here, and I think it could be very valuable, but for now having AI execute scripts through the CLI in ddev is also amazing and just gets me there - of course using the one and only RockMigrations. And maybe an effort that is not about my docs but my customers' docs: a skill that builds a documentation site in Notion pages for specific ProcessWire installs.

What an insane amount of module development is being done now, right?!
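To make the FieldtypeOptions gotcha concrete, a hedged sketch (the field names color and colors are made up for illustration): the value of an Options field is an object, and objects in PHP are always truthy, so a bare if() check passes even when nothing is selected.

```php
// The value of an Options field is an object (SelectableOption or
// SelectableOptionArray), and objects in PHP are always truthy:
if($page->color) {
    // this branch runs even when no option is selected!
}

// Check the selected option's id instead (empty when nothing is selected):
if($page->color->id) {
    echo $page->color->title;
}

// For a multi-value Options field, check the count:
if($page->colors->count()) {
    foreach($page->colors as $option) echo $option->title . "\n";
}
```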
bernhard Posted 15 hours ago (Author)

10 minutes ago, gebeer said: Would love to have @Jonathan Lahijani chime in here. Maybe he's got news about his MCP project 🙂

Thx! Had him in mind, but forgot to mention him!

---

Update from my side: I had AI develop several features for my startup this week. It was fast. And it was good. Real quality code. Or let's say at least faster and better than I would have done it 😄 This is insane.

My workflow is currently:

1. Tell cursor to inspect the project and create the rules and skills necessary for agents to do their work
2. Tell it to have a frontend developer for frontend stuff and a backend dev for backend stuff
3. Spin up (multiple) agents and tell them what to do
4. If it's a complex task, switch cursor to "PLAN" mode first and check the plan before building
5. Check if everything works
6. Check the git diff
7. If necessary, ask for changes
8. Commit

My learnings so far:

- This is impressive - until now I had always been of the opinion that the bigger the task gets, the more AI struggles, and that it's better to do it on your own. I have always been a huge fan of cursor tab, which auto-suggests the next word or 2-3 lines of code, but this is another level!
- Good results cost money: I've been on my cursor 20$/month plan for a year and thought I was using it heavily... but now I've used 70% of my Opus 4.6 quota in only a few days. The cursor pricing page states this at the moment [pricing screenshot not included], so I'd probably be ok with the 60$ plan or maybe even need the 200$ plan...

Are you all spending this amount? I asked perplexity and it seems I can use the Anthropic API directly in cursor and it might get cheaper? Any experiences/numbers to share, @elabx or others?
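To make step one of the workflow above concrete: Cursor stores project rules as MDC files under .cursor/rules/. A minimal sketch of what a generated rule for a ProcessWire project might contain (saved e.g. as .cursor/rules/processwire.mdc) - the filename and contents are illustrative assumptions, not actual Cursor output:

```markdown
---
description: ProcessWire conventions for this project
alwaysApply: true
---

- This is a ProcessWire 3.x site: templates live in site/templates/, custom modules in site/modules/.
- Prefer the ProcessWire API ($pages->find(), $page->children()) over raw SQL.
- Selectors are plain strings, e.g. "template=blog-post, sort=-date, limit=10".
- Schema changes go through migration code, never through manual admin clicks.
```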
maximus Posted 15 hours ago

I use Claude web with the $20 plan, create step-by-step templates, use my file for building the backend and frontend, and copy-paste into the Nova app. When I exceed the session/daily limit, I open another free Claude account to continue, or use other platforms.
elabx Posted 15 hours ago

50 minutes ago, bernhard said: Are you all spending this amount?

Around 100-150 USD on cursor, on the 20$ plan - and probably wasting a lot of money too, since I also pay for OpenAI (I like their web UI), Gemini (hate their web UI, but I like their deep research and images), and now Claude. I'll probably have to spin up one of the agents to make me a reasonable budget, or I'll go broke. Anyone tried Kimi? Getting very good impressions from peers.
elabx Posted 14 hours ago

54 minutes ago, bernhard said: If it's a complex task, switch cursor to "PLAN" mode first and check the plan before building

Is anyone going the way of SpecKit, BMAD, Opensec, superpowers?
szabesz Posted 14 hours ago Posted 14 hours ago (edited) 39 minutes ago, elabx said: Anyone tried Kimi? I just paid for 1 month of Allegretto ($39/month) and used it with a deep research prompt asking for an "Intermediate PHP developer who is new to PHP Swiss Ephemeris" demo project (with detailed requirements, of course). It produced runnable code with outstanding results in about an hour. That might seem slow, but for me, it would have taken at least two weeks to figure all that out. It also came with explanations, which provides me a good starting point to learn the topic. So Kimi 2.5's deep research is very impressive, especially regarding coding-related prompts. It performs much better than my (admittedly) cheap Gemini Pro plan. I prompted Gemini with the same request, and it produced half-baked code, clearly running out of "steam" (memory/context window, whatever...). Additionally, Kimi's deep research acts like a programmer, while Gemini's deep research behaves like a very important executive who happens to be good at coding but prefers to give unnecessary executive summaries on the topic. I dislike that as it just consumes "tokens" on something you do not need. Well, my comparison might not be fair, as my Gemini Plan is a lot cheaper than $39/month, but those Gemini deep research unnecessary executive summaries also come with higher plans, I guess. Edit: "unnecessary executive summaries" and yes, I always prompt it not to do that but it does so anyway. The only difference is that they are shorter than the summaries one gets without asking not to do them. Edited 14 hours ago by szabesz
elabx Posted 14 hours ago

14 minutes ago, szabesz said: Additionally, Kimi's deep research acts like a programmer, while Gemini's deep research behaves like a very important executive who happens to be good at coding but prefers to give unnecessary executive summaries on the topic. I dislike that, as it just consumes "tokens" on something you do not need.

REALLY interesting, thanks for chiming in. Where is this Allegretto plan explained? I can only find https://platform.moonshot.ai/docs/pricing/chat#concepts - am I even talking about the same thing?
szabesz Posted 14 hours ago

Visiting https://www.kimi.com/membership/pricing, a modal pops up for me; its top part looks like this: [screenshot of the pricing modal not included]

To tell the truth, I will probably use it for browser-based prompting, and it shows how many such tasks are still available for that day. For example: [screenshot not included]
bernhard Posted 13 hours ago (Author)

Another learning: while it might seem to take long to wait for the results of an agent... you can spin up multiple agents and let them work on two different tasks (like two new unrelated features).
Peter Knight Posted 9 hours ago

4 hours ago, bernhard said: Another learning: while it might seem to take long to wait for the results of an agent... you can spin up multiple agents and let them work on two different tasks (like two new unrelated features).

Don't forget Cursor cloud agents. Give a cloud agent a plan and it works in the background while you're away from the laptop. I give them a task on the iPad when I'm at the gym, and by the time I'm home there's a new branch waiting for me. I can't verify this, but somehow I've often found the quality of the cloud agents to be better than the desktop ones, even on the same model. Can't be true? But it feels that way.
gebeer Posted 12 minutes ago

14 hours ago, elabx said: Is anyone going the way of SpecKit, BMAD, Opensec, superpowers?

Tried several of them, including kilo code (from NVIDIA, I think), which uses a clean spec-driven workflow. Currently working on my own version of that, with prompt templates, verification through hooks (see the sketch below), and all that good stuff. Spec-driven is a good approach, especially for larger features. For small things I'm still using good old chat in claude code.
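For flavor, here is what "verification through hooks" can look like in Claude Code - an illustrative guess at such a setup, not gebeer's actual configuration, so check the Claude Code hooks documentation for the exact stdin schema and exit-code semantics before copying. The idea: a PostToolUse hook in .claude/settings.json that lint-checks any PHP file the agent edits:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "jq -r '.tool_input.file_path // empty' | grep '\\.php$' | xargs -r -n1 php -l"
          }
        ]
      }
    ]
  }
}
```

If the lint command fails, its output can be surfaced back to the agent so it fixes its own syntax mistakes before moving on (which exit codes block and feed back is documented in the hooks reference).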