Everything posted by gebeer

  1. Contents of related skills are not included in the docs that context7 parsed from your repo. Those are separate. It should be safe to use as is @bernhard FYI. If you want to be 100% sure that you can trust those snippets, you'd need to go through https://github.com/phlppschrr/processwire-knowledge-base/tree/master/docs and look for prompt injections. But I think that would be overkill tbh
  2. I tried several of them, including Kilo Code (from NVIDIA, I think), which uses a clean spec-driven workflow. I'm currently working on my own version of that with prompt templates, verification through hooks and all that good stuff. Spec-driven is a good approach, especially for larger features. For small things I'm still using good old chat in Claude Code.
  3. Would love to have @Jonathan Lahijani chime in here. Maybe he's got news about his MCP project :-)
  4. I just published https://github.com/gebeer/conversation-search-mcp It's a very minimal and fast MCP server that can search over jsonl session transcripts. It can be pointed at a folder with those sessions and then run BM25 searches for relevant context. Claude Code sessions are all stored in ~/.claude/projects/* folders, one folder per project. I have pointed mine at a folder in there that contains all my ProcessWire projects, so it has all past conversations I had with Claude Code in these projects. There's a lot of valuable data in there: how the assistant applied fixes, what context I gave it, what it did wrong, what corrections I had to make, etc. When the MCP server is active, the assistant can use it to search for relevant context. I'm currently working on a hook system that auto-injects relevant context from a search based on the current prompt. It's still an experiment, but it might be a good way to enhance the assistant's understanding with relevant context.
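For anyone curious how BM25 ranking over transcript snippets works in principle, here is a minimal, self-contained sketch. To be clear: this is not the actual implementation of conversation-search-mcp — the class and function names are purely illustrative, and a real server would additionally parse the JSONL message schema and expose results over MCP.

```python
import math
import re
from collections import Counter


def tokenize(text: str) -> list[str]:
    """Lowercase word tokens; a real server would likely do more."""
    return re.findall(r"[a-z0-9]+", text.lower())


class BM25Index:
    """Toy Okapi BM25 index over a list of text snippets."""

    def __init__(self, docs: list[str], k1: float = 1.5, b: float = 0.75):
        self.k1, self.b = k1, b
        self.raw = docs
        self.docs = [tokenize(d) for d in docs]
        self.avgdl = sum(len(d) for d in self.docs) / len(self.docs)
        self.tfs = [Counter(d) for d in self.docs]
        # document frequency per term, then idf with the usual +0.5 smoothing
        df = Counter(t for d in self.docs for t in set(d))
        n = len(self.docs)
        self.idf = {t: math.log(1 + (n - f + 0.5) / (f + 0.5)) for t, f in df.items()}

    def _score(self, terms: list[str], i: int) -> float:
        dl = len(self.docs[i])
        score = 0.0
        for t in terms:
            if t not in self.idf:
                continue
            tf = self.tfs[i][t]
            # BM25 term saturation with length normalization
            denom = tf + self.k1 * (1 - self.b + self.b * dl / self.avgdl)
            score += self.idf[t] * tf * (self.k1 + 1) / denom
        return score

    def search(self, query: str, top: int = 3) -> list[str]:
        terms = tokenize(query)
        ranked = sorted(range(len(self.docs)),
                        key=lambda i: self._score(terms, i), reverse=True)
        return [self.raw[i] for i in ranked[:top]]
```

In the real server the `docs` list would come from the message text extracted out of each line of the jsonl session files, so scoring happens per message rather than per file.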
  5. Slight understanding on a high level. The greatest challenge is training data collection and formatting, not so much the LoRA training itself. I spent some time about a year ago on how to approach the data side of things, couldn't find any good sources back then, and gave up on it. IMO it's better to work with ICL (In-Context Learning) and a SoTA model than with a specialized but weaker one. That might not be true anymore, though.
  6. Makes total sense :-)
  7. Wow. That was quick. Thank you!
  8. You can surely do that. context7 will accept it no problem. You could add a note: unofficial but AI-friendly docs for the great ProcessWire CMS/CMF lol
  9. Good idea, thank you! You might want to create a separate thread for this for further discussion. Sure people are interested in working together on this.
  10. This is great! I see you have a much more sophisticated setup for API/docs extraction than I do. I had a go at producing markdown docs for ProcessWire a while ago and took a different approach (scraping the API docs instead of parsing source). Partial results are here: https://gebeer.github.io/mkdocs-processwire/ I really wish @ryan would adopt the md format for the official API docs so that AI assistants can easily parse them and context7 can index them. And the collection of blog posts you have is impressive. As for the Skill itself, it doesn't currently follow the recommended format: there should be no README, the docs dir could be renamed to references, etc. Other than that it looks amazing. Love the scripts section. Def will give this a go and let you know how it goes.
  11. @bernhard and I were just talking about that yesterday. You have built it already. Wow. Making it public would be awesome. When I develop skills, I let the AI follow https://platform.claude.com/docs/en/agents-and-tools/agent-skills/best-practices and the official open standard at https://agentskills.io/home Most IDEs and CLIs nowadays follow that standard already. Might be useful when overhauling your skills.
  12. @bernhard thanks for sharing the video. And exciting journey you had there :-) Eventually this is where things are going, I guess. Would be great to know who in this community is working on similar stuff. I'm currently creating a collection of skills that can be plugged-in to PW projects here: https://github.com/gebeer/processwire-ai-docs Would love to collaborate with others on that and exchange ideas.
  13. @bernhard this approach looks great and seems a good solution for content related workflows. So for devs that also do content editing that should make it easier. The conversion from/to YAML is the key part here. @Peter Knight how did you implement that in cursor, as a skill with scripts?
  14. These are very good questions. My honest take: PW is still very much a niche product. People who've been working with it for years have learned to appreciate it. But it's very hard to convince anyone to jump in and dive deep until they discover the many advantages PW offers. And yeah, most devs have their specific workflows and it is just inconvenient to adopt new ones that might actually work better. Time/energy constraints may contribute to that. I can only speak for myself: I'd always buy and support proprietary modules developed by experienced PW devs, although I am a FOSS enthusiast.
  15. @bernhard I want to express my gratitude for all you have contributed to the PW community over the years. And I commend you on your decision to open source your modules. Many of your modules have become part of my workflows for most projects over the years. I was happily paying for the bundle. It was definitely worth it. I hope that the community will pick up on your move and contribute through PRs for further refinement and new features where needed. Thank you again, Bernhard. You and your work are both much appreciated :-)
  16. I made this into a skill, following agent skill conventions. Available here: https://github.com/gebeer/processwire-ai-docs/tree/main/skills/pw-ddev-cli There's also a custom page classes skill in that repo, based on Ryan's great blog article that just came out. Let your agents stay informed :-)
  17. Awesome article that sums it all up neatly. Thanks for this comprehensive guide, Ryan! I converted the content of this article into a reusable AI agent skill. Available here: https://github.com/gebeer/processwire-ai-docs/tree/main/skills/pw-page-classes
  18. @ryan it would be much appreciated if we could get your feedback on this. Thank you.
  19. Awesome! Yes, Opus 4.5 is really good now with PW. It also helps a lot that they have implemented the LSP in Claude Code directly. Honestly, at this stage I don't think we even need to feed docs to it anymore. Just instructions to explore the relevant API methods for a task in the codebase itself. Is there a specific reason why you implemented that as an MCP and not as a Skill? MCPs eat a lot of context. Depends on the implementation, of course. So dunno how much context Octopus occupies. ATM I have some basic instructions in CLAUDE.md that explain how to bootstrap PW and use the CLI through ddev for exploration, debugging, and DB queries. That makes a big difference already. Opus is great at exploring stuff through the PHP CLI, either as one-liners or as script files for more complex stuff. Here are my current instructions:

## PHP CLI Usage (ddev)

All PHP CLI commands **must run through ddev** to use the web container's PHP interpreter.

### Basic Commands

```bash
# Run PHP directly
ddev php script.php

# Check PHP version
ddev php --version

# Execute arbitrary command in web container
ddev exec php script.php

# Interactive shell in web container
ddev ssh
```

### ProcessWire Bootstrap

Bootstrap ProcessWire by including `./index.php` from project root. After include, full PW API is available (`$pages`, `$page`, `$config`, `$sanitizer`, etc.).

**All CLI script files must be placed in `./cli_scripts/`.**

**Inline script execution:**

```bash
ddev exec php -r "namespace ProcessWire; include('./index.php'); echo \$pages->count('template=product');"
```

**Run a PHP script:**

```bash
ddev php cli_scripts/myscript.php
```

**Example CLI script** (`cli_scripts/example.php`):

```php
<?php namespace ProcessWire;
include(__DIR__ . '/../index.php'); // PW API now available

$products = $pages->find('template=product');
foreach ($products as $p) {
    echo "{$p->id}: {$p->title}\n";
}
```

### PHP CLI Usage for Debugging & Information Gathering Examples

**One-liners** — use `ddev php -r` with functions API (`pages()`, `templates()`, `modules()`) to avoid bash `$` variable expansion. Local variables still need escaping (`\$t`). Prefix output with `PHP_EOL` to separate from RockMigrations log noise:

```bash
# Count pages by template
ddev php -r "namespace ProcessWire; include('./index.php'); echo PHP_EOL.'Products: '.pages()->count('template=product');"

# Check module status
ddev php -r "namespace ProcessWire; include('./index.php'); echo PHP_EOL.(modules()->isInstalled('ProcessShop') ? 'yes' : 'no');"

# List all templates (note \$t escaping for local var)
ddev php -r "namespace ProcessWire; include('./index.php'); foreach(templates() as \$t) echo \$t->name.PHP_EOL;"
```

**Script files** — preferred for complex queries, place in `./cli_scripts/`:

```php
// cli_scripts/inspect_fields.php
<?php namespace ProcessWire;
include(__DIR__ . '/../index.php');

$p = pages()->get('/');
print_r($p->getFields()->each('name'));
```

```bash
ddev php cli_scripts/inspect_fields.php
```

### TracyDebugger in CLI

**Works in CLI:**

- `d($var, $title)` — dumps to terminal using `print_r()` for arrays/objects
- `TD::dump()` / `TD::dumpBig()` — same behavior

**Does NOT work in CLI:**

- `bd()` / `barDump()` — requires browser debug bar

**Example:**

```php
<?php namespace ProcessWire;
include(__DIR__ . '/../index.php');

$page = pages()->get('/');
d($page, 'Home page'); // outputs to terminal
d($page->getFields()->each('name'), 'Fields');
```

### Direct Database Queries

Use `database()` (returns `WireDatabasePDO`, a PDO wrapper) for raw SQL queries:

```php
<?php namespace ProcessWire;
include(__DIR__ . '/../index.php');

// Prepared statement with named parameter
// (pages stores templates_id, so join templates to filter by name)
$query = database()->prepare("SELECT p.* FROM pages p JOIN templates t ON t.id = p.templates_id WHERE t.name = :tpl LIMIT 5");
$query->execute(['tpl' => 'product']);
$rows = $query->fetchAll(\PDO::FETCH_ASSOC);

// Simple query
$result = database()->query("SELECT COUNT(*) FROM pages");
echo $result->fetchColumn();
```

**Key methods:**

- `database()->prepare($sql)` — prepared statement, use `:param` placeholders
- `database()->query($sql)` — direct query (no params)
- `$query->execute(['param' => $value])` — bind and execute
- `$query->fetch(\PDO::FETCH_ASSOC)` — single row
- `$query->fetchAll(\PDO::FETCH_ASSOC)` — all rows
- `$query->fetchColumn()` — single value

**Example** (`cli_scripts/query_module_data.php`):

```php
<?php namespace ProcessWire;
include(__DIR__ . '/../index.php');

$query = database()->prepare("SELECT data FROM modules WHERE class = :class");
$query->execute(['class' => 'ProcessPageListerPro']);
$row = $query->fetch(\PDO::FETCH_ASSOC);
print_r(json_decode($row['data'], true));
```

### ddev Exec Options

- `ddev exec --dir /var/www/html/site <cmd>` — run from specific directory
- `ddev exec -s db <cmd>` — run in database container
- `ddev mysql` — MySQL client access
  20. Wow, that was quick. Thanks. Will wait until it goes into main and then update through module interface.
  21. Hi @bernhard, after we upgraded RockFrontend and Less to the latest versions in a project, we got this error in the backend: This is caused by the call to $this->createAssets() in the RockIcons.module.php init() method. Tracing it further back, we found that line 1101 in RockFrontend.module.php is the cause. Changing $lessFile = $this->getFile($lessFile); to $lessFile = $this->getFile($lessFile, true); fixes it. The getFile method defaults $forcePath to false and returns an empty string in that case, which causes line 1102 to throw a WireException and ultimately leads to the error message from wire/core/ModulesLoader.php around line 167. When passing $forcePath = true, the correct file path is returned. We checked, and in our setup the createAssets method in RockIcons.module.php is the only caller of the lessToCss method in RockFrontend. This might be related to your refactor of RockFrontend/RockDevTools. While it only happens when logged in as superuser, it is still troublesome because the error in the backend never goes away and icons don't display in the frontend. The frontend is still functional when not logged in as superuser.
  22. You could do that with RockMigrations: $rm->installModule('SessionHandlerDB'). Add it to the migration file on every site. You don't even need to spin the sites up; it will be applied the next time you log in as superuser. That should suffice :-)
  23. Yes, you can. The project files live on the Linux host machine (your Omarchy setup). They are mounted into the ddev containers as Docker volumes, and the Docker daemon follows all symlinks to their real locations. So you can have a similar setup as on your current Ubuntu server.

Kind of, yes :-) It depends on how juicy your new machine is. ddev spins up a few Docker containers for each project; the more projects you have running, the more containers need to be started. I have never started more than 3 projects at the same time, so I can't really tell what happens if you spin up 10 or more. Are you working on multiple projects at the same time daily; do you really need them all available at once? Project startup is quite fast, so there's no need to have them all running all the time. You can manage them through Docker Desktop, the VSCode extension, or the CLI, of course. All ddev projects are managed through one ddev-router container (Traefik) which acts as a reverse proxy for http/s calls. So if you have multiple projects running, they can access each other through http.

Just do it, man. You won't regret it. Linux has plenty of file explorers to choose from; you can find a decent replacement for XYplorer for sure. I live in Thunar. It has a plethora of plugins (batch rename etc.) and is very customizable. As for Omarchy, it is a very opinionated setup but should give you a great starting point. I moved to tiling WMs some years ago and now wouldn't want to miss them. It's just so much more organized: I know exactly which application lives on what workspace and can switch in the blink of a keystroke. Who the heck needs frickin floating windows, why were they even invented?
  24. Ryan, thank you for clarifying. This totally makes sense now :-)
  25. Hi @ryan, maybe I'm misreading this. But actually you would want bots to collect training data for PW, especially for the API reference part. This website does not publish content that is protected IP. It offers information that aims to attract developers and decision makers to use PW for their business. IMHO, blocking these bots is counterproductive. You are cutting yourself off from a growing number of developers who build projects with AI tools to boost their productivity. In the near future we probably will not be able to compete if we do not use these tools. The more accurate training data and context these AI assistants have for PW, the better they can perform and produce actually usable, production-ready code. I would give the current approach of blocking these bots a second thought.