Everything posted by gebeer
-
Hi fellow devs, this is a somewhat different post, a little essay. Take it with a grain of salt and some humor. Maybe some of you share a similar experience. I don't really mean to poop on a certain group with certain preferences, but then, that's what I'm doing here. I needed to write it to offload some frustration. No offense intended. Good Sunday read :-)

React Is NPC Technology

Have you ever really looked at React code? Not the tutorial. Not the "Hello World." An actual production component from an actual codebase someone is actually proud of? Because the first time I did, I thought there'd been a mistake. A failed merge. HTML bleeding into JavaScript, strings that weren't strings, logic and markup performing some kind of violation you'd normally catch in code review before it got anywhere near main. "Fix this," I thought. "Someone broke this."

It looks broken because it is broken. That's the first thing you need to understand. JSX is a category error. Mixing markup and logic at the syntax level - not as an abstraction, not behind an interface, but visually, literally, right there in the file - is the kind of decision that should have ended careers. Instead it ended up in 40% of job postings.

And here's the part that actually matters, the part that explains everything: nobody can tell you why. "Everyone uses it." Go ahead, ask. That's the answer. That's the complete sentence, delivered with the confidence of someone who has never once questioned whether a thing should exist before learning how it works. The argument for React is React's market share. The case for Next.js is that your tech lead saw it in a conference talk in 2021 and it was already too late. You're supposed to hear this and nod - because if everyone's doing something, there must be a reason, right? The herd doesn't just run toward cliffs.

Except. That's literally what herds do.

The web development community, bless its heart, has a category of decision I can only call NPC behavior. Not an insult - a technical description. An NPC doesn't evaluate options. An NPC reads the room, finds the dominant pattern, and propagates it. React is on every job posting = React is what employers want = React is what I need to know = React is what I reach for. The loop closes. Nobody along the chain asked if it was right. They asked if it was safe. Safe to put on a resume. Safe to recommend. Safe to defend at the standup. React is the framework you choose when you've stopped choosing and started inheriting.

The 10% who actually think about their tools - they're out there running Alpine.js. Which is 8kb. Does the same job. No build step required. Add an attribute, the thing works. Revolutionary concept. They're running htmx, which understood something profound: the web already has a protocol for moving data, and it was fine. You didn't need to rebuild HTTP in JavaScript. You just needed to reach for the right thing instead of the fashionable one.

Let's talk performance, because "everyone uses it" is already bad enough before you look at what it actually does. React ships 40-100kb of runtime JavaScript before your application does a single thing. Your users wait while React bootstraps itself. Then it hydrates - a word that sounds refreshing and means "React redoes on the client what the server already did, because React can't help it." Then they invented Server Components to fix the problem of shipping too much JavaScript. The solution: ship different JavaScript, handled differently, with new mental models, new abstractions, new ways to get it wrong. They called it an innovation.

I once worked with WordPress and React together. I want you to sit with that. Two philosophies, neither of which is actually correct, stacked on each other like a complexity casserole nobody ordered. WordPress solving 2003's problems with 2003's patterns. React solving 2003's problems with 2013's patterns that created 2023's problems. Together they achieved something genuinely special: all the drawbacks of both, and none of the advantages of either. The PHP you want, but in a different way, and the hydration you couldn't prevent, serving pages that load like they're apologizing for something.

Twenty years building for the web and I've watched frameworks rise and fall like geological events. ColdFusion, anyone? Remember when Java applets were going to be everywhere? Flash was going to be the web. Then jQuery saved us. Then Angular saved us from jQuery. Then React saved us from Angular. Rescue upon rescue, each one leaving more complexity than it cleared, each one defended by exactly the same people who defended the last one, now wearing a different conference lanyard.

ProcessWire. That's what I build with. Most developers have never heard of it - which is not a criticism, that's the evidence. You find ProcessWire because you went looking for something specific, evaluated it, and it fit. It doesn't have conference talks. It doesn't have a VC-funded developer relations team. It has a forum full of people who chose it. That's a different category of thing entirely. The same 10% who find ProcessWire find Alpine. Find htmx. Make decisions that don't optimize for defensibility in interviews. Build websites that load fast because they don't carry React around everywhere they go.

There's a physics concept called a local minimum. A place where a system settles because the immediate neighborhood looks stable - the energy gradient points upward in every direction, so the system stops. Stays. Convinces itself it's home. Even if a global minimum exists somewhere else, at lower energy, lighter, simpler - you'd have to climb first, and the herd doesn't climb.

React is a local minimum. The web settled here when it got tired of looking. Stable enough. Defended by enough career investment. Surrounded by enough tooling and tutorials and framework-specific bootcamps that switching costs feel existential. The ground state - simpler, faster, closer to what the web actually is - sits somewhere else, past a hill that looks too steep from inside the valley.

The ground state is always simpler. That's not a philosophical position. That's thermodynamics. They don't want you to know that.
-
I use https://github.com/ChromeDevTools/chrome-devtools-mcp for that. Very fast. The thing about the plan is that it's supposed to be reviewed before it is applied, haha. But if you trust it without at least a quick glance, ok. I think Cursor can play sounds when it needs your attention. You could use that to notify you when you have to click the button.

Wow. The "Full ProcessWire API access - Query, create, update, and delete pages" is the most interesting for me here. Working right now on a single-file PW-API-docs database based on https://github.com/memvid/memvid. Has semantic vector search (local embedding model), BM25 and all that good stuff. Also supports CRUD. I fed it a good part of https://github.com/phlppschrr/processwire-api-docs/blob/main/api-docs/index.md . The file is currently around 35 MB. Search is blazingly fast. I implement it as a portable skill, not as an MCP. Needs a little more love and testing but I'll drop it soonish.
-
Contents of related skills are not included in the docs that context7 parsed from your repo. Those are separate. It should be safe to use as is @bernhard FYI. If you want to be 100% sure that you can trust those snippets, you'd need to go through https://github.com/phlppschrr/processwire-knowledge-base/tree/master/docs and look for prompt injections. But I think that would be overkill tbh.
-
Tried several of them, including Kilo Code (from NVIDIA, I think), which uses a clean spec-driven workflow. Currently working on my own version of that with prompt templates, verification through hooks and all that good stuff. Spec-driven is a good approach, especially for larger features. For small things I'm still using good old chat in Claude Code.
-
Would love to have @Jonathan Lahijani chime in here. Maybe he's got news about his MCP project :-)
-
I just published https://github.com/gebeer/conversation-search-mcp

It's a very minimal and fast MCP server that can search over JSONL session transcripts. It can be pointed to a folder with those sessions and then do BM25 searches for relevant context. Claude Code sessions all get stored in ~/.claude/projects/* folders, one folder per project. I have pointed mine to a folder in there that contains all my ProcessWire projects, so it has all past conversations that I had with Claude Code in these projects. There's a lot of valuable data in there: how the assistant applied fixes, what context I gave it, what it did wrong, what corrections I had to make, etc. When the MCP server is active, the assistant can use it to search for relevant context.

Currently I'm working on a hook system that auto-injects relevant context from a search based on the current prompt. It's an experiment currently. But might be a good way to enhance the assistant's understanding with relevant context.
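For anyone wondering what "BM25 over JSONL transcripts" boils down to, here's a minimal PHP sketch of the core idea. It is not the actual implementation of conversation-search-mcp (that is its own codebase), and the `message.content` field path is just an assumption about the transcript shape — it only shows the ranking math over a folder of .jsonl files:

```php
<?php
// cli_scripts/bm25_search.php - hypothetical sketch, not conversation-search-mcp itself
// Usage: php bm25_search.php /path/to/jsonl-folder "your query"

function tokenize(string $s): array {
    preg_match_all('/[a-z0-9]+/', strtolower($s), $m);
    return $m[0];
}

$dir  = $argv[1] ?? '.';
$qstr = $argv[2] ?? '';

// Load every JSONL line as one "document"
$docs = [];
foreach (glob($dir . '/*.jsonl') as $file) {
    foreach (file($file, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $i => $line) {
        $row = json_decode($line, true) ?: [];
        // "message.content" is an assumption about the transcript shape
        $text = json_encode($row['message']['content'] ?? $row);
        $docs["$file:$i"] = tokenize((string) $text);
    }
}

// Document frequencies and average document length
$df = []; $avgLen = 0;
foreach ($docs as $tokens) {
    $avgLen += count($tokens);
    foreach (array_unique($tokens) as $t) $df[$t] = ($df[$t] ?? 0) + 1;
}
$N = count($docs);
$avgLen = $N ? max($avgLen / $N, 1) : 1;

// Score every document against the query with BM25
$k1 = 1.5; $b = 0.75;
$scores = [];
foreach ($docs as $id => $tokens) {
    $tf  = array_count_values($tokens);
    $len = count($tokens);
    $score = 0.0;
    foreach (tokenize($qstr) as $q) {
        if (!isset($tf[$q], $df[$q])) continue;
        $idf = log(($N - $df[$q] + 0.5) / ($df[$q] + 0.5) + 1);
        $score += $idf * ($tf[$q] * ($k1 + 1))
                / ($tf[$q] + $k1 * (1 - $b + $b * $len / $avgLen));
    }
    if ($score > 0) $scores[$id] = $score;
}
arsort($scores);
print_r(array_slice($scores, 0, 5, true)); // top 5 hits: "file:line" => score
```

The real server adds transcript-aware chunking and MCP plumbing on top, but the retrieval core is exactly this kind of scoring loop.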
-
Slight understanding on a high level. The greatest challenge is training data collection and formatting, not so much the LoRA training itself. I spent some time about 1 year ago on how to approach the data side of things and couldn't find any good sources back then. Then gave up on it. Imo it's better to work with ICL (In-Context Learning) and a SoTA model than having a specialized but weaker one. That might not be true anymore, though.
-
Makes total sense :-)
-
Wow. That was quick. Thank you!
-
You can surely do that. context7 will accept it no problem. Could add a note: unofficial but AI-friendly docs for the great ProcessWire CMS/CMF lol
-
[WIP] Cursor MCP to Processwire (incl. new UI option)
gebeer replied to Peter Knight's topic in Module/Plugin Development
Good idea, thank you! You might want to create a separate thread for this for further discussion. I'm sure people are interested in working together on this.
-
This is great! I see you have a much more sophisticated setup for API/docs extraction than me. I was having a go at producing markdown docs for ProcessWire a while ago and took a different approach (scrape the API docs instead of parsing source). Partial results are here: https://gebeer.github.io/mkdocs-processwire/ I really wish @ryan would adopt the md format for the official API docs so that AI assistants can easily parse them and also context7 can index them. And the collection of blog posts you have is impressive.

As for the skill itself, it doesn't currently follow the recommended format. There should be no README, the docs dir could be renamed to references, etc. Other than that it looks amazing. Love the scripts section. Def will give this a go and let you know how it goes.
-
@bernhard and I were just talking about that yesterday. You have built it already. Wow. Making it public would be awesome. When I develop skills, I let the AI follow https://platform.claude.com/docs/en/agents-and-tools/agent-skills/best-practices and the official open standard at https://agentskills.io/home Most IDEs and CLIs nowadays follow that standard already. Might be useful when overhauling your skills.
-
@bernhard thanks for sharing the video. And what an exciting journey you had there :-) Eventually this is where things are going, I guess. Would be great to know who in this community is working on similar stuff. I'm currently creating a collection of skills that can be plugged into PW projects here: https://github.com/gebeer/processwire-ai-docs Would love to collaborate with others on that and exchange ideas.
-
[WIP] Cursor MCP to Processwire (incl. new UI option)
gebeer replied to Peter Knight's topic in Module/Plugin Development
@bernhard this approach looks great and seems a good solution for content-related workflows. So for devs who also do content editing, that should make it easier. The conversion from/to YAML is the key part here (see the sketch below). @Peter Knight how did you implement that in Cursor, as a skill with scripts?
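Just to illustrate the idea, here is a minimal, hedged sketch of the page-to-YAML round trip with the PW API. It assumes PHP's yaml extension (yaml_emit()/yaml_parse(); any YAML library would do), only handles plain scalar fields, and the page path is made up — not how anyone's actual skill implements it:

```php
<?php namespace ProcessWire;
// cli_scripts/yaml_roundtrip.php - hypothetical helper
// Run via: ddev php cli_scripts/yaml_roundtrip.php
include(__DIR__ . '/../index.php');

$p = pages()->get('/about/'); // hypothetical page

// Page -> YAML: collect plain scalar field values for the agent to edit as text
$data = ['title' => (string) $p->title];
foreach ($p->getFields() as $f) {
    $v = $p->get($f->name);
    if (is_scalar($v)) $data[$f->name] = $v; // complex field types skipped in this sketch
}
file_put_contents(__DIR__ . '/page.yaml', yaml_emit($data));

// YAML -> Page: write the (edited) values back
$edited = yaml_parse(file_get_contents(__DIR__ . '/page.yaml'));
$p->of(false); // turn off output formatting before saving
foreach ($edited as $name => $value) $p->set($name, $value);
$p->save();
```
-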
These are very good questions. My honest take: PW is still very much a niche product. People who've been working with it for years learned to appreciate it. But it's very hard to convince anyone to jump in and dive deep enough to discover the many advantages PW offers. And yeah, most devs have their specific workflows, and it is just inconvenient to adopt new ones that might actually work better. Time/energy constraints may contribute to that. I can only say for me personally, I'd always buy and support proprietary modules developed by experienced PW devs, although I am a FOSS enthusiast.
-
@bernhard I want to express my gratitude for all you have contributed to the PW community over the years. And I commend you on your decision to open source your modules. Many of your modules have become part of my workflows for most projects over the years. I was happily paying for the bundle. It was definitely worth it. I hope that the community will pick up on your move and contribute through PRs for further refinement and new features where needed. Thank you again, Bernhard. You and your work are both much appreciated :-)
-
I made this into a skill, following agent skill conventions. Available here: https://github.com/gebeer/processwire-ai-docs/tree/main/skills/pw-ddev-cli There's also a custom page classes skill in that repo, based on Ryan's great blog article that just came out. Let your agents stay informed :-)
-
New blog: All about custom page classes in ProcessWire
gebeer replied to ryan's topic in News & Announcements
Awesome article that sums it all up neatly. Thanks for this comprehensive guide, Ryan! I converted the content of this article into a reusable AI agent skill. Available here: https://github.com/gebeer/processwire-ai-docs/tree/main/skills/pw-page-classes
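For anyone who hasn't tried custom page classes yet, the core of it fits in a few lines. A minimal sketch (the ProductPage class and price field are made-up examples, not taken from Ryan's article): with `$config->usePageClasses = true;` in site/config.php, ProcessWire maps a template named `product` to a class named `ProductPage` in site/classes/:

```php
<?php namespace ProcessWire;
// site/classes/ProductPage.php - hypothetical class for a "product" template

class ProductPage extends Page {

    // Custom helper, callable in templates as $page->formattedPrice()
    public function formattedPrice(): string {
        return '$' . number_format((float) $this->get('price'), 2);
    }
}
```
-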
@ryan it would be much appreciated if we could get your feedback on this. Thank you.
-
Awesome! Yes, Opus 4.5 is really good now with PW. It also helps a lot that they have implemented the LSP in Claude Code directly. Honestly, at this stage I don't think we even need to feed docs to it anymore. Just instructions to explore the relevant API methods for a task in the codebase itself.

Is there a specific reason why you implemented that as an MCP and not as a Skill? MCPs eat a lot of context. Depends on the implementation, of course. So dunno about how much context Octopus occupies. ATM I have some basic instructions in CLAUDE.md that explain how to bootstrap PW and use the CLI through ddev for exploration, debugging, DB queries. That makes a big difference already. Opus is great at exploring stuff through the PHP CLI, either as one-liners or as script files for more complex stuff. Here are my current instructions:

## PHP CLI Usage (ddev)

All PHP CLI commands **must run through ddev** to use the web container's PHP interpreter.

### Basic Commands

```bash
# Run PHP directly
ddev php script.php

# Check PHP version
ddev php --version

# Execute arbitrary command in web container
ddev exec php script.php

# Interactive shell in web container
ddev ssh
```

### ProcessWire Bootstrap

Bootstrap ProcessWire by including `./index.php` from project root. After include, the full PW API is available (`$pages`, `$page`, `$config`, `$sanitizer`, etc.).

**All CLI script files must be placed in `./cli_scripts/`.**

**Inline script execution:**

```bash
ddev exec php -r "namespace ProcessWire; include('./index.php'); echo \$pages->count('template=product');"
```

**Run a PHP script:**

```bash
ddev php cli_scripts/myscript.php
```

**Example CLI script** (`cli_scripts/example.php`):

```php
<?php namespace ProcessWire;
include(__DIR__ . '/../index.php');

// PW API now available
$products = $pages->find('template=product');
foreach ($products as $p) {
    echo "{$p->id}: {$p->title}\n";
}
```

### PHP CLI Usage for Debugging & Information Gathering Examples

**One-liners** — use `ddev php -r` with the functions API (`pages()`, `templates()`, `modules()`) to avoid bash `$` variable expansion. Local variables still need escaping (`\$t`). Prefix output with `PHP_EOL` to separate it from RockMigrations log noise:

```bash
# Count pages by template
ddev php -r "namespace ProcessWire; include('./index.php'); echo PHP_EOL.'Products: '.pages()->count('template=product');"

# Check module status
ddev php -r "namespace ProcessWire; include('./index.php'); echo PHP_EOL.(modules()->isInstalled('ProcessShop') ? 'yes' : 'no');"

# List all templates (note \$t escaping for local var)
ddev php -r "namespace ProcessWire; include('./index.php'); foreach(templates() as \$t) echo \$t->name.PHP_EOL;"
```

**Script files** — preferred for complex queries, place in `./cli_scripts/`:

```php
// cli_scripts/inspect_fields.php
<?php namespace ProcessWire;
include(__DIR__ . '/../index.php');

$p = pages()->get('/');
print_r($p->getFields()->each('name'));
```

```bash
ddev php cli_scripts/inspect_fields.php
```

### TracyDebugger in CLI

**Works in CLI:**
- `d($var, $title)` — dumps to terminal using `print_r()` for arrays/objects
- `TD::dump()` / `TD::dumpBig()` — same behavior

**Does NOT work in CLI:**
- `bd()` / `barDump()` — requires browser debug bar

**Example:**

```php
<?php namespace ProcessWire;
include(__DIR__ . '/../index.php');

$page = pages()->get('/');
d($page, 'Home page'); // outputs to terminal
d($page->getFields()->each('name'), 'Fields');
```

### Direct Database Queries

Use `database()` (returns `WireDatabasePDO`, a PDO wrapper) for raw SQL queries:

```php
<?php namespace ProcessWire;
include(__DIR__ . '/../index.php');

// Prepared statement with named parameter
// (the pages table stores templates_id, not the template name)
$query = database()->prepare("SELECT * FROM pages WHERE templates_id = :tpl LIMIT 5");
$query->execute(['tpl' => templates()->get('product')->id]);
$rows = $query->fetchAll(\PDO::FETCH_ASSOC);

// Simple query
$result = database()->query("SELECT COUNT(*) FROM pages");
echo $result->fetchColumn();
```

**Key methods:**
- `database()->prepare($sql)` — prepared statement, use `:param` placeholders
- `database()->query($sql)` — direct query (no params)
- `$query->execute(['param' => $value])` — bind and execute
- `$query->fetch(\PDO::FETCH_ASSOC)` — single row
- `$query->fetchAll(\PDO::FETCH_ASSOC)` — all rows
- `$query->fetchColumn()` — single value

**Example** (`cli_scripts/query_module_data.php`):

```php
<?php namespace ProcessWire;
include(__DIR__ . '/../index.php');

$query = database()->prepare("SELECT data FROM modules WHERE class = :class");
$query->execute(['class' => 'ProcessPageListerPro']);
$row = $query->fetch(\PDO::FETCH_ASSOC);
print_r(json_decode($row['data'], true));
```

### ddev Exec Options

- `ddev exec --dir /var/www/html/site <cmd>` — run from specific directory
- `ddev exec -s db <cmd>` — run in database container
- `ddev mysql` — MySQL client access
-
RockIcons Backend Error after Upgrade of RockFrontend and Less modules
gebeer replied to gebeer's topic in RockFrontend
Wow, that was quick. Thanks. Will wait until it goes into main and then update through the module interface.
-
Hi @bernhard, after we upgraded RockFrontend and Less to the latest versions in a project, we got this error in the backend: This is caused by the $this->createAssets() call in RockIcons.module.php init() method. Tracing it further back, we found that L1101 in RockFrontend.module.php is the cause. Changing $lessFile = $this->getFile($lessFile); to $lessFile = $this->getFile($lessFile, true); fixes it. The getFile method defaults to false for $forcePath and returns an empty string when $forcePath is false, which causes L1102 to throw a WireException and ultimately leads to the error message from wire/core/ModulesLoader.php around L167. When passing $forcePath = true, the correct file path is returned. We checked, and in our setup the createAssets method in RockIcons.module.php is the only caller of the lessToCss method in RockFrontend. This might be related to your refactor of RockFrontend/RockDevTools. While it only happens when logged in as superuser, it is still troublesome because the error in the backend never goes away and icons don't display in the frontend. The frontend is still functional when not logged in as superuser.
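For reference, the one-line change described above (RockFrontend.module.php, around L1101):

```php
// before: returns '' when the path isn't resolved, so L1102 throws a WireException
$lessFile = $this->getFile($lessFile);
// after: $forcePath = true returns the resolved file path
$lessFile = $this->getFile($lessFile, true);
```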
-
You could do that with RockMigrations: $rm->installModule('SessionHandlerDB'). Add it to the migration file on every site (sketch below). You don't even need to spin the projects up; it will be applied the next time you log in as superuser. That should suffice :-)
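A minimal sketch of what that migration file entry could look like, assuming RockMigrations' usual site/migrate.php entry point where $rm is available:

```php
<?php namespace ProcessWire;
// site/migrate.php

/** @var RockMigrations $rm */
$rm->installModule('SessionHandlerDB'); // no-op if the module is already installed
```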
-
Yes, you can. The project files live on the Linux host machine (your Omarchy setup). They are mounted into the ddev containers as docker volumes. The Docker daemon follows all symlinks to their real locations. So you can have a similar setup as on your current Ubuntu server.

Kind of, yes :-) Depends on how juicy your new machine is. ddev spins up a few docker containers for each project. The more projects you have running, the more containers need to be started. I have never started more than 3 projects at the same time, so I can't really tell what happens if you spin up 10 or more. Are you working on multiple projects at the same time daily, do you really need them to be available at the same time? Project startup is quite fast, so there's no need to have them all running all the time. You can manage them through Docker Desktop or a VSCode extension or the CLI, of course. All ddev projects are managed through one ddev-router container (Traefik), which acts as a reverse proxy for http/s calls. So if you have multiple projects running, they can access each other through http.

Just do it man. You won't regret it. Linux has plenty of file explorers to choose from; you can find a decent replacement for XYplorer for sure. I live in Thunar. It has a plethora of plugins (batch rename etc.) and is very customizable.

As for Omarchy, it is a very opinionated setup but should give you a great starting point. I moved to tiling WMs some years ago and now wouldn't want to miss them. It's just so much more organized. I know exactly which application lives on what workspace and can switch in the blink of a keystroke. Who the heck needs frickin' floating windows, why were they even invented?