All Activity

  1. Today
  2. Neither a hard refresh of the browser nor a modules refresh helped. I'll try a fresh install and see if that fixes it. I tested it on two different setups, but both were on version 4.27.7 before the update.
  3. Hi @androbey - I haven't seen any issues like that, but I wonder if a modules > refresh and a hard reload in the browser might fix it? Also, what version did you upgrade from when it was working ok?
  4. I don't know if this is the right place, but with the latest version 5.0.12 and a recent ProcessWire version, I can't use the console anymore. On ProcessWire 2.0.255, when trying to exec a simple "bd('hi');", the exec takes forever and never finishes; on ProcessWire 2.0.259 I only get an error back ("403: Forbidden CSRF token validation failed"). Is there anything I'm missing, or do I have to make any changes?
  5. Hi Jonathan. It was quite difficult to buy a Mac Mini recently when Open Claw was released. People realised the little machines were quite happy chugging along on local LLMs and they were selling like hot cakes. Is that of any interest to you? I have to admit, I like the idea of having a replaceable GPU, but I'm not sure my gaming machine would be up for any serious work.
  6. I was experimenting with Ollama using its cloud models the other day, on its $20/month plan. I had good results. For my next project, I am considering using Kimi-K2.6, GLM 5+, and others. You can use Opencode or Claude for it: `ollama launch claude --model minimax-m2.5:cloud`
  7. I use VSCode and have been a subscriber to GitHub Copilot for a few years. I didn't pay much attention to the plan I was on until I started using agentic coding in ~October 2025. I then realized what made GitHub Copilot a ridiculously good value, as others discovered as well: it works on a "per-request" billing model. In short, if you knew what you were doing (I didn't realize this fully at the time), you could use a high-end model like Opus 4.5, which costs 3 credits, and just have it rip for HOURS on a task, and it would only cost 3 requests (lower-end models would be 1 request). The cheapest plan on GitHub Copilot is (well, now WAS) $10/month, which gave you 300 requests. A lot of people took advantage of this... imagine paying $10/month and getting like $5,000 to $10,000 worth of value (i.e., what the real cost would be with per-token billing) out of it per month! Absolutely insane. Microsoft understandably put an end to that last week because they were losing their shirts: https://github.blog/news-insights/company-news/github-copilot-is-moving-to-usage-based-billing/ In short, they are one of the first to go to per-token/per-usage billing, and I suspect others like Anthropic and OpenAI will eventually follow suit. It's only a matter of time given the economics of it all. However, as you may know, the Chinese AI labs are extremely competitive with their AI offerings and have both monthly plans and token-based plans (generally 90% less cost), and of course they release the models outright. Because of what Microsoft did, I've been experimenting with the various Chinese models via OpenRouter (still using VSCode GitHub Copilot "Bring Your Own Key", which they support), so it's basically the same experience.
However, there seem to be a lot of advancements in high-density, low-parameter models (Qwen3.6 27B, Deepseek V4 Flash) which can run on consumer hardware at good speeds, with output that is not far behind something like Sonnet or Opus, or at least catching up quickly. I haven't owned a discrete GPU in many, many years (I don't game), but I believe with an RTX 5090 and 64-128GB of RAM, it can be done. Don't quote me on any of that, however... I haven't dived into this world yet and don't yet understand all the settings that determine how well an LLM runs and how they affect its intelligence. I'd be interested to hear from anyone who is playing with this idea: What models are you using? What hardware? What software are you using to run it?
  8. @monollonom Bingo! 🙂 I was not aware of this config setting. I've never used it on any templates, and that said, I wouldn't have investigated in this direction, especially since none of the source templates have this flag set. Maybe it is a good idea to check/set this while adding pages through the API, though. Many thanks, cheers Olaf
  9. When setting `$config->advanced = true`, you can specify per template whether you want the name field to appear in the content tab. Could that be your case?
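For anyone finding this later, here is a minimal sketch of that setting. Hedged: the template property name (`nameContentTab`) is from memory and may differ in your ProcessWire version, and the template name below is hypothetical. This is a config fragment, not standalone code, since it needs a bootstrapped ProcessWire instance.

```php
// site/config.php — enable advanced mode (core ProcessWire setting)
$config->advanced = true;

// With advanced mode on, the template editor exposes the option that moves
// the "name" field into the Content tab. Via the API it would be something
// like (property name from memory, may differ):
$template = $templates->get('basic-page'); // hypothetical template name
$template->nameContentTab = 1;             // show "name" on the Content tab
$template->save();
```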
  10. Hi, I add pages via the API. In only one particular case (structurally identical to the other added pages), the path settings appear in the content tab ("Inhalt") and not, as usual, in the settings tab ("Einstellungen") (see screenshots). While this is only a display issue, I am just curious: does anyone have a hint what this could be caused by? PW 3.0.222 dev. Greets, Olaf
  11. Yesterday
  12. Good day! I have a module that sets a URL hook. But then I have code in site/ready.php that stops the page rendering process under certain conditions, based on the page being viewed. So I need to modify that condition to include all URL/path hooks. How do I do it? I thought of setting a custom $config variable, or checking $config->requestPath(). But maybe there is something universal?
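If it helps, here is one untested sketch of the prefix-check idea mentioned above. The helper function name and the prefix list are my own invention; in site/ready.php you would feed it the real request path (e.g. from `$config->requestPath()`, as the post suggests):

```php
<?php
// Untested sketch: decide whether a request path belongs to a registered
// URL hook. Helper name and prefixes below are hypothetical.
function isUrlHookPath(string $path, array $prefixes): bool {
    foreach ($prefixes as $prefix) {
        if (strpos($path, $prefix) === 0) return true; // path starts with a hook prefix
    }
    return false;
}

// Paths your modules register URL hooks for (example values)
$hookPrefixes = ['/api/', '/webhooks/'];

// In site/ready.php, something like:
// if (!isUrlHookPath($config->requestPath(), $hookPrefixes)) { /* stop rendering */ }
echo isUrlHookPath('/api/users', $hookPrefixes) ? 'hook' : 'page';
```

The downside is that the prefix list has to be kept in sync with the hooks your modules register, which is why a universal core-provided flag would be nicer.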
  13. Hi @ryan, there are a lot of modules I've been working on. I began creating them even before the company project itself got underway (these are apps intended for internal use), anticipating what I would need based on the initial requirements and my prior experience with ProcessWire projects. I wanted to avoid conflating module development with the development of the core application itself. That said, this meant I was able to test some of the modules incrementally, as the need arose; however, a significant portion of the project remains to be completed, and there are other modules, on which I performed only limited initial testing, that I haven't actually put into use yet. In fact, they could be considered by the community as proofs of concept, given that I am not strictly a "professional" developer; I'm a graphic designer, and my role within the company is as Webmaster. Nevertheless, thanks to ProcessWire's ease of use, I have spent years implementing tools that assist the various departments in their daily operations. All these modules were built around a philosophy of "Simple": they get straight to the point regarding my specific needs, though I am not sure whether they would prove useful to others. I have never worked with Git or a version control system, nor have I ever published modules before, so I have no idea what the best way to proceed would be; if you could offer me any guidance, I would be very grateful. SimpleWire current modules list:
     - SimpleAsset: Asset management for ProcessWire. Resolves, groups, and renders CSS/JS assets from CDN sources or local paths with cache-busting, SRI, and inline threshold support.
     - SimpleAttempt: Error-first pattern helper for ProcessWire. Returns [error, data] arrays instead of try-catch blocks for explicit, predictable error handling.
     - SimpleAttribute: HTML attribute-based template syntax (.attr.phtml) with pw-* directives, {{ }} interpolation, and file-modification caching.
     - SimpleClient: Fluent cURL-based HTTP client with retry, file download, and concurrent pool support.
     - SimpleForm: Programmatic HTML form builder with fluent API and DomQuery.
     - SimpleHelper: GitHub-based utility vault management with local caching, ETag updates, and discovery.
     - SimpleIcon: Tabler Icons renderer with local SVG caching. Supports data URI, inline SVG, and image tag output formats.
     - SimpleQuery: GraphQL-like query engine (WireQL) for ProcessWire pages with caching, rate limiting, and write support.
     - SimpleQueue: Background job queue with priority, delayed execution, retry, and LazyCron-based processing.
     - SimpleRender: Template rendering with views, components, partials, JSON, XML, and fragment extraction.
     - SimpleRequest: HTTP request abstraction with input handling, validation, and content negotiation.
     - SimpleResponse: HTTP response builder with HTMX support, redirects, and content negotiation.
     - SimpleRouter: URL routing with pattern matching and caching for ProcessWire. For use as a URL segments engine.
     - SimpleSSE: Server-Sent Events (SSE) helper for ProcessWire. Provides a simple API for streaming real-time events to the browser.
     - SimpleFlow: In planning. A basic visual workflow module for ProcessWire.
     - SimpleHook: In planning. A support module for the SimpleFlow module.
     - SimpleFront: In planning. Islands architecture component framework.
     - SimpleAuth: In planning. A thin wrap around ProcessWire user/role/access APIs.
  14. @WireCodex It sounds like a lot of people here would be interested. Is it something where you'd want to make it a public and supported module in the modules directory, or more interested in sharing as a proof of concept?
  15. @AndZyk I was able to duplicate the issue when using OpenRouter. It turned out that because the OpenRouter string starts with "anthropic", it was getting interpreted as a line having a "provider" property, which is something that we deprecated but still support for backwards compatibility, so it was causing the properties to get mixed up. I've updated the detection logic so that it shouldn't happen anymore. Thanks.
  16. ProcessTranslatePage 1.5 + 1.6: two new versions released — improved glossary handling and a second translation provider.
     1.5 — Better glossary management: The module now detects free DeepL accounts (limited to one glossary) and warns instead of showing an error. Existing glossaries on your account are shown as a dropdown so you can select one manually, and there's a new "Delete glossary" option in the settings that removes it from DeepL while keeping the entries in the language fields.
     1.6 — Added Google Cloud Translation option: My main motivation here was that I just discovered DeepL has discontinued new free API plans, so Google Cloud Translation is now available as an alternative. Both providers support the same field types and write modes. Setup for Google requires a GCP service account; the full steps are in the readme. Glossary support remains DeepL-only for now, as Google's glossaries require a Cloud Storage bucket, which seemed a little too much effort for the effect. Locale codes (DE, de, EN-GB, en_gb …) are now normalised automatically at runtime, so the format in the language fields doesn't matter anymore.
  17. @ryan Thank you for the explanation. When I try to enter all values manually in the primary agent: It mixes the values after saving: I cannot leave all fields empty, because then it throws an error. My module config is correct, but ignored because of the primary agent setting. I hope this helps.
  18. Hi @All, the module is updated. In the Modules listing it says Version 0.0.0, but it is 9; IDK why. Anyway, the module is updated to support multilanguage fields, and I added some cosmetic touches to make it look nicer.
  19. Hi @ThomasLichtenstern I would be available right now. I'm based in the south of Germany; we make lovely PW websites with a focus on details and usability, and we currently use PW for a lot of business logic. And we are experienced and keen on frontend details - so your customer will love it - on mobile and on desktop. It's tricky to link our showcases here on the PW website, but here are some of them. A few larger websites in the pharmaceutical business built with PW are already offline due to the clients. Some of our websites are listed on the site and still online, e.g. https://processwire.com/sites/list/die-schwarz-bunte-dein-freizeit-guide/ Looking forward, it would be a pleasure. Andreas
  20. Last week
  21. https://directory.processwire.com/developers/
  22. Hi, team. Lately, Claude and I 😁 have been working on a series of modules for a new project at my company. This set includes a module for queue management. They aren't finished or ready for production yet; in fact, I’ve had few opportunities to test some of them, as I haven't reached those specific stages of the project yet. With these implementations, I’m not aiming to create anything overly complex—after all, there are already plenty of libraries available for that purpose. The idea is for them to be easy to use, free of external dependencies, and equipped with the basic functionality required for the tasks at hand—always adhering to the language and philosophy of ProcessWire. Would you be interested in having me upload some of them to GitHub for a look?
  23. Hi @ThomasLichtenstern I've just moved a WordPress (500+ pages) website to ProcessWire. I've got all the necessary scripts to move content and redirect links for pages/posts/categories. I did the whole SEO, because they had problems with WP even with SEO plugins. I am 100% sure there are way better developers here than me, but if... I am here :)
  24. Oh great, where can I find it? I searched without success.
  25. Hey Thomas There's a developer directory in here somewhere with many talented devs - both solo and agency.
  26. Hi, I have a customer here in Germany who wants to move away from WordPress to ProcessWire. It's a WordPress site with no special functions or anything, just some pages with graphics. Is there someone here who can help? I can send the link to the actual site if wanted, for an estimate of the migration time. Regards, Thomas
  27. Hi Ryan, I'll do my best to explain this, but keep in mind my experience with queues / background jobs is only 2 years old. But in short, inspiration would be best taken from the classic, "batteries included" big web application frameworks like Laravel (as Teppo pointed out) and Rails. I like the Laravel page I linked because it gets very in-depth (I've read that page at least 10 times). Let's use a classic example like this: say someone uploads a video to a field and we want it converted to a different file format (let's say that's being handled by ffmpeg). That's a time-intensive task that couldn't be done within a standard 30-second web request limit, or even with the memory available to PHP via php-fpm. It might take minutes or even hours, and even then, things can go wrong and it might fail. So instead, we'd want this to happen independent of the web request; therefore a background job dedicated to that task would have to be created and scheduled for processing. It could happen immediately, or maybe it can be scheduled for 30 minutes later. This is not related to cron jobs, which are a different concept. (Another example: in an ecommerce checkout, it's generally better practice to send the order notification email as a background job instead of inline with the code that processes the order after it's submitted.) With that, queue systems are typically powered by a different dedicated database or system, with Redis being a popular choice. The reason for this is to limit load on the primary database; many large systems may have millions of jobs per day, so offloading that to a separate database saves resources. However, the big web application frameworks I mentioned also allow the option for the main database itself to store the jobs (Rails enabled this in Nov 2024), which is typically fast enough and probably good enough for a ProcessWire-based solution. You then have workers, which act on the jobs.
You can define one or multiple workers. Say you have a powerful server and want to transcode multiple videos at a time; having multiple workers would allow more jobs to be done in parallel and take advantage of your system resources. Obviously you don't want a worker to act on a job more than once, or two different workers to act on the same job. So this gets into jobs having statuses, being locked, and avoiding race conditions. The Laravel documentation gets into all of that, but I think reading up on queues/background jobs/workers and experimenting with it would be tremendously helpful. In regards to my print-on-demand system, I'm not using a queue system as I described above; I did experiment with WireQueue and IftRunner a while ago, but it didn't really... fit? It was a while ago and I was still wrapping my head around queues; I also came to realize that what I needed was even deeper than that (i.e., durable workflows, but that's unrelated to what we're talking about here). I eventually put something together that relies on "fake" jobs using cron jobs and progressing through a durable workflow; it's not the most efficient way to do it, but I got it working. I've been meaning to rewrite that part of the system one day, and a native/first-party queue system (which dovetails with the CLI commands) would be the best approach.
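The claim-a-job step described above (a worker marks a pending job as locked so no other worker grabs it) can be sketched in a few lines of plain PHP. This is an in-memory illustration only, with names of my own invention; real implementations do this against a database, typically with something like `SELECT ... FOR UPDATE SKIP LOCKED` so concurrent workers never claim the same row:

```php
<?php
// In-memory sketch of a queue's "claim next job" step (illustration only).
$jobs = [
    ['id' => 1, 'status' => 'pending', 'payload' => 'transcode video 1'],
    ['id' => 2, 'status' => 'pending', 'payload' => 'send order email'],
];

// Mark the first pending job as running and hand it to the worker.
// The status change is the "lock" that keeps other workers off the job.
function claimNextJob(array &$jobs): ?array {
    foreach ($jobs as &$job) {
        if ($job['status'] === 'pending') {
            $job['status'] = 'running'; // claim it before returning
            return $job;
        }
    }
    return null; // queue empty: the worker sleeps and polls again
}

$job = claimNextJob($jobs);
echo $job['payload'] . "\n"; // transcode video 1
```

In a real system the claim and the status update must happen atomically (a transaction or an atomic Redis operation), otherwise two workers polling at the same moment could both see the job as pending, which is exactly the race condition mentioned above.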
  28. Ah, let's not forget the OG WireQueue! A couple of examples from my own experience, involving FormBuilder: It would be great to make FormBuilder actions asynchronous when integrating third-party email/list management. Some have very simple integrations that pass as barely noticeable, since they normally involve just one extra request, but some of them sometimes require token refreshes against a different endpoint, for example, and the time starts adding up, making the forms feel slow. Another issue that I actually have right now on a website is how I integrate FormBuilder to save leads as pages; before saving new pages, I search existing pages/users to avoid duplicating data. This has worked perfectly for the last 8 years or so, but after a few hundred leads saved in the database, the user query during the request is starting to be noticeable, and form submissions now feel slow. So I'm going to build FormBuilderActionQueueItem to add this process as a queue item to be processed, and migrate all the logic to the queue worker. Another example that comes to mind is building emails that depend on some sort of query; it's not a rare request that I'm asked to "enrich" emails with related information and data from queries, since the purpose of FormBuilder's administrator emails is to help sales teams make decisions.