Jonathan Lahijani last won the day on April 14
The PW Upgrade module.
-
One minor nitpick with this module: if you're running the dev version of ProcessWire (let's say 3.0.123) and that version receives additional commits/updates without a version bump, you can't "upgrade" to them given how this module works; you can only upgrade once dev 3.0.124 is available. Not a big deal for me since I have a bash script that takes care of pulling the latest from GitHub, but it would be nice if this module supported grabbing the absolute latest, bleeding-edge version.
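My bash script isn't shown here, but the idea can be sketched. Since the real origin (github.com/processwire/processwire, branch dev) needs network access, this runnable toy stands up a throwaway local "upstream" just to demonstrate the fast-forward pull that grabs post-release commits with no version bump; all names and paths are hypothetical:

```shell
#!/bin/sh
# Sketch of "pull the latest dev commits even without a version bump".
# In practice origin would be https://github.com/processwire/processwire
# and the branch would be "dev"; a throwaway local repo stands in here
# so the example runs anywhere.
set -e
tmp=$(mktemp -d) && cd "$tmp"

git init -q upstream
git -C upstream -c user.email=pw@example.com -c user.name=pw \
    commit -q --allow-empty -m "3.0.123 dev"

git clone -q upstream site    # the deployed checkout

# upstream gains a commit with no version bump...
git -C upstream -c user.email=pw@example.com -c user.name=pw \
    commit -q --allow-empty -m "post-release fix"

# ...which a fast-forward pull picks up, version bump or not:
git -C site pull -q --ff-only origin
git -C site log --oneline -1
```

The `--ff-only` flag keeps the checkout strictly tracking upstream (the pull fails loudly instead of creating a merge commit if local changes have crept in).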
-
Jonathan Lahijani started following PW 3.0.259 – Core updates and Running LLMs Locally
-
I use VSCode and have been a GitHub Copilot subscriber for a few years. I didn't pay much attention to the plan I was on until I started using agentic coding in ~October 2025. I then realized what made GitHub Copilot a ridiculously good value, as others discovered as well: it worked on a "per-request" billing model. In short, if you knew what you were doing (I didn't fully realize this at the time), you could use a high-end model like Opus 4.5, which costs 3 requests, have it rip for HOURS on a task, and it would only cost those 3 requests (lower-end models cost 1). The cheapest GitHub Copilot plan was $10/month, which gave you 300 requests. A lot of people took advantage of this... imagine paying $10/month and getting something like $5,000-$10,000 worth of value (i.e., what per-token billing would have cost) out of it per month! Absolutely insane.

Microsoft understandably put an end to that last week because they were losing their shirt: https://github.blog/news-insights/company-news/github-copilot-is-moving-to-usage-based-billing/

In short, they are one of the first to move to per-token/per-usage billing, and I suspect others like Anthropic and OpenAI will eventually follow suit; it's only a matter of time given the economics of it all. However, as you may know, the Chinese AI labs are extremely competitive with their AI offerings, have both monthly plans and token-based plans (generally ~90% cheaper), and of course release the models outright. Because of what Microsoft did, I've been experimenting with the various Chinese models via OpenRouter (still through VSCode GitHub Copilot's "Bring Your Own Key" support, which they offer), so it's basically the same experience.
However, there seem to be a lot of advancements in high-density, low-parameter models (Qwen3.6 27B, DeepSeek V4 Flash) which can be run on consumer hardware at good speeds, with output not far behind something like Sonnet or Opus, or at least catching up quickly. I haven't owned a discrete GPU in many, many years (I don't game), but I believe with an RTX 5090 and 64-128GB of RAM it can be done. Don't quote me on any of that, however... I haven't dived into this world yet and don't yet understand all the settings that determine how well an LLM runs and how they affect its intelligence. I'd be interested to hear from anyone who is playing with this idea: What models are you using? What hardware? What software are you using to run them?
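I can't answer my own hardware question yet, but the "does it fit?" part is mostly arithmetic: weights-only memory is roughly parameters × bits-per-weight ÷ 8. A back-of-envelope sketch (the model size and quantization level are assumed numbers, and this ignores KV cache and activation overhead, which add several more GB):

```shell
#!/bin/sh
# Back-of-envelope: weights-only memory for a quantized model.
# (Ignores KV cache and activations, which add several more GB.)
params_b=27          # hypothetical 27B-parameter model
bits=4               # e.g. 4-bit quantization ("Q4")
weights_gb=$(( params_b * bits / 8 ))
echo "~${weights_gb} GB of memory for weights"
```

At 4 bits a 27B model's weights come to roughly 13-14 GB, which is why a 32 GB card in the RTX 5090 class sounds plausible for this size of model; lower quantization saves memory at some cost to output quality.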
-
Hi Ryan, I'll do my best to explain this, but keep in mind my experience with queues / background jobs is only 2 years old. In short, inspiration would be best taken from the classic, "batteries included" big web application frameworks like Laravel (as Teppo pointed out) and Rails. I like the Laravel page I linked because it gets very in-depth (I've read that page at least 10 times).

Let's use a classic example: say someone uploads a video to a field and we want it converted to a different file format (handled by, say, ffmpeg). That's a time-intensive task that couldn't be done within a standard 30-second web request, or even with the memory available to PHP via PHP-FPM. It might take minutes or even hours, and even then things can go wrong and it might fail. So instead we'd want this to happen independently of the web request: a background job dedicated to that task would be created and scheduled for processing. It could happen immediately, or maybe 30 minutes later. This is not related to cron jobs, which are a different concept. (Another example is an ecommerce checkout: it's generally better practice to send the order notification email as a background job instead of inline with the code that processes the order after it's submitted.)

Queue systems are typically powered by a separate, dedicated database or system, with Redis being a popular choice. The reason is to limit load on the primary database; many large systems may have millions of jobs per day, so offloading them to a separate store saves resources. However, the big web application frameworks I mentioned also offer the option of storing jobs in the main database itself (Rails enabled this in Nov 2024), which is typically fast enough and probably good enough for a ProcessWire-based solution. You then have workers which act on the jobs.
You can configure one or multiple workers. Say you have a powerful server and want to transcode multiple videos at a time; multiple workers let more jobs run in parallel and take advantage of your system resources. Obviously you don't want a worker to act on a job more than once, or two different workers to act on the same job, so this gets into jobs having statuses, locking, and avoiding race conditions. The Laravel documentation covers all of that, but I think reading up on queues/background jobs/workers and experimenting with them would be tremendously helpful.

In regards to my print-on-demand system, I'm not using a queue system as I described above. I did experiment with WireQueue and IftRunner a while ago, but they didn't really... fit? It was a while ago and I was still wrapping my head around queues; I also came to realize that what I needed was even deeper than that (i.e., durable workflows, but that's unrelated to what we're talking about here). I eventually put something together that relies on "fake" jobs driven by cron jobs progressing through a durable workflow; it's not the most efficient way to do it, but I got it working. I've been meaning to rewrite that part of the system one day, and a native / first-party queue system (which dovetails with the CLI commands) would be the best approach.
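Laravel and Rails back their queues with Redis or the database, but the claim-and-lock idea underneath is language-agnostic. Here's a minimal file-based sketch (all names hypothetical, and this is NOT how those frameworks implement it) where an atomic mv plays the role of the lock, so two workers can never grab the same job:

```shell
#!/bin/sh
# Toy job queue: a job is a file; claiming it = mv into a claimed dir.
# mv within one filesystem is atomic, so a job can be claimed at most
# once -- the same guarantee a database-backed queue gets from row
# locking (e.g. SELECT ... FOR UPDATE SKIP LOCKED).
set -e
mkdir -p queue/pending queue/claimed queue/done

enqueue() {
  tmp=$(mktemp queue/job.XXXXXX)        # write payload outside pending...
  printf '%s\n' "$1" > "$tmp"
  mv "$tmp" "queue/pending/${tmp##*/}"  # ...then publish atomically
}

work_one() {                            # claim and process one job
  for job in queue/pending/*; do
    [ -e "$job" ] || return 1           # queue is empty
    claimed="queue/claimed/${job##*/}"
    if mv "$job" "$claimed" 2>/dev/null; then  # a lost race fails here
      echo "processing: $(cat "$claimed")"     # e.g. invoke ffmpeg
      mv "$claimed" "queue/done/${job##*/}"
      return 0
    fi
  done
  return 1
}

enqueue "transcode video-123.mov"
work_one
```

A real system layers statuses, retries, scheduling, and visibility timeouts on top, but the atomic claim step above is the heart of avoiding the race conditions mentioned.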
-
Wow. These "ProcessWire as a web application framework"-type updates are coming in strong! Migrations, CLI, tests, AI. One can wish for maybe a first-party queue system too. 😄
-
Ahh ok. So 150 million of it is cached and therefore not charged. Makes more sense.
-
What was the cost for that many tokens?
-
I can't wait to play with all this stuff. I love that ProcessWire was envisioned over two decades ago, yet keeps pace with the cutting edge of web development. That just speaks to how well it was architected when it was publicly released. This feels like the next era of ProcessWire!
-
For my management execution system web application, which was built with PW: 50+ templates, plus another 50+ repeater fields, which are technically templates too.
-
This song is a total banger:
-
Caddy - a lightweight HTTP2 web server
Jonathan Lahijani replied to gurkendoktor's topic in Dev Talk
@Ivan Gretsky Yes, still using Apache on the LXCs. I'd never heard of Incus, but thanks for mentioning it; it seems it started in 2023, so it's still in the early stages. For now I'm going full force on Proxmox (I spent a lot of time in December and January experimenting with it), but I'll keep a close eye on Incus. Users in this post on HN say some nice things about it, so that's a good sign.
-
This is a bit off-topic (though related to Caddy), but I have been redoing my internal development setup, which is now powered by a dedicated Proxmox server (on a Minisforum MS-01) and uses LXC containers for ProcessWire sites. Some containers host just one ProcessWire site, others host multiple. Previously I had one bare-metal dedicated server with all my sites on it, which is very convenient, but I lose parity with my production environments (not a big deal for a typical website, but more so for mission-critical webapps; I'm also trying to avoid Docker). So my internal infrastructure might look like this:

lxc1: site1.domain.com, site2.domain.com, site3.domain.com
lxc2: site4.domain.com

Since I want some of my development sites to be accessible from the outside on dedicated subdomains as shown above, and my internet connection has only one IP address, a reverse proxy is required. So I set up another LXC running Caddy as a reverse proxy (plus fail2ban to deal with stupid hackbots), which works very well: it's ridiculously easy to set up and takes care of SSL automatically. I did this just a couple of days ago, after not having played with Caddy for a couple of years, so maybe this bit of excitement will get me back into using it directly as well.
-
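Re: the Caddy reverse-proxy setup above, for anyone curious how little config it takes: a Caddyfile on the proxy LXC can be as small as the following. The hostnames and internal IPs are made up to match the hypothetical layout; `reverse_proxy` is the actual Caddy directive, and Caddy obtains and renews the TLS certificates for each hostname automatically.

```
site1.domain.com {
	reverse_proxy 192.168.1.11:80
}

site4.domain.com {
	reverse_proxy 192.168.1.12:80
}
```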
@Pete I did get far with it, but I never ended up running a site with Caddy, so it hasn't been battle-tested. I've messaged it to you directly, as I don't want to post something incomplete publicly until I've really done my due diligence on it.
-
From my experience, even asking an LLM (like ChatGPT in its website chat interface) about ProcessWire's architecture is pretty impressive. I spent a lot of time last year using AI to compare ProcessWire to full-stack web application frameworks like Laravel (verbally, not via direct code), and the language it used to describe ProcessWire, with API variables like $pages, $fields, etc. as "service-like objects", was something I'd never seen described anywhere (PW docs or forums), but it's technically correct, and a lot of things clicked for me after that. These coding agents already do really well with ProcessWire without any special harnesses. I'm excited for all these upcoming features.
-
So ProcessWire now has the beginnings of a first-party "schema" (in the ProcessWire sense) migrations system? YES!!!