Everything posted by wbmnfktr
-
I'm a bit short on time right now so I might have to write a follow-up to give you a real and more complete answer here. But in the meantime, a short summary: each and every client/side project of mine gets a click-dummy of the final product to see how it could work out, what's needed, and so on.

I use Astro JS for that as it's super flexible to work with, and I can deploy it somewhere at Netlify, Vercel, or Cloudflare. Each commit is a new build. I can share it with everyone - frontend and backend-wise. It's more or less just HTML, CSS, JS - some parts of it might have a TailwindCSS or AlpineJS flavor, but still super basic. And the big plus: ALL build steps (TailwindCSS, AlpineJS, ...) are already in place.

If needed I can connect it to an API to fetch articles, news, or whatever kind of data to make it look more real or to go super fast - especially when migrating from WordPress, where a REST API or GraphQL is almost always in place already. For side projects I connect to api.domain.tld, grab JSON, and render out either pages or just parts of the project on build. For side projects in very early stages that would be the state for the next 3 to 6 months to see if the project gets some kind of traffic - for client projects this is the base to start the real work.

From there I take all the components and move them from .astro to .twig - the difference is so minimal I could use regex to make the changes most of the time. Feeding all layouts, components, partials, blocks, however we want to call those code snippets, into ProcessWire is pretty easy when you know where things have to go, and most of the time you only change the parts that define the source - so from a JS fetch() to a $pages->find('...') (see the small sketch below) - and of course you have to build out the ProcessWire backend stuff, hooks, automation, and whatever you need or want.

Some would say there are a lot of unnecessary steps in this process and they could be right, but I prefer to test projects early on and hate to look at Figma files or Illustrator screenshots. So there is that. I always worked that way and that will probably never change.

On the technical side you have to think about two systems running side by side:
- Astro JS on Netlify, Vercel, Cloudflare, or a VPS with NodeJS and ... lots of other stuff
- ProcessWire with database and everything it needs on a sub-domain

You could fit everything onto one server but it can be quite painful to get this up and running, so I use a regular hosting provider for ProcessWire and one of those mentioned above for Astro JS. The output is, most of the time, 100% static and built on demand with the data and content available at that moment. You could make it more dynamic with AlpineJS or HTMX, but only for small parts, and not for articles and news - as those wouldn't exist within the static build.

As this turned out to be broader than expected, please feel free to ask about more details where needed.
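A rough sketch of that data-source swap, assuming a hypothetical "article" template and made-up field names - the component markup stays the same, only the data source changes:

// Before (in the .astro component's frontmatter) - build-time fetch against the API:
// const articles = await fetch('https://api.domain.tld/articles.json').then(r => r.json());

// After (ProcessWire template code feeding the same markup, now a Twig view):
$articles = $pages->find("template=article, sort=-date, limit=10"); // assumed template/field names

$items = [];
foreach ($articles as $article) {
  $items[] = [
    'title' => $article->title,
    'url'   => $article->url,
    'date'  => $article->date,
  ];
}

// hand $items over to the Twig component that used to be the .astro file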
-
Put those posts here in the Dev Talk in their own topical threads. I personally love those posts and behind-the-scenes looks. So... a +1 from me here.
-
Add a 2nd build file that only uses parts of your CSS/less.

/
├── dist
│   ├── backend.css
│   └── frontend.css
└── src
    ├── _index-backend.less
    ├── _index-frontend.less
    └── less
        ├── blocks.less
        ├── buttons.less
        └── ...

In the build file you only import those parts you need/want in the backend or WYSIWYG editor, while the frontend will have everything.

// _index-backend.less
// we only need the buttons in the backend
@import "buttons.less";

That's how I structured my .less/.sass/.scss files.
-
Just a little side note for those that like to play and experiment: I was playing around with Grok 3 beta (via x.com/grok.com) and asked for specific ProcessWire rule files to use in Cursor/Windsurf. And let's say: I'm quite impressed by how good those rule files look. 🤯 I'm in the process of moving everything around and can't tell yet if they work as well as they look, but Grok 3 seems to be even better than Sonnet 3.5 and 3.7 for technical tasks like these.
-
Back when I started this thread I tried multiple ways, modules, and custom exports. From JSON to AppApi to GraphQL and everything in between. I still use basic JSON in some projects or just grab what I need via HTMX nowadays. I pull in only simple data via JSON that I might need at build time, or fully rendered HTML with HTMX, in my AstroJS projects (rough example below). Whenever I start a new project and need an MVP-like skeleton of it, I go with static content in Markdown/MDX in AstroJS; later on I'll migrate to 100% ProcessWire in most cases. It just works, I feel at home, know how to handle stuff, have everything I need, and with ProCache, LoginRegisterPro, and FormBuilder I can keep everything on my server and don't need things like Supabase, Neon, FormSpark, or whatever. So to finally answer your question: no, not anymore.
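As a rough idea of what such a build-time JSON source can look like on the ProcessWire side - a minimal sketch, assuming a dedicated template file and made-up template/field names:

<?php namespace ProcessWire;
// Hypothetical template file (e.g. api-articles.php) that outputs simple JSON
// which an Astro build can grab at build time.

header('Content-Type: application/json');

$items = [];
foreach ($pages->find("template=article, sort=-date, limit=20") as $p) {
  $items[] = [
    'title'   => $p->title,
    'url'     => $p->httpUrl,
    'summary' => $p->summary, // assumed field
  ];
}

echo json_encode($items, JSON_UNESCAPED_UNICODE);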
-
Is there a chance you are using RockMigrations and somehow triggered an older migration that removed fields from that template? Happened to me yesterday and took me a while to find. And to make it clear: it's intended behaviour - see screenshot. The minus tells RockMigrations to only have listed fields in that template. Without the minus I could add fields via backend and run migrations without removing manually added fields.
-
Is your database set to utf8mb4?

// site/config.php
$config->dbCharset = 'utf8mb4';

If not... saving emojis and emoticons won't work as expected. Not sure about the steps to convert a db to accept it. Sorry.
-
Publish it as a gist on Github or in case you are interested write a full recipe and publish it on processwire.recipes.
-
Serving different templates folder depending on domain name
wbmnfktr replied to shadowmoses's topic in General Support
First idea... is there an actual homepage template in all folders? Does your logic always return the correct strings on the homepage and other pages?
-
The user here is only on the frontend, aka consuming content from those websites? Well, then those 400ms can be squashed by using Cloudflare as a CDN. Cloudflare is close to everyone, especially regular users. Your backend users/admins/whoever will probably be just fine with the additional 400ms when adding content, notes, and other data.
-
... but it feels like home.
-
To dump my thoughts here: What's the main goal? Or... what's the reason you need an off-site user management system? How real-time does it need to be?

Putting everything on one server (#1) would be perfect, as you could then bootstrap ProcessWire into other instances quite easily and go that route - which actually would be #4 - but you would lose those precious milliseconds on each request you already mentioned (see the sketch below).

Ideas #2 and #3 sound fun but yes, probably total overkill. Never used or tried those, so no idea how well that would work out in the end here. #4, as mentioned, would probably need all sites/instances on one server as well. Not sure if and how you would do this across multiple servers.

But here comes idea #5: get a server that's somewhere in the middle and build your user management there. You could then put an API on top with either your custom code or something like the AppApi module and roll your own auth for some or all users. Probably a bit overkill as well; yet another approach could be some kind of API to sync accounts from and to that server. Trigger actions via webhooks in case new users were added or removed somewhere. Could be fun. Could be pain. Not sure for now.
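For the bootstrap route (#1/#4), a minimal sketch of what that could look like with everything on one server - the path and the user lookup are placeholders:

<?php
// Another PHP script/app on the same server, bootstrapping the main
// ProcessWire installation to check users against it.

include '/var/www/main-site/index.php'; // makes the ProcessWire API available as $wire

$u = $wire->users->get("email=someone@example.com"); // hypothetical lookup

if ($u && $u->id) {
  // user exists in the central installation - reuse it here
  echo $u->name;
}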
-
Not 100% sure, but isn't this the default behaviour of browsers, or maybe even the web server, to encode umlauts/special characters? I know we can enable them in ProcessWire itself and use them in page names, but I'm not sure this would change the behaviour in Jumplinks as well. For the time being you could add these to the redirects. How often do you experience these kinds of URLs?
-
Is that one public? If so... change that! For everything else... I'm absolutely not sure whether this was or wasn't a hacking attempt, but at least it looks like someone was scanning the website and the server/hosting - especially the database - couldn't handle that much traffic/pings/requests. This could have been Google, ChatGPT, or whatever crawler/bot/spider is active right now. That FieldtypeText log at the beginning seems to indicate there is something it can't handle for whatever reason. Might be a hook that updates a text field or something. I'd probably use this as a starting point for everything PW-related. Please check the other /site/assets/logs/ files for more entries, check server log files to see what happened elsewhere on the server, and so on.
-
Not a real answer (sorry!), but... We do initial workshops with clients to guide them around and assist for at least 3 months with each and every question they might have. In 90% of all cases that's all it ever takes, as they are only able to add and edit small parts of the site. Using CSV imports or ProLister might take a bit more time, but other than that it's easy going. So that said: we never created any real tutorials, guides, or similar. Keeping the client very close in the first months they do things on their own is super important for us. Most of the editors can barely handle more than MS Office basics but can manage the website's content.
-
I have seen that tool before but never used it, and I actually don't know if it's worth putting it into an IDE. The codebase is already available in full. Maybe I'm missing the unique feature here.
-
I'll try to get my latest starter tested and fixed and will give you a link to the full repo with all examples and rules. Well... you basically have the link already, but the version is an old one.

In regards to self-hosted docs and documents there is a new service which helps quite a bit - maybe worth putting the processwire.com docs in there: parse.new

About the examples vs. docs part... even though we have huge context windows now, they fill up pretty quickly. Using examples here nails the results almost 99% of the time. A list of hooks, some more complex examples like modules with configurations, and other module-specific things speed it up as well. No scraping, outlining, and understanding needed, as you can reference it. The same with more complex selectors: I write them myself, put them in place, and whatever IDE and LLM can work with them. Plus: adding an example of the returned data structure makes writing views/components so super easy then (small example below). ☺️

Update: just found out parse.new is for sale - not sure how long this will be kept up - see: sale.parse.new
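A small made-up example of such a reference file - the selector, the loop, and the resulting structure are all illustrative, not from a real project:

<?php namespace ProcessWire;
// Example handed to the LLM: a more complex selector plus the shape of the
// data the view/component will receive. All template/field names are invented.

$events = $pages->find("template=event, date>=today, categories=featured|partner, sort=date, limit=12");

$data = [];
foreach ($events as $event) {
  $data[] = [
    'title'    => $event->title,
    'url'      => $event->url,
    'date'     => $event->date,
    'location' => $event->location, // assumed field
  ];
}

// Resulting structure the component can rely on:
// [
//   ['title' => '...', 'url' => '/events/foo/', 'date' => 1712345678, 'location' => '...'],
//   ...
// ]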
-
The main reason I switched to ProcessWire was the fact that I could add an unlimited amount of templates with 100% custom fields to my projects. Back in the day WordPress had two types of content: posts and pages - I remember when the feature to have pages was added. 😂 So I started using Textpattern, which allowed me to have at least 10 custom fields and individual page templates. That worked pretty well for a while, but ... after some time I needed more fields and more templates, and found ProcessWire. In that moment I was able to create templates for books, restaurants, movies, musicians, whatever type of data I wanted and needed. Fields became more than just strings or dates. It was possible to have textareas, repeaters, and tables wherever and whenever needed. That was pretty much 10+ years ago. 🤯 Oh... and of course having this was awesome as well: an unlimited amount of backend users, user roles, access management, multilanguage support, resource friendly, and it worked perfectly fine even on low-end cheap shared hosting.
-
I took a way easier route and grabbed the one from Yifan here: https://gist.github.com/yifanzz/3cfb8f9065769ffbf94348255f85597d - more details: https://www.youtube.com/watch?v=aG-utUqVrb0

At one point I gave this one a try, but will keep only parts from it in the next iteration of my rules file: https://github.com/kinopeee/windsurfrules/blob/main/v5-en/.windsurfrules

Another thing I started doing is adding files with examples: PHP or programming isn't the real issue with any of the current bigger LLMs, but ProcessWire itself is, as there isn't that much training data around - compared to WordPress, NextJS, Angular, ... whatever. So adding the examples makes it pretty easy and drops the amount of credits needed by a good margin.
-
Just gave it a super quick try, while giving guidance based on existing code and purpose. I used Deepseek V3, as it gives me more issues than Claude or OpenAI on average, and ... even though it was a simple task, it did a great job. I need to test this on the weekend, as my current rules do things here as well - as they should. The rephrasing might just be the cherry on top! Claude Sonnet does a great job as always, just faster.
-
This reminds me of the AIDER Architect Workflow I saw a while back. In short: two instances:
- one for a technical concept/developer manual
- one for doing the work
The first instance uses Gemini Flash, Deepseek R1, or an OpenAI reasoning model and puts every little detail into separate files; the second instance uses Claude Sonnet 3.5 and does all the heavy code lifting. Your workflow is super similar, yet way slimmer and probably faster for most tasks. Need to try this in my rules!
-
Install Language Support (core module), look for translatable files within the LoginRegisterPro module directory, select all files listed, go back to the overview. From there you can either search for a translation or dig into all files and translate each and every string you find.