Everything posted by ErikMH
-
Thanks for your thoughts, @wbmnfktr! I apologize for omitting the purpose of my question.... Basically, my sites have all been “read-only,” if you will. A very limited number of people have added content to a staging site (in New York, say), and that’s been cloned to the various servers. I haven’t needed to worry about other data getting added in Sydney or Seoul or Amsterdam. (I have the idea that most ProcessWire sites are actually fairly local: museums, schools, shops, local councils. Mine are certainly niche, but they are not in any way geographically limited.) But as soon as I allow user logins (and user profiles and preferences and user-stored private notes, etc.), then I have data getting added in Santiago and Bangalore, etc. It will still be a small percentage of the total site data, but it has to be handled somehow.

The easiest thing would certainly be to change my topology altogether and have exactly one speedy server somewhere, but it seems unfair to add 400 ms of latency (×2) to every server request for visitors halfway across the planet, just to accommodate this need. OTOH, I was thinking that would seem like a fairly reasonable price to pay for the small subset of information that was actually user-related — if only there were some way of siloing that data away on a single server. That’s where I was coming from with my original question.

As I write this, though, it occurs to me that there will always be small but non-zero bits of user-related information that are being called for constantly (“Does the user have permission to see this particular data?” “Display the user’s full name in the corner.” Etc.) — so I’m realizing that dividing the data up really isn’t a great idea.

Your option #5 sounds quite interesting; I hadn’t been aware of AppApi. I’ve read through the documentation and will think about it.
In fact, your sixth (unnumbered) option might be the best yet: allow local addition/modification of user-related data, but set up my own syncing protocol. The usual problems of conflict resolution wouldn’t be much of a factor, since the likelihood of a single user’s data being modified on two geographically distinct servers is vanishingly small. (It would be low anyway, but Cloudflare is set to continue using a given server for the duration of a session.) I could actually use a log-synchronization strategy similar to what Galera provides for MariaDB....

I won’t be free to begin this for another four weeks, but I’ll keep digging into it when I get free moments in the meantime. Thanks again for your suggestions! Anyone else have any thoughts?
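To make the idea concrete, here’s a minimal sketch of the last-write-wins merge I have in mind — plain PHP, no ProcessWire dependency. The record shape and the function name are hypothetical, not any existing API:

```php
<?php
// Hypothetical last-write-wins merge for replicated user records.
// Each record: ['modified' => unix timestamp, 'data' => ...], keyed by user id.
// Because Cloudflare session affinity makes true write conflicts vanishingly
// rare, "newest modification wins" should be a safe resolution rule.
function mergeUserRecords(array $local, array $remote): array
{
    $merged = $local;
    foreach ($remote as $id => $record) {
        // Take the remote copy only if we have no local copy,
        // or the remote copy is newer than ours.
        if (!isset($merged[$id]) || $record['modified'] > $merged[$id]['modified']) {
            $merged[$id] = $record;
        }
    }
    return $merged;
}
```

Each server would periodically exchange its recently changed records with its peers and run a merge like this; since conflicts are so unlikely, a per-record timestamp comparison is probably all the resolution logic I’d need.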
-
Hello, friends! Thus far in my ProcessWire journey, I have created a half-dozen lightweight sites that all run on an inexpensive Vultr cloud server (also a really exciting web app that isn’t quite ready for unveiling, but I can’t wait to show you!). I clone this Vultr server, “restore” it to a few other Vultr servers around the globe, and have set up Cloudflare to “load-balance” (really, “proximity-balance”) access — so rather than upgrading my single server to handle more traffic, I have cheaper, geographically distributed servers with quite sufficient speed and lower latency. This has worked really well for my read-only sites.

I now find I need to add user accounts and lightweight user-stored data. Since I would prefer to use ProcessWire’s built-in user structure, I seem to have a few options:

1. Host everything on one server. Pros: simple solution. Cons: I lose the low-latency geographic distribution; this would be really tragic for colleagues in Australia and New Zealand, in particular.
2. Roll my own user subsystem using Cloudflare or another cloud database and workers. Pros: it would probably work? And I could keep my low-latency architecture. Cons: I lose all ProcessWire integration, and I have to learn how to create such a thing.
3. Set up all my servers’ MariaDB databases as replica masters using Galera. Pros: 100% of my data would be synchronized among all my servers. Cons: it sounds complicated, and it’s probably overkill. And I have no idea how I’d proceed when I update the system to include a new template or fields; I don’t think there’d be a way for me to simply “push” from a staging server the way I do now....
4. Host a few specific ProcessWire tables in one database instance, and access it from PW sites around the globe. Pros: I keep the low-latency distribution and I work primarily with ProcessWire. Cons: I don’t know how to accomplish it.

So, obviously, I want to go with #4 if I can. But I have some questions:

- Am I right? Or would one of the other options be preferable? Or am I missing a better solution?
- Is #4 possible? Would I use hooks, somehow, to divert user-table and related traffic? And a secure Cloudflare “tunnel,” for example, between the local PW host and the user-table host?
- To save and read user-table (and related) data, would I somehow need to redefine the $user variable to use some kind of a backchannel API inquiry?

I feel like I’m just a little bit out of my depth here, and if anyone has any suggestions, I’d really appreciate your insight!
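For what it’s worth, my vague notion of the hook side of #4 looks something like the sketch below — a one-way push from site/ready.php, where the endpoint URL and payload shape are pure assumptions on my part, not a working protocol:

```php
<?php namespace ProcessWire;

// Sketch only, for site/ready.php: after any page using the "user"
// template is saved locally, push its data to a central user-data
// server (which would sit behind a secure Cloudflare tunnel).
// The URL and payload here are hypothetical.
$wire->addHookAfter('Pages::saved(template=user)', function (HookEvent $event) {
    $user = $event->arguments(0);
    $http = new WireHttp();
    $http->post('https://central-users.example.com/api/users', [
        'name'     => $user->name,
        'email'    => $user->email,
        'modified' => $user->modified,
    ]);
});
```

Reading user data back (and redefining $user itself) is the part I haven’t figured out, which is exactly why I’m asking.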
-
Ha! Twenty-three months later I have the same exact problem. And twenty-three months later, I find @Robin S’s excellent answer. Thank you so much — again!!
-
I apologize if I sounded rude in my first reply. It had been some years since I’d last worked with WordPress, when an academic friend asked me to help her migrate (and upgrade) her WP site to a new provider last month. This should have been a very straightforward job, but every single one of the plug-in authors had built in limitations that kept me from doing very simple things unless I upgraded to their pro-level versions — and in most cases that would have involved a minimum one-year subscription. And since my client’s site had been hosted with a provider that allowed no command-line access and only limited SFTP, I was at the plug-in authors’ mercy. Your situation is quite different, of course, but the whole experience reminded me strongly of what I dislike about what WP has become.

In my case, I can attest that PW is about 50% of my income, and that I devote about 50% of my work time to it, so it’s certainly a viable means of income. But it’s true that creative work, even in IT, is a less predictable income stream than maintenance. Best of luck with ProcessWire, and with going freelance! I hope we’ll continue to see you here in the forum!
-
I, for one, dislike updating software, applying security patches, and making backups. Especially for other people. Reducing those chores means I get to spend more time building creative web sites and tailoring sites to my clients’ needs, which I enjoy very much. And if maximizing your revenue is a priority, well, good creative work is generally more lucrative, too.
-
@modifiedcontent: I, too, use MariaDB on Vultr servers with no problems whatsoever. But I think @Jonathan Lahijani is asking about Vultr’s managed databases. I’m afraid I have no experience with those, so I can’t weigh in profitably. Have you used those?
16 replies
Tagged with: hosting services, vps (and 1 more)
-
I’m afraid I have no experience with any of the hosts you mention. In addition to my Vultr recommendation, I can offer the following info:

Cloudways. They really grease the wheels for setups where pushing and pulling between staging and production environments is key. You have a choice of all (or most) of the data centers available from Vultr, Linode, Digital Ocean, Amazon, and Google, and can mix and match. They were independent, based in Malta, but they’ve been bought by Digital Ocean. A little over two years ago, I tested the speed of a simple PW site on identically configured Cloudways hosts with Digital Ocean, Vultr, and Linode backends in the same cities. (I couldn’t afford Amazon or Google.) Though all the sites behaved perfectly, the Vultr-backed site was far more responsive. D.O.’s sluggishness didn’t surprise me, but I’d have expected Linode to be more competitive. It was these Cloudways tests that led me naturally to Vultr when D.O. bought Cloudways.

pair Networks. A shadow of their former self. Still only in Pittsburgh, Pennsylvania — though at one point theirs was one of the best-connected data centers in the world. They have not kept up with the times. I’m still hosting a couple of nearly abandoned sites with them; I guess I hate to pull the plug, since I’ve been a client for 27 years!

Hetzner Online. Lots of offerings. GDPR-compliant. Based in Germany (for better and for worse). The user interface looks like it was designed in 1999 and got a new coat of paint last year. I have one site with them, which I intend to move to Vultr — not because I’m particularly unhappy with them, but just to simplify.

Let us know what you decide!
-
I’ve had a great experience with Vultr, first as the data center for some sites I hosted via Cloudways, and more recently (since Cloudways was bought by D.O.) directly with Vultr. I absolutely recommend their “Vultr Cloud Compute” (shared-CPU VPSs) — especially the “high performance” and “high frequency” offerings starting at $6/month. In fact, I’ve found the absolute base-level “high frequency” choice is more than adequate to host five small PW sites (single “wire” folder, multiple “site” folders): 1 vCPU (3 GHz+ Intel Xeon), 1 GB RAM, 32 GB NVMe SSD, 1 TB bandwidth. They give you a choice of many OSs (including CentOS 9 and 10, though I use Debian 12) and many data centers worldwide.

For my niche sites with visitors from around the globe, I’ve found it practical to set up one server locally for development and data entry. Whenever I’m ready to push a code change or new postings/pages, I make a (nearly free) snapshot backup of the staging server and restore it to a half-dozen strategically chosen server sites (at $6 each): Amsterdam, Seoul, Atlanta — you get the idea. I have set up Cloudflare to handle the load balancing geographically.

If the CPU and bandwidth are adequate but you need more drive space, they offer S3-compatible object storage or inexpensive regular block storage. Or if you run a high-traffic site, you can of course up the RAM, CPU, bandwidth, and internal SSD specs.

The usual disclaimer applies: I have no affiliation with Vultr; I’ve just been happily hosting with them for three years. I do recommend that you choose either “high frequency” (if speed and a little extra drive space are more important) or “high performance” (if you want more bandwidth and are willing to trade a little drive space and speed for it), rather than “regular performance,” which felt surprisingly slower to me when I set up parallel systems for comparison.

https://www.vultr.com/pricing/#cloud-compute/
https://www.vultr.com/features/datacenter-locations/
-
I’ve been very happy with Vultr over the past couple of years. I host several low-traffic PW sites (1.2M total requests/month) on one of their lowest-tier “high performance” machines for $6/month and they are quite reliable and extremely fast for this use. Though they suggest a 4GB machine for production, I find the 1GB is absolutely sufficient for ProcessWire. On the other hand, it’s important not to try to save the extra $1/month and go with the corresponding “regular performance” machine: performance does indeed take a fairly dramatic hit. I’ve found the Vultr servers to be much more responsive than Digital Ocean or Linode (or the venerable Pair Networks); I haven’t compared any others.
-
Indeed, I’ve been finding HTMX very useful — and I was surprised to see that I’m several versions behind. I spent quite a while looking for the intervening release notes. I couldn’t find any at htmx.org, but they are available via UNPKG: https://unpkg.com/browse/htmx.org@1.9.3/CHANGELOG.md Looks like it’s time to upgrade!
-
That’s a shame, @bernhard — not the way I like my app developers to behave. But I’ve learned so much from your own helpful comments and answers and modules that it would never occur to me to withhold a helpful hand when there’s something I know a little bit about. I’ve been privileged to belong to several forums with very high signal-to-noise ratios, but this one is truly la crème de la crème. Tschüß, und viel Glück!
-
I don’t know @wbmnfktr’s source, but it sure sounds like that’ll do the trick nicely! My suggestion otherwise, honestly, if you don’t have a lot of digital typography knowledge or relevant tools, would simply be to experiment with the CSS: specify only one font at a time and include font-variant-numeric: tabular-nums, remembering (of course) to dump (or not use) your browser cache with each new page load. I believe all the various Arial varieties support it, as do the Avenirs. So do (at least) the macOS Helveticas. Disappointingly, the current macOS system font (variations on “SF” and “San Francisco”) appears not to support it, other than the monospaced SF Mono variant.

It probably goes without saying, but you’ve asked for “basic,” so: all monospaced fonts will do exactly what you want, even without font-variant-numeric: tabular-nums. Even if you don’t want to use a monospaced font in general, you might consider using it just for the date/time stamp. My favorite is Michael Everson’s shareware (€25) Everson Mono, which has beautiful glyphs for (I believe) every non-Han Unicode codepoint. Everson writes:

I met Michael (nice guy!) a few years ago in the U.K., but otherwise I have no connection or vested interest — other than Everson Mono being my favorite monospaced font for the past 35 years....
-
Sorry for suggesting something you’d already tried, @bernhard — I’d skimmed over some of the details of your post (pre-coffee!). Not all fonts have tabular numeric forms, of course. Actually, a quick look through my 361 installed typefaces shows that only about half of my installed fonts support tabular numbers — YMMV, of course. This, on a MacBook Pro running macOS 13.4.1 (the current “Ventura” release). So, it’s possible that tabular-nums is working just fine — only not with any of the fonts that have been called for.
-
Have you tried using the tabular-nums attribute of the font-variant-numeric CSS property? That’s how I’d approach this.
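To illustrate, a minimal snippet — the .timestamp class name is just a placeholder; any selector that targets the element containing your numbers will do:

```css
/* Ask the font for fixed-width (tabular) figures, where the font provides them */
.timestamp {
  font-variant-numeric: tabular-nums;
  font-feature-settings: "tnum"; /* lower-level OpenType fallback for older engines */
}
```

Note that this only works if the active font actually includes tabular figures; otherwise the declaration is silently ignored.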
-
Mother of all “PW persists in logging me off” threads
ErikMH replied to ErikMH's topic in General Support
Thank you all very much for your suggestions for further research; I am consistently amazed by the high quality of responses here, as well as the high signal-to-noise ratio!

I owe you an update: as I mockingly predicted to myself, the mere act of writing the OP seems to have fixed the problem. Of course, I know this *isn’t* really true, but in truth I have not been unexpectedly logged out even once since I posted here two days ago — astonishing, given that I’d gotten logged out at least 25 or 30 times in the two or three hours leading up to my post.

In a perfect world, I would hunt this down ASAP, *before* it bites me again. Pragmatically, though, I have to admit that I’m on deadline and there are more immediately pressing concerns. Again, I am very grateful for all of the various pointed questions; they will be the first place I turn when the problem returns. And I promise to update the thread with more info when that happens! In the meantime, thank you, @Jonathan Lahijani, @bernhard, & @flydev!
-
I should add that I am now often (but not always) seeing an error dialog that says:

> Unknown error, please try again later

and then I’m shown a rather alarming (but, fortunately, totally spurious) empty listing of pages.
-
There are quite a few threads here where users report ProcessWire repeatedly logging them out (see below). I, too, have had this problem intermittently over the two years I’ve been using PW. I was able to reduce the problem somewhat about a year ago by turning fingerprinting completely off — but the problem has never completely gone away.

I’m now at my wits’ end: I was unexpectedly logged out a half-dozen times again earlier today, though I could never see a pattern. As of an hour ago, though, every single time I try to update a field definition, the change is discarded and I’m logged out. Using a different browser helped for a few minutes, but then it began having precisely the same predictable problem.

- Fingerprinting is off altogether ($config->sessionFingerprint = false;).
- CSRF protection is off ($config->protectCSRF = false;).
- I have installed Session Handler Database, so my /assets/sessions/ folder is empty.
- This is my PW development environment, running on my MacBook Pro (M1 Max, current macOS) via DDEV and Colima; restarting the environment has no effect.
- I do use Cloudflare WARP/1.1.1.1 on my Mac, though that shouldn’t be relevant; turning it off has no effect.
- session.gc_maxlifetime is the default 1440.
- session.gc_divisor is the default 10001.

I would like to fix this problem for good and never see it again, so that I can get back to far more important work. Does anyone have any ideas?

Chronological (probably not comprehensive) list of relevant threads that I’ve read thoroughly:
-
@teppo, it looks like this is precisely the module I was going to begin searching for on Monday. I’m wildly excited that you’re doing this, though I understand your warnings and cautions. My fingers are crossed!
-
Fantastic little module, and I especially like the “magic” that happens when certain interpolated punctuation marks (“,”, “|”) are used between concatenated strings and one of the concatenated strings is blank: they’re left out! Would it be simple to add an ellipsis (“…”) to the list of characters so treated? My use case involves long paragraphs where I keep track of opening and closing words separately (in text fields), but I’d like to be able to represent the whole paragraph with “Starting words … concluding words.” Occasionally there’s a very short paragraph, and I don’t want to imply that something has been left out (which an ellipsis, of course, does) — and in those unusual cases I include all of the text in the “starting words” field, leaving the “ending words” field empty. But (unlike with , and with |) the ellipsis shows up regardless.
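In other words, the behavior I’m hoping for, sketched in plain PHP (smartJoin is a hypothetical name for illustration, not the module’s actual API):

```php
<?php
// Hypothetical sketch of the requested joining behavior: drop the
// separator (and its surrounding spaces) when either side is blank,
// exactly as the module already does for "," and "|".
function smartJoin(string $start, string $end, string $sep = ' … '): string
{
    if ($start === '' || $end === '') {
        // When one side is empty, no omission is implied,
        // so no ellipsis (or other separator) should appear.
        return $start . $end;
    }
    return $start . $sep . $end;
}
```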
-
You just saved my bacon, @Robin S — thanks for this! For those searching, this is the secret sauce for presenting a subset of entries to select from, based on a previous selection. In my case: parent=page.refSection, where refSection is a coarser-level selection in a hierarchy. I believe this also answers this question:
-
Well, but that can’t be right. You posted this, with findMany() working just fine yesterday: So it looks like there’s something about getPage() (implicit or explicit), first(), and presumably last(), in combination with findMany(). I think.
-
Excellent thought. I haven’t done this before, so hopefully I’ve got the details right: findMany() combined with getPage() gives pages whose parents are (incorrectly) NullPage
-
Nicely done, @adrian!
-
OK, guys, it seems there was one more piece to the puzzle. Like @adrian, when I recreated the situation in a simple test environment running 3.0.192, I couldn’t recreate the problem. I should have tested this yesterday, but once I’d discovered that changing findMany() to find() in my code worked around the problem, I figured I’d found it.

So, the following conditions are necessary but insufficient:

1. PW 3.0.192 or greater (including the current version)
2. page classes
3. findMany() instead of find()

I believe I’ve found the final ingredient. I have tested this in a simplified environment and recreated the problem:

4. getPage()

In real life, I’ve rolled my own pagination (since I’d wanted to provide users with logarithmic-like controls for moving backwards and forwards through the site, and provide them with signpost dates); this paginating has worked well. But it means that rather than a foreach loop, I cycle through results in a for loop, and then assign the item with a getPage(). So, Adrian’s code looked like:

foreach($pages->findMany('template-basic-page') as $p) {
    d($p->id . ' : ' . $p->returnParentId());
}

but mine looks more like:

$items = $pages->findMany('template=Test');
for ($idx = 0; $idx < 1; $idx++):
    $p = $items->getPage($idx);
    echo ($p->identify());
endfor;

(Yes, that’s my test system, where I’ve hard-coded it to “loop” through just one test record; it’s sufficient to show the problem, though.)

ProcessWire\TestPage #279
  id: 1025
  name: 'test-record-a'
  parent: ''
  template: 'Test'
  title: 'Test record A'
  data: array (1)
    'title' => 'Test record A'

parent there should = '1'. I see that I can do the same thing by removing getPage() altogether:

$p = $items($idx);

In the grand scheme of things, I don’t know which is better; however, this assignation — whether implicit or via getPage() — seems to be the final piece: add this, and $this->parent() becomes a NullPage.

So, to sum up: all of this works fine in 3.0.150-or-so (where I developed it) on up through 3.0.191. As of .192, it breaks. Switching from findMany() to find() works around the problem. Sticking with findMany() and using the customary foreach() code pattern also works around the problem. Phew!

@kongondo, I agree that the commit you link to looks relevant. Do you think the (implicit or explicit) getPage() may call any of that code?
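For anyone skimming, the two workarounds just described can be sketched like this — ProcessWire API code, not run here; the Test template and identify() come from my own page class:

```php
<?php namespace ProcessWire;

// Workaround 1: use find() instead of findMany(); getPage() then
// returns pages with their parents intact.
$items = $pages->find('template=Test');
for ($idx = 0; $idx < $items->count(); $idx++) {
    $p = $items->getPage($idx);
    echo $p->identify();
}

// Workaround 2: keep findMany(), but iterate with the customary
// foreach pattern rather than getPage()/index access.
foreach ($pages->findMany('template=Test') as $p) {
    echo $p->identify();
}
```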
-
This is good to know about, @adrian — thanks for drawing my attention to it! No, though, 3.0.195 behaves just like everything else for me beginning with 3.0.192. If I use findMany(), I have the problem as outlined above; if I use find(), everything’s fine.