Everything posted by teppo

  1. Same place where we've always been, and where we're ultimately going to be with any dependency management solution, automated or manual: risk assessment. Any time we depend on third party dependencies, we're choosing to trust the vendors of said dependencies (and by extension anyone they trust, as well as anyone who might have access to the "pipeline" between us and those vendors). What matters is whether the benefit outweighs the risk. (Sorry for going a bit philosophical with this.) As for Composer vs. npm, my opinion is that the situation with Composer is slightly less problematic, simply due to the ecosystem. I don't have numbers to back this up, so please take it with a grain of salt, but in my experience relying on numerous interconnected dependencies is more common in npm projects. ProcessWire modules, for example, tend to have relatively few dependencies, which can be a good thing (in this particular context). One solution, mentioned in that Reddit discussion as well, is roave/security-advisories. I would recommend adding it to your setups, just in case. It's a super simple package that defines a list of "conflicts", each being a package + version range with known vulnerabilities. When running composer install or composer update, these conflicts cause the process to fail. This may well be the only "positive use case" for conflicts I've come across. It's not a foolproof solution, but a good start — and pretty much a no-brainer, in my opinion. Another thing that can help is Private Packagist, which provides automated security audits, alerts, and more. Private Packagist is not free, but for commercial users it's definitely something to consider (and not just for the security features).
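For anyone unfamiliar with the package, adding it is a one-liner in composer.json. A minimal sketch (the processwire/processwire requirement is just an example dependency; the "dev-latest" constraint is what the package's own README suggests):

```json
{
    "require": {
        "processwire/processwire": "^3.0"
    },
    "require-dev": {
        "roave/security-advisories": "dev-latest"
    }
}
```

After this, composer install and composer update will refuse to resolve any dependency version that has a known published advisory.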
  2. Should have some free time this weekend to work on these, hopefully that'll get us somewhere.
  3. I'll look into this. It feels like this should be accompanied by a config setting that defines allowed partials directories, to avoid any potential security issues down the line, and I'd also like to extend this to components (or at least build it in a way that makes sense for that context as well). I seem to recall a request for "components from/as modules" (I believe it was suggested by @bernhard), and this could be a good initial step towards that. One thing that won't be supported is accessing partials in other root directories via the object-oriented $partials->path->to->partial syntax. I don't think it's essential here; mostly just thinking out loud. There are both logical issues (potentially conflicting names, etc.) and technical ones (currently the partials directory gets "pre-loaded" into memory during Wireframe bootstrap) that would make this problematic. (And, again, I don't think it's even necessary.) If by the latter placeholders you mean partials, then yes, sublayouts implemented as partials have access to $placeholders. I actually had to test this, as I wasn't sure, but it looks like it works right out of the box. For the record, I did consider this approach too, but felt that it wouldn't fully solve the issue. Perhaps I'm too invested in the concept of inheritance as implemented in templating languages, but the main issue is that this would likely be one-dimensional, while I'd like to see something more flexible. Sure, a one-dimensional stack with enough layers could achieve almost anything, but that could get super complicated and would also affect performance negatively. There may well be something to this idea, but for the time being I think other routes look more promising.
  4. So... that "layouts within layouts" thing almost ended up in v0.22, but I decided to hold it back. I don't actually think it solves the right issue, or that it makes enough sense as is. In other words, I believe I may have been looking at this from the wrong direction. That, or I just need some extra information before going forward. What I was going to add would've been a new $layouts API variable, which would've made it possible to embed a second (named) layout within one layout. Something along these lines:

     <!-- layouts/default.php -->
     <head>
         <title>Hello World</title>
     </head>
     <body>
         <?= $layouts->render('single-column') ?>
     </body>

     <!-- layouts/single-column.php -->
     <main>
         <h1><?= $page->title ?></h1>
         <?= $page->body ?>
     </main>

     The only thing this really adds to the current feature set is the ability to reuse layouts — which, to be honest, doesn't seem particularly useful, especially considering that the downside is a new API variable, a new layer of complexity, etc. To me it doesn't seem too different from this, which is already doable:

     <!-- layouts/default.php -->
     <head>
         <title>Hello World</title>
     </head>
     <body>
         <?= $partials->render('sublayouts/single-column') ?>
     </body>

     <!-- partials/sublayouts/single-column.php -->
     <main>
         <h1><?= $page->title ?></h1>
         <?= $page->body ?>
     </main>

     With recent API updates, it's also quite straightforward to dynamically change the sublayout (note the parentheses around the ?: expression — without them, concatenation would happen first and the fallback would never apply):

     <!-- layouts/default.php -->
     <body>
         <?= $partials->render('sublayouts/' . ($view->sublayout ?: 'default')) ?>
     </body>

     <!-- controllers/HomeController.php -->
     <?php
     // ...
     public function render() {
         // override sublayout for the home template
         $this->view->sublayout = 'home';
     }
     // ...

     Unless I'm missing something, this seems like a pretty clean way to achieve simple sublayouts.
The only difference is that they live under the "partials" directory, but after thinking this through, that actually seems like a logical place: having them under "layouts" would feel a bit weird (these are different from the "main layouts", after all), while adding a whole new "sublayouts" directory at the root level would, at least to me, seem a bit unnecessary. (Although if partials get the ability to point to files in other/absolute directories, it will of course become possible to add such a directory on a case by case basis.) Am I on the right track here, @Ivan Gretsky? Technically you can go as deep as you want with this type of structure, with relatively little code: if you need a third level of sub-sub-layouts, those would just need to be rendered in the sublayout partial(s). I guess it's, in a way, the opposite of what Twig/Blade/etc. do: instead of the sublayout or view declaring that it extends some other file (layout or sublayout), you output the child in the parent file. Anyway, let me know what you think. I'll be happy to work on this further, but I've realized that since this isn't really something I personally use, it's way too easy to forget what the actual problem I was solving was...
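As a side note, there's a small PHP gotcha in the dynamic sublayout one-liner: string concatenation binds tighter than the ?: operator, so the fallback needs parentheses. A quick standalone demonstration (plain PHP, nothing Wireframe-specific):

```php
<?php
// When the sublayout is empty, we want to fall back to 'default'.
$sublayout = '';

// Without parentheses the concatenation happens first, so the
// left-hand side of ?: is the (truthy) string "sublayouts/" and
// the fallback never kicks in:
$without = 'sublayouts/' . $sublayout ?: 'default';

// With parentheses the fallback applies to the variable itself:
$with = 'sublayouts/' . ($sublayout ?: 'default');

echo $without, "\n"; // "sublayouts/"
echo $with, "\n";    // "sublayouts/default"
```

Easy to miss, since both versions look fine at a glance and only differ when the variable is empty.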
  5. Cross-posting here that $partials->get('path/to/partial') is now supported in Wireframe 0.22.0, along with new method $partials->render('path/to/partial'). More details in the docs: https://wireframe-framework.com/docs/view/partials/.
  6. Hey @Clarity Some features of Wireframe do need to be initialized for components to work: paths, autoloader, etc. In practice this shouldn't make a difference: those features will be loaded behind the scenes, but you don't have to use the structure provided by Wireframe, i.e. you can render the page in whatever way you choose. This used to require calling the initOnce() method of Wireframe manually, but I've just committed an update that should make it a bit easier: as of version 0.21.3, calling the static factory method for components (<?= Wireframe::component($component_name, array $args) ?>) will automatically initialize the necessary parts of the module if this hasn't been done yet. If you don't want to use the static factory method, you'll still have to get and initialize the Wireframe module at some point.
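For context, in a regular template file the static factory call could look something like this ("Card" and its arguments are made-up example values, not a component that ships with Wireframe):

```php
<?php namespace ProcessWire; ?>
<!-- templates/home.php — render a single Wireframe component
     without using the full Wireframe page rendering flow.
     "Card" and the 'title' argument are hypothetical. -->
<?= Wireframe::component('Card', [
    'title' => $page->title,
]) ?>
```

As of 0.21.3 this should work as-is, since the factory method bootstraps the parts of Wireframe it needs.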
  7. Heard of it, but not tried it. For now I think I'd rather keep issues and requests in one place, and discussions here at the forum. Definitely something to keep in mind though.
  8. Hey @szabesz, Sorry for the delayed answer, busy week here. The public RSS feed is rendered by the ProcessChangelogRSS module, which hooks into ProcessPageView::pageNotFound. If you can, you could... a) check if the module gets called properly (by adding some debug code to its init() method), b) check if the hook gets triggered — in which case we could rule out any config setting or key related issues —, and c) if not, make sure that code in templates or some other module is not preventing the hook from working (URL segments being enabled on the home template is a potential culprit, because the 404 needs to be triggered in a specific way or hooks no longer work as expected). Of course it could be an issue with the module as well, but it did work the last time I used it... which was a while ago. If I recall correctly, I added this as a sort of "power user feature", so that while it can be used, it's intentionally somewhat obscure. This is in part because the data structure for Process Changelog is really quite awful for that sort of query. Anyway, I'll look into this (https://github.com/teppokoivula/ProcessChangelog/issues/33). Makes sense to me. I'll look into this as well (https://github.com/teppokoivula/ProcessChangelog/issues/34). That's a quick one: I've never used this combination. In fact I don't think I've ever used ProDrafts, apart from some quick tests way, way back. If changes made with ProDrafts are applied to pages, they'll get logged. If it has a custom data storage, or if it prevents hooks from running, then they won't be logged. I don't have a test setup for this at hand, so I can't easily confirm this. There's a hookable method — ProcessChangelogHooks::shouldLogPageEvent($page, $field, $operation) — which you could likely use to prevent Process Changelog from logging such events, at least assuming that there's something about the Page or Field (alone, or in combination with session data, etc.) that makes such edits identifiable.
In other words, I'm pretty sure that this is doable, but it may well require a custom hook. Let me know if you try this, and perhaps I can help with getting that setup up and running. Though, again, I've zero experience with ProDrafts, so...
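To illustrate the kind of hook I mean, here's a rough, untested sketch for site/ready.php. The condition is a pure placeholder — I don't know how ProDrafts marks its edits, so you'd replace it with whatever actually identifies them in your setup:

```php
<?php namespace ProcessWire;

// site/ready.php — untested sketch. The template check below is a
// made-up placeholder, not an actual ProDrafts API.
$wire->addHookAfter('ProcessChangelogHooks::shouldLogPageEvent', function (HookEvent $event) {
    $page = $event->arguments(0);
    // Skip logging for pages matching some draft-identifying rule:
    if ($page->template == 'some-draft-template') {
        $event->return = false;
    }
});
```

The method also receives the field and operation as arguments, so the rule can be as fine-grained as needed.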
  9. In recent (4.x) versions of ProCache, the docblock for ProCache::___allowCacheForPage suggests that it might be a good place to hook in order to prevent the cache file from being created. I'd probably try hooking there and setting the return value to false in case a specific env variable is defined (or not defined). Other than that... no obvious solutions here.
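In practice that could look something like this in site/ready.php — an untested sketch, and DISABLE_PROCACHE is a made-up env variable name:

```php
<?php namespace ProcessWire;

// site/ready.php — untested sketch: skip ProCache cache file
// creation whenever a specific environment variable is set.
$wire->addHookAfter('ProCache::allowCacheForPage', function (HookEvent $event) {
    if (getenv('DISABLE_PROCACHE')) {
        $event->return = false;
    }
});
```

Inverting the condition would give the "or not defined" variant, i.e. only caching when the variable is present.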
  10. With identical modification this is working fine for me. Are other config variables working as expected? Any chance that there's a compiled version or some sort of cache in play here?
  11. Same issue here in terms of links pointing to the wrong location (VirtualBox based dev environment). Not a big deal for me personally, but sure, it would be nice if they worked. Just had a look, and I'm wondering if there's a real need for the "Only used if you are viewing a live site" part — why does it work like that, and could it be overridden somehow? I guess it makes sense if "local" means that you're literally developing on the local machine without any extra layers, but since "local" is also used to indicate whether this is a "development environment" vs. a "production environment", this is indeed problematic. A developer specific config file would also make sense, because the developer is not necessarily logged in when viewing Tracy panels. It'd be even harder to distinguish between guests.
  12. Hey Steve, If you're happy to add some logic to config, this was added recently: https://github.com/adrianbj/TracyDebugger/issues/56
  13. You're right (of course). Just double-checked with an earlier Tracy version, and no slow-downs for non-authenticated requests. Could've been an indirect result, i.e. my authenticated requests might've slowed the entire server down considerably, which in turn would result in visible delays for others. Other than that... no idea, but things seem to be working well now.
  14. Likely yes. I did notice as well that requests that missed the (Cloudflare) cache seemed sluggish. The server was struggling under the load, so even requests served via ProCache were unusually slow.
  15. To be fair, it was largely my own fault for letting the log file blow up. Anyway, a splendid performance boost. Interestingly, I had this module installed. Might've been a configuration error or something; I should probably give it another try.
  16. Confirmed. Added some extra lines to the log to make sure, and rendering the Tracy Logs panel went from 16863.97 ms to 5.2 ms. A lot faster indeed.
  17. Hey Adrian (and others, in case anyone else happens to run into this issue)! I'm posting here instead of opening a GitHub issue, since this doesn't feel like a "bug" or "issue", but rather a potential gotcha. The background is that I've recently been dealing with major performance issues at weekly.pw. Every request was taking 10+ seconds, which made editing content... let's say, an interesting experience. Particularly since PW triggers additional HTTP requests for a number of things, from checking if a page exists to the link editor modal window. Makes one think twice before clicking or hovering over anything in the admin... I tried to debug the issue with little luck, eventually deciding to blame it on the server (sorry, Contabo!) until today, when — accidentally, while migrating the site to a new server — I finally figured out that the real problem was in fact the Tracy Logs panel and a ~800M, ~3.5 million row logs/tracy/error.log file. The underlying reason for this was warnings generated by the XML sitemap module: each request was generating ~2.5k new rows, with an hour long cache, so potentially 60k or more new rows per day. Now, I'm writing this half hoping that if someone else runs into a similar problem, they'll be smarter than me and check if any of the Tracy panels suffer from slow rendering times (which are already reported for each panel individually), but I do also wonder if there's something that could automatically prevent this? Perhaps logs should be pruned, Tracy should warn if there's a crazy amount of data in one of the log files, or log file reading could somehow be optimized? Food for thought. Again, this is obviously not a bug, but still something that can end up biting you pretty hard.
  18. I've not participated much here, since I feel there are more knowledgeable folks here already, but a few quick(ish) opinions/experiences: I would love to have an easy way to migrate changes to fields and templates between environments, and to version control all of that. I've had cases where I've made a change, only to realize that it wasn't such a good idea (or better yet, had a client realize that) and off we go to manually undo said change. Sometimes in quite a bit of a hurry. These are among the situations in which an easy rollback feature would be highly appreciated. I do like making changes via ProcessWire's UI, but at the same time I strongly dislike having to do the exact same thing more than once. Once is fun, but having to redo things (especially from memory, and potentially multiple times) is definitely not what I'd like to spend my time doing. I've worked on both solo projects and projects with a relatively big team. While versioning the schema and easily switching between different versions is IMHO already very useful for solo projects, it becomes — as was nicely explained by MoritzLost earlier — a near must-have when you're working with a team, switching between branches, participating in code reviews, etc. I'll be the first to admit that my memory is nowhere near impeccable. Just today I worked on a project I last worked on Friday — four days ago! — and couldn't for the life of me remember exactly what I'd done to the schema and why. Now imagine having to remember why something was set up in a specific way years ago, and whether altering it will result in issues down the line. Also, what if it was done by someone else, who no longer works on your team? Something I might add is that, at least in my case, large rewrites etc. often mean that new code is no longer compatible with old data structures. For me it's pretty rare for these things to be strictly tied to one part of the site, or perhaps new templates/fields only.
Unless both you and the client are happy to maintain two sets of everything, possibly for extended periods of time, that's going to be a difficult task to handle without some type of automation, especially if/when downtime is not an option. Anyway, I guess the lion's share of this discussion boils down to the type of projects we typically work on, and of course different experiences and preferences. As for the solutions we've been presented with: I've personally been enjoying module management via Composer. Not only does this make it possible to version control things and keep environments in sync, it also makes deploying updates a breeze. As I've said before, in my opinion the biggest issue here is that not all modules are installable this way, but that's an issue that can be solved (in more than one way). While I think I understand what MoritzLost in particular has been saying about template/field definitions, personally I'm mostly happy with well defined migrations. In my opinion the work Bernhard has put into this area is superb, and definitely a promising route to explore further. One thing I'd like to hear more about is how other systems with so-called declarative config handle actual data. Some of you have made it sound very easy, so is there an obvious solution that I'm missing, or does it just mean that data is dropped (or alternatively left somewhere, unseen and unused) when the schema updates? Full disclosure: I also work on WordPress projects where "custom fields" are managed via ACF + ACF Composer and "custom post types" via Extended CPTs + Poet. Said tools make it easy to define and deploy schema updates, but there's no out-of-the-box method for migrating data from one schema version to another (that I'm aware of). And this is one of the reasons why I think migrations sometimes make more sense; at least they can be written in a way that allows them to be reverted without data loss.
  19. That does seem like an issue. I have a vague memory that I might've had to make changes related to the timestamps at some point in the past, but can't remember what it was all about. For the time being I'll open a GitHub issue for this, as it might require a bit of digging and testing to make sure that everything works as expected.
  20. Switched all my personal sites to the latest dev branch last week (after accidentally updating the server from PHP 8 to 8.1 — whoops...) and have had no issues — as far as I know — so far. Apart from a few minor ones (deprecation warnings) related to PHP 8.1, but those are already reported via GitHub. I'd say that it seems pretty solid so far.
  21. This API ref page is pretty low level — so yes, the core does (by default and automatically) track changes made to pages, but that's mainly for checking whether a specific Page object was changed during the current request. The core itself doesn't maintain a log of changed pages; that's usually where modules step in. I'm not entirely sure which module this is; you're not referring to Activity Log, are you? Anyway, a module that tracks changes should be able to keep track of changes made via the API as well, with just one exception: if the API call is made with the noHooks option enabled, no module will be able to track it. In other words, API requests alone are nothing special, so they should be tracked normally, unless whoever triggers said API request is specifically preventing any hooks from running, in which case there's no (clean) way to intercept or track it.
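For reference, the noHooks case I mentioned looks like this in API code — saves done this way bypass all hooks, so no module can see them:

```php
<?php namespace ProcessWire;

// Regular save: hooks run, so a change-tracking module can log it.
$page->title = 'Updated title';
$pages->save($page);

// Save with hooks disabled: nothing hookable fires, so changes
// made this way are invisible to tracking modules.
$pages->save($page, ['noHooks' => true]);
```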
  22. WordPress compromise
    To be fair, it's kind of cheap to blame this particular issue on WordPress. An unrelated vendor was compromised, which is something that could happen to pretty much any platform out there. In this sense ProcessWire isn't much (if any) safer at its core. Just saying. If there's a lesson here, it's probably that "anyone can be compromised". This was a vendor considered trustworthy by hundreds of thousands of users, after all. And perhaps another lesson might be that "relying on third party vendors can be an issue" — although writing every bit of code yourself isn't exactly a silver bullet either. (In the video above they also put plenty of blame on PHP as a programming language, argue that the issue is due to WordPress being "pretty old" — as if new software were inherently more secure than actively maintained older software — etc. There were some valid points there as well, but blanket statements like that... ugh.)
  23. It should definitely work with $this->view->setTemplate(), like Zeka said above — though let me know if it doesn't, and I'll look into it.