Leaderboard
Popular Content
Showing content with the highest reputation on 12/03/2023 in all areas
-
Which version of RockMigrations did you use? I added Repeater Matrix support and it was merged approx. 2 months ago. Here's how to use it: https://github.com/baumrock/RockMigrations#repeatermatrix-field Repeater Matrix support is also mentioned on the module's page: https://processwire.com/modules/rock-migrations/ If you tried the version with Repeater Matrix support and it didn't work for you, please let us know what exactly went wrong so we can help fix it. In all my tests, and using it in a production project, I haven't encountered problems so far.

RockMigrations has been a big time saver and has worked reliably for me for the past 2 years or so. I haven't tried the other solutions mentioned here since they are not actively maintained.

As for having this functionality in the core: yes, that would be great, indeed. Honestly, I don't understand why @ryan hasn't implemented it yet. Maybe partly because there is already a great solution with a well-designed API available as a free module? Still, I think it would benefit the project if this were either in the core or available as an optional core module. IMO, migrations are essential, especially when using PW for bigger projects and when working on projects in a team.

As for the discussion here about potential problems with migrations brought up by @MarkE, I was able to work around all of those with RockMigrations, since it is declarative but not destructive (unless explicitly intended). The "few things too many" argument is flawed, since all the extra functionality is optional and therefore doesn't impact performance. And the add-on features are well designed and can be very useful.

Performance with RockMigrations is really good; only very large migrations take some time. I have a project with about 4000 lines of migrations for templates, fields, pages and roles in my main migration file, plus some additional 1000 lines spread throughout modules, and these take about 20 seconds. Since you don't run migrations all that often, this is very tolerable. Migrations on smaller projects are barely noticeable.
-
Thanks @bernhard for the detailed explanations about splitting up the migrations. Maybe you want to add that to the wiki? That would be awesome. For that big project where I have those 4000 lines of migrations in one file, I still need to do that. When I started out with that project I wasn't fully aware of how to properly organize my migrations. Like with any other tool, knowledge grows with usage time.
-
Glad to hear some facts from someone actually using my module and not just guessing. I want to add here that during development it's never 20s for me, and I'm using migrations all over; all my projects would simply not be doable or manageable without the module. The reason is that I've split migrations into smaller pieces rather than having them in one huge file. If you do that, RockMigrations will take care of only migrating the file that has actually been changed, so migrations during local development are a matter of milliseconds.

For example, I'd have a FooPage.php custom pageclass, and there I'd add migrations that do stuff related to the FooPage, for example adding foo_field_body and foo_field_description. One of these "few things too many", aka MagicPages, takes care of triggering the init() and ready() methods of this pageclass, so that I can watch these files and trigger the migrate() method whenever the file is saved. I'd prefer if that were a core feature, but it is not.

On deployment RM will run all migrations, and that might take a little while. But I don't know how long exactly, because GitHub does that for me with the help of RM's deployment tools. Usually a full deployment, with copying files etc., takes about a minute.

Let's say it's a client project and the client contacts me 2 months later because something doesn't work. What I'd do is execute "rockshell db:pull production", and some seconds later I'd have the current state of the project with all the new data on my local environment. While browsing the new content, one of these "few things too many", aka filesOnDemand, takes care of downloading all the images that users have uploaded in the meantime that were not transferred by the db:pull. It does that in the background, and only if I want it and have $config->filesOnDemand = ... in my config file.

Does RockMigrations have a fancy "revert" button? No. Is it possible to revert changes? Yes! If you need that, do it: just write the according revert migrations and trigger them. I don't do that for every migration, because I don't want to do work that I'll never need. If I change my mind during development and want to go another route, all I need is "rockshell db:pull" and all changes are "reverted". Sometimes (in maybe 0.5% of all migrations) it still happens that I want to revert changes that I've already pushed to production. What I do in that case is add some "cleanup" (you could also call them "revert") migrations at the top of the actual migrate() method, something like $rm->deleteField('fooclass_bodyfield', true). You could even remove that line of code later, once that change has been applied to production. "Waaaah, you can't do that! What if others are working on the same codebase and never triggered that migration?" I hear you say... Well, I can. Everybody working on my codebase has to do two things before starting to work: 1) "git pull" to get the latest code state, and 2) "rockshell db:pull" to get the latest database state. So there will not be any "fooclass_bodyfield" for them, and therefore it's fine to not have the deleteField() call in their migrations.

Another benefit of splitting migrations into pieces is best shown by RockPageBuilder. There, all blocks have their dedicated migrations. A slider block needing an images field, for example, would create its own field once the migrate() method is called. That makes it possible to just drag & drop the folder of this block into another project and boom, everything is there (see the sketch below).
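For illustration, such a self-contained block migration might look roughly like this. This is a hedged sketch only: the class name, field name and config keys are my assumptions following RockMigrations' general array syntax, not code from RockPageBuilder.

<?php namespace ProcessWire;

use RockMigrations\MagicPage;

// hypothetical slider block that creates the field it needs itself
class SliderBlockPage extends Page {
  use MagicPage;

  public function migrate() {
    $rm = $this->rockmigrations();
    $rm->migrate([
      'fields' => [
        // assumed field name; because the block creates it on migrate,
        // the block's folder can be dropped into another project as-is
        'slider_images' => [
          'type' => 'image',
          'label' => 'Slider Images',
          'maxFiles' => 0, // 0 = no limit
        ],
      ],
    ]);
  }
}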
With a central place for migrations, that would not be possible.

I know that this approach might look unconventional to some. So does ProcessWire itself, with its "everything is a page" philosophy. But it works. Great. I invite anybody who actually used my module and notices a performance penalty to report it, and I'll do my best to improve the module. I'm relying on RM every day (and have been for some years now), so reports like this are highly appreciated. Thx. BTW: there's also $config->useMagicPages = false
-
Something I've wanted in ProcessWire for a long time is full version support for pages. It's one of those things I've been trying to build since ProcessWire 1.0, but never totally got there.

Versioning text and number fields (and similar types) is straightforward. But field types in ProcessWire are plugin modules, making any type of data storage possible. That just doesn't mix well with being version friendly, particularly when getting into repeaters and other complex types.

ProDrafts got close, but full version support was dropped from it before the first version was released. It had just become too much to manage, and I wanted it to focus just on doing drafts, and doing them as well as we could. ProDrafts supports repeaters too, though nested repeaters became too complex to officially support, so there are still some inherent limitations.

I tried again to get full version support with a module called PageSnapshots, developed a couple of years ago, and spent weeks developing it. But by the time I got it fully working with all the core Fieldtypes (including repeaters), I wasn't happy with it. It was functional but had become too complex for comfort. So it was never released. This happens with about half of the code I write: it gets thrown out or rewritten. It's part of the process.

What I learned from all this is that it's not practical for any single module to effectively support versions across all Fieldtypes in ProcessWire. Instead, the Fieldtypes themselves have to manage versions of their own data, at least in the more complicated cases (repeaters, ProFields and such). The storage systems behind Fieldtypes are sometimes unique to the type, and version management needs to stay internal to the Fieldtype in those cases. Repeaters are a good example, as they literally use other pages as storage, in addition to the field_* tables.

For the above reasons, I've been working on a core interface for Fieldtypes to provide their own version support. Alongside that, I've been working on something that vaguely resembles the Snapshots module's API. But rather than trying to manage versions for page field data itself, it delegates to the Fieldtypes when appropriate. If a Fieldtype implements the version interface, it calls upon that Fieldtype to save, get, restore and delete versions of its own data. It breaks the complexity down into smaller chunks, to the point where it's no longer "complexity", and thus reasonable and manageable.

It's a work in progress and I've not committed any of it to the core yet, but some of this is functional already. So far it's going more smoothly than past attempts due to the different approach. My hope is to have core version support so that modules like ProDrafts and others can eventually use that API to handle their versioning needs rather than trying to do it all themselves. I also hope this will enable us to effectively version the repeater types (including nested). I'm not there yet, but it's in sight.

If it all works out as intended, the plan is to have a page versions API, as part of the $pages API. I'll follow up more as work continues. Thanks for reading and have a great weekend!
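None of this is committed yet, so purely as a hypothetical sketch of the delegation idea described above (every name below is my assumption, not the actual core interface), a Fieldtype-level version interface might look like:

<?php namespace ProcessWire;

// hypothetical sketch: each Fieldtype manages versions of its own data
interface FieldtypeVersions {

  /** Save a version of this field's data for the given page */
  public function saveVersion(Page $page, Field $field, int $version): bool;

  /** Get the data stored for a particular version */
  public function getVersion(Page $page, Field $field, int $version);

  /** Restore a stored version as the page's live data */
  public function restoreVersion(Page $page, Field $field, int $version): bool;

  /** Delete a stored version */
  public function deleteVersion(Page $page, Field $field, int $version): bool;
}

A $pages-level API would then delegate to these methods whenever a Fieldtype implements the interface, and fall back to generic handling for simple types.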
-
Thx! Looks like I should update. And great to hear, as that plays well with RockMigrations' deployment.
-
@bernhard, the /site/assets/div-queue/ directory was only used in v0.1.0. From v0.1.1 onwards the queue files are distributed in the same location as the variation files, e.g. /site/assets/files/1234/, and so the /site/assets/div-queue/ directory isn't needed.
-
The way I've sort of done this is with New Relic: you install agents for PHP/Apache/MySQL on the server, and they do their magic monitoring the relevant processes. And I say "sort of done this" because I didn't really plan for anything; I just wanted to test it out and got easily overwhelmed by all the features it has and concepts I don't really understand. I did manage to get alerts on spikes in memory and load, but that's about it, and I'd say that's like 0.01% of what New Relic does. But it kinda solved what I needed at that moment.
-
I don't know of a PW way to do it, but you can quite easily query the DB directly:

<?php
// do proper sanitization when using direct SQL queries!!
$sql = "SELECT pages_id FROM field_title WHERE data1034 LIKE '%plan%'";
$result = $database->query($sql);
$pageid = $result->fetch(\PDO::FETCH_COLUMN);

This returns 1170 for $pageid. So if you modify the query to search for "Paket" in "data1034", it would return false. With a regular PW page selector it would still return 1170, as that also searches the "data" column. Note that this query does not take any access control into account.
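As a follow-up to the sanitization comment above: when the search term comes from user input, a parameterized variant of the same query (a sketch, using the same table and column as the example) avoids SQL injection:

<?php
// bind the term instead of interpolating it into the SQL string
$query = $database->prepare("SELECT pages_id FROM field_title WHERE data1034 LIKE :term");
$query->execute([':term' => '%plan%']);
$pageid = $query->fetch(\PDO::FETCH_COLUMN);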
-
Hey @gebeer, it's really the same as with hooks. In the beginning, placing all hooks in ready.php is easiest. Later, or on more complex projects, it's better to move those hooks into their dedicated page classes. The concept is simple:

Everything that belongs logically to FooPage goes into FooPage::migrate().
Everything that belongs logically to BarPage goes into BarPage::migrate().
Everything that belongs to the project and nowhere else, or that needs to take care of circular-reference issues, goes into Site.module.php.

Site.module.php is a concept that I've been using for several years now, and it is great. It's an autoload module that holds all the project-specific stuff that belongs nowhere else. Similar to _functions.php, but with all the benefits of OOP. Years ago I created modules named after the project, like Foo.module.php for the Foo project and Bar.module.php for the Bar project; but I much prefer Site.module.php, which is why RockMigrations offers to create this module for you (if you want). Nowadays, whenever I'm working on any of my projects, I instantly know where to look: Site.module.php. That saves brain power for more important tasks.

I've done a video about hooks and custom page classes, and the same concept applies to migrations (the video starts at the interesting part). All you have to do is make your custom pageclass a MagicPage and add a migrate() method:

<?php namespace ProcessWire;

use RockMigrations\MagicPage;

class BasicPagePage extends Page {
  use MagicPage;

  public function migrate() {
    $rm = $this->rockmigrations();
    $rm->migrate(...);
  }
}

Then as soon as you save that file, those migrations will be fired. And only those. There's a reason why these features are built into the module...
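For anyone who hasn't seen the Site.module.php concept before, a minimal version could look something like this (a sketch with placeholder values, not the file RockMigrations generates):

<?php namespace ProcessWire;

// autoload module holding project-specific code that belongs nowhere else
class Site extends WireData implements Module {

  public static function getModuleInfo() {
    return [
      'title' => 'Site',
      'version' => 1,
      'summary' => 'Project-specific hooks and helpers',
      'autoload' => true, // runs on every request, like ready.php
    ];
  }

  public function ready() {
    // project-wide hooks that don't belong to a single pageclass
  }
}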
-
Indeed, you are right. FANTASTIC, it is working now. Thank you so much!
-
@howdytom, I think I see where the confusion comes in. In more recent PW versions, the require() line in /site/templates/admin.php is...

require($config->paths->core . "admin.php");

...whereas in earlier versions it is...

require($config->paths->adminTemplates . 'controller.php');

These both end up doing the same thing, so you should only have one require() in the file. The functional part of my code is this bit:

// Get the user's IP address
$ip = $session->getIP();

// Define the allowed IP addresses
$allowed_ips = ['111.111.111.111', '222.222.222.222'];

// Check user's IP is allowed
if(!in_array($ip, $allowed_ips)) {
    return 'Access denied';
}

I only include the require() line to indicate that my code needs to be inserted above the existing contents of /site/templates/admin.php.
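Assembled, the whole /site/templates/admin.php would then read something like this (a sketch for recent PW versions; the IPs are placeholders):

<?php namespace ProcessWire;

// Get the user's IP address
$ip = $session->getIP();

// Define the allowed IP addresses
$allowed_ips = ['111.111.111.111', '222.222.222.222'];

// Check user's IP is allowed
if(!in_array($ip, $allowed_ips)) {
    return 'Access denied';
}

// existing contents of the file: hand off to the core admin
require($config->paths->core . "admin.php");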
-
There isn't anything in that code that should cause an error in any version of PW. Just a guess, but given that the error message originates from /wire/core/admin.php, make sure you have edited /site/templates/admin.php and not /wire/core/admin.php by mistake.
-
New version pushed to the dev branch with fixes for InputfieldTable and FieldsetPage elements. Please let me know if this solves the issue, @monollonom: https://github.com/SkyLundy/Fluency/tree/development Thanks for everyone's patience. Work has been crazy (and a long-ago planned vacation didn't help). Thanks!
-
I have a JSON-based migrations module that I released some time ago (ProcessDbMigrate). It needed further work, so I have not publicised it further after the initial proof of concept. In the meantime I have improved it enormously and tested it quite extensively with multiple field types, including RepeaterMatrix. It is almost ready for re-release. It is a completely different concept from RockMigrations, which is an excellent and well-established module. My module will automatically track database changes as you make them in the back-end UI and then export JSON for installation in the target. It also provides for roll-back, database comparisons and much more. Hopefully out before Christmas. PS: I found during the course of development that there are quite a few flaws in the native export functions and had to rewrite quite a bit.
-
Shouldn’t you write <pw-region id="mainhead"> instead of <div id="mainhead"> in your code?
-
Yes, that's mentioned in the readme. I'd say it's not the size of the site that matters but the number of image variations on any particular page. But in any case, if the way variations are created by default in PW is not presenting a problem, then you won't need the module.
-
Just in case it might be useful to you, there's a $sanitizer->truncate() method which will trim a block of text to the nearest word, sentence, etc. It's very handy for creating neat summaries.
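For example (a sketch; the option names follow the Sanitizer docs as I recall them, so double-check before relying on them):

<?php
// trim to roughly 200 characters, ending at a sentence boundary
$summary = $sanitizer->truncate($page->body, [
  'maxLength' => 200,
  'type' => 'sentence',
]);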
-
Pages will definitely scale quite far. But there is certainly more overhead with a page than there is with a plain DB table. As a result, when you are talking about storing huge quantities of data, I would keep your pages to represent the visible URLs on your site. If each row of data isn't going to be related to a unique URL in your page structure, then there really isn't a technical need to store it as a page. Though if you don't need infinite scalability, you may still find using pages for that data to be more convenient.

But since it sounds like you do need near-infinite scalability, going to the DB sounds like a better choice. ProcessWire Fieldtypes are designed to represent simple and complex structures in this way, while still letting you use the API and admin interface to handle it all. However, this does require developing your own Fieldtype and Inputfield to manage it (which actually isn't too difficult). If you don't need an interface and/or PW API access to manage it, you can also just go straight to the $db object as if ProcessWire wasn't there. But this isn't as nice or fun as having your data still be connected with ProcessWire.
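A rough sketch of the "straight to the DB" route, using $database (the PDO layer in current PW versions); the table and columns here are hypothetical:

<?php
// a plain table for high-volume rows that don't map to URLs
$database->exec("CREATE TABLE IF NOT EXISTS datapoints (
  id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  pages_id INT UNSIGNED NOT NULL, -- optional link back to a PW page
  value DECIMAL(10,2) NOT NULL,
  created TIMESTAMP DEFAULT CURRENT_TIMESTAMP
)");

// insert rows via prepared statements
$query = $database->prepare("INSERT INTO datapoints (pages_id, value) VALUES (:pid, :val)");
$query->execute([':pid' => $page->id, ':val' => 42.5]);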