
Why RockMigrations uses a totally different migration concept than other migration modules and how that works.


bernhard

I'm creating a new topic in response to @cst989's question in the RM thread as I think this is a common question and misunderstanding when evaluating RockMigrations. It's also easier to communicate in separate threads than in one huge multi-page-thread...

18 hours ago, cst989 said:

This module is looking great and I really appreciate all the effort that's gone into describing how to get started, videos etc..

Hi @cst989 thx for your question and interest in RockMigrations.

18 hours ago, cst989 said:

Our team (myself excluded) is experienced with Phinx for database migrations on non-ProcessWire projects. From what I understand, a good thing about Phinx is that a file is created for each migration, that migration will only be run once, the database will record that the migration has been run, and the file will be ignored from then on. It also has a nice CLI that creates the file, dated, with the structure ready to go.

This sounds like you have a misconception in your head which is quite common I guess. Did you watch my latest video on RM, especially this part? https://www.youtube.com/watch?v=o6O859d3cFA&t=576s

So why do you think it is a good thing to have one file per migration? I know that this is the way migrations usually work, but I don't think it is the best way to do it. I'm not saying one way is right and the other is wrong. I'm just saying I'm having a really, really good time with RockMigrations, and it makes working with PW in a more professional setup (meaning working in a team and/or deploying PW to multiple locations and of course managing everything with GIT) a lot more fun, a lot faster and a lot more efficient.

If we look at how migrations usually work, we can take the other PW migrations module as an example, since it works just like most migration tools do: you create one file per migration and end up with a list of migrations that get executed one after another. See this screenshot from the module's docs:

[Screenshot from the module's docs: the UI listing all migrations, one entry per migration file]

In my opinion that screenshot perfectly shows one huge disadvantage of that approach: you don't see what's going on. You end up with a huge list of migrations that you can't understand at first glance.

In RockMigrations this is totally different. You don't create one file per migration. You put the necessary migrations where they belong and where - in my opinion - they make the most sense.

An example: Let's say we want to add a field "foo" to the homepage. Ideally you are already using Custom Page Classes (see my video here https://www.youtube.com/watch?v=D651-w95M0A). They are not only a good idea for organizing your hooks but also your migrations! Just like you add all your HomePage-related hooks to the HomePage pageclass's init() or ready() method, you add the migrations for your fields to the HomePage pageclass's migrate() method. This could look something like this:

[Screenshot: the HomePage pageclass's migrate() method creating the "foo" field]
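A minimal sketch of what such a migrate() method can look like (the field type, label and the 'templates' details are just examples, and how RM gets to know about this pageclass file is left out here - see the video above):

<?php namespace ProcessWire;

class HomePage extends Page
{
  public function migrate()
  {
    $rm = $this->wire->modules->get('RockMigrations');
    $rm->migrate([
      'fields' => [
        // create the new field
        'foo' => [
          'type' => 'text',
          'label' => 'My foo field',
        ],
      ],
      'templates' => [
        // and add it to the home template
        'home' => [
          'fields' => ['title', 'foo'],
        ],
      ],
    ]);
  }
}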

Now let's say we develop things further and realise we also need a "bar" field:

[Screenshot: the same migrate() method, now also creating the "bar" field]
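In code, the same method just grows by a few lines (again only a sketch; the // NEW comments mark the difference):

<?php namespace ProcessWire;

class HomePage extends Page
{
  public function migrate()
  {
    $rm = $this->wire->modules->get('RockMigrations');
    $rm->migrate([
      'fields' => [
        'foo' => [
          'type' => 'text',
          'label' => 'My foo field',
        ],
        // NEW: the second field we just realised we need
        'bar' => [
          'type' => 'text',
          'label' => 'My bar field',
        ],
      ],
      'templates' => [
        'home' => [
          // NEW: "bar" added to the home template
          'fields' => ['title', 'foo', 'bar'],
        ],
      ],
    ]);
  }
}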

Do you see what we changed? I guess yes 🙂

Now one big difference to a regular migration approach is that you don't write downgrade() or reversion migrations - unless you actually want to revert changes! In real life I've almost never needed to revert anything. Why? Because you develop things locally and only push the changes that you really want to have on your production system. If you do need to remove some changes that you applied on your dev system, that's easy to do, though:

[Screenshot: the migrate() method with the changes removed again]
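Roughly what that can look like in code (only a sketch, assuming RM's deleteField() helper - check the API docs for your version): you remove the lines that created the fields and, for systems where the old migration already ran, add an explicit delete:

<?php namespace ProcessWire;

class HomePage extends Page
{
  public function migrate()
  {
    $rm = $this->wire->modules->get('RockMigrations');
    // the field creation from above is gone; instead we delete the
    // fields on every system where the old migration already ran
    $rm->deleteField('foo');
    $rm->deleteField('bar');
  }
}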

You see what we did? Nice! So does everybody else who has access to the project's GIT repo! All your teammates will instantly see and understand what you did.

Pro-tip: You don't even need lines 43-45 of the screenshot above if you didn't push those changes to production! If you only created those fields on your local dev, you can simply restore the staging database on your local dev environment and remove the migrations that create the fields.

Pro-tip 2: Also have a look at RockShell, then restoring staging or production data is as easy as "php rockshell db-pull staging"

Pro-tip 3: When restoring a DB dump from the staging system it can easily happen that your database contains data that was created only on the remote system and that does not exist on your dev system (new blog posts, for example). If you then open such a blog post in ProcessWire on your dev system and it contains images, ProcessWire will not be able to show them, because only the file path is stored in the DB, not the image itself! Just add $config->filesOnDemand = "http://yourstagingsite.example.com" to your config.php file and RockMigrations will download those files as soon as PW requests them (on the frontend or in the backend)!
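In site/config.php that is literally one line (the URL is of course just a placeholder for your own staging site):

<?php namespace ProcessWire;
// site/config.php
// Let RockMigrations download missing files from the staging system on demand.
$config->filesOnDemand = "http://yourstagingsite.example.com";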

With all your changes in your git history, you can even jump back and forth in time with your IDE:

[Animated GIF: jumping back and forth through the git history in the IDE]

18 hours ago, cst989 said:

I'm thinking I could write something to make it work this way... a module that manually keeps track of all these files and then creates a list, in date order, with any new files and passes that list to $rm->watch()? Or am I reinventing the wheel with something which goes against the logic of RockMigrations...? I suppose the main aim is to idiot-proof the process so nobody edits old migrations. 

You could. I thought about that as well, but I don't think it really makes sense, and I hope my examples above show why. I'm always open to input, though, and happy to think about it from other perspectives.

One final note: I'm not sure if what you say about $rm->watch() makes sense here. If you watch() a file, RM checks its modified timestamp. If that timestamp is newer than the last migration RM ran, RM will automatically execute the migrations in that file; all other files and migrations will be ignored. That makes it a lot more efficient on projects that have many migration files in many different places. When triggered from the CLI, or when you do a modules refresh, RM will always run all migrations in all watched files, though.
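For completeness, adding a file to the watchlist could look roughly like this (only a sketch; I'm assuming watch() takes a plain file path here, check the docs for the exact signature):

<?php namespace ProcessWire;
// e.g. in site/ready.php
$rm = $wire->modules->get('RockMigrations');
// RM remembers the file's modified timestamp and re-runs the migrations
// in it only when the file changes (or on a CLI run / modules refresh)
$rm->watch('/path/to/my/migrations.php');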

I hope that makes sense!

---

Ok, now really a final note 😄

One HUGE benefit of how RockMigrations works (meaning that you write migrations in page classes or in modules) is that you create reusable pieces of work/code. For example let's say you work on a website that needs a blog. So you create a blog module and in that module you have some migrations like this:

<?php
$rm->createTemplate('blogparent');
$rm->createTemplate('blogitem');
$rm->setParentChild('blogparent', 'blogitem');
$rm->migrate([
  'fields' => [...], // create fields headline, date, body
  'templates' => [...], // add those fields to the blogitem template
]);

You'd typically have those lines in Blog.module.php::migrate()
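Wrapped into the module, that could look roughly like this (a stripped-down sketch; module info reduced to the minimum and the field definitions omitted):

<?php namespace ProcessWire;

class Blog extends WireData implements Module
{
  public static function getModuleInfo()
  {
    return [
      'title' => 'Blog',
      'version' => '0.0.1',
      'autoload' => true,
    ];
  }

  public function migrate()
  {
    $rm = $this->wire->modules->get('RockMigrations');
    $rm->createTemplate('blogparent');
    $rm->createTemplate('blogitem');
    $rm->setParentChild('blogparent', 'blogitem');
    $rm->migrate([
      'fields' => [
        // field definitions for headline, date, body (omitted here)
      ],
      'templates' => [
        // add those fields to the blogitem template (omitted here)
      ],
    ]);
  }
}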

What if you need a blog in another project? Yep --> just git clone that module into the new project and execute the migrations! For example, in migrate.php:

<?php
$rm->installModule('Blog');

If you follow a regular migrations concept where all project-migrations are stored in a central folder you can't do that!

Of course you don't have to work like this. You can still write all your migrations in the project's migrate.php file, because I have to admit that building a blog module that can be reused across different projects is a lot harder than building one for a single project! It always depends on the situation. But - and now I'll really leave it for today 😄 - you could also make your Blog module's migrate() method hookable. That would make it possible to build one generic blog for all projects and then add field "foo" to the blog in project-a and field "bar" to the blog in project-b.
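A rough sketch of that idea (the three underscores are standard ProcessWire hook syntax; the RM helpers createField() and addFieldToTemplate() plus the field names are just examples, double-check them against the RM docs):

<?php namespace ProcessWire;

// Blog.module.php: the three underscores make migrate() hookable
class Blog extends WireData implements Module
{
  public static function getModuleInfo()
  {
    return ['title' => 'Blog', 'version' => '0.0.1', 'autoload' => true];
  }

  public function ___migrate()
  {
    // generic blog migrations shared by all projects ...
  }
}

And then project-a adds its extra field from outside the module, e.g. in site/ready.php:

<?php namespace ProcessWire;

// project-a only: extend the generic blog after its migrations ran
$wire->addHookAfter('Blog::migrate', function (HookEvent $event) {
  $rm = $event->wire->modules->get('RockMigrations');
  $rm->createField('foo', 'text', ['label' => 'Foo (only in project-a)']);
  $rm->addFieldToTemplate('foo', 'blogitem');
});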

Have fun discovering RockMigrations. I understand it can look frightening at first, but it is an extremely rewarding investment! Ask @dotnetic if you don't believe me 😄 


Thank you @bernhard for this extensive post. I use RockMigrations every day and love it, as I have stated often before. It isn't as hard as it looks at first. Just try it out, and if you stumble over something, ask for support here in the forum. RockMigrations saves a lot of time and makes it easy to develop features in your dev environment and then, when a feature is finished, push the changes to the live server, where the migrations are executed automatically and create all fields, templates and even pages and content. I could live without it, but that would be a sad life.

So.... start using it now!



@bernhard Hi!

The main problem with the field-removal example is that when you work on a large project in different branches, the data structure of the project can diverge.

The main advantage of the classical approach to migrations is that no matter which commit you are coming from, you can always bring the structure up to the current state.

In your example we can only add new fields and slightly change the settings of existing ones, and even for that the code has to be present in the project.

But what happens if someone missed the changes that delete fields, change their type, or perform data conversions, because that code only existed in an intermediate commit, while the current one simply describes the required fields, templates, etc.?

As far as I can see, in that case we would have to maintain a growing collection of special cases - checks for deletions, for transformations that were already performed, and so on - and that collection would keep growing over the project's lifetime, especially when there is a significant difference between releases and branches.

I really liked the presentation of the project and its description, but unfortunately I could not adopt it, since I cannot reconstruct the necessary migrations from commit to commit.


Hi @Most Powerful Pony!

What you are describing is indeed more complex, and in such a situation you'd have to be more careful. But it's not really a drawback of RockMigrations in my opinion - it's a general team management issue, just like any other merge conflict.

If I understood you correctly you are talking about a situation like this:

- website X, v1.0
  - feature branch A adds field_a → v1.1a
  - feature branch B adds field_b → v1.1b
  - feature branch C adds field_c → v1.1c

Now if anyone checks out A, then B, then C, they'd end up with all three fields, yes.

That can be a problem, I agree. The question is what you do to prevent such problems. In a classic approach you'd revert the migrations of branch A, then check out B, revert again, then check out C.

But you can do the same with RockMigrations. You could, for example, make a DB dump of v1.0, work on branch A, then check out B and run "rockshell db:restore". That brings you back to v1.0, and after the restore RockShell applies the migrations of your current branch, so you'd be on v1.1b with only field_b (and neither field_a nor field_c).

What I usually do is have one single source of truth, which is usually the production system. Then I can do whatever I want in whatever branch and just run "rockshell db:pull": it pulls the production DB and applies the local migrations of my current branch. That way I also make sure I don't get any bad surprises on deployment.

But you could also use RockMigrations in a more classical way. @elabx is using it in combination with the older Migrations module, which takes care of executing migrations one by one and also provides a UI for rollbacks. That might make sense in a scenario like the one you are describing; still, RockMigrations can be of great help for writing the migration code itself.

But don't forget that when going the classical route you basically double the development effort: every change needs the corresponding rollback code as well. If that is a requirement, you can do it (even with RockMigrations). I just don't do it, because I don't need it - "rockshell db:dump" and "rockshell db:restore" are usually a lot faster than writing rollback code (and, not to forget, testing everything) 🙂 I'm using DDEV and I check the DB dump into my repo, so checking out any commit in the history is just "git checkout ... && ddev import-db -f dump.sql". That's it.

But if you want to support RockMigrations, I'd be happy to add support for classical workflows to RockMigrations, or we can discuss any other kind of cooperation to push ProcessWire forward on this topic 🙂

