add better configuration for fields and templates and make them version controllable


dotnetic


The problem:

Synchronizing fields and/or templates made on the dev server with the live server is cumbersome.
At the same time, there is no version control of fields and templates, which some folks (including myself) see as a disadvantage of ProcessWire.

A way to track these changes under version control and replicate them automatically would be desirable.

There is the template and field export feature in ProcessWire, which has been labeled a beta version for ages, although I have used it many times without any problems. However, even with this method it is very cumbersome to reconcile changes between dev and live: you have to remember which fields / templates you created, select them, and then copy and paste them on the live server. This is a manual, error-prone and time-consuming process.

Existing solutions:

For this reason, several solutions have been developed.

Other systems like Laravel, Craft, Kirby and Statamic use configuration files (migrations, YAML) to manage fields / templates, which let you define the desired state of fields / templates. Since the configuration lives in a file, you can of course manage it with Git (or another VCS). Because the configuration is a file, it is also possible to execute these migrations automatically on every git push through deploy pipelines such as GitHub Actions, Buddy or Bitbucket Pipelines, so you get the desired state on the desired server.
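To make this concrete, a declarative config file of this kind might look roughly as follows. This is only an illustrative sketch: the field and template names are invented, and the exact schema differs between systems (Craft's project config, the YAML prototype discussed below, etc.).

```yaml
# Hypothetical declarative config (illustrative only; real schemas differ).
# The desired state of fields and templates lives in one versionable file.
fields:
  headline:
    type: text
    label: Headline
    required: true
  body:
    type: textarea
    label: Body text
templates:
  blog-post:
    fields: [headline, body]
```

Because a file like this describes the desired end state rather than a sequence of steps, a deploy pipeline can simply apply it on every push and the target installation converges to that state.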

Where to go from here?

In another post @bernhard showcased a prototype that uses a YAML file to create and manage fields / templates in ProcessWire.

At the same time he showcased a YAML recorder, which writes all changes made to templates and fields into a YAML file and looks very promising.

I think a combination of a recorder and a YAML config file would be an optimal solution, at least for me.

Which format to use for such a configuration file also still has to be discussed.

 


Thank you for summing this topic up in a new thread. I had the same intention but couldn't spare the time.

I am all for version control of fields and templates. @bernhard's RockMigrations module already does a great job here. And since he introduced the prototype recorder, I am very excited that we will soon have something to work with and build upon.

This should really be part of the PW core or be available as an optional core module. It would be great if @ryan put this on the roadmap for 2022.


I'd be curious how the other tools with file-based config manage changes over time. My biggest problem with file-based config (as opposed to file-based migrations) is that it only gives you the current state you want things to be in, but no indication of which state the system is coming from, nor how to actually migrate data already in the system (DB) to the new expected state. I'm not aware of declarative systems being able to handle that, which to me severely limits their usefulness. It might be great at the beginning of a project, where you can scrap existing data, but it won't work at all once old data needs to be maintained going forward.


Migration 21-11 (add foo field)

<?php
$rm->migrate([
  'fields' => [
    'foo' => [...],
  ],
  'templates' => [
    'xyz' => [
      'fields' => ['foo'],
     ],
  ],
]);

 Migration 21-12 (add bar field)

<?php
$rm->migrate([
  'fields' => [
    'foo' => [...],
    'bar' => [...],
  ],
  'templates' => [
    'xyz' => [
      'fields' => ['foo', 'bar'],
     ],
  ],
]);

 Migration 22-01 (add baz field)

<?php
$rm->migrate([
  'fields' => [
    'foo' => [...],
    'bar' => [...],
    'baz' => [...],
  ],
  'templates' => [
    'xyz' => [
      'fields' => ['foo', 'bar', 'baz'],
     ],
  ],
]);

RockMigrations works in a way that no matter how often you run the migration, the same config is applied and the system ends up in the same state each time (the migrate() call is idempotent). I know this is a totally different concept than the one your migrations module uses, but it has turned out to be extremely easy to use and it works well. In the example above it does not matter whether you jump from 21-11 to 21-12 and then to 22-01, or go directly from 21-11 to 22-01. You could even start with 22-01 and nothing else before.

That's easy as long as you add things to the system. Removing things is a little different, but it's also easy:

22-02 (remove baz field)

<?php
$rm->removeField('baz');
$rm->migrate([
  'fields' => [
    'foo' => [...],
    'bar' => [...],
  ],
  'templates' => [
    'xyz' => [
      'fields' => ['foo', 'bar'],
     ],
  ],
]);

Reverting things is a totally different topic! 22-01 --> 21-11 would not really be possible using RockMigrations, though you can think of 22-02 as a "revert migration" that does exactly what a reversion would do.

But if someone does not like that concept, they could also use RockMigrations the way your module works. In my experience that just makes migrations a lot more complex, while adding things to a PHP array is a piece of cake. I've never had any problems with my concept over the last few years 🙂


This sounds interesting, though I wouldn't really call it file-based config. It's not a single config, but rather migrations, which work off of a declarative config. Still wondering how this would deal with data, though. Say I have a text field which needs to be swapped out for a Pro Multiplier field while keeping the current text around as the content of one of the subfields. The above makes it seem like the previous field and its contents would just be deleted and the new one would be empty.


1 hour ago, LostKobrakai said:

It's not a single config, but rather migrations, which work off of a declarative config.

Exactly. It's even named like this: migrate($config) https://github.com/BernhardBaumrock/RockMigrations/blob/db4d300b4369863d41003f5c668b7ecaf2f19b5d/RockMigrations.module.php#L2693

1 hour ago, LostKobrakai said:

Still wondering how this would deal with data though. Say I have a text field, which needs to be switched out with a pro multiplier while keeping the current text around as the content of one of the subfields. The above makes it seem like the prev. field and contents would just be deleted and the new one would just be empty.

Sorry, I don't understand. Note that I have no experience with multiplier fields. Maybe you can give me another example?


Another example could be migrating an address stored in a textarea, which should be split into multiple dedicated text fields (street, postal, city, …). Or some set of fields on template A, which should be extracted/migrated to template B – creating new child pages wherever there are template A pages. Imagine pages having a single address, and now they need to be able to have multiple addresses.


12 minutes ago, LostKobrakai said:

Another example could be migrating an address stored in a textarea, which should be split into multiple dedicated text fields (street, postal, city, …). Or some set of fields on template A, which should be extracted/migrated to template B – creating new child pages wherever there are template A pages. Imagine pages having a single address, and now they need to be able to have multiple addresses.

This can all be achieved in code, which means you can put it in a migration function, and in RockMigrations you call your migration function on the target installation. RM supports Repeater fields, so even the third scenario would be possible.


23 hours ago, LostKobrakai said:

Another example could be migrating an address stored in a textarea, which should be split into multiple dedicated text fields (street, postal, city, …). Or some set of fields on template A, which should be extracted/migrated to template B – creating new child pages wherever there are template A pages. Imagine pages having a single address, and now they need to be able to have multiple addresses.

Thx for that example. I understand now.

<?php
// migrate field content only if the address field exists
if($rm->fields->get('address')) {
  // first we make sure the fields exist
  $rm->createField('foo', 'text');
  $rm->createField('bar', 'text');
  $rm->createField('baz', 'text');
  
  // then we make sure they are added to xyz template
  $rm->addFieldsToTemplates(['foo','bar','baz'], 'xyz');
  
  // then we migrate data
  foreach($pages->find('template=xyz,address!=') as $p) {
    $parts = ...; // split address into parts
    $p->of(false);
    $p->foo = $parts[0];
    $p->bar = $parts[1];
    $p->baz = $parts[2];
    $p->save();
  }
  
  // then we delete the obsolete field
  $rm->deleteField('address');
}

// then we use the existing migration config for future use
// so you can easily add other stuff (like whatsoever field)
$rm->migrate([
  'fields' => [
    'foo' => [...],
    'bar' => [...],
    'baz' => [...],
    'whatsoever' => [...],
  ],
  'templates' => [
    'xyz' => [
      'fields' => ['foo', 'bar', 'baz'],
     ],
    'whatsoever' => [
      'fields' => ['whatsoever'],
    ],
  ],
]);

If you are inside a module that uses a migrate() method to migrate data, this could easily be refactored like this:

<?php
class MyModule ... {
  
  public function migrate() {
    $rm = $this->wire->modules->get('RockMigrations');
    $this->migrateAddressData();
    $rm->migrate([
      'fields' => [
        'foo' => [...],
        'bar' => [...],
        'baz' => [...],
        'whatsoever' => [...],
      ],
      'templates' => [
        'xyz' => [
          'fields' => ['foo', 'bar', 'baz'],
         ],
        'whatsoever' => [
          'fields' => ['whatsoever'],
        ],
      ],
    ]);
  }
  
  public function migrateAddressData() {
    // get the RockMigrations instance; $rm was local to migrate() above
    $rm = $this->wire->modules->get('RockMigrations');

    // if the address field does not exist any more
    // we can exit early as we have nothing to do
    if(!$rm->fields->get('address')) return;
    
    // first we make sure the fields exist
    $rm->createField('foo', 'text');
    $rm->createField('bar', 'text');
    $rm->createField('baz', 'text');

    // then we make sure they are added to xyz template
    $rm->addFieldsToTemplates(['foo','bar','baz'], 'xyz');

    // then we migrate data
    foreach($this->wire->pages->find('template=xyz,address!=') as $p) {
      $parts = ...; // split address into parts
      $p->of(false);
      $p->foo = $parts[0];
      $p->bar = $parts[1];
      $p->baz = $parts[2];
      $p->save();
    }

    // then we delete the obsolete field
    $rm->deleteField('address');
  }
  
}

This is one of the situations that would not be possible using YAML, of course 🙂 But the migrateAddressData() part could be done in PHP, while migrate() could still consume YAML data.


On 1/20/2022 at 7:32 AM, dotnetic said:

There is the template and field export feature in ProcessWire, which has been labeled a beta version for ages, although I have used it many times without any problems. However, even with this method it is very cumbersome to reconcile changes between dev and live: you have to remember which fields / templates you created, select them, and then copy and paste them on the live server. This is a manual, error-prone and time-consuming process.

The JSON that this feature generates for import/export mostly works for me; the issue is having to manually select fields / templates via the UI.

I notice that templates include a timestamp for when they were changed (in the data column of the templates table), but fields don't.

If both did, it should be possible to get the fields and templates that have changed since the last build time of a module or template.

I use the admin UI, and don't want to write more code than I have to.

Specifying in code which fields and templates a module depends on, and letting the system pull a config file with the definitions of those fields and templates from the database, would be UI-friendly: it wouldn't rely on remembering which fields or templates are required for a specific purpose, and it wouldn't require much code either.

This is similar to a database-first workflow I use with SQL Server and ASP.NET Core. I can reverse-engineer any tables I want from SQL Server to get a class definition, so I don't need to worry whether the definition of something was manipulated outside my code; I can always grab the current definition without having to write any code.

For those who have a licensed copy of Padloper 1, it's interesting to see how it creates the fields and templates it needs. It appears to use JSON files based on the field and template import/export feature, but has its own code to import them.

With a timestamp, and possibly a version number, added to the data in the fields and templates tables, I wonder whether that would make a hybrid UI/code-based update model easier to work with?

Of course, for more complex migrations RM is more capable, but I wonder if even there it could grab the field and template definitions if they're exported as JSON?


5 minutes ago, Kiwi Chris said:

I notice that templates include a timestamp for when they were changed in the data field of the templates table, but fields don't.

If both did, then it should be possible to get fields and templates that have changed since the last build time of a module or template.

+1

 

5 minutes ago, Kiwi Chris said:

, and don't want to write more code than I have to.

+1

 

6 minutes ago, Kiwi Chris said:

With a timestamp and possibly a version number added to the data in the fields and templates table, I wonder if that would make a hybrid UI/code based update model easier to work with?

Could be a good first (or next) step towards automated file-based migrations with a rollback feature! 🙂

 


I posted over in this thread how to get a JSON definition of each required field, template, and page using core functions, but that doesn't resolve knowing whether a change has been made.

I wonder if it would be possible to store a hashed list of the state of objects at a particular build?

A hash might be more useful than a timestamp in a collaborative environment, as what got modified, rather than when it got modified, is probably more relevant.

It should be possible to iterate over the list of objects, compare hashes and build a migration file containing a list of those objects that have changed since the last build.

This will not solve all scenarios, but could be a good start.

Entity Framework Core from Microsoft does something like this, and its documentation specifically states that the migration file will need manual editing for some scenarios, for example when a field has been renamed: the generated migration will contain code to drop the original field and create a new one, rather than code to rename the existing field, which would obviously result in data loss.

If Microsoft, with all their resources, can't figure out fully automated migrations, it's obviously a significant challenge. However, I like the idea of automation that produces a migration file as a starting point: in simple cases it may be usable as-is, and in more complex cases it at least reduces the amount of code that has to be written manually.
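The hashed-state comparison described above could be sketched like this. It's written in Python purely to illustrate the framework-agnostic logic (a ProcessWire implementation would be PHP); `state_hash` operates on whatever dict the JSON export of a field or template produces, and all field names here are made up.

```python
import hashlib
import json

def state_hash(definition: dict) -> str:
    """Hash a field/template definition; key order is normalized
    so semantically identical exports produce the same hash."""
    canonical = json.dumps(definition, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def changed_objects(current: dict, last_build: dict) -> list:
    """Return names of objects whose definition hash differs from the
    snapshot stored at the last build (new objects count as changed)."""
    return [name for name, definition in current.items()
            if state_hash(definition) != last_build.get(name)]

# Example: 'title' is unchanged, 'body' was modified, 'summary' is new.
current = {
    "title": {"type": "text", "label": "Title"},
    "body": {"type": "textarea", "label": "Body", "rows": 10},
    "summary": {"type": "textarea", "label": "Summary"},
}
last_build = {
    "title": state_hash({"type": "text", "label": "Title"}),
    "body": state_hash({"type": "textarea", "label": "Body", "rows": 5}),
}
print(sorted(changed_objects(current, last_build)))  # ['body', 'summary']
```

Iterating over all objects this way would yield the list of candidates for a generated migration file, which could then be hand-edited for the tricky cases (renames, data moves).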


This is kind of a cross-post... in order to keep this thread updated with related thoughts.

In summary... @ryan and @horst use what PW provides... @bernhard uses his RockMigrations... all the rest of us use whatever fits our needs, and I'm totally fine with that... and really appreciate it.

Yet... somehow I'd really love to see a CraftCMS-like (even though I have never seen or used it) JSON/YAML recorder-style tool such as the one @bernhard showed us.

What if we open a money-pool for such a development?
Someone interested?

If I could have some kind of solid, working solution for a real-time JSON/YAML file export/import that I only need to migrate via Git, without writing any functions, code or whatever, based solely on my PW setup, to make all my instances the same... sure, with some checks in the background, but... YES... that would be my dream.

Still... I want and have to try and test the solution CraftCMS has available.

I put my money where my mouth is and start upfront with EUR 500 in the pool.

I will post some details of all the features I'd like to have and see in such a module in the upcoming days, maybe a week or two.
Yet... whoever wants to join, let me know.
In case you want to develop such a module, PLEASE let me know.

I guess I want to see this module happen in some form or another (community or pro module).

My goal is... making PW even better. And maybe this could be another step.

Crossposted from

 


On 1/29/2022 at 11:46 PM, wbmnfktr said:

In case you want to develop such a module, PLEASE let me know.

I firmly believe that Ryan should be the one to do it, even though he does not (yet) need it for some reason. I would only be willing to add money to the "pool" if he is the one who implements AND maintains it. So why don't we just pledge to collect the amount he asks for its initial development and ask him to do it? New features for ProDevTools perhaps?


21 hours ago, szabesz said:

I firmly believe that Ryan should be the one to do it, even though he does not (yet) need it for some reason.

@szabesz Ryan's post in the update thread is very insightful in this regard:

Quote

I'll develop it on my local copy of the site and it might involve creating several fields, templates and template files. I'll take a day or two to develop it and when it comes time to migrate the finished work to the live server, that's the fun part

[…]

A blog is just an example but it's the same as any other update. It's a painless process that always goes quickly and smoothly. This part of it takes maybe 5 to 10 minutes and is one of my favorite parts of the project, like driving a new car and then seeing it for the first time in your driveway.

Main takeaways from this:

  1. Ryan always works alone, never in a team.
  2. Ryan only works on projects with sporadic, large updates, never continuous/ongoing development.

With these constraints, a manual workflow really is no problem. Personally, I still wouldn't want to go without version control and automatic deployments, but I can see that if you're not used to that kind of workflow, you don't see the need for it in this case. That is, unless you run into one of the limits of this manual workflow:

  1. Working on the same project with multiple people at the same time without version control is near-impossible and error-prone.
  2. Working on a project with constant updates where you need to deploy not once every 3 months, but 5 times a day – in the latter case, those 5 - 10 minutes for each deployment really add up and get annoying real quick.

So I can understand Ryan's point of view that version control integration is kind of a 'luxury' feature instead of an absolute necessity for many teams/projects. I don't agree with this view – but ultimately it's up to Ryan where he wants to take ProcessWire, and it's up to developers to figure out whether ProcessWire's feature set is sufficient for each individual team or project.

Quote

I will post some details of all the features I'd like to have and see in such a module in the upcoming days, maybe a week or two.
Yet... whoever wants to join, let me know.
In case you want to develop such a module, PLEASE let me know.

I agree with @szabesz that you need this in the core. Full version compatibility will require some changes in mindset and feature-set for the core, and this can only come from the core itself.

On 1/29/2022 at 9:07 PM, Kiwi Chris said:

how to get a JSON definition of each required field, template, and page, using core functions, but that doesn't resolve knowing whether a change has been made.

I wonder if it would be possible to store a hashed list of the state of objects at a particular build?

A hash might be more useful than a timestamp in a collaborative environment, as what got modified rather than when it got modified is probably more relevant. 

It should be possible to iterate over the list of objects, compare hashes and build a migration file containing a list of those objects that have changed since the last build.

@Kiwi Chris The difficulty comes from trying to use migrations (a stream of changes) instead of a declarative config. You want a config that describes the entire system, so it can be built from scratch if necessary, not just a set of changes to go from one build to another. See below for details.

-----

In the other thread I posted some arguments why a declarative config is better than PHP migrations, just leaving this here since @dotnetic asked to have it cross-posted:

Quote

YAML is preferable because it's declarative instead of imperative. This has a couple of side-benefits, like cleaner diff views in git, no formatting issues or different styles and no 'noise' in your commits (all only relevant if you have a git-based workflow with pull requests). But the big thing is that it makes it impossible to create environment-specific configuration, which is exactly what you don't want. If you embrace that the configuration is the source of truth for the entire site state (excluding content), you won't need this anyway. Take your example where you switch a field based on whether the languages module is installed - I would flag this in a PR and consider it an antipattern. Whether a site is multi-language or not should be part of the configuration. If it isn't there's no way to guarantee that the code which works in staging will also work in production, so at that point you're doing all the work for controlled deployments and version control but not getting the benefits.

Another downside of PHP is that it's unidirectional by default. With YAML, if a deployment fails, you can just roll back to the earlier version and apply the configuration of that version. With PHP, this may work if the PHP migration is just one single $rm->migrate call with an array of configuration (so basically it is a declarative config). But you have no guarantee that it will, and if you have any logic in your migration that depends on the previous state of the site to migrate to a new state, this migration is irreversible.

Migrations do have their place - if you really need to perform some logic, like moving content from one field or format to another. But besides that, declarative configuration files are preferable.


I have been reading these posts with great interest but also a great deal of confusion. It seems that there are maybe 3 conversations going on simultaneously. Or maybe they are one and the same (or sides of the same 3-sided coin 😄) and it is me who is not getting it. I suspect the latter.

#1 Conversation 1:

Create fields and templates quickly using a configuration file.

#2 Conversation 2:

Deploying sites from local to production.

#3 Conversation 3:

Versioning fields and templates.

It is #3 that I don't understand at all. Maybe this is because I always use #2 in my 'deploy to production' strategy. This is not a criticism of #3; I genuinely don't understand why you would need to version templates and fields (but especially templates). I have read most of the posts but it is still not sinking in. Could someone please briefly explain why one would want to version templates and fields? By versioning a field, for instance, does it mean that if the field's label or description, etc. changes, you need to be able to roll back those changes? Or something more complicated than that? Please don't laugh 😁. I am probably exposing my ignorance here, but happy to learn 🙂.

Thanks.


Hi @kongondo,

for me it seems you nailed it down pretty precisely! 🙂

#1 is about prototyping.
#2 is a simple one-way deployment from dev to production, or dev to stage to production, based on automatically exported text files from PW.
#3 is about a fully automated deployment based on static files that somehow describe a start and an end state for a deployment. It should be possible to roll this forward and back, even in a non-linear order.

Number 3 is what you want in a mid-sized or large team of developers. It should enable them to work with different states, maybe like parallel git branches.
And ähm, that's only how I understand it. 🙂

 

EDIT: and yes, it is confusing to me too, because it seems that no one really has these differences (which only reflect the different workflows and personal preferences) on the radar. 😁

Edited by horst

On 1/30/2022 at 7:05 PM, szabesz said:

I firmly believe that Ryan should be the one to do it, even though he does not (yet) need it for some reason.

I wish this were an option here, but Ryan has taken his stand on this, which I actually kind of like, even though it doesn't fit my needs at all.

On the other hand, I have already seen solutions from others (no, they aren't public and I won't tell whose) and they look promising. Really promising, like something I know from setups I worked with years and years ago in a J2EE/Oracle environment. That was a super-portable setup: a few files managed through TortoiseSVN, and each and every instance was updated.

On 1/30/2022 at 7:05 PM, szabesz said:

I would only be willing to add money to the "pool" if he is the one who implements AND maintains it. So why don't we just pledge that we collect the amount he asks for its initial development and ask him to do it?

I would support a broader pool to make developing open-source software and extensions for PW more of a business than a hobby. I know too many free/OSS devs who can hardly make a living, so my goal is more a sustainable pool for module devs.

SURE... this is still absolutely nothing like planned or thought through... yet there needs to be, or should be, some kind of benefit for module devs. I have already donated to some here for their work, just because their software made my life and projects way easier. While we're talking about it, I still plan to build up a kind of pool for developers in my projects. Still in progress... yet.


11 hours ago, horst said:

Number 3 is what you want in a mid-sized or large team of developers. It should enable them to work with different states, maybe like parallel git branches.
And ähm, that's only how I understand it. 🙂

Thanks @horst for enlightening me. Some of the confusion is gone. I still don't get why you would version a field or template (not the template file), though 😁.


1 hour ago, kongondo said:

I still don't get why you would version a field or template (not template file) though 😁.

Personally, I'm not after versioning; I have never ever reverted my code/database to any previous state anyway, as at most I copy some useful lines from a few hours ago and that's it. I painstakingly test everything I implement, so in the end there is no need to revert to anything.

What I am after is being able to edit templates/fields in the admin on my local machine and, when I am done (and have already thoroughly tested all my work), to push all the changes to the production site with a single click. That's it.


6 hours ago, kongondo said:

I still don't get why you would version a field or template (not template file) though 😁.

This is to keep track of the state of a site's structure independently of the content, and to allow changes to fields and templates that have been altered on local or staging to be transferred to the production version without losing content that was introduced in the meantime. Does this make sense?


1 hour ago, gebeer said:

This is to keep track of the state of a site's structure independently of the content, and to allow changes to fields and templates that have been altered on local or staging to be transferred to the production version without losing content that was introduced in the meantime. Does this make sense?

I was explaining the very same needs above, I think, and we can all agree that this would be dead useful. Ryan does the same but manually, as he enjoys redoing the manual clicking work in production. While I can imagine that it's fun for him, I always avoid this, as IMHO it is boring. That is why I make changes to templates/fields in production, clone the DB to local afterwards, and work on the code in local staging. When all is OK, I just need to apply the code changes to production, because the production DB is already up to date in the first place.

While this works for a one-man show, it starts to be an issue for a team. Still, even a solo developer would be better off if our feature request became reality one day.


@kongondo @szabesz @horst A completely automated deployment enables continuous deployment as well as a number of other workflows. Being able to roll back to a previous version is part of it, but it's only one of the benefits of version control, and probably not the most important one. It's all a question of how much your workflow scales with (a) the amount of work done on a project / the number of deployments in a given timeframe and (b) the number of people on your team. For me, the 'breaking points' would be more than maybe one deployment a week, and more than one person working on a project.

There were many different approaches mentioned in the previous threads – migrations, cloning the production database before any changes, lots of custom scripting etc. But those all break down if you're starting to work in a team and get into workflows centered around version control. The key to those workflows is that each commit is self-contained, meaning you can rebuild the entire site state from it with a single command.

For comparison, here's how I work on a Craft project with my team, following a feature-branch workflow. I may work on the blog section while my colleague works on the navigation. We both have our own local development environment, so we can test even major changes without interfering with each other. Once my colleague has finished, they commit their changes to a separate branch, push it to GitHub and open a pull request – including any template changes, translations, config changes, etc. I get a notification to review the PR. I would like to review it right away, but I'm working on a different feature right now, with a couple of commits and some half-finished work in my working directory that isn't even working at the moment. No problem: I just stash my current changes so I have a clean working directory, then fetch and check out my colleague's branch from GitHub. Now it only takes a couple of commands to get my environment to the exact state the PR is in:

composer install (in case any dependencies / plugins have changed)
php craft project-config/apply (Apply the project configuration in my current working directory)
npm ci (install new npm dependencies, if any)
npm run build (build frontend)

Most of the time, you only need one or two of these commands, and of course you can put them in a tiny script so it's truly only one step. Now I can test the changes in my development environment and add my feedback to the PR. Keep in mind that the new 'blog article' entry type my colleague created, with all its fields and settings, is now available in my installation, since it is included in the config committed on the branch. Now imagine doing that if you had to recreate all the fields my colleague created for this PR manually, and remove them again when you're done. Now imagine doing that 10 times a day.

By the way, everything I was working on is safely stored in my branch/stash and doesn't interfere with the branch I'm testing now. This is the benefit of a declarative config: Everything that's not in the config gets removed. So even if I left my own work in a broken state, it won't interfere with reviewing the PR. With migrations, you'd have to include an up and down migration for every change, and remember to execute them in the right order when switching branches.

Any manual steps, no matter how easy or quick they are, prevent those workflows at scale.

Automatic deployments also make your deployments reproducible. Let's say you have an additional staging environment so the client can test any incoming changes before they hit production. If you do manual deployments, you may do everything right when deploying to staging but forget a field when deploying to production. With fully automated deployments in symmetric environments, you'll catch any deployment errors in staging. That's not to say you can't introduce bugs or have something break unexpectedly, but by removing manual steps you're removing a major source of errors in your deployments.

8 hours ago, szabesz said:

What I am after is being able to edit templates/fields in the admin on my local machine and, when I am done (and have already thoroughly tested all my work), I would like to push all the changes to the production site with a single click. That's it.

I can one-up that: zero clicks. Automatic deployments are triggered through webhooks as soon as a PR is merged into the main branch on GitHub. Deployment notifications are sent to Slack, so everyone sees what's going on. A branch protection rule on GitHub prevents developers from pushing directly to the main branch and requires at least one (or more) approvals on a PR before it can be merged 🙂
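A webhook-triggered deployment like that could be sketched as a GitHub Actions workflow – purely illustrative; the workflow file, the secret name and the curl call are my own assumptions, not part of the post:

```yaml
# .github/workflows/deploy.yml – hypothetical example
name: Deploy
on:
  push:
    branches: [main]      # fires once a PR is merged into main
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Trigger the deployment webhook on the server
        run: curl -fsS -X POST "$DEPLOY_WEBHOOK_URL"
        env:
          DEPLOY_WEBHOOK_URL: ${{ secrets.DEPLOY_WEBHOOK_URL }}
```

The server-side deploy script behind the webhook would then pull the branch and apply the committed config, so no human ever clicks anything.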

6 hours ago, szabesz said:

Personally, I'm not after versioning; I have never reverted my code/database to any previous state anyway, as I only sometimes copy some useful lines from a few hours ago and that's it. I painstakingly test everything I implement, so in the end there is no need to revert anything.

Your clients never ask you to undo some change you did a while ago? Not because of some bug, but because of changed requirements? In any case, if your version control only includes templates, but not the state of the templates/fields that those templates expect, you won't be able to reverse anything non-trivial without a lot of work. Which means you lose a major benefit of version control. Going from commenting out chunks of code because 'you might need them later' to just deleting them, knowing you will be able to restore them at any time, is really enjoyable. Having the same safety net for templates, fields, etc. is great.

Fun story: I once implemented a change requested by a client even though I knew it wasn't a good idea, just because implementing it would take less time than arguing. Once they saw it in production, they immediately asked me to revert it. One `git revert` later, the feature was back in its previous state.

41 minutes ago, MoritzLost said:

Your clients never ask you to undo some change you did a while ago? Not because of some bug, but because of changed requirements?

Sure, I've done it a few times, but clients always have to pay the price :) Just kidding... Yes, it can sometimes be a real issue for sure. Anyway, I work solo and my projects/clients are somewhat "special" – most of the time clients just rely on all my decisions.

Still, thanks for your insights! You have clearly explained your motivations, which are more than reasonable, I think. I would be more than happy to join a crowdfunding initiative if YOU were the one to lead it, but first someone needs to make Ryan firmly believe he needs this too... Sounds impossible, but never say never.

1 hour ago, MoritzLost said:

This is the benefit of a declarative config: Everything that's not in the config gets removed. So even if I left my own work in a broken state, it won't interfere with reviewing the PR. With migrations, you'd have to include an up and down migration for every change, and remember to execute them in the right order when switching branches.

Just trying to understand this with regard to PW fields and templates: would this go in a (simplified) direction like automatically writing down *ALL* field and template config into a file (or two files) on export? And on import, first wiping out all existing (current) config and then restoring / building the given state from those one or two files?
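The state such a file would describe could look roughly like this – a purely illustrative sketch; the field/template names and keys below are invented and do not correspond to an existing ProcessWire export format:

```yaml
# Invented declarative config – not a real ProcessWire format.
fields:
  headline:
    type: FieldtypeText
    label: Headline
  body:
    type: FieldtypeTextarea
    label: Body text
templates:
  blog-article:
    fields: [headline, body]
# On "import", everything present here is created or updated, and
# anything not listed is removed – the file fully describes the
# desired state, as with Craft's project config.
```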

 
