
add better configuration for fields and templates and make them version controllable


dotnetic


5 hours ago, gebeer said:

Does this make sense?

Now it does, thanks!

2 hours ago, MoritzLost said:

The key to those workflows is that each commit is self-contained, meaning you can rebuild the entire site state from it with a single command.

Thank you very much for the very clear and detailed explanation. My confusion is over.

  • Like 2
  • Haha 1

I've not participated much here, since I feel there are more knowledgeable folks here already, but a few quick (?) opinions/experiences:

  • I would love to have an easy way to migrate changes for fields and templates between environments, and version control all of that.
  • I've had cases where I've made a change, only to realize that it wasn't such a good idea (or better yet, have a client realize that) and off we go to manually undo said change. Sometimes in quite a bit of hurry. These are among the situations in which an easy rollback feature would be highly appreciated.
  • I do like making changes via ProcessWire's UI, but at the same time I strongly dislike having to do the exact same thing more than once. Once is fun, but having to redo things (especially from memory, and potentially multiple times) is definitely not what I'd like to spend my time doing.
  • I've worked on both solo projects, and projects with a relatively big team. While versioning schema and easily switching between different versions is IMHO already very useful for solo projects, it becomes — as was nicely explained by MoritzLost earlier — near must-have when you're working with a team, switching between branches, participating in code reviews, etc.
  • I'll be the first one to admit that my memory is nowhere near impeccable. Just today I worked on a project I last worked on Friday — four days ago! — and couldn't for the life of me remember exactly what I'd done to the schema and why. Now imagine having to remember why something was set a specific way years ago, and whether altering it will result in issues downstream. Also, what if it was done by someone else, who no longer works on your team...

Something I might add is that, at least in my case, large rewrites etc. often mean that new code is no longer compatible with old data structures. For me it's pretty rare for these things to be strictly tied to one part of the site, or to new templates/fields only. Unless both you and the client are happy to maintain two sets of everything, possibly for extended periods of time, that's going to be a difficult task to handle without some type of automation, especially if/when downtime is not an option.

Anyway, I guess the lion's share of this discussion boils down to the type of projects we typically work on, and of course different experiences and preferences.

As for the solutions we've been presented with:

  • I've personally been enjoying module management via Composer. Not only does this make it possible to version control things and keep environments in sync, it also makes deploying updates a breeze. As I've said before, in my opinion the biggest issue here is that not all modules are installable this way, but that's an issue that can be solved (in more than one way).
  • While I think I understand what MoritzLost in particular has been saying about template/field definitions, personally I'm mostly happy with well defined migrations. In my opinion the work Bernhard has put in this area is superb, and definitely a promising route to explore further.

One thing I'd like to hear more about is how other systems with so-called declarative config handle actual data. Some of you have made it sound very easy, so is there an obvious solution that I'm missing, or does it just mean that data is dropped (or alternatively left somewhere, unseen and unused) when the schema updates?

Full disclosure: I also work on WordPress projects where "custom fields" are managed via ACF + ACF Composer and "custom post types" via Extended CPTs + Poet. Said tools make it easy to define and deploy schema updates, but there's no out-of-the-box method for migrating data from one schema version to another (that I'm aware of). And this is one of the reasons why I think migrations sometimes make more sense; at least they can be written in a way that allows them to be reverted without data loss.
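To make concrete what "reverted without data loss" can look like, here is a toy sketch against an in-memory schema (illustrative only, not any particular framework's API): a rename migration carries both directions, so rolling back restores the old schema and the column data travels along instead of being dropped.

```python
# Toy reversible migration: a rename keeps the data, in both directions.
# The "table" here is just a dict mapping field names to their stored values.
def rename_field(table, old_name, new_name):
    table[new_name] = table.pop(old_name)

class RenameDateMigration:
    """Hypothetical migration renaming 'date' to 'post_date'."""

    def up(self, table):
        rename_field(table, "date", "post_date")

    def down(self, table):
        rename_field(table, "post_date", "date")

table = {"date": ["2022-01-01", "2022-02-01"]}
m = RenameDateMigration()
m.up(table)    # schema changes, data survives under the new name
m.down(table)  # rollback restores the old schema, still no data loss
```

A delete-plus-create pair, by contrast, has no lossless `down`: once the old column is dropped, its data is gone.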

  • Like 10

4 hours ago, MoritzLost said:

Keep in mind that the new 'blog article' entry type my colleague created with all its fields and settings is now available in my installation, since they are included in the config that's committed in the branch.

Thanks @MoritzLost for the detailed post. One thing I don't understand and am hoping you might explain is how Craft handles field renaming within the project config file. Do the config files refer to fields by ID, name, or something else? It seems like IDs couldn't be used in the config because if the IDs auto-increment as fields are added then they wouldn't be consistent between installations. But if names are used instead of IDs then how is it declared in the config that, say, existing field "date" was renamed to "post_date", versus field "date" was deleted and a new field "post_date" was created? Because there's an important difference there in terms of whether data in the database for "date" is dropped or a table is renamed.

  • Like 4

On 2/1/2022 at 2:18 AM, MoritzLost said:

The difficulty comes from trying to use migrations, so a stream of changes, instead of declarative config. You want a config that describes the entire system so it can be built from scratch if necessary, not just a set of changes to go from one build to another. See below for details.

Some people want continuous upgrades to complete sites, but the scenario I'm facing for the first time is potentially identical, simultaneous upgrades to multiple sites with different data, on an extensible, modular level.

I've been thinking along the lines of having a method to build a full state, but whenever any objects in that state change, having some way to track what's changed since the last build.

Building the full state looks to be relatively easy; specify what fields, templates, and pages are required, and in what order to satisfy dependencies, and then dump JSON files with definitions of all the objects required.
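A minimal sketch of that dump step (pure illustration; the object names and definition shapes are made up, not any ProcessWire API): a topological sort orders the objects so dependencies come first, then each definition is serialized to JSON.

```python
import json
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical object definitions: name -> (dependencies, definition dict)
objects = {
    "field:title":      ([], {"type": "text", "label": "Title"}),
    "field:body":       ([], {"type": "textarea", "label": "Body"}),
    "template:article": (["field:title", "field:body"], {"fields": ["title", "body"]}),
    "page:blog":        (["template:article"], {"template": "article", "path": "/blog/"}),
}

def dump_state(objects):
    """Return (name, json_string) pairs in dependency-satisfying order."""
    graph = {name: deps for name, (deps, _) in objects.items()}
    order = TopologicalSorter(graph).static_order()  # dependencies come first
    return [(name, json.dumps(objects[name][1], sort_keys=True)) for name in order]

for name, blob in dump_state(objects):
    print(name, blob)
```

Each pair would then be written out as one JSON file; rebuilding a site means replaying the list in order.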

Handling changes since the initial build in a reversible way is more of a challenge.

I think if I did want reproducible upgrades to whole sites, that would be easily achievable by simply specifying the module dependencies in a configuration file, as they would encapsulate all the fields, templates, and pages required for a given site configuration.

I'm moving towards the idea that everything other than the core would basically end up being a module, which in some cases may be responsible for nothing more than maintaining a set of fields, templates, and possibly pages (e.g. for lists) for a specific task.

Existing module dependency capabilities mean I can handle if a module depends on existing fields, as I can simply make it depend on the module that installs and maintains those fields.

E.g., module 1 installs a base set of fields, templates, and pages.

Module 2 requires some of the object data installed by module 1 but extends on that with additional functionality including its own fields, templates, and possibly pages.

Module 3 also requires data from module 1 and extends on it in a different way. 

All sites will have module 1, but will have either module 2 or 3 but not both.

I don't want or need to migrate an entire site, as that would get too messy if I start deploying a number of similar sites, but each with unique user data and sub-components.

I still want to be able to use the UI though for rapid prototyping, and then build a deployment configuration once I've tested things out.

Obviously, changing things via the UI doesn't automatically add dependencies to a module, but that's fairly trivial to achieve.

5 hours ago, Robin S said:

Because there's an important difference there in terms of whether data in the database for "date" is dropped or a table is renamed.

Microsoft's Entity Framework Core specifically has an issue with this. It can generate automatic migrations with schema changes, but a renamed field or class results in a drop and create rather than a rename, and their documentation recommends manually editing auto-generated migrations to deal with this. Obviously this problem isn't unique to ProcessWire by any means.

The way things work there is that class definitions are complete, self-contained definitions of the schema, but separate migration files handle getting to that state if the current schema doesn't match the class definitions. Although .NET is a different platform, I wonder whether the way Entity Framework handles migrations could provide some insight into how to build a workable system for ProcessWire?
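The rename ambiguity behind that recommendation can be shown with a toy schema differ (illustrative only, not EF Core's actual algorithm): when fields are identified by name alone, a rename is indistinguishable from a delete plus a create, so an automatic differ emits the destructive pair and a human has to rewrite it as a rename.

```python
def diff_by_name(old, new):
    """Naive schema diff keyed on field names alone (name -> type)."""
    ops = []
    for name in old:
        if name not in new:
            ops.append(("drop", name))    # data in this column is lost
    for name in new:
        if name not in old:
            ops.append(("create", name))
    return ops

# Renaming 'date' to 'post_date' looks like a destructive drop + create:
old = {"date": "datetime", "title": "text"}
new = {"post_date": "datetime", "title": "text"}
print(diff_by_name(old, new))  # [('drop', 'date'), ('create', 'post_date')]
```

Any differ with only names to go on hits this wall, which is why stable identifiers (or hand-edited migrations) are needed to express a rename safely.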

  • Like 1

8 hours ago, horst said:

Just trying to understand this in regard to PW fields and templates: would this go in a (simplified) direction like automatically writing down *ALL* field and template config into a file (or two files) for the export, and on the import, first wiping out all existing (current) config and restoring/building the given state from those one or two files?

Have a look at Bernhard's video in the first post of this thread which presents a proof of concept YAML recorder. This records all fields/templates to a declarative YAML file once you make changes through the admin UI to either a field or template. So you will always get the current state of all fields/templates as a version controllable file. That could be imported back once someone writes the logic for the import process.

That YAML recorder really is a great first step. But the fields/templates config alone does not represent the full state of a site's structure. We'd also need to record the state of permissions, roles, and modules, and later create/restore them on import. @bernhard's RockMigrations module already has createPermission and createRole methods, and so does the PW API. Modules can be installed/removed through the PW modules API. So importing the recorded changes should be possible.

The recorder and importer are the key features needed for version controlling application structure. Adding fields/templates/permissions/roles/modules through code like with RockMigrations would be an added benefit for developers who don't like using the admin UI.

 

  • Like 3

5 hours ago, gebeer said:

The recorder and importer are the key features needed for version controlling application structure. Adding fields/templates/permissions/roles/modules through code like with RockMigrations would be an added benefit for developers who don't like using the admin UI.

IMHO migrations are the key feature needed for version controlling your site. A recorder and importer are an added benefit for people who are too lazy to write any lines of code (which is actually copy and paste, and much preferable - think of $rm->renameField('from', 'to') instead of deleting the old field and creating a new one when using a config like mentioned above). But I get that nobody wants to hear that.

5 hours ago, Kiwi Chris said:

The way things work there is that class definitions are complete, self-contained definitions of the schema, but separate migration files handle getting to that state if the current schema doesn't match the class definitions. Although .NET is a different platform, I wonder whether the way Entity Framework handles migrations could provide some insight into how to build a workable system for ProcessWire?

Ever tried Lostkobrakai's migrations module?

I'm out of this discussion...


11 minutes ago, bernhard said:

IMHO migrations are the key feature needed for version controlling your site. A recorder and importer are an added benefit for people who are too lazy to write any lines of code (which is actually copy and paste, and much preferable - think of $rm->renameField('from', 'to') instead of deleting the old field and creating a new one when using a config like mentioned above). But I get that nobody wants to hear that.

With coded migrations you alter the structure of the application and it is great that your module provides this. But I think we should respect that not everyone wants to do this by code. Some would rather like to use the admin UI. And this is where the recorder comes in. Any changes are reflected in a declarative way. Even coded migrations would be reflected there. What I was trying to say is that the complete state of the application should be tracked in a declarative manner. How you get to that state, be it through coded migrations or through adding stuff through the UI, should be secondary and left up to the developer.

17 minutes ago, bernhard said:

I'm out of this discussion...

Please don't go just yet. I'm sure we can all benefit from your input.

  • Like 4

23 hours ago, horst said:

Just trying to understand this in regard to PW fields and templates: would this go in a (simplified) direction like automatically writing down *ALL* field and template config into a file (or two files) for the export, and on the import, first wiping out all existing (current) config and restoring/building the given state from those one or two files?

Yes, writing the config should always dump everything. Much easier than keeping track of changes. Of course, under the hood the actual implementation could optimize that further, for example by only writing files that have changed to reduce disk i/o. But conceptually, the config should always include the full config for the current system state.
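The write-only-changed-files optimization mentioned here can be sketched in a few lines (an illustration of the idea, using a content hash for dirty checking; any comparison scheme would do):

```python
import hashlib
from pathlib import Path

def write_if_changed(path: Path, content: str) -> bool:
    """Write content only if it differs from what's on disk; return True if written."""
    new_hash = hashlib.sha256(content.encode()).hexdigest()
    if path.exists():
        old_hash = hashlib.sha256(path.read_bytes()).hexdigest()
        if old_hash == new_hash:
            return False  # unchanged: skip the write, reduce disk i/o
    path.write_text(content)
    return True
```

Conceptually the exporter still dumps the full state every time; the hash check is just an invisible i/o optimization underneath.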

On import, you probably can't wipe out all fields since that would remove the database tables and wipe all content. When the config is applied, the appropriate process/class should read the config and apply any differences between the config and the database state to the database. I.e. create missing fields, remove fields that aren't in the config, apply all other settings etc. At least that's how Craft does it. Conceptually, the entire config is read in and the site is set to that state.

21 hours ago, teppo said:

One thing I'd like to hear more is that how do other systems with so-called declarative config handle actual data? Some of you have made it sound very easy, so is there an obvious solution that I'm missing, or does it just mean that data is dropped (or alternatively left somewhere, unseen and unused) when the schema updates?

In Craft, there's a clear separation of concerns between config and data. The config is tracked in version control, data isn't. That's not a problem if you don't do any 'real' content editing in your dev environment. For our projects, we usually set up a staging environment pretty early on and do all actual content editing there. Once the project is ready to go live, we either switch the live domain over to that staging environment (so staging is essentially promoted to production), or we install a new instance of the project and copy over the database and assets folder so we have separate production and staging environments.

For projects that are already live, you just wouldn't do any real content editing in the dev or staging environments. If you really need a large content update alongside a code update, you could use an export/import module or migrations. Migrations complement the declarative config, and most of the time we don't need them at all.

By the way, there's a discussion to be had about where you draw the line between config and content. For example, for a multilingual site, are the available languages configuration (only editable by the dev) or content (editors can create new languages)? There are many of those grey areas and I don't think this has a single right answer.

19 hours ago, Robin S said:

Thanks @MoritzLost for the detailed post. One thing I don't understand and am hoping you might explain is how Craft handles field renaming within the project config file. Do the config files refer to fields by ID, name, or something else? It seems like IDs couldn't be used in the config because if the IDs auto-increment as fields are added then they wouldn't be consistent between installations. But if names are used instead of IDs then how is it declared in the config that, say, existing field "date" was renamed to "post_date", versus field "date" was deleted and a new field "post_date" was created? Because there's an important difference there in terms of whether data in the database for "date" is dropped or a table is renamed.

Craft uses UUIDs in addition to the name. Each field also has an ID that's environment-specific, but that's an implementation detail you never have to interact with, since you can always refer to a field by name or UUID. So you can change a field handle while the UUID stays the same. This also prevents naming conflicts, since new UUIDs are pretty much guaranteed to be unique.
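That UUID scheme is straightforward to sketch (illustrative code, not Craft's implementation): with configs keyed by a stable UUID, a differ can distinguish a rename (same UUID, new handle) from a delete plus create (one UUID gone, another newly appeared).

```python
import uuid

def diff_by_uuid(old, new):
    """Diff two configs keyed by stable UUIDs; values are field handles."""
    ops = []
    for uid, handle in new.items():
        if uid not in old:
            ops.append(("create", handle))
        elif old[uid] != handle:
            ops.append(("rename", old[uid], handle))  # table rename, data kept
    for uid, handle in old.items():
        if uid not in new:
            ops.append(("delete", handle))
    return ops

date_uid, title_uid = str(uuid.uuid4()), str(uuid.uuid4())
old = {date_uid: "date", title_uid: "title"}
new = {date_uid: "post_date", title_uid: "title"}  # same UUID, new handle
print(diff_by_uuid(old, new))  # [('rename', 'date', 'post_date')]
```

This is exactly the ambiguity a name-keyed diff cannot resolve: the stable identifier is what lets the importer rename a table instead of dropping it.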

----

On a broader note regarding the difference between declarative config and migrations: it's important to distinguish between the 'conceptual' view (the config represents the entire site state) and implementation details. Take git as an example. Conceptually, each commit represents a snapshot of the entire codebase in a particular version. Of course, under the hood git doesn't just store a copy of the entire codebase for each commit, but optimizes that by having pointers to a database of blobs/objects. But that's an implementation detail, while the public API is built around treating each commit as a snapshot of the codebase, not a collection of diffs.

  • Like 2

11 hours ago, bernhard said:

think of $rm->renameField('from', 'to') instead of delete old field and create new field

That's exactly how migration files work with Entity Framework in .NET, but you've got to write that manually. I'm OK with that as it's not much to write.

For new, changed, or deleted fields, it builds the migration automatically from the current definition of system state based on what's changed.

2 hours ago, MoritzLost said:

In Craft, there's a clear separation of concerns between config and data. The config is tracked in version control, data isn't.

That's exactly what I'd want. The main situation in ProcessWire where tracking data can be useful is with pages used as input for list items.

2 hours ago, MoritzLost said:

On a broader note regarding the difference between declarative config and migrations

I think I want both; a full description of system state that can be used to install a new instance of a given configuration, but also migrations so that an existing installation can be synchronised with a specific state.

Maybe it's not the best approach, but that's similar to how Entity Framework on .Net works, and I work with that as well, so having a similar workflow to something I'm already using would keep things simple, but I understand there may be other solutions that are more efficient for other people.

One of the key points for me as well, is I want to be able to do this at the module level, in addition to or instead of the site level.


3 hours ago, MoritzLost said:

Yes, writing the config should always dump everything. Much easier than keeping track of changes. Of course, under the hood the actual implementation could optimize that further, for example by only writing files that have changed to reduce disk i/o. But conceptually, the config should always include the full config for the current system state.

On import, you probably can't wipe out all fields since that would remove the database tables and wipe all content. When the config is applied, the appropriate process/class should read the config and apply any differences between the config and the database state to the database. I.e. create missing fields, remove fields that aren't in the config, apply all other settings etc. At least that's how Craft does it. Conceptually, the entire config is read in and the site is set to that state.

Hmm, I do not get the difference between "you probably can't wipe out all fields since that would remove the database tables" and "remove fields that aren't in the config".

And what about the content in your previous example (switching between your working branch and reviewing a coworker's branch)? It seems that is not within the system's config dump, or is it?
"In Craft, there's a clear separation of concerns between config and data. The config is tracked in version control, data isn't." When your coworker has implemented a new blog section, aren't there any new pages with content in his branch then?

What comes to my mind with your example are mainly two things:

A) The simpler one is: in PW I can do this (switching complete systems back and forth for reviews) by simply switching the whole database instead of static config files; then I have everything that's needed for my coworker's branch, config AND content. And nothing is lost when I switch the DB back. So, where is the benefit of the proposed static config file system?

B) The second thing, which is where I really hope the benefit lies but which I haven't yet grasped: how does it handle migrating your dev work into the main branch, then another coworker's work, then the blog implementation, and so on? This cannot be done by adding missing things and removing from the main system whatever doesn't exist in a single branch's work. Or can it? How does this function?

 


Just wanted to add my 2 cents and say that a database migration system (like Ruby on Rails... they've had it perfected since 2005 or so) takes ProcessWire from being a CMS/CMF to something more web-application-framework-like, at least from my point of view. That's a defining feature the way I see it, and what I believe Bernhard is shooting for (I haven't experimented with RM yet).

Personally, I do everything by hand the way Ryan described (because I'm impatient and it's fast enough), combined with little one-off scripts that modify big chunks of data as needed, but that approach will fall apart when there are multiple developers involved, when syncing changes, or even when re-implementing your own changes on a production site that were originally done on a development site.

I do wonder if I would use a migrations feature if it were native to ProcessWire. Right now, I rarely even use Field/Template/Page Exports when making field/template/page changes from dev to production, but I definitely understand the use case (having worked with web application frameworks extensively). While having a database migration system is the more 'proper' and 12-factor-y way to do complex development, I don't personally view ProcessWire as a web application framework like Laravel and Rails.

There's something to be said about being able to throw ProcessWire around and experiment with things quickly. It has had a real impact on my productivity and solutions. Hand-writing every field or template added or changed would be tiring (although it would be optional). Having it auto-recorded like CraftCMS does would be interesting, and there have been attempts to do that.

Not sure where I'm going with this, but just some thoughts I felt like sharing.

  • Like 3

21 hours ago, Kiwi Chris said:

I think I want both; a full description of system state that can be used to install a new instance of a given configuration, but also migrations so that an existing installation can be synchronised with a specific state.

@Kiwi Chris Migrations have their place and I definitely wouldn't do without them. I think it's best if config and migrations complement each other.

21 hours ago, Kiwi Chris said:

One of the key points for me as well, is I want to be able to do this at the module level, in addition to or instead of the site level.

I think there needs to be a distinction between shared (community/open source) modules and site-specific modules. For Craft, this distinction is between plugins (external plugins installed via Composer and tracked in the project config) and modules (site-specific modules providing site-specific functionality). Both can provide config to be tracked in the project configuration. But they work slightly differently, and keeping those things separate makes it easier to talk about them.

20 hours ago, horst said:

Hmm, I do not get the difference between "you probably can't wipe out all fields since that would remove the database tables" and "remove fields that aren't in the config".

@horst I just meant that fields that already exist in the database and still exist in the config aren't wiped and recreated when applying the config (since that would wipe the data as well). The config always includes all fields (as well as entry types, settings, etc.) that exist on the site. So if I remove a field in dev, it's removed from the config. If I apply that config to other environments, the field is removed from those as well.

20 hours ago, horst said:

And what about the content in your previous example (switching between your working branch and reviewing a coworker's branch)? It seems that is not within the system's config dump, or is it?
"In Craft, there's a clear separation of concerns between config and data. The config is tracked in version control, data isn't." When your coworker has implemented a new blog section, aren't there any new pages with content in his branch then?

You don't need the content in version control. Local test environments only create data for testing purposes. So when I check out my colleague's PR, I will have all the entry types for the news blog etc., but no actual data. For quick testing I can just create a couple of blog posts manually. For larger projects, you can use a data seeder plugin, or a content migration that imports some data. We've done this for a project where an existing database was imported into a new system; there we could just run the import migration to seed development/staging environments. Admittedly, it's a tiny bit of additional work. But it's far easier than making sure you don't put garbage data into version control, figuring out how to merge diverging content branches, or dealing with assets. And I don't want content dumps muddying up my git commits anyway.

Once you start working this way, it's super relaxing not having to worry about creating 'real' content in your dev environment, being able to wipe out content on a whim, try something out etc. The 'real' content editing happens in the staging/production environment anyway.

20 hours ago, horst said:

A) The simpler one is: in PW I can do this (switching complete systems back and forth for reviews) by simply switching the whole database instead of static config files; then I have everything that's needed for my coworker's branch, config AND content. And nothing is lost when I switch the DB back. So, where is the benefit of the proposed static config file system?

How are you merging diverging branches from multiple people once the time comes to merge them? You can't really merge database dumps (and stay sane). Also, database dumps are terrible for version control, much too noisy to be readable in diffs. With a YAML config, I can just look at the diff view of the PR in GitHub and tell at a glance what changed in the config; I don't think you can do that with an SQL dump unless you're Cypher from The Matrix …

20 hours ago, horst said:

B) The second thing, which is where I really hope the benefit lies but which I haven't yet grasped: how does it handle migrating your dev work into the main branch, then another coworker's work, then the blog implementation, and so on? This cannot be done by adding missing things and removing from the main system whatever doesn't exist in a single branch's work. Or can it? How does this function?

The main branch is always the source of truth for the entire site state; the config includes all fields, entry types, general config settings, etc. Everyone working on a feature creates a new branch that modifies the config in some way – but those branches still include the entire site state, including the feature being worked on. So once you merge that feature into the main branch and deploy to staging/production, the system can 'interpolate' between the current state and the config. That is, it compares the config with the database state, then adjusts the database to match the config by creating fields that are in the config but not in the database, removing fields from the database that aren't in the config anymore, applying any settings changes from the config to the database, etc.
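That 'interpolation' can be sketched as a small diff-and-apply routine (a toy model of the idea, not Craft's code): the config is the target state, and applying it means executing only the differences against the current database state.

```python
def apply_config(config, db):
    """Mutate db (field name -> settings dict) so it matches the declarative config."""
    applied = []
    for name, settings in config.items():
        if name not in db:
            db[name] = dict(settings)   # create field missing from the database
            applied.append(("create", name))
        elif db[name] != settings:
            db[name] = dict(settings)   # apply changed settings, data untouched
            applied.append(("update", name))
    for name in list(db):
        if name not in config:
            del db[name]                # remove field absent from the config
            applied.append(("remove", name))
    return applied

config = {"title": {"type": "text", "required": True}, "intro": {"type": "textarea"}}
db = {"title": {"type": "text", "required": False}, "legacy": {"type": "text"}}
print(apply_config(config, db))
```

Note that fields present in both config and database are never dropped and recreated, which is what keeps existing content intact.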

Of course, there may be merge conflicts if some branches contain conflicting changes. In this case, after merging in the first branch, you'd get a merge conflict which prevents the PR from being merged. You resolve these like all regular merge conflicts in git, which is made easier since the config is just a bunch of YAML files. For simple conflicts, you can resolve those directly in the Github UI. For more complicated conflicts (which are very rare), you would do that locally by either rebasing or merging in the main branch.

  • Like 4
  • Thanks 2

Many thanks @MoritzLost for the detailed explanations (and for lowering my expectations).

This all makes sense to me, especially point B) with the merge conflicts. If I have understood correctly, it is not doable in a fully automated way (without interaction) in scenarios like this: developers A, B, and C start at the same time from the exact same state of the main branch (the branch of truth) to develop new features or modify existing things. Developer A renames two fields, deletes one field, and creates a new one; after two days his work is reviewed and merged into the main branch. When developer B or C now want to merge their branches after three or four days, they still have the two old field names and the deleted field in their branches. Some of that can or will lead to conflicts or inconsistency: the automated merge should not recreate the field developer A deleted, and so on. So, in conclusion, this automated, static, declarative update system also needs a lot of discipline, consultation, and interaction between the developers and with the merging/migration system. This implies that the larger the development team, the smaller the advantage of automation. Of course, the versioning through the YAML files etc. remains unaffected, regardless of how far the team scales. But yes, somewhat reduced expectations. It sounded here before as if one only had to lean back and everything would go by itself, fully automatically.

 


On 2/2/2022 at 4:51 PM, cst989 said:

I'm surprised @MarkE hasn't weighed in to talk more about their module linked in the OP - I think it's a really great concept that seems to tick a lot of the boxes being discussed. I just wish I had more time to try something like that out.

I'm definitely meaning to 'weigh in'. I think this is a really important topic - possibly the most important one for the future development of PW, maybe alongside the pagebuilder. However, I'm really buried in a complex app at the moment and don't have time to fully consider all the issues.

First off, regardless of the merits of my particular module, I do think that the help file explains some of the functionality that I think is important. In particular, the approach is completely declarative and UI-based - it neither requires coding nor a clean start (it can be installed in an existing system). I encountered a number of issues in building the module this way that others should be aware of - for example, that IDs can vary between the development and target environments - but I think any other approach reduces the scope of application.

As regards the status of ProcessDbMigrate, I have been using it quite successfully in a moderately complex live app (there are some bug fixes and improvements pending for the released version) but really wanted to try it out in the app referred to earlier, which has complex fieldtypes (e.g. my FieldtypeMeasurement) and many pro fields. I hit a snag with nested repeaters which needs attention once I have finished the app.

I don't see ProcessDbMigrate as the solution, however, but I do think it demonstrates a lot of the required functionality (it creates JSON rather than YAML files, but that's not a big difference). I see it as a partial prototype. Aside from snags like the one mentioned, I do not like the profusion of templates and fields which clutter up the admin.

As to a way forward, I think a collaborative development of requirements and a spec would help, followed by some agreement on who should build it and how. I also really think that a contribution to this discussion from @ryan before proceeding would be most helpful.

I'll try and return with some more detailed thoughts once I get this app done!

  • Like 5

7 hours ago, MarkE said:

the most important one for the future development of PW

+1

Well, we all know that Ryan is not interested in attracting hordes of developers to ProcessWire, and I agree, as I also prefer quality over quantity. However, making sure he keeps the community he already has is vital to ProcessWire. If his standpoint is something like "Guys, I am not much interested in what you think is one of the most important things for not losing you to other systems, but I am here to help make mods to the core when you need them," then that approach will hurt ProcessWire, I think.

 

  • Like 1
  • Confused 1

10 hours ago, MarkE said:

As to a way forward, I think a collaborative development of requirements and spec would help and then some agreement on who and how to build it.

+1001

And my first request is to start with TDD (Test Driven Development)! I can share the basics (or a starting point) for that; I have a small, comfortable-to-use setup ready. I would just need to write short documentation explaining how it works and how it should be used.

 

 

10 hours ago, MarkE said:

I also really think that a contribution to this discussion from @ryan

 

10 hours ago, MarkE said:

before proceeding would be most helpful.

+1

 

10 hours ago, MarkE said:

I think this is a really important topic - possibly the most important one for the future development of PW

-10

 

  • Like 3

Just to give my two cents:

As a solo developer I think it would be nice to have automated generation of config files, so that templates and fields could be version-controlled in Git, but I can absolutely understand if Ryan doesn't find it necessary for his own workflow as a solo developer, especially with his existing tools for exporting/importing fields, templates and pages.

For teams working on a project, good version control would be really helpful for coordination, but as a solo developer I enjoy creating and configuring templates and fields in the ProcessWire backend. The last thing I want is having to write blueprints and migrations for simple configurations. For example, the CMS Kirby uses YAML blueprints for its templates and fields, and that is of course great for version control, but I find it slows down development, because you have to study the reference documentation instead of just creating a template or field in the backend. In Kirby it is part of the core concept, but in ProcessWire it is not, and I hope it stays that way.

If these config files for templates and fields were automatically generated in YAML or JSON somewhere in the site folder, where I could version control them, that would be nice. But personally, as a solo developer, I don't want to waste my time writing configuration files, migrations and composer dependencies.
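As a purely hypothetical sketch of what such an auto-generated file could look like (ProcessWire has no official schema for this; the path, field names and keys below are invented for illustration):

```yaml
# site/config/fields.yaml – hypothetical auto-generated definitions,
# written by the CMS after edits in the backend, read-only for humans
fields:
  headline:
    type: FieldtypeText
    label: Headline
    required: true
  body:
    type: FieldtypeTextarea
    label: Body text
templates:
  article:
    fields: [headline, body]
```

A file like this could be committed alongside the code, so backend changes become reviewable diffs without anyone having to write it by hand.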

  • Like 3

Quote

making sure he keeps the current community he already has is vital to ProcessWire . . . .

Remember the years before and after the Evo exodus to ProcessWire? The forum was vibrant and full of starting coders and beginners, and for a long time it was praised for replying fast and helping them out.

ProcessWire has already lost almost all the starting coders and beginners we used to have in the past.

I asked the forum a few times if there is any interest in getting them back, but without any reaction.
A clear indication.

ProcessWire is already a fantastic and full-grown product, so spend less time adding new xyz features every week and instead spend more time marketing the potential of this great product.

It's no secret that this is not going to happen, because with ProcessWire, Ryan is only interested in coding, and he is followed by a lot of like-minded coders in the forum.

Another suggestion to make the community grow is to split ProcessWire into two versions:
1) a version for starting coders and beginners, like it was in the beginning
2) a version for experienced coders, like we have today

Last but not least: maybe I should not complain about anything and just take ProcessWire for what it is and be happy with it.

 

  • Like 1

15 hours ago, horst said:

But yes, somewhat reduced expectations. It sounded here before as if one must only lean back and everything goes by itself, fully automatically.

@horst Well then, allow me to raise your expectations again, because your description is not how it works.

In your scenario, both developers could merge their branches with zero conflicts, and as a result the main branch would incorporate all the changes from both branches. They don't even need to know what the other one is doing, and nobody needs to constantly keep up with changes from other branches / team members. That's because git is really smart in the way it performs merges. Basically, you can view every branch as a set of changes applied to the existing files. As long as those changes don't conflict, you can merge in multiple PRs back to back without any manual conflict resolution.

So most of the time, you can just lean back and everything works. The only time you get a merge conflict that needs to be resolved is if there are actual conflicts that require a decision. For example, if developer A renames some_old_field to unicorns and developer B renames the same field to rainbows, that would result in a merge conflict, because a single field can't have multiple names. So someone needs to decide between unicorns and rainbows for the field name. In other words, you don't have any overhead caused by git itself – git acts as a safety net by warning you about merge conflicts so you can fix them.

In a well-engineered system with good separation of concerns, it's rare to have non-trivial merge conflicts, since it's unlikely that two people working on separate features will need to touch the exact same files. And most of the time, if you do get a merge conflict, it's trivial to resolve – for example, if two PRs add a new variable to our SCSS variables in the same place. That would be a merge conflict, but a trivial one, since you know you want both changes. If you know git well, you can resolve those in under a minute, oftentimes with a single command (by specifying the appropriate merge strategy for the situation).
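To see that back-to-back merges of non-conflicting branches need no manual resolution, here is a minimal throwaway demo you can paste into a terminal (file names are made up; it only assumes git is installed):

```shell
set -e
cd "$(mktemp -d)"                    # throwaway repo, nothing to clean up
git init -q
git config user.email demo@example.com
git config user.name Demo
echo "title: Home" > home.yaml
git add . && git commit -qm "initial"
main=$(git symbolic-ref --short HEAD)

git checkout -qb feature-a           # developer A adds one file
echo "headline: text" > article.yaml
git add . && git commit -qm "A: article template"

git checkout -q "$main"
git checkout -qb feature-b           # developer B adds another file
echo "teaser: textarea" > news.yaml
git add . && git commit -qm "B: news template"

git checkout -q "$main"
git merge -q --no-edit feature-a     # merges cleanly (fast-forward)
git merge -q --no-edit feature-b     # also merges cleanly – no interaction needed
ls                                   # article.yaml home.yaml news.yaml
```

Neither developer had to know about the other's branch; git combines both change sets on the main branch automatically.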

Quote

This implies that the larger the development team, the smaller the advantage of automation.

It's the exact opposite – the larger the development team, the more you will benefit from this streamlined workflow. Everyone can focus on different features and merge their work in with the minimum amount of effort required by either them or other developers to keep in sync with each other.

Regarding all the git stuff, I recommend the Pro Git book (available for free), a great resource for understanding how git works under the hood and discovering some of the lesser-known features and power tools. Reading the book front to back helped me a lot in establishing our feature-branch workflow (for Craft projects) at work, utilizing git to work far more effectively, solving issues with simple commands instead of the xkcd 1597 route, and much more.

For branching and merging in particular, check out the following chapters:

  • Like 4
  • Thanks 3

On 2/4/2022 at 12:21 PM, MoritzLost said:

allow me to raise your expectations again

Great!

OMG, when reading your last answer I finally got that git is what is used for (and responsible for) the merging magic. OK, that makes it appear in another light to me.

The whole time I had assumed that there must (at least additionally) be some sort of importer in the CMS which has to deal with all the heavy stuff (conflicts, edge cases, etc.). But now it's all good.

  • Like 1
  • Haha 1

Just in case... have a look at Git worktrees.

It sounds weird and complicated (especially during setup), yet it could be another game changer. No stashing, no deleting, no workflows like the one from xkcd. Maybe even easier than plain branch-switching, to be honest, while still using branches.
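A minimal sketch of the idea (paths and branch name invented): a worktree checks out a second branch into its own directory, so nothing in your current checkout needs to be stashed or switched:

```shell
set -e
base=$(mktemp -d)                    # throwaway location for the demo
git init -q "$base/project"
cd "$base/project"
git config user.email demo@example.com
git config user.name Demo
echo "v1" > file.txt
git add . && git commit -qm "initial"

# Materialize a new branch as a separate working directory:
git worktree add -b hotfix "$base/hotfix"

echo "fix" >> "$base/hotfix/file.txt"   # work on the hotfix over there...
ls "$base/hotfix"                       # ...while this checkout stays untouched
git worktree list                       # shows both checkouts
```

Both directories share one repository, so a commit made in either is immediately visible to the other.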

I keep this here... as it's entertaining and fun...

Looking through that channel, you might find interesting things while searching for "git", "workflow" or "bug". Sure, it's Vim-based, yet most of it is possible in VSCode (and others) too, I guess. Either way, it's super fun to watch and a good way to get ideas and insights.

  • Like 2

@wbmnfktr I've never really gotten the point of worktrees … and every example I've read is super theoretical. What are you using them for day-to-day? The trouble I have is that most tools can't properly deal with a copy of the project in a subdirectory … for example, PHP files in a sub-directory won't be autoloaded unless I adjust my composer.json, at which point it's more hassle than promised. Maybe I'm just used to branches. Once you're used to switching branches it takes mere seconds, so it's hard to imagine worktrees being faster still. Or maybe worktrees have more uses if you're working with a compiled language, where switching branches and recompiling might take a lot more time …

