horst

PW-Moderators
  • Posts

    4,077
  • Joined

  • Last visited

  • Days Won

    87

Everything posted by horst

  1. Many thanks @MoritzLost for the detailed explanations (and for lowering my expectations). This all makes sense to me, especially point B) about the merge conflicts. If I got that right, it is not doable in a fully automated way (without interaction) in scenarios like this: developers A, B and C start at the same time from the exact same state of the main branch (the branch of truth) to develop new features or modify existing things. Developer A renames two fields, deletes one field and creates a new one; after two days his work is reviewed and merged into the main branch. When developer B or C now want to merge their branches after three or four days, the two old field names and the deleted field still exist in their branches. Some of that can or will lead to conflicts or inconsistency: the automated merge must not recreate the field that developer A deleted, and so on. So, in conclusion, this automated static declarative update system also needs a lot of discipline, consultation and interaction between the developers and with the merging / migration system. This implies that the larger the development team, the smaller the advantage of the automation. Of course the versioning through the YAML files etc. remains unaffected, regardless of how far the team scales. But yes, somewhat reduced expectations: it sounded here before as if one only had to lean back and everything would run by itself, fully automatically.
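     To make that scenario concrete, here is a minimal sketch of how the two branches' declarative config could diverge (the fields.yaml layout and the field names are hypothetical placeholders, not any particular tool's actual schema):

     ```yaml
     # main branch, after developer A's merge:
     fields:
       headline_new:      # renamed from "headline" by developer A
         type: text
       intro:
         type: textarea
       # "subtitle" was deleted by developer A

     # developer B's branch, started before A's merge:
     fields:
       headline:          # still the old name
         type: text
       subtitle:          # still present; a naive sync would recreate it
         type: text
       intro:
         type: textarea
     ```

     A purely mechanical merge cannot know whether "subtitle" in B's branch is a new field or a leftover that A deleted, which is exactly why the interaction/discipline mentioned above is needed.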
  2. Hmm, I do not get the difference between "you probably can't wipe out all fields since that would remove the database tables" and "remove fields that aren't in the config". And what about the content in your previous example (switching between your working branch and reviewing a coworker's branch)? It seems that is not within the system's config dump, or is it? "In Craft, there's a clear separation of concerns between config and data. The config is tracked in version control, data isn't." When your coworker has implemented a new blog section, aren't there any new pages with content in his branch then? Two things mainly come to mind with your example: A) The simpler one: in PW I can do this (switching complete systems back and forth for reviews) by simply switching the whole database instead of static config files; then I have everything that's needed for my coworker's branch, config AND content, and nothing is lost when I switch the DB back. So where is the benefit of the proposed static config file system? B) The second thing, which is where I really hope the benefit lies, but which I haven't grasped yet: how does it handle a migration of your dev work into the main branch, then one from another coworker, then the blog implementation, and so on? This cannot be done just by adding missing things and removing from the main system whatever does not exist in a single branch's work. Or can it? How does this function?
  3. Just an attempt to understand this with regard to PW fields and templates: would this go in a (still simplified) direction like automatically writing *ALL* field and template config into one file (or two files) for the export, and on import first wiping out all existing (current) config and then restoring / building the given state from those one or two files?
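     A rough, untested sketch of the export half of that idea with PW's API (it assumes Field::getExportData() behaves as in current cores, and the dump path is made up; treat it as an illustration, not a finished migration tool):

     ```php
     <?php namespace ProcessWire;
     // Dump the config of ALL fields into one JSON file (hedged sketch).
     include('./index.php');

     $export = [];
     foreach($fields as $field) {
         // getExportData() returns the array the admin field export is built from
         $export[$field->name] = $field->getExportData();
     }
     file_put_contents('site/assets/fields-dump.json',
         json_encode($export, JSON_PRETTY_PRINT));
     echo "Exported " . count($export) . " fields\n";
     ```

     The import half would then have to diff this file against the live system, which is where the wipe-vs-merge questions above start.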
  4. Hi @kongondo, it seems to me you nailed it down pretty precisely! #1 is about prototyping. #2 is a simple one-way deployment from dev to production, or dev to stage to production, but based on automatically exported text files from PW. #3 is about a deployment fully automated from static files that somehow describe a start and an end state. This should be able to be rolled forward and back, even in a non-linear order. Number 3 is what you want in a medium or large team of developers: it should enable them to work with different states, much like parallel git branches. And, um, that's only how I understand it. EDIT: and yes, it is confusing to me too, because it seems that no one really has these differences (which only reflect the different workflows and personal preferences) on the radar.
  5. I'm not sure if I correctly understand what you want to track and how, but there is a module that may provide what you are looking for: https://processwire.com/modules/process-changelog/ Besides a paginated log page in the admin, it also comes with an additional RSS feature:
  6. FIELDS first! TEMPLATES second! CONTENT last! Even if you want to add two new templates at once with a family relation like parent <> child, you paste the JSON in on the import side and it tells you about the conflict after inspecting it. The solution is simply to run the import twice. That is already supported by the importer.
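     To illustrate the parent <> child case, here is a minimal sketch of what such an export could look like (the template names and JSON keys are simplified placeholders, not PW's exact export schema). On the first run "album" cannot resolve "track" because it doesn't exist yet; the second run completes the family relation:

     ```json
     {
       "album": {
         "fieldgroupFields": ["title", "images"],
         "childTemplates": ["track"]
       },
       "track": {
         "fieldgroupFields": ["title", "audio"],
         "parentTemplates": ["album"]
       }
     }
     ```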
  7. You definitely should! Have a look at the screenshots, and it's 100% safe too:
  8. Not tested, but what about the PHP function trim()? I would trim <p> and </p> from your field content. (Sorry, on mobile.)
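     One caveat to that idea: trim()'s second argument is a character list, not a string, so trim($value, '</p>') would also eat stray p, <, > and / characters at the edges. A safer sketch for removing exactly one wrapping <p>…</p> pair (stripWrappingP is a made-up helper name):

     ```php
     <?php
     // Strip one leading <p> and one trailing </p>, nothing else.
     function stripWrappingP(string $value): string {
         return preg_replace('~^\s*<p>|</p>\s*$~i', '', $value);
     }

     echo stripWrappingP('<p>Hello <b>world</b></p>'); // Hello <b>world</b>
     ```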
  9. https://www.google.com/search?q=site%3Aprocesswire.com%2Ftalk+RockSEO doesn't show much, but Rock(XYZ) indicates that it is a module from @bernhard (Bernhard Baumrock). EDIT: Oh, @Guy Incognito has beaten me to it!
  10. +1 +1 Could be a good first (or next) step towards automated file-based migrations with a rollback feature!
  11. Which PW version is this site running on? Or was it running on 10.01.2022?
  12. @donatas Hi, can you share your code for line 442?
  13. How do you process the input? Is $sanitizer in use? If so, which sanitizer method? You can debug the input directly before any processing with the raw $_POST or $_GET superglobals of PHP to see if more than the 246 chars arrive. (And then go further step by step.)
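     A minimal sketch of that first debugging step, assuming the form field is named "comments" (a hypothetical name) and checking the raw length before any sanitizer runs:

     ```php
     <?php
     // Return the raw, unsanitized length of a submitted value (hedged sketch).
     function rawInputLength(array $request, string $key): int {
         return isset($request[$key]) ? strlen((string) $request[$key]) : 0;
     }

     // In a real form handler you would pass $_POST; here we simulate a
     // 300-char submission to see whether more than 246 chars arrive.
     $fakePost = ['comments' => str_repeat('x', 300)];
     echo rawInputLength($fakePost, 'comments'); // 300
     ```

     If the raw length is already 246 here, the truncation happens before PHP (form markup, maxlength attribute, column size); if it is 300, a sanitizer or DB column is cutting it.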
  14. ATM I simply use PW's manual export <> import function for field and template settings. Every time I have to alter field and/or template settings, I write the timestamp down on a pad of paper when I start, and until deployment I collect a list of all fields I touch (on the left side) and all template names (on the right side).* When I'm ready to deploy, I may or may not shut the live system down for a short time, depending on the amount of changes; then I select all needed fields in the DEV's export selector, copy the export JSON, drop it into the target's field import form and proceed. The next step is the same for template changes. There is no need to shut down a live site for long. With my largest deployments it was closed for one hour at most, but most of the time I do these DB migrations without any maintenance mode, because most of my sites use ProCache. If it really mattered, I could prime the cache by crawling the site once before starting to deploy. It is not comfortable, and also not the best I can think of, but it's not nearly as uncomfortable as described by @bernhard. Using these functions is also not nearly as error-prone as manually writing EVERYTHING into one schema or another for every field and template change. Noting everything by hand is not tolerant of typos; I might forget to note a value, a change to a template, etc. All this is avoided by simply copy-&-pasting ready-generated JSON directly from PW: no typos, nothing forgotten, nothing added. Not perfect, but for me much faster, easier and safer than writing everything down again by hand. PS: * In case you are not sure whether you have written everything down on paper, you can generously include additional fields or templates in the export. It is better to have more (or even all) than too few: all unchanged settings will be recognized by PW during import and not processed further.
  15. Hi @Didjee, unfortunately you are right: my old code no longer works. I have dived into the issue and filed a GitHub issue with a possible fix, so hopefully Ryan will soon find time to look at it. If you like, as a workaround you can place my new suggestion into the file wire/core/Pageimage.php instead of the current webp() function there:

      /**
       * Get WebP "extra" version of this Pageimage
       *
       * @return PagefileExtra
       * @since 3.0.132
       *
       */
      public function webp() {
          $webp = $this->extras('webp');
          if(!$webp) {
              $webp = new PagefileExtra($this, 'webp');
              $webp->setArray($this->wire('config')->webpOptions);
              $this->extras('webp', $webp);
              $webp->addHookAfter('create', $this, 'hookWebpCreate');
          }
          // recognize changes of the JPEG / PNG variation via timestamp comparison
          if($webp && is_readable($webp->filename()) && filemtime($webp->filename()) < filemtime($this->filename)) {
              $webp->create();
          }
          return $webp;
      }
  16. Hey @benbyf, I believe I'm not of much help; mine seems to be just another dinosaur setup. IDE: NuSphere PhpED. PW modules I always use: Admin DEV Mode Colors | ALIF - Admin Links In Frontend | Cronjob Database Backup | Process Database Backups | Login Notifier | User Activity | ProCache. I develop on local https hosts (via Laragon on Windows). Where possible, I use staging on subdomains of the client's LIVE server, as this is mostly the exact same environment setup as LIVE. Sync & migration between all installations is done with Beyond Compare and the PW Database Backup module, and/or the remote-files-on-demand hook from Ryan. Only very rarely do I work directly on STAGE via PhpED (SSH, SFTP) and afterwards pull it down to local. That's it.
  17. @DrQuincy what is this page doing that takes a lot of time? If there is a loop that iterates over a lot of items, and you are able to implement an optional GET param with the last processed item identifier, there is a good chance that this is all you need to make it work. You would check whether a start identifier was given in the URL; if not, start with the first item, otherwise with item #n from $input->get->myidentifiername. When starting the script, store a timestamp, and in each loop iteration compare it with the current timestamp. When it reaches a point greater than or equal to, say, 100 seconds, do a redirect call to the same page, but with the identifier of the last processed loop item:

      ...
      if(time() >= $starttimestamp + 100) {
          $session->redirect($page->url . '?itemidentifiername=' . $lastProcessedItemID);
      }
      ...

      Or, much easier: have you tested whether the 120 second time limit can be reset from within the script? (Calling set_time_limit(120) from within a loop can make it run endlessly on some server configurations.)
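     Putting those pieces together, a minimal untested sketch of the whole pattern (the template name "myitem", the GET param "startitem" and the per-item work are all hypothetical placeholders to adapt):

     ```php
     <?php namespace ProcessWire;
     // Resumable long-running page: process items until ~100s have passed,
     // then redirect to itself before the server's 120s limit is reached.
     $startItem = (int) $input->get->startitem; // 0 on the first call
     $started   = time();

     $items = $pages->find("template=myitem, sort=id, start=$startItem, limit=5000");
     foreach($items as $i => $item) {
         // ... do the actual long-running work for $item here ...

         // hand over to the next request before the time limit hits
         if(time() - $started >= 100) {
             $session->redirect($page->url . '?startitem=' . ($startItem + $i + 1));
         }
     }
     echo 'All items processed.';
     ```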
  18. @Didjee I think this must be an issue with the core, but I'm not able to investigate further ATM. Maybe there is a possible workaround, or at least some (micro) optimization, for your code example, leaving out the subfile class(es) from the chain. If you have enabled $config->webpOptions['useSrcExt'], you can globally enable the creation of webP variations in the base core image engines, directly when creating any jpeg or png variation, by setting $config->imageSizerOptions['webpAdd'] to true. But this can result in a lot of unwanted webP files from intermediate image variations (that are never shown on the front end, but are only steps in the building chain)! Therefore it is better to leave this setting globally at false, but pass an individual options array with it enabled into any image sizer method where appropriate (width(123, $options), height(123, $options), size(123, 456, $options)). Example:

      <?php
      // this creates a jpeg AND a webp variation with 900px for the 2x parts
      $imageX2 = $page->image->getCrop('default')->width(900, ['webpAdd' => true]);

      // this reduces the 2x variation to half size for 1x (jpeg AND webp at once)
      $imageX1 = $imageX2->width(450, ['webpAdd' => true]);

      // now, as you have set $config->webpOptions['useSrcExt'] to true,
      // every webp variation simply gets a trailing ".webp" appended to the jpeg URL:
      ?>
      <source type="image/webp" srcset="<?= $imageX1->url . '.webp' ?> 1x, <?= $imageX2->url . '.webp' ?> 2x">
      <source type="image/jpeg" srcset="<?= $imageX1->url ?> 1x, <?= $imageX2->url ?> 2x">

      Given that the issue you encountered is located in the files / images submodule for webp, the auto refresh should now be working, as you use the older (but clumsy) method in the base image sizer engine. I have not tested this for quite a few (master) versions, but hopefully it is still true. PS: Warm greetings to the neighboring town!
  19. I do not understand. I asked for the field type of mitglied_kategorie.
  20. How do you add the category to the member pages? Through which type of field? Options Select, Page Reference, ...?
  21. @Abe Cube please use the code input to post code. Example:

      <?php namespace ProcessWire;

      class TabExtension extends WireData implements Module {

          public static function getModuleInfo() {
              return [
                  'title' => 'Foo test module',
                  'version' => 1,
                  'summary' => 'Display message when pages are saved',
                  'autoload' => true,
              ];
          }

          public function ready() {
              $this->addHookAfter('ProcessPageEdit::buildForm', $this, 'addButtons');
          }

          public function addButtons($event) {
              $page = $event->object->getPage();
              if($page->template == "bewerbung") {
                  $form = $event->return;
                  $inputfields = new InputfieldWrapper();
                  $tab = new InputfieldWrapper();
                  $tab->attr('title', 'Settings');
                  $tab->attr('class', 'WireTab');
                  $markup = $this->modules->get('InputfieldMarkup');
                  $markup->label = 'Settings';
                  $markup->value = '<p>Just a placeholder for some inputfields.</p>';
                  $tab->add($markup);
                  $inputfields->add($tab);
                  $form->add($inputfields);
              }
          }
      }

      (Note: your getModuleInfo() array had the 'summary' key twice; PHP silently keeps only the last one, so I removed the duplicate.)
  22. @BillH You may have a look into the DB table of the image field and check what numbers are set in the column "sort" before and after you opened, changed and saved, or reopened a problematic page. If you have no DB viewer app at hand, you can also use a little debug script (CLI script) to check this. Example script for CLI access, given the images field is named "images" and the page id of one problematic page is "1234":

      <?php namespace ProcessWire;

      include('./index.php');

      $tablename = 'field_images';
      $pages_id  = 1234;

      echo '<pre>';
      $query = "SELECT data, sort FROM {$tablename} WHERE pages_id={$pages_id}";
      $pDOStatement = $database->query($query);
      foreach($pDOStatement as $row) {
          echo "\n{$row['sort']} :: {$row['data']}";
      }
      echo "\n\nReady!";
      exit(0);

      You may check whether all image entries have a value in the column "sort", whether it is a valid value starting from 0 and counting up, whether and when it changes, etc. This may help a bit in getting closer to the issue. Do the older pages use the same images field and the same template as the newer pages? If not, are there different per-template settings for the images field? ...