Everything posted by gebeer

  1. Yes, it would for sure. I recall Ryan talking about making the PW API available through JS some years back. A RESTful API in ProcessWire, like the WP REST API, would be exactly what we need. Ideally it would be available as an optional core module and support all Pro fields.
  2. Hi, in a recent project I had to import large amounts of data that got uploaded through the frontend. To avoid lags in the frontend I pushed the actual import work to processes in the background. Since time on that project was limited, I resorted to a quite simple method of starting the workers:

    public function startSalesImportWorker() {
        $path = $this->config->paths->siteModules . "{$this->className}/workers/";
        $command = "php {$path}salesimportworker.php";
        $outputFile = "{$path}/logs/workerlogs.txt";
        $this->workerPid = (int) shell_exec(sprintf("%s > $outputFile 2>&1 & echo $!", "$command"));
        if ($this->workerPid) return $this->workerPid;
        return false;
    }

Here's the worker code:

    namespace ProcessWire;

    use SlashTrace\SlashTrace;
    use SlashTrace\EventHandler\DebugHandler;

    include(__DIR__ . "/../vendor/autoload.php");
    include(__DIR__ . "/../../../../index.php"); // bootstrap ProcessWire

    ini_set('display_errors', false);
    error_reporting(E_WARNING | E_ERROR);

    $slashtrace = new SlashTrace();
    $slashtrace->addHandler(new DebugHandler());
    // $slashtrace->register();

    $lockfile = __DIR__ . "/locks/lock-" . getmypid();

    // restart when failed or done
    function workerShutdown($args) {
        // release lockfile
        if (file_exists($args['lockfile'])) unlink($args['lockfile']);
        echo PHP_EOL . "Restarting...\n";
        $outputFile = __DIR__ . '/logs/workerlogs.txt';
        $command = PHP_BINARY . " " . $args['command'];
        sleep(1);
        // execute worker again
        exec(sprintf("%s > $outputFile 2>&1 & echo $!", "$command"));
    }
    register_shutdown_function('ProcessWire\workerShutdown', array('lockfile' => $lockfile, 'command' => $argv[0]));

    // wait for other workers to finish
    while (wire('files')->find(__DIR__ . "/locks/")) {
        sleep(5);
    }

    // create lockfile
    file_put_contents($lockfile, $lockfile);

    try {
        // ini_set('max_execution_time', 300); // 300 seconds = 5 minutes
        wire('users')->setCurrentUser(wire('users')->get("admin"));
        echo "starting import: " . date('Y-m-d H:i:s') . PHP_EOL;
        /** @var \ProcessWire\DataImport $mod */
        $mod = wire('modules')->get("DataImport");
        $mod->importSales();
        echo PHP_EOL . "Import finished: " . date('Y-m-d H:i:s');
        // run only for 1 round, then start a new process: prevents memory issues
        die;
    } catch (\Exception $e) {
        $slashtrace->handleException($e);
        die;
    }

I got the idea for restarting the same worker from https://www.algotech.solutions/blog/php/easy-way-to-keep-background-php-jobs-alive/. Note that I am using https://github.com/slashtrace/slashtrace for error handling since it gives nice CLI output, and I couldn't figure out how to utilize the native PW Debug class for that since I needed stack traces. Overall this solution worked quite well. But it doesn't give any control over the worker processes; at least there was no time to implement that. Only after having finished the project did I discover https://symfony.com/doc/current/components/process.html, which seems to have everything you need to start and monitor background processes. So next time the need arises I will definitely give it a try. I'm imagining a Process module that lets you monitor/stop background workers and a generic module to kick them off. How do you handle background processes with PW?
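For anyone curious, here is a rough, untested sketch of how symfony/process could be used to start and watch such a worker. Only the worker script name is borrowed from the post above; the composer setup and variable names are assumptions for illustration:

    <?php
    // assumes symfony/process has been installed via composer in the module directory
    require __DIR__ . '/vendor/autoload.php';

    use Symfony\Component\Process\Process;

    // start the worker script as a background process (non-blocking)
    $process = new Process([PHP_BINARY, __DIR__ . '/workers/salesimportworker.php']);
    $process->setTimeout(null); // let the import run as long as it needs
    $process->start();

    $pid = $process->getPid(); // could be stored to identify the worker later

    // as long as the parent process is alive, the worker can be monitored:
    if ($process->isRunning()) {
        // still importing
    } else {
        $exitCode = $process->getExitCode();
        $output   = $process->getOutput();      // what the worker echoed
        $errors   = $process->getErrorOutput();
    }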
  3. I guess the reason why @wbmnfktr resorted to WordPress is that you get the JSON feeds readily available without any coding. I have often thought that it would be really awesome if we had something like this in PW. With all the modules mentioned (AppApi, GraphQL etc.) you still need to write code to get the desired output at the desired endpoint. WordPress handles this out of the box, and that makes it attractive. PW would definitely benefit and get more attention if there was a module that automatically creates RESTful API endpoints following the page tree structure.
  4. Hi, just wanted to share my experience working with larger data sets in PW. For a recent project I had to import rather large data sets (between 200 and 5,000 rows) from Excel sheets. The import files got processed in chunks and I had to save only one value to one float field on a page. There were about 15,000 pages of that type in the system and another 1,800 pages of a different type. The process of saving each page got really slow when looping through hundreds or thousands of pages. Inside the loop I retrieved every page with a $pages->get() call, which was really fast. But saving got very slow: it took about 4 minutes to process 2,500 pages on my dev Docker machine and about 2 minutes on the live server with 16 virtual cores and 80GB of RAM.

There was one hook running on change of the page field that I saved the values to; it did a simple calculation and saved the result to a different float field on the same page. I guess that was one reason for slowing the process down. After changing the logic and doing the calculation in the import loop, things got a little better, but not much. So I wonder if PW is just not designed to handle large numbers of page saves efficiently? I would have thought otherwise.

What I ended up doing is writing the values directly to the field's table in the DB with a simple method

    public function dbSetData($field, $id, $value) {
        // $field is the field's DB table name, e.g. "field_myfloatfield"
        $db = $this->wire->database;
        $statement = "UPDATE `{$field}` SET `data` = {$value} WHERE `pages_id` = {$id};";
        $query = $db->prepare($statement);
        if ($db->execute($query)) return true;
        return false;
    }

and also getting the required value for the simple calculation directly from the DB

    public function dbGetData($field, $id) {
        $db = $this->wire->database;
        $statement = "SELECT `data` FROM `{$field}` WHERE `pages_id` = {$id};";
        /** @var WireDatabasePDOStatement $query */
        $query = $db->prepare($statement);
        if ($db->execute($query)) {
            $result = $query->fetchColumn();
            // PDO returns the value as a string, so cast it back to a number
            if (is_string($result)) return $result + 0;
        }
        return false;
    }

And that drastically dropped execution time to about 8 seconds for 2,500 rows, compared to 2 minutes with $pages->get() and $pages->save(). I guess if I need to set up a scenario like this again, I will not use pages for storing those values but rather use a ProFields Table field or write directly to the DB.
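For context, a hedged sketch of how the two helpers above could be called from a chunked import loop. The $rows structure and the field table name are made up for illustration; in ProcessWire each field gets its own table named field_<fieldname>, which is what the $field parameter refers to:

    // hypothetical chunk of import data: pages_id => imported value
    $rows = [
        1234 => 19.95,
        1235 => 7.50,
    ];

    foreach ($rows as $pageId => $value) {
        // do the simple calculation in the import loop instead of in a hook
        $current = $this->dbGetData('field_sales_total', $pageId);
        $total   = ($current === false) ? $value : $current + $value;
        $this->dbSetData('field_sales_total', $pageId, $total);
    }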
  5. Hi @d'Hinnisdaël, do you still support this module? Is it working with the latest PW master?
  6. With coded migrations you alter the structure of the application, and it is great that your module provides this. But I think we should respect that not everyone wants to do this in code; some would rather use the admin UI. And this is where the recorder comes in: any changes are reflected in a declarative way. Even coded migrations would be reflected there. What I was trying to say is that the complete state of the application should be tracked in a declarative manner. How you get to that state, be it through coded migrations or through adding stuff through the UI, should be secondary and left up to the developer. Please don't go just yet. I'm sure we can all benefit from your input.
  7. Have a look at Bernhard's video in the first post of this thread, which presents a proof-of-concept YAML recorder. It records all fields/templates to a declarative YAML file whenever you change a field or template through the admin UI, so you always get the current state of all fields/templates as a version-controllable file. That could be imported back once someone writes the logic for the import process. That YAML recorder really is a great first step. But the fields/templates config alone does not represent the full state of a site's structure. We'd also need to record the state of permissions, roles and modules and later create/restore them on import. @bernhard's RockMigrations module already has createPermission and createRole methods, and so does the PW API; modules can be installed/removed through the PW modules API (see the sketch below). So importing the recorded changes should be possible. The recorder and importer are the key features needed for version controlling application structure. Adding fields/templates/permissions/roles/modules through code, as with RockMigrations, would be an added benefit for developers who don't like using the admin UI.
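Just to illustrate the core API calls I mean, a rough sketch of what an importer could do for permissions, roles and modules. The names 'club-admin', 'editor' and 'TracyDebugger' are placeholders, and this assumes it runs in a context where the API variables are available:

    // create a permission if the recorded state contains one that doesn't exist yet
    if (!$permissions->get('club-admin')->id) {
        $permissions->add('club-admin');
    }

    // create a role and assign the permission to it
    if (!$roles->get('editor')->id) {
        $role = $roles->add('editor');
        $role->addPermission('club-admin');
        $role->save();
    }

    // install a module if the recorded state says it should be present
    if (!$modules->isInstalled('TracyDebugger')) {
        $modules->install('TracyDebugger');
    }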
  8. When you build a table row, the attributes for that row need to go in an options array:

    $data = [$club->title, $club->id];
    $options = array();
    // class (string): specify one or more class names to apply to the <tr>
    $options['class'] = "class1 class2";
    // attrs (array): array of attr => value for attributes to add to the <tr>
    $options['attrs'] = array('cid' => $club->id);
    $table->row($data, $options);

Hope this helps.
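For completeness, a minimal sketch of building a whole table this way; the $clubs PageArray and the class/attribute values are made up for illustration:

    /** @var MarkupAdminDataTable $table */
    $table = $this->wire('modules')->get('MarkupAdminDataTable');
    $table->setEncodeEntities(false);
    $table->headerRow(['Title', 'ID']);

    // $clubs is a hypothetical PageArray, e.g. from $pages->find("template=club")
    foreach ($clubs as $club) {
        $options = [
            'class' => 'class1 class2',            // class names for the <tr>
            'attrs' => ['data-cid' => $club->id],  // attribute => value pairs for the <tr>
        ];
        $table->row([$club->title, $club->id], $options);
    }

    echo $table->render();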
  9. This is to keep track of the state of a site's structure independently from the content. It allows changes to fields and templates made on local or staging to be transferred to the production installation without losing content that was added there in the meantime. Does this make sense?
  10. You need to give them this mantra: "pull - merge - push" and let them recite it at least 100 times a day. The hook only knows that it needs to deploy on push, so it won't deploy when someone merges stuff on the server. From that perspective it is pretty fail-safe. Give it a try, I think you'll love it.
  11. Yes, production code could be overwritten if local master is behind origin. But that should not be a problem, since you should always pull and merge locally before you push to origin. If you try to push a branch that is behind origin you will get a warning from git anyway. Of course this requires some discipline, but there is only one rule everyone in the team has to follow: always pull and merge before you push. Personally I believe that builds do not belong on the server, but that is just an opinion. Building on the server requires all npm packages to be available there. I would not feel confident with that, seeing all the security/vulnerability warnings after an npm install; node_modules is excluded from most repos for a reason, I guess. Your deployment strategy looks pretty similar to what I describe over there. Only we do not work with a second origin for production but have added an additional push URL to our origin (github). I can see the advantage of your approach in that it is not so easy to accidentally push things to production. We even use this on shared hosting where we have shell access and git available.
  12. Exactly. staging.myproject.tld would point to the folder /var/www/myproject-staging. We never merge things on the server; we always merge locally first and then push, since the post-receive hook only fires after a push. The most important thing to remember when working with this strategy is: always pull/merge before you push.
  13. Hi all, I got inspired to write this little tutorial by @FireWire's post. We are using the same deployment workflow that he mentions for all new projects. I tried different approaches in the past, including GitHub Actions and GitLab Runners. Setting those up always felt like a PITA to me, especially since all I wanted was to automatically deploy my project to staging or live on push.

Who this is for
Single devs or teams who want to streamline their deployment process with native git methods.

Requirements
- shell access to the server
- git installed on the server and locally
If you don't have shell access and git on the server, upgrade or switch hosting.

Walkthrough
In this example we will be using github to host our code and a server of our choice for deployment. The project is called myproject.

Step 1 (github)
Create a repository named myproject. Let's assume it is available at git@github.com:myaccount/myproject.git. This is our remote URL.

Step 2 (local)
Create a project in the folder myproject and push it to github like you usually would. The remote of your project should now read like this inside the myproject folder:

    $ git remote add origin git@github.com:myaccount/myproject.git
    $ git remote -v
    origin  git@github.com:myaccount/myproject.git (fetch)
    origin  git@github.com:myaccount/myproject.git (push)

Step 3 (server)
Log in to the server via ssh and go to the document root. We assume this to be '/var/www/'. We further assume the command for connecting to our server via ssh is 'ssh myuser@myserver'. Go to the web root, create a directory that will hold a bare git repo, cd into it and create the bare git repository. A bare repo is one that does not contain the actual project files but only the version control information.

    cd /var/www/
    mkdir myproject-git && cd myproject-git
    git init --bare

Step 4 (server)
Create the root directory for your ProcessWire installation:

    cd /var/www/
    mkdir myproject

Step 5 (local)
Now we add information about the bare git repo to our local git config, so that when we push changes, they will be pushed both to github and to the bare git repo on our server. Inside our project folder we do

    git remote set-url --add --push origin myuser@myserver:/var/www/myproject-git

After that we need to add the original github push origin again, because it got overwritten by the last command:

    git remote set-url --add --push origin git@github.com:myaccount/myproject.git

Now the list of remotes should look like this:

    $ git remote -v
    origin  git@github.com:myaccount/myproject.git (fetch)
    origin  myuser@myserver:/var/www/myproject-git (push)
    origin  git@github.com:myaccount/myproject.git (push)

We have one fetch and 2 push remotes. This means that if you push a commit, it will be pushed to both github and your server repo.

Step 6 (server)
Here comes the actual deployment magic. We are using a git hook that fires a script after every push, the so-called post-receive hook. We move into the directory with the bare repository, change to the hooks directory, create the file that triggers the hook and open it for editing with nano:

    $ cd /var/www/myproject-git/hooks
    $ touch post-receive
    $ nano post-receive

Now we paste this script into the open editor and save it:

    #!/bin/bash
    # Bare repository directory.
    GIT_DIR="/var/www/myproject-git"
    # Target directory.
    TARGET="/var/www/myproject"

    while read oldrev newrev ref
    do
        BRANCH=$(git rev-parse --symbolic --abbrev-ref $ref)
        if [[ $BRANCH == "main" ]]; then
            echo "Push received! Deploying branch: ${BRANCH}..."
            # deploy to our target directory
            git --work-tree=$TARGET --git-dir=$GIT_DIR checkout -f $BRANCH
        else
            echo "Not main branch. Skipping."
        fi
    done

What this does is check out (copy) all files that are in the repository to our ProcessWire root directory every time we push something. And that is exactly what we wanted to achieve.

This example setup is for a single branch. If you wanted to make this work with multiple branches, you need to make some small adjustments. Let's assume you have one staging and one live installation, where the web root for live is at /var/www/myproject and for staging at /var/www/myproject-staging.

In Step 4 above you would create a second dir:

    $ cd /var/www/
    $ mkdir myproject
    $ mkdir myproject-staging

And the content of the post-receive hook file could look like this:

    #!/bin/bash
    # Bare repository directory.
    GIT_DIR="/var/www/myproject-git"

    while read oldrev newrev ref; do
        TARGET=""  # reset target for each pushed ref
        BRANCH=$(git rev-parse --symbolic --abbrev-ref $ref)
        if [ $BRANCH == "master" ]; then
            TARGET="/var/www/myproject"
        elif [ $BRANCH == "staging" ]; then
            TARGET="/var/www/myproject-staging"
        else
            echo "Branch not found. Skipping Deployment."
        fi
        # deploy only if var TARGET is set
        if [ -z "${TARGET}" ]; then
            echo "no target set"
        else
            echo "STARTING DEPLOYMENT..."
            echo "Push to ${BRANCH} received! Deploying branch: ${BRANCH} to: ${TARGET}"
            # deploy to our target directory
            git --work-tree=$TARGET --git-dir=$GIT_DIR checkout -f $BRANCH
        fi
    done

Now everything you push to your staging branch will be deployed to /var/www/myproject-staging, and commits to the master branch to /var/www/myproject.

We really do enjoy this deployment workflow. Everything is neat and clean. No need to keep track of which files you uploaded already via SFTP. Peace of mind :-) I basically put together bits and pieces I found around the web to set this up. Would be eager to see how you implement stuff like this.
  14. We have recently switched to exactly the same deployment strategy for new projects and will convert old ones, too. This makes deployment so much easier compared to traditional SFTP setups. It doesn't require any external services like github Actions and makes collaborating on projects very enjoyable. We generally do not include built assets in the repo and handle these through pre-push git hooks on the local machine that trigger rsync tasks for the dist folder. How do you handle these? Here's an example of our pre-push hook:

    #!/bin/bash

    url="$2"
    current_branch=$(git symbolic-ref HEAD | sed -e 's,.*/\(.*\),\1,')
    wanted_branch='main'

    if [[ $url == *github.com* ]]; then
        read -p "You're about to push, are you sure you won't break the build? Y/N? " -n 1 -r </dev/tty
        echo
        if [[ $REPLY =~ ^[Yy]$ ]]; then
            if [ $current_branch = $wanted_branch ]; then
                sshUser="ssh-user"
                remoteServer="remote.server.com"
                remotePath="/remote/path/to/site/templates/"
                echo "When prompted, please insert password."
                echo "Updating files..."
                rsync -av --delete site/templates/dist $sshUser@$remoteServer:$remotePath
                exit 0
            fi
        else
            echo "answer was no"
            exit 1
        fi
    else
        echo "rsync already finished"
    fi
  15. This can all be achieved in code, which means that you can put it in a migration function. And in RockMigrations you call your migration function on the target installation. RM supports Repeater fields, so even the third scenario would be possible.
  16. Thank you for summing this topic up in a new thread. I had the same intention but couldn't spare the time. I am all for version control of fields and templates. @bernhard's RockMigrations module already does a great job here, and since he introduced the prototype recorder I am very excited that we will soon have something to work with and build upon. This should really be part of the PW core or available as an optional core module. Would be great if @ryan put this on the roadmap for 2022.
  17. @ryan This is a bit off topic, but I would also support version control for fields/templates as something to maybe concentrate on this year. There is a lively discussion going on, and Bernhard has already shown an awesome proof of concept in his video.
  18. I am more excited about the recorder than about YAML. And I see your point: when using a PHP array for the migrate() method, you can do things that you can't do in YAML. But on the other hand, for recording changes locally and then migrating them to staging/live, YAML would be sufficient. Your use case, where you set up fields and templates for a RockMails module with RockMigrations, is different. When installing that module, it needs to check whether it is being installed on a multilang site or not. But when recording and migrating changes, we already know the context for the migration since it has been recorded through YAML. My conclusion would be to store the recorded data in YAML or JSON. For other use cases, like the one you described, we can still use a PHP array.
  19. I think I had a wrong understanding of the migrate([]) method, thinking it is destructive for fields/templates that are not in the $config array. So for an existing site, I would have to build the $config array with all fields and templates that are already in the installation; if I forgot one field, it would be deleted by the next call of the migrate method. But looking at the code, I see that it only creates fields and templates. Still, if I don't add fields/templates that already exist in the installation to the $config array, the migration would not cover all fields/templates. Does that make sense? Yes, this is exactly what happens in the video: all fields/templates of a site get written to the YAML file. It would be great if we had this available for setting up initial migrate() $config arrays, either inside the module or as a separate "recorder" module. That way, we could plug RockMigrations into any existing site, create the initial $config data (inside a YAML file) and move on from there with our migrations.
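For readers who haven't used RockMigrations: a rough, hedged sketch of what such a migrate() $config array might look like. The field and template names are invented, and the exact option keys should be checked against the RockMigrations docs for the installed version:

    /** @var RockMigrations $rm */
    $rm = $this->wire('modules')->get('RockMigrations');

    // illustrative only: names and option keys are assumptions
    $rm->migrate([
        'fields' => [
            'sales_total' => [
                'type'  => 'float',
                'label' => 'Sales total',
            ],
        ],
        'templates' => [
            'sales_record' => [
                'fields' => ['title', 'sales_total'],
            ],
        ],
    ]);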
  20. Hi @bernhard, your video shows a test of recording changes to templates/fields in YAML. This looks very promising. Do you have any plans to integrate this into your module? It is easy to use RockMigrations when you start a new project, but for existing projects where RM comes in later, we need to write the migration files by hand before we can start using rm()->migrate. Would it be possible to create a YAML file from all templates/fields of an existing install, based on the recorder that you showed in that video?
  21. This is totally freaking awesome! Can't give enough thumbs up. Any plans to release this? Wannahave! That would just be such a great help for keeping things in sync.
  22. I published a generic module with some examples at https://github.com/gebeer/CustomPageTypes Happy visual learning!
  23. Hello all, since https://processwire.com/docs/tutorials/using-custom-page-types-in-processwire/ came out, I used to implement custom page classes as modules, following the principles described in that tutorial. Only a few weeks ago I stumbled across https://processwire.com/blog/posts/pw-3.0.152/#new-ability-to-specify-custom-page-classes. This seems to me a much cleaner and easier way to implement them, though it restricts the naming of custom classes to the naming conventions of the class loader. Other than that I can't really see any disadvantages. Which way do you prefer and why? On a side note, useful features like the one described in the second link can often only be found in @ryan's core update blog posts. If you don't read them regularly, those new features are easy to miss. I'd love to see those hidden gems find their way into the API reference in more detail. Although $config->usePageClasses is documented at https://processwire.com/api/ref/config/, I think it deserves its own page with all the explanations from the blog post.
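For anyone who hasn't read that blog post: a minimal sketch of the custom page class approach it describes. The HomePage class name follows the template-name convention from the post, while the summary() method is just an invented example:

    <?php namespace ProcessWire;

    // /site/classes/HomePage.php
    // With $config->usePageClasses = true; set in /site/config.php, pages using
    // the "home" template automatically get this class (template name + "Page").
    class HomePage extends Page {

        // summary() is an invented example method, not part of the core API
        public function summary() {
            return "Home page '{$this->title}' has " . $this->numChildren() . " children.";
        }
    }

In a template file, $page->summary() is then available on the home page without any module boilerplate.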
  24. Developing on Linux (currently Arch/KDE Plasma) for the last 16 years. Would never go back to proprietary alternatives. Why pay for something that should really be free for all?

Devtools:
Editor: VSCodium with PHP Intelephense, GitLens, PHP Debug (xdebug support), Prettier (code formatter), Todo Tree and @bernhard's PWSnippets.
Local dev env: after having used Vagrant for a long time, about 4 years ago I switched to https://laradock.io/ for a local Docker environment. Ensures portable environments across multiple machines.
PW modules used on almost every project: TracyDebugger, WireMailSmtp, ProFields (mainly for Repeater Matrix), TablePro.
Asset building pipeline: npm scripts / gulp / webpack. Will have a look into Laravel Mix. Might save time, although I actually like to fiddle with all the configs.
Deployment: for older and ongoing projects mostly SFTP. For new projects git with git hooks, which is so much cleaner. Not using any service but creating our own git hooks on the server; git must be available on the production server. Staging servers are used rarely, mostly we deploy from local to production.
Hosting: I do not offer hosting services; this is up to the client. Personally I use https://uberspace.de/en/, a command-line-configured shared hosting provider from DE with a pay-what-you-want pricing model.