Everything posted by gebeer

  1. Oh, I see. The feature request is about improving documentation and giving developers a way to disable stripping of comments.
  2. Thanks again for the suggestion. I really don't want to spend more time finding a workaround. I'll wait until we get some feedback on the feature request and a clean way to avoid stripping comments. In the meantime, I have fallen back to good old if statements for including the necessary HTML blocks.
  3. I have set up a starter kit for developing PW sites with Tailwind CSS. You can find it at https://github.com/gebeer/tailwind-css-webpack-processwire-starter I know, I'm a bit late to the party with Tailwind. In fact I only discovered it a couple of months ago and found it to be a very interesting concept. It took quite some time to get a pure webpack/postcss/tailwind setup that features a live-reloading dev server, cache busting, Babel support and autoprefixing, and works well with PW. For the cache busting part I developed a small helper module that utilizes webpack.manifest.json to always load the assets with the correct file hashes for cache busting. No more timestamps, version numbers or the like required. The little helper can be found at https://github.com/gebeer/WebpackAssets Having used it for the first time in production on a client project, I have to say that I really enjoy working with Tailwind. The concept of using utility classes only really convinced me after having put it into practice. During the work on the client project, some quirks with webpack-dev-server came to the surface. I was able to solve them with the help of a colleague and now it is running quite stably in both Linux and Mac environments. I am fascinated by the fast build times compared to using webpack/gulp or gulp/sass. Also the small file size of the compiled CSS is remarkable compared to other frameworks. So this will be my go-to setup whenever I am free to choose the frontend framework for a project. If anyone feels like they want to give it a try, I shall be happy to get your feedback
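     To illustrate the manifest-based cache busting idea, here is a minimal sketch (this is not the WebpackAssets module API; it only assumes that webpack writes a manifest.json into the templates dist folder that maps source names to hashed filenames):

     // minimal sketch of manifest-based cache busting
     // assumes webpack writes site/templates/dist/manifest.json, e.g.
     // {"main.css": "main.3f2a1b.css", "main.js": "main.9c8d7e.js"}
     function hashedAsset($name) {
         $config = wire('config');
         $manifestPath = $config->paths->templates . 'dist/manifest.json'; // assumed location
         static $manifest = null;
         if ($manifest === null) {
             $manifest = is_file($manifestPath)
                 ? json_decode(file_get_contents($manifestPath), true)
                 : [];
         }
         // fall back to the unhashed name if the manifest has no entry
         $file = isset($manifest[$name]) ? $manifest[$name] : $name;
         return $config->urls->templates . 'dist/' . $file;
     }

     // usage in a template file:
     // <link rel="stylesheet" href="<?= hashedAsset('main.css') ?>">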
  4. Unfortunately this is not working. And it looks ugly and feels so wrong ;-) But thank you for the suggestion.
  5. This only helps in some cases. In my case I need the comment to read <!-- ko if: properties.ms == "pro" --> for knockout.js to recognise it. So the other syntax would break my JS. But thanks for pointing that out.
  6. I made a feature request for documentation improvement: https://github.com/processwire/processwire-requests/issues/454
  7. Hello, MarkupRegions strips out HTML comments by default. I just discovered that while converting a rather old project to use MarkupRegions. In that project, HTML comments are required for some knockout.js code to work, and I was scratching my head for quite a while until I realized that those comments were being removed. This behaviour is not mentioned anywhere in the documentation, other than in the code itself at https://github.com/processwire/processwire/blob/3acd7709c1cfc1817579db00c2f608235bdfb1e7/wire/core/WireMarkupRegions.php#L89 and in an old forum post. How can I tell MarkupRegions that my markup needs to stay exactly the same, or in other words, where can I pass the option ("exact" => true) to MarkupRegions find() to change the default behaviour?
  8. You might want to have a look at https://processwire.com/modules/less/ and https://processwire.com/modules/sassify/ for example implementations of parsers. I don't quite see an advantage in compiling frontend assets on the server, except for themes that want to provide customisations without the need for coding. One module for each frontend package seems like a reasonable way to go, but keeping those up to date will also be quite time consuming for the maintainer. This is where package managers come in handy. So, if you can find composer packages for BS and UIkit that you can utilize for the PW modules, then that would be a big time saver.
  9. Unfortunately we were not on InnoDB. But the problem was not so much memory consumption but rather the long execution time for saving pages. Total memory consumption after saving values to 2500 pages was only around 45MB.
  10. Yes, it would for sure. I recall that Ryan was talking about making the PW API available through JS some years back. That would be exactly what we need. A RESTful API like the WP REST API in ProcessWire. This should ideally be available as an optional core module and support all Pro fields.
  11. Hi, in a recent project I had to import large amounts of data that got uploaded through the frontend. To avoid lags in the frontend I pushed the actual import work to processes in the background. Since time for that project was limited, I resorted to a quite simple method of starting the workers:

     public function startSalesImportWorker() {
         $path = $this->config->paths->siteModules . "{$this->className}/workers/";
         $command = "php {$path}salesimportworker.php";
         $outputFile = "{$path}/logs/workerlogs.txt";
         $this->workerPid = (int) shell_exec(sprintf("%s > $outputFile 2>&1 & echo $!", "$command"));
         if ($this->workerPid) return $this->workerPid;
         return false;
     }

     Here's the worker code:

     namespace ProcessWire;

     use SlashTrace\SlashTrace;
     use SlashTrace\EventHandler\DebugHandler;

     include(__DIR__ . "/../vendor/autoload.php");
     include(__DIR__ . "/../../../../index.php");

     ini_set('display_errors', false);
     error_reporting(E_WARNING | E_ERROR);

     $slashtrace = new SlashTrace();
     $slashtrace->addHandler(new DebugHandler());
     // $slashtrace->register();

     $lockfile = __DIR__ . "/locks/lock-" . getmypid();

     // restart when fail or done
     function workerShutdown($args) {
         // release lockfile
         if (file_exists($args['lockfile'])) unlink($args['lockfile']);
         echo PHP_EOL . "Restarting...\n";
         $outputFile = __DIR__ . '/logs/workerlogs.txt';
         $command = PHP_BINARY . " " . $args['command'];
         sleep(1);
         // execute worker again
         exec(sprintf("%s > $outputFile 2>&1 & echo $!", "$command"));
     }
     register_shutdown_function('ProcessWire\workerShutdown', array('lockfile' => $lockfile, 'command' => $argv[0]));

     // wait for other workers to finish
     while (wire('files')->find(__DIR__ . "/locks/")) {
         sleep(5);
     }

     // create lockfile
     file_put_contents($lockfile, $lockfile);

     try {
         // ini_set('max_execution_time', 300); // 300 seconds = 5 minutes
         wire('users')->setCurrentUser(wire('users')->get("admin"));
         echo "starting import: " . date('Y-m-d H:i:s') . PHP_EOL;
         /** @var \ProcessWire\DataImport $mod */
         $mod = wire('modules')->get("DataImport");
         $mod->importSales();
         echo PHP_EOL . "Import finished: " . date('Y-m-d H:i:s');
         // run only for 1 round, then start a new process: prevent memory issues
         die;
     } catch (\Exception $e) {
         $slashtrace->handleException($e);
         die;
     }

     I got the idea for restarting the same worker from https://www.algotech.solutions/blog/php/easy-way-to-keep-background-php-jobs-alive/ Note that I am using https://github.com/slashtrace/slashtrace for error handling since it gives nice CLI output; I couldn't figure out how to utilize the native PW Debug class for that since I needed stack traces. Overall this solution worked quite well, but it doesn't give any control over the worker processes. At least there was no time to implement that. Only after having finished the project did I discover https://symfony.com/doc/current/components/process.html which seems to have everything you need to start and monitor background processes. So next time the need arises I will definitely give it a try. I'm imagining a Process module that lets you monitor/stop background workers and a generic module to kick them off. How do you handle background processes with PW?
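     As a rough sketch of how the Symfony Process component mentioned above could replace the shell_exec() call (assuming symfony/process has been installed via composer; the worker script path is just the one from the example and otherwise hypothetical):

     use Symfony\Component\Process\Process;

     require __DIR__ . '/vendor/autoload.php'; // assumes symfony/process is installed

     // start the worker asynchronously instead of via shell_exec()
     $process = new Process(['php', __DIR__ . '/workers/salesimportworker.php']);
     $process->start();

     $pid = $process->getPid(); // could be stored to monitor or stop the worker later

     // optional: poll until the worker has finished
     while ($process->isRunning()) {
         sleep(5);
     }

     if ($process->isSuccessful()) {
         echo $process->getOutput();
     } else {
         echo $process->getErrorOutput();
     }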
  12. I guess the reason why @wbmnfktr resorted to WordPress is that you have the JSON feeds readily available without any coding. And I often thought that it would be really awesome if we had something like this in PW. With all the modules mentioned (AppAPI, GraphQL etc.) you still need to write code to get the desired output at the desired endpoint. WordPress handles this out of the box, and this makes it attractive. PW would definitely benefit and get more attention if there was a module that automatically creates RESTful API endpoints following the page tree structure.
  13. Hi, just wanted to share my experience working with larger data sets in PW. For a recent project I had to import rather large data sets (between 200 and 5000 rows) from Excel sheets. The import files got processed in chunks and I had to save only one value to one float field on a page. There were about 15,000 pages of that type in the system and another 1800 pages of a different type. The process of saving each page got really slow when looping through hundreds or thousands of pages. Inside the loop I retrieved every page with a $pages->get() call. This was really fast. But saving got very slow. It took about 4 minutes to process 2500 pages on my dev docker machine and about 2 minutes on the live server with 16 virtual cores and 80GB of RAM.

     There was one hook running on change of the page field that I saved the values to, which did a simple calculation and saved the result to a different float field on the same page. I guess that was one reason for slowing the process down. After changing the logic and doing the calculation in the import loop, things got a little better. But not much. So I wonder if PW is just not designed to handle large amounts of page saves in an efficient way? I would have thought otherwise.

     What I ended up doing is writing the values directly to the field's table in the DB with a simple method

     public function dbSetData($field, $id, $value) {
         $db = $this->wire->database;
         $statement = "UPDATE `{$field}` SET `data` = {$value} WHERE `pages_id` = {$id};";
         $query = $db->prepare($statement);
         if ($db->execute($query)) return true;
         return false;
     }

     and also getting the required value for the simple calculation directly from the DB

     public function dbGetData($field, $id) {
         $db = $this->wire->database;
         $statement = "SELECT `data` FROM `{$field}` WHERE `pages_id` = {$id};";
         /** @var WireDatabasePDOStatement $query */
         $query = $db->prepare($statement);
         if ($db->execute($query)) {
             $resultStr = $query->fetchColumn();
             if (is_string($resultStr)) $result = $resultStr + 0;
             return $result;
         } else {
             return false;
         }
     }

     And that drastically dropped execution time to about 8 seconds for 2500 rows, compared to 2 minutes with $pages->get() and $pages->save(). I guess if I need to set up a scenario like this again, I will not use pages for storing those values but rather use a Pro Fields Table field or write directly to the DB.
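     For context, a sketch of how these two helpers could be called from the chunked import loop. The $chunk variable and the field table names (field_sales_value, field_sales_factor, field_sales_total) are made up for illustration; the only assumption is ProcessWire's usual field_* table naming with pages_id and data columns:

     // hypothetical usage inside the chunked import loop
     foreach ($chunk as $row) {
         $pageId = (int) $row['page_id'];
         $value  = (float) $row['value'];

         // write the imported value directly to its field table
         $this->dbSetData('field_sales_value', $pageId, $value);

         // read a second value needed for the calculation directly from the DB
         $factor = $this->dbGetData('field_sales_factor', $pageId);

         // do the calculation that previously ran in a hook and store the result
         if ($factor !== false) {
             $this->dbSetData('field_sales_total', $pageId, $value * $factor);
         }
     }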
  14. Hi @d'Hinnisdaël, do you still support this module? Is it working with the latest PW master?
  15. With coded migrations you alter the structure of the application, and it is great that your module provides this. But I think we should respect that not everyone wants to do this by code. Some would rather use the admin UI. And this is where the recorder comes in. Any changes are reflected in a declarative way. Even coded migrations would be reflected there. What I was trying to say is that the complete state of the application should be tracked in a declarative manner. How you get to that state, be it through coded migrations or through adding stuff through the UI, should be secondary and left up to the developer. Please don't go just yet. I'm sure we can all benefit from your input.
  16. Have a look at Bernhard's video in the first post of this thread which presents a proof-of-concept YAML recorder. This records all fields/templates to a declarative YAML file once you make changes to either a field or a template through the admin UI. So you will always get the current state of all fields/templates as a version-controllable file. That could be imported back once someone writes the logic for the import process. That YAML recorder really is a great first step. But the fields/templates config alone does not represent the full state of a site's structure. We'd need to also record the state of permissions, roles and modules and later create/restore them on import. @bernhard's RockMigrations module already has methods createPermission and createRole, and so does the PW API. Modules can be installed/removed through the PW modules API. So importing the recorded changes should be possible. The recorder and importer are the key features needed for version controlling application structure. Adding fields/templates/permissions/roles/modules through code like with RockMigrations would be an added benefit for developers who don't like using the admin UI.
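     As an illustration of the core API calls an importer could use for the permissions/roles/modules part, here is a minimal sketch (names like newsletter-edit, newsletter-editor and ProcessHello are placeholders; it assumes the ProcessWire API variables are in scope, e.g. in a module or bootstrap script, and omits error handling):

     // create a permission and a role, then connect them
     $permission = $permissions->add('newsletter-edit');
     $permission->title = 'Edit newsletter pages';
     $permission->save();

     $role = $roles->add('newsletter-editor');
     $role->addPermission($permission);
     $role->save();

     // install a module listed in the recorded state
     if (!$modules->isInstalled('ProcessHello')) {
         $modules->install('ProcessHello');
     }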
  17. When you build a table row, the attributes for that row need to go in an options array:

     $data = [$club->title, $club->id];
     $options = array();
     // class (string): specify one or more class names to apply to the <tr>
     $options['class'] = "class1 class2";
     // attrs (array): array of attr => value for attributes to add to the <tr>
     $options['attr'] = array('cid' => $club->id);
     $table->row($data, $options);

     Hope this helps.
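     For context, a sketch of the surrounding MarkupAdminDataTable usage this snippet assumes ($clubs and the club template are made up; headerRow(), row() and render() are the module's own methods):

     /** @var MarkupAdminDataTable $table */
     $table = $this->modules->get('MarkupAdminDataTable');
     $table->headerRow(['Title', 'ID']);

     // $clubs is assumed to be a PageArray, e.g. from $pages->find('template=club')
     foreach ($clubs as $club) {
         $data = [$club->title, $club->id];
         $options = ['class' => 'class1 class2', 'attr' => ['cid' => $club->id]];
         $table->row($data, $options);
     }

     $out = $table->render();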
  18. This is to keep track of the state of a site's structure independently from the content, and to allow changes to fields and templates that have been made on local or staging to be transferred to the production version without losing content that was introduced in the meantime. Does this make sense?
  19. You need to give them this mantra: "pull - merge - push" and let them recite it 100 times a day at least. The post-receive hook only knows that it needs to deploy on push, so it won't deploy when someone merges stuff on the server. From that perspective it is pretty fail-safe. Give it a try, I think you'll love it.
  20. Yes, production code could be overwritten if local master is behind origin. But that should not be a problem since you should always pull and merge locally before you push to origin. If you try to push a branch that is behind origin you will get a warning from git anyway. Of course this requires some discipline. But there is only one rule everyone in the team has to follow: always pull and merge before you push. Personally I believe that builds do not belong on the server, but that is just an opinion. Building on the server requires all npm packages to be available there. I would not feel confident with that, seeing all the security/vulnerability warnings after an npm install. node_modules is excluded from most repos for a reason, I guess. Your deployment strategy looks pretty similar to what I describe over there. Only we do not work with a second origin for production but have added an additional push URL to our origin (github). I can see the advantage of your approach in that it is not so easy to accidentally push things to production. We even use this on shared hosting where we have shell access and git available.
  21. Exactly. staging.myproject.tld would point to the folder /var/www/myproject-staging. We never merge things on the server; we always merge locally first and then push, because the post-receive hook fires only after a push. The most important thing to remember when working with this strategy is: always pull/merge before you push.
  22. Hi all, I got inspired to write this little tutorial by @FireWire's post. We are using the same deployment workflow that he mentions for all new projects. I tried different approaches in the past, including github Actions and gitlab Runners. Setting those up always felt like a PITA to me, especially since all I wanted was to automatically deploy my project to staging or live on push.

     Whom this is for

     Single devs or teams who want to streamline their deployment process with native git methods.

     Requirements

     - shell access to the server
     - git installed on the server and locally

     If you don't have shell access and git on the server, upgrade or switch hosting.

     Walkthrough

     In this example we will be using github to host our code and a server of our choice for deployment. The project is called myproject.

     Step 1 (github)

     Create a repository named myproject. Let's assume it is available at git@github.com:myaccount/myproject.git. This is our remote URL.

     Step 2 (local)

     Create a project in the folder myproject and push it to github like you usually would. The remote of your project should now read like this inside the myproject folder:

     $ git remote add origin git@github.com:myaccount/myproject.git
     $ git remote -v
     origin  git@github.com:myaccount/myproject.git (fetch)
     origin  git@github.com:myaccount/myproject.git (push)

     Step 3 (server)

     Log in to the server via ssh and go to the document root. We assume this to be /var/www/ and the command for connecting to our server via ssh to be 'ssh myuser@myserver'. Go to the web root, create a directory that will hold a bare git repo, cd into it and create the bare git repository. A bare repo is one that does not contain the actual project files but only the version control information.

     cd /var/www/
     mkdir myproject-git && cd myproject-git
     git init --bare

     Step 4 (server)

     Create the root directory for your ProcessWire installation

     cd /var/www/
     mkdir myproject

     Step 5 (local)

     Now we add information about the bare git repo to our local git config, so that when we push changes, they will be pushed both to github and to the bare git repo on our server. Inside our project folder we do

     git remote set-url --add --push origin myuser@myserver:/var/www/myproject-git

     After that we need to add the original github push origin again, because it got overwritten by the last command

     git remote set-url --add --push origin git@github.com:myaccount/myproject.git

     Now the list of remotes should look like this

     $ git remote -v
     origin  git@github.com:myaccount/myproject.git (fetch)
     origin  myuser@myserver:/var/www/myproject-git (push)
     origin  git@github.com:myaccount/myproject.git (push)

     We have one fetch and 2 push remotes. This means that if you push a commit, it will be pushed to both github and your server repo.

     Step 6 (server)

     Here comes the actual deployment magic. We are using a git hook that fires a script after every push. This hook is called a post-receive hook. We move into the directory with the bare repository, change to the hooks directory, create the file that triggers the hook (it needs to be executable), and open it for editing with nano

     $ cd /var/www/myproject-git/hooks
     $ touch post-receive
     $ chmod +x post-receive
     $ nano post-receive

     Now we paste this script into the open editor and save it

     #!/bin/bash

     # Bare repository directory.
     GIT_DIR="/var/www/myproject-git"
     # Target directory.
     TARGET="/var/www/myproject"

     while read oldrev newrev ref
     do
         BRANCH=$(git rev-parse --symbolic --abbrev-ref $ref)
         if [[ $BRANCH == "main" ]]; then
             echo "Push received! Deploying branch: ${BRANCH}..."
             # deploy to our target directory.
             git --work-tree=$TARGET --git-dir=$GIT_DIR checkout -f $BRANCH
         else
             echo "Not main branch. Skipping."
         fi
     done

     What this does is check out (copy) all files that are in the repository to our ProcessWire root directory every time we push something. And that is exactly what we wanted to achieve.

     This example setup is for a single branch. If you want to make this work with multiple branches, you need to make some small adjustments. Let's assume you have one staging and one live installation, where the web root for live is at /var/www/myproject and for staging at /var/www/myproject-staging.

     In Step 4 above you would create a second dir

     $ cd /var/www/
     $ mkdir myproject
     $ mkdir myproject-staging

     And the content of the post-receive hook file could look like

     #!/bin/bash

     # Bare repository directory.
     GIT_DIR="/var/www/myproject-git"

     while read oldrev newrev ref; do
         BRANCH=$(git rev-parse --symbolic --abbrev-ref $ref)
         if [ $BRANCH == "master" ]; then
             TARGET="/var/www/myproject"
         elif [ $BRANCH == "staging" ]; then
             TARGET="/var/www/myproject-staging"
         else
             echo "Branch not found. Skipping Deployment."
         fi
         # deploy only if var TARGET is set
         if [ -z ${TARGET} ]; then
             echo "no target set"
         else
             echo "STARTING DEPLOYMENT..."
             echo "Push to ${BRANCH} received! Deploying branch: ${BRANCH} to: ${TARGET}"
             # deploy to our target directory.
             git --work-tree=$TARGET --git-dir=$GIT_DIR checkout -f $BRANCH
         fi
     done

     Now everything you push to your staging branch will be deployed to /var/www/myproject-staging, and commits to the master branch to /var/www/myproject.

     We really do enjoy this deployment workflow. Everything is neat and clean. No need to keep track of which files you already uploaded via SFTP. Peace of mind :-) I basically put together bits and pieces I found around the web to set this up. Would be eager to see how you implement stuff like this.
  23. We recently switched to exactly the same deployment strategy for new projects and will convert old ones, too. This makes deployment so much easier compared to traditional SFTP setups. It doesn't require any external services like github Actions and makes collaborating on projects very enjoyable. We generally do not include built assets in the repo and handle these through pre-push git hooks on the local machine that trigger rsync tasks for the dist folder. How do you handle these? Here's an example of pre-push:

     #!/bin/sh

     url="$2"
     current_branch=$(git symbolic-ref HEAD | sed -e 's,.*/\(.*\),\1,')
     wanted_branch='main'

     if [[ $url == *github.com* ]]; then
         read -p "You're about to push, are you sure you won't break the build? Y/N? " -n 1 -r </dev/tty
         echo
         if [[ $REPLY =~ ^[Yy]$ ]]; then
             if [ $current_branch = $wanted_branch ]; then
                 sshUser="ssh-user"
                 remoteServer="remote.server.com"
                 remotePath="/remote/path/to/site/templates/"
                 echo When prompted, please insert password.
                 echo Updating files...
                 rsync -av --delete site/templates/dist $sshUser@$remoteServer:$remotePath
                 exit 0
             fi
         else
             echo "answer was no"
             exit 1
         fi
     else
         echo "rsync already finished"
     fi
  24. This can all be achieved by code, which means that you can put this in a migration function. And in RockMigrations you call your migration function on the target installation. RM supports Repeater fields, so even the 3rd scenario would be possible.