Leaderboard
Popular Content
Showing content with the highest reputation on 02/05/2022 in all areas
-
This week we have some great performance and scalability improvements in the core that enable lazy-loading of fields, templates and fieldgroups, thanks to @thetuningspoon — https://processwire.com/blog/posts/field-and-template-scalability-improvements/
-
I feel really good about this scalability and speed tuning of the fields and templates. Big thanks fly out to the tuning spoon and Ryan.
-
Hey Adrian (and others, in case anyone else happens to run into this issue)! I'm posting here instead of opening a GitHub issue since this doesn't feel like a "bug" or "issue", but rather a potential gotcha.

The background is that I've recently been dealing with major performance issues at weekly.pw. Every request was taking 10+ seconds, which made editing content... let's say an interesting experience. Particularly when PW triggers additional HTTP requests for a number of things, from checking if a page exists to the link editor modal window. Makes one think twice before clicking or hovering over anything in the admin...

I tried to debug the issue with little luck, eventually deciding to blame it on the server (sorry, Contabo!) until today, when — accidentally, while migrating the site to a new server — I finally figured out that the real problem was in fact the Tracy Logs panel and a ~800 MB, ~3.5 million row logs/tracy/error.log file. The underlying reason was warnings generated by the XML sitemap module: each request was generating ~2.5k new rows, with an hour-long cache, so potentially 60k or more new rows per day.

Now, I'm writing this half hoping that if someone else runs into a similar problem they'll be smarter than me and check whether any of the Tracy panels suffer from slow rendering time (which is already reported for each panel individually), but I do also wonder if there's something that could automatically prevent this. Perhaps logs should be pruned, Tracy should warn if there's a crazy amount of data in one of the log files, or log file reading could be somehow optimized? Food for thought.

Again, this is obviously not a bug, but still something that can end up biting you pretty hard.
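For anyone hit by the same thing, here's a quick way to spot and trim oversized logs from the shell. A rough sketch only, assuming the usual site/assets/logs/tracy/ location for the log path quoted above; adjust to your install:

```bash
# list any log files over 50 MB (anything this size will slow a log panel)
find site/assets/logs -type f -size +50M -exec ls -lh {} \;

# count rows in the suspect file (compare with the ~3.5M rows above)
wc -l site/assets/logs/tracy/error.log

# empty the file in place without deleting it, preserving permissions
truncate -s 0 site/assets/logs/tracy/error.log
```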
-
Hello @pmx, welcome to the PW forums! Personally I prefer using Markup Regions, as that method produces easy-to-read template code:

- Official docs: https://processwire.com/docs/front-end/output/markup-regions/
- My cheat sheet: https://processwire.com/talk/topic/23641-markup-regions-pw-vs-data-pw-different-behavior/#comment-201538
- Good example on the basics: https://processwire.com/talk/topic/23641-markup-regions-pw-vs-data-pw-different-behavior/?do=findComment&comment=201505
- How to debug Markup Regions tip: https://processwire.com/talk/topic/24398-markup-regions-template-strategy-is-this-normal/?do=findComment&comment=206568
- As an advanced technique, it is even possible to hook into the rendering process, but you probably won't need it: https://processwire.com/talk/topic/21852-markup-regions-in-hooks/?do=findComment&comment=208104

I hope this helps.
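To give a feel for what those links describe, here is a minimal sketch of the Markup Regions idea; the file names follow the default site profile, and the body field is an assumption:

```html
<!-- /site/templates/_main.php: the base document, appended to every
     template file by the default site profile -->
<html>
  <body>
    <div id="content">
      <p>Default content, replaced per template.</p>
    </div>
  </body>
</html>

<!-- /site/templates/basic-page.php: no boilerplate, just the region.
     With $config->useMarkupRegions = true; in /site/config.php, this
     markup replaces the element with id="content" from _main.php. -->
<div id="content">
  <h1><?= $page->title ?></h1>
  <?= $page->body ?>
</div>
```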
-
Hi all, I got inspired to write this little tutorial by @FireWire's post. We are using the same deployment workflow that he mentions for all new projects. I tried different approaches in the past, including GitHub Actions and GitLab Runners. Setting those up always felt like a PITA to me, especially since all I wanted was to automatically deploy my project to staging or live on push.

Whom this is for

Single devs or teams who want to streamline their deployment process with native git methods.

Requirements

- shell access to the server
- git installed on the server and locally

If you don't have shell access and git on the server, upgrade or switch hosting.

Walkthrough

In this example we will be using GitHub to host our code and a server of our choice for deployment. The project is called myproject.

Step 1 (github)

Create a repository named myproject. Let's assume it is available at git@github.com:myaccount/myproject.git. This is our remote URL.

Step 2 (local)

Create a project in the folder myproject and push it to GitHub like you usually would. The remote of your project should now read like this inside the myproject folder:

```
$ git remote add origin git@github.com:myaccount/myproject.git
$ git remote -v
origin  git@github.com:myaccount/myproject.git (fetch)
origin  git@github.com:myaccount/myproject.git (push)
```

Step 3 (server)

Log in via ssh to the server and go to the document root. We assume this to be /var/www/. We further assume the command for connecting to our server via ssh is 'ssh myuser@myserver'. Go to the web root, create a directory that will hold a bare git repo, cd into it and create the bare git repository. A bare repo is one that does not contain the actual project files but only the version control information.

```
cd /var/www/
mkdir myproject-git && cd myproject-git
git init --bare
```

Step 4 (server)

Create the root directory for your ProcessWire installation:

```
cd /var/www/
mkdir myproject
```

Step 5 (local)

Now we add information about the bare git repo to our local git config, so that when we push changes, they will be pushed both to GitHub and to the bare git repo on our server. Inside our project folder we do:

```
git remote set-url --add --push origin myuser@myserver:/var/www/myproject-git
```

After that we need to add the original GitHub push origin again, because it got overwritten by the last command:

```
git remote set-url --add --push origin git@github.com:myaccount/myproject.git
```

Now the list of remotes should look like this:

```
$ git remote -v
origin  git@github.com:myaccount/myproject.git (fetch)
origin  myuser@myserver:/var/www/myproject-git (push)
origin  git@github.com:myaccount/myproject.git (push)
```

We have one fetch and two push remotes. This means that if you push a commit, it will be pushed to both GitHub and your server repo.

Step 6 (server)

Here comes the actual deployment magic. We are using a git hook that fires a script after every push. This hook is called a post-receive hook. We move into the directory with the bare repository, change to the hooks directory, create the file that triggers the hook, make it executable (git silently ignores a hook that isn't executable) and open it for editing with nano:

```
$ cd /var/www/myproject-git/hooks
$ touch post-receive
$ chmod +x post-receive
$ nano post-receive
```

Now we paste this script into the open editor and save it:

```bash
#!/bin/bash

# Bare repository directory.
GIT_DIR="/var/www/myproject-git"
# Target directory.
TARGET="/var/www/myproject"

while read oldrev newrev ref
do
    BRANCH=$(git rev-parse --symbolic --abbrev-ref $ref)
    if [[ $BRANCH == "main" ]]; then
        echo "Push received! Deploying branch: ${BRANCH}..."
        # deploy to our target directory.
        git --work-tree=$TARGET --git-dir=$GIT_DIR checkout -f $BRANCH
    else
        echo "Not main branch. Skipping."
    fi
done
```

What this does is check out (copy) all files that are in the repository to our ProcessWire root directory every time we push something. And that is exactly what we wanted to achieve.

This example setup is for a single branch. If you wanted to make this work with multiple branches, you need to make some small adjustments. Let's assume you have one staging and one live installation, where the web root for live is at /var/www/myproject and for staging at /var/www/myproject-staging.

In Step 4 above you would create a second dir:

```
$ cd /var/www/
$ mkdir myproject
$ mkdir myproject-staging
```

And the content of the post-receive hook file could look like:

```bash
#!/bin/bash

# Bare repository directory.
GIT_DIR="/var/www/myproject-git"

while read oldrev newrev ref; do
    BRANCH=$(git rev-parse --symbolic --abbrev-ref $ref)
    if [ $BRANCH == "master" ]; then
        TARGET="/var/www/myproject"
    elif [ $BRANCH == "staging" ]; then
        TARGET="/var/www/myproject-staging"
    else
        echo "Branch not found. Skipping Deployment."
    fi
    # deploy only if var TARGET is set
    if [ -z ${TARGET} ]; then
        echo "no target set"
    else
        echo "STARTING DEPLOYMENT..."
        echo "Push to ${BRANCH} received! Deploying branch: ${BRANCH} to: ${TARGET}"
        # deploy to our target directory.
        git --work-tree=$TARGET --git-dir=$GIT_DIR checkout -f $BRANCH
    fi
done
```

Now everything you push to your staging branch will be deployed to /var/www/myproject-staging, and commits to the master branch to /var/www/myproject.

We really do enjoy this deployment workflow. Everything is neat and clean. No need to keep track of which files you already uploaded via SFTP. Peace of mind :-)

I basically put together bits and pieces I found around the web to set this up. Would be eager to see how you implement stuff like this.
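If you want to verify that the hook actually fires, push something and watch the output; git relays the hook's echo lines back to your terminal prefixed with "remote:". A rough sketch, with the message taken from the single-branch script above:

```bash
# from the local project folder
git push origin main
# ...
# remote: Push received! Deploying branch: main...
```

One thing worth knowing: checkout -f only writes files tracked by git, so untracked files in the target directory (for example uploads in site/assets/files) are left untouched.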
-
@teppo Maybe Soma's LogMaintenance module would be useful?
-
Take a look here: https://processwire.com/blog/posts/pw-3.0.141/#example-1-changing-the-templates-directory
-
Hi @pmx, welcome to the forums and to ProcessWire.

Another alternative is to use what can be referred to as template partials. I use Tailwind myself. Recently, I faced a similar situation to yours, with <div> seemingly having to be appended to $content. Below, I'll explain how I got around this.

First, to answer your questions: it is technically possible, but it sort of goes against the idea of delayed output. It is also not easy to maintain if you have to chase your 'echo's when debugging. This is because in delayed output, /site/templates/_init.php is prepended to the output and /site/templates/_main.php is appended. When you echo in your template file, e.g. basic-page.php, the output of the echo comes before the output of _main.php. The idea with delayed output is that you don't echo anything in basic-page.php but defer it for later echoing in _main.php, hence the name 'delayed' output.

Back to the alternative I mentioned above: it might seem a bit complicated and will also result in extra files, but it definitely leads to separation of the markup, so that you don't have to deal with <div>s in your templates. I have used this approach recently for a demo site for an ecommerce module that I am developing. The demo site can be found here. Please note that the demo site will require Padloper to use; however, I am pointing you to it as an example of the 'partial templating' I am talking about. I don't know how comfortable you are with PHP, so let me know if you want me to explain things further.

Below is an example of how a template products.php calls and gets the markup for a single product. This is the products template: https://github.com/kongondo/Padloper2Starter/blob/main/products.php. In it, a call gets the markup of a single product to be rendered and appends it to its $content. That call runs a function renderSingleProduct() available in /site/templates/_func.php, a shared functions file included in /site/templates/_init.php. The important part here is that renderSingleProduct() sets a number of variables that the partial template of a single product, i.e. single-product-html.php, will need. These are $product, $totalAmount, etc. This is achieved by using the ProcessWire class TemplateFile. For example:

```php
<?php namespace ProcessWire;

$totalAmount = 7895;
$foo = 'foo string';
$bar = 'bar'; // could even be a value from a field of some other page
$file = "single-product-html.php";
$templatePath = $config->paths->templates . "partials/" . $file;

// CREATE A NEW 'VIRTUAL' TEMPLATE FILE
/** @var TemplateFile $t */
$t = new TemplateFile($templatePath);

// SET/PASS VARIABLES TO THE TEMPLATE
// here we create variables and set their values
$t->set('foo', $foo);
$t->set('bar', $bar);
// here, the variable $testVariable will be assigned the value of $totalAmount
$t->set('testVariable', $totalAmount);

// -------------
$out = $t->render();
// in a function, return $out.
// It will be appended to $content in your template file
```

Written in a hurry and not tested.
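For completeness, the partial itself just uses those variables as if they were locals, since TemplateFile::set() makes each one available inside the rendered file. A hypothetical /site/templates/partials/single-product-html.php (the file name comes from the example above; the markup is made up for illustration):

```php
<?php namespace ProcessWire;
// Hypothetical partial: variables passed via $t->set() arrive here as locals.
?>
<div class="product">
    <h2><?= $foo ?></h2>
    <p>Bar: <?= $bar ?></p>
    <p>Total amount: <?= $testVariable ?></p>
</div>
```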
-
Hello @szabesz, thanks a lot for your help! This seems really powerful! Thank you very much.
-
@thetuningspoon, massive thanks for the POC! @ryan If I am editing a page with lots of repeaters on it (hundreds), would the new lazy loading improve (ProcessPageEdit) performance? This is aside from the current ajax-loading of repeater fields. Are you able to share (a gist maybe?) the script you used to automate the creation of fields, templates and fieldgroups please? It might help others quickly replicate your test install in order to help with testing this new feature. Thanks.
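In the meantime, if anyone wants to replicate a fields/templates-heavy install themselves, bulk-creating them through the API is straightforward. A minimal sketch, emphatically not Ryan's actual script; the counts and names are made up:

```php
<?php namespace ProcessWire;

// Run from a bootstrap script or the Tracy console.
// NOT the script from the blog post, just the standard public API calls.
for ($i = 1; $i <= 100; $i++) {

    // create a plain text field
    $field = new Field();
    $field->type = $modules->get('FieldtypeText');
    $field->name = "test_field_$i";
    $field->label = "Test field $i";
    $field->save();

    // create a fieldgroup holding title plus the new field
    $fg = new Fieldgroup();
    $fg->name = "test_template_$i";
    $fg->add($fields->get('title'));
    $fg->add($field);
    $fg->save();

    // create a template that uses the fieldgroup
    $template = new Template();
    $template->name = "test_template_$i";
    $template->fieldgroup = $fg;
    $template->save();
}
```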
-
Just to give my two cents: as a solo developer I think it would be nice to have automated generation of config files to version control templates and fields in Git, but I can absolutely understand Ryan if he doesn't find it necessary as a solo developer for his workflow, especially with his tools for exporting/importing fields, templates and pages. For teams working on a project, good version control would be really helpful for coordination, but as a solo developer I enjoy the way of creating and configuring templates and fields in the backend of ProcessWire. The last thing I want is to have to write blueprints and migrations for simple configurations. For example, the CMS Kirby uses blueprints in YAML for its templates and fields, and this is of course great for version control, but I find it slows down the development process, because you have to study the reference documentation instead of just creating a template or field in the back-end. In Kirby it is part of the core concept, but in ProcessWire it is not, and I hope it stays this way. If these config files for templates and fields were automatically generated in YAML or JSON somewhere in the site folder, where I could version control them, that would be nice. But personally, as a solo developer, I don't want to waste my time writing configuration files, migrations and composer dependencies.
-
Thank you, that worked! I wanted to loop through the results to template them manually, and for some reason I had to change the following, which did nothing:

```php
echo $limitedResults->renderPager();
```

to

```php
$pager = $modules->get("MarkupPagerNav");
echo $pager->render($limitedResults);
```

but otherwise, got there in the end.
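For anyone landing here later, the combined pattern of a manual loop plus pager render looks roughly like this. A minimal sketch; the selector and markup are assumptions, not the original code:

```php
<?php namespace ProcessWire;

// 'limit' makes the find paginated; ProcessWire reads the current page
// number from the URL (page2, page3, ...) automatically
$limitedResults = $pages->find("template=blog-post, limit=10, sort=-created");

// template each result manually instead of using a render helper
foreach ($limitedResults as $item) {
    echo "<h2><a href='{$item->url}'>{$item->title}</a></h2>";
}

// render the pagination links with MarkupPagerNav, as above
$pager = $modules->get("MarkupPagerNav");
echo $pager->render($limitedResults);
```

Page numbers also need to be enabled in the template's URLs settings in the admin for the pagination to work.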
-
@Bergy Sure, no problem. The scripts make some assumptions:

1. You have ssh access to your server.
2. You not only have access to your document root (mostly /var/www etc.), but also to your /home/user directory.
3. Your /home/user is at the same time your ssh user.
4. Your RSA public key is located under /home/user/.ssh/o2u_rsa.
5. All scripts are located under /home/user/.

For more comfort, register an (RSA) key of your client computer on the development remote server. With keygen you can easily create a key on your machine under Linux; you can also generate a key under Windows with PuTTY (with some plugins; google for "putty generate ssh key" or "ssh authentication without password"). A short example is sketched at the end of this post.

Explanation regarding the use of placeholders in the scripts:

```
# all placeholders are indicated with double braces -> {{ ... }}. Don't put your
# concrete names/data in braces at all; the double braces only indicate a placeholder.
# {{dev dbname}}        placeholder for your database name on the development server/webspace. EDIT: Or is it actually the db-username?
# {{prod dbname}}       placeholder for your database name on the production server/webspace. EDIT: Or is it actually the db-username?
# {{dev server user}}   placeholder for your username on the dev server/webspace.
# {{prod server user}}  placeholder for your username on the prod server/webspace.
# {{prod.server.name}}  placeholder for your production server name.
```

These scripts are working successfully at my host, Uberspace. The hosting OS used there is CentOS Linux (6). Feel free to correct me or optimize things; I'm always glad if I can learn something.

The following describes the scenario: deployment of a new dev version of a website to the production environment / server / webspace. The main script is deploy-dev-prod.sh. It executes two further scripts, like a PHP include.

deploy-dev-prod.sh

```bash
#!/bin/bash

# Deployment Script:
# Deploys the website from development environment to production environment.
# Should be placed in your /home/user folder on the server.
#
# Usage:
#   chmod +x deploy-dev-prod.sh
#   ./deploy-dev-prod.sh

# If a sqldump-dev-backup.sql file already exists, rename it to sqldump-dev-backup2.sql
if [ -e sqldump-dev-backup.sql ]
then
    mv sqldump-dev-backup.sql sqldump-dev-backup2.sql
    printf "\nSource Server: Done - rename sqldump-dev-backup.sql to sqldump-dev-backup2.sql\n\nwaiting for next step ...\n\n"
fi

# create MySQL dumpfile on source server
mysqldump {{dev dbname}} > sqldump-dev-backup.sql
# Attention: mysqldump --databases {{dev dbname}} > name.sql would export the database
# per se, which is not desired here. In our case only the tables should be exported,
# in order not to need to adjust the database credentials on the target server (-> site/config.php).

if [ "$?" -eq "0" ]
then
    printf "\nSource Server: Done - create sqldump-dev-backup.sql\n\nwaiting for next step ...\n\n"
    # if creation of sqldump-dev-backup.sql was successful, remove sqldump-dev-backup2.sql, 1 backup is sufficient
    if [ -e sqldump-dev-backup2.sql ]
    then
        rm sqldump-dev-backup2.sql
        printf "\nSource Server: Done - remove sqldump-dev-backup2.sql\n\nwaiting for next step ...\n\n"
    fi
else
    printf "\nSource Server: Error: creation of sqldump-dev-backup.sql failed\n\n"
fi

# rsync sqldump-dev-backup.sql to production server/webspace in ~ (= home folder)
rsync -e 'ssh -i /home/{{dev server user}}/.ssh/o2u_rsa' -vaH /home/{{dev server user}}/sqldump-dev-backup.sql {{prod server user}}@{{prod.server.name}}:/home/{{prod server user}}/

# Was rsync successful?
if [ "$?" = "0" ]
then
    printf "\nDone - sending file sqldump-dev-backup.sql to target server\n\nwaiting for next step ...\n\n"
else
    printf "\nError - sending file sqldump-dev-backup.sql to target server failed\n\n"
fi

# Backup Target Server (Auth via SSH Key)
ssh -i ~/.ssh/o2u_rsa {{prod server user}}@{{prod.server.name}} 'bash -s' < backup-prod-server.sh

# rsync cms processwire site/ files from dev to prod
rsync -e 'ssh -i /home/{{dev server user}}/.ssh/o2u_rsa' -vaH --log-file=rsync.log --exclude=config.php --exclude=assets/cache {{/absolute/path/on/your/dev/webspace/to/processwire/site/}} {{prod server user}}@{{prod.server.name}}:{{/absolute/path/on/your/prod/webspace/to/processwire/site}}
# attention: the dev path (= source) has a trailing slash, the prod path (= target) hasn't!
# Because we want to copy THE CONTENT of /dev/site/ to /prod/site

# Was rsync successful?
if [ "$?" = "0" ]
then
    printf "\nDone - rsync content of site folder with target server\n\nwaiting for next step ...\n\n"
else
    printf "\nError - rsync site folder failed\n\n"
fi

# Update CMS Database Target Server (Auth via SSH Key)
ssh -i ~/.ssh/o2u_rsa {{prod server user}}@{{prod.server.name}} 'bash -s' < update-prod-server.sh
```

backup-prod-server.sh

```bash
#!/bin/bash

# This script is part of deploy-dev-prod.sh and shouldn't be executed standalone.

# if a sqldump-prod-backup.sql file already exists, rename it to sqldump-prod-backup2.sql
if [ -e sqldump-prod-backup.sql ]
then
    mv sqldump-prod-backup.sql sqldump-prod-backup2.sql
    printf "\nTarget Server: Done - rename sqldump-prod-backup.sql to sqldump-prod-backup2.sql\n\nwaiting for next step ...\n\n"
fi

# create MySQL dumpfile on target server
mysqldump {{prod dbname}} > sqldump-prod-backup.sql
# Attention: mysqldump --databases {{prod dbname}} > name.sql would export the database
# per se, which is not desired here. In our case only the tables should be exported,
# in order not to need to adjust the database credentials on the target server (-> site/config.php).

if [ "$?" -eq "0" ]
then
    printf "\nTarget Server: Done - create sqldump-prod-backup.sql\n\nwaiting for next step ...\n\n"
    # if creation of sqldump-prod-backup.sql was successful, remove sqldump-prod-backup2.sql, 1 backup is sufficient
    if [ -e sqldump-prod-backup2.sql ]
    then
        rm sqldump-prod-backup2.sql
        printf "\nTarget Server: Done - remove sqldump-prod-backup2.sql\n\nwaiting for next step ...\n\n"
    fi
else
    printf "\nTarget Server: Error: creation of sqldump-prod-backup.sql failed\n\n"
fi

# copy {{/path/to/site/}} as backup to ~/site-backup/
if [ ! -d site-backup ]
then
    mkdir site-backup
fi

# always delete an existing folder before copying content into a folder with the same name
rm -R site-backup && cp -R /absolute/path/to/site/ site-backup

# was site backup successful?
if [ "$?" = "0" ]
then
    printf "\nTarget Server: Done - backup current site folder\n\nwaiting for next step ...\n\n"
else
    printf "\nTarget Server: Error - backup current site folder failed\n\n"
fi
```

and finally update-prod-server.sh

```bash
#!/bin/bash

# This script is part of deploy-dev-prod.sh and shouldn't be executed standalone.

# import database tables of source server into database of the target server
mysql {{prod dbname}} < sqldump-dev-backup.sql

# was import of sql dumpfile successful?
if [ "$?" -eq "0" ]
then
    printf "\nTarget Server: Done - import sqldump-dev-backup.sql\n\nDeployment finished successfully. Now reload the website"
else
    printf "\nTarget Server: Error: import of sqldump-dev-backup.sql failed\n\n"
fi
```

For updating ProcessWire I use pw-upgrade.sh:

```bash
#!/bin/sh
# credits to https://gist.github.com/craigrodway/66c9633ae5d865a9b090
#
# ProcessWire upgrade script
#
# Upgrades the ProcessWire ./wire directory.
# Use either master or dev branch.
#
# Usage:
#   chmod +x ./pw-upgrade.sh
#   ./pw-upgrade.sh

# go 1 level over document root:
cd {{/absolute/path/to/one/level/over/document-root}} # replace this path with your actual path without curly brackets

# if processwire-master-backup exists, rename processwire-master-backup to processwire-master-backup2
if [ -e processwire-master-backup ]
then
    mv processwire-master-backup processwire-master-backup2
    printf "\nDone - rename processwire-master-backup to processwire-master-backup2\n\nwaiting for next step ...\n\n"
fi

# rename processwire-master to processwire-master-backup
mv processwire-master processwire-master-backup
printf "\nDone - rename current processwire-master to processwire-master-backup\n\nwaiting for next step ...\n\n"

# download new version as tmp.zip - unzip it - and remove tmp.zip afterwards
wget -qO- -O tmp.zip https://github.com/processwire/processwire/archive/master.zip && unzip tmp.zip && rm tmp.zip
if [ "$?" -eq "0" ]
then
    printf "\nDone - downloaded new master\n\nwaiting for next step ...\n\n"
else
    printf "\nError - Download of new master failed\n\n"
fi

# delete processwire-master-backup2
rm -r processwire-master-backup2
if [ "$?" -eq "0" ]
then
    printf "\nDone - removed processwire-master-backup2 because we need only one backup version\n\nwaiting for next step ...\n\n"
else
    printf "\nError - processwire-master-backup2 couldn't be removed\n\n"
fi

# in html/ delete wire and index.php and replace them with the ones from the new processwire-master
cd html/
rm -R wire && rm index.php && cp -R ../processwire-master/wire/ wire && cp ../processwire-master/index.php index.php
if [ "$?" -eq "0" ]
then
    printf "\nDone - replace wire and index.php with the ones from new processwire-master\n\n"
    printf "\nUpgrade finished - now login in CMS backend and do some reloads\n\n"
fi
```

Hope this is a bit of help or inspiration. Feel free to give me your opinion. The only problem is that I'll be on holiday for the next 10-12 days and not available, but afterwards I'll have a look at this thread.
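As a companion to the key setup mentioned at the top of this post: on Linux, generating and registering a key usually boils down to a couple of commands. A rough sketch; the o2u_rsa file name follows the assumption in the scripts above:

```bash
# generate an RSA key pair stored as ~/.ssh/o2u_rsa and ~/.ssh/o2u_rsa.pub
ssh-keygen -t rsa -b 4096 -f ~/.ssh/o2u_rsa

# copy the public key to the production server so the scripts can
# authenticate without a password
ssh-copy-id -i ~/.ssh/o2u_rsa.pub {{prod server user}}@{{prod.server.name}}

# test the passwordless login
ssh -i ~/.ssh/o2u_rsa {{prod server user}}@{{prod.server.name}}
```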