Leaderboard
Popular Content
Showing content with the highest reputation on 01/30/2022 in all areas
-
Well, I couldn't help myself. Plus it wasn't as bad as I expected. @Didjee, the insert buttons and other features from Repeater Matrix v7/8 should work in the newly released Restrict Repeater Matrix v0.2.0.
4 points
-
Yes, production code could be overwritten if the local master is behind origin. But that should not be a problem, since you should always pull and merge locally before you push to origin. If you try to push a branch that is behind origin you will get a warning from git anyway. Of course this requires some discipline, but there is only one rule everyone in the team has to follow: always pull and merge before you push.

Personally I believe that builds do not belong on the server, but that is just an opinion. Building on the server requires all npm packages to be available there. I would not feel confident with that, seeing all the security/vulnerability warnings after an npm install. node_modules is excluded from most repos for a reason, I guess.

Your deployment strategy looks pretty similar to what I describe over there. Only we do not work with a second remote for production, but have added an additional push URL to our origin (GitHub). I can see the advantage of your approach in that it is not so easy to accidentally push things to production. We even use this on shared hosting where we have shell access and git available.
3 points
-
I've been interested in sharing my setup since it's radically changed over the last year for the better. Wish I could open the repo for the code of my flagship project, but it's the site for the company I work for and isn't mine: www.renovaenergy.com

Local Dev:
- Code editor is Sublime Text tuned for my preferences/workflow.
- OS is Ubuntu Linux; will probably distro-hop at some point like Linux users do.
- Environment is provided by Devilbox, which I can't recommend enough. It's a fast, (mostly) pre-configured yet customizable Docker tool with outstanding documentation. A ProcessWire-ready container is available.
- CSS/JS compiled by Gulp/Babel/Browserify for local dev and production builds. ES6 modules. Zero frameworks, no jQuery. Focus on lightweight JS and code splitting for better load times. CSS is compiled and split into separate files by media queries, which are loaded by browsers on demand based on screen size.
- Currently building out website unit/integration tests using Codeception. This is becoming increasingly necessary as the site becomes more complex.
- Firefox Developer Edition
- Tilix terminal emulator; Quake mode is awesome
- Cacher stores code/scripts/configs in the cloud for easy sharing across machines. IDE integration is solid.
- Meld for fast diffs
- WakaTime, because who doesn't like programming metrics for yourself?
- DevDocs, but locally in a Nativefier app. REQUEST: Star ProcessWire on GitHub. If a project has 7k+ stars it is a candidate to have its documentation added to DevDocs.

Production:
- Code editor is Vim on the server.
- Deployment is via Git. Local repositories have a secondary remote that pushes code to production via a bare Git repo, which updates assets on the server using hooks.
- Access to the server via SSH only. Changes to files are only made locally and pushed.
- Hosting by DigitalOcean with servers custom built from the OS up for performance/security.
- Custom PageSpeed module implementation. Automatic image conversion to webp, file system asset caching, code inlining, delivery optimization, cache control, etc. Driven TTFB down to <=500ms on most pages with load times around 2 seconds, sometimes less if I'm lucky haha.
- StatusCake monitors uptime, automated speed tests, server resources, and HTTPS cert expiration.
- PagerDuty is integrated with StatusCake, so issues like servers going down, server resources (RAM/disk/memory) running low, and whatever else send notifications to all your devices.
- 7G Firewall rules are added to the PW .htaccess file to block a ton of bots and malicious automated page visits. Highly recommended.
- Mailgun for transactional email

ProcessWire Modules & Features:
- Modules (most used): CronjobDatabaseBackup, ProFields, Fluency, ImageBlurHash, MarkupSitemap, PageListShowPageId, ProDevTools, TracyDebugger, ListerPro, ProDrafts
- Template cache. We used ProCache initially but saw some redundancies/conflicts between it and the PageSpeed tools on the server. Would absolutely recommend ProCache if your hosting environment isn't self-managed.
- All configurations are saved in .env files which are unique to local/staging/production environments, with contents stored as secure notes in our password manager. This is achieved using the phpdotenv library loaded on boot in config.php, where sensitive configurations and environment-dependent values are made securely available application-wide (see the sketch after this list).
- Extensive use of ProcessWire image resizing and responsive srcset images in HTML for better performance across devices.
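For anyone wanting to try the .env approach from the list above, here is a minimal sketch of how it might look, assuming the vlucas/phpdotenv package installed via Composer in the web root and a .env file alongside it (kept out of the repo via .gitignore). The file locations and key names are hypothetical, not taken from the setup described:

```php
<?php namespace ProcessWire;

// site/config.php (sketch) - load environment variables on boot.
// Assumes: composer require vlucas/phpdotenv, with .env in the web root.
require_once __DIR__ . '/../vendor/autoload.php';

$dotenv = \Dotenv\Dotenv::createImmutable(__DIR__ . '/..');
$dotenv->load();

// Hypothetical keys - use whatever your .env actually defines.
$config->dbName = $_ENV['DB_NAME'];
$config->dbUser = $_ENV['DB_USER'];
$config->dbPass = $_ENV['DB_PASS'];
$config->debug  = ($_ENV['APP_ENV'] ?? 'production') !== 'production';
```

The same .env file can then carry API keys and other per-environment values without them ever entering the repository.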
URL Hooks - Use case: We rolled out a web API so external platforms can make RESTful JSON requests to the site at dedicated endpoints. The syntax resembles application frameworks, which made development really enjoyable and productive. The code is organized separately from the templates directory and allowed for clean separation of responsibilities without dummy pages or having to enable URL segments on the root page. It also allowed for easily building pure endpoints to receive form submissions.

Page Classes - My usage: This was a game changer. Removing business logic from templates (only loops, variables, and if statements allowed) and using an OOP approach has been fantastic. Not sure if many people are using this, but it's made the code much more DRY, predictable, and well organized. Implementing custom rendering methods in DefaultPage allowed for easily "componentizing" common elements (video galleries, page previews, forms, etc.) so that they are rendered from one source. Helped achieve no HTML in PHP and no PHP in HTML (with the exceptions above). It also allows for using things like PHP traits to share behavior between specific page classes. I completely fell in love all over again with PW over this and now I couldn't live without it. This literally restructured the entire site for the better. (See the sketches of both features below.)

Probably other stuff but this post is too long anyway haha.
3 points
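For readers who haven't tried these two features, here are two tiny hypothetical sketches of what they can look like in practice. The endpoint path, class, and method names are invented for illustration, not taken from the site described above.

A URL hook (available since ProcessWire 3.0.173) answering a JSON request without any dummy page:

```php
<?php namespace ProcessWire;

// site/ready.php (sketch) - a bare JSON endpoint via a URL/path hook.
$wire->addHook('/api/v1/status', function(HookEvent $event) {
  header('Content-Type: application/json');
  // Returning a string from a URL hook outputs it as the response.
  return json_encode(['status' => 'ok', 'time' => time()]);
});
```

And a custom page class (requires $config->usePageClasses = true) with a shared render method:

```php
<?php namespace ProcessWire;

// site/classes/DefaultPage.php (sketch) - componentized rendering helper.
class DefaultPage extends Page {
  public function renderPreview(): string {
    return "<a href='{$this->url}'>{$this->title}</a>";
  }
}
```

Any template file can then call $page->renderPreview(), so the markup lives in exactly one place.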
-
That's a pretty great strategy. I've thought about moving builds to the server; my approach will probably be updating the hook below to run a Gulp build script automatically. Question about your pre-push hook: does that make it possible to accidentally overwrite production code when the local branch is behind master? Asking since I haven't used a pre-push for deployment, and I'm wondering if the files are being copied to the server before your local repo finds out that it could be behind the remote on GitHub.

I'm going to describe our full setup for clarity, because we don't use managed servers and that requires a bit more configuration. I included some details at the end for using this with managed hosting, which is easier.

On our servers there is a Linux user called 'deployment' which contains bare Git repositories for each site in '/home/deployment/sites' with this post-receive hook:

```bash
#!/bin/bash
while read oldrev newrev ref
do
  if [[ $ref =~ .*/main$ ]]; then
    echo "Main ref received. Deploying to production..."
    sudo git --work-tree=/path/to/hosting/directory --git-dir=/path/to/deployment/repo checkout -f
    # This shouldn't be required on managed hosting
    setsid sudo chown -R www-data:www-data /path/to/hosting/directory > /dev/null 2>&1 < /dev/null &
    setsid find /path/to/hosting/directory -type d ! -perm 755 -exec sudo chmod 755 {} \; > /dev/null 2>&1 < /dev/null &
    setsid find /path/to/hosting/directory -type f ! -perm 644 -exec sudo chmod 644 {} \; > /dev/null 2>&1 < /dev/null &
  else
    echo "Ref $ref received. Not deploying production: only the main branch may be deployed on this server."
  fi
done
```

Locally we have an additional remote called 'production'. We also use this for deployment to staging, where the remote only accepts pushes from the 'development' branch. So 'git push' pushes to our GitLab repo, and 'git push production' sends code live:

```
production  deployment@website.com:sites/website.com.deploy.git (push)
```

Thinking about it now, it would be a good idea to write a bash script that pushes to production only when the push to GitLab is successful, to further ensure all main branches match (writing myself a todo for this).

Things I like about this approach:
- Only files that have changed are copied to the public directory, which is fast and efficient.
- PW core, modules, and the extensive application code we have in /site outside of the templates directory are included. Things like PW logs and translation files are excluded via .gitignore.
- Config values are kept in a .env file, so 'config.php' still lives in the repo and changes can be pushed.
- It is not possible for anyone to overwrite work that was pushed, because the local branch will be behind the production branch.
- Server login passwords are disabled at the OS level, so SSH keys are used. Pushes require no password.
- I wrote an interactive bash script on the server to add new sites, which automatically creates hosting directories, the Apache virtual host file, and the deployment repository, all from pre-written templates. Keeps the setup predictable, error free, and easy to use consistently with very little work.

When I complete the testing suite I'm going to add a pre-push hook locally and modify the post-receive hook to execute tests and require that all pass before deploying. Eventually I'll be putting all of this on a CI/CD pipeline, but for now this smaller-scale approach is just fine. I don't have the time to revamp our deployment strategy at the moment haha.
Differences in hosting environments: For un-managed hosting, the lines that begin with 'setsid' are required to change ownership to the Apache user and set file permissions in the hosting directory after copy. If you're managing the web server, you probably already know what to do as far as user/permission management for 'deployment'. For managed hosting (I use Dreamhost for some projects) no user/permission configs are required, so all the 'setsid' lines can be deleted. Only SSH access and Git on the managed hosting server are needed. Just create a sister directory to your website directory, initialize a bare repo with 'git init --bare', add the post-receive hook with the proper directory locations, and remember to 'chmod +x' your post-receive hook file. This can probably be optimized more, but I've been using it for years and it works ¯\_(ツ)_/¯
2 points
-
I have already gotten a pretty good idea about how to utilize ProcessWire in the sites I make, but there is something nagging me. I see a button "Add new" next to the page tree, and when I click it, I have 1 option: Bookmarks. I have clicked it, and added a page to the bookmarks, but I don't understand what this actually does. I don't see anything different on that page, and I can't understand the explanation that is provided ("The pages you select above represent bookmarks to the parent pages where you want children added. Note that if a user does not have permission to add a page to a given parent page (whether due to access control or template family settings), the bookmark will not appear."). So, I guess the question boils down to: what do bookmarks do?
1 point
-
Hi all, I got inspired to write this little tutorial by @FireWire's post. We are using the same deployment workflow that he mentions for all new projects. I tried different approaches in the past, including GitHub Actions and GitLab runners. Setting those up always felt like a PITA to me, especially since all I wanted was to automatically deploy my project to staging or live on push.

Whom this is for: Single devs or teams who want to streamline their deployment process with native git methods.

Requirements:
- shell access to the server
- git installed on the server and locally

If you don't have shell access and git on the server, upgrade or switch hosting.

Walkthrough: In this example we will be using GitHub to host our code and a server of our choice for deployment. The project is called myproject.

Step 1 (github): Create a repository named myproject. Let's assume it is available at git@github.com:myaccount/myproject.git. This is our remote URL.

Step 2 (local): Create a project in the folder myproject, add the remote, and push it to GitHub like you usually would. The remotes of your project should now read like this inside the myproject folder:

```
$ git remote add origin git@github.com:myaccount/myproject.git
$ git remote -v
origin  git@github.com:myaccount/myproject.git (fetch)
origin  git@github.com:myaccount/myproject.git (push)
```

Step 3 (server): Log in via ssh to the server and go to the document root. We assume this to be '/var/www/'. We further assume the command for connecting to our server via ssh is 'ssh myuser@myserver'. Go to the web root, create a directory that will hold a bare git repo, cd into it, and create the bare git repository. A bare repo is one that does not contain the actual project files but only the version control information.

```
cd /var/www/
mkdir myproject-git && cd myproject-git
git init --bare
```

Step 4 (server): Create the root directory for your ProcessWire installation:

```
cd /var/www/
mkdir myproject
```

Step 5 (local): Now we add information about the bare git repo to our local git config, so that when we push changes, they will be pushed both to GitHub and to the bare git repo on our server. Inside our project folder we do:

```
git remote set-url --add --push origin myuser@myserver:/var/www/myproject-git
```

After that we need to add the original GitHub push remote again, because it got overwritten by the last command:

```
git remote set-url --add --push origin git@github.com:myaccount/myproject.git
```

Now the list of remotes should look like this:

```
$ git remote -v
origin  git@github.com:myaccount/myproject.git (fetch)
origin  myuser@myserver:/var/www/myproject-git (push)
origin  git@github.com:myaccount/myproject.git (push)
```

We have one fetch and two push remotes. This means that if you push a commit, it will be pushed to both GitHub and your server repo.

Step 6 (server): Here comes the actual deployment magic. We are using a git hook that fires a script after every push, called a post-receive hook. We move into the directory with the bare repository, change to the hooks directory, create the file that triggers the hook, and open it for editing with nano:

```
$ cd /var/www/myproject-git/hooks
$ touch post-receive
$ nano post-receive
```

Now we paste this script into the open editor and save it:

```bash
#!/bin/bash

# Bare repository directory.
GIT_DIR="/var/www/myproject-git"
# Target directory.
TARGET="/var/www/myproject"

while read oldrev newrev ref
do
  BRANCH=$(git rev-parse --symbolic --abbrev-ref $ref)
  if [[ $BRANCH == "main" ]]; then
    echo "Push received! Deploying branch: ${BRANCH}..."
    # Deploy to our target directory.
    git --work-tree=$TARGET --git-dir=$GIT_DIR checkout -f $BRANCH
  else
    echo "Not main branch. Skipping."
  fi
done
```

What this does is check out (copy) all files that are in the repository to our ProcessWire root directory every time we push something. And that is exactly what we wanted to achieve.

This example setup is for a single branch. If you want to make this work with multiple branches, you need to make some small adjustments. Let's assume you have one staging and one live installation, where the web root for live is at /var/www/myproject and for staging at /var/www/myproject-staging. In Step 4 above you would create a second directory:

```
$ cd /var/www/
$ mkdir myproject
$ mkdir myproject-staging
```

And the content of the post-receive hook file could look like this:

```bash
#!/bin/bash

# Bare repository directory.
GIT_DIR="/var/www/myproject-git"

while read oldrev newrev ref; do
  BRANCH=$(git rev-parse --symbolic --abbrev-ref $ref)
  if [ "$BRANCH" == "master" ]; then
    TARGET="/var/www/myproject"
  elif [ "$BRANCH" == "staging" ]; then
    TARGET="/var/www/myproject-staging"
  else
    echo "Branch not found. Skipping deployment."
  fi
  # Deploy only if TARGET is set.
  if [ -z "${TARGET}" ]; then
    echo "No target set."
  else
    echo "STARTING DEPLOYMENT..."
    echo "Push to ${BRANCH} received! Deploying branch: ${BRANCH} to: ${TARGET}"
    # Deploy to our target directory.
    git --work-tree=$TARGET --git-dir=$GIT_DIR checkout -f $BRANCH
  fi
done
```

Now everything you push to your staging branch will be deployed to /var/www/myproject-staging, and commits to the master branch to /var/www/myproject.

We really do enjoy this deployment workflow. Everything is neat and clean. No need to keep track of which files you already uploaded via SFTP. Peace of mind :-) I basically put together bits and pieces I found around the web to set this up. Would be eager to see how you implement stuff like this.
1 point
-
I have used Stripe Checkout and haven't run into this issue. Are you redirecting to the exact same domain? Things like this have happened to me when I redirect from Stripe to a different subdomain like "www.domain.com" while the session was initially started on "domain.com".
1 point
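If a "domain.com" vs. "www.domain.com" mismatch is indeed the culprit, one generic PHP-level fix (a sketch, not Stripe-specific advice) is to scope the session cookie to the parent domain before the session starts, so both hosts share the same session:

```php
<?php
// site/config.php (sketch) - runs before ProcessWire starts the session.
// The leading dot makes the cookie valid for domain.com and all subdomains.
ini_set('session.cookie_domain', '.domain.com');
```

Redirecting every request to a single canonical host also sidesteps the problem entirely.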
-
You need to give them this mantra: "pull - merge - push" and let them recite it 100 times a day, at least. The hook only knows that it needs to deploy on push, so it won't deploy when someone merges stuff on the server. From that perspective it is pretty fail-safe. Give it a try; I think you'll love it.
1 point
-
@Robin S wow, this is really great! Thanks for spending your weekend hours on this update.
1 point
-
Exactly. staging.myproject.tld would point to the folder /var/www/myproject-staging. We never merge things on the server; we always merge stuff locally first and then push, since the post-receive hook fires only after a push. The most important thing to remember when working with this strategy is: always pull/merge before you push.
1 point
-
The most common way to use the "Add New" feature is by defining "Allowed template(s) for children" and "Allowed template(s) for parents" on the templates in question (these live in each template's Family settings). When you have this configured, you can quickly add new pages of the child template via the "Add New" button and menu link.

The "Bookmarks" feature applies when you haven't defined allowed parent/child template settings, but you want a shortcut for adding a new child page under some specific parent page. So if you had some deeply nested page that you often want to add new child pages under, you could set a bookmark for that parent. Clicking the bookmark would then be the same as opening the page tree to that parent and clicking "Add", but it would perhaps save you some time. It's not a feature that I've found all that useful in practice.
1 point
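For completeness, those family settings can also be set from the API rather than the admin UI. Here is a hedged sketch of that idea; the template names are examples only:

```php
<?php namespace ProcessWire;

// Sketch: allow 'blog-post' pages only under a 'blog' parent.
$blog = $templates->get('blog');
$post = $templates->get('blog-post');

$blog->childTemplates  = [$post->id]; // "Allowed template(s) for children"
$post->parentTemplates = [$blog->id]; // "Allowed template(s) for parents"

$blog->save();
$post->save();
```

Once configured this way, the "Add New" button offers the child template without needing any bookmark.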
-
All personal projects have used the dev branch for quite some time now, and even upcoming client projects use the dev branch. Besides 3.0.191, all have been super stable and reliable so far. But still on PHP 7.4.
1 point
-
You can't add additional markup while adding classes with CKEditor - or at least there is no way I know of (so there is still a chance). Why do you need this kind of markup at all? Styling? To achieve that, you could write a bit of JS that adds another span for whatever purpose you need.
1 point
-
This is kind of a cross-post... in order to keep this thread updated with related thoughts. In summary: @ryan and @horst use what PW provides, @bernhard uses his RockMigrations, and the rest of us use whatever fits our needs. I'm totally fine with that, and really appreciate it. Yet somehow I'd really love to see a CraftCMS-like JSON/YAML-recorder-style tool (even though I never saw or used Craft) like the one @bernhard showed us. What if we open a money pool for such a development? Anyone interested?

If I could have some kind of solid working solution for a real-time JSON/YAML file export/import which I only need to migrate via Git, without writing any functions, code or whatever, solely based on my PW setup, to make all my instances the same (sure, with some checks in the background), then... YES... that would be my dream. Still, I want and have to try and test the solution CraftCMS has available.

I put my money where my mouth is and start upfront with EUR 500 in the pool. I will post some details of all the features I'd like to have and see in such a module in the upcoming days, maybe a week or two. Whoever wants to join, let me know. In case you want to develop such a module, PLEASE let me know. I want to see this module happen in some form or another (community or pro module). My goal is making PW even better, and maybe this could be another step.

Crossposted from
1 point
-
Happy to report this module has been fairly robust in some of our clients' PW instances. Thanks for all your work! One thing I noticed is that the revision timestamp is being generated by the database and not via PHP. So when the two services are on separate machines with different timezone configurations, the timestamps don't match up with the activity on the web server. Just discovered this as one of our client's databases went down and a failover database in another zone took its place. Revisions were showing that the users had traveled to the future... 5 hours from now.
1 point
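One way to guard against that kind of drift, assuming MySQL/MariaDB behind ProcessWire's $database (this is a generic sketch, not a patch for the module), is to pin the database session to UTC and generate timestamps in PHP, where the application timezone is known:

```php
<?php namespace ProcessWire;

// Sketch: keep DB-generated times from drifting relative to the web server.
// Pin this connection's session timezone to UTC...
$database->exec("SET time_zone = '+00:00'");

// ...and/or generate the timestamp in PHP instead of relying on the DB's NOW().
$created = date('Y-m-d H:i:s'); // honors $config->timezone
```

With the timestamp created in PHP, a failover database in another zone can no longer send revisions into the future.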
-
Same as @teppo, upgraded a lot of sites to the latest dev midweek and switched them to PHP 8.1. So far all good.
1 point
-
Switched all my personal sites to the latest dev branch last week (after accidentally updating the server from PHP 8 to 8.1 — whoops...) and have had no issues — as far as I know — so far. Apart from a few minor ones (deprecation warnings) related to PHP 8.1, but those are already reported via GitHub. I'd say that it seems pretty solid so far.
1 point
-
TextformatterRockHeadlineIDs

A textformatter that applies id attributes to all headlines (h1-h6) in the markup field.

```
// input
<h1>This is my headline</h1>

// output
<h1 id='this-is-my-headline'>This is my headline</h1>
```

Download & Docs:
https://processwire.com/modules/textformatter-rock-headline-ids/
https://github.com/baumrock/TextformatterRockHeadlineIDs

PS: There is a similar module I didn't know about before:
1 point
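For the curious, the general technique behind a textformatter like this is straightforward. Below is a rough, hypothetical sketch of the idea only; it is not the module's actual code, and the function name is invented:

```php
<?php namespace ProcessWire;

// Sketch: add slug-based id attributes to h1-h6 tags in a markup string.
function addHeadlineIds(string $html): string {
  return preg_replace_callback(
    '/<h([1-6])([^>]*)>(.*?)<\/h\1>/is',
    function($m) {
      if(stripos($m[2], 'id=') !== false) return $m[0]; // keep existing ids
      $slug = trim(preg_replace('/[^a-z0-9]+/', '-', strtolower(strip_tags($m[3]))), '-');
      return "<h{$m[1]}{$m[2]} id='{$slug}'>{$m[3]}</h{$m[1]}>";
    },
    $html
  );
}
```

The real module presumably handles edge cases (duplicate slugs, non-ASCII characters) beyond this sketch.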
-
I think we've been talking past each other a bit, so let me see if I can word it differently. The cache should never return an expired value, but I can understand that the strange behavior in your tests 5 and 6 suggests differently. If you don't want to lose a value, don't let it expire. Instead, put the logic into your code to overwrite the cached value with a newer one when your API call is successful. And perhaps store the last retrieval time in your cached value, so you have the creation timestamp you mentioned. Perhaps code says more than a thousand words:

```php
<?php

/**
 * Code for cron job
 */
$apiResult = your_api_call();
$cachedResult = json_decode($cache->get('yourcachekey'), true);

if($apiResult) { // Only if API call was successful
  if(!$cachedResult || $cachedResult['created'] < time() - 300) {
    // Cache entry older than 5 minutes, write new value to cache
    $cache->save(
      'yourcachekey',
      json_encode(['created' => time(), 'apiResult' => $apiResult]),
      WireCache::expireNever
    );
  }
}

/**
 * Code for frontend use of API data
 */
$cachedResult = json_decode($cache->get('yourcachekey'), true);
$apiResult = $cachedResult['apiResult'];

if($cachedResult['created'] < strtotime('-1 month')) {
  // Deal with a stale cache value that hasn't been updated in over a month here
}
```
1 point
-
Yes, it is about how you build the selector. When you are dealing with input coming from a search form, you usually build up the selector in a string variable $selector, adding pieces to $selector for each populated GET variable. But before you try that, start with something simpler: rather than building up a selector string dynamically, experiment with writing out different selector strings and supplying them to $pages->find() to see if you get back the pages you expect. The selectors documentation is here: https://processwire.com/docs/selectors/

Each "clause" you add to the selector works with AND logic by default:

```
field_1=foo, field_2=bar
```

This matches pages where field_1=foo AND field_2=bar. For OR logic with different fields and values, check the documentation for OR-groups:

```
(field_1=foo), (field_2=bar)
```

This matches pages where field_1=foo OR field_2=bar. Regarding OR conditions for multiple values in a single field, or multiple fields with a single value, see the following sections:
https://processwire.com/docs/selectors/#or-selectors1
https://processwire.com/docs/selectors/#or-selectors2

When you have sorted out what kind of selector you need to match the pages you want, you can move on to building up the selector dynamically according to your GET variables.
1 point
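Once the static selectors behave as expected, building one dynamically from GET variables might look like the sketch below. The field names and GET parameters are examples only; note $sanitizer->selectorValue(), which makes user input safe to embed in a selector string:

```php
<?php namespace ProcessWire;

// Sketch: build a selector string from populated GET variables.
$selector = "template=product, limit=20";

if($q = $sanitizer->text($input->get('keywords'))) {
  $selector .= ", title|body%=" . $sanitizer->selectorValue($q);
}
if($color = $sanitizer->text($input->get('color'))) {
  $selector .= ", color=" . $sanitizer->selectorValue($color);
}

$results = $pages->find($selector);
```

Each populated GET variable appends one AND clause; swap in OR-groups from the docs above wherever you need OR logic.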