
Share your PW dev setups / tools


benbyf


YAML is easier to read and write than PHP and less error prone (yeah, I know, we are devs, but there might be inexperienced devs/users who try to accomplish something fast). Indeed I agree that PHP would offer more possibilities, but I think most of the time YAML or another configuration markup is enough.

21 hours ago, szabesz said:

since the recorder would generate the text file, a developer would just read the file and not edit it anyway

Every developer is different here. Some might prefer writing directly to the file to change things or add new fields/templates; others prefer to do it via the admin.

Update: I would like to shift this conversation to a newly created thread, because we are getting off-topic from sharing our setups/tools.

 


On 1/18/2022 at 8:57 PM, bernhard said:

What I'd like to know from you guys is why everybody seems to be so excited about YAML? Is it about the YAML thing, or is it about the recorder? I'm asking because YAML would really be a drawback IMHO. What I think would be much better is a regular PHP file that returns a simple array. That's almost the same as a YAML file, but you can do additional things if you want or need. See this example:

@bernhard YAML is preferable because it's declarative instead of imperative. This has a couple of side benefits, like cleaner diff views in Git, no formatting issues or differing styles, and no 'noise' in your commits (all of which only matter if you have a Git-based workflow with pull requests). But the big thing is that it makes it impossible to create environment-specific configuration, which is exactly what you don't want. If you embrace the configuration as the source of truth for the entire site state (excluding content), you won't need that anyway.

Take your example where you switch a field based on whether the languages module is installed - I would flag this in a PR and consider it an antipattern. Whether a site is multi-language or not should be part of the configuration. If it isn't, there's no way to guarantee that code which works in staging will also work in production - at that point you're doing all the work of controlled deployments and version control without getting the benefits.

Another downside of PHP is that it's one-directional by default. With YAML, if a deployment fails, you can just roll back to the earlier version and apply that version's configuration. With PHP, this may work if the migration is just one single $rm->migrate call with an array of configuration (so it's basically a declarative config). But you have no guarantee that it will, and if any logic in your migration depends on the previous state of the site to migrate to a new state, the migration is irreversible.

Migrations do have their place - if you really need to perform some logic, like moving content from one field or format to another. But beyond that, declarative configuration files are preferable.
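To make that shape concrete, here's a minimal sketch of a declarative PHP config as a single $rm->migrate() call, assuming RockMigrations' array-based API; the field and template names are invented for illustration:

<?php namespace ProcessWire;

// One declarative migrate() call: rolling back means re-applying the
// previous version of this array. Property keys follow RockMigrations
// conventions and may need adjusting for your version of the module.
$rm = $modules->get('RockMigrations');
$rm->migrate([
    'fields' => [
        'intro' => ['type' => 'textarea', 'label' => 'Intro text'],
    ],
    'templates' => [
        'blog-post' => ['fields' => ['title', 'intro', 'body']],
    ],
]);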


I've been interested in sharing my setup, since it's changed radically over the last year for the better. I wish I could open the repo for the code of my flagship project, but it's the site for the company I work for and isn't mine: www.renovaenergy.com

Local Dev:

  • Code editor is Sublime Text tuned for my preferences/workflow.
  • OS is Ubuntu Linux, will probably distro-hop at some point like Linux users do.
  • Environment is provided by Devilbox, which I can't recommend enough. It's a fast, (mostly) pre-configured yet customizable Docker tool with outstanding documentation. A ProcessWire-ready container is available.
  • CSS/JS compiled by Gulp/Babel/Browserify for local dev and production builds. ES6 modules. Zero frameworks, no jQuery. Focus on lightweight JS and code splitting for better load times. CSS is compiled and split into separate files by media queries which are loaded by browsers on demand based on screen size.
  • Currently building out website unit/integration tests using Codeception. This is becoming increasingly necessary as the site becomes more complex.
  • Firefox Developer Edition
  • Tilix terminal emulator, Quake mode is awesome
  • Cacher stores code/scripts/configs in the cloud for easy sharing across machines. IDE integration is solid
  • Meld for fast diffs
  • WakaTime because who doesn't like programming metrics for yourself?
  • DevDocs, but locally in a Nativefier app. REQUEST: Star ProcessWire on GitHub. If a project has 7k+ stars, it is a candidate to have its documentation added to DevDocs.

Production:

  • Code editor is Vim on server
  • Deployment is via Git. Local repositories have a secondary remote that pushes code to production via a bare Git repo which updates assets on the server using hooks.
  • Access to server via SSH only. Changes to files only made locally and pushed.
  • Hosting by DigitalOcean with servers custom built from the OS up for performance/security.
  • Custom PageSpeed module implementation: automatic image conversion to WebP, file system asset caching, code inlining, delivery optimization, cache control, etc. Drove TTFB down to <=500ms on most pages, with load times around 2 seconds, sometimes less if I'm lucky haha
  • StatusCake handles uptime monitoring, automated speed tests, server resource checks, and HTTPS cert monitoring.
  • PagerDuty is integrated with StatusCake, so issues like servers going down, low server resources (RAM/disk), and whatever else trigger notifications on all your devices.
  • 7G Firewall rules are added to the PW .htaccess file to block a ton of bots and malicious automated page visits. Highly recommended.
  • Mailgun for transactional email

ProcessWire Modules & Features:

  • Modules (most used): CronjobDatabaseBackup, ProFields, Fluency, ImageBlurHash, MarkupSitemap, PageListShowPageId, ProDevTools, TracyDebugger, ListerPro, ProDrafts
  • Template cache. We used ProCache initially but saw some redundancies/conflicts between it and PageSpeed tools on the server. Would absolutely recommend ProCache if your hosting environment isn't self-managed.
  • All configuration is saved in .env files which are unique to the local/staging/production environments, with contents stored as secure notes in our password manager. This is achieved using the phpdotenv package loaded on boot in config.php, where sensitive configurations and environment-dependent values are made securely available application-wide (see the first sketch after this list).
  • Extensive use of ProcessWire image resizing and responsive srcset images in HTML for better performance across devices.
  • URL Hooks - Use case: we rolled out a web API so external platforms can make RESTful JSON requests to the site at dedicated endpoints. The syntax resembles application frameworks, which made development really enjoyable and productive. The code is organized separately from the templates directory and allowed for clean separation of responsibilities without dummy pages or having to enable URL segments on the root page. It also made it easy to build pure endpoints to receive form submissions (see the second sketch after this list).
  • Page Classes - My usage: this was a game changer. Removing business logic from templates (only loops, variables, and if statements allowed) and using an OOP approach has been fantastic. Not sure if many people are using this, but it's made the code much more DRY, predictable, and well organized. Implementing custom rendering methods in DefaultPage allowed for easily "componentizing" common elements (video galleries, page previews, forms, etc.) so that they are rendered from one source. This helped achieve no HTML in PHP and no PHP in HTML (with the exceptions above). It also allows for using things like PHP Traits to share behavior between specific Page Classes (see the third sketch after this list). I completely fell in love with PW all over again over this and now I couldn't live without it. This literally restructured the entire site for the better.
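
A few sketches of the items above, in order. First, the .env loading in config.php; a minimal sketch assuming vlucas/phpdotenv is installed via Composer at the project root (the variable names are invented):

<?php namespace ProcessWire;
// site/config.php (excerpt)
require_once __DIR__ . '/../vendor/autoload.php';
\Dotenv\Dotenv::createImmutable(dirname(__DIR__))->load(); // reads .env in the PW root

$config->dbName = $_ENV['DB_NAME'];
$config->dbUser = $_ENV['DB_USER'];
$config->dbPass = $_ENV['DB_PASS'];
$config->debug  = ($_ENV['APP_ENV'] ?? 'production') !== 'production'; // debug everywhere but production

Second, a URL hook; the endpoint and field names here are hypothetical. Returning an array from a URL hook makes ProcessWire output it as JSON:

// site/ready.php (excerpt)
$wire->addHook('/api/v1/products/{id}', function($event) {
    $product = $event->pages->get('template=product, id=' . (int) $event->arguments('id'));
    if(!$product->id) return ['error' => 'Product not found'];
    return ['id' => $product->id, 'title' => $product->title];
});

Third, page classes; with $config->usePageClasses = true, ProcessWire maps each template to a class in site/classes/ named after it, falling back to DefaultPage. The renderPreview() helper and its markup are invented:

<?php namespace ProcessWire;
// site/classes/DefaultPage.php - shared rendering helpers live here
class DefaultPage extends Page {
    public function renderPreview(): string {
        return "<a href='{$this->url}'><h3>{$this->title}</h3></a>";
    }
}

// site/classes/ArticlePage.php (separate file) - used automatically
// for pages with the 'article' template
class ArticlePage extends DefaultPage {
    // template-specific business logic goes here instead of the template file
}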

Probably other stuff but this post is too long anyway haha.


1 hour ago, FireWire said:

Deployment is via Git. Local repositories have a secondary remote that pushes code to production via a bare Git repo which updates assets on the server using hooks.

We recently switched to exactly the same deployment strategy for new projects and will convert old ones too. This makes deployment so much easier compared to traditional SFTP setups. It doesn't require any external services like GitHub Actions and makes collaborating on projects very enjoyable.
We generally do not include built assets in the repo; instead we handle them through pre-push Git hooks on the local machine that trigger rsync tasks for the dist folder. How do you handle these?
Here's an example of our pre-push hook:

#!/bin/bash
# pre-push hook: before pushing the main branch to GitHub, rsync the
# built dist folder to the server. ([[ ]] is a bashism, so the shebang
# must be bash rather than sh.)

url="$2"
current_branch=$(git symbolic-ref HEAD | sed -e 's,.*/\(.*\),\1,')
wanted_branch='main'

if [[ $url == *github.com* ]]; then
    read -p "You're about to push, are you sure you won't break the build? Y/N? " -n 1 -r </dev/tty
    echo
    if [[ $REPLY =~ ^[Yy]$ ]]; then
        if [ "$current_branch" = "$wanted_branch" ]; then
            sshUser="ssh-user"
            remoteServer="remote.server.com"
            remotePath="/remote/path/to/site/templates/"
            echo "When prompted, please insert password."
            echo "Updating files..."
            rsync -av --delete site/templates/dist "$sshUser@$remoteServer:$remotePath"
            exit 0
        fi
    else
        echo "answer was no"
        exit 1
    fi
else
    # Pushes to remotes other than GitHub skip the build sync
    echo "rsync already finished"
fi
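For anyone wanting to try this: the script has to be saved as .git/hooks/pre-push in the local repository and made executable with 'chmod +x .git/hooks/pre-push', otherwise Git will not run it.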

 


16 hours ago, gebeer said:

We recently switched to exactly the same deployment strategy for new projects and will convert old ones too. This makes deployment so much easier compared to traditional SFTP setups. It doesn't require any external services like GitHub Actions and makes collaborating on projects very enjoyable.
We generally do not include built assets in the repo; instead we handle them through pre-push Git hooks on the local machine that trigger rsync tasks for the dist folder. How do you handle these?

That's a pretty great strategy. I've thought about moving builds to the server; my approach would probably be updating the hook below to run a Gulp build script automatically. A question about your pre-push hook: does it make it possible to accidentally overwrite production code when the local branch is behind master? I'm asking since I haven't used a pre-push hook for deployment, and I'm wondering if the files are copied to the server before your local repo finds out that it could be behind the remote on GitHub.

I'm going to describe our full setup for clarity, because we don't use managed servers and that requires a bit more configuration. I've included some details at the end for using this with managed hosting, which is easier. On our servers there is a Linux user called 'deployment' which contains bare Git repositories for each site in '/home/deployment/sites', with this post-receive hook:

#!/bin/bash

# post-receive hook: whenever the main branch is pushed, check it out
# into the hosting directory; pushes of other branches are ignored.
while read oldrev newrev ref
do
  if [[ $ref =~ .*/main$ ]]; then
    echo "Main ref received. Deploying to production..."
    sudo git --work-tree=/path/to/hosting/directory --git-dir=/path/to/deployment/repo checkout -f

    # This shouldn't be required on managed hosting: reset ownership to the
    # web server user and normalize permissions in the background so the
    # push isn't held up while find walks the directory tree.
    setsid sudo chown -R www-data:www-data /path/to/hosting/directory > /dev/null 2>&1 < /dev/null &
    setsid find /path/to/hosting/directory -type d ! -perm 755 -exec sudo chmod 755 {} \; >/dev/null 2>&1 < /dev/null &
    setsid find /path/to/hosting/directory -type f ! -perm 644 -exec sudo chmod 644 {} \; >/dev/null 2>&1 < /dev/null &
  else
    echo "Ref $ref received. Not deploying production: only the main branch may be deployed on this server."
  fi
done

Locally we have an additional remote called 'production'. We also use this for deployment to staging, where the remote only accepts pushes from the 'development' branch. So 'git push' pushes to our GitLab repo, and 'git push production' sends code live.

production deployment@website.com:sites/website.com.deploy.git (push)
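(For reference, that remote is added with 'git remote add production deployment@website.com:sites/website.com.deploy.git', substituting your own user, host, and path.)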

Thinking about it now, it would be a good idea to write a bash script that pushes to production only when the push to GitLab is successful, to further ensure all main branches match (writing myself a todo for this). Things I like about this approach:

  • Only files that have changed are copied to the public directory, which is fast and efficient.
  • PW core, modules, and the extensive application code we have in /site outside of the templates directory are all included. Things like PW logs and translation files are excluded via .gitignore.
  • Config values are kept in a .env file, so config.php still lives in the repo and changes to it can be pushed.
  • It is not possible for anyone to overwrite work that was already pushed, because their local branch will be behind the production branch.
  • Server login passwords are disabled at the OS level, so SSH keys are used and pushes require no password.
  • I wrote an interactive bash script on the server to add new sites, which automatically creates the hosting directories, Apache virtual host file, and deployment repository, all from pre-written templates. This keeps the setup predictable, error-free, and easy to use consistently with very little work.

When I complete the testing suite I'm going to add a pre-push hook locally and modify the post-receive hook to execute tests and require that all pass before deploying. Eventually I'll put all of this on a CI/CD pipeline, but for now this smaller-scale approach is just fine. I don't have the time to revamp our deployment strategy at the moment haha.

Differences in hosting environments:

For un-managed hosting, the lines that begin with 'setsid' are required to change ownership to the Apache user and set file permissions in the hosting directory after copy. If you're managing the web server, you probably already know what to do as far as user/permission management for 'deployment'.

For managed hosting (I use DreamHost for some projects) no user/permission configs are required, so all the 'setsid' lines can be deleted. Only SSH access and Git on the managed hosting server are needed. Just create a sister directory to your website directory, initialize a bare repo with 'git init --bare', add the post-receive hook with the proper directory locations, and remember to 'chmod +x' your post-receive hook file.

This can probably be optimized more but I've been using it for years and it works ¯\_(ツ)_/¯


4 hours ago, FireWire said:

Question about your pre-push hook: does it make it possible to accidentally overwrite production code when the local branch is behind master? I'm asking since I haven't used a pre-push hook for deployment, and I'm wondering if the files are copied to the server before your local repo finds out that it could be behind the remote on GitHub.

Yes, production code could be overwritten if the local master is behind origin. But that should not be a problem, since you should always pull and merge locally before you push to origin. If you try to push a branch that is behind origin, you will get a warning from Git anyway. Of course this requires some discipline, but there is only one rule everyone in the team has to follow: always pull and merge before you push.

 

4 hours ago, FireWire said:

I've thought about moving builds to the server

Personally I believe that builds do not belong on the server. But that is just an opinion. Building on the server requires all npm packages to be available there. I would not feel confident with that, seeing all the security/vulnerability warnings after an npm install. node_modules is excluded from most repos for a reason, I guess.

Your deployment strategy looks pretty similar to what I described over there. Only we do not work with a second remote for production, but have added an additional push URL to our origin (GitHub). I can see the advantage of your approach in that it is not so easy to accidentally push things to production. We even use this on shared hosting where we have shell access and Git available.


1 hour ago, gebeer said:


Personally I believe that builds do not belong on the server. But that is just an opinion. Building on the server requires all npm packages to be available there. I would not feel confident with that, seeing all the security/vulnerability warnings after an npm install. node_modules is excluded from most repos for a reason, I guess.


I don't know why I wasn't thinking about npm security issues... that was dumb on my part haha.


On 1/28/2022 at 11:04 PM, FireWire said:
  • URL Hooks - Use case: we rolled out a web API so external platforms can make RESTful JSON requests to the site at dedicated endpoints. [...]
  • Page Classes - My usage: this was a game changer. Removing business logic from templates and using an OOP approach has been fantastic. [...]

Holy smokes, how did I miss these two!!! So useful!!!! Thanks for sharing your setup. I'm definitely going to get on the PageSpeed train on DigitalOcean too, and the 7G Firewall looks super useful as well.


1 hour ago, benbyf said:

Holy smokes, how did I miss these two!!! So useful!!!! [...]

PM me if I can share any info that would help get you up to speed. I can share some configs, code, or answer questions if you need. I spent a lot (a lot) of time tuning our setup and would be happy to share.

Also, snapshot your "perfect setup" on DO before you host anything on it. I have one that is a template for our servers and I can spin one up in minutes. It also gives you some extra confidence when experimenting, since you know you can nuke a server and start over with a machine built how you like it.


  • 1 year later...
On 12/24/2021 at 8:05 PM, wbmnfktr said:

Yet another functional setup without bells and whistles.

Workflow

 

I run Debian Linux on my main machine where I do all my work, and therefore have a local Apache2/MariaDB/PHP server setup running. Absolutely every project starts here in its very own VirtualHost and Git repository.

I prefer a setup similar to @horst's and do the actual development on my local machine, then deliver it to a testing/stage/QA system where clients can take a look at the progress. The difference here is that my dev/testing/stage setups all run on my hosting accounts. I like to have a bit more control at this point.

Everything gets transferred via Git: from local to testing/stage/QA, and later on even to the live environment. Changes are made only on and within the local dev setup, so it's super easy to see whenever a client changes files or something was changed on the remote server.

Hi Friends,

during my setup for PW I am tuning up my workflow, and your setup looks very nice and fits me too! Just to make sure I understand it right:

On my desktop I am running Debian, as on all my servers. To set up the local dev workflow, I will install the exact same LEMP stack on my machine and do the dev on localhost. After everything is up and running, I will pull it onto a dev server for testing and then upload it to the final production server. Correct?

I am just asking because, e.g. in Prepros, I read something about a "built-in local webserver". So are there a lot of tools which have a local dev server built in? Why not use them? Because you want to have the exact same setup as on the server, correct?

Thanks a lot for the replies helping me get into PW! Such a nice piece of software!

  Daniel


1 hour ago, daniel712 said:

To set up the local dev workflow, I will install the exact same LEMP stack on my machine

I can highly recommend using https://ddev.readthedocs.io/en/stable/ for local development. Instead of installing a LAMP stack on the local machine's host, you get everything containerized. The big advantage is that you can run different projects in different environments and easily replicate the live server environment for each project. There's also a dedicated thread for ProcessWire and DDEV.


10 hours ago, gebeer said:

I can highly recommend using https://ddev.readthedocs.io/en/stable/ for local development. Instead of installing a LAMP stack on the local machine's host, you get everything containerized. The big advantage is that you can run different projects in different environments and easily replicate the live server environment for each project.

Works like a charm! Thanks a lot! 🙂

Do most of you go this way? Or does anybody prefer a locally installed LEMP stack, and if so, why?


14 hours ago, daniel712 said:

I am just asking because, e.g. in Prepros, I read something about a "built-in local webserver".

The Prepros live server is not capable of handling ProcessWire, just plain HTML/CSS/JS files. So using DDEV is probably one of the easiest ways to get started. On Windows, Laragon would be a great choice as well, especially as the installation is almost only one click. But you got that part already... the last thing I can say:

Have fun and enjoy ProcessWire!

2 hours ago, daniel712 said:

Do most of you go this way? Or does anybody prefer a locally installed LEMP stack, and if so, why?

Nowadays I use DDEV as well. While the setup is a bit complicated at first, you get used to it after the second or third time installing everything. It's way faster and easier to manage all those different versions of everything, and spinning up a new project is super fast!


14 hours ago, daniel712 said:

Do most of you go this way?

I've been using DDEV for 1.5 years now, coming from Windows+Laragon, and I could not be happier. I can't remember any issues, and changing PHP versions is as easy as changing the config file and doing a "ddev restart". It also makes things possible that were not possible with Laragon; for example, on one project I needed poppler-utils for generating JPG images from PDFs. On the Linux server this worked nicely, but not in Laragon, so local development differed from the live server environment, and that's not ideal.

DDEV is also great when working on projects in a team. Just share the GitHub repo of the project, including DDEV's config.yaml, and your teammates only have to do "git pull && ddev import-db -f /site... && ddev start".


  • 2 weeks later...

I just recently started using DDEV and it's great.

However, I do have one issue with it out of the box: it seems like ProcessWire's normal error reporting doesn't work for me when debug mode is on. Instead I get the standard 500 error message that would normally be shown to a logged-out user. Anyone else run into that? This is when not using Tracy Debugger.

When using Tracy, it does show errors, but only if I manually set it to DEVELOPMENT mode; it's not able to detect that automatically.
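For context, "debug mode" here means the standard toggle in site/config.php, which is what should replace the generic 500 page with detailed error output:

// site/config.php
$config->debug = true; // expected to surface full error details instead of the generic 500 page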

