
Leaderboard

Popular Content

Showing content with the highest reputation on 01/29/2022 in all areas

  1. Hi all, I got inspired to write this little tutorial by @FireWire's post. We are using the same deployment workflow that he mentions for all new projects. I tried different approaches in the past, including GitHub Actions and GitLab Runners. Setting those up always felt like a PITA to me, especially since all I wanted was to automatically deploy my project to staging or live on push.

Whom this is for: single devs or teams who want to streamline their deployment process with native git methods.

Requirements:
- shell access to the server
- git installed on the server and locally

If you don't have shell access and git on the server, upgrade or switch hosting.

Walkthrough

In this example we will be using GitHub to host our code and a server of our choice for deployment. The project is called myproject.

Step 1 (GitHub): Create a repository named myproject. Let's assume it is available at git@github.com:myaccount/myproject.git. This is our remote URL.

Step 2 (local): Create a project in the folder myproject and push it to GitHub like you usually would. The remotes of your project should now read like this inside the myproject folder:

    $ git remote add origin git@github.com:myaccount/myproject.git
    $ git remote -v
    origin  git@github.com:myaccount/myproject.git (fetch)
    origin  git@github.com:myaccount/myproject.git (push)

Step 3 (server): Log in to the server via SSH and go to the document root. We assume this to be /var/www/, and the command for connecting to our server via SSH to be ssh myuser@myserver. In the web root, create a directory that will hold a bare git repo, cd into it and create the bare git repository. A bare repo is one that does not contain the actual project files, only the version control information.

    cd /var/www/
    mkdir myproject-git && cd myproject-git
    git init --bare

Step 4 (server): Create the root directory for your ProcessWire installation.

    cd /var/www/
    mkdir myproject

Step 5 (local): Now we add information about the bare git repo to our local git config, so that when we push changes, they will be pushed both to GitHub and to the bare git repo on our server. Inside our project folder we do:

    git remote set-url --add --push origin myuser@myserver:/var/www/myproject-git

After that we need to add the original GitHub push URL again, because it got overwritten by the last command:

    git remote set-url --add --push origin git@github.com:myaccount/myproject.git

Now the list of remotes should look like this:

    $ git remote -v
    origin  git@github.com:myaccount/myproject.git (fetch)
    origin  myuser@myserver:/var/www/myproject-git (push)
    origin  git@github.com:myaccount/myproject.git (push)

We have one fetch and two push remotes. This means that if you push a commit, it will be pushed to both GitHub and your server repo.

Step 6 (server): Here comes the actual deployment magic. We are using a git hook that fires a script after every push, called a post-receive hook. We move into the hooks directory of the bare repository, create the file that triggers the hook, make it executable (the hook won't run otherwise) and open it for editing with nano:

    $ cd /var/www/myproject-git/hooks
    $ touch post-receive
    $ chmod +x post-receive
    $ nano post-receive

Now we paste this script into the open editor and save it:

    #!/bin/bash

    # Bare repository directory.
    GIT_DIR="/var/www/myproject-git"
    # Target directory.
    TARGET="/var/www/myproject"

    while read oldrev newrev ref
    do
        BRANCH=$(git rev-parse --symbolic --abbrev-ref "$ref")
        if [[ $BRANCH == "main" ]]; then
            echo "Push received! Deploying branch: ${BRANCH}..."
            # Deploy to our target directory.
            git --work-tree="$TARGET" --git-dir="$GIT_DIR" checkout -f "$BRANCH"
        else
            echo "Not main branch. Skipping."
        fi
    done

What this does is check out (copy) all files that are in the repository to our ProcessWire root directory every time we push something. And that is exactly what we wanted to achieve.

This example setup is for a single branch. If you want to make this work with multiple branches, you need to make some small adjustments. Let's assume you have one staging and one live installation, where the web root for live is at /var/www/myproject and for staging at /var/www/myproject-staging.

In Step 4 above you would create a second dir:

    $ cd /var/www/
    $ mkdir myproject
    $ mkdir myproject-staging

And the content of the post-receive hook file could look like:

    #!/bin/bash

    # Bare repository directory.
    GIT_DIR="/var/www/myproject-git"

    while read oldrev newrev ref; do
        TARGET=""
        BRANCH=$(git rev-parse --symbolic --abbrev-ref "$ref")
        if [ "$BRANCH" == "master" ]; then
            TARGET="/var/www/myproject"
        elif [ "$BRANCH" == "staging" ]; then
            TARGET="/var/www/myproject-staging"
        else
            echo "Branch not found. Skipping deployment."
        fi
        # Deploy only if TARGET is set.
        if [ -z "${TARGET}" ]; then
            echo "No target set."
        else
            echo "STARTING DEPLOYMENT..."
            echo "Push to ${BRANCH} received! Deploying branch: ${BRANCH} to: ${TARGET}"
            # Deploy to our target directory.
            git --work-tree="$TARGET" --git-dir="$GIT_DIR" checkout -f "$BRANCH"
        fi
    done

(Note the TARGET="" reset at the top of the loop; without it, a push containing an unknown branch after a known one would deploy to the previous branch's target.)

Now everything you push to your staging branch will be deployed to /var/www/myproject-staging, and commits to the master branch to /var/www/myproject.

We really do enjoy this deployment workflow. Everything is neat and clean. No need to keep track of which files you have already uploaded via SFTP. Peace of mind :-) I basically put together bits and pieces I found around the web to set this up. Would be eager to see how you implement stuff like this.
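The server-side part of the walkthrough (Steps 3, 4 and 6) can be condensed into a single one-shot setup sketch. The paths, project name and branch below are the example values from this tutorial, overridable via environment variables; it defaults to a scratch directory so it can be dry-run safely, and on a real server you would run it with WEB_ROOT=/var/www:

```shell
#!/bin/bash
# One-shot server-side setup for the bare-repo deployment described above.
# WEB_ROOT/PROJECT/BRANCH are assumptions from the tutorial's example;
# defaults to a scratch dir for a safe dry run (on the server: WEB_ROOT=/var/www).
set -e

WEB_ROOT="${WEB_ROOT:-$(mktemp -d)}"
PROJECT="${PROJECT:-myproject}"
BRANCH="${BRANCH:-main}"

# Step 3: the bare repository that receives pushes.
mkdir -p "$WEB_ROOT/$PROJECT-git"
git init --bare "$WEB_ROOT/$PROJECT-git"

# Step 4: the web root that the hook checks files out into.
mkdir -p "$WEB_ROOT/$PROJECT"

# Step 6: write the post-receive hook and make it executable in one go.
HOOK="$WEB_ROOT/$PROJECT-git/hooks/post-receive"
cat > "$HOOK" <<EOF
#!/bin/bash
GIT_DIR="$WEB_ROOT/$PROJECT-git"
TARGET="$WEB_ROOT/$PROJECT"
while read oldrev newrev ref; do
    B=\$(git rev-parse --symbolic --abbrev-ref "\$ref")
    if [[ \$B == "$BRANCH" ]]; then
        git --work-tree="\$TARGET" --git-dir="\$GIT_DIR" checkout -f "\$B"
    fi
done
EOF
chmod +x "$HOOK"
echo "Bare repo and hook ready at $WEB_ROOT/$PROJECT-git"
```

After running it on the server, Step 5 on the local machine stays the same: add myuser@myserver:$WEB_ROOT/$PROJECT-git as the second push URL.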
    6 points
  2. I've been interested in sharing my setup since it's radically changed over the last year for the better. Wish I could open the repo for the code of my flagship project, but it's the site for the company I work for and isn't mine, www.renovaenergy.com

Local Dev:
- Code editor is Sublime Text, tuned for my preferences/workflow.
- OS is Ubuntu Linux; will probably distro-hop at some point like Linux users do.
- Environment is provided by Devilbox, which I can't recommend enough. It's a fast, (mostly) pre-configured yet customizable Docker tool with outstanding documentation. A ProcessWire-ready container is available.
- CSS/JS compiled by Gulp/Babel/Browserify for local dev and production builds. ES6 modules. Zero frameworks, no jQuery. Focus on lightweight JS and code splitting for better load times. CSS is compiled and split into separate files by media query, which browsers load on demand based on screen size.
- Currently building out website unit/integration tests using Codeception. This is becoming increasingly necessary as the site becomes more complex.
- Firefox Developer Edition
- Tilix terminal emulator; Quake mode is awesome
- Cacher stores code/scripts/configs in the cloud for easy sharing across machines. IDE integration is solid.
- Meld for fast diffs
- WakaTime, because who doesn't like programming metrics for yourself?
- DevDocs, but locally in a Nativefier app. REQUEST: Star ProcessWire on GitHub. If a project has 7k+ stars it is a candidate to have its documentation added to DevDocs.

Production:
- Code editor is Vim on the server.
- Deployment is via git. Local repositories have a secondary remote that pushes code to production via a bare git repo, which updates assets on the server using hooks.
- Access to the server via SSH only. Changes to files are only made locally and pushed.
- Hosting by DigitalOcean, with servers custom-built from the OS up for performance/security.
- Custom PageSpeed module implementation: automatic image conversion to WebP, file system asset caching, code inlining, delivery optimization, cache control, etc. Drove TTFB down to <=500ms on most pages, with load times around 2 seconds, sometimes less if I'm lucky haha.
- StatusCake monitors uptime, automated speed tests, server resources, and HTTPS cert expiry.
- PagerDuty is integrated with StatusCake, so issues like servers going down, low server resources (RAM/disk/memory), and whatever else trigger notifications on all your devices.
- 7G Firewall rules are added to the PW .htaccess file to block a ton of bots and malicious automated page visits. Highly recommended.
- Mailgun for transactional email

ProcessWire Modules & Features:
- Modules (most used): CronjobDatabaseBackup, ProFields, Fluency, ImageBlurHash, MarkupSitemap, PageListShowPageId, ProDevTools, TracyDebugger, ListerPro, ProDrafts
- Template cache. We used ProCache initially but saw some redundancies/conflicts between it and the PageSpeed tools on the server. Would absolutely recommend ProCache if your hosting environment isn't self-managed.
- All configurations are saved in .env files, unique to the local/staging/production environments, with contents stored as secure notes in our password manager. This is achieved using the phpdotenv module, loaded on boot in config.php, where sensitive configurations and environment-dependent values are made securely available application-wide.
- Extensive use of ProcessWire image resizing and responsive srcset images in HTML for better performance across devices.
- URL Hooks - Use case: we rolled out a web API so external platforms can make RESTful JSON requests to the site at dedicated endpoints. The syntax resembles application frameworks, which made development really enjoyable and productive. The code is organized separately from the templates directory and allowed for clean separation of responsibilities without dummy pages or having to enable URL segments on the root page. Also allowed for easily building pure endpoints to receive form submissions.
- Page Classes - My usage: this was a gamechanger. Removing business logic from templates (only loops, variables, and if statements allowed) and using an OOP approach has been fantastic. Not sure if many people are using this, but it's made the code much more DRY, predictable, and well organized. Implementing custom rendering methods in DefaultPage allowed for easy "componentizing" of common elements (video galleries, page previews, forms, etc.) so that they are rendered from one source. Helped achieve no HTML in PHP and no PHP in HTML (with the exceptions above). Also allows for using things like PHP traits to share behavior between specific Page Classes. I completely fell in love all over again with PW over this and now I couldn't live without it. This literally restructured the entire site for the better.
- Probably other stuff, but this post is too long anyway haha.
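To picture the .env setup mentioned above: a file like the following would exist once per environment, kept out of version control, and be loaded by vlucas/phpdotenv at the top of /site/config.php. All key names here are illustrative assumptions, not the poster's actual configuration:

```
# .env — one file per environment (local/staging/production), excluded from the repo.
# Key names below are illustrative assumptions, not the poster's actual keys.
DB_HOST=localhost
DB_NAME=myproject
DB_USER=myproject_user
DB_PASS=supersecret
DEBUG=true
HTTP_HOST=myproject.test
```

config.php can then read these via $_ENV (e.g. $config->dbName = $_ENV['DB_NAME'];), so no credentials ever land in the repository.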
    4 points
  3. This week's commits include fixes for 8 reported issues and a couple of new $sanitizer methods for entity-encoding values in arrays. I mentioned earlier that I'd like to try and get a new master version out early this year, and that's still the case. I've been successfully migrating some of the production sites that I work on to the dev branch, and so far, so good. Please let me know how it works for you. We're on track for a new master version hopefully soon. A couple of weeks ago I mentioned a module I was working on for automatically saving front-end forms in progress. For the moment, it's called FormAutoSaver, but maybe I'll come up with a better name (or not). It emails the person filling out the form with one or two reminders with links to finish it if they leave it before submitting it. The emails are template-file based, so fully under your control, and the delay in sending the 1-2 reminders is also fully configurable, whether hours or days. This module is essentially finished and now I'm just writing the documentation. I already have it in use on 1 site and it's made a helpful difference, not just in getting people to finish their forms, but also in helping us to analyze where people tend to get stuck and stop filling out the form. It also comes with an admin tool that lets you browse the forms in progress and see where they are at. Whether you want to increase the completion rate of any particular form(s), or you want to identify bottlenecks, I think you'll find the tool helpful. You don't have to design your form for it, as it will work with any existing form. I expect to have this posted in the ProDevTools board as soon as next week. Thanks for reading this short update and have a great weekend!
    3 points
  4. Same as @teppo, upgraded a lot of sites to the latest dev midweek and switched them to PHP 8.1. So far all good.
    3 points
  5. Switched all my personal sites to the latest dev branch last week (after accidentally updating the server from PHP 8 to 8.1 — whoops...) and have had no issues — as far as I know — so far. Apart from a few minor ones (deprecation warnings) related to PHP 8.1, but those are already reported via GitHub. I'd say that it seems pretty solid so far.
    3 points
  6. @Kiwi Chris @AndZyk @bernhard @wbmnfktr I tried to shift the conversation to this thread so we have one consolidated discussion rather than opinions and solutions scattered across several threads. Maybe a moderator could move your discussions here to the new thread, if that's OK with you.
    2 points
  7. Well hello, today is the day I would like to introduce Flowti, the biggest project I have worked on for the last 2 1/2 years and the reason I am developing Symprowire, too. So, what is it all about, you may ask?

Flowti is my daily driver to plan, document and execute projects.
Flowti is my tool to work on concepts and ideas.
Flowti is my tool to present work.
Flowti is my tool to organize and support product management work.
Flowti is my digitalization framework.

To give you a glimpse of what Flowti is already capable of, I attached some screenshots for you to study. To make this one short: I plan to fully open-source Flowti this year after Symprowire reaches v1.0, and will use this post as a way to show progress. Cheers, Luis
    2 points
  8. ProcessWire 3.0.193 resolves 6 issues, makes improvements to the template and module editors, adds new hooks, adds improvements to the $pages->find() findRaw method, and more. We covered some of these updates in last week's post, so we'll focus on what's new this week. First off, we have a new advanced-mode feature that lets you edit the raw configuration data for a module. This can be useful for various reasons, especially for module developers. If you have $config->advanced = true; in your /site/config.php file, you'll see a new option on your module information screen that enables you to directly edit the raw JSON configuration data for the module. There's also an option that lets you view the raw JSON module information data. Unlike the configuration data, this isn't editable. That's because it comes from the module directly (every time you do a Modules > Refresh) or is generated at runtime, so there's little point in editing it here. In my case, I've found these new tools helpful for clearing out old and/or irrelevant configuration data during module development. In some cases, having the ability to edit this data may help to identify or fix issues that previously would have been difficult to resolve without using the API. If there's interest, I may move this into a dedicated (non-core) module that also lets you directly edit field and template configuration data too. But for now the feature is in the core; it just requires advanced mode before it appears. A few new hooks were added this week:

Fieldgroups::fieldRemoved($fieldgroup, $field)
Called after a field has been removed from a fieldgroup/template.

Fieldgroups::fieldAdded($fieldgroup, $field)
Called after a new field has been added to a fieldgroup/template.

Fieldgroups::renameReady($fieldgroup, $oldName, $newName)
Called before a fieldgroup is about to be renamed.

Fieldgroups::renamed($fieldgroup, $oldName, $newName)
Called after a fieldgroup has been renamed.
Templates::renameReady($template, $oldName, $newName)
Called before a template is about to be renamed.

Templates::renamed($template, $oldName, $newName)
Called after a template has been renamed.

Fields::renameReady($field, $oldName, $newName)
Called before a field is about to be renamed.

Fields::renamed($field, $oldName, $newName)
Called after a field has been renamed.

These accompany the existing addReady(), added(), deleteReady(), deleted(), cloneReady(), cloned(), saveReady() and saved() hooks available for fields, templates and fieldgroups. Last week a couple of people asked about versioning and migration of stuff in PW (like fields, templates, modules, etc.) and whether there were any plans to provide additional tools for that. For the projects I work on, at least, this part of the development process consumes so little time that it doesn't warrant developing more stuff for it. But I understand others might find it useful, so for those that would, I'd rather keep the core lean and instead leave that to tools/modules built by experts like Bernhard and others around here. I think it's important that whoever develops and maintains such features also be the same one(s) that would use them. But if any kind of core updates would be helpful to developers looking to implement more features here, I'm on board. Whether that means adding more hooks to specific events (see above as examples), maintaining field/template/module data in files in addition to the current DB tables, or anything else that helps such modules, this is all possible and likely simple for us to support in the core. So just let me know what I can do to help. While not full-featured migration tools, we do have useful field, template and page export/import tools in the core already; those will of course continue to be maintained and improved, and may be expanded to include modules too. Thanks for reading and have a great weekend!
    1 point
  9. @MindFull - have you tried setting the time zone of your MySQL server? I always do that on new installs and it seems to keep everything matched up. Here's a good tutorial on ways to achieve this: https://phoenixnap.com/kb/change-mysql-time-zone I've always used option 1.
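In case that link ever goes stale, the gist of changing the MySQL server time zone persistently is a one-line config setting. A sketch with example values (the file path and zone are assumptions; named zones require the time zone tables to be loaded first):

```ini
# /etc/mysql/my.cnf (or a drop-in under conf.d/) — example values; restart mysqld after.
[mysqld]
default-time-zone = "+00:00"
# Named zones also work once the tz tables are loaded, e.g.:
#   mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql -u root -p mysql
# default-time-zone = "Europe/Berlin"
```

Afterwards you can verify from any client with SELECT @@global.time_zone;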
    1 point
  10. Shopify has several APIs for this with multiple SDKs. Have a look and choose one that suits you.
    1 point
  11. I don't think there is a plug and play module that enables anything from Shopify. How do you want to make your integration? What do you expect to get from combining both? There's also Padloper 2, an ecommerce module (set of modules?) that has entered Alpha/Early Beta phase!
    1 point
  12. We recently switched to exactly the same deployment strategy for new projects and will convert old ones, too. This makes deployment so much easier compared to traditional SFTP setups. It doesn't require any external services like GitHub Actions and makes collaborating on projects very enjoyable. We generally do not include built assets in the repo and handle these through pre-push git hooks on the local machine that trigger rsync tasks for the dist folder. How do you handle these? Here's an example of pre-push:

    #!/bin/bash

    url="$2"
    current_branch=$(git symbolic-ref HEAD | sed -e 's,.*/\(.*\),\1,')
    wanted_branch='main'

    if [[ $url == *github.com* ]]; then
        read -p "You're about to push, are you sure you won't break the build? Y/N? " -n 1 -r </dev/tty
        echo
        if [[ $REPLY =~ ^[Yy]$ ]]; then
            if [ "$current_branch" = "$wanted_branch" ]; then
                sshUser="ssh-user"
                remoteServer="remote.server.com"
                remotePath="/remote/path/to/site/templates/"
                echo "When prompted, please insert password."
                echo "Updating files..."
                rsync -av --delete site/templates/dist "$sshUser@$remoteServer:$remotePath"
                exit 0
            fi
        else
            echo "answer was no"
            exit 1
        fi
    else
        echo "rsync already finished"
    fi

(Note the #!/bin/bash shebang: the [[ ]] and =~ tests are bashisms and won't work under a plain POSIX sh.)
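One gotcha with client-side hooks like the pre-push above: they live in .git/hooks and are not cloned with the repo, so each collaborator has to install them locally. A sketch of that step (the .githooks directory is an assumption about where the team might version the hook; the demo repo exists only so the snippet is self-contained):

```shell
#!/bin/bash
# Demo: installing a shared pre-push hook into a clone.
# Assumption: the team versions the hook in the repo as .githooks/pre-push
# (the post above does not say where they keep theirs).
set -e

# Throwaway demo repo so this is self-contained; in real use, cd into your clone.
REPO="$(mktemp -d)"
git init -q "$REPO"
mkdir -p "$REPO/.githooks"
printf '#!/bin/sh\necho "pre-push ran"\n' > "$REPO/.githooks/pre-push"

cd "$REPO"
# Copy the versioned hook into .git/hooks and make it executable.
cp .githooks/pre-push .git/hooks/pre-push
chmod +x .git/hooks/pre-push

# Alternatively (git >= 2.9), skip the copy and point git at the directory:
# git config core.hooksPath .githooks

test -x .git/hooks/pre-push && echo "pre-push hook installed"
```

With core.hooksPath, updates to the hook reach everyone on the next pull without reinstalling.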
    1 point
  13. It's not just the new insert before / insert after features, it's also all the new options for adding items that would need attention. It looks like a lot of work to support, and it probably still won't work reliably for all the options, e.g. custom. Sorry, but it's unlikely I'm going to get around to spending the time on this in the foreseeable future. It's probably not practical for a module like Restrict Repeater Matrix to exist when the base module (Repeater Matrix) can be changed at any point. Repeater Matrix is a commercial module, so Ryan should be responsive to the needs of its users - I suggest submitting requests to Ryan for the features you want to see. I'll add a note at the top of the readme about the incompatibility with recent updates to Repeater Matrix.
    1 point
  14. Fair enough, I've added support for more custom labels in v0.2.1.
    1 point
  15. The concept of RockMigrations makes a lot of sense, but what seemed to make it hard was that it depended on manually building object definitions for migrations in code. It turns out there are ProcessWire core methods to export field, template, and page definitions as JSON, so that makes it a lot less code to write. Where I'd like to get to is to have a list of field, template, and page dependencies in a module configuration, and be able to do something like $module->build(); to generate JSON files for all those dependencies in the module directory. Modules already have install(), uninstall() and upgrade() methods, so they could look for any object dependencies in the module config and create (or remove) them as required.

Here's a working proof of concept I've been playing with in the Tracy console. Although I prefer to use the UI when I can, due to the issues around dependencies between fields, templates, and pages, manually specifying an array with the objects in the order they need to be created avoids the issue others have run into of having to make multiple passes to satisfy all dependencies. I could make it smarter by checking the modified timestamp for templates and pages (and hopefully soon fields, if Ryan adds it to the core), to only generate new JSON files for objects that have changed since the last build.

    $requiredObjects = array(
        'competitionImage'    => 'field',
        'compGrade'           => 'field',
        'competition'         => 'template',
        'compId'              => 'field',
        'competitionCalendar' => 'template',
        'competitiontopics'   => 'page',
        'media'               => 'page',
    );

    $modulePath = $config->path('MyModule');
    $pageExporter = new PagesExportImport();

    foreach ($requiredObjects as $objectName => $objectType) {
        switch ($objectType) {
            case 'field':
                $object = $fields->get($objectName);
                break;
            case 'template':
                $object = $templates->get($objectName);
                break;
            case 'page':
                $object = $pages->find("name={$objectName}, include=hidden");
                break;
        }
        if ($object) {
            $file = $modulePath . "data/{$objectType}s/" . $objectName . '.json';
            if ($objectType != 'page') {
                $data = wireEncodeJSON($object->getExportData(), true, true);
            } else {
                $data = $pageExporter->exportJSON($object);
            }
            file_put_contents($file, $data);
        } else {
            // Object doesn't exist!
            echo $objectName;
        }
    }
    1 point
  16. I'm not sure if I correctly understand what you want to track and how, but there is this module that may provide what you are looking for: https://processwire.com/modules/process-changelog/ Besides a paginated log page in the admin, it also comes with an additional RSS feature.
    1 point
  17. Hello Seba, yes, this is possible. I have connected the login to our domain controller via LDAP, and the control of the authorisations runs via corresponding groups (customized version of this module: https://github.com/conclurer/LdapSignIn ). In general, it's just a normal website with access protection.
    1 point
  18. TBH, I totally forgot about the export/import feature for templates and fields until I read about it here again. My workflow was to have two windows open (development and staging/production) and manually rebuild fields I had created in my development environment. But export/import is way easier. Thank you for reminding me. Maybe this feature should be more prominent, with additional buttons at the beginning of the overview for people like me who don't look at the end of the page. As for the other discussed solutions: a JSON/YAML solution would also be great for version control, but for me it would not be necessary, because in my experience the gap between development and staging/production is not that large. But for large websites this could be handy.
    1 point
  19. Sounds good, happy to integrate that if you submit a pull request on the GitHub repo.
    1 point
  20. I use the tools that ProcessWire comes with. I spent a lot of time making them simple and easy to use, and that's what I like to use. Just as an example, let's say that I've got a website and a client wants to add a full-featured blog to it. I'll develop it on my local copy of the site, and it might involve creating several fields, templates and template files. I'll take a day or two to develop it, and when it comes time to migrate the finished work to the live server, that's the fun part, but there's not much to it:

- Create or export/import the new fields on the live site first, then do the same for the new templates.
- Copy the new or updated template files (and related CSS/JS assets) into place.
- Create or export/import the pages needed by the blog, and it's done.

A blog is just an example, but it's the same as any other update. It's a painless process that always goes quickly and smoothly. This part of it takes maybe 5 to 10 minutes and is one of my favorite parts of the project, like driving a new car and then seeing it for the first time in your driveway. I like to oversee this part of any project and have no need to optimize it further, so I guess I'm not the target market for add-on migration tools. That's correct, it would be fairly straightforward.
    1 point
  21. Hiya, loving this dashboard for a few more complex projects. I have a request for the number panel - since I often add a link in the "detail", if the number is zero then the link isn't shown. It would be great if there was a way to show it regardless. Actually it's hard to tell if it's a bug or intentional. In DashboardPanelNumber.module, if I wrap the number in is_numeric, it works when zero is passed as the number, as below:

    public function setup() {
        parent::setup();
        $this->locale = $this->data['locale'] ?? setlocale(LC_ALL, 0);
        $this->detail = $this->data['detail'] ?? '';
        $this->number = is_numeric($this->data['number']) ? $this->data['number'] : null;
        if (is_int($this->number) || is_float($this->number)) {
            $this->number = $this->formatNumber($this->number);
        }
    }
    1 point
  22. I actually find that ProcessWire plays pretty well with Git, certainly in comparison to WordPress. The main thing is to avoid installing modules via the admin UI (just download the module and put it in your repo). And of course exclude /assets/ from the repo (PW conveniently keeps all of this together). It would be nice to be able to exclude the wire folder and have Composer handle that instead, but I don't find it that big of a deal to keep the wire folder in Git. In the very rare case that a direct core mod is necessary for a project I'm working on, it's good to be able to make that change and commit it to the repo. The ability to maintain a master configuration file containing all of the metadata for the templates and fields in a project, and to be able to put this into version control, would be a real game-changer, though. This is a real pain point we're running into on a lot of our projects, especially now that we have a couple of systems that have multiple deployments and multiple developers working on them. Since we already have JSON export for templates and most field types (the options field export/import is still not fully functional), it doesn't seem like this is too far off. Maybe PW could cache a copy of the JSON configuration file and then, if it gets updated externally, show a message in the admin that there are pending field/template changes to be applied, allowing the user to apply these changes with a single click. If it makes it easier/cleaner, there could be two config files, one for fields and one for templates (and maybe another for modules?). I'm curious how other systems handle this under the hood. Edit: Here's how Craft CMS does it: https://craftcms.com/docs/3.x/project-config.html#propagating-changes This sounds a lot like what I'm thinking. The command-line option for applying changes is also a great idea, since some changes could take a while to apply on large projects.
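The exclusions described above can be pictured as a starting-point .gitignore. This is a sketch based on the stock ProcessWire directory layout, not an official recommendation; whether config.php belongs here depends on how you handle credentials:

```
# Runtime assets stay out of the repo, as suggested above
/site/assets/
# /wire/ is deliberately kept in the repo (per the post), so no exclusion for it
# Exclude config.php if it holds credentials, or keep it and load secrets from .env
/site/config.php
.env
```

Anything a deployment or the CMS writes at runtime belongs in the ignore list; everything hand-authored stays tracked.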
    1 point
  23. Yes, currently generating diagrams by code definition in the editor, but planning to generate diagrams from parent-child relations in a future iteration. It's good for now and for me, but not that user-friendly.
    1 point
  24. Nice, are you using mermaid-js for the diagramming?
    1 point
  25. I've never used Symfony (only the CLI component), so I have to admit that I don't understand what you are doing here, sorry.
    1 point
  26. About a week ago I wrote a blog post while on a train, and due to some odd hiccup -- not sure if it was a connection issue or just something I did wrong, as I was using a tablet, which is far from the perfect tool for this job -- the whole post simply disappeared. Needless to say, I wasn't very happy. Sure, I should've saved more often, so it was really my mistake, but that was still an example of a situation where autosave would've saved the day (quite literally). My needs are quite simple, so I'm currently thinking of writing a very simple module for that purpose alone, but it's good to see that others are toying with similar ideas here. My issues are pretty far from a full-featured "save as draft" solution, but still
    1 point