Everything posted by gebeer
-
Yes, production code could be overwritten if the local master is behind origin. But that should not be a problem, since you should always pull and merge locally before you push to origin. If you try to push a branch that is behind origin, git will warn you anyway. Of course this requires some discipline. But there is only one rule everyone on the team has to follow: always pull and merge before you push. Personally I believe that builds do not belong on the server. But that is just an opinion. Building on the server requires all npm packages to be available there. I would not feel confident with that, seeing all the security/vulnerability warnings after an npm install. node_modules is excluded from most repos for a reason, I guess. Your deployment strategy looks pretty similar to what I describe over there. Only we do not work with a second origin for production but have added an additional push URL to our origin (github). I can see the advantage of your approach in that it is not so easy to accidentally push things to production. We even use this on shared hosting where we have shell access and git available.
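The "always pull and merge before you push" rule is backed up by git itself: a push of a branch that is behind origin is rejected as non-fast-forward. A minimal sketch of that behaviour, using throwaway repos in a temp directory (all names here are hypothetical, not from the post):

```shell
#!/bin/bash
# Sketch: git rejects a push when the local branch is behind origin.
set -u
tmp=$(mktemp -d)
cd "$tmp"

# A bare "origin" plus two clones standing in for two team members.
git init -q --bare origin.git
git clone -q origin.git alice 2>/dev/null
git clone -q origin.git bob 2>/dev/null
for repo in alice bob; do
  git -C "$repo" config user.email "$repo@example.com"
  git -C "$repo" config user.name "$repo"
done

# Alice publishes the first commit to a branch called "main".
git -C alice commit -q --allow-empty -m "initial"
git -C alice push -q origin HEAD:main

# Bob starts from that state...
git -C bob fetch -q origin
git -C bob checkout -q -B main origin/main

# ...then both commit, but Alice pushes first.
git -C alice commit -q --allow-empty -m "alice work"
git -C alice push -q origin HEAD:main
git -C bob commit -q --allow-empty -m "bob work"

# Bob forgot to pull: git refuses the non-fast-forward push.
if git -C bob push -q origin main 2>/dev/null; then push_ok=1; else push_ok=0; fi
echo "push_ok=$push_ok"
```

Bob's push only succeeds after he pulls and merges Alice's commit, which is exactly the one rule the team has to follow.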
- 66 replies · Tagged with: developing, working with pw (and 1 more)
-
Exactly. staging.myproject.tld would point to the folder /var/www/myproject-staging. We never merge things on the server; we always merge locally first and then push, since the post-receive hook fires only after a push. The most important thing to remember when working with this strategy is: always pull/merge before you push.
-
Hi all, I got inspired to write this little tutorial by @FireWire's post. We are using the same deployment workflow that he mentions for all new projects. I tried different approaches in the past, including github Actions and gitlab Runners. Setting those up always felt like a PITA to me, especially since all I wanted was to automatically deploy my project to staging or live on push.

Whom this is for
Single devs or teams who want to streamline their deployment process with native git methods.

Requirements
- shell access to the server
- git installed on the server and locally
If you don't have shell access and git on the server, upgrade or switch hosting.

Walkthrough
In this example we will be using github to host our code and a server of our choice for deployment. The project is called myproject.

Step 1 (github)
Create a repository named myproject. Let's assume it is available at git@github.com:myaccount/myproject.git. This is our remote URL.

Step 2 (local)
Create a project in the folder myproject and push it to github like you usually would. The remote of your project should now read like this inside the myproject folder:

$ git remote add origin git@github.com:myaccount/myproject.git
$ git remote -v
origin  git@github.com:myaccount/myproject.git (fetch)
origin  git@github.com:myaccount/myproject.git (push)

Step 3 (server)
Log in to the server via ssh and go to the document root. We assume this to be /var/www/ and the command for connecting to our server via ssh to be 'ssh myuser@myserver'. In the web root, create a directory that will hold a bare git repo, cd into it and create the bare git repository. A bare repo is one that does not contain the actual project files but only the version control information.
cd /var/www/
mkdir myproject-git && cd myproject-git
git init --bare

Step 4 (server)
Create the root directory for your ProcessWire installation:

cd /var/www/
mkdir myproject

Step 5 (local)
Now we add information about the bare git repo to our local git config, so that when we push changes, they will be pushed both to github and to the bare git repo on our server. Inside our project folder we do:

git remote set-url --add --push origin myuser@myserver:/var/www/myproject-git

After that we need to add the original github push origin again, because it got overwritten by the last command:

git remote set-url --add --push origin git@github.com:myaccount/myproject.git

Now the list of remotes should look like this:

$ git remote -v
origin  git@github.com:myaccount/myproject.git (fetch)
origin  myuser@myserver:/var/www/myproject-git (push)
origin  git@github.com:myaccount/myproject.git (push)

We have one fetch and 2 push remotes. This means that if you push a commit, it will be pushed to both github and your server repo.

Step 6 (server)
Here comes the actual deployment magic. We are using a git hook that fires a script after every push. This hook is called a post-receive hook. We move into the hooks directory of the bare repository, create the file that triggers the hook, make it executable (git only runs executable hooks) and open it for editing with nano:

$ cd /var/www/myproject-git/hooks
$ touch post-receive
$ chmod +x post-receive
$ nano post-receive

Now we paste this script into the open editor and save it:

#!/bin/bash

# Bare repository directory.
GIT_DIR="/var/www/myproject-git"
# Target directory.
TARGET="/var/www/myproject"

while read oldrev newrev ref
do
    BRANCH=$(git rev-parse --symbolic --abbrev-ref "$ref")
    if [[ $BRANCH == "main" ]]; then
        echo "Push received! Deploying branch: ${BRANCH}..."
        # Deploy to our target directory.
        git --work-tree="$TARGET" --git-dir="$GIT_DIR" checkout -f "$BRANCH"
    else
        echo "Not main branch. Skipping."
    fi
done

What this does is check out (copy) all files that are in the repository to our ProcessWire root directory every time we push something. And that is exactly what we wanted to achieve.

This example setup is for a single branch. If you wanted to make this work with multiple branches, you need to make some small adjustments. Let's assume you have one staging and one live installation, where the web root for live is at /var/www/myproject and for staging at /var/www/myproject-staging.

In Step 4 above you would create a second dir:

$ cd /var/www/
$ mkdir myproject
$ mkdir myproject-staging

And the content of the post-receive hook file could look like this (note that TARGET is reset on every loop iteration and quoted in the test, so pushes to unknown branches are skipped cleanly):

#!/bin/bash

# Bare repository directory.
GIT_DIR="/var/www/myproject-git"

while read oldrev newrev ref; do
    BRANCH=$(git rev-parse --symbolic --abbrev-ref "$ref")
    TARGET=""
    if [[ $BRANCH == "master" ]]; then
        TARGET="/var/www/myproject"
    elif [[ $BRANCH == "staging" ]]; then
        TARGET="/var/www/myproject-staging"
    else
        echo "Branch not found. Skipping deployment."
    fi
    # Deploy only if TARGET is set.
    if [ -z "${TARGET}" ]; then
        echo "no target set"
    else
        echo "STARTING DEPLOYMENT..."
        echo "Push to ${BRANCH} received! Deploying branch: ${BRANCH} to: ${TARGET}"
        # Deploy to our target directory.
        git --work-tree="$TARGET" --git-dir="$GIT_DIR" checkout -f "$BRANCH"
    fi
done

Now everything you push to your staging branch will be deployed to /var/www/myproject-staging, and commits to the master branch to /var/www/myproject.

We really do enjoy this deployment workflow. Everything is neat and clean. No need to keep track of which files you already uploaded via SFTP. Peace of mind :-) I basically put together bits and pieces I found around the web to set this up. Would be eager to see how you implement stuff like this.
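The whole bare-repo/post-receive mechanism can be dry-run on a local machine before touching any server. A sketch under the assumption that temp directories stand in for /var/www (all paths and names here are hypothetical): it creates a bare repo, installs a minimal hook in the spirit of the one above, pushes from a throwaway clone and checks that the files land in the target directory.

```shell
#!/bin/bash
# Sketch: simulate the bare-repo + post-receive deployment locally.
set -eu
tmp=$(mktemp -d)
GIT_DIR="$tmp/myproject-git"   # stands in for /var/www/myproject-git
TARGET="$tmp/myproject"        # stands in for /var/www/myproject

git init -q --bare "$GIT_DIR"
mkdir -p "$TARGET"

# Minimal post-receive hook, same idea as in the tutorial.
cat > "$GIT_DIR/hooks/post-receive" <<HOOK
#!/bin/bash
while read oldrev newrev ref; do
  branch=\$(git rev-parse --symbolic --abbrev-ref "\$ref")
  if [[ \$branch == "main" ]]; then
    git --work-tree="$TARGET" --git-dir="$GIT_DIR" checkout -f "\$branch"
  fi
done
HOOK
chmod +x "$GIT_DIR/hooks/post-receive"   # hooks must be executable

# A throwaway "local" working copy that pushes to the bare repo.
git clone -q "$GIT_DIR" "$tmp/local" 2>/dev/null
cd "$tmp/local"
git config user.email dev@example.com
git config user.name dev
echo "<?php // site code" > index.php
git add index.php
git commit -q -m "first deploy"
git push -q origin HEAD:main

# The hook has checked the committed file out into the target directory.
ls "$TARGET"
```

If index.php shows up in the target directory, the hook wiring is correct; the server setup is the same thing with real paths.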
- 9 replies
-
We recently switched to exactly the same deployment strategy for new projects and will convert old ones, too. This makes deployment so much easier compared to traditional SFTP setups. It doesn't require any external services like github Actions and makes collaborating on projects very enjoyable. We generally do not include built assets in the repo and handle these through pre-push git hooks on the local machine that trigger rsync tasks for the dist folder. How do you handle these? Here's an example of pre-push (note the bash shebang: the [[ ... ]] tests are a bashism and would break under plain /bin/sh):

#!/bin/bash

url="$2"
current_branch=$(git symbolic-ref HEAD | sed -e 's,.*/\(.*\),\1,')
wanted_branch='main'

if [[ $url == *github.com* ]]; then
    read -p "You're about to push, are you sure you won't break the build? Y/N? " -n 1 -r </dev/tty
    echo
    if [[ $REPLY =~ ^[Yy]$ ]]; then
        if [ "$current_branch" = "$wanted_branch" ]; then
            sshUser="ssh-user"
            remoteServer="remote.server.com"
            remotePath="/remote/path/to/site/templates/"
            echo "When prompted, please insert password."
            echo "Updating files..."
            rsync -av --delete site/templates/dist "$sshUser@$remoteServer:$remotePath"
            exit 0
        fi
    else
        echo "answer was no"
        exit 1
    fi
else
    echo "rsync already finished"
fi
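The sed pipeline in the hook above extracts the current branch name from the symbolic ref. A small sketch of that check in isolation (throwaway repo in a temp dir, hypothetical names), including git's own `--short` flag, which does the same thing without sed:

```shell
#!/bin/bash
# Sketch: resolve the current branch name the way the pre-push hook does.
set -eu
tmp=$(mktemp -d)
git init -q "$tmp/repo"
cd "$tmp/repo"
git config user.email dev@example.com
git config user.name dev
git symbolic-ref HEAD refs/heads/main   # name the unborn branch "main"
git commit -q --allow-empty -m "initial"

# Variant used in the hook: strip everything up to the last slash.
current_branch=$(git symbolic-ref HEAD | sed -e 's,.*/\(.*\),\1,')

# Equivalent without sed: git can shorten the ref itself.
short_branch=$(git symbolic-ref --short HEAD)

echo "$current_branch $short_branch"
```

Both variants yield the plain branch name, so the hook's comparison against wanted_branch works either way.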
- 66 replies · Tagged with: developing, working with pw (and 1 more)
-
Thank you for summing this topic up in a new thread. I had the same intention but couldn't spare the time. I am all for version control of fields and templates. @bernhard's RockMigrations module already does a great job here. And since he introduced the prototype recorder, I am very excited that we will soon have something to work with and build upon. This should really be part of the PW core or available as an optional core module. Would be great if @ryan put this on the roadmap for 2022.
-
@ryan This is a bit off topic, but I would also support version control for fields/templates as something to maybe concentrate on for this year. There is a lively discussion going on. And Bernhard has already shown an awesome proof of concept in his video:
-
I am more excited about the recorder than about YAML. And I see your point: when using a PHP array for the migrate() method, you can do things that you can't do in YAML. But on the other hand, for recording changes locally and then migrating them to staging/live, YAML would be sufficient. Your use case, where you set up fields and templates for a RockMails module with RockMigrations, is different: when installing that module, it needs to check whether it is being installed on a multilang site or not. But when recording and migrating changes, we already know the context for the migration since it has been recorded through YAML. My conclusion would be to store the recorded data in YAML or JSON. For other use cases, like the one you described, we can still use a PHP array.
- 66 replies · Tagged with: developing, working with pw (and 1 more)
-
RockMigrations - Easy migrations from dev/staging to live server
gebeer replied to bernhard's topic in Modules/Plugins
I think I had a wrong understanding of the migrate([]) method, thinking it is destructive for fields/templates that are not in the $config array. So on an existing site, I would have to build the $config array with all fields and templates that are already in the installation. If I forgot one field, it would be deleted by the next call of the migrate method. But looking at the code, I see that it only creates fields and templates. Still, if I don't add fields/templates that already exist in the installation to the $config array, the migration would not cover all fields/templates. Does that make sense?

Yes, this is exactly what happens in the video. All fields/templates of a site are getting written to the yaml file. It would be great if we had this available for setting up initial migrate() $config arrays, either inside the module or as a separate "recorder" module. That way, we could plug RockMigrations into any existing site, create the initial $config data (inside a yaml file) and move on from there with our migrations.
-
RockMigrations - Easy migrations from dev/staging to live server
gebeer replied to bernhard's topic in Modules/Plugins
Hi @bernhard, the video shows a test of recording changes in templates/fields to yaml. This looks very promising. Do you have any plans to integrate this into your module? It is easy to use RockMigrations when you start a new project. But for existing projects where RM comes in later, we need to write the migration files by hand before we can start using rm()->migrate. Would it be possible to create a yaml from all templates/fields of an existing install, based on the recorder that you showed in that video?
-
This is totally freaking awesome! Can't give enough thumbs up. Any plans to release this? Wannahave! That would just be such a great help for keeping things in sync.
- 66 replies · Tagged with: developing, working with pw (and 1 more)
-
I published a generic module with some examples at https://github.com/gebeer/CustomPageTypes Happy visual learning!
-
Hello all, since https://processwire.com/docs/tutorials/using-custom-page-types-in-processwire/ came out, I used to implement custom page classes as modules, following the principles described in that tutorial. Only a few weeks ago I stumbled across https://processwire.com/blog/posts/pw-3.0.152/#new-ability-to-specify-custom-page-classes. This seems to me a much cleaner and easier way of implementation, though it restricts the naming of custom classes to the naming conventions for the class loader. Other than that I can't really see any disadvantages. Which way do you prefer and why? On a side note, useful features like the one described in the second link often can only be found in @ryan's core update blog posts. If you don't read them on a regular basis, those new features are easy to miss. I'd love to see those hidden gems find their way into the API reference in more detail. Although $config->usePageClasses is documented at https://processwire.com/api/ref/config/, I think it would deserve its own page with all the explanations from the blog post.
-
Developing on Linux (currently Arch/KDE Plasma) for the last 16 years. Would never go back to proprietary alternatives. Why pay for something that should really be free for all?

Devtools:
- Editor: VSCodium with PHP Intelephense, GitLens, PHP Debug (xdebug support), Prettier (Code Formatter), Todo Tree and @bernhard's PWSnippets.
- Local dev env: after having used vagrant for a long time, about 4 years ago I switched to https://laradock.io/ for a local docker environment. Ensures portable environments across multiple machines.
- PW modules used on almost every project: TracyDebugger, WireMailSmtp, ProFields (mainly for Repeater Matrix), TablePro.
- Asset building pipeline: npm scripts / gulp / webpack. Will have a look into Laravel Mix. Might save time, although I actually like to fiddle with all the configs.
- Deployment: for older and ongoing projects mostly SFTP. For new projects git with git hooks. This is so much cleaner. Not using any service but creating my own git hooks on the server; git must be available on the production server. Staging servers used rarely. Mostly deploy from local to production.
- Hosting: I do not offer hosting services. This is up to the client. Personally I use https://uberspace.de/en/ which is a command-line-configured shared hosting provider from DE with a pay-what-you-want pricing model.
- 66 replies · Tagged with: developing, working with pw (and 1 more)
-
Thank you so much for this post. This should be considered for the official documentation. It would have saved me a lot of time and headache when developing my first custom fieldtype.
-
UPDATE: I installed a multilang page from scratch and found that the behaviour is the same as with my other install, which is also on latest dev. On PW 3.0.189 dev, mysite.local/cn/cn/ redirects to mysite.local/cn/. Whereas when I switch to 3.0.184 master, mysite.local/cn/cn/ throws a 404. Will go and file an issue. Meanwhile, if any of you can enlighten me, that would be great. Filed an issue at GH: https://github.com/processwire/processwire-issues/issues/1479
-
Hello all, I didn't know how to better describe my problem in the thread title, so bear with me. I will try to explain my setup:

- multilingual site with all necessary modules installed, including LanguageSupportPageNames
- default lang is English. The default lang homepage URL performs a redirect to /en/, so the default home URL is mysite.local/en/
- one of my languages is Chinese (name cn). The home URL for Chinese is mysite.local/cn/

Page tree with page names and URLs for the Chinese language:

- home: mysite.local/cn/
-- cn: mysite.local/cn/cn/ << here is the problematic URL: page name = lang name
--- page1: mysite.local/cn/cn/page1 (only active in en and cn)
--- page2: mysite.local/cn/cn/page2 (only active in en and cn)

Now if I visit mysite.local/cn/cn/ I get redirected to mysite.local/cn/, which results in the homepage view. Visiting mysite.local/cn/cn/page1 redirects to mysite.local/cn/page1, which results in a 404. Changing the page name 'cn' to something else like 'asia' would resolve the problem. But that is not an option, because the URL structure like /en/cn/ and /cn/cn/ is a requirement for the project.

Tracing back the redirect with Debug::backtrace inside a before hook to Session::redirect reveals the following: in wire/core/PagesPathFinder.php l. 519 the array $parts (in my case [0 => 'cn', 1 => 'cn']) is passed by reference to getPathPartsLanguage(array &$parts). In that method the language is determined from the first entry in the array $parts, and in that process the first entry is removed by $segment = array_shift($parts). Since $parts is passed in by reference, the original array of 2 path elements is reduced to 1. This results in a wrong path and redirection. When I change the code so that $parts is not passed by reference, weird things start happening: I get 404s for existing pages like mysite.local/cn/cn/page1 and even for the default language mysite.local/en/cn/page1.

ATM I'm lost and don't know how I can get the required URL structure to work as expected. So if any of you have an idea of what could be the culprit here, please let me know. Thank you for staying with me until here.
-
Totally did not think of that one. Thanks a ton!
-
@adrian Reviving this old thread because I would need the rootparent selector, too. Have any of you ever had the need for this kind of selector? Just asking here before filing a feature request.
-
I just implemented this inside an autoload module and discovered that this hook only works in application ready state. So if you are utilizing this inside a module, you need to call the hook inside the ready() method like this:

public function init() {
    // handle render of correct page from urlSegments
    $this->addHookAfter('ProcessPageView::execute', $this, 'hookPageView');
}

public function ready() {
    // need to call this in ready(). Not working in init()
    $this->pages->addHookAfter('Page::path', $this, 'hookPagePath');
}

public function hookPagePath(HookEvent $event) {
    $page = $event->object;
    // page ROW and all children recursively
    if ($page->id == 1043 || $page->rootParent->id != 1043) return;
    $orgPath = $event->return;
    $pathSegments = explode('/', trim($orgPath, '/'));
    // $pathSegments[0] is language segment
    // get rid of $pathSegments[1] 'row'
    unset($pathSegments[1]);
    $newPath = '/' . implode('/', $pathSegments) . '/';
    $event->return = str_replace('//', '/', $newPath);
}

public function hookPageView(HookEvent $event) {
    $page = $event->page;
    // only act on homepage
    if ($page->id != 1) return;
    // get last urlSegment to retrieve the page with that name
    if (count(input()->urlSegments())) {
        $wantedName = sanitizer()->pageName(input()->urlSegmentLast);
        $wantedPage = pages()->get("name={$wantedName}");
        if ($wantedPage && $wantedPage->id) {
            $event->return = $wantedPage->render();
        } else {
            throw new Wire404Exception();
        }
    }
}

If I had known this earlier it would have saved me some time and frustration...
-
Great, thank you so much for the quick fix. Everything is working smoothly and I'm having fun again working with the console. Yeah, I thought that the monospace option should take care of that, too. But obviously this was not the case. Cheers
-
Hi all, I have been experiencing a very strange issue inside the Tracy console for some time now. While typing, suddenly new characters get inserted 1 position off to the left of the cursor. This is best demonstrated by a short clip: console.mp4. It makes editing impossible. This started happening on some installs a while ago, but now it is happening on all of them. I thought it must be a caching issue, but it is happening across browsers (FF, Brave, Chrome - all on Linux). No JS errors in the dev console. A related search for ace editor came up with:

https://stackoverflow.com/questions/15183031/ace-editor-cursor-behaves-incorrectly
https://github.com/ajaxorg/ace/issues/2548
https://pretagteam.com/question/wrong-cursor-position-with-ace-editor-in-safari

They all refer to a problem with non-monospaced fonts used in the editor (specifically on iOS and Linux). Digging through the CSS, I found this rule, which is injected in a style attribute by Tracy (`<style nonce="" class="tracy-debug">`):

.ace_editor, .ace_editor * {
    font-family: 'Monaco','Menlo','Ubuntu Mono','Consolas','source-code-pro',monospace!important;
}

When changing the rule to include Courier New, it works:

.ace_editor, .ace_editor * {
    font-family: 'Courier New','Monaco','Menlo','Ubuntu Mono','Consolas','source-code-pro',monospace!important;
}

This might be just an issue on Linux. Can anybody confirm this for other operating systems? @adrian, would it be possible to include Courier New in the font-family? This seems to be injected through site/modules/TracyDebugger/scripts/ace-editor/ace.js, so I'm not sure if you have influence on the contents of that file. A search on the ace issue tracker reveals quite a few related issues, so the problem is well known but hasn't been fixed in years. As a quick fix, I added the extra font to ace.js, but this will be gone with the next update. Oh wait, actually this is defined in site/modules/TracyDebugger/styles/styles.css around line 1787. I added Courier New there:

.ace_editor, .ace_editor * {
    font-family: 'Monaco', 'Menlo', 'Ubuntu Mono', 'Consolas', 'source-code-pro', 'Courier New', monospace !important;
}

Would be great if this could be included in one of the next updates.
-
@fuzendesign Are you using the TracyDebugger module? If not, I can highly recommend it. It is a big time saver, letting you easily dump and inspect your code, and it makes developing with PW even more enjoyable.
-
You are pointing to a solved thread and asking me for an answer. You say you are stuck but don't tell where you're stuck. I'm afraid I can't help you if you don't give exact details of your problem. EDIT: reading my reply again, it sounds a bit grumpy. That was not really intended. I just wanted to say that you have a higher chance of getting a helpful answer if you try to explain your problem in more detail, possibly with some code that you already have.