
Using DDEV for local ProcessWire development (tips & tricks)


bernhard


I had two projects in the last two weeks causing problems due to the use of short PHP tags.

DDEV has you covered! Just put this

short_open_tag = 1

into .ddev/php/my-php.ini and run "ddev restart".
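To check that the override actually got picked up after the restart, something like this should work (a quick sanity check; the "--" keeps ddev from parsing php's own flags):

# Inspect the effective PHP configuration inside the web container
ddev exec -- php -i | grep short_open_tag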

  • Like 1

On 6/20/2023 at 9:46 AM, bernhard said:

I had two projects in the last two weeks causing problems due to the use of short PHP tags.

I have a question here: I've always been using short tags for echo like <?= $foo ?> and never had a problem. Now on these external projects I had short open tags like <? ... ?> and suddenly things broke. Can anybody explain why short echo worked but short open did not? I thought those were the same thing?


Not fact-checked, but ChatGPT suggests this as a history of short tags in PHP...

Quote
  • Prior to PHP 4.0.0, the short tag syntax was enabled by default, and developers could use either <? or <?php to open PHP code blocks.

  • In PHP 4.0.0, the short tag syntax was disabled by default, due to concerns about compatibility with XML and other markup languages that use the <? syntax. However, developers could still enable short tags by setting the "short_open_tag" configuration option to "On" in the php.ini file.

  • In PHP 5.4.0, the short tag syntax was made optional, and developers could enable it by setting the "short_open_tag" configuration option to "On" or by using the new short echo syntax (<?=) instead of the <?php echo statement.

  • In PHP 7.0.0, short tags were officially deprecated, meaning that they were no longer recommended for use in new PHP code. However, they were still supported for backwards compatibility with older code.

  • In PHP 8.0.0, short tags were removed entirely, meaning that they were no longer supported in PHP code. Developers are now encouraged to use the full <?php opening tag or the short echo syntax instead.

Does this fit with what you are seeing (given the version of PHP you might be running)?

  • Like 1

5 hours ago, netcarver said:

Developers are now encouraged to use the full <?php opening tag or the short echo syntax instead.

That's what I experienced, yes. My short echo always works; I'm using it all over and have never had to enable short open tags anywhere. But I've never been using <? instead of <?php, and that's what I needed the php.ini setting for! So the essence is that <? and <?= are different kinds of short tags, and the former is bad while the latter is good. @Stefanowitsch

  • Like 2

Quote

Note: As short tags can be disabled it is recommended to only use the normal tags (<?php ?> and <?= ?>) to maximise compatibility.

From PHP.net: https://www.php.net/manual/en/language.basic-syntax.phptags.php

I read it as follows: while the short open tag can be disabled (and often is by default), the shorthand echo <?= is always available since PHP 5.4, isn't deprecated, and hence works fine on older as well as the latest PHP versions.
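A quick way to see the difference from the command line (a minimal sketch; PHP reads the script from stdin here):

# With short_open_tag disabled, <? is treated as plain text,
# while <?= still works (always available since PHP 5.4)
echo '<?= 1 + 1 ?>' | php -d short_open_tag=0      # prints 2
echo '<? echo 1 + 1; ?>' | php -d short_open_tag=0 # prints the tag literally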

  • Like 2

  • 2 weeks later...

I have seen this or maybe a very similar warning a few times - not with DDEV as I only use project abbreviations and not fully qualified domain names there.

So your project would be running as something like brk.ddev.site here.
Chrome is fine with that.

Possible solutions:

  • don't use full domain names
  • use abbreviations instead
  • write it as domain-tld (with a hyphen) instead

 

And always remember the ddev.site part is a fully qualified domain that's registered and everything.
So technically yourproject.ddev.site is (or could be) a real subdomain, and due to some security features Chrome tries to verify that this subdomain is secure.

[screenshot]

  • Like 2

39 minutes ago, wbmnfktr said:

And always remember the ddev.site part is a fully qualified domain that's registered, so technically yourproject.ddev.site is (or could be) a real subdomain.

This!

"Just" make an offer on ddev.site and put the certificate on your setup?

Another solution is to create your own self-signed certificate and add it to your OS certificate authority store in order to trust it.

  • Haha 2

16 hours ago, elabx said:

After watching this thread for a long time I finally tested ddev today and wow! I had never set up any environment so fast.

Congratulations. A problem nowadays is that people complain and ask questions before trying things out themselves, only to realize that setting things up takes way less time than writing and asking.


  • 2 weeks later...

I have a new favourite alias:

function rockshell() {
    ddev exec php rock "$1"
}

WHY?

RockShell needs to be run from within the DDEV web container. So usually you'd either ssh into the container and then run "php rock ..." or you'd have to use the command "ddev exec php rock ..."

With this alias, all you need to do is type "rockshell ...". Or don't add any command and you'll get the list of all RockShell commands:

[screenshot: list of RockShell commands]
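For example (assuming the function is defined in your shell profile, e.g. ~/.bashrc or ~/.zshrc):

rockshell              # no argument: lists all available RockShell commands
rockshell db:restore   # runs "ddev exec php rock db:restore" inside the web container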

Here are all current aliases that I use:

# ddev aliases
alias ddc='ddev config --php-version=8.1 --database=mariadb:10.6 --webserver-type=apache-fpm --timezone=Europe/Vienna --omit-containers=dba'
alias dds='ddev ssh && alias ll="ls -alh"'
#alias dd='ddev start && ddev launch && ddev auth ssh -d ~/.ssh/ddev'
alias dd='colima start && ddev start && ddev launch && ddev auth ssh'
alias ddm='ddev launch -m' # launch mailhog
alias ddr='ddev restart'
alias ddp='ddev poweroff'
alias ddx='ddev xdebug on'
alias ddxo='ddev xdebug off'
function rockshell() {
    ddev exec php rock "$1"
}

 

  • Like 1

Ok just realised that the "rockshell" alias does not work 100%

rockshell db:restore -y

--> it will still ask for confirmation, so the -y seems to be ignored. Any ideas why?

Update: OK... it is crazy...

[screenshot]

So if that thing knows it, why didn't it make it right from the beginning?!
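For what it's worth, a likely explanation (an assumption, not verified): the function only forwards the first argument ("$1"), so extra flags like -y never reach RockShell. Forwarding all arguments with "$@" should fix it:

function rockshell() {
    # "$@" forwards every argument (including flags like -y), not just the first one;
    # the "--" keeps ddev from trying to parse those flags itself
    ddev exec -- php rock "$@"
}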


  • 2 weeks later...

Hi crew! 

Does anyone know how to make symlinks work for Composer? Maybe there is an alias path within the DDEV container?

To do something like this: https://medium.com/pvtl/local-composer-package-development-47ac5c0e5bb4

EDIT: Ok, found something! https://carnage.github.io/2019/07/local-development
EDIT2: A simplified version? https://adithya.dev/using-docker-for-spatie-laravel-package-development/ more in line with what I had been doing. Haven't tried any of these but will get back when I do.


  • 2 weeks later...

OK, so here's what worked for me, for now, to develop Composer packages locally while having them work in a DDEV project:

Add a volume to the web service in DDEV by adding the file .ddev/docker-compose.composer-packages.yaml:

services:
  web:
    volumes:
      - "${COMPOSER_PACKAGE_DIRECTORY}:/packages"

I've also heard it's advised to add this as a whole other service? Let me know if anyone thinks this is better!

You can then define whatever path on your file system in .ddev/.env such as:

COMPOSER_PACKAGE_DIRECTORY="../../packages"

Now you can just git clone your repos into the path you defined in the variable and run "ddev composer install". If you have the following configuration in your composer.json, it will symlink your libraries instead of installing copies from the remote repository:

{
  "repositories": [
    {
      "type": "path",
      "url": "/packages/*",
      "options": {
        "symlink": true
      }
    }
  ]
}

Check that the url matches the container path set as the volume target in the yaml file.
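For instance (a hypothetical package name, just to illustrate; it assumes the package lives in your packages directory, declares that name in its own composer.json, and its default branch is main):

# Require the local package's dev branch; Composer resolves it via the
# path repository above and symlinks it from /packages inside the container
ddev composer require my-vendor/my-package:dev-main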

I also added the additional layer proposed by the original article that got me started, to be able to keep my composer.json "clean" and without the local package path definition. I set up the script advised in the original article:

#!/usr/bin/env bash
# Abort if jq is not available, since it is used to merge the two JSON files.
command -v jq >/dev/null 2>&1 || { echo >&2 "jq is required to support local development of composer packages, please install using your OS package manager before continuing"; exit 1; }
# Merge composer.json and composer.local.json into a temporary composer.dev.json.
jq -s ".[0] * .[1]" composer.json composer.local.json > composer.dev.json
# Run composer against the merged file, passing through all arguments.
COMPOSER=composer.dev.json php /usr/local/bin/composer "$@"
# Clean up the temporary files so the regular composer.json/.lock stay untouched.
rm composer.dev.json
[ -e "composer.dev.lock" ] && rm composer.dev.lock

Save this as .ddev/commands/web/composer-local 

Create a composer.local.json where you can hold a local configuration. It will be merged with composer.json into a temporary composer.dev.json file that is used for the install, guarding your composer.json and composer.lock files from being edited.

Example I am using as composer.local.json:

{
  "repositories": [
    {
      "type": "path",
      "url": "/packages/*",
      "options": {
        "symlink": true
      }
    }
  ]
}

If you comment out the lines that delete composer.dev.json, you'll see that the "repositories" property is overwritten by our custom definition.
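You can also inspect the merged result without running Composer at all, using the same jq merge the script relies on (assuming jq is installed where you run it):

# Print the merged configuration that composer-local would write to composer.dev.json
jq -s '.[0] * .[1]' composer.json composer.local.json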

Then run:

ddev composer-local install

You should see your composer dependencies available as local packages linked to the repository.

You can just roll back with the regular composer install:

ddev composer install

I am still not sure whether deleting the composer.dev.json and .lock files is really necessary; leaving them in place did help me figure out what was going on.

Please test if you find this useful and let me know how it goes.

Thanks everyone for your thoughts and work put into this thread, I'm really enjoying migrating to ddev!

  • Like 2

@elabx what's the reason behind the setup you describe here?
I probably missed a lot or just don't understand what's in previous parts of this discussion.

You mentioned this:

On 8/13/2023 at 6:36 PM, elabx said:

here's what worked for me for now to develop composer packages locally while having them working in a ddev project

So... as far as I can understand this - combined with the rest - it's something more or less outside of the usual ProcessWire realm, i.e. developing/working on a Composer package. Am I right about this?

I am just wondering if I missed something in regard to handling Composer packages in ProcessWire and DDEV while working on a project.


1 hour ago, wbmnfktr said:

So... as far as I can understand this - combined with the rest - it's something more or less outside of the usual ProcessWire realm, i.e. developing/working on a Composer package. Am I right about this?

Yes! This actually has nothing to do with ProcessWire itself.

So, normally you just want to load the Composer dependencies and that's it! But what if you want to test a library you are developing, which has its own dependencies defined through Composer? Ideally, you want this installed in an existing local project. And you want to do this with Composer itself!

Composer of course has this in mind: it has the symlink option so that Composer knows it should load a local copy, where you can have a repository checked out on any branch.

{
  "repositories": [
    {
      "type": "path",
      "url": "../../packages/*",
      "options": {
        "symlink": true
      }
    }
  ]
}

In my long gone and not missed non-ddev days this worked great with just having my repos checked out in a folder above my project's. I just had to include the local path as repositories, and it symlinked the libraries from the upper packages directory. This wasn't perfect, since when working on the local version, the composer.json and composer.lock got edits that I didn't want in the main branch of the project's repo, so on every change I had to roll back the updates in these two files, then install again if I wanted to test "live" versions locally. But more on this later.

With ddev this is not so straightforward, since the container's filesystem is isolated from the host. So Composer threw an error about not finding the path to that repo. Of course, there is no "../packages" in the container's volume! Hence, we use the first config to mount a path that is reachable by the Composer path configuration within the container. The second config, the .env variable for the path, is not strictly necessary, but I thought it would be nice to handle this kind of thing in a variable, since each developer might place their repositories somewhere else in their filesystem.

.ddev/docker-compose.composer-packages.yaml:

services:
  web:
    volumes:
      - "${COMPOSER_PACKAGE_DIRECTORY}:/packages"

 .ddev/.env 

COMPOSER_PACKAGE_DIRECTORY="../packages"

And the composer.json configuration for the local repo; note how in this case the path is defined from the root, since that's where the volume is mounted in the previous instruction.

{
  "repositories": [
    {
      "type": "path",
      "url": "/packages/*",
      "options": {
        "symlink": true
      }
    }
  ]
}

And this is like 80% of the problem solved!

The shell script wrapped in the ddev command is a way to not mess with the regular lockfile, which is the one I use to deploy to production. So it creates an "alt" composer.json and lockfile to work in a different Composer setup in parallel to your "production" one.

 

 

 

  • Like 2

OK. Now this makes some sense to me. It's not about the Composer packages in ProcessWire that I deal with once in a while in a project, but about those I (never will) develop for later use.

This seems to be quite a lot of work for what I would call a simple task. But OK, I don't know anything about developing Composer packages - that has to be said.

Thanks for that detailed answer and background on this. At least I can relax now and know I didn't break a few projects because I used composer packages in a wrong way or something.

  • Like 1

8 minutes ago, wbmnfktr said:

This seems to be quite a lot of work for what I would call a simple task. But OK, I don't know anything about developing Composer packages - that has to be said.

I very much agree it seems complicated! I am very new to Composer packages myself and this is just the first thing I figured out; it also felt like DDEV made it really easy to make this "ergonomic" for daily use.

It's at least a way that makes sense to me, haha, and I end up with one extra command ( ddev composer-local {$@} ) that just works when I want to work with my local versions, and another command to go back to live/latest.

  • Like 1

On 10/29/2022 at 8:54 AM, bernhard said:

The biggest benefit for me is that you get a real unix dev environment. So if you have some special need for your server (eg creating thumbnails from PDFs via poppler-utils), you are out of luck with Laragon. With DDEV you simply add poppler-utils to the config of the web container, and you can develop everything locally and it will work the same on the live server.

I confirm this. This is one of the main benefits. Also, every project can have its own configuration, for example if you need to run an old PHP or MySQL version (which you shouldn't). The best part is, you can run the projects simultaneously.
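As a rough sketch of what that looks like in practice (an assumption on my part: using DDEV's webimage_extra_packages option, exposed as a "ddev config" flag, to add Debian packages to the web container image):

ddev config --webimage-extra-packages=poppler-utils
ddev restart
# Verify the poppler tools are now available inside the container
ddev exec -- pdftoppm -v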

  • Like 3

On 10/30/2022 at 8:58 AM, pwired said:

By the way ... did someone mention that Docker Desktop is no longer free like it obviously was before?
I don't mean if you are running a company with 250+ employees.

You don't need Docker Desktop to run ddev, at least on WSL or Linux (not sure about macOS, but they recommend using Colima there). You can just use the free open source docker-ce (Docker Engine).
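For reference, a minimal sketch of getting the engine without Docker Desktop on a Debian/Ubuntu-based distro or WSL (assuming you're fine with Docker's convenience script):

# Install docker-ce (Docker Engine) via Docker's convenience script,
# then allow the current user to talk to the daemon without sudo
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker "$USER"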
