
☁️ Duplicator: Backup and move sites


So it turned out that the FTP extension wasn't loaded. After enabling it and restarting Apache / Laragon, everything works as expected. (With the earlier Duplicator version, however, no error message was shown.) Thanks to @Autofahrn for the help.
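For anyone hitting the same thing, a quick sanity check (plain PHP, nothing Duplicator-specific) confirms whether the extension is actually available:

    <?php
    // Both should print bool(true) when PHP's FTP extension is loaded:
    var_dump(extension_loaded('ftp'));
    var_dump(function_exists('ftp_connect'));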

1 hour ago, Autofahrn said:

Sorry, I somehow missed the FTP part, since the package build itself already seemed to fail.

nope:

15 hours ago, dragan said:

Local backups work fine, but FTP does nothing.

😉

2 minutes ago, dragan said:

Local backups work fine, but FTP does nothing.

FTP (and any cron-related stuff) failed due to the false error seen in the log (introduced with the new package format).


@flydev - a couple of bugs and questions for you.

Firstly, I am getting these notices with the current dev branch:

[screenshot of the notices]

Would be great if those could be cleaned up please.

The other question is a weird one. With both the master and dev versions I have been getting "MySQL server has gone away" errors lately on one site/server, but only from the cronjob (run from the system cron). If I do a Backup Now from the Duplicator Process module page it works fine, so it seems like there is some strange difference when run via CLI from the cronjob. The database size recently went over 128MB, which is the size of my max_allowed_packet setting. I bumped it up and now it works from the command line without the error. The weird thing, though, is that I have another server which has been working fine with a max_allowed_packet of 16M and a database over 250MB. Both servers are Digital Ocean VPSes. The one with the errors is running the latest version of Debian and the one without errors is on Ubuntu.

The Debian server shows:

Ver 15.1 Distrib 10.3.18-MariaDB, for debian-linux-gnu (x86_64) using readline 5.2

while the Ubuntu one shows:

Ver 14.14 Distrib 5.7.28, for Linux (x86_64) using  EditLine wrapper

So I am wondering if it's a MariaDB vs MySQL issue, the version difference, or something else.
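In case it helps anyone compare the two servers: the relevant settings can be read straight from ProcessWire. A sketch assuming the standard $database API variable; the variable names are standard MySQL/MariaDB ones:

    <?php
    // Run on both servers (e.g. from a template or the Tracy console) and diff the output:
    $stmt = $database->query(
        "SHOW VARIABLES WHERE Variable_name IN ('version', 'max_allowed_packet', 'wait_timeout')"
    );
    foreach ($stmt->fetchAll(\PDO::FETCH_KEY_PAIR) as $name => $value) {
        echo "$name = $value\n";
    }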

Anyway, wondering if you (or anyone else) might have come across this recently.

Thanks!


@flydev - sorry, just noticed something new. Even though the Duplicator Process module page shows a valid package created from the cronjob, I am seeing these logged notices that suggest the package was not built correctly. Any ideas?

[screenshot of the logged notices]


@adrian, if you use the duplicator.module 1.3.13 from this post, it should remove those issues:

No idea regarding the mysql issues, I don't have that large databases.


Thanks @Autofahrn but that is the version I am using.

Sorry, I take that back - I am using 1.3.12. Where do I find 1.3.13?

5 minutes ago, adrian said:

Sorry, I take that back - I am using 1.3.12. Where do I find 1.3.13?

Strange, the link should lead you to the last post on page 14, which contains the update. Not sure why it points to the first page. Tricked myself: the up arrow links to the correct post.

Duplicator-ATO1.3.13.zip


Thanks @Autofahrn - that new version does fix those notices.

Unfortunately I am getting MySQL gone away errors again when running via cronjob. I don't expect that has anything to do with the new version; rather, it's a bit random, and the tests after I changed the max_allowed_packet setting didn't actually fix it. The weird thing is that the error is being triggered ~6 seconds into the duplicator process, so it's not a timeout issue.


That's quite a short time; the timeout should be 120 or 600 seconds from what I see in the code. The backup is effectively performed using a regular WireDatabaseBackup.

Could this be a permissions issue, with cron running as a different user?

Sorry, no real idea.


Ok, I'm getting there 🙂

@flydev - there is still one more notice in 1.3.13 - line #44 should be: 

$this->log("Logging {$logName}\n");

That is, $logName rather than $zipFilename, which isn't defined.

Regarding the MySQL gone away error, it seems to have been fixed by doing this:

        //DUP_Util::setMemoryLimit(self::DUP_PHP_MAX_MEMORY);
        //DUP_Util::setMaxExecutionTime(self::DUP_PHP_EXECUTION_TIME);
        set_time_limit(0);             // remove the PHP execution time limit
        ini_set('memory_limit', '-1'); // remove the PHP memory limit

I know this is not ideal, but it's time to move on so will have to do for now.


Thanks @Autofahrn 👊🏻👊🏻

 

@adrian About the "MySQL server has gone away" issue.

It's quite hard to determine where this issue comes from - and, just saying, I still haven't found a way to track it down. FYI, I also got this issue on a new MariaDB server on Windows (a beefy server where the PW setup contains more than 1.5 million pages), and also sometimes on my local MAMP setup (the issue does not concern only Duplicator). I also remember getting it two weeks ago on a shared server (another setup with about 260k pages).

ATM the only hint that comes to mind is that the issue mostly appears when using the API - which leads to another hint: a server memory issue.

Thing to test: disable TracyDebugger.

Other thing to check: I mostly work with InnoDB databases, and I remember that adjusting some MariaDB parameters helped.

Parameters to check (a sample config sketch follows the list):

  • max_allowed_packet 
  • innodb_log_file_size
  • innodb_buffer_pool_size
  • innodb_log_buffer_size
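To make those concrete, here is a minimal my.cnf sketch. The values are illustrative assumptions for a small VPS, not recommendations - the right numbers depend entirely on the server's RAM and data size:

    # my.cnf (or a conf.d include) - illustrative values only
    [mysqld]
    max_allowed_packet      = 256M   # must exceed the largest packet the dump sends
    innodb_buffer_pool_size = 512M   # often sized to hold the InnoDB working set
    innodb_log_file_size    = 128M
    innodb_log_buffer_size  = 32M
    # restart MySQL/MariaDB after editing for these to take effect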

PS: To adjust the settings, check this reply:

 

Here's what the MySQL docs say:


Some other common reasons for the MySQL server has gone away error are:

 

  • You (or the db administrator) has killed the running thread with a KILL statement or a mysqladmin kill command.

  • You tried to run a query after closing the connection to the server. This indicates a logic error in the application that should be corrected.

  • A client application running on a different host does not have the necessary privileges to connect to the MySQL server from that host.

  • You got a timeout from the TCP/IP connection on the client side. This may happen if you have been using the commands: mysql_options(..., MYSQL_OPT_READ_TIMEOUT,...) or mysql_options(..., MYSQL_OPT_WRITE_TIMEOUT,...). In this case increasing the timeout may help solve the problem.

  • You have encountered a timeout on the server side and the automatic reconnection in the client is disabled (the reconnect flag in the MYSQL structure is equal to 0).

  • You are using a Windows client and the server had dropped the connection (probably because wait_timeout expired) before the command was issued.

    The problem on Windows is that in some cases MySQL does not get an error from the OS when writing to the TCP/IP connection to the server, but instead gets the error when trying to read the answer from the connection.

    The solution to this is to either do a mysql_ping() on the connection if there has been a long time since the last query (this is what Connector/ODBC does) or set wait_timeout on the mysqld server so high that it in practice never times out.

  • You can also get these errors if you send a query to the server that is incorrect or too large. If mysqld receives a packet that is too large or out of order, it assumes that something has gone wrong with the client and closes the connection. If you need big queries (for example, if you are working with big BLOB columns), you can increase the query limit by setting the server's max_allowed_packet variable, which has a default value of 64MB. You may also need to increase the maximum packet size on the client end. More information on setting the packet size is given in Section B.4.2.9, “Packet Too Large”.

    An INSERT or REPLACE statement that inserts a great many rows can also cause these sorts of errors. Either one of these statements sends a single request to the server irrespective of the number of rows to be inserted; thus, you can often avoid the error by reducing the number of rows sent per INSERT or REPLACE.

  • It is also possible to see this error if host name lookups fail (for example, if the DNS server on which your server or network relies goes down). This is because MySQL is dependent on the host system for name resolution, but has no way of knowing whether it is working—from MySQL's point of view the problem is indistinguishable from any other network timeout.

    You may also see the MySQL server has gone away error if MySQL is started with the skip_networking system variable enabled.

    Another networking issue that can cause this error occurs if the MySQL port (default 3306) is blocked by your firewall, thus preventing any connections at all to the MySQL server.

  • You can also encounter this error with applications that fork child processes, all of which try to use the same connection to the MySQL server. This can be avoided by using a separate connection for each child process.

  • You have encountered a bug where the server died while executing the query.

 

Let me know - thanks guys  👍🏼


I just pushed the new master version 1.3.13.

You can upgrade the module through ProcessWireUpgrade.

20 hours ago, flydev 👊🏻 said:

About the "MySQL server has gone away" issue.

Thanks for the detailed info. I must admit to being pretty new to InnoDB, and actually I am glad you mentioned it, because the other site with the bigger DB, which is having no problems at all, is on MyISAM - so I guess that is the difference.

I have already played with a few of those settings you mentioned, but nothing consistently helped. The only thing that seems to be working is the change to set the PHP time limit and memory limit both to unlimited. It's also really confusing that this is only an issue when run via cron - it makes me think there is a significant difference in PHP or MySQL settings. Obviously PHP has a CLI-specific .ini file, but I don't think there is anything like that for my.cnf.
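A quick way to see what the cron side actually runs with (plain PHP; run it from the cron entry itself, since the web values will usually differ):

    <?php
    // Diagnostics: which SAPI and ini file are in play, and what limits apply there.
    echo php_sapi_name(), "\n";           // 'cli' under cron, e.g. 'fpm-fcgi' on the web
    echo php_ini_loaded_file(), "\n";     // the php.ini this SAPI loaded
    echo ini_get('memory_limit'), "\n";
    echo ini_get('max_execution_time'), "\n";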

Maybe Duplicator should look into using WireDatabaseBackup's exec mode: https://github.com/processwire/processwire/blob/321ea0eed3794c5f2b50c216b603fad3e7347ce6/wire/core/WireDatabaseBackup.php#L131-L133 - any thoughts on trying that?
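For reference, a minimal sketch of a plain WireDatabaseBackup call (the class linked above), assuming a ProcessWire context where $database is available. Whether and how its exec mode can be enabled per call is exactly the open question here, so this only shows the baseline PHP-based dump:

    <?php
    // Baseline PHP-based database dump via ProcessWire's WireDatabaseBackup:
    $backup = $database->backups();   // WireDatabaseBackup instance
    $file = $backup->backup();        // writes a .sql dump and returns its path, or false on failure
    if ($file === false) {
        foreach ($backup->errors() as $error) echo "$error\n";
    }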

Regarding your suggestion of disabling Tracy - I have been thinking for a while about having an option to disable Tracy when php_sapi_name() shows the script is running from the CLI. I haven't added it yet, but certainly could, although I doubt that's the cause of this issue.
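The guard itself would be tiny - a sketch of the check adrian describes, not Tracy's actual code:

    <?php
    // Skip loading the debugger when running from the command line (e.g. cron):
    if (php_sapi_name() === 'cli') {
        return;
    }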

I am going to keep monitoring the daily cron over the next week to see if it works every day with those set_time_limit(0); ini_set('memory_limit', '-1'); settings and if it does, at least that will tell us something 🙂

Thanks again,
Adrian

 


Ok @adrian, thanks - let us know then.

 

2 hours ago, adrian said:

Maybe Duplicator should look into using WireDatabaseBackup's exec

It's planned, yes - I was already coding something tonight using the native tools. Stay tuned.

3 minutes ago, flydev 👊🏻 said:

It's planned, yes - I was already coding something tonight using the native tools. Stay tuned.

Thank you!

FYI - after setting innodb_buffer_pool_size based on that calculation query, my duplicator cron failed, so I almost feel like that made things worse. I am beginning to wonder if InnoDB just needs more resources than MyISAM and my VPS just doesn't have enough oomph 🙂
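The calculation query itself isn't quoted in this thread; for anyone following along, a commonly cited rule-of-thumb query looks roughly like this (an assumption - it may not be the exact one used here):

    <?php
    // Total InnoDB data+index size with ~60% headroom, in MB:
    $sql = "SELECT CEILING(SUM(data_length + index_length) * 1.6 / POWER(1024, 2)) AS suggested_mb
            FROM information_schema.tables
            WHERE engine = 'InnoDB'";
    echo $database->query($sql)->fetchColumn(), " MB suggested for innodb_buffer_pool_size\n";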


Good to know. To give you a simple answer: InnoDB used to be faster for writes and MyISAM faster for reads, but nowadays InnoDB is the way to go. You should also be aware that MyISAM is effectively deprecated: MySQL 8 already moved the system tables to InnoDB, and MyISAM may be 👉 removed in a later release.

 

8 hours ago, flydev 👊🏻 said:

It's planned, yes - I was already coding something tonight using the native tools. Stay tuned.

Not sure if the native backup tools are really the best option... When restoring native backups on sites with a lot of data, it took ages, while it only took several seconds with a regular mysql dump. I think the native tools add quite a bit of bloat. It has been discussed somewhere, but I have no links at the moment, sorry. You might have this in mind 😉

54 minutes ago, bernhard said:

When restoring native backups on sites with a lot of data, it took ages, while it only took several seconds with a regular mysql dump

Ok for restoring - we might find a way to speed things up - but about the backup, you kinda lost me there with your comment 😂

1 hour ago, bernhard said:

Not sure if the native backup tools are really the best option... When restoring native backups on sites with a lot of data, it took ages, while it only took several seconds with a regular mysql dump. I think the native tools add quite a bit of bloat. It has been discussed somewhere, but I have no links at the moment, sorry.

I remember it quite well. It felt like downloading the internet on a 28k modem back in the day 😴


Thx for the link @dragan. I don't know if there are any options one could set for the export. Maybe there is a way to make it work properly with the native export as well...


I made some progress and could finally back up, on a Unix machine, the biggest setup I have - about 400MB of files and a 1.6GB database - using the newly implemented method, which uses the native MySQL tools (a backup imported there, as the method is not yet finished enough to work on Windows). The method consists of writing a shell script on the fly, which is then executed.
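For the curious, a heavily simplified sketch of that idea: write a small shell script around the native mysqldump client and execute it. Everything below (names, paths, the omitted password handling) is an illustrative assumption, not Duplicator's actual code:

    <?php
    // Build a throwaway shell script that dumps the DB with the native client, then run it.
    $script = sys_get_temp_dir() . '/dup-dump.sh';
    $cmd = sprintf(
        "mysqldump --host=%s --user=%s %s > %s\n",
        escapeshellarg('localhost'),
        escapeshellarg('dbuser'),            // real code would pull these from $config->db*
        escapeshellarg('dbname'),
        escapeshellarg('/path/to/dump.sql')
    );
    file_put_contents($script, "#!/bin/sh\nset -e\n" . $cmd);
    chmod($script, 0700);
    exec(escapeshellarg($script) . ' 2>&1', $output, $exitCode);
    // $exitCode === 0 means success; password handling (e.g. --defaults-extra-file) is left out here.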

If you wanna test it - still a work in progress - you can find the new method implemented on the dev branch on Github.

 

[screenshot]


It's also working on Windows now - pushing the update to the dev branch 👍🏻

Test version: v1.4.15


