
djr

Members
  • Content Count

    12
  • Joined

  • Last visited

Community Reputation

28 Excellent

About djr

  • Rank
    Jr. Member
  • Birthday 08/06/1996

Profile Information

  • Gender
    Male
  • Location
    Scotland
  1. Oh. That tells me it's using the native mysqldump (not the PHP implementation), but it's still failing. Perhaps the file permissions don't allow creating a new file (data.sql) in the root of your site? I should probably add a check for that (see the sketch below).
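Something like this is the kind of check I have in mind (a rough sketch, not the module's actual code; `$config->paths->root` is the usual PW site root):

```php
// Rough sketch: make sure we can actually create data.sql before shelling
// out to mysqldump, so a permissions problem is reported clearly.
$dumpFile = $config->paths->root . 'data.sql';
if (!is_writable(dirname($dumpFile))) {
    $this->error("Backup failed: " . dirname($dumpFile) . " is not writable.");
}
```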
  2. @tyssen, @jacmaes: Released 0.0.2, which has a pure-PHP fallback for mysqldump and tar. Give it a go!
  3. @tyssen, @jacmaes: Most likely the server doesn't have the mysqldump utility available. It's possible to add a pure-PHP fallback (Pete's ScheduleBackups had one), but it will probably be considerably slower than the real mysqldump. I'll see about adding it soon, but I'm a bit busy today.
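For the curious, the fallback idea is roughly this (an illustrative sketch under my own assumptions, not the module's actual code):

```php
// Use the native mysqldump when available; otherwise dump table-by-table
// over PDO. Illustrative sketch only.
function dumpDatabase(PDO $pdo, $db, $user, $pass, $outFile) {
    $mysqldump = trim((string) shell_exec('command -v mysqldump'));
    if ($mysqldump !== '') {
        // Native path: much faster, and handles edge cases for us.
        shell_exec(sprintf('%s -u%s -p%s %s > %s',
            $mysqldump, escapeshellarg($user), escapeshellarg($pass),
            escapeshellarg($db), escapeshellarg($outFile)));
        return;
    }
    // Pure-PHP path: considerably slower, but works on restricted hosts.
    $out = fopen($outFile, 'w');
    foreach ($pdo->query('SHOW TABLES')->fetchAll(PDO::FETCH_COLUMN) as $table) {
        $create = $pdo->query("SHOW CREATE TABLE `$table`")->fetch(PDO::FETCH_NUM);
        fwrite($out, "DROP TABLE IF EXISTS `$table`;\n{$create[1]};\n");
        foreach ($pdo->query("SELECT * FROM `$table`", PDO::FETCH_NUM) as $row) {
            // NULLs and binary columns would need extra care in real code.
            $values = implode(',', array_map([$pdo, 'quote'], $row));
            fwrite($out, "INSERT INTO `$table` VALUES ($values);\n");
        }
    }
    fclose($out);
}
```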
  4. Now available in the modules directory: http://modules.processwire.com/modules/schedule-cloud-backups
  5. At the moment it doesn't have any knowledge of Glacier, but you should be able to use S3's Object Lifecycle Management to automatically transition backups from S3 to Glacier.
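Untested sketch of what that lifecycle rule could look like with the AWS SDK for PHP (v3 shown; the bucket, prefix, region and 30-day window are all placeholders):

```php
use Aws\S3\S3Client;

$s3 = new S3Client(['version' => 'latest', 'region' => 'eu-west-1']);
$s3->putBucketLifecycleConfiguration([
    'Bucket' => 'my-backup-bucket',
    'LifecycleConfiguration' => ['Rules' => [[
        'ID'          => 'archive-backups-to-glacier',
        'Filter'      => ['Prefix' => 'backups/'],
        'Status'      => 'Enabled',
        // Move each backup to Glacier 30 days after it is created.
        'Transitions' => [['Days' => 30, 'StorageClass' => 'GLACIER']],
    ]]],
]);
```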
  6. Thanks Ryan. Re your suggestions (I've made some changes to the code): the data.sql file is already protected from web access by the default PW .htaccess file, and I've added a .htaccess file in the module directory to prevent access to the backup tarball (sketch below). I've changed the shouldBackup check to be more specific (it behaves the same as your suggestion, but with simpler logic). I don't know what the issues around conditional autoloading in PW 2.4 are, so I'll leave that for now(?). I'll put IP whitelisting on the todo list, but I don't think it's essential right now, since it's unlikely anybody would …
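The module-directory .htaccess boils down to this (Apache 2.2 syntax; on Apache 2.4 it would be `Require all denied`):

```apacheconf
# Deny all web access to the module directory (and the backup tarball in it)
Order deny,allow
Deny from all
```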
  7. Hello! I've written a little module that backs up your ProcessWire site to Amazon S3 (I might add support for other storage providers later, hence the generic name). Pete's ScheduleBackups was used as a starting point, but it has been overhauled somewhat. It's still far from perfect, but you might find it useful. Essentially, you set up a cron job to load a page every day (example below), and the script creates a .tar.gz containing all of the site files and a dump of the database, then uploads it to an S3 bucket. Currently, only Linux-based hosts are supported (hopefully …
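The cron entry would look something like this (the URL and token are placeholders, not the module's real endpoint):

```
# Fetch the backup-trigger page at 3am every day
0 3 * * * wget -q -O /dev/null "http://example.com/?backup=SECRET"
```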
  8. I'm really not sure what the issue is with writable files with regard to security. If the server is compromised in a way that would allow an attacker to modify these files, then you're already screwed: they could modify the code as well, and in that case they would also be able to access the database. The only issue I can see is files having incorrect permission modes, which could easily be detected and warned about in the admin interface (sketch below). (Additionally, if I know that I will never want to make any metadata changes directly on the production site, I could set a configuration option (or something …
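By way of illustration, the permissions check could be as simple as this (the file list is hypothetical):

```php
// Warn about any watched file that is writable by "other" users.
foreach (['index.php', 'site/config.php'] as $file) {
    if (fileperms($file) & 0002) {
        echo "Warning: $file is world-writable\n";
    }
}
```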
  9. Okay, so what happens if I take a copy of the production site (incl. database) and make some changes on my local machine? At the same time, my client logs in and edits something, or the production database is otherwise altered. Then I dump my db and upload it and the files to the production server: I have just overwritten my client's changes. If you can ensure that nobody makes changes between taking the copy and uploading the changed database, it works fine. Unfortunately, I cannot make that guarantee. Additionally, since the metadata is in the database …
  10. @teppo: Ah, of course, selectors could be a problem. I'm not familiar enough with PW to know how this could work properly. And you're right, reusable fields are convenient, so I guess we could keep global fields as in the current codebase. That would also minimize potential issues in migrating existing code that assumes fields are global rather than local to each template. However, the fields could still be stored in the filesystem rather than in the database; that doesn't need to get in the way of the admin interface's GUI tools for administering fields/templates. They …
  11. It's definitely possible, but I'm not convinced it's the best way to do it. I was talking about changing the PW model: your system essentially functions like a database (schema) migration tool, whereas what I'm suggesting is avoiding the need to change the database schema at all. I'm imagining that we could create two files for each template: one holding the actual template PHP code (as normal), and the other holding the template and field settings that PW currently stores in the database. A very rough example using JSON as the metadata format: https://gist.github.com/DavidJRoberts
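Something along these lines is what I have in mind (keys and field names are purely illustrative, not a final format):

```json
{
    "template": "basic-page",
    "fields": {
        "title": { "type": "FieldtypePageTitle", "label": "Title", "required": true },
        "body":  { "type": "FieldtypeTextarea",  "label": "Body" }
    }
}
```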
  12. I've been thinking about this issue lately too. I want the template/field metadata to be stored in files, not the database, since the metadata is closer to code than data. I think that using some sort of system to replay changes is a bit of a cop-out; I'd much prefer it if no database changes were necessary. This means: 1. Template metadata has to be moved out of the templates table into files on disk. Easy enough. 2. Field metadata has to be moved out of the database. This is a bit trickier than the template metadata: the data in the fields table could easily go in a file, but the issue …