Everything posted by Pete

  1. Thanks for sharing, Luis - very useful points for something I'm working on right now.
  2. @totoff, it keeps the cache for an hour and then re-caches the entire sitemap, so it does 24 updates a day - current enough for even the most active sites. @Georgson - no problem!
  3. I think there are a lot of things for me to consider with memory usage. I know one website of mine might struggle, as it has a few uploads that are over 100MB. In those cases there should at least have been enough memory to upload the file in the first place, so when the module gets to backing up the /assets/files directory it could first check the maximum size of any file fields in ProcessWire to get an idea of how big files might be, then iterate through them X files at a time (depending on that and on what the PHP environment will allow), flushing anything in memory as it goes. The problem with that approach is that it makes the process slower, but on the bright side it's a good opportunity to feed data back to the browser and show some sort of progress ("processing 1-10 of 256 pages" or something like that).

Some folders I actually need it to skip are /assets/sessions and /assets/logs, as either of those could contain numerous/large files that aren't necessary to back up.

I get the feeling the system command on Linux won't have a memory problem, simply because it's like running the command in a shell window (sorry Linux folk, I'm sure my terminology is all over the place). The obvious problem there is that the page could well time out while the command keeps running in the background, so if you ran it manually you'd have a hard job knowing when it's ready. I think I can assume that aside from /site/assets/files, everything else can be backed up in a matter of seconds - even with 100 modules installed, they're all small files. So it could give feedback once it has backed up the /wire directory as a whole (that should be more or less a standard size), then the /site directories one at a time, and we can work it like that.

It will actually give me a headache, as I need to run more commands on Linux, but I know you can pipe the successful results to a script even then. So for both Linux and Windows, if the backup sends progress to a database table specifically for backups, we can easily poll that table every few seconds using AJAX and show which backups are in progress and which are complete, as you get when running a backup via cPanel. Tables are another area where I'll have to think about the number of rows, to make sure it's not trying to do too much at once - maybe iterating through the tables one at a time, checking the number of rows and splitting them if required, would be the way to go there.

It's all getting rather more complicated than I had originally intended the more I think about it, but hopefully I can make it better as a result. What I do know is that code examples from the internet are really helping me avoid re-inventing the wheel - hurrah for the Open Source community!
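A rough sketch of that per-table idea in shell. The mysql/mysqldump lines are commented out because they need a live server, so a stand-in table list shows the flow; the module itself would log progress to a database table rather than a file:

```shell
#!/bin/sh
# Sketch: dump tables one at a time and record progress somewhere pollable.
# Table names here are stand-ins, not a real ProcessWire schema.
BACKUP_DIR=$(mktemp -d)
STATUS="$BACKUP_DIR/progress.log"

TABLES="pages fields templates"   # stand-in for: mysql -N -e 'SHOW TABLES' mydb
for t in $TABLES; do
    # rows=$(mysql -N -e "SELECT COUNT(*) FROM $t" mydb)   # split the dump if huge
    # mysqldump --single-transaction mydb "$t" | gzip > "$BACKUP_DIR/$t.sql.gz"
    printf 'demo dump of %s\n' "$t" | gzip > "$BACKUP_DIR/$t.sql.gz"
    echo "done: $t" >> "$STATUS"  # the admin page could poll this via AJAX
done
cat "$STATUS"
```

The one-table-at-a-time loop is what keeps peak memory flat, at the cost of more round trips.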
  4. "Forced WordPress dev and MODx enthusiast" sounds like my bio from a few years back too. Welcome to the wonderful world of ProcessWire. You may find, as I did, that you have to un-learn old stuff as much as learn new stuff. The rule to work by with ProcessWire is "if it seems too difficult then there's probably an easier way" - which is what we're all here for on the forums.
  5. If they won't edit the old pages, you could even leave them in place and be clever with your navigation, so new URLs are done the new way and old ones the old way - .htaccess only sends requests to index.php if the requested file doesn't exist, so leaving the old pages there could be an option. I see your point about not wanting to take a chance with SEO, though the one time I did this I saw no drop - but then it wasn't like the site was high up in terms of SEO in the first place.
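For reference, the relevant rules in ProcessWire's .htaccess look roughly like this (simplified from memory) - a request only reaches index.php when no real file or directory matches it:

```apache
# If the requested file or directory physically exists, Apache serves it as-is;
# otherwise the request is handed to ProcessWire's index.php
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php?it=$1 [L,QSA]
```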
  6. Thanks! I'll take a look at this in more detail later
  7. Nice work! I was talking to ryan what seems like ages ago now about version control, and I remember stumbling across this easy-to-use diff class that might help in a future version: https://github.com/chrisboulton/php-diff EDIT: I just noticed that someone has built a useful jQuery merging tool that would also help, if you follow the link above and read the readme.
  8. Love it - I think it's an amazing module idea! Can it handle multiple photos per email, as was originally suggested earlier in this thread? I just don't see any groups under one title in your example gallery, ryan, though that could easily just be because you haven't sent an email with multiple photos yet. It might be time to split this off from the original topic though, as I only read this one by chance and wasn't expecting all this. Either that, or when it's ready for release I suppose the new module topic can just link back here. I dunno... just mumbling to myself.
  9. The problem there is that paths are inconsistent across hosting environments, but certainly if it's user-definable then it's people's own fault if it doesn't work. I put it under the module folder itself for now because that's not web-accessible either - the .htaccess already forbids accessing the modules directory directly - but I do see your point, so I can easily add that as an option.
  10. I think the biggest hurdle here is that the GD2 functions are hardcoded into /core/ImageSizer.php. What you might want to do is see if you can edit that file to use something else (I'm guessing you might be thinking of something like ImageMagick?) on a test installation. If that works out, then I guess (lots of guessing) ryan would need to abstract that class so that GD2 remains the default and any alternatives can be downloaded as additional modules that override it. I think that's how it could work, but to get started, editing that file how you like to see what you can achieve would be a good first step.
  11. The handy thing about building one in ProcessWire is that you can have it work out of the box as simply "Enter your question", but for those companies that like to add complexity, customising it to add more questions is simply a case of adding more fields to the "ticket" template - which is easier than most helpdesk packages I've used.
  12. Keep the suggestions coming - there's a way to go yet before I'll add some of them, but they're all welcome and will be considered once the basics are in place. Step 1 is definitely to see if I can get backups working in multiple environments, so attached is version 0.0.1. Consider it alpha and use it at your own risk, though I can't see why it would harm your installation - just giving you the obligatory "you have been warned" speech.

To install: unzip the file, put the folder in your /site/modules directory, install the module, set the retention period, and read the instructions below the retention drop-down in the module config to manually run a backup for now. Backups are stored under /site/modules/ScheduleBackups/backups, in /site and /db folders respectively, so you can monitor those. There is currently no "backup succeeded" message or anything like that printed when it's done - just wait for the browser to stop loading the page for now, or for the zip/tar file to stop growing.

Some things to note:

1. It does an OS check. This is because Windows can't run system() commands, as they're a Linux thing, so if it finds you're running Windows it uses the ZipArchive class built into PHP to back up the site and simple mysqli queries to back up the database.
2. If it doesn't detect Windows and can run system() commands, it does that instead which, from my past experience, is far quicker (plus it makes for nifty one-liner backup commands). It also does some detection to see whether safe mode is on and whether it can run system commands before backing up, so if safe mode is on or those commands aren't available, it falls back to using the functions mentioned in point 1.
3. During installation, a unique hash is created and saved. This is because when we set it to run via a cron job/scheduled task, we need a way of creating a backup without a logged-in superuser in attendance. The backup uses a URL that I doubt you would have as a page on your site (/runbackup) as well as this hash, so it is extremely unlikely anyone will try to bring your server down by spamming that URL. Further checks will be added in later versions so it can't run more than once a day anyway, or something like that.

Also, if anyone who is a better programmer than me wants to contribute, please feel free - some of what I've written is likely amateurish in places. Kudos to David Walsh for his excellent MySQL backup script, and to this excellent function on StackOverflow about recursively zipping folders, which saved me from re-inventing the wheel in those two areas. ScheduleBackups.zip
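The Linux path boils down to the kind of one-liners below. Paths and credentials are placeholders, and a throwaway directory stands in for the ProcessWire root so the tar line is actually runnable as shown:

```shell
#!/bin/sh
# Stand-in for your ProcessWire root, populated with demo content
SITE_DIR=$(mktemp -d)
BACKUP_DIR=$(mktemp -d)
mkdir -p "$SITE_DIR/site/assets/files" "$SITE_DIR/site/assets/sessions"
echo demo > "$SITE_DIR/site/assets/files/upload.txt"
echo sess > "$SITE_DIR/site/assets/sessions/sess_abc"

# Archive the site, skipping sessions and logs (large and not worth keeping):
tar -czf "$BACKUP_DIR/site.tar.gz" \
    --exclude='site/assets/sessions' --exclude='site/assets/logs' \
    -C "$SITE_DIR" .

# The database is one more command (needs a live MySQL server, so commented out):
# mysqldump -u dbuser -p'dbpass' mydb | gzip > "$BACKUP_DIR/db.sql.gz"

tar -tzf "$BACKUP_DIR/site.tar.gz"
```

Because tar streams straight to the gzip file, memory stays low no matter how big the uploads are - which is why the system() route is preferred where available.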
  13. I've looked at many of them and still decided to build my own, which says something, since it's not like I've got lots of free time. After looking at your last two links, most of the examples fall foul of my main gripe with helpdesk software: too many config options and trying to be all things to all users. They're fine for enterprises that have a team of IT folk to manage and man them, and someone willing to pay a lot of money (in some cases), but I'm betting that the most commonly overlooked market for helpdesk software is individuals and smaller companies simply wanting departments, tickets and a knowledgebase, with a little ticket escalation thrown in - which is what I'm aiming for.
  14. Funnily enough, I'm building one currently in ProcessWire, but it will be a while before it's ready.
  15. Is there anything gained by specifying the template in the selector as well, ryan, or does that not save any time on the query?
  16. Ah, now that's ryan's department as I've not got access (I don't think I have anyway...)
  17. Hey all. Since the topic of backups comes up every so often, I decided to write a module to encapsulate some code I use for backups on Linux installs. It's not quite ready yet, as I want a fallback option for Windows as well as an option for Linux shared hosting where you usually can't run system commands, but it's 80% complete for stage 1 and Linux backups work nicely.

The general idea is that you set the number of days to store backups. I standardised the options to 1 day, 3 days, 1 week, 2 weeks, 1 month, 3 months, 6 months and 1 year, rather than having it as an integer field, because I think these fit the most common scenarios (and I wanted to have a dropdown in my module config too). It defaults to 1 week, but depending on the size of the site and how much space you have, you might want to increase or decrease the retention period.

The idea is that you are given a URL with a unique hash (generated at install) which you then pass off to a cron job or Windows Scheduler, and this generates the backups. It will be expanded on once I've got the backups working across different environments, but the initial plan is to release a version that simply backs up the site and database, then a version that has a page where you can download backups, as well as an option to FTP/sync them to another server.

I don't want to tackle restores, though, as this would be difficult: you are logged into the admin whilst running the restore, so I think the first thing it would do when it has restored the database is log you out, and I don't want to make assumptions about replacing a user's /site/ folder either, so I think restores require some manual intervention to be honest. An alternative would be to do it anyway but rename your /site/ folder and take another database copy before restoring - but then I'm getting into the realms of trying to be too clever and anticipating what people are trying to do, and ProcessWire is all about not making assumptions.

Fortunately I have access to a site with several gigs of uploaded files as well as a reasonably large database, so I should be able to monitor how well it handles that on Linux and Windows. Smaller sites shouldn't take more than a minute to back up - often a matter of seconds. I shall keep you posted.
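For the cron side, the entry would look something like this. The domain, hash and parameter name are placeholders - use whatever URL the module gives you at install:

```shell
# Fetch the backup URL every night at 3:30am; cron just requests the page
30 3 * * * wget -q -O /dev/null "http://example.com/runbackup?hash=YOUR_HASH_HERE"
```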
  18. Well, you could get around the .htm bit by permanently redirecting the old URLs to the new ones via .htaccess, but I'm not sure how you would include the category name in each page unless you actually renamed them manually to include it. I would personally set them all up as /category/pagename/ and then simply do a 301 redirect in .htaccess from the old URLs to the new ones, and that should be fine. You're telling browsers and search engines "hey, that content is over here now", nobody sees an error page, and SEO should remain intact.
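A minimal .htaccess sketch of that kind of redirect (the page and category names are made-up examples):

```apache
# One-off 301 from an old flat URL to its new category-based location
Redirect 301 /about.htm /company/about/

# Or redirect every old .htm URL into a known category with mod_rewrite
RewriteEngine On
RewriteRule ^([a-z0-9-]+)\.htm$ /category/$1/ [R=301,L]
```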
  19. <?php
      // this is a single line comment
      $somevar = 'somevalue'; // you can add one after a line of code
      /* Or you can
         span multiple lines */
      ?>
  20. These sound like very good options to have on by default - thanks for pointing them out! Anything to curb the issues I've seen occasionally when people write their content in Word and then paste it into TinyMCE is welcome.
  21. Exactly this. It would be great if you added some functionality like $discussion->posts->find('title=your-selector-here'), and the same for post content, author etc., but I'm not sure how easy that would be with custom tables!
  22. Indeed - there are so many things to consider in forum software this complex, and a lot of it is unnecessary "fluff". They did trim out a lot of rarely-used features a few versions back, but have been steadily adding them back in since. I'm hoping that with their next major version they strip out a lot of things and make it more modular, like... ummm... ProcessWire.

Getting back to the idea of building a forum module in ProcessWire: there is the discussion module Joss mentions, but when attempting to build forum software you might want to create some custom tables for it, for scaling to millions of posts. It's not that you couldn't just throw more server power at it once you get to that many rows of data; it's more that even the posts table for this forum software has about 30 fields, so I would suggest there's a point where that becomes hard to manage. Then there are the additional fields in the topics, users and forums tables, and all told there are something like 60 tables in the software. I've not yet seen forum software that wasn't developed by multiple people over many years of work - it's a big undertaking even if you keep things relatively simple. Maybe the current software is a bit bloated, but it does the job.

I do have a module for this forum software as well, one that lets you grab member data, topics etc. from the forums so you can validate users against the forums as an alternative (or complement, depending on how it's used) to ProcessWire users - useful for community sites with a forum - but I'm still working on making it as easy as possible to install and set up, as well as standardising features, before it gets released.
  23. (Re: .pw domains) I thought about processwire.pw when I first heard about the domain, but it sounded... recursive. The one-word domains I thought of, like showcase.pw, will probably all be gone already, though I might be pleasantly surprised in that this might be a domain that isn't worth much to many businesses (marketing it as "Professional Web" like they have been just doesn't sell it in my mind - since anyone can buy a domain, there is no guarantee of professionalism!).
  24. Yup, that part of the process does make any conversion simpler. As long as the image is directly accessible via a browser, the importer can fetch it and put it into ProcessWire - same with files and the file field. The only thing you'd have to watch in either case is the slim possibility of getting an unexpected file type from the source site. I don't think it's a problem, as PW file fields only accept the file types you tell them to, but I wasn't sure whether it would throw an error with the API, so it might be worth putting in an additional check that it is a valid type.
  25. I wasn't aware of that ryan, thanks!