Everything posted by Pete

  1. I think it was legibility of code and re-usability of fields that prompted me to get into that habit in the first place on a larger project, so it's nice to know it's a good habit. I guess the only time you would be bothered about reducing the scan of the MySQL data set is with tens or hundreds of thousands of pages, as so much is indexed in PW that it should be fast anyway?
  2. @soma I was going to go the PageArray route for one suggestion but then thought it might not even be needed, and ended up with my last example. Hard to tell without seeing the structure, as I'm sure you'll agree.
  3. Welcome! I think ASMSelect would be a good fit instead of autocomplete in module config - I know that works.
  4. Well, PW user group meetings alone aren't going to spread the word, but UK and European meet-ups have been discussed before - they just haven't actually got as far as happening yet. I believe a directory is on ryan's list somewhere (I might have imagined it, but I remember talking to someone about it months ago... maybe myself).
  5. In fact, this may streamline it further:

         <?php
         // start the list here as I don't want to loop through things twice
         echo '<ul class="blocklist">';
         foreach($pages->find("template=report") as $location) {
             // prefix the field name with a minus sign to sort descending rather than ascending
             foreach($location->mountains->sort('-stats_maxElev') as $peak) {
                 // pages create 4 blank repeater "value sets" by default; this test grabs only populated ones
                 if($peak->stats_mtnName) {
                     // convert the field value in metres to feet; number_format() adds thousands separators (ie. 3200 -> 3,200)
                     $maxElevft = number_format(round($peak->stats_maxElev * 3.28084));
                     echo '<li><p class="tabular"><a href="'.$peak->url.'">'.$peak->stats_mtnName.' '.number_format($peak->stats_maxElev).'m / '.$maxElevft.' ft.</a></p></li>';
                 }
             }
         }
         echo '</ul>';
         ?>

     But without a copy of the structure and data I suspect it might fall foul of repeater issues.
  6. Just shortening your earlier code:

         <?php
         $dest = array();
         foreach($pages->find("template=report") as $location) {
             foreach($location->mountains as $peak) {
                 // pages create 4 blank repeater "value sets" by default; this test grabs only populated ones
                 if($peak->stats_mtnName) {
                     // convert the field value in metres to feet; number_format() adds thousands separators (ie. 3200 -> 3,200)
                     $maxElevft = number_format(round($peak->stats_maxElev * 3.28084));
                     $item = new stdClass();
                     $item->url = $location->url; // field from page (the original used $mountain, which was never defined)
                     $item->name = $peak->stats_mtnName; // field from repeater
                     $item->maxElevM = $peak->stats_maxElev; // field from repeater
                     $item->maxElevF = $maxElevft;
                     $dest[] = $item; // appending avoids the uninitialised $i counter
                 }
             }
         }
         usort($dest, function($a, $b) {
             return $b->maxElevM - $a->maxElevM; // sorts in descending order; swap $a and $b for ascending
         });
         echo '<ul class="blocklist">';
         foreach($dest as $item) {
             echo '<li><p class="tabular"><a href="'.$item->url.'">'.$item->name.' '.number_format($item->maxElevM).'m / '.$item->maxElevF.' ft.</a></p></li>';
         }
         echo '</ul>';
         ?>

     I think there are some other savings to be made, but if you use something like WinMerge to compare your code against the above it will show you some useful tips.
  7. I think you may have already read this, but just in case: http://processwire.com/talk/topic/1621-sudden-death
  8. Hehe, it's easily overlooked. If you would like a fancy search then Soma is your man: http://processwire.com/talk/topic/1302-ajaxsearch/ He seems to have created more modules than you can shake a stick at... although if you're into shaking sticks at code you may need to take a break.
  9. If it helps reassure you more, there is at least one project I have in the works that will be scaling to many thousands of pages, but ProcessWire has already been stress-tested beyond what I'll probably achieve any time soon. It is a reasonable question you've asked though, as I remember asking it a year or two ago and, aside from code improvements in the core since then, I remember ryan saying hosting is the key with larger sites. Hopefully you're more excited now, knowing you can build it in a system that you can easily tailor to your needs rather than bending other platforms to your will, which is never quite as fun for me! Elegant coding over forcing things to fit any day.
  10. Basically do what these chaps are saying - add a checkbox field like Soma says, then as Apeisa says have a module look for the checkbox before the page is saved; if it's checked, change the parent and continue saving as normal. There actually is very little code to this, but I've got to go to work this morning - I'm 100% certain someone will be here with some code soon though. The actual code that does the page "moving", in amongst the other module code, is actually as simple as this:

          // ...module code above intercepts the page save and checks for the correct template and a ticked checkbox
          $page->parent = $pages->get('/path/to/archive/page/');
          // ...rest of module code. No need for $page->save() - since we hook before the page save, the save routine continues as normal after this
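To make the above concrete, here is a minimal sketch of how the pieces could fit together. The template name ('news-item'), checkbox field name ('move_to_archive') and archive path are my own assumptions for illustration, not values from this thread, and the hook wiring is shown as an untested outline in comments:

```php
<?php
// Plain helper: should this page be re-parented into the archive?
// (the template and checkbox-field names here are assumptions for the example)
function shouldArchive($templateName, $checkboxTicked) {
    return $templateName === 'news-item' && (bool) $checkboxTicked;
}

// Inside a hypothetical autoload module's init() you would wire it up
// roughly like this using ProcessWire's hook API (sketch, not tested):
//
// $this->pages->addHookBefore('Pages::saveReady', function(HookEvent $event) {
//     $page = $event->arguments(0);
//     if(shouldArchive($page->template->name, $page->move_to_archive)) {
//         // change the parent; no $page->save() needed, because the
//         // normal save continues after this "before" hook returns
//         $page->parent = $event->wire('pages')->get('/path/to/archive/page/');
//     }
// });
```

The point of pulling the decision into a plain function is that the hook body stays a one-liner, which matches how little code this really needs.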
  11. You can definitely do this in ProcessWire. 10,000 isn't actually that big a number really, but one of the main things to consider with larger sites is web hosting: presumably you could end up with lots of images on the site and thousands of visits per day, so bandwidth could get quite high. You'll be fine with a smaller hosting package to begin with, but just bear in mind you probably want to select one that can scale as the site grows rather than moving the site from provider to provider. I've read elsewhere on the forums that ProcessWire is happy up to hundreds of thousands of pages and beyond, but low server specs would limit you before ProcessWire does. Also, if you are thinking there will be a lot of pages that don't change much then look into caching options, and if you will have many pages like galleries that don't change at all then definitely consider ryan's excellent ProCache module.
  12. Nope, I always forget to think big enough with my folder structures. Plus I have yet to do anything like a client dashboard in PW, so those links are useful.
  13. Thanks for sharing Luis - very useful points for something I'm working on right now.
  14. @totoff, it keeps the cache for an hour and then re-caches the entire sitemap so it does 24 updates a day to keep current on even the most active sites. @Georgson - no problem!
  15. I think there are a lot of things for me to consider with memory usage, and I know on one website of mine it might struggle as there are a few uploads that are over 100MB. In those cases, though, there should at least have been enough memory to upload the file in the first place, so when it gets to backing up the /assets/files dir I might be able to check the max size of any file fields in PW first to get an idea of how big files might be, then iterate through them X files at a time depending on that and what the PHP environment will allow, flushing anything in memory as it goes. The problem with something like that is it makes the process slower, but on the bright side it's a good opportunity to feed data back to the browser and show some sort of progress ("processing 1-10 of 256 pages" or something like that). Some folders I actually need to make it skip are /assets/sessions and /assets/logs, as either of those could have numerous/large files that aren't necessary to back up.
  I get the feeling the system command for Linux won't have a memory problem, simply because it's like running it at the command line in a shell window (sorry Linux folk, I'm sure my terminology is all over the place). The obvious problem there is that the page could well time out, but the command will keep running in the background, so if it was run manually you would have a hard job knowing when it's finished. I think I can assume that, aside from the /site/assets/files directory, everything else can be backed up in a matter of seconds in terms of files - even with 100 modules installed, they're all small files. Therefore I can have it give feedback once it's backed up the /wire directory as a whole (that should be a standard size, more or less), then the /site directories one at a time, and we can work it like that.
  It will actually give me a headache as I need to run more commands for Linux, but I know you can get them to pipe successful results to a script, so I think for both Linux and Windows the backup can send progress to a database table specifically for backups; then we can easily poll that table every few seconds using AJAX and show which backups are in progress and which are complete, as you get when running a backup via cPanel. Tables are another place where I'll have to think about the number of rows, to make sure it's not trying to do too much at once - maybe iterating through each table one at a time, checking the number of rows and then splitting them if required would be the way to go there. It's all getting rather more complicated than I had originally intended the more I think about it, but I can hopefully make it better as a result. What I do know is that examples of code from the internet are really helping prevent me from re-inventing the wheel - hurrah for the Open Source community!
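To make the "X files at a time" idea concrete, here is a minimal hypothetical sketch (function names, the batch size, and the skip rules are my own, not the module's): it walks a directory, adds files to a ZipArchive in small batches, and closes the archive between batches so each batch is flushed to disk and progress could be reported at that point.

```php
<?php
// Hypothetical sketch of batched zipping: close and reopen the archive
// every $batchSize files so memory is released between batches and
// progress can be logged (e.g. to a backups table polled via AJAX).
function backupFilesInBatches($sourceDir, $zipPath, $batchSize = 50) {
    $files = new RecursiveIteratorIterator(
        new RecursiveDirectoryIterator($sourceDir, FilesystemIterator::SKIP_DOTS)
    );
    $batch = array();
    $done = 0;
    foreach($files as $file) {
        if(!$file->isFile()) continue;
        // skip sessions and logs, as discussed above
        if(strpos($file->getPathname(), '/sessions/') !== false) continue;
        if(strpos($file->getPathname(), '/logs/') !== false) continue;
        $batch[] = $file->getPathname();
        if(count($batch) >= $batchSize) {
            $done += addBatchToZip($zipPath, $sourceDir, $batch);
            $batch = array(); // progress could be written to a DB table here
        }
    }
    if(count($batch)) $done += addBatchToZip($zipPath, $sourceDir, $batch);
    return $done; // total number of files archived
}

function addBatchToZip($zipPath, $baseDir, array $paths) {
    $zip = new ZipArchive();
    $zip->open($zipPath, ZipArchive::CREATE); // creates or reopens the archive
    foreach($paths as $path) {
        // store entries relative to the backed-up directory
        $zip->addFile($path, ltrim(str_replace($baseDir, '', $path), '/'));
    }
    $zip->close(); // flushes this batch to disk
    return count($paths);
}
```

The same batching idea would apply to the database side: dump one table at a time, splitting large tables by row ranges.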
  16. Forced WordPress dev and MODx enthusiast sounds like my bio from a few years back too. Welcome to the wonderful world of ProcessWire. You may find, as I did, that you have to un-learn old stuff as much as learn new stuff, and the rule to work by with ProcessWire is "if it seems too difficult then there's probably an easier way" - which is what we're all here for on the forums.
  17. If they won't edit the old pages you could even leave them in place and be clever with your navigation so new URLs are done the new way and old ones the old way. .htaccess only sends requests to index.php if the file doesn't exist so leaving the old pages there could be an option. I see your point about not wanting to take a chance with SEO, though the one time I did this I saw no drop, but it wasn't like the site was high up in terms of SEO in the first place.
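The behaviour described above comes from the conditions in ProcessWire's stock .htaccess, which look roughly like this (quoted from memory - check your own copy rather than pasting this):

```apache
# Only hand the request to ProcessWire if no real file or directory matches,
# so old static pages left in place keep being served directly by Apache
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php?it=$1 [L,QSA]
```

That is why leaving old static pages in place works without any redirect rules at all.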
  18. Thanks! I'll take a look at this in more detail later.
  19. Nice work! I was talking to ryan what seems like ages ago now about version control, and I remember stumbling across this easy-to-use diff class that might help in a future version: https://github.com/chrisboulton/php-diff EDIT: I just noticed that someone has built a useful jQuery merging tool that would also help if you follow the link above and read the readme.
  20. Love it and I think it's an amazing module idea! Can it handle multiple photos per email, as was originally suggested earlier in this thread? I just don't see any groups under one title in your example gallery, ryan, though this could easily just be that you haven't sent an email with multiple photos yet. It might be time to split this off from the original topic though, as I only read this one by chance and wasn't expecting all this. Either that, or when it's ready for release I suppose the new module topic can just link back here. I dunno... just mumbling to myself.
  21. The problem there is that paths are inconsistent across hosting environments, but certainly if it's user-definable then it's people's own fault if it doesn't work. I put it under the module folder itself for now because it's not web-accessible either - the .htaccess already forbids accessing the modules dir directly - but I do see your point, so I can easily add that as an option.
  22. I think the biggest hurdle here is that the GD2 functions are hardcoded into /core/ImageSizer.php. What you might want to do is see if you can edit that to use something else (I'm guessing you might be thinking of something like ImageMagick?) on a test installation; if that works out, then I guess (lots of guessing) ryan would need to abstract that class so that GD2 remains the default and any alternatives can be downloaded as additional modules that override it. I think that's how it could work, but to get started, editing that file however you like to see what you can achieve would be a good first step.
  23. The handy thing about building one in ProcessWire is that you can have it work out of the box as simply "Enter your question", but for those companies that like to add complexity, customising it to add more questions is simply a case of adding more fields to the "ticket" template - which is easier than in most helpdesk packages I've used.
  24. Keep the suggestions coming - there's a way to go yet before I'll add some of them, but they're all welcome and will be considered once the basics are in place. Step 1 is definitely to see if I can get backups working in multiple environments, so attached is version 0.0.1. Consider it Alpha and use it at your own risk, though I can't see why it would harm your installation - just giving you the obligatory "you have been warned" speech.
  To install: unzip the file, stick the folder in your /modules directory, install the module, set the retention period, and read the instructions below the retention drop-down in the module config to manually run a backup for now. Backups are stored in /site/modules/ScheduleBackups/backups, in /site and /db folders respectively, so you can monitor those. There is currently no "backup succeeded" message or anything like that printed when it's done - just wait for the browser to stop loading the page, or for the zip/tar file to stop growing.
  Some things to note:
  1. It does an OS check. This is because Windows can't run system() commands (they're a Linux thing), so if it finds you're running Windows it uses the ZipArchive class built into PHP to back up the site and simple mysqli queries to back up the database.
  2. If it doesn't detect you're on Windows and can run system() commands then it does that which, from my past experience, is far quicker (plus it makes for nifty one-liner backup commands). It does some detection to see if safe mode is on and whether it can run system commands before backing up, so if safe mode is on or it can't run those commands then it falls back to using the functions mentioned in point 1.
  3. During installation, a unique hash is created and saved - this is because when we set it to run via a cron job/scheduled task we need a way of creating a backup without a logged-in superuser in attendance. The backup uses a URL that I doubt you would have as a page on your site (/runbackup) as well as this hash, so it is extremely unlikely anyone will try to bring your server down by spamming that URL. Further checks will be added in later versions so it can't run more than once a day anyway, or something like that.
  Also, if anyone who is a better programmer than me wants to contribute then please feel free - some of what I've written is likely amateurish in places. Kudos to David Walsh for his excellent MySQL backup script and this excellent function on StackOverflow about recursively zipping folders, which saved me from re-inventing the wheel in those two areas.
  ScheduleBackups.zip
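The OS/safe-mode detection described in the notes above could be sketched like this (the function name and return values are my own for illustration; the real module's internals may differ):

```php
<?php
// Hypothetical sketch of choosing a backup strategy. Returns 'system'
// (shell one-liners like tar/mysqldump) or 'php' (ZipArchive + mysqli).
function chooseBackupStrategy($osFamily, $safeMode, array $disabledFunctions) {
    if(stripos($osFamily, 'win') === 0) return 'php'; // Windows: no system() one-liners
    if($safeMode) return 'php'; // safe mode blocks exec-style calls
    if(in_array('system', $disabledFunctions)) return 'php'; // system() disabled in php.ini
    return 'system'; // fastest option when the shell is available
}

// In the module itself the inputs would come from the environment, e.g.:
// $strategy = chooseBackupStrategy(
//     PHP_OS,
//     (bool) ini_get('safe_mode'),
//     array_map('trim', explode(',', (string) ini_get('disable_functions')))
// );
```

Keeping the decision in one pure function also makes the fallback behaviour easy to test without touching the filesystem or database.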
  25. I've looked at many of them and still decided to build my own, which says something, since it's not like I've got lots of free time. After looking at your last two links, most of the examples fall foul of my main gripe with helpdesk software - too many config options and trying to be all things to all users. They're fine for enterprises that have a team of IT folk to manage and man them, and someone willing to pay a lot of money (in some cases), but I'm betting the most commonly overlooked market for helpdesk software is individuals and smaller companies simply wanting departments, tickets and a knowledgebase with a little ticket escalation thrown in - which is what I'm aiming for.