
Leaderboard

Popular Content

Showing content with the highest reputation on 06/29/2014 in Posts

  1. This is a beta release, so some extra caution is recommended. So far the module has been successfully tested on ProcessWire 2.7.2 and 3.0.18, but at least in theory it should work for the 2.4/2.5 versions of ProcessWire too. GitHub repo: https://github.com/teppokoivula/ProcessLinkChecker (see README.md for more technical details, settings etc.)

What you see is ... This is a module that adds back-end tools for tracking down broken links and unnecessary redirects. That's pretty much all there is to these views right now; I'm still contemplating whether it should also provide a link text section (for SEO purposes etc.) and/or other features.

The magic behind the scenes: The admin tool (Process module) is about half of Link Checker; the other half is a PHP class called Link Crawler. This is a tool for collecting links from a ProcessWire site, analysing them and storing the outcome in custom database tables. Link Crawler is intended to be triggered via a cron task, but there's also a GUI tool for running the checker. This is a slow process and can result in issues, but for smaller sites and debugging purposes the GUI method works just fine. Just be patient; the data will be there once you wait long enough.

Now what? For the time being I'd appreciate any comments about the way this is heading and/or whether it's useful to you at all. What would you add to make it more useful for your own use cases? I'm going to continue working on this for sure (it's been a really fun project), but I wouldn't mind being pushed in the right direction early on. This module is already in active use on two relatively big sites I manage. Lately I haven't had any issues with the module, but please consider this a beta release nevertheless; it hasn't been widely tested, and that alone is a reason to avoid calling it "stable" quite yet.

Screenshots: Dashboard, List of broken links, List of redirects, Check now tool/tab.
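Since the module is meant to be triggered via cron, a setup could look roughly like the sketch below. This is only an illustration: the bootstrap path, the `LinkCrawler` class name, and its `start()` method are my assumptions here; check the module's README.md for the real invocation.

```php
<?php
// run-link-checker.php -- intended to be run from cron, e.g.:
//   0 3 * * * php /path/to/site/run-link-checker.php
// (paths, class name and method are assumptions, not from the module docs)

// Bootstrap ProcessWire so the API is available
include '/path/to/site/index.php';

// Load and run the crawler: collect links, check them, store results
include wire('config')->paths->ProcessLinkChecker . 'LinkCrawler.php';
$crawler = new LinkCrawler();
$crawler->start();
```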
    13 points
  2. Sorry for the delayed answer, Pierre-Luc! Been busy with other stuff and this totally slipped my mind. What you've described there wasn't really possible without direct SQL queries until just a few moments ago. I've just pushed to GitHub an update to VersionControl.module (0.10.0) that adds a new $page->versionControlRevisions() method. This isn't properly tested yet, but something like this should work:

```php
// current value of field 'headline'
echo "Headline for current revision: {$page->headline}<br />";

// value of field 'headline' in the previous revision
$revisions = array_keys($page->versionControlRevisions(2));
$page->snapshot(null, $revisions[1]);
echo "Headline for previous revision: {$page->headline}<br />";

// return the Page to its original (current) state
$page->snapshot();
echo "Back to current revision: {$page->headline}<br />";
```

Since snapshot() returns the Page object to a given revision or point in time ($page->snapshot($time, $revision)), you'll want to make sure it's back in its original state before you make changes and save the page -- otherwise the revision you fetched with snapshot() will be returned from history once you save the page. $page->versionControlRevisions() returns an array of revisions for the current page and can optionally take one param, $limit, to fetch only that many revisions if more exist. Its return value is in the form array([revision] => [timestamp]), i.e. array(4 => '2014-01-01 02:00:00', 3 => '2014-01-01 01:00:00') etc., so in order to get just the revision IDs out of that I'm using array_keys() in the example above.
You could probably also do something like this, if you want to make sure that the Page doesn't get accidentally returned from history (this'll consume more memory, though):

```php
$revisions = array_keys($page->versionControlRevisions(2));
$page->previousVersion = clone $page;
$page->previousVersion->snapshot(null, $revisions[1]);
echo "Headline for previous revision: {$page->previousVersion->headline}<br />";
```

Not sure if you're still working on this, but this kind of feature felt useful, so I'm glad you brought it up.
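To make the return format concrete, here is a plain-PHP illustration (no ProcessWire needed) of how array_keys() pulls the revision IDs out of the array([revision] => [timestamp]) structure described above. The revision data is made up for the example.

```php
<?php
// Simulated return value of $page->versionControlRevisions(2):
// newest revision first, keyed by revision ID.
$revisions = array(
    4 => '2014-01-01 02:00:00',
    3 => '2014-01-01 01:00:00',
);

// Revision IDs only; index 0 is the current revision, index 1 the previous one.
$ids = array_keys($revisions);

echo $ids[0] . "\n"; // 4 (current revision)
echo $ids[1] . "\n"; // 3 (previous revision)
```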
    4 points
  3. Right, I'm officially in love with ProcessWire. I've been setting up all my templates and field types and wow... things like the repeater field type are amazing, and the level of customisation... just... wow!
    3 points
  4. Hi, just wanted to say hello to PW. Some former WebsiteBaker users made me aware of PW, so I decided to give it a try this weekend. Based on my past CMS experience, I decided to convert an existing SilverStripe template into PW from scratch. This way you quickly get a feeling for the system itself, the backend, the template system and the available documentation. After watching two videos and reviewing the API documentation, I started over and had a clone of my SilverStripe site running in less than two hours. I started with WebsiteBaker back in 2006, did some sites with ModX and Contao (formerly known as TypoLight), and was even forced to deal with WordPress by one client, before I finally switched to SilverStripe around 2009. My first impression of PW: cool little CMS with an easy backend and an awesome API. Somehow it feels a bit like a mixture of WebsiteBaker, Contao and SilverStripe to me, taking the best from all of them while skipping the bad parts. Respect. Looking forward to learning more about PW. Cheers, cwsoft
    2 points
  5. Based on the second half, I don't think it was pure luck. But not one of their best performances (especially the first half). You won't hear me complaining, though.
    2 points
  6. @NooseLadder - thanks so much for your help with this. So if things are working with that file_exists check removed, then I think we can assume it's a Windows path issue. Weird that the addFile to the zip works, but file_exists fails. I am curious about the "Directory not empty" warning. Does the migratorfiles directory actually get removed? Again, this seems like a Windows path issue, because I am using a recursive directory delete function and it works here. I really might need that Windows XAMPP setup to get all these sorted out. But for now, I have removed the file_exists check from the latest committed version, as I don't really think it should be necessary: the $files array shouldn't contain any files that don't exist anyway. Another big enhancement this morning - I think all multi-language features should now be working! @tobaco - would you please check to see if the latest version fixes all the issues you were having with multi-language page names etc.?
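One common culprit when file_exists() fails on Windows while ZipArchive::addFile() succeeds is a path with mixed or doubled directory separators. A hedged sketch of a normalization helper (the function name and the sample path are made up for illustration):

```php
<?php
// Hypothetical helper: normalize a path before calling file_exists(),
// since mixed backslashes/slashes or doubled separators on Windows can
// trip up some path checks.
function normalizePath($path) {
    // PHP accepts forward slashes on Windows, so collapse to one style
    $path = str_replace('\\', '/', $path);
    // Collapse accidental doubled separators
    return preg_replace('#/+#', '/', $path);
}

echo normalizePath('C:\\xampp\\htdocs\\site\\\\assets\\files/1015/img.jpg') . "\n";
// C:/xampp/htdocs/site/assets/files/1015/img.jpg
```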
    2 points
  7. Matthew, sorry to hear you were so affected by this outage. It sounds like this particular outage was one that couldn't have been anticipated by anyone. From what I gather reading on other sites and on twitter, it sounds like it was a piece of network hardware that failed but provided no failure indicators. If that's the case, that would have made it particularly difficult to track down and left little room to put all that redundancy to work. Perhaps this particular type of outage is a once-in-a-lifetime thing, but the reality is that outages occur everywhere and no webhost is immune to them. Not to mention outages can occur anywhere when it comes to networks, with webhosts like ServInt probably being the most solid part of that chain. I was fairly lucky here in that I didn't really notice the outage other than that someone emailed me about it when I was cooking dinner. But all seemed to be back online 30 minutes later and didn't go out again as far as I know. I've got most of my clients hosted at that Reston, VA data center, but the outage occurred at one of the lowest-traffic times for the sites I work on, so I never heard from anyone about it. In 11+ years, I've only experienced one other major outage at ServInt, and that was several years ago. Someone apparently got sloppy with a back-hoe in a barnyard and cut off all lines of communication to McLean, VA. If I recall, that outage was quite a bit longer than this one, but it's been a while. There is absolutely nothing you could have or should have done extra here. On the other hand, if your client is giving a presentation, they are probably the ones that should have a backup plan. Anyone experienced in giving presentations knows that you have to keep everything you need with you. You can't ever count on something being accessible from the internet, though usually for other reasons (bad wireless signal, something broken at the conference center's internet, etc.)
So when it comes to presentations, you can only count on what's on your computer. Having a local running copy of the site, or a presentation with screenshots, is a good plan. If they couldn't access the site, hopefully that's what they did. One thing to take comfort in is that if this particular outage had occurred at some other host, chances are they would still be down right now. In my opinion, there's no value in looking elsewhere due to this particular incident. I already know ServInt has the best people in the business. This kind of stuff can happen to any of them, and ServInt now has some experience that the others don't. Outages are a fact of life in this business and nobody is immune, but ServInt's history is that they are less prone to outages than most, and better equipped to handle them when the inevitable strikes.
    2 points
  8. Teppo this looks fantastic, nice work! While I haven't yet been able to test it out here I will be soon, as I have a regular need for a tool like this. It's also one of those things that come up with clients a lot: "how do I keep track of when a link no longer works?". I've been using Google Webmaster tools for 404 discovery in the past, but it's often hard to separate the noise from the goods there, and it's not particularly client friendly either. Regarding the cron side of this, I immediately thought of IftRunner (which itself is triggered by cron) and how this might work great as a PageAction with IftRunner. PageActions can also be executed by ListerPro and presumably other tools in the future as well.
    2 points
  9. Lucky Dutch!!! Congrats anyway.... Oh, by the way, I downloaded Luis Suarez's most memorable career moments. It didn't take long; it was only 3 megabytes (stolen from twitter)
    1 point
  10. I don't know about you guys, but that is the reason why I never register a client's domain with the same company that is going to host the website. Register with a registrar, host with a hosting company. Hoster A down? Point the domain to Hoster B and upload a copy. These days DNS propagation takes less than 6 hours. Edit: instead of changing DNS settings, use URL forwarding. It works instantly! Visitors won't even see the change in host URL if configured properly, and it is only necessary while the primary host is down. I do the same with my email accounts: I never register an email address with my ISP. Works for me. Would like to know, though, what works for you guys.
    1 point
  11. Another significant update this morning, which corrects a major omission in critical functionality: Migrator now handles migration of template-context field settings. Now to tackle these new multi-language issues.
    1 point
  12. The above is just the start of image manipulation potential in PW. There are so many things you can do from the API side with images. Just today we added some new cropping options to the dev branch (thanks to Horst, who is one of the best in the world when it comes to image related code). Btw, your site is great and I will definitely visit a lot in the future. We're in Atlanta, but have family moving to Orlando and plan on spending a lot of time at Disney World when we can (we have two young daughters that of course love everything Disney).
    1 point
  13. I agree, I think ProcessWire would be an excellent fit for your needs. With regard to a centralized media manager, there's a reason we don't have one built in, and that's because we've got something better. You may have to think differently about how you manage media, but once you get it I don't think you'd want to go back to an old-style media manager. If you need any help understanding how a particular situation would be accomplished in PW, I'm happy to give more info.

This is a potential security hole. Allowing URLs to create images on the fly has high potential as a denial-of-service (DoS) hole that is an easy attack target. Someone can write a quick script to formulate and call millions of those URLs, consuming your server for hours or days until eventually filling up the hard drive (or your hosting quota). Basically, it's a security problem if non-predefined resize dimensions are coming from the client (user input, like URLs) rather than from the server. I substituted some other width/height values in there and can see it will take whatever I give it (no restraints). For example, this call uses 3 megabytes on disk, takes up several seconds of server time, and consumes 3 megabytes of your bandwidth: http://wdwfans.com/files/thumb/21638/3500/3500/fit

Here's how you'd approach creating 500x500 images in PW:

```php
$image = $page->image->size(500, 500);
echo "<img src='$image->url' />";
```

Our default behavior would be the same as the "fit" method you described. I'm not sure there is a legitimate use for the "fill" method, as it distorts the photo, something that I can't imagine is desirable on any site. But if you wanted to duplicate that, you'd turn cropping off:

```php
$image = $page->image->size(500, 500, array('cropping' => false));
```
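If dimensions really must come in via the URL, one way to close that hole is to validate them against a server-side whitelist before ever touching the image API. A minimal sketch; the function name and the whitelist values are made up for illustration, not taken from any module:

```php
<?php
// Sketch: only allow resize dimensions the server has pre-approved, so
// arbitrary URL values can't trigger unbounded image generation.
function isAllowedSize($width, $height) {
    // Hypothetical whitelist of permitted output sizes
    $allowed = array('100x100', '500x500', '800x600');
    return in_array((int) $width . 'x' . (int) $height, $allowed, true);
}

var_dump(isAllowedSize(500, 500));   // bool(true)
var_dump(isAllowedSize(3500, 3500)); // bool(false)
```

Casting to int first also neutralizes any non-numeric input before the value is used anywhere else.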
    1 point
  14. New German language updates for the current PW dev version 2.4.5 (28 June 2014). The zip contains only updated/added files (compared to the default 2.4 language pack). Updated files: wire--core--pages-php.json, wire--core--wireupload-php.json, wire--modules--fieldtype--fieldtypefile-module.json, wire--modules--inputfield--inputfieldfile--inputfieldfile-module.json, wire--modules--process--processlist-module.json, wire--modules--process--processpagetype--processpagetype-module.json, wire--modules--process--processrole--processrole-module.json. Attachment: pw-lang-de-dev-update.zip
    1 point
  15. Maybe this is getting a little silly but had to post this. You Have A Higher Chance Of Being Bitten By Suarez Than By A Shark http://www.iflscience.com/health-and-medicine/you-have-higher-chance-being-bitten-uruguay%E2%80%99s-luis-suarez-shark
    1 point
  16. I haven't had any need for the shop module since the site I built with it two (or was it three?) years ago. That's one reason the shop module hasn't gotten any love from my side. I built this PayPal module since it was requested so many times. It should work just fine. I would like to rebuild some of the components, like a better shopping cart, more general payment methods, an independent checkout process etc., but I'm not sure if I have time for it (especially since the need for e-commerce has been so small in my day job).
    1 point
  17. Thanks Can, glad it works for you. I'm currently rewriting the module and will switch the PDF engine behind it to mPDF, which seems to have much better support for HTML. @Bacelo There are no stupid questions.
    1 point
  18. Wow, that was fast and sounds logical. Will try it in a couple of minutes; have to do some shopping first. Thank you, Soma! UPDATE: took a bit longer, because we just met a guy who will probably sail us to Latin America. It works like a charm... thank you Soma!!
    1 point