
renobird
PW-Moderators
  • Posts: 1,699
  • Joined
  • Last visited
  • Days Won: 14

Everything posted by renobird

  1. It's pretty simple really. Shouldn't complicate or overload anything. Personally I would go the JS route, but just throwing another option out there.
  2. Could you use Markup Cache for all parts of the page, except the image?
  3. The "=" operator dependencies still work fine in repeaters. At least as of 2.5.3 — haven't checked the dev branch.
  4. Ryan, I just used this on 3 sites, and the upgrades went perfectly on all of them. This module is perhaps faster than the command line, because it takes care of checking things like .htaccess and index.php mods. Hell, it took me longer to resync my local copies than it did to upgrade. Anyhow, just wanted to give a big thumbs up and say thank you!
  5. Without knowing the full requirements, it seems like you could use the Service Pages module to pull in the information on the other sites. Do you need to actually store the data on the sub-sites, or could you just pull it in from the master and display it?
  6. Mohammed, Is there some part of this that is ProcessWire related? Perhaps I'm misunderstanding your question, but it sounds like you need to ask this on the IP.Board forums.
  7. You need to save that as CustomAdminHomePage.module (or rename the class to whatever).
  8. Here's a quick module that should do what you need. http://pastebin.com/31xcBaFE
  9. Do you want this for all users (except Superuser)? Also: Do you want to hide the page tree from those users, or just go to a different page at login?
  10. Buried in a topic is a small example module I posted for creating a simple process module.
  11. Hmm. If the URL on the new site resolved to an actual page, then the redirect shouldn't get triggered — correct? So if you have: newsite.com/products/ (actual PW page) oldsite.com/products/ Then you could only get to oldsite.com/products/ by directly accessing that URL. I think this is the current behavior of the redirects module. All that said, I might be misunderstanding your question.
  12. Hi all, I'll take a look at any issues that are still lingering (and any suggested features/updates) ASAP. I don't know Ryan's plan for pushing things into the current master branch. I suspect that any fixes will just be part of 2.6.
  13. Wow! Nice work Mike! I'll try to carve some time later in the week to give this a thorough test run.
  14. Awesome! This is a great way to introduce new admin pages. Nice work.
  15. Very nice, and very fast. Are you using ProCache on the entire site?
  16. Hi Richard, If you use the pageField method, then every "tag" creates a page that has a unique name. Example (green-energy, shark-sandwiches, turtle-soup). The autocomplete field presents the user with the page title, so they would select tags like (Green Energy, Shark Sandwiches, Turtle Soup). Same applies for adding new tags. Now you can use UrlSegments to find pages with your multi-word tags because they will be in a format like /domain/tags/green-energy/
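To illustrate the title-to-name mapping described above: a minimal sketch approximating how ProcessWire's `$sanitizer->pageName()` turns a tag title into the page name used in a URL segment. The helper name is hypothetical, and PW's real sanitizer handles more edge cases (character transliteration, length limits).

```javascript
// Hypothetical helper approximating ProcessWire's page-name sanitization:
// lowercase the title and collapse non-alphanumeric runs into hyphens.
function tagToSegment(title) {
  return title
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse non-alphanumerics to hyphens
    .replace(/^-+|-+$/g, "");    // trim leading/trailing hyphens
}

console.log(tagToSegment("Green Energy"));     // "green-energy"
console.log(tagToSegment("Shark Sandwiches")); // "shark-sandwiches"
```

So a user selects the title "Green Energy" in the autocomplete, and the page behind it answers at /domain/tags/green-energy/.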
  17. Not currently, but it shouldn't be too difficult to add while Mike Anthony gets his version complete. I'm looking forward to his version myself—so long as I can specify my alt domain. *wink*
  18. I have an as-yet-undocumented update to this module that will do that. Here is the most current version: ProcessRedirects.zip. I've been using this version on a high-traffic site for over a month, and it's solid. You may have to use 2 rules (depending on what you want). The * at the end is a wildcard.
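For example, the "2 rules" case might look like this. The paths here are purely illustrative, not from the module's docs: one exact rule for a page whose destination differs, plus a wildcard rule to catch everything else under the old path.

```text
Redirect From          Redirect To
/products/widget       /shop/widget-deluxe
/products/*            /shop/
```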
  19. You could use an autocomplete pageField for the tags. That way existing tags are available across all pages using the field. Check the "allow new pages to be created from this field" option. The end result, is that your tags are now page references that you can search against by the sanitized name field.
  20. Thanks Dave! The next phase will make this look simple in comparison. Some of the paper processes that are being turned into web based tools are intense.
  21. A majority of the photos are from wikimedia commons. There are other sources too, but that seems to be the majority. The UF/IFAS Assessment Coordinator was tasked with gathering all the photos (previous incarnations of the assessment were just text). I gave her a few pointers about resolution, etc... but she really did an amazing job. There are some low-res photos or poorly composed shots here and there. They will all eventually be replaced with higher quality versions.
  22. Phase 2 has some other really cool features coming for the filters, like result counts and "disabling" options if the combination would result in 0 results.
  23. Forgot to mention: There is some $input->whitelist() stuff going on in the real code, but I left it out of the examples above for the sake of simplicity.
  24. All the taxonomy is handled by page fields. Some are just basic fields like Origin and Growth Habit; others, like Conclusion Type and Zones, are part of a PageTable (screenshots at the end of the post). There is a lot of JS involved in the filters, more than I have time to explain at the moment.

     The short version: filter selections build a URL containing the page IDs for any selected filters via JS. We needed the URIs to be as short as possible, since they will be frequently emailed and eventually cited in publications. They end up looking something like:

     /?zones=1030,1028&types=1082,1080&growth_habit=19040,1022

     A selector is then built from those GET variables in the template:

     ```php
     // default selector, used for the assessment page
     $selector = "template=species, limit=16, sort=name,";

     // zones GET variable
     if ($input->get->zones) {
         $zones = explode(",", $input->get->zones);
         $q = implode("|", $zones);
         $selector .= "@conclusions.conclusion_zones={$q}, check_access=0,";
     }
     ```

     It gets a little more complicated because of the infinite scroll. A maximum of 16 results is initially shown for any query; additional items are pulled in via AJAX on scroll. So if there are GET variables involved, the AJAX call needs to pass them along. In the JS:

     ```javascript
     function getUrlVars() {
         var vars = {};
         window.location.href.replace(/[?&]+([^=&]+)=([^&]*)/gi, function(m, key, value) {
             vars[key] = value;
         });
         return vars;
     }

     var GET_zones = getUrlVars()["zones"];
     ```

     There is a GET_* for each possible filter, all of which are passed via the data param in the $.ajax() call:

     ```javascript
     data: {
         start: start,
         query: query,
         zones: GET_zones,
         types: GET_types,
         origin: GET_origin,
         growth_habit: GET_growth_habit,
         tool_used: GET_tool_used
     },
     ```

     Screenshots

     This is a bit of a disjointed explanation, but it should give you some idea.
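The round trip described above (selected filter IDs, to a short query string, back to an OR-group for the page selector) can be sketched as two pure functions. These names are illustrative, not from the actual site code; the real server side builds the selector in PHP from $input->get, as shown in the post.

```javascript
// Hypothetical sketch: serialize selected filter IDs into the short query
// string, then rebuild a selector-style string from it. Mirrors the PHP
// explode(",")/implode("|") step for the "zones" filter only.
function buildFilterQuery(filters) {
  // filters: { zones: [1030, 1028], types: [1082, 1080], ... }
  return "?" + Object.entries(filters)
    .filter(([, ids]) => ids.length > 0)
    .map(([key, ids]) => key + "=" + ids.join(","))
    .join("&");
}

function buildSelector(query) {
  var selector = "template=species, limit=16, sort=name";
  for (const part of query.replace(/^\?/, "").split("&")) {
    const [key, value] = part.split("=");
    if (key === "zones" && value) {
      // IDs joined by "|" form an OR-group in the selector
      selector += ", @conclusions.conclusion_zones=" + value.split(",").join("|");
    }
  }
  return selector;
}

const q = buildFilterQuery({ zones: [1030, 1028], types: [1082, 1080] });
// q === "?zones=1030,1028&types=1082,1080"
console.log(buildSelector(q));
```

Keeping the query string as bare page IDs is what makes the URLs short enough to email and cite.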
  25. @pwired, I think he was referring to backing up the filesystem by copying it to a remote location. Still on topic.