
thetuningspoon last won the day on May 8 2015

thetuningspoon had the most liked content!

Community Reputation

479 Excellent


About thetuningspoon

  • Rank
    Hero Member
  • Birthday 11/03/1986

Profile Information

  • Gender
  • Location
    CT, USA
  • Interests
    Design, Programming, Tiny Houses

  1. thetuningspoon

    Maybe we should be marketing ProcessWire as "The original headless CMS"
  2. thetuningspoon

    Great idea on linking the blog posts to the docs pages. I’ve been using the blog more than the docs pages lately for finding documentation of new features, which is fine if you keep up with the blog like I do, but most newcomers are sadly not aware of these features!
  3. The separate method is a good idea. Glad my suggestion was useful!
  4. A bit of an update after some more experimentation tonight. I added pages.templates_id as a field that is always selected in the RockFinder results, and then attempted to use the $pages->newPage() method to take the results of the SQL query and convert them into a PageArray of Page objects. This worked, thus eliminating that second trip to the database I mentioned (and also bypassing the page cache and probably some other nice features of the normal page creation process).

     Unfortunately, this slowed the whole thing way down again. So I'm now thinking that something else about constructing Pages is slow. Maybe generating the page path or some other properties is the problem, or perhaps I need to load some additional fields up front. I'll have to test more. WireData/WireArray works great, though.
  5. You're right... I was not counting some of the pages involved. There are at least 2 to 3 times that many. Also, I am counting the entire time from request to first response (not using a debug timer).
  6. @bernhard I've finally had a chance to try out your module tonight for a project where we're loading pages into a large data table (200+ rows) and were hitting a wall. Using RockFinder, I now have the initial page load down to ~2 seconds, from ~7+ seconds! This is a fantastic module, Bernhard. It looks really well thought out and has some powerful features for building queries. I love how it extends PW's native selectors and allows you to return standard objects, making it easy to substitute for a regular $pages->find(). Thank you for making this!

     I think I can answer my own question now... The main issue with creating Page objects is that page instantiation requires a trip back to the database. The initial $pages->find() converts a selector into an SQL query which returns an array of matching page IDs. Those IDs are then used to go back to the database and get the pages (or pull them from cache if they're already loaded). Then, for any requested page field that isn't autojoin, an additional database query is required. If you're looping through a lot of pages, that's a lot of DB queries!

     It seems like there might be a way to provide the functionality of RockFinder in the native PW core, as an option when loading pages. You would still end up with Page objects in the end (which in my case would be a huge boon, since I like to extend the Page class with a custom class and methods for each template), but we could skip that second trip to the database (getting pages by IDs) if we could just tell PW which fields we wanted up front. Any additional fields we didn't specify could then be loaded with another trip to the DB, as they are now. That being said, I'm sure @ryan has a good reason for that second trip to the DB. But it seems like there must be a way to improve the speed of native pages, even if it's a hidden/advanced option with some caveats.

     One minor complaint: I noticed that the module seems to fail silently and return nothing when it can't find one of the fields. It would be good to throw an exception there to make debugging easier.

     Edit: Another thought... Is there a reason not to use WireData and WireArray for the objects returned from RockFinder, in place of a plain StdClass? This would allow you to use WireArray's built-in sorting and selecting features on the result set:

     $results = $this->database->query($this->getSql());
     if($array) {
         $objects = $results->fetchAll(\PDO::FETCH_ASSOC);
     } else {
         $objects = $results->fetchAll(\PDO::FETCH_CLASS, '\ProcessWire\WireData');
         $objects = (new WireArray())->import($objects);
     }
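The ID-then-fetch pattern described above is language-agnostic, so here is a minimal illustration in Python with sqlite3. The schema is a hypothetical stand-in for ProcessWire's pages table plus one non-autojoin field table (not PW's real schema), contrasting the two-trip approach with a single joined, RockFinder-style query:

```python
import sqlite3

# Hypothetical mini-schema standing in for a pages table plus one
# non-autojoin field table (NOT ProcessWire's real schema).
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE pages (id INTEGER PRIMARY KEY, templates_id INTEGER);
    CREATE TABLE field_title (pages_id INTEGER PRIMARY KEY, data TEXT);
    INSERT INTO pages VALUES (1, 10), (2, 10), (3, 10), (4, 99);
    INSERT INTO field_title VALUES (1, 'Alpha'), (2, 'Beta'), (3, 'Gamma'), (4, 'Delta');
""")

def find_two_trips(templates_id):
    # Trip 1: the selector becomes a query that returns only matching IDs.
    ids = [r[0] for r in db.execute(
        "SELECT id FROM pages WHERE templates_id=? ORDER BY id", (templates_id,))]
    # Then one more query per page for each non-autojoin field: N extra trips.
    return [{"id": i,
             "title": db.execute("SELECT data FROM field_title WHERE pages_id=?",
                                 (i,)).fetchone()[0]}
            for i in ids]

def find_one_trip(templates_id):
    # RockFinder-style: ask for the needed fields up front in a single JOIN.
    rows = db.execute("""SELECT p.id, f.data FROM pages p
                         JOIN field_title f ON f.pages_id = p.id
                         WHERE p.templates_id=? ORDER BY p.id""", (templates_id,))
    return [{"id": i, "title": t} for i, t in rows]

print(find_two_trips(10) == find_one_trip(10))  # True: same rows, far fewer queries
```

For N matching pages and F non-autojoin fields, the first approach issues 1 + N*F queries; the joined version issues one.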
  7. Why is it that creating page objects is so much slower/memory intensive than a multidimensional array? I could understand if the pages were loading all their fields, but most fields are not auto-join
  8. thetuningspoon

    @LostKobrakai This is on a production server. @dragan There are no files or images. @teppo We are creating a box subscription service where the user can have multiple boxes and configure what is included in each of them. The page structure is:

    - Subscription (not an actual user page, just linked to the user by a page field)
      - Boxset v1 <-- This is the page I'm calling clone on; the resulting clone is stored as a sibling
        - Box 1
        - Box 2
        - Box 3
        - Box 4
      - Boxset v2
        - Box 1
        - Box 2
        - Box 3
    ... etc.

    There are not many fields on each page (~3-4), but each Box has a repeater field on it which usually contains 6 entries. The repeater is using AJAX to generate new items. The 30-page number I came up with included the repeater pages. I've switched to using a ProFields Table field instead of a repeater, since I think it will actually serve us just as well in this case. The clone time is now greatly reduced, although it still feels a bit slow to me. It does sound like I should set up a test case and some debugging to see if something weird is going on. Thanks everyone for your feedback, and let me know if you have any further thoughts!
  9. thetuningspoon

    In an app I'm building, I frequently need to clone a set of pages that represent a user's 'order'. Every time they make changes to the order, I want to make a new version by cloning this page structure. The pages in the structure also contain repeaters, so while there is in theory no limit to the number of pages in the structure, realistically there would be no more than about 30 pages (repeaters included). From a coding perspective the clone() method makes this all a breeze. Unfortunately, the clone operation just takes too long (~20 seconds). Does anyone have any insight into why this is so slow? Is there any way around this aside from reducing the number of pages? Is there hope that we might be able to make this more efficient in a future version of PW?
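As a rough back-of-the-envelope model (not ProcessWire's actual clone implementation), the cost of a recursive clone scales with pages times fields, which is how a ~30-page structure turns into well over a hundred database statements. A hypothetical sketch in Python:

```python
# Back-of-the-envelope model of a recursive page-tree clone. Assumes
# (hypothetically) one INSERT per cloned page plus one write per field;
# a real clone() does more work than this, not less.

FIELDS_PER_PAGE = 4  # roughly matches the ~3-4 fields mentioned in the thread

def clone_cost(tree):
    """tree: {'name': str, 'children': [subtrees]} -> total DB statements."""
    cost = 1 + FIELDS_PER_PAGE           # page row + one write per field
    for child in tree.get("children", []):
        cost += clone_cost(child)        # every descendant is cloned too
    return cost

# One boxset, 4 boxes, 6 repeater items per box = 29 pages in total.
boxset = {"name": "Boxset v1", "children": [
    {"name": f"Box {i}", "children": [
        {"name": f"item {j}"} for j in range(6)]} for i in range(1, 5)]}

print(clone_cost(boxset))  # 29 pages * 5 statements each = 145
```

Even under these generous assumptions, cloning one boxset means ~145 sequential statements, which is why trimming pages (e.g. replacing repeaters with a single Table field) cuts clone time so sharply.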
  10. thetuningspoon

    I find it ironic that a product called "webpack" is now touting the advantages of "code-splitting." It's like we're adding layer upon layer of complexity just to get back to where we started.
  11. thetuningspoon

    For some reason I never really thought about the fact that, with client-side rendering, you are basically sending your entire application over the wire. It's like having to install an app each time you visit a website. Makes me feel more confident in the componentized server-side approach I've been pursuing, which seems to achieve 90% of what the fully client-side approach aims to achieve, without the complexity and overhead. And it still leaves room for plugging in a more progressive framework like Vue for the cases when you need that extra 10% of interactivity.
  12. thetuningspoon

    This is now possible using owner selectors! $pages->find('template=repeater_collections_detail_images, your_repeater_field.owner.collections_detail_designer=123, include=hidden, sort=name');
  13. thetuningspoon

    This can now be done with owner selectors: $tagsThatHaveBeenUsedOnPosts = $pages->find('template=tag, tags.owner.template=post'); where tags is the name of the field on the post template that holds the tag pages. Also, for any given tag page, you can check how many post pages reference it with $page->references('template=post')->count();
  14. thetuningspoon

    @netcarver Thanks for chiming in. I just submitted a pull request on GitHub. It looks like I had submitted a bug report at the time, which this pull request resolves. I also added a password config option and made some changes to the read() method (though I don't remember exactly what they do).
  15. thetuningspoon

    An update on this for others: using Redis for sessions solved the problem for me. I had to make some changes to the module to get it to work right (if anyone wants my code, let me know).

    Today I happened to try migrating my project from ProcessWire's default MyISAM database engine to InnoDB (I had to convert all the tables and set $config->dbEngine in site/config.php). I was playing around with the demo version of my system (which is not using Redis), and interestingly enough, I THINK this (in combination with SessionHandlerDB) actually resolved the issue with simultaneous ajax calls! Perhaps this is because InnoDB has row-level locking instead of table-level locking?
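The row-level vs. table-level locking hypothesis at the end can be illustrated with a toy model (this is a simplified analogy using Python threads, not how MySQL engines are actually implemented). Two "AJAX requests" write to different session rows; with one lock per table they still queue up, while per-row locks let them proceed in parallel:

```python
import threading
import time

# Toy analogy: table-level locking (MyISAM-ish) serializes writes to
# DIFFERENT session rows, while row-level locking (InnoDB-ish) does not.

def run(lock_for_row):
    def request(row_id):
        with lock_for_row(row_id):   # "write" to the sessions table
            time.sleep(0.05)         # pretend the request holds its lock 50 ms
    threads = [threading.Thread(target=request, args=(i,)) for i in (1, 2)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

table_lock = threading.Lock()
row_locks = {1: threading.Lock(), 2: threading.Lock()}

t_table = run(lambda row: table_lock)    # one lock guards the whole "table"
t_row = run(lambda row: row_locks[row])  # one lock per "row"
print(t_table > t_row)  # True: the shared table lock forces the requests to queue
```

Under this model, the table-locked run takes roughly twice as long (~0.10 s vs. ~0.05 s), which matches the observed behavior of simultaneous AJAX calls blocking on session writes.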