  1. thetuningspoon

    @LostKobrakai This is on a production server. @dragan There are no files or images. @teppo We are creating a box subscription service where the user can have multiple boxes and configure what is included in each of them. The page structure is:

    - Subscription (not an actual user page, just linked to the user by a page field)
      - Boxset v1 <-- This is the page I'm calling clone() on, and the resulting clone is stored as a sibling
        - Box 1
        - Box 2
        - Box 3
        - Box 4
      - Boxset v2
        - Box 1
        - Box 2
        - Box 3
    ...etc.

    There are not many fields on each page (~3-4), but each Box has a repeater field on it which usually contains 6 entries. The repeater uses AJAX to generate new items. The figure of 30 pages I came up with included the repeater pages.

    I've switched to using a ProFields Table field instead of a repeater, since I think it will actually serve us just as well in this case. The clone time is now greatly reduced, although it still feels a bit slow to me. It does sound like I should set up a test case and some debugging to see if something weird is going on. Thanks everyone for your feedback, and let me know if you have any further thoughts!
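    For context, the versioning step described above amounts to something like the following sketch (assumes a ProcessWire template context; the $subscription variable and the "Boxset" titles are hypothetical stand-ins for the structure shown, not code from the actual project):

    ```php
    // Sketch only: clone the latest boxset as a new sibling version.
    // $subscription is a hypothetical Page holding the Boxset children.
    $currentBoxset = $subscription->children("sort=-created")->first(); // e.g. "Boxset v1"

    // $pages->clone() copies the page and, by default, all of its children
    // (the Box pages and their repeater items) — which is where the time goes.
    $newBoxset = $pages->clone($currentBoxset);
    $newBoxset->title = "Boxset v2"; // hypothetical versioned title
    $newBoxset->save();
    ```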
  2. thetuningspoon

    In an app I'm building, I frequently need to clone a set of pages that represent a user's 'order'. Every time they make changes to the order, I want to make a new version by cloning this page structure. The pages in the structure also contain repeaters, so while there is in theory no limit to the number of pages in the structure, there would realistically be no more than about 30 pages (repeaters included).

    From a coding perspective, the clone() method makes this all a breeze. Unfortunately, the clone operation just takes too long (~20 seconds). Does anyone have any insight into why this is so slow? Is there any way around this aside from reducing the number of pages? Is there hope that we might be able to make this more efficient in a future version of PW?
  3. thetuningspoon

    I find it ironic that a product called "webpack" is now touting the advantages of "code splitting." It's like we're adding layer upon layer of complexity just to get back to where we started.
  4. thetuningspoon

    For some reason I never really thought about the fact that, with client-side rendering, you are basically sending your entire application over the wire. It's like having to install an app each time you visit a website. Makes me feel more confident in the componentized server-side approach I've been pursuing, which seems to achieve 90% of what the fully client-side approach aims to achieve, without the complexity and overhead. And it still leaves room for plugging in a more progressive framework like Vue for the cases when you need that extra 10% of interactivity.
  5. thetuningspoon

    This is now possible using owner selectors!

    $pages->find('template=repeater_collections_detail_images, your_repeater_field.owner.collections_detail_designer=123, include=hidden, sort=name');
  6. thetuningspoon

    This can now be done with owner selectors:

    $tagsThatHaveBeenUsedOnPosts = $pages->find('template=tag, tags.owner.template=post');

    where tags is the name of the field on the post template that holds the tag pages. Also, for any given tag page, you can check how many post pages reference it with:

    $page->references('template=post')->count();
  7. thetuningspoon

    @netcarver Thanks for chiming in. I just submitted a pull request on GitHub. It looks like I had submitted a bug report at the time, which this pull request resolves. I also added a password config option and made some changes to the read() method (but I don't remember exactly what they do).
  8. thetuningspoon

    An update on this for others: using Redis for sessions solved the problem for me. I had to make some changes to the module to get it to work right (if anyone wants my code, let me know).

    Today I happened to try migrating my project from ProcessWire's default MyISAM database engine to InnoDB (I had to convert all the tables and set $config->dbEngine in site/config.php). I was playing around with the demo version of my system (which is not using Redis), and interestingly enough, I THINK this (in combination with SessionHandlerDB) actually resolved the issue with simultaneous ajax calls! Perhaps this is because InnoDB has row-level locking instead of table-level locking?
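    The conversion described above can be sketched roughly as follows (a per-table ALTER statement run against the site's database; the table names shown are just examples of ProcessWire's tables, and the actual list varies per site):

    ```sql
    -- Convert each existing table from MyISAM to InnoDB, one ALTER per table.
    -- 'pages' and 'field_title' are example ProcessWire table names.
    ALTER TABLE pages ENGINE=InnoDB;
    ALTER TABLE field_title ENGINE=InnoDB;
    ```

    After converting the existing tables, setting $config->dbEngine = 'InnoDB' in site/config.php (as mentioned above) makes ProcessWire create any new tables with InnoDB as well.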
  9. thetuningspoon

    You should be fine using the native $_SESSION, although PW also provides the $session api variable if you want to use it (store vars using $session->myVar = $myValue and retrieve with $session->myVar). I am suspicious that something else is causing your problem.
  10. thetuningspoon

    Eyup, good point. Which is why this can come in handy.
  11. thetuningspoon

    What I meant was whether PW was using the same basic flow: converting a selector to an SQL select statement, getting the IDs of the matching pages, and then calling getById() to load the actual pages into a PageArray. I spent this morning doing a deep dive into the core and have confirmed that this is how it works. I was also able to simplify the example by @LostKobrakai to the following:

    $pf = $this->pages->getPageFinder();
    $query = $pf->find($selector, ['returnQuery' => true]);
    # Show sql
    //$query->getQuery();
    # Modify query
    //$query->where($sql);
    $statement = $query->execute();
    $statement->execute();
    # Load the pages
    $ids = array();
    while($row = $statement->fetch()) $ids[] = $row[0];
    $myPages = $this->pages->getById($ids);

    I haven't solved the pagination side of things yet. Unfortunately, PagesLoader::find() is doing quite a bit of work that we're not able to take advantage of, due to the fact that we have to bypass it completely and go straight to PageFinder::find() in order to get the DatabaseQuerySelect() object. I'm not sure if this problem can be solved without modifying the core or duplicating a lot of its code.

    For future reference, this is the basic flow of a Pages::find() call (sans various options and some intermediary methods):

    Pages::find()
    - Does nothing on its own; delegates to PagesLoader::find()

    PagesLoader::find()
    - Checks for the page in the runtime cache; returns the cached page if it's in memory
    - Creates a selector object from your selector string (PageFinder::find() can also do this, as I discovered)

    PageFinder::find()
    - Turns the selector object/string into a ProcessWire DatabaseQuerySelect object (via PageFinder::getQuery())
    - Turns the DatabaseQuerySelect into an SQL select statement and queries the database
    - Returns a multidimensional array with the ID, parent ID, and template ID of each matching page (OR a ProcessWire DatabaseQuerySelect object if the $returnQuery option is true)

    PagesLoader::getById()
    - Takes an array of IDs
    - Creates a page object for each ID and populates it with fields from the database (an additional database query). This is where any autojoin fields are pulled from the database.

    PagesLoader::find() (continued)
    - Sorts pages by template (?)
    - Sets up pagination
    - Returns the final PageArray
  12. thetuningspoon

    This seems to do it:

    $pf = $this->pages->getPageFinder();
    $selector = new Selectors($selector);
    $query = $pf->find($selector, ['returnVerbose' => true, 'returnQuery' => true]);
    $statement = $query->execute();
    $statement->execute();
    $ids = array();
    while($row = $statement->fetch()) $ids[] = $row[0];
    $myPages = $this->pages->getById($ids);

    Is this how PW constructs the PageArray during a regular $pages->find()? So even if you were autojoining all your fields, it would still do one query to find the matching pages and then a separate query for each page to load the desired fields?
  13. thetuningspoon

    Yes, I understand. I meant: how does it work under the hood? I guess path is a dynamic page property, so it requires constructing the page object to get it? You mean with a $pages->get()? Wouldn't that mean going back to the database again to build each page? Anyway, thank you for explaining further. At this point I am wondering if you can provide any insight into my original question, which was how I can modify the SQL of a regular $pages->find() and then return the results as a PageArray.
  14. thetuningspoon

    @bernhard How does the closure thing work? Is it creating a page object for each result? Can you give an example of when the closure would be required? Thanks
  15. thetuningspoon

    @bernhard This looks very cool, but I am not totally clear on what it does and how it does it. Can you give me a semi-technical explanation of how your module works (inputs and outputs) and where it plugs into the core? What exactly does a call to RockFinder return? Is there a way to get page objects from the results if I don't need the scalability features of the module and just want a normal page array in the end? Can I still use pagination with the results? Also, as cool as your module looks (and it looks very cool), if what I'm wanting to do can be achieved easily with just the core, I'd prefer to keep it simple.