Wouldn't this be a great demo/challenge for ProcessWire?


wbmnfktr

Wouldn't this be an awesome demo of ProcessWire as well?

Here is the original tweet from Lee Robinson (VP of Product @ Vercel) - links below:

Quote

How should you search, filter, and paginate data with Next.js? This demo has 50,000 books in a Postgres database.

• Page Load: When the page loads, we see the React Suspense fallback. This loading skeleton is displayed until the first page of books is retrieved from the database.

• Searching: The search input has a 200ms debounce. After 200ms of inactivity, the form submits, updating the URL state with `?q={search}`. The Server Component reads `searchParams` and queries the database. On form submission, a React transition starts, allowing us to read the pending status with `useFormStatus` to display an inline loading state.

• State Preservation: Navigating to an individual book page retains the search input state. Reloading the page or sharing the link preserves the search results.

• Client-side Filtering: Filtering authors in the left sidebar is done client-side. Authors are fetched by a Server Component and passed as props to the sidebar. Changing the input value updates React state and re-renders the sidebar.

• Optimistic Updates: The sidebar’s selected authors are optimistically updated with `useOptimistic`. Checkbox selections update instantly without waiting for the URL to change.

• State Preservation: Navigating to an individual book page retains the sidebar filter input and selected author state across navigations, giving it an app-like feel.

• Pagination: Navigating between pages updates the URL state, triggering the Server Component to query the database for the specific page of books. We also fetch the total book count to show the total number of pages.

This demo isn't perfect yet (still working on it) but it's been a fun playground for some of these patterns. You can imagine a similar experience for thousands of movies, cars, products, or any other very large dataset.

Lee/NextJS: https://x.com/leeerob/status/1822668486214140134
Josh/Laravel: https://x.com/joshcirre/status/1824227064184181150

Github: https://github.com/vercel-labs/book-inventory
Demo: https://next-books-search.vercel.app/
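
The search-and-pagination core of that demo maps quite directly onto ProcessWire selectors. As a rough sketch (assuming a hypothetical `book` template with page numbers enabled - none of this is from an actual implementation yet):

```php
<?php namespace ProcessWire;
// Hypothetical template file (e.g. site/templates/books.php) sketching the
// search + pagination part with plain ProcessWire selectors.

$q = $sanitizer->selectorValue($input->get->text('q')); // mirrors the ?q={search} URL state

$selector = "template=book, limit=20, sort=title";
if (strlen($q)) $selector .= ", title%=$q";

$books = $pages->find($selector); // one page of results; offset comes from the page number
$total = $books->getTotal();      // total matches, for "page X of Y"

echo "<p>{$total} books found</p>";
foreach ($books as $book) {
    echo "<article><h2>{$book->title}</h2></article>";
}
echo $books->renderPager(); // pagination links via the core MarkupPagerNav module
```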


Couldn't resist and started playing around with it.

What I can say so far:

  • Importing 50,000 pages via ImportPagesCSV isn't the way to go
    I somehow remembered it being faster and a bit more reliable, even with such a large set of pages.
    Maybe I'm wrong on this, but I almost always use this module for CSV imports.
  • Wired together a custom import script (sketched below)
    Not all books and details were imported due to weird symbols/letters/words in the book titles, and I didn't bother fighting through that.
    I skipped almost everything on the first run and just created pages with titles. The database growth was kind of surprising to me.
    All that took less time than trying to import everything with ImportPagesCSV.
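
The custom script boils down to something like this (simplified sketch; template, parent and file names here are placeholders, not exactly what's in the repo):

```php
<?php namespace ProcessWire;
// Simplified sketch of the title-only import, run from the site root
// after bootstrapping ProcessWire.

include __DIR__ . '/index.php'; // bootstrap ProcessWire

$parent = wire('pages')->get('/books/');

$fh = fopen(__DIR__ . '/books.csv', 'r');
fgetcsv($fh); // skip the header row

while (($row = fgetcsv($fh)) !== false) {
    $title = trim($row[0]);
    if (!strlen($title)) continue;

    $p = new Page();
    $p->template = 'book'; // placeholder template name
    $p->parent = $parent;
    $p->title = $title;
    $p->save();            // PW derives the name and resolves duplicates itself

    wire('pages')->uncacheAll(); // keep memory usage flat over 50k iterations
}
fclose($fh);
```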

Those weird symbols/letters/words I mentioned earlier:

[screenshot: garbled characters in imported book titles]

Database difference pre/post import 50k pages with just the title:

[screenshot: database size before/after the 50k title-only import]

 

Created a public GitHub repo for this: [embedded link]

Will update this post with more details as they occur.

 


I have also started on this and I’m not sure how to approach it really. For instance, the sidebar with the authors is both kinda bad UX and wastefully implemented. Indeed the entire thing is wasteful as heck. Every time a book cover comes into view it preloads not only its entire details page, but also, for some reason, ONE MEGABYTE of random other stuff. Like check this out, this is what your browser deals with for EVERY book cover you see in that demo:

[Removed because it triggered some kind of spam protection, lol. Sorry about that! Just check out your dev tools when visiting the demo from the OP.]

Doesn’t seem like that much at first glance until you notice the weird repetition at the top and also, check out that horizontal scrollbar. Every book cover you look at seems to load EVERY AUTHOR EVER. Like… what? I’m not a galaxy-brained react dev, so maybe this is genius beyond my comprehension?

Anyway I’m kinda not into loading all authors even once just to filter them client-side, but deviating from the original seems like a cop-out?
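
For contrast, the server-side route I'd lean towards in ProcessWire terms would be something like this (hypothetical `author` template and `filter` parameter; just a sketch of the idea, e.g. called per keystroke via AJAX or HTMX):

```php
<?php namespace ProcessWire;
// Sketch: filter authors in the database per request instead of shipping
// the entire author list to the client once.

$term = $sanitizer->selectorValue($input->get->text('filter'));

$selector = "template=author, limit=50, sort=title";
if (strlen($term)) $selector .= ", title%=$term";

foreach ($pages->find($selector) as $a) {
    echo "<label><input type='checkbox' name='authors[]' value='{$a->id}'> {$a->title}</label>";
}
```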

Edited by Jan Romero
or maybe it was the swear word

I won't try to rebuild the Next.js version down to every detail, but rather build a usable app like it - I don't know yet how every part will work and look. But for sure I won't load tons and tons of stuff just because something was in a hover state or the like.

6 hours ago, Jan Romero said:

Every book cover you look at seems to load EVERY AUTHOR EVER. Like… what? I’m not a galaxy-brained react dev, so maybe this is genius beyond my comprehension?

Maybe there is a reason for that, but there probably won't be one in my version of it.
I just want to see how nice and easy it could be to build such a tool, and how fast it is in the end.

6 hours ago, Jan Romero said:

Anyway I’m kinda not into loading all authors even once just to filter them client-side, but deviating from the original seems like a cop-out?

As said before: I won't copy the app down to every detail; I will try to build an app/tool that works and has all or most of the features.

 

The most fun part until now was to realise that importing 50k items is something different than handling 50k items that grew over years. A nice reality check.


8 hours ago, wbmnfktr said:

The most fun part until now was to realise that importing 50k items is something different than handling 50k items that grew over years. A nice reality check.

Yep. About 24 hours on InnoDB for a full import. More than 52k books, 983 genres, 48k characters, 10k publishers, 15k awards. All in all 95314 pages in an otherwise pristine PW installation.

I'm attaching a site profile with the data if someone wants to play around.

site-bookinventorydata.zip


22 hours ago, Jan Romero said:

[Removed because it triggered some kind of spam protection, lol. Sorry about that! Just check out your dev tools when visiting the demo from the OP.]

... what was it?

No, everything is good!

PS: I do get support forum e-mails (as a forum moderator), and 99% of the blocked messages are real spam. But today the mail subject read "Jan Romero has posted a post in a topic requiring approval", which let me use the post's URL to look at it!

Edited by horst

9 hours ago, BitPoet said:

About 24 hours on InnoDB (...) All in all 95314 pages

Is it faster to add START TRANSACTION and COMMIT in the SQL file?

In my current project I sometimes import 5000 pages using PW API, and with transactions (wire()->database->beginTransaction(); wire()->database->commit();) it is done in 20-30 seconds.
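
The pattern looks roughly like this (a sketch with a placeholder `book` template; note that transactions only work on InnoDB tables):

```php
<?php namespace ProcessWire;
// Sketch of the batched-transaction import: MySQL flushes to disk once per
// batch instead of once per page save.

$database = wire('database');

try {
    $database->beginTransaction();

    foreach ($rows as $row) {      // $rows = your parsed import data
        $p = new Page();
        $p->template = 'book';     // placeholder template
        $p->parent = '/books/';
        $p->title = $row['title'];
        $p->save();
    }

    $database->commit();           // one disk flush for the whole batch
} catch (\Exception $e) {
    $database->rollBack();         // undo the entire batch on any error
    throw $e;
}
```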

Edited by da²

1 hour ago, da² said:

In my current project I sometimes import 5000 pages using PW API, and with transactions (wire()->database->beginTransaction(); wire()->database->commit();) it is done in 20-30 seconds.

It should be faster with transactions, and I did add beginTransaction and commit in my import script. I'm still trying to figure out where the bottleneck is. It shouldn't be the hardware (i9-14900K, 64GB, high-speed NVMe SSD), and I didn't see any I/O waits in the stats, so something is probably wrong with my DB server (MySQL 8.4.0 on Windows 11). Looks like I've got to do some forensics there.
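
One usual suspect for exactly this symptom (fast hardware, no I/O waits, slow bulk inserts) is innodb_flush_log_at_trx_commit=1, which forces a redo-log flush on every commit. Just a guess, but it's quick to check, e.g. from the PW API:

```php
<?php namespace ProcessWire;
// Check the InnoDB flush setting. 1 (the default) flushes the redo log on
// every commit; 2 relaxes that to roughly once per second, which can speed
// up bulk imports a lot (at the cost of up to ~1s of commits on a crash).

$stmt = wire('database')->query("SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit'");
$row = $stmt->fetch(\PDO::FETCH_ASSOC);
echo "{$row['Variable_name']} = {$row['Value']}\n";
```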

Edited by BitPoet

What was described in the tweet sounds somewhat similar to what I did for the Transferware Collectors Club, which I built over the course of 2021 with HTMX (before it was 1.0). Check out the video I linked to in the post here (it's not a public site): [embedded link]


1 hour ago, da² said:

In my current project I sometimes import 5000 pages using PW API, and with transactions (wire()->database->beginTransaction(); wire()->database->commit();) it is done in 20-30 seconds.

I bet you didn't forget to assign a sensible page name, then?
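
(For context: when no name is set, ProcessWire derives one from the title on save and has to resolve collisions against existing siblings, which adds up over thousands of duplicate or garbled titles. Setting it explicitly from a unique source-data ID - hypothetical `$row['id']` here - sidesteps that:)

```php
<?php namespace ProcessWire;
// Sketch: derive the page name from a unique ID in the source data instead
// of the (possibly duplicated or garbled) book title.

$p = new Page();
$p->template = 'book';     // placeholder template
$p->parent = '/books/';
$p->title = $row['title'];
$p->name = wire('sanitizer')->pageName("book-{$row['id']}"); // $row['id'] assumed unique
$p->save();
```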

