FireWire

Members
  • Posts

    510
  • Joined

  • Last visited

  • Days Won

    32

FireWire last won the day on February 3

FireWire had the most liked content!

3 Followers

About FireWire

  • Birthday January 1

Profile Information

  • Location
    California
  • Interests
    Writing code. Writing more code. Refactoring. Writing code.

Recent Profile Visitors

10,960 profile views

FireWire's Achievements

Hero Member (6/6)

1k Reputation

  1. It makes sense to me that if ProcessWire is going to remove the value because it matches the default, then it might as well be replaced with '='. I think it would be a nice visual indicator and would prevent extra characters from being used when translating. There would otherwise be no way for Fluency to know that a blank field indicates a match. Never thought about that before, good catch!
  2. @bernhard Haven't forgotten about your request! The event for the organization I'm working with kicks off this week and it's been just a little hectic haha. I just checked and the language page says to use an '=' sign to tell PW to use the default language. When I entered that, it saved the = sign. Am I talking about the same thing?
  3. Thanks! *ahem* PHAT stack 😆 It satisfies so many requirements and is a pleasure to work with. It has also been a game changer with RockPageBuilder. I've already moved on to my next project, and having self-contained blocks with Alpine that require no external JS makes reusing code trivial. I may put together some GitHub gists or a repo for easier reuse and sharing. Modals, maps, video embeds: all written once and reused. I'll check this out. Modernism Week is a long-term retainer client and they're truly passionate architecture enthusiasts. I'm trying to keep up with all the knowledge and this might be a nice shortcut 😎
  4. Sorry everyone, I just realized there's a dedicated case studies area under the showcase section of the forum 🤦‍♂️
  5. Modernism Week is a nonprofit organization dedicated to celebrating and providing education about midcentury modern architecture in Palm Springs, California, the epicenter of the modernist architecture movement. The 10-day event takes place every year in February and features over 200 unique activities that draw 125,000 people from around the world. At Modernism Week you can tour iconic homes and neighborhoods, attend a cocktail party at Frank Sinatra's house, see live music, watch films, and attend engaging lectures. The organization also engages in preservation efforts to keep historic buildings and landmarks protected for future generations, as well as providing scholarships for students pursuing degrees in architecture, preservation, engineering, and design.

     https://modernismweek.com

     Tech
       • Alpine.js
       • htmx
       • Tailwind CSS

     For a full list of technology, software, members of the web team, and thanks to the module developers who made this site possible, view the humans.txt file here. I posted a deep dive into the behind-the-scenes features and application development that handles automation and makes extensive use of powerful ProcessWire features over on this forum post. It was a pleasure to work with ProcessWire on this project, and its amazing API and features made it all possible.
  6. Hey all. This year I launched a new website for Modernism Week, a nonprofit located in Palm Springs, California that celebrates midcentury architecture and hosts engaging annual educational events. https://modernismweek.com This is my most complex ProcessWire website to date and I wanted to share some of the things that were implemented behind the scenes.

Project Requirements

When I met with stakeholders at the organization, their website was a 5 page brochure site with minimal content and links to buy tickets. The ticketing platform they use is robust, but doesn't provide an experience you would expect from an event this significant. The project was a blank canvas, and after an assessment I provided a list of features that would benefit the organization, address shortcomings, identify opportunities, and support future growth and business goals. These included:

  • A vibrant design that mirrors the architecture they celebrate to create a more inspiring experience for visitors
  • Stronger adherence to brand standards and new branding celebrating the 20th anniversary of Modernism Week
  • Dedicated pages for events, activities, and offers
  • Promotion of event and organization sponsors

Challenges

At first glance the site looks pretty standard; however, it features a full event section that displays everything visitors can do at each activity during the 10-day event. Leading up to and during the event, the information changes to stay up to date with newly added activities and updates to locations, ticket availability, descriptions, pictures, etc. During planning we identified unique challenges:

  • Annual site growth of over 300 new pages for activities and events, on top of any normal site growth
  • Reduce the amount of work needed to initially enter the information for each event
  • Add complexity without compounding increases in the work needed to maintain actively changing information
  • Maintain content accuracy by mirroring regularly changing information that lives on pages on the ticketing platform
  • Synchronize event and activity information with a ticketing platform that does not provide a public API
  • Allow visitors to browse and preview activities prior to the ticket sales launch day
  • Keep cross-referenced content across the site, such as promoted activities and offers, current
  • Provide an integrated workflow between the people managing the ticketing platform and those maintaining the website
  • Improve SEO and implement structured data derived from cumulative page content for all activities
  • Ensure that all activities in an event are centrally searchable and filterable, not only via whole-website search but within a dedicated activity search feature for each event
  • Deliver a fast and performant experience for visitors
  • Manage server and database loads to prevent overloading resources or rendering the site or admin unusable

Oh, and the deadline was the day tickets went on sale, on a Friday, when traffic spikes to thousands of users per minute. We broke every rule in the book.

Manually entering and managing each page is a task too large for one or even two people. Events and activities must be added to the site while they are simultaneously added to the ticketing platform in the months leading up to events. The amount of overhead for project management and task delegation would overburden a team already working tirelessly to put on an event of this size. The timeline to design, build, test, and deploy the site was only 4 months. The solution was automation.
The Application Layer

The only way the site could be managed effectively, successfully, and accurately is to automate tasks and consolidate complex processes into single-button clicks and cronjobs. This is where ProcessWire really flexes its muscle. The API and practically limitless flexibility in custom implementation provided the ability to build out everything that was required. The site has its own custom module that provides an interface for executing back end work to synchronize data between the ticketing platform and the site. The site also employs over 60 custom hooks and hook-supporting classes (some of them I shared here) that make chained and background actions possible.

As mentioned, the ticketing platform does not provide an API, so there was no conventional method to easily retrieve data. This means that all of the data pulled from the ticketing platform is retrieved via web scraping. To make this easier and more accurate, I built a separate internal tool where the ticketing team enters activity information; it pre-formats that content to a design and adds encoded data attributes to elements in the markup that help make scraping more accurate. The retrieved data is then parsed, cleaned, and fed through processors that compare, convert, and input it into ProcessWire, where pages are created or updated. The RoachPHP library facilitated scraping; if you're interested in web scraping, it's a great tool.
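If you're curious what the scraping side can look like, here is a minimal sketch of a RoachPHP spider collecting structured data from markup that exposes encoded data attributes. The URL, selectors, and attribute names are made up for illustration; the real internal tool and its markup aren't public, so treat this as a rough outline rather than the actual implementation.

<?php

namespace App\Spiders;

use Generator;
use RoachPHP\Http\Response;
use RoachPHP\Roach;
use RoachPHP\Spider\BasicSpider;

class ActivitySpider extends BasicSpider
{
    // Hypothetical URL for the internal pre-formatted listing page
    public array $startUrls = [
        'https://ticketing-tool.example.com/activities',
    ];

    public function parse(Response $response): Generator
    {
        // Each activity block carries data attributes added specifically
        // to make scraping unambiguous
        $activities = $response->filter('[data-activity-id]')->each(
            fn ($node) => [
                'ticketingId' => $node->attr('data-activity-id'),
                'title'       => trim($node->filter('[data-field="title"]')->text()),
                'summary'     => trim($node->filter('[data-field="summary"]')->text()),
            ]
        );

        foreach ($activities as $activity) {
            yield $this->item($activity);
        }
    }
}

// Collect the parsed items, then hand them to the processors that compare,
// convert, and create/update ProcessWire pages
$items = Roach::collectSpider(ActivitySpider::class);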
Performance

There are a lot of relatively expensive operations required to sync activities in large numbers. The way unique data is stored, such as ticket pricing, availability, and schedule, requires more complex fields like RepeaterMatrix. Often the data required to calculate or display information makes using selectors alone to query/filter data difficult, inefficient, or impossible. I chose to use RepeaterMatrix fields over individual pages because of the reduction in complexity when managing the schedule, pricing, and ticket availability. Some activities occur once, others occur multiple times per day over the course of 10 days. I fully expect that the amount and complexity of information stored will increase over time due to future improvements and feature implementations. Being able to manage all of this data, compare it, and adjust it on one page is important for accuracy and for time spent on task when manual edits need to be made and cross referencing multiple pages would be cumbersome.

Regardless of page or field choices, nested loops over up to 200 pages involve significant server load, especially when that process may involve resizing images to multiple dimensions and generating required output data from multiple fields. A good example is the live search feature. Thousands of visitors querying the database is not feasible even for the reasonably powered VPS the site is on, and I wanted to create something closer to "live search". To address the workload, the search modal is empty on page load and its contents come from a separate back end page that is cached on its own with ProCache. This allows one cached page to serve the modal rather than caching the modal content with every page. If the modal content were cached with all activity pages, then even changing the ticket availability, title, summary, image, etc. would require expiring the cache of over 200 pages that have complex data. Opening the "All Activities" modal executes an htmx request, and the cached modal contents load near instantaneously.

The slowest part of that is animating the loading indicator in and out. Under the hood, the live search/filter uses keywords that are embedded in data attributes on each row. These keywords are taken from multiple fields, parsed, stripped/prepared for filtering, and combined. Doing this in a loop every time the page is called is noticeably slow. Filters for categories, sold out status, activities happening today, keyword search, and viewing activities on a specific day are all handled on the client side and require calculating per-day ticket availability, first and last occurrence dates, and, during the event, whether an activity has passed, which when displayed overrides a sold out status. Expiring this page is still an expensive task, and calculating the dates, times, and statuses to produce the data needed has taken up to an unacceptable 6+ seconds. If too many pages are uncached, this can cause performance issues for multiple users on multiple pages. It's not just about front end performance; it's also important to think about admin performance. To remedy this, I implemented a type of field pre-caching.

Front end performance is entirely dependent on ProCache. Without it the server load would be far too high and the performance hit would affect time on site and ticket sales. Overall, the goal is to touch the database as little as possible. ProcessWire is indeed fast, but there are always limitations across many different moving parts.

Synchronizing Event/Activity Data

Importing data can involve multiple background HTTP requests and reading and updating large numbers of pages along with dozens of repeater pages and fields. Syncing is done by scraping pages on the ticketing platform. There may be instances where, after analysis, the retrieved data does not require updating a page because nothing changed. If there are no changes, the page shouldn't be saved. Rather than checking all of the fields for changed values, the data is hashed and stored for later comparison. Every activity page can be updated from the ticketing platform via either the main event page that summarizes useful data, or the individual ticketing page. Sync can be enabled or disabled for the activity entirely; if it is enabled, sync can be toggled on/off for every individual field in case local management of data is preferred. The retrieved data is hashed and compared to the page's stored hash value. If the last sync datetime is older than the time limit set in the event settings, or the hash does not match, the page is updated and saved. If not, the page is ignored. This saves a lot of processing time, power, and database hits while maintaining state.

Example of an activity page. Ticketing occurrence data is parsed and run through regex to extract individual data and formatting for dedicated fields. The three fields on the bottom are taken from the Instances Summary field when parsed on import.

Looping within loops to create renderable values for the front end is taxing, but using custom page classes to organize business logic with methods that handle processing, formatting, and caching data is immensely powerful and makes short work of interacting with pages when bespoke functionality is required. Because of the high number of hooks on various operations, the ability to save quietly without triggering them is used extensively as an efficiency measure and also to prevent unwanted side effects when operations do not call for them. Sync uses this feature the most.
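For illustration, here's a minimal sketch of the hash comparison and quiet save described above, using $page->meta() for storage. The helper name, metadata keys, and sync window handling are assumptions for the example; the real logic lives in the site's custom module and page classes.

<?php

namespace ProcessWire;

/**
 * Hypothetical helper: decide whether a scraped activity actually needs saving.
 * $scraped is the cleaned/normalized data returned by the scraper for one activity.
 */
function syncActivityPage(Page $page, array $scraped, int $maxSyncAge): void
{
    // Hash the normalized data so change detection doesn't require
    // comparing every field individually
    $hash = hash('sha256', json_encode($scraped));

    $lastSync = (int) $page->meta()->get('lastSyncTimestamp');
    $isStale  = (time() - $lastSync) > $maxSyncAge;

    // No changes and the last sync is recent enough: skip the save entirely
    if (!$isStale && $hash === $page->meta()->get('syncHash')) {
        return;
    }

    $page->of(false);
    // ... map $scraped values onto the fields that have sync enabled ...

    $page->meta()->set('syncHash', $hash);
    $page->meta()->set('lastSyncTimestamp', time());

    // Save quietly and without hooks so the many hooks on activity pages
    // don't fire as a side effect of a routine sync
    wire('pages')->save($page, ['quiet' => true, 'noHooks' => true]);
}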
Field Pre-Caching

A global field called "Field Support Metadata" is used to move significant amounts of data processing from the page view event over to the page save event. This takes the multiple-field value processing and image sizing out of the rendering loops and spends the extra time on the back end operations side instead. The activity page example above uses this method almost entirely for front end rendering and makes pre-calculated values accessible in one field. This was inspired by the SearchEngine module by @teppo, which uses a single field to cache values for search. It really saved my bacon 🤣 big thanks!

Page data is saved as JSON and then retrieved on demand. Metadata includes a wide array of values, and because it's JSON, adding additional data later is very easy. While I could have used the $page->meta feature, being able to check the contents of the field visually in the admin without dumping or testing values has been useful. All of that data is processed and stored on a Pages::saveReady hook using methods defined on the custom page class. This process is skipped entirely if the activity has no occurrences scheduled.

<?php

wire()->addHookAfter('Pages::saveReady(template=event_activity)', function(HookEvent $e) {
    $page = $e->arguments('page');
    $occurrences = $page->activity_occurrences;

    if (!$occurrences->count()) {
        return;
    }

    $firstOccurrenceDate = $page->firstOccurrenceDate(fromMetadata: false);
    $lastOccurrenceDate = $page->lastOccurrenceDate(fromMetadata: false);
    $timezone = wire('config')->timezone;

    $allOccurrenceDateTimeStart = array_map(
        fn ($date) => CarbonImmutable::parse($date)->setTimezone($timezone)->toDateTimeString(),
        $occurrences->explode('date_time_start'),
    );

    $page->pushToFieldMetadata([
        'totalOccurrences' => $occurrences->count(),
        'totalInstances' => array_sum($occurrences->explode('number')),
        'firstOccurrenceDate' => $firstOccurrenceDate?->toDateTimeString(),
        'lastOccurrenceDate' => $lastOccurrenceDate?->toDateTimeString(),
        'isSoldOut' => $page->isSoldOut(fromMetadata: false),
        'isUncategorized' => $page->isUncategorized(fromMetadata: false),
        'ticketingUrl' => $page->ticketingUrl(fromMetadata: false),
        'activityDates' => $page->activityDates(fromMetadata: false),
        'allOccurrenceDateTimeStart' => $allOccurrenceDateTimeStart,
        'listFilterKeywordString' => $page->listFilterKeywordString(fromMetadata: false),
        'structuredData' => $page->getStructuredData(
            includeOffers: false,
            includeLocation: true,
            includeOrganizer: false,
            includeDescription: true,
            includeImages: true,
            includeParentEvent: false,
            fromMetadata: false,
        )
    ]);
});

Editing one hook makes adding or modifying pre-processed data available everywhere on demand at runtime via one field, which is pretty nice. Calling a method with 'fromMetadata' set to true returns the cached value; if false, the value is generated from field data. This allows the same method to be used anywhere in the application with control over the data source. Any data that can be memoized is memoized via the page class, so if a value is calculated at runtime (because it wasn't yet cached to metadata for some reason) and accessed more than once during a request/response cycle, it is pulled from memory.

Granular ProCache Control

Not every field change should invalidate the cache. For example, if sync is toggled on/off, that does not affect the rendered page, so the cache for that page should not be cleared.
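To show the fromMetadata pattern in isolation, here's a minimal sketch of what one of those page class methods could look like. The class name, the field_support_metadata field, and the tickets_available sub-field are assumptions for illustration, not the site's actual code.

<?php

namespace ProcessWire;

class EventActivityPage extends Page
{
    // Per-request memoization so repeated calls don't recompute the value
    private ?bool $memoizedIsSoldOut = null;

    public function isSoldOut(bool $fromMetadata = true): bool
    {
        if ($this->memoizedIsSoldOut !== null) {
            return $this->memoizedIsSoldOut;
        }

        // Prefer the value pre-calculated on Pages::saveReady and stored
        // as JSON in the (hypothetical) metadata field
        if ($fromMetadata) {
            $metadata = json_decode($this->field_support_metadata ?: '{}', true);

            if (array_key_exists('isSoldOut', $metadata)) {
                return $this->memoizedIsSoldOut = (bool) $metadata['isSoldOut'];
            }
        }

        // Fall back to calculating from field data (the expensive path)
        $available = array_sum($this->activity_occurrences->explode('tickets_available'));

        return $this->memoizedIsSoldOut = $available <= 0;
    }
}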
Page save events for activity pages fire hooks that analyze which fields on the page have changed and only clear the ProCache cache if the change affects pages that rely on it. Activities are referenced elsewhere and added as featured items to help promote them on other parts of the website. So if an activity page is saved where the title has changed and that activity is featured on various other pages, that could clear that page, the Event page, the All Events modal page, the category page for that activity, and the home page. Using hooks allows for very specific cache operations that cannot be configured in the admin UI.

Example of how activities can be featured and promoted in many places elsewhere on the site.

There's a little more about this on a ProCache support thread I posted to discuss efficient caching strategies. Thanks to @ryan for the insight and recommendations 👍

Protip: Pre-Warming Your Cache

ProCache is excellent, but it can't cache a page until it's visited. You can seriously speed this up by using quicklink. Quicklink watches the page for links as they're scrolled into view, and the browser makes a background prefetch request and caches the HTML document locally. If ProCache has already cached the page, the background request/response is so quick that the browser prefetches every link on the screen near instantly. When you click on a link, you get response times like this. Which aren't really response times, they're the speed at which the browser cache operates locally. Total insanity. And the nice thing is that if a visitor doesn't click through to the page, they've kindly cached it for everyone else. Thank you, kind visitor.

Dynamically Generated Content

One of the features built in thanks to hooks is automatic content generation. Examples are the "more in category" and "you might also be interested in" type suggestions at the bottom of every activity page. This is designed to help visitors explore other things that they might like and provides a great navigation experience on the site that can also increase ticket sales. These are automatically selected when a page is created during the first sync/import using a Pages::saveReady hook. Activities are given tags on the ticketing platform to help organize them. These are pulled during import and analyzed, and activities are automatically selected by ranking the most similar according to matching tag count. This boosts usable content on the page with zero work by a human. While they're automated, they can also be changed or re-ordered when editing an activity page. You can delete any or all of them and they'll be repopulated with new activities when the page is saved. The code that populates activities lives in a custom page class method for the activity so it can be called anywhere.
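Here's a rough sketch of the tag-ranking idea, assuming hypothetical ticketing_tags and related_activities fields and a simplified "only fill when empty" rule; the actual selection code lives in the activity's custom page class.

<?php

namespace ProcessWire;

// Sketch: when an activity is saved with no related activities selected,
// rank its siblings by how many ticketing-platform tags they share and
// pick the most similar ones. Field names are assumptions for illustration.
wire()->addHookAfter('Pages::saveReady(template=event_activity)', function (HookEvent $e) {
    $page = $e->arguments('page');

    // Editors may curate the selections manually, so only fill when empty
    if ($page->related_activities->count()) {
        return;
    }

    $tags = explode(' ', (string) $page->ticketing_tags);

    $scores = [];

    foreach ($page->siblings("template=event_activity, id!={$page->id}") as $sibling) {
        $shared = array_intersect($tags, explode(' ', (string) $sibling->ticketing_tags));
        if (count($shared)) {
            $scores[$sibling->id] = count($shared);
        }
    }

    // Highest shared-tag count first, keep the top four
    arsort($scores);

    foreach (array_slice(array_keys($scores), 0, 4) as $id) {
        $page->related_activities->add(wire('pages')->get($id));
    }
});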
Future Features

The site as it stands is not in its final state. Some planned features had to be cut due to prioritizing events and activities, which are revenue generating. Future features include:

  • A full, robust blog that delivers a magazine-like experience, which also requires importing posts dating back to 2015 from a separate WordPress site
  • An improved workflow that lets partners submit information about their activities for events via forms, integrates the tool the ticketing team now uses, adds the ability to save their progress, and pre-creates ProcessWire activity pages to reduce the overhead of major data import operations on the site
  • A separate section containing a web app that attendees can use during activities for additional educational information while at the event
  • An "itinerary" feature that lets visitors browse the site and add activities that they plan to purchase tickets for, then use that feature to send email reminders when tickets are available and marketing leading up to the event
  • Possibly an event calendar that lets people get an overview of the schedule and avoid schedule conflicts
  • A "Past Events" section of the site where past events are moved to but can still be browsed

Where It Can Be Improved

  • There are places in the code that should use selectors rather than pulling larger numbers of pages, since selectors and queries are optimized and efficient. Having a short deadline takes a toll on planning and execution.
  • Offloading more to cronjobs. I had to keep a lot of operations manually triggered that I want handled by true background automation, but I needed an opportunity to analyze real-world performance in production and to review the quality and accuracy of the imported data.
  • Scraping is great, but sometimes it takes fine tuning to get things right. Not to mention the amount of regex used vs my skills with regex...
  • More robust features for browsing activities, with more filters and options to choose which activities to browse. A lot of great ideas have come out of living with the site and there are some basics I'd like to see implemented.

At its heart this site is still a website, but ProcessWire really shines when it's given a big job. It eliminates the need to consider a full application framework and is faster to develop for when you need strong core features for content management. As you can imagine, trying to build something like this on a platform that is not as developer oriented as ProcessWire would not be fun, and the high quality Pro modules, community modules, and outstanding API made it possible to plan and execute without compromise. If you've made it this far, thanks for reading!
  7. Quick update: previously, Plates for ProcessWire required that you manually install Plates via Composer. The module now comes pre-packaged with Plates. If you have already installed Plates via Composer or prefer to do so, the module will prioritize the version that you have installed; if not, it will use the version that comes bundled with it. I prefer to manage packages via Composer, but that may not always be the case for everyone, and I want this module to be easy to get started with, just like Plates itself, regardless of preference or experience. I have also pushed some enhancements and improvements to the functions provided by the custom extensions. Going forward, extensions will be focused on bugfixes and non-breaking changes for stability.
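For the curious, the idea behind prioritizing a Composer-installed copy is roughly this; a sketch with an assumed bundled autoloader path and template directory, not the module's actual code:

<?php

namespace ProcessWire;

// If the project's own Composer autoloader already provides Plates, use it;
// otherwise load the copy bundled with the module (path is an assumption)
if (!class_exists('\League\Plates\Engine')) {
    require_once __DIR__ . '/vendor/autoload.php';
}

$plates = new \League\Plates\Engine(wire('config')->paths->templates . 'views');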
  8. Hey @bernhard, not an issue with RPB but something that I tweaked to help out in a situation that came up while working. I deleted a settings array in settings.php and then didn't touch that project for almost a week. When I came back to it I got an error when trying to edit a page. I didn't realize that one of the block files was still attempting to use the global setting that I had deleted. I was able to read through and find out what I did wrong, but I had to check back and forth between the block file and settings.php to figure out which one was missing. I added this and it creates a more descriptive stack trace that can save some time.

<?php

public function use($arr): BlockSettingsArray
{
    $result = new BlockSettingsArray();

    foreach ($arr as $name) {
        $blockSettings = $this->get($name);

        // Now if I try to use a global settings array that doesn't exist it shows the missing settings name
        if (is_null($blockSettings)) {
            throw new LogicException(
                "Could not find settings with the name '{$name}'. Does it exist?"
            );
        }

        $result->add($blockSettings->getArray());
    }

    return $result;
}

This helped me while managing blocks and settings in different files, and also because I leave my work in a broken state for long enough to forget what I did 😆
  9. @David Karich That would be a great feature. I've scanned the API docs and the solution would put the work of managing glossaries entirely on Fluency. I'm familiar with the feature but am learning about it in more depth right now. Bummer. The process of managing them is also a bit of a task. Of course all of this is doable and I'd like to take it on, but I'm really overloaded with work right now. I may be able to hack around a little bit, but unfortunately I couldn't provide a guarantee or an estimate on when it could be added to the module. It's definitely a feature I'd like to have on the roadmap, and if I have some time I can put towards it, I will. Sorry I can't be of more help at the moment 🫤
  10. @bernhard I'll do some more testing myself and come back with anything I can find.
  11. @Stefanowitsch Thank you very much for taking the time to check. It's gotta be something odd on my end or some conflict unique to my case.
  12. Copied and pasted, still not working for me. Not sure what it is but if it's working for you it has to be some sort of conflict on my end. When I have time I'll do some digging and come back with anything I find. Thanks for checking it out!
  13. Great tweaks, and I just learned about insertBefore()/insertAfter() for the first time. Excellent!