
Leaderboard

Popular Content

Showing content with the highest reputation since 03/02/2023 in all areas

  1. We've got just a few core updates on the dev branch this week, but next week we're looking at finally merging in the InputfieldTinyMCE module! This week I also wrapped up the WireRequestBlocker module that was mentioned in last week's post, and the v1 beta is now posted in the ProDevTools download thread. I've been running it here on processwire.com this week and it's been doing a good job of keeping out the vulnerability scanners and bots. For more details on this new module please see the new Wire Request Blocker page that I just posted. Thanks and have a great weekend!
    23 points
  2. It's crazy sometimes how quickly the work week goes by. On Monday I started working on a Stripe Checkout integration for a client (using the FormBuilderProcessorStripe module), and somehow today I'm still working on it, and it feels like only a day has passed. This particular integration is a little more complicated than others I've worked on. The user makes a few selections that determine the final price, and when they submit the form, it has to authorize (but not yet capture) the amount due, so that the money is basically in a holding state. Then it has to send a notification to another company asking them to approve or deny the request. If they approve it, then it captures those funds through Stripe. Or if they deny it, then it releases the hold on those funds. It's also connected to multiple Stripe accounts in different currencies, and it has to use whichever one corresponds with the transaction details. In this particular form, some of the purchases also involve a 3rd party web service to confirm availability. And there's more to it as well, but I'll leave it at that... it's just a lot of moving parts, so I guess that's why I haven't done anything this week other than work on that. But the good news is that much of it has been added to the FormBuilderProcessorStripe module, so that the next time this need comes up for you or me, hopefully it won't take as much time.
Here's a few of the things that have been added to FormBuilderProcessorStripe: Previously you could just accept a payment. Now you also have the option to set up a separate authorization and capture. And you can capture from a newly added API method, from the FormBuilder entries screen, or from your Stripe dashboard. On the API side, all you need to complete the capture is the ID of the payment (called the payment intent ID), which is saved with the form entry. The capture typically must be done within 7 days of the authorization (a rough sketch of this authorize/capture flow follows after this entry). Doing an authorization (and later capture) is preferable to a charge when you think there's a reasonable chance that it'll need to be undone. One reason is because an authorization costs nothing, whereas a charge (or capture) does, regardless of whether it is refunded. The module also has a new option to create a Customer in Stripe that you can charge later. This is different from authorization/capture in that creating a customer doesn't authorize any particular transaction or funds, but rather saves their payment info in Stripe, enabling you to charge it anytime later. This is useful for many cases, but one would be where a customer wants to save their payment information with their account, so they don't have to re-enter it every time they make a purchase. Several new configuration options were also added. New public API methods were added to capture, cancel and refund payments. You can now pass any data from the form into Stripe metadata. You can now specify the Stripe API version that you want to use with the module. And you can now send email receipts, even if not enabled in Stripe. More transaction information is now shown when using the "view entry" screen in the admin. Several new hooks were also added.
Technically this update to FormBuilderProcessorStripe is ready to post now, but I'd like to do a little more testing first, so I'll be posting this module update in the FormBuilder board next week. I also have some updates to the InputfieldFormBuilderStripe module as well (which uses Stripe Elements rather than Stripe Checkout), and it may be updated at the same time, or shortly after.
No core updates this week, but hopefully I got enough client work done this week that I can really focus on the core next week. Thanks for reading and have a great weekend!
    22 points
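To illustrate the separate authorize and capture flow described in the entry above: the FormBuilderProcessorStripe method names aren't documented in that post, so the sketch below uses Stripe's official stripe-php SDK directly. Everything here (the key, amount, and the $paymentMethodId variable) is illustrative only, not the module's actual API.

<?php
// Minimal sketch of authorize-now, capture-later with the stripe-php SDK.
// Assumes: composer require stripe/stripe-php, and a payment method already collected.
require __DIR__ . '/vendor/autoload.php';

$stripe = new \Stripe\StripeClient('sk_test_...'); // secret key of the matching Stripe account

// 1) Authorize only: create the PaymentIntent with manual capture so funds are held, not charged.
$intent = $stripe->paymentIntents->create([
    'amount' => 4900,             // amount in the smallest currency unit
    'currency' => 'eur',
    'capture_method' => 'manual', // authorize now, capture within ~7 days
    'payment_method' => $paymentMethodId,
    'confirm' => true,
]);
// $intent->id is the "payment intent ID" that gets saved with the form entry.

// 2a) If the other party approves the request, capture the held funds:
$stripe->paymentIntents->capture($intent->id);

// 2b) Or, if they deny it, release the hold:
// $stripe->paymentIntents->cancel($intent->id);

With Stripe Checkout specifically, the same manual-capture behavior would be requested via the Checkout Session's payment_intent_data, but the later capture/cancel calls look the same.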
  3. There are several updates on the dev branch this week (commit log), including issue fixes, feature additions and minor class improvements. One of the updates I'd planned to add this week was moving InputfieldTinyMCE into the core. However, I noticed that TinyMCE was up to version 6.4.1 now and we were still running 6.2.0, so I decided instead to upgrade ours to the latest and test it out for another week in its own repository. If all continues to work well, I'll likely commit it to the core in 3.0.215. If you have a chance to test the latest version of InputfieldTinyMCE, please do, and open an issue report if you run into any trouble. Last week the Wire Request Blocker module was released in the ProDevTools board and this week we have version 2, which includes several new additions: Added support for blocking groups. Added configurable settings for immediate block (rather than just a strike) for URLs and user agents. Added support for using RequestBlocker in other applications (like we use it here in IP.Board). Added a feature where you can manually test URLs or user agent strings to see how they match your rules. Added a configuration setting so you can choose whether or not to use a log file. Added a section to the docs on how to block URLs from your .htaccess file. As I wrote this post, the processwire.com site is getting hounded with dozens of IPs trying to locate backup or database zip/rar/tar/gz files, using every possible combination of filenames and extensions you can think of, including those that include the term "processwire". Remember to never leave backup files or DB dump files accessible by URL lying around on your server, because they will eventually be found. Adding these rules (below) to WireRequestBlocker's URL matching rules seems to have mostly stopped those DB/backup hunting bots:
/ba=/backups/|/backup/|/bak/|/back/
.txt=credentials.txt|backup.txt|password.txt|passwords.txt
.sql=.sql.gz|.sql.tar|backup.sql|dump.sql|db.sql|database.sql|mysql.sql|.com.sql
.tar=.tar.gz|.tar.sql|dump.tar|backup.tar|bak.tar|website.tar|backup.tar|www.tar
.zip=backup.zip|bak.zip|.com.zip|well-known.zip|index.zip|public_html.zip|website.zip|dump.zip|wallet.zip|application.zip
.rar=bak.rar|website.rar|backup.rar|www.rar
.gz=website.gz|bak.gz|backup.gz|.com.gz
/old/
WireRequestBlocker only knows its rules and doesn't know who's real and who's a bot, so be careful not to hit URLs containing those strings on this site or it might hit you with nothing but 403's for a few hours. 🙂 Next week is Spring Break here, so I'll likely be on a reduced schedule with kids home from school. Thanks for reading, have a great weekend! +75 more blocks (not shown)
    20 points
  4. This week ProcessWire 3.0.214 is on the dev branch. Relative to 3.0.213 this version has 16 new commits which include the addition of 3 new pull requests, 6 issue fixes, a new WireNumberTools utility class, and improvements to various other classes. A new $files->size($path) method was added which returns the total size of the given file or directory. When given a directory, it returns the size of all files in that directory, recursively (a short usage sketch follows after this entry). Improvements were also made to ProcessWire's log classes (WireLog and FileLog) with new methods for deleting or pruning all log files at once. This version also fixes an issue with the front-end page editor (PageFrontEdit) when used with InputfieldTinyMCE. For more details on these updates and more see the dev branch commit log. Something else I've been working on this weekend is a vulnerability scanner blocker and throttler. I don't know if this is an issue for every site, or if it's because this is an open source project site, but we seem to get a lot of vulnerability scanner bots hitting the site. Sometimes they hit the site pretty hard (with hundreds of thousands of requests) and our AWS cluster servers and databases must scale to meet the demand, using more resources, and thus increasing cost. This is annoying, having to scale for a hyperactive vulnerability scanner rather than real traffic. And it always seems to happen in the middle of the night, when I'm not nearby to manually block it. So I'm working on a module that detects vulnerability scanner traffic patterns and then blocks or throttles requests from their IPs automatically. Once I've got it functioning smoothly here, I'll also plan to add it to the ProDevTools board download thread in case it's useful to anyone else. Thanks for reading and have a great weekend!
    20 points
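A quick illustration of the new $files->size() method mentioned in the entry above; this assumes the 3.0.214 dev branch, and the paths are just examples:

<?php namespace ProcessWire;

// Total size (in bytes) of everything under /site/assets/files/, recursively
$bytes = $files->size($config->paths->files);
echo number_format($bytes) . " bytes";

// Size of a single file
echo $files->size($config->paths->root . 'composer.json');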
  5. This week the focus was on core updates and we have a quality mixture of minor issue fixes, pull request additions, and other improvements in ProcessWire on the dev branch. My favorite addition was from a PR by @matjazp that makes improvements to ProcessWire's pagination module, MarkupPagerNav. See more updates and PRs in the dev branch commit log as well. There's still more to add before we bump the version to 3.0.214, so stay tuned for that next week. By the way, if you've recently launched any new sites in ProcessWire, please add them to our sites directory. I think most of us are already subscribed to the ProcessWire Weekly email, but just in case you aren't, you can subscribe here. The email content is from weekly.pw written by @teppo and is great to read, highly recommended! Thanks and have a great weekend!
    15 points
  6. https://github.com/elabx/FieldtypeRecurringDates Well a few summers later, finally got like a sort of working version. For info please take a look at the repo's readme. It's still super alpha but going to deploy into a couple websites to replace Recurme so wish me luck. This module is only a Fieldtype and Inputfield, there are no "markup" modules as there used to be in Recurme. I'd appreciate any testing or any kind of comments.
    14 points
  7. WEBP to JPG Converts WEBP images to JPG format on upload. This allows the converted image to be used in ProcessWire image fields, seeing as WEBP is not supported as a source image format. Config You can set the quality (0 – 100) to be used for the JPG conversion in the module config. Usage You must set your image fields to allow the webp file extension, otherwise the upload of WEBP images will be blocked before this module has a chance to convert the images to JPG format. https://github.com/Toutouwai/WebpToJpg https://processwire.com/modules/webp-to-jpg/
    14 points
  8. As the project processwire-recipes.com went offline quite some time ago now, I took the repository and deployed a browsable version of it over on: https://processwire-recipes.pages.dev/ There is also a new repo: https://github.com/webmanufaktur/processwire-recipes/ ... on which this deployed version is based. Changes, fixes, updates can be submitted there. Feel free to update links, fork the repo, submit changes for recipes layout or whatever. It uses 11ty for rendering .md to .html and all the other magic. I will add further details regarding formatting recipes and submitting new content in the future. Further notice: @teppo: you link to the old domain on weekly.pw
Changes/Updates: incoming - right now this is WIP (work in progress)
Maintainers: feel free to let me know, I'll add you to that repo
Details:
- the repo is on GitHub: see above
- hosting/deployment: on Cloudflare Pages via 11ty
- updates: all updates against the master branch will be deployed within 60 seconds after commit
- changes: right now only via pull request
- maintainers: wanted - see above
    14 points
  9. I couldn’t resist the hype and created a simple module using the ChatGPT API to process field values. You can find it on GitHub here: https://github.com/robertweiss/ProcessChatGPT ProcessChatGPT is triggered upon saving, if you select it in the save dropdown. It processes the value of a page field, which can be set in the module config, using ChatGPT. The processed value can be saved back to the same field or to another field on the same page, which can also be set in the module configuration. You can add commands to the value that will be prefixed to the source field content. This way, you can give ChatGPT hints on what to do with the text. For example, you could add ›Write a summary with a maximum of 400 characters of the following text:‹. One of my clients is already using the module to summarise announcement texts for upcoming music events on their website (Let’s face it, nobody reads them anyway 😄). If anyone finds it useful, I would be happy to submit it to the official module list.
    13 points
  10. I want to show you a project that I started developing in summer of 2022 and that went online in January 2023. Kulturhaus Wilster ("Arts Centre Wilster") https://www.kulturhauswilster.de The Kulturhaus Wilster - also called "Wilster's living room" by many visitors - is a socio-cultural center in the heart of Wilster. Wilster is a small city located north of Hamburg, Germany. Despite the fact that this is a small venue, they offer a large number of events. The events range all the way from concerts to theatre and everything in between! The old version of the site was a super simple WordPress website in a black-and-white only color scheme. In my opinion it did no justice to the very colorful program that the Kulturhaus offers, so I tried my best to bring some color into the game. The whole website should have a shabby-chic look combined with clean, modern elements. The homepage offers a preview of the next 8 upcoming events. A blog section is also included and the latest post is displayed next to the event calendar book as PDF download. The event pages offer a quick-reservation form (tickets are not sold online) and a quick look at the next upcoming events in the sidebar. The website features a large event calendar. It was a really nice exercise in using ProcessWire's very own paging and selector features. Events can be searched and filtered by type (and month), too. All with a few lines of code only. For example, "Look for events that take place in the future in a specific category":
$events = $pages->find("select_event_cat.title~|%=$c, template=event, date_event>=today, sort=date_event, sort=time_from, limit=6");
Besides that the website features some colorful content pages with large images, galleries, textboxes and some teaser elements. The editors of the website are able to display all facets of the Kulturhaus this way. Tech Talk: Frontend framework is Bootstrap 4.6. ProcessWire modules used on this one are:
- WireMail: SMTP (https://processwire.com/modules/wire-mail-smtp/)
- SEO Maestro (https://processwire.com/modules/seo-maestro/)
- All In One Minify for assets (https://processwire.com/modules/all-in-one-minify/)
- PageImageSource for webp image srcsets (https://processwire.com/modules/pageimage-source/)
- JkPublishPages is used for time-controlled publishing of the blog posts. Please check out this module! Thanks @Juergen
- @bernhard's great RockFrontend was also used. In this particular project only the autorefresh feature (because this module was brand new back then and development of the page was almost done). But even "only" using autorefresh makes it worth using it!
Please have a look:
    12 points
  11. TL;DR Just copy this folder to your module and get fully automated releases, tags, changelog and with one additional line of code (see note below!!) also fully automated version numbers for your PW modules! Update: There is a problem with this approach. Grabbing the version number from JSON does not work in the modules directory at the moment. See the discussion here: https://github.com/processwire/processwire-issues/issues/1690 Background/long story: @dotnetic asked me some time ago to use conventional commits for my modules. It was totally new to me, here is the link: https://www.conventionalcommits.org/en/v1.0.0/ I've since adopted that workflow for most of my modules, as it is really just copy and pasting the .workflow folder (and you can simply copy that folder from RockFrontend: https://github.com/baumrock/RockFrontend/tree/main/.github/workflows) This workflow is great, because it will create new releases automatically for you: https://github.com/baumrock/RockMigrations/releases It will create new tags automatically for you: https://github.com/baumrock/RockMigrations/tags (This is something that I always forgot and someone was asking for it a long time ago, because tags are important if someone uses composer to manage their modules, as far as I understood. Now I can't forget them any more 🙂 ) It will create a changelog for you automatically: https://github.com/baumrock/RockMigrations/blob/main/CHANGELOG.md And last but not least: It will create version numbers automatically for you. One thing that has always been annoying though is that you have to make sure that the version number that is automatically created matches the version number that you set in your module's .module.php or .info.php file. I have always tended to have everything in my .module.php file, but I just recently noticed that this can lead to problems if you are using PHP8 syntax in your module and the website still runs on PHP version <8 --> You end up getting a fatal error whenever the module is loaded, which totally breaks ProcessWire's module backend. The solution is easy though: Just split your module config into two files - the .module.php with all the module's logic and the .info.php file with only the array that defines the module's info array. Thx for that reminder @BitPoet! Here is the HelloWorld's info.php file with some helpful comments: https://github.com/ryancramerdesign/Helloworld/blob/master/Helloworld.info.php I did that today for RockMigrations and I realised that you can even fully automate the version numbering of your module, because the conventional commit workflow creates a package.json file that holds the version number in a json string: https://github.com/baumrock/RockMigrations/blob/main/package.json All you have to do is to grab that json string and write that version into your module's info.php file: https://github.com/baumrock/RockMigrations/blob/92a33ba4e7146a69fb1f52d5b67270b66d9f1e54/RockMigrations.info.php#L5-L9 (a minimal stand-in sketch follows after this entry). Afraid of a performance penalty? You don't need to be. The module's info is cached by PW and only updated on modules::refresh! Maybe other module authors like @adrian or @Robin S or @kongondo or @teppo or @netcarver or @Macrura or @flydev want to adopt that workflow as well. Thx for reading and have a great weekend 🙂
    11 points
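To illustrate the "grab the version from package.json" idea in the entry above, here is a minimal stand-in for a module's .info.php file. The real RockMigrations implementation is linked in the post; the module name and keys below are only examples, and note the caveat at the top of the entry that the modules directory currently can't parse a version fetched this way.

<?php namespace ProcessWire;

// MyModule.info.php - module info kept separate from MyModule.module.php so that
// PHP8-only syntax in the module logic can't fatal on a PHP <8 install.
$package = json_decode(file_get_contents(__DIR__ . '/package.json'), true);

$info = [
    'title' => 'MyModule',
    'summary' => 'Example module using the automated release workflow',
    'version' => $package['version'] ?? '0.0.1', // written by the conventional-commits workflow
    'autoload' => false,
];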
  12. A module for ProcessWire CMS to integrate a user registration/login functionality based on the FrontendForms module. This module creates pages and templates during the installation for faster development. The intention for the development of such a module was to provide a ready-to-use solution for user management, which can be installed and put into operation in a few minutes. It works out of the box, but it provides a lot of configuration settings in the backend. Highlights:
- "One-click" integration of a user login/registration system without the hassle of creating all pages and forms by hand
- "One-click" switch between login or login and registration option
- Double opt-in with activation link on new registrations
- Option for automatic sending of reminder mails, if account activation is still pending
- Option for automatic deletion of unverified accounts after a certain time to prevent unused accounts
- Option to use TFA-Email if installed for higher security on login
- Multi-language
- Select if you want to login with username and password or email and password
- Select the desired roles for newly created users
- Select which fields of the user template should be displayed on the registration and profile form (beside the mandatory fields). Fields and order can be changed via drag and drop functionality
- Offer users the possibility to delete their account in the members area using a deletion link with time expiration
- Customize the texts of the emails which will be sent by this module
- Usage of all the benefits of FrontendForms (e.g. CAPTCHA, various security settings, ...)
- Support for SeoMaestro if installed
- Lock accounts if suspicious login attempts were made
This module runs on top of the FrontendForms module, so please download and install this module first. Download FrontendLoginRegister This module is in early alpha stage, so please do not use it on live sites at the moment. If you discover any issues, please report them directly on GitHub 🙂. Thanks!
    10 points
  13. On this site, I don't really find the spambots or SEO bots to be much of an issue, so I mostly ignore them unless they get too aggressive. It's instead the vulnerability scanners that tend to be the issue here. They are fine when they are throttled. But when they are unthrottled (as is usually the case), they eat up a lot of resources. Here's just one basic example: a vulnerability scanner might send through thousands (or tens of thousands) of URL variations looking for SQL files that it can grab, with dozens of different names each, like db.sql, database.sql, backup.sql, [domain].sql, database-[domain].sql, db-[domain].sql, [domain]-db.sql, and so on and on and on. Then add all the extension variations, .sql, .sql.gz, .sql.tar, .sql.tar.gz, and then add every URL with a trailing slash in the site as the prefix path for every check. So just a scan for SQL files in the open might account for tens of thousands of requests. And it'll try to do them all in a very short period of time, making a server like ours scale to meet the demand. Yet this is just an example of one vulnerability check out of thousands that it'll do. Once a vulnerability scanner gets started, it'll run for potentially days. But I usually block them well before that. Once I get an email from AWS about things scaling, I watch the logs pretty closely and then start blocking IPs. But the goal is to have the module just block them automatically. What the module does is that it allows you to define suspicious patterns in GET or POST requests, or user agent strings (and it comes with several patterns to start). For example, you might have patterns to match things like wp-login.php, those SQL request variations mentioned above, requests for .py, .cfm, .rb, .exe files, or others that you don't use on the server, requests containing SQL commands in the query string... these are just obvious examples. Then it lets you define a number of strikes till the IP is out. So for every pattern match, the IP gets a strike. So if I set it to "3 strikes and you are out" then once it gets 3 pattern matches, the IP is blocked for a period of time, also defined with the module. If additional strikes occur while an IP is blocked, the block time gets reset so it starts over, ensuring it's always blocked that set amount of time from the last strike. (A rough sketch of this strike/block logic follows after this entry.)
    10 points
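The strike/block logic described in the entry above boils down to a counter per IP with a sliding block window. Here is a rough sketch of that idea only, assuming nothing about the real module's internals; the patterns, thresholds, and use of the $cache API variable from /site/ready.php are all illustrative.

<?php namespace ProcessWire;

// Illustrative sketch only - not the actual module code or its API.
$patterns   = ['#wp-login\.php#i', '#\.sql(\.gz|\.tar)?$#i', '#union\s+select#i'];
$maxStrikes = 3;    // "3 strikes and you are out"
$blockSecs  = 3600; // block duration, restarted on every new hit while blocked

$ip   = $_SERVER['REMOTE_ADDR'];
$url  = $_SERVER['REQUEST_URI'];
$key  = 'blocker-' . md5($ip);
$data = $cache->get($key) ?: ['strikes' => 0, 'blockedUntil' => 0];

if($data['blockedUntil'] > time()) {
    // already blocked: extend the block from this latest hit and bail out early
    $data['blockedUntil'] = time() + $blockSecs;
    $cache->save($key, $data, $blockSecs * 2);
    http_response_code(403);
    exit;
}

foreach($patterns as $regex) {
    if(!preg_match($regex, $url)) continue;
    $data['strikes']++; // one strike per pattern match
    if($data['strikes'] >= $maxStrikes) $data['blockedUntil'] = time() + $blockSecs;
    $cache->save($key, $data, $blockSecs * 2);
    break;
}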
  14. Template Access Log is a straightforward module that logs changes made to template-level access settings: the useRoles option, or applicable roles and/or role-specific permissions. This module is primarily intended for use cases where an audit log is needed, and (at least for now) it just logs data to a log file template_access_log.txt and provides no admin view (apart from what can be found in the logs section in admin). Data is logged as JSON (a small sketch of reading the entries back follows after this entry): 2023-03-18 18:42:05 admin https://example.com/processwire/setup/template/save {"template":"basic-page","template_id":29,"use_roles":1,"permissions":{"view":[37,1061,1062,1125],"edit":[1062],"add":[1061,1062],"create":[]},"permissions_changed":{"edit":[-1061]}} This is something that I needed for some projects, so thought I'd share it here in case someone else has use for it as well. I may add more features later, but at the moment it's already doing pretty much everything it needs to for my use case(s) 🙂 GitHub: https://github.com/teppokoivula/TemplateAccessLog Packagist: https://packagist.org/packages/teppokoivula/template-access-log
    8 points
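Because the module above writes to a regular ProcessWire log, the JSON entries can be read back with the $log API. A hedged sketch, assuming $log->getEntries() is available in your ProcessWire version; only the log name comes from the post, the rest is illustrative:

<?php namespace ProcessWire;

// Read the 20 most recent template access changes back out of the log
foreach($log->getEntries('template_access_log', ['limit' => 20]) as $entry) {
    $data = json_decode($entry['text'], true);
    if(!is_array($data)) continue;
    echo "{$entry['date']} {$entry['user']} changed access for template {$data['template']}\n";
}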
  15. I oftentimes create a checkbox field called 'test' and assign it to certain templates. I check the box if the page is a test page. These pages may exist on the live site and I don't want to hide or unpublish them, but at the same time, I don't want them to appear in the XML Sitemap. (not part of this tutorial, but I also noindex,nofollow those pages using a meta tag in the head so search engines don't index them) In that case, you can remove them from the WireSitemapXML like this in /site/ready.php: $wire->addHookAfter('WireSitemapXML::allowPage', function(HookEvent $event) { $page = $event->arguments('page'); if($page->hasField('test') && $page->test) $event->return = false; });
    8 points
  16. TextformatterFootnotes Github: https://github.com/eprcstudio/TextformatterFootnotes Modules directory: https://processwire.com/modules/textformatter-footnotes/ This textformatter adds footnotes using Markdown Extra’s syntax, minus Markdown About This textformatter was primarily created to ease the addition of footnotes within HTML textareas (CKEditor or TinyMCE) using Markdown Extra’s syntax. It will also retain any inline formatting tags that are whitelisted. Usage To create a footnote reference, add a caret and an identifier inside brackets ([^1]). Then add the footnote in its own line using another caret and number inside brackets with a colon and text ([^1]: My footnote.). It will be put automatically inside the footnotes block at the end of the text. Notes:
- the identifier has to be a number, but it is permissive in that you can put them in the wrong order and they will be renumbered sequentially
- there is no support for indentation, though since <br> tags are allowed you should be fine
- if a footnote has no corresponding reference, it will be ignored and left as is
Options In the module settings, you can change the icon (string) used as the backreference link but also the classes used for the wrapper, the reference and backreference links. You can also edit the list of whitelisted HTML tags that won’t be removed in the footnotes. Hook If you want to have a more granular control over the footnotes (e.g. per field), you can use this hook:
$wire->addHookBefore("TextformatterFootnotes::addFootnotes", function(HookEvent $event) {
    $str = $event->arguments(0);
    $options = $event->arguments(1);
    $field = $event->arguments(2);
    if($field != "your-field-name") return;
    // Say you want to change the icon for an <svg>
    $options["icon"] = file_get_contents("{$event->config->paths->templates}assets/icons/up.svg");
    // Or change the wrapper’s class
    $options["wrapperClass"] = "my-own-wrapper-class";
    // Put back the updated options array
    $event->arguments(1, $options);
});
Check the source code for more options.
    7 points
  17. Or maybe better update the docs so that they can reference blog posts. We have many important explanations in blog posts and they should be linked from/to the docs imho 🙂
    7 points
  18. I'm also reusing generic fields in my templates with renamed labels. Keeps the DB cleaner. I use custom page classes, a great feature that was introduced in 3.0.152, to map those generic field names to more meaningful properties. This way I get intellisense in my editor and don't have to look up the mappings of generic field names to meaningful ones. Here's an example page class:
<?php namespace ProcessWire;

use RockMigrations\MagicPage;

/**
 * Custom page class that provides properties/methods for pages with template glossary-item
 *
 */
class GlossaryItemPage extends DefaultPage {
    use MagicPage;

    /**
     * holds the glossary term
     *
     * @var string $term
     */
    public $term;
    public $hide;
    public $exclude;
    public $tooltip;
    public $meaning;
    public $description;

    // set custom properties in loaded
    public function loaded() {
        $this->set('term', $this->title);
        $this->set('hide', $this->checkbox);
        $this->set('exclude', $this->checkbox2);
        $this->set('tooltip', $this->text);
        $this->set('meaning', $this->text2);
        $this->set('description', $this->rte);
    }

    /**
     * Magic Page hook for clearing cache for terms from module Glossary
     *
     */
    public function onSaved() {
        $this->wire->cache->delete(Glossary::CACHE_NAME);
    }
}
The property->fieldname mapping happens in the loaded() method. You can comment the property definitions so you get some meaningful info with code intellisense. In the template where I want to use pages with template glossary-item, I define the type for those pages to get intellisense:
/** @var GlossaryItemPage $p */
<?= $p->term ?>
Some notes on the page class code:
- The GlossaryItemPage extends DefaultPage. My DefaultPage class is a base class for all other page classes which holds generic methods that I want to have available in all page classes.
- I'm using @bernhard's fabulous RockMigrations module which, apart from migrations, provides MagicPages. This makes it super easy to add commonly used hooks to your page classes.
- I have a Glossary module installed which handles migrations and logic for the glossary. In the migrations for that module I define the custom field labels in the template context:
'glossary-item' => [
    'fields' => [
        'title' => ['label' => 'Name of the Term'],
        'checkbox' => ['label' => 'Hide'],
        'checkbox2' => ['label' => 'Exclude this term from parsing'],
        'text' => ['label' => 'Text Shown in the CSS Tooltip'],
        'text2' => ['label' => 'Meaning'],
        'rte' => ['label' => 'Description of the Term'],
    ],
],
All in all this is a very structured approach. It surely takes some extra time to set up. But this pays back, especially for larger, more complex projects. It took me quite some time as a PW developer to come to this kind of setup. I started out with very simple procedural code. I wish I had all these tools available when I started out developing with PW that we have now (page classes, migrations etc.) thanks to this great community. Everyone here has their own approach and workflow. So you will surely get some inspiration.
    6 points
  19. Very interesting: Mapping Heritage: Geospatial Online Databases of Historic Roads. The Case of the N-340 Roadway Corridor on the Spanish Mediterranean https://www.mdpi.com/2220-9964/7/4/134/htm
    6 points
  20. @wbmnfktr Thank you so much for breathing life back into this project. The website already looks awesome. I have an idea/suggestion: it would be great if all the code snippets could easily be integrated and used in our IDE. With the current way of how they are stored on GH I wouldn't know how this could be achieved. There is a vscode extension that lets you use gists as code snippets. So if the actual code snippets were organized in gists it would be possible to use those from within the IDE. Maybe there is an automated way of storing all snippets as gists from within your build process?
    6 points
  21. I'm creating a new topic in response to @cst989's question in the RM thread as I think this is a common question and misunderstanding when evaluating RockMigrations. It's also easier to communicate in separate threads than in one huge multi-page thread... Hi @cst989 thx for your question and interest in RockMigrations. This sounds like you have a misconception in your head which is quite common I guess. Did you watch my latest video on RM, especially this part? https://www.youtube.com/watch?v=o6O859d3cFA&t=576s So why do you think it is a good thing to have one file per migration? I know that this is the way migrations usually work. But I don't think that this is the best way to do it. I'm not saying one way is right and the other is not. I'm just saying I'm having a really, really good time with RockMigrations and it makes working with PW in a more professional setup (meaning either working in a team and/or deploying PW to multiple locations and of course managing everything with GIT) a lot more fun and a lot faster and more efficient. If we look at how migrations usually work we can have a look at the other PW migrations module, which works just like usual migration modules work: You create one file per migration and you end up with a list of migrations that get executed one after another. See this screenshot from the module's docs: In my opinion that screenshot perfectly shows one huge disadvantage of that approach: You don't see what's going on. You end up with a huge list of migrations that you can't understand on first sight. In RockMigrations this is totally different. You don't create one file per migration. You put all the necessary migrations where they belong and - in my opinion - make the most sense. An example: Let's say we want to add 2 fields to the homepage: "foo" and "bar". Ideally you are already using Custom Page Classes (see my video here https://www.youtube.com/watch?v=D651-w95M0A). They are not only a good idea for organizing your hooks but also your migrations! Just like adding all your HomePage-related hooks into the HomePage pageclass init() or ready() method, you'd add the migrations for your 2 fields into the HomePage pageclass's migrate() method. This could look something like this (a rough stand-in sketch follows after this entry): Now let's say we develop things further and realise we also need a "bar" field: Do you see what we changed? I guess yes 🙂 Now one big difference to a regular migration approach is that you don't write downgrade() or reversion migrations - unless you actually want to revert the changes! In real life I've almost never ever needed to revert changes. Why? Because you develop things locally and only push changes you really want to have on your production system. If you happen to have to remove some changes that you applied on your dev it's easy to do, though: You see what we did? Nice! So does everybody else that has access to the project's GIT repo! So all your team mates will instantly see and understand what you did. Pro-tip: You don't even need lines 43-45 if you didn't push those changes to production! If you only created those fields on your local dev you can simply restore the staging database on your local dev environment and remove the migrations that create the fields.
Pro-tip 2: Also have a look at RockShell, then restoring staging or production data is as easy as "php rockshell db-pull staging" Pro-tip 3: When restoring a DB dump from the staging system it can easily happen that you have data in your database that was created only on the remote and that you don't have on your dev system (like new blog posts for example). If you then open those new blog posts on your dev system and a blog post contains images, ProcessWire will not be able to show those images (as only the file path is stored in the DB and not the whole image!). Just add $config->filesOnDemand = "http://yourstagingsite.example.com" to your config.php file and RockMigrations will download those files once PW requests the file (either on the frontend or also on the backend)! Having all your changes now in your git history you can even jump back and forth in time with your IDE: You could. I thought about that as well. But I think it does not really make sense and I hope my examples above show why. I'm always open to input though and happy to try to think about it from other perspectives. One final note: I'm not sure if what you say about $rm->watch() makes sense here. If you watch() a file that means that RM checks its modified timestamp. If that timestamp is later than the last migration that RM ran, then RM will automatically execute the migrations that are in that file. All other files and migrations will be ignored. That makes it a lot more efficient when working on projects that have many migration files in many different places. When triggered from the CLI though, or if you do a modules refresh, it will always trigger all migrations in all watched files. I hope that makes sense! --- Ok, now really a final note 😄 One HUGE benefit of how RockMigrations works (meaning that you write migrations in page classes or in modules) is that you create reusable pieces of work/code. For example let's say you work on a website that needs a blog. So you create a blog module and in that module you have some migrations like this:
<?php
$rm->createTemplate('blogparent');
$rm->createTemplate('blogitem');
$rm->setParentChild('blogparent', 'blogitem');
$rm->migrate([
    'fields' => [...], // create fields headline, date, body
    'templates' => [...], // add those fields to the blogitem template
]);
You'd typically have those lines in Blog.module.php::migrate() What if you need a blog in another project? Yep --> just git clone that module into the new project and execute migrations! For example in migrate.php this:
<?php
$rm->installModule('Blog');
If you follow a regular migrations concept where all project-migrations are stored in a central folder you can't do that! Of course you don't have to work like this. You can still write all your migrations in the project's migrate.php file. Because I have to admit that it is a lot harder to build a blog module that can be reused across different projects than just creating one for one single project! It always depends on the situation. But - and now I'll really leave it for today 😄 - you could also make your Blog-Module's migrate() method hookable and that would make it possible to build a generic blog for all projects and then add field "foo" to your blog in project-a and field "bar" to your blog of project-b. Have fun discovering RockMigrations. I understand it can look frightening at first, but it is an extremely rewarding investment! Ask @dotnetic if you don't believe me 😄
    6 points
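The screenshots referenced in the entry above aren't reproduced here, so as a stand-in, this is a hedged sketch of what a migrate() method on a HomePage page class could look like with RockMigrations. The field types, labels, and the way the module instance is fetched are assumptions for illustration, not code from the original post.

<?php namespace ProcessWire;

use RockMigrations\MagicPage;

// /site/classes/HomePage.php - custom page class for template "home"
class HomePage extends Page {
    use MagicPage;

    public function migrate() {
        /** @var RockMigrations $rm */
        $rm = $this->wire->modules->get('RockMigrations');
        $rm->migrate([
            'fields' => [
                'foo' => ['type' => 'text', 'label' => 'Foo'],
                'bar' => ['type' => 'text', 'label' => 'Bar'],
            ],
            'templates' => [
                'home' => ['fields' => ['title', 'foo', 'bar']],
            ],
        ]);
    }
}

Removing the 'bar' entries again (plus, if it was already deployed, something like an explicit $rm->deleteField('bar')) is the revert workflow described in the post.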
  22. Just took a moment to play a bit more with ChatGPT (v3.5) and built a simple module with it. While it was pretty easy, ChatGPT needs a lot of assistance to make things work out as expected. Adding the ProcessWire namespace? Nope. Taking care of user input and sanitizing? Nope. But that's totally fine for me. It took about 30-40 minutes from start to published on GitHub. See: TextformatterAsciiEmoji The README.md was also written by ChatGPT.
    5 points
  23. Very nice! 👍😄 (https://processwire.recipes/code-of-conduct/)
    5 points
  24. It's pretty straightforward. Instead of defining textual options, you create a page for every selectable option. Preferably, you group them under their own parent, and you'll usually want to assign them their own template, both to make queries easier and so you can add more fields to the template later. In your case, you can start with a (backend-)template "region", no need to add a PHP template file since the region pages are only for filling the dropdown. You'll also want a template "regions" for the regions' parent page. This one will be the overview page you're trying to build and therefore needs a PHP template. In the first step, just create both templates with the minimum fields PW suggests (i.e. just a "title" field which will be the label in your dropdown). Then create the pages.
Home
- Regions
- - Region 1
- - Region 2
- - ...
In your hotel template, add a page reference field for the region, and on the "Input" tab in the field's configuration, go to "Selectable pages". Under "Templates", select "region". Edit your hotels and assign a region to each. Now you can easily iterate over all available regions in your regions.php template file, output the region's details and a list with the hotels for that region. A little PHP snippet as a starting point:
<?php namespace ProcessWire;
foreach($page->children() as $region) {
    echo "<h3>{$region->title}</h3>";
    echo "<ul>";
    $hotels = $pages->find("parent=/hotels/, region=$region");
    foreach($hotels as $hotel) {
        echo "<li><a href='{$hotel->url}'>{$hotel->title}</a></li>";
    }
    echo "</ul>";
}
    5 points
  25. Yes! Seriously, someone pls email @ryan about this. If no one else will do it, I'll do it. My new favorite ProcessWire infographic:
    5 points
  26. I hope we all together can keep this project alive, and make it a super-awesome-meta-collection-of-goodness for ProcessWire. There are so many awesome solutions and answers to common questions and challenges for whatever task buried deep in the forums. From basic questions to full-blown Pro module solutions. Maybe we all should be more sensitive to answers and solutions we get and at least should think about submitting those answers to the repository. In my opinion this repo/project should have the goal to be the starting point for all ProcessWire newcomers. From easy tutorials to advanced setups. BUT... that's just my thought for the moment. To make this whole project NOT my personal project, I will add more and more people from our community as maintainers/owners to this whole repo and will think about a solution to find a way to finance the domain without me. But those are future tasks. Just wanted to let you know. It's not about me, but the thought behind all this!
    5 points
  27. Another short update: In the last few days @marcus and I talked and he made me a member and owner of the "old" organization repo on GitHub so we can still use the old/former official repository - that's awesome! Yet... the old domain is long gone. I tried to contact the new owner but no chance so far. Even through buyback services and domainers. Outlook: I will update all recipes to the new markdown format, while keeping a copy of the legacy version in a separate branch/version. I will update all guides on how to submit and update recipes in the next few days. AND: I bought the processwire.recipes domain - so I hope it was worth it. 😉 MORE (ideas and changes and whatsoever): Details will follow soon on processwire.recipes! In other news: Thanks @teppo for updating the link, I'll have to contact you again when the new domain and repo are online for the public. 😁
    5 points
  28. Would be nice to update the Font Awesome icons in the PW core, so they can be used for labels in the backend. The current version 4 is quite old and has a very limited set of icons. Version 6 also has a free version with more icons. The implementation seems to be very similar, so it should be easy to implement. The icons can be downloaded here.
    5 points
  29. Good day, everyone! I am happy (and a bit scared) to announce the release of a long awaited new maintenance version for this module. @Mats gave me his blessing to take control of the module. In fact, the repo has moved not to my GitHub account, but rather to an org called Friends of ProcessWire, in which there are some brilliant devs already, and maybe more will join. But that is a story for another post I am planning to write soon (a little intrigue))) As for now please test the new 3.0.4 version. It has some code merged from @ukyo (big thanks to him!) and a few lines by myself. I hope that this release fixes a few issues and does not introduce new ones. But I can't be sure here, so I changed the stability tag to Beta. Here is the changelog:
- Now the module uses https://nominatim.openstreetmap.org for all geocoding. Before, it still used the Google geocoding API in places.
- Fixed the issue with the map display in admin when the field is in a repeater or in a collapsed fieldset. There is still a problem with ajax tabs.
- Fixed ProcessWire namespace declaration.
- Fixed markup in README.txt.
- Executable bit is removed from all files in the repo.
- Git branches are renamed to be more familiar. The latest released code is now in the master branch again instead of PW3. The dev branch is used for current development. Previously used branches are renamed and kept for now, though they will probably be deleted in the future.
- Updated the module page in the modules directory.
    5 points
  30. Creating this thread for the AlpineJS module 🙂 https://github.com/elabx/AlpineJS
    5 points
  31. On a less positive note, this is a very interesting, and a bit scary, talk https://www.humanetech.com/podcast/the-ai-dilemma
    4 points
  32. Now that there is llamaindex https://gpt-index.readthedocs.io/en/latest/index.html with a connector for github repos https://llamahub.ai/l/github_repo available, I thought it would be awesome to have a LLM that is specifically trained on PW. I thought I'd give it a try and asked GPT-4 for some guidance on how to create an index and then set up training and so on. Here's what I asked for And that was the answer Disappointing. This answer proves that even GPT-4 only has data up to September 2021 available. At that time llamaindex was not public yet. I'm not into Python at all. So I would have a hard time trying to set up indexing of the PW github repo and then training a LLM on that index. I'd love to see how this would work out. But GPT-3 and 4 are already quite good at understanding PW, it seems. I installed https://marketplace.visualstudio.com/items?itemName=Rubberduck.rubberduck-vscode in vscodium. So I don't have to switch to the browser when asking GPT. Pretty neat tool.
    4 points
  33. I was wondering: when I use the same hooks on different pages, is it better to only use that hook once and divide with if statements, or is it ok to call a hook multiple times? Like:
$wire->addHookBefore('Pages::saveReady', function(HookEvent $event) {
    $page = $event->arguments(0);
    if($page->hasField("field1")) {
        //do something
    }
    if($page->hasField("field2")) {
        //do something else
    }
});
$wire->addHookAfter('Pages::unpublishReady', function(HookEvent $event) {
    $page = $event->arguments(0);
    if($page->hasField("field1")) {
        //do something
    }
    if($page->hasField("field2")) {
        //do something else
    }
});
or
//Things regarding pages with Field 1
//######################
$wire->addHookBefore('Pages::saveReady', function(HookEvent $event) {
    $page = $event->arguments(0);
    if($page->hasField("field1")) {
        //do something
    }
});
$wire->addHookAfter('Pages::unpublishReady', function(HookEvent $event) {
    $page = $event->arguments(0);
    if($page->hasField("field1")) {
        //do something else
    }
});
//######################
//Things regarding pages with Field 2
//######################
$wire->addHookBefore('Pages::saveReady', function(HookEvent $event) {
    $page = $event->arguments(0);
    if($page->hasField("field2")) {
        //do something else
    }
});
$wire->addHookAfter('Pages::unpublishReady', function(HookEvent $event) {
    $page = $event->arguments(0);
    if($page->hasField("field2")) {
        //do something else
    }
});
//######################
Of course the first version looks nicer, but if you have a lot of functions it becomes increasingly confusing to keep track of those functions. The second version makes it much easier to keep logic together, but I'm concerned whether that will raise trouble with performance.
    4 points
  34. Using RockMigration's MagicPages feature: <?php public function editFormContent($form) { $form->add([ 'type' => 'markup', 'label' => 'Published at', 'value' => date("Y-m-d H:i:s", $this->published), ]); } No additional fields needed as @Jan Romero said 🙂
    4 points
  35. 🤡 How to provide solutions to the community for free forever working on ProcessWire ? 🤖 Apply to the following job, do physical activity to prevent hypertension and improve mental health 👇
    4 points
  36. mail sent... let's see
    4 points
  37. @wbmnfktr I checked our list and don't see your email is on it. So next I checked the list activity log and see that we got 3 bounces from your email about 2 years ago. After 3 bounces it removes you from the list, as Mailgun doesn't like it if we keep sending to an address that bounces. I think it can be solved just by re-subscribing. Please let me know if you find it doesn't. Thanks.
    4 points
  38. Thank you, @gebeer for the awesome feedback. I love that idea! I wish we could create some sort of snippet collection (right now) from all of these recipes in some kind or another. While it would be quite easy for me to export each recipe to another format of almost any kind, almost each and every recipe has to follow some or all rules of formatting. Right now... the formatting is super basic but we can work on it and could even create snippets for only those recipes that support the wanted formatting. I could export those snippets, each and every time, into another repository, when a recipe gets flagged as "isSnippet=true". If we can find a workable solution and formatting for real snippets... I'm here to support it! But first we should define a default/standard format for recipes and go from there. I wish we could go through all of the recipes to make them some kind of uniform, working with ProcessWire 2+, and mark those which only work with ProcessWire 3, or whatever version of ProcessWire, just to find a solid foundation. From there we could work out a snippet format and necessary fields, export them and even publish them to wherever we want. Something I forgot... there are snippets, ready.php snippets, common hook snippets... and probably more... so we have to classify snippets and recipes right from the start in some kind. But sure... that's possible. This will be fun! I really love this idea.
    4 points
  39. For those who just want to let users upload WEBP format and are looking for a workaround, here is a simple module that converts WEBP to JPG on upload: I wonder though... how is it that the user acquired the WEBP image in the first place? Surely they didn't just pinch it off somebody else's website... 🤔
    4 points
  40. Hi @kongondo, thank you for your feedback. Since you still ask for wishes for the new version, here are two more. 😃 Just like the focus point, it would be very helpful to save the image description differently on a per-page basis. Example: I want to use an image several times in different articles, but I want a different caption in each article. And the biggest wish I have to pass on: Folders, folders, folders. My clients love thinking in folders. It doesn't even have to be a real folder for PW, but just a "fake page tree". The main thing is to have the feeling of having something organised. In this context, a configuration would of course be necessary, where you can, for example, specify where the image is automatically categorised when uploading. An example of how it is solved, for example, in the WP media library as an extra plugin. Article: https://devowl.io/2020/create-folders-in-wordpress-media-library/
    4 points
  41. Hi, just a little idea I often think about: when you create a fieldset field and you give it one or more tags in the advanced tab, it may be a good idea that the fieldset_END gets the same tag(s) by default instead of ending up in the list of fields with no tags 🙂 have a nice day
    4 points
  42. HTMX will see the hx- attributes on your element and send a GET request to yourdomain.at/trigger_delay with the value of the input named “q” whenever it is triggered. Whatever the server sends as a response, HTMX will put in the div at the bottom (HTMX calls this “swapping“). Or something like that. I’m not familiar enough with HTMX’s defaults and the trigger syntax to be sure that your code will definitely work. Looks good though. The important parts happen in /trigger_delay, obviously (a rough sketch of what such an endpoint could look like in ProcessWire follows after this entry). That is in keeping with HTMX’s whole raison d’être of mostly sticking to doing things on the server. What’s sticking out to me is the url of your endpoint “/trigger_delay”. It looks like this is supposed to be a page that returns search results or type-ahead suggestions, so I would expect something like “/search”.
    3 points
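As an illustration of the server-side half discussed in the entry above, here is a hedged sketch of a ProcessWire template file answering such an HTMX request with a bare HTML fragment. The template name, searched fields and selector are assumptions, and it also assumes the output is not wrapped by an appended _main.php.

<?php namespace ProcessWire;

// /site/templates/search.php - responds to GET /search/?q=... with an HTML fragment
// that HTMX can swap into its target element.
$q = $input->get->text('q');

if(strlen($q) < 3) {
    echo "<p>Keep typing…</p>";
    return;
}

// quote the user input for safe use inside a selector
$selectorValue = $sanitizer->selectorValue($q);
$results = $pages->find("template=basic-page, title|body%=$selectorValue, limit=10");

if(!$results->count()) {
    echo "<p>No results for " . $sanitizer->entities($q) . "</p>";
    return;
}

echo "<ul>";
foreach($results as $result) {
    echo "<li><a href='{$result->url}'>{$result->title}</a></li>";
}
echo "</ul>";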
  43. @joshua Wow, you're faster than light 🔦 Thank you 🙂 https://github.com/webworkerJoshua/privacywire/releases/tag/1.1.2 Tested this on one of my sites and it works like a charm.
    3 points
  44. I also sort of have this scenario and decided to use separate Time fields to indicate start and end time of each occurrence. I just have to assume that the occurrences have the same time.
    3 points
  45. Thank you @bernhard for this extensive post. I use RockMigrations every day and love it, as I stated often before. It isn't as hard as it looks at first. Just try it out and if you should stumble over something, then just ask for support here in the forum. RockMigrations saves much time and makes it easy to develop features in your dev environment and then when the feature is finished push the changes to the live server with migrations being executed automatically, which creates all fields, templates and even pages and contents. Could live without it, but that would be a sad life. So.... start using it now!
    3 points
  46. @gebeer We're running the module on a couple of sites on PHP 8.0 and 8.1 with no issues. If you do find any issues running on PHP 8+, open an issue on GitHub or let me know here and I will fix it as soon as possible!
    3 points
  47. Great to see someone tackling this 🙂 I wish you all the best for the challenge 🙂 Do you have some screenshots? And maybe a short description how the module works from a technical point of view? Why did you split the finding part in a separate module?
    3 points
  48. Technically speaking the best approach would be to block the IP as early as possible: preferably via firewall (on the local machine or before it), but if that's not possible then Apache config, and if that's not possible either then .htaccess. Blocking access early means that more unwanted traffic gets blocked out, and it is also better for performance 🙂 That being said, I just updated https://processwire.com/modules/page-render-iprestriction/ to support a list of blocked IP addresses. This module can block access to your site, but note that it won't attempt to protect your static files, so again other alternatives are generally speaking preferable.
    3 points
  49. These files are third party classes from Manuel Lemos. I first thought to wait until he has updated his files, but a PR would be pretty fine!! 👍😄 (There are already some parts in these third party classes that we have enhanced and/or fixed ourselves here.) Thanks @monollonom!
    3 points
  50. Behind the scenes entries in a PageTable field are just regular pages in a regular PageArray. Let's say you have a field "pagetable" and it is configured to use template "blog-post" for items. You can create a new page with new items in the pagetable field like this:
// create page that holds pagetable items
$p = new Page();
$p->template = 'basic-page';
$p->parent = '/foo/';
$p->title = "Bar";
$p->save();

// get parent id for pagetable items from field config
$parentID = $p->getField('pagetable')->parent_id;

// create item page for field pagetable
$item = new Page();
$item->template = 'blog-post';
$item->parent = $parentID;
$item->title = "Item Foo";
$item->save();

// add item page to pagetable field
$p->pagetable->add($item);
$p->save('pagetable');
Since the value of $p->pagetable is a PageArray, you can find and manipulate the pagetable items with methods from Pages, PageArray and WireArray. To remove a specific item:
$item = $p->pagetable->get('title="Item Foo"');
$p->of(false);
$p->pagetable->remove($item);
$p->save('pagetable');
You can also use the WireArray manipulation methods insertBefore(), insertAfter(), append(), prepend() to add items in a specific order.
    3 points