Everything posted by ryan

  1. @matjazp Thanks, I have updated all of those and others along the way. The only one I didn't update was Selectize, because we're using what's called the "standalone" version, and I don't see it mentioned at all in the current release, so it seems like there's something I'm missing and I figure it's better to leave that one for now. (Especially if the new version isn't yet updated for jQuery 3.x.) Regarding Magnific, I had a look at the mentioned security issue and I don't think it's an actual vulnerability, if I understand correctly. It looks like it would require someone being able to manipulate the image filename in order to insert XSS into that filename. If someone can do that, then the installation is already compromised whether Magnific is there or not.
  2. I'm not yet sure what ProcessWire could do here, since it's the template file that controls all the logic of what gets output. But I may not yet fully understand the request, so I'll use an example of what I do understand below. Markup Regions don't have control over what your template file spends time rendering, just what gets output at the end. So there wouldn't be much benefit to having output of partials when it still has to spend the time to render everything, whether used in the output or not. Instead, you would need some logic in your template file in order to selectively render partials and gain a performance benefit from it:

<?php namespace ProcessWire;
// render just $part if requested, otherwise render all parts
$part = $input->get('part'); // i.e. header, content, footer
?>
<?php if($part == 'header' || !$part): ?>
  <div id='header'> ...header markup... </div>
<?php endif; ?>
<?php if($part == 'content' || !$part): ?>
  <div id='content'> ...content markup... </div>
<?php endif; ?>
<?php if($part == 'footer' || !$part): ?>
  <div id='footer'> ...footer markup... </div>
<?php endif; ?>
<?php if($part) return $this->halt(); ?>

In the above example, if the page is requested without a "?part=" query string in the URL, then it renders everything (header, content and footer). But if requested with a "?part=content" query string in the URL (for example), then it renders and outputs just the <div id='content'>...</div>.
  3. @matjazp One thing that's not totally clear to me is whether the jQuery focus() and blur() methods are actually deprecated. I'm probably missing something, but so far I can't find anything on the jQuery site that indicates those two are deprecated. It would make sense that they would be, since many of the other shorthand methods are.
  4. @matjazp Thanks!! I will update those files and the jQuery TableSorter version to the one you linked.
  5. @artfulrobot There are different ways you could go about it, but what you described should work. The way I built the comments form (here) for the example I linked earlier was to use a FormBuilder form for the comments/reviews form and just use the comments API to add comments. (In that example, the "Rate more details" link at the bottom opens a bunch more fields.) I mainly used FormBuilder because there were so many different fields, photo uploads, etc., that it went beyond what I wanted to do with extending a CommentForm class. Though a manually written regular HTML form would have also worked fine. I use CommentForm more often when it's more typical blog-type comments with the built-in optional stars, votes, etc., as it can save a lot of time since it's nearly turn-key. If you only need to add a field or two, that's probably the quickest route. Btw, I also see no harm in using pages for comments, but you'd be building a lot from scratch with regard to spam prevention, comment approval, etc., and also, as a personal preference, I like to keep anonymous user-generated content out of the page tree.
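For reference, here's a rough sketch of adding a comment via the comments API from your own form processing code. The "comments" field name and the specific sanitizer calls are assumptions for the example; adjust them to your own field and form:

<?php namespace ProcessWire;
// assumes $page has a FieldtypeComments field named "comments" and the form was submitted
$comment = new Comment();
$comment->text = $sanitizer->textarea($input->post('text'));
$comment->cite = $sanitizer->text($input->post('name'));
$comment->email = $sanitizer->email($input->post('email'));
$page->comments->add($comment);
$page->save('comments'); // save just the comments field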
  6. @artfulrobot Comments are a kind of turn-key fieldtype focused just on comments (like you might use in a blog) or reviews, and their purpose is pretty specific and different from that of a page. So the point is more to be focused on solving a specific thing than to be flexible in the way that pages are. And actually, this is the purpose of most Fieldtypes. If what you are needing is the ability to build your own custom type then that's what pages, templates and fields are for, and maybe that's what you need, I'm not sure. But if you are needing specifically comments, then FieldtypeComments is also quite flexible for comment-specific needs. When it comes to custom data that you want to store along with the comment, there is the meta() method which you might find useful: https://processwire.com/api/ref/comment/meta/. This is what I use for storing photos and other Q&A with comments/reviews, like you see here: https://www.biketours.com/reviews/
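As a small, hedged example of that meta() method, you can attach and retrieve arbitrary extra data on a comment (the key name and values here are made up for illustration):

// store custom data with a comment, then read it back later
$comment->meta('photos', ['a.jpg', 'b.jpg']); // set
$photos = $comment->meta('photos'); // get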
  7. We've been running pretty much the same jQuery and jQuery UI versions for the last 10 years or more. I haven't really seen much urgency to upgrade because the versions we have work quite well, and I wasn't so enthusiastic about the amount of work and potential headaches the upgrade might entail. Over time there have been a few security issues found in the jQuery library, which I've always kept an eye on, but they weren't ever things that affected our usage or caused any concern here. The biggest hangup I had was just that upgrading meant also updating a lot of code that uses jQuery, since many of the changes to the library are not compatible with code written for earlier versions. (Newer versions of jQuery have a slightly less convenient API than earlier versions.) I place more value on stability than on having new versions of things. But it's always been in the back of my mind that sooner or later it would be nice to get these libraries upgraded, for many reasons. After all, newer means better and faster, right? Well, not always, but that's been the theme in jQuery at least: newer versions of the library have some performance benefits over older versions. For a while now, ProcessWire has been using the newer jQuery version only when $config->debug = 'dev'; and I've been testing that out for quite a while (maybe a year?). This week we upgraded our "main" core jQuery version from 1.8.3 to the last available 1.x release, 1.12.4 (4 years newer), which is the one I've been testing. We also upgraded our "dev" jQuery version from 1.12.4 to 3.6.4, which is the newest available version, released by jQuery last month (March 8, 2023). In addition, our jQuery UI "dev" version is now updated to the newest available version, 1.13.2. After a while, these "dev" versions will become our main versions, but likely not before the next main/master version. While the core seemed to work fine as-is with the newer jQuery (1.12.4), the newest versions of jQuery (3.6.4) and jQuery UI (1.13.2) required quite a few JS file updates to support them, and that's primarily what you'll see in the commit log this week. If you'd like to test the newest versions of these libraries in the ProcessWire admin (in a dev environment), edit your /site/config.php file and set: $config->debug = 'dev'; When you do that, it will also load the jQuery migrate library with logging ON, meaning the JavaScript console will contain messages about things that need to be updated. There's still work to do in the core here, so if you enable 'dev' mode then chances are you'll see some messages about things in the admin too. The "dev" debug mode also makes it use the newest jQuery UI library. Keep an eye out for any visual glitches or any UI things that don't work. For instance, I found that when using the newest jQuery UI version, the image resize/crop tool wasn't working quite right, though I hope to have that figured out soon. Chances are there may be other examples like that when using the 'dev' debug mode, so please let me know if you come across any.
If you are a module author, your module uses jQuery, and you want to make sure it's working well with the new main core version (1.12.4), you can also enable jQuery migrate verbose messages in your javascript console by setting the following two in your /site/config.php: $config->debug = true; $config->advanced = true; I've found that code updated for jQuery 3.6.4 seems to be backwards compatible with 1.12.4, so maybe just using the $config->debug = 'dev'; option is a good bet when testing, but I wanted to mention that both options are available (see the snippets below). I'll be continuing to update our core .js files for 3.6.4 and jQuery UI 1.13.2, and next week I'll likely update some of our 3rd party jQuery libraries, such as the TableSorter library and others. Also, I've not forgotten about pulling InputfieldTinyMCE into the core; that'll likely be in the next version, 3.0.216. Thanks for reading and have a great weekend!
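For convenience, here are the two testing configurations from the post above as they would appear in /site/config.php (these are the same settings mentioned in the text, nothing new):

// test the newest jQuery (3.6.4) and jQuery UI (1.13.2) in the admin,
// with the jQuery migrate library and its logging ON (dev environments only)
$config->debug = 'dev';

// or, for module authors: stay on the main core jQuery (1.12.4) but get
// verbose jQuery migrate messages in the javascript console
$config->debug = true;
$config->advanced = true;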
  8. There are several updates on the dev branch this week (commit log), including issue fixes, feature additions and minor class improvements. One of the updates I'd planned to add this week was moving InputfieldTinyMCE into the core. However, I noticed that TinyMCE was up to version 6.4.1 now and we were still running 6.2.0, so I decided instead to upgrade ours to the latest and test it out for another week in its own repository. If all continues to work well, I'll likely commit it to the core in 3.0.215. If you have a chance to test the latest version of InputfieldTinyMCE, please do, and open an issue report if you run into any trouble. Last week the Wire Request Blocker module was released in the ProDevTools board, and this week we have version 2, which includes several new additions: support for blocking groups; configurable settings for an immediate block (rather than just a strike) for URLs and user agents; support for using RequestBlocker in other applications (like we use it here in IP.Board); a feature where you can manually test URLs or user agent strings to see how they match your rules; a configuration setting so you can choose whether or not to use a log file; and a section in the docs on how to block URLs from your .htaccess file. As I wrote this post, the processwire.com site is getting hounded with dozens of IPs trying to locate backup or database zip/rar/tar/gz files, using every possible combination of filenames and extensions you can think of, including those that include the term "processwire". Remember to never leave backup files or DB dump files accessible by URL lying around on your server, because they will eventually get found. Adding these rules (below) to WireRequestBlocker's URL matching rules seems to have mostly stopped those DB/backup hunting bots:

/ba=/backups/|/backup/|/bak/|/back/
.txt=credentials.txt|backup.txt|password.txt|passwords.txt
.sql=.sql.gz|.sql.tar|backup.sql|dump.sql|db.sql|database.sql|mysql.sql|.com.sql
.tar=.tar.gz|.tar.sql|dump.tar|backup.tar|bak.tar|website.tar|backup.tar|www.tar
.zip=backup.zip|bak.zip|.com.zip|well-known.zip|index.zip|public_html.zip|website.zip|dump.zip|wallet.zip|application.zip
.rar=bak.rar|website.rar|backup.rar|www.rar
.gz=website.gz|bak.gz|backup.gz|.com.gz
/old/

WireRequestBlocker only knows its rules and doesn't know who's real and who's a bot, so be careful not to hit URLs containing those strings on this site or it might hit you with nothing but 403's for a few hours. Next week is Spring Break here, so I'll likely be on a reduced schedule with kids home from school. Thanks for reading, have a great weekend!
  9. We've got just a few core updates on the dev branch this week, but next week we're looking at finally merging in the InputfieldTinyMCE module! This week I also wrapped up the WireRequestBlocker module that was mentioned in last week's post, and the v1 beta is now posted in the ProDevTools download thread. I've been running it here on processwire.com this week and it's been doing a good job of keeping out the vulnerability scanners and bots. For more details on this new module, please see the new Wire Request Blocker page that I just posted. Thanks and have a great weekend!
  10. On this site, I don't really find the spambots or SEO bots to be much of an issue, so I mostly ignore them unless they get too aggressive. It's instead the vulnerability scanners that tend to be the issue here. They are fine when they are throttled. But when they are unthrottled (as is usually the case), they eat up a lot of resources. Here's just one basic example: a vulnerability scanner might send through thousands (or tens of thousands) of URL variations looking for SQL files that it can grab, with dozens of different names each, like db.sql, database.sql, backup.sql, [domain].sql, database-[domain].sql, db-[domain].sql, [domain]-db.sql, and so on and on. Then add all the extension variations (.sql, .sql.gz, .sql.tar, .sql.tar.gz), and then add every URL with a trailing slash in the site as the prefix path for every check. So just a scan for SQL files in the open might account for tens of thousands of requests. And it'll try to do them all in a very short period of time, making a server like ours scale to meet the demand. Yet this is just one vulnerability check out of thousands that it'll do. Once a vulnerability scanner gets started, it'll run for potentially days. But I usually block them well before that. Once I get an email from AWS about things scaling, I watch the logs pretty closely and then start blocking IPs. But the goal is to have the module just block them automatically. What the module does is let you define suspicious patterns in GET or POST requests, or user agent strings (and it comes with several patterns to start). For example, you might have patterns to match things like wp-login.php, those SQL request variations mentioned above, requests for .py, .cfm, .rb or .exe files or others that you don't use on the server, requests containing SQL commands in the query string... these are just obvious examples. Then it lets you define a number of strikes till the IP is out. For every pattern match, the IP gets a strike. So if I set it to "3 strikes and you are out", then once it gets 3 pattern matches, the IP is blocked for a period of time, also defined with the module. If additional strikes occur while an IP is blocked, the block time gets reset so it starts over, ensuring the IP is always blocked for that set amount of time from the last strike.
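To illustrate the strike idea described above, here is a minimal, hypothetical sketch in plain ProcessWire API terms. This is not WireRequestBlocker's actual API or rule set; the pattern list, threshold and expiration are made-up values for the example:

<?php namespace ProcessWire;
// e.g. in /site/init.php — purely illustrative strike counter using WireCache
$patterns = ['wp-login.php', 'backup.sql', '.sql.gz', 'dump.tar']; // made-up patterns
$ip = $_SERVER['REMOTE_ADDR'];
$uri = strtolower($_SERVER['REQUEST_URI']);

// if this IP has already struck out, block the request
if((int) $cache->get("strikes-$ip") >= 3) {
    http_response_code(403);
    exit;
}

// otherwise look for a suspicious pattern and record a strike on a match
foreach($patterns as $pattern) {
    if(strpos($uri, $pattern) === false) continue;
    $strikes = (int) $cache->get("strikes-$ip") + 1;
    $cache->save("strikes-$ip", $strikes, 86400); // expiration resets with each new strike
    break;
}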
  11. This week ProcessWire 3.0.214 is on the dev branch. Relative to 3.0.213, this version has 16 new commits, which include the addition of 3 new pull requests, 6 issue fixes, a new WireNumberTools utility class, and improvements to various other classes. A new $files->size($path) method was added, which returns the total size of the given file or directory. When given a directory, it returns the size of all files in that directory, recursively. Improvements were also made to ProcessWire's log classes (WireLog and FileLog) with new methods for deleting or pruning all log files at once. This version also fixes an issue with the front-end page editor (PageFrontEdit) when used with InputfieldTinyMCE. For more details on these and other updates, see the dev branch commit log. Something else I've been working on this weekend is a vulnerability scanner blocker and throttler. I don't know if this is an issue for every site, or if it's because this is an open source project site, but we seem to get a lot of vulnerability scanner bots hitting the site. Sometimes they hit the site pretty hard (with hundreds of thousands of requests) and our AWS cluster servers and databases must scale to meet the demand, using more resources, and thus increasing cost. This is annoying, having to scale for a hyperactive vulnerability scanner rather than real traffic. And it always seems to happen in the middle of the night, when I'm not nearby to manually block it. So I'm working on a module that detects vulnerability scanner traffic patterns and then blocks or throttles requests from their IPs automatically. Once I've got it functioning smoothly here, I'll also plan to add it to the ProDevTools board download thread in case it's useful to anyone else. Thanks for reading and have a great weekend!
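As a quick, hedged example of the new $files->size() method described above (assuming it behaves as summarized, returning a byte count for a file or, recursively, for a directory):

// total size of everything in this site's /site/assets/files/ directory
$bytes = $files->size($config->paths->files);
echo wireBytesStr($bytes); // e.g. "1.2 GB"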
  12. @wbmnfktr I checked our list and don't see your email on it. So next I checked the list activity log and see that we got 3 bounces from your email about 2 years ago. After 3 bounces it removes you from the list, as Mailgun doesn't like it if we keep sending to an address that bounces. I think it can be solved just by re-subscribing. Please let me know if you find it doesn't work. Thanks.
  13. This week the focus was on core updates and we have a quality mixture of minor issue fixes, pull request additions, and other improvements in ProcessWire on the dev branch. My favorite addition was from a PR by @matjazp that makes improvements to ProcessWire's pagination module, MarkupPagerNav. See more updates and PRs in the dev branch commit log as well. There's still more to add before we bump the version to 3.0.214, so stay tuned for that next week. By the way, if you've recently launched any new sites in ProcessWire, please add them to our sites directory. I think most of us are already subscribed to the ProcessWire Weekly email, but just in case you aren't, you can subscribe here. The email content is from weekly.pw written by @teppo and is great to read, highly recommended! Thanks and have a great weekend!
  14. @elabx That seems like a major performance improvement (looking at the ms time at the bottom). And maybe it's about right. The only thing I'd mention is that the left join/null queries aren't really necessary for your Fieldtype since you are matching dates, and you can use field_name.count=0 to match non-presence of rows. So you may want to put in your own getMatchQuery(), like my earlier example, as that might optimize it further.
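For instance, a selector along these lines (the "event" template name is hypothetical; the field name is the one from this thread) matches pages with no rows in the field, without any LEFT JOIN/null logic:

// find pages whose event_recurring_dates field has no rows at all
$empty = $pages->find("template=event, event_recurring_dates.count=0");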
  15. @elabx First thing to check is that you've got an index on your `data` column for that field. If not, then execute this query:

ALTER TABLE field_event_recurring_dates ADD INDEX data (data)

I'm also wondering about the other indexes on your Fieldtype. You've got an "id" as the primary key, which is unusual for a Fieldtype. Usually the primary key is this for a FieldtypeMulti:

$schema['keys']['primary'] = 'PRIMARY KEY (pages_id, sort)';

So I'm guessing you may not have the index on pages_id and sort, which would definitely slow it down, potentially a lot. If you don't need the "id" then I would probably drop it, since you don't really need it on a FieldtypeMulti. The only Fieldtype I've built that keeps an id is FieldtypeTable, and it uses the 'data' column for that id and then uses this in its getDatabaseSchema():

$schema['data'] = 'INT UNSIGNED NOT NULL AUTO_INCREMENT';
$schema['keys']['primary'] = 'PRIMARY KEY (data)';
$schema['keys']['pages_id'] = 'UNIQUE (pages_id, sort)';
unset($schema['keys']['data']);

Most likely the above is the primary reason for the bottleneck. But another, more general reason your query may be slow is that the $pages->find() selector you are using doesn't filter by anything other than your event_recurring_dates field. In a real use case, you'd usually at least have template(s) or a parent in that selector, or you'd be using children(), etc. Any of these should improve the performance. Without any filter, you are asking it to query all pages on the site, and if it's a big site, that's going to be potentially slow. The reason for the LEFT JOIN is that without it you can only match rows that exist in the database. You have an operator "<=" that can match empty or NULL values. Since you also have an operator ">=" that can't match empty values, the left join of course isn't necessary, but each part of the selector is calculated on its own (independent calls to getMatchQuery). You are getting the fallback Fieldtype and PageFinder logic since your Fieldtype doesn't provide its own getMatchQuery(). This is a case where you may find it beneficial to have your own getMatchQuery() method. The getMatchQuery() in FieldtypeDatetime might also be a good one to look at. In your case, since your Fieldtype is just matching dates, it may be that you don't need a left join at all, since you might never need to match null (non-existing) rows. So you could probably get by with a pretty simple getMatchQuery(). Maybe something like this (this is assuming 'data' is the only column you use, otherwise replace 'data' with $subfield):

public function getMatchQuery($query, $table, $subfield, $operator, $value) {
  if($subfield === 'count') {
    return parent::getMatchQuery($query, $table, $subfield, $operator, $value);
  }
  // limit to operators: =, !=, >, >=, <, <= (exclude things like %=, *=, etc.)
  if(!$this->wire()->database->isOperator($operator)) {
    throw new WireException('You can only use DB-native operators here');
  }
  if(empty($value)) {
    // empty value, which we'll let FieldtypeMulti handle
    if(in_array($operator, [ '=', '<', '<=' ])) {
      // match non-presence of rows
      return parent::getMatchQuery($query, $table, 'count', '=', 0);
    } else {
      // match presence of rows
      return parent::getMatchQuery($query, $table, 'count', '>', 0);
    }
  }
  // convert value to ISO-8601 and create the WHERE condition
  if(!ctype_digit("$value")) $value = strtotime($value);
  $value = date('Y-m-d H:i:s', (int) $value);
  $query->where("$table.data{$operator}?", $value);
  return $query;
}
  16. It's crazy sometimes how quickly the work week goes by. On Monday I started working on a Stripe Checkout integration for a client (using the FormBuilderProcessorStripe module), and somehow today I'm still working on it, and it feels like only a day has passed. This particular integration is a little more complicated than others I've worked on. The user makes a few selections that determine the final price, and when they submit the form, it has to authorize (but not yet capture) the amount due, so that the money is basically in a holding state. Then it has to send a notification to another company asking them to approve or deny the request. If they approve it, then it captures those funds through Stripe. Or if they deny it, then it releases the hold on those funds. It's also connected to multiple Stripe accounts in different currencies, and it has to use whichever one corresponds with the transaction details. In this particular form, some of the purchases also involve a 3rd party web service to confirm availability. And there's more to it as well, but I'll leave it at that... it's just a lot of moving parts, so I guess that's why I haven't done anything this week other than work on that. But the good news is that much of it has been added to the FormBuilderProcessorStripe module, so that the next time this need comes up for you or me, hopefully it won't take so much time. Here are a few of the things that have been added to FormBuilderProcessorStripe: Previously you could just accept a payment. Now you also have the option to set up a separate authorization and capture. And you can capture from a newly added API method, from the FormBuilder entries screen, or from your Stripe dashboard. On the API side, all you need to complete the capture is the ID of the payment (called the payment intent ID), which is saved with the form entry. The capture typically must be done within 7 days of the authorization. Doing an authorization (and later capture) is preferable to a charge when you think there's a reasonable chance that it'll need to be undone. One reason is that an authorization costs nothing, whereas a charge (or capture) does, regardless of whether it is refunded. The module also has a new option to create a Customer in Stripe that you can charge later. This is different from authorization/capture in that creating a customer doesn't authorize any particular transaction or funds, but rather saves their payment info in Stripe, enabling you to charge it anytime later. This is useful for many cases, but one would be where a customer wants to save their payment information with their account, so they don't have to re-enter it every time they make a purchase. Several new configuration options were also added. New public API methods were added to capture, cancel and refund payments. You can now pass any data from the form into Stripe metadata. You can now specify the Stripe API version that you want to use with the module. And you can now send email receipts, even if not enabled in Stripe. More transaction information is now shown when using "view entry" in the admin. Several new hooks were also added. Technically this update to FormBuilderProcessorStripe is ready to post now, but I'd like to do a little more testing first, so I'll be posting this module update in the FormBuilder board next week. I also have some updates to the InputfieldFormBuilderStripe module (which uses Stripe Elements rather than Stripe Checkout), and it may be updated at the same time, or shortly after.
No core updates this week, but hopefully I got enough client work done this week that I can really focus on the core next week. Thanks for reading and have a great weekend!
  17. This week in the blog we’ll look at the new WireSitemapXML module, a new WireNumberTools core class, and a new ability for Fieldtype modules to specify useful ready-to-use configurations when creating new fields— https://processwire.com/blog/posts/pw-3.0.213/
  18. @cb2004 I've got v1 ready and am going to post in ProDevTools board this week.
  19. @qubism I wasn't sure if this particular feature was used. Sounds like it is -- I'll add it to TinyMCE.
  20. This week we have a new core version available on the dev branch. Relative to the previous version, 3.0.212 contains 32 commits, 16 issue fixes and 3 PRs. Here's a list of what's been added: Improvements were made to InputfieldImage so that you can now add your own image actions with hooks; more details in this post and ProcessWire Weekly #455. Significant refactoring and improvements were made to the ProcessPageEditLink module. One of the most notable is that it will now retain HTML class attributes on links, even if not specifically configured with the ProcessPageEditLink module. This is useful if you've added custom classes for links in TinyMCE or CKEditor, but haven't also added them to the ProcessPageEditLink module configuration. Improvements were made to Text fields so that HTML Entity Encoder is now automatically added; this was covered in detail in last week's post and in ProcessWire Weekly #457. InputfieldEmail has been updated with optional support for IDN emails and UTF-8 local-part emails, per request. Two new $sanitizer methods have been added: htmlClass() and htmlClasses(). These sanitize HTML class attributes, and they're part of what enables the custom class attribute support added in the ProcessPageEditLink updates mentioned above. Added feature request #480 to support configurable file extensions for translatable files (beyond .php, .module and .inc). Added a new uploadName() method/property to Pagefile/Pageimage objects that returns the original unsanitized filename as it was uploaded. It applies only to files uploaded since this feature was added. Be aware the filename is unsanitized, so be careful with it. This was to partially answer feature request #56, and this solution was suggested by Bernhard Baumrock. Demonstrating the above, InputfieldFile now shows this original filename if you hover the file icon, and InputfieldImage shows it if you hover the current filename. HTTP requests that contain an accidental double slash in the URL now redirect rather than render the page. The PagesParents class was refactored so that it is now a lot faster in saving its data to the pages_parents table. This is very noticeable on a site with more than a million pages when changing the parent of existing pages, especially those having children; previously there was a [potentially very] long delay at large scale, now it is instant. More details on these updates, and additional updates, can be found in ProcessWire Weekly #455, #456 and #457. Thanks for reading and have a great weekend!
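As a small, hedged illustration of the new sanitizer methods mentioned above (assuming simple string-in/string-out behavior), they can be used to clean a class attribute before it reaches your markup:

// sanitize a user-supplied class attribute value before output
$class = $sanitizer->htmlClass($input->get('class'));
echo "<div class='$class'>...</div>";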
  21. I have to stop a day early to leave town for one of my daughter's gymnastics meets, so I'm going to save the core version bump for next week, after a few more updates. The most interesting core update this week is one suggested by Netcarver and Pete. It is to make the "HTML Entity Encoder" Textformatter option (for text fields) more foolproof, by making it harder to ignore. That's because this option is rather important for the quality assurance and security of your site's output. If you forget to enable it for one text field or another, then you allow for the potential of HTML in the output for that field, by anyone that can edit pages using that field. Most of the time when you aren't entity-encoding output, HTML is exactly what you want, such as with TinyMCE or CKEditor fields. HTML entity encoding is necessary when the field value isn't itself HTML, but will eventually be used in HTML output and needs to be valid for HTML output. Think of a "title" field, for example. For these cases, you want to be sure that characters like greater-than, less-than, ampersand and single/double quotes are properly encoded. Greater-than and less-than characters form HTML tags like <script>alert("gotcha!")</script>, ampersands begin entity sequences, and quotes are used to open and close HTML attribute values. By entity encoding all of these characters, we ensure they can't be used for malfeasance, scripts, XSS, defacement, etc. The worst-case scenario would be that you neglect to enable the entity encoding on a text field where you are allowing non-trusted user input, as that could open the door to such shenanigans. To make things more foolproof, ProcessWire now gets in your face a bit more about using the HTML Entity Encoder. Maybe it's a bit more annoying for more experienced users, but if you happen to be in a rush, it'll make sure you at least don't forget about it. Or maybe some less experienced developers might not know the importance of entity encoding in HTML, and this update helps them too. Here's what it does: It now enables the HTML Entity Encoder (Textformatter) for all newly created text fields (and derived field types). Previously it just suggested that you enable it, but let you decide whether or not it was appropriate; now, it errs more on the side of caution. Since the entity encoder is now automatically enabled for newly created text fields (in the admin), it seemed necessary to detect cases where the field configuration clearly indicates it's intended to allow HTML (by input type or content-type). Examples include textarea fields configured to use TinyMCE or CKEditor, or any text field with a content-type set to HTML. When these cases are detected, it advises the user to remove the HTML Entity Encoder from the selected Textformatters. If editing an existing text field (in Setup > Fields) that doesn't appear intended to use HTML (i.e. not TinyMCE or CKEditor and doesn't have its Content-Type set to HTML), it will now test all the Textformatters you have selected (if any) and see how they handle HTML. If they leave HTML in place, or you have no Textformatters selected, it will provide a warning, letting you know that HTML is allowed, and leave it up to you to decide whether that's what you want. Note that these additions are only for fields created in the admin. Fields created from the API make no such assumptions and work the same as before. That's it for this week. More updates and hopefully a version bump next week. Have a great weekend!
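For context, here is a minimal sketch of the equivalent manual step at output time, using $sanitizer->entities() on a plain text field (assuming no entity-encoding Textformatter is already applied to the field at runtime):

// entity-encode a title field before placing it in HTML output
echo "<h1>" . $sanitizer->entities($page->getUnformatted('title')) . "</h1>";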
  22. @nbcommunication Yes, you are right, it's nice that it's working with JSON, and nice that it's well documented. These things already make it well above average. Despite the JSON being simple to consume, one thing I find a bit painful is the lack of granularity in the API. There are a lot of cases where I want to get one thing or another, but I have to get the entire 800kb structure and parse the one thing I need out of it. It seems like this just won't be feasible at some scale. When it comes to putting data into the system, you have to construct a large JSON object and manipulate it as a whole, rather than being able to insert or modify individual parts. Maybe this is standard for APIs like this, I don't know. It seems cumbersome at times.
  23. These last few weeks I've been working on integrating a ProcessWire installation with the Fareharbor API for a client. Other than the authentication part (which is as simple as it gets), I've found this API to be one of the more time-consuming ones to work with. It's not so much that the API is difficult to use as that it is just a time sink, taking a long time to reorganize the info it provides into something useful for our needs, and likewise taking a long time to prepare information to put back into it in the format it requires. My best guess is that it is an echo of an existing back-end API, projecting internals rather than tailoring a simpler public API to them. Perhaps it's an interface optimized for some internal legacy system rather than for the external consumers of it. Or perhaps it already is a lot simpler than what's behind it, and its interface has been carefully considered (even if it doesn't feel that way), who knows. To be fair, no API is perfect, and this particular API does provide a working and reliable interface to some pretty complex data, and an immense amount of power. It's good to work with lots of different APIs, from easy to painful, as it helps to clarify paths to take (and to avoid) when authoring new APIs. I ended up building an adaptor module in ProcessWire just to give this particular API a simpler interface that was more useful to the needs we had, and that is now saving us a lot of time. It reminded me of one reason why ProcessWire was built in the first place: to create a simple interface to things that are not-so-simple behind the scenes, and I think we've been pretty successful with that. We'll keep doing that as ProcessWire continues to mature, evolve and grow, as we always have. In terms of core updates, commits this week were similar to those from the last couple of weeks: a combination of issue fixes, a PR, feature requests and minor improvements. We are now 17 commits past 3.0.211, but I'm going to wait till next week before bumping the version to 3.0.212, as there's a little more I'd like to add first. Thanks for reading this update and I hope that you have a great weekend!
  24. This week we've got a few minor issue fixes and a couple of pull request additions on the dev branch. Pull request #251, thanks to @Jan Romero, added a download button to the thumbnail images in InputfieldImage. I wasn't sure we really needed that, but I really liked his thinking behind it, which was envisioning the ability to add more custom buttons/actions for images. So while I didn't specifically add the download button, I added the proposed system for adding any custom buttons, and then applied that same thinking to some other parts of InputfieldImage. And we'll talk about how to add that Download button here. First, let's look at how you might add your own download button, and note we're using this as just an example, as you might add any kind of button this way. A new hookable getImageThumbnailActions() method was added for this purpose. So here's how you might hook it (in /site/ready.php) to add a download button:

$wire->addHookAfter('InputfieldImage::getImageThumbnailActions', function(HookEvent $event) {
    $image = $event->arguments(0); // Pageimage
    $class = $event->arguments(3); // class to use on all returned actions
    $a = $event->return; // array
    $icon = wireIconMarkup('download');
    $a['download'] = "<a class='$class' href='$image->url' download>$icon</a>";
    $event->return = $a;
});

With that hook in place, a download icon appears when you hover a thumbnail image (or, in list mode, in the right corner next to the trash icon), and clicking it downloads the file to your computer. I was thinking it would be useful to also be able to add custom actions after you click the thumbnail and it shows the image edit features. So let's add a Download button there instead, by hooking the new getImageEditButtons() method:

$wire->addHookAfter('InputfieldImage::getImageEditButtons', function(HookEvent $event) {
    $image = $event->arguments(0); // Pageimage
    $class = $event->arguments(3); // class(es) to use on all returned actions
    $buttons = $event->return; // array, indexed by action name
    $icon = wireIconMarkup('download');
    $buttons['download'] = "<button class='$class'><a download href='$image->url'>$icon Download</a></button>";
    $event->return = $buttons;
});

The result is a new Download button that appears after the Variations button. We also have the Actions dropdown in that same image edit view. This is already hookable, but we've not had any good examples of it. In this case, you need two hooks: one to add the action to the <select> and another to handle the processing of the action when the page is saved. So in our next example, we'll demonstrate how to display verbose EXIF information about whatever image(s) the action was selected for.
In this first hook, we'll add the action to the Actions <select>:

// Example of adding a “Get EXIF data” action to the <select>
$wire->addHookAfter('InputfieldImage::getFileActions', function(HookEvent $event) {
    $image = $event->arguments(0); // Pageimage
    if($image->ext == 'jpg' || $image->ext == 'jpeg') {
        $actions = $event->return; // array
        $actions['exif'] = 'Get EXIF data';
        $event->return = $actions;
    }
});

And in this next hook, we'll handle the action, which gets called when the page editor form is submitted:

// Example of handling a “Get EXIF data” action
$wire->addHookAfter('InputfieldImage::processUnknownFileAction', function(HookEvent $event) {
    $image = $event->arguments(0);
    $action = $event->arguments(1);
    if($action === 'exif' && file_exists($image->filename)) {
        $exif = exif_read_data($image->filename);
        $event->warning([ "EXIF data for $image->name" => $exif ], 'icon-photo nogroup');
        $event->return = true;
    }
});

After you hit save, the EXIF data is shown for any images that had the action selected (the screenshot of that output was truncated, as it was about twice as big as the ones above). All the above code examples are also included in the phpdoc for each of the relevant hookable methods in the InputfieldImage module. For another recent useful addition, be sure to check out ProcessWire Weekly #454 (last week), which covered some new options available for the language translation functions like __text('hello'), where you can now tell it what kind of input type (and how many rows) to use in the admin translation interface, via inline PHP comments. Thanks for reading and I hope you have a great weekend!