Popular Content

Showing content with the highest reputation on 03/30/2021 in all areas

  1. So you are looking into querying data from an external service (Shopify?) that uses GraphQL? Simply put, GraphQL is a query language: a client sends a query (defined using the GraphQL language) to the server, which then responds with a GraphQL object. WireHttp is a class you can use to send HTTP requests from ProcessWire. You could send a GraphQL request with WireHttp, so no, GraphQL is not a replacement for WireHttp. I'm not saying that you should use it, but here's a very simple GraphQL client implementation: https://gist.github.com/dunglas/05d901cb7560d2667d999875322e690a. Here's an example of querying a GraphQL API with Guzzle (which, by the way, is something you could use as a replacement for WireHttp): https://dev.to/jakedohm_34/how-to-make-a-graphql-query-or-mutation-in-php-with-guzzle-359o. Or you could use something a bit more sophisticated, perhaps https://github.com/mghoneimy/php-graphql-client. I'm not an expert on this topic, so perhaps someone with more GraphQL expertise can chime in. Just thought I'd drop some pointers.
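To illustrate the point above, here's a minimal sketch of sending a GraphQL query with WireHttp. The endpoint URL and the query itself are made-up placeholders, and this assumes WireHttp posts a string body as-is once the Content-Type header is set; treat it as a starting point rather than a tested recipe:

```php
// Sketch: a GraphQL request is just an HTTP POST with a JSON body
// containing a "query" key (and optionally "variables").
$http = new WireHttp();
$http->setHeader('Content-Type', 'application/json');

// Hypothetical query against a hypothetical endpoint.
$payload = json_encode([
    'query' => '{ products(first: 3) { edges { node { title } } } }',
]);

$response = $http->post('https://example.com/api/graphql', $payload);

if ($response !== false) {
    $data = json_decode($response, true);
    // ... work with $data['data'] here
}
```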
    3 points
  2. To be honest I don't see a big reason to switch if your solution already works for you. Pros for the built-in approach:

- Can be enabled or disabled at any point via arguments passed to the module.
- SE is "aware" of the built-in grouping feature, so it's something that will likely keep working consistently and may also benefit from future updates. Not saying that yours won't keep working, though.
- I've gone through some extra hoops to only include tabs / buttons for the templates that actually have matches. Personally I dislike it when I click that "Articles" button, wait for the page to load... and then there are no results. I don't want to waste users' time.
- The "group_by" setting is automatically considered when returning results as a JSON feed: returned results are split into groups. (At least they should be; it's been a while since I last checked this one...)
- The native feature may eventually get proper "autoload" support, so that switching between categories / templates doesn't cause an additional delay. Again, this can be a bit tricky, and whether it's really "worth it" depends on the case.
- It's themeable, and I've tried to use markup that is somewhat accessible. To be honest there's likely more to be done in this regard.

Whether or not any of this matters for your use case is debatable. Again, I don't see any notable reason to switch if your solution works fine already. Actually I'm pretty sure that your solution is more straightforward and results in slightly better performance.

Not sure if I got this right, but SE has a couple of options that may be related. Both are under find_args:

```php
// Optional: values allowed for grouping.
'group_by_allow' => [],
// Optional: values not allowed for grouping.
'group_by_disallow' => [],
```

Basically if you add a list of templates to "group_by_allow", these are the only templates that will get their own tabs. Other templates are still included in the search, and thus appear under the "all" tab. The "group_by_disallow" option works the other way around: templates listed here are always excluded from grouping (they won't get their own tabs, but will still be found under the "all" section).
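For context, here's a hedged sketch of how those find_args options might be passed when calling the module; the template names are hypothetical placeholders, and the exact option nesting should be checked against the module's README:

```php
// Hypothetical usage sketch, not verified against the module's docs.
$searchEngine = $modules->get('SearchEngine');
$results = $searchEngine->find($input->get->q, [
    'find_args' => [
        // Only these templates get their own tabs:
        'group_by_allow' => ['article', 'news-item'],
        // These never get their own tabs (but still match under "all"):
        'group_by_disallow' => ['internal-page'],
    ],
]);
```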
    2 points
  3. Maybe someone else finds this helpful. I wanted to create comments (using the built-in comments fieldtype module) via the API, as I needed this for an easy migration from an existing site. I figured the following does the job:

```php
<?php
// get the page that you want to add comment(s) to; it contains a "comments" fieldtype
$p = $wire->pages->get("/guestbook/");

// create a new Comment object
$c = new Comment();

// fill the object with values
$c->text = "Hello World!";
$c->cite = "John Average";
$c->created = time(); // timestamp; to migrate an existing datetime, convert with strtotime("2011-04-09 15:14:51")
$c->email = "john@average.com";
$c->ip = "..."; // not needed (only for Akismet)
$c->user_agent = "..."; // not needed (only for Akismet)
$c->created_users_id = "..."; // not needed, automatically set by PW

// set status (Spam = -2; Pending = 0; Approved = 1)
$c->status = 1;

// add the new Comment object to the page's "mycomments" field
$p->mycomments->add($c);

// save page
$p->save();
```
    1 point
  4. Your approach seems fine to me. If there were a lot of templates then it would result in multiple queries, but for most use cases I'd assume the performance impact to be extremely small. On the other hand the approach I've taken (modifying the DatabaseQuerySelect object on the fly) should result in just a single query, but is also potentially more fragile... Thanks — this is now fixed in the latest release.
    1 point
  5. @teppo - on another note, I just got this logged error overnight:

    PHP Warning: preg_match(): Compilation failed: missing closing parenthesis at offset 62 in /site/modules/SearchEngine/lib/Renderer.php:388

It came from this query: https://ian.umces.edu/search/?q=Ferocactus+wislizeni+(Fishhook+Barrel+Cactus

It looks like someone was trying to find that exact cactus symbol by name, but left off the trailing closing parenthesis. Currently I am using:

    $query = $searchEngine->find($input->get->q, $findOptions);

as my search query. I assumed that SearchEngine would sanitize the input, but even using the selectorValue or text sanitizers doesn't help in this case, because they both allow parentheses. Do you think this is a situation that we should manage, or should SearchEngine take care of it?
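As a possible stopgap on the template side, one could drop parentheses from the query whenever they're unbalanced before handing it to find(). This is just a minimal sketch; the helper name is made up and not part of SearchEngine:

```php
<?php
// Hypothetical helper: if parentheses in the query are unbalanced,
// strip them all so a downstream preg_match() can't fail to compile.
function sanitizeSearchQuery(string $q): string {
    if (substr_count($q, '(') !== substr_count($q, ')')) {
        $q = str_replace(['(', ')'], '', $q);
    }
    return $q;
}

// The query from the log above loses its stray "(":
echo sanitizeSearchQuery('Ferocactus wislizeni (Fishhook Barrel Cactus');
// → Ferocactus wislizeni Fishhook Barrel Cactus
```

Balanced parentheses pass through untouched, so ordinary queries are unaffected.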
    1 point
  6. I have the same in mine, although perhaps your approach is more efficient. I went with this when outputting the tab buttons, using a ->has() to see if there are any results for the template/tab. I thought that, given the performance of PW's has(), this would be the best approach, but I didn't dive into it too much. I should go see what you did.

```php
<?php foreach($types as $name => $label):
    if($name != 'all' && $pages->has('search_index' . $findOptions['operator'] . $query->query . ', template=' . $name) === 0) continue;
?>
<a class="button small<?=((!isset($type) && $name == 'all') || (isset($type) && $name === $type) ? ' buttonactive' : '')?>" href="<?=$page->url?>?q=<?=$input->get->q?>&type=<?=$name?>"><?=$label?></a>
<?php endforeach; ?>
```

Those do sound like they would do the same thing. I think I'll set up a test with the module's core approach and see what happens. Thanks again.
    1 point
  7. Hey @Erik. Could you check which PHP version you're running? That error is likely related to nullable return types ("?Field"), and support for those was added in PHP 7.1. This module will not work on older PHP versions.
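For reference, a minimal illustration of the syntax in question (the function here is just a made-up example, not from the module):

```php
<?php
// "?string" means the function may return a string or null.
// This declaration is a parse error on PHP < 7.1, which is why
// a module using "?Field" return types fails to load there.
function findLabel(bool $found): ?string
{
    return $found ? 'label' : null;
}

var_dump(findLabel(true));  // string(5) "label"
var_dump(findLabel(false)); // NULL
```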
    1 point
  8. +1 in case it is a feature request.
    1 point
  9. Hey @bernhard - just resave Tracy's settings and this will go away. It only happens when updating from quite old versions.
    1 point
  10. Thanks @teppo - everything looks great now. Now I guess I need to figure out if there is any advantage to using the inbuilt grouping vs my approach. Obviously it handles all indexed templates automatically, but then mine gives the flexibility to include the results from some templates under "All" without giving them a dedicated tab, which can be handy in some cases. My question for you, then: is there any performance gain in using the inbuilt grouping? I haven't explored the code, but my instinct is that you're not really doing anything different to me, except for the big convenience factor of having this work so easily.
    1 point
  11. If anyone's looking for a shortcut to simplified/extended password reset features, there is a commercial extension module available.
    1 point
  12. This module is 8 years old and it still works beautifully. Kudos Soma!
    1 point
  13. @MrSnoozles @teppo This is not a limiting factor in scalability, at least. First off, at least here, the file-based assets are delivered by the CloudFront CDN, so they aren't part of the website traffic in the first place (other than to feed the CDN). If you wanted scalability then you'd likely want a CDN serving your assets whether using S3 or not. But a CDN isn't a necessary part of the equation in our setup either.

File systems can be replicated just like databases. That's how this site runs on a load balancer on any number of nodes. Requests that upload files are routed to node A (primary), but all other requests can hit any node that the load balancer decides to route them to. The other nodes are exact copies of the node A file system that update in real time. This is very similar to what's happening with the DB reader (read-only) and writer (read-write) connection I posted about above, where the writer would be node A and there can be any number of readers. Something like S3 doesn't enhance the scalability of any of this.

Implementing S3 as a storage option for PW is still part of the plan here, but more for the convenience and usefulness than scalability. You can already use S3 as a storage option in PW if you use one of the methods of mapping directories in your file system to it. But I'm looking to support it in PW more natively than that. It is admittedly more complex than the DB stuff mentioned above. For instance, we use PHP's PDO class that all DB requests go through, so intercepting and routing them is relatively simple. Whereas in PHP, there is no PDO-type class that the file system API is built around; instead it is dozens of different procedural functions (which is just fine, until you need to change their behavior). In addition, calls to S3 are more expensive than a file system access, so doing something as simple as getting an image's dimensions is no longer a matter of a simple PHP getimagesize() call. Instead, it's more like making an FTP connection somewhere, FTP'ing the file to your computer, getting the image dimensions, storing them somewhere else, then deleting the image. So meta data like image dimensions needs to be stored somewhere else. PW actually implemented this ability last year (meta data and stored image dimensions). So we've already been making small steps towards S3-type storage, but because the big picture is still pretty broad in scope to implement, it's more of a long term plan. Though maybe one of my clients will tell me they need it next week, in which case it'll become a short term plan.
    1 point
  14. Also interested in this. My impression is that offloading assets to Amazon S3 / Google Cloud Storage etc. is a relatively common need, and there's currently no bulletproof solution. Some third party modules have attempted this, but I'm not sure if any of them really solve all the related needs (image management etc.) — and while remote filesystems such as EFS are easier to work with, they also come with certain downsides (such as a higher price point). I seem to recall that Ryan mentioned something about this a while ago too.
    1 point
  15. This is a great feature for larger sites. Really excited about this. Speaking of scaling: I would imagine a limiting factor right now is that the uploaded assets are bound to the local drive. Is that a problem for your setup, and would processwire.com benefit if it was possible to store files on S3? And is there a post somewhere explaining your setup on AWS?
    1 point
  16. I registered to check it, but the confirmation mail didn't come. I wrote a Fieldtype module like this. I am using this private module for managing hotel room availability, room prices, etc. I made 2 videos for you, so you can check the usage. I used https://flatpickr.js.org/ Backend: Ekran Kaydı 2021-03-27 01.52.04.mov Frontend: Ekran Kaydı 2021-03-27 01.58.10.mov
    1 point
  17. Thanks for your thoughts @FireWire - I think I'll be having to switch to Lingvanex sooner rather than later, so there might be a PR coming your way. Let me know if you already have any thoughts on how you would like to implement different translation engines, so that I can hopefully do it in a way that you're happy with.
    1 point
  18. I have used it only once. If it is of any help, I can paste in some code snippets here from that site. But it isn't an infinite-scroll-only solution; it is bundled together with Masonry. http://joerg-hempel.com/archiv/

JS parts:

```javascript
// JS in every page that uses infinite scrolling
<script type='text/javascript'>
var gLoaded = 0;
var gMaxPages = 12;
var gTriggerPageThreshold = 100;
var gPaginationHistory = false; // switch on / off URI updating for pages: example.com/path/page2 | /page3 | /page4
var gContainer = '#albums';
var gItemSelector = 'article.album';
var gColumnWidth = 228; // 48
var gGutter = 12; // 12

$(document).ready(function() {
    // make it visible if JS is supported
    $('div.ias_msg').css('display', 'block');
    starteMasonry();
    starteIAS();
});
</script>
```

In an external JS file I have the functions for Masonry and IAS:

```javascript
function starteIAS() {
    jQuery.ias({
        container            : gContainer,
        item                 : gItemSelector,
        pagination           : '.pagination',
        next                 : 'a.next-album',
        loader               : "<img src='/site/templates/styles/images/loader.gif' />",
        thresholdMargin      : -9,
        triggerPageThreshold : gTriggerPageThreshold,
        history              : gPaginationHistory,
        noneleft             : "<div class='ias_msg noneleft'><p>no more items available</p></div>",
        onLoadItems          : function(items) {
            // HNLOG
            console.log('IAS onLoadItems (' + gTriggerPageThreshold + ')');
            var newElems = $(items).show().css({ opacity: 0 });
            newElems.imagesLoaded(function() {
                $('#albums').masonry('appended', newElems);
                newElems.animate({ opacity: 1 });
            });
            $('#albums').masonry('reloadItems');
            return true;
        }
    });
}

function starteMasonry() {
    var items = $(gContainer + ' ' + gItemSelector).show().css({ opacity: 0 });
    $(gContainer).masonry({
        itemSelector : gItemSelector,
        isOriginLeft : true,
        isOriginTop  : true,
        columnWidth  : gColumnWidth,
        gutter       : gGutter,
        isAnimated   : true, // !Modernizr.csstransitions
        isFitWidth   : true
    });
    items.animate({ opacity: 1 });
}
```

The PHP looks like:

```php
// calculate current amounts
$limit = 100;
$max   = $pages->find("$pSelector, limit=2")->getTotal();
$cur   = $input->pageNum;
$next  = ($cur * $limit) < $max ? $cur + 1 : 1;
$prev  = $cur > 1 ? $cur - 1 : 0;
$start = $cur < 2 ? '0' : strval($limit * ($cur - 1));

// echo prev-next links for pagination (only next is required, prev isn't used !!)
$pagination = "<div style='display:none' class='pagination'>"; // hide it !!
//if($prev > 0) $pagination .= " <a href='{$archivURL}page{$prev}'>prev</a> |";
if($next > 1) {
    if(isset($nextPageUrl)) {
        $pagination .= " <a class='next-album' href='{$nextPageUrl}page{$next}'>next</a> ";
    } else {
        $pagination .= " <a class='next-album' href='{$archivURL}page{$next}'>next</a> ";
    }
}
$pagination .= "</div>\n";
echo $pagination;

// note about scrolling
echo "<div class='ias_msg scrolldown" . ($cur) . "'><p>scroll down to get more items</p></div>";

// get PageArray
$albums = $pages->find("$pSelector, limit=$limit, start=$start, sort=sort");

// output albums with thumbnail and infos
foreach($albums as $album) {
    // ... render items
}
```

Using IAS, you don't even need to check whether it's an ajax request or not: just send the whole page, with header and body. The script knows the selectors, picks up the needed content, and adds it to the DOM after the last item rendered from the previous request. Easy!
    1 point