
Leaderboard

Popular Content

Showing content with the highest reputation on 04/14/2021 in all areas

  1. @AndZyk Alright, I had some fun with it. Here's an improved script for the asset export, which can handle nested repeater and repeater matrix fields (a completed sketch follows below):

        /**
         * Get a flat array of all images on the given page, even in nested repeater / repeater matrix fields.
         *
         * @param Page $page The page to get the images from.
         * @return Pageimage[] Array of Pageimage objects.
         */
        function getImages(Page $page): WireArray {
            $images = new WireArray();
            $fields = $page->fields->each('name');
            foreach ($fields as $field) {
                $value = $page->get($field);
                $type = $page->fields->{$field} …
    3 points
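     Following up on the snippet above (which is cut off in this digest), here is a rough, untested sketch of how such a helper might continue. The function name collectImages and the checks against FieldtypeImage / FieldtypeRepeater are illustrative assumptions, not the original poster's code:

        /**
         * Sketch only: collect all images on a page, recursing into repeater
         * and repeater matrix items (which are pages themselves).
         */
        function collectImages(Page $page): WireArray {
            $images = new WireArray();
            foreach ($page->fields as $field) {
                $value = $page->get($field->name);
                if (!$value) continue;
                if ($field->type instanceof FieldtypeImage) {
                    // a single Pageimage (max files = 1) or a Pageimages array
                    $images->import($value instanceof Pageimage ? [$value] : $value);
                } elseif ($field->type instanceof FieldtypeRepeater) {
                    // FieldtypeRepeaterMatrix extends FieldtypeRepeater, so this covers both
                    foreach ($value as $item) {
                        $images->import(collectImages($item));
                    }
                }
            }
            return $images;
        }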
  2. 101aandd.com: a new site, mostly designed by the client and implemented in ProcessWire, concentrating on small, subtle animations.
    2 points
  3. Hi ProcessWire community, The company that I work for used a ProcessWire dev a few years ago to set up their website, but they are looking to have it updated, as it does not seem to be running as smoothly as they would like. In essence, they are looking for someone to fix some of the issues that have occurred, such as:
     - Solicitors' profiles disappearing and/or their format changing, as well as adding some pictures of them.
     - Improving the user-friendliness of the site, such as having a 'back button/home button' available whenever you have clicked into a certain part of the site.
     The …
    2 points
  4. Not a solution, but for an explanation of why query strings are not retained in automatic redirects, see Ryan's comments in these GitHub issues: https://github.com/processwire/processwire-issues/issues/636 https://github.com/processwire/processwire-issues/issues/77 https://github.com/processwire/processwire-issues/issues/1140
    2 points
  5. @fedeb Very interesting, thanks for sharing! 🙂 @ryan Wouldn't it be great to abstract all of those findings into the files API? Importing data from CSV is quite a common need, and thinking about all the pieces (like UTF-8 encoding, wrong delimiters, etc.) can be tedious and does not need to be 🙂

        $file = "/my/large/file.csv";
        $tpl = $templates->get('foo');
        $parent = $pages->get(123);
        $options = [
            'delimiter' => ',',
            'length' => 100,
            'firstLineArrayKeys' => true,
        ];
        while($line = $files->readCSV($file, $options)) {
            $p = $this->wire(new Page());
            $p->tem …
    2 points
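     For context, a plain-PHP sketch of roughly what such a $files->readCSV() helper could do under the hood. readCsvRows and the option handling below mirror the suggestion above but are illustrative only; this is not an existing ProcessWire API:

        /**
         * Sketch only: lazily read a CSV file row by row, optionally keying each
         * row by the values of the first line.
         */
        function readCsvRows(string $file, array $options = []): \Generator {
            $delimiter = $options['delimiter'] ?? ',';
            $length    = $options['length'] ?? 0; // 0 = no line-length limit
            $useHeader = $options['firstLineArrayKeys'] ?? false;

            $handle = fopen($file, 'rb');
            if ($handle === false) throw new \RuntimeException("Cannot open $file");

            $keys = $useHeader ? fgetcsv($handle, $length, $delimiter) : null;
            while (($row = fgetcsv($handle, $length, $delimiter)) !== false) {
                yield $keys ? array_combine($keys, $row) : $row;
            }
            fclose($handle);
        }

        // usage: foreach (readCsvRows('/my/large/file.csv', ['firstLineArrayKeys' => true]) as $row) { ... }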
  6. @ryan I'm afraid not: fgets() in PHP will return one string (a line), but fgetcsv() will return an indexed array. An array in PHP is not very friendly with big datasets because it is all kept in memory! The main difference is in memory usage. In the first case, all the data is in memory when you return the array; no matter what, if you "return" values, they will be saved in memory. In the second case you get a lazy approach, and you can iterate over the values without keeping all of them in memory. This is a great benefit when memory is a constraint compared to the s…
    2 points
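     A minimal illustration of the difference described above (hypothetical function names, just to contrast the two approaches):

        // Eager: the whole file ends up in memory as one big array.
        function readAllRows(string $file): array {
            $rows = [];
            $h = fopen($file, 'rb');
            while (($row = fgetcsv($h)) !== false) $rows[] = $row;
            fclose($h);
            return $rows;
        }

        // Lazy: only one row needs to be held in memory at a time.
        function streamRows(string $file): \Generator {
            $h = fopen($file, 'rb');
            while (($row = fgetcsv($h)) !== false) yield $row;
            fclose($h);
        }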
  7. @fedeb Glad that moving the $parent outside the loop helped there. The reason it helps is that after a $pages->save() there is an automatic $pages->uncacheAll(), so the auto-assigned parent from the template was having to be re-loaded on every iteration. By keeping your own copy loaded and assigning it yourself, you avoid that extra overhead in this case (see the sketch below). Avoid getting repeaters involved; I wouldn't even experiment with them here. That would at minimum triple the number of pages (assuming every protein page could have a repeater). Repeaters would be just fine if you were work…
    2 points
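     An illustrative sketch of the pattern being recommended here; the template name, parent path, and field names are made up for the example:

        // Load these once, outside the loop, so the automatic $pages->uncacheAll()
        // after each save() does not force them to be re-loaded on every iteration.
        $template = $templates->get('protein');
        $parent   = $pages->get('/proteins/');

        foreach ($rows as $row) {
            $p = new Page();
            $p->template = $template;
            $p->parent   = $parent;
            $p->name     = $row['name'];
            $p->title    = $row['name'];
            $p->save();
        }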
  8. I think you could try generators here while you are playing with CSV, e.g.:

        <?php
        function getRows($file) {
            $handle = fopen($file, 'rb');
            if ($handle === false) {
                throw new Exception('open file ' . $file . ' error');
            }
            while (feof($handle) === false) {
                yield fgetcsv($handle);
            }
            fclose($handle);
        }

        // allocate memory for only a single line of the CSV file;
        // the entire CSV file does not need to be read into memory
        $generator = getRows('../data/20_mil_data.csv');

        // foreach ($generator as $row) { print_r($row); }
        while ($generator->valid()) {
            print_r($generator->current());
            $generator->next();
        }
    2 points
  9. Unless I'm forgetting something, $pages->uncache($page); won't help here because $page is a newly created Page that wasn't loaded from the database, so it's not going to be cached either. Uncaching pages is potentially useful when iterating through large groups of existing pages. For instance, if you are rendering or exporting something large from the contents of existing pages, you might like to call $pages->uncacheAll() after getting through a thousand of them, to clear room for another paginated batch (see the sketch below). Though nowadays we have $pages->findMany() and $pages->findRaw(), so there are…
    2 points
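     A rough sketch of the paginated batch pattern described above (the selector and batch size are arbitrary):

        $batchSize = 1000;
        $start = 0;
        do {
            $batch = $pages->find("template=protein, start=$start, limit=$batchSize");
            foreach ($batch as $p) {
                // render or export something from $p here
            }
            $pages->uncacheAll(); // free memory before loading the next batch
            $start += $batchSize;
        } while ($batch->count());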
  10. Hi, that's where, once more, the beauty of PW comes in... something you can easily do: just create a hidden page, let's say "sliders" (it can use a template without a file); this page will have children using a template without a file too, a template to which you'll just need to associate the repeater used to create your slider elements. And this is where the magic comes in: Hanna Code! Create yours, something as easy as [[slider id=123]] or [[slider name="xxx"]], id being the id of the page containing the slider, name... well, guess 🙂 And wherever you want your user to be able to use them, …
    2 points
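     For illustration, a minimal sketch of what the PHP side of such a [[slider id=123]] Hanna Code could look like; the repeater field name slider_items and the markup are assumptions, not part of the tip above:

        // Hanna Code PHP snippet "slider": attributes such as id and name arrive as variables.
        $sliderPage = isset($id) ? $pages->get((int) $id) : $pages->get("parent=/sliders/, name=$name");
        if ($sliderPage->id) {
            echo "<div class='slider'>";
            foreach ($sliderPage->slider_items as $item) { // repeater items on the hidden page
                echo "<div class='slide'>{$item->title}</div>";
            }
            echo "</div>";
        }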
  11. Just to add, if the points from @ryan and @horst aren't enough (they should boost import times quite noticeably), you could try dropping the FULLTEXT keys on the relevant fields' tables before the import and recreating them afterwards (ALTER TABLE `field_fieldname` DROP KEY `data` / ALTER TABLE `field_fieldname` ADD FULLTEXT KEY `data` (`data`)). Finally, a big part of MySQL performance depends on server tuning. The default size of the InnoDB buffer pool (the part of RAM where MySQL holds data and indexes) is rather small at 128MB. If you have a dedicated database server, you can up that…
    2 points
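     As a sketch, the drop/recreate step could also be run from the API side around the import (field_fieldname is a placeholder; try this on a copy of the database first):

        // $database is ProcessWire's PDO wrapper
        $database->exec("ALTER TABLE `field_fieldname` DROP KEY `data`");

        // ... run the big import here ...

        $database->exec("ALTER TABLE `field_fieldname` ADD FULLTEXT KEY `data` (`data`)");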
  12. Glad you got it sorted. If it's not already, I think this should be posted as an issue on GitHub. We desperately need to start culling these weird inconsistencies that waste hours of time and make you go grey 🙂
    1 point
  13. I figured it was Tracy; I was just marvelling at how much easier it is to read than phpMyAdmin. I found a solution here: This works. It's odd behaviour that the Approve Now button in the notification email only works when comments are rendered through the render function. I can work with it though 😂 Thanks for your help.
    1 point
  14. You can look at the utils here: https://github.com/chartjs/Chart.js/blob/master/docs/scripts/utils.js. It seems like they're not really complex, but they likely make the documentation examples more terse.
    1 point
  15. @ryan for the time being the data (groupID, start, end, sequence) are not supposed to be queryable. Ideally groupID should be, because I would like to display all proteins belonging to a groupID on the group page, but I think I will use a workaround for this: I have a file for each group containing this information, which I plan to parse when loading the group page. Individual files have at most 1000 lines (proteins). This way I avoid querying 20+ million entries each time someone tries to access a particular group page. As you suggested, I will load each entry (groupID, start, end, sequence)…
    1 point
  16. @Hector Nguyen Functions can't be autoloaded in PHP. Two options to work around this (a sketch of the first follows below):
     - Put the QFramework\Function function in a class as a static method; the class can then be autoloaded.
     - Add all files containing functions to the autoload "files" list in your composer.json. This way those files will be included on every request.
    1 point
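     A quick sketch of the first option; the namespace, class, and method names are illustrative:

        <?php
        namespace QFramework;

        // A class can be autoloaded (e.g. via PSR-4), so wrapping the function
        // as a static method avoids having to include its file manually.
        class Util {
            public static function doSomething(string $value): string {
                return trim($value);
            }
        }

        // elsewhere: \QFramework\Util::doSomething(' hello ');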
  17. I've been working with ProcessWire for a while now, and I've noticed that using Composer to manage dependencies and autoload external libraries isn't as prevalent in ProcessWire development as in other areas of PHP programming. I started out by using the default setup recommended in this blog post. However, one major problem I have with this approach is that all external dependencies live in the webroot (the directory the server points to), which is unfavourable from a security standpoint and, in my opinion, just feels a bit messy. In this tutorial, I want to go through a quick setup of Comp…
    1 point
  18. Hi @eelkenet, I have set up a new test site today, and it looks like every possible setting is working as expected on the PW image generation side. So it seems I need some more information about your setup. Especially this:
     - what settings are you using for $config->imageSizerOptions?
     - what are the settings for $config->webpOptions?
     - how do you call the image URL in your template file(s): with $image->size()->webp->url, with $image->size()->url, or how?
     - do you use an individually passed options array there?
     - what are your settings in the .htacces…
    1 point
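     For reference, both of these are typically set in site/config.php; the keys below exist in ProcessWire's defaults, but the values are illustrative, not a recommendation:

        // site/config.php
        $config->imageSizerOptions = [
            'upscaling' => true,
            'cropping'  => true,
            'quality'   => 90,
            'webpAdd'   => true, // also create a webp copy alongside each variant
        ];

        $config->webpOptions = [
            'quality' => 90,
        ];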
  19. What do you want to achieve? Do you want to get all images for a project out of the system to use them externally? Or do you want to restructure how/where ProcessWire saves images in general? For the first case, you can write a little script that iterates recursively through all fields, including any repeater / repeater matrix sub-fields, to get an array of images (see the sketch below). Then you can use that to copy all the images to one folder (using $files->copy(), for example), get a list of filenames (see Pagefile::filename()), or do whatever you want with them, like exporting their metadata as JSON or anythi…
    1 point
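     Building on the first case, a small sketch of the export step. It assumes a collectImages() helper like the one sketched under item 1; the template name and target folder are arbitrary:

        $exportDir = $config->paths->assets . 'export/';
        $files->mkdir($exportDir); // make sure the target folder exists

        foreach ($pages->find("template=project") as $project) {
            foreach (collectImages($project) as $image) {
                // copy each original image file into one flat export folder
                $files->copy($image->filename, $exportDir . $image->basename);
            }
        }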
  20. Just a matter of changing the value in the field's settings JSON.
    1 point
  21. Awesome news. Hopefully it will account for revision A etc that you do with some modules as well. I always like to keep things current.
    1 point
  22. Hi, Thanks a lot for all the feedback. I did some additional tests based on all of the suggestions you gave me, and the results are already amazing!! Figure 1 shows @ryan's suggestions tested independently: 1. I created the $template variable outside the loop. 2. I created the $parent variable outside the loop. The boost in performance is surprising! Defining the $parent outside the loop made a huge difference (before, I didn't assign the parent explicitly; it was already defined in the template, so the assignment was automatic). 4. I also tried this suggestion ($page-…
    1 point
  23. Version 0.0.2 now on GitHub https://github.com/MetaTunes/ProcessDbMigrate This version more fully allows for different page ids in source and target systems. A meta value (idMap) maintains the mapping. This allows the replacement of links in RTE fields provided the relevant pages are all in the migration. Also, all existing image variants are migrated. EDIT: Now 0.0.3 fixes install problem and adds upgrade via modules -> refresh.
    1 point
  24. @markus_blue_tomato Great, glad to hear it's working well! @StanLindsey This would be very simple to add, I'll plan to add it this week. Question: would just an array of DB hosts be adequate, or would it need separate configuration (host plus db name, user, pass, port, etc.) for each of the readonly db hosts?
    1 point
  25. Thank you, Robin S! That was exactly what I was looking for 👍
    1 point
  26. AdminOnSteroids has an option in the PageListTweaks section: "Always show extra actions"
    1 point
  27. OK, this one is mind-blowing; it looks like a perfect upgrade from the solution I'm currently using. I'm getting by with a repeater and a modded version of FieldtypeSelectFile that basically lets you select a PHP file from a given content-blocks directory, assumes there's a PNG that goes with it, and renders a neat block selector. From there, it's just hiding and showing fields depending on the value of that selector. It's not the most elegant of solutions from the "setting up" perspective, but it's quite nice and comprehensive for the editor, and the PHP code on the frontend is super understandable. B…
    1 point
  28. Hey @cosmicsafari, not a bad question at all. The description field comes from a module config setting. By default the module is set up to look for a field called "summary", but you can change this to something else:

        $config->SearchEngine = [
            'render_args' => [
                'result_summary_field' => 'summary',
            ],
        ];

     My guess is that your pages don't have the summary field? You can use some other field instead (if there's a suitable field), or you could let the module auto-generate the description by setting the summary field as "_auto_desc"... though ple…
    1 point
  29. Hi all, This is likely a really stupid question, but how do I go about enabling the result descriptions and text highlighting? I have seen @teppo mention it and have seen there are methods in the codebase relating to it, but for the life of me I can't figure out how to enable it. My search results are only returning a page title and a URL at present, which I figured was the out-of-the-box default, but I'm not 100% sure.
    1 point
  30. Hello all, I have no idea why, but approval via email does not work in my case. Here are all the GET variables that are submitted by clicking the link in the email:

        code: gKB6jlhWTowUeahUNX6OWWvBYBxYf1D41I5LZb4ws1YsA73jmk7sQeOoU1QAy4L6f1IAnmaSKXRjINOtGFDKO92e10Y5IuTzmuHOwkGI8bWtcXaIGstDB_xzq9hhwvZx
        comment_success: approve
        field: comments
        page_id: 2006

     As you can see, all parameters are there. As far as I know, the file CommentNotifications.php is responsible for saving the new status "approved" after clicking the link, but in my case nothing changes and I do not…
    1 point
  31. You can use $this->animal:

        $this->animal = 'cat';
        $this->addHookAfter('Page::render', function($event) {
            bd($this->animal);
        });

     or you can also do this:

        $animal = 'cat';
        $this->addHookAfter('Page::render', function($event) use ($animal) {
            bd($animal);
        });
    1 point
  32. Hi johnnydoe! I adjusted my file permissions, but it still shows the same message!
    0 points