
Leaderboard

Popular Content

Showing content with the highest reputation on 04/14/2021 in all areas

  1. @AndZyk Alright, I had some fun with it. Here's an improved script for the asset export, which can handle nested repeater and repeater matrix fields:

     /**
      * Get a flat array of all images on the given page, even in nested repeater / repeater matrix fields.
      *
      * @var Page $page The page to get the images from.
      * @return WireArray Array of Pageimage objects.
      */
     function getImages(Page $page): WireArray {
         $images = new WireArray();
         $fields = $page->fields->each('name');
         foreach ($fields as $field) {
             $value = $page->get($field);
             if ($value instanceof Pageimage) {
                 $images->add($value);
             } elseif ($value instanceof Pageimages) {
                 $images->import($value->getArray());
             } elseif ($value instanceof RepeaterMatrixPageArray || $value instanceof RepeaterPageArray) {
                 foreach ($value as $repeaterPage) {
                     $images->import(getImages($repeaterPage));
                 }
             } elseif ($value instanceof RepeaterMatrixPage || $value instanceof RepeaterPage) {
                 $images->import(getImages($value));
             }
         }
         return $images;
     }

     $images = getImages($page);

     // create target folder for the page assets
     $targetDir = $config->paths->assets . 'export/' . $page->name . '/';
     $files->mkdir($targetDir, true);

     // copy all images to the target folder
     foreach ($images as $image) {
         $name = $image->basename();
         $target = $targetDir . $name;
         $src = $image->filename();
         $files->copy($src, $target);
     }

     This could be extended in any number of ways:

     - Handle file fields as well as images.
     - Handle any other fields and dump them in the target folder as JSON.
     - Handle native Page properties.

     At some point you've got a complete page export module. Might want to look into the Pages Export module if you don't want to write all that stuff yourself.
    3 points
  2. 101aandd.com New site, mostly designed by the client and implemented in ProcessWire. Mostly concentrating on small, subtle animations.
    2 points
  3. Hi ProcessWire community,

     The company that I work for used a ProcessWire dev a few years ago to set up their website, but they are looking to have it updated, as it seems to not be running as smoothly as they would have liked. In essence, they are looking for someone to amend some of the issues that have occurred, such as:

     - Solicitors' profiles disappearing and/or their format changing, as well as adding some pictures of them.
     - Updating the user-friendliness of things, such as having a 'back button/home button' whenever you have clicked on a certain part of the site.

     There are other issues that will need amending, which I would be more than happy to divulge when we find the right web developer! Here is the web link: http://www.thekhanpartnership.com/

     Thank you,
     Dan from The Khan Partnership
    2 points
  4. Not a solution, but for an explanation of why query strings are not retained in automatic redirects, see Ryan's comments in these GitHub issues:

     https://github.com/processwire/processwire-issues/issues/636
     https://github.com/processwire/processwire-issues/issues/77
     https://github.com/processwire/processwire-issues/issues/1140
    2 points
  5. @fedeb Very interesting, thanks for sharing!

     @ryan Wouldn't it be great to abstract all those findings and knowledge into the files API? Importing data from CSV is quite a common need, and thinking about all the pieces (like UTF-8 encoding, wrong delimiters etc.) can be tedious and doesn't need to be:

     $file = "/my/large/file.csv";
     $tpl = $templates->get('foo');
     $parent = $pages->get(123);
     $options = [
         'delimiter' => ',',
         'length' => 100,
         'firstLineArrayKeys' => true,
     ];
     while ($line = $files->readCSV($file, $options)) {
         $p = $this->wire(new Page());
         $p->template = $tpl;
         $p->parent = $parent;
         $p->title = $line['Foo Column'];
         $p->body = $line['Bar Column'];
         $p->save();
     }

     Support this request on GitHub: https://github.com/processwire/processwire-requests/issues/400
    2 points
  6. @ryan I'm afraid not: fgets() in PHP returns one string (a line), while fgetcsv() returns an indexed array. Arrays in PHP are not very friendly with big datasets, because they are still held in memory! The main difference is in memory usage. In the first case, all the data is in memory when you return the array: no matter what, if you return values, they will be kept in memory. In the second case you get a lazy approach, and you can iterate over the values without keeping all of them in memory. This is a great benefit when memory is constrained relative to the total size of your data. I've managed to import a 4GB CSV file on a shared VPS with 1GB of memory in seconds; the bottleneck of the "yield" method is now your MySQL.
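     A minimal sketch of the difference being described (function names are illustrative):

     // Eager: the whole CSV ends up in one in-memory array.
     function readAllRows(string $file): array {
         $rows = [];
         $handle = fopen($file, 'rb');
         while (($row = fgetcsv($handle)) !== false) {
             $rows[] = $row;
         }
         fclose($handle);
         return $rows;
     }

     // Lazy: only the current row is held in memory at any time.
     function readRowsLazily(string $file): \Generator {
         $handle = fopen($file, 'rb');
         while (($row = fgetcsv($handle)) !== false) {
             yield $row;
         }
         fclose($handle);
     }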
    2 points
  7. @fedeb Glad that moving the $parent outside the loop helped there. The reason it helps is that after a $pages->save() comes the automatic $pages->uncacheAll(), so the auto-assigned parent from the template was having to be re-loaded on every iteration. By keeping your own copy loaded and assigning it yourself, you are able to avoid that extra overhead in this case.

     Avoid getting repeaters involved. I wouldn't even experiment with them here. They will at minimum triple the number of pages (assuming every protein page could have a repeater). Repeaters would be just fine if you were working in the thousands-of-pages territory, but in the millions-of-pages territory, it's not going to be worth even attempting. Using a ProFields Table field would be the best alternative if you needed it to be queryable data. If you didn't need it to be queryable data (groupID, start, end, sequence), I would leave them as they are, space-separated in a plain textarea field; they can easily be parsed out at runtime so you can access them as properties of the page (a sketch of this follows below). If that suits your need, let me know and I'll get into how that can be done. When working at large scale, it's also always good to consider custom-building a Fieldtype module for the purpose (that's another topic, but we can get into it too).

     For your groupID: if the same groupID is referenced by multiple proteins, and there is more information about each "group" (other than just an ID), then I think it would make sense for it to be a Page reference field. What is the max number of groupID+start+end+sequence rows that a protein can have? If there is a natural limit and it's not large, then that would open up some new storage possibilities too.

     Another optimization you can make in your loop:

     $page->sort = $i;

     This prevents it from having to detect and auto-assign a sort value based on the quantity of children the parent page has. For the $page->name, if each page will have a unique "protein-name" then you might also consider using that rather than ("protein" . $i), as it will be more reflective of the page than a generic index number.
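     A minimal sketch of the runtime parsing described above, assuming a plain textarea field named protein_data holding one "groupID start end sequence" row per line (the field name and format are assumptions, not from the original post):

     // site/templates/protein.php: parse space-separated rows at runtime
     $rows = [];
     foreach (explode("\n", trim($page->protein_data)) as $line) {
         list($groupID, $start, $end, $sequence) = explode(' ', trim($line), 4);
         $rows[] = [
             'groupID'  => (int) $groupID,
             'start'    => (int) $start,
             'end'      => (int) $end,
             'sequence' => $sequence,
         ];
     }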
    2 points
  8. I think you could try generators here while you are playing with CSV. For e.g.:

     <?php

     function getRows($file) {
         $handle = fopen($file, 'rb');
         if ($handle === false) {
             throw new Exception('open file ' . $file . ' error');
         }
         while (feof($handle) === false) {
             $row = fgetcsv($handle);
             if ($row !== false) {
                 yield $row; // fgetcsv() returns false for unreadable or empty lines
             }
         }
         fclose($handle);
     }

     // allocates memory for only a single line of the CSV file;
     // the entire file does not need to be read into memory
     $generator = getRows('../data/20_mil_data.csv');

     // or simply: foreach ($generator as $row) { print_r($row); }
     while ($generator->valid()) {
         print_r($generator->current()); // $generator->current() is your $row
         // play with ProcessWire here
         $generator->next();
     }
     // Note: a generator cannot be rewound once iteration has started;
     // call getRows() again if you need a second pass.
     // http://php.net/manual/en/class.generator.php

     That's always my #1 choice while working with big datasets in PHP.
    2 points
  9. Unless I'm forgetting something, $pages->uncache($page); won't help here, because $page is a newly created Page that wasn't loaded from the database, so it's not going to be cached either. Uncaching pages is potentially useful when iterating through large groups of existing pages. For instance, if you are rendering or exporting something large from the contents of existing pages, you might like to $pages->uncacheAll() after getting through a thousand of them to clear room for another paginated batch. Though nowadays we have $pages->findMany() and $pages->findRaw(), so there are fewer instances where you would even need to use uncache or uncacheAll, if ever.

     ProcessWire actually does an uncacheAll() internally after saving a page already. This is necessary because changes to a page or additions/deletions to the page tree may affect other pages, and we don't want any potential for old cached data to appear in future $pages->find() or other operations. Just one example: if we called $parent->children() before a save, and then called it again after the save, we'd want our new page to be in the children rather than having it return the previously cached value. There are a lot of similar cases, so the safest bet is for PW to uncache the results of future page get/find operations after a save as the default behavior. So that's the way it's always done it.

     As far as I can tell from fedeb's example (and this is often the case with import operations), it may be better to tell PW to skip this "uncacheAll-after-save" behavior. That's because imports often involve Page reference fields, and you don't want PW to have to reload referenced pages after every save. So you could potentially reduce overhead by telling it not to uncache after save, i.e.

     $pages->save($page, ['uncacheAll' => false]);

     I'm not sure if fedeb's import involves loading of any other pages, whether for page reference fields or anything else. So it may not matter one way or the other here, but I wanted to mention it just in case.

     I know about ProcessWire tuning, but not about MySQL server tuning. When dealing with 20 million rows, that seems like getting into territory where optimizations to the DB configuration deserve a lot of focus, so I would bet that BitPoet's suggestions are going to make the most difference.
    2 points
  10. Hi, that's where the beauty of PW comes in, once more... something you can easily do: just create a hidden page, let's say "sliders" (it can use a template without a file). This page will have children using a template without a file too, a template to which you'll just need to associate the repeater used to create your slider elements. And this is where the magic comes in: Hanna Code! Create yours, something as easy as [[slider id=123]] or [[slider name="xxx"]], id being the id of the page containing the slider, name... well, guess. And where you want your user to be able to use them, just add TextformatterHannaCode to your CKEditor field; job done. It works a little like WP shortcodes, but with a little difference: it works. It lets you write and use the code you want without generating hundreds of html/js/css lines in the page, it won't be broken by some update, and it's fast as hell.

     I'll let you decide where and how to tell your user what code to use. Personally, I use kongondo's https://processwire.com/modules/fieldtype-runtime-markup/, which makes it easy to add an easily selectable piece of code inside a readonly input in the backend. Live example below, for a customer who wanted to be able to create carousels and use them anywhere in any page (it's in French, sorry, but you get the idea): there is a title and a simple repeater, and the user just has to add a paragraph in CKEditor, paste the code, and Hanna Code takes care of what's left to do.

     Hope it helps, have a nice day!
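     A minimal sketch of what the PHP side of such a [[slider id=123]] tag could look like. The repeater field "slides" and its subfields are assumptions for illustration; Hanna Code makes the tag's id attribute available as $id in a PHP tag:

     <?php namespace ProcessWire;
     // Hanna Code tag "slider", used in CKEditor as [[slider id=123]]
     $slider = $pages->get((int) $id);
     if ($slider->id && $slider->slides->count()) {
         echo "<div class='slider'>";
         foreach ($slider->slides as $slide) { // 'slides' is the assumed repeater field
             echo "<figure>";
             echo "<img src='{$slide->image->url}' alt=''>";
             echo "<figcaption>{$slide->caption}</figcaption>";
             echo "</figure>";
         }
         echo "</div>";
     }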
    2 points
  11. Just to add: if the points from @ryan and @horst aren't enough (they should boost import times quite noticeably), you could try dropping the FULLTEXT keys on the relevant fields' tables before the import and recreating them afterwards:

     ALTER TABLE `field_fieldname` DROP KEY `data`;
     ALTER TABLE `field_fieldname` ADD FULLTEXT KEY `data` (`data`);

     Finally, a big part of MySQL performance depends on server tuning. The default size for the InnoDB buffer pool (the part of RAM where MySQL holds data and indexes) is rather small at 128MB. If you have a dedicated database server, you can up that to 80% of physical memory to avoid unnecessary disk access.
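     A sketch of the corresponding server setting, assuming a dedicated database server with 8GB of RAM (the file location and exact size are illustrative and vary by distribution):

     # /etc/mysql/my.cnf
     [mysqld]
     # roughly 80% of physical memory on a dedicated DB server
     innodb_buffer_pool_size = 6G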
    2 points
  12. I've been working with ProcessWire for a while now, and I've noticed that using Composer to manage dependencies and autoload external libraries isn't as prevalent in ProcessWire development as in other areas of PHP programming. I started out by using the default setup recommended in this blog post. However, one major problem I have with this approach is that all external dependencies live in the webroot (the directory the server points to), which is unfavourable from a security standpoint and, in my opinion, just feels a bit messy.

     In this tutorial, I want to go through a quick setup of Composer and ProcessWire that keeps the dependencies, all custom-written code and other source material outside of the webroot, and makes full usage of the Composer autoloader. This setup is pretty basic, so this tutorial is probably more useful to beginners (this is why I'll also include some general information on Composer), but hopefully everyone can take something away from this for their personal workflow.

     Site structure after setup

     This is what the directory structure can look like after the setup:

     .
     ├── composer.json
     ├── composer.lock
     ├── node_modules
     │   └── ...
     ├── public
     │   ├── index.php
     │   ├── site
     │   ├── wire
     │   └── ...
     ├── package-lock.json
     ├── package.json
     ├── sass
     │   ├── main.scss
     │   ├── _variables.scss
     │   └── ...
     ├── src
     │   ├── ContentBag.php
     │   └── ...
     └── vendor
         ├── autoload.php
         ├── composer
         ├── league
         ├── symfony
         └── ...

     As mentioned, the main point of this setup is to keep all external libraries and all other custom source code and resources out of the webroot. That includes Composer's vendor folder, your node_modules and JavaScript source folder if you are compiling JavaScript with webpack or something similar and including external scripts via NPM, and your CSS preprocessor files if you are using SASS or LESS. In this setup, the public directory acts as the webroot (the directory that is used as the entry point by the server, DocumentRoot in the Apache configuration). So all other files and directories in the mysite folder aren't accessible over the web, even if something goes wrong. One caveat of this setup is that it's not possible to install ProcessWire modules through Composer using the PW Module Installer (see the blog post above), but that's just a minor inconvenience in my experience.

     Installation

     You'll need to have Composer installed on your system for this. Installation guides can be found on getcomposer.org.

     First, open up your shell and navigate to the mysite folder:

     $ cd /path/to/mysite/

     Now, we'll initialize a new Composer project:

     $ composer init

     The CLI will ask some questions about your project. Some hints if you are unsure how to answer the prompts:

     - Package names are in the format <vendor>/<project>, where vendor is your developer handle. I use my GitHub account, so I'll put moritzlost/mysite (all lowercase).
     - Project type is "project" if you are creating a website.
     - Author should be in the format Name <email>.
     - Minimum Stability: I prefer "stable"; this way you only get stable versions of dependencies.
     - License will be "proprietary" unless you plan on sharing your code under a FOSS license.
     - Answer no to the interactive dependencies prompts.

     This creates the composer.json file, which will be used to keep track of your dependencies.
     For now, you only need to run the composer install command to initialize the vendor directory and the autoloader:

     $ composer install

     Now it's time to download and install ProcessWire into the public directory:

     $ git clone https://github.com/processwire/processwire public

     If you don't use git, you can also download ProcessWire manually. I like to clean up the directory after that:

     $ cd public
     $ rm -r .git .gitattributes .gitignore CONTRIBUTING.md LICENSE.TXT README.md

     Now, set up your development server to point to the /path/to/mysite/public/ directory (mind the public/ at the end!) and install ProcessWire normally.

     Including & using the autoloader

     With ProcessWire installed, we need to include the Composer autoloader. If you check ProcessWire's index.php file, you'll see that it tries to include the autoloader if present. However, this assumes the vendor folder is inside the webroot, so it won't work in our case. One good place to include the autoloader is a site hook file. We need the autoloader as early as possible, so we'll use init.php. EDIT: As @horst pointed out, it's much better to put this code inside the config.php file instead, as the autoloader will be included much earlier:

     // public/site/config.php
     <?php namespace ProcessWire;
     require '../../vendor/autoload.php';

     (The following caveat doesn't apply when including the autoloader in the config file.) Using init.php has one caveat: since that file is executed by ProcessWire after all modules have had their init methods called, the autoloader will not be available in those. I haven't come across a case where I needed it this early so far; however, if you really need to include the autoloader earlier than that, you could just edit the lines in the index.php file linked above to include the correct autoloader path. In this case, make sure not to overwrite this when you update the core!

     Now we can finally include external libraries and use them in our code without hassle! I'll give you an example. For one project, I needed to parse URLs and check some properties of the path, host, etc. I could use parse_url, however that has a couple of downsides (specifically, it doesn't throw exceptions, but just fails silently). Since I didn't want to write a huge error-prone regex myself, I looked for a package that would help me out. I decided to use this URI parser, since it's included in the PHP League directory, which generally stands for high quality.

     First, install the dependency (from the project root, the folder your composer.json file lives in):

     $ composer require league/uri-parser

     This will download the package into your vendor directory and refresh the autoloader. Now you can just use the package in your own code, and Composer will autoload the required class files:

     // public/site/templates/basic-page.php
     <?php namespace ProcessWire;
     use League\Uri\Parser;
     // ...
     if ($url = $page->get('url')) {
         $parser = new Parser();
         $parsed_url = $parser->parse($url);
         // do stuff with $parsed_url ...
     }

     Wiring up custom classes and code

     Another topic that I find really useful but often gets overlooked in Composer tutorials is the ability to wire up your own namespace to a folder. So if you want to write some object-oriented code outside of your template files, this gives you an easy way to autoload those classes using Composer as well. If you look at the tree above, you'll see there's a src/ directory inside the project root, and a ContentBag.php file inside.
     I want to connect classes in this directory with a custom namespace to be able to have them autoloaded when I use them in my templates. To do this, you need to edit your composer.json file:

     {
         "name": "moritzlost/mysite",
         "type": "project",
         "license": "proprietary",
         "authors": [
             {
                 "name": "Moritz L'Hoest",
                 "email": "info@herebedragons.world"
             }
         ],
         "minimum-stability": "stable",
         "require": {},
         "autoload": {
             "psr-4": {
                 "MoritzLost\\MySite\\": "src/"
             }
         }
     }

     Most of this stuff was added during initialization; for now, take note of the autoload information. The syntax is a bit tricky, since you have to escape the namespace separator (backslash) with another backslash (see the documentation for more information). Also note the psr-4 key, since that's the standard I use to namespace my classes. The line "MoritzLost\\MySite\\": "src/" tells Composer to look for classes under the namespace \MoritzLost\MySite\ in the src/ directory in my project root.

     After adding the autoload information, you have to tell Composer to refresh the autoloader information:

     $ composer dump-autoload

     Now I'm ready to use my classes in my templates. So, if I have this file:

     // src/ContentBag.php
     <?php namespace MoritzLost\MySite;

     class ContentBag {
         // class stuff
     }

     I can now use the ContentBag class freely in my templates without having to include those files manually:

     // public/site/templates/home.php
     <?php namespace ProcessWire;
     use MoritzLost\MySite\ContentBag;

     $contentbag = new ContentBag();
     // do stuff with contentbag ...

     Awesome! By the way, in PSR-4, sub-namespaces correspond to folders, so I can put the class MoritzLost\MySite\Stuff\SomeStuff in src/Stuff/SomeStuff.php and it will get autoloaded as well. If you have a lot of classes, you can group them this way.

     Conclusion

     With this setup, you are following secure practices and have much flexibility over what you want to include in your project. For example, you can just as well initialize a JavaScript project by typing npm init in the project root. You can also start tracking the source code of your project inside your src/ directory independently of the ProcessWire installation. All in all, you have good separation of concerns between ProcessWire, external dependencies, your templates and your OOP code, as well as another level of security should your server or CGI handler ever go AWOL. You can also build upon this approach. For example, it's good practice to keep credentials for your database outside the webroot. So you could modify the public/site/config.php file to include a config or .env file in your project root and read the database credentials from there (see the sketch below).

     Anyway, that's the setup I came up with. I'm sure it's not perfect yet; also, this tutorial is probably missing some information or isn't detailed enough in some areas depending on your level of experience. Feel free to ask for clarification, and to point out the things I got wrong. I like to learn as well. Thanks for making it all the way to the bottom. Cheers!
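     A minimal sketch of that last idea, assuming a plain PHP file named config-secrets.php (an illustrative name, not part of the tutorial) in the project root, one level above the webroot:

     // config-secrets.php (project root, outside the webroot)
     <?php
     return [
         'dbName' => 'mysite',
         'dbUser' => 'mysite_user',
         'dbPass' => 'super-secret',
     ];

     // appended to public/site/config.php
     $secrets = require dirname(__DIR__, 2) . '/config-secrets.php';
     $config->dbName = $secrets['dbName'];
     $config->dbUser = $secrets['dbUser'];
     $config->dbPass = $secrets['dbPass'];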
    1 point
  13. Glad you got it sorted. If it's not already, I think this should be posted as an issue on GitHub. We desperately need to start culling these weird inconsistencies that waste hours of time and make you go grey.
    1 point
  14. I figured it was Tracy; I was just marvelling at how much easier it is to read than phpMyAdmin. I found a solution here: this works. It's odd behaviour for the Approve Now button in the notification email to only work when comments are rendered through the render function. I can work with it though. Thanks for your help.
    1 point
  15. You can look at the utils here: https://github.com/chartjs/Chart.js/blob/master/docs/scripts/utils.js. It seems like they're not really complex, but they likely make the documentation examples more terse.
    1 point
  16. @ryan for the time being, the data (groupID, start, end, sequence) are not supposed to be queryable. Ideally groupID should be, because I would like to display all proteins belonging to a groupID on the group page, but I think I will use a workaround for this: I have a file for each group containing this information, which I plan to parse when loading the group page. Individual files have at most 1000 lines (proteins). In this way I avoid querying 20+ million entries each time someone tries to access a particular group page.

     As you suggested, I will load each entry (groupID, start, end, sequence) into a text field and then use PHP's explode() to parse it into an array at runtime. The only doubt is probably on groupID: a single groupID can be referenced by multiple proteins, and it does contain additional information displayed on its respective group page (I create the group pages separately). The natural limit is around 20 groups, although normally it is 2 or 3 groups per protein. With this setup, is it worth using a Page reference field? What are the other storage possibilities?

     In the future I think I will end up using ProFields or building a Fieldtype module. For this last approach I think I need to read a bit more about modules, since I am new to ProcessWire. This tutorial posted by bernhard is a good start.

     @Hector Nguyen if you are not constrained by memory, then loading the CSV into memory all at once is the way to go, right?

     Thanks for all the useful suggestions.

     P.S. Maybe I am diverging from the original thread. If you prefer, I can open a new one.
    1 point
  17. @Hector Nguyen Functions can't be autoloaded in PHP. Two options to work around this:

     - Put the QFramework\Function function in a class as a static method; then the class can be autoloaded.
     - Add all files containing functions to the autoload files list in your composer.json. This way those files will be included on every request (see the sketch below).
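     A minimal sketch of the second option (the path src/functions.php is illustrative):

     {
         "autoload": {
             "files": [
                 "src/functions.php"
             ]
         }
     }

     Run composer dump-autoload afterwards so the files list is picked up.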
    1 point
  18. Hi @eelkenet, I have set up a new test site today, and it looks like every possible setting is working as expected on the PW image generation side. So it seems I need some more information about your setup. Especially this:

     - What settings are you using for $config->imageSizerOptions?
     - What are the settings for $config->webpOptions?
     - How do you call the image URL in your template file(s): with $image->size()->webp->url or $image->size()->url, or how? Do you use an individually passed options array there?
     - What are your settings in the .htaccess file?

     Please provide this information as exactly as possible. If you don't want to post it here, you can PM me. My newly set up test, without any settings in the .htaccess, doesn't recreate any variations. It looks like this:
    1 point
  19. What do you want to achieve? Do you want to get all images for a project out of the system to use them externally? Or do you want to restructure how/where ProcessWire saves images in general?

     For the first case, you can write a little script to iterate recursively through all fields, including any repeater / repeater matrix sub-fields, to get an array of images. Then you can use that to copy all the images to one folder (using $files->copy(), for example), get a list of filenames (see Pagefile::filename()) or do whatever you want with them, like export their meta data as JSON or anything else. Super quick and dirty, will probably not work right away, but you get the idea:

     function getImages(Page $page): Pageimages {
         $images = new Pageimages($page);
         $fields = array_map(function ($field) {
             return $field->name;
         }, iterator_to_array($page->fields));
         foreach ($fields as $field) {
             $value = $page->get($field);
             $type = $page->fields->{$field}->getFieldType();
             if ($type instanceof FieldtypeImage) {
                 $images->import($value);
             } elseif ($type instanceof FieldtypeRepeater) {
                 foreach ($value as $repeaterPage) {
                     $images->import(getImages($repeaterPage));
                 }
             }
         }
         return $images;
     }

     $images = getImages($page);
    1 point
  20. Just a matter of changing the value in the field's settings JSON.
    1 point
  21. Awesome news. Hopefully it will account for "revision A" etc. that you do with some modules as well. I always like to keep things current.
    1 point
  22. Hi, thanks a lot for all the feedback. I did some additional tests based on all of the suggestions you gave me, and the results are already amazing!!

     Figure 1 shows @ryan's suggestions tested independently:

     1. I created the $template variable outside the loop.
     2. I created the $parent variable outside the loop. The boost in performance is surprising! Defining the $parent outside the loop made a huge difference (before, I didn't assign the parent explicitly; it was already defined in the template, thus the assignment was automatic).
     4. I also tried this suggestion ($page->name = "protein" . $i;) and although it seems to boost performance a bit, I didn't include the plot because the results were not conclusive. Still, I will include this in my code.

     Figure 2 is based on @horst's suggestion. I tested the impact of calling gc_collect_cycles() and $pages->uncacheAll() after every $database->commit(). I didn't do a test for $pages->uncache($page) because I thought $pages->uncacheAll() was basically the same. Maybe this is not true (?). Results don't show any well-defined boost in performance (I guess ryan's recent reply predicted this).

     I still need to try @BitPoet's suggestion, because I am sure it is something that will boost performance. I am now doing these tests on my personal computer; I will repeat them when running on the dedicated server. I would also like to try generators (first time I hear about them).

     One last thing regarding the fields in the protein template and the data structure in general (the pseudo code I posted initially was just an example). Proteins are classified into groups. Each protein can belong to more than one group (max. 5). My original idea was to use repeaters, because for each protein I have the following information repeated: groupID [integer], start [integer], end [integer], sequence [text]. The idea is that from groupID you can go to the particular group page (I have around 50k groups), but I don't necessarily need a page reference for this.

     The CSV is structured as follows. Note that some protein entries are repeated, which means that I shouldn't create a new page but add an entry to the repeater field.

     Protein-name   groupID   start   end   sequence
     A0A151DJ30     41        3       94    CPFES[...]VRQVEK
     A0A151DJ30     55        119     140   PWSGD[...]NWPTYKD
     A0A0L0D2B9     872       74      326   MPPRV[...]TTKWSKK
     V8NIV9         919       547     648   SFKYL[...]LEAKEC
     A0A1D2MNM4     927       13      109   GTRVW[...]IYTYCG
     A0A1D2MNM4     999       119     437   PWSGDN[...]RQDTVT
     A0A167EE16     1085      167     236   KTYLS[...]YELLTT
     A0A0A0M635     1104      189     269   KADQE[...]INLVIV

     Since I know repeaters also create additional overhead, I am doing all my benchmarks without them. I can always build the website without them. In the next days I will do some benchmarks including repeaters just to see how it goes. Once again, thanks for all the replies!
    1 point
  23. Version 0.0.2 is now on GitHub: https://github.com/MetaTunes/ProcessDbMigrate. This version more fully allows for different page IDs in source and target systems. A meta value (idMap) maintains the mapping. This allows the replacement of links in RTE fields, provided the relevant pages are all in the migration. Also, all existing image variants are migrated. EDIT: 0.0.3 now fixes an install problem and adds upgrading via Modules -> Refresh.
    1 point
  24. @markus_blue_tomato Great, glad to hear it's working well! @StanLindsey This would be very simple to add, I'll plan to add it this week. Question: would just an array of DB hosts be adequate, or would it need separate configuration (host plus db name, user, pass, port, etc.) for each of the readonly db hosts?
    1 point
  25. Thank you, Robin S! That was exactly what I was looking for.
    1 point
  26. AdminOnSteroids has an option in the PageListTweaks section: "Always show extra actions"
    1 point
  27. OK, this one is mindblowing; it looks like a perfect upgrade from the solution I'm currently using. I'm getting by with a repeater and a modded version of FieldtypeSelectFile that basically lets you select a PHP file from a given content-blocks directory, assumes there's a PNG that goes with it, and renders a neat block selector. From there, it's just hiding and showing fields depending on the value of that selector. It's not the most elegant of solutions from the "setting up" perspective, but it's quite nice and comprehensive for the editor, and the PHP code on the frontend is super understandable. Brad seems to try to do just enough. Looks great for custom, controlled content blocks. Too much freedom and the editor usually makes a mess.
    1 point
  28. Hey @cosmicsafari, not a bad question at all. The description field comes from a module config setting. By default the module is set up to look for a field called "summary", but you can change this to something else:

     $config->SearchEngine = [
         'render_args' => [
             'result_summary_field' => 'summary',
         ],
     ];

     My guess is that your pages don't have the summary field. You can use some other field instead (if there's a suitable field), or you can let the module auto-generate the description by setting the summary field to "_auto_desc"... though please note that support for auto-generated descriptions is experimental, and it comes with one major gotcha: SearchEngine doesn't know which parts of your search index are "public knowledge", so it may end up displaying anything stored there. If you end up using this option, be sure to test it and make sure that you haven't indexed anything you don't want to show up in the search results.
    1 point
  29. Hi all, this is likely a really stupid question, but how do I go about enabling the result descriptions and text highlighting? I have seen @teppo mention them, and I have seen there are methods in the codebase relating to them, but for the life of me I can't figure out how to enable them. My search results are only returning a page title and a URL at present, which I figured was the out-of-the-box default, but I'm not 100% sure.
    1 point
  30. Hello all, I have no idea why, but approval via email does not work in my case. Here are all the GET variables that are submitted by clicking the link in the email:

     code: gKB6jlhWTowUeahUNX6OWWvBYBxYf1D41I5LZb4ws1YsA73jmk7sQeOoU1QAy4L6f1IAnmaSKXRjINOtGFDKO92e10Y5IuTzmuHOwkGI8bWtcXaIGstDB_xzq9hhwvZx
     comment_success: approve
     field: comments
     page_id: 2006

     As you can see, all parameters are there. As far as I know, the file CommentNotifications.php is responsible for saving the new status "approved" after clicking the link, but in my case nothing changes and I do not get any message on the frontend. Tracy does not complain about anything, so I don't know how to check where the problem is. Is there someone who could give me a hint on how to find out what's going on after clicking the link?

     Best regards

     EDIT: OK, I see! This doesn't work if the comments were not rendered with the render function. So using your own markup to output comments inside a foreach prevents the status change after clicking the approval link. Solution: copy the whole FieldtypeComments directory into site/modules and make all the markup changes there. Load the comment form and list via the render functions and everything is fine.
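     For reference, a minimal sketch of rendering through the built-in render functions (assuming the comments field is named "comments"):

     // site/templates/post.php
     echo $page->comments->render();      // render the comment list
     echo $page->comments->renderForm();  // render the comment form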
    1 point
  31. You can use $this->animal:

     $this->animal = 'cat';
     $this->addHookAfter('Page::render', function($event) {
         bd($this->animal);
     });

     or you can also do this:

     $animal = 'cat';
     $this->addHookAfter('Page::render', function($event) use($animal) {
         bd($animal);
     });
    1 point