Everything posted by nurkka

  1. I found the solution – of course right after posting. One simply has to wrap the Latte markup in {try} ... {/try}:

     {try}
     {func($var)}
     {/try}

     And if you want to log the errors, you can do the following:

     $loggingHandler = function (\Throwable $e, \Latte\Runtime\Template $template) {
         wire()->log->save('latte_errors', $e->getMessage());
     };
     $latte->setExceptionHandler($loggingHandler);
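     For completeness, here is how the pieces fit together in my standalone setup – a minimal sketch assuming Latte 3 with the StringLoader; the template string and variable are just placeholders:

     $latte = new \Latte\Engine();
     $latte->setLoader(new \Latte\Loaders\StringLoader());

     // log template errors to a ProcessWire log instead of letting them bubble up
     $latte->setExceptionHandler(function (\Throwable $e, \Latte\Runtime\Template $template) {
         wire()->log->save('latte_errors', $e->getMessage());
     });

     // {try} ... {/try} suppresses the failing block and hands the error to the handler
     $templateContent = '{try}{func($var)}{/try}';
     $output = $latte->renderToString($templateContent, ['var' => 'demo']);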
  2. I am using Latte (not with RockFrontend yet, but "standalone" – I can't integrate RockFrontend into my current project yet, but hopefully into the next one), and I noticed that if I make an error in the .latte file, the Latte engine outputs a 500 Server Error, although I wrapped the Latte call in a try/catch block. My code in the ProcessWire template:

     $templateContent = '{func($var)}'; // contains an undefined function call
     $params = [];
     try {
         $output = $latte->renderToString($templateContent, $params);
     } catch (\Exception $e) {
         wire()->log->save('latte_errors', $e->getMessage());
     }

     When testing the same Latte markup in the Latte sandbox https://fiddle.nette.org/latte/#0b4eb6af74 , it displays an appropriate error message. So there must be a way to catch the error, but I can't find it. Can this only be done with PHP features (which I do not know yet), or does ProcessWire have any features to catch the errors? Does anyone know how to do it?
  3. Thanks for your ideas! I have to automatically upload the generated JSON files to the shop server via SFTP, so I think URL hooks and WireCache would not help in this case. After several hours of testing, I am now quite sure that the original page saving problems resulted from the missing namespace in templates and includes. The problems didn't occur again. I am using the hooks ProcessPageView::finished and Pages::saved to generate the JSON files, and that works for now. When testing, I had the impression that ProcessWire cached some of the includes, possibly because of the FileCompiler. Several times, changes in the code were not immediately noticeable, but only after several reloads and/or a click on “Refresh” for the modules. I also tried the CacheControl module to get rid of cached code. Perhaps this also has something to do with the hosting provider caching PHP code, but I don't know for sure. Anyway, the module now seems to work, at least for now.
  4. Thanks @poljpocket, you're right regarding the hook on Pages::saved, and I am already using that. But because of the nature of the online shop, which e.g. fetches up to four ProcessWire pages per pageview, if I only use Pages::saved, in some cases only one of those four pages will be up to date, while the others stay outdated until they are also saved – which would never happen unless it is done automatically. So the perfect solution would be to determine the page IDs of the other three pages when saving one of them, and to render the JSON files of all those related pages. I am sure that is possible, but there are a lot of different cases, not only the described example. So I thought it would be a good idea if ProcessWire saved every page as JSON (in addition to displaying it as a normal page) whenever it is viewed – and of course every language variant (which is not viewed in the same request). To address the issue with the same page not rendering twice: in the meantime, I found that the site also had a lot of templates without namespace. I corrected that, and now the issue seems to be gone. I am still testing that.
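     Just to illustrate the direction I am thinking of – a rough sketch only, where "related_shop_pages" is a hypothetical page reference field and exportPageAsJson() is a placeholder for the actual export routine:

     wire()->addHookAfter('Pages::saved', function (HookEvent $event) {
         $page = $event->arguments(0);
         // find the other pages that the shop fetches together with this one
         $related = $event->wire('pages')->find("related_shop_pages=$page");
         foreach ($related->and($page) as $p) {
             // exportPageAsJson($p); // placeholder: render + write the JSON files per language
         }
     });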
  5. @poljpocket Thanks for your reply! The site has quite good traffic, so LazyCron would be okay in principle. But no matter if I use LazyCron or the ProcessPageView::finished hook – you mentioned it already – one of the visitors (or, in the worst case, the Google bot!) would get a very slow experience. So I practically have no other choice than to use a real cron job, which unfortunately can only run every minute.
  6. I want to divide a possibly very long task into small parts, each of which should be completed relatively quickly. LazyCron leaves at least 30 seconds between two runs, so I am looking for a way to run the script with every ProcessWire call. In my tests, I got different results when using PageRender::renderPage or ProcessPageView::finished than when the script was executed by LazyCron. With LazyCron, I got no errors, so I figured it would be the best way to execute a task independently of the original request or pageview. I don't know yet why my version with ProcessPageView::finished did not work for me – but I will keep testing that. Thanks for your input!
  7. The shortest time between two LazyCron runs seems to be 30 seconds. Is there a way to let a LazyCron script run on every page load instead? If not, what would be a good technique to run a script on every page load, but outside of the normal ProcessWire request context, like LazyCron does? I tried a lot of hooks, but every one of them kept my script within the current request. I am searching for the right hook or another technique which lets me run the script in ProcessWire (in a module), but independently of the current request. Thanks and best regards! Update: I decided to use a "real" cronjob, which unfortunately can only run every minute, while LazyCron could run every 30 seconds if your site has enough traffic. If you want to execute a task or parts of a task after every ProcessWire request, you can use the hook ProcessPageView::finished.
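     For anyone finding this later, here are both variants side by side as a minimal sketch for site/ready.php – doTaskChunk() is a placeholder for the actual work:

     // Variant A: LazyCron – runs at most every 30 seconds, piggybacking on a request
     wire()->addHook('LazyCron::every30Seconds', function (HookEvent $event) {
         // doTaskChunk();
     });

     // Variant B: runs after every single request, once ProcessWire has finished delivering the page
     wire()->addHookAfter('ProcessPageView::finished', function (HookEvent $event) {
         // doTaskChunk();
     });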
  8. I am using ProcessWire as the backend for an online shop system which fetches the content as JSON files. The ProcessWire installation has some legacy template code, which causes the start page of the shop to require 3 or 4 actual ProcessWire pages. The JSON files are saved into a custom cache folder. When this cache is deleted, it has to be rebuilt. To achieve that, I implemented a hook into PageRender::renderPage, which saves the currently viewed page and all of its language variants into JSON files. To get the actual markup, I am using $page->render(). Doing that, I noticed that ProcessWire seems to prevent the currently requested page from also being generated (a second time) via $page->render(). The return value of $page->render() was always empty within the PageRender::renderPage hook if $page was identical to the page of the current request. And it always returned the correctly rendered markup if the current page was not identical to the page in the $page->render() call. I assume that this has something to do with a mechanism in ProcessWire which is meant to prevent infinite loops. Does anybody know more about this? Are there any workarounds? Update: The issues were solved by cleaning up legacy template code – especially missing namespaces.
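     For reference, a minimal sketch of a guard inside the hook that simply skips the page of the current request instead of rendering it a second time – writeJsonCache() is just a placeholder name:

     // render and cache every page except the one that triggered the current request
     if ($page->id !== wire('page')->id) {
         $markup = $page->render();
         // writeJsonCache($page, $markup); // placeholder for writing the JSON file
     }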
  9. My suggestion would be to add a native global media manager and native multidomain support.

     Media Manager:
     - Global management of images (also SVG) and documents (e.g. PDFs) with a decent UI, preview thumbnails, the possibility to organize the assets in folders, etc.
     - A field to reference those assets, with a usable UI and the possibility to define starting folders, e.g. having a field which only allows selecting from a folder with employee portraits, etc.
     - References should be managed automatically, so one can't delete an image which is still referenced anywhere
     - If an image is not referenced anywhere anymore, there should be a dialog which asks if one wants to delete the asset, OR there could be a cleanup feature to find and delete unused media items
     - and many more ideas, but the global management and the reference field with visual image preview in a clear UI would be great

     Multidomain Support:
     - Manage multiple websites with different domains within one ProcessWire installation, with optional multilanguage support
     - Every website has a root parent page in the page tree, where everything is defined: domain name, language, etc.
     - Internal links will be managed by ProcessWire, so you can link between the domains. ProcessWire would determine automatically if the links have to be prefixed with the domain name
     - The root parent pages will be fully abstracted away, e.g. their page names won't be applied to URLs

     I think that we really would need a native implementation of those features. Unfortunately, I don't have the time or expertise to develop them myself and make a PR, but I would like to add them to the wish list. And if they were implemented, I would be happy to contribute ideas, feedback and beta testing.
  10. Hi all! If I had some feature requests for InputfieldTinyMCE, where would be the right place to post them? In https://github.com/processwire/processwire-requests or here in the forum in this thread or in a new thread?
  11. @adrian @Robin S I didn't know about these features yet – many thanks!
  12. Hi all, my proposal is that every module should have an "active" checkbox, which allows quickly deactivating it, instead of being forced to uninstall/reinstall it – like AdminOnSteroids had and TracyDebugger has. Best regards!
  13. Thanks for testing. I was using the latest stable version of ProcessWire (3.0.229) and upgraded now to the latest dev version (3.0.239). The issue seems to be gone (for now), i.e. I get the same data on saving the page – no matter if I change something in the page editor or not.
  14. Here is an example of the template file:

      $json_data = [];
      foreach ($page->contentblocks as $cb) { // "contentblocks" is a repeatermatrix field
          $json_data_block = $cb->render(); // this returns an actual json array
          array_push($json_data, $json_data_block);
      }
      header('Content-Type: application/json');
      echo json_encode($json_data);

      And here is an example of a repeatermatrix field template file:

      $json_data = [];
      $json_data["headline"] = $page->text; // "text" is a multilanguage text field
      $json_data["text"] = $page->body; // "body" is a multilanguage textarea/tinymce field
      return $json_data; // no json_encode needed

      Returning JSON data from the page template did not work, because the $page->render() method threw an error ("strlen(): Argument #1 ($string) must be of type string, array given"). So I had to convert the JSON data to a string in the template and convert it back in the module. The advantage is that one can view the page in the browser and check the JSON output.
  15. Hello everyone, I use ProcessWire with multilanguage support (5 languages) to provide a shop with content as JSON files. To match the specifications regarding the structure of those JSON files, I wrote a module that exports the files on page save. It renders a page and outputs the language content in different JSON files, within different folders. The module hooks into Pages::saved. As soon as a page is saved in the admin area, the module loops over the languages, renders the page content in each language and writes a JSON file for each language. The following phenomenon now occurs: If one edits individual fields of the page in the admin area and then saves it, the JSON files are output correctly. If one saves the page again – without having changed a field on the page – $page->render() does not return the text content of the multilanguage fields, but objects with the text content of all languages. These are e.g. LanguagesPageFieldValue objects or ComboLanguagesValue objects. I believe this must have something to do with the trackChanges behaviour: ProcessWire tracks the changes that have been made to fields, and if no changes have been detected, the behaviour of $page->render() within the Pages::saved hook appears to be different. Here is the (simplified) code I am using:

      <?php namespace ProcessWire;

      class ExportAsJson extends WireData implements Module {

          public function init() {
              $this->addHookAfter('Pages::saved', $this, 'hookPageSaved', ['priority' => 200]);
          }

          protected function hookPageSaved(HookEvent $event) {
              $page = $event->arguments(0); // get the saved page

              // save current settings
              $saved_lang = $this->wire->user->language;
              $saved_user = $this->wire->user;
              $saved_output_formatting = $page->of();

              // set current user to guest user
              $this->wire->users->setCurrentUser($this->wire->users->getGuestUser());
              $page->of(true);

              $json_data = [];
              foreach ($this->wire->languages as $l) {
                  $lang_name = $l->name;
                  // set current language for rendering the page
                  $this->wire->user->language = $l;
                  // get rendered json data as string
                  $markup = $page->render();
                  $json_data[$lang_name] = json_decode($markup);
              }

              // restore saved settings
              $page->of($saved_output_formatting);
              $this->wire->users->setCurrentUser($saved_user);
              $this->wire->user->language = $saved_lang;

              // write the JSON data to separate files on the remote shop server
              $this->writeJsonToSftpAsSeparateFiles($page, $json_data);
          }
      }

      The correct result in JSON would be like this:

      [
          {
              "featureHeadline": "Lorem ipsum dolor"
          }
      ]

      But when I save the page without changing a field beforehand, I get this:

      [
          {
              "featureHeadline": {}
          }
      ]

      Does anyone have an idea how to fix this behaviour? Thanks and best regards!
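      One idea I have not verified yet – a sketch only, so treat the approach as an assumption – would be to re-load the page freshly inside the hook before rendering, so output formatting is applied from scratch:

      $pages = $this->wire('pages');
      $pages->uncache($page);              // drop the in-memory copy that was just saved
      $freshPage = $pages->get($page->id); // re-load the page from the database
      $freshPage->of(true);                // enable output formatting on the fresh copy
      $markup = $freshPage->render();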
  16. Hi @Chris-PW I removed the javascript block for meta-shift-s and ctrl-s. After that, pressing ctrl-s in TinyMCE worked perfectly. Tested in Chrome/Windows 11 and Firefox/Windows 11, and via BrowserStack also in Safari 17.3/macOS Sonoma. Additionally, I removed the timeout in UIkit.notification(...) in QuickSave.js, because my pages have so many fields and languages that saving the page takes several seconds. And because the page is reloaded anyway, in my case it was better to simply leave the notification until the page reloads.
  17. @Chris-PW Many thanks for contributing this module. That was really a much needed missing feature. I am using the module with Chrome (Iron) under Windows 11, where, in TinyMCE fields, pressing CTRL-S saved the page twice. It seems that Chrome on Windows also reacts to "meta-s". I then went to quicksavetinymce.js and commented out the block for "ctrl+s". Now, pressing CTRL-S within a TinyMCE field saves the page only once. Thanks again for this great module! P.S.: This also works with Firefox on Windows.
  18. Hi all! Is there a way to add comments and documentation directly in the admin area, e.g. above the page editor? Field descriptions and notes are not sufficient in some cases, when one wants to document more complex things or explain something in more detail. I know that one can add a custom admin page, but I am searching for a way to document things directly on the pages where the editor users are working. Thanks in advance for any tips!
  19. Hi @kongondo, today I tried to upgrade a ProcessWire website from PW 3.0.210 to 3.0.229. After that, when trying to access pages in the backend which contain MediaManager fields, the following error occurred:

      ProcessWire\WireException
      Item 'type' set to ProcessWire\Pagefiles is not an allowed type

      File: /html/website/wire/core/WireArray.php:458

      448:  *
      449:  * @param int|string $key Key of item to set.
      450:  * @param int|string|array|object|Wire $value Item value to set.
      451:  * @throws WireException If given an item not compatible with this WireArray.
      452:  * @return $this
      453:  *
      454:  */
      455: public function set($key, $value) {
      456:
      457:     if(!$this->isValidItem($value)) {
      458:         throw new WireException("Item '$key' set to " . get_class($this) . " is not an allowed type");
      459:     }
      460:     if(!$this->isValidKey($key)) {
      461:         throw new WireException("Key '$key' is not an allowed key for " . get_class($this));
      462:     }

      To verify that this error was caused by Media Manager, I renamed the directory "MediaManager" in the modules folder. Then, the error disappeared. I had to downgrade ProcessWire for now, but the website has to be updated sooner or later, as other modules and parts of the site depend on it. Do you have any hints what I can do to avoid this error, or is there a bugfix release of MediaManager?
  20. @bernhard Thanks again for your help! I finally managed to also get SSH keys to work under Windows with WSL2 and with DDEV installed in Windows, so that now I am able to use RockShell the way it's meant to be used. The reason I struggled with the SSH keys was that I had put the key files in the ~/.ssh folder of the WSL2 Linux installation. But as I am using the Windows version of DDEV and the command line from the Windows side, the SSH keys must be placed in C:\Users\YourNameHere\.ssh

      After that, one can simply use this command, which copies the SSH keys into one's web container:

      ddev auth ssh

      Then one can use e.g. the following RockShell command like so:

      ddev php RockShell/rock db:pull

      And here is how to add a shell function under Windows, when using PowerShell 5:

      # test if the PowerShell profile, where you can store custom shell functions, already exists
      Test-Path $PROFILE

      # if not, create the PowerShell profile
      New-Item -path $PROFILE -type file -force

      # Open the profile file with an editor. It's located here:
      # Path: C:\Users\YourNameHere\Documents\WindowsPowerShell
      # Name: Microsoft.PowerShell_profile.ps1

      # Add the shell function to the profile file:
      function rockshell {
          param(
              [string]$Command
          )
          ddev exec php RockShell/rock $Command
      }

      After that, save the file, restart PowerShell, and now it is possible to use RockShell like so:

      rockshell db:pull

      Many thanks @bernhard for creating RockShell, and for your help!
  21. @bernhard @dotnetic Thanks for your replies! In my current project I followed your advice and now work with a local DDEV setup, a staging server and a live server. I don't have automated scripts yet, but working with the command line (WSL2, DDEV and SSH) works really well. Also, I mostly got rid of large FTP uploads by using rsync. Working with these tools feels better every day. But when trying to copy the local DDEV website to the staging server, I got stuck. I exported the database with ddev php RockShell/rock db:dump, uploaded it via rsync, connected via SSH and tried php RockShell/rock db:restore on the server. But as the ProcessWire database tables were not present at this point yet, RockShell returned an error message. So I had to fall back to Adminer to import the SQL file. What would be the right way to copy a locally developed website to a staging server with RockShell?
  22. I have also tried RockMigrations, but I still don't use it as standard. I'll take another look at RockMigrations, thank you very much! Do I understand you correctly that when further developing a website that is already live, only templates, fields and modules can be updated? For example, when I create a new page type, I not only create a new template and new fields, but also one or more pages and fill their fields with content. Can this newly created content also be transferred to the production environment with RockMigrations? If so, that would mean that you have to create not only the new fields, but also all the new content in php code, and not via the ProcessWire backend, right?
  23. Firstly, many thanks for the brilliant input regarding DDEV. I'm finally using DDEV for local development and it's really super fast. My latest project is now 100% finished locally and I saved a lot of waiting time. @bernhard I understood from your posts that you have a staging version and a production version of your projects on the remote server.

      - What is the benefit of a remote staging version for a single developer? Wouldn't it be easier to omit the staging server?
      - What I still can't deal with is the following problem: If I put the website live now, my customer and their team will start editing content, creating pages, etc. in the ProcessWire backend. In addition, there will be contact form entries and blog comments from end users, i.e. ultimately user-generated content, which will also end up in the ProcessWire database. The website is to be continuously developed, i.e. I will be adding fields, modules and templates over the coming weeks and months. I understood that you would clone the database to local, continue working on the website locally, and later copy everything back to remote. But then I would overwrite the changes my client and the users have made in the meantime. How do you deal with this problem? Do you develop the site locally and, once it is deployed to production, work on the remote server? I tried to connect to a server database remotely, and it is super slow. Before DDEV, I had the classic setup of uploading every change to the remote server via FTP and refreshing the browser manually. I do not really want to go back to that ... Appreciating any help!
  24. Does anyone know how one can add a style to the TinyMCE styles dropdown which applies a CSS class to several HTML elements at once? I managed to do it with JSON in the "Default setting overrides JSON text" field, like so:

      {
        "style_formats": [
          {
            "title": "Styles",
            "items": [
              {
                "title": "Center",
                "selector": "p,h1,h2,h3,h4,h5,h6",
                "classes": "text-center"
              }
            ]
          }
        ]
      }

      But what is the syntax for achieving the same within the field "Custom style formats CSS"? Or is this simply not possible (yet)?
  25. I noticed that, in my case, I cannot have more than one CKEditor field with mystyles.js on the same template, except within repeaters or repeater matrix fields. The solution was to configure mystyles.js only for the first textarea CKEditor field and to use the feature "Inherit settings from another CKE field" for the others. Then the custom styles work as expected.