Leaderboard

Popular Content

Showing content with the highest reputation on 11/30/2018 in all areas

  1. This week ProcessWire 3.0.120 is on the dev branch. This post takes a look at updates for this version and talks about our work towards the next master version. In addition, we take a look at some more updates and screenshots for the new ProcessWire.com website: https://processwire.com/blog/posts/processwire-3.0.120-and-more-new-site-updates/
    7 points
  2. Hello friends, I participated in a project that needed lots of relationships between fields and pages, and I struggled with how to architect such a complex scenario. So I figured it out and documented this little idea: a CSS-like language for data relationships. https://github.com/joyofpw/solidwire Hope you find it useful; all ideas and contributions are welcome. Thanks.
    5 points
  3. @ryan, any chance you could set up a support forum thread for LoginRegister? Currently it doesn't have one, and these requests pile up in the Modules/Plugins area. Just trying to keep things nice and tidy here.
    4 points
  4. The issue seems to be fixed in the latest dev version, downloaded today.
    4 points
  5. https://www.spiria.com After several sites made with ProcessWire, Spiria decided it was time to get rid of its cumbersome Drupal site. To be honest, ProcessWire is still difficult to sell to customers, because this CMS/CMF is not as well known as the most popular ones. The migration to ProcessWire therefore served several purposes: eliminate the frustrations experienced with Drupal (especially with image management and some structural problems); allow integrators to learn the CMS during quiet periods, when they are not needed on other projects; and promote the CMS by adopting it. The challenges were many, but by no means insurmountable, thanks to the great versatility of this programming framework. Indeed, while ProcessWire can be considered a CMS in its own right, it also offers all the advantages of a CMF (Content Management Framework). Unlike other solutions, the programmer is not forced to follow the proposed model and can integrate his own ways of doing things.
The blog: The site includes a very active blog where visuals abound. It was essential to cache the various dynamic components. For example, in all sections of the blog, there is a list of recent articles, a list of "short technical news", another list from the same author, and a classification by category. In short, these lists evolve independently. ProcessWire's cache system, including its ability to classify caches by namespace, has significantly improved loading speed. Cache management has been placed in a "saved" hook in the useful "ready.php" file (a rough sketch of the idea follows at the end of this post).
Data migration: Importing the blog data was complex because, at the time the site was designed in Drupal, the programmers had not used the easily translatable "entities", so each article resided in two different "nodes" (pages). We would have liked to use the core ProcessWire import module, but it does not yet take multilingual fields into account. However, we used this code as a basis for building our own import module. This is one of ProcessWire's great qualities: as a CMF, it is easy to use existing code to design your own solutions.
Reproduce the layout: The current layout of the site has been reproduced exactly, as it serves the company's needs very well. ProcessWire simplified the work in many ways. Apart from the blog, which is very structured, the other sections of the site are more free-form, especially the case study section ("Our Work"). The use of page reference fields particularly helped the developers. As everything is a page in ProcessWire, you can create a pseudo-relational database within the site itself. The administrator user becomes more aware of the data hierarchy and has better control over the data.
Programming architecture: The separation between controllers and Twig visualization files facilitates the management of the site's many components. We haven't really explored ProcessWire's "regions", because we prefer not to mix these aspects of programming. This greatly eases the onboarding of programmers in our department who are used to an MVC structure, because they have a better understanding of what does what.
The search: Once again, we were able to simplify what had been done in Drupal. There are two types of searches on the site, the blog search and the more general search on the 404 page (https://www.spiria.com/potato). The Drupal site search was driven by an Apache Solr server. We decided to rely on the ease of ProcessWire and the Typeahead library (for the blog), because we didn't need the power of Solr (or Elasticsearch) anyway.
Work to improve performance still needs to be done in this area. We would have liked the excellent search tool offered on the administrative side to be available on the frontend. We have not yet had time to explore the possibility of harnessing this code from the core of ProcessWire. Our wish here is that the CMS designer, Ryan Cramer, sees this as an opportunity to offer an exciting new feature for his CMS!
Powerful modules: We have the excellent modules ProCache (static caching), ProFields (fields that greatly improve the functionality of existing fields) and ListerPro (a data search and processing tool). As the site is installed on an nginx server, we have ruled out ProCache for the moment and are satisfied with the use of the cache() function alone. The ProFields fields are a blessing, just like ListerPro. This last module is very useful to correct, for example, import errors (we had more than 800 blog articles, some of which date back to 2013). We used a functional field to gather translations of terms that would normally have remained hard-coded and difficult to access in the translation interface (an aspect to be improved in ProcessWire, in our opinion). By grouping translations on a single page, site administrators can easily change or correct terms.
Language management: What remains a very small irritant for us is the management of languages, which is otherwise fantastic in many ways. The fact that there is a default language is both a blessing and a problem. For example, in 2013, blog articles were not systematically translated. We experienced the same situation with a customer's site. If an article is only in English, no problem: we simply don't check French as an active language. However, if the article is only in French, we are still required to create the page in English and resort to tricks in the code, in particular a checkbox such as "Not present in English", to reproduce the behaviour naturally available for English (or whatever language is set as default). Perhaps there is a more elegant solution here that we have not yet discovered. It's not much, but some clients don't see why there are two ways of doing things here.
In conclusion: In any case, ProcessWire's great qualities continue to appeal to programmers, integrators, graphic designers, users and even our UI/UX expert. The solidity of the CMS/CMF and its functionality, all exposed through objects/variables ($pages, $page, $config, $sanitizer, $input... the list is long), allow us to systematize our workflow, easily reuse code and reduce production costs. Although it is dangerous to offer only one CMS solution to our customers (hammer syndrome that only sees nails), it is tempting to consider ProcessWire as the Swiss Army knife par excellence of web programming. As mentioned above, the CMF is suitable for all situations, has very good security tools, and its designer has successfully improved on standard PHP methods to make programming very pleasant and intuitive. For us, migrating the company's website to this platform was the best tribute we could pay to its designer, @ryan.
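To illustrate the caching approach described in "The blog" section above, here is a minimal sketch, assuming ProcessWire's WireCache API and a Pages::saved hook in site/ready.php. The template name, namespace and markup are illustrative, not Spiria's actual code:
    // site/ready.php - clear the cached blog fragments whenever a blog post is saved
    $pages->addHookAfter('saved', function(HookEvent $event) {
        $saved = $event->arguments(0);
        if($saved->template != 'blog-post') return;       // hypothetical template name
        $event->wire('cache')->deleteFor('blog-lists');   // wipe everything in this cache namespace
    });

    // in a blog template - rebuild the "recent articles" fragment only when the cache is empty
    $markup = $cache->getFor('blog-lists', 'recent-articles');
    if(!$markup) {
        $markup = '<ul>';
        foreach($pages->find('template=blog-post, sort=-created, limit=5') as $post) {
            $markup .= "<li><a href='$post->url'>$post->title</a></li>";
        }
        $markup .= '</ul>';
        $cache->saveFor('blog-lists', 'recent-articles', $markup, WireCache::expireDaily);
    }
    echo $markup;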
    3 points
  6. My son is interested in programming, not in hardware, so our C64 is waiting for its new owner. Just PM me...
    3 points
  7. Ah, OK. It seems I have to clarify a bit: at the age of 10, he started with a desktop PC (i5, 16GB RAM). From there on, he became more and more interested in retro machines. I'm not totally sure, but I think he currently owns 4-5 (old) laptops from different decades, an old iMac, 5 mobiles, 3 tablets and 1 C64. The C64 is the most beloved for him, and he works on different hardware parts. He has also built different audio adapters for DOS laptops. Unfortunately the sound chip of the C64 seems to be broken. Don't ask me! - he told me something about not all 8 channels working, only 2, and that it is not possible to pin down the concrete issue with his limited equipment. So, he is more interested in modifying the hardware than in sitting in front of a screen. From time to time I pray that the C64 is the last stop in his retro interest, and that I won't someday have to look out for a Z4.
    3 points
  8. What @LostKobrakai said. :-). I don't think this is something Padloper should handle. If prices need to be adjusted, we have hooks for such things. Alternatively, you could add custom price fields to the product template and use those instead, following some logic. So, this is up to the dev. On a side note, I'm not sure I get the rationale of adjusting a product's price based on a customer's location. Shipping, yes, but not the actual product price. Is it because of exchange rates? Aren't those handled by the merchants? I'll consider these when we get to that stage, thanks. My initial estimate was 6 months from when I got started (please see the first post). I can't promise that though; it could even be shorter! Thanks for the interest.
    3 points
  9. It would be great to have a tool that takes the document as input and spits out an HTML flow diagram as output, where explanatory details could be shown in tooltips, modals, etc. Because to make sense of the language you'd have to spend a fair bit of time memorising the syntax and I can see that being a problem for non-devs (i.e. clients).
    3 points
  10. Hello Robin, it's not meant to be a programming language; it's more like a prototyping language. Maybe in the future tools can be written to generate files or similar. The objective is to create a document that can show page, template and field relationships in a bird's-eye-view fashion. Before you start creating the templates in PW, you can prototype in this document, like a scratch pad, to clarify ideas and see different options before investing time in programming.
    3 points
  11. I could maybe do that for RequestLogger over christmas...
    2 points
  12. A couple of updates to the new RequestLogger panel. You can now define which types of request methods are logged: GET, POST, PUT, DELETE, PATCH. By default, all are checked, but I think in most cases unchecking GET is probably helpful as it will prevent page views from being logged. You can now use this panel on non-ProcessWire pages: if you use a PHP script like payment_confirmation.php in the root of your site, then as long as it bootstraps PW (include ./index.php), it will let you log and retrieve request data. The only catch is that you need to trigger the logging manually by adding $page->logRequests(); to the script file (a minimal sketch follows below). Visit the page and enable logging (as with any normal PW page) and you're good to go. The only weird thing at the moment with this approach is that the logged URL will be /http404/, but otherwise it works as expected. Let me know if you make use of this approach and if you find any issues. BTW, I still need to add docs for the Request Logger, API Explorer, and Adminer panels to https://adrianbj.github.io/TracyDebugger/ - I've been slack on this. Anyone out there who likes doing documentation and would like to help?
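Putting the pieces above together, a minimal sketch of such a standalone script might look like this (the payload handling is hypothetical; logRequests() is the panel method described above):
    <?php
    // payment_confirmation.php in the site root (alongside ProcessWire's index.php)
    include './index.php';     // bootstrap ProcessWire so the API is available
    $page = wire('page');      // on a non-PW URL this resolves to the 404 page, hence the /http404/ entry in the log
    $page->logRequests();      // trigger the RequestLogger panel's logging manually
    // ...handle the incoming payment confirmation payload here...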
    2 points
  13. Sometimes I wonder how I manage to tie my shoes in the morning... thinking way too complicated. So many years and I love ProcessWire even more every day! Everything I need is already there. I do it in the backend with two text fields, and FieldtypeRuntimeMarkup generates urlencode($page->encodekey) and so on... and the result is the complete URL for the editors to copy. And even a button to test it!
    $prefix = $page->httpUrl . '?';
    $l_key = urlencode($page->encodekey);
    $l_title = urlencode($page->encodetitle);
    $gclid = '123456789';
    $result = '<textarea rows="5">' . $prefix . 'keyword=' . $l_key . '&title=' . $l_title . '&gclid=' . $gclid . '</textarea>';
    return $result;
    2 points
  14. I just now went looking in their issue trackers and found this in their enterprise support tracker (in the category tab "Standard Feature Requests"): AG-1202 "Allow rendering rows dynamically adapting their height to their content". That does sound like what I want, but thanks to their closed system, we have no way of knowing the exact contents of the issue! There is also this, which would only be for paying customers (Enterprise Row Model): AG-1039 "Allow dynamic row heights and maxBlocksInCache". And this one is in the Parked category: AG-2228 "Allow lazy loading of rows when using autoHeight, i.e. on scroll to configured amount of rows, append n more rows at the bottom". So it looks like this stuff is on their radar (which I would have assumed based on how they acknowledge the pain point in their docs). Let's wait and see, and for now enjoy my hack. All this hacking does have the effect of improving my self-confidence and making me want to learn JS more deeply.
    2 points
  15. Earlier in this topic I asked about autoHeight for rows. Now I have created a nice and only slightly hacky solution for it (no hacks in libs, just working around stuff). The existing simple option is not acceptable in our use case, as even the official docs warn about it. When using autoHeight for 35k rows, I got the message "the web page seems to be running slow, do you want to stop it" twice. I don't even dare to imagine what would happen on an old smartphone. The obvious (to me) solution was to calculate automatic height on demand, only for the handful of rows displayed at a time. This has to happen on page load, on filtering and on pagination navigation. Due to how pagination is implemented, a hoop had to be jumped through there as well (to avoid an infinite loop). To be clear: the ag-Grid API offered no immediately useful event I could listen to! viewportChanged sounded like it would work, but in practice it failed to cover page navigation.
    var paginationAttached = false;
    var paginationHandler = function (event) {
        rowHeighter(grid, event);
    }
    function rowHeighter(grid, event) {
        // calculate row height for displayed rows to wrap text
        var cols = [
            grid.gridOptions.columnApi.getColumn("code"),
            grid.gridOptions.columnApi.getColumn("variation")
        ];
        grid.gridOptions.api.getRenderedNodes().forEach(function(node) {
            node.columnController.isAutoRowHeightActive = function() { return true; };
            node.columnController.getAllAutoRowHeightCols = function() { return cols; };
            node.setRowHeight(grid.gridOptions.api.gridOptionsWrapper.getRowHeightForNode(node));
        });
        // if listening to paginationChanged, onRowHeightChanged creates an infinite loop, so work around it
        if(paginationAttached === false) {
            grid.gridOptions.api.onRowHeightChanged();
            grid.gridOptions.api.addEventListener('paginationChanged', paginationHandler);
            paginationAttached = true;
        } else {
            grid.gridOptions.api.removeEventListener('paginationChanged', paginationHandler);
            grid.gridOptions.api.onRowHeightChanged();
            grid.gridOptions.api.addEventListener('paginationChanged', paginationHandler);
        }
    }
You can see here that I had to brute-force overwrite a couple of functions in the ag-Grid row height calculation logic. I define the columns I want to target for the auto height and trick getAllAutoRowHeightCols into providing just them. My filter event listener in RockGridItemAfterInit is:
    grid.gridOptions.api.addEventListener('filterChanged', function(event) {
        rowHeighter(grid, event);
    });
A CSS rule is also required:
    div[col-id="variation"].ag-cell,
    div[col-id="code"].ag-cell {
        white-space: normal;
    }
Using automatic row heights brought about an issue with the default fixed-height grid: a wild scrollbar appeared! I found out I can make the grid height adapt by putting this into RockGridItemBeforeInit:
    // pagination with a fixed number of rows and adapting grid height
    grid.gridOptions.paginationAutoPageSize = false;
    grid.gridOptions.paginationPageSize = 15;
    grid.gridOptions.domLayout = 'autoHeight';
Then I decided the pagination controls should live above the grid, because otherwise their position will change annoyingly whenever the grid height changes. Putting this into RockGridItemAfterInit:
    // move the pagination panel to the top, because the dynamic row heights would change its position at the bottom
    var pagingPanel = document.querySelector('.ag-paging-panel');
    var rootWrapper = document.querySelector('.ag-root-wrapper');
    rootWrapper.prepend(pagingPanel);
    2 points
  16. Assuming category is a page field, I really don't understand your setup. Do you have multiple category pages with the title "Cyclonic Vacuums" under different parents? Why? So "Cyclonic Vacuums" shows up in the selectable list multiple times? Three ways to solve this: use unique titles for your categories, even if they are stored under different parents; use a custom label format in the field settings of category, e.g. {parent.title} - {title}; or use a custom selector (screenshot).
    2 points
  17. This seems like a pretty good approach to both inline and CSS images: https://css-tricks.com/using-webp-images/ For inline images in a textarea, a textformatter module could prepare the picture/srcset markup (rough sketch below).
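Purely as an illustration of that idea, here is a minimal, untested sketch of such a textformatter, using ProcessWire's Textformatter base class. The module name, the regex and the assumption that a .webp file exists next to each image are all hypothetical:
    <?php namespace ProcessWire;

    class TextformatterWebpPicture extends Textformatter {

        public static function getModuleInfo() {
            return ['title' => 'WebP Picture Wrapper (sketch)', 'version' => 1];
        }

        public function format(&$str) {
            // wrap each inline <img> pointing to a jpg/png in a <picture> that offers a .webp source first
            $str = preg_replace_callback('/<img[^>]+src="([^"]+)\.(jpe?g|png)"[^>]*>/i', function($m) {
                $webp = $m[1] . '.webp'; // assumes a .webp variation has been generated alongside the original
                return "<picture><source srcset=\"$webp\" type=\"image/webp\">{$m[0]}</picture>";
            }, $str);
        }
    }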
    2 points
  18. Nice one @bernhard! If you have a single remote you can also run git fetch first; afterwards you can safely git checkout dev. If you have multiple remotes you can do it in one command with git checkout -b dev origin/dev. No creepy warning about being in detached HEAD mode.
    2 points
  19. Custom Inputfield Dependencies
A module for ProcessWire CMS/CMF. Extends inputfield dependencies so that inputfield visibility or required status may be determined at runtime by selector or custom PHP code.
Overview: Custom Inputfield Dependencies adds several new settings options to the "Input" tab of "Edit Field". These are described below. Note that the visibility or required status of fields determined by the module is calculated once, at the time Page Edit loads. If your dependency settings refer to fields in the page being edited then changes will not be recalculated until the page is saved and Page Edit reloaded.
Usage: Install the Custom Inputfield Dependencies module. Optional: for nice code highlighting of custom PHP, install InputfieldAceExtended v1.2.0 or newer (currently available on the 'dev' branch of the GitHub repo). The custom inputfield dependencies are set on the "Input" tab of "Edit Field".
Visibility:
- Show only if page is matched by custom find: use InputfieldSelector to create a $pages->find() query. If the edited page is matched by the selector then the field is shown.
- Show only if page is matched by selector: as above, but the selector string may be entered manually.
- Show only if custom PHP returns true: enter custom PHP/API code – if the statement returns boolean true then the field is shown. $page and $pages are available as local variables – other API variables may be accessed with $this, e.g. $this->config. In most cases $page refers to the page being edited, but note that if the field is inside a repeater then $page will be the repeater page. As there could conceivably be cases where you want to use the repeater page in your custom PHP, the module does not forcibly set $page to be the edited page. Instead, a helper function getEditedPage($page) is available if you want to get the edited page regardless of whether the field is inside a repeater or not: $edited_page = $this->getEditedPage($page);
Required: The settings inputfields are the same as for Visibility above, but are used to determine whether the field has 'required' status on the page being edited.
https://github.com/Toutouwai/CustomInputfieldDependencies
http://modules.processwire.com/modules/custom-inputfield-dependencies/
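As a small illustration of the "Show only if custom PHP returns true" option above, a condition like the following could be entered in the field settings. The template name is made up for the example; $pages, $page and getEditedPage() are available as described in the readme:
    // show this inputfield only when the edited page already has at least one child of a given template
    return $pages->count("parent={$this->getEditedPage($page)}, template=variant") > 0; // 'variant' is a hypothetical template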
    1 point
  20. 1) Setup the necessary Git repos
Fork ProcessWire: https://github.com/processwire/processwire
Clone your fork to your local dev environment (I named my folder processwire-fork):
    git clone git@github.com:BernhardBaumrock/processwire.git .
To be able to keep the fork in sync with ProcessWire we need to set the upstream, because currently the remote only points to the fork:
    git remote -v
    // result:
    origin  git@github.com:BernhardBaumrock/processwire.git (fetch)
    origin  git@github.com:BernhardBaumrock/processwire.git (push)
    git remote add upstream https://github.com/processwire/processwire.git
Now fetch the upstream:
    git fetch upstream
As you can see, we are still on the master branch (of the fork). We want to work on the dev branch. In VSCode you instantly see all the available branches and which branch you are on, but you can also use the console:
    git checkout origin/dev
As we did not have a local dev branch yet, we get a warning, so we do what git tells us:
    git checkout -b dev
We are almost ready to go, and we can check that everything is working by simply modifying the index.php file. Nice! index.php shows up in the git panel on the left side under "Changes", and clicking on it we get a nice diff (if you haven't tried VSCode yet, give it a go and see the corresponding forum thread here).
2) Setup a running ProcessWire instance
Now that we have a fresh fork of ProcessWire we want to contribute something to the project. But how? We need to install ProcessWire first to test everything, right? But we don't want to copy over all edited files manually, do we? Symlinks to the rescue. We just install ProcessWire in another folder (in my case /www/processwire) and then we replace the /wire folder of the running instance with a symlink to the wire folder inside the fork. Let's try that via CMD (a sketch of the command is at the end of this post). If you have your folders on the C:/ drive you might need to run CMD as admin. We are ready for our first contribution! This is how my ProcessWire installation looks now. Note these things:
- We are inside "processwire", not "processwire-fork".
- There is no .git folder, meaning this instance is not under version control.
- The wire folder is symlinked, and that folder IS under version control, but in another folder (processwire-fork).
If you want to contribute to something outside of the wire folder, just symlink those files as well (eg index.php or .htaccess).
3) Example PR - Coding
You might have seen this discussion about FieldtypeDecimal for ProcessWire: https://github.com/processwire/processwire-requests/issues/126 I thought it would make sense to have at least a comment in FieldtypeFloat noting that you can run into trouble when using FieldtypeFloat in some situations. I then wanted to make a PR, and since I don't do that every day it's always a kind of hurdle. That's also a reason why I created this tutorial, as a side note. OK, we want to add some comments to FieldtypeFloat, so we open up the forked repository of ProcessWire in VSCode. This is important! We work in the forked folder "processwire-fork" but we TEST everything in the browser from the test instance in the folder "processwire". This might be a little confusing at first, but we are not working on a local project, we are working on the core.
We see that we are on the fork, that the folder holds a clone of ProcessWire that is NOT installed yet, and that we are on the dev branch, which is not yet uploaded to our git account (the cloud symbol shows this). Important: before you start your changes you always need to make sure that you have the latest version of ProcessWire in your folder! This is done by pulling a branch from your upstream. If you want to work on different contributions it might make sense to create a separate branch for each modification:
    git checkout -b FieldtypeFloat-comments
    git pull upstream dev
Using VSCode it might be easier to create new branches or switch between existing ones, just saying. Now the changes: we open FieldtypeFloat.php and add our comments. We instantly see our changes in VSCode, or if you prefer you can also open the diff. When we are happy with our changes we can commit them (either using the VSCode UI or the command line):
    git add .
    git commit -m "your commit message"
4) Submitting the PR
First, we need to upload the new branch to our forked repo:
    git push -u origin FieldtypeFloat-comments
If you don't define the remote branch with -u, git will refuse the push and ask you to set the upstream branch first. Now head over to your GitHub account and click to create the PR. Just make sure that you send your pull request to the dev branch! Check that everything is correct, and voilà: https://github.com/processwire/processwire/pull/130 That's how I do it. If you know better ways or have any suggestions for improvements please let me know.
-------------------- update -----------------------
5) Updating the fork
If you already have your setup and just want to grab the latest version of ProcessWire into your fork, this is what you have to do:
    git fetch --all
As you can see, it grabs the new dev version from the upstream repo. But if you go back to the running PW site's upgrades page you'll see that you are still on the old version. We need to merge the changes into our branch. Before we do that, we make sure we are on our local dev branch:
    git checkout dev
    git merge upstream/dev
Now head over to your site's admin and you'll see that the changes have been merged. You can now continue by creating a new branch and working on it as shown in section 3.
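For reference, the CMD symlink step mentioned in section 2 might look roughly like this (the paths are illustrative and assume the folder layout described above; run CMD as admin if needed):
    cd C:\www\processwire
    rem keep the original wire folder around as a backup
    rename wire wire-original
    rem create a directory symlink so the running instance uses the fork's wire folder
    mklink /D wire C:\www\processwire-fork\wire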
    1 point
  21. Let's say that you have an image called myimage.jpg and it's accessible via http://localhost/mysite/site/assets/files/1020/myimage.jpg. If you rename or delete that image and users keep visiting that link, they will obviously get a 404 error page. Instead of displaying the 404 error page you can redirect them to the file's owning page by adding the code below at the top of your 404.php template:
    $url = $_SERVER["REQUEST_URI"];
    $pattern = "@site/assets/files/(\d+)/@";
    if (preg_match($pattern, $url, $pageid)) {
        $fp = $pages->get($pageid[1]); // get the page id from the file path
        if($fp instanceof RepeaterPage) $fp = $fp->getForPage(); // if the file is inside a repeater item, get the page the repeater belongs to
        if($fp->viewable()) $session->redirect($fp->url, false); // redirect only if the page is viewable
        // false = 302 temporary redirect - true = 301 permanent redirect
    }
    1 point
  22. Amazing... I just asked my boss if he still has something; he told me that most of it went to the trash some time ago, but that he still has a Sinclair ZX-81 with 1KB of RAM and a 16KB RAM extension (http://oldcomputers.net/zx81.html). I will have an answer tomorrow. If you're interested, just tell me. As for the C64, it looks like we can find some machines here in France. After that, the hardware part...
    1 point
  23. Seems like a cool and smart son. I currently do not have any C64, but if he likes chiptune music, maybe these Arduino projects could help: https://github.com/stg/Squawk https://github.com/blakelivingston/DuinoTune And this tracker for Game Boys: https://www.littlesounddj.com/lsd/index.php
    1 point
  24. https://processwire.com/api/ref/sanitizer/filename/
    1 point
  25. @Michkael For now, you can turn off the 'Repeater dynamic loading (AJAX) in editor' setting in the repeater field settings. Also, it's worth testing in a clean Chrome or Firefox without extensions.
    1 point
  26. Hi @kongondo, thanks for the updates. Do you have a rough idea of when an initial release will be available? I'd need to add a simple store to a PW site after the holidays. You said there won't be an upgrade path, so I'd rather not use V1 anymore.
    1 point
  27. Thanks. The problem is that editors would have to write this in a Google tool (I don't know...) and I can't urlencode it in a template. Perhaps I'll write a tool for them, or they can use an online tool for the encoding. But GET parameters seem like the better option.
    1 point
  28. Thx for sharing that @Beluga! Do you think the ag-Grid core team could be interested in your solution? Maybe they can implement it somehow?
    1 point
  29. Thank you all for your responses. This does in fact help! Turns out it wasn't a curl issue, but an attempt to store an XML object in the session.
    1 point
  30. Well, it's true that non-technical clients could have some trouble understanding this language, but I believe they often have a technical counterpart who could understand it. Still, tools that generate documentation pages, and maybe even migration scripts, would be cool to have.
    1 point
  31. Yep, it works perfectly. Thanks again. By the way, in case you don't know: if you re-order the rules and then press submit, the new order doesn't get saved.
    1 point
  32. Actually, an even better option is probably to use $filePage, as this will automatically choose the field from the repeater if the image is uploaded to a repeater image field, or the main $page if the image is uploaded to an image field on the main page. Does that make sense?
    1 point
  33. Turns out it already works. Just use $repeaterPage instead of $page. Let me know if it works OK for you.
    1 point
  34. Does this help? https://stackoverflow.com/questions/33758126/how-to-store-xml-obj-in-php-session-variable It might be related to the need to serialize the xml before storing. Is $xml->customer_id definitely a string?
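In ProcessWire terms, a minimal sketch of that serialize-before-store idea might look like this (SimpleXML is assumed, since $xml->customer_id is being read like a SimpleXML property; the session key name is made up):
    // storing: keep the raw XML string in the session rather than the SimpleXMLElement object
    $session->set('customer_xml', $xml->asXML());

    // retrieving later: rebuild the object from the stored string
    $xml = simplexml_load_string($session->get('customer_xml'));
    $customerId = (string) $xml->customer_id; // cast explicitly so you work with a plain string, not a SimpleXMLElement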
    1 point
  35. Probably not news to anyone familiar with WebP, but I just tried it and the reduction in file size is impressive indeed. WebP image on the left, generated from a variation with 100% quality; JPG on the right with the default PW quality setting. Interesting that there is a slight but perceptible difference in colour though.
    1 point
  36. In fact I misspoke in my previous post. I've been using the new version but on a different local site. On the site I'm currently on, I have the older Tracy :-). Yeah; now I seem to remember this. It was of course on the other site where I had the new Tracy. I even played around with Adminer, I recall. Good to know, thanks.
    1 point
  37. I experienced the same issue using the PW Upgrades module. I think it is Windows-specific and relates to a bug introduced by Ryan trying to resolve this issue: https://github.com/processwire/processwire-issues/issues/704 In recent dev versions PW is not able to delete files on Windows. See also:
    1 point
  38. New version of Tracy includes 4.7.0
    1 point
  39. Hey @Tom. - I am not sure of your exact goal here - not sure if the session storage is for production use or just while developing/testing, but @bernhard and I just put this together: https://processwire.com/talk/topic/12208-tracy-debugger/?do=findComment&comment=176842 - which came about while he was working with FoxyCart, and since you are as well, I just thought it might be worth mentioning.
    1 point
  40. Ah, a quick test showed that I can use this already with my local dev environment:
    $cai3 = $page->croppable_images->first()->getCrop('thumb100');
    // WebP with the GD lib bundled with PHP 7
    $im = imagecreatefromjpeg($cai3->filename);
    imagewebp($im, str_replace('.jpg', '.webp', $cai3->filename));
    imagedestroy($im);
    1 point
  41. Thanks bernhard, great writeup! In case it helps someone else, here are the guides I use (which are very similar): https://akrabat.com/the-beginners-guide-to-contributing-to-a-github-project/ and the follow-up https://akrabat.com/the-beginners-guide-to-rebasing-your-pr/
    1 point
  42. @teppo <aside>Dude, you threw me off with that new avatar. I was like, who's this replying to Teppo's post; I haven't 'seen' them before...</aside>
    1 point
  43. I've just created a frontend theming module that takes the source files of UIkit (in a separate module, to be up-/downgradeable), injects custom LESS files as you want and parses them on the fly with a PHP LESS parser. That works really well. It's not something I can release soon, but maybe someone wants to take the same approach for the backend theme (a rough sketch of the idea is below). I just had a look, and this approach would also be possible for the backend, as Ryan is also using LESS to compile the PW backend CSS: https://github.com/ryancramerdesign/AdminThemeUikit/blob/master/uikit/custom/pw.less https://github.com/ryancramerdesign/AdminThemeUikit#uikit-css-development-with-this-admin-theme
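Not the actual module, but a minimal sketch of the parse-on-the-fly idea described above, assuming the leafo/lessphp lessc class installed via Composer; the paths, file names and import directory are illustrative:
    // recompile the stylesheet only when the custom LESS file changes
    $customLess = $config->paths->templates . 'less/custom.less';   // custom variables/overrides
    $cssFile    = $config->paths->assets . 'css/frontend-theme.css';

    if(!is_file($cssFile) || filemtime($customLess) > filemtime($cssFile)) {
        require_once $config->paths->site . 'vendor/autoload.php';  // lessphp autoloaded via composer
        $less = new \lessc();
        $less->setImportDir([$config->paths->site . 'modules/Uikit/src/less/']); // hypothetical location of the UIkit sources
        $css = $less->compile("@import 'uikit.less';\n" . file_get_contents($customLess));
        file_put_contents($cssFile, $css);
    }
    echo "<link rel='stylesheet' href='{$config->urls->assets}css/frontend-theme.css'>";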
    1 point
  44. I strongly support @LostKobrakai's approach. JSON looks nice at first sight, but we already have a powerful API and we MUST use it. JSON is basically just another syntax; it's some kind of "API", but a lot less powerful. It's also not easier, IMHO, because when using the API we have IntelliSense in our IDE - you don't get that with JSON. Updates are a pain using JSON. It might be easy to set up new fields and templates, but what if you want to set a new field width in template context? How would that look in JSON? It's as easy as this with migrations:
    $this->editInTemplateContext('basic-page', 'headline', function(Field $f) {
        $f->columnWidth = '50';
    });
The example is also in the screencast. Another one: how would you create pages in JSON? We know the problem because we already have this tool built into PW. You'll always end up with collision warnings needing some more user input (add this field after that, do you want to overwrite or rename, etc.). You can handle all of that with migrations. It is also quite easy to find the correct property names now that we have Tracy (see the end of the screencast), and it's also easy with IntelliSense in your IDE. Where I see potential for the future is that migrations could be used in other modules as well. I think it would make sense to move the migrations feature into the core, to have a solid base that is consistent, bullet-proof and maintained together with PW. I think it would make sense to have migrations inside modules, not only in one place. We could then build modules, ship some migrations with them (eg for creating pages and fields) and run/revert those migrations from within the module. I think that solution would be way better than the JSON config approach and I'd love to see this as a huge update to PW in 2019, @ryan. PS: Also take a look at the great docs and examples here: https://lostkobrakai.github.io/Migrations/examples/
    1 point
  45. Good objections that I had never thought of before! If only I could edit my post, I'd correct the tutorial...
    1 point
  46. @kongondo Looking forward to seeing v2! I have v1 but I'll wait to see v2 before I move my shops away from snipcart and bigcartel.
    1 point
  47. Found a problem: when the website is on a different port number (http://127.0.0.1:81) the backup fails. The logs list:
    2018-09-07 12:53:33 admin /processwire/setup/duplicator/?action=backup_n… - package build failed.
    2018-09-07 12:53:33 admin /processwire/setup/duplicator/?action=backup_n… - an error occured during package build.
    2018-09-07 12:53:33 admin /processwire/setup/duplicator/?action=backup_n… - an error occured while building the ProcessWire structure.
It is adding the colon (:) into the filename of the zips. I changed the code in DupUtil.php on line 48 to filter the colon (:) out and replace it with a period.
FROM:
    $filename = $date . '-' . str_replace(' ', '_', $name);
TO:
    $filename = $date . '-' . str_replace(':', '.', str_replace(' ', '_', $name));
There may be a better place to put this, but it does the trick! Thanks for the plugin - it's my go-to for securing backups. Yours truly, Robert
    1 point
  48. v0.0.7 released: Adds support for FieldtypeFieldset, FieldtypeFieldsetGroup and FieldtypeFieldsetTab. @Federico, hopefully this update works for your use case.
    1 point