
Leaderboard

Popular Content

Showing content with the highest reputation on 06/15/2019 in Posts

  1. Version 0.8.0 has been released:
     - Adds Facebook share preview for content editors when editing Opengraph meta data
     - Adds support to extend the rendered meta title with additional information such as the domain or site name (#11)
     - Renders structured data (JSON-LD) for breadcrumbs via the new group "structuredData" (#10)
     - Adds a new meta group "structuredData" which will handle more types of structured data in the future
     Happy weekend everyone! Cheers
    7 points
  2. @ryan, thanks for the new features! In the past I've worked on modules that needed to store data for some pages and I had to create a custom DB table for that - the new page meta feature will avoid this and make module development easier. Regarding the Lister bookmarks feature, it would be nice if bookmarks were able to store filter rows where the value is empty. This is so that a bookmark could include rows that don't affect the results until a user enters a value, but are sitting there ready to use when needed. At the moment any rows without a value are not stored in the bookmark. I know that achieving a setup like this is possible with a Lister Pro instance or with a hook for the core Lister but it would be nice to have the option in Lister bookmarks too.
    6 points
  3. UPDATE 2019-06-15
     The taxes repeater field in the module config editor was updated:
     - changed to a grid view for more flexibility
     - added field validation
     - prepared the taxes-repeater field code to become a standalone, flexible Inputfield module for general use (an array/JSON repeater input field which stores its values in a single text field)
     Taxes handling is now complete! SnipWire acts as a fully flexible taxes provider for Snipcart (we use webhooks for this). No configuration is needed in the Snipcart dashboard. Snipcart orders are now fully working (except shipping handling).
     The sample shop templates got an update:
     - customer login/logout
     - customer dashboard
     - link/button to view order history and subscriptions
     - editing of customer profile
     Other updates and fixes:
     - The SnipWire Dashboard was updated (charts, orders, customers, ...) - see screenshot below. The Dashboard fetches its data from Snipcart via cURL multi; the response time for a fresh load is now under 2 seconds.
     - The webhooks handler now supports all Snipcart events via hookable methods.
     - SnipWire now supports all major admin themes (Uikit, Reno and Default).
     - All module classes/files are more structured (e.g. separate helper and service classes).
     - Under the hood many bugs were fixed and code was updated to prevent unexpected errors.
     - Added a crisp SVG logo for all SnipWire back-end pages.
     Screen recording of the updated taxes repeater: Screenshot of SnipWire Dashboard:
    5 points
  4. Here's some JS for that, for a Table field named "table_select_once". Add the JS any way that suits you - with AdminOnSteroids, a hook before Page Edit execute, etc.
     $(function() {
         // Disable options already selected in another table row
         function disableSelectedOptions() {
             var $selects = $('.Inputfield_table_select_once select');
             $selects.find('option').prop('disabled', false);
             $selects.each(function() {
                 var $selected = $(this).find('option[value!=""]:selected');
                 if($selected.length) {
                     $selects.not(this).find('option[value="' + $selected.val() + '"]').prop('disabled', true);
                 }
             });
         }
         // Disable selected options on DOM ready
         disableSelectedOptions();
         // Disable selected options when select changes
         $(document).on('change', '.Inputfield_table_select_once select', disableSelectedOptions);
     });
    5 points
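The post above mentions loading the JS via "a hook before Page Edit execute". For anyone unsure what that looks like, here is a minimal sketch you could drop into /site/ready.php - the script path admin/table-select-once.js is only an assumed location for this example, adjust it to wherever you save the snippet:

    // In /site/ready.php: add the JS file to the admin before Page Edit renders.
    // The file path below is an assumption for this example.
    $wire->addHookBefore('ProcessPageEdit::execute', function(HookEvent $event) {
        $config = $event->wire('config');
        $config->scripts->add($config->urls->templates . 'admin/table-select-once.js');
    });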
  5. This is not an easy question, so the answer isn't easy either. Personally I would prefer Git submodules, as that would be less effort for users who have neither Git nor Composer experience. Also, Git takes very little effort to install and get running; Composer seems to take more. So, for users who have both capabilities, I would also go with option 3, the Composer dependencies. My conclusion: it depends on your main target group.
    4 points
  6. Repeater Images
     Adds options to modify Repeater fields to make them convenient for "page-per-image" usage. Using a page-per-image approach allows for additional fields to be associated with each image, to record things such as photographer, date, license, links, etc. When Repeater Images is enabled for a Repeater field the module changes the appearance of the Repeater inputfield to be similar (but not identical) to an Images field. The collapsed view shows a thumbnail for each Repeater item, and items can be expanded for field editing.
     Screencast
     Installation
     Install the Repeater Images module.
     Setup
     - Create an image field to use in the Repeater field. Recommended settings for the image field are "Maximum files allowed" set to 1 and "Formatted value" set to "Single item (null if empty)".
     - Create a Repeater field. Add the image field to the Repeater. If you want additional fields in the Repeater, create and add these also.
     Repeater Images configuration
     - Tick the "Activate Repeater Images for this Repeater field" checkbox.
     - In the "Image field within Repeater" dropdown select the single image field. You must save the Repeater field settings to see any newly added image fields in the dropdown.
     - Adjust the image thumbnail height if you want (unlike the core Images field there is no slider to change thumbnail height within Page Edit).
     Note: the depth option for Repeater fields is not compatible with the Repeater Images module.
     Image uploads feature
     There is a checkbox to activate image uploads. This feature allows users to quickly and easily add images to the Repeater Images field by uploading them to an adjacent "upload" field.
     To use this feature you must add the image field selected in the Repeater Images config to the template of the page containing the Repeater Images field - immediately above or below the Repeater Images field would be a good position. It's recommended to set the label for this field in template context to "Upload images" or similar, and to set the visibility of the field to "Closed" so that it takes up less room when it's not being used. Note that when you drag images to a closed Images field it will automatically open. You don't need to worry about the "Maximum files allowed" setting because the Repeater Images module overrides this for the upload field.
     New Repeater items will be created from the images uploaded to the upload field when the page is saved. The user can add descriptions and tags to the images while they are still in the upload field and these will be retained in the Repeater items. Images are automatically deleted from the upload field when the page is saved.
     Tips
     - The "Use accordion mode?" option in the Repeater field settings is useful for keeping the inputfield compact, with only one image item open for editing at a time.
     - The "Repeater item labels" setting determines what is shown in the thumbnail overlay on hover. Example for an image field named "image": {image.basename} ({image.width}x{image.height})
     https://github.com/Toutouwai/RepeaterImages
     https://modules.processwire.com/modules/repeater-images/
    3 points
  7. ProcessWire 3.0.133 adds a useful new $page->meta() method for a new type of page-specific persistent data storage, adds the ability for users to create their own bookmarks in Lister, and has a handy and time saving update for the asmSelect input type. Read on for all the details, examples and screenshots: https://processwire.com/blog/posts/pw-3.0.133/
    3 points
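A quick sketch of the $page->meta() storage described in the post above, based on the examples in the linked blog post (the 'views' and 'color' keys are made up for illustration):

    // Set values (persisted in the database, independent of the page's regular fields)
    $page->meta()->set('views', 42);
    $page->meta('color', 'blue'); // shorthand setter

    // Get values
    $views = $page->meta('views');
    $all   = $page->meta(); // WireDataDB containing all meta values for this page

    // Remove a value
    $page->meta()->remove('views');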
  8. I forgot to write an update here. We have made some minor releases and one major release since 1.0.0, with these changes:
     - fixed some log warnings from some Repeater fields
     - the module can now be installed via Composer
     - added support for $config->elasticsearchFeederConnectionOverride
     - better support for self-hosted ElasticSearch servers
     - uses PW 3.0.133's new $page->meta() feature instead of creating fields for indexed pages
     - CI tests via circleci.com and peridot-php
     The current version is 1.2.0, and since we use $page->meta() the module now requires PW 3.0.133. A live production demo will follow in the next 1-2 weeks.
    3 points
  9. You'd definitely have to store the data somewhere. The Forms API itself provides the form only; it doesn't save anything yet. Depending on your needs you have a number of options:
     - Saving the data as config for your module. ProcessWire allows you to save config data for any given module with $modules->saveConfig(), and then retrieve it with $modules->getConfig(). This is a really simple approach, and likely what you'd want here.
     - If you're using the latest development version of ProcessWire, there's now a brand new $page->meta() method that you can use to store any data on a specific page. In my opinion this would be a great use case for that method, but again: it's only available in the very latest development version of ProcessWire (3.0.133).
     - Saving your data in a custom database table (using the $database API variable). This requires you (among other things) to create a custom database table, and if possible I'd suggest avoiding it - usually it's better to use the API to store data as ProcessWire values. That being said, if you look at the ProcessChangelogHooks install() method, you can see an example of how to create a custom database table automatically when a module is installed.
     There are other ways as well, but these are the ones I think fit your needs best.
     Just for the record, I'm moving this thread to the "Module/Plugin Development" section of the forum. The "Modules/Plugins" section is intended for dedicated support threads for existing (published) modules only, and any development-related questions should be posted to the development subforum instead.
    3 points
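To make the first option from the post above concrete, here is a minimal sketch using module config data ('MyFormModule' and the 'submissions' key are hypothetical names for this example):

    // Read existing config data for the module, append an entry, and save it back.
    $modules = wire('modules');
    $data = $modules->getConfig('MyFormModule') ?: [];
    $data['submissions'][] = [
        'email'   => 'visitor@example.com',
        'created' => time(),
    ];
    $modules->saveConfig('MyFormModule', $data);

    // Later: read the stored entries back
    $stored = $modules->getConfig('MyFormModule');
    $submissions = isset($stored['submissions']) ? $stored['submissions'] : [];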
  10. I wrote a script a while back to search for this kind of thing, though it doesn't spider GitLab, PW gists or forum code yet. I've made a start at putting it online at pwgeeks.com. Currently it only has about 60 of the over 1,300 items it has found to date - but as I massage it back to life, the directory will fill up.
    3 points
  11. We recently rebuilt the Architekturführer Köln (architectural guide Cologne) as a mobile-first JavaScript web app, powered by VueJS in the frontend and ProcessWire in the backend. Concept, design and implementation by schwarzdesign!
      The Architekturführer Köln is a guidebook and now a web application about architectural highlights in Cologne, Germany. It contains detailed information about around 100 objects (architectural landmarks) in Cologne. The web app offers multiple ways to search through all available objects, including:
      - An interactive live map
      - A list of objects near the user's location
      - Filtering based on architect, district and category
      - Favourites saved by the user
      The frontend is written entirely in JavaScript, with the data coming from a ProcessWire-powered API-first backend.
      Frontend
      The app is built with the Vue framework and compiled with Webpack 4. As a learning exercise and for greater customizability we opted not to use Vue CLI, and instead wrote our own Webpack config with individually defined dependencies.
      The site is an SPA (Single Page Application), which means all internal links are intercepted by the Vue app and the corresponding routes (pages) are generated by the framework directly in the browser, using data retrieved from the API. It's also a PWA (Progressive Web App), the main feature of which is that you can install it to your home screen on your phone and launch it from there like a regular app. It also includes a service worker which catches requests to the API and returns cached responses when the network is not available. The Architekturführer is supposed to be taken with you on a walk through the city, and will keep working even if you are completely offline.
      Notable mentions from the tech stack:
      - Vue
      - Vue Router for the SPA functionality
      - VueX for state management and storage/caching of the data returned through the API
      - Leaflet (with Mapbox tiles) for the interactive maps
      - Webpack 4 for compilation of the app into a single distributable
      - Babel for transpilation of ES6+
      - SASS & PostCSS with Autoprefixer as a convenience for SASS in SFCs
      - Google Workbox to generate the service worker instead of writing lots of boilerplate code
      - Bootstrap 4 is barely used here, but we still included its reboot and grid system
      Backend
      The ProcessWire backend is API-only; there are no server-side rendered templates, which means the only PHP template is the one used for the API. For this API, we used a single content type (template) with a couple of pre-defined endpoints (URL segments); most importantly we built endpoints to get a list of all objects (either including the full data, or only the data necessary to show teaser tiles), as well as individual objects and taxonomies. The API template which acts as a controller contains all the necessary switches and selectors to serve the correct response in <100 lines of code.
      Since we wanted some flexibility regarding the format in which different fields were transmitted over the API, we wrote a function to extract arbitrary page fields from ProcessWire pages and return them as serializable standard objects. There's also a function that takes a Pageimage object, creates multiple variants in different sizes and returns an object containing their base path and an array of variants (identified by their basename and width). We use that one to generate responsive images in the frontend. Check out the code for both functions in this gist.
      We used native ProcessWire data wherever possible, so as not to duplicate that work in the frontend app. For example:
      - Page names from the backend translate to URLs in the frontend in the form of route parameters for the Vue Router
      - Page IDs from ProcessWire are included in the API responses; we use those to identify objects across the app, for example to store the user's favourites, and as render keys for object lists
      - Taxonomies have their own API endpoints, and objects contain their taxonomies only as IDs (in the same way ProcessWire uses Page References)
      Finally, the raw JSON data is cached using the cache API and this handy trick by @LostKobrakai to store raw JSON strings over the cache API.
      Screenshots
    2 points
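Not the project's actual controller, but a rough sketch of how a ProcessWire API template with URL segments (as described in the post above) might look - the 'object' template name and the returned fields are assumptions:

    <?php namespace ProcessWire;
    // /site/templates/api.php - "Allow URL segments" must be enabled for this template.
    header('Content-Type: application/json');

    if($input->urlSegment1 === 'objects') {
        // Teaser list: only the data needed for tiles
        $out = [];
        foreach($pages->find("template=object") as $p) {
            $out[] = ['id' => $p->id, 'name' => $p->name, 'title' => $p->title];
        }
        echo json_encode($out);

    } elseif($input->urlSegment1 === 'object' && $input->urlSegment2) {
        // Single object, identified by its page name
        $name = $sanitizer->pageName($input->urlSegment2);
        $p = $pages->get("template=object, name=$name");
        echo $p->id ? json_encode(['id' => $p->id, 'title' => $p->title]) : json_encode(null);

    } else {
        http_response_code(404);
        echo json_encode(['error' => 'Unknown endpoint']);
    }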
  12. So, I've been working on a site profile and came across something I'd like to get some input on. Like probably many other site profiles out there, this one also depends on (and thus includes) third-party modules. What I'm wondering is: what would be the best way to include them with the site profile? I can think of three possible/feasible approaches:
      1. Bundle the modules into the site profile as-is, i.e. include the full module directories. Benefits: just install the site profile and the module is instantly available. No other dependencies, and it doesn't require any knowledge beyond "how to install a site profile". Drawbacks: updating the module requires updating the files manually, or updating via Admin (which may not be permitted, and may not be a suitable option in case the whole site is stored in a version control system). Unless the site profile is manually updated from time to time, it may initially contain very old module versions when installed.
      2. Add module repositories as Git submodules. Benefits: modules are easy to install and update with Git, just run "git submodule init" followed by "git submodule update". Drawbacks: Git and some basic understanding of its command-line use is required, and simply installing the site profile itself isn't enough to get it up and running.
      3. Add modules as Composer dependencies. Benefits: same as with the submodule approach, but perhaps a bit more flexible - possible to define minimum versions, minimum stability, etc. Also: this would be a familiar approach for many PHP devs, while something like Git submodules seems to be less commonly used. Drawbacks: obviously requires Composer and command-line access, plus a basic understanding of what Composer is and how it works. Module directories won't contain a .git folder, so they can't be updated with Git commands (not sure if this is a real drawback, though).
      What would you prefer, or which approach do you currently use? Any other ideas? I'm kind of leaning towards the Composer method, mainly because I'll likely have other Composer dependencies as well, so it would actually require fewer steps than including modules as Git submodules. Still, the submodule approach does have certain benefits, like easier updates (sort of, at least).
      Note: since there's no separate forum section for site profiles, and technically this may apply to literal modules as well, I'm posting this in the Module/Plugin Development area. If any of the moderators disagrees, I'm happy to move the thread to another area instead.
    2 points
  13. Thanks, I just used the z-index: 11 you suggested in v2.017. There's a very high z-index on the datepicker calendar div added by AOS that was needed back then. This is still above the masthead on scroll, but I think it's better not to change that. There's also a bunch of new CKEditor plugins added in this version:
      - Color Button
      - Color Dialog
      - Table Resize
      - Table Tools
      - Table Tools Toolbar
    2 points
  14. @Wanze please force the module directory update
    2 points
  15. $page->meta() is great, I will use it in the coming version of https://modules.processwire.com/modules/elasticsearch-feeder/
    2 points
  16. You have three forms on one page, so provide a hidden element with a unique name (as @horst suggests), then test for the form name when submitted and process accordingly.
      The page with your forms...
      <form>
          <input type="hidden" name="form1" value="form1" />
          <!-- Your form data collection fields -->
      </form>
      ...
      <form>
          <input type="hidden" name="form2" value="form2" />
          <!-- Your form data collection fields -->
      </form>
      ...
      <form>
          <input type="hidden" name="form3" value="form3" />
          <!-- Your form data collection fields -->
      </form>
      Then within your page, test for each...
      <?php
      if( $input->post->form1 == "form1" ) {
          // process your data and send your message for option 1
      } elseif( $input->post->form2 == "form2" ) {
          // process your data and send your message for option 2
      } elseif( $input->post->form3 == "form3" ) {
          // process your data and send your message for option 3
      }
      // render your page as normal
    2 points
  17. Yeah, it's pretty bare-bones at the moment. A form to allow folks to manually submit finds/report classification errors would be neat too.
    1 point
  18. Not sure if you have in mind a site profile for generally sharing with the community, or something that is made available only to some specific users/clients. If it's the former I would think option 1 would be necessary if you want the profile to be usable by a wide audience. I expect there are many here who are not running Git or Composer. And as you say you would update the profile from time to time (not really different in that regard from a module you have authored). If it's the latter then another option could be @bernhard's Kickstart tool: You would provide users with Kickstart settings that point to a remote profile zip that does not include any third-party modules within it, and then add the third-party modules in the "recipe". That way a user who is installing the profile always gets the latest module versions. I suppose you could use the Kickstart approach for a generally shared site profile too but it comes back to the audience thing again. If PW beginners should also be able to use the profile then it's probably best if the profile can be installed as per the standard procedure without them needing to learn any additional tools.
    1 point
  19. Ok, I think the directory is about there now. Has about 1400 assets - many of which are not available via the modules directory. Enjoy.
    1 point
  20. Sorry @teppo, I am new here. Will learn the rules... Thank you.
    1 point
  21. Googlebot is clever these days. JS content is no problem for indexing anymore (React, Angular, Vue, etc.). Meta tags and page titles are adjusted for every page view (something a lot of developers simply forget when building SPAs). Of course, you could always improve SEO further, e.g. by creating a sitemap.xml... https://www.smashingmagazine.com/2019/05/vue-js-seo-reactive-websites-search-engines-bots/ ^ a good read about the subject
    1 point
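Since the post above mentions creating a sitemap.xml, here is a minimal sketch of how that could be done in a dedicated ProcessWire template (the template name, page location and selector are assumptions; a larger site would want caching and stricter filtering):

    <?php namespace ProcessWire;
    // /site/templates/sitemap-xml.php - assigned to a page at /sitemap.xml/ (name is an assumption).
    header('Content-Type: text/xml');
    echo '<?xml version="1.0" encoding="UTF-8"?>' . "\n";
    echo '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">' . "\n";

    // All published pages outside the admin tree (page ID 2 is the admin root)
    $items = $pages->find("has_parent!=2, id!=2, status<" . Page::statusUnpublished);
    foreach($items as $item) {
        if(!$item->viewable()) continue; // skip pages guests can't view
        echo "<url><loc>" . $item->httpUrl . "</loc>";
        echo "<lastmod>" . date('Y-m-d', $item->modified) . "</lastmod></url>\n";
    }
    echo "</urlset>";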
  22. @Robin S Many thanks for this. I will try it soon!
    1 point
  23. This article goes through many different techniques: https://css-tricks.com/deployment/ My approach is using rsync inside a bash script: https://css-tricks.com/deployment/#article-header-id-9 @wbmnfktr This is for you: https://css-tricks.com/deployment/#article-header-id-5
    1 point
  24. If I understand right you need to use a repeating fieldtype that contains a Page Reference field and a text field for the unique code. You could use any repeating type such as Repeater, PageTable or ProFields Table, but I think ProFields Table would be the optimal interface. You can use some custom JS in Page Edit to disable options in the Page Reference field that have already been selected in another row of the table - just shout if you need advice for doing that.
    1 point
  25. Hey there! I've been using this module on several sites and it's great! I did notice, however, that it didn't seem to compress as much as the comparable plugin for WordPress. After doing some digging, I realized it's because it's set to retain exif data. For small images, this can increase size considerably, up to double (what would normally compress down to 40KB might be ~80KB with exif maintained). In my site installations, I'm just manually changing line 48 in AutoSmush.module from
      const WEBSERVICE = 'http://api.resmush.it/ws.php?exif=true&img=';
      to
      const WEBSERVICE = 'http://api.resmush.it/ws.php?img=';
      Would it be possible to add this as an option in the plugin?
    1 point
  26. You could try this module! It most surely works with PNG and JPG at least; not sure about BMP.
    1 point
  27. @thibaultvdb I think that by default on XAMPP your ProcessWire installation is in a subdirectory, and I bet the module is giving you an API endpoint like /subdir/api instead of /api. You should ask your friends to help you set up your localhost environment to use a domain like http://mylocalwebsite.local. To get started, you can follow the second answer here: https://stackoverflow.com/questions/16229126/using-domain-name-instead-of-localhost-in-with-https-in-xampp or follow this tutorial to use AcrylicDNS (the best solution IMO - don't be afraid, it's really easy to set up): https://www.ottorask.com/blog/automated-apache-virtual-hosts-on-windows/ Good luck.
    1 point
  28. Very nice. For context, this is like custom fields for images, implemented using repeaters. The request for native custom fields for images is open, and slated for the core one day: https://github.com/processwire/processwire-requests/issues/21 https://processwire.com/about/roadmap/
    1 point
  29. Hi all, I thought that you'd be interested to know we've now launched our third ProcessWire site, called Home of the Blizzard - The Australasian Antarctic Expedition. It's about Douglas Mawson's famous expedition of 1911-1914. As a part of the move to ProcessWire from our previous CMS it has undergone a redesign as well. It has been done completely in-house. https://mawsonshuts.antarctica.gov.au/ It uses Fancybox for the image galleries and modal menu, Plyr for video, and Modernizr (primarily to check for CSS Grid compatibility). It was actually the first PW site we started working on, but then the SCAR COMNAP and Antarctic Jobs sites became priorities so it got pushed back. It's great to have it live at last. The next PW site we launch should be our main site (http://www.antarctica.gov.au)! That's still a work in progress.
    1 point
  30. Hey everyone searching for this topic - if you're using Let's Encrypt with URL validation, don't forget to replace the part blocking dotfiles
      location ~ /\. {
          deny all;
      }
      with a block denying everything BUT .well-known
      location ~ /\.(?!well-known).* {
          deny all;
      }
    1 point
  31. Short answer: ProcessWire differs from many other systems in that it doesn't really generate any markup, and as such there are not that many easy ways to tell that a site is running ProcessWire. This is intentional. Letting everyone know what system you're running is usually fine (and it's a nice hat tip towards that system), but potentially you might also be tipping malicious visitors off on how to best attack your site. With ProcessWire it's easy to go into "stealth mode", so that potential attackers have absolutely no idea what system they're currently targeting. As for builtwith.com, my initial guess would be that their algorithm primarily looks for the "generator" meta tag (or similar headers), or the text "Powered by ProcessWire". At least those seemed to be present on all the sites listed on their site under ProcessWire that I quickly browsed through. They may be looking for other indicators as well, but anything more than that can get a bit tedious. The site mentioned above, isit.pw, gathers a number of clues and then estimates the likelihood of a site being powered by ProcessWire. A lot of times (probably most) it gets its estimate right, but then again it is relatively easy to fool.
    1 point
  32. @bernhard It's a little trick you can do with Google Chrome: In the responsive view mode, select one of the iPhone models in the responsive mode settings bar. In the flyout menu on the upper right, there's an option "show device frame". Only works with some of the devices, I've used "iPhone 6/7/8 Plus". If you take a screenshot (via the same flyout menu) while in responsive design mode with the show device frame option turned on, the device frame will be included in the screenshot.
    1 point
  33. I just released version 0.7.0 of the module:
      - Fixes the ugly label being shown above the "inherit" checkbox for newer ProcessWire versions. I think there is a bug in core, because ProcessWire ignores the fact that the label is marked as "hidden". I had to fix it via CSS for now.
      - Added the possibility to resize the Opengraph image when referencing a page image by specifying a width and/or height.
      - Opengraph image: if the referenced image field is empty and pulls the image from another page (default value), the module now substitutes the default image as well.
      Other than that, the readme now contains a chapter on the various hooks provided by SeoMaestro. This might be interesting if you need to customize the behaviour of the module. Please let me know if you find any issues! Cheers
    1 point
  34. We recently finished the relaunch of camac.de, a client from the software industry focused on controlling and business integration solutions. Concept, design and implementation by schwarzdesign.
      The site is focused on longer content pages combining text and images, with technical information for potential customers. It also offers multiple options for call-to-action elements to maximize conversions. Finally, the site is heavily optimized with caching and minification at every step to be instantly readable even on slow connections.
      Features
      - No hard-coded sections in any template: almost all content types use the same repeater matrix field with several section types. This way, the entire site and all parts of the layout can be moved around and combined in any way imaginable.
      - Custom call-to-action options: two different contact forms, custom external links or a general CTA text.
      - Automatically generated page navigation and human-readable anchors (as seen here)
      - Every page fully loaded and interactive in ~1 second, even on poor mobile connections
      - Automatically generated SEO meta tags, with overwrite fields available on every page
      Modules used
      - ProFields
      - FormBuilder
      - Tracy Debugger
      - Sitemap
      - Duplicator
      - ALIF - Admin Links In Frontend
      - WireMailSMTP
      Content and conversions
      Every visitor is a potential customer - this is why we made sure there are ample opportunities to generate leads. At the end of every page, there are multiple options for the CTA section:
      - A contact form with a customizable message (built with FormBuilder)
      - A download form that allows visitors to download PDFs (e.g. the full article) in exchange for their name and e-mail address
      - Other text + image combinations with custom links or buttons
      The download form was custom coded, as it allows the editor to upload a file specific to the current page and make it available behind a small form. After successfully submitting the form, an e-mail is sent to the site owner, and the file is directly streamed to the client as a download.
      Technical insights
      This is one of the first pages where I used Twig for templating, and it's been a great development experience. With Twig, you get content escaping and much better separation between logic and view/display. I also spent some time working out a solid structure for the Twig templates, with useful defaults and reusable blocks for page and section templates (you can read more about the approach in my recent tutorial on integrating Twig with ProcessWire, part 1 and part 2).
      I also started using Parcel as a lightweight alternative to Webpack, when all I need is to compile a couple of small scripts (for the navigation, the lightbox, a dismissable cookie notice, etc.). What's great about Parcel is that you get bundling of your own code and external libraries out of the box, as well as ES6+ transpilation and minification for production usage. Still, it required no configuration beyond a couple of command-line options. This way, you get one bundled, minified JavaScript file, the same way we produce a minified CSS file with SCSS, but without the additional overhead of configuring Webpack.
      Screenshots
    1 point