
Leaderboard

Popular Content

Showing content with the highest reputation on 03/10/2023 in all areas

  1. This week the focus was on core updates and we have a quality mixture of minor issue fixes, pull request additions, and other improvements in ProcessWire on the dev branch. My favorite addition was from a PR by @matjazp that makes improvements to ProcessWire's pagination module, MarkupPagerNav. See more updates and PRs in the dev branch commit log as well. There's still more to add before we bump the version to 3.0.214, so stay tuned for that next week. By the way, if you've recently launched any new sites in ProcessWire, please add them to our sites directory. I think most of us are already subscribed to the ProcessWire Weekly email, but just in case you aren't, you can subscribe here. The email content is from weekly.pw written by @teppo and is great to read, highly recommended! Thanks and have a great weekend!
    9 points
  2. I'm creating a new topic in response to @cst989's question in the RM thread, as I think this is a common question and misunderstanding when evaluating RockMigrations. It's also easier to communicate in separate threads than in one huge multi-page thread... Hi @cst989, thanks for your question and interest in RockMigrations. This sounds like you have a misconception in your head which is quite common, I guess. Did you watch my latest video on RM, especially this part? https://www.youtube.com/watch?v=o6O859d3cFA&t=576s So why do you think it is a good thing to have one file per migration? I know that this is the way migrations usually work. But I don't think that this is the best way to do it. I'm not saying one way is right and the other is not. I'm just saying I'm having a really, really good time with RockMigrations, and it makes working with PW in a more professional setup (meaning either working in a team and/or deploying PW to multiple locations and of course managing everything with GIT) a lot more fun, a lot faster and more efficient. If we look at how migrations usually work, we can have a look at the other PW migrations module, which works just like usual migration modules do: You create one file per migration and you end up with a list of migrations that get executed one after another. See this screenshot from the module's docs: In my opinion that screenshot perfectly shows one huge disadvantage of that approach: You don't see what's going on. You end up with a huge list of migrations that you can't understand at first sight. In RockMigrations this is totally different. You don't create one file per migration. You put all the necessary migrations where they belong and where - in my opinion - they make the most sense. An example: Let's say we want to add 2 fields to the homepage: "foo" and "bar". Ideally you are already using Custom Page Classes (see my video here https://www.youtube.com/watch?v=D651-w95M0A). They are not only a good idea for organizing your hooks but also your migrations! Just like adding all your HomePage-related hooks into the HomePage page class's init() or ready() method, you'd add the migrations for those fields into the HomePage page class's migrate() method. This could look something like this (see the sketch at the end of this post): Now let's say we develop things further and realise we also need a "bar" field: Do you see what we changed? I guess yes. Now one big difference to a regular migration approach is that you don't write downgrade() or reversion migrations - unless you actually want to revert the changes! In real life I've almost never needed to revert changes. Why? Because you develop things locally and only push changes you really want to have on your production system. If you happen to have to remove some changes that you applied on your dev, it's easy to do, though: You see what we did? Nice! So does everybody else who has access to the project's GIT repo! So all your team mates will instantly see and understand what you did. Pro-tip: You don't even need lines 43-45 if you didn't push those changes to production! If you only created those fields on your local dev, you can simply restore the staging database on your local dev environment and remove the migrations that create the fields.
Pro-tip 2: Also have a look at RockShell; restoring staging or production data is then as easy as "php rockshell db-pull staging". Pro-tip 3: When restoring a DB dump from the staging system it can easily happen that you have data in your database that was created only on the remote and that you don't have on your dev system (like new blog posts, for example). If you then open those new blog posts on your dev system and a post contains images, ProcessWire will not be able to show those images (as only the file path is stored in the DB and not the whole image!). Just add $config->filesOnDemand = "http://yourstagingsite.example.com" to your config.php file and RockMigrations will download those files once PW requests them (either on the frontend or on the backend)! Having all your changes in your git history, you can even jump back and forth in time with your IDE: You could. I thought about that as well. But I think it does not really make sense, and I hope my examples above show why. I'm always open to input though and happy to try to think about it from other perspectives. One final note: I'm not sure if what you say about $rm->watch() makes sense here. If you watch() a file, that means that RM checks its modified timestamp. If that timestamp is later than the last migration that RM ran, then RM will automatically execute the migrations that are in that file. All other files and migrations will be ignored. That makes it a lot more efficient when working on projects that have many migration files in many different places. When triggered from the CLI, though, or if you do a modules refresh, it will always trigger all migrations in all watched files. I hope that makes sense! --- Ok, now really a final note: One HUGE benefit of how RockMigrations works (meaning that you write migrations in page classes or in modules) is that you create reusable pieces of work/code. For example, let's say you work on a website that needs a blog. So you create a blog module, and in that module you have some migrations like this:

<?php
$rm->createTemplate('blogparent');
$rm->createTemplate('blogitem');
$rm->setParentChild('blogparent', 'blogitem');
$rm->migrate([
    'fields' => [...], // create fields headline, date, body
    'templates' => [...], // add those fields to the blogitem template
]);

You'd typically have those lines in Blog.module.php::migrate(). What if you need a blog in another project? Yep --> just git clone that module into the new project and execute migrations! For example, in migrate.php:

<?php
$rm->installModule('Blog');

If you follow a regular migrations concept where all project migrations are stored in a central folder, you can't do that! Of course you don't have to work like this. You can still write all your migrations in the project's migrate.php file, because I have to admit that it is a lot harder to build a blog module that can be reused across different projects than to create one for a single project! It always depends on the situation. But - and now I'll really leave it for today - you could also make your Blog module's migrate() method hookable, and that would make it possible to build a generic blog for all projects and then add field "foo" to your blog in project-a and field "bar" to your blog in project-b. Have fun discovering RockMigrations. I understand it can look frightening at first, but it is an extremely rewarding investment! Ask @dotnetic if you don't believe me.
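A minimal sketch of what the HomePage migration referenced above might look like. The field and template options here, and the way the RockMigrations instance is fetched, are assumptions for illustration, not code from the original post:

<?php namespace ProcessWire;

// site/classes/HomePage.php - assumes RockMigrations and custom page classes are in use
class HomePage extends Page {

    public function migrate() {
        $rm = $this->wire->modules->get('RockMigrations');
        $rm->migrate([
            'fields' => [
                'foo' => ['type' => 'text', 'label' => 'Foo'],
                'bar' => ['type' => 'text', 'label' => 'Bar'],
            ],
            'templates' => [
                // add the new fields to the homepage template
                'home' => ['fields' => ['title', 'foo', 'bar']],
            ],
        ]);
    }
}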
    3 points
  3. A big shout-out and thank-you to @bernhard for continuously improving RockFrontend (and the whole Rock-EcoSystem). I asked for a minify feature and here it is! I am using RockFrontend in every project now as it combines some must-have features for frontend development (IMHO!). Today a new feature was added (or let's say improved, as auto-minify was included before): The Minify Feature (for Styles and Scripts!) See the Wiki entry here. Let's start simple: How do I include stylesheets with RockFrontend in the first place? RockFrontend always had the feature to bundle stylesheets via the styles() function. For example, I am loading some LESS partials into my header like this. You don't have to use LESS files; regular CSS files work the same. Note: Since RockFrontend offers LESS support you can add those files directly, no need to compile them and add the CSS files. For this you have to install the ProcessWire Less module.

<?
$rockfrontend->styles()
    ->add($config->urls->templates . 'styles/less/uikit/_custom-import.less') // add single LESS or CSS file
    ->addAll($config->urls->templates . 'styles/less/project') // point to folder to include files automatically
    ->addAll($config->urls->templates . 'styles/less/custom')
?>

The result in this case is a single compiled CSS file that will be included in your head automatically. RockFrontend is very smart. You don't have to include tons of partial LESS project files here. Just use the addAll() function and point to a folder where your assets are saved, and the module does the import for you. This is what my folder structure looks like. If I create new LESS files in there, they will be added and compiled automatically at runtime. How to minify: For debugging and development purposes I don't use the minify feature. Instead I use it on the staging server exclusively. To generate a minified version of your stylesheet just add minify(true):

<?
$rockfrontend->styles()
    ->add($config->urls->templates . 'styles/less/uikit/_custom-import.less')
    ->addAll($config->urls->templates . 'styles/less/project')
    ->addAll($config->urls->templates . 'styles/less/custom')
    ->minify(true);
?>

If you want to tie the minify call to your debug-mode state, you can do it like this (my preferred solution):

<?
$rockfrontend->styles()
    ->add($config->urls->templates . 'styles/less/uikit/_custom-import.less')
    ->addAll($config->urls->templates . 'styles/less/project')
    ->addAll($config->urls->templates . 'styles/less/custom')
    ->minify(!$config->debug);
?>

That's it! Does minify work with scripts? Yes, exactly the same, but you make use of the scripts() function in this case.

<?
$rockfrontend->scripts()
    ->add($config->urls->templates . 'scripts/script1.js')
    ->add($config->urls->templates . 'scripts/script2.js')
    ->add($config->urls->templates . 'scripts/script3.js')
    ->minify(!$config->debug);
?>

Note that these script files are not bundled (whatever you pass to minify()); instead they each come out as separate minified versions. I find this workflow straightforward, and it combines some of the best features that RockFrontend offers! If you combine this with the awesome autorefresh feature, frontend development becomes a breeze!
    1 point
  4. WebP to JPG: Converts WebP images to JPG format on upload. This allows the converted image to be used in ProcessWire image fields, seeing as WebP is not supported as a source image format. Config: You can set the quality (0 – 100) to be used for the JPG conversion in the module config. Usage: You must set your image fields to allow the webp file extension, otherwise the upload of WebP images will be blocked before this module has a chance to convert the images to JPG format. https://github.com/Toutouwai/WebpToJpg https://processwire.com/modules/webp-to-jpg/
    1 point
  5. Subscribe to podcast RSS feeds and save them as whatever you want. The additional example module ProcessPodcastSubscriptionsEpisodes creates a new page per episode. Download/Install: GitHub: https://github.com/neuerituale/ProcessPodcastSubscriptions Module directory: https://processwire.com/modules/process-podcast-subscriptions/ Composer: composer require nr/processpodcastsubscriptions
    1 point
  6. Following recent discussion about the Latte templating language, I figured I'd give it a try too and put together a Latte renderer for Wireframe: GitHub repository at https://github.com/wireframe-framework/WireframeRendererLatte Composer installation: composer require wireframe-framework/wireframe-renderer-latte In case anyone wants to try this, it would be interesting to hear your thoughts. I'm honestly not sure if I'll be using it much myself, and I built the module without any real Latte experience under my belt, so it's possible that it does things in surprising ways. Let me know if you run into any issues. Syntax-wise, Latte doesn't differ much from regular PHP templates — you just write your PHP code within {curly brackets}, and it gets automatically escaped (with what they call context-aware escaping). They have this thing called n:attributes though, which is actually quite a nice shortcut, especially if you happen to dislike if..else and foreach:

<ul n:if="$page->numChildren(true)" class="menu">
    <li n:foreach="$page->children as $child">
        <a n:tag-if="!$child->hide_from_menu" href="{$child->url}">
            {$child->title}
        </a>
    </li>
</ul>

You can read more about Latte at https://latte.nette.org/en/. For more details about Wireframe renderers and how they are enabled and used, check out https://wireframe-framework.com/docs/view/renderers/.
    1 point
  7. v3.16.0 adds an Xdebug launch.json file to the .vscode folder and a checkbox to control whether you want to use that feature or not. That means if you are using DDEV, all you need to do is check the checkbox and then run "ddev xdebug on". Thx @dotnetic for the hint about Xdebug on DDEV! I've just tried it today and it works great! Compared to my last experience with Xdebug on Laragon, where performance was really bad, the performance in my DDEV container is so good that I even forgot to turn it off today. Page load times without Xdebug: 100ms; with Xdebug: 150ms. And all that with zero config!!! You have to try it. See here how to use Xdebug in VS Code: https://www.youtube.com/watch?v=HrQWtbxY1Hs&t=331s
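For reference, a typical hand-written .vscode/launch.json for the VS Code PHP Debug extension with Xdebug 3 (DDEV's default debug port is 9003) looks roughly like this; this is a generic sketch and not necessarily identical to the file the module generates:

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Listen for Xdebug",
            "type": "php",
            "request": "launch",
            "port": 9003
        }
    ]
}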
    1 point
  8. I haven't read everything carefully and I have never used the feature myself, but TracyDebugger has a feature to render different template files with a customizable suffix. Here is how adrian implemented it: https://github.com/adrianbj/TracyDebugger/blob/f2fbcd88fcecad45b2ce7b428cfe80c1527c1122/TracyDebugger.module.php#L1481-L1494
    1 point
  9. Would be nice to update the Fontawesome icons in the PW core, so newer icons can be used for labels in the backend. The current version 4 is quite old and has a very limited set of icons. Version 6 also has a free version with more icons. The implementation seems to be very similar, so it should be easy to implement. The icons can be downloaded here.
    1 point
  10. Just something to note, a lot of core wire functions include hardcoded markup here, so hooking into __render is probably the cleanest way to address most of these replacements from a markup output stance - that's what I plan to do. But there's lots of little bits to check - the fa 4 icons appear all over the place. Certain places (like the installation pages) hardcode echo statements and so I think those will always need to be FA4 unless Ryan steps up to 6 completely. If you are just looking for a way to include fontawesome, I'm sure you know this already - but you've got RockAwesome in the modules library if you just want to be able to assign an icon from your library of choice to use on the front end - it supports whatever version you are able to link the lookup reference to in settings. https://processwire.com/modules/fieldtype-rock-awesome/
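To make the __render hook idea above concrete, here is a rough sketch; the hooked method and the FA4-to-FA6 class mapping are illustrative assumptions, not a complete or tested solution:

<?php namespace ProcessWire;

// site/ready.php - replace a few FA4 class names with FA6 equivalents in rendered inputfield markup
$wire->addHookAfter('Inputfield::render', function($event) {
    // illustrative mapping only; a real solution needs a much larger map
    $map = [
        'fa-pencil' => 'fa-solid fa-pencil',
        'fa-times'  => 'fa-solid fa-xmark',
    ];
    $event->return = str_replace(array_keys($map), array_values($map), $event->return);
});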
    1 point
  11. I'm actually working on something that allows for this in a limited capacity with my admin theme. I kickstarted with v5 back in the day and still subscribe, so my solution will reach back to 4, 5 and 6 and include Pro support, although I don't think I am going to support Duotone unless I can find a good way to get theme colors in there. I haven't looked too deeply into it.
    1 point
  12. Hi @kongondo First of all, thank you for the awesome plugin! It would be really awesome if the selection could run through an interface with a folder structure. And it would be perfect if one could define per template in which folder the selection starts. An example: Assuming we have a folder "Employee photos" and a folder "Customers" (or a lot more folders and subfolders), it would be perfect if one could give the input field the information that the image selection on employee detail pages starts in the folder "Employee photos". That would be much easier for the clients/editors to work with. And without a folder structure, it would be great if one could define per template which category or tag the input field starts with, i.e. to have a pre-defined filter setting per template.
    1 point
  13. @kongondo For me this can be closed. With Padloper 2 version 8, the latest PW 3.0.213 and PHP 8.2 I have no more issues here, so I am assuming something that was done in the PW core fixed it.
    1 point
  14. Thank you @bernhard for this extensive post. I use RockMigrations every day and love it, as I have stated often before. It isn't as hard as it looks at first. Just try it out, and if you should stumble over something, then just ask for support here in the forum. RockMigrations saves a lot of time and makes it easy to develop features in your dev environment and then, when a feature is finished, push the changes to the live server with migrations being executed automatically, which creates all fields, templates and even pages and content. I could live without it, but that would be a sad life. So... start using it now!
    1 point
  15. Got this!!

SELECT pages.id, pages.parent_id, pages.templates_id
FROM `pages`
JOIN field_event_recurring_dates AS field_event_recurring_dates
    ON field_event_recurring_dates.pages_id = pages.id
    AND (((field_event_recurring_dates.data >= '2023-03-09 00:00:00')))
LEFT JOIN field_event_recurring_dates AS field_event_recurring_dates__blank1
    ON field_event_recurring_dates__blank1.pages_id = pages.id
WHERE (pages.templates_id = 50)
    AND (pages.status < 1024)
    AND (((field_event_recurring_dates__blank1.data IS NULL
        OR field_event_recurring_dates__blank1.data <= '2023-04-08 10:36:35')))
GROUP BY pages.id -- [525.6ms]

Do you think the id column is still hurting performance in some way? The fieldtype's schema right now is:

$schema = parent::getDatabaseSchema($field);
$schema['id'] = 'INT UNSIGNED NOT NULL AUTO_INCREMENT';
$schema['data'] = 'datetime NOT NULL';
$schema['keys']['primary'] = 'PRIMARY KEY (id)';
$schema['keys']['pages_id'] = 'UNIQUE (pages_id, sort)';
$schema['keys']['data'] = 'KEY data (data)';

Conclusion: I need to wrap my head around MySQL properly, my favorite CMS has got me too spoiled haha.
    1 point
  16. @elabx First thing to check is that you've got an index on your `data` column for that field. If not, then execute this query:

ALTER TABLE field_event_recurring_dates ADD INDEX data (data)

I'm also wondering about the other indexes on your Fieldtype. You've got an "id" as the primary key, which is unusual for a Fieldtype. Usually the primary key is this for a FieldtypeMulti:

$schema['keys']['primary'] = 'PRIMARY KEY (pages_id, sort)';

So I'm guessing you may not have the index on pages_id and sort, which would definitely slow it down, potentially a lot. If you don't need the "id" then I would probably drop it, since you don't really need it on a FieldtypeMulti. The only Fieldtype I've built that keeps an id is FieldtypeTable, and it uses the 'data' column for that id and then uses this in its getDatabaseSchema():

$schema['data'] = 'INT UNSIGNED NOT NULL AUTO_INCREMENT';
$schema['keys']['primary'] = 'PRIMARY KEY (data)';
$schema['keys']['pages_id'] = 'UNIQUE (pages_id, sort)';
unset($schema['keys']['data']);

Most likely the above is the primary reason for the bottleneck. But another more general reason that your query may be slow is because the $pages->find() selector you are using doesn't filter by anything other than your event_recurring_dates field. In a real use case, usually you'd at least have template(s) or a parent in that selector, or you'd be using children(), etc. Any of those should improve the performance. Without any filter, you are asking it to query all pages on the site, and if it's a big site, that's going to be potentially slow. The reason for the LEFT JOIN is because without it you can only match rows that exist in the database. You have an operator "<=" that can match empty or NULL values. Since you also have an operator ">=" that can't match empty values, the left join of course isn't necessary, but each part of the selector is calculated on its own (independent calls to getMatchQuery). You are getting the fallback Fieldtype and PageFinder logic since your Fieldtype doesn't provide its own getMatchQuery(). This is a case where you may find it beneficial to have your own getMatchQuery() method. The getMatchQuery() in FieldtypeDatetime might also be a good one to look at. In your case, since your Fieldtype is just matching dates, it may be that you don't need a left join situation at all, since you might never need to match null (non-existing) rows. So you could probably get by with a pretty simple getMatchQuery(). Maybe something like this (this is assuming 'data' is the only column you use, otherwise replace 'data' with $subfield):

public function getMatchQuery($query, $table, $subfield, $operator, $value) {
    if($subfield === 'count') {
        return parent::getMatchQuery($query, $table, $subfield, $operator, $value);
    }
    // limit to operators: =, !=, >, >=, <, <= (exclude things like %=, *=, etc.)
    if(!$this->wire()->database->isOperator($operator)) {
        throw new WireException('You can only use DB-native operators here');
    }
    if(empty($value)) {
        // empty value, which we'll let FieldtypeMulti handle
        if(in_array($operator, [ '=', '<', '<=' ])) {
            // match non-presence of rows
            return parent::getMatchQuery($query, $table, 'count', '=', 0);
        } else {
            // match presence of rows
            return parent::getMatchQuery($query, $table, 'count', '>', 0);
        }
    }
    // convert value to ISO-8601 and create the WHERE condition
    if(!ctype_digit("$value")) $value = strtotime($value);
    $value = date('Y-m-d H:i:s', (int) $value);
    $query->where("$table.data{$operator}?", $value);
    return $query;
}
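For context, a selector along these lines (template and field names assumed from the query above) is the kind of $pages->find() call that such a getMatchQuery() would serve:

$events = $pages->find("template=event, event_recurring_dates>=2023-03-09, event_recurring_dates<=2023-04-08");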
    1 point
  17. Good day, everyone! I am happy (and a bit scared) to announce the release of a long-awaited new maintenance version for this module. @Mats gave me his blessing to take over the module. In fact, the repo has moved not to my GitHub account, but rather to an org called Friends of ProcessWire, in which there are some brilliant devs already, and maybe more will join. But that is a story for another post I am planning to write soon (a little intrigue). As for now, please test the new 3.0.4 version. It has some code merged from @ukyo (big thanks to him!) and a few lines by myself. I hope that this release fixes a few issues and does not introduce new ones. But I can't be sure here, so I changed the stability tag to Beta. Here is the changelog: Now the module uses https://nominatim.openstreetmap.org for all geocoding. Before, it still used the Google geocoding API in places. Fixed the issue with the map display in admin when the field is in a repeater or in a collapsed fieldset. There is still a problem with ajax tabs. Fixed ProcessWire namespace declaration. Fixed markup in README.txt. The executable bit was removed from all files in the repo. Git branches were renamed to be more familiar: the latest released code is now in the master branch again instead of PW3, and dev is used for current development. Previously used branches were renamed and kept for now, though they will probably be deleted in the future. Updated the module page in the modules directory.
    1 point
  18. I've not heard of this module being discontinued, probably just some sort of a mishap. If you're interested in ProDrafts, I would recommend sending a PM to @ryan.
    1 point
  19. In this case, the best approach is to simply go with either an "if" or a "switch" statement, something like:

$datum = $concert->date;
$format = "E, d. LLLL yyyy, HH:mm";
switch($user->language->name) {
    case "deutsch":
        $lang = "de-DE";
        $format = "EE dd.MM.yyyy";
        break;
    case "français":
        $lang = "fr_FR";
        break;
    case "default":
        $lang = "en_US";
        break;
}
$fmt = new \IntlDateFormatter($lang, \IntlDateFormatter::FULL, \IntlDateFormatter::FULL);
$fmt->setPattern($format);
echo $fmt->format($datum);
    1 point
  20. It's crazy sometimes how quickly the work week goes by. On Monday I started working on a Stripe Checkout integration for a client (using the FormBuilderProcessorStripe module), and somehow today I'm still working on it, and it feels like only a day has passed. This particular integration is a little more complicated than others I've worked on. The user makes a few selections that determine the final price, and when they submit the form, it has to authorize (but not yet capture) the amount due, so that the money is basically in a holding state. Then it has to send a notification to another company asking them to approve or deny the request. If they approve it, then it captures those funds through Stripe. Or if they deny it, then it releases the hold on those funds. It's also connected to multiple Stripe accounts in different currencies, and it has to use whichever one corresponds with the transaction details. In this particular form, some of the purchases also involve a 3rd party web service to confirm availability. And there's more to it as well, but I'll leave it at that... it's just a lot of moving parts, so I guess that's why I haven't done anything this week other than work on that. But the good news is that much of it has been added to the FormBuilderProcessorStripe module, so that the next time this need comes up for you or me, hopefully it won't take so much time. Here are a few of the things that have been added to FormBuilderProcessorStripe: Previously you could just accept a payment. Now you also have the option to set up a separate authorization and capture. And you can capture from a newly added API method, from the FormBuilder entries screen, or from your Stripe dashboard. On the API side, all you need to complete the capture is the ID of the payment (called the payment intent ID), which is saved with the form entry. The capture typically must be done within 7 days of the authorization. Doing an authorization (and later capture) is preferable to a charge when you think there's a reasonable chance that it'll need to be undone (a generic sketch of this flow appears at the end of this post). One reason is because an authorization costs nothing, whereas a charge (or capture) does, regardless of whether it is refunded. The module also has a new option to create a Customer in Stripe that you can charge later. This is different from authorization/capture in that creating a customer doesn't authorize any particular transaction or funds, but rather saves their payment info in Stripe, enabling you to charge it anytime later. This is useful for many cases, but one would be where a customer wants to save their payment information with their account, so they don't have to re-enter it every time they make a purchase. Several new configuration options were also added. New public API methods were added to capture, cancel and refund payments. You can now pass any data from the form into Stripe metadata. You can now specify the Stripe API version that you want to use with the module. And you can now send email receipts, even if not enabled in Stripe. More transaction information is now shown when using "view entry" in the admin. Several new hooks were also added. Technically this update to FormBuilderProcessorStripe is ready to post now, but I'd like to do a little more testing first, so I'll be posting this module update in the FormBuilder board next week. I also have some updates to the InputfieldFormBuilderStripe module as well (which uses Stripe Elements rather than Stripe Checkout), and it may be updated at the same time, or shortly after.
No core updates this week, but hopefully I got enough client work done this week that I can really focus on the core next week. Thanks for reading and have a great weekend!
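For readers unfamiliar with the authorize-then-capture flow described above, here is a minimal sketch using the plain stripe/stripe-php client. This is not the FormBuilderProcessorStripe API; keys, amounts and the payment method ID are placeholders:

<?php
require 'vendor/autoload.php';

$stripe = new \Stripe\StripeClient('sk_test_...');

// 1. Authorize only: funds are placed on hold but not yet captured
$intent = $stripe->paymentIntents->create([
    'amount' => 5000,              // amount in the smallest currency unit
    'currency' => 'usd',
    'payment_method' => 'pm_...',  // placeholder payment method ID
    'confirm' => true,
    'capture_method' => 'manual',  // authorize now, capture later (typically within 7 days)
]);

// 2a. Approved: capture the held funds using the saved payment intent ID
$stripe->paymentIntents->capture($intent->id);

// 2b. Denied: cancel the intent instead, which releases the hold
// $stripe->paymentIntents->cancel($intent->id);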
    1 point
  21. As the project processwire-recipes.com went offline quite some time ago, I took the repository and deployed a browsable version of it over on: https://processwire-recipes.pages.dev/ There is also a new repo: https://github.com/webmanufaktur/processwire-recipes/ ... on which this deployed version is based. Changes, fixes and updates can be submitted there. Feel free to update links, fork the repo, submit changes for recipe layout or whatever. It uses 11ty for rendering .md to .html and all the other magic. I will add further details in regards to formatting recipes and submitting new content in the future. Further notice: @teppo: you link to the old domain on weekly.pw
Changes/Updates: incoming - right now this is WIP (work in progress)
Maintainers: feel free to let me know, I'll add you to that repo
Details:
the repo is on GitHub: see above
hosting/deployment: on Cloudflare Pages via 11ty
updates: all updates against the master branch will be deployed within 60 seconds after commit
changes: right now only via pull request
maintainers: wanted - see above
    1 point
  22. For what it's worth, it makes sense to have the capability to test drive new template files, so I rolled a little module. This adds a field to the template configuration for an alternative template file only used for editors. https://github.com/BitPoet/TemplateTester
    1 point
  23. Thanks guys. I'll see what I can do this weekend, if I finally manage to have some free time. PS: although I built the following page before this module, it does use the lite-youtube-embed script, so you can check how it looks and how fast the page loads with 14 videos: https://www.spainculture.us/digital-projects/14-days-14-artists/
    1 point
  24. I use Vimeo, so I would be very happy with a lite-vimeo-embed plugin.
    1 point
  25. Hi Horst, With your instructions and help, I got it to work. Thanks again for working with me in understanding proper methods for rendering the values I needed. For future reference for anyone else (including myself if I forget)... there are two methods for rendering data from the Metadata EXIF module.

Array method:

// output the value for one key of the returned array
$firstGalleryImage = $page->galleryimages->first()->getExif( array('FNumber') );
echo $firstGalleryImage['FNumber'];

Object method:

// if you have chosen the result as object, you would use
$options = array('toObject' => true, 'keys' => array('FNumber'));
$object = $page->galleryimages->first()->getExif($options);
echo $object->FNumber;
    1 point
  26. Just a little update because of new API additions; now you could just do:

foreach ($pag as $p) {
    foreach ($languages->find('name!=default') as $lang) {
        $p->setAndSave("status$lang", 1);
    }
}

Thanks @szabesz for mentioning it; I'm adding this here for completeness.
    1 point
  27. Hmmm, this is weird: I actually like this new upgrade, even on mobile. I never used to comment via mobile, but now it's slick.
    1 point
  28. ...if I'm honest I was a little bit overwhelmed and lost, too. But for me it's not really the platform itself that is important... OK, performance and some other things are, but the really important thing is... the people who are running this are already here. And with that community, such visual things as well-balanced CSS and colours are only a question of time until they get changed in the right manner and style! Since this is more of an opinion topic, this is mine. Regards, mr-fan
    1 point
  29. Just hit this myself. Should all languages be active from the API by default? That would be in line with how it functions in the admin.
    1 point
  30. In case you want it a little simpler:

$pages->setOutputFormatting(false);
$pag = $pages->find("template=basic-page");
foreach($pag as $p) {
    foreach($languages as $lang) {
        if($lang->isDefault()) continue;
        $p->set("status$lang", 1);
        $p->save();
    }
}
    1 point
  31. Thanks! Just in case anybody else searches for similar things, here's a simple example:

$en = 'status' . $languages->get('en');
$us = 'status' . $languages->get('us');
$fr = 'status' . $languages->get('fr');

$pag = $pages->find("parent=1072, template='product'");
foreach($pag as $p) {
    $p->setOutputFormatting(false); // not sure if necessary here - I made it a habit to always include it, just in case...
    $p->$en = 1;
    $p->$fr = 1;
    $p->$us = 1;
    $p->save();
}
    1 point
  32. Regardless of whether you handle this with ProcessWire, you should change your server variables to handle such a request. Your php.ini should be adjusted for it (at least):

upload_max_filesize = 350M
post_max_size = 350M

And your PHP memory limit should be raised too:

memory_limit = 400M

Update (thx Wanze): You should also adjust settings like max_execution_time and max_input_time in your php.ini. I don't know if your client's host allows such overrides (there may be memory limitations for a single vhost), but you should also consider convincing your client to upload such files with FTP (not HTTP); it is less error-prone for both of you.
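Where the host does not allow editing php.ini directly, the same limits can sometimes be set per directory. A sketch for an .htaccess file, assuming PHP runs as an Apache module (mod_php); with PHP-FPM these directives belong in the pool config or a user php.ini instead, and the time limits here are arbitrary examples:

# .htaccess - raise limits for large uploads (mod_php only)
php_value upload_max_filesize 350M
php_value post_max_size 350M
php_value memory_limit 400M
php_value max_execution_time 300
php_value max_input_time 300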
    1 point