Leaderboard
Popular Content
Showing content with the highest reputation on 02/04/2022 in all areas
-
This week we have some great performance and scalability improvements in the core that enable lazy-loading of fields, templates and fieldgroups, thanks to @thetuningspoon — https://processwire.com/blog/posts/field-and-template-scalability-improvements/
14 points
-
10 points
-
I'm definitely meaning to 'weigh in'. I think this is a really important topic - possibly the most important one for the future development of PW, maybe alongside the page builder. However, I'm really buried in a complex app at the moment and don't have time to fully consider all the issues.

First off, regardless of the merits of my particular module, I do think that the help file explains some of the functionality that I think is important. In particular, the approach is completely declarative and UI-based - it neither requires coding nor a clean start (it can be installed in an existing system). I encountered a number of issues in building the module this way that others should be aware of - for example, that ids can vary between the development and target environments - but I think any other approach reduces the scope of application.

As regards the status of ProcessDbMigrate, I have been using it quite successfully in a moderately complex live app (there are some bug fixes and improvements pending for the released version), but I really wanted to try it out in the app referred to earlier, which has complex fieldtypes (e.g. my FieldtypeMeasurement) and many Pro fields. I hit a snag with nested repeaters which needs attention once I have finished the app. I don't see ProcessDbMigrate as the solution, however, but I do think it demonstrates a lot of the required functionality (it creates JSON rather than YAML files, but that's not a big difference). I see it as a partial prototype. Aside from any snags like the one mentioned, I do not like the profusion of templates and fields which clutter up the admin.

As to a way forward, I think a collaborative development of requirements and a spec would help, and then some agreement on who builds it and how. I also really think that a contribution to this discussion from @ryan before proceeding would be most helpful. I'll try to return with some more detailed thoughts once I get this app done!
5 points
-
@horst Well then, allow me to raise your expectations again, because your description is not how it works. In your scenario, both developers could merge their branches with zero conflicts, and as a result the main branch would incorporate all the changes from both branches. They don't even need to know what the other one is doing, and nobody needs to constantly keep up with changes from other branches / team members. That's because git is really smart in the way it performs merges. Basically, you can view every branch as a set of changes applied to the existing files. As long as those changes don't conflict, you can merge in multiple PRs back to back without any manual conflict resolution. So most of the time, you can just lean back and everything works.

The only time you get a merge conflict that needs to be resolved is when there are actual conflicts that require a decision. For example, if developer A renames some_old_field to unicorns and developer B renames the same field to rainbows, that would result in a merge conflict, because a single field can't have multiple names. So someone needs to decide between unicorns and rainbows for the field name. In other words, you don't have any overhead caused by git itself – git acts as a safety net by warning you about merge conflicts so you can fix them. In a well-engineered system with good separation of concerns, it's rare to have non-trivial merge conflicts, since it's unlikely that two people working on separate features will need to touch the exact same files. And most of the time, if you do get a merge conflict it's trivial to resolve – for example, if two PRs add a new variable to our SCSS variables in the same place. That would be a merge conflict, but a trivial one to resolve, since you know you want both changes. If you know git well, you can resolve those in under a minute, oftentimes with a single command (by specifying the appropriate merge strategy for the situation).

It's the exact opposite – the larger the development team, the more you will benefit from this streamlined workflow. Everyone can focus on different features and merge their work in with the minimum amount of effort required from them or other developers to keep in sync with each other. Regarding all the git stuff, I recommend the Pro Git book (available for free), a great resource for understanding how git works under the hood and discovering some of the lesser-known features and power tools. Reading the book front to back helped me a lot to establish our feature-branch workflow (for Craft projects) at work, utilize git to work far more effectively, solve issues with simple commands instead of the xkcd 1597 route, and much more. For branching and merging in particular, check out the following chapters:

3.2 Git Branching - Basic Branching and Merging
7.8 Git Tools - Advanced Merging
2 points
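A minimal throwaway demonstration of that conflict-free merge behavior (branch and file names here are invented for the example):

```shell
# Hypothetical demo in a throwaway repo: two branches that touch
# different files merge back into the main branch with zero conflicts.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q repo
cd repo
git config user.email "dev@example.com"
git config user.name "Dev"
main=$(git symbolic-ref --short HEAD)   # default branch name (master or main)

echo base > base.txt
git add base.txt && git commit -qm "initial commit"

# Developer A: blog feature
git checkout -qb feature-blog
echo blog > blog.txt
git add blog.txt && git commit -qm "add blog"

# Developer B: navigation feature, branched from the same starting point
git checkout -q "$main"
git checkout -qb feature-nav
echo nav > nav.txt
git add nav.txt && git commit -qm "add nav"

# Merge both "PRs" back to back: no manual conflict resolution needed
git checkout -q "$main"
git merge -q --no-edit feature-blog
git merge -q --no-edit feature-nav
ls
```

Because each branch only adds its own file, git can combine both histories automatically; a conflict would only appear if both branches edited the same lines of the same file.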
-
Just to give my two cents: as a solo developer I think it would be nice to have automated generation of config files to version control templates and fields in Git, but I can absolutely understand Ryan if he doesn't find it necessary as a solo developer for his workflow, especially with his tools for Export/Import of fields, templates and pages. For teams working on a project, good version control would be really helpful for coordination, but as a solo developer I enjoy the way of creating and configuring templates and fields in the backend of ProcessWire. The last thing I want is to have to write blueprints and migrations for simple configurations. For example, the CMS Kirby uses blueprints in YAML for its templates and fields, and this is of course great for version control, but I find it slows down the development process, because you have to study the reference documentation instead of just creating a template or field in the back-end. In Kirby it is part of the core concept, but in ProcessWire it is not, and I hope it stays this way. If these config files for templates and fields were automatically generated in YAML or JSON somewhere in the site folder, where I could version control them, that would be nice. But personally, as a solo developer, I don't want to waste my time writing configuration files or migrations and composer dependencies.
2 points
-
+1001! And my first request is to start with TDD (Test Driven Development)! I can share the basics (or a starting point) for that; that is, I have a small and comfortable-to-use setup ready. I would need to write a short documentation / explanation of how it works and how it should be used. +1 ? -10 ?
2 points
-
I did something like you want. Add this hook to ready.php:

$wire->addHookBefore('ProcessPageSearchLive::execute', function(HookEvent $event) {
    $event->wire()->addHookAfter('FieldtypePageTitle::wakeupValue', function(HookEvent $event) {
        $page = $event->arguments(0);
        // specify your template
        if($page->template == 'tool') {
            // get the fields you like
            $prefix = $page->pre;
            $suffix = $page->suf;
            // add your data to the list
            $event->return .= " | {$prefix} {$suffix}";
        }
    });
});
2 points
-
I think that since $stack is a Page, the valid parameter for render() is a field name and not a path. You might be better off doing something like this:

<?php namespace ProcessWire;

if (wireCount($value)) {
    echo "<div class='stacks'>";
    foreach ($value as $stack) {
        if (!$stack->isHidden()) {
            echo wireRenderFile("fields/stacks/{$stack->template}", ['stack' => $stack]);
        }
    }
    echo "</div>"; // stacks
}

Just adapt the array passed as the second parameter to wireRenderFile to be named according to the variables in "stack-{$template}.php"
2 points
-
Hi @strandoo Probably you can handle it by using the owner selector: https://processwire.com/blog/posts/processwire-3.0.95-core-updates/

$speakers = $pages->find("template=speaker, speakers_field.owner.parent.parent.name=your-event-name");

Not tested, but it should be something like this. It's always a bit difficult to grasp, but try to play with it a little.
2 points
-
@Kiwi Chris Migrations have their place and I definitely wouldn't do without them. I think it's best if config and migrations complement each other. I think there needs to be a distinction between shared (community / open source) modules and site-specific modules. For Craft, this distinction is between plugins (external plugins installed via Composer and tracked in the project config) and modules (site-specific modules providing site-specific functionality). Both can provide config to be tracked in the project configuration. But they work slightly differently, and keeping those things separate makes it easier to talk about them.

@horst I just meant that existing fields that are already in the database and still exist in the config aren't wiped and recreated when applying the config (since that would wipe the data as well). The config always includes all fields (as well as entry types, settings, etc.) that exist on the site. So if I remove a field in dev, it's removed from the config. If I apply that config to other environments, the field is removed from those as well.

You don't need the content in version control. Local test environments only create data for testing purposes. So when I check out my colleague's PR, I will have all entry types for the news blog etc., but no actual data. For quick testing I can just create a couple of blog posts manually. For larger projects, you can use a data seeder plugin, or a content migration that imports some data. We've done this for a project where an existing database was imported to a new system; there we could just run the import migration to seed development/staging environments. Admittedly, it's a tiny bit of additional work. But it's far easier than making sure you don't put garbage data into version control, figuring out how to merge diverging content branches, or dealing with assets. And I don't want content dumps muddying up my git commits anyway.

Once you start working this way, it's super relaxing not having to worry about creating 'real' content in your dev environment, being able to wipe out content on a whim, try something out, etc. The 'real' content editing happens in the staging/production environment anyway. How are you merging diverging branches from multiple people once the time comes to merge them? You can't really merge database dumps (and stay sane). Also, database dumps are terrible for version control, much too noisy to be readable in diffs. With a YAML config, I can just look at the diff view of the PR on Github and tell at a glance what changed in the config; I don't think you can do that with an SQL dump unless you're Cypher from The Matrix …

The main branch is always the source of truth for the entire site state; the config includes all fields, entry types, general config settings, etc. Everyone working on a feature creates a new branch that modifies the config in some way – but those branches still include the entire site state, including the feature being worked on. So once you merge that feature into the main branch and deploy to staging/production, the system can 'interpolate' between the current state and the config. That is, it compares the config with the database state, then adjusts the database to match the config by creating fields that are in the config but not in the database, removing fields from the database that aren't in the config anymore, applying any settings changes from the config to the database, etc. Of course, there may be merge conflicts if some branches contain conflicting changes. In this case, after merging in the first branch, you'd get a merge conflict which prevents the PR from being merged. You resolve these like all regular merge conflicts in git, which is made easier since the config is just a bunch of YAML files. Simple conflicts can be resolved directly in the Github UI. For more complicated conflicts (which are very rare), you would do that locally by either rebasing or merging in the main branch.
2 points
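A rough sketch of that compare-and-adjust step (a hypothetical, language-neutral illustration only — not Craft's actual implementation; the field names are invented):

```python
# Hypothetical sketch: diff a declarative config against the current DB state.
def plan_changes(config_fields: dict, db_fields: dict) -> dict:
    """Return which fields to create, remove, or update so the database
    matches the config (the config is the source of truth)."""
    create = [name for name in config_fields if name not in db_fields]
    remove = [name for name in db_fields if name not in config_fields]
    update = [name for name in config_fields
              if name in db_fields and config_fields[name] != db_fields[name]]
    return {"create": create, "remove": remove, "update": update}

# Invented example: the config replaced the "date" field with "post_date"
config = {"title": {"type": "text"}, "post_date": {"type": "date"}}
db = {"title": {"type": "text"}, "date": {"type": "date"}}
print(plan_changes(config, db))
# → {'create': ['post_date'], 'remove': ['date'], 'update': []}
```

Fields that appear in both states and are unchanged are left alone, which is why applying the config doesn't wipe data for fields that still exist.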
-
I don't mind your wording in the way it suggests refreshing the session, but I am not really sure "refreshing cookies" really describes what is going on. Regarding the typo - I am a proponent of the Oxford Comma - https://www.colesandlopez.com/blog/what-is-the-oxford-comma - I wasn't taught it in school, but I do like the way it reduces ambiguity. Obviously not necessary here, but I just use it all the time these days :)
1 point
-
@Ivan Gretsky - new version with this is now available. Note that in the end I didn't actually need the update to the Tracy core because I am actually overwriting their dump and barDump methods anyway - sorry I didn't notice that sooner. Let me know if you notice any problems - this hasn't had a lot of testing yet.
1 point
-
Sorry about that :) New version committed - I built it into that option and decided to rename it to "Clear Session, Cookies, & Modules Refresh". Let me know if you have any problems with it.
1 point
-
@Zeka I had a chance to try this. Your selector worked perfectly, thanks again!
1 point
-
Hi @adrian, that thx meant "It's late here in Austria and I'm on mobile in a train. Thx for your reply, I'll look into that tomorrow." I've just tried that on a fresh and clean installation, and the menu is only updating when I refresh modules + clear cookies/session. I can't reliably say that. I've never ever needed to clear session+cookies other than to make the menu catch up with changes that I've made to process modules... I do think that a modules refresh should not be a problem though. I'm fine with the wording of the two options we already have. There's probably no need to explicitly state that "clear session & cookies" already does a modules refresh behind the scenes. Thx for working on that request, it will be much appreciated and save me a lot of unnecessary clicks!
1 point
-
Remember the years before and after the Evo exodus to ProcessWire? The forum was vibrant and full of starting coders and beginners, and for a long time the forum was praised for replying fast and helping them out. ProcessWire has already lost almost all the starting coders and beginners we used to have in the past. I asked the forum a few times if there is interest in getting them back, but without any reaction. A clear indication. ProcessWire is already a fantastic and full-grown product, so spend less time adding weekly new xyz features and instead start spending more time marketing the potential of this great product. It's no secret that this is not going to happen, because with ProcessWire, Ryan is only interested in coding and is followed by a lot of like-minded coders in the forum. Another suggestion to make the community grow is to split ProcessWire up into 2 versions: 1) a version for starting coders and beginners, like it was in the beginning; 2) a version for experienced coders, like we have today. Last but not least: maybe I should not complain about anything and just take ProcessWire for what it is and be happy with it.
1 point
-
1 point
-
Thanks Zeka, using Lister is actually a good solution. I can create a bookmark with the right settings and share it with the users.
1 point
-
I'm talking about ProcessPageSearch. Concrete case: the user searches for a particular page in the search field in the top right corner. If they type the name of a tree, a bunch of pages with that particular tree will display. I would need it to display a second field with the location, so they see "tree, location" in the result. I'm also using ProcessPageList, which is perfect for displaying the pages in the tree (this time I'm talking about the tree of pages in the backend). But this is a different need. Lister might be an alternative solution, actually.
1 point
-
+1 Well, we all know that Ryan is not interested in attracting hordes of developers to ProcessWire, and I agree, as I also prefer quality over quantity. However, making sure he keeps the current community he already has is vital to ProcessWire. If his standpoint is something like "Guys, I am not much interested in what you think is one of the most important things in order not to lose you to other systems, but I am here to help make mods to the core when you need them," then that approach will hurt ProcessWire, I think.
1 point
-
@OllieMackJames I've experimented with most of the popular page builders in WordPress, including Oxygen. The direction I'm taking is vastly different from all of them. Simply put, a page builder based on RepeaterMatrix is what I'm trying to accomplish, one that is usable with the developer's CSS framework of choice (Uikit, Tailwind, Bootstrap, Codyframe, no framework, etc.). Anything beyond that is outside my skillset and the scope of what I'm trying to do. I have no interest in using React, Vue, etc. and going to the extreme like the WP page builders do. The page builder will simply be "good enough" to make marketing pages based on the capabilities of whatever CSS framework is installed.
1 point
-
I don't have much time to discuss, but I just came across this article that talks about Craft's Project Config, which some may find relevant to this discussion: https://adigital.agency/blog/understanding-and-using-project-config-in-craft-cms
1 point
-
Yes, writing the config should always dump everything. Much easier than keeping track of changes. Of course, under the hood the actual implementation could optimize that further, for example by only writing files that have changed to reduce disk I/O. But conceptually, the config should always include the full config for the current system state.

On import, you probably can't wipe out all fields, since that would remove the database tables and wipe all content. When the config is applied, the appropriate process/class should read the config and apply any differences between the config and the database state to the database, i.e. create missing fields, remove fields that aren't in the config, apply all other settings, etc. At least that's how Craft does it. Conceptually, the entire config is read in and the site is set to that state.

In Craft, there's a clear separation of concerns between config and data. The config is tracked in version control, data isn't. That's not a problem if you don't do any 'real' content editing in your dev environment. For our projects, we usually set up a staging environment pretty early on and do all actual content editing there. Once the project is ready to go live, we either switch the live domain over to that staging environment (so staging is promoted to production, essentially), or we install a new instance of the project and copy over the database and assets folder so we have separate production and staging environments. For projects that are already live, you just wouldn't do any real content editing in the dev or staging environments. If you really need a large content update alongside a code update, you could use an export/import module or migrations. Migrations complement the declarative config, and most of the time we don't need them at all.

By the way, there's a discussion to be had about where you draw the line between config and content. For example, for a multilingual site, are the available languages configuration (only editable by the dev) or content (editors can create new languages)? There are many of those grey areas, and I don't think this has a single right answer.

Craft uses UUIDs in addition to the name. Each field also has an ID that's environment-specific, but that's an implementation detail you never have to interact with, since you can always refer to a field by name or UUID. So you can change a field handle while the UUID stays the same. This also prevents naming conflicts, since new UUIDs are pretty much guaranteed to be unique.

----

On a broader note regarding the difference between declarative config and migrations: it's important to distinguish between the 'conceptual' view (the config represents the entire site state) and implementation details. Take git as an example. Conceptually, each commit represents a snapshot of the entire codebase in a particular version. Of course, under the hood git doesn't just store a copy of the entire codebase for each commit, but optimizes that by having pointers to a database of blobs / objects. But that's an implementation detail, while the public API is inspired by treating each commit as a snapshot of the codebase, not a collection of diffs.
1 point
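As a rough, hypothetical illustration of what a UUID-keyed declarative field config could look like (the structure and keys here are invented for the example — this is not Craft's actual project config schema):

```yaml
# Hypothetical config fragment - invented structure, not any system's real schema
fields:
  c0ffee00-1111-4abc-9def-000000000001:   # stable UUID, survives renames
    name: post_date
    type: date
  c0ffee00-1111-4abc-9def-000000000002:
    name: title
    type: text
```

Because the UUID is the key, renaming `post_date` changes only the `name` value; the entry itself stays, so the system can tell a rename apart from a delete-plus-create.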
-
With coded migrations you alter the structure of the application, and it is great that your module provides this. But I think we should respect that not everyone wants to do this by code. Some would rather use the admin UI. And this is where the recorder comes in. Any changes are reflected in a declarative way. Even coded migrations would be reflected there. What I was trying to say is that the complete state of the application should be tracked in a declarative manner. How you get to that state, be it through coded migrations or through adding stuff through the UI, should be secondary and left up to the developer. Please don't go just yet. I'm sure we can all benefit from your input.
1 point
-
Thanks @MoritzLost for the detailed post. One thing I don't understand and am hoping you might explain is how Craft handles field renaming within the project config file. Do the config files refer to fields by ID, name, or something else? It seems like IDs couldn't be used in the config because if the IDs auto-increment as fields are added then they wouldn't be consistent between installations. But if names are used instead of IDs then how is it declared in the config that, say, existing field "date" was renamed to "post_date", versus field "date" was deleted and a new field "post_date" was created? Because there's an important difference there in terms of whether data in the database for "date" is dropped or a table is renamed.
1 point
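This distinction is exactly what stable UUIDs (as described above for Craft) make possible: a rename shows up as the same UUID with a new name, while delete-plus-create shows up as one UUID disappearing and another appearing. A hypothetical sketch (invented field names, not any system's actual diff code):

```python
# Hypothetical sketch: UUID-keyed states make renames distinguishable
# from delete + create, so a data table can be renamed instead of dropped.
def diff_fields(old: dict, new: dict):
    """old/new map field UUID -> field name."""
    renamed = {uid: (old[uid], new[uid])
               for uid in old.keys() & new.keys() if old[uid] != new[uid]}
    deleted = [old[uid] for uid in old.keys() - new.keys()]
    created = [new[uid] for uid in new.keys() - old.keys()]
    return renamed, deleted, created

old = {"u-1": "date", "u-2": "title"}
new = {"u-1": "post_date", "u-2": "title"}   # "date" renamed, nothing dropped
print(diff_fields(old, new))
# → ({'u-1': ('date', 'post_date')}, [], [])
```

If the config were keyed by name instead, the same change would look like a deletion of "date" plus a creation of "post_date", and the data would be lost.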
-
I've not participated much here, since I feel there are more knowledgeable folks here already, but a few quick opinions/experiences:

I would love to have an easy way to migrate changes for fields and templates between environments, and version control all of that. I've had cases where I've made a change, only to realize that it wasn't such a good idea (or better yet, have a client realize that) and off we go to manually undo said change. Sometimes in quite a bit of a hurry. These are among the situations in which an easy rollback feature would be highly appreciated. I do like making changes via ProcessWire's UI, but at the same time I strongly dislike having to do the exact same thing more than once. Once is fun, but having to redo things (especially from memory, and potentially multiple times) is definitely not what I'd like to spend my time doing.

I've worked on both solo projects and projects with a relatively big team. While versioning schema and easily switching between different versions is IMHO already very useful for solo projects, it becomes – as was nicely explained by MoritzLost earlier – a near must-have when you're working with a team, switching between branches, participating in code reviews, etc. I'll be the first one to admit that my memory is nowhere near impeccable. Just today I worked on a project I last worked on Friday – four days ago! – and couldn't for the life of me remember exactly what I'd done to the schema and why. Now imagine having to remember why something was set a specific way years ago, and whether altering it will result in issues down the stream. Also, what if it was done by someone else, who no longer works on your team...?

Something I might add is that, at least in my case, large rewrites etc. often mean that new code is no longer compatible with old data structures. For me it's pretty rare for these things to be strictly tied to one part of the site, or perhaps new templates/fields only. Unless both you and the client are happy to maintain two sets of everything, possibly for extended periods of time, that's going to be a difficult task to handle without some type of automation, especially if/when downtime is not an option. Anyway, I guess the lion's share of this discussion boils down to the type of projects we typically work on, and of course different experiences and preferences.

As for the solutions we've been presented with: I've personally been enjoying module management via Composer. Not only does this make it possible to version control things and keep environments in sync, it also makes deploying updates a breeze. As I've said before, in my opinion the biggest issue here is that not all modules are installable this way, but that's an issue that can be solved (in more than one way). While I think I understand what MoritzLost in particular has been saying about template/field definitions, personally I'm mostly happy with well-defined migrations. In my opinion the work Bernhard has put into this area is superb, and definitely a promising route to explore further.

One thing I'd like to hear more about is how other systems with so-called declarative config handle actual data. Some of you have made it sound very easy, so is there an obvious solution that I'm missing, or does it just mean that data is dropped (or alternatively left somewhere, unseen and unused) when the schema updates? Full disclosure: I also work on WordPress projects where "custom fields" are managed via ACF + ACF Composer and "custom post types" via Extended CPTs + Poet. Said tools make it easy to define and deploy schema updates, but there's no out-of-the-box method for migrating data from one schema version to another (that I'm aware of). And this is one of the reasons why I think migrations sometimes make more sense; at least they can be written in a way that allows them to be reverted without data loss.
1 point
-
Sure, I did it a few times, but clients always had to pay the price :) Just kidding... Yes, it can sometimes be a real issue for sure. Anyway, I work solo and my projects/clients are somewhat "special"; most of the time clients just rely on all my decisions. Still, thanks for your insights! You have clearly explained your motivations, which are more than reasonable, I think. I would be more than happy to join a crowdfunding initiative if YOU were the one to lead it, but first someone needs to make Ryan firmly believe he also needs this... Sounds impossible, but never say never.
1 point
-
@kongondo @szabesz @horst A completely automated deployment enables continuous deployment as well as a number of other workflows. Being able to roll back to a previous version is part of it, but it's only one of the benefits of version control, and probably not the most important one. It's all a question of how much your workflow scales with (a) the amount of work done on a project / number of deployments in a given timeframe and (b) the number of people on your team. For me, the 'breaking points' would be more than maybe one deployment a week, and more than one person working on a project. There were many different approaches mentioned in the previous threads – migrations, cloning the production database before any changes, lots of custom scripting, etc. But those all break down once you start working in a team and get into workflows centered around version control. The key to those workflows is that each commit is self-contained, meaning you can rebuild the entire site state from it with a single command.

For comparison, here's how I work on a Craft project with my team, following a feature-branch workflow. I may work on the blog section while my colleague works on the navigation. We both have our own local development environment, so we can test even major changes without interfering with each other. Once my colleague has finished, they commit their changes to a separate branch, push it to Github and open a pull request – including any template changes, translations, config changes, etc. I get a notification to review the PR. I would like to review it now, but I'm working on a different feature, with a couple of commits and some half-finished work in my working directory that's not even working at the moment. No problem, I just stash my current changes so I have a clean working directory, then fetch and check out my colleague's branch from Github.

Now it only takes a couple of commands to get my environment to the exact state the PR is in:

composer install (in case any dependencies / plugins have changed)
php craft project-config/apply (apply the project configuration in my current working directory)
npm ci (install new npm dependencies, if any)
npm run build (build the frontend)

Most of the time, you only need one or two of these commands, and of course you can put them in a tiny script so it's truly only one step. Now I can test the changes in my development environment and add my feedback to the PR. Keep in mind that the new 'blog article' entry type my colleague created, with all its fields and settings, is now available in my installation, since it's included in the config that's committed in the branch. Now imagine doing that if you had to recreate all the fields my colleague created for this PR manually, and remove them again when you're done. Now imagine doing that 10 times a day. By the way, everything I was working on is safely stored in my branch/stash, but is not interfering with the branch I'm testing now. This is the benefit of a declarative config: everything that's not in the config gets removed. So even if I left my own work in a broken state, it won't interfere with reviewing the PR. With migrations, you'd have to include an up and a down migration for every change, and remember to execute them in the right order when switching branches. Any manual steps, no matter how easy or quick they are, prevent those workflows at scale.

Automatic deployments also make your deployments reproducible. Let's say you have an additional staging environment so the client can test any incoming changes before they hit production. If you do manual deployments, you may do everything right when deploying to staging but forget a field when deploying to production. With fully automated deployments in symmetric environments, you'll catch any deployment errors in staging. That's not to say you can't introduce bugs or have something break unexpectedly, but by removing manual steps you're removing a major source of errors in your deployments.

I can one-up that: zero clicks. Automatic deployments triggered through webhooks as soon as a PR is merged into the main branch on Github. Deployment notifications are sent to Slack, so everyone sees what's going on. A branch protection rule on Github prevents any developer from pushing directly to the main branch, and requires at least one (or more) approvals on a PR before it can be merged.

Your clients never ask you to undo some change you did a while ago? Not because of some bug, but because of changed requirements? In any case, if your version control only includes templates, but not the state of the templates/fields that those templates expect, you won't be able to reverse anything non-trivial without a lot of work. Which means you don't get a major benefit of version control. Going from commenting out chunks of code because 'you might need them later' to just deleting them, knowing you will be able to restore them at any time, is really enjoyable. Having the same security for templates, fields, etc. is great. Fun story: I once implemented a change requested by a client that I knew wasn't a good idea, just because it would take less time than arguing. Once they saw it in production, they immediately asked me to revert it. One `git revert` later, this feature was back in its previous iteration.
1 point
-
What you've done with the first example looks super powerful. Nice work. The second approach is what we use. It's always a battle between giving the client more control vs. making it easier for them to manage (and making sure they don't mess up the site's aesthetic!). So we tend to give them more clearly defined components and then add options to them as they need them, or build out new components if they want something totally new. With the example of the text w/ image block, you could create a single block but add a radio button for which side the image goes on.
1 point
-
v0.6 is released with the following changes:

New setting: Source language (thx for the idea to @Ivan Gretsky and for the pull request to @theoretic)
Languages for source and exclude can only be selected if they have been defined in the fluency config first

I had to do some refactoring of the translation strings and the module config, so please validate/correct your settings after the update to v0.6!
1 point
-
During a recent maintenance routine we found that our website's database (1,700+ pages) had thousands of instances of unnecessary garbage code that had come with text copied from Word. Passages with margins expressed in points, cm and inches, and some that were wrapped in upwards of 7 spans, were among the most easily identified crimes. Purging all of this dropped our database size by over 4%. A few of the code examples above nuke all inline styles, which will impact some important out-of-the-box functionality for PW3 and CKEditor (depending on your use); specifically many of the options for tables and lists, such as setting a column width or changing the bullet styles within a nested list. To work around that, I made some changes to Ryan's code to target specific tags and to eliminate spans (which you can only add via Source view without pasting them in).

$wire->addHookAfter('InputfieldCKEditor::processInput', function($event) {
    $inputfield = $event->object;
    $value = $inputfield->attr('value');
    if((strpos($value, 'style=') === false) && (strpos($value, '<span>') === false)) return;
    $count = 0;
    $qty = 0;
    // Optional: remove spans
    $value = preg_replace('/<span.*?>/i', '', $value, -1, $qty);
    $value = preg_replace('/<\/span.*?>/i', '', $value, -1);
    $count = $count + $qty;
    // Remove inline styles from specified tags
    $tags = array('p','h2','h3','h4','li');
    foreach($tags as $tag) {
        $value = preg_replace('/(<'.$tag.'[^>]*) style=("[^"]+"|\'[^\']+\')([^>]*>)/i', '$1$3', $value, -1, $qty);
        $count = $count + $qty;
    }
    if(!$count) return;
    $inputfield->attr('value', $value);
    $inputfield->trackChange('value');
    $inputfield->warning("Stripped $count style attribute(s) from field $inputfield->name");
});
1 point
-
Perhaps you have auto-prepended and/or auto-appended template files in your /site/config.php, in which case you would want to use the $options array to override those when rendering the page, e.g.

foreach ($pages->find("template=article") as $article) {
    $content .= $article->render('teaser.php', ['appendFile' => null]);
}
1 point
-
1 point