mindplay.dk

Everything posted by mindplay.dk

  1. @pwFoo if this turns out to be a NetBeans issue, and if you figure it out, please post and let me know the solution - I can add it to the README to help others get started.
  2. The source is on GitHub, knock yourself out https://github.com/mindplay-dk/SystemMigrations
  3. I have a free account and only 10 shares allowed, but... I think I will migrate the project to GitHub tonight and just open-source it. I will post when it's done.
  4. Yeah, sorry about that. I just added an "EDIT" in bold to the original post, so others can decide whether they want to read through it all ;-)
  5. I don't use NetBeans, I use Storm, but I would be very surprised if NetBeans doesn't support basic, standard php-doc annotations like @var. Maybe your IDE isn't set up to inspect the stubs.php file? Check to make sure the contents of stubs.php are being updated and reflect your templates/fields. Make sure the output matches your type-hint - this is configurable, e.g. "tpl\basic_page" is one option, but "tpl_basic_page" is also possible. Just checked: the default is the "tpl_" prefix with no namespace (for compatibility with older PHP versions), so check the module configuration - you can set the prefix and namespace as needed.
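     Here's roughly how the generated type-hint is meant to be used - a minimal sketch, assuming the default "tpl_" prefix, and assuming a template named "basic_page" with a "headline" field (your names will differ):

         <?php

         // stubs.php (kept up to date by the module) declares one class per template,
         // annotated with one property per field, along the lines of:
         //
         //     /**
         //      * @property string $headline
         //      */
         //     class tpl_basic_page extends Page {}

         /** @var tpl_basic_page $page */
         $page = wire('pages')->get('/about/');

         echo $page->headline; // the IDE can now auto-complete and inspect this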
  6. Cool, are you on BitBucket? If not, create an account and let me know your username.
  7. @joe_g this is basically what the module I've been working on does - except, it writes this file for you (as flat JSON files for source-control friendliness) creating a repeatable record of every change made to templates and fields. I don't work with ProcessWire in my day job anymore, and probably won't get around to finishing this... I wonder if I should go ahead and release it as-is. Would anyone be interested in taking over this project?
  8. I don't see it so much as adding another layer. The underlying data model already resembles entity/component - I see it more as getting to a run-time model that more accurately reflects the underlying data-model. I didn't know about ProFields - it looks like it does add a kind of "compound field", managing to reuse those field-types that can co-exist in a single, flat table - so, as far as I understand, any type that isn't multi-valued, i.e. doesn't require its own table. So effectively, this does implement an entity/component pattern - though it kind of feels like an afterthought to me... I wish the field-type model had been built from the ground up using this pattern consistently...
  9. At the database level, yes, it does closely resemble the E/C model - fieldgroups, however, are a logical grouping of fields, not a physical grouping; the difference being, fields inside a fieldgroup are still exposed at the root of the entity, just like any other non-grouped field. Because it's only a logical grouping, the model does not reflect the grouping as such. Giving fieldgroups their own database table would not be possible given the current architecture, as far as I can figure - field types manage things like table schema and query construction. I could be wrong, but I'm pretty sure some drastic changes would be required to switch to a full-blown entity/component model.
  10. I had a realization recently about the PW data-model that I would like to share. The PW model is actually very akin to the Entity/Component model: Pages can be seen as Entities, Fields can be seen as Components of those Entities, and Templates can be seen as Component Types.

      One important difference from the E/C model, in terms of implementation, is the shape of the resulting entities - Fields are mapped directly onto Pages; in other words, Component properties are mapped directly onto Entities, which means the Components themselves are limited to a single property at the model level, even though some Field types are backed by tables with more than one property value, i.e. more than one data column.

      I can't help thinking the resulting model is really close, but one step removed from being anywhere near as powerful as a real E/C data-model. To explain why I feel this way, here's a very simple example. Let's say you have a shop with different types of products, many of which have at least one thing in common - they have a unit price, a product title, and a manufacturer name.

      With the PW model, I would add these three fields individually to every Template that describes a product type, and address them as, say, $page->price, $page->title and $page->manufacturer. Fetching these three fields requires three joins to separate tables. Conceptually, these three fields are meaningless on their own - they belong to a single component that defines the properties required for something to "be" a product in our system. The shopping-cart service, for example, depends on the presence of all three of these fields and doesn't function if one of them is missing.

      With the Entity/Component model, instead of adding three individual fields, you would define a component type that declares these three properties, and address them as, say, $page->product->price, $page->product->title and $page->product->manufacturer. In other words, these three fields, which belong together, are exposed in the model as a single, integral component - and fetching them requires a single join to one table.

      The ability to group related properties into components has performance advantages (fewer joins/queries), but more importantly, it has practical advantages in terms of programming. You wouldn't need to group related fields by prefixing them with "name_" anymore, which seems like an unnatural way to create a kind of "pseudo component". Adding or removing properties on a component type would naturally propagate that change to every entity that uses that component, rather than having to run around and update them manually. Controllers/services could define their requirements in terms of component shape.

      I realize it's a pretty substantial departure from the PW data-model, but perhaps something to think about for an eventual future 3.0. Or perhaps just an idea to consider for a different CMF in the future. Or perhaps just something interesting to think about and do nothing about.
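      To make the difference concrete in code, here's a rough sketch - the component-style API ($page->product, hasComponent()) is hypothetical, just to illustrate the shape of the model:

          // Current PW model: fields mapped directly onto the Page,
          // one join per field behind the scenes:
          $price        = $page->price;
          $title        = $page->title;
          $manufacturer = $page->manufacturer;

          // Hypothetical E/C model: one "product" component grouping the related
          // properties, fetched with a single join to a single table:
          $product = $page->product;
          echo $product->price . ' ' . $product->title . ' ' . $product->manufacturer;

          // A service could then state its requirements in terms of component shape:
          function addToCart(Page $page) {
              if (!$page->hasComponent('product')) {
                  throw new InvalidArgumentException('this page is not a product');
              }
              // ...
          }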
  11. I just ran into this issue, and it was not a permissions problem. This was a site I have checked into source control (Git), and the problem was that some folders are exempt from source control - namely "cache", "logs" and "sessions" in the "assets" folder. Apparently, PW will silently fail when these folders don't exist and can't be written to. Shouldn't it throw an exception on failure to write to any of these folders? The current error message is more misleading than helpful.
  12. Still thinking about this frequently. Still reaching no satisfying conclusions. Lately I'm leaning towards a solution that avoids HTML altogether - I came across this Markdown editor, and I really like the concept: https://markdown-it.github.io/

      The problem is, there's no server-side (PHP) version of this otherwise excellent, fast, very complete Markdown implementation. I'd rather not depend on Node for this, and I'd rather not have to port and maintain the whole thing. I wonder whether js2php could run it, but that project has been unmaintained for 3 years.

      I would like to use this in conjunction with a simple token replacement system - so if you were to type in "{kittens}", the token would appear on a list, from which you'd be able to specify what you want to replace it with. This would have plugins, so you could add image features, a table builder, or other things for which Markdown isn't ideal. Just thinking out loud here... I'm working on too many other projects at the moment.
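      For the token replacement part, something as simple as this is what I have in mind (plain PHP sketch, nothing ProcessWire-specific; $replacements would be a map of token name to plugin-rendered content):

          // collect every {token} in the Markdown source, so the editor can list
          // them and let you choose what each one should be replaced with:
          preg_match_all('/\{([a-z0-9_-]+)\}/i', $markdown, $matches);
          $tokens = array_unique($matches[1]); // e.g. ["kittens"]

          // at render time, swap each token for whatever its plugin produced:
          $html = preg_replace_callback(
              '/\{([a-z0-9_-]+)\}/i',
              function ($match) use ($replacements) {
                  return isset($replacements[$match[1]])
                      ? $replacements[$match[1]]
                      : $match[0]; // leave unknown tokens untouched
              },
              $html
          );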
  13. > Wouldn't this already be possible with the WireCache?

      Not really. Each page should be able to have more than one cache entry - and you can't enumerate cache entries for a page, since cache entries are not indexed by page ID. For that, you would need a page-specific caching facility.
  14. > sounds like you want to see a cache for storing calculated values associated with a page

      More or less - not really associated with the page per se, though; more like tagged as being related to a page ID. I envisioned $page->cache returning e.g. new PageCache($page->id), and any name/value you write to the cache would then get written to a table with the Page ID - so, cache entries indexed by Page ID, basically. And cache entries related to a Page would get automatically cleared when you delete the Page.

      Other than that, I picture this working just like any normal data cache - just a generic facility where you can store name/value pairs. I don't expect it to magically keep track of dependencies on other pages, although that is an interesting idea, if it could somehow be implemented without major performance drawbacks... probably not.
  15. I'd like to see a caching API added to Page objects. This would differ from WireCache in that it would enable caching in a Page-specific context, which would make it possible to provide some meaningful options for (among others) cache expiration in a Page context. The API might be something like a PageCache class, attached to Page objects as e.g. $page->cache. It would work much like WireCache, but would write cache entries with the Page ID attached to each entry, allowing some new options for cache expiration:

      • Expires when the Page object is deleted
      • Expires when the Page object is changed
      • Expires when the Page object, or a Page in its parent path, is changed

      When, for example, you have a page/template that aggregates content from another page (fetching content server-side), or when your template needs to perform expensive calculations or database queries to produce the page content, the fetched/generated content could be written as cache entries that live forever (until the Page object is deleted), until the next change to that Page, or until a change to a parent Page. Thoughts?
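      Roughly, I picture it working something like this - the PageCache class, its get() signature and the EXPIRE_* constant are all hypothetical, just to show the shape of the API:

          // $page->cache would return something like new PageCache($page->id),
          // storing entries tagged with the Page ID:
          $cache = $page->cache;

          // cache an expensive result until this Page (or a parent) changes:
          $html = $cache->get('related-products', PageCache::EXPIRE_ON_CHANGE, function () use ($page) {
              return renderRelatedProducts($page); // hypothetical, expensive render
          });

          // deleting the Page would automatically clear every entry tagged with its ID.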
  16. This is clearly a common request and just needs to be added to the core. @ryan I went ahead and added a "checked by default" option - please have a look.
  17. I'm missing this feature - in my case I have an "Active" checkbox on some items, and want new items to be active by default. "Inactive" in my case wouldn't make much sense, and I'm also not fond of the double negation... "not inactive" - that's poor semantics.
  18. To implement a module to do this, I would need to hook the path generation at the lowest level, which appears to be PagefilesManager::path() or actually the static method PagefilesManager::_path() at the very lowest level? Ryan, any chance you can make this hookable? A module could then implement the path modification, e.g. adding ".protected/" to the path, and adding a new setting to FieldtypeFile, and a checkbox to InputfieldFile. Does that sound feasible?
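      If path() were made hookable, the module side might boil down to something like this - just a sketch; isProtected() stands in for the proposed FieldtypeFile/InputfieldFile setting, and it assumes the owning Page is reachable from the PagefilesManager:

          public function init() {
              // only possible if PagefilesManager::path() is actually made hookable:
              $this->addHookAfter('PagefilesManager::path', $this, 'hookFilesPath');
          }

          public function hookFilesPath(HookEvent $event) {
              $page = $event->object->page; // assumption: the manager exposes its Page

              // hypothetical check for the proposed "protected" setting:
              if ($this->isProtected($page)) {
                  $event->return .= '.protected/';
              }
          }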
  19. Also, just noticed - it looks like the filenames are mangled for storage (which is fine) but the original filename isn't stored anywhere?
  20. Two years later, what's the status on this issue? I am building a site where some files need to be secured, and will be made available for purchase. I like the idea of using e.g. a ".private/" folder prefix - that would work fine for me. But it appears there's no way to configure a path prefix for a File field still? And I didn't find a third-party module that adds this feature either. So how are you guys currently going about protecting assets?
  21. Lo and behold, somebody started building that content editor I talked about - separating content from layout: http://madebymany.github.io/sir-trevor-js/ Very interesting! Though I expect the server-side rendering needs to be done with e.g. Node, and can't easily be done with PHP.
  22. @muzzer you're describing the typical approach, and that's what I'm unhappy with - deploying applications used to be like this, too... While building mostly applications these past six years or so, continuous integration has become standard for me, more or less, and many clients expect it by now. So it feels like a huge setback when you have to go and tell clients "we'll be down for half a day while upgrading your site". Not to mention the extra half a day of work you need to bill for, effectively just to deploy.

      And of course you can't work in teams anymore - one guy has to do the deployment work, preferably the guy who did the implementation work, since he has the best odds of doing the work again without screwing up. So one person becomes a resource bottleneck, which isn't good in a company that serves many clients and needs to move fast. Anyway, we all know about these reasons and others, I'm sure - the question is what can we do about it? Once I've seen a better way, I'm just not the type of guy who is able to lean back and go "well, this sucks, but let's just keep doing it the old way" ;-)

      By the way, the module basically works, but I just hate how much code it took, and the fact that it won't work for certain fieldtypes - and there's nothing I can do about that short of explicitly handling each fieldtype, possibly providing an API for third-party fieldtypes to integrate with the module, etc. It feels too brittle and half-baked, and I'm not inclined to do the rest of the grunt work to make the module usable, knowing that the idea is fundamentally flawed. I'm sort of manic that way ;-)
  23. > We use a user interface here for exactly what a user interface is meant for.

      Agree - however...

      > I would consider myself very lazy and remiss in my responsibilities if I expected people to use text files (YAML or otherwise) as the primary method of configuration

      I don't think anyone has proposed or suggested that? We're proposing a supplement/alternative, not a replacement. At least I would never suggest that, and I don't think that's what rajo was trying to imply.

      Ryan, do you have any thoughts on what I mentioned a couple of times earlier - changing the API internally to use a command pattern for changes to the data model? If every change to the data-model had to be submitted to a command processor in the form of an object, and those objects could be serialized, and assuming the command processor was hookable, that would make it trivial to implement all kinds of synchronization / change management / logging / recording modules.

      The problem with manipulating schema directly is that you can only tell whether something changed, not how it changed or why - you are already working around this fact by introducing things like change-tracking internally in the objects, which need to know what changes were made. For example, when a field gets renamed, there are methods to capture that change and work around its side-effects.

      When dealing with schema, I find it safer to have a centralized facility through which detailed records of change pass - rather than manipulating individual aspects of the schema independently here and there. Template changes and Field changes, for example, are closely related because they are components of the same schema model - yet these changes are completely unrelated at the API level. A command facility could provide a central API point where all schema-related changes would be exposed; even future (and even third-party) changes would pass through it, so for example a synchronization module could work with future/third-party extensions to the data-model...
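      To make that more concrete, here's roughly what I have in mind - all of the names (RenameFieldCommand, wire('commands'), submit()) are made up; this is just a sketch:

          // every structural change is described by a command object...
          class RenameFieldCommand
          {
              public $fieldName;
              public $newName;

              public function __construct($fieldName, $newName)
              {
                  $this->fieldName = $fieldName;
                  $this->newName = $newName;
              }
          }

          // ...and submitted to one central, hookable processor, rather than being
          // applied directly to Fields/Templates here and there:
          wire('commands')->submit(new RenameFieldCommand('body', 'body_copy'));

          // a synchronization/logging/migration module would then only need to hook
          // the processor's submit() method to see every schema change - including
          // changes made by future or third-party code.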
  24. You're right, metadata does not belong in the database, and this approach is a bit of a "cop out" - I'm just trying to do the best I can under the circumstances. I don't think redesigning the schema really solves the problem, though - the metadata needs to be separated from the data, which means it needs to get out of the database; separation is the main point, structure is not really the issue.

      What I would really like to see is a transactional design, in which command objects are used consistently when making changes to the structural metadata - this would completely solve the problem, and make it very easy to build a migration system. Simply put, direct changes to metadata would be encapsulated and kept strictly internal - to make a change, you would instead submit command objects. For example, to change the name of a template, you would have to submit a ChangeTemplateCommand object, which the system would then serialize and store in sequential flat files, along with the updated template state. To migrate a system forward, you would simply unserialize any command objects starting from the last applied numbered file and submit them again.

      The trouble is, the existing system was not designed with a clear separation of data and metadata - the same problem as in pretty much any CMS I can think of...
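      In other words, recording and replaying would boil down to something like this - a sketch only; the file layout, the wire('commands') processor and the fromArray() factory are all assumptions:

          // recording: every submitted command gets serialized to the next numbered
          // flat file, e.g. site/migrations/0001.json, 0002.json, ...
          $n = count(glob("$dir/*.json")) + 1;
          file_put_contents(
              sprintf('%s/%04d.json', $dir, $n),
              json_encode(array('type' => get_class($command), 'data' => (array) $command))
          );

          // migrating forward: replay every command after the last applied number
          foreach (glob("$dir/*.json") as $file) {
              if ((int) basename($file, '.json') <= $lastApplied) {
                  continue; // already applied on this system
              }
              $record = json_decode(file_get_contents($file), true);
              $command = call_user_func(array($record['type'], 'fromArray'), $record['data']); // re-hydrate
              wire('commands')->submit($command);
          }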
  25. Here's a draft of a simple service adding resource URIs. Just playing around, so don't take this too seriously. But the idea would be to replace numeric IDs everywhere in the model with URIs instead - and then, in Fields::get() and Templates::get() etc., you would convert or "sanitize" the ID to a numeric ID using something like $key = wire('resources')->id($key) ?: $key;

      In addition, wire('resources')->get('...') would work for any type of resource - it uses the first portion of the URI path as the name of the resource collection, e.g. 'pages' or 'templates' etc., which can be resolved directly by just calling e.g. wire('templates')->get($id) etc. Supporting URIs in page queries might be tricky - I didn't spend much time investigating that, but it doesn't look trivial.

      Anyway, I'm curious to hear what you think of the idea.
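      The core of it is just something like this (rough sketch of the proposed 'resources' service; error handling omitted):

          // resolves a resource URI like "templates/basic-page" or "fields/body" to
          // the actual object, using the first path segment as the collection name:
          class Resources extends Wire
          {
              public function get($uri)
              {
                  list($collection, $name) = explode('/', trim($uri, '/'), 2);

                  return wire($collection)->get($name); // e.g. wire('templates')->get('basic-page')
              }

              // resolves a URI to a numeric ID (or null) so that Fields::get(),
              // Templates::get() etc. can "sanitize" the given $key:
              public function id($uri)
              {
                  $resource = $this->get($uri);

                  return $resource ? $resource->id : null;
              }
          }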