Everything posted by teppo
-
Usually you wouldn't do this in real time. I'd suggest a cron job that periodically updates the source JSON. ProcessWire even provides a nifty LazyCron module you could use, but for tasks like this I prefer a proper cron job -- that way there's not even that (rare) slowdown for end users, and you don't have to worry at all about that process getting interrupted halfway. I've recently been working on converting some pretty large and active sites to PW, and that's exactly what I did there. In my case the JSON file is generated once a day, but of course you could rebuild it much more often; depends on how often your data changes etc. (rough sketch below).

That sounds just about right. There are certain things that are slower than others, but the PW selector engine is already pretty well optimized. What you can and should do is mostly just about keeping it simple -- fewer fields in a selector string is usually faster (take a look at Fieldtype Cache, by the way), searching with $page->children("selector") is faster than $page->find("selector") but only finds direct children, comparisons using "=" should be faster than "*=" (which should be faster than "%=") etc. I'm pretty sure you could find quite a few posts about keeping queries fast around here.

I'm definitely not the most qualified person here to comment on this. As a general tip, forget the native forum search function, it's not very helpful -- do a Google search with site:processwire.com/talk and you'll get much better results.
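For illustration, a bare-bones version of that cron setup could look something like this -- the paths, template name and output location are all made up, so adjust to your own setup. The script bootstraps ProcessWire by including its index.php, which makes wire() available outside the template system:

<?php
// build-json.php: regenerates the cached JSON file (hypothetical names and paths)
// bootstrap ProcessWire to get API access outside of templates
include '/var/www/example.com/index.php';

$items = array();
// the selector is just an example; grab whatever pages you need to expose
foreach(wire('pages')->find('template=article, sort=-created') as $p) {
    $items[] = array(
        'title' => $p->title,
        'url' => $p->httpUrl,
    );
}

// write everything out in one go so readers never see a half-written file
file_put_contents('/var/www/example.com/site/assets/cache/articles.json', json_encode($items));

.. plus a crontab entry that runs it once a day:

0 3 * * * /usr/bin/php /var/www/example.com/build-json.php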
-
I've surely written some crappy replies earlier, but this was the first one bad enough to be removed. No need to restore anything, really. The main point of that reply was that this module had a bug that @dragan uncovered: when more than two languages were in use, values were getting stacked, i.e. version_control_for_text_fields__data contained property values such as "data10141015" instead of "data1015", "data101410151016" instead of "data1016", and so on. This is fixed in the current version at GitHub, so I strongly suggest that everyone using this module updates to it. To fix existing data you have to replace those nonsensical values directly in your database:

UPDATE version_control_for_text_fields__data SET property = 'data1015' WHERE property = 'data10141015';
UPDATE version_control_for_text_fields__data SET property = 'data1016' WHERE property = 'data101410151016';
# .. and so on, depending on actual language page IDs (broken values should be easy to spot)
-
@dfunk006: working with a ton of pages is going to be slow, it's as simple as that. Sure, you can always add more muscle to your server, but honestly, how often do you really need to show something like 10,000+ results simultaneously? How is that ever going to be useful for end users? Adding a sensible limit and using pagination or infinite scroll etc. makes sense not only resource-wise, but also from a usability point of view.

To expand Martijn's reply a bit: PW also loads fields set to "autojoin" (via field settings) automatically, so you'll want to be careful with that if you're expecting to handle huge numbers of pages. Unless, of course, you actually always need those fields
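As a quick sketch of what that looks like in practice (the template name is made up, and renderPager() requires the MarkupPagerNav module plus page numbers enabled for the template):

// "limit" in the selector makes the result set paginated
$results = $pages->find("template=product, limit=25");
foreach($results as $item) {
    echo "<p><a href='{$item->url}'>{$item->title}</a></p>";
}
// output prev/next page links for the current result set
echo $results->renderPager();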
-
@dragan: thanks, that actually solved it. There was an issue with storing language versions; language IDs were getting "stacked" instead of only the last one being used, which resulted in useless data. Looks like I never properly tested this with multiple language versions.. I've just pushed a fix to GitHub and would suggest that you (and everyone else reading this) update the module. To fix existing data you can do something like this in your database:

UPDATE version_control_for_text_fields__data SET property = "data1015" WHERE property = "data10141015";
UPDATE version_control_for_text_fields__data SET property = "data1016" WHERE property = "data101410151016";
# .. and so on
-
ProcessImageMinimize - Image compression service (commercial)
teppo replied to Philipp's topic in Modules/Plugins
I'll have to agree with Adrian: many clients won't see the benefit, and many ProcessWire developers don't host client sites themselves, so they wouldn't really benefit from this either. The problem with large images doesn't seem huge when you're visiting a site often enough for the cache to kick in (which is true for many clients, I'm afraid). Unless you use analytics to see how the site is doing for the wider audience, you might not even notice that anything is wrong. Most of the sites we build also have a couple of API calls to images at most, and a ton of images included within page content (Tiny / CK fields). The benefits of minimize.pw for sites like that would be less dramatic.

Then there are people like me. I've been minimising static images with tools like OptiPNG and have also had plans to develop something similar to this module for local use. Your prices aren't high, but if I can do the same thing locally without any costs (other than the work required) and without any image limits and so on.. well, it does sound tempting. (People like me are probably not your target audience.)

Long post in a nutshell: I'd try to make the benefit of this service as clear as possible and consider what @apeisa said about minimising images during upload. The latter is what I see as the biggest flaw in your current product, actually: if the service would automagically start working after a one-click install, I believe it would be a lot more beneficial than the current solution. Ease of use, combined with a very clear explanation of what happens when your free quota is gone (does the service still work, does it prevent using the site normally etc.) and an easy way to upgrade your plan, and you've got a winner.. probably.

.. and, of course, if you want to make big bucks with this, add native support for more platforms.
-
I've been using this module with 2.4 for a couple of sites without issues. David, your problem sounds a lot like issues reported earlier, i.e. "empty" date fields containing 1970s timestamps. I'd suggest trying what Craig explained above:

DELETE FROM field_publish_until WHERE data = '1970-01-01 01:00:00';
-
Could you specify what kind of field this is that's not saving -- TextareaLanguage or something else? Based on your second post I'm guessing that data still gets saved to the db tables, is that right? Are you using the default theme ("new" or old) or something else? I'm currently running a fresh test site where this seems to work properly, so any additional information would be helpful.

I'd probably approach this directly via the database. The table version_control_for_text_fields contains all changes and timestamps for those, so it would be easy to grab changes made during the last 24 hours. The rest depends very much on what you want to send, i.e. whether you also want to describe how the content of each field changed and so on, so there's really no simple answer for this one.

Two tables, actually, and you're right in that those are language values. All values are stored mostly because that's what ProcessWire itself does -- it updates all language values simultaneously. You're right in that it feels a bit weird, especially here where rows are potentially stored for a very long time. I'm not yet sure how to get more specific data from ProcessWire (which language version has changed), so setting these values separately might require another method for handling data. This could also make certain database queries a lot more complicated. Created an issue for this one so I won't forget it right away..
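As a rough sketch of that first step -- I'm writing the column names from memory here, so check the actual schema (DESCRIBE version_control_for_text_fields) before relying on this:

SELECT pages_id, fields_id, property, timestamp
FROM version_control_for_text_fields
WHERE timestamp > NOW() - INTERVAL 1 DAY
ORDER BY timestamp DESC;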
-
My solution for the issues mentioned by @diogo above is a simple script that syncs (via rsync) the contents of /site/assets/files/ between dev and production sites ("two-way sync"), but that's just about it. Not a perfect solution, but it works mostly just fine. Certain things (such as the modules cache) still need to be cleared on a per-environment basis when making changes there (adding modules etc.) I wouldn't worry about logs, sessions or cache files -- it's actually better that they're environment-specific -- which leaves "files" as the only thing you'll need to keep in sync.

One drawback is the situation where you have assets in both versions and then remove them via the admin UI from one version; those assets will stay in the other version of the site and you'll have to figure out a way to remove them from there too. This thread provides some helpful code samples for doing exactly that, and you could also create a simple module that triggers a one-way sync operation (rsync, for an example, has the ability to remove extra files from the target directory) every time an asset is removed.

If anyone has a better solution in place, I'd be happy to hear about it. I've been considering various options starting from shared directories, but haven't found anything solid yet (it'd be easier if both sites lived on the same server, but generally the dev environment shouldn't exist on the same server as the production one IMHO..)
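For reference, the sync itself is nothing fancy; something along these lines, with made-up hosts and paths. Note that both commands are additive (no --delete), which is exactly why removals need the separate handling described above -- blindly adding --delete to a two-way setup would wipe out files that only exist on one side:

# pull new assets from production to dev
rsync -av user@production.example.com:/var/www/site/assets/files/ /var/www/dev/site/assets/files/
# .. and push new dev assets back to production
rsync -av /var/www/dev/site/assets/files/ user@production.example.com:/var/www/site/assets/files/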
-
ProcessWire 2.4 (possible to run on PHP 5.3.3?)
teppo replied to renobird's topic in General Support
Tested and so far working fine in PHP 5.3.2. Not that I'd be against updating PHP (I'm not!) but it should still be noted that most GNU/Linux distros backport security updates to bundled PHP versions. On this particular server, for example, I'm running PHP 5.3.2-1ubuntu4.22 -- so technically it's not the same thing as "vanilla" PHP 5.3.2. As long as you're using the PHP bundled with your distro and a distro that takes security seriously, you should be safe. As long as that particular distro version is supported, that is
-
Depends a bit on the context, but especially for modules I prefer translating strings in the module file, storing them in $config->js, and then fetching them from the config variable in the JS file. Simple example: PageListPermissions.module => PageListPermissions.js (though I'd also suggest "namespacing" the JS config content, i.e. instead of using config.i18n.*, use config.YourModuleName.* and/or config.YourModuleName.i18n.* etc.)
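In other words, something along these lines -- the module and property names here are made up:

// in YourModule.module: translate in PHP and pass the strings to the client side
$this->config->js('YourModule', array(
    'i18n' => array(
        'confirmDelete' => $this->_('Are you sure you want to delete this?'),
    ),
));

// in YourModule.js: read the translated string from the config variable
var i18n = config.YourModule.i18n;
alert(i18n.confirmDelete);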
-
Update: version 1.3.0, just pushed to GitHub, adds support for repeaters -- or, to be more precise, support for saving revision data for fields that are within repeaters. Repeaters being pages after all, it seemed most logical to treat them as such: if a repeater field added to a template for which version control has been enabled contains fields that are also under version control, the values of those fields will be stored just like they would be for the main page (the page containing the repeater field). I'm not confident that my explanation made any sense, so let's just say that this should be self-evident once you try it. The main point is that instead of saving repeater values on a per-repeater basis, the module treats individual repeater fields (or repeater field fields..) separately.

Another thing to note is that the snapshot feature added in the previous update is now a module called PageSnapshots. It's still bundled with VersionControlForTextFields and initiated (and automatically installed) by the VersionControlForTextFields init() method, so this shouldn't change anything. I'm simply trying to keep the "core" version control module as lean as possible.

Once again, I'd suggest making sure that things work properly before putting this update into real-world use. There have been a lot of changes and something could've broken. I've tried to write and run tests vigorously, but those definitely won't catch all issues.. yet
-
Interesting how these days self-hosting source code instead of using "public" hosting like GitHub makes a project feel almost "antisocial".
-
I find #2 kind of pale; there's much better contrast in #1.. and I also dislike those huge button-style links, way too "mobile" for my taste. How would the navigation bar of #2 work for subpages, i.e. how would templates, fields etc. fit in there? Or would they not? (If not, that's yet another big plus for #1 IMHO )
-
First one is brilliant. I really like how simple and clean it makes things look. Great job!
-
-
Custom Markup in Module (without form or table)
teppo replied to kongondo's topic in Module/Plugin Development
Just checking: you've tried creating the output (into a variable, such as $out = "my markup goes here") and returning that variable (return $out), not just echoing it out directly.. and it doesn't work? It definitely should, so I'm guessing there's something weird going on. If you could post some sample code that causes issues, I'd be happy to take a closer look.

The answer to your non-intended question is that you'll still have to render some inputfield markup there. This is probably easiest to explain with some code: the example below will output "my value" first, then render any inputfields this wrapper contains (in this case just one markup inputfield with the value "some markup").

$wrapper = new InputfieldWrapper;
$wrapper->attr('value', 'my value');
$inputfield = new InputfieldMarkup;
$inputfield->value = "some markup";
$wrapper->add($inputfield);
echo $wrapper->render();
-
delete($page, true) doesn't delete repeated fields on 2.3.0
teppo replied to joe_g's topic in API & Templates
@ryan: the issue I was having was (once again) related to API use, in which case access control affecting things makes sense.. and yes, since then I've been running modified core code (include=all) and at least that particular issue seems to be fixed.

By the way, I only just noticed your post and had already written a slightly more extensive test script to see what exactly happens when I work with repeaters over the API without the aforementioned fix. I posted it as a public gist, if you want to take a look: https://gist.github.com/teppokoivula/8889040. Still far from a proper test case and a bit disorganised, but at least it helped me figure out what works and what (possibly) doesn't..

It's very much possible that I'm doing something wrong here and/or should really do even more manually, but so far it seems to me that repeaters could use some extra garbage cleaning here and there -- and perhaps some simplification, especially when it comes to creating and/or removing repeater fields. It should also be noted that I couldn't find any "official" documentation either, so the way I'm doing this at the moment is partly copied from core code, some of it came from examples adrian (I think) posted somewhere, etc.
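For anyone else heading down the same path, the basic pattern I've been using looks roughly like this -- "repeater_field" is a made-up name, so treat this as a sketch rather than gospel:

// add a new item to a repeater field over the API
$item = $page->repeater_field->getNew();
$item->title = "New item";
$item->save();
$page->repeater_field->add($item);
$page->save();

// remove an existing item from the same field
$page->repeater_field->remove($item);
$page->save();
-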
I'm not exactly sure what you think "combine" means here, but this field does exactly what the description says; it grabs data from other fields and mashes it all together into one big blob of (JSON) content -- and that's just about it. One very simple (yet sometimes very practical) use case: say you've got 15 different text fields and you need to find pages that contain the value "john doe" in any of those. Instead of doing this:

$john_does = $pages->find('field1|field2|field3|field4|field5|...|field15%="john doe"');

.. you can create a cache field, select all of those fields to be cached in it, and then do this:

$john_does = $pages->find('my_cache_field%="john doe"');

Not only does this look cleaner, in certain situations it can wildly improve query performance.
-
delete($page, true) doesn't delete repeated fields on 2.3.0
teppo replied to joe_g's topic in API & Templates
Reviving an old topic, as I'm seeing something similar here at the moment. Sadly my current setup is far from a clean test case, so I won't go into too much detail, except that it seems to have something to do with the Pages delete() method. Simply put, it doesn't find the actual repeater items, so they never get deleted. For example, the page /processwire/repeaters/for-field-359/for-page-1646/ gets deleted, while the one below it at /processwire/repeaters/for-field-359/for-page-1646/1391714225-7472-1/ remains.

This is the original code that won't work for me:

public function ___delete(Page $page, $recursive = false) {
    if(!$this->isDeleteable($page)) throw new WireException("This page may not be deleted");
    if($page->numChildren) {
        if(!$recursive) throw new WireException("Can't delete Page $page because it has one or more children.");
        foreach($page->children("status<" . Page::statusMax) as $child) {

If I change that last line to this, repeater items are found and properly deleted:

foreach($page->children("include=all") as $child) {

I'm probably missing something here, especially since I've no idea why a status selector is used there or why it doesn't seem to find the repeater page in question, but at least for me this fixes the problem. Might cause some new ones, though, don't know about that yet..
-
@Anssi: sounds like an issue with the secure connection. Google is using HTTPS for all search results, and RFC 2616 clearly states that when going from HTTPS to HTTP the referer header should not be preserved. One exception to this is the referer meta tag, which Google is actually using to provide some basic referer information for browsers that support it -- namely the domain that a request originated from. There's not much information you can get out of that, though

Simple test: open dev tools, switch to the net panel (or whatever it's called in your particular flavour of tools), set up a filter (if available) to only show documents, enable "sticky" logging ("Preserve Log upon Navigation" in Chrome) and click any Google search result. Only the originating domain should be included with the request headers.
-
@dragan: if you're literally using var_dump(), it should be noted that it doesn't return anything. In order to store its value in a variable you'd need to use output buffering or some other trick. Another option is to use print_r() with the return param set to true. The main difference between var_dump() and print_r() is that var_dump() provides more information about the data types involved, while print_r() only outputs content. Edit: you should also take a look at this comparison between var_dump(), print_r() and var_export().
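To put both options into code -- either way the dump ends up in $output:

// option 1: capture var_dump() output with output buffering
ob_start();
var_dump($value);
$output = ob_get_clean();

// option 2: have print_r() return its output instead of printing it
$output = print_r($value, true);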
-
Should've done this earlier, but I've just pushed an updated version of the module to GitHub. This version is set to require the "changelog" permission and creates it when the module is installed. Dragan and any others who've already installed the module: I suggest that you get the latest version, add the aforementioned permission manually and then apply it to those roles that should be able to access the changelog page.
-
One (definitely not fool-proof but still useful) approach is to rely on those same proxy sites and implement a simple blacklist. Hide My Ass! has a nice list of public proxy site IPs you can use -- and, surprise surprise, even buy as a .txt file. For $25 they even promise to email you an updated copy every day "for life". How's that for a business model?
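The check itself can be as simple as this (the file name is made up; assumes one IP per line):

// load the blacklist and deny access for known proxy IPs
$blacklist = file('proxy-ips.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
if(in_array($_SERVER['REMOTE_ADDR'], $blacklist)) {
    header('HTTP/1.1 403 Forbidden');
    exit();
}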