teppo
PW-Moderators
  • Posts: 3,260
  • Days Won: 112

Everything posted by teppo

  1. I realise that you kind of got your answer already, but just wanted to add some thoughts to the topic. First of all, we've discussed the "what should be in the core" topic a few times, and the conclusion has so far been something along the lines of "features that most sites require". For most sites the published/unpublished separation is more than enough, and a feature like drafts could actually even be harmful (an added complication). ProcessWire, as it is right now, does most of what any regular site could require, and more. In fact I'd be tempted to argue that in the future we should trim down the current distribution, not add to it, unless someone can point out a real need that still hasn't been answered. What Ryan has been doing with the AJAX inputfield updates, selector updates, etc. is enhancing existing features, and in my opinion that should remain the main area of focus in the near future.

     Anything that isn't in the core can be realised in the form of modules, and if those modules are commercial, that's (again in my opinion) fine. ProcessWire has a viable module ecosystem, and anyone can create and publish commercial modules if they so choose; in other words, this isn't a right reserved for Ryan alone. Of course I hope that module authors choose to open source their code and provide it for free, but I can't force that decision on anyone... and neither can anyone else.

     All that being said, the way this works should be communicated clearly and visibly: what goes into the core and why. Years ago I looked into Concrete5. Back then multi-language support was only available as a commercial addition, and my first reaction was literally "what a greedy bunch they must be, asking money for what should be a core feature". Thinking about it now, I probably just didn't understand their target audience and the reasoning behind this particular decision. I sincerely hope that as few people as possible take a look at ProcessWire, see Drafts or FormBuilder or ProCache, and think "those ProcessWire developers must be a greedy bunch to take money for obvious core features!"
  2. Right. After looking around for a while, I still can't seem to find an obvious solution to this. Considering that you mentioned that sometimes things work as expected, the only things I can think of right now are that a) the MySQL server is not functioning properly, b) the connection between your site and the MySQL server is bad, or c) it's a performance/stability issue (i.e. the database server can't handle the load). Since you mentioned that you've been working with this host before, did you mean that you've had (or still have) other sites there that run just fine... or did this particular site run fine at some point and break after a change on the server? If it's the latter, I'd be almost certain that the host migrated databases elsewhere and something went wrong, but if this is a brand new database, that's somewhat less likely. Either way, the issue itself still sounds like something related to your web host. I've never seen or heard of this happening before, and while my experiences may be somewhat different from those of the majority of users here (I prefer to manage all my servers myself, so I don't have much experience with shared/managed hosts), I know for sure that ProcessWire has been running perfectly fine on so many hosts that this is very unlikely to be a ProcessWire issue per se.
  3. Which table doesn't exist? Is this the full error message, and if so, how do you know which table it's referring to? Do you get any errors or other weirdness in your /site/assets/logs/errors.txt or /site/assets/logs/messages.txt, and if you enable the debug mode (via /site/config.php), what's the full error message you're getting? Edit: reading your post properly, you mention that sometimes the site works. This makes it sound a lot like a database issue, and while I'd be tempted to say that it's most likely an issue with your host, we should probably dig into this a bit further. Is the database on the same machine (localhost), or is it on a separate server (and host)? Have you ever experienced connectivity issues with this host before?
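     For reference, enabling debug mode is just a one-line change in /site/config.php (this is the standard ProcessWire setting; remember to turn it off again on a live site):

     // /site/config.php
     // When true, ProcessWire shows full error messages and stack traces
     // instead of the generic "internal server error" style output.
     $config->debug = true;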
  4. One hacky solution would be parsing the URL manually from $_SERVER['REQUEST_URI']. It's nowhere near as clean and simple as checking $input->urlSegment1, you'll have to route these requests through your home page each and every time, and it won't be a valid solution for all use cases... so perhaps not the best thing to do after all. Somehow I was under the impression that URL segments wouldn't be as "strict" as actual URLs; no idea where I got that idea.
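     Just to illustrate what I mean by parsing the URL manually, a rough sketch (the segment handling is deliberately minimal, and you'd still need to route these requests through your home page template):

     // Grab the path portion of the requested URL, e.g. "/products/red-widget/"
     $path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

     // Split it into segments and drop empty items
     $segments = array_values(array_filter(explode('/', $path)));

     // $segments[0] is now roughly what $input->urlSegment1 would give you,
     // except that nothing has been validated for you; sanitize before use.
     $first = isset($segments[0]) ? $sanitizer->pageName($segments[0]) : '';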
  5. @EvanFreyer, open your .htaccess file and look for this line: "# OPTIONAL: Send URLs with non name-format characters to 404 page". This is the first step towards enabling non-ASCII URLs, though I'm not entirely sure if it's the only change you need to make.
  6. Doesn't sound right at all. 500M makes it sound like all your posts are getting loaded into memory all at once. Taking a quick look at the ProcessBlog.module, there's actually a bunch of queries without proper limits defined, such as this one, for example: wire('pages')->find('template=blog-post, include=all, sort=-blog_date, parent!=7'). If you want to debug this further, I'd try disabling all queries that find blog posts without a proper limit, and see if that helps. Kongondo would know better how to handle these, if they're the issue here. Loading all posts into memory at once definitely shouldn't be the only option.
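     As an example of what I mean by a proper limit, the same query could be constrained along these lines (the limit value here is arbitrary; the right number depends on what the result is actually used for):

     // "limit" turns this into a paginated database query instead of
     // loading every single blog post into memory at once.
     $posts = wire('pages')->find('template=blog-post, include=all, sort=-blog_date, parent!=7, limit=25');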
  7. Isn't that alone a pretty big difference? Modularity is the key argument for PostCSS, and unless I've missed something big time, that's a feature none of the "typical CSS (pre)processors" provide. This is why comparing PostCSS to typical preprocessors seems kind of pointless; they're entirely different types of beasts. If their benchmarks are anywhere close to the truth, it seems more like this could be a suitable platform for rebuilding some of the current preprocessors. I've been using the Ruby SASS compiler for a while now, and that, at least, is painfully slow. @diogo: Unless I'm missing something here, Chris Coyier mostly discusses alternative/future CSS syntax in his post. That's an extremely valid point, but it's not yet an argument against (or for) PostCSS. PostCSS, as LostKobrakai pointed out above, is essentially a CSS parser with plugin support, and it's entirely up to you (and other devs) what the plugins you build/install actually do.
  8. PageArray is designed to hold unique records, so the easiest way around this would probably be adding those bogus pages via the API: run a script with a foreach/for/while loop and on each round add a page with (machine-generated) bogus data. Test, and finally run a script that removes all bogus pages based on template (if these are the only pages using this template) or some other factor you've cooked into your bogus data – specific name format, parent, etc. Another thing to note is that if you run a find query on $bulk_matches, which is a PageArray you've already fetched, this will use in-memory selectors. Your actual search will most likely make use of database selectors instead, which will differ in both functionality and efficiency. On the other hand, what you've described here doesn't really sound like much of a problem yet. Searching thousands of pages should be fast, at least assuming that you a) add a sensible limit to each query, and b) don't add too many fields to the query and/or combine your search fields beforehand using Fieldtype Cache or something similar.
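     A rough sketch of what I mean by adding and later removing bogus pages via the API; the template and parent used here are just placeholders for whatever you're actually testing with:

     // Create a batch of machine-generated bogus pages for search testing
     for ($i = 1; $i <= 5000; $i++) {
         $p = new Page();
         $p->template = 'bogus-item';          // placeholder template name
         $p->parent = $pages->get('/bogus/');  // placeholder parent page
         $p->title = "Bogus item $i";
         $p->save();
     }

     // ... run your search tests, then clean up based on the template
     foreach ($pages->find('template=bogus-item, include=all') as $p) {
         $pages->delete($p);
     }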
  9. At the moment Raymond's answer seems to be the only working solution. This is essentially the same thing that @thetuningspoon posted in this issue: https://github.com/ryancramerdesign/ProcessWire/issues/1121, and it's been one of the very few major headaches for me since the beginning. In my opinion, when something is set as "required", it should mean "required", not "recommended". The page should not be saved under any circumstances unless all required fields are filled in. From what I can remember, this has been discussed before, and the general idea was something along the lines of "a page with a lot of fields and a couple of them required could result in a lot of lost data if it wasn't saved when one of the required fields was left empty". Essentially the page can be saved but left unpublished, and thus considered "temporary storage". This, again in my opinion, would be better solved by storing those values not within actual pages, but in a) some temporary alternative, b) browser memory, or c) session variables. Of course that would be a new concept for ProcessWire, and Ryan might also have other reasons to stick with the current behaviour. Just my (relatively unhelpful) two cents. Edit: Just wanted to add that one specific issue with the current system is that once I fill in all the fields I can publish the page, but after that I can clear required fields, save the page, and it remains published. This, at the very least, is not what I'd expect.
  10. Minor note regarding the 404 monitor: currently all hits to sitemap.xml (generated by Pete's Markup Sitemap XML module) are recorded as 404s. Haven't had a proper look under the hood yet, but so far I'm guessing that both modules hook into pageNotFound, in which case this sounds like a priority issue or the logger not checking if another module has modified the output (however that should be implemented). Anyway, just something you might want to consider for the new 404 logger module. For the record, the 404 logger is very handy when migrating a lot of content from one site to another, and it's also cool that one can easily register a new Jumplink from the 404 list. Great work, just like the rest of this module!
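      For what it's worth, if this does turn out to be a hook order issue, a module can state its hook priority explicitly; something along these lines in the module's init() (the hook method name here is just a placeholder):

      // Lower priority values run earlier; the default is 100, so a higher
      // value makes this hook run after hooks added with the default priority.
      $this->addHookAfter('ProcessPageView::pageNotFound', $this, 'hookLog404', array('priority' => 200));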
  11. I should probably add Page List Permissions after Antti adds User Groups. They're separate modules, after all, and User Groups doesn't really require Page List Permissions -- it has its own UI for managing permissions via Page Edit.
  12. Antti, I think you should do the honors. Might also want to rename this topic (or create a new one) while you're at it.
  13. I would advise against repeaters. This would definitely become a performance issue in the long run. What @Martijn suggested sounds like a possible solution too, though that would require creating a lot of pages. Depends on how popular your site is, but this sounds like another scalability issue to me, and at the very least it would create unnecessary load on your server. The Table field from the (commercial) ProFields module bundle would be a great fit for this task. It creates a database table of its own, which would make sense here. Another option would be a new module that does pretty much the same, though that would be more complicated to set up. Unless there's something I'm completely missing here, I would actually suggest a completely different route. What you're doing here is trying to recreate a subset of what Piwik and Google Analytics already do extremely well. That doesn't make any sense to me at all. Set up Google Analytics and let it handle collecting the data. If there's a need to pull the data into your site and display it there (instead of within the GA GUI), use Process Google Analytics or query the data via the GA API. There are PHP libraries available for doing that, so it's not that much of a hassle really.
  14. Sorry to hear that your site got hacked, Svet, and I hope you can get this sorted without too much unnecessary pain! If you don't mind, and if it doesn't contain any information that is confidential / potentially harmful to you, it would be interesting to hear what exactly went wrong there. Your host not having PDO enabled by default makes me wonder if they've got everything else set up properly. While one can still compile PHP without PDO support, that alone sounds like a very bad idea. Unless your host is taking proper care of keeping everything up to date and (first and foremost) secure, you might want to check out other alternatives.
  15. Because it's important. Things might not work exactly the same for logged-in users, let alone superusers, as they do for guest users. Permissions, for example, are one important consideration, and many sites also have separate login details, admin tools, etc. visible only when you're logged in. In other words: if you render a page for caching while logged in, that's a potential problem right there. How much this really matters depends on the case, of course, but it's better to be safe than sorry, especially when we're dealing with assumptions that affect a) security, and b) a huge number of ProcessWire users out there.

      As long as a given site is even moderately popular, meaning that it gets anywhere between tens and hundreds of page views each day, one visitor taking a couple of seconds longer to render a given page (and that would already make it an *extremely* heavy page) makes little difference. You need to consider the big picture here. While the "every visitor matters" approach is admirable, and very much true, when you put things in perspective, a single page load taking slightly longer isn't going to bring your site instantly crashing down or drive your users away for good. *Don't succumb to the trap of over-optimisation.* Most likely there are a lot of other things you could do with that time that would benefit your site much more. In my humble opinion you're sacrificing a lot of time to do something that won't make a noticeable difference in the big picture. If that makes you happy, then by all means do it, of course.

      Hardware may be cheaper now than ever, but it's still far from free, and while you can run a small site relatively cheaply, you need to consider scalability too. Large and/or more popular sites require more resources, which in turn means more costs. So far I've never heard a client say "I don't care about costs at all". Quite the opposite, actually. In the end, one part of your job as a developer is keeping costs low for the clients you work for. Again, you need to put things in perspective. User experience is extremely important, but resources matter too. Every choice you make has pros and cons, and you need to keep those in balance. Admittedly user experience is much harder to measure than resources and their costs, so the decisions are not always obvious...

      What you're saying here is roughly how it works, though especially in larger use cases caching everything on each page save could literally render the site unusable. At the very least it could mean that each save action is extremely slow, which is not a good thing either. There's a lot more to cache expiration than this. ProcessWire makes it easy to use content from other pages, output lists of pages, etc.; these need to be considered when selecting which pages the cache is invalidated for. In many cases it's not nearly enough to just invalidate the cache for the page you've saved. One might also be pulling content from other sources, and outputting content that is time-sensitive (such as dates, events, etc.). Hope this helped clarify things a bit!
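      To make the logged-in point a bit more concrete, here's a simplified sketch using the bundled MarkupCache module; the cache name and the renderProductList() function are made up for the example, and the template cache / ProCache handle this kind of logic for you:

      $out = '';
      if ($user->isLoggedin()) {
          // Logged-in users may see admin links, personal data, etc.,
          // so render their markup fresh on every request.
          $out = renderProductList(); // placeholder for your own rendering code
      } else {
          // Guests get (and populate) the shared cached copy.
          $cache = $modules->get('MarkupCache');
          if (!$out = $cache->get('product-list', 3600)) {
              $out = renderProductList();
              $cache->save($out);
          }
      }
      echo $out;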
  16. Hello to you too, @antpre, and welcome to the forum! Probably the one and only reason why the module isn't in the modules directory is that @apeisa felt it was too incomplete at the time of the announcement. @apeisa, any comments on this? Shouldn't we add it by now? I've got two relatively minor updates pending, and will probably merge those to the master branch of the module soon. Apart from that, the module is in use on at least a couple of sites already that I know of, and so far it's working just fine. There are still a bunch of things to take care of and improve, but it's already very much usable. Not having much activity lately means, in this case, two things: first of all, there's nothing major missing, so there's no need to rush into action; and second of all, everyone involved is quite busy with other stuff. If you have time to give this module a test, let us know how it works for you; we're still around and the module is still maintained.
  17. Might not be relevant anymore, but still wanted to answer this particular question. To find out when PaginatedArray.php was added to the core, open the file in GitHub, click "History", find the applicable commit (in this case the first one), and click the "< >" link next to the commit number ("browse the repository at this point in the history"). This is probably the easiest way to browse the repository at a given point in time and check the applicable ProcessWire version from /wire/core/ProcessWire.php or whatever else you might require. For tracking more specific changes you'd use "Blame" instead of "History" to find the commit and then hit "Browse files" from the top right corner of the commit message.
  18. Just a few quick notes:

      The sites directory, or the developers directory, is our best bet at identifying sites, companies, and developers using ProcessWire. On the other hand, these are a subset of the real figures – for example, none of the sites our company has built using ProcessWire are listed there (that's a longer topic, not going there right now). The truth is that we can't tell for sure just how widely ProcessWire is used, partly because it's built in a way that makes it possible to completely hide the fact that a given site is powered by ProcessWire. Which is a good thing, really. We do know that it's gaining momentum, and it's also quite safe to say that its usage is nowhere near that of WordPress.

      If, for any given reason, the lead developer had to step down, development would no doubt continue. We've got plenty of capable folks around here. It's pointless to speculate whether that would lead to one or more forks and what else might or might not happen, but I'm confident that ProcessWire wouldn't go away that easily.

      Personally I've always found the question of "how do we find another developer for a project built with platform X" somewhat off. First of all, this question completely ignores the fact that each system is different. ProcessWire is particularly easy to learn and understand, and unlike (apparently) with certain other platforms, there's no year-long learning period involved. A developer with basic knowledge of PHP and web development in general should be able to just step in without any major delays. To be fair, this also applies to WordPress and other even remotely sensible platforms. Building sites with a CMS/CMF is not rocket surgery, and while years of experience with a given platform will give you an edge over someone less experienced with said platform (you already know some of the possible pitfalls and shortcuts), that's all there is to it.

      What's much more important is how a particular project has been developed so far. Just like with any other platform out there, it's possible to build broken and overcomplicated crap with ProcessWire, while (and this is at least partly opinion-based) it's not as easy as with certain other platforms. The flexibility of ProcessWire means that even if the previous developer made some pretty horrible choices along the way, it should still be possible to salvage some parts and rewrite others. In the end it's always better to just brace yourself and tell the client that the whole thing needs to be blown into pieces than to attempt to forcefully breathe life into a project that's already rotten to its core, regardless of which system it was built with.
  19. @kixe: I'll try to set up a clean test case and/or a screencast later; probably easier to explain that way. To summarise, it looks like we're talking about different things here, since sleepValue() shouldn't have anything to do with my issue. The problem isn't saving broken/malformed values to the database (which, to my best knowledge, is where sleepValue() steps in); it's about InputfieldSelect not being able to add "selected='selected'" to the selected option, and the field value reverting to the value of the first option available is simply a result of that.
  20. @kixe: I'm seeing a potential issue with this module. Using this in combination with InputfieldSelect everything else works, but the inputfield won't display the selected option as selected, so saving the page again I always end up with the first available option selected. InputfieldSelect checks the string value of the field to determine if the current item should get 'selected="selected"', and since the string value will always be "SelectExtOption" (the name of the class), this won't work at all. I've solved this locally by adding a __toString() method to the SelectExtOption class: public function __toString() { return (string) $this->value; } Is there something I'm missing here? For the record, I'm using the latest dev version of ProcessWire (2.6.9).
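      In context the local workaround looks roughly like this (only the relevant part of the class is shown, not a complete patch):

      class SelectExtOption /* extends ... as in the module */ {

          // ... existing module code ...

          // Return the option value as a string so that InputfieldSelect
          // can match it against the stored value and mark it as selected.
          public function __toString() {
              return (string) $this->value;
          }
      }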
  21. Thanks for sharing, this looks like a handy module! Will definitely give it a closer look soon. Also, your code looks absolutely beautiful, if I may say so. One thing you might want to check, though, are the SQL queries. I noticed that you're sanitizing inserted URLs with $sanitizer->url(), and while this will make sure that they're valid URLs, they can still contain unescaped apostrophes. I don't see a straightforward way to exploit this for SQL injections, but it could probably result in broken SQL statements. Usually I'd suggest using prepared statements over plain queries, even in simple use cases, as they do make it easier to avoid various parameter-related problems. You were referring to a particular ProcessWire issue in your WireHttp "shim". From the issue it seems like Ryan has already implemented this, or at least added the completed tag to the issue itself; is this still necessary?
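      By prepared statements I mean something along these lines, using ProcessWire's $database (PDO) API; the table and column names here are made up for the example:

      // Bound parameters keep apostrophes and other special characters from
      // breaking the statement, regardless of what $sanitizer->url() lets through.
      $query = $database->prepare('INSERT INTO my_links (url) VALUES (:url)');
      $query->bindValue(':url', $url, PDO::PARAM_STR);
      $query->execute();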
  22. Hi there! The first thing to check is whether you have ACF and/or HTML Purifier enabled via the field settings. If so, you've just found the most likely culprit. Both features can be configured via field settings, and you should first check if you can enable the attributes etc. you're trying to use from there, but especially Purifier can be quite strict about allowed content. Sometimes disabling it is the easiest path out.
  23. @Neeks: that sounds pretty much right; the revisions table is the only one containing references to the page ID, so changing it there should be enough. I can't think of anything else you'd need to do straight away, though I haven't attempted anything like that before either, so it's entirely possible that something isn't working quite right after this (like you've mentioned above). Could you explain what exactly breaks after this? I might be able to help if I knew what went wrong. Currently I'm busy with other stuff and can't really set up a test case for this myself.
  24. You probably know both already, so perhaps you're looking for less well-known alternatives, but I'd still suggest looking into Amazon CloudFront and/or CloudFlare. Big players have their benefits, and these guys can definitely "handle a lot of traffic".
  25. Glad to hear that you got it sorted!