Everything posted by teppo

  1. I feel like that all the time.. Seriously speaking, what I've been missing most is automated checking, logging, and reporting for broken links. This is partially solved by Page Link Abstractor, but I don't really like its approach that much.. and it only works for local links. Manually running something like the W3C link checker helps a bit, but doesn't really solve the issue. Another thing I'd love to see is proper Active Directory / LDAP integration. At least around here that's pretty much a requirement for building intranets etc. for larger organisations, as they all seem to use AD for managing their local users. I know that there's some code floating around for this and Antti has apparently already used it in at least one project, but last time I checked it didn't really look like a finished product -- more like something that could be built on. That's all I can think of right now.
  2. There's an issue with your proposed approach; namely the way isLoggedin() works. As you can see, it only checks whether the user is a guest, i.e. whether its ID matches that of the guest user. It's going to return true for any user you've fetched with $users->get(). Putting that aside for a moment, there's an even bigger issue here. If I'm getting this right, you're logging a user in, and later trying to check if that specific user (123) is logged in when anyone opens a URL like http://example.com/user/123. Isn't that a huge security issue right there? How would you validate that the user opening this URL is the same one that earlier authenticated with correct credentials? I really wouldn't recommend pursuing this. There are going to be severe security implications no matter how you approach it. .. but if you really have to, I'd consider some sort of token-based authentication method. When the user logs in, provide a URL she can visit to log in. Typically that URL would be invalidated after a single login (and after a certain period of time) to make it slightly more secure. Automatically generating something like this would still be very risky (please don't do it). It's more often used in combination with, say, a valid email address: the user types in her email and receives a URL that's valid for a certain period of time and allows her to log in (preferably once) before it's invalidated.
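     A very rough sketch of the token idea, just to illustrate -- the "login_token" and "login_token_expires" fields here are hypothetical additions to the user template, and $session->forceLogin() is only available in more recent ProcessWire versions:

         // generate a single-use login URL for a user
         $u = $users->get(123);
         $token = bin2hex(random_bytes(32)); // PHP 7+; use openssl_random_pseudo_bytes() on older installs
         $u->of(false);
         $u->login_token = hash('sha256', $token); // store only the hash
         $u->login_token_expires = time() + 3600;  // valid for one hour
         $u->save();
         $url = "https://example.com/login/?token=$token";

         // ...and on the page that handles the login:
         $hash = hash('sha256', $sanitizer->text($input->get->token));
         $u = $users->get("login_token=$hash, login_token_expires>=" . time());
         if ($u->id && !$u->isGuest()) {
             $u->of(false);
             $u->login_token = ''; // invalidate after a single use
             $u->save();
             $session->forceLogin($u);
         }

     Even then I'd think twice before building this.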
  3. What @clsource said. This definitely isn't typical AJAX behaviour. Something you're doing is very resource intensive; most likely retrieving or rendering pages -- though that's also all we know about your use case, so there could be something else involved that we just don't know about. Agreed about the headers, too: by default POST requests are not cacheable, unless caching is specifically forced using Cache-Control and Expires headers. You don't need to define any of those headers here unless you're somehow forcing caching for POST requests (although if you're just fetching records, not altering them, I'm equally confused about the use of POST here in the first place).
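     Just to be concrete: if you ever did want a POST response cached, you'd have to force it yourself with something like the snippet below -- and since you're not doing that, those headers really shouldn't matter here:

         // only needed if you explicitly WANT the POST response cached;
         // by default browsers won't cache POST responses at all
         header('Cache-Control: max-age=300'); // allow caching for 5 minutes
         header('Expires: ' . gmdate('D, d M Y H:i:s', time() + 300) . ' GMT');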
  4. Thanks for mentioning this, Soma. Looks like I'll have to take a closer look at the new format.
  5. RT @techdirt: Google May Consider Giving A Boost To Encrypted Sites http://t.co/MAeoEvetUo

  6. @muzzer: most fields (almost all of them) are only loaded as required. You can check "Autoload" within field settings (Advanced tab) to always include a field with the page, though. This makes sense especially if that field is always used when the page is loaded. You should also take a look at the built-in Fieldtype Cache; it sounds like it could be exactly what you're looking for here. It's a core module, but not installed by default. I wrote a quick post about caching in ProcessWire a while ago; you can find it here. It's not too in-depth (and doesn't mention the aforementioned Fieldtype Cache at all), so I'm not sure if it provides you any new information at all.
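     For the record, that same setting ("autojoin" in API terms, if I remember the flag name right) can be toggled via the API too; a minimal sketch, assuming a field called "myfield":

         // set the autojoin flag on a field so its value is always
         // loaded together with the page
         $field = $fields->get('myfield');
         $field->flags = $field->flags | Field::flagAutojoin;
         $field->save();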
  7.  $event = new Event();
      $event->date = "1979-10-12 00:42:00";
      $event->location = "Vogsphere";
      $event->notes = "The homeworld of the Vogons";
      $page->events->add($event);

     That's one way to do it, at least. Since $page->events here is an events field, it returns an instance of EventArray, which in turn can contain multiple Event objects. For the most part EventArray acts just like a PageArray or Pagefiles or any other object extending WireArray. This is just the most basic example of what you can do with it.
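     To continue the example a bit -- reading those events back works pretty much like any other WireArray-based value (a sketch based on the FieldtypeEvents example module):

         // iterate over the events stored on the page
         foreach ($page->events as $event) {
             echo "{$event->date}: {$event->location} -- {$event->notes}\n";
         }

         // WireArray-style selectors work here too
         $vogsphere = $page->events->find("location=Vogsphere");

         // and remember to save after adding events via the API
         $page->of(false);
         $page->save('events');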
  8. RT @lukew: 90% of the time, smartphone use is all thumbs. Design accordingly. http://t.co/dcQMmTJVxd

  9. I haven't had much (enough) time to work on my mailer module and haven't looked at how WireMailSMTP handles these particular things, but in general I'd have to agree with Pete. For things that are commonly used, it'd be best if there was a "standard" way to do them -- one that doesn't depend on which module extending WireMail happens to be installed at the time. I believe that we're talking about interfaces here, though that's not technically what it's going to be.. or what it is at the moment, at least. Then again, if @horst has implemented this feature already, I'll probably take a look at his implementation anyway and use similar public methods if possible to provide some consistency.
  10. RT @brad_frost: Yes, and that voice says "get this goddamn thing out of my face." http://t.co/KIPgjpg20M

  11. On a slightly related note, I've found clients to be quite happy with "I don't know, I'll find out and get back to you", but if they've got an issue and can't reach anyone on your side, they won't be happy at all. Be clear about when you're available and stick to that. The absolute worst thing you can do is make promises and set expectations you won't be able to fulfil. An easy way to increase customer satisfaction is exceeding expectations.. and the trick to exceeding expectations is setting them at a realistic level in the first place.
  12. As far as I know, the only way to do this would be applying an RTE with a module. Hooking into InputfieldImage::renderItem would give you access to the inputfield output, but I'm not exactly sure how TinyMCE (or CKEditor, if you're using that) is configured by default, i.e. whether it's enough to add a class to the description textarea (which should appear once the field is configured to hold more than one row of description data) or whether you actually have to add custom JavaScript to apply it to image descriptions.
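     If you do go the module route, the hook itself could look roughly like this -- note that the class name added here ("InputfieldCKEditor") and the markup being replaced are my assumptions, so check the actual rendered markup first:

         // in an autoload module's init() method
         $this->addHookAfter('InputfieldImage::renderItem', function(HookEvent $event) {
             // append a class to the description input so that custom
             // admin JavaScript (or an RTE config) can target it
             $event->return = str_replace(
                 'InputfieldFileDescription',
                 'InputfieldFileDescription InputfieldCKEditor',
                 $event->return
             );
         });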
  13. Take a look at the example below "Custom PHP code to find selectable pages":

          return $page->parent->parent->children("name=locations")->first()->children();

      Isn't that almost exactly what you need here? Of course you'd have to use ...->child("template=categories|tags")->children() or something like that.
  14. Pete: I haven't had the opportunity (?) to deal with truly large databases myself, but I remember discussing this with someone more experienced a few years ago. They had built a system for a local university and hospitals for managing medical research data (or something like that, the details are a bit hazy). I don't really know what that data was like (probably never asked), but according to him they started running into various performance issues at the database level after just a few million rows of it. Indexes in general make searches fast and having results in memory makes them even faster, but there's always a limit on buffer size, searching a huge index takes time too, and in some rare cases indexes can actually even make things worse. (Luckily the optimizer is usually smart enough to identify the best approach for each specific scenario.) The downside of indexes is that they too take space and need to be updated when data changes -- they're not a silver bullet that makes all performance issues vanish, but can actually add to the issue. This also means that you'll need to consider the ratio of inserts/updates vs. searches when creating your indexes, not just the fields to index and their order. This is starting to get almost theoretical, and you're probably right that none of it matters to any of us here anyway (although what do I know, someone here might just be building the next Google, Craigslist or PayPal) -- just wanted to point out that it's not quite that simple when it comes to really large amounts of data. And no matter what you claim, (database) size does matter. Edit: just saw your edits; apparently we're pretty much on the same page here after all.
  15. @rusjoan: for converting the content of a page to JSON, you could do something like this:

          $data = array();
          foreach ($page->template->fields as $field) {
              $data[$field->name] = (string) $page->$field;
          }
          $json = json_encode($data);

      Of course this isn't a complete solution and won't work in anything but the most limited use cases (think about images, files, page references etc.), but it's a start.
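     A slightly extended sketch, special-casing a couple of common fieldtypes (still far from complete):

         $data = array();
         foreach ($page->template->fields as $field) {
             $value = $page->get($field->name);
             if ($value instanceof Pagefiles) {
                 // files and images (Pageimages extends Pagefiles): use URLs
                 $data[$field->name] = array();
                 foreach ($value as $file) $data[$field->name][] = $file->url;
             } else if ($value instanceof PageArray) {
                 // page references: use page IDs
                 $data[$field->name] = array();
                 foreach ($value as $p) $data[$field->name][] = $p->id;
             } else {
                 $data[$field->name] = (string) $value;
             }
         }
         $json = json_encode($data);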
  16. @rusjoan: that error message is a bit vague, but it means that the name of your module is invalid. This is where it originates from. ProcessWire expects each module name to start with a single letter (uppercase or lowercase), followed by one or more lowercase letters. The result is that "VkAPI", "Vkapi", "vkapi", "RusjoanVKAPI" etc. are valid names, while "VKAPI" is not. I'd suggest renaming your module to comply with that requirement, as that's the easiest solution here. Edit: for the record, I just submitted a pull request about adding a more descriptive error message.
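     I haven't copied the exact expression from the core here, but a pattern matching the rule described above would be something along these lines (my own illustration, not the core regex itself):

         // one letter followed by at least one lowercase letter
         var_dump((bool) preg_match('/^[a-zA-Z][a-z]+/', 'VkAPI')); // true
         var_dump((bool) preg_match('/^[a-zA-Z][a-z]+/', 'VKAPI')); // false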
  17. @Pete: actually archiving database content elsewhere could have its merits, in some cases. Imagine a huge and constantly changing database of invoices, classifieds, messages, history data etc. Perhaps not the best possible examples, but anyway something that can grow into a vast mass. Unless you keep adding extra muscle to the machine running your database (in which case there would only be theoretical limits to worry about), operations could become unbearably slow in the long run. To avoid that you could decide not to keep records older than, say, two years in your production database. In case you don't actually want to completely destroy old records, you'd need a way to move them aside (or archive them) in a way that enables you to later fetch something (doesn't have to be easy, though). Admittedly not the most common use case, but not entirely unimaginable either. As for the solution, there are quite a few possibilities. In addition to deleting pages periodically you could do one or more of these:

      • exporting pages via the API into CSV or XML file(s)
      • duplicating existing tables for local "snapshots"
      • performing regular SQL dumps (typically exporting content into .sql files)
      • using pages to store data from other pages in large chunks of CSV/JSON (or a custom fieldtype per Pete's idea)

      In any case, none of this is really going to be an issue before you've got a lot of data -- and by a lot I mean millions of pages, even. Like Pete said, caching methods, either built-in ones or ProCache, will make typical sites very slick even with huge amounts of content. If your content structure is static (unchanged, with new fields added and old ones removed or renamed very rarely), a custom fieldtype is a good option, and so is a custom database table. These depend on the kind of content you're storing and the features of the service you're building.
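     As a concrete example of the first option, something like this could export old pages to CSV before deleting them -- the template name, field names and the two-year cutoff are all just placeholders here:

         // export pages older than two years to a CSV file, then delete them
         $fp = fopen('/path/to/archive-' . date('Y-m-d') . '.csv', 'w');
         fputcsv($fp, array('id', 'title', 'created'));
         do {
             // process in batches to keep memory usage in check
             $old = $pages->find('template=invoice, created<"-2 years", limit=100');
             foreach ($old as $p) {
                 fputcsv($fp, array($p->id, $p->title, date('Y-m-d', $p->created)));
                 $pages->delete($p, true); // true = delete children as well
             }
         } while ($old->count());
         fclose($fp);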
  18. RT @vruba: Idea: a high-level public agency with the mandate and funding to review and patch popular code. A national security agency, if y…

  19. RT @beep: Despite what so-called “climate change scientists” say, my walk through Harvard Square shows the brozone layer hasn’t depleted on…

  20. If you need to handle a large quantity of pages, I'd probably rely on SQL. Sounds like a rather trivial task that way, though this, of course, depends on what you're actually after. If I'm reading your post correctly and it's just selected pages you're looking for:

          SELECT GROUP_CONCAT(data SEPARATOR '|') data
          FROM (SELECT DISTINCT data FROM field_myfield ORDER BY data LIMIT 5) f;

      After that you've got a list of pages you can pass to $pages->find().. though I don't quite understand why you'd want to do this with the limit, so there's probably something I'm misinterpreting here. I hope you get the point anyway. IMHO it's questionable whether selectors should even be able to handle every imaginable task. This, for example, seems like quite a rare need to me (and is already easily solved by either a loop or SQL). Selectors are good at finding pages in general, while finding distinct values, even when those values are later used for finding other pages, sounds like a job for something entirely different -- or some kind of combination of SQL and selectors.
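     To tie that back to the API side, roughly like this -- assuming "myfield" is a page reference field, so that the data column holds page IDs:

         // run the raw query and feed the resulting IDs to $pages->find();
         // wire('db') is the mysqli wrapper used by older ProcessWire versions
         $result = wire('db')->query("
             SELECT GROUP_CONCAT(data SEPARATOR '|') data
             FROM (SELECT DISTINCT data FROM field_myfield ORDER BY data LIMIT 5) f
         ");
         $row = $result->fetch_assoc();
         $matches = $pages->find("id={$row['data']}");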
  21. +1 for some of these, especially the force delete field/template ones. It's apparently a safety feature to make it harder to lose important data, but also very, very annoying at times. IMHO a proper warning would be quite enough: "this will remove [the field from following templates / following pages using this template], are you sure you want to continue?"
  22. Heartbleed: Thanks, netcarver -- the article was great and the video linked from there was even better. Every good explanation needs a hand (or mouse) drawn diagram.
  23. Heartbleed: I found this post kind of interesting: http://article.gmane.org/gmane.os.openbsd.misc/211963. I haven't checked the facts myself, so I can't really vouch for it, but if it's true.. well, it does tell something about the mindsets of the developers working on this particular product. Security in general is a very complicated thing, as Matthew already pointed out, but too often vulnerabilities are (at least partly) a result of laziness, general ignorance and/or bad practices.
  24. RT @sandofsky: The data is in: hamburger menus kill engagement. "It was a disaster!" http://t.co/DlxaM1JIXM

  25. @bwakad: once again this depends on your use case, but you can always add page-specific notes in a field that you simply omit from page output. For template-specific notes, Soma's module does a good job.