Everything posted by teppo

  1. Based on the screenshot menu_item is definitely a page field. $pages->find() gives you an instance of PageArray, and like @diogo already mentioned, you can't ask for field "menu_item" of that PageArray -- it contains multiple Page items and doesn't have any fields of its own. You'll have to first get individual Pages out of that PageArray:

```php
// this gives you the first Page object found with your selector; if you're sure
// that there's never going to be more than one Page, this should be fine
$items = $pages->find('template=menu, title=Toolbar Menu, include=all');
foreach ($items->first()->menu_item as $item) { ... }

// another way to get only the first result is to use 'get' instead of 'find'
$items = $pages->get('template=menu, title=Toolbar Menu, include=all');
foreach ($items->menu_item as $item) { ... }

// if there's a possibility of multiple 'menu' pages, you need another foreach
$items = $pages->find('template=menu, title=Toolbar Menu, include=all');
foreach ($items as $menu_page) {
    foreach ($menu_page->menu_item as $item) { ... }
}
```

Got it?
  2. Another take on "Don't Fuck Up the Culture", with some valid insights (whether or not one agrees with them): http://t.co/VuIL6ZEW6B

  3. @renobird: sounds awesome -- hope you'll find the time to do that. Shibboleth has come up a few times in the past and I'm sure a module integrating with it would be a good thing to have. @Pete: cool.. I'm starting to like ADLDAP too; looks like it makes a lot of things possible and the API isn't too bad either.
  4. Call me overly cautious, but I'd advise against a self-managed VPS if this service needs to be highly secure and especially if you need a high level of availability. Anyone can manage a server when things go smoothly -- install updates, add a few rules to a firewall and tweak Apache/PHP/MySQL settings. The real question is how well you can handle things going wrong: someone attacking your server, hardware or software failures (hardware issues are still very real even in this age of cloud computing, I'm afraid), restoring corrupted data etc. What about availability requirements -- do you need high availability and 24/7/365 support.. and if so, can you really provide and guarantee that? A lot of the time I'd recommend going with a managed solution in one form or another rather than trying to do everything yourself. It depends a lot on the requirements and the nature of the service you're running, but the bottom line here is that unless you can guarantee that you're able to handle everything yourself, don't make any promises to the client you'll end up regretting.
  5. Two thoughts: writing something like that is simply awesome and an exceptional example of the kind of thing a devoted community (and a devoted member of that community) can do for an open source project.. yet at the same time the idea that one would need to read a book in order to create menus sounds kind of scary. Of course I've no idea of the actual context here, what this Wayfinder is etc., so I guess it has to be about a lot more than just adding some navigational elements. It has to be, right?
  6. Same story here; I know just enough to work with it. Never written any code that would've connected with AD either, mostly just used an in-house integration and query tool. We've got a bunch of clients that use AD actively and based on that experience I'd say that a "proper" integration module wouldn't really have to do that much. Authentication, creating local users for AD ones (depends a bit on the use case whether that's actually desired, though) and finally a flexible way to connect users with roles (and possibly even custom entities, such as groups, if something like UserGroups is in use) based on OUs and/or groups would make it very useful already. Based on my (admittedly rather limited) experience usernames very rarely change -- can't remember a single case where this would've happened and caused problems -- and even if they do, it should always be possible to re-create (or rename) an old user account. I guess a changed username could cause quite a few other side-effects too, which might explain why admins seem to be rather reluctant to change these. On the other hand, according to this SO thread and some MS resources, there are actually multiple unique identifiers for users, so perhaps one of those could be used instead if usernames seem too risky?
  7. I feel like that all the time.. Seriously speaking, what I've been missing most is automated checking, logging and reporting for broken links. This is partially solved by Page Link Abstractor, but I don't really like its approach that much.. and it only works for local links. Manually running something like the W3C link checker helps a bit, but doesn't really solve the issue yet. Another thing I'd love to see is proper Active Directory / LDAP integration. At least around here that's pretty much a requirement in order to build intranets etc. for larger organisations, as they all seem to use AD for managing their local users. I know that there's some code floating around for this and Antti has apparently already used it in at least one project, but last time I checked it didn't really look like a finished product -- more like something that could probably be used to build on. That's all I can think of right now.
  8. There's an issue with your proposed approach; namely the way isLoggedin() works. As you can see, it only checks if this user is a guest, i.e. if its ID matches that of the guest user. It's going to return true for any user you've fetched with $users->get(). Putting that aside for a moment, there's an even bigger issue here. If I'm getting this right, you're logging a user in, and later trying to check if that specific user (123) is logged in when anyone opens a URL like http://example.com/user/123. Isn't that a huge security issue right there? How would you validate that the user opening this URL is the same one that earlier authenticated using correct credentials? I really wouldn't recommend pursuing this. There are going to be severe security implications no matter how you approach it. .. but if you really have to, I'd consider some sort of token-based authentication method. When the user logs in, provide a URL she can visit to log in. Typically that URL would be invalidated after a single login (and after a certain period of time) to make it slightly more secure. Automatically generating something like this would still be very risky (please don't do it). It's more often used in combination with, say, a valid email: user types in her email and receives a URL that's valid for a certain period of time and allows her to login (preferably once) before it's invalidated.
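A very rough sketch of that token approach, only to illustrate the moving parts. The field names here are hypothetical, random_bytes() assumes PHP 7+, and $session->forceLogin() is only available in newer ProcessWire versions -- this is not a security-reviewed implementation:

```php
// 1) after a regular, credentialed login: generate and store a token
$token = bin2hex(random_bytes(32));
$u = $users->get(123);
$u->of(false);
$u->login_token = password_hash($token, PASSWORD_DEFAULT); // hypothetical field
$u->login_token_expires = time() + 3600;                   // hypothetical field; one hour
$u->save();
// send this to the user, e.g. by email (page name is made up too):
$url = $pages->get('/token-login/')->httpUrl . "?u={$u->id}&t={$token}";

// 2) in the template of /token-login/: validate, invalidate, log in
$u = $users->get((int) $input->get->u);
if ($u->id && $u->login_token_expires > time()
        && password_verify($input->get->t, $u->login_token)) {
    $u->of(false);
    $u->login_token = ''; // single use: invalidate immediately
    $u->save();
    $session->forceLogin($u);
}
```

Even then you'd want this behind HTTPS and rate limiting at the very least.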
  9. What @clsource said. This definitely isn't typical AJAX behaviour. Something you're doing is very resource intensive; most likely retrieving or rendering pages -- though that's also all we know about your use case, so there could be something else involved we just don't know about. Agreed about the headers, too: by default POST requests are not cacheable, unless specifically forced using Cache-Control and Expires headers. You don't need to define any of those headers here unless you're somehow forcing caching for POST requests (although if you're just fetching records, not altering them, I'm equally confused about the use of POST here in the first place).
  10. Thanks for mentioning this, Soma. Looks like I'll have to take a closer look at the new format.
  11. RT @techdirt: Google May Consider Giving A Boost To Encrypted Sites http://t.co/MAeoEvetUo

  12. @muzzer: most fields (almost all of them) are only loaded as required. You can check "Autoload" within field settings (Advanced tab) to always include it with the page, though. This makes sense especially if that field is always used when the page is loaded. You should also take a look at built-in Fieldtype Cache too; sounds like it could be exactly what you're looking for here. It's a core module but not installed by default. I wrote a quick post about caching in ProcessWire a while ago, you can find it here. It's not too in-depth (and doesn't mention aforementioned Fieldtype Cache at all), so I'm not sure if it provides you any new information at all
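For what it's worth, that "Autoload" checkbox can also be toggled through the API; it corresponds to the autojoin flag on the field. A quick sketch, assuming a field named "summary" (the name is made up for the example):

```php
// mark the field to always load (autojoin) with its page
$field = $fields->get('summary');
$field->flags = $field->flags | Field::flagAutojoin;
$field->save();
```

Usually the GUI is the more sensible place to do this, but the API route can be handy in site profiles or module installers.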
  13. 

```php
$event = new Event();
$event->date = "1979-10-12 00:42:00";
$event->location = "Vogsphere";
$event->notes = "The homeworld of the Vogons";
$page->events->add($event);
```

That's one way to do it, at least. Since $page->events here is an events field, it returns an instance of EventArray, which in turn can contain multiple Event objects. For the most part EventArray acts just like a PageArray or Pagefiles or any other object extending WireArray. This is just the most basic example of what you can do with it.
  14. RT @lukew: 90% of the time, smartphone use is all thumbs. Design accordingly. http://t.co/dcQMmTJVxd

  15. I haven't had much (enough) time to work on my mailer module and haven't looked at how WireMailSMTP handles these particular things, but in general I'd have to agree with Pete. For things that are commonly used, it'd be best if there was a "standard" way to do that. One that doesn't depend on which module extending WireMail is installed at the time. I believe that we're talking about interfaces here, though that's not technically what it's going to be.. or what it is at the moment, at least. Then again, if @horst has implemented this feature already, I'll probably take a look at his implementation anyway and use similar public methods if possible to provide some consistency
  16. RT @brad_frost: Yes, and that voice says "get this goddamn thing out of my face." http://t.co/KIPgjpg20M

  17. On a slightly related note, I've found clients to be quite happy with "I don't know, I'll find out and get back to you", but if they've got an issue and can't reach anyone at your side, they won't be happy at all. Be clear about when you're available and stick to that. The absolute worst thing you can do is to make promises and set expectations you won't be able to fulfil. An easy way to increase customer satisfaction is exceeding expectations.. and the trick to exceeding expectations is setting them at a realistic level in the first place.
  18. As far as I know, the only way to do this would be applying an RTE with a module. Hooking into InputfieldImage::renderItem would give you access to inputfield output, but I'm not exactly sure how TinyMCE (or CKEditor, if you're using that) is configured by default, i.e. is it enough to add a class to the description textarea (which should appear once the field is configured to hold more than one row of description data) or do you actually have to add custom JavaScript to apply it to image descriptions.
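A minimal sketch of what such a hook could look like. The class name is made up, and doing a string replacement on rendered markup is admittedly crude, but it shows the general idea:

```php
// e.g. in an autoload module's init() method
$this->addHookAfter('InputfieldImage::renderItem', function($event) {
    // tag the description textarea with a class that your RTE
    // configuration could then be set up to target
    $event->return = str_replace(
        '<textarea',
        "<textarea class='description-rte'", // hypothetical class name
        $event->return
    );
});
```

Whether the RTE actually attaches to that class still depends on how TinyMCE/CKEditor is configured, as noted above.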
  19. Take a look at the example below "Custom PHP code to find selectable pages":

```php
return $page->parent->parent->children("name=locations")->first()->children();
```

Isn't that almost exactly what you need here? Of course you'd have to use ...->child("template=categories|tags")->children() or something like that.
  20. Pete: I haven't had the opportunity (?) to deal with truly large databases myself, but I remember discussing this with someone more experienced a few years ago. They had built a system for a local university and hospitals for managing medical research data (or something like that, the details are a bit hazy). I don't really know what that data was like (probably never asked), but according to him they started running into various performance issues at database level after just a few million rows of it. Indexes in general make searches fast and having results in memory makes them even faster, but there's always a limit on buffer size, searching a huge index takes time too and in some rare cases indexes can actually even make things worse. (Luckily the Optimizer is usually smart enough to identify the best approach to each specific scenario.) The downside of indexes is that they too take space and need to be updated when data changes -- they're not a silver bullet that makes all performance issues vanish, but can actually add to the issue. This also means that you'll need to consider the amount of inserts/updates vs. searches when creating your indexes, not just the fields to index and their order. This is starting to get almost theoretical and you're probably right that none of it matters to any of us here anyway (although what do I know, someone here might just be building the next Google, Craigslist or PayPal) -- just wanted to point out that it's not quite that simple when it comes to really large amounts of data. And no matter what you claim, (database) size does matter. Edit: just saw your edits; apparently we're pretty much on the same page here after all.
  21. @rusjoan: for converting the content of a page to JSON, you could do something like this:

```php
$data = array();
foreach ($page->template->fields as $field) {
    $data[$field->name] = (string) $page->$field;
}
$json = json_encode($data);
```

Of course this isn't a complete solution and won't work in anything but the most limited use cases (think about images, files, page references etc.) but it's a start.
  22. @rusjoan: that error message is a bit vague, but it means that the name of your module is invalid. This is where it originates from. ProcessWire expects each module name to start with a single letter (uppercase or lowercase), followed by one or more lowercase letters. The result of this is that "VkAPI", "Vkapi", "vkapi", "RusjoanVKAPI" etc. are valid names, while "VKAPI" is not. I'd suggest renaming your module to comply with that requirement, as that's the easiest solution here. Edit: for the record, I just submitted a pull request about adding a more descriptive error message.
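A quick way to check a name against the rule described above. This mirrors the description rather than ProcessWire's exact internal regex, so treat it as an approximation:

```php
// valid: one letter followed by at least one lowercase letter at the start
function isValidModuleName($name) {
    return (bool) preg_match('/^[a-zA-Z][a-z]+/', $name);
}

isValidModuleName('VkAPI'); // true:  'V' + lowercase 'k'
isValidModuleName('vkapi'); // true:  'v' + lowercase 'k'
isValidModuleName('VKAPI'); // false: second character is uppercase
```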
  23. @Pete: actually, archiving database content elsewhere could have its merits, in some cases. Imagine a huge and constantly changing database of invoices, classifieds, messages, history data etc. Perhaps not the best possible examples, but anyway something that can grow into a vast mass. Unless you keep adding extra muscle to the machine running your database (in which case there would only be theoretical limits to worry about), operations could become unbearably slow in the long run. To avoid that you could decide not to keep records older than, say, two years in your production database. In case you don't actually want to completely destroy old records, you'd need a way to move them aside (or archive them) in a way that enables you to later fetch something (doesn't have to be easy, though). Admittedly not the most common use case, but not entirely unimaginable either. As for the solution, there are quite a few possibilities. In addition to deleting pages periodically you could do one or more of these:
      • exporting pages via the API into CSV or XML file(s)
      • duplicating existing tables for local "snapshots"
      • performing regular SQL dumps (typically exporting content into .sql files)
      • using pages to store data from other pages in large chunks of CSV/JSON (or a custom fieldtype per Pete's idea)
  In any case all of this isn't really going to be an issue before you've got a lot of data, and by a lot I mean millions of pages, even. Like Pete said, caching methods, either built-in ones or ProCache, will make typical sites very slick even with huge amounts of content. If your content structure is static (unchanged, new fields added and old ones removed or renamed very rarely), a custom fieldtype is a good option, and so is a custom database table. These depend on the kind of content you're storing and the features of the service you're building.
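As a rough sketch of the first of those options, exporting old pages to CSV via the API before removing them. The template, selector and output path here are all made up for the example:

```php
// find records older than two years (adjust the selector to your setup)
$old = $pages->find("template=invoice, created<" . strtotime("-2 years"));

$fp = fopen("/path/to/archive-" . date("Y-m-d") . ".csv", "w");
foreach ($old as $p) {
    // pick whichever fields matter for your archive
    fputcsv($fp, array($p->id, $p->name, $p->title, $p->created));
    // $pages->trash($p); // or ->delete($p), once you trust the export
}
fclose($fp);
```

For really large sets you'd want to process this in batches (e.g. with "limit=" in the selector) rather than loading everything at once.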
  24. RT @vruba: Idea: a high-level public agency with the mandate and funding to review and patch popular code. A national security agency, if y…

  25. RT @beep: Despite what so-called “climate change scientists” say, my walk through Harvard Square shows the brozone layer hasn’t depleted on…
