
joe_g (Members, 356 posts)
Everything posted by joe_g

  1. To conclude my lengthy rant:
     1. Any AJAX page that combines HTML fragments must also be assembled server side. I guess there's no way around doing everything twice (both server and client side). The alternative seems to be Node (but then I'd miss all the ProcessWire goodness): http://stackoverflow.com/questions/20588865/node-framework-to-render-on-both-ends
     2. My other problem: an HTML fragment used for AJAX loading can't combine conditionals ("if ajax then ...") with caching, but that's quite obvious really.
     cheers, J
  2. Thanks Teppo, appreciate the help. My issue is probably more about general pushState/AJAX than about ProcessWire. Sorry if the question is a bit sweeping; maybe I can be clearer with an example (ignoring SEO for now).
     What I do now:
     1. I have a URL /list/open/first-item/ that shows a list of things with the first item selected.
     2. The request is not AJAX, so / gets sent instead.
     3. Client side, I load the two parts /list/ and /first-item/ and put them in place.
     It works, but I can't cache /list/. In other words, the URLs collide. I have to make sure the AJAX fragments have separate URLs (option 1 above).
     What I probably should be doing:
     1. The URL /list/open/first-item/ should recreate exactly the same output as if it were assembled client side.
     2. Send the response to the client, done.
     In general, what I think I have to do is:
     1. Create separate URLs for all fragments (like /list-fragment/ and /open-item-fragment/first-item/) so that the URLs don't collide.
     2. For every URL, server side, paste together a response identical to what the client-side code would produce.
     Is there any good way to share the double logic between server and client side, to keep this at least somewhat DRY? thanks! J
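A minimal sketch of what I mean, in a ProcessWire template (the paths /list-fragment/ and /open-item-fragment/ are names I made up for this example; each fragment is an ordinary renderable page, so the client can fetch the same markup via AJAX that the server concatenates here):

```php
<?php
// Template for the full URL /list/open/first-item/ (sketch).
// The fragment pages are assumed to exist; each one renders the
// same markup the client-side code would fetch on its own.
$segment = $input->urlSegment(1); // e.g. "first-item"

$out = $pages->get('/list-fragment/')->render();
if($segment) {
    $item = $pages->get('/open-item-fragment/' . $segment . '/');
    if($item->id) $out .= $item->render();
}
echo $out; // identical output whether assembled here or client side
```

That way the only duplicated logic is the "which fragments belong to this URL" mapping, which has to exist on both ends anyway.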
  3. Hi, I think I fell into a bit of a structural trap. I built a site using pushState and client-side routing. At the top of all pages I check whether the request is an AJAX request, like so: if(!$config->ajax) { echo $pages->get('/')->render(); } else { ...serve the page }
     If the user asks for /an/url/ they get / instead, but the client-side routing handles the path. And, in the future, it would be easy to check whether the request comes from a search spider and just serve the content straight up. Clever, I thought. Except it doesn't work with caching (doh!). With this method a URL using client-side routing cannot be the same as the 'real' URL underneath.
     How do you experts solve this? I was hoping there was a generic method that would work with all URLs. I would like the client-side structure to match the server-side structure as closely as possible. I can see my options being:
     1. Making separate URLs for AJAX requests for everything: /url and /url-ajax (ugly and high maintenance).
     2. Prefixing the client-side URL with something: client-side /site/url/ maps to server-side /url/ (easy and generic, but a bit weird for the user).
     3. Prefixing the AJAX calls with /ajax/: for /ajax/url/ the server gets /url/ and remaps all URL segments (cleanest, but URL segments will cause trouble).
     4. The traditional web-app way: use .htaccess to serve / instead of /url/ if it's not an AJAX call. (This might be the best option, although I'm not an expert with .htaccess, which is what puts me off.) I suppose with this method I could still use caching, since the pages are effectively always AJAX from ProcessWire's perspective (except the root "/"). But this solution won't work with ProCache, I assume.
     Is there an easier way that I've missed? I have a feeling I'm missing some common knowledge, since this might be a common problem? I guess it would have been handy if /url/?ajax created a separate cache entry from /url/; then I could simply append ?ajax to all AJAX calls. But I understand that won't work, since any GET variables basically turn off the caching, as far as I understand. Any thoughts, tips, or comments are highly appreciated. thanks, J
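For what it's worth, option 3 could look something like this: a single /ajax/ gateway page whose template allows URL segments and re-renders the real page underneath. Because /ajax/foo/bar/ and /foo/bar/ are distinct URLs, the template cache should store them separately (a sketch under that assumption; the segment limit is arbitrary):

```php
<?php
// Template for an /ajax/ gateway page (sketch). Rebuild the real
// path from the URL segments, then render that page as a fragment.
$path = '/';
for($i = 1; $i <= 4; $i++) {   // up to the max segments the template allows
    $seg = $input->urlSegment($i);
    if(!$seg) break;
    $path .= $seg . '/';
}
$target = $pages->get($path);
if($target->id && $target->viewable()) {
    echo $target->render();    // the bare page, used as an AJAX fragment
} else {
    throw new Wire404Exception();
}
```

The remaining wrinkle is exactly the one named above: pages that themselves use URL segments need those segments passed through somehow.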
  4. Actually, come to think of it, in this case I could skip the language prefixes and rely on a good old session variable instead. j
  5. Thanks, my main reasons are:
     - I don't want the editor to be able to unpublish per language, because language switching is easier when everything is available in all languages, and I'd rather display a message "text is missing in English" with the untranslated Dutch text than not have a page at all.
     - Usability-wise: what happens when you want to switch from Dutch to English on a page that doesn't exist in English? The EN button disappears? The EN button is disabled? You get redirected to the home page? You get a "page missing" message? None of these is good usability, as far as I can see.
     - A page is either unpublished or published (for all languages, not per language). Conceptually simple.
     - The additional complexity of multiple URLs, and what they mean, is confusing for the editor. The concept of having to care about URLs at all is already a barrier.
     - It avoids this issue: https://processwire.com/talk/topic/5979-allow-new-pages-to-be-created-from-field-only-creates-in-default-language-when-using-multiple-page-names-module/
     I'll look into the hook thing, but I have a feeling I might end up with the language-gateway solution in the end... : P j
  6. Hi, I ran into an issue with the multiple page names module. When a page is created from a page field (the "Allow new pages to be created from field?" checkbox), it is only published in the default language. So I wonder if there is a way to have these automatically created pages published in all languages by default. thanks, J
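One possible approach (a sketch, assuming the multi-language page names module is installed, since the per-language "status{$language->id}" property comes from it) would be a hook in an autoload module or /site/ready.php that activates every new page in all languages:

```php
<?php
// Sketch: when any page is added, mark it active in every
// non-default language, so field-created pages aren't stuck
// in the default language only.
$wire->addHookAfter('Pages::added', function($event) {
    $page = $event->arguments(0);
    foreach(wire('languages') as $language) {
        if($language->isDefault()) continue;
        // per-language active flag from the page names module
        $page->set("status$language->id", 1);
    }
    $page->save(['noHooks' => true]); // avoid re-triggering hooks
});
```

I haven't verified this against the module's internals, so treat it as a starting point rather than a known fix.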
  7. Hi, I'm doing a bilingual site, but I'm trying to avoid multi-language page names. I don't want the editor to be able to unpublish things per language, and one URL per page is enough. If I uninstall multi-language page names, the possibility of language URLs (/en/url, /nl/url) disappears. What would be the best way to get these URLs without multi-language page names? I'm aware of the old method of having gateway templates and pages (/en and /nl), but that method complicates URL segments, so I was hoping there was a more 'core' way of doing it. thanks J
  8. sweet, thanks. All I have to do is if(!$config->ajax) { echo $pages->get('/')->render(); } else { ...stuff } and no fiddling with .htaccess.
  9. I'm trying to write a site where all URLs/routing happen client side via pushState/popState (I'm a bit new to pushState). My problem is how to separate client-side requests from server-side requests. Ideally I'd like to use the same structure for both, like:
     1. Access /url.
     2. With an .htaccess rewrite, / is served instead (SPA-app style).
     3. My JS loads /url and shows it.
     This way I can use $page->url like it's a classic server-side site. One solution is to add ?ajax=true to all AJAX calls and let a RewriteCond pass those through (http://stackoverflow.com/questions/10708273/redirect-ajax-query-by-htaccess). Another solution would be to put all server-side content in a separate folder somehow. I'm not sure what would be the cleanest, most future-proof way of doing this; maybe someone else did this and has some insights? thanks! J
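A variant of the RewriteCond idea that avoids the ?ajax=true query string is to key on the X-Requested-With header that jQuery (and most AJAX libraries) send. A sketch, assuming these lines go before ProcessWire's own rules in .htaccess:

```apache
# Serve the SPA shell (/) for plain page requests, but let AJAX
# calls through to the real URL. jQuery sets X-Requested-With.
RewriteEngine On
RewriteCond %{HTTP:X-Requested-With} !XMLHttpRequest
RewriteCond %{REQUEST_URI} !^/(site|wire)/
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^.+$ / [L]
```

The caveat is that a header-based rewrite is invisible to URL-based caches, so cached entries for / and /url could still collide depending on the cache layer.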
  10. Is there any server-side solution to this? Otherwise I should maybe try and write a module myself. All the client-side solutions are pretty awful. I started using inline-block instead of float, which is much better, except for the whitespace issue. j
  11. Thanks for the insights, Ryan. I figured there were good reasons, since virtually every CMS works this way. Ultimately what I'm after is decoupling editing and visiting, for example serving the site as static files from Amazon S3, but I understand this would be a whole different approach to website-making (with a number of issues).
  12. I have an existing site that I'm trying to convert to ProcessWire, so I'm transferring a lot of data and deleting it again as part of writing the conversion. I was wondering if this might mess up some indexes and slow things down gradually (although I'm guessing not). (By the way, converting an existing site to ProcessWire is awesome.) About performance: I don't have a lot of pages per repeater, but I do have a lot of repeater fields. It seems like every added repeater (whose value needs to be displayed) adds a linear amount of processing: twice the number of repeater fields, twice the time required to get the data. I was hoping that autojoin would join the data (including the repeater data) into some kind of big outer join (to bundle it all in one SQL call), but that doesn't seem to be the case. So I figure autojoin makes more sense for regular fields, but not for repeaters. In my case, I guess I should just be conservative with the number of things I display and try to simplify/denormalize the structure a bit. J
  13. Thanks, switching context would work, I'll try that. I thought the context would be the calling page, not the current page. But I guess it makes sense how things are. So, if I do $somepage = $pages->get('/somepage/'); echo $somepage->body; // body has the MyTextFormatter filter then in MyTextFormatter.php, $this->page is equal to $page, not $somepage. This is what confused me. J
  14. Hi, the save button was showing, and I clicked it and got a refreshed page, but no saved values in the database, so this was a different error. I've only run into it twice in total so far.
  15. Thanks, but from inside the text formatter module code there is no "$page", only "$this->page", and my problem is that $this->page refers to the wrong page, not the page where the text field and textformatter reside. So I can't do "$page->urlSegment", and I don't think I can get the current URL segment from within a text formatter; in any case it doesn't feel like a good idea. I found an easy workaround: simply inject the fields I need before using the text field (which calls the text formatters). No big problems, but it's a little bit ugly: $page->footnotes = $pages->get($input->urlSegment(1))->footnotes; // hack in read.php. Then in my TextFormatterFootnotes.php I can do $this->page->footnotes. j
  16. Heyhey, I've run into this issue: I access a URL read/x where x is a URL segment. Then, in read.php (the template for read), I find the page x and display a text field it has, together with a text formatter I wrote that manipulates the text field. The problem is that inside the text formatter code $this->page points to read, not x, so my filter isn't working, because I'm trying to fetch data from other fields in x (I'm writing a filter for footnotes). Am I doing it wrong or is it a bug? (I'm probably doing it wrong.) thanks!
  17. Hi, it was another install, version 2.2.9, using the latest Chrome. I'm trying to recreate the problem but not succeeding; if I manage, I'll post the rest of the information. I'm not sure if it said 'saved page', but I remember the green fields indicating a change of the field. j
  18. Hi, this is a general CMS question, but since this is my favourite CMS, I'll ask it here. Instead of generating a page once a visitor asks for it, wouldn't it be better to generate it directly when the editor presses 'save'? Then all server-side performance issues are gone completely. It wouldn't matter how long a page takes to generate, and you could focus on making the most maintainable structure instead of having to worry about performance. All requests would be ProCache-level speed.
     I understand this would only work for content-driven sites, and not for more complicated things like services, but it would still cover a lot of use cases. I also understand that you would need to keep track of all pages that depend on certain data, but that doesn't strike me as very complicated, even if it has to be done manually (compared to the big advantage of virtually perfect response times, all the time).
     I'm writing this because I'm working on a site where the structure is at the limit of what is possible to do with a CMS (ProcessWire or any other) without things becoming too slow, and it struck me that this construction would eliminate all my worries about performance. Would this be possible by hooking into page save or similar? j
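As a rough illustration of the hook idea (a sketch only; the static/ directory layout is made up, and the hard part, re-rendering every page that *depends* on the saved one, is left out entirely):

```php
<?php
// Sketch of "render on save": after a page is saved, write its
// rendered output to a static HTML file the web server can serve
// directly, bypassing PHP on subsequent visits.
$wire->addHookAfter('Pages::saved', function($event) {
    $page = $event->arguments(0);
    if(!$page->viewable() || $page->isUnpublished()) return;
    $dir = wire('config')->paths->root . 'static' . $page->url;
    if(!is_dir($dir)) wireMkdir($dir, true);
    file_put_contents($dir . 'index.html', $page->render());
});
```

Dependency tracking (which other pages show this page's data and need regenerating too) is exactly the complication mentioned above, and it's where this sketch stops.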
  19. I got really excited there for a moment, and in some cases this could work. In my case I'll need to display the times anyway, so they need to be fetched one way or another, which means the expense of getting those relational pages is still there. I take it that every relationship/"join" that involves other pages adds a linear expense, and there is no way around that except custom SQL? ...I think I have to rethink the structure.
  20. I had this happen again: the page doesn't save in Chrome, but it works if I use Safari. The green messages indicate all is well, but nothing happens.
  21. This is absolutely brilliant. Automatic denormalisation, sort of. The 1 second it took was indeed only that very line, but without the auto-joined repeaters things improved a great deal. I'm repeatedly importing thousands of pages, then deleting them, lots of times. Could there be performance degradation due to that? (I'm guessing that's not the case.) The second snippet triggers the saveReady hook for existing pages, I guess? Many thanks for the help. I'll get back with the results later. J
  22. Edit: I did something wrong before. But this line: $pages->get('/events')->children('template=event,onfrontpage=1,times.enddate>'.$today); returns 60 events and takes 1 full second. Is that normal? I would have imagined this would be one single SQL query with lots of outer joins in it. "times" is a repeater. Each event contains perhaps 5 page fields (of which 2 are repeaters) and 10-15 regular fields. Everything that can be auto-joined is auto-joined, although it seemingly makes little difference. All fields are trilingual; perhaps that makes a difference? Ryan, you suggested I should do $times = $pages->get('/events')->find('template=time,parent.onfrontpage=1,enddate>'.$today); but since times is a repeater, I can't really do that (and it would be weird logically as well). J
     I started trying this out, and surprisingly the speed issue isn't so much searching and finding as displaying the data (as in looping through each field and echoing it). I thought that (if I was using autojoin) once the stuff was loaded it would be super fast to display. Could it be that autojoin doesn't work for repeaters and references to other pages? It takes several seconds to loop through 200 events and display them, including all their domain data (a couple of repeaters and other connected page references, say about 5 domain-data fields and 10 regular data fields). I tried doing an equivalent SQL query before that joins together all the data, including the domain data (I was using the database of Symphony, but the abstraction is similar): the query would execute in milliseconds, with all the data loaded and ready to be displayed. Is there anything to do about this, or is custom SQL the only option here? thanks!
  23. I was using Chrome v30. But now (some time later, presumably some caching has expired) everything works fine again. J