Everything posted by LostKobrakai

  1. This might be interesting in combination with e.g. a matrix field though. You'd have the nice UI of a "text editor", but with the (often not negative) restrictions of properly managed content types. I'm not sure how those plugins for content types are managed, but if it happens at runtime it could talk to processwire for that.
  2. Having a compile step in software does allow for optimizations in the deployed end product, which just wouldn't be possible without it. To run that compile step for javascript one simply needs node, because e.g. a browser's runtime is even harder to get hold of. CKEditor, by its functionality, already tends to be a big package, and it really needs to optimize every inch of its package to save on download times and other performance-critical parts of itself. That's the nature of anything you ship directly to the browser of your users. Part of that is modularizing code, so every bit of unused code can just be excluded from the shipped build. The downside of compile steps is that you often cannot directly hack on the end product anymore, or it's way harder to do so. But would you rather have a hackable product, which performs worse for your end users because of missing optimizations, or a product which requires you to set up a build environment, but builds an optimized package that does not hurt your users' perceived performance? This is not to say node is super great. It does what I described above in an often super (and maybe unnecessarily) complex way, but I wanted to give a bit of argumentation on why we see that trend.
  3. Can we all please give ryan the benefit of the doubt and assume it might have just been a mistake? If this were an intended change of requirements – which seems unlikely given the previously strongly held support down to even 5.3 – I'd expect ryan to announce it properly. Also, this is the branch of active development in the end, and while it has been super stable in the past without any major issues, it's still a development branch. I see it similarly to beta versions of other software. Things might go wrong sometimes.
  4. I really meant "to be installed as a module". "Being able to install something" might not matter, but how one is able to install/configure something might indeed matter.
  5. Imho the big benefit of modules is installability. So anything reusable or more generic can be a fit for a module. Other than that I don't see much reason to use modules over plain classes. Personally I'm of the opinion to keep modules just for the code needed to interface with processwire anyway. Business logic which is not processwire-specific can just as well live in a plain class the module brings with it.
  6. I've used http://www.tuneupmedia.com/ a few years ago quite successfully. The interface is a bit wonky, but it gets the job done.
  7. You could create Hanna codes which call code stored elsewhere.
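     A minimal sketch of that idea, assuming the Hanna Code module is installed and the tag's type is set to PHP; the partial filename and the attribute are made up for illustration:

        // PHP body of a hypothetical Hanna code tag, e.g. [[gallery]]
        // Instead of keeping the logic inside the Hanna code itself,
        // delegate to a template partial kept under version control.
        echo wireRenderFile('partials/gallery.php', [
            'page'  => $page,          // the page the Hanna code is rendered on
            'limit' => $limit ?? 12,   // optional attribute: [[gallery limit=6]]
        ]);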
  8. OR-Groups are not supported by the in-memory matcher.
  9. I wasn't talking about the frontend lazy loading, but the fact that in processwire fields are lazy-loaded from the db by default. So you load a page (+ autojoined fields; 1 query), then you access the first field of the page (1 query), then you access another field of the page (1 query), and so on. That's why autojoining fields can be so useful, as it can quite quickly drop the query count and therefore improve performance. Using the fieldtype matrix makes it sound like it's still just one page, but with 30 blocks it's more in line with loading 31 pages; the parent page and all the blocks are also pages. Now if each page lazy-loads its fields you're basically in an n+1 query situation.
  10. Did you check your site for the query count? ProcessWire generally uses quite a lot of queries, and given you seem to have quite a lot of nested stuff + probably some loops over the content types, it might just be that the lazy loading of fields results in your code constantly talking to the database. Adding some auto-joins here and there might already make things faster, as sketched below.
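     A minimal sketch of turning autojoin on for a field via the API; the field names are made up, and the same setting is also available as a checkbox in the field's settings in the admin:

        // Enable autojoin for fields that are read on (nearly) every request,
        // so their data is loaded together with the page in a single query.
        foreach (['headline', 'summary'] as $name) {   // hypothetical field names
            $field = $fields->get($name);
            if ($field && !$field->hasFlag(Field::flagAutojoin)) {
                $field->addFlag(Field::flagAutojoin);
                $field->save();
            }
        }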
  11. Those sound exactly like the migrations / helper functions my Migrations module provides.
  12. Your posts sound like you have two problems here: one is code organization and the other is performant calculation of leaderboards.

      This is why people usually use some kind of separation between core domain logic (ranking of restaurants) and the website layer (actually serving a website) in their code. Duplicating such essential code as calculating the position of a restaurant, because it's needed in multiple places of your website, is a bad idea. The risk of both places getting out of sync over the time of development is one I'd not like to take.

      For the leaderboard calculation you seem to currently use the simplest but most inefficient way: load all the information needed into php and calculate the positions for the set of data in memory. There are two types of optimization from here:

      Opt. 1: Move the calculation logic nearer to the data it needs to work on, a.k.a. do the calculation in mysql instead of in php. This saves on the need to move all the data from mysql to php, which scales very poorly.
      Opt. 2: Persist positions for as long as possible, a.k.a. caching. It seems you currently recalculate positions each time someone visits that page, but they only change when a new ranking / comment is posted on your website. Also, usually a little bit of delay between new comments and updating the ranks is not a problem.

      Given those two optimizations there are three concrete ways to handle things:

      Just Opt. 1: Write an SQL query which can just return e.g. restaurantID + position in sorted order for all the leaderboards you have: global / per city / ….
      Just Opt. 2: Use some of ProcessWire's caching options to save your various leaderboards, still calculated in php (see the sketch below).
      Both Opt. 1 and Opt. 2: Write an SQL query and create a materialized view in the db to query. This way the calculation and caching happens in mysql, which is probably as performant as it can be.

      Edit: The missing puzzle piece here is treating those leaderboards as data to store.
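      A minimal sketch of "Just Opt. 2" using ProcessWire's WireCache; the template name, field name and cache key are assumptions for illustration:

        // Recalculate the leaderboard at most once per hour; in between,
        // every request gets the cached result instead of re-querying.
        $leaderboard = $cache->get('leaderboard-global', 3600, function() use ($pages) {
            $positions = [];
            // 'restaurant' template and 'score' field are hypothetical names
            $restaurants = $pages->find("template=restaurant, sort=-score");
            $rank = 1;
            foreach ($restaurants as $restaurant) {
                $positions[$restaurant->id] = $rank++;
            }
            return $positions;
        });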
  13. Depending on what's supported, processwire uses different strategies to hash passwords: https://github.com/processwire/processwire/blob/master/wire/core/Password.php#L347-L386
  14. UX-wise the best would probably be opening the folder after hovering over it for some time.
  15. I've had similar experiences on a project of mine, which was quite a bit smaller, but also quite a bit less spec'ed in terms of server resources. What I noticed was the big hit of loading all the templates/fields upfront for each request, which added quite a hefty ms count to requests for something that hardly ever changed, and it needs to be done before any actual work related to the request can be started. About a year ago we decided to go for a rewrite though, but that might be a place to look into for optimization. Also, as with any other project using sql, you want to look out for n+1 queries, which are actually quite easy to create with processwire as fields are loaded magically on demand. You can use autojoin manually as an option for $pages->find() if needed to mitigate those (see the sketch below). I'd also care more about the number of sql requests and less about how many joins they use. Joins can be a performance hit and sometimes 2 queries might be quicker than one with a join, but 1000 queries for a single page request sounds unnecessarily many.
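      A minimal sketch of the per-find autojoin mentioned above; I'm assuming the `joinFields` load option here, and the template/field names are made up:

        // Ask ProcessWire to join the listed fields in the initial page query
        // instead of lazy-loading each of them with a separate query later on.
        $articles = $pages->find("template=article, limit=50", [
            'loadOptions' => ['joinFields' => ['title', 'summary', 'date']],
        ]);

        foreach ($articles as $article) {
            // these reads no longer trigger one extra query per field
            echo "{$article->title}: {$article->summary}\n";
        }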
  16. Sure. There are just quite a few less experienced programmers here on the forums, so I wanted to point out that base64 does not secure data.
  17. As soon as you save anything on your server or in a db, you lose the one feature JWTs really have over sessions, the one by which people generally choose JWTs: statelessness. So for a website adding an API for a JS frontend, why go through the length of trying to make JWT auth secure via some refresh token and some db table storing them, when php sessions already bring all you need to do basically the same, without the fancy buzzwords and using cookies, which are also safer in how they're handled on the browser side?

      Your argument about duration of authentication vs. refreshing the authentication often is a topic which is actually totally unrelated to how any authentication proof is stored on the client. Both cookies and JWTs cannot be revoked once they're on the client. The difference is that php brings all the server-side logic needed for sessions and their revocation, but you need to implement all of that for JWTs, which aren't really meant for stateful authentication in the first place. That sounds like a use case for oauth.

      About your question: If you have multiple servers and you need to (be able to) actively revoke access for some user, you need those server(s) to be aware of the revocation. Using client-side stored tokens alone you just cannot revoke validity. Your server(s) could always ask your auth server about validity, or your auth server could notify your task server(s) to drop sessions for users. You just need some way to make all your servers aware of the revoked access, so yeah, multiple places need to be informed somehow. The use case of getting a short-lived, single-use token from your auth server to authenticate against some task server for starting a session is one JWTs could fit, in my opinion.
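      A minimal sketch of the server-side revocation that sessions give you and bare JWTs don't; $pdo, $userId and the table/column names are assumptions:

        <?php
        // On login: remember which session belongs to which user.
        session_start();
        $pdo->prepare("INSERT INTO user_sessions (user_id, session_id) VALUES (?, ?)")
            ->execute([$userId, session_id()]);

        // On every request: a session only counts if it hasn't been revoked server-side.
        $stmt = $pdo->prepare("SELECT 1 FROM user_sessions WHERE session_id = ? AND revoked = 0");
        $stmt->execute([session_id()]);
        if (!$stmt->fetchColumn()) {
            session_destroy();
            http_response_code(401);
            exit;
        }

        // On "log this user out everywhere": flip the flag, no client cooperation needed.
        $pdo->prepare("UPDATE user_sessions SET revoked = 1 WHERE user_id = ?")
            ->execute([$userId]);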
  18. https://processwire.com/api/ref/pagefile/description/ See the last example on how to get the description in a specific language.
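      Along the lines of that example, a minimal sketch; the image field name and the language name are assumptions:

        // Grab the first image of a (hypothetical) 'images' field
        $image = $page->images->first();

        if ($image) {
            // Description in the current user's language
            echo $image->description;
            // Description in a specific language, e.g. German
            $german = $languages->get('de');
            echo $image->description($german);
        }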
  19. Except base64 doesn't do that at all, as it's not encryption but encoding. It's like changing a .doc file to a .docx file (same content, but a vastly different representation of how it's stored) and not like putting it e.g. into some encrypted folder (same content, but it's stored in a secure manner).
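      A quick illustration of why encoding is not protection; anyone can reverse it without any key:

        <?php
        $secret  = 'my database password';
        $encoded = base64_encode($secret);

        echo $encoded . "\n";                // bXkgZGF0YWJhc2UgcGFzc3dvcmQ=
        echo base64_decode($encoded) . "\n"; // my database password – no key required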
  20. or that one. But I had to search a bit as well.
  21. You probably need to define your $month variable more thoroughly. Currently you query everything based off of month-long sections around "today", which do not align with where months start/end.
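      A minimal sketch of aligning the range with actual calendar month boundaries; the template and the `date` field name are made up:

        // Start of the current month and start of the next month as timestamps
        $start = strtotime('first day of this month 00:00:00');
        $end   = strtotime('first day of next month 00:00:00');

        // Everything dated within the current calendar month
        $items = $pages->find("template=event, date>=$start, date<$end, sort=date");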
  22. I'd really question why you don't want to use plain old sessions. I mean it doesn't have to be cookie-based, even if that's imho still the easiest way to not get bitten by compromised sessions. To give my argumentation a bit more groundwork you might want to look into the following blog posts. The latter has a quite simple flow chart about why JWT just doesn't work well for session authentication. http://cryto.net/~joepie91/blog/2016/06/13/stop-using-jwt-for-sessions/ http://cryto.net/~joepie91/blog/2016/06/19/stop-using-jwt-for-sessions-part-2-why-your-solution-doesnt-work/ There's also oauth, but that's mostly just another way to obtain a "session" with a service, where you allow a third party to access your content without giving away your credentials.
  23. Regarding "other CMSs": Netlify (CMS) fills that space. Whenever someone edits the content it kicks off the static site generator and replaces the old page with the newly generated one.
  24. It's certainly not valid forever, but once a token is compromised there's no way to just invalidate that single token before it expires on its own. By using plain old sessions you have the ability to do so. And depending on the context, 24h can be quite a long time.