LostKobrakai

PW-Moderators
  • Posts: 4,954
  • Joined
  • Last visited
  • Days Won: 100

LostKobrakai last won the day on January 24, 2022

LostKobrakai had the most liked content!

7 Followers

About LostKobrakai

  • Birthday 11/29/1991

Contact Methods

  • Website URL
    http://www.kobrakai.de

Profile Information

  • Gender
    Male
  • Location
    Munich, Germany

Recent Profile Visitors

24,844 profile views

LostKobrakai's Achievements

Hero Member (6/6)

  • Reputation: 5.3k
  • Community Answers: 188

Community Answers

  1. jQuery still being a dependency of so many higher-level JS tools is what keeps it alive, much more than jQuery itself being a useful tool nowadays. Modern JS can natively do many of the things that made jQuery so appealing years ago: the $(…) that made jQuery famous is just document.querySelector or document.querySelectorAll in native JS, and while Ajax used to be verbose in native JS, fetch is way better. The only real benefit I still see for building new stuff on top of it is its built-in normalization of certain behaviours across browsers. 4.0 also at least fixed my largest issue with jQuery in modern codebases: before it, jQuery wasn't a native ESM package.
  2. I've been working with SQLite on an embedded project for the last few years, though not using the FTS. It's generally faster than client/server databases just by the nature of its implementation: queries are in-process calls rather than network round trips. It also hardly suffers from N+1 queries and other common pitfalls (see the first sketch after this list). It would certainly be interesting to see how it could work beneath ProcessWire, but really SQLite is a different beast with a lot of tradeoffs that differ from other databases, even if many people (looking at Laravel or Rails using SQLite for tests) think switching databases comes without costs.
  3. There are tradeoffs between maintainability and a controllable surface area vs. flexibility. The more options available, the more permutations of options there are, and more permutations make downstream concerns like caching, authentication, internal querying, … trickier. Just look at the issues with deploying a GraphQL API at scale, where GraphQL sits rather close to the flexibility end of the mentioned scale.
  4. I just pulled a fresh master version today, so 3.0.210 it is. Having only parents kinda makes sense; that's a detail I certainly didn't know about yet.
  5. I'm currently in the process of implementing a naive selector parser and querying engine for the ProcessWire database layout using Elixir. This is for a talk I'll be giving in a few months that is meant to showcase the flexibility of a database library, so this is not going into production or anything, but I'd still wonder about something I bumped into while working on this. I might also bump into more things, so I'll dedicate this thread to potential future questions as well. When I looked into supporting `parent=…`/`has_parent=…` I started by joining `pages_parents`, but the new pages I had just added via the ProcessWire admin moments before weren't present in that table. This was on an otherwise fresh installation. Aren't all pages supposed to be present in `pages_parents`, and if not, what are the conditions for them being present or not? I vaguely remember having run into issues with that table being out of date in the past, but I always blamed myself and whatever I did, not the system. (A check-and-rebuild sketch follows after this list.) Edit: `$pages->parents()->rebuildAll(1);` seems to have fixed the data in the table, but for some reason calling it without the (documented to be optional) id rebuilt the table incorrectly as well. See context below.
  6. Nothing: https://github.com/processwire/processwire/commit/188d0e150ddbcac366a662e274997c77a50af66d
  7. AI is great at producing text that looks plausible, but it's not concerned with actual correctness. I'm working with Elixir nowadays, and there have been many examples posted where responses claimed some API existed in the stdlib which just didn't; they looked totally fine on paper until you checked the actual stdlib and found nothing. So AI can really be useful, but in the case of ChatGPT you want someone doing the chatting who can actually validate the responses. Languages evolve, and that's not a new thing at all. Take for example "computer", which used to be a human doing math and is now a machine. There's a good reason why we don't use natural languages for programming our – now machine – computers: they're messy and not at all strictly defined.
  8. To me the biggest factor missing here is intent. Why is the form where it is? What does it allow the user of the form to accomplish? What does the user want to have happened at the end? Answering those questions will bring out the UX part of what you want to do on submit and which technology you want to use to handle it. This is likely best described by examples:

     Contact form: This is a form for allowing users to reach out; it puts a message in the backlog of whoever reads those messages. Here the outcome of the submit is assuring the user that the submission was successful, so I usually have a custom page to redirect to, which shows a success message and some prose telling them how soon they can expect an answer, …. If there's useful info to link to, e.g. an FAQ, link to that. Also link back to the website if there isn't navigation anyway. As for technology, simple is better (see the redirect-after-POST sketch after this list).

     Web-app create-resource form: When a form is meant to create (or edit) some resource in a system, the best way to assure a user that things were created/edited correctly is just redirecting to that resource's detail page and showing a flash message of "Successfully created" or such. As for technology, use whatever fits best with the rest of the system.

     Chat input form: Here the intent is to leave a message wherever chat messages are visible, but not at all to leave the page. Make sure the form submits via AJAX; there shouldn't really be a way to input something that is not "valid". Probably track the server's acknowledgement of persistence asynchronously (e.g. render the message opaque until acknowledged).

     In the end, don't think the page you get to after successfully submitting a form needs to be the page the form was on. There are places where that makes sense, but imo it's more the exception than the default.
  9. You could probably also wrap the image in an SVG that applies the mask.
  10. If content is created in both dev and prod concurrently and meant to be merged, then you're essentially maintaining a distributed system. There's a lot of knowledge around on how to deal with those, but none of it makes this a simpler problem, and avoiding conflicts (or resolving them automatically) remains a hard problem. That's why the migrations module I created ages ago used migrations to capture the intent of a change instead of trying to merge observed changes after the fact – the latter either comes with a lot of caveats or is impossible. The simplest solution for the specific problem discussed is the suggestion of @wbmnfktr: use two separate systems, one for files created in dev and pushed to prod and one for files created in prod; that way there cannot be conflicts in the first place.
  11. Looking at my commit log on the folder, it seems I had made that edit manually as well. I'm certainly with you on the storage. In the end, for our system it would've been much more useful for pages to be assigned/tagged with a customer and to match access by the customers assigned to users, instead of matching roles to individual pages.
  12. We're in the last steps of phasing out the project I was using it on. Most parts had been replaced years ago, since we moved away from ProcessWire with that project. The module's approach to access is nice, but iirc there were some bugs in the master implementation, and we actually would've needed a bit more flexibility out of it (we still needed code to create those dynamic roles). I'd still suggest it over expensive runtime access checks if it aligns with a project's access setup. We were using it because our access was not only scoped by roles, but also by customers. So a manager would only have access to manager pages which belonged to a customer assigned to that manager. Customers weren't static either, but also defined by pages within the system. Things also weren't segmented into individual page trees, though iirc the modules for segmenting by page tree didn't exist at that time either.
  13. Another example could be migrating an address stored in a textarea, which should be split into multiple dedicated text fields (street, postal, city, …); a hedged migration sketch follows after this list. Or some set of fields on template A which should be extracted/migrated to template B – creating new child pages wherever there are template A pages. Imagine pages having a single address, and now they need to be able to have multiple addresses.
  14. This sounds interesting, though I wouldn't really call it file-based config. It's not a single config, but rather migrations which work off of a declarative config. Still wondering how this would deal with data, though. Say I have a text field which needs to be switched out for a pro multiplier while keeping the current text around as the content of one of the subfields. The above makes it seem like the previous field and its contents would just be deleted and the new one would start empty.
  15. There are also some alternatives besides Matomo, like Plausible Analytics or Fathom Analytics. I personally like those because they generally also do less; most people don't actually need all the fancy advanced features anyway.
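
Re answer 2: a minimal sketch of why the N+1 pattern is far cheaper with SQLite than over a client/server connection – each query is an in-process call, not a network round trip. The tables, columns, and database path here are made up purely for illustration.

```php
<?php
// SQLite runs inside the application process: a query is a function
// call into the library, so issuing one query per row (the classic
// N+1 pattern) costs microseconds instead of a network hop each time.
$db = new PDO('sqlite:/tmp/example.db'); // hypothetical database file
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$db->exec('CREATE TABLE IF NOT EXISTS posts (id INTEGER PRIMARY KEY, title TEXT)');
$db->exec('CREATE TABLE IF NOT EXISTS comments (id INTEGER PRIMARY KEY, post_id INTEGER, body TEXT)');

$posts = $db->query('SELECT id, title FROM posts')->fetchAll(PDO::FETCH_ASSOC);
$stmt  = $db->prepare('SELECT body FROM comments WHERE post_id = ?');

foreach ($posts as $post) {
    // One extra query per post – usually a red flag with MySQL/Postgres,
    // but mostly harmless with an embedded database.
    $stmt->execute([$post['id']]);
    $comments = $stmt->fetchAll(PDO::FETCH_COLUMN);
}
```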
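Re answer 5: a hedged sketch of checking whether a page shows up in `pages_parents` and rebuilding the table if not. It assumes a ProcessWire template/bootstrap context where $pages and $database (WireDatabasePDO) are in scope, a `pages_id` column in that table, and a made-up page id; the exact semantics of which pages belong in the table are what the thread itself is asking about.

```php
<?php namespace ProcessWire;
// Check the raw pages_parents table for a given page id (1234 is a
// placeholder) and rebuild the index if the row is missing.
$id = 1234;

$query = $database->prepare('SELECT COUNT(*) FROM pages_parents WHERE pages_id = :id');
$query->execute(['id' => $id]);
$present = (int) $query->fetchColumn() > 0;

if (!$present) {
    // Per the thread, passing the root id explicitly behaved more
    // reliably than the no-argument rebuildAll() call.
    $pages->parents()->rebuildAll(1);
}
```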
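Re answer 8: a minimal sketch of the plain redirect-after-POST flow described for contact forms. The field names and the thank-you URL are assumptions for the example, not part of any particular framework.

```php
<?php
// Handle a contact form submit the simple way: validate, persist,
// then redirect (303) to a dedicated success page so a reload
// cannot resubmit the form.
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    $email   = filter_input(INPUT_POST, 'email', FILTER_VALIDATE_EMAIL);
    $message = trim($_POST['message'] ?? '');

    if ($email && $message !== '') {
        // ... persist the message or hand it to a mailer here ...

        // The success page sets expectations ("we usually answer
        // within two days") and links onwards, e.g. to an FAQ.
        header('Location: /contact/thank-you/', true, 303);
        exit;
    }
    // On validation errors, fall through and re-render the form
    // with the submitted values and error hints.
}
```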
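Re answer 13: a hedged sketch of the address migration using the ProcessWire API. The template name, field names (address, street, postal, city), and the line-per-part format are assumptions; a real migration has to match however editors actually entered the data.

```php
<?php namespace ProcessWire;
// Split a single textarea address field into dedicated text fields,
// assuming one address part per line. Run once, e.g. from a
// bootstrap script or a migration.
foreach ($pages->find('template=location, address!=""') as $p) {
    $lines = array_map('trim', explode("\n", $p->address));

    $p->of(false); // turn off output formatting before saving
    $p->street = $lines[0] ?? '';
    $p->postal = $lines[1] ?? '';
    $p->city   = $lines[2] ?? '';
    $p->save();
}
```

For large page counts, $pages->findMany() avoids loading everything into memory at once; the point of the sketch is only that such a data migration is imperative code working alongside the declarative field/template changes.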