Everything posted by LostKobrakai

  1. Usually I'd say the best approach if you need multiple formats is storing the information in a canonical format and using different formatters, which can convert the canonical format to the various output formats you have. Kinda like how datetimes are stored as a unix timestamp, but that's hardly ever the output format.
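     A minimal sketch of that idea, using a unix timestamp as the canonical value (the output formats below are just illustrative):

         // canonical value: a unix timestamp stored once
         $ts = time();

         // different formatters convert the canonical value to whatever output is needed
         $iso   = date('c', $ts);           // e.g. for feeds / APIs
         $human = date('d.m.Y H:i', $ts);   // e.g. for the frontend
         $rfc   = date(DATE_RFC2822, $ts);  // e.g. for emails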
  2. This sets the format for the input of dates/times (dateFormat in the datepicker API), but it does not change the language of the datepicker UI, because that happens by including or not including those translation scripts you mentioned above. They basically set the datepicker to a certain language and don't just "make the translations available". Technically you can make the jQuery datepicker switch language at runtime, but I guess not including translations when they're not needed is the simpler solution. Edit: As to why the core itself doesn't handle including the correct files: ProcessWire doesn't have any means of identifying languages. You can name your languages however you want, so it cannot map languages created by the user to something like the path of the datepicker translation, because it doesn't know that a certain language is supposed to be e.g. German in the first place.
  3. That's what I was thinking about as well. Theming I'd divide up into 3 parts, which need to be handled:
     • Getting data out of the persistence layer in a format which is well defined for theme developers. Without knowing the data upfront I'm not sure how one would build a UI for it.
     • Ways for theme code to format that data into markup snippets.
     • A flexible way to compose those snippets into a whole page.
     • (Having a way to handle http inputs (form submissions, get parameters, …))
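     A very rough sketch of how those parts could fit together (all names here are hypothetical, just to illustrate the separation):

         // 1. well defined data coming out of the persistence layer
         $data = ['title' => $page->title, 'date' => $page->getUnformatted('date')];

         // 2. theme code formats that data into markup snippets
         $header = $theme->render('header', $data);    // hypothetical theme API
         $body   = $theme->render('article', $data);

         // 3. the snippets get composed into a whole page
         echo $theme->layout('default', ['header' => $header, 'body' => $body]);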
  4. This is less a question of how ProcessWire works and more about how the jQuery datepicker works: https://jqueryui.com/datepicker/#localization
  5. I guess this is a bit of a chicken and egg problem. The admin in ProcessWire is hardly anything special based on what the core knows about it. It's a bunch of pages, which just happen to be tightly access controlled and to forward their handling to process modules instead of rendering templates. So before knowing the page you're on there's not really a good way to know if the request is going to serve an "admin page" or not. To query the current page a lot of things already need to be started, and I guess you want to hook in before any pages are queried, at best already knowing the context, which is not going to work. A good example is the function you've shown in your initial post. It's a good heuristic for determining if the user is in the admin, but it's not bulletproof either, as there can be normal pages as children of $this->config->adminRootPageID as well. It might rarely happen, but it's possible. The only way to be sure is actually having $page queried.
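     For reference, the kind of heuristic I mean (just an illustration of why it isn't bulletproof, not code from the core):

         // treat the request as "admin" if the current page lives below the admin root page
         $adminId = $this->config->adminRootPageID;
         $isAdmin = $page->id == $adminId || $page->parents("id=$adminId")->count() > 0;
         // works most of the time, but a normal front-end page created below
         // the admin root would be detected as "admin" too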
  6. That's not really true. Ryan's Dynamic Roles module does add additional sql clauses to ensure dynamic roles are fully resolved at the database level. Sadly the module is not very well maintained and it could also be built a bit more flexibly (e.g. matching on calculated values instead of just "membership"), but it is possible. This to me sounds like a very hacky workaround. You initially said you don't want to bloat the system with hooks, but blacklisting IDs and filtering them at runtime is very likely to be way worse for performance than a handful or two more hooks.
  7. > If setting the cookie fails (like due to prior output) ProcessWire remembers it in the session instead, and sets it in the response to the next web request (before any output). I'm wondering if raising an error would actually be the better option. Letting cookie creation appear to succeed without actually creating one might lead to hard to understand bugs. At least the return value should indicate that creation of the cookie was delayed to the next request, even though I can't imagine cases where I would want that type of behaviour.
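     To illustrate what I mean by the return value (a plain php sketch, not the actual ProcessWire API; assumes an active session):

         // hypothetical: let the caller know whether the cookie was actually sent
         function setCookieOrDefer($name, $value) {
             if(headers_sent()) {
                 $_SESSION['deferredCookies'][$name] = $value; // to be set on the next request
                 return 'deferred';
             }
             setcookie($name, $value, 0, '/');
             return 'sent';
         }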
  8. I really feel it's a good thing that it still only works that way. I get that there's no great way to discover modules, but I also feel that "discovering modules" is a totally different task to "managing modules on the system", which is what the current modules section in ProcessWire is about. The ability to install just by name from the modules directory is imho a nice-to-have convenience and not an unfinished start of integrating the modules directory as the source for modules. The modules directory is just one source for modules, possibly the biggest at least for open sourced modules, but not the only one.
     This is not to say though that a module filling the gap of "discovering modules" isn't useful, and what you created seems like a very nice way to browse the directory and move a module from being listed there to actually being downloaded/installed. Personally I wouldn't like to see the current modules section replaced though. This hints at the reasons for the above: browsing the modules directory is great with a cards view, but maintaining installed modules is a totally different task. It needs modules to be quickly scannable – table layouts are way better at that –, it needs to highlight different data – a version is more important than a lengthy description of what the module does, or how many hearts it got – and I'll also hardly switch rapidly between browsing and maintenance, so it doesn't need to be co-located in the interface.
     That part I'd like to see in the core (a bit depending on how it's implemented though). This is an improvement to the "maintaining modules" part of having installed modules, but rather a nice-to-have when browsing modules.
     To summarise: I really like the problems you're tackling with your module, but personally I'd like to see the efforts split up. The part about "discovery" is great, but certainly not essential to ProcessWire and should in my opinion be either not in the core or at least not installed by default. The part about better maintenance of modules and maybe touching up the UX of the current modules section by lessening the clicks to handle certain use cases is something anyone would benefit from.
  9. If you have proper auditing tools in place then any modifications should be recorded. The other part is authentication/authorization: if you can verify that nobody can modify a certain record you're fine. Those things get tricky though for the people who are responsible for setting up those systems, because they often also have the ability to circumvent them. That's the place where you want to keep access restricted to a very small set of people, still enable logs where possible and start using policies, which "tell" those people what they are and aren't allowed to do. As soon as you cannot technically prove something wasn't modified, you should at least be able to prove that you had other measures in place which disallowed modification, and to know exactly who had the ability to change things in a non-trackable fashion. There's no foolproof way to do this, unless you give up control completely to a third party, and there are always going to be parts in your setup which are best effort instead of watertight. From my understanding a good part of an evaluation by authorities will include not just technical checks, but also softer targets like proper training for personnel, written evaluations and documentation.
  10. Please don't. Brunch hasn't been maintained for over a year iirc, and even before that it never really hit a place where it was working well for more than basic setups.
  11. ProcessWire protects what it knows it needs to protect. Blocking everything can be just as annoying, especially on e.g. shared hosting where there might be some wordpress site in some subfolder, or some new favicon file for some new mobile OS to be added to the root. There's even a xml file for MS in the mix already. So I feel like e.g. having a list of known sensitive file extensions like .pem/.crt/... blocked would be a nice addition. .phpc is afaik also an often used extension for php. But I'd not support blocking things beyond that.
  12. Memorizing classes will always be needed, whether you take a pre-built framework or your own classes. The big productivity improvement is not in writing CSS using @apply and tailwind classes, but in writing just html and adding tailwind classes directly to the markup. So you need the backing css to already exist. Once you have to switch to a css file you're already losing out on the productivity gains. If you're using properly separated templates there might even be no need to later move the utilities to custom classes and `@apply`, as everything's already in one place – the template of a component. Having the utilities out of the box is exactly the selling point of tailwind.
      Sass/less don't give you classes you can put in your markup – they're pre-processors. Tailwind also doesn't want to be a contender to e.g. uikit or bootstrap, because those require so much work to be bent to a custom design. Building your own components is expected, as a custom design most often requires that anyway. There's however some work on a component library being done, which is built on top of tailwind's utility classes and which will probably be easier to customize than e.g. bootstrap. If you have your own custom utilities library, tailwind probably won't be of much benefit. If not, however, it's a great way to get started quickly on an existing, well documented, well adopted framework.
      Tailwind by default comes with 9 shades of each color, where afaik 4 are accessible on black and 5 are accessible on a white background. And you can still edit them, those are just defaults. The message is not "you should not use more than (those) x colors/shades". The message is that a well managed design system must use a fixed set of colors, which is different from each person on the team just adding a new shade each and every time they add something to the codebase – the latter most often being the reason for the exploding number of color or text-size variations. It's about consistency and less about absolute numbers.
      Imho the need for all those customization variables of frameworks like uikit/bootstrap and the lock-in to a certain preprocessor already shows how "inflexible" those solutions really are. I've not yet used any of those frameworks where I didn't hit a brick wall at some point trying to skin a component to a custom design, because some aspect of a component wasn't customizable (either markup expectations or a missing "configuration variable"). To me tailwind css is neither an alternative to writing css – the building blocks are way more cohesive, well rounded and "higher level" than plain css [1] – nor to frameworks, which often impose quite an amount of markup constraints and, understandably, a certain kind of style, which can be problematic when implementing a custom design.
      [1] Take for example .truncate. There's no single css rule for truncating text properly. It's overflow: hidden; text-overflow: ellipsis; white-space: nowrap. Same for e.g. .rounded-t. It applies rounded borders to both top edges, unlike css, which requires you to set it for each edge explicitly. This is more in line with how a designer would think about a design than how css needs it to be implemented. Nobody will say "make the top left and top right edge rounded". People say "make the top edges round" or "truncate that line".
  13. $page->template->altFilename; // the template's alternate filename setting (if any)
      $page->template->filename;    // full path to the template file
  14. If it's more for quick dumps of things on your mind and less the super structured kind, give nvALT a try. It's super fast, keyboard driven and iirc has a great search.
  15. There are two parts to this: the connection to a database, and SQL features/parity in results. The connection to the db is abstracted through PDO and most things in ProcessWire use the PDO powered database connection. In old code you might still find things calling the old mysqli powered database class. The part about sql features and their parity in implementation/results is another topic, which requires a good amount of knowledge of both databases. Actually PW's query engine isn't really using many fancy sql features, most stuff is just many joins + conditions, but I don't know sqlite well enough to evaluate if everything will work there as well.
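      For example, newer code goes through the PDO backed $database API (a rough sketch; the query itself is just an example):

          // $database wraps a PDO connection, so prepared statements work as usual
          $query = $database->prepare('SELECT id, name FROM pages WHERE templates_id = :tpl LIMIT 10');
          $query->execute([':tpl' => 29]);
          $rows = $query->fetchAll(PDO::FETCH_ASSOC);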
  16. Some thoughts from before tailwind existed: https://github.com/laravel/horizon/issues/56 – and the other reason afaik was making tailwind work with any css preprocessor and not just SASS. That's probably how those class names in the initial post came to exist.
  17. I'm not sure what you mean by global tweaks, but I cannot imagine e.g. wanting to change the padding for a whole website at once. If you're talking about e.g. brand colors, create a color called "brand" and then you don't need to update the class if the brand color changes just because the name no longer matches. The important part is to scope the names to the granularity you're working at. p-1, p-2, p-4, … is too coarse? Try p-sm, p-md, p-lg. Also I don't think components are the problem. You can do components in server-side languages just as well. Twig or smarty templates are also "components", but without the client side logic. The big problem I see with more widespread server side usage of components is the gap between non-node server side languages and javascript on the client side. To get the best of both worlds I kinda like the idea of doing business logic on the server in whatever language, delegating to node/v8 for rendering the SSR html (no node exposed to the internet) and sending it to the client, where the matching js framework then hydrates the state based off of the SSR markup. Svelte makes that especially interesting as it compiles its components down to mostly string concatenation for the SSR code. So no expensive vdom calculation or stuff like that just to throw everything out the window after the rendering is done.
  18. Maybe my guesses about twitter were a bit naively positive, but in the end it still comes down to the fact that those big companies work under their own kind of constraints, and people often look at/try to mimic those big companies even if their own constraints are completely different – which is the point I was trying to make. They might still make things in bad quality or overly complex or overly verbose for one reason or another, and blindly imitating them is rarely smart.
  19. Article 6 is a good point, but its most important part is section f). The big problem with getting consent is that you need to deal with all the follow-up responsibilities of people having the right to revoke their consent. Whereas if you go with Art. 6 f) and you can clearly state that saving the commenter's IP for a set amount of hours/days to detect spammers and prevent them acting is not outweighed by "the interests or fundamental rights and freedoms of the data subject which require protection of personal data, in particular where the data subject is a child", you can just add a paragraph to your privacy documents, which your users can read, and be fine otherwise. You should only need to document the above and how you weigh the interests of the company against those of the persons whose IPs you save. This is also a place where a good lawyer can be of great help. For the given case: IP addresses of people in a private (not corporate) context are rarely static. So to get to the real person behind an IP after a certain timeframe you'd need to go to an ISP, ask them which connection had this IP at a given time, and present a good reason for why you're asking. Also you might not even store the IP for very long, as more than 7/14/30 days is not really needed to be able to act on spam. On the other hand a proper spam attack can be a real risk to the business depending on the website. In the worst case the website might make users buy stuff from scammers and therefore harm other users of your website. So I think there are quite a few arguments to weigh against each other, which might resolve in a case where consent can be left out. Please keep in mind that the above is only from my own research and if you want to be sure about stuff talk to a lawyer.
  20. I think the reasoning for the above is quite plain: the web becomes a deployment target. "Proper" HTML is not a business goal at all; performance/accessibility/maintenance/… are. For twitter there's probably also, in more detail:
      • Is it easily maintainable with many devs/teams even with high developer fluctuation, and potentially developers who are not super knowledgeable in the realms of "proper html"?
      • Are components/design guidelines enforceable over all of their platforms?
      • Can components be widely/easily reused?
      • Removing as many bytes as possible from the markup at compile time, because compile time work is cheap. Removing nested divs is dangerous at compile time; using short unreadable classes on the other hand is simple if you have all markup/css at hand.
      Sure, accessibility sometimes comes for free with semantic html, but I guess at the scale of twitter even bugs/inconsistencies between the accessibility implementations of browsers will surface, and simply doing everything on your own might give you the ability to work around those issues. And twitter likely has the manpower to do so, which is probably not true for anyone here. We struggle with styling those damn html inputs; twitter has far more obscure browser inconsistencies to care about.
      All those things make companies like twitter/google/facebook kinda the worst examples to follow. The tradeoffs those companies make will hardly ever match the tradeoffs smaller agencies/companies/single developers should or would choose. I see similarities to go and google. Go was created because google had problems getting young, inexperienced, straight out of college developers up to speed. Therefore they created a simple to grok, C-like language with a GC and easy concurrency. Its error handling is literally a truckload of if statements. It's nowhere near what more experienced people would expect from a programming language. But it does what it was created for very well, and as it's stuff on the server nobody cares if it's particularly verbose or not very abstract. If you want "beautiful" server side applications, rather look for a handful of senior erlang developers than a whole can of go developers.
  21. In ProcessWire the wisdom usually is to avoid selecting much data at all. That's the sole reasoning for e.g. the nesting you described. It won't help at all if you want to aggregate over e.g. the last 5 years of weather data. The biggest question still open in this topic is "what for?". Without knowing the patterns of how you intend to access the stored data and which timeframes of aggregation are appropriate, it's not really possible to tell what you need. If you're fine with reports taking a hot minute to aggregate you're in a whole different ballpark than if you need huge aggregations to be live and instantly available in some web dashboard. Especially if you expect to hit the latter case I'd also suggest looking at proper databases for time series data, especially if the number of entries is meant to grow beyond the ~500k–1kk mark. I'd look at influxdb or postgresql with the timescale plugin. Using pages in ProcessWire might make sense for an mvp, but if things should scale it'll be a lot of manual querying even in ProcessWire, so I'd opt for the proper solution from the start. Given the volume of data I doubt you can avoid getting more intimate with databases, as you just need to aggregate data directly on the db side, which ProcessWire doesn't support to begin with.
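      What I mean by aggregating directly on the db side (a rough sketch; the field names "temperature" and "date" are assumptions, their values would live in the field_temperature and field_date tables):

          // bypass $pages->find() and let mysql do the aggregation
          $query = $database->prepare('
              SELECT DATE(d.data) AS day, AVG(t.data) AS avg_temp
              FROM field_temperature t
              JOIN field_date d ON d.pages_id = t.pages_id
              GROUP BY DATE(d.data)
          ');
          $query->execute();
          $daily = $query->fetchAll(PDO::FETCH_ASSOC);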
  22. Chaining (piping usually means something different) in OOP doesn't mean returning $this. It means returning the object you want to execute the next method call on. Where you get the object to return from is up to you. But I'm really wondering what the use case behind this is. Generally I'd tend to avoid classes knowing of each other and rather opt for composing their functionality with code outside of them.
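      A quick example of what I mean (hypothetical classes, just to show that each call continues on whatever object was returned):

          class Invoice {
              public function send(): Invoice { /* … */ return $this; }
          }

          class Order {
              protected $invoice;
              public function __construct() { $this->invoice = new Invoice(); }
              // chaining without returning $this: hand out the next object instead
              public function invoice(): Invoice { return $this->invoice; }
          }

          (new Order())->invoice()->send();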
  23. I'm not sure if anonymization is actually needed for gdpr compliance. It might make sense if you're looking to aggregate e.g. geo location based off of the ip, but I'd expect it's there rather for spam protection reasons, so you can block actual IPs if you're flooded with comments. Securing your system against potential attacks is a solid foundation to gather the data without any consent, even under the gdpr. The more important factor for that is just how long it's justified to store the IP for that reason.
  24. I wouldn't do a 302 for that operation. It essentially doubles the amount of requests the browser needs to make for images, and I'm not even sure it would actually cache-bust anything. Rather rewrite the url only internally, so apache serves the correct file under the incoming, cache-busted url.
  25. If this is about tenancy management I'd strongly suggest using ryan's Dynamic Roles module (with all the known fixes) as a base. I once started to make it more flexible, so that you don't need a group per tenant, but rather it would match on keys (e.g. tenant name or id). I just never got far because of other priorities. It shouldn't be super hard to do.