tibs
Everything posted by tibs
-
date problems because of mysql datetime/timestamp type damagery
tibs replied to tibs's topic in General Support
okay, that fixed it on my own server, but when i copied the whole thing to the other server (which is in a different time zone) things are off again, which makes me really, really wish we'd just use unix timestamps in the database, pretty please? ...sorry, that last snippet is actually doing the job on both servers; i should really go to sleep (it's a time zone thing)
-
date problems because of mysql datetime/timestamp type damagery
tibs replied to tibs's topic in General Support
so the monstrosity that gives me the correct results is:

$mysqldiff = wire('db')->query('select timestampdiff(second, now(), utc_timestamp) diff')->fetch_assoc()['diff'];
$date = $item->published + date('Z') + $mysqldiff;

it seems like the configuration timezone is taken into account twice, and the database server's timezone never; but i'm kinda sleepy so maybe i shouldn't try to think about this
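For the record, here is the same snippet with each term spelled out; the comments are just my reading of it, so take them with a grain of salt:

```php
// the query asks MySQL how far its local clock is from UTC, in seconds
// (it's UTC_TIMESTAMP minus NOW(), so it comes back negative for zones east of UTC)
$mysqldiff = wire('db')->query(
    'SELECT TIMESTAMPDIFF(SECOND, NOW(), UTC_TIMESTAMP) AS diff'
)->fetch_assoc()['diff'];

$date = $item->published // timestamp as it came out of the datetime column
      + date('Z')        // plus PHP's own offset from UTC ($config->timezone)
      + $mysqldiff;      // minus the MySQL server's offset from UTC
```
-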
whoa, this made it into the roadmap!
-
wireRelativeTimeStr($page->published) returns weird results for me, i.e. "8 hours ago" for a page I just published on a site with +9:00 as its timezone (and -5:00 as the server / MySQL timezone, but I'm not sure if that matters). When I looked into the code, I saw that WireDateTime->relativeTimeStr() uses the numeric timestamp when that's what it's given; $page->published returns a unix timestamp, so that can't be the problem, right? Then I checked the database, and the pages table stores published in MySQL's datetime format (which lacks timezone support). It seems the application timezone setting is not taken into account somewhere while converting the database datetime field to the internal time representation (an integer unix timestamp) in the PHP Page object.

"By the way" (and sorry to propose another change; people here will get tired of me so fast), wouldn't it make sense to switch to simple integer fields storing the unix timestamp in the database? No more need to switch back and forth between the internal representation and MySQL's, which is sadly broken whenever timezones matter. And just less work, right? (Well, no more "current_timestamp" default value for modified, but I think there are workarounds for that.)

P.S. Why does "published" use the datetime MySQL type while created and modified use timestamp? Timestamp (which is stored as UTC in the database) would probably make more sense for "published" as well (if only for consistency) -- that is, if we wanna stick to a native time column.

As a note on this specific problem, looking up the "modified" field returned damaged results as well.

Edit: it's on pw 3
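If anyone wants to see the symptom for themselves, it shows up with nothing more than the calls already mentioned above (just a quick illustration, not a test case):

```php
// quick illustration of the mismatch, using only what the post refers to
$ts = $page->published;                  // integer unix timestamp, straight from the Page object
echo date('Y-m-d H:i:s', $ts) . "\n";    // prints a moment hours in the past for a just-published page
echo wireRelativeTimeStr($ts) . "\n";    // hence "8 hours ago" instead of "moments ago"
```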
-
Thanks for the responses! To be honest, it's not like I'm on a crusade to make this happen; it's just an idea I kept being reminded of, so I thought why not throw it out there. (Reading back over my posts, I should say sorry if my tone implied otherwise; I was just focusing on the arguments too much.) @teppo I think whatever I meant by the .htaccess thing ignored the case of having to make manual changes; I had the impression the upgrade process takes care of that. @szabesz yes, I think you're partially right: it would just look "cleaner" haha. I'm not that concerned about my workflow; strict convictions about small things bring about too much overhead. @justb3a thanks for wireshell! I hadn't seen it before; I just checked it out and it looks awesome! (omg, drush reminds me of much struggling with drupal)
-
adding a useful use case at practically zero cost (as i understand it would be) is not a bad deal
a component that is developed and used in a mostly independent fashion getting its own repository sounds like an improvement to the workflow
index.php would still be there (as a stub like "<?php require 'wire/index.php';"), so .htaccess would not need to be changed (as for where .htaccess is stored: it isn't part of the installation, it's generated (well, copied) at installation, and its template could be anywhere, i.e. within wire)
upgrades, from a php viewpoint, remain the same: having the code hosted in a submodule changes nothing about whether that code can alter stuff (e.g. the .htaccess file) outside its tree (e.g. after a pull, when PW decides there are upgrades to do, it's the same for php as if the new version had been manually copied, like we do now)
-
I'm in no way a github guru, and I may be missing a lot about the implications of what I'm about to suggest, so please bear with me even if it's completely moronic. It's Christmas, after all. When a PW site is installed, the original structure is destroyed. In other words, that neat-looking .git directory becomes completely meaningless. On the other hand, the wire directory isn't ever touched and, as far as I can see, upgrading PW is done exactly by replacing that directory (and the index.php file, though I believe that one hardly ever changes). So, the suggestion: why not make the wire directory a separate github project, referenced from within ProcessWire as a submodule? Downloading PW would now take a git clone --recursive ... (I know, it's overbearingly complex) but upgrading would be super easy, just a single git command. For additional magic and peace of mind, index.php could be moved inside wire, and the original could be just one line to require it from there.
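To make that last point concrete, the root index.php in this hypothetical layout would be nothing more than a bridge into the submodule (names per the suggestion above, not how PW ships today):

```php
<?php
// hypothetical root index.php under the wire-as-submodule scheme:
// everything real lives in wire/, this file only hands control over to it
require __DIR__ . '/wire/index.php';
```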
-
TL;DR Everything could already use the paths instead of the ids; it just needs some work to implement. If the ids are so unwelcome, why not go and work on the right, kinda complicated stuff instead of adding unnecessary (and probably useless) complexity by inventing an obscure layer of "protection"?

There's already an unambiguous 1-to-1 relationship between paths and ids; not using paths in certain places (e.g. assets and edit pages and whatnot) is not an architectural decision but one of convenience and performance. In other words, nothing stands in the way of the whole system using paths everywhere, i.e. /my/awesome/page/edit instead of /admin/page/edit/?id=3811 -- one can just set up a few .htaccess rules, rewrite (or hook?) whatever generates the ugly edit links, and done. Or, almost. It gets more involved for the asset paths, where there are two options I can think of. A) With the current storage, where the asset directories are addressed by the page id, Apache would need to consult the DB to resolve the paths to the ids. And that needs a module, which raises the bar for hosting. B) Then there is the option to use the page paths within the assets directory, but that may be a bit messy when moving / renaming things (the DB and the file system can get out of sync when something happens, so some journaling would be necessary.)

P.S. I did use SHA-based page ids before, to make scraping super hard (random access impossible), so I do agree that it has legit use cases. "Security" (i.e. "I don't want the baddies to know too much about how I store my data" and the like) isn't one of them. Or am I missing something?
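Just to show the "convenience, not architecture" point isn't hand-waving, here is a minimal sketch of the edit-link half of it, using only stock API variables; the /edit suffix convention and the redirect target are my own assumptions, and it would have to be wired up wherever such URLs end up (e.g. some hook that sees not-found requests):

```php
// minimal sketch: map /my/awesome/page/edit onto the stock admin editor
// (assumes the admin lives at its default location and "/edit" is our suffix)
$path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);  // e.g. /my/awesome/page/edit
if(preg_match('{^(.+/)edit/?$}', $path, $m)) {
    $page = $pages->get($m[1]);                            // resolve the path part to a page
    if($page->id && $page->editable()) {
        $session->redirect($config->urls->admin . "page/edit/?id={$page->id}");
    }
}
```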
-
So that's why that 1000-byte limit sounded unfamiliar! Well, I'm not sure 191 is such a grave issue, but it sure would look weird in the specs. Most of the fields in question will only ever store [0-9a-z-], for which even ASCII is overkill (and it could store 1000 characters, yay), but I totes understand why nobody wants to break pw in fun ways while striving for marginal gain. Maybe I'll put together a pull request for the simple case (i.e. MySQL 5.5.3+ with the MyISAM engine and the 250-character limit), and we'll see if the gods favor it.
-
I wasn't talking about changing the requirements (I know the MySQL requirement isn't that high, and rightly so) -- I'm saying it's reasonable to use utf8mb4 when available, e.g. by running a "show character set" during install and seeing what we've got. (Yes, it would require reducing some field sizes from 255 to 250 characters, but that is hardly a deal-breaker.) After all, it's just a string in the code ('utf8'), and even that is only used sparingly. Actually, we already have an option for the encoding in the config file, so why not just use it instead of those hard-coded utf8's?
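Something along these lines at install time would be enough to pick the better charset where it exists; the config option is the one that's already there, the rest is just an illustration:

```php
// rough install-time sketch: prefer utf8mb4 when the server offers it
$result = wire('db')->query("SHOW CHARACTER SET LIKE 'utf8mb4'");
wire('config')->dbCharset = $result->fetch_assoc() ? 'utf8mb4' : 'utf8';
```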
-
MySQL's utf8 encoding can only represent a tiny (though useful) subset of the full 31-bit range of Unicode; it's not UTF-8 at all. To fix what never should've been broken, they came up with utf8mb4. It's "of course" (i.e. UTF-8 is some properly designed piece of stuff) upwards compatible with the broken utf8 type. Well, not exactly, but more on that later.

Would it be possible to default to the real UTF-8 type, when available (MySQL 5.5.3+)? (even wp did it! nudge nudge)

And here comes that "later" you've all been waiting for. This important(ish) change would require the noble sacrifice of cutting some of the core (YES, CORE!) fields down to 250 characters to satisfy MySQL's sad-pathetic-deplorable limitation on index sizes. I know, that's 5 precious characters lost, but think about all those suffering emojis and suddenly you'll feel all fuzzy and warm, right? Anyway, just a suggestion, because i18n and l10n and all the like.

Edit: It seems a cruel RFC took away much of UTF-8's expressive beauty, and it is now valid only up to 4-byte lengths and values of 0x10FFFF, whichever is smaller; I'm gonna go and cry myself to sleep now, goodbye.
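For the curious, the arithmetic behind the numbers thrown around in this thread (767 bytes is the classic InnoDB single-column key limit, 1000 bytes the MyISAM one):

```php
// how many indexable characters fit under each engine's key-size limit
$limits = array('MyISAM' => 1000, 'InnoDB' => 767);
foreach($limits as $engine => $bytes) {
    printf("%s: utf8 (3 bytes/char) max %d chars, utf8mb4 (4 bytes/char) max %d chars\n",
        $engine, floor($bytes / 3), floor($bytes / 4));
}
// MyISAM: utf8 (3 bytes/char) max 333 chars, utf8mb4 (4 bytes/char) max 250 chars
// InnoDB: utf8 (3 bytes/char) max 255 chars, utf8mb4 (4 bytes/char) max 191 chars
```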
-
Wow, sweet! Why is it not in the modules directory though...
-
I don't always want to bother registering users, I just want them to be able to log in. I'm not alone: there are a number of other threads about OAuth, none of which mentions Opauth.

Opauth is a lean, non-opinionated, modular PHP thingy. It has 18 providers listed on their page, some of which are built-in, some are 3rd-party. Their website itself is an example, and it's on github as well. It's lean: it doesn't need the provider SDKs, it implements only what it needs in the auth modules. It's non-opinionated: you give your user a link to /auth/facebook, and then you wait for the results on /auth/callback; what you do with the returned array is your choice (the urls are of course customizable).

I could put together Twitter / Google / Facebook auth for a (non-processwire) site really quickly. I stored the API auth params for the modules in a table as json-encoded hashes (just 2 keys and 2 values, but the keys vary by API, so that was the easiest), and I did the same with the returned credentials. Besides provider, uid, and credentials (what we really need), Opauth also returns a somewhat variable set of details about the profiles in the info key, stuff like name, gender, urls, and the like; the Processwire module could opt to mine and store in fields whatever it deems useful, then store the rest as json, or just throw it away.

An example of what you get in your callback:

Array
(
    [auth] => Array
        (
            [provider] => Facebook
            [uid] => 1480000000008166
            [info] => Array
                (
                    [name] => Tibs
                    [image] => https://graph.facebook.com/1480000000008166/picture?type=square
                    [first_name] => Tibs
                    [urls] => Array
                        (
                            [facebook] => https://www.facebook.com/app_scoped_user_id/1480000000008166/
                        )
                )
            [credentials] => Array
                (
                    [token] => CAAEajKBojCkBAHj6cy0mM4EZBPvs082t5Vkm3PGa44jsfrfzCS1SwV09qOaZBQYnfxLnCacb7Wkr536wDzusIh4mkMmvwoC5VdZBKFRAGyLpqcVJYdWCIFv0iwfkjpiLpDsGVBChj1ac1Lq6ZBFjuCdrtfpcQtgWbonDVCzwGwZB0A37vLaQbCuh8OnbdRY1U1dPE2ZBU2PmgNGNP1ghzNo
                    [expires] => 2016-01-06T10:29:53+00:00
                )
            [raw] => Array
                (
                    [id] => 1480000000008166
                    [first_name] => Tibs
                    [gender] => male
                    [link] => https://www.facebook.com/app_scoped_user_id/148000000000166/
                    [locale] => en_US
                    [name] => Tibs
                    [timezone] => 1
                    [updated_time] => 2015-01-25T19:54:12+0000
                    [verified] => 1
                )
        )
    [timestamp] => 2015-11-07T11:20:56+00:00
    [signature] => hmcuwumqat4w84g1gkw0cgokloogg4w
)

To be honest, I'm not even working on anything in Processwire at the moment, but I wanted to post about Opauth since I discovered it. If someone decides to implement OAuth before I get around to it myself, Opauth is the perfect candidate to do the heavy lifting
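To give an idea of how little the PW side would have to do with that array, here's a rough sketch; the oauth_* field names and the user-naming scheme are made up for the example, and how the module actually logs the user in is left open:

```php
// rough sketch of a callback handler on the PW side (field names invented)
$auth = $response['auth'];  // $response being the kind of array shown above

// look for a user we already created for this provider/uid pair
$u = $users->get("oauth_provider={$auth['provider']}, oauth_uid={$auth['uid']}");
if(!$u->id) {
    $u = new User();
    $u->name = $sanitizer->pageName($auth['provider'] . '-' . $auth['uid']);
    $u->oauth_provider = $auth['provider'];
    $u->oauth_uid = $auth['uid'];
}

// mine what we care about, keep the variable leftovers as json
$u->oauth_info = json_encode($auth['info']);
$u->save();
// ...then mark the session as logged in for $u, however the module decides to
```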
-
In hindsight, yes. The thing is, PW does almost everything my app needs; it breaks down only at one particularly messy task. I wish it wasn't so good, then I could've resisted the temptation to use it for this project =P
16 replies
Tagged with: graph databases, database (and 2 more)
-
Interesting timing: I was just about to ask the same question the OP did a few weeks ago.

TL;DR Processwire can be looked at as a frontend to a network of pages with a semi-arbitrary selection of fields, so using a schema-less graph database instead of an RDBMS (if it can provide for all the other necessary features) would naturally mean less impedance mismatch to compensate for (that is, simpler code and easier implementation), it would allow for superior selectors (no need for intermediate PageArrays and foreach and such), and it would come with an increase in performance. Therefore, if Neo4j keeps gaining popularity and becomes more available as a choice, I believe it would be the right choice to support it as a store. In the long term. Not as a priority. Though I would love to see it tomorrow =)

I understand my opinion about what's natural for PW is slightly less important than somebody else's with thousands of posts here on the forum =) Still, let me try to challenge it. I always considered PW's main strength its API and built-in admin interface, not the underlying data store. MySQL, of course, is an obvious choice considering its availability, and that one can force almost anything into an RDBMS with enough magic. However... Looking at how PW uses MySQL, I wouldn't call it a SQL framework by nature. I understand the analogy was more about the ubiquity of MySQL than about a supposed inherent superiority, but I think it's interesting to have a look at why SQL is not the best fit. PW uses MySQL perfectly to do the job, but it does it in ways that go against traditional RDBMS principles; I would say it's a good sign that it's not an RDBMS problem by nature. Examples:

- templates don't have their own tables in which they would store fields that have a 1-to-1 relationship to the PK; instead, fields are always stored in key/value stores (that happen to be implemented in a certain flavor of SQL),
- much of the querying is done using textual search in automatically populated columns that describe structural relationships, instead of key lookups,
- it's fundamentally hierarchical, and page fields extend this structure to a full directed graph.

The above points summarized: Processwire is a frontend to a graph database implemented in MySQL. MySQL is a good choice because it is available and usable, not because it (or any other kind of RDBMS) is naturally the best choice for PW. The reason I was thinking about Neo4j is exactly because there is no need for any such optimization: Neo4j is a natural fit for the PW use case "as is," so why not make one of those "other projects [...] inspired by pw" a flavor of PW itself? PW selectors are trivial to implement in Neo4j because filters, as probably all other PW concepts, correspond to native Neo4j concepts and Cypher ("Neo4j SQL") statements (a rough sketch follows below):

- parent/child relationships and page fields directly correspond to connections (edges) in Neo4j,
- traversing/matching these connections is a native (extremely fast) operation, unlike the id- and path-based matching/joining that's necessary with an RDBMS,
- it's schema-less, so there's no need to worry about how to store fields (e.g. in separate tables),
- similarly, there's no more need for an autojoin-like option to guess and explicitly fine-tune what we think would be frequently required, yet stay safe from extensive/expensive joins,
- while otherwise schema-less, Neo4j can have uniqueness constraints, so things like unique page names within a parent can be easily ensured,
- filtering against aggregates (e.g. "children.count") is possible just as with MySQL, except more flexible,
- textual properties can be queried using full-text search, just as in MySQL.

I admit MySQL has these distinct advantages: it's already done, and it's available anywhere and everywhere. I love PW the way it is; it's ridiculously amazing compared to anything else I've seen before. The only reason I started wishing for a Neo4j-based version is because I'm using PW for something it was probably not designed with in mind =) It's a database that utilizes page fields with custom PHP code way more than I would consider healthy, and I got to where I need to run several queries to collect certain kinds of parents, then use intersections from some of those resulting arrays, then in turn collect items based on queries on the items from those, and so on. If PW had Neo4j under the hood, we would have access to selectors that would render custom PHP for page fields all but unnecessary, and my messed-up query would be so very simple, too... All at the "expense" of a simpler database layer =P Therefore, please forgive me if I can't help but dream about being able to use Neo4j's capabilities from PW selectors =)
16 replies
Tagged with: graph databases, database (and 2 more)
-
Page selector getting users with specific role
tibs replied to Whistletoe's topic in Getting Started
i can verify that the "users" template doesn't work the same way as regular templates:
i have a "student" role
when i put "roles=student" as the custom selector, the field seems to work perfectly: it shows all users with the "student" role, and shows nobody else
however, when i'm trying to save the page, i get this message: Error saving field "student" - Page 1017 is not valid for student
if i use custom php instead (when of course i can't use autocomplete), it works perfectly with all of these:

return wire("users")->find("roles=student");
return wire("pages")->find("roles=student");
return $pages->find("roles=student"); // which, incidentally, is the same that the custom selector is supposed to resolve to

in short, it must be something about the custom selector validation code; if i find some time i'll look into it some more
-
Hi everyone, this is my first post here, so let me start with a big thank you. These past few days I duplicated almost all the functionality of a site I've been working on for much longer in Drupal, and it's much leaner and nicer, and without 27 extra modules.

I was looking for something similar to what Marc is talking about. Drupal has a module called "imagefield focus", where you can specify a region that you want kept in all of your cropped versions, regardless of their size or aspect ratio. It also lets you specify another (bigger) region that you don't want the crop to go outside of; that's good if the original image has a border you want to get rid of. The awesome thing about this approach is that the API would remain the same, except $image->size(...) would now work with those boundaries. This, of course, requires the boundaries to be stored in the database, and also a timestamp to know when to update the already-rendered images.
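For what it's worth, the geometry such a feature needs is pretty small; a back-of-the-envelope version (all names invented, no PW API involved, and ignoring the degenerate case where the focus box itself can't fit the requested ratio) could look like this:

```php
// given the focus box the editor marked, find the largest crop of the
// requested aspect ratio that still fully contains it
function focusCrop($imgW, $imgH, array $focus, $targetW, $targetH) {
    $ratio = $targetW / $targetH;
    $focusW = $focus['x2'] - $focus['x1'];
    $focusH = $focus['y2'] - $focus['y1'];
    // smallest box of the right ratio that covers the focus rectangle
    $w = max($focusW, $focusH * $ratio);
    $h = $w / $ratio;
    // grow it as far as the source image allows (same factor keeps the ratio)
    $scale = min($imgW / $w, $imgH / $h);
    $w *= $scale;
    $h *= $scale;
    // center the crop on the focus box, then shift it back inside the image
    $x = ($focus['x1'] + $focus['x2']) / 2 - $w / 2;
    $y = ($focus['y1'] + $focus['y2']) / 2 - $h / 2;
    $x = min(max(0, $x), $imgW - $w);
    $y = min(max(0, $y), $imgH - $h);
    return array('x' => (int) $x, 'y' => (int) $y, 'w' => (int) $w, 'h' => (int) $h);
}

// e.g. a 1600x900 original, focus on the upper-left quarter, square thumbnail wanted
print_r(focusCrop(1600, 900, array('x1' => 0, 'y1' => 0, 'x2' => 400, 'y2' => 300), 300, 300));
// gives x=0, y=0, w=900, h=900: the biggest square that still holds the focus box
```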