
All Activity


  1. Today
  2. Yesterday
  3. A client hired a security consultant to do a site analysis, and they advised that the X-Content-Type-Options HTTP header should be set to "nosniff". The MDN docs for this header say: "Site security testers usually expect this header to be set." https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Content-Type-Options This was easily resolved by adding the following to .htaccess:

Header set X-Content-Type-Options "nosniff"

Do you think it would be good to add this to the default PW .htaccess file?
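For reference, a minimal sketch of how that could look in a site's .htaccess, assuming Apache with mod_headers available (the IfModule guard is an extra safety assumption, not part of the stock ProcessWire file):

```apache
# Prevent browsers from MIME-sniffing responses away from the declared Content-Type
<IfModule mod_headers.c>
    Header set X-Content-Type-Options "nosniff"
</IfModule>
```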
  4. Hello @LostKobrakai I have written a class for creating tables. The first class is the table class itself. The second is a class for creating table cells (the content of the table, of course). Both classes extend the same parent class (a wrapper class), so both use the same methods from the wrapper class (in this case, methods for setting attributes like class, id, ...). My aim was to chain methods from the table-cell class inside the table class. My OOP code for creating a table looks like this:

$table = (new \UikitClass\Table(3))->setCaption('Beschreibung');

//thead
$table->addHeaderCell()->setModifier('expand')->setText('Spalte 1')->setClass('thclass');
$table->addHeaderCell()->setText('Spalte 2')->setClass('custom');
$table->addHeaderCell()->setText('Spalte 3')->setClass('custom');

//tbody
$table->addCell()->setText('Text 1');
$table->addCell()->setText('Text 1')->setClass('custom');
$table->addCell()->setText('Text 1')->setClass('custom');
$table->addCell()->setText('Text 1')->setClass('custom');
$table->addCell()->setText('')->setClass('custom');

//tfoot
$table->addFooterCell()->setText('Footer')->setClass('uk-text-center')->setAttribute(['colspan' => '3']);

echo $table->render();

So every chain starts from $table, for easier writing. That is why I wanted to include the table-cell class inside the table class. In this case addHeaderCell(), addCell() and addFooterCell() each return an object of the table-cell class, and all methods after them affect that table-cell object. So I start with $table (the table-class object), switch to a table-cell object (e.g. via the addHeaderCell() method), apply methods to that table-cell object, and the cell is added to the table. Maybe a little difficult to explain, but the idea behind it was to make writing easier for the developer.
  5. I am running into the same issue. Unfortunately I wasn't able to reproduce it in isolation, but it happens constantly in my current setup during debugging. Has this issue been solved for you @FrancisChung, or is it still happening for you as well?
  6. Hi beluga, I'd be totally fine with allowing frontend use if anybody wants to work on that. Personally I don't have the need for any grids on the frontend, and I'm not sure if it would be better to show a RockTabulator there than to implement a regular custom Tabulator. While it sounds nice to support RockTabulator on the frontend, I'd very much prefer to have a way to present parts of the PW admin to public users where necessary or wanted - in other words, to have public backend pages (there was some discussion about that). This could be RockTabulator, but could also be file-upload fields, forms, etc. I feel like supporting Tabulator on the frontend will have some side effects that will make development more complicated... But I'm happy if you come up with a good solution and prove me wrong 🙂
  7. Thanks for all the input again. Again I was not too clear: this is a single project just for me, just for information, with no public hits and no further use case for the system. Yes LostKobraKai, I think I didn't plan on this one... until now I didn't have such a use case and wasn't deep into database performance (which is not really needed for normal websites if you use PW as a base... it scales for those cases, and I know that well). So maybe I am a little bit too naive on this project, and my feeling said "Hey, try to ask the dev guys in the forum for hints" ...;) I would first go the hard way and get my hands dirty, and will try how far PW can go on this one... Yes, a nested page setup like the one mentioned won't scale for things like "live aggregating" all saved data, but I think I would be happy if I could get things like the average per year/month/week and store them in separate fields on the dedicated template. So, as netcarver wrote, if I am modest about granularity it will run fine. An important tip, though, is the FieldtypeEvents module from Teppo; I had forgotten that one. It would work great for the week dataset, so many pages are spared. What I will now try with dummy data:

templates: /station/year/month/week
station (id, title)
year (id, title, temp, humidity, pressure, rain, light) - the averages - averaging over 12 child pages should be no problem
month (id, title, temp, humidity, pressure, rain, light) - the averages - this will end in 52 child pages for a join for the average
week (id, title, records - a modified FieldtypeEvents field for the intraday records) - this will end in 672 entries (if I take a record every 15 minutes) in one week field

A problem with this would be selecting from date to date, but I would be fine with averages of weeks or months... and maybe the number of FieldtypeEvents tables (52 per year)?
I don't have the time to investigate other DB systems, since this year I've got one more little maid who takes over my spare time 😉 But with your interesting input I think I'm headed in the right direction, and don't have wrong views on aggregation and reporting of the collected data, since all of you pointed out that the collection wouldn't be the problem with PW or MySQL... but all the other stuff would be. And on those points I can compromise, since this is only a hobby project. I will report back (even if it takes a while). Even on such rare and offbeat questions you get helpful and friendly answers in this forum! You all made my weekend! I love this forum, it is a hidden island in the www
  8. Absolutely agree with this – use case for the data matters a lot! In my experience MySQL queries (with decent indexes) tend to be pretty fast until you reach the scale of millions of rows, but if this is going to be a public-facing service that gets a lot of hits and needs to generate all sorts of reports real-time then definitely consider alternatives right from the get-go. Might be a good idea to look into them anyway, but if it's a one-off project and you're likely to stay in 200-300k record range, you're probably not going to get a major benefit out of them. That being said, if you already know what your data is going to look like, you can take some guesswork out of the equation by starting from a simple proof of concept: create a database table for your data, add a script that generates some 200-300k rows of random mock data based on your actual expected data format, and build a proof of concept graph to display said data. If the database concept doesn't pan out, i.e. it's too slow or something like that, you can just swap that to something more performant while keeping other parts of the application. Either way it's often a good idea to build your product in layers, so that if a specific layer – graph library, database, or something in-between – needs to be swapped for something else, other layers remain more or less the same 🙂
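To make the proof-of-concept idea above concrete, here is a rough sketch of a mock-data generator in plain PHP/PDO. Everything here is an assumption for illustration - the weather_log table, its columns, and the connection details are invented, not from the thread:

```php
<?php
// Sketch: fill a test table with ~250k rows of random sensor readings.
// Table name, columns, and connection details are hypothetical.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'secret');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec("CREATE TABLE IF NOT EXISTS weather_log (
    id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    recorded_at DATETIME NOT NULL,
    temp DECIMAL(4,1),
    humidity TINYINT UNSIGNED,
    INDEX (recorded_at)
)");

$stmt = $pdo->prepare(
    'INSERT INTO weather_log (recorded_at, temp, humidity) VALUES (?, ?, ?)'
);
$time = new DateTime('-5 years');
$pdo->beginTransaction();
for ($i = 0; $i < 250000; $i++) {
    $stmt->execute([
        $time->format('Y-m-d H:i:s'),
        mt_rand(-200, 400) / 10,   // -20.0 to 40.0 degrees C
        mt_rand(20, 100),          // percent
    ]);
    $time->modify('+10 minutes');
    // Commit in batches so the insert loop stays fast
    if ($i % 10000 === 9999) { $pdo->commit(); $pdo->beginTransaction(); }
}
$pdo->commit();
```

With the mock rows in place, you can benchmark the report queries and the graph layer against realistic volumes before committing to the schema.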
  9. In ProcessWire the wisdom usually is to avoid selecting much data at all. That's the sole reasoning for e.g. the nesting you described. It won't help at all if you want to aggregate over e.g. the last 5 years of weather data. The biggest question still open in this topic is "what for?". Without knowing the patterns of how you intend to access the stored data, and which timeframes of aggregation are appropriate, it's not really possible to tell what you need. If you're fine with reports taking a hot minute to aggregate, you're in a whole different ballpark than if you need huge aggregations to be live and instantly available in some web dashboard. Especially if you expect the latter case, I'd suggest looking at proper databases for time-series data, particularly if the number of entries is meant to grow beyond the ~500k–1M mark. I'd look at InfluxDB, or PostgreSQL with the Timescale plugin. Using pages in ProcessWire might make sense for an MVP, but if things should scale it'll mean a lot of manual querying even in ProcessWire, so I'd opt for the proper solution from the start. Given the volume of data I doubt you can avoid getting more intimate with databases, as you just need to aggregate data directly on the DB side, which ProcessWire doesn't support to begin with.
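As an illustration of the kind of aggregation that has to happen on the database side, a weekly-average query over a hypothetical readings table (table and column names are assumptions, not anything from the thread) might look like:

```sql
-- Average temperature per ISO week, computed entirely in MySQL.
-- weather_log(recorded_at DATETIME, temp DECIMAL) is a hypothetical table.
SELECT YEARWEEK(recorded_at, 3) AS iso_week,
       AVG(temp)                AS avg_temp,
       COUNT(*)                 AS samples
FROM weather_log
WHERE recorded_at >= NOW() - INTERVAL 1 YEAR
GROUP BY iso_week
ORDER BY iso_week;
```

With an index on recorded_at this stays fast at hundreds of thousands of rows; doing the same through page selectors would mean loading every reading into PHP first.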
  10. Chaining (piping usually means something different) in OOP doesn't mean returning $this. It means returning the object on which you want the next method call to execute. Where you get the object to return from is up to you. But I'm really wondering what the use case behind this is. Generally I'd tend to avoid classes knowing of each other, and rather opt for composing their functionality with code outside of them.
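A minimal sketch of that idea in plain PHP (class and method names are made up for illustration, not taken from the module discussed above): addCell() returns the new Cell rather than $this, so the rest of the chain configures the cell, not the table:

```php
<?php
// Sketch: chaining that switches the receiver from Table to Cell.
class Cell {
    private string $text = '';
    public function setText(string $text): self { // returns $this: stay on the cell
        $this->text = $text;
        return $this;
    }
    public function render(): string {
        return '<td>' . htmlspecialchars($this->text) . '</td>';
    }
}

class Table {
    /** @var Cell[] */
    private array $cells = [];
    public function addCell(): Cell { // returns the new Cell: chain continues on it
        $cell = new Cell();
        $this->cells[] = $cell;
        return $cell;
    }
    public function render(): string {
        $tds = implode('', array_map(fn(Cell $c) => $c->render(), $this->cells));
        return "<table><tr>$tds</tr></table>";
    }
}

$table = new Table();
$table->addCell()->setText('Spalte 1');
$table->addCell()->setText('Spalte 2');
echo $table->render();
// <table><tr><td>Spalte 1</td><td>Spalte 2</td></tr></table>
```

Note that once the chain has switched to the Cell there is no way back to the Table in the same expression, which is why each row of the original example starts again from $table.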
  11. Technically yes. I'd still check early on that ProcessWire won't at some point try to load all those rows into memory at once, or worse yet try to render them all as inputs on the page (in the admin). I have a vague memory of Ryan adding something to handle exactly this to one of his modules or the core (or both).
  12. Probably creating a custom FieldtypeDataLogger (or however you prefer to name it!) may give you the best of both worlds. You could have your own optimized MySQL table schema, while for post-processing/reporting you could embed it as a field in a PW page or pages, where you would access individual data (temperature, humidity, etc.) as properties of the field, with all the advantages of PW selectors.
  13. Not sure how familiar everyone here is with the inner workings of ProcessWire, so just to expand on this a little bit: FieldtypeComments is a ProcessWire Fieldtype module, and Fieldtype modules can define their own database schema. The FieldtypeEvents module was built as an example for custom Fieldtype modules, and if you take a closer look at FieldtypeEvents::getDatabaseSchema() (and other methods in that class), you should get a pretty good idea of how this stuff works. On the other hand if you want to store loads of data and don't really need/want said data to be stored in an actual field (accessible via the API and editable in the Admin), you can also define a custom database table, just as Edison pointed out above. You can find some examples of working with custom database tables from the ProcessChangelog module. Of course you don't have to wrap the table creation etc. into a module – not unless you expect to set this thing up on multiple occasions 🙂 One last point on naming custom tables: if you create a truly custom database table, you'll want to steer away from any native table names. This includes field_*, since that prefix is reserved for field data.
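For anyone curious what that schema customisation looks like in practice, here is a rough sketch in the style of FieldtypeEvents. The class name and extra columns are invented for illustration; see the actual FieldtypeEvents module for a complete, working implementation:

```php
<?php namespace ProcessWire;

// Sketch only: a Fieldtype storing one sensor reading per row.
// The 'data' column is the one ProcessWire expects every Fieldtype
// schema to provide; 'humidity' is an illustrative extra column.
class FieldtypeDataLogger extends FieldtypeMulti {

    public function getDatabaseSchema(Field $field) {
        $schema = parent::getDatabaseSchema($field);
        $schema['data'] = 'FLOAT NOT NULL';          // e.g. temperature
        $schema['humidity'] = 'TINYINT UNSIGNED';    // additional reading
        $schema['keys']['data'] = 'KEY data (data)'; // index for selector queries
        return $schema;
    }
}
```

A real Fieldtype would also override the sleep/wakeup methods to map between the row columns and the value object the API exposes, which is exactly what FieldtypeEvents demonstrates.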
  14. Why not use pages and proper Fieldtype fields to store the data? Seems like it would be much easier, and anything out-of-the-ordinary that you want to show within your Process module you could show within Page Edit by hooking the render of a markup inputfield, or by using one of the runtime field modules from kongondo, bernhard, kixe, or me.
  15. There is some code and some links to explore in this post:
  16. Try:

$wire->addHookAfter('Field::getInputfield', function(HookEvent $event) {
    $page = $event->arguments(0);
    $inputfield = $event->return;
    // Only for non-superusers
    if($event->wire('user')->isSuperuser()) return;
    // Only for a particular Repeater page ID
    if($page->id !== 1553) return;
    // Set collapsed to Inputfield::collapsedNoLocked or Inputfield::collapsedHidden as suits
    $inputfield->collapsed = Inputfield::collapsedNoLocked;
});
  17. I did test it with a new image using Preview; no change. Anyway, the picture is uploaded correctly; the website even displays it correctly. What is failing is the thumbnail that ProcessWire is creating for its admin pages.
  18. When you do this you load all the children of the root parent into memory as a PageArray, and then keep just one of those pages. It's more efficient to directly get the single page you need:

$page->rootParent()->child('template=menu_submenus, include=hidden');
  19. I suspect the image is corrupt in some way, or maybe contains some metadata, colour profile, etc, that can't be handled. Try opening this image in Photoshop or similar and then save it as a new JPG, making sure to use the standard sRGB colour profile.
  20. When you add an image via the API rather than via the inputfield you have to do all the work yourself that the inputfield would otherwise do for you. A recent answer to a similar question, which contains a link to some code that might be useful to you:
  21. Hi @mr-fan, you are dealing with a very interesting project! Wow… it brings me back to my previous lives in microcontrollers and NAND flash storage...
If your MCU timer is sampling analog data every 10 minutes, and your objective is to store the readings for 5 years at this granularity (10 minutes), then, as you have calculated, that means 2.6×10^5 records. That's not trivial at all. I love PW's page-storage capabilities but, in my personal opinion, the resulting storage solution would be less efficient and performant than storing the data directly in a MySQL table. I would use a MySQL table for data storage and PW pages for data post-processing and reporting.
A lot depends on which kind of post-processing you are going to need, but as you are building a simple (but big!) data logger, another option for storage could even be to create a simple flat file to which you append a new record every 10 minutes. However MySQL, in the long run, will give you much better flexibility, versatility, and performance.
If you want a simple example of using MySQL from inside PW, I suggest you explore the FieldtypeComments core module. Comments in PW are not stored using PW page-storage, but in a MySQL table (field_comments), so you can get some useful hints about dealing with MySQL schemas from within PW. Of course you do not need to create a separate MySQL database for your logger; it can simply be a new table inside your existing PW database, exactly as PW does to manage comments.
Please also note that the PW library provides embedded support for dealing with MySQL (I have not used it so far, but... now I feel I need to give it a try soon…) via the $database API variable.
By the way, if you need any help creating a MySQL data-logger table inside your PW database, just give me the data specs; I would be glad to help you set up a trace. Wish you a nice weekend.
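As a small illustration of the $database API variable mentioned above, here is a hedged sketch of inserting one reading into a custom table from template or module code. $database is ProcessWire's PDO wrapper, so prepare/bindValue/execute work as in plain PDO; the data_logger table and its columns are hypothetical:

```php
<?php namespace ProcessWire;

// Sketch: writing one sensor reading via ProcessWire's $database (a PDO wrapper).
// The table name and columns are assumptions for illustration.
$query = $database->prepare(
    'INSERT INTO data_logger (recorded_at, temp, humidity) VALUES (NOW(), :temp, :humidity)'
);
$query->bindValue(':temp', 21.4);
$query->bindValue(':humidity', 55, \PDO::PARAM_INT);
$query->execute();
```

Using prepared statements here is not optional politeness: readings arriving from an external device are untrusted input just like form data.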
  22. v0.1.4 released. This version adds support for some new features added in Repeater Matrix v0.0.5 - you can disable the settings for matrix types (so items cannot change their matrix type), and when the limit for a type is reached it becomes unavailable for selection in the type dropdown of other matrix items. The module now requires Repeater Matrix >= v0.0.5. If you are using Repeater Matrix v0.0.4 or older then there is no need to upgrade Restrict Repeater Matrix as the new features in v0.1.4 are only applicable to Repeater Matrix v0.0.5.
  23. Last week
  24. Hello, I thought it might be useful to post a CSP I've recently deployed using this module. Every site is different - there's no prescriptive policy, and that's the main caveat here. This is for a site with an embedded Shopify store, an Issuu embed, a Google Tour embed, and a Google Maps implementation (JS API). It also uses Font Awesome 5 from their CDN, jQuery from CDNJS, and some Google Fonts, and it has TextformatterVideoEmbed installed alongside its extended-options module.

default-src 'none';
script-src 'self' cdnjs.cloudflare.com *.google.com *.gstatic.com *.googleapis.com www.google-analytics.com www.googletagmanager.com e.issuu.com sdks.shopifycdn.com;
style-src 'self' 'unsafe-inline' cdnjs.cloudflare.com *.googleapis.com use.fontawesome.com;
img-src 'self' data: *.google.com *.googleapis.com *.gstatic.com *.ggpht.com www.google-analytics.com www.googletagmanager.com brand.nbcommunication.com *.shopify.com sdks.shopifycdn.com;
connect-src 'self' www.google-analytics.com ocean-kinetics.myshopify.com;
font-src 'self' fonts.gstatic.com use.fontawesome.com;
object-src 'self';
media-src 'self' data:;
manifest-src 'self';
frame-src www.google.com www.youtube.com www.youtube-nocookie.com player.vimeo.com w.soundcloud.com e.issuu.com;
form-action 'self';
base-uri 'self'

The Shopify embed script and the Google Analytics initialisation have been moved into script files, so there are no inline scripts at all; an 'unsafe-inline' in script-src would be an obstacle to getting that A+ on Observatory. Google Analytics is also a bit of an impediment to a top-drawer score, as its script doesn't use SRI. As I understand it there is a reason for that: it is a script that just loads other scripts, so an SRI implementation would only be token and wouldn't actually improve security. Still, it is possible to get A+ without dealing with this.
It would be great to get some discussion going on CSP implementation - I'm only a few weeks in myself, so have much to learn! Cheers, Chris NB