Leaderboard
Popular Content
Showing content with the highest reputation on 07/24/2019 in all areas
-
Hi @mr-fan, sorry that I'm late on this! I've done something very similar some time ago... a Raspberry Pi Zero monitoring the water + air temperature of a river every ? minutes, logging everything and showing graphs... Loading/showing big amounts of data was the reason for building RockFinder and RockGrid, and now I'm working on RockTabulator (which will replace RockGrid one day...). You should really have a look at those tools and read the related forum threads. I've also done some performance tests back then, comparing to findMany.

I just added a quick example loading 200k pages via RockFinder into a RockTabulator. The whole site is ready to clone and install from GitHub: https://github.com/BernhardBaumrock/tabulator.test You can also easily do custom SQL queries in the backend (see the simple PHP code of the example at the bottom). This way you could even do easy aggregations directly on the DB (https://processwire.com/talk/topic/18983-rocksqlfinder-outdated-thread-link-to-current-version-inside/?do=findComment&comment=165807). Load times and data transfer (gzipped) can be seen in the devtools on the right. Finally, using RockMarkup it's a piece of cake to implement any charting library in the PW backend.

I think I'd store everything in regular PW pages and run a daily cron that does the aggregations. The aggregations could be stored in regular PW pages as well (eg as JSON in a textarea field), and this data could easily be displayed as a chart via RockMarkup on the page edit screen. If you want, I'd be happy to work on your project together and take it as an open-source example to showcase my modules - I still think that the potential of those tools is really not getting through... 6 likes for RockMarkup... that's kind of disappointing...
6 points
-
4 points
-
Are you familiar with page reference fields? Are you already using them? Assuming you are already using them, I guess you want cross-references in both directions. In that case, take a look at this module: https://modules.processwire.com/modules/connect-page-fields/ From the API side, there's also this relatively new method: https://processwire.com/api/ref/page/references/
4 points
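In case it helps, a quick sketch of that method in use (assumes a reasonably recent PW 3.x; template name is made up):

<?php namespace ProcessWire;

// All pages that reference $page via any Page reference field
foreach($page->references() as $referencing) {
    echo $referencing->title . "\n";
}

// Optionally narrow the result with a selector
$articles = $page->references("template=article");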
-
3 points
-
From past experience working with lat/lng coordinates, I suggest using a decimal field for these. Float fields only have a precision of 6 figures, which is often not sufficient for a lat/lng value.
2 points
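The same advice applies if the coordinates end up in a custom table rather than a PW field; a sketch (table and column names are hypothetical):

<?php namespace ProcessWire;

// FLOAT only guarantees ~6-7 significant digits, so coordinates get rounded;
// DECIMAL(10,7) stores the exact value and covers -180.0000000 to 180.0000000.
$database->exec("
    CREATE TABLE locations (
        id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        lat DECIMAL(10,7) NOT NULL,
        lng DECIMAL(10,7) NOT NULL
    )
");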
-
Please don't use this module any more. I think in the end it just adds more complexity (and dependencies) than benefits. See this tutorial on how simple it is to create a custom runtime-only Inputfield:

WHY? I started building this module because the existing solutions by @kongondo and @kixe (https://modules.processwire.com/modules/fieldtype-runtime-markup/ and https://github.com/kixe/FieldtypeMarkup) did not exactly fit my needs. Actually this module is aimed to be a base module that can easily be extended by other modules. It takes care of the heavy lifting that has to be done when working with custom fieldtypes in ProcessWire (injecting scripts and styles, handling JS events, doing translations). See RockTabulator as an example. I'm quite sure more will follow (eg ChartJS)...

WHAT? This module helps you inject ANY php/html/js/css into any PW backend form (either on a page or in custom process modules). It also comes with a sandbox process module that helps you set up your fields and provides handy shortcuts that integrate with TracyDebugger and your IDE.

WHERE ...to get it? At the moment the module is released as early alpha and available only on GitHub: https://github.com/BernhardBaumrock/RockMarkup2 If you have any questions or ideas please let me know.

PS: This module shows how easy it is to extend this module for your very own needs. All you need to do is provide the module's info arrays and then overwrite any methods that you have to modify (eg the Inputfield's render() method): https://github.com/BernhardBaumrock/RockMarkupExtensionExample
1 point
-
A simple module to enable easy navigation between the public and the admin side of the site. After installation a green bar will appear at the top of the screen, containing a few navigation elements and displaying the PW version number. Heavily inspired by @apeisa's great AdminBar (thanks!). I needed a somewhat simpler tool for my projects, and this was the result. Available on GitHub.
1 point
-
I'm not sure, but maybe this is related: https://processwire.com/talk/topic/20006-module-restapi/?do=findComment&comment=176115 I guess that only superusers see such messages, and regular editors don't.
1 point
-
@teppo First I would like to thank you for creating this module! However, I'm having problems with it. When I try to index my fields I get two warnings:

Warning: Declaration of SearchEngine\Renderer::__get(string $name) should be compatible with ProcessWire\Wire::__get($name) in xxx/modules/SearchEngine/lib/Renderer.php on line 581
Warning: Declaration of SearchEngine\Query::__get(string $name) should be compatible with ProcessWire\Wire::__get($name) in xxx/modules/SearchEngine/lib/Query.php on line 226

Then it says "indexed 0 pages in 0 seconds". Why does it do that? Thank you.
1 point
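For context, this type of warning comes from PHP 7+ checking overridden method signatures: a child class can't add a parameter type that the parent's method lacks. A minimal illustration (not the module's actual code):

<?php
class ParentClass {
    public function __get($name) { return null; }
}

// Triggers "Declaration ... should be compatible with ..." because the
// added "string" type narrows the parent's untyped parameter:
class BadChild extends ParentClass {
    public function __get(string $name) { return null; }
}

// Compatible override: same untyped signature as the parent
class GoodChild extends ParentClass {
    public function __get($name) { return null; }
}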
-
I was wondering if there's a way to set an absolute path instead of a relative one when linking to a file in CKEditor? Any help would be appreciated.
1 point
-
Just a short update on this - I am stress-testing the PW way, starting with simple pages for now, and will see what comes up down the road. I made a crude script to generate dummy data and created around 88k pages (the DB is now 35MB) very quickly on shared hosting... for example, creating 48 test records for every day (17,260 pages - 2 records per hour) runs in about 80 seconds. For others searching for a quick and dirty way to create dummy content, I'll leave my code here (if there are better ways - at least it can serve as a bad example ;)

Next up is a script that builds the monthly and yearly averages for long-term stats, and then I'll try out how reporting and visualization in a simple graph works against a DB with around 90,000 pages holding just some integers and floats. I'd first try an easy-to-use chart library (https://gionkunz.github.io/chartist-js/index.html). Since these tools all use JSON data, I think I could cache or, better, pre-build the JSON strings for the charts and see how fast it performs... I'll report again.

This is an interesting experiment so far - I get to choose how I spend my free time in two really different worlds: work on the backend with PW, or solder and glue things together on the hardware side... before this project I was captured only by the web ;)
1 point
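(The script itself didn't survive in this excerpt, so here is a rough sketch of the approach; template, parent and field names are made up.)

<?php namespace ProcessWire;

// Create one day's worth of dummy measurement pages via the PW API
$parent = $pages->get('/records/');
$start  = strtotime('2019-01-01');

for($i = 0; $i < 48; $i++) { // 48 records per day = one every 30 minutes
    $p = new Page();
    $p->template = 'record';
    $p->parent = $parent;
    $p->title = date('Y-m-d H:i', $start + $i * 1800);
    $p->temperature = mt_rand(-100, 350) / 10; // fake reading
    $p->save();
}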
-
@valan You need to include both the autoloader from Composer and the ProcessWire bootstrap file; see Bootstrapping ProcessWire CMS. Assuming your autoloader lives under prj/vendor/autoload.php and the webroot with the ProcessWire installation under prj/web/, you can use the following at the top of your script:

# prj/console/myscript.php
<?php namespace ProcessWire;

# include composer autoloader
require __DIR__ . '/../vendor/autoload.php';

# bootstrap processwire
require __DIR__ . '/../web/index.php';

ProcessWire will detect that it is being included in this way and automatically load all the API variables (and the functions API, if you are using that). Keep in mind that there will be no $page variable, as there is no HTTP request, so there is no current page.
1 point
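For instance, after those two require lines the API is available right away; a quick sketch using the functions API:

// ...following the two requires shown above, in the same script:
$home = wire('pages')->get('/');        // the $pages API variable via wire()
echo $home->title . "\n";               // e.g. "Home"
echo wire('config')->urls->root . "\n";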
-
As simple as it is helpful, very appreciated! Thanks! For full compatibility, I would add a module setting for your admin path, because the default "processwire" may not always be used, and (as in my case) this leads to a "page not found". Alternatively, mention it in the readme file.
1 point
-
Absolutely agree with this – use case for the data matters a lot! In my experience MySQL queries (with decent indexes) tend to be pretty fast until you reach the scale of millions of rows, but if this is going to be a public-facing service that gets a lot of hits and needs to generate all sorts of reports in real time, then definitely consider alternatives right from the get-go. Might be a good idea to look into them anyway, but if it's a one-off project and you're likely to stay in the 200-300k record range, you're probably not going to get a major benefit out of them. That being said, if you already know what your data is going to look like, you can take some guesswork out of the equation by starting from a simple proof of concept: create a database table for your data, add a script that generates some 200-300k rows of random mock data based on your actual expected data format, and build a proof-of-concept graph to display said data. If the database concept doesn't pan out, i.e. it's too slow or something like that, you can just swap it for something more performant while keeping other parts of the application. Either way it's often a good idea to build your product in layers, so that if a specific layer – graph library, database, or something in-between – needs to be swapped for something else, other layers remain more or less the same.
1 point
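To make the mock-data step concrete, a sketch (the "readings" table and its columns are assumptions):

<?php namespace ProcessWire;

// Bulk-insert ~250k rows of random mock data to test query/graph performance
$stmt = $database->prepare(
    "INSERT INTO readings (created, temperature) VALUES (:created, :temp)"
);
$start = strtotime('-5 years');
for($i = 0; $i < 250000; $i++) {
    $stmt->execute([
        ':created' => date('Y-m-d H:i:s', $start + $i * 600), // one row per 10 minutes
        ':temp'    => mt_rand(-200, 400) / 10,                // fake reading, -20.0 to 40.0
    ]);
}

Wrapping the loop in a transaction (or batching the inserts) speeds this up considerably.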
-
In ProcessWire the usual wisdom is to avoid selecting much data at all; that's the sole reasoning behind e.g. the nesting you described. It won't help at all if you want to aggregate over, say, the last 5 years of weather data. The biggest question still open in this topic is "what for?". Without knowing the patterns of how you intend to access the stored data, and which timeframes of aggregation are appropriate, it's not really possible to tell what you need. If you're fine with reports taking a hot minute to aggregate, you're in a whole different ballpark than if you need huge aggregations to be live and instantly available in some web dashboard. If you expect the latter case, I'd suggest looking at proper databases for time series data, especially if the number of entries is meant to grow beyond the ~500k–1M mark; I'd look at InfluxDB or PostgreSQL with the Timescale plugin. Using pages in ProcessWire might make sense for an MVP, but if things should scale it'll be a lot of manual querying even in ProcessWire, so I'd opt for the proper solution from the start. Given the volume of data, I doubt you can avoid getting more intimate with databases, as you will need to aggregate data directly on the DB side, which ProcessWire doesn't support to begin with.
1 point
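For reference, "aggregating directly on the DB side" would look something like this (table and column names are hypothetical):

<?php namespace ProcessWire;

// Daily averages computed by MySQL itself rather than in PHP; with an
// index on "created" this stays fast even at hundreds of thousands of rows.
$result = $database->query("
    SELECT DATE(created) AS day,
           AVG(temperature) AS avg_temp,
           MIN(temperature) AS min_temp,
           MAX(temperature) AS max_temp
    FROM readings
    GROUP BY DATE(created)
    ORDER BY day
");
$rows = $result->fetchAll(\PDO::FETCH_ASSOC);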
-
Not sure how familiar everyone here is with the inner workings of ProcessWire, so just to expand on this a little bit: FieldtypeComments is a ProcessWire Fieldtype module, and Fieldtype modules can define their own database schema. The FieldtypeEvents module was built as an example for custom Fieldtype modules, and if you take a closer look at FieldtypeEvents::getDatabaseSchema() (and other methods in that class), you should get a pretty good idea of how this stuff works. On the other hand if you want to store loads of data and don't really need/want said data to be stored in an actual field (accessible via the API and editable in the Admin), you can also define a custom database table, just as Edison pointed out above. You can find some examples of working with custom database tables in the ProcessChangelog module. Of course you don't have to wrap the table creation etc. into a module – not unless you expect to set this thing up on multiple occasions. One last point on naming custom tables: if you create a truly custom database table, you'll want to steer away from any native table names. This includes field_*, since that prefix is reserved for field data.
1 point
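To give a flavor of what that schema definition looks like, a trimmed-down sketch loosely modeled on FieldtypeEvents (not the actual module code; a real Fieldtype needs several more methods):

<?php namespace ProcessWire;

class FieldtypeExample extends FieldtypeMulti {
    public function getDatabaseSchema(Field $field) {
        $schema = parent::getDatabaseSchema($field);
        $schema['data'] = 'INT NOT NULL';              // the "data" column is required
        $schema['location'] = 'VARCHAR(255) NOT NULL'; // custom columns
        $schema['notes'] = 'TEXT NOT NULL';
        $schema['keys']['location'] = 'KEY location(location)';
        return $schema;
    }
}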
-
Hi @mr-fan, you are dealing with a very interesting project! Wow... it brings me back to my previous lives in microcontrollers and NAND flash storage... If your MCU timer is sampling analog data every 10 minutes, and your objective is to store it for 5 years at this granularity level (10 minutes), then as you have calculated, that means 2.6*10^5 records. That's not trivial at all. I love PW's page-storage capabilities but, in my personal opinion, the resulting storage solution would be less efficient and performant than directly storing data in a MySQL table. I would use a MySQL table for data storage and PW pages for data post-processing and reporting. A lot depends on which kind of post-processing you are going to need, but as you are building a simple (but big!) data logger, another option for storage could even be a simple flat file to which you append a new record every 10 minutes. However MySQL, in the long run, will give much better flexibility, versatility, and performance. If you wish to look at a simple example of using MySQL from inside PW, I suggest you explore the FieldtypeComments core module. Comments in PW are not stored using PW page-storage, but as a MySQL table (field_comments), so you can get some useful hints about dealing with MySQL schemas from within PW. Of course you do not need to create a separate MySQL database for your logger; it can simply be a new table inside your existing PW database, exactly as PW does to manage comments. Please also note that PW provides embedded support for dealing with MySQL (I have not used it so far, but... now I feel I need to give it a try soon...) via the $database API variable. By the way, if you need any help creating a MySQL data logger table inside your PW database, just give me the data specs and I would be glad to help you set one up. Wish you a nice weekend.
1 point
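As a starting point, such a logger table (and the 10-minute insert) might look like this; names and column types are assumptions to be adjusted to the real sensor specs:

<?php namespace ProcessWire;

// One-time setup: a plain logger table inside the existing PW database
$database->exec("
    CREATE TABLE IF NOT EXISTS datalogger (
        id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        created DATETIME NOT NULL,
        water_temp DECIMAL(4,1) NOT NULL,
        air_temp DECIMAL(4,1) NOT NULL,
        INDEX created (created)
    )
");

// Called every 10 minutes with the sampled values
$insert = $database->prepare(
    "INSERT INTO datalogger (created, water_temp, air_temp) VALUES (NOW(), ?, ?)"
);
$insert->execute([21.4, 19.8]);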
-
Here is another snippet that I use to get rid of unwanted table properties:

// Remove unwanted attributes from tables
CKEDITOR.on('dialogDefinition', function(ev) {
    var dialogName = ev.data.name;
    var dialogDefinition = ev.data.definition;
    if (dialogName == 'table') {
        var info = dialogDefinition.getContents('info');
        info.remove('txtWidth');
        info.remove('txtHeight');
        info.remove('txtBorder');
        info.remove('txtCellPad');
        info.remove('txtSummary');
        info.remove('txtCellSpace');
        info.remove('cmbAlign');
        var advanced = dialogDefinition.getContents('advanced');
        advanced.remove('advStyles');
        advanced.remove('advId');      // id attribute
        advanced.remove('advLangDir'); // writing direction
        advanced.get('advCSSClasses')['default'] = 'uk-table'; // set default class for table
    }
});

Put this code inside your custom config.js. Best regards
1 point
-
Thanks a lot to Kongondo and Szabesz for this module. I have a small project on a university website, where they required quite simple task management for their internal research team. They want all actions triggered through emails and SMS, because it is not easy to expect the team to always log in to their website. I did a quick development based on this module and will share and post it in a new thread.
1 point
-
1 point
-
Thanks guys. I've looked into ProcessPageEditLink before as well, but as Robin S says, I think it's absolute to the root, not including the domain URL. I also agree that generally it's better practice to keep it this way, but in my specific case the backend, which is set up as an API, is on a different URL than the front end. I have not tested it yet, but for anyone else looking into this issue, there's another thread covering it with a possible solution:
1 point
-
I have puzzled over this too, but I think the confusion comes from a non-standard use of the word "absolute" in relation to the URL. ProcessPageEditLink never inserts an absolute URL in the sense of including the protocol or domain; rather, the absolute option means absolute relative to the site root. So the link URL starts with '/', as opposed to the two relative options, which can give a link URL like '../some-page/'. The current behaviour is a good thing, because otherwise all links would break when the root domain changes (e.g. going from dev to live environment). But it would help if the meaning of the absolute option was clarified.
1 point
-
1 point
-
Yes, the export generates JSON. It's all documented here: https://processwire.com/blog/posts/august-2014-core-updates-1/ - field export/import. And here: https://processwire.com/blog/posts/august-2014-core-updates-3/#template-export-import - template export/import. And more here: https://processwire.com/talk/topic/2117-continuous-integration-of-field-and-template-changes/?p=68899
1 point
-
If you have a robots.txt, I would use it to specify what directories you want to exclude, not include. In a default ProcessWire installation, you do not need to have a robots.txt at all. It doesn't open up anything to crawlers that isn't public. You don't need to exclude your admin URL because the admin templates already have a robots meta tag telling them to go away. In fact, you usually wouldn't want to have your admin URL in a robots file because that would be revealing something about your site that you may not want people to know. The information in robots.txt IS public and accessible to all. So use a robots.txt only if you have specific things you need to exclude for one reason or another. And consider whether your security might benefit more from a robots <meta> tag in those places instead. As for telling crawlers what to include: just use a good link structure. So long as crawlers can traverse it, you are good. A sitemap.xml might help things along too in some cases, but it's not technically necessary. In most cases, I don't think it matters to the big picture. I don't use a sitemap.xml unless a client specifically asks for it. It's never made any difference one way or the other. Though others may have a different experience.
1 point
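For example, a minimal setup along those lines (the excluded path is made up):

# robots.txt in the site root; remember that its contents are public
User-agent: *
Disallow: /newsletter-archive/

<!-- or, per template, in the document head: keeps the page out of indexes
     without advertising its URL in a public file -->
<meta name="robots" content="noindex, nofollow">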