mtwebit

DataSet import modules


I've created a set of modules for importing (manipulating and displaying) data from external resources. A key requirement was to easily handle a large number of pages (100k+).

Main features

  • import data from CSV and XML sources in the background (using Tasker)
  • purge, update or overwrite existing pages using selectors
  • user configurable input <-> field mappings
  • on-the-fly data conversion and composition (e.g. joining CSV columns into a single field)
  • download external resources (files, images) during import
  • handle page references by any (even numeric) fields

How it works

You can upload CSV or XML files to DataSet pages and specify import rules in their description.
The module imports the content of the file and creates/updates child pages automatically.

How to use it

Create a DataSet page that stores the source file. The file's description field specifies how the import should be done:


name: Testing the import
input: # Source configuration
  type: csv
  delimiter: ','
  header: 1
  limit: 10  # import only 10 entries; remove this limit once the test is successful
fieldmappings: # specified as field_name: csv_column_id (1, 2, 3, ...)
  title: 1
pages:  # Config for child pages
  template: Data
  selector: 'title=@title'

After saving the DataSet page, an import button should appear below the file description.

[Screenshot: the DataSet file's description field containing the import rules]

When you start the import, the DataSet module creates a task (executed by Tasker) that imports the data in the background.

You can monitor its execution and check its logs for errors.

[Screenshot: the DataSet import task running]

See the module's wiki for more details.

The module has already been used in three projects to import and handle large XML and CSV datasets. It has some rough edges and I'm sure it needs improvement :) so comments are welcome.


Thanks for sharing your modules @mtwebit! This looks like it could be really useful. Is there any way you could include a place to add a URL to the file instead of an upload? For example, I store staff contact information in a Google Spreadsheet. This spreadsheet gets updated all the time. It would be cool to just add the URL to the CSV file instead of having to download the file and upload it into ProcessWire. The input could also remember its previous value so I can run the import over and over again as needed. Maybe it could also somehow be automated to run the same import every day?

If not, no worries.  Thanks again.

14 hours ago, gmclelland said:

Is there any way you could include a place to add a URL to the file instead of an upload? [...] Maybe it could also somehow be automated to run the same import every day?

I was thinking about this too...

There was a dev branch that dropped the [file + rules in description] scheme and introduced a fieldset of [rule + (optional) file]. It turned out to be too complicated and did not work well, so I dropped it.

An easy solution is to allow a source location override. So... see this commit and use the input:location configuration option.
It's not the best solution, as it still requires a (dummy) file to be uploaded (to create the import rules in its description), but it works.
You can even use this solution to refer to files uploaded to other pages using this URL scheme: wire://pageid/filename
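
For illustration, the rules in the (dummy) file's description might then look something like this. This is only a sketch: the exact placement of the location key should be checked against the commit and the wiki, and both URLs are made up.

name: Import from an external source
input:  # Source configuration
  type: csv
  delimiter: ','
  header: 1
  location: 'https://example.com/staff.csv'  # hypothetical external URL
  # or reference a file uploaded to another page:
  # location: 'wire://1042/staff.csv'  # wire://pageid/filename
fieldmappings:
  title: 1
pages:
  template: Data
  selector: 'title=@title'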

Hope it helps.

14 hours ago, gmclelland said:

It looks like you might have already considered and built this type of functionality https://github.com/mtwebit/DataSet/wiki/Import-rules#data-conversion-during-import

That's different. It downloads data for a single field (e.g. a file to be stored in a file field), not for an entire DataSet.


Really like this. It would be complete if it could do bulk export too; currently I'm using a custom PHP script on the front end for huge data exports, but I'd prefer to do this in the admin area.


JSON rule format is now supported, but I have a small problem with it: it works fine in the global rule field, but storing JSON in file descriptions is not possible at the moment.

Pagefile uses JSON internally for storing multi-language file descriptions, so it is not possible to store JSON data there... I could not find a way to overcome this issue (even if multi-language descriptions are disabled, Pagefile still drops JSON descriptions).

Any idea?

See the GitHub issue.

9 hours ago, mtwebit said:

Pagefile uses JSON internally for storing multi-language file descriptions, so it is not possible to store JSON data there... Any idea?

JSON in the description field is detected if the first character is { and the last character is }, or if the first character is [ and the last character is ]. See here.

So one workaround could be to prefix the JSON with some character...

*{"json": "here"}

...and then trim the first character before the module decodes the JSON.
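
In module code the workaround could look roughly like this (just a sketch; the * marker is the example character above, and $file is a ProcessWire Pagefile):

    // Store the rules as '*{"json": "here"}' so Pagefile's JSON detection
    // does not kick in, then strip the marker before decoding.
    $desc = $file->description;
    $config = null;
    if (strncmp($desc, '*', 1) === 0) {
        $config = json_decode(substr($desc, 1), true);
        if (json_last_error() !== JSON_ERROR_NONE) {
            $config = null; // the remainder was not valid JSON after all
        }
    }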

On 1/25/2019 at 2:05 PM, Robin S said:

So one workaround could be to prefix the JSON with some character... and then trim the first character before the module decodes the JSON.

Yeah, this is a little painful. I use the same approach in Tracy. I think it might be better if Ryan replaced that JSON detection code with the following, which seems to be the most common approach to this problem.

    /**
     * is the provided string a valid json string?
     *
     * @param string $string
     * @return boolean
     */
    public function isJson($string) {
        json_decode($string);
        return (json_last_error() == JSON_ERROR_NONE);
    }

PS - actually, maybe this isn't useful at all for this issue, but in general I think he should be using a function like this to determine whether a string is JSON.
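
For instance, a hypothetical guard in Pagefile's description handling could then be:

    // Only treat the description as multi-language JSON data when it
    // really is valid JSON, not merely brace-wrapped text.
    if ($this->isJson($value)) {
        $langValues = json_decode($value, true); // multi-language descriptions
    } else {
        $langValues = null; // plain-text description, use as-is
    }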


I'm trying to import 60k pages from a CSV file.

I installed the DataSet module but the dataset_config field seems not to be working.

By default it is set to a textarea field, and the configuration is not valid. There is also a message: "YAML is not supported on your system. Try to use JSON for configuration." I installed the fieldtype-yaml module and set it for dataset_config, but this is also not working.

https://modules.processwire.com/modules/fieldtype-yaml/

Any suggestions? All other modules required are installed.

The YAML formatting in the screenshot is wrong, I know.

[Screenshot: the dataset_config field with the invalid YAML configuration]

6 hours ago, flydev said:

@theqbap

The YAML thing is a PHP extension which needs to be activated in your server configuration.

Ok, thank you for the reply. Can you provide an example JSON config for the dataset_config field? Unfortunately I can't activate YAML on the server side.


@theqbap

The YAML example converted to JSON with an online tool gives us this config:

{
  "name": "Testing the import",
  "input": {
    "type": "csv",
    "delimiter": ",",
    "header": 1,
    "limit": 10
  },
  "fieldmappings": {
    "title": 1
  },
  "pages": {
    "template": "Data",
    "selector": "title=@title"
  }
}

 

On 12/25/2019 at 10:26 AM, flydev said:

The YAML example converted to JSON with an online tool gives us this config: (see the full config above)

I still get the message "DataSet config is invalid."


@theqbap

Then add the string "JSON" before the config:

JSON{
  "name": "Testing the import",
  "input": {
    "type": "csv",
    "delimiter": ",",
    "header": 1,
    "limit": 10
  },
  "fieldmappings": {
    "title": 1
  },
  "pages": {
    "template": "Data",
    "selector": "title=@title"
  }
}

 


I've checked the above config on my DataSet test site and it is valid.
(Don't forget to save the page to run the validator again.)

On 12/27/2019 at 9:49 AM, flydev said:

Then add the string "JSON" before the config: (see the full config above)

Thank you for the help 🙂


One more question regarding importing data. 

When a row in a CSV file results in a page with the same 'title' as one that already exists, is there an option to make the title unique and still import the new page with the same name?

CSV example:

Title,Number
Orange,20-300
Orange,10-20
Banana,5-10


ProcessWire pages:

+Import Folder
-Orange
-Orange
-Banana
 


The two “Orange” pages have the same page title, but ProcessWire probably gives them different page names. Look on the Settings tab of each page.


By default DataSet creates a new PW page each time it imports a row. In the above example, two pages will be created with the title "Orange" and one with "Banana".

There is no option to change the title of the new page (the 2nd Orange) if it matches an already existing one (the 1st Orange).
You can, however, combine several fields in the title to make it unique. E.g. you can compose the title like this (column #0 always contains the row's serial number):

title: [1, ' (', 0, ')']

The result will be:

Orange (1)
Orange (2)
Banana (3)

You can also update (overwrite or merge) already existing pages. In the "pages" section of the DataSet config you can specify a selector and add the overwrite or merge option, as sketched below.
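
A minimal sketch (the option names are as described above; check the wiki for the exact syntax):

pages:
  template: Data
  selector: 'title=@title'
  overwrite: 1  # or merge: 1, to update matching pages instead of creating duplicates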

See the wiki for more details. (Which needs to be updated but it is probably still helpful 🙂 )


OK. It was time to update the wiki 🙂

I've uploaded a new DataSet version (0.9.5) to GitHub. It contains many improvements to data type conversion and page reference handling, plus several bug fixes.
It also has a new profiler to help optimize the import routines.

Tasker is also updated.


Thanks for developing this module; my tests so far have been really positive. I'm developing a PW site that requires the import and regular update of 100k+ pages, and this will be invaluable.

One question I have (if you have time) is around page references. I'm unable to modify my source data, so I have created a page reference and a field that correspond to the source data, e.g. 'LED', which is ID 1137.

My CSV has this 'LED' data; however, when I import, I get this result:

Processing data for field 'category'.
Page selector @ field category: templates_id=50, has_parent=1110.
Found referenced page 'First Category Item' for field 'category' using the selector 'templates_id=50, has_parent=1110'.
Setting field 'category' = '1111'.

Page ID 1111 (or First Category Item) is the first Category page. I've also tried setting the category to 1137 within the CSV file and get the same result.

This is when using the below config:

JSON{
  "name": "Import",
  "input": {
    "type": "csv",
    "delimiter": ",",
    "header": 1,
    "limit": 10
  },
  "fieldmappings": {
    "model_id": 1,
    "title": 2,
    "category": 3
  },
  "pages": {
    "template": "model",
    "selector": "model_id=@model_id"
  }
}

The other two (text fields) work fine. Any advice would be appreciated!

Edit: I've now found your reference to page references in the wiki, and that changes everything! The default matching field for page references is the title, as you say. I've installed Autocomplete and it's working great. One task for later is figuring out the scheduling side of things. I did wonder, with page references, is it possible for pages to be automatically created if they don't already exist on import?

18 hours ago, DonPachi said:

I did wonder, with page references, is it possible for pages to be automatically created if they don't already exist on import?

I use page references heavily in my projects. Page Autocomplete has a field ("Settings specific to ...") on the Input tab of the field settings page that can be used to specify which fields are used during the query. You can even select multiple fields, e.g. a category_ref_by_id field can specify multiple ID fields. This way you can merge individual data sets into a single one: each source set can have its own ID, and the ...ref_by_id field can use all of them.

I have no plans for automatic creation of the missing referenced pages, but it can be achieved very easily. Just create another DataSet using the same CSV file and import the appropriate "category" columns to create the missing pages. You can also use the location attribute in the DataSet config to reference the file uploaded to the original DataSet (see the wiki) and avoid duplicate uploads; a sketch follows below.
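
Such a second DataSet config might look roughly like this (the page ID, filename and column number are hypothetical; the wire:// scheme is described earlier in this thread):

name: Import categories first
input:
  type: csv
  delimiter: ','
  header: 1
  location: 'wire://1234/models.csv'  # the file uploaded to the original DataSet page
fieldmappings:
  title: 3  # the CSV column holding the category name
pages:
  template: category
  selector: 'title=@title'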

If you need to perform these imports automatically, you can create two tasks (the category import and the original one) and specify a dependency between them (first import the categories, then the full data set). See the Tasker wiki.


It's been a while, but I just wanted to follow up with you on a project that's now in its final stages and say DataSet and Tasker are really exceptional, powerful modules, and definitely up there as my favourites for ProcessWire.

You really covered the edge cases with being able to set task dependencies, merges, overwrites etc., and while it took some time to get my head around, I now have a system that calls multiple tasks every hour via cron to pull fresh data from a specific set of CSV files.

Looking forward to hopefully working on another project that uses DataSet/Tasker!


Thanks for the feedback! I'm glad to hear that they are useful 🙂 although a bit complex to use.

Tasker has a few small improvements; I think I pushed the latest version to the GitHub repo.
DataSet changed a bit more, and some modified parts still need review and testing. Thanks for reminding me to finish them.

My DataSet project is still running. We have 150k+ (mostly complex) data pages interconnected with many references, and we're starting to hit the wall with MySQL during imports and complex page reference lookups.

