About combicart
  1. combicart

    Ah, totally overlooked this option. Setting this to false solved the problem! Thanks!
  2. combicart

    I'm currently using the RSS Feed Loader module to import an RSS feed. In the feed there is a 'description' field that has HTML markup inside. By default the HTML markup is stripped out by the module. Therefore I've used the option below to render the description including markup:

    $rss->stripTags = false;

    After adding this option, the values in the database are still escaped, e.g. like:

    <h3 class="deprecated-h3">Description</h3> <p>Description</p>

    Would it be possible to save the description field directly into the database (including its HTML markup)?
  3. combicart

    I would like to import a description field including its HTML markup. In the RSS feed the markup is included; however, by default the markup is stripped by the RSS feed module. Therefore I've used the option below to render the description including markup:

    $rss->stripTags = false;

    After adding this option, the values in the database are still escaped, e.g. like:

    <h3 class="deprecated-h3">Description</h3> <p>Description</p>

    Would it be possible to save the description field directly into the database (including its HTML markup)?
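    Since the stored values end up entity-escaped, one possible workaround (a sketch only, not something the module documents) is to decode the entities with PHP's html_entity_decode() before saving the value to the page field. The property names used in the comments ($item->description, $p->body) are assumptions:

```php
<?php
// Hypothetical workaround: decode HTML entities that the feed import
// escaped, before saving the value to the page field.
$encoded = '&lt;h3 class=&quot;deprecated-h3&quot;&gt;Description&lt;/h3&gt; &lt;p&gt;Description&lt;/p&gt;';

// Convert the escaped entities back into real markup.
$decoded = html_entity_decode($encoded, ENT_QUOTES | ENT_HTML5, 'UTF-8');

echo $decoded; // <h3 class="deprecated-h3">Description</h3> <p>Description</p>

// In an import loop this might look like (field names are guesses):
// $p->body = html_entity_decode($item->description, ENT_QUOTES | ENT_HTML5, 'UTF-8');
// $p->save();
```

    Note that the target field also has to be one that allows HTML (e.g. a textarea without an entity encoder), otherwise the value will simply be re-escaped on save.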
  4. combicart

    Thanks, working perfectly!
  5. combicart

    I'm currently working on a website where I would like to fetch data from an RSS feed and save the items as actual pages inside ProcessWire. Ideally ProcessWire holds an exact copy of the RSS feed: when a new item is added to the RSS feed, the page is also added in ProcessWire, and when an item is removed from the RSS feed, the page is also deleted from ProcessWire. I'm using the following setup:

    1. Fetch the RSS feed with $rss = $modules->get('MarkupLoadRSS'); $rss->load('');
    2. Compare the RSS feed with the pages that are currently saved in ProcessWire: foreach ($rss as $item) { ... }
    3. If a page is in the RSS feed but not in ProcessWire, save the page.
    4. If a page is in the RSS feed and in ProcessWire, update its values with data from the RSS feed.
    5. If a page is not in the RSS feed but is in ProcessWire, delete the page.

    To get all the pages from the RSS feed, check whether a page exists and create it through the API if it doesn't, I'm using the code below:

    <?php
    include './index.php'; // bootstrap PW

    $rss = $modules->get('MarkupLoadRSS');
    $rss->load('');

    foreach ($rss as $item) {
        $p = $pages->get("job_id=$item->id"); // get the page by the id of the job
        if (!$p->id) {
            $p = new Page(); // create new page object
            $p->template = 'job-offer'; // set template
            $p->parent = wire('pages')->get('/vacatures/'); // set the parent
            $p->name = slugify($item->title); // give it a name used in the url for the page
            $p->title = $item->title; // set page title (not necessary but recommended)
            // added by Ryan: save page in preparation for adding files (#1)
            $p->save();
            // populate fields
            $p->job_id = $item->id;
            $p->save();
            // testing
            echo 'id: ' . $p->id . '<br/>';
            echo 'path: ' . $p->path;
        }
    }

    So far everything works and the pages get created. However, I'm having trouble with the logic for points 4 and 5:
    - When a page is in the RSS feed and has already been added to ProcessWire: is there a way to find a page and update its content through the API?
    - When a page is no longer in the RSS feed: is there a way to delete all pages that are not in the RSS feed? Thanks!
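    The update and delete steps (points 4 and 5 above) boil down to set operations on the job ids. A minimal sketch, using plain arrays so it runs standalone; the ProcessWire calls in the comments are illustrative and assume the same 'job_id' field as the code above:

```php
<?php
// Ids currently present in the RSS feed (collected inside the foreach loop):
$feedIds = ['101', '102', '103'];

// Ids already stored in ProcessWire; there this could come from something
// like $pages->find("template=job-offer")->explode("job_id").
$storedIds = ['101', '103', '104'];

// Point 5: anything stored but no longer in the feed should be deleted.
$toDelete = array_values(array_diff($storedIds, $feedIds));       // ['104']

// Point 4: anything present in both sets is a candidate for an update.
$toUpdate = array_values(array_intersect($storedIds, $feedIds));  // ['101', '103']

print_r($toDelete);
print_r($toUpdate);

// In ProcessWire terms the delete step might then look like (untested sketch):
// foreach ($toDelete as $jobId) {
//     $p = $pages->get("job_id=$jobId");
//     if ($p->id) $p->delete();
// }
```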
  6. combicart

    Thanks @FrancisChung and @adrian! I will check out both SimpleXML and the CSV parser to see which approach fits best for the XML feed. About the 1300 pages: yes, I've already looked into ways to speed up the import process. Unfortunately they only provide one large XML feed which contains both the old and the updated pages. They do provide a date and a UUID field inside the XML feed which I could use to check whether something has been updated or not. I totally agree with you that ideally only the changes are processed instead of the complete file again and again.
  7. combicart

    I'm currently in the process of setting up a new website. Most pages on the website will be imported from an XML feed. The feed itself is located at a URL and will be updated daily. The pages should basically be a copy of the XML feed, so depending on the import, pages may be created, updated or deleted. As far as I understand, there are a couple of ways ProcessWire could import the feed:
    - Through one of the importer modules
    - With the new JSON import function that is available from version 3.0.64
    - Through the API
    The feed itself is around 16Mb and contains about 1300 pages. Each page has around 10 images of 1Mb each (13Gb in total). Has anybody worked with this kind of setup, or does anyone have advice on the best way to start? Thanks!
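    Since the feed is around 16Mb, it may be worth parsing it node by node with PHP's XMLReader instead of loading the whole document into memory at once. A rough sketch; the element names (job, uuid, date) are made up and would need to match the real feed:

```php
<?php
// Stream a large XML feed one element at a time. A small inline document
// stands in for the real 16Mb feed here.
$xml = <<<XML
<?xml version="1.0"?>
<jobs>
  <job><uuid>a1</uuid><date>2017-01-01</date></job>
  <job><uuid>b2</uuid><date>2017-01-02</date></job>
</jobs>
XML;

$reader = new XMLReader();
$reader->XML($xml); // for the real feed: $reader->open('http://example.com/feed.xml');

$seen = [];
while ($reader->read()) {
    if ($reader->nodeType === XMLReader::ELEMENT && $reader->name === 'job') {
        // Expand only this node into SimpleXML; the rest of the file is
        // never held in memory at once.
        $job = simplexml_load_string($reader->readOuterXml());
        $seen[(string) $job->uuid] = (string) $job->date;
    }
}

print_r($seen);
```

    The uuid/date map could then be compared against the already-imported pages so that only changed items are re-processed on each daily run.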
  8. combicart

    Thanks BitPoet, I will use this setup to fetch the page id. The sanitization tip is also a good one!
  9. combicart

    Hello, I'm currently creating an events website. I've set up an 'events' page with a date, a start time and an end time, and I'm using these fields to print the event details on the page. However, I would like to add the option for visitors to download an ICS file so that they can quickly add the event to their calendar. I've found a library for the creation of the actual ICS files, created a new template file (download-ics) and used one of the examples for the template. When I browse to the url with the (download-ics) template, the ICS file is downloaded correctly. However, right now there are only fixed values inside the ICS file. How can I pass the variables from the actual event into the newly created (download-ics) template file so that the actual date and times can be used?
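    One way to get the values across (a sketch under assumptions, not the only option) is to pass the event page's id to the download-ics template, e.g. as a GET variable or URL segment, look the page up with $pages->get(), and build the ICS text from its fields. The ICS body itself is plain text, so it can be assembled with a small helper; the function and the start/end values here are placeholders:

```php
<?php
// Build a minimal ICS body from event values. In the download-ics template
// the arguments would come from the event page, e.g. (hypothetical):
//   $event = $pages->get((int) $input->get->id);
//   echo buildIcs($event->title, $startUtc, $endUtc);
function buildIcs(string $title, string $startUtc, string $endUtc): string
{
    // $startUtc / $endUtc in ICS UTC format, e.g. 20170601T180000Z.
    $lines = [
        'BEGIN:VCALENDAR',
        'VERSION:2.0',
        'BEGIN:VEVENT',
        'SUMMARY:' . $title,
        'DTSTART:' . $startUtc,
        'DTEND:' . $endUtc,
        'END:VEVENT',
        'END:VCALENDAR',
    ];
    // The ICS format requires CRLF line endings.
    return implode("\r\n", $lines) . "\r\n";
}

header('Content-Type: text/calendar');
header('Content-Disposition: attachment; filename="event.ics"');
echo buildIcs('My Event', '20170601T180000Z', '20170601T200000Z');
```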
  10. combicart

    I would also go for the Jumplinks module. One great feature is that you can mass upload a CSV file with all the pages that you would like to redirect. After that you can check how often the old link has been redirected to the new location and perhaps remove the redirect when it isn't being used anymore.
  11. combicart

    Thanks Robin, that worked perfectly!
  12. combicart

    Hello, I'm currently trying to implement a store locator on my website. To render the stores on the map, the stores are loaded from an XML file. To geocode the locations, I've installed the Maps Marker module from Ryan. So far so good; however, now I'm stuck at generating the actual XML file, especially at printing the individual stores in a foreach loop. For the XML file I've taken some inspiration from the sitemap.xml file. The stores all have a template called 'location'. Can I just load those locations with $pages->find("template=location")? The code that I have so far:

    <?php
    function renderLocation(Page $page) {
        return "\n<marker name=\"" . $page->title . "\" />";
    }

    function renderLocationsXML(array $locations = array()) {
        $out = '<?xml version="1.0" encoding="utf-8"?>';
        $out .= "\n<markers>";
        // Foreach loop?
        $out .= "\n</markers>";
        return $out;
    }

    header("Content-Type: text/xml");
    echo renderLocationsXML();

    The XML file has to be in the following format:

    <?xml version="1.0" encoding="utf-8"?>
    <markers>
      <marker name="Chipotle Minneapolis" lat="44.947464" lng="-93.320826" category="Restaurant" address="3040 Excelsior Blvd" address2="" city="Minneapolis" state="MN" postal="55416" country="US" phone="612-922-6662" email="" web="" hours1="Mon-Sun 11am-10pm" hours2="" hours3="" featured="" features="" />
      <marker name="Chipotle St. Louis Park" lat="44.930810" lng="-93.347877" category="Restaurant" address="5480 Excelsior Blvd." address2="" city="St. Louis Park" state="MN" postal="55416" country="US" phone="952-922-1970" email="" web="" hours1="Mon-Sun 11am-10pm" hours2="" hours3="" featured="" features="Online Ordering " />
    </markers>

    Could someone point me in the right direction on how to create the function / foreach loop to generate the marker data / locations inside the XML file?
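    A sketch of the missing loop, using plain arrays in place of Page objects so it runs standalone; in ProcessWire each entry would be a Page from $pages->find("template=location"), and the remaining attributes (lat, lng, category, ...) would be filled from the page's fields the same way as name:

```php
<?php
// Render one <marker/> element. Only the name attribute is shown; the other
// attributes would be appended the same way from the location's fields.
function renderLocation(array $location): string
{
    return "\n<marker name=\"" . htmlspecialchars($location['name'], ENT_QUOTES) . "\" />";
}

// Render the full document by looping over all locations.
function renderLocationsXML(array $locations = []): string
{
    $out = '<?xml version="1.0" encoding="utf-8"?>';
    $out .= "\n<markers>";
    foreach ($locations as $location) {
        $out .= renderLocation($location);
    }
    $out .= "\n</markers>";
    return $out;
}

header("Content-Type: text/xml");
echo renderLocationsXML([
    ['name' => 'Chipotle Minneapolis'],
    ['name' => 'Chipotle St. Louis Park'],
]);
```

    The htmlspecialchars() call is there because page titles can contain characters (quotes, ampersands) that would otherwise break the XML attribute.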
  13. combicart

    Hi Doolin, I'm not really sure if I understand you correctly, but normally I just use an image field set to '0' (no limit) for image galleries. You can use e.g. the description of the image for working with captions inside the gallery. With a foreach loop you can then generate the markup that is needed for the image gallery.
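    The foreach approach can be sketched as a small helper that turns image url/description pairs into the gallery markup; the markup and class names here are arbitrary, and in a template the pairs would come from the image field (e.g. $page->images, with $image->url and $image->description):

```php
<?php
// Turn a list of [url, description] pairs into gallery markup, using the
// image description as the caption.
function renderGallery(array $images): string
{
    $out = "<ul class='gallery'>\n";
    foreach ($images as $image) {
        $out .= "<li><img src='{$image['url']}' alt='{$image['description']}'>"
              . "<span class='caption'>{$image['description']}</span></li>\n";
    }
    $out .= "</ul>\n";
    return $out;
}

// Example data standing in for a page's image field:
echo renderGallery([
    ['url' => '/site/assets/files/1017/a.jpg', 'description' => 'First photo'],
    ['url' => '/site/assets/files/1017/b.jpg', 'description' => 'Second photo'],
]);
```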
  14. combicart

    Hello Makari, when you would like to redirect an old url to a new url, you need to include the complete url for the new location (including http://), e.g. (with example.com standing in for your domain):

    Redirect 301 /old_path/old_sub_path http://www.example.com/new_path

    In general it's also good practice to redirect users from the top-level domain to its www. variant (or the other way around, from www. to the top-level domain). You can set this up with mod_rewrite. The example below redirects users from the non-www domain to the www version:

    RewriteEngine On
    RewriteCond %{HTTP_HOST} !^www\. [NC]
    RewriteRule ^ http://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

    Another option is to install one of the available redirect modules to handle the redirects from inside ProcessWire.
  15. combicart

    Thanks for checking it out. Uploading images isn't the problem. At first the image is added to the page correctly and everything works. The problem arises the next day (I have the feeling it happens somewhere during the night), when the image is removed. This only happens when uploading the image through CKEditor; when uploading images through the 'Media' page in the sidebar, the images aren't removed automatically. Yes, that's indeed the case: the image is added to the assets/files/[id of media library page] folder. And yes, that's also the case: in the page, the id from the media library is used. The image is there after I upload it, and there are two images (the original file and a thumbnail version). The following modules are installed on the website:
    - ProCache
    - Form Builder
    - Profields (Table)
    - MarkupSEO
    - MarkupSimpleNavigation
    - TextformatterGoogleMaps
    - TextformatterVideoEmbed
    - TextformatterSrcSet (but it's not enabled inside the body field where the images disappear)
    To be safe, I've added a screenshot of the settings that I've used for the body field. Might the 'HTML Options' have something to do with it? If you want, I can give you temporary access to the website.