
Leaderboard

Popular Content

Showing content with the highest reputation on 11/07/2019 in all areas

  1. Or just use Tracy Debugger's Mail Interceptor feature (by @adrian): https://adrianbj.github.io/TracyDebugger/#/debug-bar?id=mail-interceptor I always rely on it; it has never let me down so far.
    6 points
  2. If you plan to go live with validation email enabled anyway, it would also make sense to use a dummy SMTP server like FakeSMTP. That way, no mail is actually sent out but you have an instant preview, can save all mails to files, view those in your mail application and test the validation links inside.
    5 points
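     Building on the FakeSMTP tip above, here is a minimal sketch of a test mail sent through ProcessWire's WireMail API once your mail module (e.g. WireMailSmtp) is configured to use the local dummy server - host localhost and whichever port FakeSMTP is listening on. The addresses below are placeholders:

     // Assumes the configured WireMail module delivers via the local FakeSMTP instance.
     $mail = wireMail();
     $mail->to('test@example.com')
          ->from('noreply@example.com')
          ->subject('Registration validation test')
          ->body('If this shows up in FakeSMTP, outgoing mail is wired up correctly.');
     $numSent = $mail->send(); // returns the number of recipients the message was sent to

     echo $numSent ? 'Mail handed off to FakeSMTP' : 'Sending failed - check the SMTP settings';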
  3. Design: polimorf.de
     Development: muskaat.de
     They team up for tasks like that and I'm also part of Muskaat. So feel free to take a look at the references there. Located in Neumünster - close enough to Hamburg for meetings.
    4 points
  4. Welcome @Elko! From comments that I've seen here in the forums I know that there are people using PW successfully with millions of pages. So that might mean that there is nothing to worry about if you decide to use a page for every time slot for every user. But personally speaking, when I've had situations like this it has felt a bit wrong to use large numbers of pages to store such simple data, and I have used Profields Table for this sort of thing instead. This isn't based on any tests I've done - just on gut instinct. I have a feeling that if you need to regularly and quickly load and process large numbers of records then pages can be a bit less than optimal. @bernhard's RockFinder module might be useful to avoid loading full Page objects. Or if you're comfortable working with databases then there's no reason why you couldn't use an external database (or just an extra table in the PW database) together with the $database API - see the sketch after this item. This might give the fastest performance.
    4 points
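     Following on from the $database suggestion above, here is a rough sketch of reading time-slot records from an extra table through ProcessWire's $database API (a WireDatabasePDO instance, so standard PDO methods apply). The table slot_bookings, its columns and the date range are hypothetical - adjust them to your own schema:

     // Hypothetical table: slot_bookings (id, user_id, slot_start DATETIME, slot_end DATETIME)
     $sql = "SELECT slot_start, slot_end
             FROM slot_bookings
             WHERE user_id = :user_id
               AND slot_start >= :from
               AND slot_start < :to
             ORDER BY slot_start";

     $query = $database->prepare($sql);
     $query->execute([
         ':user_id' => $user->id,
         ':from'    => '2019-11-01 00:00:00',
         ':to'      => '2019-12-01 00:00:00',
     ]);

     // Plain arrays instead of Page objects - cheap to load and process in bulk
     foreach ($query->fetchAll(\PDO::FETCH_ASSOC) as $booking) {
         echo "{$booking['slot_start']} - {$booking['slot_end']}\n";
     }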
  5. Sanitizer EasySlugger
     Allows the use of the EasySlugger library as Sanitizer methods.

     Installation
     Install the Sanitizer EasySlugger module.

     Usage
     The module adds four new sanitizer methods.

     slugger($string, $options)
     Similar to $sanitizer->pageName() - I'm not sure if there are any advantages over that method. Included because it is one of the methods offered by EasySlugger.
     $slug = $sanitizer->slugger('Lorem Ipsum'); // Result: lorem-ipsum

     utf8Slugger($string, $options)
     Creates slugs from non-latin alphabets.
     $slug = $sanitizer->utf8Slugger('这个用汉语怎么说'); // Result: zhe-ge-yong-han-yu-zen-me-shuo

     seoSlugger($string, $options)
     Augments the string before turning it into a slug. The conversions are related to numbers, currencies, email addresses and other common symbols.
     $slug = $sanitizer->seoSlugger('The price is $5.99'); // Result: the-price-is-5-dollars-99-cents
     See the EasySlugger readme for some more examples.

     seoUtf8Slugger($string, $options)
     A combination of utf8Slugger() and seoSlugger().
     $slug = $sanitizer->seoUtf8Slugger('价钱是 $5.99'); // Result: jia-qian-shi-5-dollars-99-cents

     $options argument
     Each of the methods can take an $options array as a second argument.
     separator (string): the character that separates words in the slug. Default: -
     unique (bool): determines whether a random suffix is added at the end of the slug. Default: false
     $slug = $sanitizer->utf8Slugger('这个用汉语怎么说', ['separator' => '_', 'unique' => true]); // Result: zhe_ge_yong_han_yu_zen_me_shuo_3ad66c4

     https://github.com/Toutouwai/SanitizerEasySlugger
     https://modules.processwire.com/modules/sanitizer-easy-slugger/
    2 points
  6. I believe you can do this with http://modules.processwire.com/modules/page-image-manipulator/ Check the docs at
    2 points
  7. You should give this a try: https://processwire.com/talk/topic/5704-wiremailsmtp/?do=findComment&comment=191580 And also this (might be related to your hosting company not allowing 3rd party smtp services, like google, to send mail): https://processwire.com/talk/topic/5704-wiremailsmtp/?do=findComment&comment=162368
    2 points
  8. Your last screenshot says it's working. So... what's the (new) issue now? P.S.: there is no need for ASAP.
    2 points
  9. Thx, seems that I missed that.
    1 point
  10. This is on the top of my to-do list now.
    1 point
  11. Hi Teppo, I'm close to building my first-ever search engine in ProcessWire and wonder if you have made any progress on the multilanguage side of your cool module? Not a big deal anyway; otherwise I will try another approach. Thanks!
    1 point
  12. Wow, thank you mate for your fast response to my feature request!
    1 point
  13. Cool. I was totally unaware of this great feature. Will start using it today.
    1 point
  14. v0.1.6 released. Prompted by a feature request, the module now has a config option to include files from Repeater fields that are in the page being edited. Nested Repeater fields (files inside a Repeater inside another Repeater) are not supported. As with all my modules going forward, PW3 is now a requirement.
    1 point
  15. The above is off topic, sorry - I was getting push confused with the HTTP/2 feature of all resources being sent asynchronously over a single connection (which does happen out of the box). Yes, push is confusing and it seems there are no real standards or guidance on how to use it. I feel it is a pretty extreme optimisation, since after the first load things are cached anyway. What would be a candidate for push in the PW admin - assets from unloaded modules, image previews, or something else?
    1 point
  16. I didn't think of that; I just tried it through the user roles page and now it works as expected, thanks!
    1 point
  17. Lots of PW goodness in this one too: https://processwire-recipes.com/
    1 point
  18. Thanks @sz-ligatur for those screenshots - you are confusing two different features here, and the term "snippets" is probably what's causing the confusion. The snippets that can be stored in that linked js file are autocompletion snippets, triggered automatically when typing in the ACE code editor that the Console panel uses. I think for your purposes you should be using the SnippetRunner panel (https://adrianbj.github.io/TracyDebugger/#/debug-bar?id=snippet-runner) and storing each snippet as a separate *.php file under either /site/templates/TracyDebugger/snippets/ or /site/assets/TracyDebugger/snippets/ (see the sketch after this item for what such a file might look like). Does that help explain things?
    1 point
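     As a follow-up to the post above, here is what such a snippet file could look like. The filename and the code in it are purely illustrative, and it is shown here as a regular PHP file - if the panel expects raw API code without the opening tag (as the Console panel does), drop it:

     <?php namespace ProcessWire;
     // /site/templates/TracyDebugger/snippets/list-recent-pages.php (hypothetical name)
     // Lists the ten most recently modified pages when run from the SnippetRunner panel.
     foreach ($pages->find("sort=-modified, limit=10, include=all") as $p) {
         echo "{$p->id}: {$p->title} ({$p->template->name})" . PHP_EOL;
     }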
  19. Wow @OLSA thx for that great example! Creating PDFs via PHP (mpdf) definitely has some drawbacks: formatting is sometimes tedious since not all CSS rules are supported - for example, you can't use block elements inside table cells. Your method would also work with charts or other complex elements. Not sure how multipage would work, but I guess support for that via the @page directives should be possible (a minimal mpdf sketch follows after this item). @jmartsch For creating 1000s of reports you could use RockGrid's batcher. You could create a grid containing all elements to create a PDF from, and batcher would create the reports one-by-one with a progress bar and user feedback, similar to this:
    1 point
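     For comparison with the Puppeteer approach in the next item, here is a minimal sketch of the mpdf route mentioned above. It assumes mpdf has been installed via Composer ("composer require mpdf/mpdf"); the HTML and file name are placeholders:

     <?php
     require __DIR__ . '/vendor/autoload.php';

     $html = '
         <style>
             @page { margin: 20mm 15mm; } /* page setup via @page; long content flows onto extra pages */
             h1 { font-family: sans-serif; }
         </style>
         <h1>Monthly report</h1>
         <p>Report body goes here.</p>
     ';

     $mpdf = new \Mpdf\Mpdf(['format' => 'A4']);
     $mpdf->WriteHTML($html);                                      // render the HTML/CSS
     $mpdf->Output('report.pdf', \Mpdf\Output\Destination::FILE);  // write the PDF to disk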
  20. Hello, there is also an "alternate" solution using Node.js and Puppeteer with a headless browser. In this case, export to PDF is only one of the things that can be done with those tools (remote login, automated processing, deep testing, etc.). If you have Node.js on your machine, here is an example (Windows) where the project directory "printer" is on the C drive (C:\printer):

      C:\> mkdir printer
      cd printer
      npm i puppeteer easy-pdf-merge

      After this, all the required node modules (Puppeteer, the Chromium browser, Easy PDF) are inside the project directory. The last step is to create an index.js file and place it inside the project directory (C:\printer):

      // index.js
      const puppeteer = require('puppeteer');
      const merge = require('easy-pdf-merge');

      // configuration
      // *** EDIT THIS:
      var admin_url = "http://my_site.com/admin_url";
      var user = '*****';
      var pasw = '*****';

      // desired pages
      // *** EDIT THIS:
      var pdfUrls = [
          "/page/edit/?id=1054&modal=1",
          "/page/edit/?id=1016&modal=1",
          "/page/edit/?id=1019&modal=1",
          "/setup/field/edit?id=1#inputfieldConfig",
          "/setup/field/edit?id=1&modal=1#inputfieldConfig"
      ];
      var pdfFiles = [];

      // START
      async function main(){
          const browser = await puppeteer.launch({headless: true, args: ['--start-maximized']});
          const page = await browser.newPage();
          await page.setViewport({width: 1366, height: 768});

          // login
          await page.goto(admin_url, { waitUntil: 'networkidle0' });
          await page.type('#login_name', user);
          await page.type('#login_pass', pasw);

          // login submit
          await Promise.all([
              page.click('#Inputfield_login_submit'),
              page.waitForNavigation({ waitUntil: 'networkidle0' })
          ]);

          for(var i = 0; i < pdfUrls.length; i++){
              await page.goto(admin_url + pdfUrls[i], {waitUntil: 'networkidle0'});
              var pdfFileName = 'page' + (i + 1) + '.pdf';
              pdfFiles.push(pdfFileName);
              await page.pdf({path: pdfFileName, format: 'A4', printBackground: true, margin: {top: 0, right: 0, bottom: 0, left: 0}});
          }

          await browser.close();
          await mergeMultiplePDF(pdfFiles);
      };

      const mergeMultiplePDF = (pdfFiles) => {
          return new Promise((resolve, reject) => {
              merge(pdfFiles, 'processwire.pdf', function(err){
                  if(err){
                      console.log(err);
                      reject(err);
                  }
                  console.log('Success');
                  resolve();
              });
          });
      };

      // run all this and exit
      main().then(process.exit);

      *** Note: edit this script and enter your own login parameters and desired URLs. Finally, run the script (inside C:\printer>):

      node index.js

      After a while (for this example, ~10 sec.) you will find the PDF files in the project folder (the partials plus one merged file containing them all). As an example, here is an attachment. Regards.
      processwire.pdf
    1 point
  21. The issue that I raised on May 8 has now gone away. It was very probably caused by an entirely separate problem I was having with permissions, and thus has nothing to do with this module! I posted about the permissions issue at https://processwire.com/talk/topic/19266-page-edit-permission-not-working/.
    1 point
  22. Hi there, my form is not getting submitted; it shows: "Unable to verify successful email delivery of this form submission." I'm attaching a screenshot for your reference as well. In the backend it shows "Connection timed out with smtp.gmail.com". Please guide me on how to resolve this.
    0 points