Search the Community
Showing results for tags 'performance'.
-
Hi there, while developing a side project built completely with ProcessModules, I suddenly had the urge to measure the performance of some modules. As a result, say welcome to the FlowtiAppPerformance module. It comes bundled with a small helper module called FlowtiModuleProfiler.

In the first release, even though you can select other modules, it will track the execution of selected Site/ProcessModules. This gives you the ability to gain insights into how your application behaves. The main module comes with two logging options: database or PW logs. Select database for charts and logs, or PW logs if you just want your profiles as a simple log file. You can also choose to dump the request profile into TracyDebugger, as shown here. (Don't wonder about my avg_sysload; somehow my laptop can't handle multiple VMs that well.)

Settings Screen
Monitoring
FlowtiLogs (again, don't look at the sysload)

I will update the module in the future to add some filter and aggregation options, but for now it satisfies my needs. I hope it is helpful for some. The module is submitted to the directory and hosted on GitHub: https://github.com/Luis85/FlowtiAppPerformance

Any suggestions, wishes etc. are much appreciated.

Cheers, Luis
4 replies · Tagged with: admin area, monitoring (and 2 more)
-
ProcessWire has a built-in cache system (enabled on a per-template basis). Why doesn't ProcessWire use the HTTP headers Last-Modified and If-Modified-Since when the cache is turned on? It seems that using these headers could increase performance, and it is quite easy to implement.
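For illustration, a minimal sketch of what conditional-GET support could look like in a template file, using the page's modified timestamp as the freshness proxy. This is an assumption about how one might bolt it on, not how the core template cache works:

// site/templates/basic-page.php, illustrative only
$lastModified = gmdate('D, d M Y H:i:s', $page->modified) . ' GMT';

if (isset($_SERVER['HTTP_IF_MODIFIED_SINCE']) &&
    strtotime($_SERVER['HTTP_IF_MODIFIED_SINCE']) >= $page->modified) {
    http_response_code(304); // client copy is still fresh, skip rendering the body
    return;
}

header("Last-Modified: $lastModified");
// ...render the page as usual...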
1 reply · Tagged with: performance, cache (and 1 more)
-
Hello, one of our sites is suffering from very slow boot times, and I'm not sure how to diagnose the problem. Here's a grab of the debug panel in Tracy Debugger after loading the homepage. I have a couple of questions:

1. Are all of the times listed separate items, or are some of them a breakdown? I ask because the number shown in the Tracy debug bar is the total of all of the items, but the wording suggests boot.load.modules, boot.load.fields etc. are a breakdown of boot.load.
2. How do I find out what these times consist of?

Currently, when using the site and when running page speed tools, the server load time is consistently upwards of 1s, often above 1.5s. This is before the browser even starts downloading resources; a quick grab from my Firefox dev tools was even worse. I would appreciate any advice on finding the cause here. A few details:

- Server is a Digital Ocean droplet (2GB memory + 2 CPUs) running nginx and PHP 7.0; neither memory nor CPU seems particularly taxed.
- Site has 8 locales.
- Using template cache and WireCache for heavy pieces of markup.
- We're on the latest dev branch; the speed issue has been present for the last couple of versions.
- The speed is similar when running locally (similar but stripped-back nginx config).

Thanks, Tom
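One way to dig further, sketched here under the assumption that the core Debug timer API (the same calls that produce those boot.* entries) is available; the timers only show up when $config->debug is on, and the timer name is made up:

// wrap suspect code in a named timer; it appears alongside the boot.* entries
Debug::timer('my-suspect-block');

$items = $pages->find("template=article, limit=50"); // code under suspicion

Debug::saveTimer('my-suspect-block', 'finding articles');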
-
Hey Ryan, hey friends,

we, Mobile Trooper, a digital agency based in Germany, use ProcessWire for an enterprise-grade intranet publishing portal which has been under heavy development for over 3 years now. Over the years not only the user base grew but also the platform in general. We introduced lots and lots of features thanks to ProcessWire's absurd flexibility. We came across many CMSs (or CMFs, for that matter) that don't even come close to ProcessWire. The closest we found were Locomotive (Rails-based) and Pimcore (PHP-based).

So this is not your typical ProcessWire installation in terms of size. Currently we count:

- 140 templates (some have 1 page, some have >6,000 pages)
- 313 fields
- ~15k users (for an intranet portal? that's heavy)
- ~195,431 pages (at least that's the current AUTO_INCREMENT)

I think we came to a point where ProcessWire isn't as scalable anymore as it used to be. Our latest research measured over 20 seconds of load time (the time PHP spent scrambling the HTML together). That's unacceptable, unfortunately. We've implemented common performance strategies:

- We're running on fat machines (the DB server has 32 GB RAM, the production web server has 32 GB as well; both are quad-core Xeons hosted on Azure).
- We have load balancing in place, but still, a single server needs up to 20 seconds to respond to a single request, averaging at about 12 seconds.

In our research we came across pages that sent over 1,000 SQL queries with lots of JOINs. This is obviously needed because of PW's architecture (one table per field), but does this slow MySQL down much? For the start page we need to get somewhere around 60-80 pages, and each page needs to be queried for ~12 fields to be displayed correctly; is this too much? There are many different fields involved, like multiple Page fields which hold tags, categories etc.

We installed Profiler Pro, but it does not seem to show us the real bottleneck; it just says that everything is kind of slow and sums up to the grand total we mentioned above. ProCache does not help us because every user sees something different, so we can cache some fragments, but they usually measure at around 10ms, and we can't spend time optimising if we can't expect an affordable benefit. Therefore we opted against ProCache and use our own module which generates these cache fragments lazily. That speeds up the whole page rendering to ~7 seconds, which is acceptable compared to 20 seconds but still ridiculously long. Our page consists of mainly dynamic parts changing every 2-5 minutes; it's different across multiple users based on their location, language and other preferences. We also have about 120 people working in the ProcessWire backend concurrently the whole day.

What do you guys think? Here are my questions; hopefully we can collect these in a wiki or something, because I'm sure more and more people will hit this wall sooner than they hoped they would:

- Should we opt for optimising the database? Since >2k queries per request is a lot even for a MySQL server, and the web server CPU is basically idling at that time.
- Do you think at this point it makes sense to use ProcessWire as a simple REST API?
- In your experience, what fieldtypes are expensive? Page? RepeaterMatrix?
- Ryan, what do you consider the primary bottleneck of ProcessWire?
- Is the amount of fields too much? Would it be better if we tried to reuse fields as much as possible?
- Is there an option to hook into ProcessWire's SQL builder, so we can write custom SQL for some selectors? (See the sketch below.)

Thanks and lots of wishes,
Pascal from Mobile Trooper
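Regarding the last question: there is no documented custom-SQL hook for selectors here, but one hedged workaround is to run the hot query yourself and hydrate pages from the resulting IDs. The `pages` table and `pages_id`/`templates_id` columns are PW's real schema; `field_tags` and the template id are illustrative:

// $database is ProcessWire's WireDatabasePDO (a thin PDO wrapper)
$query = $database->prepare("
    SELECT pages.id
    FROM pages
    JOIN field_tags ON field_tags.pages_id = pages.id
    WHERE pages.templates_id = :tpl
    LIMIT 80
");
$query->bindValue(':tpl', 57, \PDO::PARAM_INT); // hypothetical template id
$query->execute();
$ids = $query->fetchAll(\PDO::FETCH_COLUMN);

// hydrate full Page objects from the ids; returns a PageArray
$items = $pages->getById($ids);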
-
What I want to achieve is a simple counter that counts up on every visit (this is no problem) AND saves the specific date (year/month/day) of the count... in the end I want to be able to get visits per day/per month/per year in a nice and dirty graph. Just to have a much better simple counter system.

Should I go with a complex setup of pages like this:

--stats (home template for pageviews)
----2018 (year)
------08 (month)
---------29 ->page_views (integers on every day template)
---------30 ->page_views

Or simply use:

--stats (home template for pageviews)
---->count (template) that holds a simple page_views field and a date field

Or could a fieldtype like Tables (one table field per month/year or so) also be a solution? Or an own SQL table especially for this, used in a module (see the sketch below)? I don't have any experience on this topic... What do I have to keep in mind regarding the performance side effects of such a thing? Or is there a solution that already works with PW?

I want to go the hard way and implement something like this: http://stats.simplepublisher.com/ only directly within PW, and use the API to get the data... maybe create a simple module from it later. I don't know if I could set it up right from the start; this is the reason for my questions to more experienced devs.

Kind regards, mr-fan
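A minimal sketch of the custom-table route, assuming the $database API variable; the table name and schema are illustrative, not an existing module:

// create the table once (e.g. in a module's ___install())
$database->exec("
    CREATE TABLE IF NOT EXISTS page_view_counts (
        pages_id INT UNSIGNED NOT NULL,
        day DATE NOT NULL,
        views INT UNSIGNED NOT NULL DEFAULT 0,
        PRIMARY KEY (pages_id, day)
    ) ENGINE=InnoDB
");

// one cheap UPSERT per visit
$stmt = $database->prepare("
    INSERT INTO page_view_counts (pages_id, day, views)
    VALUES (:id, CURDATE(), 1)
    ON DUPLICATE KEY UPDATE views = views + 1
");
$stmt->bindValue(':id', $page->id, \PDO::PARAM_INT);
$stmt->execute();

// later, aggregate with plain SQL, e.g. visits per month:
// SELECT DATE_FORMAT(day, '%Y-%m') AS month, SUM(views) FROM page_view_counts GROUP BY month;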
7 replies · Tagged with: performance, counter (and 1 more)
-
A long but well-written, detailed and informative article by an engineering manager for Google Chrome about the true cost of JavaScript and what you can do to alleviate some of that cost. A must-read! https://medium.com/@addyosmani/the-cost-of-javascript-in-2018-7d8950fbb5d4
4 replies · Tagged with: javascript, performance (and 2 more)
-
Hello, I want to develop an eshop with ProcessWire + Padloper + any needed modules (e.g. ProCache and such). Right now the eshop is in CS-Cart, and I hate it. Will PW be able to support "proper" eshop features such as simple sales graphs, 1k+ products, shipping methods & costs, user registration, search history, order status, payment status (for bank transfers, for example) etc.? For example, I'm thinking that products and orders as sub-pages could be problematic (10k+ sub-pages).

I also want to know if it's actually worth it, or if it would be better to move to a bigger platform which has some of these features out of the box. Currently the problem with CS-Cart is a major lack of documentation, which limits me in writing custom features (connection with shipping providers, export of data via CardDAV and more). Also, CS-Cart is too complex to handle without documentation. I can manage and understand the code of PW, but CS-Cart is just too big. Of course the server is beefy enough to handle things even heavier than CS-Cart. My focus is on the development process and workflow (I've already set up CD from GitHub to a Docker container for a small website).

EDIT: Sylius and Thelia2 really caught my eye. Comments on them will be appreciated.
3 replies · Tagged with: performance, eshop (and 1 more)
-
Hello, I have encountered a random problem with my ProcessWire page. I have built a simple PHP file where I include the ProcessWire index.php, run one simple database query and echo the total execution time of the script. I call this file periodically to check the performance of the page. Most of the time the execution time is under 0.5 seconds, but sometimes it goes up to over 3 or 4 seconds. The database query itself takes very little time, so the problem seems to be in the code called in index.php. I checked my server and database metrics and they look perfectly fine. Has anybody experienced similar issues before? Or do you guys have any tips on how best to debug this issue? Thanks for your help in advance. BR Joscha
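For reference, a sketch of such a probe (the bootstrap path is an assumption; adjust to the install) that separates boot time from query time, which should show whether the spikes happen during boot or during the query:

<?php
// probe.php: boot ProcessWire, run one query, print the split timings
$start = microtime(true);

include '/var/www/site/index.php'; // bootstraps PW and exposes $wire

$booted = microtime(true);
$home = $wire->pages->get('/');    // one trivial query

printf(
    "boot: %.3fs, query: %.3fs, total: %.3fs\n",
    $booted - $start,
    microtime(true) - $booted,
    microtime(true) - $start
);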
-
Hey guys, I have two performance-related questions.

Question 1: find vs get

In my setup, I have a template called 'article', which is a child of 'articles'. There are two ways to get all articles. This:

$pages->get("/articles")->children;

or this:

$pages->find("template=article");

And if I wanted to further filter the articles by author, the queries would look like these:

$pages->get("/articles")->children("author=john");

and

$pages->find("template=article, author=john");

I think "get" would be faster in such situations. What are your thoughts? Are there any best coding practices for PW performance?

Question 2: URL routing

The article URLs on my old site are in this format: http://domainname.com/article-url. But in PW the URLs look like this: http://domainname.com/articles/article-url. For URL redirection, based on what I was able to find on the forums, I made the homepage accept URL segments. Then I route the requests based on the URL segment (which essentially is the 'name' of the article page). The question now is: how big of a performance hit will the site take due to this routing? Or am I needlessly thinking too much? Thanks,
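For question 2, a sketch of the home-template routing described above, assuming URL segments are enabled on the home template and article names are unique:

// site/templates/home.php
if ($input->urlSegment2) throw new Wire404Exception(); // only one segment deep

if ($input->urlSegment1) {
    $name = $sanitizer->pageName($input->urlSegment1);
    $article = $pages->get("template=article, name=$name");
    if ($article->id) {
        // render in place, or redirect to $article->url if canonical URLs are preferred
        echo $article->render();
        return;
    }
    throw new Wire404Exception();
}

The extra cost is essentially one find-by-name query per request, which is indexed, so the routing itself should be cheap.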
-
We have a big selector which we have broken down into 3 chunks to return a list of notes (pages) with repeaters, as follows. We also allow the user to filter the results. The problem we have is that the page currently takes nearly 10 seconds to process the results. Is there anything we can do to improve the performance of this? I wonder if it would be worth bringing the filters into each of the find()s. I assume that caching here wouldn't work due to the query string parameters?

$selector = "template=horse-note";

// Notes with unread comments (date order, most recent first)
$notes_with_unread_comments = $pages->find("{$selector}, h_notes_comments.count>0, h_notes_comments.{$session->unread_by}>0, sort=h_notes_last_comment");
//echo 'Notes with unread comments ('.count($notes_with_unread_comments).'):<br />'.$notes_with_unread_comments.'<br /><br />';

// Unread notes (date order, most recent first)
$notes_unread = $pages->find("{$selector}, {$session->unread_by}>0, sort=h_notes_last_comment");
//echo 'Notes unread ('.count($notes_unread).'):<br />'.$notes_unread.'<br /><br />';

// Read notes in the date order (most recent first) that they were either added or that the last comment was made, whichever is most recent
$notes_other = $pages->find("{$selector}, sort=-h_notes_last_comment");
//echo 'Notes other ('.count($notes_other).'):<br />'.$notes_other.'<br /><br />';

// create notes PageArray
$notes_total = new PageArray();
$notes_total->add($notes_other);
$notes_total->prepend($notes_unread);
$notes_total->prepend($notes_with_unread_comments);

// FILTER
// sanitize inputs
$horse = $sanitizer->text($input->get->horse);
$category = $sanitizer->int($input->get->category);
$from_date = $sanitizer->text($input->get->from_date);
$to_date = $sanitizer->text($input->get->to_date);
$comments = $sanitizer->int($input->get->comments);

// horse name
if($horse) {
    $selector .= ", parent.h_name%=$horse";
}

// note category
if($category) {
    $selector .= ", h_notes_category_id=$category";
}

// from date
if($from_date) {
    $selector .= ", h_notes_last_comment>=".strtotime("$from_date 00:00:00");
}

// to date
if($to_date) {
    $selector .= ", h_notes_last_comment<=".strtotime("$to_date 23:59:59");
}

// comments
if($comments) {
    $selector .= ", h_notes_comments.count>0";
}

// apply filter
if($selector != 'template=horse-note') {
    $notes_total = $notes_total->find($selector);
}

// slice PageArray according to pageNum
$pageNum = $input->pageNum;
$limit = 15;
$start = ($pageNum - 1) * $limit;
$notes = $notes_total->slice($start, $limit);
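It probably is worth it: building the filter clause first and passing it into each find() pushes the filtering into MySQL instead of filtering the merged PageArray in PHP. A sketch based on the code above (same sanitizing as before, only the order changes):

// build the filter clause first ...
$filter = '';
if($horse)     $filter .= ", parent.h_name%=$horse";
if($category)  $filter .= ", h_notes_category_id=$category";
if($from_date) $filter .= ", h_notes_last_comment>=" . strtotime("$from_date 00:00:00");
if($to_date)   $filter .= ", h_notes_last_comment<=" . strtotime("$to_date 23:59:59");
if($comments)  $filter .= ", h_notes_comments.count>0";

// ... then let MySQL do the filtering in each find()
$notes_with_unread_comments = $pages->find("template=horse-note{$filter}, h_notes_comments.count>0, h_notes_comments.{$session->unread_by}>0, sort=h_notes_last_comment");
$notes_unread = $pages->find("template=horse-note{$filter}, {$session->unread_by}>0, sort=h_notes_last_comment");
$notes_other  = $pages->find("template=horse-note{$filter}, sort=-h_notes_last_comment");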
-
Hello, I have a site running on PW 2.7.2 stable that is solely used for data logging. No frontend code at all. Every log entry is saved as a page. The template for the log pages has 5 fields; the channels field is a ProFields Table field with 4 columns (screenshots of the field setup and the stored log data omitted here). Apart from that there are only about 10 more pages and about 12 templates sharing 12 fields amongst them.

The logs come in from another PW site via a REST API and are stored every minute, so no big overhead there. At the moment only one log comes in every minute, but in the future there will be 100 or even 1,000 logs per minute. The logs will be stored in the DB for 1 month, then archived to CSV and deleted from the DB.

I am already experiencing quite slow page loading for the login screen and in the backend once there are more than 1,500 log pages. Loading the listing for the logs with Lister Pro also takes quite some time (about 15 seconds for loading and paginating approx. 2,000 logs). The site is hosted on a GoDaddy 2GB VPS at the moment, which hosts only one more site with no heavy traffic. From what I can see on the server side there is not much average CPU load, but quite some memory usage.

I wonder how performance will be once there are tens of thousands of log pages in the DB (I will know in about 2 weeks); only 1 log per minute already amounts to over 40,000 log pages a month. Do you think PW can scale well to about 400,000 - 1 million pages (on a better server then, of course)?
-
Just a small question on selector performance and room for improvement... I use a little function that renders all archive links like /2015/, /2014/ and so on: all years that are _used_ in the article system. This requires a find() without a limit... and there will be many posts/articles in the future. Right now there are about 40; it will quickly scale up to 100, and more will follow. So here is my question about caching the find query, or using other code.

Basic information: I work with a URL segment approach that gives me /year/ and /category/ overview pages. (Categories are not the problem here, since there a limit works and pagination is used.) Code from the year archive:

function renderArchive() {
    //check for url segments
    $seg1 = wire('sanitizer')->pageName(wire('input')->urlSegment1);

    //get all posts to check for years
    $posts = wire('pages')->find("template=artikel");

    //get homepage url for the links
    $homepageurl = wire('pages')->get('/')->httpUrl;

    //get article root page for links
    $artikel_root = wire('pages')->get(1068);

    //get year list
    $out = '<h4 class="subtitle">Archiv</h4>';
    $out .= '<div class="listbox1"><ul>';

    //Setup for the year sidemenu
    $years = array();

    // find the array of years for all events
    foreach ($posts as $y) {
        $dateY = $y->getUnformatted("publish_from");
        $years[] = date('Y', (int) $dateY);
    }
    $years = array_unique($years);
    arsort($years);

    $year = "";
    foreach ($years as $key => $year) {
        // Output the year
        $class = $year === $seg1 ? " active" : '';
        $out .= '<li class="'.$class.'"><a href="'.$homepageurl.$artikel_root->name.'/'.$year.'/">'.$year.'</a></li>';
    }

    //close list
    $out .= '</ul></div>';

    return $out;
}

Could this be done better, or cached somehow? (I'm not really experienced with caching such things.)

regards mr-fan
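One option, sketched under the assumption of PW 2.6+ where the WireCache $cache API is available: cache the rendered markup so the unlimited find() only runs on a cache miss.

// cache the rendered year list for an hour;
// renderArchive() above only runs when the cache has expired
$out = wire('cache')->get('archive-years', 3600, function() {
    return renderArchive();
});

Since the year list only changes when an article with a new year is published, even a long expiry is safe here.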
-
Having milliseconds retrieval time in the PDO querylog would shed light on an otherwise dark corner of performance monitoring. When the MySQL adapter transitioned to PDO, the query log timings were omitted from the log output. The older MySQLi logs included milliseconds elapsed, which was helpful in surfacing retrieval bottlenecks. Attempts to manually profile PDO queries often run up against a profiling_history_size of 100, a limit imposed by MySQL itself. That is to say that getting a complete list of timings over the course of serving a page isn't always easy. Having these added back to the querylog (if possible) would be extremely helpful.
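Until then, a stopgap sketch for timing a single query by hand from the API side with microtime(); the template id is illustrative:

$t = microtime(true);
$query = $database->prepare("SELECT id FROM pages WHERE templates_id = :tpl");
$query->bindValue(':tpl', 29, \PDO::PARAM_INT); // illustrative template id
$query->execute();
printf("query took %.2f ms\n", (microtime(true) - $t) * 1000);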
Tagged with: performance, debug (and 1 more)
-
Does anyone have a recommendation for creating a MySQL connection pool behind an Apache httpd server? Every invocation of PW on this system is slowed by about 1 second before any useful work is done, primarily spent establishing a database server connection. This is on a VPS, so admittedly it isn't fully under our control, but still...
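One avenue to try, sketched under the assumption that $config->dbOptions is passed through to the PDO constructor (persistent connections have their own failure modes, so test carefully): PDO's persistent-connection option in site/config.php.

// site/config.php
$config->dbOptions = array(
    \PDO::ATTR_PERSISTENT => true,                       // reuse connections across requests
    \PDO::MYSQL_ATTR_INIT_COMMAND => "SET NAMES 'utf8'", // keep the default init command
);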
Tagged with: database, performance (and 2 more)
-
Hi guys, just a simple question about a little textformatter that I use for replacing different text values. I'm not that experienced a PHP professional, so I'm asking before I get into possible trouble...

The setup is a "Glossar"-like page holder with glossar_entries of various types (abbr, internal link, external link). In my text fields I use pipes to mark a term from that glossary, so ||PW|| is great! I can preset ||Internal|| and ||External|| links and reuse them in every text block... if I change the glossary entry, every page that uses it changes automagically. So far so simple. (I've tested Autolink and bought ProFields... but I want to give the user the power to edit these entries and have more control over where these kinds of autolinks are used.)

Here is my first more advanced textformatter, and my simple question is: will this produce any overload or trouble if I have around 0-10 terms on a content page? The second question is about caching with textformatter replacements: how is that handled?

<?php

/**
 * ProcessWire TextformatterGlossary
 *
 * module made by mr-fan.
 * 15.09.15 basic class and wrapper configuration added
 *
 */
class TextformatterGlossary extends Textformatter {

    /**
     * getModuleInfo is a method required by all modules to tell ProcessWire about them
     *
     * @return array
     *
     */
    public static function getModuleInfo() {
        return array(
            'title' => 'Autolink from Glossar',
            'version' => 101,
            'author' => 'mr-fan',
            'summary' => "Allows to use tags in textareas to autolink to specific glossary links."
            //'href' => 'http://processwire.com/talk/topic/1182-module-image-tags/?p=57160',
        );
    }

    /**
     * Format the given text string.
     *
     * @param Page $page
     * @param Field $field
     * @param string $value
     */
    public function formatValue(Page $page, Field $field, &$value) {
        // use fast strpos check to make sure that $value contains pipes ||
        if (stripos($value, '||') === false) return;

        //get all terms in ||pipes|| in an array
        $matches = array();
        preg_match_all('/\Q||\E[^|]+\Q||\E/', $value, $matches);

        //the multidimensional array holds the single strings in the second array['0']
        foreach ($matches['0'] as $key => $match) {
            //look up the matched term in the glossary pages
            $entry = wire('pages')->find("template=glossar_item,title=$match")->first();
            if ($entry) {
                //entry is found in our glossar pages
                //rip the pipes
                $term = str_replace('|', '', $match);
                //set the replacement depending on the item type
                switch ($entry->glossar_type) {
                    case '1': //abbr
                        $replacement = '<abbr title="' . $entry->headline . '">' . $term . '</abbr>';
                        break;
                    case '2': //external link
                        $replacement = '<a rel="help" target="blank" href="' . $entry->extern_link->url . '" data-original-title="' . $entry->headline . '"><span class="fa-globe" aria-hidden="true"></span> ' . $term . '</a>';
                        break;
                    case '3': //internal link
                        //internal link needs to get the url
                        $internLink = wire('pages')->get("$entry->page_link");
                        $replacement = '<a rel="help" href="' . $internLink->url . '" data-original-title="' . $entry->headline . '">' . $term . '</a>';
                        break;
                    default:
                        $replacement = $term;
                }
                //works: the part inside the pipes is replaced for every match
                $value = str_replace($match, $replacement, $value);
            } else {
                //the entry for ||term|| is not found and gets rendered without pipes as normal text
                //rip the pipes
                $term = str_replace('|', '', $match);
                //replace the matches of ||term|| with the cleaned value
                $value = str_replace($match, $term, $value);
            }
        }
    }
}

Best regards mr-fan
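With 0-10 terms per page the per-match find() is probably harmless, but it does mean one database query per matched term. A hedged sketch of an alternative: fetch the glossary once per request and look terms up in a plain array (names as in the module above; this assumes titles are stored exactly as matched, e.g. "||PW||", which is what the find() above implies):

// build the lookup once, e.g. at the top of formatValue()
$glossary = array();
foreach (wire('pages')->find("template=glossar_item") as $item) {
    $glossary[(string) $item->title] = $item;
}

// then, inside the loop over $matches['0']:
// $entry = isset($glossary[$match]) ? $glossary[$match] : null;

As for caching: a textformatter runs whenever the field's output formatting is applied, so anything that caches the rendered markup (template cache, ProCache, MarkupCache) should cache the already-replaced output; edited glossary entries would then only show up once that cache expires.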
5 replies · Tagged with: textformatter, beginner (and 1 more)
-
Hello everybody, I'd like to use the images field as an upload UI. One images field would hold 1,000+ images, and I am wondering how this would perform in the backend. I only need the upload functionality; no need for thumbnail display, ordering etc. Does anyone have experience using an image or file field with very large quantities of images? Or is there an alternative upload UI for PW that stores all images in a single folder? EDIT: I don't need to upload 1,000+ images in one go.
-
Hi, I noticed on my sites and also on other ProcessWire sites that the time to first byte is quite long (> 500ms). Is there a way to speed up this waiting time without using plugins like ProCache? I benchmarked some sites from the showcase and from ProcessWire Weekly, and nearly all of them performed badly on First Byte Time. Is this related to ProcessWire, or just standard behaviour for PHP/CMS sites? Here are the tests:

First Byte Time - Grade F
URL: http://c-logistic.de/
Test: http://www.webpagetest.org/result/150601_J7_3992e9491b6acf72f73f10441880e6a2/
URL: http://von-bergh.de/
Test: http://www.webpagetest.org/result/150601_8S_cc7abdaa6519aae4904ea067ca5adf3c/
URL: http://www.canton.de/
Test: http://www.webpagetest.org/result/150601_7S_a8d3c3cf3d4f8ad2c3d3048a31864c81/
URL: https://www.maletschek.at/
Test: http://www.webpagetest.org/result/150601_42_8379a987c81ee2b8aa11f50b7a74694c/
URL: http://www.deichtorhallen25.de/
Test: http://www.webpagetest.org/result/150601_SS_90b428db03a86a5a309bd80aec294706/
URL: http://www.orkork.de/
Test: http://www.webpagetest.org/result/150601_VJ_63b76ec1ab9f37a93b9776320059a0a6/
URL: http://www.dojofuckingyeah.de/
Test: http://www.webpagetest.org/result/150601_J5_db3c199ddb74cfae513bbbe775b947c8/
URL: http://www.schloss-marienburg.de/
Test: http://www.webpagetest.org/result/150601_KY_01db3003b66dfad3b91aa40cbfda8f82/

First Byte Time - Grade C
URL: http://www.grupoolmos.com/
Test: http://www.webpagetest.org/result/150601_ZH_8b9a2af518a9ac0cb30130e2aa3d3f2d/
URL: http://www.pipistrello.ch/
Test: http://www.webpagetest.org/result/150601_RK_5cf4d97c7fe1d4db3a145c565a84069f/
URL: http://transformationswerk.de/
Test: http://www.webpagetest.org/result/150601_9D_f16e00a262ec9d30b19ff67a290216ca/

First Byte Time - Grade B
URL: http://brakhax2.com/
Test: http://www.webpagetest.org/result/150601_JE_6552783547f8f7b73945a4698af8f231/

First Byte Time - Grade A
URL: http://www.1815.ch/
Test: http://www.webpagetest.org/result/150601_Z5_f97b31533f61453aaeefc6cafef20e83/
URL: http://new.korona-licht.de/
Test: http://www.webpagetest.org/result/150601_RD_37ece6079e30ff5858c95db932f91e30/
URL: http://processwire.com
Test: http://www.webpagetest.org/result/150601_V0_c812f846f6555e506fbd2b8ba9efe2e6/

Thanks!
3 replies · Tagged with: performance, loading (and 1 more)
-
As some of you might have noticed, there has recently been a large "frontend performance talks offensive" (not only) by Google engineers. Here are some high-quality videos (content-wise) which I enjoyed very much and thought you might also be interested in:

A Rendering Performance Guide for Developers, by Paul Lewis
Performance Tooling, by Paul Irish
Your browser is talking behind your back, by Jake Archibald
Gone In 60fps – Making A Site Jank-Free, by Addy Osmani (http://addyosmani.com/blog/making-a-site-jank-free/)

Any suggestions for more interesting performance-related stuff are welcome!
14 replies · Tagged with: frontend, performance (and 1 more)
-
Hello, I'm in the process of building a web application with PW that delivers data to mobile clients. There will be up to 1,000 requests per minute to my web app (later maybe more). Every request triggers a search through up to 1,000 pages and compares timestamps sent by the mobile clients against timestamps saved with each page being searched. The timestamps are saved in PW in their own table in the DB, together with a page reference id, which makes searching pretty fast. For my search I use:

$ads = $pages->find("template=advertisement, ad_server=$serverID, ad_publish_at.date<$tsHigh, ad_publish_at.date>$tsLow");

I want to do some load testing for my web app to ensure it can handle that many requests per minute, and to optimize it further. What I need is a testing framework that lets me simulate hundreds of requests per minute. Have you ever done this, and what testing framework would you use? Here are some apps that I briefly took a look at:

http://jmeter.apache.org/
http://www.pylot.org/
https://code.google.com/p/httperf/
https://github.com/JoeDog/siege
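For a quick smoke test before setting up one of those tools, a rough sketch using PHP's curl_multi API; the endpoint URL and concurrency are placeholders, not the real app:

<?php
// loadtest.php: fire $concurrency requests at once and report timings
$url = 'https://example.com/api/ads'; // placeholder endpoint
$concurrency = 50;

$mh = curl_multi_init();
$handles = array();
for ($i = 0; $i < $concurrency; $i++) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 30);
    curl_multi_add_handle($mh, $ch);
    $handles[] = $ch;
}

$start = microtime(true);
do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh); // wait for activity instead of busy-looping
} while ($running > 0);

foreach ($handles as $ch) {
    $info = curl_getinfo($ch);
    printf("HTTP %d in %.3fs\n", $info['http_code'], $info['total_time']);
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);
printf("total wall time: %.3fs\n", microtime(true) - $start);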
4 replies · Tagged with: Performance, test (and 1 more)
-
Hi, coming from MODX (Evolution), I found one restriction (which was later solved in the Revo version): a page limit of between 2,000 and 5,000 pages, after which performance apparently took a nosedive. I never fully tested this, but it was a restriction I was always aware of, and it did prevent me from building a couple of sites. I would be interested to know whether there is a theoretical "page limit" in PW; alternatively, has anyone developed a site with 5,000+ pages, and how well does it perform? How does it affect caching? Thanks
12 replies · Tagged with: performance, page limit (and 1 more)
-
What is the difference between $a->add($item) and $a->append($item)? I know there are some speed improvements in 2.5.x (compared to 2.4), but is there room for more? I'm using caching in templates; I cannot use ProCache. Looking at the timers in debug mode I see:

boot: 0.2262
boot.load: 0.1925
boot.load.fieldgroups: 0.0506

What is going on between loading fieldgroups and load?
-
Hi all! I updated some of my PW installations (v2.5.3) and one is facing performance issues. A module on that site sends about 400 emails; it usually took 90 seconds. Now it takes more than 11 minutes. Before, it averaged about 5 mails per second; now it takes about 2 seconds per email. I didn't change the code, I just updated PW. That's how my module works:

1. Editor creates a page (Newsletter template)
2. Editor adds child pages (Section template)
3. Editor pastes all emails/subscribers into a textarea (and hits "dispatch")
4. Module grabs the Newsletter page ($mailtemplate = $newsletter->render()), using the template from #1, which "displays" all child pages as sections like a blog listing
5. Module makes some edits (converts paths to URLs, creates a plain-text copy, etc.)
6. Module constructs the email (subject and body) once, via Swift Mailer
7. Module sends each subscriber an email (foreach over an array of email addresses)

Within my foreach I only validate the email and save to a log file after sending. I really was surprised to see that big an increase. I have been sending these newsletters since June '14, and there was never anything above 180 seconds. Swift Mailer is configured to pause for 5 seconds after every 200 mails. SMTP is Mandrill, which allows a lot more mails to be dispatched at once, so there should be no problem there, I think.
-
Hey guys, I was wondering... does the number of used/installed modules affect site performance?
-
Hi guys, I have a category template and articles in different categories (articles template):

category1
- articles
category2
- articles
category3
- articles
- articles

I am listing all the articles via the selector "template=articles". But when a category is made unpublished or hidden, its articles are still displayed. Is there a way to get only the articles whose category is published, other than doing this?

$category = $pages->find("template=category");
$category_ids = $category->__toString();
$articles = $pages->find("template=articles, parent=$category_ids");

Thank you
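One note on why this happens: find() checks each page's own status, so articles under hidden or unpublished categories still match. The workaround above can be written slightly more simply, since a PageArray renders as a pipe-separated ID list inside a selector, making the manual __toString() unnecessary:

$categories = $pages->find("template=category"); // hidden/unpublished excluded by default
$articles = $pages->find("template=articles, parent=$categories"); // PageArray -> "id|id|id"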
-
Hello together! I am working on a new site with PW. It is just a little gallery site. In the past I used MODX for this, but since version 2 (Revolution) that CMS is pretty oversized and extremely slow. So I decided to give it a try with PW. So far so good. On my local dev environment a basic page needs about 1.1s to be delivered by the server (HTML only). A short and superficial check (= end_ts - start_ts) showed that "new ProcessWire()" took approx. 1s. Is this a "normal" loading time, or is there room for performance improvements? IMHO 1s seems a little long... :/ Thanks guys! Chris

UPDATE: Well... next time I should dig a little deeper. It is a documented problem with MySQL: mysqli needs about 1 second to connect to the database via 'localhost': http://www.borngeek.com/2011/04/05/mysql-performance-and-localhost-performance/ A switch from dbHost = 'localhost' to '127.0.0.1' fixed the problem.
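For anyone landing here, the fix amounts to one line in site/config.php:

// site/config.php
$config->dbHost = '127.0.0.1'; // connect via TCP; skips the slow 'localhost' resolution path described in the linked article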