Popular Content

Showing content with the highest reputation on 02/26/2013 in all areas

  1. Good morning little app builder ninjas! As promised yesterday, I will point you to the important forum posts on how you could build your own app in ProcessWire. Are we all seated? First we assume that our app is separated into two main parts: the frontend with a custom login, and the backend for app administration and the user dashboard. Before we begin, follow this link: http://processwire.com/api/cheatsheet/ Say hello to the ProcessWire API Cheatsheet made by Soma; whenever you meet Soma, just give him a hug for this outstanding work. The Cheatsheet will definitely be your best and maybe your most intimate friend during the development process. Make yourself familiar with it, and make sure you have activated the advanced view. OK, let's begin. Before coding a single line in our editor we have to think about our app. What should the app do? Which functionality do we need? Maybe you mock up your first design, but do not start to code! Think about your app, think about structuring your data, fields, templates and folder structure, and, believe me (really important), think about naming conventions. A little example from one of my projects. The app handles three use cases (frontend, backend administration and client dashboard), maybe a normal setup. I started to think about naming template files and folders, and decided to name the folders like this:

includes-backend
  --admin
    --views
    --scripts
  --client
    --views
    --scripts
includes-frontend
  --views
  --scripts

Template files are named like this:

backendHome.php
backendMyData.php
home.php
homeNormalContent.php
homeLogin.php

Think about naming your page tree; this is maybe the first step in data structuring you do. Sure, you could just start coding, but do you know how large your app could possibly grow if you haven't really thought it through? So do yourself a favour and write down your thoughts.
I'm currently working on an app which contains around 300 files of script logic and output forms; editing without any logic in the naming would definitely be a mess. OK, we have thought a lot about our app in theory, what now? We need a custom login! You will find some useful snippets and logic about logging in from the frontend in these posts:

http://processwire.com/talk/topic/1716-integrating-a-member-visitor-login-form/#entry15919
http://processwire.com/talk/topic/107-custom-login/ (I can really recommend Renobird's login thoughts and his script)

Read carefully through these posts; they contain almost everything you have to consider for frontend user management. You are now at a point of using ProcessWire where you definitely receive input from users, so think about security and make yourself familiar with $sanitizer. Never ever work with user-generated data without sanitizing it! There are two main headlines you always have to remember:

1st: DO NOT SAVE PASSWORDS IN PLAIN TEXT IN YOUR APP
2nd: DO NOT TRUST INPUT YOU RECEIVE FROM USERS

Calm down, little ninja. Ryan just gave you a strong and powerful tool. He gave you Excalibur and named it just "API". Follow these links:

http://processwire.com/talk/topic/352-creating-pages-via-api/ (creating content via the API)
http://processwire.com/talk/topic/296-programmed-page-creation-import-image-from-url/ (image handling)

That's it.
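As a minimal sketch of the kind of frontend login the linked posts describe (the dashboard URL and form field names are placeholders of mine, not from those posts; this assumes it runs inside a ProcessWire template where $input, $sanitizer and $session are available):

```php
<?php
// Minimal frontend login sketch using the ProcessWire API.
if($input->post->username && $input->post->pass) {
    // never trust raw user input: sanitize the username first
    $username = $sanitizer->pageName($input->post->username);
    // $session->login() verifies against the stored hash internally,
    // so the plain-text password is never saved anywhere
    $user = $session->login($username, $input->post->pass);
    if($user) {
        $session->redirect('/dashboard/'); // hypothetical dashboard page
    } else {
        echo "Login failed.";
    }
}
```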
    9 points
  2. Hey guys! Our first PW project is online, finally: www.typneun.de So, please do not look into the code TOO closely - we are designers, not coders, OK? We started to develop this little portfolio site about 2 years ago, but didn't find time to finish it, so it took nearly 1.5 years to get it done, till Oct 2012. That's why much of the stuff is not perfectly coded right now. The site was developed with static HTML files - until tonight, when I launched the PW website. Modules we are using:

procache
multilanguage/german
redirects
formbuilder (not yet, will come soon)

Actually this site was our second one on PW, but the first one is still in development and will go online in a few days. The experiences we made with that first site and the help I found in this forum made me switch to ProcessWire for our own website. So, thanks to all of you guys! Any ideas or criticism to make the site better? Comments are welcome...
    6 points
  3. @Pete: have tested it on Windows with the zip-extension; in the main it works very smoothly! Here is what I have found:

A typo in line 271: you check for the backup folder to exclude itself with strpos($file, 'ScheduleBackups\backups'), but in site->modules there is no 'ScheduleBackups', only 'backups' for now.

When starting a backup with a zip name that already exists, the existing file will be used and updated; this is by design / the behaviour of the zip-extension (it was the first time I used it). Could be useful, could be not, just wanted to note it.

The (memory) bottleneck is here: $zip->addFromString(basename($source), file_get_contents($source)); because reading a whole file into memory and passing it over to the zip function, where it gets compressed, can lead to memory consumption of 3-4 times the filesize! When testing with 64MB of available memory for PHP, I was not able to add a file bigger than 14MB. It crashes Apache for a second! Maybe there are functions in the zip-lib that allow passing files to the archive in chunks; I haven't checked at php.net. Maybe one could try to increase available memory with ini_set(). Also I do not know how it behaves with memory usage when using system calls on Unix; probably there is a limit there too. You can compare the PHP function memory_get_usage() against ini_get('memory_limit') to stay up to date on available memory resources; I use a little helper class for that.

For now there is no output to screen when doing a manual backup. If you want to provide it as simple text output for every file or directory passed to the archive, you can disable output caching with these directives:

if(function_exists('apache_setenv')) @apache_setenv('no-gzip', '1');
@ini_set('zlib.output_compression', 'Off');
@ini_set('output_buffering', '0');
@ini_set('implicit_flush', '1');
@ob_implicit_flush(true);
@ob_end_flush();
echo 'some info';
@ob_flush();

These ones may be useful too:

set_time_limit(0);
ignore_user_abort(true);

Everything else is perfect to me. As I've said above: it runs very smoothly! This is a must-have!
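For what it's worth, the memory issue described above can be sidestepped with ZipArchive::addFile(), which registers the file by path and lets the zip extension stream it from disk when the archive is closed, instead of pulling the whole file through file_get_contents(). A minimal sketch (paths are placeholders):

```php
<?php
$zip = new ZipArchive();
$zip->open('/tmp/backup.zip', ZipArchive::CREATE | ZipArchive::OVERWRITE);

$source = '/path/to/big-upload.bin'; // placeholder path

// memory-hungry version (3-4x the filesize in RAM):
// $zip->addFromString(basename($source), file_get_contents($source));

// lighter alternative: the extension reads the file itself on close()
$zip->addFile($source, basename($source));

$zip->close();
```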
    3 points
  4. I think there are a lot of things for me to consider with memory usage, and I know one website of mine might struggle as there are a few uploads that are over 100MB. In those cases, though, there should at least be enough memory to have uploaded the file in the first place, so I might be able to do something like this: when it gets to backing up the /assets/files dir, it checks the max size of any file fields in PW first to get an idea of how big files might be, then iterates through them X files at a time depending on that and what the PHP environment will allow, flushing anything in memory as it goes. The problem with something like that is it makes the process slower, but on the bright side it is a good opportunity to feed data back to the browser and show some sort of progress ("processing 1-10 of 256 pages" or something like that). Some folders I actually need to make it skip are /assets/sessions and /assets/logs, as either of those could have numerous/large files that aren't necessary to back up. I get the feeling the system command for Linux actually won't have a memory problem, simply because it's like running it at the command line in a shell window (sorry Linux folk, I'm sure my terminology is all over the place). The obvious problem there is that the actual page could well time out, but the command will keep running in the background, so you would have a hard job knowing if it's ready when run manually. I think I can assume that, aside from the site/assets/files directory, everything else can be backed up in a matter of seconds in terms of files. Even with 100 modules installed, they're all small files. Therefore, if I have it give feedback once it's backed up the /wire directory as a whole (as that should be a standard size more or less), then the /site directories one at a time, we can work it like that.
It will actually give me a headache, as I need to run more commands for Linux, but I know you can get it to pipe the successful results to a script even then. So I think for both Linux and Windows, if it sends progress back to a database table specifically for backups, we can easily poll that table every few seconds using AJAX and show which backups are in progress and which are complete, as you get when running a backup via cPanel. Tables are another area where I will have to think about the number of rows, I guess, to make sure it's not trying to do too much at once; maybe iterating through the tables one at a time, checking the number of rows and then splitting them if required would be the way to go there. It's all getting rather more complicated than I had originally intended the more I think about it, but I can hopefully make it better as a result. What I do know is that examples of code from the internet are really helping prevent me from re-inventing the wheel - hurrah for the Open Source community!
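The memory-aware iteration idea above could be sketched like this; the helper names and the 4x safety factor are illustrative, not from the module:

```php
<?php
// Convert a php.ini shorthand memory_limit ("64M", "1G") to bytes.
function memoryLimitBytes() {
    $limit = trim(ini_get('memory_limit'));
    if($limit == '-1') return PHP_INT_MAX; // no limit configured
    $bytes = (int) $limit;
    switch(strtoupper(substr($limit, -1))) {
        case 'G': $bytes *= 1024; // fall through
        case 'M': $bytes *= 1024; // fall through
        case 'K': $bytes *= 1024;
    }
    return $bytes;
}

function memoryAvailable() {
    return memoryLimitBytes() - memory_get_usage();
}

$file = '/path/to/upload.bin'; // placeholder
// compression can cost several times the filesize, so leave headroom
if(filesize($file) * 4 < memoryAvailable()) {
    // safe to read the file into memory (e.g. addFromString)
} else {
    // handle it another way: addFile(), chunking, or skip with a warning
}
```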
    2 points
  5. There are several options, I guess. Have a look/read:

http://processwire.com/talk/topic/1001-pagebreak-or-multi-page-on-single-entry/
http://processwire.com/talk/topic/1516-cutting-long-content-into-url-segments-with-html-tag/

I'm not sure if there's talk about making TOCs in those threads, but I think that would be fairly easy to code.
    2 points
  6. Keep the suggestions coming - there's a way to go yet before I'll add some of them, but they're all welcome and will be considered once the basics are in place. Step 1 is definitely to see if I can get backups working in multiple environments, so attached is version 0.0.1. Consider it alpha and use it at your own risk, but I can't see why it would harm your installation. Just giving you the obligatory "you have been warned" speech.

To install: unzip the file, stick the folder in your /modules directory, install the module, set the retention period and read the instructions below the retention drop-down in the module config to manually run a backup for now. Backups are stored in /site/modules/ScheduleBackups/backups, in /site and /db folders respectively, so you can monitor those. There is currently no "backup succeeded" message or anything like that printed when it's done - just wait for the browser to stop loading the page for now, or for the zip/tar file to stop growing.

Some things to note:

It does an OS check. This is because Windows can't run system() commands, as they're a Linux thing, and if it finds you're running Windows it uses the ZipArchive class built into PHP to back up the site and simple mysqli queries to back up the database.

If it doesn't detect you're on Windows and can run system() commands, then it does that, which, from my past experience, is far quicker (plus it makes for nifty one-liner backup commands). It does some detection to see if safe mode is on and whether it can run system commands before backing up, so if safe mode is on or it can't run those commands, it falls back to using the functions mentioned in point 1.

During installation, a unique hash is created and saved - this is because when we set it to run via a cron job/scheduled task, we need a way of creating a backup without a logged-in superuser in attendance.
The backup uses a URL that I doubt you would have as a page on your site (/runbackup) as well as this hash, so it is extremely unlikely anyone will try to bring your server down by spamming that URL. Further checks will be added in later versions so it can't run more than once a day anyway, or something like that. Also, if anyone who is a better programmer than me wants to contribute, please feel free - some of what I've written is likely amateurish in places. Kudos to David Walsh for his excellent MySQL backup script, and to this excellent function on StackOverflow about recursively zipping folders, which saved me from re-inventing the wheel in those two areas. ScheduleBackups.zip
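The sort of one-liner backup commands mentioned above might look like this when handed to system() on Linux; paths, database name and credentials are all placeholders, and this is a sketch rather than the module's actual code:

```php
<?php
$root = '/var/www/yoursite';
$dest = '/var/backups/site-' . date('Y-m-d') . '.tar.gz';

// archive the whole site directory in one command
system('tar -czf ' . escapeshellarg($dest) . ' -C ' . escapeshellarg($root) . ' .');

// dump and compress the database in one pipe
system("mysqldump -u dbuser -p'dbpass' dbname | gzip > /var/backups/db-" . date('Y-m-d') . ".sql.gz");
```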
    2 points
  7. I missed the XML sitemap generator that I used in a previous CMS, so I built my own module to achieve the same functionality. This module outputs an XML sitemap of your site that is readable by Google Webmaster Tools etc. I've generally found that using one in combination with Webmaster Tools etc. reduces the time it takes for new sites and pages to be listed in search engines (since you're specifically telling the service that a new site/new pages exist), so I thought I may as well create a module for it. The module ignores any hidden pages and their children, assuming that since you don't want these to be visible on the site, you don't want them to be found via search engines either. It also adds a field called sitemap_ignore that you can add to your templates to exclude specific pages on a per-page basis. Again, this assumes that you wish to ignore that page's children as well. The sitemap is accessible at yoursite.com/sitemap.xml - the module checks whether this URL has been called, outputs the sitemap, then does a hard exit before PW gets a chance to output a 404 page. If there's a more elegant way of doing this, I'll happily change the code to suit. Feedback and suggestions welcome. On a slightly different note, I originally wanted to call the file XMLSitemap, so as to be clearer in the filename about what it does, but if you have a module that begins with more than one uppercase letter, a warning containing only the module name is displayed on the Modules page. So I changed it to Sitemap instead, which is fine, as the description still says what it does. The file can be downloaded via GitHub here: https://github.com/N.../zipball/master
    1 point
  8. Hey all. Since the topic of backups comes along every so often, I decided to write a module to encapsulate some code I use for backups on Linux installs. It's not quite ready yet, as I want a fallback option for Windows as well as an option for Linux that will work on shared hosting, where you usually can't run system commands, but it is 80% complete for stage 1 and Linux backups work nicely. The general idea is that you set the number of days to store backups (I standardised it to 1 day, 3 days, 1 week, 2 weeks, 1 month, 3 months, 6 months and 1 year, rather than having it as an integer field, because I think these fit the most common scenarios and I wanted to have a dropdown in my module config too). It defaults to 1 week, but depending on the size of the site and how much space you have, you might want to increase or decrease the retention period. The idea is that you are given a URL with a unique hash (generated at install) which you then pass off to a cron job or Windows Scheduler, and this generates the backups. It will be expanded on once I've got the backups working across different environments, but the initial plan is to release a version that simply backs up the site and database, then a version that has a page where you can download backups from, as well as an option to FTP/sync them to another server. I don't want to tackle restores, though, as this would be difficult - you are logged into the admin whilst running the restore, so I think the first thing it would do when it has restored the database is log you out; plus I don't want to make assumptions about replacing a user's /site/ folder, so I think restores require some manual intervention, to be honest.
An alternative would be to do this anyway, but rename your /site/ folder and take another database copy before restoring; but then I'm getting into the realms of trying to be too clever and anticipating what people are trying to do, and ProcessWire is all about not making assumptions. Fortunately I have access to a site with several gigs of uploaded files as well as a reasonably large database, so I should be able to monitor how well it handles that on Linux and Windows; smaller sites shouldn't take more than a minute to back up, often a matter of seconds. I shall keep you posted.
    1 point
  9. I was wondering if maybe the ProcessWire site could have a directory of contractors that work in ProcessWire kind of like how the ModX site has. I think it would be a great way to make connections and spread the word about ProcessWire.
    1 point
  10. Great idea and module Pete! Regarding the backup directory, ProcessWire only prevents direct access to .php, .inc and .module files. As a result, it's still possible for some files to be accessible. But this is easy to get around. Just make your backups directory start with a period, like "/site/assets/.backups/" and then the directory will be completely blocked from web access.
    1 point
  11. I've always had good luck with ImageMagick, but didn't implement it in ProcessWire because it's not nearly as universal as GD2 and requires an exec(), which is something I didn't want the core to rely on. But I would totally support inclusion of other libraries for resizing. Admittedly, I've been pretty happy with the quality of GD2's output (at least, after tweaking the quality settings), but if there are visible quality or size benefits to NETPBM or ImageMagick, then it would seem to make sense. Attached is the ImageSizerInterface. If you make your NETPBM class implement this interface, then I can convert ImageSizer to be an injected dependency, enabling the image sizer library to be configurable per installation. The crop options are a relatively recent addition, and I don't think that many people are using them or know about them, so if they're difficult to implement, they aren't crucial to have at this point. ImageSizerInterface.php
    1 point
  12. Like Joss said, you can create your own date field(s) and have them automatically format however you want. This is configured from your date field settings. But 'created' and 'modified' are built-in date fields that have no default formatting options. However, they are still very easy to output in the manner you asked about, using PHP's date() function. For instance, if you wanted to output 'created' in the DD.MM.YYYY format, you would do this:

echo date('d.m.Y', $page->created);
    1 point
  13. Great website. It really took me some minutes to explore the content. Your texts are a pleasure to read. I have to bookmark the site as a good example. And your loading speed is good - not the typical overloaded jQuery page with megabytes of scripts and requests. Another dead link: http://www.typneun.de/leistungen/ Under the FAQ question about the AGBs, your link to the AVG just throws a 404.
    1 point
  14. I was looking for a jQuery carousel and stumbled upon this cool one called Roundabout. It has lots of options! http://fredhq.com/projects/roundabout Found via this article: http://www.tripwiremagazine.com/2012/12/jquery-carousel.html
    1 point
  15. I really like your site Georgson, gives a very professional image of your company. I also enjoyed reading the German, makes me want to start relearning it! Also: Did ProCache make a big difference to how your site loads?
    1 point
  16. Clear and clean layout and good presentation of your business.
    1 point
  17. You can use the Page link abstractor module to prevent this from happening http://modules.processwire.com/modules/page-link-abstractor/
    1 point
  18. Hello ProcessWire folks, today I want to show how to integrate piwik into ProcessWire to show some specific site details to visitors or, if you create a special frontend login, to yourself.

First step (preparation): What do we need? Well, to grab piwik data we need the piwik open source analytics suite. Grab your copy here: http://piwik.org/ Now copy piwik right into your ProcessWire folder, so you get a folder structure like: piwik, site, wire. Next, install piwik! After installation, log into your piwik dashboard and set up your site. We will need some information from piwik later in the tutorial, so just leave the piwik dashboard open in the background. Now we have to think about data presentation. I'm currently working on a web app to let customers create their own landing pages and measure their performance; to show the data I've chosen morris.js, a good-looking JavaScript library based on raphael.js and jQuery. So we need jQuery, morris.js and raphael.js:

jQuery: http://jquery.com/
morris.js: http://www.oesmith.co.uk/morris.js/
raphael.js: http://raphaeljs.com/

Install all three scripts in your ProcessWire templates folder, and don't forget to copy the provided .css file, too.

Second step (connect to the piwik API): To grab data from piwik, we have to include the piwik API in our ProcessWire installation, and we need to make some special settings. We need two pieces of information about our piwik installation:

1st: our own Auth Token
2nd: the SiteId of our tracked page

You can find your Auth Token in the piwik dashboard: just click on API in the head menu and there it is, in the little green box. Now we need to know the site id of the page we want to show in our template files. If you only set up one page in piwik, the SiteId is 1. If you set up multiple sites, click on "all websites" in the head menu, then click on "add new site"; now you can see all your pages and their site ids. Remember where to find this information, we will need it in the next steps.
Third step (templating and API connection): Go to your PW admin and create a new template. We don't need any fields so far, so title and body will be enough. Now create a new page with your new template. That's it on the point-and-click side; we now have to open our favourite ninja code editor. Create a new file in your templates folder named like the template you created. Then create a new file in a subfolder of your templates folder (in scripts or includes if they exist, wherever your include files live) and name it analyticSettings.php. This is our piwik API connector. Copy and paste the following code:

<?php
// if you don't include 'index.php', you must also define PIWIK_DOCUMENT_ROOT
// and include "libs/upgradephp/upgrade.php" and "core/Loader.php"
define('PIWIK_INCLUDE_PATH', realpath($config->paths->root.'/piwik'));
define('PIWIK_USER_PATH', realpath($config->paths->root.'/piwik'));
define('PIWIK_ENABLE_DISPATCH', false);
define('PIWIK_ENABLE_ERROR_HANDLER', false);
define('PIWIK_ENABLE_SESSION_START', false);
require_once PIWIK_INCLUDE_PATH . "/index.php";
require_once PIWIK_INCLUDE_PATH . "/core/API/Request.php";
Piwik_FrontController::getInstance()->init();

function piwikRequest($piwikSiteId, $piwikMethod, $piwikPeriod, $piwikDate, $piwikSegment, $piwikColumns, $piwikColumnsHide) {
    $piwikGetData = new Piwik_API_Request('
        method='. $piwikMethod .'
        &idSite='. $piwikSiteId .'
        &period='. $piwikPeriod .'
        &date='. $piwikDate .'
        &format=php
        &serialize=0
        &token_auth=AUTH TOKEN
        &segment='. $piwikSegment .'
        &showColumns='. $piwikColumns .'
        &hideColumns='. $piwikColumnsHide .'
    ');
    $piwikGetData = $piwikGetData->process();
    return $piwikGetData;
}
?>

The define() and require_once lines at the top set the paths to the piwik API and set up some error handling for piwik. Enter your own Auth Token in the token_auth parameter. That's it, the connection to piwik is already done. Now let's see how we grab our data. There is a function in the second half of the file; I named this little friend piwikRequest.
You can pass 7 values to our function to define which data, for which date, period and method, we want to request from piwik:

$piwikSiteId = from which tracked page we need data
$piwikMethod = which data method do we need? You can find a full list here: http://piwik.org/docs/analytics-api/reference/#toc-standard-api-parameters
$piwikPeriod = day, week, month, year
$piwikDate = today, yesterday
$piwikSegment = segment of data, e.g. pageTitle==home
$piwikColumns = only grab data from these columns in the piwik database
$piwikColumnsHide = do not grab data from these columns

So, if we want to grab the visitor count from yesterday, we can call our function like this:

<?php echo piwikRequest('3', 'VisitsSummary.getVisits', 'day', 'yesterday'); ?>

3 = id of the tracked site in piwik (I use piwik to track the frontend, the backend and the user-created landing pages each as their own site)
VisitsSummary.getVisits = our API method, telling piwik which data we need
day = our period
yesterday = well yes, data from yesterday

This request returns just a neat integer, so there is no need for additional logic; if we use methods providing more data, the function returns an array, which can be displayed with print_r() or json_encode(). Well, our piwik connection is ready; now we want to show some data in our template file.

The templating: Open your template file and make sure you have included all the needed files from jQuery, morris and raphael. Include your piwik API connector file at the top of the page. Make sure that jQuery, raphael.js and morris.js are included in this order and above the HTML code:

jQuery
raphael.js
morris.js

We now want to show 2 donuts with 2 data values each: visits today and yesterday, and total page views from this year, unique and all.
Copy and paste the following code into your template file:

<h2>Visitors</h2>
<div id="donut" style="height:200px;"></div>
<h2>Page Views</h2>
<div id="donut2" style="height:200px;"></div>
<script>
Morris.Donut({
    element: 'donut',
    data: [
        {label: "Yesterday", value: <?= piwikRequest('3','VisitsSummary.getVisits','day','yesterday') ?>},
        {label: "Today", value: <?= piwikRequest('3','VisitsSummary.getVisits','day','today') ?>}
    ]
});
Morris.Donut({
    element: 'donut2',
    data: [
        {label: "unique", value: <?= piwikRequest('3','Actions.get','year','yesterday',' ','nb_uniq_pageviews') ?>},
        {label: "all", value: <?= piwikRequest('3','Actions.get','year','yesterday',' ','nb_pageviews') ?>}
    ]
});
</script>

We add 2 empty divs with some inline CSS (you can change this via a stylesheet); in these 2 divs the morris script will draw a donut with the data grabbed from piwik. Right under the divs we call morris.js and provide data from our piwikRequest function. That's all! You can find all piwik methods, dates, periods and so on in the piwik API reference here: http://piwik.org/docs/analytics-api/reference/#toc-standard-api-parameters
    1 point
  19. I attached a screenshot to show what I did with processwire - piwik
    1 point
  20. Hey Joss... you can actually "scroll" through the project images... there's also a little link that says "TOP" at the end of each project, maybe jQuery wasn't fully loaded or something... Oh and Georgson, your website looks awesome and loads pretty fast for me too...
    1 point
  21. Ha! Thanks guys! Yeah, linking the logo would make sense, especially as other sections are added. Thanks for the favicon, diogo.
    1 point
  22. Greetings Georgson, Thanks for sharing, and congratulations on the launch! Seems that ProcessWire helped you zoom to a great finish. I like the site. The colors present a positive, active sensation throughout the site. The photos in all the slideshows are very professional. I'm viewing the site on an iPad (I'll check from a desktop later). My only comment would be -- if possible -- to make each project a bit more distinct as you scroll down the page. Thanks again for sharing, Matthew
    1 point
  23. Nice work! I was talking to ryan what seems like ages ago now about version control, and I remember stumbling across this easy-to-use diff class that might help in a future version: https://github.com/chrisboulton/php-diff EDIT: I just noticed that someone has built a useful jQuery merging tool that would also help, if you follow the link above and read the readme.
    1 point
  24. Love it, and I think it's an amazing module idea! Can it handle multiple photos per email, as was originally suggested earlier in this thread? I just don't see any groups under one title in your example gallery, ryan, though this could easily just be that you haven't sent an email with multiple photos yet. It might be time to split this off from the original topic though, as I only read this one by chance and wasn't expecting all this. Either that, or when it's ready for release I suppose the new module topic can just link back here. I dunno... just mumbling to myself.
    1 point
  25. Oh, I see - you're right. The .htaccess file mostly would not be writable, and also more often isn't the problem. Maybe only test with an HTTP HEAD request for the admin page and inform the user if it is not reachable; that way he gets warned and the frustration isn't as big as it could be now. And yes, I would like it if you add the additions to the class. (Afterwards I can tell everyone, whether they want to hear it or not, that I've provided code to the core of one of the best CMSs out there!) (But I wouldn't tell them that it actually was a total of 6 or 7 lines.)
    1 point
  26. We've got this module just about ready for release. Here's an example of the module up and running and the [largely unstyled] output: http://processwire.com/email-images/
    1 point
  27. Favicon added to modules.processwire.com
    1 point
  28. Funnily enough I'm building one currently in ProcessWire but will be a while before it's ready
    1 point
  29. An old, old client of mine (we produced some dramas together in the early 80s) wants to upgrade his website to something a little more "dynamic". He has asked for some help, and to get the ball rolling he sent me over the specs for the server space he is using. Apparently it is running PHP 4 or 5.1 - you can choose either! I asked him to send a support ticket to see if they would upgrade to at least PHP 5.3. The reply came back: "Newer versions of PHP are still unstable at best, and until they are better tested, we will be staying with the more reliable 5.1" I moved my old friend's website in about 30 seconds flat first thing this morning. Joss
    1 point
  30. This is where pages are your friends. It is worth splitting everything up onto pages somewhere if you are going to treat them in different ways. Then you can just go and grab them and do what you want. And, as pages are part of the page list, you can drag and drop them. You can grab ->created and ->modified. If you are using a find() system (see selectors in the API), then you can sort by date. You can also create a date field which will allow you to intentionally re-date things (for instance if you are importing archives). So here are some options:

You can create pages just as children in the page tree and have them appear on the menu
You can create pages as children of a hidden page on the page tree (they won't appear on the menu), then import them into a template using $pages (see cheatsheet)
You can create a multi-page field, choose the asm type on the input tab, then manually choose pages from a parent and drag and drop the order to your heart's content
You can create pages anywhere you like but with a common template, then import them using $pages->find("template=mytemplate, sort=-created, limit=10"); and loop through them
You can ... oh, loads of other ways. You can have fun, basically.
    1 point
  31. As with everything in PW, there are many ways of doing anything. One that I think would work (untested, written in the browser) would be something like:

$child = $page->child; // 1st child of page
// some html here
echo $child->headline;
// more html
echo $child->teasertext;
// more html

$child = $child->next; // next child
// some html here
echo $child->headline;
// more html
echo $child->teasertext;
// more html

// and so on

<edit>Wanze beat me to it, so take your pick </edit>
    1 point
  32. Hi ashrai, foreach works; you can give your news boxes different classes based on the iteration index. For example:

$news = $pages->get('/news-parent/')->children('limit=3, sort=-created');
foreach ($news as $k => $n) {
    echo "<div class='box{$k}'>";
    echo "<h2>{$n->headline}</h2>";
    echo $n->teasertext;
    echo "</div>";
}

And in your CSS, style your boxes:

.box0 { /* first box */ }
.box1 { /* second box */ }
.box2 { /* third box */ }
    1 point
  33. If you have a robots.txt, I would use it to specify what directories you want to exclude, not include. In a default ProcessWire installation, you do not need to have a robots.txt at all. It doesn't open up anything to crawlers that isn't public. You don't need to exclude your admin URL because the admin templates already have a robots meta tag telling them to go away. In fact, you usually wouldn't want to have your admin URL in a robots file because that would be revealing something about your site that you may not want people to know. The information in robots.txt IS public and accessible to all. So use a robots.txt only if you have specific things you need to exclude for one reason or another. And consider whether your security might benefit more from a robots <meta> tag in those places instead. As for telling crawlers what to include: just use a good link structure. So long as crawlers can traverse it, you are good. A sitemap.xml might help things along too in some cases, but it's not technically necessary. In most cases, I don't think it matters to the big picture. I don't use a sitemap.xml unless a client specifically asks for it. It's never made any difference one way or the other. Though others may have a different experience.
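If you do end up needing one, a minimal robots.txt for the exclusion case described above might look like this (the directory name is a placeholder; remember the file itself is public, so never list anything you want to keep secret, like an admin URL):

```
User-agent: *
Disallow: /some-private-section/
```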
    1 point
  34. I think I understand what you mean now. No problem--I'll move the definition of that text into the __construct so that they can be translated.
    1 point