Leaderboard
Popular Content
Showing content with the highest reputation on 01/06/2015 in all areas
-
For PW 3.0+ please follow this link!

Croppable Image Module for PW >= 2.5.11 and PW <= 2.7.3
Version 0.8.3 alpha

Hey, today I can announce an early (alpha) release of CroppableImage, which was forked from Antti's Thumbnails module. A lot of work has gone into it so far by owzim, Martijn Geerts and me. We have solved the issues on the list from here:

The modules are bundled together so that you only can, and have to, use FieldtypeCroppableImage to install, uninstall and configure it.
It uses the new naming scheme introduced with PW 2.5.0 that supports suffixes.
The complete image rendering is delegated to the core ImageSizer, or to any optional hooked-in rendering engine.
Template settings are now fully supported, including removing variations when settings have changed.
It fully respects the setting for upscaling: if upscaling is set to false, you cannot select rectangles smaller than the crop setting.

We implemented these enhancements:

The GridView is now very nice and compact, and also benefits from the recently introduced $config->adminThumbOptions setting.
Permanent storage of the crop coordinates, quality and sharpening settings is now implemented natively. No need to use PiM for this anymore.
The usage/display of the Quality and Sharpening dropdown selects can be globally disabled/allowed on the module's config page (in addition, a setting on a per-field basis is planned).
And the most wanted feature by the community: getCrop() gives back a Pageimage and not a URL string.

This way you can use it like this:

// get the first image instance of crop setting 'portrait'
$image = $page->images->first()->getCrop('portrait');

You can further use every Pageimage property like 'url', 'description', 'width' and 'height' with it:

// get the first image instance of crop setting 'portrait'
$image = $page->images->first()->getCrop('portrait');
echo "<img src='{$image->url}' alt='{$image->description}' />";

And you can proceed with further image rendering:

// get the first image instance of crop setting 'portrait' and proceed a resize with ImageSizer
$image = $page->images->first()->getCrop('portrait');
$thumb = $image->width(200);

// or like this:
$thumb = $page->images->first()->getCrop('portrait')->width(200);

// and if you have installed Pia, you can use it here too:
$thumb = $page->images->first()->getCrop('portrait')->crop("square=120");

The only downside is that when you (as the site developer) have enabled the dropdown selects in the image editor, you do not know which values the editors have chosen for an image. As a workaround you can call getCrop() with a second param: a PW selector string. It can contain as many of the known Pageimage options like 'quality', 'sharpening', 'cropping', etc. as you need, and none of them is required - but at least one setting for 'width' or 'height' is required:

$image = $page->images->first()->getCrop('portrait', "width=200");
$image = $page->images->first()->getCrop('portrait', "width=200, height=200, quality=80");
$image = $page->images->first()->getCrop('portrait', "height=400, sharpening=medium, quality=75");
You can get the module from GitHub: https://github.com/horst-n/CroppableImage (better docs are coming soon)

Screenshots

Related Infos

A good setting for the admin thumbs in site/config.php is (height => 200 and scale => 0.5!):

$config->adminThumbOptions = array(
    'width' => 0,
    'height' => 200,
    'scale' => 0.5,
    'imageSizer' => array(
        'upscaling' => false,
        'cropping' => true,
        'autoRotation' => true,
        'sharpening' => 'soft',
        'quality' => 90,
        'suffix' => array(),
    )
);

15 points
-
Jumplinks for ProcessWire
Latest Release: 1.5.63
Composer: rockett/jumplinks

⚠️ NEW MAINTAINER NEEDED: Jumplinks is in need of a new maintainer, as I'm simply unable to commit to continued development.

Jumplinks is an enhanced version of the original ProcessRedirects by Antti Peisa. The Process module manages your permanent and temporary redirects (we'll call these "jumplinks" from now on, unless in reference to redirects from another module), useful for when you're migrating over to ProcessWire from another system/platform.

Each jumplink supports wildcards, shortening the time needed to create them. Unlike similar modules for other platforms, wildcards in Jumplinks are much easier to work with, as Regular Expressions are not fully exposed. Instead, parameters wrapped in curly braces are used - these are described in the documentation, and a short illustration is sketched after this entry.

As of version 1.5.0, Jumplinks requires at least ProcessWire 2.6.1 to run.

Documentation
View on GitLab
Download via the Modules Directory
Read the docs

Features

The most prominent features include:

Basic jumplinks (from one fixed route to another)
Parameter-based wildcards with "Smart" equivalents
Mapping Collections (for converting ID-based routes to their named equivalents without the need to create multiple jumplinks)
Destination Selectors (for finding and redirecting to pages containing legacy location information)
Timed Activation (activate and/or deactivate jumplinks at specific times)
404-Monitor (for creating jumplinks based on 404 hits)

Additionally, the following features may come in handy:

Stale jumplink management
Legacy domain support for slow migrations
An importer (from CSV or ProcessRedirects)

Open Source

Jumplinks is an open-source project, and is free to use. In fact, Jumplinks will always be open-source, and will always remain free to use. Forever. If you would like to support the development of Jumplinks, please consider making a small donation via PayPal.

5 points
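A quick illustration of the curly-brace parameters (the pattern below is borrowed from the scan-log example further down in this digest, so treat the parameter names as illustrative rather than as documentation): a single wildcard jumplink such as

    Source:      {path}/tabid/{id}/Default.aspx
    Destination: {path}/?otid={id}

covers a whole family of legacy URLs; {path} and {id} are captured from the requested URL and reused in the destination.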
-
Hi Lauren, I have thrown something together for you. It isn't well tested yet, but seems to be working fine. You can edit the StopWords.js file if you want to adjust the words that are removed. If there is general interest in this module, I might consider making it configurable. Let me know how it goes for you.

PageNameRemoveStopwords.zip

5 points
-
Buried in a topic is a small example module I posted for creating a simple Process module.

4 points
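Since the linked example isn't reproduced here, a minimal sketch of what a bare-bones Process module looks like (the class name, title and output are assumptions for illustration only, not the module from that topic):

<?php
// Minimal Process module sketch. After installing it, create a page under the admin
// branch of the page tree and assign this process to it so it gets its own admin screen.
class ProcessHelloExample extends Process {

    public static function getModuleInfo() {
        return array(
            'title'      => 'Hello Example',
            'version'    => 1,
            'summary'    => 'Bare-bones example of an admin Process module.',
            'permission' => 'page-edit',
        );
    }

    // called when the process page is viewed in the admin
    public function ___execute() {
        return '<h2>Hello from a Process module</h2>';
    }
}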
-
Hi Lauren,

You'll need to hook into ProcessPageAdd::buildForm, and potentially ProcessPageEdit::buildForm if you also want to change the name when the title is changed during a later edit. You can get an idea of how to do this from my Page Rename Options module (https://github.com/adrianbj/PageRenameOptions/blob/master/PageRenameOptions.module#L65). You can see that I have added some JS to override the native functionality when it comes to naming the pages. You will be looking to add to/override the functionality in these files:

https://github.com/ryancramerdesign/ProcessWire/blob/6cba9c7c34069325ee8bfc87e34e7f1b5005a18e/wire/modules/Inputfield/InputfieldPageName/InputfieldPageName.js
https://github.com/ryancramerdesign/ProcessWire/blob/6cba9c7c34069325ee8bfc87e34e7f1b5005a18e/wire/modules/Inputfield/InputfieldPageTitle/InputfieldPageTitle.js

Hope that helps to get you started.

There is nothing wrong with messing with the page name - in fact it is possible to edit it by hand. The one thing that some don't agree on is whether it should ever be changed after it is originally created. I would personally rather the name matched the title, but others think it should never change due to SEO and broken links. I make use of the Page Path History module to deal with these - maybe not perfect, but typically the only time the titles will change is during development, so I am OK with it.

4 points
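For concreteness, a minimal sketch of the hook half of that approach (the class, file and script names here are placeholders, not Adrian's actual code):

<?php
// Autoload module that injects a JS file whenever the page-add or page-edit form is built,
// so that script can override the behaviour of InputfieldPageName.js / InputfieldPageTitle.js.
class PageNameTweaks extends WireData implements Module {

    public static function getModuleInfo() {
        return array(
            'title'    => 'Page Name Tweaks',
            'version'  => 1,
            'autoload' => 'template=admin', // only needed in the admin
        );
    }

    public function init() {
        $this->addHookAfter('ProcessPageAdd::buildForm', $this, 'addScript');
        $this->addHookAfter('ProcessPageEdit::buildForm', $this, 'addScript');
    }

    public function addScript(HookEvent $event) {
        // PageNameTweaks.js would contain the client-side name-filtering logic
        $this->config->scripts->add($this->config->urls->PageNameTweaks . 'PageNameTweaks.js');
    }
}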
-
Good work on helping out, of course - I was more thinking out loud. Having just read up on it quickly, I don't think it matters much - see the final reply to the chosen answer here (which disagrees with that answer): http://stackoverflow.com/questions/9734970/better-seo-to-remove-stop-words-from-an-articles-url-slug as well as the comment further down pointing out that StackOverflow doesn't do it. Having checked, neither do Slashdot and some other big sites. In fact, most search results about stopwords seem to relate to WordPress plugins rather than anything official from Google saying it makes any difference. I think it's one of those things that may have mattered in the past, but not so much now. There are certainly some respectable SEO companies out there who aren't removing the stopwords from their own website URLs either. But please don't take my word for it - as I say, I was just thinking out loud and know very little about the subject, so if someone finds a definitive answer from Google themselves then please do share, as the short research I did wasn't really conclusive.

3 points
-
In a previous life I used to do IT support in a school, and I was responsible for a Zimbra install there. For them, it was (and still is) a very good and capable system. Different settings for staff/students, external access, IMAP for mobile users, LDAP integration, the works. The management features and API are robust and featureful. To an extent, it's very customisable. Since I implemented it (around 2008), its ownership has changed hands several times, but it has gone from strength to strength.

On the technical side, you will need a server with a lot of resources to run it. It is not a lightweight webmail client like RoundCube or Horde; it is an entire self-contained product that bundles its own SQL server (currently MySQL), mail agent, web application server, antivirus/antispam, logging, monitoring services and more. I'm not sure it's worth the resource overhead and hassle if it's just going to be 1-2 accounts on a single domain and you are the one maintaining it. But your situation and priorities may be different.

2 points
-
I have always tended to look at these as more of a problem in titles and so on than in URLs - if you have a lot of "of", "in", "at" and so on, your titles are going to be waffly and probably too long for good SEO. Making the title of the page neat and sensible means the resulting name will be the same - it is a copywriting problem. Overuse can also make bad copy - when being attentive to SEO, you should first and foremost be attentive to the audience. If removing all stop words from a URL or title turns it into gibberish, you have not done yourself any favours from either the SEO or the readability point of view. From the little I know, it seems like these days Google et al do not just remove all stop words - they have lists of phrases where stop words should be left alone and generally seem to be taking a more pragmatic approach to everything. In these sorts of circumstances human editing is much better than automation.

2 points
-
2 points
-
Thanks Alex for catching this. I can confirm this behaviour only happens in the dev branch, since the addition of the updated FieldtypeComments. Ryan managed to break something. Hey, the new fieldtype is still under development. I will bring this to Ryan's attention. Meanwhile, a workaround that I have tested is to first install the stable branch of PW, install Blog, then upgrade PW to the dev branch. Works fine.

FYI, the message logs for the dev branch (unsuccessful upgrade of blog_comments):

2015-01-06 18:07:29 lightning_admin http://curium-kcu.lightningpw.com/processwire/blog/ Updating schema version of 'blog_comments' from to 4
2015-01-06 18:11:06 lightning_admin http://curium-kcu.lightningpw.com/processwire/blog/ Updating schema version of 'blog_comments' from to 4
2015-01-06 18:17:05 lightning_admin http://curium-kcu.lightningpw.com/processwire/blog/ Updating schema version of 'blog_comments' from to 4

Compare that to a successful one...

2014-11-30 01:51:37 kongondo http://localhost/pb/pb/blog/ Updated schema version of 'blog_comments' to support website field.
2014-11-30 01:51:37 kongondo http://localhost/pb/pb/blog/ Updated schema version of 'blog_comments' to support website field.
2014-11-30 01:51:37 kongondo page? Updated schema version of 'blog_comments' to support website field.
2014-11-30 01:51:37 kongondo page? Updated schema version of 'blog_comments' to support website field.
2014-11-30 01:54:34 kongondo http://localhost/pb/pb/blog/ Updated schema version of 'blog_comments' to support website field.
2014-11-30 01:54:34 kongondo http://localhost/pb/pb/blog/ Updated schema version of 'blog_comments' to support website field.
2014-11-30 01:54:34 kongondo page? Updated schema version of 'blog_comments' to support website field.
2014-11-30 01:54:34 kongondo page? Updated schema version of 'blog_comments' to support website field.
2015-01-06 17:46:32 kongondo http://localhost/pb/pb/blog/ Updating schema version of 'blog_comments' from 1 to 5 (FieldtypeComments)

2 points
-
<development post>

New Alpha Release - 0.1.1

The full changelog is below, but I want to highlight an important addition to the module. In its present state, this new feature is classified as an experiment, and so it may not make it into the final 1.0 release. The experiment is called Enhanced Path Cleaning, which splits and hyphenates TitleCased wildcard captures, as well as those containing abbreviations or acronyms in capital letters. This is quite handy for those who come from a DNN background, or some other ASP-based framework. As an example, EnvironmentStudy would become environment-study and NASALaunch would become nasa-launch. It also breaks out any numbers that are not already separated with hyphens. You'll need to turn the experiment on in the module's config page, as it is off by default.

Example from the scan log:

Page not found; scanning for redirects...
- Checked at: Tue, 06 Jan 2015 10:20:20 +0200
- Requested URL: http://processwire.local/NAGMagazine/home/tabid/1027/default.aspx
- PW Version: 2.5.13

[ASPX Content]
- Source Path (Unescaped): {path}/tabid/{id}/Default.aspx
- Source Path (Stage 1): {path:segments}/tabid/{id:num}/Default.aspx
- Source Path (Stage 2): ([\w/_-]+)/tabid/(\d+)/Default.aspx
- Destination Path (Original): {path}/?otid={id}
- Destination Path (Compiled): processwire.local/{path}/?otid={id}
- Destination Path (Converted): processwire.local/nag-magazine/home/?otid=1027

Match found! We'll do the following redirect (301, permanent) when Debug Mode has been turned off:
- From URL: processwire.local/NAGMagazine/home/tabid/1027/default.aspx
- To URL: processwire.local/nag-magazine/home/?otid=1027

Changelog

[5 Jan]
- Modified Module Config page
- Simplified wildcard and cleanPath() expressions
- Added 'segments' check to 'segments' smart wildcard, where only 'path' existed before
- Added 'num' check to 'num'
- Changed ellipses style (added border)
- Made Legacy Redirects temporary (302)
- Fixed CSS #ModuleEditForm declaration so other modules are not impacted by the style change
- Changed CSS p.notes to p.parNotes so we don't conflict with anything else
- Other small semantic code modifications

[6 Jan]
- Added Experiments fieldset in Module Config
- Added Enhanced Path Cleaning Experiment
- Removed license file, refer to http://mikeanthony.mit-license.org/

2 points
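Not the module's actual implementation, but a rough sketch of the idea behind Enhanced Path Cleaning (the function name and regexes are assumptions for illustration):

// split TitleCased / acronym-containing segments and lowercase them
function cleanSegment($segment) {
    // "NASALaunch" -> "NASA-Launch" (run of capitals followed by a capitalised word)
    $segment = preg_replace('/([A-Z]+)([A-Z][a-z])/', '$1-$2', $segment);
    // "EnvironmentStudy" -> "Environment-Study" (lowercase/digit followed by a capital)
    $segment = preg_replace('/([a-z0-9])([A-Z])/', '$1-$2', $segment);
    return strtolower($segment);
}

echo cleanSegment('EnvironmentStudy'); // environment-study
echo cleanSegment('NASALaunch');       // nasa-launch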
-
@adrian - I'm doing something similar, but instead of automatically importing, I let the user choose a 'save action', which could be either to create the pages from the media (in my case audio files) or to delete the files on save. This of course saves so much time, because users can batch upload a huge set of media, and then on save they can batch create pages from those files - the best of both worlds. I was going to have the action also delete the files, but I was thinking that the user should check to make sure it all worked and that they no longer needed the source set of files. A lot of the time, though, the users forget to delete the audio. I was thinking it would be cool to have a popup or something that said "import successful, you may remove the source files" as a reminder, but I haven't worked out how to get that to display after save yet... maybe load a new JS file with that...

2 points
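One hedged possibility (a sketch only; the variable names and surrounding module code are assumed, not taken from the post above): ProcessWire's built-in notices could serve as that reminder without any extra JS, since they are shown on the next admin page load after save:

// inside the module's save-action handler, after the pages have been created
$count = count($createdPages); // $createdPages is assumed to hold the pages built from the audio files
$this->message(sprintf('Import successful: %d pages created. You may now remove the source files.', $count));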
-
Hi all,

My team has built a few ProcessWire sites, and we LOVE it - it is now our go-to CMS. However, we have come up against the same issue on each site and wonder if/how any of you are managing it. I don't think the issue is entirely specific to ProcessWire, but due to the way PW stores data, it may require an answer that only applies to PW. It's the issue of keeping a staging site database in sync with a live site. To illustrate: we start developing a new section, or modifying the functionality of a current section. All the while the client is making content changes on the current site, and in so doing changing the data in many different tables. Is there an approach, other than manually noting which tables have changed during our update, that people are using to manage this situation? With WordPress, for example, all of the content ends up in one big table whose structure doesn't change, so updates aren't that tricky. With PW data and structure being intermingled, it is a bit more tricky to manage. I'd really appreciate knowing more about how people handle this, even if it's just confirming that there isn't a better way than doing it manually. Thanks.

1 point
-
Hi there,

Is there a native way to adjust how a page's name/url/slug is generated as you type? I'd like it to function similarly to WordPress, which filters out stop words like "a", "to", "the", etc. If there isn't a way to do it natively, would you recommend a module that hooks into the event of a new page being added? A page's name seems like a pretty vital part of ProcessWire, so I'm a bit hesitant about messing with it at all! What do the seasoned PW developers out there think? Am I asking for problems by messing with the page name? Thanks!

Lauren

1 point
-
Indeed! Wow, such insightful, helpful answers. Thank you everyone! Adrian, the module you whipped up... so cool and super simple too. Love it. I installed it and tested it out; it's exactly what I was hoping for when I first started thinking about how to accomplish this. I was hoping the built-in JS behaviour could be modified like that. Really nifty. Also, really excellent points about the SEO aspect. The main reason behind wanting to remove stop words was the benefits that come along with having shorter URLs. Although, in the StackOverflow question that Pete referenced, I thought the following was a really good point: "Keep them in your URL. Even though Google may ignore them in normal search they do not when someone does an exact match search (i.e. using quotes)." It's so worthwhile posting questions on this forum, because I always walk away with way more knowledge about PW than I expected. Thank you everyone.

1 point
-
Has anyone experience with installing and running Zimbra Collaboration? http://www.zimbra.com/downloads/zimbra-collaboration-open-source

1 point
-
Very nice! As the Thumbnails module got more and more out of date, it faded from my default modules list. But I think this will be the replacement mighty soon.

1 point
-
That sounds like the key issue in all this to me - very good point! And Pete - thanks for those links. Lauren will have some thinking to do when she finally gets back to this thread.

1 point
-
Thanks for this, folks. Looks great, can't wait to give it a try! As a minor observation, you might want to repeat some of the recent fixes for the Thumbnails module here too (strict standards, repeater permissions, etc.)

1 point
-
1 point
-
Thanks to the three of you for your work on this! Can't wait to test - I will definitely be using it on a new site that starts dev in the next couple of weeks. I'll let you know how it goes!

1 point
-
At this point, the module is quite feature-rich. As such, I'm pausing development. This gives time for those testing the module to do so without me introducing new things every three seconds, and it also gives me time to prepare for the whole 'back to work' thing - I return on Monday, and need to prepare for it. I'll still be here to discuss things, but there won't be any changes for about two weeks. Looking forward to the results/feedback of everyone's testing.

Update: Lol, the stuff I had to do couldn't be done today - so there are a few changes up on the repo now.

1 point
-
1 point
-
<development post>

New Alpha Release - 0.1.2 (Breaking Changes)

This release introduces a new feature: Mapping Collections (this is the final name for it). The feature allows you to map key/value pairs to your redirects so that you need not repeat yourself. An example would be the following: redirect /blog.php?id=2309 to /hello-world/ by using the following redirect:

blog.php?id={id} => {id|blog}

You're wondering, what's {id|blog}? Simply put, that's called a mapping reference, which refers to an entry in a Mapping Collection. A Mapping Collection, in this case, would look something like this:

1=a-post
2=another-post
3=third-post
...
2309=hello-world

So, when it comes time to redirect, it will scan the provided id against the collection called blog and, if it finds a match, will redirect accordingly. If, however, there is no match, the segment that asked for the mapping will be left out, resulting in an invalid URL. I may get it to insert the original capture instead.

(Some screenshots were attached to the original post.)

Full changelog for this release:

[6 Jan]
- Some refactoring; moving code blocks from main module to UtilityProcess
- [New Feature] Mapping Collections
- Changed module title to Advanced Redirects [ALPHA]

Breaking Changes: The DB schemas have changed in this release. If you already have 0.1.0 or 0.1.1 installed, please uninstall it first.

1 point
-
Awesome! Nice one, man! You are a bloody marvellous chap. Some might say a gentleman and a scholar... I most certainly would say so! I shall follow your Process process down to the wire.

1 point
-
I don't think this is theme-related, as it needs some sort of scripting. Have you considered AdminCustomFiles for this?

1 point
-
Sometimes during a migration people have second thoughts and want to make some adjustments after redirects are in place. How about an option to redirect as 302 for a while (think of it as a probationary period) before automatically converting to 301 (so we don't forget)?

1 point
-
1 point
-
To answer your question, yes! I currently have two sites on DigitalOcean and one on NearlyFreeSpeech. Git is SO ingrained in my workflow now, I can NOT work without it. I have a new site for which I bought hosting, an SSL certificate and a year of service with NameCheap, but I will be leaving, actually today, because they made things a little complicated and they lack documentation. DigitalOcean, on the other hand, offers a wealth of tutorials and docs to get started, and plenty of intermediate and advanced topics. I am indeed VERY excited for 2015. What about my workflow are you interested in?

1 point
-
This is interesting, because I was working on some advanced automatic naming behind the scenes only last night, applied during the page save process (so ignoring the visual indicator this would normally give on the edit page), but in my case that was fine because those pages needed no user control over the page names. Very nice idea, and it keeps URLs shorter, but I'm not sure it makes a difference to SEO any more to be honest, though I am no expert. Still, ProcessWire is all about providing the tools that in turn can provide more options, so good work.

1 point
-
@MarcC, @Richard Cook - this might not be exactly what you want but, not long ago, repeaters gained the ability for you to give them custom title strings by hooking the renderRepeaterLabel() method. Can that get you any closer to your goals?

1 point
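A hedged sketch of what such a hook could look like (only the method name renderRepeaterLabel() comes from the post above; the class it lives on, the argument handling and closure-style hook are assumptions, so verify against your PW version):

// in an autoload module's init()/ready(), assuming the PW version accepts closures as hook handlers
wire()->addHookAfter('InputfieldRepeater::renderRepeaterLabel', function(HookEvent $event) {
    $item = $event->arguments('page'); // the repeater item page (argument name is an assumption)
    if($item && $item->title) $event->return = $item->title; // use the item's own title as its label
});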
-
1 point
-
1 point
-
I think I know what you mean - I have just committed an update that will use the entered description for the title of the page if it is entered - otherwise it will fall back to the filename. Is that what you meant? The code was quickly put together, so I expect the OP may want to tweak other things along those lines - it should hopefully be a good starting point though.

1 point
-
I've made a few changes, which I'll be uploading tomorrow morning as v0.1.1. Some of it is to do with bettering my code (specifically in the way of Regular Expressions), and the rest of it is to do with better path cleaning and a few backend CSS fixes.

1 point
-
From my point of view this has less to do with HTML5 and more to do with SEO. The general thought is that you should have one H1, then a group of H2s, more H3s and so on - though there are arguments against that as well. It is all about second-guessing Google and Bing.

When it comes to HTML5 and SEO, however, it is probably more important to look at structural elements such as <article> and <section>, and then the relevant attributes from Schema.org:

<article role="article" itemtype="http://schema.org/BlogPosting" itemscope>

And so on.

1 point
-
That may be because you switched these two rules around when you were encountering problems:

# -----------------------------------------------------------------------------------------------
# Pass control to ProcessWire if all the above directives allow us to this point.
# For regular VirtualHosts (most installs)
# -----------------------------------------------------------------------------------------------
# RewriteRule ^(.*)$ index.php?it=$1 [L,QSA]

# -----------------------------------------------------------------------------------------------
# 500 NOTE: If using VirtualDocumentRoot: comment out the one above and use this one instead.
# -----------------------------------------------------------------------------------------------
RewriteRule ^(.*)$ /index.php?it=$1 [L,QSA]

As such, it's looking for index.php in the root of the domain, not the root of the installation. If you swap them around again (comment the second one, uncomment the first one), it should work. Please let me know if it doesn't...

1 point
-
Hi @paulbrause,

Lots of great suggestions above regarding PageTable fields etc., but I just put together a quick module that I think does exactly what you are looking for: https://gist.github.com/adrianbj/2b3b6b40f64a816d397f

To make it work as is, you will need these:

Templates:
album (for the album parent pages)
image (used for the automatically created child pages)

Fields:
images (for the album template)
image (for the image template)

1 point
-
1 point
-
Perhaps this is the wrong subforum, and perhaps there are better ways to do this, but I wanted to share some thoughts regarding multiple sites together with multiple robots.txt files. If you are running multiple sites with one PW setup, you can't place multiple robots.txt files into your root. As long as all robots.txt files are identical there is no problem with it - you can stop reading right here.

In my robots.txt I wanted to include a link to the current sitemap, e.g.:

Sitemap: http://www.domain.com/sitemap.xml

I put each robots.txt into the "site-" directories. Search engines expect the robots.txt file directly in the root, so I added some lines to my .htaccess file (you have to do this for each domain):

# domain 1
RewriteCond %{REQUEST_URI} /robots\.txt
RewriteCond %{HTTP_HOST} www\.domain\. [NC]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ /site-domain/robots.txt [L]

# domain 2
RewriteCond %{REQUEST_URI} /robots\.txt
RewriteCond %{HTTP_HOST} www\.domain\-2\. [NC]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ /site-domain-2/robots.txt [L]

Another possible approach: create a PW page within each site.

1 point
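If you go the PW-page route, a minimal sketch of a template file that renders the robots.txt content (the template file name is an assumption, and you would still need to route /robots.txt requests to that page, e.g. with an .htaccess rule like the ones above):

<?php
// site/templates/robots-txt.php - outputs a plain-text robots file with a per-site sitemap link
header('Content-Type: text/plain; charset=utf-8');
echo "User-agent: *\n";
echo "Disallow:\n";
echo "Sitemap: " . $pages->get('/')->httpUrl . "sitemap.xml\n";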
-
1 point
-
For 4K you get the logo.

1 point
-
That sounds like a really interesting project. Some measures you can consider:

Session inactivity timeouts.
Force regular password changes and/or use two-factor authentication.
Use SSL.
Ensure users can only access the data they are allowed to - not just through interface options, but URLs as well.
Look at the hosting infrastructure and the credentials to access it. Who has access? What about your provider? Where is your database stored? (Shared hosting? Easy-to-guess credentials?)
Look at how data is imported and exported within the system. Is it possible to bypass any validation or auth checks?
Forms. They should definitely be using CSRF protection. Can user input be overloaded? That is, can I submit additional form values that the system doesn't check or expect, but that still get saved to the DB?
Logging. Log as much as you can in order to provide an audit trail. It is guaranteed that somewhere down the line, someone will ask the question "When did this record change to this value, and who made the change?"
Backups. Hopefully the data will be backed up. How easily and quickly can it be restored? At what granularity?
User education. Some users may need it explaining to them that sharing usernames and passwords, or writing them down, is not good practice.

There are probably some additional things - that could start getting into the realms of penetration testing - but that's a summary of the things I can think of in a short time.

1 point
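On the CSRF point, ProcessWire ships with built-in helpers, so a hand-rolled form in a template file can be protected along these lines (a sketch only; the form fields and flow are assumptions):

// in the template file that renders the form
echo "<form method='post' action='./'>";
echo $session->CSRF->renderInput(); // hidden token field
echo "<input type='text' name='email'> <input type='submit' value='Send'>";
echo "</form>";

// when processing the submission
if($input->post->email) {
    $session->CSRF->validate(); // throws an exception if the token is missing or invalid
    // ... handle the validated, sanitized submission here
}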
-
Looks more like the Dr. Dre headphone logo.

1 point
-
Too simple to be a module; consider a script like this:

$array = $pages->find("template=basic-page")->explode(function($item) {
    return array(
        'id'    => $item->id,
        'title' => $item->title
    );
});

$fp = fopen('file.csv', 'w');
foreach ($array as $fields) fputcsv($fp, $fields);
fclose($fp);

Note, $pagearray->explode() used here is only available in 2.4 (2.3 dev): http://cheatsheet.processwire.com/pagearray-wirearray/getting-items/a-explode/
And the anonymous function requires PHP >= 5.3: http://php.net/manual/de/functions.anonymous.php

1 point
-
Thanks for the tips, Ryan! I will make those changes.

Edit: hmm, I think I won't implement the second tip. As a matter of privacy, I think an unsubscribed email should be deleted immediately.

1 point
-
Looks like a great solution, thanks for posting, Diogo. A couple of suggestions I'd have would be:

1. For all your $pages->get("title=$email") calls, I'd suggest making them as specific as possible, since they can be initiated by a user. Specifically, I'd make it "title=$email, template=person, parent=/subscribers/". While not technically necessary, it just adds another layer of security at very little expense. Who knows, maybe you'll use email addresses for titles somewhere else?

2. You may want to consider changing your $page->delete() to a $pages->trash($page), just to give you the opportunity to review who unsubscribed before emptying.

1 point
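A small sketch combining both suggestions (the field and template names come from the thread above; $email is assumed to be already sanitized, e.g. with $sanitizer->selectorValue()):

// look up the subscriber as narrowly as possible
$person = $pages->get("title=$email, template=person, parent=/subscribers/");
if($person->id) {
    // move to the trash instead of deleting, so unsubscribes can be reviewed before emptying
    $pages->trash($person);
}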