Peter Knight

Options for trailing slashes and SEO


Had a question about trailing slashes and forcing one or the other.

I've a site where most pages can be accessed with AND without a trailing slash

i.e.

domain.com/about-us/contact
and
domain.com/about-us/contact/

are both accessible and being indexed by Google. It's obviously bad for SEO, but I can't seem to make PW enforce one version and redirect the other.

There is a setting under Templates > [template] > URLs:

Quote

Should page URLs end with a slash
If 'Yes', pages using this template will always have URLs that end with a trailing slash '/'. And if the page is loaded from a URL without the slash, it will be redirected to it. If you select 'No', the non-slashed version will be enforced instead. Note that this setting does not enforce this behavior on URL segments or page numbers, only actual page URLs. If you don't have a preference, it is recommended that you leave this set to 'Yes'.

I must be overlooking something as I have 'yes' selected and both URLs are still reachable with no redirect.

What do you guys do to counter this?
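
For reference, one way to enforce a single URL format outside of PW is at the .htaccess level. Below is a minimal sketch (not part of the stock ProcessWire .htaccess) that 301-redirects any request that doesn't end in a slash and isn't an existing file; it assumes placement after RewriteEngine On and before ProcessWire's own rewrite rules:

# Sketch: force a trailing slash on anything that is not a real file.
# Assumes it sits after "RewriteEngine On", before PW's rewrite rules.
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*[^/])$ /$1/ [R=301,L]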

 


Hello @Peter Knight,

As long as you don't mix both versions in your internal links or XML sitemap, I don't think it is bad. ;)

Here is what Google writes on this topic:

Quote
  • Leave it as-is. Many sites have duplicate content. Our indexing process often handles this case for webmasters and users. While it’s not totally optimal behavior, it’s perfectly legitimate and a-okay. :)

Or could you please explain a little further or provide a link why it would be "obviously bad"? I am no SEO expert, so that would interest me.

I have never changed the default setting for URLs and never experienced a downside from it. Also, search engines may index both versions but show only the version with a slash in the results.

Regards, Andreas

Edit: I missed that your URLs do not redirect.

Edited by AndZyk

Just an FYI - I have that setting on (as is the default), and if I link to a page without the trailing slash, it always redirects to the slash version. Any chance you have something that might be hijacking the redirect?


I always assumed duplicate content incurred a Google penalty and affected your rankings. Possibly this isn't as much of a problem anymore, as @AndZyk mentioned, and Google has gotten better at handling it.

@adrian I have the usual batch of .htaccess settings, ProCache and an SEO module. I couldn't narrow the problem down to any of them.


https://webmasters.googleblog.com/2010/04/to-slash-or-not-to-slash.html

Quote

You can do a quick check on your site to see if the URLs:

  1. http://<your-domain-here>/<some-directory-here>/
    (with trailing slash)
  2. http://<your-domain-here>/<some-directory-here>
    (no trailing slash)

don’t both return a 200 response code, but that one version redirects to the other.

  • If only one version can be returned (i.e., the other redirects to it), that’s great! This behavior is beneficial because it reduces duplicate content. In the particular case of redirects to trailing slash URLs, our search results will likely show the version of the URL with the 200 response code (most often the trailing slash URL) -- regardless of whether the redirect was a 301 or 302.
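
For a quick way to run that check yourself, here is a small PHP sketch (the domain and path are placeholders):

<?php
// Sketch: print the status line returned for both URL variants.
// example.com and the path are placeholders; adjust for your site.
$base = 'https://example.com';
$path = '/about-us/contact';
foreach([$path, $path . '/'] as $url) {
    $headers = get_headers($base . $url);
    // $headers[0] is the status line of the first response,
    // e.g. "HTTP/1.1 301 Moved Permanently" or "HTTP/1.1 200 OK"
    echo $base . $url . ' => ' . $headers[0] . PHP_EOL;
}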


Just tested this on a PW site I'm currently developing, which is using default settings for slashes etc. Using the Chrome extension Redirect Path to confirm, PW does indeed redirect @szabesz's example 2 (no trailing slash) above to example 1 (with trailing slash), with a 301. Hurrah!

On 08/08/2017 at 0:12 AM, AndZyk said:

Or could you please explain a little further or provide a link why it would be "obviously bad"? I am no SEO expert, so that would interest me.

The post you linked to actually mentions two reasons why this might be bad: it could be identified as duplicate content and it could have a negative effect on crawl efficiency. Neither is something you should really worry about in this case, but then again: Google is a black box (it's impossible to say for sure how their algorithm works), that article is from 2010 (things have changed a lot since then), and some SEO experts do seem to love their micro-optimisations :)

Although in that article they go on to say that their indexing process "often" automatically detects and corrects this issue, it is also true that this is one of those things that are so easy to fix that, even for a tiny theoretical chance that it could affect your rankings, you should stick with one URL format.

On 08/08/2017 at 9:38 AM, Peter Knight said:

I always assumed duplicate content incurred a Google penalty and affected your rankings. Possibly this isn't as much of a problem anymore, as @AndZyk mentioned, and Google has gotten better at handling it.

According to various sources there's no penalty for duplicate content, especially not in cases like this. It would seem that the only way you can get penalised is if Google thinks you're trying to deceive them somehow with said duplicate content. I highly doubt that the trailing slash issue would count.

On 8/8/2017 at 0:16 PM, teppo said:

some SEO experts do seem to love their micro-optimisations

That's the whole thing in a nutshell. Don't sweat the tiny details.

FWIW I don't think of the 'duplicate content penalty' as a penalty per se, more a discounting of the value of any content that is identified as a duplicate of some other content that is counted. If there is a negative, it's that it costs you crawl budget. By which I mean: say Google is prepared to crawl 10 pages of your site per visit; if 2 of those pages are the same content under very slightly different URLs, you are blowing the opportunity to have another, actually different, page crawled.

Having said that, and getting back to the point, there are any number of more significant things to be spending time on.

43 minutes ago, DaveP said:

FWIW I don't think of the 'duplicate content penalty' as a penalty per se, more a discounting of the value of any content that is identified as a duplicate of some other content that is counted.

I might be misunderstanding what you meant by "discounting of the value" and "counted", but the sources I've read so far seem to say that, behind the scenes, duplicate content is automatically bundled together under one canonical URL by Google. In this case it could mean that, for example, all links to example.com/foo and example.com/foo/ would count towards the version that Google deems "primary", and it would also be the only one they display in their search results (unless the user specifically chooses to show duplicate content).

Possibly the most common problem with duplicate content is that Google could choose a different version as the primary one than what you might prefer, unless you use canonical tags to advise them. Probably the most fearsome problem, on the other hand, would be that they could think you're trying to copy the same content over and over as an attempt to "cheat", in which case they could remove some pages, or even the whole site, from their search index. But again, I highly doubt that they could ever interpret a trailing slash issue as such an attempt :)

I must admit that crawl budget was a new concept for me. After reading a bit about it, I'd say that it definitely won't be an issue for the vast majority of us here (for one thing, in a blog post they say that it usually only affects sites with thousands of pages), but it's something that very large sites, or sites that auto-generate URLs in some way, should take into consideration. Anyway, thanks for sharing this point of view! :)

2 minutes ago, teppo said:

all links to example.com/foo and example.com/foo/ would count towards the version that Google deems "primary"

Absolutely - Google is quite often wrongly thought to be the enemy. Not at all, unless people are trying to cheat in some way. They actually do their best to understand broken site architecture, which is something that PW does a great job of helping us avoid anyway.

And we can easily go for a belt & braces approach by adding

<link rel="canonical" href="<?=$page->url?>">

in our site's <head>...</head>, letting PW handle it. (Might very well be unnecessary, but can't hurt.)
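
One small caveat to that: Google recommends absolute URLs for rel="canonical", and $page->url outputs a root-relative path. Swapping in $page->httpUrl, which includes the scheme and host, may therefore be the safer variant:

<link rel="canonical" href="<?=$page->httpUrl?>">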


It seems that when using ProCache there is no redirect; both pages produce a status 200. SEO experts/tools seem to see this as a fault in PW. Adding no trailing slash, or several (///), returns a regular unredirected page either way.



  • Similar Content

    • By Leftfield
      Hi All 🙂

How can I append a canonical URL to the head from certain templates?

      Thanks!!!
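
A minimal sketch of one way to do this from a shared head template; the template names in the array are hypothetical placeholders:

<?php
// Sketch: output a canonical link only for certain templates.
// "blog-post" and "product" are hypothetical template names.
if(in_array($page->template->name, ['blog-post', 'product'])) {
    echo "<link rel='canonical' href='{$page->httpUrl}'>";
}
?>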
    • By Marco Angeli
      Hi there,
I added an SSL certificate to my site and I'd like to redirect every single HTTP URL to its new HTTPS version.
So I added this code in the .htaccess file, after the RewriteEngine On:
Redirect 301 /about https://www.mysite.it/about
Unfortunately this is not working: I get the "too many redirects" error.
The following code works, but it's a bulk redirection to the home page, something I don't want for SEO reasons (https://moz.com/blog/save-your-website-with-redirects):
      RewriteCond %{HTTP_HOST} mysite\.it [NC]
      RewriteCond %{SERVER_PORT} 80
      RewriteRule ^(.*)$ https://www.mysite.it/$1 [R,L]
      Any suggestions?
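
A hedged guess from the symptoms: the mod_alias Redirect directive matches the path prefix /about regardless of scheme, so after redirecting to https://www.mysite.it/about it matches that request again and loops. Keeping everything in mod_rewrite (using %{HTTPS} rather than the port), with a 301 instead of the default 302, avoids the loop and preserves the full path:

RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://www.mysite.it/$1 [R=301,L]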
    • By chrizz
      hey there
I guess a lot of you have already heard of the hreflang attribute, which tells search engines which URL they should list on their result pages. For some of my projects I have built this manually, but now I am wondering if there's a need to add this as a module to the PW modules directory.
How do you deal with the hreflang thingy? Would you be happy if you could use a module for this, or do you have concerns that a module might not cover your current use cases?
      Cheers,
      Chris
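
Until such a module exists, here is a minimal hand-rolled sketch of the usual approach. It assumes ProcessWire's multi-language page names are installed; "language_code" is a hypothetical custom field on each language holding its ISO code:

<?php
// Sketch: one hreflang link per installed language.
// Assumes LanguageSupportPageNames is installed. "language_code" is a
// hypothetical custom field on each Language holding e.g. "en" or "de".
foreach($languages as $language) {
    $code = $language->isDefault() ? 'x-default' : $language->language_code;
    echo "<link rel='alternate' hreflang='$code' href='{$page->localHttpUrl($language)}'>\n";
}
?>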
    • By FrancisChung
      Hi, I have an ongoing issue with Google SEO that I can't seem to fix. Wondering if anyone has come across a similar situation?

We deployed a new version of the website using a new deployment methodology and, unfortunately, the wrong robots.txt file was deployed, basically telling Googlebot not to crawl the site.

      The end result is that if our target keywords are used for a (Google) search, our website is displayed on the search page with "No information is available for this page." 

Google provides a link on the search listing to fix this situation, but so far everything I have tried hasn't fixed it.
      I was wondering if anyone has gone through this scenario and what was the steps to remedy it?
      Or perhaps it has worked and I have misunderstood how it works?

The steps I have tried in the Google Webmaster Tools:
  1. Gone through all crawl errors
  2. Restored the robots.txt file and verified it with the robots.txt tester
  3. Fetch / Fetch and Render as Google, as both Desktop and Mobile, using the root URL and other URLs, with Indexing Requested / Indexing Requested for URL and Linked Pages
  4. Uploaded a new sitemap.xml
Particularly on the Sitemap page, it says 584 submitted, 94 indexed.
       
Would the search engine return "No information available" because the page is not indexed? The pages I'm searching for are our 2 most popular keywords and entry points into the site. They're also 2 of our most popular category pages. So I'm thinking that probably isn't the case, but ...

      How can I prove / disprove the category pages are being indexed?

The site in question is Sprachspielspass.de. The keywords to search for are fingerspiele and kindergedichte.
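
For what it's worth, one quick way to check whether a specific page is indexed is Google's site: operator, e.g. searching for site:sprachspielspass.de/fingerspiele (adjust the path to the actual category URL); it only returns results for URLs that are in the index, so an empty result strongly suggests the page is not indexed.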

       
    • By John W.
      SYNOPSIS
A little guide to generating a sitemap.xml using (I believe) a script Ryan originally wrote, with the addition of being able to optionally exclude child pages from the sitemap.xml output.
I was looking back on a small project today where I was using a PHP script to generate an XML file; I believe the original was written by Ryan. Anyway, I needed a quick fix for the script to allow me to optionally exclude children of pages from the sitemap.xml output.
      OVERVIEW
A good example of this is a site where, if you visit /minutes/, a page displays a list of board meetings which includes a title, date, description and link to download the .pdf file.
      I have a template called minutes and a template called minutes-document. The first page, minutes, when loaded via /minutes/ simply grabs all of its child pages and outputs the name, description and actual path of an uploaded .pdf file for a visitor to download.
      In my back-end I have the template MINUTES and MINUTES-DOCUMENT. Thus:


So, basically, their employee can log in, hover over Minutes, click New, then create a new (child) page and name it the date of the meeting, e.g. June 3rd, 2016:

       
      ---------------------------
      OPTIONALLY EXCLUDING CHILDREN - SETUP
      Outputting the sitemap.xml and optionally excluding children that belong to a template.
      The setup of the original script is as follows:
      1. Save the file to the templates folder as sitemap.xml.php
      2. Create a template called sitemap-xml and use the sitemap.xml.php file.
      3. Create a page called sitemap.xml using the sitemap-xml template
       
      Now, with that done you will need to make only a couple of slight modifications that will allow the script to exclude children of a template from output to the sitemap.xml
      1. Create a new checkbox field and name it:   sitemap_exclude_children
      2. Add the field to a template that you want to control whether the children are included/excluded from the sitemap. In my example I added it to my "minutes" template.
      3. Next, go to a page that uses a template with the field you added above. In my case, "MINUTES"
      4. Enable the checkbox to exclude children, leave it unchecked to include children.
      For example, in my MINUTES page I enabled the checkbox and now when /sitemap.xml is loaded the children for the MINUTES do not appear in the file.

       
      A SIMPLE CONDITIONAL TO CHECK THE "sitemap_exclude_children" VALUE
      This was a pretty easy modification to an existing script, adding only one line. I just figure there may be others out there using this script with the same needs.
      I simply inserted the if condition as the first line in the function:
function renderSitemapChildren(Page $page) {
    if($page->sitemap_exclude_children) return "";
    ...
}
      THE FULL SCRIPT WITH MODIFICATION
<?php
/**
 * ProcessWire Template to power a sitemap.xml
 *
 * 1. Copy this file to /site/templates/sitemap-xml.php
 * 2. Add the new template from the admin.
 *    Under the "URLs" section, set it to NOT use trailing slashes.
 * 3. Create a new page at the root level, use your sitemap-xml template
 *    and name the page "sitemap.xml".
 *
 * Note: hidden pages (and their children) are excluded from the sitemap.
 * If you have hidden pages that you want to be included, you can do so
 * by specifying the ID or path to them in an array sent to the
 * renderSiteMapXML() method at the bottom of this file. For instance:
 *
 * echo renderSiteMapXML(array('/hidden/page/', '/another/hidden/page/'));
 *
 * Patch to prevent pages from including children in the sitemap when a
 * field is checked / johnwarrenllc.com
 * 1. Create a checkbox field named sitemap_exclude_children
 * 2. Add the field to the parent template(s) you plan to use
 * 3. When a new page is created with this template, checking the field
 *    will prevent its children from being included in the sitemap.xml output
 */

function renderSitemapPage(Page $page) {
    return  "\n<url>" .
            "\n\t<loc>" . $page->httpUrl . "</loc>" .
            "\n\t<lastmod>" . date("Y-m-d", $page->modified) . "</lastmod>" .
            "\n</url>";
}

function renderSitemapChildren(Page $page) {
    if($page->sitemap_exclude_children) return ""; /* Added to exclude CHILDREN if field is checked */
    $out = '';
    $newParents = new PageArray();
    $children = $page->children;
    foreach($children as $child) {
        $out .= renderSitemapPage($child);
        if($child->numChildren) $newParents->add($child);
            else wire('pages')->uncache($child);
    }
    foreach($newParents as $newParent) {
        $out .= renderSitemapChildren($newParent);
        wire('pages')->uncache($newParent);
    }
    return $out;
}

function renderSitemapXML(array $paths = array()) {
    $out =  '<?xml version="1.0" encoding="UTF-8"?>' . "\n" .
            '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">';
    array_unshift($paths, '/'); // prepend homepage
    foreach($paths as $path) {
        $page = wire('pages')->get($path);
        if(!$page->id) continue;
        $out .= renderSitemapPage($page);
        if($page->numChildren) {
            $out .= renderSitemapChildren($page);
        }
    }
    $out .= "\n</urlset>";
    return $out;
}

header("Content-Type: text/xml");
echo renderSitemapXML();
// Example: echo renderSitemapXML(array('/hidden/page/'));
In conclusion, I have used a couple of different ProcessWire sitemap-generating modules. But for my needs, the above script is fast and easy to set up and modify.
      - Thanks