FrancisChung

Google not indexing Category Pages


Hi, I have an ongoing issue with Google SEO that I can't seem to fix. Wondering if anyone has come across a similar situation?

We deployed a new version of the website using a new deployment methodology, and unfortunately the wrong robots.txt file was deployed, basically telling Googlebot not to crawl the site.

The end result is that if our target keywords are used for a (Google) search, our website is displayed on the search page with "No information is available for this page." 

Google provides a link on the search listing to fix this situation, but so far nothing I have tried there has fixed it.
I was wondering if anyone has gone through this scenario, and what the steps were to remedy it?
Or perhaps it has worked and I have misunderstood how it works?

The steps I have tried in Google Webmaster Tools:

  1. Gone through all crawl errors
  2. Restored the robots.txt file and verified it with the robots.txt Tester
  3. Fetch as Google / Fetch and Render, as both Desktop and Mobile, using the root URL and other URLs, requesting indexing for both the URL alone and the URL plus its linked pages
  4. Uploaded a new sitemap.xml

On the Sitemaps page in particular, it says 584 submitted, 94 indexed.

 

Would the search engine return "No information is available" because the page is not indexed? The pages I'm searching for correspond to our two most popular keywords and entry points into the site; they are also our two most popular category pages. So I'm thinking that probably isn't the case, but ...

How can I prove / disprove the category pages are being indexed?

The site in question is Sprachspielspass.de. The keywords to search for are fingerspiele and kindergedichte.
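(One way I can think of to check: as far as I know, Google's site: operator only returns pages that are already in the index, so a query like

site:sprachspielspass.de fingerspiele

coming back empty for a category page would suggest that page isn't indexed.)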

 


Just wondering, but how long has this been happening? In somewhat similar situations I've had delays of anywhere from hours to days.

Another thing I noticed is that your (current) robots.txt has a max-age of 30 days. According to Google's docs, they may "increase the time" a robots.txt file is cached based on this header, so in theory they could still be holding on to the old version. In that case I don't really know which steps to take, but first I'd wait for a few days to make sure that it isn't just Google's regular delay :)
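For what it's worth, you can check what's being served yourself from the command line — a plain curl HEAD request, grepping for the two caching headers:

curl -sI http://sprachspielspass.de/robots.txt | grep -iE 'cache-control|expires'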


Hi @teppo, it's been going on for a few weeks now. Very frustrating.
It's definitely beyond the wait-and-see stage now.

Is the robots.txt max-age defined in .htaccess? I don't recall creating such a setting for robots.txt specifically, but I do recall a general cache-header setting for the website as a whole.
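If that general setting uses mod_expires, it might explain this — just a guess at what could be in my .htaccess, but a default rule like the one below applies to every file that has no more specific rule, robots.txt included:

<IfModule mod_expires.c>
  ExpiresActive On
  # "1 month" = 2592000 seconds; mod_expires emits both an Expires
  # header and a matching Cache-Control: max-age
  ExpiresDefault "access plus 1 month"
</IfModule>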

 


I ran the robots.txt file through this header checker:
http://www.webconfs.com/http-header-check.php

It says:
Cache-Control => max-age=2592000
Expires => Wed, 21 Feb 2018 18:21:44 GMT

Which is a bit baffling.

The homepage and main category pages are showing Cache-Control: no-cache at the moment.
I do have ProCache installed, but it's currently off.

These are our current Cache-Control settings:

 

<IfModule mod_headers.c>
  # Everything below is set to 8 days (691200 seconds) at the moment
  # Images
  <FilesMatch "\.(ico|jpe?g|png|gif|swf)$">
    Header set Cache-Control "max-age=691200, public"
  </FilesMatch>
  # Fonts
  <FilesMatch "\.(eot|svg|ttf|woff|woff2)$">
    Header set Cache-Control "max-age=691200, public"
  </FilesMatch>
  # CSS
  <FilesMatch "\.(css)$">
    Header set Cache-Control "max-age=691200, public"
  </FilesMatch>
  # JS
  <FilesMatch "\.(js)$">
    Header set Cache-Control "max-age=691200, private"
  </FilesMatch>
  # HTML / PHP: always revalidate
  <FilesMatch "\.(x?html?|php)$">
    Header set Cache-Control "private, must-revalidate"
  </FilesMatch>
</IfModule>

I will add a separate FilesMatch block for robots.txt and see what happens ...
 


  <filesMatch "^robots.(txt|php)$">
    Header Set Cache-Control "max-age=0, public"
  </filesMatch>
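Once that's deployed, the header check above should report Cache-Control: max-age=0, public for robots.txt — assuming I've got the FilesMatch pattern right — so Google would have no reason to keep serving the stale copy.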

 


I guess you're committing major SEO sins here... You deliver the exact same content under several URLs / domains. Google is not amused by such practices.

e.g.

http://sprachspielspass.de/kinderlieder/alle-kinderlieder/ri-ra-rutsch/

http://kinder-reime.com/kinderlieder/alle-kinderlieder/ri-ra-rutsch/

http://finger-spiele.com/kinderlieder/alle-kinderlieder/ri-ra-rutsch/

This is called "black hat SEO" and is frowned upon. I only discovered these other domains by checking your HTTP headers, where it says

access-control-allow-origin: kinder-reime.com, finger-spiele.com, sprachspielspass.de

 

21 hours ago, dragan said:

You deliver the exact same content under several URLs / domains. Google is not amused by such practices. [...]

 

@dragan, the other two domains are our test server / test domains. In theory, Google or any other crawler shouldn't be indexing them, because the robots.txt for those sites instructs them not to. Surely this isn't a frowned-upon practice?

In fact, the reason I'm in such a pickle is that I migrated the code + contents from our UAT site to our live site, including the robots.txt file from UAT.

I'm aware of what Google does to punish sites that circumvent search engine rules, and I thought it's pretty hard to circumvent them these days. I assure you that cannibalising our own SEO rankings is not our intention here.

Perhaps I need to fix access-control-allow-origin to list only one domain? I wasn't sure whether listing all our domains would have a negative impact.
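From what I've read, the CORS spec only allows a single origin (or *) in that header, so browsers won't honour a comma-separated list anyway. A common workaround — just a sketch, assuming mod_setenvif and mod_headers are available — is to echo back whichever of the listed origins made the request:

<IfModule mod_headers.c>
  # Capture the request's Origin header if it matches one of our domains
  SetEnvIf Origin "^(https?://(www\.)?(sprachspielspass\.de|kinder-reime\.com|finger-spiele\.com))$" ACAO_ORIGIN=$1
  # Echo back that single origin instead of a comma-separated list
  Header set Access-Control-Allow-Origin "%{ACAO_ORIGIN}e" env=ACAO_ORIGIN
  # Tell caches the response varies by requesting origin
  Header append Vary Origin
</IfModule>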


Hi,

Also, you don't have the non-www version redirected to the www version, or vice versa.

And you should normally have only one canonical URL declared in your source code (the <link rel="canonical"> tag).
Currently, it changes depending on whether you are on the non-www or www version.
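A sketch of the usual .htaccess fix — assuming Apache with mod_rewrite; adjust the domain and scheme to your setup — is a 301 from the non-www host to the www one, so only one version gets crawled:

RewriteEngine On
RewriteCond %{HTTP_HOST} ^sprachspielspass\.de$ [NC]
RewriteRule ^(.*)$ http://www.sprachspielspass.de/$1 [R=301,L]

The canonical tag should then point at the www version from both hosts, e.g. (the path here is illustrative):

<link rel="canonical" href="http://www.sprachspielspass.de/fingerspiele/">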


I've managed to get Google to display our meta description correctly again today in its search results.

Not sure which of the following corrected it, but the steps I took were:

1) Stopped caching robots.txt (.htaccess):

  <filesMatch "^robots.(txt|php)$">
    Header Set Cache-Control "max-age=0, public"
  </filesMatch>

2) Removed the other test sites from access-control-allow-origin
3) Created a sitemap-category.xml file listing the homepage plus the important category pages (roughly like the sketch below) and uploaded it to Google
4) Uploaded sitemap.xml to Google again after fixing 1) & 2)
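For reference, a minimal category sitemap along those lines — the URLs are illustrative, not the exact ones I submitted:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>http://www.sprachspielspass.de/</loc></url>
  <url><loc>http://www.sprachspielspass.de/fingerspiele/</loc></url>
  <url><loc>http://www.sprachspielspass.de/kindergedichte/</loc></url>
</urlset>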

I first tried 3) on its own, but it didn't fix the problem (Google initially indexed 0 pages ... then 1 page today).

I tried 2), 3) and 4) today and that seems to have fixed it.
Step 1) I had already tried when I first posted this thread.

It's also quite possible it "organically" fixed itself today by coincidence and none of my steps was actually effective.

As for the canonical tag issue, I'm using the MarkupSEO module, so it's possibly an issue with that.
I'll post any updates here.

Thanks again to @dragan, @Christophe & @teppo for responding.

