Search the Community

Showing results for tags 'seo'.

Found 31 results

  1. Docs & Download: rockettpw/markup-sitemap
     Modules Directory: MarkupSitemap
     Composer: rockett/sitemap

     MarkupSitemap is essentially an upgrade to MarkupSitemapXML by Pete. It adds multi-language support using the built-in LanguageSupportPageNames. Where multi-language pages are available, they are added to the sitemap by means of an alternate link in that page's <url>. Support for listing images in the sitemap on a page-by-page basis and for using a sitemap stylesheet has also been added. Example when using the built-in multi-language profile:

     <url>
       <loc>http://domain.local/about/</loc>
       <lastmod>2017-08-27T16:16:32+02:00</lastmod>
       <xhtml:link rel="alternate" hreflang="en" href="http://domain.local/en/about/"/>
       <xhtml:link rel="alternate" hreflang="de" href="http://domain.local/de/uber/"/>
       <xhtml:link rel="alternate" hreflang="fi" href="http://domain.local/fi/tietoja/"/>
     </url>

     It also uses a locally maintained fork of a sitemap package by Matthew Davies that assists in automating the process. It doesn't use the same sitemap_ignore field available in MarkupSitemapXML. Rather, it renders sitemap options fields in a Page's Settings tab. One of the fields is for excluding a Page from the sitemap, and another is for excluding its children. You can assign which templates get these config fields in the module's configuration (much like you would with MarkupSEO). Note that the two exclusion options operate independently at this point, as there may be cases where you don't want to show a parent page but only its children. Whilst unorthodox, I'm leaving the flexibility there. (The home page cannot be excluded from the sitemap, so the applicable exclusion fields won't be available there.) As of December 2017, you can also exclude templates from sitemap access altogether, whilst retaining their settings if previously configured. Sitemap also allows you to include images for each page at the template level, and you can disable image output at the page level. The module allows you to set the priority on a per-page basis (it's optional and will not be included if not set). Lastly, a stylesheet option has also been added. You can use the default one (enabled by default), or set your own. Note that if the module is uninstalled, any saved per-page data is removed. The same thing happens for a specific page when it is deleted after having been trashed.
  2. Hello everyone, I was trying to update the SEO meta title, description and meta keywords for my website in ProcessWire CMS. It is saving in the backend, but it is not reflecting on my website. Please help me with this error. Please find the attached screenshot below for your reference. TIA.
  3. Hi, we can choose the "headline", "title" and "summary" in the page panel of ProcessWire, but we can't write the meta description and tags. I can write the meta description and tags in the templates, but I have the same template for many articles... so how can I change the meta description and tags per article? Thanks...
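     A common approach here (a minimal sketch, assuming you create your own per-page fields, named meta_description and meta_tags below, and add them to the article template) is to echo those fields from the one shared template file, so each article carries its own values:

     <?php
     // a minimal sketch for the shared template's <head>; the field names
     // 'meta_description' and 'meta_tags' are assumptions, swap in your own
     $description = $page->get('meta_description|summary'); // falls back to 'summary' if empty
     if($description) {
         echo "<meta name='description' content='" . $sanitizer->entities($description) . "'>";
     }
     if($page->meta_tags) {
         echo "<meta name='keywords' content='" . $sanitizer->entities($page->meta_tags) . "'>";
     }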
  4. Hello all, these days I'm finishing a project that is a combination of an e-commerce and portal website. The problem is the "Products" page tree, because the content needs to be divided into 48 categories. The client originally gave me content divided into 68 categories, but in the end they accepted my proposal to reduce that number. Even with 48 categories, though, the "Products" page tree in the backend is a very long list and very hard to administer. On the front end, a mega-menu with labels (menu group headings) solved that problem for visitors. In the end, I decided to test a variant with additional categories that group the categories, and the result is good (administrators can now find what they want very easily). Problem: the URLs are now longer, and in some parts (categories) I'm not sure how that can affect SEO (e.g. before: "products/showers/some-product", now: "products/showers-and-bathtubs/showers/some-product"). There are a few more examples like that. So the URLs are now longer (more characters) and deeper. I added a new level of categorisation purely for better administration, and I think that is not the way to go. What is your opinion or suggestion? Maybe a different, custom admin template with custom navigation, or some folders? Or...? Thanks.
  5. Is there any way with PW to do environment-specific robots.txt, i.e. to block robots from staging sites without having to manually edit files in different environments?
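     One possible approach (a minimal sketch, not a tested recipe): serve robots.txt from a ProcessWire template and branch on the hostname, so staging environments get a blanket Disallow without any per-environment file edits. This assumes a page named robots.txt using a robots-txt template, with trailing slashes disabled in the template's URL settings:

     <?php
     // a minimal sketch: site/templates/robots-txt.php
     // the 'staging.' hostname prefix is an assumption, match your own setup
     header("Content-Type: text/plain");
     if(strpos($config->httpHost, 'staging.') === 0) {
         echo "User-agent: *\nDisallow: /\n"; // block everything on staging
     } else {
         echo "User-agent: *\nDisallow:\n"; // allow everything in production
     }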
  6. Hi All 🙂 How do I append a canonical URL to the head from certain templates? Thanks!!!
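     One common pattern (a minimal sketch, assuming a shared head include; the template names are placeholders to swap for your own):

     <?php
     // a minimal sketch: in the shared head include, emit a canonical link
     // only for selected templates
     if(in_array($page->template->name, array('post', 'product'))) {
         echo "<link rel='canonical' href='{$page->httpUrl}'>";
     }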
  7. Hi there, I added an SSL certificate to my site and I'd like to redirect every single http URL to its new https version. So I added this code in the .htaccess file, after RewriteEngine On:

     Redirect 301 /about https://www.mysite.it/about

     Unfortunately this is not working: I get the "too many redirects" error. The following code works, but it's a bulk redirection to the home page, something I don't want for SEO reasons (https://moz.com/blog/save-your-website-with-redirects):

     RewriteCond %{HTTP_HOST} mysite\.it [NC]
     RewriteCond %{SERVER_PORT} 80
     RewriteRule ^(.*)$ https://www.mysite.it/$1 [R,L]

     Any suggestions?
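     A likely culprit (an assumption, not verified against this exact setup): Redirect is handled by mod_alias, which runs independently of the mod_rewrite rules ProcessWire uses, so both keep acting on the same request. A sketch of a path-preserving https redirect done with mod_rewrite only, placed directly after RewriteEngine On and before ProcessWire's own rules:

     RewriteCond %{HTTPS} off
     RewriteRule ^(.*)$ https://www.mysite.it/$1 [R=301,L]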
  8. Hey there, I guess a lot of you have already heard of the hreflang attribute, which tells search engines which URL they should list on their result pages. For some of my projects I build this manually, but now I'm wondering whether there's a need to add this as a module to the PW modules directory. How do you deal with the hreflang thingy? Would you be happy if you could use a module for this, or do you have concerns that a module might not cover your current use cases? Cheers, Chris
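     For context, the manual approach on a multi-language PW site usually looks something like this (a minimal sketch, assuming LanguageSupportPageNames is installed; the default-language code 'en' is an assumption to map as needed):

     <?php
     // a minimal sketch: emit hreflang alternates for each language
     foreach($languages as $language) {
         if(!$page->viewable($language)) continue; // skip languages the page isn't active in
         $code = $language->isDefault() ? 'en' : $language->name;
         echo "<link rel='alternate' hreflang='$code' href='" . $page->localHttpUrl($language) . "'>\n";
     }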
  9. Had a question about trailing slashes and forcing one or the other. I've a site where most pages can be accessed with AND without a trailing slash, i.e. domain.com/about-us/contact and domain.com/about-us/contact/ are both accessible and being indexed by Google. It's obviously bad for SEO, but I can't seem to make PW respect one and redirect. There is a setting in Templates > [template] > URLs, but I must be overlooking something, as I have 'yes' selected and both URLs are still reachable with no redirect. What do you guys do to counter this?
  10. Hi, I have an ongoing issue with Google SEO that I can't seem to fix, and I'm wondering if anyone has come across a similar situation. We deployed a new version of the website using a new deployment methodology and, unfortunately, the wrong robots.txt file was deployed, basically telling Googlebot not to scrape the site. The end result is that if our target keywords are used for a (Google) search, our website is displayed on the search page with "No information is available for this page." Google provides a link on the search listing for fixing this situation, but so far everything I have tried there hasn't fixed it. I was wondering if anyone has gone through this scenario and what the steps to remedy it were? Or perhaps it has worked and I have misunderstood how it works. The steps I have tried in Google Webmaster Tools:
      • Gone through all crawl errors
      • Restored the robots.txt file and verified it with the robots.txt tester
      • Run Fetch / Fetch and Render as Google, as both desktop and mobile, using the root URL and other URLs, requesting indexing for the URL alone and for the URL plus linked pages
      • Uploaded a new sitemap.xml
      Particularly on the Sitemap page, it says 584 submitted, 94 indexed. Would the search engine return "No information available" because the page is not indexed? The pages I'm searching for are our 2 most popular keywords and entry points into the site; they're also two of our most popular category pages. So I'm thinking it probably isn't the case, but... How can I prove or disprove that the category pages are being indexed? The site in question is Sprachspielspass.de. The keywords to search are fingerspiele and kindergedichte.
  11. SYNOPSIS A little guide to generating a sitemap.xml using (I believe) a script Ryan originally wrote, with the addition of being able to optionally exclude child pages from the sitemap.xml output. I was looking back on a small project today where I was using a PHP script to generate an XML file; I believe the original was written by Ryan. Anyway, I needed a quick fix for the script to allow me to optionally exclude children of pages from the sitemap.xml output.

      OVERVIEW A good example of this is a site where, if you visit /minutes/, a page displays a list of board meetings, including a title, date, description and a link to download the .pdf file. I have a template called minutes and a template called minutes-document. The first page, minutes, when loaded via /minutes/, simply grabs all of its child pages and outputs the name, description and actual path of an uploaded .pdf file for a visitor to download. In my back-end I have the templates MINUTES and MINUTES-DOCUMENT. So, basically, their employee can log in, hover over minutes, click new, then create a new (child) record and name it the date of the meeting, e.g. June 3rd, 2016.

      OPTIONALLY EXCLUDING CHILDREN - SETUP Outputting the sitemap.xml and optionally excluding children that belong to a template. The setup of the original script is as follows:
      1. Save the file to the templates folder as sitemap.xml.php
      2. Create a template called sitemap-xml and use the sitemap.xml.php file.
      3. Create a page called sitemap.xml using the sitemap-xml template.
      Now, with that done, you will need to make only a couple of slight modifications to allow the script to exclude children of a template from output to the sitemap.xml:
      1. Create a new checkbox field and name it: sitemap_exclude_children
      2. Add the field to a template on which you want to control whether the children are included/excluded from the sitemap. In my example I added it to my "minutes" template.
      3. Next, go to a page that uses a template with the field you added above. In my case, "MINUTES".
      4. Enable the checkbox to exclude children; leave it unchecked to include them. For example, on my MINUTES page I enabled the checkbox, and now when /sitemap.xml is loaded the children of MINUTES do not appear in the file.

      A SIMPLE CONDITIONAL TO CHECK THE "sitemap_exclude_children" VALUE This was a pretty easy modification to an existing script, adding only one line. I just figure there may be others out there using this script with the same needs. I simply inserted the if condition as the first line of the function:

      function renderSitemapChildren(Page $page) {
          if($page->sitemap_exclude_children) return "";
          ...
      }

      THE FULL SCRIPT WITH MODIFICATION

      <?php

      /**
       * ProcessWire Template to power a sitemap.xml
       *
       * 1. Copy this file to /site/templates/sitemap-xml.php
       * 2. Add the new template from the admin.
       *    Under the "URLs" section, set it to NOT use trailing slashes.
       * 3. Create a new page at the root level, use your sitemap-xml template
       *    and name the page "sitemap.xml".
       *
       * Note: hidden pages (and their children) are excluded from the sitemap.
       * If you have hidden pages that you want to be included, you can do so
       * by specifying the ID or path to them in an array sent to the
       * renderSiteMapXML() method at the bottom of this file. For instance:
       *
       * echo renderSiteMapXML(array('/hidden/page/', '/another/hidden/page/'));
       *
       * Patch to prevent pages from including children in the sitemap when a
       * field is checked / johnwarrenllc.com
       * 1. Create a checkbox field named sitemap_exclude_children
       * 2. Add the field to the parent template(s) you plan to use
       * 3. When a new page is created with this template, checking the field
       *    will prevent its children from being included in the sitemap.xml output
       */

      function renderSitemapPage(Page $page) {
          return  "\n<url>" .
                  "\n\t<loc>" . $page->httpUrl . "</loc>" .
                  "\n\t<lastmod>" . date("Y-m-d", $page->modified) . "</lastmod>" .
                  "\n</url>";
      }

      function renderSitemapChildren(Page $page) {
          if($page->sitemap_exclude_children) return ""; /* Added to exclude CHILDREN if field is checked */

          $out = '';
          $newParents = new PageArray();
          $children = $page->children;

          foreach($children as $child) {
              $out .= renderSitemapPage($child);
              if($child->numChildren) $newParents->add($child);
                  else wire('pages')->uncache($child);
          }

          foreach($newParents as $newParent) {
              $out .= renderSitemapChildren($newParent);
              wire('pages')->uncache($newParent);
          }

          return $out;
      }

      function renderSitemapXML(array $paths = array()) {
          $out =  '<?xml version="1.0" encoding="UTF-8"?>' . "\n" .
                  '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">';

          array_unshift($paths, '/'); // prepend homepage

          foreach($paths as $path) {
              $page = wire('pages')->get($path);
              if(!$page->id) continue;
              $out .= renderSitemapPage($page);
              if($page->numChildren) $out .= renderSitemapChildren($page);
          }

          $out .= "\n</urlset>";
          return $out;
      }

      header("Content-Type: text/xml");
      echo renderSitemapXML();
      // Example: echo renderSitemapXML(array('/hidden/page/'));

      In conclusion, I have used a couple of different ProcessWire sitemap-generating modules, but for my needs the above script is fast and easy to set up and modify. - Thanks
  12. Hi, I'm using FormBuilder to build forms on my website. I have different forms to track Google AdWords conversions, but I have like 20 different forms. I was wondering: how do you guys handle conversions in Google AdWords?
  13. Hi guys, I was trying to implement an SEO URL structure on another of my ProcessWire websites. The SEO team requested that we put .html on every page, e.g. http://www.mydomain.com/products.html, and if you click any product it will be like http://www.mydomain.com/products/product-one.html. After a little research I found out it can be done with the URL segments option in the template settings. I checked "Allow URL Segments", said No to "Should page URL end with a slash?" and said No to "Should URL segments end with a trailing slash?", but after all these settings, when we try to access the page it shows the 404 page, and I don't know why. Any help will be highly appreciated. Thanks, J
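     For reference, allowing URL segments only makes the extra path part available to the template; the template file still has to recognise and route it, otherwise PW throws the 404 itself. A minimal sketch of the idea (untested; it assumes the products page's children are the product pages):

     <?php
     // a minimal sketch for the 'products' template file:
     // resolve 'product-one.html' style segments to child pages
     if($input->urlSegment2) throw new Wire404Exception(); // only one segment deep
     if($input->urlSegment1) {
         $name = basename($input->urlSegment1, '.html'); // strip the .html suffix
         $product = $page->child("name=" . $sanitizer->pageName($name));
         if(!$product->id) throw new Wire404Exception();
         echo $product->render();
         return;
     }
     // ...otherwise render the products listing as usual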
  14. Hi there, I am using a URL schema like this: page.html, page1.html, etc., as the name under Settings. I don't want to start a discussion about how useful .html is. This works, but not for pages that have child pages, where the path would be page.html/child1.html or page1.html/child4.html. The parent should be without .html if I access the children, and with it if I click the parent itself.
  15. Hi guys, I'm a ProcessWire newbie and new to this forum. Happily, I've had to struggle with very few difficulties, thanks to the clear and pleasing concept and structure of PW. Currently there is only one thing that makes me brood: I have a main category 'posts' that contains the majority of all pages, so the regular URL would be 'domain/posts/post-one' etc. As I prefer the URL scheme 'domain/post-one', I followed the instructions discussed in this topic. This hook is in my 'init.php':

      wire()->addHookBefore('Page::path', function($event) {
          $page = $event->object;
          if($page->template == 'post') {
              $event->replace = true;
              $event->return = "/$page->name/";
          }
      });

      And this is in my 'home' template:

      if(strlen($input->urlSegment2)) {
          throw new Wire404Exception();
      } else if(strlen($input->urlSegment1)) {
          $name = $sanitizer->pageName($input->urlSegment1);
          $post = $pages->get("/posts/")->child("name=$name");
          if($post->id) echo $post->render();
              else throw new Wire404Exception();
          $renderMain = false;
      } else {
          // regular homepage output
          $posts = $pages->find("parent=/posts/, limit=$limit, sort=-date");
          $body = renderPosts($posts);
      }

      Both the 'home' and 'post' templates have the 'URL segments' option activated. At first sight everything is working fine: $page->url outputs '/post-one/' and the page 'domain/post-one' is getting displayed. What's frightening me is the fact that 'domain/posts/post-one' is working as well. This means 'post-one' can be addressed via two different URLs, and I'm not sure how to rate that. On one hand, nobody will ever notice the 'domain/posts/post-one' option, as it's listed nowhere, so I could just ignore it. On the other hand, I don't know for sure if this presumption is correct. Maybe there are unknown channels where the 'wrong' URLs will be spread, and then there would be duplicate content, which is bad, as far as I know. So what I'm asking for: is there an easy way to avoid the double URL scheme and output a 404 error when 'domain/posts/post-one' is called? Or should I just not care, as it doesn't matter at all? Unfortunately, I don't fully understand every line of the second code block, so I would be very grateful if someone could light it up for me a bit. Thanks + regards, Ralf
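     One possible answer to the duplicate-URL question (a minimal sketch, untested): the hook only changes how URLs are generated, so the internal /posts/post-one route still resolves; you could reject it explicitly at the top of the 'post' template by checking the requested URL:

     <?php
     // a minimal sketch: 404 the internal /posts/... route so that
     // only the short URL served via the home template answers
     if(strpos($input->url(), '/posts/') === 0) {
         throw new Wire404Exception();
     }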
  16. One of my blogs seems to be accessible from multiple URLs, and it's affecting my client's SEO. For example, using the following URL structures, I can access the same page:

      http://www.domain.not/blog/page2/
      http://www.domain.not/blog/posts/page2/

      The correct one is probably the second, as all posts are children of Blog. My actual blog structure is as follows:

      Blog
        • Posts
          • Post A
          • Post B
          • Post C (etc)
        • Tags
        • Categories

      I have pagination enabled on a template called blog-posts, which is applied to the Posts page. I'm not sure, though, why the double URL is occurring?
  17. Hello, I have had several discussions with the SEO guys about URLs in ProcessWire. There are some pages which are reachable both at the regular URL and at domain.xx/index.php?it= What do you think about this? Is it possible to redirect the "domain.xx/index.php?it=" URL to a "seo-friendlier" URL? I have this in several projects.
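     One possible fix (a sketch only, to be tested carefully): redirect external requests for index.php?it=... back to the clean URL, placed before ProcessWire's own rewrite rule in .htaccess. Matching against THE_REQUEST (the raw request line) avoids looping on ProcessWire's internal rewrite to index.php?it=$1:

     RewriteCond %{THE_REQUEST} \?it=([^&\s]+)
     RewriteRule ^index\.php$ /%1? [R=301,L]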
  18. Hi folks, bit of an odd one... I'm adding some meta tags for Facebook and Twitter to the head.inc file, and I'm trying to make the output per tag as global as possible. These are the tags I have so far:

      <meta property="og:title" content="<?php echo $page->title; ?>" />
      <meta property="og:image" content="" />
      <meta property="og:url" content="<?php echo $page->url; ?>" />
      <meta property="og:description" content="" />
      <meta name="twitter:url" content="<?php echo $page->url; ?>">
      <meta name="twitter:title" content="<?php echo $page->title; ?>">
      <meta name="twitter:description" content="">
      <meta name="twitter:image" content="">

      As you can see, I have a few obvious ones set up, but I'm curious to know how you all might think about outputting the image and description meta information. I'm thinking of finding the first image on the page and the first text output on the page, but that's a little tricky, isn't it? Do you think these should be set up as separate fields for pages within the CMS, for the client to fill out as they fill out the pages... as part of the process for SEO? And if there are any reasons why using these tags is bad, please do say so. Many thanks, R
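     One middle ground (a minimal sketch; the 'images' and 'summary' field names are assumptions, swap in your own): dedicated per-page fields that fall back to existing content when left empty:

     <?php
     // a minimal sketch: derive og:image / og:description with fallbacks
     $ogImage = ($page->images && count($page->images)) ? $page->images->first()->httpUrl : '';
     $ogDesc  = $page->summary ?: substr(strip_tags($page->body), 0, 160);
     echo "<meta property='og:image' content='$ogImage' />\n";
     echo "<meta property='og:description' content='" . $sanitizer->entities($ogDesc) . "' />\n";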
  19. I moved an old site to ProcessWire and want to redirect the old URLs with PHP filenames to the new folders via .htaccess, to match the new structure. I tried:

      RedirectMatch 301 /filename.php http://example.com/filename/

      The browser ends up like this:

      http://example.com/filename/?it=filename.php

      It's interfering with this ProcessWire .htaccess rule:

      # -----------------------------------------------------------------------------------------------
      # Pass control to ProcessWire if all the above directives allow us to this point.
      # For regular VirtualHosts (most installs)
      # -----------------------------------------------------------------------------------------------
      RewriteRule ^(.*)$ index.php?it=$1 [L,QSA]

      If I comment that out, the redirect in the URL is OK, but every click on a website link shows the homepage. Any idea how I can fix this? Thanks, Andreas
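     A likely explanation (an assumption, but a common gotcha): RedirectMatch belongs to mod_alias, which runs independently of mod_rewrite, so ProcessWire's rewrite still appends its ?it= query string to the redirected URL. A sketch of the usual workaround, doing the redirect with mod_rewrite instead and placing it before ProcessWire's rules in .htaccess:

     RewriteRule ^filename\.php$ /filename/ [R=301,L]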
  20. Hi guys, I have a question in regards to SEO and ProcessWire. We are in the process of migrating a site with about 600+ pages that resides on a bespoke CMS. About 200 of these pages are keyword-targeted pages; they are content-heavy, with friendly URLs. However, these pages reside in the CMS under a section called "other". Since the CMS they are on is custom rather than ProcessWire, the URLs are vanity URLs, whereas in ProcessWire the URL would look like /other/page-title. Therefore, I have set up a vanity URL field and used a similar approach to what Soma did here: https://processwire.com/talk/topic/3057-how-to-keep-pages-organized-when-managing-lots-of-landing-pages/ But in my solution I do show the shorter vanity URL, which for some users displays the redirect. My question is this: if I show the shorter URL, is Google going to crawl the page as /other/page-title or /page-title? Any help is appreciated. Thanks guys!
  21. Perhaps this is the wrong subforum, and perhaps there are better ways to do this, but I wanted to share some thoughts regarding multi-site setups with multiple robots.txt files. If you are running multiple sites with one PW setup, you can't place multiple robots.txt files in your root. As long as all robots.txt files are identical there is no problem with it, and you can stop reading right here. In my robots.txt, though, I wanted to include a link to the current site's sitemap, e.g.:

      Sitemap: http://www.domain.com/sitemap.xml

      So I put each robots.txt into the "site-" directories. Search engines expect the robots.txt file directly in the root, so I added some lines to my .htaccess file (you have to do this for each domain):

      # domain 1
      RewriteCond %{REQUEST_URI} /robots\.txt
      RewriteCond %{HTTP_HOST} www\.domain\. [NC]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^(.*)$ /site-domain/robots.txt [L]

      # domain 2
      RewriteCond %{REQUEST_URI} /robots\.txt
      RewriteCond %{HTTP_HOST} www\.domain\-2\. [NC]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^(.*)$ /site-domain-2/robots.txt [L]

      Another possible approach: create a PW page within each site.
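     A minimal sketch of that second approach (assumptions: a robots-txt template with trailing slashes disabled and a /robots.txt page in each site): because the Sitemap line is built from the current site's own home page URL, a single template file can serve every site:

     <?php
     // a minimal sketch: site/templates/robots-txt.php
     header("Content-Type: text/plain");
     echo "User-agent: *\n";
     echo "Disallow:\n";
     echo "Sitemap: " . $pages->get('/')->httpUrl . "sitemap.xml\n";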
  22. I'd be grateful if anyone has a solution to the following issue. By default ProcessWire builds its URLs like this: www.domain.com/pagenameparent/pagenamechild On the surface this seems excellent for SEO, but it has been causing us a few issues. Let me explain. In a responsive environment, a drop-down menu like the one below must activate on touch/click, and the top level is NOT a page. (Obvious, when you consider that on a touch screen the tap has to open the drop-down rather than follow a link.) See this example: ABOUT US is a top-level menu with children. In this arrangement on a Bootstrap menu, ABOUT US cannot be clicked, such as on a tablet; therefore, ABOUT US is not a page. Let's take the first child as an example. WHO WE ARE would have its URL structure built by ProcessWire like this: www.domain.com/about-us/who-we-are/ Two problems here: if we create breadcrumb navigation, then ABOUT US becomes a link to a page that should not exist; and /about-us/ on its own creates issues with SEO. In a complex system, trying to exclude these from searches and sitemaps is a real issue. What would solve the problem is the ability to rewrite the URL structure. It would be preferable to write the example URL as www.domain.com/who-we-are/ Can this be done without having to change any core code?
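     For what it's worth, a hook-based sketch along the lines of the one quoted in result 15 above (untested; 'about-us' stands in for whichever non-page parents should be skipped): shorten the generated paths for children of such parents, then resolve the short URLs via URL segments on the home template, as result 15 does:

     <?php
     // a minimal sketch for site/init.php: drop the 'about-us' segment
     // from generated URLs for its children
     wire()->addHookBefore('Page::path', function($event) {
         $page = $event->object;
         if($page->parent->name == 'about-us') {
             $event->replace = true;
             $event->return = "/$page->name/";
         }
     });
     // the home template then needs a URL-segment check that looks the page
     // up under /about-us/ and renders it, as in result 15 above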
  23. How can I possibly get rid of blank menu pages in ProcessWire? I've been told that, the way PW works, you always need to create a page in order to have a top-level menu. For example:

      Top Menu
        • submenu 1
        • submenu 2
        • submenu 3

      Now, although Top Menu is not clickable in the main navigation, PW still creates a blank page for it, which I'm trying to get rid of. Any suggestions please?
  24. Title inspiration. TL;DR: unless a last bit of checking I am doing over the next short while concludes otherwise, I am going to convert all my sites to run over httpS connections, and later on ensure all the certificates I use are of type SHA-2, not SHA-1.

      Dull detail: I may be wrong about a little or a lot of this stuff, so please check my facts before you rush off and do things, but I've learnt some new stuff over the last little while and thought I'd share it with PW friends in case it's of any help. The following is just a bunch of things that I believe are correct and that may be helpful; sorry I had no time to write it up into a nice article/post:
      • Google prefers websites that serve their pages over httpS connections, so clearly it's a good idea to make a website deliver pages over httpS (source)
      • Many of the companies selling certificates are selling SHA-1 type certificates rather than SHA-2 (I bought two in recent months from different suppliers and they are both SHA-1). Want to test a site's certificate? Check out https://shaaaaaaaaaaaaa.com/ (even though the URL looks mad, I believe it's good and comes from this authoritative-looking source: https://konklone.com/post/why-google-is-hurrying-the-web-to-kill-sha-1)
      • Google is gradually sunsetting SHA-1 in favour of SHA-2
      • Microsoft, Chrome, and Firefox all recently deprecated SHA-1, and plan to turn it off in 2017 (source)
  25. Just in case anyone else is interested who, like me, didn't know until now: it looks like goodrelations (http://www.heppnetz.de/projects/goodrelations/) is a valuable add-on to the use of schema.org. I know nothing about goodrelations except that it appears to be blessed by Google, Yahoo and Microsoft (I think; I'm pretty sure I found them via a link within the schema.org site, and once I visited the goodrelations site I noticed they say they are used by Google and Yahoo). A first read suggested it's something like an extension library for e-commerce markup, but don't quote me on that, I only skim-read a page. Just wanted to note it here in case it's helpful to you.