This article introduces how to create an EPiServer CMS site with a search-engine-friendly foundation that is easy for editors to maintain.
Have you placed the meta title and meta description under the advanced tab just because metadata is advanced stuff? Well, you’re probably not the only one. But, if you didn’t know, metadata nowadays is for everyone, since all major search engines use this data, both for presentation and ranking.
Therefore, the meta title and meta description should be highlighted at least on all section pages. A section page is typically the start page, the pages in the main menu, and other pages with several sub-pages. An example of how the metadata can be set up in EPiServer CMS is described below.
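As a sketch of the idea (plain Python rather than EPiServer’s C# API, and with hypothetical property names `MetaTitle` and `MetaDescription`), the head tags could be rendered from editor-managed page properties like this:

```python
import html

def render_meta(page):
    """Render <title> and description tags from editor-managed properties.
    The property names "MetaTitle" and "MetaDescription" are hypothetical."""
    # Fall back to the page name when no meta title has been entered.
    title = page.get("MetaTitle") or page["PageName"]
    tags = ["<title>%s</title>" % html.escape(title)]
    description = page.get("MetaDescription")
    if description:
        tags.append('<meta name="description" content="%s" />'
                    % html.escape(description, quote=True))
    return "\n".join(tags)
```

The fallback to the page name matters: a page should never go out with an empty title just because an editor skipped the field.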
A good way to keep track of the titles and descriptions is to create an internal-only sitemap. The sitemap should show the friendly URL, page name, meta title and meta description for all pages. This makes it really easy to see what the pages will look like in the search engines’ result listings.
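A minimal sketch of such an overview, assuming each page exposes its friendly URL, name and metadata (the key names here are made up for illustration):

```python
def internal_sitemap(pages):
    """Build a plain-text overview of friendly URL, page name, meta title
    and meta description, suitable for an editors-only sitemap page."""
    header = ("URL", "Page name", "Meta title", "Meta description")
    rows = [header]
    for p in pages:
        rows.append((p["Url"], p["PageName"],
                     p.get("MetaTitle", ""), p.get("MetaDescription", "")))
    # Tab-separated rows are enough for an internal overview table.
    return "\n".join("\t".join(row) for row in rows)
```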
It is also possible to add meta keywords for each page, but these are better for internal search, since most search engines don’t give them much weight in their ranking. A better way of highlighting keywords is to use headings in the content. Make sure to only have well-structured formats available in the WYSIWYG editor. This will force the editors to use the correct semantic elements.
“Make sure the keywords are on the page”. Remember, you read it here first. By thinking about what people are going to type in to find content on your site, you can make sure these keywords are on the respective section pages. These section pages will then work as landing pages for specific topics and can later also be used if you start buying keywords. Also make sure to use these keywords when you link to these pages internally.
Using a term over and over again, also called keyword stuffing, is generally a bad idea. Treat the search engines like humans: if they aren’t interested after you’ve told them a couple of times, they probably never will be.
Title pretty much says it all. Doesn’t it? The canonical URL is the primary domain you want to use for your site. Let’s say you’ve got the following domain names, which all serve the same content:
To help your visitors and the search engines, you should select one of these domains as the preferred domain name. That is, the URL that is shown in the address bar and the URL that the search engines show in their results. All other URLs should just make a permanent redirect (a 301, if you prefer that language) to the main URL. Also make sure that all internal links use this main URL.
By using just one domain name you will get a better search engine ranking, since this domain will have more content and more incoming links. Note that some search engines’ webmaster centrals now also support setting a preferred domain, but that doesn’t mean you should skip the permanent redirects.
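The redirect decision itself is simple. Here is a minimal sketch in Python, with `www.example.com` standing in for your preferred domain; in an ASP.NET site the equivalent check would typically run early in the request pipeline and answer with a 301 status and a Location header:

```python
def canonical_redirect(host, path, preferred_host="www.example.com"):
    """Return the URL to 301-redirect to, or None if the request already
    uses the preferred domain. The domain name is an example."""
    if host.lower() == preferred_host:
        return None  # Already canonical; serve the page as usual.
    return "http://%s%s" % (preferred_host, path)
```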
For further reading on how to accomplish a preferred domain with multiple languages in EPiServer, see Ted Nyberg’s blog post:
If you’re curious, EPiServer CMS uses permanent redirects to make sure that the friendly URL is always shown in the address bar.
The Sitemaps protocol defines a way for webmasters to tell search engines about the pages on their site that are available for crawling. This is done by providing an XML file with URLs to all pages on the site. But, beyond just the URLs, the file also contains additional data for each page, including when it was last updated, how often it usually changes and how important it is relative to other URLs on the site. I’ll leave it as an exercise for the reader how this metadata can help all the crawlers out there. Find out more at: http://www.sitemaps.org/
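For illustration, a file with these fields can be built from a list of pages. This is a minimal Python sketch of the XML format only, not a full generator:

```python
import xml.etree.ElementTree as ET

def build_sitemap(pages):
    """Serialize pages into Sitemaps-protocol XML.
    Each page is a (url, lastmod, changefreq, priority) tuple."""
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for loc, lastmod, changefreq, priority in pages:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = loc
        ET.SubElement(url, "lastmod").text = lastmod          # e.g. "2009-05-01"
        ET.SubElement(url, "changefreq").text = changefreq    # e.g. "weekly"
        ET.SubElement(url, "priority").text = priority        # relative, 0.0-1.0
    return ET.tostring(urlset, encoding="unicode")
```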
The Sitemaps protocol has wide adoption and is currently supported by, for example, Google, Yahoo! and Microsoft. You can download a sitemap generator for your EPiServer CMS site from Jacob Khan's blog:
Use it, make sure to register it with the relevant search engines, and keep track of the current status of the links in the search engines’ administration interfaces.
The page wasn’t found? Too bad, but by using common sense you will hopefully minimize the number of times the user or the search engine notices this. I would say there are two different types of "page not found":
In the first case the page usually exists somewhere else. Since cool URIs don’t change (http://www.w3.org/Provider/Style/URI) this will never happen. But if, contrary to belief, it does, you probably already know where the page currently resides. In most cases this happens when you’ve redone the information architecture, switched CMS or both. If you can use historic data to find out which page the user was looking for, make a permanent redirect to the new page.
In the second case the user has probably entered the wrong address or there is a mistyped link somewhere. If you think you know which page the user was looking for, present a 404 page with the new page as a suggestion. This suggestion can, for example, be based on parsing the URL for keywords.
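One simple way to produce such suggestions, sketched in Python: split the requested path into keyword candidates and rank the known friendly URLs by how many of those keywords they share (the sample URLs below are made up):

```python
import re

def suggest_pages(requested_path, site_urls):
    """Rank known friendly URLs by keyword overlap with a mistyped path."""
    split = lambda s: set(re.split(r"[/\-_.]+", s.lower())) - {""}
    keywords = split(requested_path)
    scored = []
    for url in site_urls:
        overlap = len(keywords & split(url))
        if overlap:
            scored.append((overlap, url))
    # Best-matching pages first; pages with no overlap are dropped.
    return [url for overlap, url in sorted(scored, reverse=True)]
```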
If you have absolutely no idea what the user is looking for, it is a good idea to present links to the main areas of your site and a search box. One important aspect of the 404 page is that it should be clearly distinguished from the design of the other pages. It should be apparent that the page wasn’t found. For inspiration I recommend:
Lastly, don’t ever ask for user input, e.g. “how did you get here?”. You have all this information already. Don’t you?
If you’re a developer you may be aware of the fact that Internet Explorer likes to show its own 404 page. The easiest way to get around this is to make sure your custom 404 page is larger than 512 bytes (counting just the text). This can, for example, be done with HTML comments if you don’t have enough content already.
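A sketch of that workaround in Python terms: measure the rendered page in bytes and pad it with an HTML comment only when it falls under the limit:

```python
def pad_for_ie(html_page, minimum=512):
    """IE replaces error pages smaller than 512 bytes with its own
    'friendly' page; pad with an HTML comment when needed."""
    size = len(html_page.encode("utf-8"))
    if size >= minimum:
        return html_page  # Large enough already; leave it alone.
    padding = "<!-- " + "x" * (minimum - size) + " -->"
    return html_page + padding
```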
As we all know, if you’re a developer anyway, most “function controls” in ASP.NET are built using a paradigm based on POST and something called ViewState. Using ViewState may be suitable in some situations, but I usually don’t recommend it, since it requires a POST from the browser. What does that mean? Well, for example, if you use the built-in calendar control in ASP.NET, you will notice that the URL in the address bar doesn’t change when you select a new day in the calendar.
If you try to bookmark this specific page you will have no luck. As a developer aware of the foundations of REST and the World Wide Web, a bell should ring saying that someone just broke one of the basic rules: use GET to get data. By using GET, the URL will always be updated. This will not only enable bookmarking; it will also make the search engine happy, as it can treat the content as any other link.
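The principle, sketched in Python: keep the selected state in the query string, so every view of the calendar has its own bookmarkable GET URL (the `/calendar/` path and `date` parameter are made-up examples):

```python
from urllib.parse import urlencode

def calendar_url(base_path, year, month, day):
    """Encode the selected day in the query string (GET) so the state
    is bookmarkable and crawlable, instead of posted back invisibly."""
    query = urlencode({"date": "%04d-%02d-%02d" % (year, month, day)})
    return "%s?%s" % (base_path, query)
```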
If you’re working with AJAX, Flash, Silverlight, etc., make sure to use anchor links (fragment identifiers) when you update the page, to notify the browser of the change. If you visit a site and the back button in your browser starts acting weird, it’s probably because the developers violated the anchor rule.