Bourbon: Doubling Down on Duplicate Content

While Allegra cracked down on duplicate content across domains, Bourbon, the subsequent update released in May 2005, continued the process: even pages with similar content within the same domain were devalued. The change affected many websites and caused a massive stir in the webmaster community.

What’s It For

At the time of the rollout, a Google spokesperson advised webmasters to take a break from checking their sites’ rankings: the team was rolling out “3.5 improvements in search quality,” which took a few days to complete. Bourbon continued the search engine’s focus on improving the relevance of search results for the benefit of the end user.

What Were Its Effects

In a WebmasterWorld forum thread, several websites were described as having gone “from the heights of the Olympus to the depth of Hades” in the blink of an eye. Most of their owners had no clue why they had suffered such a considerable drop in rankings. The SERPs also differed between Google.com and the local Google sites, with some users seeing different pages for the same query at the time.

What It Means for You

While having duplicate content won’t get you a penalty from Google, the search engine does discourage it. It hurts your SEO because it splits the relevance of your content between two URLs, diluting signals that would have had a more significant impact on your website if every page carried unique content.

Fortunately, you can spot duplicate content on the same domain. Here are typical instances to watch out for:

  • Boilerplate Content – This can show up across all parts of your website, such as the home, about us, and contact us pages. It also covers links that appear in the blogroll and navigation bar, since most content management systems let users display recent or most popular posts on the homepage. On its own, it doesn’t harm your SEO.
  • Inconsistent URL System – While you may think that www.sitesample.com/, sitesample.com, http://www.sitesample.com, http://sitesample.com/, and https://sitesample.com are the same, search bots treat each of them as a distinct page. Build your URLs consistently to avoid being flagged for duplicate content (see the sketch after this list).
  • Localized Domains – Several websites, especially those that cater to a global audience, create localized domains for each country they serve to improve user experience. An example of this practice is having a different top-level domain for your site when accessed in Australia (.au) or Germany (.de). If you don’t translate your content for German readers, search bots would treat a post on your German domain as a duplicate of the one on your Australian domain (one common remedy is sketched after this list).
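
To make the URL issue concrete, here is a minimal sketch of the normalization a site audit script might apply. The specific conventions (forcing https, adding the www. prefix, stripping the trailing slash) are assumptions for illustration; what matters is picking one canonical form and using it everywhere.

```python
from urllib.parse import urlparse, urlunparse

def normalize_url(url: str) -> str:
    """Map the common variants of a URL to one canonical form.

    Assumed conventions (adjust to your site): https scheme,
    a www. host prefix, and no trailing slash on the path.
    """
    # Default to https when the scheme is missing (e.g. "sitesample.com")
    if "://" not in url:
        url = "https://" + url
    parts = urlparse(url)

    host = parts.netloc.lower()
    if not host.startswith("www."):
        host = "www." + host

    path = parts.path.rstrip("/")  # treat "/page/" and "/page" alike

    return urlunparse(("https", host, path, "", parts.query, ""))

# All five variants from the list above collapse to one canonical URL.
variants = [
    "www.sitesample.com/",
    "sitesample.com",
    "http://www.sitesample.com",
    "http://sitesample.com/",
    "https://sitesample.com",
]
assert len({normalize_url(u) for u in variants}) == 1
print(normalize_url(variants[0]))  # -> https://www.sitesample.com
```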

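The text doesn’t prescribe a fix for localized duplicates, but one common remedy (an assumption here, not something the article names) is hreflang annotations, which tell search engines which regional URL serves which audience. A small sketch that generates them, using hypothetical regional URLs:

```python
# Hypothetical regional URLs for the Australia/Germany example above.
regional_urls = {
    "en-au": "https://www.sitesample.com.au/post",
    "de-de": "https://www.sitesample.de/post",
}

def hreflang_tags(urls: dict) -> str:
    """Build the <link rel="alternate"> tags each regional page should carry."""
    return "\n".join(
        f'<link rel="alternate" hreflang="{lang}" href="{url}" />'
        for lang, url in urls.items()
    )

print(hreflang_tags(regional_urls))
```

Each regional page carries the full set of tags, so the Australian and German versions point at each other instead of competing as duplicates.
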
Meanwhile, duplicate content from different domains comes in these forms:

  • Copied Content – This practice refers to blatant plagiarism: one website reproduces and publishes another site’s content without permission, copying a substantial chunk of information from a page and passing it off as its own. Those who do this run the risk of being penalized by Google.
  • Content Curation – Content curation, on the other hand, is when you find stories online and create blog posts based on them. Think of it as compiling articles you believe your target audience will enjoy. Google doesn’t consider this spam as long as you add fresh insights or a new perspective on the topic.
  • Content Syndication – If you promote your blog post word-for-word on third-party sites, then you’re doing content syndication. A syndicated post has several copies placed all over the Internet. Websites such as Huffington Post, Fast Company, Inc., LifeHacker, and Medium, as well as social media sites like Facebook and LinkedIn, allow duplicate content. If you plan to use this method in your digital marketing campaign, make sure the republishing websites credit you as the source and link back to your page with the right anchor text.
  • Content Scraping – Also known as data scraping, this tactic is similar to copying content, but the primary difference is that it’s automated: software programs lift unique content from a website and publish it somewhere else, and some bots can crawl through sites and copy thousands of pages in a snap. Scraping is typically done without the source’s permission and often violates copyright. What’s worse, the duplicate can end up ranking higher than the original, rubbing salt into the creator’s metaphorical wounds. Articles, blog posts, product reviews, news, research, and listings are some of the favorite targets of data scrapers (a rough way to spot copies is sketched after this list).
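
As a first pass at spotting copied or scraped text, a plain similarity comparison goes a long way. The sketch below uses difflib from Python’s standard library; the 0.8 threshold is an arbitrary assumption, and dedicated plagiarism or SEO tools are the usual choice for real audits.

```python
from difflib import SequenceMatcher

def similarity(original: str, suspect: str) -> float:
    """Return a 0..1 ratio of how closely two texts match."""
    return SequenceMatcher(None, original, suspect).ratio()

original = (
    "Bourbon continued the search engine's focus on improving "
    "the relevance of search results for the end user."
)
suspect = (
    "Bourbon continued Google's focus on improving the relevance "
    "of search results for end users."
)

score = similarity(original, suspect)
# Flag anything above an (assumed) 0.8 ratio as likely copied.
verdict = "likely copied" if score > 0.8 else "probably original"
print(f"similarity: {score:.2f} -> {verdict}")
```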
