Search engines do not like duplicate content. Google is especially aggressive when it comes to weeding out unnecessary content from its listings. The simple way Google does this is to keep the first copy it finds and to ignore all duplicate versions after that.
There are several types of duplicate content:
- Stolen content, either blatantly copied from another site or scraped by bots to generate pages.
- Content added in two locations on the site to aid navigation. This content belongs in both places, but it is really the same content.
- Accidentally duplicated content.
- This can be subtle. For example, if you have a gallery where each picture has its own page, and you keep the meta tags and page titles the same while the rest of each page contains only minimal text, you may find that only one of your gallery pages gets listed.
- Another example is what is known as boilerplate repetition, where too much content is the same on every page, such as a large copyright statement repeated across the site.
- Printer friendly pages.
- Syndicated content.
The good news is that Google will not ban your site for duplicate content. Unless the duplication appears deceptive and intentional, Google will simply not list the duplicate pages.
The best ways to avoid duplicate-content trouble are to manage your site with a good robots.txt file or, better still, to use Google Webmaster Tools and a sitemap.xml file to label the content correctly for the spiders.
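As a rough sketch of that approach, a robots.txt file can tell spiders to skip the duplicate versions of pages and point them at the sitemap instead. The directory names and domain below are hypothetical; adjust them to match where your own duplicates live:

```
# robots.txt -- minimal sketch; paths and domain are placeholders
User-agent: *
# Keep spiders away from duplicate versions of existing pages
Disallow: /print/      # printer-friendly copies
Disallow: /archive/    # older duplicates kept for visitors

# Point spiders at the canonical list of pages
Sitemap: https://www.example.com/sitemap.xml
```

The sitemap.xml then lists only one canonical URL per page, so the spiders index that version rather than guessing among duplicates.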
Oh, and of course the number one way to avoid duplicate content is to hire good writers to create original, compelling, search-engine-friendly content. And look no further than DDA, as we have the best writers in the whole world when it comes to such a task.