Google Scraper Report Allows Users to Report Scraper URLs that Outrank their Original Content
In order to maintain the quality of its search results, Google has waged an ongoing war on spam and scraped content. For years, webmasters have complained that their original content is stolen by scraper sites, which then republish it on their own websites or blogs as if it were their own. Though it is logical to assume that the original publishers of content should outrank scraper sites, in many cases scraper sites outrank the original publishers on Google's organic SERPs.
Fortunately, Google has repeatedly tried to address this issue, and the company's latest initiative was revealed by Matt Cutts via his Twitter account on February 28, 2014:
In his tweet, Cutts provided a link to Google's new Scraper Report form, which allows webmasters to report instances in which scraped content outranks original content on Google's organic SERPs. On the form, users can enter the URL of the original page the content was scraped from, the exact URL of the scraper site, and the Google search URL that demonstrates the ranking issue.
Google also requires webmasters to confirm that their sites are following its Webmaster Guidelines and that their sites have not been affected by any manual actions. However, Google has not stated exactly what it plans to do with the data it gathers, nor has it said whether it intends to remove infringing content from search results on spam grounds rather than copyright grounds. Google already offers a way to remove scraped content from its search results via the DMCA process, but critics have described that process as time-consuming.
Some SEO experts theorize that Google is harvesting this data to improve its algorithmic ranking system, so that original content appears ahead of scraped content.
Matt Cutts' Take on Duplicate Content on the Internet
While duplicate content might be a serious issue for online marketers who are trying to drive traffic to their sites, Google isn't particularly worried about the presence of duplicate content on its SERPs. In a YouTube video entitled "How does Google handle duplicate content?", Matt Cutts stated that between 25% and 30% of the content on the web is duplicative.
"Duplicate content does happen," stated Cutts. "People will quote a paragraph of a blog and then link to the blog, that sort of thing. So it's not the case that every single time there's duplicate content, it's spam. And if we made that assumption, the changes that [would] happen as a result would end up probably hurting our search quality rather than helping our search quality."
According to Cutts, Google searches for duplicate content, groups it together, and treats it as one piece of content. Instead of treating duplicate content as spam, the search engine chooses the best result from that cluster and shows it to the searcher. However, Google does regard duplicate content that is used in a malicious or manipulative way as spam, and will apply the appropriate sanctions.
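To make the idea concrete, here is a minimal, purely illustrative Python sketch of how near-duplicate pages could be clustered and a single representative shown from each cluster. The similarity measure and the hypothetical "authority" score are stand-ins for whatever signals a search engine might actually use; this is not a description of Google's system.

```python
# Toy illustration of the concept Cutts describes: group near-identical
# documents into a cluster, then surface one "best" result per cluster.
# NOT Google's algorithm -- the similarity test and the `authority` score
# below are hypothetical placeholders.
from difflib import SequenceMatcher

def cluster_duplicates(docs, threshold=0.9):
    """Greedily group documents whose text similarity exceeds `threshold`."""
    clusters = []
    for doc in docs:
        for cluster in clusters:
            if SequenceMatcher(None, doc["text"], cluster[0]["text"]).ratio() >= threshold:
                cluster.append(doc)
                break
        else:
            clusters.append([doc])  # no close match found; start a new cluster
    return clusters

def pick_best(cluster):
    """Choose one representative per cluster using a placeholder score."""
    return max(cluster, key=lambda d: d["authority"])

docs = [
    {"url": "https://original.example/post", "text": "How to prune tomato plants in spring...", "authority": 0.9},
    {"url": "https://scraper.example/copy",  "text": "How to prune tomato plants in spring...", "authority": 0.2},
    {"url": "https://other.example/article", "text": "Choosing a soil mix for potted herbs...", "authority": 0.7},
]

# Only one result from the duplicate pair is shown, plus the unrelated page.
for result in (pick_best(c) for c in cluster_duplicates(docs)):
    print(result["url"])
```

In this toy setup, the original and the scraped copy collapse into one cluster, and the page with the higher placeholder score is the one displayed; the open question the Scraper Report aims to address is what happens when that selection favors the scraper.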
Marketing Digest Writing Team