Google have reached out to website owners in their continuing battle against the poor quality results creeping into their organic listings, even though two automatic algorithms already sanity-check what they call "low quality" or "spam" sites.
In his latest outreach to the online community, Matt Cutts, head of Web Spam at Google, asked website owners to help identify scraper websites, specifically saying he was looking for sites that were ranking higher than the original content source.
Taking to social networking site Twitter, Cutts tweeted:
If you see a scraper URL outranking the original source of content in Google, please tell us about it: http://t.co/WohXQmI45X
— Matt Cutts (@mattcutts) February 27, 2014
Google know that scraper sites offer very little value and have always maintained that they try to eliminate them from their search results. With a vast number of such sites still populating the organic listings, however, they are clearly looking for a helping hand in getting to grips with automated websites that feed off unique content, dodge the Google Panda algorithm and benefit from your work.
Cutts shared a link to a Google Docs form where website owners are encouraged to help Google gather data. The form prompts you to provide the search query for which the scraper site is ranking higher than the original source, the "exact URL" where the content is hosted on the reported site, and the original URL on your own website from which the scraper stole the content.
Interestingly though, the form carries a disclaimer that website owners must ensure they are only reporting original source sites that are not currently under manual action, which seems to indicate that if your site is under a manual penalty, you are unable to report scrapers.
Questions are being asked about why Google would need webmaster input to determine what a scraper site is, but there is also speculation that Google are once again gathering enough data to build a potential algorithmic check targeting the removal of scraper websites, something that could be folded directly into what Google Panda already targets.
What do you think? Are Google getting ready to evolve Google Panda with a whole new target in tow, or are Google really struggling to determine the original source of content?