The fun fallout over the whole Bing-plagiarising-Google affair is quite something today (in case you missed it, Danny Sullivan has the definitive explanation). But apart from the “is this stealing” hoo-ha, there’s a more interesting philosophical point bubbling below the surface.
For years, and right up to the present day, people have been asking whether Google manipulate the SERPs in order to increase AdWords spend. The short version of this conspiracy theory being:
“If Google give 4 or 5 slots of the first 10 to non-commercial results (think: Wikipedia, BBC, Direct.gov.uk etc) then competition – and therefore cost per click – for AdWords slots is driven higher. Ergo, why wouldn’t they game the SERPs?”
Of course, Google have always denied this. The straight bat they play with is: “we cannot and do not manipulate search results – everything is algorithmic.” But any SEO with more than four days’ experience knows that within the algorithm, specific sites are penalised for engaging in various bits of skullduggery.
But if individual sites can be canned, what’s to stop individual sites getting a push in the opposite direction? Anyone who’s ever wrestled with a site with seemingly few quality indicators sitting just ahead of a technically brilliant and well-marketed site must have allowed that thought to surface.
With this test, Google claim to have broken their own rule, as Sullivan reports:
“Now that Google's test is done, it will be removing the one-time code it added to allow for the honeypot pages to be planted. Google has proudly claimed over the years that it had no such ability, as proof of letting its ranking algorithm make decisions. It has no plans to keep this new ability and wants to kill it, so things are back to ‘normal’.”
“No plans to keep… wants to kill it….” Really?
Google have now demonstrated that they can – and are willing to – game their own results. Does that make a subtle psychological difference to the way you view the SERPs now?
Against the conspiracy theory, it’s worth noting that the algorithm is by now of such complexity that probably no single individual at Google could describe how every aspect of it works. Like any huge piece of code, there will be legacy issues known only to people who left the project long ago. Hardcoded fixes, internal code conflicts, competing priorities across different markets and the near-constant churn of ‘tests’ mean that true understanding of the search results is probably beyond reach.
But even so: we now know what Google can do – and could do again – if they wanted. Food for thought.