I’ve been working with Eric Enge on his interview series. The topics vary, but he has recently interviewed a number of SEO industry thought leaders including Rand Fishkin and Bill Slawski on the February Google algorithm change known as Panda. The interviews are lengthy reading but full of interesting insights into what Panda actually was and how it might shape our online activities now and in the future.
Bill Slawski, who writes the frequently referenced SEO by the Sea blog, gleans insights on what the search engines might be up to by studying the whitepapers and patents published by Google’s and Microsoft’s engineers. In the interview, he theorizes that “Panda may be a filter … where some web sites are promoted and other web sites are demoted based upon some type of quality signal score” that was placed on top of the existing algorithm.
Rand Fishkin, co-founder of seomoz.org, thinks this quality score was based on work done by human quality raters, and “in combination with machine learning algorithms, Google is using the aggregated opinions to filter and reorder the results for a better user experience”.
At this point, some of you might be putting your hands up and saying “umm, translation please”.
Let’s start with a blog post from Google itself that outlines a series of questions a human might use to rate a site’s quality. As Rand points out in the interview, Google has a stable of human raters who look at sites and rate their quality. In the past, Google used the aggregated opinions of these raters as a data point, but with Panda it is apparently using the data more aggressively, to directly impact where a site ranks in the search engine results.
How is this done? Well, it truly is rocket science, but in essence the ratings by the humans are used as a reference point to judge the quality of the sites that Google crawls. Google also has the capacity to “learn” from the process, getting additional ongoing feedback from the raters and from the new site-blocking extension you can install into Chrome and Firefox. As the interviews also point out, Google is increasingly using a number of “engagement” signals that track user behavior on sites, giving it clues as to whether visitors value a site’s content or abandon it quickly. All of this takes a lot of compute power and sophisticated processing, and the whitepapers Bill points to show that Google has developed both.
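To make that idea concrete, here is a toy sketch of the mechanism the interviews describe: human quality ratings serve as labeled examples, a simple model generalizes those labels to unrated sites, and the resulting quality score is layered on top of an existing relevance ranking. Everything here, the feature names, the data, and the nearest-centroid scoring, is invented for illustration; it is in no way Google’s actual algorithm.

```python
# Hypothetical sketch: generalize human rater labels to unrated sites,
# then layer the learned quality score on top of a relevance ranking.

def centroid(rows):
    """Average each feature across a list of feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def distance_sq(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Feature vectors (ad_density, original_content_ratio) -- invented features.
rated_high = [(0.1, 0.9), (0.2, 0.8)]  # sites human raters scored as high quality
rated_low = [(0.8, 0.2), (0.9, 0.1)]   # sites human raters scored as low quality

hi_c, lo_c = centroid(rated_high), centroid(rated_low)

def quality_score(site):
    """Score an unrated site by which rater-derived centroid it sits closer to."""
    return 1.0 if distance_sq(site, hi_c) < distance_sq(site, lo_c) else 0.5

def rerank(results):
    """Layer the quality score on top of an existing relevance ranking."""
    return sorted(results,
                  key=lambda r: r["relevance"] * quality_score(r["features"]),
                  reverse=True)

results = [
    {"url": "thin-content.example", "relevance": 0.9, "features": (0.85, 0.15)},
    {"url": "original.example", "relevance": 0.8, "features": (0.15, 0.85)},
]
print([r["url"] for r in rerank(results)])
# -> ['original.example', 'thin-content.example']
```

Note how the thin-content site ranked first on raw relevance but is demoted once the quality filter is applied; that is the “quality signal score … placed on top of the existing algorithm” behavior Bill describes, in miniature.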
So what does this all mean?
I’ve just touched on a few points here; there’s a lot more in the interviews, so check them out:
Kathy Alice Brown is an SEO expert specializing in Technical SEO and Content. In her spare time she loves to get outside.