I had the great privilege to have coffee with Gary Illyes (@methode), a Google webmaster trends analyst dedicated to creating a better search experience for users. Well known in the SEO industry, Gary often speaks at search conferences and is frequently interviewed and quoted in the major SEO blogs.
Most interviews with Gary focus on big changes at Google, such as mobile-first indexing (which yes, is still coming). These are important but tend to be higher-level conversations. I wanted to ask Gary more tactical questions related to current issues I’m seeing in my daily work as an SEO. So the dialog below covers JavaScript, international SEO, GSC URL parameters, a potential change to robots.txt and more. If you have an interest in technical SEO, there should definitely be a nugget or two of interest. So keep reading!
The image above is the upper portion of a lovely painting titled “Casual Conversations” by Richard Rappaport (own work) [license CC BY-SA 4.0]; the full image can be found on Wikimedia Commons.
Speaking of Casual Conversations …
One thing I do want to make clear is that the Q and A below is not an exact transcription. It was derived from the gist of our conversation and I’ve paraphrased, but I’ve done my best to accurately reflect his comments.
Now that that’s out of the way, let’s dive in.
On Google’s ability to read JavaScript
If Google has gotten so good at reading and executing JavaScript on a page, is there any reason NOT to use it?
It’s important to keep a few things in mind. First, we don’t support every piece of JavaScript. We use a web rendering service that is currently based on Chrome 41, so it is possible to encounter JavaScript that isn’t supported by Chrome 41.
Second, there might be a delay in processing the rendered page. Googlebot crawls the HTML very fast, and then the page is queued for rendering so that the JavaScript will be executed to get the full version of the page. Often, it might wait in the queue for only an hour or two, but if it is a less important page it could be days. So if you have important information relevant for SEO on the page that depends on the execution of JavaScript, such as links generated by the JavaScript, it might take longer for us to find it.
Also keep in mind that only about 10,000 pixels of the page are rendered; it’s not infinite.
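To make that concrete (this example is mine, not Gary’s): Chrome 41 predates browser APIs such as window.fetch, which arrived in Chrome 42, as well as ES2015 syntax like arrow functions. A script that loads critical content with fetch and offers no fallback may never make it into the rendered version of the page. A defensive sketch, using a hypothetical loadProductList helper:

```javascript
// Hypothetical helper: load content in a way the Chrome 41-based
// renderer can execute. Plain functions and var (no arrow functions,
// no let/const) keep the syntax parseable by older engines.
function loadProductList(url, onDone) {
  if (window.fetch) {
    // Newer browsers: use fetch
    fetch(url)
      .then(function (response) { return response.text(); })
      .then(onDone);
  } else {
    // Chrome 41 and other older engines: fall back to XMLHttpRequest
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url, true);
    xhr.onload = function () { onDone(xhr.responseText); };
    xhr.send();
  }
}
```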
Some of us have noticed that the cache: command just shows the HTML snapshot crawled and not the fully rendered page.
That’s not strictly true; for some important pages, the cache: command will show the fully rendered page.
On hreflang tags and international SEO
I’ve been really struggling to get my clients to fully implement hreflang tags for international SEO. Part of the challenge is that different teams own the different websites (or subsections of the website). Any thoughts?
I spent half a year interviewing site owners and SEOs on this topic with the goal of making it better. People find hreflangs confusing and the implementation resource-intensive. I haven’t come up with a solution yet, but I have a couple of comments.
Remember that you can build an XML sitemap for hreflangs as an alternative to placing hreflangs on the page. Just keep in mind that the interlinking requirement is still there, so if you have separate websites you need to place an XML sitemap of the hreflangs on each.
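For readers who haven’t used this approach, here is a minimal sketch of a sitemap-based hreflang entry (the domains, paths and locales are made up; the xhtml:link element is the documented format). Note that every URL in the cluster repeats the full set of alternates, which is the interlinking requirement Gary mentions:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>https://example.com/widgets/</loc>
    <xhtml:link rel="alternate" hreflang="en-us" href="https://example.com/widgets/"/>
    <xhtml:link rel="alternate" hreflang="de-de" href="https://example.de/widgets/"/>
  </url>
  <url>
    <loc>https://example.de/widgets/</loc>
    <xhtml:link rel="alternate" hreflang="en-us" href="https://example.com/widgets/"/>
    <xhtml:link rel="alternate" hreflang="de-de" href="https://example.de/widgets/"/>
  </url>
</urlset>
```

When the country sites are run by separate teams, each site hosts its own sitemap listing its own URLs, but the alternate links in each entry still have to reference every domain in the cluster.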
The SEO community has developed tools to help. Bill Hunt’s HREFLang XML Sitemap builder comes to mind, and Aleyda Solis has a free hreflang generator on her site.
If I have agreement from four out of the ten international site owners to implement the hreflangs, should I include the hreflangs referencing the other six sites even though there won’t be reciprocal links yet?
No, I would limit the tags to the four country/language tags of the sites that you are working with. You can expand the tags later.
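As an illustration of limiting the tags (my example, with hypothetical domains): if the four participating sites were, say, the US, UK, German and French ones, the head of each participating page would carry only those annotations, including a self-referencing one, and all four sites would publish the same set so the links stay reciprocal:

```html
<link rel="alternate" hreflang="en-us" href="https://example.com/page/" />
<link rel="alternate" hreflang="en-gb" href="https://example.co.uk/page/" />
<link rel="alternate" hreflang="de-de" href="https://example.de/seite/" />
<link rel="alternate" hreflang="fr-fr" href="https://example.fr/page/" />
```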
Note: at this point Gary voiced his view that SEOs should sit with the engineering teams so they can influence engineering priorities. There are definitely days I agree!
3 Tips for International SEO
There’s a lot of confusion about international SEO; inspired by my conversation with Gary, here’s an infographic that I put together:

My conversation with Gary continues in the next section.
More technical SEO topics
When you use the URL parameters feature of Google Search Console and set a parameter to “No URLs”, does that affect the indexation of those URLs as well as the crawling?
It affects both.
What kind of projects are you working on now?
One thing we are taking a look at is adding additional rules to the robots.txt file. A common problem we run into is that Googlebot is blocked by robots.txt disallows from accessing important resource files such as CSS and JS files. We are talking about the idea of a rule that allows crawling of dependent resources even when they are disallowed.
Would that be the default, i.e., allow crawling of dependencies even when blocked?
That would be great, but the legal team said no. So if we do it, it would have to be an explicit rule.
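Until such a rule exists, the usual workaround is an explicit Allow for the blocked resource types, since Googlebot already honors Allow directives and wildcards (the longest matching rule wins). A sketch with hypothetical paths:

```
# Hypothetical robots.txt: the /assets/ directory is disallowed,
# but the CSS and JS files Googlebot needs in order to render the
# page are explicitly re-allowed.
User-agent: Googlebot
Disallow: /assets/
Allow: /assets/*.css
Allow: /assets/*.js
```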
What about a feature or new rule to address the caching of the robots.txt file?
Yes, the time to live for the robots.txt file is 23 hours. But you can use the robots.txt tester today in Search Console to force a refresh. I believe it is a separate button.
I rely heavily on the site: command and I hear it’s going away. Also, why is it so inaccurate?
No, the site: command is not going away. The SEO usage of the site: command is a small use case; it’s actually used by scrapers, so the inaccuracy is on purpose.
What else would people find interesting to know about Google search?
It can be difficult to strike a balance between the desire to innovate and offer new features and the reality of the very large community we serve. We know that it takes three years for 50% of the internet to adopt a change; that’s a statistic we discuss often when evaluating changes.
Like with structured data.
This is just my vision, but in five years I think we might not need structured data because we’ll be able to understand the page that well. People focus on the rich snippet benefit of structured data, but it also helps us understand the page better. When there is a price with a line through it and then another price showing, a visitor to the page understands that the product is on sale at a discounted price. The line doesn’t mean anything to the algorithm; it needs the structured data markup to understand there is a discounted price.
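To illustrate Gary’s example (my sketch, with made-up product details): schema.org Offer markup states the current sale price explicitly, so the algorithm doesn’t have to infer anything from a crossed-out number that only a human can interpret:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "offers": {
    "@type": "Offer",
    "price": "79.99",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  }
}
</script>
```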
Hope you enjoyed the above article. Thanks Gary!