What’s the SEO News & Updates for w/e 4th August 2017?

Read the TWIS SEO News and Updates for w/e 4th August 2017. This week there are a bunch of interesting stories to read, including:

#SEO #GoogleSearchConsole #Googlebot #WebSpam #LocalSEO #ClickThroughRates


Google Reveals Two Secret Google Search Console Features

Summary:

  • Following the public outing of screenshots from the upcoming Google Search Console update, which we covered, Google has now revealed a further two secret features coming soon.
  • The first is an “index coverage” report, showing the count of indexed pages, as well as information about why some pages could not be indexed, and tips on how to fix them.
  • As a further fillip to AMP, there is also an AMP issues report, which streamlines the fix workflow, enabling bulk fixes to be put in place and Google to be notified to re-crawl.

Google Search Console Secret Features

Actions to take:

  1. Review the Google Webmasters Blogpost.
  2. Note specifically that these features are not available yet, but will be as Google slowly rolls the update out to GSC users.
  3. When the features launch in your console, play with the index coverage report to start analysing issues preventing indexing and to apply the suggested remedies (a quick blocker-check sketch follows this list).
  4. Also, review the AMP issues report (assuming you’re using AMP, which you should be, as AMP is gaining traction), and start to work through the flow of issues, fixes and re-submissions.
  5. Contact me to find out how to analyse the existing Google Search Console to find additional optimisation options and traffic bumps.
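
To complement the index coverage report once it arrives, a quick script can flag the most common technical blockers before you even open the console. The sketch below is a minimal example, not a definitive audit: the URL is a placeholder, and it only checks robots.txt, the X-Robots-Tag header, the meta robots tag and the HTTP status code.

    # Minimal sketch: check a URL for common technical indexation blockers.
    # The URL is a placeholder; the checks are illustrative, not exhaustive.
    import urllib.robotparser
    from urllib.parse import urlparse

    import requests
    from bs4 import BeautifulSoup

    def indexation_blockers(url, user_agent="Googlebot"):
        blockers = []

        # 1. Blocked by robots.txt?
        parsed = urlparse(url)
        robots = urllib.robotparser.RobotFileParser(
            f"{parsed.scheme}://{parsed.netloc}/robots.txt")
        robots.read()
        if not robots.can_fetch(user_agent, url):
            blockers.append("Disallowed by robots.txt")

        # 2. noindex in the X-Robots-Tag header or the meta robots tag?
        response = requests.get(url, headers={"User-Agent": user_agent}, timeout=10)
        if "noindex" in response.headers.get("X-Robots-Tag", "").lower():
            blockers.append("noindex in X-Robots-Tag header")
        soup = BeautifulSoup(response.text, "html.parser")
        meta = soup.find("meta", attrs={"name": "robots"})
        if meta and "noindex" in meta.get("content", "").lower():
            blockers.append("noindex in meta robots tag")

        # 3. Non-200 responses will not be indexed.
        if response.status_code != 200:
            blockers.append(f"HTTP status {response.status_code}")

        return blockers

    if __name__ == "__main__":
        print(indexation_blockers("https://www.example.com/some-page/"))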

Discussion:

It looks very much like Google’s hand has been forced by the leaked images from a couple of weeks ago. Of course, they would have had a staged rollout anyway, and may have released some images, but it’s unlikely it would have been this far in advance.

Naturally, reading Google’s wording, you may think the information they present in Search Console is all you could ever need. This isn’t the case. Google has a habit of giving partial information which benefits it, and may benefit webmasters, but which doesn’t enable a complete fix to be put in place. Obfuscation is a great word to use in these trying circumstances.

There is a lot of excitement about the “reasons why your pages could not be indexed” part of the report. If it gives a good reason, this Google Search Console feature could be an absolutely essential one to review. I suspect it will surface purely technical showstoppers, which in some instances will be thoroughly useful in grasping indexation issues, but in others will not be.

More info:

Return to Top


Google Reveals How Googlebot Web Rendering Works

Summary:

  • In an update to its developer documentation, Google has revealed how Googlebot’s Web Rendering Service (WRS) works and, importantly, its limitations.
  • Googlebot’s WRS is based on Chrome 41, meaning that if it works in that version of Chrome, it should work for Googlebot.
  • Googlebot and the WRS support only HTTP/1.x and FTP; the WebSocket protocol is not supported yet.
  • Googlebot and WRS are stateless across page loads – no cookie interaction or retention.
  • Any permission requests are declined by WRS.

TWIS Googlebot Web Rendering Service

Actions to take:

  1. Review the list provided by the Google Developer’s Guide. There are other limitations, especially around feature detection.
  2. Test your content in Chrome 41. If it works there, it should work for Googlebot’s WRS.
  3. Don’t try to serve Googlebot or the Web Rendering Service anything other than HTTP/1.x or FTP content.
  4. Don’t try to force cookies onto Googlebot. It’ll take them, but will clear them. If pages require cookies for content, there will be nothing to read and index (a quick stateless fetch sketch follows this list).
  5. Don’t think Googlebot will tick boxes for permission, or allow things to happen. It won’t.
  6. Contact me to discuss how best to get your content into a state where it can be Found, Crawled and Indexed – that is critical for surfacing in SERPs.
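
As a rough companion to actions 2 and 4, the sketch below fetches a page with no cookies carried over and checks that key content appears in the response. Note that it fetches raw HTML only, so it is stricter than the WRS, which does execute JavaScript; the URL and the expected strings are placeholders.

    # Minimal sketch: fetch a page statelessly (fresh session, no cookies kept)
    # and check that key content appears, approximating what a stateless crawler
    # sees on a cookie-dependent page. Raw HTML only - no JavaScript is executed.
    # The URL and the expected strings are placeholders.
    import requests

    def stateless_content_check(url, must_appear):
        # A brand-new Session per fetch means no cookies carry over between loads.
        with requests.Session() as fresh_session:
            response = fresh_session.get(url, timeout=10)
        missing = [text for text in must_appear if text not in response.text]
        return {
            "status_code": response.status_code,
            "cookies_set_by_server": list(response.cookies.keys()),
            "missing_content": missing,  # anything listed here needs state or JS
        }

    if __name__ == "__main__":
        report = stateless_content_check(
            "https://www.example.com/products/widget/",
            must_appear=["Widget description", "Add to basket"],
        )
        print(report)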

Discussion:

Many people get confused by Googlebot and its extension, the Web Rendering Service. The Web Rendering Service is essentially the headless browser which turns HTML, JavaScript and so on into something which can be displayed to a person, or interpreted by an algorithm.

It comes as no surprise that the Web Rendering Service is based on Chrome. Why else would you have a browser? Or, possibly more accurately, if you have a browser, why wouldn’t you turn it into something which can be used by Googlebot?

It is important to remember that Googlebot and the Web Rendering Service come to each page as a fresh user would, with no cookies, no settings, no permissions allowed, and it doesn’t do anything to change that. It’s likely that Googlebot eats the cookies and inspects them to see what they do, but it takes no action. It is also likely that Googlebot has an incognito crawler which goes out and crawls the web to see what happens if it does eat cookies, click permissions and allows for different settings. It’s vital to understand the web as a raw user, and as a returning user. This helps to defeat cloaking and other nasty SEO practices.

More info:

Return to Top


Two Google Algorithm Flaws Exposed

Summary:

  • Google’s algorithm has plenty of flaws, but this last week, two fairly egregious examples have been exposed.
  • Firstly, there has been widespread commentary on the prevalence of Pinterest holding pages in certain queries, with up to 8 normal listings and similar numbers in image listings. This domain crowding is poor, and should have been resolved a number of years ago.
  • Secondly, and perhaps worse, the top result for “cure for cancer” is (or was), effectively, carrot juice. This stems from links, and from flaws in how the algorithm handles them.

Actions to take:

  1. Generally, there is little that can be done when algorithm flaws are exposed. Google will work to eradicate them, normally.
  2. If the poor results are the result of spam, then Google will take action if reported.
  3. If the result of an algorithm error, then Google is often less inclined to admit it is an error, but will usually work to remedy the situation.
  4. Don’t start thinking carrot juice is a cure for cancer. The chances are that it isn’t.
  5. Contact me to discuss how the Google ranking algorithm works.

Discussion:

The Pinterest domain crowding example is horrible. This was supposed to have been worked out a number of years ago, when Wikipedia and YouTube completely ruled every query. It still works for branded search terms, which is fair enough, but it doesn’t, or shouldn’t, work for non-branded terms.

The other possibility is spam. If you look at the Pinterest title tags, they are all variations on a theme, and not terribly inventive ones. It is possible that Pinterest has enough content categorised accurately enough, but chances are that it doesn’t. Unless something really bad has happened, it is unlikely Google will see this as spam to be pinned on Pinterest; however, there have been many instances of large companies being called out for it.

The “carrot juice is a cure for cancer” result is worse. Not from a spam perspective (it may be the result of link spam), but from an information purity perspective. Google, even though it has a fake news problem, is seen as being authoritative for medical queries, and works hard to maintain that. Having the number one cure for cancer listed as being carrot juice would be an embarrassment for Google and its engineers.

More info:

Return to Top


Only 20% of Clicks Now Go to No.1 – New CTR Study

Summary:

  • You have to drill through a few charts in the Internet Marketing Ninjas study, but eventually you spot that, in terms of organic click-through rates, only 20% of non-branded clicks go to the number 1 spot in organic listings.
  • In fact, the CTR (click-through rate) distribution curve is still vaguely similar to the AOL Click Through Data which launched 10,000 SEO discussions over the years, even if the initial numbers are a lot smaller now.
  • IMN analysed 20,000 queries, 64 million impressions and 4 million clicks. (This compares to 9 million clicks in the original AOL data.)
  • If you spend a while digging into the data, there are nuggets in there. The biggest point that you walk away with is that the click distribution varies enormously based on brand / non-brand, B2B vs B2C, and so on, so these figures should always be taken with a large grain of salt.

TWIS Organic Click Through Rates 2017

Actions to take:

  1. Review the charts from IMN. They make good reading.
  2. Apply the distribution curves to your own ranking performances to get an idea of Share of Voice between you and your competitors (a minimal sketch follows this list).
  3. Use this information to formulate SEO strategies which are focused on being more visible than competitors, as well as ranking number 1.
  4. Contact me to help turn your ranking data into actionable share of voice insights.
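
As a rough illustration of action 2, the sketch below maps ranking positions onto a CTR-by-position curve to estimate clicks and share of voice. The curve values and keyword data are placeholders rather than the IMN figures; substitute the published distribution that matches your brand / non-brand mix.

    # Minimal sketch: estimate clicks and share of voice from ranking positions
    # using a CTR-by-position curve. Curve values and keyword data below are
    # illustrative placeholders, not the IMN figures.
    CTR_BY_POSITION = {1: 0.20, 2: 0.13, 3: 0.09, 4: 0.06, 5: 0.05,
                       6: 0.04, 7: 0.03, 8: 0.03, 9: 0.02, 10: 0.02}

    def estimated_clicks(rankings, search_volumes):
        """rankings: {keyword: position}; search_volumes: {keyword: monthly searches}."""
        total = 0.0
        for keyword, position in rankings.items():
            ctr = CTR_BY_POSITION.get(position, 0.0)  # treat positions 11+ as zero
            total += ctr * search_volumes.get(keyword, 0)
        return total

    def share_of_voice(our_rankings, competitor_rankings, search_volumes):
        ours = estimated_clicks(our_rankings, search_volumes)
        theirs = estimated_clicks(competitor_rankings, search_volumes)
        return ours / (ours + theirs) if (ours + theirs) else 0.0

    if __name__ == "__main__":
        volumes = {"blue widgets": 5000, "buy widgets online": 1200}
        print(share_of_voice({"blue widgets": 3, "buy widgets online": 1},
                             {"blue widgets": 1, "buy widgets online": 4},
                             volumes))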

Discussion:

Organic click-through rates are the stuff of legend, ever since AOL accidentally allowed information from just under 10 million search clicks to leak out. It would be fascinating to see Google’s data on this now.

I like this study. It seems to have a good base of data. It is naturally skewed by how the queries were selected, rather than being a random sample, but it probably has more than a grain of accuracy to it.

What is evident from the data is that fewer people are clicking on search results, as Google strives to surface more information immediately, and that you have to take Share of Voice into account when judging SEO performance – simple rankings do not cut it.

More info:

Return to Top


Are You Checking Your GMB Pages for Mystery Updates?

Summary:

  • This is a great post from the people at Local SEO Guide – well worth a read.
  • There are issues with Google My Business updating parts of your GMB pages without notifying you.
  • This may include random pics, incorrect or irrelevant information, and auto-generated rubbish, based on your listings, auto-generated listings and other sources of information.
  • Whatever you do, keep a weather eye on your GMB listings.

Actions to take:

  1. Regularly check your GMB console for suggested updates. Diarise it so it becomes a routine.
  2. Regularly check your listings, using available tools where possible, but also with physical, manual checking as well. Again, diarise this so it becomes routine (a minimal diff sketch follows this list).
  3. Make sure that you are on top of other auto-generated listings which may or may not surface for your local searches.
  4. Although a bit like whack-a-mole, keep attempting to correct the Google databases, and claim listings where possible.
  5. Contact me to discuss surfacing and managing your locations in Google search.
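
A simple way to make the checking in action 2 routine is to keep a canonical record of each location’s NAP (name, address, phone) data and diff it against whatever is currently live. The sketch below shows only the comparison step; fetching the live listing (via the GMB API, a location export or a manual check) is left as a stub, and the sample data is hypothetical.

    # Minimal sketch: diff canonical NAP (name, address, phone) data against what
    # is currently live in a listing, to catch silent updates. The fetch step is
    # a stub and the sample data is hypothetical.
    CANONICAL_LOCATIONS = {
        "store-001": {"name": "Example Store, High Street",
                      "address": "1 High Street, Exampletown, EX1 1AA",
                      "phone": "+44 1234 567890"},
    }

    def fetch_live_listing(location_id):
        """Stub: replace with a GMB API call, a location export, or manual entry."""
        return {"name": "Example Store, High Street",
                "address": "1 High St, Exampletown, EX1 1AA",  # silently shortened
                "phone": "+44 1234 567890"}

    def audit_listing(location_id):
        canonical = CANONICAL_LOCATIONS[location_id]
        live = fetch_live_listing(location_id)
        return {field: {"expected": canonical[field], "live": live.get(field)}
                for field in canonical if canonical[field] != live.get(field)}

    if __name__ == "__main__":
        for location_id in CANONICAL_LOCATIONS:
            changes = audit_listing(location_id)
            if changes:
                print(f"{location_id}: unexpected changes -> {changes}")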

Discussion:

Google My Business / Google Places / Google Local are just what you’d expect a free, crowd-sourced or algorithmically generated solution to be: right about 60-70% of the time, but so far wrong in the remaining 30-40% as to seriously call into question the correctness of the 60-70%.

This is a source of frustration for business owners, and for users. Considering it is so critical for Google (local + mobile = the dominant form of navigational search), you would have thought they might have worked it out by now. Clearly not, however.

Google My Business is unfortunately becoming as discredited as parts of Wikipedia used to be. Taking information and then changing it randomly and capriciously erodes trust in your service, for consumers and information providers alike.

Having said that, I am scratching my head about how Google My Business could improve, when it realistically knows it probably only has 10% of the business information it would like to have indexed.

More info:

Return to Top


Bits & Pieces

  • Commentary from SE Roundtable, noting that Google’s AdWords listings are looking more and more like normal, everyday organic listings. Perish the thought.
  • After I’d spent a long time consoling myself that Google hadn’t added a hamburger menu to its mobile pages, what goes and happens? They add a hamburger on mobile! Fortunately, these are more account actions than primary navigation options, but still.
  • Googlebot’s crawl limit is now about 200MB. That is an awful lot of page. Or a really poorly formed infinite scroll.
  • There was a fair degree of excitement over a small update to the Google quality raters guidelines. Personally, I think these are interesting (having written lots of publication / editing guidelines before) but not earth-shattering. The point of them is to help improve Google’s algorithms, not edit SERPs directly.
  • Apparently, Google tested a version of SERPs last week in which there were no URLs displayed on mobile. This is a bad thing, make no mistake. It makes picking the URL to click a bit more of a lottery, which is silly. I often use a search as a bookmark and know the URL to click rather than the title. It also helps to distinguish between a trusted source and a dodgy source. And I’m not alone.

Return to Top


TL;DR

Thanks for reading. If you would like to discuss what these changes mean for your web property, or would like to know how to implement them, please feel free to contact me.

Return to Top