There’s always been a ton of debate surrounding the value of search engine rankings. While I place little value on single ranking metrics for certain SEO activities, I value ranking data highly for keyword and competitor research. Aggregate ranking data can also provide a view of the health of your website – something I take quite seriously!
What is search engine visibility?
“Search engine visibility” is a term I associate with a metric, largely because of the ranking checker (Advanced Web Ranking) we use, but also because of a KPI in my last “in-house” role. Before you write off this metric as useless, let’s look at how it may be calculated, why that calculation can be valuable and where it can let us down.
As I mentioned in the post “Site Performance After Hosting Upgrade”, we monitor the rankings for SEOgadget’s top 200 industry and traffic-driving search terms on a daily basis. The chart above shows the search engine rankings expressed as a percentage, which works a little like this (assuming one keyword):
– Position 1 = 30/30 (100% Visibility)
– Position 2 = 29/30 (96.6% Visibility)
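The calculation above can be sketched in a few lines of Python. This is a minimal, illustrative version assuming a top-30 ranking window (as the 30/30 and 29/30 figures imply), with unranked keywords scoring zero – the function name and interface are mine, not Advanced Web Ranking’s:

```python
MAX_POSITION = 30  # assumed ranking window: positions 1-30 score, deeper = 0

def visibility(positions):
    """Aggregate visibility (%) for a list of keyword positions.

    Position 1 scores 30 points, position 30 scores 1 point,
    and None (not ranking in the window) scores 0.
    """
    points = sum(
        max(MAX_POSITION - (pos - 1), 0) if pos else 0
        for pos in positions
    )
    return 100 * points / (MAX_POSITION * len(positions))

print(visibility([1]))    # position 1  -> 100.0
print(visibility([2]))    # position 2  -> 96.66... (the 96.6% above)
print(visibility([1, 15, None]))  # mixed set of three keywords
```

With more than one keyword, the score is simply the points earned as a share of the maximum possible (every keyword at position 1), which is what makes it comparable across dates.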
Calculated visibility metrics – strengths
Simplistic as the calculation is, there’s a definite beauty in the visibility principle. Imagine you’ve collected daily rankings for the same keywords for a few years. With a visibility metric, you’re able to compare year-on-year rankings and, with a strong sense of certainty, report on an overall improvement. You could attribute incremental improvements to various SEO activity if you wish, and it wouldn’t be out of the question to demonstrate an increase in traffic for your monitored terms as visibility improves. From this perspective, a visibility score could be considered a proxy metric for traffic performance.
Even if your keyword list changed, provided you had a consistent keyword selection methodology (e.g. “seasonal” terms in travel, top 200 according to Hitwise, etc.) you’d still have a comparable set of figures from one year to the next.
Calculated visibility metrics – weaknesses
Visibility is an overly simplistic calculation. In my example above, there’s no weighting to favour higher positions (a position 1 ranking is “worth” far more than a position 12 ranking in traffic terms). If you were measuring rankings in both Bing and Google, how would you account for the likely difference in traffic and ranking value caused by search engine market share? There are obvious problems with the approach, none of which I’ll deal with here; suffice it to say that I’m aware of them and choose to accept them for now.
Visibility in competitive analysis
Very recently we carried out an investigative piece of research on a new market niche for a client. Part of the work was to identify the top traffic-driving keywords in that sector and to establish who ranked best for those terms. To assess overall search engine visibility, we calculated visibility metrics for each domain. This gave us an extremely strong sense of who was leading the pack in the rankings, and who (potentially) we should consider investigating in more detail (anonymised data):
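To make the per-domain comparison concrete, here’s a sketch of how it works. The domains and positions below are entirely hypothetical (the real data is anonymised), and I’m reusing the same simple top-30 scoring described earlier:

```python
MAX_POSITION = 30  # same assumed top-30 window as before

def visibility(positions):
    """Aggregate visibility (%) for a list of positions; None = not ranking."""
    points = sum(max(MAX_POSITION - (p - 1), 0) if p else 0 for p in positions)
    return 100 * points / (MAX_POSITION * len(positions))

# Hypothetical example: each domain's position for the SAME four
# traffic-qualified keywords, checked in the same search engine.
rankings = {
    "domain-a.example": [1, 3, None, 7],
    "domain-b.example": [4, 1, 12, None],
    "domain-c.example": [None, None, 5, 2],
}

# Rank the competitors by overall visibility, best first.
for domain in sorted(rankings, key=lambda d: -visibility(rankings[d])):
    print(f"{domain}: {visibility(rankings[domain]):.1f}%")
```

Because every domain is scored against an identical keyword set in the same engine, the resulting percentages are directly comparable – which is exactly what makes this useful for competitive analysis.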
I think competitive analysis is where visibility scores really add value – remember, this is a like-for-like comparison of the same keywords in the same search engine, all of which have been identified as traffic-generating terms.
The perfect visibility score
My perfect visibility score might take the following factors into account:
- Type of search results included in the page (image, video, local) and prominence in each section
- Weighting for positioning based on estimated CTR for that position
- Weighting for search engine based on market share or traffic for multiple engine rankings
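The second factor in that list – CTR weighting – might look something like this. To be clear, the click-through rates below are illustrative assumptions, not published data, and the function is my own sketch rather than any tool’s actual formula:

```python
# Illustrative CTR curve by organic position (assumed figures, not real data).
# Positions outside the top 10 are treated as earning no meaningful clicks.
CTR_BY_POSITION = {1: 0.30, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05,
                   6: 0.04, 7: 0.03, 8: 0.025, 9: 0.02, 10: 0.015}

def weighted_visibility(positions):
    """CTR-weighted visibility (%): each keyword scores its position's
    estimated CTR, normalised against a perfect score (all at position 1)."""
    score = sum(CTR_BY_POSITION.get(p, 0.0) for p in positions if p)
    best = CTR_BY_POSITION[1] * len(positions)
    return 100 * score / best

# Under the flat scheme, positions 1, 3 and 12 would score quite well;
# CTR weighting punishes the deep ranking much more heavily.
print(f"{weighted_visibility([1, 3, 12]):.1f}%")
```

The same shape extends naturally to the other two factors: multiply each keyword’s score by a per-engine market-share weight, or by a prominence factor for image, video and local results.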
I’ve been meaning to write about this subject for months – and I’d love to hear thoughts and opinions (for or against) on search engine visibility as a means of aggregating individual rankings data. Do you use this metric in your KPIs or SEO health checks, and if so, how would you like to see it improve?