Back in 2012 I watched a page on one of my sites slip from position #3 down to #1,114 on Google for a specific keyword that had 40,500 phrase match searches a month in the UK.
Before I share with you what I discovered, you may want to check out your own site from Google’s perspective. Replace the domain google.com at the end of this URL with your own site:
Google Safebrowsing
For the site in question, this now comes up as clean, I am delighted to say, but I still had the problem of discovering why it tanked for a specific keyword. More on that in a minute.
But first I tried a little experiment with Google’s safebrowsing tool. I tested it on… Google’s own site. If you leave the link as is (so it actually tests Google’s own site), you will see a pretty strange result. Here’s how it looked:
Just to show that I am completely impartial, I also did the same test with Yahoo and Bing. Try it for yourself.
Whilst this will give you an idea of what Google thinks about your site from a bad neighborhood point of view (more on which later), it doesn’t tell you anything about why.
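If you would rather run that kind of check from a script than from the browser, something like the sketch below will do it via Google’s Safe Browsing Lookup API (v4). This is a minimal sketch, not the diagnostic page linked above: it assumes you have a Safe Browsing API key from the Google Cloud console and the `requests` library installed, and the example domains are placeholders.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder: a real Safe Browsing API key goes here
ENDPOINT = f"https://safebrowsing.googleapis.com/v4/threatMatches:find?key={API_KEY}"

def check_site(url):
    """Ask the Safe Browsing Lookup API whether a URL is currently flagged."""
    body = {
        "client": {"clientId": "my-site-check", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING", "UNWANTED_SOFTWARE"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": url}],
        },
    }
    resp = requests.post(ENDPOINT, json=body, timeout=10)
    resp.raise_for_status()
    return resp.json().get("matches", [])  # empty list means not flagged

# Placeholder domains purely for illustration
for site in ["http://example.com/", "http://example.org/"]:
    flagged = check_site(site)
    print(site, "-> flagged" if flagged else "-> clean")
```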
So, I wanted to find out how my particular page could have slipped so far behind its competitors. I compared the current top 5 pages on:
- keyword density
- the number of page backlinks
- the number of domain backlinks
- the quality of page backlinks
- the quality of domain backlinks
- domain age
These results came after the Google Penguin updates, and one thing that stood out to me was domain age.
In 2011 it was possible to get a zero-age domain to rank #1 for any long-tail keyword within a few weeks with some simple SEO.
Internet marketers were buying .info and similar domains like there was no tomorrow, since vendors such as GoDaddy were selling them for $1.
Here’s what you typically did:
- On-page optimisation (keyword in the URL, title, H1, H2, bold, italic, images, video, metas, etc.)
- Off-page optimisation (keyword in anchor text, articles, bookmarks, images, videos)
And generally you only needed a little of the above.
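Just to illustrate the on-page side, here’s a rough sketch of the kind of check involved: fetch a page and see whether the keyword appears in the URL, the title, the H1 and the meta description. The URL and keyword are placeholders, it assumes the `requests` library, and the regex parsing is deliberately crude.

```python
import re
import requests

def on_page_report(url, keyword):
    """Very crude check for the classic on-page signals (keyword in URL, title, H1, meta)."""
    html = requests.get(url, timeout=10).text.lower()
    kw = keyword.lower()

    def first(pattern):
        # Return the first captured group for a pattern, or "" if nothing matches
        m = re.search(pattern, html, re.DOTALL)
        return m.group(1) if m else ""

    return {
        "in_url": kw.replace(" ", "-") in url.lower(),
        "in_title": kw in first(r"<title[^>]*>(.*?)</title>"),
        "in_h1": kw in first(r"<h1[^>]*>(.*?)</h1>"),
        "in_meta_description": kw in first(
            r'<meta[^>]+name="description"[^>]+content="(.*?)"'
        ),
    }

# Placeholder URL and keyword purely for illustration
print(on_page_report("https://example.com/blue-widgets/", "blue widgets"))
```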
You could push a tougher keyword with a whole host of methods, including building backlinks to your backlinks. Everything back then was pretty simple really.
In early 2011 Google released its Panda updates targeting poor quality sites (too much advertising, scraped content, sniper sites, bad quality content, duplicate content, no original content, etc.).
They got this slightly wrong at first, removing some quality sites in the process. Around 12% of search results were affected, according to Wikipedia.
One of the main casualties that I remember was the top article directory EzineArticles, which lost something like 30% of its traffic.
Alexa showed them falling off a cliff in traffic terms! (you used to be able to get this data for free, but at least you can still see a whole bunch of other stats at no cost)
In 2012 Google released the Penguin update. This was aimed squarely at curing the backlinking problem (linkwheels, spammy backlinks, irrelevant backlinks – all tied up with the single phrase ‘unnatural backlinks’) and put the concept of bad neighborhoods into the frame.
There were some famous cases, and also some examples of paid backlinking services such as BuildMyRank being penalised.
No one outside of Google knows how their ranking algorithm works of course, but we are given insights by various Google employees including Matt Cutts.
There are masses of videos on this on YouTube worth checking out if you are new to it, but here’s one that recently caught my eye: Matt Cutts on Human Raters
As Google moves more into human evaluation, some may remember that the only reason Google trounced Yahoo and dominated the search space was that it did not use humans to rate sites (thereby speeding up search).
Another of the famous cases involves blogger and SEO expert Dan Thies. Dan is a friend of Matt Cutts but does not work for Google.
Dan and Matt believe in good old clean white hat SEO and great content. They both hate spammers with a vengeance.
What happened next, if you don’t know the story, is that spammers attacked Dan’s site by sending a flood of links at it from bad neighborhoods.
The next thing that happened was that Dan got a Google warning about unnatural links. Fair enough, you say (or not!!). But here’s the thing.
Back in 2011 Google said there was no way anyone could attack another site this way. Then in late November 2011 they said there was ‘almost’ no way this could happen.
Post Panda, they have changed their mind completely, as Dan’s case testifies. None of this makes any sense, and it is the reason I wrote a post on Negative SEO.
So what can you do?
In a nutshell, very little. You can disavow the bad links using Google’s disavow tool in Google Webmaster Tools. This may have an effect – but no one really knows. The only point of this, as far as I can see, is to show them that you have done everything possible to remove the links (the links you never put there in the first place, remember), that you can’t do any more, and that you’re asking them to look kindly on you.
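For what it’s worth, the disavow tool just takes a plain text file: `#` comment lines, `domain:` lines for whole domains, or individual URLs. Here is a minimal sketch of generating one; the domain names are placeholders for whatever bad neighborhoods you actually find.

```python
# Build a disavow.txt for upload via Google's disavow tool.
# The domains below are placeholders for the bad neighborhoods you actually find.
bad_domains = [
    "spammy-link-network.example",
    "bad-neighborhood-blog.example",
]

lines = [
    "# Disavow file - links I did not place and could not get removed",
]
lines += [f"domain:{d}" for d in sorted(set(bad_domains))]

with open("disavow.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(lines) + "\n")

print(f"Wrote disavow.txt with {len(set(bad_domains))} domains")
```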
Then wait and hope. The problem is that even if they remove the ‘penalties’, assuming they accept you cannot get rid of the bad links, they are not going to tweak their algorithm just for you.
I have read a whole bunch of papers on this subject and the bottom line is this:
- Make sure the majority of your content is original. If you do content curation, make sure the curated material is less than the whole of the post (i.e. quote diligently and sparingly).
- Use the links report in Google Webmaster Tools to find the backlinks Google knows about and check them out (see the sketch after this list).
- Contact all bad neighborhoods you find linking to your site and ask them to remove the links.
- Use completely white hat SEO methods to get your backlinks. Guest blogging and forum posts are probably the best way to do this – and they bring in non-search-engine traffic too.
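For the second and third points, here is a rough sketch of how you might work through a list of backlink URLs (Webmaster Tools lets you download your latest links as a CSV) and see which linking pages still actually reference your domain. The file name, the one-URL-per-row layout and the domain are assumptions; adjust them to match your own export.

```python
import csv
import requests

MY_DOMAIN = "example.com"          # assumption: your own domain
EXPORT_FILE = "latest_links.csv"   # assumption: a CSV export with one linking URL per row

def still_links_to_me(linking_url):
    """Fetch a linking page and report whether it still mentions my domain."""
    try:
        resp = requests.get(linking_url, timeout=10)
        return MY_DOMAIN in resp.text
    except requests.RequestException:
        return False  # dead or unreachable pages can't be asked to remove anything

with open(EXPORT_FILE, newline="", encoding="utf-8") as f:
    for row in csv.reader(f):
        if not row or not row[0].strip().lower().startswith("http"):
            continue  # skip header rows and blanks
        url = row[0].strip()
        status = "still linking" if still_links_to_me(url) else "no link found / unreachable"
        print(f"{url}: {status}")
```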
Now back to my disappearing page. As I said, domain age was definitely right up there. I checked out long-tail keywords going down to 40 broad match searches a month and found that all of the top 10 domains were well aged.
I also found that, whichever country you were searching from, the domain extension was almost universally .com. But that was back in 2012.
Results in 2015 are very different. Try searching for a service in a particular town. ‘Plumbers in London’ brings up almost exclusively .co.uk domains (apart from the usual directory listings such as Yell).
I also found that, on the whole, backlinks still make a difference for major keywords, but as you go down in search volume they are far less important.
For example, for the long-tail keyword I used, 3 sites had 1,000+ backlinks and 7 sites had either zero or just a handful.
One of the other things I found was that keyword density was vitally important, but in the opposite way to what you would expect.
For my major keyword, the #2 site had a density of 1.3% and the #1 site had a density of precisely 0% (yes, the keyword was not present at all). This is how latent semantic indexing (LSI) is now affecting search results. Google is starting to understand the topic of a page based on all of its words, not just the so-called ‘keyword’.
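For anyone unfamiliar with the term, keyword density is simply how many times the keyword (or phrase) appears divided by the total number of words on the page, expressed as a percentage. A quick sketch with made-up text purely for illustration:

```python
import re

def keyword_density(text, keyword):
    """Occurrences of a keyword phrase as a percentage of total words."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    if not words:
        return 0.0
    kw_words = keyword.lower().split()
    hits = sum(
        1
        for i in range(len(words) - len(kw_words) + 1)
        if words[i:i + len(kw_words)] == kw_words
    )
    return 100.0 * hits / len(words)

sample = "Blue widgets are popular. Our blue widgets ship worldwide." * 10
print(f"{keyword_density(sample, 'blue widgets'):.1f}%")  # 2 hits per 9 words, about 22.2%
```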
The worst part of all was the discovery that someone had placed 1,170 links from a single site to my page. Every link that Google saw from this site redirected to another site when clicked.
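If you suspect the same trick is being played on your pages, a quick way to check an individual link is to fetch it and see where it actually ends up. A minimal sketch, assuming the `requests` library and a placeholder URL:

```python
import requests

# Placeholder: one of the suspicious backlinks supposedly pointing at your page
suspicious_link = "http://link-farm.example/out/12345"

resp = requests.get(suspicious_link, timeout=10, allow_redirects=True)
print("Requested    :", suspicious_link)
print("Ended up at  :", resp.url)  # the final destination after any redirects
print("Redirect hops:", [r.headers.get("Location") for r in resp.history])
```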
I followed a trail on the Google Webmaster forums from others who had complained about this particular site.
The thread ends pre-Panda, but the reply was not to worry about it, as Google would ensure that sites like this are not counted.
So there you have it. The site in question has been attacked on 14 keywords, and traffic to the pages concerned has dropped by 66%. Unless the algorithm is changed, the only get-out clause I can think of is to start from scratch on those pages.
I wish there were an easier answer to all this, but from an SEO perspective I don’t think there is. This is one good reason why any webmaster should be looking at alternative traffic sources such as social media and direct traffic.