Technical SEO

The Big List of SEO Horror Stories

August 28, 2020
Kameron Jenkins
Director of Content & Brand Marketing

A while back, I asked SEO Twitter to share their SEO horror stories with me.

Let's talk SEO horror stories.

We always hear about the big ones:

- Site migration gone wrong ("redirects? who needs 'em!")
- Accidental de-indexing / indexing (robots.txt issues, noindex etc)

But what else you got? What SEO disasters do we not talk about enough?

-- Kameron Jenkins 👋 (@Kammie_Jenkins) July 27, 2020

But I'm definitely not the first.

In fact, there are countless articles about SEO horror stories in popular SEO publications, and plenty of SEOs who've shared their disaster stories on forums like Twitter and Reddit.

The topic even has a dedicated hashtag, #SEOHorrorStories.

So I decided to aggregate them, as best I could, into one big list organized into categories.

When I did this, the first thing I noticed was that SEO horror stories often have their origin in one of two places:

  1. Bad advice (i.e. actively doing something that's bad for SEO)
  2. Mistakes (i.e. negligence or pure missteps)

Both often have their root in a lack of SEO maturity, but not always. For example, a mistake can result from one misplaced keystroke made in a time crunch, but it can also result from not realizing the SEO team needed to be brought in on a development project at all.

Let's take a look at both the intentional and unintentional situations that can lead to SEO disasters so that we can (hopefully) avoid these situations in the future, or at least catch them earlier.

  1. UX/SEO conflicts
  2. Mishandled migrations
  3. Robots.txt issues
  4. Canonicalization mistakes
  5. Testing/staging pages getting indexed
  6. Internationalization issues
  7. Crawl traps
  8. Webmaster guideline violations
  9. JavaScript implementation
  10. "Content is king"
  11. Internal linking mistakes
  12. URL removals tool
  13. Treating ranking factor studies as law
  14. Server meltdowns
  15. Sites that have none of the basics right

💡 Related Resource: SEO Horror Stories: A Roundtable Discussion of Sudden SEO Disasters & Why They Happen [WEBINAR]

1. UX/SEO conflicts

Every website has two main groups of users: humans and bots. Since humans are the ones that can buy our products and services, brands typically task their design and development teams with optimizing for them.

Makes sense.

However, in some cases, focusing only on users can have negative ramifications for a site's performance in organic search, which just so happens to be the largest driver of traffic to the average website.

Take this example from RankScience CEO Ryan Bednar.

Design team changes navigation bar => Internal linking dramatically impacted

-- Ryan Bednar (@ryanbed) July 27, 2020

Ryan's example is all too common. The design team was likely making a change they thought would simplify the navigation, and maybe had even conducted an A/B test and found that a condensed navigation resulted in more clicks to a certain page or more time on site.

Unfortunately, losing a link from the home page can cut down on the flow of valuable PageRank to that page, and bury it deeper on the site, causing Google to see it as less important. In many cases, this causes it to rank worse.

Or how about this example, where the testing team failed to put the proper canonicals in place during an A/B test, resulting in Google dropping the page from its index.

Redirect split testing with no cannonical tags = page no indexed

-- Chris Rydburg (@Rydch41) July 27, 2020

To avoid disasters like this, SEO teams need to form closer bonds with their design, development, and testing teams. Make it a habit to talk with each other and bring each other into the decision-making process.

2. Mishandled migrations

Perhaps the most famous culprit behind SEO horror stories is the mishandled site migration -- and there's a good reason for this.

There's no single situation that constitutes a site migration.

Rather, "site migration" is a loose umbrella term that encompasses a lot of different types of changes, such as replatforming (i.e. moving to a new CMS), a website redesign, or even just changing the names of some of your pages.

In the worst cases, the SEO manager isn't even informed that anything will be changing -- like in this example from Dana DiTomaso.

The client changed a bunch of URLs and didn't put any redirects in place. We noticed when the script we run to check that all Google Ads landing pages work started returning errors.

-- Dana DiTomaso (@danaditomaso) July 28, 2020

Or maybe an SEO manager is involved in the planning process, but when they try to work with the development team to set up redirects, they're told those aren't necessary.

Dev's adamant advice on a client migration, 'You don't need to set-up 301 redirects anymore, as Google takes care of it automatically.' 🙃

-- Screaming Frog (@screamingfrog) September 8, 2017

Site migrations are complex projects. SEOs need to be involved early so they can plan things like redirect mapping, fixing legacy issues, and launch QA.
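
If SEO does get a seat at the table, one concrete safeguard is to script the redirect map check before and after launch: request every legacy URL and confirm it returns a single 301 hop to its mapped destination. Here's a minimal sketch in Python, assuming the requests library is installed and the map lives in a hypothetical redirect_map.csv with old_url and new_url columns:

    # check_redirects.py -- confirm every legacy URL 301s to its mapped destination.
    # Assumes a CSV with "old_url,new_url" columns (the file name is hypothetical).
    import csv
    import requests

    def check_redirect(old_url: str, expected_url: str) -> bool:
        # Don't follow redirects automatically; we want to inspect the first hop.
        resp = requests.get(old_url, allow_redirects=False, timeout=10)
        is_permanent = resp.status_code == 301                  # not 302/307
        points_to_map = resp.headers.get("Location") == expected_url
        return is_permanent and points_to_map

    with open("redirect_map.csv", newline="") as f:
        for row in csv.DictReader(f):
            if not check_redirect(row["old_url"], row["new_url"]):
                print(f"FIX: {row['old_url']} does not 301 to {row['new_url']}")

Run it once against the staging environment before launch and again against production right after, and you'll catch missing or chained redirects before Google does.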

💡 Recommended Resources:

  1. The Website Migration Checklist: How To Safeguard Your SEO While Updating Your Site
  2. How to Leverage In-Market & Out-of-Market SEO Tests to Prepare for Site Migrations

3. Robots.txt issues

Your site's robots.txt file instructs Google and other search engines how to crawl your website, so it can make or break your performance in search results.

Take a look at this example from Andrew Optimisey in an article for Search Engine Journal:

"The developer team had released a bunch of updates and… included the 'User-agent: *Disallow: /' in the robots.txt.

Ouch.

It took them two days to notice that the traffic jumped off a cliff."

But why does this ever happen?

In some cases, brands launch a new site but forget to update legacy robots.txt rules, or they carry over the rules they applied to the staging site to keep it from getting indexed. In other cases, developers unknowingly remove their site from search while trying to reduce the strain of bots on their servers.

The main takeaway here is that SEOs and the teams they work with should monitor their robots.txt files to ensure they're not blocking search engine bots from any critical pages.
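
A simple way to do that is a scheduled script that parses the live robots.txt and confirms Googlebot can still fetch a short list of must-crawl URLs. Here's a minimal sketch using Python's standard library (the URLs below are placeholders for your own):

    # robots_check.py -- alert if the live robots.txt blocks Googlebot from key URLs.
    from urllib import robotparser

    CRITICAL_URLS = [                                   # placeholders -- use your own
        "https://www.example.com/",
        "https://www.example.com/products/",
    ]

    parser = robotparser.RobotFileParser()
    parser.set_url("https://www.example.com/robots.txt")
    parser.read()                                       # fetch and parse the live file

    for url in CRITICAL_URLS:
        if not parser.can_fetch("Googlebot", url):
            print(f"ALERT: robots.txt blocks Googlebot from {url}")

Wire the output into whatever alerting you already have and you'll know within hours, not days, if a deploy ships a "Disallow: /".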

💡 Related Resource: Google's Updates To Robots.txt: What SEOs Need To Know

4. Canonicalization mistakes

A canonical tag tells Google which version of a page is preferred for indexing -- the "master copy" of multiple, duplicate pages.

Google can ignore this signal if other signals suggest the canonical is incorrect, but it won't always catch the mistake. That can lead to some pretty substantial issues, like in this example, where a canonical tag pointing to the home page was added to all product pages.

On a Magento site I found (and removed) a canonical link present on all product pages pointing to the home page.

Who does such a scary thing??? It's not even close to halloweeen #SEOHorrorstories pic.twitter.com/zs0QSzVLLB

-- Lars Skjoldby (@skjoldby) July 26, 2018

Another common canonical disaster happens when relative URLs are used instead of absolute URLs -- if a staging site gets indexed accidentally, both staging and production could be claiming to be the canonical version, and the staging site could be chosen as preferred.

There's typically some level of automation to canonical tags, so if something breaks, your canonical tags could be wrong across the entire site.

when the canonical breaks and references the wrong URL on every single article on the site #seohorrorstories

-- Carolyn Shelby (@cshel) October 30, 2015

So, it's a good idea to proactively monitor your canonical tags (a quick spot-check is sketched below this list) to ensure:

  • All duplicate/near-duplicate URLs canonicalize to the master copy of the page
  • Ideally, every URL has a canonical tag
  • Your canonical tags don't point to non-indexable URLs
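
Here's what that spot-check might look like in Python at a small scale, assuming the requests and beautifulsoup4 libraries are installed and the sample URLs are placeholders for your own:

    # canonical_check.py -- flag pages whose canonical tag is missing or unexpected.
    import requests
    from bs4 import BeautifulSoup

    SAMPLE_URLS = ["https://www.example.com/product-a"]          # placeholder sample

    for url in SAMPLE_URLS:
        html = requests.get(url, timeout=10).text
        tag = BeautifulSoup(html, "html.parser").find("link", attrs={"rel": "canonical"})
        if tag is None:
            print(f"MISSING canonical: {url}")
        elif not tag.get("href", "").startswith("http"):
            print(f"RELATIVE canonical on {url}: {tag.get('href')}")
        elif tag["href"] != url:
            print(f"MISMATCH: {url} canonicalizes to {tag['href']}")

For pages that intentionally canonicalize elsewhere, the "MISMATCH" line is simply a prompt to confirm the target really is the right master copy and is indexable.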

💡 Related Resource: The Top 10 Questions About Canonical Tags Answered

5. Testing/staging pages getting indexed

Developers typically test changes on a staging (non-indexed) version of the site before those changes go live.

But sometimes, those testing pages can be indexed accidentally, like in this example. Not only did these testing pages go live, but people actually started using them for internal linking.

DEVs create pages for testing purposes on live website, but don't delete them afterwards.

Junior Marketers think those pages are "real" pages, using them for internal linking.

Same process. For years.

Duplicate content.
Keyword cannibalization.
Link juice wasted.

Yay!

-- Alexander Außermayr (@Aussermayr) July 28, 2020

Others report full indexing of staging domains, with the staging domain sometimes outranking the "real" production domain.

I've had it happen THREE times in the last 5-6 years. The biggest problem is that it's a giant duplicate copy of every piece. Yes, in some cases staging outranked www.

-- Alicia K Anderson (she/her) (@A_K_Anderson) July 28, 2020

This is one of the most common issues that comes up when SEOs get together to discuss their horror stories, so keep a close eye on things like your log files to make sure Googlebot isn't crawling any staging or private URLs.
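
If you have access to server logs, even a rough pass like the sketch below can surface the problem early. It assumes combined-format access logs and hypothetical staging URL patterns; user-agent strings can be spoofed, so treat matches as leads rather than proof:

    # staging_crawl_check.py -- flag Googlebot requests that hit staging/private URLs.
    import re

    STAGING_PATTERN = re.compile(r"staging\.|/dev/|/test/")      # hypothetical patterns

    with open("access.log") as log:
        for line in log:
            if "Googlebot" in line and STAGING_PATTERN.search(line):
                print(f"Googlebot touched a staging URL: {line.strip()}")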

6. Internationalization issues

Targeting sections of your site for different countries and languages is one of those things that's simple in theory but easy to get wrong in practice.

The first mistake people often make is adding new language versions to their site without accounting for the additional support they'll need to maintain them. Having a multilingual site but only truly maintaining the primary language version, for example, is a recipe for trouble.

On a multilingual site, making changes to EN and not testing through to all language versions.

-- Craig Harkins (@CraigHarkins) July 27, 2020

Another common internationalization mistake is simply implementing your hreflang tags incorrectly. This can send confusing signals to Google and even cause the wrong language or country version to be served to your audience.  

Hreflang tags on a site with a dozen languages but the syntax is wrong and/or points the the wrong url.

-- Lisa Brown⁷ (@bunltd) July 28, 2020

What does that look like in practice? Aleyda Solis shared this example:

Hreflang gone wrong #SEOhorrorstories #Halloween 👻☠️💩 pic.twitter.com/jmu2InvTbj

-- Aleyda Solis (@aleyda) October 31, 2018
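
Many of these syntax problems are catchable with automation: hreflang values are built from ISO 639-1 language codes, optionally with a region code, plus the special "x-default" value, and the URLs should be absolute. Here's a simplified single-page check in Python, assuming requests and beautifulsoup4 are installed and the URL is a placeholder:

    # hreflang_check.py -- basic sanity checks on a page's hreflang annotations.
    import re
    import requests
    from bs4 import BeautifulSoup

    # Simplified pattern: ISO 639-1 language, optional "-" + region, or "x-default".
    VALID_HREFLANG = re.compile(r"^([a-z]{2}(-[A-Za-z]{2})?|x-default)$")

    def check_hreflang(url: str) -> None:
        soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
        for link in soup.find_all("link", attrs={"rel": "alternate"}):
            lang, href = link.get("hreflang"), link.get("href")
            if lang is None:
                continue                # other rel=alternate uses (RSS feeds, AMP, etc.)
            if not VALID_HREFLANG.match(lang):
                print(f"Suspect hreflang value '{lang}' on {url}")
            if not href or not href.startswith("http"):
                print(f"Missing or relative hreflang URL for '{lang}' on {url}")

    check_hreflang("https://www.example.com/")       # placeholder URL

A fuller check would also confirm reciprocity -- every alternate page needs to link back to the page referencing it -- which is where a full-site crawl earns its keep.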

💡 Related Resource: International SEO: Helping Google Serve Your Content to Global Audiences With Hreflang Tags

7. Crawl traps

Do you know how big your site actually is?

If you manage a site with a faceted navigation or any type of parameterized URL structure, you may be at risk for crawl traps.

Crawl traps happen when a website can generate nearly endless combinations of its core URLs. That means Googlebot could end up crawling millions of URLs even if your site only has a few thousand core pages, like in Kathy's example.

Inadvertent links leading to exponential crawling of 44 Million URLs, on a 30K page site.

-- kathy alice brown (@kathyalice) July 28, 2020

This can result in crawl budget waste as well as accidental indexing. And remember, even if you block certain parameters in your robots.txt file, if other crawlable pages link to them, Google could still index them.  
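
Log files make the scale of a crawl trap easy to quantify: strip the query strings from everything Googlebot requested and compare the number of unique full URLs to the number of unique paths. A minimal sketch, assuming combined-format access logs:

    # crawl_trap_check.py -- how much of Googlebot's crawl is parameter noise?
    from urllib.parse import urlsplit

    full_urls, bare_paths = set(), set()

    with open("access.log") as log:
        for line in log:
            if "Googlebot" not in line:
                continue
            try:
                # Combined log format: the request line sits inside the first quotes.
                requested = line.split('"')[1].split()[1]    # e.g. /shoes?color=red&size=9
            except IndexError:
                continue
            full_urls.add(requested)
            bare_paths.add(urlsplit(requested).path)         # drop the query string

    print(f"{len(full_urls)} crawled URLs collapse to {len(bare_paths)} unique paths")

A large gap between those two numbers is a strong hint that parameters, not real pages, are eating your crawl budget.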

💡 Related Resource: Why Google Deindexed 10.5 Million Pages On HubSpot's Site -- And How We Fixed It

8. Webmaster guideline violations

It seems just about every SEO practitioner has a horror story about spammy practices.

In many cases, that involves working with a client who's adamant about trying a new "tactic" that's actually just a violation of webmaster guidelines. In other cases, we're brought in to clean up the damage done by a black-hat agency.

In a recent article, Kaspar Szymanski describes a medical website that took a substantial hit because of link schemes:

"None of the backlinks were Google Webmaster Guidelines compliant. The ongoing campaign utilized primarily PageRank-passing guest blogs with commercial, hard anchor texts. Before long, a manual action due to link building was issued. Over the subsequent two quarters, SERP visibility continued to degenerate with a lingering active manual penalty."

Or this example from Matt Lacuesta, where the site was engaging in multiple violations of webmaster guidelines, such as hidden text and links, link schemes, and doorway pages.

A2: invisible font/txt links in footer, tiered link program. built 100s of sites, pushed spammy links to main site. still do it #SEMrushchat https://t.co/rCbRRCzTsn

-- Matt Lacuesta (@MattLacuesta) November 2, 2016

Practices like this are frustrating not only because they can lead to manual actions, but also because the SEOs working on these accounts have to spend their time on cleanup work instead of being proactive.

9. JavaScript implementation

According to Google's Martin Splitt, JavaScript isn't inherently bad for SEO.

"JavaScript is a tool, and there are many tools in your toolbox. If you're using it right, you'll be fine. If you're using it wrong, things can go wrong."

However, things often do go wrong.

That's because in some cases, developers use JavaScript to load important content and links.

Relying on JS to render your critical content #SEOhorrorstories #Halloween 👻☠️💩 pic.twitter.com/3FnLwEozCG

-- Aleyda Solis (@aleyda) October 31, 2018

Because search engines aren't always able to process JavaScript successfully or immediately, JavaScript-loaded content can be missed, meaning Google and others may never see it.

JavaScript can also add seconds of load time to your pages, which can suck up a lot of your crawl budget.

This is why many brands with JavaScript websites opt for server-side rendering over client-side rendering, or a dynamic rendering solution like SpeedWorkers.
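
A quick, low-tech way to see whether your critical content depends on client-side JavaScript is to compare the raw HTML response (what a non-rendering crawler sees first) with the rendered page in your browser. A minimal sketch, assuming the requests library is installed and the URL and phrase are placeholders:

    # js_dependency_check.py -- is the critical copy present in the unrendered HTML?
    import requests

    URL = "https://www.example.com/category/shoes"           # placeholder URL
    CRITICAL_PHRASE = "Free returns on all orders"           # placeholder on-page copy

    raw_html = requests.get(URL, timeout=10).text            # no JavaScript executed here
    if CRITICAL_PHRASE in raw_html:
        print("Phrase found in the raw HTML -- it doesn't depend on client-side JS.")
    else:
        print("Phrase missing from the raw HTML -- it's probably injected by JS.")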

💡 Related Resource: JavaScript 101 For SEOs

10. "Content Is King"

"Content is King" is an SEO myth that just won't seem to die.

While good SEOs know that quality and intent-matching content trumps quantity any day, some people are still of the mindset that more content is always better.

When your client "ungates" thousands of paid landing pages to get more content #SEOHorrorStories

-- Matt Lacuesta (@MattLacuesta) November 1, 2016

This theory flies in the face of countless case studies showing that pruning low-quality content from your site can actually help performance. Not to mention that "thin content with little or no added value" can actually warrant a Google penalty.

11. Internal linking mistakes

Google uses links to crawl the web. However, not all links are created equal.

When links aren't formatted as "a" elements with "href" attributes, it can harm Google's ability to crawl through your site as well as the flow of PageRank.

This is the nomenclature of the all of the internal linking on a client's website:

No href values 😱#SEOHorrorStories pic.twitter.com/sSprJ2AKKc

-- Pedro Dias (@pedrodias) August 5, 2020

Google's John Mueller has addressed whether links added or handled via JavaScript can still be crawled:

Sometimes! If links are added as normal "a" elements with "href" attributes using JS, that's fine. Sometimes people add "onclick" events instead of links, that's not always something we can notice & crawl through.

-- 🍌 John 🍌 (@JohnMu) April 17, 2018

Other internal linking issues include orphan pages (pages that aren't linked from anywhere in your site structure but that Google is still finding), very deep pages, and excessive linking to the non-canonical versions of your pages (although Google has said it can usually figure this out).
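
A crawler-style check for un-crawlable link markup is straightforward: collect every "a" element on a page and flag the ones without a usable href, plus non-anchor elements that navigate via onclick. A minimal sketch, assuming requests and beautifulsoup4 are installed and the URL is a placeholder:

    # link_markup_check.py -- find "links" that Googlebot may not be able to follow.
    import requests
    from bs4 import BeautifulSoup

    soup = BeautifulSoup(
        requests.get("https://www.example.com/", timeout=10).text,   # placeholder URL
        "html.parser",
    )

    for a in soup.find_all("a"):
        href = a.get("href", "")
        if not href or href.startswith(("javascript:", "#")):
            print(f"<a> without a crawlable href: {a.get_text(strip=True)!r}")

    for el in soup.find_all(onclick=True):
        if el.name != "a":            # e.g. <div onclick="location.href='/sale'">
            print(f"<{el.name}> navigating via onclick: {el.get('onclick')[:60]}")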

12. URL Removals Tool

Did you know that Google has a tool that allows you to temporarily remove URLs from search results?

It's called the "Remove URLs" tool, and it can have some pretty devastating consequences if used incorrectly.

I pondered the complete weekend why one of our pages is indexed in Bing but not in Google, got lost in theories about rendering and noindex transfer via canonicals but at the end it was just easy: the complete site was removed in webmastertools. 🧐 #seohorrorstories pic.twitter.com/OzwZHuosla

-- Raphael Raue (@raue) November 5, 2018

A while back, when it appeared that LinkedIn had been deindexed from Google, a tweet from John Mueller led many to believe that the culprit was an improper use of the removals tool.

PSA: Removing the "http://" version of your site will remove all variations (http/https/www/non-www). Don't use the removal tools for canonicalization. https://t.co/yTfRzWZGtd

-- 🍌 John 🍌 (@JohnMu) May 6, 2020

While we don't know for sure what caused the deindexing in LinkedIn's case, it's safe to say that this tool can relatively quickly remove your site or some of its pages from search, so tread lightly.

13. Treating ranking factor studies as law

Have you ever heard SEO truisms like "you need X number of words to rank on page 1"? If so, this likely has its roots in a ranking factor study, so proceed with caution.

Ranking factor studies attempt to find common denominators on top-ranking pages. The result is often a list of best practices we should follow in order to rank well.

However, blindly following these types of lists can result in disaster.

For example, imagine if an e-commerce SEO blindly followed the advice that long pages rank better, and decided to make all their product pages blog post-length!

While these lists can be helpful to some degree, we should always consult our own data when making decisions about how to optimize.  

💡 Related Resource: Ranking Factor Studies: Where They Fall Short & How to Find Your True Ranking Factors

14. Server meltdowns

If Googlebot starts to hit a lot of 5xx errors while crawling your site, it typically takes that as a sign that it's overwhelming your servers and will stop crawling. This can result in Google missing some of your critical pages, which means they won't be added to the index or able to drive organic traffic.

Server errors are also bad for users, and consequently, your bottom line. Take this example from Jenny Halasz in an article on Search Engine Journal:

"Working for a very big ecommerce brand about five weeks before Christmas, we had a post about the best gifts get really large reach and get a #1 ranking for "Christmas gifts" on Google. The increased traffic caused the site to start throwing 503 errors!"

It's an SEO's job to ensure Google finds all our critical content, and that the content drives traffic and revenue for the business. Server bandwidth issues can get in the way of both, which is why this is one of the most frustrating disasters for SEOs.
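
Because these meltdowns tend to coincide with traffic spikes, it's worth watching the share of Googlebot requests that return 5xx rather than waiting for rankings to dip. A minimal sketch over access logs (combined log format assumed):

    # error_rate_check.py -- share of Googlebot hits that returned a 5xx status.
    googlebot_hits, server_errors = 0, 0

    with open("access.log") as log:
        for line in log:
            if "Googlebot" not in line:
                continue
            googlebot_hits += 1
            # Combined log format: the status code follows the quoted request line.
            status = line.split('"')[2].split()[0]
            if status.startswith("5"):
                server_errors += 1

    if googlebot_hits:
        rate = 100 * server_errors / googlebot_hits
        print(f"{server_errors}/{googlebot_hits} Googlebot requests were 5xx ({rate:.1f}%)")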

15. Sites that have none of the SEO basics right

Despite Google's best efforts to educate webmasters, there are still plenty of sites on the web that seem to have absolutely none of the on-page basics right.

Maybe they have no unique title tags or none of their content is unique.

A1: Look out for things like missing tags, meta descriptions, broken links, duplicate content to name a few. #semrushchat

-- Miles Technologies (@milestech) November 2, 2016

Depending on your outlook, these cases represent some of the biggest disasters or the biggest opportunities for improvement.

Hey, with so much wrong, there's really nowhere to go but up!

There's more where that came from...

On September 3rd, we'll be chatting with Jenn Mathews (Enterprise SEO Manager at GitHub), Martin MacDonald (Founder of MOG Media), Tim Resnik (Head of SEO, Product at Walmart) and Kaspar Szymanski (former Googler & Founder of Search Brothers) about SEO horror stories they've experienced, as well as their process for diagnosing and correcting these issues.

Sign up to join us (after September 3rd, the webinar will also be available on-demand)!
