Botify's SEO Split Testing: Is this the end of "it depends" for SEO?
How SEO Split Testing Brings Greater Organic Predictability
Anyone working in SEO will be familiar with the phrase “it depends”. It might even be considered its unofficial catchphrase. SEO is complex, and certainty is rare, which can prove challenging when seeking wider stakeholder buy-in for optimizations intended to increase organic traffic and revenue.
Some brands try to test their SEO ideas by leveraging internal development resources. Because this relies on engineering and data science teams who have other priorities and demands on their time, such an approach can severely limit the number of tests that can be deployed in a given year. Other brands with even smaller teams, more limited engineering bandwidth, or no data science team cannot conduct SEO testing at all.
The result? Countless hypotheses about improving revenue through SEO do not get actioned. Or possibly worse, time is spent optimizing pages that ultimately have no impact.
To change this, we developed new split testing capabilities within PageWorkers. This allows teams to adopt a data-driven, test-and-learn approach to executing their SEO roadmap. More predictability allows SEO teams to take greater control of the process and narrative of organic search impact. By improving the odds of success for critical website optimizations, they make a stronger case for SEO and can think and act like performance marketers.
What is SEO split testing, and how can it be included in SEO strategies?
SEO split testing is essential for gaining insight into how search engines react to SEO changes. To execute this effectively, we need to segment a set of comparable pages into two groups and implement a specific change to only one of these groups. This allows for a systematic comparison of the performance and impact of SEO changes, providing valuable data to refine and optimize strategies for improved search engine visibility. In short, it allows SEO teams to conduct controlled and safe experiments that provide more predictability to how their optimizations can result in increased organic traffic and revenue.
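To make the splitting step concrete, here is a minimal sketch in Python using a hypothetical list of product-page URLs. It only illustrates the general idea of dividing comparable pages into control and variant groups; it is not how PageWorkers implements its grouping.

```python
import random

# A hypothetical set of comparable pages, e.g. product pages sharing a template.
pages = [f"/products/item-{i}" for i in range(1, 201)]

random.seed(42)        # fix the seed so the split is reproducible
random.shuffle(pages)

# Half the pages stay untouched (control); the other half receive the SEO change (variant).
midpoint = len(pages) // 2
control_group = pages[:midpoint]
variant_group = pages[midpoint:]

print(f"{len(control_group)} control pages, {len(variant_group)} variant pages")
```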
To be clear, SEO split testing is not the same as A/B testing focused on user experience. Typical user-focused split testing tools aim to test how users interact with a page. They do this by presenting different versions of the same page to different cohorts of users and tracking each cohort’s engagement. That approach is useful for conversion rate optimization, but it doesn’t help you capture more visitors to those pages. These tools don’t track whether search engine bots see the variations, or how the changes affect rankings and clicks from search result pages. More often than not, they actually block bots from seeing the variations, since bots won’t interact with pages and would therefore skew the measurement of the test. By contrast, SEO split testing is designed to test how changes to your pages influence search engine behavior, such as crawl activity, and how those changes impact your rankings and organic traffic (impressions, clicks).
Why is SEO split testing important, and what can it achieve?
By automating the setup and deployment of SEO tests, split testing empowers SEO and marketing teams to move faster, with greater certainty and confidence that the changes they want to make are safe and will drive traffic and revenue. Equally important, they can now identify in advance the optimizations that won’t! Having such certainty up front avoids spending limited engineering bandwidth hard-coding modifications that will never generate ROI, and focuses it instead on permanently implementing the optimizations that will. Such certainty doesn’t just improve the odds of success; it improves how other stakeholders in the business understand and value SEO.
Within as little as 30 days, teams can see the results of their test and compare the performance of the two groups of pages in terms of clicks, impressions, CTR, and average position on SERPs to determine the impact of their content changes on key business metrics.
It’s a controlled experiment that provides a more robust case for carrying out changes and anticipating ROI.
What is PageWorkers SEO split testing?
We developed a comprehensive solution designed to streamline the setup, deployment, and reporting of SEO split tests. This gives SEO teams greater autonomy, enabling them to effortlessly conduct large-scale content tests with just a few clicks. Notably, this can be accomplished without relying on development or data science teams, providing a user-friendly and efficient way to enhance SEO strategies.
How to run an effective SEO split test with PageWorkers:
- Start by selecting an idea to test. (Example: Will adding urgency words like “now,” “exclusive,” or “hurry” to the titles of my product pages increase their CTR?) PageWorkers makes setting up the test simple: it enables users to automatically split the selected scope (e.g., product pages) into variant and control groups.
- Run your test: the changes will be applied to the variant group of pages within seconds.
- Within as little as 30 days, you can analyze your results, which draw on built-in statistical analysis and reporting, and determine whether your change has affected your website’s performance (a simplified sketch of this kind of comparison follows this list).
- Share the results with your stakeholders and secure the development resources you need for the highest priority fixes.
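For readers curious about what the analysis step involves, here is a simplified Python sketch comparing variant and control clicks with a two-sample t-test. The daily click figures are invented for illustration, and this is not Botify’s built-in analysis; it only demonstrates the kind of significance check a split-test report relies on.

```python
from scipy import stats

# Invented daily click totals for each group over a ten-day window,
# e.g. as exported from Google Search Console. Real tests run longer
# and account for seasonality; this only illustrates the principle.
control_daily_clicks = [412, 398, 430, 405, 389, 421, 417, 402, 395, 410]
variant_daily_clicks = [455, 467, 449, 472, 460, 481, 458, 470, 463, 476]

# A two-sample t-test asks whether the variant group's clicks differ
# from the control group's by more than random day-to-day noise.
t_stat, p_value = stats.ttest_ind(variant_daily_clicks, control_daily_clicks)

lift = sum(variant_daily_clicks) / sum(control_daily_clicks) - 1
print(f"Observed lift: {lift:.1%}, p-value: {p_value:.4f}")

if p_value < 0.05:
    print("Significant difference: worth hard-coding the change.")
else:
    print("No significant difference: save the engineering bandwidth.")
```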
With PageWorkers, you can run multiple SEO split tests on your website at the same time, but only one per group of pages sharing the same template.
What can SEO and marketing teams test through PageWorkers?
Here are some examples of hypotheses teams may want to test:
- Will AI-written titles perform better than human-written titles?
- Where is content best positioned hierarchically on a page?
- Do pages with a certain keyword in their description receive more clicks than other pages that don’t include the keyword?
- Does the number of products on a category page impact rankings?
- Does adding a publication date to an article increase click-through rates?
By testing and learning in advance, these new capabilities empower teams both to predict the impact of SEO changes on a website before implementing them permanently, and to more accurately attribute a change in performance to a specific optimization. In sum, they provide greater visibility into SEO impact before and after making optimizations.
The end of “it depends”?
Will the ability to test and prove the case for making key optimizations start to spell the end of the uncertainty that has made “it depends” SEO’s unofficial catchphrase?
Making changes with greater confidence will certainly start to lift the critical blockers of doubt and delay for SEO teams. Doing this with greater velocity will achieve a positive impact faster. But that’s still only half the picture when seeking to prove the ROI directly derived from such optimizations. And this has also historically proven difficult, with a lack of measurability conspiring with a lack of certainty to hold SEO teams back.
The good news is, that’s changing too. Botify recently announced the launch of its model for calculating the Return on Organic Search Spend (ROSS), designed to be directly comparable with ROAS which digital marketers use to track the ROI of paid media. The Botify ROSS model has just emerged from alpha testing and will be developed further based on customer and industry feedback.
We’re steadily reaching a point where SEO teams and digital marketers can know, activate, measure, and share the impact and ROI of organic search with greater velocity and certainty than was ever possible before. SEO is no longer a guessing game, nor is it a waiting game. Welcome to the New Search Era.
Split testing is now live within Botify’s PageWorkers solution. To discover how this capability, combined with ROSS, can bring greater certainty in making optimizations, and greater visibility over their impact and ROI, book a demo here.