A/B Test Development

Developing an A/B test is a robust, detailed and comprehensive process.

With experience of thousands of A/B tests, and certification from all of the major A/B testing platforms, we're ideally placed to develop and QA your experiments – especially with our tailored, measured process.


A/B test development is vital: it enables us and our clients to test ideas for improvement on their websites.

We believe in putting qualified ideas to the test. We can’t do any of this without reliable test development.

Each experience we test is mapped out in a hypothesis document which acts as a brief for the development team. It details the required amends, supporting insights (so we know why the test is being run), what the goals are, and various other pieces of information to help us create the test.

The most important of these, though, is the required amends. This is where our A/B test developers need to use their creativity and skill to bring the requests to life – which can be challenging on an unfamiliar website.

The nature of A/B testing is that we’re changing an existing page. Our developers work on the fly, frequently weighing in on experiment ideas before a design is finalised. Their expertise ensures that the ideas we come up with work in the real world.

The first port of call is always a thorough test of the page – clicking around the current functionality, and playing about with the elements on the page. The next stop is the trusty Developer Tools in your favourite browser: examining the website HTML and the relevant scripts and styles on the page tells the developer about potential roadblocks to the intended implementation, any possible pitfalls for user experience with the requested amends, and any areas where the test might be harder to implement than first thought.


Let’s imagine we’re building the following test:

Hypothesis: “We believe that making filters on the category page sticky will increase click through to products because the filters will be utilised more often.”
Supporting insight: “When users interact with filters at a category level they are 20% more likely to convert to product level.”
Change log: “Make the category filters sticky (i.e. follow the user down the page).”

There are a number of different ways to handle category filters.

  • You can use AJAX to reload the results in place, based on the option the user clicks.
  • You can use a full page reload, so that clicking an option reloads the page with the new filter applied.

So the way you implement your test always depends on how the page currently behaves (rather than on a set pattern of development) – which is what makes that initial investigation period so crucial.
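To make the sticky-filters hypothesis above concrete, here is a minimal sketch of what the variation code might look like. The selector and class name are hypothetical examples rather than code from any real client site; the decision logic is kept as a pure function so it can be reasoned about separately from the DOM wiring.

```javascript
// Minimal sketch of the "sticky filters" variation.
// '.category-filters' and 'is-stuck' are hypothetical names.

// Pure decision logic: the filter panel should stick once the user
// has scrolled past its original position on the page.
function shouldStick(scrollY, filterTop) {
  return scrollY > filterTop;
}

// DOM wiring, guarded so the logic above can also run outside a browser.
if (typeof document !== 'undefined') {
  var filters = document.querySelector('.category-filters'); // hypothetical selector
  if (filters) {
    var initialTop = filters.getBoundingClientRect().top + window.pageYOffset;
    window.addEventListener('scroll', function () {
      // A CSS rule for .is-stuck (e.g. position: fixed; top: 0) does the rest.
      filters.classList.toggle('is-stuck', shouldStick(window.pageYOffset, initialTop));
    });
  }
}
```

Even a small change like this depends on the page's existing markup and styles – which is exactly why the investigation step comes first.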

Of course, there’s nothing better than diving right in, so that you can have a play around with code to determine what will work and what won’t. So once you’ve thoroughly investigated the page you will be altering, the next step is starting to put some code together.

At User Conversion, all of our developers (you can meet them here) had issues with the built-in editors that the main A/B testing platforms provide.

Too slow to update was a common complaint – and not flexible enough, unsurprising for a web-based editing system. One of the main issues was being unable to use tools like Sass to make styling easier and more intuitive.


So we built a new system using Node.js and the Gulp automation tool, which takes our experiment code and automatically deploys it to a server that we host.
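The shape of that setup is roughly a gulpfile like the simplified sketch below. The `gulp-sass` and `gulp-autoprefixer` plugins named here are common community packages rather than a statement of our exact toolchain, and the deployment step is omitted – this is a build-config illustration, not our production pipeline.

```javascript
// Simplified sketch of a gulpfile for experiment development.
// gulp, gulp-sass and gulp-autoprefixer are third-party packages.
var gulp = require('gulp');
var sass = require('gulp-sass');
var autoprefixer = require('gulp-autoprefixer');

// Compile the experiment's Sass into plain CSS, with vendor
// prefixes added automatically for older browsers.
gulp.task('styles', function () {
  return gulp.src('src/experiment.scss')
    .pipe(sass())
    .pipe(autoprefixer())
    .pipe(gulp.dest('build'));
});

// Rebuild on every save, so changes show up on the page immediately.
gulp.task('watch', function () {
  gulp.watch('src/**/*.scss', ['styles']);
});
```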

You can read about this system in more detail in this blog post.

We then link to these files from the A/B testing platforms' built-in editors – and we never have to touch the editor again until we're ready to push the experiment live. This means that any changes we make are saved instantly and reflected on the site we're working on; no more 'waiting for upload' for us.
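In practice, the platform editor ends up containing little more than a loader along these lines. The hostname and filename below are placeholders, not our real infrastructure; the cache-busting version parameter is one simple way to make sure the browser always fetches the latest build during development.

```javascript
// Sketch of the loader pasted into the testing platform's editor.
// Build a URL with a cache-busting version query parameter.
function experimentUrl(base, version) {
  return base + '?v=' + encodeURIComponent(version);
}

// Inject the hosted experiment script into the page (browser only).
if (typeof document !== 'undefined') {
  var script = document.createElement('script');
  script.src = experimentUrl('https://experiments.example.com/exp-123.js', '42'); // placeholder URL
  script.async = true;
  document.head.appendChild(script);
}
```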

The real benefit, though, is the flexibility it gives us. This includes:

  • We can use whichever text editor we like (for ease of development).
  • We can integrate tools like Sass (covered earlier) and Autoprefixer (to deal with CSS vendor prefixes automatically).
  • We can use tools like JSLint to ensure our code is syntactically correct and well formatted.
  • We can use our new 'tags library' to grab bits of functionality that we use often and drop them into our code quickly and without fuss – saving time on rework.

None of this is available in the built-in editors provided by the standard A/B testing tools.

The final benefit is that our experiment code all lives in a Git source control system, meaning all of our code is stored centrally, backed up and version controlled. This is incredibly handy for things like rolling back to earlier versions, or seeing how different members of the team approach similar situations. Truly collaborative optimisation.

This system means that we can quickly and easily set up an experiment, work on it locally with all of our favourite build tools and editor extensions, and preview it on the webpage we are working on – all without actually making any changes in the platform editor (reducing the chances of making any inadvertent changes to running experiments).

Once we’re happy with the experiment, we pass it to our QA team, who give it a thorough run-through and provide feedback on any issues or UX problems that they notice.

We loop around this cycle until both QA and Development are happy, at which point the experiment is passed back to the Conversion Team to set up GA tracking, configure goals, and push it live.


We only use JavaScript or jQuery – rather than the visual editor – to make our website amends, and the latest CSS styling technology to alter the look of pages.

To give an example, many developers use CSS3 these days, but some don't take the time to use the relevant vendor prefixes (-moz-, -webkit-, etc.) that are required to make these properties work in older browsers. Autoprefixer, as mentioned above, does all of this work for you, so that all of our code is automatically compatible with older browsers.
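As a quick illustration of what that means in practice, here is the kind of transformation Autoprefixer performs on a flexbox declaration. The exact output depends on which browsers the build is configured to support, and the `.filters` selector is just an example:

```css
/* What we write in the source: */
.filters {
  display: flex;
}

/* Roughly what Autoprefixer outputs for older browsers: */
.filters {
  display: -webkit-box;
  display: -ms-flexbox;
  display: flex;
}
```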

We also ensure that all of our experiments (where required) work all the way from desktop to mobile – and all screen sizes and configurations in between. iPads and iPhones are important, sure, but it's just as important to make sure that your experiment works on an old Samsung Galaxy tablet, or the stock browser on a cheap Android phone.

In short, we are the professionals' choice: we have the expertise to make changes to any and all types of website and web app, and we're certified by Optimizely, VWO, Qubit and Maxymiser.

We have invested time and money in developing a solution that makes us better A/B test developers by cutting out many of the problems we've all faced before.

The result is variations so seamless that our clients can't tell which is their original site and which is the variation.