The danger with assumptions

by David Mannheim

posted on: November 29th 2015

Do you look at a website and think to yourself “I know why users aren’t converting”?

If the answer is yes, you’re in the wrong career. You should be a clairvoyant.

Somewhat facetious, I admit, but this type of assumption is what leads you as an optimiser down the wrong, dangerous path. Some of us, myself included, come from UX backgrounds, and we can certainly assume from experience what might be wrong, but we can’t know for sure. Not without validation. Nor should we be assuming anything. We should practise what we preach and speak in objective facts that we know, not subjective language that assumes.

Shifting our mindset from assumptions to questions shifts our own behavioural optimisation processes. Let me give you an example.

We are looking at a site for the first time - www.onefinestay.com. Our immediate assumption is that the big data-capture pop-up of “£20 off your first onefinestay booking” is both too immediate and too disruptive. This shouldn’t be there, right? It creates immediate friction for the user and disrupts the journey of what they came to the site to do: to search. However, we don’t know their users. Behaviour might dictate that Onefinestay users who sign up for the newsletter have a higher propensity to purchase. This element might have been tested through five or six different iterations, testing ‘when’ the message should be shown to the user down to the millisecond. Users who land on the homepage from PPC might bounce less with this pop-up, which engages them more in the site and makes them more ‘curious’. That’s the wonderful thing with assumptions: we don’t know!

An assumption is only an assumption

Let me explain why assuming creates a dangerous path for optimisation. Firstly, it is what it says: an assumption. As optimisers we can make assumptions, but we need to validate them. We must continually ask “why”. Why has the site done this? Why are users being forced down this path? Why is the call to action a teeny-tiny link hidden between mountains of photos that cognitively overload the user? The danger with assuming “change [x] to produce [y]” is that it creates bias within the optimiser’s mindset, which is dangerous both in validation - seeing the data you want to see - and in hypothesis generation - creating the insights and solutions you want to create. This can ultimately lead to false positives and a poor, rocky road of optimisation.
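To make “validate, don’t assume” concrete: one common way to check whether an observed uplift is real or a false positive is a two-proportion z-test on the control and variation conversion rates. The sketch below is illustrative only - the function name and the visitor/conversion numbers are hypothetical, not from any real test mentioned in this post.

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing the conversion rates of
    control (A) and variation (B)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical numbers: 10,000 visitors per arm, 4.0% vs 4.6% conversion
z, p = two_proportion_z_test(conv_a=400, n_a=10000, conv_b=460, n_b=10000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

If the p-value clears your significance threshold you have evidence, not an assumption - and if it doesn’t, the “obvious” uplift may be exactly the false positive described above.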

Tactically driven optimisation

Second of all, assumptions lead to tactically driven optimisation. Tactical experiments are those done on best-practice assumptions - but we’ll get to that. They are those experiments that don’t actually affect perception or behaviour but are done in bitty (I couldn’t come up with a more professional word, apologies), independent scenarios lacking process and structure. For example, making the button bigger here or changing the headline copy there. In the words of Kissmetrics, “tacticians have a tool kit that includes a wide array of tested elements that can be applied to a problem quickly”. And that is exactly where the problem lies. It doesn’t help that there are so many “you too” examples in blog posts from the likes of Whichtestwon.com and so forth. Here, companies say “I did [x] and it resulted in [y], so you should too”, which creates this sub-optimal mentality of assuming.

Best practices (?)

Third: assuming is akin to following best practice. We assume that because AO.com or JohnLewis.com did something, so too should we. No. I’ve written before about there being no such thing as best practice, because there isn’t. With such a variety of differences in target audiences, user behaviour, user perception and so forth, no two sites are the same. We’ve had instances where, because of a partner request, we’ve added, say, a mega menu to a site. “Surely this will increase conversions. All the best sites have a mega menu.” That I can’t deny: all the competitors had one, and it certainly seemed like ‘best practice’. However, when tested, the new mega menu variation bombed. It forced a different behavioural pattern, sending our users down a path with a reduced propensity to purchase. Users couldn’t find what they were looking for, and where they would so often have fallen back on the prominent search, they now used it less - the very element that had been optimised to increase that propensity to purchase. But what if AO.com or JohnLewis.com tested it? Surely then it’s OK? Again, no. They probably didn’t test it. Even if they did, the sceptic within me would argue the probability that they didn’t test it right* (poor hypothesis generation, poor QA’ing, etc.). More importantly, they are testing on a different site with different behavioural patterns - be they a competitor or not.

In summary, I’m not saying assumptions don’t work; an assumption can lead to curiosity which, when validated, could uncover one gem of an insight. Or it could form part of a hypothesis which, when tested, could reveal an interesting user behaviour. What I am saying is that it’s dangerous to assume, for the reasons given above. If we assumed something shouldn’t happen, why would we test? After writing this blog post disguised as a rant, I searched for similar blog posts to validate my own thinking and found a chap by the name of Ivan Imhoff, who said: “as a general rule if you make assumptions about what will convert, you are always going to be wrong. There’s a fundamental flaw in a marketer’s belief that they know what all of their visitors want. So, in order to maximise conversion you must be able to see first-hand, what works and what doesn’t, from the visitor’s perspective and not what you think is right.” I’ve just spent five or six paragraphs explaining this very thing, and he’s put it together succinctly in a couple of sentences. Bravo, sir.

So the next time a client asks, “What is wrong with this site? Why aren’t users converting?”, remain vague and educational. We don’t want to lose the sale, sure, but by saying there are too many products per category page, or that a guest checkout should be in place, we are assuming their target audience and its respective behaviour. Equally, as one of the great thought leaders of our generation once said, assuming makes an ass out of u and me, because that’s how it’s spelled. Ellen DeGeneres. Ahead of her time.

*to clarify, I’m not suggesting AO.com or JohnLewis.com do test poorly, I don’t know. In fact, from an ecommerce standpoint these two examples of websites are very well optimised. Or, at least, I assume them to be!

David Mannheim

David is an experienced conversion optimiser and has worked across a series of core optimisation disciplines including web analytics, user experience and AB & MVT testing.