XOVI Experts

Top optimisation pitfalls and how to learn from them

Chris McCormick | December 3, 2015

Based on my experience over the years (working with some of the largest online retailers in the UK, both client side with Shop Direct and agency side with PRWD), I have seen consistent trends in the most prevalent pitfalls within the conversion optimisation industry. This article discusses some of the more common mistakes and offers tips to help you avoid falling into the same traps.

TAKING ‘BEST PRACTICE’ TOO LITERALLY

A lot of businesses take best practice rules and techniques too literally, incorporating them into their website without any thought or consideration. ‘Best Practice’ guides usually conclude that all ecommerce sites should have clear navigation, large high-quality images, detailed product descriptions and so on. Whilst these are elements which do need to be addressed and are often pain points on a site, what is often overlooked is that EVERY ecommerce website is different. More importantly, every ecommerce website has a different customer base, and those customers will interact in different ways. The key point here is that one size does not always fit all. Consider this: best practice is past practice and should not be used to reshape your website. These are guidelines and that is how they should be used – a checklist to get the very basics right. To make changes on your website, use the data and evidence that you have on your customers to drive change.

FORGET EVERYTHING ELSE, IT’S ALL ABOUT CONVERSION RATE

I’ve worked with a number of different businesses now where the conversion rate metric alone defines the success of a test, rather than taking a holistic approach and treating every test individually. As much as conversion needs to be a priority, there are other metrics out there which can’t be overlooked and are just as important. Here are a select few:

  • Customer Experience/Satisfaction – in the short term, conversion rate may not increase, but improved customer satisfaction may lead to improved lifetime value (LTV)
  • Returns – you may find conversion rate goes up but so does your return rate. Is this a good model for business success?
  • Bounce Rate – bounce rate is a key metric to look at for every ecommerce website. If users are visiting your website and bouncing straight away, there is something fundamentally wrong with your offering
  • Average Order Value – you may find that conversion rate actually drops but average order value increases. Does this mean the test is a failure? (See the quick illustration after this list)
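
As a quick illustration (with entirely hypothetical numbers), revenue per visitor – conversion rate multiplied by average order value – shows how a small drop in conversion rate can still leave the business better off:

    # Hypothetical illustration: a lower conversion rate can still be a revenue win
    # if average order value rises enough. All numbers are made up for this example.

    def revenue_per_visitor(conversion_rate, average_order_value):
        """Revenue per visitor = conversion rate x average order value."""
        return conversion_rate * average_order_value

    control = revenue_per_visitor(conversion_rate=0.030, average_order_value=55.00)  # £1.65 per visitor
    variant = revenue_per_visitor(conversion_rate=0.028, average_order_value=62.00)  # £1.74 per visitor

    print(f"Control: £{control:.2f} per visitor, Variant: £{variant:.2f} per visitor")
    # Despite the lower conversion rate, the variant earns more per visitor,
    # so judging this test on conversion rate alone would be misleading.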

The key thing to take away here is to set your primary goals for each and every test you run early on – what does success mean for the test you are about to run? For example, you may find that for a test on the homepage, conversion rate shouldn’t be the key metric, and success should instead be measured by the uplift in product page visits.

I’VE GOT A GREAT IDEA…LET’S TEST IT!

Coming up with test ideas purely based on best practice, or because you ‘think’ it’s the best thing to do, is wrong. Making decisions based on instinct, or letting the HiPPO (Highest Paid Person’s Opinion) be final, is risky and not how a modern, customer-centric business should act. A few years ago, somebody told me something which has stuck with me ever since: “the business is not the customer; without the customer there will be no business”.

Another common example of this pitfall is businesses coming up with test hypotheses based solely on what their competitors are doing. The “it has worked for them so it should work for us” mentality is always a dangerous one; sometimes it will pay off, but the majority of the time you will be left disappointed. Both instances essentially ignore what the customer wants.

In reality, you should be using insights from analysis of your data (quantitative) and a variety of research (qualitative) to form your test hypotheses, but not to determine solutions. Once you have your hypotheses, it’s then time to prioritise them. At PRWD we prioritise based on three key elements (a simple scoring sketch follows the list):

  • Impact – how much of an impact do you feel the test will have on your users?
  • Importance – where on your website will the test run? (Checkout and product page areas are considered the most sensitive and important on retail websites)
  • Ease – how much resource do you need from developers etc. in order to get the test live?
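
As a rough sketch of how such prioritisation might be scored – assuming a simple 1–10 rating for each element and equal weighting, which is an illustration rather than PRWD’s actual model – the hypotheses with the highest combined score rise to the top of the backlog:

    # Illustrative prioritisation scorer: Impact, Importance and Ease each rated 1-10.
    # The equal weighting and the example hypotheses are assumptions for this sketch,
    # not PRWD's actual scoring model.

    from dataclasses import dataclass

    @dataclass
    class Hypothesis:
        name: str
        impact: int      # expected effect on users, 1 (low) to 10 (high)
        importance: int  # sensitivity of the page area, e.g. checkout scores high
        ease: int        # 10 = trivial to build, 1 = heavy developer resource

        @property
        def score(self) -> float:
            return (self.impact + self.importance + self.ease) / 3

    backlog = [
        Hypothesis("Simplify checkout form", impact=8, importance=9, ease=4),
        Hypothesis("Reword homepage banner", impact=3, importance=4, ease=9),
        Hypothesis("Add reviews to product page", impact=7, importance=8, ease=7),
    ]

    for hypothesis in sorted(backlog, key=lambda h: h.score, reverse=True):
        print(f"{hypothesis.name}: {hypothesis.score:.1f}")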

HOW MANY TESTS HAVE YOU GOT LIVE THIS MONTH!?!

It is becoming increasingly common for conversion optimisation and ecommerce teams to be measured internally on how many tests they have run within the last month or year, regardless of whether the basic fundamentals of successful conversion optimisation have been mastered. These are what we at PRWD like to call ‘vanity’ metrics – metrics which look good on paper but offer little value to a business. Before you start focusing on the quantity of tests run, you need to make sure the tests are of high quality, starting with quality research which will help produce solid testing hypotheses. Rather than focusing on how many tests you have run in a month, look at how many of those tests have delivered an uplift or produced a statistically significant result you can take learnings from. These are what we like to call ‘sanity’ metrics.

At the end of the day, testing costs money. If you are in the early stages of conversion optimisation as a business, then it’s much easier to gain ‘buy-in’ from senior members if you have delivered 10 tests backed up with good research and solid hypotheses that have generated a good uplift and key learnings, rather than 40 tests with a 90% failure rate and no learnings gained.

The importance of focusing on quality over quantity cannot be overstated. It’s only when you become more mature in your optimisation journey (as Shop Direct, Netflix or Google are) that the quantity of tests run should be prioritised, because winning tests become increasingly harder to find. Even then, quality still underpins all the tests they run.

OH NO, THE TEST HAS FAILED!

No-one wants to fail and that is no different when it comes to testing. However, tests which fail should not be considered a bad thing; document the result properly and make sure you take learnings from it. Even the biggest and most mature businesses have tests that fail, but what makes them successful is what they learn from those tests and how they use this data in the future. “Failure is success if we learn from it”, and that’s exactly how you should treat your ‘test and learn’ optimisation programme. Ensure you’re taking learnings from any failed tests and applying them to future ones – that’s good practice. The likes of Google and Netflix have a failure rate of around 90%. As previously stated, the more mature your testing programme gets, the harder it becomes to find ‘winning tests’ (as you have already picked off all the ‘low-hanging fruit’), therefore prioritisation of testing hypotheses becomes even more important.

The key learning to take away here is don’t be scared of test failures – learn from why they happened and go again: “Fail fast to learn better”.

YOU NEED TO ITERATE OR INNOVATE

There is no right or wrong answer as to whether iterative, innovative or radical/strategic testing is better than the others. Each one individually has its pros and cons, but the key to success lies in finding the balance between all three. I find that within the industry most businesses are either just iterating or making huge radical changes – there is nothing in between. With constant iterative tests you will always be learning, but are you actually growing as a business? On the flip side, large-scale tests and radical redesigns may add more to your bottom line, but are you actually learning anything that can be fed into future hypotheses? At PRWD we have three streams for testing types:

  • Iterative – small tweaks to your website
  • Innovative – slightly larger changes; these could be page structure changes, larger content changes, etc.
  • Strategic – full page or website redesigns

What we look to do is balance our testing programmes across all three of these streams to make sure the maximum benefit is achieved. For example, some of the larger strategic tests may take four to six weeks to deliver, so during this time you could test a few iterative changes which could inform a larger strategic change further down the line. This method will help create a balanced optimisation programme where smaller changes help feed larger changes, and so on.

DID IT WIN? GREAT, ON TO THE NEXT TEST!

Congratulations, your hard work has paid off and your test has won. But is that all you have learnt from your post-test analysis? Don’t rest on your laurels; there’s always more to a test than ‘success or failure’. Once your test has completed, segment your post-test data and look below the surface. There are a number of different elements you could look into further:

  • New vs returning customers – are there any different trends between your different customer bases?
  • Days of the week – was the start of the week (Monday or Tuesday) a better day for conversion than the rest?
  • Product categories – did your test work well for certain product categories like televisions (electricals) etc. but not as well for others such as home and living products?
  • Device type – what was the difference in behaviour across your different devices?

There is so much more rich data to mine when a test concludes – data which can help inform future tests. Really dig deep into every test you run and understand the different trends which may have occurred; you may find some very interesting and unexpected patterns. The one caveat I would add about data segmentation is that the segments you are looking at can sometimes be statistically insignificant, so be careful not to read too much into very small numbers. If you need to run the test for longer in order to gain significance in those segments, then do it.
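
A rough sketch of this kind of post-test segmentation, using pandas – the column names and the minimum-conversions guard are assumptions for illustration, not a prescribed analysis:

    # Rough post-test segmentation sketch. The dataframe columns (variant, converted,
    # plus segment columns such as device) and the minimum-conversions guard are
    # assumptions for illustration only.

    import pandas as pd

    MIN_CONVERSIONS = 100  # below this, treat a segment's result as indicative only

    def conversion_by_segment(df: pd.DataFrame, segment_col: str) -> pd.DataFrame:
        """Conversion rate per variant within each segment, flagging thin segments."""
        grouped = (
            df.groupby([segment_col, "variant"])["converted"]
              .agg(visitors="count", conversions="sum")
              .reset_index()
        )
        grouped["conversion_rate"] = grouped["conversions"] / grouped["visitors"]
        grouped["too_small"] = grouped["conversions"] < MIN_CONVERSIONS
        return grouped

    # Example usage, assuming one row per visit with columns such as
    # ["variant", "device", "new_vs_returning", "weekday", "category", "converted"]:
    # print(conversion_by_segment(visits, "device"))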

IT’S DROPPED, STOP THE TEST! (AND VICE VERSA)

For me this is one of the biggest mistakes businesses make when it comes to testing. Too many times I’ve seen clients want to stop tests after a couple of days because conversion has dropped, or want to push the test live to 100% of their traffic because conversion has rocketed. Tests need to be given time to run their course. A test should be live for at least two weeks in order to gain a more accurate perspective. Things can change drastically within a two-week period, so never peek too early and make irrational decisions; allow the test to settle down.
At PRWD we have two key guiding principles:

  • Minimum of 250 conversions per variation (you need a good sample size to base a result on)
  • Test duration – a minimum of two weeks (if not three to four)

These principles should be looked at together, not in isolation; both go hand in hand to determine a test’s success. For some businesses, however, reaching 250 conversions could take three months. If this is the case and traffic to your site is low, then focus on other elements such as micro-conversions (engagement/clicks) to help determine a result – don’t focus on conversion rate if it’s going to take you forever to reach that threshold. Another consideration for businesses with low traffic could be to run higher-impact tests, which will speed things up.
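
As a minimal sketch of checking those guiding principles before calling a result – the 250-conversion and two-week figures come from the principles above, while the function and example numbers are otherwise illustrative:

    # Minimal readiness check before calling a test: at least 250 conversions per
    # variation and at least two weeks of runtime. The thresholds come from the
    # guiding principles above; everything else is illustrative.

    from datetime import date

    MIN_CONVERSIONS_PER_VARIATION = 250
    MIN_DURATION_DAYS = 14  # two weeks; three to four is safer

    def ready_to_call(start: date, today: date, conversions_per_variation: dict) -> bool:
        """True only when every variation has enough conversions and the test
        has run for the minimum duration."""
        long_enough = (today - start).days >= MIN_DURATION_DAYS
        enough_data = all(c >= MIN_CONVERSIONS_PER_VARIATION
                          for c in conversions_per_variation.values())
        return long_enough and enough_data

    # Tempted to stop after five days? Not yet:
    print(ready_to_call(date(2015, 11, 1), date(2015, 11, 6),
                        {"control": 180, "variant": 210}))   # False
    # After two-plus weeks with a healthy sample in each variation:
    print(ready_to_call(date(2015, 11, 1), date(2015, 11, 16),
                        {"control": 320, "variant": 355}))   # True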

FINAL CONSIDERATION

DON’T ALWAYS GET STUCK IN AREAS WHICH ARE NOT CONVERTING WELL

Traditionally, a business will look for broken pages and weaknesses in the customer journeys that don’t convert well. Whilst this should always be a focal point of a strategy, why not spend more time analysing the strengths of your ecommerce site in order to capitalise on and repeat your successes? For example, if you know customers are more likely to transact with your website after reading a review or watching a product page video, then why not feed that information into future tests? Don’t just focus on what isn’t going well; maximise your strengths as well to generate success for your business.