Note from Jeanne: Happy Throwback Thursday! This article is from last year; it was published by ClickZ in April 2014. But the three tips for better testing are still applicable. Enjoy!
Last week I attended and spoke at Which Test Won: The Live Event in Austin (the European version of the show is taking place in London later this month). As the name suggests, the show is dedicated to testing to improve performance. If you know me, or if you’ve been a regular reader of my articles over the years, you’ll know that I am a huge fan of testing. So this was a fabulous experience!
Here are some tips on performance testing – whether you’re just starting out with A/B split or multivariate testing, or you’ve been doing it for a while.
1. Have a Solid Hypothesis
This has been one of my rallying cries for years – so it was great to hear other speakers say it from the stage. Without a solid hypothesis, you’re just guessing – and guessing is rarely as successful as having an informed point of view about what you’re testing and why you think it will boost performance.
Sometimes the hypothesis is there, in your head, and you just need to massage it a bit and get it down on paper. Other times you’re grasping – and there’s no rhyme or reason to support your test. This latter situation is where you need to go back to the drawing board, come up with solid reasons why certain changes will boost performance, and change your test (see item three below, where I provide places to go for testing inspiration).
Here’s an example: testing colors, let’s say red versus green, for elements of your email. If you can not only explain why you’re testing this but also defend it, you have a solid hypothesis.
I did this type of color testing years ago when I worked in the stock advisory publishing industry. My solid hypothesis there was that “in the red” in relation to money was bad, while green was the color of money and was good. So I felt that making the calls to action and highlight color green, instead of red, would boost response because subliminally people would feel better about the chances that they would make money off our recommendations.
2. Technology Alone Won’t Make Your Testing Program a Success
I’m a big fan of technology, but that alone won’t make your testing program successful. It’s the strategy behind your testing program that will make or break it, not what you use to implement it.
Garbage in, garbage out certainly applies here. If you don’t have a sound hypothesis and you don’t set up the test to get reliable, statistically significant results, you’re wasting your time. It doesn’t matter how sophisticated or expensive your technology solution is.
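To make the statistical-significance point concrete, here’s a minimal sketch of how you might check an A/B split result with a two-proportion z-test. The function name and the sample numbers (a hypothetical red-versus-green call-to-action test) are my own illustration, not from the article; any spreadsheet or testing tool that reports significance is doing something equivalent under the hood.

```python
from math import sqrt, erf

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test comparing version B against version A.

    Returns (lift, p_value): the relative lift of B over A, and the
    two-tailed p-value under the pooled-proportion normal approximation.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    lift = (p_b - p_a) / p_a
    return lift, p_value

# Hypothetical example: red CTA converted 120 of 5,000 recipients,
# green converted 150 of 5,000.
lift, p = ab_significance(120, 5000, 150, 5000)
print(f"lift: {lift:.1%}, p-value: {p:.3f}")
```

In this made-up example the green version shows a 25 percent lift, but the p-value comes in above the conventional 0.05 threshold – a reminder that an apparent winner isn’t reliable until the sample size is large enough.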
3. Look for Inspiration From Other Marketers
So where can you get ideas for things to test that will move the needle and improve your bottom-line performance?
One of my favorite sources is WhichTestWon.com. They post a new test on their website each week and invite visitors to weigh in on which of the creatives they believe was more successful. Even better, the website will then tell you which version actually won and provide quantitative details on the lift that was reported. Sign up for their email newsletter to get their tests delivered to your inbox on a regular basis. They also have a database of nearly 500 case studies you can peruse.
Not all the content here is about email marketing – but it’s still a great place to go to see what others are testing and how it’s working out for them. And a lot of the ideas here are very easy to adapt to email.
Another place I love for inspiration is MarketingSherpa.com. Their case studies are top-notch; they cover many channels but they have a newsletter dedicated to email marketing which is worth signing up for. But once again, don’t limit yourself. See what people are testing and how it’s doing – and then think about how you can adapt it to email.
Until next time,