I am a big fan of performance testing – in fact, I have a successful consulting business built on helping clients use testing to make their email marketing efforts more effective and more profitable.
But if you don’t choose the right Key Performance Indicator (KPI), your efforts could be a waste of time.
Here’s a ‘don’t make this mistake’ case study from an organization with the best intentions whose performance testing efforts were for naught.
Sometimes the devil is in the details. One of the recommendations I made when I audited this organization’s email program was to do more performance testing. We parted ways and they moved to implement the recommendations on their own. This is their first testing effort; read along and see if you see the same red flags that I did.
“Your A/B Testing Campaign Winner Has Been Sent” was the headline of the email that I received. I was excited; I clicked through to see the results (below left).
So the open rate was about the same for both.
The first version had a higher click-through rate — a 35% boost. That’s pretty good.
And sales… there were no numbers for sales. So what was their KPI?
A little background: your KPI should almost always be a business metric — something that directly impacts your bottom line. Business metrics include:
- Return-on-investment (ROI)
- Return-on-ad-spend (ROAS)
- Revenue per Email (RPE)
- Conversion Rate
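As a quick sketch of how these metrics relate, here's a hedged example in Python. The formulas are the standard definitions; all the campaign figures are invented purely for illustration:

```python
# Hypothetical campaign figures — invented for illustration only.
emails_sent = 50_000
ad_spend = 2_000.00        # cost of the campaign
revenue = 9_000.00         # revenue attributed to the campaign
conversions = 300          # completed purchases

roi = (revenue - ad_spend) / ad_spend          # Return-on-Investment
roas = revenue / ad_spend                      # Return-on-Ad-Spend
rpe = revenue / emails_sent                    # Revenue per Email
conversion_rate = conversions / emails_sent    # conversions from quantity sent

print(f"ROI:             {roi:.0%}")            # 350%
print(f"ROAS:            {roas:.1f}x")          # 4.5x
print(f"RPE:             ${rpe:.3f}")           # $0.180
print(f"Conversion rate: {conversion_rate:.2%}")  # 0.60%
```

Each of these ties a test result directly to money in or money out, which is exactly what makes them candidate KPIs.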
Notice what I didn’t list in there — open rate and click-through rate. These are diagnostic metrics. That means that they can give you insight into how readers are engaging with your email messages, but they don’t usually directly impact your bottom line.
Now, sometimes you really are just going for opens or clicks – for instance, if your revenue model is based solely on how many people see an advertisement or click on an advertisement in your email message, then yes, open rate or click-through rate, respectively, will be your KPI.
But this particular organization — they are an online retailer. They sell products. So opens and clicks, while they might lead to sales, aren’t KPIs for them. They need a business metric for that, something involving revenue or at least conversions.
And in this case, for this test, click-through rate was chosen as the KPI.
As I mentioned, this is usually not a good idea, but it can become a very bad idea depending on what you test.
In this case, the test was whether or not to include prices in their email messages. Sample creative from the two versions is below.
Are you seeing any red flags?
What makes click-through rate a really bad idea as a KPI for this test is the nature of the test itself.
If there are no prices, you’ll have more people clicking through to find out how much things cost.
Some of these people will be qualified buyers — but many will not be.
By including the prices, you automatically weed out the people who aren’t qualified buyers. So even though the click-through rate is lower, your conversion rate on this group will likely be much higher: you’ve already overcome one objection (the price is too high) before the click ever happens.
So what did they learn from this test? In reality, nothing. They would need to use some form of revenue or conversion (conversion from the quantity sent, not the number of clicks) as a KPI to get a true read on which of these creatives performs better.
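To make the divergence concrete, here's a hypothetical comparison of the two creatives. All numbers are invented; the point is the calculation, with conversion rate computed from the quantity sent rather than from clicks:

```python
# Invented figures for two versions of the same campaign.
# Version A: no prices shown; Version B: prices shown.
versions = {
    "A (no prices)":   {"sent": 10_000, "clicks": 800, "sales": 40},
    "B (with prices)": {"sent": 10_000, "clicks": 600, "sales": 60},
}

for name, v in versions.items():
    ctr = v["clicks"] / v["sent"]   # diagnostic metric
    conv = v["sales"] / v["sent"]   # KPI: conversions from quantity sent
    print(f"{name}: CTR {ctr:.1%}, conversion rate {conv:.2%}")
```

Judged by click-through rate, version A wins (8.0% vs. 6.0%); judged by the business metric, version B wins (0.60% vs. 0.40%). A test that stops at clicks declares the wrong winner.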
Moral of the story: choose your KPI carefully when you are developing tests. Make sure it ties to your bottom-line goals for the email. Otherwise your efforts could be wasted.