HOW VALID IS YOUR A/B TEST **REALLY**?
I ask, because some A/B tests *aren't* valid.
And even a lot of the valid tests aren't the "scientific proof" we so often think they are.
I know, because I spent almost fifteen years leading high-speed, data-driven direct-response marketing teams.
My first Corporate America jobs post-MBA were senior analyst roles on large companies' direct-response teams: coding in SQL and SAS, developing multivariate experimental designs, running and interpreting analyses, managing modeling projects, and training junior analysts.
It was smart and fun work. Only, the error that Younger Me made was to believe that our “data-driven” methods were fundamentally superior to the more qualitative approaches taken by creative professionals.
I thought “well, we get to the truth of what works; those creative people can only guess.”
I was more than a little annoying in my superiority complex.
But I was so sure that our “scientific” approach was much better at determining truth.
And I was wrong.
I thought of this while reading C.S. Lewis’s essay “De Futilitate” in which he writes:
“If popular thought feels ‘science’ to be different from all other kinds of knowledge because science is experimentally verifiable, popular thought is mistaken. Experimental verification is not a new kind of assurance coming in to supply the deficiencies of mere logic. We should therefore abandon the distinction between scientific and non-scientific thought.”
It got me thinking about the ways we marketers routinely, and sometimes wrongly, feel "scientifically assured" by our A/B test results.
Here is a partial list of the ways in which a lot of direct-response tests don't prove what folks think they prove:
"It’s Not Statistically Significant But It’s Directional"
Temporary Bumps
Non-Replicable Results
Invisible Invalidity
Predicting Beyond the Data
I'll dig into each of these further in my next posts.
What's the point of this?
It's certainly not to invalidate direct response.
Hardly. After all, SUBLIMITY launches nearly every audience experience we co-create with our awesome clients through digital, direct-response promotion.
But I do want to help correct this idea that A/B tests "scientifically prove" much.
They really don't.
Remember, the results of a valid A/B test (and many aren't valid) simply show that "on these specific days, these specific people responded to these specific marketing treatments differently."
That's it.
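To make that concrete, here's a minimal sketch (not from the original post) of what a valid A/B test actually computes: a two-proportion z-test on conversion counts. The numbers, split sizes, and labels are all made up for illustration, and SciPy is assumed to be available.

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical counts from one specific campaign window, one specific audience
conv_a, n_a = 230, 10_000   # control: conversions, visitors
conv_b, n_b = 300, 10_000   # variant: conversions, visitors

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled conversion rate
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error under H0
z = (p_b - p_a) / se
p_value = 2 * norm.sf(abs(z))                            # two-sided p-value

print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p = {p_value:.4f}")
# Even when p < 0.05, all this tells you is that THESE visitors, on THESE days,
# responded differently to THESE two treatments. Nothing more general.
```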
Now... that's not nothing! It can be very helpful indeed. That's why we do so much testing.
But we so often make soooo much more out of it than that.
And we shouldn't.
Never miss a SUBLIMITY insight! Subscribe to our occasional Newsletter here!