How to Test What Works Better
Bob's trying to generate more leads for his catering company.
- Bob: "Hey, let's test if this direct mail piece performs better than the one we mailed last month!"
- Dikembe: "Bob, you're a moron."
Why Is Bob a "Moron"?
What occurred last month probably won't occur this month.
For instance, more graduation dinners, weddings, and conferences will probably happen this month than last month -- rendering Bob's comparison super flawed.
To test well, testing conditions must be as identical as freakishly possible.
How to Test Effectively
- Run tests simultaneously.
- See what works better.
- Win.
Restrictions:
- Population (e.g. target markets) should be identical.
- Subjects (e.g. people) should be randomly chosen.
Hooray!
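The "randomly chosen" part is easy to get wrong, so here's a minimal sketch of random assignment, assuming your subjects are in a list (the fixed seed is only there so the split is reproducible; drop it for a fresh shuffle each time):

```python
import random

def assign_groups(subjects, seed=42):
    """Randomly split a list of subjects into two equal test groups."""
    subjects = list(subjects)
    random.Random(seed).shuffle(subjects)  # shuffle so neither group is cherry-picked
    half = len(subjects) // 2
    return subjects[:half], subjects[half:]

group_a, group_b = assign_groups(range(1000))
```

Every subject lands in exactly one group, and neither group is systematically different from the other -- which is the whole point.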
Are results really reliable?
You want statistically significant results -- not results that could easily have happened by pure chance.
For instance:
- You interview three ugly people.
- Two want to jump off a cliff.
Does that mean 66% of all ugly people in America want to jump off cliffs?
NO WAY JOSE! HIGH-FIVE!
To Test for Significance...
Use this free split-testing calculator to see if your results are really significant.
By the way:
- Goals: The number of desired actions taken (e.g. newsletter sign-ups)
- Visitors: The number of people tested in each group
(There, we just saved you from solving complex statistical mathematics.)
If your results come back at least 90% statistically significant, take them.
Remember, you want to make consistently good decisions that, over time, will pull you ahead -- even if there's a slight chance that any single result is incorrect.
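For the curious, here's a minimal sketch of the kind of math a split-testing calculator runs behind the scenes -- a standard two-proportion z-test. (Any particular calculator may use a slightly different method, so treat this as an approximation, not a spec.)

```python
import math

def split_test_confidence(goals_a, visitors_a, goals_b, visitors_b):
    """How confident can we be that the better-performing variation
    really beats the other one? Returns one-sided confidence as a
    fraction (e.g. 0.95 = 95%)."""
    rate_a = goals_a / visitors_a
    rate_b = goals_b / visitors_b
    # Pooled conversion rate under the "no real difference" assumption
    pooled = (goals_a + goals_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = abs(rate_a - rate_b) / se
    # One-sided confidence via the standard normal CDF
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))
```

If `split_test_confidence(...)` comes back at 0.90 or higher, you've cleared the 90% bar.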
Example Time
Bob identifies his testing parameters:
- Target market: 1,000 California business owners between 30 and 35 years old.
- Testing collateral: Two different ads pitching catering services.
- Mail date for both: Tomorrow.
Flash Forward a Few Weeks...
Bob gets back his results (so far):
- Ad A: 500 targets, 25 leads called
- Ad B: 500 targets, 10 leads called
He plugs his results into the nifty split-testing calculator and sees that Ad A has a 98% chance of kicking Ad B's ass if Bob sent his ad to all of California's 30-35-year-old business owners.
(Note: anything above 90% chance is good. Take it.)
So, Bob concludes Ad A is the winner! YAY!
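You can double-check Bob's conclusion yourself by running his numbers through the same kind of two-proportion z-test most split-testing calculators use (the exact percentage varies a bit from calculator to calculator, which is why this comes out slightly different from the 98% above):

```python
import math

# Bob's results: Ad A converted 25/500 (5%), Ad B converted 10/500 (2%).
goals_a, n_a, goals_b, n_b = 25, 500, 10, 500
pooled = (goals_a + goals_b) / (n_a + n_b)           # conversion rate if the ads were equal
se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
z = (goals_a / n_a - goals_b / n_b) / se
confidence = 0.5 * (1 + math.erf(z / math.sqrt(2)))  # one-sided normal CDF
print(round(confidence, 3))  # → 0.995
```

Either way, Ad A clears the 90% bar with room to spare, so Bob's call holds up.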
Simultaneous tests that reach statistically significant results win, b!^ch.
Posted on April 25