BRAD’S RULE #17 Test, Test, Test – Keep Testing
How do you know that one ad is better than another? There’s only one way to find that out: you must test!
Testing is what makes direct-response advertising better.
Measuring results is only the first step. It’s obviously important to know whether you’re making money or not.
But by testing and comparing results – you can improve the results of your ads and promotions with certainty.
However, you can only compare apples with apples. Comparing apples with oranges doesn’t mean a thing.
For example, if you run one ad in Friday’s paper and a different one in next Monday’s, do you have anything to compare?
No. Two different days of the week means you should expect the results to be different.
The results of this comparison might mean something if Monday’s ad far outpolled Friday’s. Normally, with two ads so close together, you’d expect response to the first to cut into response to the second.
But you don’t know. Maybe it rained on Monday. I mention rain because in the US, back in the days before television, mail-order advertisers would pray for rain on Sundays.
Why? So that people would spend more time indoors, more time reading the Sunday papers – and hence spend more time reading their ads!
To compare apples with apples, you must control the variables. The only variable(s) should be the difference(s) you want to test – two different prices, a different offer, or two completely different approaches.
You want everything else to be the same so that you know that the difference in response is solely due to the difference(s) between the two ads.
Three ways to test
Here are three ways to test:
1. Split runs. The best way to test two ads is by doing what’s called a “split run.”
In “the good old days,” before offset printing, newspapers were set in metal type. Papier-mâché impressions were taken of each page; from these, curved metal plates (“stereos”) were cast to fit the rollers of the printing press.
The intriguing thing about those old letterpress presses was that two plates were cast for every page. With each revolution of the press, two newspapers were printed.
This meant that you could run a different ad in every second copy of the same newspaper – same position, same day, same everything else on the page.
The only difference was whatever difference you wanted to test – clearly, apples with apples.
Maybe your newspaper can do this. (Maybe they don’t know that they can.) But with the advent of offset printing for newspapers, split runs have become harder to find.
2. Approximating split runs. An alternative way to approximate a split run in a newspaper or magazine is to run an insert. Two inserts.
It’s not practical to have every second insert different (check with your printer) but you can ensure that every second bundle of 500 or 1,000 copies alternates between inserts.
If your business is in two or more cities (or suburbs) you can approximate split runs with regional testing.
In the first week, you run ad A in the first newspaper and ad B in the other. In the second week, you swap them: ad B in the first newspaper and ad A in the other.
By averaging the results, you’ll have a good (but not perfect) gauge of which ad is better.
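If it helps to see the arithmetic, the two-week crossover above can be sketched in a few lines of Python. The ad names and response counts here are made-up illustrations, not real figures:

```python
# A minimal sketch of scoring the two-week crossover test described above.
# Ad A and ad B swap newspapers between week 1 and week 2, so averaging
# each ad's results across both weeks smooths out paper-to-paper differences.

def crossover_average(week1, week2):
    """Average each ad's responses across both weeks.

    week1 and week2 map ad name -> responses pulled in that week.
    """
    ads = set(week1) | set(week2)
    return {ad: (week1.get(ad, 0) + week2.get(ad, 0)) / 2 for ad in ads}

# Hypothetical numbers: ad A ran in paper 1 in week 1, paper 2 in week 2;
# ad B ran the opposite way round.
results = crossover_average({"A": 120, "B": 95}, {"A": 110, "B": 105})
print(results)  # ad A averages 115 responses, ad B averages 100
```

The averaging is what makes this only a good (not perfect) gauge: it cancels the difference between the two papers, but not week-to-week effects like weather or news.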
3. In the mail. A simpler and possibly cheaper way to test is by doing a mailing to your customer and prospect list. (You don’t have one? Start collecting names and addresses immediately!)
Divide the list into two – every other name on a different list to eliminate regional differences, like wealthier vs. poorer suburbs, business addresses vs. residential, and so on.
To the first segment of your list you mail one offer; to the other half, the second.
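In code, the alternating split described above is just two slices of the list – every other name to one half, the remaining names to the other. The names below are hypothetical placeholders for a real customer list:

```python
# A minimal sketch of the alternating ("every other name") list split.
names = ["Alice", "Bob", "Carol", "Dave", "Eve", "Frank"]

list_a = names[0::2]  # every other name, starting with the first -> gets offer A
list_b = names[1::2]  # the remaining names                      -> gets offer B

print(list_a)  # ['Alice', 'Carol', 'Eve']
print(list_b)  # ['Bob', 'Dave', 'Frank']
```

Because neighbouring names on a list tend to be similar (same suburb, same source), alternating them spreads those similarities evenly across both halves.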
To be statistically significant, the winning offer should draw at least 30 responses – and its response should be at least 25% higher than the other offer’s.
I once divided a mailing list in half this way, and mailed the same offer (with different key codes) to both halves. There was a 20% difference in response – “noise” you need to account for.
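As a sketch, the rule of thumb above – at least 30 responses to the winner, and a margin of at least 25% over the loser – can be written as a simple check. The response counts below are invented for illustration:

```python
# A back-of-envelope check of the rule of thumb: declare a winner only when
# it has at least 30 responses AND out-pulls the loser by at least 25%.
# The thresholds come from the text; the example counts are made up.

def has_winner(responses_a, responses_b, min_responses=30, min_lift=0.25):
    winner = max(responses_a, responses_b)
    loser = min(responses_a, responses_b)
    if winner < min_responses or loser == 0:
        return False  # too few responses to call it
    return (winner - loser) / loser >= min_lift

print(has_winner(40, 30))  # True:  40 responses, 33% higher than 30
print(has_winner(24, 18))  # False: winner has fewer than 30 responses
print(has_winner(40, 34))  # False: only about 18% higher -- could be noise
```

Note that a 20% gap fails this check – which is exactly the point of the same-offer test described next: differences that small can appear even when nothing differs.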
If the ad is appropriate, you can simply print it in quantity and mail it to your customers or prospects. Include a little covering note saying something like: “We’re going to be running this ad shortly in ______. And I would like to extend this same offer to you before the general public sees it.”
Is testing worth the trouble?
You bet it is!
I’ve had tests where one offer has produced four times the number of sales as the other offer.
That’s four times the sales, four times the revenue – at exactly the same cost!
Other people I know have gotten even more dramatic differences.
Imagine how much more business you would have if you could increase the responsiveness of your ads by just 50%!
Only test important things
The price. The offer. The headline.
Does the color of the paper you use in a mail shot make a difference?
Is it worth testing? Maybe – once you’ve tested prices, offers, and headlines, and you’re stuck for a more dramatic headline to test.
Your “control” is sacrosanct
Your “control” ad or “control” mail shot is the benchmark against which you test everything else.
Never change your control without testing the change.
If you change the benchmark without proving that the change is worthwhile, you can no longer compare tests you do in the future against tests you’ve done before.
Once you’ve established a “control” – an ad that works – don’t touch it.
If you can think of a better headline or better phrasing, test it.
Once you’ve got an ad that works, try to beat it – but remember that any change, even a minor one, introduces a variation you haven’t measured.
– 0 –