Are you trying to figure out if your text marketing campaign is as effective as it could be?
Perhaps you’ve been working on a new text marketing format or campaign, but you’re unsure how to measure its effectiveness. Maybe you’re just wondering if you need to try something new.
A/B testing your marketing texts is a reliable way of determining what works and what doesn’t. With the right know-how, you can get precise answers on exactly which version performs better.
With the right A/B testing process, you eliminate all need for guesswork and complicated calculations.
Read on to learn how.
Introduction to A/B Testing
A/B testing, or split testing as it’s also referred to, is when you compare two versions of the same thing.
You compare version A, the control, to version B, the variation.
It’s a results optimization method. In text marketing, results to optimize include opt-in rates and redemption rates. Basically, you test how well you achieve whatever it is you want to achieve through texting.
This can be as fundamental as testing the effectiveness of different special offers. It can also be as granular as testing different wording.
The test lets you know what works best and drives more profit.
Defining Your Sample Size
Before you run your tests, it’s in your best interest to figure out an appropriate sample size. This is necessary for determining whether a difference in results is statistically significant.
You won’t have to waste time and resources running multiple text versions just to realize the results aren’t repeatable. Only repeatable results matter; that’s where you’ll find useful indications of where to go next.
If you’re not familiar with the term, a sample size is a smaller portion of your audience. If your messaging list has 1000 people on it, a sample could be 100, 250, or 333. Those are just examples.
The first component of the sample size equation is the total number of recipients for your text marketing program. Then you need to determine the confidence you need.
Confidence, in this context, refers to the statistical trustworthiness of your conducted experiment. Most A/B tests use a confidence of 95 percent. This means the significance of the results is determined with 95 percent certainty.
Statistics deals in probabilities, so you can never be 100 percent certain of anything. For marketing purposes, 95 percent is close enough. You can also aim for 99 percent if you have the time and resources to spare.
The next factor is the confidence interval, also known as the error. If you’ve seen a ± symbol on a graph or survey, you’ve seen the confidence interval. The number next to the symbol is the error in percent.
For example, a 50 percent statistic with an error of ±3 could actually be anywhere between 47 and 53 percent.
Your confidence interval is the level of error you’ll tolerate. The smaller the error you need, the bigger your sample size must be.
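To see how the error and the sample size trade off, here’s a minimal sketch of the standard margin-of-error formula. The function name and example numbers are illustrative, and it assumes the conservative worst-case proportion of 50 percent:

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Margin of error (the +/- number, in percent) for a sample of n people.

    z=1.96 corresponds to 95 percent confidence; p=0.5 is the
    conservative worst-case proportion.
    """
    return z * math.sqrt(p * (1 - p) / n) * 100

margin_of_error(100)    # a sample of 100 gives roughly +/-9.8
margin_of_error(1000)   # a sample of 1000 tightens that to roughly +/-3.1
```

Notice that cutting the error roughly in thirds took ten times the sample, which is why a small error demands a big list.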
Find out your total number of people on your messaging list before moving on to the next step. You should also decide what level of confidence you want. 95 percent is normal.
Calculating the Sample Size
When you have the aforementioned numbers, it’s time to do the calculation.
Don’t worry, you won’t have to do any heavy maths. Use a dedicated sample size calculator like this one.
- Tick your preferred confidence percentage.
- Type in your chosen confidence interval, between 1 and 50.
- Type your total number of message recipients into the “population” field.
If you need to calculate your confidence interval, use the designated calculator below.
Keep in mind that the number the calculator returns covers only one half of your A/B test: group A or group B. Multiply it by 2 to get your final sample size.
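If you’re curious what such a calculator is doing under the hood, most sample size calculators use Cochran’s formula with a finite-population correction. Here’s a sketch, assuming (purely as an example) a list of 1,000 recipients, 95 percent confidence, and a ±5 error:

```python
import math

def sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Cochran's sample size formula with finite-population correction.

    population: total number of people on your messaging list
    z: z-score for your confidence level (1.96 for 95 percent)
    margin: confidence interval as a fraction (0.05 for +/-5)
    p: expected proportion; 0.5 is the conservative default
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / population))

per_group = sample_size(1000)   # one group: A or B
total = per_group * 2           # remember to multiply by 2
```

For a 1,000-person list this works out to 278 people per group, 556 in total; note that the doubling happens at the end.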
It’s worth repeating why setting a proper sample size is important: it ensures your results can reach statistical significance. If you skip it, you’re back to guesswork.
Running Your Text Marketing A/B Test
Now you’re ready to run the test. Make two messaging lists, one for group A and one for group B.
Each list will consist of the sample size number you got from the calculator. Keep in mind that a sample group must consist of randomly selected people. Don’t sort or segment them in any way, as that will throw off the test.
Send your control message A to one group and your variation message B to the other.
Control vs Variation
Let’s start with the terminology.
Your control is the message that’s already in use. It’s doing all right, but you think you can achieve better results.
This makes it the standard against which you will measure new alternatives. This is how you determine what works best.
Your variation is the new message you’re trying out. It’s a variant of the current standard, or control.
The difference can be whatever you want it to be. You can compare two different sale durations, or a coupon code to a coupon link. It could also be the same message using different wording or a different call-to-action.
It’s important, however, that there’s only one difference between control and variation. Otherwise, you can’t know which change is responsible for the new result.
Reading the Results
When you’ve run the test, it’s time to calculate what the results mean to your business.
The basic gist of it is very simple. Let’s have a quick example before going into details.
Let’s say your existing message has a 20% redemption rate. You want to see if you can beat that with a different message.
Let’s say B got a 30 percent redemption rate, with 95 percent confidence and an error of ±5. Then you can deduce that the new message is between 5 and 15 percentage points more effective. The variation should become the new standard, and the control for future tests.
What if the new message got, for example, a 15 percent redemption rate? That means the control is still the better option.
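If you’d like to sanity-check results like these yourself, a standard approach is the two-proportion z-test. Here’s a sketch using hypothetical numbers: 278 recipients per group, with redemption rates of roughly 20 and 30 percent:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z-score for the difference between two redemption rates.

    |z| > 1.96 means the difference is significant at 95 percent confidence.
    """
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

z = two_proportion_z(56, 278, 83, 278)  # ~20% vs ~30% redemptions
# z clears the 1.96 bar, so the improvement is significant at 95 percent
```

With smaller groups or a smaller gap between the rates, the same difference could easily fall below the 1.96 bar and mean nothing.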
This is where you’ll really see the importance of the sample size and confidence calculations we did earlier.
The more diligent and precise you’ve been, the more trustworthy your results. In statistics, this is called significance. It tells you whether a difference in results really matters.
Let’s look at another example to understand this: there’s a 50/50 chance of heads or tails when you flip a coin. If you flip a coin 100 times, that should mean 50 of each, right?
In reality, the result won’t be exact, it will differ from time to time. Perhaps you got 44 heads and 56 tails. This is why we included a confidence interval number in our calculations earlier.
Results can vary just like the heads and tails. You may see pleasing results when you run the test that were really due to chance or other factors, not the variation itself.
If you flipped the coin 100 times and got 75 heads, you’d be quite certain it wasn’t just chance. In the same way, an appropriate sample size and confidence level will make it clear whether your variation, and not chance, produced the results.
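The coin example can be made concrete with exact binomial probabilities. A small sketch (the function name is illustrative; `math.comb` requires Python 3.8 or later):

```python
import math

def prob_at_least(k, n=100, p=0.5):
    """Exact probability of getting at least k heads in n fair coin flips."""
    return sum(math.comb(n, i) * (p ** i) * ((1 - p) ** (n - i))
               for i in range(k, n + 1))

prob_at_least(44)   # 44 or more heads happens most of the time
prob_at_least(75)   # 75 or more heads is vanishingly unlikely
```

That’s the intuition behind significance: 56 tails is ordinary variation, while 75 heads is strong evidence that something other than chance is at work.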
If you did things right, you’ve determined the statistical significance of your results. Now you’re free to celebrate or reconsider, because you’ve verified the results.
Calculating your results takes a bit of math. But don’t worry, there’s an online calculator for that too.
Type your test results into this calculator, and see the significance. If you followed the instructions in this post, you should get a high significance.
Advance Your Text Marketing
A/B testing is only one of the great tools you can enable and improve with a better SMS texting service.
Bulk text scheduling, recipient lists, autoresponders, and other features can make your text marketing so much easier.
If you have more questions, contact us.