October 22, 2018
Manual A/B testing, along with segmentation and targeting, is considered an essential method for website optimization. Marketers have long relied on these practices to improve customer experiences while delivering strong business results, and there is no doubt that they are the foundation of personalization technology across channels today. However, as new methods emerge, it is important that we continuously evaluate the tools we have.
The development of Artificial Intelligence (AI) for testing and segmentation has expanded what is possible, but many businesses have built their strategies on traditional methods, and they may be reluctant to stray from the tried and true. That's not a bad thing: they're right to think twice before jumping at every new trend that comes along. So, is AI worth the hype it's receiving? It's time to take a hard look at both traditional and AI-enabled tools for testing and targeting so that we can truly understand the difference, and the best uses for each.
Monetate recently examined data from over 2 billion personalized experiences in order to learn how manual methods perform compared with their AI counterparts.
Monetate developed its portfolio of AI marketing solutions to help marketers maximize results based on the data they have, while minimizing the impact of poorly performing variants to improve the customer experience overall. Our solutions also include manual testing and targeting capabilities, so that our customers can choose the option that works best for them. In this comparison, we pitted results from standard A/B tests against results from one or both of Monetate's AI personalization tools: Majority Fit Experiences (MFEs) and Individual Fit Experiences (IFEs).
The results of the comparison were so pronounced, even we were surprised. Read on for the analysis.
First, we learned a surprising fact about traditional testing: less than 10% of traditional A/B tests reach significance at all. That means that for all the resources that organizations spend theorizing, designing, and executing tests, they are only getting value from a fraction of that effort. This is a disappointing figure, as tests are used not just as a way to drive key performance indicators (KPIs) but as a way to gather audience knowledge. With so few of the tests delivering conclusive results, a huge amount of potential learning is being lost.
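To make "reaching significance" concrete: a standard A/B test on conversion rate is typically evaluated with a two-proportion z-test. Here is a minimal sketch; the function name and the traffic numbers are hypothetical, chosen only to illustrate why small differences on modest traffic so often fail to clear the bar.

```python
import math

def ab_test_significant(conv_a, n_a, conv_b, n_b, z_crit=1.96):
    """Two-sided two-proportion z-test at roughly 95% confidence."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return abs(z) > z_crit

# A 5.0% vs 5.3% conversion rate is NOT significant at 10,000 visitors per arm...
print(ab_test_significant(500, 10000, 530, 10000))      # False
# ...but the same relative lift IS significant at 100,000 visitors per arm.
print(ab_test_significant(5000, 100000, 5300, 100000))  # True
```

The same effect size can pass or fail the test purely depending on sample size, which is one reason so many fixed-allocation tests end inconclusively.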
When compared with the corresponding AI-enabled tests, the contrast was clear: more than 25% of the MFEs and a whopping 39% of the IFEs reached significance, the latter representing an improvement of over 290%.
This performance gap has a clear explanation. MFEs detect and eliminate clearly underperforming variants early on, concentrating their samples on the few variants most likely to win, whereas standard A/B tests distribute an equal quantity of samples across all variants for the entire duration of the test. This leads to tests that take longer to reach significance, if they reach it at all.
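Monetate has not published the algorithm behind MFEs, but the early-elimination behavior described above resembles a multi-armed bandit. A common approach in that family is Thompson sampling, sketched below with hypothetical variants and conversion rates: each request is served the variant with the highest draw from its Beta posterior, so weak variants naturally receive less and less traffic as evidence accumulates.

```python
import random

def thompson_allocate(successes, trials):
    """Sample each variant's conversion rate from its Beta posterior and
    serve the variant with the highest draw; weak variants win these
    draws less and less often as evidence accumulates."""
    draws = [random.betavariate(s + 1, t - s + 1)
             for s, t in zip(successes, trials)]
    return max(range(len(draws)), key=lambda i: draws[i])

# Hypothetical simulation: variant 2 has the best true conversion rate.
random.seed(42)
true_rates = [0.02, 0.03, 0.05]
successes, trials = [0, 0, 0], [0, 0, 0]
for _ in range(20000):
    i = thompson_allocate(successes, trials)
    trials[i] += 1
    successes[i] += random.random() < true_rates[i]

print(trials)  # traffic concentrates on the winning variant
```

Contrast this with a fixed A/B/n split, which would keep sending one third of all traffic to each variant, including the two losers, for the full run of the test.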
In the case of IFEs, the higher rate of significance is due to the fact that the experience is designed to personalize within the target audience, something standard A/B tests do not attempt. This means IFEs can pick up on more complex relationships, giving the algorithm more opportunities to find significant differences than it would have across a broader audience. Thus, the ability of AI marketing tools to fine-tune tests as they run (whether by eliminating poorly performing variants as soon as they are identified with confidence, or by updating personalization as customer context changes in real time) leads to faster, clearer results.
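The idea of personalizing within an audience can be illustrated with a simple per-segment policy. Monetate's actual IFE implementation is not public; the sketch below uses an epsilon-greedy rule over hypothetical segments, variants, and observed stats to show how different segments can converge on different winners, a pattern a single global A/B result would average away.

```python
import random

# Hypothetical observed stats per (segment, variant): [conversions, impressions].
# Averaged globally, variants A and B look identical; per segment, each
# segment clearly prefers a different variant.
stats = {
    ("mobile", "A"): [2, 100], ("mobile", "B"): [6, 100],
    ("desktop", "A"): [6, 100], ("desktop", "B"): [2, 100],
}

def pick_variant(segment, stats, epsilon=0.1):
    """Per-segment epsilon-greedy: usually serve the variant with the best
    observed conversion rate for this segment, occasionally explore."""
    if random.random() < epsilon:
        return random.choice(["A", "B"])
    def rate(v):
        conv, shown = stats[(segment, v)]
        return conv / shown if shown else 0.0
    return max(["A", "B"], key=rate)

# Purely greedy picks (epsilon=0) differ by segment:
print(pick_variant("mobile", stats, epsilon=0.0))   # B
print(pick_variant("desktop", stats, epsilon=0.0))  # A
```

A global test over these numbers would call the variants a tie at 4% conversion each, while the per-segment view finds a 6% winner for every visitor.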
The AI personalization solutions offer increased potential to optimize for business KPIs. Both traditional tests and AI tests are designed to optimize for a selected performance metric—such as conversion to purchase, add-to-cart rate, or revenue per session—eventually determining which variant is the top performer for that goal metric. In comparing solution types, we found that 84% of MFEs outperformed their corresponding A/B test relative to the goal metric.
The difference in performance was particularly noticeable in two areas: Revenue Per Session, in which the lift of the MFE over the A/B test exceeded 9%, and New Customer Acquisition, which saw a lift of over 15%.
Because Individual Fit Experiences are designed to maximize the content relevance for each individual user (based on data such as demographics, location, past browsing and purchase behavior, and real-time behavior onsite), they perform particularly well at generating user engagement. In experiences that were designed to encourage the user to take some action—for instance, email signups or offer clicks—the IFE outperformed a non-personalized control group 65% of the time, showing an average lift in click rate of over 41%.
So, what are the key takeaways?
First, it is clear that AI methods offer unprecedented power when it comes to testing, optimization, and 1-to-1 personalization. They consistently deliver outstanding ROI while requiring less oversight, which means they can drive value without increasing the drag on your team. Majority Fit Experiences, which mimic A/B testing in functionality, are more effective than standard testing at driving toward key goal metrics a vast majority of the time. And while Individual Fit Experiences also boost goal metrics across the board, they perform especially well at boosting site engagement by driving user actions, which have previously been shown to correlate with stronger customer relationships over the long term.
Second, marketers should reexamine their testing practices: the fact that less than 10% of their A/B tests reach significance may indicate that the variants they are testing are not distinct enough from one another to produce a meaningful difference. Testing low-impact variables like font size and button color may be low-risk, but it is also low-reward. To gain the kind of revenue rewards and audience insights that will truly impact the bottom line, marketers should practice putting bolder design choices to the test.
Traditional A/B testing still deserves a place in your web optimization program: AI marketing methods give you the power to optimize for a goal metric, but if you are not sure what your goal metric should be—if you haven’t identified the KPIs and audience groups where the test variants are poised to have the greatest impact—standard testing is a great place to start. A manual A/B test will surface basic audience insights that can lay the foundation for the AI tests that maximize performance. This type of learning is invaluable.
However, if business results are your top priority then the answers are clear: the rules of testing have changed, and you can’t afford to ignore the possibilities of AI.
If you’re ready to explore the possibilities that AI testing and targeting have to offer, reach out to Monetate to schedule a demo of the Monetate Intelligent Personalization Engine today.