February 28, 2012
In website testing, there's no such thing as a bad idea. But while trial and error is an important part of learning what works best for your visitors, the reality is that marketers want each and every test to succeed. To wish for the contrary, that is, for a test to yield no significant lift (or, worse, to degrade performance), is simply counterintuitive.
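As an aside, "significant lift" has a concrete statistical meaning. Below is a minimal sketch of how an analyst might check whether a variation's lift over the control is statistically significant, using a standard two-proportion z-test. The function name, arguments, and sample numbers are illustrative assumptions, not anything from this article or a specific testing tool.

```python
import math

def lift_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variation B's lift over control A significant?

    conv_a/n_a: conversions and visitors for the control.
    conv_b/n_b: conversions and visitors for the variation.
    Returns (relative_lift, z_score, two_sided_p_value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, via math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    relative_lift = (p_b - p_a) / p_a
    return relative_lift, z, p_value

# Hypothetical test: 200/10,000 control conversions vs 230/10,000 variation
lift, z, p = lift_significance(200, 10_000, 230, 10_000)
print(f"lift={lift:.1%}, z={z:.2f}, p={p:.3f}")
```

In this made-up example the variation shows a 15% relative lift, but the p-value is well above the conventional 0.05 threshold: a "failed" test in the sense this article discusses, and exactly the kind of result worth examining rather than discarding.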
As marketers, we design experiments with the expectation that each variation might work best. Too often, however, we dismiss unsuccessful tests too quickly, missing the opportunity to better understand why the test was ineffective, what it teaches us about our visitors, and—just as important—what it teaches us about ourselves.
Two reasons why we avoid analyzing failed tests are that the analysis seems too costly in time and effort, and that we believe we can never know for certain why a variation fell flat.
But each of these explanations is ultimately grounded in mistaken thinking. First, the cost of ignoring why tests fail is usually far greater than the cost of studying it. And second, arriving at some “Analysis Nirvana,” characterized by complete confidence and perfect information, isn’t the goal in the first place.
Alex Cohen, the most talented analyst I’ve worked with, taught me an offshoot of the formative assessment known as “What Went Well” (WWW) / “Even Better If” (EBI). This spin works great for website testing: it’s fast to use, it emphasizes qualitative analysis over purely numbers-based approaches, and it asks just two simple questions of every test: What went well? And what would have made it even better?
Organizations that give inadequate attention to why each test fails or succeeds typically end up with sub-optimal testing programs.
In contrast, organizations focused on understanding the website visitor’s motivations are built on cultures of continuous and iterative improvement and informed optimization.
If your test analysis is struggling, or if you’re committed to improving what’s already good, download the test assessment sheet (featuring the questions above) to support your quest for a well-tuned testing program.