Blueprinting Step 4: Side-by-side testing

2. Designing your test methods

Your side-by-side test protocols will be mostly driven by what customers said during Preference interviews.

If your team proceeds with Blueprinting Step 4: Side-by-Side Testing, you’ll need to develop some form of test protocol for each of your outcomes (often 10 of them). These tests could measure anything… software download speeds… electrical motor efficiency… seed germination time.

To design the best possible test method, ask yourself 5 questions…

  1. What did you hear in Preference Interviews? This should be the primary driver of your test methodology. If 7 out of 8 customers recommended the same test, you probably have your answer.
  2. Will this test method be understood and accepted by most customers? Can you imagine using results from this test in your promotional efforts?
  3. What is your capability for running this test… or having it run for you?
  4. How reproducible is this test method? (One simple way to quantify this is sketched just after this list.)
  5. How well does this test predict the real-life outcome you are interested in?
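
To make question 4 a bit more concrete: a common way to check reproducibility is to run the same test several times on the same sample and look at the spread of results. The minimal Python sketch below does this with a coefficient of variation; the method names and numbers are invented for illustration and are not part of the Blueprinting material.

```python
# Hypothetical sketch of one way to quantify "How reproducible is this test
# method?": repeat the same test on the same sample and compute the
# coefficient of variation (std dev / mean). All numbers and method names
# below are made up for illustration.
from statistics import mean, stdev

def coefficient_of_variation(measurements):
    """Relative spread of repeated measurements, as a percent.
    Lower values suggest a more reproducible test method."""
    return stdev(measurements) / mean(measurements) * 100

# Five repeated paint-settling measurements (grams settled) per candidate method
repeat_runs = {
    "Lab centrifuge, 30 min at fixed rotor speed": [4.1, 4.0, 4.2, 4.1, 4.0],
    "24-hour gravity settling, visual estimate":   [3.5, 4.6, 2.9, 4.2, 3.8],
}

for method, runs in repeat_runs.items():
    print(f"{method}: CV = {coefficient_of_variation(runs):.1f}%")
```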

You should also consider what to test, and how to test.

What to test: Be sure to test all the alternatives the customer has… not just products that look like yours. If you produce welding machines, the following might be viable customer alternatives to test:

  • Other welding machines (like yours)
  • Robotic welding machines
  • Mechanical fasteners (screws & bolts)
  • Structural adhesives

It helps if you can stop thinking of your competitors and instead think of customers’ alternatives.

How to test: Consider how much rigor you want to apply to your testing. There are 3 levels of rigor you can choose from… Direct Measure, Panel Comparison and Expert Prediction.

  • Direct Measure: Imagine you’re developing a resin for paint. If you want to test for paint settling, you could use a lab centrifuge and measure the amount of material that settles after a specified time at a certain rotor speed.
  • Panel Comparison: Let’s say you’re developing a textile fabric and want to test for softness. You could have a panel of colleagues score several fabrics on a 1 to 5 scale… maybe comparing against standard samples, where cashmere is 5, cotton is 3, and burlap is 1. (One way to tally such scores is sketched after this list.)
  • Expert Prediction: This involves the least rigor. Perhaps you’re developing software and want to test for training time. You might have some internal experts predict performance of each product based on their experience.
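
To show how a Panel Comparison might be tallied, here is a minimal Python sketch that averages each panelist’s 1-to-5 softness scores and ranks the alternatives side by side. The fabrics, panel size, and scores are hypothetical, not from the article.

```python
# Hypothetical Panel Comparison tally: each panelist scores the softness of
# several fabrics on a 1-5 scale anchored by standard samples (burlap = 1,
# cotton = 3, cashmere = 5). Fabric names and scores are invented for the sketch.
from statistics import mean

panel_scores = {
    # alternative being tested: one 1-5 softness score per panelist
    "Our new fabric":                 [4, 4, 5, 3, 4],
    "Competitor A fabric":            [3, 3, 4, 3, 3],
    "Customer's current alternative": [2, 3, 2, 3, 2],
}

# Average the scores and rank the alternatives side by side.
ranked = sorted(panel_scores.items(), key=lambda kv: mean(kv[1]), reverse=True)
for fabric, scores in ranked:
    print(f"{fabric}: mean softness = {mean(scores):.1f}  (n = {len(scores)} panelists)")
```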

You don’t have to use the same level of rigor for all 10 Outcomes. It’s great if you can use Direct Measure for everything… because this increases your confidence. But it usually also increases your costs in terms of time and money.

You want to keep your testing costs reasonable, because your project is in the front-end of product development—and could yet be killed. For this reason, some teams are tempted to completely skip side-by-side testing. But it’s often better for a team to think through this together and ask, “How can we conduct this testing, even if we use a lower level of test rigor?”

For more on Blueprinting side-by-side testing, see e-Module 25: Side-by-Side Testing at www.blueprintingcenter.com > e-Learning. Also check out the 2-minute video, Benchmark competing alternatives, part of the B2B Organic Growth video series by Dan Adams.

 

Keywords: Blueprinting Step 4, side-by-side testing, side by side testing, competitive testing, competitive benchmarking, customer alternative, competitive offering, competing product, what to test, how to test, test method, test protocol, reproducible test, direct measure, panel comparison, expert prediction