Strategic Testing During Development

Can’t we just test the same way every time?

Perry Parendo

September 29, 2023


I was talking to a friend recently about product testing. The conversation started like this. “How do you do product testing? Do you test at low or high levels? We are having a lot of debate within the team but no consensus. How can we keep it simple yet do what is best for each project?”

It doesn’t really matter whether it is hardware or software; the thinking is much the same. Nor does the industry matter. The language changes more than anything, much like terminology across NFL offenses…

Let’s use two examples to help think about this:

Assume our “system” has three components. The first is a primary function with major user impact. The second has a moderate impact. The third and final component is low risk, and the customer won’t see it.

How do we test this system? When do we do a user test? These are typical questions, and they are often labeled as “product test” and stuffed at the end of the schedule. Let’s think about how to do a little better.

In this first example, have a few users examine the first component. It is high impact and high risk, so we want to understand it. Things could change after all parts are put together, but better to get bugs exposed early with this component isolated from others.

The last component may not need much verification testing. It is low value and low risk. The verification testing by the designer is likely good enough. If an issue slips into the final system test, the impact will be limited.

The second component is less obvious, yet the decision is still risk based. Maybe you avoid involving the user because they are busy and it takes time to coordinate with them. Instead, maybe you have other designers review it. That is better than relying only on the original designer. However, these reviewers will be more comfortable with the product than a user would be. If you do consider a user test, it may just be a sanity check with one or two users.

After these three “component level” verification tests are complete, the full system can be put together. Having a few more users check it out is likely useful. Now you are likely to find integration or interface issues. Problems are expected to surface in different places than during the component tests, because we now treat the components as “known.”
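The risk-to-test-level mapping in this example can be sketched in a few lines of code. This is only an illustrative sketch: the risk labels and the wording of each plan are my assumptions, not a prescription from the article.

```python
# Hypothetical mapping from a component's risk level to a verification
# approach, mirroring the three-component example above.
RISK_PLAN = {
    "high": "early user test on the isolated component",
    "moderate": "peer-designer review, with an optional 1-2 user sanity check",
    "low": "designer's own verification testing",
}

def plan_for(component_name, risk):
    """Return a one-line verification plan for a component based on its risk."""
    return f"{component_name}: {RISK_PLAN[risk]}"

# Sample components (names are illustrative).
components = [
    ("primary function", "high"),
    ("secondary feature", "moderate"),
    ("hidden utility", "low"),
]

for name, risk in components:
    print(plan_for(name, risk))
```

The point of the sketch is that the test level is driven by a risk attribute on each component, not by a single standard procedure applied to everything.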

But maybe your “system” has 100 components and is more complex. Do we do the same thing in this example? Not exactly. In this case, we need to group a number of items together. But which ones do we combine? We do a risk assessment on the 100 items. We group together similar risks and functions, then use a similar strategy as before.

What if a few of these elements are considered “major” impact changes? Do we do them at the same time? While I do not think there is one set answer, I would split them apart. If the errors can happen anywhere, pinpointing the source can be hard. But if we have fewer components, then it is easier to troubleshoot and resolve them. This entire step takes creativity and insight. It can be done several ways and still get a good outcome. It takes creating strategic groups of high, moderate, and low risk areas of the design, then developing a test plan for each area and for various levels of integration or assembly.
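One way to picture the grouping step is a small sketch that buckets components by risk and function, while giving each “major” change its own group so failures are easier to pinpoint. The field names and sample data here are illustrative assumptions; a real risk assessment would drive the labels.

```python
from collections import defaultdict

def group_components(components):
    """Group components for test planning: items sharing a risk level and
    function are tested together, but each 'major' change gets its own
    group so a failure source is easier to isolate."""
    groups = defaultdict(list)
    for comp in components:
        if comp["risk"] == "major":
            # Split major-impact changes apart: one group per component.
            key = ("major", comp["name"])
        else:
            key = (comp["risk"], comp["function"])
        groups[key].append(comp["name"])
    return dict(groups)

# Illustrative inventory standing in for the "100 components" case.
inventory = [
    {"name": "motor driver", "risk": "major",    "function": "motion"},
    {"name": "display",      "risk": "major",    "function": "ui"},
    {"name": "status LED",   "risk": "low",      "function": "ui"},
    {"name": "beeper",       "risk": "low",      "function": "ui"},
    {"name": "logger",       "risk": "moderate", "function": "diagnostics"},
]

groups = group_components(inventory)
# Each major item lands in its own group; the low-risk UI items share one.
```

This is one of the “several ways” the grouping can be done; the key design choice is that the two major-impact items never share a test group, so an error found there points at a single component.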

Yes, developing a test strategy is an art. Yes, each situation is unique. Is it frustrating to not have a “standard way of doing things?” Again, yes. But we want to work smarter, not harder than required. Doing everything in a standard way will take longer, require more people, and, in most cases, limit people’s thinking.

While the approach shared is generic, the framework addresses many situations. It gets complicated with varied customer expectations, management demands, and differing developer knowledge of the architecture. Those experiences create important changes to the approach. All the more reason to ensure strong leaders are involved, and the key players have a voice, so the plan can be motivating to execute.

What techniques do you use to create your test strategy? Is it standard or custom for each project? How do you consider risk? I would enjoy hearing about your approach to development and test strategy.

Be sure to hear Perry Parendo speak October 10 at the upcoming MD&M Minneapolis show during the presentation, The Failure of QFD, and the panel discussion, EU MDR & FDA Guidance on 3D Printing for Medical Device Production.

About the Author(s)

Perry Parendo

Parendo began developing and seeing results from his Design of Experiments (DOE) techniques at the General Motors Research Labs in 1986. His unique insight into DOE has saved time and money while solving complex problems during product and process development. This paved the way for him to lead multi-million dollar New Product Development (NPD) projects with international teams.

Parendo founded Perry’s Solutions LLC in 2006 to help organizations with critical product development activities. He has consulted in a wide range of industries such as consumer products, biomedical products, and heavy equipment. He is currently a regular columnist for Design News. He received his Mechanical Engineering degree from the University of Minnesota.
