Demystifying Data Logging -- and Why Stepping It Up Has Benefits

Data logging, a critical part of test engineering, is often treated as an afterthought, but giving it the following considerations can net big payoffs.

February 3, 2016

Data logging, a critical part of test engineering, is often treated as an afterthought by test engineers -- myself included, in the past. Getting the test set right for accurate testing of a product is such a big challenge, with so many variables to consider, that we often don’t get to data logging -- the acquisition and recording of data over long periods of time -- until the end. That happens because most test engineers already have test systems in place that need only a few software tweaks for the specific parameters under test.

Not only do we need to measure the parameters of the device under test, we also need to store the results for evaluation by different audiences. Stand-alone data logging units that measure and store data are available for a variety of applications, but when building test sets for new products, a more customized solution is usually required.
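
As a rough sketch of what that customized logging layer might look like, the Python snippet below appends timestamped readings to a CSV file as they are acquired. The parameter names and the read_parameter() call are hypothetical stand-ins for whatever your instrument drivers actually provide.

    # Minimal data-logging sketch: acquire readings and append them, with
    # timestamps, to a CSV file. read_parameter() is a hypothetical stand-in
    # for the real instrument-driver call on your test set.
    import csv
    import random
    import time
    from datetime import datetime

    PARAMETERS = ["voltage_v", "current_ma", "temperature_c"]  # illustrative names

    def read_parameter(name):
        # Placeholder: replace with the actual acquisition call for your hardware.
        return round(random.uniform(0.0, 10.0), 3)

    def log_readings(path, interval_s=1.0, samples=10):
        with open(path, "a", newline="") as f:
            writer = csv.writer(f)
            if f.tell() == 0:                      # new file: write the header once
                writer.writerow(["timestamp"] + PARAMETERS)
            for _ in range(samples):
                row = [datetime.now().isoformat()]
                row += [read_parameter(p) for p in PARAMETERS]
                writer.writerow(row)
                f.flush()                          # keep the log current if the test aborts
                time.sleep(interval_s)

    if __name__ == "__main__":
        log_readings("device_log.csv")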


(Source: ddpavumba at FreeDigitalPhotos.net)

Some canned test and measurement software packages provide data logging options. Other custom programs (written by an employee or a contractor) simply spit out raw data, which then needs to be massaged into a recognizable format. In the past, this was often accomplished by dropping the raw data into a spreadsheet with automated macros. But just because we have been collecting data the same way for the past 10 (or 20) years doesn’t mean we should avoid examining the specific needs of the people evaluating the data, how they use it, and what data format would be most effective for their job function.
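
A short script can replace those spreadsheet macros by structuring the raw output at the moment it is produced. The sketch below assumes a made-up raw format of parameter=value pairs, one record per line; the parsing would change to match whatever your program actually emits.

    # Sketch: convert a hypothetical raw dump ("param=value" pairs, one unit
    # per line) into a columnar CSV that opens cleanly in a spreadsheet.
    import csv

    def parse_raw_line(line):
        # "serial=A123 voltage_v=5.02 current_ma=14.7" -> dict (assumed format)
        return dict(pair.split("=", 1) for pair in line.split())

    def raw_to_csv(raw_path, csv_path):
        with open(raw_path) as f:
            records = [parse_raw_line(ln) for ln in f if ln.strip()]
        if not records:
            return
        columns = sorted({key for rec in records for key in rec})
        with open(csv_path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=columns)
            writer.writeheader()
            writer.writerows(records)

    if __name__ == "__main__":
        raw_to_csv("raw_results.txt", "results.csv")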

Meeting the Needs of the Audience

Raw data from a test set can typically be manipulated in a number of ways, depending on whether it is viewed by engineering, sales, or the end customer. Statistical process control (SPC) becomes very important when the data is viewed by a QC engineer, who wants to detect process shifts immediately (for example, do you need to provide a real-time bell curve of critical parameters on the test set itself, in addition to data logging, and present the data in graphs as well?). Product engineers want to scan the data and quickly understand how the device performed against the required criteria for each parameter.

Meanwhile, salespeople are typically more interested in how much product is available and thus gravitate toward pass/fail information. The customer may want to understand how the device performed under specific test conditions in order to build confidence in its performance for their specific application.
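
To illustrate the QC engineer's SPC view described above, the sketch below derives simple plus/minus three-sigma control limits from a baseline run and flags new readings that fall outside them. The readings are invented numbers, and a real implementation would apply whichever control-chart rules your quality process calls for.

    # Sketch of a simple SPC-style check: compute control limits from a
    # baseline run that is known to be in control, then flag new readings
    # that fall outside them. All numbers here are made-up examples.
    import statistics

    def control_limits(values, sigmas=3.0):
        mean = statistics.mean(values)
        sigma = statistics.stdev(values)
        return mean - sigmas * sigma, mean + sigmas * sigma

    if __name__ == "__main__":
        baseline = [5.01, 5.03, 4.99, 5.02, 5.00, 5.02, 4.98]   # stable run (example data)
        lcl, ucl = control_limits(baseline)
        for value in [5.01, 5.04, 5.61]:                        # new readings (example data)
            if not (lcl <= value <= ucl):
                print(f"{value} V is outside the control limits ({lcl:.3f}, {ucl:.3f})")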

All of these scenarios are embedded in the raw test data, but they require different approaches to retrieving and presenting the logged data. If you have more than one audience, you can store the data and let end users select their desired format as an addendum to your program.

This looks like a lot of upfront work, but it pays off in greater efficiency and more accurate interpretation of the data, and that payoff extends much further down the line than simply handing someone raw data. You can also combine functions -- always include a pass/fail summary in your data so that you don’t need an additional option for sales.
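
One way to structure that addendum is to generate each audience's view from the same stored records, with the pass/fail summary always included. The record layout and spec limits below are illustrative assumptions, not a required format.

    # Sketch: build different views of the same stored test records.
    LIMITS = {"voltage_v": (4.75, 5.25), "current_ma": (10.0, 20.0)}   # example spec window

    records = [                                                        # example logged results
        {"serial": "A123", "voltage_v": 5.02, "current_ma": 14.7},
        {"serial": "A124", "voltage_v": 5.31, "current_ma": 15.2},
    ]

    def passed(record):
        return all(lo <= record[name] <= hi for name, (lo, hi) in LIMITS.items())

    def engineering_view(records):
        # Full detail per unit, with the margin to the nearest spec limit.
        for r in records:
            margins = {n: round(min(r[n] - lo, hi - r[n]), 3) for n, (lo, hi) in LIMITS.items()}
            print(r["serial"], "PASS" if passed(r) else "FAIL", margins)

    def pass_fail_summary(records):
        # Always produced, so sales never needs a separate data-logging option.
        good = sum(passed(r) for r in records)
        print(f"{good}/{len(records)} units passed")

    engineering_view(records)
    pass_fail_summary(records)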

Reducing Human Error

Here is another reason to place data logging higher on the priority list as a customized component of the test set: by providing data logging options at test time, the probability of errors goes down dramatically. Think of the old telephone game, where you whisper something in someone’s ear and the initial phrase turns into something very different by the end -- every manual hand-off of raw data is a chance for the same kind of distortion.
