
Quality Data Is Key in Avoiding Failure in AI Software

At the upcoming virtual event MD&M BIOMEDigital, Pat Baird, senior regulatory specialist and head of global software standards at Philips, will share some specific ways bad data can impact AI results.

Susan Shepard

March 30, 2021


Unlike traditional software that uses an algorithm designed by a programmer to perform a specific function, AI software decisions can be unpredictable, said Pat Baird, senior regulatory specialist and head of global software standards at Philips, in an interview with MD+DI. “For these machine learning systems, the programmer doesn't tell the software how to solve the problem,” he explained. “The programmer develops software that finds patterns in the data. The programmer really doesn't know why the software made a decision, he just knows he put together an engine that calculates a bunch of stuff.”

The key to avoiding potential problems with AI software is getting good data to start with, Baird said. Like traditional software, “it's garbage in, garbage out,” he said. “What's going to happen is you have garbage in your data, and since you don't know how the software works, it's going to be a problem.”

In a real-world example of how bad data can give inaccurate results, he told a story of how he and his wife took an off-road adventure in a Jeep, and despite being in the Jeep for most of the day, his wife’s wearable recorded over 20,000 steps that evening. “This was because of all the potholes and how much we were thrown around in the Jeep,” Baird said.

In his upcoming MD&M BIOMEDigital presentation, “Artificial Intelligence & Risk Management: New Ways to Fail,” Baird will talk about some specific ways in which bad data can affect results. One of these is bias. For example, he mentioned software designed to detect early-stage breast cancer that had been trained using data from people of one ethnicity. However, women of other ethnicities have different breast tissue densities, and so the software did not work as well for them.

“But, as a programmer, would I have known that there's a difference in tissue density?” Baird questioned. He stressed that besides massive amounts of data, programmers need the context and the knowledge of clinical professionals to develop the software.
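The tissue-density example suggests a simple guard: audit the training set's subgroup coverage before training begins. Below is a minimal sketch of such an audit; the group names, counts, and the 10% threshold are hypothetical placeholders, not from the study Baird described.

```python
from collections import Counter

# Hypothetical subgroup-coverage audit: flag any expected subgroup whose
# share of the training data falls below a minimum threshold.
def coverage_report(groups, expected, min_share=0.10):
    counts = Counter(groups)
    total = len(groups)
    return {g: counts[g] / total for g in expected}

# Toy training set: 92% group_a, 8% group_b, and group_c entirely absent.
training_groups = ["group_a"] * 92 + ["group_b"] * 8
shares = coverage_report(
    training_groups, expected=["group_a", "group_b", "group_c"]
)
flagged = [g for g, share in shares.items() if share < 0.10]
print("under-represented:", flagged)  # group_b (8%) and group_c (0%)
```

Passing an explicit `expected` list matters: a subgroup that never appears in the data would otherwise produce no count at all and silently escape the check.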

“Overfitting is where there's a ton of data, but the software picks up on patterns that really aren't the patterns that you're looking for,” Baird said. He cited an example of image recognition software whose goal was to tell the difference between an Alaskan Husky dog and a wolf. He said the software performed well on the data, but it was picking up on background cues rather than the features the programmer intended.

“What actually happened,” Baird said, “was that most of the photos that people had of their dog were taken sometime during the summer or fall in their backyard, whereas the photos of the wolves were taken during the winter out in the wild. The software that looked great to detect the difference between the Alaskan Husky and the wolf was actually picking up whether or not there was snow on the ground.”
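The snow shortcut can be reproduced with a toy simulation. The sketch below (the labels and the single "snow" feature are illustrative assumptions, not details from the actual study) shows how a model that latched onto a spurious cue aces the training set and collapses to chance once snow and wolves stop co-occurring:

```python
import random

random.seed(0)

def make_samples(n, snow_matches_label):
    # Each sample: (has_snow, true_label), where label 1 = wolf, 0 = husky.
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        if snow_matches_label:
            snow = label                 # training set: wolves photographed in snow
        else:
            snow = random.randint(0, 1)  # deployment: snow is uninformative
        data.append((snow, label))
    return data

train = make_samples(1000, snow_matches_label=True)
test = make_samples(1000, snow_matches_label=False)

# A "model" that learned only the spurious cue: predict wolf iff snow.
def predict(snow):
    return snow

def accuracy(data):
    return sum(predict(snow) == label for snow, label in data) / len(data)

print(f"train accuracy: {accuracy(train):.2f}")  # 1.00 -- looks great
print(f"test accuracy:  {accuracy(test):.2f}")   # roughly chance
```

The training metric is perfect precisely because the shortcut is perfectly correlated with the label there, which is why held-out data from the deployment environment, not more data from the same collection, is what exposes the problem.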

Another topic Baird will explain is underfitting, where there simply is not enough data. “So you make the decision, but it's based on noise,” he said. “It's not based on anything real.”
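Baird's point, that decisions drawn from too little data are decisions based on noise, is easy to demonstrate. In the sketch below the quantity being measured is a pure coin flip, so any apparent signal in a five-sample study is noise; the sample sizes are arbitrary illustrations:

```python
import random

random.seed(1)

# The true success rate is exactly 0.5 -- there is no real effect to find.
def trial():
    return random.random() < 0.5

def observed_rate(n):
    return sum(trial() for _ in range(n)) / n

# Ten tiny "studies" of five samples each: the estimates jump around,
# and some will look like a strong effect purely by chance.
small_studies = [observed_rate(5) for _ in range(10)]

# One large study: the estimate settles near the true 0.5.
large_study = observed_rate(10_000)

print("5-sample estimates:", small_studies)
print("10,000-sample estimate:", large_study)
```

With five samples, an observed rate of 0.8 or 0.2 is unremarkable even when nothing is happening, which is exactly the "decision based on noise" failure mode.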

Overtrust is when people blindly believe in either technology or the infallibility of their doctors, Baird said. He talked about how people can be susceptible to malware schemes, in which a pop-up on the computer warns that the user is vulnerable to malware and prompts them to install protective software. Because they trust completely in technology, users click on the pop-up, which then actually installs the virus.

Another area to consider is levels of autonomy, Baird said. “Is the software just giving you driving directions, or is it a self-driving car that's actually doing the driving for you? How much responsibility do we give it?”

Companies already know how to do a lot in terms of quality control, Baird said, and they can apply this knowledge to data for AI software. He recalled that in his days as a forklift driver for a plastic bottle manufacturer, he would unload the truck of raw plastic into a quarantine section for the QA people to test before it went on the manufacturing line. “And we didn’t get the plastic beads from just anybody,” Baird said, noting that they had to be from a qualified supplier. “And so, to me, this is like we can't use any old data,” he explained. “We want to get data from a qualified supplier. We want to know what to test for in the data before we can use it.

“We just didn't think of it for data and for software, but the shipping and receiving department of your company already has these controls in place. So, let's go learn from them,” he concluded. “There's some differences but it's not as strange and different as you think.”
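Baird's shipping-and-receiving analogy maps directly onto a data pipeline: incoming records sit in quarantine until they pass QA checks and are confirmed to come from a qualified supplier. A minimal sketch of such a gate; the field names, ranges, and approved-source list are hypothetical placeholders:

```python
# Hypothetical list of qualified data suppliers.
APPROVED_SOURCES = {"clinic_a", "clinic_b"}

def quarantine_check(record):
    """Return a list of problems; an empty list means the record may pass QA."""
    problems = []
    if record.get("source") not in APPROVED_SOURCES:
        problems.append("unqualified supplier")
    age = record.get("age")
    if not isinstance(age, (int, float)) or not 0 <= age <= 120:
        problems.append("age out of range")
    if record.get("label") not in {"positive", "negative"}:
        problems.append("missing or invalid label")
    return problems

incoming = [
    {"source": "clinic_a", "age": 54, "label": "positive"},
    {"source": "unknown", "age": 200, "label": None},
]

accepted = [r for r in incoming if not quarantine_check(r)]
rejected = [r for r in incoming if quarantine_check(r)]
print(f"accepted {len(accepted)}, rejected {len(rejected)}")  # accepted 1, rejected 1
```

As with physical raw material, the useful discipline is deciding up front what to test for, so that data is inspected on arrival rather than debugged after it has already shaped the model.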

Baird will present “Artificial Intelligence & Risk Management: New Ways to Fail” from 3:00 to 4:00 pm on April 6, 2021.

About the Author(s)

Susan Shepard

Susan Shepard is a freelance contributor to MD+DI.

