Emerging ethics principles could provide a starting point for medical device manufacturers grappling with potential concerns about AI.

Shannon Flynn

July 20, 2021

4 Min Read
Image by Gerd Altmann from Pixabay

While artificial intelligence (AI) has the potential to revolutionize a number of industries, the technology isn’t without its controversies. Over the past few years, researchers and developers have raised concerns around the potential impacts of widespread AI adoption—and how a lack of existing ethical frameworks may put consumers at risk.

These concerns may be especially relevant to medical device manufacturers, which are increasingly using AI in new medical devices like smart monitors and health wearables. New standards and regulations on ethical AI may provide essential guidance for medical device manufacturers interested in leveraging AI.

Ethical Challenges in Current Artificial Intelligence Applications

The widespread use of AI could pose a number of ethical challenges. Some of these challenges are still hypothetical. For example, when an AI makes a mistake, who is held responsible? And how can those mistakes be prevented in the first place?

If a self-driving car is involved in an accident, businesses may have trouble determining whether the cause was a mechanical failure or an error made by the AI driving algorithm.

In the event of an industrial accident, workers are typically entitled to compensation from their employer. Would the use of AI make the process of determining fault in this situation more difficult?

Other potential ethical issues have already resulted in real-world consequences. Amazon, for example, abandoned an experimental resume-scanning tool after it was discovered that the AI algorithm the company developed was devaluing women’s resumes.

AI-powered healthcare systems have struggled with similar problems. A 2020 report in Scientific American noted that AI-powered diagnostic algorithms often performed worse when analyzing health data from under-represented groups. The article referenced a study of AI systems intended to support doctors in reading chest X-rays, which performed worse when presented with an X-ray from “an underrepresented gender.” Scientific American also pointed to another article, "Machine Learning and Health Care Disparities in Dermatology," which raised concerns about the potential consequences of skin cancer detection algorithms trained primarily on data from light-skinned individuals.

Devices like smart health wearables have the potential to revolutionize healthcare—but if the algorithms they rely on are biased, they may be limited in their usefulness.

The problem lies in the vast datasets that AI relies on. AI algorithms typically perform worse when analyzing new data from groups that were underrepresented in the training data. In practice, this means training on such data often encodes existing biases into a new algorithm. At the same time, the use of AI may give those biases an objective veneer, sometimes allowing them to slip through undetected.
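As a rough illustration, a developer might evaluate a model separately for each demographic subgroup rather than trusting a single overall score, since an aggregate accuracy figure can hide a large gap for an underrepresented group. The sketch below shows one way to do this with scikit-learn; the dataset, file name, and column names ("sex" and "label") are hypothetical placeholders, not taken from any particular product.

```python
# Minimal sketch: auditing a classifier's accuracy per demographic subgroup.
# The CSV file and the "sex"/"label" columns are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("patient_readings.csv")  # hypothetical wearable dataset

X = df.drop(columns=["label", "sex"])
y = df["label"]
group = df["sex"]

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0, stratify=y
)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Overall accuracy can mask large gaps between subgroups,
# so report the metric separately for each group.
for g in g_te.unique():
    mask = g_te == g
    acc = accuracy_score(y_te[mask], model.predict(X_te[mask]))
    print(f"group={g}: accuracy={acc:.3f}, n={mask.sum()}")
```

A gap between the per-group scores, especially for a group with a small sample count, is the kind of signal that warrants collecting more representative training data before deployment.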

Potential AI Ethics Standards

While there are no clear solutions yet to many of these problems, a number of AI organizations are already pioneering the creation of new regulations and standards that may help companies interested in AI development.

A large number of AI ethics frameworks have emerged over the past few years. One report from Deloitte summarized key trends in those frameworks and identified a few broad principles—“beneficence, non-maleficence, justice, and autonomy”—that they tend to stress.

These frameworks, and the principles they focus on, could provide a starting point for device manufacturers grappling with the potential impact of AI in new devices. The development of more-specific certifications and regulatory standards may also help provide guidance for businesses concerned about the ethical implications of AI adoption.

For example, the Institute of Electrical and Electronics Engineers (IEEE) launched The Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS) in 2018. The purpose of the program is to develop specifications and a certification framework for developers of AI systems, addressing issues of transparency, accountability, and algorithmic bias. Ultimately, the organization hopes ECPAIS certifications will assure end-users and individual consumers that certain AI products are safe and that their developers are taking active steps to manage the ethical challenges AI can pose. Bias in AI, in particular, may require further elaboration of developer best practices and new guidelines.

Major AI organizations are already making progress towards understanding how bias in datasets translates to bias in AI algorithms. This understanding has helped them develop new frameworks for preventing bias in new AI algorithms.

Google’s AI division, for example, has already published a set of recommendations for responsible AI use. IBM’s AI Fairness 360 framework offers a “comprehensive open-source toolkit of metrics” that can help developers uncover unwanted bias in new AI algorithms.
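As a rough illustration of what such a toolkit can surface, the sketch below uses AI Fairness 360's dataset and metric classes to check how evenly favorable outcomes are distributed across two groups in a labeled dataset before any model is trained. The file name, column names, and group encodings are hypothetical placeholders chosen for the example.

```python
# Minimal sketch using IBM's open-source AI Fairness 360 (aif360) toolkit to
# check a labeled dataset for group-level disparities before training.
# The CSV file and the "label"/"sex" columns are hypothetical placeholders;
# aif360 expects numeric columns with no missing values.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

df = pd.read_csv("diagnosis_outcomes.csv")  # hypothetical dataset

dataset = BinaryLabelDataset(
    favorable_label=1.0,        # e.g., condition correctly detected
    unfavorable_label=0.0,
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],     # hypothetical group encodings
    unprivileged_groups=[{"sex": 0}],
)

# A statistical parity difference near 0 and a disparate impact ratio near 1
# suggest favorable labels are distributed similarly across the two groups.
print("Statistical parity difference:", metric.statistical_parity_difference())
print("Disparate impact:", metric.disparate_impact())
```

Metrics like these do not prove an algorithm is fair, but they give developers a measurable starting point for spotting skew in the data a device will learn from.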

In any case, better training and data-gathering methodologies will likely be necessary for medical device manufacturers that want to minimize bias in new healthcare algorithms.

The growing use of AI means businesses have started to seriously consider how they will manage the ethical issues that AI algorithms can pose.

New frameworks, guidelines, and standards can help businesses guide the development of AI systems and products. A number of organizations have already published best practices for ethical AI development, and certifications currently in development may soon provide additional structure for businesses.

About the Author(s)

Shannon Flynn

Shannon Flynn is a freelance writer who covers medical tech, health IT, and data. She's written for ReadWrite, MakeUseOf, Hackernoon, and more. To read more of her work, follow her on MuckRack.
