A CES panel discusses efforts to implement policies that make AI responsible and ethical for a wide range of stakeholders.

Spencer Chin, Senior Editor

January 11, 2023

3 Min Read
Image courtesy of Syahrir Maulana/Getty Images

With artificial intelligence (AI) gaining increasing acceptance as a technology that can help many companies, concern is growing about issues such as ethics and responsibility. During a session at the recent CES (Consumer Electronics Show), panelists discussed measures the public and private sectors are taking to ensure that AI is implemented responsibly and to mitigate user concerns.

In Europe, members of the European Union are seeking to develop legislation that would provide a set of guidelines for AI, according to Laura Caroli, Accredited Parliamentary Assistant, European Parliament. The process involves the member nations trying to agree on drafted legislation, which will likely not be enacted until the later part of this decade. Caroli expects the legislation will provide a set of guidelines addressing risk management and ethics, accounting for disclosure, transparency, and other factors.

Keeping People First

Elham Tabassi, Senior Research Scientist for NIST (National Institute of Standards and Technology), said her group, which develops standards for measurement science, is trying to develop guidelines for AI risk management. Unlike many of the objective measurement standards NIST develops, Tabassi acknowledged, AI guidelines are different in that the effects on people must be at the forefront.


“We have launched an open, transparent process to develop a voluntary framework for risk management, that puts the protection of individuals at the forefront. We need to understand who is being affected and how much. We need to understand risk management and governance.”

Private companies are also grappling with risk management issues. Farzana Dudhwala, Privacy Policy Manager for AI Policy and Governance at Meta, said her role at the company is to understand how AI should be regulated at the social media giant and to develop new ways to view and implement AI.

Risk management for AI is also an issue in the healthcare industry, where there are policies in place to protect patient privacy and health rights, said Stephanie Fiore, Director of Digital Health Policy, Elevance Health. She noted that the Health Insurance Portability and Accountability Act (HIPAA), the 21st Century Cures Act, and FDA Digital Health regulatory policies are some of the guidelines healthcare companies need to consider when implementing AI.

Gaining Trust

The panelists agreed that creating public trust in AI would be a key step in implementing the technology in various sectors.

“We are convinced that if we ensure through regulation and other tools that systems used in high-risk areas are secure and robust, people would be less concerned about using AI,” said the European Parliament’s Laura Caroli.


A big part of gaining human trust in AI would be creating guidelines and policies that minimize bias due to age, sex, and race.

“We have worked hard to develop open-source data sets to weed out bias and make demographics more wide-ranging in regards to age, sex, etc.,” said Meta’s Farzana Dudhwala.

Creating those data sets also requires attention to structural inequities in society, Dudhwala noted. “We are also trying to add data that reflects diversity and addresses bias and discrimination.”

Standards Efforts

The panelists also felt that developing standards, at both the organizational and government levels, would be useful in ensuring some degree of compliance with AI policies. But it is not an easy task.

“We are giving standards a role in achieving compliance, but the challenge is getting European Union member nations to agree and cooperate, and there we are lagging a bit,” said the European Parliament’s Laura Caroli.

Spencer Chin is a Senior Editor for Design News covering the electronics beat. He has many years of experience covering developments in components, semiconductors, subsystems, power, and other facets of electronics from both a business/supply-chain and technology perspective. He can be reached at [email protected].

