This Guide Can Help Companies Implement AI Responsibly

Multiple companies collaborate to produce report on best practices for ethical AI use.

Spencer Chin, Senior Editor

October 6, 2023

EqualAI solicited input from major corporations to come up with a guide on the responsible use of AI. (Image: Parradee Kietsirikul / iStock / Getty Images Plus)

Some of the biggest concerns regarding artificial intelligence center on how companies use these tools responsibly, in a manner that is transparent to all and not detrimental to company employees and customers. To this end, a non-profit industry group called EqualAI, together with co-authors from Google DeepMind, Microsoft, Salesforce, PepsiCo, LivePerson, Verizon, Northrop Grumman, the SAS Institute, Amazon, and others, has published a report on the state of responsible AI.

The report’s Executive Summary notes that there is currently no consensus on national, let alone global, standards for responsible AI governance. The report warns that with standards still a long way from emerging, organizations cannot wait for the regulatory and litigation landscape to settle before adopting best practices for AI governance. The potential harm and liability associated with the complex AI systems currently being built, acquired, and integrated is too significant to delay the adoption of safety standards.

“In this report, we have gathered the expertise of leaders in responsible AI adoption to present our guide on best practices on how to establish and implement responsible AI governance,” said Miriam Vogel, President and CEO of EqualAI, in a statement. “At EqualAI, we have found that aligning on AI principles allows organizations to operationalize their values by setting rules and standards to guide decision making related to AI development and use.”

This report builds on discussions from the culminating seventh session of EqualAI’s Responsible AI Badge Program, where senior executives gathered to address best practices in responsible AI governance. The final framework they aligned on consists of the following six pillars:

  1. Responsible AI Values and Principles

  2. Accountability and Clear Lines of Responsibility

  3. Documentation

  4. Defined Processes

  5. Multistakeholder Reviews

  6. Metrics, Monitoring, and Reevaluation

According to EqualAI, putting these six pillars in place will better position organizations to develop, acquire, and/or implement AI responsibly. The framework further identifies key components to implement across an enterprise, including, but not limited to, securing C-suite or board support, incorporating feedback from diverse and underrepresented communities, and empowering employees to flag potential concerns.

To access the report, click here.

Spencer Chin is a Senior Editor for Design News covering the electronics beat. He has many years of experience covering developments in components, semiconductors, subsystems, power, and other facets of electronics from both a business/supply-chain and technology perspective. He can be reached at [email protected].

