AI Policy and Governance: Shaping the Future of Artificial Intelligence

Is it time for a transparent ethical and regulatory framework to define the use of artificial intelligence?

Ed Watal, founder and principal at Intellibus

November 20, 2024

At a Glance

  • To reap the benefits of AI, we must mitigate the risks of abusing the technology.
  • We must determine how an ethical regulatory framework should be enforced.
  • Proposed frameworks have included everything from self-regulatory bodies to federal laws.

Artificial intelligence is still an emerging technology, which means it comes with a degree of ambiguity and uncertainty. No one knows what AI will look like a year from now, much less in five. That uncertainty has led critics and proponents of artificial intelligence alike to call for a more transparent ethical and regulatory framework defining how individuals and businesses can responsibly use the technology.

These proposed frameworks have included everything from self-regulatory bodies to federal laws that set requirements for using artificial intelligence and penalties for its misuse. However, developing such an ethical framework may not be as easy as it seems.

First, we must determine how this ethical framework should be enforced. Should it take the form of guidelines at the business level or enforceable laws at the national or international level? And how do we institute restrictions that encourage safe development without discouraging development altogether?

Ethical frameworks for artificial intelligence use

Some of the common concerns that ethical frameworks for artificial intelligence should address include:

  • Bias: One of the primary concerns critics have raised about artificial intelligence is its potential for bias. In its current state, AI still relies on pre-existing data, which means that any biases present in the datasets on which models are trained will be reflected in those models’ output. Ethical frameworks should set clear policies and expectations to mitigate this bias (a simple illustration follows this list).

  • Privacy: The general public has also expressed concern about the privacy of their data when it comes to artificial intelligence. Many AI platforms use user inputs as part of their training process. In other words, any information a user puts into an AI model could be reused by the model, putting that information at risk of being exposed. Ethical frameworks for AI should include clear, transparent guidelines for how these platforms use data.

  • Accountability: Another critical consideration in AI governance is accountability. Who is responsible for the consequences of AI use: the user, the service provider, the AI company, or some combination thereof? The legal picture remains ambiguous, and a clearer framework will be necessary for effective governance.
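
To make the bias concern concrete, below is a minimal, hypothetical sketch of one audit check a framework might mandate: measuring the gap in favorable-outcome rates between two groups in a model's decisions, a metric often called demographic parity. The data, threshold, and names here are illustrative only, not drawn from any real system or regulatory standard.

    # Minimal, hypothetical sketch of a demographic-parity audit.
    # All data and thresholds below are illustrative, not taken
    # from any real system or regulatory standard.

    def positive_rate(decisions):
        """Fraction of decisions that were favorable (1 = approve)."""
        return sum(decisions) / len(decisions)

    # Hypothetical loan-approval decisions from a trained model,
    # split by a protected attribute (e.g., two demographic groups).
    group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 6 of 8 approved
    group_b = [0, 1, 0, 0, 1, 0, 0, 0]  # 2 of 8 approved

    gap = positive_rate(group_a) - positive_rate(group_b)
    print(f"Demographic parity gap: {gap:.0%}")  # prints 50%

    # A policy might require the gap to stay below a set threshold
    # and trigger review or retraining when it is exceeded.
    THRESHOLD = 0.10  # illustrative value, not an established standard
    if abs(gap) > THRESHOLD:
        print("Disparity exceeds policy threshold; flag for review.")

Even a check this simple shows why clear policies and expectations matter: without an agreed-upon metric and threshold, "bias" remains too vague a concept to enforce.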

Ethical frameworks should also include provisions that help assess and mitigate risks. Whether they stem from misuse or from inherent flaws in the technology, AI's significant shortcomings must be addressed to create an environment where its potential can be explored responsibly.

Addressing concerns about the use of AI

One risk that has become a prevalent concern is job displacement. Many workers worry that they will be “replaced” by AI programs.

To address these concerns, we must hold businesses accountable for artificial intelligence's impacts on the workforce. For example, policies should be instituted that encourage reskilling and upskilling programs, helping workers whose positions have been made redundant by artificial intelligence move to new positions that support and are supported by AI.

Others have expressed concern that artificial intelligence could be weaponized: autonomous weapons, automated cyberattacks, and other malicious applications. Any powerful new technology will be abused by wrongdoers hoping to exploit it for their own gain, but sensible regulation can help create an environment where beneficial innovation is encouraged and these dangerous uses are limited.

Forming an effective framework for responsible AI

To govern artificial intelligence effectively, individual users, businesses, and legislators must cooperate. Regulation alone cannot address the concerns people have expressed about AI; there must also be internal compliance policies that set expectations for how employees use the technology. By approaching responsible use from several angles, we can ensure artificial intelligence is leveraged in a way that genuinely benefits society.

Collaboration must also happen on an international scale. Many artificial intelligence tools, the companies that produce them, and the people who use them are not restricted by borders. If a company using an artificial intelligence platform faces vastly different regulations from one jurisdiction to the next, it could be discouraged from using the technology at all.

Finally, artificial intelligence providers should emphasize restoring public trust in the technology. Transparency about artificial intelligence technology and its use — through regulation and internal initiatives — will help the public better understand and trust the technology. Educational programs should also be instituted to help people learn how to use artificial intelligence responsibly.

Artificial intelligence is a powerful tool that has the potential to revolutionize several industries for the better. However, to reap these benefits, we must mitigate the risks of abusing this technology.

The best way to manage this risk is to establish a clear ethical framework, spanning both regulation and internal compliance policies, that sets clear standards for how AI should be used. Then we can create an environment conducive to the responsible growth of AI.

About the Author

Ed Watal

founder and principal at Intellibus

Ed Watal is an AI thought leader and technology investor. One of his key projects is BigParser (an Ethical AI Platform and Data Commons for the World). He is also the founder of Intellibus, a US software firm. Forbes is collaborating with Watal on a book on AI. Watal is also the lead faculty for AI Masterclass – a joint operation between NYU SPS and Intellibus.
