Where Is AI Headed in 2024?

More powerful hardware to handle AI and growing concerns about ethics and responsibility are among the key trends.

Spencer Chin, Senior Editor

December 14, 2023

AI will progress but face mounting challenges in the new year.
Industry experts have various opinions on the key trends and issues facing artificial intelligence in 2024. (Image: Shutthiphong Chandaeng / iStock / Getty Images Plus)

At a Glance

  • Hardware suppliers are likely to engage in all-out battle to develop ever-faster chips and networks to handle AI
  • AI and the IoT will converge to benefit multiple industries
  • Issues such as ownership of AI models, ethics, and regulation will continue to confront AI users

2023 will be remembered as the year AI finally started making headlines on a regular basis. The mania surrounding ChatGPT and other generative AI tools generated enthusiasm over the tasks they can accomplish, but also fears that they would be misused and raise ethical problems. More recently, the AI world drew even more scrutiny after a rapid-fire series of events in which Sam Altman, the CEO of OpenAI, was ousted by the company’s Board of Directors only to be reinstated several days later under mounting pressure from company stakeholders.

Developments in generative AI could share the spotlight next year as well, but other important issues loom as the technology continues to make inroads into our daily lives. Several companies and individuals have chimed in to share where they think AI is headed next year. What follows is an edited compilation of their concerns and predictions.

In a blog post on Nvidia’s website, Ian Buck, Vice President of Hyperscale and HPC, predicted that AI is about to become a space race. He expects countries to attempt to create their own centers of excellence to drive significant advances in research and science and improve GDP. This will involve the rapid building and deployment of exascale AI supercomputers.

Buck also expects enterprise leaders to launch quantum computing research initiatives based on traditional AI supercomputers and the availability of an open, unified development platform for hybrid quantum-classical computing.


Likewise, Bill Morelli, Chief Research Officer and Research VP of Enterprise IT for Omdia, Informa’s market research arm, stated in an excerpt from Omdia’s 2024 Trends to Watch eBook: “The fear of missing a wave of market opportunity fueled by the rapid adoption of ChatGPT is driving investment in AI training infrastructure, from servers configured with high-performing co-processors to high-speed networks and storage. Commercializing large-scale generative AI applications then requires a highly optimized and distributed army of servers with co-processors optimized for low-cost computing.”

AI and the IoT Converge

In recent years, the buzzword has been the growth of the IoT (Internet of Things). Now, with AI booming, there’s a belief that AI and the IoT will converge to benefit multiple industries.

“An exponential leap forward in AI developments, combined with the realization of hyperscale IoT, means that we are on the cusp of the true convergence of AI and IoT,” said Andy Brown, Practice Lead, IoT, in Omdia’s 2024 Trends to Watch eBook. “The technologies can work together to provide real advantages across multiple industry sectors, with AIoT offering the potential to create enormous value for both AI and IoT industries. AI can provide the intelligence and decision-making capabilities that are essential for making sense of the vast amounts of data that IoT devices generate, allowing for true automation of previously manual processes, while IoT provides the real-world context and feedback that AI needs to make accurate and timely decisions.”
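
To make the AIoT loop Brown describes more concrete, here is a minimal, purely illustrative Python sketch: sensor readings stream in from a device, a simple decision step flags anomalies, and the result is fed back to the physical world. The sensor, model, and actuator names are hypothetical placeholders, not any vendor’s API.

```python
# Illustrative AIoT loop: IoT supplies data and real-world feedback,
# an "AI" step makes the decision. All names here are hypothetical.

import random
import statistics
from collections import deque


class Actuator:
    """Hypothetical stand-in for a device-side control interface."""

    def trigger_cooling(self) -> None:
        print("Actuator: cooling engaged")


def read_temperature() -> float:
    """Hypothetical sensor read; a real deployment would call a device driver."""
    spike = 15.0 if random.random() < 0.05 else 0.0
    return 70.0 + random.gauss(0, 2) + spike


def main() -> None:
    window = deque(maxlen=50)  # rolling window of recent readings (IoT context)
    actuator = Actuator()

    for _ in range(500):
        reading = read_temperature()
        window.append(reading)
        if len(window) >= 10:
            mean = statistics.mean(window)
            stdev = statistics.stdev(window) or 1.0
            # Decision step: a simple z-score anomaly check stands in for
            # whatever model a real deployment would use.
            if abs(reading - mean) / stdev > 3:
                actuator.trigger_cooling()  # feedback to the physical device


if __name__ == "__main__":
    main()
```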


Who Owns AI Models?

While few doubt the continued growth of AI, concerns over AI governance and ethics are likely to intensify as it plays a bigger role in business and industry. Some industry observers believe the concentration of AI machine learning models in the hands of large companies such as Google and Microsoft will only lead to AI benefiting them and are calling for more decentralized ownership of AI models.

One group trying to achieve this is the Opentensor Foundation, which developed the Bittensor protocol, a peer-to-peer machine-learning protocol that incentivizes participants to train and operate machine-learning models in a distributed manner.

“We are taking advantage of open-source AI protocols,” said Ala Shaabana, co-founder of Opentensor, in a recent interview with Design News. “This is a network that extracts the best answers from the best models, sort of like speaking to a panel of smart people.”

Shaabana noted that generative AI models are based on machine learning models run by larger companies. “There are only a few elements of decentralized AI in ChatGPT,” Shaabana said. “What we need is a process that has more input.”
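
As a rough illustration of that “panel of smart people” idea, the short Python sketch below sends the same prompt to several stand-in models and keeps the highest-scoring answer. This is not the Bittensor protocol or its API; the model functions and scoring here are hypothetical, meant only to show the query-and-aggregate pattern Shaabana describes.

```python
# Toy query-and-aggregate pattern: ask several independent models the same
# question, keep the best-scored answer. Models and scores are hypothetical.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Answer:
    model_name: str
    text: str
    score: float  # e.g., a self-reported confidence or an external reward


# Stand-in model callables; in a decentralized network these would be
# remote peers serving their own models.
def model_a(prompt: str) -> Answer:
    return Answer("model_a", f"A's take on: {prompt}", score=0.72)


def model_b(prompt: str) -> Answer:
    return Answer("model_b", f"B's take on: {prompt}", score=0.91)


def ask_panel(prompt: str, panel: List[Callable[[str], Answer]]) -> Answer:
    """Query every model on the panel and return the highest-scored answer."""
    answers = [model(prompt) for model in panel]
    return max(answers, key=lambda a: a.score)


if __name__ == "__main__":
    best = ask_panel("What will AI look like in 2024?", [model_a, model_b])
    print(best.model_name, "->", best.text)
```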

Cries for Regulation

Concerns over the concentration of AI in the hands of a few also feed growing worries about the potential misuse of AI. Thus, there’s a growing chorus of industry leaders calling for federal regulation of AI practices.

In late October, President Biden issued an Executive Order to establish standards for AI safety and security. Among other things, the order requires that developers of powerful AI systems share their safety test results and other critical information with the U.S. government. The order also calls for the National Institute of Standards and Technology (NIST) to set rigorous standards for extensive testing of AI systems prior to public release.

While the measure immediately conjures up thoughts of big government meddling, Sara Gutierrez, chief science officer at SHL, a business solutions provider, stated in an e-mail, “I generally support the proactive regulation of AI in the workplace by Congress. The concern about a patchwork of AI laws at the state and local levels, especially concerning selection decisions, poses a significant challenge. The likelihood of conflicting regulations across different jurisdictions could lead to confusion and impede the cohesive development and application of AI systems."

Gutierrez added, "As AI becomes increasingly integral to employment decisions, legislative bodies must establish a common framework that organizations and industry experts can consistently apply to ensure fairness and equity in AI systems.”

About the Author(s)

Spencer Chin

Senior Editor, Design News

Spencer Chin is a Senior Editor for Design News, covering the electronics beat, which includes semiconductors, components, power, embedded systems, artificial intelligence, augmented and virtual reality, and other related subjects. He is always open to ideas for coverage. Spencer has spent many years covering electronics for brands including Electronic Products, Electronic Buyers News, EE Times, Power Electronics, and electronics360. You can reach him at [email protected] or follow him at @spencerchin.

