A handful of tech leaders met at the White House this summer to discuss artificial intelligence and whether regulations need to be developed to keep the technology from going rogue. President Biden was part of the discussions. Before the meeting, Biden commented, “AI is already making it easier to search the internet and helping us drive to our destinations while avoiding traffic in real time. AI is going to change the way we teach and learn and help solve challenges like disease and climate change.”
Earlier this year, the White House produced a Blueprint for an AI Bill of Rights. While the document offers guidelines, they are effectively voluntary. Several congressional representatives have also commented on the potential need for government oversight.
We caught up with Lou Bachenheimer, CTO of the Americas for SS&C Blue Prism, to get his take on potential government moves to provide guardrails on AI.
Design News: What concerns do the US government and the European Union have about AI?
Lou Bachenheimer: There are several concerns that governments are looking into. One is potentially faulty outcomes and the inability to see or understand how decisions are made. There is a major lawsuit against a bank that was making decisions based on an algorithm, and it turned out the algorithm was producing racially biased decisions.
Another issue is what happens if the AI gets something wrong. In general, the more complex the AI model is, the more room there is for error. It's similar to buying a house and figuring out what to offer by looking at comparables: an error in the comparable sales data can produce a huge error in the offer. As the algorithm gets more complicated and you move into neural networks, when something goes wrong, it can go drastically wrong. Sometimes generative models give an answer that is horribly incorrect.
DN: Did anything important come from the White House meeting on AI?
Lou Bachenheimer: During the White House meeting, Biden sat down with seven leaders of AI companies. They agreed to a set of voluntary commitments. For one, they intend to make sure they're not abusing copyrighted materials. The White House is also meeting with labor and civil rights organizations to consider more concrete regulations. The AI leaders suggested that the White House bring in a third party to evaluate how AI is monitored. Imposing regulations, though, comes with an expense, and that expense creates challenges of its own: compliance costs could become a way for larger companies to block out start-ups. The European Union is putting through the AI Act, the first large AI regulation of its kind. It's broadly similar to the White House AI Bill of Rights, but the EU is putting some teeth behind it.
DN: Is machine learning getting bundled with AI?
Lou Bachenheimer: Machine learning is part of AI, but there is a distinction between the two. ML is a subset of AI, and what distinguishes ML is its feedback loop. Data analytics is going to become much like machine learning. The recent concerns about AI don't stem from ML or its feedback loop.
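(The feedback loop Bachenheimer describes can be sketched in a few lines of Python. This is a minimal, editorial illustration, not anything from SS&C Blue Prism: a trivial model that refines its estimate each time a real outcome is fed back in, which is the property that separates ML from static rule-based software.)

```python
# Minimal sketch of an ML-style feedback loop: the model makes a
# prediction, the real outcome is fed back, and the model's internal
# parameter updates. The "model" here is just a running average.

class OnlineAverager:
    """Predicts a value and refines its estimate from feedback."""

    def __init__(self):
        self.estimate = 0.0
        self.n = 0

    def predict(self) -> float:
        return self.estimate

    def feedback(self, observed: float) -> None:
        # Incorporate the observed outcome via a running-mean update.
        self.n += 1
        self.estimate += (observed - self.estimate) / self.n

model = OnlineAverager()
for outcome in [10.0, 12.0, 11.0]:
    model.predict()          # use the current estimate
    model.feedback(outcome)  # loop the real outcome back into the model

print(model.estimate)  # converges toward the mean of the outcomes: 11.0
```

A rules-based system, by contrast, would return the same answer no matter how many outcomes it observed; the feedback call is what makes this "learning."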
DN: How about AI and copyright issues?
Lou Bachenheimer: Plagiarism is a big issue. AI companies were going to put in watermarks so the materials could be identified as AI-generated. It comes down to dollars and cents. The infrastructure is not cheap. Almost every large enterprise has writers on staff, and the writers who make the most money are the ones writing code. So for AI, it makes the most economic sense to write code. But if you're having it write code, these models do make mistakes.
DN: Are there special challenges with AI-generated code?
Lou Bachenheimer: If you miss something in a piece of code, it can have a drastic outcome. We provide intelligent automation that can take the decisions made by AI and provide hands-on scoring for those decisions. A lot of work in IT amounts to making edits, and there are a lot of IT individuals who do that daily. If you're using ChatGPT to write code, you still need an abstraction layer to make sure the code doesn't do something wrong, and to correct it if it does.
You have to validate the code. We typically don't run formal validation; there are easier ways to do it. We can execute the code and monitor it, or have a digital worker handle it. When we execute the code, we provide high-level governance and a full audit trail. We're experimenting with AI-generated code internally, but it's very new. A lot of our clients are concerned about governance and risk avoidance, especially in healthcare and financial services. Most companies are kicking the tires but waiting before diving in.
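(One way to picture the "abstraction layer" idea is a gate that AI-generated code must pass before anything executes it. The sketch below is a hypothetical, editorial illustration in Python, not SS&C Blue Prism's actual product: the generated snippet is compile-checked, defined in an isolated namespace, and run against known test cases; only if every check passes is it accepted.)

```python
# Hypothetical sketch of an abstraction layer for AI-generated code:
# never run generated code directly -- compile-check it, load it in an
# isolated namespace, and verify its behavior against known test cases.
# (Illustrative only; function and variable names are invented here.)

def validate_generated_code(source: str, func_name: str, tests: list) -> bool:
    """Return True only if the generated function passes all tests."""
    try:
        compiled = compile(source, "<generated>", "exec")  # syntax check
    except SyntaxError:
        return False
    namespace = {}
    exec(compiled, namespace)  # define the function in an isolated namespace
    func = namespace.get(func_name)
    if not callable(func):
        return False
    for args, expected in tests:
        try:
            if func(*args) != expected:  # behavioral check
                return False
        except Exception:
            return False  # generated code raised instead of answering
    return True

# Example: suppose an AI model produced this snippet.
generated = "def add(a, b):\n    return a + b\n"
ok = validate_generated_code(generated, "add", [((2, 3), 5), ((0, 0), 0)])
print(ok)  # True -- only now would the snippet be handed to execution
```

A real deployment would add sandboxing, timeouts, and logging for the audit trail, but the principle is the one Bachenheimer describes: the layer catches a wrong answer so it can be corrected before it causes a drastic outcome.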
Europe Looks at AI Rules
The European Union (EU) is typically ahead of the US when it comes to technology regulation, often by many years. It began looking at potential AI regulation back in June of 2021. In those discussions, EU regulators noted they wanted to create “better conditions for the development and use of this innovative technology.” They wanted to make sure AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly, stating that “AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes.”
In June of this year, Members of the European Parliament (MEPs) adopted the Parliament's negotiating position on the AI Act. With that, the act moves on to negotiations with the individual EU member states. The MEPs also want to establish a clear definition of AI that can be applied to future AI systems. While the US isn't likely to adopt the resulting EU regulations, those regulations will put pressure on the US government to create guardrails of its own.