This Q&A shows that a collaborative platform providing workers full access to the tools they need is key.

January 26, 2023

AI interoperability helps ensure that users meet their requirements. Image courtesy of Getty Images

Johanna Pingel, Product Marketing Manager, AI, MathWorks

What is interoperability’s role in AI?

AI inherently requires engineers and researchers to aggregate and analyze data from disparate systems within the organization. There isn’t a single framework that can provide everything for everyone working on deep learning. However, it’s essential to remember that interoperability is not the goal; instead, engineers should focus on building a fully working system that enables cross-platform collaboration between multiple working groups and all available tools, with their varying requirements and workflows.

If it’s not the end goal, then why is interoperability important?

The biggest challenge for organizations in achieving interoperability is the sheer scale of the complete system. Most AI applications have multiple working groups, so interoperability helps facilitate collaboration and transparency.

While interoperability may seem like a basic IT function, the value of empowering a cross-disciplinary AI project team to work with their preferred platform and tools is immeasurable. Engineers, for instance, will have different technical requirements than other groups. Innovation happens within many different systems and computing languages, so providing engineers with the opportunity to do their best work with the tools they trust is critical. Interoperability is the bridge that helps businesses achieve that goal.


When multiple systems are “talking” to one another, how do engineers ensure that the systems are doing what they are supposed to be doing? Is testing a factor?

Systems communication challenges are nothing new for engineers. The minute an organization switches languages or has different groups using different tools, there will be a translation issue. It’s the technological equivalent of the game of telephone we all played as kids.

Testing and simulating the final system are the best ways to tell whether the tools, platforms, and languages in a computing environment communicate effectively. For example, Voyage, an automotive startup, was building a guidance system for its Level 3 autonomous vehicles. The company brought together a variety of tools and systems to integrate the complex subsystems that sense the surrounding environment, plan a path to a destination, and control steering and speed. Testing in a simulation environment like Simulink gave the engineers confidence that the system would work correctly before they began their own in-vehicle testing.
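To make the idea concrete, here is a minimal, hypothetical sketch of that kind of closed-loop simulation test, written in Python rather than Simulink. None of the names or numbers below come from Voyage's system; a mocked world feeds a toy sense-plan-control loop, and a system-level requirement is asserted on every step, long before any in-vehicle testing.

```python
# Toy closed-loop test harness: a mocked "world" feeds a sense-plan-control
# loop, and a system-level requirement is asserted at every step. All names
# and numbers are illustrative; this is not Voyage's or MathWorks' code.

def sense(world):
    """Pretend sensor: distance (m) from the ego vehicle to the obstacle ahead."""
    return world["obstacle_pos"] - world["ego_pos"]

def plan(gap, cruise_speed=15.0, safe_gap=30.0):
    """Pretend planner: hold cruise speed until the gap shrinks, then stop."""
    return cruise_speed if gap > safe_gap else 0.0

def control(speed, target_speed, max_decel=5.0, dt=0.1):
    """Pretend controller: move speed toward the target, limited by max deceleration."""
    delta = max(target_speed - speed, -max_decel * dt)
    return speed + delta

def simulate(steps=200, dt=0.1):
    world = {"ego_pos": 0.0, "obstacle_pos": 100.0}
    speed = 15.0
    for _ in range(steps):
        target = plan(sense(world))
        speed = control(speed, target, dt=dt)
        world["ego_pos"] += speed * dt
        # The requirement is checked in simulation, before hardware is involved.
        assert sense(world) > 0.0, "vehicle must never reach the obstacle"
    return sense(world), speed

if __name__ == "__main__":
    final_gap, final_speed = simulate()
    print(f"final gap: {final_gap:.1f} m, final speed: {final_speed:.1f} m/s")
```

In a real project the plant model, planner, and requirements come from the engineering team's own tooling; the point is simply that the integrated system is exercised end to end in simulation first.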

What are the top issues engineers should address to ensure interoperability between their systems?


Engineers should proactively manage three significant considerations when incorporating AI into their systems.

The first is vital to building real systems: the target platform, especially in the case of embedded systems. In these scenarios, data scientists try to create the most accurate model for the task, while embedded engineers focus on getting that model onto hardware. Interoperability can help bridge the gap between the initial model and the final requirements, and techniques like model compression are important when incorporating AI into complex systems.
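As one illustration of the kind of compression step that helps a model meet embedded constraints, here is a minimal sketch using post-training dynamic quantization in PyTorch. This is not the MathWorks workflow; the stand-in model, layer sizes, and file names are invented for the example.

```python
# A sketch of one common compression technique: post-training dynamic
# quantization in PyTorch. Illustrative only; not the MathWorks workflow,
# and the stand-in model below is invented for the example.
import torch
import torch.nn as nn

# Stand-in for the "most accurate model" a data scientist might hand off.
model = nn.Sequential(
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)
model.eval()

# Quantize the Linear layers' weights to int8, trading a little accuracy for a
# smaller, faster model that is easier to fit onto an embedded or CPU target.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

# Save both versions; comparing the file sizes shows the compression, and the
# quantized model is the artifact the embedded engineer would work from.
torch.save(model.state_dict(), "model_float32.pt")
torch.save(quantized.state_dict(), "model_int8.pt")
print("exported model_float32.pt and model_int8.pt")
```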

Low-code tools also play a part in interoperability. Engineers and scientists would rather focus on the problem they are solving than on a particular coding language, so many organizations use low-code tools to lower the barrier to entry. This is especially true in data-centric workflows, where point-and-click tools can quickly preprocess data, circumventing a standard time sink. The problem with low-code tools is that they will likely differ from your end target requirements. This means that low-code tools can play their part in addressing a portion of the solution, but they won't get you across the finish line. Interoperability is the bridge between the low-code tools that are good for certain parts of the workflow and the (perhaps) more code-intensive end solution.

Finally, engineers must evaluate the AI features and functions they need for their specific application. General-purpose AI platforms might work for data scientists but may not be sufficient for engineers. Automotive engineers, for instance, prefer highly specific platforms that understand the workflow differences between visual inspection, radar, and lidar. Because of these needs, organizations will quickly find that they need specific capabilities for their application.

How do MathWorks products support interoperability?

Interoperability is critical for engineers across industries because it allows them to use different products and services from their preferred vendors. In this way, interoperability is not only a technical requirement but also a competitive advantage. 

In the case of AI, organizations are looking for product flexibility to allow cross-functional teams to collaborate and take advantage of AI models from the community to create innovative products. For example, MATLAB users can now import AI models from TensorFlow and PyTorch and use MATLAB for domain-specific functionality and low-code apps. This is the type of interoperability we’re hoping to encourage within the tech industry because we believe putting the power of choice in engineers’ hands is critical to unlocking innovation.   
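As a hedged illustration of what the exporting side of that hand-off can look like, the sketch below saves a small PyTorch model to ONNX, a framework-neutral exchange format that MATLAB (among other tools) can import. The TinyClassifier model, tensor shapes, and file name are invented for the example, and the MATLAB import step itself is not shown.

```python
# A sketch of the exporting side of framework interoperability: save a trained
# PyTorch model to ONNX, a framework-neutral format that MATLAB (among other
# tools) can import. The TinyClassifier, shapes, and file name are invented.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(8, num_classes),
        )

    def forward(self, x):
        return self.net(x)

model = TinyClassifier()
model.eval()

# Trace the model with a dummy input of the expected shape and write the graph
# to disk; the resulting .onnx file is what a downstream tool would then import.
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model,
    dummy_input,
    "tiny_classifier.onnx",
    input_names=["image"],
    output_names=["scores"],
)
print("wrote tiny_classifier.onnx")
```

On the MATLAB side, the import would then use MathWorks' own importers, the TensorFlow and PyTorch support described above being one example of that choice.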

 
