Hyper Automation, Multi Experience, And Securing AI (Or Baby Yoda)

Are these Gartner-identified trends unique to 2020?

Trend: AI Security Or How To Teach Baby Yoda

Gartner: “Evolving technologies such as hyper automation and autonomous things offer transformational opportunities in the business world. However, they also create security vulnerabilities in new potential points of attack. Security teams must address these challenges and be aware of how AI will impact the security space. AI security has three key perspectives:

  1. Protecting AI-powered systems: Securing AI training data, training pipelines and ML models. 
  2. Leveraging AI to enhance security defense: Using ML to understand patterns, uncover attacks and automate parts of the cybersecurity processes. 
  3. Anticipating nefarious use of AI by attackers: Identifying attacks and defending against them.”

My take: Each of these AI security perspectives can be dramatized as the struggles the Mandalorian faces in protecting, teaching and raising Baby Yoda. In the Disney Plus TV series, the Mandalorian is a warrior who decides to save a small infant resembling the Yoda of George Lucas’s original Star Wars movie trilogy. Once the child is saved, the Mandalorian is tasked with protecting and raising him. But how do you teach the impressionable yet Force-powerful Baby Yoda right from wrong, especially when you are a once-honorable warrior who must get by as a bounty hunter in a world where right and wrong are not always clear?

Returning to the real world, the same scenario and questions could be asked of nascent AI systems. How can human flaws and bias be kept out of the learning experience of AI products?

The challenge of machine bias in AI came clearly into focus in 2019. Similar to human bias, machine bias occurs when a silicon-based machine’s learning process makes erroneous assumptions due to the limitations of its data set and pre-programming criteria. One example of machine bias was recently revealed in Apple’s new credit card, which used an algorithm to decide how trustworthy (or risky) a user might be. This evaluation used to be done by trained humans but is now often performed by AI-based algorithms.

Apple’s credit card was shown to have a gender bias: males were more likely to receive a higher credit line than females. This bias was highlighted when a male entrepreneur was assigned a spending limit 10 times higher than that of his wife, even though the couple shared a joint account.
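To make the idea concrete, here is a minimal sketch of how an auditor might test for this kind of disparity. The data, group labels, and the "four-fifths" threshold are hypothetical illustrations for this article, not Apple's actual algorithm or figures:

```python
# Hypothetical audit for group disparity in credit approvals.
# Numbers below are made up for illustration only.

def disparate_impact_ratio(outcomes, group_a, group_b):
    """Ratio of favorable-outcome rates between two groups.

    outcomes: list of (group, approved) tuples.
    A ratio near 1.0 suggests parity; the common "four-fifths rule"
    flags ratios below 0.8 as potential adverse impact.
    """
    def rate(group):
        decisions = [ok for g, ok in outcomes if g == group]
        return sum(decisions) / len(decisions)
    return rate(group_b) / rate(group_a)

# Hypothetical audit data: (group, credit_approved)
audit = ([("M", True)] * 80 + [("M", False)] * 20
         + [("F", True)] * 50 + [("F", False)] * 50)

ratio = disparate_impact_ratio(audit, "M", "F")
print(f"disparate impact ratio: {ratio:.2f}")  # 0.50 / 0.80 = 0.62
if ratio < 0.8:
    print("potential bias: favorable-outcome rates differ across groups")
```

A check like this only reveals that outcomes differ between groups; it cannot say why, which is exactly why biased training data is so hard to catch before deployment.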

Just like Baby Yoda, AI is an infant with access to powerful (computing) forces. And like the Mandalorian, the humans setting up the AI and its learning dataset are flawed by bias and personal prejudice. May the force (of sound logic and reasoning) be with us all.

Image source: Lucasfilm/Walt Disney Pictures via Disney+


John Blyler is a Design News senior editor, covering the electronics and advanced manufacturing spaces. With a BS in Engineering Physics and an MS in Electrical Engineering, he has years of hardware-software-network systems experience as an editor and engineer within the advanced manufacturing, IoT and semiconductor industries. John has co-authored books related to system engineering and electronics for IEEE, Wiley, and Elsevier.
