Design News is part of the Informa Markets Division of Informa PLC



How AI, ML, and AR Will Change the Face of Design

The combination of artificial intelligence (AI), machine learning (ML), and augmented reality (AR) will change the face of design in the not-so-distant future.

It can be a funny old world sometimes. Here I am, quivering on the edge of my command seat, ensconced in the Pleasure Dome (my office), poised to pen this column on how the combination of artificial intelligence (AI), machine learning (ML), and augmented reality (AR) will change the face of design in the not-so-distant future. At the same time, I just commenced the process of creating my first AI application, which is really focusing my attention and making me think about all of the areas where I could use a little AI/AR help.

Before we plunge headfirst with gusto and abandon into the use of AI/AR in the context of electronic design, let us first set the scene a little (for the purposes of this column, we will take AI to encompass ML). I, personally, am very excited by all of this, because I truly believe that the combination of AI and AR is going to change the way in which we interface with our systems, the world, and each other.

Don’t Blink

One thing that’s important to note is the fact that we are still in the very early days of AI and AR. When Charles Babbage (1791–1871) commenced work on his Analytical Engine in the 1830s, he thought of this machine only in the context of performing mathematical calculations, which he very much disliked doing by hand. It’s fascinating to me that Babbage’s assistant, Augusta Ada Lovelace (1815–1852), mused on something akin to AI. In fact, Ada wrote about the possibility of computers using numbers as symbols to represent things like musical notes, and she went so far as to speculate about machines one day “having the ability to compose elaborate and scientific pieces of music of any degree of complexity or extent.”

The founding event of the field of artificial intelligence as we know and love it today was the Dartmouth Workshop, which took place in 1956. Following this meeting, a humongous amount of work took place over the years. Sad to relate, however, AI largely remained in the realm of academia until around the 2010s, at which point a combination of algorithmic developments coupled with advances in processing technologies caused it to explode onto the scene.

In the 2014 version of the Gartner Hype Cycle, AI (in the form of ML) wasn’t even considered to be a blip on the horizon. Just one year later, in the 2015 edition of the Hype Cycle, ML had already crested the “Peak of Inflated Expectations.” The point is that this was only five years ago at the time of this writing. Today, AI pops up all over the place. For example, the Nebo handwriting recognition app on my iPad Pro uses multiple artificial neural networks (ANNs) to decipher notes I’ve made for myself that are so cryptic even I cannot decipher them without Nebo’s help. Meanwhile, my Subaru Crosstrek uses binocular cameras and machine vision to take control of the steering wheel and brakes to prevent me from wandering out of my lane or crashing into the car in front.

Think of how far we’ve come from the first point-contact transistor in 1947 to silicon chips containing tens of billions of transistors today. The fact that this is a tad over 70 years (which really isn’t long in the scheme of things) is deceptive, because technology is evolving at an exponential rate, to the extent that it’s almost impossible to predict where we will be in as little as ten years’ time. All I can say is, “Don’t blink!”

Alternative Realities

Sometimes the general public latches onto a term that is perhaps not the optimum choice. Such is the case with AR, which -- as we previously indicated -- is short for “augmented reality.” As its name suggests, AR refers to an interactive experience of a real-world environment in which objects that reside in the real world are enhanced by computer-generated perceptual information.

A lot of people think of this augmentation in the form of text-based information, especially those who saw Arnold Schwarzenegger’s portrayal of a cyborg in the 1984 movie, The Terminator. In reality, visual information could be presented in the form of text or graphics, and multiple other sensory modalities could be employed, including auditory, haptic, somatosensory, and olfactory.

The point is that AR is just one side of the coin. Its counterpart is diminished reality (DR), which -- as its name suggests -- involves diminishing or deleting information from the real-world environment. Consider wearing an AR/DR headset at a noisy cocktail party, for example. By listening to you talking and observing whose lips move in response, your AI could determine with whom you are holding a conversation -- even if they aren’t the closest person to you -- and fade down other voices and sounds while amplifying, filtering, and enhancing the voice(s) of the person(s) of interest. Similarly, your AI and AR/DR headset could translate the scene you are observing into black-and-white and remove any extraneous items, leaving only the object of interest in color.


Diminished reality involves removing information.

But wait, there’s more, because we also have virtual reality (VR), whereby the “reality” is completely generated by a computer. In turn, this leads us to augmented virtuality (AV). As opposed to AR, in which objects and scenes in the real world are augmented with computer-generated information, augmented virtuality refers to augmenting virtual environments with real-world objects or people.

In reality (no pun intended), we should really be talking about mixed reality (MR), which refers to the merging of real and virtual worlds to produce new environments and visualizations in which physical and digital objects co-exist and interact in real time.

Time for Design

And so, finally, we come to consider AI and MR in the context of design. Of course, the term “design” means different things to different people. In addition to being courteous and polite, civil engineers focus on things like roads, bridges, canals, dams, airports, sewerage systems, pipelines, the structural components of buildings, and railways, so “design” to them is a very different kettle of fish to “design” for your humble narrator.

When it comes to electronic systems, we have the designers of silicon chips, the designers of printed circuit boards, the designers of embedded systems, and so forth. We also have hardware designers and software developers, each having their own views of the world.

I know that it’s not all about me (it should be, but it’s not). Be this as it may, let’s consider the AI project I’m working on at the moment. This is going to be mounted on my vacuum cleaner. It’s going to employ a 3-axis accelerometer to monitor the vibrations, and it will use green and red LEDs to tell me if the container is OK or if it needs to be emptied. Just for giggles and grins, I want to equip it with Wi-Fi. Thus, on the remote chance my son decides to hoover the house while I’m at work, my gizmo can send a message to my smartphone saying “The bag needs changing” so I can call my son and pass on the good news.
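As a back-of-the-envelope sketch of the decision logic (emphatically not the finished implementation: the RMS-magnitude heuristic, the threshold value, and all of the names are assumptions on my part), the heart of my little gizmo might boil down to something like the following:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// One reading from the 3-axis accelerometer, in g.
struct Sample {
    double x, y, z;
};

// Hypothetical threshold -- a real value would come from experimenting with
// (or training on) vibration data captured from the actual vacuum cleaner.
constexpr double kFullBagRmsThreshold = 1.8;

// Compute the RMS magnitude of a window of accelerometer samples.
double rmsMagnitude(const std::vector<Sample>& samples) {
    double sumSquares = 0.0;
    for (const auto& s : samples) {
        sumSquares += s.x * s.x + s.y * s.y + s.z * s.z;
    }
    return std::sqrt(sumSquares / samples.size());
}

// A full container restricts airflow and changes the motor's vibration
// signature; here we model that crudely as "RMS magnitude above a threshold."
// This is the decision that would drive the green/red LEDs and -- if the
// answer is "yes" -- trigger the Wi-Fi message to my smartphone.
bool bagNeedsEmptying(const std::vector<Sample>& samples) {
    return rmsMagnitude(samples) > kFullBagRmsThreshold;
}
```

In the real thing, a small neural network trained on “bag empty” versus “bag full” vibration signatures would doubtless do a better job than a single hand-picked threshold, but the skeleton of sample-in, decision-out remains the same.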

I’ll need to use batteries. I’m working with an Arduino Nano 33 IoT board. What sort of voltage can I use to power this little scamp? I know it’s described as a “3.3V” board, but I’m assuming that refers to the general-purpose input/outputs (GPIOs). The power supply information is strangely hard to track down. Eventually I find a posting on GitHub that informs me I can feed 4.5 to 21V into the Vin pin. It would have been helpful if my AI could have rooted this information out for me.

Now I’m trying to locate the Vin pin, but the writing on the board is tiny. I could have a quick Google while no one was looking to find a pinout diagram on the web, but it would be easier to ask my AI and have the MR headset highlight the pin in question.

With regard to prototyping, a few weeks ago I was breadboarding a simple circuit that totally failed to work as planned. The signal coming out of the microcontroller was good, but the LED failed to light. It took an inordinate amount of time for me to discover that I’d forgotten to add a simple jumper wire to the breadboard. Just to increase the fun and frivolity, I did exactly the same thing a week later. If only I had an AI to say “I think you’ve forgotten something” and an MR headset to display a glowing representation of the missing wire in the desired location.

When it comes to the physical construction, I can imagine my AI monitoring my every move, telling me when I’m mounting things the wrong way, reminding me where I hid the various components and tools, and spotting potential shorts or dry solder joints and using my MR headset to guide me to these problem areas.

With respect to the software, suppose I declare pins 3 and 4 as driving the green and red LEDs, respectively. Wouldn’t it be handy if my AI informed me that I’d actually connected the LEDs the other way round (or to different pins entirely), thereby saving me a lot of angst? (I speak from experience.)
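While we wait for that AI, a low-tech defense is to keep the pin mapping in exactly one place rather than scattering magic numbers through the sketch. A minimal illustration, using the (hypothetical) pin numbers from the text:

```cpp
#include <cassert>

// Hypothetical pin assignments for the vacuum-monitor project; the actual
// numbers depend on how the LEDs end up wired to the board.
constexpr int kGreenLedPin = 3; // container OK
constexpr int kRedLedPin   = 4; // container needs emptying

// Derive the pin to drive from the bag-full decision, so that swapping the
// wiring later means editing two constants instead of hunting through code.
int pinToLight(bool bagFull) {
    return bagFull ? kRedLedPin : kGreenLedPin;
}
```

This doesn’t catch the case where the physical wiring disagrees with the constants, of course; that’s precisely the gap where an AI watching over my shoulder would earn its keep.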

Since I’m not an expert software developer, it would be wonderful if my AI/MR headset could help me determine which libraries I need to use for what I’m trying to do, and then help me to locate and install these little rascals. Similarly, when I start to create a function, it would be awesome if my AI/MR headset could search the web to find examples of code that may not look the same, but that is intended to perform a similar task. (It may surprise you to learn that this sort of AI-based software search capability is available today, just not via an MR headset.)

The more I think about this, the more I realize just how many aspects of design could be impacted by this type of AI/MR technology. If someone is designing an ASIC, for example, then they could be presented with suggestions regarding functional design and design for test (DFT). Someone laying out a printed circuit board could be advised as to any specific design rules employed by the fabrication and population companies. Someone creating an embedded system could be provided with suggestions re packaging based on the target environment. And the list goes on…

I fear I’ve only scratched the surface in this column. I also fear that you are going to say, “This is the stuff of science fiction. It will never happen.” To which I would reply that, as compared to the technology of my childhood circa 1960, we are already living in a world of science fiction. Smartphones, GPS, MP3 players, digital cameras, tablet computers, wireless networks... we didn’t even dream of stuff like this back then.

How about you? Let yourself go wild and tell me what you would like an AI/MR headset to do for you if such a beast were available on the market today.

Clive “Max” Maxfield received his B.Sc. in Control Engineering from Sheffield Hallam University in England in 1980. He began his career as a designer of central processing units (CPUs) for mainframe computers. Over the years, Max has designed all sorts of interesting “stuff” from silicon chips to circuit boards and brainwave amplifiers to Steampunk Prognostication Engines (don't ask). He has also been at the forefront of electronic design automation (EDA) for more than 30 years.
