Sensors I Didn’t See Coming

We are heading into a world in which we are surrounded by sophisticated sensors.

At a Glance

  • Sensing technologies that were the stuff of science fiction a few decades ago are now appearing daily in the real world.
  • We already have force-torque sensors for robots and night vision systems for cars.
  • And now a new development allows a single CMOS sensor to capture a regular image and a 3D point cloud.

It’s no secret that I love science fiction. I’ve been reading it since I was a lad. What amazes me is that things that were literally the stuff of science fiction when I was young are now appearing in the real world on a daily basis.

As one simple example, I remember circa 1975 when I was about 18 years old and microprocessors were starting to make their presence felt. For some reason I was mulling the problems of traffic congestion (possibly I was sitting in a traffic jam). I was thinking that it would be awesome if there were some way to track all the cars on all the roads in England (which is where I hail from). That way, if it were detected that a lot of cars were travelling slowly or even stopped in the middle of the road, signs mounted above the roads could advise other cars approaching from the rear to take alternate routes.

Those were the days in which everyone who drove a car had to be an expert at reading a map if they intended to travel outside the bounds of their own locale, which meant that large printed road atlases were an indispensable fact of life.

The only applicable sensors of the time were things like induction loops, which were coils of wire embedded under the surface of the road near intersections. The changes in inductance caused by a car passing or sitting overhead were used to control the traffic lights.

Of course, I quickly realized that it would be impractical to embed these induction loops along every mile of every road in England, not least because there would be no way to gather, aggregate, and employ any data they generated.

Never in my wildest dreams did I envision the conception, cumulation, and combination of technologies like GPS, smartphones, cellular communications, online maps, and “Big Brother” entities like Google. Who could have imagined that almost every driver would have a smartphone in their pocket and that Google could track these devices by their periodic pinging of cell towers and use this information to (a) determine the state of the traffic and (b) report congested areas to digital maps presented on the cars’ dashboards (or on the drivers’ smartphones)?

One type of science fiction story I really enjoy is based on the concept of a "Generation Starship," which is a hypothetical type of interstellar ark that travels at sub-light speed. Since a ship of this type would require such a long time to reach nearby stars, the original occupants of a generation ship would grow old and die, leaving their descendants to continue the voyage.

A classic offering of this ilk is Orphans of the Sky by Robert A. Heinlein. We enter the story in a distant future in which the descendants of the original crew have lapsed into a pre-technological culture. They’ve forgotten the purpose and nature of their ship, coming to believe that the ship is the entire universe. Eventually, our hero, Hugh Hoyland, discovers the hidden control room. When he sits in the captain’s chair and moves his hands over the chair’s arms, lights start to come on and systems start to activate.

What sort of sensors would still work after such a long time unattended? Well, as I wrote in "We Need Switches Appropriate for the 21st Century," one contender would be the ultrasonic switch sensors from UltraSense Systems.

These little scamps can be mounted under, or embedded into, a control surface (such as the arm of a captain’s chair in a Generation Starship, for example). In addition to the ultrasonic element, each module also includes force sensors and a microcontroller that performs a limited amount of sensor fusion to screen out any unintended actions.
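
Just to give a flavor of what that fusion might involve, below is my own back-of-the-envelope sketch in Python. I hasten to add that this has nothing to do with UltraSense’s actual firmware, and the thresholds and readings are purely hypothetical; the point is simply that a “press” is reported only when both channels agree.

# Hypothetical sketch of ultrasonic + force sensor fusion for a touch control.
# A "press" is reported only when both channels agree, which screens out
# accidental brushes (ultrasonic only) and bumps or vibration (force only).

FORCE_THRESHOLD_N = 2.0      # assumed minimum force for an intentional press (newtons)
ULTRASONIC_THRESHOLD = 0.6   # assumed normalized ultrasonic echo strength

def is_intentional_press(force_newtons: float, ultrasonic_level: float) -> bool:
    """Return True only when both sensors indicate a deliberate touch."""
    finger_present = ultrasonic_level > ULTRASONIC_THRESHOLD
    pressing_hard_enough = force_newtons > FORCE_THRESHOLD_N
    return finger_present and pressing_hard_enough

# A light brush registers on the ultrasonic channel but not the force channel...
print(is_intentional_press(force_newtons=0.3, ultrasonic_level=0.9))  # False
# ...whereas a deliberate press trips both.
print(is_intentional_press(force_newtons=3.1, ultrasonic_level=0.8))  # True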

Many old science fiction stories involved people communicating with automated systems using voice commands. Again, this seemed like something for the far future. So, you can only imagine my surprise when Amazon introduced the first Echo in 2014, with the voice of the product embodied as Alexa. In addition to an array of acoustic sensors in the form of microelectromechanical system (MEMS) microphones, this involved extreme digital signal processing (DSP) coupled with artificial intelligence (AI) in the cloud.
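
To give a feel for one small piece of that signal processing, the sketch below shows a textbook delay-and-sum beamformer that steers a linear microphone array toward a talker. This is a generic illustration with made-up geometry and sample rate, I should stress, not Amazon’s implementation.

import numpy as np

# Minimal delay-and-sum beamformer for a linear MEMS microphone array.
# All values (mic spacing, sample rate, steering angle) are illustrative.

SPEED_OF_SOUND = 343.0   # m/s
SAMPLE_RATE = 16000      # samples per second
MIC_SPACING = 0.04       # meters between adjacent microphones

def delay_and_sum(mic_signals: np.ndarray, steering_angle_deg: float) -> np.ndarray:
    """mic_signals: (num_mics, num_samples) array of time-domain samples."""
    num_mics, num_samples = mic_signals.shape
    angle = np.radians(steering_angle_deg)
    output = np.zeros(num_samples)
    for m in range(num_mics):
        # Arrival-time difference at mic m relative to mic 0 for a source
        # at the steering angle (measured from broadside).
        delay_sec = m * MIC_SPACING * np.sin(angle) / SPEED_OF_SOUND
        delay_samples = int(round(delay_sec * SAMPLE_RATE))
        # Shift each channel so the desired wavefront lines up, then sum.
        # (np.roll wraps around at the ends; fine for a sketch, not for production.)
        output += np.roll(mic_signals[m], -delay_samples)
    return output / num_mics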

Another science fiction staple was the topic of robots. I’m thinking of books like I, Robot; The Rest of the Robots; The Caves of Steel; and The Naked Sun by Isaac Asimov. I can only wonder what Asimov would have thought about the humanoid robots currently under development by the folks at 1X Technologies (Halodi Robotics, as was).

Humanoid robots need all sorts of sensors. In addition to vision, they can greatly benefit from the ability to detect linear forces (stretching and compressing) along the X-Y-Z axes and torques (the rotational analog of linear force) around the X-Y-Z axes. This isn’t as easy as it sounds, as I discovered during a chat with the folks at Bota Systems who supply a range of 6-axis force-torque sensors for robots.
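
To put those six axes into concrete terms, a single reading is commonly packaged as a “wrench” of three forces and three torques. The sketch below is purely illustrative on my part (it has nothing to do with Bota Systems’ actual interface, and the 15-newton limit is plucked out of thin air):

from dataclasses import dataclass
import math

# One illustrative 6-axis force-torque sample: three linear forces (newtons)
# and three torques (newton-meters) about the X, Y, and Z axes.

@dataclass
class Wrench:
    fx: float
    fy: float
    fz: float
    tx: float
    ty: float
    tz: float

    def force_magnitude(self) -> float:
        return math.sqrt(self.fx**2 + self.fy**2 + self.fz**2)

def unexpected_contact(sample: Wrench, limit_newtons: float = 15.0) -> bool:
    """Hypothetical guard: flag a contact if the net linear force exceeds a limit."""
    return sample.force_magnitude() > limit_newtons

reading = Wrench(fx=2.1, fy=-0.4, fz=18.7, tx=0.02, ty=0.15, tz=-0.01)
print(unexpected_contact(reading))  # True: something is pushing back on the wrist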

Thermal imaging is another sensing technology that would benefit all sorts of applications, including robots, so they could move around the house doing things at night, and cars, so we could stop running over each other at night (76% of pedestrian fatalities, of which there are ~700,000 around the globe each year, occur at night).

I used to believe that thermal sensors needed to be cooled, but the folks at Owl Autonomous Imaging inform me that their fully digital thermal imaging cameras do not require any additional cooling. Furthermore, they are using the imagery captured with these cameras in conjunction with AI algorithms and appropriate displays to keep drivers informed as to what is coming their way (or vice versa, if the truth be told).
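
At its very simplest, you can picture such an algorithm flagging pixels that fall within the range of human skin and clothing temperatures. Real pedestrian detection relies on trained neural networks rather than a bare threshold, of course, and every number in the little sketch below is invented for illustration.

import numpy as np

# Toy thermal "frame": per-pixel temperatures in degrees Celsius. A real camera
# delivers raw counts that are calibrated to temperature; these values are made up.

frame_c = np.random.uniform(low=5.0, high=15.0, size=(120, 160))  # cool night scene
frame_c[40:80, 60:75] = 31.0  # hypothetical warm, person-sized blob

PERSON_MIN_C, PERSON_MAX_C = 26.0, 38.0
warm_mask = (frame_c >= PERSON_MIN_C) & (frame_c <= PERSON_MAX_C)

if warm_mask.sum() > 200:  # enough warm pixels to be worth a warning
    print("Possible pedestrian ahead")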

The folks at Owl tell me that, within five years, all new cars will be able to see at night. I wouldn’t argue with them. I’ve seen stranger things (no pun intended, but “I’ve seen stranger things” works on so many levels LOL).

Another sense that greatly benefits robots is the ability to perceive depth to accompany what they are seeing with their machine vision systems. One way to obtain depth is to use lidar but, in addition to lidar units being bulky and the opposite of cheap, correlating the resulting depth map (3D point cloud) with the optical imagery is non-trivial.
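
Part of what makes that correlation awkward is that every lidar point has to be transformed into the camera’s coordinate frame and projected through the camera’s lens model before it can be matched with a pixel. A bare-bones pinhole projection, with made-up intrinsics and conveniently ignoring lens distortion and time synchronization (which is where much of the real pain lives), looks roughly like this:

import numpy as np

# Project lidar points (already expressed in the camera's coordinate frame,
# Z pointing forward) onto the image plane with a pinhole model.
# The focal lengths and principal point below are invented for illustration.

FX, FY = 800.0, 800.0   # focal lengths in pixels
CX, CY = 640.0, 360.0   # principal point (image center) in pixels

def project_points(points_xyz: np.ndarray) -> np.ndarray:
    """points_xyz: (N, 3) array of X, Y, Z in meters; returns (N, 2) pixel coords."""
    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    u = FX * x / z + CX
    v = FY * y / z + CY
    return np.stack([u, v], axis=1)

points = np.array([[1.0, 0.2, 10.0], [-0.5, 0.0, 5.0]])
print(project_points(points))  # pixel locations where those two points land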

Another way to perceive depth is to implement binocular vision using two CMOS sensor-based cameras. The downside here—in addition to all the complex processing required—is that you need two cameras. Wouldn’t life be easier if we could use just one? Actually, we can.
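
As an aside, the two-camera arithmetic itself is delightfully simple; it’s finding the matching pixels between the left and right images that soaks up all that complex processing. Depth is just the focal length times the camera baseline divided by the disparity (how far a feature shifts between the two views). Here’s a quick sketch with made-up numbers:

# Stereo depth from disparity: Z = f * B / d
# f = focal length in pixels, B = distance between the two cameras in meters,
# d = disparity in pixels. All of the numbers below are illustrative.

FOCAL_LENGTH_PX = 700.0
BASELINE_M = 0.12

def depth_from_disparity(disparity_px: float) -> float:
    if disparity_px <= 0:
        return float("inf")  # zero disparity means the point is effectively at infinity
    return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

print(depth_from_disparity(42.0))  # ~2 meters away

But back to doing the job with just one camera.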

For example, I can track and catch a ball headed my way, even if I have one eye closed. The reason I can do this is the awesome processing power of my brain (I don’t like to boast), which understands the sizes of things and, based on things getting bigger and bigger, knows when it’s time to duck. Similarly, the video stream from a single camera can be analyzed by artificial intelligence to detect and identify objects. The AI can use its knowledge of things to estimate their positions in 3D space, and so forth. Once again, however, all this comes at the cost of a humongous amount of power-guzzling computation.
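
That “things getting bigger and bigger” trick boils down to a single line of arithmetic: if you know (or assume) an object’s real-world size, the pinhole camera model tells you how far away it is from how large it appears in the image. The focal length and ball diameter below are, needless to say, hypothetical.

# Estimating distance from apparent size with a pinhole camera model:
# distance = focal_length_px * real_size_m / size_in_pixels
# The focal length and the "known" ball diameter are assumptions for illustration.

FOCAL_LENGTH_PX = 700.0

def distance_from_apparent_size(real_size_m: float, size_px: float) -> float:
    return FOCAL_LENGTH_PX * real_size_m / size_px

# A ball we believe to be 0.22 m across that spans 70 pixels is roughly 2.2 m away...
print(distance_from_apparent_size(0.22, 70.0))
# ...and by the time it spans 140 pixels, it has closed to about 1.1 m. Time to duck.
print(distance_from_apparent_size(0.22, 140.0))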

I’m constantly amazed by people’s ingenuity. For example, I was chatting with Jean-Sébastien “JS” Landry, who is Director of Product Management at AIRY3D. JS told me about a new technology that had me gasping with astonishment. The idea is to take a single CMOS sensor manufactured using standard CMOS processes and to apply a transmissive diffraction mask (TDM) on top.

The diffraction mask modulates the light coming from the various objects in the scene, thereby encoding depth information onto the optical signal (basically, letting the light itself perform the “heavy lifting” computations).

In a standard imaging pipeline, the raw data from the CMOS sensor would be fed to an image signal processor (ISP), and the resulting 2D image would be fed to any downstream applications and displays. In this new scenario, the raw data is first fed to a pre-filtering decoder.

This decoder separates out the regular image data, which is fed to the ISP as before, and the encoded depth data, which is fed to a compute-lite image depth processor (IDP). The output from the IDP is a 3D depth map that matches the optical image on a pixel-by-pixel, frame-by-frame basis.
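
To make the data flow concrete, the following is my own pseudo-pipeline view of the difference between the two scenarios. The function bodies are just placeholders (AIRY3D’s decoder and IDP obviously do real work); only the plumbing is the point here.

import numpy as np

# Conceptual sketch of the two pipelines described above. decode(), isp(), and
# idp() are stand-ins invented for illustration; only the data flow matters.

def decode(raw):
    """Hypothetical pre-filtering decoder: split the raw sensor data into the
    regular image component and the diffraction-encoded depth component."""
    image_component = raw          # placeholder: pass the raw data through
    depth_component = raw * 0.0    # placeholder: real depth extraction goes here
    return image_component, depth_component

def isp(image_component):
    return image_component         # stand-in for demosaicing, denoising, etc.

def idp(depth_component):
    return depth_component         # stand-in for the compute-lite depth processor

raw_frame = np.zeros((480, 640))

# Standard pipeline: raw -> ISP -> 2D image for downstream applications.
image_2d = isp(raw_frame)

# TDM pipeline: raw -> decoder -> ISP (image) and IDP (per-pixel depth map).
image_part, depth_part = decode(raw_frame)
image_2d = isp(image_part)
depth_map = idp(depth_part)
assert depth_map.shape == image_2d.shape  # depth matches the image pixel for pixel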

Wow! We can generate a 3D depth map with only one sensor and one frame of data. I certainly didn’t see that coming! How about you? What do you think about all of this?

About the Author

Clive 'Max' Maxfield

Clive "Max" Maxfield is a freelance technical consultant and writer. Max received his BSc in Control Engineering in 1980 from Sheffield Hallam University, England and began his career as a designer of central processing units (CPUs) for mainframe computers. Over the years, Max has designed everything from silicon chips to circuit boards and from brainwave amplifiers to Steampunk Prognostication Engines (don't ask). He has also been at the forefront of Electronic Design Automation (EDA) for more than 35 years.

Well-known throughout the embedded, electronics, semiconductor, and EDA industries, Max has presented papers at numerous technical conferences around the world, including North and South America, Europe, India, China, Korea, and Taiwan. He has given keynote presentations at the PCB West conference in the USA and the FPGA Forum in Norway. He's also been invited to give guest lectures at several universities in the US and at Oslo University in Norway. In 2001, Max "shared the stage" at a conference in Hawaii with former Speaker of the House, "Newt" Gingrich.

Max is the author and/or co-author of a number of books, including Designus Maximus Unleashed (banned in Alabama), Bebop to the Boolean Boogie (An Unconventional Guide to Electronics), EDA: Where Electronics Begins, FPGAs: Instant Access, and How Computers Do Math.
