Artificial intelligence (AI) is generating less hype and finding more real applications in space than in many other industries.

John Blyler

February 24, 2021


Engineers of a certain age might well remember the B9 Robot—usually just called “Robot”—that roamed the universe with the Robinson family in the TV series Lost in Space. Robot was probably one of the earliest fictional examples of an autonomous, advanced AI robot, i.e., not a sentient being but a robot with human-like intelligence.

The technical community has yet to create such a robot and, if Ray Kurzweil’s prediction from a 2011 Time magazine article is correct, it won’t achieve such AI prowess until the year 2045. Still, AI continues to dominate the news, even though the reality has not always lived up to even a modest level of the hype. For example, the goal of fully autonomous vehicles for the general public remains elusive in the automotive space.

The space industry is a notable exception to the AI hype. Currently, AI is actually being used to help in the manufacturing of satellites and spacecraft systems. It’s easier to prevent biological contamination during the assembly of satellites if fewer humans are involved, and AI is being used to help robots function more efficiently in such tasks.

AI-enhanced imagery is often the key functionality for many satellites in orbit. Some estimate that satellites process about 150 terabytes of data every day to capture weather and environmental images, among other applications.


Monitoring the health of satellites is another growing application for AI. By constantly watching sensors and equipment, AI can detect failures, provide alerts, and sometimes carry out corrective action. SpaceX, for example, uses AI to keep its satellites from colliding with other objects in space.
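The health-monitoring idea can be sketched in a few lines. This is an illustrative stand-in, not SpaceX’s or any vendor’s actual system: a rolling z-score flags telemetry readings that deviate sharply from the recent trend, the simplest statistical cousin of the ML-based monitoring described above. The telemetry values and thresholds are made up for the example.

```python
# Flag anomalous telemetry by comparing each reading against a rolling
# window of recent values (a simple stand-in for ML health monitoring).
from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=5, threshold=3.0):
    """Return indices of readings that deviate sharply from the recent trend."""
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                anomalies.append(i)
        history.append(value)
    return anomalies

# Hypothetical battery-temperature telemetry with one failure spike
temps = [21.0, 21.2, 20.9, 21.1, 21.0, 35.0, 21.1, 21.0]
print(detect_anomalies(temps))  # the spike at index 5 is flagged
```

A real monitor would learn the nominal behavior from historical data rather than hard-code a window and threshold, but the structure—watch a stream, score each reading against expectations, alert on outliers—is the same.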

To learn more about the application of AI and machine learning (ML) technology in space, Design News sat down with Ossi Saarela, Space Segment Manager at MathWorks.

Design News:  What potential does AI have in the space industry, specifically for engineers?

Ossi Saarela: The space industry has a long history of striving to increase the autonomy of spacecraft, which goes all the way back to the early days of spaceflight. Today’s machine and deep learning technologies are being used to address the industry’s most challenging problems.

The uses of AI in the space industry will enable engineers to work on more capable systems with improved efficiency. Further, AI and ML have the potential to help engineers throughout the design, test, and operational phases of a space program:

  • AI will enable engineers to design capabilities for spacecraft that were previously prohibitively complex to implement.

  • AI will help to assure the quality of these complex systems by providing new insights into test data.

  • AI will continue to be used as an aid to operations engineers by monitoring spacecraft health data for potential issues.


Design News: What does the future of autonomy look like for spacecraft?

Ossi Saarela: The more decisions space vehicles can make on their own, the more valuable they are for space exploration. In deep space, if these vehicles depend on humans on the ground for decisions, they are left idle while waiting for ground commands to reach them for minutes, hours, or even days due to the distances involved. The lack of autonomy is a crucial limiting factor for spacecraft, and increasing autonomy will allow improvements to existing space applications and enable completely new missions.

New missions face increasing autonomy requirements, usually driven by more ambitious goals such as deep space exploration missions to other planets, moons, and asteroids that focus on landings and returns rather than fly-bys. These types of time-critical maneuvers are not feasible without autonomy. The trend holds true even for crewed programs, where autonomous docking with other spacecraft and autonomous moon landings are becoming the required norm, freeing valuable astronaut training time for other tasks.

Design News: How are machine learning and deep learning techniques pushing autonomy forward?

Ossi Saarela: The primary autonomy applications of machine learning and deep learning today are in improving the perception capability of spacecraft through computer vision, allowing spacecraft to make better sense of what they see with their cameras and other instruments. For example, the European Space Agency (ESA) is flying a technology-demonstrator Earth observation satellite called PhiSat-1, which uses AI to detect and filter out pictures of clouds, which for many Earth observation purposes are not useful. For deep space exploration, potential applications include safer landings on planets or moons using autonomous hazard detection, as well as autonomous driving capabilities for rovers.

Design News: How does engineering software play a key role in the advancement of AI?

Ossi Saarela: Machine learning and deep learning inherently require engineering software because the models are generated by computers rather than humans. The software used to test deep learning and machine learning prototypes often has a steep learning curve, which can lead to inefficient development cycles. There is a growing demand for software tools that allow engineers who are domain experts in their industry to more easily evaluate and deploy machine learning and deep learning models into their production programs. This is where tools such as MATLAB and Simulink, which have machine learning and deep learning capabilities but are designed for engineering workflows, can help.

Setting deep learning and machine learning aside for a moment, even traditional autonomy algorithms benefit from using modern engineering software. As decision-making capabilities are delegated from human operators to the spacecraft, the complexity of the design increases dramatically. Design errors in these complicated AI systems can be subtle and hard to catch. For example, it’s difficult to assess a vision-based sensing and perception algorithm’s reliability in multiple lighting and perspective conditions through review alone; doing so requires extensive simulation and testing. Engineering software tools can enable simulation and testing capability throughout the design lifecycle. They also enable engineers to assess the design at different levels of abstraction—from static architecture to dynamic behavior modeling, all the way to the source code. These are crucial capabilities for good systems engineering.
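The simulation-sweep idea Saarela describes—exercising a perception algorithm across many conditions rather than trusting review alone—can be sketched in miniature. Everything here is a deliberately toy illustration: the “perception algorithm” is a single threshold and the illumination model is made up, but the test structure (sweep a condition, record where the algorithm fails) is the point.

```python
# Sweep a perception algorithm across simulated lighting conditions and
# record where it breaks down, instead of judging it by review alone.

def detect_target(pixel_intensity, threshold=0.5):
    """Toy perception step: can the sensor still see the landing target?"""
    return pixel_intensity > threshold

def simulate_lighting(sun_angle_deg):
    """Crude illumination model: intensity falls off as the sun nears the horizon."""
    return max(0.0, 1.0 - sun_angle_deg / 90.0)

# Sweep sun angles from overhead (0 deg) to the horizon (90 deg).
results = {angle: detect_target(simulate_lighting(angle))
           for angle in range(0, 91, 15)}
failures = [angle for angle, ok in results.items() if not ok]
print(failures)  # the low-sun cases where detection fails
```

In a real workflow the sweep would run in a high-fidelity simulator across lighting, perspective, and sensor-noise dimensions at once, but even this skeleton surfaces a failure envelope that a design review would likely miss.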

John Blyler is a Design News senior editor, covering the electronics and advanced manufacturing spaces. With a BS in Engineering Physics and an MS in Electrical Engineering, he has years of hardware-software-network systems experience as an editor and engineer within the advanced manufacturing, IoT and semiconductor industries. John has co-authored books related to system engineering and electronics for IEEE, Wiley, and Elsevier.


Caption: The PhiSat-1 satellite is providing AI for Earth observation. (Image Source: ESA, CERN/M. Brice)



