What were the breakthrough technologies of 2019? The answer depends on who you ask, but several common themes have emerged, such as cobots, emerging energy sources, AI, and cybersecurity breaches. Let’s consider each in more detail.
1.) Robotics – collaborative robots (or cobots)
(Image source: OpenAI and Dactyl)
Remember Dum-E (short for dummy) from the first Iron Man movie? Dum-E was a cobot that helped Tony Stark create his flying robotic suit. It was a scaled-down, more human-interactive version of the traditional industrial-grade manufacturing-line arm robot.
Cobots are designed to work collaboratively alongside humans with a gentle touch, i.e., to not smash fingers or step on the toes of their work buddies. Doing so requires that cobots be much more aware of their location in relation to humans, via sensing and perception technologies. To achieve this goal, one company, Veo Robotics, uses a variety of 3D sensors placed around the robot’s workcell to aid in location awareness. The company’s sensors add an extra measure of safety by automatically slowing the movement of industrial cobots whenever a human co-worker comes close.
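For a sense of how this kind of speed-and-separation monitoring can work, here is a minimal Python sketch of the general idea. It is not Veo Robotics’ actual implementation; the sensor points, zone distances, and speed factors are illustrative assumptions.

```python
# Minimal sketch of distance-based speed scaling in a cobot workcell.
# Real systems add sensor fusion, safety-rated hardware, and certification;
# the positions and thresholds below are made up for illustration.
import math

def min_distance(robot_pos, human_points):
    """Smallest distance (meters) between the robot and any detected human point."""
    return min(math.dist(robot_pos, p) for p in human_points)

def speed_scale(distance, slow_zone=2.0, stop_zone=0.5):
    """Full speed when far away, proportional slowdown inside the slow zone, stop when too close."""
    if distance <= stop_zone:
        return 0.0
    if distance >= slow_zone:
        return 1.0
    return (distance - stop_zone) / (slow_zone - stop_zone)

# Hypothetical 3D points (x, y, z in meters) reported by the workcell's depth sensors.
robot_tool_center = (0.0, 0.0, 1.0)
detected_humans = [(1.2, 0.4, 1.0), (3.5, 2.0, 1.0)]

d = min_distance(robot_tool_center, detected_humans)
print(f"closest person: {d:.2f} m -> speed factor {speed_scale(d):.2f}")
```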
To help supplement actual human activity, cobots are becoming more dexterous and moving beyond merely picking components on an assembly line. Robots need greater dexterity to pick up objects that have moved even slightly beyond their programmed parameters. Cobots cannot yet grasp any object just by looking at it, but they can now learn to manipulate an object on their own.
OpenAI, a nonprofit company, recently introduced Dactyl, a dexterous robotic hand that taught itself to flip a toy building block in its fingers. Dactyl uses neural network software to learn how to grasp and turn the block within a simulated environment before the hand tries it out for real. According to the company, it has been able to train neural networks to solve a Rubik’s Cube using reinforcement learning, with Kociemba’s algorithm picking the solution steps.
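One ingredient OpenAI has described for bridging the gap between simulation and the physical hand is domain randomization: the simulator’s physical parameters are constantly re-sampled so the learned policy cannot overfit to any single simulation. The short Python sketch below illustrates only that idea; the parameter names, ranges, and training stub are illustrative assumptions, not OpenAI’s code.

```python
# Minimal sketch of domain randomization: re-sample simulator physics every
# episode so a policy trained in simulation transfers better to the real hand.
# The parameter ranges and train_episode() stub are placeholders.
import random

def sample_sim_params():
    """Randomize simulated physics for one training episode (values are made up)."""
    return {
        "cube_mass_kg": random.uniform(0.05, 0.12),
        "friction_coeff": random.uniform(0.5, 1.5),
        "motor_latency_s": random.uniform(0.00, 0.04),
    }

def train_episode(params):
    """Placeholder for one simulated rollout plus a reinforcement-learning update."""
    # A real setup would step a physics simulator with these parameters and
    # update a neural-network policy here.
    return random.random()  # stand-in for the episode's reward

for episode in range(5):
    params = sample_sim_params()
    reward = train_episode(params)
    print(f"episode {episode}: params={params}, reward={reward:.2f}")
```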
2.) Power generation – Gen IV nuclear and ocean windmills
The pebble-bed reactor. (Image source: US Dept. of Energy and X-Energy)
Designing nuclear power systems that are safer, more sustainable, and cheaper than in the past has been the goal of the nuclear industry for many years. The most notable of these systems are the new Generation IV (Gen IV) reactors, a set of nuclear reactor designs being considered for commercial applications by the Generation IV International Forum. These designs include thermal reactors, fast reactors, and Very High Temperature Reactor (VHTR) technologies. One implementation of the latter is the so-called pebble-bed reactor (PBR), which features spherical fuel elements called pebbles.
Besides nuclear, the other, often forgotten, energy source is wind-powered turbines. Unfortunately, the global onshore wind energy market is saturated in many developed countries due to a lack of available land. However, offshore wind installations have the potential to provide enormous amounts of power for the global market.
One disadvantage is that conventional offshore turbine technology is limited to relatively shallow waters of less than 50 meters, which significantly restricts where turbines can be installed. Floating turbines could substantially expand the reach of offshore wind power to areas with very deep coastal waters, such as the west coasts of the US and Japan and the deeper parts of Europe’s North Sea.
3.) Artificial intelligence
(Image source: silicium ©P.JAYET/CEA)
Artificial intelligence (AI) is on everyone’s list of trends, and rightly so: IDC estimates that the AI technology market will grow from $8 billion in 2016 to over $47 billion by 2020.
AI technology is being used in facial recognition, cybersecurity, and edge and cloud computing. Another trend, specialized AI-enabled chips, gained increased attention this year in the semiconductor and EDA chip-design-tool industries when an entire day was dedicated to AI at SEMICON West, one of the largest semiconductor conferences.
As the keynote speaker for the AI Design Forum, which kicked off the AI day event, Synopsys chairman and co-CEO Aart de Geus highlighted the shift from the Moore’s Law era of computational chips to the newer chip architectures required by AI. He sees simulation, modeling, and virtual prototyping as crucial areas affected by this change.
From a business perspective, AI and machine learning continued to dominate investment in new semiconductor chips in 2019. According to Wally Rhines, CEO emeritus of Mentor, a Siemens business, venture capital funding of 30 fabless AI companies totaled $2.3 billion in 2018. Year-to-date funding in 2019 has exceeded $850 million. While lower than at this point in 2018, the funding is more diverse, covering a broader base of companies and applications.
“Machine learning also had a major impact on EDA, with dozens of new and improved design tool capabilities making their appearance by applying AI techniques to traditional problems like simulation, optical proximity correction, timing closure, place/route, and yield enhancement,” explained Rhines.
One example of an emerging AI application is in edge computing. In 2019, two of Europe’s leading R&D centers, CEA-Leti of France and the Microelectronics Institute within Germany’s Fraunhofer Society, announced joint research on the development of neuromorphic computing techniques for edge computing. This computing approach seeks to create electronic circuits that mimic the biological architectures of the human nervous system.
The institutes’ work toward edge-AI systems builds on Leti’s strength in fully depleted silicon-on-insulator (FD-SOI) chip design and on both institutes’ expertise in 3D packaging. It is also likely to draw upon FinFET architectural research by another EU R&D powerhouse, Belgium’s Imec.
4.) Cybersecurity
(Image source: Sai Kiran Anagani on Unsplash)
This year saw even more cybersecurity breaches. In 2018, 500 million personal records were stolen, according to the Identity Theft Resource Center. That number was minuscule compared to the 7.9 billion records exposed in 2019 across more than 5,000 breaches, as reported by Risk Based Security. Compared to the 2018 Q3 report, the total number of 2019 breaches was up 33.3% and the total number of records exposed more than doubled, up 112%. Here’s just a small sampling of the more infamous breaches:
> ElasticSearch server breach
An online casino group leaked information on more than 108 million bets, including customers’ personal data, deposits, and withdrawals. The data leaked from an ElasticSearch server that was left exposed online without a password. ElasticSearch is a portable, high-grade search engine that companies install to improve their web apps’ data indexing and search capabilities.
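Checking whether a cluster like this is wide open takes only a few lines. The Python sketch below probes an Elasticsearch REST endpoint for unauthenticated access; the hostname is a placeholder, the check is a simplification, and such probes should only ever be run against systems you own or are authorized to test.

```python
# Minimal sketch: does an Elasticsearch endpoint answer unauthenticated requests?
# An open cluster typically returns HTTP 200 with cluster metadata on its REST
# port (9200); a secured one typically returns 401. Assumes the "requests" package.
import requests

def is_open_elasticsearch(host, port=9200, timeout=5):
    try:
        resp = requests.get(f"http://{host}:{port}/", timeout=timeout)
    except requests.RequestException:
        return False  # unreachable; not necessarily secure, just not open to us
    return resp.status_code == 200 and "cluster_name" in resp.text

# Placeholder host -- replace with a system you are authorized to test.
print(is_open_elasticsearch("elasticsearch.example.internal"))
```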
> Canva data breach
Security Magazine reported that Canva, a graphic-design tool website, suffered a data breach that affected 139 million users. The exposed data included customer usernames, real names, email addresses, passwords, and city and country information. In addition, 78 million of the 139 million affected users had a Gmail address associated with their Canva account.
> Facebook app data exposure
UpGuard security researchers revealed that two third-party Facebook app datasets were exposed to the public internet. One database, originating from Cultura Colectiva, exposed more than 540 million records detailing comments, likes, reactions, account names, Facebook IDs, and more. The other third-party app’s data was exposed to the public internet via an Amazon S3 bucket, the researchers said. That database backup contained columns for user information such as user IDs, friends, likes, passwords, and more.
> Orvibo leaked database
An open database linked to Orvibo Smart Home products exposed more than 2 billion records. Orvibo runs an IoT platform that claims to have around a million users, including private individuals who connected their homes, as well as hotels and other businesses with Orvibo smart home devices.
> Social media profiles data leak
Researchers Troia and Diachenko of Data Viper found an enormous amount of data exposed and easily accessible to the public on an unsecured server, which contained about 4 billion records. The total count of unique people across all the data sets reached more than 1.2 billion, which the researchers said made the event one of the largest data leaks from a single source organization in history. The leaked data contained names, email addresses, phone numbers, and LinkedIn and Facebook profile information.
Sadly, the common theme in many of these data exposures is that data aggregators obtained and used personal information in ways the owners never imagined or consented to. This is as much a legal problem as a technical one.
What can be done to slow the loss of data? Using passwords on some of these servers would be a start. More realistically, cybersecurity companies are implementing machine learning, analytics, and automation to detect and remediate threats.
Finally, the IEEE Computer Society lists five particularly serious security threats for 2019: AI theft, cloud platform weaknesses, cryptojacking (illicit cryptocurrency mining), Advanced Persistent Threat (APT) spying methods such as keyloggers, and IoT device vulnerabilities.
5.) Additive manufacturing
We are well into Industry 4.0, the fourth industrial revolution, which relies heavily on the digitization of analog manufacturing processes and information. In turn, digitization relies on internet connectivity, smaller and smarter sensors, and edge-cloud computing. These three factors are critical elements of the Industrial IoT (IIoT), a platform of technologies that about 63% of manufacturers believe will substantially increase their profitability in the next few years.
Two major areas being transformed are predictive maintenance and the additive manufacturing (3D printing) of parts.
Predictive maintenance (PdM) anticipates maintenance needs on the manufacturing floor to avoid costs associated with unscheduled downtime. Using smart sensor technologies to connect and monitor equipment helps to identify patterns that lead to potential problems or failures. AI and machine learning software help train monitoring equipment to recognize these patterns and sometimes even figure out how to address them.
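As a simplified illustration of that pattern-recognition step, the Python sketch below trains an anomaly detector on synthetic “normal” sensor readings and flags new readings that drift toward a failure signature. The data, features, and model settings are made up for the example; production systems are far more involved.

```python
# Minimal sketch of anomaly detection for predictive maintenance:
# learn what normal vibration/temperature readings look like, then flag outliers.
# Synthetic data and thresholds are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic history: columns = [vibration (mm/s), bearing temperature (deg C)]
normal_history = rng.normal(loc=[2.0, 60.0], scale=[0.3, 2.0], size=(500, 2))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_history)

# New readings from the line: the last one drifts toward a failure signature.
new_readings = np.array([[2.1, 61.0],
                         [1.9, 59.5],
                         [4.8, 78.0]])
flags = model.predict(new_readings)  # +1 = looks normal, -1 = anomalous
for reading, flag in zip(new_readings, flags):
    status = "ALERT: schedule maintenance" if flag == -1 else "ok"
    print(reading, status)
```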
IoT Analytics estimates that the global predictive maintenance market reached $3.3 billion in 2018 and is expected to grow to $23.5 billion by 2024. This industry is served by a number of established and startup companies. For example, a US startup called Seebo employs a tried-and-true process known as Root Cause Analysis (RCA) to identify the factors that cause defects or quality deviations in a manufactured product. The company combines industrial AI, machine learning, and probabilistic graphical models to improve its RCA-related predictions.
(Image source: Seebo)
The other growing additive manufacturing innovation is 3D printing, which continues to find new applications in rapid product prototyping and creating parts on demand. One example of the latter is British Airways’ recently announced intention to use 3D printers to create aircraft cabin parts.
While printing cabin parts may seem like a small start in the use of 3D printers, one should note that the British Airways fleet currently consists of more than 280 planes supplied by Boeing and Airbus. These two major aircraft manufacturers have already integrated additive manufacturing technologies like 3D printing into their maintenance, repair, and operations (MRO) processes, as well as into the prototyping of new aircraft part designs.
6.) Edge computing
This newer network topology found wider application in 2019. In edge computing, data processing capabilities are placed closer to the source of the data than is possible with the cloud. Utilizing edge computing, as on the shop floor of a manufacturing plant, reduces the delays in processing critical data.
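A toy Python sketch of the idea: raw sensor samples are summarized, and any time-critical alarm is raised, locally at the edge, with only a compact digest sent upstream. The sample values, threshold, and upload stub are illustrative assumptions.

```python
# Minimal sketch of edge-side processing: summarize raw samples locally and
# send only a small digest (or an alert) upstream, instead of streaming every
# sample to the cloud. Data, threshold, and send_to_cloud() are placeholders.
import statistics

def send_to_cloud(payload):
    print("uploading:", payload)  # stand-in for an MQTT/HTTP publish

def process_at_edge(samples, alarm_threshold=85.0):
    digest = {
        "count": len(samples),
        "mean": round(statistics.mean(samples), 2),
        "max": max(samples),
    }
    # The time-critical decision happens here, with no cloud round trip.
    if digest["max"] > alarm_threshold:
        digest["alert"] = "over-temperature"
    send_to_cloud(digest)

# One second of (synthetic) temperature samples from a machine on the shop floor.
process_at_edge([71.2, 70.8, 72.0, 90.3, 71.5])
```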
Edge devices began to appear more frequently in 2019, often as a part of advanced (smart) sensing, AI applications, and computationally intensive activities. One example is in the medical market, where edge computing is expected to handle the data explosion from the rise of genomic computations for DNA mapping.
Another area driving edge computing technology is the rollout of 5G cellular systems. 5G in industrial IoT and consumer applications will enable massive data bandwidths and support a range of devices, from smartphones to autonomous vehicles and large-scale IoT deployments. 5G systems will rely on an edge computing infrastructure to handle the large data loads.
Edge computing is not without challenges. One of the big concerns is that current edge computing networks lack a common security framework. For now, edge computing shares the same security challenges as the IoT, where devices are small and resource constrained, lacking sufficient security features.
(Image source: IEEE Innovation)
7.) 5G
The first wave of 5G-enabled devices hit the market in 2019. Smartphones took the lead with this latest generation of telecom technology, including the Galaxy S10, OnePlus 7, and Huawei P30, among others.
IDC estimates there will be 41.6 billion connected IoT devices generating 79.4 zettabytes (ZB) of data in 2025. The IoT and other new applications are driving this incredible data explosion, especially at the edge of the network. The advent of 5G, with its lower latency, improved speeds, and higher capacity, will enable virtualization and edge computing for everyone and everything. 5G will enable more hybrid and cloud applications, ranging from machine learning to cloud-based graphics rendering for VR, AR, and gaming. With gigabit speeds, 5G will also eliminate the need for wires for the last mile of connectivity, even within homes and enterprises.
For a cool demonstration of the potential power of 5G, Samsung Electronics and SK Telecom showed how the technology could improve the motor racing experience for fans. The demonstration, held at the Korea International Circuit racetrack, used Samsung Networks’ end-to-end 5G mmWave platform, including 5G New Radio (NR) base stations. The same equipment has been in commercial operation in the United States since the first half of this year.
In the demonstration, a car raced around the track at 130 miles per hour (about 210 km/h) while performing live downloads, uploads, and handovers between 5G cell sites along the racetrack. Download speeds reached up to 1 Gbps, demonstrating the capacity for multi-gigabit downloads to a 5G device inside a racing car using 200 MHz of bandwidth in the 28 GHz spectrum.
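As a back-of-the-envelope check on those figures (taking the quoted numbers at face value), roughly 1 Gbps delivered over a 200 MHz channel works out to about 5 bits per second per hertz of spectral efficiency:

```python
# Back-of-the-envelope check on the demo figures quoted above (assumed values).
throughput_bps = 1e9    # ~1 Gbps peak download in the demo
bandwidth_hz = 200e6    # 200 MHz channel in the 28 GHz band

spectral_efficiency = throughput_bps / bandwidth_hz
print(f"{spectral_efficiency:.1f} bit/s/Hz")   # -> 5.0 bit/s/Hz

# The quoted car speed also checks out: 130 mph is roughly 209 km/h.
print(f"{130 * 1.609344:.0f} km/h")
```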
(Image source: Samsung)
8.) Satellites everywhere
(Image source: NASA)
The year 2019 might well be known as the Year of the Satellite. Regular-sized and tiny satellites were launched in record numbers this year. By mid-year, 60 small satellites had been sent into low-Earth orbit (LEO) on SpaceX’s Falcon 9 rocket. These communication satellites were the first installment of an internet-beaming mega-constellation that the company hopes will grow to thousands of satellites over the next few years.
Then, in November, 60 more Starlink satellites were sent into orbit. This continued barrage of launches is driven by SpaceX’s obligation to have 2,213 Starlink satellites in orbit by March 2024 or face penalties from the FCC. Of note on the November launch was the reuse of Falcon 9 fairings. The fairing is where cubesats are typically carried, which might mean more cubesats will accompany the Starlink communication satellites into LEO.
This leads into the other big, or rather tiny, satellite trend for 2019: the growth of nano-satellites (or nanosats). Nanosats and cubesats typically have a mass of 1 kg to 10 kg. There are even smaller versions known as chipsats: cracker-sized, gram-scale wafer miniprobes.
All of these tiny satellites have been made possible by the semiconductor-driven miniaturization of electronic and electromechanical systems (think Moore’s Law). The original goal of all these miniature satellites was to provide affordable access to space for the university science community. Many major universities now have a space program, as do several private company startups and even government agencies like NASA and the DoD.
9.) Smarter everything… thanks to semiconductors
(Image source: Taiwan Semiconductor Manufacturing Co., Ltd.)
When someone adds the word “smart” in front of the names of objects, as in smartphones, smart homes or buildings, smart cars, and the like, they are acknowledging the addition of semiconductor intelligence to those objects. Adding ever-smaller hardware chips running software has enabled our smart, connected IoT world.
Even though Technavio forecasts the semiconductor industry to be down 3% for 2019, it still added $454.5 billion in revenue to the US. Modest growth is expected to return in 2020. This year, the semiconductor industry was driven by the continuing growth of global smartphone sales and the adoption of system-on-chip (SoC) devices in automobiles.
Semiconductor chip growth is expected to return in 2020 and beyond thanks to the following market areas: automation in smart cities and factories, AI, 5G, autonomous cars, and the IoT. Some of that growth will be in China: a report from Deloitte pegs China’s semiconductor industry revenue to grow by 25% in 2019, to approximately $110 billion from an estimated $85 billion in 2018. Still, there is some doubt about China’s ability to maintain chip fabrication without the help of US technology, which has been interrupted by the trade war between the two countries.
John Blyler is a Design News senior editor, covering the electronics and advanced manufacturing spaces. With a BS in Engineering Physics and an MS in Electrical Engineering, he has years of hardware-software-network systems experience as an editor and engineer within the advanced manufacturing, IoT and semiconductor industries. John has co-authored books related to system engineering and electronics for IEEE, Wiley, and Elsevier.