

Articles from December 2018


Centralized or Decentralized? Autonomous Vehicles Are Forcing Key Architectural Decisions


Designing autonomous vehicles includes making key architectural decisions relative to in-vehicle networks. (Image source: Mentor, a Siemens business)

If you were to ask any scientist or engineer how long it will be until fully autonomous vehicles are commercially realized, the answers would range from a few years to a few decades.

There are surely a number of reasons for this disparity, but what should not be underestimated are the technical challenges that must be overcome to reach this milestone. Specifically, one of those challenges will be the sensing and sensor fusion technology necessary to effectively understand the environment in a way that is conducive to the driving scenario. Presently, there isn’t even a consensus on whether driverless cars should be considered automated or autonomous, which poses a stark challenge to the engineers developing sensor integration frameworks.

So how will the sensors be developed to support driverless vehicles, and should they be automated or autonomous? Sensors for highly assisted driving (i.e., ADAS) are used to measure a very specific property such that the assistance system can undertake a very specific action. In automated driving, the sensor systems will be used to support the robotic vehicle to undertake a pre-defined set of maneuvers as precisely as possible. At the extreme end, sensors will be used in autonomous vehicles to provide the system with a sufficiently high-fidelity representation of the environment such that it can plan its own course of action. This, of course, introduces a substantial challenge from the sensing perspective, as no single sensor offers the fidelity, precision, or spectral capability necessary to perceive the environment to the level required to support fully autonomous driving.

In order to achieve such a highly effective level of sensing, a number of open questions remain. One of the most important of these is the role of sensor fusion and, in particular, the question of centralized versus decentralized architectures for fusion. One of the challenges with traditional ECU-based architectures is that their growing number and increasingly sophisticated software escalate vehicle system design complexity at a rate that is not commensurate with the added application functionality. While it’s possible to develop a centralized information and communications technology (ICT) architecture for cars, it remains a challenge that has yet to be effectively addressed.

What we do know is that, with the increasing number of sensors in vehicles (as outlined in A Centralized Platform Computer Based Architecture for Automotive Applications by Stefan Sommer at the research institute fortiss GmbH in Munich, Germany), architectural complexity will continue to increase. This raises the question as to whether the architectures for integrating sensors in autonomous vehicles should employ a centralized or decentralized approach, where individual sensors estimate independent object tracks to be shared with a centralized system. Below, we consider some of the key technical challenges from a sensor fusion perspective.

Centralized Fusion Architectures

Within a centralized fusion architecture, all of the remote nodes are directly connected to a central node, which undertakes the main fusion functionality. The client nodes are often disparate in terms of their physical location, spectral operation, and physical characteristics. For example, radar and LiDAR sensors will be placed in different locations, have different capabilities and levels of fidelity, and have different physical operation. These sensors will forward their “raw data,” which is aligned and registered to a common spatial and temporal coordinate framework. Sets of data from the same target are associated, and then the data is integrated together. The system has thus generated an estimate of the target state using multi-sensor data.

From a fusion perspective, a centralized architecture is preferable, as the measurements from different sensors can be considered conditionally independent in that there has been no sharing between nodes prior to the centralized fusion. However, there are two major drawbacks: The communications bandwidth necessary to transport all of the ‘raw data’ to the centralized node expands to the order of Gigabits per second, and the computational expense of associating all of the raw data to possible targets rises significantly. It does, however, have the benefit of providing the most appropriate framework for optimal Bayesian fusion (remembering that the data is conditionally independent). From a systems perspective, a centralized architecture results in reduced system complexity (in terms of the system design) and decreases the latency between the sensing and actuation phases.
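To make the centralized case concrete, the sketch below fuses conditionally independent Gaussian measurements of the same object in information (inverse-covariance) form, which is the optimal Bayesian combination under those assumptions. The sensor values and noise covariances are illustrative and not drawn from any production system.

```python
import numpy as np

def fuse_independent(measurements, covariances):
    """Optimal Bayesian fusion of conditionally independent Gaussian
    measurements of the same quantity, in information-filter form."""
    info = np.zeros_like(covariances[0])
    info_vec = np.zeros_like(measurements[0])
    for z, R in zip(measurements, covariances):
        Rinv = np.linalg.inv(R)
        info += Rinv              # accumulate information (inverse covariance)
        info_vec += Rinv @ z      # accumulate information-weighted measurements
    P = np.linalg.inv(info)       # fused covariance
    x = P @ info_vec              # fused state estimate
    return x, P

# Example: a radar and a lidar return for the same object position (x, y) in metres.
z_radar = np.array([10.2, 4.9]); R_radar = np.diag([0.5**2, 2.0**2])
z_lidar = np.array([10.0, 5.1]); R_lidar = np.diag([0.1**2, 0.1**2])
x_fused, P_fused = fuse_independent([z_radar, z_lidar], [R_radar, R_lidar])
```

Because the lidar is modeled as far less noisy here, the fused estimate sits much closer to the lidar return while still tightening the overall uncertainty.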

The right combination of centralized and decentralized information processing is key to the optimal design and functionality of autonomous vehicles. (Image source: Mentor, a Siemens business)

Decentralized Fusion Architectures

A decentralized fusion system consists of a network of sensing nodes—each with an ability to process its own data to form object tracks, and then communicate these with both adjacent nodes and the centralized node. In this system, locally generated object tracks are communicated toward the central fusion system, which combines all of the local tracks to form a common estimate that is used for global decision making. This is typically the type of fusion system implemented in modern, highly assisted and automated driving vehicles.

In many ways, the decentralized fusion process is more complicated from both an architectural and algorithmic perspective. This is principally due to more efficient use of the individual sensor characteristics and optimization of signal processing in each sensor. From an architectural perspective, this increases latency between sensing and actuation, but greatly reduces the demand on bandwidth and centralized processing. From a fusion perspective, it reduces the complexity of the data association step, but introduces challenges related to the sharing of common information. Specifically, there is a danger of information being double-counted within the system. There is an entire research field that investigates how to undertake this most effectively.

What is important to realize in this context is the importance of sensor modeling. Fusion algorithms, such as the Kalman filter, rely on effective models of sensor noise/uncertainty in order to function effectively. However, sensor modeling is a complex and sophisticated process. Many systems (at least in my experience) rely on guesstimates of the sensor model, as opposed to undertaking high-precision measurements to determine the real sensor models. In these cases, fusion algorithms degenerate into weighted averaging algorithms, which simply blend the data. With multiple fusion algorithms running across different nodes in the system, this effect can be exacerbated.
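As an illustration of why the noise model matters, here is a minimal, generic Kalman measurement update (not any vendor's implementation). The gain, and therefore how much the filter trusts each sensor, is driven entirely by the assumed measurement noise covariance R; if R is a guess that is simply held constant, the update behaves like a fixed-weight average of prediction and measurement.

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """Standard Kalman measurement update.

    x, P : prior state estimate and covariance
    z    : measurement
    H    : measurement model (z ~ H x + noise)
    R    : measurement noise covariance, i.e. the 'sensor model'
    """
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain: a large R yields a small gain
    x_post = x + K @ (z - H @ x)             # correct the prediction with the measurement
    P_post = (np.eye(len(x)) - K @ H) @ P    # reduce the uncertainty accordingly
    return x_post, P_post

# If R is a guesstimate that never changes, K settles toward a constant and the
# filter reduces to a fixed weighted average of prediction and measurement.
```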

So what is the right architecture to use? Well, like all good answers, it depends. Both architectures offer different tradeoffs. Centralized architectures are less complex at the system level, require less total computational hardware, and likely produce the optimal fusion result. However, they are more algorithmically complex and require a team with excellent knowledge of the sensor systems in order to model them effectively. Decentralized architectures, on the other hand, have lower bandwidth requirements, but increase latency, add system complexity, and make the fusion result more conservative. However, they allow system developers to focus purely on system integration and the development of object-level fusion algorithms, as opposed to needing deep expertise in raw sensor data processing and complex data association methods.

Success in making the most effective autonomous vehicle will undoubtedly go to the organization that effectively realizes the right combination of centralized and decentralized information processing.

Dr. Daniel Clarke is principal engineer for the Automotive Business Unit at Mentor, a Siemens business.


SimScale Speeds Transient CFD Simulations


Much of modern engineering relies upon simulations of physical phenomena. This is especially true in aerodynamics and fluid flow, where full-size vehicles or building structures are too large to easily test in a prototype stage. Yet experiments run in wind tunnels at smaller scale have difficulty accurately representing the characteristics of the flow fields. As a result, computational fluid dynamics (CFD) programs have been developed to aid engineers in their quest for products with improved aerodynamic capability.

In essence, there are two goals in CFD as applied to external flow. The first is to calculate forces and moments that result from the flow of a fluid (air and/or water) over a vehicle (car, truck, racecar, aircraft) or an architectural design (building, bridge)—usually to determine drag and lift forces in an attempt to predict the stability of the vehicle or structure. The second goal is to visualize the flow patterns that occur as air passes over an object to help the designer improve the flow—for example, by reducing drag or enhancing desirable lift characteristics of a vehicle or by improving pedestrian comfort when walking between buildings.
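As a simple illustration of the first goal, the drag force reported by a simulation is normally reduced to a non-dimensional drag coefficient. The numbers below are assumed values for a generic passenger car, not SimScale results.

```python
# Assumed, illustrative values for a generic passenger car at highway speed.
rho = 1.225      # air density, kg/m^3
v = 30.0         # free-stream speed, m/s (about 108 km/h)
A = 2.2          # frontal area, m^2
F_drag = 400.0   # drag force predicted by the simulation, N (assumed)

q = 0.5 * rho * v**2        # dynamic pressure, Pa
Cd = F_drag / (q * A)       # non-dimensional drag coefficient
print(f"Cd = {Cd:.2f}")     # ~0.33 for these assumed numbers
```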

Shown is a simulation of wind loads on high-rise buildings using the SimScale Lattice Boltzmann method cloud-based solver. This is LOHAS Park in Hong Kong. (Image source: SimScale)

Navier-Stokes Finite Volume Solvers

By far, the most common CFD approach is to solve a series of nonlinear, partial differential equations—the Navier-Stokes equations—via a finite volume method to obtain the pressure and velocity of millions of macroscopic elements or cells that represent volumes of air. The results can be examined either as a steady-state flow or with a transient simulation. Approximating a solution to the Navier-Stokes equations takes hundreds or thousands of iterations. And even with high-power computing resources, it can take many hours for a steady-state solution or many days of computation for a transient result.

The computational side of CFD has evolved during the past decade. Early programs required gigantic supercomputer facilities and massive amounts of energy to run the solvers. Eventually, the programs were made more efficient so that they could run faster on arrays of parallel processors at significantly lower cost. The most recent versions of CFD programs can be run on souped-up desktop computers, with runtimes of several hours providing reasonably usable results—at least for steady-state simulations. More recently, a company called SimScale has created a cloud-based CFD system that allows users to access an array of parallel processors with a user-friendly interface on a simple desktop computer.

A New Approach to the Lattice Boltzmann Method

Because approximating a solution for the nonlinear Navier-Stokes equations is so difficult, other CFD approaches have been developed. One is called the Lattice Boltzmann Method (LBM). LBM deals with a so-called "distribution function," which describes the number of molecules moving in each of a set of discrete spatial directions. The macroscopic fluid properties, such as velocity and pressure, can then be reconstructed from the distribution function.
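For readers who want to see what that looks like in practice, here is a minimal, textbook-style sketch of the standard D2Q9 lattice, not Pacefish or SimScale code: the macroscopic density and velocity are simply moments of the distribution function, and a BGK collision step relaxes the distributions toward a local equilibrium.

```python
import numpy as np

# D2Q9 lattice: 9 discrete velocity directions and their weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def macroscopic(f):
    """f has shape (9, ny, nx): one distribution value per direction per cell."""
    rho = f.sum(axis=0)                          # density = zeroth moment
    u = np.einsum('qi,qyx->iyx', c, f) / rho     # velocity = first moment / density
    return rho, u

def equilibrium(rho, u):
    cu = np.einsum('qi,iyx->qyx', c, u)          # projection of u onto each lattice direction
    usq = (u ** 2).sum(axis=0)
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def bgk_collide_and_stream(f, tau):
    rho, u = macroscopic(f)
    f = f - (f - equilibrium(rho, u)) / tau      # relax toward local equilibrium
    for q in range(9):                           # stream each population to its neighbor cell
        f[q] = np.roll(f[q], shift=(int(c[q, 1]), int(c[q, 0])), axis=(0, 1))
    return f
```

Note how the update is explicit and local, which is what makes the method map so well onto massively parallel hardware, as discussed below.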

Working with a company called Numeric Systems GmbH, SimScale has created a cloud-based LBM solver that allows users an alternative to Navier-Stokes (NS) finite volume solvers. David Heiny, SimScale’s CEO and co-founder, told Design News that, “As always in engineering simulation, there is no ‘one method for all applications,’ but every method comes with advantages and disadvantages.” Using LBM provides some advantages over traditional CFD solvers. “Since LBM results in an inherently transient, explicit, and purely linear mathematical formulation for the fluid state evolution, it is less compute intense than the nonlinear terms of NS,” said Heiny. “There are some other numerical advantages in LBM, such as high order advection (2 and above), low numerical viscosity, and good conservation,” he told us.

For the traditional NS CFD user, the new LBM does have some limitations. One of the main challenges for LBM is wall modeling. “As LBM requires regular Cartesian grids, appropriate sub-grid modeling of the geometry surface is not straightforward and challenging if one is interested in accurate results,” noted Heiny. Where NS simulations allow a variety of shapes and sizes to produce a “mesh” of volume elements, LBM requires the mesh to be built only from rectilinear cells. The size of the LBM cells can be varied, however, to achieve acceptable surface representation.

Fast, Accurate, Robust

Pacefish, the LBM solver developed by Numeric Systems and integrated into SimScale, comes with its own Cartesian mesh engine. The user only defines the flow domain and other physics, and the grid is generated at solver runtime. Because it's a purely Cartesian grid, it's significantly faster, less error prone, and more tolerant toward dirty geometry/CAD. In fact, the speed of the LBM solver is impressive. This completely new implementation of the Lattice-Boltzmann method is tailored to the massively parallel architecture of GPUs, allowing it to run efficiently on multiple GPUs in parallel and giving results up to 20 to 30 times quicker than a typical NS transient simulation, according to information provided by SimScale.

“The accurate analysis of transient flows has historically been fraught with very long computing times and high up-front costs in order to yield realistic results. With the release of this new Lattice-Boltzmann solver, CFD engineers no longer need to choose between speed and accuracy—and on top of that, can access it with the convenience of an entirely web-based workflow. We’re very excited to see our customers seize the new opportunities this technology is opening up,” said SimScale’s David Heiny.

Senior Editor Kevin Clemens has been writing about energy, automotive, and transportation topics for more than 30 years. He has master's degrees in Materials Engineering and Environmental Education and a doctorate in Mechanical Engineering, specializing in aerodynamics. He has set several world land speed records on electric motorcycles that he built in his workshop.


Studying Membrane Behavior in Fuel Cells Aids Future Designs


Researchers have uncovered one of the mysteries of how a polymer material used in hydrogen fuel cells behaves, paving the way for new and better designs for the useful source of energy.

Specifically, a collaboration between researchers in Russia and Australian professor Barry Ninham from Australian National University in Canberra observed how a Nafion membrane swells as it interacts with water—partially unwinding some of its constituent fibers, which then protrude away from the surface into the water, they said.

A diagram shows the design of a hydrogen fuel cell. Researchers have unveiled a mystery of how the polymer membrane in between anode and cathode behaves, paving the way for better future designs. (Image source: Wikipedia)

Swelling Membranes Reduce Efficiency

A Nafion membrane is used to separate the anode and cathode within a fuel cell, but its tendency to swell has always reduced its efficiency. Now that they have a better understanding about something which “they did not have a clue” about before, they can set out making improvements to these energy sources, Ninham said.

Indeed, scientists long have considered fuel cells a good source of continuous energy for numerous and especially remote applications—from spacecraft to remote weather stations. However, fuel cells have some limitations that researchers could correct if they understood some of the design elements better, they said.

Nafion is the highest-performance, commercially available proton exchange membrane currently used in hydrogen fuel cells. The membrane is extremely porous, which allows for significant concentration of the electrolyte solution while separating the anode from the cathode. This, in turn, allows the flow of electrons producing energy in the fuel cell, researchers said.

For this project, researchers set out to determine why the membrane swells when it interacts with water, first examining a proposed hypothesis that attributed the phenomenon to a new state of water. However, what they found in their work was a different scenario—the growth of polymer fibers extending from the membrane surface as it interacts with water, Ninham said.

Specialized Instrumentation

The research team developed specialized laser instrumentation—photoluminescent UV spectroscopy—to characterize the polymer fibers along the membrane-water interface. While they could not directly observe individual fibers because of the instrumentation’s spatial limitation, they reliably detected their outgrowth into the water, researchers said.

What they found in their work is that the number of fibers in the membrane increases as a function of deuterium concentration of the water, said Nikolai Bunkin of Bauman Moscow State Technical University, one of the Russian researchers on the project. This observation required them to describe the molecular-level interaction of deuterated water with the polymer, he said.

Because researchers were able to determine that the effect they observed is most pronounced in water with deuterium content between 100 and 1,000 parts per million, they can now make more informed decisions to optimize future designs of hydrogen fuel cells, Bunkin said. Specifically, scientists now can customize the structure and electrical properties of the Nafion membrane by studying changes induced by ion-specific effects on its organization and function.

The team published a paper on their work in AIP Publishing’s The Journal of Chemical Physics.

Elizabeth Montalbano is a freelance writer who has written about technology and culture for 20 years. She has lived and worked as a professional journalist in Phoenix, San Francisco, and New York City. In her free time, she enjoys surfing, traveling, music, yoga, and cooking. She currently resides in a village on the southwest coast of Portugal.


Graphene Nano Anode Allows Longer Cycle Life


Lithium ion batteries continue to show the most promise in the near and mid-term to power everything from cell phones and personal electronics to electric vehicles (EVs) and power grids. There are improvements to be made, however, and significant research is underway to find new materials to improve lithium ion battery performance.

One area of great interest is to extend the charge and discharge cycle life of the batteries. Commercial lithium ion batteries begin to lose their performance after about 1,000 charge cycles. This seems a large number (corresponding to charging your EV every day for three years). But the advent of fast charging, opportunity charging, and recharging an only partially depleted battery means that this number of charge cycles can be reached over a much shorter time period.
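A rough, back-of-the-envelope calculation, using assumed charging patterns rather than measured data, shows how quickly those cycles can be consumed:

```python
cycle_life = 1000                 # full charge cycles before noticeable capacity fade

# One charge per day:
print(cycle_life / 365)           # ~2.7 years

# If fast charging, opportunity charging, and partial top-ups mean the pack
# sees the equivalent of two charging events per day (an assumption for
# illustration), that window shrinks to roughly half:
print(cycle_life / (2 * 365))     # ~1.4 years
```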

Shown is a scanning electron microscope picture of a nanocomposite metal oxide based on graphene. (Image source: © Freddy Kleitz/Universität Wien; Claudio Gerbaldi/Politecnico di Torino, CC-BY-NC)

To address the charge cycle lifetime question, materials chemist Freddy Kleitz from the Faculty of Chemistry of the University of Vienna—together with a team of international scientists—has developed a new nanostructured anode material for lithium ion batteries. Based on a mesoporous, mixed metal oxide in combination with graphene, the researchers have shown that the material could extend the capacity and cycle life of the batteries.

In a conventional lithium ion battery, the anode is often made from a carbon material, such as graphite. "Metal oxides have a better battery capacity than graphite, but they are quite unstable and less conductive", explained Kleitz in a University of Vienna news release. The researchers developed a new family of electrode active materials, based on a mixed metal oxide and the highly conductive and stabilizing graphene. The new anode material showed superior characteristics when compared to those of most previous transition metal oxide nanostructures and composites.

Mixing Metals

Copper and nickel were mixed homogeneously in a controlled manner to achieve the mixed metal. Based on nanocasting—a method to produce mesoporous materials—they then created structured nanoporous mixed metal oxide particles. Due to their extensive network of pores, the particles have a very high active reaction area for the exchange of lithium ions with the battery’s electrolyte. A spray drying procedure was applied to wrap the mixed metal oxide particles tightly with thin graphene layers.

"In our test runs, the new electrode material provided significantly improved specific capacity with unprecedented reversible cycling stability over 3,000 reversible charge and discharge cycles even at very high current regimes up to 1,280 milliamperes," said Kleitz. Although the tests involved small scale batteries, according to the news release, there is no reason why the same process wouldn’t scale up to larger batteries, such as those used in hybrid and battery electric vehicles.

"Compared to existing approaches, our innovative engineering strategy for the new high-performing and long-lasting anode material is simple and efficient. It is a water-based process and therefore environmentally friendly and ready to be applied to industrial level," the study authors concluded in the release.

Senior Editor Kevin Clemens has been writing about energy, automotive, and transportation topics for more than 30 years. He has master's degrees in Materials Engineering and Environmental Education and a doctorate in Mechanical Engineering, specializing in aerodynamics. He has set several world land speed records on electric motorcycles that he built in his workshop.


3D NAND Takes Aim at Autonomous Cars, IoT Applications


Autonomous cars and Internet of Things (IoT) devices present a data storage challenge for engineers. In many cases, hard drives are too slow and Flash drives are too expensive to store the monumental amount of data associated with such applications.

That’s where 3D NAND comes in. Increasingly, engineers are talking about employing Flash-based 3D NAND—a technology that literally stacks transistors atop one another—to solve the speed and cost issues.

To learn more about 3D NAND, Design News talked to Wenge Yang, vice president of market strategies for Entegris, Inc., a maker of material-based products for semiconductor device fabrication. Yang, who works with the company’s etching and vapor deposition processes, told us why 3D NAND could serve as a key enabler for autonomous cars, IoT systems, and a wide variety of other data-driven applications in the near future.

Wenge Yang of Entegris, Inc.: “No matter what IoT devices you’re talking about, you’re going to have a need to generate, transmit, and store data.” (Image source: Entegris, Inc.)

DN: We’re starting to hear more about semiconductor applications where engineers want to use 3D NAND technology. Can you give us a simple explanation? What is 3D NAND?

YANG: Device engineers are figuring out they can’t make transistors smaller in the horizontal direction anymore. They’re reaching the limit of the physics. So they’re asking: Is it possible to stack vertically on the wafer?

This is what we call 3D NAND. Instead of 2D—one transistor per area—we stack another transistor on top of it. Then, suddenly you can start scaling again and you generate lower cost on the wafer. This was started three or four years ago by Samsung. They stacked eight transistors vertically.

When they went to eight, everybody loved it, so the industry started scaling up. They went from eight to 16, then from 16 to 32. Then they went from 32 to today, where everybody is making stacks of 64 transistors.

DN: What’s driving the move to this technology?

YANG: The common theme is data—the tremendous amount of data being generated, transmitted, analyzed, and stored. We’re using our phones and other devices to create data, store data, and view data.

The semiconductor industry used to be processor-driven—Intel CPUs deciding the whole direction of the industry. But in the last year or two, it has shifted from processor-centric to memory-centric because of all the data. Data is driving memory. It’s all about storage. And when we talk about storage, the primary medium now is 3D NAND.

DN: So you’re saying that 3D NAND is the new storage solution. Why not use hard drives for data storage?

YANG: Data storage started with hard drives. The biggest benefit of hard drives was that they were cheap per Gigabyte. But the problem is hard drives are too slow. You can actually hear the mechanical drive winding around.

So people invented nonvolatile Flash memory. It stores data on semiconductors. It’s much faster. We’re talking about 1,000 times or even 10,000 times faster in terms of read/write.

DN: So why not use 2D Flash memory?

YANG: The problem is that 2D semiconductor-based storage—Flash memory or solid state devices—is too expensive compared to hard drives. And there’s a second problem: The reliability is not as good as hard drives.

So the question became: How do you make it cheaper? The answer is, you make it smaller. But they were already making 20-nanometer transistors. It’s almost to the limit of how small they could make a transistor. So they started stacking vertically.

You get all the benefits without sacrificing cost, performance, or reliability. It’s suddenly getting the industry very excited. By the end of this year, most of the Flash memory makers will be going to 96 layers. And next year, they’re planning for 128. In August, there was a conference called Flash Memory Summit. The technologists there said they think 200 to 500 layers of stacking is possible.

DN: So what are the applications for it?

YANG: In the past, when we talked about semiconductor-based storage, we talked about thumb drives, PCs, phones, and cameras—consumer-oriented devices. But the larger market is at the enterprise level in servers.

DN: What about the IoT and automotive? There must be an application for autonomous cars.

YANG: IoT devices, such as thermostats in a refrigerator, don’t need a lot of local storage. Or, if you use Amazon Echo, it’s the same. You don’t need to store locally. But those applications transfer data to the cloud, and most of the storage will happen there—in servers. That’s why enterprise storage is a hot market.  

The autonomous car is different. It’s connected and the amount of data is tremendous. There’s between 3 Gigabits and 10 Gigabits of new data generated every second. The challenge is: How do you store the data? Most say store it in the car because the transmission of the data is not fast enough. You can’t transmit everything to the network.

So you need two things. One is big storage in the car. The other is in servers. You need to upload data to servers, so it can be collected for analysis or liability purposes in case of an accident.

But overall, no matter what IoT devices you're talking about, you're going to have a need to generate, transmit, and store data.
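To put Yang's 3 to 10 Gbit/s figure in perspective, a quick conversion (our arithmetic, not Entegris data) shows why on-board storage becomes the bottleneck:

```python
gbit_per_s = 10                           # upper end of the range quoted above
bytes_per_s = gbit_per_s * 1e9 / 8        # 1.25 GB of new data every second
tb_per_hour = bytes_per_s * 3600 / 1e12
print(f"{tb_per_hour:.1f} TB per hour")   # ~4.5 TB generated for each hour of driving
```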

DN: What are the challenges to making this happen? How do you get to 128, or 200, or 500 layers?

YANG: If you’re stacking many layers, it’s like a very high skyscraper. The diameter of the hole is 40 nm, and the depth of the hole can be 50 times that. You have an aspect ratio of 50:1. That means that if you dig a hole from the top of those 96 layers to the bottom, the hole becomes more and more difficult to dig.

It’s an engineering challenge: How do you dig that hole? And how do you remove the things in that hole—take them out of the wafer? The third thing is: You need to fill the hole with conductors, so you can contact each layer in the hole. So how do you fill the hole without voids or coverage issues?

All of those things are big challenges for our industry. That has triggered a lot of innovation in terms of the etch process you use and the cleaning process you use for those holes. The entire purpose is to make this a viable manufacturing process as we move to more layers.

DN: So what’s the takeaway? What do engineers in automotive, aerospace, medical, and elsewhere need to remember about 3D NAND?

YANG: We’re entering a data-driven era. And 3D NAND will be one of the key enablers. It shows a lot of promise as a primary storage device for data in medical, automotive, and aerospace. And as it evolves, the most important enabler going forward will be in the material space.

Senior technical editor Chuck Murray has been writing about technology for 34 years. He joined Design News in 1987, and has covered electronics, automation, fluid power, and auto.


'Waltzing' Nanoparticles Target Medication Delivery


Researchers constantly explore the pairing of technology and biology for next-generation medical applications. Some of the latest research in this field comes out of Indiana University (IU), where researchers have discovered that drug-delivering nanoparticles can be made to “dance” so as to attach to targets within the body differently for more effective treatments.

A team from the IU Bloomington College of Arts and Sciences' Department of Chemistry has been exploring how the movement of therapeutic particles, when they bind to human cell receptors, can indicate the effectiveness of drug treatments. What they’ve found is that some nanoparticles attach to these receptors based upon their position when they meet, like ballroom dancers who change moves with the music, they said in an IU news release.

Pictured is a "dance pair" of nanoparticles dyed red and green to reveal molecular binding under a fluorescence microscope. A team at Indiana University engaged in the research as part of its exploration of the effectiveness of drug-delivering nanoparticles. (Image source: Yan Yu, Indiana University)

Better Effectiveness of Drugs

The work could have a significant effect on the evolution and success of immunotherapy, which uses the body’s own immune system to fight diseases like cancer and depends partially on the ability to control how strongly it bonds to cells, said the study’s leader, Yan Yu, an assistant professor in the IU Bloomington College of Arts and Sciences' Department of Chemistry.

"In many cases, a drug's effectiveness isn't based upon whether or not it binds to a targeted receptor on a cell, but how strongly it binds," she said. "The better we can observe these processes, the better we can screen for the therapeutic effectiveness of a drug."

Previously, the common theory among researchers was that particles slowed down and became trapped during the process of binding to a cell receptor. However, Yu said the team saw “something new.” "We saw the particles rotated differently based upon when they became trapped in binding to their receptors,” she said. In other words, the particles were acting as single dancers in the metaphorical waltz of molecular motion—something they identified as a further option for experimentation and study.

Creating Dance Partners

To do this, Yu's team created dance partners—or two nanoparticles, one dyed green, the other red—that paired together to form a single imaging marker that could be viewed under a fluorescence microscope. Researchers then camouflaged the marker, or nanoprobe, with a cell membrane coating taken from a T lymphocyte—a type of white blood cell found in the body's immune system.

Using the two colors, the researchers could simultaneously observe two unique movements of the particle before cell attachment—a "rotational motion,” which has it circling in place, and "translational motion,” in which it moves across physical space.
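One plausible way to separate those two motions from the microscope data, shown purely as an illustrative sketch rather than the IU team's published analysis, is to track the red and green centroids over time: the midpoint gives translation, while the angle of the red-to-green axis gives rotation.

```python
import numpy as np

def separate_motions(red_xy, green_xy):
    """red_xy, green_xy: (N, 2) arrays of dye centroid positions over N frames."""
    mid = (red_xy + green_xy) / 2.0
    translation = np.linalg.norm(np.diff(mid, axis=0), axis=1)   # per-frame displacement
    axis = green_xy - red_xy
    angle = np.unwrap(np.arctan2(axis[:, 1], axis[:, 0]))        # orientation of the pair
    rotation = np.abs(np.diff(angle))                            # per-frame angular change
    return translation, rotation
```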

"We found that the particles began with random rotation, moved to rocking motion, then a circling motion, and finally a confined circling motion," Yu said. "The observation of this wide range of rotational motion—and the transition from one form to the next at different points in time—is completely new."

Researchers also found that they could start connecting different motions of the particles to different bond strengths, and observed these interactions, she said. They plan to continue their work to create more effective cell bonds for immunotherapy, Yu noted. The next phase will be to monitor the "waltzing" of camouflaged T lymphocytes to understand their targeted-binding to tumor cells. The team published a paper on the work in the journal ACS Nano.

Elizabeth Montalbano is a freelance writer who has written about technology and culture for 20 years. She has lived and worked as a professional journalist in Phoenix, San Francisco, and New York City. In her free time, she enjoys surfing, traveling, music, yoga, and cooking. She currently resides in a village on the southwest coast of Portugal.


MakerBot's New 3D Printer Bridges the Gap Between Hobby and Industrial


The Method 3D printer is targeted at engineers who need industrial-quality 3D printing, but at closer to a desktop scale. (Image source: MakerBot)

MakerBot is calling its new 3D printer, Method, a “performance 3D printer.” It is targeted at engineers and designers who don't need something as extensive (or costly) as an industrial 3D printer, but require more functionality and accuracy than what's provided by the desktop 3D printers available today in retail outlets.

“Current desktop 3D printers derive their DNA from hobbyist 3D printers and are insufficient for many applications in the professional segment,” Nadav Goshen, MakerBot CEO, said in a press statement. “We believe that Method is the next step in helping organizations adopt 3D printing at a larger scale. Method provides a breakthrough in 3D printing that enables industrial designers and mechanical engineers to innovate faster and become more agile. It is built for professionals who need immediate access to a 3D printer that can deliver industrial performance to accelerate their design cycles.”

In essence, MakerBot wants Method to make industrial-quality 3D printing more accessible. According to company specs, Method is capable of producing parts with ± 0.2 mm dimensional accuracy (or ± 0.002 mm per mm of travel, whichever is greater) that are repeatable and have vertical layer uniformity and cylindricity. The printer features a dual performance extrusion system with a 19:1 gear ratio that provides up to three times the push force of a typical desktop 3D printer. It also features a longer thermal core than a standard desktop 3D printer to enable faster and smoother extrusion at higher movement speeds.
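A quick reading of that accuracy spec (our interpretation of the published numbers, not additional MakerBot data): the fixed ±0.2 mm floor governs small dimensions, while the per-millimeter term takes over once a dimension exceeds 100 mm.

```python
def dimensional_tolerance(length_mm, base=0.2, per_mm=0.002):
    """Quoted spec: +/-0.2 mm or +/-0.002 mm per mm of travel, whichever is greater."""
    return max(base, per_mm * length_mm)

print(dimensional_tolerance(50))    # 0.2 mm -> the fixed floor dominates
print(dimensional_tolerance(150))   # 0.3 mm -> the per-mm term dominates
```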

Method also has a circulating heated chamber that provides full active heat immersion for the duration of the print, allowing for temperature and quality control at every layer. MakerBot said this facilitates a controlled rate of cooling, allowing for higher dimensional accuracy as well as improved layer adhesion and part strength.

Method relies on a series of specialty-made materials provided by MakerBot for its performance. MakerBot's line of precision materials comes in three varieties: Tough, for printing high-strength, durable prototypes and fixtures with more impact strength than ABS; PLA, for early stage development and conceptualization applications; and a water-soluble PVA for printing supports that are easily removed without distorting or damaging the part. MakerBot said it is also planning on rolling out a series of specialty materials for Method. The first of these is PETG, a polymer that is growing in popularity in 3D printing circles for its balance of flexibility and durability.

MakerBot is currently taking pre-orders on Method and is expected to ship units beginning in the first quarter of 2019. As its positioning would suggest, the unit retails for $6,499—somewhere between a desktop and an industrial 3D printer.


Chris Wiltz is a Senior Editor at Design News covering emerging technologies including AI, VR/AR, and robotics.

New Initiative Takes OPC UA Out to Field Devices


Rockwell Automation and a group of automation organizations have joined an OPC Foundation initiative to extend the OPC UA protocol. Specifically, a series of working groups has formed to bring the OPC UA protocol’s vendor-independent, end-to-end interoperability out to devices in the field. The initiative plans to address use cases not currently in scope for EtherNet/IP. The goal is to help simplify other use cases—especially in multi-vendor, controller-to-controller environments and for the vertical integration of field devices.

Here are the logos of the companies involved in the initiative to bring OPC UA to field devices. (Image source: Rockwell Automation)

Rockwell sees the need to extend OPC UA as part of the build-out of advanced manufacturing. “Smart manufacturing is making a number of things more relevant. Flexible manufacturing applications drive flexible communications. And analytics require more interaction between devices and software and devices and the cloud,” Paul Brooks, business development manager for networks at Rockwell Automation, told Design News. “As we look at that changing dynamic, we see places where OPC UA can add value. We’ve moved, and the OPC Foundation has moved, and we find ourselves in the middle.”

The companies involved in the initiative include: ABB, Beckhoff, Bosch-Rexroth, B&R, Cisco, Hilscher, Hirschmann, Huawei, Intel, Kalycito, KUKA, Mitsubishi Electric, Molex, Moxa, Omron, Phoenix Contact, Pilz, Rockwell Automation, Schneider Electric, Siemens, TTTech, Wago, and Yokogawa.

Building the Protocol Out to Field Devices

In a statement, Rockwell noted that the company is the primary author of the EtherNet/IP specifications and understands that EtherNet/IP users may see compatibility risks in technology developed for a different ecosystem. Rockwell intends to mitigate these risks through both its ongoing development of EtherNet/IP and its intentions for the OPC UA protocol. “We’ve been a member of OPC since it was founded. We were part of writing the specifications,” said Brooks. “We use OPC UA in many of our communications devices. We’ve been on this journey for 14 or 15 years.”

OPC UA is generally considered an inherently secure protocol, which is one of the advantages when taking it beyond the plant wall. “The OPC Foundation was one of the first organizations to bake security into the protocol,” said Brooks. “OPC UA is more present in software than other Ethernet protocols. We need to make sure all of the use cases include controller-device communications that get included into the analysis, and we need to make sure the security offering from OPC UA is sufficient down to the device.”
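For a sense of what vendor-independent, vertical integration of a field device can look like at the software level, here is a minimal sketch that reads a single value from an OPC UA server using the open-source python-opcua package. The endpoint URL and node ID are hypothetical placeholders, not part of the initiative's specifications.

```python
from opcua import Client  # open-source python-opcua package

# Hypothetical endpoint and node ID, for illustration only.
client = Client("opc.tcp://192.168.0.10:4840")
client.connect()
try:
    temperature_node = client.get_node("ns=2;s=Device1.Temperature")
    print("Current value:", temperature_node.get_value())
finally:
    client.disconnect()
```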

The Details and Priorities of the Initiative

Rockwell Automation’s priorities within the new OPC Foundation initiative include working to help ensure the following:

  • OPC UA specifications are written with the same level of rigor and completeness as the EtherNet/IP specifications.
  • Time-sensitive networking (TSN) is commonly applied across the OPC UA, EtherNet/IP, and PROFINET protocols, so all three can coexist on a common TSN-based network.
  • OPC UA pub/sub technology is implemented in a way that allows existing EtherNet/IP installations to support OPC UA devices.
  • OPC UA hardware requirements allow the protocol to be deployed on hardware platforms that are common in today’s EtherNet/IP components.
  • OPC UA software requirements allow the protocol to be deployed within current EtherNet/IP-centric software tools without significant changes to user workflows.
  • Conformance test practices mandated for EtherNet/IP reflect the necessary requirements for OPC UA conformance testing.

Brooks noted that extending the OPC UA protocol out to field devices will be entirely up to each customer. “Our objective is that our customers get to choose when to use the technology rather than the technology making the decision,” said Brooks. “So, we build it and demonstrate its value to our customers. Then it’s up to them.”

Rob Spiegel has covered automation and control for 17 years, 15 of them for Design News. Other topics he has covered include supply chain technology, alternative energy, and cyber security. For 10 years, he was owner and publisher of the food magazine Chile Pepper.


E-Skin Turns a Person into a Compass


A new electronic skin with magnet-sensitive capabilities developed by researchers in Germany can turn someone into a human compass, providing opportunities for navigation as well as virtual-reality and other applications.

Researchers at Helmholtz-Zentrum Dresden-Rossendorf (HZDR) developed the e-skin, which can detect and digitize body motion as it corresponds to the Earth’s magnetic field, they said in an HZDR news release.

An ultra-thin golden foil on the middle finger is all that’s needed to control a virtual panda with the help of the Earth’s magnetic field, thanks to new research to develop an electronic skin by researchers in Germany. (Image source: Helmholtz-Zentrum Dresden-Rossendorf)

The e-skin is extremely thin and malleable, which means it can comfortably and easily be affixed to human skin to create something like a bionic compass, comprising a foil attached to a person's finger and sensors for easy use, said Gilbert Santiago Cañón Bermúdez, one of the researchers on the project.

“The foil is equipped with magnetic field sensors that can pick up geomagnetic fields,” he said. “We are talking about 40 to 60 microtesla—that is 1,000 times weaker than a magnetic field of a typical fridge magnet.”

Birds already can naturally perceive the Earth’s magnetic field and use it to orientate themselves—something humans can’t do naturally yet. The e-skin developed by the team can provide a means to do this using sensors—ultra-thin strips of the magnetic material permalloy—that operate using the principle of the so-called anisotropic magneto-resistive effect.

“It means that the electric resistance of these layers changes, depending on their orientation in relation to an outer magnetic field,” explained Cañón Bermúdez. “In order to align them specifically with the Earth’s magnetic field, we decorated these ferromagnetic strips with slabs of conductive material—in this case, gold—arranged at a 45-degree angle.”

In this way, the electric current can only flow at its particular angle, which changes the response of the sensor to render it most sensitive around very small fields, he said. Somewhat like a compass, the voltage is strongest when the sensors point north and weakest when they point south, Cañón Bermúdez said.
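A toy model of that behavior, rather than the HZDR calibration, assumes the sensor voltage varies roughly as the cosine of the angle from magnetic north, so a reading can be mapped back to a heading. A single sensor cannot tell east from west; a second, perpendicular sensor would resolve that ambiguity.

```python
import numpy as np

V_NORTH, V_SOUTH = 1.0, 0.2   # hypothetical calibration voltages

def heading_deg(v):
    """Map a sensor voltage to an angle away from north (0-180 degrees)."""
    mid = (V_NORTH + V_SOUTH) / 2.0
    amp = (V_NORTH - V_SOUTH) / 2.0
    cos_theta = np.clip((v - mid) / amp, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

print(heading_deg(1.0))   # 0.0   -> pointing north
print(heading_deg(0.2))   # 180.0 -> pointing south
```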

Outdoor Experiments

Researchers demonstrated that the e-skin works for navigation by conducting outdoor experiments. A person with the sensor attached to an index finger started out from the north, then headed west, south, and back again, causing the voltage to rise and fall again accordingly, researchers said. They matched the cardinal directions with those on a traditional compass for reference.

“This shows that we were able to develop the first soft and ultra-thin portable sensor, which can reproduce the functionality of a conventional compass and prospectively grant artificial magnetoception to humans,” Bermúdez said.

Researchers also demonstrated their invention in a virtual-reality setting to show its versatility, using the magnetic sensors to control a digital panda in the computer game engine, Panda3D. By swiping a hand to the left, the virtual panda on the screen starts moving toward the bottom left. Swiping a hand to the right causes the animal to face the other direction.

The team said that its work is the first demonstration of a highly compliant electronic skin that’s capable of controlling virtual objects by relying on interaction with geomagnetic fields. Previously, an external permanent magnet was needed to achieve the same results.

In addition to the scenarios already described, researchers see the e-skin having another application—as a psychologist’s tool, Cañón Bermúdez said. “Psychologists, for instance, could study the effects of magnetoception in humans more precisely, without bulky devices or cumbersome experimental setups, which are prone to bias the results,” he said.

Researchers published a paper on their work in the journal Nature Electronics.

Elizabeth Montalbano is a freelance writer who has written about technology and culture for 20 years. She has lived and worked as a professional journalist in Phoenix, San Francisco, and New York City. In her free time, she enjoys surfing, traveling, music, yoga, and cooking. She currently resides in a village on the southwest coast of Portugal.


An FPGA for DIY Electronics


The MKR Vidor-4000 FPGA board is the latest market entry in the Arduino line of boards.

Since its introduction in 2005, the Arduino low-cost electronics prototyping platform has allowed engineers, designers, educators, and makers to create new industrial tools and consumer products. One of the more attractive features of the Arduino is that it allows new features and functions to be added by way of customized boards, called shields. A shield mounts on top of the Arduino through two single, inline female connectors soldered on both sides of the Arduino board. Now, there is a new family of Arduino boards called the MKR (Maker), which offers more computing power and connectivity.

The MKR Vidor-4000 Peripherals

The latest version of the MKR platform to hit the market is the MKR Vidor-4000 field programmable gate array (FPGA). The MKR Vidor-4000 is not packaged using the usual, signature footprint Arduino printed-circuit boards (PCBs) known in the maker community. Instead, the PCB has a dimensional footprint of 83 mm (3.25 in) by 25 mm (0.98 in). This small form factor has a variety of peripheral connectors as well as semiconductor integrated circuits. Unlike the Arduino Uno, the MKR Vidor board has a mini high definition multimedia interface (HDMI) connector, which allows it to be attached to a high definition monitor. The Vidor PCB also has Peripheral Component Interconnect Express (PCIe), allowing the FPGA to be connected to a larger processor system for applications like image recognition processing.

Another connector provided on the board is a standard Mobile Industry Processor Interface (MiPi) for attaching a camera. Appropriate inline connectors (female or male) can be soldered on both sides of the MKR Vidor PCB for accessing the board's I/O pins. On a traditional Arduino Uno board, the inter-integrated circuit (I2C) is obtained by inserting two solid wires into the appropriate inline female connector’s cavities. The MKR Vidor PCB has an I2C bus connector soldered onto the board instead. The MKR Vidor-4000 FPGA PCB is also 3.3V compliant. Therefore, a lithium-ion polymer (LiPo) connector is provided for battery operation.

The electrical connections used on the MKR Vidor-4000 FPGA PCB are shown here. 

Semiconductor ICs

In addition to a variety of electrical connections, the MKR Vidor board has processing and communication capabilities by way of populated semiconductor ICs. For providing the FPGA capabilities, an Intel Cyclone 10 IC is mounted onto the PCB. The Cyclone chip includes 16K logic elements (LEs), 504 Kbits of embedded RAM, and 56 18 x 18-bit hardware multipliers to support high-speed, on-board digital signal processing (DSP). The board also has 8 MB of static random-access memory (SRAM) and a 2 MB quad serial peripheral interface (QSPI) NOR flash IC, which together provide working memory and storage for the MKR Vidor FPGA device. As an added feature, 1 MB of that flash is available as user memory on the Vidor board. A SAMD21 processor supplies the main computing power of the Vidor board along with providing control signal resources to the Cyclone 10 IC.

Other special features supported by the MKR Vidor include a u-blox NINA-W102 WiFi module. This module's design is based on the ESP32 WiFi system on a chip (SoC) that’s popular with other wireless microcontroller development platforms. In addition, the Arduino team has provided cryptography on this board via a Microchip ECC508 cryptography IC, which provides secure connections to local networks and the internet.

Pictured are the semiconductor ICs supporting the MKR Vidor-4000 FPGA PCB. 

Programming the MKR Vidor-4000 FPGA

FPGA programming describes the physical wiring of digital circuits, in contrast to calling functions as on a typical microcontroller. With an FPGA development toolchain, programming the digital IC can be accomplished using a schematic editor. Counters, latches, flip-flops, logic gates, and memory blocks can be drawn to create specific processing or control applications for the FPGA using the schematic editor. A binary file of the target application is then generated from the schematic and uploaded to the FPGA.

Another programming approach is to use a hardware description language (HDL). The two common HDL variants are VHDL (VHSIC Hardware Description Language) and Verilog. With either programming option, sophisticated control, signal processing, and image recognition applications can be developed using the MKR Vidor-4000 FPGA platform.

Additional information on sample code and cost of the MKR Vidor-4000 FPGA PCB can be found on the Arduino website.


Don Wilcher is a passionate teacher of electronics technology and an electrical engineer with 26 years of industrial experience. He’s worked on industrial robotics systems, automotive electronic modules/systems, and embedded wireless controls for small consumer appliances. He’s also a book author, writing DIY project books on electronics and robotics technologies.