Articles from March 2018


Building a World Where Algorithms Know Your Face

What if you could withdraw money from an ATM just by walking up to it and having it recognize your face? If it worked well and securely, it would offer huge convenience for customers. But are we ready to let artificial intelligence algorithms track our movements and recognize our faces?

Chinese artificial intelligence startup Yitu Technology is betting we are. The company has recently rolled out a series of AI algorithms aimed at leveraging facial and image recognition for improved security, enhanced consumer experiences, healthcare, and even smart cities.

Yitu Technology and companies like it envision smart cities driven by facial and image recognition algorithms. (image source: Yitu Technology)

Speaking from the 2018 GTC Conference, where the company made its series of product announcements, Dr. Shuang Wu, an AI research scientist at Yitu, told Design News the company began in 2012 with an initial focus on computer vision-based algorithms. Today, Yitu has expanded its expertise into a variety of industries including automotive and healthcare and has also branched into developing chip hardware for AI algorithms.

The company's banking solution, for example, allows any customer to make ATM transactions using their face instead of a bank card and PIN number. “We were the first ones in China, probably in the world, to apply face recognition to ATMs,” Wu said. “Obviously there are a lot of technical challenges in terms of trying to make it hack-proof. We have live face detection, infrared cameras, and solutions for various lighting conditions. All those things together provide a secure solution for the ATM.”

According to Yitu, all of the major banks in China have adopted this technology to some degree and it is now deployed in 12,000 ATMs around the country. Wu added that banks are also using the technology for customer ID verification. “Normally you'd have a teller verifying IDs. Now we have an algorithm doing the same thing, helping the teller to verify that it's the same person.” He said there has even been growing discussion in China of making AI-assisted customer identification a legal requirement for banks.

A concept model of Yitu's face recognition ATM on display at the 2018 GPU Technology Conference (GTC). (image source: Design News)

Banks in China have been adopting facial recognition technology since 2015, but there has been a surge in rollouts in recent years. In an article on its own decision to roll out facial recognition to its ATMs, Agricultural Bank of China (ABC) touted an enhanced customer experience and improved security as primary motivators.

While face recognition for biometrics (i.e., using your face to unlock your phone or laptop) has been around for a while, Wu said that expanding facial recognition into the public sphere in such a large way requires a lot more consideration. “In the past, systems like these were very brittle because as an algorithm programmer you have to anticipate all sorts of different situations. Now as long as you have the data you can do it,” he said.

“The good thing about face data is the labeling cost is low,” Wu added. “But of course you have different situations that will require different tradeoffs. There are cases where recall is the most important metric. For example, you might have a blacklist set up for an office where you don't want certain people to come in. There, it's less important if you occasionally mistake someone for a person on the blacklist; that's okay. But you don't want people on the blacklist walking through. For ID purposes, you don't want false positives.”

With an ATM, for instance, Wu said it's okay for the machine to occasionally reject a legitimate customer by failing to identify them. What you don't want, however, is any stranger being able to walk up to the ATM and withdraw money. “The need for precision is very high in that instance,” he said. “For other cases, speed is another issue. You want the algorithm to be fast; you don't want people standing around waiting at a door or access point, for example. So having low latency becomes a requirement.”
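
Wu's point about recall versus precision can be made concrete with a short sketch. The example below is illustrative only and is not Yitu's algorithm: the similarity scores, labels, and threshold values are invented, and the takeaway is simply that moving a single decision threshold trades false accepts (bad for an ATM) against false rejects (bad for a blacklist).

```python
# Illustrative only: how a decision threshold trades precision against recall
# in a face-verification system. Scores and labels below are made up.

def precision_recall(pairs, threshold):
    """pairs: list of (similarity_score, is_genuine) tuples."""
    tp = sum(1 for s, genuine in pairs if s >= threshold and genuine)
    fp = sum(1 for s, genuine in pairs if s >= threshold and not genuine)
    fn = sum(1 for s, genuine in pairs if s < threshold and genuine)
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 1.0
    return precision, recall

# Hypothetical similarity scores from a face matcher (higher = more alike).
pairs = [(0.95, True), (0.91, True), (0.88, True), (0.72, True),
         (0.81, False), (0.60, False), (0.55, False), (0.30, False)]

# ATM-style setting: precision matters most, so use a strict threshold.
print("strict  ", precision_recall(pairs, threshold=0.85))   # (1.0, 0.75)
# Blacklist-style setting: recall matters most, so use a looser threshold.
print("lenient ", precision_recall(pairs, threshold=0.65))   # (0.8, 1.0)
```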

Yitu has rolled out a series of applications for its facial recognition algorithms. Included in the company's product portfolio: an “intelligence workplace” solution for tracking activity in offices and offering real-time alerts for individuals; customized shopping experiences that use your facial data to keep track of metrics such as your consumer behavior and where you like to go in a mall; AI-assisted diagnosis via medical imaging; and a cloud-based smart city solution that monitors and adjusts traffic flow based on various road conditions. “If there's an accident we can redirect traffic just by changing the traffic lights,” Wu said, offering one example. “The traffic light and traffic cameras are so close physically, yet so far apart virtually, so we have to connect them somehow.” In testing, Yitu says it has been able to increase traffic speeds by 11 percent.

A conceptual video demonstrating Yitu's use of facial recognition AI to customize the shopping mall experience (source: Yitu Technology)

Chinese interest in facial recognition technology has exploded in recent years, driven by concerns over public safety and security. A recent Washington Post feature discussed China's efforts to create an “omnipresent video surveillance network to track where people are and what they're up to” by leveraging everything from ATMs to surveillance and traffic cameras. Railway police in China have already been able to apprehend criminals hiding in plain sight thanks to smart glasses with built-in facial recognition technology. In 2017, Chinese authorities reported dozens of criminals, including a man who had been wanted for 10 years, were caught at a beer festival in Qingdao, China thanks to facial recognition cameras.

Putting aside law enforcement applications, the idea of ubiquitous facial recognition still naturally raises privacy concerns. But Wu said the right amount of education should assuage these concerns. “First of all you need government and agencies to regulate for sure,” he said. “Laws are always lagging behind tech, so you need to educate regulators and the general public. In terms of privacy, facial recognition is not automatically going to affect privacy because we can recognize you as a regular customer without knowing your identity.”

Tracking faces, but not attaching faces to personal data? That notion may be a hard sell for those of us who feel our face is inextricably linked to our identity. But Wu offered another way of looking at it. “Another way to see it is you are already walking around in public domains; it's just that before the technology it was harder for people to track you or know who you are. If you are going to the same store very often, for example, the employees will recognize you. Is that a violation of your privacy? Hardly.

“It's more about making it clear what privacy means and what data is not allowed to be transmitted. Like where is your privacy expected? If you walk into McDonald's is your privacy in any way violated? I don't know. And look at what you're doing online. You're already doing a lot of stuff that Facebook and Google are tracking.”

Yitu is currently expanding its offerings in Southeast Asia, the Middle East, Africa, and Europe, and is also looking into U.S. markets where the company sees business opportunities. Each region, of course, brings its own challenges, not only in terms of social concerns and expectations but also government regulations. Wu, however, does not see these as roadblocks, merely as opportunities to educate the public on the potential benefits of the technology. “You have to make people realize and understand what privacy is and what to expect in different scenarios,” he said. “Then they will go, 'I'm walking in there and I know what's going to happen, and I don't feel like I'm being watched without my knowing.' ”

Chris Wiltz is a Senior Editor at Design News covering emerging technologies including AI, VR/AR, and robotics.

 
One of the major hassles of Deep Learning is the need to fully retrain the network on the server every time new data becomes available in order to preserve the previous knowledge. This is called "catastrophic forgetting," and it severely impairs the ability to develop a truly autonomous AI (artificial intelligence). This problem is solved by simply training on the fly — learning new objects without having to retrain on the old. Join Neurala’s Anatoly Gorshechnikov at ESC Boston, Wednesday, April 18, at 1 pm, where he will discuss how state-of-the-art accuracy, as well as real-time performance suitable for deployment of AI directly on the edge, moves AI out of the server room and into the hands of consumers, allowing for technology that mimics the human brain.
 

Nickel Is the New Key to Recycling CO2 Emissions

Researchers at Harvard, Stanford, and Brookhaven National Lab have discovered a new nickel-based catalyst that marks a major step in the quest to recycle carbon dioxide into useful industrial chemicals, plastics, and fuels. The resulting catalyst is not only far more economical than anything made previously, it is also highly efficient. Their paper, recently published in the journal Energy & Environmental Science, reports a 97 percent conversion efficiency.

The scientific consensus on climate change indicates that it won't be possible to meet the goals laid out in the Paris Agreement without a significant operational capability to actively remove carbon dioxide from the atmosphere as a means of restoring balance to the carbon cycle.

Charge density distribution of the Ni single atoms confined in graphene vacancies. (image source: Brookhaven National Laboratory)

A number of diverse efforts are underway in the realms of forestry and agriculture, as well as in industrial direct air capture systems that can extract carbon dioxide from the air anywhere. However, while CO2 plays a role as an important industrial chemical, the anticipated demand for it is far smaller than what needs to be extracted to stabilize the environment.

That leads to a question: What else can be done with the excess CO2?

Scientists have long known that carbon dioxide’s dangerous cousin, CO, or carbon monoxide, is a far more useful chemical, since it can be reacted with water to produce hydrogen or readily combined with hydrogen to produce any number of hydrocarbon products ranging from plastics to fuels such as methanol, ethanol, and diesel. But converting the highly stable CO2 molecule to CO by stripping off one of the oxygen atoms has proven difficult and requires expensive catalysts, such as gold or platinum, as well as significant amounts of energy.

But a team of scientists has found a far more affordable catalyst, nickel, to be very effective when used in single-atom form.
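
For readers who want to anchor the 97 percent figure quoted above, here is a rough back-of-the-envelope sketch. It assumes the conversion is run electrochemically (CO2 + 2H+ + 2e- -> CO + H2O), as is typical for single-atom nickel catalysts on nitrogen-doped graphene, and it treats the 97 percent as a Faradaic efficiency; both points are assumptions here, and the charge and CO amounts in the example are invented purely for illustration.

```python
# Back-of-the-envelope sketch (assumptions noted above): if the CO2-to-CO
# conversion is electrochemical, its Faradaic efficiency is the fraction of the
# charge passed that actually went into making CO (2 electrons per CO molecule).

F = 96485.0           # Faraday constant, C per mol of electrons
ELECTRONS_PER_CO = 2  # CO2 + 2 H+ + 2 e-  ->  CO + H2O

def faradaic_efficiency(mol_co, charge_coulombs):
    return mol_co * ELECTRONS_PER_CO * F / charge_coulombs

# Invented example numbers: 1.8 C of charge producing ~9.05 micromol of CO.
print(f"{faradaic_efficiency(9.05e-6, 1.8):.0%}")   # ~97%
```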

Klaus Attenkofer, Program Manager of Hard X-ray Spectroscopy at Brookhaven’s National Synchrotron Light Source II, was part of the group behind this breakthrough. Attenkofer’s team was primarily involved in characterization, utilizing a combination of X-ray absorption spectroscopy and scanning transmission electron microscopy that allowed the researchers to visualize and measure the performance of the reactions at an atomic scale.

Attenkofer told Design News the new set of capabilities to both visualize and manipulate matter at these scales has led to what he calls “rational design.” What this refers to is the idea that, while catalysts needed to be found in the past, today they can essentially be built.

The catalyst built by this team gets its potency from the interaction between the individual nickel atoms and the surface to which they are attached. Stabilizing the atoms on the surface, which in this case is a graphene layer, was one of the key challenges.

To achieve this, the graphene layer is doped with nitrogen, which essentially punches a hole in the layer, displacing carbon atoms in the process. Once a nitrogen atom is in there, it provides a place for the nickel atom to attach. It’s important to note that the nickel atom is not embedded in the graphene plane, but is suspended above it, providing better interaction with the carbon dioxide. The bond is strong enough so that it cannot be thermally disturbed.

While the optimization of the reactivity could be done in an ordinary lab through experimentation, the visualization and characterization provided by Attenkofer’s team allows for a detailed understanding of what is actually going on in there. Without that insight, it was possible, said Attenkofer, “that nanoparticles could actually be the catalyst.”

The imaging capability allowed the role of the nickel to be verified. But that was not straightforward either. It’s not simply a matter of watching individual atoms to see what they do. Not every atom participates in the reaction. It’s more of a statistical process involving nearest-neighbor distance, which can be used to identify the atoms. By looking at the absorption spectrum to see which wavelengths are being absorbed, chemical activity can be interpreted with the assistance of a model.

RP Siegel, PE, has a master's degree in mechanical engineering and worked for 20 years in R&D at Xerox Corp. An inventor with 50 patents, and now a full-time writer, RP finds his primary interest at the intersection of technology and society. His work has appeared in multiple consumer and industry outlets, and he also co-authored the eco-thriller Vapor Trails.

ESC, Embedded Systems Conference: Today's Insights. Tomorrow's Technologies.
ESC returns to Boston, April 18-19, 2018, with a fresh, in-depth, two-day educational program designed specifically for the needs of today's embedded systems professionals. With four comprehensive tracks, new technical tutorials, and a host of top engineering talent on stage, you'll get the specialized training you need to create competitive embedded products. Get hands-on in the classroom and speak directly to the engineers and developers who can help you work faster, cheaper, and smarter.

Nissan Shines New Light on Battery Reuse

Working with 4R Energy, Nissan has joined with the town of Namie in Japan to install new street lights that are powered by reconfigured Leaf electric vehicle battery cells. The Leaf cells are charged during the day by solar panels to provide completely off-the-grid outdoor lighting. The coastal town is near Fukushima, an area that was devastated by the 2011 earthquake and tsunami.

The Nissan Leaf is the most popular electric vehicle in the world, with more than 300,000 sold. The Leaf went on sale in 2010 and is powered by a lithium ion battery with an expected life of 5-8 years. That means that the batteries of some of those early Leaf vehicles are reaching the end of their useful life as transportation power. But batteries that are too depleted for use in an EV still retain about 80% of their capacity. They have the potential to be repurposed for other applications needing reliable, rechargeable energy storage.
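
For a rough sense of scale, the sketch below applies that 80 percent figure to a first-generation Leaf pack; the 24 kWh pack size is an assumption (early Leafs shipped with 24 kWh packs, later cars carry more), not a figure from Nissan or 4R Energy.

```python
# Rough illustration: usable energy left in a retired EV pack for stationary use.
# The 24 kWh pack size is an assumption (early-model Leaf); the 80% retention
# figure is the one cited in the article.

nominal_kwh = 24.0   # assumed first-generation Leaf pack
retention = 0.80     # capacity remaining at end of automotive life

second_life_kwh = nominal_kwh * retention
print(f"~{second_life_kwh:.0f} kWh still available for storage applications")  # ~19 kWh
```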

In 2010, Nissan began a joint venture with Sumitomo Corporation in Japan to establish 4R Energy. The company was created to explore and develop applications for used Nissan Leaf lithium ion batteries, particularly in the stationary energy storage market.

The Reborn Light uses solar panels and a light source mounted atop a pole, and used Nissan Leaf batteries in its base to provide off-the-grid lighting (Image source: Nissan)
The new Nissan streetlight project, which is called “The Reborn Light,” features a solar panel and light source placed atop a 4.2-meter (13.8-ft.) pole. The recycled Nissan Leaf battery cells are located in the base of the pole. A Nissan corporate website notes that 17% of the world’s population lives without electricity and states, “Thanks to high-performance battery, we can … bring light to places that have never been lit before.”

Nissan is not the only car company that is looking at alternative applications for its electric battery expertise. Tesla’s Powerwall and Powerpack are already providing utility-scale grid storage in applications in the U.S. and Australia. Daimler, maker of Mercedes-Benz and Smart automobiles, in its 2016 Annual Report, outlined the use of its battery technology for stationary energy-storage devices.

BMW has also constructed several large-scale stationary energy storage systems using reclaimed lithium ion batteries from its i3 and i8 models. In an October 26, 2017 press release from the BMW Group, Harald Krüger, Chairman of the Board of Management of BMW AG, stated, “In the interests of sustainability, today we are also presenting a concept for a second use of BMW i3 high-voltage batteries. With our Strategy NUMBER ONE > NEXT, we are looking far beyond the car itself and driving change in our industry with totally new approaches and business models.”

In addition to providing innovative lighting to the town, Nissan has located a new battery reclamation facility in Namie as a joint venture with Sumitomo and run by 4R Energy. The plant will refabricate old Leaf battery packs as the start of a new Nissan battery exchange program for owners of older Leafs in Japan. The price of the refurbished pack to customers is less than half the cost of a brand-new Nissan Leaf pack.

Senior Editor Kevin Clemens has been writing about energy, automotive, and transportation topics for more than 30 years. He has set several world land speed records on electric motorcycles that he built in his workshop.

Related articles:

Adding Resilience to Fragile Power Systems


The Nvidia DGX-2 Is the World's Largest GPU, and It's Made for AI

No entity has been more invested in applying GPUs to artificial intelligence than Nvidia. Now the chipmaker, traditionally known as a graphics processor company, has made a definitive statement about its pivot into an AI and enterprise hardware manufacturer with the announcement of the DGX-2, the “largest GPU ever created.”

Prior to the DGX-2 announcement, Nvidia had been positioning the previous model, the DGX-1 (released only six months ago), for deep learning applications as well. When it was first released, Nvidia touted it as the platform that would help leapfrog autonomous vehicles to full Level 5 autonomy (no steering wheel and no need for a human driver). While the DGX-1 delivered up to 1,000 teraflops of performance, the DGX-2 is a 350-lb powerhouse boasting 2 petaflops (for those doing the math, that's 2 quadrillion floating point operations per second) and 512 GB of second-generation high-bandwidth memory (HBM2).

The DGX-2 weighs 350 lbs and boasts up to 2 petaflops of processing performance for deep learning. (image source: Nvidia)

But this new platform won't be used to deliver the next-generation of photorealistic graphics for video games. The DGX-2 has its sights set on deep learning. “The extraordinary advances of deep learning only hint at what is still to come,” Jensen Huang, founder and CEO of Nvidia, said during his keynote at the GTC Conference, where the DGX-2 was unveiled. “We are dramatically enhancing our platform’s performance at a pace far exceeding Moore’s law, enabling breakthroughs that will help revolutionize healthcare, transportation, science, exploration and countless other areas.”

Nvidia wants to position the DGX-2 to engineers as a ready-to-go solution for scaling up AI, allowing them to easily build large, enterprise-grade deep learning computing infrastructures. According to specs from the company, the DGX-2 has the processing power of 300 dual-CPU servers, but takes up only 1/60th of the space, consumes 1/18th of the power (10 kW), and comes at 1/8th of the cost (the DGX-2 will retail for $399,000).

When Huang calls the DGX-2 the “world's largest GPU,” he's not speaking in purely technical terms, however. The DGX-2 is really a system that combines an array of GPU boards with high-speed interconnect switches and two Intel Xeon Platinum CPUs. Under the hood, the DGX-2 combines 16 Nvidia Tesla V100 GPUs. Each Tesla V100 delivers 100 teraflops of deep learning performance, according to Nvidia. Adding to this, the GPUs are connected via a new interconnect fabric Nvidia calls NVSwitch. Huang said NVSwitch allows the GPUs to communicate with each other simultaneously at a speed of 2.4 terabytes per second. “[About] 1440 movies could be transferred across this switch in one second,” Huang said.

A big part of the DGX-2's performance is owed to NVSwitch, a new communication architecture that allows GPUs to communicate at speeds up to 2.4 terabytes per second. (image source: Nvidia)

Huang also discussed benchmark tests Nvidia had conducted pitting the DGX-2 against its predecessor. When the DGX-1 was released in December 2017, it broke records by training Fairseq, Facebook's neural network for language translation, in 15 days. The DGX-2 can fully train Fairseq in a day and a half, according to Nvidia. Similarly, it took two GeForce GTX 580 GPUs (released in 2012) six days of working together to train AlexNet, a well-known image recognition neural network. The DGX-2, by comparison, can train AlexNet in about 16 minutes. “That's 500 times faster processing in only five years,” Huang said.
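
Those training times imply the speedups Huang cited; the quick arithmetic check below uses only the figures quoted in this article, with the rounding being ours.

```python
# Quick sanity check of the speedups quoted for the DGX-2 (rounding is ours).

fairseq_dgx1_days = 15.0
fairseq_dgx2_days = 1.5
print(f"Fairseq: {fairseq_dgx1_days / fairseq_dgx2_days:.0f}x faster")          # 10x

alexnet_gtx580_minutes = 6 * 24 * 60   # six days on two GeForce GTX 580s
alexnet_dgx2_minutes = 16
print(f"AlexNet: {alexnet_gtx580_minutes / alexnet_dgx2_minutes:.0f}x faster")  # ~540x, i.e. "500 times" after rounding
```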

The DGX-2 is currently available for order, but given its hefty price tag, it's unlikely that entities outside of big hitters in the AI space, like Google, will invest in it in the short term. In conjunction with the DGX-2's release, Nvidia has also updated its deep learning and high-performance computing software stack at no charge to its developer community. The updates include new versions of NVIDIA CUDA, TensorRT, NCCL, and cuDNN, and a new Isaac software developer kit for robotics.

Chris Wiltz is a Senior Editor at Design News covering emerging technologies including AI, VR/AR, and robotics.


Will the Supply of Lithium Meet Battery Demands?

According to the U.S. Geological Survey (USGS) Mineral Commodity Summaries for 2018, “Lithium supply security has become a top priority for technology companies in the United States and Asia.” The USGS went on to say, “Strategic alliances and joint ventures among technology companies and exploration companies continued to be established to ensure a reliable, diversified supply of lithium for battery suppliers and vehicle manufacturers.” Recently, car companies including BMW, Volkswagen, Nissan, Toyota, Tesla, BYD, Dongfeng, and Great Wall have expressed concerns about their future sources of lithium as they move toward battery-powered electric vehicles in the coming decades.

Globally, lithium end-use breaks down as 46% batteries, 27% ceramics and glass, 7% lubricating grease, 5% polymer production, and several other industrial uses, each under 5%. It’s the use in batteries that has grown dramatically, both as a percentage and in outright tonnage. The price for lithium has followed this growth, increasing 61% from 2016 to 2017.

The rapid growth in demand and the accompanying price increases are making auto manufacturers nervous. In January of this year, Toyota Tsusho (a Toyota Group trading company) announced that it would become a 15% shareholder in Orocobre Limited, a mining company with a lithium brine facility in Argentina. Chinese carmaker Great Wall late last year took a stake in Pilbara Minerals, an Australian lithium hard-rock mining company. In February of 2018, Pilbara also made agreements to supply spodumene concentrate to POSCO, a South Korean manufacturer of lithium ion battery materials. BMW was also reported in the German press to be close to signing a ten-year deal that would ensure its supply of lithium.

On March 13, 2018, Volkswagen AG announced in a press release that 16 of its locations around the world would be producing electric vehicles by 2022. The company also announced that partnerships with battery manufacturers for Europe and China had resulted in contracts totaling more than 20 billion euros (approximately $24.7 billion). A decision about North American battery suppliers was expected later.

The push to lock in a supply of lithium is based on the forecast for dramatic growth in the uptake of electric vehicles (currently about 1% of the market in both the U.S. and China). At least one investment bank, Morgan Stanley, however, expects that EV growth won’t meet expectations, resulting in an oversupply of lithium and reduction in the metal’s price. As reported in a recent Design News story, the pace of growth of EV sales in the U.S. could depend on the direction the U.S. government takes on future fuel economy regulations.

The fear of an oversupply and a price crash for lithium was countered in a January research note from Benchmark Mineral Intelligence, which indicated that while lithium feedstock supplies are increasing, demand from automakers has not yet become a significant part of the equation. The research note stated, “Therefore, while this auto speculation aids the upward trajectory of lithium’s price curve, it is not yet the defining factor. Real lithium supply and demand—producers selling to cathode manufacturers—is what is driving this price. The auto majors are yet to enter this ‘real’ market.”

Although much of the focus has been on the auto industry, the use of lithium batteries for grid storage, such as Tesla’s Hornsdale Power Reserve in South Australia, as reported in Design News, also promises to increase demand for lithium in the future. Large-scale projects have been announced at locations around the globe, and battery systems of 100 megawatt-hours or more require an amount of lithium equivalent to that needed for several thousand electric vehicles.

Hornsdale Power Reserve uses a 129 megawatt-hour lithium ion battery produced by Tesla for grid storage (Image source: Tesla)

Long-term, the outlook for lithium is good. According to the USGS, annual consumption of lithium worldwide is about 41,500 tons, while total estimated reserves of the metal are 47 million tons. The USGS estimates that the currently known reserves will last at least until 2100, not including the reclamation of lithium through the eventual recycling of lithium ion batteries. When that lithium finds its way into the transportation system will largely depend on how soon electrification and EVs are embraced by the car-buying public.
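
Those USGS figures imply a long runway at today's consumption rate; the short calculation below works it out, and also shows what a hypothetical sustained 10 percent annual growth in demand (our assumption, not a USGS or Benchmark projection) would do to that runway.

```python
# Years of lithium supply implied by the USGS figures cited above.
reserves_tons = 47_000_000
consumption_tons_per_year = 41_500

print(f"At flat demand: ~{reserves_tons / consumption_tons_per_year:.0f} years")  # ~1,130 years

# With hypothetical 10% annual demand growth (an assumption, not a forecast),
# the runway shrinks to roughly 50 years.
demand, remaining, years = consumption_tons_per_year, reserves_tons, 0
while remaining > demand:
    remaining -= demand
    demand *= 1.10
    years += 1
print(f"At 10% annual growth: ~{years} years")
```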

Where Does Lithium Come From?

Lithium is the 30th most common element, ahead of lead, tin, and silver. The Earth’s continental crust contains about 20 ppm (parts per million), the oceanic crust about 4.3 ppm, and seawater only trace amounts. There are two principal ways that commercial lithium is produced: from minerals via hard-rock mining, and from brines that are pumped to the surface from beneath salt flats.

Lithium minerals are fairly rare, found mostly in fine-grained granites known as lithium-cesium-tantalum (LCT) pegmatites. The most important commercial lithium minerals are spodumene and petalite, both of which are lithium aluminum silicates. Pegmatites contain about 26 percent of the world’s lithium. The main producers of lithium from hard-rock pegmatite mining are Australia, Brazil, China, Portugal, and Zimbabwe. The U.S. has large reserves of lithium in pegmatite in the Kings Mountain district of North Carolina, which are not currently mined. Canada has significant reserves in Quebec and other provinces, which are currently being explored with an eye toward starting lithium mining.

Lithium minerals are extracted using open-pit mining; spodumene is the most common ore, with lithium concentrations as high as 4.8% (in Australia). Lumps of granite are crushed, milled, and sent to flotation cells, where the minerals are separated. The spodumene is finally concentrated to a level of about 5% to 7%, which is then sent on for further chemical processing to produce lithium carbonate. Most of this processing takes place in China.

The other way to commercially produce lithium is from brines. Brines contain about 58 percent of the world’s lithium sources and are located primarily in North and South America and Asia. The deposits tend to be in arid areas where a closed basin containing a salt lake or salt flat has formed. The regional climate must be dry enough that surface water (mostly snowmelt) can flow in but not out. The inflowing water brings minerals such as dissolved calcium, magnesium, potassium, sodium, and lithium from the surrounding area. The lake often dries out completely, leaving a salt flat covered by evaporated minerals, rock salt being the best known of these. As the water evaporates, lithium acts differently than the other dissolved minerals. Because it is more soluble, it remains in solution and becomes concentrated as it sinks into the underlying aquifer brine, which can be up to several hundred meters below the salt surface. The brine is pumped to the surface and placed in shallow evaporation ponds.

It can take two to three years of pumping the brine from one evaporation pond to another before the lithium is enriched to as high as 5,000 ppm (about 0.5%). When it reaches this concentration, it is pumped to a chemical plant where lithium carbonate and lithium hydroxide are produced. The first brine production of lithium was in California in 1938; today the majority of lithium from brines comes from Argentina, Chile, and China, with U.S. production coming from a facility in Nevada.

With either method of extracting the raw material, conversion of feedstocks to battery-grade lithium carbonate has proven to be a bottleneck. The two leading processing companies, Ganfeng Lithium and Sichuan Tianqi, are both increasing their production capacity, in China and abroad. Benchmark Mineral Intelligence, in a January 2018 research note on lithium, stated, “Spodumene conversion capacity in China will have the greatest bearing on lithium supply and price in 2018.”

Related Articles:

Adding Resilience to Fragile Power Systems

If Trump Pulls Back on 54.5-MPG Mandate, EV Sales Could Stall

Senior Editor Kevin Clemens has been writing about energy, automotive, and transportation topics for more than 30 years. He has set several world land speed records on electric motorcycles that he built in his workshop.


Software Key Enabler for IIoT Applications


For the Industrial Internet of Things (IIoT) to achieve its almost unlimited potential, a massive hurdle must be cleared: transforming the data collected at the edge of the network into actionable information. It’s also clear that the key is software, and lots of it, software that is not only easy to use but also relatively inexpensive to implement and able to exploit the ongoing expansion of powerful new hardware platforms.

Software is King

“The IIoT is all about access to data, lots of data,” Travis Cox, co-director of sales engineering for Inductive Automation, told Design News recently. “And with so much data coming from so many devices, we really need cost-effective approaches to collecting and using that data. Hardware that comes with software embedded in it provides more benefits to customers. They get more for their money.

Opto 22’s new groov EPIC programmable industrial controller comes with Ignition Edge from Inductive Automation for OPC UA drivers and MQTT/Sparkplug communications, along with drivers for Allen-Bradley, Siemens, and more. (Source: Opto 22)

“Pre-installed software can perform supervisory control and data acquisition (SCADA), human-machine interface (HMI), alarming, protocol conversion, and numerous other functions. Even protocols such as MQTT are being embedded in devices,” Cox added.

IIoT Software Trends

One software approach that is gaining traction in IIoT applications is the use of embedded software in standard industrial control products. An example is device manufacturers embedding Ignition and Ignition Edge software in the devices they manufacture. Ignition is an industrial application platform developed by Inductive Automation that offers tools for building solutions for human-machine interfaces (HMI), supervisory control and data acquisition (SCADA), and the Industrial Internet of Things (IIoT).

Ignition Edge is a line of lightweight, limited, low-cost software products designed for edge-of-network use. Companies including Opto 22, Advantech B+B SmartWorx, Hilscher, Moxa, and EZAutomation are putting Ignition Edge on products. KEBA, Brown Engineers, Azul Systems, Nexoforge, and Tyrion Integration are embedding the full version of Ignition.

Cox said that, in today’s competitive IIoT landscape, device manufacturers are doing everything they can to add real value to their devices. Providing pre-installed software is a great example of that. With the best software and hardware companies working together, better products are getting to market more quickly.

“With the software embedded, the hardware can fulfill its primary function and do other things for you as well. It can enable local HMIs, convert data, capture brownfield data, or accomplish a number of other tasks. With the ability to do many things, these devices can simplify your architecture,” he said.

“Device manufacturers are experts at what they do,” Cox added. “They make hardware, and they’re good at it. But if they want to add software to their devices, they get the greatest benefit if they work with a strong software company. By embedding the product of a software company, they’re getting the best of both worlds. They can provide customers with best-in-class hardware and equally strong software. If they tried to make the software themselves, it would be much too expensive, time-consuming, and difficult.”

For the device manufacturers, embedded software provides additional capabilities. For example, powerful software can get a hardware device to talk to other devices it could never connect to before, because it simply didn’t have the drivers. Software can open up these possibilities via new applications for data historians, alarming, HMI, visualization, MQTT, and more.
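
As a concrete (and deliberately minimal) illustration of the kind of edge-to-broker traffic Cox is describing, the sketch below publishes a single sensor reading over plain MQTT using the Eclipse Paho client. It is not Ignition Edge code; the broker address, topic, and payload fields are placeholders, and a real Ignition/Sparkplug deployment would use the Sparkplug B topic namespace and protobuf-encoded payloads rather than ad-hoc JSON.

```python
# Minimal edge-device publish over MQTT (illustrative; not Ignition Edge itself).
# Broker address, topic, and payload fields are placeholders.
import json
import time

import paho.mqtt.client as mqtt

client = mqtt.Client("edge-gateway-01")      # paho-mqtt 1.x constructor style
client.connect("broker.example.com", 1883)   # placeholder broker
client.loop_start()

reading = {
    "device": "pump-7",
    "metric": "discharge_pressure_psi",
    "value": 87.4,
    "timestamp": time.time(),
}
client.publish("plant/line1/pump-7/pressure", json.dumps(reading), qos=1)

client.loop_stop()
client.disconnect()
```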

Edge Software Evolving to Meet IIoT Trends

Cox said that the biggest evolution of the software is the move toward open standards and interoperability. Another is the trend toward becoming a platform, enabling customers to address a variety of applications and effectively scale systems.

“It’s not about a one-and-done project. Today’s software lets you expand, talk to more devices, and achieve more goals,” Cox said. “Maybe one day you’ll want to send your data to the cloud. You don’t want to have to put in a whole new system to enable that. You want software today that will help you make changes in the future.”

Another important evolution is the increasing interest in unlimited licensing for software. Customers don’t want to pay extra costs whenever they want to expand their systems. Unlimited licensing allows you to grow however you want. It helps you build your dream projects — projects that wouldn’t be affordable under the traditional licensing model.

To learn more about Ignition Edge, visit the Inductive Automation website: https://inductiveautomation.com/ignition/edge/onboard

Al Presher is a veteran contributing writer for Design News, covering automation and control, motion control, power transmission, robotics, and fluid power.

As the Internet of Things (IoT) pushes automation to new heights, people will perform fewer and fewer “simple tasks.” Does that mean the demand for highly technical employees will increase as the need for less-technical employees decreases? What will be the immediate and long-term effects on the overall job market? What about our privacy, and is the IoT secure? These are loaded questions, but ones that are asked often. Cees Links, wireless pioneer, entrepreneur, and general manager of the Wireless Connectivity business unit at Qorvo, will address these questions, as well as expectations for IoT’s impact on society, in this ESC Boston 2018 keynote presentation, Thursday, April 19, at 1 pm. Use the Code DESIGNNEWS to save 20% when you register for the two-day conference today!

Engineering Deep Learning Hardware at the University Level

Engineering Deep Learning Hardware at the University Level

In addition to conducting its own research into AI-specific processors, MIT is introducing coursework to train the next generation of engineers in building hardware for deep learning and AI applications.

Traditional hardware architecture for deep learning (DL) has consisted of CPUs, graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and ASICs. Recent improvements in FPGAs' digital processing efficiency have been used to increase the computational throughput of deep learning-based computers and hardware systems. But Vivienne Sze, an associate professor of electrical engineering at MIT, says that although these hardware units have been used to develop deep learning-based algorithms and applications, their efficiency in performing such n-dimensional analysis has been subpar.

Vivienne Sze and Joel Emer discussing hardware architecture design for an MIT deep learning class. (image source: Little Pauqette / MIT School of Engineering)

Performing powerful artificial intelligence (AI) tasks requires energy-efficient chips. And it raises a fundamental question that Sze, along with Joel Emer, a senior research scientist at Nvidia and professor of electrical engineering at MIT, and their team of MIT researchers have concerned themselves with for years: How can you write algorithms that map well onto hardware so they can run faster?

In 2016, their research culminated in the development of a new chip, called Eyeriss, that the team says is optimized for neural network computing. According to a research paper on Eyeriss published in the IEEE Journal of Solid-State Circuits, the chip is powerful and energy-efficient enough to allow sophisticated artificial intelligence applications to run locally on mobile devices.

The Eyeriss chip is a state-of-the-art accelerator for deep learning convolutional neural networks (CNNs), a class of neural network applied to analyzing visual imagery. The chip is optimized to operate as part of a complete DL system, including off-chip DRAM, across a variety of CNN-based architectures. AI systems make heavy use of CNNs, so the data throughput and energy efficiency of the hardware that runs them matter. Large datasets require significant computational energy to process, and moving data between on-chip and off-chip memory is costly; such processing and data movement consume considerable energy on traditional CPU-based chips.

According to the MIT researchers, Eyeriss uses a dataflow technique called Row Stationary (RS) processing to achieve the chip's low power consumption and high throughput. The spatial architecture, with 168 processing elements, uses RS to reconfigure how the computation is mapped onto the hardware, improving energy efficiency by reusing local data and thereby reducing both data movement inside the chip and accesses to off-chip DRAM.
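
The row-stationary idea is easiest to see in terms of how often each piece of data gets touched during a convolution. The toy sketch below simply counts input-activation reuse in a 1-D convolution; it is not a model of Eyeriss itself, but it shows why keeping rows of weights and activations stationary in a processing element's local memory pays off: each value is needed several times, and every reuse served locally is a DRAM access avoided.

```python
# Toy illustration of data reuse in convolution (not a model of Eyeriss).
# Each input activation is needed by several output positions, so holding it
# in a PE's local register file avoids repeated off-chip DRAM reads.
from collections import Counter

def conv1d_with_reuse_count(inputs, weights):
    reads = Counter()                 # how many times each input index is read
    outputs = []
    k = len(weights)
    for out_idx in range(len(inputs) - k + 1):
        acc = 0.0
        for tap in range(k):
            reads[out_idx + tap] += 1
            acc += inputs[out_idx + tap] * weights[tap]
        outputs.append(acc)
    return outputs, reads

inputs = [float(i) for i in range(8)]
weights = [0.25, 0.5, 0.25]
_, reads = conv1d_with_reuse_count(inputs, weights)
print(reads)   # interior inputs are read 3 times each: reuse a local buffer can capture
```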

Deep learning is a subset of the branch of computer science called machine learning. DL specifically draws inspiration from the function and structure of the biological brain to create groups of algorithms called artificial neural networks. The idea is to develop learning algorithms that are both efficient and easy to use.

A diagram of the Eyeriss chip. The chip, designed by researchers at MIT, has the power and energy efficiency to potentially enable mobile devices to run deep learning applications. (image source: IEEE Journal of Solid-State Circuits)

DL relies on learned data representations rather than task-specific algorithms. Learning data representations allows a system to automatically discover, from raw data, the features it needs to detect and the classifications it needs to make. The two main methods of learning are supervised (using labeled input data) and unsupervised (using unlabeled input data, as in clustering). DL models draw primarily on the information-processing and communication patterns found in biological nervous systems. A practical application of DL is the classification of text, documents, and images for discovering or mining information on websites.

Development of chip architectures targeted directly at AI processing has ballooned in recent years, driven by the demand for AI applications in everything from manufacturing to healthcare to entertainment and even retail. Companies like Nvidia, Microsoft, and Google have entered into a hotly contested battle over who can provide the best hardware for running deep learning and other AI algorithms. For Nvidia, the answer lies in high-powered GPUs. Microsoft is looking to leverage FPGAs, while Google is working with a whole new processor of its own development called the Tensor Processing Unit (TPU). Earlier this year, chipmaking giant Intel got into the game by announcing it had developed a prototype of a “neuromorphic” chip, called Loihi, that will allow devices to perform advanced deep learning processing at the edge.

More and more institutions are exploring deep learning hardware at the university level as well. In 2017 Sze and Emer began teaching a course at MIT, “Hardware Architecture for Deep Learning.” Regarding the goals of the course, Sze told MIT News, “The goal of the class is to teach students the interplay between two traditionally separate disciplines...How can you write algorithms that map well onto hardware so they can run faster? And how can you design hardware to better support the algorithm? It’s one thing to design algorithms, but to deploy them in the real world you have to consider speed and energy consumption.”
Sze and Emer have also written a tutorial/journal article on building hardware for DL that provides an in-depth discussion and analysis of creating such computation devices.

Don is a passionate teacher of electronics technology and an electrical engineer with 26 years of industrial experience. He has worked on industrial robotics systems, automotive electronic modules and systems, and embedded wireless controls for small consumer appliances. He's currently developing 21st century educational products focusing on the Internet of Things for makers, engineers, technicians, and educators. He is also a Certified Electronics Technician with ETA International and a book author.


PTC Releases Creo 5.0, Upgrading for Optimization and 3D Printing


PTC has announced the release of an upgrade to its Creo CAD design package. Creo 5.0 adds a number of functions, including topology optimization, which suggests improved design possibilities for objects; additive and subtractive print integration, which allows users to prepare parts for printing without leaving Creo; flow analysis, which gives the user a quick look at fluid analysis while inside Creo; and a number of productivity enhancements created to speed the design process.

Here’s a screenshot of a 3D drill with volume helical sweep to calculate accurate geometry for a grinding wheel scenario. (Source: PTC)

Many of the moves to improve Creo came directly from customer requests. “We made a plethora of advancements. From a user standpoint, we made significant productivity and usability enhancements to improve workflows,” Paul Sagar, VP of product management for CAD at PTC, told Design News. “We also focused on additive manufacturing. Additive is becoming more prevalent. A lot of customers have requested design function for additive capabilities. It’s also an element of where the market and industry are going.”

Topology Optimization

The Creo Topology Optimization Extension is designed to automatically create optimized designs based on a defined set of objectives and constraints, unfettered by existing designs and conventional thought processes. The goal is to help users save time and accelerate innovation by enabling creation of optimized and efficient parts.

Topology optimization is a new addition to Creo. “Topology optimization has been around, but one of the significant challenges with any topology optimization is that the end result is a bunch of data – it’s not CAD data,” said Sagar. “The secret sauce in Creo is the ability to reconstruct the topology optimization as a CAD model inside the CAD environment.”

Additive and Subtractive Manufacturing

Another feature of Creo 5.0 is the ability to use additive and subtractive print technology without the need for multiple pieces of software. The Creo Additive Manufacturing Plus Extension was created in part with Materialise to extend 3D print capabilities to metal parts. The goal of the extension is to let users print production-grade parts directly from Creo. Users can connect to the Materialise online library of print drivers and profiles. “We’re providing direct integration with 3D printers,” said Sagar. “Just choose a printer, optimize the orientation, do the print checks, and send it to the printer. We expanded the list of supported software and hardware.”

As part of the effort to integrate 3D metal printing, PTC partnered with Materialise. “With the Materialise integration, you can connect to metal printers and their support structures,” said Sagar. “The support structures are important when talking about metal printing. The support can make a big difference.”

On the subtractive side, the Creo Mold Machining extension provides dedicated high-speed machining capabilities optimized for molds, dies, electrodes, and prototype machining. Creo 5.0 supports 3-axis and 3+2 positioning machining. “In subtractive, the goal was to generate the optimum tool path as fast as possible,” said Sagar. “Our customers want to quickly produce a template-driven part that is optimized for mold machining. We’ve introduced new dedicated high-speed mold machining for 3-axis tool designs.”

Creo Flow Analysis

The Creo Flow Analysis extension is a computational fluid dynamics (CFD) tool that lets users simulate fluid flow issues directly within Creo. The seamless workflow between CAD and CFD integrates analysis early and often to demonstrate product function and performance. The software is directly integrated within Creo with the goal of providing accurate and fast results.

Flow analysis is a new feature for Creo. In the past, Creo has connected out to flow-analysis vendors. “We haven’t had flow analysis in the past. We’ve used third-party add-ons,” said Sagar. “The flow analysis comes from customer requests. They want to be able to perform flow analysis in Creo. We provided thermal and structure analysis, and now flow. One of the biggest challenges was constructing the volume that the fluid is going to flow through in an assembly of pumps. Our flow solution does that for you automatically, speeding up the analysis time and effort.”

Productivity Improvements

The Creo upgrade offers a number of productivity enhancements, including an improved user interface, geometry creation with sketch regions, and volume helical sweeps. Surfacing and sheet metal design have also been upgraded, as well as the application of draft features involving rounds. Users can also now design in Creo while maintaining perspective display mode. 

Rob Spiegel has covered automation and control for 17 years, 15 of them for Design News. Other topics he has covered include supply chain technology, alternative energy, and cyber security. For 10 years, he was owner and publisher of the food magazine Chile Pepper.


Proton Battery Could Offer Lithium Ion Alternative

A traditional proton exchange membrane (PEM) hydrogen fuel cell uses platinum catalysts to combine gaseous hydrogen and oxygen from the atmosphere to produce electricity, water vapor, and heat. Producing, transporting, and storing hydrogen gas, however, have proven to be speed bumps on the road to adoption of fuel cells in the mainstream transportation sector.

When charging, electricity applied to a catalyst breaks down water into hydrogen ions (protons) and oxygen. The protons pass through a membrane and are stored in the hydrogen storage electrode. In discharge mode, the hydrogen ions are released from storage, pass back through the membrane, and combine with oxygen from the atmosphere to generate electricity and water vapor, as in a standard fuel cell. (Image source: RMIT University)

RMIT University in Melbourne, Australia, has announced a new “proton battery” that eliminates those speed bumps by producing hydrogen ions (using catalysts) from water and electricity inside the battery and storing them in a special electrode during the charging process. During discharging, the hydrogen ions are released from the storage electrode and combined with oxygen to form water and generate electricity. The new experimental proton battery has achieved an energy density on par with today’s commercially available lithium ion batteries.

In 2014, RMIT Professor John Andrews had proposed a new kind of battery that was part hydrogen fuel cell and part traditional battery. Andrews called his invention a “proton flow battery” and built a working prototype that proved the concept. But the metal hydrides he used for storage suffered from low rechargeability and contained rare-earth elements that made them both heavy and expensive.

In the latest version, now called a proton battery, a porous activated-carbon electrode made from a phenolic resin replaces the metal hydride from the 2014 version as the hydrogen storage electrode—the new version is cheaper, lighter, and performs better.

The latest research work was published in the International Journal of Hydrogen Energy, where it was reported that the new carbon electrode was able to store nearly 1% hydrogen by weight in charge mode and release 0.8% by weight during the fuel cell electricity supply mode. The voltage produced was 1.2 volts.
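
Those two figures (0.8 wt% hydrogen released and a 1.2 V cell voltage) are enough for a rough, electrode-material-only estimate of specific energy. The calculation below is ours, not the researchers'; it ignores the mass of everything except the storage electrode, so it overstates what a packaged cell would deliver, but it does show why the result is described as on par with lithium ion.

```python
# Rough estimate of electrode-level specific energy from the reported figures
# (0.8 wt% hydrogen released, 1.2 V cell voltage). Ignores all cell overhead
# mass, so a packaged battery would come in well below this number.

F = 96485.0              # C per mol of electrons
M_H = 1.008e-3           # kg per mol of hydrogen atoms (one electron each)

h_weight_fraction = 0.008    # 0.8 wt% hydrogen released on discharge
cell_voltage = 1.2           # volts

charge_per_kg = h_weight_fraction / M_H * F      # coulombs per kg of electrode
capacity_ah_per_kg = charge_per_kg / 3600.0
energy_wh_per_kg = capacity_ah_per_kg * cell_voltage

print(f"~{capacity_ah_per_kg:.0f} Ah/kg, ~{energy_wh_per_kg:.0f} Wh/kg (electrode basis)")
```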

“Future work will now focus on further improving performance and energy density through use of atomically thin layered carbon-based materials such as graphene, with the target of a proton battery that is truly competitive with lithium ion batteries firmly in sight,” Andrews said in the RMIT press release.

There are concerns that future production of lithium ion batteries for both electric vehicles and electricity grid storage may be constrained by the availability of lithium and of metals such as cobalt used in their manufacture. Although the proton battery described by RMIT uses small amounts of platinum as a catalyst (as do fuel cells), the remaining materials have a relatively low cost and are readily available. If the proton battery can produce results equivalent to current lithium battery technology, it could be a contender.

At the same time, it can be misleading to judge the potential performance of battery technology from small experimental cells, and commercialization of the proton battery seems a long way off. But the use of lower-cost, readily available carbon to store hydrogen ions is nevertheless a breakthrough and a promising step.

Senior Editor Kevin Clemens has been writing about energy, automotive, and transportation topics for more than 30 years. He has set several world land speed records on electric motorcycles that he built in his workshop.

Related article:

For EV Market to Grow, Investments in Battery Facilities Are Needed


Siemens to Roll Out New Simulation Platform for Self-Driving Vehicles

Siemens today will introduce a computing and simulation platform aimed at speeding the validation and verification of autonomous cars.

The new platform, to be rolled out at a company event in Chicago, could enable automotive engineers to reduce the amount of physical testing that would otherwise need to take place on public highways and test tracks. Siemens engineers say that the platform would allow automakers to simulate billions of test miles and countless scenarios that could take place in real life. “We do believe that in the end, you can account for 99.99999% of everything that can happen on the roads,” Martijn Tideman, director of products for TASS International, a Siemens business, told Design News.

Siemens’ new model-based platform offers a solution to the validation and verification of autonomous cars. (Source: Siemens)

The new platform involves existing technologies from two relatively new Siemens acquisitions. Those include TASS International’s PreScan simulation environment and Mentor Graphics’ DRS360 data fusion platform. PreScan, which has existed for about a decade, is a physics-based simulation platform for developing and validating automated systems. DRS360, meanwhile, is a product that takes raw, unfiltered data from cameras, radar, and Lidar systems and fuses it for subsequent use by a central processor.

In autonomous car applications, PreScan would feed the simulated data to DRS360, essentially as if it were real-world information. “DRS360 doesn’t know if it’s real or virtual because our sensor data is so good,” Tideman said. “So it enables you to test massive numbers of scenarios as if they were real.”

The ability to do that is critical to the auto industry right now, because verification and validation of autonomous cars is such a monumental task. Many industry engineers believe the sensors, actuators, and software are already in place for the creation of a Level 5 car, but they still need to “teach” vehicles how to react to the billions of possible scenarios that could take place on real-world roads. To do so on public highways and test tracks would be impossible, they say.

“There are voices in the industry talking about the need for billions of miles of testing before a full validation cycle is complete,” noted Amin Kashi, director of ADAS/AD for Mentor Graphics, a Siemens company.

Products such as PreScan could change that, however. PreScan would enable developers to change the simulated parameters from, say, urban to rural, dark to light, rainy to sunny, or sleet to snow. “PreScan is able, not only to create a scenario, but also to re-create it for the maximum number of different situations and environmental conditions,” Kashi said.
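
The combinatorics behind those billions of scenarios are easy to see with a toy enumeration. The sketch below is not PreScan's API; the parameter names and values are invented, and the point is only that even a handful of axes multiplied together, before adding traffic behavior and sensor noise, produces a test space no fleet could physically drive.

```python
# Toy enumeration of scenario combinations (not PreScan's API; parameters invented).
from itertools import product

environment = ["urban", "suburban", "rural", "highway"]
lighting    = ["day", "dusk", "night"]
weather     = ["clear", "rain", "fog", "sleet", "snow"]
road_state  = ["dry", "wet", "icy"]
actors      = ["none", "pedestrian", "cyclist", "oncoming_truck", "stalled_car"]

scenarios = list(product(environment, lighting, weather, road_state, actors))
print(f"{len(scenarios)} base scenarios from just five axes")   # 900

# Each base scenario would still be varied over speeds, trajectories, and sensor
# noise, which is where the counts grow into the millions and billions.
```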

Siemens’ end-to-end solution, complete with the ability to feed simulated data to a fusion platform, is believed to be unique in the industry right now. “There are bits and pieces of this out there right now,” Phil Magney, founder and principal advisor for VSI Labs, told Design News. “But no one has tried to stitch together an end-to-end solution for automated vehicles.”

Waymo probably has an in-house solution right now but is not making it commercially available. Nvidia Corp. may also offer a similar solution very soon (possibly as early as this week) but has not released it yet.

The development of such systems is being driven by a dire need for autonomous simulation solutions in the auto industry right now. Experts say that Level 5 autonomous cars may still be a decade or more away, largely because validation and verification of them is unlike anything in the 130-year history of the industry. The need to log billions of physical test miles would only add to the long wait, which is why simulation is so critical, they say.

“The autonomous car changes everything,” Magney said. “It heightens the role that simulation plays.”

If you have a comment, send it directly to [email protected] and we will publish it in a future article. Just keep it concise (100 words or less) and type the words “story comment” in the subject line.

Read More Articles on Automotive Technology

Suppliers Prepare New Products, Processes to Meet 54.5-MPG Standard

NXP Rolls New Development Platform for EVs, Hybrids

GM, Waymo Top Ranking of Autonomous Car Leaders

Senior technical editor Chuck Murray has been writing about technology for 34 years. He joined Design News in 1987, and has covered electronics, automation, fluid power, and automotive.
