Design News is part of the Informa Markets Division of Informa PLC




Articles from November 2016


Using Design by Contract for Developing Embedded Software

Design by Contract, ESC, Embedded Systems Conference, embedded software

Embedded software design methodologies have been developed by the dozens, if not hundreds. Methodologies are meant to guide engineers toward developing software that is more robust, has fewer bugs, and is more maintainable, to name a few coveted characteristics. They range from approaches heavily reliant upon documentation to those that rely on cleverly crafted code to act as its own documentation. One methodology that can be useful for developing embedded software interfaces is Design by Contract.

The term Design by Contract was first used, and later trademarked, by Bertrand Meyer in connection with the object-oriented programming language Eiffel. The idea behind Design by Contract is that developers should design precise and verifiable software interfaces for the components in their applications; in essence, they create a contract between the interface and the application code that will use it. For example, when creating a hardware abstraction layer interface for a GPIO component, each interface would contain some method for defining the component's pre-conditions, post-conditions, and invariants. Developers using an interface are often left in the dark about these conditions, which can be quite dangerous.

Figure 1 – Example Interface Header using Pre-conditions and Post-Conditions.

Mastering Modern Debug Techniques. Attendees of this session will walk away with an understanding of the modern debug techniques available to engineers in today's development cycle. Attendees will learn how to quickly set up SWV and how it can benefit their development effort. Finally, ETM and code coverage analysis will be examined. Don't miss "Mastering Modern Debug Techniques" at ESC Silicon Valley, Dec. 6-8, 2016 in San Jose, Calif. Register here for the event, hosted by Design News' parent company, UBM.


For a designer to share and identify the pre-conditions and the post-conditions for a component makes sense, yet developers very rarely provide this information explicitly. An application developer would be happy to know that they need to enable a clock and initialize the interrupt vector controller before calling the Gpio_Init function, rather than spending 30 minutes or more debugging the system only to discover that the initialization function doesn't do it for them. Design by Contract provides application developers a contract for using the component: if the pre-conditions are met, the interface is contractually guaranteed to deliver the specified post-conditions. The example Doxygen interface function header in Figure 1 demonstrates how easily a developer can specify the contract by simply adding comments for the PRE-CONDITIONS and the POST-CONDITIONS.
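A header along those lines might look like the following sketch. The Gpio_Init function, the GpioConfig_t type, and the specific conditions here are illustrative assumptions, not the exact contents of Figure 1:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical configuration table entry for the GPIO component. */
typedef struct
{
    uint8_t Pin;        /* Pin number on the port */
    uint8_t Direction;  /* 0 = input, 1 = output  */
} GpioConfig_t;

/**
 * Initializes the GPIO pins according to the supplied configuration table.
 *
 * PRE-CONDITIONS:
 *   - The peripheral clock for the GPIO port has been enabled.
 *   - The interrupt vector controller has been initialized.
 *   - Config points to a table of TableSize valid entries.
 *
 * POST-CONDITIONS:
 *   - Every pin listed in the table is configured and ready for use.
 */
void Gpio_Init(const GpioConfig_t * const Config, size_t TableSize);
```

An application developer reading this declaration knows up front what must be done before the call and what the call guarantees afterward, with no debugging required to discover either.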

Since the pre-conditions and post-conditions are fully specified up front, the component developer can use C assertions to validate that the contract is being met by the component user. For example, the component implementer would create assertions to verify all pre-conditions, including function input values. If any pre-condition is not met or an input value is out of range, this is a bug in the application software; the assertion will stop the application from executing and report the file and line number of the failed assertion along with how it failed. Once an application has been developed and all assertions have been met, the assertions can be turned off to decrease code size and minimize their run-time overhead.
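A minimal sketch of this idea follows. The Gpio_Init function, the hardware state flags, and the specific pre-conditions are illustrative assumptions, not code from the article:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Stand-ins for real hardware state checks -- purely illustrative. */
static bool ClockEnabled = false;
static bool GpioReady    = false;

void Clock_Enable(void)
{
    ClockEnabled = true;
}

/* Gpio_Init verifies its contract with assertions before doing any work.
 * If a pre-condition fails, assert() halts execution and reports the file
 * and line number. Compiling with NDEBUG defined removes the assertions,
 * eliminating their code size and run-time overhead in a release build. */
void Gpio_Init(const int *Config, size_t TableSize)
{
    assert(ClockEnabled);      /* PRE-CONDITION: peripheral clock enabled   */
    assert(Config != NULL);    /* PRE-CONDITION: valid configuration table  */
    assert(TableSize > 0);     /* PRE-CONDITION: table has at least 1 entry */

    /* ... configure the pins ... */

    GpioReady = true;          /* POST-CONDITION: GPIO is ready for use */
}
```

Forgetting the Clock_Enable call would halt the program at the first assert with its file and line number, pointing straight at the application bug instead of leaving a half-initialized peripheral to debug.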

Design by Contract sounds like a great idea, but for embedded software developers simply following Design by Contract doesn't quite go far enough. For example, in a standard implementation, interface implementers would not be responsible for detecting or handling errors that might occur. If an application developer invokes the component without meeting the pre-conditions, then it is up to the application code to realize this and handle any resulting errors. In many implementations, this means the system is going to crash or enter an unexpected state while the component itself just doesn't care. Using defensive programming techniques is not expected in the Design by Contract environment and is often ignored.

Developers looking to make an interface easier to understand can use techniques from Design by Contract and then take the methodology one step further by applying defensive programming techniques to the implementation. Relying on user application code to "do the right thing" can be dangerous given the speed at which software is developed and the pressure engineers are under to just make something work. Even though assertions can verify that the application code is adhering to the contract, error-handling code should still be put in place in case a pointer goes awry or memory becomes corrupted at run time, violating the contract.
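One way to combine the two approaches is sketched below, under the same illustrative Gpio_Init assumption; the GpioStatus_t error codes are also hypothetical. The assertions catch contract violations during development, while the error returns survive into release builds as a defensive backstop:

```c
#include <assert.h>
#include <stddef.h>

typedef enum
{
    GPIO_OK,
    GPIO_ERROR_NULL_CONFIG,
    GPIO_ERROR_EMPTY_TABLE
} GpioStatus_t;

/* Contract checks via assert() catch application bugs during development.
 * The explicit error returns remain in release builds (where NDEBUG has
 * disabled the assertions) to defend against a pointer going awry or
 * memory becoming corrupted at run time. */
GpioStatus_t Gpio_Init(const int *Config, size_t TableSize)
{
    assert(Config != NULL);           /* Development-time contract check */
    if (Config == NULL)               /* Release-build defensive check   */
    {
        return GPIO_ERROR_NULL_CONFIG;
    }

    assert(TableSize > 0);
    if (TableSize == 0)
    {
        return GPIO_ERROR_EMPTY_TABLE;
    }

    /* ... configure the pins ... */
    return GPIO_OK;
}
```

The cost of the redundant checks is a few instructions per call; the benefit is that a run-time contract violation produces a recoverable error code rather than an undiagnosed crash.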

Design by Contract is by no means a new idea. It was first described in computer science literature as far back as 1986. Developers who are involved in creating APIs and HALs may find that borrowing ideas from this methodology can improve clarity and component use. Undoubtedly, mixing its key concepts with more modern ideas can provide a robust solution that allows developers to decrease the time they spend debugging and focus on delivering new features that their end users will love.

Jacob Beningo is an embedded software consultant who currently works with clients in more than a dozen countries to dramatically transform their businesses by improving product quality, cost and time to market. He has published more than 200 articles on embedded software development techniques, is a sought-after speaker and technical trainer and holds three degrees which include a Masters of Engineering from the University of Michigan. Feel free to contact him at [email protected], at his website, and sign-up for his monthly Embedded Bytes Newsletter.

Silence of Owl Flight Inspires Method for Reducing Wind Turbine Noise

owls, wind turbine, green engineering

Researchers from Lehigh University, Virginia Tech, Florida Atlantic University, and the University of Cambridge have collaborated on research that draws on the way many owl species fly silently to invent a passive noise reduction device. The device can disrupt the air flow of turbines to cut back on the sound they make, and also has applications for other turbine-based technology used in airplanes, naval ships, and automobiles.

Justin W. Jaworski, assistant professor of mechanical engineering and mechanics at Lehigh, who worked on the research, explained to Design News in an interview what about owl flight inspired the design of the mechanism.

This image shows the feathers of the Eurasian eagle owl (a,b), great gray owl (c), and snowy owl (d). The ability of many owl species to fly without noise to better capture prey has inspired a cross-university research team to develop a passive noise reduction device to help cut back on noise from wind turbines. The research also has applications for airplanes, naval ships, and automobiles. (Source: Ian A. Clark, Conor A. Daly, William Devenport, W. Nathan Alexander, Nigel Peake, Justin W. Jaworski, Stewart Glegg)

“Owls are able to suppress the noise due to turbulent air flowing over their wings over the same frequency range where human ears are most sensitive, so the implications for engineering design are obvious for designs at the bird scale,” he said. “The noise produced at the trailing edge of a wing sets its minimum noise level, so to be effectively silent the owl must modify its trailing edge in a novel way. Many real-world applications (e.g. wind turbines) cite trailing-edge noise as a dominant or predominant noise source; therefore, understanding the owl trailing edge may have direct implications for industrial design for quieter operation.”

Specifically, the downy layer found on top of owl feathers posed particular interest for the researchers, Jaworski said. The layer, which has a velvety texture, has a forest-like structure that pushes the noisy air flow off the wing surface. When this layer is placed upstream of the trailing edge, it modifies the air flow before it generates noise at the edge, he said.


Design Technologies. Learn more about electronics and security at ESC Silicon Valley, Dec. 6-8, 2016 in San Jose, Calif. Register here for the event, hosted by Design News' parent company, UBM.


Using this as a model, the team developed a 3D-printed, plastic passive noise reduction device called "finlets," based upon the structure of the fluffy down material found on owl feathers, Jaworski said. The device comprises small rails or fins aligned with the air flow upstream of the trailing edge, which disrupt the air flow in a way that weakens noise generation at the trailing edge.

“The finlet device is passive and could be a design retrofit -- such as winglets on the tips of airplane wings -- or incorporated into new designs,” Jaworski said. “Also, because finlets simply pretreat the air flow, they may be used with other noise reduction technologies -- e.g. serrated edges -- to achieve even greater noise reductions and/or over a broader frequency range.”

Jaworski and other members of the team have published a paper about their work in the Journal of Sound and Vibration. The team already has demonstrated 10-decibel broadband noise reduction on a wind turbine blade in a laboratory at Virginia Tech. The next step is to test the finlet invention on full-scale wind turbines to see how it performs in a real-world scenario, Jaworski said.

Elizabeth Montalbano is a freelance writer who has written about technology and culture for more than 15 years. She has lived and worked as a professional journalist in Phoenix, San Francisco, and New York City. In her free time she enjoys surfing, traveling, music, yoga, and cooking. She currently resides in a village on the southwest coast of Portugal.

Embedded Hardware in Space. It’s Hard.


Putting things into space has long been a human fascination, but it hasn’t always been easy. Indeed, sending hardware into orbit can be challenging to say the least. “It’s really hard,” said Eli Hughes, a research engineer at Penn State ARL, who will be delivering his talk at ESC Silicon Valley this year on how engineers can go from a base idea to getting something prototyped and ready to go to space.

The first challenge, said Hughes, is finding parts to build a prototype with. “Your component selection goes down from millions of parts to not many parts,” he explained, noting that anytime one is designing something for quick operation, state-of-the-art goes out the window. “You’re always bumped back five, 10, 15 years. That’s the first challenge.”

Adding to that challenge, of course, is then how to implement an algorithm on the parts you do have available, while keeping the entire project within budget, because using space-qualified parts can be expensive. That means finding the best available proxy parts that would work in space to test your theory before building your actual device with proper space parts.


Take a Journey with "Mr. X." Designing embedded systems for space applications is both costly and difficult. A particular challenge is developing lower-cost hardware to serve as a development model before designing fully qualified flight-ready hardware. This session, at ESC Silicon Valley, Dec. 6-8, 2016 in San Jose, Calif., will tear down hardware used for the development of a high-speed X-ray event detection processor. Register here for the event, hosted by Design News' parent company, UBM.


“You have to cobble together your own design tools, to prove the concept and get the investment money,” Hughes explained, noting that engineers had to pull together a development system representative of parts they have available just to prove that the algorithm or measurement system will work. Once that hurdle is overcome, it’s easier to get money to fund the next step, building flight-ready engineering models.

“We have an idea, we have an algorithm, but then to move it into hardware is really tough.”

Hughes’ team should know. They have been developing a new circuit to do high-speed X-ray event detection in space, which the team has fondly dubbed “Mr. X.”

“A good way to describe it is that what’s out there now is standard definition video and we’re trying to move it to a high-speed high-definition video.”

The "Mr. X" design incorporates a Xilinx Virtex-5 FPGA, high-speed QDR-II SRAM, and an ARM Cortex-M0-based microcontroller to implement the core processing architecture. One of the constraints of this design was to use components that offered a clear path to rad-hard, flight-ready hardware.

The "Mr. X" board allowed the engineering team to demonstrate the core functionality of the processing concept while being able to significantly improve the technology readiness level of the instrumentation.

The space-qualified FPGA Hughes’ team wanted to use, made by Xilinx, was based on a commercial part roughly 10 years old.

Unable to simply go out and buy the right development hardware, or source bits and bobs from eBay, the team found hardware that was fairly close and decided to use that. “But halfway through the process of proving our logic, we needed to wire in some additional hardware, and that was just impossible, there was just no way to get the circuits we needed with the development hardware, so we decided to use the funds we had to build our own development system, so we could prove that what we had with commercial parts worked,” explained Hughes. Every part on the current board has a path to a rad-hard equivalent, so the team can get as close as they can with the money they have, and should be able to make a pretty good case to investors.

“With ‘Mr. X’ we have a pretty good representational model. We’ve gone a couple of steps up from what we call technology readiness level (TRL).”

When designing for space, Hughes noted one is automatically working with expensive parts, even the commercial equivalent parts. “I probably have $12k in parts on one of these boards here,” he said, adding that one had to have a great respect for the design process and a lot of discipline going into a board design, because typically teams only have enough money to get one shot.

“The original budget didn’t include designing this hardware, so we had to get it right the first time. We had to really sweat the small stuff. Looking over every detail we possibly could, running as many simulations as we could, having as many design reviews as we could,” he said.

The result in Hughes’ case is a 20-layer board that uses a fairly exotic substrate and cost quite a bit to get made. Hughes said he couldn’t stress enough how important it was to have a good relationship with an assembler to help engineers through the process. “I literally spent a week at the assembler,” he said, noting that when his team received the boards, they had the assembler make up partial sections and ensure they were there to make any fixes and get the process right. 

“Make sure you have a very methodical, well-planned, and disciplined approach,” he reiterated.

“Of course, spending a lot of time on it costs a lot, but it costs a lot more if you don’t get it right the first time,” he added.

The ESC talk, said Hughes, will be particularly valuable for engineers who are interested in FPGAs, especially FPGAs for going into space, as well as people looking to learn about how hard it is to prototype for space.

“A lot of the things you might run into in space are not necessarily the things you might think of; for example, one of the biggest challenges is dealing with heat. A lot of people think space is cold, but it’s not, it’s a vacuum, so it’s really hard to get heat to dispel from anything that’s generating heat,” he explained, adding that engineers really had to think about things like that as well as their general software development cycle.

“In space, things can’t crash. They have to be rock solid,” he said.

Meanwhile, another ESC speaker looking to the final frontier for embedded hardware is Peter Ateshian, Faculty Research Associate Lecturer at the Naval Postgraduate School.

Ateshian believes the Internet of Space has some resounding benefits, including the ability to connect anywhere, anytime, without infrastructure or power.

His team at the NPS has been developing Femto satellites, powered by onboard solar cells, which transmit using CDMA so the signals can be received with a standard cell phone. Each satellite passes over a given point about three times a day, so Ateshian notes that with around 60 of them, one could get near-continuous coverage.


Internet of Space. Internet of Space (IoS) is an embedded device application with a 1" x 1" UV stabilized PCB containing a CDMA radio transceiver, MEMS magnetometer, gyroscope, inertial measurement unit (IMU), switched ECC RAM, and thermal sensors. This embedded device platform is called the Femto Satellite. Learn more at ESC Silicon Valley, Dec. 6-8, 2016 in San Jose, Calif. Register here for the event, hosted by Design News' parent company, UBM.


The Femto satellite program is an evolution from the days of the CubeSat, some 15-odd years ago. CubeSats were cool, but a bit on the large side. People had also started trying to add various sensors and capabilities to them, so NASA and Cornell University teamed up to see how they could improve them, resulting in the Femto Satellite.

The NPS is currently working on the next generation of those developed by Cornell and NASA, applying Moore’s law to satellite technology and shrinking the entire device down to a 1” x 1” PC board.

The 1" x 1" UV-stabilized PCB containing a CDMA radio transceiver, MEMS magnetometer, gyroscope, inertial measurement unit (IMU), switched ECC RAM, and thermal sensors costs less than $50 to make, including its $10 software-defined radio (SDR) dongle which maps the live frequency to your cell phone frequency (because you’re not allowed to transmit at cell phone frequency for obvious reasons).

The CC430 SoC is at the core of the Femto Satellite, providing all computing and communication capabilities. It combines an MSP430 microcontroller, which is clocked at 8 MHz and provides 4 kB of RAM and 32 kB of Flash memory, with a very flexible CC1101 UHF transceiver capable of output powers up to 10 mW and data rates up to 500 kbit/s. Both the MSP430 and CC1101 have flight heritage on CubeSat missions. An Arduino-based development environment, known as Energia, has also been ported to the CC430 to facilitate rapid code development and prototyping.

As part of the Internet of Space, the Femto Satellite effectively lets a cell phone, tablet, or notebook become a ground station via the low-cost SDR. It can also be useful for missions like asteroid detection, true random number generation (TRNG), protected CDMA communications, solar weather or CME monitoring, earthquake and tsunami detection, and radiation or cosmic particle detection.

“There are hundreds of industrial, commercial, and agricultural applications,” said Ateshian, noting that if other simple pH, chemical, and salinity sensors were added to this IoS and IoT platform, it would increase that number even more.

“Not only can the Femto Satellites be deployed in space, they can also, with a little coat of shipping container primer, be deployed at sea and float on the ocean and operate as a sea sensor. That’s one of the very interesting IoT applications,” he added.

“They can also be used on the ground as a GPS navigation system, so if we lost our GPS satellites you could use a swarm of these to be your navigation, though it wouldn’t be quite as accurate,” he noted.

In space, mission time is usually six to eight weeks, and the satellites burn out on re-entry.

Interested in learning more about embedded technology as it relates to Femto Satellites and the Internet of Space? Ateshian’s ESC talk should have you covered. It is targeted at anyone interested in the Internet of Space, or anywhere, anytime connectivity in environments without power grids or infrastructure, as well as those interested in earthquake, tsunami, radiation, and cosmic particle detection, or TRNG for cryptographic and communications applications.

Both talks sound out of this world to us!

A regular speaker on the tech conference circuit and a Senior Director at FTI Consulting, Sylvie Barak is an authority on the electronics space, social media in a b2b context, digital content creation and distribution. She has a passion for gadgets, electronics, and science fiction.

Collaborative Robot Leverages Motion Control Speed and Agility

ABB, YuMi, robot, robotics, Mechatronics
ABB Robotics manufactures YuMi (shown above), along with other lines of collaborative and industrial robots. What's most notable about YuMi in this case, however, is that the robot can be controlled using ROS. (Image source: ABB Robotics)

A new collaborative dual-arm robot named YuMi uses a unique mechatronic design and innovative motion control to eliminate the need for barriers, cages, and software safety zones. In developing the new design concept, aimed at making automation technically and economically feasible for small parts assembly, the goal was to create automated cells that can coexist with nearby manual work cells and provide an inherently safe environment when interacting with human workers.

Unique Mechatronic Design

YuMi’s design is based on a revolutionary integration of motion control software, speed-limited hardware, reduced weight, a compact frame, and 14-axis agility. Lightweight, padded magnesium arms can cease operation in milliseconds if necessary, in the event of an unexpected collision, for example, while cameras embedded in its integrated hands monitor the immediate environment. The combined effect is to ensure the safety of human coworkers on production lines and in fabricating cells.

“YuMi is designed not to hurt people, even when contact occurs,” Nicolas De Keijser, Assembly and Test Business Line Manager for the Robotics Business Unit at ABB Inc., told Design News.

“The soft padded dual arms and the lightweight design contribute to the overall safety of the robot’s co-workers. The design of the robot makes humans feel safe and comfortable.”

“Put simply, in the unlikely event of a safety failure, the physical robot including its grippers is incapable of causing harm,” De Keijser said. “Moderate robot speeds also allow time for human reaction to avoid collision.”

YuMi is what ABB claims is the world’s first inherently safe, collaborative, industrial robot. Its inherently safe design has received global certification from UL.

Focused on Small Parts Assembly

Dual-arm technology used in the design of the robot offers many advantages for assembly applications. It enables flexible fixturing within an assembly cell, as one arm can be used as a fixture while the other manipulates the part, and also allows for guiding of flexible parts. Another key is that YuMi enables high throughput in a collaborative manner with its small footprint, speed, and the precision to thread a needle.

The robot’s inherently safe design is intended to eliminate the need for fencing, caging, or other barriers, and its servo gripper hands can easily locate parts and then direct the grippers to pick them. Real-time algorithms set a collision-free path for each arm, customized for the required task. Padding protects coworkers by absorbing force in the unlikely event that contact is made.

“If the robot encounters an unexpected object, the slightest contact with a human coworker for example, YuMi can rapidly diagnose the change in its environment and, if necessary, register the overload, shutting down the motion within milliseconds to prevent injury,” De Keijser said.

If YuMi senses an unexpected impact, such as a collision with a coworker, it can “pause” a motion in progress within milliseconds, and the motion can easily be restarted again. Additionally, the robot can rapidly diagnose changes in its environment and, if necessary, register the overload, shutting down its motion within milliseconds to prevent injury. When this is combined with the floating padding, safety for a human coworker is drastically increased. Even with its inherent safety features, the robot is precise and fast, returning to the same point in space over and over again to within 0.02 mm accuracy and moving at a maximum velocity of 1,500 mm/sec.

YuMi was specifically designed to meet the flexible and agile production needs of the consumer electronics industry, and increasingly of other market sectors. To that end, ABB developed a collaborative, dual-arm, small-parts assembly robot solution that includes flexible hands, parts-feeding systems, camera-based part location, and state-of-the-art motion control. Through the use of its hand cameras, it is also well suited to inspection and test applications.

“Since its global market introduction nearly two years ago at Hannover Messe, YuMi has been successfully deployed by numerous small parts assembly manufacturers and is increasingly becoming the choice of customers in industries such as toys, wrist watches, and writing pens, to name a few,” De Keijser said. “Packaging applications for YuMi are widespread, including kitting of multiple components into packages, component assembly and packing of fragile materials.”

Collaborative Robot Trends 

YuMi’s performance and accuracy make it a highly suitable collaborative robotic solution for tackling small parts assembly and other handling applications that require high dexterity and accuracy. As such, it grows the overall market where collaborative robots can be used.

Leveraging its flexibility and ability to be deployed rapidly, YuMi can support and augment the capabilities of skilled workers from SMEs to large corporations particularly at repetitive and dull tasks. We expect this kind of interaction between man and robot to continue as new applications emerge every day in more and more industries.

De Keijser said the “new norm” of small parts assembly is lower product volumes, shorter product lifecycles, shorter lead times, and a growing trend to customize goods close to final markets. A collaborative robot’s extreme versatility enables it to adapt to changes in the production environment in much the same way a high-value employee does, by quickly repositioning and learning new tasks. The new thinking is that collaborative robots will radically change the way the industry thinks about assembly automation.

As robot systems and solutions are simplified, they will become easier to install, commission, and program. A good example of simplicity is lead-through programming, which allows operators to teach the robot tasks by simply moving its arms -- something a wide range of customers can do, from the largest automakers to a local baker.

Challenges Moving Forward

De Keijser said improvement in the ease-of-use and programming of collaborative robots will be one of the main drivers of more widespread adoption. This will go hand-in-hand with advancements in computer vision, artificial intelligence, and sensor technologies which will give robots better abilities to perceive and feel their environment.

“Customers today want robot systems and solutions that allow man and machine to work in greater proximity to one another, that are simple and easy to use and provide effective plant management,” De Keijser said.

With that in mind, ABB has identified three growing trends in robotics that it believes will shape the way all robots are used in the future: collaboration, simplification, and digitalization. Today, thanks to their flexibility and ability to be deployed rapidly, collaborative robots support and augment the capabilities of skilled workers from small and medium enterprises (SMEs) to large corporations, particularly at repetitive and dull tasks.

De Keijser said this kind of interaction between man and robot will continue as new applications are emerging every day in more and more industries. The second trend of simplification will make installation, commissioning, and programming of robots easier. A good example of simplicity is intuitive programming, which anyone can do. Finally, effective plant management will be dependent on digitalization, or the ability to unlock the power of the Internet.

In the time since YuMi’s introduction, the team at ABB has been learning more and more about the potential of this technology.

“YuMi enables leaner automated assembly and material handling systems while still managing complex applications. The cost benefits associated with this are unlocking the potential for this type of automation to be rolled out for many assembly applications where the return on investment was not favorable with more traditional automation technology. In addition, we see many small to medium enterprises starting to invest in automation for this very reason,” De Keijser said.

[images via ABB]

Al Presher is a veteran contributing writer for Design News, covering automation and control, motion control, power transmission, robotics, and fluid power.

Graphene-Based Photodetector Aimed at IoT, Wearable Devices

Tiny Device Turns Light Into Electronic Current for Wearable Technology, IoT

Researchers at the Center for Integrated Nanostructure Physics within the Institute for Basic Science (IBS) in Korea have used graphene to develop the thinnest photodetector to date, one that can produce more electrical current than even larger devices, they said.

The work bodes well for providing smaller and more powerful components for Internet of Things (IoT) devices and wearable electronics, and also has implications for photovoltaic and other optical applications, researchers said.

A photodetector converts light into an electric current and comes in many variations. The one developed by the team at the IBS is just 1.3 nanometers thick, 10 times thinner than current standard silicon diodes, and consists of molybdenum disulfide sandwiched between sheets of graphene.

Graphene is a key element in the device, being conductive and thin, allowing for the slim size of the photodetector. However, typically its application for electronics is limited because it does not act as a semiconductor, researchers said.

In this case, the team was able to boost the usability of graphene by putting a layer of the 2D semiconductor molybdenum disulfide between two graphene sheets and then putting that over a silicon base. To their surprise, it generated an electric current despite being so thin, said Yu Woo Jong, one of the researchers on the team and first author of a paper published about the work in the journal Nature Communications.  

"A device with one-layer of MoS2 (molybdenum disulfide) is too thin to generate a conventional p-n junction, where positive charges and negative charges are separated and can create an internal electric field,” he said. “However, when we shine light on it, we observed high photocurrent. It was surprising. Since it cannot be a classical p-n junction, we thought to investigate it further.”

Researchers then set out to find out why the thinner photodetector they developed works better than a thicker one, since typically photocurrent is proportional to photo absorbance, Jong said. This means that if a device absorbs more light, it should generate more electricity, he said.

“In this case, even if the one-layer MoS2 device has smaller absorbance than the seven-layer MoS2, it produces seven times more photocurrent," Jong said.



To understand why this is the case, the team proposed an analogy using a group of people in a valley surrounded by two mountains, they said. In the scenario, the group wants to cross to the other side of the mountains without using too much effort. In the case of the larger device with seven layers of molybdenum disulfide, both mountains have the same height. Whichever mountain is crossed, the effort will be the same, so half the group crosses one mountain and the other half the second mountain, researchers said.

In the second case—comparable to the thinner photodetector—one mountain is taller than the other, so the majority of the group decides to cross the smaller mountain. However, because it’s quantum physics and not typical electronics theory, they do not need to climb the mountain until they reach the top, but instead can pass through a tunnel, researchers said. The idea here is that electric current is generated by the flow of electrons, and the thinner device can generate more current because more electrons flow in the same direction, Jong said.

When applied to the photodetectors, this theory means that when light is absorbed by the thinner device and MoS2 electrons jump into an excited state, they leave holes—basically positions left empty by electrons that absorbed enough energy to jump to a higher energy status, researchers said. These holes act like positive mobile charges. In the thicker device, however, electrons and holes move too slowly through the junctions between graphene and MoS2.

For these reasons, 65% of photons absorbed by the thinner device are used to generate a current, while only 7% do so for the seven-layer MoS2 apparatus, resulting in the performance difference between the two devices, researchers found.
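The reported figures can be sanity-checked with quick arithmetic: photocurrent scales with how much light a device absorbs times the fraction of absorbed photons collected as current. The article gives only the collection figures (65% vs. 7%) and the sevenfold photocurrent result, so the absorbance values in this sketch are hypothetical placeholders chosen to show how a lower-absorbing device can still win.

```python
# Back-of-the-envelope comparison of the two photodetectors.
# Collection efficiencies (65% and 7%) come from the article;
# the absorbance values are hypothetical, for illustration only.

def photocurrent(absorbance: float, collection_efficiency: float) -> float:
    """Relative photocurrent: photons absorbed times fraction converted to current."""
    return absorbance * collection_efficiency

one_layer = photocurrent(absorbance=0.075, collection_efficiency=0.65)   # thin device
seven_layer = photocurrent(absorbance=0.10, collection_efficiency=0.07)  # thick device

ratio = one_layer / seven_layer
print(f"one-layer device produces {ratio:.1f}x the photocurrent of the seven-layer device")
```

Even though the one-layer device absorbs less light in this sketch, its far higher collection efficiency dominates, matching the roughly sevenfold difference the researchers reported.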

Elizabeth Montalbano is a freelance writer who has written about technology and culture for more than 15 years. She has lived and worked as a professional journalist in Phoenix, San Francisco and New York. She currently resides in a village on the southwest coast of Portugal.


Designing Ourselves: A Future of Cybernetics for Everyone


That antenna sticking out of Neil Harbisson's head isn't a medical device, it's not a wearable, and it's not a gadget. Ask Harbisson and he'll tell you it's an “artificial sensory organ” that allows him to overcome his colorblindness and perceive color.

Neil Harbisson 

For the past 13 years, since he first had his artificial organ (which he calls the “Eyeborg”) drilled into his cranium, Harbisson, who will be delivering a keynote at the upcoming ESC Silicon Valley conference, has emerged as one of the world's preeminent futurists as well as a self-proclaimed cyborg. And he believes we're progressing toward a world where more and more people (disabled or not) will be choosing to augment themselves and experience the world in brand new ways.

Harbisson, 34, was born with achromatopsia, a rare form of colorblindness that allows him to see only in greyscale. As a child Harbisson said he knew that color existed, but he also understood that he had no way of perceiving it. His study as a musician led him to an answer. “When I started studying music I found out there are technologies that can create sounds. I was interested in creating a sense of color without changing my existing sense,” he said. Transposing colors into different frequencies of sound seemed an ideal solution, but Harbisson also did not want to sacrifice his ability to hear the rest of the real world for the sake of hearing color tones.

He started looking into bone conduction as a solution and eventually settled on the design for his Eyeborg antenna. “Finding people to collaborate with on the project was easy because I was in an art school at the time,” Harbisson said. “The technology is not complex, it's the way it's being used that's unusual.”

What was complex was finding a doctor willing to graft the Eyeborg onto his skull...particularly after a bioethical committee shot the idea down. He eventually found a doctor in Spain who was willing to perform the procedure under the condition of anonymity. 

Neil Harbisson will be delivering a keynote, “The Art and Science of Extending Perception Through Cybernetic Technology,” on December 7 as part of ESC Silicon Valley. Register here for the event, hosted by Design News’ parent company UBM.

If being bombarded with tones coming from every color source around you sounds overwhelming, it is. “It took time for my brain to accept this as a sense. It was all chaotic, I had strong headaches. It was exhausting to hear all of this information every day,” Harbisson said. It took five weeks for the headaches to subside and for him to adjust to the new sensory input. Now Harbisson said he has even begun to dream in color.

The Eyeborg also allows him to perceive beyond the visible color spectrum, into the infrared and ultraviolet range as well. “Infrared makes me aware of movement detectors. If I go in a space and can sense infrared I can sense an alarm or something tracking my movements,” Harbisson said. “I can also sense at night, even if there's no light.” He said his ability to sense ultraviolet also helps him sense intense sunlight and know when he's been outdoors too long.

Today Harbisson's Eyeborg affords him a 360-degree perception of the color around him and allows him to perceive an effectively infinite number of colors by assigning each color a unique tone. “There's no way of counting the number of colors,” he said. “There are 360 hues and you also factor in saturation and light levels.” Since his initial implantation he's also added Bluetooth functionality to his Eyeborg to allow for easier software upgrades as well as Internet access (he lets friends share images with him directly via the Internet so he can perceive colors remotely).
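The article describes the mapping only at a high level: 360 hues, each assigned a unique tone. The sketch below is therefore a hypothetical transposition, not Harbisson's actual sonochromatic scale; the frequency range and the log spacing are assumptions chosen so that equal hue steps sound like equal musical intervals.

```python
# Hypothetical hue-to-tone transposition in the spirit of the Eyeborg.
# The 120-960 Hz range (three octaves) is an assumption for illustration;
# the real device's scale is not specified in the article.

LOW_HZ, HIGH_HZ = 120.0, 960.0

def hue_to_frequency(hue_degrees: float) -> float:
    """Map a hue angle (0-360 degrees) onto a tone frequency, log-spaced
    so that equal hue steps correspond to equal musical intervals."""
    fraction = (hue_degrees % 360) / 360.0
    return LOW_HZ * (HIGH_HZ / LOW_HZ) ** fraction

print(f"hue   0 deg -> {hue_to_frequency(0):.0f} Hz")
print(f"hue 120 deg -> {hue_to_frequency(120):.0f} Hz")
print(f"hue 240 deg -> {hue_to_frequency(240):.0f} Hz")
```

With these assumed endpoints, each 120-degree step in hue lands exactly one octave apart, which is one plausible way to make 360 hues audibly distinct.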

Given all of the different tones coming at him constantly, Harbisson describes his day-to-day as a rather musical experience. “All of my other senses have awakened a bit more. You find connections. The smell of orange makes my brain create the sound of F-sharp. I might hear a sound and relate it to a specific color or a smell.”

His favorite place to visit? “I really like supermarkets. Walking around the supermarket is a very unique experience. You don't really find the combination of colors placed that way in aisles anywhere else and the light is always really good in supermarkets.” On the other hand he said he doesn't really like spaces with a lot of violet coloring since it can be very high pitched.

Harbisson is far from alone. In 1998 Kevin Warwick, professor of cybernetics at the University of Reading, England, became the world's first cyborg when he implanted himself with an RFID chip that allowed him to control lights and machines around his lab. Warwick has since removed the chip from his body, but an entire underground movement has sprung up – mostly through online forums and websites – around biohacking and personal cybernetics. There are even annual conventions like BodyHacking Con.

Biohackers, or “grinders,” as some members of the community have branded themselves, are conducting often-risky DIY experiments in implanting themselves with chips and other devices for reasons ranging from medical to pure entertainment. They even have their own online stores dedicated to selling homemade implants as well as the required surgical tools – all with full disclaimers, of course. A biohacker named Amal Graafstra recently gathered attention for a proposed project to use RFID implants to create smart guns that only fire when held by their owner (Graafstra has already demonstrated a working prototype, using himself as a guinea pig).

While he hasn't declared himself a part of the grinder movement, Harbisson himself has started the Cyborg Foundation – a platform dedicated to bringing together individuals around the world who want to become cyborgs and “come out of the cyborg closet,” according to its website. “There are a lot of people that are in the cyborg closet ... We believe that everyone should perceive the world as they wish. And this platform exists to give you the tools to become who you seek to be by expanding your senses and/or abilities as you please,” the foundation's website reads.

Taking his mission one step further, in 2015 Harbisson co-founded his own company, Cyborg Nest, which will sell implantable technologies Harbisson refers to as “new sensory organs.” Whereas engineers and scientists are abuzz about artificial intelligence (AI), Harbisson believes there is a new, emerging field that is just as exciting – artificial sensing (AS). “Cyborg Nest is about exploring our relationship with AS, and the applications of AS to enhancing body intelligence,” Harbisson said.

The company is already taking pre-orders on its first product, North Sense, and is planning to ship the first units in January. North Sense is a partially implantable sensor, about a square inch in size, that anchors to the skin via titanium barbell piercings and vibrates when its wearer is facing true north – transforming a person into a sort of human compass.

The North Sense will affix to the body like a piercing and detect true north. (Image source: Cyborg Nest)
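The North Sense behavior described above reduces to a simple condition: trigger a vibration when the wearer's heading is close enough to north. A minimal sketch, assuming a magnetometer-style heading in degrees; the 10-degree tolerance and the function name are hypothetical, not taken from Cyborg Nest:

```python
# Sketch of the north-facing trigger the article describes.
# A real device would read a magnetometer; the tolerance is an assumption.

def facing_north(heading_degrees: float, tolerance: float = 10.0) -> bool:
    """True if the compass heading is within `tolerance` degrees of north (0 deg)."""
    # Angular distance to 0 deg, handling wrap-around (e.g. 355 deg is 5 deg off north).
    error = abs((heading_degrees + 180) % 360 - 180)
    return error <= tolerance

# A wearer turning through 3, 355, and 90 degrees would feel the
# vibration only for the first two headings.
for heading in (3, 355, 90):
    print(heading, facing_north(heading))
```

The wrap-around arithmetic matters: a naive `heading <= tolerance` check would miss headings just west of north, such as 355 degrees.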

Cyborg Nest also has other new devices planned on the horizon. One that Harbisson is particularly excited about is a forehead implant that will use heat generated at various orientations to give a user an inherent sense of time. “Humans don't have an organ for time, so we decided to create one,” he said, adding that the finished product will also have a “flight mode” and be able to adapt to travel as well as different time zones.

Animals already have sensory abilities, like the ability to see ultraviolet or detect true north, that go beyond human perception and Harbisson thinks it's time that humans joined the club. “It's not bad to modify ourselves or design ourselves. It's positive,” he said. He believes adopting more cyborg technology could have a powerful effect on society.

“I usually give the example of night vision. If we created night vision instead of lightbulbs it would be much better for the planet; we wouldn't be using so much energy to create artificial life,” he said. “By adding senses you can also prevent illnesses and accidents. Sensing ultraviolet would prevent too much sunbathing, for example. Once you feel nature you're more aware of it. If we all felt something like climate change instead of just knowing it's there we'd act differently.”

Research shows that Harbisson's views aren't just confined to niche online communities but are expanding into the larger consumer space. A survey by Ericsson ConsumerLab on “10 Hot Consumer Trends for 2016” listed “Internables,” implantable technologies, as one of the in-demand emerging consumer technologies. “Judging by consumer interest, the next generation of body-monitoring technology may not be worn, but may instead be found within the human body,” the report said. “But this is only the beginning; eight out of 10 smartphone owners would like to augment their sensory perceptions and cognitive capabilities with technology – the most popular being vision, memory, and hearing.”

Though it will probably be some years, or even decades, before we see the first people walking around with augmentations and cosmetic implants in their bodies, Harbisson believes there is a cultural shift happening in younger generations that, coupled with technological advancements, particularly in 3D printing, will make cyborg technology much more acceptable and eventually ubiquitous.

From Ericsson ConsumerLab's report "10 Hot Consumer Trends 2016" 

“Things are changing slowly, very slowly. I feel the younger generations are much more aware of what's happening,” Harbisson said. “The 20th century was harmful in many cases in the way that technology was very negative. But now the younger generations don't see this as so bad.”

But for Harbisson it's only a matter of time before we accept technology as a normal part of our biology. “Once we can 3D print with our DNA we'll be able to print existing and new organs,” he said. He even hopes that someday 3D printing will turn his antenna into an organic implant. “Instead of using chips we'll use biological organs. We are at the beginning of the renaissance of our species. Children will be born with new senses by the end of this century if we keep pushing.”


Chris Wiltz is the Managing Editor of Design News.

Where Are the Women Engineers?


The numbers are getting better, but they still aren’t great.

According to Solving the Equation: The Variables for Women’s Success in Engineering and Computing, research published in March by the American Association of University Women, more than 80% of STEM (science, technology, engineering, and math) jobs are in engineering and computing. Yet women comprised only 12% of the engineering workforce and 26% of the computing workforce in 2013. And those low numbers reflect increases, with engineers at about 10% in 2010.

More substantial increases have been recorded as well, but they are few and far between. Harvey Mudd College, for example, is credited with changing its structures and environments under president Maria Margaret Klawe in ways that led to significant increases in women’s representation in computer science. The school saw the share of women graduating from its computing program climb from 6% in 2007 to a whopping 55% in 2016.

Mudd, sadly, is an exception. Even worse, many women who enter engineering fields after graduation filter out over time. And we find ourselves back at the low double digits.

Needless to say, this is concerning, as we know – and have shown time and time again – that diversity in the workforce contributes to creativity, productivity, and innovation, not to mention that companies with more diversity perform better financially over the long run. Diversity is needed to steer the direction of engineering and technical innovation.

We also know that in the very near future, the United States will need a mass of new engineers and computing professionals as Baby Boomer engineers exit their cubes and technology continues to become a more pervasive part of our economy, healthcare, and, in general, our lives. Yet nearly half the population is not approaching or sticking with careers in engineering, science, technology, or math.

At the Embedded Systems Conference (ESC) in December we will continue this conversation. Our panel and networking session, Women in Engineering, Dec. 7, 7:30 a.m. to 9 a.m., will explore professional opportunities for women in engineering, the reasons why women represent less than 12% of engineers, and our panelists’ careers in engineering-driven professions.

Women in Engineering Panel and Networking Session
Location:  211D, San Jose Convention Center
Date:  Wednesday, Dec. 7
Time:  7:30 a.m. – 9 a.m.

Our panelists, themselves, are exceptional leaders in engineering. Joining us will be:

Eileen Tanghal, an MIT electrical engineering grad with an MBA from London Business School, who is currently VP of New Business Exploration, New Business Ventures at ARM, owner of The Coder School, Fremont, and an angel investor/advisor at Goldenspear LLC.

Heather Andrus, general manager, Radius Innovation Studio, who has more than 20 years of experience in creative product design and team leadership, combining user-centered experience design and engineering acumen.

Jessica Gomez, founder and CEO, Rogue Valley Microdevices, who is described by her peers as a powerful, passionate, persuasive, and visionary business and community leader who has continually expanded her sophisticated technology business during the most challenging of economic times.

Lisa Q. Fetterman, who successfully made a career shift into tech and navigated the start-up waters as founder and CEO of Nomiku.

Consider this blog your invitation to join us. And come prepared for an exceptional discussion and networking. These women are stellar examples of engineering and business acumen.

The panel and networking session will start at 7:30 a.m. (early, yes, but we wanted to make sure people could attend and still get to their desks on time) when we will gather at the San Jose Convention Center, room 211D, for what will be a thought-provoking panel featuring the above-mentioned leaders. We will then allow for networking and continued informal discussion with our panelists.

The session is open to all ESC attendees. You can register for ESC here, and let us know if you plan to attend or if there are any specific questions you’d like the panel to answer by commenting below or emailing me.

We hope to see you there.

Metamaterials Enable Faster, More Powerful Semiconductor-Free Microelectronics

Activated by low voltage and a low-power laser, the device in tests showed a 1,000% increase in conductivity

Researchers at the University of California San Diego (UCSD) have developed the first optically controlled microelectronic device that doesn’t use a semiconductor. The research allows for the design of microelectronic devices that work faster and can handle higher power loads, and paves the way for more efficient solar panels, researchers said.

Current microelectronic devices, such as transistors, are limited in capability by the properties of components such as semiconductors, which can impose limits on a device’s conductivity, or electron flow. That’s because semiconductors have what’s called a band gap, meaning they require a boost of external energy to get electrons to flow through them. This limits electron velocity, as electrons are constantly colliding with atoms as they flow through the semiconductor.

To help overcome these limitations, a team in the Applied Electromagnetics Group at UCSD—led by electrical engineering professor Dan Sievenpiper—aimed to remove these roadblocks in conductivity at a microscale by using free electrons in space, said Ebrahim Forati, a former postdoctoral researcher in Sievenpiper’s lab and first author of a paper on the work published in the journal Nature Communications.

However, allowing electrons to roam free without being linked to materials is easier said than done, researchers found. To liberate them from materials typically requires applying high voltages of at least 100 volts, high-power lasers, or extremely high temperatures of more than 1,000 degrees Fahrenheit, Forati said. These methods just aren’t practical in electronic devices at the micro- and nanoscale, however.

To solve the problem, the UCSD team turned to metamaterials—in this case, a metasurface composed of an array of gold mushroom-like nanostructures on an array of parallel gold strips. Researchers fabricated a microscale device consisting of this metasurface on top of a silicon wafer with a layer of silicon dioxide in between.


The design of the gold metasurface is such that when a low DC voltage—under 10 volts—and a low-power infrared laser are both applied, it generates so-called “hot spots,” or spots with a high-intensity electric field. These spots provide enough energy to pull electrons out of the metal and free them into space, according to the team.

This method also showed a 1,000 percent improvement in conductivity, meaning more electrons are available for manipulation, Sievenpiper said. The microelectronic device designed may not be well-suited to all semiconductor-dependent applications, he said, “but it may be the best approach for certain specialty applications, such as very high frequencies or high power devices.”

The team designed the metasurface as a proof of concept, but aims to develop and optimize different metasurfaces for different types of microelectronic devices, Sievenpiper said. Researchers also are exploring other applications beyond electronics for the technology, including photochemistry, photocatalysis, new kinds of photovoltaic devices, and environmental applications.

Elizabeth Montalbano is a freelance writer who has written about technology and culture for more than 15 years.

Simulation Takes on Bigger Roles in Product Development

simulation, CD Adapco, design

The real world isn’t what it used to be when it comes to testing. Simulation has created a world of new product testing that puts products through scenarios that cannot be duplicated by prototypes in the real world. Instead of just testing an actual part physically, simulation can test an entire complex product – like a car – and see how each part performs in conjunction with the entire product – a form of accurate testing that can’t be done in the real world.

The exception is with composites and some 3D-printed parts. There is not enough data on the new materials and 3D-printed shapes to provide accurate simulation. That’s temporary, however. The data from the physical testing of composites and 3D shapes are getting fed into simulation programs so those programs can begin to include new materials and shapes into the digital world of simulation.

Simulation at the Center of the Design Process

Simulation used to be a side function, something done after preliminary design to see how the product performs in the real world. Simulation has moved to the center of the process so the product’s performance can be evaluated as it is being designed.

“Simulation has become an entire part of the design process, saving development time, reducing physical prototyping cost, and improving the quality of the product,” said Nicolas Tillet, SolidWorks manager at Dassault Systemes. “Additionally, by saving development time through simulation, the design team can spend more time innovating.”

With simulation in the hands of the design team, a wider range of different digital versions of the product can be created and tested, making simulation part of the design process itself.

“With the simulation model, once you invest in it, you can try different scenarios. You can simulate driving up the hill pulling a trailer and see how much the engine heats up,” said Stephen Ferguson, product marketing director at CD-adapco. “In simulation, you’re not simulating the test bed, you’re simulating the real world.”

The idea of putting more emphasis on testing digital versions of the product – rather than costly physical prototypes – is new to product design. “I’ve been working in the industry 25 years, and it’s only in the last two or three years that simulation has been replacing physical prototypes,” said Ferguson. “It’s an economic necessity, because building physical cars is more expensive. It lets people design better products in less time, and that’s happening in auto and aerospace.”

Simulating the Whole Thing, Not Just the Part

One of the significant advantages of simulation is the ability to see how different structures behave as part of the whole assembled product.

“It used to be that fluid and structures were handled differently by different people, but in order to solve difficult engineering problems, you have to do it all at once,” said Ferguson. “Any fluid flow problem flows around things, and that involves interaction with solids. So we simulate the entire problem, not just parts of it.”


Securing the Internet of Things. Today's IoT devices are under increasing attack. Device manufacturers and embedded software designers must be vigilant if they are to provide a secure system for applications to do their work. Learn more about securing IoT devices and applications in the Connected Devices track at ESC Silicon Valley, December 6-8, 2016 in San Jose. Register here for the event, hosted by Design News’ parent company UBM.


Another major change in the way simulation is functioning in design is the ability to change a part and then see how it functions within the entire assembly, even the finished product.

“Daimler is simulating all of the car, not just some assemblies. They show the flow around the car, by the engine, and into the cabin to see how it affects the air conditioning,” said Ferguson. “You can do each simulation separately, but then you just get a degree of approximation. So it makes sense to simulate all of the car.”

Data from the Field and Everywhere

With products becoming more complex – for one, everything now has electronics – simulation has to include a wide range of physics.

“What we’re seeing is the complexity of products, with electrical behaviors and electronics,” said Ravi Shankar, who works in simulation product marketing at Siemens PLM. “That’s leading to a need for multiphysics, not just a single discipline.”

The New World of Composites and 3D Shapes

The kink in the simulation process is new materials and new shapes created by 3D printing. Since there is no depth of data on how these materials and shapes perform, they still have to be tested in the real world.

“There are opportunities to replace the real world with simulation. It’s less expensive and gives you accurate results,” said Shankar. “Yet with the introduction of new materials and new techniques for joining new materials together and new manufacturing capabilities like additive manufacturing, the tools and materials are seeing quite a few changes. The impact is that companies may in some instances require more physical testing, and that testing will provide data for the simulation.”

The results of real-world testing of new materials and shapes can be fed into simulation for future digital testing.

“When you start to deal with these more exotic materials, you are in effect designing the material at the same time you’re designing the part. With traditional materials, they’re consistent throughout. With the new materials you have to see their properties in different modeling,” said Patrick Farrell, senior marketing manager for simulation and test solutions at Siemens PLM. “Same with additive manufacturing. It opens up the possibilities for creating shapes that were not possible with other manufacturing. On the testing side, you may have to do more base-level testing to make sure it’s what you expect in the manufacturing process.”

Collaborating with Simulation and Integrating It into Design

Simulation also makes it easier for collaborating teams to enter the design process.

“Allowing departments such as manufacturing, sales, and even marketing, to test the effect of a new product requirement before initiating a design iteration relieves the design team of repetitive tasks performed on previously validated numerical models,” said Valerio Marra, marketing director at COMSOL. “Instead of shifting their focus from current projects, design engineers can let their colleagues run an app and perform analysis on their own and then suggest a design change based on simulation results.”

Integrating simulation with design tools has also been part of the move toward bringing simulation into the heart of the design process. “Integration between different design applications and tools is important going forward,” said Shankar. “Simulation needs to be linked with a data center backbone, so if anything changes at any point, everyone is notified and you don’t have to switch from application to application.”

Rob Spiegel has covered automation and control for 15 years, 12 of them for Design News. Other topics he has covered include supply chain technology, alternative energy, and cyber security. For 10 years he was owner and publisher of the food magazine Chile Pepper.