Articles from July 2013


Slideshow: BMW Unveils Premium Electric Car

BMW's i3 will cost $41,350 before federal incentives and will offer 81 to 99 miles of all-electric range. (Source: BMW)

In a glitzy ceremony that featured a New Year's Eve-style countdown in three cities around the world, BMW AG unveiled its all-electric i3 this week.

The unveiling, peppered with references to sustainable power sources, marked BMW's first foray into mass production of pure electric cars. "We are at the starting blocks of a new era -- the era of sustainable mobility," BMW CEO Norbert Reithofer said in a web-based conference.

The rear-wheel-drive i3 is expected to hit the streets in the US in the second quarter of 2014. It is considered unique in its prominent use of carbon-fiber-reinforced plastics (CFRP), its optional range-extending two-cylinder engine, and in its from-the-ground-up design philosophy. BMW expects it to be priced at $41,350 before federal tax breaks.

During the press conference -- held in New York, London, and Beijing -- BMW executives emphasized that the i3 would appeal to environmentally smart buyers of premium vehicles. "We can see our customers' values are really changing," said Ian Robertson, BMW's chief sales officer. "They want sheer driving pleasure, but with a clear conscience."

The i3 will serve those changing values with its use of a 22-kWh lithium-ion battery. The battery, slightly smaller than the Nissan Leaf's 24-kWh unit, will nevertheless produce an all-electric range of 81 to 99 miles, BMW said. Liberal use of CFRP helped enable that range. A CFRP-based passenger cell will sit atop an aluminum "drive module" that incorporates the suspension, the battery, the drive system, and structural components, thus minimizing vehicle mass.
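
As a quick back-of-the-envelope check (an illustration using only the capacity and range figures quoted above, not additional BMW data), the implied consumption works out to roughly 0.22 to 0.27 kWh per mile:

```python
# Implied energy consumption of the i3, from the figures quoted above.
# Assumes the full 22 kWh is usable; real packs reserve some margin.
battery_kwh = 22.0

for miles in (81.0, 99.0):
    print(f"{miles:.0f} mi -> {battery_kwh / miles:.2f} kWh/mile")
# 81 mi -> 0.27 kWh/mile
# 99 mi -> 0.22 kWh/mile
```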

"Our expertise in manufacturing with this material makes the passenger cell extremely strong and lightweight," Reithofer said. He added that the company produces carbon-fiber components using hydro-electric power. It also employs wind power to build the i3, he said.

For those who want more than 80 or 90 miles for their investment, the company also said it is offering a range extender. By mounting a 650cc two-cylinder engine adjacent to the electric motor and above the rear axle, BMW said it could boost the car's maximum range to nearly 180 miles. The optional range extender is expected to push the price tag to about $45,000.

Industry analysts estimate that BMW may have already invested more than a billion dollars in its i brand, which encompasses multiple vehicles, including the i3 and the hybrid-electric i8. That investment is proof the company is not just doing the i3 as a so-called "compliance vehicle" (to meet California-type mandates), but as a cornerstone of a larger plan. "It's one thing for a startup like Tesla to do an electric vehicle from the ground up," Thilo Koslowski, vice president and distinguished analyst for Gartner Inc., told Design News. "But for a vehicle manufacturer that sells over a million vehicles a year worldwide to do something like this, this is a big risk."

Koslowski said that the 2,600-lb i3 will probably compete most directly with the Tesla Model S, a premium electric car with a price tag that's about $30,000 higher than the i3's. He wondered, however, whether the vehicle's character will be right for traditional BMW buyers. "At the end of the day, it's still a very small vehicle," he told us. "It looks different from their other vehicles, and that could drive away some of those consumers who like BMWs."

Video: This Touchscreen Can Sense Fingerprints

Mobile device and computer security are of serious concern in today's world (just ask the NSA). Leaving these devices in public can lead to a host of problems including unwanted access to bank accounts, fraudulent use of Social Security numbers, and unwanted guests at both work and home.

Currently, there are security measures that can keep unwanted visitors out of your digital world. The most basic is the password, which can be easily cracked. More advanced methods include retina and fingerprint scanners that identify a particular user before granting access to sensitive material. Until now, however, these have typically been standalone devices rather than components integrated into a system, mobile device, or touchscreen.

Researchers from the Hasso Plattner Institute in Germany have developed a touchscreen that can identify the fingerprints of its users. The difficulty with building in a fingerprint reader is that most fingerprint scanners rely on light to capture the unique ridge patterns of a finger, while most touchscreens don't sense light at all; they register touches through capacitive or resistive sensing instead.

The team, led by Christian Holz and Patrick Baudisch, developed the Fiberio tabletop touchscreen in an effort to combine the two technologies into an ultra-secure interface that can be used by the public. The touchscreen uses a new screen material, a large fiber optic plate, composed of millions of 3mm-long optical fibers bundled together. The material diffuses projected light while also reflecting it, making it suitable both as a touchscreen surface and as the reflective surface needed to read fingerprints.

The screen sits atop an aluminum stand with a projector housed underneath. An infrared (IR) illuminator mounted adjacent to the projector lights the user's fingertips, and the reflected image is captured by a high-resolution infrared camera that identifies not only finger and hand locations but hovering objects as well. The screen has the potential to be used in myriad places and projects that require a certain level of security.
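
Because every touch on Fiberio arrives with a fingerprint image attached, the interface can, in principle, authorize actions per user at the moment of contact. The sketch below illustrates that general pattern only; the function names, matching routine, and threshold are invented for illustration and are not Fiberio's actual API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Touch:
    x: float
    y: float
    fingerprint: bytes  # print image captured by the IR camera (hypothetical format)

def match_score(candidate: bytes, enrolled: bytes) -> float:
    """Stand-in for a real minutiae-matching routine; returns a 0..1 similarity."""
    return 1.0 if candidate == enrolled else 0.0

ENROLLED_PRINTS = {"alice": b"<alice-print>", "bob": b"<bob-print>"}
ACCEPT_THRESHOLD = 0.8  # invented acceptance threshold

def authorize(touch: Touch) -> Optional[str]:
    """Return the enrolled user whose print best matches this touch, if any."""
    best_user, best_score = None, 0.0
    for user, enrolled in ENROLLED_PRINTS.items():
        score = match_score(touch.fingerprint, enrolled)
        if score > best_score:
            best_user, best_score = user, score
    return best_user if best_score >= ACCEPT_THRESHOLD else None

# A touch from Alice unlocks only actions Alice is allowed to take.
print(authorize(Touch(120.0, 45.0, b"<alice-print>")))  # -> alice
```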

The team envisions the technology could be used in banking institutions for approving loans (requiring more than just a signature) or used as tables in coffee shops, allowing patrons to answer emails, surf the Web, or hone their writing skills without the need for passwords.

Now Is the Time for Battery Alternatives

Batteries are part of our daily life. Increasing mobility means increasing numbers of devices powered by batteries. Battery capacity is improving, but there are limits and hazards associated with this proliferation.

Battery technology is pushing the limits of current chemistry. The troublesome fact remains: batteries contain heavy metals such as mercury, lead, cadmium, and nickel, which are detrimental to the environment. At the end of their lifetime, batteries remain hazardous waste and need to be carefully (and expensively) disposed of by the manufacturer or the user.

One recommendation by the US Environmental Protection Agency (EPA) for reducing the number of batteries in the waste stream is to use rechargeable batteries. This is reasonable, but it's almost always limited to devices that don't need to function 24/7. In Internet of Things (IoT) networks, where small devices like sensors and relay receivers collect and process data for intelligent control, reliability and continuous operation are critical to keeping the system functional. The fact is, more malfunctions are caused by battery failures than by electronics, especially in large systems.

These days, governments set ambitious goals to slow down climate change, support the use of renewable energies, and find new ways to reduce waste and carbon emissions. In the words of President Obama, "...to put us, and the world, on a sustainable long-term trajectory." So isn't this the ideal time to think about alternatives to batteries?

This brings the energy harvesting principle to the forefront, not just as a future option but as a relevant solution today. Over the past 10 years, energy harvesting wireless technology has made significant leaps, enabling wireless modules to draw their power from the surrounding environment.

For example, tiny electro-dynamic energy converters harvest mechanical motion, and miniaturized solar modules generate energy from light. Combining a Peltier element with a DC-to-DC ultra-low-voltage converter taps temperature differences as an energy source. Even minute amounts of harvested energy are sufficient to transmit a wireless signal, and adding a capacitor provides enough power storage to bridge intervals when no energy can be harvested.
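
To see why minute amounts of harvested energy can be enough, it helps to run a rough energy budget. The sketch below uses invented, order-of-magnitude numbers (not EnOcean specifications) for the harvested power, the energy per transmission, and the capacitor's usable voltage window:

```python
# Invented, order-of-magnitude figures for illustration only (not EnOcean specs).
harvest_power_w = 50e-6       # 50 microwatts harvested, e.g. from indoor light
tx_energy_j = 100e-6          # assumed energy per wireless telegram
tx_interval_s = 60.0          # one transmission per minute

avg_load_w = tx_energy_j / tx_interval_s          # ~1.7 microwatts average draw
surplus_w = harvest_power_w - avg_load_w          # headroom while harvesting

# Size a storage capacitor to bridge a harvest outage (e.g. lights off for 8 h).
outage_s = 8 * 3600
bridge_energy_j = avg_load_w * outage_s           # energy needed during the outage
v_full, v_min = 3.3, 2.0                          # assumed usable voltage window
cap_farads = 2 * bridge_energy_j / (v_full**2 - v_min**2)  # E = C*(V1^2 - V2^2)/2

print(f"average load: {avg_load_w * 1e6:.1f} uW, harvesting surplus: {surplus_w * 1e6:.1f} uW")
print(f"capacitor to bridge 8 h: {cap_farads * 1000:.0f} mF")
```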

Energy harvesting technology enables batteryless automation devices and systems, such as a building's energy controls, built on resource-saving technologies that eliminate the need for batteries. Such automation systems can save up to 40 percent of a building's energy use. Because each device powers itself from its surroundings, the energy harvesting principle ensures the sustainability of every system component, bringing the cleantech idea down to each individual device.

When deciding on wireless technologies, design engineers aren't interested solely in the cleantech character of energy harvesting wireless devices. One requirement imposed on battery-powered consumer products is that the batteries must be easily removable so they can be recovered for recycling. That design restriction doesn't apply to energy-harvesting-powered devices, enabling a more flexible, functionality-oriented design.

A wide range of energy-autonomous applications based on energy-harvesting wireless technology is available today, including batteryless switches; intelligent window handles; temperature, moisture, light, and presence sensors; relay receivers; heating valves; control centers; and smart home systems. In contrast to battery technology, energy harvesting has considerable potential both to improve the efficiency of the three energy sources described above -- motion, light, and temperature differences -- and to develop new ones.

Batteries will never disappear, and for some applications they'll remain a necessity. But from a design, environmental, and reliability standpoint, energy harvesting is the technology with a future. As sensor networks grow into the Internet of Things, where billions of small devices get connected, this will become even more true.

Jim O'Callaghan is the president of EnOcean Inc.

NASA 3D-Printed Rocket Engine Is Ready

3D printing has been used to manufacture almost everything you can imagine, from action figures to guitars. It has even been used to create parts for robots and automobiles. It looks as though a different form of 3D printing will be used to send rockets into space.

NASA recently teamed up with aerospace/defense company Aerojet Rocketdyne to develop a rocket engine constructed through an additive manufacturing process. The additive process (a form of 3D printing) can be accomplished through several methods. Selective laser sintering fuses small particles of metal into a desired shape. Fused deposition modeling extrudes molten material through a nozzle, creating shapes one layer at a time. Stereolithography uses a vat filled with curable resin that hardens one layer at a time.

Another form of additive printing is known as selective laser melting (SLM), which is the method both companies used to create their functioning prototype rocket engine. SLM uses a high-powered laser to melt the metal into a particular shape determined by a CAD file. Specifically, the collaborative parties used the SLM process to create a liquid-oxygen/gaseous hydrogen rocket injector assembly, which is a critical part of the rocket engine, since it is the part responsible for controlling the combustion process.

Traditionally, these parts are tested in controlled settings to determine their performance and quality before they are assembled into the engine itself. These are known as SLS (Space Launch System) acoustic tests, in which engineers listen for anything that may be off during a test firing. The injectors used in these tests are traditionally smaller than their full-scale counterparts. They usually take around six months to fabricate, are built from four parts joined by five welds, and cost upwards of $10,000 each.

By contrast, NASA engineers fabricated the same test injector in one piece using the SLM method, an industrial-grade printer, and Inconel (a nickel-chromium superalloy) powder. After fabricating the part, they used a minimal amount of machining to clean it up and a series of computer scans to check for defects.

The whole process of manufacturing the injector took only three weeks, at a cost of about $5,000 -- a significant reduction in both time and cost compared with its traditionally made counterpart. Fabricating the injector itself took only 40 hours; the rest of the time was spent polishing and inspecting it before it headed off to the test stand. Two injectors were made and subjected to 11 main-stage hot-fire tests totaling 45 seconds of burn time at temperatures up to 6,000°F. Not only did the injectors handle the test fires successfully, they showed little to no damage after being subjected to extreme heat and pressure.

Computer Vision Enhances Crop Yield

Wine making is big business, practiced in almost every country in the world. Companies take great pride and care in maintaining their vineyards, with science and art both playing major roles in grape yield and flavor.

A number of factors contribute to the grapes' overall flavor and yield, including soil make-up, altitude, climate, terrain, and even hill slope.

Beyond taste, yield itself is measured either as the weight of grapes per unit of vineyard area or as the volume of wine produced per unit area. Wineries differ in their processing techniques, which depend on the grape variety being used for their respective styles of wine. Those variables determine how much wine can be made from each unit of vineyard area, making it nearly impossible to get an exact yield count for conversion to the wine's actual volume.

The traditional way to estimate the total yield is to have personnel physically count the grapes on each vine, which is costly and can damage the crops.

To overcome this obstacle, roboticists from Carnegie Mellon University's Robotics Institute have designed a system that uses computer vision to count grapes for an exact crop yield.

The robotics team, led by Stephen Nuske, built the automated system around an HD camera that takes five high-resolution pictures every second while mounted on an off-road vehicle. A lighting rig on the vehicle directs illumination at the vines; imaging is done at night to eliminate variable sunlight patterns that would interfere with the team's system.

A laser scanner mounted alongside the HD camera images the grape clusters in finer detail and can detect grapes as small as 4 mm in diameter. The collected data, about one terabyte in all, is then fed into a computer-vision system that uses specialized algorithms to pinpoint grape clusters in both the images and the scans.

The system converts the total berry count into a harvest yield estimate within a margin of error of 9.8 percent of the crop weight per vineyard row. That compares very favorably with estimates from human counters, which are typically accurate only to within 30 percent of the total harvested weight.
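
At its core, the conversion from berry count to harvest weight is a multiplication with an error band around it. A toy version of that calculation, using an invented berry count and berry weight alongside the 9.8 percent figure reported above, looks like this:

```python
# Toy yield estimate from a berry count; the count and berry mass are invented.
berries_counted_per_row = 48_000
mean_berry_weight_g = 1.5            # assumed average berry mass
error_margin = 0.098                 # the 9.8% margin reported for the CMU system

estimate_kg = berries_counted_per_row * mean_berry_weight_g / 1000.0
low_kg = estimate_kg * (1 - error_margin)
high_kg = estimate_kg * (1 + error_margin)
print(f"estimated row yield: {estimate_kg:.0f} kg ({low_kg:.0f}-{high_kg:.0f} kg)")
```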

The roboticists' system also produces a 3D map based on the laser scanner's imaging, showing areas where growth is thin or thick so vineyard workers can take action to promote healthy growth. The team hopes to refine the computer-vision system, which can also be mounted on ROVs for a fully automated setup, for wineries (and other berry-producing farms) sometime next year. That raises the question, though, of whether it will actually be adopted in vineyards, and if so, whether field workers will lose their jobs.

Can 48V Be the Auto Industry's Next Big Thing?

Another new powertrain technology may be poised to bring major change to the auto industry by delivering the advantages of hybridization without some of the complexity.

This technology, known simply as 48V, combines a dual-voltage setup with the well-known advantages of start-stop technology. By doing so, it more effectively captures a vehicle's braking energy, provides more power for a growing list of electrical loads, and simultaneously boosts fuel efficiency -- possibly by as much as 15 percent.

"We believe that, by the end of this decade, 48V will become a significant part of the market," Craig Rigby, vice president of product management strategy for Johnson Controls Inc. (which has developed a 48V product), told us. "It's probably the next technology after start-stop that will make sense for the mass market consumer."

If you're a bit skeptical because you think you've heard some of this before, you're not alone. During the late 1990s, auto industry executives talked up a technology known as 42V. After a few years of contentious debate, however, 42V died an inauspicious death. But suppliers say this technology is different; unlike 42V, it could offer a big jump in fuel economy. Moreover, the regulatory environment has changed, and battery technology has improved.

The 48V configuration, supported by a number of automotive suppliers, calls for a conventional 12V network using a lead-acid battery like those employed in most conventional vehicles. However, it adds an extra layer: a 48V lithium-ion battery with a separate 48V network. The 12V network handles traditional loads: lighting, ignition, entertainment, audio systems, and electronic modules. The 48V system supports active chassis systems, air conditioning compressors, and regenerative braking.

One of the keys to 48V is lithium-ion battery technology, which wasn't available at the time of the ill-fated 42V systems. Engineers say lithium-ion technology offers about three times as much energy density as lead-acid chemistry. "There's a size and weight advantage to using a lithium-ion battery," Rigby said. "Back in the '90s, we were looking at stringing three lead-acid batteries together to get that voltage."

More importantly, the 48V lithium-ion battery has more charging capability, making it a better candidate for capturing regenerative braking energy. "It really comes down to the notion of regenerative braking -- harvesting the kinetic energy from the vehicle during deceleration and storing it in the battery," Rigby said.
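
To see why charging capability matters here, consider how much kinetic energy a single stop makes available. The sketch below uses an assumed vehicle mass, speed, and recovery efficiency (illustrative figures, not Johnson Controls data):

```python
# Kinetic energy available in one stop: E = 0.5 * m * v^2 (assumed figures).
mass_kg = 1500.0                    # assumed mid-size vehicle mass
speed_mps = 50 * 1000 / 3600        # 50 km/h expressed in m/s
recovery_efficiency = 0.6           # assumed fraction actually captured

energy_wh = 0.5 * mass_kg * speed_mps**2 / 3600.0
print(f"{energy_wh:.0f} Wh available per stop, ~{energy_wh * recovery_efficiency:.0f} Wh recoverable")
```

The quantity per stop is small, but it arrives as a short, high-power burst -- exactly the kind of charge pulse a 48V lithium-ion pack accepts far more readily than a 12V lead-acid battery.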

Slideshow: Ford Steps Up Sustainable Materials

Ford Motor Co. is stepping up its use of recycled, biobased, and recyclable materials in components throughout the car and increasingly across models. This is part of its ongoing efforts to cut vehicle carbon dioxide emissions and waste that goes to landfills.

The automaker has been famously ahead of the curve in using renewable and sustainable materials in its automobiles. We've told you about its work to turn recycled bottles into car seats and its collaboration with Weyerhaeuser on a cellulose-based composite for car interiors. Ford uses sustainable materials in many of its car components: seat cushions, carpeting, head rests, door bolsters, instrument panels, heater and air-conditioner housing, roof lining, fan shrouds, replacement bumpers, seals and gaskets, wheel arch liners, underbody shields, cylinder head covers, and sound-dampening material.

"Our sustainable materials strategy is to develop environmentally friendly alternatives while we're not in a rush, and far enough ahead of time to introduce them in an orderly way," Carrie Majeske, Ford's product sustainability manager, told us.

From 2007 to 2011, Ford cut landfill waste 40 percent to 22.7 pounds per vehicle, and it hopes to lower this figure another 40 percent by 2016. The Van Dyke Transmission Plant in Dearborn, Mich., recently became the first in North America to send zero waste to landfills. (Source: Ford)

Sustainable materials make good business sense by hedging against future price volatility in materials that may become scarce commodities. "There's also evidence that consumers increasingly want green products," she said. "Although it's not part of the purchase decision when buying a car, they like it after the fact when they know they have it."

Some materials come from surprising sources. Blue jeans are recycled as sound-dampening material, and tires become under-hood gaskets. The automaker uses recycled post-consumer plastics, post-industrial yarns, post-consumer nylon carpeting, and fiber-reinforced plastics. Materials being examined for future use include out-of-circulation US currency (for nonstructural plastic components), coconut fiber (to reinforce molded plastics), and an ingredient in Russian dandelions that might replace synthetic rubber.

Ford has set specific content standards. Since the 2009 model year, seat fabrics used in any new vehicle must contain at least 25 percent recycled material. Electric and hybrid vehicles are meeting even higher standards -- several already are using fabrics made from 100 percent recycled materials.

"For a lot of sustainable and renewable materials, we do the R&D on them internally and then seek suppliers for production," Majeske said. "We also get ideas for new materials from suppliers. The wood fiber material we developed with Weyerhaeuser is about to go into production, and other new materials are already in the pipeline. We also want to take some existing renewable materials to the next level and make them recyclable."

Right now, Ford uses many recycled materials but few biobased ones. The automotive environment presents some major challenges for bioplastics. For example, materials must withstand temperature extremes and UV cycling and last 10-20 years, Majeske said. They must also be odorless, have few or no volatile organic compound emissions, and meet cost targets similar to those for virgin, petroleum-based plastics.

Video: Kinect Sensor Enables Sign Language Recognition & Translation

A collaboration between researchers from Microsoft Research Asia and the Institute of Computing Technology at the Chinese Academy of Sciences (CAS) has produced a sign-language communicator built on the Kinect sensor.

The idea is nothing new to Microsoft's researchers; the company filed patents related to sign language early in the Kinect's development. The researchers' system allows users who know sign language and users who don't to interact with one another by translating sign into a wide variety of languages and back again.

The system has two modes. In Translation Mode, it translates American Sign Language (other sign languages are supported as well) into text or speech for non-signing users. In Communications Mode, it translates a hearing user's language into sign through an onscreen avatar: sentences are entered on a keyboard and then rendered as sign, text, or both. Users on both ends do not have to wait for one another to finish before responding, because the translation happens almost in real-time -- which is remarkable, to say the least.

The research team developed specialized algorithms that track deaf or hard-of-hearing users' hands while they sign, using "3D motion-trajectory alignment"; the motions are analyzed and then matched against words, phrases, or even whole sentences. The system works surprisingly well, and tests have shown that hearing-impaired and hearing users can communicate at a natural pace.
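
The article describes the matching step only as "3D motion-trajectory alignment." One standard way to align two motion sequences of different lengths is dynamic time warping (DTW); the sketch below illustrates that general technique with a tiny invented sign dictionary, and is not the researchers' actual algorithm:

```python
import math

Point = tuple[float, float, float]  # (x, y, z) hand position per frame

def dtw_distance(a: list[Point], b: list[Point]) -> float:
    """Dynamic time warping distance between two 3D trajectories."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(a[i - 1], b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# Match an observed trajectory against a tiny, invented sign dictionary.
dictionary = {
    "hello": [(0, 0, 0), (0.1, 0.2, 0), (0.2, 0.4, 0.1)],
    "thanks": [(0, 0, 0), (0.0, -0.1, 0.2), (0.0, -0.2, 0.4)],
}
observed = [(0, 0, 0), (0.12, 0.18, 0.02), (0.21, 0.39, 0.09)]
best = min(dictionary, key=lambda sign: dtw_distance(observed, dictionary[sign]))
print(best)  # -> hello
```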

The team attributed the system's initial success not only to its own work but also to teachers and hearing-impaired students from Beijing Union University, who provided real-world sign-language data. The system could benefit a tremendous number of users worldwide who don't have translators readily available. Companies and businesses would benefit as well, since a greater pool of talent would become available for positions from which deaf and hard-of-hearing candidates are otherwise excluded.

The language translation project was done using the first-generation Kinect sensor. It will be interesting to see what the researchers can do with the Kinect 2 sensor, whose tracking and audio capabilities go well beyond its predecessor's. The remaining questions are how much it will cost and when it will become available; chances are it will be affordable enough for home use and will arrive sometime in the near future.

Software Engineering Is Changing the Design Workflow

Smart products and devices tend to be very complex. Customers demand products whose functionality is just right for them; they also want products that look good and reflect their individual values. That often means creating a product the customer can customize. This wide scope of possibilities creates complexity: there are many requirements, and it's difficult to envision all the use cases. Implementing such a product so that it meets the requirements at a competitive price involves more engineering disciplines and specialties than a dumb product does. Finally, product testing and verification is challenging.

In the Victorian era, a product was often created by an individual inventor. Those days are long gone, and products have steadily grown in complexity. Smart products and devices represent a step change in complexity.

In the 1960s and 1970s, engineering management popularized the idea that design projects could have better outcomes if they were tackled by multi-disciplinary teams. References to this approach date back to the 1920s, when it was realized that organizing into departments and firms, each of them specializing in a particular expertise, tends to create barriers. Assembling a single team of individuals with all the necessary expertise produces a more innovative approach to solving design problems. The common purpose focuses effort on the project outcome and discourages suboptimization.

In the past 10 years, smart products and devices have added a new discipline to the mix: software engineering. That adds complexity and new modes of thought from the world of software development. In the short term, it creates some communication difficulties, especially when the software people come from a pure computer science background. However, we should also see this as an opportunity. Different engineering disciplines can learn best-practices from one another.

The world of software development has learned how to cope with rapid change in requirements and underlying technologies. As well as the traditional waterfall development model, a technique called agile software development has emerged. In this approach, requirements and solutions evolve through collaboration between self-organizing multi-disciplinary teams. The software development is iterative. Incremental solutions are released to users, who provide feedback and help develop requirements for the next iteration.

This technique could sometimes be applied to other engineering disciplines. The prerequisite is that different disciplines use a common language for requirements (especially derived requirements) and solutions.

Systems engineering and systems modeling make it easier to communicate how a solution's architecture satisfies the requirements by dividing the solution into manageable chunks. Simulation and visualization of solutions allow each discipline to communicate its solution to its peers.

The main mechanical engineering software suppliers are expanding their portfolios to include systems and software engineering tools. Dassault bought Geensoft, PTC acquired MKS, and Siemens PLM bought LMS. These tool sets are evolving rapidly. There are potential problems with agile development techniques. How do you test for safety when a product is used outside the use cases in the requirements? If the solution iteration triggers a change in a mechanical or electrical artefact, then costs can increase dramatically.

We do expect all these software engineering techniques to change the design workflow. There are things software engineers could learn from mechanical engineers, such as using more off-the-shelf components, rather than writing all code in-house from scratch.

We will be watching this first encounter with the "alien" world of software engineering with interest, looking to identify best-practices in the design workflow.

Mike Evans is the research director for Cambashi.

Mentor Graphics Releases New Automation Software

Embedded systems are at the heart of all our electronics today. Built around microcontrollers, microprocessors, or digital signal processors, they drive today's rapidly changing technology. While some embedded systems can get by with a small processor and little memory, others require extra computing power and external peripherals. It is not at all uncommon to see embedded applications that require entire operating systems. This is where system-on-chip (SoC) and system-in-package (SiP) designs can prove to be optimal choices. A typical SoC design can consist of the processor, external memory, a timing source, analog-to-digital converters, and communication interfaces such as USB, UART, SPI, or I2C.

Due to the increasing complexity of modern designs, debugging these systems can be extremely troublesome. Some bugs lie hidden until it is too late -- when the product is already past its deadline and entering production. The traditional debugging method involves hours of manual testing, which is never the most reliable route. Many companies are beginning to invest in software automation to handle much of their testing; the process can be more efficient and reliable, and it has a better chance of finding those hidden bugs.

Mentor Graphics' patented Questa Platform allows high-performance simulation in systems that integrate devices such as FPGAs and SoCs. As a result, companies can achieve more efficient design and verification management.

Mentor Graphics has just announced that it will be releasing an intelligent software-driven verification (iSDV) feature for its Questa Platform. iSDV automatically generates C programs that can run on multicore SoC systems, allowing companies to uncover more bugs earlier in the design process and thus save time, money, and trouble. The complexity of today's systems also makes iSDV a compelling choice for testing: automation software for single-core SoC systems is difficult enough to create, and multicore systems make the task far harder.
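
In spirit, software-driven verification means generating many small test programs that run on the SoC's own cores and exercise the bus fabric with varied traffic. The toy sketch below conveys that idea only; the peripheral names, addresses, and output format are invented and bear no relation to Questa's actual iSDV tooling:

```python
# Toy generator: emit tiny C test routines that exercise bus transactions
# from each core. Peripheral names, addresses, and burst sizes are invented.
import itertools
import random

PERIPHERALS = {"uart0": 0x4000_0000, "spi0": 0x4001_0000, "dma0": 0x4002_0000}
CORES = [0, 1]

def emit_test(core: int, name: str, base: int, burst: int) -> str:
    """Return the source of one C test routine doing a write/read burst."""
    return (
        f"void test_core{core}_{name}(void) {{\n"
        f"    volatile unsigned int *reg = (unsigned int *)0x{base:08X}u;\n"
        f"    for (int i = 0; i < {burst}; i++) {{\n"
        f"        reg[0] = 0xA5A50000u + i;   /* write burst */\n"
        f"        (void)reg[1];               /* read back    */\n"
        f"    }}\n"
        f"}}\n"
    )

random.seed(1)
tests = [
    emit_test(core, name, base, burst=random.choice([4, 16, 64]))
    for core, (name, base) in itertools.product(CORES, PERIPHERALS.items())
]
print("\n".join(tests))
```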

"To fully verify our performance SoC bus fabric subsystems, we have to generate all kinds of complex traffic scenarios. Using Questa's intelligent testbench automation we are able to achieve all of our performance and functional verification goals while shaving time off our schedule. With Questa iSDV we can run embedded C test programs with RTL-level testbenches, allowing us to fully verify our system under stressful, but realistic operational conditions, giving us the highest degree of confidence," Galen Blake, Altera's senior verification architect, commented.

The Questa Platform is already an extremely powerful tool for engineering design teams, and any company that can benefit from test automation will find the new iSDV feature a welcome addition. In some designs, writing C test programs by hand was impractical -- almost impossible. Mentor Graphics has put that once-impossible task within reach by creating a system that accommodates multicore SoC designs, allowing designs to grow more complex as better, faster, and more sophisticated systems are created.
