
Don’t Forget About Standby Power

Standby power management technology has enabled the resource-constrained IoT world, but not without trials.

Standby power refers to the electrical power that electronic systems consume while their primary functions are waiting to be activated. Standby power needs are often overlooked by system designers, but they are a crucial consideration in ensuring power is available for the smart devices that make up the Internet of Things (IoT).

Consider the design of a smart home, a dwelling that consumes zero net energy. To maintain zero net power consumption, the smart home must be capable of monitoring and controlling the main energy consumers – e.g., HVAC and lighting – as well as interfacing with energy sources such as solar panels/batteries and the power grid. Adding control and monitoring intelligence to the home will itself require energy. The trick is to make sure that the controlling and monitoring electronics don’t consume more power than the devices they manage. One part of this trick is to make sure that the smart systems pay attention to standby loads – those mischievous, power-draining loads that electronics and electrical appliances continue to draw even when they are turned off.

In addition to – or often as part of – control and monitoring electronics, connectivity transceivers like RF and wireless radios are another reason why standby power awareness is so important. Most of our modern appliances and control devices constantly consume a trickle of power to be ready to perform updates, connect to edge or cloud servers, listen for our voice commands, and the like.

Numerous studies attest to the amount of energy lost to standby power consumption by devices not in use. According to a report from the Natural Resources Defense Council (NRDC), an international nonprofit environmental organization, always-on but inactive devices can cost Americans $19B annually. That comes to about $165 per U.S. household on average – and 50 large (500-megawatt) power plants’ worth of electricity.
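A quick back-of-the-envelope check, sketched in Python below, shows how those figures fit together. The household count is my own assumption (roughly 115 million U.S. households), not a value from the NRDC report.

```python
# Back-of-the-envelope check of the NRDC figures. The household count
# is an assumption for illustration, not a value from the report.
TOTAL_COST_USD = 19e9     # NRDC estimate: annual cost of always-on devices
US_HOUSEHOLDS = 115e6     # assumed number of U.S. households
PLANTS = 50               # "50 large power plants"
PLANT_MW = 500            # each rated at 500 megawatts

cost_per_household = TOTAL_COST_USD / US_HOUSEHOLDS
total_capacity_gw = PLANTS * PLANT_MW / 1000

print(f"~${cost_per_household:.0f} per household per year")   # ~$165
print(f"~{total_capacity_gw:.0f} GW of generating capacity")  # 25 GW
```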

Further, Berkeley Lab notes that standby power is responsible for roughly 1% of global CO2 emissions.

What are the best ways to reduce the impact of standby power? Let’s consider one approach that looked promising but has so far failed, and another, more integrated approach that has proven successful.

Image source: Natural Resources Defense Council (NRDC)

Near-Threshold Voltage Technology

The threshold voltage is the point at which a transistor turns on and conducts electricity. Operating devices near this threshold can increase energy efficiency by an order of magnitude compared with running at normal operating voltages. Further, by moving the supply voltage closer to – but still above – the transistor threshold voltage, the effects of common transistor leakage current can be dramatically reduced.
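The intuition follows from the standard first-order CMOS model, in which dynamic switching power scales as P = C·V²·f. The short sketch below is illustrative only; the voltage and frequency values are assumed examples, not figures from any particular process.

```python
# First-order CMOS model: dynamic switching power P = C * V^2 * f.
# The voltage and frequency values below are assumed examples.
def dynamic_power(capacitance: float, voltage: float, frequency: float) -> float:
    """Normalized dynamic power: P = C * V^2 * f."""
    return capacitance * voltage**2 * frequency

p_nominal = dynamic_power(1.0, voltage=1.0, frequency=1.0)   # nominal operation
p_ntv = dynamic_power(1.0, voltage=0.45, frequency=0.2)      # near-threshold, slower clock

print(f"NTV power: {p_ntv / p_nominal:.1%} of nominal")      # ~4% of nominal
print(f"NTV energy per op: {0.45**2:.0%} of nominal")        # ~20%, since it scales with V^2
```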

Intel first introduced a near-threshold voltage (NTV) processor, code-named Claremont, during the 2011 Intel Developer Forum (IDF). Claremont was a prototype chip that could allow a computer to power up on a solar cell the size of a postage stamp. My tweet from the IDF event referenced a solar-powered Claremont demonstration in which the chip played a short video clip of a playful kitten. When the sun was out, the NTV-powered video had enough power for the cat to dance. But when it rained (i.e., the sun’s energy was clouded over), the cat video froze and the dancing stopped.

The Claremont relied on an ultra-low-voltage circuit to greatly reduce energy consumption. This class of processor operates close to the transistor’s turn-on threshold voltage. Threshold voltages vary with transistor type, but they are typically low enough that a postage-stamp-sized solar cell can supply the necessary power.

The other goal for the Claremont prototype, fabricated at 32nm, was to extend the processor’s dynamic performance – from NTV to higher, more common computing voltages – while maintaining energy efficiency.

The Claremont prototype showed that the technology works for ultra-low-power applications that require only modest performance. Reliable NTV operation was achieved using unique, IA-based circuit-design techniques for logic and memories.

Unfortunately, further developments were needed to create standard NTV circuit libraries for common, low-voltage CAD methodologies. Specifically, NTV designs required a re-characterized, constrained standard-cell library to achieve such low corner voltages. Creating such libraries took time and money plus the support of EDA tool vendors and foundries. To date, that support has not been forthcoming. For these reasons, the adoption of NTV technology in the commercial market has been very slow.

Fortunately, the continued march of Moore’s Law beyond the 32nm geometry of the original Claremont prototype has improved the power efficiency of silicon chips – thus removing the urgency for NTV technology.

Image source: Intel / IDF 2011 / Claremont

Power Management Technology

A power-management system controls, regulates, and distributes power in an electronic system. Power management technology was first implemented in PCs back in 1989, when Intel developed processors that could be slowed down, suspended, or even used to selectively turn off power to different parts of the system platform (e.g., the hard drive) to reduce energy consumption and increase battery life.

Today’s embedded and IoT devices have built upon the power management approaches of the past. To make power design and control more practical, designers have separated the power domain into three power subsystems or sectors: microcontroller, RF and wireless transceivers, and sensors and actuators. These sectors, sometimes known as power islands, align with the typical operational use case of an embedded or IoT device, where data is acquired by sensors, sent or received via wired or wireless connectivity, and used to control mechanical devices like actuators.

Regardless of the subsystem, both hardware design and software techniques are needed to make efficient use of battery storage and wall-outlet power sources. Hardware design includes the careful selection of processor, memory, interface, and passive components to optimize power consumption not only during operation but also at rest. Software power management techniques include switching off peripherals when they are not in use and adjusting the frequency and voltage of the CPU according to performance requirements. The latter is known as dynamic voltage and frequency scaling (DVFS).
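As a rough illustration of DVFS, the sketch below picks the lowest frequency/voltage operating point that still covers the current CPU load. The operating-point table is hypothetical; real tables come from the silicon vendor’s datasheet.

```python
# Minimal DVFS governor sketch. The operating points below are
# hypothetical examples, not values for any real part.
OPERATING_POINTS = [
    # (frequency_mhz, voltage_v) - power scales roughly with C * V^2 * f
    (48, 0.90),
    (96, 1.00),
    (192, 1.15),
]

def select_operating_point(cpu_load: float):
    """Pick the lowest frequency/voltage pair that covers the load (0.0-1.0)."""
    max_freq = OPERATING_POINTS[-1][0]
    needed = cpu_load * max_freq
    for freq, volts in OPERATING_POINTS:
        if freq >= needed:
            return freq, volts
    return OPERATING_POINTS[-1]

freq, volts = select_operating_point(cpu_load=0.4)
print(f"Run at {freq} MHz / {volts} V")  # 96 MHz / 1.0 V covers a 40% load
```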

Another technique is power gating, which is used to lessen the power leakage of smaller chip process nodes. Power gating works by turning off the supply voltage of unused circuits. It does incur an energy overhead; therefore, unused circuits need to remain idle long enough to compensate for this overhead.
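That trade-off can be framed as a simple break-even calculation: gating pays off only when the leakage energy saved during the idle period exceeds the energy spent switching the domain off and back on. The numbers in the sketch below are assumed for illustration only.

```python
# Break-even sketch for power gating (all numbers are assumed examples).
# Gating saves leakage power while idle, but switching the power domain
# off and back on costs energy; the idle period must amortize that cost.
E_OVERHEAD_UJ = 5.0    # assumed energy to gate off and restore the domain (uJ)
P_LEAKAGE_MW = 0.2     # assumed leakage power saved while gated (mW)

# Gate only if P_leakage * t_idle > E_overhead:
break_even_ms = E_OVERHEAD_UJ / P_LEAKAGE_MW  # uJ / mW = ms

print(f"Gate only if idle longer than ~{break_even_ms:.0f} ms")  # ~25 ms
```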

In a chip or printed circuit board, the processor controls most power management actions via software. For example, most modern processors support three levels of sleep to conserve power:

  • No sleep – The device is always-on, always consuming power.
  • Light sleep – In this mode, the processor is often suspended, and its internal clock is turned off.
  • Deep sleep – In this state, everything is turned off except the RTC (real-time clock), which can wake the device periodically. This is the most efficient mode. It is used when the device needs to act only at specific intervals – e.g., to wake, read sensor data, transmit it, and then go back into deep sleep.
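The payoff of deep sleep comes from duty cycling: average current is dominated by how long the device stays asleep between bursts of activity. The sketch below estimates battery life for a hypothetical sensor node; the currents and timings are assumed examples, not datasheet values.

```python
# Duty-cycle estimate for a deep-sleep IoT node. All values are assumed
# examples; check your MCU and radio datasheets for real figures.
I_ACTIVE_MA = 20.0     # awake: sensing plus transmitting
I_SLEEP_MA = 0.005     # deep sleep: RTC only (5 uA)
T_ACTIVE_S = 2.0       # awake for 2 seconds...
PERIOD_S = 600.0       # ...every 10 minutes
BATTERY_MAH = 2000.0   # e.g., two AA cells

# Time-weighted average current over one wake/sleep cycle.
avg_ma = (I_ACTIVE_MA * T_ACTIVE_S
          + I_SLEEP_MA * (PERIOD_S - T_ACTIVE_S)) / PERIOD_S
life_days = BATTERY_MAH / avg_ma / 24

print(f"Average draw: {avg_ma:.3f} mA")         # ~0.072 mA
print(f"Battery life: ~{life_days:.0f} days")   # roughly three years
```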

Moving on to the RF and wireless transceiver power subsystems, low-power wireless technology options like Bluetooth, Zigbee, and Wi-Fi must be balanced against performance needs, battery life, and data throughput. Such an evaluation is heavily application-dependent and will be covered in another article. It’s sufficient to note that, like the processor, RF and wireless subsystems can also be placed in standby mode to conserve energy.

Standby power techniques can also be applied to sensors and actuators. IoT sensors typically spend significant amounts of time in sleep mode, so idling the device for low sleep power is an obvious technique. Naturally, energy consumption of sensing devices varies widely with the requirements of the application. Often it is best to consider the standby power needs of the entire system. For example, smart sensors are capable of performing basic computations, thus removing some of the compute (and power) loads from the main system processor.

A systems approach to standby power management – and power management in general – will permit a balanced consideration of all power needs plus the timing of those needs. An understanding of the overall power requirements will enable a partitioning of power resources matched with the appropriate implementation technologies to ensure the entire system will operate as desired.

Image Source: Photo by Thomas Jensen on Unsplash

John Blyler is a Design News senior editor, covering the electronics and advanced manufacturing spaces. With a BS in Engineering Physics and an MS in Electrical Engineering, he has years of hardware-software-network systems experience as an editor and engineer within the advanced manufacturing, IoT and semiconductor industries. John has co-authored books related to system engineering and electronics for IEEE, Wiley, and Elsevier. 
