
Power Saving Techniques

Two things will make people want to use less power: not giving them much to start with and making it prohibitively expensive. Both of these scenarios seem to be dovetailing right now, with many devices shrinking and energy becoming ever more expensive and sought after.

Sure, there are people out there trying to create and harvest more energy, whether through more drilling, more wars, more acquisitions or new technologies. But eventually, people start to question why we are using so much energy in the first place. Instead of running device batteries into the ground quickly, why not draw less current? Instead of putting a bigger, more expensive battery on a device, why not come up with new techniques to conserve power? Instead of paying high prices for energy and polluting the environment, why not conserve energy in our devices so that we don’t need as much energy overall?

Here are some of the methods that designers are using in increasing numbers to reduce power consumption:

  1. New chips — The basic idea is the same for any chip: try to get the same or better performance than today’s chips with incrementally less power. Most often, the best way to do so is to reduce the number of electrons it takes to store a value or drive another circuit (or whatever your task may be). However, there is a lower limit to how few electrons are required to complete a task (one, duh). How do we get fewer electrons doing these tasks?
    • Smaller geometries — Moore’s law tells us that process technology will allow the number of transistors on a chip to roughly double every 18 to 24 months. This could even happen at a faster rate than previously thought, according to one of my favorite futurists, Ray Kurzweil. As fabrication facilities race to leapfrog one another to the next smallest process technology, they also help to reduce the number of electrons running through a device. If you look at the path of an electron along a trace on a microchip or op amp, it resembles a “tunnel” that electrons flow through. As process technologies get smaller and smaller (32 nm, anyone?), there is less room for electrons to flow through and thus less power is used.
    • New materials — If you have fewer electrons flowing through a semiconductor, that means the total current flowing through the semiconductor is lower (current is defined as the amount of charge flowing past a point per unit time, i.e. Coulombs per second). While less current can also mean less noise (fewer electrons bumping into other molecules and heating them up), it also means that if there is more resistance in a connection between two points, it will be harder for the electrons to travel that distance. As such, semiconductors are now made with new doping compounds (the impurities forced into the silicon), or they forgo silicon entirely and try new materials (Gallium Arsenide is a good example). These new materials allow for more efficient transistors and lower power consumption in devices.
    • New architectures — National Semiconductor has been pushing a new, more consistent power metric called “PowerWise”; it is targeted towards the mobile market and the “green revolution”. While this is a bit of a marketing move, it also helps to highlight their most efficient products across the different product types (LDOs vs switching regulators vs op amps, etc.). Some of these newer, higher efficiency products use new architectures, as in the case of some of the switching regulators.
    • Lower supply voltages — This one affects me on a more regular basis. Sure, a lower potential across a junction will drive less current in the off state (lower Iq) and will produce less noise; but it also throws a wrench in the works if you’re trying to find parts that can drive significant currents or accept a wide range of input voltages without bootstrapping the supplies.
  2. PWM — Pulse Width Modulation (or PWM) is an easy way to reduce power in LED lighting situations. The idea is based on the fact that the human eye cannot detect the flicker of a light if it is switched faster than a certain frequency; pulsing an LED on and off quickly will appear to the human eye as a lower intensity than an LED lit continuously. This idea is used regularly in portable electronics to dim the “backlight” of a laptop screen, cell phone, GPS device, etc. The duty cycle is the time that a device is on divided by the total period (on time plus off time); usually it is given as a percentage. So if an LED is lit for 1 second and then off for 3 seconds, the duty cycle is 1 second divided by 4 seconds, or 25% (in practice the switching happens many times per second, but the math is the same). In that example, the LED would appear to be roughly one quarter as bright as a fully powered LED, but will also save a little less than 75% of the power normally required. The power saved can never be the entire difference between the normal case and the PWM case because some amount of power is required to switch between the on and off states. A minimal software-PWM sketch follows the list below.
  3. Microcontroller/Code Improvements — One of my favorite new blogs, written by Rick Zarr of National Semiconductor, has two great posts about the energy content of software. In them, he points out some of the ways that software can intelligently shut down portions of a system in order to reduce redundant processes and save on processing power. However, the points that I really like are the ones he makes about building the simplest possible solution that will still get the job done well. This could mean cutting out some software libraries that were easier to just include in a project, or learning how to properly structure a software project. Other techniques could be a combination of better coding and PWM: putting a device to “sleep” for a set period of time, only to have it wake up at set intervals to see if it is needed (see the sleep/wake sketch after this list).
  4. Going Analog — One last great point that Rick makes in his first post about energy saving techniques in software actually relates more to hardware. Instead of using a DSP, an ADC and some coded FIR filters, why not pull the filter back into the analog domain? Sure, it’s a little more difficult at the beginning, but there won’t be any quantization error (the error that comes from approximating a continuous signal with discrete digital values). For many simpler applications, analog engineers can do the same task with an active filter that digital engineers do with a digital filter. With a lower part count and without the strain on the system of converting a signal from analog to digital and back again, designers can save some significant power (a rough back-of-the-envelope comparison is at the end of the sketches below).
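
To make the PWM duty-cycle math from item 2 concrete, here is a minimal software-PWM sketch in C. The set_led() and delay_us() functions are hypothetical placeholders for whatever your board support code provides, and the 1 kHz period and 25% duty cycle are just example numbers; a real design would normally use a hardware timer/PWM peripheral instead of busy-waiting.

```c
/* Minimal software-PWM sketch for LED dimming (illustrative only).
 * set_led() and delay_us() are assumed to come from your board
 * support code; they are not part of any particular library.
 */

#define PWM_PERIOD_US   1000u   /* 1 kHz period -- well above the flicker threshold */
#define DUTY_PERCENT    25u     /* LED appears roughly one quarter as bright        */

void set_led(int on);           /* drive the LED pin high or low (assumed)          */
void delay_us(unsigned us);     /* busy-wait for the given microseconds (assumed)   */

void pwm_dim_led(void)
{
    const unsigned on_time  = (PWM_PERIOD_US * DUTY_PERCENT) / 100u;  /* 250 us */
    const unsigned off_time = PWM_PERIOD_US - on_time;                /* 750 us */

    for (;;) {
        set_led(1);
        delay_us(on_time);      /* LED on for 25% of the period  */
        set_led(0);
        delay_us(off_time);     /* LED off for the remaining 75% */
    }
}
```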
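
The sleep/wake idea from item 3 can be sketched the same way. The functions below (work_is_pending(), do_work(), enter_low_power_sleep()) are hypothetical stand-ins, not a real vendor API; on an actual microcontroller the sleep call would map to the part's low-power mode, with a timer or RTC interrupt waking it back up.

```c
/* Sleep/wake duty-cycling sketch (illustrative only).
 * All three functions are assumed placeholders for whatever your
 * MCU vendor's libraries actually provide.
 */

#include <stdbool.h>

bool work_is_pending(void);              /* check whether anything needs doing (assumed) */
void do_work(void);                      /* the actual task (assumed)                    */
void enter_low_power_sleep(unsigned ms); /* sleep until a timer wakes us (assumed)       */

void main_loop(void)
{
    for (;;) {
        if (work_is_pending()) {
            do_work();               /* run only when there is something to do */
        }
        enter_low_power_sleep(100);  /* spend the rest of the interval asleep  */
    }
}
```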
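
And as a rough illustration of what "going analog" in item 4 avoids, here is a back-of-the-envelope estimate of the digital work a modest FIR filter demands. The tap count and sample rate are made-up example numbers, not figures from Rick's posts.

```c
/* Back-of-the-envelope: how much digital work an analog filter can avoid.
 * The tap count and sample rate below are assumed example values.
 */

#include <stdio.h>

int main(void)
{
    const unsigned taps        = 64;     /* FIR filter length (assumed)  */
    const unsigned sample_rate = 48000;  /* samples per second (assumed) */

    /* Each output sample of an N-tap FIR costs N multiply-accumulates. */
    const unsigned long macs_per_second = (unsigned long)taps * sample_rate;

    printf("~%lu multiply-accumulates per second, plus the ADC/DAC overhead --\n"
           "work that a handful of op amps, resistors and capacitors can replace\n"
           "for simple filtering jobs.\n", macs_per_second);
    return 0;
}
```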

The final solution to our energy problems will be a combination of power saving techniques and new renewable energy sources. With some of the above techniques, designers will be able to use smaller batteries that allow longer run times and have less of an impact on the environment. Please feel free to leave any other power saving techniques you have heard of in the comments!

By Chris Gammell

Chris Gammell is an engineer who talks more than most other engineers. He also writes, makes videos and hosts a couple of podcasts. While analog electronics happens to be his primary interest, he also dabbles in FPGAs and system level design.
