Analog Electronics Digital Electronics Work

As big as…space

Last week on my electronics podcast, The Amp Hour, I did something uncharacteristic: I mentioned where I’m working, while I’m working there. Normally I don’t talk about my place of employment until after I have left, which has always served me well; there is no conflict of interest in talking about work that is protected by non-disclosure agreements (NDAs). This time is different though, because the nature of the company is different (and my role will be more public facing than my normal role as an engineer).

I’ve been working with Supply Frame part time, in addition to my work on Contextual Electronics. It has been great working with a team dedicated to making the supply chain (the complex system of vendors and distributors) a bit easier to navigate. As it so happens, Supply Frame also purchased Hackaday a while back, a popular blog site that I have always been a fan of. That site highlights fun projects from around the web and is a great way to keep up on recent innovations in personal projects.

So why all this explaining and build-up? Well I’ve also been asked to help out on a project as part of Hackaday. In fact, it falls in line with my past experience of running the 555 contest back in 2011.

Hackaday is sending the grand prize winner of a new design contest to space. Whoa.

When they first told me about this idea, I figured they must be joking. How the hell would this be possible? Well, it turns out there are more and more commercial space flight options opening up. These days with enough money (and yeah, it’s a lot of money), you can buy a ticket to ride. So that’s how we’re doing it.

The contest itself is really exciting as well. The goal is for people to build “open, connected hardware”. In my experience (with the 555 contest), you need a constraint to base the contest around; openness as a constraint is particularly interesting. Not only does it encourage people to design something cool (like an open source Nest Thermostat or similar), but it also then allows that hard work to be built upon later. I’m a huge fan of open source hardware and up until this point, the way most people are rewarded for their openness has been a community building up around their project (a prize unto itself); now they can also win a trip to space (or other prizes).

I put in a couple of emails to friends and acquaintances and we’re going to have a killer panel of judges as well; they’re all as interested in sending open hardware experts to space as we are. Bunnie, Limor, Dave Jones, Elecia, Jack Ganssle, Ian of Dangerous Proto, Joe Grand, Sprite_tm. We’re also announcing the final winner in Germany at the huge tradeshow Electronica.

Anyway, I’m super pumped and I hope you are too. I feel very lucky to be involved with such a fun project and hope lots of people will be interested in submitting an entry.

Analog Electronics Digital Electronics Learning OSHW

Contextual Electronics Announcement

“But Chris, what happened to your milling videos?”

“Well, the same thing that happens to lots of projects: they got re-prioritized!”

I really enjoy working with my new mill! It’s awesome and I’ve learned a ton. I didn’t post them here as separate blog posts, but I did post two new videos to my YouTube channel. Both were failures…but that’s ok! A large part of the decision to get the mill was the learning process. The first video covered figuring out problems using a half inch cutter and the second, doing profile cuts. But since then, I took a break.

I’ve been trying to take a new approach to projects in 2013 by focusing on one thing (two at most) at a time. As such, when a new project pops up that is more important, others fall behind (believe me, the state of maintenance of my house would agree).

So what?

Well, this is all because of my newest project, which has been brewing for a while. I’m calling it Contextual Electronics. This will be a 10-week course all about how to build hardware. Not only that, it will also have instructional videos about the particular part of the circuit we’re designing or troubleshooting, as we work on it. The information will be learned on an “as needed” basis, just like designers (like me!) need to do in the real world. People who participate in the class will also be building hardware together, so we can all learn at the same time; this will be especially important for a skill like troubleshooting, which can be a very nebulous topic to people just getting into hardware.

Here is the introductory video:

Also of note is something which hasn’t been announced previously on this blog, though I’ve talked about it on The Amp Hour. I’ve been selected as one of the first 8000 people able to buy Google Glass. While this does mean I’ll still need to purchase the glasses, I’m excited for them. Even more so now that I can use them to livecast troubleshooting sessions and other events for Contextual Electronics using Google+ hangouts.

So that’s all for now. If you’re interested in this idea and want to be alerted to updates as we move towards the start date of the course, fill out the form below and be sure to confirm through email. I’m excited! Hope you are too!

Blogging Digital Electronics

I’m on EETimes!

So I’ve been at ESC Boston since Monday, both as a participant and as a writer. It’s been a really cool experience meeting a lot of people in the technical writing field and a lot in the publishing industry (as well as those on the technical side of things, of course). And today, for the first time, I was published on EETimes in the EELife section. Check out a couple of my articles, linked below.

Any comments can be left here or on the specific article pages.

Analog Electronics Conferences Digital Electronics

Final Thoughts On The Embedded Community

This is part 3 of 4 in a series about ESC Chicago and the Sensors Expo and Conference. See previous posts about Day 1 and Day 2.

I imagine if a doctor were diagnosing the medical condition of the embedded community, he would walk into the tiny exam room, take one look at the embedded community sitting there in its socks and underwear on the crinkly disposable exam table cover and say:

“Yup, still fragmented.”

What do I mean by this? It means that even with my posturing about the need for community AND my lack of expertise in the topic, there are some undeniable rifts in the embedded community. And they will always be there. Why?

  1. Too many vendors with their own pieces of silicon
    • Guess what? Companies like making money! Amazing, right? I can name at least 5 monstrous companies that produce independent silicon chips, almost always with similar cores that rhyme with “schlARM”. They have their niche areas and peripherals that are used in that segment; example areas that a vendor might try to target are motor control, display processing, low cost, low power or RF. But in the end, the very things that distinguish them from their competitors and therefore allow them to make money also drive the community apart.
  2. Too many closed doors
    • Another problem on the vendor side can be the amount of information provided to the people working on their chips. Without open access to the information, users are forced into the “camps” of the vendors in order to access features buried within the silicon. Less mobility between chips means more fragmentation.
  3. Too much software
    • Well what about abstraction? For those out there that are more on the analog side of things, abstraction is writing code that isn’t controlling something directly. Think about it like you’re a teacher. You care a lot about turning the lights off in your classroom and want to teach your kids why it’s important in order to save energy. In a non-abstracted case, you would tell each of the kids to turn the lights off when it’s their turn. Perhaps Wednesday it’s Johnny and Thursday it’s Susie. So you tell them directly. Abstraction in the simplest sense would be assigning Bobby to remember whose turn it is each day of the week. That way, you only have to tell Bobby to have someone turn off the lights; it’s the same every time. Bringing it back to processors and the embedded community, if things were abstracted, you could always tell “Bobby” to do the same thing and he would have close to the same response each time. Well, there is such a thing that even a layman like me is familiar with: operating systems. But this isn’t like the PC world, where the choices have been culled down to a select two or three. There are embedded versions of larger OSes (think Win CE or Embedded Linux) and RTOSes (Real Time Operating Systems), which are even lighter versions of their half cousins named previously. Beyond that there are superloops and other small implementations. The point is, there are a lo00000t of choices of software for a looooot of different processors. It’s fragmented. But why all the trouble? Why do we need so many choices?
  4. Too many market segments
    • It’s true. That’s why embedded has been growing steadily for the past 20 years and will likely continue to keep growing. There are a lot of different needs! I guarantee you that engineers working on high-reliability industrial controls don’t care that much about Android. Sure, it could work, but it’s a new OS with lots of potential bugs and doesn’t really fit the needs. Similarly, handset makers don’t want to use reliable code from 10 years ago, because all the reliability in the world doesn’t make a flashy new interface for mobile, web-enabled handsets. Chip vendors pick and choose to play to specific segments, as do the software vendors, creating hundreds of potential combinations; any given developer, though, is likely working within a much smaller subset of those combinations. And so the fragmentation continues.
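
The “Bobby” abstraction in point 3 maps directly onto code. Here’s a toy sketch in Python; the register address, bit mask and class are all invented for illustration (real firmware would be C poking a memory-mapped register), but the shape is the same:

```python
# Pretend hardware layer: a fake register map. The address and bit
# assignment below are made up purely for illustration.
REGISTERS = {0x4002_0014: 0x00}  # stand-in for a GPIO output register

def write_register(addr, value):
    """Non-abstracted control: every caller must know the address."""
    REGISTERS[addr] = value

class LightController:
    """The abstraction ("Bobby"): the one place that knows the wiring."""
    GPIO_ODR = 0x4002_0014
    LIGHT_PIN = 0x04  # bit mask for the classroom lights

    def lights_on(self):
        write_register(self.GPIO_ODR, REGISTERS[self.GPIO_ODR] | self.LIGHT_PIN)

    def lights_off(self):
        # Clear only the light bit, leaving the rest of the register alone.
        write_register(self.GPIO_ODR, REGISTERS[self.GPIO_ODR] & ~self.LIGHT_PIN)

# Callers just say "lights off" -- the same call every time, no matter
# what hardware sits underneath.
bobby = LightController()
bobby.lights_on()
bobby.lights_off()
```

If the board changes, only LightController changes; in the embedded world, an OS (or RTOS, or even a superloop with a thin hardware layer) plays the role of Bobby.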

I know that analog is my niche and that there are some very compelling cases for using it in different areas of electronics. But I’m not stupid; there was a reason I took interest in the embedded space and why you should do the same. Everyone will continue to expect more from their devices, whether scientific, consumer or somewhere in between; if you’re on the internet reading this post, you’re likely used to the benefits of Moore’s Law and will continue to be.

What I’m trying to say is that there is value in learning about embedded systems; learning about some component of embedded computing is better than ignoring it. As software continues ascending into further levels of abstraction (think Python instead of C), there will be fewer people around that know how to reach down into silicon and flip a bit. Knowing how to do so will not only help you in your day to day tasks, but could also make you a very employable engineer/programmer.
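
To put “flip a bit” in concrete terms, here are the three classic mask operations, sketched in Python for readability (the register and bit layout are invented; on real silicon this would typically be C writing to a memory-mapped peripheral register):

```python
# Classic bit-twiddling on a control register. A plain integer stands in
# for the register; bit 3 as an "enable" bit is a made-up assignment.
ENABLE_BIT = 1 << 3   # 0b1000

reg = 0b0001          # starting register contents

reg |= ENABLE_BIT     # set the bit:   reg is now 0b1001
reg &= ~ENABLE_BIT    # clear the bit: reg is now 0b0001
reg ^= ENABLE_BIT     # flip the bit:  reg is now 0b1001

print(bin(reg))       # prints 0b1001
```

The same three operators do the same job in C, just against a volatile pointer instead of a plain variable.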

And who knows, perhaps embedded design will be the next black magic, much like analog is considered today!

Blogging Conferences Digital Electronics Learning

ESC Chicago and Sensors Conference and Expo, Day 2


What a whirlwind day. I started at 7:30 am and I ended at 10:30 pm, my mind still reeling. I was talking Beagle Boards and Agile processes in the morning and discussing the media (with the media) and visiting hackerspaces in the evening. But the best part about it? I felt like there were a lot of people around me that cared about similar stuff to what I do.

I am lucky enough to work with some of these people as well. But the nerd population in Cleveland isn’t at the critical mass that occurs at conferences nor at hackerspaces. So yesterday was a great opportunity to converse on some of the topics I love with people who were interested to hear it (in person of course, I realize there are many great people who “listen” to me on this site…thanks!).

The theme I kept finding throughout the day (or perhaps was seeking) was figuring out where the communities are and why hardware engineers (or even embedded engineers) don’t seem to congregate in one place. This started in the morning talking to James Grenning, a consultant and coach on Agile methodologies; I got to talking with him and found that he has seen many of the same issues in the embedded community that I’ve seen in trying to find analog communities. “Where are they?” we ask. “Why doesn’t there seem to be as much involvement online from the electronics community?” James specializes in bringing Agile to the embedded community; it’s easy to find people to speak with about the Agile part, less so the embedded folks.

Towards the end of the day, I had the opportunity to talk to the folks at Element 14, a new engineering community site. I had heard about them earlier in the day; they were on the conference floor giving away iPads and the usual conference swag. How does a “community” site have the money to attend a conference though? I later found out that they are a subsidiary of Premier Farnell, one of the top 10 electronics components distributors and recent acquirers of the EAGLE CAD program. As of right now, I’m underwhelmed with the site itself (NI uses the same interface and it’s not all that friendly to the eye or the user), but not necessarily the content. It seems to have some involvement right now, but not the levels that I really desire (I’m hard to please!); I do like that they have qualified “experts” on hand, but I haven’t taken a good enough look at them yet to judge how “expert” they might be (assuming I could even tell something like that). I will keep an eye on Element 14 though, because of one of their innovative programs: linking manufacturers to customers. They offer beta services, as in finding and requesting feedback from users on a range of products. In terms of the value a site can have for both the user and the sponsors, I believe this is a strong one.

Next I got to meet Karen Field, the head of the new EETimes community, EELife. While this site hasn’t been released yet, the article I saw on it looked like a fancy implementation of a location to share info with other engineers. In fact, it may have been a little too fancy–the release of the site was pushed out from its proposed date. Still, I’m hopeful that the site could actually bring people together. EETimes has a great following in print and online; if there’s one place that people might think to go to first, EETimes might have the name recognition to do it.

Here’s the thought that keeps irking me though: the corporate world isn’t great at “social”. It doesn’t help that engineers aren’t quite social creatures by nature. Sure, some companies use social media to their advantage, but a lot more are using it wrong.

I got a chance to talk to Jason Kridner about why this might be. Jason is one of the many passionate members behind the Beagle Board group, a high-powered open-source hardware board based on the TI OMAP processor (though Jason explained that the Beagle Board is a separate entity in every way from TI). When I asked him why communities such as the Beagle Board developers come together, he stated it simply and succinctly: “They unite behind a common purpose”. In the case of the Beagle Board, it’s about having a high power processor on an open platform (possibly contrasted with a slightly simpler Arduino board using an AVR processor, also on an open platform). And the community shows; there are many open projects you can pull down from the Git tree and start immediately on your Beagle Board. The ones that excite me most are the DIYdrone types of projects.

At the meet and greet later in the evening, I started talking with some conference attendees who also happened to be members of the local hackerspace. They invited me to attend one of their weekly meetings at their location. The Pumping Station 1 (PS1) hackerspace/makerspace has been around for about 2 years now and is one of the only ones in Chicago (more are forming). It was great seeing this area of shared tools, DIY projects and a general atmosphere of collaboration, for no reason more than these people wanting to make stuff in their spare time. And one of the things I found most interesting is that many of the projects going on at the space were embedded projects! The desire to have things talk wirelessly almost demands that you start to delve into low level code and be able to get your device talking to another device. So while I came to learn about the broad range of sensors and embedded devices this week, I ended up finding the lower end (in terms of system complexity) being used for unique and intricate implementations (they had built their own MakerBot to CNC parts right there in the lab…amazing!).

So what was the conclusion to my small quest for finding community among the different factions of the electronics industry? There isn’t one general location or place to gather. And possibly for good reason. It’s more like a democratic republic in that way, where members get to vote with their feet. Say a platform really starts to bog down and no one is developing on it anymore. People aren’t tied to it because the “community” is locked in; instead they just pick up and move platforms. “Don’t like the PIC anymore? Switch to an OMAP! OMAP too expensive? Switch to an AVR!” So perhaps the real need is instead an active listing of where to find all these different communities, in whatever form they take: message boards, blogs, video tutorials, anything and everything. As long as the list stays current, it will be valuable; in fact, it would be much more valuable than trying to pull every single person into one platform.

I didn’t come to these two conferences for this purpose. I could have looked for it at home while browsing the web (I’ve done that before too). But in the midst of walking among many smart people and many products made by other smart people, I’ve collected hints: where to look and who to talk to in order to find the most people interested in technology, in whatever form or level of complexity it may take.

Analog Electronics Digital Electronics Interview

A Talk With An EDA Consultant

As more circuits get pushed into SoC (Systems on a Chip), the software that designs them becomes more and more important. Well, it’s been important for a while now. Important enough to be a multi-billion dollar industry. Biiiiig money.

Harry Gries is an EDA consultant with over 20 years in the electronics industry in various roles. He now consults for different companies and also writes a blog about his experience called “Harry…The ASIC Guy”. I love hearing about the different pieces of the electronics food chain and Harry was nice enough to take some time to talk to me about his work. Let’s see what he had to say…

CG: Could you please explain your educational and professional background and how you got to where you are today?

Harry The ASIC Guy (HTAG): My education began when I was raised by wolves in the Northern Territory of Manitoba. That prepared me well for a stint at MIT and USC, after which I was abducted by aliens for a fortnight. I then spent 7 years as a digital designer at TRW, 14 years at Synopsys as an AE, consultant, consulting and program manager. Synopsys and I parted ways and I have been doing independent consulting for 3 years now. A good friend of mine tricked me into writing a blog, so now I’m stuck doing that as well.

CG: What are some of the large changes you see from industry to industry? How does company culture vary from sector to sector?

HTAG: Let’s start with EDA, which did not really exist when the aliens dropped me off in 1985. There were a few companies who did polygon pushing tools and workstations and circuit complexity was in the 1000s of gates. Most large semiconductor companies had their own fabs and their own tools. Gate arrays and standard cell design was just getting started, but you had to use the vendor’s tools. Now, of course, almost all design tools are made by “EDA companies”.

As far as the differences between industries and sectors, I’m not sure that is such a big difference culturally. The company culture is set from the top. If you have Aart DeGeus as your founder, then you have a very technology focused culture. If you have Gerry Hsu (former Avant! CEO), then you have a culture of “win at all costs”.

CG: How hard was it for you to jump from being a designer to being in EDA? What kinds of skills would someone looking to get into the industry need?

HTAG: The biggest difference is clearly the “soft skills” of how to deal with people, especially customers, and understanding the sales process. For me it was a pretty easy transition because I had some aptitude and I really had a passion for evangelizing the technology and helping others. If someone wanted to make that change, they would benefit from training and practice on communicating effectively, dealing with difficult people, presentation skills, influence skills, etc.

CG: With regards to the EDA industry, how much further ahead of the curve does the software end up being? For instance, is EDA working on software necessary to define the 13 nm node currently?

HTAG: As you know, the industry is never at a single point. Rather, there is a spectrum of design nodes being used with some small percentage at the most advanced nodes. Most EDA tools are being developed to address these new nodes, often in partnership with the semiconductor manufacturers developing the process node or the semiconductor designers planning to use them. The big EDA companies are really the only ones, for the most part, that have the resources to do this joint development. Whatever is the newest node being developed, some EDA company is probably involved.

CG: You have written about the nature of the industry and how having only a few players affects the nature of the system. What kinds of limitations do you see in the industry due to economies of scale (TSMC dominance, for instance)?

HTAG: Consolidation is a fact in any industry and a good thing in EDA. Think of it as natural selection, whereby the good ideas get gobbled up and live on with more funding (and the innovators are rewarded); the bad ideas die out. Most small EDA companies would want to be bought out as their “exit”. At the same time, there are some “lifestyle companies” in EDA where the founders are happy just making a good living developing their tools and selling them without having to “sell out” to a larger company. For all these small companies, the cost of sales is a key factor, because they cannot afford the worldwide direct sales force that the larger EDA companies have. That’s where technologies like Xuropa come into play, enabling these smaller companies to do more with less and be global without hiring a global sales force.

CG: What drives the requirements placed upon new technology in the EDA space? How are the products developed? Are there a lot of interactions with specific big name designers (i.e. Intel) or does it shade more to the manufacturers (TSMC)?

HTAG: In fact, a handful of key customers usually drive the requirements, especially for small companies. When I was at Synopsys, Intel’s needs were the driver for a number of years. Basically, the larger the customer, the greater the clout. Other customers factor in, but not as much. The most advanced physical design capabilities of the tools are often a collaboration between the EDA company, the semiconductor manufacturers (e.g. TSMC) and also the designers (e.g. Qualcomm). Increasingly, EDA tools are focusing on higher levels of abstraction, and you are seeing partnerships with software companies, e.g. Cadence partnering with Wind River.

CG: A good chunk of chip design is written and validated in code. This contrasts with much more low level design decisions in the past. In your opinion how has this changed the industry and has this been a good or bad thing? Where will this go in the future, specifically for analog?

HTAG: Being a digital designer and not an analog designer, I can only speak to the digital side, where it’s all written in code. Obviously, productivity is much higher with the higher level of abstraction, and the tools are able to optimize the design much better and faster than someone could by hand. So it’s all good.

For analog, I am not as tied in but I know that similar attempts are being tried; they use the idea that analog circuits can be optimized based on a set of constraints. I think this is a good thing as long as the design works.  Digital is easy in that regard, just meet timing and retain functionality and it’s pretty much correct. For analog there is so much more (jitter, noise margin, performance across process variation, CMRR, phase margin, etc, etc). I think it will be a while before analog designers trust optimization tools.

CG: It seems that the EDA industry has a strong showing of bloggers as compared to system level board engineers or even chip designers. What kinds of benefits have you seen in your own industry from having a network of bloggers, and what about EDA promotes having so many people write about it?

HTAG: I think blogging is just one form of communication and since EDA people are already communicators (with their customers), they have felt more comfortable blogging than design engineers. Many of the EDA bloggers are in marketing types of positions at their companies or are independent consultants like me, so the objective is to start a conversation with customers that would be difficult to have in other ways. A result is that this builds credibility for themselves that then accrues to their company. I think there has also been a ton of sharing and learning due to these blogs and that has benefited the entire industry. On a personal level, I know so many more people due to the blog and that network is of great value.

CG: How has your career changed since moving back out of the EDA space and into consulting? What kind of work have you been doing lately?  How has your experience helped you in consulting?

HTAG: It is interesting to have been on the EDA side and then move back into the design side. Whenever I communicate with an EDA company, whether a presentation or a tool evaluation or any conversation, I can easily put myself in their shoes and know where they are coming from. On the one hand, I can spot clearly manipulative practices such as spreading FUD (fear, uncertainty, and doubt) about a competitor and I can read between the lines to gain insights that others would miss. On the other hand, I also understand the legitimate reasons that EDA companies make certain decisions, such as limiting the length of tool evaluations, qualifying an opportunity, etc.

Most recently I’ve been working on some new technology development at a new process node. It’s been interesting because I’ve been able to dig deeper into how digital libraries are developed, characterized, and tested and I’ve also learned a lot more about the mixed-signal and analog world and also the semiconductor process.

Many thanks to Harry for taking the time to answer some questions about his industry and how he views the electronics world. If you have any questions, please leave them in the comments or pop over to Harry’s main site and leave a comment there.

Analog Electronics Digital Electronics Life Politics

The Digital Switchover and Why It’s About People

The Digital Switchover.

Not me. I almost did that a while back, but no. Not me.


Normally I wouldn’t write about it. A digital television standard is long overdue and in the end this will be a good thing. When you compare analog vs digital, there are many more benefits on the digital side of things: lower power for transmission, better use of signal bandwidth and more efficient usage of the available spectrum. All of these are good things. I could even talk about how those digital signals still have lots of analog considerations as they’re transmitted over the airwaves: multipath, signal loss, power calculation, reception problems, etc.

But no. I’d rather point something else out:

Technology adoption is driven by human nature. It must be adopted before it can help people.

Sure, the digital signals will be great. High definition pictures and you don’t have to give a dime to those lovely cable companies. Lower power generation required to transmit the signals will help save the environment by lowering the carbon footprint. But until the switch actually happens (today…maybe), no one gets the benefits. The switchover was delayed from this past February until now; lawmakers deemed the country unready to make the change at that time. I mean, if people can’t watch TV, how will the politicians get their message out to the masses?

No matter how many new devices are introduced into the marketplace and no matter how available DTV converter boxes become, people still will not change until pushed. They will not go out and get the digital box or call their local politician until one day they turn on their television and the signal is not there. That is what will drive the final changeover. I wouldn’t be surprised if we saw a little more leeway from politicians before stations are officially told to shut off the analog transmitters.

This problem isn’t exclusive to television. It has happened for the past 30 years in conservation and renewable energy. Regardless of how many times climate change experts point out we’re killing the planet, nothing moves until there is a scare that oil is running out (it is) or natural gas won’t always be available (it won’t) or coal is filthy (it is) or the power just goes out. Then people change their tune; they change gears and start thinking about buying that solar array or that home wind turbine. They start recycling again because they think it will start to help (it will, but what about the past 10 years of bottles you put in the landfill?). But the thing is, you need to think about buying the solar cells now, when there isn’t a 6 month backlog of installation requests and prices aren’t jacked up due to demand. And solar might even already be an affordable option for you.

I’m sure people will say there’s an economic aspect of it for DTV and that the people that use analog signals the most can’t afford the converter boxes. Perhaps that has some truth to it. But the point remains that no matter the technology, until that last group resistant or indifferent to change decides to go out and do something about it, those people can’t be helped.

What about you? Have you made the switchover yet? If not, why? Leave a note in the comments.

Analog Electronics Digital Electronics Engineering

The Future of Troubleshooting

If you are an engineer who regularly works with your hands, you likely troubleshoot on a daily basis. It’s just part of the job. Sure, you can say, “I never mess up!”, but hardly anyone will believe you. Because even when your best laid plans go perfectly, Murphy’s Law will soon kick in to balance things out. We learn to deal with these things and have developed tools and measurement equipment to help us diagnose and deal with these problems: multimeters, electrometers, SourceMeters, oscilloscopes, network analyzers, logic analyzers, spectrum analyzers, semiconductor test equipment (ha, guess I know a little about that stuff)…the list goes on and on. But what has struck me lately is that as parts on printed circuit boards get smaller and smaller, troubleshooting is getting…well…more troubling.

  1. Package Types — I don’t want to get into another discussion of analog vs digital, but I will say that digital parts on average have many more pins, which complicates things. And as the parts get more and more complex, they require more and more pins. The industry solution was to move to a Ball Grid Array (BGA) package, using tiny solder balls on the bottom of the chip that line up with a grid of similar sized pads on the board. When you heat up the part, the solder balls melt, holding the chip in place and connecting all of the signals. The problem is the size of the solder balls and the connecting vias: they’re tiny. Like super tiny. Like don’t try probing the signals without a microscope and some very small probes. But wait, it’s not just the digital parts! The analog parts are getting increasingly small too, to fit in alongside those now-smaller (but still considerably bigger than analog) digital parts. You thought probing a digital signal was tough before? Now try measuring something that has more than 2 possible values!
  2. Board Layers — As the parts continue on their shrink cycle, the designers using these parts also want to place them closer together (why else would they want them so small?). The circuit board designers route signals down through the different layers of insulating material so that multiple planes can be used to route isolated signals to different points on the board. So to actually route any signals to the multitude of pins available, more and more board layers are required as the parts get smaller and closer together. Granted, parts are still mounted on either the top or bottom of the board. But if a single signal is routed from underneath a BGA package, down through the fourth layer of an 8-layer board and then up to another BGA package, the signal will be impossible to see and measure without ripping the board apart.
  3. High Clocks — As systems are required to go faster and faster, so are their clocks. Consumers are used to seeing CPU speeds in the GHz range and others using RF devices are used to seeing even higher, into the tens of GHz. The problem arises when considering troubleshooting these high speed components. If you have a 10 GHz digital signal and you expect the waveforms to be in any way square (as opposed to sinusoidal) you need to have spectral data up to the 5th harmonic. In this case, it means you need to see 50 GHz. However, as explained with analog to digital converters in the previous post, you need to sample at twice the highest frequency you are interested in to be able to properly see all of the data. 100 GHz! I’m not saying it’s impossible, just that the equipment required to make such a measurement is very pricey (imagine how much more complicated that piece of equipment must be). High speed introduces myriad issues when attempting to troubleshoot non-working products.
  4. Massive amounts of data — When working with high speed analog and digital systems there is a good amount of data available. The intelligent system designer will be storing data at some point in the system either for debugging and troubleshooting or for the actual product (as in an embedded system). When dealing with MBs and even GBs of data streaming out of sensors and into memories or out of memories and into PCs, there are a lot of places that can glitch and cause a system failure. With newer systems processing more and more data, it will become increasingly difficult to find out what is causing the error, when it happened and how to fix it.
  5. Fewer Pins Available out of Packages — Even though digital packages are including more and more pins as they get increasingly complex, oftentimes the packages cannot provide enough spare pins to do troubleshooting on a design. As other system components that connect to the original chip also get more intricate (memories, peripherals, etc.), they will require more and more connections. The end result is a more powerful device with a higher pin count, but not necessarily more pins available for you, the user/developer, to use when debugging a design.
  6. Rework — Over a long enough time period, the production of printed circuit boards cannot be perfect. The question is what to do with the product once you realize the board you just constructed doesn’t work. When parts were large DIP packages or, better, socketed (drop-in replacements), changing out individual components was not difficult. However, as the parts continue to shrink and boards become increasingly complex to accommodate the higher pin counts, replacing the entire board sometimes becomes the most viable troubleshooting action. Environmentally this is a very poor policy. As a business, this often seems to be a decent method (if the part cost is less than the labor needed to try and replace tiny components), but if and when the failures stack up, the board replacement idea quickly turns sour.
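The bandwidth arithmetic behind item 3 is simple enough to sketch. The only assumptions are the rule of thumb that a “square enough” edge needs spectral content up to the 5th harmonic, and the Nyquist factor of two:

```python
# Rough scope-bandwidth math from item 3, assuming a "square enough"
# waveform needs spectral content up to the 5th harmonic, and that you
# must sample at twice the highest frequency of interest (Nyquist).

def required_sample_rate_ghz(clock_ghz, harmonics=5):
    """Sample rate needed to capture a square-ish signal at clock_ghz."""
    highest_harmonic_ghz = clock_ghz * harmonics
    return 2 * highest_harmonic_ghz

# A 10 GHz digital signal -> see up to 50 GHz -> sample at 100 GHz
print(required_sample_rate_ghz(10))  # 100
```

The factor-of-ten jump from clock rate to sample rate is why the required test equipment gets so pricey so quickly.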

While the future of troubleshooting looks more and more difficult, there have always been solutions and providers that have popped up with new tools to assist in diagnosing and fixing a problem. In fact, much of the test and measurement industry is built around the idea that boards, parts, chips, etc are going to have problems and that there should be tools and methods to quickly find the culprit. Let’s look at some of the methods and tools available to designers today:

  1. DfX — DfX is the idea of planning for failure modes at the design stage and trying to lessen the risk of those failures happening. If you are designing a soccer ball, you would consider manufacturability of that ball when designing it (making sure the materials used aren’t super difficult to mold into a soccer ball), you would consider testability (making sure you can inflate and try out the ball as soon as it comes off the production line) and you would consider reliability (making sure your customers don’t return deflated balls 6 months down the line that cannot be repaired and must immediately be replaced). All of these considerations are pertinent to electronics design and the upfront planning can help to solve many of the above listed problems:
    1. Manufacturability — Parts that are easy to put onto the board cut down on problem boards and possibly allow for easier removal and rework in the event of a failure. It becomes a balancing act between utilizing available space on the board and using chips that are easier to troubleshoot.
    2. Testability — Routing important signals to a test pad on the top of a board before a design goes to the board house allows for more visibility into what is actually happening within a system (as opposed to seeing the internal system’s effect on the top level pins and outputs).
    3. Reliability — In the event you are using parts that cannot easily be removed and replaced and you are forced to replace entire boards, you want to make sure your board is less likely to fail. It will save your business money and will ensure customer satisfaction.
  2. Simulation — One of the best ways to avoid problems in a design is to simulate beforehand. Simulation can help to see how a design will react to different input, perform under stressful conditions (i.e. high temperature) and in general will help to avoid many of the issues that would require troubleshooting in the first place. A warning that cannot be overstated though: simulation is no replacement for the real thing. No matter how many inputs your simulation has and how well your components are modeled, no simulation can perfectly match what will happen in the real world. If you are an analog designer, simulate in SPICE to get the large problems out of the way and to figure out how different inputs will affect your product. Afterward, construct a real test version of your board or circuit and make sure your model fits your real world version. By assuming something will go wrong with the product, you will be better prepared for when it does and will be able to fix it faster.
  3. Very very steady hands — Sometimes you have to accept the fact that you messed up the signal traces on your board and have to rewire it somehow. My analog chip designing friends needn’t worry about trying this…chips do not have the option for re-wiring without completely reworking the silicon pathways that build the chip. In the event you do mess up and have to try and wire a BGA part to a different part of the board or jumper 0201 resistors, make sure you have a skilled technician on hand or you have very steady hands yourself. And in the event you find yourself complaining about how small the job you have to do is, think of the work that Willard Wigan does…and stop complaining.
  4. On the Chip/Board tools — Digital devices have the benefit of being stopped and started at almost any point in a program (debug). Without being able to ascertain what the real world output values are, though, it doesn’t help too much. If you did not design for test and pull the signals you need to probe up to the top level before creating the board, there are a few other options. One option is to try and read your memory locations or your processor internals directly by communicating through a debugger interface. But if you are looking at a multitude of signals and want to see exactly how the output pins look when given a certain input, there is another valuable tool known as “boundary scan”. The chip or processor will accept an interface command through a specified port and then serially shift the values of the pins back out to you. Anytime you ask the chip for the exact state of all the pins, an array of ones and zeros will return, which you can then decode to see which signals and pins are high or low.
  5. Expensive equipment — As mentioned above when describing an RF system’s measurement needs, there will always be someone who is willing to sell you the equipment you need or work to create a new solution for you. They will just charge you a ton for it. In cases I have seen where a measurement is really difficult to make or you need to debug a very complicated system, the specially made measurement solutions often perform great where you need them, but are severely limited outside of their scope. To use the example from before, if you needed a 100 GHz oscilloscope, it is likely whoever is making it for you will deliver a product that can measure 100 GHz. But if you wanted that same scope to measure 1 GHz, it would not perform as well because it had been optimized for your specific task. However, there are exceptions to this, and certain pieces of equipment sometimes seem like they can do just about anything.
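As a toy illustration of the boundary scan idea from item 4, here is how you might decode the serially shifted bit array back into pin states. The pin names and their ordering are made up for the example; a real part’s scan chain order comes from its BSDL file:

```python
# Toy boundary-scan decode: one shifted-out bit per pin.
# PIN_NAMES is hypothetical -- a real chip's scan order is defined
# by the vendor, not by us.
PIN_NAMES = ["RESET", "CLK", "DATA0", "DATA1"]

def decode_boundary_scan(bits):
    """Map each shifted-out bit to a pin name -> 'high'/'low'."""
    return {name: ("high" if bit else "low")
            for name, bit in zip(PIN_NAMES, bits)}

print(decode_boundary_scan([1, 0, 1, 1]))
# {'RESET': 'high', 'CLK': 'low', 'DATA0': 'high', 'DATA1': 'high'}
```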

Debugging is part of the job for engineers. Until you become a perfect designer it is useful to have methods and equipment for quickly figuring out what went wrong in your design. Over time you become better at knowing which signals will be critical in a design and planning on looking at those first, thereby cutting down on the time it takes to debug a product. And as you get more experience you recognize common mistakes and are sure not to design those into the product in the first place.

Do you know of any troubleshooting tools or methods that I’ve missed? What kinds of troubleshooting do you do on a daily basis? Let me know in the comments!

Analog Electronics Digital Electronics Engineering

When To Use Analog Vs. Digital

Analog. Digital. Continuous. Discrete. Choices abound.

Well, not really.

In reality you will deal with both kinds of signals when working on just about any electronics these days. A simple example is a switching regulator. These devices take input power from a wall plug or something else providing a relatively constant voltage, and the regulator then ensures that the output voltage is always the same. Internal to the circuit, a “digital” signal (on or off) determines when to let incoming power pass from the input to the output. The “digital” signal translates into an “analog” voltage at the output, hopefully the voltage you programmed.
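As a sketch of that idea: in an idealized buck-style switching regulator, the average output is just the input voltage scaled by the fraction of time the “digital” switch is on. This ignores losses, ripple and the regulation loop entirely:

```python
def buck_output_voltage(v_in, duty_cycle):
    """Idealized buck converter: average V_out = D * V_in.
    Ignores switching losses, ripple and feedback regulation."""
    return v_in * duty_cycle

# A 12 V input switched on 27.5% of the time averages out to ~3.3 V
print(buck_output_voltage(12.0, 0.275))
```

The regulator’s feedback loop is what nudges that duty cycle up and down so the output stays put as the load changes.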

From there, systems become increasingly complicated, translating real world data to digital format, processing the digital data and spitting it back out again. The guts of the systems have infinite internal combinations and options, but in the end just about every hybrid system looks like this:


The remainder of this post will be devoted to explaining situations that are either contained within the above system or situations that benefit from looking nothing like it; some of these situations mandate analog or digital implementation but more importantly, some are best implemented as analog or digital.

To start, what is the definition of analog? We’ll consider it a continuous signal that has infinite bandwidth and complete spectral information. Analog in the context of this site usually refers to the circuitry used to operate on those continuous signals, but we also use the word “analog” interchangeably to describe the signals. Which situations are best suited to using analog components and circuitry?

  1. Continuous filtering — Filtering a signal is necessary when it has frequency components included that you do not want. Some filters are digital and are extremely accurate at removing one signal while retaining others (FIR). However, if you are dealing with a continuous signal and you want to filter ALL possible frequency content (and not be limited by the sampling frequency you used when converting to digital), then you need a continuous analog filter. There are many options available that can also help to push your filtering towards accuracies similar to digital filters but they become increasingly complex (multi-pole active filters). The main advantage to an analog filter here is that it is simple, less expensive (usually) and beyond your roll-off frequency you know that all information is being removed (whereas it might still be hidden in a sampled signal).
  2. Pre-A/D and Post D/A — Hybrid systems require both analog-to-digital converters and digital-to-analog converters to switch between continuous and discrete data. However, the sampling frequency must be at least twice the frequency of the highest frequency component contained within the signal, as explained by Nyquist’s Theorem. In order to ensure that the Nyquist Theorem is fulfilled, you can filter (see above) any signals that are inadvertently included in the original signal so that it does not create noise and artifacts after sampling. Since the signal is not yet digital, you HAVE to filter the signal with an analog filter (convenient, right?). Once you are done operating on a signal digitally and you convert it back to analog, all processing must once again be done with analog components and circuitry (see picture above). I usually think of an iPod after the signal has gone through the DAC. You need to control the gain (volume) and shape the frequency components (tone). Some post DAC activities can be done in the processor, but are often more efficient (read: cheaper) to do in simple analog components after the DAC.
  3. High power — While digital measurement and control is possible for high power systems, having a digital signal that switches between 0 and 400V would not be efficient. In either AC or DC systems, analog components are responsible for transforming and transmitting signals (although there may be digital control of those analog components at some point in the system). The continuous nature of power delivery mandates analog components that are well characterized and durable.
  4. Gain Control/Signal Conditioning — Say you want to measure the amplitude of a 4000 V signal. You decide that you want to use a computer to do so, so you shove your signal into an A/D converter. But wait, where the heck do you find an A/D converter that can convert a 4000V signal? Sorry, they don’t exist (yet). You instead have to condition the signal to fit into a range of 0V to +2.5V, or whatever is the input range of your specific ADC. You can do so with a simple resistive divider (passive, simple) or an inverting amplifier (active, more difficult).
  5. Control systems — While digital control systems are possible and are becoming more and more prevalent, analog systems can be simpler. One of the simplest examples is an inverting op-amp configuration. The load of the op amp is the plant, the op amp is the controller and the resistors are the feedback paths to the summing node. There are some delays in the system, but in general, the signal can handle a wide range of frequencies without complicated circuitry and the system can adjust to however the input changes. In a similar digital system, the feedback resistor would be replaced with an ADC, some kind of computing machine (microcontroller) and a DAC to convert the data back to analog to push into the summing node. The system is dependent upon the technology and speed of the components, whereas the analog system is dependent on resistors and the nature of the load (plant). Digital control systems are becoming more popular as DACs and ADCs become faster and more accurate but as of now, analog control systems remain simpler in some of the more common instances.
  6. Sensors — These devices are meant to help convert real world information that isn’t necessarily electrical into a format that is recognizable by a computer or embedded system. Oftentimes these are not taking real world (analog) data and directly turning it into digital signals. Instead, the sensor (sometimes known as a transducer) first creates an analog signal that can later be converted. In contrast to the high voltage systems, sensor outputs are often very low amplitude and require some signal conditioning to increase the value of the signal to better utilize the full range of an ADC.
  7. Fidelity/Data loss — Some people just love analog stuff, especially when it comes to music. Even though audio systems containing ADCs and DACs are making very good analog equivalents these days, you will have to tear the record players and the tube amps out of the hands of the most die hard audiophiles. So instead of converting back and forth between digital and analog media, they prefer to keep the signal continuous all the way throughout the process: starting from the air pressure variations emitted from Louis Armstrong’s trumpet that are captured by a microphone, amplified and pressed into a record, then touched by a needle and amplified again by a transistor or tube amp to recreate the sound as it is pushed out of your high end speakers. And even though there are processes to mathematically capture all of the data that is present to sample and perfectly recreate the original signal, some people won’t touch the stuff. Since I can’t afford the high end equipment audiophiles claim is necessary, I will sit on the sidelines for this argument. However, I enjoy that there is still so much interest in preserving audio fidelity in analog formats and don’t mind that it keeps analog engineers employed.
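The gain control scenario from item 4 is easy to put numbers on. A sketch of the unloaded resistive divider, with hypothetical resistor values chosen to scale 4000 V down into a 0 to 2.5 V ADC range:

```python
def divider_output(v_in, r_top, r_bottom):
    """Unloaded resistive divider: V_out = V_in * Rb / (Rt + Rb)."""
    return v_in * r_bottom / (r_top + r_bottom)

# 1.599 Mohm over 1 kohm gives a 1600:1 ratio. Values are purely
# illustrative -- a real 4000 V divider needs serious attention to
# resistor voltage/power ratings and ADC input loading.
print(divider_output(4000.0, r_top=1_599_000.0, r_bottom=1_000.0))  # 2.5
```

The catch the post hints at: the divider only behaves this way if the ADC input barely loads it, which is one reason an active buffer often follows the passive divider.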

I feel a little silly explaining digital advantages because they seem to be flaunted at every opportunity by media and digital chip makers. Still, let’s go over some of the more important places to use digital as opposed to analog.

  1. Computing — Again, I know it sounds silly, but digital has emerged as the better way to compute numbers. How did they compute mathematical sums before the advent of the microchip and digital logic? Why, operational amplifiers of course! That is actually where the name comes from, since there are many different possible operations for incoming signals. If you have two incoming signals, one at 2 volts and the other at 1 volt, you can: add them (summing amplifier), subtract them (differential amplifier), integrate them or differentiate them. While this can and still does work quite well on a large signal DC basis, using operational amplifiers in the computing machines of today would be a bit unwieldy. Just to start, the power usage and the offsets would pose enough problems to make you run out and buy ADCs, DACs and micro-controllers. If you have a big math problem to do, follow that urge. However, if you do have a simple math operation you need to do on two signals and you don’t want the overhead of a digital section, op amps can still do the trick nicely; with their fast reaction and the complete lack of sampling issues you won’t miss those ones and zeros for a second.
  2. Counting — In analog systems, counting can be a difficult task. Instead, using integrators to “sum up” signals is a way to figure out where you might be in a process. Discretizing a signal and then counting how many times it happens can have many uses in control systems, measurement systems and a range of other applications.
  3. Memory — Storing analog signals would be difficult. For even a simple 0-1V signal, you would have to be able to store an infinite number of values. If you have 4 bits to represent the range from 0 to 1 volt, then you instead only need 16 places to store values. In control systems and other places that require memory, the old way to “store” values was to sufficiently delay them and feed them back so as to combine them with a newer signal. Using memory now allows for interesting systems and use of state machines to determine what to calculate or execute next based on current and past input data.
  4. High noise environments — If you are trying to transmit an analog walkie-talkie signal (5Vpp sine wave) in a field that happens to have a white noise generator transmitting (2V) at the same frequency you are using, it is likely that whoever is on the receiving end of that signal will also get a good bit of white noise in their signal (think static). If you instead use a digital signal (varying between 0 and 5V) your friend who has a digital transceiver will be able to discern your transmitted highs (5V) and lows (0V) even if they also have noise added to them. Once the digital data is received and decoded, the original signal (5Vpp sine wave) can be reconstructed on the receiving end.
  5. Signal Transmission — As stated above, there are advantages to transmitting digital signals as opposed to analog. Most notable is the lower power spectral density of digital signals, meaning less power is needed to transmit them. In current events, we see TV transmission changing from analog to digital because of the lower power required to transmit the signal and the possibility of multiplexing signals on specific frequencies in order to get more channels transmitted in the allowable spectrum.
  6. Data storage — To use the mp3 example again from above, data is best stored in a digital format (easy there audiophiles, records are alright for some people too). True, some information is lost, but only information above half the sampling rate (the Nyquist frequency). In audio signals, most people cannot hear above 20 kHz, so there isn’t too much to worry about beyond that (perhaps the harmonics that some people claim to hear and desire in their recorded works).
  7. RF — Digital Signal Processing (or DSP) is one of my favorite digital topics. There are so many cool things you can do with a Radio Frequency (RF) signal once it is sampled and put into a powerful processor. In fact, this process makes your cell phones and Wi-Fi connections possible. FIR filters, CIC filters, baseband shifting and so many other interesting topics make it possible. Hell, maybe some day I’ll start “Chris Gammell’s DSP Life“. Anyway, can’t we do this stuff in analog? Well yes, we can. But with RF, it comes down to precision. With the filters listed above, you can trade off processor time/power for a more precise filter. In analog systems, you instead need more and more precise components and increasingly complex systems to achieve similar results. In DSP there is also reconfigurability, either through logic rework (FPGAs) or coding (in DSP chips), so long term investment usually will favor DSP over analog RF solutions. Finally, there is more efficient use of bandwidth with digital systems, so you can shove more data into the same frequency space. All of these things have helped to push the RF areas towards digital processing.
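Item 4’s noise immunity argument can be shown in a few lines: slice each noisy received sample against the midpoint threshold and the original bits come back, where an analog copy of the signal would keep every bit of the added noise. The voltage values here are made up for illustration:

```python
def recover_bits(noisy_samples, v_high=5.0):
    """Decide each received sample as 1 or 0 against the midpoint
    threshold -- the core of digital noise immunity."""
    threshold = v_high / 2
    return [1 if v > threshold else 0 for v in noisy_samples]

# Transmitted as 1 0 1 0 1 at 0/5 V, received with noise added:
noisy = [4.3, 0.8, 5.6, -0.4, 4.9]
print(recover_bits(noisy))  # [1, 0, 1, 0, 1]
```

This only works while the noise stays under half the logic swing; past that, you start flipping bits, which is where coding and error correction enter the picture.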

I think one of the most interesting things when reviewing this list is that it’s possible to implement solutions in myriad ways. Oftentimes cost and tradition (or past work) determine which way a solution will eventually lean (digital or analog). And although I hope to expand upon it in future posts, the most interesting thing to me is that analog and digital begin to merge at the extremes: do analog signals really exist if energy is explainable by quantum mechanics? Will digital signals continue to only have two logical states when there is so much data storage capacity available between 0 and 1?

Please comment on the above lists–right or wrong–and let me know a situation or two that you think benefits from analog or digital.

Analog Electronics Digital Electronics Engineering Learning

Designing For The Long Term

I was at the gym the other day and glanced over at a fellow gym-goer on their cellphone. I did a triple take, as the phone was a flip phone that was maybe 4 inches wide and 5 inches high on each flap of the flip (making a 10 inch phone when completely extended). On my third glance at this monstrosity of a phone I realized it was in fact a Blackberry that he had pulled out of its case/holder, but the case looked like the bottom half of a flip phone. It got me thinking about design longevity.

I think back on the cell phones of the past and recent past and remember how clunky and awkward they were. That was maybe 5 years ago, and those phones have long been sitting at the back of people’s desk drawers or, hopefully, donated to causes that recycle phones. I am amazed that these phone manufacturers continually get away with phones that will be obsolete in 5 years maximum. Why don’t we expect more from our mobile devices (in terms of longevity)? Do we really think our phones will last more than 3 years?

My most recent phone just passed away after 2 years. In my case I MAYBE dropped my phone in a bowl of soup, but I think it just got one of the external speakers; really, I think the kiss of death was a bad battery (which was not contaminated with soup). But even if it had lasted another year and THEN died, would I have been upset? I don’t think I or most people would be, because we have come to expect consumer products to have a shorter life span.

How do we design electronics for the long term? There are a bunch of great examples of electronics that have been built to last:

  1. Military designs — Aside from the humongous budgets that most contractors have for their military products, the specs on military designs can be equally large in scope. Translation: the military gets high quality products that were expensive but are built to last. These products are often ahead of the technology curve (thanks to the money available), so the technology goes obsolete later too. The final piece is that the harsh environments encountered by military personnel require gadgets that are sturdy enough to last for a long time; the ones that function in the field can continue to do so for a long time. A good example would be this emergency radio which was recently torn down by EETimes after an eBay purchase. The 1950s internals reveal high quality workmanship with components that match.
  2. Space designs — Although NASA’s budget has been cut back since Bush took office, this research intensive organization has produced some of the finest inventions for humankind. Not only that, they have a mandate to create equipment that can last for long periods of time. My favorite example is the Voyager 1 probe, currently exiting our solar system and headed further than any other human-made instrument has ever been. Not only that, but this advanced spacecraft started taking close-up pictures of Jupiter and Saturn before I was even born (it first passed Jupiter in 1979). The fact that this machine is still functional, still running tests and still capable of sending back data until 2025 (est.) is mind boggling. Not only that, but the spacecraft has not had the advantage and protection of the earth’s magnetosphere, so it has been taking much more direct cosmic radiation than normal electronics.
  3. Power companies — These terrestrial behemoths don’t have to worry about cosmic radiation quite as much as the NASA folks, but they often have materials carrying hundreds of amperes of current over long distances. Unfortunately, these systems are in need of some updating (especially to accommodate new renewable energy resources onto the grid), but once they are built, I’m sure they will hold up. Usually power companies achieve longevity in their equipment by using high quality, high strength materials that are designed with enough overhead to manage higher loads that they expect to see (i.e. A copper wire that is designed to carry 1000A of current, but only carries 600A on a regular basis).
  4. Nuclear Facilities — Some of the remnants of the Cold War include the control systems that decided whether missiles would fire or not. There are still some computers operational today in Russia that (we hope) are still making logical decisions. While I don’t agree with these computers existing in the first place, I sure hope they continue to hold up; otherwise they will prove to be a doomsday device. Proper shielding from radiation and free radicals helps to prevent aging damage to electronics from fissile material, in addition to starting with high quality, military-grade products.
  5. Autos — While the auto industry might be falling on its face currently, the designers in Detroit used to help drive new technologies in many other walks of life. Looking at cars that have lasted since the 50s and beyond, we see examples of simple yet elegant electrical designs that were meant to last. Cars have not always had the GPS systems of today (which I’m guessing will have a much shorter lifespan), but have had electronics powering the wiper blades and the spark plugs for a long time. These systems in vintage cars require some maintenance and the occasional fuse replacement, but on the whole are sturdy enough to continue powering well-cared for vintage vehicles.

So these industrial/military and some commercial applications obviously present the need for longevity in finished products. However, designers need to consider many different parameters of a system in order to produce the best product for the long term.

  1. Communication protocol — This item applies most directly to cell phone makers and is a decent excuse for their short life products (but does not excuse everything about them). Unfortunately for phone users (and fortunately for phone makers), wireless protocols are always changing in order to try and achieve the highest bandwidth, usually through higher frequencies or different transmission methods. So once a technology changes for good, older phones become obsolete (and the phone makers happily sell you a new shiny one). This problem also exists when looking at the internals of products; to prevent obsolescence due to outdated protocols, they should be standard to the industry in which the product will be used, simple enough to incorporate into a new standard (and include legacy support) and well documented. Nothing is worse than having a 20 year old device that works fine but can no longer transmit information. An example might be an industrial test fixture on an old computer that only has a 5.25 inch floppy drive. The test fixture might work great, but getting data off that computer is no longer viable, so the entire setup is obsolete. A tried and true method for machines to communicate has always been serially, and with good reason. While a newer communication protocol might require myriad signals that are not available on an older product, most improvements to a serial signal are often speed (increasing the frequency of the oscillator driving the serial line) or encoding. Since devices can be re-programmed to send a new encoding or you could slow down the device on the receiving end, serial communications seem to be a viable solution for lots of applications.
  2. Long term drift of components — Designing for 10’s of years often requires attention to detail and deep pockets. The most important first step is to watch for this parameter on a data sheet for any critical component (marked as “long term drift”, often given as a percentage change over a specified period). But beware, many vendors simply leave this data off of their spec because they either do not think it is relevant, do not want to display poor data or because they don’t know what it means. In any of these situations it is critical to demand this data or to perform testing yourself in order to create lasting products.
  3. Susceptibility to thermal stress — Size matters when it comes to handling thermal stress; this is partially why older electronics hold up so well. The smaller the components on a device get, the less heat they can dissipate (assuming similar materials to a larger package). A good example would be resistors. A 0603 resistor (1.6mm x 0.8mm) can only dissipate about 1/10 of a watt, while a standard through-hole component can dissipate 1/4 watt on average. This is a trade-off that must be made in any system designed for portability, but could result in lower product lifetime (especially in high heat or high current situations).
  4. Standard packaging — The chip industry is a highly competitive environment where silicon designs are always being touted as the next best thing. Unfortunately for older products, this can often mean that components such as op-amps or a buck converter will no longer be produced. It’s a symptom of being in a dynamic industry and has to be dealt with. The best way to combat obsolescence is to create projects that have standards designed into them. Thinking about creating a great new analog circuit with a non-standard pin-out in a device package so obscure that you have trouble finding it in catalogs? Why not try making some other compromises on your circuit board and squeezing in a proven SOIC-8 with a pin-out similar to 4 other op-amps? You’ll be happy you did in about 4 years when that op-amp you’re using goes obsolete.
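The thermal trade-off in item 3 can be sanity-checked with P = I²R. The current and resistance below are illustrative, and the package ratings are the ballpark figures from the list above:

```python
def resistor_power_w(current_a, resistance_ohm):
    """Power dissipated in a resistor: P = I^2 * R."""
    return current_a ** 2 * resistance_ohm

# 250 mA through 4 ohms dissipates 0.25 W: right at the limit of a
# typical 1/4 W through-hole resistor, and well beyond the ~0.1 W
# rating of a small 0603 chip resistor. (Example values only -- check
# the datasheet derating curve for the actual part and temperature.)
print(resistor_power_w(0.250, 4.0))  # 0.25
```

Running any resistor at its full rated power also shortens its life, which circles back to the power-company habit of designing in headroom.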

There are probably other ways to help design a product with a long life span, but these are a good start. A common theme is to pay more for higher quality components, which might not be preferred in certain situations. However, designing products for the long term can help save money year after year by not having to replace products or maintain sold products so spending a little more up front could pay off in the end. Some newer consumer electronics industries create new products each year either to drive demand or to fulfill needs after older devices break (which they may have produced).  In the process, they try to drive cost down by using the cheapest parts available; this can cause failures and unhappy customers. To design a long term product, costs and long term design considerations must be balanced.

What’s the longest period you’ve ever had a piece of functioning electronics? What kinds of changes did you see over the years? Have you ever created a low cost design that lasted more than 5 years? Let me know in the comments.