That makes perfect sense, Didymus7. With the ever-increasing time-to-market pressures on design, I would imagine this is a common problem. Each year (maybe every six months), you take your product, add new competitive features, and shove it back out the door.
This was an excellent explanation of the problem and the fix. And I can tell you exactly why it happened. The root cause is most probably that the designers were not given enough time to test the design. The design completion date was not set based on proper design and test scheduling, but more likely by a customer PO.
I do agree, but if the SOP was followed for development, there is no excuse for this to happen. ISO certification works when the employees follow the development paper trail. Cut-and-paste reuse should be caught by the ISO plan. There is something to be said about QC controls.... It sounds like a total QC failure to me, and a slap on the wrist for the engineers for not following the QC plan.
As several commenters here have pointed out, this could easily be traced to a human (i.e., personnel) problem. I suspect that these kinds of software/hardware glitches are commonplace, largely because companies too often can't manage to hang on to the design engineers who know why a product was designed that way in the first place.
A big chunk of what happens -- particularly in the modern world where "software guys" and "hardware guys" are rarely one and the same -- is that when somebody goofs up on one side (in this case a hardware guy undersizing a power supply), it's the guy on the other side of the wall who sees it first and diagnoses it through the lens of his own world view. And to a software guy, a piece of code that crashes repeatably on the same instruction every time smells like a bad tool, not a bad power supply.
At the same time, there seems to be this dividing line between the "analog circuits guy" (who in this case got the design responsibility for the switchmode power supply) and the "digital circuits guy" (who got the job of designing the processor core and attaching its peripherals). It was those two not talking with each other that produced the problem of "undersized supply" in the first place.
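The "undersized supply" failure mode above can be sketched with a back-of-the-envelope load calculation -- the kind of cross-boundary check neither side ran. All the numbers and the simple output-impedance model here are illustrative assumptions, not figures from the actual design:

```python
def rail_voltage(v_nominal, r_out, i_load):
    """Supply rail voltage under load, using a naive output-impedance
    model (all parameter values below are hypothetical)."""
    return v_nominal - r_out * i_load

V_NOM = 3.3        # nominal 3.3 V rail (assumed)
R_OUT = 0.5        # effective supply output impedance, ohms (assumed)
V_BROWNOUT = 2.9   # processor brown-out reset threshold, volts (assumed)

i_idle = 0.3       # amps drawn at idle (assumed)
i_peak = 0.9       # amps with every peripheral active (assumed)

print(rail_voltage(V_NOM, R_OUT, i_idle))  # ~3.15 V: fine at idle
print(rail_voltage(V_NOM, R_OUT, i_peak))  # ~2.85 V: below brown-out, so reset

# The reset fires at the same point every run -- the instruction that
# switches on the last peripheral -- so from the software side it looks
# exactly like a deterministic compiler or toolchain bug.
```

At idle the rail looks healthy; only at peak load does it sag below the brown-out threshold, which is why the failure hides until the feature-laden later revision pushes current draw up.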
Where it's good to have an "old timer" around is where you encounter those problems that cross boundaries between "hardware" and "software" as well as between "analog" and "digital." Engineers like that were being minted 30+ years ago. Now they're not, and the only way they come into being is with the accumulated on-the-job experience of a few decades behind them. In theory, project managers should have enough visibility into their project's activities to spot cross-functional problems like this. But in the last few years there's been a trend for engineers who don't like the profession after a few years to go get the MBA and come back to the organization with a "management" hat on. And when that happens, you're about guaranteed to have projects run by people with neither the breadth nor the depth to solve thorny problems like this.
The good news in the United States is that at least there are a few engineers who put in the 25 years it takes to get good at the business before taking on project management responsibilities. There are countries in this world where societal pressure is such that you're perceived as a failure if you're still "just an engineer" after 25 years and haven't moved into management. And the net result there is that *nobody* in that society puts in the time to get really good.
I logged 25 years as a "pure engineer" myself before founding Focus Embedded. And the average age of employees here is 54. But we're something of an anomaly in that regard. That said, we also get all of the really hard problems that nobody else can solve, because nobody else has the breadth of experience to see the entire picture when a beast such as "all F's change to all 0's" rears its head.
Good points, TJ. I believe that inexperienced engineers may be part of the problem. Another is the rapid pace of new product development. So many products get updated on a schedule (think Apple). Engineers are tasked with adding new features with each release. It makes perfect sense that they would keep adding new features to the existing design without revamping the entire product design.
This is the risk companies take when they force out senior engineers for less expensive junior ones. The loss of corporate knowledge can be critical. This is not just a case of experienced vs. inexperienced, but that of knowing a product line well and remembering the reasons for design expansions.
Companies end up answering the question "Why is the design this way?" with the answer "Because that's the way we've always done it" and have no basis to support the answer.
This error is so understandable. The design works, so it gets passed down again and again. Why not? Why redesign? At a certain point developments in what the design is supposed to do outgrow the basic dependable design. This has got to be happening all the time.
What should be the perception of a product's real-world performance with regard to the published spec sheet? While it is easy to assume that the product will operate according to spec, what variables should be considered, and is that a designer's obligation or a customer's responsibility? Or both?