Tough one. Most of the designs I have worked on worked the first time, but all were fully functional before release to manufacturing. Most of the issues were communications problems between software and hardware.
I have been able to get designs right "first time" -- never changed after release to production -- no bugs found -- more than a decade of use. It can be done -- those projects had thousands of circuits.
@LevitonDave - Most of the time the hw team is tasked with designing the interface because it is very hw-centric. But this is where it is important that sw/fw also be involved to make sure it works for them. We'll discuss this more tomorrow.
@LevitonDave - If the problem is clearly on the hw or the sw side, then assign it to that team. If it is unclear, then both should work on it. It may be assigned to one side or the other, depending on who has the best chance to solve it, but the other side would be obligated to assist.
@williambreit - Determining which is more profitable between first time right and finish it now depends on where you are measuring your profits. If management is looking at maximizing this quarter's profits, then finish it now is the mantra. But when you look long term, first time right is much more profitable. If you want more specifics, send me an email.
@LevitonDave - Getting management to buy into these concepts takes time. Some of the stuff I did anyway, without management approval or even their knowledge, but it eventually became part of the process when they saw the benefits. If you want to talk specifics, send me an email.
@luizcosta - At HP we had to use an old ASIC as a workaround to a new ASIC the third-party supplier could not get working. I have seen FPGAs used for workarounds. So it does happen, but only if it can't be fixed in SW.
@BobDJr - There is probably not much new in here beyond what I gave at the Boston ESC, because the principles are still the same. So a lot of the examples and illustrations will be similar. This would be a good refresher course for you if you feel you need it.
@raghu - My reference to bank loan employees is to say that my principles also apply to areas outside of embedded systems development, that it can even apply to employees working on loan applications at a bank.
@garysxt - Was your "both HW and SW are guilty until proven innocent" by design, or just the way everybody angrily thought? It is a good way to go in that both sides have the burden of proof, rather than, as someone else said, waiting to see if the other side fixes it first.
Leviton, I know of companies so hung up on doing it their own way that it took a massive failure out in the field, one that punished the company so severely that upper management had to face the customer and do what he said. It resulted in a top-down restructuring of the company, along with Six Sigma and Kaizen training, to get back into the customer's good graces. It cost the company about nine months of profits.
@JSP - I have not developed my own "version," per se, of collaboration and standards. There are enough out there already. My principles will apply no matter what standards are used. And best practices must be analyzed to ensure correct application.
LUIZCOSTA, ASICs are part of the design and usually implement functions that are very time-sensitive or involve too much functionality for existing ICs such as micros. Sometimes ASICs are used to protect IP, as they are harder to reverse engineer. An example of a function that typically goes into an ASIC is a real-time DSP function such as a multiplier/accumulator that implements a Finite Impulse Response (FIR) filter. ASICs can produce a result every clock cycle, whereas micros take many execution cycles to produce the final outcome.
I agree. I'm designing a digital filter whose interface to the computer is exceedingly difficult, because it was designed for a computer back in the 1980s. Today's computers are much faster, so the SW team has to write loops to slow down the interface, not realizing that the filter can run much faster. It has a 10 ns cycle time, whereas the SW guys are slowing it down to 1 ms! I've talked to them about it, and management said, "Nope, keep it as is."
I had to laugh when one day I received a "we have the time and staff to do things right the first time" message, and the next day found out that the same group had done a poor cooling design that caused our whole system to shut down!
@Alex – Concerning your interest in interfacing practices outside the circuit board, I found that performing the job of company/project liaison with a contracted software/firmware house was a good example. If I didn't put forth the effort and stay on top of everything going on, things got "hosed up" pretty quickly.
My interest in this presentation is to learn more about the line between hardware and software in FPGA SoC designs: how to make the hardware easier or more efficient to program, and how to decide which parts to place in hardware and which in software.
2 attendees - an Engineering Manager and a HW/SW Engineer. We are mostly interested in user interface design considerations and how to balance them with other important tasks. Do we use DMA, things like that? We are using an SPI interface.
@bill.whitehead: I was not getting audio either - first checked that I could open www.blogtalkradio.com and then www.llnwd.net (LimeLight Networks) - both worked. Then I did a <F5> (refresh) and it started working...
Often able to build in "hooks" for test access? I was lucky enough to have lots of indicator LEDs and configuration toggle switches. Being able to build diagnostics into the device has been VERY productive.
Predominantly hardware, but more than enough software to make the hardware walk and talk, and debug if it DOESN'T do either... NOT a manager (even though that's typically the end-of-life-cycle for engineers).
Guess I could read through today's slide deck and get a feel for whether we're going to see "top-down" vs "bottom-up", or "holistic", "conceptual", etc. Hey, all! Everyone enjoy the "break"?
Let's see: currently unemployed (searching, available on short notice), recent/nontraditional BSEE with years of professional experience in MANY areas (telecommunications, industrial lasers, biomedical devices, satellite dish positioners, etc.). No specific current projects (although I've got a few "percolating", as usual).
Welcome to this class. I'm looking forward to presenting this subject which is something that I feel strongly will improve your development processes. If you have any questions, post them here and I will answer them after the lecture part is over.
The streaming audio player will appear on this web page when the show starts at 2pm eastern today. Note however that some companies block live audio streams. If when the show starts you don't hear any audio, try refreshing your browser.
In an age of globalization and rapid changes through scientific progress, two of our societies' (and economies') main concerns are to satisfy the needs and wishes of the individual and to save precious resources. Cloud computing caters to both of these.
For industrial control applications, or even a simple assembly line, a machine can run almost 24/7 without a break. But what happens when the task is a little more complex? That's where the "smart" machine comes in. A smart machine has some simple (or, in some cases, complex) processing capability that lets it adapt to changing conditions. Such machines are suited for a host of applications, including automotive, aerospace, defense, medical, computers and electronics, telecommunications, consumer goods, and so on. This discussion will examine what's possible with smart machines, and what tradeoffs need to be made to implement such a solution.