I third ttemple's and RogerD's comments. In my 30 years of embedded hardware/software development, the most successful efforts in terms of product quality and timely delivery were the result of small teams of very talented individuals who worked well together. That got the product off on the right foot with a great architecture, kept it on track with quality code, and got it tested, debugged, and delivered with the team supporting and helping one another. Barring management interference, the quality of the code and the pace of development tend to match those of the least-capable member of the team. One person is fastest, but can get stuck occasionally; that's where a few more team members help. More than 3-5, though, and the project can rapidly enter the death-by-committee/meeting realm. Larger projects can only succeed if a great architect can break them down into manageable subprojects that integrate well, or if so many engineers are thrown at them that they get done by sheer overwhelming numbers.
"The average programmer writes about 200 lines of code per month. At that rate, a staff of 50 would need 100 months -- more than eight years -- to write a million lines of code."
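The arithmetic in that quote checks out; a quick sketch using only the figures the quote itself supplies:

```python
# Figures from the quote above: 200 lines of code per programmer-month,
# a staff of 50, and a 1,000,000-line product.
LINES_PER_PROGRAMMER_MONTH = 200
STAFF = 50
TOTAL_LINES = 1_000_000

months = TOTAL_LINES / (LINES_PER_PROGRAMMER_MONTH * STAFF)
years = months / 12

print(months)           # 100.0 months
print(round(years, 1))  # 8.3 -- "more than eight years"
```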
Knowing the author is simplifying for the sake of brevity, it may not be obvious to some that the total lines of 'good' project code per unit of time is never linear in the number of people devoted to the task. There is definitely a point of diminishing returns, and a point at which adding more people does the project a disservice by making the overall task unwieldy, if not outright unmanageable. Microsoft used to blame IBM for ruining OS/2 because non-technical managers relied on the 'masses of asses' principle as a means of (erroneously) getting it done faster, then ran roughshod over the coders when the simple arithmetic did not pan out. The people who were party to the overall 'vision' at the outset can become disconnected from what is actually emerging as new people are added at the back end to expedite certain tasks or address new requirements. Moreover, the newcomers may have a completely different view of what the goal posts look like. If you start out with a few people who all know 'C' well and then, say, marketing decides the thing needs Android, bringing in Java experts who've never seen a pointer in their lives may cause the team to split into two camps that end up competing more often than working together.
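The non-linearity can be illustrated with a toy model of my own (not from the article): each added programmer contributes fixed output, but every pair of programmers costs coordination overhead, in the spirit of Brooks's n(n-1)/2 communication paths. The coefficients below are arbitrary, chosen only to make the peak land in the small-team range:

```python
# Toy model (my own illustration, not from the article): net team output
# is per-person output minus a Brooks-style pairwise coordination cost.
# per_person and overhead_per_pair are arbitrary illustrative numbers.

def effective_output(n, per_person=200, overhead_per_pair=60):
    """Net lines/month for a team of n, after pairwise coordination cost."""
    pairs = n * (n - 1) // 2          # communication paths in the team
    return n * per_person - overhead_per_pair * pairs

best = max(range(1, 101), key=effective_output)
print(best)                           # peak team size under this model: 4
print(effective_output(20))           # large teams go net-negative here
```

Under these (made-up) coefficients the net output peaks at a team of four and then falls; past a certain size the model even goes negative, mirroring the "disservice" point above.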
Regarding operating systems, I agree they should be avoided wherever prudent and possible. For some projects, however, there's no getting away from one. For example, if you're targeting a high-end MPU like a Cortex-A8 or above, you NEED an OS, or else you'll get bogged down in the minutiae of writing drivers and the like. The first rule of thumb is that you should abandon all rules of thumb. The second might be that if you're using an MCU like an MSP430 or a Cortex-M0 to M3, you can probably get away without an OS. As stated in the article, though, anything beyond the simplest concurrency requirements dictates an OS to manage access to shared, inter-process objects.
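The shared-object problem can be shown with a host-side Python sketch (on an embedded RTOS the equivalent primitive would be a mutex semaphore, but the idea is the same): two threads do a read-modify-write on a shared counter, and the OS-provided lock is what makes it safe.

```python
# Host-side sketch of the shared-object concurrency problem: two threads
# increment a shared counter. The unprotected read-modify-write would
# race; the OS-provided lock serializes it. (On an embedded RTOS the
# primitive would be e.g. a mutex semaphore rather than threading.Lock.)
import threading

counter = 0
lock = threading.Lock()

def worker(iterations=100_000):
    global counter
    for _ in range(iterations):
        with lock:          # OS-managed mutual exclusion
            counter += 1    # read-modify-write is now effectively atomic

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000 with the lock; unpredictable without it
```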
Thanks to the author for making a software guy feel important for an afternoon. It's time to go home so my teenager can ruthlessly burst that delicate bubble.
Great article, Charles. I am one of those mechanical types and have very limited experience with software, embedded or otherwise. This field fascinates me, and I am certainly appreciative of your article highlighting the difficulties of the technology. Your comments about the time-consuming efforts and costs of producing the code were revelations. My experience in programming is with C++, Pascal, and Visual Basic, which are basically "learning-type" languages. If I may: I write an educational blog published through WordPress, i.e. www.cielotech/wordpress.com. Would you mind giving me permission to reference your article in an upcoming blog post discussing embedded systems? I think my readers would also be fascinated by the subject. Many thanks, Bob J.
ChasChas: It can't really be boiled down any further than to say that writing and debugging code is a very slow, tedious, complex process, and many products have hundreds of thousands, or even over a million, lines of code. As RogerD accurately points out here, the numbers cited refer more to larger projects. Still, the stories I've heard seem to indicate that many, many teams don't have a full appreciation for the scale of the software portion, and that misunderstanding (or lack of understanding) gets them into trouble. As for your reference to eccentric behavior by programmers, we'll need some deeper insight from our readers on that one.
I couldn't have said it better. I completely agree that the smallest possible software team will usually minimize development time.
I have seen a one man "team" design, debug, and program a very high performance FPGA/DSP/Microcontroller based motion control and data acquisition system in about a year (hardware and software). I doubt if a whole team could have done it in five years. A government funded team would probably never have finished it. I'm not saying that anybody could have done what he did, but he was the right man for the job, and adding more people to the mix would have only slowed him down.
Unless a software project can be very distinctly divided and conquered, the fewer programmers the better.
One additional thought: when a project takes a COTS module and places it onto a product host PCB, the agency approvals (FCC, UL, CE, etc.) are streamlined because the COTS module's existing approvals "grandfather" part of the host product's approval process. By contrast, embedding the solution eliminates that shortcut, and you must face the full scrutiny of each agency. Plan on adding at least another 8-12 weeks before approvals are granted.
To point #1 (it's all about the SW): truer words were never spoken. A recent contract assignment involved placing a standardized (COTS) transceiver on a motherboard. One staff-meeting discussion entertained the idea of eliminating the COTS transceiver in favor of a directly embedded chipset solution. Easier for the EEs; easier for the MEs. But the SW guys hit the ceiling, citing months of recoding and redevelopment. All the points of your article are great checkpoints for whole teams, and especially program managers, to post on their walls for continuous awareness.