I wasn't implying that we should live in a perfect world with zero risk of bad things happening. I'm no Pollyanna. And yes, I accept the risk of fatality when driving. But that's me choosing to do so (to some extent; driving is pretty well required much of the time, especially in non-urban areas). I do not choose to drive a car that's potentially fatal, or use a medical device or prescription medicine that could kill me. If I knew about those possible failures ahead of time, I might be able to make different choices, either a different car or medical device or none of the above. If I don't know, then something's wrong. Why should there be so many different electronic doodads, whether automotive or medical, or medications, for example, that require so much time and energy being regulated, all in the name of consumer choice? It looks to me like commercial interests have trumped all others in this regard.
Good point, Ann. What is an acceptable risk if the result is a fatality? I think there are some areas where we accept risk readily. One is driving, mentioned in an earlier comment. Most of us accept that risk on a daily basis. Another, also mentioned in an earlier comment, is exploration. Our current space program is amazingly safe compared to earlier human exploration. Throughout history, we've always accepted high risk for exploration. I agree with you on allergies. No risk of fatality is acceptable to reduce allergies, partly because there are so many alternatives with no risk of fatalities.
These examples are rather spectacular and easily draw our focus. I'd like to offer a slightly different perspective. Part of my current job is to manage medical equipment recalls for a VERY large healthcare organization. Why would an engineer do this? Because someone needs to understand the technology, its failure modes, and how those failures will affect patients (of whom I'm one as well).
I review over 2000 identified medical device failures/hazards per year. It's simply too many to effectively track each one to its final resolution. I have to manage the risk -- I need to triage the issues for impact and focus where I can be most effective. As much as I'd like to track every one down and ensure all of our hospitals can manage the problem, it's simply not possible. That means there's a chance that a patient will be injured or die because I didn't follow up on the 'right' issue. It's terrifying, but that's simply the way it is. No company can perform perfect risk elimination. In the real world, we have to perform risk management; we have to focus limited resources where we think they will do the most good. And sometimes, we get it wrong.
The Columbia 'accident' may have been preventable; I believe the book was "Comm Check". Several engineers' and groups' concerns, if acted on, could have led to the damage being detected.
The Challenger 'incident' was preventable; I think that book was "The Challenger Launch Decision". The Shuttle's operational limits were something like 40F to 99F, so when ice was observed on the vehicle, the engineers' recommendation against launch was well founded.
Before that was Apollo 1, when engineers argued against a 100% oxygen test, on top of many poor design features.
In each case, the advice of the engineers (experts) was ignored or over-ruled. I had much more respect for NASA before reading these books.
Thanks for a great article. I agree with Rob; you'd think the scarier real-world numbers would be the ones paid attention to, not what is supposedly the norm based on a few tests.
But the numbers also need to be related to actual people and actual harm, not thought about abstractly. If the statistical likelihood of something occurring is greater than zero and that occurrence has fatal results, then that risk is too high. For example, I once took a prescription medication for allergies that started getting bad press for fatal heart attacks. When discussing this with my doctor, he said, "But the risk is only 2%." Uh, right, but what if I'm in that 2%? No thanks.
Nice article. Seems to me that if the blowout preventer's actual performance included a real-world 45 percent failure rate -- even while tests indicated a 0.07 percent failure rate -- this would be grounds to call a foul and look into whether the blowout preventer system was adequate protection against catastrophe. Is this an example of regulators asleep at the wheel?
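To put some numbers on how glaring that gap is: if the tested 0.07 percent rate were the true failure probability, a 45 percent field failure rate would be essentially impossible to observe by chance. The sketch below uses a simple binomial model with an illustrative sample of 20 deployments (that sample size is my assumption, not a figure from the article):

```python
from math import comb

def prob_at_least_k_failures(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): chance of seeing k or more
    failures in n independent trials, each failing with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p_test = 0.0007  # the 0.07 percent rate indicated by tests

# Under the tested rate, even one failure in 20 deployments is unlikely:
print(prob_at_least_k_failures(20, 1, p_test))  # ~0.014

# A 45 percent field rate implies roughly 9 failures in 20 deployments,
# an outcome astronomically improbable if the tested rate were true:
print(prob_at_least_k_failures(20, 9, p_test))
```

The point of the sketch is that a handful of real-world failures is already overwhelming statistical evidence that the test-derived rate was wrong, which is exactly why the field numbers deserved the regulators' attention.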
Excellent point, Dave. I should note that I spoke with Roger Boisjoly after the Challenger disaster. (He was the one engineer who resisted going ahead with the launch, and lost his job as a result.) I also attended the first Washington, D.C. hearing of the Rogers Committee. That's the group where the late physicist Richard Feynman famously dipped an o-ring in ice water to show how brittle it became. I could go on; it was a fascinating experience.
Another excellent article by Professor Petroski. In a couple of other recent threads on this site there has been some discussion of groupthink, and the kind of treatment which engineers who challenge it can expect.
When I worked in quality, I often encountered the argument, "We've accepted this out-of-spec condition before and everything worked out ok, so we might as well accept it now." My response was always, "If you're playing Russian roulette and you pull the trigger and no bullet comes out, does that mean no bullet will come out the next time you pull the trigger?"
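The Russian-roulette retort can be made concrete with a short probability sketch (the 1-in-6 number is just the standard revolver illustration, not anything from the original comment): past successes don't change the risk of the next independent trial, but the odds of a long lucky streak still shrink fast.

```python
# Each trigger pull (with a spin of the cylinder) is an independent
# Bernoulli trial: surviving earlier pulls does not reduce the risk
# of the next one.
P_FIRE = 1 / 6  # one loaded chamber out of six

def prob_survive(n_pulls, p=P_FIRE):
    """Probability of surviving n independent pulls, spinning each time."""
    return (1 - p) ** n_pulls

print(prob_survive(1))   # ~0.833 -- same risk on every single pull
print(prob_survive(10))  # ~0.162 -- but a 10-pull streak is unlikely
```

That is the quality argument in miniature: each acceptance of an out-of-spec condition carries the same per-event risk, and repeatedly "getting away with it" only means the cumulative odds of eventually getting burned keep climbing.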