Page 1/4  >  >>
Failure analysis should look forward, not backward
Beth Stackpole   4/3/2012 8:11:41 AM
Using the recent Costa Concordia disaster, framed against the lessons learned from the infamous Titanic disaster, makes for a perfect "teachable moment" demonstrating the importance of failure analysis as part of upfront design. I would hope the takeaway from Professor Petroski's thoughtful post is that failure analysis needs to be a proactive part of the design process, not simply an after-the-fact exercise that comes on the heels of a disaster or product failure. On the upside, I would think the flurry of more accessible CAE and simulation tools can greatly aid engineers in this very important exercise.

Re: Failure analysis should look forward, not backward
Henry Petroski   4/3/2012 8:22:58 AM
"Proactive failure analysis" is the ideal way to approach any design problem.

Re: Failure analysis should look forward, not backward
williamlweaver   4/3/2012 8:36:41 AM
This quickly turns into a discussion of ethics. "Proactive Failure Analysis" should have predicted the Tylenol poisonings which created the safety sealing industry. A broken red lens on a train signal should have predicted the folly of using "white means go" for traffic signals. O-Ring elasticity measurements should have prevented the loss of the Challenger, and research into the toxicity of poly-alcohols (sugars) should have pointed to an epidemic of diabetes and heart disease.

We should always be mindful of risk analysis, but unless we are willing to live in the society described in Minority Report, perhaps the best thing we can do is learn the truth of our mistakes quickly and incorporate those lessons as we continue to evolve our technology...


Re: Failure analysis should look forward, not backward
TJ McDermott   4/3/2012 10:18:45 AM
The Titanic disaster is the result of too many flaws stacking up.  Eliminate any one of them, and the outcome would have been vastly different.  Had the bulkheads been full-height, the progressive flooding would not have occurred.  Had there been enough lifeboats, the loss of life would have been minimal.  Had the captain chosen to slow down in poor visibility conditions, the collision could have been avoided or been inconsequential.

Professor Petroski's point that lessons are learned from failures is accurate: the SS Andrea Doria / MS Stockholm disaster proves it.  The Andrea Doria was unable to launch half her lifeboats due to severe listing, but the ship's better design kept her afloat for 11 hours, allowing other ships to arrive in time to rescue those aboard.  It was the design that saved lives; carrying enough lifeboats alone would not have prevented loss of life, since half of them could not be launched.

There were lessons not learned though:  the collision still occurred in low visibility conditions.

Even today, ship collisions still occur in fog, despite shipboard radar and GPS technology.

Take note of the date of this accident.

There are still lessons to be learned from Titanic.


Learning the right lessons
Dave Palmer   4/3/2012 11:23:56 AM
A big part of my work involves analyzing failures after they happen. (Fortunately, most of the time they're failures which occur during in-house testing, not in the field).  The rest of my work involves trying to apply these lessons to prevent failures from happening in the first place.

It's not only important to learn from failures -- it's important to learn the correct lessons from failures.  Whenever you are trying something new, if anything goes wrong, the knee-jerk reaction is to blame whatever is new in the design.  For example, if you are trying out a plastic bearing and you experience a failure, many people will conclude that plastic bearings are simply no good.  Then, for the next decade, if anyone suggests using a plastic bearing, the response will be, "We tried that before and it didn't work."

There's also a tendency to throw the kitchen sink at a problem.  Often, when a part fails, different engineers may suggest three or four possible changes to the design or manufacturing of the part which might (or might not) fix the problem.  There may be pressure from management to "just get it fixed," which might mean implementing all of the changes at the same time and hoping one of them works.

Sometimes, this results in the problem going away, and the engineers look like heroes to management for solving the problem in such a timely manner.  But really, nothing has been learned.  Worse, in the tribal knowledge of the engineering organization, one of the changes which actually had no effect may undeservedly get the credit for fixing the problem.  Once this becomes part of the conventional wisdom, it may prove very difficult to dispel.
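The "kitchen sink" trap described above can be sketched in a few lines of Python. This is a toy model for illustration only: the failure function, the fix names, and the assumption that only one fix actually matters are all invented.

```python
def part_fails(clearance_mm, coated, material):
    """Toy failure model (assumption): only bearing clearance matters."""
    return clearance_mm > 0.05

baseline = {"clearance_mm": 0.08, "coated": False, "material": "steel"}

candidate_fixes = {
    "reduce_clearance": {"clearance_mm": 0.03},
    "add_coating": {"coated": True},
    "change_material": {"material": "bronze"},
}

# "Kitchen sink": apply every change at once. The failure goes away,
# but all three changes share the credit -- nothing has been learned.
shotgun = {**baseline,
           **{k: v for fix in candidate_fixes.values() for k, v in fix.items()}}
print("all-at-once fails?", part_fails(**shotgun))  # False -- but which fix worked?

# One factor at a time: test each change in isolation to find the real cause.
for name, fix in candidate_fixes.items():
    trial = {**baseline, **fix}
    print(name, "fails?", part_fails(**trial))
```

Run in isolation, only `reduce_clearance` eliminates the failure; the other two changes are exposed as red herrings instead of entering the tribal knowledge as "the fix."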

Finally, I'd like to point out that wishful thinking is not a successful design strategy.  Occasionally, when a failure occurs, people try to convince themselves that the conditions under which it occurred were so unique that it will never repeat itself.  This is especially true of failures which occur in testing, where there's a tendency to attribute any difficult-to-explain failures to the test method and assume they won't happen in the real world.  If that's really the case, then you should try to come up with a more representative test method.  But in any case, if you're arguing that a failure should be ignored, you'd better make damn sure you're right.   

Icebergs in April
Rob Spiegel   4/3/2012 4:09:28 PM
Excellent article, Prof. Petroski. I read an article recently about the Titanic's maiden voyage. Apparently the crew was warned of icebergs by a ship that had just passed through the waters the Titanic was heading toward. The crew of the Titanic even acknowledged the warning.

Re: Icebergs in April
Mydesign   4/4/2012 6:14:41 AM
Rob, out of curiosity, I would like to know why, even though the crew knew about the icebergs well in advance, they didn't change their route or take any other safety precautions.

Old Adage
Nancy Golden   4/4/2012 8:20:11 AM
"Expect the best, but prepare for the worst" certainly seems to apply in this situation. While we can not prepare for scenarios yet unimagined, it certainly makes sense to prepare for known risks. However, the human element will always be an uncontrollable variable. I designed a test set that required a cylinder to come down with some force over the test bed. In order to prevent the operator from inadvertently getting a finger smashed, I used two thumb switches and programmed it so that both switches must be blocked in order for the test to run and the cylinder to actuate. One of the engineers informed me after their visit to the plant it was installed at, that the workers had simply stuck a glove to block the thumb switch sensor on one side - totally negating my built in safety design. I guess I should have anticipated the human element, but one can only go so far in anticipating the actions of those who will be operating the equipment...

Anticipating failure is great, but...
ChasChas   4/4/2012 9:48:43 AM
Anticipating failure is great, but the engineer must know the fine line between probable cause and paranoia. At some point the design goes on - fears or no.

But true, arrogance seemed to be part of the Titanic's design criteria.

Re: Anticipating failure is great, but...
Henry Petroski   4/4/2012 10:04:27 AM
Yes, there is a fine line between anticipating failure and paranoia, and the design must go on. The perfect is the enemy of the good, but the key question is always how good is good enough.

