3. Jumping to conclusions. In failure analysis, comparing parts or assemblies that failed with ones that didn't can be a very valuable exercise. However, there is a temptation to latch onto any difference and immediately assume that it was responsible for the failure. You may even have a reasonable-sounding theory that explains how this difference could have caused the failure.
Unfortunately, the fact that something sounds reasonable is no guarantee that it is true. You can't determine the root cause of a failure by simply thinking or talking about it. You need to test your theories against reality. Without data to back it up, a reasonable-sounding theory is just a reasonable-sounding theory.
4. Collecting data instead of thinking. Instead of jumping to conclusions, some engineers go to the opposite extreme: They try to perform every test and collect every piece of data possible, regardless of how relevant it is to the problem at hand. Of course, this gives them a ready response when managers ask why the problem hasn't been solved yet: They're waiting on test results.
Clearly, it's important to have data, and too much is better than not enough. But before performing a test, ask yourself: What information do I expect to get from this test? What questions will this information allow me to answer? What are the limitations of this test? Thinking clearly about your test plan before you start testing will help you to stay focused on the root cause.
5. Throwing the kitchen sink at the problem. Often, there are competing theories as to why a part failed. Based on these theories, different engineers may suggest three or four possible changes to the design or manufacturing of the part, which might (or might not) fix the problem. Pressure from management to "just get it fixed" might mean forgoing testing to determine which theory is correct, implementing all of the changes at the same time, and hoping one of them works.
Sometimes the problem goes away, and the engineers look like heroes to management for solving it in such a timely manner. But really, nothing has been learned. Worse, in the tribal knowledge of the engineering organization, one of the changes, which actually had no effect, may undeservedly get the credit for fixing the problem. Once this becomes part of the conventional wisdom, it may prove very difficult to dispel. Learning the wrong lesson from a failure can be worse than learning nothing at all.