That's funny tekochip - I had the opposite problem. When I was a test engineer and parts started failing, everyone always wanted to point to the test set. I always kept calibrated "golden" units around so that I could verify tester operation. I made sure my golden units included passing units at both ends and the middle of the spec as well as rejects. Most of the time that would satisfy all involved that we needed to look at the parts themselves...
The author did a good job isolating the problem. I think perhaps the other folks were not so experienced in troubleshooting. I'll never forget as a young tech making an assumption that wound up delaying a resolution to a problem we were having. I don't remember what the problem was but I sure remember my angry boss pulling me into his office and writing ASS U ME in large letters on his white board. He then asked me if I knew what happens when I assume - If you look closely, I am sure you can figure out the rest of the story...
That happened in 1990 but is a lesson I carry with me to this day!
The 'When you assume...' phrase is common, but useless. Everybody makes assumptions. Assuming the rejected parts are actually bad, or assuming they are really good, is the beginning of troubleshooting. The real trick is to know and recognize what your assumptions are, and be prepared to revisit them when the troubleshooting doesn't agree. During a light-hearted conversation I was asked if I knew what happens when you assume. I replied, 'Did you assume that I was listening to you?' Of course you can't say that to your manager.
Most any Quality Manager worth their salt will run Gage R&R (gauge repeatability and reproducibility) studies on their test and verification people and equipment on a regular basis. When you don't do this, you haven't a clue as to what you're producing. You need to understand, and get full control of, all variables in your inspection processes.
Our people are amazed when we report the source of variables and variation in our most basic inspections (like using a mic or caliper). It's a real eye-opener for most people outside the Quality field. Managing Type I and Type II inspection errors is a fundamental problem in most every company.
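For readers outside the Quality field, a Gage R&R study splits measurement variation into repeatability (same operator, same part, repeat trials) and reproducibility (different operators). Here is a minimal average-and-range sketch with made-up data; the operator names, measurement values, and thresholds are all hypothetical, and it simplifies the standard method by skipping the usual EV correction term inside the AV calculation:

```python
# Simplified average-and-range Gage R&R sketch (hypothetical data).
# 3 operators each measure 5 parts 3 times; values are in millimetres.
import statistics

# measurements[operator][part] -> list of repeat trials (made-up numbers)
measurements = {
    "op_a": {p: [10.01, 10.02, 10.00] for p in range(5)},
    "op_b": {p: [10.03, 10.05, 10.04] for p in range(5)},
    "op_c": {p: [10.00, 10.01, 10.02] for p in range(5)},
}

# Repeatability (equipment variation): average range of the repeat
# trials, scaled by the d2 constant for subgroups of 3 (d2 = 1.693).
ranges = [max(t) - min(t) for op in measurements.values() for t in op.values()]
r_bar = sum(ranges) / len(ranges)
ev = r_bar / 1.693  # estimated repeatability sigma

# Reproducibility (appraiser variation): estimated here simply from the
# range of the operator averages, using the same d2 constant.
# (The full method also subtracts an EV contribution -- omitted here.)
op_means = [statistics.mean(v for t in op.values() for v in t)
            for op in measurements.values()]
av = (max(op_means) - min(op_means)) / 1.693

# Combined gauge R&R sigma.
grr = (ev**2 + av**2) ** 0.5
print(f"repeatability ~ {ev:.4f}, reproducibility ~ {av:.4f}, GRR ~ {grr:.4f}")
```

Comparing the GRR figure against the part tolerance is what tells you whether the gauge (and the people using it) can actually discriminate good parts from bad, which is exactly where Type I (good part rejected) and Type II (bad part accepted) errors come from.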
I read that the position which failed held the batteries in a "canted" position, but what was the precise reason for the test failing? Did the canted batteries fail to make contact at all? Was there high contact resistance, or what? The observation and deduction are smart, but I feel I am left hanging.
I appreciate that you want to be fair, but step #1 in ANY endeavor (Engineering or other) is to identify the task or requirement. In this case, to identify what kind of failure you are investigating (luckily there was a quarantined set of failed batteries to test).
If you manually test your failed bin of parts and the failure rate matches the expected (hopefully low) failure rate, then NDF (no defect found). Of course, you want to cut a few "good" ones apart to make sure there are no latent or intermittent problems before you announce to your boss NDF. Task identified: find out why good batteries tested bad.
Which is what the author did, and what makes this one a good story. Kudos.
Apologies for keeping you hanging...the canted condition caused bad or high-resistance contact, and the test registered a failure. The observation that the test position was canted jogged my memory about other tests I had done varying the terminal contact, which had shown a wide variance in test results.
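The mechanism above can be sketched numerically: extra contact resistance in series with the cell drops the terminal voltage the tester sees under load, so a perfectly good battery reads below the pass threshold. All the values here (open-circuit voltage, internal resistance, load current, pass limit, and the two contact resistances) are assumed for illustration, not taken from the article:

```python
# Hypothetical illustration of a good cell failing a loaded-voltage test
# because of contact resistance at a canted test position.
V_OPEN = 1.60        # open-circuit voltage of a good cell, volts (assumed)
R_INTERNAL = 0.15    # cell internal resistance, ohms (assumed)
I_LOAD = 1.0         # tester load current, amps (assumed)
V_MIN_PASS = 1.40    # pass threshold at load, volts (assumed)

def measured_voltage(r_contact):
    """Terminal voltage the tester sees with extra series contact resistance."""
    return V_OPEN - I_LOAD * (R_INTERNAL + r_contact)

good_contact = measured_voltage(0.01)    # clean, square contact
canted_contact = measured_voltage(0.30)  # high-resistance canted contact

for label, v in (("good contact", good_contact), ("canted contact", canted_contact)):
    verdict = "PASS" if v >= V_MIN_PASS else "FAIL"
    print(f"{label}: {v:.2f} V -> {verdict}")
```

With these numbers the clean contact reads 1.44 V (a pass) while the same cell through the canted contact reads 1.15 V (a fail) — the part never changed, only the fixture did.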