Actually, I have two chargers. One will say yes or no depending on the day, and the other always says yes (green light). It's very annoying, and I wish I had an answer as to why this happens. Either way, the batteries work... I'm just never sure whether to trust that they are fully charged. I'm about to just buy all new batteries, and if a charger says they're bad... toss the charger.
Monitoring failures at each test position is mandatory in quite a few organizations; others just monitor for three in a row across the whole machine. The reason is that on an automated line a test-fixture failure can produce a shift's worth of rejects when there is nothing wrong with the parts. So the monitoring is very good economics.
I have designed a lot of industrial testing machines, and one common feature on many of them was code to detect three-in-a-row failures at any test position. That would either set a warning flag or stop the machine, since the processes were stable enough that three faults in a row were a warning of some kind of process deviation. Of course, if the tester did not record which fixture position the faults occurred in, it could never have spotted the problem.
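A minimal sketch of the per-position, three-in-a-row logic these two comments describe, assuming Python; the class, names, and eight-position fixture are illustrative assumptions, not the original machine's code:

```python
CONSECUTIVE_LIMIT = 3  # three faults in a row triggers the alarm

class FixtureMonitor:
    def __init__(self, num_positions, limit=CONSECUTIVE_LIMIT):
        # one consecutive-failure counter per test fixture position
        self.streaks = [0] * num_positions
        self.limit = limit

    def record(self, position, passed):
        """Record one result; return True if this position has now
        failed `limit` times in a row."""
        if passed:
            self.streaks[position] = 0   # any pass resets the streak
            return False
        self.streaks[position] += 1
        return self.streaks[position] >= self.limit

monitor = FixtureMonitor(num_positions=8)
for position, passed in [(2, False), (2, False), (2, False)]:
    if monitor.record(position, passed):
        print(f"WARNING: position {position} failed "
              f"{CONSECUTIVE_LIMIT}x in a row -- suspect the fixture")
```

The per-position counters are the whole point: a machine-wide counter would let a single bad fixture hide among the passes from the other positions.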
But it was certainly good detective work to locate the cause.
This Sherlock reminds me of some things I heard when reporting on machine vision and inspection equipment. Mainly: how do you tell when the test/inspection equipment is the cause of a failure, and not the part? The big automated production and assembly systems can gather the data, via software and often aided by machine vision, to do just that. But it all has to be configured correctly, and that takes a lot of time and energy. MV vendors told me that often it doesn't get done.
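One hypothetical way that gathered data can point at the tester rather than the parts is to compare each position's failure rate against the line-wide rate; the Python below is a sketch under that assumption, and the 3x threshold is an arbitrary illustrative choice:

```python
from collections import Counter

tests = Counter()     # total tests per fixture position
failures = Counter()  # failures per fixture position

def record(position, passed):
    tests[position] += 1
    if not passed:
        failures[position] += 1

def suspect_positions(ratio=3.0):
    """Return positions whose failure rate exceeds `ratio` times
    the overall line failure rate."""
    total = sum(tests.values())
    line_rate = sum(failures.values()) / total if total else 0.0
    return [p for p in tests
            if tests[p] and failures[p] / tests[p] > ratio * line_rate]
```

Parts should fail roughly uniformly across positions; a position failing at several times the line rate is more likely a fixture problem than a run of bad parts.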
Apologies for keeping you hanging... the canted condition caused a bad, high-resistance contact, and the test registered a failure. The observation that the test position was canted jogged my memory of other tests I had done in which varying the terminal contact led to a wide variance in test results.
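A back-of-the-envelope sketch of why a high-resistance contact reads as a bad battery, using nothing but Ohm's law; every number below (test current, voltages, resistances, pass threshold) is an illustrative assumption:

```python
I_TEST = 20.0        # A, assumed load-test current
V_OC = 12.8          # V, assumed open-circuit voltage of a good battery
R_INTERNAL = 0.010   # ohms, assumed battery internal resistance
V_PASS = 12.0        # V, assumed pass threshold under load

for r_contact in (0.001, 0.050, 0.100):  # clean contact vs. canted contact
    v_measured = V_OC - I_TEST * (R_INTERNAL + r_contact)
    verdict = "PASS" if v_measured >= V_PASS else "FAIL"
    print(f"R_contact={r_contact * 1000:.0f} mOhm -> {v_measured:.2f} V ({verdict})")
```

With these assumed values, an extra 50 mOhm at the terminal drops a full volt under load, which is enough to fail a perfectly good battery.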
I appreciate that you want to be fair, but step #1 in ANY endeavor (engineering or otherwise) is to identify the task or requirement. In this case, identify what kind of failure you are investigating (luckily there was a quarantined set of failed batteries to test).
If you manually retest your failed bin of parts and the failure rate matches the expected (hopefully low) rate, then NDF (no defect found). Of course, you want to cut a few "good" ones apart to make sure there are no latent or intermittent problems before you announce NDF to your boss. Task identified: find out why good batteries tested bad.
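A sketch of that sanity check in Python, assuming an illustrative 2% background failure rate and a 2x margin (both numbers are assumptions, not from the story):

```python
EXPECTED_FAIL_RATE = 0.02   # assumed true process failure rate (2%)

def ndf_check(retest_results, expected=EXPECTED_FAIL_RATE, margin=2.0):
    """retest_results: list of booleans, True = passed manual retest.
    Returns True (NDF likely) if the retest failure rate is within
    `margin` times the expected background rate."""
    failures = retest_results.count(False)
    rate = failures / len(retest_results)
    return rate <= margin * expected

# e.g. 50 quarantined batteries, 49 pass on manual retest:
print(ndf_check([True] * 49 + [False]))   # True -> suspect the tester
```

If the quarantined "failures" pass on retest at roughly the background rate, the parts were fine and the tester becomes the prime suspect.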
Which is what the author did, and that's what makes this one a good story. Kudos.