Actually, I have two chargers. One will say yes or no depending on the day, and the other always says yes (green light). It's very annoying, and I wish I had an answer as to why this happens. Either way, the batteries work...I'm just never sure whether to trust that they are fully charged. I'm about to just get all new batteries, and if the chargers say bad...toss the charger.
Monitoring failures at each test position is mandatory in quite a few organizations; others just monitor for three in a row across the whole machine. The reason is that in an automated line, a test fixture failure could produce a shift's worth of rejects when there was nothing wrong with the parts. So it is very good economics.
I have designed a lot of industrial testing machines, and one common feature on many of them was code to detect three-in-a-row failures at any test position. That would either set a warning flag or stop the machine, since the processes were stable enough that three faults in a row signaled some kind of process deviation. Of course, if the tester did not record which fixture position the faults occurred in, it would never have spotted the problem.
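The two comments above describe the same mechanism: keep a consecutive-failure counter per test position, reset it on a pass, and raise a flag when it hits three. Here is a minimal sketch of that idea; the class name, position IDs, and the boolean-return alarm style are my own illustrative assumptions, not the commenter's actual machine code.

```python
from collections import defaultdict

CONSECUTIVE_LIMIT = 3  # three faults in a row at one position raises a flag

class FixtureMonitor:
    """Tracks consecutive failures separately for each test position."""

    def __init__(self, limit=CONSECUTIVE_LIMIT):
        self.limit = limit
        self.consecutive_fails = defaultdict(int)  # position -> streak length

    def record(self, position, passed):
        """Record one result; return True if this position should alarm."""
        if passed:
            self.consecutive_fails[position] = 0  # any pass resets the streak
            return False
        self.consecutive_fails[position] += 1
        return self.consecutive_fails[position] >= self.limit

# Position 1 fails three times in a row while position 2 keeps passing,
# so only position 1 trips the alarm -- a machine-wide counter would not
# have seen three consecutive failures here at all.
monitor = FixtureMonitor()
results = [(1, False), (2, True), (1, False), (2, True), (1, False)]
alarms = [pos for pos, ok in results if monitor.record(pos, ok)]
```

Note that in the interleaved sequence above, a single machine-wide streak counter would have been reset by position 2's passes, which is exactly why recording the fixture position matters.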
But it was certainly good detective work to locate the cause.
This Sherlock reminds me of some things I heard when reporting on machine vision and inspection equipment. Mainly: how do you tell when the test/inspection equipment is the cause of a failure, and not the part? The big automated production and assembly systems can gather data, via software, often aided by machine vision, to allow just that. But it all has to be configured correctly, and that takes a lot of time and energy. MV vendors told me that it often doesn't get done.
Apologies for keeping you hanging...the canted condition caused a poor, high-resistance contact, and the test registered it as a failure. The observation that the test position was canted jogged my memory of other tests I had done varying the terminal contact, which produced a wide variance in test results.
I appreciate that you want to be fair, but step #1 in ANY endeavor (engineering or otherwise) is to identify the task or requirement. In this case, identify what kind of failure you are investigating (luckily there was a quarantined set of failed batteries to test).
If you manually test your failed bin of parts and the failure rate matches the expected (hopefully low) failure rate, then it's NDF (no defect found). Of course, you want to cut a few "good" ones apart to make sure there are no latent or intermittent problems before you announce NDF to your boss. Task identified: find out why good batteries tested bad.
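The retest logic above can be sketched in a few lines: retest the failed bin by hand and declare NDF only if the observed failure rate is in line with the expected process rate. The function name, the 2x tolerance margin, and the example numbers are all illustrative assumptions on my part, not figures from the story.

```python
def ndf_verdict(retest_failures, retested, expected_rate, margin=2.0):
    """Return True (No Defect Found) if the manual-retest failure rate
    is no worse than `margin` times the expected process failure rate."""
    observed_rate = retest_failures / retested
    return observed_rate <= margin * expected_rate

# e.g. 1 genuine failure out of 100 rejected parts retested, against an
# assumed 2% expected process failure rate: the tester, not the parts,
# is the likely culprit, so the verdict is NDF.
verdict = ndf_verdict(retest_failures=1, retested=100, expected_rate=0.02)
```

A rate check like this only flags the suspicion; as the comment says, you still destructively inspect a few "good" parts to rule out latent or intermittent defects before calling it NDF.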
Which is what the author did, and that's what makes this one a good story. Kudos.
What should be the perception of a product’s real-world performance with regard to the published spec sheet? While it is easy to assume that the product will operate according to spec, what variables should be considered, and is that a designer obligation or a customer responsibility? Or both?