Actually, I have two chargers. One will say yes or no depending on the day, and the other always says yes (green light). It's very annoying, and I wish I had an answer as to why this happens. Either way, the batteries work...I'm just never sure whether to trust that they are fully charged. I am about to get all new batteries, and if the chargers still say bad...toss the charger.
Monitoring failures at each test position is mandatory for quite a few organizations; others just monitor for three in a row on the whole machine. The reason is that in an automated line, a test fixture failure could produce a shift's worth of rejects when there was nothing wrong with the parts. So it is very good economics.
I have designed a lot of industrial testing machines, and one common feature on many of them was code to detect three-in-a-row failures at any test position. That would either set a warning flag or stop the machine, since the processes were stable enough that three faults in a row reliably signaled some kind of process deviation. Of course, if the tester did not record which fixture position the faults occurred in, it would never have spotted the problem.
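The per-position detection described above can be sketched in a few lines. This is a minimal illustration, not any particular machine's code; the class and variable names (`FixtureMonitor`, `FAULT_LIMIT`) are hypothetical, and the three-in-a-row threshold is the one mentioned in the comment.

```python
# Hypothetical sketch of per-position consecutive-failure detection:
# each test fixture position keeps its own run counter, and three
# failures in a row at any one position raises a warning flag.

FAULT_LIMIT = 3  # three-in-a-row threshold


class FixtureMonitor:
    def __init__(self, num_positions):
        # Consecutive-failure count for each fixture position
        self.runs = [0] * num_positions

    def record(self, position, passed):
        """Record one test result; return True if this position trips the limit."""
        if passed:
            self.runs[position] = 0  # any pass resets that position's run
            return False
        self.runs[position] += 1
        return self.runs[position] >= FAULT_LIMIT


# Usage: a bad contact at position 2 trips the flag even while
# parts at the other positions keep passing.
monitor = FixtureMonitor(num_positions=4)
alerts = [monitor.record(2, passed=False) for _ in range(3)]
print(alerts)  # [False, False, True]
```

The key design point is tracking runs per position rather than machine-wide: a machine-wide counter would be reset by passing parts in the healthy fixtures and would miss a single bad position.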
But it was certainly good detective work to locate the cause.
This Sherlock reminds me of some things I heard when reporting on machine vision and inspection equipment, mainly: how do you tell when the test/inspection equipment, and not the part, is the cause of a failure? The big automated production and assembly systems can gather the data to answer exactly that, via software aided by machine vision. But it all has to be configured correctly, and that takes a lot of time and energy. I heard from MV vendors that it often doesn't get done.
Apologies for keeping you hanging...the canted condition caused a bad, high-resistance contact, and the test registered as a failure. The observation that the test position was canted jogged my memory of other tests I had done varying the terminal contact, which produced a wide variance in test results.
I appreciate that you want to be fair, but step #1 in ANY endeavor (engineering or otherwise) is to identify the task or requirement. In this case, that means identifying what kind of failure you are investigating (luckily there was a quarantined set of failed batteries to test).
If you manually test your failed bin of parts and the failure rate matches the expected (hopefully low) rate, then it's NDF (no defect found). Of course, you want to cut a few "good" ones apart to make sure there are no latent or intermittent problems before you announce NDF to your boss. Task identified: find out why good batteries tested bad.
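The retest triage above boils down to a simple rate comparison. A minimal sketch, with assumed names and numbers (`EXPECTED_FAILURE_RATE`, `TOLERANCE`, and the example counts are all illustrative, not from the article):

```python
# Hypothetical sketch of the NDF triage: manually re-test the rejected
# bin and compare the true failure rate against the expected (low)
# process rate. If they match, the rejects were false and the tester
# itself becomes the suspect.

EXPECTED_FAILURE_RATE = 0.02  # assumed normal process reject rate
TOLERANCE = 0.01              # assumed margin before suspecting real defects


def classify_rejects(retest_results):
    """retest_results: list of bools, True = part really failed on manual retest."""
    if not retest_results:
        return "NDF"
    rate = sum(retest_results) / len(retest_results)
    return "NDF" if rate <= EXPECTED_FAILURE_RATE + TOLERANCE else "real defects"


# 100 rejected batteries, only 2 actually fail on retest: NDF, so the
# task becomes finding out why good batteries tested bad.
print(classify_rejects([True] * 2 + [False] * 98))  # NDF
```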
Which is what the author did, and which is what makes this a good story. Kudos.