It doesn't sound like the engineer allowed the change. Or was even aware of the change. The article states he found out the production had moved from California to Mexico. I don't care how much a vendor tries to tell me things have not changed when they move production to a "lower" cost facility, things will change. Something in the process will be different and suddenly it becomes an exercise in trying to fix something that someone else broke by trying to get a cost savings. I hope your company at least got to capitalize on part of the savings.
Question: Was the design failing when it was made in California?
Actually, I did receive Change Notices from several disk drive OEMs in the '80s. The best docs were from Nippon Peripheral Ltd (NPL), a division of Hitachi, which had the best change control process, and from Maxtor (now part of Seagate), which had the best quality transfer process when moving production to Singapore, with my friend Ken Wing at the QA helm at Seagate in the late '80s.
Managers need to make sure they have the right resources to support a transfer to Mexico. Back in the '80s I was involved in closing down a 200K-sq-ft Sperry site when I was at Burroughs, when they bought Sperry and became Unisys. For the most part the transfer was successfully split between the Singapore, Santa Clara, Winnipeg, and Nogales, Mexico plants. Although purchasing oversights led to daily flights to hand-deliver shortages during ramp-up from Mexico to Winnipeg, Canada, shipments were high quality and closely monitored.
Transfers always create new problems. Like one time in San Diego at Unisys: all boards that had been built in-house were outsourced, and one critical card with a time-of-day crystal oscillator started to fail during 24-hour burn-in and at customer sites, causing system failures. The fabrication docs said install XTAL POST WAVE, which was fine before with no-clean flux, but the new board shop used water wash. So they wave-soldered the board, then installed the crystal with its foil seal, and then did water flux removal.
I discovered the problem as a contract test engineer there in '99, by grinding open the crystal and discovering RUST inside. Bingo. Got with purchasing QA and called the vendor pronto to change the process. It allowed only about a week or two of bad product. The failure analysis engineers had not discovered the problem, and factory ICT/ATE all passed naturally, since the rust did not get started until burn-in, or over time.
Instructions were accurate but not explicit enough for a different flux cleaning process. :) Quick Failure Analysis is my strength after decades of design and experience in Murphy's Law.
Years ago I worked for a passenger rail company. One project used several dozen miniature panel-mounted circuit breakers in each end of each car. We bought them from a highly regarded manufacturer a thousand at a time. One fall we got a hiccup in delivery, so we called. Turns out they had turned the lights off in their New England plant, loaded all the machines on trucks, and moved the entire operation to Mexico one weekend, so to speak. And the fellows in Mexico couldn't seem to figure out how to make them right, so they weren't shipping and weren't sure when they could. Meanwhile our production department was making big noise as their stock disappeared. We called around, and a smaller competitor offered to modify one of their stock products slightly to match what we were buying. I never learned if the original supplier ever got back in the game, as we finished the contract a couple of years later still buying from the other folks, and I moved on shortly thereafter.
So, just because the consultant and HR said you can save a bundle moving your production to a foreign country, don't believe it until the Customer can see it. There's an old but very reliable method we used to use in IT [what can I say, many hats worn I have]: parallel operation. Don't take one system down until the replacement actually works properly. Hard you say? Any harder than losing your entire business?
Beth, good comments on design and process verification standards.
Often this has to be invented by the ODM, since it is proprietary wisdom peculiar to the experience of the design iteration. I had to invent my own special DVT/PVT methods in the '80s for OEM 5.25" and 3.5" HDDs and add them to the standard corporate reliability slew of tests: gain/phase margin tests, BER margin tests, and HALT/HASS test criteria on top of drop tests, thermal shock and four-corner ambient, EMI ingress and egress tests, EMI susceptibility to RF, ESD, conducted and radiated, PS noise, etc. Testers are also very customized, and tester knowledge often requires reverse engineering skills beyond most users. So standardization is generic at best.
HALT/HASS is where it's at, with margin test measurements covering all the environmental bases for stress testing.
Approx. 30 pages of tests done in two months before release to purchasing of an OEM disk drive in volume for corporate-wide use in products. That was how I did it as Test Engineering Manager in the '80s for Burroughs/Unisys.
Now DVT/PVT may be streamlined, for better or worse. I could write a book about it. But standards often don't go deep enough to find the obscure defect.
For example, Seagate and we both did contaminated-particle air flow tests inside the head/disk assembly (HDA), but I proved their method was flawed: like a closed-loop vacuum machine, it always read 0 particles per cubic foot > 0.5 µm. My method made them stop production and fix the problem, to make sure they were getting true air samples and not return air flow in the closed-loop laser air particle analyzer (circa '85). It turned out the ferro-fluidic bearing seal was releasing micro airborne particles, but they were so light they floated above the surface. It was not a huge head-crash reliability risk on the ST-212, but a flawed test result was a concern. (Simple solution: slow down the sampled air flow rate and take more time.)
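The "take more time" part of that fix has a simple statistical side to it, separate from the closed-loop return-air flaw: if the sampled volume is too small, a counter can plausibly read zero even in genuinely contaminated air. A minimal sketch, using made-up concentration and volume numbers (not figures from the post) and assuming particle arrivals are roughly Poisson:

```python
import math

def prob_zero_count(conc_per_cuft, sample_cuft):
    """Poisson probability of counting zero particles in a given sample volume."""
    expected = conc_per_cuft * sample_cuft  # mean particles seen in the sample
    return math.exp(-expected)

conc = 10.0  # assumed true contamination level, particles/ft^3 > 0.5 um

# A quick grab of 0.05 ft^3 reads zero most of the time...
p_small = prob_zero_count(conc, 0.05)

# ...while sampling a full cubic foot almost never reads zero.
p_large = prob_zero_count(conc, 1.0)

print(round(p_small, 2))  # ~0.61
print(p_large < 1e-4)     # True
```

The point is only that a longer, slower sample integrates enough air for a true reading; it doesn't model the return-air problem itself.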
Nevertheless, Seagate engineering worked round the clock to design a seal shield for the rotary actuator to deflect potentially head-crashing magnetic particles. That was back when production was 100K units/month in Singapore.
Alexander, you are 100% correct. It is a sixth sense or black magic.
I was confident I could resolve the issue faster than purchasing could get delivery of units from the other supplier, Llamba. A week or two of frustration was maybe easier for me. Others might reject the lot, stop production, beat up the supplier, miss shipments, or purchase from both suppliers, but purchasing had vetoed that recommendation I made earlier, to save a few bucks. In the end it cost the vendor maybe 20 burnt-out supplies from HIPOT arcing and 50 rejects, and they had to eat crow, at the expense of a few hours of my time.
Perhaps wiser engineers might have enforced supplier process change control notification in the conditions of the contract. But that might not have worked either, because they never really changed the process; they were just sloppy in the details of the assembly process documents or training (poor process control). They learned how to document these often-overlooked details in their assembly drawings after I finally gave the supplier an ultimatum to make clear pictures of the manual assembly details that caused the HIPOT weaknesses.
The other main issue missed by those who commented is that HIPOT standards do not dictate whether a floating secondary should be grounded. And it was not obvious that the HIPOT tester could damage the DUT in spite of its current limiter. So standards don't get into tester implementation or PSU usage, which affected the results; the units passed at the factory. Grounding the secondary in this case creates a common-mode breakdown voltage lower than if it were floating (with the primary insulation in series, in effect, with the secondary insulation). After that, everything was flawless.
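The series-insulation effect described above can be sketched with toy numbers. This is an illustration with assumed (not measured) barrier ratings, and it uses the simplifying assumption that a floating secondary lets the two barriers share the stress so their withstand voltages roughly add; in a real supply the division depends on the barrier capacitances and leakage:

```python
# Assumed example values, volts; not data from the post.
V_test = 2500.0           # HIPOT test voltage applied primary-to-ground
V_withstand_pri = 2000.0  # primary-to-secondary barrier rating
V_withstand_sec = 1500.0  # secondary-to-ground barrier rating

# Secondary floating: the two barriers are effectively in series, so the
# test voltage divides between them and the combined withstand is higher.
floating_withstand = V_withstand_pri + V_withstand_sec

# Secondary grounded: the full test voltage stresses the primary barrier alone.
grounded_withstand = V_withstand_pri

print(floating_withstand > V_test)  # floating unit survives the test
print(grounded_withstand > V_test)  # grounded unit arcs over
```

With these numbers the same unit passes floating (3500 V combined) and fails grounded (2000 V across one barrier), which is the shape of the common-mode breakdown the comment describes.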
But yes, I did too much work for the supplier. My reward has always been my satisfaction in solving impossible puzzles quickly (Sherlock Ohms style), albeit without financial benefit, which unfortunately has been my weakness.
I did the same for Seagate, SyQuest, Burroughs, Iris Systems, C-MAC, etc.
(Watch out for future posts... like the RF-noise-from-hell episode with a microwave radio inside a hydro meter, in only one of a dozen or so brands of meters for AMR use, on a 1 GHz signal.)
Anyone need an old retired gumshoe for hire? ;) Just ask. I love to help. (Thought for food.)
The real question is why you let your CMs change their processes without informing you and without providing first-article samples for you to test before they get into your supply chain.
Clearance / spacing was not sufficient? Changes were made to the layout/copper in the AC side of your supply and the first you are aware is when they start to fail? This kind of stuff happens all the time with printed circuit assemblies but it should never happen with a PSU with so many safety and regulatory requirements.
A new service lets engineers and orthopedic surgeons design and 3D print highly accurate, patient-specific, orthopedic medical implants made of metal -- without owning a 3D printer. Using free, downloadable software, users can import ASCII and binary .STL files, design the implant, and send an encrypted design file to a third-party manufacturer.
For industrial control applications, or even a simple assembly line, that machine can go almost 24/7 without a break. But what happens when the task is a little more complex? That’s where the “smart” machine would come in. The smart machine is one that has some simple (or complex in some cases) processing capability to be able to adapt to changing conditions. Such machines are suited for a host of applications, including automotive, aerospace, defense, medical, computers and electronics, telecommunications, consumer goods, and so on. This discussion will examine what’s possible with smart machines, and what tradeoffs need to be made to implement such a solution.