Nancy, it is much worse when a completely predictable situation could occur and the result would be a malfunction that could possibly kill people. THAT is the sort of problem that can give one the cold sweats.
I warned them, but I did not create an official memo describing why the problem needed to be fixed. The final frosting on the cake was when I told one manager that I had left a set of instructions on how to fix the problem, and he assured me that all of those instructions had been implemented. The catch is that I had NOT left any set of instructions, yet the guy assured me they had been followed completely. I don't work for that company now, and it no longer exists as a company, but quite probably the potential for the problem is still there.
I agree, William - you make an excellent point, and a programmer worth his or her salt would know that. When I was a test engineer, I was responsible for both the hardware and software design of my test sets, so it was a given that I would know the ramifications of any changes I made. But larger and more complex systems are often developed by teams, or some projects have the luxury of a hardware engineer and a software engineer collaborating, and coming in to make a software change without thoroughly understanding the HW operation would certainly be problematic.
Nancy G, the big challenge is to make certain that a fix implemented in software does not cause unintended problems a bit more distant from the fix, such as when the power has been switched on and the I/O is active but the software is not yet running. That was where the problem was in the system I was talking about. Fixing a hardware problem in software takes a thorough understanding of the whole system to do it safely. So yes, it can be done, but it cannot always be done safely.
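That power-up window - outputs live before the program runs - is exactly the gap a software-only fix cannot reach. A minimal sketch of the idea, with Python standing in for controller firmware and all I/O names hypothetical:

```python
# Sketch of a controller power-up sequence (hypothetical I/O names).
# The point: software can only guarantee output states AFTER it is
# running; whatever the hardware does between power-on and this first
# instruction is outside the software's reach -- which is why some
# hardware faults cannot be "fixed in software".

SAFE_OUTPUT_STATES = {
    "main_air_valve": 0,   # 0 = vented / de-energized
    "drive_enable": 0,     # motion drives disabled
    "clamp": 0,            # clamp released
}

applied = {}

def write_output(name, value):
    """Stand-in for a real digital-output driver call."""
    applied[name] = value

def startup():
    # Force every output to its documented safe state before any
    # control logic, communications, or diagnostics run.
    for name, value in SAFE_OUTPUT_STATES.items():
        write_output(name, value)
    return applied

startup()
```

Even with this discipline, the interval between power-on and the first instruction belongs entirely to the hardware design.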
I have certainly "fixed it in the software" on numerous occasions, and it was a good call for those specific scenarios. This is especially true of using software time delays to solve problems. I have also had to invert a 0 or 1 because a replacement part operated the opposite of the original - or some such thing. As others have already said, software can't fix everything, and in an ideal world the hardware will function properly so that the software just orchestrates its functions in a logically flowing program. Software fixes can often be done quickly and do not require buying additional parts, so they can make the test engineer out to be a hero at times. The important thing with software fixes is to document them. I would always comment my software so that three years later, when something needed to be troubleshot or changed, I knew what I had done previously - and anyone else working on that test set behind me would know also.
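That 0/1 inversion is easiest to keep documented if it lives in exactly one place. A sketch of the idea (names hypothetical), wrapping the polarity fix in a single commented read function instead of scattering NOTs through the control logic:

```python
# A replacement sensor reads opposite to the original part.
# Keeping the inversion in ONE documented place means a later
# troubleshooter sees the fix immediately instead of hunting
# for scattered inversions in the control logic.

SENSOR_INVERTED = {
    "part_present": True,   # replacement opto reads active-low
    "door_closed": False,   # original part, normal polarity
}

def read_sensor(name, raw_value):
    """Return the logical sensor state, applying any documented
    polarity fix for replacement parts."""
    if SENSOR_INVERTED.get(name, False):
        return 1 - raw_value
    return raw_value
```

The table doubles as the documentation: anyone reading the program three years later sees which inputs carry a hardware-substitution fix and why.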
GlennA is certainly correct that there are things that simply cannot be fixed in software. The worst I ever came across was where the sense of a valve was reversed in installation and then changed back in software. The result was that the action of the machine started when the air pressure came on and was not corrected until the computer controls and software were running. And hitting the "E-Stop" switch would cause the valve to go to the wrong position. It was a very bad condition made much worse by "fixing it in software."
Well, I'm not in the machinery area, but rather in telecom. For many years I was Chief Engineer (remember that glorious title!) for a smallish maker of specialized switching systems. Our key markets were mostly call centers of various kinds. These had to be extremely reliable/highly available (think about 911 centers, or inbound telemarketing/order centers where time was literally equal to money); the traditional central-office architecture depended on multiple layers of redundancy to achieve that goal. As the systems architect, I was faced with trying to meet the true availability numbers (many 9s) without getting overly complex (and expensive). Since I was quite familiar with MTBF analysis, I decided to use those tools to evaluate architectures. I found that once redundancy passed a certain level, availability actually DECREASED, because the increasing complexity made non-recoverable failures MORE likely. Any redundancy scheme depends on failure detection; if there are failure mechanisms that require a highly complex "umpire" (e.g. for majority-logic failure detection), I actually proved by failure analysis that having a single point of failure (a "no-no" in the orthodox reliability world) could provide greater availability (the true goal) than a complex system along with the necessary mechanisms to switch out the failed unit. With the help of my analysis, we were able to convince the customer that our approach provided better availability than the competition's mega-redundant version (and at less than half the cost). Thus we won the contract to provide ALL of the California Highway Patrol 911 call centers (back in the early '80s), and had a VERY satisfied customer base, with the resultant availability demonstrated in the field.
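The effect described here - redundancy hurting availability once the failure-detection "umpire" gets complex enough - can be shown with a back-of-envelope steady-state model. All numbers and the simplified formula below are illustrative assumptions, not figures from the original analysis:

```python
# Back-of-envelope availability comparison (illustrative numbers only).
# Unavailability of one unit: U = MTTR / (MTBF + MTTR).
# Duplex with imperfect coverage c and a series "umpire" needed for
# failure detection and switchover:
#   U_duplex ~ c*U^2 + (1-c)*U + U_umpire
# i.e. covered double failures, uncovered single failures, and the
# detection/switchover logic itself as a single point of failure.

def unavailability(mtbf_h, mttr_h):
    return mttr_h / (mtbf_h + mttr_h)

U_unit = unavailability(10_000, 4)     # one simplex unit
c = 0.9                                # 90% of failures detected/switched
U_umpire = unavailability(5_000, 4)    # complex majority-logic umpire

U_simplex = U_unit
U_duplex = c * U_unit**2 + (1 - c) * U_unit + U_umpire

print(f"simplex unavailability: {U_simplex:.2e}")
print(f"duplex  unavailability: {U_duplex:.2e}")
```

With a fragile enough umpire, the "redundant" configuration comes out worse than the single unit, which is the counterintuitive result the MTBF analysis surfaced.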
I had to write a PLC program for a machine designed by mechanical engineers. Several of the glitches were in the sensors. The mechanical engineer specified normally open sensors for the end-of-travel sensors. I tried, unsuccessfully, to explain that the end-of-travel sensors should be normally closed. Another glitch was sensor locations. One of the end-of-travel sensors was re-assigned to double as a home position sensor, and the machine had to move past the home sensor. The standard answer to the sensor problems was to 'fix it in the software'. Some mistakes cannot be fixed in the software.
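The normally-closed preference for end-of-travel sensors is a fail-safe argument: with NC wiring, a broken wire reads the same as "at the limit" and stops the machine, while a broken NO circuit silently reports "clear" and lets the axis drive into the hard stop. A toy illustration, with Python standing in for the PLC logic and all names hypothetical:

```python
# Why end-of-travel sensors should be normally closed (NC):
# the input reads 1 (closed circuit) while travel is OK and opens (0)
# at the limit. A broken wire ALSO reads 0, so the wiring fault is
# indistinguishable from "at limit" and the machine stops -- safe.
# A normally-open (NO) sensor fails the other way: a broken wire
# reads "not at limit" and motion is still permitted -- unsafe.

BROKEN_WIRE = 0  # an open circuit reads as 0 either way

def motion_allowed_nc(input_bit):
    # NC sensor: closed circuit (1) means travel is OK.
    return input_bit == 1

def motion_allowed_no(input_bit):
    # NO sensor: open circuit (0) means travel is OK.
    return input_bit == 0
```

Same failure, opposite outcomes: the NC version blocks motion on a wire break, the NO version keeps going - and no later software patch changes which way the circuit fails.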
In this particular case, all was not lost since the contract called for several of these machines, and this event, as I described, was with the first two carcasses, so the piles of "scrap" got used on the later four machines, as my memory serves, with very little waste at the end of the project. And, since this was a custom-designed machine, we kept all the "leftovers" in case they were needed for spare parts and/or repeat build contracts.
But, in essence, this phenomenon can become a disease amongst engineering types. Things that once seemed quite difficult to achieve, especially in hardware, all of a sudden become trivial exercises in software. Some of this is good for the end-user, some becomes "bloat", and we all know how we handle bloat with our canine pals.......
You're absolutely correct. I've designed & built a lot of sophisticated production machinery in my career, and I can attest to your sentiments. I was once part of a team that repurposed a series of web presses for a totally different process. While the project was a lot of fun & interesting, the owner of the company's nephew was the chief programmer. These presses featured industrial-grade 486/nn PCs, housed in a standard 19" rack cabinet. All the control electronics, including the servo-amps, etc., were contained in this cabinet. The "bells & whistles" never seemed to stop coming. As the project progressed, we saw more & more interface control hardware being added, and the control programs of course expanded exponentially as a result, since there were all kinds of "tie" lines from one system to another. The upshot of the whole thing was that the machines never worked to expectation early on. The mechanicals were OK, but the control was always off. Finally, the programming "genius" left of his own accord, and we revisited the project. By the time we were finished, a couple of weeks later, we had a pile of scrap wire, a pile of redundant sensors & control relays, and a considerably less cluttered control cabinet, AND the machine ran like a buzz saw, producing about 4,000 items per minute, which was over specification by a good margin. Of course, the machines could be throttled down to any respectable running speed, and the quality of the finished item remained exceptional throughout the production runs.