So how did you attach the relay to the fixtures holding the scanners, so that it could physically execute the key press? And did you have to build those fixtures? I'm assuming not, since you were using the burn-in room anyway, prior to your time-saving auto-keypress solution.
You already corrected the typo in the article (burning room? Where are those smoke detectors when you need them - or was this a device to identify firemen while they enter a building? ;-)
I presume that the relay was only the first installment of the test, and was retired once the better solution was found: adding test routines to the unit's own software that would automatically scan every minute. After that, they only had to place the cups with the simulated retinas and throw the hidden switch; nothing needed to physically interact with the device to initiate the test.
I recall our automated test of mobility-enabled DECT base stations and terminals. Two base stations were installed at two sides of a room, toy train tracks were laid out between them, the train cars were stuffed with handsets that had a special version of their normal software, the added User Interface routines generated (and tallied) outgoing calls with random timing. After a night of testing you could read the tallies and compare with statistics from the Base Station to see how many calls were missed or dropped, while the terminals roamed between the base stations.
I can never get the software crew to add test routines to the "operational" software. They insist that this would require retesting the entire software, and they are right. Their policy is to program the test routines, run the tests, then re-program with the operational file. What if a bug caused a jump to a non-existent address, which in your setup would suddenly exist?
@Battar: Additional code certainly does increase the software V&V burden, but if architected properly it can actually make the SW team's work easier. In every instance where I've requested operational additions, I architect them as generalized tools that everyone can use for test and debug. Once they're implemented, I then work with the SW team to fold these features into their V&V test plans and procedures. In the end, the additions have actually reduced the overall SW test load and have repeatedly streamlined the whole development process.
I originally thought the same thing as Battar. In stuff I used to work on, test routines were never kept in the main operational software. They changed the timing, for one. But that was in the old days when cycles weren't "free." I can see that if you architected generalized tools, as you say, Chris, then adapted them for different applications right at the outset, they could "live" in the real code because they'd go through the complete test and dev cycle like everything else.
In an age of globalization and rapid changes through scientific progress, two of our societies' (and economies') main concerns are to satisfy the needs and wishes of the individual and to save precious resources. Cloud computing caters to both of these.
For industrial control applications, or even a simple assembly line, that machine can go almost 24/7 without a break. But what happens when the task is a little more complex? That’s where the “smart” machine would come in. The smart machine is one that has some simple (or complex in some cases) processing capability to be able to adapt to changing conditions. Such machines are suited for a host of applications, including automotive, aerospace, defense, medical, computers and electronics, telecommunications, consumer goods, and so on. This discussion will examine what’s possible with smart machines, and what tradeoffs need to be made to implement such a solution.