@clia (and @THasham): There are several books out there dealing with various aspects of software/system testing. I have not been out looking for what's there so I can't say. I would guess that there is not a book out there about Hackware Testing or similar. It may be because it violates some good practices in software development.
@jrjohns and @caa028: I didn't use the term "hackware" back then. In fact, I didn't realize that what I was doing was that unusual. So in trying to describe it to others, "hackware" was what I came up with, because I hacked up the code real quick to run a quick test, then took it out. So it never became a formal development/test methodology deployed within HP. But yes, "hackware" does have a rogue or underground connotation. I could not find the term "debug private" used online in this context. Another name that describes it is "ad hoc" testing, but that term is already used and has other meanings within the testing field. Here are some other possibilities: "precision testing," "fleeting testing," "temporary testing," "short-term testing," and "in-code testing." None of those have the same impact that "hackware" does. Any ideas out there?
@caa028: Yes, you can induce some errors at the system level. We could cause a paper jam in the printer, but we could not cause the fuser to overheat or the laser to stop working without extensive modification to a pristine system.
@Ran: I also didn't get any awards for having talked to the hardware engineers 18 months earlier, which allowed me to get my driver out on time because I didn't have to mess with working around hardware defects.
@snandu13: Humidity is a good way to stress physical items, like the hardware and mechanical components. Most of what I was talking about was stressing the software/firmware aspects. It is rare but I have dealt with cases when humidity affected the firmware, such as changing the timing of mechanical or electrical parts.
@luiscosta: "Fuzzing" is a term I had not heard before. There is a Wikipedia entry for it: http://en.wikipedia.org/wiki/Fuzz_testing. But it looks like it is external to the unit being tested, hammering it from the outside with a bunch of random parameters. That, to me, is very similar to the stress testing I was talking about. My hackware is internal to the module being tested, and it does a specific task (not random).
It doesn't make sense to version control hackware. For example, I may, on the fourth loop, pretend that the hw returned 0x3D. I verify it works, then I change it to 0x7D, recompile, and run it again. I may run through a dozen values in 30 minutes to complete my test. Submitting each one of those to version control doesn't seem practical to me. And, if I were to make that a unit test, I'd have to deal with a list of desired values and a loop of some sort. Again, my hackware tests the finer details of my code, details that are so minor that most people probably wouldn't make a unit test for it.
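To make that concrete, here is a minimal sketch of what such a hackware edit might look like. The names (read_hw_status, get_status) and the loop structure are invented for illustration; the point is the temporary, hand-edited override that gets recompiled for each test value and deleted afterward.

    #include <stdio.h>
    #include <stdint.h>

    /* Stand-in for the real hardware read (hypothetical name). */
    static uint8_t read_hw_status(void)
    {
        return 0x00; /* real code would read a device register */
    }

    static int iteration; /* counts passes through the processing loop */

    static uint8_t get_status(void)
    {
        uint8_t status = read_hw_status();

        /* HACKWARE - REMOVE BEFORE SUBMITTING.
         * Pretend the hardware returned 0x3D on the fourth loop.
         * Edit this constant (0x3D, 0x7D, ...) and recompile for
         * each value under test. */
        if (iteration == 4) {
            status = 0x3D;
        }

        return status;
    }

    int main(void)
    {
        for (iteration = 1; iteration <= 6; iteration++) {
            printf("loop %d: status = 0x%02X\n", iteration, (unsigned)get_status());
        }
        return 0;
    }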
Expanding on the #define DEBUG <level> idea: one can have a global debug variable which is a bitmask, each bit of which controls a particular class of debug information to allow selection at run-time. This global variable can then be set via an initialization file and/or adjusted via a console command, depending on what is available in the system.
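A minimal sketch of that idea, assuming a printf-based target; the bit names, the macro, and the console-command hook are all illustrative:

    #include <stdio.h>
    #include <stdint.h>

    /* Each bit enables one class of debug output (names are made up). */
    #define DBG_IO     (1u << 0)
    #define DBG_TIMING (1u << 1)
    #define DBG_STATE  (1u << 2)

    /* Global bitmask: set from an initialization file at boot and/or
     * adjusted at run-time via a console command. */
    static uint32_t g_debug_mask = DBG_IO;

    #define DEBUG_PRINT(class, ...)        \
        do {                               \
            if (g_debug_mask & (class))    \
                printf(__VA_ARGS__);       \
        } while (0)

    /* A console command handler might boil down to this. */
    static void cmd_set_debug(uint32_t mask)
    {
        g_debug_mask = mask;
    }

    int main(void)
    {
        DEBUG_PRINT(DBG_IO,     "I/O debug is on\n");     /* prints */
        DEBUG_PRINT(DBG_TIMING, "timing debug is on\n");  /* silent */

        cmd_set_debug(DBG_IO | DBG_TIMING);               /* run-time change */
        DEBUG_PRINT(DBG_TIMING, "timing debug is on\n");  /* now prints */
        return 0;
    }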
Thank you again, Gary; some very interesting thoughts and techniques here. I wanted to reiterate THasham's earlier request for book recommendations. I have to run now, but I can check back later here or during tomorrow's lecture for your response.
Good points on using #define DEBUG <level> to control how much debug code is in there. I also used a global variable, accessible from the debugger, to control the level. By default, it would produce low debug output. But if I wanted to, I could break into the debugger, change that global variable, and get more verbose debug output when running a test. A global variable also allowed hackware to temporarily turn on a higher verbosity level when desired.
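A rough sketch of how the compile-time level and the run-time global might combine; DEBUG, DBG, g_debug_level, and the level numbering are all assumptions for illustration:

    #include <stdio.h>

    #define DEBUG 2  /* compile-time ceiling; the optimizer drops calls above it */

    /* Run-time level, low by default. Writable from a debugger, or bumped
     * temporarily by a line of hackware during a test. */
    static int g_debug_level = 1;

    #define DBG(level, ...)                                   \
        do {                                                  \
            if ((level) <= DEBUG && (level) <= g_debug_level) \
                printf(__VA_ARGS__);                          \
        } while (0)

    int main(void)
    {
        DBG(1, "summary output\n");   /* prints: level 1 is enabled */
        DBG(2, "verbose detail\n");   /* silent: run-time level is 1 */

        g_debug_level = 2;  /* what a debugger write (or hackware) would do */
        DBG(2, "verbose detail, now visible\n");  /* prints */
        return 0;
    }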
@luizcosta: Regression testing is where there is a list of tests that gets run every time. It is used, for example, every week to qualify a new code drop before releasing it to everybody, to make sure nothing is broken. It is a subset of all the tests because it is mainly a quick check that nothing is generally broken.
@syakovac: You brought up the ATPG scan chain to force a state. And it made me realize that the primary focus of today's lecture was on software. I neglected the hardware side of the equation. Sorry about that. But yes, when testing hardware, my hackware principles still apply. Come up with a quick and dirty scan pattern, run it through and test. What I was referring to with states was software states, where it is easy to write to the state variable to put it in a desired state.
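For the software-state case, a minimal sketch of writing the state variable directly; the printer-flavored state machine and its names are invented for illustration:

    #include <stdio.h>

    /* Illustrative state machine for a printer-like device. */
    typedef enum {
        ST_IDLE,
        ST_PRINTING,
        ST_PAPER_JAM,
        ST_ERROR
    } state_t;

    static state_t g_state = ST_IDLE;

    static void run_state_machine(void)
    {
        switch (g_state) {
        case ST_IDLE:      printf("idle\n");               break;
        case ST_PRINTING:  printf("printing\n");           break;
        case ST_PAPER_JAM: printf("handling paper jam\n"); break;
        case ST_ERROR:     printf("error recovery\n");     break;
        }
    }

    int main(void)
    {
        /* HACKWARE - REMOVE BEFORE SUBMITTING.
         * Write the state variable to jump straight to the state
         * under test instead of physically jamming the paper path. */
        g_state = ST_PAPER_JAM;

        run_state_machine();
        return 0;
    }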
@GStringham – When testing code during the writing of code (hackware), I found it very useful to build compiler directives into my code (at the top of the file) for conditional compiling. That way, I could leave the test code in the final version if I wanted, or I could take it out if I wanted. It really didn't matter, because I could set the compiler directive to TRUE or FALSE.
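For example, here is a minimal sketch of that approach; TEST_CODE is an illustrative name, and in C's preprocessor 1 and 0 stand in for TRUE and FALSE:

    #include <stdio.h>

    /* Directive at the top of the file: flip it before release,
     * or don't; either way the shipping decision is one edit. */
    #define TEST_CODE 1   /* 1 = TRUE, 0 = FALSE */

    static int add(int a, int b)
    {
        return a + b;
    }

    int main(void)
    {
    #if TEST_CODE
        /* Test code: compiled in only when TEST_CODE is TRUE. */
        printf("self-test: add(2,3) = %d (expect 5)\n", add(2, 3));
    #endif
        printf("normal operation: %d\n", add(40, 2));
        return 0;
    }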
@s.schmiedl: You asked why hackware instead of doing it right the first time by writing a unit test. Good question that seems contrary to what I said. I can test things in finer detail with hackware than I can with unit testing. If I had the time to create a unit-level test for every type of hackware test case I run through, I would create so many unit-level tests that there would not be enough time to run through them all more than once or twice before the product needs to be shipped.
@GStringham - You are so correct when you say there are no awards for producing code with zero defects. On more than one occasion, I have gone to unit integration with the other programmers on the team and been the only contributor with zero defects, but nobody recognizes the effort/results. No, there are no awards!
I wanted to find out how much attention and validation time should be planned for DOEs (Design of Experiments).
Test planning - there are planning factors that need attention: repeatability over multiple runs, VTs, and other variables. Those factors will add to or reduce validation time, resources, and other infrastructure. I rely on DOEs to build statistics and models that help focus on worst-case areas. This gives many advantages, like variability, repeatability, and sensitivity data, to help focus more, or less, on particular areas. If you cannot test millions of chips, platforms, and devices, you need to plan a statistical target with acceptable margins. You can't test everything, so how do you plan and take some risks?
@s.schmiedl: I don't agree that ALL test code has to be extracted from the product before shipping to customers. But it definitely must not hinder the performance of the product. You can tell that Microsoft leaves test and debug code in, because when some app crashes, it asks if it can send the data back to MS for analysis. This really helps them understand what is going on out there.
GARY: If you use "HACKWARE," isn't it better to incorporate that into the versioning process, where the "librarian" of documents keeps track of all inserted "bugs" used during the test phases? Another point: "fuzzing" is already used instead of "hackwaring," and there are books and research done on it.