I recall several publications and reporters reveling in the "failure" of the HTV-2 test back in August. But the ability to withstand forces 100x greater than design specifications and still manage to execute a controlled abort should count as a success by anybody's metrics. Controlled flight at Mach 20 for 3 minutes should have provided a wealth of telemetry. And these are the unclassified tests... exciting.
I guess what's not clear to me is, why was the aircraft designed to withstand shockwaves 100 times LESS strong than it actually experienced? I'm especially surprised since this was apparently the second flight, not the first. Why didn't engineers do a better job of prediction?
That's a good question, Ann. The fact that it travelled successfully for three minutes might indicate that the shock wave was a sudden anomaly shortly before it failed (I can't imagine any design standing up to 100x loads for three minutes). Still, it's hard to see why no one foresaw a shock wave of this magnitude.
The reason is our limited ability to predict turbulence. Some simulation software has gotten close, but to date we can only predict conditions we have already tested. The physics of turbulence is still largely guesswork, and even after a good bit of aviation history we are still working out the kinks. I have been to several meetings with mathematicians who are leaders in this field, and it is difficult for them to predict with any great accuracy. Yes, a 10,000% error (loads roughly 100 times the design value) is outrageous, but it's possible in a field where we are still infants.
I absolutely agree... this was a fantastic accomplishment. After all, the whole point of testing something is to determine potential failure modes. Simulations give us a powerful set of tools to better predict failure, but there is really no substitute for an actual real-world test. So often, we discover important variables or interactions that were not anticipated by simulation.
In the early days of SPICE (the circuit simulator), the late Bob Pease had a rant in his weekly column. He published a circuit to simulate in SPICE and pointed out that a certain resistor dissipated negative power! He said he couldn't wait to put together the real circuit and watch it get colder by the minute. He speculated on the breakthroughs it would bring to food and beer storage.
Then he got serious and made the point: simulations are a good tool but no substitute for hands-on prototypes. Of course, simulation software has made huge advances in all disciplines, but Pease's point still rings true today.
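To make the physics behind Pease's joke concrete: for any real (positive) resistance, dissipation is P = i^2 * R, which can never be negative, so a simulator reporting negative dissipation is flagging its own error or a sign-convention mistake in the netlist, not free refrigeration. Here is a rough sketch of that sanity check; it is not Pease's actual circuit (which isn't reproduced here), and the function name and values are made up purely for illustration:

    # Sanity-check a simulator's reported resistor power against physics.
    # P = i^2 * R is non-negative for any positive resistance, so a negative
    # reported dissipation means the simulation (or a sign convention) is
    # wrong -- the resistor is not cooling your beer.
    def check_resistor_power(current_amps, resistance_ohms, reported_power_watts, tol=1e-9):
        expected = current_amps ** 2 * resistance_ohms
        if reported_power_watts < -tol:
            return f"Suspect: reported {reported_power_watts} W is negative, expected ~{expected} W"
        return f"OK: expected ~{expected} W, reported {reported_power_watts} W"

    print(check_resistor_power(0.01, 1000.0, -0.1))  # the "free cooling" result
    print(check_resistor_power(0.01, 1000.0, 0.1))   # the physical result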
Chuck, I was also guessing that the 100x shockwaves might be an anomaly. I just assumed that we knew a lot more about their potential force after all this time, and could therefore compute the relevant loads.
This is a DARPA project. Being able to fly that fast in the atmosphere means it can outrun anything shot at it (SAMs, bullets, etc.), so it could get somewhere fast and drop a payload (bombs, etc.). It could also be used to catch anything in the air (planes, missiles, etc.). It would also be extremely difficult to track or anticipate; even a laser would have a hard time hitting it, especially if it is making random micro-adjustments to its flight path. LA to NY in 12 minutes means it could get to North Korea in under 30 minutes.
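For rough scale, here is a quick back-of-the-envelope check of that coast-to-coast figure. The ~2,450-mile LA-to-NY distance and the ~13,000 mph cruise speed (roughly Mach 20 at altitude) are my own approximations for illustration, not numbers taken from this thread:

    # Back-of-the-envelope check of the "LA to NY in ~12 minutes" claim.
    # Both figures below are approximate assumptions for illustration.
    DISTANCE_LA_NY_MILES = 2450.0   # rough great-circle distance
    SPEED_MPH = 13000.0             # roughly Mach 20 at altitude

    flight_time_minutes = DISTANCE_LA_NY_MILES / SPEED_MPH * 60.0
    print(f"Estimated coast-to-coast time: {flight_time_minutes:.1f} minutes")
    # Prints about 11.3 minutes, consistent with the "under 12 minutes" claim.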