In the Zenph Studios reproduction you get all of the performer's nuances except for actually watching the performer, who is long dead. The reason for reproducing the performance on a real instrument, voiced to sound like the correct period instrument, is to hear as close a reproduction of the original performance as possible. Nothing is higher fidelity than hearing an instrument with your own ears! Second best is a modern high-definition digital audio recording of that re-created performance. While you can listen for technique, timing, and interpretation, you can also hear far more nuanced tonality from a modern recording than from an old scratchy acetate with its low bandwidth and poor frequency stability, wow and flutter.
This type of performance restoration and instrument automation is a new era for the long-sought-after technology of capturing the moment for posterity. The Kodak moment! And it does work, even if you doubt it.
Of course, short of being there you cannot recreate being in the live audience. But that too will happen someday as we learn to store infinitely more data and read it out at faster and faster rates with dirt cheap digital processors.
Yes, the performance is frozen in time and will not vary as it would if you attended multiple concerts of the same program. That's not the point. They're not trying to eliminate the human artist, at least not yet!
bdcst, I don't think we're comparing apples to apples here. Why would I listen to an automated piano playing Gershwin (derived from an actual Gershwin performance) when I could just listen to the recording myself? Also, the Gershwin recording is ONE example of a performance by Gershwin of that piece of music. Do you think he played it exactly the same for every performance? Were other performances different? Better? Worse?
That is my point. The reason for recording music is to capture a single performance. If that is all the automated equipment is doing, it is no different from listening to a vinyl record, CD, tape, or MP3 file.
The nuance, expression, attitude, emotion and soul of the musician is what makes it special. You get none of that with a machine.
I see a human versus machine guitar "shredding" contest here. Just like the battle between Garry Kasparov and IBM's Deep Blue, how about pitting this contraption against some of the guitar greats and see what happens over the next few years as technology works on catching up....
Well, bronorb, it's time for your humble opinion to change. Mechanized instruments can and do sound amazing when the engineering is done right. Check out Zenph Studios. These folks have developed a technology that uses DSP analysis of old audio recordings of piano recitals to generate control files for reproducing the original physical performance, playing it back on a real piano as though the dead performer were there!
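The pipeline described above (audio analysis, then a control file, then an automated piano) can be sketched in broad strokes. Zenph's actual algorithms are proprietary, so this is purely illustrative: it assumes a hypothetical note-detection stage has already produced (pitch, onset, duration, velocity) tuples, and shows only the last step of flattening them into the kind of time-ordered note-on/note-off event stream a player-piano controller or MIDI file consumes:

```python
from dataclasses import dataclass

@dataclass
class Note:
    pitch: int       # MIDI note number (60 = middle C)
    onset: float     # seconds from start of performance
    duration: float  # seconds the key is held
    velocity: int    # 1-127, how hard the key was struck

def notes_to_events(notes):
    """Flatten detected notes into a time-ordered stream of
    note-on / note-off events, the form a reproducing-piano
    controller (or a MIDI file) would consume."""
    events = []
    for n in notes:
        events.append((n.onset, "on", n.pitch, n.velocity))
        events.append((n.onset + n.duration, "off", n.pitch, 0))
    # Sort by time; at the same instant, note-offs come before note-ons
    events.sort(key=lambda e: (e[0], e[1] == "on"))
    return events

# A two-note fragment, as the (hypothetical) analysis stage might emit it
detected = [
    Note(pitch=60, onset=0.00, duration=0.50, velocity=80),
    Note(pitch=64, onset=0.25, duration=0.50, velocity=72),
]
for t, kind, pitch, vel in notes_to_events(detected):
    print(f"{t:5.2f}s  {kind:>3}  pitch={pitch}  vel={vel}")
```

The hard part Zenph solved is upstream of this: recovering pitch, timing, and especially key velocity and pedaling from a noisy mono acetate, which is far from trivial.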