In the Zenph Studios reproduction instance you get all of the performer's nuances except for actually watching the performer, who is long dead. And the reason for reproducing the performance on a real instrument, voiced to sound like the correct period instrument, is to hear as close a reproduction of the original performance as possible. Nothing is more high fidelity than hearing an instrument with your own ears! Second best is a modern high-definition digital audio recording of that re-created performance. While you can listen for technique, timing, and interpretation, you can also hear far more nuanced tonality and sound from a modern recording than from an old scratchy acetate with its narrow bandwidth, poor frequency stability, wow, and flutter.
This type of performance restoration and instrument automation is a new era for the long-sought-after technology of capturing the moment for posterity. The Kodak moment! And it does work, even if you doubt it.
Of course, short of being there you cannot recreate being in the live audience. But that too will happen someday as we learn to store infinitely more data and read it out at faster and faster rates with dirt cheap digital processors.
Yes, the performance is frozen in time and will not vary as it would if you attended multiple concerts of the same program. That's not the point. They're not trying to eliminate the human artist, at least not yet!
bdcst, I don't think we're comparing apples to apples here. Why would I listen to an automated piano playing Gershwin (derived from an actual Gershwin performance) when I could just listen to the recording myself? Also, the Gershwin recording is ONE example of a performance by Gershwin of that piece of music. Do you think he played it exactly the same for every performance? Were other performances different? Better? Worse?
That is my point. The reason for recording music is to capture a single performance. If this is what the automated equipment is doing, it is no different from listening to a vinyl record, CD, tape, or mp3 file.
The nuance, expression, attitude, emotion and soul of the musician is what makes it special. You get none of that with a machine.
I see a human versus machine guitar "shredding" contest here. Just like the battle between Garry Kasparov and IBM's Deep Blue, how about pitting this contraption against some of the guitar greats and see what happens over the next few years as technology works on catching up....
Well, bronorb, it's time for your humble opinion to change. Mechanized instruments can and do sound amazing when the engineering is done right. Check out Zenph Studios. These folks have developed a technology that uses DSP analysis of older audio recordings of piano recitals to produce control files that reproduce the original physical performance, playing it back on a real piano as though the dead performer were there!
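To make the idea concrete, here is a minimal sketch of the last stage of such a pipeline: once the DSP analysis has extracted note events (onset, duration, pitch, key velocity) from the old recording, they can be flattened into a time-ordered stream of on/off messages that a player-piano controller could consume. The `NoteEvent` type and message format here are hypothetical illustrations, not Zenph's actual file format.

```python
from dataclasses import dataclass

@dataclass
class NoteEvent:
    onset: float      # seconds from the start of the recording
    duration: float   # seconds the key is held down
    pitch: int        # MIDI note number (60 = middle C)
    velocity: int     # key-strike intensity, 1-127

def to_control_messages(events):
    """Flatten note events into time-ordered (time, kind, pitch, velocity)
    tuples -- the shape a player-piano controller might consume."""
    msgs = []
    for e in events:
        msgs.append((e.onset, "note_on", e.pitch, e.velocity))
        msgs.append((e.onset + e.duration, "note_off", e.pitch, 0))
    msgs.sort(key=lambda m: m[0])  # interleave overlapping notes by time
    return msgs

# A two-note example: middle C, then an overlapping E a quarter second later.
events = [NoteEvent(0.00, 0.50, 60, 90),
          NoteEvent(0.25, 0.50, 64, 70)]
for t, kind, pitch, vel in to_control_messages(events):
    print(f"{t:.2f}s {kind} pitch={pitch} vel={vel}")
```

The hard part, of course, is the analysis step that recovers those events and velocities from a scratchy mono recording; this only shows why the end product is a *performance* (timing and touch) rather than an audio file.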
Last year at Hannover Fair, lots of people were talking about Industry 4.0. This is a concept that seems to have a different name in every region. I’ve been referring to it as the Industrial Internet of Things (IIoT), not to be confused with the plain old Internet of Things (IoT). Others refer to it as the Connected Industry, the smart factory concept, M2M, data extraction, and so on.
Some of the biggest self-assembled building blocks and structures made from engineered DNA have been developed by researchers at Harvard's Wyss Institute. The largest, a hexagonal prism, is one-tenth the size of an average bacterium.