Winners of a chance to stay at a European hotel chain will leave the next morning with a brand new piece of art, based on their sleep pattern data, which will be captured by sensors and painted by this ABB robot. (Source: ABB)
It is an interpretation of the data, so how was the software designed? It would be interesting to see how the data is handled. I will admit, I'd like my dreams to be interpreted. I would like to see how a nightmare compares with a typical night.
Cabe, yes, it would be interesting to see how the sensor data is converted to robot movement. Since the data is coming from body movement, it may not distinguish between pleasant dreams and unpleasant dreams unless the dream affected body movement.
I know when my wife is having a bad dream, she tends to toss around and mumble. If the data comes from body sensors and audio, could the software interpret 'erratic' behavior? Or how about erotic behavior? Or, as on most nights, you remember nothing?
Either way, this is cool. Wake up in the morning and see what surprise painting is waiting for you!
I like the questions about bad vs. good dreams and whether the resulting paintings would look very different from each other. I had similar questions. I think GTOlover is right, generally speaking: sleepers tend to get more active during bad dreams, so the painting might have a lot more going on in it than one produced by peaceful sleep.
I agree about the interpretation of the data: in fact, that was my first (and second and third...) question to ABB: what were the assumptions in the software design about how motion, temperature and sound sensor data would be interpreted visually? Although I didn't get an answer, it's obvious that you can design it any way you want (more or less). So the applications could be pretty broad.
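Since ABB never answered the design question, here is a minimal sketch of one plausible scheme. Every range, weight, and parameter name below is invented for illustration; this is not ABB's algorithm, just an example of how motion, temperature, and sound data could be mapped to visual properties:

```python
# Hypothetical mapping from sleep-sensor readings to painting parameters.
# All sensor ranges and scaling factors are invented for illustration;
# ABB has not published its actual algorithm.

def normalize(value, lo, hi):
    """Clamp a raw reading into [0, 1] over an assumed sensor range."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def painting_params(motion, temp_c, sound_db):
    """Map one night's averaged sensor data to brush-stroke settings."""
    activity = normalize(motion, 0, 100)    # restlessness drives stroke count
    warmth   = normalize(temp_c, 30, 38)    # body temperature drives hue
    loudness = normalize(sound_db, 20, 80)  # mumbling/snoring drives stroke width
    return {
        "stroke_count": int(50 + 450 * activity),  # restless night -> busier canvas
        "hue_degrees": 240.0 - 240.0 * warmth,     # cool blue -> warm red
        "stroke_width_mm": 1.0 + 9.0 * loudness,
    }
```

Under this (made-up) mapping, a restless, noisy night would yield many wide, warm-colored strokes, while quiet sleep would produce a sparse, cool-toned canvas, which matches the intuition in the comments above about bad dreams producing busier paintings.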
Now the elephant that paints pictures with a brush in it's trunk is going to go hungry. Even elephants are not immune to being replaced by technology. I could make the argument that the elephant is painting what I'm thinking and I defy anyone to prove me wrong.
Robert, I agree. Ann, this is a nice article. The idea of using FSR data to control a robot is pretty cool. I discussed FSRs and motion control in my book, Learn Electronics with Arduino, and I can see that technique being applied to operating the robot in this application. I'm wondering what the room rate is for this techno-art experience?
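For readers curious how FSR (force-sensitive resistor) data becomes a control signal, here is a minimal sketch, not ABB's code, of the standard voltage-divider conversion from an ADC reading to an estimated FSR resistance, which motion-control code could then threshold or scale. The supply voltage, fixed resistor value, and threshold are typical hobbyist choices, assumed for illustration:

```python
# Minimal FSR readout sketch -- illustrative only, not ABB's implementation.
# Assumes the common wiring: FSR between VCC and the ADC pin, with a fixed
# pull-down resistor from the ADC pin to ground, read by a 10-bit ADC.

VCC = 5.0          # supply voltage (volts), typical for Arduino-class boards
R_FIXED = 10_000   # pull-down resistor (ohms), a common choice
ADC_MAX = 1023     # full-scale count of a 10-bit ADC

def adc_to_voltage(adc_counts):
    """Convert raw ADC counts to the divider's output voltage."""
    return VCC * adc_counts / ADC_MAX

def fsr_resistance(adc_counts):
    """Solve the divider equation Vout = VCC * R_FIXED / (R_FSR + R_FIXED)."""
    v = adc_to_voltage(adc_counts)
    if v <= 0:
        return float("inf")  # no pressure: the FSR is effectively open-circuit
    return R_FIXED * (VCC - v) / v

def pressure_detected(adc_counts, threshold_ohms=30_000):
    """Crude on/off signal a motion controller could act on."""
    return fsr_resistance(adc_counts) < threshold_ohms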
It seems that some folks with access to a lot of resources and a lot of time on their hands got creative. Of course, as in many projects, the creativity is in the algorithm. Unfortunately, there is not much of a clue about the relationship between input and output, and no method of interpretation is offered.
Of course, if the initial directive was to find a new way to turn a profit, then it is quite reasonable. After all, how in the world could any potential customer claim the translation is incorrect? So it is quite an accomplishment from a business point of view.