Jon, your idea is interesting, but I think it needs a lot more investigation before being implemented. As for programming the robots, even with high-level languages such as LabVIEW, the code you write is internally stored in a form more like the "older" styles of code you mention. In the end, these things have to be translated into something a computer can understand: a formal language that can be compiled, optimized, and turned into machine code. That code also has to be precise and provable. I see a lot of higher-level languages in use, but the lower-level ones are still needed to do things the developers of the high-level tools did not anticipate.
I agree, naperlou - while higher-level languages are convenient and typically much easier to use (loved CEC's Testpoint back in the day), nothing beats the control you can get at the lower levels. This is coming from someone who still uses assembly when programming PICs - love moving those bits around!
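For anyone who hasn't done it, here's roughly what that bit-twiddling looks like - a minimal sketch in C rather than PIC assembly, with a made-up register address and pin number rather than a real PIC memory map:

```c
#include <stdint.h>

/* Hypothetical output-port register address -- illustrative only,
   not an actual PIC register map. */
#define PORTB (*(volatile uint8_t *)0x06u)

void pulse_pin3(void)
{
    PORTB |=  (uint8_t)(1u << 3);   /* set bit 3: drive the pin high */
    PORTB &= ~(uint8_t)(1u << 3);   /* clear bit 3: drive it low again */
}
```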
I think the idea of extending vision-based gesture recognition to industrial robots makes a lot of sense. We've written about the use of the Kinect vision sensor, a major new input device for vision-based gesture recognition, applied to robotics, as well as about robotic gesture-recognition software based on a 3D bend-and-twist fiber-optic sensor used in the film industry for motion capture: http://www.designnews.com/author.asp?section_id=1386&doc_id=245683 Interestingly, those researchers said their next rev would be Kinect-based. A different approach we wrote about would help industrial robots predict humans' next moves in assembly, based on a decision-tree algorithm: http://www.designnews.com/author.asp?section_id=1386&doc_id=246646 A member of the Embedded Vision Alliance (whose representatives lectured at our recent Digi-Key CEC on embedded vision) has also written this article on vision-based gesture recognition focusing on Kinect, which also discusses the software side: http://www.digikey.com/us/en/techzone/microcontroller/resources/articles/vision-based-gesture-recognition.html
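To make the decision-tree idea concrete, here is a toy sketch in C of how such a predictor might be structured. The features, thresholds, and move categories are invented for illustration; they are not taken from the research described above.

```c
/* Toy decision tree predicting a worker's next assembly move.
   All features and thresholds here are made up for this sketch. */
typedef struct {
    double hand_height;   /* meters above the workbench           */
    double reach_speed;   /* m/s of the hand toward the parts bin */
    int    part_in_hand;  /* 1 if the worker already holds a part */
} WorkerState;

typedef enum { NEXT_PLACE_PART, NEXT_PICK_PART, NEXT_IDLE } Prediction;

Prediction predict_next_move(const WorkerState *s)
{
    if (s->part_in_hand)
        return NEXT_PLACE_PART;     /* holding a part -> placing comes next */
    if (s->reach_speed > 0.2 && s->hand_height > 0.10)
        return NEXT_PICK_PART;      /* reaching toward the bin */
    return NEXT_IDLE;
}
```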
Respectfully, you may have a misconception about LabVIEW that is (unfortunately) somewhat common. Many people assume that LabVIEW provides an interpreted programming approach because the code is largely represented graphically instead of through text.
The truth is that "G" (i.e., graphical code) in LabVIEW has been a compiled language for several decades now. At the end of compilation, LabVIEW code has been converted into machine code in the same manner as any other compiled language. LabVIEW now uses a powerful open-source compiler infrastructure known as LLVM, which is also used by industry leaders like Apple, Adobe, and Sun Microsystems.
In addition to being compiled to machine code, LabVIEW code can also be compiled to run on silicon in the form of FPGAs. The reliability and performance achieved through FPGAs have made them popular as part of the control systems in many robotics and mechatronics applications.
You can read more about the LabVIEW compiler here:
The range of uses reported for the Microsoft Kinect represents just the tip of the proverbial iceberg. I'm certain we will see it used in many more applications, including robotics.
Gesture controls for video games and entertainment systems are fine; if a gesture is interpreted incorrectly, nothing worse happens than needing to do a reset. Gesture controls on something as fast and powerful as an industrial robot could easily knock one's "block" off, quite literally. Equipment and machines that are capable of being unsafe are probably not a good match for control inputs that are subject to interpretation. That needs to be kept in mind when choosing an input system, but it may not come to mind when considering a departure from the more standard methods.
William K, I think gesture controls could be used in industrial applications by defining a simple movement/motion protocol. The protocol could have a format similar to sign language but with less complicated physical movements. For example, to stop an industrial process one would gesture a hand pressing an e-stop (emergency stop) pushbutton. Also, for critical industrial processes, redundant switches/sensors can be packaged on the machine as well.
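To make that concrete, here is a rough sketch in C of how a controller might treat such gestures, biased toward safety: a recognized stop gesture acts immediately (a false stop only costs downtime), while a resume gesture must persist across several frames before it is trusted. The gesture IDs and frame count are invented for illustration.

```c
#include <stdbool.h>

/* Hypothetical gesture IDs from the vision system -- illustrative only. */
enum Gesture { G_NONE, G_ESTOP, G_RESUME };

enum Command { CMD_NONE, CMD_STOP, CMD_RESUME };

#define RESUME_CONFIRM_FRAMES 10  /* resume must be held ~10 frames */

static int resume_frames = 0;

/* Called once per vision frame; returns the command to issue, if any. */
enum Command process_gesture(enum Gesture g)
{
    if (g == G_ESTOP) {
        resume_frames = 0;
        return CMD_STOP;          /* a spurious stop is the safe failure */
    }
    if (g == G_RESUME) {
        if (++resume_frames >= RESUME_CONFIRM_FRAMES)
            return CMD_RESUME;    /* sustained gesture: trust it */
    } else {
        resume_frames = 0;        /* gesture broken: start over */
    }
    return CMD_NONE;
}
```

Of course, the hard-wired e-stop and redundant interlocks stay in place; the gesture layer would only add another way to trigger them.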
Nancy Golden, using assembly language with gesture controls may improve response time, because the software is only one level above machine code. Bit processing is therefore closer to the target microcontroller than with a high-level language like C. Just thinking out loud, folks!
Hi, Naperlou, et al. LabVIEW creates compiled code. It does not create some sort of intermediate code that requires an interpreter. Instead, you get native code that runs on your target processor. And as far as I know, you can mix in C-language code, too, if you need to do something at that level. Also, LabVIEW will compile and run applications in FPGAs.
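On the mixing-in-C point: LabVIEW can call into a compiled shared library (DLL) through its Call Library Function Node. A minimal sketch of the C side might look like the following; the function itself is a made-up example of the kind of small, bit-level helper you might drop down to C for.

```c
/* Sketch of a C routine that LabVIEW could call via its Call Library
   Function Node once it's built into a DLL/shared library. The routine
   is invented for illustration. */
#include <stdint.h>

/* Count the set bits in a 32-bit status word. */
int32_t count_set_bits(uint32_t status)
{
    int32_t count = 0;
    while (status) {
        status &= status - 1;  /* clear the lowest set bit */
        count++;
    }
    return count;
}
```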