Looking back 10 to 20 years, human-machine interfaces in industry almost always meant a PC with a keyboard and mouse, or a machine panel with hard keys. The user interface was built around a single action: pushing a button. Fast forward a few years, and smartphones and tablets have become the norm, bringing interface technologies that are spilling into industrial HMIs.
Voice recognition, such as Siri on iOS devices, has not replaced the traditional keyboard from a productivity point of view -- yet. But screens integrating multi-touch technology and gestures are having a potentially game-changing impact on industrial HMIs.
“There has been a revolution in user interfaces, driven by mobile devices that forced HMI software to adapt to the new reality,” Fabio Terezinho, vice president of consulting services at InduSoft, told Design News. “Instead of a PC with keyboard and mouse, we have a smartphone and tablet. But in order to adapt to these devices, HMI software has evolved to provide new ways to interface with them. The most striking development has been multi-touch with gestures, and a point of view that has become more intuitive and more familiar.”
Multi-touch technology and gestures promise game-changing gains in usability and intuitiveness for industrial HMIs.
Since these HMI technologies surfaced in industrial automation, Terezinho said, one question he often hears is, “What’s the advantage of swiping instead of clicking a button to switch to another screen?” It’s not so much a question of advantage as the fact that the new generation entering the workforce has a different user-interface mindset. The new workforce will expect, and naturally try, to swipe instead of pushing a button.
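At the software level, the difference between a swipe and a button press comes down to how far the pointer travels between touch-down and release. The sketch below illustrates that distinction; the thresholds and the `TouchEvent` structure are illustrative assumptions, not taken from any particular HMI toolkit.

```python
from dataclasses import dataclass

# Illustrative thresholds; a real toolkit would tune these per device.
SWIPE_MIN_DISTANCE = 50   # minimum horizontal travel (pixels) to count as a swipe
TAP_MAX_DISTANCE = 10     # pointer may drift this much and still count as a tap

@dataclass
class TouchEvent:
    x: float
    y: float

def classify_gesture(down: TouchEvent, up: TouchEvent) -> str:
    """Classify a press/release pair as 'swipe-left', 'swipe-right', 'tap', or 'none'."""
    dx = up.x - down.x
    if abs(dx) >= SWIPE_MIN_DISTANCE:
        return "swipe-right" if dx > 0 else "swipe-left"
    if abs(dx) <= TAP_MAX_DISTANCE and abs(up.y - down.y) <= TAP_MAX_DISTANCE:
        return "tap"
    return "none"
```

An HMI might map a swipe-left to "advance to the next screen" while a tap activates whatever control sits under the pointer, letting both interaction styles coexist.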
Evolving Multi-Touch Technology
Multi-touch technology with gestures is being adopted in phases. According to Terezinho, people know about the technology, but it has not reached a high level of popularity yet. Many users might say, “I know there is multi-touch, but I’ve never used it, and I don’t know exactly what it is.”
“I don’t think the technology is mainstream yet, but we are in a phase where people know it exists, and we are getting to the point where people expect it to be there,” Terezinho said. “What I see now is that most people expect to have the same access to the same information, from different devices with different screen sizes and look and feel, and new ways to interact with applications. It is not necessarily the same user interface, the same screens and layout, due to physical constraints, but the display is at least the same set of information.”
In the past, HMIs also tended to be very single-document-interface oriented, not allowing users to open more than one screen at a time. They have since evolved into multiple-document interfaces, in which the HMI can open several screens simultaneously.
Now, a push with the new generation of HMIs is not to open several screens but instead to use multiple frames. In one frame, for example, the user can see all alarms while visualizing a synoptic in another frame. Another key feature is the ability to navigate to different frames that can be dynamically resized during run time.
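The multi-frame screen described above can be sketched as a simple data structure: one screen holding several named frames, each resizable at run time. Class and method names here are illustrative, not drawn from any specific HMI product.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    name: str      # e.g. "alarms" or "synoptic"
    width: int
    height: int

    def resize(self, width: int, height: int) -> None:
        """Dynamically resize this frame during run time."""
        self.width, self.height = width, height

@dataclass
class Screen:
    frames: dict = field(default_factory=dict)

    def add(self, frame: Frame) -> None:
        self.frames[frame.name] = frame

# One screen shows alarms in one frame and a synoptic in another:
screen = Screen()
screen.add(Frame("alarms", width=400, height=200))
screen.add(Frame("synoptic", width=400, height=600))

# The operator enlarges the synoptic without leaving the screen:
screen.frames["synoptic"].resize(800, 600)
```

The point of the structure is that both frames stay live at once; resizing one does not navigate away from or close the other.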
Another trend is the use of frame data sources that are not necessarily native HMI objects. HMIs in the past provided a set of proprietary objects with animations, but now customers expect much more.
Nowadays, objects are placed on top of maps, creating a need to interface with third-party services such as Bing or Google Maps. Users want to display live camera feeds, show fingerprints for biometric identification, or place graphical components on the screen to relay information.
“Not only are HMIs evolving in a multi-frame kind of framework, but also into a multi-data source environment not only for objects but also for components used on those frames,” Terezinho said. “Users can mix and match native objects from the HMI with external controls. And being multi-platform is another huge benefit and where we are betting our new technologies.”