How 3D Artificial Intelligence with AR & VR Impacts Enterprises Today
Before you deploy immersive mixed-reality technologies, consider the benefits of adding in artificial intelligence and cloud computing.
June 7, 2023
The race continues among the world's largest tech companies to see which will power the next generation of tools, technologies, and resources for manufacturing, healthcare, construction, and many other vertical markets. These advances start with software powered by artificial intelligence (AI) and immersive mixed-reality technologies such as augmented reality (AR) and virtual reality (VR).
Each of these technologies is distinct, but they now work together in advanced 3D applications and environments, all for the benefit of these companies and their customers.
Immersive Mixed-Reality Uses
With virtual reality, a user wears a headset that fully immerses them in a new world or environment, sometimes one that mimics the real world. The user receives both a visual and an audible experience meant to replicate a real-world setting, such as a manufacturing environment.
Augmented reality is similar in concept, but it displays digital content in the real world. Imagine an automotive manufacturing engineer holding up an iPad in front of a car under design to see a virtual overlay of the vehicle's design layout or various engine options.
Where Immersive Mixed Reality Falls Short for Enterprises
The challenge is that these technologies require heavy volumes of data, the ability to process that data at very high speeds, and the ability to scale projects with computing capacity that traditional office environments often cannot provide.
Immersive mixed reality requires a precise and persistent fusion of the real and virtual worlds. This means rendering complex models and scenes in photorealistic detail at the correct physical location (with respect to both the real and virtual worlds), with the correct scale and an accurate pose. Think of the precision required when leveraging AR/VR to design, build, or repair components of an aircraft engine or an advanced surgical device used in medical applications.
This is achieved today by using discrete GPUs on one or more servers and delivering the rendered frames wirelessly or remotely to head-mounted displays (HMDs) such as the Microsoft HoloLens and the Oculus Quest. Apple also recently introduced its Vision Pro headset, which offers augmented reality and introduces spatial computing for users.
The Need for 3D & AI in Immersive Mixed Reality
One of the key requirements for mixed-reality applications is to precisely overlay an object's 3D model, or digital twin, on the physical object. This helps in providing work instructions for assembly and training and in catching errors or defects in manufacturing. The system can also track the object(s) and adjust the rendering as the work progresses.
Most on-device object-tracking systems use 2D image- and/or marker-based tracking. This severely limits overlay accuracy in 3D because 2D tracking cannot estimate depth with high accuracy, and consequently cannot estimate scale and pose. Even though users can get what looks like a good match when viewing from one angle and position, the overlay loses alignment as the user moves around in six degrees of freedom (6DOF).

In addition, detecting an object, identifying it, and estimating its scale and orientation (a process called object registration) is achieved, in most cases, computationally or with simple computer-vision methods using standard training libraries (examples include Google MediaPipe and VisionLib). This works well for regular, smaller, or simpler objects such as hands, faces, cups, tables, chairs, and wheels. For the large, complex objects in enterprise use cases, however, labeled training data (especially in 3D) are not readily available. This makes it difficult, if not impossible, to use 2D image-based tracking to align, overlay, and persistently track an object and fuse the rendered model with it in 3D.
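The depth limitation above can be made concrete with a minimal pinhole-camera sketch (illustrative only, not any specific tracker's code): an object twice as large at twice the distance projects to the exact same pixels, so a 2D tracker alone cannot separate scale from depth.

```python
import numpy as np

def project(points_3d, f=1000.0):
    """Pinhole projection: 3D point (x, y, z) -> 2D pixel (f*x/z, f*y/z)."""
    pts = np.asarray(points_3d, dtype=float)
    return pts[:, :2] * f / pts[:, 2:3]

# Endpoints of a 1 m wide object at 5 m depth...
near = np.array([[-0.5, 0.0, 5.0], [0.5, 0.0, 5.0]])
# ...and of a 2 m wide object at 10 m depth (double scale, double distance).
far = 2.0 * near

# Both project to identical image coordinates, so a 2D tracker cannot
# tell them apart: scale and depth are confounded without 3D information.
assert np.allclose(project(near), project(far))
```

This ambiguity is exactly why an overlay tuned to look right from one viewpoint drifts as the user moves in 6DOF.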
Enterprise-level users are overcoming these challenges by incorporating 3D environments and AI technology into their immersive mixed-reality design and build projects.
Deep learning-based 3D AI allows users to identify 3D objects of arbitrary shape and size, in various orientations, with high accuracy in 3D space. This approach scales to any arbitrary shape and is well suited to enterprise use cases that require overlaying complex 3D models and digital twins on their real-world counterparts.
This can also scale to registering partially completed structures against the complete 3D model, allowing tracking during ongoing construction and assembly. Users achieve an object-registration and rendering accuracy of 1 mm to 10 mm with this platform approach; rendering accuracy is limited primarily by the device's capability. This approach to 3D object tracking allows users to truly fuse the real and virtual worlds in enterprise applications, opening many uses including, but not limited to, training with work instructions, defect and error detection in construction and assembly, and 3D design and engineering with life-size 3D rendering and overlay.
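At the core of any such registration step is recovering the rigid transform (rotation plus translation) that best aligns a model's points with the observed object. The sketch below uses the classic Kabsch/SVD solution on a synthetic point cloud; `register_rigid` and the test data are hypothetical illustrations, not GridRaster's actual pipeline, which the article describes as deep-learning based.

```python
import numpy as np

def register_rigid(model, observed):
    """Kabsch algorithm: least-squares rotation R and translation t
    such that R @ model_point + t best matches the observed points."""
    model = np.asarray(model, dtype=float)
    observed = np.asarray(observed, dtype=float)
    mc, oc = model.mean(axis=0), observed.mean(axis=0)
    H = (model - mc).T @ (observed - oc)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = oc - R @ mc
    return R, t

# Ground-truth pose: 30-degree rotation about z plus a small translation.
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.1, -0.2, 0.3])

model = np.random.default_rng(0).normal(size=(50, 3))  # synthetic point cloud
observed = model @ R_true.T + t_true                   # apply the pose

R_est, t_est = register_rigid(model, observed)
assert np.allclose(R_est, R_true) and np.allclose(t_est, t_true)
```

Real systems must also handle noise, partial views, and unknown correspondences, which is where the learned 3D detection and registration described above comes in; this closed-form step only shows what "registration" computes once correspondences are known.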
Working in Cloud Environments Is Critical
Manufacturers should be cautious in how they design and deploy these technologies, because the platforms they are built on and optimized for differ greatly.
Even though technologies like AR/VR have been in use for several years, many manufacturers have deployed virtual solutions built on an on-premises environment, where all the technology data is stored locally.
On-premises AR/VR infrastructures limit the speed and scalability needed for today's virtual designs, and they limit knowledge sharing between organizations, which can be critical when designing new products and determining the best approach for virtual buildouts.
Manufacturers today are overcoming these limitations by leveraging cloud-based (or remote-server-based) AR/VR platforms powered by distributed cloud architecture and 3D vision-based AI. These cloud platforms provide the desired performance and scalability to drive innovation in the industry at speed and scale.
About the Author(s)
Co-founder and COO, GridRaster Inc.
Dijam Panigrahi is Co-founder and COO of GridRaster Inc., a leading provider of cloud-based AR/VR platforms that power compelling high-quality AR/VR experiences on mobile devices for enterprises. For more information, please visit www.gridraster.com.