Here’s yet another sign that 3-D technology is breaking out of science labs and engineering departments and finding purpose as a tool to solve a wide range of problems, including those in the medical field.
A team at Stanford University has created what they say is the first-ever touch-enabled virtual “body double” of patients undergoing endoscopic sinus surgery, built from those patients’ own preoperative CT scans. The Stanford Rhinological Virtual Surgical Environment leverages touch-enabled modeling solutions and haptic devices from SensAble Technologies to let surgeons rehearse specific surgical approaches in a virtual environment before the actual operation, reducing the element of surprise. The haptic capabilities deliver an immersive 3-D experience, allowing physicians to work by feel and giving them a near-identical replica of the complex shapes and structures they will encounter in the patient during the procedure.
The Stanford team developed the virtual 3-D environment by taking multiple CT scans of patients’ sinus cavities and using standard off-the-shelf hardware and software to create a composite of their anatomy. The haptic device closely resembles an endoscopic surgical instrument, providing a realistic sense of touch. Using those off-the-shelf components, the Stanford team was able to build a 3-D immersive virtual training system for around a tenth of the cost of existing endoscopic surgery simulators, and the ability to personalize the training with a patient’s own anatomy is a trend the team hopes will spur more personalized surgical planning and training.
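The article does not describe the team’s software in detail, but the basic idea of turning serial CT scans into a 3-D anatomical composite can be sketched in a few lines. The sketch below is purely illustrative (the function names, the synthetic data, and the bone-segmentation threshold are all assumptions, not the Stanford pipeline): it stacks 2-D slice arrays into a volume and thresholds on intensity to separate dense tissue from air.

```python
import numpy as np

# Illustrative sketch only, not the Stanford system: stack serial 2-D CT
# slices into a 3-D volume and threshold on intensity, roughly how a
# composite of a patient's anatomy can be assembled from scan data.

def build_volume(slices):
    """Stack equally shaped 2-D slice arrays into one 3-D volume."""
    return np.stack(slices, axis=0)

def segment_dense_tissue(volume, threshold=300):
    """Return a boolean mask of voxels above an (assumed) HU-like threshold."""
    return volume > threshold

# Synthetic stand-in for scan data: 4 slices of 8x8 intensities.
rng = np.random.default_rng(0)
slices = [rng.integers(-1000, 1500, size=(8, 8)) for _ in range(4)]

vol = build_volume(slices)
mask = segment_dense_tissue(vol)
print(vol.shape)   # (4, 8, 8) -- one axis per slice index plus the 2-D plane
print(mask.dtype)  # bool
```

A real pipeline would also account for slice spacing and in-plane resolution so the volume has correct physical proportions; the sketch ignores that for brevity.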
“The ability to use patient-specific data from CT scans is a key point in creating simulations that enable a meaningful rehearsal of procedures,” notes Kenneth Salisbury, a haptics expert and a professor in Stanford’s departments of computer science and surgery, who is spearheading the development of this virtual environment. “We’re beginning to make it feel much more like the actual patient during interaction.”
Advances in haptics, improved algorithms and more affordable computational horsepower are enabling better visualization and driving these new types of surgical training systems, which Salisbury likens to the flight simulators pilots use for training. Most surgical training systems today rely on serial 2-D CT scan images, which make it difficult for surgeons to understand a patient’s unique anatomical variations and provide no true sense of feel.
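The “sense of feel” these systems deliver typically comes from a haptic rendering loop that pushes back when the virtual instrument penetrates tissue. A common textbook approach, shown below as a minimal sketch (this is a generic penalty-force method, not the Stanford system’s algorithm, and the stiffness value is an arbitrary assumption), computes a spring force proportional to how far the tool tip has sunk past a surface:

```python
import numpy as np

# Generic penalty-force haptic sketch (illustrative; not the Stanford
# system's method): when the tool tip penetrates a virtual surface,
# push back with a spring force along the surface normal.

STIFFNESS = 500.0  # N/m -- assumed value, tuned per device in practice

def haptic_force(tip_pos, surface_point, surface_normal, k=STIFFNESS):
    """Feedback force for a point tool against a planar surface patch."""
    n = np.asarray(surface_normal, dtype=float)
    n = n / np.linalg.norm(n)  # unit normal pointing out of the tissue
    # Penetration depth: how far the tip sits below the surface plane.
    depth = np.dot(np.asarray(surface_point, dtype=float)
                   - np.asarray(tip_pos, dtype=float), n)
    if depth <= 0:
        return np.zeros(3)      # tool is outside the surface: no force
    return k * depth * n        # spring force resisting penetration

# Tool tip 2 mm below a horizontal surface at z = 0:
f = haptic_force([0.0, 0.0, -0.002], [0.0, 0.0, 0.0], [0.0, 0.0, 1.0])
print(f)  # [0. 0. 1.] -- 500 N/m * 0.002 m = 1 N pushing the tool back up
```

In a real device this loop runs at roughly 1 kHz so the restoring force feels like a continuous surface rather than a series of jolts.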
Salisbury’s team is currently working with doctors to get feedback on the system and hopes to get it into clinical trials soon. The system also has applicability for other types of surgical training beyond endoscopic procedures, Salisbury says.