Gesture recognition could be used by surgeons to control a robotic nurse
During surgery, surgeons routinely need to review medical images and records. However, stepping away from the operating table to touch a keyboard and mouse can delay the surgery and increase the risk of spreading infection-causing bacteria. A new approach addresses that problem: a system that uses a camera and specialized algorithms to recognize hand gestures as commands to a computer or robot.
“Both the hand-gesture recognition and robotic nurse innovations might help to reduce the length of surgeries and the potential for infection,” said Juan Pablo Wachs, an assistant professor of industrial engineering at Purdue University.
Research into hand-gesture recognition began several years ago in work led by the Washington Hospital Center and Ben-Gurion University, where Wachs was a research fellow and doctoral student, respectively. He is now working to extend the system’s capabilities in research with Purdue’s School of Veterinary Medicine and the Department of Speech, Language, and Hearing Sciences.
“One challenge will be to develop the proper shapes of hand poses and the proper hand trajectory movements to reflect and express certain medical functions,” Wachs said. “You want to use intuitive and natural gestures for the surgeon, to express medical image navigation activities, but you also need to consider cultural and physical differences between surgeons. They may have different preferences regarding what gestures they may want to use.”
The hand-gesture recognition system uses a new type of camera developed by Microsoft, called Kinect, which senses three-dimensional space. The camera is used in new consumer gaming systems that can track a person’s hands without the use of a wand. Accuracy and gesture-recognition speed depend on advanced software algorithms.
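To give a concrete sense of what a depth-sensing camera buys you, here is a minimal sketch of segmenting a hand from the background using only per-pixel distance. The depth band, frame size, and values are illustrative assumptions, not parameters from Wachs’s system.

```python
import numpy as np

def segment_foreground(depth_frame, near_mm=500, far_mm=1200):
    """Return a boolean mask of pixels within a working depth band.

    depth_frame: 2-D array of per-pixel distances in millimetres, as a
    structured-light depth camera such as the Kinect produces. The band
    [near_mm, far_mm] is a tunable assumption chosen for illustration.
    """
    return (depth_frame >= near_mm) & (depth_frame <= far_mm)

# Toy 4x4 depth frame: a hand at ~800 mm, background wall at ~3000 mm.
frame = np.full((4, 4), 3000)
frame[1:3, 1:3] = 800
mask = segment_foreground(frame)
print(mask.sum())  # 4 foreground pixels
```

Depth makes this separation trivial compared with color-based segmentation, which is part of why a 3-D camera suits gesture input in a visually cluttered operating room.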
“While it will be very difficult for a robot to achieve the same level of performance as an experienced nurse who has been working with the same surgeon for years, scrub nurses often have very limited experience with a particular surgeon, which increases the chances of misunderstandings, delays and sometimes mistakes in the operating room,” Wachs said. “In that case, a robotic scrub nurse could be better.”
The Purdue researcher has developed a prototype robotic scrub nurse, in work with faculty in the university’s School of Veterinary Medicine. Researchers at other institutions developing robotic scrub nurses have focused on voice recognition. However, according to Wachs, little work has been done on gesture recognition.
“Another big difference between our focus and the others is that we are also working on prediction, to anticipate what images the surgeon will need to see next and what instruments will be needed,” he said.
Wachs is developing advanced algorithms that isolate the hands and apply anthropometry – predicting the position of the hands based on knowledge of where the surgeon’s head is. The tracking is achieved through a camera mounted over the screen used for visualization of images.
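The anthropometric idea above can be sketched as follows: hands are sought only within arm’s reach of (and below) the detected head. The reach ratio, focal-length constant, and body-height figure are assumed values for illustration, not parameters from Wachs’s algorithms.

```python
def hand_search_region(head_xy, head_depth_mm, frame_shape,
                       arm_reach_ratio=0.6):
    """Estimate a bounding box (left, top, right, bottom) in which to
    search for the hands, given the detected head position.

    Illustrative sketch: adult arm reach is roughly 0.6 of a ~1700 mm
    standing height; that reach is converted to pixels at the head's
    depth using an assumed camera focal length.
    """
    focal_px = 580.0                       # assumed depth-camera focal length
    reach_px = int(arm_reach_ratio * 1700 * focal_px / head_depth_mm)
    x, y = head_xy
    h, w = frame_shape
    left = max(0, x - reach_px)
    right = min(w, x + reach_px)
    top = y                                # hands rarely appear above the head
    bottom = min(h, y + reach_px)
    return left, top, right, bottom

box = hand_search_region(head_xy=(320, 80), head_depth_mm=1500,
                         frame_shape=(480, 640))
print(box)  # (0, 80, 640, 474)
```

Restricting the search window this way is what lets the tracker isolate the hands quickly in a busy scene, instead of scanning the whole frame.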
“Another contribution is that by tracking a surgical instrument inside the patient’s body, we can predict the most likely area that the surgeon may want to inspect in the electronic medical image record, thereby saving browsing time between images,” Wachs said. “This is done using a different sensor mounted over the surgical lights.”
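One simple way to realize the prediction described above is a nearest-slice lookup: given the tracked depth of the instrument tip, pre-select the stored image slice closest to it. This is a hedged sketch of the idea only; the slice table and function are hypothetical, not part of the published system.

```python
def nearest_image_slice(tip_depth_mm, slice_depths_mm):
    """Return the index of the stored image slice closest to the
    tracked depth of the surgical instrument tip, so that slice can be
    pre-loaded before the surgeon asks for it. Illustrative only.
    """
    return min(range(len(slice_depths_mm)),
               key=lambda i: abs(slice_depths_mm[i] - tip_depth_mm))

slices = [20, 40, 60, 80, 100]          # hypothetical slice depths in mm
print(nearest_image_slice(67, slices))  # index 2, the 60 mm slice
```

Even this crude mapping shows how instrument tracking can cut browsing time: the system jumps straight to the relevant region of the image record instead of making the surgeon page through it.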
Other challenges include providing computers with the ability to understand the context in which gestures are made and to discriminate between intended and unintended gestures. The system could also be adapted for use in various other applications, such as gesture-controlled sentinels or robots that aid in disaster rescue.
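The intended-versus-unintended problem can be illustrated with a very small intent filter: a hand movement counts as a command only if it lasts long enough and moves smoothly rather than erratically. The thresholds and the dwell-plus-jitter heuristic are assumptions made for this sketch, not the method used in the research.

```python
def is_intentional(trajectory, min_frames=10, max_jitter=5.0):
    """Decide whether a hand trajectory looks like a deliberate gesture.

    trajectory: list of (x, y) hand positions, one per video frame.
    A gesture passes only if it spans at least min_frames frames and
    its mean frame-to-frame displacement stays below max_jitter pixels.
    """
    if len(trajectory) < min_frames:
        return False            # too brief: likely an incidental movement
    steps = [((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
             for (x1, y1), (x2, y2) in zip(trajectory, trajectory[1:])]
    return sum(steps) / len(steps) <= max_jitter

deliberate = [(100 + i, 200) for i in range(12)]   # slow, steady sweep
accidental = [(100, 200), (150, 260), (90, 180)]   # brief, erratic jump
print(is_intentional(deliberate), is_intentional(accidental))  # True False
```

A production system would combine many more cues (pose shape, surgical context, gaze), but even this two-rule filter shows why context matters: the same motion can be a command in one moment and noise in the next.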
For more information, read the article “Vision-based hand-gesture applications,” published in Communications of the ACM.