MIT Glove Mouse adds second pointer to your digital experience
One of the limitations of modern computers is the lack of robust positional inputs: today's personal computer offers only a single mouse for positional input. The Glove Mouse (previously the Video Acquisition Multi-touch Controller) addresses this limitation by letting the user wear specially designed gloves and control two mouse cursors independently, one with each hand.
The Glove Mouse is a former 6.111 [digital electronics lab] project by Tony Hyun Kim (EECS ’09) and Nevada Sanchez (EECS ’10), in which they demonstrated intuitive control of a map application using only one’s hands.
To provide smooth and reliable response, positional information is acquired by a video camera mounted above the user's workspace. To ensure accurate input data, distinctly colored LEDs are mounted on the gloves directly above the user's index fingers. Multiple buttons placed on the user's fingertips extend the user's control further, enabling many different actions through combinations of button presses. The implementation breaks down into three fundamental stages: video acquisition, video processing, and the application.
The camera interfaces directly with the video capture subsystem. This stage handles the necessary format conversions and timing issues before the data can be used by subsequent modules: it converts interlaced video in the YCrCb color space into progressive-scan video in the RGB color space. All video is buffered in ZBT RAM to bridge the clock mismatch between the camera and the rest of the digital system.
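The YCrCb-to-RGB step is a standard linear transform. As an illustration (in software, rather than the project's actual hardware pipeline), here is the common ITU-R BT.601 conversion for a single studio-range pixel; the exact coefficients used in the project may differ:

```python
def ycbcr_to_rgb(y, cb, cr):
    """Convert one YCbCr pixel (ITU-R BT.601, studio range) to 8-bit RGB.

    Illustrative sketch of the color-space math only; the Glove Mouse
    performs this conversion in FPGA logic.
    """
    c = y - 16      # luma, offset to zero
    d = cb - 128    # blue-difference chroma, centered
    e = cr - 128    # red-difference chroma, centered

    def clamp(v):
        return max(0, min(255, int(round(v))))

    r = clamp(1.164 * c + 1.596 * e)
    g = clamp(1.164 * c - 0.392 * d - 0.813 * e)
    b = clamp(1.164 * c + 2.017 * d)
    return r, g, b
```

For example, the studio-range white point (Y=235, Cb=Cr=128) maps to pure RGB white, and (Y=16, Cb=Cr=128) maps to black.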
The tracking module scans through the data in memory and filters for the green and blue LEDs. It keeps a running average of the coordinates of the pixels that pass the filter, and computes the center of mass at the end of each complete frame. It then transforms these coordinates to establish correspondence between the hand motions and the system's response on the display. Finally, the left- and right-hand coordinates are sent to the application module.
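The center-of-mass computation can be sketched as follows. This is a software analogy, not the project's streaming hardware: `frame` is a 2D grid of RGB tuples, and `is_led_color` stands in for the project's green/blue color filter (both names are hypothetical):

```python
def track_led(frame, is_led_color):
    """Centroid of the pixels passing a color filter.

    Accumulates the coordinates of matching pixels, then divides by the
    count at the end of the frame -- the same accumulate-then-divide
    scheme described for the tracking module.
    """
    sum_x = sum_y = count = 0
    for y, row in enumerate(frame):
        for x, pixel in enumerate(row):
            if is_led_color(pixel):
                sum_x += x
                sum_y += y
                count += 1
    if count == 0:
        return None  # LED not visible in this frame
    return (sum_x // count, sum_y // count)  # integer center of mass
```

A separate pass with a filter for each LED color yields one coordinate pair per hand.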
The final application module demonstrates what a two-cursor input scheme can achieve. It uses both the hand coordinates from the video tracking module and the button inputs from the gloves. The specific application implemented in this project is a map viewer, which stores a high-resolution map of MIT in memory. The user navigates the map using the glove controllers: for instance, clicking a single button pans the viewing window, while pressing buttons on opposite fingers and separating the hands zooms into a particular portion of the map. They have also incorporated distinct icons on the map that the user can interact with.
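The separate-to-zoom gesture amounts to comparing the distance between the two cursors before and after the motion. A minimal sketch of that gesture math (hypothetical function and parameter names, not the project's implementation):

```python
import math

def zoom_factor(left_prev, right_prev, left_now, right_now):
    """Map the change in hand separation to a zoom factor.

    Each argument is an (x, y) cursor position. A factor > 1 means the
    hands moved apart (zoom in); < 1 means they moved together (zoom out).
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    d_before = dist(left_prev, right_prev)
    d_after = dist(left_now, right_now)
    if d_before == 0:
        return 1.0  # degenerate case: no defined separation
    return d_after / d_before
```

Doubling the separation between the cursors would thus double the magnification of the viewing window.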
The hardware for the project is relatively cheap, costing less than $100, though it still needs to be miniaturized and refined. You can read more about the project in their Final Lab Report (PDF).