SixthSense – an aid in everyday life
Remember the futuristic sci-fi computer interfaces where you operate devices with hand gestures? The Fluid Interfaces Group at MIT’s Media Lab has unveiled SixthSense, a wearable, gesture-driven computing platform that can continually augment the physical world with digital information, although it is still a prototype. “We’re trying to make it possible to have access to relevant information in a more seamless way,” says Dr. Pattie Maes, who heads the Fluid Interfaces Group at MIT.
“We have a vision of a computing system that understands, at least to some extent, where the user is, what the user is doing, and who the user is interacting with,” says Dr. Maes. “SixthSense can then proactively make information available to that user based on the situation.”
The prototype has changed since it was first introduced to the public last year. Originally, it consisted of a web camera strapped to a bicycle helmet. The current prototype promises to be a bit more “consumer friendly”: a small camera-and-projector combination (about the size of a cigarette pack) worn around the user’s neck, plus a smartphone that runs the SixthSense software and handles the connection to the internet. The camera acts as a digital eye, tracking the movements of the thumbs and index fingers of both hands.
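The article does not describe the tracking algorithm, but the general idea of following colored fingertip markers can be sketched simply: scan each camera frame for pixels matching a marker’s color and treat their centroid as the fingertip position. This is a hypothetical illustration; the function names, color tolerance, and tiny synthetic frame below are all assumptions, not details of the real system.

```python
# Hypothetical sketch of fingertip tracking via colored markers.
# A frame is a list of rows, each row a list of (r, g, b) pixels.

def close_to(pixel, target, tol=30):
    """True if an (r, g, b) pixel is within tol of the target color on every channel."""
    return all(abs(p - t) <= tol for p, t in zip(pixel, target))

def track_marker(frame, marker_color):
    """Return the (x, y) centroid of pixels matching marker_color, or None if absent."""
    xs, ys = [], []
    for y, row in enumerate(frame):
        for x, pixel in enumerate(row):
            if close_to(pixel, marker_color):
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# A tiny 3x3 synthetic "frame": black background with a red marker blob.
RED = (255, 0, 0)
BLACK = (0, 0, 0)
frame = [
    [BLACK, RED,   BLACK],
    [BLACK, RED,   BLACK],
    [BLACK, BLACK, BLACK],
]
print(track_marker(frame, RED))  # centroid of the two red pixels
```

In a real implementation this per-pixel scan would be replaced by vectorized image operations, but the principle of locating a distinctively colored blob per finger is the same.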
The idea is that the device tries to determine what someone is interacting with, as well as the purpose of that action. The software then searches the internet for information potentially relevant to the situation, and the projector takes over.
“You can turn any surface around you into an interactive surface,” says Pranav Mistry, an MIT graduate student working on the SixthSense project.
“Let’s say I’m in a bookstore, and I’m holding a book. SixthSense will recognize that, and will go up to Amazon. Then, it will display online reviews of that book, and prices, right on the cover of the book I’m holding.”
Since the system is customizable, you won’t be limited to predefined sources of information. The system constantly tries to figure out what is around you and what you are trying to do: it has to recognize the images you see, track your gestures, and relate them to relevant information, all at the same time.
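The customizable-sources idea described above can be sketched as a small plug-in registry: each information source is just a function from a recognized object to displayable information, and users register their own. Everything here, including the function and source names, is a hypothetical illustration rather than the actual SixthSense API.

```python
# Hypothetical registry of user-configurable information sources.
sources = {}

def register_source(kind, fetch):
    """Register a fetch function for one kind of recognized object (e.g. 'book')."""
    sources[kind] = fetch

def lookup(kind, identifier):
    """Fetch displayable info for a recognized object, or None if no source is registered."""
    fetch = sources.get(kind)
    return fetch(identifier) if fetch else None

# The user plugs in a book-review source of their own choosing,
# like the Amazon lookup in the bookstore example.
register_source("book", lambda isbn: f"reviews and prices for ISBN {isbn}")
print(lookup("book", "0262134721"))
```

The design choice is that recognition and presentation stay in the core system, while the mapping from recognized object to information is swappable, which is what would let third parties add their own applications.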
It is not surprising, then, that in this initial research phase the SixthSense team has developed only a few applications. The developers’ long-term plan is to open up the SixthSense platform and let others develop applications for it.
Pranav Mistry sees some commercial applications for the system in the near future. For example, he wants to develop a sign language application that would “speak out” a translation while someone was signing.
The main purpose of SixthSense is to provide additional information about our surroundings and to help with everyday tasks: reading a street map annotated with your current location, seeing extra information and media while reading a newspaper or a book, checking the time by drawing a circle on your wrist, making a call without taking out your phone, or taking a digital photograph by putting your thumbs and forefingers together to form a picture frame. The uses are virtually limitless.
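Once a gesture has been recognized, mapping it to one of the behaviors listed above is a simple dispatch problem. The sketch below is illustrative only; the gesture names and bound actions are assumptions based on the article’s examples, not the real SixthSense code.

```python
# Hypothetical gesture-to-action dispatch table.
# Each recognized gesture name maps to a callable that performs the action.
GESTURE_ACTIONS = {
    "circle_on_wrist": lambda: "project a watch face on the wrist",
    "picture_frame": lambda: "capture a photograph",
    "dial_on_palm": lambda: "project a keypad and place a call",
}

def dispatch(gesture):
    """Run the action bound to a recognized gesture, if any."""
    action = GESTURE_ACTIONS.get(gesture)
    return action() if action else "gesture not recognized"

print(dispatch("picture_frame"))
```

A table like this is also a natural extension point: opening the platform to third-party applications would amount to letting developers register new gesture/action pairs.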
However, no one involved in the SixthSense project believes the platform will replace laptops and smartphones. Although the team has demonstrated checking your e-mail on a nearby (public) surface, that is of limited practical use for privacy reasons. Another drawback is that the system relies on four highly visible colored markers worn on the user’s fingers.
That problem could be overcome by using accelerometers or some kind of rings worn on the fingers. With advances in technology, a better design, and perhaps a few additional hardware features, SixthSense should find practical use in the future.