TapSense uses sound to enhance information interaction on touch surfaces
A group of researchers, led by a Carnegie Mellon University (CMU) researcher, has developed a technology that distinguishes the sounds made when you tap a touchscreen with the tip of a finger, the pad of a finger, a fingernail, or a knuckle. This technology, called TapSense, enables richer touchscreen interactions by exploiting the anatomy and dexterity of our fingers.
“TapSense basically doubles the input bandwidth for a touchscreen,” said Chris Harrison, a Ph.D. student in Carnegie Mellon’s Human-Computer Interaction Institute (HCII). “This is particularly important for smaller touchscreens, where screen real estate is limited. If we can remove mode buttons from the screen, we can make room for more content or can make the remaining buttons larger.”
If you think you have already heard about something similar, or that this researcher’s name sounds familiar, it is because we wrote about his other project, OmniTouch, less than a week ago. TapSense was developed by Harrison, fellow Ph.D. student Julia Schwarz, and Scott Hudson, a professor in the HCII; the work was presented at the UIST symposium in Santa Barbara, California.
An inexpensive microphone could readily be attached to a touchscreen, allowing the system to tell different parts of the finger apart by classifying the sounds they make. The microphones already built into devices for phone conversations would not work well for this application, because they are designed to detect frequencies in the vocal range.
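The paper describes classifying taps by their acoustic signatures. As a purely illustrative sketch (not the authors' actual pipeline), one could separate a soft, fleshy tap from a hard fingernail click by looking at where the energy sits in the frequency spectrum. Everything below — the sample rate, the decision threshold, the synthetic decaying-sinusoid "taps" — is an assumption for demonstration only:

```python
import numpy as np

SR = 44100  # sample rate in Hz; an assumed value, not from the paper

def dominant_frequency(signal, sr=SR):
    """Return the frequency (Hz) of the strongest spectral peak."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return float(freqs[np.argmax(spectrum)])

def classify_tap(signal, threshold_hz=1000.0):
    """Toy rule: low-frequency thuds read as a finger pad,
    high-frequency clicks as a fingernail. Threshold is invented."""
    return "pad" if dominant_frequency(signal) < threshold_hz else "fingernail"

def synthetic_tap(freq_hz, sr=SR, dur=0.05):
    """Decaying sinusoid standing in for a recorded tap impulse."""
    t = np.arange(int(sr * dur)) / sr
    return np.sin(2 * np.pi * freq_hz * t) * np.exp(-80.0 * t)

print(classify_tap(synthetic_tap(300)))   # low-frequency thud -> pad
print(classify_tap(synthetic_tap(3000)))  # high-frequency click -> fingernail
```

A real system would use richer spectral features and a trained classifier rather than a single hand-picked threshold, but the core idea — different finger parts produce acoustically distinguishable impacts — is the same.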
TapSense technology could be used while typing on a virtual keyboard: users might capitalize letters simply by tapping with a fingernail instead of a fingertip, or switch to numerals by using the pad of a finger, rather than toggling to a different set of keys. Another possible use would be a painting app that uses a variety of tapping modes and finger motions to control a palette of colors, or to switch between drawing and erasing without pressing buttons.
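The keyboard behavior described above amounts to dispatching on the classified tap type. A minimal sketch of that idea, with hypothetical tap-type labels and key mappings that are not from the paper:

```python
# Illustrative dispatch from a classified tap type to keyboard output.
# The labels ("tip", "fingernail", "pad") and numeral mapping are assumptions.
NUMERAL_MAP = {"q": "1", "w": "2", "e": "3"}  # invented pad-tap shortcuts

def handle_key(letter, tap_type):
    if tap_type == "fingernail":
        return letter.upper()                    # fingernail tap capitalizes
    if tap_type == "pad":
        return NUMERAL_MAP.get(letter, letter)   # pad tap yields a numeral
    return letter                                # plain fingertip types normally

print(handle_key("q", "tip"))         # q
print(handle_key("q", "fingernail"))  # Q
print(handle_key("q", "pad"))         # 1
```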
The technology can also discriminate between the sounds of tools made from different materials. Such passive markers, which don’t require batteries to operate, can be made from materials such as wood, acrylic, and polystyrene foam. This would enable people using styluses made from different materials to collaborate and sketch simultaneously on the same surface, with each person’s contributions rendered distinctly (in a different color, or with some other mark representing an individual tool).
In tests of the TapSense technology, the researchers found that their proof-of-concept system could distinguish between the four types of finger inputs with 95 percent accuracy, and between a pen and a finger with 99 percent accuracy. Since the technology relies on captured sound, it is likely to be less accurate in noisy surroundings.
For more information, you can read the paper “TapSense: Enhancing Finger Interaction on Touch Surfaces” (PDF).