Oblong g-speak – the near future of information interaction?
If you’re a regular tech news reader, you must have stumbled upon articles that compare some information-manipulation (or human-machine interface) technology to the one seen in the sci-fi movie Minority Report. In this article we describe the Oblong g-speak Spatial Operating Environment (SOE), the technology that most closely resembles the one featured in the popular movie.
That shouldn’t be a surprise, since John Underkoffler, the co-founder of Oblong, actually designed the gesture-based computer interface for Spielberg’s flick. The g-speak platform braids together development arcs begun in the early 1990s at MIT’s Media Laboratory, where Oblong’s principals produced radical user-interface advances, distributed and networked language designs, and media-manipulation technologies. After many changes and additions, the technology was unveiled in November 2008 as a product of Oblong Industries.
The platform combines gestural I/O, recombinant networking, and real-world pixels. g-speak is a complete application development and execution environment that offers more natural interaction than traditional GUIs. Its idiom of spatial immediacy and information responsive to real-world geometry enables a necessary new kind of work: data-intensive, embodied, real-time, and predicated on universal human expertise.
The gesture technology tracks hand movements and uses special displays or projectors to project images onto surfaces. Wall-sized projection screens coexist with desktop monitors, table-top screens and hand-held devices. All displays can be used simultaneously, and data moves selectively to whichever displays are most appropriate. Three-dimensional displays can be used as well, without modification to application code.
Data manipulation is accomplished with special gloves worn to interact with the interface in six degrees of freedom, just as Tom Cruise did in the Minority Report film. Applications are controlled by hand poses, movement and pointing. Finger and hand motion are tracked to 0.1 mm at 100 Hz, and pointing is pixel-accurate. Oblong is also developing another device for simpler control, in the shape of a remote controller (or a “wand”, as they call it).
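To make the pose-driven control model concrete, here is a minimal sketch of how an application might route tracked gesture events (pose, 3D position, pointed-at pixel) to handlers. This is purely illustrative: g-speak’s actual API is not public in this article, so all names here (GestureEvent, GestureDispatcher) are invented for the example.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

# Hypothetical event record; in the real system finger and hand motion are
# tracked to ~0.1 mm at 100 Hz and pointing resolves to an exact pixel.
@dataclass
class GestureEvent:
    pose: str                             # e.g. "point", "grab", "swipe"
    position: Tuple[float, float, float]  # hand position in space (metres)
    screen_px: Tuple[int, int]            # pixel the user is pointing at

class GestureDispatcher:
    """Routes incoming gesture events to handlers registered per hand pose."""

    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[GestureEvent], None]] = {}

    def on(self, pose: str, handler: Callable[[GestureEvent], None]) -> None:
        self._handlers[pose] = handler

    def dispatch(self, event: GestureEvent) -> None:
        handler = self._handlers.get(event.pose)
        if handler:
            handler(event)

# Example: a "point" pose selects the pixel under the pointer.
selected = []
dispatcher = GestureDispatcher()
dispatcher.on("point", lambda e: selected.append(e.screen_px))
dispatcher.dispatch(GestureEvent("point", (0.10, 0.25, 1.50), (640, 360)))
```

The point of the dispatcher pattern is that hand poses play the role mouse buttons and keys play in a traditional GUI: the application binds behavior to poses rather than to device events.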
g-speak SOE applications process large data sets and support multi-person workflows. The g-speak networking framework provides a collection of core library components that allow applications to scale transparently and dynamically across clusters of machines. It enables effective use of CPU power in a LAN environment, offers built-in support for collaborative work across the network, and makes it possible to add functionality to applications at run-time by adding new code, new machines, new screens and new people to a work context.
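The idea of growing a work context at run-time can be sketched with a tiny in-process publish/subscribe “pool”: producers deposit messages, and consumers (standing in for machines or screens) can join at any time. This is an analogy only; the real g-speak networking layer does this across a LAN, and the Pool class and its methods here are invented for the sketch.

```python
from collections import defaultdict
from typing import Callable, DefaultDict, List

class Pool:
    """A minimal in-process message pool, illustrating how new consumers
    can join a work context at run-time without changing the producer."""

    def __init__(self) -> None:
        self._subscribers: DefaultDict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(callback)

    def deposit(self, topic: str, message: dict) -> None:
        # Fan each message out to every subscriber currently on the topic.
        for callback in self._subscribers[topic]:
            callback(message)

pool = Pool()

wall_display: list = []
pool.subscribe("render", wall_display.append)    # first "screen" joins
pool.deposit("render", {"frame": 1})

laptop_display: list = []
pool.subscribe("render", laptop_display.append)  # second screen joins later, at run-time
pool.deposit("render", {"frame": 2})
```

Because the producer only deposits into the pool, adding a machine or screen is just another subscription; nothing upstream needs to be redeployed.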
A software development kit that runs on both Linux and Mac OS X is available. Applications are source-compatible across both operating systems and can run on ordinary desktop and laptop computers as well as on gesturally equipped g-speak machines and clusters. The input framework also supports traditional input devices such as the mouse and keyboard. Underkoffler is convinced that the technology will find its way into everyday computers within five years.