T(ether) is a novel spatially aware display that supports intuitive interaction with volumetric data.
Created by Matthew Blackshaw (@mblackshaw), Dávid Lakatos (@dogichow), Hiroshi Ishii, and Ken Perlin, the display acts as a window offering users a perspective view of three-dimensional data by tracking the position and orientation of the user's head. Virtual objects can be created and manipulated through hand gestures, and multiple people can edit the same virtual environment simultaneously.
T(ether) creates a 1:1 mapping between real and virtual coordinate space allowing immersive exploration of the joint domain. Our system creates a shared workspace in which co-located or remote users can collaborate in both the real and virtual worlds. The system allows input through capacitive touch on the display and a motion-tracked glove. When placed behind the display, the user’s hand extends into the virtual world, enabling the user to interact with objects directly.
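Head-coupled "window" displays of this kind are typically rendered with an off-axis (asymmetric) view frustum computed from the tracked eye position relative to the screen. The following is a minimal sketch of that idea, assuming the display lies in the z = 0 plane of the shared 1:1 coordinate frame; the function and parameter names are illustrative and not taken from the T(ether) implementation:

```cpp
#include <cassert>
#include <cmath>

// Frustum bounds in the form expected by an OpenGL-style
// asymmetric perspective projection (e.g. glFrustum).
struct Frustum {
    double left, right, bottom, top, zNear, zFar;
};

// Compute an off-axis frustum for an eye at (eyeX, eyeY, eyeZ),
// looking through a screen of half-extent (halfW, halfH) centered
// at the origin in the z = 0 plane. The screen edges, measured
// relative to the eye, are scaled down to the near plane by
// similar triangles.
Frustum offAxisFrustum(double eyeX, double eyeY, double eyeZ,
                       double halfW, double halfH,
                       double zNear, double zFar) {
    double s = zNear / eyeZ;  // eyeZ: perpendicular distance to screen
    return { (-halfW - eyeX) * s,
             ( halfW - eyeX) * s,
             (-halfH - eyeY) * s,
             ( halfH - eyeY) * s,
             zNear, zFar };
}
```

As the tracked head moves off-center, the frustum skews so that the rendered scene stays registered with the physical screen, which is what makes the tablet read as a window rather than a fixed camera.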
The system uses Vicon motion capture cameras to track the position and orientation of the tablets and of users' heads and hands. The motion capture system consists of 19 cameras mounted on a frame, covering a tracked volume of 14 by 12 by 9 feet in which retro-reflective markers are tracked. The cameras are connected to a server, which fuses the marker data from each camera to reconstruct spatial position and orientation. Apple iPad 2 tablets serve as windows into the virtual world. Rendering is implemented using Cinder, a low-level OpenGL wrapper. The team also built a custom Objective-C scene graph on top of Cinder, allowing the use of native Cocoa user-interface elements.
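The scene-graph layer described above boils down to nodes that carry local transforms and accumulate them parent-to-child at render time. As a rough sketch of that pattern (the team's Objective-C scene graph is not public, so the structure below is an assumption, reduced here to translations only):

```cpp
#include <array>
#include <cassert>
#include <vector>

// A scene-graph node with a local translation and child nodes.
// Real scene graphs carry full transforms (rotation, scale) and
// draw calls; this sketch keeps only translation for clarity.
struct Node {
    double tx = 0, ty = 0, tz = 0;  // local translation
    std::vector<Node> children;
};

// Depth-first traversal: accumulate the parent's world translation
// into each node and record every node's world position, mirroring
// how a renderer would push/pop transforms while drawing.
void collectWorldPositions(const Node& n,
                           double px, double py, double pz,
                           std::vector<std::array<double, 3>>& out) {
    double wx = px + n.tx, wy = py + n.ty, wz = pz + n.tz;
    out.push_back({wx, wy, wz});
    for (const auto& child : n.children)
        collectWorldPositions(child, wx, wy, wz, out);
}
```

Layering a structure like this over Cinder lets application code position virtual objects hierarchically while still mixing in native Cocoa UI elements on the tablet.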