inFORM – Dynamic Shape Display from Tangible Media Group

Created at the Tangible Media Group / MIT Media Lab, inFORM is a Dynamic Shape Display that can render 3D content physically, so users can interact with digital information in a tangible way. After a leaked video of the project went viral on YouTube last week, the team have now put out the full documentation of the project, including some making-of information plus images and videos for CAN.

The first version of the inFORM project was called Relief, and subsequently Recompose; their initial attempt at prototyping the interface was called Contour. inFORM can also interact with the physical world around it, for example by moving objects on the table’s surface. Remote participants in a video conference can be displayed physically, allowing for a strong sense of presence and the ability to interact physically at a distance. inFORM is a step toward the group’s vision of Radical Atoms. Past research on shape displays has primarily focused on rendering content and user interface elements through shape output, with less emphasis on dynamically changing UIs. The team propose utilising shape displays in three different ways to mediate interaction: to facilitate, by providing dynamic physical affordances through shape change; to restrict, by guiding users with dynamic physical constraints; and to manipulate, by actuating physical objects.

We are currently exploring a number of application domains for the inFORM shape display. One area we are working on is geospatial data, such as maps, GIS, terrain models and architectural models. Urban planners and architects can view 3D designs physically and better understand, share and discuss them. We are collaborating with the urban planners in the Changing Places group at MIT. In addition, inFORM would allow 3D modellers and designers to prototype their 3D designs physically without 3D printing (albeit at a low resolution). Finally, cross-sections through volumetric data, such as medical imaging CT scans, can be viewed in 3D physically and interacted with. We would like to explore medical or surgical simulations. We are also very intrigued by the possibilities of remotely manipulating objects on the table.

Images: touch tracking, object tracking, render + tracking.

The project uses a Kinect for input, and a custom 30×30 actuator table, displays and projectors for output. 150 custom Arduino PCBs are controlled by custom software written in openFrameworks. The pipeline captures Kinect data, runs colour tracking, generates a normalised image and combines these into 3D object tracking; in parallel it runs touch tracking and produces a combined depth map, which is converted into piston positions. For more information, see the links below.
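
The team’s software itself is not included in the write-up, but the final step of that pipeline is straightforward to sketch. Below is a minimal, hypothetical C++ sketch of the depth-map-to-piston conversion, assuming a 640×480 Kinect depth buffer in millimetres and the 30×30 pin grid described above; the per-board message framing (a board ID followed by six pin heights, matching the 150 boards × 6 pins of the table) is an illustrative assumption, not the team’s actual protocol.

```cpp
// Hypothetical sketch: convert a Kinect depth frame into 30x30 pin
// heights and pack them into per-board messages. The 150-board / 6-pin
// layout comes from the article; the framing itself is assumed.
#include <algorithm>
#include <cstdint>
#include <vector>

constexpr int kDepthW = 640, kDepthH = 480;      // Kinect depth resolution
constexpr int kGrid   = 30;                      // 30x30 actuator table
constexpr int kBoards = 150, kPinsPerBoard = 6;  // 150 boards x 6 pins = 900

// Average the depth pixels that fall under each pin, then map the result
// to an 8-bit piston position (0 = fully down, 255 = fully up).
std::vector<uint8_t> depthToPins(const std::vector<uint16_t>& depth,
                                 uint16_t nearMM, uint16_t farMM) {
    std::vector<uint8_t> pins(kGrid * kGrid, 0);
    // Integer cell sizes: 16 rows per pin; 21 columns per pin, ignoring
    // a 10-pixel margin on the right edge of the depth image.
    const int cellW = kDepthW / kGrid, cellH = kDepthH / kGrid;
    for (int gy = 0; gy < kGrid; ++gy) {
        for (int gx = 0; gx < kGrid; ++gx) {
            uint64_t sum = 0; int count = 0;
            for (int y = gy * cellH; y < (gy + 1) * cellH; ++y)
                for (int x = gx * cellW; x < (gx + 1) * cellW; ++x) {
                    uint16_t d = depth[y * kDepthW + x];
                    if (d >= nearMM && d <= farMM) { sum += d; ++count; }
                }
            if (count == 0) continue;  // no valid depth: leave pin down
            double avg = double(sum) / count;
            // Nearer surfaces push pins higher.
            double t = 1.0 - (avg - nearMM) / double(farMM - nearMM);
            pins[gy * kGrid + gx] =
                uint8_t(std::clamp(t, 0.0, 1.0) * 255.0);
        }
    }
    return pins;
}

// Pack pin heights into one message per board: [board id, 6 heights].
std::vector<std::vector<uint8_t>> packBoards(const std::vector<uint8_t>& pins) {
    std::vector<std::vector<uint8_t>> msgs;
    for (int b = 0; b < kBoards; ++b) {
        std::vector<uint8_t> msg{uint8_t(b)};
        for (int p = 0; p < kPinsPerBoard; ++p)
            msg.push_back(pins[b * kPinsPerBoard + p]);
        msgs.push_back(std::move(msg));
    }
    return msgs;
}
```

In an openFrameworks app, each frame’s depth pixels would be fed to depthToPins() and each packed message written out over serial, e.g. with ofSerial::writeBytes(); the function names and the near/far clipping range are, again, placeholders rather than the team’s actual implementation.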

Project Page

Project by Daniel Leithinger, Sean Follmer and Hiroshi Ishii at the Tangible Media Group / MIT Media Lab.
Academic Support: Alex Olwal
Software Engineering Support: Akimitsu Hogge, Tony Tang, Philip Schoessler
Hardware Engineering Support: Ryan Wistort, Guangtao Zhang, Cheteeri Smith, Alyx Daly, Pat Capulong, Jason Moran
Video Support: Basheer Tome

Images: inFORM table construction, motor module test rig, motor panel, 1st prototype motors, car, flashlight_remote, sitemath, flashlight_table.

  • Dominic Follett-Smith

    My mind is blown away. Wow!

  • dharmendra

Can anyone say how this technology works exactly, please…

  • Randy Cadman

What’s the actuator? Is it pneumatic?