Created at the Tangible Media Group at MIT in collaboration with Sony Corporation, exTouch enables spatially-aware embodied manipulation of actuated objects mediated by augmented reality. In other words, exTouch is an interface system that allows you to manipulate actuated objects in space using augmented reality. The “exTouch” system extends the user's touchscreen interactions into the real world by enabling spatial control over actuated objects. When users touch a device shown in live video on the screen, they can change its position and orientation through multi-touch gestures or by physically moving the screen in relation to the controlled object.
The team demonstrates the system in applications such as an omnidirectional vehicle, a drone, and moving furniture for a reconfigurable room. They envision that the proposed spatially-aware interaction will further enhance human physical ability through spatial extension of user interaction. The client mobile application, running on an iPad, was built using openFrameworks. The omnidirectional vehicle was built with Arduino, and the system uses WiFi for communication between the client and the device.
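The exTouch source and protocol are not published, but as a rough sketch of the architecture described above – an openFrameworks client on the iPad talking to an Arduino-driven vehicle over WiFi – one could stream gesture-derived motion commands with the ofxOsc addon. The OSC address, host IP, and port below are illustrative assumptions, not the project's actual protocol.

```cpp
// Illustrative sketch only: the exTouch source and protocol are not public.
// An openFrameworks client could stream gesture-derived motion commands to the
// Arduino vehicle over WiFi with the ofxOsc addon, roughly like this.
#include "ofMain.h"
#include "ofxOsc.h"

class ofApp : public ofBaseApp {
public:
    ofxOscSender sender;

    void setup() {
        // Assumed IP address and port of the omnidirectional vehicle.
        sender.setup("192.168.1.42", 9000);
    }

    // Called when the on-screen object is dragged with a multi-touch gesture
    // or the screen is moved relative to the tracked device.
    void sendMoveCommand(float dx, float dy, float dTheta) {
        ofxOscMessage m;
        m.setAddress("/vehicle/move");   // illustrative OSC address
        m.addFloatArg(dx);               // translation in the camera plane
        m.addFloatArg(dy);
        m.addFloatArg(dTheta);           // rotation derived from the gesture
        sender.sendMessage(m, false);
    }
};
```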
The project was introduced with a demo and presentation at TEI 2013.
Created by Shunichi Kasahara, Ryuma Niiyama, Valentin Heun and Hiroshi Ishii.
- Touch Vision Interface [openFrameworks, Arduino, Android] Created by Teehan+Lax Labs, Touch Vision Interface is a combination of software and hardware that allows realtime manipulation of content on a remote device via the touch interface of a mobile device. Instead of using the mobile device's screen purely as an input, the user views the remote content and manipulates it simultaneously – loosely, though not strictly, a form of AR. I can still recall the first time I saw an Augmented Reality demo. There was a sense of wonderment from the illusion of 3D models living within the video feed. Of course, the real magic was the fact that the application was not only viewing its surrounding environment, but also understanding it. AR has proven to be an incredible tool for enhancing perception of the real world. Despite this, I’ve always felt that the technology was somewhat limited in its application. It is typically implemented as output in the form of visual overlays or filters. But could it also be used for user input? We decided to explore that question by pairing the principles of AR (like real-time marker detection and tracking) with a natural user interface (specifically, touch on a mobile phone) to create an entirely new interactive experience. The translation of touch input coordinates to the captured video feed creates the illusion of being able to directly manipulate a distant surface (a minimal sketch of this mapping appears after this list). Peter imagines future applications of this technology both in the living room and in large open spaces. Brands could crowd-source more easily with billboard polls, and group participation on large installations could feel more natural. Other applications could include a music creation experience where each screen becomes an instrument. The possibilities become even more exciting when considering the most compelling aspect of the tool – the ability to interact with multiple surfaces without interruption. No need to switch devices through a secondary UI – simply touch your target. You could imagine a wall of digital billboards that users seamlessly paint across with a single gesture. Created using opencv-android, openFrameworks, and Python/Arduino for the LED matrix. Touch Vision Interface (Thanks […]
- Second Surface – Multi-user spatial collaboration system Collaborative drawing in 3D space has a long history at MIT and on CAN, ranging from a translucent screen used in the early 2000s to observe three-dimensional objects drawn in space (sorry, no link) to our own GD3D app published on the App Store in September 2010 (free). There is general interest in populating virtual environments, and we have seen a large number of projects in the past that do just this. Unfortunately, this virtual space is very fragmented, with each project relying on its own interface and technologies, not to mention the different devices used to navigate it. The Tangible Media Group at MIT developed T(ether) – a Spatially- and Body-Aware Window – and the latest iteration of the research comes in the form of Second Surface, created by a new team including Shunichi Kasahara (Sony Corporation, MIT Media Lab), Valentin Heun, Austin S. Lee and Hiroshi Ishii (MIT Media Lab). The project aims to create an environment for creative collaboration that can adapt to the everyday environment. Second Surface, a novel multi-user augmented reality system, fosters real-time interaction with user-generated content on top of the everyday environment. This interaction takes place in the physical surroundings of everyday objects such as trees or houses. Second Surface uses image-based AR recognition technology that recognizes a natural image as the target. Based on the AR recognition, they estimate the pose of the user's device relative to the target object (see the pose-estimation sketch after this list). Their system then allows users to place three-dimensional drawings, texts, and photos relative to such objects and share these expressions with any other co-located users running the same software at the same spot. Our system can provide an alternate reality space that generates playful and natural interaction in an everyday setup for multiple users. Our goal is to create a second surface on top of reality, invisible to the naked eye, that generates a real-time spatial canvas on which everyone can express themselves. We believe that our system can create an interesting and new collaborative user experience and encourage playful content generation. We also believe that our system can provide new ways of communicating within everyday environments such as cities, schools and households. Second Surface is a collaboration between the MIT Media Lab and Sony Corporation; it was introduced at Emerging Technologies, SIGGRAPH ASIA 2012, and was awarded the Emerging Technologies Prize. The client mobile application and server application were built using openFrameworks. More about the project […]
- Macrofilm – A Tangible Narrative Ribbon by panGenerator Created by the panGenerator collective, Macrofilm is a permanent interactive installation for The Museum of The History of Polish Jews that combines the traditional, tangible experience of browsing through old archives with subtly augmented digital […]
- Trace Modeler [openFrameworks] Created by Karl D.D. Willis, Trace Modeler is an application that uses real-time video to create three-dimensional geometry. The silhouette of a foreground object in a video frame is subtracted from the background and used as a two-dimensional slice. At user-defined intervals, new slices are captured and displaced along the depth axis. The result is a three-dimensional model defined by silhouette slices over time (a minimal sketch of this pipeline appears after this list). Trace Modeler was built using openFrameworks and the OpenCV library to recognize contours from the video image. Source code is available for download here. Project Page (re-discovered via Cedric Kiefer) See also Beautiful Modeler [iPad, […]
- inFORM – Dynamic Shape Display from Tangible Media Group inFORM is a Dynamic Shape Display that can render 3D content physically, so users can interact with digital information in a tangible […]
- SKÅL [Objects] Skål (Norwegian for bowl) is a media player designed for the home that lets you interact with digital media using physical objects. You place objects in a wooden bowl to play back related media on the TV. Skål uses RFID to sense small, batteryless tags inside or attached to physical objects, toys, dolls and figures. You can fit any toy of suitable size with RFID tags and connect it to your own media content. A bowl sits on the living-room table, and a range of physical objects can be placed within it. When an object is placed in the bowl, related media is played back on the TV. For example, a physical Moomin character like Little My will play a sequence from the Moomin cartoon in which she is featured. Skål lets you control all kinds of digital media: movie clips, YouTube channels, Flickr photo streams, home videos, online radio etc. Skål is designed by Jørn Knutsen, Einar Sneve Martinussen and Timo Arnall. It emerged from Touch, a design research project looking at RFID in products and interactions. Touch at AHO is funded by Verdikt. SKÅL See also Immaterials: the ghost in the […]
- MIMPI / Mobile interactive multiparametric image Created by the Moscow-based duo Stain, MIMPI – mobile interactive multiparametric image – is an experiment using generative imagery combined with simple multiuser interaction. The audience has direct influence over the image by tilting their iOS or Android devices. Sound is created by Lazyfish and synthesized in real time alongside the image, using incoming parameters from vvvv via OSC. Audience interaction with the installation becomes a kind of collective game or even meditation. The visually complex image is a metaphor for virtual structures that one can affect in an intuitively easy way. The participant's mind is immersed in the process of influencing and perceiving emotional feedback. The graphic style hints at futurist aesthetics and bears a certain historical irony, with a desire to rethink our attitudes to technology. Streams of user data transform and colourise the parametric surface. Fast or slow, smooth or stepped movement shows time, the fourth dimension. Immersion in the process of the continuously changing image is supplemented with melodic and crispy sound, which is directly connected to the incoming data and image parameters. The communication method is the most comfortable for the viewer and provides a wide range of variation and ambiguity owing to data from multiple participants. The first installation appeared at MEL space in May 2012. Project […]
- Resinance – Interaction, smart materials and swarm behaviour Resinance explores the potential use of smart materials in an architectural context influenced by the behaviour of simple organic life […]
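As referenced in the Touch Vision Interface entry above, the core trick is mapping a touch point in the phone's camera view onto the tracked remote display. A minimal sketch of that mapping, assuming the four detected screen corners and the remote resolution are known (the project's own code is not reproduced here), could use an OpenCV homography:

```cpp
// Illustrative sketch: map a touch point seen in the phone camera's view onto
// the remote display's own pixel space via a homography. The four screen
// corners are assumed to come from marker/screen detection; all values in
// main() are placeholders, not Touch Vision Interface's actual code.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

cv::Point2f touchToRemote(const std::vector<cv::Point2f>& cornersInCamera,
                          const cv::Size& remoteResolution,
                          const cv::Point2f& touchInCamera) {
    // Destination: the remote display's pixel corners (TL, TR, BR, BL).
    std::vector<cv::Point2f> remoteCorners = {
        {0.f, 0.f},
        {(float)remoteResolution.width, 0.f},
        {(float)remoteResolution.width, (float)remoteResolution.height},
        {0.f, (float)remoteResolution.height}
    };
    // Homography from the camera's view of the screen to the screen itself.
    cv::Mat H = cv::findHomography(cornersInCamera, remoteCorners);

    std::vector<cv::Point2f> in = {touchInCamera}, out;
    cv::perspectiveTransform(in, out, H);
    return out[0];   // coordinate to transmit to the remote device
}

int main() {
    // Screen corners detected at these camera pixels; remote display is 1920x1080.
    std::vector<cv::Point2f> corners = {{400, 200}, {900, 210}, {880, 520}, {410, 500}};
    cv::Point2f remote = touchToRemote(corners, cv::Size(1920, 1080), {650, 360});
    std::cout << "remote touch at " << remote << std::endl;
    return 0;
}
```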
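For the pose-estimation step mentioned in the Second Surface entry, a common approach – offered here only as an illustrative sketch, not the project's implementation – is to recover the device pose from 2D/3D correspondences on the recognized image target with cv::solvePnP; all point values and intrinsics below are placeholders:

```cpp
// Illustrative sketch only: estimate the device pose from a recognized image
// target via solvePnP on 2D/3D correspondences. All points and intrinsics are
// placeholders; this is not Second Surface's implementation.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    // Corners of the target in its own coordinate frame (metres, z = 0 plane).
    std::vector<cv::Point3f> objectPoints = {
        {0.f, 0.f, 0.f}, {0.2f, 0.f, 0.f}, {0.2f, 0.15f, 0.f}, {0.f, 0.15f, 0.f}
    };
    // Where those corners were detected in the current camera frame (pixels).
    std::vector<cv::Point2f> imagePoints = {
        {310, 220}, {520, 230}, {515, 390}, {305, 380}
    };
    // Placeholder intrinsics; a real app would use calibrated camera values.
    cv::Mat K = (cv::Mat_<double>(3, 3) << 800, 0, 320, 0, 800, 240, 0, 0, 1);
    cv::Mat dist = cv::Mat::zeros(4, 1, CV_64F);

    cv::Mat rvec, tvec;
    cv::solvePnP(objectPoints, imagePoints, K, dist, rvec, tvec);

    // rvec/tvec define the target-to-camera transform: content anchored to the
    // target can now be rendered from the device's point of view.
    std::cout << "tvec = " << tvec << std::endl;
    return 0;
}
```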
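And for the Trace Modeler entry, the described pipeline (background subtraction, contour extraction, slices displaced along the depth axis) maps closely onto the ofxOpenCv addon bundled with openFrameworks. The following is a minimal sketch under assumed thresholds and capture intervals, not Karl D.D. Willis' source:

```cpp
// Minimal sketch of the Trace Modeler pipeline as described above: subtract a
// stored background from each grabbed frame, threshold, find the foreground
// contour, and stack it as a slice displaced along z. Uses the ofxOpenCv addon
// bundled with openFrameworks; threshold and capture interval are assumptions.
#include "ofMain.h"
#include "ofxOpenCv.h"

class ofApp : public ofBaseApp {
public:
    ofVideoGrabber grabber;
    ofxCvColorImage color;
    ofxCvGrayscaleImage gray, background, diff;
    ofxCvContourFinder contours;
    vector<ofPolyline> slices;          // one silhouette per captured interval

    void setup() {
        grabber.setup(640, 480);
        color.allocate(640, 480);
        gray.allocate(640, 480);
        background.allocate(640, 480);
        diff.allocate(640, 480);
    }

    void update() {
        grabber.update();
        if (!grabber.isFrameNew()) return;
        color.setFromPixels(grabber.getPixels());
        gray = color;                                   // convert to grayscale
        if (ofGetFrameNum() == 10) background = gray;   // store empty background
        diff.absDiff(background, gray);                 // silhouette = |bg - frame|
        diff.threshold(40);                             // assumed threshold
        contours.findContours(diff, 100, 640 * 480 / 2, 1, false);

        // Every 30 frames, keep the largest contour as a new depth slice.
        if (ofGetFrameNum() % 30 == 0 && contours.nBlobs > 0) {
            ofPolyline slice;
            for (auto& p : contours.blobs[0].pts) slice.addVertex(p.x, p.y, 0);
            slices.push_back(slice);
        }
    }

    void draw() {
        ofEnableDepthTest();
        for (size_t i = 0; i < slices.size(); i++) {
            ofPushMatrix();
            ofTranslate(0, 0, -10.0f * i);              // displace along depth
            slices[i].draw();
            ofPopMatrix();
        }
    }
};
```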
Posted on: 08/03/2013