Back in 2013, the team at the MIT Media Lab’s Fluid Interfaces Group developed a method of creating Spatially-Aware Embodied Manipulation of Actuated Objects using augmented reality. The project’s aim was to extend users’ touchscreen interactions into the real world by enabling spatial control over actuated objects. Earlier this year, the team released libraries and examples that enable others to do the same. Using Open Hybrid (see video below), users can directly map a digital interface onto a physical object using Arduino and other popular hardware/software environments (example).
This week the project has taken yet another exciting step in its three-year development. The Reality Editor is a new kind of tool that empowers users to connect and manipulate the functionality of physical objects using the camera on their smartphones. Drag a virtual line from one object to another, and you create a new relationship between them. With this simplicity, you can build an entire spectrum of connected objects.
The goal of the research is to create technology that grants users maximum control by leveraging human strengths such as spatial coordination, muscle memory, and tool-making. The Reality Editor allows you to define simple actions, change the functionality of the objects around you, and remix how things work and interact. Make the virtual more physical and the physical more virtual. Through its simplicity, the Reality Editor lets users merge two separate realities into one interwoven experience.
You can download the Reality Editor from the App Store and use the team’s open-source platform, Open Hybrid, to build a new generation of Hybrid Objects. This vision is not only for DIY designers and engineers, but is also fully within reach of the next generation of high-tech users.