Weather Worlds is a new interactive installation by Design I/O that grants children weather-controlling superpowers. Utilising cameras and real-time greenscreening via openFrameworks, the installation allows children to see themselves immersed in an interactive and dynamic environment. The custom computer vision system tracks the heads, hands, feet and movement of children on the platform and also recognizes gestures. Using their bodies, children can conjure a storm, release a twisting tornado or rain down bolts of lightning from their fingertips. There are mighty wind fields to move through, stomping earthquakes, light-bending sunshine and blizzards that will make you shiver!
Weather Worlds uses an HD CCTV camera, combined with a Blackmagic UltraStudio Mini, to capture 1080p video of the audience standing in front of a greenscreen. The audience is then chromakeyed in real time by the openFrameworks-based software, which extracts the contours of each person and builds a skeleton of them, allowing specific gestures to be detected. The chromakey happens in two passes: once on the CPU at low resolution for the tracking and analysis, and then separately on the GPU as a shader to allow fast compositing at 1080p.
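The CPU-side pass described above boils down to a per-pixel decision about whether a pixel belongs to the greenscreen. A minimal sketch of that idea (the threshold logic and names here are illustrative assumptions, not Design I/O's actual code) might look like this; the same maths would move into a GLSL fragment shader for the 1080p GPU pass:

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical per-pixel chromakey test: a pixel is keyed out when its
// green channel dominates red and blue by more than `threshold`.
struct RGB { uint8_t r, g, b; };

bool isGreenscreen(const RGB& p, int threshold = 40) {
    int dominance = int(p.g) - std::max(int(p.r), int(p.b));
    return dominance > threshold;
}

// Alpha for soft composite edges: ramp from opaque to transparent
// across the threshold instead of a hard cut.
float keyAlpha(const RGB& p, int threshold = 40, int softness = 20) {
    int dominance = int(p.g) - std::max(int(p.r), int(p.b));
    float a = 1.0f - float(dominance - threshold) / float(softness);
    return std::clamp(a, 0.0f, 1.0f);
}
```

Running the boolean version on a downscaled frame is cheap enough for contour tracking, while the alpha version gives the soft edges you want when compositing the audience into the weather scene at full resolution.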
The skeleton tracking code will be released as an openFrameworks addon and a standalone app which others will be able to use in their projects.
- Night Bright [openFrameworks, Kinect] Night Bright is an interactive installation by Design I/O (Emily Gobeille & Theo Watson) of nocturnal discovery where children use their bodies to light up the nighttime forest and discover the creatures that inhabit it. Children listen to sounds in the forest and play a nighttime game of hide and seek. Some creatures are curious and will investigate the light, while others are frightened and will hide in the shadows. Children can also grow nocturnal plants and release fireflies from their flowers. Night Bright uses an Xbox Kinect for tracking: the position of the children and their distance from the wall are used to calculate the location and amount of light they emit into the forest. The light is calculated with a shader, and some of the other creatures in the forest (like the fireflies) also emit light which helps light up the environment. Creatures exist at different depths in the forest, and the ones that are further back take more light to discover. Some creatures will also only come out when the forest is quiet, so you have to listen for the sounds they make to locate them. For this project Theo and Emily developed a set of custom tools for working with the forest elements that made the creatures aware of the structure of the environment (like the edges of trees, rocks etc.). These tools made it easy for the team to have birds land on branches and the woodpecker peck on trees, for example. Also used are animated sequences with accompanying XML files that describe different behaviours which they could trigger, making it quite easy to chain sequences together and have the creatures naturally react to the children's movement. The project was made with openFrameworks 007 and uses the ofxKinect and ofxControlPanel addons. Night Bright was created for the Bumble children's cafe in Los Altos, California. Thanks to Theo for the detailed info. Project […]
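The light calculation described above, where a child's distance from the wall drives how much light they emit, can be sketched as a simple normalised falloff. The depth range below is an assumed calibration, not Night Bright's actual values:

```cpp
#include <algorithm>

// Hedged sketch of the light-falloff idea: the closer a child stands to
// the wall (smaller Kinect depth reading), the more light they cast into
// the forest. `minDepth`/`maxDepth` are assumed calibration values, in mm.
float lightAmount(float depthMM,
                  float minDepth = 800.0f,
                  float maxDepth = 3000.0f) {
    float t = (depthMM - minDepth) / (maxDepth - minDepth);
    return 1.0f - std::clamp(t, 0.0f, 1.0f);   // 1 = brightest, 0 = none
}
```

A value like this would then be fed into the lighting shader per tracked child; creatures placed "further back" in the forest would simply require a higher accumulated light value before being revealed.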
- Trace Modeler [openFrameworks] Created by Karl D.D. Willis, Trace Modeler is an application that uses real-time video to create three-dimensional geometry. The silhouette of a foreground object in a video frame is subtracted from the background and used as a two-dimensional slice. At user-defined intervals, new slices are captured and displaced along the depth axis. The result is a three-dimensional model defined by silhouette slices over time. Trace Modeler was built using openFrameworks and the OpenCV library to recognize contours from the video image. Source code is available for download here. Project Page (re-discovered via Cedric Kiefer) See also Beautiful Modeler [iPad, […]
- Puppet Parade [openFrameworks] Puppet Parade is an interactive installation by Emily Gobeille and Theo Watson of Design I/O that allows children to use their arms to puppeteer larger-than-life creatures projected on the wall in front of them. This dual interactive setup allows children to perform alongside the puppets, blurring the line between the 'audience' and the puppeteers and creating an endlessly playful dialogue between the children in the space and the children puppeteering the creatures. The setup consists of a stage where two puppeteers can control puppets displayed on a large interactive screen. The audience can stand in front of the screen and interact with the puppets directly, making food for them to eat which transforms their appearance, or just engaging in a fun dialogue with the puppet, the performer and themselves. There are two Kinect cameras mounted on the stage which track the arm position of the two puppeteers, mapping their arm position and angle to the body of the puppets. The puppeteers can therefore control with their arm exactly where the puppets are and what they are looking at. Tracking of the hand of the puppeteers allows the team to map the angle between the thumb and forefinger to how open or closed the mouth of the puppet is, allowing the puppets to eat food or in some cases breathe fire. Another subtle detail is that the creatures in Puppet Parade are modeled in 3D and can rotate towards or away from the audience when the puppeteer rotates their arm towards or away from the Kinect. For the narrative, Theo explains, they really liked this idea of 'you are what you eat', where the food that the creatures ate would transform them to look like the style of the food. The audience can make the food for the creatures by holding out their hands. Once the food forms they can push it towards the creature (or just have the creature eat it out of their hand).
If someone stands with their arms above their heads they can make a cloud which travels up and which colors the creatures with a cloudy pattern when they eat it. There are also a feather bush and a fire bush, whose effects you can see in the video. In terms of the interaction, they really enjoyed the idea that the audience were also performers, creating stories together with the puppeteers: interacting with the puppeteers through the puppets, as well as with each other. They actually saw a lot of children going back and forth between being at the wall and being on the stage, so most got to experience the project from both perspectives. The software is made with openFrameworks 007 and runs on a Mac Pro connected to two Xbox Kinects and an IR camera (Sony M183). For the Kinect tracking Theo and Emily used ofxKinect, which in turn uses the open-source libfreenect. They also decided to do their own joint tracking, as both the Microsoft and OpenNI SDKs lack the ability to track fingers, and the team needed to be able to detect when someone had opened or closed their thumb and forefinger as that would then be mapped to the mouth of the puppet. To do the arm tracking they use OpenCV and lots of ofPolyline operations to find the arm, the elbow, wrist and fingers, and then the hand of the puppeteer. They then run a number of other operations on just the hand region to try and determine whether the hand is open or closed. It's all very custom to the problem at hand, but they were really happy with the accuracy. They have also found the Kinect motor control to be incredibly useful, as it allowed them to easily adjust for heights of people ranging from 3–7 feet. If the system saw something but wasn't finding an arm, it would get the Kinect to look down or up until it found a good candidate. Puppet Parade premiered at the 2011 Cinekid festival in Amsterdam. Puppet Parade is made with openFrameworks and the ofxKinect addon.
Project Page | Original Prototype Video | Cinekid | MOST Original Soundtracks | Video by Go […]
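The thumb-to-forefinger mapping described above, where the angle between the two fingers drives how open the puppet's mouth is, can be sketched as a vector-angle calculation normalised into a 0–1 range. The maximum angle and names below are illustrative assumptions, not Design I/O's actual values:

```cpp
#include <algorithm>
#include <cmath>

// Illustrative mapping (not the Puppet Parade source): the angle between
// the thumb and forefinger direction vectors drives mouth openness.
struct Vec2 { float x, y; };

float angleBetween(Vec2 a, Vec2 b) {
    float dot = a.x * b.x + a.y * b.y;
    float la = std::sqrt(a.x * a.x + a.y * a.y);
    float lb = std::sqrt(b.x * b.x + b.y * b.y);
    return std::acos(dot / (la * lb));          // radians
}

// Map an assumed 0..~60 degree hand-angle range to 0..1 mouth openness.
float mouthOpen(Vec2 thumb, Vec2 forefinger, float maxAngle = 1.05f) {
    float a = angleBetween(thumb, forefinger);
    return std::clamp(a / maxAngle, 0.0f, 1.0f);
}
```

In practice the two direction vectors would come from the custom ofPolyline-based hand analysis the article describes, and the resulting 0–1 value would be fed straight into the puppet's jaw animation.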
- Recompose [openFrameworks] Created by Anthony DeVincenzi, David Lakatos, Matthew Blackshaw, Daniel Leithinger and Hiroshi Ishii at the MIT Media Lab, Recompose is a system for manipulation of an actuated surface. By utilising OpenCV, a Kinect (we believe) and gesture recognition, the team is working with an array of 120 individually addressable pins, whose height can be actuated and read back simultaneously, creating an ever-transforming landscape responding to body behaviour. The team explains: "Our system builds upon the Relief table, developed by Leithinger. The table consists of an array of 120 individually addressable pins, whose height can be actuated and read back simultaneously, thus allowing the user to utilize them as both input and output. Building upon this system, we have furthered the design by placing a depth camera above the tabletop surface. By gaining access to the depth information we are able to detect basic gestures from the user. In order to provide visual feedback related to user interaction, a projector is mounted above the table and calibrated to be coincident with the depth camera. Computer vision is utilized to determine and recognize the position, orientation, and height of hands and fingers, in order to detect gestural input." Project […]
- Interactive Puppet Prototype w Kinect [openFrameworks] A quick installation prototype by Theo Watson and Emily Gobeille (design-io.com) with the libfreenect Kinect drivers and ofxKinect (openFrameworks addon). The system is doing skeleton tracking on the arm and determining where the shoulder, elbow, and wrist are, using them to control the movement and posture of the giant bird! Concept and Production by Design I/O Emily Gobeille - Theo Watson design-io.com 3D depth camera for arm tracking, courtesy of Microsoft and the open source / DIY community. If you're interested in making projects with the Kinect and openFrameworks, check out the addon as it progresses on the OF forums. openframeworks.cc/forum/viewtopic.php?p=24948#p24948 Previously: Kinect – OpenSource […]
- Mo Money Mo Problems [openFrameworks] Created by Nick Hardeman, these images are generated by evaluating and interpreting the 1997 music video "Mo Money Mo Problems" from the first disc of the Notorious B.I.G. album, Life After Death. The algorithm detects edges in the image and attempts to trace motion from frame to frame, using the initial frame as the starting point. The output is rendered as a vector image in which the curves represent the motion. The points represent the pixels detected in the edge, their size determined by the distance from their previous location: the further the movement, the larger the circle. The color and location of the points are determined by the corresponding pixel in that frame. The bright colored track suits worn by Puff Daddy and Mase against the dark backgrounds make for good tracking and nice color combinations. The only imagery added manually is the background color. You can check out some more renders in the Mo Money Mo Problems photoset on flickr. Nick Hardeman was born and raised in Miami, FL and grew up studying fine art. He received a BFA in graphic design from Florida State University in 2006. He then worked as a Flash web developer in Miami, FL at WA007. He is currently living in New York, NY, pursuing an MFA in Design and Technology at Parsons The New School for Design, and is expected to graduate in […]
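The render rule described above, where each traced edge point becomes a circle sized by how far it moved since the previous frame, reduces to a small distance-to-radius function. Parameter names and scaling here are assumptions for illustration, not Hardeman's actual values:

```cpp
#include <cmath>

// Sketch of the point-rendering rule: a circle's radius grows with the
// distance the tracked edge pixel moved since the previous frame.
struct Pt { float x, y; };

float circleRadius(Pt prev, Pt curr,
                   float baseRadius = 1.0f, float scale = 0.5f) {
    float dx = curr.x - prev.x;
    float dy = curr.y - prev.y;
    float dist = std::sqrt(dx * dx + dy * dy);
    return baseRadius + scale * dist;   // further movement -> larger circle
}
```

Each circle would then be filled with the colour sampled from the corresponding pixel in that frame, as the article describes, with the curves drawn through the tracked positions over time.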
- ‘Paik Times Five’ by Flightphase – Painting with Kinect and video Created by Karolina Sobecka and Jeff Crouse, with some help from Nick Hardeman, Paik Times Five was part of the one-night exhibition Infinite Loop, organized by the New Museum and curated by Lauren Cornell (Rhizome) in Seoul, South Korea. Paik Times Five was one of the three specially commissioned interactive video installations premiering at the event. Rafaël Rozendaal and Scott Snibbe created the other two installations, and a curated program of single-channel videos was displayed on the world’s largest LED screen. Karolina and Jeff's piece generates imagery from the viewer’s movement: each person leaves graphic traces behind them. Various media are the raw material for this imagery — the viewer ‘paints’ the space with the colours and textures of those videos. Each time the viewer lowers his or her arms, the source video changes to another, picked at random from a library of different video sources. When nobody is present in the interaction area, the installation goes into idle mode. Read more about the process on the Project Page. Created with openFrameworks and Kinect. Project created by FlightPhase Creative direction: Karolina Sobecka Technical direction and lead software development: Jeff Crouse Additional software development: Nick Hardeman See also Kinect Graffiti Tool [Processing, Kinect] /thanks […]
- Hand tracking gesture experiment with iisu middleware and oF Created by Ben McChesney at Helios Interactive, this is a hand-tracking gesture experiment using openFrameworks and the latest release of iisu 3.5, gesture recognition middleware compatible with all 3D cameras. The video shows Ben using a grasping hand to draw 3D ribbons based on the hand position, or navigating the camera within 3D space. One of the features the team is excited about is the Close Interaction Mode, which allows for a more detailed API based on hand and finger tracking. Users can use a hand-pose gesture to easily toggle between drawing and camera mode. Ben also believes that the long-range interaction should work with a Kinect. He tested the iisu middleware with the ASUS Xtion, Panasonic D-Imager, and the DepthSense. He adds that the close-range mode will only work with the SoftKinetic DepthSense 311 camera. The code for the whole project and several examples are available on the company's GitHub: https://github.com/HeliosInteractive/ofxIisu Examples include: User Representation, Skeleton Tracking, and Close Range Hand Tracking. These examples are written for Visual Studio 2010 with OF version 0071. The code for the ribbons is based on an old flocking example by roxlu. The Ribbon source can be found on Ben's GitHub. The application in the video below is running on a Samsung transparent LCD panel which displays white pixels as being transparent. iisu | Helios […]
Posted on: 26/06/2013
- Senior Digital Designer at CLEVER°FRANKE
- Interaction Designer at Carlo Ratti Associati
- Creative Technologist at Deeplocal
- HTML / CSS Developer at Resn
- Climate Service Data Visualiser at FutureEverything
- Web Developer at &Associates
- Creative Technologist at Rewind FX
- Coder to collaborate with Agnes Chavez
- Data Scientist at Seed Scientific
- Data Engineer at Seed Scientific
- Design Technologist at Seed Scientific
- Creative Technologist, The ZOO at Google