Last week we wrote about the wonderful work that happened over the weekend after the release of the open-source Xbox Kinect drivers. Today we look at what has happened since then and how the Microsoft gadget is being utilised in the creative code community.
In case you missed our post from last week, you can see it here: Kinect – OpenSource [News]
Chris from ProjectAllusion.com got to play with the Kinect, and one late night he made this little demo in Processing using the hacked Kinect drivers. The Processing app sends out OSC messages with depth information based on the level of detail and the defined plane, while the iPad app uses TouchOSC to send different values back to the Processing app.
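Chris's demo is written in Processing, but the pattern itself is framework-agnostic: sample the depth image, reduce it to a handful of values, and send them out as OSC. Here is a minimal sketch of that idea in openFrameworks terms, using the ofxKinect and ofxOsc addons; the OSC address, host and port are made up for illustration, not taken from his source:

```cpp
// A minimal openFrameworks app that averages the Kinect depth over a
// small window and sends it out as an OSC float. Address, host and
// port are placeholders.
#include "ofMain.h"
#include "ofxKinect.h"
#include "ofxOsc.h"

class DepthOscApp : public ofBaseApp {
public:
    ofxKinect kinect;
    ofxOscSender sender;

    void setup() override {
        kinect.init();
        kinect.open();
        sender.setup("192.168.1.10", 8000); // hypothetical receiver
    }

    void update() override {
        kinect.update();
        if (!kinect.isFrameNew()) return;

        // Average distance (mm) over a 20x20 window at the image centre.
        float sum = 0; int n = 0;
        for (int y = 230; y < 250; y++) {
            for (int x = 310; x < 330; x++) {
                float d = kinect.getDistanceAt(x, y);
                if (d > 0) { sum += d; n++; } // 0 means no reading
            }
        }
        if (n == 0) return;

        ofxOscMessage m;
        m.setAddress("/kinect/depth"); // illustrative address
        m.addFloatArg(sum / n);
        sender.sendMessage(m);
    }
};

int main() {
    ofSetupOpenGL(640, 480, OF_WINDOW);
    ofRunApp(new DepthOscApp());
}
```

On the receiving end, any OSC-aware app, whether Processing, Max or something on the iPad, just listens on that port and picks the value up.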
Daniel Reetz and Matti Kariluoma have been hacking a Canon PowerShot A540 camera for infrared sensitivity, enabling you to see the Kinect's projected infrared dots in space.
Microsoft's new Kinect sensor is garnering a lot of attention from the hacking community, but the technical specifics of how it works still aren't clear. I am working to understand the technology at a fundamental level; my interest is in the optical side of the Kinect. My ultimate goal is to make the sensor nearsighted, so that its depth resolution can be used to scan small objects. The first step in understanding a technology is to look at it, which is why teardowns like this one at iFixit are so important.
Ben at KODE80, the creator of HoloToy, also created this quite wonderful demo of the Kinect being used to track your position in space and display an image on screen based on where you stand, creating the illusion of a 3D image.
Several months ago I threw together an OSX HoloToy demo that used OpenCV and the iSight camera to replicate the facial-recognition head tracking used in the iPhone 4/iPod touch version. This seemed like a perfect place to insert the Kinect! The above video shows various scenes with the perspective controlled via the Kinect. At this point it is simply tracking a specified depth range; however, with motion tracking of the depth map and other techniques, this could be really special.
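KODE80 hasn't posted the tracking code, but the "specified depth range" approach he describes is easy to sketch: treat the centroid of every pixel inside a near/far band as the viewer's head, normalise it, and feed it to the renderer as a camera offset. A rough openFrameworks-style illustration, where the depth band values are assumptions:

```cpp
// Depth-window "head tracking": the centroid of all pixels inside a
// near/far band stands in for the viewer's head, normalised to -1..1
// so the renderer can offset or skew its camera with it. The 600-1200mm
// band is an assumption.
ofVec3f trackHead(ofxKinect &kinect, float nearMM = 600, float farMM = 1200) {
    const int W = 640, H = 480; // Kinect depth image size
    float sx = 0, sy = 0, sd = 0;
    int n = 0;
    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++) {
            float d = kinect.getDistanceAt(x, y);
            if (d > nearMM && d < farMM) { sx += x; sy += y; sd += d; n++; }
        }
    }
    if (n == 0) return ofVec3f(0, 0, 0); // nobody in range
    float hx = (sx / n) / W * 2 - 1;     // -1 (left) .. 1 (right)
    float hy = (sy / n) / H * 2 - 1;     // -1 (top)  .. 1 (bottom)
    return ofVec3f(hx, hy, sd / n);      // z = average distance in mm
}
```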
Philipp Robbel has some early experiments with a Microsoft Kinect depth camera on a mobile robot base. Say hello to KinectBot.
The robot uses the camera for 3D mapping and follows gestural directions. It’s basically a pimped iRobot Create with a battery-powered Kinect which streams the depth and color images to a remote host for SLAM and 3D map processing.
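The KinectBot pipeline itself isn't published, but the on-robot half of that architecture, grabbing depth frames and shipping them to a remote host, can be sketched directly against libfreenect. This is a bare-bones illustration only: the host address, port and (nonexistent) framing protocol are invented, error handling is omitted, and the color stream and SLAM side are left out entirely.

```cpp
// On-robot half of a KinectBot-style setup: libfreenect hands us 11-bit
// depth frames via a callback, and each raw frame is pushed down a TCP
// socket to a remote host for SLAM/mapping.
#include <libfreenect.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <cstdint>

static int sock = -1;

static void depth_cb(freenect_device *dev, void *depth, uint32_t timestamp) {
    // 640x480 depth, one 16-bit word per pixel.
    send(sock, depth, 640 * 480 * sizeof(uint16_t), 0);
}

int main() {
    // Connect to the remote processing host (placeholder address).
    sock = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in host{};
    host.sin_family = AF_INET;
    host.sin_port = htons(9000);
    inet_pton(AF_INET, "192.168.1.20", &host.sin_addr);
    if (connect(sock, (sockaddr *)&host, sizeof(host)) < 0) return 1;

    freenect_context *ctx;
    freenect_device *dev;
    if (freenect_init(&ctx, nullptr) < 0) return 1;
    if (freenect_open_device(ctx, &dev, 0) < 0) return 1;

    freenect_set_depth_mode(dev,
        freenect_find_depth_mode(FREENECT_RESOLUTION_MEDIUM,
                                 FREENECT_DEPTH_11BIT));
    freenect_set_depth_callback(dev, depth_cb);
    freenect_start_depth(dev);

    while (freenect_process_events(ctx) >= 0) {} // pump USB events forever

    freenect_stop_depth(dev);
    freenect_close_device(dev);
    freenect_shutdown(ctx);
    close(sock);
}
```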
Peter Kirn covered the work Ben X Tan has been doing with the Kinect to perform MIDI control. The result: depth-sensing, gestural musical manipulation! From the description:
Coded in C#.NET using this: http://codelaboratories.com/nui. Very hacky, ugly, yucky alpha prototype; source code available here: http://benxtan.com/temp/pmidickinect.zip. Next project is making a version of pmidic that uses Kinect. Then you can control Ableton Live or any other MIDI software or hardware with your limbs. Isn't that amazing!!! If you are interested, you should also check out: http://pmidic.sourceforge.net/ and http://benxtan.com
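Ben's prototype is C#, but the core mapping, turning a tracked limb position into a MIDI continuous controller, is tiny in any language. A hedged sketch using the RtMidi C++ library; the choice of CC number, channel and scaling is arbitrary, since whatever listens on the other end decides what they mean:

```cpp
// Map a normalised limb position (0..1, e.g. derived from the depth map)
// onto a MIDI control change with RtMidi. CC 1 (mod wheel) on channel 1
// is an arbitrary choice.
#include <RtMidi.h>
#include <vector>
#include <algorithm>
#include <cstdint>

void sendLimbAsCC(RtMidiOut &out, float pos01) {
    uint8_t value = (uint8_t)std::clamp(pos01 * 127.0f, 0.0f, 127.0f);
    std::vector<unsigned char> msg = {
        0xB0, // control change, channel 1
        1,    // controller number: mod wheel
        value
    };
    out.sendMessage(&msg);
}
```

With `RtMidiOut out; out.openPort(0);` done once at startup, any MIDI-aware host, Ableton Live included, will see the limb as an ordinary mod wheel.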
Yesterday, Stephan Maximilian Huber posted this video of a Joy Division-esque realtime 3D scan using the Kinect, in which points are connected only horizontally. Very effective and quite beautiful.
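The effect is pleasingly simple to reproduce: walk the depth image row by row, lift each point by how close it is, and only ever connect points within a row. Something like this in openFrameworks, where the row spacing and depth scaling are arbitrary choices:

```cpp
// "Unknown Pleasures" scanline render: one polyline per depth-image row,
// each point lifted by its distance, with no vertical connections.
void drawDepthScanlines(ofxKinect &kinect) {
    const int W = 640, H = 480; // Kinect depth image size
    for (int y = 0; y < H; y += 8) {          // every 8th row
        ofPolyline line;
        for (int x = 0; x < W; x += 4) {      // every 4th column
            float d = kinect.getDistanceAt(x, y);
            // Nearer points push the line upward; 0 means no reading.
            float lift = (d > 0) ? ofMap(d, 500, 4000, 60, 0, true) : 0;
            line.addVertex(x, y - lift);
        }
        line.draw();
    }
}
```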
Simultaneously, Dominick D'Aniello is working on Kinect Object Manipulation, a system built with openFrameworks that allows you to rotate and manipulate 3D objects using the Kinect.
A threshold is used on the depth map to filter out everything but my hands, and then blob detection is used to locate their centers. This information is then used to scale and rotate an onscreen object. Note that because the Kinect provides depth information, the object can be rotated on both its Z and Y axes. With a bit of work, a gesture could theoretically also be made to rotate it along the X axis.
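That description maps almost one-to-one onto the ofxKinect and ofxOpenCv addons. The sketch below is not Dominick's code, just an illustration of the same pipeline reduced to 2D: threshold the depth map to a hand-height band, find the two biggest blobs, and derive a rotation and scale from their centroids. All the constants are guesses.

```cpp
// Threshold-then-blobs pipeline: keep only pixels in a hand-height depth
// band, find the two largest blobs, and turn their centroids into a
// rotation and a scale. gray should be allocated to 640x480 beforehand.
void updateHands(ofxKinect &kinect, ofxCvGrayscaleImage &gray,
                 ofxCvContourFinder &contours,
                 float &outAngle, float &outScale) {
    const int W = 640, H = 480;
    std::vector<unsigned char> mask(W * H, 0);
    for (int i = 0; i < W * H; i++) {
        float d = kinect.getDistanceAt(i % W, i / W);
        if (d > 500 && d < 800) mask[i] = 255; // hands-only band (mm)
    }
    gray.setFromPixels(mask.data(), W, H);

    // Up to two blobs between 500 and 50000 px in area, no holes.
    contours.findContours(gray, 500, 50000, 2, false);
    if (contours.nBlobs < 2) return; // need both hands

    ofPoint a = contours.blobs[0].centroid;
    ofPoint b = contours.blobs[1].centroid;
    float dx = b.x - a.x, dy = b.y - a.y;
    outAngle = atan2f(dy, dx);                    // tilt hands to rotate
    outScale = sqrtf(dx * dx + dy * dy) / 200.0f; // spread hands to scale
}
```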
A few days ago we posted a quick installation prototype by Theo Watson and Emily Gobeille (design-io.com) built with the libfreenect Kinect drivers and ofxKinect (an openFrameworks addon). The system does skeleton tracking on the arm, determining where the shoulder, elbow, and wrist are, and uses that to control the movement and posture of the giant bird!
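Theo and Emily haven't released their tracking code, but the mapping step at the end, turning shoulder/elbow/wrist positions into joint angles for a rig, is straightforward to illustrate. A small sketch assuming openFrameworks' ofPoint and 2D positions; the tracking that produces the three points is the genuinely hard part and isn't shown:

```cpp
// Mapping step only: turn tracked shoulder/elbow/wrist positions into
// two joint angles that can drive a puppet rig.
struct ArmPose { float shoulderAngle, elbowAngle; };

ArmPose armPose(const ofPoint &shoulder, const ofPoint &elbow,
                const ofPoint &wrist) {
    ArmPose pose;
    // Upper arm angle relative to the x-axis.
    pose.shoulderAngle = atan2f(elbow.y - shoulder.y, elbow.x - shoulder.x);
    // Elbow bend: forearm angle relative to the upper arm.
    float forearm = atan2f(wrist.y - elbow.y, wrist.x - elbow.x);
    pose.elbowAngle = forearm - pose.shoulderAngle;
    return pose;
}
```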
More great news: the Kinect now also works with MaxMSP, thanks to an external created by Jean-Marc Pelletier.
It's still very alpha. I still have to implement "unique" mode, multiple-camera support, and proper opening/closing, and I can't seem to release the camera properly, but the video streams work as they should. Read more on the forums.
Last week Rui Madeira also ported the drivers to the Cinder framework, and this morning Robert Hodgin aka Flight404 posted these videos to his Vimeo account. Made with Cinder and the Kinect sensor; runs in realtime.
Another great week of Kinect projects. The work is finally beginning to take shape beyond tech demos, which is wonderful to see. I highly doubt we will be posting any more updates of this nature, as new work will develop as individual projects which will require their own posts. Big up once again to the communities including openFrameworks, Processing, Cinder, MaxMSP and many more.
Posted on: 22/11/2010