Kyle McDonald has been exploring the possibilities of face tracking using FaceShift Studio, an affordable markerless facial performance capture tool in development since 2009 and presented over the years at the SIGGRAPH conference. The tutorial below shows how to connect it to other apps in realtime over OSC using FaceShiftOSC, and how to bring the data into openFrameworks via his ofxFaceShift addon.
Kyle’s tutorial begins with instructions on how to use your Kinect to generate a model of your face. Once you have gone through that process, FaceShift is able to translate your facial movements directly onto the three-dimensional model in realtime. The model is so precise that, besides capturing your facial expressions extremely well, it also captures your eye movement.
The next step is to send the movements out through the OSC server and mirror them in the openFrameworks app Kyle has written. All of the data comes through, including the eye movement. Finally, the OSC data is fed into Ableton Live, and you can generate sounds with your facial movements.
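If you want a minimal starting point for the openFrameworks side, a sketch along these lines should do: it listens for incoming OSC messages and keeps the latest value per address, ready to draw or forward on. It uses the stock ofxOsc addon; the port number and the assumption that each message carries a single float are illustrative guesses, so check FaceShiftOSC's actual output (or use Kyle's ofxFaceShift addon directly).

    #include "ofMain.h"
    #include "ofxOsc.h"
    #include <map>
    #include <string>

    // Minimal sketch: listen for face-tracking data arriving over OSC and
    // keep the latest value per address for drawing or forwarding elsewhere.
    // The port and the one-float-per-message assumption are illustrative only.
    class ofApp : public ofBaseApp {
    public:
        ofxOscReceiver receiver;
        std::map<std::string, float> values; // latest value per OSC address

        void setup() {
            receiver.setup(8338); // assumed port; match FaceShiftOSC's setting
        }

        void update() {
            while (receiver.hasWaitingMessages()) {
                ofxOscMessage m;
                receiver.getNextMessage(&m);
                if (m.getNumArgs() > 0 && m.getArgType(0) == OFXOSC_TYPE_FLOAT) {
                    values[m.getAddress()] = m.getArgAsFloat(0);
                }
            }
        }

        void draw() {
            int y = 20;
            for (auto& v : values) {
                ofDrawBitmapString(v.first + ": " + ofToString(v.second, 2), 20, y);
                y += 15;
            }
        }
    };

    int main() {
        ofSetupOpenGL(640, 480, OF_WINDOW);
        ofRunApp(new ofApp());
    }

From there, forwarding a chosen value to Ableton Live as a MIDI CC (for example with the ofxMidi addon) gives you the sound-generation step described above.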
Kyle has made all the code and examples available online so you can have a play.
FaceShift intro faceshift.com/
FaceShiftOSC setup github.com/kylemcdonald/ofxFaceShift/downloads
ofxFaceShift overview github.com/kylemcdonald/ofxFaceShift
Sound design from soundcloud.com/robclouth/
- Missing by The xx – Spatializing Sound with Sonos Working with the English indie band The xx, Matt Mets, Aramique Krauthamer and Kyle McDonald created an exhibit that incorporates the band’s music into a room full of stepper-motor-controlled speakers that pivot to follow listeners as they move through the space. Missing is part of Coexist, an exhibition cycle at the Sonos Studio in Los Angeles that attempts to explore the “relationship between man and machine.” “The Missing part of that exhibit, on display through Dec. 23, uses 50 speakers, two hacked Kinect 3-D cameras, and a whole lot of code and robotics to create an environment where people are moving through the music and interacting with the speakers, without even trying,” writes Wired. Development took about six weeks from accepted pitch to opening night. Matt Mets developed a system that uses stepper motors with potentiometers (in place of servos) in combination with a custom PCB that is a modified version of an old open source MakerBot stepper driver (Matt used to work for MakerBot). After a lot of laser cutting and machining to fabricate the mounts, the PCB on each mount is powered and controlled over a single Ethernet cable daisy-chained along the ceiling of the installation. The chain is driven by a Teensy plugged into a Mac Mini, which is also connected to two Kinects that follow visitors through the space. The system is run by a single openFrameworks app. All the audio is sent from Ableton Live and controlled from OF using local MIDI (e.g., for restarting the piece when people walk in, and turning it down if no one is around). The GitHub repo the team has made available contains everything from PCB schematics to Arduino code to the OF code and can be downloaded here. Kyle also describes some clever tricks: orthographic reprojection of the point cloud, sampling the accelerometer to help align the data from multiple Kinects, generating binned “heatmap” style images from the reprojected points and doing blob detection on that image (a rough sketch of this idea appears after this list). The system also does dynamic renderings of the space using simple speaker models loaded from Rhino. The original layout for all the speakers was done using Rhino/Python scripting, but the final heights were actually based on a call to ofRandom() with a seed, which, Kyle tells us, made him especially happy. Credits: Music by The xx, Robotics by Matt Mets, Software by Kyle McDonald, Producer Aramique Krauthamer, Speakers and space by Sonos with special thanks to Jon-Kyle Mohr for installation and […]
- The Maccabees (in the dark) – Live performance recording with 10 Kinects Two weeks ago we were invited to be at the filming of the new video by The Maccabees, presented by Vevo for the Magners ‘Made in the Dark’ campaign. Unfortunately we could not make it, but earlier today James Alliban posted details of the result. The project is the brainchild of directors Jamie Roberts and Will Hanke, and the performance combined live action footage (shot with an Alexa on a technocrane) with an animated sequence by Jamie Child and James Ballard. The scene was all shot in 3D, with a rig that contained 10 Kinect cameras, each attached to a MacBook Pro. The technical consultant on the project was James Alliban. Three applications were built to achieve this, all using openFrameworks. The client application used ofxKinect to record the point clouds. The millimetre data for each pixel of the depth map was transcoded into 320×240 TIFF images and exported to the hard drive at roughly 32 fps (a sketch of this recording step appears after this list). A server application was used to monitor and control the 10 clients using OSC. Among other tasks, this starts/stops the recording, synchronises the timecode and displays the status, fps and a live preview of the depth map. Once the recording had taken place, a separate ‘mesh builder’ app then created 3D files from this data. Using this software, the TIFFs are imported and transformed back into their original point cloud structure. A variety of calibration methods are used to rotate, position and warp the point clouds to rebuild the scene and transform it into two meshes, one for the band and another for the crowd. A smoothing algorithm was implemented but was dropped in favour of the raw, chaotic Kinect aesthetic. A large sequence of 3D files (.obj) was exported and given to the post production team to create the animated sequence in Maya and After Effects. This app also formats the recorded TIFF and .obj files so that there are only 25 per second and they sit in an easily manageable directory structure. For more information about the project visit James' blog. Credits: Jamie Roberts, Will Hanke, Jamie Child, James Ballard, James […]
- Show me how [not] to fight [openFrameworks, Games] A small LAN party in a warehouse in Pittsburgh is staged. Gamers’ actions (jumps, kills, bullets, grenades, etc.) are transposed into a graphical score for a percussion ensemble to play back in realtime. OSC out of the Source game engine >> Ableton Live >> notes quantized to 1/16 >> MIDI out to openFrameworks for a graphical score on each musician's screen (a sketch of this MIDI-in step appears after this list). Percussion // Lisa Pegher, Gordon Nunn, Ryan Socrates, Peter Roduta Gamers // Alex Klarfeld, Alex Wachsman, Rachel Tadeu and Dinesh Ayyappan By Riley Harmon with support from Marynel Vazquez. More images on […]
- Reactor for Awareness in Motion (RAM) by YCAM – Download C++ creative coding toolkit for creating realtime feedback environments for dancers is now available for download. Available both as an open source download and as applications for Mac and Windows to choreograph or rehearse previously programmed […]
- Smile TV – It works only when you smile Recent Royal College of Art (RCA) design graduate David Hedberg’s Smile TV turns the medium’s engagement pattern on its head: instead of making you smile at on-screen silliness, you have to “smile to […]
- Touch Vision Interface [openFrameworks, Arduino, Android] Created by Teehan+Lax Labs, Touch Vision Interface is a combination of software and hardware that allows realtime manipulation of content on a remote device via the touch interface of a mobile device. Instead of using the mobile device's screen purely as an input, the user views the remote content and manipulates it at the same time, something like, but not strictly, a form of AR. I can still recall the first time I saw an Augmented Reality demo. There was a sense of wonderment from the illusion of 3D models living within the video feed. Of course, the real magic was the fact that the application was not only viewing its surrounding environment, but also understanding it. AR has proven to be an incredible tool for enhancing perception of the real world. Despite this, I’ve always felt that the technology was somewhat limited in its application. It is typically implemented as output in the form of visual overlays or filters. But could it also be used for user input? We decided to explore that question by pairing the principles of AR (like real-time marker detection and tracking) with a natural user interface (specifically, touch on a mobile phone) to create an entirely new interactive experience. The translation of touch input coordinates to the captured video feed creates the illusion of being able to directly manipulate a distant surface (a sketch of this mapping appears after this list). Peter imagines future applications of this technology both in the living room and in large open spaces. Brands could crowd-source more easily with billboard polls, and group participation on large installations could feel more natural. Likewise, other applications could include a music creation experience where each screen becomes an instrument. The possibilities become even more exciting when considering the most compelling aspect of the tool – the ability to interact with multiple surfaces without interruption. No need to switch devices through a secondary UI – simply touch your target. You could imagine a wall of digital billboards that users seamlessly paint across with a single gesture. Created using opencv-android, openFrameworks and Python/Arduino for the LED matrix. Touch Vision Interface (Thanks […]
- ‘Ghost’ installation traps visitors in an interactive snow storm Ghost is an interactive installation of a snow storm, raging within an abandoned, barren landscape. Within this storm the visitor can make out a procession of human forms which seemingly try to find a way out. The bodies are remnants of the previous visitors, their ghosts, trapped in the hostile […]
- Parhelia [vvvv] Parhelia by Paul Prudence is a real-time A-V performance piece where sample-based mechanical sounds are used to orchestrate a family of concentric forms in space. The vvvv scenes suggest the workings of an imaginary machine whose component parts, or 'gears', interact with one another, triggering corresponding sounds. Parhelia uses direct translation of sound to visual material via OSC and MIDI data transmission. Sound design and composition is done in Ableton Live, which runs alongside vvvv. MIDI and OSC data is sent from Ableton Live's timeline and processed in vvvv to choreograph the animation in real-time. A MIDI device is used to control aspects and behaviour of each of the forms during a performance, ensuring that each performance is unique. A variety of field recordings, including samples of mechanical devices, form the basis of the sound design. The patching involved is quite large and modular, so it made sense to create a set of subpatches running from the main control patch. Isolating specific tasks as modules allows them to be re-used, scaled accordingly and used in new work. The image shows the main patch and 15 sub-patches that make up Parhelia layered together on a hi-res sheet. Although none of the subpatches is very complex in itself, the way they interact and fit together is. I'm quite interested in the aesthetics of patch schematics generated by the constraints of visual programming and will write an article on just that sometime soon. (see below) More stills of the piece can be found here. Project Page See also Hydro Acoustic Study [vvvv] by Paul Prudence Paul Prudence is an artist and real-time visual performer working with generative/computational systems, audio-responsive visual feedback and processed video. He is particularly interested in the ways in which sound, space and form can be synaesthetically amalgamated. He is a writer, researcher and lecturer in the field of visual music, process art and computational design. Paulprudence.com – A blog documenting Paul's personal projects including artworks, performances & lectures. Transphormetic.com – A portfolio of recent computational artworks. dataisnature.com – Blog The 16 patches that make up Parhelia: Click here to open the above image full size in new […]
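For Missing, here is a rough illustration of the heatmap-and-blob-detection trick mentioned above (not the project's actual code): flatten the Kinect point cloud top-down onto the floor plane, bin the points into a small grayscale image, then run blob detection on it. It assumes the stock ofxKinect and ofxOpenCv addons; the room bounds, grid size and thresholds are invented for this sketch.

    #include "ofMain.h"
    #include "ofxKinect.h"
    #include "ofxOpenCv.h"

    // Rough illustration: project the Kinect point cloud down onto the floor
    // plane, accumulate the points into a 160x120 grayscale "heatmap", and run
    // blob detection on that image to find people. Bounds are arbitrary.
    class ofApp : public ofBaseApp {
    public:
        ofxKinect kinect;
        ofxCvGrayscaleImage heatmap;
        ofxCvContourFinder contours;

        void setup() {
            kinect.setRegistration(true);
            kinect.init();
            kinect.open();
            heatmap.allocate(160, 120);
        }

        void update() {
            kinect.update();
            if (!kinect.isFrameNew()) return;

            ofPixels bins;
            bins.allocate(160, 120, 1);
            bins.set(0);

            // sample the depth image and accumulate points into floor-plane bins
            for (int y = 0; y < (int)kinect.getHeight(); y += 4) {
                for (int x = 0; x < (int)kinect.getWidth(); x += 4) {
                    ofVec3f p = kinect.getWorldCoordinateAt(x, y); // world-space point
                    if (p.z <= 0) continue;                        // no depth reading
                    int bx = ofMap(p.x, -2000, 2000, 0, 159, true); // room approx. 4m wide
                    int by = ofMap(p.z,     0, 4000, 0, 119, true); // room approx. 4m deep
                    int v = bins[by * 160 + bx];
                    bins[by * 160 + bx] = MIN(255, v + 16);
                }
            }

            heatmap.setFromPixels(bins);
            heatmap.threshold(32);
            // look for person-sized blobs in the binned image
            contours.findContours(heatmap, 20, 160 * 120 / 4, 4, false);
        }

        void draw() {
            heatmap.draw(0, 0, 640, 480);
            contours.draw(0, 0, 640, 480);
        }
    };

    int main() {
        ofSetupOpenGL(640, 480, OF_WINDOW);
        ofRunApp(new ofApp());
    }

In the installation itself the blob centroids would then steer the stepper-motor speakers; here they are only drawn on screen.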
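The recording step described for The Maccabees, writing per-pixel millimetre depth out as 320×240 TIFFs, could look roughly like this. It assumes a recent ofxKinect where getRawDepthPixels() returns 16-bit pixels; the file naming and downsampling are guesses for illustration, not the production client app.

    #include "ofMain.h"
    #include "ofxKinect.h"

    // Sketch of the recording idea: grab the raw 16-bit depth image (millimetre
    // values per pixel), downsample it to 320x240 and write one TIFF per frame.
    class ofApp : public ofBaseApp {
    public:
        ofxKinect kinect;
        int frameCount = 0;

        void setup() {
            kinect.init();
            kinect.open();
        }

        void update() {
            kinect.update();
            if (!kinect.isFrameNew()) return;

            ofShortPixels depth = kinect.getRawDepthPixels(); // copy of the 640x480 mm data
            depth.resize(320, 240);
            ofSaveImage(depth, "depth_" + ofToString(frameCount++, 6, '0') + ".tif");
        }
    };

    int main() {
        ofSetupOpenGL(320, 240, OF_WINDOW);
        ofRunApp(new ofApp());
    }

A separate app, like the project's 'mesh builder', would later read these TIFFs back and reconstruct the point clouds from the stored millimetre values.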
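For the Show me how [not] to fight pipeline, the final hop, MIDI from Ableton Live into openFrameworks to drive each musician's score, might be wired up roughly as below with the ofxMidi addon; the port index and the way events are drawn are illustrative assumptions.

    #include "ofMain.h"
    #include "ofxMidi.h"

    // Rough sketch of the MIDI-in side: collect incoming note-on events and
    // draw them as a simple scrolling score, pitch mapped to height.
    class ofApp : public ofBaseApp, public ofxMidiListener {
    public:
        ofxMidiIn midiIn;
        std::vector<ofxMidiMessage> notes;

        void setup() {
            midiIn.openPort(0);      // assumed: first available MIDI port
            midiIn.addListener(this);
        }

        void newMidiMessage(ofxMidiMessage& msg) {
            // called from the MIDI thread; a real app would guard this with a mutex
            if (msg.status == MIDI_NOTE_ON && msg.velocity > 0) {
                notes.push_back(msg);
            }
        }

        void draw() {
            ofBackground(0);
            int n = notes.size();
            for (int i = MAX(0, n - 64); i < n; i++) {
                float x = ofMap(i, n - 64, n, 0, ofGetWidth());
                float y = ofMap(notes[i].pitch, 0, 127, ofGetHeight(), 0);
                ofDrawCircle(x, y, 4);
            }
        }
    };

    int main() {
        ofSetupOpenGL(800, 400, OF_WINDOW);
        ofRunApp(new ofApp());
    }
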
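And for Touch Vision Interface, the core mapping, translating a touch point seen in the phone's camera view onto the remote display, is typically a homography built from the tracked screen's corners. Here is a plain OpenCV sketch of that idea, with corner coordinates invented for illustration:

    #include <opencv2/opencv.hpp>
    #include <iostream>

    // Illustration of the touch-to-remote-surface mapping: given the four corners
    // of the tracked screen as seen in the phone's camera image, build a
    // homography into the remote display's pixel space and push touch points
    // through it. All coordinates below are invented example values.
    int main() {
        // screen corners detected in the camera image (pixels)
        std::vector<cv::Point2f> cameraCorners = {
            {212, 148}, {431, 160}, {422, 318}, {205, 301}
        };
        // the same corners in the remote display's own coordinates (e.g. 1920x1080)
        std::vector<cv::Point2f> displayCorners = {
            {0, 0}, {1920, 0}, {1920, 1080}, {0, 1080}
        };

        cv::Mat H = cv::getPerspectiveTransform(cameraCorners, displayCorners);

        // a touch event reported in camera-image coordinates
        std::vector<cv::Point2f> touch = { {300, 240} }, mapped;
        cv::perspectiveTransform(touch, mapped, H);

        std::cout << "touch maps to display pixel "
                  << mapped[0].x << ", " << mapped[0].y << std::endl;
        return 0;
    }

The mapped coordinate would then be sent to the remote device (for example over OSC or a socket) and injected as a local touch event there.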
Posted on: 04/08/2012