The Saatchi & Saatchi New Directors’ Showcase is one of the highlights of the Cannes Lions Festival and has become one of the most popular events of Cannes week. Again this year it took place in the Grand Auditorium, with overspill into the Debussy. Saatchi & Saatchi senior creative team Jonathan Santana & Xander Smith came up with the initial concept for this year’s showcase – ‘Hello Future’, a dark, retro-futuristic setting with a ‘HAL’-like host. Juliette Larthe, an independent producer (formerly of Warp Films), pulled a creative team together: Marshmallow Laser Feast (Memo Akten, Robin McNicholas, Barney Steel), Aaron Meyers, Jamie Lidell, Clark, Mark Titchner and Gary Card.
Marshmallow Laser Feast (MLF) directed the visual performance and created all the graphics for the ‘HAL’ character, the titles and various other ‘interludes’ (such as a special hypnosis moment). The team worked closely with Aaron Meyers, who developed the Kinect performance software for Jamie Lidell’s section, and with the stage, lighting and sound designers to pull together the overall look and feel.
The graphics app was made in Cinder, and there are three stage computers, each with a Kinect, running an OpenNI-based openFrameworks application that sends information over OSC. The software was controlled live, with controls mapped to an Oxygen8 MIDI keyboard and a few little controls on an iPad running Mrmr.
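For a sense of how such a bridge might look, here is a minimal openFrameworks-style sketch of one stage computer forwarding tracking data to the graphics app over OSC. The host, port, OSC address and message layout are all assumptions made for illustration; the production code is not public.

```cpp
// Hypothetical sketch of a per-station Kinect app sending data over OSC.
#include "ofMain.h"
#include "ofxOsc.h"

class KinectStation : public ofBaseApp {
public:
    ofxOscSender sender;

    void setup() override {
        // Address of the machine running the Cinder graphics app (assumed).
        sender.setup("192.168.0.10", 12345);
    }

    // Called once per frame with a user centroid already extracted from the
    // OpenNI user tracker (the extraction itself is omitted here).
    void sendUserCentroid(int userId, const ofVec3f& c) {
        ofxOscMessage m;
        m.setAddress("/kinect/user/centroid"); // address is an assumption
        m.addIntArg(userId);
        m.addFloatArg(c.x);
        m.addFloatArg(c.y);
        m.addFloatArg(c.z);
        sender.sendMessage(m);
    }
};
```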
The process of creating images was very analogue, utilising found objects such as a step ladder, a couple of broomsticks, rear-projection perspex, a large metal bowl, bin bags, gaffer tape and glass prisms. These were filmed with a handheld Point Grey Dragonfly2, the live feed sent into a laptop running Modul8, where a few MIDI sliders and knobs deformed and affected the image in realtime; the result was projected onto perspex and filmed back with a Canon 5D. The show was run using the new VDMX to play content on the 14x8m rear screen and the side ‘banana’ screens.
Behind-the-scenes video and images coming soon…
UPDATE 01/09/2011 – Making-of video added. See below.
- OFFF + CAN Workshop Collaborative 2011 [Cinder, oF, Js, Events] Earlier this year we had been thinking about the concept of “curated workshops”: an opportunity to bring people together to work for a very short period of time and share their creations. These would involve setting up a team, inviting a few high-profile individuals and opening up submissions for participation. When Héctor Ayuso approached me to give a talk at OFFF, I thought that instead of talking about CAN this would be a great opportunity to do something more: a workshop, using the workshop material as the content to drive the talk. Héctor and I agreed, and 'Workshop Collaborative' was born.

What was the aim of “Workshop Collaborative”?
1. Initiate collaborations between those that share common interests.
2. Create a playing field, both physical and virtual.
3. Allow ideas to evolve by asking questions.

When we announced the workshop back in January, we also opened applications for participation. In total, 80 applications were submitted and 11 participants were chosen by the team, which included Aaron Koblin, Ricardo Cabello (mr.doob), myself and Eduard Prats Molner. The participants were: Marek Bereza, Alba G. Corral, Andreas Nicolas Fischer, Martin Fuchs, Roger Pujol Gomez, Marcin Ignac, Rainer Kohlberger, Thomas Mann, Joshua Noble, Roger Pala and Philip Whitfield.

Programme - Single Day
09:00 - 10:00 Introductions / Teams
10:00 - 13:30 Stage 1
13:30 - 14:00 Lunch
14:00 - 19:00 Stage 2 (Completion)
Total creation time: 6.5 hours

A few weeks before the workshop, Aaron and I decided on four themes that we would allow to influence the work we would be making. By letting the other participants comment and give feedback on these themes, we would discover the areas we all wanted to explore. The themes were:
1. Digital Ecosystem - Build an application, an organism of information, sound and visuals; a digital ecosystem that flows through different mediums and evolves: a living system that travels through technology and mutates through tools.
2. Analogue Digital - Explore the notions of physicality in code, using made objects as assets for code: scanned 3D objects, cut paper and cut-outs, traditional 2D scans, 3D objects scanned using flatbed scanners, etc.
3. Projection Mapping - Address projection mapping conceptually. Moving away from technical demos, it is time to question what it all means: surface, source, angle, point projection, scale, form, interaction, animation.
4. Data Re-embodied - Tell stories through the juxtaposition of data sources and their methods of representation. How can we create new meaning, understanding and value from the reinterpretation of data?

By no means did this mean we would have to choose one theme over another. The purpose was to get a feel for where the interest lay amongst the participants and to set up, so to speak, a 'playing field' that would allow first ideas to develop. We knew that, working together for a single day, we would not be able to produce anything of "finished" quality; we would rather focus on the subjects themselves and see what came out. Following the feedback, a number of keywords were derived to summarise our interests: ecosystem, data, scan, evolution, input, mutation, osc, node, rhythm, pattern, touch, physical, language, viewport and mobility. Five projects developed during the 6.5 hours of work: Kinect > WebGL Bridge, Kinect Image Evolved, Input Device, Data Flow and Receipt Racer.
-- Kinect > WebGL
This project was the work of mr.doob, Marcin and Edu, although other people were involved too. The task was to create a bridge between the Kinect and the browser, allowing a real-time feed over the web. Aspirations were much higher than the time allowed: instead of utilising the node.js server (which I understand was 99% complete anyhow), the team settled for feeding downscaled image data from a Cinder application, via standard HTTP requests, to a three.js script which read the images at about 10 fps. Several rendering styles are presented below. The first is a simple point cloud done by Marcin for debugging, while the rest were done by mr.doob using his amazing three.js engine. Download the .js code here.

-- Kinect Image Evolved
While Ricardo was working on the .js part, Marcin was exploring different ways of representing the Kinect image. In an attempt to get away from the standard Kinect point cloud, we developed the idea of applying a slit-scan effect to it: the point cloud is dispersed along a time lapse, with different bands representing different moments in time. Marcin also explored what happens when a point's location is reversed once a particular depth is reached. The videos below show both effects. Code available soon. Thomas and Andreas were also testing different tools for manipulating the Kinect image: Meshlab and Blender were used to pull in Kinect point clouds and convert them into meshes which could then be rendered, distorted, split, etc.

-- Input Device
Marcin was also working on ways to control the input, i.e. how one could interact with the Kinect point cloud. We were toying with the idea of assigning different devices, over OSC, to different Kinect body parts. This would allow each individual to be assigned a unique element of the point cloud and to interact with it. The first step was to use simple gyroscope data sent from an iPhone over OSC; the video below shows what is happening. Meanwhile, Rainer and Roger were working on the iPhone application that would send the OSC data. Rather than just utilising the gyro or accelerometer, Rainer was exploring different forms of interaction with the device, seeing whether a language could evolve that would somehow enhance emotional attachment to the Kinect body parts. The videos below show an instrument-like application that also has audio feedback. Code available soon.

-- Data Flow
With all this data moving around, Marek wondered what would happen if the input and output were in the same medium, so that you could compare them, apples for apples. Marek examined the loop by subtracting the initial input from the output, leaving just the parts that change. JPEG compression was chosen for the loop because it was easily available in oF and ubiquitous enough to warrant investigation. The boxy images are the result of feeding the JPEG "high" quality compression back into itself and subtracting it from the original; the finer images use the "best" compression setting. Marek then tried the same thing with sound (using Logic), taking the original sound, then the encoded version, and seeing what is left. You can hear all the sounds below.
Original / OFFFCAN Workshop Collaborative by filipvisnjic
Encoded / OFFFCAN Workshop Collaborative by filipvisnjic
Difference / OFFFCAN Workshop Collaborative by filipvisnjic
Code available soon.
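To make Marek's feedback loop concrete, here is a minimal sketch of the encode-and-subtract idea, assuming openFrameworks image I/O. The file name, iteration count and single-function structure are illustrative choices, not Marek's actual code.

```cpp
// Sketch: re-encode an image as JPEG a number of times, then subtract the
// result from the original so that only the compression artefacts remain.
#include "ofMain.h"
#include <cstdlib>

ofPixels jpegResidue(const ofPixels& original, int iterations) {
    ofPixels current = original;
    for (int i = 0; i < iterations; i++) {
        ofSaveImage(current, "feedback.jpg", OF_IMAGE_QUALITY_HIGH);
        ofLoadImage(current, "feedback.jpg");
    }
    ofPixels diff = original;
    // Assumes both buffers are RGB and therefore the same size.
    for (size_t p = 0; p < diff.size(); p++) {
        diff[p] = (unsigned char) std::abs((int) original[p] - (int) current[p]);
    }
    return diff;
}
```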
-- Receipt Racer
Receipt Racer combines different input and output devices into a complete game. It was made by Martin, Philip and Joshua using a receipt printer (a common device you can see in every convenience store), a small projector, a Sony PS controller and a Mac running a custom openFrameworks application. Print is a static medium; that's why, Philip, Martin and Josh explain, it was an intriguing challenge to create an interactive game with it. First the team tried to do it with only the printer as the visual output, but that seemed rather impossible. Then Joshua Noble came up with a small projector, perfect for projecting a car onto a pre-printed road. There is no game without an input device, and they were lucky that at least one of them always carries a gamepad around. The cables connect back to the laptop running the openFrameworks application, which was programmed entirely during the workshop. Internally it runs something like a basic JS game: just a car driving on a randomly generated race track. It then broadcasts its components to the external devices, prints the street and guesses where the car's projection is supposed to be in order to perform the hit test. That's the trickiest part: everything has to be in sync and needs some calibration at the beginning (a hedged sketch of the idea follows at the end of this post). The paper also has a bit of a mind of its own and tends to slide around or curl, but that's nothing some duct tape and cardboard can't fix. It was a lucky day: somehow everything was just lying around waiting to be used, even the stand and the plastic holder you would normally use for a conference name badge. Even the timing was perfect; right at the end of the workshop we finished adding details like a little score and the 'YOU CRASHED' text. Project Page (code available)

-- On Saturday we presented the creations. Despite the fact that Erik Spiekermann was presenting in the other OFFF room, we had a full theatre (an estimated 500 people), plus another room where our talk could be watched on a large screen. Photo above by Arseny Vesnin. CAN would like to thank all the participants in the workshop, as well as Aaron and Ricardo for taking time out of their busy schedules to take part. For more information on the workshop and all future information/code/links see creativeapplications.net/offf2011 Photos by Jason Vancleave We leave you with the OFFF Barcelona 2011 Main Titles, made for OFFF by PostPanic (full screen […]
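As promised above, here is a hedged sketch of the Receipt Racer sync/hit-test idea: the game has to guess which printed row of road currently sits under the projected car. All names and the constant print speed are assumptions; the real app layered manual calibration (and duct tape) on top of something like this.

```cpp
#include <cmath>
#include <vector>

struct RoadRow { float centerX; float halfWidth; }; // one printed line of road

struct HitTest {
    std::vector<RoadRow> rows;   // the road exactly as it was sent to the printer
    float rowsPerSecond;         // measured print speed (hand-calibrated)
    float projectionOffsetRows;  // distance from print head to projection area

    bool carCrashed(float carX, float secondsSincePrintStart) const {
        // Estimate which printed row is under the car's projection right now.
        int row = (int) std::floor(secondsSincePrintStart * rowsPerSecond
                                   - projectionOffsetRows);
        if (row < 0 || row >= (int) rows.size()) return false; // not printed yet
        const RoadRow& r = rows[(size_t) row];
        return std::fabs(carX - r.centerX) > r.halfWidth;      // off the road?
    }
};
```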
- ‘Ghost’ installation traps visitors in an interactive snow storm Ghost is an interactive installation of a snow storm, raging within an abandoned, barren landscape. Within this storm the visitor can make out a procession of human forms which seemingly try to find a way out. The bodies are remnants of the previous visitors, their ghosts, trapped in the hostile […]
- flight404 at Decode / V&A [Events, News] Robert Hodgin aka flight404 has just posted this video of an application he is working on for the Decode event at London's V&A, opening next month. Robert was asked to rework his older Solar piece so that it could be audio-responsive in real time. Whilst the details of the actual exhibit are as yet unknown, it is nevertheless exciting to see Robert's work at the V&A. The video at the bottom is the older piece, but do make sure you watch it in HD / full screen. He will be joined by names such as Golan Levin, Daniel Brown, Daniel Rozin, Troika and Simon Heijdens. More about the event here. 8 December 2009 - 11 April 2010 // Curated in collaboration with onedotzero (via Homage to Radiolab « all manner of […]
- Fabricate Yourself [openFrameworks, Kinect] Fabricate Yourself is a project by Karl D.D. Willis that documented the Tangible, Embedded and Embodied Interaction conference. Given the tangible theme of the conference, Karl decided to engage the community by capturing and fabricating small 3D models of attendees. Attendees first capture their favourite pose using a Microsoft Kinect. The depth image from the Kinect is processed into a mesh and displayed onscreen in real time; at any time they can capture the mesh and save it as an STL file. Dovetail joints are automatically added to the side of the 3 x 3 cm models so they can be snapped together, allowing multiple models to be connected into a larger overall model, like a jigsaw puzzle. The STL files were printed using a Dimension uPrint 3D printer provided by Stratasys. Created using openFrameworks. Project Page (Thanks Karl) Previously: Beautiful Modeler [iPad, openFrameworks] - Gestural sculpting on ... Trace Modeler [openFrameworks] - Real-time video to create […]
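For readers curious about the depth-to-mesh step, here is a minimal sketch of one way to triangulate a Kinect depth image in openFrameworks-style C++. The millimetre buffer, the z scaling and the absence of hole-filling are assumptions; the dovetail joints and STL export that Fabricate Yourself adds are omitted.

```cpp
#include "ofMain.h"

ofMesh depthToMesh(const unsigned short* depthMM, int w, int h) {
    ofMesh mesh;
    mesh.setMode(OF_PRIMITIVE_TRIANGLES);
    // One vertex per depth pixel (invalid/zero readings are kept for brevity).
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            mesh.addVertex(glm::vec3(x, y, depthMM[y * w + x] * 0.1f));
    // Two triangles per pixel quad.
    for (int y = 0; y < h - 1; y++) {
        for (int x = 0; x < w - 1; x++) {
            int i = y * w + x;
            mesh.addTriangle(i, i + 1, i + w);
            mesh.addTriangle(i + 1, i + w + 1, i + w);
        }
    }
    return mesh;
}
```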
- Kinect – One Week Later [Processing, oF, Cinder, MaxMSP] Last week we wrote about the wonderful work that happened over the weekend following the release of the open-source drivers for the Xbox Kinect. Today we look at what has happened since, and how the Microsoft gadget is being utilised in the creative code community. In case you missed our post from last week, you can see it here: Kinect – OpenSource [News]

Chris from ProjectAllusion.com got to play with the Kinect, and one late night he made this little demo in Processing using the hacked Kinect drivers. The Processing app sends out OSC with depth information based on the level of detail and the defined plane; the iPad app uses TouchOSC to send different values back to the Processing app.

- Daniel Reetz and Matti Kariluoma have been hacking a PowerShot A540 camera for infrared sensitivity, enabling you to see the Kinect's projected infrared dots in space. "Microsoft's new Kinect sensor is garnering a lot of attention from the hacking community, but the technical specifics of how it works still aren't clear. I am working to understand the technology at a fundamental level – my interest is in the optical side of Kinect. My ultimate goal is to make the sensor nearsighted, so that the depth resolution can be used to scan small objects. The first step in understanding a technology is to look at it; that's why teardowns like this one at iFixit are so important."

- Ben at KODE80, the creator of HoloToy, also created this quite wonderful demo of the Kinect being used to track your position in space and render the image on screen accordingly, creating the illusion of a 3D image. "Several months ago I threw together an OSX HoloToy demo that used OpenCV and the iSight camera to replicate the facial-recognition head tracking used in the iPhone 4/iPod touch version. This seemed like a perfect place to insert the Kinect! The above video shows various scenes with the perspective controlled via the Kinect. At this point it is simply tracking a specified depth range, but with motion tracking of the depth map and other techniques this could be really special."

- Philipp Robb has some early experiments with a Microsoft Kinect depth camera on a mobile robot base. Say hello to KinectBot. The robot uses the camera for 3D mapping and follows gestural directions. It's basically a pimped iRobot Create with a battery-powered Kinect which streams the depth and colour images to a remote host for SLAM and 3D map processing.

- Peter Kirn covered the work Ben X Tan has been doing using the Kinect for MIDI control. The result: depth-sensing, gestural musical manipulation! From the description: "Coded in C#.net using this: http://codelaboratories.com/nui Very hacky, ugly, yucky alpha prototype; source code available here: http://benxtan.com/temp/pmidickinect.zip The next project is making a version of pmidic that uses Kinect. Then you can control Ableton Live or any other MIDI software or hardware with your limbs. Isn't that amazing!!!" If you are interested, you should also check out: http://pmidic.sourceforge.net/ and http://benxtan.com

- Yesterday, Stephan Maximilian Huber posted this video of a Joy Division-esque realtime 3D scan using the Kinect, where points are connected only horizontally. Very effective and quite beautiful.

- Meanwhile, Dominick D'Aniello has been working on Kinect Object Manipulation, a system built with openFrameworks that allows you to rotate and manipulate 3D objects using the Kinect.
A threshold is used on the depth map to filter out everything but my hands, and then blob detection is used to locate their centres. This information is then used to scale and rotate an onscreen object. Note that because the Kinect provides depth information, the object can be rotated on both its Z and Y axes; with a bit of work, a gesture could theoretically also be made to rotate it along the X axis (a sketch of this approach appears at the end of this post).

- A few days ago we posted a quick installation prototype by Theo Watson and Emily Gobeille (design-io.com) using the libfreenect Kinect drivers and ofxKinect (an openFrameworks addon). The system does skeleton tracking on the arm, determining where the shoulder, elbow and wrist are, and uses this to control the movement and posture of a giant bird!

- More great news: the Kinect now also works with MaxMSP, thanks to an external created by Jean-Marc Pelletier. "It's still very alpha. I still have to implement 'unique' mode, multiple camera support and proper opening/closing, and I can't seem to be able to release the camera properly, but the video streams work as they should." Read more on the forums.

- The Kinect now also runs in VVVV: late-evening live coding at node10 by Julien Vulliet (thanks @defetto).

- Last week Rui Madeira also ported the drivers to the Cinder framework, and this morning Robert Hodgin aka flight404 posted these videos to his Vimeo account. Made with Cinder and the Kinect sensor; runs in realtime.

Another great week of Kinect projects. The work is finally beginning to take shape beyond tech demos, which is wonderful to see. I highly doubt we will be posting any more updates of this nature, as future work will develop as individual projects which will require their own posts. Big up once again to the communities, including openFrameworks, Processing, Cinder, MaxMSP and many […]
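As referenced above, a hedged reconstruction of the threshold-and-centroid approach from the object-manipulation demo (not Dominick's actual code) might look like this; proper blob detection is collapsed into a single averaged position for brevity.

```cpp
#include <cstdint>
#include <vector>

struct Centroid { float x = 0, y = 0; int count = 0; };

// depthMM: one Kinect depth frame in millimetres; nearMM/farMM bracket the hands.
Centroid handCentroid(const std::vector<uint16_t>& depthMM, int w, int h,
                      uint16_t nearMM, uint16_t farMM) {
    Centroid c;
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            uint16_t d = depthMM[(size_t) (y * w + x)];
            if (d >= nearMM && d <= farMM) { c.x += x; c.y += y; c.count++; }
        }
    }
    if (c.count > 0) { c.x /= c.count; c.y /= c.count; } // average position
    return c;
}
```

Splitting the thresholded pixels into two blobs gives two such centroids; the distance between them can drive scale and the angle between them rotation, which is roughly the interaction the video shows.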
- Dancing With Swarming Particles [Kinect, Unity] Dancing With Swarming Particles is an interactive installation and performance that explores the relationship between a physical user/performer and a virtual performer, the “avatar”, which has the physical characteristics of morphing, flocking particles. The avatar’s body is composed of flocking particles that initially float in the virtual space without any apparent order; driven by the user/performer’s movements, the particles start to morph into their body. By Rodrigo Carvalho, with performer Tamar Regev and coordinator Anna Mura. Made at SPECS [Synthetic Perceptive, Emotive and Cognitive Systems group], UPF, Barcelona. Made in Unity3d, using the Kinect and OSCeleton [vimeo.com/17966780] the Skeleton […]
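To illustrate the morphing mechanic in code, here is a rough, self-contained sketch of particles being pulled toward a target point sampled from the performer's skeleton (as delivered by OSCeleton). The original is a Unity3d project; every name and gain here is an assumption, and the separation/alignment rules of a full flocking system are omitted.

```cpp
struct Vec3 { float x, y, z; };

struct Particle {
    Vec3 pos{0, 0, 0}, vel{0, 0, 0};

    // attraction = 0 lets the swarm float freely with no apparent order;
    // raising it as the performer moves makes the particles morph into the body.
    void update(const Vec3& target, float attraction, float dt) {
        vel.x += (target.x - pos.x) * attraction * dt;
        vel.y += (target.y - pos.y) * attraction * dt;
        vel.z += (target.z - pos.z) * attraction * dt;
        vel.x *= 0.98f; vel.y *= 0.98f; vel.z *= 0.98f; // damping keeps it loose
        pos.x += vel.x * dt; pos.y += vel.y * dt; pos.z += vel.z * dt;
    }
};
```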
- The Maccabees (in the dark) – Live performance recording with 10 Kinects Two weeks ago we were invited to the filming of the new video by The Maccabees, presented by Vevo for the Magners ‘Made in the Dark’ campaign. Unfortunately we could not make it, but earlier today James Alliban posted details of the result. The project is the brainchild of directors Jamie Roberts and Will Hanke, and the performance combined live-action footage (shot with an Alexa on a technocrane) with an animated sequence by Jamie Child and James Ballard. The scene was shot in 3D, with a rig that contained 10 Kinect cameras, each attached to a MacBook Pro. The technical consultant on the project was James Alliban. Three applications were built to achieve this, all using openFrameworks. The client application used ofxKinect to record the point clouds: the millimetre value for each pixel of the depth map was transcoded into 320×240 TIFF images and exported to the hard drive at roughly 32 fps. A server application was used to monitor and control the 10 clients over OSC; among other tasks, it starts/stops the recording, synchronises the timecode and displays the status, fps and a live preview of each depth map. Once the recording had taken place, a separate ‘mesh builder’ app created 3D files from this data. Using this software, the TIFFs are imported and transformed back into their original point-cloud structure. A variety of calibration methods are used to rotate, position and warp the point clouds to rebuild the scene and transform it into two meshes, one for the band and another for the crowd. A smoothing algorithm was implemented but dropped in favour of the raw, chaotic Kinect aesthetic. A large sequence of 3D files (.obj) was exported and handed to the post-production team to create the animated sequence in Maya and After Effects. This app also formats the recorded TIFF and .obj files so that there are only 25 per second, arranged in an easily manageable directory structure. For more information about the project visit James' blog. Credits: Jamie Roberts, Will Hanke, Jamie Child, James Ballard, James […]
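As a rough illustration of what the 'mesh builder' has to undo, here is a sketch of back-projecting one recorded 320x240 depth frame into a point cloud with a standard pinhole model. The focal length and principal point are typical Kinect values scaled to this resolution, not the production calibration.

```cpp
#include <cstdint>
#include <vector>

struct Point3 { float x, y, z; };

std::vector<Point3> depthToPointCloud(const std::vector<uint16_t>& depthMM) {
    const int w = 320, h = 240;
    const float fx = 285.0f, fy = 285.0f;      // focal lengths in pixels (assumed)
    const float cx = w / 2.0f, cy = h / 2.0f;  // principal point (assumed)
    std::vector<Point3> cloud;
    cloud.reserve((size_t) w * h);
    for (int v = 0; v < h; v++) {
        for (int u = 0; u < w; u++) {
            float z = depthMM[(size_t) (v * w + u)] / 1000.0f; // mm -> metres
            if (z <= 0.0f) continue;                           // no reading
            cloud.push_back({ (u - cx) * z / fx, (v - cy) * z / fy, z });
        }
    }
    return cloud;
}
```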
- Quadrotors at the Saatchi & Saatchi New Directors Showcase 2012 by MLF – Details Now in its 22nd year, the Saatchi & Saatchi New Directors’ Showcase hit Cannes again, unveiling another presentation of new directorial talent. Marshmallow Laser Feast (Robin McNicholas, Memo Akten and Barnaby Steel) were the creative and technical directors of the production, which included a theatrical performance by 16 flying robots reflecting light beams onto the stage. CAN got all the details on how this mesmerising performance came into being. Memo describes the goal as creating something simple, beautiful and mysterious: "to push the experience to that of watching an abstract virtuoso being made of light, playing a bizarre, imaginary musical instrument." The performing quadrotors are not the *stars* of the show, but rather the light forms they generate. Their role, Memo explains, is to manipulate the space by sculpting the light, creating a ballet of anthropomorphic light forms. As the audience anticipates a performance on the stage, buzzing fills the dark auditorium, causing confusion; no one has any idea what is about to happen. Sixteen quadrotors take the stage, hovering above a pyramid, with light beams aimed at each one and reflected back onto the stage. They move, reassemble, reshape the space. "The most fascinating / educational / unexpected aspect of the project for me (probably due to my naivety and inexperience in working with robotics) is how the outcome ended up being a definite collaboration between the team (us humans) and the hardware (the vehicles themselves and the moving head lights)." The team knew that the hardware was going to impose constraints and that they would have to stay within the confines of the physically possible; after all, these are flying machines and gravity is at play. They also knew they were not going to get exactly the same motion they had in their animations and simulations. What they didn't expect was that the flying robots would add such a distinctly charming characteristic of movement; all of the team members fell in love with them instantly, as soon as they saw the machines fly the trajectories. The quadrotors themselves are the brainchild of Alex Kushleyev and Daniel Mellinger of KMel Robotics. University of Pennsylvania graduates, Alex and Daniel are experts in hardware design and high-performance control. Their quadrotors push the limits of experimental robotics, and the ones performing at the NDS were built and programmed specifically for the event. The MLF team started working on the project back in January this year. They brought KMel Robotics onboard to collaborate on the robots, asking them to build a set of quadrotors with a (polycarbonate) mirror on a servo and super-bright LEDs. MLF animated the robots (trajectories, mirrors, LEDs) in Cinema 4D and developed a simulation environment (using Xpresso + COFFEE) to track the virtual vehicles with virtual spotlights, animate the mirrors, bounce the lights off the mirrors, etc. They could simulate everything accurately (minus airflow dynamics), including warnings for impossible or dangerous manoeuvres (i.e. acceleration, velocity, proximity etc.). The generated data (trajectories, mirrors, LEDs) was then exported, using a custom Python exporter, into a custom data format which they could feed straight into KMel's system. So C4D wasn't used for just previz, but ended up being fed straight into the flying robots. The quadrotors were tracked by a VICON mocap rig.
The setup included about 20 cameras mounted on a truss 7.5m high, covering a 9m x 4.3m area. KMel wrote the software that uses the VICON tracking data to control their vehicles: each vehicle knows where it wants to be (based on the exported trajectories) and where it is (based on the VICON data), and makes the necessary motor adjustments to get there, a much more complicated process than it sounds. The VICON tracking data also feeds into an openFrameworks app Memo created. The moving-head-light animations are all realtime and based on the tracking data: VICON says "quadrotor #3 is at (x, y, z)", and the oF app responds by adjusting Sharpy #3's pan/tilt (over DMX) so that its beam hits quadrotor #3 (a hedged sketch of this aiming step appears after the credits below). The oF app also sends the data to turn lights on and off (DMX) and to launch other lighting presets at particular times (gobos etc.). The team also used the VICON rig to calibrate the VICON space (the quadrotor coordinate system) to the stage space (the coordinate system used for the animations), by placing tracking markers on all of the lights on the floor so the software knows where all the lights are in the world. The VICON rig was likewise used to calibrate each individual Sharpy's orientation motors: when they sent instructions to set pan/tilt to 137 deg / 39 deg, to Memo's dismay the lights were always considerably off (even though they have very precise motors), so he had to map his desired angles to real-world angles, specific to each device. Most of the testing, playing, calibrating and setting up was done on an iPad, using a Lemur configuration Memo built for the setup. For the actual show everything was preprogrammed and nothing was performed live. The music was created by Oneohtrix Point Never, with whom the team worked closely and iteratively to develop a bespoke piece for the performance: the team would animate and send him the simulation, he would add music and send it back, they would animate again, and so on, right up until a few days before the performance. "I now realise that it's no different to a choreographer instructing a dancer, who takes the choreography and makes it their own, or a composer writing a piece of music for a musician, who takes the score and makes it their own. These robots took our animations and made them their own. We quickly realised this and fully embraced it, adding little touches which would really allow the vehicles' quirks and character to fully shine through." marshmallowlaserfeast.com | Directors Showcase 2012

Event concept created by: Jonathan Santana & Xander Smith, Saatchi & Saatchi
Producer: Juliette Larthe
Production Supervisor: Holly Restieaux
Show Directors: Marshmallow Laser Feast - Memo Akten, Robin McNicholas, Barney Steel
MLF Team: Raffael Ziegler, Devin Matthews, Rob Pybus, James Medcraft
Quadrotor Design & Development: KMel Robotics
Sound Design: Oneohtrix Point Never
Intro Music: “Shine a Light”, Spiritualized®
Set Design: Sam & Arthur
Production: Ben Larthe, Mike Tombeur, […]
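As mentioned above, here is a hedged sketch of the light-aiming step: given a fixture's position and a quadrotor's VICON position, compute the ideal pan/tilt angles. Axis conventions are assumptions, and Memo's measured per-device angle remap and the DMX output are left out.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct PanTilt { float panDeg, tiltDeg; };

PanTilt aimLight(const Vec3& light, const Vec3& quad) {
    const float RAD_TO_DEG = 57.29578f;
    float dx = quad.x - light.x;
    float dy = quad.y - light.y;                // y is up (assumed)
    float dz = quad.z - light.z;
    float horiz = std::sqrt(dx * dx + dz * dz); // distance in the ground plane
    PanTilt pt;
    pt.panDeg  = std::atan2(dx, dz) * RAD_TO_DEG;
    pt.tiltDeg = std::atan2(dy, horiz) * RAD_TO_DEG;
    return pt; // these "ideal" angles still need the per-fixture remap
}
```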