When James George and Alexander Porter first took the Kinect on a photographic journey of the NY Subway, little did they know that within a year the project would become RGB+D and raise almost $35k on Kickstarter to fund the development of “Clouds”.
While the team is currently busy working on Clouds, they continue to explore the medium. The latest is “Drowning of Echo”, an interpretation of the Transformation of Echo and the story of Narcissus from Ovid’s Metamorphoses, drawing on McLuhan’s essays in Understanding Media.
This is the first film Alexander and James shot together, back in December 2011, as a follow-up to their Depth Editor Debug experiments.
It was important to us to take the aesthetics away from a computational reference and push the medium into creating something in its own referential space. In editing it this spring we were able to draw on the new, unreleased capabilities of the toolkit, such as specular relighting and crepuscular rays.
The medium itself is still very new, and in this latest piece James and Alexander continue to interrogate their own toolset. While it may be premature to categorise this as a new form of cinema, it does raise critical questions about how new tools impact the creation and communication of narrative in the moving image.
(also, new RGB+D workshop @ Eyebeam in July!)
- Kinect RGB+Depth Filmmaking [openFrameworks] Golan Levin was invited by the FITC conference to answer a series of "Ask Me Anything" questions posted by Reddit visitors. At the STUDIO for Creative Inquiry, Golan's video was created by Fellows James George and Jonathan Minard, artists-in-residence researching new forms of experimental 3D cinema. Their work explores the notion of "re-photography", in which otherwise frozen moments in time may be visualized from new points of […]
- RGB+Depth Workshop with James George and Alexander Porter – Barcelona, Spain / 28th April photo by TELLART In case you missed Resonate festival and live in Barcelona, this is your opportunity to participate in the RGB+Depth workshop led by James George and Alexander Porter. Taking place this coming Saturday and Sunday (28th and 29th April) at Hangar, Barcelona, Spain, James and Alexander will introduce you to this exciting new form of filmmaking using Kinect and DSLR. "We will show how to use the new RGBDToolkit, an open source and cross-platform (Windows + OS X) system for calibrating, capturing, and visualizing Kinect data combined with HD video, creating a unique hybrid of video and 3D graphics. The workshop will be two days. On the first we'll learn the RGBD workflow hands-on, from camera mounting and calibration to shooting and rendering techniques. On the second day we will use the data captured the day before: openFrameworks hackers will pair up with designers and videographers to dig into the RGBDToolkit code. Together we'll create new expressive ways of remixing data with generative and dynamic effects." Price for Saturday and Sunday: €125 / €25 per day. Intended for audiovisual artists and filmmakers as well as software programmers, there is no coding experience necessary to take the workshop. For more information see rgbdtoolkit.com/hangar.html and if interested send an RSVP note to […]
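At the heart of the calibration and capture workflow taught in the workshop is projecting each Kinect depth pixel into the externally mounted HD camera, so the video can texture the depth geometry. Below is a minimal sketch of that reprojection step, assuming pinhole intrinsics for both cameras and a rigid extrinsic transform between them obtained from calibration; the struct and function names are illustrative, not the RGBDToolkit API.

```cpp
// Hypothetical calibration containers; in the real workflow these values
// come from a checkerboard calibration of the Kinect and the HD camera.
struct Intrinsics { float fx, fy, cx, cy; };        // focal lengths, principal point
struct Extrinsics { float R[3][3]; float t[3]; };   // depth camera -> colour camera

// Back-project one depth pixel (u, v, depth in millimetres) to a 3D point in
// the depth camera's frame, move it into the HD camera's frame, and project
// it onto the HD image to find which video pixel textures that point.
void depthPixelToColourPixel(int u, int v, float depthMM,
                             const Intrinsics& depthCam,
                             const Intrinsics& colourCam,
                             const Extrinsics& depthToColour,
                             float& colourU, float& colourV) {
    // back-project: pixel + metric depth -> 3D point
    float x = (u - depthCam.cx) * depthMM / depthCam.fx;
    float y = (v - depthCam.cy) * depthMM / depthCam.fy;
    float z = depthMM;

    // rigid transform into the colour camera's coordinate frame
    const auto& R = depthToColour.R;
    const auto& t = depthToColour.t;
    float xc = R[0][0] * x + R[0][1] * y + R[0][2] * z + t[0];
    float yc = R[1][0] * x + R[1][1] * y + R[1][2] * z + t[1];
    float zc = R[2][0] * x + R[2][1] * y + R[2][2] * z + t[2];

    // pinhole projection into the HD frame
    colourU = colourCam.fx * (xc / zc) + colourCam.cx;
    colourV = colourCam.fy * (yc / zc) + colourCam.cy;
}
```

With that mapping, every vertex of the depth mesh can look up its colour in the HD frame, which is what gives the video/3D hybrid its photographic surface.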
- CLOUDS Interactive Documentary – Exploring creativity through code RGB-D and "Clouds" need no introduction on CAN, but it is exciting that the project authors, James George and Jonathan Minard, have just launched a Kickstarter to complete their interactive film and they need our support! CAN has featured the project a number of times already: first on the interwebs back in February 2011, then known only as Kinect NYC Subway, followed by the first recordings of the film at the Art && Code event organised by the STUDIO for Creative Inquiry, then Eyeo Festival, and their most recent participation at our own Resonate festival in Belgrade, where James and Jonathan ran a workshop and completed a large number of recordings which constitute a good part of the final project. Over the last year the team has captured interviews with over 30 new media artists, curators, designers, and critics, using this new 3D cinema format called RGBD. CLOUDS presents a generative portrait of this digital arts community in a videogame-like environment. The artists inhabit a shared space with their code-based creations, allowing you to follow your curiosity through a network of stories. The interview subjects in CLOUDS include Bruce Sterling, Casey Reas, Daniel Shiffman, Golan Levin, Greg Borenstein, Jer Thorp, Jesse Louis-Rosenberg, Jessica Rosenkrantz, Josh Nimoy, Karolina Sobecka, Karsten “Toxi” Schmidt, Kyle McDonald, Lindsay Howard, Regine Debatty, Satoru Higa, Shantell Martin, Theodore Watson, Vera Glahn, Zachary Lieberman and many more. The final CLOUDS documentary will come in the form of an application for Mac or Windows, presenting a full-screen, immersive, interactive audio-visual experience. If you are in the NYC area, consider also donating for a ticket to their launch at Eyebeam in Chelsea. There you’ll see the film presented as an installation, meet the filmmakers and catch a few CLOUDS participants in person. So, enough talk, let’s make it happen! Please support today! James George (@obviousjim) is a media artist using code to critically interact with the implications of emerging technology. In the process of creating installations and videos he shares his process open source so that others may express themselves using the tools he develops. Jonathan Minard (@deepspeedmedia) is a new-media documentarian with a background in anthropology. His work follows cultural shifts at the frontiers of technology and art, and develops new cinematic techniques for crafting stories of invention and discovery. For more great Kickstarter projects, see our own curated page on Kickstarter and of course our own […]
- The Carp and the Seagull – Interactive short film by Evan Boehm The Carp and the Seagull is an interactive short film about one man’s encounter with the spirit world and his fall from grace. It is a user-driven narrative that tells a single story through the prism of two connected spaces. One space is the natural world and the other is the spirit or nether […]
- Interactive Wall at UD [openFrameworks, Kinect] A few months ago Flightphase were brought onto this project by HUSH Studios as Art and Technology Director to create, in collaboration with HUSH and 160over90, an image-based responsive environment at the University of Dayton. The 36-foot wall at the admissions center was to become an interactive attractor for prospective students and their families. The result is an engaging live surface driven by simple elements beautifully choreographed. The project evolved from a basic element, the cube, used as a mechanism to both animate the screen and show videos. The cube also being the visual language of UD, the cubes were rendered in orthographic projection, with no camera and no lighting, and frequently with one of the faces rendered in the same colour as the background. Each face of each cube is rendered with a single colour, but this colour changes depending on the face's angle to the camera. The colour is picked from a pre-designed image gradient that constitutes a palette. Altogether, the entire field of cubes, with how they overlap and the negative-space shapes formed between them, created an opportunity to produce a variety of looks and patterns with a more structural and dimensional appearance that could 'open up', rather than just sitting on the surface of the wall. The fields of cubes were then animated with waves of activity. The designed Affectors start small and grow to their final size as they travel around; the longer a cube has been under the effect of an Affector, the more it is influenced by it. The gestural interaction is driven by 4 Kinect cameras embedded in the ceiling in front of the wall, capturing viewers' presence and movement. The software was built using openFrameworks. For video tracking the team used a modified version of TSPS (Toolkit for Sensing People in Spaces). 2 Mac Minis were used to get input from the Kinect cameras, each running the TSPS app, blending the input from two Kinects and sending the contour information over to the Mac tower. More details about the process, with great insight into resolving both mapping and blending, are available in the form of a case study on Flightphase's website. Client: University of Dayton, Agency: 160over90, Production Company: HUSH, Art & Technology Director: Flightphase. Flightphase credits: Creative Direction, Interaction Design, Bespoke Software Design. Creative Direction/Design: Karolina Sobecka, Technical Direction: James George, Jeff Crouse, Lead Software Development: Jeff Crouse, Additional Software Development: Caleb Johnston. Project Page Flightphase is an art and design studio based in Brooklyn. We are dedicated to creating work that is engaging and evocative, creating a unique design and format solution for any challenge. We develop a variety of art and commercial projects, embracing emerging technologies, interactivity and new media as well as all the traditional tools of creative expression from pencils to film to product […]
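As a rough illustration of two of the behaviours described above, the sketch below picks a face colour from a pre-designed gradient according to the face's angle to the camera, and lets a cube accumulate an Affector's influence for as long as it stays inside its radius. It is written in openFrameworks style; all names and the ramp speed are assumptions, not Flightphase's actual code.

```cpp
#include "ofMain.h"

// Sample a colour from a pre-designed gradient image (a 1-row palette)
// according to how directly the cube face is turned towards the camera.
ofColor faceColourFromAngle(const glm::vec3& faceNormal,
                            const glm::vec3& toCamera,
                            const ofImage& palette) {
    float facing = glm::dot(glm::normalize(faceNormal),
                            glm::normalize(toCamera));        // -1 .. 1
    float x = ofMap(facing, -1.0f, 1.0f, 0.0f, palette.getWidth() - 1.0f);
    return palette.getColor((int) x, 0);
}

// The longer a cube has been inside an Affector's radius, the more it is
// influenced by it (illustrative accumulation, clamped to full effect).
struct Cube {
    glm::vec3 position;
    float influence = 0.0f;          // 0 = untouched, 1 = fully affected
};

void updateInfluence(Cube& cube, const glm::vec3& affectorPos,
                     float affectorRadius, float dt) {
    bool inside = glm::distance(cube.position, affectorPos) < affectorRadius;
    float rampPerSecond = 0.5f;      // assumed ramp speed
    cube.influence += (inside ? rampPerSecond : -rampPerSecond) * dt;
    cube.influence = ofClamp(cube.influence, 0.0f, 1.0f);
}
```

Accumulating influence over time, rather than switching it on and off, is what produces the slow waves of activity described above instead of a hard-edged reaction to each passer-by.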
- Study of real-time 3D Internet by Akihiko Taniguchi Study of real-time 3D Internet is an experiment that explores interaction on the web from a third-person perspective. As the user navigates the internet, they can project beyond the two-dimensional screen into a three-dimensional environment recorded by the […]
- The Maccabees (in the dark) – Live performance recording with 10 Kinects Two weeks ago we were invited to the filming of the new video by The Maccabees, presented by Vevo for the Magners ‘Made in the Dark’ campaign. Unfortunately we could not make it, but earlier today James Alliban posted details of the result. The project is the brainchild of directors Jamie Roberts and Will Hanke, and the performance combined live action footage (shot with an Alexa on a technocrane) with an animated sequence by Jamie Child and James Ballard. The scene was all shot in 3D, with a rig that contained 10 Kinect cameras, each attached to a MacBook Pro. The technical consultant on the project was James Alliban. Three applications were built to achieve this, all using openFrameworks. The client application used ofxKinect to record the point clouds. The millimetre data for each pixel of the depth map was transcoded into 320×240 TIFF images and exported to the hard drive at roughly 32 fps. A server application was used to monitor and control the 10 clients using OSC. Among other tasks, this starts/stops the recording, synchronises the timecode and displays the status, fps and a live preview of the depth map. Once the recording had taken place, a separate ‘mesh builder’ app created 3D files from this data. Using this software, the TIFFs are imported and transformed back into their original point cloud structure. A variety of calibration methods are used to rotate, position and warp the point clouds to rebuild the scene and transform it into 2 meshes, one for the band and another for the crowd. A smoothing algorithm was implemented but dropped in favour of the raw, chaotic Kinect aesthetic. A large sequence of 3D files (.obj) was exported and handed to the post-production team to create the animated sequence in Maya and After Effects. This app also formats the recorded TIFF and .obj files so that there are only 25 per second, arranged in an easily manageable directory structure. For more information about the project visit James' blog. Credits: Jamie Roberts, Will Hanke, Jamie Child, James Ballard, James […]
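The server/client control scheme described above, where one application starts and stops recording on all ten capture machines over OSC and keeps their timecode in sync, could look roughly like the sketch below. It uses openFrameworks' ofxOsc addon; the OSC addresses, port number and class name are assumptions for illustration, not the production protocol.

```cpp
#include "ofMain.h"
#include "ofxOsc.h"

// Sketch of the server -> client control channel: the server broadcasts
// start/stop messages with a shared timecode so that all ten capture
// clients label their recordings consistently.
class CaptureServer {
public:
    void setup(const std::vector<std::string>& clientHosts) {
        for (const auto& host : clientHosts) {
            auto sender = std::make_unique<ofxOscSender>();
            sender->setup(host, 12345);                 // assumed client port
            senders.push_back(std::move(sender));
        }
    }

    void startRecording(const std::string& takeName) {
        ofxOscMessage m;
        m.setAddress("/record/start");                  // assumed address
        m.addStringArg(takeName);
        m.addInt64Arg((int64_t) ofGetElapsedTimeMillis()); // shared timecode reference
        broadcast(m);
    }

    void stopRecording() {
        ofxOscMessage m;
        m.setAddress("/record/stop");                   // assumed address
        broadcast(m);
    }

private:
    void broadcast(ofxOscMessage& m) {
        for (auto& s : senders) s->sendMessage(m, false);
    }
    std::vector<std::unique_ptr<ofxOscSender>> senders;
};
```

Each capture client would run a matching ofxOscReceiver, begin writing its depth TIFFs on /record/start and tag them with the shared timecode, so the mesh builder can align the ten recordings afterwards.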
- Suwappu Prototype [iPhone] A few months ago Berg posted concept material for Suwappu, their toy design project with Dentsu. The project is based around woodland creatures that talk to one another when you watch them through your phone camera, a form of AR. Whilst still only a concept at that time, Matt Webb has now posted the latest details on Berg's blog, including an internal beta of the iPhone app, where you can see Deer and Badger talk as you play with them. The app was developed by Zappar, whose AR technology brought Suwappu to life. An interesting feature, though not new to CAN readers, is camera recognition of objects without QR codes, using only the face designs of the objects the camera is looking at. The real jewels of the Suwappu project are the stories that happen between these toys inside this new environment, one that moves around and behind the toy as you pan around the characters. As Matt describes: Seeing the two characters chatting, and referencing a just-out-of-camera event, is so provocative. It makes me wonder what could be done with this story-telling. Could there be a new story every week, some kind of drama occurring between the toys? Or maybe Badger gets to know you, and you interact on Facebook too. How about one day Deer mentions a new character, and a couple of weeks later you see it pop up on TV or in the shops. The real challenge will be making the technology work. Whilst the storytelling has the fluency of moving quickly between the characters and cross-referencing, it will be interesting to see how the AR technology keeps up with the narrative, i.e. avoiding long delays and the almost over-cautious movement of the device needed so as not to lose the tag. I nevertheless enjoy what Berg are doing, simultaneously exploring new forms of storytelling using the tech and vice versa. Read more about the development on their blog | post about the prototype on Dentsu | more […]
Posted on: 05/06/2013
Posted in: openFrameworks