Last week The Mill unveiled their rear-projected, 5' x 3' interactive touch screen, an interactive visualization of their portfolio (spanning 20 years) developed in Cinder. What makes this screen unique is the extensive design and software development effort by the NY Digital team, comprising Andrew Bell and an incredible group of designers, Flash developers and digital artists. For those that may be unaware, Andrew Bell is the main person behind Cinder, a powerful, intuitive toolbox for creative programming, previously an in-house tool at the Barbarian Group where Andrew worked, now open sourced and developed by Andrew as part of The Mill's R+D. We got a chance to ask Andrew about the project, including some making-of details and features we will see in future releases of Cinder, among them the forthcoming Cinder Timeline API, a tweening library.
Over to you Andrew…
The hardware includes a commodity PC, custom-built switchable glass (allowing software control of the glass's transparency/opacity) and a computer vision / infrared-based touch combination designed by Supertouch (http://supertou.ch). The software is written in Cinder, and the animations are achieved with Cinder's forthcoming tweening library.
The content for the screen is downloaded via a separate Cinder app that walks an XML dump of our website for the metadata, and then scrapes the site for the videos (since its CMS doesn’t provide the appropriate hooks). These are stored locally so that the system never needs an internet connection for normal operation.
The metadata is all searchable and presented in list mode, sorted by brand, director, ad agency, production company, and chronologically. Clients usually get a kick out of looking at the very early Mill projects we have on there. Advertising has come a long way in 20 years, and it's pretty interesting to be able to view that progression in one place.
The additional behind-the-scenes aspects like the lens are prepared separately, and are obviously not available for every project. However, going forward we'll be creating more content like this. The lens is achieved using some custom GL shaders (including a mathematically accurate lens model) and a pair of QuickTime movies depicting the original footage and the final piece. Part of why we modeled a lens that distorts around the edges is to hide the seams that come from these two videos rarely lining up "pixel-perfectly". It's not unusual for shots to be nudged or scaled relative to the original footage during the VFX process, and the lens does a nice job of hiding this. In addition to the lens view, we can also present interactive turntables (generally of CG models) as well as traditional making-of videos that the contracts with our clients often prevent us from presenting on the internet.
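The actual shader isn't shown in the post, but the kind of radial lens distortion typically used for an effect like this can be sketched in plain C++ (in the real piece it would run per fragment in GLSL; the coefficients k1 and k2 below are illustrative, not The Mill's actual values):

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { double x, y; };

// Radial lens distortion (first two terms of the Brown model).
// uv is a texture coordinate in [0,1]^2, distorted around the image
// center. Positive k1/k2 bow the image outward increasingly toward the
// edges -- which is exactly where misaligned before/after plates would
// otherwise show seams.
Vec2 distort( Vec2 uv, double k1 = 0.15, double k2 = 0.05 )
{
    double dx = uv.x - 0.5, dy = uv.y - 0.5;   // offset from center
    double r2 = dx * dx + dy * dy;             // squared radius
    double f  = 1.0 + k1 * r2 + k2 * r2 * r2;  // radial scale factor
    return { 0.5 + dx * f, 0.5 + dy * f };
}
```

Because the distortion grows with the radius squared, the center of the frame stays faithful while the edges, where the two plates rarely line up pixel-perfectly, are warped the most.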
The “multi-touchable” particle system was written by Hai Nguyen and comprises 2.4 million particles. The bulk of the math is performed on the GPU of course, using raw GLSL/FBO GPGPU techniques as opposed to CUDA or similar. The fluid sim is computed on the CPU using a custom solver and uploaded to the GPU as a velocity texture. We can get a lot more particles running, but in practice we have to leave room for the rest of the visuals, which we try to keep close to 60 FPS at all times. The design challenge here was to hit a look that felt “technical” and structured while still having the type of motion that is unique to fluids. Hai found a great solution by advecting bicubic patches through the fluid. Little grids of dots still maintain their rectilinear relationship to each other relative to the boundary of the patch, but the patch points themselves are moving around naturally in the fluid.
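Hai's solver isn't published, but the patch-advection idea can be sketched: only the patch's control points are moved through the velocity field, while each dot keeps a fixed parametric position inside the patch. Here is a minimal standalone version using bilinearly interpolated corners (the production version uses bicubic patches and a real fluid solver, both assumed away here):

```cpp
#include <cassert>
#include <functional>
#include <vector>

struct V2 { double x, y; };

// Advect the four corners of a patch one Euler step through a velocity
// field, then place an n x n grid of dots by bilinear interpolation.
// Interior dots keep their rectilinear relationship to the patch
// boundary; only the corners move "naturally" with the fluid.
std::vector<V2> advectPatch( V2 corners[4],                   // TL, TR, BL, BR
                             std::function<V2( V2 )> velocity,
                             double dt, int n )
{
    for( int i = 0; i < 4; ++i ) {
        V2 v = velocity( corners[i] );
        corners[i].x += v.x * dt;
        corners[i].y += v.y * dt;
    }
    std::vector<V2> dots;
    for( int j = 0; j < n; ++j ) {
        for( int i = 0; i < n; ++i ) {
            double u = i / double( n - 1 ), w = j / double( n - 1 );
            V2 top = { corners[0].x + u * ( corners[1].x - corners[0].x ),
                       corners[0].y + u * ( corners[1].y - corners[0].y ) };
            V2 bot = { corners[2].x + u * ( corners[3].x - corners[2].x ),
                       corners[2].y + u * ( corners[3].y - corners[2].y ) };
            dots.push_back( { top.x + w * ( bot.x - top.x ),
                              top.y + w * ( bot.y - top.y ) } );
        }
    }
    return dots;
}
```

In the real system this per-dot evaluation is what moves to the GPU: the CPU fluid solver only has to advect the sparse control points, and the shader reconstructs millions of dots from them.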
The movie below is a very early, admittedly low quality video of the glass recorded on an iPhone. This was shot before we had touch working, so the interaction is via a mouse. It shows the glass in its transparent state as well, which is software controllable. My favorite function in the whole code base is definitely makeGlassTransparent(). From a software perspective we just send a 1 out the serial port to an Arduino. It’s responsible for pushing current through the glass, which makes it transparent. In practice we don’t do this much since it confuses the infrared-based touch input, but the effect is pretty striking, especially when the room is dark.
One thing we’re especially proud of is the video playback timeline. This allows users to scrub through the video with realtime feedback, hiding the latency that is inherent to seeking inside of QuickTime movies, particularly those encoded for the web. This, as well as the software keyboard for searching, was designed and coded by Chris McKenzie. As an aside, Chris has a Flash background originally, so he brings a great UX sensibility refined in the fires of website interaction design. Conversations with him have helped provide insight into how Flash developers have innovated in user interaction for a long time. Whether you love or hate Flash, I think there’s a lot of untapped wisdom in the Flash community – they’ve been defining high-end interaction for a very long time, both with respect to process and results. It’s something we’re trying to learn from and capture in Cinder, with a C++ twist of course.
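Chris's implementation isn't shown, but one standard way to hide seek latency is to coalesce seek requests: while a slow seek is in flight, the UI playhead keeps updating instantly, and only the most recent requested time is sent to the movie once the previous seek completes. A hypothetical sketch of that pattern (the class and member names are illustrative, not the project's actual code):

```cpp
#include <cassert>

// Coalesces scrub requests so a slow-seeking video backend is only ever
// asked for the most recent position, while the UI playhead (uiTime)
// updates with zero latency.
class Scrubber {
  public:
    void requestSeek( double t )
    {
        uiTime = t;              // UI feedback is immediate
        if( seekInFlight ) {
            pendingTime = t;     // coalesce: remember only the latest
            hasPending  = true;
        }
        else
            startSeek( t );
    }
    // Called by the video backend when its current seek finishes.
    void onSeekComplete()
    {
        seekInFlight = false;
        if( hasPending ) {
            hasPending = false;
            startSeek( pendingTime );
        }
    }
    double uiTime      = 0;
    int    seeksIssued = 0;      // how many real (expensive) seeks ran
    double lastSeek    = 0;
  private:
    void startSeek( double t ) { seekInFlight = true; lastSeek = t; ++seeksIssued; }
    bool   seekInFlight = false, hasPending = false;
    double pendingTime  = 0;
};
```

Dragging the playhead through dozens of positions thus triggers only a handful of real seeks, while the timeline cursor never lags behind the finger.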
Cinder Timeline API
The forthcoming Cinder Timeline API is a great example of this. Anyone familiar with Greensock’s TweenMax library or similar will understand the value of a library like this. Really sophisticated animation effects are possible with a pretty high-level abstraction. You can make a statement like, “In 2.5 seconds, I want this scale value to be 1.25, and I want you to use quadratic easing to get there. Oh, and fire this callback function when you’re done”. All of that is expressible in one succinct line of code. The implementation was derived from the Choreograph Cinder library by David Wicks (http://sansumbrella.com/), and we’re pretty pleased with the design – usage is very fire-and-forget, and you can tween any type of class that supports the multiply and add operators, so things like colors or quaternions just work. In Cinder we don’t roll anything major into the core until the community has discussed and debated it in the forums, but we’ll be submitting it for that process shortly. Here’s a small video from the ImageAccordion sample that we’ve developed to show off the Timeline API:
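The Timeline API itself was unreleased at the time of writing, but the core mechanic of such a tween — normalized time run through an easing function, then an interpolation that only needs the multiply and add operators — is small enough to sketch. The names here are illustrative, not Cinder's actual API:

```cpp
#include <cassert>

// Quadratic ease-in-out: accelerate through the first half,
// decelerate through the second. t is normalized time in [0,1].
double easeInOutQuad( double t )
{
    return t < 0.5 ? 2 * t * t : 1 - 2 * ( 1 - t ) * ( 1 - t );
}

// Evaluate a tween at a given elapsed time. Works for any T supporting
// scalar multiply and add (floats, colors, vectors...), mirroring the
// "just works" property described above.
template<typename T>
T tween( T start, T target, double duration, double elapsed,
         double ( *ease )( double ) = easeInOutQuad )
{
    double t = elapsed <= 0 ? 0
             : elapsed >= duration ? 1
             : elapsed / duration;
    double e = ease( t );
    return start * ( 1 - e ) + target * e;
}
```

The statement from the paragraph above — "in 2.5 seconds, ease this scale value to 1.25 quadratically" — becomes `tween( 1.0, 1.25, 2.5, elapsed )`; a real timeline would additionally store the tween, step it each frame, and fire the completion callback when `elapsed` reaches `duration`.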
Several of the features in our recent 0.8.3 release (described here: http://libcinder.org/blog/posts/5_cinder-083-released/ ) were developed and tested around this project, including the new hardware-accelerated text rendering functionality in gl::TextureFont. This sort of push-and-pull of doing production work and then rolling the learnings and code back into Cinder has been the primary development model so far, both for the core developers and the users who have been contributing code. It’s how the library has gotten this far in its relatively short life, and what we think will keep it growing into the tool its community needs it to be.
Thank you Andrew.
Credits: Executive Producer: Bridget Sheils, Lead Creative Technologist: Andrew Bell, Creative Director: Sheena Matheiken, Creative Technologist: Hai Nguyen, UX Lead/Developer: Chris McKenzie, Digital Producers: Bridget Sheils, Kei Gowda, Design Director: Jeff Stevens, Art Director: Bowe King, Designer/Animator: John Koltai, Designer: Audrey Davis.
For more information see themill.com/work/mill-touch/behind-the-scenes.aspx | http://libcinder.org