
Zef & Santo – 3D Real-Time Performance, Workflow and Collaboration

Zef & Santo and Keith Fullerton Whitman – MUTEK 2012


Even when judged against its usual high standards, MUTEK 2012 was a stellar year for AV performance. In addition to the A/Visions program, there were a number of other noteworthy shows, screenings and installations that reinforced the prominence of real-time graphics and ‘cinematic ambience’ across the festival. Whether it was Jeff Mills’ figure poised over his 909 against the backdrop of a massive projection of the moon, Robert Henke and Tarik Barri’s audiovisual interpretation of the recent ethereal-but-groovy Monolake LP Ghosts, or the immersion and impeccable curation of Recombinant Media Labs’ CineChamber – multimedia collaboration was everywhere. One of the highlights of the festival was undoubtedly the I Dream of Wires modular synthesizer showcase that took place in the Satosphère, a huge dome hardwired for 3D projections that is permanently installed atop the Société des arts technologiques (SAT). While the whole evening was great (Clark’s set felt like partying in a near-future rap video), the set by veteran American producer Keith Fullerton Whitman and visualist duo Zef & Santo was delightfully weird. Playing out as some kind of demented 8-bit hall of mirrors, Zef & Santo’s glitched-out geometric machinations perfectly complemented Whitman’s analog improvisation. I recently caught up with Zef & Santo to learn more about their intricate 3D projection workflow.

What are the challenges in working in 3D versus traditional projection contexts?

Zef: Performing visuals in the Satosphère has its own particular challenges. It’s a rather unique venue for visuals: a full 360° × 210° dome surface that completely envelops the audience. The visuals need a special approach, because feeding standard video resolutions directly into the dome severely distorts the content. Another challenge is the high resolution the dome requires – a 2240×2240 px spherically distorted video feed.

The potential of this permanent installation really shines when the content is created as a 3D environment, since the dome can accurately present that environment to the audience. Using this technique, the audience can actually lose perception of the dome’s surface, which is replaced by the sensation of being inside an alternate virtual 3D space.

One challenge with working in 3D at such high resolutions is that render times can be extremely long; this is where a real-time rendering engine is extremely useful. Thanks to the fisheye for Unity 3D project, it is possible to output the spherical map needed for the dome directly from within the Unity game engine in real time, bypassing the need for any offline rendering.
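To make that last point concrete: an angular fisheye (“domemaster”) frame of the kind the dome expects maps each pixel of a square image to a viewing direction, and a real-time fisheye renderer resamples the scene along those directions. The sketch below only illustrates that pixel-to-direction mapping in Python/NumPy, following Paul Bourke’s angular-fisheye convention and using the 210° aperture and square frame mentioned above – it is not the actual Unity plugin.

```python
import numpy as np

def fisheye_direction_map(size=2240, aperture_deg=210.0):
    """For each pixel of a square angular-fisheye (domemaster) image,
    compute the 3D view direction a renderer would sample.
    Pixels outside the fisheye circle are marked invalid."""
    half_ap = np.radians(aperture_deg) / 2.0

    # Normalised image coordinates in [-1, 1]
    ys, xs = np.mgrid[0:size, 0:size]
    u = 2.0 * (xs + 0.5) / size - 1.0
    v = 2.0 * (ys + 0.5) / size - 1.0

    r = np.sqrt(u * u + v * v)      # radial distance from the image centre
    valid = r <= 1.0                # inside the fisheye circle
    theta = r * half_ap             # angle away from the dome's zenith axis
    phi = np.arctan2(v, u)          # azimuth around that axis

    # Unit direction vectors (z points toward the zenith)
    dirs = np.stack([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)], axis=-1)
    return dirs, valid

dirs, valid = fisheye_direction_map(size=512)  # small size for a quick test
print(dirs.shape, valid.mean())                # (512, 512, 3), fraction of pixels inside the circle
```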

Santo: We love the mix between hi-fi and lo-fi; it gives our output a more human feel and the sense that it is being created in the moment, versus in a studio, rendered for three weeks on some huge render farm.

In one of the first emails we exchanged you mentioned a “complex chain” of devices and software for digital and analog signal processing. Could you describe your kit and general workflow?

Z: We are two visual artists working on two separate machines in tandem to create the final output. Santo creates live visuals using actual objects and lights captured by a real-world camera and fed into Resolume, where he processes the visuals even further. His output is then captured into VDMX on my machine and piped into Unity 3D via Syphon, where it is used as textures on the objects within the scene. VDMX also parses incoming OSC messages from a Lemur touch interface, enabling full control of key parameters of the 3D environment.
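For readers unfamiliar with that last step: a Lemur surface simply sends plain OSC messages over the network, which VDMX (or any OSC-aware host) maps onto parameters. Here is a minimal sketch of that kind of control message in Python, using the python-osc library – the host, port and addresses are hypothetical stand-ins, not the mappings used in the show:

```python
# pip install python-osc
from pythonosc.udp_client import SimpleUDPClient

# Hypothetical host, port and OSC addresses -- the real Lemur template and
# VDMX mappings used in the performance are not documented here.
client = SimpleUDPClient("192.168.1.10", 8000)

client.send_message("/scene/rotation_speed", 0.35)  # a fader-style parameter
client.send_message("/scene/object_scale", 1.8)
client.send_message("/scene/trigger_glitch", 1)     # a button-style toggle
```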

S: We like to change the workflow between shows because we find doing the same thing twice extremely boring. As mentioned above, I had a laptop with some sound-reactive content feeding into a small analog TV, which was filmed by an HD camera. This was captured on a PC and processed in Resolume. The live content was placed into a multichannel content matrix within Resolume so that different textures could be mixed with different elements. This matrix of content was sent in 1080p back to Zef to texture various components (background, main object, secondary objects, etc.).

Z: This is a good moment to express my gratitude for the invaluable work done by Paul Bourke on his Unity 3D fisheye project, by Anton Marini (vade) and Tom Butterworth in developing Syphon, and by Brian Chasalow in developing and maintaining the Unity 3D plugin for Syphon.

I’d love to hear you describe your accompaniment of Keith Fullerton Whitman’s music. What was going on conceptually in that collaboration, and how do you feel it worked out?

S: Keith Fullerton Whitman’s music was very interesting to perform our kind of visuals to because it is totally improvised and completely analog (no bullshit!), and I think that each show is really different depending on what happens within the space. That’s also how we see our work: emerging from a particular moment. When we started the show we were supposed to be receiving a live video feed from Keith, but that never happened. Keith later said, “I saw that you guys didn’t need it [the feed].” I think that Keith’s music really inspired us and that we worked together to create an experimental journey into an analog world, both musically and visually.

Z: We wanted a prominent analog feel for this particular show, given Whitman’s signal flow. Using our camera to capture audio-responsive glitches generated by a cathode-ray-tube TV, as well as not clearing the depth buffer in Unity, went quite far in giving us some nice feedback effects.
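The feedback trick is easy to picture: if a buffer is never cleared, each new frame is drawn on top of whatever is already there, so stale pixels linger as trails. The following is only a rough stand-in for that behaviour, sketched in Python/NumPy rather than in their Unity setup, with an arbitrary decay factor:

```python
import numpy as np

def feedback_accumulate(frames, decay=0.92):
    """Frame feedback: each new frame is drawn over a faded copy of
    everything drawn before it, so motion leaves trails -- roughly what
    skipping the clear step does in a render loop."""
    acc = np.zeros_like(frames[0], dtype=np.float32)
    out = []
    for frame in frames:
        # New content at full brightness, old pixels never cleared, only faded.
        acc = np.maximum(frame.astype(np.float32), decay * acc)
        out.append(acc.astype(np.uint8))
    return out

# Toy input: a bright square sweeping across a dark 64x64 image.
frames = []
for t in range(10):
    img = np.zeros((64, 64), dtype=np.uint8)
    img[20:30, 5 * t:5 * t + 10] = 255
    frames.append(img)

trails = feedback_accumulate(frames)
print(trails[-1].max(), (trails[-1] > 0).sum())  # trail pixels persist behind the square
```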

Zef & Santo | Zef | Santo
MUTEK | See also: Just another day at the lab: MUTEK A/Visions 2012
SAT

See also: Zef & Santo’s visuals for Pole, from an April SAT show.