
Shedding Light on Squidsoup – A Conversation with Anthony Rowe

Squidsoup – Submergence

For more than a decade, the artist collective Squidsoup (Anthony Rowe, Gaz Bushell, Chris Bennewith and Liam Birtles) has been designing rich interactive experiences. From their early navigable sonic environments, through their experiments with computer vision and sustained interest in ‘volumetric visualizations’, the group has deftly crafted playful, at times sublime, spaces for contemplation and exploration. The group has demonstrated a particularly single-minded fixation on dynamically controlled 3D arrays of LED lights and realized several iterations of this ongoing research as the Ocean of Light project, most recently with Submergence, an installation that was shown at the ROM for Kunst og Arkitektur (Norway) through mid-February. An idle email exchange between Squidsoup’s Anthony Rowe and CAN began several weeks ago, and that banter begat a mammoth interview about light, sound and many of the collective’s projects.

Your recent installation Submergence is the latest iteration of your ongoing Ocean of Light project. What are the origins of your sustained interest in volumetric visualizations?

I’m always on the lookout for ways to surround and immerse people in digital, controllable media: to get away from screens, fill their field of view near and far, create an impression of being within an ocean, yet also have it within the real world rather than a virtual representation.

I think the low-res volumetric approach to doing this came from a range of inspirations and ideas. An exhibition of Jim Campbell’s work that I saw in Nagoya (Japan) in 2002 stuck with me. The works are very low-res grids of lights that recreate video at a resolution of something like 24×16 pixels or less, and yet you can still very clearly see people walking in the video, and even ascribe detailed characteristics to the people from their gait, their style of movement. I was transfixed by the fact that we are so wired up to fill in gaps in information that we do it and often don’t even know it’s made up. And in fact I think we enjoy it more that way – it’s more open to creative interpretation. I got to wondering if that kind of thing could be done in 3D.

Some of us at Squidsoup were also becoming increasingly frustrated by the limitations of the screen as visual output. We live in a 3D world, and we engage with it naturally in three dimensions. Why limit our experiences with digital media, both visually and in terms of interaction, to two dimensions? So we wanted to liberate the pixel, free it from its rigid flat grid and give it, and the media it represents, spatial presence.

And finally, light in itself is such a beautiful thing… nobody in Squidsoup is a trained lighting designer, but as a medium in the physical world it is perfectly suited to being a vehicle for digital presence. It moves at the speed of light, is not constrained by many of the laws of gravity, it can appear, disappear and morph at will, it’s not hugely technical or dangerous to use, and it has an all-encompassing effect on its surroundings.

So the whole Ocean of Light thing stemmed from this. We wanted to create highly flexible digital media that occupies physical space, embodies all the beauty of light as a medium, is interactive and controllable, and people will fill in the gaps.

For me, the aim has also been to create experiences that have a strong immersive element. I first realised the potential for this when I saw Osmose – Char Davies’ highly immersive VR artwork – in London in 1996. I really felt lost within that virtual space. So the Ocean of Light project is also in part an attempt to get that kind of immersive experience – but without the headgear and the body suit, and within a shared, physical space.

Squidsoup – Volume 4096
Volume 4096 wiring

Could you talk CAN’s readers through the assembly of a single light ‘strand’ within one of the Ocean of Light projects and explain how these are sequenced and controlled?

We come from a media rather than electronics background, and the hardware we have used throughout the Ocean of Light project has either been made for us, by various people, or is existing hardware that we have borrowed (this was the case for Stealth and Discontinuum, which used a system called NOVA developed at ETH Zurich). With Submergence, the hardware is actually reconfigured video wall technology, using components that are built to order and to our specification, but not actually designed by us. The hardware system uses DMX protocols to map video from a screen onto a large 2D array of LEDs – more like Jim Campbell’s 2D video works but at any resolution you choose. We simply rebuilt the hardware into a series of slices, or planes, through a volume. We assign each pixel to a light, then we create the volumetric visuals from geometric principles in software. So an image like this (see below) represents a series of slices through a volume, which are then played back through the LEDs in planes, one behind the other, thereby recreating the 3D volume. The image shows five moments, each sliced in this case into twelve vertical segments.

Squidsoup – Volumetric visuals, process
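
Squidsoup’s own rendering code isn’t published in this piece, but the slicing idea Rowe describes is easy to sketch: sample a 3D scene on a coarse grid and emit one 2D plane of brightness values per LED layer, which the hardware then maps to lights (via DMX, in Submergence’s case). The sketch below is purely illustrative – a hypothetical Python example with an arbitrary 12×24×12 grid and a made-up ‘drifting sphere’ scene, not anything taken from the actual installation.

```python
# Minimal sketch (not Squidsoup's code) of slicing a 3D volume into
# vertical planes of brightness values, one plane per 2D LED layer.
import math

GRID_X, GRID_Y, GRID_Z = 12, 24, 12   # illustrative resolution only

def scene_brightness(x, y, z, t):
    """Brightness 0..1 at normalised coords: a soft sphere drifting with time."""
    cx, cy, cz = 0.5 + 0.3 * math.sin(t), 0.5, 0.5 + 0.3 * math.cos(t)
    d = math.dist((x, y, z), (cx, cy, cz))
    return max(0.0, 1.0 - 4.0 * d)     # falls off with distance from the centre

def render_slices(t):
    """Return GRID_Z planes, each a GRID_X x GRID_Y grid of values.
    Each plane would be sent to one LED layer, one behind the other."""
    slices = []
    for k in range(GRID_Z):
        plane = [[scene_brightness(i / (GRID_X - 1), j / (GRID_Y - 1),
                                   k / (GRID_Z - 1), t)
                  for j in range(GRID_Y)]
                 for i in range(GRID_X)]
        slices.append(plane)
    return slices

frame = render_slices(t=0.0)
print(len(frame), "planes of", len(frame[0]), "x", len(frame[0][0]), "values")
```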

For us, the excitement of this is in what we can put into the lights, what we can make them suggest and represent, rather than seeing them as an interesting piece of hardware. We have used three or four different LED grids so far – each has its own aesthetic and own characteristics to a point, but there is also a lot of commonality. So, the focus is on the content, on using these as vehicles or outputs for an emerging medium.

The Ocean of Light project documentation includes a thorough chronology of LED cubes, dividing them into two general typologies – ‘object cubes’ and ‘penetrables’. If penetrables were the logical progression from ‘object cubes’, what comes next?

The idea of penetrables was first coined, I think, by Jesús Rafael Soto, a Venezuelan Op artist who worked with ideas similar to ours – in some ways at least – from the 60s onwards. He built these big volumes of suspended coloured tubes that people were invited to walk through. Some of them also represented 3D shapes, suspended in space and formed within these coloured strings. Really beautiful stuff, and very tactile.

So I’m not entirely sure if penetrables are the next step from object cubes, or just a different approach – it is the difference between an object and an environment (or the two different types of QuickTime VR, objects and panoramas, if you learned your digital media in the late 90s). The approaches are almost diametrically opposite. With an object you look at its entirety from outside. With an environment, you’re not sure of its boundaries but you are entirely within it.

As to the next step, two thoughts occur.

The first would be to increase the resolution. Despite low resolution being a primary attraction of this approach for us, it is also a limitation in that creating recognizable complex visuals is hard – even Submergence, which occupies an 8m x 4m x 4m volume and has 8,064 LED clusters, has the same resolution as a piece of screen real estate roughly 90 pixels square (90 × 90 ≈ 8,100). Not much compared to an iPad – you can cover that with your thumb. A retina-screen approach to LED cubes is a way off and, if it is ever made, will need to deal with the fact that you won’t be able to walk through it, or even see through it (both crucial for immersive 3D environments), so there is work to be done in that direction. However, we can always just go bigger and bigger (very attractive in itself from my perspective).

I also co-run an event called LUX, where we bring together leading luminaries at the intersections of light, space and technology. A couple of years ago, Adam Pruden spoke about a project he was working on at MIT SENSEable City Lab called Flyfire. The idea was to use drones with attached LEDs to create visuals out of dynamic points of light. As far as I know that project has tanked, but a very similar idea has been developed at Ars Electronica Futurelab. Horst Hörtner, the director there, spoke just last week at Oslo Lux about their version of the project, called Spaxels. They have a swarm of 50 of these controllable RGB dynamic points of light, first shown at last year’s Ars Electronica Festival in Linz. They are still working on it and making very interesting progress. They plan to use it to create the illusion of bridges at night, buildings and so on – visualizing architectural visions at 1:1 scale in physical space.

Give that a few years, and the size of the drones will shrink dramatically, they’ll be semi-autonomous and able to sense each other, the lights will be brighter and the batteries will last longer – the idea of orchestrated visuals floating in space all around us will be a possibility, maybe even reality. There’s still a lot of untapped potential in static points of dynamic light though!

On the drone front, beyond Horst Hörtner’s work, what research and projects have got you excited? Will we see a hovering Ocean of Light project anytime soon?

Hah, I wish! We’re not electronics wizards and so we need to find ways into such technologies. However, drone technology is developing at a phenomenal rate, much of it for all the wrong reasons, but it will present some quite surreal creative opportunities in the not-very-distant future. The AR drone stuff is interesting too; I do like the idea of navigating freely in space and seeing life from a miniature first-person perspective – a lot of potential for games that blur the boundaries between real and virtual spaces in new and surprising ways, for example. And I have heard of a couple of projects that are working on flocking algorithms using drones. Starling murmurations are fantastic to watch and the algorithms to simulate them are fairly straightforward – whether using lights or just the drones themselves, watching a large flock of flying robots self-organise and produce emergent behaviours would be extraordinary, and steeped in creative potential.
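
For readers curious just how straightforward those flocking algorithms can be, here is a minimal, generic sketch of the classic boids-style rules (cohesion, alignment, separation) in Python. It is purely illustrative – the constants are arbitrary and it is not code from any of the drone projects mentioned above.

```python
# Minimal boids-style flocking sketch: cohesion, alignment and separation.
# Generic illustration only; not tied to any specific drone platform.
import random

N, DT = 50, 0.1
SEP_R, NEIGH_R = 1.0, 5.0          # separation and neighbourhood radii

boids = [{"p": [random.uniform(0, 20) for _ in range(3)],
          "v": [random.uniform(-1, 1) for _ in range(3)]} for _ in range(N)]

def step(boids):
    for b in boids:
        coh, ali, sep = [0.0] * 3, [0.0] * 3, [0.0] * 3
        n = 0
        for o in boids:
            if o is b:
                continue
            d = sum((o["p"][i] - b["p"][i]) ** 2 for i in range(3)) ** 0.5
            if d < NEIGH_R:
                n += 1
                for i in range(3):
                    coh[i] += o["p"][i]                        # neighbours' centre
                    ali[i] += o["v"][i]                        # neighbours' heading
                    if 0 < d < SEP_R:
                        sep[i] -= (o["p"][i] - b["p"][i]) / d  # push apart
        if n:
            for i in range(3):
                b["v"][i] += (0.01 * (coh[i] / n - b["p"][i])
                              + 0.05 * (ali[i] / n - b["v"][i])
                              + 0.10 * sep[i])
    for b in boids:
        for i in range(3):
            b["p"][i] += b["v"][i] * DT

for _ in range(100):
    step(boids)
print("boid 0 position after 100 steps:", boids[0]["p"])
```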

I guess that once drones become powerful and cheap enough, all sorts of stuff becomes possible. We’re already seeing people (like Memo Akten) attach mirrors to quadcopters to deflect beams of light, drawing dynamic shapes in space. Combining drones with structured light and projection mapping with a single big projector could create extraordinary visions in the sky.

All interesting stuff, but generally we try not to get too excited by kit and gizmos, but rather by ideas, and then find the technical approach that suits the idea. Which is, in fact, how we came across drones. The obvious ultimate vision of the Ocean of Light project is to get rid of the physical side altogether: no strings and wires, no chips, just freely floating points of light. Using drones is a step in that direction in that it gets rid of the wires and strings, and adds the fact that the points of light can move. But drones do come with their own limitations, and the lack of dangerous technologies in Ocean of Light and Submergence is a big attraction: people can walk freely among the lights without fear of being sliced by a propeller, laser or rapidly moving object.

However, if somebody has a technology out there that could be used to create freeform points of light in space (even if it is dangerous) that can be seen from any angle, we’d LOVE to hear from them!

Let’s shift our focus from dynamic light to some of the themes explored in your earlier projects. Driftnet, Freq and Come Closer all endeavour to provide “navigable instruments” for visitors to explore and experiment with. How integral do you consider sound design to be to interactive environments?

There are numerous theories about how sound and vision affect us differently, how sound has a more direct and unmediated connection with the brain. In terms of creating interactive environments, sound is clearly an important factor, very powerful and also able to transcend physical space, creating paradoxes and portraying information and emotions, making connections that visuals alone cannot.

You mentioned some of our earlier installation work that treated sound as at least equal to visuals – in fact our very first installation piece, Altzero, started that journey into the idea of navigable soundscapes; trying to find ways to create and ‘play’ (navigate) compositions that are spatial, and so non-linear, in their construction. In that sense, sound is as effective as visual media at aiding navigation. An aside: my Masters thesis project used audio-enhanced QuickTime VR panoramas to help people find otherwise invisible buttons, and it worked effectively even in 1997.

In our pursuit of immersive experience, we worked on several approaches to spatialising sound, including a system developed for Martyn Ware (of Heaven 17 and Human League fame) called 3dAudioScape, which we combined with stereoscopic visuals to create flythrough audiovisual experiences – this culminated in Driftnet. Ware has gone on to do some huge and impressive soundscaping using the hardware.

At the time, our visuals were built on projected media (2D or 3D), and we found that combining this with spatialised sound creates perceptual problems, and always seems to fall short of expectations – sound and vision never quite gel into a cohesive experience. This is probably mainly because we were working at too basic a level technologically, and also because we were already at that stage moving away from head-mounted devices – the obvious solution to the problem. We felt, and still do, that glasses and/or headphones create large perceptual boundaries between the audience and the physical space they inhabit, and therefore distance people from the experience we want them to have. Cinema struggles with the same issue, with or without 3D stereoscopy. The screen only works as an immersive visual medium if you willingly suspend disbelief – you have to forget the fact that the screen has edges and only covers a small part of your full field of view, and you have to ignore all of your natural, physical and spatial embodied sensibilities. But as soon as sound goes outside that cinematic box, you are reminded of the real world, the real space all around you without any imagery in it.

This train of thought reminds me of The Paradise Institute, a piece where Janet Cardiff and George Bures Miller played with exactly this problem: cinema space, real space, audio space. They resolved the issue by sitting people inside a miniature cinema and using binaural stereo headphones to very powerfully merge cinematic, real and audio spaces.

Returning to our LED cube projects, these work well with spatialised sound, because they occupy physical space, and so the sound can also be placed in the same natural spatial setting without any hidden black holes. Stealth and Surface, two LED projects from a few years ago, both use a quadraphonic system to place sounds in at least a planar position within the volume occupied by the LED lights. So, matching a sound with a visual point (represented by illuminated LEDs) makes for an easier relationship, at least when you are outside the LED volume.
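
As a rough illustration of what placing a sound ‘in at least a planar position’ can mean in practice, here is a minimal Python sketch of bilinear amplitude panning across four corner speakers, driven by a sound’s position in a normalised floor plane. It is an assumed simplification for illustration, not the actual system used in Stealth or Surface.

```python
# Minimal sketch of quadraphonic amplitude panning: speaker gains derived
# from a sound's (x, y) position in a normalised floor plane.
# Illustrative only; not Squidsoup's actual audio system.

def quad_gains(x, y):
    """x, y in [0, 1]; returns gains for (front-left, front-right,
    rear-left, rear-right) using simple bilinear panning."""
    fl = (1 - x) * (1 - y)
    fr = x * (1 - y)
    rl = (1 - x) * y
    rr = x * y
    total = (fl**2 + fr**2 + rl**2 + rr**2) ** 0.5   # keep power roughly constant
    return tuple(g / total for g in (fl, fr, rl, rr))

# A sound tied to an illuminated LED near the front-left corner of the volume:
print(quad_gains(0.2, 0.1))
```

Tying each gain set to the position of an illuminated LED cluster is what makes the sound and the visual point feel co-located, at least for a listener outside the volume.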

Ironically, with Submergence, our most recent piece, we have had to go right back to mono. The reason for this is the same old mismatch between audio and visual worlds – positioning a sound in 3D physical space is doable with eight speakers if you know where the listener is, but if the sound source is between two listeners, then the technical challenges start to mount. So the sound, much to the frustration of Ollie Bown who worked on it with us, has taken on a more ambient role, colouring the spatial characteristics of the physical space but not affecting it spatially – washing it in a uniform sonic palette.

Returning to the specifics of the question, I’d say that sound can be just as powerful as visual media in helping people navigate their way through interactive environments. In the traditional screen-and-stereo-speakers way of delivering these experiences, stereo sound and flat images work well together. However, as soon as you try to work outside of the parallel image space of the screen or the cinema, that relationship stretches to the point where they only work synergistically in quite restrictive circumstances.

Well, as a companion question to the query about drones as emerging hardware, protocol and medium, what sound spatialisation technologies are you anticipating and looking forward to experimenting with?

My knowledge of spatialised sound technologies is not very up-to-date. There is a project that we have been talking about that would combine the Submergence/Ocean of Light visual approach with powerful spatialised sound – possibly a single user experience as this would reduce or eliminate a lot of the problems mentioned earlier, or else perhaps using a large array of downward pointing directional speakers. I think the struggle to make sound and visuals cohabit symbiotically in real 3D space is a worthy challenge. With Submergence we slightly ducked that issue.

I would like to explore strong, visceral phenomenological experiences: loud sounds, powerful experiences and big spaces, stuff that makes you jump. Expanding the palette and the feeling of scale; sound can suggest that things are far bigger, heavier and more distant (or nearer) than they really are – using sound to augment and expand the visual experiences with weight and density would be very interesting avenues to explore.

Or maybe we just stick speakers in drones – that way you really could pinpoint the location of a sound.

Squidsoup | Ocean of Light
