Infinity – New malleable subject matter by UE

Ending tonight, after a week of live streaming, is the latest in the series of artworks by Universal Everything (UE) that feature 3D humans shrouded in digital costumes. Titled ‘Infinity’, the new work is a live stream that is always reshaping and evolving from one character to the next, generating an average of 3,180 unique characters per hour and 50,000+ variations over the past week. It is streaming live on YouTube for only a few more hours!

These works by UE, if unfamiliar (unlikely), have received global acclaim: the Prix Ars Electronica winner in Animation (2014) titled ‘Walking City (Citizens)’, ‘Superconsumers’ for The Hyundai in Seoul, South Korea, ‘Do We All Dream of Flying?’ for Incheon Airport, ‘Run Forever’ for Hyundai Motorstudio, an iOS app titled ‘Super You’, artworks produced for Sedition and the recent NFT drops on Hic et Nunc – with reservations of course; after all, their studio is powered by solar panels combined with 100% renewable energy. What is different this time is that they went out of their way to create a live version to test the potential of an ‘infinite video’, creating a form of ‘infinite digital life’. No loops here!

‘Infinity’ is their first attempt to push their work exploring abstract figurative processions to the limits of today’s real-time and streaming tech. They wanted to create content that is forever surprising – not a repetitive video loop that becomes familiar and possibly ignored over time. This work is, after all, made for display on public videowalls, as seen previously on architecture, in airports and in headquarters. In these spaces the viewers are often passing the videowall as part of a daily commute, so there is something new to discover on every viewing.

‘Infinity’ is also a transition for UE from traditional Houdini pre-rendered works to working in real-time with Unity. With the latest graphics hardware, they believe they are very close to the quality of their Houdini pre-rendered video. Now they can have near realism and malleable subject matter, drawing on a large pool of motion-capture walk data, coloured source images and wide parameters for hair behaviour to ensure variety, beauty and lifelike forms every time. By bringing simulation and rendering techniques previously only available in offline 3D into real-time, the team wants to find out what is possible, and to start a conversation about what can be done when simulation and rendering become interactive or infinitely dynamic – a way of inspiring new digital expression moving forward.
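To make the combinatorial idea concrete, here is a minimal sketch of how a character might be assembled from those pools. The asset names and data structure are invented for illustration; they are not from UE’s actual Unity project.

```python
import random

# Invented stand-ins for UE's private motion-capture library and
# colour source images; only the combinatorial idea is real.
WALK_CLIPS = ["walk_casual_01", "walk_brisk_02", "strut_03", "amble_04"]
COLOUR_SOURCES = ["leaves.png", "neon.png", "moss.png", "feathers.png"]

def next_character(rng: random.Random) -> dict:
    """Assemble one character: a random walk cycle, a random colour
    source, and a fresh seed for the hair-behaviour parameters."""
    return {
        "walk": rng.choice(WALK_CLIPS),
        "colours": rng.choice(COLOUR_SOURCES),
        "hair_seed": rng.getrandbits(32),  # drives the hair parameter draw
    }

print(next_character(random.Random()))
```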

Part of this process relied on creating a real-time hair physics simulation: a new, custom-written system used to simulate strands of hair with full-fledged physics on the GPU. The simulation takes minimum and maximum values the team specifies and generates a number of parameters for stiffness, waviness, gravity, length, wind, strand thickness, etc. It also generates colour data based on input textures and generative gradients. Because of the amount of information going into the simulation, and the fact that each value is generated at random within a range they specify, it is entirely conceivable that it will never generate the same thing twice.
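As a rough illustration of that parameter generation, here is a minimal sketch. The parameter names, ranges and units are assumptions; the actual GPU system and its values are not public.

```python
import random

# Assumed parameter ranges – the real system is a custom GPU simulation
# inside Unity, and its actual parameters and ranges are not public.
HAIR_PARAM_RANGES = {
    "stiffness":        (0.1, 1.0),
    "waviness":         (0.0, 0.8),
    "gravity":          (0.2, 9.8),    # m/s^2
    "length":           (0.05, 0.6),   # metres
    "wind":             (0.0, 3.0),
    "strand_thickness": (0.001, 0.01),
}

def generate_hair_params(rng: random.Random) -> dict:
    """Draw one value per parameter, uniformly within its [min, max] range.
    Sampling continuous values makes the odds of two identical costumes
    effectively zero."""
    return {name: rng.uniform(lo, hi)
            for name, (lo, hi) in HAIR_PARAM_RANGES.items()}

print(generate_hair_params(random.Random()))
```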

With modern graphics cards, the team feels they have a tremendous amount of power at their fingertips, but it is still ultimately limited. A complex motion graphics piece made in Houdini can simply take a very long time to render or simulate and no one would know. But because they are maintaining real-time performance, there is a graphics budget and they need to decide how to spend it: better shadows and lighting mean fewer characters onscreen at once, certain visual settings are preferred because they are less render-heavy, and the final output resolution has to be taken into account.
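A toy model of that trade-off, with invented numbers: at a fixed frame budget, the per-character rendering cost directly caps how many characters fit on screen.

```python
# Toy numbers only – real per-character costs depend on the scene,
# the hardware and the engine settings.
FRAME_BUDGET_MS = 16.6  # one frame at 60 fps

COST_PER_CHARACTER_MS = {
    "rich shadows + lighting": 1.8,
    "basic shadows":           0.9,
}

for setting, cost_ms in COST_PER_CHARACTER_MS.items():
    print(f"{setting}: up to {int(FRAME_BUDGET_MS // cost_ms)} characters per frame")
```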

Adam Samson, Unity Developer at UE, tells us that one of his favourite things to do was to apply the system to a character controller he could play with using the keyboard: make it run around and jump, and watch the hairs respond to that movement. If anything, it was hard not to get attached to a variation, Adam tells us, knowing that as soon as you hit the generate button again that character would be gone forever – “It made for an interesting kind of zen”.

UE are only getting started here, it seems. They are currently looking into including a form of viewer input / influence over the costumes, as well as AI-driven human movements for more natural interactions between strangers walking in many directions.

The live stream has been running for the past week and it is ending tonight. The live version was a short-term experiment to test the potential of this medium. Sadly, they cannot monitor the stream beyond today, so they prefer to show a 2-hour excerpt instead; the current YouTube stream should divert to that excerpt video once it ends this evening (Wednesday 11th August 2021).

YouTube Stream (ends soon) | Project Page | Universal Everything

Full credits: Universal Everything – Matt Pyke (Creative Director), Adam Samson (Unity Developer) and Simon Pyke (Sound)
