Created by Matthew Biederman for the upcoming album Sonopeutic Smooth Sailing by Pierce Warnecke, A Quickie in the Bouncy House is a music video that deploys Stable Diffusion to explore the edges of AI text-to-image technology, forcing the AI into SFW corners of its training where it appears to break down, getting stuck between genders, races and eras.
Matthew has been working with AI since 2018, but always by creating his own models, exhibiting their training, and often misusing the technology. With this project, he wanted to explore the visual landscape of the new CLIP and diffusion techniques, specifically Stable Diffusion, since it is open and there is a large community sharing tools and knowledge. Conceptually, after talking with Pierce and learning about his ideas around the album, Matthew had much to work with, and it immediately sparked some visual ideas. The name of the album, ‘Sonopeutic Smooth Sailing’, and the track “A Quickie in the Bouncy House” were particularly interesting, both sonically and metaphorically. He took those two concepts and ran with them.
I let it go (Stable Diffusion), just kind of trying to keep up with some of the things it was generating. I wanted to try to push Stable Diffusion to do something that I hadn’t been seeing, something out of the ordinary and beyond video styling/rotoscoping.
Matthew Biederman
Matthew mostly relied on pure prompting with a bit of input on his side, though not in the most obvious ways. He spent a lot of time researching different models and combining them for a particular output. In exploring these techniques, he noticed that many of the models being created are for ‘adult purposes’, and it was interesting to use some of these while forcing them into SFW corners of their training, where they seemed to break down and get stuck between genders, races and even eras, in terms of hairstyles or clothing.
It’s always interesting to see where the edges of these technologies are. Everything is moving so fast right now that I was incorporating new techniques and processes as soon as they were being released or discussed. Ultimately, though, I was asking the question of just what ‘sonopeutic’ actually could mean and just had fun with that question and posing it in a variety of ways.
Matthew Biederman


For the visual portion, at the core of it all is Stable Diffusion running locally. The work is roughly 50/50: half uses Deforum through Automatic1111, and the other half uses API calls to A1111 via a custom TouchDesigner node created by @dotsimulate, which allowed him to build a custom feedback loop so that the diffusion/generation happens in a variety of ways. On top of that, a number of different models and LoRAs from civitai.com were utilised. An indispensable extension for Deforum in this project was Parseq, which allows very granular manipulation of prompts and schedules for all the variables within Deforum, so that the diffusion process can be synchronised more precisely with the audio, especially with something like Pierce’s audio, which doesn’t follow a specific time signature and is very organic and improvisational. On a lower level, Matthew used techniques that let the motion of another video affect how and where the diffusion happens in each frame. With some careful timing and circular ways of working (and many mistakes), he got some fascinating visual outputs.
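To make that pipeline a little more concrete, below is a minimal sketch, not Matthew’s actual patch, of the kind of img2img feedback loop that can be driven from TouchDesigner or any script: each generated frame is fed back in as the init image for the next one, with a small hand-written denoising schedule standing in for what Parseq does far more granularly. It assumes the Automatic1111 web UI is running locally with its API enabled (launched with --api); the prompt, schedule values and file names are placeholders.

```python
# Minimal sketch of an img2img feedback loop against a local Automatic1111 API.
# Assumes the web UI was launched with --api; prompt and schedule values are
# hypothetical placeholders, not the settings used in the video.
import base64
import os
import requests

API = "http://127.0.0.1:7860/sdapi/v1/img2img"

def encode(path):
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

# Hypothetical per-frame denoising schedule: lower values keep more of the
# previous frame, higher values let the model drift further from it.
denoise_schedule = {0: 0.35, 48: 0.6, 96: 0.45}

def denoise_at(frame):
    keys = sorted(k for k in denoise_schedule if k <= frame)
    return denoise_schedule[keys[-1]]

os.makedirs("out", exist_ok=True)
frame_b64 = encode("seed_frame.png")  # placeholder starting image

for frame in range(120):
    payload = {
        "init_images": [frame_b64],
        "prompt": "sonopeutic smooth sailing, bouncy house interior",  # placeholder
        "denoising_strength": denoise_at(frame),
        "steps": 20,
        "cfg_scale": 7,
    }
    r = requests.post(API, json=payload, timeout=300)
    r.raise_for_status()
    # Feed the result back in as the next init image to create the feedback loop.
    frame_b64 = r.json()["images"][0]
    with open(f"out/frame_{frame:04d}.png", "wb") as f:
        f.write(base64.b64decode(frame_b64))
```

In the actual project, Parseq generates far denser keyframe data across Deforum’s many parameters, which is what keeps the diffusion locked to Pierce’s audio rather than a fixed grid of frames like the one above.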
Project Page | Pierce Warnecke | Matthew Biederman
Credits: Music by Pierce Warnecke / Video by Matthew Biederman / Produced by Raster-Media