MaxMSP, Sound

Sonic Pendulum – AI soundscape of tranquility

Created by Yuri Suzuki Design Studio in collaboration with QOSMO and presented at the recent Milan Design Week inside a historic former seminary, Sonic Pendulum is a sound installation in which artificial intelligence imagines and materialises an endless soundscape.

Whilst continuously generating a calming atmosphere of ambient sound and bass lines, the algorithm conducts the voice of Miyu Hosoi and processes disturbances around the space generated by the crowd itself. The result is an ever-evolving piece – each moment singular, never to be repeated exactly again – in conversation with its visitors and the way in which they move around the space.

The original concept was created in response to the inspiration behind the new AUDI A5 line. The structure is made up of 30 pendulums which – while gently modulating the sound coming from the speakers through the Doppler effect – are also a visual representation of a dream: a systematised mess, with moments of order emerging from the chaos.

The team trained the AI to create an infinite composition that is site-specific, moment-to-moment specific and interactive. Microphones are set up around the space, recording sounds of the Seminario; sometimes these are purely atmospheric, at other times created by onlookers moving through the space. As well as using the AI to generate an endless soundscape, the structure itself is used both to modulate the sound and to physically represent it.
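As a rough sketch of the kind of listening such a system depends on (the studio's actual analysis code is not published, and the function below is purely illustrative), a rolling RMS measurement can reduce a microphone buffer to a single "room intensity" value for the generative system to react to:

```python
import numpy as np

def room_intensity(samples: np.ndarray, window: int = 4096) -> float:
    """Return the RMS level of the most recent analysis window, clipped to 0..1."""
    recent = samples[-window:]                 # last ~0.1 s at 44.1 kHz
    rms = np.sqrt(np.mean(np.square(recent)))  # root-mean-square level
    return float(np.clip(rms, 0.0, 1.0))       # assumes samples lie in -1..1

# Example: a quiet room versus a busy, noisy one (synthetic signals).
quiet = 0.02 * np.random.randn(44100)
busy = 0.30 * np.random.randn(44100)
print(room_intensity(quiet), room_intensity(busy))
```

In practice a value like this would be smoothed over time, so that a single cough or footstep does not jolt the composition.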

By playing music through the structure, an audible low-frequency effect can be felt through the dynamics of the pendulums. This sonic experience plays with the phenomenon of brainwave entrainment, where neural oscillations start to mimic those of outside sensory stimuli, such as music of a similar frequency. Our soundscape mimics frequencies known to induce a state of relaxation.
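As a minimal sketch of the idea – assuming a simple amplitude-modulation approach and a 10 Hz pulse rate, a value commonly associated with relaxed alpha-band activity; the installation's actual frequencies and synthesis method are not specified here – a low drone can be made to pulse at a relaxation-associated rate:

```python
import wave
import numpy as np

SR = 44100                                        # sample rate in Hz
t = np.linspace(0, 5.0, SR * 5, endpoint=False)   # five seconds of audio

carrier = np.sin(2 * np.pi * 110.0 * t)           # low 110 Hz drone
pulse = 0.5 * (1 + np.sin(2 * np.pi * 10.0 * t))  # 10 Hz loudness envelope
tone = 0.3 * carrier * pulse                      # drone pulsing at 10 Hz

pcm = (tone * 32767).astype(np.int16)             # 16-bit mono PCM
with wave.open("entrainment_sketch.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SR)
    f.writeframes(pcm.tobytes())
```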

The installation comprises 30 pendulums, each carrying a speaker, with four subwoofers in the corners of the Seminario. In addition, 30 high-power stepper motors bring the pendulums to a position of maximum amplitude, and a clutch releases them into natural oscillation. The pendulum wave effect is achieved by varying the pendulum length from the outside of the row to the inside. Each motor has a rotary encoder sending data to the central algorithm. The pendulums' speakers were wired in groups, allowing surround panning and the separation of certain samples, so that the Doppler effect could easily be discerned when walking alongside the structures.
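The pendulum-wave principle itself is straightforward: if pendulum k is tuned to complete M + k full swings in a common cycle time, the row drifts out of phase and then periodically snaps back into visible order. The sketch below illustrates this with assumed numbers (the cycle length and swing counts are illustrative; the installation's actual dimensions are not given):

```python
import math

G = 9.81          # gravitational acceleration, m/s^2
CYCLE = 60.0      # seconds before all pendulums realign (assumed)
BASE_SWINGS = 40  # swings of the longest pendulum per cycle (assumed)
COUNT = 30        # number of pendulums, as in the installation

for k in range(COUNT):
    period = CYCLE / (BASE_SWINGS + k)          # T_k = CYCLE / (M + k)
    length = G * (period / (2 * math.pi)) ** 2  # from T = 2*pi*sqrt(L/g)
    print(f"pendulum {k + 1:2d}: period {period:.3f} s, length {length:.3f} m")
```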

A single deep learning model (an autoencoder) was used. It was trained on one of Yuri's ambient compositions and output MIDI data. The model was modulated by a number of live inputs: microphones in the space picked up audio, and during periods of high intensity the soundtrack became more chaotic and intense. This audio was also used to further train the model, so that it delivered a composition matching the mood (sound intensity) of the space. A second input was a networked camera, with computer vision used to count the size of the crowd; much like the microphones, the music increased in intensity alongside crowd size. Finally, data from the rotary encoders influenced the amount of surround panning applied to ambient glitch sounds in the soundtrack. This was limited to the amplitude of the pendulum swing, but was originally designed to correspond to the emergent order of the standing-wave patterns seen in the final physical structure.
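A hypothetical control mapping (the names, weights and ranges below are assumptions, not QOSMO's published code) gives a sense of how those three live inputs could be folded into parameters a generative system acts on each cycle:

```python
from dataclasses import dataclass

@dataclass
class SceneControls:
    chaos: float       # 0 = calm, repetitive output; 1 = dense and chaotic
    pan_width: float   # 0 = static image; 1 = full surround panning of glitches

def map_inputs(mic_intensity: float, crowd_count: int,
               swing_amplitude: float, max_crowd: int = 80) -> SceneControls:
    """Blend microphone level and crowd size into a single 'chaos' value, and
    let the measured pendulum swing amplitude set how widely glitch samples pan."""
    crowd = min(crowd_count / max_crowd, 1.0)
    chaos = min(1.0, 0.6 * mic_intensity + 0.4 * crowd)
    pan_width = max(0.0, min(swing_amplitude, 1.0))
    return SceneControls(chaos=chaos, pan_width=pan_width)

# Example: a loud, busy room with the pendulums at mid swing.
print(map_inputs(mic_intensity=0.7, crowd_count=50, swing_amplitude=0.5))
```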

Project Page | Yuri Suzuki Design Studio | QOSMO

↑ Early sketch in p5.js investigating pendulum behaviour