MaxMSP Python Sound

The Welcome Chorus – The voice of a community

Created by Yuri Suzuki (Pentagram) in collaboration with Fish Fabrications and Counterpoint, ‘The Welcome Chorus’ is an interactive installation that brings together sound, sculpture and artificial intelligence.

Commissioned by Turner Contemporary for the Margate NOW festival, the sculpture consists of twelve horns, each representing a different district of Kent in the UK. Each horn continually sings lyrics generated live by a uniquely trained, site-specific piece of AI software. Symbolically and aesthetically, these sculptural forms reference the origin of the word ‘Kent’, thought to derive from the word ‘kanto’, meaning horn or hook.

“This project has been about the endless process of musical composition – our aim has been to create an anthem of sorts for Margate and Kent. The AI embedded within the sculpture has been learning and absorbing how people think about the area – every 2 minutes it generates a brand new piece of music. Since the piece was installed, the algorithm has greatly expanded both its vocabulary and with it its knowledge about Kent. I am truly excited to see how the next few weeks will continue to influence the songs generated.”

Yuri Suzuki

Through an inclusive, democratic process of workshops and gatherings at Kent Libraries, people from all over the county contributed lyrics reflecting on their Kentish experience to the AI data bank. Lyrics and sound bites on the history of Kent, its landscapes and estuaries, changes to industry and services, the relationship between urban and rural areas, and perceptions around journeying, migration and movement were all submitted. The colour of each horn was selected by library staff from the different areas of Kent.

The lyrics are generated from a vast amount of text collected at the twelve workshop sessions; the transcriptions form the bulk of the material in the AI database. GPT-2 by OpenAI serves as the language model for generating lyrics, while the folkRNN model developed by Bob Sturm at KTH (trained on traditional Irish folk melodies) creates the melodies.
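The installation itself uses GPT-2 and folkRNN, which are far too large to reproduce here. As a toy stand-in for the general idea (learn word transitions from a corpus of contributed text, then walk the model to produce new lines) here is a minimal word-level bigram sketch; the corpus lines are hypothetical placeholders, not actual workshop transcriptions:

```python
import random

def build_model(corpus_lines):
    """Build a word-level bigram model: word -> list of possible next words."""
    model = {}
    for line in corpus_lines:
        words = line.split()
        for a, b in zip(words, words[1:]):
            model.setdefault(a, []).append(b)
    return model

def generate_line(model, start, max_words=8, seed=None):
    """Walk the bigram model from a start word to produce a new lyric line."""
    rng = random.Random(seed)
    words = [start]
    while len(words) < max_words and words[-1] in model:
        words.append(rng.choice(model[words[-1]]))
    return " ".join(words)

# Hypothetical stand-ins for workshop transcriptions.
corpus = [
    "the sea meets the sky at margate",
    "the sea carries our songs home",
    "our songs rise over the estuary",
]
model = build_model(corpus)
print(generate_line(model, "the", seed=1))
```

A neural model like GPT-2 does something far more sophisticated, but the loop is conceptually similar: condition on what came before, sample what comes next.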

All audio processing in the installation happens through Ableton. Two separate processes run in parallel. The first is text-to-speech generation and synthesis on the computer. The speech synthesis uses two separate vocoder plugins: the lead voice uses Synth V, which has better definition and pronunciation of the words, while the ‘backup’ singers use Vocaloid, which has a more musical, textural character. The second process is audience interaction through the microphones. Three dynamic mics are housed in the “conductor’s station”, which visitors can walk up to and speak into. The resulting audio does two things: first, it continually trains the AI, with speech-to-text conversion adding the spoken words to its database; second, it feeds straight into Ableton, where a custom Max patch delays it slightly before it is put through Ableton’s Vocoder and fed back to the speakers, creating a call-and-response effect with the sculpture.
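The actual delay lives in a custom Max patch, whose internals aren't described here. Purely to illustrate what a fixed delay with a dry/wet mix does to a signal, here is a minimal Python sketch operating on a list of samples (the function name and mix behaviour are assumptions, not the patch's real design):

```python
def delay(samples, delay_samples, mix=0.5):
    """Mix each input sample with a copy of the signal delayed by
    delay_samples; mix is the wet (delayed) proportion.
    A toy stand-in for the custom Max delay patch, not its actual logic."""
    out = []
    for i, x in enumerate(samples):
        # Before the delay line has filled, the delayed signal is silence.
        delayed = samples[i - delay_samples] if i >= delay_samples else 0.0
        out.append((1.0 - mix) * x + mix * delayed)
    return out

# A short impulse shows the echo arriving 3 samples later.
print(delay([1.0, 0.0, 0.0, 0.0, 0.0], 3, mix=0.5))
# → [0.5, 0.0, 0.0, 0.5, 0.0]
```

In the installation the same principle applies at audio rate: the visitor's voice arrives first, and the vocoded copy follows a beat later, which is what produces the call-and-response feel.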

The hardware comprises three Shure SM11 dynamic lavalier microphones and speakers that are IP-rated for outdoor use. The audio interface is a Behringer 8-in/8-out unit, which outputs into Denon amps that power the speakers. Everything runs from a MacBook Pro.

For more information on the project, please visit the link below.

Project Page | Yuri Suzuki | Turner Contemporary

Full credits: Turner Contemporary (client), Yuri Suzuki (Author, Pentagram Partner), Gabriel Vergara II, Adam Cheong-MacLeod, Alice Lazarus, Karol Sielski, Eira Szadurski (Project team) and Fish Fabrications, Counterpoint (Collaborators).
