Embedded/Embodied – Sound as a means of obtaining knowledge

Created by Farzaneh Nouri & Arash Akbari, Embedded/Embodied unites ‘acoustics’ and ‘epistemology’ into acoustemology to investigate sound as a means of obtaining knowledge, delving into what can be known through listening. Unlike analytical, reductionist, and quantitative methods, Steven Feld’s acoustemology is concerned not with evaluation but with the lived experience of sound. Within this paradigm, Embedded/Embodied is an interactive installation and a digital sound walk that explores AI sound recognition, generation, and communication through a situated and reflexive method. It speculates on machine learning approaches that could go beyond conventional quantitative and generalized methods and incorporate cultural, ecological, and cosmological diversity through a lived and locative methodology.

When we say an AI agent ‘listens’ to its surroundings, what do we mean? How can it ‘know’ a physical space? How does the knowledge it gains through listening differ from the way humans know spaces through sound, and is there a way for us to imagine the AI agent’s experience of that space?

Farzaneh Nouri & Arash Akbari

The project also taps into the epistemological possibilities of Augmented Reality as a site for developing a relational form of sonic knowledge, territorializing computational sonic investigations within the visual field through the interplay between hearing and the other senses.

Created with SuperCollider, Unity 3D, Three.js, and the Web Audio API.

Project Page | Live Web Version | Farzaneh Nouri | Arash Akbari

Embedded/Embodied is commissioned by Sonic Acts. Part of New Perspectives for Action. A project by Re-Imagine Europe, co-funded by the European Union.
