Unlearning Language – Using AI to be less machine-like

Created by Lauren Lee McCarthy & Kyle McDonald, Unlearning Language is an interactive installation and performance that uses machine learning to provoke participants into finding new understandings of language that are undetectable to algorithms.

What does language mean to us? As AI-generated text proliferates and we are constantly detected and archived, can we imagine a future beyond persistent monitoring?

Lauren Lee McCarthy & Kyle McDonald

A group of participants is guided by an AI that wishes to train humans to be less machine-like. As the participants communicate, the system detects them using speech detection, gesture recognition, and expression detection, and the AI intervenes with light, sound, and vibration. Together, the group must find new ways to communicate that are undetectable to an algorithm. This might involve clapping or humming, or modifying the rate, pitch, or pronunciation of speech.

Through this playful experimentation, the participants find themselves revealing the most human qualities that distinguish them from machines. They begin to imagine a future where human communication is prioritized.

Working with YCAM, Rhizomatiks engineer Yuta Asai, and Motoi Shimizu, Lauren and Kyle developed custom software that combines speech, expression, and gesture detection with control of voice, sound, light, and vibration in the space. Software tools used include TouchDesigner, Python, Node.js, and GPT-3.
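To give a sense of how such a detect-then-intervene loop might be wired up, here is a minimal sketch in Python. It is not the project's actual software: it assumes the off-the-shelf webrtcvad and python-osc libraries, a microphone input via sounddevice, and a hypothetical /intervene OSC address that a TouchDesigner patch listens on to trigger light or sound.

```python
# Hypothetical sketch of a detect-then-intervene loop: voice-activity
# detection on a mic feed, sending an OSC trigger to a show-control patch.
# Port numbers and the /intervene address are illustrative assumptions.
import sounddevice as sd
import webrtcvad
from pythonosc.udp_client import SimpleUDPClient

SAMPLE_RATE = 16000                          # webrtcvad supports 8/16/32/48 kHz mono PCM
FRAME_MS = 30                                # webrtcvad accepts 10/20/30 ms frames
FRAME_SAMPLES = SAMPLE_RATE * FRAME_MS // 1000

vad = webrtcvad.Vad(2)                       # aggressiveness 0 (lenient) to 3 (strict)
osc = SimpleUDPClient("127.0.0.1", 7000)     # assumed TouchDesigner OSC In port

def on_audio(indata, frames, time, status):
    # indata is one 30 ms frame of raw 16-bit mono PCM
    if vad.is_speech(bytes(indata), SAMPLE_RATE):
        # Speech detected: ask the show-control patch to intervene
        osc.send_message("/intervene", ["speech"])

with sd.RawInputStream(samplerate=SAMPLE_RATE, blocksize=FRAME_SAMPLES,
                       channels=1, dtype="int16", callback=on_audio):
    print("Listening for 60 seconds...")
    sd.sleep(60_000)
```

In the installation itself, detection spans gesture and expression as well as speech, and the interventions extend to voice, vibration, and light, but the basic shape is the same: continuous sensing feeding real-time triggers to the space.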

Project Page | Lauren Lee McCarthy | Kyle McDonald | YCAM
