Floating Codes – The (spatial) topology of an artificial neural network

‘Floating Codes’ is a site-specific light and sound installation that explores the inner workings and hidden aesthetics of artificial neural networks – the fundamental building blocks of machine learning systems, or artificial intelligence. The exhibition space itself becomes a neural network that processes information from its constantly changing environment (light conditions, day-night cycle), including the presence of visitors.

05/01/2022
The Lost Passage

The Lost Passage is an interactive experience for the web that creates a new digital home for an extinct species, the passenger pigeon. It’s a digitally crafted world of a swarm of artificial pigeons, which seem to inhabit a sublime yet destitute memory of a lost landscape.

25/11/2021
Latentscape – Franz Rosati

Created by Franz Rosati, ‘Latentscape’ depicts an exploration of virtual landscapes and territories, accompanied by music generated by machine learning tools trained on traditional, folk and pop music without temporal or cultural limitations.

28/10/2021
Inside Inside – Remixing video games and cinema with ML

Created by Douglas Edric Stanley, Inside Inside is an interactive installation remixing video games and cinema. Sitting between the two, a neural network creates associations from its artificial understanding of both, generating a film in real time from gameplay using images from the history of cinema.

02/09/2021
CAN 2018 – Highlights and Favourites

As 2018 comes to a close, we take a moment to look back at the outstanding work done this year. From spectacular machines, intricate tools and mesmerising performances and installations to new mediums for artistic enquiry – so many great new projects have been added to the CAN archive! With your help, we selected some favourites.

31/12/2018
Variable – The signification of terms in artists’ statements

Created by Selcuk Artut, Variable is an artwork that explores the signification of terms in artists’ statements. The artwork uses machine learning algorithms to thoughtfully problematise the limitations of algorithms and encourage the visitor to reflect on poststructuralism’s ontological questions.

16/11/2017
TraiNNing Cards – Flash cards to train your machines

Latest in the series of critical design projects by Shanghai design and research studio Automato, TraiNNing Cards is a set of 5000 training images, physically printed and handpicked by humans to train any of your machines to recognise the first and favourite item in a house: a dog.

01/02/2017
Objectifier – Device to train domestic objects

Created by Bjørn Karmann at CIID, Objectifier empowers people to train objects in their daily environment to respond to their unique behaviours. Interacting with Objectifier is much like training a dog – you teach it only what you want it to care about. Just like a dog, it sees and understands its environment.

23/01/2017

Created by Benedikt Groß, Maik Groß and Thibault Durand, Mind the “Uuh” is an experimental training device that helps anyone become a better public speaker. The device constantly listens to the sound of your voice, aiming to make you aware of “uuh” filler words.

Created by Richard Vijgen, ‘Through Artificial Eyes’ is an interactive installation that lets the audience look at 558 episodes of VPRO Tegenlicht (the Dutch future affairs documentary series) through the eyes of a computer vision neural network.

Created by media artist and creative director Dalena Tran, Incomplete is an algorithmically visualised music video for a track from UK musician Ash Koosha’s 2018 album Aktual.

Created by Christian Mio Loclair (Waltz Binaire), ‘Blackberry Winter’ is an investigation into the possibility of identifying motion as a continuous walk through a latent space of situations.

Don’t miss the 9th installment of this inspiring Creative Technology Conference.

Uncanny Rd. is a drawing tool that allows users to interactively synthesise street images with the help of Generative Adversarial Networks (GANs). The project was created as a collaboration between Anastasis Germanidis and Cristobal Valenzuela to explore the new kinds of human-machine collaboration that deep learning can enable.

The chAIr Project is a series of four chairs created using a generative adversarial network (GAN) trained on a dataset of iconic 20th-century chairs, with the goal to “generate a classic”. The results are semi-abstract visual prompts for a human designer, who used them as a starting point for actual chair design concepts.

Created by Waltz Binaire, Narciss is a robot that uses artificial intelligence to analyse itself, thus reflecting on its own existence. Built from Google’s TensorFlow framework and a simple mirror, the experiment translates self-portraits of a digital body into lyrical guesses.

Created by AnneMarie Maes, Genesis of a Microbial Skin is a mixed media installation and research project exploring the idea of Intelligent Beehives, with a focus on smart materials, in particular microbial skin. The project centres on growing Intelligent Guerilla Beehives from scratch, with living materials – just as nature does.

Artificial Imagination was a symposium organized by Ottawa’s Artengine this past winter that invited a group of artists to discuss the state of AI in the arts and culture. CAN was on hand to take in the proceedings, and now that documentation is available, we share videos and a brief report.

Latest in the series of experiments and explorations into neural networks by Memo Akten is a pre-trained deep neural network able to make predictions on live camera input – trying to make sense of what it sees in the context of what it has seen before.

Created by Tore Knudsen, ‘Pour Reception’ is a playful radio that uses machine learning and tangible computing to challenge our cultural understanding of what an interface is and can be. Two glasses of water are turned into a digital material for the user to explore and appropriate.

Created by Arvind Sanjeev, Lumen is a mixed reality storytelling device that lets users explore AR/VR content without being confined to headsets or mobile devices.

Created by Benedict Hubener, Stephanie Lee and Kelvyn Marte at CIID, with help from Andreas Refsgaard and Gene Kogan, ‘The Classyfier’ is a table that detects the beverages people consume around it and chooses music that fits the situation.

Created by Philipp Schmitt (with Margot Fabre), ‘Computed Curation’ is a photobook created by a computer. Taking the human editor out of the loop, it uses machine learning and computer vision tools to curate a series of photos from an archive of pictures.

Created by the R&D team at the creative technology agency DT, Anti AI AI is a wearable neural network prototype designed to notify the wearer when a synthetic voice is detected in the environment.

Created by Refik Anadol in collaboration with Google’s Artists and Machine Intelligence program, ‘Archive Dreaming’ is a 6-meter-wide circular installation that employs machine learning algorithms to search and sort relations among 1,700,000 documents.

Created by Seoul based artistic duo Shinseungback Kimyonghun, ‘Animal Classifier’ is an AI trained to divide animals into arbitrary classifications to foreground the imperfections and edge cases in classification systems.

Created by Dries Depoorter in collaboration with Max Pinckers, Trophy Camera is a photo camera that can only make award-winning pictures. Just take your photo and check whether the camera sees it as award-winning.

Created by Sebastian Schmieg, ‘Decision Space’ explores how new datasets can enable new experiments in teaching computers how to understand images within a set of meaningful and complex categories.

Count: 30

Tags

  • 3d printing
  • Benedikt Groß
  • Device
  • Edge Impulse Studio
  • Machine learning
  • Maik Groß
  • Speech
  • Tensorflow
  • Thibault Durand
  • Ableton
  • Data
  • Database
  • Ios
  • Learning to see
  • MaxMSP
  • MongoDB
  • Neural network
  • Nieuwe Instituut Rotterdam
  • Python
  • Richard Vijgen
  • Socketio
  • Unity
  • YOLOV5
  • Ai
  • Artificial intelligence
  • ATTiny85
  • Computation
  • Custom electronics
  • Custom pcb
  • Environment
  • Installation
  • Perception
  • Ralf Baecker
  • Sound
  • System
  • Topology
  • 3d
  • Amay Kataria
  • Climate change
  • Extinction
  • Threejs
  • Ableton
  • Ableton live
  • Adobe Premiere
  • Audiovisual
  • Brightsign Media Player
  • Cubase
  • Elektron Octatrack
  • ER301
  • FFMPEG
  • Franz Rosati
  • GAN
  • Landscape
  • Max 4 Live
  • Performance
  • PyTorch
  • Quadspinner GAEA
  • Reaper
  • SampleRNN
  • Simulation
  • StyleGAN
  • Touchdesigner
  • Unreal engine
  • Douglas Edric Stanley
  • Film
  • Games
  • Inside
  • Opencv
  • Playstation
  • Alternate reality
  • Architecture
  • Ash Koosha
  • Blender
  • Cinema
  • Deep learning
  • Dalena Tran
  • Digital art
  • GANs
  • Music video
  • Visual art
  • Addie Wagenknecht
  • Ala Tannir
  • Alan Warburton
  • Bianca Berning
  • Bianca Berning (Dalton Maag)
  • Cathy O’Neil
  • Christian Kaegi (Qwestion)
  • Christian Mio Loclair (Waltz Binaire)
  • Cloud computing
  • Davide Fornari
  • Dev Joshi
  • Ecal
  • ECAL Research Day
  • EPFL+ECAL Lab
  • Fabrice Aeberhard (Viu)
  • Featured
  • Ghosts
  • Haunted Machines
  • Hugues Vinet (IRCAM)
  • Impakt Festival
  • IRCAM
  • James Bridle
  • Kai Bernau
  • Kate Crawford
  • Mario de Vega
  • Matthew Plummer-Fernandez
  • Max Bense
  • Natalie D Kane
  • Natalie Kane
  • Neri Oxman
  • Nicolas Henchoz
  • Nicolas Nova
  • Patrick Keller
  • Random International
  • Skylar Tibbits
  • Sustainability
  • Thilo Alex Brunner
  • Tobias Revell
  • V&A
  • Waltz Binaire
  • Christian Mio Loclair
  • Houdini
  • Javascript
  • OpenFrameworks
  • Raygan
  • Vvvv
  • Code
  • Creative code
  • Creative technology
  • Events
  • Festival
  • Minneapolis
  • Technology
  • 2018
  • Adrien Kaeser
  • Algorithm
  • Automation
  • Automato
  • Blockchain
  • Cloud
  • Drawing
  • EEG
  • Furniture
  • Fuse
  • Giulia Tomasello
  • Interactive
  • Iot
  • Jessica In
  • Kimchi and Chips
  • LUST
  • Machine
  • Maria Smigieska
  • Matteo Zamagni
  • Matthias Dörfelt
  • Mediated Matter
  • Near-future
  • Performance
  • Philipp Schmitt
  • Pierre Cutellic
  • Projection
  • Refik Anadol Studio
  • Rndr
  • Speculative
  • Steffen Weiss
  • Tool
  • Weather
  • Anastasis Germanidis
  • City
  • Cityscape
  • Cristobal Valenzuela
  • Germany
  • Painting
  • Collaboration
  • Design
  • Furniture design
  • Neural networks
  • Camera
  • Im2txt
  • Reflection
  • Robotics
  • AnneMarie Maes
  • Bacteria
  • Bee
  • Beehive
  • Biology
  • Fabrication
  • Microbial skin
  • Nature
  • Process
  • Research
  • Smart materials
  • Allison Parrish
  • Artengine
  • Ben Bogart
  • Chris Salter
  • Consciousness
  • Event
  • Jackson 2Bears
  • Kristen Anne Carlson
  • Nell Tenhaaf
  • Nora O Murchú
  • Ottawa
  • Philosophy
  • Sofian Audry
  • Theory
  • Video
  • Computer vision
  • Experiment
  • Knowledge
  • Learning
  • Memo Akten
  • Reality
  • Arduino
  • Capacitive
  • Interface
  • Processing
  • Radio
  • Sensor
  • Simone Okholm Hansen
  • Tore Knudsen
  • Victor Permild
  • Water
  • Wekinator
  • AR
  • Arvind Sanjeev
  • Ciid
  • Classification
  • Darknet
  • Future of the screen
  • Laser projector
  • Mixed reality
  • Projector
  • Raspberry Pi
  • Screen
  • Screen futures
  • Vr
  • Yolo
  • Generative
  • Language
  • Linguistics
  • Markov Chain
  • Meaning
  • Selcuk Artut
  • Text
  • Andreas Refsgaard
  • Benedict Hubener
  • Gene Kogan
  • Kelvyn Marte
  • Machine listening
  • Music
  • Stephanie Lee
  • Student
  • Basiljs
  • Curation
  • Genetic tsp
  • Metadata
  • Photography
  • Tsne
  • Dt
  • R&d
  • Wearable
  • Archive
  • DirectX 11
  • Google
  • Library
  • Refik Anadol
  • Training
  • TSNE algorithm
  • Analysis
  • Shinseungback Kimyonghun
  • Clarifai api
  • Dries Depoorter
  • Google Vision api
  • Max Pinckers
  • Microsoft emotion api
  • Raspberry Pi
  • Raspbian Jessie
  • Cards
  • Creativeappsnet
  • Critical design
  • Media technology
  • Object
  • Crowdsourcing
  • Dataset
  • Exhibition
  • Sebastian Schmieg
  • Bjørn Karmann
  • Domesticity
  • Internet of things
  • Objects
  • Programming
  • Raspberry Pi
  • Adnan Agha
  • Agustín Ramos Anzorena
  • Alex Wagner
  • Baku Hashimoto
  • Bryan Wilson
  • Caitlin Morris
  • Cinema4d
  • Computer science
  • Dan Gorelick
  • Dannie Wei
  • Francis Tseng
  • Hiroshi Okamura
  • Ingrid Burrington
  • Instrument
  • Jason Toy
  • Kaleidoscope
  • Katrina Allick
  • Lauren Gardner
  • Medhir
  • Media art
  • Osc
  • Patricio Gonzalez Vivo
  • Philip David
  • Physical Computing
  • Printing
  • Ps3eye
  • Publishing
  • Ramsey Nasser
  • Report
  • Robby Kraft
  • Ruby Childs
  • School
  • Sequencer
  • SFPC
  • Showcase
  • Taeyoon Choi
  • Teaching
  • Tools
  • Visualization
  • Zach Lieberman