Created by Douglas Edric Stanley, Inside Inside is an interactive installation remixing video games and cinema. In between, a neural network creates associations from its artificial understanding of the two, generating a film in real time from gameplay, using images drawn from the history of cinema.
Inside Inside is the first in a series of interactive installations combining video games and cinema, as filtered through neural networks. In this first iteration of the system, players control a central character, a small boy, as he tries to survive within the dystopia of the popular video game “Inside”. In parallel, a neural network analyses the images of their gameplay in real time and attempts to look “inside” the images emerging from their PlayStation, finding imagery from a curated list of eerily relevant science-fiction and horror dystopias from film and television.
As the player advances level by level through the game, the system analyses the game environment, along with other factors, and detects features it has “learned” to recognize via the trained neural network. It then associates these features with images on a second screen, drawn from a database of curated sequences from the history of cinema. As a result, players can essentially re-sequence these images by lingering, advancing, retreating, or skipping through the game via its built-in “chapter” selection: the timing, order and duration of the film depend entirely on the nature of the player’s gameplay. While players move through the space of the game, they are simultaneously editing a film.
At the heart of the installation lies a neural network, designed to take video game imagery as its real-time input and to output images from a curated database. This machine-learning dispositif is built entirely from the ground up in standard C++, using APIs from the open-source OpenCV library to break the images down into their component (machine-analysable) parts. The system also uses OpenCV’s most recent implementations to “train” during the machine “learning” phase, in which it establishes the mechanically-determined image-to-image relationships.
For more information about the project, including technical details, project references and precedents, please see the links below.