The Transcriptions of Space – AI assisted visual stimuli

Created by Artem Stepanchuk, ‘The Transcriptions of Space’ is an experimental progressive web application (PWA) built on deep learning algorithms that explores artificial intelligence’s capacity for human-like creativity. By chaining two independent neural networks, the application observes the world around it and expresses its “thoughts” based on previously acquired knowledge.

The first algorithm, a convolutional neural network trained for image recognition, mimics the human tendency to detect patterns and find meaning in vague visual stimuli. Through the camera interface, it identifies letters in the shapes of surrounding objects and records them sequentially. An approximate string matching algorithm, backed by an English dictionary, then converts the sequence of detected letters into a readable word. The second algorithm, a language model based on a recurrent neural network, takes the converted word as input and predicts the most likely next words in the sequence, building entire sentences of text. Because the application may be used in very different settings, two language models were trained: one for the Natural environment and one for the Built environment.
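The approximate string matching step described above can be sketched as a nearest-word lookup by Levenshtein (edit) distance. This is an illustrative sketch, not the project’s actual code: the tiny `dictionary` array is a stand-in for the full English dictionary the app uses, and the function names are hypothetical.

```javascript
// Levenshtein distance between two strings, via dynamic programming.
function levenshtein(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                    // deletion
        dp[i][j - 1] + 1,                                    // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)   // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Resolve a noisy letter sequence to the closest dictionary word.
function closestWord(letters, dictionary) {
  let best = dictionary[0];
  let bestDist = Infinity;
  for (const word of dictionary) {
    const d = levenshtein(letters, word);
    if (d < bestDist) {
      bestDist = d;
      best = word;
    }
  }
  return best;
}

// Example: letters read off object shapes resolve to a readable word.
const dictionary = ['forest', 'house', 'river', 'street'];
console.log(closestWord('fxrest', dictionary)); // → "forest"
```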

The Natural environment model was trained on a text corpus of scientific and fiction books about nature (2,763,098 words). The Built environment model was trained on a corpus of books describing the architectural features of cities and the history of urban space and its interaction with humans, along with science fiction about the human future in cyber cities (3,134,152 words).
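The sentence-generation loop, starting from the matched seed word and repeatedly predicting the next word, can be sketched as follows. The hard-coded `nextWord` probability table is purely a stand-in for the trained recurrent language model (one per environment), and all names here are hypothetical:

```javascript
// Toy next-word probability table standing in for the trained RNN
// language model. Keys are current words; values map candidate next
// words to probabilities.
const nextWord = {
  forest: { is: 0.4, grows: 0.3, breathes: 0.3 },
  is: { quiet: 0.6, alive: 0.4 },
  grows: { slowly: 1.0 },
};

// Build a sentence by greedily taking the most likely next word,
// stopping at maxWords or when no continuation is known.
function generate(seed, maxWords) {
  const words = [seed];
  let current = seed;
  for (let i = 1; i < maxWords; i++) {
    const candidates = nextWord[current];
    if (!candidates) break;
    current = Object.entries(candidates).sort((a, b) => b[1] - a[1])[0][0];
    words.push(current);
  }
  return words.join(' ');
}

console.log(generate('forest', 4)); // → "forest is quiet"
```

A real deployment would replace the greedy choice with sampling from the model’s output distribution, which is what makes the generated text vary from run to run.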

Try it here (mobile only) →

Tools: Progressive web application built with the libraries ml5.js, tensorflow.js, p5.js, and jsfeat

Project Page | Artem Stepanchuk


