The latest in the series of video essays by artist and researcher Alan Warburton is ‘RGBFAQ’, tracing the trajectory of computer graphics from WW2 to Bell Labs in the 1960s, and from the visual effects studios of the 1990s to the GPU-assisted algorithms of the latest machine learning models.
Displaying search results (62 results).

gr1dflow is a collection of artworks created through code, delving into the world of computational space. While the flowing cells and clusters showcase the real-time and dynamic nature of the medium, the colours and the initial configuration of the complex shapes are derived from blockchain-specific metadata associated with the collection.
Assembling Intelligence, a multidisciplinary symposium organised by HEAD – Genève (HES-SO), will bring together artists, designers, and researchers to highlight a spectrum of alternative definitions for ‘artificial intelligence’.
SPIN is an AI music synthesizer that allows you to co-create compositions with a language model, MusicGen. It is a playful invitation to explore the nuances of algorithmic music, encouraging you to slow down and zoom in on its artifacts. It celebrates the marriage between human and machine creativity through music.
Helios 2024 is a celebration of the sun in its phase of solar maximum by audiovisual artist Dan Tapper. The exhibition gathers solar data from space organizations and uses Tapper’s DIY devices, which encode solar data into lo-fi playable records and reveal the radio spectrum of the Earth’s ionosphere and space.
Narratron is an interactive projector that augments hand shadow puppetry with AI-generated storytelling. Designed for all ages, it transforms traditional physical shadow plays into an immersive and phygital storytelling experience.
Latent Imaging and Imagining is part of an autoethnographic artistic research study exploring the concept of chrononormativity through an inverted, nonconforming perspective, and negotiating a careful and queer mode of accessing childhood memories.
Created by Richard Vijgen, ‘Through Artificial Eyes’ is an interactive installation that lets the audience look at 558 episodes of VPRO Tegenlicht (Dutch Future Affairs Documentary series) through the eyes of a computer-vision neural network.
Created by Franz Rosati, ‘Latentscape’ depicts an exploration of virtual landscapes and territories, supported by music generated by machine learning tools trained on traditional, folk, and pop music without temporal or cultural limitations.
NØ SCHOOL NEVERS – JUNE 29TH TO JULY 11TH 2020 is a unique international summer school, held in Nevers, in Burgundy, aimed at students, artists, designers, makers, hackers, activists and educators who wish to further their skills and engage in critical research around the social and environmental impacts of information and communication technologies. During 2…
Created by Christian Mio Loclair (Waltz Binaire), ‘Blackberry Winter’ is an investigation into the possibility of identifying motion as a continuous walk through a latent space of situations.
Created by Shanghai-based design studio automato.farm, ‘BIY™ – Believe it Yourself’ is a series of real-fictional belief-based computing kits to make and tinker with vernacular logics and superstitions.
What is art? Is it the unsaid? The unsettling? The last few years have seen a surge of activity in the field of Generative/Procedural art. We have seen some exciting applications of this field hitting mainstream media — be it generative architecture like the Digital Grotesque, or the AI-generated paintings which sold for a bang…
As 2018 comes to a close, we take a moment to look back at the outstanding work done this year. From spectacular machines and intricate tools to mesmerising performances, installations, and new mediums for artistic enquiry — so many great new projects have been added to the CAN archive! With your help we selected some favourites.
Created by Jessica In, Machinic Doodles is a live, interactive drawing installation that facilitates collaboration between a human and a robot named NORAA – a machine that is learning how to draw. The work explores how we communicate ideas through the strokes of a drawing, and how a machine might also be taught to draw through learning, rather than via explicit instruction.
Uncanny Rd. is a drawing tool that allows users to interactively synthesise street images with the help of Generative Adversarial Networks (GANs). The project was created as a collaboration between Anastasis Germanidis and Cristobal Valenzuela to explore the new kinds of human-machine collaboration that deep learning can enable.
The chAIr Project is a series of four chairs created using a generative adversarial network (GAN) trained on a dataset of iconic 20th-century chairs, with the goal to “generate a classic”. The results are semi-abstract visual prompts for a human designer, who used them as starting points for actual chair design concepts.
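The adversarial training behind projects like these can be sketched in miniature. The following toy NumPy example is our own illustration of the general GAN idea, not the code of any project above: a one-unit generator learns to mimic scalar “designs” drawn from a target distribution by fooling a logistic discriminator, with both players updated by hand-derived gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Real "designs": scalar samples from N(4, 1) stand in for a training dataset.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

# Generator G(z) = w*z + b and discriminator D(x) = sigmoid(a*x + c):
# each is a single affine unit so the adversarial updates stay legible.
w, b = 1.0, 0.0      # generator parameters
a, c = 0.0, 0.0      # discriminator parameters
lr = 0.05

for step in range(3000):
    n = 64
    x_real = real_batch(n)
    z = rng.normal(0.0, 1.0, n)
    x_fake = w * z + b

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    p_real = sigmoid(a * x_real + c)
    p_fake = sigmoid(a * x_fake + c)
    a += lr * np.mean((1 - p_real) * x_real - p_fake * x_fake)
    c += lr * np.mean((1 - p_real) - p_fake)

    # Generator: ascent on the non-saturating objective log D(fake).
    p_fake = sigmoid(a * x_fake + c)
    w += lr * np.mean((1 - p_fake) * a * z)
    b += lr * np.mean((1 - p_fake) * a)

# After training, generated samples should cluster near the real mean (4.0).
samples = w * rng.normal(0.0, 1.0, 1000) + b
print(round(float(np.mean(samples)), 2))
```

Real projects replace the affine units with deep convolutional networks and scalars with images, but the two-player update loop is the same.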
Created by Juliane Götz and Sebastian Neitsch of Quadrature and currently on view within the Ars Electronica exhibition at the DRIVE Volkswagen Group Forum in Berlin, “Positions of the Unknown” is an installation of 52 custom-made mini machines that, ever so slowly, track unidentified objects (possibly classified satellites) in Earth’s orbit.
Guillaume Massol’s openFrameworks app titled “All work and no play” watches videos coming from different training datasets and generates sentences loosely based on what is happening on the screen, sometimes creating pearls of wisdom by coincidence.