“Captured” is a temporary installation realised in May 2011 by Nils Völker and graphic designer Sven Völker at MADE Space in Berlin. The installation comprises four hanging walls with 304 framed graphic pages and a field of 252 inflatable silver cushions. Both artworks relate to the theme of light and air and interact with each other in a twelve-minute performance that also includes sound.
Sven Völker's graphic work takes the form of his so-called “books on walls”, which narrate the installation's four chapters: the intangible, the volume, the border and the ephemeral. Nils Völker's custom-made inflatable air bags were programmed by the artist to play sequences matching the chapters. He also controlled the space's existing multi-colour lighting system to intensify the dramaturgy and to create a close relationship between all the elements.
The setup consists of 252 modules, inflating cushions made from space blankets, covering about 130 square metres of floor. Inside each module, eight CPU cooling fans inflate and deflate the bag at variable speeds. Altogether, 2,016 fans move about 60 cubic metres of air. The whole set is controlled by a single Arduino board with shift registers attached to it, providing a total of 504 output pins. In this way every single bag can be controlled fully independently.
In addition, the exhibition space includes a pre-installed lighting system. It consists of 255 large lamps, each equipped with both a fluorescent tube and RGB LEDs. During the performance it was controlled by a program Nils wrote in Processing that could address each lamp individually. The Processing program also handled the sound playback and the timing for the Arduino program.
Posted on: 29/06/2011