PSS Studie by Daniel Franke questions how we perceive images. The project is interested in sensory experience and in how our distorted perception alters and duplicates the world we see. The starting point is the moving image in the form of a "simulation": digital data generated by an animated movie are transferred back into the real world as a pointer moving through space.
Eight nylon cords link to a common point that can hold its position in space only through their interdependent movement. A loop occurs: a movement is simulated in a digitally reconstructed physical space, and the resulting position data are transferred back into physical space. The outcome is a form of spatial image, a kinetic plane that expands in three dimensions. As a consequence, perception changes: the moving image can no longer be seen only from one fixed viewing angle or a single viewer position. The observer is autonomous, moving around the sculpture and thus controlling his or her own point of view of the spatial film. The restrictions of the medium are thereby scrutinised, much as the expanded cinema movement questioned them. With that in mind, the work follows the idea of the earlier "Spatial Soundsculpture", but in contrast to that work the screen has completely vanished. The interface that leads into the digital medium, in the form of a window, is visible only at the edges of the mapped coordinate system.
Components (D-2011): acrylic glass cube, acrylic glass spools, PC, screen, servo motors, microcontroller, Processing application. Dimensions: 130 cm × 60 cm × 30 cm.
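The cord mechanics described above amount to a simple inverse-kinematics problem: each cord's length is the distance from its anchor to the pointer. A minimal Python sketch, assuming the eight anchors sit at the corners of the acrylic cube (the actual rig geometry is not documented, so this layout is an assumption):

```python
import math

# Hypothetical anchor layout: the eight corners of the acrylic cube,
# using the dimensions from the component list (cm). The real rig
# geometry is an assumption here.
W, H, D = 130.0, 60.0, 30.0
ANCHORS = [(x, y, z) for x in (0.0, W) for y in (0.0, H) for z in (0.0, D)]

def cord_lengths(target):
    """Inverse kinematics of the cable rig: each cord length is the
    Euclidean distance from its anchor point to the pointer position."""
    return [math.dist(target, a) for a in ANCHORS]

# Holding the pointer at the cube's centre: all eight cords pay out
# the same length, since the centre is equidistant from the corners.
lengths = cord_lengths((W / 2, H / 2, D / 2))
```

Moving the pointer then reduces to re-spooling each cord to its new computed length in step with the others, which is why the position can only be held through their interdependent movement.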
- Petting Zoo by Minimaforms – Artificial creatures designed to learn and explore Created by London-based experimental architecture and design studio Minimaforms, this project is a speculative, life-like robotic environment that raises questions of how future environments could actively enable new forms of communication with the […]
- Four Letter Words [Arduino, Processing, C++] About a year ago, Rob Seward created the Four Letter Words piece. The original video now counts about 111k views on Vimeo and has been blogged by numerous sites, of which I think Pieter and Rhizome were the first. Earlier today, I got an email from Rob about the latest video he made, which was projected on a screen hung between two trees, with several other sound and video installations in the woods nearby. It was made using After Effects, with sound in Ableton Live, using the FLW installation as source material (see bottom of this post). I thought the project needed a revisit, looking at the ins and outs of how it actually works, what makes it tick, as they say. After a few emails back and forth with Rob, here are the details: The installation consists of four units, each capable of displaying all 26 letters of the alphabet with an arrangement of fluorescent lights. The piece displays an algorithmically generated word sequence derived from a word association database developed by the University of South Florida between 1973 and 1998. The algorithms take into account word meaning, rhyme, letter sequencing, and association. A Mac mini running Processing sends alignment data to four Arduino boards (one for each letter) that are chained together. The positions of the lights are stored in an XML file. There is an app that allows Rob to tweak the positioning in case anything gets out of alignment (see first image below). Another app simply takes what you type on the keyboard and sends it straight to the machine; that is what was used in the A–Z section of the video. The third app reads lists and sends words to the sculpture. Rob describes it as a bit more complicated than he thought it would be, because there are certain transitions the sculpture cannot make without intermediary positioning of the lights. For example, if S goes to D, the top and bottom lights will collide, causing the machine to jam.
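The jam-avoidance step could be sketched as a lookup table of unsafe letter pairs, each mapped to a safe intermediary arrangement. A minimal Python sketch; the specific pairs and the intermediary letter are illustrative, not Rob's actual data:

```python
# Illustrative jam-avoidance table: which direct letter-to-letter moves
# would make the lights collide, and which arrangement to pass through
# first. The pairs and the intermediary are made up; Rob's Processing
# app holds the real data.
UNSAFE = {("S", "D"): "I"}

def safe_sequence(letters):
    """Expand a letter sequence so no unsafe transition happens directly."""
    out = [letters[0]]
    for nxt in letters[1:]:
        intermediary = UNSAFE.get((out[-1], nxt))
        if intermediary is not None:
            out.append(intermediary)  # park the lights safely in between
        out.append(nxt)
    return out
```

With this table, a word like "SD" would be displayed as S, then I, then D, so the top and bottom lights never cross paths directly.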
The Processing app makes sure that none of these problem transitions occurs without inserting an intermediary arrangement of the lights that allows them to move safely. For installations, the word lists are derived from some C++ apps Rob wrote. You can find more information about them at robseward.com/associations (second image above). The words you see in the video are put together by association: DEER goes to HUNT goes to KILL; KISS goes to LIPS. The words it chooses tend to have more negative associations. The other two images above show text with English-like letter ordering (see third image above). Rob made it by modifying a Markov-chain Ruby script. The software, written in Processing, also places four-letter words adjacently (fourth image). The installation in total includes 4 Arduinos, 20 servos, 8 stepper motors, and 24 3.9-inch cold cathode lights with their inverters. Each Arduino has 2 steppers, 5 servos, and 6 lights to control. There are 2 custom shields on each Arduino, one for the lights and one for the motors. Rob wrote a library to operate the servos and steppers simultaneously, which you can download here (github). While the piece was conceived with the idea of displaying algorithmically generated lists, it was designed with flexibility and expandability in mind. The individual units can be connected ad infinitum and are theoretically capable of displaying any length of text. While Four Letter Words deals with a specific range of content, the technology can easily be expanded for future textual experiments. Thanks Rob! Rob Seward is an artist and programmer. His work has been exhibited at the Blanton Museum, Austin; CVZ Contemporary, New York; Center For Opinions in Music and Art, Berlin; and Nova Scotia College of Art and Design, Halifax. He has lectured at the Centre Pompidou, Paris, and at Columbia University and Location One, both in New York.
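The English-like letter ordering Rob mentions comes from a Markov chain over letter transitions. A minimal order-1 sketch in Python (the original was a modified Ruby script, so this is only an illustration of the technique, with a tiny toy corpus):

```python
import random
from collections import defaultdict

def train(words):
    """Count which letter tends to follow which (order-1 Markov chain).
    '^' marks the start of a word."""
    model = defaultdict(list)
    for w in words:
        for a, b in zip("^" + w, w):
            model[a].append(b)
    return model

def generate(model, length=4, seed=None):
    """Walk the chain to emit an English-like letter sequence."""
    rng = random.Random(seed)
    out, cur = [], "^"
    for _ in range(length):
        followers = model.get(cur) or model["^"]  # restart at dead ends
        cur = rng.choice(followers)
        out.append(cur)
    return "".join(out)

model = train(["deer", "hunt", "kill", "kiss", "lips"])
word = generate(model, length=4, seed=1)
```

Trained on a real word list, sequences like this follow plausible English letter statistics without usually forming actual words, which matches the effect visible in the third image.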
He holds a master's degree from the Interactive Telecommunications Program (ITP) at New York University's Tisch School of the Arts. Before that, he worked in collaboration with composer Fred Lerdahl, creating software based on the Generative Theory of Tonal Music. Rob lives and works in New York City. Previously: Kunst Bauen [iPad, iPhone, oF, Mac] - "interactive […]
- Underwater by David Bowen – Hundreds of servos controlling wave patterns Created by David Bowen, Underwater is an installation created for INTERIEUR 2012. It uses a Microsoft Kinect to collect real-time surface data from moving wave patterns and translates them into this large scale installation comprised of hundreds of […]
- Pulse Mirror [Processing, Arduino] Created by Chris Lee & Henry Chang, PulseMirror is an interactive installation that collects participants' pulse rates and translates them into a mirrored visual image. The mirror image is created by a series of circles that pulsate with heart rate data collected from different participants. Participants input their heart rate by placing a finger on the device for 15 seconds. The device, which incorporates an Arduino, detects the heart rate, and a random circle on the screen becomes the representation of that participant's heart rate. The circles on the screen change their color to form a mirrored image of what is captured by the webcam on the monitor. See also R133 (sadly no longer available in the […]
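A 15-second reading is presumably scaled up to beats per minute; how PulseMirror actually converts the reading is not described, so this one-function Python sketch is only an assumption about the arithmetic:

```python
def bpm_from_window(beat_count, window_seconds=15):
    """Scale beats counted during the short finger reading up to
    beats per minute. That the device scales a 15-second count this
    way is an assumption; the post only gives the reading time."""
    return beat_count * 60.0 / window_seconds
```

For example, 18 beats counted in 15 seconds would drive the participant's circle at 72 BPM.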
- May The Force Be With You – Teo Park Created by Seoul-based artist Teo Park, "May the Force be With You" is a kinetic installation that invites visitors to interact with a water tank. The tank uses gravitational force driven by the position of the viewer's hand […]
- My little piece of Privacy [Processing] My little piece of Privacy is an installation by Niklas Roy. His workshop is located in an old storefront with a big window facing the street. In an attempt to create more privacy inside, Niklas decided to install a small but smart curtain. The curtain is smaller than the window, but an additional surveillance camera and an old laptop provide it with intelligence: "My little piece of Privacy" is a robotic curtain that is too small for the window where it is installed. But since it is robotic and controlled by a laptop (running some Processing code that does the computer vision), it detects the location of pedestrians outside and positions itself rapidly in front of them, not only blocking their view inside but also serving as a playful installation for people on the street. The rapid positioning of the curtain is done with a custom linear servo drive: basically a DC gearmotor controlled by an ATmega, programmed in AVR-GCC (as Arduino-style GCC proved too slow to handle the job). Processing and AVR-GCC code are both on the page for download, as well as plans and schematics for building the linear servo. Love it! Project Page Album with (even more) hi-res photos Download AVR-GCC and Processing codes Download plans and […]
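At its core, the curtain's behaviour is a mapping from the tracked pedestrian's x coordinate in the camera frame to a position along the rail. A minimal Python sketch; the frame width and rail length are illustrative values, not Niklas's real numbers:

```python
def curtain_target(pedestrian_x, frame_width=640, rail_length_mm=1200):
    """Map the tracked pedestrian's x coordinate in the camera frame
    (pixels) onto a position along the curtain rail (mm). Frame width
    and rail length are illustrative assumptions."""
    x = min(max(pedestrian_x, 0), frame_width)  # clamp to the frame
    return x / frame_width * rail_length_mm
```

The laptop's computer vision supplies `pedestrian_x` each frame, and the linear servo drive chases the resulting target fast enough to stay between the passer-by and the window.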
- Untitled Faces [openFrameworks, Processing, Arduino] Untitled Faces by Nathan Selikoff is an interactive sculpture that mixes a chaotic dynamical system with its "meta" representation, allowing the viewer to explore the somewhat unpredictable four-dimensional parameter space. This work builds off both my Aesthetic Explorations and my Faces of Chaos series. With the former, I am exploring individual strange attractors: each image encodes four specific parameters. With the latter, I am exploring the space of all possibilities, and each image encodes a range of parameters in a "meta" view of the system. The left-most pane shows a small representation of another artwork, Tiled Faces, with a small red square over one image of this 32×32 grid. As the left lever is moved, the red square moves, updating the x and y position and simultaneously updating both the center and right-most panes. The right pane shows the image from the left pane, zoomed in. The right-most lever moves a small red target within this image, updating another x and y position and simultaneously updating the center pane. The center pane shows a chaotic attractor whose four coefficients are taken from the positions of the left and right levers. The center lever adjusts the virtual camera viewing this strange attractor. The object attempts to suggest a connection between the images and, Nathan writes, in some way show the mysterious relationship between a strange attractor and its Lyapunov exponent. This artwork was prototyped in Processing, with the final version produced in openFrameworks running on Ubuntu. For a full list of components and more info see the project page. (Thanks to Nathan for sending this in. It was a pleasure meeting you at […]
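A four-coefficient strange attractor of the kind the levers control can be iterated in a few lines. The Python sketch below uses the well-known Clifford map as a stand-in, since the post does not specify which map Nathan's system actually uses:

```python
import math

def attractor_points(a, b, c, d, n=10000):
    """Iterate a Clifford-style attractor, a common map with exactly
    four coefficients. Whether Nathan's system uses this particular
    map is an assumption; the post only says it takes four parameters
    from the lever positions."""
    x, y = 0.1, 0.1
    pts = []
    for _ in range(n):
        x, y = (math.sin(a * y) + c * math.cos(a * x),
                math.sin(b * x) + d * math.cos(b * y))
        pts.append((x, y))
    return pts

pts = attractor_points(-1.4, 1.6, 1.0, 0.7, n=1000)
```

Plotting the accumulated points yields one "face" of the system; sweeping (a, b, c, d) across a grid and rendering a thumbnail per cell is what produces a 32×32 "meta" view like Tiled Faces.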
Posted on: 17/06/2011