Created in the labs of Teehan+Lax, “Painting with a digital brush” is an attempt to free ASCII Art from the confines of the screen and enable it to exist in physical space – using only light and paint.
For many of us who grew up with computers, text-mode art represents something deeper than nostalgia. It is an art form born of technological constraints, inspired by the same hacker ethos that built the early machines used to produce and view it. Fundamentally, it is both an expression of and a prisoner of the system it inhabits. This latest experiment attempts to set it free – to let ASCII art exist in physical space, rendered in light and paint.
The project started as an attempt to find a different way to approach image-to-text conversion. We have seen many image-to-ASCII converters already, but in this project Teehan+Lax use real paint as the input. As the user paints in white on a black background, software converts the brushstrokes into projected ASCII characters in real time, creating the sensation that the characters are being painted directly onto the surface. The resulting program (available as an openFrameworks addon) scales to huge dimensions with little strain. Another advantage of writing the new library in OpenGL was easy portability to WebGL, enabling a unique twist on exploring the physical world with three.js – Google Street View in ASCII (see below).
Peter Nitsch created the shader in openFrameworks using Sol’s TextFX library (http://sol.gfxile.net/textfx/).
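The core of any such conversion is simple: divide the image into character-sized cells, average each cell's luminance, and map that value to a glyph of matching visual density. The sketch below shows that mapping on the CPU in plain C++ – an illustrative sketch only, with an assumed character ramp, not Nitsch's shader, which does the equivalent work per-fragment on the GPU by indexing into a texture atlas of pre-rendered glyphs.

```cpp
// Minimal CPU-side sketch of the brightness-to-glyph mapping an ASCII
// shader performs per character cell. The ramp and names are illustrative
// assumptions, not the project's actual implementation.
#include <string>
#include <vector>

// Characters ordered from "dark" (sparse) to "bright" (dense).
const std::string RAMP = " .:-=+*#%@";

// grey: 8-bit luminance values, row-major, width*height pixels.
std::string asciify(const std::vector<unsigned char>& grey,
                    int width, int height, int cell = 8) {
    std::string out;
    for (int cy = 0; cy + cell <= height; cy += cell) {
        for (int cx = 0; cx + cell <= width; cx += cell) {
            // Average the luminance over one character cell.
            int sum = 0;
            for (int y = 0; y < cell; ++y)
                for (int x = 0; x < cell; ++x)
                    sum += grey[(cy + y) * width + (cx + x)];
            int avg = sum / (cell * cell);
            // Quantize the average to an index into the character ramp.
            out += RAMP[avg * (RAMP.size() - 1) / 255];
        }
        out += '\n';
    }
    return out;
}
```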
- OFFF + CAN Workshop Collaborative 2011 [Cinder, oF, Js, Events]

Earlier this year we had been thinking about the concept of "curated workshops": an opportunity to bring people together to work for a very short period of time and share their creations. This would involve setting up a team, inviting a few high-profile individuals and opening up submissions for participation. When Héctor Ayuso approached me earlier this year to give a talk at OFFF, instead of talking about CAN I thought this would be a great opportunity to do something more – a workshop – and use the workshop material as the content to drive the talk. Héctor and I agreed, and the 'Workshop Collaborative' was born.

What was the aim of the 'Workshop Collaborative'?
1. Initiate collaborations between those who share common interests.
2. Create a playing field, both physical and virtual.
3. Allow ideas to evolve by asking questions.

When we announced the workshop back in January, we also opened applications for participation. In total, 80 applications were submitted and 11 participants were chosen by the team, which included Aaron Koblin, Ricardo Cabello (mr.doob), myself and Eduard Prats Molner. The participants were: Marek Bereza, Alba G. Corral, Andreas Nicolas Fischer, Martin Fuchs, Roger Pujol Gomez, Marcin Ignac, Rainer Kohlberger, Thomas Mann, Joshua Noble, Roger Pala and Philip Whitfield.

Programme (single day):
09:00 - 10:00 Introductions / Teams
10:00 - 13:30 Stage 1
13:30 - 14:00 Lunch
14:00 - 19:00 Stage 2 (Completion)
Total creation time: 6.5 hours

A few weeks before the workshop, Aaron and I decided on four themes that would be allowed to influence the work we would be making. By letting the other participants comment and give feedback on these themes, we would discover the areas we all wanted to explore. The themes were:
1. Digital Ecosystem – Build an application, an organism of information, sound and visuals; a digital ecosystem that flows through different mediums and evolves. A living system that travels through technology and mutates through tools.
2. Analogue Digital – Explore the notions of physicality in code, using made objects as assets for code: cut paper and cut-outs, traditional 2D scans, 3D objects scanned using flatbed scanners, etc.
3. Projection Mapping – Address projection mapping conceptually. Moving away from technical demos, it is time to question what it all means: surface, source, angle, point projection, scale, form, interaction, animation.
4. Data Re-embodied – Tell stories through the juxtaposition of data sources and their methods of representation. How can we create new meaning, understanding and value from the reinterpretation of data?

By no means did this mean we would have to choose one theme over another. The purpose was to get a feel for where the participants' interests lay and to set up, so to speak, a 'playing field' that would let the first ideas develop. We knew that, working together for a single day, we would not be able to produce anything of "finished" quality; instead we would focus on the subjects themselves and see what came out. Following the feedback, a number of keywords were derived to summarise our interests: ecosystem, data, scan, evolution, input, mutation, osc, node, rhythm, pattern, touch, physical, language, viewport and mobility.

Five projects were developed during the 6.5 hours of work: Kinect > WebGL bridge, Kinect Image Evolved, Input Device, Data Flow and Receipt Racer.
-- Kinect > WebGL

This project was the work of mr.doob, Marcin and Edu, although others were involved as well. The task was to create a bridge between the Kinect and the browser, allowing a real-time feed over the web. Aspirations were higher than the time allowed: instead of using the node.js server – which I understand was 99% complete anyhow – the team settled for feeding downscaled image data from the Cinder application to the three.js script via standard HTTP requests, with the script reading the images at about 10 fps. Several rendering styles are presented below. The first is a simple point cloud made by Marcin for debugging, while the rest were made by mr.doob using his amazing three.js engine. Download the .js code here.

-- Kinect Image Evolved

While Ricardo was working on the .js part, Marcin was simultaneously exploring different ways of representing the Kinect image. In an attempt to get away from the standard Kinect point cloud, we developed the idea of applying a slit-scan effect to it: the point cloud is dispersed along a time axis, with different bands representing different moments in time. Marcin also explored what happens when a point's location is reversed once a particular depth is reached. The videos below show both effects. Code available soon. Thomas and Andreas also tested different tools for manipulating the Kinect image: Meshlab and Blender were used to pull in Kinect point clouds and convert them into meshes, which could then be rendered, distorted, split, etc.

-- Input Device

Marcin was also working on ways to control the input, i.e. how one could interact with the Kinect point cloud. We toyed with the idea of assigning different devices, over OSC, to different Kinect body parts. This would allow each individual to be assigned a unique element of the point cloud and to interact with it. The first step was to use simple gyroscope data sent from an iPhone over OSC; the video below shows what is happening. Likewise, Rainer and Roger were working on the iPhone application that would send the OSC data. Rather than just using the gyroscope or accelerometer, Rainer explored different forms of interaction with the device, asking whether a language could evolve that would somehow deepen the emotional attachment to the Kinect body parts. The videos below show an instrument-like application that also gives audio feedback. Code available soon.

-- Data Flow

With all this data moving around, Marek wondered what would happen if input and output were in the same medium, so they could be compared apples to apples. He examined a processing loop by subtracting the initial input from the output, leaving just the parts that change. JPEG compression was chosen for the loop because it was readily available in openFrameworks and ubiquitous enough to warrant investigation. The boxy images are the result of feeding the JPEG "high" quality setting back into itself and subtracting the result from the original; the finer images use the "best" compression setting. Marek then tried the same thing with sound (using Logic), taking the original sound, then the encoded version, and listening to what is left. You can hear all three below:
Original / OFFFCAN Workshop Collaborative by filipvisnjic
Encoded / OFFFCAN Workshop Collaborative by filipvisnjic
Difference / OFFFCAN Workshop Collaborative by filipvisnjic
Code available soon.
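For the curious, here is a minimal sketch of the Data Flow loop described above, assuming openFrameworks' ofImage API (save() and load() with a JPEG quality flag). The function and file names are illustrative, not Marek's actual code:

```cpp
// A minimal sketch of the Data Flow idea: round-trip an image through
// JPEG compression and keep only the difference from the original.
#include "ofMain.h"
#include <cstdlib>

// Writes the per-pixel difference between an image and its JPEG-compressed
// self into 'diff', leaving only the parts the codec changed.
void jpegDifference(const ofImage& input, ofImage& diff) {
    ofImage compressed = input;
    // Round-trip through JPEG at "high" quality; "best" yields finer artifacts.
    compressed.save("roundtrip.jpg", OF_IMAGE_QUALITY_HIGH);
    compressed.load("roundtrip.jpg");

    ofPixels a = input.getPixels();            // working copy of the original
    const ofPixels& b = compressed.getPixels();
    for (size_t i = 0; i < a.size(); ++i) {
        a[i] = std::abs(a[i] - b[i]);          // keep only what compression changed
    }
    diff.setFromPixels(a);
}
```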
-- Receipt Racer

Receipt Racer combines different input and output devices into a complete game. It was made by Martin, Philip and Joshua using a receipt printer – a common device you see in every convenience store – a small projector, a Sony PlayStation controller and a Mac running a custom openFrameworks application. Print is a static medium; that is exactly why, Philip, Martin and Josh explain, it was an intriguing challenge to create an interactive game with it. At first the team tried to use only the printer as the visual representation, but that seemed rather impossible. Then Joshua Noble came up with a small projector, perfect for projecting a car onto the pre-printed road. There is no game without an input device, and luckily at least one of them always carries a gamepad around. The cables connect back to the laptop running the openFrameworks application, which was programmed entirely during the workshop.

Internally the app runs something like a basic JS game: just a car driving on a randomly generated race track. It broadcasts its components to the external devices, printing the street and guessing where the car's projection is supposed to be in order to perform the hit test (a hypothetical sketch of this guessing step follows at the end of this section). That is the trickiest part: everything has to stay in sync and needs some calibration at the beginning. The paper also has a bit of a mind of its own and tends to slide around or curl, but that is nothing some duct tape and cardboard can't fix. It was a lucky day – somehow everything was just lying around waiting to be used, even the stand and the plastic holder you would normally use to display your name at a conference. The timing was perfect too: right at the end of the workshop we finished adding details like a small score counter and the "YOU CRASHED" text. Project Page (code available)

--

On Saturday we presented the creations. Even though Erik Spiekermann was presenting in the other OFFF room, we had a full theatre (an estimated 500 people), plus another room where our talk could be watched on a large screen. Photo above by Arseny Vesnin. CAN would like to thank all the participants at the workshop, as well as Aaron and Ricardo for taking time out of their busy schedules to take part. For more information on the workshop and all future information/code/links see creativeapplications.net/offf2011. Photos by Jason Vancleave. We leave you with the OFFF Barcelona 2011 Main Titles, made for OFFF by PostPanic (full screen […]
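Returning to Receipt Racer's trickiest part – guessing which printed row of track currently sits under the projected car – here is a hypothetical sketch of how such a hit test could work. All names and the printer feed rate are assumptions for illustration, not the team's actual code:

```cpp
// Hypothetical sketch of the Receipt Racer hit test: estimate which printed
// row of track is under the projector, then check the car is on the road.
#include <cstdlib>
#include <vector>

struct TrackRow { int roadLeft; int roadWidth; };

std::vector<TrackRow> track;          // rows, in the order they were printed
const double LINES_PER_SECOND = 15.0; // measured printer feed rate (assumed)

// Generate the next row by nudging the road sideways at random.
TrackRow nextRow(const TrackRow& prev, int paperWidth) {
    TrackRow r = prev;
    r.roadLeft += (std::rand() % 3) - 1;  // drift -1, 0 or +1 columns
    if (r.roadLeft < 0) r.roadLeft = 0;
    if (r.roadLeft + r.roadWidth > paperWidth)
        r.roadLeft = paperWidth - r.roadWidth;
    return r;
}

// Given seconds since printing started and the car's column, estimate the
// row under the projector from the feed rate and test against the road edges.
bool carOnRoad(double elapsedSeconds, int carColumn) {
    size_t row = static_cast<size_t>(elapsedSeconds * LINES_PER_SECOND);
    if (row >= track.size()) return true;  // that row is not printed yet
    const TrackRow& r = track[row];
    return carColumn >= r.roadLeft && carColumn < r.roadLeft + r.roadWidth;
}
```

The calibration the team mentions would amount to measuring LINES_PER_SECOND and the projector's offset accurately enough that the estimated row matches the paper – which is exactly why curling paper breaks the sync.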
- Street Views Patchwork by Julien Levesque

Created by Julien Levesque, Street Views Patchwork is a website that combines four embedded Google Street View scenes from different places into one coherent landscape. It works like a slideshow, but the collage quickly breaks if you try to navigate it as you would Street View itself. The images match because the vanishing points and horizon align: the camera always films from the same height, mounted on the moving Google vehicle. Street Views Patchwork reads like a remixed panorama, a mural of horizontal bands of landscape imagery drawn from the famous Google Street View application. Fjords, desert, mountainous valleys, steep roads... These recomposed – or rather composite – landscapes freeze, for a moment, pictures taken in Finland, California, Mexico, Australia, the Auvergne or the south of France. Continuously updated in slideshow mode, they take us travelling through an imaginary, ephemeral geography. By sequencing a reality from which every animal and human figure is absent, these "scenery flows" complete the dematerialisation of our world. We are confronted with fragmentation through this plural visualisation. Try it full screen here | Julien […]
- The Carp and the Seagull – Interactive short film by Evan Boehm

The Carp and the Seagull is an interactive short film about one man’s encounter with the spirit world and his fall from grace. It is a user-driven narrative that tells a single story through the prism of two connected spaces. One space is the natural world and the other is the spirit or nether […]
- This Exquisite Forest – Project by Aaron Koblin and Chris Milk

This Exquisite Forest is a new collaborative project by Aaron Koblin and Chris Milk, produced by Google and Tate. Following the opening party last night at the Tate Modern, where we had a chance to get a first look, This Exquisite Forest is now live and awaiting your contributions. In the tradition of collaborative projects between Aaron and Chris (3 Dreams Of Black, The Wilderness Downtown, The Johnny Cash Project), "This Exquisite Forest" is primarily a crowd-sourced animation tool that combines short animated drawings into ever-growing trees. Each animated drawing is a branch of a tree, and each response is yet another branch of the same tree: someone starts a drawing, you respond to it, and the tree grows larger. The installation at the Tate Modern includes a room with projections on each wall, showing a tree or two per wall. Each tree can be browsed using the provided infrared pointer; as on the website, when the pointer hovers over a branch, the animation stored in that branch plays. Likewise, at the main entrance of the Turbine Hall, visitors are greeted by two projections at the end of the ramp showing a selection of trees from the archive. Visitors can also create drawings on the third floor of the gallery using installed Wacom tablets. The team behind the project tells CAN that the selection of trees shown at the Tate has been "approved" by the Tate, whereas the website contains all the other trees "grown" by users around the world. The project makes use of several HTML5 features in Google Chrome. The HTML5 Canvas element is showcased in the site’s drawing tool; Canvas is hardware-accelerated in Google Chrome, offloading rendering to the GPU and reducing CPU load, which improves performance. The Web Audio API provides music playback when the user views an animation – music is dynamically generated for each tree based on the input of the contributors. Much of the project’s styling and many of its transitions use CSS3, and all animations are played back using the HTML5 video player. Here are some of our favorite featured artists: Casey Reas, Olafur Eliasson, Miroslaw Balka and Aaron Koblin himself. Try it for yourself at exquisiteforest.com Aaron Koblin | Chris Milk | […]
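To make the branching structure described above concrete, here is a minimal sketch of the data model it implies: every animation is a node, and every response is appended as a child branch. The names are hypothetical; the project's actual data model is not published here.

```cpp
// Hypothetical sketch of an ever-growing animation tree: each node stores
// one animated drawing, and responses become child branches.
#include <memory>
#include <string>
#include <vector>

struct Branch {
    std::string animationId;                        // the stored drawing
    std::vector<std::unique_ptr<Branch>> children;  // responses to it
};

// Responding to a branch appends a child; the tree grows larger.
Branch* respond(Branch& parent, const std::string& newAnimationId) {
    auto child = std::make_unique<Branch>();
    child->animationId = newAnimationId;
    parent.children.push_back(std::move(child));
    return parent.children.back().get();
}
```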
- Fractal Lab [WebApp]

Created by Tom Beddard, aka subblue, Fractal Lab is a WebGL-based web application for rendering 2D and 3D fractals in real time, and the latest in Tom's research into complex and visually stunning 3D fractal structures. As we reported a few months back, Tom has already created some truly stunning works (see the bottom of the post) using the Adobe Pixel Bender Toolkit together with Photoshop, After Effects and Quartz Composer. This time, Tom's fractals run in your browser, with the ability to explore and lose yourself in these magnificent structures. You will need a WebGL-enabled browser – currently Google Chrome or Firefox 4 beta are the best choices – and a reasonably modern graphics card. fractal.io is also available at github.com/subblue/FractalLab + make sure you pay a visit to Tom's […]
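Fractal Lab does its heavy lifting in GLSL shaders, but the underlying idea for the 2D case is the classic escape-time iteration. A minimal sketch of the Mandelbrot set, for flavour (illustrative only, not code from the project):

```cpp
// Escape-time iteration for the 2D Mandelbrot set, the simplest relative
// of the fractals Fractal Lab renders on the GPU.
#include <complex>

// Returns how many iterations z -> z^2 + c takes to escape |z| > 2,
// or maxIter if the point (likely) belongs to the set.
int mandelbrot(std::complex<double> c, int maxIter = 256) {
    std::complex<double> z = 0;
    for (int i = 0; i < maxIter; ++i) {
        z = z * z + c;
        if (std::norm(z) > 4.0) return i;  // norm(z) = |z|^2, so compare to 4
    }
    return maxIter;
}
```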