Some time ago Sosolimited and Plebian Design set out to create a large-scale transparent LCD sculpture for a science museum atrium. Each pixel was designed as a piece of glass whose transparency could be changed independently, from opaque black to fully transparent. The sculpture was designed to curve up through the atrium of the museum and display down-sampled patterns from nature, along with a high-fidelity soundtrack. Almost two years later, it's wonderful to see this project finally come to life.
“Patterned by Nature” was commissioned by the North Carolina Museum of Natural Sciences for the newly built Nature Research Center in Raleigh, North Carolina. The exhibit celebrates our abstraction of nature’s infinite complexity into patterns through the scientific process, and through our perceptions. It brings to light the similarity of patterns in our universe, across all scales of space and time.
The 90’x10’ “ribbon” winds through the five-story atrium of the museum and is made of 3600 tiles of LCD glass. It runs on roughly 75 watts, less power than a laptop computer. Animations are created by independently varying the transparency of each piece of glass. The content cycles through twenty programs, ranging from clouds to raindrops to colonies of bacteria to flocking birds to geese to cuttlefish skin to pulsating black holes. The animations were created through a combination of algorithmic software modeling of natural phenomena and compositing of actual footage.
An eight channel soundtrack accompanies the animations on the ribbon, giving visitors clues to the identity of the pixelated movements. In addition, two screens show high resolution imagery and text revealing the content on the ribbon at any moment.
- CSIS Data Chandelier by SOSO Limited: 425 hanging pendants located in the new HQ of the Center for Strategic and International Studies visualise global […]
- Exquisite Clock [Objects, iPhone] Based on the idea that time is everywhere, Exquisite Clock is a clock made of numbers taken from everyday life – seen, captured and uploaded by people from all over the world. The project connects time, play and visual aesthetics. It's about creativity, collaboration and exchange. The Exquisite Clock has an online database of numbers – an exquisite database – at its core. This supplies the website and the interconnected physical platforms, working like a feeder that provides data to different instances of the clock: the website, installations, mobile applications, designed products and urban screens. All uploaded numbers are tagged according to a category selected by their creator and added to the growing database. People viewing the clock can then choose to view all types of numbers, or make a selection to view only numbers from a specific category – a clock made of vegetables, or clouds, or garments, etc. The Exquisite Clock can exist as different physical installation variations, each using the numbers provided by the database. These physical installations might be LCD monitors hanging in a gallery space, tiny cellphone screens or large-scale public monitors. These variations can also be reconfigured as interactive installations where users in the space collaborate to feed images back into the database – the principle is that all instances of the Exquisite Clock access the single exquisite database, forming a conversant network made of different perceptions of time. Already shown at a number of exhibitions, the Exquisite Clock has taken varying shapes and sizes, utilising different-sized monitors combined with visible circuit boards – playing on the honesty of data exchange between the database and the display devices, and stripping traditional LCD computer monitors of their corporate branding shells to expose the circuit boards.
To contribute to the project, take pictures of numbers, go to exquisiteclock.org, click on the upload option in the main menu, choose the file you want to upload, select which number you are uploading and tag your submission accordingly. Remember to crop and save your image following the dimensions (320 x 480), file size (120kb) and copyright restrictions described on the upload page of the website. Through the free iPhone application, click on Submit Image, select your image from your camera roll or use the camera to take a new photo. After selecting the photo, select the number that refers to your picture, tag it and click on submit. This project was created and developed by Joao Henrique Wilbert at FABRICA in 2008. Creative Direction: Andy Cameron. Download the free iPhone app […]
- High Arctic by UVA [c++, Events] Photos by John Adrian Currently on display at the National Maritime Museum is High Arctic, an installation created in collaboration between United Visual Artists (UVA) and Cape Farewell. This is an exhibition with no touchscreens, no static photographs, and no panels with text: instead High Arctic is an immersive, responsive environment. As you approach the entrance you are given an ultraviolet torch, met by darkness and an overwhelming array of columns of varying height occupying the space. The ultraviolet torches unlock hidden elements whilst constantly shifting patterns of interactive projections react to visitors approaching. As you "embark on this journey" of discovery, Max Eastley and Henrik Ekeus's generative soundscape flows through the gallery, weaving in the voices of arctic explorers across the centuries... The project began in 2010 when UVA's Matt Clark travelled with the arts and climate science foundation Cape Farewell to the Arctic archipelago of Svalbard, which lies between mainland Norway and the North Pole. Sailing aboard The Noorderlicht, a 100-year-old Dutch schooner, Matt's trip brought him into contact with scientists, poets, musicians and polar bears. He saw vast tundra, monochromatic rainbows and huge chunks of ice falling from calving glaciers. Conceived as Clark's response to the expedition, High Arctic uses a combination of sound, light and sculptural forms to create an abstracted arctic landscape for visitors to explore. UVA's in-house tool (D3) is the main 'glue' for this process, however a multitude of other tools were used to explore the various iterations of the physical and digital build.
These included various scale models built in polystyrene and Lego, 3D renderings, and a full-scale pool mock-up built on the ground floor of the UVA studio. Likewise, various tools were written or integrated into D3's existing capabilities to produce generative content for the interactive pools: Houdini (Houdini Ocean Toolkit) for producing realistic source wave depth maps, SVG handling, dither pixel shaders, video sprite management, openCV (for contour finding), a port of Memo's Navier-Stokes fluid algorithm, Box2D, particle systems, and lens and shift correction algorithms. Making the physical sculpture integrate with the digital projection pools was important for creating a more seamless landscape. CAD designs were imported into D3 to allow the testing of various physical setups with generative content before fabrication of the columns. Interaction is made up of ten Basler GigE cameras with various cut and pass filters, plus 250 UV torches. The system builds upon existing D3 libraries for multi-camera 2D and 3D tracking. Lighting is created with Source Fours plus Martin Tripix strips, controlled by D3. The physical build incorporates hundreds of columns with UV-reactive paint, a 40m stretched mirror and a good amount of timber and metal. 58 channels of generative and pre-composed audio are managed by SuperCollider, Max/MSP and Apple Logic, giving a constantly evolving narrative across the room. The installation is run by a cluster of six D3 machines. A mix of custom protocols, web services and OSC integrates the various components. Coding for D3 is in C++, HLSL and Python. Special credit: Luke Malcolm for the bulk of the coding (he worked on the camera tracking system, show synchronisation, and interaction in three of the five pools). It's 2100 AD and the Arctic landscape we once took for granted has changed forever. How will we choose to remember our Arctic past? Is it possible to travel somewhere that no longer exists?
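The camera-tracking stage described above (GigE cameras feeding openCV contour finding inside D3) can be illustrated with a minimal pure-Python stand-in: threshold a grayscale frame and group bright pixels, such as torch spots, into connected components, returning their centroids. Everything here (function names, grid size, threshold) is an assumption for illustration, not UVA's code.

```python
# Hypothetical stand-in for the torch-spot tracking stage: threshold a
# grayscale frame, group bright pixels into connected components
# (roughly what openCV's contour finding provides), return blob centroids.

def find_blobs(frame, threshold=128):
    """Return centroids (x, y) of connected groups of pixels >= threshold."""
    h, w = len(frame), len(frame[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if frame[y][x] >= threshold and not seen[y][x]:
                seen[y][x] = True
                stack, pixels = [(x, y)], []
                while stack:                      # flood fill, 4-neighbour
                    px, py = stack.pop()
                    pixels.append((px, py))
                    for nx, ny in ((px + 1, py), (px - 1, py),
                                   (px, py + 1), (px, py - 1)):
                        if (0 <= nx < w and 0 <= ny < h and not seen[ny][nx]
                                and frame[ny][nx] >= threshold):
                            seen[ny][nx] = True
                            stack.append((nx, ny))
                mx = sum(p[0] for p in pixels) / len(pixels)
                my = sum(p[1] for p in pixels) / len(pixels)
                blobs.append((mx, my))
    return blobs

# Two bright "torch spots" on an 8x8 frame:
frame = [[0] * 8 for _ in range(8)]
frame[1][1] = frame[1][2] = 255
frame[5][6] = 255
print(find_blobs(frame))   # [(1.5, 1.0), (6.0, 5.0)]
```

In the installation such centroids would then be lens-corrected and fused across the ten cameras before driving the projection pools.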
Set in one of many possible futures, High Arctic conveys the scale, beauty and fragility of our unique Arctic environment through an immersive installation which fills the entire 820m² gallery space. Intended to be a future vision of a receding world, it encourages us to question our relationship with the world around us. Admission: Full £6.00 / Concession £5.00 / Children (age 7+*) £4.00 (*children 0–6 go free) Dates: 14 July 2011–13 January 2012 Opening times: every day, 10.00–17.00 (closed 25–26 December) Venue: Special Exhibitions Gallery, Sammy Ofer Wing (National Maritime Museum) United Visual Artists | Cape […]
- IRIS by HYBE – New kind of monochrome LCD display Created by Korean collective HYBE, IRIS is a media canvas built from a matrix of conventional information display technology: monochrome LCDs. Through the phased opening and closing of circular black liquid crystal apertures, IRIS can create various patterns and control the amount (size) of light passing through. IRIS is an interactive medium of visual simplicity which uses the passage of ambient light, not the emission of light itself. The installation below consists of 400 LCDs (20x20) and 20 custom-designed Arduino-compatible controllers, with Processing and Kinect used for both autoactive and interactive content play. HYBE's IRIS was selected and supported by the Da Vinci Idea Program (2012) by Seoul Art Space_Geumcheon, […]
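The phased-opening idea lends itself to a simple sketch: quantise each cell of a source image into a discrete iris aperture step, so the panel modulates ambient light passing through rather than emitting light. This is a hypothetical illustration; the grid size matches the installation, but the step count, names and mapping are assumptions, not HYBE's firmware.

```python
# Illustrative mapping from a source image to iris apertures: each of the
# 20x20 monochrome LCD "irises" opens in proportion to the brightness of
# the corresponding image cell. STEPS is an assumed number of phased steps.

GRID = 20          # 20x20 tiles, 400 LCDs total (as in the installation)
STEPS = 8          # assumed number of discrete opening steps per iris

def aperture_levels(image):
    """Quantise a GRID x GRID brightness image (0-255) into iris steps."""
    return [[round(image[y][x] / 255 * (STEPS - 1))
             for x in range(GRID)] for y in range(GRID)]

# A horizontal brightness ramp: irises open gradually from left to right.
ramp = [[int(x / (GRID - 1) * 255) for x in range(GRID)] for _ in range(GRID)]
levels = aperture_levels(ramp)
print(levels[0][0], levels[0][GRID - 1])   # 0 7 (fully closed -> fully open)
```

A Processing sketch driving the real piece would stream one such level per tile to the Arduino-compatible controllers each frame.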
- Shedding Light on Squidsoup – A Conversation with Anthony Rowe For more than a decade, the artist collective Squidsoup have been designing rich interactive experiences. From their early navigable sonic environments, through their playful experiments with computer vision and interest in 'volumetric visualizations', an email exchange between Squidsoup's Anthony Rowe and CAN begat a mammoth interview about light, sound and many of the collective's […]
- Sosolimited – reConstitution [Profile, Events, c++] Sosolimited is an art and design consultancy formed in 2003. The group specialises in interactive installation and audiovisual performance. They create immersive works that play on the immediacy of live media. I had the pleasure of hearing Sosolimited speak at the recent OFFF conference and, despite some scepticism about their more recent commercial work, was very fond of the work they've been doing in the area of live media. Their presentation focused on the evolution of their live media project, reConstitution, describing the different iterations of the software created to allow real-time interpretation of live television. Offering more of an outsider's view than aiming to create an impact, Sosolimited acts as a mediator, with software capable of extracting and reformatting information from live TV. Of course, this at times creates new narratives that do not necessarily represent the discussion at hand; nevertheless they allow for a new way of understanding, selecting and reinterpreting voice, sound and image. The new meaning is left to the spectator to contextualise, compare, make sense of or reject altogether. The project kicked off back in 2004 as reConstitution, a three-part live audiovisual remix of the 2004 US presidential debates. A hybrid of video art and public service, the piece represented a shift away from the polarized manner in which people approach political artwork. Sosolimited designed a piece of software that allowed them to sample the television broadcast in real time, extracting the video, audio, and closed-captioned text. The software consisted of a series of modes, each of which transformed, analyzed, and reassembled these pieces in a distinct way. The transformed visuals were projected onto a large screen and the audio was played through a PA system.
Some aspects of the broadcast were obscured while others were highlighted and analyzed, all intended to augment the raw information contained in the television signal. A clean version of the candidates' voices was always present in the audio mix, so as to maintain the legibility of the debates. Every word spoken by the candidates was catalogued, analyzed, and displayed, integrated with the transformed video signal. The visuals would react to the physical movement of the candidates as well as the words they spoke. In 2008, once again for the new wave of presidential debates, Sosolimited organised three performances in three cities (ReConstitution2008), each coinciding with the live broadcast of the debates. The software was redesigned to further enhance the translation. Through a series of visual and sonic transformations the team reconstituted the material, revealed linguistic patterns, and exposed content and structures, creating alternative understandings of the debates while they were being watched. Over 1500 people attended their three performances in Boston, New York, and Washington D.C. See more images/videos here. In 2010, their longest performance to date (nine hours) was held during the Transmediale Festival in Berlin at the Haus der Kulturen der Welt. The performance occurred in parallel with the "Futurity Long Conversation", a nine-hour lecture and debate series involving 21 speakers in an auditorium. There were two speakers on stage at a time, with one of them being swapped out for another every 20 minutes. On a separate stage at the opposite end of the HKW the team had typists transcribing the words of all the artists, designers, and authors who were speaking, sending the text streams to the analysis software. The visualizations of the conversations were projected on the screen behind the typists. The words of all the participants were matched to lexical databases, and sorted by topic, tense, and certitude.
Soso displayed realtime statistics for all the speakers and used a dozen or so different transformational modes throughout the night. The most recent iteration of Sosolimited's software was for the 2010 UK parliamentary elections, which included American-style live debates between the party leaders. Having enjoyed the US debate remix, ReConstitution, the organizers of the FutureEverything festival in Manchester invited Sosolimited to do something similar for a live audience in the UK. With fully integrated LIWC text analysis libraries to track things like emotion and self-reference in the software, the show, Prime Numerics, was streamed live on TV on April 29th, a first for the team. Whether the project creates an insight into what the world leaders "mean" is somewhat debatable. Extracting words out of their context and interpreting facial expressions may only begin to suggest new narratives disconnected from their origin. The fact of the matter is that most of these political debates are no more than theatre, created for the public and press media to feed on. The truth is that we, without the additional software, create meaning depending on our social standing, education and what might be most relevant to us. Sosolimited's segmentation nevertheless does provide an insight into how machines interpret live television, but whether we can relate to these machines is altogether another matter. What is shown here is that popular media is a spectacle that can be celebrated, whether in the form of entertainment, critique or even a new form of self-discovery. Made with ACU, a C++ MIT library / […]
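The kind of realtime text analysis described above can be sketched in a few lines: tally each speaker's words and count category hits (here, LIWC-style "self-reference" terms) as a transcript streams in. The category word list below is a tiny illustrative subset, not the actual LIWC lexicon, and none of this is Sosolimited's ACU code.

```python
# Minimal stand-in for streaming debate-text analysis: per-speaker word
# counts plus a running tally of "self-reference" terms. SELF_REFERENCE
# is a tiny illustrative subset, not the real LIWC category.
from collections import Counter, defaultdict

SELF_REFERENCE = {"i", "me", "my", "mine", "myself", "we", "our", "us"}

stats = defaultdict(lambda: {"words": Counter(), "self_ref": 0})

def ingest(speaker, line):
    """Update a speaker's running statistics from one caption line."""
    for word in line.lower().split():
        word = word.strip(".,!?\"'")
        stats[speaker]["words"][word] += 1
        if word in SELF_REFERENCE:
            stats[speaker]["self_ref"] += 1

ingest("candidate_a", "I believe my plan will work.")
ingest("candidate_b", "The numbers tell a different story.")
print(stats["candidate_a"]["self_ref"])   # 2 ("I" and "my")
```

In a live setting the same tallies would be recomputed continuously from the closed-caption feed and handed to the visual transformation modes.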
- Four Letter Words [Arduino, Processing, c++] About a year ago, Rob Seward created the Four Letter Words piece. The original video now counts about 111k views on Vimeo and has been blogged by numerous sites, of which I think Pieter and Rhizome were the first. Earlier today, I got an email from Rob about the latest video he made, which was projected on a screen hung between two trees, with several other sound and video installations in the woods nearby. It was made using After Effects with sound in Ableton Live, using the FLW installation as source material (see bottom of this post). I thought the project needed a revisit, looking at the ins and outs of how it actually works, what makes it tick as they say. After a few emails back and forth with Rob, here are the details: The installation consists of four units, each capable of displaying all 26 letters of the alphabet with an arrangement of fluorescent lights. The piece displays an algorithmically generated word sequence derived from a word association database developed by the University of South Florida between 1973 and 1998. The algorithms take into account word meaning, rhyme, letter sequencing, and association. A Mac mini running Processing sends alignment data to 4 Arduino boards (one for each letter) that are chained together. The positions of the lights are stored in an XML file. There is an app that allows Rob to tweak the positioning in case anything gets out of alignment (see first image below). Another app just takes what you write on the keyboard and sends it straight to the machine – that's what was used in the A-Z section of the video. A third app reads lists and sends words to the sculpture. Rob describes it as a bit more complicated than he thought it would be, because there are certain transitions the sculpture cannot do without intermediary positioning of the lights. For example, if S goes to D, the top and bottom lights will collide, causing the machine to jam.
The Processing app makes sure that none of these problem transitions occur without inserting an intermediary arrangement of the lights that allows them to move safely. For installations, the word lists are derived from some C++ apps Rob wrote. You can find more information about them at robseward.com/associations (second image above). The words you see in the video are put together by association. Thus DEER goes to HUNT goes to KILL. KISS goes to LIPS. The words it chooses tend to have more negative associations. The other two images above show text with English-like letter ordering (see third image above). Rob made it by modifying a Markov-chain Ruby script. The software, written in Processing, also places 4-letter words adjacently (fourth image). The installation in total includes 4 Arduinos, 20 servos, 8 stepper motors, and 24 3.9-inch CCL (cold cathode) lights with their inverters. Each Arduino has 2 steppers, 5 servos, and 6 lights to control. There are 2 custom shields on each Arduino – one for the lights and one for the motors. Rob wrote a library to operate the servos and steppers simultaneously, which you can download here (github). While the piece was conceived with the idea of displaying algorithmically generated lists, it was designed with flexibility and expandability in mind. The individual units can be connected ad infinitum, and are theoretically capable of displaying any length of text. While Four Letter Words deals with a specific range of content, the technology can be easily expanded for future textual experiments. Thanks Rob! Rob Seward is an artist and programmer. His work has been exhibited at the Blanton Museum, Austin; CVZ Contemporary, New York; Center For Opinions in Music and Art, Berlin; and Nova Scotia College of Art and Design, Halifax. He has lectured at the Centre Pompidou, Paris; Columbia University; and Location One, both in New York.
He holds a master's from the Interactive Telecommunications Program (ITP) at New York University's Tisch School of the Arts. Before getting his master's, he worked in collaboration with composer Fred Lerdahl creating software based on the Generative Theory of Tonal Music. Rob lives and works in New York City. Previously: Kunst Bauen [iPad, iPhone, oF, Mac] - "interactive […]
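The transition-safety guard described in the Four Letter Words piece (S going directly to D would make lights collide) can be sketched as follows. The UNSAFE set and the all-retracted NEUTRAL pose are illustrative placeholders; the real constraints come from the sculpture's mechanics and its XML alignment data.

```python
# Sketch of the collision-avoidance logic: expand a word-to-word
# transition into per-unit letter moves, inserting an intermediary
# "parked" arrangement whenever a direct move would jam the lights.
# UNSAFE and NEUTRAL are assumptions for illustration, not Rob's code.

UNSAFE = {("S", "D"), ("D", "S")}   # assumed set of jamming transitions
NEUTRAL = "-"                        # assumed all-lights-retracted pose

def safe_sequence(word_from, word_to):
    """Expand a 4-letter transition into per-unit steps, avoiding jams."""
    steps = []
    for a, b in zip(word_from, word_to):
        if (a, b) in UNSAFE:
            steps.append((a, NEUTRAL))   # park the lights safely first
            steps.append((NEUTRAL, b))   # then move to the target letter
        else:
            steps.append((a, b))
    return steps

print(safe_sequence("SEED", "DEER"))
# [('S', '-'), ('-', 'D'), ('E', 'E'), ('E', 'E'), ('D', 'R')]
```

In the installation these steps would be serialised over the chained Arduinos, one unit per letter position.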
- Hyper-Matrix – Thousands of physical pixels in a 180º vertical landscape Created by Seoul-based media arts group Jonpasang, Hyper-Matrix is an installation created for the Hyundai Motor Group Exhibition Pavilion at the 2012 Yeosu EXPO site in Korea. The installation comprises a specially made steel construction supporting thousands of stepper motors that control 300x300mm cubes projecting out of the internal facade of the building. Lightweight blocks move in and out of the facade, creating an infinite number of possibilities in the vertical, 180-degree landscape. The audience is part of the pattern-shaping performance as thousands of cubes move with the sounds. Pixel waves sweep the space, ripples emit from the centre and, just in case this is not enough, projection mapping takes care of the rest. See videos for more. The Jonpasang collective includes Jin-Yo Mok, Sookyun Yang, Earl Park, Jin-Wook Yeo and Sang-Wook […]
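A ripple emitted from the centre of such a grid of actuated cubes can be sketched as a radius-damped sine: each cube's extension depends on its distance from the origin and on time. The grid size, travel range and constants below are assumptions for the demo, not Jonpasang's software.

```python
# Illustrative sketch of driving a grid of stepper-actuated cubes with a
# centre-out ripple. Each cube's extension is a sine of its distance from
# the grid centre, damped with radius so the wave fades toward the edges.
import math

GRID = 15          # small demo grid; the real facade has thousands of cubes
MAX_TRAVEL = 100   # assumed stepper travel in millimetres

def ripple_depth(x, y, t, wavelength=4.0, speed=2.0, damping=0.15):
    """Cube extension (mm) at grid cell (x, y) at time t (seconds)."""
    cx = cy = (GRID - 1) / 2
    r = math.hypot(x - cx, y - cy)
    wave = math.sin(2 * math.pi * (r / wavelength - speed * t))
    return MAX_TRAVEL / 2 * (1 + wave * math.exp(-damping * r))

# One animation frame: the centre cube sits at mid-travel at t = 0.
depths = [[ripple_depth(x, y, 0.0) for x in range(GRID)] for y in range(GRID)]
print(round(depths[7][7]))   # 50
```

Sampling this function once per frame per cube, and converting millimetres to step counts, is all a motor controller would need to animate the wave.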
Posted on: 25/04/2012