
The Light Barrier, Third Edition – Drawing volumes in the air with light

Created by Seoul-based duo Kimchi and Chips, The Light Barrier, Third Edition is the latest and largest in the studio's series of works creating volumetric drawings in the air using hundreds of calibrated video projections. These light projections merge in a field of fog to create graphic objects that animate through physical space as they do in time.

The installations present a semi-material mode of existence, materialising objects from light. The third edition continues to exploit the confusion and non-conformities at the boundary between materials and non-materials, reality and illusion, existence and absence. The viewer is presented with a surreal vision that stretches the human instincts for duration and space. The name refers to the light barrier of relativistic physics: the speed of light, which separates things that are material from things that are light, and which since 1983 has defined the metre, the basis of the metric system of spatial measure.

Installed at the ACT Center of the Asia Culture Complex in Gwangju, South Korea, the installation presents a 6-minute sequence that employs the motif of the circle to travel through themes of birth, death, and rebirth, helping shift the audience into this new mode of existence. In this third edition, 8 architectural video projectors are split into 630 sub-projectors by an apparatus of concave mirrors whose arrangement is generated by the studio's artificial-nature algorithms. Each mirror and its backing structure are computationally generated so that the group collaborates to form a single image in the air. By measuring the path of each of the roughly 18,000,000 pixel beams individually, the light beams can be calibrated to merge in the haze and draw in the air. 40 channels of audio then build a field of sound which solidifies the projected phenomena in the audience's senses.

To produce the installation, the team started by 'farming' mirror arrangements using artificial-nature algorithms. They designed the overall shape of the installation by hand with sheets of paper and called this shape the 'movement'. Since the movement is curved in both directions, there is no easy way to arrange the mirrors onto it efficiently, as there was with previous editions. To create the design, they therefore wrote a simple algorithm which 'grows' mirrors onto the shape, tweaking its own variables based on success rates: a simple machine-learning mechanism that controls the growing process. The mirrors and the steel structure were modelled in Rhino and Grasshopper and cut on CNC machines at the ACC workshop. The structure was then assembled on site by the team and fabricators.
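The studio has not published its growing algorithm, but the idea described above — place mirrors one by one, track the success rate, and let that rate tune the algorithm's own parameters — can be sketched in a few lines. The following toy version (all names and the flat 2D domain are my own assumptions, not the studio's code) drops mirror centres at random, rejects overlaps, and feeds the running success rate back into the minimum spacing:

```python
import random

def grow_mirrors(width, height, radius, attempts=2000, target_rate=0.3, seed=1):
    """Toy 'growing' of mirrors on a flat 2D patch.

    Mirror centres are proposed at random and rejected if they fall
    within `spacing` of an existing mirror.  The running success rate
    nudges `spacing` up or down -- a minimal stand-in for the
    self-tuning mechanism described in the article.
    """
    random.seed(seed)
    placed = []
    spacing = 2.0 * radius          # start with circles just touching
    successes = 0
    for i in range(1, attempts + 1):
        x = random.uniform(radius, width - radius)
        y = random.uniform(radius, height - radius)
        ok = all((x - px) ** 2 + (y - py) ** 2 >= spacing ** 2
                 for px, py in placed)
        if ok:
            placed.append((x, y))
            successes += 1
        # adapt: too few successes -> relax spacing, too many -> tighten
        if successes / i < target_rate:
            spacing *= 0.999
        else:
            spacing *= 1.001
        spacing = max(spacing, 2.0 * radius)  # never allow mirrors to overlap
    return placed, spacing
```

On the real installation the domain is a doubly curved surface rather than a rectangle, so the distance test and the placement sampling would run in the surface's parameter space instead.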

↑ Farming windows and a view of the assembly line (Yoona, Youngjae, Jamie)

Projection is achieved by mapping 8 Panasonic 20,000-lumen WUXGA 4-lamp video projectors onto the installation, with allowances for physical deviations. Each pixel from the projectors lands on a mirror and is scattered in an unknown direction. To discover the path of each pixel, the team place a projection screen in front of the mirror arrangement and shine a scan pattern into each mirror one at a time. This scan gives one 3D point for each pixel in the volume. Since at least two points are needed to define a ray, the screen is then moved and the scan repeated to discover the second point. Because the area is so large, a large screen was needed; thankfully the ACC had a 20×8m front-projection screen from a previous exhibition which the team used for this purpose. To move this large screen, they attached it to a truss hung from the venue's motorised ceiling winches and electrically controlled the length of chain between each winch and the screen to shift it between the calibration positions. A Canon C300 camera on the ceiling truss watches the screen as the scan patterns are projected into each mirror, and its feed is captured into Rulr through a Blackmagic UltraStudio device. Rulr finally computes the 3D ray paths for all 18 million pixels.
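The geometry behind the two-position scan is straightforward: the two screen intersections of one pixel beam define a line in space. A minimal sketch (my own helper names, not Rulr's API) turns the two scanned points into an origin-plus-unit-direction ray that can then be evaluated anywhere in the haze volume:

```python
import math

def ray_from_scans(p_near, p_far):
    """Build a 3D ray (origin, unit direction) from the two scanned
    screen intersections of one pixel beam."""
    d = tuple(b - a for a, b in zip(p_near, p_far))
    length = math.sqrt(sum(c * c for c in d))
    if length == 0:
        raise ValueError("degenerate scan: the two points coincide")
    return p_near, tuple(c / length for c in d)

def point_on_ray(origin, direction, t):
    """Point at parameter t (distance along the unit direction)."""
    return tuple(o + t * c for o, c in zip(origin, direction))
```

Rulr performs this per pixel for all 8 projectors, which is why the scan has to be automated rather than measured by hand.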

To render content, they construct the shapes from beams of light arriving from all the mirrors. Each pixel of each projector is treated as a 3D ray in space. Rulr is used to scan these rays by managing the cameras and projectors and running the solve algorithms; it finally outputs the data in a GPU-consumable format which can be used for rendering.
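With every pixel reduced to a ray, rendering a shape in the haze amounts to deciding, per pixel, whether that pixel's ray grazes the target surface. The studio's actual renderer runs on the GPU from depth maps; the sketch below (a sphere test of my own devising, purely illustrative) shows the per-ray decision for the simplest possible surface:

```python
import math

def sphere_hit(origin, direction, centre, radius, tol=0.05):
    """Light this pixel if its calibrated ray passes within `tol`
    of a sphere's surface.  `direction` is assumed to be unit length.
    A toy stand-in for the installation's GPU renderer."""
    # parameter of the ray's closest approach to the sphere centre
    oc = [c - o for o, c in zip(origin, centre)]
    t = sum(a * b for a, b in zip(oc, direction))
    closest = [o + t * d for o, d in zip(origin, direction)]
    dist_to_centre = math.dist(closest, centre)
    return abs(dist_to_centre - radius) <= tol
```

Evaluating such a test for every one of the ~18 million rays per frame is what makes the GPU-consumable ray data essential.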

Volumetric video recordings were made using a genlocked pair of Blackmagic Micro Studio 4K cameras and PFDepth, software normally used for processing stereo movies. Storyboards were produced by hand, and content ideas were tested on the Light Barrier 2 installation in a test room. The final content was designed in Cinema 4D and rendered as depth+colour maps. The marching rays projected onto the installation were rendered in vvvv from these depth maps (at roughly 4fps), and all sequences were rendered to 18-megapixel HAP, an open-source video codec developed by Vade (Anton Marini) et al. that allows GPU-accelerated realtime playback of high-quality video streams. Finally, playback at 60fps and 1920×1200×8 resolution was achieved with a d3 media server.
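A depth+colour map encodes a 3D surface as seen from a virtual camera: each pixel stores how far away the surface lies along that pixel's view ray. The back-projection that a ray-marching renderer performs on such maps can be sketched as follows (a standard pinhole-camera model; the function name and the row-major list format are my own assumptions, not the Cinema 4D or vvvv pipeline):

```python
def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a pinhole-camera depth map into camera-space
    3D points.  `depth` is a row-major list of rows; fx, fy are the
    focal lengths in pixels and (cx, cy) is the principal point.
    A zero depth value means no surface was rendered at that pixel."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:
                continue  # empty pixel: nothing to reconstruct
            points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points
```

Marching each projector ray through this reconstructed surface, at 18 megapixels per frame, is what limited the offline vvvv render to around 4fps before the sequences were baked to HAP for 60fps playback.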

Project Page | Kimchi and Chips

Credits: Kimchi and Chips (Mimi Son, Elliot Woods) (Artists) / Chung Youngjae, Studio Sungshin (Engineering) / Pi Junghoon (Sound design) / Lee Soyoung, Yang Yoona, Yoh Donghoo, James G Jackson, Yi Donghoon (Production team). Produced and presented in collaboration with the Arts & Creative Technology Center, ACC.
