
‘Assembly’ by Kimchi and Chips – 5,500 physical pixels and digital light

A hemisphere of 5,500 white blocks occupies the air, each hanging from above in a pattern which repeats in order and disorder. Pixels play over the physical blocks as an emulsion of digital light within the physical space, producing a habitat for digital forms to exist in our world. This is “Assembly”, the latest installation by Kimchi and Chips, located in the Nakdong River Cultural Centre gallery in Busan, Korea.

A group of external projectors penetrates the volume of cubes with pixel-rays until every single cube becomes coated with pixels. By scanning with “structured light”, each pixel receives a set of known information, such as its absolute 3D position within the volume and the identity of the block it lives on.

The spectator is invited to study a boundary line between the digital and natural worlds, to see the limitations of how the two spaces co-exist. The aesthetic engine spans these digital and physical realms. Volumetric imagery is generated digitally as a cloud of discontinuous surfaces, which are then applied through the video projectors onto the polymer blocks. By rendering figurations of imaginary digital forms into the limiting, error-driven physical system, the system acts as an agency of abstraction, redefining and grading the intentions of imaginary forms through its own vocabulary.

The flow of light in the installation creates visual mass. The spectator’s balance is shifted by this visceral movement, causing a kinaesthetic reaction:

For digital to exist in the real world, it must suffer its rules, and gain its possibilities. The sparse physical nature of the installation allows the digital form to create a continuous manifold within the space across the discrete blocks, whilst also passing through each block as a continuous pocket of physical space. The polymer blocks are engineered for both diffusive/translucent properties and a reflective/projectable response to the pixel-rays. This way a block can act as a site for illumination or for imagery. The incomplete form of the hemisphere becomes extinct at its base, but extends through a reflection below, and therein becomes complete. It takes inspiration from nature, whilst becoming an artefact of technology.

Technical principle

Kimchi and Chips think of a projector as ~1 million static spotlights, each aimed along a unique direction away from the projector’s lens. Each spotlight hitting the structure creates a pool of light with a defined physical location in space. By scanning the 3D location of these tiny pools of light, they can understand how to construct a macroscopic volumetric scene out of them, and, most simply, imagine them as a cloud of individually addressable LEDs.
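As a minimal sketch of this “spotlight” view (plain C++; the pinhole model and the intrinsic values fx, fy, cx, cy below are assumptions, not measurements from the installation), a projector pixel coordinate can be back-projected into the unit direction of its ray:

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

// Back-project a pixel (u, v) through a simple pinhole model into a
// unit-length ray direction leaving the lens.
Vec3 pixelToRayDirection(double u, double v,
                         double fx, double fy, double cx, double cy) {
    Vec3 d { (u - cx) / fx, (v - cy) / fy, 1.0 };
    double len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    return { d.x / len, d.y / len, d.z / len };
}

int main() {
    // A 1920x1080 projector has ~2 million such "spotlights"; this takes one.
    Vec3 ray = pixelToRayDirection(960.0, 540.0, 1500.0, 1500.0, 960.0, 540.0);
    std::printf("ray direction: %f %f %f\n", ray.x, ray.y, ray.z);
}
```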

A set of five projectors is connected to a single computer, which renders to all of them simultaneously in real time. Every part of every block is seen by at least one projector. Using ofxGraycode and a set of five high-resolution cameras (each positioned alongside a projector), they make a structured light scan to find correspondences between projector pixels and camera pixels.
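The principle behind such a Gray-code scan can be sketched in plain C++ (this illustrates the idea, not the ofxGraycode API): each projector column index is projected as a sequence of Gray-code bit-planes, and a camera pixel recovers the column by decoding the on/off sequence it observes across the captures:

```cpp
#include <cstdint>
#include <cstdio>

uint32_t binaryToGray(uint32_t n) { return n ^ (n >> 1); }

uint32_t grayToBinary(uint32_t g) {
    for (uint32_t mask = g >> 1; mask != 0; mask >>= 1) g ^= mask;
    return g;
}

int main() {
    const uint32_t column = 1234;  // a projector column to identify
    const int      bits   = 11;    // 11 frames cover 2048 columns
    uint32_t gray = binaryToGray(column);

    // Each frame projects one bit-plane; a camera pixel lit by this column
    // records this on/off sequence over the 11 captures.
    uint32_t observed = 0;
    for (int b = bits - 1; b >= 0; --b) {
        int on = (gray >> b) & 1;          // what the camera sees this frame
        observed = (observed << 1) | on;
    }
    std::printf("decoded column: %u\n", grayToBinary(observed));  // 1234
}
```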

They solve the intrinsic and extrinsic properties of the cameras and projectors using these correspondences, and then use this information to triangulate the 3D position of every pixel (thereby creating voxels).
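A minimal sketch of that triangulation step (plain C++, hypothetical values): given a corresponding projector ray and camera ray from the solved extrinsics, the pixel’s 3D position can be taken as the midpoint of their segment of closest approach:

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Midpoint of closest approach between rays p + t*u and q + s*v.
Vec3 triangulate(Vec3 p, Vec3 u, Vec3 q, Vec3 v) {
    Vec3 w = p - q;
    double a = dot(u, u), b = dot(u, v), c = dot(v, v);
    double d = dot(u, w), e = dot(v, w);
    double denom = a * c - b * b;          // near zero when rays are parallel
    double t = (b * e - c * d) / denom;
    double s = (a * e - b * d) / denom;
    Vec3 onRay1 = p + u * t, onRay2 = q + v * s;
    return (onRay1 + onRay2) * 0.5;
}

int main() {
    // Projector ray from the origin, camera ray from (1,0,0); they meet at z=2.
    Vec3 voxel = triangulate({0, 0, 0}, {0, 0, 1}, {1, 0, 0}, {-0.5, 0, 1});
    std::printf("voxel: %f %f %f\n", voxel.x, voxel.y, voxel.z);
}
```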

Using Point Cloud Library they cluster this data to discover the locations of the blocks, and then fit cuboids to these clusters. Using this information they now know, for each of the ~3.5 million active pixels (a matching data record is sketched after the list):

• 3D position of pixel
• 3D position of cube centroid
• Cube index
• Face index on cube
• 2d position of pixel on cube face
• Wire index which the cube belongs to
• Pixel normal
• Cube rotation
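
A sketch of what such a per-pixel record might look like (field names and types are hypothetical, not from the project's code):

```cpp
#include <cstdint>

struct Vec3 { float x, y, z; };
struct Vec2 { float x, y; };

// One such record exists for each of the ~3.5 million active pixels.
struct PixelRecord {
    Vec3     position;      // 3D position of the pixel
    Vec3     cubeCentroid;  // 3D position of its cube's centroid
    uint16_t cubeIndex;     // which of the 5,500 cubes it lies on
    uint8_t  faceIndex;     // which face of that cube (0..5)
    Vec2     facePosition;  // 2D position of the pixel on that face
    uint16_t wireIndex;     // hanging wire the cube belongs to
    Vec3     normal;        // surface normal at the pixel
    Vec3     cubeRotation;  // cube orientation (e.g. Euler angles)
};
```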

Final system

A camera calibration app written in openFrameworks allows them to determine the intrinsics and extrinsics of the cameras following a chessboard calibration routine. This data can then be used to calibrate the projectors using structured light. They define the calibration tree as a travelling salesman problem: each calibration route (e.g. camera A to camera C) is assigned a cost based on the accuracy of that route. They then evaluate the best calibration tree for each camera and projector, and integrate through the calibration.
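The route-selection idea can be sketched roughly as follows (plain C++, hypothetical devices and costs): devices are nodes in a graph, each feasible calibration between two devices is an edge weighted by its expected error, and the best route from each device back to a reference minimises the accumulated cost. Here a simple Dijkstra search stands in for whatever optimisation the authors actually used:

```cpp
#include <cstdio>
#include <functional>
#include <queue>
#include <vector>

int main() {
    // Nodes 0..3: camera A (reference), camera B, projector 1, projector 2.
    const int n = 4;
    std::vector<std::vector<std::pair<int, double>>> graph(n);
    auto addRoute = [&](int a, int b, double cost) {
        graph[a].push_back({b, cost});
        graph[b].push_back({a, cost});
    };
    // Hypothetical calibration-error estimates for each available route.
    addRoute(0, 1, 0.2); addRoute(0, 2, 0.9);
    addRoute(1, 2, 0.3); addRoute(2, 3, 0.4);

    // Dijkstra from the reference: accumulated cost of the best route.
    std::vector<double> cost(n, 1e9);
    cost[0] = 0.0;
    using Item = std::pair<double, int>;
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;
    pq.push({0.0, 0});
    while (!pq.empty()) {
        auto [c, node] = pq.top(); pq.pop();
        if (c > cost[node]) continue;
        for (auto [next, w] : graph[node])
            if (c + w < cost[next]) { cost[next] = c + w; pq.push({cost[next], next}); }
    }
    for (int i = 0; i < n; ++i)
        std::printf("device %d: best route cost %.2f\n", i, cost[i]);
}
```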

Each day a startup script first performs the scan in openFrameworks and then starts the runtime in VVVV.

An application first performs the simultaneous capture of structured light on the five cameras whilst stepping sequentially through each projector. Following this, they triangulate the projector pixels to create a dense mapping between the 2D pixels of each projector and the physical locations of those pixels in 3D space. This map is then stored to disk.
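As a rough sketch of that store-to-disk step (standard C++; the file name and raw layout are assumptions, not the project's actual format), the dense map could be persisted as one XYZ triple of floats per projector pixel:

```cpp
#include <fstream>
#include <vector>

int main() {
    const int width = 1920, height = 1080;
    // One XYZ triple per projector pixel, filled in by the triangulation.
    std::vector<float> xyz(width * height * 3, 0.0f);

    // Dump as raw floats so the runtime can reload the map quickly.
    std::ofstream out("projector0.map", std::ios::binary);
    out.write(reinterpret_cast<const char*>(xyz.data()),
              xyz.size() * sizeof(float));
}
```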

The startup script then loads VVVV, which transfers the data maps to the GPU. They define ‘brushes’ using HLSL shaders which act on the dataset. Different brushes generate different visual effects; for example, some generate density fields which are interpreted as either gradients or isosurfaces. The VVVV graph plays through a script of generative animations and performs systems management.
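As an illustration of the brush idea, here is a CPU-side sketch in plain C++ (the real brushes are HLSL shaders running over the GPU-resident data maps, and this particular field function is hypothetical): a density is evaluated at a voxel’s 3D position and interpreted either as a gradient or as an isosurface:

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Density falls off with distance from a centre, like a soft sphere:
// 1 at the centre, 0 at the surface, negative outside.
float density(Vec3 p, Vec3 centre, float radius) {
    float dx = p.x - centre.x, dy = p.y - centre.y, dz = p.z - centre.z;
    float d = std::sqrt(dx * dx + dy * dy + dz * dz);
    return 1.0f - d / radius;
}

int main() {
    Vec3 voxel  { 0.4f, 0.1f, 0.2f };   // a triangulated pixel position
    Vec3 centre { 0.0f, 0.0f, 0.0f };
    float rho = density(voxel, centre, 0.5f);

    float gradient   = std::fmax(0.0f, rho);                  // brightness ramp
    float isosurface = std::fabs(rho) < 0.05f ? 1.0f : 0.0f;  // thin shell
    std::printf("gradient %.2f, isosurface %.0f\n", gradient, isosurface);
}
```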

Software platform choices

Kimchi and Chips started by identifying VVVV, openFrameworks and Cinema 4D as valuable platforms for developing the project. The intention was to play to the strengths of the available platforms in terms of quality of output and immediacy of creative process, and to experiment with developing new workflows.

• VVVV has been used throughout the pre-visualisation, simulation and prototyping stages, and later for runtime and systems management.
• openFrameworks was used for more advanced vision tasks where minimalism, timing, threading control and memory management were favored.
• Cinema 4D offers a tuned environment for designing and animating 3D content, but is generally limited to producing renders as 2D images/video or exporting meshes. Using Python, they ‘hacked’ Cinema 4D’s cameras to capture volumetric data from scenes, exported as a multitude of image files (reloading such a slice stack is sketched below).
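
A minimal sketch of reassembling such a slice stack (standard C++; the raw 8-bit slice format and the file names are assumptions, not the project's actual export format):

```cpp
#include <cstdint>
#include <cstdio>
#include <fstream>
#include <vector>

int main() {
    const int width = 64, height = 64, depth = 64;
    std::vector<uint8_t> volume(width * height * depth, 0);

    // Stack the exported image slices along z into one voxel array.
    for (int z = 0; z < depth; ++z) {
        char name[32];
        std::snprintf(name, sizeof(name), "slice_%03d.raw", z);
        std::ifstream slice(name, std::ios::binary);
        if (!slice) continue;  // tolerate missing slices in this sketch
        slice.read(reinterpret_cast<char*>(&volume[z * width * height]),
                   width * height);
    }
    std::printf("loaded %zu voxels\n", volume.size());
}
```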

Much of the software research focused on developing effective interoperability between these platforms, e.g.:

• Developing an OpenGL render pipeline in VVVV, thereby allowing them to embed openFrameworks rendering within VVVV (experimental, fledgling)
• Creating a threaded image processing platform within VVVV so that they could rapidly prototype advanced vision tasks within the VVVV graph (released and currently deployed in projects by other studios)
• Developing the python scripted ‘volume capture rigs’ inside Cinema 4D to export volumetric fields to be reloaded into either a standalone openFrameworks simulation app, or VVVV for runtime (project specific)

Code

Throughout the development process, all code used for the project has been available on GitHub.

Project files: VVVV projects | openFrameworks projects

Applications: OpenNI-Measure : take measurements on a building site using a Kinect | Kinect intervalometer : apps for taking timelapse point-cloud recordings using a Kinect | VVVV.External.StartupControl : Manage installation startups on Windows computers

Algorithms: ProCamSolver : Experimental system for solving the intrinsics and extrinsics of projectors and cameras in a structured light scanning system | PCL projects

openFrameworks addons: ofxGrabCam : camera for effectively browsing 3D scenes | ofxRay : ray casting, projection maths and triangulation | ofxGraycode : structured light scanning | ofxCvGui2 : GUI for computer vision tasks | ofxTSP : solve Travelling Salesman Problem | ofxUeye : interface with IDS Imaging cameras

VVVV plugins: VVVV.Nodes.ProjectorSimulation : simulate projectors | VVVV.Nodes.Image : threaded image processing, OpenCV routines and structured light

Credits

Kimchi and Chips: Mimi Son and Elliot Woods
Production staff: Minjae Kim and Minjae Park
Mathematicians: Daniel Tang and Chris Coleman-Smith
Videography: MONOCROM, Mimi Son, Elliot Woods | Music by Johnny Ripper
Manufacturing: Star Acrylic, Seoul and Dongik Profile, Bucheon