Memo Akten developed software to control this ILDA-compatible laser projector (or any other pro laser projector) for realtime interactive laser work with openFrameworks on OS X, and is releasing everything (including the Etherdream firmware) open source.
Since these lasers take vector data as input, i.e. you can't just send them images or videos, he first renders everything to an FBO, then vectorises it and re-calculates optimal paths, adding/removing points, curves, etc. If your source graphics are already vector-based (as they could be in many generative systems), it's probably more efficient to keep them that way without rasterising first, but he was curious how the galvos (electromagnetic devices that move mirrors which reflect the laser beam and essentially create the patterns, text or animations) would respond, and this was a great way to understand the system and test its limits.
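As a rough illustration of that raster-to-vector step, here is a minimal openFrameworks sketch under assumptions (ofxCv for the contour finding; this is not Memo's released code). It renders to an FBO, vectorises by finding contours, and resamples each contour to an even point spacing so the galvos sweep the path at a steady rate; the same pipeline would also apply to the bubble tracking described further below.

#include "ofMain.h"
#include "ofxCv.h"

class ofApp : public ofBaseApp {
public:
    ofFbo fbo;
    ofxCv::ContourFinder contourFinder;
    vector<ofPolyline> laserPaths;

    void setup() {
        fbo.allocate(512, 512, GL_RGBA);
        contourFinder.setThreshold(127);
    }

    void update() {
        // 1. Rasterise the source graphics into the FBO.
        fbo.begin();
        ofClear(0, 255);
        ofDrawCircle(256, 256, 100 + 50 * sin(ofGetElapsedTimef()));
        fbo.end();

        // 2. Read the pixels back and vectorise by finding contours.
        ofPixels pixels;
        fbo.readToPixels(pixels);
        contourFinder.findContours(pixels);

        // 3. Resample each contour to a constant spacing so the galvos
        //    move at a roughly even speed along the path.
        laserPaths.clear();
        for (int i = 0; i < (int) contourFinder.size(); i++) {
            ofPolyline path = contourFinder.getPolyline(i);
            laserPaths.push_back(path.getResampledBySpacing(5));
        }
        // laserPaths would then be handed to the ILDA/Etherdream layer.
    }
};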
A later experiment uses Leap Motion to draw amoeba-like objects via finger tracking. These are projected onto a person wearing safety goggles, since eyes and lasers are not great friends.
The latest experiment, seen above, adds soap bubbles to the equation: OpenCV tracks the bubbles and finds their edges, and the result is vectorised again to be projected back onto them. All of this is, of course, appropriately accompanied by Daft Punk's 'Get Lucky'.
Looking forward to seeing where this goes next.
- ‘Forest’ – Musical forest by Marshmallow Laser Feast for STRP Biennale. 450 square meters of musical forest comprised of 150 'trees' for audiences to explore spatially and physically by tapping, shaking, plucking, and vibrating them to trigger sounds and […]
- Quadrotors at the Saatchi & Saatchi New Directors Showcase 2012 by MLF – Details

Now in its 22nd year, the Saatchi & Saatchi New Directors' Showcase hit Cannes again, unveiling another presentation of new directorial talent. Marshmallow Laser Feast (Robin McNicholas, Memo Akten and Barnaby Steel) were the creative and technical directors of the production, which included a theatrical performance by 16 flying robots reflecting light beams on the stage. CAN got all the details on how this mesmerising performance came into being.

Memo describes the goal as creating something simple, beautiful and mysterious: "to push the experience to that of watching an abstract virtuoso being made of light, playing a bizarre, imaginary musical instrument." The performing quadrotors are not the *stars* of the show but rather the light forms they generate. Their role, Memo describes, is to manipulate the space by sculpting the light, creating a ballet of anthropomorphic light forms. As the audience anticipates a performance on the stage, buzzing fills the dark auditorium, causing confusion; no one knows what is about to happen. 16 quadrotors take the stage, hovering above a pyramid with light beams aimed at each one and reflected back onto the stage. They move, reassemble, reshape the space.

The most fascinating / educational / unexpected aspect of the project for me (probably due to my naivety and inexperience in working with robotics) is how the outcome ended up being a definite collaboration between the team (us humans) and the hardware (the vehicles themselves and the moving head lights). The team knew that the hardware was going to impose constraints and that they would have to stay within the confines of the "physically possible"; after all, these are flying machines and gravity is at play. They also knew they were not going to get exactly the same motion they had in their animations and simulations. What they didn't expect was that the flying robots would add such a distinctly charming characteristic of movement; all of the team members fell in love with them the moment they saw the machines flying the trajectories.

The quadrotors themselves are the brainchild of Alex Kushleyev and Daniel Mellinger of KMel Robotics. University of Pennsylvania graduates, Alex and Daniel are experts in hardware design and high-performance control. Their quadrotors push the limits of experimental robotics, and the ones performing at the NDS were built and programmed specifically for the event.

The MLF team started working on the project back in January this year. They brought KMel Robotics onboard to collaborate on the robots and asked them to build a set of quadrotors, each with a (polycarbonate) mirror on a servo and super-bright LEDs. MLF animated the robots (trajectories, mirrors, LEDs) in Cinema 4D and developed a simulation environment (using XPresso + COFFEE) to track the virtual vehicles with virtual spotlights, animate the mirrors, bounce the lights off the mirrors, etc. They could simulate everything accurately (minus the airflow dynamics), including warnings for impossible or dangerous manoeuvres (i.e. acceleration, velocity, proximity, etc.; see the sketch below). The generated data (trajectories, mirrors, LEDs) was then exported, using a custom Python exporter, into a custom data format which they could feed straight into KMel's system. So Cinema 4D wasn't used just for previz, but ended up being fed straight into the flying robots. The quadrotors were tracked by a VICON mocap rig.
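To give a flavour of those safety checks, here is a hypothetical C++ re-expression (the actual checks lived inside MLF's Cinema 4D simulation, and the limits here are made-up placeholders): sample each exported trajectory at a fixed timestep and warn when velocity or acceleration exceed what a vehicle can physically do.

#include "ofMain.h"

// One sampled trajectory: positions at a fixed timestep dt (seconds).
struct Trajectory {
    vector<ofVec3f> positions;
    float dt;
};

void checkTrajectory(const Trajectory& t, float maxVel, float maxAcc) {
    for (size_t i = 2; i < t.positions.size(); i++) {
        // Finite-difference velocity and acceleration between samples.
        ofVec3f v1 = (t.positions[i - 1] - t.positions[i - 2]) / t.dt;
        ofVec3f v2 = (t.positions[i] - t.positions[i - 1]) / t.dt;
        ofVec3f a  = (v2 - v1) / t.dt;
        if (v2.length() > maxVel)
            ofLogWarning() << "frame " << i << ": velocity "
                           << v2.length() << " m/s exceeds limit";
        if (a.length() > maxAcc)
            ofLogWarning() << "frame " << i << ": acceleration "
                           << a.length() << " m/s^2 exceeds limit";
    }
}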
The setup included about 20 cameras mounted on a truss 7.5m high, covering a 9m x 4.3m area. KMel wrote the software that uses the VICON tracking data to control their vehicles. Each vehicle knows where it wants to be (based on the exported trajectories) and knows where it is (based on the VICON data); it then makes the necessary motor adjustments to get to where it wants to be, which is a much more complicated process than it sounds.

The VICON tracking data also feeds into an openFrameworks app Memo created. The moving head light animations are all realtime and based on the tracking data: VICON says "quadrotor #3 is at (x, y, z)", and the OF app says "OK, adjust Sharpy #3 pan/tilt so that it hits quadrotor #3" (via DMX). The OF app also sends the data to turn lights on and off (DMX) and to launch other lighting presets at particular times (gobos etc.).

The team also used the VICON rig to calibrate VICON space (the quadrotor coordinate system) to stage space (the coordinate system used for the animations), by placing tracking markers on all of the lights on the floor, so the software knows where all the lights are in the world. The VICON rig was likewise used to calibrate each individual Sharpy's orientation motors: when they sent instructions to set pan/tilt to, say, 137 deg / 39 deg, to Memo's dismay the results were always considerably off (even though the fixtures have very precise motors), so he had to map his desired angles to real-world angles, specific to each device. Most of the testing, playing, calibrating and setting up was done on an iPad, using a Lemur configuration Memo built for the setup. For the actual show everything was preprogrammed and nothing was performed live.

The music was created by Oneohtrix Point Never, with whom the team worked closely and iteratively to develop a bespoke piece for the performance: they would animate and send him the simulation, he would add music and send it back; they would animate again, send him the simulation, he would change the music and send it back... and so on, until a few days before the performance. I now realise that it's no different to a choreographer instructing a dancer, who takes the choreography and makes it their own; or a composer writing a piece of music for a musician, who takes the score and makes it their own. These robots took our animations and made them their own. We quickly realised this and fully embraced it, adding little touches which would really allow the vehicles' quirks and character to fully shine through.

marshmallowlaserfeast.com | Directors Showcase 2012
Event concept created by: Jonathan Santana & Xander Smith, Saatchi & Saatchi
Producer: Juliette Larthe
Production Supervisor: Holly Restieaux
Show Directors: Marshmallow Laser Feast - Memo Akten, Robin McNicholas, Barney Steel
MLF Team: Raffael Ziegler, Devin Matthews, Rob Pybus, James Medcraft
Quadrotor Design & Development: KMel Robotics
Sound Design: Oneohtrix Point Never
Intro Music: "Shine a Light", Spiritualized®
Set Design: Sam & Arthur
Production: Ben Larthe, Mike Tombeur, […]
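A hypothetical sketch of that aiming step (the struct, offsets and DMX ranges are illustrative assumptions, not Memo's code or the Sharpy's actual channel map): given a fixture's floor position and a tracked quadrotor position, compute pan/tilt and map the angles to DMX values, with per-device offsets standing in for the calibration described above.

#include "ofMain.h"

// Illustrative fixture description; positions come from the VICON
// calibration step described above.
struct MovingHead {
    ofVec3f position;     // where the fixture sits on the floor, stage space
    float panOffset = 0;  // per-device correction, degrees
    float tiltOffset = 0; // (stands in for the angle remapping Memo describes)
};

// Pan/tilt (degrees) needed to aim the fixture at a tracked position.
// Assumes z is up and pan is measured in the floor plane.
ofVec2f aimAt(const MovingHead& head, const ofVec3f& target) {
    ofVec3f d = target - head.position;
    float pan  = ofRadToDeg(atan2(d.y, d.x)) + head.panOffset;
    float tilt = ofRadToDeg(atan2(d.z, sqrt(d.x * d.x + d.y * d.y))) + head.tiltOffset;
    return ofVec2f(pan, tilt);
}

// Map angles into DMX bytes; the ranges here are illustrative.
unsigned char panToDmx(float panDeg)   { return (unsigned char) ofMap(panDeg,  -270, 270, 0, 255, true); }
unsigned char tiltToDmx(float tiltDeg) { return (unsigned char) ofMap(tiltDeg, -135, 135, 0, 255, true); }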
- ‘Obake’ (o-baa-keh) – 2.5D interaction gestures to manipulate 3D surfaces Created by Dhairya Dand and Robert Hemsley, the project seeks to develop gestures to evolve three-dimensional surfaces using 2.5D […]
- Light Leaks – Filling a room with projected light Light Leaks is a light installation by Kyle McDonald and Jonas Jongejan comprised of fifty mirror balls projecting controlled light in the […]
- ‘Assembly’ by Kimchi and Chips – 5,500 physical pixels and digital light

A hemisphere of 5,500 white blocks occupies the air, each hanging from above in a pattern which repeats in order and disorder. Pixels play over the physical blocks as an emulsion of digital light within the physical space, producing a habitat for digital forms to exist in our world. This is "Assembly", the latest installation by Kimchi and Chips, located in the Nakdong river cultural centre gallery in Busan, Korea.

A group of external projectors penetrates the volume of cubes with pixel-rays, until every single one of the cubes becomes coated with pixels. By scanning with "structured light", each pixel receives a set of known information, such as its absolute 3D position within the volume and the identity of the block it lives on. The spectator is invited to study a boundary line between the digital and natural worlds, to see the limitations of how the two spaces co-exist.

The aesthetic engine spans these digital and physical realms. Volumetric imagery is generated digitally as a cloud of discontinuous surfaces, which are then applied through the video projectors onto the polymer blocks. By rendering figurations of imaginary digital forms into the limiting, error-driven physical system, the system acts as an agency of abstraction, redefining and grading the intentions of imaginary forms through its own vocabulary. The flow of light in the installation creates visual mass. The spectator's balance is shifted by this visceral movement, causing a kinaesthetic reaction: for digital to exist in the real world, it must suffer its rules, and gain its possibilities.

The sparse physical nature of the installation allows the digital form to create a continuous manifold within the space across the discrete blocks, whilst also passing through each block as a continuous pocket of physical space. The polymer blocks are engineered both for diffusive/translucent properties and for a reflective/projectable response to the pixel-rays; this way a block can act as a site for illumination or for imagery. The incomplete form of the hemisphere becomes extinct at its base, but extends through a reflection below, and therein becomes complete. It takes inspiration from nature, whilst becoming an artefact of technology.

Technical principle

Kimchi and Chips think of a projector as ~1 million static spotlights, each aimed in a unique direction away from the projector's lens. Each spotlight hitting the structure creates a pool of light with a defined physical location in space. By scanning the 3D location of these tiny pools of light, they can understand how to construct a macroscopic volumetric scene out of them and, most simply, imagine them as a cloud of individually addressable LEDs.

A set of 5 projectors is connected to a computer, which renders to all projectors simultaneously in real time. Every part of every block is seen by at least one projector. Using ofxGraycode and a set of 5 high-resolution cameras (each positioned alongside a projector), they make a structured light scan to find correspondences between projector pixels and camera pixels. They solve the intrinsic and extrinsic properties of the cameras and projectors using these correspondences, and then use this information to triangulate the 3D position of every pixel (thereby creating voxels). Using Point Cloud Library they cluster this data to discover the locations of the blocks, and then fit cuboids to these clusters.
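To illustrate that triangulation step, here is a minimal geometric sketch under assumptions (this is not the project's ProCamSolver code): once a projector pixel and a camera pixel are known to correspond, each defines a ray in world space from its calibrated device, and the pixel's 3D position is taken as the point where the two rays pass closest to each other.

#include "ofMain.h"

struct Ray {
    ofVec3f origin;
    ofVec3f direction; // normalised
};

// Midpoint of the shortest segment between two (possibly skew) rays.
ofVec3f triangulate(const Ray& a, const Ray& b) {
    ofVec3f w = a.origin - b.origin;
    float dAA = a.direction.dot(a.direction); // = 1 if normalised
    float dAB = a.direction.dot(b.direction);
    float dBB = b.direction.dot(b.direction);
    float dAW = a.direction.dot(w);
    float dBW = b.direction.dot(w);
    float denom = dAA * dBB - dAB * dAB;      // ~0 when rays are parallel
    float s = (dAB * dBW - dBB * dAW) / denom;
    float t = (dAA * dBW - dAB * dAW) / denom;
    ofVec3f pA = a.origin + a.direction * s;  // closest point on ray a
    ofVec3f pB = b.origin + b.direction * t;  // closest point on ray b
    return (pA + pB) * 0.5f;                  // the reconstructed voxel position
}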
Using this information they now know, for each of the ~3.5 million active pixels:

• 3D position of the pixel
• 3D position of the cube centroid
• Cube index
• Face index on the cube
• 2D position of the pixel on the cube face
• Index of the wire the cube belongs to
• Pixel normal
• Cube rotation

Final system

A camera calibration app written in openFrameworks allows them to determine the intrinsics and extrinsics of the cameras following a chessboard calibration routine. This data can then be used to calibrate the projectors using structured light. They define the calibration tree as a travelling salesman problem: different calibration routes (e.g. camera A to camera C) are assigned a cost based on the accuracy of the available calibration route; they then evaluate the best calibration tree for each camera and projector, and integrate through the calibration.

Each day a startup script first performs the scan in openFrameworks and then starts the runtime in VVVV. An application first performs the simultaneous capture of structured light on the 5 cameras whilst stepping sequentially through each projector. Following this, they triangulate the projector pixels to create a dense mapping between the 2D pixels of the projector and the physical locations of those pixels in 3D space. This map is then stored to disk. The startup script then loads VVVV, which transfers the data maps to the GPU. They define 'brushes' using HLSL shaders which act on the dataset (see the sketch after the credits below). Different brushes generate different visual effects; for example, some generate density fields which are interpreted as either gradients or isosurfaces. The VVVV graph plays through a script of generative animations and performs systems management.

Software platform choices

Kimchi and Chips started by identifying VVVV, openFrameworks and Cinema 4D as valuable platforms for developing the project. The intention was to play to the strengths of the platforms available in terms of quality of output and immediacy of creative process, and to experiment with developing new workflows.

• VVVV was used throughout the pre-visualisation, simulation and prototyping stages, and later for runtime and systems management.
• openFrameworks was used for more advanced vision tasks where minimalism, timing, threading control and memory management were favoured.
• Cinema 4D offers a tuned environment for designing and animating 3D content, but is generally limited to producing renders as 2D images/video or exporting meshes. Using Python, they 'hacked' Cinema 4D's cameras to capture volumetric data from scenes, defined as a multitude of image files.

Commonly, the software research became focused on developing effective interoperability between these platforms, e.g.:

• Developing an OpenGL render pipeline in VVVV, thereby allowing them to embed openFrameworks rendering within VVVV (experimental, fledgling)
• Creating a threaded image processing platform within VVVV so that they could rapidly prototype advanced vision tasks within the VVVV graph (released and currently deployed in projects by other studios)
• Developing the Python-scripted 'volume capture rigs' inside Cinema 4D to export volumetric fields to be reloaded into either a standalone openFrameworks simulation app, or VVVV for runtime (project-specific)

Code

Throughout the development process, all code used for the project has been available on GitHub.
Project files: VVVV projects | openFrameworks projects

Applications: OpenNI-Measure: take measurements on a building site using a Kinect | Kinect intervalometer: apps for taking timelapse point-cloud recordings using a Kinect | VVVV.External.StartupControl: manage installation startups on Windows computers

Algorithms: ProCamSolver: experimental system for solving the intrinsics and extrinsics of projectors and cameras in a structured light scanning system | PCL projects

openFrameworks addons: ofxGrabCam: camera for effectively browsing 3D scenes | ofxRay: ray casting, projection maths and triangulation | ofxGraycode: structured light scanning | ofxCvGui2: GUI for computer vision tasks | ofxTSP: solve the Travelling Salesman Problem | ofxUeye: interface with IDS Imaging cameras

VVVV plugins: VVVV.Nodes.ProjectorSimulation: simulate projectors | VVVV.Nodes.Image: threaded image processing, OpenCV routines and structured light

Credits

Kimchi and Chips: Mimi Son and Elliot Woods
Production staff: Minjae Kim and Minjae Park
Mathematicians: Daniel Tang and Chris Coleman-Smith
Videography: MONOCROM, Mimi Son, Elliot Woods | Music by Johnny Ripper
Manufacturing: Star Acrylic, Seoul and Dongik Profile, […]
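To make the 'brush' idea from the Assembly runtime concrete, here is a sketch in C++ rather than the project's HLSL, with hypothetical names throughout: every mapped projector pixel carries its scanned 3D position, and a brush is just a function from that position to a brightness; here, a glowing sphere moving through the block cloud.

#include "ofMain.h"

// A density-field 'brush': brightness falls off with distance from a
// moving sphere centre.
float sphereBrush(const ofVec3f& voxelPos, const ofVec3f& centre, float radius) {
    float d = voxelPos.distance(centre);
    return ofClamp(1.0f - d / radius, 0, 1); // 1 at the centre, 0 at the edge
}

// Per frame, for each of the ~3.5 million mapped pixels:
//   brightness = sphereBrush(pixelWorldPos, animatedCentre, radius);
// and that brightness is written back to the corresponding projector pixel.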
- Little Magic Stories [openFrameworks, Kinect]

Little Magic Stories is the latest project by Chris O'Shea, with the aim of encouraging children to use their creativity to bring stories to life. The installation allows them to create a performance from within their imagination, on stage, in front of an audience of family and friends. Chris writes:

This version
This is the first version of the project, to test the idea and build the system. This story about the seasons was created entirely by the children, with the interactivity in the scenes built by me. Some scenes used motion detection in zones to trigger animations, such as catching Easter eggs, squashing sand castles or launching fireworks. Body tracking and basic physics were used in other scenes.

The future
I am planning to use this project in workshops with groups of children to get them excited about storytelling. They will be able to use the system to create their own narratives, as well as drawing the content by hand, before performing to their friends. The system will have improved physics, dynamic animation of objects and scene-animated sounds.

Chris used the Musion Eyeliner holographic projection system for this project, allowing the graphics to appear to be alongside the performers. This uses a technique called Pepper's ghost, and you can see the technical set-up here. An Xbox Kinect was also used to track the performers on stage. The software was custom written in C++ and used openFrameworks, openCV and Box2D. Project […]
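A hypothetical sketch of the "motion detection in zones" idea mentioned above (not Chris O'Shea's code; the class and thresholds are illustrative): frame-difference the camera feed with the ofxOpenCv addon and trigger an animation when enough pixels change inside a given stage region.

#include "ofMain.h"
#include "ofxOpenCv.h"

class ZoneTrigger {
    ofxCvGrayscaleImage previous, current, diff;
    ofRectangle zone;    // region of the image to watch
    int threshold = 500; // changed-pixel count that counts as "motion"

public:
    void setup(int w, int h, ofRectangle zone_) {
        previous.allocate(w, h);
        current.allocate(w, h);
        diff.allocate(w, h);
        zone = zone_;
    }

    // Returns true when motion is detected inside the zone this frame.
    bool update(ofxCvGrayscaleImage& frame) {
        current = frame;
        diff.absDiff(previous, current); // what changed since last frame
        diff.threshold(30);              // ignore small pixel-level noise
        previous = current;

        // Count white (changed) pixels that fall inside the zone.
        ofPixels& pix = diff.getPixels();
        int changed = 0;
        for (int y = zone.getTop(); y < zone.getBottom(); y++)
            for (int x = zone.getLeft(); x < zone.getRight(); x++)
                if (pix.getColor(x, y).getBrightness() > 0) changed++;
        return changed > threshold;
    }
};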
- Imposition – Live performance by Edisonnoside and Daniel Schwarz. Created by Edisonnoside and visual artist Daniel Schwarz, Imposition is a live audiovisual performance that combines dark, blurry electronic music and non-linear rhythms with hypnotic, abstract visuals superimposed on a matrix of […]
- Lit Tree [openFrameworks, Kinect]

Kimchi and Chips have just finished exhibiting their latest project at FutureEverything in Manchester: Lit Tree. Through the use of video projection, a tree is augmented, enabling the presentation of volumetric light patterns using its own leaves as voxels (3D pixels). Kimchi and Chips developed their own structured light system to scan the location of every pixel in 3D, allowing a cloud of scattered projector pixels to be used as 3D voxels. The software was written using C++, openFrameworks, Xcode and Visual Studio.

As a person places their hand above a plinth, their hand is scanned in 3D using a Kinect. Its realtime 3D shape is reflected inside the tree, allowing them to select a volume of the tree to highlight (see the sketch below). The tree invites viewers with a choreographed cloud of light that responds to visitors' motion. As visitors approach, they can explore the immediate and cryptic nature of this reaction. The tree can form gestures in this way, and can in turn detect the gestures of its visitors. By applying a superficial layer of immediate interaction to the tree, can people better appreciate the long-term invisible interaction that they share with it?

Material / Hardware (at FutureEverything): Bamboo tree, 2 x high-resolution webcams (reading structured light patterns), 2 x video projectors, Microsoft Kinect, Par 16 'Birdie' light w/ black wrap, wooden plinth, Mac Mini 2010

Exhibition: FutureEverything 2011, Manchester UK, May 11th-May 22nd

More about the process on their blog. Project Page

Previously: Link [openFrameworks, iPad, Flash, vvvv] - Installation by Kimchi […]
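A simplified illustration of that hand-to-tree step (assumptions throughout, not Kimchi and Chips' code): every projector pixel has a known 3D position from the structured light scan (a voxel), and the ones falling near the Kinect-sampled hand points get lit.

#include "ofMain.h"

struct Voxel {
    ofVec3f worldPos;       // from the structured light scan
    ofVec2f projectorPixel; // where to draw it
    float brightness = 0;
};

// Light every leaf-voxel that falls within `radius` of any hand point
// sampled from the Kinect depth image.
void highlightVolume(vector<Voxel>& voxels,
                     const vector<ofVec3f>& handPoints,
                     float radius) {
    for (auto& v : voxels) {
        v.brightness = 0;
        for (const auto& p : handPoints) {
            if (v.worldPos.distance(p) < radius) {
                v.brightness = 1; // this voxel lies inside the hand volume
                break;
            }
        }
    }
}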
Posted on: 17/05/2013
Posted in: openFrameworks