Interactive Laser Sculpture is a work in progress by Jayson Haebich that involves creating light sculptures out of laser light. The beams are interactive and responsive to touch, creating an environment of light, colour and sound.
The laser light appears as an almost tactile object cutting through mid-air, illuminating dust, smoke and other airborne particles to create swirling, mesmerising planes of light.
The openFrameworks application creates the laser beams by sending vectors to the laser via a DAC. The laser's position is calibrated to the three-dimensional field of depth data captured by the Kinect, which is then used to detect intersections between users and the laser beams, triggering effects such as colour changes or beam movement.
Those familiar with Anthony McCall's work know how mesmerising light interaction can be. Whereas McCall's work consists largely of static or slowly moving beams of light, Jayson introduces an exciting interactive element where the light itself becomes an interface for manipulating beam, colour and sound.
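As a rough illustration of the intersection detection described above, here is a minimal sketch in plain C++. The names, the segment representation of a beam, and the distance threshold are all assumptions for illustration, not Jayson's actual implementation: each beam is treated as a 3D line segment, and a touch is flagged when any Kinect depth point falls within a small distance of it.

```cpp
#include <cmath>
#include <vector>

// Hypothetical sketch: a laser beam modelled as a 3D segment, tested
// against a Kinect point cloud for near-intersections.
struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Distance from point p to the beam segment a-b.
float distanceToBeam(Vec3 p, Vec3 a, Vec3 b) {
    Vec3 ab = sub(b, a);
    float t = dot(sub(p, a), ab) / dot(ab, ab);
    t = std::fmax(0.0f, std::fmin(1.0f, t));   // clamp to the segment
    Vec3 closest = {a.x + t * ab.x, a.y + t * ab.y, a.z + t * ab.z};
    Vec3 d = sub(p, closest);
    return std::sqrt(dot(d, d));
}

// True if any depth point comes within `threshold` of the beam.
bool beamTouched(const std::vector<Vec3>& cloud, Vec3 a, Vec3 b,
                 float threshold) {
    for (const Vec3& p : cloud)
        if (distanceToBeam(p, a, b) < threshold) return true;
    return false;
}
```

In an openFrameworks app, `cloud` would come from the calibrated Kinect depth data, and a positive test could trigger a colour change or move the beam.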
- Lit Tree [openFrameworks, Kinect] Kimchi and Chips have just finished exhibiting their latest project at FutureEverything in Manchester - Lit Tree. Through the use of video projection, a tree is augmented, enabling the presentation of volumetric light patterns using its own leaves as voxels (3D pixels). Kimchi and Chips developed their own structured light system to scan in the location of every pixel in 3D, allowing a cloud of scattered projector pixels to be used as 3D voxels. The software was written in C++ using openFrameworks, Xcode and Visual Studio. As a person places their hand above a plinth, their hand is scanned in 3D using a Kinect. Their realtime 3D shape is reflected inside the tree, allowing them to select a volume of the tree to highlight. The tree invites viewers with a choreographed cloud of light that can respond to visitors' motion. As visitors approach, they can explore the immediate and cryptic nature of this reaction. The tree can form gestures in this way, and can in turn detect the gestures of its visitors. By applying a superficial layer of immediate interaction to the tree, can people better appreciate the long-term invisible interaction that they share with it? Material / Hardware (at FutureEverything): Bamboo tree, 2 x high-resolution webcams (reading structured light patterns), 2 x video projectors, Microsoft Kinect, Par 16 'Birdie' light w/black wrap, wooden plinth, Mac Mini 2010. Exhibition: FutureEverything 2011, Manchester UK, May 11th-May 22nd. More about the process on their blog. Project Page Previously: Link [openFrameworks, iPad, Flash, vvvv] - Installation by Kimchi […]
- Robert Henke ‘Lumière’ – Cutting the room with vectors and lasers As Robert Henke sets off on his tour with the new project Lumière, kicking off in NYC on the 10th May, we offer a little preview of what is to […]
- Laser Projection Mapping on Soap Bubbles by Memo Akten What happens when you mix soap bubbles and lasers? Memo Akten, 1/3 of Marshmallow Laser Feast, just got hold of an Etherdream laser DAC and has been "messing" with it using […]
- “unnamed soundsculpture” by Daniel Franke & Cedric Kiefer / Kinect Produced by onformative and chopchop, the "unnamed soundsculpture" is a project by Daniel Franke & Cedric Kiefer, building from the simple idea of creating a moving sand sculpture from the recorded motion data of a real person. For the work the team asked a dancer to visualize a musical piece (Kreukeltape by Machinenfabriek) as closely as possible through the movements of her body. She was recorded by three depth cameras (Kinect) using Processing; the intersections of the images were later combined into a three-dimensional volume (3D point cloud) in 3D Studio Max, so the team was able to use the collected data throughout the further process. The three-dimensional image allowed completely free handling of the virtual camera, without the limitations of a fixed perspective. The camera also reacts to the sound and supports the physical imitation of the musical piece by the performer. The camera moves through a noise field, where a simple modification of the random seed can consistently create new versions of the video, each offering a different composition of the recorded performance. The multi-dimensionality of the sound sculpture is already contained in every movement of the dancer, as the camera footage allows any imaginable perspective. The body – constant and indefinite at the same time – “bursts” the space already with its mere physicality, creating a first distinction between the self and its environment. Only the body movements create a reference to the otherwise invisible space, much like the dots bounce on the ground to give it a physical dimension. Thus, the sound-dance constellation in the video does not only simulate a purely virtual space. The complex dynamics of the body movements are also strongly self-referential. With the complex quasi-static, inconsistent forms the body is “painting”, a new reality space emerges whose simulated aesthetic goes far beyond numerical codes.
Similar to painting, a single point appears very abstract, but the more points are connected to each other, the more complex and concrete the image seems. The more perfect and complex the “alternative worlds” we project (Vilém Flusser) and the closer together their point elements, the more tangible they become. A digital body, consisting of 22,000 points, thus seems so real that it comes to life again. Nominated for the MuVi Award: kurzfilmtage.de/en/competitions/muvi-award/selection.html see video in full quality: daniel-franke.com/unnamed_soundsculpture.mov Recorded at: LEAP - Lab for Electronic Arts and Performance music: The Naked and Famous - "The […]
- Power of One #Point – Refracting laser light installation by Shohei Fujimoto Created by Shohei Fujimoto, Power of One #Point is an installation exploring input and output using laser and reflecting […]
- White Light / White Heat [vvvv] Created by Rainer Kohlberger and Wilm Thoben (praxis berlin), White Light / White Heat (2011) is a piece for the site-specific installation ’shift’ shown at the 22PRESENTS showroom in Prague. A strong laser beam is deflected by two mirrors oscillating at very high speeds. The human eye perceives a stable image as long as the mirrors move fast enough; White Light / White Heat operates at the threshold of this rate. A constant modulation is introduced: flicker and envelopes modulate basic geometric shapes and waveforms. "Shift": The skeleton of the space is shifted into a different perspective and outlined on the walls. Focussing on the transformation of one perspective into the other, the space between the two is explored through iterative processes. The infinite variations and states of the process are reflected in generative laser drawings. The sound creates a constantly changing environment. Space is made perceivable by moving relatively static sound sources across a special loudspeaker setup. Different sound beams scan the room and explore its spatial properties. Wilm used a dodecahedric loudspeaker, a geometric shape constructed from 12 equal pentagons with a speaker mounted on each flat face. It is possible to create sound beams on it which can be rotated in a spherical manner; likewise it is possible to achieve any kind of spherical spatial distribution of sound. Wilm describes the sound material as very basic: sines, noise and impulses modulated by different envelopes, mostly triggered by the flicker frequency of the laser. Software used was vvvv and SuperCollider, and the laser was mounted on a moving head so they could point it to every position in the room. 22presents.com | […]
- Shedding Light on Squidsoup – A Conversation with Anthony Rowe For more than a decade, the artist collective Squidsoup have been designing rich interactive experiences. From their early navigable sonic environments, through their playful experiments with computer vision and interest in 'volumetric visualizations', an email exchange between Squidsoup's Anthony Rowe and CAN begat a mammoth interview about light, sound and many of the collective's […]
- Apparel by the Normals – Clothes that evolve in real-time with the user We have already seen a number of projects that try to address both the concept of generative clothing and the new manufacturing techniques that allow the creation of one-off, per-order items. What does not seem to be addressed are the implications of these new technologies on the design process, and how they change the role of the designer in this real-time, almost immediate, media culture. Created by the new French collective Normals, A P P A R E L is a piece of clothing designed to exist both physically and digitally. Rather than utilising AR only to bring virtual objects into the physical environment, the project uses personal data as a method both to generate the dress and to decide how the dress may evolve. Its real aesthetic intent occurs on a 3D overlay, viewable with a camera and their custom-coded application, allowing users to digitally dress up. As it uses personal data as an input, the piece’s design evolves in real-time together with its user. This is a first prototype for the ongoing project by Normals, part of many other interesting projects that address future scenarios through cross-media. It's refreshing to see a shift from "output thinking" towards re-examining how realtime data may affect many things we take for granted as static. If output is only an iteration of the generative process, this makes the always-evolving algorithm the actual 'product'. This version of the project was made using openFrameworks with the help of: imagery analysis: OpenCV, ofxCv, ofxARToolkitPlus; 3D modeling: ofxAssimpModelLoader, OpenGL; camera settings: ofxUVC, ofxQTKitVideoGrabber, ofxYAML; GUI: ofxUI, […]
Posted on: 18/07/2012
Posted in: openFrameworks