Faith Condition by Lukas Franciszkiewicz is a project that addresses the understanding and application of technology within the religious circles of today's "media society". Lukas is interested in the transformation of religion and the technological reproduction of the religious phenomenon of an 'out-of-body' experience. The initial aim was the manipulation of human self-perception by blurring the boundaries between the real and a virtual body. Building on these experiments, Lukas developed a few scenarios for a disembodied sense.
Today's technologies tend to convey security and confidence rather than functional transparency. Owing to this illusory potential, technology is strongly connected to mechanisms of faith and religion. Based on this awareness, I created a fictional scenario for faith-conditioning objects. The first object is a camera device which is pulled along by an attached cord. It addresses the personal demand for an objective view in a world scattered with digital artefacts and acts as a constant reminder of technological dependence. The user connects the device to a pedestal that invites you to kneel down – a faith-based interaction manifests itself in a technological ritual. How does implicit trust in technological products change our behaviour and morals?
The first series of perception experiments included head-mounted webcams, video glasses and vvvv prototyping to portray the "disembodied sense". The later proposals offer a more 'completed', objectified experience, complemented by a film about the project.
Lukas Franciszkiewicz is a 'subversive' product-based designer, further interested in the fields of interaction design, speculative design and conceptual products. His approach is informed by ideas that challenge the autonomy of design and extend it to its broadest contexts. Focused on research and experimental concepts, he deals with the impact of technology on human perception and behaviour. Using a wide variety of media, from models to prototypes and video, he aims to encourage people to develop a critical view of their relationships with technology and design. Fiction enters his work as a tool to rethink our behaviour and as a framework for dialogue.
- Trace Modeler [openFrameworks] Created by Karl D.D. Willis, Trace Modeler is an application that uses real-time video to create three-dimensional geometry. The silhouette of a foreground object in a video frame is subtracted from the background and used as a two-dimensional slice. At user-defined intervals new slices are captured and displaced along the depth axis. The result is a three-dimensional model defined by silhouette slices over time. Trace Modeler was built using openFrameworks and the OpenCV library to recognise contours from the video image. Source code is available for download here. Project Page (re-discovered via Cedric Kiefer) See also Beautiful Modeler [iPad, […]
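The slicing idea can be sketched in plain NumPy (a hedged illustration of the principle, not Willis's actual OpenCV code; the function names here are my own):

```python
import numpy as np

def silhouette_slice(frame, background, threshold=30):
    """Subtract the background from a grayscale frame and return a
    boolean 2D slice where the foreground silhouette is True."""
    return np.abs(frame.astype(int) - background.astype(int)) > threshold

def build_volume(frames, background, interval=1):
    """Capture a slice every `interval` frames and stack the slices
    along a depth axis, yielding a silhouette-over-time volume."""
    slices = [silhouette_slice(f, background) for f in frames[::interval]]
    return np.stack(slices, axis=0)  # shape: (depth, height, width)

# toy data: a bright square moving across a dark background
background = np.zeros((8, 8), dtype=np.uint8)
frames = []
for x in range(4):
    f = background.copy()
    f[2:5, x:x + 3] = 255
    frames.append(f)

volume = build_volume(frames, background, interval=2)
print(volume.shape)  # (2, 8, 8)
```

In the real application the boolean slices would come from OpenCV contour detection rather than a simple threshold, but the stacking along the depth axis is the same.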
- Computer Augmented Crafts [vvvv] Christian Fiebig designed a computer interface that makes suggestions to the designer while he is working. In his version, the computer follows a structure in the making via a webcam and instantly generates other design suggestions based on any special parameters programmed by the designer. It's like having a colleague in your workshop giving you direct feedback. The experimental prototype can recognise and respond to basic structures created on-camera by spot-welding thin strips of metal together. The Computer Augmented Crafts program accomplishes two things: first, it can point out new ways of dealing with the design process to the designer; second, it makes the most of modern technology without sacrificing the advantages of craftsmanship. The working prototype was realised with the help of Roman Grasy. Software used was vvvv plus a fiducial marker tracker on a Windows XP desktop PC with a webcam. Project […]
- Delineating the Future – an interview with N O R M A L S CAN goes in-depth with the Paris-based 'anticipatory' design studio N O R M A L S to learn about their forthcoming dark, dense, and dizzying graphic novel series. Working process, representational techniques (that bridge illustration and code), and a critical reading of contemporary design […]
- Therefore I Am – Fictional instrument to explore prenatal diagnostics "Therefore I Am" is a project exploring prenatal diagnostics – the measurement of a human before birth – and the consequences and ethics that arise as scientists encode our DNA further and […]
- ‘Assembly’ by Kimchi and Chips – 5,500 physical pixels and digital light A hemisphere of 5,500 white blocks occupies the air, each hanging from above in a pattern which repeats in order and disorder. Pixels play over the physical blocks as an emulsion of digital light within the physical space, producing a habitat for digital forms to exist in our world. This is "Assembly", the latest installation by Kimchi and Chips, located in the Nakdong River Cultural Centre gallery in Busan, Korea.

A group of external projectors penetrates the volume of cubes with pixel-rays until every single one of the cubes becomes coated with pixels. By scanning with "structured light", each pixel receives a set of known information, such as its absolute 3D position within the volume and the identity of the block it lives on. The spectator is invited to study the boundary line between the digital and natural worlds, to see the limitations of how the two spaces coexist.

The aesthetic engine spans these digital and physical realms. Volumetric imagery is generated digitally as a cloud of discontinuous surfaces, which is then applied through the video projectors onto the polymer blocks. By rendering figurations of imaginary digital forms into the limiting, error-driven physical system, the system acts as an agency of abstraction, redefining and grading the intentions of imaginary forms through its own vocabulary.

The flow of light in the installation creates visual mass. The spectator's balance is shifted by this visceral movement, causing a kinaesthetic reaction: for the digital to exist in the real world, it must suffer its rules and gain its possibilities. The sparse physical nature of the installation allows the digital form to create a continuous manifold within the space across the discrete blocks, whilst also passing through each block as a continuous pocket of physical space.
The polymer blocks are engineered both for diffusive/translucent properties and for a reflective/projectable response to the pixel-rays. This way a block can act as a site for illumination or for imagery. The incomplete form of the hemisphere becomes extinct at its base, but extends through a reflection below, and therein becomes complete. It takes inspiration from nature, whilst becoming an artefact of technology.

Technical principle

Kimchi and Chips think of a projector as ~1 million static spotlights, each aimed in a unique direction away from the projector's lens. Each spotlight hitting the structure creates a pool of light with a defined physical location in space. By scanning the 3D location of these tiny pools of light, they can understand how to construct a macroscopic volumetric scene out of them and, most simply, imagine them as a cloud of individually addressable LEDs.

A set of 5 projectors is connected to a computer, which renders to all projectors simultaneously in real time. Every part of every block is seen by at least one projector. Using ofxGraycode and a set of 5 high-resolution cameras (each positioned alongside a projector) they make a structured light scan to find correspondences between projector pixels and camera pixels. They solve the intrinsic and extrinsic properties of the cameras and projectors using these correspondences, and then use this information to triangulate the 3D position of every pixel (thereby creating voxels). Using Point Cloud Library they cluster this data to discover the locations of the blocks, and then fit cuboids to these clusters.
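The Gray-code structured-light principle behind an addon like ofxGraycode can be sketched in a few lines of Python (a simplified illustration, not the addon's actual API): each projector column is encoded as a sequence of binary stripe patterns, and decoding the sequence a camera pixel observes recovers which projector pixel illuminates it.

```python
def to_gray(n):
    """Convert a pixel index to its Gray code, so consecutive indices
    differ in exactly one bit (robust to edge blur in the captures)."""
    return n ^ (n >> 1)

def from_gray(g):
    """Decode a Gray code back to the original pixel index."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def patterns(width, bits):
    """For each bit plane, emit one binary stripe pattern: pattern[b][x]
    is bit b of the Gray code of projector column x. Projecting these
    planes and thresholding the camera images recovers, per camera
    pixel, the projector column it sees."""
    return [[(to_gray(x) >> b) & 1 for x in range(width)] for b in range(bits)]

# a camera pixel observes the bit sequence for column 5 of an 8-column projector
observed = [p[5] for p in patterns(8, 3)]
code = sum(bit << b for b, bit in enumerate(observed))
assert from_gray(code) == 5
```

With such correspondences gathered from several cameras, standard multi-view geometry can then triangulate each projector pixel's 3D position, as described above.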
Using this information they now know, for each of the ~3.5 million active pixels:

• 3D position of the pixel
• 3D position of the cube centroid
• Cube index
• Face index on the cube
• 2D position of the pixel on the cube face
• Index of the wire the cube belongs to
• Pixel normal
• Cube rotation

Final system

A camera calibration app written in openFrameworks allows them to determine the intrinsics and extrinsics of the cameras following a chessboard calibration routine. This data can then be used to calibrate the projectors using structured light. They define the calibration tree as a travelling salesman problem: different calibration routes (e.g. camera A to camera C) are assigned a cost based on the accuracy of the available calibration route; they then evaluate the best calibration tree for each camera and projector and integrate through the calibration.

Each day a startup script first performs the scan in openFrameworks and then starts the runtime in VVVV. An application first performs the simultaneous capture of structured light on the 5 cameras whilst stepping sequentially through each projector. Following this, they triangulate the projector pixels to create a dense mapping between the 2D pixels of the projector and the physical locations of those pixels in 3D space. This map is then stored to disk. The startup script then loads VVVV, which transfers the data maps to the GPU. They define 'brushes' using HLSL shaders which act on the dataset. Different brushes generate different visual effects; for example, some generate density fields which are interpreted as either gradients or isosurfaces. The VVVV graph plays through a script of generative animations and performs systems management.

Software platform choices

Kimchi and Chips started by identifying VVVV, openFrameworks and Cinema 4D as valuable platforms on which to develop the project.
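The route-cost idea behind the calibration tree can be sketched as shortest-path search over a graph of devices (a minimal illustration of the concept only; the device names and cost values here are invented, and the studio's actual solver is not shown in this text):

```python
import heapq

def best_route(graph, start, goal):
    """Dijkstra over a graph of devices, where each edge cost reflects
    the expected error of calibrating one device against another.
    Returns (total_cost, route)."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + c, nxt, path + [nxt]))
    return float("inf"), []

# hypothetical calibration costs between cameras and projectors
graph = {
    "camA": {"projA": 0.2, "camB": 0.5},
    "camB": {"camA": 0.5, "projB": 0.1},
    "projA": {"camA": 0.2},
    "projB": {"camB": 0.1},
}
cost, route = best_route(graph, "camA", "projB")
print(route)  # ['camA', 'camB', 'projB']
```

Evaluating such routes for every camera and projector yields a tree of lowest-error calibration chains rooted at a reference device.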
The intention was to play to the strengths of the platforms available in terms of quality of output and immediacy of creative process, and to experiment with developing new workflows.

• VVVV was used throughout the pre-visualisation, simulation and prototyping stages, and later for runtime and systems management.
• openFrameworks was used for more advanced vision tasks where minimalism, timing, threading control and memory management were favoured.
• Cinema 4D offers a tuned environment for designing and animating 3D content, but is generally limited to producing renders as 2D images/video or exporting meshes. Using Python, they 'hacked' Cinema 4D's cameras to capture volumetric data from scenes, stored as a multitude of image files.

Commonly the software research became focused on developing effective interoperability between these platforms, e.g.:

• Developing an OpenGL render pipeline in VVVV, thereby allowing them to embed openFrameworks rendering within VVVV (experimental, fledgling)
• Creating a threaded image processing platform within VVVV so that they could rapidly prototype advanced vision tasks within the VVVV graph (released and currently deployed in projects by other studios)
• Developing the Python-scripted 'volume capture rigs' inside Cinema 4D to export volumetric fields to be reloaded into either a standalone openFrameworks simulation app or VVVV for runtime (project specific)

Code

Throughout the development process, all code used for the project has been available on GitHub.
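The slice-per-file exchange of volumetric fields between tools can be sketched as follows (an illustration of the workflow only, not the actual Cinema 4D Python rig; `.npy` files stand in here for the image files the studio used):

```python
import os
import tempfile
import numpy as np

def export_volume(volume, folder):
    """Write each Z-slice of a 3D field to its own file, mimicking the
    'multitude of image files' export from the animation package."""
    for z in range(volume.shape[0]):
        np.save(os.path.join(folder, f"slice_{z:04d}.npy"), volume[z])

def import_volume(folder):
    """Reassemble the volume in the runtime by loading slices in
    filename order and stacking them along the depth axis."""
    files = sorted(f for f in os.listdir(folder) if f.endswith(".npy"))
    return np.stack([np.load(os.path.join(folder, f)) for f in files], axis=0)

volume = np.random.rand(4, 8, 8).astype(np.float32)
with tempfile.TemporaryDirectory() as tmp:
    export_volume(volume, tmp)
    restored = import_volume(tmp)
assert np.array_equal(volume, restored)
```

The zero-padded filenames keep lexicographic and depth order identical, so any tool that can read a folder of slices can rebuild the field.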
Project files: VVVV projects | openFrameworks projects
Applications: OpenNI-Measure : take measurements on a building site using a Kinect | Kinect intervalometer : apps for taking timelapse point-cloud recordings using a Kinect | VVVV.External.StartupControl : manage installation startups on Windows computers
Algorithms: ProCamSolver : experimental system for solving the intrinsics and extrinsics of projectors and cameras in a structured light scanning system | PCL projects
openFrameworks addons: ofxGrabCam : camera for effectively browsing 3D scenes | ofxRay : ray casting, projection maths and triangulation | ofxGraycode : structured light scanning | ofxCvGui2 : GUI for computer vision tasks | ofxTSP : solve the Travelling Salesman Problem | ofxUeye : interface with IDS Imaging cameras
VVVV plugins: VVVV.Nodes.ProjectorSimulation : simulate projectors | VVVV.Nodes.Image : threaded image processing, OpenCV routines and structured light

Credits

Kimchi and Chips: Mimi Son and Elliot Woods
Production staff: Minjae Kim and Minjae Park
Mathematicians: Daniel Tang and Chris Coleman-Smith
Videography: MONOCROM, Mimi Son, Elliot Woods | Music by Johnny Ripper
Manufacturing: Star Acrylic, Seoul and Dongik Profile, […]
- Monolith [vvvv, Objects, Arduino] 'Monolith' is the latest project from the London-based design studio Signal | Noise. The team collaborated with the Swiss design studio Unit for the French luxury label Hermès and their new flagship store in Geneva. The theme for the evening was the meeting of handcraft and technology: in the first room they created an iPad application which invited guests to leave their hand print on the evening, whereas the second installation, shown here, was a six-metre interactive object that allowed visitors to control strips of light passing through it. The so-called "Monolith" was interwoven with "digital stitches" – arrays of infra-red sensors and LEDs which allowed guests to create and control strips of light in its minimal, high-gloss surface. The structure is made of a timber frame, routed high-gloss MDF panels, acrylic strips, LED strips, IR-transmissive plastic and custom circuit boards. The custom application, made in vvvv by Gareth Griffiths, communicates with the LED strips using Arduino boards. The Arduino boards were programmed by Dom Robson to send and receive binary messages, which are decoded using a combination of vvvv nodes and a custom plugin called ShiftData made by Vux. The on and off touch signals are sent to the LED control patch, where the data is analysed and sent back to the Arduino controlling the individual brightness of the LEDs. See the vvvv patch images below for a further description of the process. vvvv Patch: Gareth Griffiths / Uberact Hardware Design and Programming: Dominic Robson Project Page | Unit | Signal / […]
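The kind of binary message passing described above can be sketched as simple bit packing (my own generic illustration; the actual ShiftData plugin and Arduino firmware protocol are not documented in this text):

```python
def pack_bits(bits):
    """Pack a list of on/off states into bytes, MSB first, as one
    might shift them out to daisy-chained LED driver registers."""
    out = bytearray()
    for i in range(0, len(bits), 8):
        chunk = bits[i:i + 8]
        byte = 0
        for bit in chunk:
            byte = (byte << 1) | (1 if bit else 0)
        byte <<= (8 - len(chunk)) % 8  # left-align a partial final byte
        out.append(byte)
    return bytes(out)

def unpack_bits(data, n):
    """Recover the first n on/off states from packed bytes."""
    return [(data[i // 8] >> (7 - i % 8)) & 1 == 1 for i in range(n)]

# nine touch-sensor states packed into two bytes and recovered intact
states = [True, False, True, True, False, False, True, False, True]
packed = pack_bits(states)
assert unpack_bits(packed, len(states)) == states
```

Framing per-channel brightness values rather than on/off bits would work the same way, with one or more bytes per LED instead of one bit.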
- ExR3 at NODE Forum by Elliot Woods and Kyle McDonald ExR3 is an anamorphic analogue interactive installation that exists coherently in a fractured, mirrored version of a reflected room, visible from four points within the real […]
- Six-Forty by Four-Eighty [Objects] Six-Forty by Four-Eighty is an interactive lighting installation designed to reveal the materiality of computation by recontextualising the common pixel. Composed of two hundred and twenty magnetic pixel-tiles in a darkened room, Zigelbaum + Coelho have made each pixel able to be touched, moved and modified. At the start of the day the pixel-tiles are packed together as a display; by the end of the day they will have migrated across the walls of the room. By transposing the pixel from the confines of the screen into the physical world, focus is drawn to the materiality of computation itself and new forms for design emerge. The team developed a new technology that makes the glass in the pixel-tile sense touch as well as send information through your body while you are touching it; they had to create their own composite glass and case for that. Each tile can also receive incoming IR data, so you can control it with a remote. They are battery powered and can last for up to two weeks on a single charge if constantly on. Holding down on a tile will make it pulse. The pixels can only talk to each other through your body: touching two tiles at the same time copies the colour of the first tile onto the second, and colours can be combined in this way to create new ones. They also work with a remote, allowing you to cycle through colours from a distance. Materials: injection-moulded ABS/polycarbonate, glass, light-emitting diodes, custom electronics, microcontroller, embedded software, stainless steel. The team are publishing a paper with more detailed technical information at the end of the month, so do keep an eye on the project's website. Designed by Marcelo Coelho and Jamie Zigelbaum, with the assistance of Joshua Kopin. Made in collaboration with the Fluid Interfaces Group, MIT Media Lab. (Thanks […]
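The copy-through-touch interaction can be modelled as a tiny state machine (a hypothetical sketch based only on the behaviour described above; any detail beyond that is invented):

```python
class Tile:
    """A pixel-tile holding an RGB colour."""
    def __init__(self, colour):
        self.colour = colour

def touch(first, second):
    """Model the body acting as a wire between two tiles: the colour
    of the first-touched tile is copied onto the second."""
    second.colour = first.colour

red, blue = Tile((255, 0, 0)), Tile((0, 0, 255))
touch(red, blue)
print(blue.colour)  # (255, 0, 0)
```

Chaining such touches across tiles is how a colour migrates across the wall over the course of a day.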
Posted on: 06/04/2012
- Engineering Lead at Wieden+Kennedy
- Web Developer at the Minneapolis Institute of Arts
- Junior Production Assistant at Resonate
- WebGL/3D Creative Prototyping Devs at TheSupply
- Freelance Interactive Producers at Psyop
- Art Director/Senior Designer at Stinkdigital
- Creative Technologist, The ZOO at Google
- Jr. / Sr. Software Developer at Minivegas
- Web Developer at Minivegas
- Digital Producer at Minivegas
- 3D Technologist at INDG
- Creative Director at INDG