Developed as a collaboration between Quayola & Sinigaglia, Dedalo is a collection of custom-developed vvvv engines (and a toolkit) used to generate, exchange and map data between a series of graphics modules and a rendering engine for live performance and audiovisual concerts. The first project to utilise the toolkit was performed with Mira Calix and premiered in Moscow at the opening of Quayola’s solo show, then performed in LA at USC (video below).
The project began as a collaboration in 2010 with the development of software called Partitura. The aim was to create an instrument for performance that would allow the visualisation of sound in realtime. Since its first iteration, the project has gone through a number of different stages and projects (partitura-ligeti, partitura-ligeti-paris, cite-de-la-musique-paris, flexure), building on the original concepts but eventually resulting in a whole new system/framework, built from the ground up and named Dedalo.
Dedalo is built using vvvv together with custom addons and plugins. All of the graphics are generated on the GPU via DirectX11, and the system runs across two separate machines over a network (the manager and the renderer), with additional iPads for interface/macro control. The manager is the brain machine, where the status of every parameter is stored in a big buffer. Using the PC or iPad interface, one can modify the status of most parameters. The manager also interprets data from the Ableton Live plugins and assigns it to automated parameters. This is also where presets are saved and loaded, and where specific tools (like a procedural texture engine or a colour palette engine) are controlled (see image below). The renderer, on the other hand, is the brute-force machine where each parameter of the manager buffer is routed to a specific control value of the graphic modules. It is also where all the DirectX11 cooking happens and where camera controls allow the realtime content to be directed. The renderer also includes an offline component that can record live sessions and render them at high quality.
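The manager/renderer split described above amounts to a central buffer of named parameters that presets snapshot and that the renderer routes to graphic-module controls. A minimal Python sketch of that idea (all names here are hypothetical; the real system is built in vvvv):

```python
import json

class ParameterBuffer:
    """Toy model of the manager's central parameter buffer.
    Parameter names like "particles/x_position" are invented for illustration."""

    def __init__(self):
        self.params = {}  # parameter name -> current value

    def set(self, name, value):
        self.params[name] = value

    def get(self, name, default=0.0):
        return self.params.get(name, default)

    def save_preset(self):
        # a preset is simply a snapshot of the whole buffer
        return json.dumps(self.params)

    def load_preset(self, blob):
        self.params = json.loads(blob)

buf = ParameterBuffer()
buf.set("particles/x_position", 0.5)
preset = buf.save_preset()      # save the current state as a preset
buf.set("particles/x_position", 0.9)  # live tweak from PC/iPad interface
buf.load_preset(preset)         # recall the preset, restoring 0.5
print(buf.get("particles/x_position"))  # 0.5
```

In the real setup this buffer lives on the manager machine and its values are streamed over the network to the renderer, which maps each entry onto a control pin of a graphic module.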
(Procedural map engine, where one creates procedural functions)
The software setup is divided into two separate groups of modules – Graphic and System. The graphic modules control the appearance, separated into visual groups. The system modules, on the other hand, are in charge of things you don’t directly see but that drive most of the output. Graphic modules drive the particles, splines, surfaces and voxels/volumes. The system modules control Audio Analysis (a custom plugin for Ableton Live – see image), Automation (delivers automations for each parameter in the system), Gradient Maps (sources available to map any given parameter or automation), 3D Force Field (controls forces and volumetric information), Presets Manager (saves/recalls presets for any given module) and the Colour Manager (used to create complex colour palettes).
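The gradient-map and colour-manager modules both boil down to mapping a normalised value onto a lookup. As a rough sketch of the latter, a palette can be modelled as a list of colour stops interpolated linearly (this is an illustrative stand-in, not Dedalo's actual implementation):

```python
def palette_lookup(stops, t):
    """Map a normalised value t in [0, 1] to an RGB colour by linearly
    interpolating between evenly spaced palette stops."""
    t = min(max(t, 0.0), 1.0)
    if len(stops) == 1:
        return stops[0]
    seg = t * (len(stops) - 1)          # position along the stop sequence
    i = min(int(seg), len(stops) - 2)   # index of the left-hand stop
    f = seg - i                          # fractional distance to the next stop
    a, b = stops[i], stops[i + 1]
    return tuple(x + (y - x) * f for x, y in zip(a, b))

# hypothetical black -> red -> white palette
stops = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 1.0)]
print(palette_lookup(stops, 0.25))  # (0.5, 0.0, 0.0), halfway from black to red
```

Feeding such a lookup with any parameter or automation value is essentially what a gradient-map source does.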
(Particles | Splines | Surfaces)
The translation of sound into visual parameters is managed by the Ableton Live plugin, which offers different types of sound analysis including FFT, ConstantQ (a version of the FFT with logarithmic frequency spacing that provides the energy in each musical semitone of n octaves), MEL (a logarithmically spaced, psychoacoustic representation of the amplitude of different frequencies in the audio signal), amplitude, centroid (a measure of the brightness of the sound), pitch tracking, spread (the amount by which the spectrum is spread out), snapshot (a “snapshot” of the raw audio samples at a certain interval) and onsets (event detection in the audio signal). Using the Automation tool, any of these analysis data can be assigned to specific parameters of the manager buffer. For instance, one can control the “x position” of a specific graphic object by connecting the parameter to the amplitude signal from the analysis plugin.
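The textbook definitions of a few of these descriptors are simple enough to sketch. Given a magnitude spectrum and its bin frequencies, amplitude, centroid and spread can be computed as below (a simplified illustration of the standard formulas, not the plugin's actual code):

```python
import math

def spectral_features(mags, freqs):
    """Compute (amplitude, centroid, spread) from a magnitude spectrum."""
    amplitude = sum(mags)
    if amplitude == 0:
        return 0.0, 0.0, 0.0
    # centroid: magnitude-weighted mean frequency ("brightness")
    centroid = sum(f * m for f, m in zip(freqs, mags)) / amplitude
    # spread: magnitude-weighted standard deviation around the centroid
    spread = math.sqrt(
        sum(((f - centroid) ** 2) * m for f, m in zip(freqs, mags)) / amplitude
    )
    return amplitude, centroid, spread

freqs = [100.0, 200.0, 300.0]
print(spectral_features([0.0, 1.0, 0.0], freqs))  # (1.0, 200.0, 0.0): one pure bin

# mapping example in the spirit of the text: drive a hypothetical
# "x position" parameter with the amplitude signal
amp, _, _ = spectral_features([0.2, 0.3, 0.5], freqs)
x_position = min(amp, 1.0)
```

In Dedalo the analysis runs inside the Ableton Live plugin and the Automation tool does the routing; the point here is only what each descriptor measures.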
(Analysis receiver, where all analysis data from plugin are collected)
(Automation system on the iPad)
Finally, the render engine includes a virtual lighting rig comprising 4 spot-lights with soft shadows. Also included are a Material Editor using a BRDF (physically-based shading model) and an HDR Environment Rig (to control global illumination and reflection). To further adjust the appearance of materials, the rendering engine also includes tone-mapping, ambient occlusion, depth-of-field and blurs. Below are some examples of the render engine and the new physically-based shading.
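To give a feel for the tone-mapping stage mentioned above: an HDR pipeline produces radiance values far above 1.0, and a tone-mapping operator compresses them into displayable range. The classic Reinhard operator is one common choice (the article does not specify which operator Dedalo's engine uses):

```python
def reinhard_tonemap(hdr, exposure=1.0):
    """Compress HDR radiance values into [0, 1) with the Reinhard
    operator: x / (1 + x), applied after an exposure scale."""
    return [(v * exposure) / (1.0 + v * exposure) for v in hdr]

pixels = [0.0, 1.0, 10.0, 100.0]
# very bright values compress smoothly toward (but never reach) 1.0
print(reinhard_tonemap(pixels))
```

The appeal of this family of operators is that highlights roll off gradually instead of clipping, which suits HDR environment lighting like the rig described here.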
(Material Editor | Rendering Engine 1 | Rendering Engine 2)
The development plans for Dedalo are always steered by the specific artworks/projects the team is working on. Their next project focuses on landscapes, so new modules will be added accordingly. The team are also discussing a possible new fog system, which would also control the diffuse lighting of the scene; at the moment, the focus is on a new audio analysis plugin that will work with any sequencer, not just Ableton Live (in collaboration with Adam Stark), and a new dynamic interface system.
The next performance using the Dedalo toolkit, titled Ravel Landscapes [music by Ravel, performed by pianist Vanessa Wagner], will be held at the Centre des Arts, Enghien-les-Bains, France on 26 Nov 2013.
Big thanks to Quayola & Sinigaglia for sharing Dedalo details with CAN.
Quayola & Sinigaglia are a collaborative duo who continually explore new relationships between sound and image in which both languages share form and meaning. Inspired by the research of Kandinsky and Klee, they create complex digital systems that inform congruous visualisations, translating the audible into visual form.