Led by Aaron Meyers, in collaboration with Re:Group artist Aaron Koblin and in creative partnership with Ghostly International, Visual Music Collaborative is a summer-school masterclass hosted by Eyebeam this July in NYC.
Invited participants will explore the relationship between music, sound, and dynamically generated imagery and motion. Topics will include sound-analysis techniques, advanced OpenGL programming, and interfacing with mobile control devices. Guest speakers and musicians will provide additional insight. The masterclass culminates in an event where participants perform using work created during the week.
Participation in this program is via a competitive application process only. Applicants should be at least at the graduate level of study, or have an emerging creative practice, and should have established experience using openFrameworks, Processing, or an equivalent programming tool. Content created during this workshop will be released under a Creative Commons license and may be promoted by Ghostly International.
Qualified applicants can apply here.
Applications are due May 21, and participants will be notified by May 28.
More info on the event here.
- Visual Music Collaborative [Events] – Results

This is a collection of work produced at the recent Visual Music Collaborative workshop hosted by Eyebeam this July in NYC. The event was led by Aaron Meyers, in collaboration with Re:Group artist Aaron Koblin and in creative partnership with Ghostly International. Invited participants were asked to explore the relationship between music, sound, and dynamically generated imagery and motion. Topics included sound-analysis techniques, advanced OpenGL programming, and interfacing with mobile control devices.

The selection below shows only a small portion of the work done at the workshop. As more appears online we'll add to the list, although I am sure, as I write this, the organisers are working on collecting them all. For a full list of participants see here. For more information on this and future workshops see visualmusic.tumblr.com + Eyebeam + Wiki (photo above: Eyebeam's Flickr stream).

The Illusionist from Lars Berg on Vimeo. Music: Shigeto, "The Illusionist". Made with openFrameworks.

BETA: Visuals (x) CWCIII - Telefon Tel Aviv / Richard Devine from /// *** this.riley *** \\\ on Vimeo. openFrameworks live audio analysis / visual generation; OSC in/out (to be controlled or to control other devices); 2D geometry + GL shaders; semi-automatic control via analysis, or manual control from the performer's input. Special track CWCIII - Telefon Tel Aviv / Richard Devine for Charlie Cooper on Ghostly International.

Look at all the Smiling Faces — Shigeto from jonobr1 on Vimeo. "This video is a recording of my visual performance to Shigeto's new track, Look at all the Smiling Faces, on July 23, 2010. This is the culmination of the 5-day Master Class Workshop at Eyebeam (http://eyebeam.org/events/summer-school-masterclass-visual-music-collaborative) in New York. While listening to the track (299 times this week, in total) I imagined some kind of microscopic underwater scene. I wanted to merge this idea with my desire and passion to perform.
With the guidance of the instructors, Aaron Meyers and Aaron Koblin, I decided to interface the application with my Xbox 360 controller." Built with Processing.

Visual Music Collaborative - Sieve test w/ Dabrye + Dog Eating Ice Cream from Will Calcutt on Vimeo.

School of Seven Bells - Windstorm (Improv) from blair neal on Vimeo. "This is a visual improv I recorded to the SVIIB song 'Windstorm' off of their new album Disconnect from Desire. Unfortunately, my 4-year-old computer is not able to chew through video and do screen grabs at a very high framerate, so it's incredibly choppy; please don't take it as anything polished." Built in Max/MSP and Jitter.

Eyebeam & Ghostly International - Visual Music Workshop from George Michael Brower on Vimeo. Christopher Willits: "Colours Shifting", Ghostly International. Performed live for this recording, using data gathered from the Echo Nest API. Made in Processing.

The Old Man and the Sea from Evan Boehm on Vimeo. "My week's output for the Visualist Master Class at Eyebeam NY, July 19th-23rd. This piece was written in C++ and is fully interactive. The basis of the course was to create a visual accompaniment to a song off the Ghostly International label. Choosing The Sight Below's 'Simmer', I decided to recreate Hemingway's The Old Man and the Sea as a series of 3D dioramas. In the novella, the protagonist has lost his pride and respect within the community because he has not caught a fish for 86 days. On the 87th day, a giant marlin appears, which he battles to catch. After a long struggle during which he refers to the marlin as 'brother', he finally subdues the fish and brings him aboard. On the way back to land, the fisherman has to fight off numerous sharks attracted by the trail of blood in the water. Eventually, the fisherman returns to land with just the massive bones left. The ordeal can be read as the man's fight for his sense of self and purpose, with the marlin as its physical representation.
My interest lay in the use of polygon reduction as a storytelling tool. 3D models are made up of a series of points (vertices) that are 'skinned' to create surfaces. The more points, the smoother and more detailed the model. Could reducing or increasing the number of points on a model be used to drive a story forward or create an emotional response? Throughout the piece the character's emotional state is represented by both the polygon count and the distortion of the water. Hopefully a first experiment of […]"
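The polygon-reduction idea above is simple to sketch. The snippet below is a minimal illustration in Python (the piece itself was written in C++, and the `emotion` parameter and vertex list here are made up for demonstration): an "emotional state" in 0.0–1.0 decides what fraction of a mesh's vertices survive, so a distressed character renders as a jagged, low-poly outline.

```python
# Sketch: drive mesh detail from a hypothetical "emotion" parameter.
# A mesh is represented as a plain list of vertices for illustration.

def decimate(vertices, keep_ratio):
    """Keep roughly keep_ratio of the vertices, evenly spaced."""
    if keep_ratio >= 1.0:
        return list(vertices)
    step = max(1, round(1.0 / max(keep_ratio, 1e-6)))
    return vertices[::step]

def emotional_mesh(vertices, emotion):
    """Calm (emotion = 1.0) keeps the full, smooth mesh;
    distress (emotion near 0.0) strips it down to a coarse outline."""
    return decimate(vertices, keep_ratio=emotion)

outline = [(i, i * i) for i in range(100)]   # stand-in vertex list
print(len(emotional_mesh(outline, 1.0)))     # full detail: 100 vertices
print(len(emotional_mesh(outline, 0.1)))     # heavily reduced: 10 vertices
```

A real implementation would decimate faces rather than raw vertices (as Meshlab's quadric simplification does), but the storytelling mapping, emotion in, polygon budget out, is the same.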
- Visual Music Collaborative 2010 [Events]

Video of the recent Visual Music Collaborative workshop has just been added by Aaron Meyers to our group on Vimeo. We wrote about it a few days ago with examples of the work produced, but the video below discusses the ideas behind some of the projects and includes guest speakers such as Zach Lieberman (oF) and Peter Kirn (CDM). The event was hosted by Eyebeam this July in NYC and led by Aaron Meyers, in collaboration with Re:Group artist Aaron Koblin and in creative partnership with Ghostly International. For more information on this and future workshops see visualmusic.tumblr.com + Eyebeam […]
- flight404 at Decode / V&A [Events, News]

Robert Hodgin, aka flight404, has just posted this video of an application he is working on for the Decode event at London's V&A, opening next month. Robert was asked to rework his older Solar piece so that it could be audio-responsive in real time. Whilst the details of the actual exhibit are as yet unknown, it is nevertheless exciting to see Robert's work at the V&A. The video at the bottom is the older piece, but do make sure you watch it at HD / full screen. He will be joined by names such as Golan Levin, Daniel Brown, Daniel Rozin, Troika and Simon Heijdens. More about the event here. 8 December 2009 - 11 April 2010 // Curated in collaboration with onedotzero (via Homage to Radiolab « all manner of […]
- Bla Bla Bla [iPhone, oF, Processing, Sound]

Bla Bla Bla is a sound-reactive application for iPhone and iPad created by students at the design department of IUAV Venice and later at ISIA Urbino. The app is a small selection from an exercise called "Parametric Mask", whose aim is to introduce students to the "Procedures of Basic Design" and the basics of programming. The exercises introduce a programming language as a tool for solving problems: it is not a programming workshop, but a way of using numbers, maths and logic to show objectively how a design problem is solved. The students are in their first year of design school and have never programmed before. The language used in the workshop is Processing; the iOS port was written in openFrameworks. You can also download the Mac desktop app, including the Processing source code, here. Platform: iPhone/iPad. Version: 1.0. Cost: Free. Developer: Lorenzo […]
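The "Parametric Mask" exercise above boils down to one idea: a single input number (e.g. microphone amplitude) parametrically drives the geometry of a drawn face. A minimal sketch of that mapping, in Python rather than the students' Processing code, with entirely hypothetical parameter names:

```python
# Sketch of a sound-reactive "parametric mask": one normalised amplitude
# (0.0..1.0) drives several drawing parameters. The parameter names and
# ranges are invented for illustration, not the IUAV/ISIA exercise code.
import math

def mask_parameters(amplitude):
    """Map a normalised microphone amplitude to mask geometry."""
    amplitude = min(max(amplitude, 0.0), 1.0)   # clamp noisy input
    return {
        "mouth_open": amplitude,                # 0 = closed, 1 = wide open
        "eye_radius": 5.0 + 10.0 * amplitude,   # eyes widen with loudness
        "head_tilt": math.sin(amplitude * math.pi) * 0.2,  # radians
    }

print(mask_parameters(0.0)["mouth_open"])   # quiet: mouth closed (0.0)
print(mask_parameters(1.0)["eye_radius"])   # loud: eyes at maximum (15.0)
```

In the Processing original the equivalent values would feed `ellipse()` and `rotate()` calls inside `draw()`; the pedagogical point is that the whole face is a pure function of one number.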
- OFFF + CAN Workshop Collaborative 2011 [Cinder, oF, Js, Events]

Earlier this year we began thinking about the concept of "curated workshops": an opportunity to bring people together to work for a very short period of time and share their creations. This would involve setting up a team, inviting a few high-profile individuals and opening up submissions for participation. When I was approached by Héctor Ayuso earlier this year to give a talk at OFFF, instead of talking about CAN I thought this would be a great opportunity to do something more: a workshop, using the workshop material as the content to drive the talk. Héctor and I agreed, and 'Workshop Collaborative' was born.

What was the aim of "Workshop Collaborative"?
1. Initiate collaborations between those who share common interests.
2. Create a playing field, both physical and virtual.
3. Allow ideas to evolve by asking questions.

When we announced the workshop back in January, we also opened applications for participation. In total, 80 applications were submitted and 11 participants were chosen by the team, which included Aaron Koblin, Ricardo Cabello (mr.doob), myself and Eduard Prats Molner. The participants were: Marek Bereza, Alba G. Corral, Andreas Nicolas Fischer, Martin Fuchs, Roger Pujol Gomez, Marcin Ignac, Rainer Kohlberger, Thomas Mann, Joshua Noble, Roger Pala and Philip Whitfield.

Programme - Single Day
09:00 - 10:00 Introductions / Teams
10:00 - 13:30 Stage 1
13:30 - 14:00 Lunch
14:00 - 19:00 Stage 2 (Completion)
Total creation time: 6.5 hours

A few weeks before the workshop, Aaron and I decided on four themes we would allow to influence the work we would be making. By letting the other participants comment and give feedback on these themes, we would discover the areas we all wanted to explore. The themes were:

1. Digital Ecosystem - Build an application, an organism of information, sound and visuals, a digital ecosystem that flows through different mediums and evolves.
A living system, travelling through technology and mutating through tools.
2. Analogue Digital - Explore the notions of physicality in code, using made objects as assets for code: cut paper and cut-outs, traditional 2D scans, 3D objects scanned using flatbed scanners, etc.
3. Projection Mapping - Address projection mapping conceptually. Moving away from technical demos, it is time to question what it all means: surface, source, angle, point projection, scale, form, interaction, animation.
4. Data Re-embodied - Tell stories through the juxtaposition of data sources and their methods of representation. How can we create new meaning, understanding and value from the reinterpretation of data?

By no means did this mean we would have to choose one over the others. The purpose was to get a feel for where the participants' interests lay and to set up, so to speak, a 'playing field' in which first ideas could develop. We knew that, working together for a single day, we would not be able to produce anything of "finished" quality; we would rather focus on the subjects themselves and see what came out. Following the feedback, a number of keywords were derived to summarise our interests: ecosystem, data, scan, evolution, input, mutation, osc, node, rhythm, pattern, touch, physical, language, viewport and mobility.

Five projects developed during the 6.5 hours of work: a Kinect > WebGL bridge, Kinect Image Evolved, Input Device, Data Flow and Receipt Racer.

-- Kinect > WebGL

This project was the work of mr.doob, Marcin and Edu, although other people were involved too. The task was to create a bridge between the Kinect and the browser, allowing a real-time feed over the web.
Although aspirations were much higher than the time allowed, instead of utilising a node.js server (which I understand was 99% complete anyhow), the team settled for feeding downscaled image data from a Cinder application, using standard HTTP requests, to a three.js script which read the images at about 10 fps. Several rendering styles are presented below. The first is a simple point cloud made by Marcin for debugging, while the rest were done by mr.doob using his amazing three.js engine. Download the .js code here.

-- Kinect Image Evolved

While Ricardo was working on the .js part, Marcin was exploring different ways of representing the Kinect image. In an attempt to get away from the standard Kinect point cloud, we developed the idea of trying a slit-scan effect with it: the point cloud is dispersed along a time lapse, with different bands representing different moments in time. Marcin was also exploring what happens if a point's location is reversed when a particular depth is reached. The videos below show both effects. Code available soon.

Thomas and Andreas were also testing different tools for manipulating the Kinect image. Meshlab and Blender were used to pull in Kinect point clouds and convert them into meshes, which could then be rendered, distorted, split, etc.

-- Input Device

Marcin was also working on ways to control the input, i.e. how one could interact with the Kinect point cloud. We were toying with the idea of assigning different devices, over OSC, to different Kinect body parts. This would allow each individual to be assigned a unique element of the point cloud and to interact with it. The first step was to use simple gyroscope data sent from an iPhone over OSC; the video below shows what is happening. Likewise, Rainer and Roger were working on the iPhone application that would send the OSC data.
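The gyroscope-over-OSC setup described above amounts to packing a few floats into an OSC message and firing it over UDP. A minimal sketch of the wire format (Python stdlib only; the `/gyro` address and port 9000 are assumptions for illustration, not the workshop's actual values):

```python
# Sketch: build and send a minimal OSC message carrying three float32s,
# e.g. gyroscope pitch/roll/yaw from a phone to a visual application.
import socket
import struct

def osc_pad(data: bytes) -> bytes:
    """OSC strings are null-terminated and padded to a multiple of 4 bytes."""
    return data + b"\x00" * (4 - len(data) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Assemble: padded address, padded type-tag string, big-endian floats."""
    msg = osc_pad(address.encode("ascii"))
    msg += osc_pad(("," + "f" * len(floats)).encode("ascii"))
    for value in floats:
        msg += struct.pack(">f", value)
    return msg

# Hypothetical use: send one gyro reading to a listener on localhost:9000.
packet = osc_message("/gyro", 0.1, -0.5, 1.2)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("127.0.0.1", 9000))
```

On the receiving side an oF or Cinder app (ofxOsc, for instance) would route each address pattern to a different body part of the point cloud, which is what made the per-device assignment idea cheap to try.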
Rather than just utilising the gyro or accelerometer, Rainer was exploring different forms of interaction with the device, seeing whether a language could be evolved, one that would somehow enhance emotional attachment to the Kinect body parts. The videos below show an instrument-like application that also has audio feedback. Code available soon.

-- Data Flow

With all the data moving about, Marek wondered what would happen if the input and output were in the same medium, so you could compare them, apples for apples. Marek examined the loop by subtracting the initial input from the output, leaving just the parts that change. JPEG compression was chosen for the loop because it was easily available in oF and ubiquitous enough to warrant investigation. The boxy images are the result of feeding the "high"-quality JPEG compression back into itself and subtracting it from the original; the finer images use the "best" compression setting. Marek then tried the same thing with sound (using Logic), taking first the original sound, then the encoded version, and seeing what was left. You can hear all the sounds below.

Original / OFFFCAN Workshop Collaborative by filipvisnjic
Encoded / OFFFCAN Workshop Collaborative by filipvisnjic
Difference / OFFFCAN Workshop Collaborative by filipvisnjic

Code available soon.

-- Receipt Racer

Receipt Racer combines different input and output devices into a complete game. It was made by Martin, Philip and Joshua using a receipt printer (a common device you can see at every convenience store), a small projector, a Sony PS controller and a Mac running a custom openFrameworks application. Print is a static medium; that's why, Philip, Martin and Josh explain, it was an intriguing challenge to create an interactive game with it. First the team tried to do it with only the printer as the visual representation, but that seemed rather impossible.
Then Joshua Noble came up with a small projector, perfect for projecting a car onto a preprinted road. There is no game without an input device, so they were lucky that at least one of them always carries a gamepad around. The cables connect back to the laptop running an openFrameworks application the team wrote, programmed entirely during the workshop. Internally it runs something like a basic JS game: just a car driving on a randomly generated race track. It then broadcasts its components to the external devices, prints the street, and guesses where the car's projection is supposed to be in order to perform the hit test. That's the trickiest part: everything has to be in sync and needs some calibration at the beginning. The paper also has a bit of a mind of its own and tends to slide around or curl, but that's nothing some duct tape and cardboard can't fix. It was a lucky day: somehow everything was just lying around, waiting to be used, even the stand and the plastic thing you would normally use for your name badge at a conference. Even the timing was perfect: right at the end of the workshop they finished adding details like a little score and the YOU CRASHED text. Project Page (code available).

--

On Saturday we presented the creations. Even though Erik Spiekermann was presenting in the other OFFF room, we had a full theatre (an estimated 500 people), plus another room where our talk could be watched on a large screen. Photo above by Arseny Vesnin. CAN would like to thank all the participants at the workshop, as well as Aaron and Ricardo for taking time out of their busy schedules to take part. For more information on the workshop and all future information/code/links see creativeapplications.net/offf2011. Photos by Jason Vancleave. We leave you with the OFFF Barcelona 2011 Main Titles, made for OFFF by PostPanic (full screen […]
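The core loop the Receipt Racer description sketches (random track generation, line-by-line printing, a hit test between the projected car and the road edges) can be illustrated in a few lines. This is a toy reconstruction of the idea, not the team's openFrameworks code; the paper width and road width are invented values:

```python
# Toy sketch of the Receipt Racer loop: a road's left edge random-walks
# down the receipt, rows are "printed" one at a time, and a hit test
# checks whether the car's column is still on the road.
import random

ROAD_WIDTH = 12    # road width in printer columns (assumed value)
PAPER_WIDTH = 32   # printable columns on the receipt (assumed value)

def next_road_left(left):
    """Random-walk the road's left edge one step, staying on the paper."""
    left += random.choice((-1, 0, 1))
    return max(0, min(left, PAPER_WIDTH - ROAD_WIDTH))

def road_row(left):
    """One printed line of track: walls outside, open road inside."""
    return "#" * left + " " * ROAD_WIDTH + "#" * (PAPER_WIDTH - ROAD_WIDTH - left)

def crashed(car_x, left):
    """Hit test: is the car's column outside the road band?"""
    return not (left <= car_x < left + ROAD_WIDTH)

# "Print" a short stretch of track, then test the car against the last row.
left, car_x = 10, 14
for _ in range(5):
    left = next_road_left(left)
    print(road_row(left))
print("YOU CRASHED" if crashed(car_x, left) else "still driving")
```

The hard part the write-up mentions, guessing where on the already-printed paper the projection lands, sits outside this sketch: it is a calibration between printer feed rate and projector coordinates, which is exactly why the duct tape and cardboard were needed.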
- Processing Paris [Events]

After a highly successful first edition in 2010, Processing Paris is proud to announce the second instalment of its series of workshops for creatives working with Processing. Three workshops have been organised:
- A beginners' workshop with Christian Delecluse.
- An intermediate workshop with Julien Gachadoat.
- A masterclass with Hartmut Bohnacker.

Date: 22/23 April 2011
Price: 60, 75 & 90 Euros respectively.
Where: La Fonderie de l'Image, 83 Avenue Gallieni, 93170 Bagnolet, Paris, www.lafonderiedelimage.org
How: For further information and to register, please send the team an email stating your full name, address and which workshop you would like to attend to: firstname.lastname@example.org. You may also consult the web site for details on this and other events: www.processingparis.org

Please note that, due to limited places, it is imperative to register beforehand to ensure a place. The deadline for registering is April 15. The workshops run from 10:00 to 18:00. There is no need for a computer, although you are welcome to bring your own. For the advanced class you will require a very good level of English; both the beginners' and intermediate classes will be taught in French. Full details of the workshops' content will be communicated very soon.

Processing Paris is an initiative organised by FAB, in collaboration with La Fonderie de l'Image. FAB (The Free Art Bureau) is a non-profit organisation set up for artists & designers using code as their main creative tool and medium. They organise workshops, conferences, forums and exhibitions, as well as supporting & documenting work specifically created with code. They believe in sharing, educating, innovating and promoting. They believe that being free is an essential way of life and […]
- The Root Of The Root [Events]

If you live in NYC, here is something coming this Friday that is definitely worth checking out. Devotion Gallery presents The Root Of The Root: generative art by Marius Watz, Paul Prudence and Aaron Meyers. Opening 6pm on Friday, October 22nd; on view until Sunday, November 21st, 2010.

This exhibition showcases three artists working with generative code to create abstract and reactive works. Marius Watz and Paul Prudence have been contributing to the dialogue around computational art since 2005 with seminal essays, some of the foremost blogs and ground-breaking software. As a seminal member of the Processing community, Marius Watz has in many ways defined part of the aesthetic associated with code-based art. Paul Prudence works with VVVV and visual feedback systems to create audio-visual performances. Aaron Meyers is an artist and programmer using generative strategies in the creation of software and moving image. He is currently a fellow at the Eyebeam Art & Technology Center, where he recently led the Visual Music Collaborative workshop in collaboration with Ghostly International.

Marius Watz will also be teaching a special workshop on Processing at Devotion, Mon Oct 18th to Tues Oct 26th. More details.

Devotion Gallery, 54 Maujer St, Brooklyn, NY 11206. 718-576-1107

We have written about Paul, Aaron and Marius a number of times in the past. It should be an interesting mix of work, hosted by Phoenix Perry at her Devotion Gallery. Previously: Node10 [Events, vvvv], Hydro Acoustic Study [vvvv], Visual Music Collaborative, Cosmogramma Fieldlines [openFrameworks] and […]
- Geeky By Nature – Ticket Giveaway [Events]

We've teamed up with Geeky By Nature to give CAN readers a chance to win free tickets to the event, held in NYC between March 31st and April 1st: two full days of design, art and code for digital designers and developers. Read on to find out how to win.

What is Geeky by Nature all about? It's about bringing together the world's best artists, designers, coders and creative minds to explore the possibilities that other disciplines can bring. Whether it's computer-based, film, print or inspirational, one thing is for sure: it will be unmissable! GBN will be an amazing two days exploring the best in art, code & design. Add a list of speakers who are second to none, a whole lot of creative inspiration and the backdrop of New York City, and you have the ingredients for a really exciting two days.

The line-up includes Jared Tarbell, Branden Hall, Keith Peters, Hillman Curtis, Gmunk, Grant Skinner, Robert Hodgin, Joshua Davis, Ken Perlin, Paula Scher, Lisa Larson-Kelley, Rob Chiu, Andre Michelle, Jer Thorp, Joa Ebert, Rich Shupe, Chuck Freedman, Joshua Hirsch and Joel Gethin Lewis.

We have a total of 4 tickets (4 x $299) to give away, and we'll find a lucky winner every week leading up to the event. All you have to do is tweet this post and you'll be entered into the random draw. One lucky winner will be chosen each week, starting Wednesday 9th, then 16th, 23rd and 30th March. Make sure you are following @creativeapps, as we will send you a DM if you are a winner.

WINNER #1 - @lionelbui
WINNER #2 - @nselikoff
WINNER #3 - @mariame
WINNER #4 - @AlliArchitect

If you can't wait and want to save money, see the CAN sidebar for a special code to get $100 off the price of the ticket. See you in NYC!

Rules and information
1. The event is being held in New York City, at the SVA Theatre, from March 31st to April 1st 2011. You will need to arrange your own travel and accommodation (not included).
If you win a ticket and can't make it to the event, we would appreciate it if you let us know, so we can give the ticket to someone else. Tickets, once issued, are not transferable!
2. The competition is open to everyone, but you must be over 18 years of age. There will be a total of FOUR winners for this competition.
3. Winners will be selected at random.
4. Winners will be contacted first via Twitter to provide an email address. If they wish to pass a ticket on to another person, we will need that person's email, but this MUST be done before the ticket is issued. If a winner does not respond within 3 days of each giveaway, we will pick another winner. You must follow @creativeapps to receive a DM.
5. Only one entry per […]
Posted on: 14/05/2010