computer vision (20)
Since the first movie musical back in 1927, film audiences have delighted in seeing bodies in motion on the big screen, movements etched into our minds: Liza Minnelli’s cabaret performance with a chair, Gene Kelly swinging joyously in the rain, the iconic lift in Dirty Dancing. These historic moments are now accessible…
Tags: computer vision / dance / digital art / film installation / interactive art / interactive installation / machine learning / mediapipe / musical / skeleton tracking
22/06/2022 Created by Kachi Chan, ‘Sisyphus’ is an installation featuring two scales of robots engaged in endless cyclic interaction: smaller robots build brick arches, whilst a giant robot pushes them down, propelling a narrative of construction and deconstruction.
Tags: architecture / behaviour / Cinema 4D / computer vision / Fusion 360 / interactive / Kachi Chan / Kuka / robot / servo
20/06/2022 OpenDataCam is an open source tool to quantify the world. It consists of a camera attached to a mini computer running an object detection algorithm that counts and tracks moving objects (a minimal counting sketch follows below).
Tags: Benedikt Groß / computer vision / cv / Florian Porada / gpu / Markus Kreutzer / Neele Rittmeister / open source / quantify / Raphael Reimann / Thibault Durand / yolo
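OpenDataCam itself pairs a YOLO detector with a tracker on the mini computer; purely as an illustration of the counting idea, here is a minimal Python/OpenCV sketch (not the project’s code) in which background subtraction stands in for the detector and blobs crossing a virtual line are counted. The camera index, blob-size threshold and line position are assumptions.

```python
# Minimal counting sketch (not OpenDataCam's code): background subtraction
# stands in for the YOLO detector; blobs crossing a virtual line are counted.
import cv2

cap = cv2.VideoCapture(0)                      # any camera or video file path
subtractor = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=32)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
line_y = 240                                   # assumed counting line (pixels)
count = 0
previous_centroids = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # remove speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    centroids = []
    for c in contours:
        if cv2.contourArea(c) < 500:           # ignore tiny blobs (assumed threshold)
            continue
        x, y, w, h = cv2.boundingRect(c)
        centroids.append((x + w // 2, y + h // 2))

    # naive "tracking": count any centroid that moved downward across the line
    for cx, cy in centroids:
        for px, py in previous_centroids:
            if abs(cx - px) < 40 and py < line_y <= cy:
                count += 1

    previous_centroids = centroids
    cv2.line(frame, (0, line_y), (frame.shape[1], line_y), (0, 255, 0), 2)
    cv2.putText(frame, f"count: {count}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("counter", frame)
    if cv2.waitKey(1) == 27:                   # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```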
30/08/2019 Created by Soonho Kwon, Harsh Kedia and Akshat Prakash, the Anti-Drawing Machine project explores alternatives to how we engage with robots today: instead of being purely utilitarian and precise, the Anti-Drawing Machine is a robot that can be whimsical and imperfectly characteristic.
Tags: Akshat Prakash / arduino / Carnegie Mellon / computer vision / drawing / Harsh Kedia / Soonho Kwon / stepper-motor
08/02/2019 Created by Madeline Gannon for the 2018 Annual Meeting of the New Champions at the World Economic Forum in Tianjin, China, Manus is a set of ten industrial robots that are programmed to behave like a pack of animals.
Tags: abb / animals / behaviour / biomimicry / computer vision / Events / featured / installation / machine / Madeline Gannon / mimicry / ofxCv / ofxEasing / ofxGizmo / ofxOneEuroFilter / openFrameworks / robotics
08/11/2018 The latest in Memo Akten’s series of experiments and explorations into neural networks is a pre-trained deep neural network that makes predictions on live camera input, trying to make sense of what it sees in the context of what it has seen before (a rough stand-in sketch follows below).
Tags: camera / computer vision / experiment / featured / knowledge / learning / machine learning / memo akten / neural networks / openFrameworks / process / reality
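As a generic stand-in for the idea of a pre-trained network making predictions on live camera input (this is not Memo Akten’s openFrameworks setup), a short sketch running a pretrained ImageNet classifier from torchvision on webcam frames might look like the following; the model choice, preprocessing and on-screen output are assumptions.

```python
# Generic stand-in: a pretrained ImageNet classifier "making sense" of webcam frames.
import cv2
import torch
from torchvision import models, transforms

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()              # resize / crop / normalise for the model
labels = weights.meta["categories"]

cap = cv2.VideoCapture(0)
with torch.no_grad():
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        tensor = preprocess(torch.from_numpy(rgb).permute(2, 0, 1))  # HWC -> CHW
        probs = model(tensor.unsqueeze(0)).softmax(dim=1)[0]
        top = probs.argmax().item()
        text = f"{labels[top]} ({probs[top].item():.2f})"
        cv2.putText(frame, text, (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2)
        cv2.imshow("predictions", frame)
        if cv2.waitKey(1) == 27:               # Esc to quit
            break

cap.release()
cv2.destroyAllWindows()
```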
20/03/2018 “Three Pieces with Titles” is the latest audiovisual performance by Montreal’s artificiel. In it, Alexandre Burton and Julien Roy manipulate an eclectic collection of objects within the field of view of a computer vision system to generate real-time video and abstract sonic collage.
Tags: ableton / Alexandre Burton / artificiel / audio / audiovisual / computer vision / Csound / featured / Jimmy Lakatos / Julien Roy / montreal / music / MUTEK / ofxDecklink / ofxOpenCv / ofxPostProcessing / openFrameworks / performance / sampling / sequencer / Sound
24/11/2017 Guillaume Massol’s openFrameworks app titled “All work and no play” watches videos coming from different training datasets and generates sentences loosely based on what is happening on the screen, sometimes creating pearls of wisdom by coincidence.
Tags: computer vision / Guillaume Massol / machine dreaming / machine intelligence / neural network / openFrameworks / Ross Goodwin
31/05/2017 A project by Design I/O for TIFF Kids International Film Festival’s interactive playground digiPlaySpace, Mimic brings a UR5 robotic arm to life and imbues it with personality. Playfully craning its neck to get a better look and arcing back when startled, it responds to each child that enters its field of view.
Tags: computer vision / Dan Moore / debugview / Design I/O / digiPlaySpace / Emily Gobeille / featured / interactive / kids / Madeline Gannon / Nick Hardeman / Nick Pagee / ofxRobotArm / ofxURDriver / openFrameworks / robotics / Ryerson University / Theo Watson / TIFF
07/03/2017 Created by Matthias Grund, Kadir Inan and Wookseob Jeong at the Köln International School of Design, >200 °C is imagined as a closed feedback system that combines computer vision with a poetic perspective on the physical phenomenon known as the Leidenfrost effect.
Tags: computer vision / effect / feedback / Kadir Inan / Leidenfrost effect / loop / Matthias Grund / opencv / student / Wookseob Jeong
28/11/2016 Created as a collaboration between Prokop Bartoníček and Benjamin Maus, Jller is part of their ongoing research in the fields of industrial automation and historical geology. The installation includes an apparatus that sorts pebbles from a specific river by their geologic age.
Tags: algorithm / automation / Benjamin Maus / computer vision / geology / installation / opencv / performance / process / Prokop Bartoníček / sorting / stones
19/05/2016 Created by Brad Todd, Collimation takes the form of a basic artificial intelligence in which visual stimuli are translated, in a performative act of seeing, into data that takes the form of a neuron.
Tags: Brad Todd / computer vision / Dix2X / Elie Zananiri / generative / Ian Ilavsky / living system / loop / microscope / neuron / opencv / openFrameworks / process / system
20/10/2015 Created by Jamie Zigelbaum, Triangular Series is a site-specific lighting installation composed of numerous truncated tetrahedral forms. Each object has a unique form and senses both the others and the physiological rhythms of the visitors beneath it.
Tags: computer vision / Events / installation / Jamie Zigelbaum / light / raspberrypi / responsive / swarm / tracking
09/12/2014 Created by Adam Ben-Dror, The Abovemarine is a vehicle that enables José, or any other fish, to roam the land freely.
Tags: Adam Ben-Dror / animal / arduino / computer vision / device / fish / machine / opencv / process / Processing / vehicle
15/09/2014 Created by Kyle McDonald, “Sharing Faces” uses a megapixel surveillance camera and custom software to match the face positions of people looking at the screen. As a person moves, new images matching the new position are pulled from the database, creating a mirror-like image of yourself out of the images of others (a sketch of the matching step follows below).
Tags: APAP / camera / computer vision / data / debug / debug screen / face tracker / history / Inspiration / Kyle McDonald / location / mirror / network / node / place / process / Reference / ycam
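The core mechanic described is essentially a nearest-neighbour lookup: track the viewer’s face position, then show the stored frame whose recorded face position is closest. A minimal sketch of that matching step, with an assumed (x, y) position database and placeholder filenames, not Kyle McDonald’s actual code:

```python
# Minimal sketch of the matching step: given the current face position,
# return the stored frame whose recorded face position is closest.
# Assumes a database of normalised (x, y) face positions, one image per entry.
import numpy as np

rng = np.random.default_rng(0)
db_positions = rng.uniform(0, 1, size=(5000, 2))                      # placeholder positions
db_frames = [f"frame_{i:05d}.jpg" for i in range(len(db_positions))]  # placeholder filenames

def closest_frame(current_position):
    """Return the stored frame recorded at the face position nearest the viewer's."""
    diffs = db_positions - np.asarray(current_position)
    index = int(np.argmin(np.einsum("ij,ij->i", diffs, diffs)))        # squared distances
    return db_frames[index]

# e.g. the (x, y) would come from a face tracker running on the live camera
print(closest_frame((0.42, 0.61)))
```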
27/08/2014 Cloud Piano plays the keys of a piano based on the movements and shapes of the clouds. A camera pointed at the sky captures video of the clouds, and a MaxMSP patch uses that video in real time to drive a robotic device that presses the corresponding keys on the piano (a rough software analogue follows below).
Tags: computer vision / David Bowen / generative / image / image processing / maxmsp / music / Sound
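The installation routes the cloud video through a MaxMSP patch to a robotic key-presser; as a rough software analogue (not the actual patch or hardware), the sketch below maps each image column to one piano key and sends a MIDI note whenever a column’s average brightness crosses a threshold. The use of mido for MIDI output, the brightness threshold and the key range are assumptions.

```python
# Rough software analogue of the cloud-to-keys mapping (not the MaxMSP patch):
# each image column maps to one piano key; bright (cloudy) columns trigger notes.
import cv2
import mido
import numpy as np

LOW_KEY, HIGH_KEY = 21, 108            # 88-key piano range in MIDI note numbers
THRESHOLD = 140                        # assumed brightness threshold (0-255)

port = mido.open_output()              # default MIDI output port (backend required)
cap = cv2.VideoCapture(0)              # camera pointed at the sky
held = set()                           # notes currently sounding

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    columns = np.array_split(gray, HIGH_KEY - LOW_KEY + 1, axis=1)
    for i, col in enumerate(columns):
        note = LOW_KEY + i
        bright = col.mean() > THRESHOLD
        if bright and note not in held:
            port.send(mido.Message("note_on", note=note, velocity=64))
            held.add(note)
        elif not bright and note in held:
            port.send(mido.Message("note_off", note=note))
            held.remove(note)
    cv2.imshow("sky", gray)
    if cv2.waitKey(30) == 27:          # Esc to quit
        break

for note in held:                      # release anything still held
    port.send(mido.Message("note_off", note=note))
cap.release()
cv2.destroyAllWindows()
```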
31/07/2014 Google’s new consumer hardware initiative is a mobile phone with machine-vision eyes, ultra-fast inner ears and a spatially aware brain. Built around that 5″ Android reference hardware, could this be all your AR Kickstarters come true?
Tags: Advanced Technology and Projects Group / Android / computer vision / cv / google / Johnny Chung Lee / mobile / research / spatial
24/02/2014 Developed by Angelo Semeraro, an Italian interaction designer at Fabrica, ‘Sadly by your side’ is a music album in which each song can be endlessly transformed depending on the images you focus on with your camera.
Tags: Angelo Semeraro / blob detection / computer vision / fabrica / generative / music / opencv / openFrameworks / process
14/10/2013 Julian Oliver, Arturo Castro and James George finally get to work on Google’s most wanted (and feared) device. We are watching and hope a project will soon arrive that sets the tone and gives relevant food for thought to the critical engineers we all should be.
Tags: Arturo Castro / computer vision / device / facial recognition / google glass / hacking / james george / Julian Oliver / News / technology
04/08/2013 Members-only content.
Tags: aesthetics / art / computer vision / cybernetics / ecology / jon goodbun / machines / new aesthetic / philosophy / Theory