
Objectifier – Device to train domestic objects

Created by Bjørn Karmann at CIID, Objectifier empowers people to train objects in their daily environment to respond to their unique behaviours. It gives an experience of training an artificial intelligence; a shift from a passive consumer to an active, playful director of domestic technology. Interacting with Objectifier is much like training a dog – you teach it only what you want it to care about. Just like a dog, it sees and understands its environment.

Using computer vision and a neural network, Objectifier allows complex behaviours to be associated with your commands. For example, you might want to turn on your radio with your favourite dance move: connect the radio to the Objectifier and use the training app to show it when the radio should turn on.
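At its simplest, this kind of "training by demonstration" amounts to storing a few labelled examples and matching new input against them. The sketch below is purely illustrative (the feature vectors, labels and function names are hypothetical, not Objectifier's actual code); it uses a 1-nearest-neighbour rule to map a gesture feature vector to an on/off command:

```python
import math

# Hypothetical sketch: map gesture features to a device command
# with 1-nearest-neighbour matching. Not the project's real code.

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def train(examples):
    """'Training' for nearest-neighbour is just storing labelled examples."""
    return list(examples)

def classify(model, features):
    """Return the label of the closest stored example."""
    return min(model, key=lambda ex: distance(ex[0], features))[1]

# Pretend these features were captured while demonstrating two gestures.
model = train([
    ([0.9, 0.8, 0.1], "radio_on"),   # e.g. arms raised: turn radio on
    ([0.1, 0.2, 0.1], "radio_off"),  # e.g. arms down: turn radio off
])

print(classify(model, [0.85, 0.75, 0.2]))  # → radio_on
```

A real system would use far richer features from the camera and a trained neural network rather than raw nearest-neighbour matching, but the interaction loop — demonstrate, label, recognise — is the same.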


Bjørn began his research by asking whether machine learning could be a way of programming. Could learning the language of the machine become unnecessary if the machine could learn to understand ours? How would we interface "teaching" in physical space, and how would the machine manifest itself?

“As a designer, I was interested in the future of the language and relationship between machine and human, but as a maker, I wanted to bring the power of machine learning into the hands of everyday people.”

The concept is called "Spatial Programming" – a way to program, or rather train, a computer by showing it how it's done. When the space itself becomes the program, the objects, walls, lights, people and actions all become functions that are part of the program. When you are present in the space, these functions can be moved and manipulated in a physical, human way. The spatial manifestation of the programming language opens up new and creative interaction without the need for a screen or a single line of code.

↑ Six Prototypes

The first prototype, Pupil, served as a physical interface to the machine learning program Wekinator – a remote control for exploring different ideas. Pressing red or white records data, while blue toggles the neural network to process the data and run the feedback.

The second prototype, Trainee v1, allows makers to train any input sensor and connect it to an output without writing any code. Trainee can combine and cross multiple output pins to create more complex training results.

The third prototype, Trainee v2, refined Trainee v1 into an open-source PCB circuit for building your own trainee board. It is based on a small Teensy microcontroller and comes with a digital interface called Coach.

Intern, the fourth prototype, is an extension for Trainee v1 that controls devices as the output pin. The Intern has a power outlet with a relay, so Trainee v1 could train objects running on 230V. Its purpose was to invite non-makers and average consumers to manipulate objects they can relate to, and to inspire custom problem-solving in their own contexts.

The fifth prototype, Apprentice, was designed to combine all the learnings from the previous prototypes in one device. Apprentice uses computer vision as its sensor input and can be controlled wirelessly from a mobile app, where feedback is given. With a Raspberry Pi 3 as its brain, it runs a custom server to connect the app and the neural network. Any domestic device can be plugged into the Apprentice to learn from your commands.
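Wekinator, which the Pupil prototype interfaced with, communicates with external programs over OSC: by default it listens for input feature vectors as OSC messages at address `/wek/inputs` on UDP port 6448. As a sketch (the helper names are mine, and only the all-floats case of OSC 1.0 is handled), encoding such a message looks like:

```python
import struct

# Sketch of OSC 1.0 message encoding for Wekinator's input format.
# Wekinator's defaults: address "/wek/inputs", UDP port 6448.

def osc_pad(data: bytes) -> bytes:
    """Pad a byte string with NULs to a multiple of 4, per OSC 1.0."""
    return data + b"\x00" * ((-len(data)) % 4)

def osc_message(address: str, floats) -> bytes:
    """Encode an OSC message whose arguments are all 32-bit floats."""
    addr = osc_pad(address.encode() + b"\x00")          # NUL-terminated, padded
    tags = osc_pad(b"," + b"f" * len(floats) + b"\x00") # type-tag string
    args = b"".join(struct.pack(">f", f) for f in floats)  # big-endian floats
    return addr + tags + args

packet = osc_message("/wek/inputs", [0.5, 0.25])
# Send with: socket.socket(AF_INET, SOCK_DGRAM).sendto(packet, ("127.0.0.1", 6448))
```

In practice an openFrameworks or Processing sketch would use an existing OSC library (ofxOsc, oscP5) rather than hand-encoding packets, but the wire format above is what travels between the sensor side and Wekinator.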

Finally came the Objectifier, the sixth prototype: a smaller, friendlier and smarter version of the Apprentice. It gives an experience of training an intelligence to control other domestic objects, and the system can adjust to any behaviour or gesture. Through the training app, the Objectifier can learn when it should turn another object on or off. By combining powerful computer vision with the right machine learning algorithm, the program can learn to understand what it sees and which behaviour triggers what.

Hardware used includes a Raspberry Pi 3, a Pi Zero and a Pi Camera; software includes Wekinator, Processing and openFrameworks – ml4a, by Francis Tseng and Gene Kogan.

To learn more about the project, visit Bjørn’s website below.

Project Page | Bjørn Karmann


  1. ckombo says

    Making emotive comparisons to beloved animals and using anthropomorphized language to describe objects ignores the fact that objects aren’t creatures, and they have no emotions or desires of their own. Other than that it sounds neat.

    • I think the intention here is to describe a process of training, i.e. repetitive action to teach an AI to recognise and respond appropriately. Whereas objects by themselves have no emotions or desires, this may not be the case once AI or machine learning has been attributed to them (as in this example).

      • ckombo says

        I don’t want to overstate the whole “objects don’t have emotions” thing here, especially since the project seems good; the use of gestures is an interesting departure from the norm of a screen or keyboard. I could imagine an adaptive house rigged with motion trackers learning when to make my coffee and save power in unoccupied rooms, etc.

        On the larger topic: people transferring emotions formerly reserved for other people and shared experiences is fundamental to consumerism. As people have grown more emotionally invested in their objects, we design the objects and systems to stimulate human patterns so as to justify and increase that investment. I fundamentally don’t think objects can have emotions or desires regardless of how well they simulate these traits.

        • Guy Stimpson says

          So you don’t believe that AI will ever become truly sentient?

          Emotion is merely a response to events or environmental conditions in order to elicit a particular result or behaviour in oneself and/or others. Is this any different to a computer which detects a condition and then alters its own state in order to protect itself or inform a human about the condition?

          Just playing Devil’s advocate.
