Announcing: CAN + Intel Perceptual Computing Challenge + Giveaway!

 

In collaboration with Intel, CAN is pleased to announce the launch of the Intel Perceptual Computing Challenge! This is your opportunity to change the way people interface with computers, with a chance to win $100,000 and get your app onto new Perceptual Computing devices.

Perceptual computing is changing human–computer interaction. It’s been a while since there has been any disruption in the way we interface with traditional computers. The Intel Perceptual Computing Challenge wants your apps to change that.

We’re looking for the best of the best. Check your semicolons and really think about how you can change the world of computing. You can register and submit your ideas until 1 July 2013 (extended from 17 June), when the first phase of the competition ends.

Important Dates

  • Idea Submission / Until 1 July 2013 (Extended from 17 June 2013)
  • Judging & Loaner Camera Fulfilment / 2 July 2013 – 9 July 2013 (Updated)
  • Early Profile Form Submission / 10 July 2013 – 31 July 2013 (Updated)
  • Early Demo App Submission / 10 July 2013 – 20 August 2013 (Updated)
  • Final Demo App Submission / 10 July 2013 – 26 August 2013 (Updated)

First Stage – Deadline 1 July 2013 (Extended from 17 June 2013)

This software development competition seeks innovative ideas for a software application (“App”) built with Intel’s new Software Development Kit (“SDK”), which works with a web camera able to perceive gestures, voice and on-screen touch. The idea must fit one (1) of the following categories:

  1. Gaming
  2. Productivity
  3. Creative User Interface
  4. Open Innovation

No code submission is necessary at this stage, but an idea submission is mandatory to proceed to the next stage and receive one of the 750 Creative* Interactive Gesture Camera Developer Kits.

The SDK (Second Stage)

Intel’s Perceptual Computing SDK is a cocktail of algorithms comparable to the Microsoft Kinect SDK, offering high-level features such as finger tracking, facial feature tracking and voice recognition. These sit alongside a fairly comprehensive set of low-level ‘data getter’ functions for its time-of-flight camera hardware.
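
To make that split concrete, here is a deliberately hypothetical C++ sketch of the kind of frame loop such an SDK implies. None of the type or function names below (PerceptualPipeline, queryGestures, queryDepth) come from the actual Intel SDK; they are placeholders standing in for its pipeline object, its high-level results and its low-level data getters.

```cpp
// Hypothetical sketch only: these type and function names are placeholders to
// show the shape of the high-level / low-level split, not the real Intel
// Perceptual Computing SDK API.
#include <cstdint>
#include <vector>

struct GestureEvent { int handId = 0; int kind = 0; };     // e.g. swipe, wave, pinch
struct DepthFrame   { int width = 0, height = 0; std::vector<std::uint16_t> mm; };

class PerceptualPipeline {                                 // stand-in for the SDK's pipeline object
public:
    bool acquireFrame()                       { return framesLeft-- > 0; }  // dummy: pretend 3 frames arrive
    std::vector<GestureEvent> queryGestures() { return {}; }                // high level: recognised gestures
    DepthFrame queryDepth()                   { return {}; }                // low level: raw time-of-flight data
    void releaseFrame()                       {}
private:
    int framesLeft = 3;
};

int main() {
    PerceptualPipeline pipe;
    while (pipe.acquireFrame()) {
        for (const GestureEvent& g : pipe.queryGestures()) {
            (void)g;                          // react to finger / hand / face events here
        }
        DepthFrame depth = pipe.queryDepth(); // or drop down to the raw depth stream
        (void)depth;
        pipe.releaseFrame();
    }
    return 0;
}
```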

The data from the time-of-flight sensor comes in pretty noisy, in a way that differs from the Kinect (imagine some white noise applied to your depth stream). Intel supplies a very competent routine for cleaning it up, allowing for real-time, noise-free depth data with smooth, continuous surfaces. Unlike the Kinect, much finer details can be detected, as each pixel acts independently of the others.
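
For a flavour of what such a clean-up involves (Intel’s actual routine ships as a black box inside the SDK, so the code below is only a minimal stand-in), here is a self-contained per-pixel temporal smoothing filter that rejects invalid samples; even something this simple goes a fair way towards turning a jittery time-of-flight stream into smooth, continuous surfaces.

```cpp
// Illustrative only: a simple per-pixel exponential smoothing filter for a
// noisy depth stream. Intel's own clean-up routine is provided by the SDK as
// a black box and is more sophisticated; this just shows the general idea of
// temporal denoising.
#include <cstdint>
#include <vector>

class DepthSmoother {
public:
    DepthSmoother(int width, int height, float alpha = 0.25f)
        : w(width), h(height), alpha(alpha), state(width * height, 0.0f) {}

    // 'raw' holds one depth frame in millimetres; 0 marks an invalid sample.
    std::vector<std::uint16_t> apply(const std::vector<std::uint16_t>& raw) {
        std::vector<std::uint16_t> out(w * h, 0);
        for (int i = 0; i < w * h; ++i) {
            if (raw[i] == 0) {                 // invalid pixel: keep previous estimate
                out[i] = static_cast<std::uint16_t>(state[i]);
                continue;
            }
            if (state[i] == 0.0f)              // first valid sample initialises the filter
                state[i] = raw[i];
            else                               // exponential moving average per pixel
                state[i] = alpha * raw[i] + (1.0f - alpha) * state[i];
            out[i] = static_cast<std::uint16_t>(state[i]);
        }
        return out;
    }

private:
    int w, h;
    float alpha;                               // higher = more responsive, lower = smoother
    std::vector<float> state;                  // per-pixel running estimate
};
```

In practice you would run something like this (or a spatial filter layered on top) on every incoming frame before handing the depth image to your tracking or rendering code.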

In an interesting development, Intel doesn’t lead with support for ‘C++, .NET, etc.’; instead it reels off a set of creative coding platforms as its primary targets: Processing, openFrameworks and Unity being the lucky selection. Under the hood, this translates directly to Java, C++ and C#/.NET support. Despite the focus on creative coding platforms, the framework currently only supports Windows.

The ‘openFrameworks support’ is a single example that directly calls the computer-scientist-riffing low-level Intel API, which, in our opinion, demonstrates pretty shallow support for openFrameworks, but it also means you’ll be equally well placed if you’re coming from Cinder or plain C++. Some friendlier, more complete openFrameworks addons are already popping up on GitHub which wrap up some features nicely, but they are still quite far from covering Intel’s complete offering of functionality.
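
For a sense of what a friendlier wrapper might look like, here is a minimal, hypothetical openFrameworks addon interface. The class name ofxPerceptualDepth and its methods are our own invention, and the Intel-facing calls are left as comments, since the existing GitHub addons each bind the low-level API slightly differently.

```cpp
// ofxPerceptualDepth.h -- hypothetical sketch of a minimal addon interface,
// not one of the existing GitHub addons. The camera-facing calls are stubbed
// out as comments because each wrapper binds the Intel API differently.
#pragma once
#include "ofMain.h"

class ofxPerceptualDepth {
public:
    void setup(int width = 320, int height = 240) {
        w = width; h = height;
        depthPixels.allocate(w, h, OF_PIXELS_GRAY);   // 16-bit, single channel
        depthTexture.allocate(w, h, GL_LUMINANCE16);
        // ...open the camera and enable the depth stream via the Intel SDK here...
    }

    void update() {
        bool gotNewFrame = false;
        // ...poll the Intel pipeline; if a depth frame is ready, copy it into
        //    depthPixels and set gotNewFrame = true...
        frameIsNew = gotNewFrame;
        if (frameIsNew) depthTexture.loadData(depthPixels);
    }

    bool isFrameNew() const { return frameIsNew; }
    const ofTexture& getDepthTexture() const { return depthTexture; }
    const ofShortPixels& getDepthPixels() const { return depthPixels; }

private:
    int w = 0, h = 0;
    bool frameIsNew = false;
    ofShortPixels depthPixels;   // raw depth values in millimetres
    ofTexture depthTexture;      // uploaded for drawing in ofApp::draw()
};
```

In an ofApp you would then call setup() once, update() every frame, and draw getDepthTexture() wherever you need it.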

The SDK ships with a comprehensive set of examples written in C++, demonstrating features such as head-tracked VR, gesture-based remote control and an augmented reality farm.

Events

Intel is organising a number of events around the world to support the competition. From workshops to hackathons, join others to develop and create!

Prizes

This is your chance to be one of the Perceptual Computing leaders. Win part of the $1 Million prize pool and get your apps on next generation devices. Check out these prizes:

Grand Prize: $100,000 USD + $50,000 in Marketing and Development Consultations

For the ultimate Perceptual Computing Application Demo!

For each category:

  • One (1) First Place Prize of $75,000 + $12,500 in Marketing and Development Consultations each
  • Three (3) Second Place Prizes of $10,000 each
  • Twenty (20) Third Place Prizes of $5,000 each

Plus even more opportunities to get your app to market and thousands of dollars for finalists who submit early!

CAN Team

Together with the Intel team, CAN is participating in the judging of entries. The CAN team includes:

Filip Visnjic

Architect, educator, curator and a new media technologist from London. He is the founder and editor-in-chief at CreativeApplications.Net, co-founder and curator of Resonate festival and editorial director at HOLO magazine.

Elliot Woods

Digital media artist, technologist, curator and educator from Manchester UK. Elliot creates provocations towards future interactions between humans and socio-visual design technologies (principally projectors, cameras and graphical computation). He is the co-founder of Kimchi and Chips and also a contributor to the openFrameworks project (a ubiquitous toolkit for creative coding), and an open source contributor to the VVVV platform.

CAN Contest

To kick off the competition we are giving away THREE (3) Creative* Interactive Gesture Camera Developer Kits. All you have to do to win is leave a comment below. We ask you: “How would you change human–computer interaction?” Go wild, go crazy, speculate!

This contest is now closed. Winners:

@jeremytai:

The biggest barrier to change in human-computer interaction is our preconceived notions and actions about how we have interacted with machines in the past. That said, instead of deciding by analogy we have to make decisions based on logic. As we get older we come to expect an interaction like what was done in the past. This is analogy-based decision making. To get past this barrier I would work closely with children (I have a 4 y.o. and 6 y.o.) as they interact with their environment based on almost pure logic. Children will be the ones that inherit all the interactions we invent today so they should naturally be part of the development.

@kosowski_:

I would like to change the human-computer interaction from computer-centric, controlling a mouse and keyboard to see the result on a screen, to human-centric, where we can manually manipulate actual objects and see how they change. Objects can be augmented, projecting on them for example, adding all the information and tools computers provide, but keeping the connection with the physical world.

@ilzxc:

I think the feedback is what’s lacking these days. I can’t say that I’m too excited for gestural recognition, because it seems to be a rather quaint improvement without adequate haptic feedback. So, I would pledge support towards research of devices that enable the machines to wave back at us. Or, better yet, push back. More specifically, modular controllers for multi-touch that could be assembled to a variety of different configurations would be ideal. This type of controller-assembly needs to sense touch, but also mechanize feedback. The ultimate goal of such controllers is to remove dependency on binary switches of non-pressure sensitive screens, increase the practical ranges of fine control, and enable control systems to evolve (change state), enabling fast access to various tools and their applications.

Rules and information

  1. Postage and Packing included.
  2. You must be over 18 years of age. There will be a total of THREE winners for this competition.
  3. Winners will be selected by the CAN team on the basis of the best comments.
  4. Winners will be contacted via email and will be asked to provide their full name and postal address. If they wish to pass the prize on to another person, we will need that person’s name and postal address. If the winner does not respond by the following Thursday (30th May) we will pick another winner.

Good Luck!

46 comments on “Announcing: CAN + Intel Perceptual Computing Challenge + Giveaway!”

  1. “How would you change human–computer interaction” My physical actions would generate bots (virtual and maybe physical) that learn from me and continue working when I’m not around on items I might be expected to do.

  2. “How would you change human-computer interaction?”
    With projected haptic environments. Imagine a portable technology that lets you project a model of any kind of data in 3D and interact with it with haptic feedback. Modelling a sculpture? Pick a generic “source material” and start pressing and pinching on it. Trying to guess the weight of 16 kg? Just place a virtual mass over your hand. Want to protect your hands while working with a fixed circular saw? Project a barrier that you can feel when you approach it.

  3. Making digital experiences more tangible, helping brands avoid mass-producing some products (for instance plastic toys). Maybe in a future more adequate for the environment we’ll manufacture only the objects that, from our digital experience, deserve to be rendered physically, so the human-computer interaction will be closer to a human-object experience.

  4. “How would you change human-computer interaction?”

    I would create an integrated kitchen computer platform that would help you find and create recipes based on ingredients, or give you recipe options for a proposed dish you would like to make.

    The sensor would be positioned over the top of a counter (and stove where necessary), pointing down or across to catch objects on the counter and hand gestures above the counter. The screen/PC would be mounted on the wall behind the counter.

    You would approach the counter and start by saying something like ‘suggest a recipe for these ingredients’. You would then hold the ingredients up one by one (label or UPC facing forward if it’s not a fruit/vegetable). The device would first attempt to identify the object itself (based on its shape, UPC code, or label), giving you a few options to gesturally choose from if more than one possibility comes up. You could also instruct via voice recognition what the exact ingredient is (i.e. one pound of lean venison).

    From here, you would place the ingredients down on the counter in the frame of the sensor. Things like herbs/spices, and available cooking facilities (maybe you only have a hot plate or toaster oven?) could be added via voice recognition to be considered by the application. When all of your ingredients are laid out, you will say ‘done’ or use a gesture which states that all ingredient options are present.

    Working off of available databases of recipes, the user would be presented with a number of recipes which you could gesturally navigate through on the screen in front of you. Once you have chosen your recipe, it will guide you step by step through how to prepare and combine ingredients. This will be done by showing images of YOUR ingredients presented in previous steps or through the use of a pico projector to ‘point’ to the objects on the counter. The interaction will be passive, intuitive and will not require prompts (i.e. it will be able to see that you picked up the flour and mixed it in the bowl that you just put the sugar in, or that your eggs have been boiling for 6 minutes). Further, the application would be able to notify you if you were about to make a mistake and give voice commands for pre-heating the oven, setting a timer, etc.

    The entire interaction would be driven by voice recognition prompts (to pause or make recipe modifications midway), gestural commands, and object identification.

    The application would use facial recognition to identify whether it’s (for example):

    -You: A strict vegan that uses the platform to help identify whether pre-packaged ingredients have animal products/by-products in them (i.e. does this manufacturer use vegetable rennet or cow rennet?)

    -Your roommate Bob who prefers to make mostly Brazilian food.

    -Your girlfriend Liz who mostly does baking.

    It would then adaptively track previous recipes and whether they were finished, or stopped mid way, to better suggest new recipes and ingredient substitutions.

    Recipe modifications/substitutions would also be dynamically driven by voice recognition and object tracking. A personal database of previous changes would be updated so that if you started to go gluten free, and substitute rice flour for wheat flour, it could suggest other appropriate substitutions in a particular recipe to compensate for the difference in texture or food chemistry.

    _______________

    Well, I wasn’t expecting that idea to become a wall of text. Reading through the SDK, I could expand on this much further but will just leave it at that for this comment!

    1. Weird. I posted this under my user name and then deleted it to make corrections and now it is showing up as a guest post!

      Is there an admin that can re-associate the comment with my user name?

  5. I never win anything, but going in regardless.

    I think the feedback is what’s lacking in these particular settings. I can’t say that I’m too excited in regards to gestural recognition, because it seems to be a rather quaint improvement without adequate haptic feedback. So, I would support research of devices that enable the machines to wave back at us. Or, better yet, push back.

  6. “How would you change human-computer interaction?”

    I would create an interactive kitchen computer platform that would help you find and create recipes based on ingredients, or give you recipe options for a proposed dish you would like to make.

    The sensor would be positioned over top of a counter, pointing down or across to catch objects on the counter and hand gestures above the counter. The screen/PC would be mounted on the wall behind the counter.

    You would approach the counter and start by saying something like ‘suggest a recipe for these ingredients’. You would then hold the ingredients up one by one (label or UPC facing forward if it’s not a fruit/vegetable). The device would first attempt to identify the object itself (based on its shape, UPC code, or label), giving you a few options to gesturally choose from if more than one possibility comes up. You could also instruct via voice recognition what the exact ingredient is (i.e. one pound of lean venison).

    From here, you would place the ingredients down on the counter in the frame of the sensor. Things like herbs/spices, and available cooking facilities (maybe you only have a hot plate or toaster oven?) could be added via voice recognition to be considered by the application. When all of your ingredients are laid out, you will say ‘done’ or use a gesture which states that all ingredient options are present.

    Working off of available databases of recipes, the user would be presented with a number of recipes which you could gesturally navigate through on the screen in front of you. Once you have chosen your recipe, it will guide you step by step through how to prepare and combine ingredients. This will be done by showing images of YOUR ingredients presented in previous steps. The interaction will be passive, intuitive and will not require prompts (i.e. it will be able to see that you picked up the flour and mixed it in the bowl that you just put the sugar in). Further, the application would be able to notify you if you were about to make a mistake and give voice commands for pre-heating the oven, setting a timer, etc.

    The entire interaction would be driven by voice recognition prompts (to pause or make recipe modifications midway), gestural commands, and object identification. The application would use facial recognition to identify whether it’s you, your roommate Bob who prefers to make mostly Brazilian food, or your girlfriend Liz who mostly does baking. It would then adaptively track previous recipes and whether they were finished, or stopped midway, to better suggest the best options possible. Through eye tracking, it could give gentle reminders not to forget something you have on the stove while you’re preparing the next step.

    Recipe modifications/substitutions would be dynamically tracked through voice recognition and object tracking. A personal database of previous changes would be dynamically updated so that if you started to go gluten free, and substitute rice flour for wheat flour, it could suggest other appropriate substitutions in a particular recipe to compensate for the difference in texture or food chemistry.

    ______________________
    Well, that’s way more text than I was expecting to type for this. Reading through the SDK, the potential ideas just keep flowing. It’s a pretty awesome development!

  7. More AI, processes observing your decisions, learning the patterns and helping you go beyond your boundaries or simply completing tasks that are predictable…

  9. Please: something that analyses boredom through eyebrow curvature detection and could, for instance, trigger an algorithm that shortens comments.

  10. There are so many ways to change and improve computer-human interaction. A lot has been done in computer vision and other “computer sensing” technologies to allow novel interactions, yet it still needs to become computationally less expensive to integrate with mobile, light, battery-powered devices. This would allow more wearable and “intelligent” devices. Certainly mobile phones have gotten really powerful, but it is still a completely insane or impossible task to try to attach some of these more advanced sensors to mobiles. Being able to do so would change a lot about the way we interact with these tiny computers. Yet this would be mainly human-to-computer interaction, the computer reacting to our behaviour; the other way round still needs a lot more development to catch up with the former. Currently computers interact with us mainly through some sort of image-displaying device and sound devices. Besides sight and hearing we have several other senses; one in particular that needs to be explored is touch. Imagine if all the new computer vision devices could be attached to, or embedded in, mobile devices that could give us haptic feedback. Don’t forget that we possess the sense of touch in all of our skin, not only on our hands, so eventually having under-skin embedded sensors and actuators, or even further, muscle-embedded ones, doesn’t seem so insane, and this would not only change the way we interact with computers, it would enhance our body or fix several illnesses and diseases. Cyber-humans aren’t that far away.
    Besides this, making computers understand how we see and listen, and how we process this extremely complex information, still needs to be refined a lot before we get to a point at which computers could help us in our daily tasks, such as remembering things, people and places. In the end we are an extremely sophisticated image and sound processing computer, so the goal is to make a silicon-based computer as effective and elegant as the bio/organic computer that our brain is. The hardware research towards such a goal is as important as the research related to our own internal “algorithms” for sight and hearing, so after having a good and powerful hardware sensing device, experimentation and research into what to do with its data is primary. Here’s where we come in. Arts and sciences must merge to allow the full potential of experimentation to emerge and become useful.

    In other words, give me the damn camera! ;)

    PS: I could write a lot more but I don’t want to get boring.

  11. I want to create new 3d metaphors for spatial interaction that are intelligent enough to meet your gestures halfway (anticipation by probability) so that dynamic gestural combinations help you get things done faster.

  12. “How would you change human–computer interaction”

    I would use gestures and voice recognition to teach and translate sign language.
    When a user interacts verbally with a deaf colleague, the audio would be converted to subtitles and an on-screen reference the user could follow with their fingers to express the phrase in sign language. The same would work vice versa, as the colleague’s sign language is interpreted as gestures and spoken aloud or shown as subtitles to the hearing user.

  13. “How would you change human–computer interaction”

    I would build a more realistic flight simulator to help student pilots practice at home. The control board would be displayed on screen and in absence of multiple or large monitors, the screen would scroll based on a gesture or eye tracking. This would allow for a variety of situational flight simulations to be added to a traditional training curriculum. Using the gestures would also promote task action repetition creating muscle memory and procedure familiarity on a more natural level.

  14. “How would you change human–computer interaction”

    Using gestural computing to replace telemanipulators and computer controls in robotically-assisted surgery. A surgeon could operate more naturally while still gaining the benefits of robotic precision. The operation would take place on a 3D rendering of the surgical site using AR glasses with a time delay to allow for mistake correction. The robotic assistance would negotiate for human imperfection, unintentional movement of patient or surgeon, and other variables. The software would also create a video record matched with the 3D data, instrumentation data, patient metrics, and package them into a global library that can be referenced for research.

  15. “How would you change human–computer interaction”

    We need to go interface-less. Computers should anticipate our thoughts and react accordingly without us having to do anything. Gestures are often unnatural and somewhat gimmicky. Eye-tracking and brain-computer interfaces should be helping computers understand where we are looking and what we are thinking of.

  16. “How would you change human-computer interaction?”
    With better support for gesture & speech recognition, I would like to help people who do not know how to (or are afraid to) use computers. The majority of Indians are still semi-literate (can read numbers, can talk in Hindi, and know common English terms). It would be interesting to see how they would react to a more gesture-based interaction for common tasks, like in banking (quicker transactions at banks, instead of waiting in longer queues because they can’t/won’t use ATMs), or basic information retrieval (where simple hand gestures & voice could be used, for example finding bus routes/ticket prices at bus stations) etc.

    A similar use case could also be for rural kids, who run away from text books, and might enjoy simple interactive games to keep them motivated to come to school. With government schools already having computer labs, the Intel sensor could be easily plugged in and used. :)

    PS: Glad to see it works with Processing. One less language to learn. :D

  17. I would change the human-computer interaction to help people with disabilities. And, secondly, to help human beings spend less time sitting at the computer.

  18. I would like to change the human-computer interaction from computer-centric, controlling a mouse and keyboard to see the result on a screen, to human-centric, where we can manually manipulate actual objects and see how they change.
    Objects can be augmented, projecting on them for example, adding all the information and tools computers provide, but keeping the connection with the physical world.

  19. How would you change human–computer interaction

    just send me the camera developer kits and I will show you what I get out of the computer

  20. Looking at interaction as a conversation, there are three places in which we can try to change the conversation (and thus the interaction): the user’s dialogue, the computer’s dialogue, and the language of interaction (the language of interaction is what we usually think of when we “change human-computer interaction”). Changing the language leads to easier (and possibly more complex and interesting) conversations, but if you don’t change the “subject” being discussed nothing new will be learned.

    Because computers are usually used to simplify or take over human tasks, human effort as a topic of discussion is often lost. But really it is the endeavour and the journey to becoming masterful at something that we appreciate when viewing somebody’s creative output. I would change the subject of discussion away from digital media, computational experiences, and how the computer can simplify XYZ. I would change this to discussions that communicate the tangible human experiences of embodied labor, effort, and time investment.

    As such I would create software that requires the user to become engaged “in body”, and become like a martial art requiring skilled and controlled bodily movement in order to create work. Fusing arts such as dance, martial arts, calligraphy, and painting into a digital form. Art creation software that requires refined human movement as the paintbrush.

    This would bring embodied human experience back into the digital realm, and celebrate the process of creation as art in itself.

  21. Well… Is the computer what I really want to be interacting with?

    … Or is it my environment, or in the case of procedural art, computation-in-the world?

    Once the question is reframed, there are many technologies that could help bring about such changes, but my personal interest is in augmented reality.

    Currently, augmented reality comes across as a cheesy overlay on the real world where virtual objects always eclipse real objects no matter how far or close by comparison. Paradoxically, disappearance makes things more ‘real’.

    I would like to use the Intel kit to create more natural occlusions by mapping and segmenting the real world image so ‘virtual’ objects can disappear behind my hands and other real world things in an augmented scene.

    As an artist, this would let me build more meaning and significance around the spatial relationships between real and virtual elements in an augmented reality work rather than being pushed toward annotations and graffiti that currently dominate AR art.

  22. I believe that a big change in the fundamentals of human-computer interaction is due. We learnt simple commands that were suitable for our computing needs of the past, but as we seek more complex commands, interactions that provide more accurate control are required. Interactions should be usable by a beginner but should also provide additional control for an experienced user.

  23. “How would you change human–computer interaction”
    I would push hard for greater anticipatory capabilities. Computers (stored securely, rather than social networking sites) that accurately learn who and which relationships are valuable to you, so that trust develops between user and computer. Also that these preferences can be ported/synced across all devices.

    In Google I/O 2013, Larry Page addressed the issue of the gap between the rapid speed of tech developments vs. the adaptation of technology among populations. Part of this, he said, is that there aren’t enough evangelists / midway people to help educate and transition people in traditional industries towards understanding these new concepts.

    My experience is that people who do not use technology feel like they are constantly fighting with their devices, and are frustrated when they have to go through multiple manual commands to do something like input a calendar event. There is no trust; however, learned and synchronized preferences across all devices from the start may help alleviate this issue, and help everyone get on the same page (..and leave things like IE6 & IE7 behind!)

    Aside from all the more exciting things that are being developed in HCI, I think the first and most important change to strive for is a more pragmatic one: changing the attitude of non-tech-savvy users to one of trust.

  24. The facial recognition could track which mapped areas of the screen I’m looking at and lip cues could serve as “click” type events to navigate. Hands free browsing among other applications would be possible.

  25. How would you change human computer interaction?

    My answer would be “breaking the screens and the keyboards”!!!
    I always saw computers as a tool to help human beings live better, i.e. reduce their problems. However, what we face is that we spend tons of hours in front of computers, time that is taken away from direct interactions with friends and family, nature, etc. Somehow, that enslaves us and makes us closer to the machines instead of making the machines closer to us. In those terms, the next revolution in human-computer interaction would be breaking the screens, so no visual feedback would be needed and the keyboard wouldn’t be used anymore. New paradigms in HCI should reinforce the idea of ubiquitous computing, where computers help us without extensive direct feedback. So I can be working in the garden with my children, and exchanging information with the network, without seeing a screen or sitting typing. To reach this goal, multimodal fusion of voice and gesture recognition with AI to infer patterns as input, and exploring other types of output beyond visual feedback, is the key to advancing towards more human technology.

  26. “How would you change human–computer interaction”.

    Control your car interface with moderate hand gestures in combination with voice recognition to drive more safely and comfortably. That’s my dream project.

  27. “How would you change human–computer interaction”
    By giving computers the ability to learn when humans don’t want to be “observed” and when their interaction is not required.

  28. “How would you change human-computer interaction?”

    A computer isn’t the only thing we use to work. Machines should learn from our interaction with the physical world. Tinkering and manufacturing are being reinvented and I think computers should also teach us how to use our hands.

    A simple example: when you reach for a tool, like a soldering iron, the computer opens a dialog asking if you want instructions. Then it reads your movements and, based on data, shows you what you’re doing wrong. Or when you grab a screw, it tells you which size of screwdriver you should be using.

  29. “How would you change human–computer interaction” –

    Minority Report, Iron Man, Star Trek — voice and gesture controlled computing has been promised in sci-fi for so long that it’s about time we make it a reality. My team and I would build a system for capturing and working with your ideas. Voice recognition and gestures will allow us to build a true Natural User Interface into an idea manipulation productivity tool.

    We are Lionsharp and this is what we’d love to be doing. http://www.lionsharp.com

  30. I would change human-computer interaction by redesigning the way we relate to the Web. Because of its inherent openness and connectivity, the Web continues to offer unique opportunities to develop new forms of interactive applications. Its pervasiveness makes it THE place to make a difference.

    As the Web envelops us more and more, it becomes essential to move part of our interactions to the background for a more fluid and natural experience. I see this happening in 3 ways:

    – The Web connects physical objects. The Internet Of Things will enable us to sense and monitor our physical environment or let your fridge order beer (http://fuckyeahinternetfridge.tumblr.com/).
    – We move on the spectrum from Interactive Systems to Automagic Systems. Looking at creative expression, the Web has offered a showcasing platform as well as a tool to create collaboratively. Now, the cloud becomes the brain that automates the heavy lifting of filtering, enhancing and collating of human expressions (have a look at Google I/O keynote on photos and note how instead of offering a ‘creative suite’ its all about magical *auto* http://www.youtube.com/watch?v=9pmPa_KxsAM&feature=youtu.be&t=1h36m40s). Another form of moving interactivity to the background by magical automation is autonomous bots who for example stroll the web to compare prices or even shop for us (http://randomshopper.tumblr.com/).
    – The web becomes more time-based. Time-based media like audio and video are starting to take up a larger part of our everyday web experience and offer a lean back experience as alternative to all our leaning forward. On the other side it means the media that have traditionally been produced in linear form are opening up and ask (inter)active participation from the people formerly known as the audience. This two-sided shift makes the Web an ideal medium for interactive storytelling, utilizing not only hypertext, but also hypervideo and -audio (see http://popcornjs.org/ if you’re into JavaScript).

    The last point is the one I’m currently most passionate about, and I see collaboration between visual storytellers and coders as one of the key aspects in moving this medium forward. We need a lot of multidisciplinary experimentation and I would love to use one of the Camera Kits in my next collaboration.

  31. “How would you change human–computer interaction” – I’d make everything an interface. No need to point and click, just touch things around you and the computer will augment them! Track fingers while practicing an instrument, browse the web by showing related objects (books, credit cards, business cards..), and if I want to go play basketball I’d just need to grab a ball, and it would share with my friends via social networks that I’m heading out to my favorite court.. Stuff like that – share and connect through real stuff around us!

  32. How would you change human–computer interaction?

    Hiding them (computers) behind bio-haptic living interfaces. Destroy all the touchscreens. Destroy all the screens. Interact with your living-interface.

  33. The most interesting devices will necessarily have to do with the intersection between art, biology and technology. That’s why I need your camera: to become a real “biotech-artist”!!!!

  34. I envision the computer not as a tool, but as an extension of human creativity…It’s time to revamp our interaction models to better foster speculative play through the computer.

  35. The biggest barrier to change in human-computer interaction is our preconceived notions and actions about how we have interacted with machines in the past. That said, instead of deciding by analogy we have to make decisions based on logic. As we get older we come to expect an interaction like what was done in the past. This is analogy-based decision making.

    To get past this barrier I would work closely with children (I have a 4 y.o. and 6 y.o.) as they interact with their environment based on almost pure logic. Children will be the ones that inherit all the interactions we invent today so they should naturally be part of the development.

  36. In my view human-computer interaction will be changed in the right way when it becomes seamless. Exactly the same way you do not realise your hand is plugged into / interacting with your brain, or you do not realise that what you see is in fact an interfaced (hardware + some code) process between your eye and your brain… to sum up: devices/computers are part of reality, there is a friction between us and “them”, and they should be part of us.