Five Days Left To Enter! – Intel Perceptual Computing Challenge

Ever since the fourteenth century we have been interested in developing new forms of geometric projection, which led to the invention of novel pictorial art forms of visual representation. Understanding perspective allowed us to develop new systems and mechanisms for producing representations of objects in space as if seen by an observer through a window or frame. Like Albrecht Dürer (above), however, we have not moved far from understanding information through a rectangle, now better known as a computer screen, reconstructing and interacting with ‘new’ worlds. Input/assistive devices have not changed much either, and while the Kinect has allowed us to move away from the screen and understand the space beyond it in three dimensions, the way we interact with information is still constrained to a two-dimensional surface.

Intel understands the implications of perceptual computing but struggles to see beyond the status quo. Waving hands in the air and conforming to the traditional, familiar interaction of moving objects on a screen may be interesting, but it quickly becomes a gimmick, not to mention the AR demos. This is exactly where the Perceptual Computing Challenge comes in.

For CAN, it has always been important that artists push the boundaries of technology. Intel has the resources and R+D support, and they are looking for the most innovative and unique uses of Perceptual Computing. A grand prize of $100,000 and thousands of dollars more in prizes are available, but you must act soon!

At this stage, it is about the ideas; no code submission is necessary. 750 proposals will be selected as finalists and will receive an interactive gesture camera in order to turn their idea into reality. You have until August to develop and deliver your demos, and CAN is on the judging panel. Go crazy, speculate, imagine!

How to Enter:

  1. Visit the contest website
  2. Register in just a few easy steps
  3. Submit your idea!

We look forward to your submissions!


Important Dates

  • Idea Submission / Until 1 July 2013 (extended from 17 June 2013)
  • Judging & Loaner Camera Fulfilment / 2 July 2013 – 9 July 2013 (Updated)
  • Early Profile Form Submission / 10 July 2013 – 31 July 2013 (Updated)
  • Early Demo App Submission / 10 July 2013 – 20 August 2013 (Updated)
  • Final Demo App Submission / 10 July 2013 – 26 August 2013 (Updated)

The SDK (Next Stage)

Intel’s Perceptual Computing SDK is a cocktail of algorithms comparable to the Microsoft Kinect SDK, offering high-level features such as finger tracking, facial feature tracking and voice recognition. These sit alongside a fairly comprehensive set of low-level ‘data getter’ functions for its time-of-flight camera hardware.
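To give a flavour of the C++ side, here is a minimal sketch of polling the gesture module through the SDK’s UtilPipeline helper. The identifiers (UtilPipeline, EnableGesture, QueryGesture, the GeoNode query and status constant) are written from memory of the SDK documentation and samples, so treat the exact names and signatures as assumptions and check the shipped headers.

```cpp
// Minimal sketch: poll the gesture module for the primary hand's
// world-space position. Names follow the SDK's UtilPipeline helper
// as we recall it -- verify against util_pipeline.h before relying on it.
#include "util_pipeline.h"

int main() {
    UtilPipeline pipeline;
    pipeline.EnableGesture();            // switch on hand/finger tracking

    if (!pipeline.Init()) return 1;      // open the camera and start streaming

    while (pipeline.AcquireFrame(true)) {            // block until a frame arrives
        PXCGesture* gesture = pipeline.QueryGesture();
        PXCGesture::GeoNode hand;
        if (gesture->QueryNodeData(0,
                PXCGesture::GeoNode::LABEL_BODY_HAND_PRIMARY,
                &hand) >= PXC_STATUS_NO_ERROR) {
            // hand.positionWorld now holds metric x/y/z coordinates
        }
        pipeline.ReleaseFrame();
    }

    pipeline.Close();
    return 0;
}
```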

The data from the time-of-flight sensor comes in pretty noisy, in a way that differs from the Kinect (imagine white noise applied to your depth stream). Intel supplies a very competent routine for cleaning it up, allowing realtime, noiseless depth data with smooth continuous surfaces. Unlike the Kinect, much finer details can be detected, as each pixel acts independently of the others.
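Intel’s clean-up routine is a black box, but a simple temporal filter gives a feel for the kind of processing involved. The sketch below is purely illustrative and is not the SDK’s algorithm: it exponentially smooths each depth pixel over time, skipping invalid (zero) readings.

```cpp
// Illustrative only: exponential temporal smoothing of a noisy depth buffer.
// This is NOT Intel's routine, just the simplest filter of this family.
#include <cstdint>
#include <vector>

void smoothDepth(const std::vector<uint16_t>& raw,
                 std::vector<float>& smoothed,   // persists across frames
                 float alpha = 0.3f)             // higher = trust the new frame more
{
    if (smoothed.size() != raw.size())
        smoothed.assign(raw.begin(), raw.end()); // first frame: copy as-is

    for (size_t i = 0; i < raw.size(); ++i) {
        if (raw[i] == 0) continue;               // 0 = invalid reading, keep old value
        smoothed[i] += alpha * (raw[i] - smoothed[i]);
    }
}
```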

In an interesting development, Intel doesn’t prioritise listing support for ‘C++, .NET, etc.’; instead it reels off a set of creative coding platforms as its primary targets: Processing, openFrameworks and Unity being the lucky selection. Under the covers, this translates directly to Java, C++ and C#/.NET support. Despite the focus on creative coding platforms, the framework currently only supports Windows.

The ‘openFrameworks support’ is a single example calling the computer-scientist-riffing low-level Intel API directly, which in our opinion demonstrates pretty shallow support for openFrameworks, but it also means you’ll be equally well placed if you’re coming from Cinder or plain C++. Some friendlier, more complete openFrameworks addons are already popping up on GitHub, wrapping a few features nicely, but they are still far from covering Intel’s complete offering. A sketch of what such a wrapper might look like follows below.
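For a sense of what a friendlier wrapper could look like, here is a hypothetical addon interface. ofxPerceptual and its methods are invented for illustration and don’t correspond to any particular GitHub addon; the ofApp structure, ofTexture drawing and ofDrawCircle call are standard openFrameworks.

```cpp
// Hypothetical addon interface (ofxPerceptual is invented for illustration);
// the ofApp skeleton itself is standard openFrameworks.
#include "ofMain.h"
#include "ofxPerceptual.h"   // assumed addon header

class ofApp : public ofBaseApp {
    ofxPerceptual cam;       // hypothetical wrapper around the Intel pipeline

public:
    void setup() override {
        cam.setup();                            // open device, start depth + gesture streams
    }
    void update() override {
        cam.update();                           // pump the Intel pipeline, refresh textures
    }
    void draw() override {
        cam.getDepthTexture().draw(0, 0);       // cleaned-up depth as a texture
        ofDrawCircle(cam.getPrimaryHand(), 10); // tracked hand in screen space
    }
};

int main() {
    ofSetupOpenGL(640, 480, OF_WINDOW);
    ofRunApp(new ofApp());
}
```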

The SDK ships with a comprehensive set of examples written in C++, demonstrating features such as head-tracked VR, gesture-based remote control and an augmented reality farm.

Events

Intel are also organising a number of events around the world to support the competition. From workshops to hackathons, join others to develop and create!

Here are some ideas from our giveaway a few weeks back:

@jeremytai:

The biggest barrier to change in human-computer interaction is our preconceived notions and actions about how we have interacted with machines in the past. That said, instead of deciding by analogy we have to make decisions based on logic. As we get older we come to expect interactions like those we had in the past. This is analogy-based decision making. To get past this barrier I would work closely with children (I have a 4 y.o. and 6 y.o.) as they interact with their environment based on almost pure logic. Children will be the ones that inherit all the interactions we invent today so they should naturally be part of the development.

@kosowski_:

I would like to change the human-computer interaction from computer-centric, controlling a mouse and keyboard to see the result on a screen, to human-centric, where we can manually manipulate actual objects and see how they change. Objects can be augmented, projecting on them for example, adding all the information and tools computers provide, but keeping the connection with the physical world.

@ilzxc:

I think feedback is what’s lacking these days. I can’t say that I’m too excited about gestural recognition, because it seems a rather quaint improvement without adequate haptic feedback. So, I would pledge support towards research into devices that enable machines to wave back at us. Or, better yet, push back. More specifically, modular controllers for multi-touch that could be assembled into a variety of different configurations would be ideal. This type of controller assembly needs to sense touch, but also mechanize feedback. The ultimate goal of such controllers is to remove dependency on the binary switches of non-pressure-sensitive screens, increase the practical range of fine control, and enable control systems to evolve (change state), enabling fast access to various tools and their applications.

See here for more.


11 comments on “Five Days Left To Enter! – Intel Perceptual Computing Challenge”

  1. What am I waiting for? I’m waiting for Intel to lift those MORONIC COUNTRY RESTRICTIONS GODDAMMIT! This very month I finished my first master’s thesis in interaction design, and it is about two-handed gesture interfaces. I even built a prototype, yet I can’t enter because I’m a Dutch citizen studying in Sweden, neither of which is covered by the options.

    Yes, I am very frustrated by this.

    1. @jobvanderzwan Sorry about that, Job. Unfortunately there’s not much we can do about this, and I would like to think Intel have a good reason for doing it, whatever it may be. :(