
Who Wants to be a Self-Driving Car? – Empathising with self-driving vehicle systems

Created by Joey Lee (US), Benedikt Groß (DE), and Raphael Reimann (DE) from the moovel lab, in collaboration with MESO Digital Interiors (DE), Who Wants to be a Self-Driving Car? is a data-driven trust exercise that uses augmented reality to help people empathise with self-driving vehicle systems. The team built an unconventional driving machine that lets people navigate through space using real-time, three-dimensional mapping and object recognition displayed in a virtual reality headset.

The project makes tangible the many ways that sensors, data, computation and mobility are intertwined. By “becoming a self-driving car”, people are given a medium through which the black box of self-driving technologies can be better expressed and read. The underlying technologies of self-driving cars, and therefore the discussions about their strengths and unresolved challenges, are rendered visible in this explorative and immersive experience.

The idea was to make a machine that replaces the human senses with the sensors that a self-driving car might use. This unconventional driving machine is essentially a steel-frame buggy with in-wheel electric motors, complete with hydraulic braking. Drivers lie head-first on the vehicle, a position chosen to enhance the feeling of immersion (and vulnerability) created during the experience. A physical steering wheel controls the turning of the vehicle.

The VR experience is created from data collected by the sensors outfitted on the driving machine. The main view presents data from a 3D depth camera – a ZED Stereo Camera – that uses stereoscopic imaging to map the landscape in real time. The 3D mapping of the vehicle’s vicinity is supplemented with visual object detection, using the YOLO library (GitHub) on a standard web camera, to help the driver better understand what’s around them. A video camera mounted on the back of the vehicle allows the driver to see while reversing. Lastly, a light detection and ranging (LIDAR) sensor – a Slamtec RPLidar – adds an additional layer of distance sensing by sending out pulses of light and measuring the two-way return time from nearby objects. These components are pulled together into the VR headset – an Oculus Rift – using vvvv and the Unity 3D game engine, providing drivers with data that they must interpret to navigate the driving machine through space, essentially replacing the control unit of an autonomous vehicle with a person.
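
The LIDAR’s distance sensing boils down to a time-of-flight calculation: the pulse’s two-way return time multiplied by the speed of light, halved. A minimal Python sketch of that principle (the function name and sample value are illustrative, not taken from the project’s code):

```python
# Minimal sketch of the time-of-flight principle behind the LIDAR:
# distance = (speed of light * two-way return time) / 2.
SPEED_OF_LIGHT_M_S = 299_792_458  # metres per second

def distance_from_return_time(round_trip_seconds: float) -> float:
    """Convert a pulse's two-way return time into a one-way distance in metres."""
    return SPEED_OF_LIGHT_M_S * round_trip_seconds / 2

# A return time of ~33.4 nanoseconds corresponds to an object roughly 5 m away.
print(distance_from_return_time(33.4e-9))  # ≈ 5.0
```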

There are two computers on board the buggy – a PC and an NVIDIA Jetson TX2. The PC takes in the data from the 3D depth camera and the LIDAR, and also receives the detected objects from the Jetson TX2 board running the YOLO object detection software. The detected objects are sent via OSC to vvvv, which composes the visualization described above at 90 frames per second. The system is powered by on-board batteries.
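
To give a sense of that object-detection handoff, here is a hedged sketch of how per-frame YOLO detections could be pushed from the Jetson TX2 to the PC as OSC messages. It uses the python-osc package; the address pattern, network details and message layout are assumptions for illustration, not the project’s actual protocol.

```python
# Hypothetical sketch: sending YOLO detections from the Jetson TX2 to the PC
# over OSC. Address pattern, host/port and message layout are assumed.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("192.168.1.10", 9000)  # PC running vvvv (assumed address)

def send_detection(label: str, confidence: float,
                   x: float, y: float, w: float, h: float) -> None:
    """Send one detection as a single OSC message: label, confidence and a
    normalised bounding box that the visualization side can place in the scene."""
    client.send_message("/yolo/detection", [label, confidence, x, y, w, h])

# Example: a pedestrian detected roughly in the centre of the web-camera frame.
send_detection("person", 0.87, 0.45, 0.30, 0.12, 0.40)
```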

The moovel lab collaborated with MESO Digital Interiors to prototype this immersive experience. For more information on the project, visit the project website below.

Project Page | Joey Lee | Benedikt Groß | Raphael Reimann | MESO Digital Interiors