Created by Golan Levin, David Newbury, and Kyle McDonald, with the assistance of Golan's students at CMU, Irene Alvarado, Aman Tiwari, and Manzil Zaheer, Terrapattern is a visual search tool for satellite imagery. The project provides journalists, citizen scientists, and other researchers with the ability to quickly scan large geographical regions for specific visual features.
Terrapattern uses a deep convolutional neural network (DCNN) based on the ResNet ("Residual Network") architecture developed by Kaiming He et al. The team trained a 34-layer DCNN on hundreds of thousands of satellite images whose locations were labeled in OpenStreetMap, teaching the network to predict the category of a place from a satellite photo. In the process, the network learned which high-level visual features, and combinations of those features, are important for classifying satellite imagery. The team used 466 Nominatim categories (such as "airport", "marsh", "gas station", "prison", "monument", "church", etc.), with approximately 1,000 satellite images per category. The resulting model, which took 5 days to compute on an NVIDIA 980 GPU, has a top-5 error rate of 25.4%.
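For readers who want a concrete picture of what that training setup looks like, here is a minimal sketch, not the Terrapattern code itself, of fine-tuning a 34-layer ResNet classifier on satellite tiles with PyTorch. It assumes a hypothetical `tiles/` directory containing one folder of images per Nominatim category; the hyperparameters are illustrative placeholders.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_CLASSES = 466  # number of Nominatim categories used for training

# Standard ImageNet-style preprocessing for 224x224 inputs
transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# "tiles/" is a hypothetical directory: tiles/airport/..., tiles/marsh/..., etc.
dataset = datasets.ImageFolder("tiles/", transform=transform)
loader = DataLoader(dataset, batch_size=64, shuffle=True, num_workers=4)

# 34-layer residual network with a classification head for 466 categories
model = models.resnet34(weights=None)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

for epoch in range(10):  # epoch count is arbitrary here
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

In a setup like this, the activations of the network's penultimate layer can serve as a compact descriptor for each map tile, so that visually similar places can be retrieved by nearest-neighbor search over those descriptors, which is how a classifier of this kind can power a visual search tool.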