Guillaume Massol’s openFrameworks app, titled “All work and no play,” watches videos drawn from different training datasets and generates sentences loosely based on what is happening on screen, occasionally producing pearls of wisdom by coincidence.
Some of the text is generated using models created by Ross Goodwin for his NeuralSnap project. NeuralSnap is part of Ross’ ongoing research into tools that he hopes will augment human creativity. Specifically, NeuralTalk2 uses convolutional and recurrent neural networks to caption images. Ross trained his own model on the MSCOCO dataset, following the general guidelines but making adjustments to increase verbosity. He then trained a recurrent neural network with Char-RNN on about 40MB of (mostly) 20th-century poetry from a variety of writers and cultures around the world.
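Char-RNN, used in the second stage of that pipeline, generates text one character at a time: each sampled character is fed back into a recurrent network as the next input. A minimal sketch of that sampling loop in plain Python follows; the tiny vocabulary and randomly initialized weights are illustrative assumptions standing in for a trained poetry model, not Ross’ actual setup.

```python
import math
import random

random.seed(0)
vocab = "abcdefghijklmnopqrstuvwxyz "
V, H = len(vocab), 8  # vocabulary size, hidden-state size (toy values)

# Random weights for illustration; Char-RNN would learn these from text.
def mat(rows, cols):
    return [[random.gauss(0, 0.1) for _ in range(cols)] for _ in range(rows)]

Wxh, Whh, Why = mat(H, V), mat(H, H), mat(V, H)

def sample(seed_char, n):
    """Generate n characters, feeding each sampled character back in."""
    h = [0.0] * H
    x = [0.0] * V
    x[vocab.index(seed_char)] = 1.0  # one-hot encoding of the seed
    out = []
    for _ in range(n):
        # Recurrent update: new hidden state from input and old hidden state.
        h = [math.tanh(sum(Wxh[i][j] * x[j] for j in range(V)) +
                       sum(Whh[i][j] * h[j] for j in range(H)))
             for i in range(H)]
        # Project to vocabulary scores, then softmax into probabilities.
        y = [sum(Why[k][i] * h[i] for i in range(H)) for k in range(V)]
        m = max(y)
        e = [math.exp(v - m) for v in y]
        s = sum(e)
        p = [v / s for v in e]
        # Sample the next character from the distribution.
        r, acc, idx = random.random(), 0.0, V - 1
        for k, pk in enumerate(p):
            acc += pk
            if r <= acc:
                idx = k
                break
        out.append(vocab[idx])
        x = [0.0] * V
        x[idx] = 1.0  # the sampled character becomes the next input
    return "".join(out)

print(sample("t", 40))
```

With untrained weights the output is gibberish; after training on a poetry corpus, the same loop produces the kind of text NeuralSnap stitches into captions.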