Matthew Plummer-Fernandez’s ongoing exploration of digital fabrication needs little introduction to CAN readers. Over the past few years the London-based artist has designed several series of exquisite artifacts that a viewer might be tempted to distill down to ‘volumetric considerations of glitch aesthetics and low poly geometries’, but that kind of surface reading would overlook a world of nuance. There are smart polemics about representation and replicability embedded within these abstract forms and Plummer-Fernandez is as dedicated to cultivating novel generative techniques as he is to reconciling their physical realization. Plummer-Fernandez was recently commissioned to produce a new work, titled Venus of Google, for Design Exquis and CAN was fortunate enough to engage him in a freewheeling conversation about the undertaking.
But first, some context: Venus of Google is a 3D print of an algorithmic interpretation of a photograph served up by Google Images in response to another photograph. This might sound like an over-mediated choreography but the workflow isn’t outlandish at all considering Plummer-Fernandez’s commission was to participate in a game of exquisite corpse. The meta-commentary here is that he’s riffing on the computer vision system driving Google Images and created his own image-comparison algorithm to ‘sculpt’ the image found through Google and then output it for 3D printing. Watch the above screencast to get a sense of the process, and then read on. The only other important detail to keep in mind is that this piece is the first work within Long Tail Multiplier, a new series.
Why did you choose Google Image Search as the ‘interpreter’ of your original image?
I’m currently interested in automating the production of my work. I’m not that interested in my interpretation of an object or image, I’m more excited by seeing how an algorithm interprets it, and Google’s technology is increasingly becoming an accessible form of computer vision that tries to do that for you. It is currently rather impractical and I like that about it. I like to see Google fail at recognising objects and draw up a collage of totally non-related artefacts, perhaps a side effect of having too many options to retrieve, with many images having just the right shade of colour and shape to dilute the search results.
Failure, or at least problematizing ‘success’, seems to be a recurring theme in your recent projects. The Digital Natives 3D print series takes simple everyday vessels and transforms them into improbably complex glitch housewares, while sekuMoi Mecy questions the limits of intellectual property with a derivative, headwound-suffering bastardization of Mickey Mouse. One might read these examples and the Venus of Google as starting with ‘a rigorous undermining of form’ followed by a sculptor-like zeal for the exquisite realization of your findings. Do you see this as a contradiction? If not, then how would you describe it?
I see the Venus of Google as more in line with the work that came prior to those projects, Glitch Reality I and II, where I'm using the technology as best as it will provide to expose and aestheticize its shortcomings, rather than designing it to fail. The sekuMoi Mecy headwound is actually my best attempt at scanning Mickey's head with a laser-scanner that struggles to scan black surfaces. Failure in my work is much more honest when the intention was not to fail. Anyone can recreate a glitch, but sighting glitches in technology that is evangelised as being state-of-the-art is much more satisfying. I'm drawn to the stupidity of technology – it's my favourite paradox, and arguably it drives our never-ending aspiration for better technology. With the Venus I'm trying to push this theme a little further: the failure and deformation of the platonic mesh runs in parallel to the formation of a cultural artefact. It is an ambigram of both formation and destruction. The hill-climbing algorithm approach itself can be described as a few successful steps amidst a succession of failures, and the algorithm unknowingly can never reach a perfect score; all it can do is try – in fact it never stops trying – so the final object is instantiated when I switch the program off.
Let’s drill further into the hill-climbing algorithm that derives the Venus form; your statement describes it as performing “thousands of random transformations and comparisons between the shape and the image” to evolve the volume to be ‘more like’ the source image. Can you shed some light on how this algorithm works and describe any challenges you encountered when developing it?
I have an automated set of interactions happening on my computer. A Processing sketch distorts a mesh and saves a screenshot of it. A Python script notices the new screenshot and invokes an image comparison between this and the target image, using a command-line tool called ImageMagick. This returns a score – a high number meaning high resemblance between the two – by comparing pixel RGB values. The score is sent back to Processing: if it's a higher score than before, i.e. higher resemblance, it will save the current state of the mesh; if not, it will undo that last transformation and try again. This is the ‘Hill-Climbing’ technique – it is like walking to the peak of a hill by blindly taking steps and committing to them if you end up any higher than before.
The main challenge was to find the right balance of conditions such as pace of change – strong mesh transformations can achieve higher scores but also lead quickly to irreversible mistakes, while small alterations can take hours to get you anywhere. It took a few versions to find successful conditions, so it would be amazing if these could also be optimised iteratively by the computer. Another challenge was the mesh transformations themselves; these are carefully engineered mesh ‘blisters’ that combine localised face translation, subdivision and smoothing.
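The loop Plummer-Fernandez describes – apply a random transformation, score the result, keep it if the score improves, undo it if not – can be sketched in a few lines of Python. To be clear, this is a minimal, self-contained illustration rather than his actual pipeline: the real system distorted a mesh in Processing and scored screenshots against the target image with ImageMagick, both of which are stood in for here by a toy numeric state and a hypothetical `similarity` function where a higher score means closer resemblance.

```python
import random

def similarity(candidate, target):
    # Stand-in for the ImageMagick comparison: higher score means
    # closer resemblance (here, smaller total element-wise difference).
    return -sum(abs(c - t) for c, t in zip(candidate, target))

def hill_climb(target, steps=5000, seed=0):
    rng = random.Random(seed)
    # Start from a flat 'platonic' state, like the initial mesh.
    state = [0.0] * len(target)
    best = similarity(state, target)
    for _ in range(steps):
        # Random transformation: nudge one element by a random amount.
        i = rng.randrange(len(state))
        delta = rng.uniform(-1.0, 1.0)
        state[i] += delta
        score = similarity(state, target)
        if score > best:      # higher resemblance: commit to the step
            best = score
        else:                 # otherwise undo it and try again
            state[i] -= delta
    return state, best

final_state, final_score = hill_climb([3.0, -1.5, 2.0, 0.5])
```

As in the interview, the loop has no stopping condition of its own – it never reaches a perfect score, and the result is simply whatever state it holds when you halt it. The balance he mentions is visible here too: a larger `delta` range climbs faster early on but wastes more rejected steps near the target.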
Ok, we’ve got process covered – let’s switch to content and speculate as to its implications. Venus of Google emerged from an image search result of a photograph of a box filled with feathers and LEDs. That search was algorithmic, but your decision was not. What made you pick a photograph of a model versus the other results? Furthermore, how do you think riffing on figurative sculpture tropes and tradition changes the way a viewer might receive this work?
I like to automate the process but retain the right to make the main creative decisions; my work is automated but not entirely out of my control. With Digital Natives, sekuMoi Mecy and others I select the initial objects. There is an element of collaboration between the algorithms and myself – it's more like having algorithmic art assistants, and this one just happens to be a lot more independent than the others that came before it. I processed about 50 images from the search results and test-printed a couple of other objects too, but I decided the figure was the most intriguing outcome. I agree that I'm perhaps conditioning the viewer with name associations to primitive art, but the name actually came about from hearing the Venus connection suggested to me by others. Venus figures tend to be named after the place where they were found, so it was just too tempting to play on that. I'm also interested in primitivism and early human culture – not just the artefacts but the level of intelligence we guess to be behind it.
Chris Anderson’s 2004 evangelism of ‘the long tail marketplace’ celebrated a new plurality where fringe manufacturers and artists could suddenly generate sustainable revenue from aspatial marketplaces like Amazon.com. Approximately a decade later, you’re critically probing a state of ‘post-archivism’, where we traverse networks that are brimming with inconceivable amounts of content and algorithms are our primary means of retrieving information and – increasingly – generating objects. What does Long Tail Multiplier teach us about our new made-to-order supply chain?
Celebrations and welcome social change aside, all great technologies seem to proliferate towards bottlenecks and introduce undesirable consequences – cars introduced traffic, rail introduced derailment, planes introduced hijacking, Twitter introduced fake followers and long-tail fringe economies could facilitate algorithmic marketplace spamming. I'm not arguing against social-technological change, I'm illustrating an awareness of these paradoxes. The long tail marketplace is constantly praised and ideologically driven; I myself benefit from it, and the whole DIY-maker/online-seller industry holds faith in this ideology. To deny ourselves even an interest in critically probing and unpacking it would leave us totally ignorant of any forthcoming undesirable consequences. Consequences can otherwise be discussed, reshaped or reframed into more desirable outcomes, like fake Twitter followers that we want to befriend, or better traffic management, or perhaps an appreciation of algorithmic artefacts.
On the topic of algorithmic artefacts: in the email conversation we had leading up to this exchange, you mentioned that a few examples of marketplace spamming had influenced the thinking that prefigured the Venus of Google. Could you talk a little about what you took away from T-shirt and book spamming, and perhaps speculate on how these specific generative anomalies might become the new normal?
Freely speculating, I imagine that the online music marketplace will be plagued by algo-remixes of K-pop deliberately confusing an Asian market into thinking they are coming from famous Western music producers. Algo-driven music videos will then follow, and one of these will go on to reach global meme status. Following that, the UK advertisement industry will be lured in and begin creating algo-driven entertainment that mines your social network for images to use in trashy personalised video campaigns. This will chime with a growing marketplace for 3D printed family portraits, and one US start-up will combine this with targeted Facebook ads, sometimes bringing you one click away from a 3D print of you drunk at a party. Yahoo or Pinterest will then buy the technology to make objects resembling anything you've ever liked. Due to poor sales, investors will fund algorithms that can predict what individuals want before they know it. Sentiment analysis will change the colours on garments to match your mood swings. Once this becomes the norm, an appreciation will emerge for the early algorithmic artefacts, and we'll go to a curated show to laugh about stupid t-shirt slogans and illegible Markov chain books. Just saying.
Venus of Google will be exhibited at The Museum of the Order of St John (London) as part of Design Exquis, May 21st–23rd.
Matthew will be discussing his work at the Parasol unit foundation for contemporary art (London) at 7pm on May 16th [details]