Digital culture, for all of its inherent reflexivity, can be surprisingly dumb when it comes to reflecting critically upon itself. When it does, though, ideas mash up quickly, and the fast-moving meme emerging around The New Aesthetic is a fascinating example (you can literally watch the discussion unfold online in real time right now).
Image above: Selective Memory Theatre by Matthias ‘moka’ Dörfelt
The current conversation was launched at a panel of the same name at the SXSW (South by Southwest) Interactive conference in Austin a few weeks ago (see here for a summary). The panel featured James Bridle, Joanne McNeil, Ben Terrett, Aaron Straup Cope, and Russell Davies, and was given critical momentum by remarks made by Bruce Sterling in his closing talk at the same event, which he further developed in a Wired article. In the days since Sterling made his comments, a swarm of other commentators have continued the discussion, and there are no signs of it stopping yet.
The focus of the discussion is an ongoing research project initiated by James Bridle, which primarily takes the form of a Tumblr blog: The New Aesthetic. The blog assembles material of all kinds – from CAD-generated building facades to the glitches in the stitched photos of Google Street View, from short clips of iPad manufacturing plants to the fragmented images on broken Kindle screens, from on-set shots of actors wearing computer-vision costumes to a mapping of Tesco’s corporate organisational sprawl. The suggestion is that there is a new aesthetic to be found in our environment which is not directly of human design, but comes out of our interactions with machines, and perhaps, from the machines themselves: the subtitle of the SXSW panel was indeed ‘Seeing Like Digital Devices’.
The individual posts are more or less enjoyable and interesting as the momentary ephemera of a culture and a global economy increasingly determined by the techno-scientific processes of digital production. But is the site any more than a contemporary Wunderkammer? Sterling describes it as ‘a gaudy, network-assembled heap’, and I wonder how deliberate his use of the term ‘heap’ is. In an early attempt to describe something like emergence in systems and organisms, Aristotle stated that ‘the totality is not, as it were, a mere heap, but the whole is something besides the parts’. So, does this assemblage of material constitute a ‘mere heap’, or is there something else here, an emerging idea that we can start to discern? Can we see what the cybernetic ecologist Gregory Bateson would describe as a pattern that connects?
Bridle states that the material ‘points towards new ways of seeing the world, an echo of the society, technology, politics and people that co-produce them.’ For Bruce Sterling, ‘the evidence is impossible to refute… modern reality is on display there’, and most commentators broadly agree with that – the discussion starts when we ask: Who is doing the seeing? Who hears the echo? What exactly is being pointed at? And just as importantly, how exactly should we theorise the kind of collecting of material that is going on here?
Scanning through the blog I think of David Greene (of the sixties avant-garde architecture group Archigram), and his quasi-imaginary ‘Institute for Electric Anthropology’ (ref), which he has used since the early 1970s to talk about the ways in which new technologies and communication networks alter modern life. The NA blog certainly constitutes some kind of electric anthropology, and perhaps it could even become a department in Greene’s ‘Invisible University’ project. Several other commentators have made connections to the work of earlier twentieth-century avant-garde art movements. In the panel discussion Joanne McNeil of Rhizome talked about how technology changes perception, referencing the work of the Cubists and Futurists. Several others have also asked whether this constitutes (or needs) a manifesto of some kind. Sterling suggests that the NA material is ‘like early photography for French Impressionists, or like silent film for Russian Constructivists, or like abstract-dynamics for Italian Futurists.’ This all makes some sense, as this is in many ways a classic modernist research project: the material is after all objet trouvé, an assemblage of found ready-mades, stuff circulating in the world. Perhaps we should read the Tumblr blog as a neo-Dadaist assemblage, a reworking of a Kurt Schwitters Merzbau? Or perhaps it is better to think of it in terms of the kind of contemporary artworks that have a strong curatorial aspect within the art itself? Something like, say, some of the work of the Otolith Group, or Mark Leckey? Thinking about it as an art project only gets us so far though… let’s look at the material a bit more closely…
Kurt Schwitters: Merzbau, 1933
The posts tend to fall into one of a few categories (and rarely more than one, interestingly). Some of the material is the unintended side product of digital production processes – glitches and so forth. Some artefacts are the result of the way that we are now marking up and reorganising environments to facilitate machine interactions. Many posts reference human designs which are in some way responding to or mimicking the previous two categories. And there is an awful lot of rather mediocre and predictable pixelated, faceted, blurred, stretched etc. stuff mixed in there.
So what are we to make of these categories? Sterling reminds us that ‘glitches and corruption artefacts aren’t machine vision’, and he is right. But they do, through their very slippage, reveal something about the system, mind and logic that produced them. I am reminded here of the research of William and Gregory Bateson on glitches in humans, animals and plants, where they found clues regarding the role of information and symmetry in evolutionary processes. Regarding the second category, it might of course be argued that humans have always transformed their environments to facilitate the use of tools. For a century now we have put humans second to the needs of the car in our cities, for example, transforming cities beyond recognition in the process. However, there is perhaps something else important to note concerning the way that environments (and indeed human behaviours) are being manipulated to facilitate their recognition by machines today. For example, Rev Dan Catt has noted, regarding the claim that much of the aesthetic seems retro (Sterling: ‘retro ’80s graphics are sentimental fluff’), that this is a reflection of the current state of computer vision: ‘or put another way, current computer vision can probably “see” computer graphics from around 20-30 years ago … because machine/computer vision isn’t very advanced, to exist with machines in the real world we need to mark up the world to help them see.’
It is perhaps the third category – objects that are simply designed ‘art pieces’ – that is the most difficult to think about as a new aesthetic. Is this just simplistic mimetic iconography, or does it represent a more interesting attempt to empathise with machines? Still, many commentators do want to find (or initiate) a serious attempt by artists and designers to engage with these questions. Kyle Chayka argues that ‘The New Aesthetic, as it exists in drone technology and Google Maps imagery and data surveillance, represents a ground-level change in our existence. Instead of shocking society, New Aesthetic art must respond to a shocked society and turn the changes we’re confronting into critical artistic creation.’
Bridle has joked that if he had known how influential the site would become, he would have chosen a better name. Yet in many respects the name is just right, and has raised the stakes in an important way. How we think about NA depends very much on how we understand the word ‘aesthetic’. There is a weak sense of ‘aesthetics’, which means something like a style or a look, and to be sure, much of the material on the actual blog, as well as much of the commentary, is concerned with this reading: what kind of ‘look’ is emerging, to what extent it is intentionally designed, to what extent it is the result of frictions between different systems and different visual logics, and so on – all interesting enough. There is however another, stronger meaning of the word ‘aesthetic’, which refers to a tradition of philosophical thought concerned with understanding how it is that we perceive and have knowledge of the world. In this sense aesthetics is inseparable from, and perhaps unites, epistemology (philosophy of knowledge) and ontology (philosophy of being). In an important way, the question of whether we can identify a new aesthetics is then not just a stylistic question of appearances, but also a philosophical question concerning technologies of perception and production in the world. Clearly, much of the discussion around NA above has aspects of both senses of aesthetics. However, once Sterling stated that ‘The New Aesthetic is a genuine aesthetic movement with a weak aesthetic metaphysics’, it was only a matter of time before the philosophers descended…
Ian Bogost and Greg Borenstein are two of the commentators who have responded to NA from an Object Oriented Ontology (OOO) perspective. I always have a degree of sympathy with this position. OOO thinkers typically try to deal both with the reality of objects and with their extended relationships in the world. Drawing upon the work of Bruno Latour in particular, they see the world as an extended horizontal network of actors, where the actors are anything from people to machines to atoms to, well, anything. I agree with those who think that OOO is an approach that can help us to think about NA. I concur when Bogost calls for ‘philosophical lab equipment that helps us grasp, as best we can, the experience of objects themselves’, and am curious when Borenstein suggests that ‘the New Aesthetic is actually striving towards a fundamentally new way of imagining the relations between things in the world.’ But when Bogost wonders why we focus on computers, asking ‘why couldn’t a group of pastry chefs found their own New Aesthetic, grounded in the slippage between wet and dry ingredients?’, it becomes clear to me what is missing in most of the NA discussion (and indeed in much Latourian thought) so far: politics, economics… There is of course a reason why we are talking about computers and not pastry, and it is not because pastry chefs are too lazy to get their stuff together on Tumblr. The point is that digital production technologies have become fundamental to the processes of global capitalism – in terms of production, in terms of finance, in terms of media, in terms of surveillance – and, indeed, are also increasingly central in anti-capitalist movements and post-capitalist alternatives. To reflect upon a possible aesthetics of digital technology at the beginning of the twenty-first century is then in large part to explore the contradictory internal relations of global capitalism itself. Yes, I know that we could draw a network of actors that connects pastry to millers to farmers to wheat fields and water tables and clouds in one direction, and to consumers and advertising and so on in the other. Yes, we can show how a cake ultimately networks and internalises all of these relations and more. Nonetheless, pastry simply is not active in reorganising global production today in the way that computers are. Cakes just do not express so directly and clearly, and at the same time obscure so thoroughly, the techno-scientific processes that are transforming, in simultaneously progressive and appalling ways, life on Earth. It is to these questions that the discussion on new aesthetics must now, to some extent, turn.
It is quite staggering just how apolitical much of the NA discussion has become. This is despite the fact that Bruce Sterling opened his talk at SXSW with a series of comments regarding the economic and ecological crisis, and the Occupy movement. However, he did not go on to develop these questions in relation to his later discussion of NA … he left that work to us. I suggest that we start with the important footnote at the beginning of the chapter on ‘Machinery and Large-Scale Industry’ in Capital, where Karl Marx states that ‘technology reveals the active relation of man to nature, the direct process of the production of his life, and thereby it also lays bare the process of the production of the social relations of his life, and of the mental conceptions that flow from these relations.’ Commentators on NA would do well to reflect upon the range of relations that technology mediates for Marx here. Technology is in this conception radically political, and radically ecological. Amongst the many questions we might ask of the NA material, then, are questions like: What are the means of production embodied in these objects? What is the division of labour (between humans, and between humans, machines, and other actors)? What is the difference between material and immaterial labour in these processes? Ultimately, for both Bateson and Marx, technology and aesthetics are ecologically related: this is the pattern that connects.
Jon’s interests range across a network of architecture, process philosophy, radical cybernetics, urban political ecology, and the natural and cognitive sciences. He sometimes refers to himself as a metropolitan tektologist, for want of a better description. His work focuses on near- and medium-term future scenarios. You can find out more about Jon at rheomode.org.uk or follow him on Twitter @jongoodbun.
The Space Beyond Me by Julius von Bismarck and Andreas Schmelas
image source: Invisible University
- On simulation, aesthetics and play: Artifactual Playground

In 1958, the American physicist William Higinbotham created one of the first instances of what we would today call a modern "video game". The game, named Tennis for Two, was built at the Brookhaven National Laboratory for its yearly open-house presentations of the lab's activities. The game was built using an oscilloscope and a programmable analog computer, the Donner Model 30. It simulated a simple tennis match between two players, with a sideways perspective of the net and a ball bouncing back and forth, controlled by two player-manipulated inputs.

William Higinbotham, Tennis for Two, Brookhaven National Laboratory, 1958

Although it would take a few more years, namely 1962 and the game "Spacewar!", before we could see the emergence of a true modern form of "gameplay", Tennis for Two nevertheless contains enough basic elements of interactive play to connect it to more contemporary descendants, for example the iconic Nintendo hit, Wii Tennis. While there are a few missing details here and there, such as avatars, scoring and the various forms invented to interact with the machine, fundamentally very little has changed since Tennis for Two. It contains all the modern tropes of animated algorithmic representation, namely a highly kinetic visual form that emerges in real time from within the game via its gameplay. From this perspective, it is one of the forebears of "arcade"-style games. The game is fast and dynamic, and only by interacting with the system does the image emerge.

But perhaps most importantly, Tennis for Two is significant in that it is not only a representation of playable interactive visual forms, but that these forms represent something greater than their graphical output: the game is in fact a physics simulator of a ball moving through space and interacting with objects in its path. Watch how the ball bounces against the net and then try to imagine what it would take to program such a movement, even today; then remember that Higinbotham was working back in 1958. For its time, this is a sophisticated simulator of physical interactions:

"The 'brain' of Tennis for Two was a small analog computer. The computer's instruction book described how to generate various curves on the cathode-ray tube of an oscilloscope, using resistors, capacitors and relays. Among the examples given in the book were the trajectories of a bullet, missile, and bouncing ball, all of which were subject to gravity and wind resistance. While reading the instruction book, the bouncing ball reminded Higinbotham of a tennis game and the idea of Tennis for Two was born." — Brookhaven National Laboratory, The First Video Game?, p.2.

In other words, Tennis for Two was not only the first "Pong" game, but also the first physics game, à la Box2D and its shameless re-branding in the infinitely more popular form, Angry Birds. And as with Angry Birds' relation to Box2D, the underpinnings of Tennis for Two were already inscribed in the routines of the machine itself, the Donner Model 30. These routines were then re-contextualized using what we would today call "joysticks" and voilà: a modern arcade game.

Wii Dog vs Wii Cat & Angry Birds Live, T-Mobile

Given the historical context, there is nothing surprising in this idea of a computer simulating a physical phenomenon such as a bullet or a missile.
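To make the Brookhaven description above concrete, here is a minimal sketch, in present-day Python rather than the Donner Model 30's analog circuitry, of the kind of trajectory the instruction book describes: a ball subject to gravity and crude "wind resistance", bouncing when it hits the ground. The parameter values and names are illustrative assumptions, not a reconstruction of Higinbotham's actual circuit.

```python
# Illustrative sketch only: a ball under gravity and linear air drag,
# bouncing on the ground. All values below are assumed, not historical.

GRAVITY = -9.8        # vertical acceleration, m/s^2
DRAG = 0.1            # linear "wind resistance" coefficient (assumed)
RESTITUTION = 0.8     # fraction of speed kept after a bounce (assumed)
DT = 0.01             # integration timestep, seconds

def step(x, y, vx, vy):
    """Advance the ball one timestep using simple Euler integration."""
    vx += (-DRAG * vx) * DT              # drag slows horizontal motion
    vy += (GRAVITY - DRAG * vy) * DT     # gravity plus drag vertically
    x += vx * DT
    y += vy * DT
    if y <= 0.0 and vy < 0.0:            # hit the ground: reflect and damp
        y, vy = 0.0, -vy * RESTITUTION
    return x, y, vx, vy

# Trace a single lob; the analog computer did the equivalent continuously,
# with the oscilloscope beam drawing the ball's position in real time.
x, y, vx, vy = 0.0, 1.0, 3.0, 2.0
for i in range(300):
    x, y, vx, vy = step(x, y, vx, vy)
    if i % 50 == 0:
        print(f"t={i * DT:.2f}s  x={x:.2f}  y={y:.2f}")
```

Even this toy version makes the essay's point: the "image" of the game is nothing but the continuous output of a physical model.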
In the 1950s, computers were still emerging from World War II-era cybernetic formulations of "teleological" or "self-regulating" machines, precipitated in large part by the acceleration of faster and faster flying weapons that required new techniques for shooting them out of the sky (cf. V-2 countermeasures). The history of interactivity is traversed by this question of simulation, i.e. by the idea of adaptive mathematical and physical models that could allow machines to regulate themselves in real time, based on constantly evolving conditions. So while it might be considered a historical curiosity that post-war cybernetic machines would produce the modern video game, it is unsurprising that such a game would be constructed out of a physical simulator of bouncing balls or flying bullets and missiles.

Aesthetics, Simulation, Play

The historical relationship between aesthetics and play has always been a complex one. There is much overlap and interpenetration, but they are in no way interchangeable terms. Most performative art forms, such as theatre or music, oscillate constantly between the ludic and aesthetic realms. In the work of art-game pioneer Eddo Stern — for example his work with C-Level, or his newer Wizard Takes All — we can see these two domains interact with one another in a constant back-and-forth that suggests perhaps a more fundamental genealogy connecting the two. But despite the deeply connected roots, they are nevertheless two expressive forms that cannot be conflated, all the calls for games-as-art be damned.

Whatever the relationship between aesthetics and play, it is further complicated by the introduction of the principle of simulation into play, made all the more acute in the context of video games. Simulation questions the mimetic tendencies of representation, which might explain in part the constantly recurring uproar over violence in video games (and all the ire over provocative gamer-artists who apparently "hate freedom" ;-). But no matter how small-minded the complaints, people nevertheless understand that these games are not merely presenting us with mimetic representations of violence; instead, they are directly modeling the violence of the scene itself. The resulting image flows from the model; it is a "rendering" of the underlying scene. This is the specificity of simulation: the ability to represent the dynamics of a situation as itself a form of representation. The representation needs to be played in order to take form. This is the historical twist of simulation: the image has shifted from a predominantly mimetic function of re-presentation to that of rendering complex interactions visible through playability.

In fact, simulations can take place through other mediums and channels of perception. The American Old West simulator The Oregon Trail (1971), for example, originally used only textual communication to represent the state of the game. Although modern variants of The Oregon Trail, such as Red Dead Redemption, now use sophisticated graphics to represent the game state, the game is nevertheless animated by a simulation engine that cannot be reduced merely to the artifacts displayed on-screen.

The Oregon Trail (Apple II edition), 1971/1984 & Red Dead Redemption, 2010

A Poor Man's Simulator

The quality of the simulated movements of the Higinbotham/Model 30 ball and its interactions with the net is impressive, especially when compared to the clunky, almost weightless movements of Pong, designed some fifteen years later.
If there were so many games about space in the '70s and '80s, it might be because earthbound physical simulations are hard to design and certainly hard to calculate in real time, especially once you have moved from analog computers to digital ones. Physics is a mostly continuous, analog realm, and is hard, or slow, to calculate using digital circuitry. Although many games with bouncing balls and gravity would appear throughout the next few decades of digital gaming, it would truly take Erin Catto's Box2D and accelerometer-based controllers like the Wiimote and the iPhone for the form to emerge as a fundamental gameplay mechanic.

Why, then, did our first variant on what would later become Angry Birds appear so early? The prophetic nature of Tennis for Two can somewhat be explained by context: Higinbotham was a physicist, whereas Pong's inventors — Ralph Baer (Magnavox) and Allan Alcorn (Atari) — were engineers. Higinbotham was working with scientific instrumentation that did not adhere to the economic constraints or objectives of Baer, who was for his part trying to design mass-producible circuitry that could be plugged into millions of customers' televisions. But it is precisely this poor man's quality of video games' simulators that helped the ludic qualities of gaming emerge. Tennis for Two is frankly a little boring next to Pong, whereas Pong remains one of the best-designed games of all time, giving birth to an infinitely expanding field of variants all the way from Breakout to Bit.Trip Beat.

Ralph Baer and Bill Harrison Play Ping-Pong Video Game, 1969 & Bit.Trip Beat, Gaijin Games, 2009

One of the ironies of video game history relates to this desire to simulate infinitely complex interactions with access to only the most mediocre means of calculation. This contradiction has led to what might in some senses be considered a historical anomaly: an in-between period in which computer games' desire for "realism" would have to wait for the technological means to catch up.

A Poor Man's Renderer

This anomaly relates not only to the simulation itself, but also to the manner in which it is rendered to the screen. In this in-between period of video game design, situated somewhere between the late 1960s and Box2D (circa 2006), a cornucopia of visual forms emerged from video games that have given games their distinctive identity as an aesthetic form. We now identify video games as much by their visual artifacts as by their particular form of gameplay. A truly innovative game will in fact design a specific form of visual artifact, in order to better match the gameplay, outside of any criteria of realism. This approach will often go on to trump the simulation itself and become the central mechanism of gameplay. It is precisely because of the technological limitations of early gaming technology that gaming eventually found its singular language of representation, where the graphical artifacts would themselves become the playable form.

Artistic Playgrounds

This playable visual language has even circled back around to influence various forms of visual communication, in order to make them more "playful". And artists for their part have used this visual language of computer game artifacts to transform less electronic contexts into playable forms. The list of artists working in this space could go on almost forever: Mary Flanagan, Aram Bartholl, Damien Aspe, etc.
In the well-known work of French artist Invader, the city landscape becomes a platformer to be traversed literally, leaving behind physical pixels:

Invader Sneakers & Space Invader in Shoreditch, London

In the aforementioned Eddo Stern's "portal" sculptures, gaming logics of representation and interaction are re-projected back onto traditional spaces of representation (gallery, public square, etc.) in the form of sculpture:

Eddo Stern, Fake Portal, 2012

While neither of these examples is even playable as a game, they communicate nevertheless with the video game medium through this imperfect, unrealistic video game form of visual rendering. They look and feel like classical electronic forms of play. The artifactual visual language of video games is sometimes constructed out of a patchwork of various historical forms that have been redefined through the filter of gaming. Sometimes video games skeuomorphically imitate previous technologies and mediums, for example by flashing television-style signal noise to signify a weak connection, or by imitating hand-written messages and drawings strewn about a 3D world (cf. Myst, Resident Evil). But video games have also introduced their own domain of visual logic, based on the specific contours of the technological limitations that animate them. Often a closer reading is required in order to reveal the nature of these contours.

Raster-Scan

A strange by-product of the historical anomaly can be seen in the role of the pixel in video games. Originally, as was the case with Tennis for Two, games were built with vectors, as were many related visual technologies such as Ivan Sutherland's Sketchpad. In fact, Tennis for Two used vectors both for the simulated phenomena (force, velocity, etc.) and for the physical image constructed within the oscilloscope. This is completely logical if you're looking to construct a physics simulator. This vector-based approach is also the case today, where games are often built out of polygons which — assembled together — construct the playable scene. But somewhere in between Tennis for Two and our modern-day graphics pipeline came the pixel. And this anomaly, the pixel, continues to this day to profoundly influence the manner in which even vector-based images are rendered to our eyes.

Alan Kay, The Early History of Smalltalk, 1993

Like many of the computing concepts we take for granted today, the pixel concept was perfected in the late '60s and early '70s somewhere between Douglas Engelbart's Stanford Research Institute and Xerox PARC in neighboring Palo Alto:

"The TX-2 display that Ivan Sutherland used for Sketchpad [...] would project a single bright spot on a dark screen and then electronically move that spot around to trace out a circle, say, or the letter A. By tracing and retracing the pattern very, very fast, [it] could create the illusion of a solid outline. [...] The problem was that the more complicated the drawing, the faster you had to wiggle that spot. [...] Then there were the "raster-scan" displays that Bill English had developed for the "PARC Online Office System", POLOS. [...] The POLOS displays used digital electronics that were better suited to the binary world of computing: in effect, they would divide their screens into a fine grid of "pixels" and then make a picture by turning each pixel either on or off, as appropriate, with no shades in between. [...]
The programmers would have a much easier time devising graphics software to generate those images, because all they had to do was define a chunk of computer memory to be a map of the screen, one bit per pixel, and then drop the appropriate bit into each memory location: 1 for white and 0 for black. [...] Unfortunately, that use of the computer's memory was also the major difficulty with bit-mapped graphics: memory was very, very expensive in those days." — The Dream Machine: J.C.R. Licklider and the Revolution that Made Computing Personal, M. Mitchell Waldrop, Penguin Books, 2001, p.366.

In many ways, "bit-map" graphics are simply a historical hack used to generate text and images dynamically on a screen. In the case of the heavily text-centric Xerox PARC machines, one would assume that a more vector-based image generator would make more sense: typography is essentially a history of shapes built out of lines, with a visual language heavily influenced by the traits of handwritten letterforms. In fact, it took some thirty-odd years, led by Apple's "retina" ultra-high-definition screens, for bitmapped text to match the quality of the printed page. So it could probably be argued that the "bit-mapped" approach was historically the wrong one, even if it is now somewhat catching up.

Douglas Engelbart, Workstation With Mouse, Augmentation Research Center, circa 1964-1966 & Maze War, Xerox Alto, 1974

But from a purely technological, engineer's perspective, bit-map images make all the sense in the world. From the above quote we need only retain that "the programmers would have a much easier time..." in order to understand why the pixel approach won out. Computers are "discrete" machines, capable of switching parts of themselves off and on independently. This logic gives us random-access memory, which in turn gives us databases, which in turn gives us things such as hyperlinks. Machine architecture influences use, and to assume that this would not influence the resulting aesthetics is naïve. The infinitely re-configurable and re-contextualizing nature of the machine is the whole point of why we use these damn things. So an image construction method that closely matched this discrete logic, down to the very 0s and 1s of the machine's ABCs, was an important step in creating a "plastic" image, capable of reconfiguring itself multiple times per second (a minimal sketch of this one-bit-per-pixel logic follows below). It is out of just such a type of image that video games as a medium emerge.

Raster-scan vs. Vector-scan

Let's compare two images from two iconic video games from 1980, Battlezone and Pacman. Battlezone is a vector-based game, and originally used a vector-scan method for displaying shapes on-screen. This created razor-sharp images, albeit in black-and-white, or actually black-and-green. The use of vectors also allowed Battlezone to be one of the first mass-market games to effectively represent a three-dimensional scene, using the first-person perspective of a tank commander to navigate the game space. It would be many years before a pixel-based computer system could come anywhere near the visual elegance of early-1980s 3D games such as Battlezone, Star Wars or Tempest. One of the great iconic raster-based 3D games, Castle Wolfenstein, wasn't even in 3D at its introduction in 1981; and even when it became Wolfenstein 3D in 1992, that visual representation was made up of large blocky pixel shapes, far inferior to Atari's 1980s graphical representations.
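Before returning to Battlezone and Pacman: the bit-mapped scheme in the Waldrop quote can be sketched in a few lines, a chunk of memory treated as a map of the screen, one bit per pixel, 1 for "white" and 0 for "black". This is a hypothetical toy in modern Python, not PARC code; the 16×8 screen size and all names are assumptions.

```python
# A toy one-bit framebuffer: a bytearray treated as a map of the screen.
# Each bit is one pixel: 1 for "white" (on), 0 for "black" (off).

WIDTH, HEIGHT = 16, 8                     # illustrative screen size
framebuffer = bytearray(WIDTH * HEIGHT // 8)

def set_pixel(x, y, on=True):
    """Drop the appropriate bit into the memory location for (x, y)."""
    index = y * WIDTH + x                 # linear pixel number
    byte, bit = divmod(index, 8)
    if on:
        framebuffer[byte] |= 1 << bit     # switch the pixel on
    else:
        framebuffer[byte] &= ~(1 << bit)  # switch it off

def show():
    """Render the memory map as text: '#' for on, '.' for off."""
    for y in range(HEIGHT):
        row = ""
        for x in range(WIDTH):
            byte, bit = divmod(y * WIDTH + x, 8)
            row += "#" if framebuffer[byte] & (1 << bit) else "."
        print(row)

# Draw a diagonal line simply by flipping bits in the memory map.
for i in range(8):
    set_pixel(i, i)
show()
```

The "plastic" image described above is exactly this property: redrawing the screen is nothing more than rewriting bits in memory, which is also why, as the quote notes, memory cost was the scheme's great obstacle.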
But Battlezone's vector-scan technique also created some curious visual anomalies: objects on screen were fully transparent, defined solely by their outlines, without any possibility for image "textures" to fill in the gaps. This created the odd situation where an enemy tank could be seen transparently on the other side of an obstacle, but could not be shot at. In a sense this improved the gameplay and created part of the strategy of playing Battlezone — no matter what level of realism it achieved as a simulation. Ultimately it was a game made for fun, for play, but even so it would eventually be used by real tank commanders as a training simulator for their soldiers. The simulation was good enough to be a functional form of training in the real-world manipulation of tanks.

Visually, Pacman (a.k.a. Puckman) is a very different animal. Contrary to Battlezone, or even the more colorful Tempest, Pacman is practically drenched in color. The ghosts are brightly colored with different hues based on character traits, allowing players to read their individual algorithmic behavior within the game. The player's character, Pacman, is a completely opaque bright yellow animated blob, full of visual charm. Like the ghosts, he is full of personality. Color is even used as a gameplay element, allowing players to distinguish between dangerous ghosts (multi-colored) and edible ones (blue). Everything about Pacman screams "bit-map" techniques: the maze is a series of bit-mapped 0s and 1s, turned on or off to represent a wall or a navigable open space. And the dots or crumbs that we eat are also represented as a bit-map, i.e. a scattering of pixels that we have to turn off by running our character over them. In Pacman the gameplay, in fact the whole game algorithm, is directly controlled by the graphical representation, as opposed to Battlezone, where the graphical representation is often in contradiction with the physical simulation of interaction with physical objects. Pacman is a collection of pixels, he lives to eat other pixels, and the level is over when there are no more pixels to be eaten. Pacman essentially spends his time running around a memory map until he has effectively manipulated all the memory registers by setting them all to 0. The internal circuitry of the machine is visually exposed to the player, who is then asked to navigate this memory-register map and manipulate the digital switches via an on-screen representation.

Cellular Automata

While it is not technically a video game, and was in fact designed as a scientific simulation experiment, John Conway's Game of Life is nevertheless one of the best examples of these immanent pixel-plane spaces from which a "playable" image emerges. The "game" is played entirely by comparing one pixel to the pixels that surround it: with too many surrounding pixels, the pixel dies from overcrowding; with too few, it dies from lack of resources; and with just the right number of pixels, a new pixel is born (if none is there) or survives (if one is already alive). The visual representation of the life "game" is exactly the same map of values as the memory registers that control it. There is no representation of the simulation outside of the frame of the grid. Based on this immanent principle, a complex interaction of forms emerges, hence the term "game of life".
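The rules just paraphrased fit in a few lines of code. Here is a minimal sketch of Conway's standard rules (a live cell survives with two or three live neighbours; a dead cell is born with exactly three), written so that, as the essay puts it, the displayed grid and the simulated state are one and the same array. The grid size and the glider pattern are illustrative choices, not part of Conway's definition.

```python
# Conway's Game of Life on a small grid: the grid of 0s and 1s is both
# the simulation state and the image. Rules: a live cell survives with
# 2 or 3 live neighbours; a dead cell becomes live with exactly 3.

W, H = 10, 10                                  # illustrative grid size

def step(grid):
    new = [[0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            # Count the 8 surrounding "pixels" (cells beyond the edge stay dead).
            n = sum(grid[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy or dx)
                    and 0 <= y + dy < H and 0 <= x + dx < W)
            if grid[y][x]:
                new[y][x] = 1 if n in (2, 3) else 0   # survive or die
            else:
                new[y][x] = 1 if n == 3 else 0        # birth
    return new

def show(grid):
    print("\n".join("".join("#" if c else "." for c in row) for row in grid))
    print()

# Seed a "glider", a pattern that travels diagonally across the grid.
grid = [[0] * W for _ in range(H)]
for x, y in [(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)]:
    grid[y][x] = 1

for _ in range(4):
    show(grid)
    grid = step(grid)
```

Run it and the glider crawls across the grid: form emerging entirely from each pixel's comparison with its neighbours, with no representation outside the grid itself.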
Conway's Game of Life, 1970 & Runxt, R-Life for iOS

One of the best-known games of all time, SimCity, was directly inspired by this Conway thought-experiment: "[John Conway's work] is so extraordinary, because the rules behind it are so simple. It's like the game Go. [...] They can arise from fairly simple rules and interactions, and that became a major design approach for all the games: 'How can I put together a simple little thing that's going to interact and give rise to this great and unexpected complex behavior?' So that was a huge inspiration for me." — The Replay Interviews: Will Wright, Gamasutra, 23 May 2011.

In Conway's Game of Life as well as Wright's SimCity, the immanent pixel grid is the very space of the "game", conflating the pictorial representation and the simulated one. It is the "map" upon which the simulation of SimCity, an architectural construction if there ever was one, would be built.

Animation

Another significant trait found in pixel-based games such as Pacman, and largely absent from vector-based games, is the narrative dimension. Pacman tells a story, and even introduced comedic interludes every few levels, telling little Keaton-esque sketches of Pacman being chased by ghosts and then turning the tables to chase the ghosts in turn.

Pacman cutscenes, arcade edition, 1980 & Atari 800 edition, 1983

Many interactive characters were built out of these basic, often extremely limited collections of "bit-map" pixels: the whole Pacman family (Pacman, Ms. Pacman, Pacman Jr., etc.), Mappy, Dig Dug, Mr. Do, Mario, et cetera. Even established animated characters — such as Popeye — found their way into the heavily pixellated game screens of the 1980s. There is nothing arbitrary about this use of cinema-animation logic and aesthetics to animate the characters of early video games, for animation had already solved the problem of opening up cinematic figuration by eschewing realism and embracing the artificial nature of the image. Gertie the Dinosaur, Betty Boop and Felix the Cat, all the way up to La Linea and Don Hertzfeldt's pencil-drawn absurdities: these are all forms of reduction down to the visual interaction of a few basic visual forms. So too in video games: the key to their success in adding expressive characteristics came not from the militaristic, cybernetic-inspired instrumentation of scientific simulation. Instead, it came precisely from embracing the abstract, graphical nature of their primitive cousins, and from accepting the artifactual, visually limited detail of the early digital machines. In accepting this fate, video games tapped into a deep tradition of expressive visual tapestries that had been explored throughout the 20th century in cinema, through the work of experimental film-makers and animators such as Len Lye or Norman McLaren, who used simple abstract shapes such as lines, scratches, and blobs of color to great expressive effect.

Vanishing Points

Although the term is a bit dubious, we are exploring here the problem of realism, or perhaps more specifically that of mimesis, i.e. the art of imitation. A significant historical component of this debate on art and realism relates to the introduction of a very specific form of pictorial representation: geometric perspective of the sort demonstrated by Brunelleschi in the early 1400s. In our parallel history of video games — notably as it traverses its naive period of representation — we can likewise see some interesting effects of perspective as it relates to how images are constructed on-screen.
Due to the purely arbitrary nature of the discrete pixel grid, where any section can be turned on or off at will, a strange form of mixed perspective becomes possible, with multiple forms of perspective not only co-existing on screen but even interacting with one another. Pacman and the ghosts within the maze are completely devoid of principles of foreshortening and vanishing points, and are in fact a mixture of top-down vertical perspective (of the maze) and side-view perspective (of the characters), reminiscent of the early forms of perspective emerging in the work of Giotto where, to take an observation from Deleuze & Guattari in Mille Plateaux (p. 219), Christ alternates between divine receiver, enduring the stigmata, and kite-machine, commanding the angels and heavens via kite strings. The Brunelleschian style of geometric perspective is not yet fully developed at the time of Giotto, hence the optical oscillations, for a modern eye, between flatness and depth, foreground and background, and so on. Jesus is at once commanding Saint Francis and simultaneously being flown by him like a kite. It is only through narrative cues, understood by semiotically reading the painting, that we are able to reconstruct these spatial relationships between the various figures. As in many paintings from the Middle Ages to the early Renaissance, perspective in early video games contains multiple points of view, and often chooses its perspectival representation based on contextual narrative needs. These are naive and/or mixed perspectival geometries (cf. Tapper, Zoo Keeper, et al.) that have recently been exploited to brilliant effect in Polytron's visual delight, Fez.

Tapper, Bally Midway, 1983 & Fez, Polytron, 2012

We could also mention Game Yarouze's Echochrome, where the gameplay takes place somewhere in between the stages of the OpenGL pipeline in which vector data is rasterized into pixel data: the rasterization itself becomes a gameplay mechanic, as players exploit visual absurdities and try to line them up.

Echochrome, Game Yarouze, Japan Studio, 2008

Such hybrid forms of perspective would have been much harder to achieve had gaming stuck with purely vectorial and mathematical forms of representation.

Visual Abstractions

It might be tempting, based on such an art-historical exposition, to start comparing video games to the history of art and graphic design. For example, it would be fairly easy to visually juxtapose the paintings of Piet Mondrian/De Stijl with Taito's 1981 arcade classic, Qix:

Piet Mondrian, Composition 10, 1939–1942 & Piet Mondrian, Composition II in Red, Blue and Yellow, 1930 & Taito, Qix, 1981

Obviously, on some level there is a visual inheritance taking place, whether explicitly, culturally or unconsciously, even though such causalities are either impossible to prove or, even if true, merely anecdotal. Another juxtaposition might be to look at the Russian avant-garde, starting with El Lissitzky, and compare his visual language with the shapes and forms of more abstract video games, including early 3D games that had not yet perfected their perspectival rendering engines:

A Prounen, El Lissitzky, c. 1925 (cf. Prouns) & Sixty Second Shooter, Happion Laboratories, 2012
Blaster, Williams, 1983 & Ballblazer, Lucas Arts, 1984
Rez, Tetsuya Mizuguchi, 2001

The problem, ultimately, with all these approaches is that these are merely visual cues and not aesthetic ones.
The problem with just such a visualist reading is that it assumes that both De Stijl and Taito constructed their representations purely as visual tableaux — in other words, as just a bunch of pretty pictures — instead of looking at the material, conceptual and historical visual languages and logics that might have led them there. In the case of Qix, it would probably be far more instructive to compare its geometric abstractions to the early MacPaint software, and to Bill Atkinson's visual algorithms that made it possible, especially since these routines would go on to influence gaming history via Bill Budge's Pinball Construction Set. To begin with, both Qix and MacPaint were built as profoundly raster images, and both use similar algorithms for "painting" in their geometric forms. But more importantly, much of Atkinson's work, like that of Qix, was an attempt not only to find an algorithmic method for interactively constructing visual output, but to do so within the constraints of a Motorola 68000 microprocessor and 128 KB of memory.

MacPaint, Macintosh, 1984 & Pinball Construction Set, Bill Budge, 1983

And again, we can see even in these early days of MacPaint that, in order to present the computer image in a visually compelling way, Apple's marketing machine opted to look back to previous techniques of image construction, here the Japanese woodcut, and not to that of the photograph.

Pixel Clouds

One of the most beautiful games to emerge in the last few years is Proteus, a love-letter to this naive period of highly pixellated gaming. Only here, the game is rendered with a modern vector-based graphics pipeline. This creates a strange oscillation between the utterly fluid 3D navigation and the giant blocky pixellated landscape. Trees, shrubbery, waves, raindrops, animals: everything has been reduced down to a limited grouping of pixel blocks. In Proteus, we walk around the simulation of an island world and explore its aesthetic qualities: sound, color and shape all interact in an elegant generative landscape. There is no real "goal" to the game, although season-shifts can be provoked in a pleasant transition that eventually leads the player to new forms of gaming experience. The whole experience suggests that perhaps some new media form — of an entirely new quality — could be afoot in what we call gaming, although I cringe to qualify such a future as "just over the horizon", because gaming has been promising such an unattainable land for the past several decades. Still, the hope here is that this emerging form is less about Holodecks and more about the raw interactive audiovisual experience of this new media form. The ultimate goal of Proteus, I suppose, is that of aesthetikos, i.e. sensation, or perhaps more accurately the experience itself of human sensing. In other words, we are talking about aesthetics in the Kantian sense of a search for beauty — via the senses — that eventually discovers itself in the limits of its search (cf. the Sublime). The overall effect turns out indeed to be highly romantic, something akin to a multidimensional interactive 8-bit rendition of a Turner-esque tone poem.

While playing Proteus recently, I found myself in a curious situation. I was high atop one of the hilly peaks of the island, watching as night began to fall and rainclouds emerged below.
As I descended from the hill onto the rain-soaked plains, I suddenly found myself awash in a pure sea of color that at first felt like a visual glitch: while I could still move somewhat, it seemed that any direction just led me to more colored polygons rendered as flat shapes. For a few moments I even imagined that the game engine had crashed, and I started to reach for the ESC button to get myself back in control of the machine. But then, slowly, I began to realize that I had merely descended to the level of the clouds themselves and was swimming in the middle of their visually depthless space. Anyone who has flown in a plane knows this de-spatialized zone of traversing the clouds: there is no focal point or point of reference, and everything feels atemporal and ethereal. Essentially this is what happened to me looking through the little portal of my computer screen, the same logic taking place on a purely representational level of pixels that refused to figure the depth contours of the objects in space. Finally, I just leaned back and watched as abstract geometric shapes of treetops re-emerged, only to be submerged again in swaths of color as waves of clouds chased ever more waves of clouds. It was a profoundly pleasurable oscillation between recognition and disorientation, one of the key ingredients of many successful works of art. Eventually the cloud formation began to recede from my point of view, and the three-dimensional perspective of the landscape re-emerged, re-aligning the simulated first-person perspective of my view portal onto a three-dimensional landscape.

The beauty of the moment had something to do with what the art historian Hubert Damisch calls the théorie du /nuage/, or theory of /cloud/. The term /cloud/ is written with two slashes in order to reconstruct in text the odd nature of clouds: their recession from realism and perspective, and their re-apparition within the tableau in the form of a semiotic signifier, almost like a placeholder or an asterisk. Clouds in classical painting are the limit of perspectival representation, the resistance of aesthetics to the mere logics of mimesis and perhaps even of representation. Whatever the case, they are the limit of the realism model of aesthetic forms (cf. Cory Arcangel's Super Mario Clouds). This limit of perspective within a three-dimensional simulator takes us back to Battlezone and its visual, artifactual limits. And this limit speaks to one of the fundamental problems confronting video games today, beyond the problem of figuration and, by extension, the problem of figuring the human face. The representational limit of the /cloud/ in Proteus is what we could call the limit of realism as a model for what simulations, and thereby gaming, seek to achieve.

Taken to its limit, these clouds of Proteus have their cousin in a wonderful little game built by two lifetime members of the glory days of the Atelier Hypermedia: Pascal Chirol and Grégoire Lauvin. In their collaborative piece NEVERNEVERLAND Color Suite, a 3D simulator and a joystick open up a landscape of nothing but infinite gradients of color. Consider it a 3D simulator of navigation within the color selector of your favorite painting software. And it is also probably one of those outer limits that only an artist can propose to the world of gaming in its relationship to the aesthetic realm: a landscape of color, a perspective of visual artifacts, as itself the "goal" of the game.
Via play, via simulation, we are now beyond play, beyond simulation, and even beyond figuration; the play has moved into the aesthetic realm, the domain of sensation, opening up an entirely different sphere of experience than that of the reconstruction of a physical world. This is a playable aesthetic world, not beyond ours, but instead immanent to a new field of perception within our world: the realm of artifactual play.

This post first appeared on Douglas Edric Stanley's blog. For more interesting observations, […]
- Beyond the Pixel Sculpture – V2_'s New Aesthetic, New Anxieties

The pundit scrum that massed around the New Aesthetic may not have yielded the overarching conversation about digital aesthetics that we needed to have in 2012, but it was the one we got. Fueled by the forward-thinking 'curation' of James Bridle's tumblr and a related SXSW panel, interest in the New Aesthetic exploded after Bruce Sterling penned a wildly evocative essay on his WIRED blog this past April. In the weeks and months that followed, a few dozen creative technologists, curators, theorists and foresighters conducted a distributed Monte Carlo experiment to hash out a rough consensus: a) to determine whether something novel was indeed going on, and b) to consider what some of the implications of this pervasive "eruption of the digital into the physical" might be.

Rotterdam's venerable V2_ Institute for Unstable Media recently hosted a booksprint to produce New Aesthetic, New Anxieties, a short book focused on leveraging the excitement/confusion/controversy around the New Aesthetic to inform ruminations on computation, curation and—of course—aesthetics. First things first: it is really important to note that New Aesthetic, New Anxieties was authored in just four and a half days (!!) by an interdisciplinary team of curators, writers and academics. It is hard to know exactly what standards to hold a text produced this quickly to, but I'm happy to report that my impression of this undertaking is definitely net-positive. In fact, it is a real testament to the expansive knowledge and experience of the team involved (and presumably the guidance of facilitator Adam Hyde) that this short book can cover as much ground as it does and generally succeed in its varying ambitions.

Structurally, New Aesthetic, New Anxieties sets out to achieve three major goals: to introduce, contextualize and re-frame the frenzy of commentary inspired by Bridle and Sterling; to consider related curatorial implications; and to explore the New Aesthetic as representation. The first chunk of the book provides a cursory description of Bridle's tumblr and ignites a broader conversation about computation through a survey of numerous responses to Sterling's initial post, discussion-list fodder and media theory from the last few years. Marius Watz, Wendy Chun, Mez Breeze, Geert Lovink, Christiane Paul, Greg Borenstein, Madeline Ashby – and this is just a 'core sample' of the commentators addressed. While the pace is furious, this is a pretty fabulous contextualization (and problematization) of many of the major voices that have weighed in on, or are directly relevant to, various facets of the New Aesthetic.

The 'curatorial readings' portion of the text stumbles out of the gate with a fairly tedious reading of 'a blogpost as exhibition'. I can't really see the value of attempting to reverse-engineer what Bridle was thinking when he decided to include certain material on the New Aesthetic tumblr, and I found the subsequent related conversations about online remix culture/curation and key precedents much more engaging.
This section closes with some bang-on commentary considering how projects like Aram Bartholl's Map and Dead Drops, Julius von Bismarck's Topshot Helmet and Wafaa Bilal's Domestic Tension (all deeply invested in embodiment) are quite estranged from their online representations, underscoring how tricky it can be to contemplate digital aesthetics if our point of reference is simply marginally-captioned photos and videos of projects posted to <insert web service here>. The text concludes with an even more fine-toothed examination of 'screen essentialism', and complements this with investigations into the limits of perceiving/experiencing computation and a nuanced survey of the political opportunities and blind spots associated with this 'eruption' of interest. New Aesthetic, New Anxieties does not terminate with any specific mandate or conclusion, and the book is best approached as an annotated map of a contentious landscape that has yet to come completely into focus.

Publication page (available in EPUB, MOBI and PDF formats)
Authors: David M. Berry, Michel van Dartel, Michael Dieter, Michelle Kasprzak, Nat Muller, Rachel O'Reilly & José Luis de Vicente (facilitated by Adam Hyde) | V2_ Institute for Unstable Media
See also The Politics of the New Aesthetic: Electric Anthropology and Ecological […]
- “A Philosophy of Computer Art” by Dominic Lopes [Books, Review]

A Philosophy of Computer Art is a text that may interest some readers of creativeapplications.net, as it covers the intersection of computing and art, discusses some of the classics of interactive art, and does a lot of thinking about what art that uses computers actually is. In it Dominic Lopes does several things very well: he divides what he calls “digital art” from “computer art”, and he correlates that second term, which I’ll put in capitals to mark that it’s his term, Computer Art, with interactivity. He also articulates precise arguments for computer art as a new and valid form of art, and defends his new term against some of its more tiresome attacks. As a quick example, Paul Virilio’s concerns about the debilitating effect of “virtual reality” on thought are more than a little reminiscent of Socratic concerns about the debilitating effect of writing on thought, and point to an interesting conclusion: what we call thought is a technologically enhanced phenomenon. Note Friedrich Kittler: most human capacities are enhanced in some way or another, with no great damage to the notions of “humanity” or “human”. It’s little more than a failure of imagination to thunder about how those augmentations debilitate the natural state of humans.

Lopes also makes several extremely astute observations about the nature of interactivity and repeatability, comparing Rodin’s Thinker, Schubert’s “The Erlking”, packs of refrigerator-magnet letters, and true interactivity in artwork, and concluding that interactive work has distinct characteristics. What he comes to, or what I read him as coming to, is this: a structured and rule-based experience is interactive. “A good theory of interaction in art speaks of prescribed user actions. The surface of a painting is altered if it’s knifed, but paintings don’t prescribe that they be vandalized.” Reduced even further: grammar plus entities plus aesthetics equals interactivity. He also makes, to pick just a few, excellent arguments for the interpretive necessity of a viewer in automated displays, astute observations about the potential value of a computer art criticism, and a case for the nature of technology as a medium.

But Lopes is also a philosopher, and philosophers seek to, among other things, define categories. Painting, sculpture, dance: these categorized mediums have all served us well over the years, and so the thinking goes, why not extend them and add another: Computer Art. I’m not so sure that the idea of Computer Art as, with an admittedly blunt reduction, “stuff on a computer that allows you to participate in it presenting itself” is particularly useful. My feeling is that this is neither what interactive art or art made in collaboration with computers presently is, nor a meaningful extent of what it should be. The device is not the method, nor is it the extent of what makes this type of artwork rich and meaningful, and computers aren’t really the medium: algorithm and computation are the medium. In Form+Code, Casey Reas and Chandler McWilliams are right to point to Sol LeWitt as an earlier exponent of explicitly algorithmic art and to tie that into the current computational and algorithmic art-makers. A “computer” originally was one who did computation, that is, a person sitting with a slide rule, pen, and paper; the term was only later applied to machines. The idea of computation is that it offloads a pre-existing human capacity, accentuates pre-existing things in the world.
The person who calls their friend on their cellphone describes their action as “calling my friend”, not “using my cellphone”. The person using Ken Goldberg’s TeleGarden (a work mentioned frequently in APAC) is marveling at how they can collaboratively participate in creating a garden, not at how they can control a machine via a network. The point is not the device -- the point is interactive computation, the extension of human aptitude and capacity, and the type of relationship with the world that it enables. His insistence on the primacy of mediums and forms is doubly odd because in his finale Lopes emphasizes that “computer art takes advantage of computational processing to achieve user interaction”. Close, but not quite there.

I’m nitpicking, and admittedly so, because he’s looking at works that are unmistakably “Computer Art” by his definition of it. Computer Art is meant to be a measure of degrees, a spectrum. One looks at Scott Snibbe’s work and sees a computer system and an interactivity. Golden Calf, a work he references multiple times, sits very firmly in the Computer Art circle of the Venn diagram of machine-human art-making experience. These are the easy examples, those that lend themselves most easily to the account of interaction in artworks that he describes. But I’m nitpicking for a reason: it’s painfully limiting. It says that computer art is things that run on a computer, with which I interact and observe a display, where I consequently understand how my actions are interpreted. This seems naïve to the ways that computation actually functions in our lives, and an oversimplification of how people think computation can function in their lives. It also seems reductive of what forms art can take and of how the conversation that is art-making can evolve.

For instance: Wafaa Bilal’s Domestic Tension, a piece far more indebted to performance art than to sculptural installation. There is quite a bit more at play there than my seeing the manipulation of pixels, and there is more to my understanding of how the piece functions and signifies than understanding that I am speaking with and through a computer. Another example: Men in Grey. Is this interactive art? Not in many senses: I never interacted with it, nor would I say that interacting with it is necessary to understand and experience it. It has far more in common with Situationist/Lettrist works than with installation art, and yet it is computer-based; one does interact with it by well-known protocols and through well-established rules; it has a display. It uses computation and networks, and yet it is not about manipulating a computer or a network to create display elements, nor is that the forefront of it. Nor are EyeWriter, Natural Fuse, and a slew of other works and projects that I find most meaningful and engaging.

In the philosophy of aesthetics, philosophically strong categories are at times preferred over meaningful categories because of the defensibility of strong categories. Painting as a category of artwork is not deeply meaningful in many ways (consider the question “do you like paintings?”), yet determining how much something is or is not a painting is quite easy and categorically meaningful. “Minimal” as a style (of one’s furniture or aesthetic) or strategy (“minimalism”, with attendant connotations) is a much more meaningful designation, because it has historical precedents and significations, and because it extends beyond a particular category to cover a manner of production and reception.
That is, it describes communicative strategies, which Lopes indicates is one of the goals of the interactivity in Computer Art. However, “minimal” is a terrifically difficult thing to pin down into categories, and yet it is descriptive, historical, and fundamentally meaningful as a description of an aesthetic practice. To play a small linguistic game, describing speech as “he spoke with words” is a bit odd; to describe it as “he spoke with silence” makes more sense, because one does not normally make speech with silence. Likewise, Computer Art seems primarily to describe a situation of abnormality (“this is art that involves you interacting with a computer”) that I believe few people actually find particularly abnormal, and that will be less meaningful, if not nearly meaningless, in the near future. Lopes’s text is an excellent opening of what I hope will be an interesting discussion that attempts to unravel the relationship between new forms of narrative, expression, and communication and the previous ones. He weaves together an excellent web of references, from Umberto Eco to Clement Greenberg to Lev Manovich, and references a wisely chosen group of artworks to bolster his argument. The example A Philosophy of Computer Art sets is in its handling of complex arguments against the sort of odd disqualifications that occasionally are leveled against Computer Art. Its insistence on categorical logic and on mediums as definitive categories is a small aberration in what is otherwise an excellent text and an opening of a new type of discourse about what creative computing might possibly mean. apoca.mentalpaint.net Purchase on amazon.com / amazon.co.uk Rafael Lozano-Hemmer “Blow Up”, 2007; Daniel Rozin “Wooden Mirror”, 1999; Scott Snibbe “Boundary Functions”, 1998; Camille Utterback & Romy Achituv “Text Rain”, 1999 -- Joshua Noble is a writer, designer, and programmer based in Portland, Oregon and New York City. He’s the author of, most recently, Programming Interactivity and the forthcoming book Research for […]
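As promised above, here is a toy rendering of Lopes’s “prescribed user actions” in code. Everything in it (the class, the action names) is my own invention for illustration, not anything drawn from the book:

```python
# The work defines a grammar of prescribed actions; anything outside that
# grammar may alter the object, but is not an interaction the work
# prescribes. Lopes's knifed painting is the limiting case.
class InteractiveWork:
    PRESCRIBED = {"touch", "speak", "wave"}     # the work's action grammar

    def __init__(self):
        self.history = []                       # interaction shapes display

    def act(self, action: str) -> str:
        if action not in self.PRESCRIBED:
            return f"'{action}' alters the object but is not prescribed"
        self.history.append(action)
        # the display is a function of the whole interaction history,
        # not of the last action alone
        return f"display state {len(self.history)} in response to '{action}'"

work = InteractiveWork()
print(work.act("wave"))    # prescribed: part of the work
print(work.act("knife"))   # vandalism: alters the surface, not the work
```

The point of the sketch is only that the work itself defines which actions count as interaction; the knife alters the canvas but sits outside the work’s grammar.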
- Designing Programs [Theory] (This essay was commissioned by Centre national des arts plastiques for Graphisme en France 2012) - Edited by Casey Reas and Chandler McWilliams - Technical mastery and innovation are part of the rich history of visual design. The printing press is the quintessential example of how a shift in design technology can ripple through society. In the twenty-first century, innovation in design often means pushing the role of computers within the visual arts in new directions. Writing software is something that’s not typically associated with the work of a visual designer, but there is a growing number of designers who write custom software as a component of their work. Over the last decade, through personal experience, we’ve learned many of the benefits and pitfalls of writing code as a component of a visual arts practice, but our experience doesn’t cover the full spectrum. Custom software is changing typography, photography, and composition, and it is the foundation for new categories of design practice that include design for networked media (web browsers, mobile phones, tablets) and interactive installations. Most importantly, designers writing software are pushing design thinking into new areas. To cut to the core of the matter, we asked a group of exceptional designers two deceptively simple questions: 1. Why do you write your own software rather than only use existing software tools? 2. How does writing your own software affect your design process and also the visual qualities of the final work? The answers reflect the individuality of the designers and their processes, but some ideas are persistent. The most consistent answer is that custom software is written because it gives more control. This control is often expressed as individual freedom. Another thread is writing custom software to create a precise realization of a precise idea. To put it another way, writing custom code is one way to move away from generic solutions; new tools can create new opportunities. Experienced designers know that off-the-shelf, general software tools obscure the potential of software as a medium for expression and communication. Writing custom, unique tools with software opens new potentials for creative authorship. LUST / lust.nl In our studio, form is a result of an idea, which is the result of a process. When approaching a project we try to be as open as possible. Through research and analysis we let the idea emerge from something already embedded in the project itself: something that was perhaps already present, but that needed to be highlighted. From there we look for the best way to execute that idea, and in doing so develop the form and concept further. Because we approach projects in this way, existing software/tools are often insufficient to properly execute an idea. We also tend to arrive at ideas that require new ways of thinking about how to deal with everything from typography to data to interactivity. In these cases the development of custom software and tools is a natural extension of the process, and can be instrumental in the development of the idea. While designing in code is quite different from ‘traditional’ design methods, these kinds of processes have always been present in our work. Since we started our studio 15 years ago, we have adhered to a process-based methodology in which an analytical process leads eventually to an end-product that designs itself. This coincides very well with the idea of writing your own code and building your own tools.
The transition to these kinds of working methods from more traditional approaches was a very natural one. At a certain moment you realize that there is no other way to execute an idea than to build it yourself from the ground up. This frees you from the constraints of pre-packaged software and allows you to maintain a closeness to your ideas that wouldn’t otherwise be possible. While any medium will have an impact on the visual outcome of a project, we feel that building your own project-specific tools gives you back the opportunity to control and manipulate the inherent visual qualities of the tools you’re using. In the end, the visual quality of a work should be relevant to the project itself, rather than rooted in a particular approach or technique: the outcome should speak for itself. LUST’s cover for the book Form+Code in Design, Art, and Architecture is generated from frames taken from the movie 2001: A Space Odyssey. Together with the back cover and front matter of the book, the circle is revealed as an O, spelling FORM+CODE. Nicholas Felton / feltron.com A few years ago my work became almost exclusively data-driven and my design process became increasingly centered on a rules-based approach. I developed a set of processes for creating maps and charts that were effective, yet laborious and time-consuming. It soon became apparent that in order to produce more and to tackle larger data sets, I would need to find a way to automate the routines I relied on. With Processing, I have been able to design applications that channel my methods, instead of bending my approach to work with existing software. These applications are accountable, so that if the output doesn’t match my expectations I am able to audit the code and find the issue. They are also inherently malleable, allowing me to mold the code to fit each project. When I first began writing software, the programs I designed simply allowed me to do more of the same work in a shorter period of time and in a more flexible manner. As a result, the final product was not impacted by the use of software. With more practice and familiarity with the tools, I have started to produce work that would have been unfeasible or impractical using manual methods. I have experimented with maps that rely on difficult algorithms and developed tools that allow me to test a range of variables before outputting a final visualization. Spread from the 2010 Feltron Annual Report, designed by Nicholas Felton. The 2010 Annual Report catalogs the life of Felton’s father, who passed away earlier in the year. Amanda Cox (The New York Times) / nytimes.com Mad Libs is a game where key words in a short story have been replaced with blanks. Players fill in the blanks with designated parts of speech (“noun”, “adverb”) or types of words (“body part”, “type of liquid”), without seeing the rest of the story. Occasionally, hilarity ensues, but no one really believes that this is an effective method for generating great literature. In the same way, fill-in-the-blank templates rarely generate great news graphics. Admittedly, generic solutions work perfectly well in some cases. For example: “The _______ (stock index) has fallen _______ (adverb) since _______ (year). [Line chart].” But these are rarely the sorts of graphics that reveal anything unexpected or inspire new ways of looking at the world. Instead, some of the most compelling news graphics exploit structure that is unique to a particular data set.
This may require more control than a prepackaged solution, as the same form applied to a different topic would reveal nothing of interest. But if you are nimble enough with points and lines and text, you can do pretty much whatever you want. And this means you get to spend your time exploring and dreaming and wondering “what if”, instead of trying to override the default choices in a software program. If you get lucky, it might even infuse the work with a sense of wonder. This news graphic by The New York Times shows discontent with the political party in power. Nearly all districts voted more Republican (red arrows) in 2010 than they did in 2008. Erik van Blokland (LettError) / letterror.com When I started developing my own tools, I became more critical of existing tools. I had less patience for limitations imposed by others. Building tools offers a powerful perspective on design: the code is there to serve the idea, not the other way round. It means fewer compromises, and when there are compromises to make, at least they are mine and easier to live with. I don’t think I am a purist; I will happily use existing tools if they are fit for the task (with wildly varying criteria). But the idea, the direction of a design, should lead the process, not the arbitrary limitations imposed by existing tools. When something can’t be done with a specific tool, one should try to improve it or build a better one, but not necessarily compromise the idea. Good ideas are rare; we need to patiently farm millions of them to find one. Killing them before they grow is wasteful. In some of my projects the code gets to touch the shapes, like filters that synthesize detail or generate patterns or texture. Complex things can be generated (relatively) quickly and evaluated. Doing such things “by hand”, even while using a computer, could take much longer, forcing me to commit to a set of parameters without realizing the full implications. These kinds of projects are very specific, personal, and close to the design. Write, generate. Evaluate. Tweak (code, parameters, or both), generate again, repeat. Design iterations are code iterations; the code is as open-ended as the design. Often, though, my code seems to be behind the screen, not touching the shapes directly but enabling directions, whole trees of trees of designs: for instance, libraries that standardize objects representing specific data, or an open, documented file format to replace a proprietary and undocumented “industry standard”. These abstractions make it a bit harder (sometimes) to find the energy to build, but the tools that grow on them are very powerful. Here structure and collaboration are important. At which level are the things we agree on and can share, and where are we going to go our own way? These are very interesting questions: difficult to answer, but part of a very interesting discourse about creative work (design, art, more code) and about the methods we have (and the ones we’d like). The top letters are from typefaces that have been designed and generated with Erik van Blokland’s tools: Trixie HD, Eames, Federal. The images below are Robot fonts created with Just van Rossum, the Superpolator color field and the roboFab object model. onformative / onformative.com Existing software often restricts implementation possibilities and can even predetermine solutions by dictating what can be done with these possibilities. By writing our own software we break through such barriers and simultaneously create new ways of working with the design process.
This process, in which the tool grows and develops with the design, is what excites us. Of course, we also use existing software when it makes sense to do so, because we believe the skillful combination of existing software and our own software is the most effective way to reach the best results. Ultimately, one rarely writes everything anew, but rather builds on existing components, or takes elements from the libraries or code snippets of others. One combines the existing with the new, and, through this combination, creates new results based on one’s own ideas. This is possible thanks to the active exchange in communities like the processing.org forum. The design process is no longer divided into concept, design, and production; rather, the design process blends with the production process, and the product is created in many small iteration steps in which idea, design, and programming are always closely entwined. When writing one’s own software, the creative work and its implementation are mutually dependent, and the separation of design and production is abolished. Because one has a very different insight into the working methods and detailed processes of one’s own software, there is far more room for experimentation at one’s disposal, which has a direct effect on the quality of the work. “Fragments of RGB” by onformative is a disintegration of an LED screen as an interactive installation and a series of photographs of the transformation. Catalogtree / catalogtree.net You are what you eat. We write our own software mainly to automate repetitive tasks, but it is also important to us as part of our ongoing attempts at experimental tool-making. Choosing a production technique is an important design decision to us, and building tools, hardware or software, is a way of avoiding obvious choices. But this process is uncertain, tinkering really. We are amateurs at it by choice. So not all tools find their way into commissioned work: we still have no direct use for our iPod-controlled carrousel slide-projector, a fairly accurate push-pin firing device for moving targets, a 3D scanner, water rockets, or a swarm of vibrobots to draw on a printing screen. But what is important is the belief that the most beautiful sites are just outside the reservation. Programming is the process. Some time ago, we spent a year working in separate locations. Daniel worked in an office building in Rotterdam (great view), and Joris was working from a studio in his backyard (whistling birds). We designed by talking on the phone. Though it was not ideal at all, it wasn’t completely unnatural to us either. We have our own vocabulary when we discuss projects, and we sketch by describing designs to each other. This means that most of our designs are language-based and finished before becoming visual. Over the phone, it is of no use to describe every single page of a book, every margin, every adjustment to the kerning of a headline. But describing a system that generates a possible flock of birds from the smallest information unit in the content can take five seconds. In swarm systems, the behavior of one unit does not predict the behavior of the swarm as a whole. We aim at designs that have some swarming capacity, where we know what the smallest information unit should do but not what the final design might look like. To us, a good design is more than the sum of its parts. However, good design also means picking the right pink and the right typeface.
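Catalogtree’s “swarming capacity” is easy to demonstrate in miniature: give every unit one small local rule and let the arrangement emerge from repetition. A purely illustrative one-dimensional sketch, nothing like their actual tooling:

```python
import random

# Forty units on a line; each follows a single local rule: drift toward
# the midpoint of its two neighbours, plus a little noise. No unit knows
# what the final arrangement will be; it emerges from repetition.
positions = [random.uniform(0, 100) for _ in range(40)]

for step in range(200):
    updated = []
    for i, p in enumerate(positions):
        left = positions[i - 1]                    # wraps around at i == 0
        right = positions[(i + 1) % len(positions)]
        target = (left + right) / 2
        updated.append(p + 0.3 * (target - p) + random.gauss(0, 0.5))
    positions = updated

print([round(p, 1) for p in positions])  # the arrangement: emergent, not placed by hand
```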
We will not follow an algorithm in a dogmatic way if it generates the wrong results. It is not in the first place about what the rules are; it is about what you do when you end up in a place not covered by those rules. There is room for a cherry on top. Catalogtree building a crystal radio beneath the gaze of their Thomas Castro woodcut portrait. Boris Müller / esono.com It is quite intriguing that most of the software tools we use in our everyday life resemble a specific activity from the analog world, or even the analog past. Even creative, visual work on the computer is still based on manual input: the designer uses software tools to manually produce a formal output. But creativity is not only about manual work; it is also about ideas. And in terms of ideas, software is a vast space. Like any other language, programming languages are about expressing ideas. They allow one to create enormous complexities that remain consistent and stable. So instead of manually crafting an image, I generate an idea in a formal language and turn this idea into any number of images. Beautiful ideas do not necessarily generate beautiful images. In the design process, I have to work on two different levels. The first is about turning the idea into an abstract system. The second is about translating the system into a visual form. The translation process is not deterministic. Sometimes it is obvious and strongly related to the abstract system, but very often I have to make a lot of design decisions that are purely based on the quality of the visual outcome. Turning an abstract idea into a meaningful image still needs the mind of a designer. Generative visualization of the poem “Nr. 12” by Eugene Ostashevsky. Jonathan Puckey / jonathanpuckey.com I love the feeling of being immersed in the functioning of a visual language of my own making. I put on my developer hat and think about the features and limitations I need as a user, balancing the authorship I embed through the conceptualization and engineering of the software against the authorship I or others can create by using the software in different ways. When I use my tools, I want to forget about the underlying complexity of their functioning and focus purely on mastering the way my input sparks an output. This splits the process of design in two: I design the tool, and then use it to design with. Often I am able to capture the concept of the project I am working on in the functioning of the tool, making it malleable and explorable while designing with it. I consider my tool-based works successful when the viewer is able to visually recognize the collaboration (or even struggle) that is happening between the simplicity of the tool and the complexity of its input. There is No Thirteenth Step, designed using Lettering Tool, a Scriptographer typography tool by Jonathan Puckey, in 2005. Marcus Wendt (FIELD) / field.io As a student I loved the new aesthetics of modern painting and architecture. I wanted to combine elegant and minimalist structures with complexity and emotional richness, as if bringing together Zaha Hadid with Gerhard Richter. It took many attempts to realize that writing code could play a significant role in getting onto this path. Traditional design tools follow the aging metaphor of a single artist working tediously at his desk, creating static images with a great deal of manual labour.
The more I learned to code, the more I realized how immensely dynamic working with these new tools can be: you can create living digital creatures, films that look different every time you watch them, and design tools that make 10,000 digital paintings in a day. Writing code follows a bottom-up architectural approach and therefore emphasizes the process over the final result. Instead of working towards a single image, you start to think in the possibilities of a system. Designing a process rather than the end result forces you to be open to unexpected results, to work with them, and sometimes to embrace an outcome that surprises you. It’s hard to cheat when you’re working with code: before you can write something down, you have to clarify your ideas. It’s a bit like planning and building a house, with the major difference that once you’re done you can go back and change your foundation to get a dramatically different result. Communion is a site-specific generative installation designed and coded by FIELD, with creative direction by Universal Everything and sound by Freefarm. Image courtesy James Medcraft. Sosolimited / sosolimited.com Programming is a lot like cooking. When you learn how to do it yourself, you derive great pleasure from combining ingredients of your choosing and tasting the resulting dish. After a while, it becomes second nature and you no longer have to rely on processed foods for nutrition. Another similarity between cooking and programming is that they are both powerful instruments of seduction. MIT was instrumental in teaching us this way of thinking. If your tire gets a flat, why buy a new one when you can re-invent the wheel? We were not taught explicitly how to use off-the-shelf programs. Come to think of it, they didn’t even teach us how to use computers. The attitude was: build whatever it takes to do what you want to do, but use technology to do it. So naturally, when it comes time to design, we write software. When you’re writing your own software, the design is never set in stone. It is a constant improvisation. We don’t always know how our changes will propagate, but a deep trust in the process and a willingness to play often lead to wildly unexpected and pleasing results. It’s like handing your child a marker, writing down a list of things he can and can’t do, and letting him loose in your living room. The visual qualities of our work reflect the structures, iterations, recursions, and limitations of code running on a computer. If we are looking to create an organic visual, we might actively work to hide the digital origin. If we are trying to reveal structures in a stream of information, we might embrace and amplify these same coded qualities. No matter what, though, the final work looks the way it does because it is a continuous extension of the thinking machines that made it. Prime Numerics by Sosolimited was a live remix of the final UK Prime Ministerial Debate on 29 April 2010. Trafik / lavitrinedetrafik.fr Graphics and programming are at the very heart of so many of our projects, and this association has been the founding basis of Trafik (since 1997). Right from the beginning, we have believed that programming could be used for creative purposes, even if programming languages have essentially been devised to make tools. When we write a program, we are faced with technical issues that we have to address, resolving the code in order to guarantee that the program runs.
However, by studying the generated aesthetic results and by making our own visual choices, we have adopted an artistic approach. To apply programming to graphic design is an unusual approach by its very nature. In reality, the code, used as the base material, is abstract and disconnected from the generated forms. To write code, to compile it, and to see it generate tangible shapes creates a sensitive rapport with programming. Used in such a way in graphic design, it enables us to develop artistic objects which outmatch existing tools. However, constraining ourselves to producing our own instruments is an empirical method which gives correct, precise, and well-adapted results, but which also sometimes provokes unexpected ones. Thus the code, by the complexity and diversity of what it can produce, manages to surprise us and to go beyond what we imagined at the outset. For example, the code often generates a unique aesthetic which produces a sort of visual radicalism devoid of any sophistication. By using programming, the creative process seems to us to be more complete: when producing visuals, installations, or animations, we work first on the functioning of the program, on its “life”. The project builds itself throughout the development, through a permanent exchange between suggestions from the graphic designer and those of the programmer. These two professions and their interactions inspire all of our creations and produce specific and precise objects of art. Pierre Rodière, Graphic Designer / Joël Rodière, Programmer Casey Reas is a professor in the Department of Design Media Arts at UCLA and a graduate of the MIT Media Laboratory. Reas’ software has been featured in numerous solo and group exhibitions at museums and galleries in the United States, Europe, and Asia. With Ben Fry, he co-founded Processing in 2001. He is the author of Process Compendium 2004-2010 and co-author of Processing: A Programming Handbook for Visual Designers and Artists (MIT Press) and Getting Started with Processing (O’Reilly). http://reas.com http://users.dma.ucla.edu/~reas/ Chandler McWilliams is a writer, artist, and programmer. He has studied film, photography, and political science, and completed graduate work in philosophy at The New School for Social Research in New York City. He lives in Los Angeles, where he teaches in the Department of Design Media Arts at the UCLA School of the Arts. His current work focuses on themes of affect, repetition, computation, and […]
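Reading these statements side by side, one loop recurs, from Felton’s auditable chart-makers to van Blokland’s “write, generate, evaluate, tweak” cycle: write rules, generate candidates, judge them, adjust the rules, repeat. A schematic sketch of that loop follows; in practice the scoring is the designer’s eye, and the stand-in functions here are invented only to keep the example self-contained:

```python
import random

def generate(params):
    """Stand-in generative system: n marks scattered with a given spread."""
    return [random.gauss(0, params["spread"]) for _ in range(params["n"])]

def score(design):
    """Stand-in for the designer's judgement: here, reward wide compositions."""
    return max(design) - min(design)

params = {"n": 50, "spread": 1.0}
best_design, best_score = None, float("-inf")

for iteration in range(100):
    candidate = generate(params)           # generate
    s = score(candidate)                   # evaluate
    if s > best_score:
        best_design, best_score = candidate, s
    else:
        params["spread"] *= 1.05           # tweak the rules, then repeat

print(f"best score {best_score:.2f} with spread {params['spread']:.2f}")
```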
- What is at stake in animate design? [Theory] Grey Walter’s robotic tortoise ELSIE Usman Haque has, on several occasions, made the observation that there is an important difference between interactivity and responsiveness (see for example this PDF). A responsive system is a fundamentally linear set of relations, a kind of reaction where the same thing happens every time a given action is performed. A normal light switch is responsive in this sense. A typical light switch doesn’t consider any other variables, or have any other behavioural options: pressing the switch will either turn it on or off, in what is a linear causal relationship. A properly interactive system is very different in its logical structure, and is characterised instead by a relational and circular (or more complex, networked) causality. In a properly interactive system, a given action will produce different results, because the result depends upon the context at that moment, the history of previous interactions, and the relational creativity of the system. To take the banal example of a light switch again: in an interactive system an input might turn on a light, but it could equally result in other behaviour. A properly interactive light might set itself at different levels according to other sensor inputs, or the light might not come on at all, and instead curtains or windows might be opened to allow in more light. It might even ask you if you are afraid of the dark, or if you need help. It might try to sell you a torch, or it might just remind you that you are wearing shades. The post-war maverick ecologist and cybernetician Gregory Bateson used a different example to illustrate the same point. If you kick a stone, he said, then the trajectory of the stone is a simple mechanical affair that can easily be calculated using Newton’s equations. If you kick a dog, then you do not know what is going to happen. It might bite you, or bark at you, or run away. A dog interacts with us. It has its own agency, and that is the important issue here. One point to be made here, then, is that many of the installations, systems and apps that we might broadly classify as interactive are actually just responsive or reactive. There is nothing per se wrong with reactivity, and of course such responsive and reactive systems can in any case be ‘looped’ and networked to form components of more complex and properly interactive feedback systems. The important point, rather, is that properly interactive systems are interesting because they are able to stage a series of philosophical questions regarding the nature of agency and creativity: important questions that perhaps cannot be posed in any other way. The way that circular causal systems which feature feedback and recursion act as minds was the broad research focus of the post-war project of cybernetics, and it is the subject of a recently published book by Andrew Pickering, The Cybernetic Brain: Sketches of Another Future (University of Chicago Press, 2010). In this work, Pickering takes the reader through this fascinating period of experimental work at the boundary of art and science, which he describes as “some of the most striking and visionary work that I have come across in the history of science and engineering”.
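Before following Pickering into that history, it is worth restating Haque’s distinction in code: a responsive system is a pure function of its input, while an interactive one folds context and history into its response, so the same action need not produce the same result. A minimal sketch; every sensor reading and behaviour below is invented for the example:

```python
# A responsive system is a pure function of its input: the same press
# always yields the same flip.
def responsive_switch(is_on: bool, pressed: bool) -> bool:
    return (not is_on) if pressed else is_on

# An interactive system folds context and history into its response, so
# the same press can yield different behaviour.
class InteractiveLight:
    def __init__(self):
        self.presses = []                          # history of interactions

    def press(self, ambient_lux: float, hour: int) -> str:
        self.presses.append((ambient_lux, hour))
        if ambient_lux > 500:
            return "opening the curtains instead"   # plenty of daylight
        if hour >= 23 and len(self.presses) > 3:
            return "are you afraid of the dark?"    # context plus history
        return "light on"

print(responsive_switch(False, True))          # True, every single time
light = InteractiveLight()
print(light.press(ambient_lux=800, hour=14))   # same action...
print(light.press(ambient_lux=20, hour=23))    # ...different response
```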
Pickering focuses upon the most radical traditions within cybernetic research, which largely arose out of the work of a series of distinctly eccentric British researchers, whom he describes, borrowing a phrase from the philosophers Gilles Deleuze and Félix Guattari, as performing a nomadic science. He notes that “unlike more familiar sciences such as physics, which remain tied to specific academic departments and scholarly modes of transmission, cybernetics is better seen as a form of life, a way of going on in the world...” Pickering considers many experiments that have come to take on a legendary status within the history of cybernetics, ranging from Ross Ashby’s Homeostat (a network of four machines composed of movable magnets with electric connections through water, which would exhibit a range of emergent self-organised behaviours; a toy simulation follows at the end of this piece) and Grey Walter’s robotic tortoises ELSIE and ELMER (which would respond to each other’s lights, or to themselves in a mirror), to Stafford Beer’s remarkable Cybersyn project for Salvador Allende’s government in Chile (an early form of the internet, which created the basis of a decentralised socialist planned economy; for an information-rich, though politically thin, documentary, see here). Of particular interest to Pickering is the work of Gordon Pask, whose experimental installations and assemblages of various kinds captured, in a distinct way, what Pickering describes as the “hylozoic wonder” of radical cybernetics: that is to say, the question of under what conditions we can think of all matter as (at least capable of) being alive and thinking. Gordon Pask was heavily influenced by the ideas of Gregory Bateson, in particular Bateson’s anthropological work in Bali, and later his work on family therapy and schizophrenia. In this research Bateson showed how our very experience of being a ‘self’ is produced out of, or emerges out of, our participation in a network or ecology of conversations with other actors in our environment: people, objects, rituals and so on. Bateson suggested that “the total self-corrective unit which processes information, or as I say, ‘thinks’, ‘acts’ and ‘decides’, is a system whose boundaries do not at all coincide with the boundaries either of the body or of what is popularly called the ‘self’ or ‘consciousness’.” For Pask, famously, the conversation became the paradigm for thinking about interactivity, much of which focused on the question of how systems learn and teach, or, as Bateson described it, deutero-learning: learning how to learn. Pask’s writings in this area can often be rather obscure, especially to newcomers to the field, and Pickering provides an excellent introduction to these projects, including Musicolour, SAKI, Eucrates, CASTE, and the yet more experimental chemical computing projects, many of which were developed in association with architecture schools and in art settings. In all of these projects, Pickering reminds us, Pask is ultimately staging questions about who we are, and what we and our world might be; questions which the ‘ecology of mind’ of radical cybernetics can still help us with today. In this regard, I can’t put it any better than Usman Haque, who has stated that: “It is not about designing aesthetic representations of environmental data, or improving online efficiency or making urban structures more spectacular.
Nor is it about making another piece of high-tech lobby art that responds to flows of people moving through the space, which is just as representational, metaphor-encumbered and unchallenging as a polite watercolour landscape. It is about designing tools that people themselves may use to construct, in the widest sense of the word, their environments, and as a result build their own sense of agency. It is about developing ways in which people themselves can become more engaged with, and ultimately responsible for, the spaces they inhabit.” -- About the Author: Jon Goodbun is a researcher interested in networks of architecture, process philosophy, radical cybernetics, urban political ecology, and the natural and cognitive sciences. He sometimes refers to himself as a metropolitan tektologist, for want of a better description. His work focuses on near and medium term future scenarios. He is currently printing his PhD, working on a book 'Critical and Maverick Systems Thinkers', and planning some kind of exhibition on 'Ecological Aesthetics, Empathy and Extended […]
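As flagged above, Ashby’s Homeostat can be caricatured in a few lines of code: each unit is driven by the others through a weight matrix, and whenever its “essential variable” drifts out of bounds it randomly rewires its own connections and tries again, which is Ashby’s ultrastability. A toy simulation only, in no way a reconstruction of the original electro-mechanical hardware:

```python
import random

# Four coupled units stand in for Ashby's four magnet-and-water machines.
N, BOUND, DAMPING = 4, 1.0, 0.5
weights = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]
state = [random.uniform(-0.5, 0.5) for _ in range(N)]
rewirings = 0

for step in range(2000):
    # each unit's next value is a damped sum of the others' influences
    state = [DAMPING * sum(weights[i][j] * state[j] for j in range(N))
             for i in range(N)]
    for i in range(N):
        if abs(state[i]) > BOUND:       # essential variable out of bounds:
            weights[i] = [random.uniform(-1, 1) for _ in range(N)]
            state[i] = random.uniform(-0.5, 0.5)
            rewirings += 1              # the machine reorganises itself

print(f"settled after {rewirings} rewirings; "
      f"state = {[round(s, 3) for s in state]}")
```

Unstable configurations keep throwing a unit past its bounds and triggering rewiring; the network only stops reorganising once it stumbles on a stable configuration, which is the emergent, self-organised behaviour the text describes.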
- (General indifference towards) The Digital Divide Emily Jacir – Material for a Film How might we explain the ascent, pervasiveness and popular appeal of digital art? This is not the question that CUNY Graduate Center associate professor Claire Bishop chose to answer in her recent "Digital Divide" article, published in the September issue of Artforum. Instead, Bishop conducts a broad survey to scan for acknowledgment (or at least trace elements) of 'the digital everyday' in contemporary art. In mounting this well-crafted consideration of technology and aesthetics, Bishop makes the rather dubious error of entirely dismissing "new media" art (her quotation marks, not ours) to focus instead on more traditional practices like sculpture, video and installation. Despite this questionable omission, the discussion is worth a glance for two reasons: first, it very capably schematizes some general categories for considering projects and practices in 2012, and secondly, the article has inspired a roster of A-list reactions that refute Bishop's scope and reasoning. While Bishop's analysis exudes a general sense of unease towards the social web and mediated experience, she most certainly can translate the implications of these phenomena into a means of reading work. The bulk of her argument is a meticulous classification of the following themes explored by contemporary artists: media archaeology, social practice, remix culture, research-based practice and the artfully titled vernacular of aggregation (which scales up from individual projects to encompass curation as well). While it might be easy to turn up one's nose at these fairly pedestrian categories when coming at this argument from a 'been there, done that' new media or creative technologist milieu, this is some whip-smart commentary. In particular, Bishop's discussion of Thomas Hirschhorn's meditation on haptic perception in Touching Reality (2012) and of Emily Jacir's "diaristic" Material for a Film (2004-07), which reconstructs the life of the (assassinated) Palestinian poet Wael Zuaiter, is rock-solid. However eloquent Bishop is in reading the selection of projects that she's arranged, her undertaking goes off the rails when she describes code as "alien" and fumbles an analogy about file formats. With a little help from poet Kenneth Goldsmith, Bishop ultimately concludes that the role of the digital may be to "open up a new dematerialized, deauthored and unmarketable reality" and <gasp> perhaps even "signal the impending obsolescence of visual art itself". It is hardly surprising that these claims of a potentially "deauthored" reality were met with widespread disdain in media art circles. Bishop's willful omission of approximately two decades of digital art allows her to rather conveniently examine her selected projects in a vacuum. The recent curatorial decision at the Berlin Biennale to invite Occupy activists into an exhibition for the duration of the show is a great example of post-Web 2.0/social-platform networked performance, but one can't help but note that there is a whole body of work produced over the last eight years that actually interrogates and problematizes networks: structurally, representationally and experientially. Much of Bishop's analysis is founded on projects that could be read as internalizing and emulating the logic of various facets of digital culture, and it is indulgent to dwell on these readings when there are virtuosic works and established practices that explicitly address these topics.
That said, why should digital artists, hackers, creative coders or <insert crude practice description here> even bother seeking vindication within the contemporary art world? What is to be gained? Jon Ippolito's comment on the article was quite astute: call new media a niche field if you like, but "500,000 people are walking around with Scott Snibbe's work on their iPads." Julian Oliver was even more direct, noting that the rift that seems to block access to the contemporary art world is not an obstacle at all, and that we should all just keep focused on making stuff. Read Article | note the stellar comments thread Thomas Hirschhorn, Touching […]
- The Future of Art by KS12 [News] Created by KS12 / Emergence Collective, The Future of Art is a video shot, edited and screened at the Transmediale festival 2011 in Berlin. What are the defining aesthetics of art in the networked era? How is mass collaboration changing notions of ownership in art? How does micropatronage change the way artists produce and distribute artwork? The Future of Art begins a conversation on these topics and invites your participation. Featuring: Aaron Koblin aaronkoblin.com, Michelle Thorne thornet.wordpress.com, Caleb Larsen caleblarsen.com, Régine Debatty we-make-money-not-art.com, Heather Kelley kokoromi.org, Vincent Moon vincentmoon.com, Ken Wahl depthart.com, Reynold Reynolds reynold-reynolds.com, Bram Snijders sitd.nl, Mez Breeze furtherfield.org/display_user.php?ID=403, Zeesy Powers zeesypowers.com, Joachim Stein joaoflux.net, Eric […]
- Paul Prudence Interviews Mitchell Whitelaw [Theory] Mitchell Whitelaw, #climatedata proposal (2009) One of the most articulate and accessible voices within the generative art scene is undoubtedly the Canberra-based scholar/practitioner Mitchell Whitelaw. Given his relative (internet) silence over the last year, news of an interview, conducted by Paul Prudence no less, published in the most recent issue of Neural magazine is cause for minor celebration. Mitchell posted the transcript of this conversation to his blog last night, and it is noteworthy for several reasons. First, the opening response about the utopian nature of software art acknowledges some ideological underpinnings that are seldom discussed, and Paul's query as to "where is the dystopian software art?" is both provocative and on point. Secondly, the comments about look vs. process, and how even the glossiest eye candy often embodies a "narrative of systems", are a useful means of considering the 'performative' capabilities of generative art. Finally, Mitchell's description of algorithm popularity as 'a memetic ecology unto itself' is exactly the kind of meta-commentary that is desperately needed in (generally) uncritical software art circles. A particularly sharp passage on system design and pedagogy: The link there for me is a sense of "procedurality" or "processuality". In Casey Reas' work we can see a strong relationship between computational and non-computational procedures such as those of Sol LeWitt. In teaching programming to designers, I have students write and execute a LeWitt style procedure, with pencil and paper. Digital generative systems are just formal procedures, executed by machines. Treating processes as human-executable helps unpack the black boxes of generative systems mentioned earlier, and hopefully reveal them as contingent and hackable. This notion of "human executable" procedures is a handy frame of reference in introducing agency into system design... Otherwise: the joy of materiality. Generative art and design covets the lush tangibility of traditional media; and with the wave of interest in fabrication we are seeing ever more generative work realised in "off-screen" forms. The challenge then, for pasty code-artist types, is to match the craft skills of hands-on makers in realising the work. ...and serves as a perfect segue into Mitchell's thinking about transmateriality (craft/making/material culture). Read the full conversation between Mitchell and Paul […]
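Whitelaw’s pencil-and-paper exercise translates directly: a LeWitt-style instruction, say “draw fifty straight lines from random points on the left edge to random points on the right edge”, is a formal procedure whether a student or a machine executes it. A minimal sketch that writes the result out as SVG; the instruction and all dimensions are mine, not Whitelaw’s:

```python
import random

def lewitt_lines(n=50, width=400, height=400, seed=None):
    """Execute the instruction: n straight lines, left edge to right edge."""
    rng = random.Random(seed)
    lines = [
        f'<line x1="0" y1="{rng.uniform(0, height):.1f}" '
        f'x2="{width}" y2="{rng.uniform(0, height):.1f}" stroke="black"/>'
        for _ in range(n)
    ]
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="{height}">' + "".join(lines) + "</svg>")

with open("wall_drawing.svg", "w") as f:
    # same seed, same drawing: the procedure, not the marks, is the work
    f.write(lewitt_lines(seed=1))
```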
Posted on: 19/04/2012
Author: Jon Goodbun
My research interests range across a network of architecture, process philosophy, radical cybernetics, urban political ecology, and the natural and cognitive sciences. I sometimes refer to myself as a metropolitan tektologist, for want of a better description. My work focuses on near and medium term future scenarios.
View all entries by: Jon Goodbun