A Philosophy of Computer Art is a text that may interest readers of creativeapplications.net: it covers the intersection of computing and art, discusses some of the classics of interactive art, and does a lot of thinking about what art that uses computers actually is. In it Dominic Lopes does several things very well: he divides what he calls “digital art” from “computer art”, and he correlates that second term, which I’ll capitalize to mark that it’s his, Computer Art, with interactivity. He also articulates precise arguments for computer art as a new and valid form of art and defends his new term against some of its more tiresome attacks. As a quick example, Paul Virilio’s concerns about the debilitating effect of “virtual reality” on thought are more than a little reminiscent of Socratic concerns about the debilitating effect of writing on thought, and they point to an interesting conclusion: what we call thought is a technologically enhanced phenomenon. Note Friedrich Kittler: most human capacities are enhanced in some way or another with no great damage to the notion of “humanity” or “human”. It’s little more than a failure of imagination to thunder about how those augmentations debilitate the natural state of humans. Lopes also makes several extremely astute observations about the nature of interactivity and repeatability, comparing Rodin’s Thinker, Schubert’s “The Erlking”, packs of refrigerator magnet letters, and true interactivity in artwork, and concluding that interactive work has distinct characteristics. What he comes to, or what I read him as coming to, is this: a structured and rule-based experience is interactive. “A good theory of interaction in art speaks of prescribed user actions. The surface of a painting is altered if it’s knifed, but paintings don’t prescribe that they be vandalized.” Reduced even further: grammar plus entities plus aesthetics equals interactivity. 
He also makes, to pick just a few, an excellent argument for the interpretive necessity of a view in automated displays, astute observations about the potential value of a computer art criticism, and a compelling case for technology as a medium.
But Lopes is also a philosopher, and philosophers seek to, among other things, define categories. Painting, sculpture, dance: these categorized mediums have all served us well over the years, and so the thinking goes, why not extend them and add another: Computer Art. I’m not so sure that the idea of Computer Art as, with an admittedly blunt reduction, “stuff on a computer that allows you to participate in it presenting itself” is particularly useful. My feeling is that this isn’t what interactive art, or art made in collaboration with computers, presently is, nor is it a meaningful account of what it should be. The device is not the method, nor the extent of what makes this type of artwork rich and meaningful, and computers aren’t really the medium: algorithm and computation are the medium. In Form+Code, Casey Reas and Chandler McWilliams are right to point to Sol LeWitt as an earlier exponent of explicitly algorithmic art and to tie him to current computational and algorithmic art-makers. A computer was originally one who did computation, that is, a person sitting with a slide rule, pen, and paper; the word was only later applied to machines. The idea of computation is that it offloads a pre-existing human capacity, accentuates pre-existing things in the world. The person who calls their friend on their cellphone describes their action as “calling my friend”, not “using my cellphone”. The person using Ken Goldberg’s TeleGarden (a work mentioned frequently in APAC) is marveling at how they can collaboratively participate in creating a garden, not at how they can control a machine via a network. The point is not the device — the point is interactive computation, extension of human aptitude and capacity, and the type of relationship with the world that it enables. His insistence on the primacy of mediums and forms is doubly odd because in his finale Lopes emphasizes that “computer art takes advantage of computational processing to achieve user interaction”. 
Close, but not quite there.
I’m nitpicking, and admittedly so, because he’s looking at works that are unmistakably “Computer Art” by his definition of it. Computer Art is meant to be a measure of degrees, a spectrum. One looks at Scott Snibbe’s work and sees a computer system and an interactivity. Golden Calf, a work he references multiple times, sits very firmly inside the Computer Art circle in the Venn diagram of machine-human art-making experience. These are the easy examples, those that lend themselves most readily to the account of interaction in artworks that he describes. But I’m nitpicking for a reason: the definition is painfully limiting. It says that computer art is a thing run on a computer with which I interact while observing a display, through which I consequently understand how my actions are interpreted.
This seems naïve about the ways that computation actually functions in our lives and an oversimplification of how people think computation can function in their lives. It also seems reductive of what forms art can take and how the conversation that is art-making can evolve. For instance: Wafaa Bilal’s Domestic Tension, a piece far more indebted to performance art than to sculptural installation. There’s quite a bit more at play there than my seeing the manipulation of pixels, and there’s more to my understanding of how this piece functions and signifies than understanding that I’m speaking with and through a computer.
Another example: Men in Grey. Is this interactive art? Not in most senses: I never interacted with it, nor would I say that interacting with it is necessary to understand and experience it. It has far more in common with Situationist/Lettrist works than with installation art, and yet it is computer-based, one does interact with it through well-known protocols and well-established rules, and it has a display. It uses computation and networks, and yet it’s not about manipulating a computer or a network to create display elements, nor is that at the forefront of it. Nor are EyeWriter, Natural Fuse, and a slew of other works and projects that I find most meaningful and engaging.
In the philosophy of aesthetics, philosophically strong categories are sometimes preferred over meaningful categories because strong categories are defensible. Painting as a category of artwork is not deeply meaningful in many ways (consider the question “do you like paintings?”), yet determining how much something is or is not a painting is quite easy and categorically meaningful. “Minimal” as a style (one’s furniture or aesthetic) or strategy (“minimalism”, with its attendant connotations) is a much more meaningful designation because it has historical precedents and significations, and because it extends beyond a particular category to cover a manner of production and reception. That is, it describes communicative strategies, which Lopes indicates is one of the goals of the interactivity in Computer Art. Yet while “minimal” is a terrifically difficult thing to pin down into categories, it remains descriptive, historical, and fundamentally meaningful as a description of an aesthetic practice. To play a small linguistic game: describing speech as “he spoke with words” is a bit odd; describing it as “he spoke with silence” makes more sense, because one does not normally make speech with silence. Likewise, Computer Art seems primarily to describe a situation of abnormality, “this is art that involves you interacting with a computer”, that I believe few people actually find particularly abnormal and that will be less meaningful, if not nearly meaningless, in the near future.
Lopes’s text is an excellent opening of what I hope will be an interesting discussion, one that attempts to unravel the relationship between new forms of narrative, expression, and communication and the previous ones. He weaves together an excellent web of references, from Umberto Eco to Clement Greenberg to Lev Manovich, and cites a wisely chosen group of artworks to bolster his argument. The exemplary quality of A Philosophy of Computer Art is in its handling of complex arguments against the sort of odd disqualifications that are occasionally leveled against Computer Art. Its insistence on categorical logic and on mediums as definitive categories is a small aberration in what is otherwise an excellent text and the opening of a new type of discourse about what creative computing might possibly mean.
Rafael Lozano-Hemmer “Blow Up”, 2007
Daniel Rozin “Wooden Mirror”, 1999
Scott Snibbe “Boundary Functions”, 1998
Camille Utterback & Romy Achituv “Text Rain”, 1999
Joshua Noble is a writer, designer, and programmer based in Portland, Oregon and New York City. He’s the author of, most recently, Programming Interactivity and the forthcoming book Research for Living.
- “Generative Design” – A Computational Design Guidebook "The main change in the design process achieved by using generative design is that traditional craftsmanship recedes into the background, and abstraction and information become the new principal […]
- The HyperCard Legacy [Theory, Mac] In 1963, my dad was looking for a job. Born in England and raised in Africa, he ended up in London after a few years of travel by ship and train. In those pre-pre-Craigslist days, people still searched for employment in newspapers, and an unusual listing in a London newspaper caught his eye: a listing looking for computer operators. For my father, the listing raised two immediate questions: What is a computer? And how do you operate it? (A similar reaction would have come from job listings for auto mechanics in 1914 or web designers in 1994.) Responding to that listing turned out to be a life-changing decision for my dad, who has spent the last 40 years working with computers and technology. A very similar directional moment came for me 24 years later, in 1987, when my dad arrived home from work with a Macintosh SE computer.

HyperCard, Revisited

The Mac SE was actually not as important to my life (and career) as was the software that came with it for free - in particular, an unusual and innovative application called HyperCard. HyperCard was a tool for making tools - Mac users could use HyperCard to build their own mini-programs to balance their taxes, manage sports statistics, make music - all kinds of individualized software that would be useful (or fun) for individual users. These little programs were called stacks, and were built as a system of cards that could be hyperlinked together. Building a HyperCard stack was remarkably easy, and the application quickly developed a devoted following. HyperCard was the brainchild of Bill Atkinson, one of Apple's earliest employees, and the software engineer responsible for (among other things) the drop-down menu, the selection tool, and tabbed navigation. Bill played a big role in making the Mac what the Mac was - a personal computer that made the whole process of computing easy for the general public. 
HyperCard represented perhaps the bravest part of this 'computing for the people' philosophy, as it enabled users to go past the pre-built software that came on the machines, and to program and build software of their own. Assuming that a typical computer user would and could learn how to program may seem like a mad idea, but it's one that has a long legacy. When personal computers were first envisioned in the 1960s, scenarios included the owners of these machines making their own software. The small group of people who were working in computing probably couldn't imagine why anyone would want a computer if they didn't know how to program it! With HyperCard, the learning process was facilitated by pre-built UI elements and a simple drag & drop interface. Maybe most important, though, was HyperCard's unique, innovative, and very easy to use programming language, HyperTalk.

Say That Again, in English?

Reading programming instructions written in some languages can be confusing. Statements in HyperTalk, on the other hand, tend to read like sentences in English. For example, if I wanted to create a variable called 'name' with the string 'bob dole' in it, I would write this:

put 'bob dole' into name

If I wanted to put the last name into a list of last names that I had already created, I could do this:

put the second word of name into last_names

And if I wanted to display the name on screen, I would simply write:

put name into field 'name_display'

This type of plain-language programming makes sense, particularly in an application that was designed specifically for non-programmers. I have been teaching programming to designers and artists for nearly a decade, and I find the largest concern for learners to be not with the conceptual hurdles involved in writing a program, but with obscure and confusing syntax requirements. I would love to be able to teach HyperTalk to my students, as a smooth on-ramp to more complex languages like Java or ActionScript. 
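To make the comparison concrete, here is a minimal sketch of how statements like the ones above can map onto ordinary program operations. This is a toy written in Python purely for illustration - it is not HyperTalk, which ran inside HyperCard itself; only the two statement forms and the variable names come from the examples above, and everything else is an assumption:

```python
import re

def run_hypertalk_like(lines):
    """Toy interpreter for two HyperTalk-flavored 'put ... into ...' forms:
         put 'literal' into var
         put the <first|second|third> word of var into var2
    Returns the resulting variable environment as a dict."""
    ordinals = {"first": 0, "second": 1, "third": 2}
    env = {}
    for line in lines:
        # Form 1: store a quoted literal in a variable
        m = re.match(r"put '(.*)' into (\w+)$", line)
        if m:
            env[m.group(2)] = m.group(1)
            continue
        # Form 2: pick the nth word out of an existing variable
        m = re.match(r"put the (\w+) word of (\w+) into (\w+)$", line)
        if m:
            words = env[m.group(2)].split()
            env[m.group(3)] = words[ordinals[m.group(1)]]
            continue
        raise SyntaxError(f"don't understand: {line}")
    return env

env = run_hypertalk_like([
    "put 'bob dole' into name",
    "put the second word of name into last_names",
])
print(env)  # {'name': 'bob dole', 'last_names': 'dole'}
```

Even this crude sketch shows why the syntax was so teachable: each statement names the operation, the data, and the destination in the order you would say them aloud.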
HyperTalk wasn't just easy, it was also fairly powerful. Complex object structures could be built to handle complicated tasks, and the base language could be expanded by a variety of available external commands and functions (XCMDs and XFCNs, respectively), which were precursors to the modern plug-in.

Programming for the People

This combination of ease of use and power resonated with the HyperCard user base, who developed and shared thousands of unique stacks (all in a time before the web). A visit to a BBS in the late 80s and early 90s could give a modem-owner access to thousands of unique, often home-made tools and applications. Stacks were made to record basketball statistics, to teach music theory, and to build complex databases. The revolutionary non-linear game Myst first appeared as a HyperCard stack, and the Beatles even got into the scene, with an official stack, A Hard Day's Night. During the same time, developers made hundreds of extensions. Some let HyperCard stacks talk to other applications on your computer (opening the door to the first computer virus, 'Concept', in 1993). Others let you communicate with the outside world - BeeHive Technology's ADB I/O box was a kind of 'Arduino for the 80s', and let stack-makers connect to sensors and send commands to electronics. A large community formed around HyperCard, providing tips & resources as well as a distribution channel for home-brew software makers.

The HyperCard Legacy

Over the last few years, we've seen many exciting projects that work in the spirit of HyperCard - projects that offer free and simple ways to create custom software tools. Replace the word 'HyperCard' in the paragraphs above with 'Processing' and the word 'stack' with the word 'sketch', and many of the innovations and advantages described can be moved 20 years into the future without much of a re-write. HyperCard was the first real hyper-media program, paving the way for the web, and everything that came with it. 
It was used by thousands of people, and by most accounts, seemed to have been a fairly successful piece of software. Which, of course, raises the question: What happened to HyperCard? A small project in the larger suite of Mac software, HyperCard never really saw the type of development commitment that it would need to remain current as the Mac OS advanced. The small, black-and-white application looked more and more antiquated as screens got bigger and more colorful. To compound matters, the project was shuffled back and forth between Apple and its software subsidiary Claris and seemed never to get any kind of sure footing. Though a second version of HyperCard was released in 1990, the project had made few advances since its release three years earlier. Ultimately, HyperCard would disappear from Mac computers by the mid-nineties, eclipsed by web browsers and other applications which it had itself inspired. The last copy of HyperCard was sold by Apple in 2004.

The Importance of Middle Ground

In new media, practitioners are often identified with the specific tools that they use. I started out as a 'Flash guy' and over the last few years have been connected more and more with the open source software project Processing. Though I originally came to Processing to escape the Flash Player's then sluggish performance, I value the platform as much for its ease of use and its teachability as I do for its ability to quickly add floating point numbers. Lately, I've been asked the same question, over and over again: 'Why don't you move to openFrameworks? It's much faster!' It is true that projects built in OF run faster than those built in Processing. This question, though, seems to be missing a key point: faster does not always equal better. Does every pianist want to play the pipe organ because it has more keys? Is a car better than a bicycle? In my case, choosing a platform to work with involves as much consideration of simplicity as it does of complexity. 
I am an educator, and when I work on a project I am always thinking about how the things that are learned in the process can be packaged and shared with my students and with the public. Which brings us to the broader concept of accessibility. HyperCard effectively disappeared a decade ago, making way for supposedly bigger and better things. But in my mind, the end of HyperCard left a huge gap that desperately needs to be filled - a space for an easy to use, intuitive tool that will once again let average computer users make their own tools. Such a project would have huge benefits for all of us, whether we are artists, educators, entrepreneurs, or enthusiasts.

HyperCard, Revisited

Over the years, there have been several attempts to revive HyperCard, most recently on the web. TileStack is HyperCard for a social media world, a site in which users can build their own stacks, program them with HyperTalk, and share them with friends. It's a bit of a time capsule, with many classic HyperCard stacks available to satisfy any nostalgic cravings for B&W pixel art you may be harbouring. Unfortunately, HyperCard, as much as we might love it, is 25 years old. These big initiatives to revive it directly end up looking and feeling antiquated. I could imagine a new version of HyperCard being built from the ground up around its core functional properties: HyperTalk, easy to use UI elements, and a framework for extensions. It's the kind of open source project that could happen, but with so much investment already existing in other initiatives such as Processing and openFrameworks, it might not be the best use of resources. So, let's forget for now about a resurrection. Instead of thinking bigger, let's think smaller. HyperCard for the iPhone? It might not be as crazy as you think. Imagine having a single, meta app that could be used to make smaller ones. 
This 'App-Builder App', like HyperCard, could combine easy to use, draggable user interface elements with an intuitive, plain-language scripting language. As a quick visit to the App Store will show you, many or most of the apps available today could be built without complex coding. You don't need Objective-C to make a stock ticker, or a unit converter, or a fart machine. These home-made apps could be shared and adapted, cross-bred and mutated to create generation after generation of useful (and not-so-useful) programs. By putting the tools of creation into the hands of the broader userbase, we would allow for the creation of ultra-specific personalized apps that, aside from a few exceptions, don't exist today. We'd also get access to a vastly larger creative pool. There are undoubtedly many excellent and innovative ideas out there, in the heads of people who don't (yet) have the programming skills to realize them. The next Myst is waiting to be built, along with countless other novel tools and applications. With the developer restrictions and extreme proprietary bent of the iPhone App Store, it's hard to remember the Apple of the 80s. Steve Jobs, Bill Atkinson and their team had a vision to not only bring computers to the people, but also to bring computer programming to the public - to make makers out of the masses. At Apple, this philosophy, along with HyperCard, seems to have mostly been lost. In the open source community, though, this ideal is alive and well - it may be that by reviving some ideas from the past we might be able to create a HyperCard for the […]
- “The Glitch Moment(um)” by Rosa Menkman / Review by Greg J. Smith One need only look as far as the upstart GLI.TC/H festival and its vibrant constellation of related practitioners to see that the glitch aesthetic is alive and well. Rosa Menkman has been active as an artist, theorist, organizer and agitator within this milieu, and at the tail end of last year she published The Glitch Moment(um), which threads together a number of writing and research projects into a rather authoritative overview of engineered disruption as critical media practice. Released under the auspices of the Institute of Network Cultures' Network Notebooks series, The Glitch Moment(um) provides a thorough examination of glitch aesthetics in relation to classical communications theory, takes up questions of categorization and the propagation of glitch art as a 'genre', and presents some related research into the community of artists active within this realm. Menkman also tosses in a manifesto for good measure. Despite the numerous moving parts that comprise this text, it really works as a cohesive enterprise – not only in providing an overview of the history of glitch art but as an expert framing of the media theory that underpins the field. So, how do we make sense of practices such as codec corrupting, datamoshing and circuit bending? Menkman describes glitch as a wholesale rejection of utopian dreams of the seamless media experience, a dispelling of the transparency of various mediums: "To study media-specific artifacts is to take interest in the failure of media to disappear... in noise artifacts." In contemplating failure, she breaks down these noise 'artifacts' into three categories: compression, feedback and glitch, and identifies the latter as an indeterminate force, one that is "unaccepted... unwanted... unordered". 
After ruminating on this undefined space Menkman eases into a consideration of the phenomenology of glitch that is buoyed by careful case studies of key works by Ant Scott, Gijs Gieskes, Jodi and Paul B. Davis. The remarkable thing about The Glitch Moment(um) is the depth of research informing the work; Menkman moves beyond stock discussions of Paul Virilio (catastrophe) and Kim Cascone (the aesthetics of failure) and invokes less overtly relevant media theorists like Alan Liu and Jay David Bolter to great effect. The concluding examinations of 'the commodified glitch' and the glitch scene's crystallization into a genre are really quite savvy and self-aware. Menkman cites McLuhan's adage that "obsolescence never meant the end of anything, it's just the beginning", which perfectly encapsulates the challenge (and promise) of this particular moment for error-driven practices. Given that The Glitch Moment(um) is basically a handbook for embracing noise and obsolescence with open arms, the text is a vital read for anyone interested in critically engaging media.

Read/download The Glitch Moment(um)
References: GLI.TC/H festival / Rosa Menkman / Network Notebook
Jodi, <$BLOGTITLES$>, 2007
Ant Scott, SUQQE, 2002
Glitch Actors Organized – Network map of glitch artist twitter scene (produced with Esther […]
- On simulation, aesthetics and play: Artifactual Playground In 1958, the American physicist William Higinbotham created what is one of the first instances of what we would today call a modern "video game". The game, named Tennis for Two, was built at the Brookhaven National Laboratory for their yearly open-house presentations of the lab's activities. The game was built using an oscilloscope and a programmable analog computer, the Donner Model 30. It simulated a simple tennis match between two players, with a sideways perspective of the net and a ball bouncing back and forth, controlled by two player-manipulated inputs.

William Higinbotham, Tennis for Two, Brookhaven National Laboratory, 1958

Although it would take a few more years (until 1962 and the game "Spacewar!") before we could see the emergence of a true modern form of "gameplay", "Tennis for Two" nevertheless contains enough basic elements of interactive play to connect it to more contemporary descendants, for example the iconic Nintendo hit, "Wii Tennis". While there are a few missing details here and there, such as avatars, scoring and the various forms invented to interact with the machine, fundamentally very little has changed since "Tennis for Two". It contains all the modern tropes of animated algorithmic representation, namely a highly kinetic visual form that emerges in real-time from within the game via its gameplay. From this perspective, it is one of the forebears of "arcade" style games. The game is fast and dynamic, and only by interacting with the system does the image emerge. But perhaps most importantly, "Tennis for Two" is significant in that it is not only a representation of playable interactive visual forms, but that these forms represent something greater than their graphical output: the game is in fact a physics simulator of a ball moving through space and interacting with objects in its path. 
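The kind of model the Donner Model 30 computed can be hinted at in a few lines of modern code. The sketch below is an illustration in Python, not a reconstruction of Higinbotham's actual circuitry: it steps a dropped ball under gravity with an energy-losing rebound, the same basic ingredients (gravity plus a damped bounce) described in the analog computer's instruction book; all of the constants are assumptions:

```python
def simulate_ball(y0=1.0, g=9.8, restitution=0.7, dt=0.0005, t_max=4.0):
    """Step a dropped ball with simple Euler integration.
    Returns the peak height reached after each bounce."""
    y, vy, t = y0, 0.0, 0.0
    peaks, max_y, bounced = [], 0.0, False
    while t < t_max:
        vy -= g * dt                 # gravity accelerates the ball downward
        y += vy * dt
        if y <= 0.0 and vy < 0.0:    # ground contact
            y = 0.0
            vy = -vy * restitution   # rebound, losing energy each time
            if bounced:
                peaks.append(max_y)  # record the peak of the arc just finished
            max_y, bounced = 0.0, True
        max_y = max(max_y, y)
        t += dt
    return peaks

# Each rebound peaks near restitution**2 times the previous drop height,
# so from 1.0 m the first rebound tops out around 0.49 m.
print([round(p, 3) for p in simulate_ball()[:3]])
```

Even this is a handful of state updates per time step; doing it continuously with resistors, capacitors and relays, and drawing the result on an oscilloscope, is what makes the 1958 machine remarkable.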
Watch how the ball bounces against the net and then try to imagine what it would take to program such a movement, even today; then remember that Higinbotham was working back in 1958. For its time, this is a sophisticated simulator of physical interactions: "The 'brain' of Tennis for Two was a small analog computer. The computer's instruction book described how to generate various curves on the cathode-ray tube of an oscilloscope, using resistors, capacitors and relays. Among the examples given in the book were the trajectories of a bullet, missile, and bouncing ball, all of which were subject to gravity and wind resistance. While reading the instruction book, the bouncing ball reminded Higinbotham of a tennis game and the idea of Tennis for Two was born." — Brookhaven National Laboratory, The First Video Game?, p.2. In other words, Tennis for Two was not only the first "Pong" game, but also the first physics game, à la Box2D and its shameless re-branding in the infinitely more popular form, Angry Birds. And like Angry Birds' relation to Box2D, the underpinnings of "Tennis for Two" were already inscribed in the routines of the machine itself, the Donner Model 30. These routines were then re-contextualized using what we would today call "joysticks" and voilà: a modern arcade game.

Wii Dog vs Wii Cat & Angry Birds Live, T-Mobile

Given the historical context, there is nothing surprising in this idea of a computer simulating a physical phenomenon such as a bullet or a missile. In the 1950s, computers were still emerging from World War II era cybernetic formulations of "teleological" or "self-regulating" machines, precipitated in large part by the acceleration of faster and faster flying weapons that required new techniques for shooting them out of the sky (cf. V-2 countermeasures). The history of interactivity is traversed by this question of simulation, i.e. 
by the idea of adaptive mathematical and physical models that could allow machines to regulate themselves in real-time, based on constantly evolving conditions. So while it might be considered a historical curiosity that post-war cybernetic machines would produce the modern video game, it is unsurprising that such a game would be constructed out of a physical simulator of bouncing balls or flying bullets and missiles.

Aesthetics, Simulation, Play

The historical relationship between aesthetics and play has always been a complex one. There is much overlap and interpenetration, but they are in no way interchangeable terms. Most performative art forms, such as theatre or music, oscillate constantly between the ludic and aesthetic realms. In the work of art-game pioneer Eddo Stern — for example his work with C-Level, or his newer Wizard Takes All — we can see these two domains interact with one another in a constant back-and-forth that suggests perhaps a more fundamental genealogy connecting the two. But despite the deeply connected roots, they are nevertheless two expressive forms that cannot be conflated, all the calls for games-as-art be damned. Whatever the relationship between aesthetics and play, though, it is further complicated by the introduction of the principle of simulation in play, made all the more acute in the context of video games. Simulation questions the mimetic tendencies of representation, which might explain in part the constantly recurring uproar over violence in video games (and all the ire over provocative gamer-artists that apparently "hate freedom" ;-). But no matter how small-minded the complaints, people nevertheless understand that these games are not merely mimetically presenting us with representations of violence; instead, they are directly modeling the violence of the scene itself. The resulting image flows from the model; it is a "rendering" of the underlying scene. 
This is the specificity of simulation: the ability to represent the dynamics of a situation as itself a form of representation. The representation needs to be played in order to take form. This is the historical twist of simulation: the image has shifted from a predominantly mimetic function of re-presentation to that of rendering complex interactions visible through playability. In fact, simulations can take place through other mediums and channels of perception. The American far-west simulator The Oregon Trail (1971), for example, originally used only textual communication to represent the state of the game. Although modern variants of The Oregon Trail, such as Red Dead Redemption, now use sophisticated graphics to represent the game state, the game is nevertheless animated by a simulation engine that cannot be reduced merely to the artifacts displayed on-screen.

The Oregon Trail (Apple II edition), 1971/1984 & Red Dead Redemption, 2010

A Poor Man's Simulator

The quality of the simulated movements of the Higinbotham/Model-30 ball and its interactions with the net are impressive, especially when compared to the clunky, almost weightless movements of Pong, designed some fifteen years later. If there were so many games about space in the 70s and 80s, it might be because earthbound physical simulations are hard to design and certainly hard to calculate in real-time, especially once you've moved from analog computers to digital ones. Physics is a mostly logarithmic, analog realm, and is hard, or slow, to calculate using digital circuitry. Although many games with bouncing balls and gravity would appear throughout the next few decades of digital gaming, it would truly take Erin Catto's Box2D and accelerometer-based controllers like the Wiimote and the iPhone for the form to emerge as a fundamental gameplay mechanic. Why, then, did our first variant on what would later become Angry Birds appear so early? 
The prophetic nature of Tennis for Two can somewhat be explained by context: Higinbotham was a physicist, whereas Pong’s inventors — Ralph Baer (Magnavox) and Allan Alcorn (Atari) — were engineers. Higinbotham was working with scientific instrumentation that did not adhere to the economic constraints or objectives of Baer, who was for his part trying to design mass-producible circuitry that could be plugged into millions of customers’ televisions. But it is precisely this poor-man's quality of video game simulators that helped the ludic qualities of gaming emerge. Tennis for Two is frankly a little boring next to Pong, whereas Pong remains one of the best-designed games of all time, giving birth to an infinitely expanding field of variants all the way from Breakout to Bit.Trip Beat.

Ralph Baer and Bill Harrison Play Ping-Pong Video Game, 1969 & Bit.Trip Beat, Gaijin Games, 2009

One of the ironies of video game history relates to this desire to simulate infinitely complex interactions with access to only the most mediocre means of calculation. This contradiction has led to what might in some senses be considered an historical anomaly: an in-between period in which computer games’ desire for "realism" would have to wait for the technological means to catch up.

A Poor Man's Renderer

This anomaly relates not only to the simulation itself, but also to the manner in which it is rendered to the screen. In this in-between period of video game design, situated somewhere between the late 1960s and Box2D (circa 2006), a cornucopia of visual forms emerged from video games that have given games their distinctive identity as an aesthetic form. We now identify video games as much by their visual artifacts as by their particular forms of gameplay. A truly innovative game will in fact design a specific form of visual artifact, in order to better match the gameplay, outside of any criteria of realism. 
This approach will often go on to trump the simulation itself and become the central mechanism of gameplay. It is precisely because of the technological limitations of early gaming technology that gaming eventually found its singular language of representation, where the graphical artifacts would themselves become the playable form.

Artistic Playgrounds

This playable visual language has even circled back around to influence various forms of visual communication, in order to make them more "playful". And artists for their part have used this visual language of computer game artifacts to transform less electronic contexts into playable forms. The list of artists working in this space could go on almost forever: Mary Flanagan, Aram Bartholl, Damien Aspe, etc. In the well-known work of French artist Invader, the city landscape becomes a platformer to be traversed literally, leaving behind physical pixels:

Invader Sneakers & Space Invader in Shoreditch, London

In the aforementioned Eddo Stern's "portal" sculptures, gaming logics of representation and interaction are re-projected back onto traditional spaces of representation (gallery, public square, etc.) in the form of sculpture:

Eddo Stern, Fake Portal, 2012

While neither of these examples is even playable as a game, they nevertheless communicate with the video game medium through this imperfect, unrealistic video game form of visual rendering. They look and feel like classical electronic forms of play. The artifactual visual language of video games is sometimes constructed out of a patchwork of various historical forms that have been redefined through the filter of gaming. Sometimes video games skeuomorphically imitate previous technologies and mediums, for example by flashing television-style signal noise to signify a weak connection, or by imitating hand-written messages and drawings strewn about a 3D world (cf. Myst, Resident Evil).
But video games have also introduced their own domain of visual logic based on the specific contours of the technological limitations that animate them. Often a closer reading is required in order to reveal the nature of these contours.

Raster-Scan

A strange by-product of the historical anomaly can be seen in the role of the pixel in video games. Originally, as was the case with Tennis for Two, games were built with vectors, as were many related visual technologies such as Ivan Sutherland's Sketchpad. In fact, Tennis for Two used vectors both for the simulated phenomena (force, velocity, etc.) and for the physical image constructed within the oscilloscope. This is completely logical if you're looking to construct a physics simulator. This vector-based approach is also the case today, where games are often built out of polygons which — assembled together — construct the playable scene. But somewhere in between Tennis for Two and our modern-day graphics pipeline came the pixel. And this anomaly, the pixel, continues to this day to profoundly influence the manner in which even vector-based images are rendered to our eyes.

Alan Kay, The Early History of Smalltalk, 1993

Like many of the computing concepts we take for granted today, the pixel concept was perfected in the late '60s and early '70s somewhere between Douglas Engelbart's Stanford Research Institute and the Xerox PARC in neighboring Palo Alto:

"The TX-2 display that Ivan Sutherland used for Sketchpad [...] would project a single bright spot on a dark screen and then electronically move that spot around to trace out a circle, say, or the letter A. By tracing and retracing the pattern very, very fast, [it] could create the illusion of a solid outline. [...] The problem was that the more complicated the drawing, the faster you had to wiggle that spot. [...] Then there were the "raster-scan" displays that Bill English had developed for the "PARC Online Office System", POLOS. [...]
The POLOS displays used digital electronics that were better suited to the binary world of computing: in effect, they would divide their screens into a fine grid of "pixels" and then make a picture by turning each pixel either on or off, as appropriate, with no shades in between. [...] The programmers would have a much easier time devising graphics software to generate those images, because all they had to do was define a chunk of computer memory to be a map of the screen, one bit per pixel, and then drop the appropriate bit into each memory location: 1 for white and 0 for black. [...] Unfortunately, that use of the computer's memory was also the major difficulty with bit-mapped graphics: memory was very, very expensive in those days." — The Dream Machine: J.C.R. Licklider and the Revolution that Made Computing Personal, M. Mitchell Waldrop, Penguin Books, 2001, p. 366.

In many ways, "bit-map" graphics are simply a historical hack used to generate text and images dynamically on a screen. In the case of the heavily text-centric Xerox PARC machines, one might assume that a more vector-based image generator would have made more sense: typography is essentially a history of shapes built out of lines, with a visual language heavily influenced by the traits of handwritten letterforms. In fact, it took some thirty-odd years, led by Apple's "retina" ultra-high-definition screens, for bitmapped text to match the quality of the printed page. So it could probably be argued that the "bit-mapped" approach was historically the wrong one, even if it is now somewhat catching up.

Douglas Engelbart, Workstation With Mouse, Augmentation Research Center, circa 1964–1966 & Maze War, Xerox Alto, 1974

But from a purely technological, engineer's perspective, bit-map images make all the sense in the world. In the above quote we need only retain that "the programmers would have a much easier time..." in order to understand why the pixel approach won out.
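The one-bit-per-pixel scheme Waldrop describes can be sketched in a few lines. The screen dimensions and function names here are hypothetical, but the mechanic is exactly the one quoted above: define a chunk of memory as a map of the screen, then drop the appropriate bit into each memory location:

```python
# A one-bit-per-pixel frame buffer in the POLOS style: a chunk of
# memory maps the screen, and "drawing" means dropping bits into it.
# Dimensions and names are illustrative, not from any historical machine.

WIDTH, HEIGHT = 640, 480
framebuffer = bytearray(WIDTH * HEIGHT // 8)  # one bit per pixel

def set_pixel(x, y, on=True):
    """Turn a single pixel on (1) or off (0) in the memory map."""
    index = y * WIDTH + x            # linear position in the bit map
    byte, bit = divmod(index, 8)     # which byte, which bit within it
    if on:
        framebuffer[byte] |= 1 << bit
    else:
        framebuffer[byte] &= ~(1 << bit)

def get_pixel(x, y):
    """Read a single pixel back out of the memory map."""
    index = y * WIDTH + x
    byte, bit = divmod(index, 8)
    return (framebuffer[byte] >> bit) & 1

# Trace a horizontal line by setting a run of bits
for x in range(100, 200):
    set_pixel(x, 50)
```

The ease of the scheme is plain: any image, text or line is just bits flipped in an array, which is exactly why "the programmers would have a much easier time" — and exactly why the memory bill was the problem.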
Computers are "discrete" machines, capable of switching parts of themselves off and on independently. This logic gives us random-access memory, which in turn gives us databases, which in turn give us things such as hyperlinks. Machine architecture influences use, and to assume that this would not influence the resulting aesthetics is naïve. The infinitely re-configurable and re-contextualizing nature of the machine is the whole point of why we use these damn things. So an image construction method that closely matched this discrete logic, down to the very 0s and 1s of the machine's ABCs, was an important step in creating a "plastic" image, capable of reconfiguring itself multiple times per second. It is out of just such an image that video games as a medium emerge.

Raster-scan vs. Vector-scan

Let's compare two images from two iconic video games from 1980, Battlezone and Pacman. Battlezone is a vector-based game, and originally used a vector-scan method for displaying shapes on-screen. This created razor-sharp images, albeit in black-and-white, or rather black-and-green. The use of vectors also allowed Battlezone to be one of the first mass-market games to effectively represent a three-dimensional scene, using the first-person perspective of a tank commander to navigate the game space. It would be many years before a pixel-based computer system could come anywhere close to the visual elegance of early-1980s 3D games such as Battlezone, Star Wars or Tempest. One of the great iconic raster-based 3D games, Wolfenstein, wasn't even in 3D at its introduction as Castle Wolfenstein in 1981; and even when it became Wolfenstein 3D in 1992, that visual representation was made up of large blocky pixel shapes, far inferior to Atari's 1980s graphical representations.
But Battlezone's vector-scan technique also created some curious visual anomalies: objects on screen, for example, were fully transparent, defined solely by their outlines without any possibility for image "textures" to fill in the gaps. This created the odd situation where an enemy tank could be seen transparently on the other side of an obstacle, but could not be shot at. In a sense, this improved the gameplay and created part of the strategy of playing Battlezone, no matter what level of realism it achieved as a simulation. Ultimately, it was a game made for fun, for play, but even so it would eventually be used by real tank commanders as a training simulator for their soldiers. The simulation was good enough to serve as a functional form of training in the real-world manipulation of tanks.

Visually, Pacman (a.k.a. Puckman) is a very different animal. Contrary to Battlezone, or even the more colorful Tempest, Pacman is practically drenched in color. Ghosts are brightly colored with different hues based on character traits, allowing players to read their individual algorithmic behavior within the game. The player's character, Pacman, is a completely opaque bright yellow animated blob, full of visual charm. Like the ghosts, he is full of personality. Color is even used as a gameplay element, allowing players to distinguish between dangerous ghosts (multi-colored) and edible ones (blue). Everything about Pacman screams "bit-map" techniques: the maze is a series of bit-mapped 0s and 1s, turned on or off to represent a wall or a navigable open space. And the dots or crumbs that we eat are also represented as a bit-map, i.e. a scattering of pixels that we have to turn off by running our character over them.
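That dot layer can be sketched as a literal bit-map. The grid size below matches the arcade maze's tile grid, but the names and structure are illustrative, not Namco's actual code:

```python
# The dot layer of a Pacman-style maze as a literal bit-map:
# 1 = an uneaten dot, 0 = empty. Eating is clearing a bit, and the
# level ends when every register reads zero. Names are illustrative.

COLS, ROWS = 28, 31   # tile grid of the arcade Pacman maze

# Start with every cell holding a dot (real mazes leave walls empty)
dots = [[1] * COLS for _ in range(ROWS)]

def eat(col, row):
    """Running the character over a cell switches its bit off."""
    dots[row][col] = 0

def level_cleared():
    """The level is over when no bits remain set."""
    return not any(any(row) for row in dots)

# Walking the character across the top row clears that row of dots
for c in range(COLS):
    eat(c, 0)
```

Seen this way, a level of Pacman really is a tour of a memory map: play proceeds by zeroing registers until the map is empty.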
In Pacman, the gameplay, in fact the whole game algorithm, is directly controlled by the graphical representation, as opposed to Battlezone, where the graphical representation is often in contradiction with the physical simulation of interaction with physical objects. Pacman is a collection of pixels, he lives to eat other pixels, and the level is over when there are no more pixels to be eaten. Pacman essentially spends his time running around a memory map until he has effectively manipulated all the memory registers by setting them all to 0. The internal circuitry of the machine is visually exposed to the player, who is then asked to navigate this memory register map and manipulate the digital switches via an on-screen representation.

Cellular Automata

While it is not technically a video game, and was in fact designed as a scientific simulation experiment, John Conway's Game of Life is nevertheless one of the best examples of these immanent pixel-plane spaces from which a "playable" image emerges. The "game" is played entirely by comparing each pixel to the pixels that surround it: too many surrounding pixels, and the pixel dies from overcrowding; too few, and it dies from lack of resources; with just the right number of neighbors, a new pixel is born (if none) or survives (if already alive). The visual representation of the Life "game" is exactly the same map of values as the memory registers that control it. There is no representation of the simulation outside of the frame of the grid. Based on this immanent principle, a complex interaction of forms emerges, hence the term "Game of Life".

Conway's Game of Life, 1970 & Runxt, R-Life for iOS

One of the best-known games of all time, SimCity, was directly inspired by this Conway thought experiment: "[John Conway's work] is so extraordinary, because the rules behind it are so simple. It's like the game Go. [...]
They can arise from fairly simple rules and interactions, and that became a major design approach for all the games: "How can I put together a simple little thing that's going to interact and give rise to this great and unexpected complex behavior?" So that was a huge inspiration for me." — The Replay Interviews: Will Wright, Gamasutra, 23 May 2011.

In Conway's Game of Life, as in Wright's SimCity, the immanent pixel grid is the very space of the "game", conflating the pictorial representation with the simulated one. It is the "map" upon which the simulation of SimCity, an architectural construction if there ever was one, would be built.

Animation

Another significant trait found in pixel-based games such as Pacman, and largely absent from vector-based games, is the narrative dimension. Pacman tells a story, and even introduced comedic interludes every few levels, little Keaton-esque sketches of Pacman being chased by ghosts and then turning the tables to chase the ghosts in turn.

Pacman cutscenes, arcade edition 1980 & Atari 800 edition, 1983

Many interactive characters were built out of these basic, often extremely limited, collections of "bit-map" pixels: the whole Pacman family (Pacman, Ms. Pacman, Pacman Jr., etc.), Mappy, Dig Dug, Mr. Do, Mario, et cætera. Even established animated characters — such as Popeye — found their way into the heavily pixellated game screens of the 1980s. There is nothing arbitrary about this use of cinema-animation aesthetics to animate the characters of early video games. For animation had already solved the problem of opening up cinematic figuration by eschewing realism and embracing the artificial nature of the image. Gertie the Dinosaur, Betty Boop and Felix the Cat, all the way up to La Linea and Don Hertzfeldt's pencil-drawn absurdities: these are all forms of reduction down to the visual interaction of a few basic visual forms.
So too in video games: the key to their success in adding expressive characteristics came not from militaristic, cybernetics-inspired scientific simulation instrumentation. Instead, it came precisely from embracing the abstract, graphical nature of their primitive cousins and accepting the artifactual, visually limited detail of the early digital machines. In accepting this fate, video games tapped into a deep tradition of expressive visual tapestries that had been explored throughout the 20th century in cinema by experimental film-makers and animators such as Len Lye and Norman McLaren, who used simple abstract shapes such as lines, scratches, and blobs of color to great expressive effect.

Vanishing Points

Although the term is a bit dubious, we are exploring here the problem of realism, or perhaps more specifically that of mimesis, i.e. the art of imitation. A significant historical component of this debate on art and realism relates to the introduction of a very specific form of pictorial representation: geometric perspective of the sort demonstrated by Brunelleschi in the early 1400s. In our parallel history of video games — notably as it traverses its naive period of representation — we too can see some interesting effects of perspective as it relates to how images are constructed on-screen. Due to the purely arbitrary nature of the discrete pixel grid, where any cell can be turned on or off at will, a strange form of mixed perspective becomes possible, with multiple forms of perspective not only co-existing on screen but even interacting with one another.
Pacman and the ghosts within the maze are completely devoid of principles of foreshortening and vanishing points; they are in fact a mixture of top-down vertical perspective (for the maze) and side-view perspective (for the characters), reminiscent of early forms of perspective emerging in the work of Giotto where, to take an observation from Deleuze & Guattari in Mille Plateaux (p. 219), Christ alternates between divine receiver, enduring the stigmata, and kite-machine, commanding the angels and heavens via kite strings. The Brunelleschian style of geometric perspective was not yet fully developed at the time of Giotto, hence the optical oscillations, for a modern eye, between flatness and depth, foreground and background, and so on. Jesus is at once commanding Saint Francis and simultaneously being flown by him like a kite. It is only through narrative cues, understood by semiotically reading the painting, that we are able to reconstruct the spatial relationships between the various figures. Like many paintings from the Middle Ages to the early Renaissance, early video games contain multiple points of view and often choose their perspectival representation based on contextual narrative needs. These are naïve and/or mixed perspectival geometries (cf. Tapper, Zoo Keeper, et al.) that have recently been exploited to brilliant effect in Polytron's visual delight, Fez.

Tapper, Bally Midway, 1983 & Fez, Polytron, 2012

We could also mention Game Yarouze's Echochrome, where the gameplay takes place somewhere in between the stages of the OpenGL pipeline: the moment where vector data is rasterized into pixel data itself becomes a gameplay mechanic, as players exploit visual absurdities by trying to line them up.

Echochrome, Game Yarouze, Japan Studio, 2008

Such hybrid forms of perspective would have been much harder to achieve had gaming stuck with purely vectorial and mathematical forms of representation.
Visual Abstractions

It might be tempting, based on such an art-historical exposition, to start comparing video games to the history of art and graphic design. For example, it would be fairly easy to visually juxtapose the paintings of Piet Mondrian/De Stijl with Taito's 1981 arcade classic, Qix:

Piet Mondrian, Composition 10, 1939–1942 & Piet Mondrian, Composition II in Red, Blue and Yellow, 1930 & Taito, Qix, 1981

Obviously, on some level there is a visual inheritance taking place, whether explicit, cultural or unconscious, even though such causalities are either impossible to prove or, even if true, merely anecdotal. Another juxtaposition might be to look at the Russian avant-garde, starting with El Lissitzky, and compare his visual language with the shapes and forms of the more abstract video games, including early 3D games that had not yet perfected their perspectival rendering engines:

A Prounen, El Lissitzky, c. 1925 (cf. Prouns) & Sixty Second Shooter, Happion Laboratories, 2012

Blaster, Williams 1983 & Ballblazer, Lucas Arts 1984

Rez, Tetsuya Mizuguchi, 2001

The problem, ultimately, with all these approaches is that they rely on merely visual cues and not aesthetic ones. The problem with just such a visualist reading is that it assumes that both De Stijl and Taito constructed their representations purely as visual tableaux — in other words, as just a bunch of pretty pictures — instead of looking at the material, conceptual and historical visual languages and logics that might have led them there. In the case of Qix, it would probably be far more instructive to compare its geometric abstractions to early MacPaint software and Bill Atkinson's visual algorithms that made it possible, especially since these routines would go on to influence gaming history via Bill Budge's Pinball Construction Set. To begin with, both Qix and MacPaint were built as profoundly raster images, and both use similar algorithms for "painting" in their geometric forms.
But more importantly, much of Atkinson's work, like that of Qix, was not only an attempt to find an algorithmic method for interactively constructing visual output, but to do so within the constraints of a Motorola 68000 microprocessor and 128 KB of memory.

MacPaint, Macintosh, 1984 & Pinball Construction Set, Bill Budge, 1983

And again, we can see even in these early days of MacPaint that, in order to present the computer image in a visually compelling way, Apple's marketing machine opted to look back to previous techniques of image construction, here the Japanese woodcut, and not to the photograph.

Pixel Clouds

One of the most beautiful games to emerge in the last few years is Proteus, a love letter to this naive period of highly pixellated gaming. Only here, the game is rendered with a modern vector-based graphics pipeline. This creates a strange oscillation between the utterly fluid 3D navigation and the giant blocky pixellated landscape. Trees, shrubbery, waves, raindrops, animals: everything has been reduced down to a limited grouping of pixel blocks. In Proteus, we walk around the simulation of an island world and explore its aesthetic qualities: sound, color and shape all interact in an elegant generative landscape. There is no real "goal" to the game, although season-shifts can be provoked in a pleasant transition that eventually leads the player to new forms of gaming experience. The whole experience suggests that perhaps some new media form — of an entirely new quality — could be afoot in what we call gaming, although I cringe to qualify such a future as "just over the horizon", because gaming has been promising such an unattainable land for the past several decades. Still, the hope here is that this emerging form is less about Holodecks and more about the raw interactive audiovisual experience of this new media form. The ultimate goal of Proteus, I suppose, is that of aesthetikos, i.e.
sensation, or perhaps more accurately the experience itself of human sensing. In other words, we are talking about aesthetics in the Kantian sense of a search for beauty — via the senses — that eventually discovers itself in the limits of its search (cf. the Sublime). For the overall effect turns out to be indeed highly romantic, something akin to a multidimensional interactive 8-bit rendition of a Turner-esque tone poem.

While playing Proteus recently, I found myself in a curious situation. I was high atop one of the hilly peaks of the island, watching as night began to fall and rainclouds emerged below. As I descended from the hill onto the rain-soaked plains, I suddenly found myself awash in a pure sea of color that at first felt like a visual glitch: while I could still move somewhat, it seemed that any direction just led me to more colored polygons rendered as flat shapes. For a few moments, I even imagined that the game engine had crashed, and I started to reach for the ESC key to get myself back in control of the machine. But then, slowly, I began to realize that I had merely descended to the level of the clouds themselves and was swimming in the middle of their visually depthless space. Anyone who has flown in a plane knows this de-spatialized zone of traversing the clouds: there is no focal point or point of reference, and everything feels atemporal and ethereal. Essentially this is what happened to me looking through the little portal of my computer screen, the same logic taking place on a purely representational level of pixels that refused to figure the depth contours of the objects in space. Finally, I just leaned back and watched as abstract geometric shapes of treetops re-emerged, only to be submerged again in swaths of color as waves of clouds chased ever more waves of clouds. It was a profoundly pleasurable oscillation between recognition and disorientation, one of the key ingredients of many successful works of art.
Eventually the cloud formation began to recede from my point of view, and the three-dimensional perspective of the landscape re-emerged, re-aligning the simulated first-person perspective of my view portal onto a three-dimensional landscape. The beauty of the moment had something to do with what the art historian Hubert Damisch calls the théorie du /nuage/, or theory of the /cloud/. The term /cloud/ is written with two slashes in order to reconstruct in text the odd way clouds recede from realism and perspective, and their re-apparition within the tableau in the form of a semiotic signifier, almost like a placeholder or an asterisk. Clouds in classical painting are the limit of perspectival representation, the resistance of aesthetics to the mere logics of mimesis and perhaps even of representation. Whatever the case, they mark the limit of the realism model of aesthetic forms (cf. Cory Arcangel's Super Mario Clouds). This limit of perspective within a three-dimensional simulator takes us back to Battlezone and its visual, artifactual limits. And this limit speaks to one of the fundamental problems confronting video games today, beyond the problem of figuration and, by extension, the problem of figuring the human face. This representational limit of the /cloud/ in Proteus is what we could call the limit of realism as a model for what simulations, and therein gaming, seek to achieve. Taken to its limit, these clouds of Proteus have their cousin in a wonderful little game built by two lifetime members of the glory days of the Atelier Hypermedia: Pascal Chirol and Grégoire Lauvin. In their collaborative piece NEVERNEVERLAND Color Suite, a 3D simulator and a joystick open up a landscape of nothing but infinite gradients of color: consider it a 3D simulator of navigation within the color selector of your favorite painting software.
And it is also probably one of those outer limits that only an artist can propose to the world of gaming in its relationship to the aesthetic realm: a landscape of color, a perspective of visual artifacts, as itself the "goal" of the game. Via play, via simulation, we are now beyond play, beyond simulation, and even beyond figuration; the play has moved into the aesthetic realm, the domain of sensation, opening up an entirely different sphere of experience than that of the reconstruction of a physical world. This is a playable aesthetic world, not beyond ours, but immanent to a new field of perception within our world: the realm of artifactual play.

This post first appeared on Douglas Edric Stanley's blog. For more interesting observations, […]
- 10 PRINT – A single line of code as a lens onto creative computing A collaboration between Nick Montfort, Patsy Baudoin, John Bell, Ian Bogost, Jeremy Douglass, Mark C. Marino, Michael Mateas, Mark Sample, Noah Vawter and Casey Reas, 10 PRINT is a book about a one-line Commodore 64 BASIC […]
- “Nature of Code” by Daniel Shiffman – Natural systems using Processing Processing enthusiasts rejoice! There is a new book coming by Daniel Shiffman and it’s called Nature of Code. As its title implies, this book takes phenomena that naturally occur in our physical world and shows you how to simulate them with […]
- “How to Do Things with Videogames” by Ian Bogost [Books, Review, Games] From Roger Ebert's pedantic proclamation that "video games can never be art" to the clichéd fawning over the truckloads of revenue generated by each new release in the Modern Warfare series, gaming consistently inspires overarching conversations about media and culture. At this point, these 'big conversations' should surprise no one, as with each passing year gaming becomes less esoteric and permeates more and more demographic groups (e.g. the popularity of social games on Facebook, senior citizens embracing the Wii as an exercise platform, etc.). So while gaming may be everywhere, it is strange that it is often difficult to locate conversations about it that speak to how we actually integrate play and simulation into our everyday experience. What can games tell us about relaxation, work and routine? What do they have to say about movement and the body? How might we subvert gaming conventions through pranks and humour? Ian Bogost's recent book How To Do Things With Videogames thoughtfully considers questions like these while endeavouring to re-frame the medium through a series of focused, topical texts that draw on familiar and engaging points of reference. Organized as 20 bite-sized chapters, How To Do Things With Videogames carefully considers how gaming has been leveraged to explore sex, art, politics, branding and boredom – all the touchstones of contemporary life. Within each of these articles, Bogost carefully blends accessible pop culture references with illustrative gaming examples as the basis of his ruminations on how the medium functions as a cultural mirror. The "feel and weight" of Go pieces sets the stage for a meditation on haptic feedback; an FPS shootout set in the Manchester Cathedral serves as a gateway into a conversation about awe and reverence.
Bogost's knowledge of game history is encyclopedic, and it is hard to come away from this book without a renewed appreciation for just how weird and wonderful game design can get – some of the more obscure references to SimHacks, mundane minigames and naive game tourism are priceless. There are many compelling moments within the text; I've picked out two that I found particularly provocative. Red Dead Redemption / Screen capture: Red Dead Wiki In the chapter on "transit", Bogost sketches out a history of the moving image that considers panoramas, how the advent of rail altered the experience of landscape, and the broader implications of movement in games: Instead of looking forward to a future in which the risky, laborious process of traversing a space could be lessened, in-videogame transit re-creates a past in which reality had not yet been dissolved into bits, but had to be traversed deliberately. Like the panorama show, the transit simulation is a kind of replacement therapy for an inaccessible experience of movement. Could this thesis be any more clearly articulated than in Red Dead Redemption, where the 21st-century leisure class faux-nostalgically gallops across a simulated American Southwest on horseback? How To Do Things With Videogames also unpacks how game design can complement and challenge corporate identity. The following passage is culled from a consideration of how games put brand identity under the microscope and (for me) it really evoked memories of the recent high-speed turfing of Molleindustria's Phone Story from the App Store. Of course, unauthorized brand abuse in large commercial games might not be possible or desirable. But brands' cultural values offer a bridge between visual appearance and game mechanics. In some cases, our understanding of particular rules of interaction has become bonded to products or services.
Phone Story sketches out a damning narrative of the consumer electronics industry's reliance on conflict minerals and dubious labour practices, and tells this story as a playable narrative. The fact that a game that explicitly took aim at Apple's supply chain ethics was so quickly 'disappeared' underscores the degree to which gamespace is contested (branding) ground. Bogost's analysis of this milieu weaves together several examples of promotional games, an analysis of Monopoly tokens, commentary on Obama's 2008 in-game ad purchases, and examples of "anti-advergame" critical resistance. How To Do Things With Videogames is a lightning-fast read, and the book's success is largely due to both brevity and charm. As a topical 'scan' of an entire medium, the undertaking is noteworthy for clearly articulating down-to-earth approaches for reconsidering the politics and experience of play. Bogost's conclusion describes an ongoing process whereby games are becoming "more ordinary and familiar" and the cultural currency of 'the gamer' as a distinct subculture is fading – the tone and execution of this work certainly support this forecast. While a delight to muse over, this text should be read as a serious reconsideration of 'first principles' for anyone who plays, designs or avoids gaming on a regular basis. Purchase on amazon.com / […]
- Digging in the Crates [Flash, Sound] Digging in the Crates is an interactive installation by Roland Loesslein which explores sampling as a production technology of modern music. Dynamic music data is navigated using modified turntables, with information graphics helping visitors understand the complex relationships that exist between sample and composition. Besides the history of sampling and the technical background of the digitization of analog audio signals, visitors can also obtain information on the dissociation of sample-based productions from other musicological phenomena such as remixes, mashups or covers. As a highlight of the exhibition, the visitor can slip into the role of a producer and go on a fascinating search for suitable samples, found on soul, funk and jazz records of the '70s and '80s. Digging in the Crates aims to describe sampling culture in all its aspects, characteristics and influences. Not only the understanding of the creative process as a craft, but also the effort and creative processes associated with sampling, should be communicated authentically. Visitors can choose from 50 old records of the '70s and '80s. All of these records contain one or more samples, which can be analyzed by placing the records on the turntable. A projection onto the record itself shows included samples as shaded areas. The old records can either be played or analyzed; to choose between these two modes, the on/off switch of the turntable is pressed. A modified turntable acts as a tangible interface to navigate and analyze each single sample on the placed record. "Digging in the Crates" is a diploma thesis in the Department of Design at the University of Applied Sciences in Augsburg by developer and designer Roland Loesslein. More of his work can be viewed online at http://www.weaintplastic.com Digging in the Crates was created using Flash Actionscript3/Adobe […]
Posted on: 23/12/2010