“The main change in the design process achieved by using generative design is that traditional craftsmanship recedes into the background, and abstraction and information become the new principal elements.” Thus reads a rather pertinent nugget of wisdom tucked into the concluding notes of Generative Design: Visualize, Program and Create with Processing, an epic computational design text by Hartmut Bohnacker, Benedikt Groß, Julia Laub and editor Claudius Lazzeroni. With that point, the authors are describing how the syntax of code and data is intrinsically tied to new modes of composition and production. This statement also speaks to the organizational logic of the book, which weighs in at a whopping 470 pages of thoughtfully categorized generative strategies, broken down into bite-sized thematic walkthroughs.
The first 160 pages of Generative Design are dedicated to a very capable and lavish (double-page spreads abound) showcase of key projects and practitioners from the last decade. This roster contains all the usual suspects (Michael Hansmeyer, Eno Henze, THEVERYMANY etc.) and then some, with a surprising number of logo, typeface and product designs providing contextual counterpoint to the expected drawing machines and abstract assemblies. This well-curated selection of works is a compelling lead-in for the bulk of the text that follows: a hyper-detailed series of thematic, annotated and illustrated Processing sketches that explore approaches to creating generative art. These broad themes have been very carefully thought through, and the code examples, helpful annotations and reference images have been so thoroughly integrated that the visual design of this book – as an exploratory ‘how to’ software manual – is peerless. This praise is about much more than acknowledging strong publication design though, as an acute conceptual clarity underpins each incremental step. Structurally, the book is divided into two major sections: basic and complex methods. The first section allows a reader to ease into Processing and explore colour, shape and type with classic design-school exercises, while the latter demonstrates the computational design equivalent of heavy artillery with forays into 3D modelling, oscillation figures and dynamic data structures. Each of the exercises is a small marvel, and by consulting the text to work through the code examples (all accessible through the publication’s companion website) the reader is given a guided tour of the crafting and parsing of some fairly sophisticated techniques. Rare is the instructional text that doubles as a coffee-table book; even rarer is one that warrants multiple readings and could serve as a platform for months of research and experimentation.
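The book's examples are all written in Processing, but the flavour of the "oscillation figures" material can be sketched in a few lines of any language: sample a parametric Lissajous curve and draw the points as a polyline. The sketch below is my own minimal Python illustration of that general idea, not code from the book; the function name and parameters are mine.

```python
import math

def lissajous_points(a=3, b=2, delta=math.pi / 2, n=360, size=100.0):
    """Sample n points of a Lissajous (oscillation) figure:
    x = size*sin(a*t + delta), y = size*sin(b*t), for t in [0, 2*pi)."""
    points = []
    for i in range(n):
        t = 2 * math.pi * i / n
        points.append((size * math.sin(a * t + delta),
                       size * math.sin(b * t)))
    return points

# 360 samples of a 3:2 figure, ready to be drawn as a polyline
pts = lissajous_points()
```

Varying the frequency ratio a:b and the phase delta is exactly the kind of parameter-space exploration the book's exercises encourage.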
We have an extra copy for our members to give away. Please keep an eye on the blog in the coming weeks.
(Video of the German edition of the book below)
- “A Philosophy of Computer Art” by Dominic Lopes [Books, Review] A Philosophy of Computer Art is a text that may interest some readers of creativeapplications.net as it covers the intersection of computing and art, discussing some of the classics of interactive art, and doing a lot of thinking about what art that uses computers actually is. In it Dominic Lopes does several things very well: he divides what he calls “digital art” from “computer art”, and he correlates that second term (which I’ll capitalize as Computer Art to mark that it’s his term) with interactivity. He also articulates precise arguments for computer art as a new and valid form of art and defends his new term against some of its more tiresome attacks. As a quick example: Paul Virilio’s concerns about the debilitating effect of “virtual reality” on thought are more than a little reminiscent of Socratic concerns about the debilitating effect of writing on thought, and they point to an interesting conclusion: what we call thought is a technologically enhanced phenomenon. Note Friedrich Kittler: most human capacities are enhanced in some way or another with no great damage to the notion of “humanity” or “human”. It’s little more than a failure of imagination to thunder about how those augmentations debilitate the natural state of humans. Lopes also makes several extremely astute observations about the nature of interactivity and repeatability, comparing Rodin’s Thinker, Schubert’s “The Erlking”, packs of refrigerator-magnet letters, and true interactivity in artwork, and concluding that interactive work has distinct characteristics. What he comes to, or what I read him as coming to, is this: a structured and rule-based experience is interactive. “A good theory of interaction in art speaks of prescribed user actions. 
The surface of a painting is altered if it’s knifed, but paintings don’t prescribe that they be vandalized.” Reduced even further: grammar plus entities plus aesthetics equals interactivity. He also makes, to pick just a few, excellent arguments for the interpretive necessity of a viewer in automated displays, astute observations about the potential value of a computer art criticism, and for the nature of technology as a medium. But Lopes is also a philosopher, and philosophers seek, among other things, to define categories. Painting, sculpture, dance: these categorized mediums have all served us well over the years, and so the thinking goes, why not extend them and add another: Computer Art. I’m not so sure that the idea of Computer Art as, with an admittedly blunt reduction, “stuff on a computer that allows you to participate in it presenting itself” is particularly useful. My feeling is that this isn’t what interactive art, or art made in collaboration with computers, presently is, nor is it a meaningful extent of what it should be. The device is not the method, nor does it define the extent of what makes this type of artwork rich and meaningful, and computers aren’t really the medium: algorithm and computation are the medium. In Form+Code, Casey Reas and Chandler McWilliams are right to point to Sol LeWitt as an earlier exponent of explicitly algorithmic art and to tie that into current computational and algorithmic art-makers. A “computer” was originally a person who did computation, sitting with a slide rule, pen and paper; the word was only later applied to machines. The idea of computation is that it offloads a pre-existing human capacity, accentuates pre-existing things in the world. The person who calls their friend on their cellphone describes their action as “calling my friend”, not “using my cellphone”. 
The person using Ken Goldberg’s TeleGarden (a work mentioned frequently in APAC) is marveling at how they can collaboratively participate in creating a garden, not at how they can control a machine via a network. The point is not the device; the point is interactive computation, the extension of human aptitude and capacity, and the type of relationship with the world that it enables. His insistence on the primacy of mediums and forms is doubly odd because in his finale Lopes emphasizes that “computer art takes advantage of computational processing to achieve user interaction”. Close, but not quite there. I’m nitpicking, and admittedly so, because he’s looking at works that are unmistakably “Computer Art” by his definition of it. Computer Art is meant to be a measure of degrees, a spectrum. One looks at Scott Snibbe’s work and sees a computer system and an interactivity. Golden Calf, a work he references multiple times, sits very firmly inside the Computer Art circle in the Venn diagram of machine-human art-making experience. These are the easy examples, those that lend themselves most easily to the account of interaction in artworks that he describes. But I’m nitpicking for a reason: it’s painfully limiting. It says that computer art is things that run on a computer, with which I interact and observe a display, where I consequently understand how my actions are interpreted. This seems naïve about the ways that computation actually functions in our lives and an oversimplification of how people think computation can function in their lives. It also seems reductive of what forms art can take and how the conversation that is art-making can evolve. For instance: Wafaa Bilal’s Domestic Tension, a piece far more indebted to performance art than sculptural installation. 
There’s quite a bit more at play there than my seeing the manipulation of pixels, and there’s more to my understanding of how this piece functions and signifies than understanding that I’m speaking with and through a computer. Another example: Men in Grey. Is this interactive art? Not in many senses: I never interacted with it, nor would I say that interacting with it is necessary to understand and experience it. It has far more in common with Situationist/Lettrist works than with installation art, and yet it is computer-based, one does interact with it through well-known protocols and well-established rules, and it has a display. It uses computation and networks, and yet it’s not about manipulating a computer or a network to create display elements, nor is that the forefront of it. Nor are EyeWriter, Natural Fuse, and a slew of other works and projects that I find most meaningful and engaging. In the philosophy of aesthetics, philosophically strong categories are sometimes preferred over meaningful categories simply because strong categories are defensible. Painting as a category of artwork is not deeply meaningful in many ways (consider the question “do you like paintings?”), yet determining how much something is or is not a painting is quite easy and categorically meaningful. “Minimal” as a style (one’s furniture or aesthetic) or strategy (“minimalism”, with attendant connotations) is a much more meaningful designation because it has historical precedents and significations, and because it extends beyond a particular category to cover a manner of production and reception. That is, it describes communicative strategies, which Lopes indicates is one of the goals of the interactivity in Computer Art. “Minimal”, however, is a terrifically difficult thing to pin down into categories, and yet it is descriptive, historical, and fundamentally meaningful as a description of an aesthetic practice. 
To play a small linguistic game, describing speech as “he spoke with words” is a bit odd; to describe it as “he spoke with silence” makes more sense, because one does not normally make speech with silence. Likewise, Computer Art seems primarily to describe a situation of abnormality, “this is art that involves you interacting with a computer”, that I believe few people actually find particularly abnormal and that will be less meaningful, if not near meaningless, in the near future. Lopes’s text is an excellent opening of what I hope will be an interesting discussion that attempts to unravel the relationship between new forms of narrative, expression, and communication and the previous ones. He weaves together an excellent web of references, from Umberto Eco to Clement Greenberg to Lev Manovich, and cites a wisely chosen group of artworks to bolster his argument. A Philosophy of Computer Art leads by example in its handling of complex arguments against the sort of odd disqualifications that are occasionally leveled against Computer Art. Its insistence on categorical logic and on mediums as definitive categories is a small aberration in what is otherwise an excellent text and the opening of a new type of discourse about what creative computing might possibly mean. apoca.mentalpaint.net Purchase on amazon.com / amazon.co.uk Pictured: Rafael Lozano-Hemmer, “Blow Up”, 2007; Daniel Rozin, “Wooden Mirror”, 1999; Scott Snibbe, “Boundary Functions”, 1998; Camille Utterback & Romy Achituv, “Text Rain”, 1999. Joshua Noble is a writer, designer, and programmer based in Portland, Oregon and New York City. He’s the author of, most recently, Programming Interactivity and the forthcoming book Research for […]
- Archigram Archival Project [Reference] It's a pleasure to announce to CAN readers that this amazing project I have been working on for the past year is now live and ready for your perusal. The Archigram Archival Project [AAP] makes the work of the seminal architectural group Archigram available free online for public viewing. The project was run by EXP, an architectural research group at the University of Westminster, where I teach, and was funded by the Arts and Humanities Research Council. It was also made possible by the sheer generosity of members of Archigram and their heirs, who allowed us to browse through the immense collection of work stored in attics and basements, from which we collected a total of around 10,000 images. Monday night was the official site launch, and if you were following us on Twitter you would have seen a number of updates regarding the project. I am happy to say that even with some major hiccups just before the announcement (a server power failure at the university), the launch, with about 150 attendees including collaborators, historians, journalists and fans, was an absolute success. Mike and Dennis were not able to join us, but they were there thanks to Skype, with Dennis taking us through the different parts of the site. This was followed by Peter's talk on Archigram protégés, with David also at the event, always in the mood to kick off an inspiring conversation. We are incredibly pleased that now, finally after all these years, we can all enjoy the work of one of the 'most seminal, iconoclastic and influential architectural groups of the modern age'. The extraordinary influence of Archigram, whose 1961-1974 output remained mainly unbuilt, is internationally acknowledged. Exhibitions of their work have been touring major institutions worldwide since 1992, they were awarded the RIBA Gold Medal in 2002, and they are recognised influences on many of the world's greatest contemporary architects and buildings. 
Yet the bulk of their visionary work has to date remained difficult to access, largely stored in domestic conditions or temporary storage. In collaboration with the remaining members of Archigram or their heirs, and funded by a £304,000 grant from the Arts and Humanities Research Council, a team from the University of Westminster has built an online, searchable database of all the available works of Archigram for study by architectural specialists and the general public. I have collected a few projects below, just to highlight how forward-thinking Archigram were, foreseeing many things we enjoy and desire today. For the full list of projects, make sure you visit archigram.westminster.ac.uk. Geoff Manaugh of BLDGBLOG has also written a wonderful post about the project, which is a must-read. For now, I leave you with seven fantastic but much less known projects somewhat related to CAN. Holographic Scene Setter Speculative proposal for holographic projection of environments, or a virtual-reality environment. Part of the Instant City project. I had a holographic scene setter - a light space - switch on/walk around/3D/walk thro'/Hollywood Boulevard in my TV room/Death Valley on my patio/Tahiti in my pad/Laurel and Hardy in the morning/The 'Who' at night ... change film - new environment/switch on/off/there - not there ... what's real/it's observable/it's real when it's there/is it a dream? - a ghost? - a turn-on? ... Holographic ceiling - cloud - rainbow - cloud - people - John (pee on your shoes) - scenery - event - television ... great ... switch on the people/turn on the crowd/bring in the whole scene ... turn off the ceiling. more Media Experiments 1-2, 1968 Light/Sound Workshops: television display system set up as an experiment in multi-channel and multi-media display with streams of images flickering across grouped screens. A far more flexible medium is T.V. 
which, at the moment, is still normally thought of as the single-channel box, but which, whilst utilising other media such as film as content, allows us far more opportunity for selection. If we then consider T.V. used in display systems monitoring a number of channels concurrently from a variety of sources, both from national and international news and entertainment networks and also from personal closed-circuit and video-tape and even generated by computer, we can see what colossal potential there is in the medium. So in the not so distant future we can expect to have to deal with the multi-channel multi-media situation both professionally and as an involved audience in our own homes, and one suspects at times the distinction between producer and audience may become blurred. more Soft Scene Monitor: MK1, 1968 Exhibit designed for the Aftenposten newspaper and the Oslo Arkitektforening, and exhibited at Kunstnernes Hus, Oslo, as a prototype home access unit to communications, audio-visual entertainments and information technology. As the Instant City study developed, certain items emerged in particular. First, the idea of a 'soft-scene monitor' - a combination of teaching-machine, audio-visual juke box, environmental simulator, and from a theoretical point of view, a realization of the 'Hardware/Software' debate. more Info Gonks, 1968 Speculative design maquette for educational television glasses and headgear. Use of the 1½-inch television as a built-up pair of spectacles with stereo glasses all wired to headgear receiver: everyman his own on-the-eye and in-the-ear environment. more Cushicle & Suitaloon, 1966 Speculative design for a personal, individual and portable dwelling unit which may be ‘worn’ for transport and unpacked for occupation. The illustrations show the two main parts of the Cushicle unit as they expand out from their unpacked state to the domestic condition. One constituent part is the “armature“ or “spinal“ system. 
This forms the chassis and support for the appliances and other apparatus. The other major element is the enclosure part which is basically an inflated envelope with extra skins as viewing screens. Both systems open out consecutively or can be used independently. The Cushicle carries food, water supply, radio, miniature projection television and heating apparatus. The radio, TV, etc., are contained in the helmet and the food and water supply are carried in pod attachments. With the establishment of service nodes and additional optional apparatus, the autonomous Cushicle unit could develop to become part of a more widespread urban system of personalized enclosures. more Enviro-Pill, 1969 Speculative proposal for a pill for inducing architecture or virtual and imaginary environments in the mind. more. Electronic Tomato, 1969 Speculative proposal for mobile sensory stimulation device. MANZAK is our latest proposal for a radio-controlled, battery-powered electric automaton. It has on-board logic, optical range-finder, TV camera, and magic eye bump detectors. All the sensory equipment you need for environmental information retrieval, and for performing tasks. Optional extras include response equipment for specific applications and subtasks to your own specification. Direct your business operations, do the shopping, hunt or fish, or just enjoy electronic instamatic voyeurism, from the comfort of your own home. For the great outdoors, get instant vegetable therapy from the new ELECTRONIC TOMATO – a groove gizmo that connects to every nerve end to give you the wildest […]
- “Nature of Code” by Daniel Shiffman – Natural systems using Processing Processing enthusiasts rejoice! There is a new book coming from Daniel Shiffman and it’s called Nature of Code. As its title implies, this book takes phenomena that naturally occur in our physical world and shows you how to simulate them with […]
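The book's examples are in Processing, but the core pattern it builds on, a "mover" whose velocity and position are nudged by forces each frame via simple Euler integration, can be sketched in Python. This is a rough approximation of the general approach, not Shiffman's actual code; the class and parameter names are my own.

```python
class Mover:
    """A minimal 'mover': each step, forces set acceleration,
    acceleration changes velocity, velocity changes position
    (plain Euler integration)."""

    def __init__(self, x=0.0, y=0.0):
        self.pos = [x, y]
        self.vel = [0.0, 0.0]
        self.acc = [0.0, 0.0]

    def apply_force(self, fx, fy, mass=1.0):
        # Newton's second law: a = F / m
        self.acc = [fx / mass, fy / mass]

    def update(self, dt=1.0):
        self.vel[0] += self.acc[0] * dt
        self.vel[1] += self.acc[1] * dt
        self.pos[0] += self.vel[0] * dt
        self.pos[1] += self.vel[1] * dt

# Drop a mover under constant gravity for one simulated second
m = Mover()
for _ in range(10):
    m.apply_force(0.0, -9.8)
    m.update(dt=0.1)
```

Swap the constant force for wind, friction or springs and you have the skeleton of most of the simulations this kind of book teaches.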
- 10 PRINT – Single line of code as a lens through creative computing A collaboration between Nick Montfort, Patsy Baudoin, John Bell, Ian Bogost, Jeremy Douglass, Mark C. Marino, Michael Mateas, Mark Sample, Noah Vawter and Casey Reas, 10 PRINT is a book about a one-line Commodore 64 BASIC […]
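That one line is the book's title: 10 PRINT CHR$(205.5+RND(1)); : GOTO 10. It endlessly prints one of two PETSCII diagonal characters (codes 205 and 206) at random, weaving a maze-like pattern across the screen. A finite Python paraphrase, with Unicode box-drawing diagonals standing in for PETSCII (the function name and parameters are my own), might read:

```python
import random

def ten_print(rows=8, cols=40, seed=None):
    """Finite take on 10 PRINT: fill a grid with random diagonals,
    standing in for PETSCII characters 205 and 206."""
    rng = random.Random(seed)
    return "\n".join(
        "".join(rng.choice("╱╲") for _ in range(cols))
        for _ in range(rows)
    )

print(ten_print())
```

Even translated, the program keeps the quality the book dwells on: a single random choice, repeated, produces a surprisingly architectural texture.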
- Eyeo 2012 – Afterthoughts and asides Eyeo, eyeo, eyeo – well, where to begin? At the best of times providing an overarching review of a festival is an exercise in exclusion and cobbling together a vague impression of the second edition of Eyeo is no exception. In fact, one could say that Eyeo is pretty much a conference organized around the idea that creative technologists are willing to travel all the way to Minnesota in order to be a room away from great artist talks and public conversations. An attendee simply has to surrender him/herself to the horrible burden of choice and constantly pick between presentations that are outright unmissable or simply very promising – it was a tough week let me tell you! I don't feel particularly compelled to provide an outright 'review' of the festival beyond saying "you should really go next year", but I thought it would be interesting to share (and expand on) some of the notes I made over the course of the week. • Paola Antonelli's talk was a super-savvy introduction to the proceedings. Not only because it was freewheeling and gregarious, but due to some of the underlying provocations she made. Antonelli 'zoomed out' from the specificity of the last several years and looked to the radical 70s for inspiration. Although it might be old hat for the architecture/urbanist set, it was great to see the work of Italian upstarts Superstudio serving as discussion fodder for a room full of interaction designers and creative coders. As was the case with Natalie Jeremijenko's presentation last year, the polemic urging the audience to think big and tackle complex issues was deeply appreciated. • The Ignite talks that chased Antonelli were great! Go watch Jen Lowe, Rachel Binx, Sha Hwang and Molly Steenson's speed presentations right now. • One of the fun undercurrents this year was accessibility. 
Self-described "hardware girl in a software crowd" Ayah Bdeir presented littleBits, an open-source library of snappable, magnetic modules perfect for teaching electronics fundamentals. Bdeir described her project as riffing on the logic of object-oriented programming to yield "interaction-oriented hardware", and the interoperability of littleBits resonated nicely with Golan Levin's presentation of his multi-brand toy construction system hack/augmentation The Free Universal Construction Kit from the night before. • Another prominent theme: ruminations on the extents of practice and the nature of performance. Kyle McDonald deadpanned that his largest ongoing performance project was email, and while this was intended as a joke it underscored the degree to which participation in communities and the sharing of knowledge and resources (e.g. GitHub) are key elements of contemporary practice. Golan Levin dispelled the mythology of the TED talk and drew attention to the fact that the TED talks that end up online are hyper-edited to the point where "ums" and "maybes" are removed, yielding seemingly seamless final cuts. Is the streamlined 17-minute talk a vital part of contemporary artist/designer 'brand management'? Undoubtedly. • Andrew Bell (the lead architect of Cinder) gave a thoroughly witty presentation on "how to be a creative coder and not have to underwrite it with something else" that deftly schematized the digital agency marketplace and how creative technologists can be 'free agents', invent their own jobs and still make rent each month. Keep an eye out for his slide deck, as the last chunk of it has some pretty vital survival tips for those looking to swim with the sharks. • While I mentioned Antonelli's talk was an incendiary introduction, Kevin Slavin and Marius Watz's presentations could be read as the 'boots on the ground' bookends to the proceedings. 
Slavin opened with a sprawling consideration of "that fucking bastard that we've called luck" that parsed the history of gaming, the market and the social history of various cultures of control. Watz dedicated a good portion of his talk to outlining the parameters by which he—and the audience—might evaluate algorithmically generated work, in an attempt to cultivate a more constructive culture of self-critique within the (at times prone to back-patting) software art community. Both of these presentations moved well beyond the territory of standard artist talks and the payout was rich. • Regarding the previous point: Eyeo is where veteran speakers roll out and test their A-game material. I would need more than two hands to count the number of speakers I saw revising, reworking and rethinking their talks and slide decks, right up until the very last minute, in response to how previous sessions had unfolded. Chalk it up to a combination of nerves and a brain-trust audience that 'gets it': there were a lot of fabulously earnest, ambitious and innovative presentations at Eyeo 2012. So there you have it, 5% of my notes on the 45% of the presentations that I attended (and I didn't even mention several of my favourite talks). The most succinct encapsulation of the event that I saw was tweeted by the aforementioned Sha Hwang, who described it as a "high resolution, real time flocking simulation of artists, designers, coders, makers." I'm not going to argue with that characterization, as the resolution certainly was high; I'll just point out that navigating within a flock requires a delicate blend of alignment, cohesion and maintaining a bit of breathing room between you and your nearest neighbours – Eyeo delivered inspiration and provocation on all of these fronts. Eyeo Festival | Eyeo Vimeo Channel Photos: Chloe […]
- Speculative cartography & programmed landscapes – a chat with Benedikt Groß Benedikt Groß is a speculative and computational designer whose work is often featured here on CAN. We recently interviewed him in order to glean a little insight into Benedikt's thoughts on his recent work, 'outsider' cartography, and generative […]
- MapMap Vauxhall [Processing] Fascinated by the idea of mental maps and of gaining insight into a person’s perception of the world by simply asking them to draw a map from memory, Benedikt Groß created, during his ongoing Design Interactions master’s, a Processing application that allows users to mould OpenStreetMap maps based on their recollection and experience. First, points are placed on the map, then a mesh is constructed and the map is modified according to the new point positions. “I wrote two little tools (in Processing), MapMap_App and TransformOSM_Droplet. With the first one I was able to create and save a transformation matrix; the procedure is highly subjective and involves quite a lot of legwork. Btw. a huge thanks to Hartmut Bohnacker for helping me out with the math part, I was not savvy enough to figure it out in such a clever and smart way. The second tool then processed the delta (= transformation matrix) and the OpenStreetMap data of Vauxhall into a final OpenStreetMap file. In the end I just had to render the file to its final visual representation. I decided to style the maps in the Google Maps style to give them a more “official” look; sidenote: it seems we are all already cognitively branded by Google to their particular style. The rendering was done with Maperitive (a free desktop app to style OpenStreetMap files in a quite convenient way).” You can download the source code of both tools (Processing sketches) at the GitHub Project Page. Concept + Idea: Benedikt Groß Transformation Math: Hartmut Bohnacker Tutor: Nina Pope Mental Maps: Random “sample” of Vauxhall residents/transients Real World Map Data: OpenStreetMap community OSM Render: Maperitive See also SubMap by Dániel Feles, Krisztián Gergely, Attila Bujdosó and Gáspár Hajdu at Kitchen […]
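The post doesn't reproduce Bohnacker's transformation math, so purely as an illustrative stand-in, here is one simple way a point-driven map warp could work: each dragged control point carries a displacement, and every map coordinate is moved by an inverse-distance-weighted blend of those displacements. This is a hypothetical Python sketch of the general technique, not the actual MapMap_App code; all names are mine.

```python
def idw_warp(point, controls, power=2.0, eps=1e-9):
    """Displace one (x, y) coordinate by an inverse-distance-weighted
    blend of control-point displacements.

    controls: list of ((cx, cy), (dx, dy)) pairs, i.e. a control's
    original position and the displacement it was dragged by."""
    x, y = point
    total_w = wdx = wdy = 0.0
    for (cx, cy), (dx, dy) in controls:
        d2 = (x - cx) ** 2 + (y - cy) ** 2
        if d2 < eps:  # sitting on a control: move exactly with it
            return (x + dx, y + dy)
        w = 1.0 / d2 ** (power / 2)
        total_w += w
        wdx += w * dx
        wdy += w * dy
    return (x + wdx / total_w, y + wdy / total_w)

# One control dragged 10 units east, another left in place:
# nearby coordinates follow the drag, distant ones barely move.
controls = [((0.0, 0.0), (10.0, 0.0)), ((100.0, 0.0), (0.0, 0.0))]
near = idw_warp((1.0, 0.0), controls)
far = idw_warp((99.0, 0.0), controls)
```

Applied to every node in an OSM extract, a warp of this kind would bend the "objective" street grid toward the subjective positions a resident remembers.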
Posted on: 19/12/2012