Geometry, Textures & Shaders with Processing – Tutorial

As the title suggests, I’ll be covering a lot of ground in this blog post. My intention is to describe and show practical examples of a number of crucial building blocks for 2D/3D projects. In that sense the tutorial is more of a general reference than a step-by-step guide towards a single end result. All of the shared code examples are fully commented, so reading them will tell you what each line of code does. This makes it easier to understand and adapt the sketches yourself.

In this tutorial I’ll start with the basics: creating custom geometry in two dimensions. Then we’ll look at adding textures to these 2D shapes. Not only is this useful in itself (for 2D visuals), but the concept of texturing is also easier to grasp in 2D. Texturing can be used for a variety of things, not just placing static images onto geometry. One of the examples shows you how to use dynamic texturing and spritesheets to create 2D animations!

Once we’ve covered 2D, we’re ready to make the move to 3D: starting with flying multi-colored pyramids, then creating an earth with vertices, normals, textures and correct texture coordinates. Especially the latter – getting the texture coordinates right – can be quite a challenge. The last step is applying shaders to 3D shapes. With shaders you can create awesome things. For starters you can do custom lighting and multi-texturing, but that’s only the beginning. Shaders allow you to adapt specific aspects that are normally handled automatically in the OpenGL pipeline. For example, you can change vertex positions within the vertex shader. Examples of this are included in this tutorial.

For all of this to work correctly, the different parts of the sketch (main sketch, 3D shape and shaders) have to be in tune. These code examples will show you how to do that. And once you master the basics, you will be able to build on that knowledge and incrementally develop much more advanced programs. Let’s get started by setting things up…

Download the code examples

All of the code examples mentioned in this tutorial (and more) are on my GitHub. They were written in Processing 1.5.1 using the GLGraphics 1.0.0 library by Andres Colubri. As many of you will know, the Processing development team is putting in a tremendous effort working towards 2.0. One of many exciting new things is the work Andres is doing on the new OpenGL renderers in Processing. Most of the things you could previously only do with GLGraphics are now possible using Processing’s own classes and methods, without the use of a contributed library. So as a service to you guys, I have already ported all of the code examples to Processing 2.0b8/2.0b9.

Note however that these are still beta versions and some features are sparsely documented. Add the unforgiving nature of GLSL shaders and you will realize that working in these betas can be a challenge. This means that, while all of the examples work, not all of them are a 1:1 carbon copy of the original. Luckily one of GitHub’s greatest strengths is collaborative coding, so feel free to fork & improve the code. Send a pull request if you make progress, and I can incorporate useful adaptations in the master repository.

I would advise you to download the complete repo as a whole. Then all the code examples will run as is and they will already be in the correct folder structure, since some share a central images folder. For the best experience I recommend the original (1.5.1 / GLGraphics) version, but you can choose yourself. Of course it’s possible to have multiple versions of Processing on your computer at the same time, so you can also download and try both. Anyway, here are the relevant download locations:

Code examples for Processing 1.5.1 + GLGraphics | Code examples for Processing 2.0b8 or 2.0b9

[Screenshot: Custom2DGeometry example]

Custom 2D geometry

As described in the introduction, this tutorial follows a ground-up approach, starting with the basics and working our way up to more advanced uses of the same techniques. The first step in this process is creating custom 2D geometry. Processing has several built-in shape methods such as rect(), ellipse() and triangle(). These are widely used and very convenient. When you need flexibility however, you’ll want to use vertices to build custom shapes. Using vertices will also open the door for things like per-vertex colors, per-vertex normals and uv texture coordinates. Processing has several methods for this purpose: beginShape, endShape, vertex and texture. You can pass different parameters (such as TRIANGLES, QUADS, etc.) to the beginShape method to indicate the type of shape you’ll be creating. The reference for beginShape has a great overview of these types, including the visual outcome of each. The same set of vertices will create different faces under different types, so you’ll have to think about which type is most suitable.

The Custom2DGeometry code example shows all of the shape types on one screen. It also shows you that OpenGL conveniently interpolates between the colors of vertices within a face. Of course this can be used to create interesting multi-colored gradient polygons.
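To give you a flavor of the syntax, here is a minimal sketch of my own (much simpler than the repository example) that draws a single gradient triangle with the P3D renderer, using a per-vertex fill so the colors are interpolated across the face:

// Minimal custom 2D geometry: one triangle, three per-vertex colors.
// Because fill() is called before each vertex, the renderer blends
// the colors across the face into a smooth gradient.
void setup() {
  size(400, 400, P3D);
}

void draw() {
  background(0);
  noStroke();
  beginShape(TRIANGLES);
  fill(255, 0, 0); vertex(200, 50);   // top vertex: red
  fill(0, 255, 0); vertex(50, 350);   // bottom-left vertex: green
  fill(0, 0, 255); vertex(350, 350);  // bottom-right vertex: blue
  endShape();
}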

[Screenshot: FixedMovingTextures2D example]

Texturing in 2D

Now that we’ve learned about the basic methods in Processing to create custom geometry, let’s move on to the second topic: textures. Texturing is about placing an image onto geometry. Just like vertices are positions for the shape itself, you’ll need positions for placing the texture onto the geometry. These are called uv or texture coordinates. In Processing you can give these uv’s in image dimensions or normalized (0-1) dimensions. It’s best to work with normalized texture coordinates, because these are more flexible (for example when you change to a different sized image) and fit better within the rest of the OpenGL pipeline (for example shaders also work with normalized dimensions).
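For illustration, here’s a minimal sketch (mine, not from the repository) that puts an image onto a quad using normalized texture coordinates; “texture.jpg” is a placeholder for any image in the sketch’s data folder:

// Texturing a quad with normalized (0-1) uv coordinates.
PImage img;

void setup() {
  size(400, 400, P3D);
  img = loadImage("texture.jpg");
  textureMode(NORMAL); // called NORMALIZED in Processing 1.5.1
  noStroke();
}

void draw() {
  background(0);
  beginShape(QUADS);
  texture(img);
  vertex(50, 50, 0, 0);    // each vertex gets a position plus a uv pair
  vertex(350, 50, 1, 0);
  vertex(350, 350, 1, 1);
  vertex(50, 350, 0, 1);
  endShape();
}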

The FixedMovingTextures2D code example shows shapes similar to those in the previous example, except now they are covered by an image texture. This immediately gives a very different visual effect. In most cases you will have static texture coordinates for a specific shape. So the shape can move and rotate all over the place, but its look will stay the same. This is one possibility. Another is using dynamic texture coordinates. The example shows both kinds.

The DynamicTextures2D code example builds further upon the use of dynamic texture coordinates. It shows you that you can easily change texture coordinates and display a whole grid of QUADS without negatively affecting the framerate. If one were to do the same by actually creating PImages for all these segments, it would be much more costly in terms of both memory and computation.

The Texture2Danimation code example uses the same technique but applies it to animation. Multiple frames are stored in a single image in a grid-like manner. By moving through the cells of the grid via the texture coordinates, the displayed frame (actually the visible section of the image) changes on each draw. Even with my subpar spritesheet you can already see how this gives the illusion of an animation. Using this simple technique in combination with some actually worthwhile spritesheets can result in very interesting visual output.
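Here’s a stripped-down sketch of that idea (my own; the asset name “spritesheet.png” and its assumed 4x4 grid of 16 frames are placeholders). Note that the image is loaded once and only the uv’s change per frame:

// Spritesheet animation driven purely by texture coordinates.
PImage sheet;
int cols = 4, rows = 4, numFrames = 16;

void setup() {
  size(400, 400, P3D);
  sheet = loadImage("spritesheet.png");
  textureMode(NORMAL); // NORMALIZED in Processing 1.5.1
  noStroke();
}

void draw() {
  background(0);
  int f = (frameCount / 4) % numFrames;  // advance the frame every 4 draws
  float u = (f % cols) / float(cols);    // left edge of the current cell
  float v = (f / cols) / float(rows);    // top edge of the current cell
  float du = 1.0 / cols, dv = 1.0 / rows;
  beginShape(QUADS);
  texture(sheet);
  vertex(100, 100, u, v);
  vertex(300, 100, u + du, v);
  vertex(300, 300, u + du, v + dv);
  vertex(100, 300, u, v + dv);
  endShape();
}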

[Screenshot: Custom3DGeometry example]

Custom 3D geometry

So far we’ve worked in 2D; now let’s move into 3D! We’ll do this with the Custom3DGeometry code example, which shows an endless stream of multi-colored flying pyramids. This is basically the 3D equivalent of the 2D geometry example we started with. It uses all the same Processing methods, such as beginShape, endShape and vertex. The only difference is the addition of another dimension. In terms of program structure however, there is one more very important difference. All the 2D examples until now were procedural, but the Custom3DGeometry code example uses object-oriented programming (OOP). This means there is a Pyramid class that holds all the relevant code for creating, updating and displaying such a shape. Object-oriented programming is in many cases already useful in 2D sketches. When you start building 3D worlds it becomes almost a prerequisite, given the increased complexity of 3D sketches. Otherwise it will be very hard to keep track of your program. An advantage of OOP is that it keeps your main sketch simple and focused, while placing all relevant code where it belongs – in this case inside the Pyramid class.
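To make that structure concrete, here is a heavily simplified sketch along the lines of the Pyramid class (the repository version adds movement, per-pyramid colors and more):

// A Pyramid class that creates and displays itself with per-face colors.
Pyramid p;

void setup() {
  size(600, 600, P3D);
  p = new Pyramid(150);
}

void draw() {
  background(0);
  translate(width/2, height/2, 0);
  rotateY(frameCount * 0.01);
  p.display();
}

class Pyramid {
  float s; // half the base size and the height

  Pyramid(float s) { this.s = s; }

  void display() {
    noStroke();
    beginShape(TRIANGLES);
    // four sides, each running from the apex down to one base edge
    fill(255, 0, 0);   vertex(0, -s, 0); vertex(-s, s, -s); vertex(s, s, -s);
    fill(0, 255, 0);   vertex(0, -s, 0); vertex(s, s, -s);  vertex(s, s, s);
    fill(0, 0, 255);   vertex(0, -s, 0); vertex(s, s, s);   vertex(-s, s, s);
    fill(255, 255, 0); vertex(0, -s, 0); vertex(-s, s, s);  vertex(-s, s, -s);
    endShape();
  }
}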

When you want to create 3D geometry in Processing, there are different resources you can use. First, many mathematical shapes are described on websites like Wikipedia. You can use those blueprints to build a shape with code. Second, there are many open source code examples of geometry. Even when it’s in another programming language, geometry code is usually relatively easy to port. Finally, there are several Processing libraries, such as Hemesh, Shapes3D and Toxiclibs, that can assist with the creation of complex three-dimensional shapes.

[Screenshot: TexturedSphere example]

Texturing in 3D

All of the above examples were pretty lightweight, so drawing them with direct (immediate mode) calls was fine and didn’t really impact the framerate. Once you work with much bigger shapes that have lots of vertices, it will start to hurt. Fortunately there are classes that can efficiently store all of the relevant data in a way that works well for the GPU (read: in a Vertex Buffer Object). The GLGraphics library has the GLModel for this; the Processing betas have the PShape. Both work in a similar manner, even though the creation syntax differs a bit. For the rest of this tutorial, whenever I say GLModel you can read PShape in the context of the 2.0 betas. Using these classes allows you to have geometry with lots of vertices and still get a decent framerate. The basic steps that I’m using in these code examples are:

  1. Create all the necessary shape data (vertices, normals, texture coordinates) and store them in temporary arraylists.
  2. Create a GLModel and transfer the data from the arraylists into the GLModel.
  3. Display the shape.

The first two steps (creation of shape, creation of GLModel) are done once. After that the shape can be displayed efficiently.
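As a minimal illustration of these three steps, here is a sketch written in the PShape style of the 2.0 betas (the exact creation syntax shifted a bit between beta releases, and the GLModel syntax in the 1.5.1/GLGraphics version differs again; see the repository for both variants). “texture.jpg” is again a placeholder image:

PShape quad;

void setup() {
  size(600, 600, P3D);
  PImage tex = loadImage("texture.jpg");

  // Step 1: build the shape data in temporary arraylists
  ArrayList<PVector> verts = new ArrayList<PVector>();
  ArrayList<PVector> uvs = new ArrayList<PVector>();
  verts.add(new PVector(-100, -100, 0)); uvs.add(new PVector(0, 0));
  verts.add(new PVector( 100, -100, 0)); uvs.add(new PVector(1, 0));
  verts.add(new PVector( 100,  100, 0)); uvs.add(new PVector(1, 1));
  verts.add(new PVector(-100,  100, 0)); uvs.add(new PVector(0, 1));

  // Step 2: transfer the data into a retained-mode shape (done once)
  textureMode(NORMAL); // uv's in the 0-1 range
  quad = createShape();
  quad.beginShape(QUADS);
  quad.texture(tex);
  quad.noStroke();
  for (int i = 0; i < verts.size(); i++) {
    PVector v = verts.get(i), t = uvs.get(i);
    quad.vertex(v.x, v.y, v.z, t.x, t.y);
  }
  quad.endShape();
}

void draw() {
  // Step 3: display the stored shape efficiently, every frame
  background(0);
  translate(width/2, height/2);
  shape(quad);
}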

The TexturedSphere code example uses the described steps to create a virtual earth. It’s relatively easy to create the 3D points that make up a sphere, and it’s then relatively straightforward to put some polygons on it. What’s harder to get right than I initially thought are the texture coordinates. Texture coordinates are essentially always going to be somewhat problematic, because you’re putting 2D images onto 3D shapes, so there are always going to be areas that don’t quite look right. When writing this code I ran into exactly these issues (incorrect texture wrapping on the seam and the poles) and found multiple posts on the net about it: one, two, three, four. So how about searching for existing code that has already solved this problem? Luckily I found some C++ code by Gabor Papp which I could port to Processing (yay for open source!). Gabor’s method is based on subdivision, which is very useful for setting the density of the mesh while ensuring an equal distribution of vertices over the shape. Note that this can probably be further improved in terms of efficiency, because it features vertices with a shared location but doesn’t use indices to re-use them. The reason is that in some cases vertices share the same location, but not the same texture coordinates. Anyway, this example shows the current end result: a correctly textured sphere (based on a subdivided icosahedron) that is stored in a GLModel.
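For reference, the textbook way to derive uv’s from a point on a unit sphere looks like the function below (a sketch of the general technique, not necessarily the exact code in the repository). This naive per-vertex mapping is precisely what causes the seam artifact: faces that straddle the atan2 wrap-around interpolate across almost the entire texture, which is why vertices along the seam end up duplicated with different uv’s:

// Naive spherical uv mapping for a unit-length vertex position.
// u wraps around the equator, v runs from pole to pole.
PVector sphereUV(PVector p) {
  float u = 0.5 + atan2(p.z, p.x) / TWO_PI;
  float v = 0.5 - asin(p.y) / PI;
  return new PVector(u, v);
}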

[Screenshot: MultiTexturedSphereGLSL example]

Applying shaders to shapes

Now that we have created a 3D shape and stored it inside a GLModel, it’s time to look into the next challenge: applying shaders to the shape. Shaders are really starting to reach a broader audience, in part due to online tools like Shadertoy. Unfortunately their user-friendliness still lags behind. Nevertheless, given their awesome powers and Processing 2.0’s upcoming support for them, shaders hold a great attraction for many of us. This tutorial will focus on two types of GLSL shaders: vertex and fragment.

The TexturedSphereGLSL code example takes the previous example and adds GLSL shaders to the mix, just to get things up and running. This way you can compare both examples and see where the shaders make a difference in terms of code (even though the end result in this case may be very similar or even identical). In this example you can move the mouse around to change the light position.
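In the 2.0 betas, the main sketch side of this boils down to something like the following sketch (the shader file names and the “lightPos” uniform are placeholders that must match your own GLSL code; the 1.5.1 version uses GLGraphics’ GLSLShader class instead, and the built-in sphere here stands in for the textured earth):

PShader sh;
PShape earth;

void setup() {
  size(600, 600, P3D);
  // fragment shader first, vertex shader second
  sh = loadShader("earthFrag.glsl", "earthVert.glsl");
  earth = createShape(SPHERE, 150); // stand-in for the textured sphere
}

void draw() {
  background(0);
  translate(width/2, height/2);
  // feed the mouse-driven light position to the shaders
  sh.set("lightPos", map(mouseX, 0, width, -1, 1),
                     map(mouseY, 0, height, -1, 1), 1.0);
  shader(sh);    // route the shape through the custom shaders
  shape(earth);
  resetShader(); // back to Processing's default pipeline
}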

The MultiTexturedSphereGLSL code example takes things another step further by adding multiple input textures into the mix. Depending on the lighting calculation, the final output is computed in the shaders: it could be day or night or something in between. And there’s a cloud layer (driven by time) hovering over the earth. As another relatively similar code example, you’ll be able to compare it with the code from the previous two textured sphere examples and spot the differences. Hopefully these examples help you understand the process and code for integrating GLSL shaders with your existing shapes.

[Screenshots: GLSL_SphereDisplacement and GLSL_Heightmap examples]

Special effects with shaders

Vertex displacement. It’s awesome! Ever since I saw videos of this effect online, I’ve wanted to recreate it myself. And that was long before I learned about GLSL, let alone knew it was called vertex displacement. That’s why I’m very glad that I can present multiple code examples that all feature this effect: both displacement of an essentially 2D heightmap and displacement of a 3D sphere. In addition, each of these examples has a variation based on image input and a variation based on procedural noise generated inside the GLSL shader itself. The titles of the examples speak for themselves, so it should be easy to figure out which one is which. The basic idea behind vertex displacement is that the position of each vertex is translated along the normal by a certain amount. This amount is determined by two things: a global displaceStrength that applies to all vertices and an individual displacement that is local to each vertex. The individual displacement is usually based on a displacement map, aka an image. Another often-seen source for the individual displacement is Perlin noise, but any calculation could be used. Examples of all these types of displacement can be seen in the video accompanying this blog post.
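The core of the effect fits in a few lines of vertex shader code. The following is a sketch of the idea in the old-style GLSL that the 1.5.1/GLGraphics examples use (built-in gl_* variables; the 2.0 betas use Processing-specific uniform and attribute names instead). The uniform names here are illustrative, not necessarily those in the repository:

// Vertex displacement: push each vertex outward along its normal.
uniform sampler2D displacementMap; // grayscale image driving the effect
uniform float displaceStrength;    // global multiplier for all vertices

void main() {
  // read the per-vertex displacement from the map (the red channel
  // suffices for a grayscale image)
  float d = texture2D(displacementMap, gl_MultiTexCoord0.xy).r;
  // translate the vertex along its normal by the combined amount
  vec4 displaced = gl_Vertex + vec4(gl_Normal * d * displaceStrength, 0.0);
  gl_TexCoord[0] = gl_MultiTexCoord0;
  gl_Position = gl_ModelViewProjectionMatrix * displaced;
}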

[Screenshot: GLSL_TextureMix example]

The GLSL_TextureMix code example is a different kind of GLSL special effect. This example shows you how to mix multiple input textures into a single output. From the main sketch different mix modes can be set, which will affect the code in the shader. In terms of optimization it may be better to have multiple different shaders and switch between those, but this tutorial’s main focus is not optimization; rather it’s about showing you how you can use GLSL to achieve certain effects. This texture mixing effect is achieved inside the fragment shader. Advantages of mixing inside the fragment shader are automatic interpolation and normalized texture coordinates. So you don’t have to worry about pixels, just about keeping your texture coordinates between 0 and 1, and the result will be a perfectly smooth mix. Another benefit of doing the texture mixing in a shader is that it goes beyond simply mixing images. Rather you’re mixing textures, which could just as well be happening on the surface of a 3D shape – see the multi-textured sphere example earlier.
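Stripped to its essence, such a fragment shader can be as small as this (again old-style GLSL with illustrative uniform names; GLSL’s built-in mix() does the actual blending, and swapping in other formulas gives the other mix modes):

// Blend two textures into a single output color.
uniform sampler2D texture0;
uniform sampler2D texture1;
uniform float mixFactor; // 0.0 = only texture0, 1.0 = only texture1

void main() {
  vec2 uv = gl_TexCoord[0].xy; // normalized coordinates, 0-1
  vec4 colorA = texture2D(texture0, uv);
  vec4 colorB = texture2D(texture1, uv);
  gl_FragColor = mix(colorA, colorB, mixFactor);
}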

[Screenshot: GLSL_HeightmapNoise example]

Troubleshooting

It’s always possible that things go haywire somewhere. Here are a couple of suggestions for when they do. If more relevant information comes to light, I will update this section accordingly.

  • There is a harmless GLSL-related error message on some ATI Radeon graphics cards, which writes the following to the console: “Validation warning! – Sampler value [nameOfFirstTexture] has not been set.” This message can be ignored and will not affect the sketch.
  • There is a harmless OpenGL-related warning message in the Processing betas which writes the following to the console: “Display 0 does not exist, using the default display instead”. This message can be ignored and will not affect the sketch.
  • There are different types of error messages like this: “Error: OpenGL error [####] at top endDraw(): [some error message]”. As a general guideline, number 1280 can usually be ignored without impact, while 1281, 1282 and higher will have a bigger impact, perhaps even show-stopping. If the sketch runs fine, the error can be ignored. If your screen remains black, it needs to be solved. These errors can be caused by older graphics cards, outdated drivers or coding errors. Many of them have been discussed on the Processing forum, so the best remedy is to search for the specific error and see what solutions have come up.

Final remarks

Of course you can’t cover everything in a single tutorial, but I hope this was a helpful introduction to some of the important building blocks for creating geometry, using textures and working with GLSL shaders. The most important thing is that you download the examples from GitHub and just see what’s going on in the code. Try to understand how the main sketch relates to the stuff going on inside the GLSL shaders. As with most coding topics, it will take a while before you really get it, especially since the GLSL pipeline can be less intuitive when you start working with it. All I can say is: keep coding, keep learning, keep trying and make awesome stuff!

Get help and moral support on the Processing forum. If you encounter issues – and you are sure it’s not your own code that’s causing the problem – then file a bug report on Processing’s GitHub issues list. Make sure to add a clear description and a runnable code example that reproduces the issue, otherwise it can’t be fixed. To end on a personal note: I’ll probably add other repositories with more code examples in the future. If you want to keep up to date on the latest developments, follow me on GitHub or Twitter!

Once more the two GitHub repositories holding the code examples that accompany this blog post:

Code examples for Processing 1.5.1 + GLGraphics | Code examples for Processing 2.0b8 or 2.0b9

[Screenshot: GLSL_SphereDisplacementNoise example]

    • epar

      Thanks Amnon. Very helpful, and came just at the right moment.

    • tosque

      There goes my weekend.

    • Martin Schissler

      hi amnon. thanks for that nice tutorial. i am searching for the example code for the picture http://www.creativeapplications.net/wp-content/uploads/2013/05/AmnonOwed-GTS-GLSL_SphereDisplacement01-640×360.jpg but i can’t find it. is it contained in the repository? thanks.

    • RT

      I tried to achieve that effect by changing noStroke to stroke. What I got was: “Stroke path is too long, some bevel triangles won’t be added”.
      My next step would be reading the colors of the texture into the stroke. However, I have no idea how the example above did it, because it seems to be a masked rather than a colored stroke.

    • Erim (http://www.erimkocatepe.com/)

      Great post! I’ve been waiting for an article like this for sometime now.
      I think graphics related tutorials for processing are much needed.
      please keep them coming.