**By Stephen Schieberl and Joshua Noble**

We’ve heard it plenty of times when people are talking about working with the Kinect: “we’ll just get the point cloud and turn it into a mesh.” You may have even thought it yourself at some point: “this is going to be easy.” Well, it’s actually fairly difficult to do, and especially difficult to do quickly, because a mesh is a complex object to create and update in realtime. In a mesh of either triangles or quads, each surface needs to be correctly oriented to its neighboring surfaces and to the camera. A point cloud coming from the Kinect has several thousand points, which means several thousand pieces of geometry need to be created all at once. The typical approach is to just display the points and not worry about trying to make surfaces or shapes out of them. While this is fine for some projects, we’d like to see more people dig into making real shapes from their Kinect imagery, so we’re going to show you how to do that in two different ways. This is a fairly advanced tutorial that focuses a little more on theory and implementation than on a specific toolset. What we’re going to talk about in this article can be applied to any of the creative coding toolkits, but we’ll be making projects in Cinder to demonstrate. If you don’t know anything about Cinder, I suggest you take a look here: http://libcinder.org/ and check it out. With that said, let’s get into some theory.

Point clouds are big, and connecting those points into a 3D shape is slow. We need to make geometry and we need to make it fast, so we’re going to look at two approaches that use shaders both to generate our geometry and to position the RGB data that we receive from the camera. We’re using shaders because they can be faster, and because they more neatly separate the code that accesses our peripherals, i.e. the Kinect, from the code that creates our geometry and displays the mesh.

## A quick introduction to shaders

This is a very quick introduction to shaders; for more detail there are a lot of resources that you can look at: The Orange Book, NeHe, even glsl.heroku.com. For now, though, we just want to make sure you get the basics: GLSL is the official OpenGL Shading Language, a high-level programming language similar to C/C++ that targets several parts of the graphics card. With GLSL, you can write short programs, called shaders, which are executed on the GPU. GLSL can be fairly confusing because graphics card support is a little bit unreliable, i.e. what works on one card may not work on another, and because the differences between versions are significant. However, whether you’re working with GLSL ES on mobile devices or WebGL, or with full GLSL in a Cinder application, the basics are the same, and an understanding of how the shading pipeline functions will serve you well on any OpenGL platform or with other shading languages like Cg, HLSL, or the ARB assembly programs.

Before we get started though, you might want to know what version of GLSL you’re running. This isn’t mandatory, but it is pretty handy. Downloading and running the following: http://www.realtech-vr.com/glview/ will let you know what version of GLSL your computer supports. If you aren’t interested in that, or you already know, feel free to skip that step.

So, first, the basics: a shading language is a special programming language adapted to map easily onto shader programming. These languages usually have special data types, such as colors and normals. Because of the various target markets of 3D graphics, several different shading languages have been developed; in this article we’re interested in GLSL, whose programs are divided into stages: vertex shaders, geometry shaders, and fragment shaders. GLSL shaders themselves are just text that is passed to the graphics card driver for compilation from within an application using the OpenGL API’s entry points. Shaders can be created on the fly from within an application or read in as text files, but they must be sent to the driver in the form of a text string.

There are three fundamental steps in shading: read and modify any vertices, create or modify any geometry, and read and modify any pixel data. The OpenGL shader pipeline looks roughly like this:

Let’s break that down a bit.

### Vertex Shaders

A vertex shader operates on the attributes of a location in space, or vertex: not only the actual coordinates of that location but also its color and how any textures should be mapped onto it. A vertex shader can change the position of each vertex, perform lighting computations per vertex, and set the color that will be applied to each vertex.

Your application certainly doesn’t need to do all of these operations to use a vertex shader. As you’ll see in the next code example, you don’t need to apply lights to your objects. Even though you don’t need to use all the functionality of the vertex shader, when you create and apply one, your program assumes that you’re replacing all the fixed-function vertex transformations. When a vertex shader is used, it becomes responsible for all the needed functionality of this stage of the pipeline.

The vertex processor processes each vertex individually and doesn’t store any data about the other vertices that it has already processed or that it will process. It is responsible for at least setting the position of the vertex and passing it on to the next stage, usually by transforming the vertex with the ModelView and Projection matrices. It does this by writing to gl_Position, which is where OpenGL stores the location of the vertex. The vertex processor has access to OpenGL state, so it can perform operations that involve lighting, for instance, use materials, and even access textures.
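To make that concrete, here is a minimal CPU-side sketch of the matrix-times-vertex product the vertex processor performs when it writes gl_Position. Plain arrays stand in for GLSL’s mat4 and vec4, and the `transformVertex` helper is illustrative, not part of any API:

```cpp
// Multiply a 4x4 matrix (row-major) by a 4-component vertex. This is the
// per-vertex work the vertex processor does when it writes gl_Position.
void transformVertex(const float m[4][4], const float v[4], float out[4]) {
    for (int row = 0; row < 4; ++row) {
        out[row] = 0.0f;
        for (int col = 0; col < 4; ++col)
            out[row] += m[row][col] * v[col];
    }
}
```

A translation matrix, for example, shifts the vertex by the values in its last column; the ModelViewProjection matrix does the same kind of multiplication, just with rotation, scale, and perspective folded in.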

So, what does a vertex shader look like? They can be quite simple. Take a look at the following example. It’s a single function that sets the value of each vertex passed to it. To be nice to everyone, no matter what graphics card they’re running, we need to make a small split here, showing two different ways of doing things.

**GLSL 1.2**

```glsl
void main() {
    vec4 v = vec4(gl_Vertex);
    // As a quick example, the x position of the vertex is shifted over by 1 unit
    v.x += 1.0;
    // Now the final position of the vertex is set
    gl_Position = gl_ModelViewProjectionMatrix * v;
}
```

**GLSL 1.3+**

```glsl
uniform mat4 mvp;  // set once for the whole draw; it doesn't change for each point
in vec4 vertexIn;  // set for each vertex, i.e. it changes for each point

void main() {
    vec4 v = vertexIn;
    v.x += 1.0;
    // Now the final position of the vertex is set
    gl_Position = mvp * v;
}
```

Here the vertex currently being operated on is used to create a new vertex whose x position is shifted over by an arbitrary amount. That’s a very simple vertex shader that doesn’t alter your image in any immediately noticeable way, but it does show you how the vertex shader functions. In 1.2 the vertex passed to the shader is accessed within main() as gl_Vertex, and its final position after the operation is set using the gl_Position variable; in 1.3+ the input vertex arrives as a user-declared `in` attribute and the matrices are passed in as uniforms.

### Geometry Shader

A geometry shader can generate new graphics primitives like points, lines, and triangles, from those primitives that were sent to the graphics card from the CPU. This means that you could get a point and turn it into a triangle or even a bunch of triangles, or get a line and turn it into a rectangle, or do real-time extrusion.

Geometry shader programs are executed after the vertex shader and before the fragment shader. As an example, if your application creates triangles and passes them to the vertex shader, then your geometry shader will receive one triangle at a time. What you do with those is up to you. You could, for instance, turn each triangle into 2 triangles or add additional points around each triangle. As long as you declare what your geometry shader will be getting in and what it will be churning out, everything will be passed to the fragment shader to be colored in without a problem. A very simple geometry shader looks like this:

**GLSL 1.2**

```glsl
void main() {
    for (int i = 0; i < gl_VerticesIn; ++i) {
        gl_FrontColor = gl_FrontColorIn[i];
        gl_Position = gl_PositionIn[i];
        EmitVertex();
    }
}
```

**GLSL 1.3+**

```glsl
in vec4 vertex[];
out vec4 color;

void main() {
    for (int i = 0; i < gl_VerticesIn; ++i) {
        gl_Position = gl_PositionIn[i];
        color = vec4(i / 4.0, 1.0 - i / 4.0, 0.0, 1.0);
        EmitVertex();
    }
    EndPrimitive();
}
```

The number of vertices passed in, available through the gl_VerticesIn variable, matches the GL primitive type that you’re using: if you’re using points, it’ll be one; if you’re using quads, it’ll be four. gl_Position is where you put the vertex that you’re creating, and EmitVertex() is how you say that you’re done with a vertex. EndPrimitive() is how you tell the geometry shader that you’re done with a primitive altogether. The diagram below shows how you would create a GL_TRIANGLE_STRIP quad from a GL_POINT. This means that the vertex shader receives a point, manipulates its position, and then passes the coordinates of that point to the geometry shader. The geometry shader then generates the 4 points to create the quad using the received position, calls EndPrimitive() when the quad is complete, and then passes the data on to the fragment shader.
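The expansion step can be sketched on the CPU: one input point becomes the four corners of a triangle-strip quad. The `emitQuad` name and `float[4][2]` output are illustrative only; in the real shader each corner assignment would be a gl_Position write followed by EmitVertex():

```cpp
// Expand one point into the four corners of a GL_TRIANGLE_STRIP quad.
// Strip order (-,-), (+,-), (-,+), (+,+) yields two triangles that share an edge.
void emitQuad(float cx, float cy, float halfSize, float out[4][2]) {
    const float signs[4][2] = {{-1,-1}, {1,-1}, {-1,1}, {1,1}};
    for (int i = 0; i < 4; ++i) {
        out[i][0] = cx + signs[i][0] * halfSize; // would be gl_Position + EmitVertex()
        out[i][1] = cy + signs[i][1] * halfSize;
    }
    // ...after the loop the shader would call EndPrimitive()
}
```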

### Fragment Shader

The fragment shader is somewhat misleadingly named, because what it really lets you do is change the values assigned to each pixel. The vertex shader operates on the vertices, and the fragment shader operates on fragments, the candidate pixels. By the time the fragment shader gets its input from the graphics card, the interpolated color of a particular fragment has already been computed, and in the fragment shader it can be combined with an element like a lighting effect, a fog effect, or a blur, among many other options. The usual end result of this stage, per fragment, is a color value and a depth.

The inputs of this stage are the pixel’s location, the fragment’s depth, and interpolated color values; from those the shader decides how the fragment is colored, altered by an effect, or discarded.

Just like vertex shaders, fragment shaders must also have a main() function. Just as the vertex shader provides you access to the current vertex using the gl_Vertex variable, the fragment shader lets you set the color of the current fragment, that is, the fragment the shader is working with when it is called, using the gl_FragColor variable. Like gl_Vertex, this is a four-dimensional vector; it lets you set the RGB and alpha channels of the fragment. In the following example, the color of the fragment is set to a medium gray with full alpha:

**GLSL 1.2**

```glsl
#version 120

void main() {
    gl_FragColor = vec4(0.5, 0.5, 0.5, 1.0);
}
```

**GLSL 1.3+**

```glsl
#version 150

out vec4 color;

void main(void) {
    color = vec4(0.5, 0.5, 0.5, 1.0);
}
```

Notice how each channel is set using a floating point number from 0.0 to 1.0. In 1.2, gl_FragColor is what is passed to the renderer to be displayed on the screen; in 1.3+ you declare your own `out` variable for the same purpose.
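If you’re used to 8-bit color values, the mapping to these normalized channels is just a divide by 255. A tiny helper (`byteToFloatChannel` is a hypothetical name, not part of GLSL or Cinder) makes the relationship explicit:

```cpp
// Convert an 8-bit color channel (0-255) to the normalized float (0.0-1.0)
// form that GLSL expects in a vec4 color.
float byteToFloatChannel(int byteValue) {
    return static_cast<float>(byteValue) / 255.0f;
}
```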

What we want is to get a point cloud from the IR camera of the Kinect and an RGB image from the RGB camera; to display them, you need to assemble them. You can do this on the CPU or on the GPU, but at some point everything needs to go to the GPU so it can be drawn by the graphics card. Of the three shader types, we’re going to focus most on the geometry shader, since our goal is to make a mesh out of the points that we get, quickly and in a lightweight way.

Probably the best way to understand this is to dig into some small demos of fun things that you can do when putting together points and shaders.

To save space and to keep you interested, we’re going to just look at the important parts of the applications, which are available for download and closer browsing here.

The first application is a simple one, to introduce the basics and set the stage for what we’re going to do later on. As you can see in the picture below, we’ve got a control panel that allows you to control the drawing, with the result shown behind the control panel.

We’re actually doing all the drawing in the geometry shader, so we’ll unpack how that’s done. First, the Cinder application .cpp file (the GitHub link is here). Take a look at the loadShaders() method:

```cpp
// Find maximum number of output vertices for geometry shader
int32_t maxGeomOutputVertices;
glGetIntegerv(GL_MAX_GEOMETRY_OUTPUT_VERTICES_EXT, &maxGeomOutputVertices);
```

This is important: every graphics card has a maximum number of vertices that it can output. For most modern cards this is 1024 or even more, but you want to set your shader to output only as many as you need. In this case, since we’re just demonstrating the basics, we’ll go with the max number that the card supports, but in practice you never want to do this. This number is passed to the geometry shader when you actually load it onto the graphics card. Each programming framework (Processing, WebGL, oF, Cinder, etc.) has a slightly different syntax for this, but the basic idea is always the same: load the vertex shader, the fragment shader, and optionally the geometry shader. If you load a geometry shader, you need to say what type of OpenGL primitive you’re passing to the geometry shader, what kind of OpenGL primitive you’re outputting, and the number of vertices that you’re going to be outputting. Here’s the Cinder implementation that we’re using:

```cpp
mShaderPassThru = gl::GlslProg(
    loadResource(RES_SHADER_VERT_150),
    loadResource(RES_SHADER_FRAG_150),
    loadResource(RES_SHADER_GEOM_150),
    GL_POINTS, GL_POINTS, 1); // note: 1 point out
mShaderTransform = gl::GlslProg(
    loadResource(RES_SHADER_VERT_150),
    loadResource(RES_SHADER_FRAG_150),
    loadResource(RES_SHADER_GEOM_150),
    GL_POINTS, GL_TRIANGLE_STRIP, maxGeomOutputVertices); // note: max points out
```

The draw() method in the first application is quite simple: draw a bunch of points and you’re good to go.

```cpp
// Draw points
mShader.bind();
glBegin(GL_POINTS);
for (vector<Vec3f>::const_iterator pointIt = mPoints.cbegin(); pointIt != mPoints.cend(); ++pointIt) {
    gl::vertex(*pointIt);
}
glEnd();
mShader.unbind();
```

Again, that’s Cinder, but the actual shader can be used from any programming framework, so let’s dig into it a little, because the real trick here is making a geometry shader for points.

```glsl
#version 120
#extension GL_EXT_geometry_shader4 : enable
#extension GL_EXT_gpu_shader4 : enable

// Constants to match those in CPP
const int SHAPE_CIRCLE   = 2;
const int SHAPE_SQUARE   = 1;
const int SHAPE_TRIANGLE = 0;

// Uniforms
uniform float aspect;
uniform int   shape;
uniform float size;
uniform bool  transform;

// Kernel
void main(void) {
    // Key on shape
    switch (shape) {
    case SHAPE_TRIANGLE:
```

To draw a triangle, we add the size times the cos and sin of each angle to the original position. After each point is set, we call EmitVertex() to add the newly created vertex to the geometry that will be sent to the fragment shader. Be sure to draw counter-clockwise.

```glsl
        gl_Position = gl_PositionIn[0] + vec4(cos(radians(270.0)) * size, sin(radians(270.0)) * size * aspect, 0.0, 0.0);
        EmitVertex();
        gl_Position = gl_PositionIn[0] + vec4(cos(radians(150.0)) * size, sin(radians(150.0)) * size * aspect, 0.0, 0.0);
        EmitVertex();
        gl_Position = gl_PositionIn[0] + vec4(cos(radians(30.0)) * size, sin(radians(30.0)) * size * aspect, 0.0, 0.0);
        EmitVertex();
```
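Since the three corners sit at 270, 150, and 30 degrees, i.e. 120 degrees apart on a circle of radius size, the emitted triangle is equilateral. The same math can be checked on the CPU (with aspect = 1; `triangleCorner` and `distance2D` are illustrative helpers, not from the shader):

```cpp
#include <cmath>

// Place a corner on a circle of radius "size", mirroring the shader's
// cos/sin offsets from the input point.
void triangleCorner(float angleDegrees, float size, float out[2]) {
    const float radians = angleDegrees * 3.14159265f / 180.0f;
    out[0] = std::cos(radians) * size;
    out[1] = std::sin(radians) * size;
}

// Euclidean distance between two 2D points
float distance2D(const float a[2], const float b[2]) {
    return std::sqrt((a[0] - b[0]) * (a[0] - b[0]) + (a[1] - b[1]) * (a[1] - b[1]));
}
```

All three side lengths come out equal (to 2 · sin(60°) · size), which is why the shape reads as a clean triangle rather than a lopsided wedge.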

Call EndPrimitive() to close the shape. In this case we don’t strictly need this since we’re only drawing one primitive and the GLSL program can figure out that we’re done making our primitive, but when you have more than one primitive, it’s vitally important to call this whenever you’ve finished one of whatever type of primitive you’re creating.

```glsl
        EndPrimitive();
        break;

    case SHAPE_SQUARE:
```

To draw a square, we will draw two triangles, just like above. The two triangles will share vertices, so we’ll define the vertices in advance.

```glsl
        vec4 vert0 = vec4(cos(radians(315.0)) * size, sin(radians(315.0)) * size * aspect, 0.0, 0.0);
        vec4 vert1 = vec4(cos(radians(225.0)) * size, sin(radians(225.0)) * size * aspect, 0.0, 0.0);
        vec4 vert2 = vec4(cos(radians(135.0)) * size, sin(radians(135.0)) * size * aspect, 0.0, 0.0);
        vec4 vert3 = vec4(cos(radians(45.0))  * size, sin(radians(45.0))  * size * aspect, 0.0, 0.0);

        // Draw the left triangle
        gl_Position = gl_PositionIn[0] + vert0;
        EmitVertex();
        gl_Position = gl_PositionIn[0] + vert1;
        EmitVertex();
        gl_Position = gl_PositionIn[0] + vert3;
        EmitVertex();

        // Close this triangle
        EndPrimitive();

        // And now the right one
        gl_Position = gl_PositionIn[0] + vert3;
        EmitVertex();
        gl_Position = gl_PositionIn[0] + vert1;
        EmitVertex();
        gl_Position = gl_PositionIn[0] + vert2;
        EmitVertex();

        // Close the second triangle to form a quad
        EndPrimitive();
        break;

    case SHAPE_CIRCLE:
        // To draw a circle, we'll iterate through 24
        // segments and form triangles
        float twoPi = 6.283185306;
        float delta = twoPi / 24.0;
        for (float theta = 0.0; theta < twoPi; theta += delta) {
            // Draw a triangle to form a wedge of the circle
            gl_Position = gl_PositionIn[0] + vec4(cos(theta) * size, sin(theta) * size * aspect, 0.0, 0.0);
            EmitVertex();
            gl_Position = gl_PositionIn[0];
            EmitVertex();
            gl_Position = gl_PositionIn[0] + vec4(cos(theta + delta) * size, sin(theta + delta) * size * aspect, 0.0, 0.0);
            EmitVertex();
            EndPrimitive();
        }
        break;
    }
}
```
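One thing to keep in mind with the circle case: 24 wedges of 3 vertices each means 72 output vertices per input point, which has to fit within the GL_MAX_GEOMETRY_OUTPUT_VERTICES limit queried in the application code. The arithmetic is trivial, but worth making explicit (`circleOutputVertexCount` is just an illustrative helper):

```cpp
// Vertices the circle case emits per input point: each of the "segments"
// wedges is one standalone triangle, so three vertices apiece.
int circleOutputVertexCount(int segments) {
    const int verticesPerWedge = 3;
    return segments * verticesPerWedge;
}
```

If you raise the segment count for a smoother circle, re-check it against the queried limit before bumping the max-vertices argument passed to gl::GlslProg.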

So that’s pretty interesting, but next let’s make boxes for all those points and then things will get more interesting.

Actually, if you’ve done any OpenGL, making a cube in a geometry shader is going to look very familiar, since it involves almost the same code as the glVertex() version, only with the addition of EmitVertex() and EndPrimitive(). First though, we’re going to dig into the Cinder application to show a few new features.

We’re going to use a Vertex Buffer Object to create objects and points that we can work with in the geometry shader. If VBOs aren’t familiar to you and you want to learn more about how to work with them in Cinder, check out http://www.creativeapplications.net/tutorials/guide-to-meshes-in-cinder-cinder-tutorials/

First, set up the mesh using a vbo layout object:

```cpp
// Set VBO layout; we use this to set up how the VBO is configured in GPU memory
mVboLayout.setStaticIndices();
mVboLayout.setStaticPositions();

// Iterate through the grid dimensions
for (int32_t y = 0; y < mMeshHeight; y++) {
    for (int32_t x = 0; x < mMeshWidth; x++) {
        // Set the index of the vertex in the VBO so it is
        // numbered left to right, top to bottom
        mVboIndices.push_back(x + y * mMeshWidth);

        // Set the position of the vertex in world space
        mVboVertices.push_back(Vec3f((float)x - (float)mMeshWidth * 0.5f, (float)y - (float)mMeshHeight * 0.5f, 0.0f));
    }
}
```

Now we actually add all the indices and positions to the VBOMesh:

```cpp
mVboMesh = gl::VboMesh(mVboVertices.size(), mVboIndices.size(), mVboLayout, GL_POINTS);
mVboMesh.bufferIndices(mVboIndices);
mVboMesh.bufferPositions(mVboVertices);
```

Normally you do some modification of the VBO on the CPU in the update() method, but we’re interested in moving as much of our processing as possible to the GPU, so we’re not doing any of that. We do, however, need to be able to pass variables into our shader so we can generate the wave and correctly position the view onto the mesh. We do this using uniforms, which, as mentioned earlier in the article, are values passed into the shader program as a whole. Not every shader stage (vertex, geometry, fragment) needs to read them, but they are available to all stages. Here’s how we pass all the values into the shaders and then draw the VboMesh:

```cpp
// Bind and configure shader
mShader.bind();
mShader.uniform("alpha", mMeshAlpha);
mShader.uniform("amp", mMeshWaveAmplitude);
mShader.uniform("eyePoint", mEyePoint);
mShader.uniform("lightAmbient", mLightAmbient);
mShader.uniform("lightDiffuse", mLightDiffuse);
mShader.uniform("lightPosition", mLightPosition);
mShader.uniform("lightSpecular", mLightSpecular);
mShader.uniform("mvp", gl::getProjection() * gl::getModelView());
mShader.uniform("phase", mElapsedSeconds);
mShader.uniform("rotation", mBoxRotationSpeed);
mShader.uniform("scale", mMeshScale);
mShader.uniform("dimensions", Vec4f(mBoxDimensions));
mShader.uniform("shininess", mLightShininess);
mShader.uniform("speed", mMeshWaveSpeed);
mShader.uniform("transform", mBoxEnabled);
mShader.uniform("uvmix", mMeshUvMix);
mShader.uniform("width", mMeshWaveWidth);

// Draw VBO
gl::draw(mVboMesh);

// Stop drawing
mShader.unbind();
```

Some of these lines are worth annotating a bit further. For instance:

```cpp
mShader.uniform("mvp", gl::getProjection() * gl::getModelView());
```

In previous versions of GLSL, you could access the ModelView matrix through a built-in variable. In GLSL 1.5, though, you simply pass the matrix in as a uniform. This line:

```cpp
mShader.uniform("eyePoint", mEyePoint);
```

allows us to orient our generated geometry to the camera. This is really powerful, because all you need is a 3D vector and you can correctly calculate the eye perspective onto a scene. I’m sure you can imagine how useful this will be when we finally get to the Kinect portion of this tutorial. That eye position is vital, because it describes the place you’re looking from when we generate all the geometry. In the geometry shader you can see how these are all put to use:

```glsl
uniform vec4 dimensions;
uniform mat4 mvp;
uniform float rotation;
uniform bool transform;

// Input attributes from the vertex shader
in vec4 vertex[];

// Output attributes
out vec4 normal;
out vec4 position;
out vec2 uv;

// Adds a vertex to the current primitive
void addVertex(vec2 texCoord, vec4 vert, vec4 norm) {
    // Assign values to output attributes
    normal = norm;
    uv = texCoord;
    position = vert;
    gl_Position = vert;

    // Create vertex
    EmitVertex();
}

// Kernel
void main(void) {
```

Now we need to calculate what the position should be based on the view so that we can send that on to the fragment shader.

```glsl
    position = mvp * vertex[0];

    // Transform point to box
    if (transform) {
        // Use Y position for phase
        float angle = vertex[0].y * rotation;
```

The rotation matrix we’re making here is what allows us to correctly position each box after twisting it to fit the shape of our wave. If rotation matrices are a bit fuzzy for you, I find this to be an excellent refresher or introduction.

```glsl
        // Define rotation matrix
        mat4 rotMatrix;
        rotMatrix[0].x = 1.0 + cos(angle);
        rotMatrix[0].y = 1.0 + -sin(angle);
        rotMatrix[0].z = 1.0 + -rotMatrix[0].x;
        rotMatrix[0].w = 0.0;
        rotMatrix[1].x = 1.0 + -rotMatrix[0].z;
        rotMatrix[1].y = 1.0 + -rotMatrix[0].y;
        rotMatrix[1].z = 1.0 + -rotMatrix[0].x;
        rotMatrix[1].w = 0.0;
        rotMatrix[2].x = 0.0;
        rotMatrix[2].y = 0.0;
        rotMatrix[2].z = 1.0;
        rotMatrix[2].w = 0.0;
        rotMatrix[3].x = 0.0;
        rotMatrix[3].y = 0.0;
        rotMatrix[3].z = 0.0;
        rotMatrix[3].w = 1.0;
```

Now we need vertices for our box. This can be broken down as:

`vertex = the view matrix * (our position from the vertex shader + (the size of the box * the corner of the box that we're generating) * the amount that it should be twisted)`

```glsl
        // Define vertices
        vec4 vert0 = mvp * (vertex[0] + vec4(dimensions * vec4(-1.0, -1.0, -1.0, 0.0)) * rotMatrix); // 0 ---
        vec4 vert1 = mvp * (vertex[0] + vec4(dimensions * vec4( 1.0, -1.0, -1.0, 0.0)) * rotMatrix); // 1 +--
        vec4 vert2 = mvp * (vertex[0] + vec4(dimensions * vec4(-1.0,  1.0, -1.0, 0.0)) * rotMatrix); // 2 -+-
        vec4 vert3 = mvp * (vertex[0] + vec4(dimensions * vec4( 1.0,  1.0, -1.0, 0.0)) * rotMatrix); // 3 ++-
        vec4 vert4 = mvp * (vertex[0] + vec4(dimensions * vec4(-1.0, -1.0,  1.0, 0.0)) * rotMatrix); // 4 --+
        vec4 vert5 = mvp * (vertex[0] + vec4(dimensions * vec4( 1.0, -1.0,  1.0, 0.0)) * rotMatrix); // 5 +-+
        vec4 vert6 = mvp * (vertex[0] + vec4(dimensions * vec4(-1.0,  1.0,  1.0, 0.0)) * rotMatrix); // 6 -++
        vec4 vert7 = mvp * (vertex[0] + vec4(dimensions * vec4( 1.0,  1.0,  1.0, 0.0)) * rotMatrix); // 7 +++

        // Define normals
        vec4 norm0 = mvp * vec4( 1.0,  0.0,  0.0, 0.0); // Right
        vec4 norm1 = mvp * vec4( 0.0,  1.0,  0.0, 0.0); // Top
        vec4 norm2 = mvp * vec4( 0.0,  0.0,  1.0, 0.0); // Front
        vec4 norm3 = mvp * vec4(-1.0,  0.0,  0.0, 0.0); // Left
        vec4 norm4 = mvp * vec4( 0.0, -1.0,  0.0, 0.0); // Bottom
        vec4 norm5 = mvp * vec4( 0.0,  0.0, -1.0, 0.0); // Back

        // Define UV coordinates
        vec2 uv0 = vec2(0.0, 0.0); // --
        vec2 uv1 = vec2(1.0, 0.0); // +-
        vec2 uv2 = vec2(1.0, 1.0); // ++
        vec2 uv3 = vec2(0.0, 1.0); // -+
```
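The eight vertex offsets above are just the box dimensions with every +/- sign combination on x, y, and z, which is why the comments read ---, +--, -+-, and so on. The same corners can be generated from the bits of the corner index; here’s a CPU-side sketch (`boxCorner` is an illustrative helper, not from the shader):

```cpp
// Corner "index" runs 0-7; bit 0 picks the x sign, bit 1 the y sign,
// bit 2 the z sign, matching the shader's ---, +--, -+-, ... numbering.
void boxCorner(int index, const float dims[3], float out[3]) {
    for (int axis = 0; axis < 3; ++axis) {
        float sign = ((index >> axis) & 1) ? 1.0f : -1.0f;
        out[axis] = sign * dims[axis];
    }
}
```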

Now we get into simply making the sides of a cube:

```glsl
        // Left
        addVertex(uv1, vert5, norm3);
        addVertex(uv2, vert7, norm3);
        addVertex(uv0, vert4, norm3);
        addVertex(uv3, vert6, norm3);
        EndPrimitive();

        // Right
        addVertex(uv3, vert3, norm0);
        addVertex(uv0, vert1, norm0);
        addVertex(uv2, vert2, norm0);
        addVertex(uv1, vert0, norm0);
        EndPrimitive();

        // Top
        addVertex(uv1, vert7, norm1);
        addVertex(uv2, vert3, norm1);
        addVertex(uv0, vert6, norm1);
        addVertex(uv3, vert2, norm1);
        EndPrimitive();

        // Bottom
        addVertex(uv1, vert1, norm4);
        addVertex(uv2, vert5, norm4);
        addVertex(uv0, vert0, norm4);
        addVertex(uv3, vert4, norm4);
        EndPrimitive();

        // Front
        addVertex(uv2, vert3, norm2);
        addVertex(uv3, vert7, norm2);
        addVertex(uv1, vert1, norm2);
        addVertex(uv0, vert5, norm2);
        EndPrimitive();

        // Back
        addVertex(uv2, vert6, norm5);
        addVertex(uv1, vert2, norm5);
        addVertex(uv3, vert4, norm5);
        addVertex(uv0, vert0, norm5);
        EndPrimitive();
    } else {
        // Pass-thru
        gl_Position = position;
        EmitVertex();
        EndPrimitive();
    }
}
```

Now the fragment shader. Yes, this seems a bit complex, but it’s actually fairly simple.

```glsl
#version 150

// Uniforms
uniform float alpha;
uniform vec3 eyePoint;
uniform vec4 lightAmbient;
uniform vec4 lightDiffuse;
uniform vec4 lightSpecular;
uniform vec3 lightPosition;
uniform float shininess;
uniform bool transform;
uniform float uvmix;
```

The geometry shader is outputting a normal, position, and uv coordinate for the fragment shader to use in coloring:

```glsl
// Input attributes
in vec4 normal;
in vec4 position;
in vec2 uv;
```

And here’s what is actually output:

```glsl
// Output attributes
out vec4 color;

// Kernel
void main(void) {
    // Transform point to box
    if (transform) {
        // Initialize color
        color = vec4(0.0, 0.0, 0.0, 0.0);

        // Normalized eye position
        vec3 eye = normalize(-eyePoint);
```

Lighting here boils down to a light position and a reflection vector. That reflection needs to take the position of the light and the normal of the surface into account to look “right”, so we compute the reflected light vector with reflect(), which is built into GLSL, and then normalize it.

```glsl
        // Calculate light and reflection positions
        vec3 light = normalize(lightPosition.xyz - position.xyz);
        vec3 reflection = normalize(-reflect(light, normal.xyz));
```
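GLSL’s built-in reflect(I, N) computes I - 2 * dot(N, I) * N. Reproducing it on the CPU makes the math above easy to inspect (`dot3` and `reflect3` are illustrative helpers; plain float[3] stands in for vec3):

```cpp
// Dot product of two 3-component vectors
float dot3(const float a[3], const float b[3]) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// Same formula as GLSL's reflect(I, N): I - 2 * dot(N, I) * N.
// Assumes "normal" is already unit length, as in the shader.
void reflect3(const float incident[3], const float normal[3], float out[3]) {
    float d = 2.0f * dot3(normal, incident);
    for (int i = 0; i < 3; ++i)
        out[i] = incident[i] - d * normal[i];
}
```

Note why the shader negates the result: reflect() expects an incident vector pointing toward the surface, while `light` above points from the surface toward the light, so -reflect(light, normal) gives the bounce direction you actually want to compare against the eye.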

This is where what people think of as materials are actually generated: combining the value of a light, a surface color, and a reflection mathematically. That becomes the color of our pixel and voila, our geometry is generated and colored correctly to match what we’d expect from the color of the box and the light shining onto it.

```glsl
        // Calculate ambient, diffuse, and specular values
        vec4 ambient = lightAmbient;
        vec4 diffuse = clamp(lightDiffuse * max(dot(normal.xyz, light), 0.0), 0.0, 1.0);
        vec4 specular = clamp(lightSpecular * pow(max(dot(reflection, eye), 0.0), 0.3 * shininess), 0.0, 1.0);

        // Set color from light
        color = ambient + diffuse + specular;

        // Mix with UV map
        color = mix(color, vec4(uv.s, uv.t, 1.0, 1.0), uvmix);

        // Set alpha
        color.a = color.a * alpha;
    } else {
        // Color point white
        color = vec4(1.0, 1.0, 1.0, 1.0);
    }
}
```

Here’s another view of what that looks like:

So now you’ve seen how to make geometry and give it color. We’re going to let you take a breather, and in part 2 we’ll generate some geometry from Kinect point cloud data using these techniques. It’s gonna be awesome :)