## Part Five: Drawing Something on the Screen

In part four, we analyzed the code that initialized our OpenGL ES environment on the iPhone, and now we’re ready to look at the code that actually makes use of that environment and renders something to the iPhone screen.

As the application launches, after all of the initialization messages have been processed, the EDCubeDemoAppDelegate is sent the application:didFinishLaunchingWithOptions: message. The code that handles it looks like this.

```objc
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    // Override point for customization after application launch.
    self.window.rootViewController = self.viewController;
    return YES;
}
```

Not much going on here; we just assign our view controller as the window’s root view controller.

Before our view appears, our view controller, EDCubeDemoViewController, receives the viewWillAppear: message. In its implementation, the view controller starts all of the graphics processing by calling the startAnimation method.

```objc
- (void)viewWillAppear:(BOOL)animated
{
    [self startAnimation];
    [super viewWillAppear:animated];
}
```

Let’s look at the startAnimation method.

```objc
- (void)startAnimation
{
    if (!animating) {
        CADisplayLink *aDisplayLink = [[UIScreen mainScreen] displayLinkWithTarget:self selector:@selector(drawFrame)];
        [aDisplayLink setFrameInterval:animationFrameInterval];
        [aDisplayLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
        self.displayLink = aDisplayLink;
        animating = TRUE;
    }
}
```

The animating instance variable tracks whether or not the animation loop is currently running. If it is not, we’ll run through the logic to start it going now. What’s involved in that logic?

```objc
CADisplayLink *aDisplayLink = [[UIScreen mainScreen] displayLinkWithTarget:self selector:@selector(drawFrame)];
```

The first thing after the animating check is to create a CADisplayLink tied to the iPhone screen. The CADisplayLink creates that link between a target, in this case our view controller, and a method on that target, in this case the drawFrame method.

The iPhone screen refreshes sixty times per second, and the CADisplayLink will call our specified method at whatever intervals we request. This is where the next line comes in.

Way back at the end of our awakeFromNib method, we set animationFrameInterval to 1. The `[aDisplayLink setFrameInterval:animationFrameInterval];` line instructs the display link to call the specified method, drawFrame, on every frame, 60 times per second. A two in this parameter would call drawFrame every second frame, or thirty times per second.

```objc
self.displayLink = aDisplayLink;
animating = TRUE;
```

After setting the frame interval, we use the addToRunLoop:forMode: message to start the display link firing.

We set our own displayLink instance variable to keep track of the display link, and set our animating instance variable to TRUE so we can keep track of its state in the code. Now, if anyone else calls startAnimation, the animating instance variable will let us know that we’ve already started everything up.

If you step through the code, you’ll notice that the applicationDidBecomeActive: method in EDCubeDemoAppDelegate actually calls startAnimation again, but the display link code is not called again thanks to our animating variable.

Now let’s look at the code that does all of the work, the drawFrame method.

```objc
- (void)drawFrame
{
    [(EAGLView *)self.view setFramebuffer];

    // Replace the implementation of this method to do your own custom drawing.
    static const GLfloat squareVertices[] = {
        -0.5f, -0.33f,
         0.5f, -0.33f,
        -0.5f,  0.33f,
         0.5f,  0.33f,
    };

    static const GLubyte squareColors[] = {
        255, 255,   0, 255,
          0, 255, 255, 255,
          0,   0,   0,   0,
        255,   0, 255, 255,
    };

    static float transY = 0.0f;

    glClearColor(0.5f, 0.5f, 0.5f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    if ([context API] == kEAGLRenderingAPIOpenGLES2) {
        // Use shader program.
        glUseProgram(program);

        // Update uniform value.
        glUniform1f(uniforms[UNIFORM_TRANSLATE], (GLfloat)transY);
        transY += 0.075f;

        // Update attribute values.
        glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, squareVertices);
        glEnableVertexAttribArray(ATTRIB_VERTEX);
        glVertexAttribPointer(ATTRIB_COLOR, 4, GL_UNSIGNED_BYTE, 1, 0, squareColors);
        glEnableVertexAttribArray(ATTRIB_COLOR);

        // Validate program before drawing. This is a good check, but only really necessary in a debug build.
        // DEBUG macro must be defined in your debug configurations if that's not already the case.
#if defined(DEBUG)
        if (![self validateProgram:program]) {
            NSLog(@"Failed to validate program: %d", program);
            return;
        }
#endif
    } else {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glTranslatef(0.0f, (GLfloat)(sinf(transY)/2.0f), 0.0f);
        transY += 0.075f;

        glVertexPointer(2, GL_FLOAT, 0, squareVertices);
        glEnableClientState(GL_VERTEX_ARRAY);
        glColorPointer(4, GL_UNSIGNED_BYTE, 0, squareColors);
        glEnableClientState(GL_COLOR_ARRAY);
    }

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    [(EAGLView *)self.view presentFramebuffer];
}
```

Doesn’t look too bad, does it?

The first line sets the framebuffer, in case it wasn’t set before. We’ve already been through this code during the OpenGL ES initialization.

```objc
static const GLfloat squareVertices[] = {
    -0.5f, -0.33f,
     0.5f, -0.33f,
    -0.5f,  0.33f,
     0.5f,  0.33f,
};
```

Judging by the name of this variable, it has something to do with the drawing of our rainbow square, but if you’re used to drawing in pixels, this doesn’t look right.

On the iPhone, all of the Apple frameworks see the screen as being X width and Y height, with the origin being in the upper left-hand corner. In most cases, the iPhone screen will be 320 pixels wide by 480 pixels tall, so if I started at the upper left-hand corner (0, 0) and went straight down to the bottom of the screen, I would end up at (0, 480). If I then stayed at the bottom of the screen and went to the right-hand side, I would end up at (320, 480).

By comparison, OpenGL sees (0, 0) as the exact center of the screen, a place that the Apple frameworks would see as (160, 240). In addition, OpenGL only goes one unit in any direction from the origin, so the upper left-hand corner would be (-1, 1) and the lower right-hand corner would be (1, -1).

So looking at that array of vertices, we can see that the square will be drawn from (-0.5, -0.33) to (0.5, -0.33), then from (0.5, -0.33) to (-0.5, 0.33), and finally from (-0.5, 0.33) to (0.5, 0.33).

```objc
static const GLfloat squareVertices[] = {
    -0.5f, -0.33f,
     0.5f, -0.33f,
    -0.5f,  0.33f,
     0.5f,  0.33f,
};
```

That’s four points and three lines.

Well, that doesn’t look like a square. What’s going on here?

In order to understand how this becomes a square, we need to look ahead in the code to the glDrawArrays() call at the end of the method.

We also need to understand that under OpenGL ES, the only shape you can draw is a triangle. Well, you can also draw lines and points, but those aren’t quite shapes.

When we called the glDrawArrays() function, we passed in a flag as the first parameter, GL_TRIANGLE_STRIP. This parameter tells OpenGL that after the first triangle, made up of the first three vertices, we will continue to add triangles one point at a time.

The second parameter tells OpenGL where in the array of vertices to start (zero means start at the beginning), and the last parameter tells OpenGL how many vertices we’re defining in total, four.

So how does OpenGL build that square out of that funny backwards Z shape? First, OpenGL takes the first three vertices, (-0.5, -0.33), (0.5, -0.33), and (-0.5, 0.33), and draws a triangle. Then it takes the next point, (0.5, 0.33), and uses it plus the previous two vertices to create the next triangle. For each point OpenGL receives after this, it continues to add triangles to the object using that point and the previous two.

In the end, we get a square made up of two triangles that looks like this.

The first triangle is ABC, and the next point, D, is used with the previous two points, BC, to make BCD. The two triangles together make up our square.

There are two other flags used to specify how to draw objects out of triangles, one is GL_TRIANGLES, and the other is GL_TRIANGLE_FAN.

GL_TRIANGLES draws triangles one at a time, using three vertices for each triangle. To draw our square using this method, we would have had to define six points, three for each triangle, and made sure that they were right next to each other.

GL_TRIANGLE_FAN is similar to GL_TRIANGLE_STRIP, but instead of using the next point and the previous two (after the initial three for the first triangle, of course), GL_TRIANGLE_FAN uses the next point, the previous point, and the first point ever specified. To make it clearer, here are a couple of examples.

In this GL_TRIANGLES example, I specify six points, A, B, C, D, E, and F. OpenGL draws the first triangle with the first three points, ABC, then draws the second triangle with the second three points, DEF. The diagonal lines in this example would actually overlay each other, since C = D and B = E, but I drew them slightly apart so you would be able to see the sequence of lines drawn.

In this GL_TRIANGLE_FAN example, I specified four points, A, B, C, and D. The first triangle is drawn with the first three points, ABC, but the second triangle is drawn with the next point, D, the previous point, C, and the very first point, A, making the second triangle CDA. If I were to add a fifth point, E, the next triangle would be DEA.

Let’s look at that array of numbers again.

```objc
static const GLfloat squareVertices[] = {
    -0.5f, -0.33f,
     0.5f, -0.33f,
    -0.5f,  0.33f,
     0.5f,  0.33f,
};
```

So now we can see that this array, squareVertices, contains the four points, or vertices, in OpenGL space required to draw a square using the GL_TRIANGLE_STRIP method.

But why are the X coordinates different from the Y coordinates? Why not just draw (-0.5, -0.5) to (0.5, -0.5) to (-0.5, 0.5) to (0.5, 0.5)?

It’s because the iPhone screen is taller than it is wide, so in OpenGL coordinates, the distance from (0, 0) to (0, 1) is actually longer than from (0, 0) to (1, 0). If we convert the OpenGL coordinates to screen coordinates, we find that the Y axis is 480 pixels, making (0, 0) to (0, 1) in OpenGL 240 pixels long, while the X axis is only 320 pixels wide making (0, 0) to (1, 0) in OpenGL 160 pixels long.

Going halfway up (or down) the Y axis would be longer than going halfway across the X axis. We would end up with a rectangle.

To account for this, we have to come up with an aspect ratio, just like for television sets. If you have an older television, it’s probably got an aspect ratio of 4:3, meaning that it’s just a bit wider than it is tall. If you have a widescreen television, it’s probably got an aspect ratio of 16:9, meaning it’s quite a bit wider than it is tall.

The iPhone’s aspect ratio in this example is 1:1.5, since we’re holding it upright, so it’s taller than it is wide.

Now, if we know the length along the X axis is going to be 1 unit (that’s -0.5 to 0.5), and we want the length along the Y axis to match that 1 unit on the screen, we need to apply the aspect ratio to that 1 unit to make sure it matches the 1 unit along the X axis.

Take our aspect ratio of 1:1.5 and divide. 1 divided by 1.5 is 0.66 repeating. So if we go one full unit along the X axis, we need to go 0.66 units along the Y axis to avoid looking stretched up and down. If we go -0.5 to 0.5 on the X axis, we need to go from -0.33 to 0.33 along the Y axis to preserve that aspect ratio.

Usually, this adjustment will be made in the code, by calculating the width of the screen divided by the height of the screen and applied to the Y coordinates, but in this simple example from Xcode, the aspect ratio adjustment is baked into the coordinates for drawing the square.

Now we come to the next array.

```objc
static const GLubyte squareColors[] = {
    255, 255,   0, 255,
      0, 255, 255, 255,
      0,   0,   0,   0,
    255,   0, 255, 255,
};
```

In this array, we are simply defining a color for each vertex in an RGBA (Red, Green, Blue, Alpha) format. A value of 255 means ‘all the way on’ and a value of zero means ‘all the way off’. The alpha value specifies how opaque this color is, so 255 is completely opaque, and 0 is transparent.

In this array, using the RGBA format, the first vertex will be yellow (red plus green), the second vertex will be cyan (green plus blue), the third vertex will be fully transparent (every channel, including alpha, is zero), and the fourth vertex will be purple (red plus blue).

For each vertex that OpenGL draws, it will grab a color from this array. That means that if we specify four vertices, we must specify four colors.

But wait, there were a lot more than just three colors on that square. Let’s look at it again.

Definitely more than three colors. Here’s something neat about OpenGL, it will fill in the gaps between vertices if you tell it something should be there. In this case, we defined two triangles to make a square, and colors at the corners. Since OpenGL knows there are surfaces (the two triangles), it figures out what the colors would be across the surfaces and blends them for you. This process is called interpolation.

Look at the right edge of the square. We told OpenGL to make the top purple and the bottom cyan. OpenGL then figured out how the surface would look if we blended the two colors together and filled in all the pixels between. See how the purple fades to blue, then to cyan as we get to the bottom of the right edge?

This is a very important concept to understand for when we get deeper into the OpenGL ES 2.0 shaders. We supplied four vertices to OpenGL, and it drew two triangles with them. For each vertex that OpenGL processed, it ran the vertex shader.

But, as OpenGL figured out what color each pixel should be between those vertices, it ran the fragment shader to process it. OpenGL considers each tiny pixel or tiny group of pixels of the same color to be a ‘fragment’ of the entire object, and runs the fragment shader for every fragment that needs to be processed.

That means that while the vertex shader was run four times, the fragment shader was run up to 25,600 times, because there are 25,600 pixels in our 160 x 160 square on the screen.

Once you understand when and why OpenGL runs the vertex and fragment shaders, you’ll have a much easier time understanding the code you find in each one.

```objc
static float transY = 0.0f;

glClearColor(0.5f, 0.5f, 0.5f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
```

The transY variable is what we will use to make the rainbow square move up and down the Y axis. The first time we run this line of code, we initialize it to zero, but since it’s a static variable, it will keep its value for the next time it’s encountered here.

The glClearColor() function tells OpenGL what color we want to fill the screen with when we clear it, in RGBA format. Previously, when you saw RGBA color values, they were from 0 to 255 because they were declared as unsigned bytes, which can be 0 through 255.

The glClearColor() function, on the other hand, expects a floating point value between 0 (completely off) and 1 (completely on). 0.5 is halfway, and with red, green, and blue all half on and an opaque alpha, we get gray.

The glClear() function clears all of the buffers specified by the passed bit mask. In this case, we are asking it to clear the color buffer by passing in the GL_COLOR_BUFFER_BIT. Remember when we created our framebuffer? We also created a color renderbuffer and attached it to the framebuffer, so here we’re clearing it each time before we draw something new into it.

```objc
// Use shader program.
glUseProgram(program);

// Update uniform value.
glUniform1f(uniforms[UNIFORM_TRANSLATE], (GLfloat)transY);
transY += 0.075f;
```

At this point, we check to see if we initialized an OpenGL ES 2.0 context. If so, we need to execute the code appropriate for OpenGL ES 2.0; otherwise, the code will crash, because the key fixed-function rendering calls that were available under OpenGL ES 1.1 are gone.

Fortunately, things are pretty straightforward here. The glUseProgram() function just makes sure that we’re going to use that program to which we linked the shaders earlier.

On the next line, the glUniform1f() function sets the value of our translate uniform value in the shaders to the value of the transY variable. Remember how OpenGL functions have a suffix that helps indicate what kinds of variables they’re expecting? This function is actually glUniform with a ‘1f’, or ‘one float’ added on.

Once we’ve set our uniform value, we increment the transY variable by 0.075 so the rainbow square will move next time we run through this logic. Also remember that the uniform value will be the same for all runs of the vertex and fragment shader, that way the value can be applied ‘uniformly’ to all vertices that get processed.

```objc
glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, squareVertices);
glEnableVertexAttribArray(ATTRIB_VERTEX);
glVertexAttribPointer(ATTRIB_COLOR, 4, GL_UNSIGNED_BYTE, 1, 0, squareColors);
glEnableVertexAttribArray(ATTRIB_COLOR);

// Validate program before drawing. This is a good check, but only really necessary in a debug build.
// DEBUG macro must be defined in your debug configurations if that's not already the case.
#if defined(DEBUG)
if (![self validateProgram:program]) {
    NSLog(@"Failed to validate program: %d", program);
    return;
}
#endif
```

The new lines of code set the values for the attributes in the vertex shader. For each set of values in the vertex array, the vertex shader will be run. The glVertexAttribPointer() function takes the index of the attribute that we want to set (ATTRIB_VERTEX), the number of values per vertex (2, one X and one Y coordinate), the type of each value (GL_FLOAT), whether the values should be normalized (0, so no), the stride, or number of bytes between consecutive vertices (0, meaning the values are tightly packed), and finally, a pointer to the array itself (squareVertices).

If you indicate the values should be normalized, OpenGL will map your values into the OpenGL range of -1 through 1 (or 0 through 1 if it’s an unsigned data type). In our case, we’re passing in values that are in the range of -1 through 1, so normalization is not necessary.

Next, the glEnableVertexAttribArray() function makes the array that we specified in the previous call ready and available for OpenGL to use.

The next call to glVertexAttribPointer() sets up the array of colors that we specified in the squareColors array, but do you notice something different about the function call?

The normalize parameter is set to 1, so we’re asking OpenGL to normalize this data. Why do we need to normalize the color array values? OpenGL wants floating point values from 0 through 1 for the colors, but we gave it unsigned byte values of 0 through 255. In order for OpenGL to properly digest these values, we need to have OpenGL normalize them into the range of 0 through 1 (0 through 1 rather than -1 through 1, because the type is unsigned).

We could have just used floating point values for the colors, but unsigned byte arrays are a quarter the size, and as long as the normalization doesn’t hurt performance, this approach is a better use of memory.

The final block of code, which executes only if a DEBUG value is defined somewhere, validates the shader code before it tries to use it. Validating the shader code is good for developing and debugging, but make sure to turn it off when you ship your code because it hurts performance.

Let’s take a look at the validateProgram: method from our EDCubeDemoViewController class.

```objc
- (BOOL)validateProgram:(GLuint)prog
{
    GLint logLength, status;

    glValidateProgram(prog);
    glGetProgramiv(prog, GL_INFO_LOG_LENGTH, &logLength);
    if (logLength > 0)
    {
        GLchar *log = (GLchar *)malloc(logLength);
        glGetProgramInfoLog(prog, logLength, &logLength, log);
        NSLog(@"Program validate log:\n%s", log);
        free(log);
    }

    glGetProgramiv(prog, GL_VALIDATE_STATUS, &status);
    if (status == 0)
        return FALSE;

    return TRUE;
}
```

This should look familiar; it’s very similar to the shader linking code. The glValidateProgram() function performs the validation, and everything else just writes out the results and checks the success of the validation attempt.

```objc
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0f, (GLfloat)(sinf(transY)/2.0f), 0.0f);
transY += 0.075f;

glVertexPointer(2, GL_FLOAT, 0, squareVertices);
glEnableClientState(GL_VERTEX_ARRAY);
glColorPointer(4, GL_UNSIGNED_BYTE, 0, squareColors);
glEnableClientState(GL_COLOR_ARRAY);
```

The alternative to executing OpenGL ES 2.0 code is to execute OpenGL ES 1.1 code.

We’re not going to bother with OpenGL ES 1.1 after this point, but let’s run through the code for comparison purposes anyway.

Since OpenGL ES 1.1 uses a fixed-function pipeline, and there are no shaders to customize the rendering process, you must tell OpenGL everything up front.

The glMatrixMode() function is used to activate a particular OpenGL matrix stack, a big, ugly 4 by 4 matrix of numbers that OpenGL uses to position things around the scene. There are four modes available in OpenGL ES 1.1, GL_MODELVIEW, GL_PROJECTION, GL_TEXTURE, and GL_COLOR. Since OpenGL is a ‘state machine’, once you set a mode, all matrix manipulation functions you call after setting that mode will be applied to that mode’s matrix stack.

The first call to glMatrixMode() makes the GL_PROJECTION matrix stack active. Then the glLoadIdentity() function loads an identity matrix, effectively clearing out any values. The identity matrix is the matrix multiplication equivalent of ‘1’: any matrix multiplied against the identity matrix resolves to itself. When OpenGL applies this matrix to the projection matrix at render time, it will remain unchanged.

The next call to glMatrixMode() sets the GL_MODELVIEW mode matrix stack active. After another call to glLoadIdentity() to clear it out, we call the glTranslatef() function to apply translations, or changes in the X, Y, and Z positions, to the model.

The glTranslatef() function takes an X, Y, and Z value. Whatever values are specified here are used to alter the current matrix at render time. The X and Z values are zero, meaning that they won’t change, but the Y value is being set to the sine value of transY, divided by two. The sinf() function is simply a sine function that takes a float value as a parameter.

But what’s the point of all that? It’s basically a shortcut.

We could increment a variable that indicated the position of the square until it hit or exceeded some maximum value, then start decrementing it until it hit or exceeded some minimum value, then start incrementing it again, and on and on forever. Graphics programming is all about performance, however, and all of those conditionals in a graphics loop would hurt performance, especially if we’ve got a complex scene to render.

Instead, we can just use math. The sine function returns the ratio of the length of the side opposite an angle to the length of the hypotenuse in a right triangle. Since the side of a right triangle opposite of an angle will never exceed the length of the hypotenuse, the ratio will be somewhere between 0 and 1. That’s a perfect range for OpenGL, so it works out well. Let’s see how using a sine value to move the rainbow square will work.

At about 22 degrees, the length of the side opposite of the angle is 0.375 times as long as the hypotenuse.

At 45 degrees, the side opposite of the angle is 0.707 times the length of the hypotenuse.

At 90 degrees, the side adjacent to the angle no longer exists, and the side opposite the angle is exactly the size of the hypotenuse.

After 90 degrees, the side opposite the angle starts to get smaller again, so the ratio decreases. At 135 degrees, the side opposite the angle is back down to being 0.707 times as large as the hypotenuse.

By the time the angle has increased to 180 degrees, the side adjacent to the angle is the exact size of the hypotenuse and the side opposite the angle no longer exists, making its ratio to the hypotenuse 0.

But the rainbow square moves to the top and bottom of the screen, so the result of the sine computation must be negative too, right? How can a ratio be negative?

Imagine that the hypotenuse is a line sweeping around a circle, like the green radar displays you see in old movies. As the angle exceeds 180 degrees, the line sweeps into the negative Y direction. The ratios will be the same, but they’ll be negative until we get back to our starting point at 0 degrees. Of course, this time it’ll be 360 degrees, but that’s the same place as zero on the circle.

In fact, the angles will just keep increasing forever in this program, but since they remain in the same circle, the ratios will always be the same. For example, the 38th time we sweep back to the starting point, the angle will be 13,680 degrees, but the sine of 13,680 is still going to be 0, just like it was back at 360 degrees, or when we started at 0 degrees.

With the ratio going from 0 to 1, then back to 0 and on to -1, then back up to zero like this forever, we generate something you’ve probably seen before.

A sine wave.

Only two mysteries remain: why is the increment value so low (0.075), and why are we dividing by two?

First, the increment value. The sinf() function doesn’t operate on degrees; it operates on radians. What’s a radian?

A radian is an alternate unit of angular measurement, used extensively in programming language math functions like sin(), cos(), and tan(). Even though my previous examples were in degrees, the math functions you’ll be using to program on the iPhone will be expecting radians. In the diagram above, a is the radius of the circle. As the angle increases, the length of the arc b increases. When the length of the arc b is equal to the radius a, we are at one radian.

One radian is much larger than one degree, so we have to increment in smaller amounts. In fact, one radian is about 57.3 degrees.

By incrementing the radian value by 0.075, we are going around our circle at about 4.3 degrees at a time.

Fortunately, there are a couple of very handy equations that allow us to easily convert between radians and degrees. We can continue to think in degrees around our circle, but feed the computer the radian values it requires.

If you have a radian value and want the degree equivalent, use the following conversion equation (don’t forget, PI is approximately 3.14159).

degrees = radians * 180 / PI

So to see how many degrees that 0.075 radians value is buying us, we’d do the following math.

degrees = 0.075 * 180 / 3.14159

degrees = 4.29718709

If we have a degree value and want to get the radian value, we use the following conversion equation.

radians = degrees * PI / 180

So if I wanted the sine value of 38 degrees, but the computer insisted on a radian value, I would perform the following math.

radians = 38 * 3.14159 / 180

radians = 0.66322456

You’ll have to deal with a lot of code like this if you decide to write games that have rendered objects floating around and rotating. We’ll use some of this math later when we start using matrices to translate our object in the vertex shader, too.

Fortunately, the final mystery, the division by two, is much easier to explain. Since the sine value of the angle around our circle cycles between 1 and -1, we’ll be translating our square on the Y axis by those values as the program runs. But don’t forget that OpenGL space only runs from 1 to -1 on the Y axis, and we move our square from the center.

If we put the center of our square at (0, 1), the top half will be off of the screen. If we cut the ratio result in half, the highest value we can get back from the computation is 0.5, which puts the center of our square comfortably toward the top of the screen. It also means it won’t go any lower than -0.5, which is a perfect lower boundary.

I know that seemed like a long way to go to explain some old OpenGL ES 1.1 code, but the exact same concepts are used in OpenGL ES 2.0, and are even more important to understand, since we’ll be manipulating our own matrices for the shaders instead of relying on OpenGL functions to do it for us.

Finally, the last couple lines of code.

```objc
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

[(EAGLView *)self.view presentFramebuffer];
```

The glDrawArrays() function renders our vertices into the renderbuffer. The first parameter specifies the drawing method, which we’ve already been over, the second parameter specifies where to start (0 means start at the beginning), and the last parameter specifies how many vertices are represented in our array.

The presentFramebuffer message to our view simply shows our rendering work on the iPhone’s screen.

That’s the end of the drawFrame method, which will run 60 times per second until we stop the program. Each time the method runs, the transY value will be incremented by 0.075 radians, and the rainbow square will be rendered in its new location and drawn on the iPhone screen.

If we were running in an OpenGL ES 1.1 context with a fixed pipeline, we would have done everything that we needed to do to render our scene, as explained above. If we were running in an OpenGL ES 2.0 context, the glDrawArrays() function will start rendering with a programmable pipeline, and our shaders will be executed.

For each of the four vertices in the squareVertices array, our vertex shader will be executed. Let’s look at that vertex shader code again.

```glsl
attribute vec4 position;
attribute vec4 color;

varying vec4 colorVarying;

uniform float translate;

void main()
{
    gl_Position = position;
    gl_Position.y += sin(translate) / 2.0;

    colorVarying = color;
}
```

We know that the first variable, position, is a vector holding the X and Y coordinates of whichever of the four vertices OpenGL is about to draw. We also know that the second variable, color, is one of the four colors we defined in our squareColors array.

The colorVarying variable is a temporary variable that we will use to pass the color data on to the fragment shader.

The final variable, the uniform variable translate, will have the transY variable value that we set in the OpenGL ES 2.0 code block before we called glDrawArrays().

OpenGL places only one requirement on the vertex shader program: it must set the special variable called gl_Position. The special variable gl_Position is what OpenGL uses to draw this vertex in the renderbuffer. You can do whatever you want to this data in the shader, and you can move it around however you like, but when you’re finished, you have to tell OpenGL where it is now.

When you come into the vertex shader, the ‘position’ variable has the vertex coordinates as defined in the array you passed in, in our case, the squareVertices array. OpenGL does not modify this data when the vertex gets moved around, you must keep track of where the vertex should be for every time it gets rendered.

This is important. Every single time the vertex shader is run for the first vertex in our rainbow square, position will be (-0.5, -0.33). It will be up to us to move the vertex to where it should be for the current rendering frame. This is why we keep the transY variable going.

The first time through, the uniform value ‘translate’, which maps to our transY variable, will be zero, so the line `gl_Position.y += sin(translate) / 2.0;` will not affect the vertex position.

Nor will it affect any of the other three vertices, because its value remains uniform throughout all of the vertex processing calls.

However, the next time we go through our drawFrame method, the transY variable will be 0.075, so all of the vertices we process, all four of them, will have their y position affected by the same amount.

gl_Position.y will have sin(0.075) / 2 added to it, which comes out to about 0.037 in OpenGL space. If we convert that to pixels, it works out to about 9 pixels on the screen in the positive Y direction.

Because of the shape of the sine wave we saw earlier, we know that the values will grow to 1, go back to zero, fall to -1, and then go back to zero, and continue in that cycle forever. Because we’re always dividing the result by two, though, the center of our rainbow square will never rise higher than 0.5 or fall lower than -0.5.

Since the initial position of the vertex coming into the shader will always be the original point we assigned to that vertex in our squareVertices array, the distance we want to move the square must increase and decrease in this cyclical manner.

Once we’ve set our work variable, colorVarying, we’re finished in the vertex shader.

Even though the vertex shader will only be called four times, there are a great many pixels in our rainbow square that vary from each other in color. As OpenGL figures out each fragment of the rainbow square surface that needs to be addressed, it fires our fragment shader to handle them.

Here’s the fragment shader again.

```glsl
varying lowp vec4 colorVarying;

void main()
{
    gl_FragColor = colorVarying;
}
```

We’ve already been through this code earlier, but now it makes a little more sense. Since we know this shader will run many more times than the vertex shader, we also realize that the colorVarying value can’t just be a one-for-one copy of what we passed out of the vertex shader.

In fact, at this point, OpenGL has already interpolated the color value for us, and has helpfully passed it in to us in our colorVarying variable. OpenGL knew what the color values were for the vertices around us, it already processed that information in the vertex shader, so it was able to calculate what color this fragment would be based on its distance from the vertices and the blending of the colors around us.

If we’re happy with OpenGL’s interpolated color value, and we should be at this point, we simply load it into the one required output for the fragment shader, gl_FragColor.

Just like the gl_Position variable we were required to load in the vertex shader, the gl_FragColor variable is required for the fragment shader. Whatever color value we put in the gl_FragColor variable will be used directly by OpenGL when it draws the pixel(s) for this fragment.

Run the program now and watch it again, basking in the knowledge that you understand exactly what is going on in all of that code.