Chapter 1: Rounding Up OpenGL

If you’ve written any basic applications for the iPhone prior to looking at the OpenGL ES project, one of the first things you will probably notice is all of the extra stuff in the view controller. Even though the EAGLView class takes care of configuring, creating, and managing our framebuffer, our application’s view controller is filled with new OpenGL methods that, at first glance, don’t seem to belong there.

Take a moment to create a new Xcode OpenGL ES application (see Part Three: Getting Started in Xcode from the previous tutorial series if you want a refresher) and name it TouchTargets – this will be our new target tapping application. Once created, look at the top of the TouchTargetsViewController.m file.

@interface TouchTargetsViewController ()
@property (nonatomic, retain) EAGLContext *context;
@property (nonatomic, assign) CADisplayLink *displayLink;
- (BOOL)loadShaders;
- (BOOL)compileShader:(GLuint *)shader type:(GLenum)type file:(NSString *)file;
- (BOOL)linkProgram:(GLuint)prog;
- (BOOL)validateProgram:(GLuint)prog;
@end

The @interface block defines additional, private methods and properties for this class. The context property will hold a reference to our OpenGL context, which gives us access for drawing to the iPhone’s screen. The displayLink property will hold a reference to a display link between our program and the iPhone’s screen, which will attempt to run a method that we specify sixty times a second, or once for every screen refresh.

The reason I say ‘will attempt to’ is because if the method we tell it to run takes too long, we’ll miss the next refresh and have to wait for the next one after that. If that happens, the user will experience ‘lag’, a short delay in the on-screen action. Too much lag, and users will give up on the application, since an uneven frame rate will give most people a headache.

We specify which method to run at every screen refresh by sending the displayLinkWithTarget:selector: message to the UIScreen object returned by mainScreen. Let’s take a quick look at our startAnimation method.

- (void)startAnimation
{
    if (!animating) {
        CADisplayLink *aDisplayLink = [[UIScreen mainScreen] displayLinkWithTarget:self selector:@selector(drawFrame)];
        [aDisplayLink setFrameInterval:animationFrameInterval];
        [aDisplayLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
        self.displayLink = aDisplayLink;
       
        animating = TRUE;
    }
}

When the startAnimation method is called and the animating variable is FALSE, we send the mainScreen message to the UIScreen class, which returns a reference to the iPhone’s screen. The displayLinkWithTarget:selector: message then tells that screen object to make us the target of the display link, with our drawFrame method handling the calls at screen-refresh time.

The setFrameInterval: message sets the frequency of the calls, and Apple has provided a nice summary of how it works at the top of the setAnimationFrameInterval method.

- (void)setAnimationFrameInterval:(NSInteger)frameInterval
{
    /*
     Frame interval defines how many display frames must pass between each time the display link fires.
     The display link will only fire 30 times a second when the frame interval is two on a display that refreshes 60 times a second. The default frame interval setting of one will fire 60 times a second when the display refreshes at 60 times a second. A frame interval setting of less than one results in undefined behavior.
     */

    if (frameInterval >= 1) {
        animationFrameInterval = frameInterval;
       
        if (animating) {
            [self stopAnimation];
            [self startAnimation];
        }
    }
}

So with my animationFrameInterval set to 1, I’m telling the display link to fire my drawFrame method every frame. If I changed it to two, I’d be telling the display link to fire every second frame, resulting in 30 frames per second. If I changed it to three, that would be every third frame, or 20 frames per second.

If you were running this code on some other iOS device that had a different screen refresh rate, these numbers would have different effects. For example, if a future device supported 120 frames per second, an animationFrameInterval of 1 would be 120 frames per second, 2 would be 60, and 3 would be 40.

Now let’s look at the additional method definitions in the @interface block.

- (BOOL)loadShaders;
- (BOOL)compileShader:(GLuint *)shader type:(GLenum)type file:(NSString *)file;
- (BOOL)linkProgram:(GLuint)prog;
- (BOOL)validateProgram:(GLuint)prog;

Why are these methods in our view controller? Shouldn’t they be in some OpenGL library or class somewhere?

I believe they should be, but it’s not Apple’s job to put them there; it’s ours, and here’s why. When we create an OpenGL ES 2.0 application, we need to supply the shaders required to draw anything at all. Since, in OpenGL ES 2.0, coding and supplying the shaders is the developer’s responsibility, we also need to handle compiling, linking, and configuring any attributes and uniforms in our own code.

In the Apple-provided OpenGL ES 2.0 code, the example of a bouncing rainbow box uses a particular set of shader attributes and uniforms, coded for directly in the application logic that needs them: the view controller. What we’re going to do is make this code abstract. We’re going to remove all of this OpenGL ES 2.0 shader compilation code and put it in its own class, then build out the class so we don’t have to worry about the attribute and uniform handling code for specific shaders.

Once we’ve done this, we can use the same class for any OpenGL ES 2.0 code that needs to compile and link shaders with any combination of attributes and uniforms. In the end, it’ll save us a lot of time as we add more objects that need to draw into our framebuffer with custom shaders.
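As a rough preview of where we’re headed, the abstracted class might expose an interface along these lines. The class and method names here are hypothetical placeholders, just to illustrate the shape of the API, not the final design:

```objc
// Hypothetical interface sketch -- names are placeholders, not the final class.
@interface GLESShaderProgram : NSObject

// Shader file base name, e.g. @"Shader" for Shader.vsh / Shader.fsh
// in the main bundle.
- (id)initWithShaderFilename:(NSString *)filename;

// Attributes must be registered before linking; uniforms are looked up after.
- (void)addAttributeNamed:(NSString *)name;
- (BOOL)linkProgram;
- (GLint)uniformLocationNamed:(NSString *)name;

- (void)use; // wraps glUseProgram()

@end
```

With something like this, each drawable object could own its program and never touch the raw compile/link calls again.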

First, let’s see what we’re doing now in the standard Xcode 4 OpenGL ES application template.

- (void)awakeFromNib
{
    EAGLContext *aContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
   
    if (!aContext) {
        aContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
    }
   
    if (!aContext)
        NSLog(@"Failed to create ES context");
    else if (![EAGLContext setCurrentContext:aContext])
        NSLog(@"Failed to set ES context current");
   
    self.context = aContext;
    [aContext release];
   
    [(EAGLView *)self.view setContext:context];
    [(EAGLView *)self.view setFramebuffer];
   
    if ([context API] == kEAGLRenderingAPIOpenGLES2)
        [self loadShaders];
   
    animating = FALSE;
    animationFrameInterval = 1;
    self.displayLink = nil;
}

Since we’ve already covered the Xcode 4 OpenGL ES application template in a previous tutorial (See Part Four: Creating the OpenGL Environment), we’ll approach this from a slightly higher level. We can see that we first try to create an OpenGL ES 2.0 context, and if that fails we fall back to creating an OpenGL ES 1.1 context. This is great for supporting older devices, but it’s quite a lot more work when developing and supporting these apps. If you’re thinking of supporting both OpenGL ES 1.1 and 2.0, then you’ve probably already developed an OpenGL ES 1.x application, and don’t need to walk through coding for that environment. For the TouchTargets application, I’ll be abandoning OpenGL ES 1.x after this chapter.

Once we’ve got a valid OpenGL ES context, we assign it to our context property. Once its retain count has been incremented by the property assignment, it’s safe to send our working context variable a release message.

After that, we send the setContext: and setFramebuffer messages to our EAGLView instance, which will take care of creating and configuring the framebuffer we will use to draw onto the iPhone screen.

If the context we’ve created is an OpenGL ES 2.0 context, we’ll also call our own loadShaders method to compile, link, and configure our OpenGL ES 2.0 shaders. OpenGL ES 1.x doesn’t have a programmable pipeline, and therefore doesn’t use shaders, so this step is only necessary for OpenGL ES 2.0 applications.

Finally, a few instance variables are set, and we’ve already seen how those worked when we looked at the startAnimation method a bit earlier.

So now we see what this class does when it’s initialized, but when does the animation start? After the awakeFromNib method has finished processing, our view controller is sent the viewWillAppear: message as a part of its initialization.

- (void)viewWillAppear:(BOOL)animated
{
    [self startAnimation];
   
    [super viewWillAppear:animated];
}

As we’ve seen before, the startAnimation method creates a display link that specifies our drawFrame method as the handler for each screen-refresh callback.

- (void)startAnimation
{
    if (!animating) {
        CADisplayLink *aDisplayLink = [[UIScreen mainScreen] displayLinkWithTarget:self selector:@selector(drawFrame)];
        [aDisplayLink setFrameInterval:animationFrameInterval];
        [aDisplayLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
        self.displayLink = aDisplayLink;
       
        animating = TRUE;
    }
}

After the startAnimation method has processed successfully, the drawFrame method will start running at the frame interval we set. The drawFrame method handles loading up the shader attributes and uniforms for OpenGL ES 2.0, or calling the appropriate matrix manipulation functions for OpenGL ES 1.1.

- (void)drawFrame
{
    [(EAGLView *)self.view setFramebuffer];
   
    // Replace the implementation of this method to do your own custom drawing.
    static const GLfloat squareVertices[] = {
        -0.5f, -0.33f,
        0.5f, -0.33f,
        -0.5f,  0.33f,
        0.5f,  0.33f,
    };
   
    static const GLubyte squareColors[] = {
        255, 255,   0, 255,
        0,   255, 255, 255,
        0,     0,   0,   0,
        255,   0, 255, 255,
    };
   
    static float transY = 0.0f;
   
    glClearColor(0.5f, 0.5f, 0.5f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
   
    if ([context API] == kEAGLRenderingAPIOpenGLES2) {
        // Use shader program.
        glUseProgram(program);
       
        // Update uniform value.
        glUniform1f(uniforms[UNIFORM_TRANSLATE], (GLfloat)transY);
        transY += 0.075f;  
       
        // Update attribute values.
        glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, squareVertices);
        glEnableVertexAttribArray(ATTRIB_VERTEX);
        glVertexAttribPointer(ATTRIB_COLOR, 4, GL_UNSIGNED_BYTE, 1, 0, squareColors);
        glEnableVertexAttribArray(ATTRIB_COLOR);
       
        // Validate program before drawing. This is a good check, but only really necessary in a debug build.
        // DEBUG macro must be defined in your debug configurations if that's not already the case.
#if defined(DEBUG)
        if (![self validateProgram:program]) {
            NSLog(@"Failed to validate program: %d", program);
            return;
        }
#endif
    } else {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glTranslatef(0.0f, (GLfloat)(sinf(transY)/2.0f), 0.0f);
        transY += 0.075f;
       
        glVertexPointer(2, GL_FLOAT, 0, squareVertices);
        glEnableClientState(GL_VERTEX_ARRAY);
        glColorPointer(4, GL_UNSIGNED_BYTE, 0, squareColors);
        glEnableClientState(GL_COLOR_ARRAY);
    }
   
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
   
    [(EAGLView *)self.view presentFramebuffer];
}

That’s pretty much it: our application consists of the drawFrame method firing over and over again, moving the rainbow square a little up or down the screen at every frame.

So how can we pull out the OpenGL code and make our view controller cleaner and easier to read? First, we need to recognize that all of this OpenGL-specific code is in our view controller for one reason: when using OpenGL ES 2.0, we are responsible for providing, compiling, and linking our shaders into an OpenGL program. In addition, the code to link our shaders will change depending on the attributes and uniforms we decide to use in the shader logic.

In order to make this code abstract, we must provide a mechanism for the application code to define arbitrary shader attributes and uniforms. Right now, the code requires special logic in the loadShaders method to define and process attributes before the link, and uniforms after it.

- (BOOL)loadShaders
{
    GLuint vertShader, fragShader;
    NSString *vertShaderPathname, *fragShaderPathname;
   
    // Create shader program.
    program = glCreateProgram();
   
    // Create and compile vertex shader.
    vertShaderPathname = [[NSBundle mainBundle] pathForResource:@"Shader" ofType:@"vsh"];
    if (![self compileShader:&vertShader type:GL_VERTEX_SHADER file:vertShaderPathname])
    {
        NSLog(@"Failed to compile vertex shader");
        return FALSE;
    }
   
    // Create and compile fragment shader.
    fragShaderPathname = [[NSBundle mainBundle] pathForResource:@"Shader" ofType:@"fsh"];
    if (![self compileShader:&fragShader type:GL_FRAGMENT_SHADER file:fragShaderPathname])
    {
        NSLog(@"Failed to compile fragment shader");
        return FALSE;
    }
   
    // Attach vertex shader to program.
    glAttachShader(program, vertShader);
   
    // Attach fragment shader to program.
    glAttachShader(program, fragShader);
   
    // Bind attribute locations.
    // This needs to be done prior to linking.
    glBindAttribLocation(program, ATTRIB_VERTEX, "position");
    glBindAttribLocation(program, ATTRIB_COLOR, "color");
   
    // Link program.
    if (![self linkProgram:program])
    {
        NSLog(@"Failed to link program: %d", program);
       
        if (vertShader)
        {
            glDeleteShader(vertShader);
            vertShader = 0;
        }
        if (fragShader)
        {
            glDeleteShader(fragShader);
            fragShader = 0;
        }
        if (program)
        {
            glDeleteProgram(program);
            program = 0;
        }
       
        return FALSE;
    }
   
    // Get uniform locations.
    uniforms[UNIFORM_TRANSLATE] = glGetUniformLocation(program, "translate");
   
    // Release vertex and fragment shaders.
    if (vertShader)
        glDeleteShader(vertShader);
    if (fragShader)
        glDeleteShader(fragShader);
   
    return TRUE;
}

Any time I want to add or remove a shader attribute or uniform, I need to update this method, along with the associated logic in the drawFrame method. If we’re going to successfully pull out this code and put it in its own class, we need a way to handle a variable, arbitrary set of shader attributes and uniforms. We’ll start our planning by considering what we know about attributes and uniforms.

Shader attribute locations are defined by us. We tell OpenGL which location to associate with each shader attribute, like so.

    // Bind attribute locations.
    // This needs to be done prior to linking.
    glBindAttribLocation(program, ATTRIB_VERTEX, "position");
    glBindAttribLocation(program, ATTRIB_COLOR, "color");

In contrast, we have to ask OpenGL where the uniforms are after linking.

    // Get uniform locations.
    uniforms[UNIFORM_TRANSLATE] = glGetUniformLocation(program, "translate");

This is an important distinction, and will affect how we code our new class to abstract this processing. Fortunately, it’s the only tricky part of this whole exercise, so we now know everything we need to start coding our new class.
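To make that before-and-after distinction concrete, the abstracted loading code could treat the two cases as two passes around the link step. This is only a sketch, with hypothetical arrays of names standing in for whatever storage the real class will use:

```objc
// Sketch only: attribute names are bound to successive indices before the
// link; uniform locations are queried after it.
NSArray *attributeNames = [NSArray arrayWithObjects:@"position", @"color", nil];
NSArray *uniformNames   = [NSArray arrayWithObjects:@"translate", nil];

// Pass 1: bind attribute locations -- must happen before glLinkProgram().
for (NSUInteger i = 0; i < [attributeNames count]; i++)
    glBindAttribLocation(program, (GLuint)i,
                         [[attributeNames objectAtIndex:i] UTF8String]);

glLinkProgram(program);

// Pass 2: ask OpenGL where the uniforms ended up -- only valid after linking.
for (NSString *name in uniformNames) {
    GLint location = glGetUniformLocation(program, [name UTF8String]);
    // ...store the location keyed by name for use at draw time...
}
```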
