PrEV
Thoughts from a NeXTStep Guy on Cocoa Development

OpenGL ES 2.0 Resources

Jun 10, 2009 by Bill Dudney

With the announcement of the iPhone 3GS came the availability (well, at least when the hardware ships) of a programmable OpenGL pipeline. Basically that means that just about everything changes for the new device. Not only is the new pipeline going to make for much faster apps, but it's going to open the door to tons of very cool stuff. For example, Core Image (the tech behind the cool ripple transition in Dashboard) requires a programmable pipeline. No word on Core Image coming to an iPhone SDK near you, but you don't have to be a clairvoyant prognosticator to see that coming.

Since everything outside the keynote is under the NDA (which only lasts till June 17) I can't really talk about the amazingly cool demos they have been showing on the new hardware. However, here are some decent resources on OpenGL ES 2.0 to start getting your mind wrapped around the new stuff.

Khronos spec page.

PowerVR's site.

GLSL spec (PDF).

I read OpenGL ES 2.0 but I can't recommend it as a starting point. If you don't know OpenGL already it's a bit rough. If you already know OpenGL ES 1.1, then it's a decent book.

If you are brand new to the whole thing you can read Jeff LaMarche's excellent series on OpenGL ES 1.1.


Wavefront OBJ - textures working

Dec 31, 2008 by Bill Dudney

I finally got the texture stuff working again. As I said in the last post, I have been working on getting my head wrapped around OpenGL, including the programmable pipeline (GLSL etc.). Since there is no programmable pipeline on the iPhone I started on the Mac so I could do GLSL. Well, it turns out that to load my textures I was using Image I/O, which is also not on the iPhone. After some false starts I finally just copied the Texture2D class from the TouchFighterII example. I could not find anything that said 'don't redistribute this class' so I figure it's OK.
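If you have never looked inside Texture2D, the basic trick is to draw the image into a Core Graphics bitmap context and hand the raw bytes to glTexImage2D. Here is a rough sketch of the idea (not Apple's actual code; createTextureFromImage is a made-up name, and it ignores details like flipping and power-of-two sizing that the real class handles):

    #import <UIKit/UIKit.h>
    #import <OpenGLES/ES1/gl.h>

    GLuint createTextureFromImage(UIImage *image) {
      CGImageRef cgImage = image.CGImage;
      size_t width = CGImageGetWidth(cgImage);
      size_t height = CGImageGetHeight(cgImage);
      // draw the image into a buffer with a known RGBA byte layout
      void *pixels = calloc(width * height * 4, 1);
      CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
      CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8,
                                               width * 4, colorSpace,
                                               kCGImageAlphaPremultipliedLast);
      CGColorSpaceRelease(colorSpace);
      CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
      CGContextRelease(ctx);
      // hand the bytes to OpenGL
      GLuint textureName = 0;
      glGenTextures(1, &textureName);
      glBindTexture(GL_TEXTURE_2D, textureName);
      // no mipmaps in this sketch, so the min filter must be changed
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
      glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width,
                   (GLsizei)height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
      free(pixels);
      return textureName;
    }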

Once I got my textures to load properly, the rest turned out to be fairly easy to get going again. I ripped out all the shader stuff and it just worked. Well, there was also the matter of converting all my indexes to shorts, and apparently the length of the buffer does matter (d'oh!), so once I got that sorted everything showed up as it should. Without further ado, here is the code that will run in the simulator.
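As an aside, the shorts fix boiled down to something like this (a sketch with hypothetical names, not the code from the download): glDrawElements with GL_UNSIGNED_SHORT expects GLushort indexes, and the length handed to glBufferData is counted in bytes of that type.

    // repack int indexes as GLushort and size the buffer by the
    // short count, not the int count
    GLushort *shortIndexes = malloc(indexCount * sizeof(GLushort));
    for (size_t i = 0; i < indexCount; i++) {
      shortIndexes[i] = (GLushort)intIndexes[i];
    }
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexesName);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER,
                 indexCount * sizeof(GLushort), // not sizeof(GLuint)!
                 shortIndexes, GL_STATIC_DRAW);
    free(shortIndexes);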

Here is a screenshot of one of the models running. The pyramid and the plane have a texture applied; the cube does not.

To get the texture VBO into video memory I did this:

      glEnableClientState(GL_TEXTURE_COORD_ARRAY);
      GLuint textureCoordsName = [group textureCoordinatesName:GL_STATIC_DRAW];

The first line enables the texture coordinate array client state (so OpenGL will read per-vertex texture coords when drawing). The second line pushes the bytes out to the card, if necessary, like this:

- (GLuint)textureCoordinatesName:(GLenum)usage {
  // lazily create the VBO for the texture coords: generate a buffer
  // name, bind it, and copy the coordinate bytes to the card once
  if(0 == textureCoordsName && texCoordsData.length > 0) {
    glGenBuffers(1, &textureCoordsName);
    glBindBuffer(GL_ARRAY_BUFFER, textureCoordsName);
    glBufferData(GL_ARRAY_BUFFER, texCoordsData.length,
                 texCoordsData.bytes, usage);
  }
  return textureCoordsName;
}

Then the texture coords buffer is bound like this:

      glBindBuffer(GL_ARRAY_BUFFER, textureCoordsName);
      glTexCoordPointer([group texCoordSize], GL_FLOAT, 0, 0);

That binds the texture coordinates buffer (i.e. makes it the active GL_ARRAY_BUFFER), then we set the texture coord pointer, telling OpenGL that each coordinate has texCoordSize components of type GL_FLOAT, tightly packed (a stride of 0), starting at the beginning of the bound buffer.

Next up we enable 2D texturing, bind the texture, and we are done with the texture stuff.

      glEnable(GL_TEXTURE_2D);
      GLuint texId = [group.material.diffuseTexture textureName];
      glBindTexture(GL_TEXTURE_2D, texId);

Now when we draw with the glDrawElements function, like this:

    GLuint indexesName = [group indexesName:GL_STATIC_DRAW];
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexesName);
    glDrawElements(GL_TRIANGLES, group.indexCount, GL_UNSIGNED_SHORT, NULL);

The texture (and the coords) will be used.

Happy hacking!


Wavefront OBJ files and learning OpenGL

Dec 29, 2008 by Bill Dudney

Many years ago (way back in college) I wrote a 3D stress visualization application for one of my professors. It was my very first Cocoa application, written on a NeXT Cube in Display PostScript. Wow, was it cool to work on that app. I had so much fun, learned a ton, and got to help with some research. So much fun, in fact, that I got a 2nd credit card, and after maxing both out, bought a NeXT slab so I could program at home.

The system that I wrote ended up being a lot like a primitive OpenGL. Ever since then I've been fascinated by 3D graphics. During my life as a Java/IT consultant I never had time to pursue it. Now that I'm an iPhone developer and not constrained by stuff that an IT department wants to pay for, I've decided to spend some time investing in getting my brain wrapped around OpenGL.

I have several things I'm thinking about doing, but mostly right now I'm just trying to get a handle on how to write good OpenGL code. That includes 'old style' fixed-function pipeline coding for OpenGL ES as well as the programmable pipeline (i.e. GLSL shaders) of OpenGL 2.0. While the iPhone currently only supports a fixed-function pipeline, the industry is headed towards a programmable pipeline, and it's only a matter of time (speculation, I have no inside info) before the iPhone has OpenGL ES 2.0 support.

In the future I'd like to do OpenGL/OpenCL integration work too, but that will have to wait till Snow Leopard ships. In the meantime I'm going to be reading the spec and playing with it, but I won't be able to post because the Snow Leopard implementation is under NDA. I will, however, be posting as I can about what I'm getting out of OpenGL and the implementation on the iPhone (OpenGL ES).

My first task as I saw it was to implement a loader to read in a file format and convert that into an OpenGL image (so I'd have something fun to look at). My kids are into Blender, which exports files in the Wavefront OBJ file format, so I figured I'd give it a go and see what I could make happen. I posted on Twitter about working on an OBJ file loader and got a response from Brad Larson pointing me to Jeff LaMarche's excellent blog posts on his OBJ file loader. All that being said, if you need an OBJ file loader, you might have to write your own. I am mostly just trying to learn OpenGL and not really trying to make a good OBJ file loader (Jeff's goals are similar). I am probably going to do my real work with the COLLADA file format in the end, so I won't be maintaining this loader. OK, back to the point of this post.

Since I'm trying to learn 'the right way' to do OpenGL, I spent a bit of time researching the various approaches and looked at lots of tutorials and such. Unfortunately the state of OpenGL seems to be one of constant evolution, with little disambiguation between OpenGL 1.0 approaches vs. OpenGL 2.0 vs. OpenGL 3.0. While I like the fact that OpenGL is constantly evolving and getting better, I don't like spending hours on a tutorial only to read in the next tutorial that what I've just learned has been out of date since version 1.5.

After all my reading, what I decided is that I would ignore anything that used the 'immediate mode' style of vertex uploading as outdated. Basically, 'immediate mode' is any call to glVertex and its kin. These function calls are one of the slowest ways to upload your geometry data to the graphics card. I've heard it explained as 'you walk into the kitchen to make dinner, decide on pasta, go to the store and buy noodles, drive home, put them on the counter, go back to the store to buy ground beef, go back home, place it on the counter, etc.' While this gets the job done, you spend an awful lot of time in the car. Sending each vertex out to the card this way is similar; your data spends most of its time on the bus from main memory to the GPU's memory. Non-optimal.
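For anyone who hasn't seen it, the pattern I'm ruling out looks like this (desktop OpenGL; glBegin and friends don't exist at all in OpenGL ES):

    // outdated immediate mode: one function call per attribute per
    // vertex, every frame; each value takes its own trip over the bus
    glBegin(GL_TRIANGLES);
    glTexCoord2f(0.0f, 0.0f); glVertex3f(0.0f, 0.0f, 0.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex3f(1.0f, 0.0f, 0.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex3f(0.0f, 1.0f, 0.0f);
    glEnd();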

From what I've read, the fastest way to get data to the card is via 'Vertex Buffer Objects', or VBOs. A VBO is essentially an array of vertexes (2, 3 or 4 dimensional) that gets sent to the card all at once. It is typical to have all the vertexes for an item in a scene be in one VBO (or even for the whole scene). You send the buffer to the card via glBufferData. Since the vertexes are in one long array and are sent in one function call, considerable time is saved because you don't have to initialize the connection to the GPU memory over and over again; the connection is opened and all the data is pushed at once.
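In code the one-big-trip version looks roughly like this (a minimal sketch; vertexes and vertexCount are stand-ins for your parsed geometry):

    // one-time setup: push all the vertex data across the bus in a
    // single call
    GLuint vertexesName;
    glGenBuffers(1, &vertexesName);
    glBindBuffer(GL_ARRAY_BUFFER, vertexesName);
    glBufferData(GL_ARRAY_BUFFER, vertexCount * 3 * sizeof(GLfloat),
                 vertexes, GL_STATIC_DRAW);

    // every frame: point at the buffer and draw; no vertex data
    // travels over the bus again
    glEnableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, vertexesName);
    glVertexPointer(3, GL_FLOAT, 0, NULL);
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);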

Now comes the interesting part: OBJ files are optimized for immediate mode processing (I don't know how old the format is, but it's ancient in Internet time). So I had to do a bunch of work to get my vertexes, normals, colors, materials, and textures properly aligned. Jeff seemed to always be a day or two ahead of me and has done a great job of writing up most of what was driving me nuts. So hats off again to Jeff for some great, informative posts.
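To make the alignment problem concrete: an OBJ file is plain text, something like this made-up single-triangle example:

    # a minimal Wavefront OBJ file: one textured triangle
    v 0.0 0.0 0.0
    v 1.0 0.0 0.0
    v 0.0 1.0 0.0
    vt 0.0 0.0
    vt 1.0 0.0
    vt 0.0 1.0
    vn 0.0 0.0 1.0
    f 1/1/1 2/2/1 3/3/1

Each corner of an f line is a vertex/texcoord/normal triple of 1-based indexes, and each attribute list is indexed independently. glDrawElements, by contrast, uses one index for all attributes, so the loader has to expand the per-attribute indexes into a single index space, duplicating data where the file shared it. A sketch of the naive expansion (hypothetical names, not my actual loader code):

    // positions/texCoords/normals are the raw arrays parsed from the
    // file; corners[] holds the v/vt/vn triple for every face corner
    // (already converted to 0-based)
    typedef struct { int v, vt, vn; } Corner;

    for (size_t i = 0; i < cornerCount; i++) {
      Corner c = corners[i];
      // copy each attribute so entry i of every output array describes
      // the same vertex, duplicating data where the file shared it
      memcpy(&outPositions[i * 3], &positions[c.v * 3], 3 * sizeof(GLfloat));
      memcpy(&outTexCoords[i * 2], &texCoords[c.vt * 2], 2 * sizeof(GLfloat));
      memcpy(&outNormals[i * 3], &normals[c.vn * 3], 3 * sizeof(GLfloat));
      outIndexes[i] = (GLushort)i; // a single shared index per corner now
    }

A smarter version would reuse corners it has already seen instead of always duplicating, but the naive expansion is enough to make glDrawElements happy.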

My OBJ file loader is working; I've got it pulling in multi-object OBJ files (i.e. a scene) as well as single-object files. It knows how to copy the vertex/normal/texture coords so that everything works like it should, and I can get some decent 3D models to show up now. However, it is still a mess: each object ends up with its own VBO for each of the vertex attributes (normals and textures). While this works, it's much better to interleave the data so that everything needed to render the object is in one place (many thanks to Eric Wing for this pointer). So I still have a ways to go before it is really good.
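Interleaving means packing all the attributes for one vertex side by side in a single buffer and using the stride argument to hop between vertexes, roughly like this (a sketch of where I'm headed, not working code from the loader; offsetof comes from <stddef.h>):

    // all the attributes for one vertex sit next to each other; stride
    // tells OpenGL how far to step to reach the next vertex
    typedef struct {
      GLfloat position[3];
      GLfloat normal[3];
      GLfloat texCoord[2];
    } Vertex;

    glBindBuffer(GL_ARRAY_BUFFER, interleavedName);
    glVertexPointer(3, GL_FLOAT, sizeof(Vertex),
                    (void *)offsetof(Vertex, position));
    glNormalPointer(GL_FLOAT, sizeof(Vertex),
                    (void *)offsetof(Vertex, normal));
    glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex),
                      (void *)offsetof(Vertex, texCoord));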

I have to finish the iPhone SDK book before I can finish this loader, so I'm going to wait to push the code until after I'm done with the book. If you are dying to get your eyes on it, feel free to send me an email and I'll send you the ugly, undocumented code.
