Thoughts from a NeXTStep Guy on Cocoa Development

TrackBall - 3D transforms made easy

Oct 18, 2008 by Bill Dudney

Apart from getting all my existing CA examples onto the iPhone, I've also been toying with the best way to build out the 'photo city' demo from WWDC 2008 (my next Core Animation screencast series). The basic idea of the demo was that you had a set of perhaps 30 or 40 images; the images were combined into cubes, and the cubes were used to make a 'city'. After getting a basic cube working I got distracted by some of the stuff I did to make the demo. Namely, I finally got around to porting the OpenGL trackball example code to Core Animation.

For those not familiar with the trackball example: the idea is that you have a transparent sphere around your scene, and you can move the scene around by moving the trackball. As you move your finger to the right it pushes this imaginary sphere around its center to the right (exposing the left side of the scene).

I'm not 100% sure this is the right API for such an object, but I was able to use it in a couple of examples for a course I'm working on. I will also be using it in one of the demos for my talk at iPhone Live. So while it might not be perfect, I figure it's good enough to post now. Please feel free to comment with what you think would be better.

Now on to document the TrackBall class. The idea is that you have a 2D viewport into a 3D scene; this viewport has a width and height (i.e. the CGRect that defines the layer's bounds). In this 3D world you construct an imaginary sphere, centered on the center of your scene, with a radius of the smaller of your viewport's height or width. When the event begins (with touchesBegan:withEvent:) you initialize the trackball with the touch's location as the starting point. A vector is constructed from the center of the sphere to the touch (the depth dimension is calculated based on the radius of the sphere). As the user moves her finger around on the screen, another vector is constructed from the center of the sphere to the current touch location (as received in touchesMoved:withEvent:). The cross product of these two vectors is the axis of rotation, and the angle between them is the magnitude of the rotation.

Practically, what all this means is that in the touchesBegan:withEvent: method you call the setStartPointFromLocation: method with the location of the touch (if you don't have multi-touch turned on for the view there will be only one touch in the touches set, so you can use the anyObject method to get the touch; code to follow shortly). That initializes the trackball so it knows the first vector (from the center of the sphere to the starting point). As the user drags his finger around on the screen you call rotationTransformForLocation: to get a CATransform3D. This transform encapsulates the rotation axis and angle so you don't really have to grok it to use it (although it helps:). Next you set your layer's sublayerTransform property to this transform.

The scene contained in your layer will now rotate as if it existed in a sphere and you were moving that sphere around. It's a cool effect if you've never seen it before. If you want the trackball to remember where it is when the user lifts her finger, you simply call finalizeTrackBallForLocation: with the touch location from the touchesEnded:withEvent: method. Now on to some code. Here is the code to initialize the trackball:

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
  CGPoint location = [[touches anyObject] locationInView:self];
  if(nil == self.trackBall) {
    self.trackBall = [TrackBall trackBallWithLocation:location inRect:self.bounds];
  } else {
    [self.trackBall setStartPointFromLocation:location];
  }
}

In this example I'm keeping the trackball around and finalizing it in the touchesEnded:withEvent: method (we will see that shortly). Next up, I get the transform from the trackball in the touchesMoved:withEvent: method.

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
  CGPoint location = [[touches anyObject] locationInView:self];
  CATransform3D transform = [trackBall rotationTransformForLocation:location];
  // transformed is the CALayer whose sublayers make up the scene
  transformed.sublayerTransform = transform;
}

Then in the touchesEnded:withEvent: method I finalize the trackball so it knows where it left off on the next event cycle.

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
  CGPoint location = [[touches anyObject] locationInView:self];
  [self.trackBall finalizeTrackBallForLocation:location];
}

And finally here is the code. Happy hacking!


When I was writing <a href="">Molecules</a>, I evaluated trackball rotation as a means of doing the finger-based rotation of my molecular models, but wasn't happy with the complexity of the code. It also didn't always seem to provide the most natural rotation of a scene.

I worked through the math on doing OpenGL scene rotation and found that you could use the distance the finger moved in X and Y, together with the current model-view matrix, to do a four-line rotation (now two-line, thanks to David Rowland). The code for this rotation is as follows:

// Grab the current model-view matrix
GLfixed currentModelViewMatrix[16];
glGetFixedv(GL_MODELVIEW_MATRIX, currentModelViewMatrix);

GLfloat totalRotation = sqrt(xRotation*xRotation + yRotation*yRotation);

// Do the actual rotation
glRotatex([moleculeToDisplay floatToFixed:totalRotation],
          (GLfixed)((xRotation/totalRotation) * (GLfloat)currentModelViewMatrix[1] + (yRotation/totalRotation) * (GLfloat)currentModelViewMatrix[0]),
          (GLfixed)((xRotation/totalRotation) * (GLfloat)currentModelViewMatrix[5] + (yRotation/totalRotation) * (GLfloat)currentModelViewMatrix[4]),
          (GLfixed)((xRotation/totalRotation) * (GLfloat)currentModelViewMatrix[9] + (yRotation/totalRotation) * (GLfloat)currentModelViewMatrix[8]));

In my case, I do incremental rotation of the scene. That is, when the user moves their finger, the current touch position is compared to the previous touch position and the difference in pixels in X and Y is passed into the rotation method (as the xRotation and yRotation variables above). The rotation is applied to the current model view matrix for each OpenGL rendering frame, and I don't reset the model view matrix to the identity matrix at the start of each frame. I'm also doing this using fixed-point math, so this may need to be tweaked for the more common floating-point case.

How this works is it independently rotates the scene about the X and Y axes that run horizontally and vertically across the touch screen. As you move your finger down, it rotates the scene by a greater and greater angle about the touch screen's X axis. The trick was in figuring out how to convert the touch screen X and Y axes to the 3-D coordinate space of the model object.

It seems like Core Animation's 3D handling uses similar data structures to OpenGL, so this should be able to translate across pretty well. The current CATransform3D acts like the current OpenGL model view matrix, and even appears to have the same row and column structure, so you should be able to grab the same elements and do a CATransform3DMakeRotation in the same fashion as the glRotatex above.

The source code to Molecules is available at the link above, if you want to see this in action.

Posted by Brad Larson on October 18, 2008 at 10:42 AM MDT #

Hey Brad,

Thanks for the comment!

The concepts are similar, but hopefully the actual code in my trackball class is simpler than the OpenGL equivalent that is common on the net. CA actually takes care of much of the complexity for you, so you don't even have to calculate the matrix at all; just let the CATransform3D functions do all the hard work for you.

The math in this example is actually quite simple (no quaternions here!) so if you grok cross products you will grok this code.

Thanks again (and everyone check out Molecules, it rocks!)

Posted by Bill Dudney on October 18, 2008 at 10:47 AM MDT #

There's only one thing that I don't understand about the above code. It's this line in touchesMoved:

transformed.sublayerTransform = transform;

As far as I understand "transformed" is a CALayer which should be added to the view like this:

[view.layer addSublayer:transformed];

But where is transformed defined and how does it get initialized?

I think this is the only thing keeping me from successfully implementing the code, and it would be great if you could explain to me what is happening with "transformed".


Posted by Elshan Siradjov on December 04, 2009 at 12:17 PM MST #

I ported this to Swift:

Posted by Scott Gardner on March 15, 2016 at 07:41 PM MDT #
