For those of us interested in Game Development as a hobby, one of the main challenges in building a 3D game is understanding the OpenGL framework.

Distinguishing between the different coordinate systems is essential if one doesn’t want to get hopelessly lost in coordinate transformations. A world coordinate system is the system of coordinates in which the scene is most naturally described; it is the system that arises from the game’s constraints and scene. The eye coordinate system is the system OpenGL understands, with the *z* axis sticking out of the screen and *x* and *y* running along the screen edges.

Furthermore, we also need a camera in our scene: the point of view from which we render the scene.

It is very useful to split an OpenGL program into what our virtual objects do in the scene and how the camera moves through it. To achieve that split of responsibilities, it helps to have a Camera class that encapsulates the state of the camera at all times. The camera should know how to return its view transformation matrix, which sets the point of view from which the objects will be rendered. This camera (or view) transformation is the matrix that transforms vertices from world coordinates to eye coordinates.

For the impatient: the GitHub repository containing the Camera class implementation in Objective-C can be found at https://github.com/martinberoiz/CameraClass. It can be easily adapted to C++.

This article explains how to set up a view transformation for a Camera class.

One way to implement a camera is to set the position and orientation of the camera in world coordinates. From these, one can work out the transformation matrix to be applied to the vertices in the scene. The following assumes that we have a point the camera is looking at (**lookAt**), an approximate vector for what is considered “up” for the camera (**upVector**), and the coordinates of the camera (**pos**) in the scene, all three in world coordinates. The **upVector** need not be the true upwards vector; it only needs to have a non-zero component along the true “up” direction.

In what follows, {**i**, **j**, **k**} are the basis vectors of the world coordinate system and {**i’**, **j’**, **k’**} are the basis vectors of the eye coordinate system. In the eye coordinate system (the only one OpenGL cares about) the eye always points in the -z direction. From the **pos** and **lookAt** vectors, we can find the **k’** direction of the camera in world coordinates:

-**k’** = normalize(**lookAt** - **pos**)

**k’** = normalize(**pos** - **lookAt**)

“Up” is the y direction in the eye coordinate system. The **upVector** would be **j’** were it not for the fact that it is not necessarily completely upwards. Nevertheless, the cross product of **upVector** and **k’** points along **i’**, as long as **upVector** has a component in the true **j’** direction:

**i’** = normalize(**upVector** × **k’**)

Finally, the cross product of **k’** and **i’** gives us the true **j’**:

**j’** = **k’** × **i’**

Now that we have the relative orientation of the camera {**i’**, **j’**, **k’**} with respect to the world coordinates {**i**, **j**, **k**}, we can figure out the transformation (it’s a rotation) between the two frames. This can be done easily by noting that **i’** = (**i’**.**i**) **i** + (**i’**.**j**) **j** + (**i’**.**k**) **k**, and similarly for **j’** and **k’**, which can be written formally (it’s not a true matrix-vector multiplication) as

```
(i')   ( i'.i  i'.j  i'.k ) (i)
(j') = ( j'.i  j'.j  j'.k ) (j)
(k')   ( k'.i  k'.j  k'.k ) (k)
```

where the matrix R in the middle holds, in each row, the components of the corresponding primed vector in the {**i**, **j**, **k**} basis. The components of any other vector transform the same way: if a vector has components (x, y, z) in world coordinates and (x’, y’, z’) in eye coordinates, then

```
(x')       (x)
(y') = R . (y)
(z')       (z)
```

This can be seen by writing the same vector in both bases, x **i** + y **j** + z **k** = x’ **i’** + y’ **j’** + z’ **k’**, projecting onto the primed basis, and using R.R^{T} = I (since R is an orthonormal matrix).

R is the rotation that takes a vector from world coordinates to eye coordinates. Its transpose takes it from eye to world coordinates. To place the camera in the world we performed two operations: first we rotated the camera at the origin by the matrix R^{T}, and then we translated it by the vector **pos**.

M_{cam} = T(**pos**).R^{T}

The matrix we have to apply to the vertices to simulate the camera motion is the inverse of this M_{cam}:

M_{v} = R . T(-**pos**)
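As a sanity check, this inverse can be worked out in one line, using the facts that the inverse of a rotation is its transpose and the inverse of a translation is the opposite translation:

M_{v} = M_{cam}^{-1} = (T(**pos**).R^{T})^{-1} = (R^{T})^{-1}.T(**pos**)^{-1} = R.T(-**pos**)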

**Implementation in OpenGL|ES:**

OpenGL|ES stores matrices in a column-major format. (Equivalently, we can think of it as storing the transpose of the matrix in row-major format.) To construct the matrix R as the array viewMatrix[16], we have to fill each row of R with the primed direction versors:

```c
// Fill the rotation block of the 4x4 view matrix (column-major).
// prime[0], prime[1], prime[2] are i', j', k' in world coordinates.
for (int i = 0; i < 3; i++) {
    for (int j = 0; j < 3; j++) {
        viewMatrix[i*4 + j] = prime[j][i];
    }
    viewMatrix[i*4 + 3] = 0.;
}
for (int i = 12; i < 15; i++) viewMatrix[i] = 0.;
viewMatrix[15] = 1.;
```

The translation as an array T[16] is the usual one and has the form:

```
| 1  0  0  t_x |
| 0  1  0  t_y |
| 0  0  1  t_z |
| 0  0  0  1   |
```

which in column-major storage puts (t_x, t_y, t_z) at indices 12, 13, and 14 of the array.

But since T is mostly empty, we can perform the multiplication R.T(-**pos**) without ever storing the array T. The rotation block remains untouched, and only the translation column is replaced by a linear combination of the rotation terms and the translation terms.

In code:

```c
// Translation column of M_v = R.T(-pos): each entry is minus the dot
// product of pos with the corresponding row of R.
viewMatrix[12] = - position[0] * viewMatrix[0]
                 - position[1] * viewMatrix[4]
                 - position[2] * viewMatrix[8];
viewMatrix[13] = - position[0] * viewMatrix[1]
                 - position[1] * viewMatrix[5]
                 - position[2] * viewMatrix[9];
viewMatrix[14] = - position[0] * viewMatrix[2]
                 - position[1] * viewMatrix[6]
                 - position[2] * viewMatrix[10];
```

The whole project (WIP) can be found at https://github.com/martinberoiz/CameraClass.

*Note: I will update this post with more details.*