Boids

Demonstration of Transform Feedback functionality in OpenGL ES 3.0.

Introduction

It is assumed that you have read and understood all of the mechanisms described in Asset Loading and Simple Triangle.

Overview

Figure (Boids_android.png): The application displays 30 spheres: 1 leader and 29 followers.

The application also demonstrates the use of uniform buffers. It displays 30 spheres on the screen. The locations and velocities of the spheres in 3D space are updated regularly to simulate bird flocking. There is 1 leader sphere (red) and 29 followers (green). The leader follows a set looping path and the followers 'flock' in relation to the leader and the other followers. The locations of the boids are calculated on the GPU each frame by a vertex shader, prior to rendering the scene.

All of the data for the boids stays in GPU memory (by using buffers) and is not transferred back to the CPU. Transform feedback buffers are used to store the movement information output from the vertex shader, and this data is then used as the input data on the next pass. The same data is used when rendering the scene.

Render a Geometry

To render any kind of geometry (a quad, a cube, a sphere, or a more complicated model), the best approach is to render the triangles that make up the requested shape. So the very first thing we need to do is to generate the coordinates of those triangles. Please remember that it is very important to follow one winding order (clockwise or counter-clockwise) while calculating the coordinates; otherwise (if you mix them), OpenGL ES may have trouble detecting front and back faces. We will use counter-clockwise order, as this is the default for OpenGL ES.

The point coordinates of the triangles that make up the geometry we want to render (in our case, a sphere) are generated by a helper function in the sample.

For more details, please look into the implementation.
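
The sample's own generator is not reproduced on this page. The following is a minimal sketch, assuming a simple UV-sphere split into latitude/longitude bands (all function and variable names are illustrative), of how such counter-clockwise triangle coordinates could be produced:

#include <math.h>
#include <stdlib.h>

/* Minimal sketch (names are illustrative): generate counter-clockwise triangle
 * coordinates for a UV-sphere. The output array holds 3 floats per vertex and
 * 6 vertices per latitude/longitude patch. Patches touching the poles produce
 * degenerate triangles, which is harmless for this purpose. */
float* generateSphereTriangleCoordinates(float radius, int rings, int sectors, int* outFloatCount)
{
    const float pi         = 3.14159265358979f;
    int         floatCount = rings * sectors * 6 * 3; /* 2 triangles x 3 vertices x 3 floats per patch. */
    float*      coords     = (float*) malloc(floatCount * sizeof(float));
    int         index      = 0;

    for (int r = 0; r < rings; r++)
    {
        for (int s = 0; s < sectors; s++)
        {
            /* Angles of the four corners of the current patch. */
            float theta1 = pi * (float) r       / rings;
            float theta2 = pi * (float)(r + 1)  / rings;
            float phi1   = 2.0f * pi * (float) s      / sectors;
            float phi2   = 2.0f * pi * (float)(s + 1) / sectors;

            /* Corner positions on the sphere surface. */
            float p[4][3] = {
                { radius * sinf(theta1) * cosf(phi1), radius * cosf(theta1), radius * sinf(theta1) * sinf(phi1) },
                { radius * sinf(theta1) * cosf(phi2), radius * cosf(theta1), radius * sinf(theta1) * sinf(phi2) },
                { radius * sinf(theta2) * cosf(phi2), radius * cosf(theta2), radius * sinf(theta2) * sinf(phi2) },
                { radius * sinf(theta2) * cosf(phi1), radius * cosf(theta2), radius * sinf(theta2) * sinf(phi1) },
            };

            /* Two triangles per patch, both wound counter-clockwise as seen from outside the sphere. */
            int triangles[2][3] = { {0, 1, 2}, {0, 2, 3} };

            for (int t = 0; t < 2; t++)
            {
                for (int v = 0; v < 3; v++)
                {
                    coords[index++] = p[triangles[t][v]][0];
                    coords[index++] = p[triangles[t][v]][1];
                    coords[index++] = p[triangles[t][v]][2];
                }
            }
        }
    }

    *outFloatCount = floatCount;
    return coords;
}

The returned array can then be uploaded into a buffer object, as described in the next steps.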

The next step is to transfer the generated data into a buffer object and use it whilst rendering. But let's describe the problem in basic steps.

Generate a buffer object.

In the application we will need more buffer objects; at this point, however, we are interested in only one, so generating a single ID is enough (see the sketch below).
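
The sample's exact call is not reproduced on this page; a minimal sketch using the standard glGenBuffers() entry point (the variable name is illustrative) could look like this:

/* Generate a single buffer object ID (the variable name is illustrative). */
GLuint sphereCoordinatesBufferObjectId = 0;

GL_CHECK(glGenBuffers(1, &sphereCoordinatesBufferObjectId));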

Once the buffer object ID is generated, we can use that object to store the coordinates data.

/* Note: the buffer ID, size and data pointer names below are illustrative. */
GL_CHECK(glBindBuffer(GL_ARRAY_BUFFER,
                      sphereCoordinatesBufferObjectId));
GL_CHECK(glBufferData(GL_ARRAY_BUFFER,
                      numberOfSphereTrianglePoints * 3 * sizeof(float),
                      sphereTrianglesCoordinates,
                      GL_STATIC_DRAW));

The next step is to associate the input data for a program object with a specific array buffer object and then enable the vertex attrib array. This can be done as follows.

GL_CHECK(glBindBuffer (GL_ARRAY_BUFFER,
                       sphereCoordinatesBufferObjectId)); /* Buffer ID name is illustrative. */
GL_CHECK(glEnableVertexAttribArray(positionLocation));
GL_CHECK(glVertexAttribPointer (positionLocation,
3,
GL_FLOAT,
GL_FALSE,
0,
0));

We will soon explain what value should be used for the positionLocation argument.

In OpenGL ES 3.0 there is no rendering without program objects. First of all, we need to:

  1. create a program object ID,
    renderingProgramId = GL_CHECK(glCreateProgram());
  2. create the shader objects' IDs (the call below should be made twice: once with shaderType equal to GL_VERTEX_SHADER and once with GL_FRAGMENT_SHADER),
    *shaderObjectIdPtr = GL_CHECK(glCreateShader(shaderType));
  3. set the shader source,
    strings[0] = loadShader(filename);
    GL_CHECK(glShaderSource(*shaderObjectIdPtr, 1, strings, NULL));
  4. compile the shader (it is always a good idea to check whether the compilation succeeded, i.e. that GL_COMPILE_STATUS is set to GL_TRUE; see the sketch after this list),
    GL_CHECK(glCompileShader(*shaderObjectIdPtr));
    GL_CHECK(glGetShaderiv(*shaderObjectIdPtr, GL_COMPILE_STATUS, &compileStatus));
  5. attach the shaders to the program object,
    /* Attach vertex and fragment shaders to the rendering program (the shader ID names are illustrative). */
    GL_CHECK(glAttachShader(renderingProgramId, vertexShaderObjectId));
    GL_CHECK(glAttachShader(renderingProgramId, fragmentShaderObjectId));
  6. link the program object,
    GL_CHECK(glLinkProgram(renderingProgramId));
  7. use the program.
    GL_CHECK(glUseProgram (renderingProgramId));
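
As a complement to step 4, a minimal sketch of how the compile status could be verified and the info log retrieved on failure (all names other than the GL entry points are illustrative) is shown below:

/* Minimal sketch: verify compilation and print the info log on failure. */
GLint compileStatus = GL_FALSE;

GL_CHECK(glCompileShader(*shaderObjectIdPtr));
GL_CHECK(glGetShaderiv(*shaderObjectIdPtr, GL_COMPILE_STATUS, &compileStatus));

if (compileStatus != GL_TRUE)
{
    char infoLog[1024] = {0};

    GL_CHECK(glGetShaderInfoLog(*shaderObjectIdPtr, sizeof(infoLog), NULL, infoLog));
    LOGE("Shader compilation failed: %s", infoLog); /* LOGE is a placeholder for whatever logging facility is available. */
}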

And now we are ready to tell you what the positionLocation (mentioned above) represents. This is the location of an attribute in the program object. We can retrieve this value by calling

positionLocation = GL_CHECK(glGetAttribLocation (renderingProgramId, "attributePosition"));

The second argument, attributePosition, is the same name as used in the vertex shader. Please refer to the shader sources.

Vertex shader source:

#version 300 es
const int numberOfSpheres = 30;
const float pi = 3.14159265358979323846;
in vec4 attributePosition;
in vec4 attributeColor;
out vec4 vertexColor;
uniform vec3 scalingVector;
uniform vec4 perspectiveVector;
uniform vec3 cameraVector;
/*
* We use a uniform block in order to reduce the amount of memory transfers to a minimum.
* The uniform block takes its data directly from a buffer object.
*/
uniform BoidsUniformBlock
{
vec4 sphereLocation[numberOfSpheres];
};
void main()
{
float fieldOfAngle = 1.0 / tan(perspectiveVector.x * 0.5);
vec3 locationOfSphere = vec3 (sphereLocation[gl_InstanceID].x, sphereLocation[gl_InstanceID].y, sphereLocation[gl_InstanceID].z);
/* Set red color for leader and green color for followers. */
if(gl_InstanceID == 0)
{
vertexColor = vec4(attributeColor.x, 0.5 * attributeColor.y, 0.5 * attributeColor.z, attributeColor.w);
}
else
{
vertexColor = vec4(0.5 * attributeColor.x, attributeColor.y, 0.5 * attributeColor.z, attributeColor.w);
}
/* Create transformation matrices. */
mat4 translationMatrix = mat4(1.0, 0.0, 0.0, 0.0,
0.0, 1.0, 0.0, 0.0,
0.0, 0.0, 1.0, 0.0,
locationOfSphere.x, locationOfSphere.y, locationOfSphere.z, 1.0);
mat4 cameraMatrix = mat4(1.0, 0.0, 0.0, 0.0,
0.0, 1.0, 0.0, 0.0,
0.0, 0.0, 1.0, 0.0,
cameraVector.x, cameraVector.y, cameraVector.z, 1.0);
mat4 scalingMatrix = mat4(scalingVector.x, 0.0, 0.0, 0.0,
0.0, scalingVector.y, 0.0, 0.0,
0.0, 0.0, scalingVector.z, 0.0,
0.0, 0.0, 0.0, 1.0);
mat4 perspectiveMatrix = mat4(fieldOfAngle/perspectiveVector.y, 0.0, 0.0, 0.0,
0.0, fieldOfAngle, 0.0, 0.0,
0.0, 0.0, -(perspectiveVector.w + perspectiveVector.z) / (perspectiveVector.w - perspectiveVector.z), -1.0,
0.0, 0.0, (-2.0 * perspectiveVector.w * perspectiveVector.z) / (perspectiveVector.w - perspectiveVector.z), 0.0);
/* Compute scaling. */
mat4 tempMatrix = scalingMatrix;
/* Compute translation. */
tempMatrix = translationMatrix * tempMatrix;
tempMatrix = cameraMatrix * tempMatrix;
/* Compute perspective. */
tempMatrix = perspectiveMatrix * tempMatrix;
/* Return gl_Position. */
gl_Position = tempMatrix * attributePosition;
}

Fragment shader source:

#version 300 es
precision mediump float;
in vec4 vertexColor;
out vec4 fragColor;
void main()
{
fragColor = vertexColor;
}

In the vertex shader, there is one more attribute used (attributeColor); you should now know how to set input data for it. There are also some uniforms used, and we will now describe how to deal with them. First of all, we need to retrieve their locations.

scalingMatrixLocation = GL_CHECK(glGetUniformLocation (renderingProgramId, "scalingVector"));
perspectiveMatrixLocation = GL_CHECK(glGetUniformLocation (renderingProgramId, "perspectiveVector"));
cameraPositionLocation = GL_CHECK(glGetUniformLocation (renderingProgramId, "cameraVector"));

Please note that it is always a good idea to verify whether the returned data is valid.

ASSERT(positionLocation != -1, "Could not retrieve attribute location: attributePosition");
ASSERT(sphereVertexColorLocation != -1, "Could not retrieve attribute location: attributeColor");
ASSERT(scalingMatrixLocation != -1, "Could not retrieve uniform location: scalingMatrixLocation");
ASSERT(perspectiveMatrixLocation != -1, "Could not retrieve uniform location: perspectiveMatrixLocation");
ASSERT(cameraPositionLocation != -1, "Could not retrieve uniform location: cameraPositionLocation");
ASSERT(movementUniformBlockIndex != GL_INVALID_INDEX, "Could not retrieve uniform block index: BoidsUniformBlock");
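
The uniform block index referenced in the last assertion is obtained with glGetUniformBlockIndex(). A minimal sketch of how the block could be queried and connected to uniform buffer binding point 0 (the variable names are illustrative) follows:

/* Query the uniform block index and associate it with uniform buffer binding point 0. */
GLuint movementUniformBlockIndex = GL_CHECK(glGetUniformBlockIndex(renderingProgramId, "BoidsUniformBlock"));

GL_CHECK(glUniformBlockBinding(renderingProgramId, movementUniformBlockIndex, 0));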

Then, if we want to set data for the uniforms, it is enough to call

GL_CHECK(glUniform3fv(scalingMatrixLocation, 1, scalingVector));
GL_CHECK(glUniform4fv(perspectiveMatrixLocation, 1, perspectiveVector));
GL_CHECK(glUniform3fv(cameraPositionLocation, 1, cameraVector));

Finally, we are ready to issue the draw call. Normally, we would call

GL_CHECK(glDrawArrays(GL_TRIANGLES, 0, numberOfSphereTrianglePoints));

However, in this case, we want to render multiple instances of the same object (30 spheres). This is why we need to call

GL_CHECK(glDrawArraysInstanced(GL_TRIANGLES,
                               0,
                               numberOfSphereTrianglePoints,
                               numberOfSpheresToGenerate));
Please note that most of the functions mentioned above should be called only when the requested program object is active (glUseProgram() was called with an argument corresponding to the program object ID).

After completing all the steps described above we will get 30 spheres drawn onto the screen: 1 red and 29 green. We are ready to move them a little bit.

Transform Feedback

The basic concept of the application is to simulate birds' flocking behaviour. There is one leader and a number of followers, all of which try to keep their distance from the others. That is the idea, but how is it implemented?

The leader moves along a constant trajectory and its position is updated every frame. The new positions of the followers are calculated based on the leader's position, but the positions of the other spheres are also taken into account (so that the distance between them does not get too small).

We will now focus on the Transform Feedback mechanism rather than the algorithm itself. So let's start.

Transform feedback is used for setting position and velocity for each of the spheres. You cannot read from and write to the same buffer object at the same time, so we use a ping-pong approach. During the first call, the pong buffer is used for reading and ping buffer for writing. During the second call, the ping buffer is used for reading and the pong buffer for writing.
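
The sample's own bookkeeping is not shown on this page; a minimal sketch of the idea, assuming a simple boolean toggle and the two buffer IDs introduced below (the flag name is illustrative), could look like this:

/* Minimal sketch of the ping-pong selection: the buffer used for reading
 * in one frame becomes the write target in the next. */
static bool usePingBufferForTransformFeedbackOutput = true;

GLuint readBufferObjectId  = usePingBufferForTransformFeedbackOutput ? spherePongPositionAndVelocityBufferObjectId
                                                                     : spherePingPositionAndVelocityBufferObjectId;
GLuint writeBufferObjectId = usePingBufferForTransformFeedbackOutput ? spherePingPositionAndVelocityBufferObjectId
                                                                     : spherePongPositionAndVelocityBufferObjectId;

/* ... bind readBufferObjectId as the uniform buffer input and writeBufferObjectId
 *     as the transform feedback output, then issue the transform feedback pass ... */

/* Swap the roles for the next frame. */
usePingBufferForTransformFeedbackOutput = !usePingBufferForTransformFeedbackOutput;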

We have already created some buffer objects (please refer to Render a Geometry) and we will use them now. We will have two buffer objects, named with the ping and pong prefixes. Only one of them needs to be initialized with the original data (starting positions and velocities); the second one, however, should be prepared to store the updated values.

Set up the buffer objects' storage.

/* Buffers holding coordinates of sphere positions and velocities which are used by transform feedback
 * (to read from or write computed data). */
/* Set the buffers' size and usage, but do not fill them with any data.
 * Note: the size expression is illustrative; each buffer holds 4 position floats
 * and 4 velocity floats per sphere. */
GL_CHECK(glBindBuffer(GL_ARRAY_BUFFER,
                      spherePingPositionAndVelocityBufferObjectId));
GL_CHECK(glBufferData(GL_ARRAY_BUFFER,
                      sizeof(float) * 4 * 2 * numberOfSpheresToGenerate,
                      NULL,
                      GL_STATIC_DRAW));
GL_CHECK(glBindBuffer(GL_ARRAY_BUFFER,
                      spherePongPositionAndVelocityBufferObjectId));
GL_CHECK(glBufferData(GL_ARRAY_BUFFER,
                      sizeof(float) * 4 * 2 * numberOfSpheresToGenerate,
                      NULL,
                      GL_STATIC_DRAW));

Fill one of the buffer objects with data.

GL_CHECK(glBindBuffer   (GL_ARRAY_BUFFER,
                         spherePingPositionAndVelocityBufferObjectId)); /* Illustrative choice: fill whichever buffer serves as the input for the first frame (see below). */
GL_CHECK(glBufferSubData(GL_ARRAY_BUFFER,
                         0,
                         sizeof(float) * 4 * 2 * numberOfSpheresToGenerate,
                         startPositionAndVelocityData)); /* Illustrative name for the array of starting positions and velocities. */

We want to store the updated values of the sphere locations in space and their velocities. Please look at the vertex shader code that is used for the calculation. As you remember, a program object requires both a vertex and a fragment shader to be attached. However, in this situation we do not really need any fragment operations, so the main() function of the fragment shader is left empty. The whole algorithm is implemented in the vertex shader, as shown below:

#version 300 es
const int numberOfSpheres = 30;
/*
* We use a uniform block in order to reduce the amount of memory transfers to a minimum.
* The uniform block takes its data directly from a buffer object.
*/
uniform inputData
{
vec4 inLocation[numberOfSpheres]; /* Current location of spheres. */
vec4 inVelocity[numberOfSpheres]; /* Current velocity of spheres. */
};
out vec4 location; /* Transformed sphere location. */
out vec4 velocity; /* Transformed sphere velocity. */
uniform float time; /* Time value used for determining new leader's position. */
/* Boids fly toward center of the mass. */
vec4 moveToCenter()
{
vec4 center = vec4(0.0);
/* Calculate the center of mass for all other boids (average of their locations). */
for (int i = 0; i < numberOfSpheres; i++)
{
if (i != gl_InstanceID)
{
center = center + inLocation[i];
}
}
center = center / float(numberOfSpheres - 1);
return (center - inLocation[gl_InstanceID]) / 100.0;
}
/* Boids keep their distance from other boids. */
vec4 keepDistanceBetweenBoids()
{
vec4 result = vec4(0.0);
for (int i = 0; i < numberOfSpheres; i++)
{
if (i != gl_InstanceID)
{
/* Compute distance between boids. */
float xDistance = inLocation[i].x - inLocation[gl_InstanceID].x;
float yDistance = inLocation[i].y - inLocation[gl_InstanceID].y;
float zDistance = inLocation[i].z - inLocation[gl_InstanceID].z;
float xyzDistance = sqrt(xDistance * xDistance + yDistance * yDistance + zDistance * zDistance);
/* If distance between boids is too small, update result vector. */
/* Radius of sphere (which represents a single boid) is set to 10, scaling factor is set to 0.1, which means that the boids start to overlap if the distance gets below 2.
* Minimum distance is set to 4 so that boids start to run away from each other if the distance between them is too low.
* We use smoothstep() function to smoothen the "run-away".
*/
if (xyzDistance < 4.0)
{
result = result - (1.1 - smoothstep(0.0, 4.0, xyzDistance)) * (inLocation[i] - inLocation[gl_InstanceID]);
}
}
}
return result;
}
/* Boids try to match velocity with other boids. */
vec4 matchVelocity()
{
vec4 result = vec4(0.0);
/* Compute average velocity of all other boids. */
for (int i = 0; i < numberOfSpheres; i++)
{
if (i != gl_InstanceID)
{
result = result + inVelocity[i];
}
}
result = result / float(numberOfSpheres - 1);
return (result - inVelocity[gl_InstanceID]) / 2.0;
}
/* Compute followers' positions and velocities. */
void setFollowersPosition()
{
vec4 result1 = moveToCenter();
vec4 result2 = keepDistanceBetweenBoids();
vec4 result3 = matchVelocity();
velocity = inVelocity[gl_InstanceID] + result1 + result2 + result3;
location = inLocation[gl_InstanceID] + velocity;
}
/* Calculate leader's position using a certain closed curve. */
void setLeaderPosition()
{
location = vec4(15.0 * (1.0 + cos(time) - 1.0),
15.0 * sin(time),
2.0 * 15.0 * sin(time / 2.0),
1.0);
velocity = vec4(0.0);
}
void main()
{
/* Use a different approach depending on whether we are dealing with a leader or a follower. */
if (gl_InstanceID == 0)
{
setLeaderPosition();
}
else
{
setFollowersPosition();
}
}
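
As mentioned above, the movement program still needs a fragment shader attached, but its main() can be left empty since rasterization is discarded during the transform feedback pass. A minimal sketch of such a shader, written here as a C string constant (the variable name is illustrative), could be:

/* Minimal empty fragment shader for the movement program (illustrative). */
static const char* movementFragmentShaderSource =
    "#version 300 es\n"
    "precision mediump float;\n"
    "void main()\n"
    "{\n"
    "}\n";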

In the API, we need to declare the output variables that will store the results from the Transform Feedback operations.

const GLchar* varyingNames[] = {"location", "velocity"};
/*
* Specify varyings which are used with transform feedback buffer.
* In the shader we are using a uniform block to hold the location and velocity data.
* The uniform block takes its data from a buffer object, which is filled with position data for each sphere first, and then with velocity data for each sphere.
* Setting the mode to GL_SEPARATE_ATTRIBS means that each varying is captured into its own transform feedback binding point; the two binding points are then mapped onto the two halves of the output buffer, matching the layout of the input buffer object.
*/
GL_CHECK(glTransformFeedbackVaryings(movementProgramId,
2,
varyingNames,
GL_SEPARATE_ATTRIBS));

Please note that the glTransformFeedbackVaryings() function has no effect until glLinkProgram() is called, so please make sure that the call above is followed by

GL_CHECK(glLinkProgram(movementProgramId));

OK, we are prepared for the actual rendering. Before a single frame is rendered, we need to set up the proper input and output data storage (as we cannot read from and write to the same buffer object, we are using the ping-pong technique described above).

Set output buffer object

/*
* Configure transform feedback.
* Bind buffer object to first varying (location) of GL_TRANSFORM_FEEDBACK_BUFFER - binding point index equal to 0.
* Use the first half of the data array, 0 -> sizeof(float) * 4 * numberOfSpheresToGenerate (4 floating point position coordinates per sphere).
* Bind buffer object to first varying (velocity) of GL_TRANSFORM_FEEDBACK_BUFFER - binding point index equal to 1.
* Use the second half of the data array, from the end of the position data until the end of the velocity data.
* The size of the velocity data is sizeof(float) * 4 * numberOfSpheresToGenerate values (4 floating point velocity coordinates per sphere).
*
* The buffer bound here is used as an output from the movement vertex shader. The output variables in the shader that are bound to this buffer are
* given by the call to glTransformFeedbackVaryings earlier.
*/
if (usePingBufferForTransformFeedbackOutput) /* Illustrative flag name: true when the ping buffer is the output target this frame. */
{
    GL_CHECK(glBindBufferRange(GL_TRANSFORM_FEEDBACK_BUFFER,
                               0,
                               spherePingPositionAndVelocityBufferObjectId,
                               0,
                               sizeof(float) * 4 * numberOfSpheresToGenerate));
    GL_CHECK(glBindBufferRange(GL_TRANSFORM_FEEDBACK_BUFFER,
                               1,
                               spherePingPositionAndVelocityBufferObjectId,
                               sizeof(float) * 4 * numberOfSpheresToGenerate,
                               sizeof(float) * 4 * numberOfSpheresToGenerate));
}
else
{
    GL_CHECK(glBindBufferRange(GL_TRANSFORM_FEEDBACK_BUFFER,
                               0,
                               spherePongPositionAndVelocityBufferObjectId,
                               0,
                               sizeof(float) * 4 * numberOfSpheresToGenerate));
    GL_CHECK(glBindBufferRange(GL_TRANSFORM_FEEDBACK_BUFFER,
                               1,
                               spherePongPositionAndVelocityBufferObjectId,
                               sizeof(float) * 4 * numberOfSpheresToGenerate,
                               sizeof(float) * 4 * numberOfSpheresToGenerate));
}

Set input buffer object

/*
* The buffer bound here is used as the input to the movement vertex shader. The data is mapped to the uniform block, and as the size of the
* arrays inside the uniform block is known, the data is mapped to the correct variables.
*/
if (usePingBufferForTransformFeedbackOutput) /* Illustrative flag name, matching the output buffer selection above. */
{
    GL_CHECK(glBindBufferBase(GL_UNIFORM_BUFFER, 0, spherePongPositionAndVelocityBufferObjectId));
}
else
{
    GL_CHECK(glBindBufferBase(GL_UNIFORM_BUFFER, 0, spherePingPositionAndVelocityBufferObjectId));
}

Please be sure that the buffer object filled with the original data is used as an input when the first frame is being rendered.

We are now ready to issue the transform feedback.

/*
* Perform the boids transformation.
* This takes the current boid data in the buffers and passes it through the movement vertex shader.
* This fills the output buffer with the updated location and velocity information for each boid.
*/
GL_CHECK(glEnable(GL_RASTERIZER_DISCARD));
{
GL_CHECK(glUseProgram(movementProgramId));
GL_CHECK(glBeginTransformFeedback(GL_POINTS));
{
GL_CHECK(glUniform1f(timeLocation, timerTime));
GL_CHECK(glDrawArraysInstanced(GL_POINTS, 0, 1, numberOfSpheresToGenerate));
}
GL_CHECK(glEndTransformFeedback());
}
GL_CHECK(glDisable(GL_RASTERIZER_DISCARD));

Now we are able to use the updated data as an input for the program object responsible for rendering the spheres.
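
To summarize, a minimal sketch of a single frame, assuming the illustrative flag and variable names used earlier on this page, could look as follows:

/* Sketch of one frame: run the movement pass with transform feedback,
 * then render the spheres using the freshly updated data. */
void renderFrame(float timerTime)
{
    /* Bind the input uniform buffer and the output transform feedback ranges
     * according to the current ping-pong state (see the bindings shown above). */

    /* Movement pass: no rasterization is needed, we only capture the varyings. */
    GL_CHECK(glEnable(GL_RASTERIZER_DISCARD));
    GL_CHECK(glUseProgram(movementProgramId));
    GL_CHECK(glBeginTransformFeedback(GL_POINTS));
    GL_CHECK(glUniform1f(timeLocation, timerTime));
    GL_CHECK(glDrawArraysInstanced(GL_POINTS, 0, 1, numberOfSpheresToGenerate));
    GL_CHECK(glEndTransformFeedback());
    GL_CHECK(glDisable(GL_RASTERIZER_DISCARD));

    /* Rendering pass: bind the buffer that has just been written as the uniform
     * buffer behind BoidsUniformBlock, then draw the instanced spheres. */
    GL_CHECK(glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT));
    GL_CHECK(glUseProgram(renderingProgramId));
    GL_CHECK(glDrawArraysInstanced(GL_TRIANGLES, 0, numberOfSphereTrianglePoints, numberOfSpheresToGenerate));

    /* Swap the ping and pong roles for the next frame. */
    usePingBufferForTransformFeedbackOutput = !usePingBufferForTransformFeedbackOutput;
}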