Rendering models with Molehill
I’ve been wanting to follow up on my last Molehill post with some more advanced examples, and I’ve finally gotten around to it! This is the first of several posts about loading and rendering various model formats in Molehill.
Model formats can be frustrating. Most are ancient or were written without hardware rendering in mind, so you often have to massage the data a bit to make it usable. It's interesting to see how formats changed over time as hardware-accelerated rendering standardized things. I'm going to start with some older formats, and work up to newer ones.
Here’s the code for this post in action, featuring the lovely Bunker model by Bobo the Seal:
Wavefront OBJ
The Wavefront OBJ format was created for Wavefront's Advanced Visualizer back in the 80s. It hasn't seen an update since the 90s, as far as I can tell. (This is how things usually go with model formats.) It's often still used today for static objects, because it's a relatively simple format, and nearly all modeling programs support it.
It’s a text-based model format. It supports many esoteric things, but this loader only supports the most common ones: vertex position, normal, and UV coordinates; faces with three or more vertices; and material groups. The OBJ format doesn’t support animations.
Here's a Gist with an example of a cube in OBJ format. It's a line-based format: the first word of each line specifies the command, and parameters to the command follow. Lines starting with a # are comments.
o cube: Sets the object name. There is only one object per OBJ file.

mtllib cube.mtl: Material file to use for this model file. This example loader doesn't support MTL files, so you will have to set the material textures manually by name.

v -0.500000 -0.500000 0.500000: Vertex position (X, Y, and Z coordinates).

vn 0.000000 0.000000 1.000000: Vertex normal.

vt 0.000000 0.000000: Vertex texture coordinate (UV).

g cube: Starts a new face group. Each face group has its own set of indexes and a material. The OBJ file may have more than one of these.

usemtl cube: Specifies the material from the mtllib to use for the face group.

s 1: Sets the smoothing group for a set of faces. Smoothing groups control how vertex normals are generated for a model. I'm ignoring this statement in my loader, since most game models have normals exported for them.

f 1/1/1 2/2/1 3/3/1: Indexes for a face. Each vertex has a tuple of indexes: positionIndex/uvIndex/normalIndex. OBJ faces are polygons, not triangles, so they may have more than three vertices.
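Put together, an abbreviated OBJ file using these commands might look something like this. (This is a sketch of just the front face of a cube, not the full cube from the Gist; note the four-vertex face, which the loader will have to triangulate later.)

# Front face of a cube (abbreviated example)
o cube
mtllib cube.mtl
v -0.500000 -0.500000 0.500000
v 0.500000 -0.500000 0.500000
v -0.500000 0.500000 0.500000
v 0.500000 0.500000 0.500000
vn 0.000000 0.000000 1.000000
vt 0.000000 0.000000
vt 1.000000 0.000000
vt 0.000000 1.000000
vt 1.000000 1.000000
g cube
usemtl cube
s 1
f 1/1/1 2/2/1 4/4/1 3/3/1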
The code
You’ll want to grab the example code on GitHub to follow along. It’s based on my previous post. I’m not going to explain all of it, but will outline the broad strokes.
The OBJ file and its textures get embedded into the AS3 code as a byte array and bitmaps, like normal. You can find that in DemoOBJ.as:
[Embed(source="../res/bunker/bunker.obj", mimeType="application/octet-stream")]
static protected const BUNKER_OBJ:Class;
[Embed(source="../res/bunker/fidget_head.png")]
static protected const BUNKER_HEAD:Class;
[Embed(source="../res/bunker/fidget_body.png")]
static protected const BUNKER_BODY:Class;
It's created and loaded like so:
// Load the model, and set the material textures
_obj = new OBJ();
_obj.readBytes(new BUNKER_OBJ(), _context);
_obj.setMaterial('h_head', _headTexture);
_obj.setMaterial('u_torso', _bodyTexture);
_obj.setMaterial('l_legs', _bodyTexture);
You’ll note that I set the material textures manually here, since the loader doesn’t handle MTL files.
The OBJ file itself is parsed in OBJ.as, in the readBytes() method. The byte array for the OBJ is passed in, and it has to be converted into text, then read in line by line. Any empty lines or lines starting with a # should be skipped, like so:
var text:String = bytes.readUTFBytes(bytes.bytesAvailable);
var lines:Array = text.split(/[\r\n]+/);
for each (var line:String in lines) {
// Trim whitespace from the line
line = line.replace(/^\s*|\s*$/g, '');
if (line === '' || line.charAt(0) === '#') {
// Blank line or comment, ignore it
continue;
}
// TODO: parse the line
}
For that TODO, you need to split the line up on whitespace, and then check what kind of command it is. That can be done like so:
// Split line into fields on whitespace
var fields:Array = line.split(/\s+/);
switch (fields[0].toLowerCase()) {
case 'v':
// TODO: parse vertex position
break;
case 'vn':
// TODO: parse vertex normal
break;
case 'vt':
// TODO: parse vertex uv
break;
case 'f':
// TODO: parse face
break;
case 'g':
// TODO: parse group
break;
case 'o':
// TODO: parse object
break;
case 'usemtl':
// TODO: parse material
break;
}
The vertex position (v command) is just three floats. The fields all get converted from strings into numbers, and pushed into the positions array:
case 'v':
positions.push(
parseFloat(fields[1]),
parseFloat(fields[2]),
parseFloat(fields[3]));
break;
The vertex normal (vn command) works the same way:
case 'vn':
normals.push(
parseFloat(fields[1]),
parseFloat(fields[2]),
parseFloat(fields[3]));
break;
The vertex UV (vt command) is only two floats. OBJ has a flipped V axis for texture coordinates, so it needs to get flipped back to normal:
case 'vt':
uvs.push(
parseFloat(fields[1]),
1.0 - parseFloat(fields[2]));
break;
For the group (g command), a new OBJGroup object is created and added to the list of groups. Groups have several properties (name, material, and faces), so the object is useful for keeping track of all that.
case 'g':
group = new OBJGroup(fields[1], materialName);
groups.push(group);
break;
The material name (usemtl command) just gets saved and assigned to the current group (if there is one). Any future groups get assigned the current material by default, unless they have their own usemtl command.
case 'usemtl':
materialName = fields[1];
if (group !== null) {
group.materialName = materialName;
}
break;
The group face (f command) is a list of index tuples, as described earlier. A new vector is created to store the index tuples for the face, and the face is added to the current group. It will be processed later.
case 'f':
face = new Vector.<String>();
for each (var tuple:String in fields.slice(1)) {
face.push(tuple);
}
if (group === null) {
group = new OBJGroup(null, materialName);
groups.push(group);
}
group._faces.push(face);
break;
Fixing up the data
That's all of the commands we need to handle. This loop repeats for all of the lines in the file. Once it's done, we'll have several separate streams of vertex data (positions, normals, and uvs). We'll also have a list of groups, each with its own list of faces, which hold indices into those separate streams.
This is a problem. OBJ specifies a separate index for position, normal, and UV, but modern hardware rendering doesn’t support that. We can only have one index stream. To fix this, we need to merge all three vertex streams into a single stream. The face indices also need to be updated to point to the right offsets within this new stream.
To do this, each group gets a new index stream. Then, for each unique index tuple in the faces, we write a new vertex into the merged stream. If we’ve already encountered that index tuple in another face, we use the existing merged index.
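To make that concrete, consider two hypothetical faces that share a position but use different UVs:

f 1/1/1 2/2/1 3/3/1
f 1/4/1 3/3/1 4/2/1

Position 1 appears with UV 1 in the first face and with UV 4 in the second, so those are two different tuples: the merged stream gets two vertices for that position, one per unique tuple. The tuple 3/3/1, on the other hand, appears in both faces, so both faces end up pointing at the same merged vertex.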
The other problem we have is that OBJ allows polygonal faces: faces don't have to be triangles. Context3D only supports drawing triangles, so we'll turn any larger polygon into a triangle fan: a polygon with N vertices becomes N - 2 triangles that all share the polygon's first vertex.
The loop for all of this looks like this:
for each (group in groups) {
group._indices.length = 0;
for each (face in group._faces) {
var il:int = face.length - 1;
for (var i:int = 1; i < il; ++i) {
group._indices.push(mergeTuple(face[i], positions, normals, uvs));
group._indices.push(mergeTuple(face[0], positions, normals, uvs));
group._indices.push(mergeTuple(face[i + 1], positions, normals, uvs));
}
}
group.indexBuffer = context.createIndexBuffer(group._indices.length);
group.indexBuffer.uploadFromVector(group._indices, 0, group._indices.length);
group._faces = null;
}
This loop calls mergeTuple for each index tuple in the face. That function looks like this:
protected function mergeTuple(
tuple:String, positions:Vector.<Number>, normals:Vector.<Number>,
uvs:Vector.<Number>):uint
{
if (_tupleIndices[tuple] !== undefined) {
// Already merged, return the merged index
return _tupleIndices[tuple];
} else {
var faceIndices:Array = tuple.split('/');
// Position index
var index:uint = parseInt(faceIndices[0], 10) - 1;
_vertices.push(
positions[index * 3 + 0],
positions[index * 3 + 1],
positions[index * 3 + 2]);
// Normal index
if (faceIndices.length > 2 && faceIndices[2].length > 0) {
index = parseInt(faceIndices[2], 10) - 1;
_vertices.push(
normals[index * 3 + 0],
normals[index * 3 + 1],
normals[index * 3 + 2]);
} else {
// Face doesn't have a normal
_vertices.push(0, 0, 0);
}
// UV index
if (faceIndices.length > 1 && faceIndices[1].length > 0) {
index = parseInt(faceIndices[1], 10) - 1;
_vertices.push(
uvs[index * 2 + 0],
uvs[index * 2 + 1]);
} else {
// Face doesn't have a UV
_vertices.push(0, 0);
}
// Cache the merged tuple index in case it's used again
return _tupleIndices[tuple] = _tupleIndex++;
}
}
This function is the bulk of the work for the OBJ loader. If the tuple already exists in our tuple indices cache, we return the existing merged index. Otherwise, we copy the vertex data that the face points to into the merged array, and then return the new index.
OBJ doesn't require that the normal and UV are specified, so we just shove some zeroes in there to keep things consistent when that happens. Modern vertex buffers are zero-indexed, but OBJ's are one-indexed, so we also need to subtract one from all of the indices. For example, the tuple 5/7/3 refers to position 5, UV 7, and normal 3 in the file, which become zero-based indices 4, 6, and 2.
Last but not least, we need to create a vertex buffer for the new stream:
vertexBuffer = context.createVertexBuffer(_vertices.length / 8, 8);
vertexBuffer.uploadFromVector(_vertices, 0, _vertices.length / 8);
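Each merged vertex is 8 floats, which is where the divide-by-8 comes from. The layout within the stream, as written by mergeTuple, is:

// Merged vertex layout, 8 floats per vertex:
//   floats 0-2: position (x, y, z)
//   floats 3-5: normal (nx, ny, nz)
//   floats 6-7: texture coordinate (u, v)

These offsets (0, 3, and 6) are the same ones passed to setVertexBufferAt when rendering.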
Rendering the model
That handles loading the model. Rendering it is pretty straightforward. The update() function in DemoOBJ.as does some setup, then renders the OBJ like so:
// Draw the model
_context.setVertexBufferAt(
0, _obj.vertexBuffer, 0, Context3DVertexBufferFormat.FLOAT_3);
_context.setVertexBufferAt(
1, _obj.vertexBuffer, 3, Context3DVertexBufferFormat.FLOAT_3);
_context.setVertexBufferAt(
2, _obj.vertexBuffer, 6, Context3DVertexBufferFormat.FLOAT_2);
for each (var group:OBJGroup in _obj.groups) {
_context.setTextureAt(0, _obj.getMaterial(group.materialName));
_context.drawTriangles(group.indexBuffer);
}
This sets up the vertex buffer streams for the position, normal, and UVs in the OBJ buffer. Then it loops over each group, setting the texture for the group’s material, and drawing the triangles associated with that group.
I hope that helps show how to load and render models in Molehill. I can’t cover all the code in this post, so feel free to comment or email me if you have any questions. If you want to find more OBJ files to mess around with, I recommend looking at the SDK master thread on Polycount.
In my next post, I’ll look at Quake MDL files. This is another arcane format, but it’s binary and has animation, so there is more to learn!