
This is a blog about coding and web development.

Rendering models with Molehill


I’ve been wanting to follow up on my last Molehill post with some more advanced examples, and I’ve finally gotten around to it! This is the first of several posts about loading and rendering various model formats in Molehill.

Model formats can be frustrating. Most are ancient or written without hardware rendering in mind. You often have to massage the data a bit to make it usable. It’s interesting to see how formats changed over time as hardware-accelerated rendering standardized things. I’m going to start with some older formats, and work up to newer ones.

Here’s the code for this post in action, featuring the lovely Bunker model by Bobo the Seal.

Wavefront OBJ

The Wavefront OBJ format was created for Wavefront’s Advanced Visualizer back in the 80s. It hasn’t seen an update since the 90s, as far as I can tell. (This is how things usually go with model formats.) It’s often still used today for static objects, because it’s a relatively simple format, and supported by nearly all modeling programs.

It’s a text-based model format. It supports many esoteric things, but this loader only supports the most common ones: vertex position, normal, and UV coordinates; faces with three or more vertices; and material groups. The OBJ format doesn’t support animations.

Here’s a Gist with an example of a cube in OBJ format. It’s a line-based format: the first word of each line specifies the command, and parameters to the command follow. Lines starting with a # are comments.

  • o cube: Sets the object name. There is only one object per OBJ file.
  • mtllib cube.mtl: Material file to use for this model file. This example loader doesn’t support MTL files, so you will have to set the material textures manually by name.
  • v -0.500000 -0.500000 0.500000: Vertex position (X, Y and Z coordinates).
  • vn 0.000000 0.000000 1.000000: Vertex normal.
  • vt 0.000000 0.000000: Vertex texture coordinate (UV).
  • g cube: Starts a new face group. Each face group has its own set of indexes and a material. The OBJ file may have more than one of these.
  • usemtl cube: Specifies the material from the mtllib to use for the face group.
  • s 1: Sets the smoothing group for a set of faces. Smoothing groups control how vertex normals are generated for a model. I’m ignoring this statement in my loader, since most game models have normals exported for them.
  • f 1/1/1 2/2/1 3/3/1: Indexes for a face. Each vertex has a tuple of indexes: positionIndex/uvIndex/normalIndex. OBJ faces are polygons, not triangles, so they may have more than three vertices.
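To make those index tuples concrete, here’s a small sketch of parsing a single f tuple, including the cases where the UV or normal index is omitted. It’s in TypeScript rather than the post’s AS3, purely for illustration; parseTuple and FaceTuple are made-up names, not part of the loader.

```typescript
// Parse one OBJ face tuple like "1/1/1", "2//3" (no UV), or "4" (position only).
// The tuple order is positionIndex/uvIndex/normalIndex, and OBJ indices are
// one-based, so we convert to zero-based here, with null for missing fields.
interface FaceTuple {
  position: number;
  uv: number | null;
  normal: number | null;
}

function parseTuple(tuple: string): FaceTuple {
  const parts = tuple.split('/');
  const toIndex = (s: string | undefined): number | null =>
    s !== undefined && s.length > 0 ? parseInt(s, 10) - 1 : null;
  return {
    position: toIndex(parts[0])!,
    uv: toIndex(parts[1]),
    normal: toIndex(parts[2]),
  };
}

console.log(parseTuple('1/1/1')); // { position: 0, uv: 0, normal: 0 }
console.log(parseTuple('2//3')); // { position: 1, uv: null, normal: 2 }
```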

The code

You’ll want to grab the example code on GitHub to follow along. It’s based on my previous post. I’m not going to explain all of it, but will outline the broad strokes.

The OBJ file and its textures get embedded into the AS3 code as a byte array and bitmaps, like normal. You can find that in DemoOBJ.as:

[Embed(source="../res/bunker/bunker.obj", mimeType="application/octet-stream")]
static protected const BUNKER_OBJ:Class;
[Embed(source="../res/bunker/fidget_head.png")]
static protected const BUNKER_HEAD:Class;
[Embed(source="../res/bunker/fidget_body.png")]
static protected const BUNKER_BODY:Class;

The OBJ object is created and loaded like so:

// Load the model, and set the material textures
_obj = new OBJ();
_obj.readBytes(new BUNKER_OBJ(), _context);
_obj.setMaterial('h_head', _headTexture);
_obj.setMaterial('u_torso', _bodyTexture);
_obj.setMaterial('l_legs', _bodyTexture);

You’ll note that I set the material textures manually here, since the loader doesn’t handle MTL files.

The OBJ file itself is parsed in OBJ.as, in the readBytes() method. The byte array for the OBJ is passed in, and it has to be converted into text, then read in line by line. Any empty lines or lines starting with a # should be skipped, like so:

var text:String = bytes.readUTFBytes(bytes.bytesAvailable);
var lines:Array = text.split(/[\r\n]+/);
for each (var line:String in lines) {
  // Trim whitespace from the line
  line = line.replace(/^\s*|\s*$/g, '');
  if (line === '' || line.charAt(0) === '#') {
    // Blank line or comment, ignore it
    continue;
  }

  // TODO: parse the line
}

For that TODO, you need to split the line on whitespace, and then check what kind of command it is. That can be done like so:

// Split line into fields on whitespace
var fields:Array = line.split(/\s+/);
switch (fields[0].toLowerCase()) {
  case 'v':
    // TODO: parse vertex position
    break;

  case 'vn':
    // TODO: parse vertex normal
    break;

  case 'vt':
    // TODO: parse vertex uv
    break;

  case 'f':
    // TODO: parse face
    break;

  case 'g':
    // TODO: parse group
    break;

  case 'o':
    // TODO: parse object
    break;

  case 'usemtl':
    // TODO: parse material
    break;
}

The vertex position (v command) is just three floats. The fields all get converted from strings into numbers, and pushed into the positions array:

case 'v':
  positions.push(
    parseFloat(fields[1]),
    parseFloat(fields[2]),
    parseFloat(fields[3]));
  break;

The vertex normal (vn command) works the same way:

case 'vn':
  normals.push(
    parseFloat(fields[1]),
    parseFloat(fields[2]),
    parseFloat(fields[3]));
  break;

The vertex UV (vt command) is only two floats. OBJ has a flipped V axis for texture coordinates, so it needs to get flipped back to normal:

case 'vt':
  uvs.push(
    parseFloat(fields[1]),
    1.0 - parseFloat(fields[2]));
  break;

For the group (g command), a new OBJGroup object is created and added to the list of groups. Groups have several properties (name, material, and faces), so the object is useful to keep track of all that.

case 'g':
  group = new OBJGroup(fields[1], materialName);
  groups.push(group);
  break;

The material name (usemtl command) just gets saved and assigned to the current group (if there is one). Any future groups get assigned the current material by default, unless they have their own usemtl command.

case 'usemtl':
  materialName = fields[1];
  if (group !== null) {
    group.materialName = materialName;
  }
  break;

The group face (f command) is a list of index tuples, as described earlier. A new vector is created to store the index tuples for the face, and the face is added to the current group. It will be processed later.

case 'f':
  face = new Vector.<String>();
  for each (var tuple:String in fields.slice(1)) {
    face.push(tuple);
  }
  if (group === null) {
    group = new OBJGroup(null, materialName);
    groups.push(group);
  }
  group._faces.push(face);
  break;

Fixing up the data

That’s all of the commands we need to handle. This loop will repeat for all of the lines in the file. Once it’s done, we’ll have several separate streams of vertex data (positions, normals, and uvs). We’ll also have a list of groups, each with its own list of faces, which have indices into these separate streams.

This is a problem. OBJ specifies a separate index for position, normal, and UV, but modern hardware rendering doesn’t support that. We can only have one index stream. To fix this, we need to merge all three vertex streams into a single stream. The face indices also need to be updated to point to the right offsets within this new stream.

To do this, each group gets a new index stream. Then, for each unique index tuple in the faces, we write a new vertex into the merged stream. If we’ve already encountered that index tuple in another face, we use the existing merged index.

The other problem we have is that OBJ allows polygonal faces: that is, faces don’t have to be triangles. This is a problem: Context3D only supports drawing triangles. To fix this, we’ll turn any non-triangles into a triangle fan.
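Before the AS3, here’s a language-neutral sketch of those two fix-ups, written in TypeScript for illustration. The names (buildIndices and so on) are made up, and the real loader also copies vertex data while merging; this only shows the index bookkeeping:

```typescript
// Sketch of the two fix-up steps: each unique "pos/uv/norm" tuple string maps
// to one merged vertex index, and an n-gon [v0, v1, ..., vn-1] becomes the
// triangle fan (v0, v1, v2), (v0, v2, v3), and so on.
function buildIndices(faces: string[][]): { indices: number[]; vertexCount: number } {
  const tupleIndices = new Map<string, number>();
  const indices: number[] = [];

  const mergeTuple = (tuple: string): number => {
    let index = tupleIndices.get(tuple);
    if (index === undefined) {
      // First time we've seen this tuple: it becomes a new merged vertex.
      // (A real loader would also copy position/normal/UV data here.)
      index = tupleIndices.size;
      tupleIndices.set(tuple, index);
    }
    return index;
  };

  for (const face of faces) {
    // Triangle fan: every triangle shares the face's first vertex.
    for (let i = 1; i < face.length - 1; ++i) {
      indices.push(mergeTuple(face[0]), mergeTuple(face[i]), mergeTuple(face[i + 1]));
    }
  }
  return { indices, vertexCount: tupleIndices.size };
}

// A quad becomes two triangles sharing its first vertex:
const quad = [['1/1/1', '2/2/1', '3/3/1', '4/4/1']];
console.log(buildIndices(quad).indices); // [ 0, 1, 2, 0, 2, 3 ]
```

Note that tuples shared between faces reuse their merged index, so a closed mesh ends up far smaller than one vertex per face corner.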

The loop for all of this looks like this:

for each (group in groups) {
  group._indices.length = 0;
  for each (face in group._faces) {
    var il:int = face.length - 1;
    for (var i:int = 1; i < il; ++i) {
      group._indices.push(mergeTuple(face[i], positions, normals, uvs));
      group._indices.push(mergeTuple(face[0], positions, normals, uvs));
      group._indices.push(mergeTuple(face[i + 1], positions, normals, uvs));
    }
  }
  group.indexBuffer = context.createIndexBuffer(group._indices.length);
  group.indexBuffer.uploadFromVector(group._indices, 0, group._indices.length);
  group._faces = null;
}

This loop calls mergeTuple for each index tuple in the face. That function looks like this:

protected function mergeTuple(
  tuple:String, positions:Vector.<Number>, normals:Vector.<Number>,
  uvs:Vector.<Number>):uint
{
  if (_tupleIndices[tuple] !== undefined) {
    // Already merged, return the merged index
    return _tupleIndices[tuple];
  } else {
    var faceIndices:Array = tuple.split('/');

    // Position index
    var index:uint = parseInt(faceIndices[0], 10) - 1;
    _vertices.push(
      positions[index * 3 + 0],
      positions[index * 3 + 1],
      positions[index * 3 + 2]);

    // Normal index
    if (faceIndices.length > 2 && faceIndices[2].length > 0) {
      index = parseInt(faceIndices[2], 10) - 1;
      _vertices.push(
        normals[index * 3 + 0],
        normals[index * 3 + 1],
        normals[index * 3 + 2]);
    } else {
      // Face doesn't have a normal
      _vertices.push(0, 0, 0);
    }

    // UV index
    if (faceIndices.length > 1 && faceIndices[1].length > 0) {
      index = parseInt(faceIndices[1], 10) - 1;
      _vertices.push(
        uvs[index * 2 + 0],
        uvs[index * 2 + 1]);
    } else {
      // Face doesn't have a UV
      _vertices.push(0, 0);
    }

    // Cache the merged tuple index in case it's used again
    return _tupleIndices[tuple] = _tupleIndex++;
  }
}

This function is the bulk of the work for the OBJ loader. If the tuple already exists in our tuple indices cache, we return the existing merged index. Otherwise, we copy the vertex data that the face points to into the merged array, and then return the new index.

OBJ doesn’t require that normals and UVs be specified, so we just shove some zeroes in there to keep things consistent when that happens. Modern vertex buffers are zero-indexed, but OBJ’s are one-indexed, so we also need to subtract one from all of the indices.

Last but not least, we need to create a vertex buffer for the merged stream. Each vertex is 8 floats (3 for position, 3 for normal, and 2 for UV), which is where the 8s below come from:

vertexBuffer = context.createVertexBuffer(_vertices.length / 8, 8);
vertexBuffer.uploadFromVector(_vertices, 0, _vertices.length / 8);

Rendering the model

That handles loading the model. Rendering it is pretty straightforward. The update() function in DemoOBJ.as does some setup, then renders the OBJ like so:

// Draw the model
_context.setVertexBufferAt(
  0, _obj.vertexBuffer, 0, Context3DVertexBufferFormat.FLOAT_3);
_context.setVertexBufferAt(
  1, _obj.vertexBuffer, 3, Context3DVertexBufferFormat.FLOAT_3);
_context.setVertexBufferAt(
  2, _obj.vertexBuffer, 6, Context3DVertexBufferFormat.FLOAT_2);
for each (var group:OBJGroup in _obj.groups) {
  _context.setTextureAt(0, _obj.getMaterial(group.materialName));
  _context.drawTriangles(group.indexBuffer);
}

This sets up the vertex buffer streams for the position, normal, and UVs in the OBJ buffer. Then it loops over each group, setting the texture for the group’s material, and drawing the triangles associated with that group.

I hope that helps show how to load and render models in Molehill. I can’t cover all the code in this post, so feel free to comment or email me if you have any questions. If you want to find more OBJ files to mess around with, I recommend looking at the SDK master thread on Polycount.

In my next post, I’ll look at Quake MDL files. This is another arcane format, but it’s binary and has animation, so there is more to learn!

Loading embedded assets at runtime


The Embed ActionScript metadata tag is a common way to include external files in a SWF. Quick review! Let’s say you had these files:

.
├── assets
│   ├── test.mp3
│   ├── test.png
│   └── test.xml
├── Test.as
├── test.html
└── test.swf

You could embed and use these assets like so:

package {
  import flash.display.Bitmap;
  import flash.display.Sprite;
  import flash.media.Sound;
  import flash.utils.ByteArray;

  public class Test extends Sprite {
    [Embed(source='assets/test.png')]
    static public var TEST_BITMAP:Class;

    [Embed(source='assets/test.mp3')]
    static public var TEST_SOUND:Class;

    [Embed(source='assets/test.xml', mimeType='application/octet-stream')]
    static public var TEST_XML:Class;

    function Test() {
      // Types Flash knows how to encode become normal Flash objects.
      var bitmap:Bitmap = new TEST_BITMAP;
      addChild(bitmap);
      var sound:Sound = new TEST_SOUND;
      sound.play();

      // Other types get the octet-stream mimeType, and end up as a
      // raw ByteArray, which you can read yourself.
      var bytes:ByteArray = new TEST_XML;
      var text:String = bytes.readUTFBytes(bytes.length);
      var xml:XML = new XML(text);
      trace(xml);
    }
  }
}

All of the content in our current game is embedded this way. But, as the game gets closer to release, this setup has been getting frustrating. We need to recompile every time we want to test a change to a level, or image, or sound.

To fix this, I made the debug SWF load things on the fly, and use the embedded assets in release mode. We can save a level file, and restart the level to see the changes immediately – without recompiling or reloading the page.

This required doing several things:

  • Using reflection to find the embedded file’s path
  • Downloading the files instead of instantiating the classes
  • Stripping the debug download code in release mode

Finding the embedded file’s path

Given a Class variable, you can use Flash’s describeType() to get XML information about the variable, including attached tags. For example, running trace(describeType(TEST_BITMAP)) on the above variable would return:

<type name="Test_TEST_BITMAP" base="Class" isDynamic="true"
  isFinal="true" isStatic="true">
  <extendsClass type="Class"/>
  <extendsClass type="Object"/>
  <accessor name="prototype" access="readonly" type="*" declaredBy="Class"/>
  <factory type="Test_TEST_BITMAP">
    <extendsClass type="mx.core::BitmapAsset"/>
    <extendsClass type="mx.core::FlexBitmap"/>
    <extendsClass type="flash.display::Bitmap"/>
    <extendsClass type="flash.display::DisplayObject"/>
    <extendsClass type="flash.events::EventDispatcher"/>
    <extendsClass type="Object"/>
    ... snip ...
    <metadata name="Embed">
      <arg key="_resolvedSource" value="/path/to/assets/test.png"/>
      <arg key="_column" value="5"/>
      <arg key="source" value="assets/test.png"/>
      <arg key="exportSymbol" value="Test_TEST_BITMAP"/>
      <arg key="_line" value="9"/>
      <arg key="_file" value="/path/to/Test.as"/>
    </metadata>
    <metadata name="ExcludeClass"/>
    <metadata name="__go_to_ctor_definition_help">
      <arg key="file" value="Test_TEST_BITMAP.as"/>
      <arg key="pos" value="323"/>
    </metadata>
    <metadata name="__go_to_definition_help">
      <arg key="file" value="Test_TEST_BITMAP.as"/>
      <arg key="pos" value="253"/>
    </metadata>
  </factory>
</type>

I’ve snipped most of it (the full output is huge), but you can see the relevant metadata element at the bottom. We can use this to figure out the path to load. You can use Flash’s E4X parsing to extract the attribute:

var xml:XML = describeType(TEST_BITMAP);
var embedMetadata:XML = xml.factory.metadata.(@name == 'Embed');
var sourceArg:XML = embedMetadata.arg.(@key == 'source');
var path:String = sourceArg.@value;

// or, shorter:
path = (describeType(TEST_BITMAP).factory.metadata.(@name == 'Embed')
  .arg.(@key == 'source').@value);

Downloading the file

Once you have the paths, you’ll need to use classes like Loader to download the files on the fly. This means you’ll need to serve the assets over HTTP, unless you’re using AIR. I used a Ruby Sinatra server for this:

require 'rubygems'
require 'sinatra'

set :public, File.dirname(__FILE__)

I saved this as server.rb, next to Test.as. You run it with ruby server.rb (or by double clicking it, on Windows), and you’ll see something like:

$ ruby server.rb
== Sinatra/1.1.2 has taken the stage on 4567 for development with backup from Thin
>> Thin web server (v1.2.8 codename Black Keys)
>> Maximum connections set to 1024
>> Listening on 0.0.0.0:4567, CTRL+C to stop

If you don’t have Ruby, you can download it from the Ruby website. If you don’t have Sinatra, you can install it by running gem install sinatra.

Once it is running, you can test it in a browser. In my example, the URL for test.png would be http://localhost:4567/assets/test.png. Now the ActionScript code needs to load these URLs. I added some helper methods:

static public function getURLRequest(cls:Class):URLRequest {
  var path:String = (describeType(cls).factory.metadata.(@name == 'Embed')
    .arg.(@key == 'source').@value);
  return new URLRequest(path);
}

static public function getBitmap(cls:Class, callback:Function):void {
  var loader:Loader = new Loader;
  loader.contentLoaderInfo.addEventListener(Event.COMPLETE, function(event:Event):void {
    callback(loader.content);
  });
  loader.load(getURLRequest(cls));
}

static public function getBytes(cls:Class, callback:Function):void {
  var loader:URLLoader = new URLLoader;
  loader.dataFormat = URLLoaderDataFormat.BINARY;
  loader.addEventListener(Event.COMPLETE, function(event:Event):void {
    callback(loader.data);
  });
  loader.load(getURLRequest(cls));
}

static public function getSound(cls:Class, callback:Function):void {
  var sound:Sound = new Sound;
  sound.addEventListener(Event.COMPLETE, function(event:Event):void {
    callback(sound);
  });
  sound.load(getURLRequest(cls));
}

If this were real production code, you would want to catch error events. However, since this is just debug test code, I’m only listening for the COMPLETE event.

Now, instead of using new TEST_BITMAP, you call these methods:

getBitmap(TEST_BITMAP, function(bitmap:Bitmap):void {
  addChild(bitmap);
});

getSound(TEST_SOUND, function(sound:Sound):void {
  sound.play();
});

getBytes(TEST_XML, function(bytes:ByteArray):void {
  var text:String = bytes.readUTFBytes(bytes.length);
  var xml:XML = new XML(text);
  trace(xml);
});

Fixing caching issues

I quickly ran into a problem: Flash was only downloading the file once. This was because it was caching the assets. To stop Flash from caching, the HTTP server needs to send Cache-Control and Pragma headers. For a Sinatra server, you can use a Rack middleware:

require 'rubygems'
require 'sinatra'

class DisableCache
  def initialize(app)
    @app = app
  end

  def call(env)
    result = @app.call(env)
    result[1]['Cache-Control'] = 'no-cache, no-store, must-revalidate'
    result[1]['Pragma'] = 'no-cache'
    return result
  end
end

use DisableCache

set :public, File.dirname(__FILE__)

If you clear your cache and try again, you should see the SWF downloading the assets every time.

Disabling in release mode

I only wanted our SWF to download assets like this in debug mode. I could have done this with a debug Boolean:

static public function getSound(cls:Class, callback:Function):void {
  if (debug) {
    var sound:Sound = new Sound;
    sound.addEventListener(Event.COMPLETE, function(event:Event):void {
      callback(sound);
    });
    sound.load(getURLRequest(cls));
  } else {
    callback(new cls);
  }
}

However, in my case, I didn’t even want the loader code in my SWF in release mode. I did this with the -define command line option for mxmlc. This option allows you to do conditional compilation with constants. When I compile in debug mode, my command line looks like this:

mxmlc -debug -define+=ENV::debug,true -define+=ENV::release,false \
  -static-rsls -output=test.swf Test.as

And in release, it looks like this:

mxmlc -define+=ENV::debug,false -define+=ENV::release,true \
  -static-rsls -output=test.swf Test.as

Then I updated the helper functions to use these constants:

static public function getSound(cls:Class, callback:Function):void {
  ENV::debug {
    var sound:Sound = new Sound;
    sound.addEventListener(Event.COMPLETE, function(event:Event):void {
      callback(sound);
    });
    sound.load(getURLRequest(cls));
  }
  ENV::release {
    callback(new cls);
  }
}

The syntax is a bit strange for these conditionals. It’s basically the AS3 version of a C preprocessor #ifdef. If the constant is false, the attached block is completely stripped from the compiled SWF.
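As a rough analogue in another language: TypeScript has no preprocessor, but the same effect is commonly achieved with a build-time constant (for example, injected via a bundler’s define option) plus dead-code elimination. This sketch is my own illustration, not from the post:

```typescript
// Rough TypeScript analogue of the ENV::debug / ENV::release pattern.
// A build-time constant gates the code path; a minifier's dead-code
// elimination then strips the losing branch, much like mxmlc's -define
// strips the block whose constant is false.
const ENV_DEBUG = false; // in a real setup, injected by the build

function getAsset(path: string, embedded: string): string {
  if (ENV_DEBUG) {
    // Debug build: pretend to fetch the asset from the local server.
    return `fetched:${path}`;
  }
  // Release build: use the embedded copy; the branch above gets stripped.
  return embedded;
}

console.log(getAsset('assets/test.xml', '<test/>')); // <test/>
```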

When all is said and done, you have a SWF that uses embedded assets with only the overhead of a function call, and assets that are loaded on the fly for debug mode. You could take this even further by doing things like caching, or monitoring files and automatically refreshing them while the SWF is running… but I will leave that for a future post.