Importing 3d Models in Android

This post is out of date; I was not able to get the current version of Blender working with the Ogre mesh export. An updated version using Wavefront export is here: http://www.bayninestudios.com/2014/04/importing-3d-models-in-android-redux/

Happy New Year! When making a 3d game, one of the processes to streamline is the 3d content creation pipeline. You could create 3d content by hard-coding vertices and texture coordinates, but that is very time consuming. It’s great for simple things like the particles in the particle system demo or a simple 1×1 tile, but for more complex 3d objects, external tools are needed.

I chose Blender (www.blender.org) for 3d modeling and Gimp (www.gimp.org/) for texture/image editing. The primary reason for choosing these tools is that they are free. Something like 3D Studio Max costs well over $3000, and Photoshop is around $900. This article is NOT a tutorial on using Blender, but it does include some tips on issues I came across.

Which 3D Format?

This is a big question, and the answer I came to was to use Ogre meshes. Ogre (www.ogre3d.org) is an open source 3d graphics engine used by some commercial games, one of which is Torchlight (www.torchlightgame.com). I had tinkered with Python Ogre when looking at making a game, so I was somewhat familiar with Ogre meshes. In particular, I am using XML Ogre meshes, as opposed to the binary .mesh files, because I felt it would be easier to parse XML in Java than a bunch of bytes, and I don’t think Blender can export to the binary format. In a future article, I plan to tackle the popular MD2 format from Quake 2, which is binary. Depending on which tools or formats you have used in the past, another format may suit you better.

Setting Up The Blender – Ogre Environment

Install Blender (at the time of writing, version 2.49b).

Install Ogre. You don’t have to install all of Ogre; really, all you need is the Ogre XML converter (OgreXMLConverter).

Install the Ogre export plugin for Blender found here, http://www.ogre3d.org/forums/viewtopic.php?t=45922. Follow the instructions on that page for installing the plugin.

Download and load my sample Blender project, ColorCube.zip. Unzip this file and load the ColorCube.blend file in Blender.

Creating an Ogre Mesh

You should see a Blender environment similar to this…

Blender Desktop

In the Ogre Meshes Exporter window, you may need to click on the preferences and define the path to where OgreXMLConverter.exe is on your system. You can also specify where the mesh gets put when exported. The one setting of note in the main window is that I have disabled “Fix Up Axis to Y”; in my application, I want the Z axis to be up. To export the model, just click the export button (as you can see, I have not renamed the object from the default cube).

Exported
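If you leave “Fix Up Axis to Y” enabled instead, the exported coordinates come out Y-up and would need converting back. A minimal sketch of that conversion (a helper of my own, not part of the exporter): rotating +90 degrees about the X axis maps a Y-up coordinate to the Z-up convention used here.

```java
public class AxisConvert {
    // Convert a Y-up point to Z-up by rotating +90 degrees about the X axis:
    // (x, y, z) -> (x, -z, y). The old up vector (0,1,0) becomes (0,0,1).
    public static float[] yUpToZUp(float x, float y, float z) {
        return new float[] { x, -z, y };
    }
}
```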

This will create a file called cube.mesh.xml, which needs to be renamed to colorcube.xml (XML resource file names must be all lowercase in Android). Already this is inefficient: having to rename the file every time slows down development. I haven’t found a way around this yet; ideally the exporter would write directly into the xml directory of my Android project with a correct file name.
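Until the exporter can do that itself, a small build step could handle the copy and rename. This is a sketch of my own (the paths and class name are just examples); it lowercases the name and strips the “.mesh” part, so renaming the object in Blender to, say, “ColorCube” would produce colorcube.xml automatically.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class MeshCopy {
    // Map an exporter file name like "Cube.mesh.xml" to the lowercase
    // Android resource name "cube.xml" (resource names must be lowercase).
    public static String resourceName(String exportedName) {
        return exportedName.toLowerCase().replace(".mesh.xml", ".xml");
    }

    // Copy the exported mesh into the project's res/xml directory under
    // its Android resource name, overwriting any previous export.
    public static Path copyMesh(Path exported, Path resXmlDir) throws IOException {
        Path target = resXmlDir.resolve(resourceName(exported.getFileName().toString()));
        return Files.copy(exported, target, StandardCopyOption.REPLACE_EXISTING);
    }
}
```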

One thing I noticed while moving vertices around is that Blender did not automatically convert quads into triangles. I did this by selecting the whole object and hitting CTRL-T. This matters because if you haven’t defined where all your triangles are, the exporter will connect vertices into triangles as it sees fit, which may not match what you’ve modeled.
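For a planar convex quad, the split CTRL-T performs amounts to fanning from the first vertex. A rough sketch of that split in terms of vertex indices (an illustrative helper, not Blender’s actual code):

```java
public class Triangulate {
    // Split a quad's four vertex indices (a,b,c,d in winding order) into
    // two triangles by fanning from the first vertex: (a,b,c) and (a,c,d).
    public static short[] quadToTriangles(short a, short b, short c, short d) {
        return new short[] { a, b, c, a, c, d };
    }
}
```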

Using an Ogre XML Mesh in Android

Now that we have a mesh, let’s start coding. Start by creating a new project, add a directory called xml to the resources, and put the colorcube.xml file in it. Then I took the basic OpenGL activity found on the Android site (http://android-developers.blogspot.com/2009/04/introducing-glsurfaceview.html); it’s a favorite starting point of mine. I’ve modified it as follows.

package com.bayninestudios.androidogremesh;

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

import android.app.Activity;
import android.content.Context;
import android.content.pm.ActivityInfo;
import android.opengl.GLSurfaceView;
import android.opengl.GLU;
import android.os.Bundle;

public class AndroidOgreMesh extends Activity
{
    private GLSurfaceView mGLView;

    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState)
    {
        super.onCreate(savedInstanceState);
        setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_PORTRAIT);
        mGLView = new ClearGLSurfaceView(this);
        setContentView(mGLView);
    }

    @Override
    protected void onPause()
    {
        super.onPause();
        mGLView.onPause();
    }

    @Override
    protected void onResume()
    {
        super.onResume();
        mGLView.onResume();
    }
}

class ClearGLSurfaceView extends GLSurfaceView
{
    ClearRenderer mRenderer;

    public ClearGLSurfaceView(Context context)
    {
        super(context);
        mRenderer = new ClearRenderer(context, this);
        setRenderer(mRenderer);
    }
}

class ClearRenderer implements GLSurfaceView.Renderer
{
    private ClearGLSurfaceView view;
    private DrawModel model;
    private float angleZ = 0f;

    public ClearRenderer(Context context, ClearGLSurfaceView view)
    {
        this.view = view;
        model = new DrawModel(context.getResources().getXml(R.xml.colorcube));
    }

    public void onSurfaceCreated(GL10 gl, EGLConfig config)
    {
        gl.glLoadIdentity();
        GLU.gluPerspective(gl, 25.0f, (view.getWidth() * 1f) / view.getHeight(), 1, 100);
        GLU.gluLookAt(gl, 0f, -10f, 6f, 0.0f, 0.0f, 0f, 0.0f, 1.0f, 1.0f);
        gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glEnableClientState(GL10.GL_COLOR_ARRAY);
        gl.glEnable(GL10.GL_DEPTH_TEST);
    }

    public void onSurfaceChanged(GL10 gl, int w, int h)
    {
        gl.glViewport(0, 0, w, h);
    }

    public void onDrawFrame(GL10 gl)
    {
        gl.glClearColor(0f, 0f, 0f, 1.0f);
        gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
        gl.glPushMatrix();
        gl.glRotatef(angleZ, 0f, 0f, 1f);
        model.draw(gl);
        gl.glPopMatrix();
        angleZ += 0.4f;
    }
}

This is standard OpenGL code; the main part is the DrawModel class, which we create and pass the XML parser to. The DrawModel class looks like this…

package com.bayninestudios.androidogremesh;

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import java.nio.ShortBuffer;

import javax.microedition.khronos.opengles.GL10;

import org.xmlpull.v1.XmlPullParserException;

import android.content.res.XmlResourceParser;
import android.util.Log;

public class DrawModel
{
    private FloatBuffer mVertexBuffer;
    private FloatBuffer mColorBuffer;
    private FloatBuffer mNormalBuffer;
    private ShortBuffer mIndexBuffer;
    private int faceCount = 0;

    public DrawModel(XmlResourceParser xrp)
    {
        float[] coords = null;
        float[] colcoords = null;
        short[] icoords = null;
        float[] ncoords = null;
        int vertexIndex = 0;
        int colorIndex = 0;
        int faceIndex = 0;
        int normalIndex = 0;
        try
        {
            while (xrp.getEventType() != XmlResourceParser.END_DOCUMENT)
            {
                if (xrp.getEventType() == XmlResourceParser.START_TAG)
                {
                    String s = xrp.getName();
                    if (s.equals("faces"))
                    {
                        int i = xrp.getAttributeIntValue(null, "count", 0);
                        // now we know how many faces, so we know how large the
                        // triangle array should be
                        faceCount = i * 3; // three vertexes per face v1,v2,v3
                        icoords = new short[faceCount];
                    }
                    if (s.equals("geometry"))
                    {
                        int i = xrp.getAttributeIntValue(null, "vertexcount", 0);
                        // now we know how many vertexes, so we know how large
                        // the vertex, normal, texture and color arrays should be
                        // three vertex attributes per vertex x,y,z
                        coords = new float[i * 3]; 
                        // three normal attributes per vertex x,y,z
                        ncoords = new float[i * 3];
                        // four color attributes per vertex r,g,b,a
                        colcoords = new float[i * 4];
                    }
                    if (s.equals("position"))
                    {
                        float x = xrp.getAttributeFloatValue(null, "x", 0);
                        float y = xrp.getAttributeFloatValue(null, "y", 0);
                        float z = xrp.getAttributeFloatValue(null, "z", 0);
                        if (coords != null)
                        {
                            coords[vertexIndex++] = x;
                            coords[vertexIndex++] = y;
                            coords[vertexIndex++] = z;
                        }
                    }
                    if (s.equals("normal"))
                    {
                        float x = xrp.getAttributeFloatValue(null, "x", 0);
                        float y = xrp.getAttributeFloatValue(null, "y", 0);
                        float z = xrp.getAttributeFloatValue(null, "z", 0);
                        if (ncoords != null)
                        {
                            ncoords[normalIndex++] = x;
                            ncoords[normalIndex++] = y;
                            ncoords[normalIndex++] = z;
                        }
                    }
                    if (s.equals("face"))
                    {
                        short v1 = (short) xrp.getAttributeIntValue(null, "v1", 0);
                        short v2 = (short) xrp.getAttributeIntValue(null, "v2", 0);
                        short v3 = (short) xrp.getAttributeIntValue(null, "v3", 0);
                        if (icoords != null)
                        {
                            icoords[faceIndex++] = v1;
                            icoords[faceIndex++] = v2;
                            icoords[faceIndex++] = v3;
                        }
                    }
                    if (s.equals("colour_diffuse"))
                    {
                        String colorVal = xrp.getAttributeValue(null, "value");
                        if (colcoords != null && colorVal != null)
                        {
                            String[] colorVals = colorVal.split(" ");
                            colcoords[colorIndex++] = Float.parseFloat(colorVals[0]);
                            colcoords[colorIndex++] = Float.parseFloat(colorVals[1]);
                            colcoords[colorIndex++] = Float.parseFloat(colorVals[2]);
                            colcoords[colorIndex++] = 1f;
                        }
                    }
                }
                // nothing to do for END_TAG or TEXT events
                xrp.next();
            }
            xrp.close();
        }
        catch (XmlPullParserException xppe)
        {
            Log.e("TAG", "Failure of .getEventType or .next, probably bad file format");
            xppe.printStackTrace();
        }
        catch (IOException ioe)
        {
            Log.e("TAG", "Unable to read resource file");
            ioe.printStackTrace();
        }
        mVertexBuffer = makeFloatBuffer(coords);
        mColorBuffer = makeFloatBuffer(colcoords);
        mNormalBuffer = makeFloatBuffer(ncoords);
        mIndexBuffer = makeShortBuffer(icoords);
    }

    private FloatBuffer makeFloatBuffer(float[] arr)
    {
        ByteBuffer bb = ByteBuffer.allocateDirect(arr.length * 4);
        bb.order(ByteOrder.nativeOrder());
        FloatBuffer fb = bb.asFloatBuffer();
        fb.put(arr);
        fb.position(0);
        return fb;
    }

    private ShortBuffer makeShortBuffer(short[] arr)
    {
        // shorts are 2 bytes each, not 4
        ByteBuffer bb = ByteBuffer.allocateDirect(arr.length * 2);
        bb.order(ByteOrder.nativeOrder());
        ShortBuffer ib = bb.asShortBuffer();
        ib.put(arr);
        ib.position(0);
        return ib;
    }

    public void draw(GL10 gl)
    {
        gl.glFrontFace(GL10.GL_CCW);
        gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mVertexBuffer);
        gl.glColorPointer(4, GL10.GL_FLOAT, 0, mColorBuffer);
        gl.glNormalPointer(GL10.GL_FLOAT, 0, mNormalBuffer);
        gl.glDrawElements(GL10.GL_TRIANGLES, faceCount, GL10.GL_UNSIGNED_SHORT, mIndexBuffer);
    }
}

First, the faces tag tells us how many faces are in the model, so I can create an array of size faces * 3 (three vertex indices per face) to store the data. The geometry tag tells us how many vertices are in the model, which is used to initialize the vertex, normal, and color arrays. Faces are simply triples of vertex indices; they are stored in the index buffer and used in the glDrawElements call. Each vertex has a position x,y,z, a normal x,y,z, and a colour_diffuse r,g,b,a. There is also a texcoord u,v, which is not used here to keep the code simpler; I plan to write about using textures in a later post.
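For reference, here is the rough shape of the exported XML that DrawModel walks. This is abbreviated and the attribute values are illustrative; only the tags the parser actually reads (faces, face, geometry, position, normal, colour_diffuse) are shown.

```xml
<mesh>
    <submeshes>
        <submesh usesharedvertices="false">
            <faces count="12">
                <face v1="0" v2="1" v3="2"/>
                <!-- eleven more face elements -->
            </faces>
            <geometry vertexcount="8">
                <vertexbuffer positions="true" normals="true" colours_diffuse="true">
                    <vertex>
                        <position x="1.0" y="1.0" z="-1.0"/>
                        <normal x="0.577" y="0.577" z="-0.577"/>
                        <colour_diffuse value="1.0 0.0 0.0"/>
                    </vertex>
                    <!-- seven more vertex elements -->
                </vertexbuffer>
            </geometry>
        </submesh>
    </submeshes>
</mesh>
```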

Click here to download the entire Android project. I’m not putting this one on the Android Market; I don’t think it needs another demo app. Hopefully this all makes sense and someone finds it useful. Leave a comment or email me if you have any questions or comments.

Part two of this series, on using textures, is here.