
3-D Game Development On JSR-184

Forum Nokia

Version 1.0; March 22, 2004



Copyright © 2004 Nokia Corporation. All rights reserved.

Nokia and Nokia Connecting People are registered trademarks of Nokia Corporation. Java and all Java-based marks are trademarks or registered trademarks of Sun Microsystems, Inc. Other product and company names mentioned herein may be trademarks or trade names of their respective owners.

Disclaimer

The information in this document is provided “as is,” with no warranties whatsoever, including any warranty of merchantability, fitness for any particular purpose, or any warranty otherwise arising out of any proposal, specification, or sample. Furthermore, information provided in this document is preliminary, and may be changed substantially prior to final release. This document is provided for informational purposes only.

Nokia Corporation disclaims all liability, including liability for infringement of any proprietary rights, relating to implementation of information presented in this document. Nokia Corporation does not warrant or represent that such use will not infringe such rights.

Nokia Corporation retains the right to make changes to this specification at any time, without notice.

License

A license is hereby granted to download and print a copy of this specification for personal use only. No other license to any other intellectual property rights is granted herein.


Contents

1 Introduction
2 Basics
  2.1 Scene Graph
  2.2 World
  2.3 Loader
  2.4 Object3D
  2.5 AnimationTrack, AnimationController, KeyframeSequence
  2.6 Graphics3D
  2.7 Background
3 Transformation and Nodes
  3.1 Transformation
  3.2 Node
  3.3 Group
  3.4 Camera
  3.5 Light
  3.6 Mesh
  3.7 VertexBuffer, VertexArray, IndexBuffer, TriangleStripArray
4 Mesh Surface Properties
  4.1 Appearance
  4.2 Material
  4.3 PolygonMode
  4.4 Fog
  4.5 CompositingMode
  4.6 Image2D
  4.7 Texture2D
  4.8 Texture Blending and Multitexturing
5 Special Effects and Features
  5.1 Sprites
  5.2 Morphing
  5.3 Skinning
  5.4 RayIntersection
6 Final Thoughts


Change History

March 22, 2004: Version 1.0, initial document release


1 Introduction

The purpose of this document is to give the reader an introduction to the Mobile 3-D Graphics API (JSR-184) from a game development point of view. Some general knowledge of 3-D graphics and MIDP programming is recommended before reading this document. This document is not meant to be a full, detailed description of the API; rather, it serves as a guide to the features the API provides and gives some examples of how they can be used.

The JSR-184 API is designed to bring 3-D graphics capabilities within the reach of mobile devices that support Java technology. The API is designed to be very lightweight; implementations are expected to fit the entire API into roughly 150 kilobytes.

The ant demo, to be distributed with this document, is a simple example of how to use the API together with modeling and authoring tools. Although it is not a playable game, it demonstrates the same kind of content loading, modification, and procedures that are used in game content creation.

Most of the classes in the JSR-184 API are used in the demo in some form, but not all of them are visible in the actual source code. Many of the classes are brought in by adding special content types at the authoring stage (which will probably be the case in real-life situations too), so the objects are created on the fly by the API's file loader.

Various tools can be used; for this demo, Discreet's 3D Studio Max was used for the 3-D models and animation, and Superscape's Swerve authoring tool for creating the .m3g content. Although it is possible to create the content with code alone, this is hardly ever recommended: generating complex shapes and surface types by hand can turn out to be very time consuming. In addition, having decent tools available for the game content helps bring the quality up to a higher level.


2 Basics

The API can be accessed in two different modes: immediate mode and retained mode. In immediate mode, the behavior is comparable to the low-level functionality of OpenGL and similar 3-D APIs; it allows the user to define nearly every detail of the rendering process by hand if needed. Retained mode completely hides all the low-level functionality from the user, allowing animated 3-D content to be loaded and displayed with a few simple lines of code.

These modes mainly affect the internal rendering pipeline of the implementation, so their difference from the user side is not very dramatic. It’s possible to load, display, and modify 3-D graphics content very efficiently in both modes. The format of the 3-D data files supported by the API is .m3g, which is a very compact but still a versatile format, allowing data of nearly any scale to be stored within a single file.

2.1 Scene Graph

A tree structure in which each leaf defines some kind of physical or abstract item in a three-dimensional world (such as a camera, a light, or a mesh) is commonly called a scene graph. Some scene graphs may contain other properties, such as materials or some application-dependent metadata.

In JSR-184, the scene graph can contain any classes extended from javax.microedition.m3g.Object3D. The “placeholder” for the scene graph is a javax.microedition.m3g.World object. It is possible to load the whole scene graph from a single .m3g file. The loaded world can be directly rendered to the screen (in retained mode) or if the file contains something else, such as a single mesh, it can also be used manually (in immediate mode).

In applications where the number of objects on the screen is relatively static, the scene graph approach is usually the best way to keep things simple and organized. In this way, most of the content can be added to the scene at the authoring stage and a lot of unnecessary code writing hassle can be avoided.

To keep the scene graph memory requirements down, component data can be shared between objects. Nodes can, however, only belong to one group. This means, for example, that vertex buffers or surface appearances may be shared between meshes, but a mesh cannot belong to more than one group. In addition, cross-references or loops between two or more components are not allowed.

2.2 World

The World class (javax.microedition.m3g.World) provides a convenient way to maintain all the information of a 3-D scene (the complete scene graph) in one neat package. One World can theoretically contain any number of meshes, cameras, lights, etc., each of which can have several animated parameters.

The World can be thought of as the topmost node of a scene graph. Unlike a regular node, the World cannot have a parent node, and its transformation is ignored during the rendering process.

For example, Swerve automatically adds a separate RootNode group to the top of the scene graph, just below the World node, so that the whole scene can be translated or scaled even though the World node itself doesn't allow this.

A new World can be created from scratch and new nodes can be attached to it. However, usually it’s easier to simply include some or all of the necessary nodes into the .m3g package, load it using the javax.microedition.m3g.Loader class, and modify the contents afterwards.
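If a World is built from scratch in code, a minimal sketch could look like the following (the camera parameters are illustrative, and getWidth()/getHeight() assume the code lives in a Canvas):

    // Minimal sketch: assembling a World by hand (values are illustrative).
    World world = new World();

    Camera camera = new Camera();
    camera.setPerspective(60.0f,         // field of view in degrees
            (float) getWidth() / (float) getHeight(),  // aspect ratio
            0.1f, 100.0f);               // near and far clipping distances
    world.addChild(camera);
    world.setActiveCamera(camera);

    world.addChild(new Light());  // a default light so lit materials show up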


Just like the World object itself, the .m3g file format can contain anything from a simple 3-D mesh to a fully-featured, animated 3-D scene with several cameras, lights, material definitions for meshes, skinning, and morphing. The World can also have its own static background image, defined by the javax.microedition.m3g.Background class.

In the ant demo example, only whole Worlds are loaded from .m3g files, since this is the most convenient way of bringing data into the application for scenes using simple animation. The scenes in the demo do not contain a lot of material that needs to be dynamically modified. In those parts where adding/removing objects was required, only the required parts of the scenes were modified with code. This approach is very good for applications that display a static amount of content on the screen, or where the content has to be modified only in small detail.

Merging nodes and other data between multiple Worlds is possible, but at the time of writing it seems to require a significant amount of memory.

Figure 1 is a screenshot from the Swerve spy window in 3D Studio Max, showing the topmost nodes of the scene graph of the ant demo's tunnel scene.

Figure 1: Topmost nodes of the scene graph of the ant demo tunnel scene

During the load process, the Loader object unpacks the contents of the .m3g file and constructs all the necessary Java class objects during run time. It creates the animation controllers, sets parent and grouping information, initializes the lights and cameras, etc., thus efficiently wrapping all this work behind a single load command.

As you can see, even a relatively simple scene with only a few animated objects can turn into quite a complex scene graph. Removing all unnecessary data from the scenes with the authoring tools is extremely important, because every object, no matter how small it might look, requires a certain amount of memory, which is very limited in mobile Java devices.

Whenever the rendering mode is switched from immediate mode to retained mode, the cameras, lights, and background defined for the World object are used instead of the ones defined for Graphics3D. A default rendering camera can be selected for the World, and it can be changed or queried later if needed. At rendering time, the active camera must be a descendant of the world.


2.3 Loader

The Loader (javax.microedition.m3g.Loader) class, implemented by the underlying platform, is designed for streaming .m3g content so that users don't have to design their own file formats or write complex and space-consuming loader code (although this is possible if it is specifically needed for some purpose). When an .m3g file is loaded, all the classes extended from Object3D are automatically deserialized by the Loader, and an array of the deserialized objects is returned.

In the example, this has been done as follows:

    Object3D[] o = null;
    try {
        o = Loader.load(name);
    } catch (Exception e) {
    }

This Object3D[] can then be typecast to World as follows:

World loadedWorld = (World) o[0];

Loading data directly from an .m3g file is the most convenient way of bringing 3-D content into the application. This can, however, consume a lot of memory, especially if each single object is separately animated, so content can also be created and animated manually. In the example, some random snowflakes have been created in the first scene manually, and they have also been animated with code, because the animation data would have taken a noticeable amount of memory.

Figure 2: Random snowflakes in the first scene of the example

2.4 Object3D

Almost all classes in the .m3g library (except Loader, Transform, RayIntersection, and Graphics3D) are extended from javax.microedition.m3g.Object3D. The Object3D class provides a basic set of functionality common to all the classes, such as serialization/deserialization of the objects, assignable animation controllers, user-defined IDs and parameters, and duplication methods.

Using these methods, any class derived from Object3D can be easily animated with assignable controllers, streamed from .m3g files, cloned, and given user-defined parameters and identification codes (either manually with code or at the authoring stage). This information can be used to identify and find objects in a scene.

In the demo, different objects were needed from different scenes, such as the 2-D image of the ant in the snowflake Nokia scene, the tunnel mesh's appearance object for surface modification, or the ant's head for camera targeting purposes. The IDs and all the animation controllers were applied to the objects at the authoring stage, using 3D Studio Max and the Superscape Swerve authoring tool.

These IDs are used to find different items from the scene graph, and to modify their properties as follows:

    World nokia = m_scenes[SCENE_NOKIA];
    Mesh m = (Mesh) nokia.find(UID_NOKIA_ANTIMAGE);
    m.setRenderingEnable(false);

Note that the Mesh class is extended from the Node class, and the setRenderingEnable method is defined in Node, not Object3D.

UID_NOKIA_ANTIMAGE is the ID of the mesh consisting of two polygons, textured with an image of the ant standing on the ice field. The ID is defined by Swerve by enabling the "Expose to API" flag for the mesh, found in Swerve's local properties. The ID can be read directly from the Swerve studio spy's property window and copied into the code:

private static final int UID_NOKIA_ANTIMAGE = 47364324;

Figure 3: User ID in Swerve studio spy’s property window

In Figure 3, you can also see how several animation tracks, controllers, keyframe sequences, and other properties are defined for objects.

2.5 AnimationTrack, AnimationController, KeyframeSequence

Objects and any of their animatable parameters (such as light intensity, camera zoom / fov, object visibility, bones, etc.) can be set to follow a predefined keyframe sequence, which can then be animated using the animate() method found in Object3D class.

The javax.microedition.m3g.KeyframeSequence class defines a set of floating-point key values, their respective key times, and the interpolation type used to calculate the actual value at a selected time. Each KeyframeSequence can be set to animate either a certain part of the timeline, or the sequence can be defined to loop forever. It is even possible to have multiple keyframes defined for the same time position to allow camera cuts or other discontinuities in animation paths.

The actual value of a keyframe sequence at a selected time can be obtained with interpolation methods (such as linear or Catmull-Rom spline interpolation) or by choosing a stepped mode where each key value is kept constant until a new value is reached (in a "sample and hold" fashion).

In the example demo, both linear and spline interpolation are used. The different interpolation types can be selected in Swerve by assigning different animation controller types to objects in 3ds max (such as linear or Bezier float controller).

Figure 4: Interpolation type

At the time the demo was written, there were some difficulties in converting 3ds max Bezier controllers to .m3g scenes. Extra keyframes had to be added to get rid of some overshoot in the sharpest curves, apparently because the Bezier parameters cannot be directly translated between 3ds max and JSR-184 implementations.

A keyframe sequence can be targeted to animate a selected property in a scene by associating it to a parameter with javax.microedition.m3g.AnimationTrack. AnimationTrack defines a set of animatable parameters, such as colors, visibility, scale, rotation, and translation. For the full list of available parameters, see the API documentation.

The animation speed, key positions, and possible weight blending of several animation tracks are controlled by the javax.microedition.m3g.AnimationController class, which can be attached to a number of animation tracks.
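As a rough sketch of how these three classes fit together when the animation is set up in code instead of at the authoring stage (the keyframe times and values are invented for illustration):

    // Sketch: a looping translation animation. Times are in milliseconds,
    // with three components (XYZ) per keyframe.
    KeyframeSequence seq = new KeyframeSequence(3, 3, KeyframeSequence.LINEAR);
    seq.setKeyframe(0, 0,    new float[] { 0.0f, 0.0f, 0.0f });
    seq.setKeyframe(1, 500,  new float[] { 0.0f, 2.0f, 0.0f });
    seq.setKeyframe(2, 1000, new float[] { 0.0f, 0.0f, 0.0f });
    seq.setDuration(1000);
    seq.setRepeatMode(KeyframeSequence.LOOP);

    AnimationTrack track = new AnimationTrack(seq, AnimationTrack.TRANSLATION);
    track.setController(new AnimationController());
    node.addAnimationTrack(track);  // node: any Object3D in the scene

    // Each frame, advance the whole scene: world.animate(m_currentTime);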

2.6 Graphics3D

javax.microedition.m3g.Graphics3D works as a rendering context for 3-D graphics, just as the normal Graphics class does for 2-D drawing in Java MIDP applications.

The whole World or any node(s) of the scene can be rendered by requesting the Graphics3D to render them with a specified camera. The rendering camera can be selected manually or the default camera for the scene can be used.

Before rendering, the Graphics3D context must be bound to the MIDP Graphics object used to draw the rest of the viewport, and it must be freed after the rendering is done to flush the resulting image.


    // bind to graphics
    g3d.bindTarget(g);
    // get currently displayed World
    World scn = m_scenes[m_currentScene];
    scn.animate(m_currentTime);
    // render the scene
    g3d.render(scn);
    // release g3d
    g3d.releaseTarget();

There are four different rendering modes in which the API can operate. The first mode is used when the whole scene graph is rendered (retained mode). In this mode, the active rendering camera and lights are set by the World that is going to be rendered.

There can also be an active camera and a group of active light(s) in Graphics3D. These are used in the remaining three rendering modes called the immediate modes.

The second mode is used for rendering selected scene graph nodes, including group nodes. The third and fourth modes are used for rendering a single submesh. These modes can be very useful for various graphical tricks, such as rendering onto texture. In addition, applications that use some kind of visibility optimization algorithms (such as portal or exit rendering) or level-of-detail in objects may benefit from rendering the meshes with these methods.
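For example, a single submesh could be rendered in immediate mode roughly as follows (the camera and the two Transform objects are assumed to be set up elsewhere):

    // Sketch: immediate-mode rendering of submesh 0 of a mesh.
    g3d.setCamera(camera, cameraToWorld);  // active camera for immediate mode
    g3d.render(mesh.getVertexBuffer(),
               mesh.getIndexBuffer(0),
               mesh.getAppearance(0),
               meshToWorld);               // node-to-world transform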

It is possible to define rendering quality hints in Graphics3D. These hints modify “image quality vs. speed” type rendering options, such as full scene anti-aliasing (used to smooth pixelation of polygon edges), dithering (used to enhance perceived image quality in low-color displays) and true color rendering (used to allow rendering with higher bit depth than what may be supported by the device display).

It should be noted that these flags are merely hints, so the underlying platform is not required to actually honor them. In addition, using some of these flags, especially the anti-aliasing flag, may result in significant drops in rendering speed.

Supported platform features and limits, such as viewport sizes, maximum texture sizes, light counts, and the maximum texture unit count, can be queried with the getProperties() method. The limits should always be respected, because exceeding the values returned by this method will probably result in some kind of erratic behavior: if not a crash, then an incorrectly rendered image.

Some features may be supported by default by the underlying platform, such as display dithering or 24-bit rendering. In this case, if the result of the getProperties() query is false, the feature should not be switched on manually.
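A sketch of such a query, using two of the property keys defined by the specification:

    // Sketch: querying implementation limits before relying on them.
    java.util.Hashtable props = Graphics3D.getProperties();
    int maxTextureSize = ((Integer) props.get("maxTextureDimension")).intValue();
    int maxLights = ((Integer) props.get("maxLights")).intValue();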

A static image backdrop and buffer clearing options for the Graphics3D context can be defined with the javax.microedition.m3g.Background class. The background settings will be applied to the current Graphics3D context by calling either the clear() or render(World) method.

2.7 Background

The Background class (javax.microedition.m3g.Background) defines how the rendering buffers [1] should be cleared, if they're going to be cleared at all. The Background also supports adjustable clipping rectangles for rendering just a portion of the visible image.

[1] For more information on the different rendering buffer types, please refer to the OpenGL documentation.


It's possible to set whether the frame buffer is cleared between frames with a single color or a 2-D image (which can also be wrapped in x/y by using the REPEAT flag), or whether it's not cleared at all, by setting the setColorClearEnable() flag to false. It's also possible to define whether the depth buffer should be cleared, with the setDepthClearEnable() flag.

If a 2-D image is used, it must be in RGB or RGBA format. It should also be noted that the image format must be the same as the format of the current rendering target.

The state of all these above-mentioned hint flags can be queried with similar methods.
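As a sketch, a typical backdrop setup might look like this (backgroundImage is assumed to be an RGB Image2D matching the rendering target format):

    // Sketch: a wrapping image backdrop for a World.
    Background bg = new Background();
    bg.setImage(backgroundImage);
    bg.setImageMode(Background.REPEAT, Background.REPEAT);  // wrap in x and y
    bg.setColorClearEnable(true);
    bg.setDepthClearEnable(true);
    world.setBackground(bg);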


3 Transformation and Nodes

3.1 Transformation

The abstract class javax.microedition.m3g.Transformable defines the translation, scale, rotation, and free transformation matrix for a node (javax.microedition.m3g.Node) or a texture (javax.microedition.m3g.Texture2D). The transformation of a point p = (x, y, z, w), representing either a vertex coordinate or a texture coordinate, is defined relative to the parent node's coordinate system as follows:

p' = T R S M p

where T stands for translation (XYZ position), R for rotation (orientation), S for scale, and M for a generic 4x4 homogeneous matrix.

The Transformable class defines methods for setting all these components individually (such as setTranslation(), setScale(), etc.), and the composite transform used by the renderer for the actual transformation can be retrieved with getCompositeTransform(). The values can either be set directly, or an existing transformation can be further rotated/translated/scaled from its current values.
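A minimal sketch of setting the components individually (the values are illustrative):

    // Sketch: composing a node transform component by component.
    node.setTranslation(0.0f, 1.5f, -4.0f);  // T
    node.setOrientation(45.0f, 0f, 1f, 0f);  // R: 45 degrees about the Y axis
    node.setScale(2.0f, 2.0f, 2.0f);         // S
    Transform composite = new Transform();
    node.getCompositeTransform(composite);   // composite now holds T R S M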

3.2 Node

The Node class (javax.microedition.m3g.Node), extended from the abstract base class javax.microedition.m3g.Transformable, is an abstract class representing all possible node types in a scene graph. Lights, Cameras, Meshes, Sprite3Ds, and Groups are all different node types.

A Node defines a local coordinate system, which can be transformed relative to the parent coordinate system. It should be noted that nodes cannot be shared between different Worlds or Groups, due to restrictions in javax.microedition.m3g.Group. The node can be aligned, or “targeted to” a reference node, in which case, after alignment, the rotation matrix R is replaced with alignment matrix A as follows:

p' = T A S M p

Two reference nodes can be picked either with the setAlignment() method before align() is called, or the reference node can be passed in as a parameter to the align() method.

Alignment forces cameras or lights, for example, to be targeted at the selected node. It can also be used for billboard-type objects that automatically face the camera. Note that with flat 2-D billboard images it's often better to use the Sprite3D class instead of alignment, because a sprite is rendered as 2-D geometry, which usually makes it faster than true polygonal billboards.

Another useful property for a node is scope. Scopes can be used for a variety of purposes, but one of the most useful features is using them for visibility culling purposes. The scope of the node does not have any direct relationship with the group the node possibly belongs to; it’s just a different form of grouping things.

Let's say that there is a large game world which is divided into several parts, none of which is visible to/from the other parts of the world. You can set a different scope mask for each part of the world, and modify the camera scope so that it corresponds to the part of the world the player/camera is currently in. If the bitwise AND of the camera's and a node's scope masks is zero, the node is not rendered, which can result in significant savings in precious CPU cycles.


Another useful application for node scopes is speeding up lighting calculations. Usually in game environments, the lights only have a certain range of influence (ROI), determined by the light type and intensity. By setting different scopes for lights and meshes according to their distance, etc., you can define whether a light affects a mesh or not. This allows using a lot more lights in a scene without a significant CPU penalty.
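A sketch of the zone idea described above, with invented scope mask values (zoneA and zoneB are assumed to be Groups holding the meshes of each world part):

    private static final int SCOPE_ZONE_A = 1 << 0;
    private static final int SCOPE_ZONE_B = 1 << 1;

    zoneA.setScope(SCOPE_ZONE_A);
    zoneB.setScope(SCOPE_ZONE_B);
    camera.setScope(SCOPE_ZONE_A);      // only zone A is rendered
    zoneALight.setScope(SCOPE_ZONE_A);  // the light only affects zone A meshes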

3.3 Group

A bunch of nodes can be grouped together for easier manipulation by using javax.microedition.m3g.Group.

Grouping is useful in situations where control over several objects is needed at once, or at points where parent hierarchies are required. A common example of group use is a car with a cabin and four wheels. By defining the car as a group, it’s possible to move the whole car without having to move each wheel and the cabin separately.

Groups can also be used for toggling the visibility of a larger number of objects at once. This may be useful for hiding/showing special effects, such as explosions, or it can be used effectively in combination with visibility culling systems for hiding several child nodes if the parent Group is not visible.
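For example, hiding a whole effect hierarchy could be sketched like this (UID_EXPLOSION is an invented user ID):

    // Sketch: hiding an entire effect group with one call.
    Group explosion = (Group) world.find(UID_EXPLOSION);
    explosion.setRenderingEnable(false);  // hides all child nodes at once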

3.4 Camera

Camera (javax.microedition.m3g.Camera) is a class that transforms coordinates from 3-D space to screen space (camera space to clip space). The camera uses OpenGL-compliant clipping and projection, with the exception of user-defined clipping planes.

It’s possible to define multiple cameras both in the immediate mode and the retained mode. Theoretically any number of cameras is allowed, which means that it’s possible to create cool camera paths and different viewing angles, as well as special effects by rotating and tweaking the camera parameters.

The camera used for rendering can be picked separately, it can be targeted to objects (by picking an alignment node), and its view frustum parameters (field-of-view angles and clipping distances) can be modified.

In the ant demo’s last scene, four different cameras have been used, each of which can be selected and zoomed individually using keys 1-4 and up/down arrows.

Each camera is targeted to the ant standing on the island.

    Node n = (Node) pond.find(UID_POND_ANTHEAD);
    c.setAlignment(n, Node.ORIGIN, pond, Node.Z_AXIS);
    c.scale(-1, 1, -1);

One thing in the demo that may cause confusion is that the alignment “up” axis is chosen to be the Z-axis instead of the perhaps more common Y-axis. This is because the scenes are exported from 3D Studio Max, which has its world-space defined so that Z points up instead of Y. The camera is facing towards its negative Z-axis.


Figure 5: Last scene in the ant demo

3.5 Light

The JSR-184 specification supports four different light types, each with a different computational complexity. The lighting equation complies with the OpenGL standard.

The light types are:

• Ambient light defines the general intensity of objects in a scene. Ambient light lights the whole scene with the same amount of light, so its position and direction are ignored during the calculations, making it very CPU-friendly to calculate.

• Directional light defines only the direction the light is coming from. Its position or distance from the objects has no effect, although it can be freely positioned in the scene. Directional light is good for simulating distant light sources, such as the sun, and it should cause a bit less CPU load than omni or spot lights. In Figure 6, you can see that although the light is located between the two objects, the light still comes from the bottom left corner for both objects. An ambient light with 25% intensity has also been added to the scene.

Figure 6: Directional light

• Omni light defines a point light source. Omni light affects objects in all directions around it. An attenuation curve can be set for the light, so that it loses intensity when the distance between the light and target surface grows. Two attenuation curve adjustment parameters are available: linear and quadratic falloff.


Figure 7: Omni light

• Spot light defines light direction, position and a spot radius. A cone pointing towards the negative Z-axis of the light’s coordinate system defines the lit area. Spot light does not affect objects that fall outside its cone angle, as you can see from Figure 8, and it’s computationally more expensive than omni light. For spot light, it’s also possible to define a spot exponent which affects the sharpness of the “bright area” of the spot light.

Figure 8: Spot light

Light calculations may require a significant amount of CPU power in large scenes, especially if multiple lights are used. It is therefore recommended to reduce the computational load by choosing the correct light types, using scopes, or even avoiding lights completely (at the expense of texture memory) by rendering the lighting into the textures whenever possible.

Each light has an RGB color and intensity value but the exact way the light affects a rendered surface is defined by the Material of the surface. A wide variety of different surface emulations and special effects can be created by varying the light parameters together with the material parameters. It’s also possible to define interesting effects by specifying negative intensities for lights, which causes the light to darken the surface instead of brightening it.
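As an illustration, a spot light could be created in code roughly as follows (all parameter values are invented):

    // Sketch: a spot light with quadratic distance attenuation.
    Light spot = new Light();
    spot.setMode(Light.SPOT);
    spot.setColor(0xFFFFFF);                 // white
    spot.setIntensity(1.2f);
    spot.setSpotAngle(30.0f);                // cone half-angle in degrees
    spot.setSpotExponent(10.0f);             // sharpness of the bright area
    spot.setAttenuation(1.0f, 0.0f, 0.02f);  // constant, linear, quadratic
    world.addChild(spot);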

3.6 Mesh

A Mesh is basically a bunch of XYZ coordinate positions (“vertices”) followed by a definition of which of these positions are connected to form the surface of the 3-D model (“polygons”) and a description of what kind of materials should be used for these polygons (“surfaces”).


javax.microedition.m3g.Mesh is a class encapsulating a vertex buffer, index buffer(s), and possible appearance class(es) that together define the actual 3-D model and the surface parameters for it.

In its simplest form, a mesh must contain at least three vertices and one triangle polygon before it can be rendered. In practice this means that for the Mesh class, at least one VertexArray must be defined, and one valid implementation of IndexBuffer must be present.

The polygons can theoretically be of any shape and have any number of vertices each, but in practice the only currently available implementation of the IndexBuffer class (called javax.microedition.m3g.TriangleStripArray) requires the polygons to be split into an array of triangles. In practice this is not a problem because most 3-D modeling packages export the models as triangles anyway.

The mesh is a normal scene graph node, so it can be transformed, hidden/shown, and its alpha (transparency) value and scope can be modified just like any other node.

The basic mesh implementation is a rigid body mesh, which means that its vertices cannot be animated, except by directly overwriting the data in the mesh’s vertex buffer before rendering a frame.

Multiple submeshes can be defined inside a mesh, each with its own appearance, which means that one mesh can have several different surface types. The submeshes share the same VertexBuffer.

The rendering order of z-buffered triangles has no effect on the result if only opaque polygons are used. When transparent surfaces are present, however, the opaque surfaces should always be rendered before the transparent ones to avoid artifacts caused by a wrong drawing order. In addition, in some cases the transparent surfaces should also be sorted in back-to-front order.

For this reason, some rules for the rendering order are defined in the Appearance class. A rendering layer number can be set for an appearance with the setLayer() method. Submeshes with a lower layer number are always rendered before those with a higher one, which allows manual definition of the rendering order inside the scene graph. In addition, within the same layer, all opaque polygons are always rendered before the transparent ones.

In this way, the possible mesh rendering artifacts can almost always be avoided.

3.7 VertexBuffer, VertexArray, IndexBuffer, TriangleStripArray

javax.microedition.m3g.VertexBuffer, javax.microedition.m3g.VertexArray and javax.microedition.m3g.TriangleStripArray form the basis of meshes.

VertexBuffer is a class which stores various types of vertex data (in the form of vertex arrays).

There are currently four types of vertex arrays defined:

1) Coordinate position array (defining the object-space XYZ coordinates): This array must always be present in a valid Mesh, as the Mesh wouldn't be renderable without this information.

2) Normals array: This array stores the vertex normals used for light calculations. This array must be present if lighting is used; otherwise, an exception is thrown.

3) Vertex color array: This array can be used for vertex coloring. For example, expensive radiosity lighting can be precalculated (“baked”) into vertex colors.


4) Texture coordinate array(s): Defines texture S and T coordinates for each active texture map.

Using these classes is required when objects are built directly in code. For example, the snowflakes in the ant demo’s first Nokia scene are created using these classes. The example mesh is a very simple quad polygon.

    // Creates a new vertex array for the coordinate positions. XYZ triplet
    // per vertex, 2 bytes each.
    flakeVertexArray = new VertexArray(flakeVertices.length / 3, 3, 2);
    // Copies the vertex coordinates from a predefined array (flakeVertices).
    flakeVertexArray.set(0, flakeVertices.length / 3, flakeVertices);

    // Creates another array for texture coordinates. UV per vertex,
    // 2 bytes each.
    flakeTextureArray = new VertexArray(flakeTexCoords.length / 2, 2, 2);
    // Copies the texture coordinates from flakeTexCoords.
    flakeTextureArray.set(0, flakeTexCoords.length / 2, flakeTexCoords);

    // Creates a new vertex buffer.
    flakeVertexBuffer = new VertexBuffer();
    // Assigns flakeVertexArray for the XYZ positions to the vertex buffer,
    // with a scale of 0.01.
    flakeVertexBuffer.setPositions(flakeVertexArray, 0.01f, null);
    // Assigns flakeTextureArray for the first texture unit, with scale 0.1.
    flakeVertexBuffer.setTexCoords(0, flakeTextureArray, 0.1f, null);

    // Creates the index buffer (polygon indices) using ascending order
    // (0,1,2,3...). flakeStripLengths defines only the strip lengths.
    flakeTriangles = new TriangleStripArray(0, flakeStripLengths);

Example 1: Example from the demo
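The buffers above still need to be wrapped into a Mesh node before they can be rendered. A minimal sketch of the remaining step (flakeMesh is not from the demo source; the default Appearance is used for brevity):

    // Sketch: combining the buffers into a renderable Mesh node.
    Mesh flakeMesh = new Mesh(flakeVertexBuffer, flakeTriangles, new Appearance());
    world.addChild(flakeMesh);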


4 Mesh Surface Properties

These classes define the surface appearance of the meshes, in combination with the lighting.

The demo uses several different surface types, defined at the authoring stage as well as modified or created manually in code. An example of modifying the appearance of a mesh loaded from an .m3g file (in the tunnel scene) on the fly is also included.

4.1 Appearance

The Appearance class defines the rendering attributes for a mesh or sprite. It encapsulates the Material, PolygonMode, CompositingMode, Fog, and texture maps, which together determine the final look of the mesh's surface on the screen. The Appearance class also defines the rendering layer, which in turn determines the rendering order of the surfaces.

By default, each mesh surface uses a default appearance, which doesn’t define any values for the surface properties. Simply put, if only the default appearance is used, no texturing, fogging, special pixel composition mode, lighting, or visibility culling will be applied to the polygons. The surfaces of the (sub)mesh can be given new appearances, and new fog, texturing, etc., parameters can be added to the appearances, in which case they automatically affect all the submeshes which use the modified appearance objects.

The rendering order is sorted internally by the implementation so that layers with lower layer number are always rendered before higher layer numbers. This applies to the rendering of Worlds, Groups or individual (sub)meshes. In addition, opaque layers are always placed before layers with transparency in the rendering pipeline.

Negative layer numbers may be useful for some special effects, such as lens reflections, halos or skyboxes, but they should be pretty much unnecessary in normal applications.
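A sketch of giving a submesh a new appearance, with a transparent surface pushed to a later rendering layer (the variable names are illustrative):

    // Sketch: a transparent appearance on rendering layer 1.
    Appearance app = new Appearance();
    app.setLayer(1);                        // drawn after layer-0 surfaces
    CompositingMode cm = new CompositingMode();
    cm.setBlending(CompositingMode.ALPHA);  // makes the surface transparent
    app.setCompositingMode(cm);
    mesh.setAppearance(0, app);             // apply to submesh 0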

4.2 Material

The javax.microedition.m3g.Material class defines how the surface reacts to lighting. The vertex coloring pipeline can be thought of as a backwards process starting with a check whether a material for the surface exists or not. If no material exists, a possible vertex buffer color array (or default color) is directly used for the vertex coloring. If a material exists, lighting affects the result, and vertex normals must be defined for the vertex buffer. Otherwise, an exception is thrown.

If vertex color tracking is not enabled, the material’s diffuse and ambient color components are used. If tracking is enabled, the vertex buffer color array (or the default color) defines the diffuse and ambient components, and light’s parameters define the highlight colors.

The Diffuse component defines the basic color of the surface. This parameter is ignored for ambient lights.

The Ambient component defines the "backlight," or directionless, color of the material. This value is multiplied by the ambient color component of the light. This parameter is ignored for directional and positional lights.

The Emissive component defines the emissive (default) color of the material which is present even if lights have zero intensity.

The Specular component defines the specular (or "highlight") color of shiny surfaces. This parameter is ignored for ambient lights.
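A sketch of a shiny material set up in code (the color values are invented):

    // Sketch: a shiny material without vertex color tracking.
    Material mat = new Material();
    mat.setColor(Material.DIFFUSE, 0xFF8040C0);   // ARGB values
    mat.setColor(Material.AMBIENT, 0xFF202020);
    mat.setColor(Material.SPECULAR, 0xFFFFFFFF);
    mat.setShininess(40.0f);                      // specular exponent
    mat.setVertexColorTrackingEnable(false);
    appearance.setMaterial(mat);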


Figure 9: Material and vertex coloring pipeline (figure borrowed directly from JSR-184 Java documentation)

4.3 PolygonMode

PolygonMode defines shading type, culling mode, two-sidedness (including lighting of two-sided polygons), polygon winding mode, perspective correction of textures, and camera lighting hint flags for the surface.

The shading type can be set to either flat or smooth (Gouraud) shading. In the flat shading mode, the whole polygon's color is determined by a single normal. In smooth shading, the colors of the vertices are calculated separately, and the result is interpolated across the polygon to get a per-pixel color.

Polygon culling mode sets whether the polygons are visibility culled by “front” side polygons or “back” side polygons, or if they are not visibility culled at all. What is “front” and what is “back” is defined by the winding mode.

If polygon vertices are defined in clockwise order, and the winding mode is set to be clockwise, the “front” side polygons are the ones which can be drawn to the screen in clockwise order. The winding mode can also be set to counterclockwise, and in this case the polygon vertex order check is reversed.

The perspective correction flag defines whether textured polygons should use perspective correction to avoid texture stretching on large polygons, especially those close to the camera. This does, however, slow down rendering by about 10-20%, depending on the implementation.

In Figure 10, the image on the left has perspective correction switched off. Due to the strong perspective and the low polygon count of the track, the texture on some of the closest track polygons and on the skyline is distorted very badly. Note that perspective correction has nearly no effect on small and distant polygons, so if level-of-detail objects are used, it can often be disabled completely for the smallest and most distant models.

Figure 10: The difference between perspective correction being switched off and on
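A typical PolygonMode setup for a closed, opaque mesh might be sketched as follows:

    // Sketch: smooth shading, back-face culling, counterclockwise winding,
    // and perspective correction (at the speed cost mentioned above).
    PolygonMode pm = new PolygonMode();
    pm.setShading(PolygonMode.SHADE_SMOOTH);
    pm.setCulling(PolygonMode.CULL_BACK);
    pm.setWinding(PolygonMode.WINDING_CCW);
    pm.setPerspectiveCorrectionEnable(true);
    appearance.setPolygonMode(pm);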

4.4 Fog

The javax.microedition.m3g.Fog class implements a distance-based fog effect which fades the vertex shading colors towards a user-defined RGB color as polygons move further away from the camera. The near and far distances for the fog effect can be set: the near distance defines where the fog starts to take effect, and at the far distance the fog is applied at full strength.

Two different distance calculation methods are available: linear and exponential. Linear fades the colors linearly between the near and far distances. This method is a little faster to calculate, but it doesn't necessarily produce results as realistic as the exponential method.

The exponential method allows the user to set a density factor which adjusts the thickness of the fog, and allows a bit better control of the fog’s falloff curve.
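A sketch of both variants (the distances and density are invented):

    // Sketch: linear fog fading to gray between 5 and 50 units.
    Fog fog = new Fog();
    fog.setMode(Fog.LINEAR);
    fog.setColor(0x808080);      // RGB fog color
    fog.setLinear(5.0f, 50.0f);  // near and far fog distances
    appearance.setFog(fog);

    // Exponential variant:
    // fog.setMode(Fog.EXPONENTIAL);
    // fog.setDensity(0.05f);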

4.5 CompositingMode

Polygons can be combined with the underlying pixels in several different ways.

The compositing modes are:

• Alpha: a weighted average between source pixel and underlying pixel intensities is calculated. The blending percent is determined for each pixel by interpolating the vertex alpha colors.

• Alpha_add: a weighted average between source pixel and underlying pixel is calculated using the vertex alpha colors. This average is then added to the underlying pixel intensity.

• Modulate: source pixel and destination pixel intensities are multiplied together.

• Modulate_x2: source pixel and destination pixel intensities are multiplied together. The result is multiplied by 2 and clamped to full intensity.

• Replace: the default (and fastest) mode. The underlying pixel is directly replaced with the source pixel.


Figure 11: The effects of different composition modes

A polygon is considered transparent if its compositing mode is something other than "replace". Transparency affects both the drawing order of the polygons and the rendering speed.

With CompositingMode, the depth buffer write flag can be enabled or disabled, and a depth buffer offset can be set. The depth offset allows "detail" type texture objects, such as bullet holes on a wall or track marks on a race track, to be drawn on top of polygons without rendering artifacts caused by insufficient depth buffer resolution. The depth buffer offset is constant for the whole polygon.

Figure 12 demonstrates the depth offset in practice. The image is from the top, and the Z-axis represents the depth.

If the red polygon is rendered first, and you try to render a smaller polygon on top of it (the blue polygon), some pixels may not be rendered because the depth buffer will hold exactly the same (or slightly different due to rounding errors) distance values for the overlapping pixels.

When depth offset is used, the depth buffer values are accessed as if the polygon was rendered at the green position, but the actual pixels are still drawn to the screen using the blue position.

Figure 12: The depth offset in practice

Color and alpha buffer write flags can also be toggled, in case the buffer writes are not needed for some additional rendering speed. The state of these flags can also be queried.
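A sketch of a decal-style setup combining these flags (the offset values are invented):

    // Sketch: a decal pulled slightly towards the camera in depth, with
    // depth writes disabled so it doesn't disturb the depth buffer.
    CompositingMode decal = new CompositingMode();
    decal.setBlending(CompositingMode.ALPHA);
    decal.setDepthOffset(-1.0f, -2.0f);  // factor and units, as in OpenGL
    decal.setDepthWriteEnable(false);
    decalAppearance.setCompositingMode(decal);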


4.6 Image2D

Image2D is a class representing a two-dimensional image. The image can be constructed either from a normal Java MIDP or AWT Image, or the raw image data can be passed in as a byte array.

Several image formats are available:

• alpha: only transparency information is specified. If the data is supplied as a byte array, one byte per pixel is required.

• luminance: only brightness information is specified. This type can be used for light mapping or other applications where only the brightness needs to be adjusted. 1 byte per pixel is required.

• luminance_alpha has both alpha and luminance channels defined. 2 bytes per pixel are required.

• rgb: basic r, g, and b color components are used. 3 bytes per pixel are required.

• rgba: r, g, b, and alpha channels are present. 4 bytes per pixel are required.

Two kinds of images are available: mutable and immutable. The contents of mutable images can be modified after construction. However, the implementation may apply specific optimizations to immutable images during loading, which may result in better rendering performance.
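A sketch of creating an immutable RGBA image from a PNG packaged in the MIDlet's JAR (the file name is an invented example):

    // Sketch: an immutable Image2D from a MIDP image.
    Image2D texImage = null;
    try {
        javax.microedition.lcdui.Image midpImage =
                javax.microedition.lcdui.Image.createImage("/texture.png");
        texImage = new Image2D(Image2D.RGBA, midpImage);
    } catch (java.io.IOException e) {
        // the image is missing from the JAR
    }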

4.7 Texture2D

The Texture2D class (javax.microedition.m3g.Texture2D) combines a 2-dimensional texture map image with information on how it should be applied to a submesh. This information includes texture image and texture pixel (also known as “texel”) filtering, texture coordinate transformation, and texture composition mode.

The texture dimensions supported by the implementations are always powers of two, but the width and height do not have to be the same. Maximum texture sizes of the supported platform can be queried with the getProperties() method in Graphics3D.

The on-screen texel positions are calculated by applying a transformation (similar to vertex transformation) to texture vertex coordinates (which must be defined in mesh’s VertexBuffer). The texture transformation matrix contains all the same properties (translation, rotation, scale, user matrix) as the vertex transformation.

After that, the transformed coordinates are interpolated for each pixel through the polygon. This interpolated result is then projected to the screen either with or without perspective correction. Whether perspective correction should be used for the submesh or not is defined in the PolygonMode class.

When the projected S or T coordinate falls between two texels, some kind of approximation of the texel color at that point is needed. For this, two different filtering modes are available; only the first one must be supported by the implementation.

The two available texel filtering modes are:

• nearest neighbor: the accurate position is rounded to nearest texel, which is directly used as such. Any fractional calculation information is simply ignored. This is the fastest (and default) filtering mode, but it may look blocky when texture pixels grow bigger than screen pixels. The polygon on the left in Figure 13 demonstrates this.

• linear filtering: uses a weighted average between four texels, using sub-texel position information for the weights. The polygon on the right in Figure 13 uses linear filtering (sometimes also called “bilinear filtering”).


Figure 13: The difference between the two filtering modes

When a single screen pixel covers multiple texels, scaled-down version(s) of the texture can be used for reducing texture aliasing caused by insufficient approximation of the pixel’s actual contents.

Mipmapping is a method in which prefiltered textures are used to avoid these pixelation errors, at the expense of memory.

The image on the left in Figure 14 has no mipmapping applied (it only contains one sample per pixel). The image on the right in Figure 14 uses mipmapped texture, which results in far better approximation of the texture near the horizon.

Figure 14: The difference between a non-mipmapped and mipmapped image

The mipmap filtering and texel filtering types can be set independently from each other with the setFiltering() method. The levelFilter parameter selects the mipmapping type for the texture as follows:

• base level filtering disables mipmapping completely, which saves both rendering speed and memory.

• nearest neighbor uses a single mipmap for each pixel.

• linear filtering interpolates between two mipmap levels according to the pixel distance. This in combination with linear texel filtering option is the same as the commonly used term “trilinear filtering” found in many graphics accelerator chips.

When nearest neighbor or linear filtering modes are set, mipmaps are generated automatically, usually by scaling the image repeatedly down to half size using a 2x2 box filter. Each new mipmap increases the memory requirements of the textures by 1/4 of the previous texture image size.
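Pulling the above together, a repeating, trilinear-filtered texture could be sketched like this (texImage is assumed to be a power-of-two Image2D):

    // Sketch: a repeating texture with trilinear filtering on unit 0.
    Texture2D tex = new Texture2D(texImage);
    tex.setWrapping(Texture2D.WRAP_REPEAT, Texture2D.WRAP_REPEAT);
    tex.setFiltering(Texture2D.FILTER_LINEAR,   // levelFilter: between mipmaps
                     Texture2D.FILTER_LINEAR);  // imageFilter: between texels
    tex.setBlending(Texture2D.FUNC_MODULATE);   // modulate the underlying color
    appearance.setTexture(0, tex);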

4.8 Texture Blending and Multitexturing

The filtered texture pixel can be combined with the underlying layer color in several ways. The first layer in the chain is always the interpolated vertex color. On top of that, several texture units can be added. Although it’s possible to define multiple texture layers, the platform only needs to support one.


The final result of the texture compositing depends on the texture image type and blending mode. For a full table of supported blending modes with different texture image types, see the JSR-184 Java documentation. The tunnel part in the ant demo demonstrates different texel compositing modes and multitexturing.


5 Special Effects and Features

5.1 Sprites

javax.microedition.m3g.Sprite3D defines a special node type which can be used to render flat, scaled or unscaled 2-D images very fast. Sprites are basically 2-D images that can use any cropped part of an Image2D, rendered at the origin of the sprite's node.

Sprites are useful for speeding up rendering when real 3-D geometry is not really needed (such as very distant objects). They are also useful for special effects, such as lens flares or particle clouds, smoke, or water bubbles.

Sprites have a constant z-buffer value (the distance of their origin), and only the Fog and CompositingMode properties of their Appearance are used. The particle explosion effect in Figure 15 is done using transparent 2-D sprites.

Figure 15: Particle explosion effect using transparent 2-D sprites

Unlike texture maps, sprite images can theoretically be of any size (not just powers of two). A cropping rectangle can be set so that any part of an image can be used. The implementation-dependent maximum sizes should, as usual, be queried using Graphics3D.
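A sketch of an unscaled sprite cropped from a larger image (the crop values and flareImage are invented):

    // Sketch: an unscaled 2-D sprite, e.g., for a lens flare.
    Sprite3D flare = new Sprite3D(false, flareImage, new Appearance());
    flare.setCrop(0, 0, 64, 64);  // use a 64x64 region of flareImage
    world.addChild(flare);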

5.2 Morphing

javax.microedition.m3g.MorphingMesh is similar to the plain Mesh, but it defines multiple vertex positions for a single set of polygons. The final vertex position is calculated by interpolating between the default position and the morph target position(s), which makes it possible to create facial expressions and other smooth deformations without storing a huge number of vertex positions.
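As a sketch, with one morph target, blending the base mesh halfway towards the target is a single call (morphMesh is an invented name):

    // Sketch: one weight per morph target; 0.5 blends halfway to target 0.
    morphMesh.setWeights(new float[] { 0.5f });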


Figure 16: Creating facial expressions with morphing

5.3 Skinning

Skinning is a term used a lot in current computer and console game programming tutorials. It provides a way to approximate how character skin and clothing reacts to the movement of “bones” inside the character. It allows using soft bends in character joints (such as elbows and knees) instead of ugly, blocky, and sharp angles and holes that used to cause problems in older renderer implementations without skinning support.

Take a careful look at Figure 17. You should see that the frog’s hands and legs bend rather smoothly without noticeable joints or sharp edges between the body parts. This can especially be seen in the lower left image, in the frog’s right foot toes.

Figure 17: The result of using skinning to create soft bends in character joints

Skinning works by assigning different bone weight values to object vertices (this mapping information is sometimes called a "weight map") and blending the rotation matrices of the bones with these weight values. This means that the final position of a vertex is determined by transforming the vertex according to several bones and calculating a weight-blended position between these values. This is of course a lot slower than calculating just a single transformation per vertex, so it's rather easy to bring the application to its knees by adding a lot of bones to a complex mesh.

Due to the large amount of data needed for even a rather simple skeleton (bone hierarchy and vertex weight values), it is not recommended to implement or animate bones and weight maps in code. The dedicated tools in modeling software speed up the process of setting things up considerably.

The weight values can usually be defined either automatically or by hand at the authoring stage. The actual implementation of the weight map generation depends on the modeling software used.

The frog model has dozens of bones defined, but to keep things simple, only the right foot bones are shown in Figure 18. The first bone moves the whole foot and the second bone naturally moves only the toes, so the bones have similar hierarchy structure as any other nodes would have.

Figure 18: The frog’s right foot bones

The javax.microedition.m3g.SkinnedMesh class defines a normal mesh and an additional child skeleton group that holds the transformation bones for the mesh. Each bone has a predefined “rest position”. If transformations of the bones are at rest position, the resulting mesh is exactly the same as the original source mesh.

In the ant demo, skinning has been used on the ant character in just one scene (where the ant leaps to the sky) because it consumes quite a lot of processing power and the ant character is so small that it’s hard to see whether skinning is actually used or not.

5.4 RayIntersection

The javax.microedition.m3g.RayIntersection class can be used for finding ray intersections in a scene graph. A ray can be shot from any point in an arbitrary direction, and if it intersects a mesh or a sprite, the intersected node, the vertex normal at the intersection point, the distance to the intersection point, and the texture ST coordinates can be queried.

RayIntersection has many uses, such as object picking, collision detection, and distance-based effects. For example, a growing lens flare or halo effect can be done by shooting multiple rays from lights towards the camera and finding out how many of the rays intersected something on the way.
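As a sketch, picking whatever a ray from the origin along the negative Z-axis hits first could look like this:

    // Sketch: a pick through the scene graph; -1 matches all scopes.
    RayIntersection ri = new RayIntersection();
    boolean hit = world.pick(-1,
                             0f, 0f, 0f,   // ray origin (x, y, z)
                             0f, 0f, -1f,  // ray direction
                             ri);
    if (hit) {
        Node picked = ri.getIntersected();
        float distance = ri.getDistance();
    }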


6 Final Thoughts

The JSR-184 API manages to combine ease of use with the possibility of accessing things in great detail when needed, which can only be a good thing from the developer's point of view.

To really get into development, it is recommended to get a decent modeling package, such as 3D Studio Max, and a set of authoring tools, such as SuperScape Swerve. Learning to do as much as possible in the authoring stage speeds up application development dramatically.

It would be possible to write hundreds of pages about the contents of the API alone. Fortunately, this has already been done: the JSR-184 Java documentation is a very in-depth and straightforward, although very technical, description of the contents of the API.

It's definitely recommended that you continue by reading it, because it explains things in far greater depth than this document, and it also serves as a handy reference for methods and classes. In addition, take a look at the ant demo source code and data files that accompany this document.

