Looking for a programmer

This is a forum for discussing the feasibility of getting emulators, games, or other applications that have had their source released ported to the Dreamcast. Please read the Porting FAQ before starting a topic in this forum!
PH3NOM
DC Developer
Posts: 574
Joined: Fri Jun 18, 2010 9:29 pm
Has liked: 0
Been liked: 0

Re: Looking for a programmer

Post by PH3NOM » Thu Mar 31, 2016 8:44 pm

bogglez wrote:
PH3NOM wrote:Thank you for your input bogglez!

The textures are VQ compressed, so that is the pixelation you see.
What resolution is that mask texture or is it one texture for the whole mesh?
Are you using texture filtering and mipmapping?
I think the suit and such look fine, but the face is something I think people pay a lot of attention to, so I'd up the quality there.
Maybe you're already doing this, but since the face texture is symmetrical you only need one half of it and can instead increase the resolution?
Maybe the eyes should be half-spheres with vertex colors instead, depending on whether you intend to show the character up close. I'd avoid close-up shots on low-resolution textures like in the announcement picture.
We have hit all kinds of limitations, and found solutions to work within them. If you can be more specific, I can explain a bit more :lol:
You didn't really say what caused problems, so I was wondering in general. It seems to be less of a performance problem and more of an artistic problem? Is the blender exporter hard to work with maybe?
I am using PVR_FILTER_BILINEAR, but not currently using mip-mapping.
Keep in mind, these are developmental progress screen shots.
In reality, you won't get that close to the model; at that distance you would be intersecting bounding volumes, so collision detection will stop you before you get that close.
Also, the screens I post are upscaled to 720p in Demul; on real hardware at 480p you will not see as much stretching in the pixels.
I do observe some minor pixelation, but all things considered I am very happy with the current model and texture as-is.

About limitations, well yes there have been plenty.

The first problem we hit was getting things from our modeling program properly converted into Quake 3 Map format.
There are some Quake 3 Export scripts for Blender, so it seemed like an easy solution.
However, the latest version of Blender Q3 Exporter seems to output nothing except the default cube.
I was able to get an older version of Blender to output the full .MAP, but it added so many invalid faces the map became corrupt.

Next, I tried to export our level as .obj, import the .obj in MilkShape, then export to .MAP.
First problem here was that the exporter ignores Materials, and exports everything using 1 material.
Okay, I thought, no problem. I wrote an .obj processor that exports 1 .obj with multiple materials into several .obj's that have 1 material each.
Import / Export in MilkShape to Q3 Map. Success! I can export an obj per-texture into a .MAP, then import each .MAP into 1 in GTKRadiant.
But, when I opened the .MAP in GTKRadiant, there were conversion errors that led to gaps in the geometry, effectively creating "holes" in the level.
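As an illustration of the kind of .obj processor described above, here is a sketch of splitting face lines by their "usemtl" group. The name `split_faces_by_material` is made up, and a real tool must also copy and re-index the v/vt/vn data for each output file:

```c
#include <stdio.h>
#include <string.h>

/* Collect the face lines belonging to one "usemtl" group of a
 * multi-material .obj into out. Illustration only: a real processor
 * also copies the v/vt/vn blocks and re-indexes the faces for each
 * per-material output file. Returns the number of faces collected. */
static int split_faces_by_material(const char *obj, const char *material,
                                   char *out, size_t out_size)
{
    char line[256];
    char current[64] = "";
    int faces = 0;
    out[0] = '\0';

    while (sscanf(obj, "%255[^\n]", line) == 1) {
        if (strncmp(line, "usemtl ", 7) == 0) {
            strncpy(current, line + 7, sizeof(current) - 1);
            current[sizeof(current) - 1] = '\0';
        } else if (line[0] == 'f' && strcmp(current, material) == 0) {
            strncat(out, line, out_size - strlen(out) - 2);
            strcat(out, "\n");
            faces++;
        }
        obj += strlen(line);      /* advance past the line just read */
        while (*obj == '\n') obj++;
        if (*obj == '\0') break;
    }
    return faces;
}
```

Running this once per material in the .mtl gives one face list per output .obj, which is the "1 material each" split described above.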

So, there are many ways to perform the conversion process, but all of them introduce errors in the geometry.
Bottom line, it comes down to the actual specifications of the Quake .MAP format.
It does not store geometry per face or per vertex; it stores the data as planes relative to their origin.
Also, in the Quake MAP format, vertex coordinates become short integers, as opposed to floating-point values.
I looked into writing a converter myself, but after some quick research, I decided I would rather write a new format from scratch.

So, a new format, what is the best approach?
From experience, the DC is faster at rendering large vertex arrays using more matrix transforms than it is at un-packing indexed arrays while transforming fewer vertices.
So, for rendering, things are pretty simple. I export the level .obj per material as vertex arrays that can be rendered with one draw call per material.
While exporting, I compute the vertex lighting and store that for the vertex color.
I call these "Render Buffer Objects", with the extension .rbo. This is a binary file format that can be loaded and rendered directly by the DC hardware.
For the curious, here is my current specification:
Code:

typedef struct
{
	float x, y, z;      /* 3 Float Vertex */
} vec3f;

typedef struct
{
	float u, v;         /* 2 Float Texture Coordinate */
} tex2f;

typedef struct
{
	unsigned char a, r, g, b;
} color32;              /* 32 bit ARGB color */

typedef struct
{
	vec3f pos;           /* Vertex Position */
	tex2f texcoord;      /* Vertex Texture Coordinates */
	color32 color;       /* Vertex ARGB Color */
} DCE_Vertex;

typedef struct
{
	char tex_name[256];  /* Material Texture Name */
	color32 Ke;          /* Material Emissive Light Factor */
	vec3f   Ka;          /* Material Ambient Light Factor */
	vec3f   Kd;          /* Material Diffuse Light Factor */
	vec3f   Ks;          /* Material Specular Light Factor */
	float   Ns;          /* Material Specular Exponent (shininess) */
} DCE_Material;

typedef struct
{
	DCE_Material mtl;          /* Surface Material */
	unsigned short int texID;  /* Texture Index (set by engine) */
	unsigned int verts;        /* Number of vertices in object */
	DCE_Vertex * vert;         /* Vertex Array */
} DCE_RenderObject;
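Since the vertex lighting is baked into the vertex color at export time, here is a minimal sketch of what such a bake could look like, assuming one directional light and plain Lambert shading. The actual exporter math isn't shown in the post, and `bake_vertex_color` is a made-up name:

```c
/* Sketch of an export-time vertex-light bake: color = Ka + Kd * max(0, N.L)
 * for a single directional light, clamped and packed into a 32-bit ARGB
 * color like the DCE_Vertex color field. Assumes the light direction
 * points from the surface toward the light. Ke/Ks are ignored here. */
typedef struct { float x, y, z; } vec3f;
typedef struct { unsigned char a, r, g, b; } color32;

static color32 bake_vertex_color(vec3f n, vec3f light_dir,
                                 vec3f Ka, vec3f Kd)
{
    float ndotl = n.x * light_dir.x + n.y * light_dir.y + n.z * light_dir.z;
    if (ndotl < 0.0f) ndotl = 0.0f;          /* surface faces away */

    float r = Ka.x + Kd.x * ndotl;
    float g = Ka.y + Kd.y * ndotl;
    float b = Ka.z + Kd.z * ndotl;
    if (r > 1.0f) r = 1.0f;
    if (g > 1.0f) g = 1.0f;
    if (b > 1.0f) b = 1.0f;

    color32 c = { 255,
                  (unsigned char)(r * 255.0f),
                  (unsigned char)(g * 255.0f),
                  (unsigned char)(b * 255.0f) };
    return c;
}
```

Because this runs offline in the exporter, the DC never pays for per-vertex lighting at runtime; the renderer just reads the stored color.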
So, that was the first piece of the puzzle. I had things working in short time.
Now to move on to Collision Detection. I decided to use the Möller–Trumbore algorithm
http://www.cs.virginia.edu/~gfx/Courses ... ection.pdf

And boom, that was working again in short time.
But then we realized how much time can be spent in Collision code, so we needed to optimize things.
The first inherent optimization I made was using the DC's fast vector math functions to speed things up a bit.
Next, I pre-computed the triangle edges and stored those in the Collision Model. This saves a large amount of time: due to the branching nature of the algorithm, the triangle edges would otherwise need to be recomputed for every triangle checked against.
So, instead of storing each triangle, in my Collision Model I store a representation of the triangle as follows:
Code:

typedef struct
{
	vec3f v1;                 /* 1st Vertex of Triangle */
	vec3f edge1;              /* Pre-Computed Edge 1 */
	vec3f edge2;              /* Pre-Computed Edge 2 */
	vec3f normal;             /* Surface Normal */
	vec3f bbmin;              /* Bounding Box Mins */
	vec3f bbmax;              /* Bounding Box Maxs */
} DCE_CollisionTriangle;
I compute the surface normal as the average of the 3 vertex normals.
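For the curious, here is a plain-C sketch of the Möller–Trumbore test written against the pre-computed `edge1`/`edge2` from the collision triangle above. This is an illustration, not the engine's SH4-vector-optimized version:

```c
#include <math.h>

typedef struct { float x, y, z; } vec3f;

static vec3f cross3(vec3f a, vec3f b) {
    vec3f r = { a.y * b.z - a.z * b.y,
                a.z * b.x - a.x * b.z,
                a.x * b.y - a.y * b.x };
    return r;
}
static float dot3(vec3f a, vec3f b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static vec3f sub3(vec3f a, vec3f b) {
    vec3f r = { a.x - b.x, a.y - b.y, a.z - b.z };
    return r;
}

/* Möller–Trumbore ray/triangle intersection using the stored
 * edge1 = v2 - v1 and edge2 = v3 - v1, so the edges are not rebuilt
 * per test. Returns 1 and writes the hit distance t on a hit. */
static int ray_tri_intersect(vec3f orig, vec3f dir,
                             vec3f v1, vec3f edge1, vec3f edge2, float *t)
{
    const float eps = 1e-6f;
    vec3f pvec = cross3(dir, edge2);
    float det = dot3(edge1, pvec);
    if (fabsf(det) < eps) return 0;          /* ray parallel to triangle */
    float inv_det = 1.0f / det;

    vec3f tvec = sub3(orig, v1);
    float u = dot3(tvec, pvec) * inv_det;    /* first barycentric coord */
    if (u < 0.0f || u > 1.0f) return 0;

    vec3f qvec = cross3(tvec, edge1);
    float v = dot3(dir, qvec) * inv_det;     /* second barycentric coord */
    if (v < 0.0f || u + v > 1.0f) return 0;

    *t = dot3(edge2, qvec) * inv_det;        /* distance along the ray */
    return *t >= 0.0f;
}
```

Note how every early-out branch touches `edge1`/`edge2`, which is exactly why pre-computing them pays off when thousands of triangles are tested per frame.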

So, now that we have fast ray->triangle intersection test, the only thing left to do is reduce the number of triangles being checked against.

That is where the QuadTree Partitioning is used, and that is why the Bounding Box mins / maxs are stored for each DCE_CollisionTriangle.

So, my Collision QuadTree Node looks like this:
Code:

typedef struct
{
	vec3f bbmin;                   /* Bounding Box Mins */
	vec3f bbmax;                   /* Bounding Box Maxs */

	float half_width;              /* Bounding Box Width */
	float half_depth;              /* Bounding Box Depth */

	unsigned int triangles;        /* Triangle Count In this Bounding Box */

	unsigned short int nodes;      /* Child node count */

	DCE_CollisionTriangle * tris;  /* Triangle Array */
    
	void * parent;                 /* DCE_CollisionQuadTreeNode Parent Node For Quad Tree */
	void * node[4];                /* DCE_CollisionQuadTreeNode Children Nodes For Quad Tree */

} DCE_CollisionQuadTreeNode;
I consider this a "Collision Buffer Object".

My QuadTree works from a top->down perspective, allocating partitions based on width and depth, which results in effective partitions for most situations.
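A sketch of what that top-down (width/depth) quadrant selection can look like. The names and layout are illustrative, not the engine's:

```c
/* Top-down (x/z) quadrant selection for a quadtree node: the node's
 * bounding box is split at the midpoint of its width and depth,
 * giving four children. Height (y) is ignored, matching the
 * "top->down perspective" described above. Illustration only. */
typedef struct { float x, z; } point2f;

static int quadtree_child_index(float min_x, float min_z,
                                float half_width, float half_depth,
                                point2f p)
{
    float mid_x = min_x + half_width;
    float mid_z = min_z + half_depth;
    int east  = p.x >= mid_x;    /* far half of the node's width */
    int south = p.z >= mid_z;    /* far half of the node's depth */
    return (south << 1) | east;  /* 0..3 -> NW, NE, SW, SE */
}
```

Descending the tree this way means a collision query only ever reaches the triangles whose bounding boxes share a quadrant with the query point, which is the triangle-count reduction described above.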

Once that was done, we realized that juguefre had created a level with way too many vertices (and too much memory) to be handled at one time on the DC.
So, we found 2 solutions to this problem.
1.) We realized that many elements of the level were being repeated. So, instead of storing many copies of the same geometry in memory, we export that geometry as a model, and use a simple form of instancing to render many copies of that model.
This also allows us to easily use culling to completely avoid submitting invisible geometry to the renderer.
Just a note, these models are using the "Render Buffer Object" format :o
2.) We decided to "segment" the level, which splits the geometry up once again, but introduces a "loading" time when transitioning between segments.
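The instancing-plus-culling idea in point 1 can be sketched like this. The names are hypothetical, and a real engine would use a full matrix per instance and test against all frustum planes rather than one:

```c
typedef struct { float x, y, z; } vec3f;

/* One shared model rendered at many positions: each instance is just
 * a translation (a real engine would carry a full matrix) plus the
 * shared model's bounding-sphere radius. An instance is culled when
 * its sphere lies entirely behind a plane, e.g. one frustum plane.
 * Illustration of the instancing + culling idea only. */
typedef struct {
    vec3f pos;      /* instance translation */
    float radius;   /* bounding-sphere radius of the shared model */
} instance_t;

/* Plane given as unit normal n and distance d; the visible side is
 * where n.p - d >= 0. The sphere survives if any part crosses over. */
static int instance_visible(instance_t inst, vec3f n, float d)
{
    float dist = n.x * inst.pos.x + n.y * inst.pos.y + n.z * inst.pos.z - d;
    return dist >= -inst.radius;
}

static int count_visible(const instance_t *inst, int count, vec3f n, float d)
{
    int visible = 0;
    for (int i = 0; i < count; i++)
        if (instance_visible(inst[i], n, d))
            visible++;           /* real code would submit the RBO here */
    return visible;
}
```

Because every instance shares one vertex array in memory, culling a whole instance skips its entire draw call for free, which is what keeps repeated level elements cheap.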

That should pretty much bring things up to speed. 8-)

A quick note about how I render such high quality models:
1.) Carefully determine how to render the model ( no clipping, with clipping, or don't and cull )
In the case of no clipping, I use pure assembly to render the model :oops:
2.) I use a frame buffer for each model, so when rendering a model 2x or more per frame ( instancing, shadows, or 2 players on 1 console ), it only needs to be un-packed 1x instead of 2x or more.
3.) Use Backface Culling on the PVR when not clipping
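The "un-pack 1x, submit 2x or more" idea in point 2 can be sketched like this; the names are illustrative and the offset stands in for a full matrix transform:

```c
/* Sketch of "unpack once, submit N times": the model's vertices are
 * transformed into a per-model scratch buffer a single time per frame,
 * and every subsequent pass (second player's view, shadow pass, extra
 * instance) reuses the already-transformed buffer instead of paying
 * for the transform again. Illustrative names only. */
typedef struct { float x, y, z; } vec3f;

static int g_transform_calls = 0;   /* counts the expensive step */

static void transform_verts(const vec3f *in, vec3f *out, int n, vec3f offset)
{
    g_transform_calls++;
    for (int i = 0; i < n; i++) {   /* stand-in for a matrix transform */
        out[i].x = in[i].x + offset.x;
        out[i].y = in[i].y + offset.y;
        out[i].z = in[i].z + offset.z;
    }
}

static void draw_model_n_times(const vec3f *model, vec3f *scratch,
                               int n_verts, vec3f offset, int passes)
{
    transform_verts(model, scratch, n_verts, offset);  /* un-pack 1x */
    for (int p = 0; p < passes; p++) {
        /* submit the scratch buffer to the renderer, once per pass */
    }
}
```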
bogglez
Moderator
Posts: 576
Joined: Sun Apr 20, 2014 9:45 am
Has liked: 0
Been liked: 0

Re: Looking for a programmer

Post by bogglez » Fri Apr 01, 2016 11:35 am

Thanks for the detailed information!

It seems you equate 3D models with DCE_RenderObject, which has only 1 material. Or do you allow multiple materials?

Instead of void* in your DCE_CollisionQuadTreeNode, I would use a forward declaration and DCE_CollisionQuadTreeNode*, so you don't need to cast.
Since the DC's cache is small I would consider breaking up the struct into multiple structs, so that your performance-intensive algorithms only read necessary data into the cache.
Especially if you have per-triangle collision (in addition to bounding volumes I assume), you will fill a lot of cache lines.
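The suggested forward-declaration approach, in code:

```c
/* Forward-declare the struct via its own tag so the parent/child
 * pointers are properly typed and no void* casts are needed. */
typedef struct DCE_CollisionQuadTreeNode DCE_CollisionQuadTreeNode;

struct DCE_CollisionQuadTreeNode {
    DCE_CollisionQuadTreeNode *parent;   /* typed, no cast on access */
    DCE_CollisionQuadTreeNode *node[4];  /* typed children */
};
```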

Did you implement your exporter in Blender/Python? I have to work on something like that soon so I'd love to steal.. er draw some inspiration :D
Wiki & tutorials: http://dcemulation.org/?title=Development
Wiki feedback: viewtopic.php?f=29&t=103940
My libgl playground (not for production): https://bitbucket.org/bogglez/libgl15
My lxdream fork (with small fixes): https://bitbucket.org/bogglez/lxdream