bogglez wrote:
PH3NOM wrote:We have hit all kinds of limitations, and found solutions to work within them. If you can be more specific, I can explain a bit more.

Are you using texture filtering and mipmapping? What resolution is that mask texture, or is it one texture for the whole mesh?
I think the suit and such look fine, but the face is something people pay a lot of attention to, so I'd up the quality there.
Maybe you're already doing this, but since the face texture is symmetrical, you only need one half of it and can increase its resolution instead?
Maybe the eyes should be half-spheres with vertex colors instead, depending on whether you intend to show the character up close. I'd avoid close-up shots of low-resolution textures like in the announcement picture.
You didn't really say what caused problems, so I was wondering in general. It seems to be less of a performance problem and more of an artistic problem? Is the Blender exporter hard to work with, maybe?

Thank you for your input, bogglez!
I am using PVR_FILTER_BILINEAR, but not currently using mip-mapping.
The textures are VQ compressed, so that is the pixelation you see.
Keep in mind, these are developmental progress screenshots.
In reality, you won't get that close to the model; at that distance you would already be intersecting bounding volumes, so collision detection will stop you from getting that close.
Also, the screens I post are upscaled to 720p in Demul; on real hardware at 480p you will not see as much stretching in the pixels.
I do observe some minor pixelation, but all things considered I am very happy with the current model and texture as-is.
About limitations, well yes there have been plenty.
The first problem we hit was getting things from our modeling program properly converted into Quake 3 Map format.
There are some Quake 3 Export scripts for Blender, so it seemed like an easy solution.
However, the latest version of Blender Q3 Exporter seems to output nothing except the default cube.
I was able to get an older version of Blender to output the full .MAP, but it added so many invalid faces that the map became corrupt.
Next, I tried to export our level as .obj, import the .obj in MilkShape, then export to .MAP.
First problem here was that the exporter ignores Materials, and exports everything using 1 material.
Okay, I thought, no problem. I wrote an .obj processor that exports 1 .obj with multiple materials into several .obj's that have 1 material each.
Import / Export in MilkShape to Q3 Map. Success! I can export an obj per-texture into a .MAP, then import each .MAP into 1 in GTKRadiant.
But when I opened the .MAP in GTKRadiant, there were conversion errors that led to gaps in the geometry, effectively creating "holes" in the level.
So, there are many ways to perform the conversion process, but all of them introduce errors in the geometry.
Bottom line, it comes down to the actual specification of the Quake .MAP format.
It does not store geometry per face or per vertex; it stores each brush as a set of planes, from which the faces have to be reconstructed.
Also, in the Quake .MAP format, vertex coordinates are snapped to short integers, as opposed to being stored as floating-point values.
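For illustration, this is roughly what one brush looks like in a Quake-style .MAP file: each face is a plane defined by three integer-coordinate points lying on it, followed by the texture name and alignment values (the exact trailing fields vary between Quake versions):

```
{
( 128 0 0 ) ( 128 64 0 ) ( 128 0 64 ) base_wall/concrete 0 0 0 0.5 0.5
( 256 0 0 ) ( 256 0 64 ) ( 256 64 0 ) base_wall/concrete 0 0 0 0.5 0.5
// ...four more planes close the convex brush...
}
```

There is no vertex or face list anywhere in the file, which is why converting arbitrary triangle meshes into this representation keeps introducing errors.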
I looked into writing a converter myself, but after some quick research, I decided I would rather write a new format from scratch.
So, a new format, what is the best approach?
From experience, the DC is faster at rendering large vertex arrays using more matrix transforms than it is at unpacking indexed arrays while transforming fewer vertices.
So for rendering, things are pretty simple: I export the level per material as vertex arrays that can be rendered with one draw call per material.
While exporting, I compute the vertex lighting and store that for the vertex color.
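As a sketch of that export-time step (the function and names here are my own illustration, not the actual exporter code), a simple Lambert term can be baked into a packed vertex color so that no lighting work is left for runtime:

```c
#include <stdint.h>

/* Hypothetical sketch: bake a Lambert (N.L) term into a packed ARGB8888
 * vertex color at export time. n and l are unit vectors; ambient is the
 * minimum brightness floor in [0,1]. */
static uint32_t bake_vertex_light(const float n[3], const float l[3],
                                  float ambient)
{
    float d = n[0]*l[0] + n[1]*l[1] + n[2]*l[2];   /* N.L */
    if (d < 0.0f) d = 0.0f;                        /* facing away: no light */
    float i = ambient + (1.0f - ambient) * d;      /* scale into [ambient,1] */
    uint8_t c = (uint8_t)(i * 255.0f + 0.5f);
    return 0xFF000000u | ((uint32_t)c << 16) | ((uint32_t)c << 8) | c;
}
```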
I call these "Render Buffer Objects", with the extension .rbo. This is a binary file format that can be loaded and rendered directly by the DC hardware.
For the curious, here is my current specification:
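(The actual specification did not survive the quote. As a stand-in, here is a hypothetical layout consistent with the description above: a small header, then a flat array of vertices with position, UVs, and a baked vertex color, one file per material. All names and fields here are my own guesses, not the real .rbo format.)

```c
#include <stdint.h>

/* Hypothetical .rbo layout -- illustration only, not the real spec. */
typedef struct {
    float    x, y, z;      /* position */
    float    u, v;         /* texture coordinates */
    uint32_t argb;         /* baked vertex lighting, packed ARGB8888 */
} rbo_vertex_t;

typedef struct {
    char     magic[4];     /* file identifier, e.g. "RBO\0" */
    uint32_t material_id;  /* index into the texture/material table */
    uint32_t vertex_count; /* rbo_vertex_t records follow the header */
} rbo_header_t;
```

The appeal of a layout like this is that the vertex records can be read straight into memory and submitted as-is, with no per-vertex parsing at load time.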
Now to move on to collision detection. I decided to use the Möller–Trumbore algorithm:
http://www.cs.virginia.edu/~gfx/Courses ... ection.pdf
And boom, that was working again in a short time.
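For reference, a minimal C version of the Möller–Trumbore test from that paper looks like this (my own sketch, not the engine's actual code):

```c
#include <math.h>

#define MT_EPSILON 1e-6f

/* Moller-Trumbore ray/triangle intersection (illustrative sketch).
 * o/d are the ray origin and direction; v0..v2 the triangle vertices.
 * Returns 1 and writes the ray parameter *t on a hit, 0 otherwise. */
static int ray_tri_intersect(const float o[3], const float d[3],
                             const float v0[3], const float v1[3],
                             const float v2[3], float *t)
{
    float e1[3], e2[3], p[3], q[3], s[3];
    int i;
    for (i = 0; i < 3; i++) { e1[i] = v1[i] - v0[i]; e2[i] = v2[i] - v0[i]; }

    /* p = d x e2 */
    p[0] = d[1]*e2[2] - d[2]*e2[1];
    p[1] = d[2]*e2[0] - d[0]*e2[2];
    p[2] = d[0]*e2[1] - d[1]*e2[0];

    float det = e1[0]*p[0] + e1[1]*p[1] + e1[2]*p[2];
    if (fabsf(det) < MT_EPSILON) return 0;          /* ray parallel to plane */
    float inv = 1.0f / det;

    for (i = 0; i < 3; i++) s[i] = o[i] - v0[i];
    float u = (s[0]*p[0] + s[1]*p[1] + s[2]*p[2]) * inv;
    if (u < 0.0f || u > 1.0f) return 0;             /* outside barycentric u */

    /* q = s x e1 */
    q[0] = s[1]*e1[2] - s[2]*e1[1];
    q[1] = s[2]*e1[0] - s[0]*e1[2];
    q[2] = s[0]*e1[1] - s[1]*e1[0];

    float v = (d[0]*q[0] + d[1]*q[1] + d[2]*q[2]) * inv;
    if (v < 0.0f || u + v > 1.0f) return 0;         /* outside barycentric v */

    *t = (e2[0]*q[0] + e2[1]*q[1] + e2[2]*q[2]) * inv;
    return *t > MT_EPSILON;                          /* hit in front of ray */
}
```

Note that e1 and e2 depend only on the triangle, which is exactly what makes the pre-computation described next pay off.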
But then we realized how much time can be spent in collision code, so we needed to optimize things.
The first easy optimization I made was using the DC's fast vector math functions to speed things up a bit.
Next, I pre-computed the triangle edges and stored them in the collision model. This saves a large amount of time: due to the branching nature of the algorithm, the edges would otherwise need to be recomputed for every triangle checked against.
So, instead of storing each triangle, in my Collision Model I store a representation of the triangle as follows:
So, now that we have a fast ray->triangle intersection test, the only thing left to do is reduce the number of triangles being checked against.
That is where the quadtree partitioning is used, and that is why the bounding-box mins/maxs are stored for each DCE_CollisionTriangle.
So, my Collision QuadTree Node looks like this:
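(The node struct itself was not preserved in the post; as a stand-in, a hypothetical node for a top-down quadtree over the X/Z plane might look like the following. All names and fields are my own illustration.)

```c
/* Hypothetical quadtree node: 2D bounds on the horizontal plane,
 * four children on interior nodes, triangle indices on leaves. */
typedef struct QuadNode {
    float mins[2], maxs[2];    /* node bounds on the X/Z plane */
    struct QuadNode *child[4]; /* all NULL on leaf nodes */
    int  *tri_index;           /* indices into the triangle array (leaf) */
    int   tri_count;
} QuadNode;

/* 2D AABB overlap test used to route a query box into the right
 * children, so only nearby leaves have their triangles tested. */
static int quad_overlap(const QuadNode *n, const float mins[2],
                        const float maxs[2])
{
    return mins[0] <= n->maxs[0] && maxs[0] >= n->mins[0] &&
           mins[1] <= n->maxs[1] && maxs[1] >= n->mins[1];
}
```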
My quadtree works from a top-down perspective, allocating partitions based on width and depth, which results in good partitions for most situations.
Once that was done, we realized that juguefre had created a level with far too many vertices (and too much memory use) to be handled at one time on the DC.
So, we found 2 solutions to this problem.
1.) We realized that many elements of the level were repeated. So, instead of storing many copies of the same geometry in memory, we export that geometry once as a model and use a simple form of instancing to render many copies of it.
This also allows us to easily use culling to completely avoid submitting invisible geometry to the renderer.
Just a note: these models use the "Render Buffer Object" format.
2.) We decided to "segment" the level, which splits the geometry up once again, but will introduce a "loading" time when transitioning between segments.
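The instancing idea in (1) can be sketched like this (my own illustration, not the engine's code): one shared vertex array, transformed per instance by that instance's matrix, so repeated geometry lives in memory only once. On the DC the per-instance transform would go through the SH4 matrix hardware rather than plain C.

```c
typedef struct { float x, y, z; } vec3;

/* Transform one instance of a shared model: apply a column-major 4x4
 * matrix (rotation/scale + translation) to every vertex, writing into
 * a scratch buffer that is then submitted to the renderer. */
static void transform_instance(const float m[16],
                               const vec3 *in, vec3 *out, int count)
{
    for (int i = 0; i < count; i++) {
        float x = in[i].x, y = in[i].y, z = in[i].z;
        out[i].x = m[0]*x + m[4]*y + m[8]*z  + m[12];
        out[i].y = m[1]*x + m[5]*y + m[9]*z  + m[13];
        out[i].z = m[2]*x + m[6]*y + m[10]*z + m[14];
    }
}
```

Each placed copy of a repeated level element then needs only a matrix (and a bounding volume for culling), instead of its own copy of the vertex data.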
That should pretty much bring things up to speed.
A quick note about how I render such high quality models:
1.) Carefully determine how to render each model (no clipping, with clipping, or don't render it at all and cull).
In the case of no clipping, I use pure assembly to render the model.
2.) I use a per-frame buffer for each model, so when a model is rendered 2x or more per frame (instancing, shadows, or 2 players on 1 console), it only needs to be unpacked 1x instead of 2x or more.
3.) Use backface culling on the PVR when not clipping.