A number of people are working on an update to Jonathan’s Blender X-Plane export scripts; this post aims to shed some light on the recent changes to the OBJ format. X-Plane 9 introduced a number of new OBJ features (manipulators, invisible geometry and camera collisions, dataref-driven control of emissive texturing, normal maps, and a number of new light billboard options). If you simply read the new OBJ commands in the order they were added to the format, it’s just a soup of funny names. But there is some logic to how the OBJ format is extended.
The World’s Simplest OBJ
Here is a very simple OBJ file, broken up by my annotations. First we have the header and global section:
A
800
OBJ
TEXTURE great_image.dds
POINT_COUNTS 24 0 0 36
The global section describes properties universal to the entire OBJ – for example, which textures will be used to draw the object.
We picked up a few new global properties in the version 9 run:
- Normal maps are declared globally for the entire OBJ.
- The metrics of any panel regions to be used are declared globally.
We may pick up new global attributes in the future; if we do, they will be properties that apply to the entire OBJ.
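For example, a version 9 global section declaring a normal map and a panel region might look something like this (the file names and region coordinates here are made up for illustration):

TEXTURE great_image.dds
TEXTURE_NORMAL great_image_NML.dds
COCKPIT_REGION 0 0 512 512
POINT_COUNTS 24 0 0 36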
Next comes the data section:
VT 0.449997 0.300003 0.860001 1.000000 0.000000 0.000000 0.000000 0.000000
VT 0.449997 0.300003 0.000000 1.000000 0.000000 0.000000 0.000000 1.000000
VT 0.449997 -0.509995 0.860001 1.000000 0.000000 0.000000 1.000000 0.000000
...
IDX10 0 1 2 3 2 1 4 5 6 7
...
IDX 21
I have removed a lot of the data section, because there’s not much to be said about it. The data section contains the raw data for the meshes in your OBJ, and it hasn’t changed since the OBJ 8 format was introduced.
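(For reference, each VT line is one complete vertex – the position, then the normal vector, then the texture coordinates:

VT <x> <y> <z> <nx> <ny> <nz> <s> <t>

The IDX and IDX10 commands then reference these vertices by zero-based index.)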
The third and final section is the most interesting one: the commands section.
ATTR_LOD 0 3000
ATTR_hard asphalt
TRIS 0 36
The commands section describes how the data is used in the form of serial instructions to X-Plane. Most changes to the OBJ format have come in the form of new commands. We can categorize our commands into a few buckets:
- Drawing commands create “stuff”. There aren’t very many drawing commands, and new ones don’t appear very frequently. TRIS and LINES are the main commands, but the smoke commands also fall into this category, as do the light billboard commands. The new light billboard command LIGHT_PARAM is the only new drawing command for version 9, and it probably warrants its own blog post.
- Attribute commands change how stuff is drawn – effectively, they set modifiable drawing properties that apply to the triangles drawn while they are in effect. We picked up a number of new attributes: manipulators (controlling how the mouse works), light level control, solid camera, draw disabling, deck style hard surfaces, and panel regions. (While you must declare the panel region locations globally, a panel region is enabled for a specific batch.)
- ATTR_LOD is sort of an exception, because it defines the structure of the model (e.g. a model with LOD really contains several separate command lists, of which only one is used).
Most new extensions to the OBJ format come in the form of new attributes. Attributes generally apply to a specific mesh within your model, not to the entire model.
Note that attributes can be thought of as “per mesh” or “per batch” properties, because they affect only the batches of mesh (TRIS commands) between the attribute being turned on and turned back off again.
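A hedged sketch of that bracketing (the triangle ranges here are made up):

# drawn with the default state
TRIS 0 36
# attribute on: the next batch is a hard surface
ATTR_hard asphalt
TRIS 36 24
# attribute off: back to the default state
ATTR_no_hard
TRIS 60 12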
Where Will New Features Appear?
I try to post some of my crazier ideas regarding OBJ on the scenery system RFCs page. Looking at those RFCs, we can see that each one would be either a global, drawing-primitive, attribute, or OBJ-structure extension. (I am not promising that any of these RFCs will be implemented, just showing how the OBJ format grows.)
- Additive LOD. This is a change to the structure of an OBJ, but it doesn’t actually change the format, just the legal LOD values.
- Explicit OBJ Height. This is a global property on the OBJ.
- Global Texture Variants would be a global property on the OBJ’s textures.
- Global Object Attributes are new global properties – they move some per-batch attributes to be object-wide.
- Draped Object Geometry would be per batch.
In summary, the vast majority of proposed extensions are new per batch or per object properties.
Next: what are the performance implications of the various sections of an OBJ?
In past posts I have tried to describe the implications of DSF base meshes, which are “fully baked”. The basic idea is: the base mesh is fully formed ahead of time as a single unit. This is a trade-off:
- The advantage is performance. The sim has no work to do except draw that base mesh as fast as possible.
- The disadvantage is flexibility. The sim has no easy way to modify that base mesh.
By comparison, DSF overlays are not fully baked – you can add 8 overlays to an area and they will all draw on top of each other. There is a real performance cost to this. Compare the performance of a huge number of draped orthophotos (via .pol files, an overlay technique) with a real orthophoto base mesh cut with MeshTool. You’ll easily get 100 fps on the DSF base mesh, but you won’t come close with the overlay.
If you want to compare X-Plane to a first person shooter, consider the “cost” of overlays as one of the reasons why FPS games appear to be higher performance than general purpose flight simulators like X-Plane and MSFS. In a FPS, each level is likely to be fully baked, with only one level loaded at a time. This is equivalent to X-Plane’s DSF base mesh. The FPS game doesn’t need to manage overlays that are put together at runtime in unpredictable combinations, and this lets the FPS engine optimize for performance.
(In fact, the FPS engine might be able to optimize a second way, if third party level packs are not supported. Not only can a level be ‘fully baked’, but it can be fully baked specifically for that particular rendering engine. By comparison, a DSF base mesh will run with X-Plane 8 or 9 – clearly it isn’t specifically optimized for just one version of X-Plane.)
If you look at the scenery system “overview” I wrote around the time of X-Plane 8’s release (this overview is now pretty out of date; I really need to update it) you’ll see this:
There are now two scenery formats – one for editing and one for distributing scenery. Both are new.
DSF stands for “distribution scenery file” – the idea is that DSF was meant to be a container for fully baked finished scenery, optimized for small size on disk and fast loading, but not editing. Our internal tools use another file format, “.xes”, to contain imported global scenery data before it is baked. Originally I thought that we would provide an editor for .xes files, but that has not happened. With MeshTool, you provide input data in more common public formats like SRTM HGT or GeoTIFF, and .shp (shapefiles). You can think of .shp and .tif as the editing formats for MeshTool and base DSFs.
So how do we make it easy for users to edit scenery? I believe OpenStreetMap is the answer. The common request we get from users is for a way to edit the vector source data for global scenery (or sometimes, the request is to edit the features created by vector data). In other words: how does a user edit the coastlines, water bodies, and roads? With OpenStreetMap, OSM itself becomes the “editing” format for X-Plane scenery with DSF as the final result of baking.
Last week I was finally able to post some scenery tools builds. This begins the beta period for WED 1.1 (WED with DSF overlay editing) and a release candidate period for the AC3D plugin and MeshTool. I expect the WED beta to take a while; as long as the program is reasonably functional in beta form (e.g. not losing data) there isn’t a huge rush to package it up relative to other priorities.
WED will continue to evolve as the primary visual editor for scenery. This will include editing air traffic control data for the new ATC system and editing roads once X-Plane is ready for road overlays. (See this post for why road overlays aren’t quite useful in the sim yet.)
I’m not sure what the next features for MeshTool will be – this may depend on user feedback. One thing is clear: MeshTool is not easy to use. Building a base mesh is a complex and low level process, with lots of possible pitfalls. So whatever features MeshTool develops, usability has to be a goal.
There is one more scenery tool I would like to create: a remote render farm. Right now you can make custom base meshes with MeshTool. In the future it will be possible to make edits to the source data used for global scenery (by editing OpenStreetMap itself). You can also edit the apt.dat file and submit the results to Robin.
But once you edit OSM, how do you get a new scenery pack? Using MeshTool is complex, and MeshTool is really aimed at the orthophoto crowd. Waiting 2 or 3 years for the next X-Plane release isn’t a good solution.
My idea is to set up a computer to handle “remote global scenery” requests: it would periodically pull down new data updates, recut tiles as requested, and post the results publicly. This would allow users to edit the source data and then put in a tile request to get the tiles back, without ever having to know how to make scenery. The tiles would reflect all changes from all users.
Such a service wouldn’t be of interest to the most advanced authors who want to create a truly original scenery pack, but for authors who want to fix specific problems, this process would be much simpler. I hear the question “how do I fix the lake behind my house” all the time; a remote render farm could be part of the answer.
A bug-fix release of the AC3D plugin was posted over the weekend, and now it is gone. Andy pointed out to me that I had posted a build for Windows, but not Mac and Linux – a build error on my part.
I should be able to get all of the new tools (the AC3D patch, WED 1.1 beta 1, and a MeshTool release candidate) posted this week.
It looks like the next generation of scenery tools (MeshTool 2.0, WED 1.1, and the latest distribution of “the tools”) may have higher system requirements than their predecessors for Mac users. Those requirements would be: an Intel CPU and OS X 10.5.0 or higher.
The problem at the heart of all of these changes is that the tools use CGAL (a geometry library), and the compilers Apple distributes that are compatible with 10.4 and PPC don’t work with CGAL. So I quite literally cannot compile the latest tools for those older targets because of the features they offer.
I don’t know how much this affects actual authors, and I don’t know if it is possible (given an infinite amount of self-torture) to work around some of the compiler issues. At this point my plan is:
- Distribute the next-gen tools in binary form for 10.5 and x86.
- Leave links to the old tools for users who need binary PPC tools.
- Continue to make all source code available via the GIT repo.
If someone finds a way to compile these tools for older targets using the source code, I am perfectly happy to provide distribution of those binary tools or incorporate the fixes if they are manageable.
I’m hoping to have some tools posted “real soon”…
After about a week of on and off hacking, I have finally knocked down one of the major stumbling blocks to getting WED 1.1 up to beta quality: exporting UV-mapped (texture mapped) bezier polygons that cross DSF borders. It works! Well, sort of.
If you have tried to program polygon cutting algorithms, you can appreciate the difficulty of an algorithm that:
- Clips polygons robustly (including holes and other weird topology) and
- Maintains a UV mapping while doing this and
- Works with bezier curves and not just line segments.
WED now does all three! This was the ugliest and hardest part of the DSF exporter, and a big missing piece from going beta.
Of course, there is one problem: X-Plane can’t read the bezier curves.
The problem is a simple defect in how X-Plane manages DSFs.
- A valid bezier polygon, fully inside the DSF tile, may have control handles that go outside the tile.
- X-Plane can’t handle any DSF coordinates outside the tile.
Doh!
I am not sure what I will do about this, but in the short term, I fear X-Plane will remain limited. Probably the best short term option is to have WED at least flag such problematic bezier polygons; it is possible to approximate them or edit them to make the export work.
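To illustrate the kind of check WED might make, here is a minimal C++ sketch; the types and names are hypothetical, not WED’s actual code. The key point is that the test has to look at the control handles, not just the polygon nodes:

#include <vector>

struct Point2 { double lon, lat; };

struct BezierNode {
    Point2 pt;       // the polygon node itself
    Point2 ctrl_lo;  // incoming control handle
    Point2 ctrl_hi;  // outgoing control handle
};

// A DSF tile spans one degree: [west, west+1] x [south, south+1].
static bool in_tile(const Point2& p, int west, int south)
{
    return p.lon >= west && p.lon <= west + 1 &&
           p.lat >= south && p.lat <= south + 1;
}

// True if any control handle escapes the tile even when every node is
// inside it – exactly the case X-Plane cannot currently read.
bool has_escaping_handles(const std::vector<BezierNode>& poly, int west, int south)
{
    for (const BezierNode& n : poly)
        if (!in_tile(n.ctrl_lo, west, south) || !in_tile(n.ctrl_hi, west, south))
            return true;
    return false;
}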
There is still a little bit more exporter code to write, including the line segment exporter (which is separate from the polygon exporter), but with luck the whole DSF export path should be cleaned up in the next few days.
Meshes in X-Plane, whether modeled in an OBJ or generated as the result of other “3-d clutter” (road .net files, .for forest files, etc.), can be either one- or two-sided. So first: two-sided geometry is bad in most cases.
In order to understand why two sided geometry is bad, we must consider the alternative. The alternative to two-sided geometry is to simply create each triangle twice, with one facing in each direction. We can do this in an OBJ without making new vertices – because vertices are referenced by indices, we only need more indices, and indices are cheap.
Thus we have an alternative to two sided geometry, namely “doubled” one-sided geometry.
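As a sketch (the indices are made up), doubling one triangle costs only three extra index entries, with the winding order reversed so the copy faces the other way:

# the original triangle
IDX 0
IDX 1
IDX 2
# the same triangle again, winding reversed – no new VT lines needed
IDX 2
IDX 1
IDX 0
TRIS 0 6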
The first problem with two sided geometry is performance: for a small number of triangles, it is much faster to simply emit additional indices than to change the drawing mode to two-sided drawing on the CPU.
Thus in an object it is virtually never a good idea to use two-sided geometry. That ATTRibute will always be worse.
What about for the other clutter? Forests are currently always two-sided, but that’s okay; X-Plane enables two sided drawing just once, then draws a huge number of trees. Same with roads. For facades, there is a cost to using two sided geometry, so only use it for facades that must be two sided, like fences; do not use it for buildings.
Now the second problem with two sided geometry is lighting: X-Plane does not calculate lighting values separately for the two sides of the two-sided geometry. So if you have directionally lit models with two sided geometry, the lighting will look wrong. This is the second reason to use doubled geometry instead.
Things Are Starting To Look Up
There is a work-around to this problem of incorrect lighting on two-sided geometry: “up normals”. With up normals, the normal vector for the triangle (which is used to determine how light “bounces” off the triangle) is set to face straight up. The result is a triangle with brightest lighting at high noon, regardless of which way the triangle actually faces.
The good: the triangle looks the same on both sides and has sort of a “flat” lighting – it doesn’t look wrong when the sun is setting. The bad: the triangle has “flat” lighting – it looks non-3d.
We use up normals for forests because the forests are made of two-quad trees…the trees look less fake if directional lighting hints don’t make the two quads as obvious. You can simulate this in AC3D using the “make up normal” command for vegetation quads you put in your own models.
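In OBJ terms, an up normal is just a vertex whose normal vector is (0,1,0), regardless of which way its triangle faces (the other values here are made up):

VT 10.0 0.0 5.0 0.000000 1.000000 0.000000 0.25 0.75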
For roads, the geometry is two-sided, so we use up normals to avoid having the back of a road element look funny. Some day we may do something more sophisticated.
Fixing Facade Lighting
Facade lighting behavior will be changed in the next 950 release candidate. Before 950, facades would receive up normals, always. Starting in 950, facades will get correct normals if they are one-sided and up normals if they are two-sided. This avoids artifacts with two-sided facades and makes one-sided closed buildings look much better.
I’ll take a break from iPad drivel for a few posts; at least one or two of you don’t already own one. (Seriously, it’s simply easier to blog about X-Plane for iPad because it is already released; a lot of cool things for the desktop are still in development.)
In response to my comments on water reflections in X-Plane 950, some users brought up ray tracing.
My immediate thought is: I will start to think more seriously about ray tracing once it becomes the main technology behind first person shooters (FPS).
Improvements in rendering technology come to FPS before flight simulators (and this is true for the combat sims and the MSFS series too, not just us). Global shadows, deferred rendering, screen-space ambient occlusion…the cool new tricks get tried out on FPS; by the time they make it into a flight simulator the technology has moved from “clever idea” to “standard issue.”
Consider that X-Plane now finally has per-pixel lighting. Why didn’t we have it when the FPS first did? Well, one reason is that the FPS were cheating. If you look at the papers from that time suggesting how to program per-pixel lighting, you’ll find all sorts of clever techniques involving baking specular reflections into cube maps and other work-arounds to improve performance. These were necessary because titles at the time were doing per-pixel lighting on hardware that could barely handle it. X-Plane’s approach (like that of other modern games) is to simply program per-pixel lighting and trust that your GeForce 8800 or Radeon 4870 has plenty of shader power.
I believe that the gap between FPS and flight simulators comes from two sources:
- Viewpoint. You can put the camera quite literally anywhere in a flight simulator, and thus the world needs to look good from virtually any position. By comparison, if your game involves a six foot player walking on the ground (and sometimes jumping 10 feet in the air) you know a lot about what the user will never see, and you can pull a lot of tricks to reduce the performance cost of your world based on this knowledge. (This kind of optimization applies to racing games too.)
To give one simple example of the kind of optimization a shooter can make that a flight simulator cannot, consider “portal culling” (a code sketch follows this discussion). A portal-culled world is one where the visibility of distinct regions has been precomputed. A trivial example is a house: each room is only visible through the doors of the other rooms.* Thus when you are walking through a room, virtually no other room is being drawn at all, and the entire visible world is only 20 by 20 meters. The developers therefore know that they have the entire hardware “budget” of computing power to dedicate to that one room, and they can load it up with effects, even if they are still expensive.
(A further advantage of portal culling is a balance of effects. Because rooms are not drawn together in arbitrary combinations, the developers may find ways to cheat on the lighting or shadowing effects, and they know nothing will “clutter” the world and ruin the cheats.)**
- Content. Often the FPS will have pre-built content, rather than user-configurable content. Schemes like portal culling (above) only work when you know everything about the world ahead of time and can calculate what is visible where. The same goes for many careful cheats on visual effects.
But a flight simulator is more like a platform: users add content from lots of different sources, and the flight simulator rendering engine has to be able to render an effect correctly no matter what the input. This means the scope of cheating is a lot smaller.
Consider for example water reflections. In a title with pre-made content, the artists can go into the world in advance and mark items as “reflects” or “doesn’t reflect”, reducing the amount of drawing necessary for water reflections. The artist simply has to look around the world and say “ah – this mountain is nowhere near a lake – no one will notice it.”
X-Plane can’t make this optimization. We have no idea where there will be water, or airports, or where you might be flying, or where there might be another multiplayer plane. We know nothing. Everything is subject to change with custom scenery. So we can’t cheat – we have to do a lot of work for reflections, some of which might be wasted. (But it would be too expensive in CPU power to figure out what is wasted while flying.)
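Here is the portal-culling sketch promised above – a minimal illustration in C++ with a precomputed visibility list per room. All of the names are hypothetical; this is not any particular engine’s actual code:

#include <cstdio>
#include <vector>

struct Room {
    const char*      name;
    std::vector<int> visible;  // precomputed: rooms that can be seen through portals
};

// Draw only the camera's room plus the rooms its portals reveal;
// everything else in the level is skipped entirely.
void draw_world(const std::vector<Room>& rooms, int camera_room)
{
    std::printf("drawing %s\n", rooms[camera_room].name);
    for (int r : rooms[camera_room].visible)
        std::printf("drawing %s (seen through a portal)\n", rooms[r].name);
}

int main()
{
    std::vector<Room> house = {
        { "hall",    {1, 2} },  // the hall sees the kitchen and bedroom through doors
        { "kitchen", {0} },     // the kitchen sees only the hall
        { "bedroom", {0} },
    };
    draw_world(house, 1);  // camera in the kitchen: only kitchen + hall are drawn
}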
Putting it all together, my commentary on ray-tracing is this: the FPS will be able to integrate small amounts of ray tracing first, because they will be in a position to deploy it tactically, using it only where it is really necessary, in hybrid ray-trace + rasterized engines. They’ll be able to exclude big parts of the scene from the ray tracing pass, improving performance. They’ll be able to “dumb down” the quality of the ray trace in ways that you can’t see, again improving performance. The result of all of this will be some ray tracing in FPS when the hardware is just barely ready.
For a flight simulator, it will take longer, because we’ll need hardware that can do a lot more ray tracing work. We won’t know as much about our world, which comes from third party content, so we won’t be able to eliminate visually unimportant ray traces. Like deferred rendering, shadow mapping, SSAO and a number of other effects, flight simulators will need more computing power to apply the effects to a world that can be modified by users.
(Is ray tracing even useful, compared to rasterization? I have no prediction. Personally I am not excited by it, but fortunately I don’t have to make a good guess as to whether it is the future of flight simulation. The FPS will be able to, by effective cheating, apply ray tracing way before us, and give us a sneak peek into what might be possible.)
* There never are very many windows in those first person shooters, are there?
** To be clear: there is nothing negative about the term “cheat” in computer graphics. A way to cheat on the cost of an algorithm means the developers are very good at their jobs! “Cheating” on the cost of an algorithm means more efficient rendering. If the term cheating seems negative, substitute “lossy optimization”.
Normal maps in X-Plane 940 have a funny property: if you flip the normal map horizontally or vertically, the bumps change direction. Things that “stuck out” now “stick in” and vice versa. (If you flip the normal map horizontally and vertically, the two cancel out and the normal map is not reversed.)
You can understand this by thinking of your normal map as a physical piece of metal with bumps punched in it. Flipping it horizontally really means rotating it horizontally to see the other side – now you see the back side of the bumps. Same with the vertical flip. Flip both and you have flipped it twice and it’s front-side forward again.
In a move that is either just in the nick of time or dangerously reckless, I have tweaked the normal map shader for 950 RC1 (coming out “real soon”) to detect and “fix” a flipped normal. In 950 RC1 the bumps in your normal map will always point in the same direction as the normal of your mesh, even if your UV map is flipped horizontally or vertically.
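To illustrate the idea (this is a conceptual C++ sketch, not X-Plane’s actual shader code): a horizontal or vertical mirror flips the sign of the UV mapping’s determinant, and that sign tells you whether the decoded bumps need correcting. Mirroring both ways flips the sign twice, which is why the two flips cancel:

struct UV { float u, v; };

// Returns +1 for an unmirrored UV mapping on a triangle, -1 for a mirrored
// one. One flip (horizontal or vertical) makes the determinant negative;
// flipping both ways makes it positive again.
float uv_handedness(UV a, UV b, UV c)
{
    float det = (b.u - a.u) * (c.v - a.v) - (c.u - a.u) * (b.v - a.v);
    return (det < 0.0f) ? -1.0f : 1.0f;
}

// A shader along these lines could multiply the tangent-space components of
// the decoded normal by this handedness so the bumps keep pointing out of
// the mesh.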
What does this mean to you, the modeler? It means that you can now mirror your normal map from the left to the right side of the airplane and the normal map will still have the bumps “sticking out” like you want.
I crammed this into 950 RC1 because it looks like it’s a useful change that will restore flexibility to authors making highly detailed airplanes. Mirroring a symmetric airplane (which results in a horizontally mirrored normal map) is a pretty common thing to do, and if the text is applied as decals, this can be a big win in texture space savings.
I figured it was best to get the tweak in now so people could take advantage of it. Plus, what’s an RC1 without an RC2?
If you look at funky pictures of X-Plane online, a fair number of them will show incorrect water reflections. I am working on some bug fixes for the reflection code for 950. “Bug fixes” might not even be the right term; to understand the incorrect reflections, you have to understand what the water code can and cannot do.
The water reflection code renders a reflection based on a flat plane. This limitation comes from the mathematics of the algorithm – a compromise to have water reflections that run at “real-time” speeds. (Real-time is graphics nerd speak for 20-30 fps and not 1-2 fps.)
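The standard planar-reflection trick (a generic sketch, not X-Plane’s actual code) is to re-render the scene with the camera mirrored across the water plane y = h, then use that image as the reflection:

struct Vec3 { double x, y, z; };

// Mirror a point across the horizontal water plane y = h.
Vec3 reflect_across_water(Vec3 p, double h)
{
    return { p.x, 2.0 * h - p.y, p.z };
}

// The reflected camera: position mirrored as above, and the vertical
// component of the view direction negated. Note that there is only ONE
// h for the whole scene – which is exactly the limitation that follows.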
As it turns out, the Earth is not flat. So we can pick up a number of reflection “bugs”, due to the limits of the approximation we are using:
- Over very large distances, the flat plane is a bad approximation of a water surface. The flat plane simply can’t be “right” everywhere for any large scene. This isn’t a bug, it’s a ceiling on our maximum quality – a design constraint.
- If we have two water surfaces at different elevations (e.g. a river with a dam) we can’t have our reflection plane match both. So some scenes may have wrong reflections with multi-level water. This too is a design constraint.
- If our reflection plane is at the wrong height or the wrong slope, it is going to produce really weird results. The reflection plane being in the wrong place despite a small scene with one level of water – that’d be a bug.
- There is an art to positioning the plane – if we have a large scene (so the round earth means there is no one perfect plane) some locations of the plane will look better than others.
Now one fallout of all of this is that things are going to look better if water is really flat, which is not always the case (both for some parts of the global scenery with production errors and for some third party scenery). Where the water is sloped or contains bumps, we hit the multi-level case where we should not, and we face reflection plane placement problems.
Finally, if the scenery mesh contains slanted water, we’re really going to be hosed – almost by definition if X-Plane uses a sane water reflection plane, it won’t be sloped, and thus this sloped water is going to be unaligned with the reflection plane and produce something that looks really funky.
So my work on 950 is aimed at having X-Plane be fooled less often by complex and incorrect scenes. (Note that X-Plane can’t tell the difference between Norway, where we really do have water at multiple elevations, and bad input data to MeshTool.)
Even with these fixes, sloped water is still going to look pretty strange (because it is strange). And even with these fixes, multi-level water will still have its reflections approximated at best. But hopefully the visuals from the sim will be less jittery while flying over tricky DSFs.
I’m hoping we’ll have a beta 2 in the next week or two.