If you read the original X-Plane scenery web pages, you’ll see references to two file formats:
- DSFs – the files we distribute scenery in.
- XES – the “X-Plane Editable Scenery” file format, which you won’t see very much of.
Here’s the story:
When I was first working on the scenery system design, we decided on a pre-processed approach, which implied two types of file formats: pre-baked (editable source data) and post-baked (distributable finished scenery). XES is a GIS container format for the source data.
When we create the global scenery, the process is something like this:
- Import lots of data from multiple sources in multiple formats, so that it is all in one giant tile in our format.
- Process the data, deriving new information (like terrain type) from existing data (like slope) and fixing problems (like bumps on runways).
- Export the data as a DSF, which involves additional conversions (such as converting generic road types to X-Plane roads) and DSF encoding.
We keep our raw data partly in XES format, and partly in the original raw format, depending on how slow the importer is – some vector formats are very slow to import (or are not already tiled), so we preconvert to XES. Other formats, like SRTM, are so easy to import quickly that we just use the data as is.
If you have ever tried to use MeshTool, you may have used XES files yourself – the landuse and climate data that MeshTool needs are saved as XES files – it’s an easy way to encode a few variable-sized raster maps with portable enumeration encoding.
WED does not use XES files – when I started work on WED, I realized that the XES container format was too GIS oriented and not application-oriented, so I created a file format particular to WED. WED will continue to use .wed files, which can contain anything that WED can edit.
In the long term, I don’t see XES as being used by anyone except for LR internally; WED will continue to have a WED native format, and we will try to use common simple GIS formats for import/export – most likely SRTM hgt files for elevation and .shp (shape) files for vectors.
This thread on X-Plane.org sparked off quite the discussion. A lot of it is about when LR will have an overlay editor – there are a few overlay editing functions that Jonathan Harris’ excellent OverlayEditor apparently does not yet support, which is what set things off.
(I am not saying that LR should rely on Jonathan to do an overlay editor. But I am saying that the complaints I hear about a lack of overlay editing go down when Jonathan’s overlay editor does everything that the file formats can do.)
But another part of the discussion focused on the problem of mesh editing. In particular, the basic terrain in a DSF is a fully baked output of a complex process that starts with higher level GIS input data. In other words, we start with a raster DEM, polygon coastline, apt.dat file, vector roads, and a bunch of config files and hit “bake” and a DSF comes out the other side, with a lot of processing.
This is very different from FS X, which integrates its data sources on the fly. Why did we choose a precomputed route for scenery? It has some pros and cons. (In understanding how we made these decisions, think back to what scenery was like with X-Plane 7 and ENVs and single-core machines.)
Performance
The main benefits of preprocessing scenery are performance related. When you process scenery data into final scenery while flying, that computation takes power away from the rendering engine, cutting down fps. At some point you hit a zero-sum game between the cost of loading scenery and how complex the scenery integration can be; you have to pick very simple scenery integration algorithms to keep fps up.
(This is less of an issue as more cores become available, but is still a factor.)
When pre-processing, we can use algorithms that take minutes per DSF without affecting framerate.
Similarly, there might be scenery processing algorithms that improve fps by optimizing the output triangles – but do we have time to run these algorithms during load? With preprocessing we have all the time in the world because it happens once before the DVDs are burned.
Preprocessing also breaks a similar zero-sum game between scenery data size and quality; the source data we use to make the scenery is a lot bigger than the 78 GB of DSFs we cut. If we had to ship the source data, we’d have to cut down the source data quality to hit our DVD limitations. With pre-baking we could use 500 GB of source data without penalty.
Format Flexibility and Stability
The second set of benefits to preprocessing are flexibility benefits. (Consider the file format churn of the ENV days.)
- With a preprocessed scenery file, what the author creates is what the user sees – X-Plane does not go in and perform subjective integrations on the scenery later that might change how it looks in a negative way.
- There is no need to revise the scenery file formats to introduce new data sets, because new data sets and old are all processed to the same final DSF container format.
- A wide variety of mesh generation techniques can be employed because the mesh generation is not built into X-Plane. This is a flexibility that I don’t think anyone has really utilized.
- Changes of behavior in the scenery generation toolset can never affect existing scenery because that scenery is already preprocessed; this helps the compatibility of old file formats.
Integration Issues
There are some real limitations to a pre-processed format, and they are virtually all in the bucket of “integration issues” – that is, combining separate third party add-ons to improve scenery. In particular, in any case where we preprocess two data sources, we lose the opportunity for third parties to provide new scenery to replace one of those data sources and not the other.
Airports are the Achilles’ heel where this hurts us most; while airport layouts are overlays and can be added separately to the scenery system, the elevation of the base mesh below the airport needs to be preprocessed. This is something I am still investigating – a tolerable fix that others have proposed is to allow an overlay scenery pack to flatten a specific region regardless of the user setting (so an author can be assured of a flat base to work from).
Preprocessing does fundamentally limit the types of third party add-ons that can be done; with version 9 and overlay roads, we are getting closer to letting road add-ons be overlays (see this post).
It appears to me that integration isn’t the primary complaint about the scenery system (the primary complaint is lack of tools) but we’ll have to see once we have mesh editing tools (mesh recreation tools really) whether preprocessing still limits certain kinds of scenery.
Note that a lack of tools or a lack of tool capability is not an inherent limitation of pre-processed scenery. We have an incomplete tool set because I have not written the code for a complete tool set, not because it cannot be done.
(The complexity of writing base mesh editing tools is a function of the complexity of a vector-based base mesh – this is also not related to pre-processing per se.)
Tools
In the end, I think the question of tools is not directly tied to the question of pre-processing. Whether we have scenery that is processed by X-Plane or a preprocessing tool, we have the same issues:
- Good tools require an investment in coding user interface.
- The code to convert source data which users might want to edit (like a polygon that defines a lake) to data the simulator might want to use (like a list of 78,231 triangles) has to be written.
I don’t think either option (pre-processing or in-simulator processing) reduces the amount of work to be done to create a good toolset.
As a final thought, using scenery file formats that are “easier to edit” (e.g. a file format that contains a polygon for water rather than triangles) doesn’t make the total code for scenery tools + simulator any easier; it just moves the task of “processing” the scenery from the tools to the simulator itself.
I’ve rewritten this post about four times now…let me try the brief version.
Basically, X-Plane is not an early adopter of graphics technology. Because of the nature of the rendering we do, we can directly benefit from “more of the same”, e.g. if you simply gave me twice as many objects per second or twice as many polygons, we could make the sim look a lot nicer. So we don’t need to adopt new graphics technologies until they’re proven in games that need them more, like first person shooters. We’re a small company with no influence on the industry, so we write the tightest code we can and adopt new features once the dust settles.
(From a utilization standpoint, we also provide the best graphics to the most people by using card features that are going to become widespread, so it doesn’t make sense for us to gamble on vendor-specific extensions that might not become available to everyone.)
With that in mind, there is some cool stuff that people are talking about that maybe someday we’ll get to play with:
- Irregular Shadow Mapping – given a super-programmable card, you can create a rendering scheme that optimizes shadow map creation to remove artifacts.
- Out-of-order blending – the graphics card re-sorts incoming geometry so that all translucent geometry is drawn back to front. Doing this on the CPU is expensive (and in X-Plane’s case, we often just don’t get it right at all) – see the sketch after this list.
- Multiple dispatch to multiple targets. Even a big multi-chip GPU (a lot of modern cards are two cards stuck together) only renders to one screen or texture at a time, even if there are a lot of parallel elements. This is good for a few big complex scenes but not good for lots of small scenes. I’d like to see all vendors support dispatch to multiple targets – this would make things like dynamic reflection via environment cube maps potentially a lot faster.
- Voxel Octrees. This is the one I hear a lot about – basically it’s a change from 2-d to 3-d data structures on the graphics card to manage fast access to large chunks of graphics data. (Shadow maps, z-buffers, and environment maps are all more or less 2-d data structures.)
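To make the CPU-side cost of translucency sorting concrete, here is a minimal sketch (not X-Plane code) of the back-to-front sort an application has to do every frame today; the struct and field names are made up for illustration.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// One translucent batch waiting to be submitted (hypothetical fields).
struct TranslucentDraw {
    double distance_from_camera;   // meters from the eye point
    int    batch_id;               // stand-in for buffers, textures, state
};

// The per-frame CPU work: sort translucent batches back to front so that
// blending composites correctly. Hardware that re-sorted fragments itself
// would make this step (and the cases where it is skipped) unnecessary.
void sort_back_to_front(std::vector<TranslucentDraw>& draws) {
    std::sort(draws.begin(), draws.end(),
              [](const TranslucentDraw& a, const TranslucentDraw& b) {
                  return a.distance_from_camera > b.distance_from_camera;
              });
}

int main() {
    std::vector<TranslucentDraw> draws = {{120.0, 1}, {35.5, 2}, {980.0, 3}};
    sort_back_to_front(draws);
    for (const TranslucentDraw& d : draws)
        std::printf("draw batch %d at %.1f m\n", d.batch_id, d.distance_from_camera);
    return 0;
}
```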
Will we see this? I don’t know. Will Larrabee change everything? Who knows…Intel has to build a high-end graphics card to fight ATI and NV’s attempt to get into supercomputing, but if they happen to also build a really nice video card, I can live with that. But I won’t hold my breath – the titans need to duke it out without me!
Per-pixel lighting is something I hope to have in X-Plane soon. A number of other features will take longer, and quite possibly might never happen. This is the “pie in the sky” list – with this list, we’re looking at higher hardware requirements, a lot of development time, and potential fundamental problems in the rendering algorithm!
High Dynamic Range (HDR) Lighting
HDR is a process whereby a program renders its scene with super bright and super dark regions, using a more detailed frame-buffer to draw. When it comes time to show the image, some kind of “mapping” algorithm then represents that image using the limited contrast available on a computer monitor. Typical approaches include:
- Scaling the brightness of the scene to mimic what our eyes do in dark or bright scenes.
- Creating “bloom”, or blown out white regions, around very bright areas.
Besides creating more plausible lighting, the mathematics behind an HDR render would also potentially improve the look of lit textures when they are far away. (Right now, a lit and dark pixel are blended to make semi-lit pixels when far away as the texture scales down. If a lit pixel can be “super-bright” it will still look bright even after such blending.)
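For the curious, here is a minimal sketch of the two mapping approaches above – a simple Reinhard-style scale and a bloom threshold. This is an illustration of the math only, not X-Plane’s renderer; the constants are arbitrary.

```cpp
#include <algorithm>
#include <cstdio>

// Reinhard-style tone mapping: compress an unbounded HDR luminance into
// the [0,1) range a monitor can actually show.
double tone_map(double hdr_luminance) {
    return hdr_luminance / (1.0 + hdr_luminance);
}

// Simple bloom selection: only the part of the signal above a threshold
// would be blurred and added back as glow around very bright areas.
double bloom_contribution(double hdr_luminance, double threshold = 1.0) {
    return std::max(0.0, hdr_luminance - threshold);
}

int main() {
    for (double lum : {0.25, 1.0, 4.0, 16.0})
        std::printf("HDR %5.2f -> display %.3f, bloom %.2f\n",
                    lum, tone_map(lum), bloom_contribution(lum));
    return 0;
}
```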
Besides development time, HDR requires serious hardware; drawing to a framebuffer with that extended range chews up a lot of GPU power, so HDR would be appropriate for a card like the GeForce 8800.
While there aren’t any technical hurdles to stop us from implementing HDR, I must point out that, given a number of the “art” features of X-Plane like the sun glare, HDR might not be as noticeable as you’d think. For example, our sun “glares” when you look at it (similar to an HDR trick), but this is done simply by us detecting the view angle and drawing the glare in.
Reflection Mapped Airplanes
Reflection maps are textures of the environment that are mapped onto the airplane to create the appearance of a shiny reflective surface. We already have one reflection map: the sky and possibly scenery are mapped onto the water to create water reflections.
Reflection maps are very much possible, but they are also very expensive; we have to go through a drawing pass to prepare each one. And reflection maps for 3-d objects like airplanes usually have to be done via cube maps, which means six environment maps!
There’s a lot of room for cheating when it comes to environment maps. For example: rendering environment maps with pre-made images or with simplified worlds.
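As a rough illustration of why cube maps cost so much, here is an OpenGL fragment – it assumes a current GL 3.0+ context and omits error checking, and the two drawing helpers are hypothetical – showing the six per-face rendering passes:

```cpp
// Assumes an OpenGL 3.0+ context is already current; error checking omitted.
GLuint cube_tex = 0, fbo = 0;
glGenTextures(1, &cube_tex);
glBindTexture(GL_TEXTURE_CUBE_MAP, cube_tex);
for (int face = 0; face < 6; ++face)
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_RGB8,
                 256, 256, 0, GL_RGB, GL_UNSIGNED_BYTE, nullptr);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
for (int face = 0; face < 6; ++face) {
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, cube_tex, 0);
    // Each face is a full drawing pass: point the camera down that face's
    // axis and draw the world. Cheating with a simplified world is what
    // makes these six passes affordable.
    // set_camera_for_cube_face(face);   // hypothetical helper
    // draw_simplified_world();          // hypothetical helper
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
```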
Shadows
Shadows are the biggest missing feature in the sim’s rendering path, and they are also by far the hardest to code. I always hesitate to announce any in-progress code because there is a risk it won’t work. But in this case I can do so safely:
I have already coded global shadow maps, and we are not going to enable them in X-Plane. The technique just doesn’t work. The code has been ripped out and I am going to have to try again with a different approach.
The problem with shadows is the combination of two unfortunate facts:
- The X-Plane world is very, very big and
- The human eye is very, very picky when it comes to shadows.
For reflections, we can cheat a lot — if we don’t get something quite right, the water waves hide a lot of sins. (To work on the water, I have to turn the waves completely off to see what I’m doing!) By comparison, anything less than perfect shadows really sticks out.
Shadow maps fail for X-Plane because it’s a technology with limited resolution in a very large world. At best I could apply shadows to the nearest 500 – 1000 meters, which is nice for an airport, but still pretty useless for most situations.
(Lest someone send the paper to me, I already tried “TSM” – X-Plane is off by about a factor of 10 in shadow map res; TSM gives us about 50% better texture use, which isn’t even close.)
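To put rough numbers on the resolution problem (these figures are mine, for illustration only, not measurements from the sim): a fixed-size shadow map spread over a larger piece of the world yields proportionally coarser shadow texels.

```cpp
#include <cstdio>

int main() {
    // Hypothetical numbers: a 2048x2048 shadow map spread over
    // successively larger square regions of the world.
    const int map_res = 2048;                                  // texels per side (assumed)
    const double coverage_m[] = {1000.0, 10000.0, 100000.0};   // side length covered, meters
    for (double side : coverage_m) {
        double m_per_texel = side / map_res;
        std::printf("%8.0f m coverage -> %6.2f m per shadow texel\n", side, m_per_texel);
    }
    return 0;
}
```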
A user mentioned stencil shadow volumes, which would be an alternative to shadow maps. I don’t think they’re viable for X-Plane; stencil shadow volumes require regenerating the shadow volumes any time the relative orientation of the shadow caster and the light source changes – for a plane in flight, that is every single frame. Given the complexity of planes that are being created, I believe they would perform even worse than shadow maps; where shadow maps run out of resolution, stencil shadow volumes would bury the CPU and PCIe bus with per-frame geometry. Stencil shadow volumes also have the problem of not shadowing correctly for alpha-based transparent geometry.
(Theoretically geometry shaders could be used to generate stencil shadow volumes; in practice, geometry shaders have their own performance/throughput limitations – see below for more.)
Shadows matter a lot, and I am sure I will burn a lot more of my developer time working on them. But I can also say that they’re about the hardest rendering problem I’m looking at.
Dynamic Tessellation
Finally, I’ve spent some time looking at graphics-card based tessellation. This is a process whereby the graphics card splits triangles into more triangles to make curved surfaces look more round. The advantage of this would be lower triangle counts – the graphics card can split only the triangles that are close to the foreground for super-round surfaces.
The problem with dynamic tessellation is that the performance of the hardware is not yet that good. I tried implementing tessellation using geometry shaders, and the performance is poor enough that you’d be better off simply using more triangles (which is what everyone does now).
I still have hopes for this; ATI’s Radeon HD cards have a hardware tessellator and from what I’ve heard its performance is very good. If this kind of functionality ends up in the DirectX 11 specification, we’ll see comparable hardware on nVidia’s side and an OpenGL extension.
(I will comment more on this later, but: X-Plane does not use DirectX – we use OpenGL. We have no plans to switch from OpenGL to DirectX, or to drop support for Linux or the Mac. Do not panic! I mention DirectX 11 only because ATI and nVidia pay attention to the DirectX specification and thus functionality in DirectX tends to be functionality that is available on all modern cards. We will use new features when they are available via OpenGL drivers, which usually happens within a few months of the cards being released, if not sooner.)
Before I post anything to my blog saying what might happen, standard disclaimers:
- This blog represents my rambling about the directions I am considering for X-Plane’s rendering engine.
- This blog is not a promise or commitment of any kind to deliver any particular feature.
- If I say I am looking at doing feature X, and feature X does not materialize, either in the near or far future, or, like, ever, consider this to be one big fat “I told you so.”
With that in mind, I think the direction for lighting in version 9 is to introduce per-pixel lighting.
I don’t know what other set of features we’ll get with per-pixel lighting, but I am reviewing normal maps, specular maps, and the material attributes. Per pixel lighting will mean smooth, round, shiny looking surfaces without using a huge number of triangles.
Now there are two sets of hardware that will not be able to support per-pixel lighting:
- Cards without pixel shaders. (GeForce 2, 3, 4, Radeon 7000-9200.) You can tell your card does not have pixel shaders if the pixel shader check box is not available in the rendering settings.
- Cards with first generation shaders. (This is the GeForce FX series and the Radeon 9500-9800 and X300-X600.) These cards can actually perform per-pixel lighting, but they are so slow that per-pixel lighting will bring them below minimum frame-rate.
So unfortunately, there will be an authoring decision: add more triangles so that per-vertex lighting looks good, or use fewer triangles and rely on per-pixel lighting. The decision will depend on what hardware you want to target at what performance level. (For what it’s worth, hardware that cannot support per-pixel lighting usually isn’t very powerful, so there is something to be said for not having a lot of triangles on these lower end machines.)
X-Plane 8 provides a useful baseline for rendering technology:
- It is finished and unchanging.
- Its use of shaders is very minimal, so even lower-end hardware can show the “X-Plane 8 model” of lighting.
- X-Plane 8 rendering is completely supported in X-Plane 9. (That is, turn off shaders, and OBJs should look the same in X-Plane 8 and 9.)
So what do we have, and is it any good? Well, we have:
- Per-vertex lighting. Lighting is calculated per vertex, and interpolated between vertices.
- Very limited materials. Basically you can use attributes to set emissive lighting (so your day texture stays bright when back-lit, like taxiway signs) and shininess (to induce white specular hilites). The shininess ratio isn’t very flexible, but it does match what the built-in ACF shiny property does.
- Very fast vertex output within a batch.
I looked at some nice third party planes before writing this up, and one thing became clear: X-Plane can output a lot of vertices in an object if they are batched, and authors are using this aggressively. The advantage of just using a lot of vertices is: curved surfaces look round, the errors that are induced by per-vertex lighting are less ugly, and the object looks the same everywhere (because this path isn’t dependent on having pixel shaders).
The big weakness of the current situation is that you have to burn a lot of vertices to get close to per-pixel lighting, particularly for very shiny surfaces. I saw at least one plane (I do not recall who authored it) that just had more triangles in the engine nacelles than you could imagine. They look beautiful even in X-Plane 8 – great specular hilites. But that eats into your vertex budget pretty severely – it’s not a technique that you could use for every static airplane on a tarmac at LAX.
The triangle is at the heart of 3-d modeling – but before we discuss what might become of the triangle, we need terminology.
- Per-vertex lighting. This means that the brightness of the model (a function of the sun and camera position, etc.) is calculated for each vertex in the model, and then crudely interpolated between the vertices to light the pixels.
- Per-pixel lighting. This means that the brightness of the model (a function of the sun and camera position, etc.) is calculated for every pixel on the screen separately.
- Tessellation. This is the process of splitting a triangle into a number of smaller triangles, increasing the number of vertices in a model.
- Specular lighting. The specular lighting component is an extra amount of brightness that you get when the angle from the sun to the model to your eye is very small. (That is, if the model was a mirror and you could see the sun by looking at a certain location, then that location would have a bright “specular hilite”.)
- Normal map. A normal map is a texture that describes the way light bounces off a surface. This is one way to do “bump mapping”. This tutorial shows a pretty good example of how normal maps work. (The earth orbit textures in version 9 use normal maps to create “bumpy” mountains when pixel shaders are in use.)
- Specular map. A specular map is a texture that describes how strong the specular component of the lighting model appears for a given textured location. Here’s another tutorial that explains it.
- Environment Map. An environment map is a texture that represents the world around an object, used to simulate reflections. Here’s another blender tutorial that explains it better than I. (The reflective water in X-Plane 9 is effectively using a dynamic environment map created by taking a picture of part of the sim’s world every frame.)
- Material attributes. These are OBJ attributes that change the lighting model. For example, ATTR_shiny_rat changes the lighting model so that specular hilites appear.
- Batch. A batch is a single set of triangles sent to the graphics card without any change of mode. Basically every TRIS command in an OBJ becomes a batch; submitting a batch requires the CPU, but submitting a bigger batch (more triangles) does not require more CPU.
That’s enough vocabulary to describe just about everything that is happening now, will be happening in the future, as well as some pie-in-the-sky stuff. 🙂
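To make the per-vertex vs. per-pixel distinction concrete, here is a minimal sketch of the lighting math itself – standard Blinn-Phong diffuse plus specular, not X-Plane’s exact shader. Per-vertex lighting evaluates this once per vertex and interpolates the result; per-pixel lighting evaluates it for every pixel, optionally pulling the normal and shininess from normal and specular maps.

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

static Vec3 normalize(Vec3 v) {
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Diffuse + specular brightness for a single point on a surface.
double light_point(Vec3 normal, Vec3 to_sun, Vec3 to_eye, double shininess) {
    Vec3 n = normalize(normal), l = normalize(to_sun), v = normalize(to_eye);
    double diffuse = std::fmax(0.0, dot(n, l));
    // Blinn-Phong half vector: the specular "hilite" peaks where the
    // surface mirrors the sun straight into the eye.
    Vec3 h = normalize({l.x + v.x, l.y + v.y, l.z + v.z});
    double specular = std::pow(std::fmax(0.0, dot(n, h)), shininess);
    return diffuse + specular;
}

int main() {
    Vec3 n{0, 1, 0}, sun{0.3, 1, 0.2}, eye{-0.3, 1, 0.1};
    std::printf("brightness = %.3f\n", light_point(n, sun, eye, 64.0));
    return 0;
}
```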
Traditionally, a pilot’s priorities are: aviate, navigate, communicate.
But that might not be true for X-Plane for the iPhone.
It’s real! And it pretty much is X-Plane – there really are OBJs and DSFs in there, as well as an ACF model, all tuned for the iPhone.
In the next few posts I’ll blog a little bit about the impact of doing an iPhone port on scenery development. The iPhone is an embedded device; if you go digging for system specs you’ll see that it’s a very different beast from the desktop. The porting process really helped me understand the problems of the rendering engine a lot better, and some of the techniques we developed for the iPhone are proving useful for desktop machines as well.
An author asked me some questions that I think are so important that I’ll blog the answers:
- The new texture paging system (LOAD_CENTER) works for both terrain textures (.ter files) and draped polygons (.pol files). You do not have to use draped polygons to get texture paging – you can use paging in a base mesh!
- Orthophoto terrain via a (.ter) file is by far the preferred method for orthophoto sceneries – it is a vastly better option than draped polygons. Draped polygons are horribly wasteful of hardware resources, and should really only be used for tiny areas, e.g. airport surface areas. If you are using even a moderate amount of orthophotos, make a base mesh!
- MeshTool is the future of photo scenery, and will continue to be the way to make high performance orthophoto meshes for X-Plane.
The future of MeshTool is bug fixes, a richer syntax, and some day maybe a UI front-end.
I just received a series of reports today that certain converted scenery will cause X-Plane to crash with a “bad alloc” error. Basically, this couldn’t have hit us at a worse time. The final 920 was cut a week ago. We physically can’t recut; Austin is on the road, and I am knee deep in it. But there is a possible work-around, and there will be a patch. Here’s the whole situation.
What is a Bad Alloc?
A bad alloc error is an error that comes up when X-Plane runs out of memory. This can happen for two reasons:
- We have run out of address space – that is, there is no more virtual memory left, or
- We have run out of page file/physical memory – that is, we can’t back that virtual memory.
The first case is by far the most common – you’d only hit the second if you are on Windows with a fixed-size (but small) page file. (Hint: if you have a fixed size page file, make it big!)
X-Plane can run out of memory for many reasons – everything that runs in the sim uses memory, and the amount used depends on what area you are in, what rendering settings you pick, and what third party add-ons you use. While I’d like to someday reach a point when the sim tells you gracefully that it’s out of memory, it will always be a fact of life that at some point (hopefully an absurdly high one) the amount of stuff you’ve asked X-Plane to do will exceed how much memory you have.
(If you are thinking 64 bits, well, that will just change the problem from a crash to a grinding halt when we run out of physical memory.)
We see bad allocs when there are too many third party add-ons installed (XSquawkBox is a particular pig because it loads every CSL on startup), too complex scenery, and it can also be caused by drivers not efficiently using memory. (This is particularly a problem on Vista RTM.)
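For anyone who has not met the error, here is a tiny stand-alone program (not X-Plane code) that triggers the same C++ exception by exhausting memory; on a 32-bit machine it runs out of address space fairly quickly.

```cpp
#include <cstdio>
#include <new>
#include <vector>

int main() {
    std::vector<char*> blocks;
    try {
        // Keep allocating 256 MB blocks until the address space (or the
        // page file) runs out; operator new then throws std::bad_alloc.
        for (;;) {
            blocks.push_back(new char[256 * 1024 * 1024]);
            std::printf("allocated %zu blocks\n", blocks.size());
        }
    } catch (const std::bad_alloc&) {
        // This is the same exception X-Plane reports as a "bad alloc" error.
        std::printf("out of memory after %zu blocks\n", blocks.size());
    }
    for (char* b : blocks) delete[] b;
    return 0;
}
```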
The Bug
When X-Plane creates a curved airport taxiway, it allocates a temporary memory buffer to hold the intermediate product of the pavement. The size of that buffer depends on the complexity of the curve it is processing and a constant, based on the maximum curve smoothness.
In 920 I provided an option to crank up the curve smoothness in X-Plane. In the process, I increased that constant factor by 4x, which causes X-Plane to hit its memory ceiling on layouts that used to be acceptable. You’ll see this problem more often on:
- Bigger, more complex layouts.
- Configurations that were already chewing up a lot of memory.
- Machines with less address space (Windows without /3GB, older Mac OS X operating systems.)
What really suckered us about this bug was that it comes in a form that looks almost the same as a driver issue we’ve seen with ATI drivers on Windows — we’ve seen strange forms of memory exhaustion on ATI when shifting scenery with high rendering settings. So we didn’t realize that this was something new until G5 users reported the bug (making us realize it wasn’t a driver thing).
What To Do
The bad news is that we can’t do an RC5 – we’re out of time. But – there will be a patch – relatively soon. This bug is on the short list for a patch to fix 920.
In the meantime, there is actually a work-around. By coincidence, some of the internal rendering engine constants are viewable via the “private dataref” system — basically a series of datarefs in the sim/private/… domain that I use for on-the-fly debugging. The dataref that matters here is:
sim/private/airport/recurse_depth
If you load up DataRef Editor you’ll see it has a value of 12. That’s too high. Changing it to 10 will allow otherwise problematic airports to load.
I will try to post a plugin in the next 10 days that sets this dataref to 10 on startup, effectively patching the problem. This will also limit the maximum smoothness of curves – but my guess is that if you see the crash (not all users do) then you can’t run on the max airport curve setting anyway.
Of course the next patch will contain a real solution: a more efficient memory allocation scheme!
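In case it helps anyone rolling their own in the meantime, here is a sketch of what such a plugin might look like, written against the XPLM SDK. The plugin name and signature are made up, and it assumes the private dataref is a plain integer (which is what DataRef Editor shows); this is not the actual patch plugin.

```cpp
#include <cstring>
#include "XPLMDataAccess.h"
#include "XPLMDefs.h"

PLUGIN_API int XPluginStart(char* outName, char* outSig, char* outDesc) {
    std::strcpy(outName, "Recurse Depth Workaround");
    std::strcpy(outSig,  "example.recurse_depth_workaround");    // hypothetical signature
    std::strcpy(outDesc, "Sets sim/private/airport/recurse_depth to 10.");

    // Find the private dataref and lower it, same as editing it by hand
    // in DataRef Editor. Assumes an integer dataref.
    XPLMDataRef depth = XPLMFindDataRef("sim/private/airport/recurse_depth");
    if (depth != nullptr)
        XPLMSetDatai(depth, 10);
    return 1;
}

PLUGIN_API void XPluginStop(void) {}
PLUGIN_API int  XPluginEnable(void) { return 1; }
PLUGIN_API void XPluginDisable(void) {}
PLUGIN_API void XPluginReceiveMessage(XPLMPluginID, int, void*) {}
```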