For something like six or seven years, DSF has fundamentally been a vector scenery format, meaning it contains points, lines, and connections between lines that define how scenery looks. With X-Plane 10 we’ll be adding raster data to DSF.
One way we’ll use raster data inside DSFs is to store the raw elevation data for a DSF tile. Originally we saved only the final triangulation of the mesh in 3-d; we will now save the triangulation in 2-d* and the raw elevation, which X-Plane will put back together again.
We get a few wins from storing elevation separately:
After compression the files are actually smaller. This is because the data is more “regular” when stored in raster format than as part of the triangulation, and also because we don’t need to store normal vectors.
Since we’ll have the elevation data in its original form, we can use it to someday enhance the mesh for graphics cards that support hardware tessellation.
If raster data is a win in both quality and file-size, why didn’t we do this originally? Two reasons:
Originally DSFs shipped in zip files; the big win in compression with regular data comes from the more advanced 7-zip compression we started using to ship X-Plane 9.
Raster encoding means increased load time in the sim as it “puts the mesh back together”. Today in a multi-core world this is totally moot – DSF loading happens on another core, but originally DSFs had to be loaded on single-core machines, so load time was a key performance point.**
We will also be able to put other data into the DSFs, although I’m not sure what the final file set will be. Good candidates include bathymetry data and urban-density data to affect autogen.
Finally, we get a lot of requests from plugins to access X-Plane’s elevation data; with an irregular triangulation, access via plugin isn’t practical. But with raster data, the code to locate and view the raster block inside a DSF is actually pretty easy and the data comes in a simple, easy-to-use format. This might be useful for moving maps and other such technologies.
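To give a feel for why plugin access becomes practical: once the raster block is in memory, an elevation query is just a bilinear sample over a grid. This is a minimal sketch under assumed names and layout – the struct and its fields are my invention for illustration, not the actual DSF atom format.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Hypothetical container for a raster elevation block pulled out of a DSF.
struct ElevationRaster {
    int width = 0, height = 0;       // sample counts across the 1x1 degree tile
    double lon_min = 0, lat_min = 0; // southwest corner of the tile
    std::vector<float> samples;      // row-major, south-to-north

    // Bilinear sample; assumes (lon, lat) lies inside this tile.
    float elevation_at(double lon, double lat) const {
        double x = (lon - lon_min) * (width  - 1);
        double y = (lat - lat_min) * (height - 1);
        int x0 = std::min(static_cast<int>(std::floor(x)), width  - 2);
        int y0 = std::min(static_cast<int>(std::floor(y)), height - 2);
        double fx = x - x0, fy = y - y0;
        auto at = [&](int xi, int yi) { return samples[yi * width + xi]; };
        // Blend the four surrounding samples.
        return static_cast<float>(
            at(x0,     y0    ) * (1 - fx) * (1 - fy) +
            at(x0 + 1, y0    ) * fx       * (1 - fy) +
            at(x0,     y0 + 1) * (1 - fx) * fy       +
            at(x0 + 1, y0 + 1) * fx       * fy);
    }
};
```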
* Technically we store the triangulation as a flat 3-d mesh; DSF’s RLE encoding means that the all-zero elevation and normal-offset fields crunch down to nothing.
** The decision to make roads 2-d and set up their height at runtime is a similar decision; the original 3-d roads took up more DSF disk space to save load time.
My previous post went on a massive ramble about “wicked problems” and “knapsack” problems – the short version is that I’m working on DSF generation. Here are a few more pics from DSFs in drydock.
Not everything is unsolved problems; these pics show an algorithm that “removes noise” from digital land class data – the idea is a riff on this paper. I’m still not sure where it’s going to fit into the flow of data in scenery generation, but we’re looking for it to consolidate forest types.
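To illustrate the flavor of “noise removal” on land class data, here is a much simpler cousin of the idea – a generic majority filter, my own sketch and not the algorithm from the paper: replace each cell with the most common class among its neighbors, which wipes out single-cell speckle.

```cpp
#include <map>
#include <vector>

// Land class codes stored in a w-by-h row-major grid; returns a filtered copy.
std::vector<int> majority_filter(const std::vector<int>& lc, int w, int h) {
    std::vector<int> out(lc);
    for (int y = 1; y < h - 1; ++y)
        for (int x = 1; x < w - 1; ++x) {
            // Count votes from the 3x3 neighborhood.
            std::map<int, int> votes;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    ++votes[lc[(y + dy) * w + (x + dx)]];
            // Keep whichever class got the most votes.
            int best = lc[y * w + x], best_n = 0;
            for (const auto& v : votes)
                if (v.second > best_n) { best_n = v.second; best = v.first; }
            out[y * w + x] = best;
        }
    return out;
}
```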
What does autogen look like while it’s being born? It looks like this. For X-Plane 8 and 9, the autogen is built using bitmap technology – the DSF generator literally builds a bitmap image of a city block and tries to “fit” buildings by drawing them and checking for per-pixel overlaps. Version 10 changes this completely – autogen is built polygonally. In this picture, the blue polygons are strings of houses built around roads; where the terrain is too steep, the polygons are clipped.
Getting polygonal autogen set up to be fast enough for production use has been a battle, but we’re getting there.
The main web server that drives X-Plane.com went down for about an hour this evening. Well, really, the server was fine, but something went wrong in the colocation facility where it lives. I’m still waiting to find out what went wrong there, and I suspect we may be shopping for new colocation shortly.
In the meantime we’ve turned down the TTL on our DNS entry and set up a live mirror of the main website. This means that if the main server is kicked off the air again, we should be able to make the backup live very quickly. This matters because X-Plane’s updater finds the download servers from the main website. No main website, no way to update, even if the update servers are fine. We can also set a DNS fail-over to be automatic, but I think the real answer here is reliable colocation.
I’ve commented in a past post on the cost of vertices in an OBJ. Here’s a general thought on your 3-d modeling: consider using more vertices in your OBJs.
When I started using X-Plane as a user/third party developer (back in X-Plane 6) every quad was hand-coded and removing the floors from building cubes was a key optimization.
Fast forward a decade and things have changed. X-Plane can draw a lot of vertices. Go look at one of the big oil rigs or the aircraft carrier. Does your frame-rate blink? Probably not.
A few reasons to at least consider using more vertices:
If X-Plane is limited by CPU or fill-rate while showing your content, the vertices are basically “free” – the GPU can probably draw more vertices without hurting fps.
Since X-Plane 10 will feature dynamic shadows, it’s going to be a lot more obvious what’s really drawn in 3-d and what’s just a nicely painted flat surface.
Similarly, with global lighting, lighting on your scenery may come from many directions, including from multiple landing lights on the airplane. The multi-directional light will emphasize more correct 3-d.
Here are a few pictures of LOWI. Now we shipped LOWI as a demo area a long time ago. But this is LOWI with global shadows and lighting from X-Plane 10.
Looking back at LOWI, it could have even more 3-d; Sergio got a lot of leverage out of his textures. But the 3-d that is there helps make the shadowing work correctly.
Authors are already getting dinged in payware aircraft reviews for not going full 3-d in the panel (that is, for building parts of the panel via paint instead of real surfaces); I think we’ll reach a point where scenery is evaluated the same way.
Every time I talk to a 3-d modeler, I hear the same thing: the 3-d modeling is the quickest part; UV unwrapping and texture painting take a lot more time, and wiring up the model to systems and animation is worse than the 3-d too. So maybe it makes sense to model the 3-d in a little more detail.
If you do want to push the 3-d, please be aware of the following performance issues:
Object 3-d costs VRAM at a charge of 32 bytes per vertex. You can get a lot of mesh out of VRAM – 32k vertices per MB of VRAM – and you might use 8 MB for a 1024 x 1024 day/lit/normal texture set. (In other words, you can have 250k vertices for the cost of your textures.)
You pay for all of that VRAM even if the low LOD is drawn, but you pay nothing if the OBJ is skipped. So for seldom-used or one-time-used objects with huge vertex count (like an airplane fuselage or airport terminal) it might be better to have a “details” object that is fully separate from a main building. The details object can have a super-low LOD and you don’t pay for VRAM from 5000 feet in the air.
If an object is repeated a lot, the vertex count can be an issue. A 10k vertex object is not a big problem until it is drawn 5000 times on screen. So if your object is used a lot (like an autogen house or a sign attached to a road) make sure that the LOD with a lot of triangles has a low distance – see the sketch after this list. (This is how we can have such complex airport runway light fixtures and still have 8,000 per airport – the LOD distance is really low.)
Finally, one other tip for future-proofing with version 10: version 10 will have particular enhancements if the LODs of your OBJ are “additive” – that is, if the high-detail LOD is the low-detail LOD plus extra pieces. I’ll explain that in more detail when v10 is in beta, but basically if you do a basic building and then “decorate” it, you’ll be able to use a fast path in v10.
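Circling back to the repeated-object point above, here is the back-of-the-envelope math behind “make sure the high-triangle LOD has a low distance” – my own sketch, not sim code. The number of copies inside the LOD radius grows with the square of that radius, so halving the LOD distance cuts the high-detail vertex load by roughly 4x.

```cpp
// Rough vertex budget for one repeated object type, assuming copies are
// scattered at a uniform density and all copies inside the LOD radius are
// drawn at full detail.
double high_lod_vertex_load(double objects_per_km2,
                            double lod_distance_km,
                            double verts_per_object) {
    const double kPi = 3.14159265358979;
    double copies_in_range = objects_per_km2 * kPi
                           * lod_distance_km * lod_distance_km;
    return copies_in_range * verts_per_object;
}
// Example: 500 houses/km^2 with a 1 km LOD distance at 10k verts each is
// ~15.7M verts; pulling the LOD distance in to 300 m drops that to ~1.4M.
```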
“Unlimited Detail” is back – you can see the videos and read some criticism here. I have never seen a really good white paper on the technology, so I’m going to have to speculate a bit about what it is they’ve actually done, and then I’ll use the rest of the post to describe why this isn’t the only way to improve perceived realism in a game (and is not the most likely one to succeed).
But first: some video. This is Euclideon’s promo video, showing lots of really ugly polygonal models, and some clearly non-polygonal models with a lot of repeating things.
And here we have in-game footage from the upcoming Battlefield 3, using the Frostbite 2 engine.
I’m starting with the video because I had the same first reaction that I think a lot of other 3-d graphics developers had: attacking six-sided trees from Crysis 1 is a straw man; the industry has moved beyond that. Look at BF3: is the problem really that they don’t have enough polygons in their budget? Do you see anything that looks like a mesh?
What Is Unlimited Detail, Anyway?
The short answer is: I don’t know – the company has been quite vague about specific technical claims. This is what I think is going on from their promotional material.
Their rendering engine uses point cloud data instead of shaded, mapped, textured “polygonal soup” as the input data to be rendered. Their algorithm does high-performance culling and level of detail on the full-resolution point cloud data. (Whether this is done by precomputing lower-res point clouds for farther views, like we do now for textures and meshes, is not specified.)
Why Are Polygons Limiting?
First, we have to observe the obvious: a 1080p video image contains a bit over 2 million pixels; today’s video cards can easily draw 2 million vertices per frame at 30+ fps (even over the AGP bus!). So for a modern GPU, polygon count is not the operative limit. If you add more polygons, you can’t see them, because they become smaller than one pixel on the screen.
The limit for polygons is level of detail. If the polygonal mesh of your model is static, then when you walk away from it the polygons become too small in screen space (and we blow our budget drawing more than one vertex per screen pixel), and when you move in, the polygons are too big.
In other words, the problem with polygons is scalability, not raw count.
And in this, Euclideon may have a nugget of truth in their claims: if there is a general purpose algorithm to take a high-polygon irregular 3-d triangle mesh and produce a lower LOD in real time, I am not aware of it. In other words, you can’t tell the graphics card “listen, the airplane has a million vertices, but can you just draw 5,000 vertices to make an approximation.” Polygons don’t work like that.
Coping With Polygon Limit: Old School Edition
There’s a fairly simple solution to the problem of non-scalable polygons: you simply pre-create multiple versions of your mesh with different polygon counts – usually by letting your authoring system do this for you.* X-Plane has this with our ATTR_LOD system. It’s simple, and it sort of works.
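For the curious, the selection logic behind a substitution-LOD system like ATTR_LOD is about as simple as it sounds – roughly this, in sketch form (each ATTR_LOD <near> <far> pair in an OBJ marks one stored mesh variant):

```cpp
#include <vector>

// One stored mesh variant, valid over a [near, far) camera distance range.
struct LODRange { float near_d; float far_d; int mesh_id; };

// Pick whichever variant's range contains the camera distance.
int pick_lod(const std::vector<LODRange>& lods, float dist_to_camera) {
    for (const LODRange& l : lods)
        if (dist_to_camera >= l.near_d && dist_to_camera < l.far_d)
            return l.mesh_id;
    return -1;  // beyond the last LOD: draw nothing at all
}
```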
The biggest problem with this is simple data storage. I usually advise authors to store only two LODs of their models because each LOD takes up storage, and you get limited benefit from each one. Had really smooth LOD on objects been a design priority, we could have easily designed a streaming system to push the LODs of an object out to disk (just like we do for orthophoto textures), which would allow for a large number of stored LOD variants. Still, even with this system you can see that the scalability is so-so.
There’s another category of old-school solutions: CPU-generated dynamic meshes. More traditional flight simulators often use an algorithm like ROAM to rebuild meshes on the fly at varying levels of detail. When the goal is to render a height field (e.g. a DEM), ROAM gives you all of the nice properties that Euclideon claims – unlimited detail in the near view scaled out arbitrarily far in the far view. But it must be pointed out that ROAM is specific to height fields – for general purpose meshes like rocks and airplanes, we only have “substitution LOD”, and it’s not that good.
Don’t Repeat Yourself
It should be noted that if we only had to have one unique type of house in our world, we could create unlimited detail with polygons. We’d just build the house at 800 levels of detail, all the way from “crude” to “microscopic” and show the right version. Polygonal renderers do this well.
What stops us is that the mesh budget would be blown on one house; if we need every house to be different, LOD by brute force isn’t going to work.
That’s why the number of repeating structures in the Euclideon demo videos gives developers a queasy feeling. There are two possibilities:
When they built their demo world, they repeated structures over and over because it was a quick way to make something complex, then saved the huge resulting data set to disk.
They stored each repeating part only once and are drawing multiple copies.
If it’s the second case, that’s just not impressive, because games can do this now – it’s called “instancing”, and it’s very high performance. If it’s the first case, well, that was just silly – if their engine can draw that much unique detail, they should have filled their world with unique “stuff” to show the engine off.
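For reference, here is roughly what instancing looks like on the engine side in OpenGL terms – a sketch that assumes the mesh and per-instance transforms are already set up in a VAO elsewhere:

```cpp
#include <GL/glcorearb.h>  // or your GL loader of choice (glad, GLEW, ...)

// One copy of the mesh in a buffer, one draw call, many placements.
void draw_repeated(GLuint vao, GLsizei index_count, GLsizei copy_count) {
    glBindVertexArray(vao);  // mesh + per-instance transform attributes
    // The GPU renders copy_count copies, varying gl_InstanceID per copy.
    glDrawElementsInstanced(GL_TRIANGLES, index_count,
                            GL_UNSIGNED_INT, nullptr, copy_count);
}
```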
Where Does Detail Come From?
Before we go on to how modern games create scalable polygonal meshes, we have to ask an important question: where do these details come from?
The claim of infinite detail is that you would build your world in ridiculously high resolution and let the engine handle the down-sampling for scalability automatically. I don’t think this is a realistic view of the production process.
For X-Plane, the limit on detail is primarily data size. We already (for X-Plane 9) ship a 78 GB scenery product. But it’s the structure of that detail that is more interesting.
The scenery is created by “crossing” data sets at two resolutions to create the illusion of something much more detailed. We take the mesh data (at 90m or worse resolution) and texture it with “landclass” textures – repeating snippets of terrain texture at about 4 meters per pixel. The terrain is about 78 GB (with 3-d annotations, uncompressed) and the terrain textures are perhaps 250 MB. If we were to simply ship 4-meter-per-pixel orthophotos for the world, I think we’d need about 9.3 trillion pixels of texture data (the Earth has roughly 149 million square km of land, and at 4 meters per pixel each square km costs 62,500 pixels).
I mention this because crossing multiple levels of detail is often both part of an authoring process (I will apply the “scales” bump map to the “demon” mesh, then apply a “skin” shader) and how we achieve good data compression. If the “crossed” art elements never have to be multiplied out, we can store the demon at low res, and the scales bump map over a short distance. There can be cases where an author simply wants to create one huge art asset, but a lot of the time, large scale really means multiple scale.
Coping With Polygon Limit: New School Edition
If we understand that art assets often are a mash-up of elements running at different scales, we can see how the latest generation of hardware lets us blow past the polygon limit while keeping our data set on disk small.
DX11 cards come with hardware tessellation. If our mesh becomes detailed via a control mesh, curve interpolation (e.g. NURBS or whatever) and some kind of displacement mapping, we can simply put the source art elements on the GPU and let the GPU multiply it out on the fly, with variable polygon resolution based on view angle. Since this is done per frame, we can get the right number of polygons per frame.**
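In OpenGL 4 terms (the GL spelling of the same hardware feature), the host side of that path is tiny; the interesting work happens in the tessellation shader stages, which this sketch assumes are compiled into the program already:

```cpp
#include <GL/glcorearb.h>  // or your GL loader of choice (glad, GLEW, ...)

// Draw a coarse control mesh as patches; the tessellation control/evaluation
// stages subdivide it and sample the displacement map, per frame.
void draw_tessellated(GLuint program, GLuint vao, GLsizei control_points) {
    glUseProgram(program);           // includes tess control + eval stages
    glBindVertexArray(vao);          // the coarse control mesh only
    glPatchParameteri(GL_PATCH_VERTICES, 3);      // triangle patches
    glDrawArrays(GL_PATCHES, 0, control_points);  // GPU expands on the fly
}
```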
Since DX10 we’ve had reasonably good flow control in shaders, which allows for displacement mapping and other convincing promotion of 2-d detail to 3-d.
So we can see a choice for game engine developers:
Switch to point cloud data and a new way of rendering it. Use the art tools to generate an absolutely ginormous point cloud and trust the scalability of the engine or
Switch to DX11, push the sources of the art asset to the GPU, and let the GPU do the data generation in real-time.
The advantage of pushing the problem “down to the GPU” (rather than moving to point clouds) is that it lets you ship the smaller set of “generators” for detail, rather than the complete data set.
Euclideon does mention this toward the end of their YouTube video, when they try to categorize art assets into “fiction” (generated by art tools) and “non-fiction” (generated by super-high-resolution scanners).
I don’t deny that if your goal is “non-fiction” – that is, to simply high-res scan a huge world and have it in total detail – then not even clever DX11 tricks are going to help you. But I question how useful that is for anyone in the games industry. I expect game worlds to be mostly fiction, because they can be.
If I build a game world and I populate my overpasses with concrete support pylons, which am I going to do?
Scan hundreds of thousands of pylons all around San Diego so I can have “the actual concrete”? or
Model about 10 pylons and use them over and over, perhaps with a shader that “dirties them up” a little bit differently every time based on a noise source?
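That “dirties them up” option is cheap to implement; hashing each instance’s placement into a variation value is a standard trick. This is a generic sketch, not any particular engine’s code:

```cpp
#include <cstdint>

// Cheap integer hash of an instance's grid position -> [0,1) variation value.
float instance_variation(int32_t grid_x, int32_t grid_y) {
    uint32_t h = static_cast<uint32_t>(grid_x) * 0x85ebca6bu
               ^ static_cast<uint32_t>(grid_y) * 0xc2b2ae35u;
    h ^= h >> 16;   // mix the bits so nearby instances diverge
    h *= 0x27d4eb2du;
    h ^= h >> 15;
    return (h & 0xFFFFFFu) / 16777216.0f;  // low 24 bits -> [0,1)
}
// Feed the result into a dirt/noise texture lookup in the shader, and every
// pylon weathers a little differently from one mesh and one texture.
```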
There are industries (I’m thinking GIS and medical imaging) where being able to visualize “the real data set” is absolutely critical – and it may be that Euclideon gains traction there. But for the game development pipeline, I expect fiction, I expect the crossing of multiple levels of detail, and I expect final storage space to be a real factor.
Final Id Thoughts
Two final notes, both regarding Id software…
John Carmack has come down on the side of “large art assets” as superior to “procedural generation” – that is, between an algorithm that expands data and having the artists “just make more”, the latter is preferable. The thrust of my argument (huge data sets aren’t shippable, and the generators of detail are being pushed to the GPU) seems like it goes against that, but I agree with Carmack for the scales he is referring to. Procedural mountains aren’t a substitute for real DEMs. I think Carmack’s argument is that we can’t cut down the amount of game content from what currently ships without losing quality. My argument is that we can’t scale it up a ton without hitting distribution problems.
Finally, point clouds aren’t the only way to get scalable non-polygonal rendering; a few years ago everyone got very excited about Sparse Voxel Octrees (SVOs). An SVO is basically a 3-d texture with transparency for empty space, encoded in a very clever and efficient manner for fast rasterization and high compression. Will SVOs replace polygons? I don’t know; I suspect that we can make the same arguments against SVOs that we make against point clouds. I’m waiting to see a game put them into heavy use.
* E.g. the artist would model using NURBS and a displacement map, then let the 3-d tool “polygonalize” the model with different levels of subdivision. At high subdivision levels, smooth curves are smooth and the displacement map provides smaller detail.
** The polygon limit also comes from CPU-GPU interaction, so when final mesh generation is moved to the GPU we also just get a lot more polygons.
I don’t usually post random links but Indi sent me this TED talk, and I just thought it was great. Procedural mountain ranges and rogue stock trading algorithms…what could go wrong? 🙂
This is a screenshot of Javier’s new version of the X-15 for X-Plane 10. In this case I have hacked the rendering engine to show the specular channel* (the alpha channel) of Javier’s normal map as the texture of the airplane. In other words, that is the per-pixel shininess that Javier “drew into” the normal map. There isn’t any lighting on the airplane; the bright edges are simply parts of the plane that are completely glossy.
Just look at how gnarly and detailed and full of goo it is! When you look at the plane under normal lighting conditions you simply see the regular texture. But when the sun reflects off of the plane, the reflection is messed up by this complex specularity pattern. The fact that the sun reflections change unpredictably and dynamically is what sells the illusion.
I mention this because normal maps are expensive – they aren’t compressed and can chew up 4 or 16 MB of VRAM easily – they have to be at high resolution to get the subtle bump details. As long as you’re going to have the resolution, make use of it by putting “texture” into the specular channel – it’ll make your materials seem a lot more complex.
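To be concrete about what that alpha channel is doing, here is the shape of the shading math – a standard Blinn-Phong-style term as a sketch, not X-Plane’s actual shader code:

```cpp
#include <algorithm>
#include <cmath>

// Per-pixel specular contribution: the alpha-channel level scales the
// highlight's brightness, while the exponent stays fixed (per the footnote
// below about X-Plane's terminology).
float specular_term(float n_dot_h,     // dot(normal, half-vector)
                    float spec_level)  // alpha channel sample, 0..1
{
    const float kFixedExponent = 64.0f;  // sim-chosen, not author-controlled
    float highlight = std::pow(std::max(n_dot_h, 0.0f), kFixedExponent);
    return highlight * spec_level;       // the "goo" modulates brightness here
}
```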
X-Plane 10: X-Plane 10 will allow you to use a gray-scale PNG as a specular-only image, for this kind of “texture” at 1/4 of the VRAM cost, in case you don’t need the actual bump mapping.
* 3-d nerd note: X-Plane’s terminology is different from what you’d see in a typical 3-d modeler’s materials editor. What we call “shininess” is the specular level – that is, how bright specular highlights appear to be. In a 3-d editor this is usually an RGB color, but X-Plane only gives you a single level control; the specular highlights take on the tint of the sun instead.
The “shininess ratio” or “specular exponent” you’d see in a 3-d editor isn’t available in X-Plane – it is set to a fixed exponent by the sim. The unconventional naming is a historical artifact.
Sigh…I think the DDS gamma mess is now fixed. This mess had two parts:
Since X-Plane treated DDS as having a gamma of 1.8, all DDS material had to be gamma corrected. As it turns out, the error from gamma correcting a DXT-compressed texture is surprisingly small, but it was still silly to use anything other than sRGB given today’s computing environment.
In investigating this, I discovered that XGrinder/DDSTool from the scenery tools distro are inconsistent; they write out DDS using a gamma of 1.8 on Mac and 2.2 on Windows, resulting in overly bright DDS for authors using Windows.
This is now all fixed in the new DDSTool/XGrinder:
A bit in the DDS file is used to specify what gamma the file was written with: 1.8 (v9) or 2.2 (good for v10).
X-Plane 10 will look at this flag and correct gamma accordingly (sketched after this list).
The choice of gamma is a menu setting, and is selectable on both Mac and Windows.
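The correction itself is simple per-channel math. A minimal sketch of the idea – not X-Plane’s actual decode path – assuming channel values normalized to 0..1:

```cpp
#include <cmath>

// Re-encode a channel written at one gamma into 2.2 space: decode to linear
// light with the gamma the file was written at, then encode forward at 2.2.
float regamma(float stored, float file_gamma /* 1.8 or 2.2, per the flag */) {
    float linear = std::pow(stored, file_gamma);  // back to linear light
    return std::pow(linear, 1.0f / 2.2f);         // forward to 2.2 space
}
```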
I’m not sure when we’ll have a binary distribution, but the code is in the public Git repo if anyone wants it. The new tool should be better for authors targeting both v9 and v10.
(begin rant)
And for those who want to know why we aren’t communicating more, this is the reason why. A release is made up of a huge number of small details, none of which is particularly sexy, and all of which have to be made right to create an overall effect that is immersive and beautiful.
The saying in aviation is: “aviate, navigate, communicate”; that seems like a good policy for X-Plane 10 too. I’m going to try to fix as many of these small issues as possible so we can have a functional beta, then we can talk about it.
While I’m sure you all enjoy this blog…and perhaps even read and re-read it before bed because it’s life altering, it tends to be a bit….um….nerdy? Come on, it’s safe to admit it, even my wife makes fun of me when I mention the blog. Anyway, if you’re looking for NEWS about X-Plane in addition to the nerdy/geeky/dorky details that we post here, we now have a news “blog” at http://www.x-plane.com/news. You can subscribe there, follow the RSS feed or do whatever you normally do. That’s where we’ll be posting updates about all of our products.
Yeah, I know it’s a bit dusty and stale, but we’re doing our best to clean off the dust and use it now that it’s a WordPress site and not a pain-in-the-butt-to-edit HTML site.