In my previous post I drew an analogy between a scenery system with its file formats and a turtle within its shell. We are limited by DSF, so we are making a new file format for base meshes, one that lets us and all add-on developers expand the scope of our data and make better scenery for X-Plane.
The really big change we are making to base meshes is to go from a vector-centric to a raster-centric format. Let’s break that down and define what that means.
Vectors are fancy computer-graphics talk for lines defined by their mathematical end-points. (Pro tip: if you want to be a graphics expert, you just need the right big words. Try putting the word anisotropic in front of everything, people will think you just came from SIGGRAPH!) DSF started as an entirely-vector format:
- All 3-d clutter is defined by lat/lon locations, so we have the vector outline of polygons, autogen blocks, etc.
- The base mesh is pre-triangulated, so most base-mesh features are defined by the corners of the triangles, which form the lines between features (e.g. between land and water).
This isn’t the only thing DSF does – we added raster capabilities later, and X-Plane 12 ships raster sound and season data, for example – but DSF is fundamentally about vector data: saying exactly where the edges of things go.
This was great for a while, but now that we have more and more vector data (complex coastlines, complex road grids, complex building footprints) the DSFs are getting too big and slow for X-Plane.
Raster data is any data stored in a 2-d grid. This includes images (which in turn include orthophotos), but it also includes 2-d height maps (DEMs) and the 2-d raster data we include in DSF now (e.g. sound and season raster data). Any time we store numbers that mean something in a 2-d array, we have raster data.
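To make the “numbers in a 2-d array” idea concrete, here is a minimal sketch; the values and grid are invented purely for illustration:

```python
import numpy as np

# A tiny "DEM" raster: each cell holds an elevation in meters for one grid point.
# Real data would be vastly larger; these numbers are made up.
elevation = np.array([
    [12.0, 14.5, 15.0, 13.2],
    [11.8, 13.9, 16.4, 14.0],
    [10.5, 12.2, 15.1, 13.8],
], dtype=np.float32)

# An orthophoto is the same idea with color channels instead of heights:
# shape (rows, cols, 3) for RGB.
ortho = np.zeros((3, 4, 3), dtype=np.uint8)

# "Raster" just means: look up a value by its row/column position on the grid.
print(elevation[1, 2])   # elevation at row 1, column 2 -> 16.4
```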
Raster data has several advantages over vectors:
- Raster data is what the GPU wants to consume.
- Raster data has really good LOD characteristics for close detail with long view distances.
- We can put more interesting and dense information into a raster tile without it getting bigger.
Twenty years ago, when I first worked on DSF, computers didn’t have the capacity to use lots of raster data – this was back when 8 MB of VRAM was “a lot”. But now we no longer need to depend on vectors for space savings.
Raster tiles are raster data broken into smaller tiles that get pieced together. Raster tiles have become the standard way to view GIS data – if you’ve used Apple Maps or Google Earth or OpenStreetMap or any of the map layers in WED, you’ve used raster tiles.
Raster tiles have a bunch of advantages too:
- They have really great LOD/VRAM usage properties.
- They can be loaded incrementally.
- They provide an easy way to vary resolution and let authors skip providing data that they don’t need to provide. (E.g. “forest” raster data over the ocean? Just don’t provide any tiles!)
So our plan for the next-generation base mesh is “all raster tiles, all the time” – we’d like to have elevation data, land/water data, vegetation location data, as well as material colors all in raster tile form. This would get us much better LOD/streaming characteristics but also provide a very simple way for custom scenery packs to override specific parts of the mesh at variable resolution with full control.
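The post doesn’t spell out how a layered, variable-resolution lookup would work, but the general shape of the idea might look like the sketch below. Everything here is hypothetical – the tile keys, layer names, and parent-fallback rule are illustrations, not X-Plane’s actual scheme:

```python
from typing import Optional

def fetch_tile(stores, layer, z, x, y) -> Optional[bytes]:
    """Return the best available tile for (layer, z, x, y).

    'stores' is searched in priority order (custom packs first, then the
    default mesh), so a pack overrides exactly the tiles it provides, at
    whatever resolution it provides them. A tile missing at the requested
    zoom falls back to its lower-resolution parent, which is how an author
    could simply skip tiles they don't need (e.g. no 'forest' tiles over
    the ocean).
    """
    while z >= 0:
        for store in stores:
            tile = store.get((layer, z, x, y))
            if tile is not None:
                return tile
        z, x, y = z - 1, x // 2, y // 2   # parent tile covers 4x the area
    return None

# Usage sketch: a custom pack supplies a single high-res elevation tile.
custom  = {("elevation", 14, 8711, 5421): b"high-res airport patch"}
default = {("elevation", 10,  544,  338): b"default terrain"}

print(fetch_tile([custom, default], "elevation", 14, 8711, 5421))  # pack wins here
print(fetch_tile([custom, default], "elevation", 14, 8712, 5421))  # falls back to default
```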
Raster tiles are not the same thing as orthophotos. A raster tile is any data contained in a 2-d array, not just image data cut into squares (e.g. orthophotos). So while a raster-tile system may make it easier to build orthophoto scenery, it does not mean that the scenery can only be orthophotos.
I have to admit, as someone from the 1980s, the idea of raster anything being both faster and higher resolution than vectors is baking my noodle a bit, but like, if you make it work, rock on.
Will this method also give you the ability to “smooth out” the terrain outside of airport boundaries at close distances?
It will make more precise ‘mesh fixing’ for overlays possible.
As in “carve this deep along this line for the storm-water drain” or “here’s high res elevation data for the airport, please blend it in with the rest of the terrain”?
The second thing. We _could_ have “editing” overlay operators, but I think “absolute patches” will be more useful because they don’t need to know anything about what’s under them.
I think it’s awesome that you’re committing time to make these blog posts – a lot of us look forward to them every week, and they build a lot of excitement for the future!
Thanks again!
Can you elaborate on the LOD/VRAM usage properties? Because intuitively, it should be the opposite. Typical vector data has a smaller memory footprint than typical raster data, if we’re talking about stuff like height maps or big geometry, like buildings or roads. If you have some geometrically big feature on the terrain, with vectors you just need 3 floats per vertex, and all the space in between is filled with raster pixels during rasterization. With raster data, you have to have data for every point on a grid. If you want finer detail, you need a higher-density data grid. So you “waste” memory on all those areas with no features (like every data point representing the presence and type of forest, as opposed to a vector polygon where forest is everywhere inside it).
If we could only afford a little bit of low res data, vectors would rule because they are ‘naturally sharp’. But LOD schemes for vectors are really hard.
* The data density of high-res vectors can be unpredictably and unboundedly higher than that of low-res ones.
* The quality of low-res vectors can be unpredictably and unboundedly worse than that of high-res ones.
* The visual ‘jump’ between the two can be unpredictably different.
By comparison, with raster data you have a fixed, even set of LOD intervals where more data gives us better quality in a predictable way.
So yes, the *best* case of vectors might be “this is cheaper than anything you can imagine because the real world shape is very simple and we capture the lack of entropy in vector vertices.”
In practice, we usually hit the *worst* case of vectors: the real world is more complex than we can possibly visualize, and the act of reducing data quality to get the vectors down to budget produces artifacts.
With raster tiles, we have a very simple “double the res, quadruple the RAM” scheme that we can apply from near to far. Because there is very little terrain so close that it must be high res, we actually get a ton of savings, and we can fit the entire world into a *fixed budget* that is not that high.
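A rough back-of-envelope sketch of why that budget stays bounded. This assumes a clipmap-style scheme (a fixed ring of tiles resident per LOD level around the camera); every number below is invented just to show the shape of the math, not an X-Plane figure:

```python
# Each coarser level halves the resolution but doubles the ground size of a
# tile, so coverage grows exponentially while resident memory grows only
# linearly with the number of levels.

TILE_PX         = 256       # texels per tile edge (assumed)
BYTES_PER_PX    = 4         # e.g. RGBA8 (assumed)
TILES_PER_LEVEL = 8 * 8     # tiles kept resident around the camera per level (assumed)
FINEST_M_PER_PX = 0.5       # ground resolution of the finest level (assumed)
LEVELS          = 10        # each level is half the res of the previous one

tile_bytes  = TILE_PX * TILE_PX * BYTES_PER_PX
total_bytes = LEVELS * TILES_PER_LEVEL * tile_bytes

coarsest_m_per_px = FINEST_M_PER_PX * 2 ** (LEVELS - 1)
covered_km = coarsest_m_per_px * TILE_PX * 8 / 1000   # width of the coarsest ring

print(f"resident budget: {total_bytes / 2**20:.0f} MiB")        # ~160 MiB, fixed
print(f"finest detail:  {FINEST_M_PER_PX} m/px near the camera")
print(f"coverage:       ~{covered_km:.0f} km at the coarsest level")
```

The point of the sketch is the scaling: doubling the view distance adds one more level (a constant amount of memory), instead of quadrupling the data the way a uniform-resolution grid would.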
Ben,
Thanks for the explanation! That makes a bunch of sense.
So would this mean we could edit things like an elevation map or landclass map simply with GIMP?
It would mean you could prepare an overlay to the elevation map and turn it into ‘overlay tiles’ with our tools.
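For anyone curious what “turn it into overlay tiles” might look like mechanically, here is a hedged sketch using a plain grayscale heightmap. The tile size and file handling are invented for illustration, and this is not Laminar’s actual tool chain:

```python
import numpy as np
from PIL import Image

TILE = 256  # hypothetical tile edge length in pixels

def cut_heightmap_into_tiles(path):
    """Slice an edited grayscale heightmap (e.g. exported from GIMP as a
    16-bit PNG) into fixed-size blocks keyed by (tile_x, tile_y)."""
    dem = np.asarray(Image.open(path))          # 2-d array of height samples
    tiles = {}
    for ty in range(0, dem.shape[0], TILE):
        for tx in range(0, dem.shape[1], TILE):
            tiles[(tx // TILE, ty // TILE)] = dem[ty:ty + TILE, tx:tx + TILE]
    return tiles
```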
I love it
Ben, does this mean that in the future system the “resolution” of the base mesh will be fixed at some arbitrary value? In other words, the dimensions of that 2D array for each raster tile will be set by X-Plane?
I’m NOT a graphics person, but again, back in the 1980s one of the huge advantages of vector data/graphics was the escape from a fixed resolution.
Right now there are “high-definition” vector meshes available either commercially or via the community such as the HDMesh (a heroic piece of work). In the new “raster-farian” future will such hi-def replacement data still be possible, and if so how would this work?
In the 1980s vector was a huge win because total memory was really low, so the vectors were “blocky but sharp” and raster was “just really low res”.
Those days are gone – we have a ton more budget. In that world, the scalability of raster tiles trumps vectors’ tendency to get out of hand at high detail.
I don’t know what the highest mesh res will be, but generally the raster tile scheme _scales really well_ – only the nearest tiles have to be at the actual highest res. So we can afford a lot of res up close without exploding the budget.
In the new scheme a high-def replacement is _just_ the elevation data, tiled. So the bar to increase this res is much lower because there isn’t a need to bake the rest of the DSF in with it – raster data is sorted by layer.
Are we describing a sortakinda voxel system? (also popularised in the 80’s!)
No it’s arrays of 2-d tiles at different resolutions – sort of like what drives the OSM map on the web.
Hi Ben,
When you say ‘next-generation base mesh’, is this something we can look forward to in X-Plane 12?
Cheers
So, in general whenever anyone asks a question that’s designed to answer the question “is this feature going to be a free upgrade or a paid upgrade” I do my best to *NOT ANSWER*. 🙂 🙂 🙂
I think it is worth noting that _support for the format_ can come sooner than _LR ships some raster tiles_, and they don’t have to have the same business models. With that in mind, “no comment”.
Hi Ben,
Two small questions…
1) Are you going to be able to provide a full Earth terrain with that new system, pole to pole? That would be huge. Not that it is vital but it would be a bit underwhelming not having that. If that makes sense.
2) Any idea how many months, sleepless nights, and how much alcohol that transition is going to require? Will you be on the “consumption level” of the Vulkan renderer rework back then? Or is it much easier?
Cheers! 🙂
The new system covers the poles well – it’s designed on a cube mapping of the earth and not a plane, which makes the poles no longer a nightmare. And since it’s variable res, we can have the poles without spending tons of install space on them.
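The post doesn’t give details of the cube mapping, but the general technique looks roughly like this sketch; the face naming and orientation are arbitrary illustrations, not X-Plane’s actual projection:

```python
import math

def latlon_to_cube_face(lat_deg, lon_deg):
    """Map a lat/lon to one of six cube faces plus (u, v) coordinates in [-1, 1].

    Because every point, including the poles, lands in the middle of a flat
    face, there is no singularity like the one a lat/lon grid has at 90°.
    """
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    # Point on the unit sphere.
    x = math.cos(lat) * math.cos(lon)
    y = math.cos(lat) * math.sin(lon)
    z = math.sin(lat)
    ax, ay, az = abs(x), abs(y), abs(z)
    if az >= ax and az >= ay:                 # polar faces
        face, u, v = ("+Z" if z > 0 else "-Z"), x / az, y / az
    elif ax >= ay:                            # faces around the equator
        face, u, v = ("+X" if x > 0 else "-X"), y / ax, z / ax
    else:
        face, u, v = ("+Y" if y > 0 else "-Y"), x / ay, z / ay
    return face, u, v

# The north pole maps cleanly onto the center of a face:
print(latlon_to_cube_face(90.0, 0.0))   # -> ('+Z', ~0.0, 0.0)
```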
I think _having raster tile support_ will be *significantly easier* than the Vulkan port. But this is only the first step on the next-gen scenery road map.
This sounds great, Ben. And it seems like this could open up the flood gates for a lot of scenery tools that weren’t possible before.
Is this planned for XP12?
Thanks for the blog posts. Always a fun read!
Do I understand your post correctly that you will completely switch to raster for DSFs, or will there still be a dualistic world like Google Maps, where most base imagery is raster-based and streets or place names are still vector-based?
There will be a transition period where vector overlays are DSF over raster tiles.
And there will be DSF backward compatibility for DSF overlays and DSF meshes in both cases.
Hi Ben,
Very interesting, thanks for sharing. There are probably two aspects of scenery size: the size of the tiles that are displayed and have to go into VRAM, and the size of the tiles for the entire planet that will go onto the user’s disk.
Is “all raster tiles, all the time” also to be used for vector data including for example roads? Or only color (I had argued before to use s2maps.eu maybe with super-resolution), seasons, DEM etc?
I understand from OSM that tiles for the entire planet in ZL19 would be about 56TB. And ZL19 has only ~30cm resolution per pixel. So maybe not enough for sharp borders of roads or driveways in a rasterized image? A quick estimate shows that vectorized road information of the entire planet is about 4000x smaller than a full set of ZL19 tiles.
OSM also does not store most of the tiles for the web; they create them on the fly, on demand. That is because in reality users look at only a very few of the tiles at that level. OSM stores only about 0.023% of all ZL19 tiles, they say. Is that also the plan for LR? Do you foresee that people will still be able to download the scenery, or does the planned change require a streaming service?
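For reference, the zoom-level figures in the comment above can be sanity-checked with standard web-map tile math (256-px tiles assumed):

```python
EARTH_CIRCUMFERENCE_M = 40_075_017   # at the equator
TILE_PX = 256
ZL = 19

tiles_per_side = 2 ** ZL
total_tiles = tiles_per_side ** 2
m_per_px_at_equator = EARTH_CIRCUMFERENCE_M / (tiles_per_side * TILE_PX)

print(f"ZL{ZL}: {total_tiles:.2e} tiles")                        # ~2.75e11 tiles
print(f"~{m_per_px_at_equator:.2f} m per pixel at the equator")  # ~0.30 m
```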
I have no predictions on the distribution model. But I can say that tiles _are_ streaming friendly, and _add-on_ makers want this now. AutoOrtho, for example, is an attempt to get to streaming without format support from LR.
When can we expect to see the transition to the raster format?
Sorry, this is in the wrong post (that post is now closed), but relating to the “Screenshot Utility to control Depth of Field and Exposure in real time”… I found that the “contrast” is 100% out on screenshot images – does this update correct this?