
I have two fundamental “rants” about flight simulator scenery, and the way people discuss it, market it, compare it, and evaluate it. The rants basically go like this:

  • Mesh resolution (that is, the spacing between elevation points in a mesh) is a crude way to measure the quality of a mesh. It is horribly inefficient to use 5m triangles to cover a flat plateau just because you need them for some cliffs.
  • At some point, the data in a very high resolution mesh becomes misleading. You have a 5m mesh. Great! Are you measuring a 5m change in elevation, or is that a parked car that has been included in the surface?

X-Plane uses an irregular mesh to efficiently use small triangles only where they are needed. I have some pictures on this here.

But it brings up the question: how good is a mesh? If you make a base mesh with MeshTool using a 10m input DEM (the highest resolution DEM you can use right now), the smallest triangles might be 10m. But the quality of the mesh is really determined by the mesh’s “point budget” – that is, the number of points MeshTool was allowed to add to minimize error.

MeshTool beta 4 will finally provide authors with some tools to understand this: it will print out the “mesh statistics” – that is, a measure of the error between the original input DEM and the triangulation. Often the error* from using only 1/6th of the triangles from the original DEM might be as little as 1 meter.
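To make “mesh statistics” concrete, here is a minimal sketch of how such an error metric can be computed – my reconstruction for illustration, not MeshTool’s actual code; mesh_elevation_at() is a hypothetical callback standing in for the real point-in-triangle interpolation:

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct ErrorStats { double worst; double std_dev; };

    // Compare every post of the input DEM against the triangulated mesh.
    // 'dem' is the raster, row-major; mesh_elevation_at() interpolates the
    // mesh at a post (hypothetical stand-in).
    ErrorStats mesh_error(const std::vector<double>& dem, int width, int height,
                          double (*mesh_elevation_at)(int x, int y))
    {
        double worst = 0.0, sum_sq = 0.0;
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
            {
                double err = dem[y * width + x] - mesh_elevation_at(x, y);
                worst = std::max(worst, std::fabs(err));
                sum_sq += err * err;
            }
        // Standard deviation of the error, assuming the mean error is ~0.
        return { worst, std::sqrt(sum_sq / (width * height)) };
    }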

I spent yesterday looking at the error metrics of the meshes MeshTool creates. I figured if I’m going to show everyone how much error their mesh has with a stats printout, I’d better make sure the stats aren’t terrible! After some debugging, I found a few things:

  • Vector features induce a lot of “error” from a metrics standpoint. Basically, when you introduce vector features, you limit MeshTool’s ability to put vertices where they need to be to reduce meshing error. The mesh is still quite good even with vectors, but if you could see where the error is coming from, the vast majority would be at vector edges.

    For example, in San Diego the vector water is sometimes not quite in the flat part of the DEM, and the result is an artificial flattening of a water triangle that overlaps a few posts of land. If that land is fairly steep (e.g. it gains 10+ meters of elevation right off the coast) we’ll pick up a case where our “worst” mesh error is 10+ meters. The standard deviation will still be small, though, because only a handful of posts along the vector edges are affected.

  • The whole question of how we measure error must be examined. My normal metric is “vertical” error – for a given point, how much does the elevation differ? But we can also look at “distance” error: for a given point, how far is the nearest point on the mesh from the ideal DEM?

    The “distance” metric gives us lower error statistics. The reason is that when we have a steep cliff, a very slight lateral offset of a triangle results in a huge vertical error, since moving 1m to the right might drop us 20 meters down. But…do we care about this error? If the effective result is the same cliff, offset laterally by 1m, it’s probably more reasonable to say we have “1m lateral error” than to say we have “20m vertical error”. In other words, small lateral errors become huge vertical errors around cliffs.

    Absolute distance metrics take care of that by simply measuring the two cliff surfaces against each other at the actual orientation of the cliff. That is, cliff walls are measured laterally and the cliff floor is measured vertically. I think it’s a more reasonable way to measure error. One possible exception: for a landing area, we really want to know the vertical error, because we want the plane to touch pavement at just the right time. But since airplane landing areas tend to be flat, distance measurement becomes a vertical measurement anyway.
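To put rough numbers on the cliff example – a toy calculation, not MeshTool code: with terrain that drops 20m for every 1m of horizontal travel, a 1m lateral offset reads as a 20m vertical error, while the true distance between the two cliff faces stays right around 1m.

    #include <cmath>
    #include <cstdio>

    int main()
    {
        double slope = 20.0;          // terrain drops 20m per 1m horizontally
        double lateral_offset = 1.0;  // the mesh cliff is shifted 1m sideways

        // Vertical metric: compare elevations at the same (x, y).
        double vertical_error = slope * lateral_offset;              // 20m

        // Distance metric: perpendicular gap between two parallel cliff
        // faces of slope m offset d apart is d * m / sqrt(1 + m^2).
        double distance_error =
            lateral_offset * slope / std::sqrt(1.0 + slope * slope); // ~1m

        std::printf("vertical: %.1f m   distance: %.2f m\n",
                    vertical_error, distance_error);
        return 0;
    }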

Unnatural Terrain

So there I am, working with a void-filled SRTM DEM for KSBD. I have cranked the mesh up to 500,000 points to measure the error (which is very low, btw…worst error 3m and standard deviation less than 15 cm).

But what are those horizontal lines of high density mesh?

I wasn’t sure what those were, but they looked way too flat and regular. So I looked at the original DEM and saw this:

Ah – there are ridges in the actual DEM. Well that’s weird. What the heck could that be?

This is a view with vector data – and there you go. Those are power lines.

The problem in particular is that SRTM data is “first return” – that is, it is a measurement of the first thing the radar bounces off of from space. Thus SRTM includes trees, some large buildings, skyscrapers, and all sorts of other gunk we might not want. A mesh in a flight simulator usually represents “the ground”, but using first return data means that our ground is going to have a bump any time there is something fairly large on that ground. The higher the mesh res, and the lower the mesh error, the more of this real-world 3-d coverage gets burned into the mesh.

So Do We Really Care About 5m DEMs?

The answer is actually yes, yes we do, but maybe not for the most obvious reasons.

The problem with raster DEMs (that is, elevation stored as a 2-d grid of heights) is that they don’t handle cliffs very well. A raster DEM cannot, by its very format, represent a truly vertical cliff. In fact, the maximum slope it can create is based on

arctan(cliff height / DEM spacing)

Which is math-speak for: the tighter your DEM spacing, the steeper the maximum slope we can represent for a cliff of a given height. Note that the total cliff height matters too, so even a crude 90m DEM like SRTM can represent a canyon if it’s really huge**, but we need a very high res DEM to get shorter vertical surfaces.
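Worked through with real numbers – just the formula above, plus the SRTM case from the second footnote:

    #include <cmath>
    #include <cstdio>

    int main()
    {
        const double PI = 3.14159265358979323846;
        const double spacings[] = { 90.0, 30.0, 10.0, 5.0 }; // meters per post
        const double cliff = 50.0;                           // a 50m cliff
        for (double s : spacings)
            std::printf("%2.0fm DEM: max slope for a %.0fm cliff = %.1f deg\n",
                        s, cliff, std::atan2(cliff, s) * 180.0 / PI);

        // The SRTM case from the footnote: a 70-degree face across one
        // 90m post requires a rise of tan(70 deg) * 90m = ~247m.
        std::printf("rise needed for 70 deg at 90m: %.0f m\n",
                    std::tan(70.0 * PI / 180.0) * 90.0);
        return 0;
    }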

So the moderated version of my rant goes something like this:

  • High res input DEMs are necessary to represent small, steep terrain features if we are using raster DEMs.
  • High res meshes are not necessary – we only need resolution in the parts of the mesh where it counts.
  • Let’s not use mesh res to represent 3-d on the ground, only the ground itself.

There is another way to deal with elevation besides DEMs, and in fact it is used for LIDAR data (where the resolution is so high that a raster DEM would be unusable): you can represent a surface as a series of vector contour lines in 3-d. The beauty of contour lines is that they can represent cliffs no matter how steep (up to vertical), and you don’t need a lot of storage if the ground is not very intricate.
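A minimal sketch of what elevation-as-contours looks like as a data structure – my illustration; actual LIDAR contour formats vary:

    #include <vector>

    struct Point2 { double x, y; };

    // Each contour is a closed ring at one constant elevation. A vertical
    // cliff is just two rings whose (x, y) paths coincide at different
    // elevations - something a raster DEM cannot express at all.
    struct Contour {
        double              elevation;
        std::vector<Point2> ring;
    };

    // Flat, simple terrain needs only a few short contours; intricate
    // terrain pays for exactly the detail it contains.
    using TerrainContours = std::vector<Contour>;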

The meshing data format inside MeshTool could probably be made to work with contours, but I haven’t seen anyone with high quality contour data yet. We’ll probably support such a feature some day.

* Really this should be “the additional error”, because when you get a DEM, it already has error – that is, the technique for creating the DEM will have some error vs. the real world. For example, if I remember right (and I probably do not) 90% of SRTM data points fall within 8m vertically of the real world values. So add MeshTool and you might be increasing the error from 8m to 9-10m, that is, a 12-25% increase in error.

** For the SRTM this might be a moot point – the SRTM has a maximum cliff slope in certain directions defined by the relationship between the shuttle’s orbit and the latitude of the area being scanned. The maximum cliff at any point in the SRTM is 70 degrees, which can be represented by a 247 m cliff using a pair of 90m posts.

Posted in Development, Scenery, Tools by | 1 Comment

NVidia: 3 Ben: 0

This is getting embarrassing – I’m at risk of getting shut out. I was able to fix the “Null texture, how” error users were seeing on NVidia hardware.

It turns out it was an uninitialized variable in code that was never used until NV changed their drivers. As far as I can tell, NV dropped support for FSAA in 16-bit mode a few months ago, at least on some of their newer GPUs. (It is also possible that the incantation necessary to get FSAA has changed a lot and I simply don’t know what it is.)

So the dialog between X-Plane and the video card ran something like the Monty Python cheese shop sketch:

X-Plane: So … can you do full screen anti-aliasing?
GeForce 8: Oh yes, of course! (Please, I’m a GeForce 8 card.)
X-Plane: Splendid! So…how about 16x FSAA?
GeForce 8: Sorry, I can’t do that.
X-Plane: Ah. How about 8x FSAA?
GeForce 8: Sorry, can’t do that either.
X-Plane: I see. Well then, how about 4x FSAA?
GeForce 8: Nope.
X-Plane: 2x FSAA?
GeForce 8: No way.
X-Plane: Ah. I see.

At this point in the dialog, X-Plane would promptly lose track of what it had been doing in the setup process, throw out its notes on the GPU setup, and then freak out a bit later when it realized its note-taking left something to be desired.

This is the first case I’ve hit where a video card advertises FSAA and can’t actually do it.
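For the curious, the failure was of roughly this shape – a sketch of a fallback loop, not X-Plane’s actual setup code; try_fsaa() is a stand-in for the real pixel-format negotiation:

    #include <cstdio>

    // Stand-in for asking the driver for a pixel format with N samples.
    // On the drivers described above, this said no at every level.
    bool try_fsaa(int /*samples*/) { return false; }

    int pick_fsaa(int requested)
    {
        // Before the fix, a variable like this was only assigned when some
        // FSAA level succeeded - leaving it uninitialized when all failed.
        int chosen = 0;
        for (int s = requested; s >= 2; s /= 2)
            if (try_fsaa(s)) { chosen = s; break; }
        if (chosen == 0)
            std::printf("FSAA unavailable - running without it.\n");
        return chosen;
    }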

Anyway, if you have hit this bug:

  1. Update to 941 final – it should fix it.

  2. Stop trying to run with FSAA and 16-bit color. This is a somewhat crazy combination. FSAA attempts to clean up rendering artifacts at the cost of fill rate. 16-bit color creates artifacts to save fill rate. If your GPU needs 16-bit color to run at high framerate, it’s time to turn FSAA off.

(I realize that 16-bit color and aliasing are different kinds of artifacts, and some users might prefer harsh color transitions to harsh polygon transitions. But I still say, go for 32-bit color, no FSAA. When the sim is running in 16-bit mode, a good chunk of the sim still runs in 32-bit mode because 16-bit RGB surfaces only have 1 bit of alpha.* So you’re not quite getting universal savings but you get 16-bit output colors, so the results look universally bad.)

*This assumes 5551, or 565 pixels. There is a 4-bit alpha 16-bit color format, cleverly called 4444, but if you thought 16-bit looks bad…
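For reference, here is what those 16-bit layouts actually look like, as bit-packing code (the 565, 5551, and 4444 formats are standard; the helper functions are just illustrations):

    #include <cstdint>

    // RGB 5:6:5 - no alpha bits at all; alpha is simply discarded.
    uint16_t pack_565(uint8_t r, uint8_t g, uint8_t b)
    {
        return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3);
    }

    // RGBA 5:5:5:1 - alpha survives only as "fully opaque or fully clear".
    uint16_t pack_5551(uint8_t r, uint8_t g, uint8_t b, uint8_t a)
    {
        return ((r >> 3) << 11) | ((g >> 3) << 6) | ((b >> 3) << 1) | (a >> 7);
    }

    // RGBA 4:4:4:4 - 16 alpha levels, but only 16 shades per color channel.
    uint16_t pack_4444(uint8_t r, uint8_t g, uint8_t b, uint8_t a)
    {
        return ((r >> 4) << 12) | ((g >> 4) << 8) | ((b >> 4) << 4) | (a >> 4);
    }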

Posted in Development by | 2 Comments

Why Do Custom Lights Use the Object Texture?

I am trying to be disciplined and put documentation on the X-Plane wiki, and limit the blog to announcements, rants, and explanations of what’s going on inside X-Plane.

You can read about custom lights here. The short of it is that a custom light is a billboard on an object where you (the author) texture the billboard (with part of the object texture), pick the texture coordinates and color, and optionally run all of these parameters through a dataref* that can modify them.
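From memory, a custom light line in an OBJ looks something like this – position, RGBA color, billboard size, the texture rectangle within your object’s own texture, and the optional dataref. The dataref name below is a placeholder; check the OBJ8 spec for the authoritative parameter order:

    # x   y    z     r   g   b   a   size  s1   t1   s2   t2   dataref
    LIGHT_CUSTOM  0.0 1.5 -3.2  1.0 0.3 0.3 1.0  0.9  0.25 0.25 0.50 0.50  your/plugin/light_params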

For named lights, the light texture comes from a texture atlas that Sergio made a few years ago – it’s a nice 8×8 grid of pretty lights.

So…why can’t you use it with custom lights? Why do custom lights use the object texture?

The answer is: future compatibility. Sergio and I are actually already working on a new texture atlas for the sim’s built-in lights. (This has been a back-burner project for a while … I have no idea when we’ll actually productize this currently experimental work.) What happens when we create a new texture atlas with all of the lights moved around and scrambled? If your object referenced that texture, the texture coordinates would be incorrect.

Thus, for the lights where you specify texture coordinates (custom lights) you use your own texture. For named lights (where the texture coordinate is generated by X-Plane) it’s safe to use ours.

A Dangerous Bug

I found a bug in 940 that’s been in the sim for a while now: given the right strange combination of named and custom lights in a row, the sim would accidentally use Sergio’s texture atlas rather than the object’s texture for custom lights.

This is a mistake, a bug, and it will be fixed in the next 941 release candidate. I certainly hope there aren’t any objects out there relying on this erroneous behavior, which violates the OBJ spec and is pretty dangerous from a future compatibility standpoint.

* Datarefs are normally thought of as data we read, so the idea of using them to “process” data is a bit of a bastardization of the original abstraction. You can read about the dataref scheme in detail here.

Posted in Aircraft, Development, File Formats, Modeling by | Comments Off on Why Do Custom Lights Use the Object Texture?

Is Your Video Card “Two Steps Down”?

Identifying an underpowered video card is difficult. Video cards have simple, non-confusing names like the CryoTek GeForce FX 9999 XYZ. What the heck does all that stuff mean?

(The lists below will contain a number of “specs”. Do not panic! At the end I will show you where to look this stuff up on Wikipedia.)

A modern graphics card is basically a computer on a board, and as such, it has the following components that you might care about for performance:

  • VRAM. This is one of the simplest ones to understand. VRAM is the RAM on the graphics card itself. VRAM affects performance in a fairly binary way – either you have enough or your framerate goes way down. You can always get by with less by turning texture resolution down, but of course then X-Plane looks a lot worse.

    How much VRAM do you need? It depends on how many add-ons you use. I’d get at least 256 MB – VRAM has become fairly cheap. You might want a lot more if you use a lot of add-ons with detailed textures. But don’t expect adding more VRAM to improve framerate – you’re just avoiding a shortage-induced fog-fest here.

  • Graphics Bus. The GPU is connected to your computer by the graphics bus, and if that connection isn’t fast enough, it slows everything down. But this isn’t really a huge factor in picking a GPU, because your graphics bus is part of your motherboard. You need to buy a GPU that matches your motherboard, and the GPU will slow down if it has to.

  • Memory Bus. This is one that gets overlooked – a GPU is connected to its own internal memory (VRAM) by a memory bus, and that memory bus controls how fast the GPU can really go. If the GPU can’t suck data from VRAM fast enough, you’ll have a slow-down.

    Evaluating the quality of the internal memory bus of a graphics card is beyond what I can provide as “buying advice”. Fortunately, the speed of the bus is usually paired with the speed of the GPU itself. That is, you don’t usually need to worry that the GPU was really fast but its bus was too slow. So what we need to do is pick a GPU, and the bus that comes with it should be decent.

  • Of course the GPU sits on the graphics card. The GPU is the “CPU” of the graphics card, and is a complex enough subject to start a new bullet list. (As if I wouldn’t start a new bullet list just because I can.)

So to summarize, you want to look at how much VRAM your card has and make sure the bus interface matches your motherboard. What about the GPU? There are three things to pay attention to on a GPU:

  • Generation. Each generation of GPUs is superior to the previous generation. Usually the GPUs can create new effects, and often they can create old effects more cheaply.

    The generation is usually specified in the leading number, e.g. a GeForce 7xxx is from the GeForce 7 series, and a GeForce 8xxx is from the GeForce 8 series. You almost never want to buy a last-generation GPU if you can get a current-generation GPU for a similar price.

  • Clock Speed. A GPU has an internal clock, and faster is better. The benefit of clock speed is linear – that is, if you have the same GPU at 450 MHz and 600 MHz, the 600 MHz one will provide about 33% more throughput, usually.

    Most of the time, the clock speed differences are represented by that ridiculous alphabet soup of letters at the end of the card name. So for example, the difference between a GeForce 7900 GT and a GeForce 7900 GTO is clock speed – the GT runs at 450 MHz and the GTO at 650 MHz.*

  • Core Configuration. This is where things get tricky. For any given generation, the different card models will have some of their pixel shaders removed. This is called “core configuration”. Basically GPUs are fast because they have more than one of all of the parts they need to draw (pixel shaders, etc.) and in computer graphics, many hands make light work. The core configuration is a measure of just how many hands your graphics card has.

    Core configuration usually varies with the model number, e.g. an 8800 has 96-128 shaders, whereas an 8600 has 32 shaders, and an 8500 has 16 shaders. In some cases the suffix matters too.

How would you ever know your core configuration, clock speed, etc.? Fortunately Wikipedia is the source of all knowledge. Here are the tables for NVidia and ATI.

Important: You cannot compare clock speed or core configuration between different generations of GPU or different vendors! A 16-shader 400 MHz GeForce 6 series card is not the same as a 16-shader 400 MHz GeForce 7 series card. The GPU designers make serious changes to the card capabilities between generations, so the stats don’t apply.

You can see this in the core configuration column – the number of different parts they measure changes! For example, starting with the GeForce 8, NVidia gave up on vertex shaders entirely and started building “unified shaders”. Apples to oranges…

Don’t Be Two Steps Down

This is my rule of thumb for buying a graphics card: don’t be two steps down. Here’s what I mean:

The most expensive, fanciest cards for a given generation will have the most shaders in their core config, and represent the fastest that generation of GPU will ever be. The lower models then have significantly fewer shaders.

Behind the scenes, what happens (more or less) is: NVidia and ATI test all of their chips. If all 128 shaders on a GeForce 8 GPU work, the GPU is labeled “GeForce 8800” and you pay top dollar. But what if there are defects and only some of the shaders work? No problem. NV disables the broken shaders – in fact, they disable so many shaders that you only have 32 and a “GeForce 8600” is born.

Believe me: this is a good thing. This is a huge improvement over the old days when low-end GPUs were totally separate designs and couldn’t even run the same effects. (Anyone remember the GeForce 4 Ti and MX?) Having “partial yield” on a chip design is a normal part of microchip manufacturing; being able to recycle partially functional chips means NV and ATI can sell more of the chips they create, and thus it brings the cost of goods down. We wouldn’t be able to get a low-end card so cheaply if they couldn’t reuse the high-end parts.

But here’s the rub: some of these low end cards are not meant for X-Plane users, and if you end up with one, your framerate will suffer. Many hands make light work when rendering a frame. If you have too few shaders, there aren’t enough hands, drawing takes forever, and your framerate suffers.

For a long time the X-Plane community was insulated from this, because X-Plane didn’t push a lot of work to the GPU. But this has changed over the version 9 run – some of those options, like reflective water, per-pixel lighting, etc. start to finally put some work on the GPU, hitting framerate. If you have a GeForce 8300 GS, you do not have a good graphics card. But you might not have realized it until you had the rendering options to really test it out.

So, “two steps down”. My buying advice is: do not buy a card where the core configuration has been cut down more than once. In the GeForce 8 series, you’ll see the 8800 with 96-128 shaders, then the first “cut” is the 8600 with 32 shaders, and then the 8500 brings us down to 16.

A GeForce 8800 was a good card. The 8600 was decent for the money. But the 8500 is simply underpowered.

When you look at prices, I think you’ll find the cost savings of being “two steps down” is not a lot of money. But the performance hit can be quite significant. Remember, the lowest end cards are being targeted at users who will check their email, watch some web videos, and that’s about it. The cards are powerful enough to run the operating system’s window manager effects, but they’re not meant to run a flight simulator with all of the options turned on.

If you do have a “two step” card, the following things can help reduce GPU load:

  • Turn down or off full screen anti-aliasing.
  • Turn off per pixel lighting, or even turn off shaders entirely.
  • Reduce screen size.

* GT = Good Times, GTO = Good Times Overclocked? Only NVidia knows for sure…

Posted in Development by | 1 Comment

Why Isn’t SLI/CrossFire A No-Brainer?

To get this out of the way: I have no idea whether – or how much – SLI or CrossFire improve or hinder X-Plane’s framerate. None of my development systems have such hardware, and I spend no time either optimizing for SLI/CrossFire or testing. If you have done tests with SLI enabled and disabled, I would like to know the results!

I have read the white papers on how to optimize an application for SLI/CrossFire, and while X-Plane isn’t quite a laundry list of SLI/CrossFire sins, we’re definitely an application that has potential for optimization.

Now normally more hardware = faster framerate. In particular, the limiting factor of filling in a big display with high shader options and full screen anti-aliasing can be the time it takes to fill in pixels, and more shaders mean more pixels filled in at once.* Why doesn’t having an entire second GPU to fill in pixels allow us to go twice as fast?

The answer is: coordination. Normally the process of drawing an X-Plane frame goes a little bit like this:

  1. Draw a little bit more of the cloud shadows into the cloud shadow texture. (This is a gradual process.)
  2. Draw the panel into the panel texture.
  3. Draw the world (as seen from below the water) into the water reflection texture.
  4. Draw the airplane shadow into the airplane shadow texture.
  5. Draw the entire world using the above four textures.

Notice a trend? Those four textures are dynamic textures, meaning they are created by the video card drawing into its own texture, once per frame. Dynamic textures are generally a good thing, because they let us create texture content much more rapidly than we could with the CPU. (There is no way we could prepare the panel texture once per frame if we had to do it on the CPU.)
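Structurally, the frame looks something like this – illustrative names, not X-Plane’s real functions; the point is that passes 1-4 each render into a texture that pass 5 reads:

    #include <functional>

    struct Texture {};

    // Stand-in for binding a render target and drawing into it.
    void render_into(Texture&, const std::function<void()>& draw) { draw(); }

    void draw_frame(Texture& clouds, Texture& panel,
                    Texture& water,  Texture& shadow)
    {
        render_into(clouds, []{ /* pass 1: a bit more cloud shadow */ });
        render_into(panel,  []{ /* pass 2: the 2-d panel           */ });
        render_into(water,  []{ /* pass 3: world seen from below   */ });
        render_into(shadow, []{ /* pass 4: airplane shadow         */ });

        // Pass 5 samples all four textures. With one GPU they are already
        // in VRAM; with SLI/CrossFire they may have been rendered by the
        // *other* card, and the driver must copy them over first.
        /* draw the world using clouds, panel, water, shadow */
    }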

In fact, the total number of dynamic textures can be even higher – if you use panel regions, there are 2 panel textures per region, and if you use volumetric fog, there are two more textures with interim renderings of the world, used to create fog effects.

Okay, so we have a lot of textures we drew. What does that have to do with multiple video cards?

Well, one reason why dynamic textures are normally fast is because, when a dynamic texture is finished, it lives exactly where we want it to live: on the video card!

But…what if there are two video cards? Uh oh. Now maybe one video card drew the water, and another drew the clouds. We have to copy those textures to every video card that will help draw the final frame.

There is a sequence to draw the right textures on the right card at the right time to make X-Plane run faster with two video cards…but the video drivers that manage SLI or CrossFire may have no way to know what that sequence is. The driver has to make some guesses, and if it puts the wrong textures in the wrong places, framerate may be slower, due to the overhead of shuffling textures around.

So SLI and CrossFire are not simple, no-brainer ways to get more framerate, the way having a faster GPU might be.

* If you have a huge number of objects, your framerate is suffering due to the CPU being overloaded, and this is all entirely moot!

Posted in Development by | 5 Comments

No Raster Land Use Data

The X-Plane version 8/9 default scenery uses raster land use data (that is, a low-res grid that categorizes the overall usage of a square area of land) as part of its input in generating the global scenery. When you use MeshTool, this raster data comes in the .xes file that you must download. So…why can’t you change it?

The short answer is: you could change it, but the results would be so unsatisfying that it’s probably not worth adding the feature.

The global scenery is using GLCC land use data – it’s a 1 km data set with about 100 types of land class based on the OGE2 spec.

Now here’s the thing: the data sucks.

That’s a little harsh, and I am sure the researchers tried hard to create the data set. But using the data set directly in a flight simulator is immensely problematic:

  1. With 1 km spatial resolution (and some alignment error) the data is not particularly precise in where it puts features.
  2. The categorizations are inaccurate. The data is derived from thermal imagery, and it is easily fooled by mixed-use land. For example, mixing suburban houses into trees will result in a new forest categorization, because of the heat from the houses.
  3. The data can produce crazy results: cities on top of mountains, water running up steep slopes, etc.

That’s where Sergio and I come in. During the development of the v8 and v9 global scenery, Sergio created a rule set and I created processing algorithms – combined together, this system picks a terrain type from several factors: climate, land use, but also slope, elevation, etc.

To give a trivial example, the placement of rock cliffs is based on the steepness of terrain, and overrides land use. So if we have a city on an 80 degree incline, our rule set says “you can’t have a city that slanted – put a rock face there instead.”
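In code form, a rule of that flavor is nothing more than this – a toy illustration of the idea, not Sergio’s actual rule format, which is data-driven:

    #include <string>

    struct Tile {
        double      slope_deg;
        double      elevation_m;
        std::string land_use;    // e.g. from GLCC
    };

    std::string pick_terrain(const Tile& t)
    {
        if (t.slope_deg > 60.0)  return "rock_cliff";   // overrides land use
        if (t.land_use == "city" && t.slope_deg < 15.0) return "city";
        return "natural";  // fall through to climate/slope-driven choices
    }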

Sergio made something on the order of 1800 rules. (No one said he isn’t thorough!!) And when we were done, we realized that we barely use landuse.

In developing the rule set, Sergio looked for the parameters that would best predict the real look of the terrain. And what he found was that climate and slope are much better predictors of land use than the actual land use data. If you didn’t realize that we were ignoring the input data, well, that speaks to the quality of his rule set.

No One Is Listening

Now back to MeshTool. MeshTool uses the rule set Sergio developed to pick terrain when you have an area tagged as terrain_Natural. If you were to change the land use data, 80% of your land would ignore your markings, because the rule set is based on many other factors besides land use. Simply put, no one would be listening.

(We could try some experiments with customizing the land use data…there is a very small number of land uses that are keyed into the rule set. My guess is that this would be a very indirect and thus frustrating way to work, e.g. “I said city goes here, why is it not there?”)

The Future

I am working with alpilotx – he is producing a next-gen land-use data set, and it’s an entirely different world from the raw GLCC that Sergio and I had a few years ago. Alpilotx’s data set is high res, extremely accurate, and carefully combined and processed from several modern, high quality sources.

This of course means the rules have to change, and that’s the challenge we are looking at now – how much do we trust the new landuse vs. some of the other indicators that proved to be reliable.

Someday MeshTool may use this new landuse data and a new ruleset that follows it. At that point it could make sense to allow MeshTool to accept raster landuse data replacements. But for now I think it would be an exercise in frustration.

Posted in Development, Scenery, Tools by | 6 Comments

What Exactly Is a Generic Light?

X-Plane 940 has these generic light things…what the heck are they? Here’s the story:

X-Plane has been growing a larger number of independently simulated landing lights with each patch. We started with one, then four, now we’re up to sixteen. Basically each landing light is a set of datarefs that the systems code monitors.

  • You use a generic instrument to hook a panel switch up to a landing light dataref.
  • The sim takes care of matching the landing light brightness with the switch depending on the electrical system status.
  • Named lights can be used to visualize the landing lights.

See here for more info.

But what else lights up on an airplane? Sergio sent me the exterior lighting diagram for an MD-82, and it would make a Christmas tree blush. There are lights for the staircases, for the inlets, on the wings, pointing at the wings, the logo lights, the list goes on.

We have sixteen landing lights, so we could probably “borrow” a few to make inlet lights, logo lights, etc. But if we do that, the landing light will light up the runway when we turn on any of those other random lights.

Thus, generic lights were born. A generic light is a light on the plane that can be used for any purpose you want. They aren’t destined for a specific function like the strobes and nav lights. There are 64 of them, so divide them up and use them how you want. Just like landing lights, you use a generic light by:

  • Using a generic instrument to control its “switch” from the panel.
  • Using a named light to visualize it somewhere on the OBJs attached to your airplane.

Generic lights don’t cast any light on any other part of the plane – sorry. You can use ATTR_lit_level to light up part of your mesh dynamically when the generic light comes on though – the effect can be convincing if carefully authored.

Posted in Aircraft, Development, Modeling by | Comments Off on What Exactly Is a Generic Light?

MeshTool, Water and Land Class

MeshTool 2.0 is still a very new tool, so it shouldn’t be surprising that it has grown quite a few features during beta. I am putting three new features in for beta 4 that should help round out the kinds of things an author can put into a mesh.

Land Class Terrain

The next beta will allow authors to specify built-in land-class terrains by shapefile. Landclass isn’t quite as easy to work with as you might think – I’m working on a Wiki page describing how land class works with the mesh.

You can’t invent your own land classes directly in MeshTool, but there are two work-arounds:

  1. Once you build your DSF, use DSF2Text to edit the header, changing one of our land classes to the one you want. We have 500+ land classes, so you can probably find one to cannibalize. (There’s a sketch of this edit after the list.)
  2. Or you can just use the library system to replace the art assets for the land class within the area covered by your mesh. (You can tweak as little as one full tile.)
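For the first work-around, the edit in the text export is a one-line change to a terrain definition; the syntax is from memory and the paths below are made up for illustration:

    # excerpt of DSF2Text output, before:
    TERRAIN_DEF lib/g8/terrain10/some_built_in.ter
    # after - pointing at the land class you want instead:
    TERRAIN_DEF my_scenery/custom_swamp.ter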

Water Handling and Masking

Water handling and masking go together to allow you to create an accurate physical coastline. The problem is that X-Plane doesn’t let you specify whether a tile is land or water using a texture/image file. Physics are always determined on a per-triangle basis.

MeshTool 2.0 beta 4 will let you specify whether an orthophoto that has water “under” its transparent areas takes on its own physics, or the physics of the underlying water. (It can act “solid” or “wet”.) This lets you use orthophotos to model shallow areas and reefs.

The mask feature ties this together: it lets you limit MeshTool to working on a single vector area, defined by a shapefile. So to make a single orthophoto have wet and solid parts you:

  1. Issue the orthophoto command in solid mode.
  2. Establish a shapefile mask for areas of your DSF.
  3. Re-issue the orthophoto in “wet” mode.

The second orthophoto command will replace the first only in the wet areas, giving you regions with both types of physics. The README for MeshTool will cover this in more detail.

No Z Yet

Some developers have requested that MeshTool use the Z coordinate in a Shapefile to define the elevation of water boundaries. That’s a good idea, but I can’t code it any time soon. The polygon processing in MeshTool is fundamentally 2-d and has no way to retain the Z information during processing. I will try to get to this feature some day, but for now it’s going to have to wait.

The new beta should be available some time early next week, or now if you build from source.

Posted in Development, Tools by | Comments Off on MeshTool, Water and Land Class

I Accidentally Documented Something

Normally I try to make the X-Plane scenery and modeling system as opaque as possible — I want to make sure that nobody ever actually uses the rendering features that I spend weeks and weeks developing.

But the other night I had a little bit too much to drink, got distracted, and posted these:

In all seriousness, I have been trying to find time to put more documentation up on the Wiki. For these features, you will find an explanation of how the planes work, as well as a link to the planes (with plugins) to download, and a link to the plugin source code (on the SDK site, with sample makefiles for 3 operating systems).

Plugins? Do not panic! While plugins are necessary for some of the features demonstrated here, others can be created without additional programming.

BTW, if the existing documentation uses a concept that is not explained anywhere, please email me. I sometimes leave holes in the documentation by accident.

Posted in Aircraft, Development, Modeling by | 2 Comments

Have You Hugged Your Driver Writer Lately?

In my contact with users, on X-Plane forums, and in discussions of computer graphics, video drivers are an easy punching bag. When an app doesn’t work, blame the video driver. The guys at NV and ATI don’t have time to respond to every ridiculous allegation that is posted. Sometimes drivers are borked, but when it turns out to be X-Plane, I try to set the record straight.

Driver writers have what might be the hardest combination of programming circumstances:

  1. Their code cannot crash or barf. X-Plane crashes, you send me some hate email. Your video driver crashes, you can’t see to send me that email.
  2. The driver has to be fast. The whole point of buying that new GeForce 600000 GTX TurboPower with HyperCache was faster framerates. If the driver isn’t absolutely optimized for speed, that hardware can’t do its thing.
  3. The driver writers don’t have a lot of time to be fast and correct – the GeForce 700000 GTX TurboPower II with HyperCache Pro will be out 18 months from now, and they’ll have to start all over again.

That’s not an easy set of goals to meet. Today’s video cards are basically computers on a PCIe board, and they do amazing things, but they do it thanks to a fairly complex piece of software.

Applications writers like myself get to outsource the lower level aspects of our rendering engine to driver writers. When a driver doesn’t work right, it’s frustrating, but when a driver does work right, it’s doing some amazing things.

Posted in Development by | 3 Comments