Identifying an underpowered video card is difficult. Video cards have simple, non-confusing names like the CryoTek GeForce FX 9999 XYZ. What the heck does all that stuff mean?
(The lists below will contain a number of “specs”. Do not panic! At the end I will show you where to look this stuff up on Wikipedia.)
A modern graphics card is basically a computer on a board, and as such it has the following components that you might care about for performance:
- VRAM. This is one of the simplest ones to understand: VRAM is the RAM on the graphics card itself. VRAM affects performance in a fairly binary way – either you have enough or your framerate goes way down. You can always get by with less by turning texture resolution down, but of course then X-Plane looks a lot worse.
How much VRAM do you need? It depends on how many add-ons you use. I’d get at least 256 MB – VRAM has become fairly cheap. You might want a lot more if you use a lot of add-ons with detailed textures. But don’t expect adding more VRAM to improve framerate – you’re just avoiding a shortage-induced fog-fest here.
- Graphics Bus. The GPU is connected to your computer by the graphics bus, and if that connection isn’t fast enough, it slows everything down. But this isn’t really a huge factor in picking a GPU, because the graphics bus is part of your motherboard. You need to buy a GPU that matches your motherboard, and the GPU will slow down if it has to.
- Memory Bus. This is one that gets overlooked – a GPU is connected to its own internal memory (VRAM) by a memory bus, and that memory bus controls how fast the GPU can really go. If the GPU can’t suck data from VRAM fast enough, you’ll have a slow-down.
Evaluating the quality of the internal memory bus of a graphics card is beyond what I can provide as “buying advice”. Fortunately, the speed of the bus is usually paired with the speed of the GPU itself. That is, you don’t usually need to worry that the GPU was really fast but its bus was too slow. So what we need to do is pick a GPU, and the bus that comes with it should be decent.
- GPU. Of course the GPU sits on the graphics card. The GPU is the “CPU” of the graphics card, and is a complex enough subject to start a new bullet list. (As if I wouldn’t start a new bullet list just because I can.)
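To put the VRAM point in concrete terms, here is a quick back-of-the-envelope sketch of how texture memory adds up. The helper and its numbers are purely illustrative – this is not X-Plane’s actual accounting:

```python
def texture_vram_bytes(width, height, bytes_per_pixel=4, mipmaps=True):
    """Rough VRAM footprint of one uncompressed texture.

    A full mipmap chain adds about one third on top of the base image.
    """
    base = width * height * bytes_per_pixel
    return base * 4 // 3 if mipmaps else base

# One 1024x1024 texture costs ~5.3 MB; halving the resolution cuts that
# to roughly a quarter, which is why turning textures down saves so much.
full_mb = texture_vram_bytes(1024, 1024) / (1024 * 1024)
half_mb = texture_vram_bytes(512, 512) / (1024 * 1024)
```

This also shows why an add-on aircraft with many large, detailed textures can blow past a small card’s VRAM in a hurry.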
So to summarize, you want to look at how much VRAM your card has and make sure the bus interface matches your motherboard. What about the GPU? There are three things to pay attention to on a GPU:
- Generation. Each generation of GPUs is superior to the previous generation. Usually the GPUs can create new effects, and often they can create old effects more cheaply.
The generation is usually specified in the leading number, e.g. a GeForce 7xxx is from the GeForce 7 series, and a GeForce 8xxx is from the GeForce 8 series. You almost never want to buy a last-generation GPU if you can get a current-generation GPU for a similar price.
- Clock Speed. A GPU has an internal clock, and faster is better. The benefit of clock speed is roughly linear – that is, if you have the same GPU at 450 MHz and 600 MHz, the 600 MHz one will usually provide about 33% more throughput.
Most of the time, clock speed differences are represented by that ridiculous alphabet soup of letters at the end of the card name. So for example, the difference between a GeForce 7900 GT and a GeForce 7900 GTO is clock speed – the GT runs at 450 MHz and the GTO at 650 MHz.*
- Core Configuration. This is where things get tricky. For any given generation, the lower card models will have some of their pixel shaders removed; the “core configuration” describes how many remain. Basically GPUs are fast because they have more than one of all of the parts they need to draw (pixel shaders, etc.), and in computer graphics, many hands make light work. The core configuration is a measure of just how many hands your graphics card has.
Core configuration usually varies with the model number, e.g. an 8800 has 96–128 shaders, whereas an 8600 has 32 shaders, and an 8500 has 16 shaders. In some cases the suffix matters too.
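To make the clock-speed and shader-count math concrete, here is a toy figure of merit. The scaling assumption (throughput roughly proportional to shaders times clock) is a rough illustration that only holds within a single generation, not a real benchmark:

```python
def relative_throughput(shaders, clock_mhz):
    """Toy figure of merit: within one GPU generation, throughput scales
    roughly with shader count times clock speed. Purely illustrative."""
    return shaders * clock_mhz

# Same GPU at 600 MHz vs 450 MHz: about a 33% gain, as described above.
clock_gain = relative_throughput(24, 600) / relative_throughput(24, 450) - 1.0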
How would you ever know your core configuration, clock speed, etc.? Fortunately Wikipedia is the source of all knowledge. Here are the tables for NVidia and ATI.
Important: You cannot compare clock speed or core configuration between different generations of GPU or different vendors! A 16-shader 400 MHz GeForce 6 series card is not the same as a 16-shader 400 MHz GeForce 7 series card. The GPU designers make serious changes to the card capabilities between generations, so the stats don’t apply.
You can see this in the core configuration column – the number of different parts they measure changes! For example, starting with the GeForce 8, NVidia gave up on vertex shaders entirely and started building “unified shaders”. Apples to oranges…
Don’t Be Two Steps Down
This is my rule of thumb for buying a graphics card: don’t be two steps down. Here’s what I mean:
The most expensive, fanciest cards for a given generation will have the most shaders in their core config, and represent the fastest that generation of GPU will ever be. The lower models then have significantly fewer shaders.
Behind the scenes, what happens (more or less) is: NVidia and ATI test all of their chips. If all 128 shaders on a GeForce 8 GPU work, the GPU is labeled “GeForce 8800” and you pay top dollar. But what if there are defects and only some of the shaders work? No problem. NV disables the broken shaders – in fact, they disable so many shaders that you only have 32 and a “GeForce 8600” is born.
Believe me: this is a good thing. This is a huge improvement over the old days when low-end GPUs were totally separate designs and couldn’t even run the same effects. (Anyone remember the GeForce 4 Ti and Mx?) Having “partial yield” on a chip set is a normal part of microchip design; being able to recycle partially effective chips means NV and ATI can sell more of the chips they create, and thus it brings the cost of goods down. We wouldn’t be able to get a low end card so cheaply if they couldn’t reuse the high-end parts.
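The binning process described above can be sketched as a simple lookup. The thresholds mirror the shader counts quoted in this post, but the actual labeling process is NVidia’s business, not mine:

```python
def bin_geforce8_die(working_shaders):
    """Hypothetical chip binning: label a die by how many of its
    shaders passed testing. Thresholds are illustrative only."""
    if working_shaders >= 96:
        return "GeForce 8800"
    if working_shaders >= 32:
        return "GeForce 8600"
    if working_shaders >= 16:
        return "GeForce 8500"
    return "scrap"
```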
But here’s the rub: some of these low-end cards are not meant for X-Plane users, and if you end up with one, your framerate will suffer. Many hands make light work when rendering a frame. If you have too few shaders, there aren’t enough hands, drawing takes forever, and your framerate suffers.
For a long time the X-Plane community was insulated from this, because X-Plane didn’t push a lot of work to the GPU. But this has changed over the version 9 run – some of those options, like reflective water, per-pixel lighting, etc. start to finally put some work on the GPU, hitting framerate. If you have a GeForce 8300 GS, you do not have a good graphics card. But you might not have realized it until you had the rendering options to really test it out.
So, “two steps down”. My buying advice is: do not buy a card where the core configuration has been cut down more than once. In the GeForce 8 series, you’ll see the 8800 with 96-128 shaders, then the first “cut” is the 8600 with 32 shaders, and then the 8500 brings us down to 16.
A GeForce 8800 was a good card. The 8600 was decent for the money. But the 8500 is simply underpowered.
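The “two steps down” rule itself is easy to mechanize. The lineup table below is just the GeForce 8 example from this post:

```python
# Model lineup for one generation, ordered from the top card down
# (the GeForce 8 shader counts quoted above).
LINEUP = ["8800", "8600", "8500"]

def steps_down(model, lineup=LINEUP):
    """How many core-configuration cuts below the top card is this model?"""
    return lineup.index(model)

def worth_buying(model, lineup=LINEUP):
    """My rule of thumb: avoid cards cut down more than once."""
    return steps_down(model, lineup) < 2
```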
When you look at prices, I think you’ll find the cost savings of going “two steps down” is not a lot of money, but the performance hit can be quite significant. Remember, the lowest-end cards are targeted at users who will check their email, watch some web videos, and that’s about it. The cards are powerful enough to run the operating system’s window-manager effects, but they’re not meant to run a flight simulator with all of the options turned on.
If you do have a “two step” card, the following things can help reduce GPU load:
- Turn down or off full screen anti-aliasing.
- Turn off per pixel lighting, or even turn off shaders entirely.
- Reduce screen size.
* GT = Good Times, GTO = Good Times Overclocked? Only NVidia knows for sure…
To get this out of the way: I have no idea if/whether/how much SLI or CrossFire improve or hinder X-Plane’s framerate. None of my development systems have such hardware, and I spend no time either optimizing for SLI/CrossFire or testing. If you have done tests with SLI enabled and disabled, I would like to know the results!
I have read the white papers on how to optimize an application for SLI/CrossFire, and while X-Plane isn’t quite a laundry list of SLI/CrossFire sins, we’re definitely an application that has the potential for optimization.
Now normally more hardware = faster framerate. In particular, the limiting factor of filling in a big display with high shader options and full screen anti-aliasing can be the time it takes to fill in pixels, and more shaders mean more pixels filled in at once.* Why doesn’t having an entire second GPU to fill in pixels allow us to go twice as fast?
The answer is: coordination. Normally the process of drawing an X-Plane frame goes a little bit like this:
- Draw a little bit more of the cloud shadows into the cloud shadow texture. (This is a gradual process.)
- Draw the panel into the panel texture.
- Draw the world (as seen from below the water) into the water reflection texture.
- Draw the airplane shadow into the airplane shadow texture.
- Draw the entire world using the above four textures.
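In pseudocode, the per-frame sequence above looks something like this (the names are illustrative, not X-Plane’s actual internals):

```python
# The dynamic textures drawn each frame, per the list above.
DYNAMIC_TEXTURES = ("cloud_shadows", "panel", "water_reflection", "airplane_shadow")

def draw_frame(render_into_texture, compose_world):
    """Sketch of the frame loop: render each dynamic texture first,
    then draw the world using all of them."""
    textures = {name: render_into_texture(name) for name in DYNAMIC_TEXTURES}
    return compose_world(textures)
```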
Notice a trend? Those four textures are dynamic textures, meaning they are created by the video card drawing into its own texture, once per frame. Dynamic textures are generally a good thing, because they let us create texture content much more rapidly than we could with the CPU. (There is no way we could prepare the panel texture once per frame if we had to do it on the CPU.)
In fact, the total number of dynamic textures can be even higher – if you use panel regions, there are 2 panel textures per region, and if you use volumetric fog, there are two more textures with interim renderings of the world, used to create fog effects.
Okay, so we have a lot of textures we drew. What does that have to do with multiple video cards?
Well, one reason why dynamic textures are normally fast is because, when a dynamic texture is finished, it lives exactly where we want it to live: on the video card!
But…what if there are two video cards? Uh oh. Now maybe one video card drew the water, and another drew the clouds. We have to copy those textures to every video card that will help draw the final frame.
There is a sequence to draw the right textures on the right card at the right time to make X-Plane run faster with two video cards…but the video drivers that manage SLI or CrossFire may have no way to know what that sequence is. The driver has to make some guesses, and if it puts the wrong textures in the wrong places, framerate may be slower, due to the overhead of shuffling textures around.
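A toy cost model shows how the copying overhead can eat the theoretical 2x win. Every number here is invented for illustration:

```python
def frame_time_ms(draw_ms, textures_copied, copy_ms_each):
    """Total frame time: drawing plus shuffling dynamic textures
    between GPUs. All numbers are invented for illustration."""
    return draw_ms + textures_copied * copy_ms_each

single_gpu = frame_time_ms(20.0, 0, 0.0)    # one card, no copies
dual_gpu_bad = frame_time_ms(10.0, 4, 3.0)  # half the draw time, but four
                                            # textures copied between cards
```

With these (made-up) numbers the dual-GPU frame is actually slower – 22 ms vs 20 ms – which is exactly the failure mode described above.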
So SLI and CrossFire are not simple, no-brainer ways to get more framerate, the way having a faster GPU might be.
* If you have a huge number of objects, your framerate is suffering due to the CPU being overloaded, and this is all entirely moot!
The X-Plane version 8/9 default scenery uses raster land use data (that is, a low-res grid that categorizes the overall usage of a square area of land) as part of its input in generating the global scenery. When you use MeshTool, this raster data comes in the .xes file that you must download. So…why can’t you change it?
The short answer is: you could change it, but the results would be so unsatisfying that it’s probably not worth adding the feature.
The global scenery is using GLCC land use data – it’s a 1 km data set with about 100 types of land class based on the OGE2 spec.
Now here’s the thing: the data sucks.
That’s a little harsh, and I am sure the researchers tried hard to create the data set. But using the data set directly in a flight simulator is immensely problematic:
- With 1 km spatial resolution (and some alignment error) the data is not particularly precise in where it puts features.
- The categorizations are inaccurate. The data is derived from thermal imagery, and it is easily fooled by mixed-use land. For example, mixing suburban houses into trees will result in a new forest categorization, because of the heat from the houses.
- The data can produce crazy results: cities on top of mountains, water running up steep slopes, etc.
That’s where Sergio and I come in. During the development of the v8 and v9 global scenery, Sergio created a rule set and I created processing algorithms – combined together, this system picks a terrain type from several factors: climate, land use, but also slope, elevation, etc.
To give a trivial example, the placement of rock cliffs is based on the steepness of terrain, and overrides land use. So if we have a city on an 80 degree incline, our rule set says “you can’t have a city that slanted – put a rock face there instead.”
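A toy version of such a rule might look like this. The thresholds and the rules themselves are invented for illustration – they are not Sergio’s actual numbers:

```python
def pick_terrain(landuse, slope_deg):
    """Trivial rule set: steep slopes override land use, as in the
    rock-cliff example above. Thresholds are invented."""
    if slope_deg > 35.0:
        return "rock_cliff"
    if landuse == "water" and slope_deg > 5.0:
        return "natural"      # water can't run up a steep slope
    return landuse
```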
Sergio made something on the order of 1800 rules. (No one said he isn’t thorough!!) And when we were done, we realized that we barely use landuse.
In developing the rule set, Sergio looked for the parameters that would best predict the real look of the terrain. And what he found was that climate and slope are much better predictors of land use than the actual land use data. If you didn’t realize that we were ignoring the input data, well, that speaks to the quality of his rule set.
No One Is Listening
Now back to MeshTool. MeshTool uses the rule set Sergio developed to pick terrain when you have an area tagged as terrain_Natural. If you were to change the land use data, 80% of your land would ignore your markings because the ruleset is based on many other factors besides landuse. Simply put, no one would be listening.
(We could try some experiments with customizing the land use data…there is a very small number of land uses that are keyed into the rule set. My guess is that this would be a very indirect and thus frustrating way to work, e.g. “I said city goes here, why is it not there?”)
The Future
I am working with alpilotx – he is producing a next-gen land-use data set, and it’s an entirely different world from the raw GLCC that Sergio and I had a few years ago. Alpilotx’s data set is high res, extremely accurate, and carefully combined and processed from several modern, high quality sources.
This of course means the rules have to change, and that’s the challenge we are looking at now – how much do we trust the new landuse vs. some of the other indicators that proved to be reliable.
Someday MeshTool may use this new landuse data and a new ruleset that follows it. At that point it could make sense to allow MeshTool to accept raster landuse data replacements. But for now I think it would be an exercise in frustration.
X-Plane 940 has these generic light things…what the heck are they? Here’s the story:
X-Plane has been growing a larger number of independently simulated landing lights with each patch. We started with one, then four, now we’re up to sixteen. Basically each landing light is a set of datarefs that the systems code monitors.
- You use a generic instrument to hook a panel switch up to a landing light dataref.
- The sim takes care of matching the landing light brightness with the switch depending on the electrical system status.
- Named lights can be used to visualize the landing lights.
See here for more info.
But what else lights up on an airplane? Sergio sent me the exterior lighting diagram for an MD-82, and it would make a Christmas tree blush. There are lights for the staircases, for the inlets, on the wings, pointing at the wings, the logo lights, the list goes on.
We have sixteen landing lights, so we could probably “borrow” a few to make inlet lights, logo lights, etc. But if we do that, the landing light will light up the runway when we turn on any of those other random lights.
Thus, generic lights were born. A generic light is a light on the plane that can be used for any purpose you want. They aren’t destined for a specific function like the strobes and nav lights. There are 64 of them, so divide them up and use them how you want. Just like landing lights, you use a generic light by:
- Using a generic instrument to control its “switch” from the panel.
- Using a named light to visualize it somewhere on the OBJs attached to your airplane.
Generic lights don’t cast any light on any other part of the plane – sorry. You can use ATTR_light_level to light up part of your mesh dynamically when the generic light comes on though – the effect can be convincing if carefully authored.
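The switch-to-light plumbing works conceptually like this tiny model. The voltage threshold and the shape of the function are made up; only the behavior (the sim gates the light on the electrical system) comes from the text:

```python
def generic_light_brightness(switch_on, bus_volts, min_volts=10.0):
    """Toy model: the panel switch drives the light, but only when the
    electrical bus is powered. The threshold is illustrative."""
    return 1.0 if (switch_on and bus_volts >= min_volts) else 0.0
```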
MeshTool 2.0 is still a very new tool, so it shouldn’t be surprising that it has grown quite a few features during beta. I am putting three new features in for beta 4 that should help round out the kinds of things an author can put into a mesh.
Land Class Terrain
The next beta will allow authors to specify built-in land-class terrains by shapefile. Landclass isn’t quite as easy to work with as you might think – I’m working on a Wiki page describing how land class works with the mesh.
You can’t invent your own land classes directly in MeshTool, but there are two work-arounds:
- Once you build your DSF, use DSF2Text to edit the header, changing one of our land classes to the one you want. We have 500+ land classes, so you can probably find one to cannibalize.
- Or you can just use the library system to replace the art assets for the land class within the area covered by your mesh. (You can tweak as little as one full tile.)
Water Handling and Masking
Water handling and masking go together to allow you to create an accurate physical coastline. The problem is that X-Plane doesn’t let you specify whether a tile is land or water using a texture/image file. Physics are always determined on a per-triangle basis.
MeshTool 2.0 beta 4 will let you specify whether an orthophoto that has water “under” its transparent areas takes on its own physics or the physics of the underlying water. (It can act “solid” or “wet”.) This lets you use orthophotos to model shallow areas and reefs.
The mask feature lets you do both at once. It limits MeshTool to working on a single vector area, defined by a ShapeFile. So to make a single orthophoto have both wet and solid parts you:
- Issue the orthophoto command in solid mode.
- Establish a shapefile mask for areas of your DSF.
- Re-issue the orthophoto in “wet” mode.
The second orthophoto command will replace the first only in the wet areas, giving you regions with both types of physics. The README for MeshTool will cover this in more detail.
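Conceptually, the three steps behave like this per-triangle sketch (physics really is per-triangle in X-Plane; the data structures here are invented for illustration):

```python
def apply_orthophoto(mesh, physics, mask=None):
    """Assign physics to every triangle the command touches.
    `mesh` maps triangle id -> physics; `mask` is the set of triangle
    ids covered by the ShapeFile mask (None means everywhere)."""
    for tri in mesh:
        if mask is None or tri in mask:
            mesh[tri] = physics

mesh = {1: None, 2: None, 3: None}
apply_orthophoto(mesh, "solid")           # step 1: whole photo, solid
apply_orthophoto(mesh, "wet", mask={3})   # steps 2-3: re-issue as wet,
                                          # limited to the masked area
```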
No Z Yet
Some developers have requested that MeshTool use the Z coordinate in a Shapefile to define the elevation of water boundaries. That’s a good idea, but I can’t code it any time soon. The polygon processing in MeshTool is fundamentally 2-d and has no way to retain the Z information during processing. I will try to get to this feature some day, but for now it’s going to have to wait.
The new beta should be available some time early next week, or now if you build from source.
Normally I try to make the X-Plane scenery and modeling system as opaque as possible — I want to make sure that nobody ever actually uses the rendering features that I spend weeks and weeks developing.
But the other night I had a little bit too much to drink, got distracted, and posted these:
In all seriousness, I have been trying to find time to put more documentation up on the Wiki. For these features, you will find an explanation of how the planes work, as well as a link to the planes (with plugins) to download, and a link to the plugin source code (on the SDK site, with sample makefiles for 3 operating systems).
Plugins? Do not panic! While plugins are necessary for some of the features demonstrated here, others can be created without additional programming.
BTW, if the existing documentation uses a concept that is not explained anywhere, please email me. I sometimes leave holes in the documentation by accident.
In my contact with users, on X-Plane forums, and in discussions of computer graphics, video drivers are an easy punching bag. When an app doesn’t work, blame the video driver. The guys at NV and ATI don’t have time to respond to every ridiculous allegation that is posted. Sometimes drivers are borked, but when it turns out to be X-Plane I try to set the record straight.
Driver writers have what might be the hardest combination of programming circumstances:
- Their code cannot crash or barf. X-Plane crashes, you send me some hate email. Your video driver crashes, you can’t see to send me that email.
- The driver has to be fast. The whole point of buying that new GeForce 600000 GTX TurboPower with HyperCache was faster framerates. If the driver isn’t absolutely optimized for speed, that hardware can’t do its thing.
- The driver writers don’t have a lot of time to be fast and correct – the GeForce 700000 GTX TurboPower II with HyperCache Pro will be out 18 months from now, and they’ll have to start all over again.
That’s not an easy set of goals to meet. Today’s video cards are basically computers on a PCIe board, and they do amazing things, but they do it thanks to a fairly complex piece of software.
Applications writers like myself get to outsource the lower level aspects of our rendering engine to driver writers. When a driver doesn’t work right, it’s frustrating, but when a driver does work right, it’s doing some amazing things.
Well, I won’t stop you from lying to X-Plane, but if you do, your add-on may have problems in the future.
Basically: some parts of X-Plane take measurements of real world information and attempt to simulate them. I have previously referred to this as “reality-based” simulation (e.g. the goal is to match real world quantities).
In those situations, if you intentionally fudge the values to get a particular behavior on this particular current version of X-Plane, it’s quite possible that the fudge will make things worse, not better in the future.
This came up on the dev list with the discussion of inside vs. outside lighting. X-Plane 9 gives you 3 global lights for objects in the aircraft marked “interior”, but none for the exterior.
Now there is a simple trick if you want global lights on the exterior: mark your exterior fuselage as “interior” and use the lights.
The problem is: you’ve misled X-Plane. The sim now thinks your fuselage is part of the inside of the plane.
This might seem okay now, but in the future X-Plane’s way of drawing the interior and exterior of the plane might change. If it does, the mislabeled parts could create artifacts.
So as a developer you have a trade-off:
- Tweak the sim to maximize the quality of your add-on now, but risk having to update it later.
- Use only the official capabilities of the sim now, and have your add-on work without modification later.
Javier posted a video of his CRJ on the dev list today. I have not tried the plane, but there is no question from the video that it looks really good. What makes the video look so nice is the careful management of light. Part of this comes from careful modeling in 3-d, and part of it comes from maxing out all of X-Plane’s options for light.
But…what are the options for light on an airplane? I don’t know what Javier has done in this case, but I can give you a laundry list of ways to get lighting effects into X-Plane.
Model In 3-D
To really have convincing light, the first thing you have to do is model in 3-d. There is no substitute – for lighting to look convincing, X-Plane needs to know the true shape of the exterior and interior of the plane, so that all light sources are directionally correct. X-Plane has a very large capacity for OBJ triangles, so when working in a tight space like the cockpit, use them wisely and the cockpit will look good under a range of conditions.
You can augment this with normal maps in 940. Normal maps may or may not be useful for bumpiness, but they also allow you to control the shininess on a per-pixel basis. By carefully controlling the shininess of various surfaces in synchronization with the base texture, you can get specular highlights where they are expected.
The 2-D Panel
First, if you want good lighting, you need to use panel regions. When you use a panel texture in a 3-d cockpit with ATTR_cockpit, X-Plane simply provides a texture that exactly matches the 2-d cockpit. Since the lighting on the 2-d cockpit is not directional, this is going to look wrong.
When you use ATTR_cockpit_region, X-Plane uses new next-gen lighting calculations, and builds a daytime panel texture and a separate emissive panel texture. These are combined taking into account all 3-d lighting (the sun and cockpit interior lights – see below). The result will be correct lighting in all cases.
Even if you don’t need more than one region and have a simple 1024×1024 or 2048×1024 3-d panel, use ATTR_cockpit_region – you’ll need it for high quality lighting.
The 2-d panel provides a shadow map and gray-scale illumination masks. Don’t use them for 3-d work! The 2-d “global lighting” masks are designed for the 2-d case only. They are optimized to run on minimal hardware. They don’t provide the fidelity for high quality 3-d lighting – they can have artifacts with overlays, there is latency in applying them, and they eat VRAM like you wouldn’t believe. I strongly recommend against using them as a source of lighting for a 3-d cockpit.
To put this another way, you really want to have all global illumination effects be applied “in 3-d”, so that the relative position of 3-d surfaces is taken into account. You can’t do this with the 2d masks.
The 2-d panel lets you specify a lighting model for every overlay of every instrument – either:
- “Mechanical” or “Swapped” – this basically means the instrument provides no light of its own – it just reflects light from external sources.
- “Back-Lit” or “Additive” – this means the instrument has two textures. The non-lit texture reflects external light, and the lit texture glows on its own.
- “Glass” – the instrument is strictly emissive.
You can use 2-d overlays not only for instruments but also to create the lighting effect within instruments, e.g. the back-lighting on a steam gauge’s markings, or the back-lighting on traced labels for an overhead panel.
2-d overlays take their lighting levels from one of sixteen “instrument brightness” rheostats. You can carefully allocate these 16 rheostats to provide independent lighting for various parts of the panel.
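That allocation might be modeled like this. The zone names and brightness values are invented; only the 16-rheostat limit comes from X-Plane:

```python
# Sixteen instrument-brightness rheostats, as in X-Plane; the zone
# assignments below are an invented example of how to budget them.
rheostats = [0.0] * 16
ZONE_TO_RHEOSTAT = {"main_panel": 0, "overhead": 1, "pedestal": 2}

def overlay_brightness(zone):
    """Each 2-d overlay reads its lighting level from its zone's rheostat."""
    return rheostats[ZONE_TO_RHEOSTAT[zone]]

rheostats[1] = 0.8   # crank up only the overhead back-lighting
```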
The 3-d Cockpit
The 3-d cockpit allows you to specify 3 omni or directional lights. These can be placed anywhere in the plane, affect all interior objects, and can be tinted and controlled by any dataref. Use them carefully – what they give you is a real sense of “depth”. In particular, the 3-d lights are applied after animation. If a part of the cockpit moves from outside a light’s influence into it, the moving mesh will correctly change illumination. This is something you cannot do with pre-baked lighting (e.g. a _LIT texture).
Finally, ATTR_light_level is the secret weapon of lighting. ATTR_light_level lets you manually control the brightness of _LIT texture for a given submesh within an OBJ. There are a lot of tricks you can do with this:
- If you know how to pre-render lighting, you can pre-render the glow from a light onto your object into your _LIT texture, and then tie the brightness of the _LIT texture to a dataref. The result will be the appearance of a glow on your 3-d mesh as the light brightens. Because the lighting effect is pre-calculated, you can render an effect that is very high quality.
- You can create back-lit instruments in 3-d and link the _LIT texture to an instrument brightness knob.
- You can create illumination effects on the aircraft fuselage and tie them to the brightness of a beacon or strobe.
There are two limitations of ATTR_light_level to be aware of:
- Any given triangle in your mesh can only be part of a single ATTR_light_level group. So you can’t have multiple lighting effects on the same part of a mesh. Plan your mesh carefully to avoid conflicts. (Example: you can’t have a glow on the tail that is white for strobes and red for beacons – you can only bake one glow into your _LIT texture.)
- ATTR_light_level is not available on the panel texture. For the panel texture, use instrument brightness to control the brightness of the various instruments.
I have a sample plane that demonstrates a few of these tricks; I will try to post it on the wiki over the next few days.
NVidia and ATI provide control panels for Windows and Linux. These control panels let users configure aspects of 3-d acceleration, often in ways that bypass an application’s request. This post is a rant about such control panels, but in order to understand the rant, you must understand how X-Plane communicates with a video card.
Stranger In A Strange Land
X-Plane is an OpenGL application. OpenGL (the “Open Graphics Library”) is a “language” by which X-Plane can tell any OpenGL graphics card that it wants to draw some triangles.
Think of X-Plane as an American tourist in a far away land. X-Plane doesn’t speak the native language of ATI-land or NVidia-Land. But if the hotel says “we speak OpenGL”, then we can come visit and ask for a nice room and a good meal.
Of course, if you have ever been an American tourist (or live in a country that is sometimes infested with American tourists 🙂) you know that sometimes American tourists who speak only English do not get to see the very best a country has to offer. Without local knowledge, the tourist can only make generic requests and hope the results turn out okay.
An example of where this becomes a problem is full-screen anti-aliasing (FSAA). OpenGL allows an application to ask for anti-aliasing. The only parameter an OpenGL application can specify is: how much? 2x, 4x, 8x, 16x. But as it turns out, FSAA is a lot more complicated. Do we want supersampling or multisampling? Coverage Sample Anti-Aliasing? Do we want a 2-tap or 5-tap filter? Do we want temporal anti-aliasing?
As it turns out, NVidia-land is a very strange country, with many flavors of FSAA. Some are very fast and some are quite slow. And when X-Plane comes in and says “we would like 16x FSAA”, there is really no guarantee whether we will get fast FSAA (for example, 16x CSAA) or something much slower (like 16x supersampling). X-Plane is not native to NVidia-land and cannot ask the right questions.
Control Panels
So where do the control panels come in? Well, if X-Plane can only ask for “16x FSAA”, how can NVidia give users control of the many options the card can do? The answer is: control panels. The NVidia control panel is made by NVidia – it is native to NVidia-land and knows all of the local tricks: this card has high-speed 5-tap filtering, this card does CSAA, etc.
At this point I will pass on a tip from an X-Plane user: you may be able to get significantly faster FSAA performance with an NVidia card on Windows by picking FSAA in the control panel rather than using X-Plane’s settings. This is because (for some reason) X-Plane’s generic OpenGL request is translated into about the slowest FSAA you can find. With the control panel you can pick something faster.
Bear in mind that only some users report this phenomenon. My NVidia card works fine even with X-Plane setting FSAA. So you may have to experiment a bit.
It Gets Weirder
When it comes to full-screen anti-aliasing, I can see why NVidia wants to expose all of their options, rather than have X-Plane simply pick one. Still, which do you think is best for X-Plane:
- Multisampling
- Supersampling
- Coverage Sample Anti-Aliasing
- Some Mix of the Above?
Antialiasing has become very complex, and some of these modes will absolutely wreck your framerates.
And FSAA is one of the better understood options. How about these:
- Adjust LOD Bias
- Anisotropic Filtering
- Anisotropic Filtering Optimization
(I know what these do, and you shouldn’t be messing with them.)
How about these?
- CPU Multi Core Support
- Multi-Display/Mixed GPU Acceleration
I haven’t found any description of what those do, but I have reports from users that setting them to the “wrong” value (whatever that is) absolutely destroys framerates.
Suffice it to say, as an applications developer, the situation is a support nightmare. Users can modify a series of settings at home that we cannot easily see, that are poorly documented, that cause performance to be very different from what we see in our labs, sometimes for the worse, sometimes for the better.