I am reading Predictably Irrational, by Dan Ariely. It’s a great read – definitely recommended – describing the consistent, irrational biases that pervade human decision making.
The first chapter discusses our tendency to make relative, rather than absolute comparisons. When deciding whether a product is a good value, we will look at the pricing of similar models, rather than the actual relationship between the product and the money spent. (The implication being that a company can make a product seem cheap without changing its price by adding a second, more expensive but similar “decoy” product. Poof! The cheaper product is now a good deal.)
This behavioral tendency explains user reaction to the rendering settings, a subject that makes me irrational on a regular basis. 🙂
Time to Change the Settings
The rendering settings will let you select a range of sim detail between some minimum and maximum value. These values are based on the software, not hardware – because we don’t actually know how much load any given hardware can support (and with the interaction between settings, finding such a cap is basically impossible). We can only give you a range of choices and let you pick ones that work well.
When a new version of the sim comes out, we sometimes have to recalibrate the settings. If the minimum level of detail the sim can support increases, the minimum setting will be mapped to a new, more expensive behavior. And if the maximum detail the sim can present has increased, the maximum setting will be similarly remapped. We don’t have much choice – if we need more “range” on the slider we have to recalibrate it.
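To make that concrete, here is a minimal sketch of the remapping in Python. The tree-count setting and all of the numbers are invented for illustration; they are not X-Plane’s actual values.

```python
def trees_for_slider(fraction, lo, hi):
    """Map a slider position (0.0 to 1.0) onto whatever detail range the software picked."""
    return int(lo + fraction * (hi - lo))

# Hypothetical old release: the slider spans 100 to 500 trees per square km.
old_max = trees_for_slider(1.0, 100, 500)    # 500 trees at 100%

# Hypothetical new release: the ceiling moved, so we recalibrate to 100 to 1000.
new_half = trees_for_slider(0.5, 100, 1000)  # 550 trees at 50%

# The old "maximum" now lives near the middle of the new slider;
# nothing was taken away, the scale underneath simply changed.
print(old_max, new_half)
```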
I Can’t Max Them Out
Here’s where human behavior comes in. Humans make decisions based on the relative comparison of easily compared things. Given one property that is hard to measure and one that is easy, we’ll base the decision on the easy one. Given a choice of a trip to Rome, a trip to Rome with free breakfast, and a trip to Paris, we’ll pick Rome with the free breakfast, opting for the easy-to-measure relative value. (Is the difference between a trip to Paris and Rome really less than the value of a breakfast? Probably not, but it’s a lot harder to evaluate.)
So when we recalibrate the settings, we inevitably hear this complaint:
“I used to be able to set the sliders to the maximum setting and now I can’t.”
Previously I would have said “Why the hell do you care?!?!” — if the new slider’s 50% position looks the same as the old slider’s 100% position, why not just set it to 50% and go home happy?
But of course that’s not how we think – the immediately comparable is of immediate concern. Ironically we could make the sim less useful but more pleasing by limiting the maximum range of the sliders. Now more users could feel the joy of having everything “set on max” even if the ultimate utility of the sim is reduced.
This One Goes To 11
I’m not sure there’s a way around this. The best suggestion I’ve heard so far is that if we could attach some kind of units to the settings, then at least there would be a quantitative indication that the user isn’t losing some perceived value. But I suspect that even this misses the point; it doesn’t matter that you’re still getting 500 trees per square km – what matters is that you are getting the most you possibly can! (Perhaps this psychology also explains why people like to overclock.)
Austin tried to fight the psychology of “maximum sliders” by naming all of our settings absurd things. Ever wonder why “default” is the lowest object setting, and we almost immediately jump into “extreme”, “too many”, “insane”, etc.? He was trying to fight a losing battle against relative expectations. The natural human behavior is to pick some relative position for calibration, and based on that, every user who has to put objects below the center setting is going to be unhappy about having to use “lower than average” settings. Austin’s naming convention may be silly, but it does actually do a little bit to fight this.
Food for thought: how does having multiple levels of reflections change user expectations?
If you are developing an add-on for X-Plane, you should always check the log file (Log.txt) after running. (Remember, the sim must quit before the log file is completely written to disk.)
There are three times you should check the log:
- Any time your content doesn’t look the way you expect. (Perhaps the error is described in more detail in the log file.)
- Any time there is an error (especially the “a problem with the package X” message, which is always accompanied by details in the log).
- Right before posting your work as a last check.
Posting errors to the log gives us a way to provide verbose feedback to authors when the sim hits a problem without making the user experience too horrible; minor errors are logged and major errors are mentioned to the user only once per package and logged.
The flip side of this is that if you are working on content, you need to seek out these error log messages; the alternative is to have the sim quit every time something goes wrong.
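If you want to automate that last check before posting, a few lines of Python will do it. Note that the patterns below are my guesses at interesting strings, not the sim’s exact wording; adjust them to whatever your Log.txt actually prints.

```python
import re
import sys

# Strings worth flagging; illustrative guesses, not X-Plane's exact messages.
SUSPECT = re.compile(r"error|warning|failed|missing|problem with the package", re.IGNORECASE)

def scan_log(path="Log.txt"):
    """Return (line number, line) pairs that look like trouble."""
    hits = []
    with open(path, "r", errors="replace") as f:
        for lineno, line in enumerate(f, 1):
            if SUSPECT.search(line):
                hits.append((lineno, line.rstrip()))
    return hits

if __name__ == "__main__":
    for lineno, line in scan_log(sys.argv[1] if len(sys.argv) > 1 else "Log.txt"):
        print(f"{lineno}: {line}")
```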
I seem to be in a philosophical mood these days with my blog posts…thought for the day: the human mind easily goes from the specific to the general. Our brains are generalizing machines, pattern matchers finding the rule in the noise.
My preference in creating new scenery-system features is to make them very limited, and my reasoning is: our brains don’t go backward very well. We do not go from the general to the specific.
Now you might think: when making a scenery-system addition, the best thing would be to have a general feature, more useful because it can be used everywhere. But I say: the most important thing is to fully understand the feature – otherwise the feature comes out buggy.
(Consider the piles and piles of bugs and weird behaviors that you get when combining OBJ animation with OBJ hard surfaces.)
Since the human brain doesn’t go from general to specific well, it is hard to start with a rule (“let’s allow feature X in all parts of the scenery system”) and comprehensively derive all of the implications; it is human nature to be surprised later by some unintended side-effects.
It is always easier to extend a feature later to its natural full implications than to declare certain uses illegal later, after authors have planned (or started) to use the feature in that way. If the generalization of the feature makes sense, extending it is often quite painless.
Texture Paging – Scope For Now
Texture paging is the ability for X-Plane to raise and lower the resolution of scenery textures dynamically as you fly. This means more VRAM used for nearby things and less for far-away things. In practical terms, this reduces VRAM used by orthophotos by down-sampling the far-away textures, making larger orthophoto scenery packages possible. As you fly, the sim reloads some textures at higher resolutions and some at lower. The cost of the feature is load time while you fly, which burns up some extra CPU cores.
It is my hope that we will productize some very simple texture paging in the next major patch of X-Plane 9 (that would be 920, not 902). But the usage will be pretty specific:
- Texture paging will only be available for .ter and .pol textures (we can extend to other scenery types later if it makes sense).
- Texture paging will require changing the .ter and .pol files (X-Plane will not automatically analyze your scenery to see what can be paged.)
- Texture paging will not be available for ENV scenery.
- If you share textures and texture page, the results will probably be really bad and cause chaos. Be sure to use only one .ter or .pol file (and reference that text file only once in your DSF definitions section) if you want sane paging. We can extend paging to shared textures in the future, but for now orthophotos are the intended target.
I am also deferring work on dataref-driven textures; we’ll get there eventually, and the infrastructure from the pager will make it easier. But dataref-driven textures really need to be available in a lot more places – it’s a bigger, more complex feature* and I can’t keep adding scope to 920.
Make New Meshes!
While paging will be available for both overlays (using .pol files) and base meshes (using .ter files) I strongly, strongly recommend going the base-mesh .ter route. RealScenery sent me their new “State of Washington” package to use as test material; I was pleasantly surprised at the high framerate. Part of that comes from them using base meshes and not overlays.
Overlays cause the sim to draw the scenery twice (first the old scenery, then your overlay), burning a lot of pixel shader and fill power. Base meshes simply replace the old mesh which is at least twice as efficient.
(I’m just going to keep beating the dead horse of base meshes because I believe that the sooner everyone moves toward base meshes, the more bang for our hardware buck everyone gets.)
* In particular, remember that texture paging happens on threads. But datarefs can come from plugins that are not threaded! Insert anarchy here…
Since X-Plane 9 went final I’ve been going in about 5 different directions with the code, and as a result my posts have diverged pretty far from my charter within the company, namely scenery. Here’s some of what I’m looking at on the scenery front:
Texture Paging
Texture paging is a new feature where X-Plane can change the resolution of orthophotos while you fly. The big limitation of orthophoto-scenery right now is that X-Plane loads every orthophoto at the maximum user-defined resolution when the underlying DSF/ENV is loaded. This can cause X-Plane to run out of virtual address space and crash. With texture paging, only the nearby textures are loaded at high resolution.
Far away textures may be low res, but you’ll never notice because they are far away (and thus you’re seeing them over only a few pixels on screen anyway).
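Here is a back-of-the-envelope version of that claim, with made-up numbers (a 2048 x 2048 orthophoto covering roughly a kilometer of ground, a 1920-pixel-wide screen): the on-screen size of a far-away tile tells you how much texture resolution is actually visible, and everything beyond that is wasted VRAM.

```python
import math

def visible_texels(tile_size_m, distance_m, screen_width_px=1920, fov_deg=60.0):
    """Roughly how many pixels a ground tile covers on screen at a given distance."""
    angle = 2.0 * math.atan((tile_size_m / 2.0) / distance_m)  # angle subtended by the tile
    return screen_width_px * angle / math.radians(fov_deg)     # its share of the field of view

def mip_level_needed(full_res_px, tile_size_m, distance_m):
    """Pick the smallest mip level whose resolution still covers the on-screen size."""
    needed = visible_texels(tile_size_m, distance_m)
    level = 0
    while (full_res_px >> (level + 1)) >= max(needed, 1):
        level += 1
    return level

for d in (500, 5_000, 50_000):
    print(f"{d:>6} m away -> mip level {mip_level_needed(2048, 1000, d)}")
```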
The cost of this technique is that textures are being loaded over and over and over. But this cost is made less painful by two features:
- DDS texture loads are very fast and cheap, especially at very low resolutions.*
- With the next major patch, texture loading will be entirely on a second core (if you have one) and can even span multiple cores.
This feature will require modification of scenery packs in that you’ll have to add some “paging” information to your .ter files; I will try to get a preliminary spec posted soon. Because you only have to modify text files, it should be possible to create “paging patches” to existing DSF orthophoto sceneries that are fairly small.
I do not know if paging will be available for .pol files. My guess is yes, but I reiterate that using .pol files for large areas of orthophotos is a bad idea! Make a new mesh!
Improved Multi-Core Performance
This is going to be an on-going process through the entire v9 run and beyond, but basically with every change to our rendering engine (and we make some serious modifications with almost every patch to keep it modern and competitive) we try to move toward using the new hardware (meaning machines with 2-4 cores) more efficiently. Some of this will be in 920, although my guess is we’ll need at least one more patch after 920 to really see the improvement.
Tools
It’s going to be a little bit before I can put more time into the various scenery tools. My top priority is to keep up with user feedback and work on MeshTool. Hopefully we’ll also declare WED final soon. But for now, since I am working on cockpit and airplane modeling features, my next set of work will probably be for the airplane authors.
Shaders
I do want to put additional shader work into v9. I realize I am at the edge of provoking a bunch of rants in the comments section, so let me save you some time:
“Hey Ben, stop working on eye candy and create some more tools. I don’t want a shader feature that makes my 1.2 ghz G4 with a GeForce 2 MX run any slower. You should finish the tools so they do exactly what I want!”
Okay, I admit, that was totally unfair…there is a lot of truth in the complaints about shaders vs. tools.
- I really do try to keep an eye on system requirements, particularly once we’ve shipped. I’m going to try to prioritize shader features that improve existing rendering techniques, rather than introduce new rendering techniques, so we don’t seriously lower fps during the version run. But also bear in mind that shaders can be turned off, and there are users who have GeForce 9s and such.
- Tools are very important, hence the effort to get MeshTool out. But tools without engine work aren’t very useful either; most of the engine work we do is needed to keep performance up so that new scenery (made with those new tools) doesn’t bury the sim. For example, MeshTool without texture paging would be a dead end…you could easily make a MeshTool scenery that you can’t fly.
So in planning what goes out when, I look for clumps of features and tools that can go together to make some sense: WED to use apt.dat 850, texture paging to go with MeshTool. It wouldn’t make sense to defer texture paging to make the next tool while MeshTool is waiting for engine enhancements.
* A DDS file already contains pre-compressed copies of the texture at every smaller size. So loading a DDS at low res involves loading a small amount of data from disk to memory and sending it to the graphics card.
By comparison, a PNG file only contains the maximum size, so to load a PNG at low res, we load the largest size, shrink it on the fly, then apply DDS compression.
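As an illustration of why the DDS path is so cheap, here is the arithmetic for a DXT-compressed DDS: the mips are stored largest-first after a 128-byte header, so loading a low-res version means reading only the small byte range of the mip you want. This sketch describes the standard DDS/DXT layout, not X-Plane’s actual loader code.

```python
def dxt_mip_size(width, height, block_bytes=8):
    """Size in bytes of one DXT-compressed mip level (4x4 texel blocks, 8 bytes each for DXT1)."""
    blocks_w = max(1, (width + 3) // 4)
    blocks_h = max(1, (height + 3) // 4)
    return blocks_w * blocks_h * block_bytes

def mip_byte_range(width, height, skip_levels, block_bytes=8, header_bytes=128):
    """Byte offset and length of a lower mip, after skipping the larger levels above it."""
    offset = header_bytes
    w, h = width, height
    for _ in range(skip_levels):
        offset += dxt_mip_size(w, h, block_bytes)
        w, h = max(1, w // 2), max(1, h // 2)
    return offset, dxt_mip_size(w, h, block_bytes)

# A 2048x2048 DXT1 texture: the full-res mip is about 2 MB,
# but the 256x256 mip three levels down is only 32 KB of file data.
print(mip_byte_range(2048, 2048, 0))
print(mip_byte_range(2048, 2048, 3))
```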
Austin posted another State-of-the-Union yesterday, and he was good about only mentioning things that are fairly close to completion (e.g. the panel texture optimizations) as well as things that are general (e.g. threading optimizations). After a long beta, when 9.0 went final I sort of went crazy and started a whole series of new projects at the same time; here are some things that I have in progress:
- Major rewrites to the texture management code for both better orthophoto handling and better threaded performance.
- Moving the sim to unicode and true-type fonts, with a new system for translating the application’s strings. (I’ll post more on this soon – thanks to those who have already volunteered to work on translations!)
- Working on new shader technology to take better advantage of DirectX-10-class hardware.
- A big pile of airplane features to complement what was added to 9.0.
That’s a bit much for the next patch, so it’s likely that only some of these things will actually make it into the next patch, and I’m not sure what you’ll see. A lot of sim work goes in as a series of small independent pieces; only the last parts of a feature are user-visible.
For example, the first part of the texture work was simply rearranging how the code was structured to make new things possible. Change to the user experience: none. The second part changed threaded handling of textures, which at least shows up as performance improvements. But both set the stage for new features later.
So even if a lot of the above doesn’t make it into the next major patch, a lot of ground work is going in, setting us up for features later.
In a previous post I discussed the basic ideas behind using multiple threads in an application to get better performance out of a multi-core machine.
Now before I begin, I need to disclaim some things, because I get very nervous posting anything involving hardware. This blog is me running my mouth, not buying advice; if you are the kind of person who would be grumpy if you bought a $3000 PC and found that it wouldn’t let you do X with X-Plane (where X might be running at a certain rendering setting or framerate, or doing your laundry), my advice is very simple: don’t spend $3000. So…
- I do not advocate buying the biggest, fastest system you can get; you pay a huge premium to be at the top of the hardware curve, particularly for game-oriented technologies like fast-clock CPUs and high-end GPUs.
- I do not advocate buying the Mac Pro with your own money; it’s too expensive. I have one because my work pays for it.
- 8 cores are not necessary to enjoy X-Plane. See above about paying a lot of money for that last bit of performance.
Okay…now that I have enough crud posted to be able to say “I told you so”…
My goal in reworking the threading system inside X-Plane for 920 (or whatever the next major patch is called) is, among other things, to get X-Plane’s work to span across as many cores as you have, rather than across as many tasks are going on. (See my previous post for more on this.)
Today I got just one bit of the code doing this: the texture loader. The texture loader’s job is to load textures from the hard drive to the video card (using the CPU, via main memory) while you fly. In X-Plane 901 it will use up to one core to do this, that core also being shared with building forests and airports.
With the new code, it will load as many textures at a time as it can, using as many cores as you have. I tested this on RealScenery’s Seattle-Tacoma custom scenery package – the package is an ENV with about 1.5 GB of custom PNGs, covering about half of the ENV tile with non-repeating orthophotos.
On my Mac Pro, 901 will switch to KSEA from LOWI in about one minute – the vast majority of the time is spent loading about 500 PNG files. The CPU monitor shows one core maxed out. With the new code, the load takes fourteen seconds, with all eight cores maxed out.
(This also means that the gap between when the scenery shifts and when the new scenery has its textures loaded would be about fourteen seconds, rather than a minute, so even a very fast flight is unlikely to reach the new area before the textures are loaded and see a big sea of gray.)
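Stripped of everything X-Plane-specific, the pattern is just a work queue feeding one worker per core. The sim’s actual loader is C++ and considerably more involved; this is only a toy Python sketch of the idea.

```python
import glob
import os
from concurrent.futures import ThreadPoolExecutor

def load_texture(path):
    """Stand-in for the real work: pull the file off disk and 'upload' it."""
    with open(path, "rb") as f:
        data = f.read()
    return path, len(data)

def load_all(paths):
    # One worker per core; each worker grabs the next texture as soon as it finishes,
    # so total load time shrinks roughly with the number of cores available.
    with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
        return list(pool.map(load_texture, paths))

if __name__ == "__main__":
    results = load_all(glob.glob("*.png"))
    print(f"loaded {len(results)} textures")
```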
Things to note:
- Even if we don’t expect everyone to have eight cores, knowing that the code can run on a lot of cores proves the design – the more the code can “spread out” over a lot of cores, the more likely the sim will use all hardware available.
- Even if you only have two or four cores, there’s a win here.
- Texture load time is only a factor for certain types of scenery; we’ll need to keep doing this type of work in a number of cases.
This change is the first case where X-Plane will actually spread out to eight cores for a noticeable performance gain. Of course the long-term trend will be more efficient use of multi-core hardware in more cases.
The global scenery comes in two packages in version 9: -global terrain- and -global overlays-. The global terrain package contains the base meshes (with beaches); the global overlays contain roads, forests and objects.
Why is this scenery split in half? The answer is unfortunately not “so you can replace the base mesh but keep the overlay 3-d stuff.” That would have been clever, but I must admit I didn’t think of it at the time; MeshTool didn’t exist and people just weren’t making base meshes.
My actual goal was to make it cheaper to replace a significant number of overlays. I don’t know if we’ll ever do this, but one of the obstacles to patching global scenery is the file size; we can only hope to replace a fraction of the files during a version run before the web update size gets too large. But most of that size is in the base mesh. With the base mesh and overlay split, we could potentially replace more overlays.
(Note: we did not actually issue any DSF replacements during the v8 run, and I don’t know if we will or will not during the v9 run. The only thing I am sure of is that if we provide replacement v9 DSF tiles, they’ll be a free download, like all v9 patches…if you buy v9, you get everything.)
The fundamental problem with replacing the base mesh but not the overlays is that the scenery system provides no good way to do this. The Global Scenery folder is always scanned after the Custom Scenery folder* so you’d have to install custom scenery into the global scenery folder with the right file name to get access to the overlay content.
I’m not sure what to do about this yet; the trend in scenery development is for authors to want more control to replace individual parts of the system; the overlay system provided part of that.
* Users with v9 beta DVDs will have the two global scenery folders in the Custom Scenery folder. But — the sim detects this and simply treats them as if they were in the Global Scenery folder, ignoring alphabetic ordering.
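For what it’s worth, here is the priority rule as I read it from this post, as a Python sketch; the package-name detection is hypothetical and the sim’s real folder handling may differ in its details.

```python
import os

# Hypothetical names used to detect global packages sitting in the wrong folder.
GLOBAL_PACKAGE_NAMES = {"global terrain", "global overlays"}

def scenery_priority(custom_dir="Custom Scenery", global_dir="Global Scenery"):
    """Return scenery packs highest-priority first: custom packs, then global scenery."""
    custom = sorted(os.listdir(custom_dir)) if os.path.isdir(custom_dir) else []
    global_packs = sorted(os.listdir(global_dir)) if os.path.isdir(global_dir) else []

    # Per the footnote: global packages found inside Custom Scenery (v9 beta DVDs)
    # are pulled out and treated as if they lived in the Global Scenery folder.
    truly_custom = [p for p in custom if p.lower() not in GLOBAL_PACKAGE_NAMES]
    relocated = [p for p in custom if p.lower() in GLOBAL_PACKAGE_NAMES]

    return truly_custom + global_packs + relocated
```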
I had to do some research into compression algorithms recently, because we had to squeeze the global scenery onto fewer DVDs for retail distribution. We did this mostly by completely filling the DVDs, but we also had to use 7-zip compression to get about a 10% improvement in compression ratios.
DSFs are not the best test of compression efficiency because the format has been organized to help algorithms like zip compress them – the improvement with 7-zip and RAR was a lot less than you’d get with, say, a text file.
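If you want to see the difference for yourself, Python’s standard library has both algorithms: zlib is the deflate that zip uses, and lzma is what 7-zip defaults to. A quick sketch; the ratios you get on DSFs will of course differ from whatever file you feed it.

```python
import lzma
import sys
import zlib

def compare(path):
    with open(path, "rb") as f:
        data = f.read()
    deflated = zlib.compress(data, 9)          # deflate, the algorithm inside zip
    lzma_out = lzma.compress(data, preset=9)   # LZMA, 7-zip's default algorithm
    print(f"original: {len(data):>12,} bytes")
    print(f"deflate : {len(deflated):>12,} bytes")
    print(f"lzma    : {len(lzma_out):>12,} bytes")

if __name__ == "__main__":
    compare(sys.argv[1])
```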
Anyway, my point here is: let’s not use RAR – it’s the new GIF. Every now and then a file format comes along with some kind of restriction that keeps everyone from doing everything with it. In GIF’s case, you had to buy the right to create GIFs, and in the case of RAR you have to buy the right to compress RARs.
I think that having these kinds of entanglements in fundamental low level file formats (like how do we compress our data or save our images) is really bad for the software community as a whole; it balkanizes raw materials. And file formats stick around for a long time – even though GIF is made obsolete by PNG you’ll still find them all over the web.
The lure of RAR is of course higher compression ratios than zip. But 7-zip can do the same thing, and unlike RAR, has the potential to be completely free, which means it can be completely ubiquitous.
Macintosh users understand the problem here: for the longest time “StuffIt” archives were the standard way to compress data on the Macintosh. The file format was proprietary, so you couldn’t even make your own program work directly with StuffIt archives. Now that zip has taken over on the Mac, getting data between Mac and Win is easy – you can just zip something using the operating system and send it to all your friends.
Let’s not go back into the “bad old days” of proprietary utilities and a lack of integration with regular apps. I say: if you can stand to use zip or bzip instead of RAR, vote for what’s open and has a future, not what is slightly better now but will just be a pain in the ass in three years.
I don’t know how much of a problem this is yet, or how much of a mess it’s going to make of people’s scenery. Here’s the background:
- ASCII defines 128 character values, mostly letters like A-Z. With ASCII, you can write English and that’s just about it.
- The byte that ASCII is stored in on all modern computers can store 256 values.
- Clever people got the idea to put some more letters in the other 128 values to create characters like é and å.
- People defined different “codepages” that have different sets of characters in those “upper 128” slots. So one code page might be good for French, another for Russian.
- Modern software uses unicode characters, which have a lot more than 256 values, and can thus hold all sorts of characters in one string.
Code pages were around for a while, but they’re not a good idea. The problem with code pages is that the same numeric values are used for different letters. The result is that a correctly written Russian document, when converted to a different code page, looks like gibberish. And if you want one document with both French and Russian, well, one code page doesn’t do you much good.
Now X-Plane’s handling of non-ASCII characters is pretty poor in version 9.00 (and all previous versions). It will draw ASCII and take keyboard input from ASCII but not much else. If you hit the é key on your foreign keyboard, probably nothing will work.
But it turns out there is one way to use foreign characters in X-Plane – I just discovered it tonight. If you use Windows and your system’s codepage* is set for a foreign language, you can use those foreign language characters in an OBJ file to name a file on disk with the same name. In other words, you can have textures named été.png and it will work.
Sort of. If you then change your system to work in Russian (which changes the code page), your texture will stop working. The reason things stop working is that the file system uses unicode; that is, the OS knows that été requires a Latin character set that’s French-friendly, but X-Plane is using Russian since the system is set that way. The result is that the file system has no way to name the file in Russian and we fail to load the texture.
So using the “high 128” characters from your system’s code page to make non-ASCII characters is a bad idea because your scenery won’t work on other people’s computers.
But it’s going to get worse in the future. X-Plane is going to start using UTF8 in a lot of places. UTF8 encodes unicode as a sequence of bytes, using more than one byte for non-ASCII characters, but as a result it uses the “high 128” character codes for very different things. été.png in UTF8 comes out quite different.
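A quick Python demonstration of both problems: the same bytes mean different letters under different code pages, and UTF8 spells the same name with different bytes entirely.

```python
name = "été.png"

latin1 = name.encode("latin-1")   # the bytes a French/Western code page stores (é = 0xE9)
utf8 = name.encode("utf-8")       # what a UTF8-based sim or file system expects

print(latin1)                     # b'\xe9t\xe9.png'
print(utf8)                       # b'\xc3\xa9t\xc3\xa9.png'

# Read the Latin bytes back on a machine set to a Russian code page (cp1251)
# and the "same" file name turns into gibberish:
print(latin1.decode("cp1251"))    # йtй.png
```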
I’m not sure how we’ll handle this yet (use UTF8 in the scenery system or have some kind of backward compatibility). But for now I can only advise one thing: use ASCII only for your file names. In fact, a good guideline for filenames for the scenery system is to use only numbers, letters, and the underscore.
I just finished about 15 pages of emails, mostly to Andrew McGregor (who is the very first MeshTool user) and also Benedikt Stratmann (whose x737 is on the bleeding edge of plugin-based aircraft) and AlpilotX (we all know about his forests). Probably all three are wondering how the hell I have time to write so much on weekends. (The answer is of course that my frisbee game got rained out. Foo!)
In the meantime, probably about 300 other people who have emailed me in the last few months are wondering why the hell they have heard nothing from me. My in-box looks like a mail server exploded. It’s not pretty.
So let me blog for a moment about the “relationship problem”. Simply put, there are two of us (Austin and myself) and about a thousand of you (third party developers doing cool and interesting things with X-Plane) plus significantly more users, some of whom have some very weird tech support problems.
In this environment, our algorithm for who gets “developer attention” is pretty broken and subject to total thrash…there is a huge element of random luck (who emails me when I am recompiling the sim vs. debugging a nasty bug).
I’m aware both of how hard a task Austin and I face and of how frustrating it is for a third-party developer, because I’ve been on both sides. Before I worked for LR, I was a third party, and I was always astounded that Austin couldn’t remember what we talked about last week.
Then I started working for the company and saw what it’s like. Imagine sitting at a train station watching the trains go by* (at full speed, not stopping) and someone says “last week I waved to you out the window and you waved back, remember me?”
So I would advise three things to the neglected third party:
- Be firm – you may need to ping us again because at busy times we can’t always keep track of who has asked for what.
- Be patient – if you need something the week we’re burning DVD masters for a second time (because the first set failed at the factory) then you’re going to have to wait.
- Don’t take it personally…a lack of a response usually indicates overload inside the company, not a poor opinion of your work!
This blog post has rambled enough, but it may feed well into the next one.
* This analogy is totally stolen from “How Doctors Think” by Jerome Groopman – he uses it to describe the task of primary care physicians trying to spot the early signs of a very rare illness among a fast-moving train of patients who are almost entirely healthy. I strongly recommend this book particularly for Americans – we need to understand the forces at work in shaping the quality of our medical care!