Just an FYI: when it rains it pours. Normally betas increase load on our set of update servers. To compound this, one of them is suffering a midlife crisis^H^H^H^H^H^H^H^Hhard drive failure.* We’re working on it now; hopefully it will be resolved in the next 24 hours.
EDIT: the update server is back up – our host not only swapped out the drive, but the whole box. We’ll have to take it down one more time in the future, but for the most part I think we’re out of the woods.
Another note on servers: Chris has restructured X-Plane for Android to separately download the art assets from our servers, rather than contain all art assets in the actual download. What he found after several painful weeks was that the Android store is not yet reliable for large apps. While the official app size limit is 50 MB, many phones have problems with their configuration that cause downloads to fail. When the user buys our app and the download fails, they get angry at us. (X-Plane may have been, until it was restructured, one of the largest Android game APKs. The other games with large amounts of 3-d content were already doing separate downloads.)
We originally wanted to build a monolithic app (everything in the APK) because we thought that this would provide the simplest, easiest configuration to maintain, and thus hassle-free installation for our users. You get the APK, you install it, you fly! Unfortunately, the Android Market isn’t reliable for such a large download, so we had to re-evaluate.
The new system downloads only the core app from the Android Market and then pulls the art assets from one of our servers. So far this appears to be an improvement. If/when Google provides an integrated solution, we will probably switch back to it to simplify the process again (right now we have two points of failure: the Android Market and our server farm, which, per the above notes, sometimes does fail). But for now, we’ll host the apps and try to give people the best download experience we can.
Finally, I will try to roll out at least a beta of new installers some time this week. The new installer simultaneously downloads from multiple servers, with a more efficient HTTP implementation; this should hopefully result in better download times and also lower server load per demo.
* Chris pointed out: most normal humans don’t know what this ^H^H^H^H is about…it’s nerd-speak for the delete key, used to erase text you just typed. ^H is control-H, which you may find works just like the delete key. Yes, I’m a huge nerd.
Alpilotx pointed me toward a thread on the org discussing Austin’s work on the weather system. The thread turned into a bit of a he-said-she-said with regards to Outerra and whether it could some day be combined with X-Plane.
This blog post will be a discussion of various general approaches to scenery and the trade-offs we have to consider, e.g. plausibility vs. realism, and procedural vs. algorithmic and data-driven design. But first, a brief note on Outerra. As I have said before, we are already aware of Outerra, so there is no need to email us. The bottom line is that we have a set of mostly-done features for X-Plane 10, our goal is to finish X-Plane 10, and we are not spending even one brain cell considering putting a new rendering engine into X-Plane while we are trying to get 10.0 done.
Defining Some Terms
One of the problems with comparing scenery system approaches is that a real, productized approach to scenery rarely fits into a perfect bucket or matches a single theoretical technique. So here are some approximate terms, designed to generally describe an approach. They’re not going to be perfect fits, and even the definitions will fluctuate in different contexts and forums.
- We can say scenery is plausible when it looks like it might exist somewhere in the world. Plausible means that roads don’t go straight up over a cliff, trees don’t grow in the ocean, etc. In other words, plausible scenery is scenery where absurd things don’t happen. Plausible scenery is great when you don’t know what an area should look like. A lack of plausibility is often a bug.
- We can say scenery is realistic when it correlates closely with what is really present at a given location on the Earth. So if there really is a lake behind my house, realistic scenery has that lake. Plausible scenery might have a lake, a forest, or something else believable for where I live (the Northeastern United States). A giant sandy desert would not be plausible for my location.
- We can say scenery is procedural if the detail in the scenery comes from some kind of algorithm that produces results. For example, a fractal coastline is procedural.
- We can say scenery is data driven when the detail comes from some source of external input data. Our mountains are currently data driven – that is, the mountain shape basically comes directly from the DEMs we use.
- We can say scenery is artist driven if the look of the scenery comes from art assets created by an art team.
- We can say scenery is algorithm driven if part of its look comes from the transformational process that converts data from one form to another.
(I’m sort of drawing a line in the sand here with procedural vs. algorithmic, but what I’m trying to contrast is a program that generates ‘information’ out of thin air vs. a program that creates information out of other information. For example, in X-Plane 9, European capillary roads were procedural: we had no real data, so I wrote an algorithm that made them up in a manner that was consistent with the underlying terrain. In version 10, these roads will be algorithmic; we take OSM data and then do some processing to make it suitable for X-Plane. This is definitely a line-in-the-sand kind of definition.)
So Are We Plausible or Realistic?
So the first question is: is the goal of X-Plane global scenery plausibility or realism? The answer is: a bit of both. Austin’s posts on the subject virtually always bring up plausibility. The reason for this is simple: he is not too worried about the amount of realism we’ve put into the scenery, but he is not happy with the bugs. He wants the bugs gone. So every time he and I speak, he says “and make sure it’s plausible!”
But we’re not going to remove realism just to fix plausibility bugs. I expect that the next global scenery render will be at least as realistic as the last – that is, we’re going to use better data and we’re not going to make up data where we had real information before.
There are limits to realism. We don’t expect the global scenery to ever be as realistic as a custom scenery package for a small area. But realism does matter. Part of the joy of flying in a flight simulator is seeing the real world. Where we can have more realistic global scenery, we consider it a win, and we are always looking to be more realistic than the last render.
Plausibility for the version 10 render is going to take two forms:
- Bug fixes. Any time something screwy happens, it’s not plausible. Sometimes these are code bugs that must be fixed, and sometimes they are data conflicts. For example, the water data says “water” but the elevation data says “hill”. Combine them and you get water going up a hill. We have to write code to resolve this, somehow.
- We are reworking the way cities are rendered, because the old approach (procedural buildings with algorithmic roads over land-class photos) did not look plausible even at its very highest rendering settings. So this is a feature request to fix a plausibility problem.
Algorithmic or Procedural
I’ve discussed this before (and forgotten about the post). But to expand the discussion, we need to consider not only algorithmic vs. procedural data processing, but whether we are driven by procedural generation, input data, assets created by artists, or some combination. (In practice, all systems require a mix of data, art assets, procedures and algorithms; it’s a question of the blend.)
I’ve been working on global scenery for a few years now, and over time I’ve come to appreciate the importance of artist input (via art assets) into any scenery process. Simply put, if you want scenery to look good, you need to make it reasonably straightforward for people who are good at making pretty pictures to control the look of your visual results. A few years ago I viewed the scenery process strictly as a question of data conversion and visualization, but now I see it as finding a way to merge art assets and data into a cogent final product, with the art assets being used in a way that the artists can control. In practice, this often means making sure that the art assets come in a format that artists are comfortable with or can learn without too much pain.
As I said in the previous post, our approach is becoming more algorithmic and less procedural as higher quality source data becomes available. (For example, we don’t have to generate European roads when we can import and reprocess them.) But our approach over time has always been heavily artist driven. By this I mean: our input data is algorithmically processed into a final form that makes sense only in the context of art assets, and we have a pretty good idea of what those art assets will look like when we design the algorithms. To use roads as an example again, our task with OSM is to convert OSM road data into a road network that will visualize nicely with road art assets created by an artist.
Procedural Compression
One way to view procedural scenery is “creating lots of information from little or no information”. But another way to think of it is as a compression technology. As was correctly pointed out on the org forums, you use less storage specifying the overall location of a forest than you do specifying every tree individually. The compressed form (store the forest location) can be equally plausible. It will be less realistic if the original tree locations were based on real-world data, but it will be equally (un)realistic if the original tree locations were procedurally generated. Put another way, pushing procedural processes out of the scenery generation process and into the flight simulator makes DSFs smaller.
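To make the “compression” idea concrete, here is a toy sketch (not X-Plane’s actual code; the structure names are made up): a forest stored as nothing more than an extent, a seed and a count can be expanded into the same individual trees on every load, so the per-tree positions never have to be written to disk.

#include <cstdint>
#include <random>
#include <vector>

// A hypothetical "compressed" forest: just an extent, a seed, and a count.
struct Forest {
    double west, south, east, north;  // bounding rectangle of the forest
    uint32_t seed;                    // stored seed -> identical trees on every load
    int tree_count;
};

struct Tree { double x, y; };

// Expand the compact description into individual trees at load time.
// Nothing per-tree ever hits the disk; the sim regenerates the same
// plausible layout each time because the seed is part of the data.
std::vector<Tree> expand_forest(const Forest& f)
{
    std::mt19937 rng(f.seed);
    std::uniform_real_distribution<double> ux(f.west, f.east);
    std::uniform_real_distribution<double> uy(f.south, f.north);

    std::vector<Tree> trees;
    trees.reserve(f.tree_count);
    for (int i = 0; i < f.tree_count; ++i)
        trees.push_back({ux(rng), uy(rng)});
    return trees;
}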
When I first started working on X-Plane 8 DSF scenery, not only was DVD size a factor, but so was load time; we had one core and it wasn’t a very fast core. Anything we could do to make loading faster, we did. Thus we pushed a lot of work into the scenery generation process, including procedural processes, to keep load time down.
Times have changed; we now have dual core machines as a baseline, and often quite a few more cores. Thus over time we are starting to move procedural processes back into the simulator, trading load time (which runs on multiple cores) for generation time and file size. So perhaps a more accurate statement would be: our scenery generation process is becoming more algorithmic and less procedural, and X-Plane itself is becoming more procedural. This is driven both by more input data (which must be processed up front) and more compute power on the host (which lets us shrink file size, and thus use DVD space for other things).
X-Plane 10
Here’s how this plays out in practice in version 10:
- Some (but not all) of the building placement work* has been moved into X-Plane; a bit of expensive precomputation is still done at DSF generation time.
- Some (but not all) of road processing has been moved into X-Plane; a lot is still done at DSF generation time.
- Where possible, we are moving from a multi-layered approach to terrain to a pixel-shader-based one. This cuts down overdraw and uses the GPU more efficiently. (The simplest example: in X-Plane 8 and 9, cliffs have separate terrains from hills. In X-Plane 10, a single terrain sits on both the cliff and the hill and changes its appearance based on the actual slope; this texture change is computed by the GPU. A small sketch of the idea follows below.)
In other words, X-Plane 10 is making the logical evolution to better balance the computing resources we have to improve plausibility and realism.
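Here is that slope-based idea written out as plain C++ rather than an actual shader. This is an illustration of the technique, not X-Plane’s real code, and the 30/60 degree thresholds are made up: one terrain definition blends between a “hill” look and a “cliff” look per sample, driven only by the surface normal.

#include <algorithm>
#include <cmath>

struct Color { float r, g, b; };

static Color mix(const Color& a, const Color& b, float t)
{
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

// normal_z is the vertical component of the unit surface normal:
// 1.0 = flat ground, 0.0 = vertical cliff.  On the GPU the same math
// would run per pixel in a fragment shader.
Color shade_terrain(float normal_z, const Color& hill, const Color& cliff)
{
    float slope_deg = std::acos(std::clamp(normal_z, 0.0f, 1.0f)) * 180.0f / 3.14159265f;

    // Hypothetical thresholds: pure "hill" below 30 degrees of slope,
    // pure "cliff" above 60 degrees, and a smooth blend in between.
    float t = std::clamp((slope_deg - 30.0f) / 30.0f, 0.0f, 1.0f);
    return mix(hill, cliff, t);
}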
Thanks to Dominic for sending me this link…these guys are building an airship^H^H^H^H^Hhover^H^H^H^H, well, it’s weird looking. Hrm…that airport they’re landing at in the simulator looks strangely familiar.
First, Happy New Year! As is typical, I’ve been quiet on the blog because things have been insanely busy here at work. Just to give you an idea of the insanity:
- There will be a 9.63 relatively soon – the bug driving this is some Linux distros not finding the DVD. But we’ll get a few new datarefs in there too.
- We have new updaters and installers to get tested, again addressing Linux DVD issues, but also with updated web download code that should give a nice speed boost.
- Chris has been working hard on Android. X-Plane for Android is pretty much the biggest APK anyone has tried to ship, and as a result we’ve hit a number of problems with the market that we are going to work around.
- All that’s just the side show; X-Plane 10 development is of course the meat and potatoes.
Now, about visibility. X-Plane 9 restricts ground visibility to 25 nm (about 46 km) in an attempt to prevent you from seeing off the edge of scenery tiles. Many users have expressed (some more persistently than I would have liked) an interest in longer range visibility. Austin recently posted a note to X-Plane.org discussing level of detail and distance management in the new weather system, and users immediately picked up on his mention of 100 nm visibility. Here’s what we’re thinking; all of this is subject to change as we keep working on the product.
First, visibility: you can come up with a formula for the distance to the horizon based on height above a sphere: d = sqrt((r+h)^2 - r^2), where r is the radius of the planet and h is the height above the planet. Since the Earth is roughly 6 million meters in radius, we get a visibility to the horizon of:
- At 100 meters: 34.6 km
- At 500 meters: 77.4 km
- At 1,000 meters: 109.5 km
- At 10,000 meters: 346.5 km
Clearly a little bit of altitude lets you see a long way.
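If you want to check those numbers yourself, here is a tiny program that simply evaluates the formula above with the same rounded 6,000,000 m Earth radius; it reproduces the table to within rounding.

#include <cmath>
#include <cstdio>

int main()
{
    const double r = 6.0e6;  // rounded Earth radius in meters, as used above
    const double heights[] = { 100.0, 500.0, 1000.0, 10000.0 };

    for (double h : heights) {
        // distance to the horizon: d = sqrt((r+h)^2 - r^2)
        double d = std::sqrt((r + h) * (r + h) - r * r);
        std::printf("%7.0f meters: %.1f km\n", h, d / 1000.0);
    }
    return 0;
}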
But there’s more to it than that: X-Plane has always changed the visible distance with altitude. The 25 nm limit applies to surface observations (which is what you get from a METAR). As you move up into orbit, that distance is scaled out to the horizon distance, so that you can see the whole planet from orbit. That scaling can reveal the edge of DSFs, which are blended into the planet when volumetric fog is enabled.
So here is what I think we really need to do:
- We do need a larger ‘surface level’ maximum visibility, so that distant features are visible from the ground.
- We need a scaling from ground to upper-atmospheric visibility that gives us more visibility sooner; one of the problems with version 9 is that the increase in visibility is slow, which gives mid-elevations a hazy look.
- In the long term, we need to load more DSFs, probably twelve instead of six. X-Plane 10 already has some improvements in how scenery shift is done, but my guess is that we can’t productize this until we have a 64-bit build (since more DSFs chew more memory), so I expect this to happen in a patch.
- We need to add elevation displacement to the whole-Earth planet render, so that the blend between DSFs and the planet doesn’t have huge height gaps at high-elevation locations. I am hoping we’ll have this in 10.0, but it is not coded yet. (Usually we recut the planet textures last, since they are cut off of the DSFs.)
- We need to improve the quality of haze, fog and atmospherics. In real life, atmospheric scattering reduces the contrast of far-away terrain. I believe that correct scattering could make a huge difference in the quality of the transition from DSF to planet and the required tex res (we need less if we scatter more), and generally it would be a big contribution to the realism of the image.
I’m not sure how much of this we’ll get into 10.0; I have a prototype of Sean O’Neil’s atmospheric scattering shader from GPU Gems 2 running in the sim, but I don’t think it’s shippable. I do hope we’ll get at least some scattering in place, with improvements in patches.
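For a feel of why scattering helps so much with the DSF-to-planet transition, here is a deliberately simple sketch. It is not the O’Neil shader and not X-Plane’s code; it is just the classic exponential-extinction haze blend, which already shows how distant terrain loses its own contrast to the haze color.

#include <cmath>

struct Color { float r, g, b; };

// Blend a terrain sample toward the haze color based on how much air the
// view ray passes through.  'transmittance' is the fraction of the terrain's
// own light that survives the trip (Beer-Lambert extinction); the rest is
// replaced by haze, so far-away terrain (including DSF edges) flattens out
// toward the sky color and its contrast drops.
Color apply_haze(const Color& terrain, const Color& haze,
                 float distance_m, float extinction_per_m)
{
    float transmittance = std::exp(-extinction_per_m * distance_m);
    float k = 1.0f - transmittance;

    return { terrain.r * transmittance + haze.r * k,
             terrain.g * transmittance + haze.g * k,
             terrain.b * transmittance + haze.b * k };
}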
That’s a road map, at least. If there’s a take-away point, it’s this: increasing visibility is complex, it touches a lot of parts of the sim, and there are still significant pieces that need work. So I really don’t know whether we’ll hit some kind of hitch or problem that requires us to back off on visibility.
Austin’s comments about 100 nm visibility reflect what the slider in the sim happens to be set to now. It’s also a design goal of the new weather system – that is, we want the new weather system to handle significantly larger distances (and have better scalability) than the old one did.
It shipped! X-Plane Mobile is now available for Android phones – look in the Android market under “X-Plane 9”.
Edit: Chris sent me this QR Code – scan it to go to the store listing.
Edit: if you either cannot see X-Plane in the Android market or you cannot download it, please first look here for trouble-shooting tips, then contact customer support (info at x-plane dot com). Please do not use the comments section of this blog for customer support; if you need help we will need to contact you one-to-one.
As some have noticed on the org and on Facebook, Randy mentioned that we may be able to ship X-Plane Mobile for Android. Some users were quite befuddled to learn that we were aiming to ship X-Plane Mobile for Android so soon when X-Plane 10 is delayed. Here’s the full story.
Chris, the third and most recent addition to the X-Plane programming team, began a port of X-Plane Mobile to Android a while ago; this was the second port of X-Plane Mobile after our port to Palm WebOS. He was able to accomplish most of the port fairly quickly; hence the video floating around the web of X-Plane on a Nexus One back in May.
Unfortunately we ran into some issues that stopped us from shipping; it looks like Google may have them fixed shortly, hence our hope of finally shipping the app. So while Chris has spent a little bit of time recently working on the last few Android issues, what we are really releasing is a product that we already put development time into a while ago.
Propsman pointed this one out to me yesterday: apparently Blender tangent-space normal maps run from a value of Z=-1 (no blue) to Z=1 (100% blue). This is not how X-Plane normal maps work; our normals go from Z=0 (no blue) to Z=1 (100% blue).
This difference is easy to miss because X-Plane renormalizes the normal map as the last step of processing it, which turns a big artifact into a small one. The general effect of using the Blender convention rather than X-Plane’s is that your normal map will look ‘less bumpy’ for fairly extreme amounts of bump.
To fix this, simply remap the colors of your blue channel in Photoshop or some other image editing program. Basically you’ll want to set what was 50% blue to 0% blue, and keep 100% blue the same. This will extend the lighter half of the blue channel over the entire blue channel.
If you have any blue less than 50% in the image, um, that’s a normal that points backward, and X-Plane doesn’t support that.
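If you’d rather script the fix than do it by hand, the remap is just “stretch the top half of the blue channel over the whole range”. Here is a hedged sketch over a raw 8-bit RGBA buffer (this is not a Laminar-provided tool, just an illustration of the math):

#include <algorithm>
#include <cstddef>
#include <cstdint>

// Convert the blue channel of a Blender-convention tangent-space normal map
// (Z encoded as -1..1) to the X-Plane convention (Z encoded as 0..1):
// 50% blue becomes 0% blue, 100% blue stays 100% blue.  Blue values below
// 50% would be backward-facing normals, which X-Plane does not support,
// so they simply clamp to zero here.
void remap_blue_channel(uint8_t* rgba, size_t pixel_count)
{
    for (size_t i = 0; i < pixel_count; ++i) {
        int blue = rgba[i * 4 + 2];                // R,G,B,A byte layout assumed
        int remapped = (blue - 128) * 255 / 127;   // stretch 128..255 onto 0..255
        rgba[i * 4 + 2] = static_cast<uint8_t>(std::clamp(remapped, 0, 255));
    }
}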
If you have a DirectX 10 or 11 class video card (that is, a GeForce 8nnn or newer or a Radeon HD card) and you’re on a Mac, consider updating to OS X 10.6.x if you’re still on OS X 10.5.8.
10.6 has performance enhancements in the video drivers that I suspect will benefit X-Plane 9 users, but it will really matter for X-Plane 10. We need OS X 10.6 to expose some of the OpenGL extensions that these cards have. Thus 10.6 will get you faster frame-rate, more realistic lighting, and more efficient VRAM use.
(If you have an older card, I don’t know if you’ll get any benefit, although I doubt you’ll see a performance loss.)
X-Plane 9 allows you to categorize objects as being on the plane’s outside, inside, or glass. X-Plane depends on these flags being right for a few things:
- The draw order of the airplane is determined by the object types – glass is drawn last to avoid translucency artifacts (see the sketch after this list).
- Interior light from the plane is only spilled on the “inside” objects.
- Glass objects are excluded from shadow calculations to avoid having opaque windows in the airplane shadow.
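As a conceptual illustration of that first point (this is the standard translucency ordering trick, not X-Plane’s actual renderer, and all of the function names are hypothetical): opaque exterior and interior objects are drawn first with depth writes on, and glass is drawn last with blending on and depth writes off, so the world behind the windows is already in the frame buffer when the translucent glass is composited over it.

#include <vector>

enum class ObjCategory { Outside, Inside, Glass };

struct AttachedObject {
    ObjCategory category;
    // mesh, texture handles, etc. would live here
};

// Hypothetical stand-ins for the real drawing and GL-state calls.
void draw(const AttachedObject&)   { /* issue the object's draw calls      */ }
void set_depth_writes(bool)        { /* e.g. glDepthMask(...)              */ }
void set_blending(bool)            { /* e.g. glEnable/glDisable(GL_BLEND)  */ }

// Draw the opaque parts of the aircraft first, then the glass last with
// blending on and depth writes off, so translucent windows composite over
// geometry that is already in the frame buffer.
void draw_aircraft(const std::vector<AttachedObject>& objs)
{
    set_blending(false);
    set_depth_writes(true);
    for (const auto& o : objs)
        if (o.category != ObjCategory::Glass)
            draw(o);

    set_blending(true);
    set_depth_writes(false);
    for (const auto& o : objs)
        if (o.category == ObjCategory::Glass)
            draw(o);

    set_depth_writes(true);  // restore state
}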
It is important that you use these flags as intended; X-Plane 10 depends on this information as well, and X-Plane 10’s global spill and global shadowing algorithms are more sensitive to incorrect categorization of objects than X-Plane 9’s forward renderer.
In particular, you should have glass for the airplane windows in an attached object tagged as type ‘glass’; do not attach your glass to the cockpit object, which cannot be categorized as glass. If you have an old plane with glass in the cockpit, consider cutting the object in half in a 3-d editor and attaching the glass separately.
(You should also use our prop disc animation, rather than use an OBJ for prop discs; the OBJ format doesn’t contain the z-buffer tricks necessary to make the prop look right.)
Sometimes these posts get off topic, sometimes in the direction of the art of computer programming, sometimes in the nature of the industry, and sometimes with pictures of the pets. This post is going to go off a bit into the subject of project management.
Randy and Tyler posted what was becoming clear (by the lack of an already existing beta): our estimated release date for X-Plane 10 was incorrect. Software project delays are pretty common, and often when a third party add-on is delayed, the community jumps to speculate about “what’s going on” inside the project and tries to infer whether the delay is an indication of serious problems.
I’d like to try to reframe the issue of delays in terms of an analogy. You ask me: how fast can you run a mile? I tell you “4 minutes and 15 seconds”. I then run a mile and you time me. My time: 6 minutes, 10 seconds.* What can we learn from this episode? I think we can learn two things:
- For a computer programmer, I am surprisingly fast – a six minute mile isn’t to be sneezed at when you spend your days sitting on your ass in front of a monitor drinking coffee.
- My ability to predict my own speed is not very good. I was pretty naive to think I could run a 4 minute mile – that’s what world class athletes run. My estimate was off by a fairly big error margin.
One thing we should not conclude is that, because my mile time was 2 minutes slower than estimated, I am a slow runner. The estimate sets up an expectation, but if the estimate is wrong, it’s not a useful metric of efficiency.
The same applies to X-Plane: we missed our original projected ship date because our estimate of when we would be done was not a very good one. This isn’t good for a few reasons:
- It creates uncertainty for third parties as to when a platform will change.
- It makes it difficult for marketing to properly plan a roll-out.
- It makes it difficult to balance the value of more features vs. an earlier release date (since we don’t know how much “time” we are trading for “features” if the time estimates are wrong).
But the delay is not at all a black mark for our team – on the contrary, they’re working their asses off and creating some really great work.
When looking at a project that will be delayed (because the original schedule was wrong) there are a few things you can do:
- Add more people. This is quite often the wrong thing to do – please read The Mythical Man-Month to understand why. Once your team is the right size, adding more warm bodies usually makes schedule delays worse and hurts efficiency.
- Remove features. This is the only real way to bring in a ship date.
- Move the date back.
When Austin and I were working on X-Plane 8, we hit a similar scheduling problem – what we had set out to do was going to take a lot longer than we thought. (Like X-Plane 10, we had just doubled the team size and begun a project that involved massive rewrites, which made it hard to ship until the work was fully complete. Sound familiar?) The difference? With X-Plane 8 we had contracted to ship with an external distributor for Thanksgiving, so we had to go for item 2 – we cut scope. What we cut was the world – that is, we shipped new global scenery only for the US, and the existing ENVs for the rest of the world. We also had to ship the artwork we had on hand, despite being unhappy with its quality. We didn’t finish the rest of the world and graphics we were happy with for another 11 months.
Option 2, cutting scope, is painful and hard. Sometimes it is the right thing to do. In the case of X-Plane, however, we have the luxury of moving the date back. With that in mind, we’re trying as hard as we can to keep feature creep to a minimum and finish what we’ve already bitten off, so we can get the release done and out the door.
* My mile time is not 6 minutes, 10 seconds…I would be astounded, and quite possibly in the ER if I could run that fast for any sustained amount of time.