Category: Development

Invisible Hard Surfaces

I have received several requests for a transparent runway with a physical surface type. That request is just strange enough that we need to look back and ask, “how did we get here?”

High Level and Low Level Modeling

The “new” airport system, implemented in X-Plane 850 (with a new apt.dat spec to go with it), is based on a set of lower level drawing primitives, all of which are available via DSF. In other words, if Sergio and I can create an effect to implement the apt.dat spec, you can make this effect directly with your own art assets using a DSF overlay. This relieves pressure on the apt.dat spec to become a kitchen sink of tiny details.

The goal of apt.dat is to make a visually pleasing general rendering of airport data. DSF overlays provide a modeling facility.

Little Tricks

It turns out there are two things the apt.dat file “does” with the rendering engine that you can’t do in an overlay DSF:

  1. The apt.dat file registers runways in the airport dialog box (for starting flights, positioning the airplane, etc.).

  2. When the apt.dat reader places OBJs to form approach lights, it can offset their “timing base”, which is why the rabbit flashes in sequence.

(If you were to place a sequence of approach lights with rabbits in an overlay, every single light would flash at the same time because the DSF overlay format does not have a way to adjust the object’s internal timing parameter.)
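
For illustration, here is roughly how a sequenced strobe can be computed when each placed OBJ carries its own timing offset – a minimal C++ sketch with invented names and timing constants, not X-Plane’s actual light code:

    #include <cmath>

    // Sketch: light N in the approach-light string is placed with phase
    // N * spacing_sec.  Distinct phases make the strobes fire in sequence
    // ("the rabbit"); identical phases make them all flash at once, which
    // is what happens to lights placed via a DSF overlay.
    bool strobe_is_lit(double sim_time_sec, double phase_offset_sec)
    {
        const double period_sec = 1.0;   // hypothetical: one sweep per second
        const double flash_sec  = 0.05;  // hypothetical: each strobe lit 50 ms
        double t = std::fmod(sim_time_sec + phase_offset_sec, period_sec);
        return t < flash_sec;
    }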

The solution: the transparent runway. The idea of the transparent runway is to create with the apt.dat file the two aspects of a runway that you can’t build with a DSF overlay: the approach lights and the entry in the global airport dialog box. Transparent runways leave the drawing and surface up to you.

My thinking at the time was that the actual runway visuals and physics would be implemented together via either draped polygons or a hard OBJ.
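
For example, a hard OBJ flags its triangles as collidable with ATTR_hard. A minimal OBJ8 sketch (the geometry, dimensions, and texture name are placeholders) might look like this:

    A
    800
    OBJ

    TEXTURE      runway_photo.png
    POINT_COUNTS 4 0 0 6

    VT  -25 0 -1000   0 1 0   0 0
    VT   25 0 -1000   0 1 0   1 0
    VT   25 0  1000   0 1 0   1 1
    VT  -25 0  1000   0 1 0   0 1

    IDX 0
    IDX 1
    IDX 2
    IDX 0
    IDX 2
    IDX 3

    ATTR_hard asphalt
    TRIS 0 6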

Orthophotos and Bumps

So why do authors want a transparent but hard runway? The answer is orthophotos. With paged orthophotos, it is now possible to simply put down orthophotos for the entire airport surface area (whether as overlays or a base mesh) at some high resolution (our runways are 10 cm per pixel – I’m not sure if the whole airport area can be done at that resolution) and not have any special overlays for the runways. The transparent + hard runway would change the surface type.

I’m not sure if this is a good idea, but I’m pretty sure that this feature belongs in overlay DSFs and not the apt.dat file.

  • Such a technique (varying hard surfaces independent of a larger image) is useful for more than just airports (and certainly more than just runways).
  • The technique is unnecessary unless a DSF overlay is in use.
  • Unlike nearly all of the rest of the apt.dat file, such an abstraction (invisible but bumpy) is much more a modeling technique and less a description of a real world runway.

I’m not sure we would even want the runway outline to be the source of the hard-surface data. If there are significant paved areas outside the runway, then a few larger hard-surface polygons might be more useful.

Posted in Development, File Formats, Modeling, Scenery by | 1 Comment

Where Do I Find the 930 Datarefs

The dataref documentation on the X-Plane SDK website is updated for each release of X-Plane 9.

X-Plane 930 has not yet been released. It is a “release candidate” but since we haven’t signed off on it yet, 922r1 is still the most current real release of X-Plane, and it’s what users get when they update without asking for a beta. So the website has 922 datarefs.

Since version 9, every release of X-Plane (including betas and RCs) has a copy of datarefs.txt in the plugins folder that is correct for that release. In other words, the docs that ship with X-Plane and the sim itself are always in sync.

So for now use datarefs.txt in the 930 RC 2 plugins folder! When 930 goes final, the website will be updated.
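
(For anyone wiring up a plugin: you resolve a dataref by the name listed in datarefs.txt at runtime. A minimal sketch, assuming one of the standard flight-model datarefs from that file:)

    #include "XPLMDataAccess.h"

    // Look up a dataref by the name listed in datarefs.txt, then read it.
    // XPLMFindDataRef returns NULL when the running sim doesn't publish
    // that name, which is exactly why the docs must match the release.
    static float read_indicated_airspeed()
    {
        XPLMDataRef ref = XPLMFindDataRef("sim/flightmodel/position/indicated_airspeed");
        if (ref == NULL)
            return -1.0f;   // not present in this version of the sim
        return XPLMGetDataf(ref);
    }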

Posted in Development by | 4 Comments

ASTER For Custom Scenery

A few days ago, the ASTER GDEM was released. Basically, ASTER GDEM is a new elevation dataset with even greater coverage than the SRTM. Both SRTM and ASTER (I’ll drop the GDEM – in fact ASTER produces more than just elevation data, but the elevation data is what gets flight simmers excited right now) are space-based automated measurements of the Earth’s height. But since ASTER is on a satellite (as opposed to an orbiting space shuttle), it can reach latitudes closer to the poles.

So what does this mean for scenery? What does it mean for the global scenery? A few thoughts:

  • ASTER data is not yet very easy to get. You can sign up with the USGS distribution website but you’re limited to 100 tiles at a time, with some latency between when you ask and when you get an FTP site. Compare this to SRTM, which can be downloaded automatically in its entirety, or ordered on DVD. ASTER may reach this level of availability, but it’s not there yet.

  • ASTER is, well, lumpy. (NASA says “research grade”, but you and I can say “lumpy”.) Jonathan de Ferranti describes ASTER and its limitations in quite some detail. Of particular note is that while the file resolution is 30m, the effective resolution of useful data will be less.

    SRTM has its defects, too, but ASTER is very new, so the GIS community hasn’t had a chance to produce “cleaned up” ASTER. And clean-up matters; it only takes one really nice big spike in a flat flood plain to make a “bug” in global scenery. I grabbed the ASTER DEMs for the Grand Canyon. Coverage was quite good, despite the steep terrain angle (steep terrain is problematic by design for SRTM), but there were still drop-out areas that were filled with SRTM3 DEMs, and the filled-in area was noticeable. (A sketch of this kind of void fill appears after this list.)

  • By the numbers, ASTER is not as good as NED; I imagine that other country-specific national elevation datasets are also both more accurate and more precise than ASTER.

  • The licensing terms are, well, unclear. The agreements I’ve seen imply a limited set of research uses for the data. The copyright terms are not well specified.
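
(As mentioned above, here is a minimal sketch of that kind of void fill – an invented helper that assumes two co-registered grids of equal size sharing a NODATA sentinel:)

    #include <vector>
    #include <cstdint>

    const int16_t NODATA = -32768;   // assumed drop-out sentinel

    // Patch ASTER drop-outs with SRTM3 samples wherever SRTM has valid data.
    void fill_voids(std::vector<int16_t>& aster, const std::vector<int16_t>& srtm)
    {
        for (size_t i = 0; i < aster.size(); ++i)
            if (aster[i] == NODATA && srtm[i] != NODATA)
                aster[i] = srtm[i];
    }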

So at this point I think ASTER is a great new resource for custom scenery, where an author can grab an ASTER DEM in a reasonable amount of time, check it carefully, and thus have access to high quality data for remote parts of the Earth, particularly areas where locally grown data is not available or not high quality.

In the long term, ASTER is a huge addition to the set of data available because of its wide-scale coverage of remote areas, and because it can fill holes in SRTM. (ASTER and SRTM suffer from different causes for drop-outs, so it is imaginable that there won’t be a 1:1 correlation in drop-outs.)

But in the short term, I don’t think ASTER is an SRTM replacement for global scenery; void-filled SRTM is a mature product, reasonably free of weirdness (though the void-filling sometimes scrubs out useful data along with the junk). ASTER is very new, and exciting, but not ready for use in global scenery.

Posted in Development by | 3 Comments

The Will to Rewrite

FSBreak interviewed Austin last week… it’s an interesting listen and they cover a lot of ground. A few comments on Laminar’s approach to developing software:

That Code Stinks!

Austin is absolutely correct that we (LR) write better software because neither of us is shy about telling the other when a piece of code stinks. But I think Austin deserves the credit for creating this environment. An “ego-free” zone where people can criticize each other honestly and freely is a rare and valuable thing in any domain.

When I first came to LR, Austin created this environment by responding positively to feedback, no matter how, um, honest. At that point 100% of the code was written by him and 0% by me, so if I was going to say “this piece of code really needs to be different”, it was Austin who would either run with it or defend his previous work.

To his credit, Austin ran with it, 100% of the time. I can’t think of a single time that he didn’t come down on the side of “let’s make X-Plane better”. That set the tone for the environment we have now: one that is data driven, regardless of who the original author is.

I would say this to any programmer who faces a harsh critique of code: good programmers write bad code! I have rewritten the culling code (the code that decides whether an OBJ really needs to be drawn*) perhaps four times now in the last five years. Each time I rewrote the code, it was a big improvement. But that doesn’t make the original code a mistake – the previous iterations were still improvements in their day. Programming is an iterative process. It is possible for code to be good and valuable to the company, and still need to be torn out a year later.
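
As an illustration of what culling code does, here is a minimal sketch of a standard bounding-sphere-vs-frustum test – the structures and names are mine, not X-Plane’s actual implementation:

    // Sketch: reject an OBJ when its bounding sphere is fully outside any
    // of the six view-frustum planes.  Plane normals point into the frustum.
    struct Plane  { float nx, ny, nz, d; };    // plane equation: n.p + d = 0
    struct Sphere { float x, y, z, radius; };  // bounding sphere around an OBJ

    bool sphere_visible(const Sphere& s, const Plane f[6])
    {
        for (int i = 0; i < 6; ++i) {
            float dist = f[i].nx * s.x + f[i].ny * s.y + f[i].nz * s.z + f[i].d;
            if (dist < -s.radius)
                return false;              // completely outside one plane: cull
        }
        return true;                       // possibly visible: draw it
    }

The test itself has to stay cheap – a culling pass that spends too long deciding what to draw costs fps just as surely as drawing too much does.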

A Rewrite Is Not A Compatibility Break

Austin also points to the constant rewriting of X-Plane as a source of performance. This is true too – Austin has a zero tolerance policy toward old crufty code. If we know the code has gotten ugly and we know what we would do to make the code clean, we do that, immediately, without delay. Why would you ever put off fixing old code?

Having worked like this for a while, I am now amazed at the extent to which other organizations (including ones I have worked in) are willing to put off cleaning up code organization problems.

Simply put, software companies make money by changing code, and the cost is how long the changes take. If code is organized to make changing it slower, this fundamentally affects the financial viability of the company! (And the longer the code is left messy, the more difficult it will be to clean up later.)

But I must also point out one critical detail: rewriting the code doesn’t mean breaking compatibility! Consider the OBJ engine, which has been rewritten more times than I care to remember. It still loads OBJ2 files from X-Plane 620.

When we rework a section of the sim, we make sure that the structure of the code exactly matches what we are working on now (rather than what we were working on two years ago) so that new development fits into the existing code well. But it is not necessary to drop pieces of the code to achieve that. I would describe this “refactoring” as straightening a curvy highway so you can drive faster. If the highway went from LA to San Francisco, it still will – just in less time.

In fact, I think the issue of compatibility in X-Plane’s flight model has a lot more to do with whether the goal is to emulate reality or past results. This debate is orthogonal to refactoring code on a regular basis.

* Since OBJs are expensive to draw and there are a huge number of OBJs in a scenery tile, the decision about whether to draw an OBJ is really important to performance. Make bad decisions, you hurt fps. Spend too long debating what to draw, you also hurt fps!

Posted in Development by | 4 Comments

Do Not Work Around Our Bugs!

For most of its beta run (until beta 14), X-Plane 930 didn’t handle engine power limiting very well*. Here’s the short version of the saga.

  • Real planes sometimes have systems to limit total power output, because at sea level the engine’s output (whether torque or internal temperature) can exceed safe operating limits.
  • With X-Plane 9.00 you could set a critical altitude for an airplane – below this altitude, X-Plane would limit the power output of the engine. The idea is (roughly) to simulate these limiters by derating the engine’s power output below this “critical altitude” (see the sketch after this list).
  • This feature was really only meant for reciprocating engines – when Austin discovered in 9.20 that people were using this for turbines (understandable, given that there was no alternative) he simply disabled the feature for turbines. That wasn’t so good.
  • To resolve the situation a little bit more cleanly, X-Plane 9.30 has a setting per airplane called “FADEC – automatically keep engines from exceeding max allowable power or thrust” that, when checked, gives you version 9.00 style behavior.
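
As a rough illustration of the 9.00-style behavior, here is a sketch – the function and its derating rule are invented for this post, not Austin’s actual flight-model code:

    #include <algorithm>

    // Sketch: clamp engine output below the critical altitude when the
    // per-airplane limiter ("FADEC") check-box is on; otherwise pass the
    // requested power through untouched.
    float limited_power(float requested_power,
                        float max_rated_power,    // safe continuous limit
                        float altitude_ft,
                        float critical_alt_ft,
                        bool  fadec_enabled)
    {
        if (fadec_enabled && altitude_ft < critical_alt_ft)
            return std::min(requested_power, max_rated_power);
        return requested_power;
    }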

Now this was mostly good except for one problem: the betas would default this setting to “off” when loading an old plane. Since version 9 always acted as if the “FADEC”** was on, this meant that old planes would need to be edited.

Finally with beta 14 we switched things: beta 14 and newer default old planes to have the FADEC checkbox on, so that planes match their old behavior – you can turn the check-box off if you don’t want this behavior.

There is one final hitch: if you already went in and edited your airplanes for the earlier betas, you will find that they are now set wrong. You will have to reset the check-box. If you go back to the original, unedited, 920-compatible airplane you will find they “just work”.

I mention all of this to make two points:

  1. File formats for new features are subject to change during beta. In this case, the file format for the new FADEC check-box (introduced in 930) changed at beta 14. The OBJ syntax for dynamic lighting changed during beta too. Don’t do “bulk” work (e.g. changing a large number of planes in the same way) on your fleet based on betas – you might have to redo that work! Wait until the sim goes final. That’s when the file formats are locked up.
  2. Don’t work around bugs in the sim. I have seen so many forum posts where there is a trivial bug in the sim (e.g. the sim is just screwed up in a simple way) and authors go in and update scenery to work around the bug. File a bug, then wait! If you work around the bug, we can’t fix the bug, and if we don’t fix the bug, then the bug just bites other users.

* Disclaimer: I don’t do systems, I don’t know anything about airplanes, so this whole discussion will be heavily simplified. The point of this post is not to get into a discussion of FADECs – in fact, don’t even bother to post comments about FADECs; I’m not going to approve them. If you want to talk about FADECs and engine modeling, email Austin. The point of this post is about file formats and compatibility.

** FADEC isn’t a very good name for this feature – the feature generically limits power, without specifying a mechanism. My understanding is that some airplanes have mechanical limiters, like a pressure valve on a turbo. Some planes have no limiters at all… ask a pilot “can you cook the engine by pushing the throttle too far” and he will say “I’m not going to be the one to find out.”

Posted in Development, File Formats by | Comments Off on Do Not Work Around Our Bugs!

Optimization By Check-Boxes

In the next X-Plane 930 beta (it should post today I think) the rendering settings have two new check-boxes: one to enable the “dynamic” airplane shadow and one to enable per-pixel lighting.

In the last week a number of users emailed me performance numbers via the fps test, and from what I can tell, 99% of performance problems can be attributed to these two new features chewing up resources in a way that 922 did not. With both features disabled, the sim should be as fast as or faster than 922.

The new beta also limits dynamic shadows to your own aircraft – beta 13 calculates a dynamic shadow for every aircraft, which is unacceptably slow when you have a lot of AI planes enabled.

I may still be able to improve the performance of the per-pixel-lighting shader, but fundamentally per-pixel lighting is going to be more expensive than per-vertex lighting. The average X-Plane scene might have 200,000 to 500,000 vertices. At absolute minimum resolution, no FSAA, you’re going to have over 700,000 pixels even if there is no “over-draw” – you could easily have 10x that fill rate with only a modest increase in overdraw, full-screen anti-aliasing and window size. Simply put, per-pixel lighting is more work.

Please bear in mind: without per-pixel lighting X-Plane’s pixel shader is extremely simple. If you have a “low-end” card this could give you the illusion of GPU power when there is really not much under the hood.

Examples of low-end cards: GeForce 7300, GeForce 8400, GeForce 9400, Radeon X300, Radeon X1300, Radeon HD2400. All of these cards are the younger brother of a fairly capable card, but with fewer pixel shader units/cores. If each unit is doing very little work, you don’t need that much pixel-filling power…but when we go to a “real” shader, the difference between a GeForce 8400 and 8800 becomes very, very apparent. Simply put, even with optimization the GeForce 7300 (for example) will never run a huge monitor with per pixel lighting and high FSAA.

Posted in Development by | 3 Comments

To Copy Or To Reference

In designing interfaces for building planes, writing plugins, etc. one of the main design questions that keeps coming up is: to copy or to reference? Should authors simply refer to an art asset, piece of data, or code in order to utilize it, or should the author copy a snapshot into the custom add-on? There isn’t one right answer. Here are the main considerations.

Performance and Efficiency

One of the obvious considerations is efficiency: in some cases we might be able to provide better performance when an art asset is referenced from a common source.

For example, in some cases X-Plane will consolidate VRAM use based on actual files, so a library object is loaded once no matter how many packages use it, but is loaded many times if a package copies it.

(In other cases X-Plane will actually merge multiple copies of a resource – referencing is only a win in some cases.)

An indirect consideration: if an art asset is provided by Laminar Research and is used by reference, then a new update can provide a new, better optimized art asset – see below.
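
To make the copy-vs-reference distinction concrete: a package references a shared asset through a virtual library path, and a library.txt maps that virtual path to a real file. A sketch (the paths are placeholders, not real library entries):

    A
    800
    LIBRARY

    # A package's DSF refers to the virtual path on the left; the library
    # owner is free to swap the file on the right for an optimized one.
    EXPORT  lib/airport/example/windsock.obj   objects/windsock.obj

A package that instead copies windsock.obj into its own folder pays its own loading and VRAM costs, but it is insulated from any future change to the library’s version of the file.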

Dependencies and Contracts

When someone uses an art asset, algorithm, etc. by reference, it creates an implicit contract: the provider of the asset must not change the asset’s properties. By comparison, when the asset is copied, the contract is only to support the format that the asset is encoded in.

This is the main reason why I am often against providing new assets by reference, whether it is a new dataref, texture, etc. Often I will simply send a user a snippet of code, rather than making X-Plane’s version available via a dataref. The idea is that copying does not create a new interface (and thus a new “contract”) between X-Plane and the add-on.

Copyright and Legal Issues

For historical reasons, the US legal system describes the privileges of intellectual property owners by regulating the act of copying. (To say that this is a bit quaint in the digital age doesn’t even scratch the surface, but that’s a rant for another post.) The result of this particular regulation of copying (but not of referencing) is that the decision to provide an asset by copy vs. reference has legal implications. If the author does not want to go through licensing, referencing may be the only option.

Posted in Development, File Formats by | 2 Comments

The Constraints of Hardware

In a previous post (in which I tried to argue that threading is a “how” and not a “what” when it comes to feature requests) a user made this comment:

That is, that I feel you are a bit too concerned about the fact that XP has to be possible to run on a 2001 year machine. This really halts the development although you could add options to turn this and that off.

I’d like to side-step the details of the cost-benefit analysis (e.g. do the sales from low-end systems pay for the development of a renderer with lower system requirements) and take a second to focus on three general issues:

  • Is there a cost to developing a scalable renderer?
  • How does the trend of hardware development affect the gap between systems?
  • How do market forces affect both of the above?

Scalability

Is there a cost to writing a renderer that can run on a wide range of hardware? Absolutely. Obviously we have to write more code to do that.

But there is an additional cost: there are some rendering engine design decisions that have to be made system-wide. It’s not practical to provide different scenery files for different hardware (since we are limited by distribution on DVD). In some cases we have to pick a data layout that is non-ideal for the highest-end hardware in order to support everyone.

But: before you take up arms against your fellow X-Plane user who is holding you down with his GeForce 2 MX and P-III 800 mhz machine, bear in mind that the problem of picking a data format is a bit unavoidable. Even if we targeted the highest-end machines and told everyone else to jump in a lake, those decisions would appear to target rather quaint machines only a year into the version run. At some point we have to draw a line in the sand.

There is some light at the end of the tunnel when it comes to scalability: as computers become (on average) bigger and faster, we can start to defer at least a little bit of the work of scenery generation until the sim is running. When we first designed the new scenery system (for X-Plane 8), most users did not have dual-core machines, so doing work on the scenery while the sim ran was very expensive. We preprocessed as much as possible. This isn’t as necessary any more.

So are high-end users limited by having one renderer that fits all sizes? Perhaps a little bit, but any design choice is only going to fit one hardware profile perfectly, and hardware is a moving target; today’s shiny new toy is tomorrow’s junk.

Hardware Growth

Every two years (to be very loose about things) the number of transistors we can pack on a chip doubles. This “transistor dividend” can be turned into more cores for a CPU, or more shading units (which are now really just cores) for a GPU.

And this gets to the heart of why I don’t think we can say “forget the low-end” any time soon. Imagine that we support 6 years of hardware with X-Plane, and the best hardware is 8 times as powerful as the low-end hardware. Fast-forward two years – we drop two years of hardware and two years of new ATI and NV graphics cards come out. What is the result?

Well, the newest hardware is still 8x as powerful as the old hardware, but the difference in the polygon budget between the two has now doubled! (If the slowest supported machine could draw 1 million triangles and the fastest 8 million, two years later those numbers become 2 and 16 million: the ratio is still 8x, but the absolute gap has grown from 7 to 14 million triangles.) In other words, the gap in absolute performance is doubling every two years, driving the two ends of our hardware spectrum farther apart. (Absolute performance is what Sergio and I have to worry about when we design a new feature. How many triangles can we use is an absolute measurement.)

If we say “okay forget it, only 3 years of supported hardware” that gets us out of jail for a little while, but eventually even the difference between the newest and slightly off-the-run hardware will be very large.

A gap in hardware capability is inevitable and it will only get worse!

Market Divergence

You may have noticed that the above paragraph makes a really gross assumption: that the lowest end hardware we support is the very best card on the market from a certain number of years ago. Of course this isn’t true at all. The lowest end hardware we support was probably pretty lame even when it was first made. The GeForce FX 5200 was never, even for a microsecond, a good graphics card. (It was, however, quite cheap even when first released.)

So the gap we really have is between the oldest low-end and newest high-end hardware, which is really quite wide. Consider that in May 2007 the GeForce 8800 Ultra was capable of 576 GFLOPs. Two months later (July 2007) the GeForce 8300 GS was released, packing a whopping 22 GFLOPs. In other words, in one video card generation the gap between the best and worst new card NVidia was putting out was 26x! (I realize GFLOPs isn’t a great metric for graphics card performance – really no one metric is adequate, but this example is to illustrate a point.)

Let’s go back in time a few years. In February 2002, NVidia released the GeForce 4 Ti (high-end) and MX (low-end). The slowest MX could fill 1000 MT/s, while the fastest Ti could fill 2400 MT/s. That’s a difference in fill rate of “only” 2.4x.

What’s going on here? Commodification! Simply put, graphics cards have reached the point where a lot of people just don’t care. Very few users need the full power of a GeForce 8800, so a lot of lower-end machines are sold with low-end technology – more than adequate for checking email and watching web videos. This creates a market for low-end parts and creates a wider “gap” for X-Plane. Dedicated returning X-Plane users might do the research and buy the fastest video card, but plenty of new users already have the computer, and it might have something unfortunate (like a Radeon X300 or Intel GMA950) already on the motherboard.

As X-Plane’s hardware needs diverge from the needs of mainstream computer users, we can expect some but not all of our users to have the latest and greatest. We can also expect plenty of new users to have underpowered machines.

Let me go out on a limb (I am not a technologist or even a hardware guy, so this opinion isn’t worth the bits it is printed on) and suggest this: we’re going to see a commodification fall-off in the number of cores everyone has too. Everyone is going to have two cores because it is cheap to put a second core on the main CPU if it lets you get rid of a whole array of special-purpose hardware. Give me multi-core and maybe I can get away with software-driven rendering (who needs hardware acceleration), software-driven sound (goodbye DSP chips), maybe I can even find cheaper ways to build my I/O. But 16 cores? The average user doesn’t need 16 cores to check email and run Windows 7.

So as transistors continue to shrink and it becomes possible to pack 8 or 16 cores on a die, I expect some people to have this and others not to. We’ll end up in the same situation as the graphics chips.

Summing It Up

To sum it up, sure there may be some drag on X-Plane in supporting a wider range of hardware. But it’s an inevitable requirement, because hardware shifts in capability even during a single version run, and as hardware becomes faster, the gap between high-end and cheap systems gets wider.

Posted in Development by | 8 Comments

Multi-Threading Is a Weird Feature Request

Over and over, whether it is a feature request list for X-Plane or another simulator, I see the same thing: “multi-core support” or “multi-threading” as a feature request.

Now before I continue, I must remind everyone: X-Plane is already multi-threaded and will take advantage of multi-core hardware. How much we use those cores depends on the type of scenery loaded.

The problem is that multi-threading (as a way to use multi-core hardware) is a solution technique, not a problem statement. What is threading going to be used for? If I simply program the other 7 cores of your computer to calculate PI to 223,924 digits, have I met the feature request? This probably isn’t what anyone wants.

Implicit in the request for multi-core is (I speculate) a request for better frame-rate. (I did see one user who wanted multi-core to be used for a more accurate flight model. This strikes me as a poor trade-off for hardware based on my understanding of the flight model – we would use a lot of hardware for only a marginal accuracy improvement – but I commend the user for stating the problem and not just a possible solution.) But is multi-threading the best way to get framerate?

If I had two patches to X-Plane, one that doubled fps by using two cores and one that doubled fps by using more efficient code, which would be better? To me the obvious answer is: the code that is more efficient. It will run on any hardware (not just multi-core) and if you have multi-core hardware, we still have that second core free for some other functionality.

So to me the feature request should be something like: “higher framerate – and yes I have multi-core hardware”. Or perhaps “more visual detail at the same framerate – and yes I have multi-core hardware”.

All feature requests need to be in terms of problem statements, not possible solutions. This lets us find the set of problems that can be solved together in a coherent manner, and it lets us pick a solution that meets our engineering goals.

Posted in Development by | 21 Comments

Per Pixel Lighting Isn’t Free

I’ve had a little bit of time to look at X-Plane 930 performance. The data isn’t 100% conclusive yet, but one performance issue sticks out like a sore thumb: per-pixel lighting hurts fps.

Now, part of this is that the per-pixel lighting shaders are not yet optimized (and perhaps are not terribly well written). I need to take some time to see if I can get some more performance out of them.

But… per-pixel lighting isn’t free – when per-pixel lighting is on, the video card is simply doing a lot more work than it used to. Consider: a typical X-Plane scene might have 250,000 vertices on screen at once. At a minimum, you have at least 750,000 pixels on screen*. Make your window bigger and that number goes up – fast! Turn on 16x FSAA and watch the pixel count get even larger. So the number of lighting calculations done by your graphics card is at least 3x higher with per-pixel lighting and potentially 50x higher. Even if your graphics card has a lot of power, that’s going to cost a bit.
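
A back-of-envelope comparison makes the gap concrete. The numbers below are illustrative assumptions (window size, overdraw, FSAA factor), not measurements:

    #include <cstdio>

    int main()
    {
        const long long vertices = 250000;           // typical scene, per above
        const long long pixels   = 1024LL * 768LL;   // 786,432 at minimum res
        const double    overdraw = 2.0;              // e.g. runway over terrain
        const long long fsaa     = 4;                // AA multiplies shaded samples

        long long per_vertex_calcs = vertices;
        long long per_pixel_calcs  = (long long)(pixels * overdraw) * fsaa;

        printf("per-vertex lighting calcs: %lld\n", per_vertex_calcs); // 250,000
        printf("per-pixel  lighting calcs: %lld\n", per_pixel_calcs);  // 6,291,456, ~25x
        return 0;
    }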

So one option I am considering is making per-pixel lighting a rendering option. This would allow users who want 922-level fps to simply turn it off. In my tests so far, turning off per-pixel lighting gets fps to within a few percent of 922.

(The only reason to have shaders on but per-pixel lighting off would be to have a cheap version of the reflective water. In the long term I want to limit the number of a la carte rendering settings, but for now it seems reasonable to support v9.00 base configurations through the entire version run.)

* In practice, not every pixel on screen requires full shading, e.g. the sky does not require complex shading. But some parts of the screen may be shaded multiple times. This is called “overdraw”. For example, with a runway we pay for our shaders twice – first with the ground underneath the runway, then with the runway itself.

Posted in Development by | 5 Comments