I am going to take Marginal’s suggestion: in the future, ATTR_no_depth will be mapped to ATTR_poly_os 1, and ATTR_depth will be mapped to ATTR_poly_os 0. As far as I can tell, historically the ability to turn off all depth buffering was a misguided attempt to do the kind of things that ATTR_poly_os is meant for.
This implementation will hopefully help any content that is (for some reason) still using ATTR_depth and ATTR_no_depth…modern OBJ generators like the Blender plugin and the AC3D plugin never use these old attributes.
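To make the mapping concrete, an old OBJ fragment like this (a made-up example, just to show the substitution):

ATTR_no_depth
TRIS 0 12
ATTR_depth

will be treated exactly as if it had been written as:

ATTR_poly_os 1
TRIS 0 12
ATTR_poly_os 0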
I get a fair number of emails where users send me a link and say “can you make X-Plane/WED look like this?” I’ll beat you to the punch this week…I wish WED looked like this.
Unfortunately it probably never will for three reasons:
- X-Plane takes long enough to create the structures you see that editing the world in real time would be difficult. We try to load things on another thread while you fly to help smooth this out.
- Our scenery-creation tools do a lot of work to build the DSFs before you fly – minutes per DSF. Again, a lot of computation happens between the time the raw data changes and the time you can fly over the result.
- We’re trying to keep our tool set open source, but the X-Plane rendering code is closed source, so WED would need an alternate rendering/editing engine written from scratch (a huge task).
(Of course, if such an engine were written, it could be designed to fix the first two points as well.)
Still, we must all dream. 🙂
Almost two years ago, I posted this blog entry, pointing out that some legacy OBJ commands are, well, evil.
My inclusion of ATTR_poly_os in that blog post was a little strange – when used correctly, ATTR_poly_os is the right way to overlay decals on the ground, and it is fully supported. (The message is just, I suppose, that if you need a huge number for your poly_os, something is really wrong.)
Now ignoring ATTR_poly_os, you might notice that the AC3D plugin exports almost none of those dubious attributes. But what about flat shading? Flat shading isn’t necessary, but you were allowed to use it.
So I came to my senses and modified the AC3D exporter – the exporter will now never use ATTR_shade_flat. Instead it uses smooth shading and per-vertex normals, avoiding an unnecessary attribute no matter what you do in AC3D. 99.99% of the time this will yield the fastest framerate.
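To see why the attribute isn’t needed: in OBJ8 every vertex carries its own normal, so a facet can be made to look flat simply by giving all of its vertices the face normal. A made-up vertex-table fragment (the numbers are arbitrary):

VT 0.0 0.0 0.0   0.0 1.0 0.0   0.00 0.00
VT 1.0 0.0 0.0   0.0 1.0 0.0   0.25 0.00
VT 1.0 0.0 1.0   0.0 1.0 0.0   0.25 0.25

All three vertices share the normal (0 1 0), so that triangle shades “flat” even though the object never issues ATTR_shade_flat, and the rest of the mesh can stay smooth.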
Why wasn’t the plugin always like this? I thought it was, and only discovered today that I hadn’t finished this optimization.
I am interested in proposals for an airport-vehicle system in X-Plane. I will explain exactly what I am looking for, but first I must note the oddity of this request:
99.99% of the time, users give us too many implementation details on a feature request. Usually we want to know what you want, and not how you want it done. Example:
Bad: please implement VBOs for the 3-d clouds.
Good: please improve frame-rates when 3-d puff clouds are used.
If you ask for something too specific, we can’t tell what you really want – there might be a better implementation.
So normally the feature request is what’s useful, not the implementation.
In this case, however, I am interested in a proposed system for third party control of airport vehicles. If you have an interest in this, please write up a proposal and email me. Forgive me for bashing you over the head with the following, but I get flooded with email on topics like this:
- Please do not email me on this unless you have a full proposal for a system by which authors can control ground vehicles. Please do not email me asking for other features, telling me which type of truck is your favorite, or asking for user-level features. My goal here is to get feedback on how third parties would like to be able to customize a ground vehicle system. If you don’t understand the difference, please don’t email on this subject.
- I cannot promise that I will do anything in particular that you suggest – only that I will read and carefully consider any proposal that makes sense.
- and I cannot say when any of this would be implemented (not soon – if we’re only at the RFC stage, we’re only beginning to look at this). This is not a feature announcement.
I will try to respond to proposals quickly, but I’ve been pretty badly swamped by email, so it could take a few weeks!
Today is Rosh Hashanah, the Jewish New Year. In Jewish tradition it is also thought of as the birthday of the world. If you’re reading this blog, you probably have some interest in how X-Plane attempts to simulate the world on your computer. The attempt over the years to create a higher fidelity simulation has led me to a deeper appreciation of the subtlety, complexity, and beauty of the real thing.
(Nothing makes you realize how rich and intricate the world is quite like trying to model it with a few million triangles and ending up with something that looks completely crude.)
X-Plane’s digital world isn’t the only way that we interact with a proxy instead of the real thing. When we drive instead of walk, eat packaged food from a supermarket, talk on the telephone instead of talk in person, our technology becomes a proxy for our relationship with our direct natural environment – the planet, plants, animals, and other human beings.
Now I’m not saying that any of these things is bad. I’m not about to become a dairy farmer, and without the internet we couldn’t create X-Plane at all. But I think it’s important, on this day and hopefully every day, to take time for activities that put us in direct contact with the world. Consider a few questions:
- How does what I eat affect the world?
- How does my travel affect the world?
- What impact does my home have on the world?
- Am I leaving the world in better condition for the next generation or worse?
Please … take a few moments to consider the world, the only home we’ve ever had.
When I was in the Dolomites a few years ago with Sergio, we were looking out over the mountains from his friend’s balcony – mile after mile of beautiful peaks and rolling hills. I looked at him and said, “God has more polygons than we do.” It was a joke at the time, but I think that really observing the real world, and realizing that the digital realities and technologies we create can be a proxy and an addition but never a replacement, is critical to understanding the responsibility of stewardship we have over the planet. Are we taking good care of our most precious gift?
Side note: please do not post tech support questions as blog comments – please use the x-plane-scenery Yahoo list or the x-plane.org scenery forum. I would like to keep tech support discussions in easily searchable public places so future users can get answers quickly.
I saw a post on x-plane.org referring to the process of creating translucency in objects attached to airplanes as a dark art. There is definitely a lot of weirdness in how X-Plane draws airplane objects. I will try to shed some light on what’s really going on and how to deal with it. For this first post, I’ll explain the requirements of the hardware, which shape the ensuing chaos.
First, and most important, in order for X-Plane to render translucent geometry (objects or otherwise), the geometry must be drawn from farthest from the viewer to closest. This is in stark contrast to normal operation — for transparent or opaque geometry, we can draw the closest objects first, and the graphics card makes sure the far away objects don’t “paint over them”. But the technology that does this (z-buffering) doesn’t work when the closer geometry is translucent. (If the translucent geometry is drawn first, it acts opaque to what is behind it.) For more info on this phenomenon and what to do about it, look here.
So in order for our translucent cockpit objects to draw correctly they need to be drawn from “back to front”. But note that this term is relative to the camera — which object is closest will depend on where the camera is located!
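As a rough sketch of what “back to front” means in practice, here is the kind of sort a renderer has to do before drawing translucent geometry (this is illustrative C++, not X-Plane’s actual code – all of the names are made up):

#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

struct TranslucentDraw {
    Vec3 center;                       // approximate center of this translucent mesh
    // ...plus whatever is needed to actually issue the draw call
};

static float dist_sq(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Sort farthest-first relative to the CURRENT camera position, then draw in order.
// The sort has to be redone whenever the camera moves, because "back to front"
// only has meaning for one particular viewpoint.
void draw_translucent(std::vector<TranslucentDraw>& draws, const Vec3& camera) {
    std::sort(draws.begin(), draws.end(),
        [&](const TranslucentDraw& a, const TranslucentDraw& b) {
            return dist_sq(a.center, camera) > dist_sq(b.center, camera);
        });
    for (const TranslucentDraw& d : draws) {
        // issue the real GL draw call for d here
        (void)d;
    }
}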
The other cause of cockpit object weirdness is the near clipping plane. At any time when X-Plane is drawing, there are two limits on where we can draw:
- The near clipping plane defines an invisible wall – anything closer to the viewer than this distance will not be drawn.
- The far clipping plane defines the other invisible wall – anything farther from the viewer than this distance will not be drawn.
The far clipping plane is usually set far enough away that objects disappear into the fog before hitting it. The near clipping plane is usually set close enough that by the time an object hits it, your plane has crashed.
Now here’s the rub: the precision with which the graphics card can discriminate which polygon is closer (via z-buffering) goes down as the ratio of the far to the near clipping plane distance gets larger!
Take a second to think about that. Basically, if we want to increase the visibility in X-Plane without losing z-buffering fidelity, we need to move the near clipping plane farther away from the user.
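To put some numbers behind that, here is a tiny back-of-the-envelope calculation using the standard OpenGL depth mapping (a sketch with made-up clip-plane values, not X-Plane code):

#include <cstdio>

// Window-space depth for an object at eye-space distance d, given near and far
// clip-plane distances n and f (standard perspective projection, depth in [0,1]):
//     depth(d) = f * (d - n) / (d * (f - n))
// The mapping is hyperbolic, so most of the depth buffer's resolution is spent
// very close to the near plane.
double window_depth(double d, double n, double f) {
    return f * (d - n) / (d * (f - n));
}

int main() {
    // With near = 0.5 m and far = 40 km, objects 10 km and 11 km away land on
    // nearly identical depth values - prime z-fighting territory.
    std::printf("%.8f %.8f\n", window_depth(10000, 0.5, 40000), window_depth(11000, 0.5, 40000));
    // Push the near plane out to 5 m and those same two objects end up roughly
    // ten times farther apart in depth-buffer space.
    std::printf("%.8f %.8f\n", window_depth(10000, 5.0, 40000), window_depth(11000, 5.0, 40000));
    return 0;
}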
The real problem is this: X-Plane, with its sometimes long visibilities (when you get up into orbit, you can see a long way!!) really stretches the z-buffering fidelity of even modern cards. So we have to keep the near clip plane fairly far from the viewer in order to have the world look reasonable. But that distance might be a lot larger than the distance from the viewer to the interior of the cockpit!
We work around this by having two separate drawing passes. We first draw the “outside” world, with a near clipping plane that is fairly far away. (Every now and then a user tells me that this clip plane is causing scenery not to be drawn.) We then draw the cockpit interior and the user’s plane, with the near and far clip plane both reset to be a lot closer. This way we can use our “z-buffer precision” in different ways for different geometry.
(It should be noted that z-buffering does not correctly handle the relationship between near and far objects when we reset the near and far clip plane. This technique works because we assume that everything drawn in the “cockpit” view will draw over everything drawn in the “outside” view.)
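Sketched in old fixed-function OpenGL terms, the two passes look roughly like this (illustrative only – the helper functions and clip distances are invented for the example):

// Pass 1: the outside world, with the near plane pushed out to preserve
// depth precision over very long visibility distances.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(fov_deg, aspect_ratio, 10.0, 80000.0);   // near = 10 m, far = 80 km
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
draw_outside_world();

// Pass 2: the cockpit and the user's plane, with both planes pulled in close.
// Clearing the depth buffer between passes is what makes the assumption explicit:
// the cockpit pass simply draws over whatever the outside pass produced.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(fov_deg, aspect_ratio, 0.05, 100.0);      // near = 5 cm, far = 100 m
glClear(GL_DEPTH_BUFFER_BIT);
draw_cockpit_and_user_plane();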
In my next post I will explain what X-Plane actually draws. For now suffice it to say that X-Plane has the task of drawing from farthest to nearest, but also the task of drawing in two phases: a far-away outside view and a close inside view.
If you look at a lot of the text file formats we often have something like this:
TEXTURE
TEXTURE_LIT
TEXTURE_NOWRAP
TEXTURE_NOWRAP_LIT
etc. It’s a bit of a disaster. The problem is that the command is encoding two separate ideas:
- What the texture is used for (primary texture vs. lit vs. composite vs. mask)
- How the texture is loaded (with wrapped vs clamped edges)
What got me looking at this was a test Sergio did with some ground lighting. He had texture compression on and his nice lighting and halo textures were absolutely trashed, because texture compression isn’t very nice to alpha masks.
Internally we deal with this by marking textures as “not to be compressed” — this is why the clouds don’t look ugly. I thought, “why don’t I make this available in OBJ and pol files…that’s not very hard”. But do we really want this?
TEXTURE_LIT_NOCOMP
TEXTURE_LIT_NOCOMP_NOWRAP
So I’m thinking we may need a syntax that separates the “what” (what slot in the scenery the texture is used for) from its settings. Something like this:
TEXTURE2 -nocomp -wrap my_truck.png
TEXTURE2_LIT -comp -nowrap my_truck_LIT.png
The “flags” would affect how the texture is loaded (the two obvious ones are wrapping/clamp controls, and compression-inhibition) and the command name would say what the texture is used for.
I am also looking at adding normal maps to objects – this would be a third texture attached to the object (so you could have a base texture, normal map, and lit texture). The advantage of this scheme is we’d need only three commands with a pile of flags.
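Put together, the texture block for an object might end up looking something like this (strictly hypothetical syntax – none of these commands exist today):

TEXTURE2        -wrap           my_truck.png
TEXTURE2_LIT    -wrap -nocomp   my_truck_LIT.png
TEXTURE2_NORMAL -wrap           my_truck_NML.png

Three “what” commands, each with its own loading flags, and no combinatorial explosion of command names.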
Anyway, just something I’m thinking about.
(One thing I haven’t worked out is how this would interface with dataref-driven textures, which would require yet more file format gunk.)
Why don’t the cars drive backward when you pull the slider backward in replay mode? The short answer is “because we don’t care enough to fix it”, but a better answer might be “it would take a lot of programming time and suck up more resources from X-Plane to fix this…and we think our customers would rather that we focus our programming and your hardware resources on framerate.”
The cars are an interesting special case of a whole number of sim phenomena that we don’t attempt to track carefully in replay. Replay is designed to allow you to watch your flight – it would be cool if the scenery was doing the exact same thing during replay as during the flight, but I don’t think it’s essential for training purposes and it does come with a cost.
First remember: replay mode works by saving past values of the sim to RAM. So the more we save, the more RAM we chew up saving past history, and the less time we can save before we run out of virtual memory.
Now in some cases, the motion of dynamic sim objects is at least somewhat random. In this case we can’t easily “reverse” the algorithm that generated the motion.
But the cars are more problematic.
Not only is their motion somewhat random (each time a car makes a turn at a fork in the road it randomly decides which way to go), but the cars are also maintained in memory in a way that allows us to figure out which car has to make its next turn very rapidly, without using a lot of CPU. As much of a CPU hog as the cars are, they would be much, much worse without this memory structure.
The problem is, the memory structure is organized based on time flowing forward. That is, we can only tell you which car needs CPU attention next if the cars are driving forward. Put time into reverse and we now know which car needed our help last! Not useful!
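One plausible way to picture that structure (my illustration, not the actual X-Plane code) is a priority queue keyed on the time each car next needs attention:

#include <queue>
#include <vector>

struct Car {
    double next_decision_time;   // when this car reaches its next fork or intersection
    // ...position, speed, route, etc.
};

struct SoonestFirst {
    bool operator()(const Car* a, const Car* b) const {
        return a->next_decision_time > b->next_decision_time;   // min-heap on time
    }
};

// Popping the queue gives the car whose next decision comes soonest, cheaply,
// no matter how many cars exist. The whole structure answers "who is next in
// the future?" - run time backward and the question becomes "who was last in
// the past?", which it simply cannot answer without being rebuilt.
using CarQueue = std::priority_queue<Car*, std::vector<Car*>, SoonestFirst>;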
So to make the cars drive backward we would have to transform this data structure every time the flow of time changed. I think it would be more annoying to have this massive CPU recomputation each time you rewound the replay than it would be nice to have the cars move backward. Why not have two data structures, one for forward, one for reverse? Well, now we’ve saved CPU but burned RAM. Either way, we’re talking about hardware resources that could be used for more scenery or more framerate.
The cars have yet another behavior that makes them hard to reverse: they are born and die! A car is born any time we realize that there aren’t enough cars on the road for the given rendering settings. A car dies any time it gets too far away from the user’s plane or reaches a point in the road where it can’t proceed. (Typically these are 1-way streets that dead-end. This happens because the road data we use has very poor flow information, leading to some really strange streets.)
This cycle of cars being born and dying maintains a reasonably constant car population over time, and a car population that is near your plane as you fly. But to reverse traffic, we would have to reincarnate cars that had died previously. This would mean spending memory on remembering which cars had died, and where. (Even if the algorithm that decides where a car is born is simple, the algorithm to predict where a car will die is quite complex, because it requires looking at the entire set of steps the car would make during its life until it reaches the point where it is killed. So computing this information up front is out of the question.)
That’s probably more information than you wanted to know. Generally speaking, if something unrelated to the flight model doesn’t replay in replay mode, it’s probably because it would be too “expensive” to remember its history. The cars are the most complex example, but definitely not the only example!
This goes into the bucket of “weird X-Plane behavior”: X-Plane will try both PNG and BMP file extensions when opening images, no matter what is in your file. How we got to this state is, at best, confusing.
Originally, most X-Plane image files did not contain a suffix. So an ENV file contains “grass” and X-Plane would change that to grass.bmp.
Then we added PNG support. X-Plane would try grass.png and then grass.bmp. In this case, not having the extensions turned out to be handy — authors could simply bulk-convert their images and go on with life.
With most new scenery system files, the extensions are a lot more rigid:
- The extension appears in the referencing file.
- The sim only tries that extension.
- If the format doesn’t match the extension, it’s an error.
So if you want a DSF file to reference a facade, it’s buildings.fac and if that .fac file is actually a forest file, it’s an error. The sim won’t try to decide which is more correct, the header of the file or the extension, it will just go “you’re nuts” and bail out.
But (for historical reasons) images are an exception. Keeping with the “any extension goes” theme for images, X-Plane will actually try both PNG and BMP versions of your file. The actual file format still has to match the extension, though – if you rename a BMP to foo.png, X-Plane won’t load it at all.
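In pseudocode, the lookup behaves roughly like this (a sketch of the behavior described above, not the actual loader; the helper functions are hypothetical):

#include <string>

bool file_exists(const std::string& path);   // hypothetical helper
bool load_png(const std::string& path);      // fails if the bytes aren't really PNG
bool load_bmp(const std::string& path);      // fails if the bytes aren't really BMP

bool load_image(const std::string& base)     // "base" has no extension
{
    // Try each known extension in order of preference...
    if (file_exists(base + ".png"))
        return load_png(base + ".png");      // ...but the content must match the extension.
    if (file_exists(base + ".bmp"))
        return load_bmp(base + ".bmp");
    return false;
}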
We have PNG as our primary image format and BMP for backward compatibility. But it’s imaginable that we could have DDS and PNG both as primary formats — PNG for images that need lossless fidelity and DDS for images where compression is acceptable. In such an event, X-Plane’s tendency to try every extension means authors can bulk-convert from PNG to DDS (making their packages load faster) and go home happy.
Jonathan Harris suggested I look into .dds as an image format for X-Plane. I think he’s got a good idea.
First, what is DDS? It stands for “DirectDraw Surface”; as a file format, it’s basically a file whose layout matches the in-memory layout of a texture in DirectX. It’s a simple image format, but it does two useful things for X-Plane (or any other 3-d application):
- It can contain compressed images using the same compression OpenGL and DirectX use (DXT compression).
- It can contain mipmaps – smaller versions of the same image.
Now before things get out of hand, using .dds files in X-Plane would not mean converting X-Plane to use DirectX!! .dds is simply an agreement about how to arrange a file. You can easily write code to load a .dds file into an OpenGL application, and that is what we would do.
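For the curious, uploading a pre-compressed DDS image into OpenGL really is a small amount of code. A rough sketch, assuming the file has already been read into memory and its header parsed (the struct and field names here are invented for the example):

#include <GL/gl.h>
#include <GL/glext.h>   // for GL_COMPRESSED_RGBA_S3TC_DXT5_EXT (EXT_texture_compression_s3tc)

struct DdsImage {                       // hypothetical result of a DDS header parser
    int width, height;
    int mip_count;
    const unsigned char* mip_data[16];  // pointer to each mip level's DXT blocks
};

void upload_dxt5(const DdsImage& img) {
    int w = img.width, h = img.height;
    for (int level = 0; level < img.mip_count; ++level) {
        // DXT5 stores 4x4-pixel blocks at 16 bytes per block.
        int size = ((w + 3) / 4) * ((h + 3) / 4) * 16;
        // The blocks go straight to the driver - no decoding or recompression on the CPU.
        glCompressedTexImage2D(GL_TEXTURE_2D, level, GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,
                               w, h, 0, size, img.mip_data[level]);
        w = (w > 1) ? w / 2 : 1;
        h = (h > 1) ? h / 2 : 1;
    }
}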
There would be three fundamental benefits to using DDS files:
File size on disk (and download).
PNG is a losslessly compressed file format. This kind of compression (the internal algorithm used by PNG is the same as ZIP files) preserves the image, but the reduction in image size is a function of how “simple” the image is. Images with a lot of detail and high frequency information (rapid changes in color) do not become much smaller than the amount of VRAM they use when compressed in a PNG. A lot of the terrain textures we use (4 MB in VRAM) are still 3 to 3.5 MB on disk!
By comparison, the texture compression used by OpenGL is lossy – the image looks worse. But the image is reduced by a factor of 4 to 8, every time. So one of these images (4 MB in VRAM when uncompressed) would be 1 MB with S3TC compression (available in a DDS file). Even for very simple PNG files, you rarely see such a reduction in size, so .dds files would be smaller, meaning faster updates and downloads.
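To put rough numbers on it: a 1024 x 1024 texture at 32 bits per pixel is 4 MB in VRAM. DXT1 stores it at 4 bits per pixel (0.5 MB, an 8x reduction) and DXT5 at 8 bits per pixel (1 MB, a 4x reduction), and the DDS file on disk is essentially the same size as the compressed copy in VRAM (plus about a third more if mipmaps are included).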
(Another thought: if a PNG file is smaller than its VRAM requirement, it means the PNG file has lots of areas with the same color. This means the PNG file is WASTING VRAM! We encourage our artists to make use of every little pixel in an image, and this means that PNG files that are well-created for X-Plane tend to get relatively little benefit from PNG’s internal compression.)
Load Time
Image load time is a significant factor in X-Plane. There are several reasons why DDS files could load a lot faster than PNG.
- The actual DDS file is smaller – less data to load and process means faster performance.
- Because the DDS file can contain smaller copies of the image, X-Plane doesn’t have to load a huge image and reduce its size using the CPU – it can just load the small version in the file.
- When compressed textures are used (and they almost always should be, see my previous blog post) it takes CPU power to compress the image, slowing down load. A DDS file containing a compressed image can be sent directly to the card!
Image Quality
Now this one is a little bit tricky. First, please review my previous blog post, which argues that texture compression is the best way to maximize VRAM. A DDS file will look better than a PNG file when texture compression is on. Here’s why:
The S3TC compression formats specify an EXACT way to “uncompress” and draw a compressed image. But there are MANY possible ways to compress the image. Remember that S3TC is a lossy compression – the compressed image is only an approximation of the original.
Well, it turns out that algorithms that find the best approximation of an image using S3TC are slow. So we can’t use the highest quality compression inside X-Plane when compressing PNGs for the graphics card – it would be too slow.
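For contrast, “compressing a PNG at load time” basically amounts to handing the driver uncompressed pixels and asking for a compressed internal format; the driver then runs whatever fast, lower-quality encoder it has. A one-line sketch (not X-Plane’s loader – width, height, and pixels are assumed to be already set up):

// Uncompressed RGBA in, S3TC/DXT5 out - the driver compresses on the spot,
// trading image quality for load-time speed.
glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,
             width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);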
But the image in a DDS file is already compressed — because someone else has compressed it when creating the scenery. When converting the scenery from PNG to DDS that author can use a very slow, very high quality compression algorithm to make the DDS files. It might take 12 hours to convert the entire package from PNG to DDS, but who cares? It’s something you do once and then everyone enjoys better looking images.
In other words, because the compression is done ahead of time for a DDS file, a higher quality algorithm can produce better approximations of the original images. Since we want to use compression anyway (to optimize our use of VRAM) this is a good thing.
When is DDS not appropriate?
Any time an image must not be compressed, DDS is inappropriate, and PNG is best. There are three basic cases in X-Plane where we avoid compression (see my previous blog entry for the real examples):
- Any time we need to preserve tiny detail accurately, like lettering and text.
- Any time the alpha channel needs to be smooth.
- Any time a texture is magnified so large that small color changes between two pixels is noticeable. (For example, compressing the color washes used for the sky make obvious artifacts.)
DDS wouldn’t replace PNG, it would augment it.