Comments on: Heavy Metal https:/2015/06/heavy-metal-2/
Developer resources for the X-Plane flight simulator

By: Ben Supnik https:/2015/06/heavy-metal-2/#comment-11211 Fri, 26 Jun 2015 00:03:53 +0000 http://xplanedev.wpengine.com/?p=6376#comment-11211 In reply to elios.

Yes, but I don't think sparse virtual texturing (SVT – which I guess MegaTexture is a variant of) is very useful for X-Plane, because X-Plane isn't based on a single giant high-res roaming "base terrain texture" (where texture size is a limitation).

X-Plane’s terrain falls into three categories:
– Land class, where repeating the texture with careful rules and heuristics lets us get higher res with less VRAM – that's a win in terms of real VRAM usage compared to SVT (rough arithmetic in the sketch after this list).
– Various bits of overlay texture that run along vector features with alpha – the logic here is about the same.
– Orthophotos – at the scale we draw orthophotos, the cost of using a set of textures is not at all harming our performance, so SVT isn’t needed.
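
Purely for a sense of scale – the resolutions below are illustrative assumptions, not our actual asset sizes – the VRAM arithmetic behind that first bullet looks something like this:

```cpp
// Back-of-the-envelope VRAM arithmetic (illustrative numbers only, not X-Plane's real assets).
#include <cstdio>

static double megabytes(double bytes) { return bytes / (1024.0 * 1024.0); }

int main()
{
    // A repeating land-class texture: one 2048x2048 RGBA8 image covers any amount of
    // terrain by tiling, so its VRAM cost is fixed no matter how much area is drawn.
    double landclass_mb = megabytes(2048.0 * 2048.0 * 4.0);          // ~16 MB

    // A single unique roaming "base terrain texture" for a 1x1 degree tile at roughly
    // 1 m/pixel would be on the order of 111,000 x 111,000 texels - exactly the kind
    // of texture SVT/MegaTexture exists to page in and out of VRAM.
    double unique_base_mb = megabytes(111000.0 * 111000.0 * 4.0);    // ~47,000 MB uncompressed

    std::printf("repeating land-class: %.0f MB, unique base texture: %.0f MB\n",
                landclass_mb, unique_base_mb);
    return 0;
}
```

The repeating texture's cost stays fixed no matter how much terrain it covers, so there is nothing for SVT to page; the unique base texture is the case SVT was invented for.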

SVT fits a particular type of tool chain for a particular type of game well, but X-Plane is not one of those games.

By: elios https:/2015/06/heavy-metal-2/#comment-11210 Thu, 25 Jun 2015 23:52:44 +0000 http://xplanedev.wpengine.com/?p=6376#comment-11210 In reply to Ben Supnik.

Have you looked into things like MegaTextures at all?

Seems like it would do wonders for X-Plane – one massive texture for each 1×1 degree block.

By: Ben Supnik https:/2015/06/heavy-metal-2/#comment-11201 Thu, 25 Jun 2015 14:30:13 +0000 http://xplanedev.wpengine.com/?p=6376#comment-11201 In reply to elios.

This is an "I haven't seen your code, but I know you need to rewrite all of it" post – that is, I don't think it's something we can really debate. A few general comments:

1. We are not going to rewrite the sim top to bottom. Joel Spolsky wrote a pretty good explanation of why this kind of thing is not a good idea – it says everything I could hope to say, only better.

http://www.joelonsoftware.com/articles/fog0000000069.html

I can only add that I have –lived– this – I have been in a development team (in my time before LR) that tried to rewrite everything top to bottom and ship product, and Joel is spot on.

2. Lots of games ARE using OpenGL from multiple threads, and they are all taking a hit on performance, which is why every next-generation 3-d API is using a different threading model than OpenGL. The threading model is one of the things in OpenGL that is so broken that glNext (e.g. Vulkan) made an incompatible break.

If you read any of the modern, recent advice from Nvidia and AMD, they both say the same exact thing: "Use OpenGL from only one thread."

Here is why they are saying this: the OpenGL threading rules and object model are written in such a way that the driver has to take a shared lock on some or all of the context for most OpenGL calls. For example:
– Flushed changes to a GL object from another thread are guaranteed to be visible after a bind in the rendering thread. (This means that "bind" is a synchronizing operation.) Annoyingly, bind is also necessary for every change of state in every draw call, meaning -every- bit of state change on every draw call has to take threading and synchronization overhead. That's why the draw-call CPU overhead in the new APIs is so much lower – they explicitly do no synchronization and trust the app not to be stupid, and it makes the code path a lot leaner.
– All OpenGL object classes can be mutable, and they can be mutated from any thread. This means all mutations need to take a lock to protect against mutations in another thread (since mutating an object is not allowed to crash the host app under any circumstances in the GL spec). This combines with…
– All OpenGL object classes are mutated only when bound (if DSA isn’t available) and even with DSA, the ‘namespace’ of objects is a table lookup in the GL context. Binding also looks up that namespace, and object creation mutates it. That means there’s going to be some kind of global lock per object class (e.g. buffer object, texture) that has to be taken every time we bind (lock acquire on draw) to protect against async loader code.

In other words, GL is thread-safe, but it’s not fast when it’s thread safe, because the object and threading model isn’t designed for a multi-core world. The overhead is high enough that the driver guys say “don’t do it”, and in fact, they add a special case in the driver to bypass locking overhead when they know the context is unshared (and therefore synchronization is unnecessary).
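
Here's a minimal sketch of the shared-context loader pattern described in the first bullet above (hypothetical names like PendingTexture, not our actual code; creation and sharing of the two contexts is platform-specific and assumed to be set up elsewhere): the loader thread uploads and flushes, the render thread waits on a fence and re-binds, and that re-bind is the synchronization point the driver has to guard with locks.

```cpp
// Sketch: async texture loading with two GL contexts that share objects.
#include <GL/glew.h>   // or any GL 3.2+ function loader, assumed already initialized
#include <atomic>

struct PendingTexture {
    GLuint tex = 0;
    GLsync fence = nullptr;
    std::atomic<bool> ready{false};
};

// Loader thread: its context shares objects with the render thread's context.
void loader_upload(PendingTexture& p, const void* pixels, int w, int h)
{
    glGenTextures(1, &p.tex);
    glBindTexture(GL_TEXTURE_2D, p.tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    p.fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
    glFlush();   // flush so the upload (and the fence) become visible to the other context
    p.ready.store(true, std::memory_order_release);
}

// Render thread, per frame.
void render_use(PendingTexture& p)
{
    if (!p.ready.load(std::memory_order_acquire))
        return;                                       // not uploaded yet
    if (p.fence) {
        glWaitSync(p.fence, 0, GL_TIMEOUT_IGNORED);   // GPU-side wait for the upload
        glDeleteSync(p.fence);
        p.fence = nullptr;
    }
    glBindTexture(GL_TEXTURE_2D, p.tex);   // the re-bind is what the spec requires for the
                                           // loader's changes to be guaranteed visible here
    // ... issue draw calls that sample the texture ...
}
```

Every one of those binds is where the driver has to take its shared locks – exactly the per-draw-call overhead described above.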

3. I agree there are things that can be done now to make X-Plane's rendering better without a next-gen API. (These things can even be done without a top-to-bottom rewrite! 🙂) That's why we continue to update and modernize the engine.

I just want to be clear here: the next-gen APIs don't exist because OpenGL is 'no longer shiny' – they exist because OpenGL's abstraction doesn't fit the hardware and its threading model is way too expensive for modern use.

By: elios https:/2015/06/heavy-metal-2/#comment-11197 Thu, 25 Jun 2015 02:44:28 +0000 http://xplanedev.wpengine.com/?p=6376#comment-11197 In reply to Ben Supnik.

There are TONS of OpenGL games that use more than one thread (most of them, really).

I really don't think just changing the API is going to help anything.
The sim really needs a top-to-bottom rewrite for modern hardware.

64-bit only, cut loose anything that's not at least Shader Model 3.0 or better.

Even staying in OpenGL there are some things that could be added, like megatexturing and better streaming of scenery data so that it doesn't crash the frame rate:
https://en.wikipedia.org/wiki/MegaTexture

But the fact that I cannot get a solid 60 fps at 1920×1200 without turning nearly everything off is silly at this point.

By: Ben Supnik https:/2015/06/heavy-metal-2/#comment-11192 Tue, 23 Jun 2015 20:33:14 +0000 http://xplanedev.wpengine.com/?p=6376#comment-11192 In reply to Quantumac.

Such a high level API is exactly what we would do! But that’s good for us, not good for plugins – existing plugins are linked -directly- against OpenGL and just going “you’re all broken” isn’t a great solution.

By: Quantumac https:/2015/06/heavy-metal-2/#comment-11191 Tue, 23 Jun 2015 20:32:00 +0000 http://xplanedev.wpengine.com/?p=6376#comment-11191 In reply to Ben Supnik.

Perhaps you should think about creating a high-level rendering engine which is API-independent, like some game engine developers have. Make it illegal for plugins and your own code to call directly into OpenGL – force everything to go through your rendering engine. The engine can be configured to use OpenGL, DirectX, Metal, etc. as required by the platform.
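
Something like this minimal sketch (hypothetical names and interface, nothing from the actual X-Plane SDK):

```cpp
// Hypothetical API-independent rendering interface; plugins and sim code would call
// this instead of OpenGL/Metal/DirectX directly.
#include <cstdint>
#include <memory>

struct TextureHandle { std::uint32_t id = 0; };

class Renderer {
public:
    virtual ~Renderer() = default;
    virtual TextureHandle create_texture(int width, int height, const void* rgba_pixels) = 0;
    virtual void draw_textured_quad(TextureHandle tex,
                                    float x, float y, float w, float h) = 0;
};

// Each backend implements Renderer; the sim picks one per platform at startup.
std::unique_ptr<Renderer> make_opengl_renderer();   // declared here, defined per backend
std::unique_ptr<Renderer> make_metal_renderer();
std::unique_ptr<Renderer> make_d3d_renderer();
```

Plugins would compile against the abstract interface only, so swapping the backend never touches their code.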

Speaking of Linux support, I’m a Linux/Mac OS X developer. I’ve been thinking about building a high-end Linux box just for running sims like X-Plane and KSP, so I’m definitely hoping you guys won’t drop Linux support. Having written software for Windows years ago (and therefore having been forced to use Windows), I don’t care to repeat the experience.

By: 9SL9 https:/2015/06/heavy-metal-2/#comment-11183 Sun, 21 Jun 2015 23:08:43 +0000 http://xplanedev.wpengine.com/?p=6376#comment-11183 In reply to Ben Supnik.

Ben,

Of course the hardware is not as good. There's no argument there. The Apple ecosystem is what it is. However, with the release of Metal for the Mac, many hope to see equal performance, independent of OS. So for me there's no point in discussing hardware – that can't be changed/fixed by LR. But we can leverage these new software technologies to improve the experience on the Mac platform. And, as a customer, it would be great to see it implemented. If not, I'll continue to use what I have, no hard feelings. Just a little disappointing 😉

By: Jo https:/2015/06/heavy-metal-2/#comment-11182 Sun, 21 Jun 2015 21:44:50 +0000 http://xplanedev.wpengine.com/?p=6376#comment-11182

One can certainly build a proper gaming rig using Linux as the OS 🙂 In 2016 I will upgrade my Linux gaming rig with whatever AMD comes out with as a new-generation GPU. My hope then is for X-Plane to allow using the open-source AMD drivers.

By: Ben Supnik https:/2015/06/heavy-metal-2/#comment-11181 Sun, 21 Jun 2015 17:23:21 +0000 http://xplanedev.wpengine.com/?p=6376#comment-11181 In reply to Udo Thiel.

First, the total amount of GPU you can get _is_ limited. Compare the GeForce 780M – the best GPU you could get in last-gen hardware if you maxed out the iMac (and you paid a LOT for that iMac) – vs. the GTX 780 Ti that you could put in a desktop PC. I have not priced this stuff out, but the current "maxed out" iMac (which is the only way to get the top GPU) starts at $2500; a premium current-gen GPU is typically $600. Anyone who has built a PC will tell you that they can get a ridiculously powerful box for the remaining money. (To match the iMac, they will need to buy a nice high-quality monitor…but then they don't have to re-buy it each time they upgrade.)

On to the numbers:
Core configuration (shader cores : texture units : ROPs):
– 780 Ti: 2880 : 224 : 48
– 780M: 1536 : 128 : 32
So we're looking at almost 2x the shader cores and 50% more ROPs.
Clocks and power (core clock / shader clock, memory data rate, TDP):
– 780M: 823 / 823 MHz, 5000 MT/sec, 100 watts
– 780 Ti: 876 / 928 MHz, 7000 MT/sec, 250 watts

So clearly the Ti is just slurping power – that’s why Apple doesn’t put them in anything they build. You can run a 250 watt GPU in a PC, not a problem.

But let’s look at the final numbers: how does this change in core config, watts and speed turn into real performance?

GFLOPS (a measure of pure compute power):
780M: 2369
780Ti: 5046

Pixel fill rate (gigapixels/second)
780M: 24
780Ti: 42

In other words, the high-end desktop GPU has over 2x the compute power (5046 / 2369 ≈ 2.1) and almost 2x the fill rate (42 / 24 = 1.75).

So yes – the Mac is pretty far away – a factor of roughly 2x. (For what it's worth, this is a LOT better than it used to be – a few generations ago the gap could be as much as 4x for an iMac, albeit that was back when Mac Pros could run real desktop GPUs.) Now that the Mac Pro is a low-power device, it's good that the gap from the top-end mobile part (in the iMac) to a big desktop card is only about 2x.

What about the averages? My experience with users so far is actually worse for the Mac: you really have to burn your wallet to get that top-end iMac. Lots of Mac users are using laptops (much worse performance), older-generation iMacs, and non-top-end iMacs. Windows users sometimes have not-the-top-end cards too, but they can put in a one-gen-off or one-notch-down card for somewhere between $200 and $300 and stay up to date. The Mac user has to ride their GPU down until it's time for a new machine.

Only a hardware survey will tell, but I would be shocked if every Mac user is maxing out their iMacs and updating them every 2 years, while PC users are consistently getting bottom-end cards. So far nothing we’ve seen indicates that.

By: Ben Supnik https:/2015/06/heavy-metal-2/#comment-11180 Sun, 21 Jun 2015 17:04:31 +0000 http://xplanedev.wpengine.com/?p=6376#comment-11180 In reply to Udo Thiel.

Yes. It is -possible- that the difference in indicated active users between Mac and Windows is entirely due to piracy. But honestly, it's pretty unlikely – the move in the hardware split to 2:1 Windows matches every other internal indication we have in terms of the platform split. Realistically, we have to assume that we're getting significantly more revenue from Windows than OS X.
