Comments on: What Vulkan Means to Developers – Developer resources for the X-Plane flight simulator

By: Notreallyme – Sat, 26 Mar 2016 18:14:31 +0000
In reply to Claudio.

Why would Vulkan do that? On the contrary, it should make cross-platform support easier.

By: Claudio – Fri, 25 Mar 2016 12:38:59 +0000
All of this sounds great!
I just hope Vulkan won’t cause you to drop Linux support. 🙁

By: elios – Mon, 21 Mar 2016 13:37:55 +0000
In reply to Ben Supnik.

All the more reason to seriously look at DX12: MS, NV, and AMD have dev tools to make DX12 with multi-GPU and VR as easy as they can for devs.

By: Ben Supnik – Sun, 20 Mar 2016 23:13:56 +0000
In reply to Kirill.

The water reflection detail (that’s what the setting controls) affects framerate on CPU-bound machines. If this improves under Vulkan, it will show up the same way as an overall improvement in fps.

By: Kirill – Sun, 20 Mar 2016 19:53:13 +0000
Hello Ben, I am interested in this question (the one in the subject): after X-Plane 10 transitions to Vulkan, will the water quality setting still influence FPS?
In the current versions of X-Plane 10, the water quality setting greatly affects the fps.

By: Ben Supnik – Sun, 20 Mar 2016 17:46:57 +0000
In reply to Dave_75.

I don’t understand why the blogger referred to is so freaked out about a shader gap. Given how new Vulkan is, I’d be shocked if the gap were any smaller. My understanding from talking to IHVs is that the big problem in proving that any of the new tech is ready is not comparing the new APIs to each other, but rather the new drivers getting beaten by D3D11: a huge amount of optimization has gone into that legacy, stable technology.

With that in mind, optimization can happen at -two- levels:
1. When you compile your shader to SPIR-V, “structure”-level optimization can happen in the generation of byte code. For example:

    int x = 0;
    if (x != 0)
    {
        do_expensive_thing();
    }

Once the shader compiler builds a back-end representation of this, it can apply optimization: the compiler can reason that x will _always_ be 0 at the point of the if statement, and therefore everything inside the if clause can simply be _deleted_ from the shader ahead of time, making it faster. This is the kind of optimization that LLVM does really well (reason about your code, then re-organize it to take advantage of the things it can see).

2. When the SPIR-V binary is loaded into the driver, the driver has to generate actual machine assembly for it, and this is where the vendor can optimize the generated code. Frankly, I don’t see why the driver can’t also run a structure-level optimization pass of its own, although one would hope there’s nothing left to be had by the time we get to SPIR-V. (My guess is: the drivers will end up optimizing – it always looks good for the GPU vendor to be fast.)

I expect that over time the IHVs will jump on case 2 to improve shader performance; the bigger question is how long it will take to get a GLSL->SPIR-V compiler with a really good back-end.

There _is_ one set of optimizations that is now strictly in the application’s hands. The Vulkan spec provides a specific mechanism for specialization – that is, the generation of more than one compiled shader where the constants have changed. Let’s look at our example again:

uniform float want_expensive_fx;

for (int i = 0; i < 1000; ++i)
    rgba += want_expensive_fx * texture(my_sampler, uv);

So – that’s terrible code, and you should never ever write it. But the key point is this:
- In Vulkan, if you code that, your loop runs 1000 times. Sorry, it sucks to be you.
- You can declare want_expensive_fx to be a specialization constant in your app – this results in multiple pipelines, and you must pick the right one. You could specialize with want_expensive_fx = 0 and the GPU compiler will delete that entire loop. (There’s a sketch of what this looks like at the end of this comment.)
- In OpenGL (and probably D3D??) this process was _totally automatic_. We app developers have been complaining about it (“you specialized my shader and it caused a pause in rendering”), but it has also meant that we could write really stupid code and have the driver clean up after us. That’s gone in Vulkan.

And that is a general ecosystem problem with Vulkan: any optimization the driver used to perform that now lives on the app side (since Vulkan has moved the app/driver split DOWN in the stack by quite a bit) is strictly the app’s responsibility, and if the app doesn’t do it, that optimization opportunity is lost forever. And I have to think: maybe we app developers aren’t the gods of optimization that we think we are. If we were, why have the driver guys been rewriting and replacing our shaders for so many years? 🙂

So the short of it is: driver code gen will get better fast, pre-compiled code gen will get better eventually, but app optimizations are on us. New ecosystems always take time to catch up in benchmarks to the existing, fully optimized incumbent.

EDIT: one last note on this: X-Plane has had its own “poor man’s” version of specialization constants for a while now (and I suspect other apps do too): X-Plane’s shaders have pre-processor #defines. When X-Plane builds a shader, it builds several GLSL shader objects, with the #defines pre-defined, and keeps them bundled together. This means:
- Changing one of these ‘constants’ is really a shader change in the app, and
- New GPU code is generated for each case.

That matters because those constants tend to turn major features on and off. E.g. if we compile our terrain shader with atmospheric scattering off, the code for atmospheric scattering is -totally gone- from the shader. It doesn’t cost us an if statement, instruction length, samplers, uniforms, or anything. We tend to use these #defines for major features where the amount of code removed is very large.
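For the curious, here is a minimal sketch of what picking a specialized variant looks like on the Vulkan side. This is not X-Plane’s code; the names (kFxValue, frag_stage_for, the shader module) are made up for illustration, and it assumes the GLSL declares the constant with a constant_id instead of a uniform:

// GLSL side: instead of "uniform float want_expensive_fx;" the shader declares
//     layout(constant_id = 0) const float want_expensive_fx = 1.0;
// C side (Vulkan is a C API):
#include <vulkan/vulkan.h>

static const float kFxValue = 0.0f;              // value to bake into this pipeline variant

static const VkSpecializationMapEntry kEntry = {
    .constantID = 0,                             // matches constant_id = 0 in the shader
    .offset     = 0,
    .size       = sizeof(float),
};

static const VkSpecializationInfo kSpec = {
    .mapEntryCount = 1,
    .pMapEntries   = &kEntry,
    .dataSize      = sizeof(kFxValue),
    .pData         = &kFxValue,
};

// Fills out the fragment-stage info used in VkGraphicsPipelineCreateInfo;
// frag_module is a VkShaderModule the app already built from its SPIR-V blob.
static VkPipelineShaderStageCreateInfo frag_stage_for(VkShaderModule frag_module)
{
    VkPipelineShaderStageCreateInfo stage = {
        .sType  = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO,
        .stage  = VK_SHADER_STAGE_FRAGMENT_BIT,
        .module = frag_module,
        .pName  = "main",
        .pSpecializationInfo = &kSpec,
    };
    return stage;
}
// Build one pipeline per constant value you care about; with kFxValue = 0 the
// driver's compiler sees a compile-time zero and can delete the whole loop.
// Picking (and caching) the right pipeline at draw time is the app's job.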

By: Ben Supnik – Sun, 20 Mar 2016 17:28:33 +0000
In reply to Filippo.

Vulkan supports this too, but “for free” is the exact -opposite- of how I would describe the feature!!!

First, I should say: I -think- some of the multi-vendor and multi-GPU interop features are -not- yet well specified in Vulkan. I saw a note from Graham Sellers on the forums basically saying “we deferred this to get the standard case out without delaying the spec; it’s better not to rush and screw this up, but it’s coming real soon”. (And I think that’s the right decision.) But we can look at DX12 and at the proposed models to at least understand the landscape.

And the landscape is this:
– The drivers aim to provide some reliable functionality for communication between “devices” (read: graphics cards), and in some cases to allow that to be portable between GPUs.
– In return, you, the app, do 100% of the work to make multi-GPU work.

That second point is kind of a big caveat. NO app running DX12 or Vulkan supports multi-GPU unless the app developer puts the feature in. And that means the app team stopping development on other features to -just- do multi-GPU.

The reason this happened is that making the driver do things like multi-GPU is basically impossible in modern apps. The driver can’t know enough about what the app is doing to know how to distribute work across hardware.

The question will be: how many apps will actually go to the lengths needed to make this work? My guess is: AAA games and games whose engines “just support it” will do it. So it’s not that an app has to be written a certain way; it’s that the app has to code the entire feature itself.

To give you an idea of what simple multi-GPU (e.g. SLI alternate frame rendering) costs: basically every single resource in the app has to be replicated to both GPUs – the resource management has to be done separately, and we have to identify and track resource problems on each GPU independently. For any resources that are truly shared, we have to either create them in both places or synchronize them across GPUs. We then have to render to one GPU or the other, track that state separately (e.g. resources on one GPU aren’t visible on the other) and finally somehow get the finished frame put back together. And that’s for the “easy” case, alternate frame rendering (which adds latency, which all the VR guys hate).
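To make that bookkeeping concrete, here is a rough sketch of the shape it takes in application code. This is hypothetical, not X-Plane’s code, and it is written against plain per-device Vulkan since the cross-GPU interop pieces were not final at the time: every resource becomes an array of per-GPU copies, and the app alone decides which GPU gets which frame.

#include <vulkan/vulkan.h>
#include <stdint.h>

#define GPU_COUNT 2                    // hypothetical: one logical device per physical GPU

typedef struct {
    VkDevice      device;              // each GPU gets its own device, queue, and pools
    VkQueue       graphics_queue;
    VkCommandPool cmd_pool;
} GpuContext;

// Every buffer/texture the renderer touches has to exist once per GPU; the app
// creates, uploads, and destroys each copy itself and tracks them separately.
typedef struct {
    VkBuffer       buffer[GPU_COUNT];
    VkDeviceMemory memory[GPU_COUNT];
} ReplicatedBuffer;

// Alternate-frame rendering: frame N is recorded and submitted on GPU (N % GPU_COUNT).
// Nothing created on one GPU is visible on the other, so the app must also arrange
// to move the finished image to whichever GPU owns the swapchain.
static void render_frame(GpuContext gpus[GPU_COUNT], ReplicatedBuffer *rb, uint64_t frame)
{
    int gpu = (int)(frame % GPU_COUNT);

    // ...record command buffers from gpus[gpu].cmd_pool that touch rb->buffer[gpu] only,
    // submit them to gpus[gpu].graphics_queue, then copy or present the result -
    // every one of those steps is now the app's responsibility...
    (void)gpus; (void)rb; (void)gpu;
}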

By: Tom Knudsen – Fri, 18 Mar 2016 22:32:06 +0000
Thank you, very interesting reading, Ben.

By: Dave_75 – Fri, 18 Mar 2016 14:04:56 +0000
Hi Ben!!

Are you aware of the potentially big problem that the Vulkan reference shader compiler apparently does no optimizations at all? I guess (an educated guess) that you are… Here is the take on that matter from the developer of another game:
https://forums.inovaestudios.com/t/to-vulkan-or-not-to-vulkan-that-is-the-question/3174

From what I understand (as I’m not an expert), optimizations are now independent of the drivers, and using the reference shader compiler without doing anything else produces a frame rate hit compared to OpenGL (in some cases) and DX11, and an even more serious one compared to DX12.

Is it a “deal breaker” for using Vulkan, or does it “simply” mean you will have to spend much more dev time on optimization just to reach at least the same level of performance (frame rate, I mean) as with OpenGL, DX11 or DX12?

You will correct me if I’m wrong, but it seems to be a serious problem… For you devs, at least 🙂

Bye.
Dave.

By: Filippo – Fri, 18 Mar 2016 10:29:01 +0000
Hi Ben,

based on your comments, it looks like freedom in the management of resources and shaders is what X-Plane may benefit from most in the future transition to Vulkan.

But there is another aspect that was heavily marketed by the DirectX 12 guys (though, if I understood well, DX12 and Vulkan look quite similar to each other, so I expect that a feature of one will more or less be present in the other too), and it is a sort of “heterogeneous computing”. In other words, with DX12 you can throw any number and combination of video cards (from different manufacturers, too) at your PC and the workload is (almost) magically shared among them. I think this is also a noteworthy feature deserving some attention. What I don’t know is: does Vulkan offer a similar capability? If yes, is it something that comes “for free” simply because an application supports Vulkan, or will applications have to be developed in a certain way to take advantage of this?
