Here’s our keynote talk from FlightSimExpo 2019 last week!

FSExpo 2019

Evans’ video team really did a fantastic job with the feed this year – everything was live-streamed in great quality, and the talk has a direct feed of the slides and good-quality audio. Totally what we were hoping for!

About Ben Supnik

Ben is a software engineer who works on X-Plane; he spends most of his days drinking coffee and swearing at the computer -- sometimes at the same time.

19 comments on “FlightSimExpo 2019 Talk”

  1. Everyone at FSExpo 2019 got quite a treat this year. And like Vegas, it was a whole heap of fun to connect with you, Ben, as well as Austin, Philipp and the rest of the team. Definitely a first class experience, and looking forward to Vegas again next year! Thanks for the chance to bend your ear(s)!

  2. Hello Ben,
    It was good to hear this talk again. I really needed to hear it a second time to catch the parts I missed.

    I hope someday Austin will give a little more consideration to shared cockpits (multi-crew online). I say this because in the real world many aircraft require two pilots. I use the SmartCopilot plugin… but I think it is limited by how the sim works under the hood. A native solution would be great!

    Also, I really wish Plane Maker would be overhauled in terms of UI and helicopter friendliness, with a little more explanation and some examples for various use cases. Then maybe I could approach it the way I approach the maintenance of real aircraft at work.

    Greetings from Argentina

      1. I think not – it seems to me that it uses the same techniques SmartCopilot already uses to keep two pilots synchronized. Some things don’t work well with SmartCopilot. I’m with Pablo and hope for a native solution as well.

        1. Hi Carsten, I’m Dellanie from FlyJSim.

          We don’t parse data the same way as SmartCopilot (SCP). SCP basically passes each action over a peer-to-peer connection. We instead pass the state constantly through a dedicated server. This means that:

          – It’s more difficult to have de-synchronisation events
          – We can more reliably transmit to 2+ pilots
          – We can tailor the experience for custom avionics in different payware aircraft.

          Feel free to visit flyjsim.com/sharedflight or join our discord for more info 😉

    1. Pablo,

      When I started watching more airline pilot videos I suddenly felt waaaaaaaay better about being overwhelmed when trying to fly airliners solo in X-Plane.

      I’d look forward to this feature, but it’s hard to imagine how it’d be done.

  3. Any plans to eventually port X-Plane Mobile to the Oculus Quest VR? There currently aren’t any decent flight sims for the device, and considering the Quest runs on Android hardware, it would seem like it wouldn’t be too much of a stretch to hook up the VR controls.

    1. Chris is aware that you can do VR on mobile devices and we have looked at it… Austin had his mobile car sim doing VR with Cardboard once at a company meeting as sort of a tech demo. I think there is some fundamental tech we have to get in place first, e.g. porting mobile to Vulkan/Metal so we have modern stereo rendering.

  4. Will XP11 see any Vulkan-enabled improvements beyond what comes inherently from said conversion (i.e. the 5-10% bump for Nvidia)? Or will we have to wait for future versions of the sim to see the fruits of y’all’s labors really stretch their legs? I’m not talking just refactoring some ‘for’ loops, either. I’m wondering about something on par with giving each car on the road its own thread (that’s just an example/joke, not feature-whining).

  5. Yes, it was very good – for once I was able to watch it live, as the timing here was 5:30 am… It felt a little short though, at 45 min; it felt like there was more to cover but no time for it. Thanks for the Vulkan/Metal news.

  6. At last!!!! More news about Vulkan and FPS improvements. We really need this, and more, in VR. At the moment I only play DCS and FS2 in VR. I’m looking forward to playing my X-Plane 11 in VR with good performance, or on a monitor at 60 fps minimum.

  7. Ben (or anyone else), I was wondering if you had a chance to see https://developer.apple.com/videos/play/wwdc2019/611/ and https://developer.apple.com/videos/play/wwdc2019/606/ – particularly if the approach of moving more of the render pipeline into the GPU (and reducing trips to CPU/system RAM) is something under consideration?

    Also, with word that Afterburner (the add-on card for the Mac Pro) might be a reprogrammable FPGA, any thought about offloading anything (like the flight model) to dedicated hardware?

    One last question, are you targeting a specific version of Metal?

    I understand this is waaaaay in advance, I’m just excited.

    1. First re: GPU-driven dispatch – it’s a cool tech, and we may someday use it. Our approach is a measure-fix-repeat cycle…that is, we target the things that make the most difference to the most users.

      As it turns out, for a lot of our users, Vulkan/Metal scratches that itch, because we’ve seen users with up to 40% of CPU time in the GL driver – the _single_ biggest CPU suck. The numbers are way lower on Vulkan/Metal. So if we wake up and some part of the process is slow and lends itself to moving to the GPU then sure, we may move it there. We also have the other cores to move things to (e.g. parallel on the CPU).

      I can say I am skeptical of GPU-driven-cull+GPU-driven-render for X-Plane…it’s a great technique, but in our case, as soon as you get up to about 600 feet, _nothing_ is occluding anything in the scene and we just have to be able to draw -the kitchen sink- really fast, so there we want techniques that are fast in absolute performance. GPU driven scene encode might be one, but the occlusion part is less useful than in a shooter where you hide behind a wall and the wall occludes an entire city.

      (Occlusion in the cockpit is a little bit different because the occlusion shapes are extremely regular, e.g. the windows. I’ve looked at optimizing that case and it looked like not-a-dead-end, but not nearly as much of a win as just spending some time making all of the code better.)

      Re: FPGAs, we’ve seen this movie before, e.g. with the PhysX card. The truth is the FM isn’t that computationally expensive, and the part that is expensive is mostly ground interactions, which are all about how the scene is stored in memory, rather than how much ALU we have on a card. We get the same question with regard to GPU-ing the FM. If we were doing CFD it’d be different, but then we’d be running at sub-real-time speeds EVEN with the GPU.

      1. Thanks!

        On my (admittedly old) 2010 Mac with an NVIDIA GTX 1060, the two things that hammer it the most are the pure number of objects, then shadows. I am very CPU-starved at 2.4 GHz.

        I am contemplating next computer (I use macs for everything else, so I am hesitant to also get a PC) and I am trying to target “where the puck is going.”

        Obviously “small number of very fast cores” is optimal, and then the question is which graphics card to go for.

        If things like shadows/objects were to be computed via the GPU rather than the CPU, things like the HBM-equipped Vega cards start looking attractive.

        Thanks for looking into the cockpit occlusion part!

        1. The thing with traversing the scene graph on the GPU is that the whole process just doesn’t lend itself very well to how GPUs work. This is something that is much better done on the CPU, especially on Vulkan and Metal, where filling the command buffer isn’t nearly as much of a problem as it is on OpenGL. GPUs are really good at calculating a lot of numbers in parallel, but the one thing they are really bad at is scattered memory access and branching. Both tend to be the main workload when traversing scenes, which are usually represented as trees.

          Additionally, this is something where, in the future, we can actually go wide on the CPU. Since we have to traverse the scene graph potentially multiple times per frame (shadows, reflections, main rendering pass), we can speed this up by dispatching this to multiple CPU cores and filling the command buffers in parallel. This is something that just isn’t possible right now with OpenGL, but with both Vulkan and Metal we can finally achieve this. It’s a much easier step than going to the GPU, and promises much better returns in terms of perf.

      2. CFD sounds cool… and now that you mention it, I started thinking about crazy scenery: “what kind of home computer (specs) would be needed for this methodology to be used in X-Plane?” 😀

Comments are closed.