[This post is a “behind the scenes” look at the tech that makes up the X-Plane massive multiplayer (MMO) server. It’s only going to be of interest to programming nerds—there are no takeaways here for plugin devs or sim pilots.]
[Update: If you’re interested in hearing more, I was on the ThinkingElixir podcast talking about this stuff.]
In mid-2020, we launched massive multiplayer on X-Plane Mobile. This broke a lot of new ground for us as an organization. We’ve had peer-to-peer multiplayer in the sim for a long time, but never server-hosted multiplayer. That meant there were a lot of technical decisions to make, with no constraints imposed by existing code.
This post is targeted at plugin developers who are modernizing their object drawing – if you don’t write plugin code, the Cincinnati Zoo has been showing their animals on YouTube, and it’ll be a lot more entertaining than this post. (An XPLMInstance cannot tunnel down two feet in fifteen seconds – one point for the zoo animals.)
XPLMInstance creates a persistent object that lives inside X-Plane and is visible in the 3-d world. It changes how you draw from “run some drawing code every frame” to “tell X-Plane that there is a thing and update its data every now and then.”
Instancing is actually a lot easier than draw callbacks! But there are two tricky gotchas:
1. You must create the custom DataRefs for your OBJ’s animation before you load the object itself with the SDK. (If the DataRefs do not exist at load time, the animations are disabled as “unresolved to any DataRef”.)
2. When you create the instance, make sure your custom DataRefs are on the list of DataRefs for that instance.
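Putting both gotchas together, here is a minimal sketch of the right order of operations. The DataRef name, object path, and function names are all hypothetical – adapt them to your add-on:

```c
#include <stddef.h>
#include "XPLMDataAccess.h"
#include "XPLMInstance.h"
#include "XPLMScenery.h"

/* Hypothetical custom DataRef that drives an animation in truck.obj. */
static float wheel_read_stub(void *refcon)
{
    return 0.0f; /* Never called by the instancing engine; 0 is fine. */
}

static const char *s_drefs[] = {
    "example/truck/wheel_angle", /* hypothetical DataRef name */
    NULL                         /* the list must be NULL-terminated */
};

static XPLMObjectRef   s_obj;
static XPLMInstanceRef s_inst;

static void setup_truck(void)
{
    /* Gotcha #1: register the DataRef BEFORE loading the object, so the
     * OBJ's animations can resolve it at load time. */
    XPLMRegisterDataAccessor("example/truck/wheel_angle",
                             xplmType_Float, 0,      /* read-only      */
                             NULL, NULL,             /* int read/write */
                             wheel_read_stub, NULL,  /* float          */
                             NULL, NULL,             /* double         */
                             NULL, NULL,             /* int array      */
                             NULL, NULL,             /* float array    */
                             NULL, NULL,             /* raw data       */
                             NULL, NULL);            /* refcons        */

    /* Gotcha #2: pass the same DataRef names when creating the instance,
     * so the instance allocates per-instance slots for them. */
    s_obj  = XPLMLoadObject("Resources/plugins/my_plugin/truck.obj");
    s_inst = XPLMCreateInstance(s_obj, s_drefs);
}
```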
Here’s the really baffling thing: even after you create the custom DataRef and add it to the instance’s list, your DataRef callbacks will not be called.
Wha?
Here’s the trick: the DataRef you register is a global identifier, allowing the object to refer to what it wants to listen to. That’s why you have to create the DataRef – so that the identifier exists.
But when you create instances, each instance gets its own block of memory holding its own copy of those DataRefs’ values.
For example, let’s say you have a truck with four DataRefs, and you make five instances. X-Plane allocates 20 slots (four DataRefs times five instances) to store five copies of each DataRef’s values.
The instances never look at the DataRef itself. They only look at their local copies. That’s why, when you push different data to each instance with XPLMInstanceSetPosition, each instance animates with its own values – it reads only its own local data.
This is also why you won’t see your DataRef callbacks called (unless you use DataRefEditor or some other tool). The object rendering engine isn’t looking at the DataRefs themselves, it’s looking at the local copies.
In other words, XPLMInstance turns DataRefs from the pull model you are used to (X-Plane pulls on your read function to get the value) into a push model (you push values into the instance’s memory with XPLMInstanceSetPosition).
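Continuing the hypothetical truck sketch above, the per-frame “push” then looks something like this, with one float per DataRef in the list you passed to XPLMCreateInstance:

```c
/* Push a new position and animation values into the instance each
 * frame, e.g. from a flight loop callback. The floats in dref_values
 * line up one-to-one with the s_drefs list from XPLMCreateInstance. */
static void update_truck(float x, float y, float z, float wheel_angle)
{
    XPLMDrawInfo_t di;
    di.structSize = sizeof(di);
    di.x = x;
    di.y = y;
    di.z = z;
    di.pitch   = 0.0f;
    di.heading = 0.0f;
    di.roll    = 0.0f;

    float dref_values[1] = { wheel_angle }; /* one per listed DataRef */
    XPLMInstanceSetPosition(s_inst, &di, dref_values);
}
```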
This implies two things about your add-on:
- It doesn’t really matter what your DataRef read functions do – they can just return zero, and
- You can’t use tools like DataRefEditor or DataRefTool to debug your animations. (That didn’t work well in legacy code either, but it really won’t work now.)
If you try the obvious optimization of not creating your custom DataRefs (“hey, no one calls them”) before you create your instance, you will find that animation just stops working. This is because we need the DataRef to be that global identifier to match your instance data with the animations of the object itself.
One last note: if your old code used sim/graphics/animation/draw_object_x/y/z to determine which object was being animated (from inside a plugin “get” function) you do not need to do this anymore. Because each instance has its own local copies and your DataRef function isn’t called, this technique is obsolete.
In summary:
- You must register custom DataRefs.
- Their read callbacks can just return 0 – the instancing engine never calls them.
- Always list your custom DataRefs for animation when you create an instance.
- Do not use draw_object_x/y/z; use XPLMInstanceSetPosition to drive per-instance animation.
TL;DR: Running X-Plane with sudo is a bad idea. Instead, create proper udev rules (per this and this).
During the 11.10 beta, I’ve gotten a lot of bug reports from Linux users who report that their keyboard is being recognized as a joystick. This is… sort of a bug, but mostly intentional.
(If you’re not a Linux user, this won’t apply to you… but it will bore you! 😉 )
Background: What changed?
On Linux, prior to X-Plane 11.10, we were very picky about what USB devices we considered to be a joystick: we required a device to present a so-called “absolute” axis (in contrast to a “relative” axis like a mouse uses). The downside of this is that it prevented home cockpit builders from creating button-only hardware.
So, in 11.10 and beyond, we relaxed the requirements: if a USB device presents us with an axis, a button, or a hat switch, we’ll treat it like a joystick.
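X-Plane’s actual detection code isn’t public, but as a rough sketch of the general technique on Linux (using the standard evdev ioctls – the function and device path here are just an illustration), the relaxed policy amounts to something like this:

```c
/* Not Laminar's actual code – a sketch of the evdev technique only.
 * Under the 11.10-style policy, a device counts as a joystick if it
 * exposes any absolute axis, any button, or a hat switch. */
#include <fcntl.h>
#include <linux/input.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

#define HAS_BIT(arr, bit) ((arr)[(bit) / 8] & (1 << ((bit) % 8)))

static int looks_like_joystick(const char *dev_path) /* e.g. "/dev/input/event3" */
{
    int fd = open(dev_path, O_RDONLY);
    if (fd < 0)
        return 0; /* As a normal user, a keyboard's device node won't even
                     open – this is why only root sees the keyboard at all. */

    unsigned char abs_bits[(ABS_MAX + 7) / 8];
    unsigned char key_bits[(KEY_MAX + 7) / 8];
    memset(abs_bits, 0, sizeof(abs_bits));
    memset(key_bits, 0, sizeof(key_bits));
    ioctl(fd, EVIOCGBIT(EV_ABS, sizeof(abs_bits)), abs_bits);
    ioctl(fd, EVIOCGBIT(EV_KEY, sizeof(key_bits)), key_bits);
    close(fd);

    int has_axis = 0, has_button = 0;
    /* Hat switches show up as absolute axes too (ABS_HAT0X, etc.). */
    for (int a = 0; a <= ABS_MAX; ++a)
        if (HAS_BIT(abs_bits, a)) has_axis = 1;
    /* Any key/button bit counts – a keyboard's 104+ keys land here. */
    for (int k = 0; k <= KEY_MAX; ++k)
        if (HAS_BIT(key_bits, k)) has_button = 1;

    return has_axis || has_button;
}
```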
The problem with this policy seems obvious: keyboards have “buttons”! Like, 104+ of them!
The reason we didn’t worry about this is that the keyboard is only accessible (as a USB device) to programs running as root. So long as X-Plane runs as a normal user, it doesn’t even have the option of treating the keyboard as a joystick.
Why do people run as root?
The impetus for running as root (via sudo) is simple: if your Linux distro doesn’t recognize your joystick hardware as something that should be available to normal applications, running as root is a brute-force way to let X-Plane use your joystick.
Let me say emphatically: This is a bad idea.
Especially with early, buggy betas, running as root makes it possible for X-Plane to do way more damage to your system than would ever be possible as a normal user. Consider the unlikely—but possible!—scenario where somebody makes a typo in the code that inadvertently tries to delete a system folder. There are two possible outcomes here:
- If you’re running as a normal user: Nothing happens. The operating system refuses to let X-Plane hurt your system.
- If you’re running as root: The operating system silently obeys. You curse X-Plane for breaking your system.
Running X-Plane as root is like giving a blank check to every cashier you buy something from—it’s way more power than they need to do their job, and it’s liable to burn you at some point!
The Right Way™ to let X-Plane use your joystick
As described in the latter half of this old dev blog post, you don’t have to run with sudo. Instead, you can create udev rules to tell your operating system to let normal applications use your joystick. The GUI tool linked at the end of that post makes it even easier.
(Some users found the instructions there confusing; this post on the Org might help.)
Remember that after you create your rules, you can even submit them to your distro to make life easier for other flight simmers!
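For illustration only – the linked posts are the real guide, and the vendor/product IDs below are made up (find yours with lsusb) – a minimal rule might look roughly like this:

```
# /etc/udev/rules.d/99-joystick.rules  (hypothetical file name)
# Made-up IDs below – find your device's with `lsusb`.
SUBSYSTEM=="input", ATTRS{idVendor}=="1234", ATTRS{idProduct}=="5678", MODE="0666"
```

Then reload the rules and replug the device:

$ sudo udevadm control --reload-rules && sudo udevadm trigger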
There’s one hitch: after running with root, your file permissions (especially your prefs) may have gotten screwed up. This can be fixed from the terminal by making your normal user account the owner of your X-Plane directory, like this:
$ sudo chown -R <username>:<username> /path/to/X-Plane/
(So, in my case, my username is tyler, and X-Plane is installed to ~/Documents/X-Plane/, so I’d run $ sudo chown -R tyler:tyler ~/Documents/X-Plane/.)
Now, to those of you who have been running as root… “go, and sin no more”! 😉
TL;DR version: my iMac’s fusion drive “lost its marbles” right before I went on vacation. This has delayed cutting an 11.05 release candidate 2 with a few scenery fixes, but we should get to it next week. In parallel, we’re working furiously to get all of the code locked down for 11.10.
Everything else that follows is really, really, really, really boring. I’m writing it only because some of my co-workers watched this slow motion car crash and tightened up their backup game a bit. If my drive fail can shake you out of complacency, read on.
Basically: my iMac is my main development machine, and the data is backed up and/or duplicated in a bunch of different places: a USB Time Machine archive, a Backblaze cloud backup (both are “full machine”), Dropbox for virtually all of my documents, and my work for Laminar is kept on Laminar’s source control servers. Data loss was never a huge risk here.
Time loss, however, is a real risk! My goal was to lose as little work time as possible to fixing my machines. So my plan was: restore from the Time Machine disk backup, request a cloud backup restore via hard drive, return the hard drive. The total cost would be a few hours of disk copying and less than an hour of my time, and my development machine would be usable for new work while waiting for the cloud backup to arrive.
This has not gone as well as I had hoped! You can learn from my fail here — a few notes.
- Your backup might as well not be a backup if you have not checked that the backup contains the data you think it contains. It turns out that both the cloud backup and time machine backup were missing files! I’m very lucky that they weren’t missing the same files.
- Time Machine sometimes decides not to back stuff up. OS X has a hidden per-file/directory attribute that can exclude a file from backup without showing it in the Time Machine UI! If you check your Time Machine backup and find a folder is missing, you can run tmutil isexcluded <file path> from the terminal to see whether the file has been explicitly excluded. If it has, tmutil removeexclusion <file path> fixes this. (See the short example after this list.)
- Backblaze ships with a bunch of file exclusions too – mostly designed to not archive stuff that isn’t your data. But beware – stuff you care about might not be on the list. (For example, virtual disks in a virtual machine are excluded by default.) I had to add back .iso files to the backup list. Backblaze backups are also not bootable. This is something I can live with, but always read the fine print about what’s in the backup.
- The Backblaze data restore has been very slow – over ten days for less than half a terabyte and it’s still “in progress”.* While they haven’t exceeded the maximum restore time they advertise, it’s slow enough that the delay matters.
- One other note on Backblaze: I saw major performance problems on my iMac while Backblaze was running, even when a backup was not running (since they were scheduled for overnight). I do not think this is necessarily Backblaze’s fault – it may be a problem with CoreStorage (which “runs” the fusion drive) or even a fault with my drives. From what I can tell, cloud backup exacerbated it by putting a lot more file traffic on my system.
- A possible danger if (like me) you keep documents on Dropbox to have them everywhere: when I restored my iMac from Time Machine, I was exposing Dropbox to my data from a week ago. I didn’t wait to see if Dropbox would figure out what had happened; I unlinked my iMac while it was offline after the restore, then re-established Dropbox and let it download my data. Better safe than sorry.
- I have been backing up to portable 2.5″ USB drives because they’re cheap and really convenient, but they have a downside: the mechanisms can easily fail and take your whole backup down. I have five of these drives, and one has failed in a three-year period.
- I’m really unhappy with CoreStorage, to the point where I would not recommend a fusion drive anymore. CoreStorage is an Apple virtual-volume technology (similar to soft-RAID) that makes one small SSD and one large HDD look like a single unified volume, with some of the data “cached” on the SSD for performance. CoreStorage is a lot newer than HFS, so when things go wrong, most disk utilities you would go to just don’t work.
I actually ended up in a state where (after wasting almost an entire day) I could see my data, but only in single-user mode with a read-only file system. I might have been able to copy the data off directly, but I chose to format the drive and restore from backup to save my time and get back to coding X-Plane. My suggestion for developers getting iMacs: get an internal SSD (whatever storage size you can afford) and supplement it with a fast external hard drive over Thunderbolt.
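As a footnote to the Time Machine bullet above, the exclusion check is quick from the terminal (the path here is hypothetical, and removeexclusion may need sudo depending on how the exclusion was set):

```
# Check whether Time Machine has quietly excluded a folder:
$ tmutil isexcluded ~/Documents/MyProject

# If it reports the path as excluded, remove the exclusion:
$ tmutil removeexclusion ~/Documents/MyProject
```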
Going forward, I am replacing the portable backup drives with a Synology NAS RAID device – this gets me high performance, high capacity backup (about 10 TB) with redundant drives. I picked HGST drives because they’ve had a good track record for reliability. With a large network attached storage server, I can have all of my machines backing up in the house all of the time, and have that be the primary way of getting my data back. I’m keeping cloud backup as a last-resort-the-house-burned-down kind of thing.
If my cloud backup hasn’t shipped Monday, I will rebuild the setup I use to cut builds by hand (it’ll take a few hours but it’s doable) and we’ll cut 11.05r2 that way. If the drive comes, I can get the last of my data back and we’ll get to 11.05r2 the easy way. Either way, we’ll get things moving again.
* I opted for a hard drive restore, which should have one day of shipping time, instead of a download; a smaller restore based on download made clear that the transfer speeds would be slower than FedEx for that quantity of data.