AI: Pilot Threat or Bean-Counter Pipe Dream? (Part 2)

Welcome back, Network! In Part 1 of this series we learned some generalities about how Artificial Intelligence (AI) works. Now it’s time to apply that knowledge to some real-world examples.

Practical Applications

Yes, I know you’re thinking that identifying cat pictures is a stupid example. I agree, you’re right.

And yet, it’s a pretty simple task…isn’t it? How many 3-year-olds do you know who can’t correctly identify kitties, doggies, horsies, and a variety of other animals far better than our very expensive AI? Let’s say you want an AI capable of driving a feeding cart around a farm. It has to be capable of identifying each type of barnyard animal and distributing the proper type of food to each one. Can you imagine how many hundreds of thousands of dollars it would cost to develop an AI that isn’t even as good at this task as the farmer’s 3-year-old kid?

I hope realizing how limited AI is for even such a simple task gives you some comfort when it comes to your job security as a pilot. In that light, why not look at an example a little closer to our jobs? Let’s consider the case of autonomous automobiles.

Ready or not, companies like Tesla, Volvo, and others have unleashed cars “capable” of autonomous driving on the world. Although the technology is cool and has potential, many high-profile accidents highlight the limits of AI for this application.

The first problem with driverless cars is that our world is so varied. Remember how we had to use at least 1,000,000 cat pictures to train an AI to recognize one type of object (a cat)? If you want a Tesla capable of identifying and avoiding deer on the road, you need 1,000,000 pictures of deer on the road. What about elk, moose, sheep, or people? What about stoplights, street signs, road construction signs, or even lane marking stripes? Sorry, but that’s millions upon millions more images.
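If you’ve never seen what “training” actually looks like, here’s a tiny sketch (in PyTorch, my own choice purely for illustration, and nothing like any automaker’s real pipeline). The thing to notice is that the classifier only learns the classes you feed it labeled examples for, so every new object you want it to recognize means another mountain of data:

```python
# A minimal sketch of why every new object class multiplies the data requirement:
# a classifier only learns the classes it has labeled examples for, and each class
# needs its own huge pile of images. Purely illustrative; not any real pipeline.
import torch
import torch.nn as nn

CLASSES = ["cat", "deer", "elk", "moose", "stop_sign"]  # add a class, add a dataset
IMAGES_PER_CLASS = 1_000_000                             # the going rate per class

class TinyClassifier(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, num_classes),
        )

    def forward(self, x):
        return self.net(x)

model = TinyClassifier(num_classes=len(CLASSES))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Stand-in batch: in reality this loop grinds through millions of labeled images
# per class, and a class with zero examples simply never gets learned.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, len(CLASSES), (8,))

optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()

print(f"{len(CLASSES)} classes -> roughly {len(CLASSES) * IMAGES_PER_CLASS:,} training images")
```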

Cars use more than just cameras too. If you want a radar set in the front bumper of your car, you’ll need millions of radar return profiles to train it to recognize hazards up ahead. The same goes for LIDAR, ultrasonics for parking, and any other type of sensor you have.

I’m willing to bet that a couple of aviators sitting around a bar could come up with at least 100 novel, dangerous situations they’ve encountered while driving cars. In order to train an AI to handle all of those threats, you’d need datasets containing a total of 100 million scenarios (1,000,000 for each of those 100 threats) for that AI to evaluate. Can you imagine the expense of just building those datasets?

(As a fascinating aside, have you ever wondered why lots of the “prove you’re human” tasks on websites look a lot like tasks an automotive AI might have to accomplish?

It isn’t an accident that you’re identifying the same crosswalks the Model Y has to watch out for. Some brilliant soul is double-dipping by selling your website provider a way to make sure you’re human, while also building an AI training dataset to sell to car manufacturers. Someday, if one of us is very lucky, he or she might get to pilot that brilliant soul’s Global 7500 for a living.)

What if there isn’t a third-party vendor for the data Tesla needs to train its driving AIs? Then Tesla has to gather its own training data. Do you think it’s going to share with competitors like Ford and BMW? Negative, Ghostrider.

Then, what happens if you want to release a new car model with a different set of cameras and a new computer? You have to dust off your old datasets and train a new AI for the new model.

Minions, Penguins, Cowboys, and Space Rangers

Thus far, we’ve been talking about single AIs as if they’re capable of tasks as complex as driving a car. We picture Commander Data at the wheel of a car, driving you to the airport for work. Sorry to burst your bubble, but that’s nowhere near the kind of AI we’re working with right now.

For our purposes, we can think of the overall system required to safely drive a car as a collection of many individual AIs. Each one is assigned a simple monitoring task and makes inputs to the car’s controls based on specific logic (developed through training, not traditional coding).

If this is tough to wrap your brain around, remember that we’ve all seen this in action in kids’ movies. Whether it’s Buzz Lightyear and Woody driving the Pizza Planet delivery truck, a waddle of penguins trying to escape the zoo and get back to Madagascar, or a bunch of babbling Minions trying to drive a car to rescue Gru, these scenarios are actually very good approximations of current AI technology.

Gru’s minions are of limited stature and capability. None is big enough to look out the window, hold the steering wheel, actuate the gas pedal and brakes, and make decisions on how to drive, so they have to work together.

In a theoretically more effective version of this crazy scenario, one minion looks out the left window and one looks out the right. If the car veers too closely to the line on the edge of the road, the minion on that side of the car starts screaming, “Baaaaaaaaaah!” When the third minion (at the steering wheel) hears Right Window Minion scream, he steers the wheel left. That’s fine until Left Window Minion starts screaming and the wheel has to go back to the right. Steering Wheel Minion can only guess how far to steer the wheel because he isn’t tall enough to see out any window.

There are at least two minions looking straight ahead. Number 4 is unusually capable. He can recognize street signs, stop lights, and crosswalks. Number 5 is in charge of obstacles in the road, including other cars. If one of these two minions starts screaming, minion #6 will actuate the brakes. Otherwise, minion #7 is doing an Irish river dance on the gas pedal.

Each of these minions is so busy on its own task that it takes an 8th minion to navigate. He has a map and is screaming out turn-by-turn directions.

The result is a montage of screaming and vehicular mayhem. It’s fun to watch when your kids are laughing at it (or you’ve had a few margaritas). However, it’s a lot less funny when you realize that this is essentially how Tesla’s Autopilot works.
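If you’d rather see the minion arrangement written down than narrated, here’s a tongue-in-cheek sketch. Every name, alarm, and threshold here is invented; the point is just how crude the division of labor is:

```python
# A tongue-in-cheek sketch of the "car full of minions" architecture: several narrow
# detectors, each trained for one task, feeding one very simple arbiter.
# Hypothetical names and logic only; this is not how any real autopilot is coded.
from dataclasses import dataclass

@dataclass
class MinionReports:
    left_lane_alarm: bool      # Left Window Minion screaming
    right_lane_alarm: bool     # Right Window Minion screaming
    sign_or_light_alarm: bool  # Minion #4: stop sign or red light ahead
    obstacle_alarm: bool       # Minion #5: something in the road

def arbiter(reports: MinionReports) -> dict:
    """Minions #6 and #7 (brakes and gas) plus the blind Steering Wheel Minion."""
    commands = {"steer": 0.0, "throttle": 1.0, "brake": 0.0}

    # Steering Wheel Minion can't see out; he only reacts to screaming.
    if reports.right_lane_alarm:
        commands["steer"] = -0.1   # nudge left
    elif reports.left_lane_alarm:
        commands["steer"] = +0.1   # nudge right

    # Any forward-looking scream means brakes; otherwise Minion #7 rides the gas.
    if reports.sign_or_light_alarm or reports.obstacle_alarm:
        commands["throttle"] = 0.0
        commands["brake"] = 1.0
    return commands

print(arbiter(MinionReports(False, True, False, False)))  # drifting right: steer left
print(arbiter(MinionReports(False, False, False, True)))  # obstacle ahead: brakes
```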

Sure, automotive bloggers who gush about new features love to list how many cameras, LIDAR sets, radar arrays, and ultrasonic sensors a new car model has. However, we now realize that each of these gadget arrays is linked to an individual AI. That AI was painstakingly trained for a very narrow range of tasks.

When driving autonomously, the navigation AI is essentially reading turn-by-turn directions to the steering AI. That steering AI uses GPS data to try to stay on the road, but it absolutely must have input from the AIs watching the cameras on either side of the car to stay centered in its lane.

Staying centered is entirely a function of being able to see clearly defined lines painted on the road. These AIs are a little more refined than minions with only two settings: everything’s fine or “Baaaaaaaaaah!” A car like a Tesla can measure the number of centimeters between the edge of the lane and the edge of the tire, and the steering AI uses small movements to try to keep those distances equal.

However, what happens if heavy rain or snow obscures those lines? The lane-keeping AIs are simply unable to provide any usable inputs. What happens when you try to drive somewhere other than a highly manicured roadway in Silicon Valley? I’ve lived most of my life in places where the edge of many roads is just as likely to be a wavy transition from asphalt to dirt and back as it is to be a painted line. The lane-keeping AIs are similarly useless on that type of road.
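To make that concrete, here’s a minimal sketch of the “keep the gaps equal” idea as a simple proportional controller, including the part where it just gives up when it can’t see a line. The numbers and gain are invented for illustration:

```python
# A minimal sketch of lane centering as a proportional controller, and of why it
# goes dumb the moment the painted lines disappear. Illustrative values only.
from typing import Optional

def lane_centering_steer(left_gap_cm: Optional[float],
                         right_gap_cm: Optional[float],
                         gain: float = 0.01) -> Optional[float]:
    """Return a small steering correction that tries to equalize the two gaps.

    left_gap_cm / right_gap_cm: distance from each lane line to the nearest tire,
    or None when the camera AI can't find a line (rain, snow, asphalt fading to dirt).
    """
    if left_gap_cm is None or right_gap_cm is None:
        return None  # no usable input; somebody else has to drive

    # Positive error means we're closer to the right line, so steer left (negative).
    error = left_gap_cm - right_gap_cm
    return -gain * error

print(lane_centering_steer(80.0, 40.0))  # hugging the right line: small left correction
print(lane_centering_steer(60.0, 60.0))  # centered: no correction needed
print(lane_centering_steer(None, 55.0))  # line obscured: the controller gives up
```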

That’s not the only situation where our cobbled-together collections of AIs are less than effective though.

Real-World Gotcha

A couple of nights ago, my family and I were driving home when we got stuck in traffic on a section of road with high curbs on each side. Several police cars, fire trucks, and ambulances worked their way ahead of us. (All of us drivers had to move halfway into the other lane or onto the very small shoulder to give the emergency vehicles room to pass. Do you want to be the engineer who has to train an AI to recognize that situation and take the proper action for even this relatively simple task?)

It turns out that a pair of cars had done their best to demolish each other at an intersection ahead of us. Lives were hanging in the balance and the debris field was impassable. The sheriffs blocked an intersection about a quarter mile behind us, and after about 20 minutes of sitting still, a deputy walked down the line of cars from back to front telling us to turn around and drive the wrong way back to the previous intersection. We crossed that intersection diagonally, during a red light, with a deputy directing us, and took the back roads home.

Can you imagine how much training would be required to teach an AI to handle that situation? Can you imagine how much it’d cost? How could the deputy even communicate his intent to a driverless car? This situation is rare enough that it probably isn’t even worth the cost of training the AI to handle it. Likely, a driverless car would have just stayed in position, blocking the other cars trying to get turned around. It might have been able to continue driving forward after the intersection ahead cleared several hours later, but it’s also likely the Sheriff’s Office would have just towed it. Is either of those outcomes acceptable to your spouse or significant other?

This is a relatively simple scenario for handling an unusual situation at 0' AGL and 0 KGS. You can see where we’re going, right? If training an AI to handle that is beyond our capability or price tolerance, how can we hope to train an AI to handle similarly unexpected events at FL380 and 540 KGS?

Edge Cases

On a driverless car, another set of (very cool) AIs uses radar, LIDAR, IR, or other sensors to monitor the environment all around your car. If you get too close to (or are even closing too quickly on) preceding traffic, the traffic and obstacle AIs tell the speed-regulating AI to slow down. This works pretty well…most of the time. “Most of the time” is plenty good if it’s your spouse and kids riding with you, right?
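For the curious, that “too close or closing too quickly” logic boils down to something like the sketch below. The thresholds are made up, but the time-to-collision idea is the heart of it:

```python
# A minimal sketch of the "slow down if you're closing too fast" decision the
# traffic-monitoring AI hands to the speed-regulating AI. Thresholds are invented.
def following_command(gap_m: float, closing_speed_mps: float,
                      min_gap_m: float = 30.0, min_ttc_s: float = 3.0) -> str:
    """Decide whether to brake based on gap and time-to-collision (TTC)."""
    if gap_m < min_gap_m:
        return "brake"                   # simply too close
    if closing_speed_mps > 0:
        ttc = gap_m / closing_speed_mps  # seconds until we hit the car ahead
        if ttc < min_ttc_s:
            return "brake"               # closing too quickly
    return "maintain_speed"

print(following_command(gap_m=120.0, closing_speed_mps=5.0))   # ~24 s to impact: fine
print(following_command(gap_m=40.0,  closing_speed_mps=20.0))  # 2 s to impact: brake
```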

A fascinating programming/AI training conundrum is what to do when a car sees a stationary object directly in front of it. If you’ve spent thousands of dollars training your AI to command a stop whenever there’s an obstacle ahead, how do you teach it to handle an obstacle that isn’t actually in the way?

Picture a road that curves alongside a concrete wall or barrier. There’s nothing wrong with this road. The GPS navigation AI will tell the speed-regulating and steering AIs to keep going. However, the obstacle-sensing AIs will detect a wall ahead and should normally command a stop…right?

To successfully navigate this road, you have to train your AIs to recognize situations where there is a wall along the side of the road, especially if the road curves in the same direction as the wall. In this situation, you have to tell the AI to keep driving, despite the obstacle directly ahead.
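Written out as decision logic (hypothetical, and far cruder than anything shipping in a real car), the trap looks something like this:

```python
# Hypothetical decision logic for the stationary-obstacle conundrum. The exception
# added so the car can follow a road that curves along a wall is the same exception
# a parked fire truck can slip through. Illustrative only.
def obstacle_decision(stationary_return: bool,
                      parallels_road_curve: bool,
                      matches_known_vehicle: bool) -> str:
    if not stationary_return:
        return "keep_driving"
    if matches_known_vehicle:
        return "stop"            # the AI recognizes it: the easy case
    if parallels_road_curve:
        return "keep_driving"    # "just the wall beside the curve"...we hope
    return "stop"

# A barrier wall along a curving road: correctly ignored, the car keeps going.
print(obstacle_decision(True, parallels_road_curve=True, matches_known_vehicle=False))

# A fire truck parked at an angle the AI has never seen can produce the exact same
# inputs: stationary, roughly parallel to the curve, not recognized as a vehicle.
# Same inputs, same answer: "keep_driving". That's the crash.
print(obstacle_decision(True, parallels_road_curve=True, matches_known_vehicle=False))
```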

You might think that you’ve solved this problem, but then a Tesla Model S plows directly into a parked fire truck while the driver is busy eating a bagel.

(This photo is from the Car and Driver article about this crash linked above.)

What was the problem here? Your logic had to allow the vehicle to continue driving with obstacles ahead under certain conditions. Unfortunately, the parked fire truck didn’t match anything else any of the AIs had ever seen. The best they could figure, it was a wall along the side of the road, somewhat like the curving-wall example we just discussed. Remember earlier when we said it was stellar that our cat-recognizing AI could get the right answer 94% of the time? That’s just not good enough for real-world applications like driving cars or flying jets.

Want to train an AI to avoid this specific situation? You’re going to need at least 1,000,000 pictures of fire trucks parked on the side of a road with other traffic. Realistically, you need 1,000,000 scenarios that include video, radar data, LIDAR profiles, and more over a period of several seconds covering a total distance of several hundred feet. How do you even hope to come up with that dataset?

AIs and Surprises

A related (and worse) situation highlights an even more troublesome shortcoming of AI. Even with all their fancy sensors, these AIs are only capable of seeing what’s within range of those sensors and interpreting within the very narrow area of their training. More than one driverless car has crashed when the preceding car swerved out of its lane to avoid an obstacle (like a fire truck). Since the driverless car couldn’t see through preceding traffic, it didn’t see the obstacle until that preceding car got out of the way. Even at computer speeds, these cars were traveling too quickly to apply brakes and/or swerve away themselves. They hit the obstacle the other car had swerved to avoid.

Humans are far better in this type of situation for two reasons. First, our sensors are more capable and better integrated. We can see brake lights ahead of the car(s) in front of us. We might even be able to see and interpret the shape of an upside down car on the road through the windshield of a preceding car. We know to watch for pedestrians jumping into the road near busy places.

Second, we’re capable of making decisions in novel situations without having all the data. In this example, the AIs driving the car only know that the preceding traffic moved out of their lane. That happens all the time, and the AI’s standard procedure is to increase speed to the legal limit until it encounters another vehicle, at which point it maintains adequate following distance. The AIs have no way to know they should even be considering whether to copy the preceding vehicle’s sudden swerve. Humans can instinctively tell when the driver ahead does something sudden and unusual. We can react quickly, following suit if necessary.
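Some rough, back-of-the-envelope numbers (all assumed, all rounded) show why raw reaction speed can’t save the car in this scenario:

```python
# Back-of-the-envelope arithmetic (assumed, round numbers) for the swerve-reveal
# scenario: by the time the lead car gets out of the way, the physics may already
# be settled, no matter how fast the computer reacts.
speed_mps = 29.0          # ~65 mph
reaction_s = 0.5          # generous: sensing + compute + brake actuation
decel_mps2 = 7.0          # hard braking on dry pavement

reaction_distance = speed_mps * reaction_s
braking_distance = speed_mps ** 2 / (2 * decel_mps2)
stopping_distance = reaction_distance + braking_distance

gap_when_revealed = 40.0  # lead car swerves out roughly 40 m short of the obstacle

print(f"Needed to stop: {stopping_distance:.0f} m")  # ~75 m
print(f"Available gap:  {gap_when_revealed:.0f} m")  # 40 m -> impact
```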

Do you want an AI to be able to handle this kind of situation? You’re going to have to build a dataset with at least 1,000,000 examples of similar situations that may or may not require drastic intervention, then use that dataset to train yet another AI.

A car full of minions just doesn’t cut it, but it’s the best we have for now.

So What About Last Year, Siri?

You may be feeling a tad skeptical about my assertion that a company as technologically advanced as Tesla has nothing better than the AI equivalent of cars full of minions driving people around our country. Luckily, I have a practical example that you can try out right now. We’re going to run an experiment on AI capability using technology from companies even more advanced (and profitable) than Tesla. Namely: Apple, Google, Amazon, and Microsoft.

Go ahead and ask your current device’s personal AI assistant a question. Siri, Google, Alexa, or Cortana will probably give you a somewhat useful answer. I just played some music on Spotify and asked, “Hey Google, what’s this song?”

Google dutifully responded by showing me the answer on my screen.

(Oddly, she didn’t respond verbally like she usually does.)

Next, ask your AI a simple, related follow-up question. I tried, “What album is that song from?”

Poor Google responded to my request by prompting me to “Play, sing, or hum a song.” I’d paused the music though, so she concluded, “Sorry, I wasn’t able to recognize this song.”

The problem is that although Google’s fancy (and expensive) AI has been trained to answer queries one question deep, it hasn’t been taught to follow a conversation and interpret further requests in the context of something that just happened.
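Here’s a toy sketch of the difference between answering one question at a time and actually remembering the previous answer. The handler and the song are made up; the failure mode is the real part:

```python
# A minimal sketch of a one-shot assistant versus one that keeps conversational
# context. Hypothetical handler and answers; nothing here is any vendor's real code.
context = {}  # what a context-aware assistant would remember between turns

def handle(query: str) -> str:
    if query == "what's this song?":
        answer = "Thunderstruck by AC/DC"      # pretend a recognition AI found this
        context["last_song"] = answer
        return answer
    if query == "what album is that song from?":
        song = context.get("last_song")
        if song is None:
            return "Play, sing, or hum a song."  # the one-shot failure mode
        return f"Looking up the album for {song}..."
    return "Here's what I found on the web:"     # the universal shrug

print(handle("what album is that song from?"))  # without context: useless prompt
print(handle("what's this song?"))
print(handle("what album is that song from?"))  # with context: a useful follow-up
```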

We can try a few more requests to illustrate this:

“Hey Google, what’s the weather?”

Currently in Lithia, it’s 23 degrees and partly cloudy.

(It’s a beautiful day. I asked Google to help me train my brain to think in Celsius.)

“Hey Google, how does that compare to this date last year?”

Here’s what I found on the web:

Useless. Some of our mobile device AIs have been trained to respond to variations on this scenario. I tried again, this time following up with, “Do I need to bring an umbrella?”

My AI responded, “No, it’s not raining in Lithia right now. It’s 23 degrees and partly cloudy.”

That’s sort of useful, but I’m headed down to Wimauma airport (FD77). I don’t need an AI to tell me if I need an umbrella right this moment. I need it to look at the weather forecast and tell me if I’ll need an umbrella at any point while I’m down there. So I started again with “What’s the weather,” followed by, “Will I need an umbrella 3 hours from now?”

Google’s answer was, predictably, useless.

Eventually, we’ll get there, right? Lots of other science fiction from Star Trek has become reality. It’s going to take a lot of work…a lot of training to get there though. Remember how much time, effort, and money it takes to teach an AI to recognize cat pictures? For each task we want to add to Alexa’s skill set, we have to create a gigantic dataset and train the AI until it responds correctly.

Before you can even start the process of building that dataset, you have to figure out what kinds of scenarios you want to train your AI to handle. Then you have to figure out what that dataset needs to look like in the first place. You may think your idea is great, but what if, after thousands of hours and tens of thousands of dollars, it fails to adequately train your AI? Now all of that effort is wasted and you have to start all over again.

This is all for a very simple set of tasks like going from “What’s this song?” to “Who’s the lead singer of that band?” Let’s consider some of the training that has to be successful before robots can take over our pilot jobs.

“I’m not painting anything.”

Overall, I’m very impressed with the capabilities and professionalism of Air Traffic Control services in much of the world. (Except in Djibouti. They’re absolutely terrible.) However, one frequent situation drives me crazy. I’m confident that every turbine pilot who cruises the flight levels can identify with the following:

Pilot Flying: “Hey dude, let’s ask for a deviation right of course around those clouds.”

Pilot Monitoring: “Good call, those look nasty. I have no desire to fly through that.” <break> “Indy Center, Airline 1234 request deviations right of course for weather.”

Indy Center: “Airline 1234, I’m not painting anything.”

I know you’ve requested something like this from Center only to be told, “I’m not painting anything.”

Thankfully, I still haven’t ever responded like I want to: “Oh, thank goodness, Center! I’m so glad that you folks sitting on ergonomically-designed office chairs in a windowless room at groundspeed zero aren’t painting anything! My decades of experience tell me the clouds I see ahead could make my ride uncomfortable, at best, or kill me and everyone onboard my aircraft, at worst. However, since you’re not painting anything out there I’ll just continue blithely on course. I’m sure my wife and kids will thank you tomorrow.”

Usually, we respond by saying, “Okay. We need deviations anyway.”

Now that I’m an airline captain, I would have no qualms piping in with the following if we got any more push-back from Center: “Copy all, Center. There is weather and we’re deviating right. If your watch supervisor would like a phone call, I’d be happy to chat. Just mark the tapes and give me your employee number for reference.”

Sadly, other crews haven’t emerged successfully from similar situations. (See: Giant 3591, Sriwijaya Air 182, and countless others.)

This kind of situation is difficult enough for a human crew. It’s simply beyond the capabilities of any AI system for the foreseeable future.

First off, the primary means for seeing thunderstorms is radar. When ATC tells me they aren’t painting anything, I’m usually unsurprised because I’m not painting it with my onboard radar system either. I regularly encounter storms that represent hazards, yet don’t show up on radar.

An autonomous aircraft will need additional sensors (LIDAR, electro-optical and IR cameras, etc.) to see those hazards. Presumably, one or more of those systems could identify a storm not showing up on radar. However, in order to train an AI to run that sensor, you need another enormous dataset. The internet is full of cat pictures, so building that dataset is simple. How many pictures exist of thunderstorms or other hazardous weather systems that didn’t show up on radar? How much work (and cost) will it take to build that dataset?

Even when (not if) someone manages to put that dataset together, remember the MIT study that says AIs tend to perform worse in real-world situations than they do in a lab. Also remember that each combination of sensors and computers (each type of aircraft) will need its AIs trained separately.

This is just one of the many, many tasks an autonomous airliner will need to be capable of handling. Once we’ve built the proper dataset, it’ll be relatively easy to run jets through it. However, things rapidly get much more complex. Not only does an AI on the jet need to identify a hazard, it has to figure out a solution that gets it to the intended destination almost every time, without flying so far out of the way that it runs out of fuel. It has to do this while integrating into a National Airspace System full of other aircraft and obeying instructions from ATC…unless safety of flight dictates otherwise.

The training for that part of the problem is far more complex than “Is this a cat?” That training will require ways for us humans to evaluate the AI’s response to these problems…for millions upon millions of test cases.

Please understand that I’m not saying this is an impossible task. I honestly believe it will happen someday. However, the sum of all the tasks and problems an aviation AI system will have to handle is so vastly complex that I think we’re decades away from being able to train AI to handle it.

The (Digital) Cloud

Another consideration we haven’t even addressed yet is the fact that it takes a lot of computing power to run an AI. We’re talking rooms full of servers with cutting-edge graphics processing units (GPUs) churning away full-time.

Whether we’re talking about aircraft or just mobile phones, it’s spatially and economically impractical to jam that much computing power into the device itself. As a result, most of the AIs you interact with right now aren’t actually running on your phone. Instead, your phone records an audio file of your request, transmits it to a data center in Mountain View, CA, lets the AI do its thing, and then receives a response back through the internet to your device.
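In rough code, that round trip looks something like the sketch below. The endpoint is invented for illustration; every assistant speaks its own protocol, but the shape of the problem is the same:

```python
# A minimal sketch of the round trip your voice request actually makes. The endpoint
# here is made up for illustration; it is not any vendor's real API.
import requests  # pip install requests

ASSISTANT_ENDPOINT = "https://assistant.example.com/v1/query"  # hypothetical

def ask_assistant(audio_bytes: bytes, timeout_s: float = 5.0) -> str:
    """Ship the recorded audio to the data center and wait for the answer."""
    try:
        response = requests.post(
            ASSISTANT_ENDPOINT,
            data=audio_bytes,
            headers={"Content-Type": "audio/wav"},
            timeout=timeout_s,
        )
        response.raise_for_status()
        return response.text               # the AI's answer, computed far away
    except requests.RequestException:
        # No connectivity (the back of an airliner, or a tent in Kandahar): the
        # on-device "AI" has almost nothing it can do on its own.
        return "Sorry, I can't reach the server right now."

print(ask_assistant(b"\x00" * 16))  # fake audio; with no real server this hits the except branch
```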

That’s why you probably experience occasional delays when you talk to Alexa or Siri. It’s also why these devices are completely useless when you have poor or zero internet access (like when you’re riding in the back of an airliner, or deployed to Kandahar).

One of the reasons it isn’t realistic to have AI-driven aircraft right now is that we can’t guarantee internet connectivity between the device (the jet) and the ground-based AI. We’ve had this kind of problem before. During the Apollo program, computers less powerful than your iPhone filled entire buildings. Eventually, we’ll be able to run an AI on a computer system small and cheap enough to carry onboard an airliner. However, that day is far in our future.

Even with unlimited training datasets and better sensors, there’s no hope of AI flying an airliner until we can end our reliance on cloud computing for AI applications.


Have I convinced you yet that our jobs are safe from the threat of an AI takeover? If not, please give me one more chance. In the conclusion to this series we’ll look at some of the other implications that make pilotless airliners something that will be so difficult (and expensive) to implement that I think you and I are just fine.

< Part 1 | Part 2 | Part 3 >

(This post’s feature image is by Roberto Nickson on Unsplash.)
