Boeing’s overconfidence in its AI system contributed to the 737 Max 8 crashes

The Boeing 737 Max 8 crashes offer insight into the dangers of building a system in which expert humans cannot easily override decisions made by AI.

I hesitated to write about the Ethiopian Airlines and Lion Air crashes. They were a terrible loss of life, and no article will do justice to the people who died or to their loved ones. However, investigations have suggested that this loss of life was preventable. A confluence of factors caused the Boeing 737 Max 8 to crash, among them Boeing’s lack of quality control and the difficulty pilots faced in overriding the plane’s automatic nose-down response, which is triggered by sensor input. This article focuses on the second factor because, to me, it looks like the easiest one to fix. It is also the more robust approach: it is much easier to build in controls that allow a human expert to override a bad AI decision than it is to construct an AI that makes a perfect decision every time. Of course, Boeing should also fix the sensors that are known to malfunction, but I believe that doing that alone may not protect against other contingencies it fails to foresee.

Prior to the Boeing crashes, I did not know of any AI system in the average person’s everyday experience that makes it difficult for a human to override. I probably share the experience of many of you whose taxi or ride-share driver saved time on a journey by disobeying Google Maps. You sit in the car, and Google Maps dictates a certain route, the ‘most efficient route,’ which would take 30 minutes. The driver suggests a different way and disobeys the navigation system. The estimated arrival time is suddenly slashed by 10 minutes. He soon congratulates himself out loud on how he outsmarted the Google engineers and tells you how lucky you are to have an intelligent driver who knows better than Google Maps.

There is plenty of other software that helps us navigate our lives, but I picked Google Maps as my example for one main reason: it has an abundance of data. I am confident that the number of trips completed by people using Google Maps is much higher than the number of trips that Boeing planes have made. If Google Maps makes mistakes, what are the chances of Boeing not making a mistake? Some may say I am comparing apples to oranges here. Google’s mistake cost the traveler 10 minutes; Boeing’s mistake cost 346 passengers their lives. One might also argue that because lives are at stake, Boeing’s software is more robust than Google Maps. These are all valid points, but my main point is that the taxi or ride-share driver, with his local expertise or sometimes just common sense, is in at least some situations superior to the Google Maps algorithm built by a team of talented software engineers, likely with expert input from academics and people with experience in mapping and travel.

Why? The software engineers who built Google Maps wrote it for general situations. Of course, they tried to program in exceptions as much as they could, but the fact remains that Google Maps is a solution written in anticipation of problems. It looks at past problems and tries to guess what future problems will be. It processes the inputs it receives using calculations that were built in the past. The inputs may be new, but the calculations are old. Drivers, in contrast, make their calculations in real time.

Boeing’s 737 Max 8 had a faulty sensor. This faulty ‘angle-of-attack’ sensor made the plane nose-dive by activating anti-stall software based on an incorrect reading that the wings did not have enough lift to keep flying. It was extremely difficult for a pilot to override this. On the Lion Air flight prior to the one that crashed, the pilots prevented a crash by cutting power to the trim motor. Investigations into the Lion Air flight that did crash reveal that the pilots were discussing how to deal with the plane’s reaction and consulting their manuals as it happened. The fact that solving the issue required cutting power to a motor, rather than pressing a button or selecting an option that all pilots (as opposed to just a few) are familiar with, is a travesty. It shows that Boeing did not make it easy for pilots to override the automatic nose dive that a faulty sensor could trigger.

In these scenarios, the human experts knew that what the plane was doing was incorrect and could have flown it to safety had the AI’s automatic response not been fighting their control of the plane.

As we continue to make our AIs more complex, will we build in controls that allow humans to easily override the decisions those AIs make? Or do we assume that the team that designed the AI is omniscient, with the almighty ability to foresee every situation in advance? Do we put experts on board for the sake of show, or do we allow them to easily wrest decision making away from the AI in an emergency?
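To make the idea concrete, here is a minimal sketch, in Python, of a control loop in which a clear pilot action silences the automation before any automated correction is even considered. This is purely illustrative: the names, thresholds, and interface are invented for this article and do not describe Boeing’s MCAS or any real avionics system.

```python
# Illustrative sketch only. The names, thresholds, and structure are invented
# for this article; they do not describe Boeing's MCAS or any real avionics
# interface. The point is the ordering: pilot override is checked before any
# automated correction is computed.

from dataclasses import dataclass


@dataclass
class SensorReading:
    angle_of_attack: float   # degrees, as reported by a single AoA vane
    sensor_healthy: bool     # e.g. agreement with a second, independent sensor


@dataclass
class PilotInput:
    column_force: float      # how hard the pilot is pulling or pushing
    override_switch: bool    # one clearly labelled "disengage automation" control


def automated_trim_command(reading: SensorReading, pilot: PilotInput,
                           stall_aoa: float = 14.0) -> float:
    """Return a nose-down trim increment, or 0.0 if the automation stands down."""
    # 1. Human override comes first: a single switch, or sustained opposing
    #    force on the controls, silences the automation entirely.
    if pilot.override_switch or abs(pilot.column_force) > 25.0:
        return 0.0

    # 2. Never act on sensor data that fails a sanity check.
    if not reading.sensor_healthy:
        return 0.0

    # 3. Only then apply a bounded anti-stall correction.
    if reading.angle_of_attack > stall_aoa:
        return -0.5
    return 0.0


# Example: the sensor falsely reports a stall, but the pilot has hit the
# override switch, so no automatic nose-down command is issued.
if __name__ == "__main__":
    bad_reading = SensorReading(angle_of_attack=22.0, sensor_healthy=True)
    pilot = PilotInput(column_force=0.0, override_switch=True)
    print(automated_trim_command(bad_reading, pilot))  # prints 0.0
```

The ordering is the whole point: the check for pilot input comes before any sensor logic, so a single, well-known action always wins over the automation.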

So far, it does not look like Boeing is moving in the right direction, unless the press is reporting it incorrectly. There are many articles about Boeing making “improvements to the software.” I have not read any about Boeing making it easier for pilots to override that software should it make a mistake.
