Let’s Stop Calling it ‘Artificial’ Intelligence

The fatal flaw in AI isn’t the technology or its intelligence; it’s something far more difficult to change.


Air France flight 447 wasn’t doomed because its technology couldn’t handle the weather but because its pilots weren’t able to weather the storm.

The challenge of artificial intelligence isn’t so much the technology as it is our own attitude toward machines and intelligence. While researching my latest book, Revealing The Invisible: How Our Hidden Behaviors Are Becoming The Most Valuable Commodity Of The 21st Century, I came across the poignant story of the doomed flight of Air France 447, which teaches a great deal about the deeply rooted cultural attitudes, perhaps even arrogance, that stand in the way of AI.

There was nothing especially challenging about the Rio-to-Paris route AF 447 was flying on June 1, 2009, except for what appeared to be typical storm cells on the plane’s onboard radar. Earlier flights making the same crossing had chosen to route themselves around those cells. The pilots of AF 447 didn’t know that.

By the time they realized the extent of the storm, there was simply no way to go around it. Still, modern jet aircraft are built to handle nearly all weather conditions. Flying through a thunderstorm at thirty-five thousand feet isn’t pleasant, but it’s well within the abilities of modern planes and experienced pilots. However, while AF 447 was in the midst of the towering cumulonimbus clouds, it encountered ice that clogged its pitot tubes, the sensors on the exterior of an aircraft that relay airspeed to the computer and the pilots. Airspeed is among the most critical pieces of flight information; without it, neither the autopilot nor the human pilot can correctly fly the plane.


Stall…Stall…Stall…

Although the plane, an Airbus A330, was one of the most sophisticated computerized aircraft of its time, without airspeed input its autopilot shut down and turned control over to the co-pilots, who were equally unable to determine the plane’s speed. As a result, one of the co-pilots did the last thing a pilot should do in any plane attempting to stay aloft at an uncertain speed: in an attempt to gain altitude, he pulled back on the joystick and pitched the plane’s nose up just enough to put it into a stall, a condition in which the wings lose lift until the plane literally falls out of the sky.

After six harrowing and utterly convoluted minutes, the plane plunged into the Atlantic, killing all two hundred twenty-eight passengers and crew members. What’s tragic is that two minutes after the autopilot disengaged, the pitot tubes began functioning again. If at any time during the next four minutes the pilots had handed control back to the autopilot, the plane would have effortlessly continued on course.

“Perhaps the most telling aspect of the tragedy was that the pilots were having anything but a conversation with their computer; they were outright ignoring its warnings…”

The details of AF 447 are terrifying in the degree to which they show how human error can be magnified in a time of crisis. But what makes the story of AF 447 especially hard to make sense of is that an Airbus operating under what’s called “normal law,” in which the computer prevents the plane from flying outside its flight envelope, is impossible to stall. No matter how hard the pilot pulls back on the joystick, the computer will compensate and not allow a stall.

However, in this case, with the autopilot disengaged, the plane switched over to “alternate law,” meaning that the pilot had full manual control, which could not be overridden by the computer. Unfortunately, it gets much worse.

Although the computer was disengaged from controlling the plane, it still knew that the plane was rapidly losing altitude, and it sounded a stall alarm: a mechanical voice warning that says “STALL,” followed by an impossible-to-ignore, high-pitched tone. It did this no fewer than seventy-five times as the plane descended from thirty-seven thousand feet into the Atlantic.

There’s much to learn from AF 447 about how humans interact with technology, the way in which we perceive and trust it, and by extension how AI should work alongside humans.

Perhaps the most telling aspect of the tragedy was that the pilots were having anything but a conversation with their computer; they were outright ignoring its warnings. Therein lies much of the problem in how we perceive technology and AI.

As humans, we naturally respond to a crisis we haven’t experienced before by drawing on past knowledge and intuition. But, as we just saw, intuition can lead to disastrous consequences. The problem is that even when we are shown irrefutable proof that our intuition is wrong, we will still stick to it, because we trust it; after all, it is “our” intuition. Nothing proves this point more dramatically than ignoring seventy-five stall warnings blaring at you for nearly six minutes.

The Role of AI

To be clear, the autopilot on an Airbus A330 is very sophisticated, but it’s not AI. Here, however, is where AI could play a pivotal role.

What AI is exceptional at is running simulations involving myriad factors that a human could not possibly calculate in real time, especially under conditions as stressful as the flight deck of flight 447. While the pilots were trying to stabilize the situation, in this case by doing what was intuitive but entirely incorrect, the AI could have been running thousands, if not millions, of scenario simulations to determine which combination of actions would be most likely to succeed.

What if we could have frozen time and allowed the pilots of AF 447 to run through thousands of flight simulations of those exact conditions? Do you think the likelihood of a positive outcome might have increased significantly? That’s precisely what onboard AI could do, and it would have knowledge not only of this flight but of all flights in a similar set of conditions.
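
To make that idea concrete, here is a minimal sketch of the kind of scenario search such a system might run. Everything in it is a hypothetical simplification: the toy flight model, the numbers, and the function names are illustrative assumptions, not how an A330’s computers (or any real avionics) actually work.

```python
import random

# A deliberately toy longitudinal flight model. None of these numbers
# reflect a real aircraft; they exist only to show the shape of the idea:
# score many candidate actions across many simulated scenarios and pick
# the action that does best on average.

STALL_PITCH_DEG = 15.0                      # assumed critical pitch angle
MIN_FLYING_SPEED_KTS = 120.0                # assumed minimum flying speed
CANDIDATE_PITCH_DEG = [-10, -5, 0, 5, 10]   # candidate nose attitudes

def simulate(pitch_deg, airspeed_kts, seconds=60):
    """Crudely propagate one scenario for a fixed pitch command.

    Returns the net altitude change in feet, with a heavy penalty if
    the scenario ends in a stall.
    """
    altitude_change = 0.0
    speed = airspeed_kts
    for _ in range(seconds):
        if pitch_deg > STALL_PITCH_DEG or speed < MIN_FLYING_SPEED_KTS:
            return altitude_change - 10_000.0   # stalled: heavy penalty
        speed -= pitch_deg * 1.0      # nose up bleeds speed; nose down gains it
        altitude_change += pitch_deg * 15.0     # nose up climbs, roughly
    return altitude_change

def best_pitch(n_scenarios=10_000):
    """Monte Carlo over the unknown airspeed (the pitot tubes are iced,
    so true airspeed could be almost anything plausible) and return the
    pitch attitude with the best average outcome."""
    scores = {pitch: 0.0 for pitch in CANDIDATE_PITCH_DEG}
    for _ in range(n_scenarios):
        airspeed = random.uniform(130.0, 280.0)  # plausible unknown speeds
        for pitch in CANDIDATE_PITCH_DEG:
            scores[pitch] += simulate(pitch, airspeed)
    return max(scores, key=scores.get)

if __name__ == "__main__":
    # With these toy dynamics the search favors holding the nose level
    # rather than pulling up, the intuitive but fatal choice on AF 447.
    print(f"Best average pitch attitude: {best_pitch()} degrees")
```

The point isn’t the arithmetic; it’s that a machine can exhaustively test actions against uncertainty in milliseconds, while a frightened human can test exactly one, in real time, with everything at stake.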

“…we have been conditioned to not only see technology as distinct from humans but to relentlessly avoid anthropomorphizing technology.”

Then again, whether we’re talking about an airplane or an automobile, there’s still one fatal flaw in achieving this ideal collaboration between human and computer. Recall that our pilots totally ignored the computer’s stall warnings. In fact, the transcript of the cockpit voice recorder, recovered from the airplane’s black box, contains not a single verbal acknowledgment by the pilots of the numerous stall warnings. And this, I believe, is where the single greatest obstacle to AI and autonomous devices resides: we do not relate to or regard technology as a collaborator.

Imagine if one of the pilots had yelled out the stall warning seventy-five times. Do you think that might have been heeded? A big part of the problem is that we have been conditioned to not only see technology as distinct from humans but to relentlessly avoid anthropomorphizing technology. That may have made good sense when computers were nothing more than glorified calculators, but it actually works against us as AI evolves and transforms computers and devices into entities capable of making complex decisions that are as good as, if not better than, those of their human counterparts.

What if the cockpit stall warning had said, “Pierre-Cédric (the co-pilot primarily responsible for pitching the plane’s nose upward for the entire six minutes) you are ignoring my warning that the plane is in a stall and losing altitude. Please take immediate corrective action by pitching the nose down or give me control so that I can rectify the situation.”
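
As a thought experiment only, here is what the logic behind that sort of directed, escalating warning might look like. The escalation threshold, the wording, and the idea of addressing a pilot by name are all hypothetical assumptions; no certified avionics system behaves this way.

```python
# Purely illustrative sketch of an escalating, "conversational" stall
# warning, as imagined in the article. All thresholds and phrasing are
# hypothetical.

def stall_warning(pilot_name: str, ignored_count: int) -> str:
    """Return the next warning, escalating after repeated ignored alerts."""
    if ignored_count < 3:
        return "STALL"   # the conventional, impersonal warning
    # After several ignored warnings, address the pilot directly and
    # propose a concrete action or a handover of control.
    return (f"{pilot_name}, you are ignoring my warning that the plane "
            "is in a stall and losing altitude. Pitch the nose down, "
            "or give me control so that I can rectify the situation.")

# Example: the 75th consecutive unacknowledged warning.
print(stall_warning("Pierre-Cédric", ignored_count=75))
```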

Most people reading that last sentence will shudder at the thought of AI taking on the behavior, intonations, mannerisms, and role of a human. It’s frightening, creepy, unnatural, and not the place of a computer. And that attitude, not the technology, is precisely what undermines our ability to coexist with AI: we don’t want it to act like a human.

We have, or soon will have, technology that can make decisions faster and better than a human being within a narrow set of parameters. But it can do that only if we allow it.

An autonomous vehicle being driven at full throttle toward a crowded bus stop, whether intentionally or not, has the ability to stop itself. If that ability exists, would allowing the car to proceed unabated be justified by any ethical or moral standard? We don’t allow drivers to remove or disable airbags, which can themselves be lethal in rare circumstances. The reason we allow driver override today, even in situations where it is clear that the outcome will be worse, comes down to two things: we aren’t sure how to shift legal liability from the driver to the AV, and we fear the ability of a technology, even in very narrow cases, to make a better decision than a human.

There’s Nothing Artificial About It

It used to be that we could draw hard lines between what was the human’s responsibility and what was the machine’s. There was no doubt about who was in charge, never a contest or conflict between the two, because there was no overlap in the area of making judgment-based decisions; that was always the role of the human. The machine simply provided information and followed instructions. We were the ultimate authority because we had the ultimate cognitive upper hand. Machines were merely extensions of our physical selves. Now they are intelligent extensions of our digital selves.

Ultimately, the biggest impediment to AI is not the technology but the very term we use to describe it: artificial. It’s not artificial; it is an extension of our own intelligence. Perhaps it’s time to start calling it what it is: Augmented Intelligence, the intelligence we need in order to deal with and survive the increasing complexity of the machines and the world we inhabit. In that sense, it’s no different from the long history of tools we have created to extend human abilities to cope and evolve in an ever-changing, ever more challenging world.

If this new model of augmented intelligence and shared responsibility between man and machine terrifies you and threatens what you consider to be the uniquely human capability of decision-making, I’d suggest you hold on tight, because this storm is going to be much worse than you thought.


This article was originally published on Inc.

Tom Koulopoulos is the author of 10 books and founder of the Delphi Group, a 25-year-old Boston-based think tank and a past Inc. 500 company that focuses on innovation and the future of business. He tweets from @tkspeaks.
