Not all types of artificial intelligence are created equal
Among the many types of artificial intelligence, unsupervised machine learning will test our capacity to trust machines.
The easiest place to begin our discussion of AI is by exploring the current world of robots. Today, on the trivial side, we have the Roomba, that feline nemesis, sweeping our floors; Domino's is testing a pizza delivery robot; and a new security robot is even being tested to patrol the streets of San Francisco. On the more practical side, we also have cars that park themselves in tight spots, stop themselves faster than humans can react, correct our wayward steering should we drift out of our lane and, very soon, will drive themselves. Automobiles, of the self-driving and traditional variety, are built using robots; Amazon orders are filled using robots; and bombs are defused using robots. The list seems endless and ever-expanding. But not every robot uses artificial intelligence, and not all artificial intelligence is very smart.
So, let's define the types of artificial intelligence. True artificial intelligence is more correctly known as machine learning. Most of us believe that AI comes from a team of really smart developers who tell a machine how to behave in one specific situation, hundreds of situations, thousands of situations and so on -- and, indeed, that's pretty much what is happening today with autonomous vehicles. Situation after situation is fed into the computer driving the car, and solutions to those situations are programmed in. Over time, more situations will occur and be solved, with more solutions being fed in. After all, there should be a finite number of driving situations. But, in this example, is the machine actually learning on its own, or is a team of human developers doing the learning and then providing the knowledge back to the machine? The latter, I believe. This type of artificial intelligence is what powers most robots in use today: a finite number of situations with finite, preprogrammed responses. While this may seem like magic to most of us, it is NOT machine learning. The people are doing the learning and then feeding new knowledge into the machine.
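To make that distinction concrete, here is a minimal, purely illustrative sketch in Python of the "finite situations, preprogrammed responses" approach. The situation names are invented for illustration; the point is that every rule is authored by a human, and the program itself learns nothing.

```python
# A hand-built lookup of driving situations and responses.
# Every entry here was written by a human developer; the program
# never adds to or revises this table on its own.
PREPROGRAMMED_RESPONSES = {
    "pedestrian_in_crosswalk": "brake_to_full_stop",
    "lane_drift_detected": "nudge_steering_back_to_center",
    "stopped_traffic_ahead": "slow_and_maintain_gap",
}

def respond(situation: str) -> str:
    """Return the preprogrammed response, or hand control back to a human."""
    # If the developers never anticipated the situation, the system has
    # no answer; it cannot reason its way to one.
    return PREPROGRAMMED_RESPONSES.get(situation, "alert_human_driver")

print(respond("lane_drift_detected"))   # nudge_steering_back_to_center
print(respond("deer_jumping_median"))   # alert_human_driver (never anticipated)
```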
So, what is real machine learning? Put quite simply, machine learning occurs when the machine uses all the existing sources of knowledge at its disposal and draws its own conclusion. Just like people, that conclusion may change over time as it learns more and evaluates the correctness of its "opinion" based on empirical data.
Let's think about this type of artificial intelligence a bit. We create a machine/computer/software/neural network with hundreds of layers. We teach it to read certain kinds of inputs or specific types of data (like millions of medical records or billions of stock trades). We give it access to all that data plus general data from the world at large; then we give it one situation, and ask it, "What do you think?" It turns out that the machine's answers (or guesses or diagnoses) are better than the human experts' almost every time. And -- here comes the good part -- the machine keeps getting better and better, learning more and more over time, with access to more and more data. And, perhaps most importantly, the machine will very dispassionately assess its own shortcomings and make adjustments and improvements in its own logic -- every time! Well, hot dog! This is THE answer! This could save mankind! Let's get moving and build these suckers as fast as possible. Nellie, bar the door!
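By contrast, a machine learning system is handed data rather than rules, and it revises its own conclusions as more data arrives. The toy sketch below (Python with scikit-learn, using synthetic data in place of the medical records or stock trades mentioned above) shows the basic pattern: train on what is available, measure your own error, then retrain when new examples come in.

```python
# A toy illustration of learning from data rather than from hand-written rules.
# Synthetic data stands in for the "millions of medical records" in the text.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Pretend these are historical cases with known outcomes.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model draws its own conclusions from the first batch of data...
model = LogisticRegression(max_iter=1000)
model.fit(X_train[:500], y_train[:500])
print("accuracy with 500 examples:", model.score(X_test, y_test))

# ...and, given more data, retrains and typically does better; no human
# wrote any of the decision logic it ends up with.
model.fit(X_train, y_train)
print("accuracy with all examples:", model.score(X_test, y_test))
```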
Not so fast.
Here's the issue: The computer scientists who have designed and created these miraculous machine learning devices don't actually know how they work. Yes, you got it … they are just like the black box software problem most IT types are familiar with. They don't know how the machine learns or makes decisions or provides recommendations. And it turns out the machine can't really tell us either.
"We've never before built machines that operate in ways their creators don't understand. How well can we expect to communicate -- and get along with-- intelligent machines that could be unpredictable and inscrutable?" noted writer Will Knight in his recent article on the black box problem in the MIT Technology Review.
Think about us. Humans. Can you explain the exact process that you go through when making a decision? Sometimes, perhaps, but often we use "gut instinct" or act on a "hunch" or use the ubiquitous "I just know" reason when describing how we make decisions. Believe it or not, the same is true for these human-made artificial intelligence machines. There is nothing in these systems, at the moment, that explains how they reached a given decision. They just did.
It turns out that this lack of "explainability" may be a showstopper. With millions of dollars or someone's life on the line, humans are going to demand an explanation. If a machine turns down someone for a credit card, a car loan or a mortgage, aren't they entitled to know why? Machine learning systems, as they are commercialized, may not always remain the exclusive purview of computer scientists, and as these types of AI systems begin to affect our everyday lives, lawmakers will have to weigh in on whether such explanations are legally required.
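There are early techniques for prying the black box open, though none fully solves the problem. One simple, model-agnostic approach is permutation importance: scramble one input at a time and measure how much the model's accuracy suffers, which hints at what the decision leaned on. The sketch below (Python with scikit-learn; the credit-related feature names are invented for illustration) shows the idea. Note its limit: it tells you which inputs mattered, not why the machine weighed them the way it did.

```python
# Permutation importance: a rough, model-agnostic peek inside a black box.
# Feature names are hypothetical stand-ins for a credit-scoring model's inputs.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

feature_names = ["income", "debt_ratio", "payment_history", "age_of_accounts"]
X, y = make_classification(n_samples=2000, n_features=4, n_informative=3,
                           n_redundant=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, importance in zip(feature_names, result.importances_mean):
    print(f"{name}: accuracy drop {importance:.3f}")
```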
Upping the stakes, what happens if a machine turns down your child for a life-saving organ transplant? Wouldn't you be screaming at the top of your lungs to know why? And we haven't even mentioned the applications of AI to the military. If, at some time in the future (cue the Hollywood scriptwriters), a machine is going to make a key military decision, all of us, especially our leaders, must have a very clear explanation of who, what, where, when, why and how that decision was reached.
Take a breath. Let's lower the blood pressure a bit. What will you expect from Siri in 2020? Restaurant recommendations? A scolding for not moving enough? Blood sugar level readings? Oxygen saturation readings? Health warnings for 54 key biomarkers? And, based on certain health feedback, will Siri call 911 all on her own, without asking us? There's no question that these various opt-in personal services could improve the quality of our daily lives without proving too disruptive.
But, make no mistake, these smart types of artificial intelligence have already become exponentially better at understanding the world in ways that will disrupt us and disrupt whole industries. This year, a computer beat the best Go player in the world, 10 years earlier than expected. Facebook now has pattern recognition software that can recognize faces better than humans can. Some experts predict that, by 2030, computers will be more intelligent than humans.
In the near term, look no further than our century-long relationship with the car to see how disruptive machine learning will be. The scattered demonstrations of self-driving cars we saw in 2017 will increase in 2018. A mere two years after that, pundits predict, the entire auto industry will be disrupted. Many car companies could go bankrupt. At some point, we won't own cars but will instead summon them with our phones. Except it won't be an Uber driver showing up at our location; it will be a driverless Uber car. Teenagers won't get driver's licenses. Vehicular fatalities will shrink to 200,000 a year from the 1.2 million lives claimed annually today. Car insurance companies will slowly disappear. City parking lots will become parks. Can you imagine this new world? As soon as you start playing out what machine learning might do, the world becomes a very different place.
Machine learning is here right now. Today. It has unprecedented possibilities to improve our lives immeasurably. But its full potential will never be realized unless humans can learn to actually trust a machine with their lives. Can you do that? Will you?