Fourteen years ago today, IBM’s supercomputer, Deep Blue, defeated grandmaster Garry Kasparov in a chess match. The event marked a milestone in computing technology, demonstrating to the world that computers could beat humans at their own games.
Deep Blue was a brute force solution to the chess problem. It calculated millions of possible moves and chose the one most likely to lead to a win. Humans, on the other hand, win at chess by developing a keen sense of the game, able to recognize patterns and switch strategies on the fly in order to defeat an opponent.
There is a fundamental difference between the two chess-playing strategies. On the one hand, humans don’t have the computational ability to work out millions of possible moves. But humans do have the ability to recognize a particular move as a “mistake”, demonstrating an ability to assign abstract meanings to certain states of the world. The opposite is true of Deep Blue, which couldn’t have recognized a mistake if it saw one, but was able to evaluate millions of moves with incredible speed and accuracy.
That human ability, the one that finds meaning in complex patterns, is often associated with the ability to use natural (spoken) language. It has long been a criticism of artificial intelligence, the project of making computers “intelligent”, that computers are not able to understand natural language and will therefore never be intelligent.
A few months ago IBM showcased a new supercomputer in a game against humans. This time they called it Watson, after IBM’s founder, and the game was the popular American quiz show Jeopardy!. They billed the match as Man vs. Machine: Watson was pitted against the two most successful Jeopardy! champions of all time (both humans). Watson was victorious…quite victorious.
Watson’s effect on the human participants was noticeable. As Watson answered question after question, denying his two human competitors even the opportunity to speak, they appeared to be watching the robot in amazement, bewildered at the machine’s ability to get it right.
The Jeopardy! match marked a shift in the way that computer experts were willing to describe the machines they build. In a series of infomercials that aired during the match, IBM researchers talked about Watson not just as a brute calculating machine (which he is), but also as being able to understand natural language.
The Jeopardy! victory was a win for computers (and their designers) on two fronts. It reinforced the ability to win at a game played against humans, though that was old news as Kasparov might admit (he challenged Deep Blue to a rematch, but was denied the opportunity). But it also introduced the ability to win at a language game as complicated and nuanced as Jeopardy!. Thus, computer experts scored a win against humans and AI critics.
Upon winning, IBM immediately announced that they were developing Watson into an expert system for use in health care, telecom, and finance. They envision computers working alongside humans, though in a semi-autonomous role, capable of performing tasks traditionally associated with that uniquely human capacity for natural language comprehension, such as interacting with patients to diagnose illnesses.
One aspect of Watson’s design that raises serious ethical considerations is the fact that Watson was designed to succeed at Jeopardy! without his designers being able to predict his behaviour. In playing Jeopardy! Watson received the questions and then ran a complex set of algorithms, which included searching his vast stored corpus of documents (much of it drawn from the Internet) and cataloguing word associations, in order to glean the “meaning” of the question and formulate a set of candidate answers. Watson’s input is all the available information “out there”, which means that the input is constantly changing in an unpredictable manner. Of course, if the input is constantly changing, so too could the output.
The variability in the I/O (input/output) is Watson’s strength, but it is also the source of the ethical concerns. In a medical context the constantly changing body of medical literature demands a kind of up-to-the-minute synthesis that humans might not be capable of. It might be the case that when doctors and nurses work with Watson they will be forced to accept Watson’s answers despite their own inability to grasp the full meaning of them.
But to assign Watson the task of making sense of it all, because humans cannot, raises questions about (among other things) the extent to which we (humans) ought to trust Watson, the amount of responsibility we ought to hand over to Watson, and the kind of relationship that ought to be struck between Watson, physicians, and their patients.
The answers to all of these questions will rest in part on how Watson’s designers approach the problem of his design from an ethical perspective. IBM’s computer experts will need to take into account the kinds of human values that are required to foster trustworthy relationships between machines, human experts, and lay people, or run the risk of creating expert systems that violate ethical norms and could ultimately harm people.
There is no doubt that robots like Watson are promising technologies that demonstrate the potential to act (semi)autonomously in domains traditionally associated with uniquely human capacities. That has been the tagline throughout the (short) history of robots. However, as their autonomy increases, so too does the demand on computer experts to make sure their machines are good. If there is a time to start thinking about robot ethics, it is now, before we find ourselves in an exam room being diagnosed by Dr. Watson, while his human counterpart sits idly by in awe of the robot’s abilities.
Jason Millar is a PhD candidate at Queen’s University. He teaches Robot Ethics to undergraduate philosophy students.