What Nascent Artificial Intelligence Tells Us About Tomorrow

Robots are nowhere near their designers’ lofty goals, but can we anticipate important ethical consequences based on the current state of artificial intelligence?

If you haven’t listened to Radiolab, one of the best podcasts out there, you should. Radiolab’s topics and style are unique, and their recent show on Bina48 (featured in the picture and video) was a lot of fun.

Bina48 is a robot that was privately commissioned with the specific goal of creating a sentient artificial intelligence (AI), and in each of the links I’ve included (Radiolab and NYTimes) an interviewer was sent to speak with Bina48. Seconds into both interviews it becomes painfully clear that Bina48 is not like a human, and is not sentient, at least not in any way we humans would feel comfortable calling sentience.

The interviews aren’t particularly novel; AI has been ridiculed for some time now. But in the Radiolab interview Jon Ronson (author of The Psychopath Test: A Journey through the Madness Industry) briefly mentions how interviewing a robot is a lot like interviewing a different species.

I think that point is important to keep in focus when we discuss artificial intelligence. Sentience aside, when we are dealing with AI, we are certainly not dealing with humans, despite their likeness and the often-stated lofty goal of creating “human-level” artificial intelligence. What we are dealing with are machines that act in ways that diverge from their designers’ expectations, largely because the designers don’t understand human intelligence, let alone sentience. The consequence of that lack of understanding is that we ought to expect artificial intelligences (AIs) to be quite different from humans in their behaviour, even if they were meant to be quite similar.

Mere differences, however, ought not to be used as a justification for dismissal. Because we can expect AIs to act differently from humans, we ought to evaluate them on their own terms. I’m not suggesting that we ask them how they would like us to evaluate them; rather, I’m suggesting that we genuinely try to assess their capabilities and nuances, independent of how they match up to humans, before dismissing them as failed projects (it seems obvious that the reporters dismissed them in that way), or as unworthy of further consideration. As Justin Leiber rightly points out, paying too much attention to normally functioning humans when evaluating individuals that are “out of the ordinary” is a recipe for misunderstanding and unjustified discrimination.

Imagine being asked to interview an alien that has just been discovered. How would you prepare for it? Would you jump right in asking if the alien has a soul as Jon Ronson did (Bina48 interpreted him as having asked about “having a solar”…panel, perhaps?)? Would you dismiss it as ridiculous if it seemed confused by your questions? I think not.

To be fair, aliens probably deserve more patience and benefit of the doubt when we assess their capabilities than does Bina48, a robot that seems utterly incapable of even faking intelligence on any real level. I’m assuming an alien might react in a way that would cause us to take it seriously (though it is possible we wouldn’t recognize alien intelligence if we came across it on a daily basis), whereas Bina48, let’s face it, does not.

Part of the humour in these interviews stems (rightly) from a reaction to the designers’ audacity; as I’ve said, human-like sentience is quite a lofty goal. (Setting the bar so high also seems counterproductive, especially when it is so obviously unrealistic in the current state of affairs, and so open to ridicule.) Another part of the humour is a reaction to the obvious fact that AI is still in its infancy.

The humour is to be expected. What is surprising, despite the obvious failures of Bina48, is the speed at which the interviewers seem to want to dismiss the AI project outright.

I find that strange. Planes are not as elegant as birds, nor are they as efficient. Boats are certainly not as elegant or efficient as dolphins. But we accept the shortcomings of planes and boats in light of their obvious benefits. Why is it so hard for some to accept the same strategy with AI? I suspect the answer is complex and mangled with psychology.

So there are good reasons to chuckle when we see robots like Bina48 rambling on about “having a solar”. Yet I think it would be unfortunate if, based on the obvious current shortcomings of AI, we overlook one of the predictions we can make based on current attempts at AI:

AIs will almost certainly be quite different from humans, but will still turn out to be ethically valuable in their own right.

Research from Peter Kahn’s Human Interaction with Nature and Technology lab focuses on the strong emotional attachments that humans are willing to form with robots, even when we know they are machines. The Radiolab hosts recognized this too. Once humans form strong emotional bonds with other “things” (I’m thinking of “things” like cats & dogs, but also slugs, spiders, and beetles), those “things” start to appear a lot less “thingy”.

Think this is crazy talk? Consider that there are people out there who claim to be in love with anime characters, sometimes even with body pillows depicting anime characters. And those people (men AND women) aren’t obviously incapacitated. That is, they are sane enough to be taken seriously.

Those bonds, and the impacts they have on our willingness to recognize new ethical relationships, might be the stuff that motivates us to move towards justifiably attaching ethical value to machines (as David Levy and Justin Leiber suggest). Bina48, after all, was commissioned as a testament to one person’s love of another. There is good reason to believe that Bina48 will stand as more than just a machine to both of those people. In the near future that might be true for many people interacting with many robots, all of whom might feel their relationships are worth protecting as more than mere property.


2 thoughts on “What Nascent Artificial Intelligence Tells Us About Tomorrow”

  1. Great summary, Jason. I’ll pitch in with a prediction – exactly what futurists are not supposed to do.

    The reports of folks experiencing romantic love for inanimate objects are sporadic but nonetheless receive attention due to their peculiarity. This is notably different from the platonic/pseudo-parental love that pet owners feel towards a pet. I’ll go out on a limb here and say, without any evidence whatsoever, that there lies a continuum within these bonds as well, as conceivably more people have closer relationships with dogs or cats than with fish or frogs, the former two being capable of showing emotion in return. Western society is fine with the consumption of fish or frogs, but not dogs or cats.

    Let’s be honest, the uncanny valley will rule the day. I wouldn’t be surprised if more people find Bina48 deeply creepy than sexually attractive. I can easily believe in a society 50 years hence where people express platonic love for a robotic pet, but the uncanny valley will cast a long shadow, and widespread love for anthro-/gynomorphic robots will be limited to the fringe. Perhaps more will fall for Bina48 than for a bridge or a ’61 Thunderbird, but it won’t be a significant phenomenon.

    So we will tie ethical value to machines, but first as pets. I can easily foresee legal protections for machines like those we have for domesticated animals. When they achieve human-level intelligence and expression, we’ll talk about broader protections. I imagine we’ll know we’ve reached that point when the machines set up lobby groups.

    1. Mike,
      The uncanny valley will rule the day, but good designers will be able to circumvent it by matching their designs to people's expectations. I think your predictions are quite foreseeable! Thanks for reading.
