I came across a great post about the Trolley Problem and Warfare at a Distance over at the Neuroethics at the Core blog. Peter B. Reiner gives an excellent overview of the classic “trolley problem” (a fun party conversation piece if ever there was one) and ties neuroethics and moral psychology into it very nicely. His conclusions are spot on.
Here’s an excerpt of his post:
While the developers of the drones were clearly interested in sparing the lives of soldiers (at least allied soldiers), it seems unlikely that they ever considered the psychological effects that the distance has upon their responsiveness. If trolleyology has any predictive power, it would predict this: it is far easier to pull the trigger in North Dakota than it would be if the pilot were closer to the battlefield. But that is still a bit like pulling the switch to move the train from one track to the other. At the level of decision makers, the distance between trigger and killing machine grows yet again, and the data would predict that utilitarian judgements – cold, hard calculations devoid of what are often described as moral sentiments – will become even stronger.
I commented over at the blog, so I won’t go into details here, except to say that the emergence of autonomous robots will add new dimensions to the ethical debate over robots, the trolley problem, and moral psychology. I would encourage you to check out the Neuroethics at the Core blog. There’s a ton of interesting work going on in that group.