This is a cross-post from another blog, OneOverZero, where I write (somewhat infrequently) about Singularity topics in general.
In it I mostly link to other interesting articles, and I want to keep some of my posts for future reference, so I’ll be copying them over to this site.
Before I get to the latest post, here are a few of the interesting posts/links I wrote about there:
- Ethical issues as they stand today
- We at OneOverZero will welcome Robots, as equals
- Learning to learn
- While we’re on the topic of learning…
And now, on to the cross-post.
Thoughts on “The Future of Moral Machines”
Today we’re coming back to the topic of ethics and morality.
That there is a real need for serious thought and debate about “synthetic” moral agents is not yet commonly accepted. Many people and institutions still regard the issue as an esoteric topic dreamed up by techies, not something that will ever impact society at large, at least not in the foreseeable future.
Still, a number of people take this issue very seriously indeed, and today we point you to an article about it: The Future of Moral Machines. In it the author, a self-professed Singularity skeptic (and co-author of a book on this very subject), argues that regardless of the Singularity question, robots moving through physical space and interacting with humans will inevitably become more and more common, and that many of these machines will necessarily make operational decisions that affect humans in very serious (and potentially very dangerous) ways. It will therefore be necessary for us to provide these machines with ways to evaluate their actions in light of doing “good” or “bad” by us humans.
The machines, the article argues, will be autonomous, not in a human sense (they will not be self-aware or have freedom of will; in fact, they will have no will whatsoever), but in an operational sense. This “engineers’ autonomy” makes some kind of “functional morality” absolutely necessary: one that tries to “make autonomous agents better at adjusting their actions to human norms”.
The article is somewhat long, but the viewpoints and arguments are compelling, and I urge you to read it in full. I just can’t resist quoting one final passage that I found particularly inspiring:
> The different kinds of rigor provided by philosophers and engineers are both needed to inform the construction of machines that, when embedded in well-designed systems of human-machine interaction, produce morally reasonable decisions even in situations where Asimov’s laws would produce deadlock.
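To make the quoted point about deadlock concrete, here is a minimal sketch of my own (it is not from the article, and every name and number in it is made up): an agent governed by an absolute “harm no human” rule freezes when every available action causes some harm, while a graded “functional morality” that scores actions against human norms can still pick the least harmful option.

```python
# Toy illustration (hypothetical, not from the article): absolute rules
# versus graded "functional morality". All harm values are invented.

# Each candidate action lists the harm it would cause to each party.
actions = {
    "swerve_left":  {"pedestrian": 0.9, "passenger": 0.1},
    "swerve_right": {"pedestrian": 0.0, "passenger": 0.7},
    "brake_only":   {"pedestrian": 0.4, "passenger": 0.2},
}

def asimov_style(actions):
    """Absolute rule: never take an action that harms any human.
    Deadlocks (returns None) when every option causes some harm."""
    permitted = [a for a, harms in actions.items()
                 if all(h == 0 for h in harms.values())]
    return permitted[0] if permitted else None  # None = deadlock

def functional_morality(actions):
    """Graded rule: pick the action with the lowest total harm.
    Always decides, even when no option is harm-free."""
    return min(actions, key=lambda a: sum(actions[a].values()))

print(asimov_style(actions))         # None -- every action harms someone
print(functional_morality(actions))  # brake_only (lowest total harm: 0.6)
```

A real system would of course need a far richer model of human norms than a single harm score; the sketch only shows why graded evaluation can still decide in exactly the cases where absolute prohibitions lock up.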
While I do not share the author’s skepticism about the Singularity, I find the notion of a “functional morality” very interesting and, really, very important. Here is a topic where we can (and I think should) make headway today, regardless of what the future brings. Singularity or no Singularity, one thing is certain (as the author posits): short of a cataclysmic event on a global scale, robots will be all around us, so we had better make sure they understand our ways, our needs, and our frailties well enough to deal with us without causing us harm. Whether they end up as conscious entities or mere mindless tools, it behooves us, as their creators, to provide them with that knowledge.