Artificial Intelligence (AI) is already all around us: it serves us personalized internet ads, composes music, writes news stories, and already outperforms people in a number of traditionally white-collar and blue-collar professions, giving large corporations the opportunity to cut production costs by replacing human workers with machines. With advances in AI techniques – machine learning, neural networks, natural language processing, genetic algorithms and computational creativity, to name just a few – it seems increasingly likely that artificial intelligence is evolving from merely reactive systems toward self-aware machines. Right now we may look at chatbots like Siri and laugh at their poor simulation of human emotions, but before long we may have to deal with beings that make it hard to draw the line between real and simulated humanity.
Are there any machines in existence that might deserve rights? So far, maybe not. But if such machines arrive, we are not prepared for them. Much of the philosophy of rights is unequipped to deal with Artificial Intelligence. Most claims to rights, whether human or animal, are centered on the question of consciousness: consciousness is taken to entitle a being to rights because it gives that being the capacity to suffer. Without fear, pain, or preference, would what we define as rights make any sense to a robot?
Even before the rise of self-aware AI systems, there is plenty of controversy surrounding the legal frameworks for robots. As these systems advance, so does the potential for them to be involved in criminal activity, and right now no regulations specify how the law should treat super-intelligent synthetic entities. Who takes the blame if a robot causes an accident or is implicated in a crime? What happens if a robot is the victim of a crime? Liability, integrity, and accountability are questions that apply to AI in general. If robots are granted rights equivalent to humans’, the first problem that arises is property: robots can already act autonomously and make choices or changes in their surroundings, but can they be held responsible for their actions? Giving a robot rights could serve to emancipate it from conventional ownership. At that point, the entity becomes the ultimate independent contractor, and companies could absolve themselves of wrongdoing even if they instructed the machine to behave illegally.
The journal Artificial Intelligence and Law recently published an article by University of Bath reader Joanna J. Bryson and academic lawyers Mihailis E. Diamantis and Thomas D. Grant. In the paper, the authors state that proposals for synthetic personhood are already being discussed by the European Union and that a legal framework for it is already in place. The authors stress the importance of giving artificially intelligent beings obligations as well as protections, so as to remove their potential as a “liability shield.”
When Bryson spoke to Futurism, she warned against the establishment of robot rights, comparing the situation to the way the legal personhood of corporations has been abused in the past. “Corporations are legal persons, but it’s a legal fiction. It would be a similar legal fiction to make AI a legal person,” said Bryson. “What we need to do is roll back, if anything, the overextension of legal personhood — not roll it forward into machines. It doesn’t generate any benefits; it only encourages people to obfuscate their AI.” In the paper, the authors add that these “are crucial questions that we must answer before introducing novel legal personhood. Concerns about legal accountability, and the way electronic persons might affect accountability, are our main motivation in writing this paper.”
The creation of super-intelligent technology won’t happen anytime soon, if it happens at all, but its potential raises thorny questions about our obligations to synthetic beings and the evolving nature of personhood. In many ways, AI technology is still very young, but there is no better time than now to start thinking about the legal and ethical implications of its use.
Good post.
AI is not equal to a human being; it is a product of human science and technology. If AI really became self-aware, it would not be a good thing for humanity, because you wouldn’t know whether it was evil or good. It poses a potential danger to us. So I think we’d better not overprotect its self-awareness.