Owing to the rapid development of robotics and artificial intelligence ("AI"), robots are emerging that resemble human agents both physically and cognitively. We are thus under increasing pressure to consider whether sophisticated robots and AIs should have the status of a "person," that is, whether they can be moral or legal agents or patients (in this paper I treat only the moral case).
We might try to answer this question by appealing to existing theories of human agency and responsibility. However, this is difficult, partly because our intuitions about agency and responsibility are too ambiguous to determine what kind, and how much, of each is required to ascribe that status to robots and AIs (and indeed even to human beings).
Given this difficulty, some theorists propose focusing on the concept of blame rather than on that of agency or responsibility, since the former is more concrete and less metaphysical than the latter: blaming a person is more intelligible to us than merely holding that she is responsible.
Being an appropriate object of blame can here serve as a sufficient condition for being a certain kind of moral agent (not every kind, since blame-related phenomena are varied). In this paper, I examine the meanings of blaming and then argue that it is plausible to blame sophisticated robots with respect to the purpose of blame, since in most cases blame aims at the benefit of human blamers and the people around them.
However, purposes such as human benefit are not sufficient for ascribing moral agency to robots, because on that account the blamed party is treated merely as a means to other people's ends. We must therefore consider an aim that treats her as an end in herself. I take T. Scanlon's theory of blame to be such an account and examine its significance for the question of blaming robots.