Philosophy and robots

Philosophy is the study of what there is, what we can know, what we ought to be and what we ought to do. The philosophy of robots is thus the study of what robots are, how we can know what robots are and what robots ought to be and do. The philosophy of robots is relevant to the robot team (engineers, designers, programmers), regulators (e.g. government, lawyers, insurers), citizens and academics. Each group has different beliefs and understandings about robots: citizens, for example, may consider robots in terms of trust and use, whereas academics may consider them in terms of theory, hypotheticals and consequences.

What are robots? Robots are embodied artificial agents designed to serve human needs. But where do robots begin and end? The obvious answer is that a robot is the sum of the physical parts of which it is composed. On this view, a robot's identity begins at a physical inception event, say when the first hinges or components are put together. But what if multiple robots are created at the same time from similar components, yet with radically different functions or purposes? Are physical tokens of the same robot type treated as a single individual for the purposes of legislation and culpability? What about robots with different bodies but the same algorithms controlling them? These questions are not trivial when considering issues of intellectual property and responsibility. Robots are defined by their physical boundaries, their representations of the internal and external world and the programming that directs their action.

How can we know what robots are? Deterministic robots have limited (though sometimes extraordinary) behaviours that are causally simple. Deterministic robots are epistemically transparent: the counterfactual dependencies between programming and outcomes are predictable and reliable. Robots become harder to know the more they are indeterminate and free to make their own decisions in complex environments. Instead of following clear directives, semi-autonomous and autonomous robots evaluate inputs from their sensors, programmed rules and empirically generated representations to make decisions for themselves. When indeterministic robots have limited capabilities and are tested and used in constrained, predictable environments, the uncertainties of their behaviours are reduced. Companies and institutions making such robots can then report reliable claims about likely behaviours in limited contexts of use, in terms understandable to non-robot-team audiences. As robots become more complex, however, their operating parameters may exceed human capacity to understand. This is unproblematic so long as robots make decisions consistent with those of human decision makers. But how will regulators judge decisions that are inconsistent with human choices? It is likely that future robots will make unfathomable decisions. How will humans negotiate trust relationships with robots they cannot know?
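To make the contrast concrete, here is a minimal, hypothetical sketch in Python (not drawn from the paper): a deterministic controller whose action can be read off directly from its rules, alongside a stochastic policy whose behaviour can only be characterised probabilistically. The function names, thresholds and toy probability model are illustrative assumptions.

```python
import random

# Deterministic controller: the mapping from sensor reading to action is fixed,
# so the counterfactual "if the obstacle were at 0.4 m, the robot would stop"
# can be verified directly from the rules. (Thresholds are illustrative.)
def deterministic_controller(obstacle_distance_m: float) -> str:
    if obstacle_distance_m < 0.5:
        return "stop"
    elif obstacle_distance_m < 2.0:
        return "slow"
    return "cruise"

# Indeterministic (stochastic) policy: the same input can yield different
# actions, so observers can only report likelihoods of behaviour, not guarantees.
def stochastic_policy(obstacle_distance_m: float) -> str:
    # Probability of stopping rises as the obstacle gets closer (toy model).
    p_stop = max(0.0, min(1.0, 1.0 - obstacle_distance_m / 2.0))
    return "stop" if random.random() < p_stop else "proceed"

if __name__ == "__main__":
    reading = 0.8
    print(deterministic_controller(reading))                # always "slow" for 0.8 m
    print([stochastic_policy(reading) for _ in range(5)])   # varies run to run
```

On this toy picture, a regulator could audit the first controller exhaustively, whereas claims about the second can only take the form of statistics gathered in bounded test conditions, which is the epistemic gap the paragraph above describes.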

What ought robots to be? Whatever the metaphysical and epistemic challenges, it might be supposed that robots ought to be aligned with our best ethical theories and principles of rationality and decision making. However, given continued conceptual disagreement in these areas, I argue that robots must be responsive to social norming. The question remains whether robots should fill a familiar social role (e.g. family member, friend, trusted acquaintance) or a sui generis social niche designed specifically for human-robot relationships.

Cognitive Decision Scientist


S. Kate Devitt


Research Associate, Institute for Future Environments and the Faculty of Law, Queensland University of Technology