Decision Tools Manipulating Assent: Rational Persuasion, Paternalism and Trust

Autonomous systems, Situation Awareness Tools and Operator Decision Aids are increasingly used to improve strategic and operational effectiveness with better data, models and scenarios (Defence Science & Technology Group, 2017). To gain the trust of users, tools are designed around the cognitive architecture of human users; for example, automated recommendations are accompanied by natural language explanations (e.g. Giboney, Brown, Lowry, & Nunamaker Jr, 2015; Papamichail & French, 2003). Such approaches are promising because they present information clearly and make the underlying logic transparent to human users. However, the danger of designing decision support systems (DSS) specifically for the cognitive preferences of decision makers (DM) is that users may be manipulated into assent, i.e. rational persuasion can be used as a form of paternalism (Tsai, 2014). When an agent rationally persuades a user, it offers reasons, evidence or arguments. It is possible to construct a DSS that rationally persuades a human operator to choose the right action, yet presents information in a way that is paternalistic or disrespectful because it is incomplete, simplified or obfuscatory. Rational persuasion may be motivated by distrust in the DM's capacity to gather, weigh or evaluate evidence. It may also intrude on the user's deliberative activities in ways that devalue her reflective decision-making processes. Manipulative rational persuasion could thus be co-opted as an effective tool of disinformation.
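To make the concern concrete, here is a minimal sketch (in Python, with purely hypothetical names and weights, not any real system's API) of how a DSS might pair a recommendation with a generated natural-language explanation. The explanation surfaces only the top-weighted evidence, which is exactly where simplification and omission can shade into paternalism.

```python
# Minimal sketch (all names and figures hypothetical): a DSS recommendation
# paired with a natural-language explanation. The explanation is a *summary*
# of the underlying evidence, so the DM assents on partial information.
from dataclasses import dataclass, field


@dataclass
class Recommendation:
    action: str
    confidence: float                               # system's confidence in the action
    evidence: dict = field(default_factory=dict)    # full evidence base, often opaque to the DM

    def explain(self, max_items: int = 2) -> str:
        """Generate a simplified natural-language explanation.

        Only the top-weighted evidence items are surfaced; the rest are
        silently dropped.
        """
        top = sorted(self.evidence.items(), key=lambda kv: kv[1], reverse=True)[:max_items]
        reasons = ", ".join(f"{name} (weight {weight:.2f})" for name, weight in top)
        return f"Recommend '{self.action}' ({self.confidence:.0%} confidence) because: {reasons}."


rec = Recommendation(
    action="reroute convoy",
    confidence=0.87,
    evidence={"weather risk": 0.9, "threat report": 0.7, "fuel margin": 0.4, "comms latency": 0.2},
)
print(rec.explain())  # the DM never sees the two lowest-weighted factors
```

The point of the sketch is not the mechanics but the design choice: the explanation is persuasive precisely because it is selective, and the selection is made on the user's behalf.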

The question is: can a DSS gain and maintain trust while avoiding paternalism?

Increasingly, DSS draw on vastly more data, and perform vastly more operations on that data, than a single DM could understand, rendering individual reflective cognition problematic. When recommendations align with human intuition, there may be little cause for concern. However, when recommendations diverge from human intuition, humans must either trust the system and follow its dictates without necessarily knowing why they are agreeing, or reject the system in favour of a potentially suboptimal alternative and put operations at risk. DSS makers may try to improve trust by honestly articulating how recommendations are generated, but that information will likely be simplified, and perhaps manipulated, to facilitate consent. How should human trust be established in such systems? How are trust relationships established and maintained as DSS move from decision support to semi-autonomous and autonomous decision-making? What are the legal and regulatory implications of these findings (Calo, Froomkin & Kerr, 2016)? This research project seeks to contribute to human and autonomous decision superiority (Defence Science & Technology Group, 2017).
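One way to frame the accept-or-reject tension is as a divergence check. The illustrative sketch below (hypothetical function names, scores and threshold, not a proposal from this project) escalates for a full evidence review whenever the system's ranking of actions departs substantially from the operator's own assessment, rather than quietly persuading the operator to assent.

```python
# Illustrative sketch only: flag divergence between a DSS recommendation and
# the operator's own assessment, and escalate rather than persuade when the
# gap is large. All values are invented for the example.
from typing import Dict


def divergence(system_scores: Dict[str, float], operator_scores: Dict[str, float]) -> float:
    """Mean absolute difference between the two scorings of candidate actions."""
    actions = set(system_scores) | set(operator_scores)
    return sum(abs(system_scores.get(a, 0.0) - operator_scores.get(a, 0.0)) for a in actions) / len(actions)


def decide(system_scores: Dict[str, float], operator_scores: Dict[str, float], threshold: float = 0.3) -> str:
    if divergence(system_scores, operator_scores) <= threshold:
        return "accept system recommendation"      # intuitions align; little cause for concern
    return "escalate: require full evidence review before assent"


system = {"hold position": 0.2, "reroute convoy": 0.8}
operator = {"hold position": 0.7, "reroute convoy": 0.3}
print(decide(system, operator))  # divergence 0.5 > 0.3, so the decision is escalated
```

Such a check does not resolve the trust question, but it makes the moment of assent explicit instead of engineering it away.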

References:

Calo, R., Froomkin, M., & Kerr, I. (Eds.). (2016). Robot Law. Cheltenham, UK: Edward Elgar Publishing.

Defence Science & Technology Group (2017). DST Science and Technology Capability Portfolio. Department of Defence, Science and Technology, Australian Government. Retrieved 9 March 2017 from https://www.dst.defence.gov.au/publication/science-and-technology-capability-portfolio

Giboney, J. S., Brown, S. A., Lowry, P. B., & Nunamaker Jr, J. F. (2015). User acceptance of knowledge-based system recommendations: Explanations, arguments, and fit. Decision Support Systems, 72, 1-10. doi:10.1016/j.dss.2015.02.005

Papamichail, K. N., & French, S. (2003). Explaining and justifying the advice of a decision support system: a natural language generation approach. Expert Systems With Applications, 24(1), 35-48. doi:10.1016/S0957-4174(02)00081-7

Tsai, G. (2014). Rational Persuasion as Paternalism. Philosophy & Public Affairs, 42(1), 78-112. doi:10.1111/papa.12026

Cognitive Decision Scientist

S. Kate Devitt

Research Associate, Institute for Future Environments and the Faculty of Law, Queensland University of Technology