Antagonising the echo chamber: Can a social network counteract cognitive bias with Bayesian rationality?

Bayes on the Beach 13-15 Nov 2017

Title: Antagonising the echo chamber: Can a social network counteract cognitive bias with Bayesian rationality? [Accepted for poster presentation]

Authors: Kate Devitt1, Tamara Pearce2, Alok Chowdhury3, Kerrie Mengersen4

1-4 Queensland University of Technology


Abstract: Discussion forums (e.g. Reddit) and social media (e.g. Facebook) allow fast dissemination and analysis of ideas. However, because individuals curate content aligned to their values and beliefs, such forums can become echo chambers in which existing beliefs are confirmed and disconfirming evidence is ignored. Research on cognitive biases has shown that increasing the number and diversity of hypotheses considered by individuals can improve decision making.

This presentation describes collaborative research between QUT and a global online travel agency (OTA) to generate, present and evaluate hypotheses in a social platform, in order to counteract cognitive bias and improve scientific organisational culture. The platform explicitly links hypothesis 'posts' (pertinent to strategic business goals) to evidence 'comments' (e.g. news articles or technical updates).

Each piece of evidence is weighted objectively and subjectively by users and outside experts to produce a hybrid weighting, which is fed into an algorithm that outputs the likelihood that a hypothesis is true. The algorithm takes both the quantity and quality of user interactions on the system into consideration. Incorporating Bayesian rationality, it weights evidence differently depending on context and purpose to reduce the biases amplified within existing social media 'echo chambers'.
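The abstract does not specify the algorithm itself. As a minimal sketch of how hybrid-weighted evidence could drive a Bayesian update, the following assumes each evidence item carries a likelihood ratio and two weights; the equal objective/subjective blend and the function name are illustrative assumptions, not the platform's actual method:

```python
import math

def hypothesis_probability(prior, evidence):
    """Posterior probability that a hypothesis is true, given weighted evidence.

    `evidence` is a list of (likelihood_ratio, objective_w, subjective_w)
    tuples. Each item's likelihood ratio P(e | H) / P(e | not-H) is tempered
    by a hybrid weight in [0, 1] blending the objective and subjective
    assessments, so low-quality evidence moves the posterior less.
    """
    log_odds = math.log(prior / (1 - prior))
    for likelihood_ratio, objective_w, subjective_w in evidence:
        hybrid_w = 0.5 * objective_w + 0.5 * subjective_w  # equal blend (assumption)
        log_odds += hybrid_w * math.log(likelihood_ratio)  # tempered Bayesian update
    odds = math.exp(log_odds)
    return odds / (1 + odds)

# Two supporting items: one fully trusted, one heavily discounted by its weights.
p = hypothesis_probability(0.5, [(3.0, 1.0, 1.0), (3.0, 0.2, 0.2)])
```

Working in log-odds keeps the update additive, so the same evidence item can be down-weighted by context simply by shrinking its hybrid weight towards zero.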

This research initially investigates whether using the platform increases the number of relevant hypotheses generated and the amount and quality of evidence used to justify them. We will then evaluate impacts on strategic decision making, such as whether the platform improves closed innovation within the OTA, facilitates increased scientific behaviours and/or fosters intellectual humility amongst employees.


Posted in conferences

Robots, Ethics, and Intimacy: the need for scientific research


What are the legal, ethical and societal issues if humans form intimate relationships with robots, as depicted in recent films such as Ex Machina and TV shows such as Westworld?

Market forces and customer demand are driving the creation of robots to which humans form strong emotional attachments. In this keynote, Georgia Tech's Professor Ron Arkin discusses the emerging field of intimate robotics and argues that, before the technology becomes fully actualised, we need research and debate on the effects it may have on users and society. Dr Kate Devitt (QUT Faculty of Law) responds to these developments with consideration of the ethical concerns at play in this new technological frontier.

Dr Kate Devitt’s slides

Posted in Uncategorized

Ethics and Rights of Sex Robots

Rick & Morty


Triple J. The ethics of sex robots. The Hook Up. Wed 16 Aug 2017.

*Adult content warning*

What does having sex with a sex robot say about us – especially if some of them are so artificially intelligent they might resist? Do we trust them more than humans? Do robots have rights? Should we have legislation around sex robots? SO MANY QUESTIONS!

The best thing is, they're all going to be answered by Professor Robert Sparrow from the department of philosophy at Monash University and Dr Kate Devitt from the Queensland University of Technology, who specialises in robotics and autonomics and has a PhD in philosophy. Then you'll hear from a member of the Upper House and leader of the Australian Sex Party, Fiona Patten, on what legislation might look like, and whether there should even be legislation.

*content warning* This podcast discusses sexual assault.

Podcast here

Blog style summary of discussion here

Posted in Uncategorized

Robotics and the Rarity of Care


Robotics and the rarity of care: Adoption issues of social robots in healthcare.

Presentation at Social Robots in Healthcare Workshop organised by the Australian Centre for Robotic Vision, 20th July 2017

What does being cared for feel like? Considered, understood, comforted, calmed, elevated. How is care different from being treated for health, stabilised, improved or healed? Better vital signs, systems functioning, less disease or infection, more energy, more mobility, less pain, clearer thinking. What factors affect the feeling of being cared for? For example, do the particularities of your relationship with your carer matter? If so, how and why? Do you feel more cared for when you care about your carer? To what degree do the intentions versus the actions of the carer matter? If someone important to you cares for you in a haphazard way, do you feel more cared for than by an objective professional who cares for you in a systematic and thorough way? The irony of care is that we can feel cared for even as we get sicker, and we can feel more cared for by a less competent person who is important to us than by a more competent person who is not. Can robots make people feel cared for? If yes, then adoption follows: the uptake of social robots in healthcare depends in part on the experience of feeling cared for. Is being prioritised by the carer part of feeling cared for, such that the rarity of care is itself a factor in feeling valued?

Presentation slides (.pdf)

Posted in invited paper



Decision Tools Manipulating Assent: Rational Persuasion, Paternalism and Trust

Autonomous systems, Situation Awareness Tools and Operator Decision Aids are increasingly used to improve strategic and operational effectiveness with better data, models and scenarios (Defence Science & Technology Group, 2017). To gain the trust of users, tools are designed around the cognitive architecture of human users, e.g. natural language explanations of automated recommendations are given (e.g. Giboney, Brown, Lowry, & Nunamaker Jr, 2015; Papamichail & French, 2003). Such approaches are promising because information should be clear and the logic transparent for human users. However, the danger of designing decision support systems (DSS) specifically for the cognitive preferences of decision makers (DM) is that users may be manipulated into assent, i.e. rational persuasion can be used as a form of paternalism (Tsai, 2014). When an agent rationally persuades a user it offers reasons, evidence or arguments. It is possible to construct a DSS that rationally persuades a human operator to choose the right action, yet the information represented is paternalistic or disrespectful by being incomplete, simplified or obfuscating. Rational persuasion may be motivated by distrust in the DM's capacity to gather, weigh or evaluate evidence. Rational persuasion may intrude on the user's deliberative activities in ways that devalue her reflective decision making processes. Manipulative rational persuasion could thus be coopted as an effective tool of disinformation.

The question is: Can a DSS gain and maintain trust and avoid paternalism?

Increasingly, DSS use vastly more data, and more operations on that data, than a single DM could understand, rendering individual reflective cognition problematic. When recommendations align with human ideas, there may be little cause for concern. However, when recommendations diverge from human intuition, humans must either trust the system, and follow its dictates, without necessarily knowing why they are agreeing; or reject the system, fall back on a suboptimal alternative and put operations at risk. DSS makers may try to improve trust with honest articulation of how decisions are generated, but it is likely that this information will necessarily be simplified and manipulated to facilitate consent. How should human trust be established in such systems? How are trust relationships established and maintained as DSS move from support to semi-autonomous and autonomous decision-making? What are the legal and regulatory impacts of these findings (Calo, Froomkin & Kerr, 2016)? This research project seeks to contribute to human and autonomous decision superiority (Defence Science & Technology Group, 2017).


Calo, R., Froomkin, M., & Kerr, I. (Eds.). (2016). Robot Law. Elgaronline: Edward Elgar Publishing.

Defence Science & Technology Group (2017). DST Science and Technology Capability Portfolio. Department of Defence, Science and Technology, Australian Government. Retrieved 9 Mar 2017.

Giboney, J. S., Brown, S. A., Lowry, P. B., & Nunamaker Jr, J. F. (2015). User acceptance of knowledge-based system recommendations: Explanations, arguments, and fit. Decision Support Systems, 72, 1-10.

Papamichail, K. N., & French, S. (2003). Explaining and justifying the advice of a decision support system: a natural language generation approach. Expert Systems With Applications, 24(1), 35-48. doi:10.1016/S0957-4174(02)00081-7

Tsai, G. (2014). Rational Persuasion as Paternalism. Philosophy & Public Affairs, 42(1), 78-112. doi:10.1111/papa.12026

Posted in research project

RezBaz Brisbane Digital Tools Poster

Poster for Brisbane Research Bazaar at The University of Queensland, 7-9 February 2017.


Posted in conferences

A cognitive decision interface to optimise integrated weed management

A cognitive decision interface to optimise integrated weed management. 7th Asian-Australasian Conference on Precision Agriculture, Hamilton, New Zealand 16-18 October 2017. [.pdf]

Kate Devitt*1, Tristan Perez1, Debra Polson1, Tamara Pearce1, Ryan Quagliata1, Wade Taylor1, Jenine Beekhuyzen1, David Thornby2
1 Queensland University of Technology, Brisbane, Australia
2 Innokas Intellectual Services, Upper Coomera, Australia

Weed management is becoming more complex due to the rise of herbicide-resistant weeds. Integrated weed management strategies are recommended to minimise herbicide resistance. However, weed management can be daunting and uncertain, leading to biased, avoidant or suboptimal decisions. Existing weed management tools can be insensitive to user needs and to changing contexts over time. This paper discusses a proof-of-concept cognitive tool for integrated weed management decisions.

Our team has taken initial steps in the design of an interactive tool for cotton growers that allows them to explore the impact of individual priorities and strategy preferences (optimistic, pessimistic and risk-related) on weed management decisions given uncertainty in temperature and rainfall. Our research tackles the challenge of engaging stakeholders in complex decision making in three ways: 1) recognising individual cognitive priorities; 2) visualising scientific weed management in an appealing mobile interface; and 3) representing decision uncertainties and risk weighted against cognitive priorities.

Specifically, our tool communicates personalised barnyard grass weeding management strategies for pre-crop and in-crop cotton weeding decisions. We ranked a set of actions including applications of herbicides: glyphosate, paraquat (shielded and unshielded), group A, trifluralin, diuron, pendimethalin, s-metolachlor, fluometuron, glufosinate; and non-chemical methods such as soil disturbance at various times prior to planting, at planting and in crop. Each action was evaluated against personal priorities including: saving time/effort, health/safety, saving money, sustainability and effectiveness.
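The paper does not give the ranking formula, but one plausible reading of "evaluated against personal priorities" is a priority-weighted score per action. The following sketch assumes that reading; the attribute names, scores and priority weights are invented for illustration, not values from the paper:

```python
def rank_actions(actions, priorities):
    """Order candidate weed-management actions by a priority-weighted sum.

    `actions` maps an action name to per-attribute scores in [0, 1];
    `priorities` maps each attribute to the grower's weight for it.
    Missing attributes contribute zero.
    """
    def score(attrs):
        return sum(priorities[k] * attrs.get(k, 0.0) for k in priorities)
    return sorted(actions.items(), key=lambda kv: score(kv[1]), reverse=True)

# Illustrative scores only:
actions = {
    "glyphosate":       {"effectiveness": 0.9, "saving_money": 0.8, "sustainability": 0.3},
    "soil disturbance": {"effectiveness": 0.6, "saving_money": 0.5, "sustainability": 0.9},
}
# A grower who weights sustainability and effectiveness over cost:
priorities = {"effectiveness": 0.4, "saving_money": 0.2, "sustainability": 0.4}
ranked = rank_actions(actions, priorities)
```

Changing the `priorities` dictionary reorders the recommendations, which is the personalisation the abstract describes: the same objective action scores yield different rankings for growers with different priorities.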

The adoption of decision support in AgTech improves when users can weigh the objective benefits of recommended actions against their own needs and measures of success. Our interactive decision tool provides individualised decision support and quantifies uncertainty about the attributes relevant to decision-makers to optimise integrated weed management. The framework can, however, be extended to other decision-making contexts where user priorities and decision uncertainties need to be incorporated alongside scientific best practice.

Full paper [.pdf]

Posted in conferences

Philosophy and robots

Philosophy is the study of what there is, what we can know, what we ought to be and what we ought to do. The philosophy of robots is thus the study of what robots are, how we can know what robots are, and what robots ought to be and do. The philosophy of robots is relevant to the robot team (engineers, designers, programmers), regulators (e.g. government, lawyers, insurers), citizens and academics. Each group has different beliefs and understandings about robots; for example, citizens may consider robots in terms of trust and use, whereas academics may consider robots in terms of theory, hypotheticals and consequences.

What are robots? Robots are embodied artificial agents designed to serve human needs. But where do robots begin and end? The obvious answer is that robots comprise the physical parts of which they are composed. So a robot's identity begins at a physical inception event, say when the first hinges or components are put together. What if multiple robots are created at the same time with radically different functions or purposes, nevertheless from similar components? Are physical tokens of the same robot type treated as a single individual for the sake of legislation and culpability? What about robots with different bodies but the same algorithms controlling them? These questions are not trivial when considering issues of IP and responsibility. Robots are defined by physical boundaries, representations of the internal and external world, and programming that directs action.

How can we know what robots are? Deterministic robots have limited (though sometimes extraordinary) behaviours that are causally simple. Deterministic robots are epistemically transparent: the counterfactual dependencies between programming and outcomes are predictable and reliable. Robots become harder to know the more they are indeterminate and free to make their own decisions in complex environments. Instead of following clear directives, semi-autonomous and autonomous robots evaluate sensory inputs, programmed rules and empirically generated representations to make decisions for themselves. When indeterministic robots have limited capabilities and are tested and used in constrained, predictable environments, the uncertainty of their behaviours is reduced. Companies and institutions making these indeterminate robots can make reliable claims about likely behaviours in limited contexts of use that are understandable to non-robot-team audiences. As robots become more complex, their operating parameters may exceed human capacity to understand. This is unproblematic so long as robots make decisions consistent with human decision makers. However, how will regulators judge decisions inconsistent with human choices? It is likely that future robots will make unfathomable decisions. How will humans negotiate trust relationships with robots they do not know?

What ought robots be? No matter the metaphysical and epistemic challenges, it might be supposed that robots ought to be aligned to our best ethical theories and principles of rationality and decision making. However, given continued conceptual disagreement in these areas, I argue that robots must be responsive to social norming. The question remains whether robots should fill a familiar social role (e.g. family member, friend, trusted acquaintance) or a sui generis social niche designed specifically for human-robot relationships.

Posted in invited paper

Imagining Decisions: Likelihood and Risk Assessment in Counterfactual and Future Thinking

**Accepted for Poster presentation**


Conference: Annual Conference for the Society for Judgment and Decision Making 2016

Title: Imagining Decisions: Likelihood and Risk Assessment in Counterfactual and Future thinking

Abstract: Imaginative thinking is one way to mitigate the cognitive biases that lead to poor decisions (Montibeller & Winterfeldt, 2015). However, remembering the past, imagining the future and wondering what might have been share phenomenological, cognitive and neural mechanisms, leading to similar biases (De Brigard, Addis, Ford, Schacter, & Giovanello, 2013; Schacter, Addis, & Buckner, 2007). Understanding how individuals assign likelihoods to imagined decisions is critical to assessing the degree to which these mechanisms represent either a cognitive bias or a rational response to options.

To this end, I consider the relationships between emotion, vivacity and perceived risk, and their effects on both the number of genuine options we consider and the probabilistic breadth of those options. One concern is that people's judgements of likelihood may be spurious. That is to say, episodic counterfactual thinking and episodic future thinking involve likelihood attributions based on emotional priorities and biased risk assessment rather than statistical likelihoods or rational probabilities. Positivity, optimism and narrative bias have all been shown to pervade counterfactual and future thinking (Betsch, Haase, Renkewitz, & Schmid, 2015; De Brigard et al., 2013). In particular, people avoid 'upward' counterfactuals (considering positive outcomes that could have occurred) that may devalue their actual decisions and make them feel worse, even though such counterfactuals offer alternatives for better future decisions (Markman, McMullen, Elizaga, & Mizoguchi, 2006).

From this I hypothesise that unfamiliar yet potentially transformative options are suppressed when we imagine what could have been, allowing mundane anticipated outcomes to rise to the fore. I propose a decision framework to free imagined decisions. The framework captures episodic memories of decisions (details, phenomenology, thoughts, emotions) and feeds them into how we represent alternate and future scenarios. The aim is to get more buy-in for speculative imagined possibilities, in particular by making those possibilities more vivid. The claim is that the more buy-in we can get to the imagined, the more likely that information will positively influence decision-making behaviour. We then expose decision makers to novel options and scenarios built on their own experiences, increasing the vivacity and weighting of unfamiliar yet rationally important options, and reducing the negative affect associated with familiar yet critical realities.

In sum, this paper articulates how to construct an information environment that: acknowledges the shared cognitive mechanisms of episodic counterfactual thinking and episodic future thinking; predicts how these processes affect rational decision making; and mitigates cognitive biases by promoting behaviours likely to generate more rational information analysis.

Betsch, C., Haase, N., Renkewitz, F., & Schmid, P. (2015). The Narrative Bias Revisited: What Drives the Biasing Influence of Narrative Information on Risk Perceptions? Judgment and Decision Making, 10(3), 241-264.

De Brigard, F., Addis, D. R., Ford, J. H., Schacter, D. L., & Giovanello, K. S. (2013). Remembering what could have happened: neural correlates of episodic counterfactual thinking. Neuropsychologia, 51(12), 2401-2414. doi:10.1016/j.neuropsychologia.2013.01.015

Markman, K. D., McMullen, M. N., Elizaga, R. A., & Mizoguchi, N. (2006). Counterfactual Thinking and Regulatory Fit. Judgment and Decision Making, 1(2), 98-107.

Montibeller, G., & Winterfeldt, D. (2015). Cognitive and Motivational Biases in Decision and Risk Analysis. Risk Analysis, 35(7), 1230-1251. doi:10.1111/risa.12360

Schacter, D. L., Addis, D. R., & Buckner, R. L. (2007). Remembering the past to imagine the future: the prospective brain. Nature Reviews Neuroscience, 8(9), 657-661.

Posted in conferences

S. Kate Devitt

Research Associate, Institute for Future Environments and the Faculty of Law, Queensland University of Technology