Robotics and the Rarity of Care

 

Robotics and the rarity of care: Adoption issues of social robots in healthcare.

Presentation at Social Robots in Healthcare Workshop organised by the Australian Centre for Robotic Vision, 20th July 2017

What does being cared for feel like? Being considered, understood, comforted, calmed, elevated. How is care different from being treated for health, stabilised, improved or healed? Treatment shows up as better vital signs, systems functioning, less disease or infection, more energy, more mobility, less pain, clearer thinking.

What factors affect the feeling of being cared for? For example, do the particularities of your relationship with your carer matter? If so, how and why? Do you feel more cared for when you care about your carer? To what degree do the intentions versus the actions of the carer matter to feeling cared for? If someone important to you cares for you in a haphazard way, does it make you feel more cared for than an objective professional who cares for you in a systematic and thorough way? The irony of care is that we can feel cared for even while we get sicker, and we can feel more cared for by a less competent person who matters to us than by a more competent person who does not.

Can robots make people feel cared for? If 'yes', then adoption. The adoption of social robots in healthcare depends in some way on the experience of feeling cared for. Is part of feeling cared for being prioritised by the carer? Is the rarity of care a factor in feeling valued and cared for?

Presentation slides (.pdf)

Posted in invited paper

Kindness

Posted in musings

Decision Tools Manipulating Assent: Rational Persuasion, Paternalism and Trust

Autonomous systems, Situation Awareness Tools and Operator Decision Aids are increasingly used to improve strategic and operational effectiveness with better data, models and scenarios (Defence Science & Technology Group, 2017). To gain the trust of users, tools are designed around the cognitive architecture of human users, e.g. natural language explanations of automated recommendations (e.g. Giboney, Brown, Lowry, & Nunamaker Jr, 2015; Papamichail & French, 2003). Such approaches are promising because information should be clear and the logic transparent for human users. However, the danger of designing decision support systems (DSS) for the cognitive preferences of decision makers (DM) is that users may be manipulated into assent, i.e. rational persuasion can be used as a form of paternalism (Tsai, 2014). When an agent rationally persuades a user, it offers reasons, evidence or arguments. It is possible to construct a DSS that rationally persuades a human operator to choose the right action, yet represents information in a way that is paternalistic or disrespectful by being incomplete, simplified or obfuscating. Rational persuasion may be motivated by distrust in the DM's capacity to gather, weigh or evaluate evidence. It may also intrude on the user's deliberative activities in ways that devalue her reflective decision-making processes. Manipulative rational persuasion could thus be co-opted as an effective tool of disinformation.

The question is: Can a DSS gain and maintain trust and avoid paternalism?

Increasingly, DSS use vastly more data and operations on that data than a single DM could understand, rendering individual reflective cognition problematic. When recommendations align with human ideas, there may be little cause for concern. However, when recommendations diverge from human intuition, humans must either trust the system and follow its dictates without necessarily knowing why they are agreeing, or reject the system in favour of a suboptimal alternative and put operations at risk. DSS makers may try to improve trust with honest articulation of how decisions are generated, but it is likely that information will necessarily be simplified and manipulated to facilitate consent. How should human trust be established in such systems? How are trust relationships established and maintained as DSS move from support to semi-autonomous and autonomous decision making? What are the legal and regulatory impacts of these findings (Calo, Froomkin & Kerr, 2016)? This research project seeks to contribute to human and autonomous decision superiority (Defence Science & Technology Group, 2017).
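To make the contrast above concrete, here is a minimal sketch in Python of two ways a DSS might render the same recommendation: a persuasive summary that hides uncertainty and alternatives, and a transparent rendering that supports the DM's own deliberation. All class, field and function names here are hypothetical illustrations, not drawn from any system discussed in this project.

```python
# Hypothetical sketch only: names and fields are invented for illustration.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Recommendation:
    action: str
    confidence: float                                        # system's estimated probability the action is best
    evidence: List[str] = field(default_factory=list)        # data and models consulted
    alternatives: List[str] = field(default_factory=list)    # competing actions and why they were ranked lower
    caveats: List[str] = field(default_factory=list)         # known simplifications and gaps

def persuasive_summary(rec: Recommendation) -> str:
    """A 'persuasive' rendering: fluent and confident, but omits uncertainty and alternatives."""
    return f"Recommended action: {rec.action}."

def transparent_summary(rec: Recommendation) -> str:
    """A rendering that gives the decision maker material for her own deliberation."""
    return (
        f"Recommended action: {rec.action} (confidence {rec.confidence:.0%}).\n"
        f"Evidence: {'; '.join(rec.evidence)}\n"
        f"Alternatives considered: {'; '.join(rec.alternatives)}\n"
        f"Caveats: {'; '.join(rec.caveats)}"
    )
```

The point of the sketch is the design choice, not the code: the second rendering invites the DM to deliberate, while the first asks only for assent.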

References:

Calo, R., Froomkin, M., & Kerr, I. (Eds.). (2016). Robot Law. Cheltenham, UK: Edward Elgar Publishing.

Defence Science & Technology Group. (2017). DST Science and Technology Capability Portfolio. Department of Defence, Science and Technology, Australian Government. Retrieved 9 Mar 2017 from https://www.dst.defence.gov.au/publication/science-and-technology-capability-portfolio

Giboney, J. S., Brown, S. A., Lowry, P. B., & Nunamaker Jr, J. F. (2015). User acceptance of knowledge-based system recommendations: Explanations, arguments, and fit. Decision Support Systems, 72, 1-10. doi:10.1016/j.dss.2015.02.005

Papamichail, K. N., & French, S. (2003). Explaining and justifying the advice of a decision support system: a natural language generation approach. Expert Systems With Applications, 24(1), 35-48. doi:10.1016/S0957-4174(02)00081-7

Tsai, G. (2014). Rational Persuasion as Paternalism. Philosophy & Public Affairs, 42(1), 78-112. doi:10.1111/papa.12026

Posted in research project

RezBaz Brisbane Digital Tools Poster

Poster for Brisbane Research Bazaar at The University of Queensland, 7-9 February 2017.

 

Posted in conferences

A cognitive decision interface to optimise integrated weed management

A cognitive decision interface to optimise integrated weed management. 7th Asian-Australasian Conference on Precision Agriculture, Hamilton, New Zealand 16-18 October 2017. [.pdf]

Kate Devitt*1, Tristan Perez, Debra Polson1, Tamara Pearce1, Ryan Quagliata1, Wade Taylor1, Jenine Beekhuyzen1, David Thornby
1 Queensland University of Technology, Brisbane, Australia
Innokas Intellectual Services, Upper Coomera, Australia

Weed management is becoming more complex due to the rise of herbicide-resistant weeds. Integrated weed management strategies are recommended to minimise herbicide resistance. However, weed management can be daunting and uncertain, leading to biased, avoidant or suboptimal decisions. Existing weed management tools can be insensitive to user needs and to changing contexts over time. This paper discusses a proof-of-concept cognitive tool for integrated weed management decisions.

Our team has taken initial steps in the design of an interactive tool for cotton growers that allows them to explore the impact of individual priorities and strategy preferences (optimistic, pessimistic and risk-related) on weed management decisions given uncertainty in temperature and rainfall. Our research tackles the challenge of engaging stakeholders in complex decision making in three ways: 1) recognising individual cognitive priorities, 2) visualising scientific weed management in an appealing mobile interface, and 3) representing decision uncertainties and risk weighted against cognitive priorities.

Specifically, our tool communicates personalised barnyard grass management strategies for pre-crop and in-crop cotton weeding decisions. We ranked a set of actions including applications of herbicides: glyphosate, paraquat (shielded and unshielded), Group A herbicides, trifluralin, diuron, pendimethalin, s-metolachlor, fluometuron and glufosinate; and non-chemical methods such as soil disturbance at various times prior to planting, at planting and in crop. Each action was evaluated against personal priorities including saving time/effort, health/safety, saving money, sustainability and effectiveness.
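As a rough illustration of what "evaluated against personal priorities" could look like computationally, the sketch below ranks a few actions by a weighted sum over the five priorities. The weights, action set and scores are invented purely for exposition; they are not the values or the model used in the tool.

```python
# Illustrative sketch only: weights and scores are invented, not the tool's data.
priorities = {                 # grower's weighting of each priority (sums to 1.0)
    "time_effort": 0.15,
    "health_safety": 0.25,
    "cost": 0.20,
    "sustainability": 0.15,
    "effectiveness": 0.25,
}

actions = {                    # expected score of each action on each priority (0-1)
    "glyphosate": {"time_effort": 0.9, "health_safety": 0.5, "cost": 0.8,
                   "sustainability": 0.3, "effectiveness": 0.7},
    "shielded paraquat": {"time_effort": 0.6, "health_safety": 0.4, "cost": 0.6,
                          "sustainability": 0.5, "effectiveness": 0.8},
    "soil disturbance": {"time_effort": 0.4, "health_safety": 0.9, "cost": 0.5,
                         "sustainability": 0.8, "effectiveness": 0.6},
}

def personal_score(action_scores: dict, priorities: dict) -> float:
    """Weighted-sum value of an action given the grower's personal priorities."""
    return sum(priorities[p] * action_scores[p] for p in priorities)

ranking = sorted(actions, key=lambda a: personal_score(actions[a], priorities), reverse=True)
print(ranking)   # actions ordered by fit to this grower's priorities
```

Uncertainty in temperature and rainfall could then be layered over such scores, for example by averaging them across weather scenarios, which is the kind of extension the next paragraph gestures at.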

The adoption of decision support in AgTech improves when users can see the objective benefits of recommended actions in proportion to their own needs and measures of success. Our interactive decision tool provides individualised decision support and quantifies uncertainty about attributes relevant to decision makers to optimise integrated weed management. The framework, however, can be extended to other decision-making contexts where user priorities and decision uncertainties need to be incorporated alongside scientific best practice.

Full paper [.pdf]

Posted in conferences

Philosophy and robots

Philosophy is the study of what there is, what we can know, what we ought to be and what we ought to do. The philosophy of robots thus is the study of what robots are, how we can know what robots are and what robots ought to be and do. The philosophy of robots is relevant to the robot team (engineers, designers, programmers), regulators (e.g. government, lawyers, insurers), citizens and academics. Each group has different beliefs and understandings about robots, for example, citizens may consider robots in terms of trust and use, whereas academics may consider robots in terms of theory, hypotheticals and consequences.

What are robots? Robots are embodied artificial agents that serve human needs. But where do robots begin and end? The obvious answer is that robots are just the physical parts of which they are composed. So a robot's identity begins at a physical inception event, say where the first hinges or components are put together. What if multiple robots are created at the same time with radically different functions or purposes, nevertheless from similar components? Are physical tokens of the same robot type treated as a single individual for the sake of legislation and culpability? What about robots with different bodies, but the same algorithms controlling them? These questions are not trivial when considering issues of IP and responsibility. Robots are defined by their physical boundaries, their representations of the internal and external world, and the programming that directs their action.

How can we know what robots are? Deterministic robots have limited (though sometimes extraordinary) behaviours that are causally simple. Deterministic robots are epistemically transparent; that is, the counterfactual dependencies between programming and outcomes are predictable and reliable. Robots become more difficult to know the more they are indeterminate and free to make their own decisions in complex environments. Instead of following clear directives, semi-autonomous and autonomous robots evaluate sensory inputs, programmed rules and empirically generated representations to make decisions for themselves. When indeterministic robots have limited capabilities and are tested and used in constrained and predictable environments, the uncertainties of their behaviours are reduced. Companies and institutions making these indeterminate robots can report reliable claims about likely behaviours in limited contexts of use that are understandable to non-robot-team audiences. As robots become more complex, their operating parameters may exceed human capacity to understand. This is unproblematic so long as robots make decisions consistent with human decision makers. However, how will regulators judge decisions inconsistent with human choices? It is likely that future robots will make unfathomable decisions. How will humans negotiate trust relationships with robots when they do not know them?

What ought robots be? No matter the metaphysical and epistemic challenges, it might be supposed that robots ought to be aligned to our best ethical theories and principles of rationality and decision making. However, given continued conceptual disagreement in these areas, I argue that robots must be responsive to social norming. The question remains whether robots should fill a familiar social role (e.g. family member, friend, trusted acquaintance) or a sui generis social niche designed specifically for human-robot relationships.

Posted in invited paper

Imagining Decisions: Likelihood and Risk Assessment in Counterfactual and Future Thinking

**Accepted for Poster presentation**


Conference: Annual Conference for the Society for Judgment and Decision Making 2016

Title: Imagining Decisions: Likelihood and Risk Assessment in Counterfactual and Future thinking

Abstract: Imaginative thinking is a way to try to mitigate cognitive biases that lead to poor decisions (Montibeller & Winterfeldt, 2015). However, remembering the past, imagining the future and wondering what might have been share phenomenological, cognitive and neural mechanisms, leading to similar biases (De Brigard, Addis, Ford, Schacter, & Giovanello, 2013; Schacter, Addis, & Buckner, 2007). Understanding how individuals assign likelihoods to imagined decisions is critical to assessing the degree to which these mechanisms represent either a cognitive bias or a rational response to options.

To this end, I consider relationships between emotion, vivacity and perceived risk on both the number of genuine options we consider and the probabilistic breadth of those options. One concern is that people's judgements of likelihoods may be spurious. That is to say, episodic counterfactual thinking and episodic future thinking involve likelihood attributions based on emotional priorities and biased risk assessment rather than statistical likelihoods or rational probabilities. Positivity, optimism and narrative bias have all been shown to pervade counterfactual and future thinking (Betsch, Haase, Renkewitz, & Schmid, 2015; De Brigard et al., 2013). In particular, people avoid 'upward' counterfactuals (considering positive outcomes that could have occurred) that may devalue their actual decisions and make them feel worse, yet offer alternatives for better future decisions (Markman, McMullen, Elizaga, & Mizoguchi, 2006).

From this I hypothesise that unfamiliar yet potentially transformative options are suppressed when we imagine what could have been, allowing mundane anticipated outcomes to rise to the fore. I propose a decision framework to free imagined decisions. The framework captures episodic memories of decisions (details, phenomenology, thoughts, emotions) and feeds them into how we represent alternate and future scenarios. Why? To get more buy-in for speculative imagined possibilities, in particular by making possibilities more vivid. The claim is that the more buy-in we can get to the imagined, the more likely the information is to positively influence decision-making behaviour. We then expose decision makers to novel options and scenarios building on their own experiences, increasing the vivacity and weighting of unfamiliar yet rationally important options and reducing the negative affect associated with familiar yet critical realities.
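As a very rough sketch of what "capturing episodic memories of decisions and feeding them into imagined scenarios" might look like as a data structure, consider the following. All class, field and function names here are hypothetical illustrations of the idea, not the framework's actual specification.

```python
# Hypothetical sketch only: the record structure is invented for illustration.
from dataclasses import dataclass, field
from typing import List

@dataclass
class EpisodicDecisionRecord:
    decision: str                                   # what was actually decided
    details: str                                    # contextual detail of the episode
    phenomenology: str                              # how the memory feels (free-text description)
    thoughts: List[str] = field(default_factory=list)
    emotions: List[str] = field(default_factory=list)
    vivacity: float = 0.5                           # 0-1 rating used to weight imagined scenarios

def seed_scenario(record: EpisodicDecisionRecord, alternative: str) -> dict:
    """Build an imagined (counterfactual or future) scenario from a real episode,
    inheriting its detail and vivacity so the alternative feels less abstract."""
    return {
        "base_episode": record.decision,
        "alternative": alternative,
        "inherited_detail": record.details,
        "weight": record.vivacity,
    }
```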

In sum, this paper articulates how to construct an information environment that: acknowledges shared cognitive mechanisms of episodic counterfactual thinking and episodic future thinking; predicts how these processes affect rational decision making; and mitigates against cognitive biases by promoting behaviours likely to generate more rational information analysis.

References:

Betsch, C., Haase, N., Renkewitz, F., & Schmid, P. (2015). The Narrative Bias Revisited: What Drives the Biasing Influence of Narrative Information on Risk Perceptions? Judgment and Decision Making, 10(3), 241-264.

De Brigard, F., Addis, D. R., Ford, J. H., Schacter, D. L., & Giovanello, K. S. (2013). Remembering what could have happened: neural correlates of episodic counterfactual thinking. Neuropsychologia, 51(12), 2401-2414. doi:10.1016/j.neuropsychologia.2013.01.015

Markman, K. D., McMullen, M. N., Elizaga, R. A., & Mizoguchi, N. (2006). Counterfactual Thinking and Regulatory Fit. Judgment and Decision Making, 1(2), 98-107.

Montibeller, G., & Winterfeldt, D. (2015). Cognitive and Motivational Biases in Decision and Risk Analysis. Risk Analysis, 35(7), 1230-1251. doi:10.1111/risa.12360

Schacter, D. L., Addis, D. R., & Buckner, R. L. (2007). Remembering the past to imagine the future: the prospective brain. Nature Reviews Neuroscience, 8(9), 657-661.

Posted in conferences

International Conference on Thinking 2016


Devitt, S.K., Pearce, T.R., Perez, T. & Bruza, P. (2016). Mitigating against cognitive bias when eliciting expert intuitions. International Conference on Thinking. Brown University, Providence RI, 4-6 Aug. [.pdf handout]

Abstract: Experts are increasingly being called upon to build decision support systems. Expert intuitions and reflective judgments are subject to a similar range of cognitive biases as those of ordinary folks, with additional levels of overconfidence bias in their judgments. A formal process of hypothesis elicitation is one way to mitigate some of the impact of systematic biases such as anchoring bias and overconfidence bias. Normative frameworks for hypothesis or 'novel option' elicitation are available across multiple disciplines. All frameworks acknowledge the importance and difficulty of generating hypotheses that are a) sufficiently numerous, b) lateral, c) relevant and d) plausible. This paper explores whether systematic hypothesis generation can produce the desired degree of creative, 'out-of-the-box' options, given that abductive reasoning is one of the least tractable styles of thinking and appears to resist systematisation. I argue that while there is no universal systematic hypothesis generation procedure, experts can be exposed to deliberate and systematic information ecosystems to reduce the prevalence of certain types of cognitive biases and improve decision support systems.

Keywords: Abduction, cognitive bias, option generation

Read .pdf handout

Posted in conferences

Bayes on the Beach 2015


I’m heading to the Gold Coast December 7-9 for Bayes on the Beach

Can coherence solve prior probabilities for Bayesianism?

Coherence between propositions promises to fix the vexing circumstance of prior probabilities for subjective Bayesians. This paper examines the role of coherence as a source of justification for Bayesian agents, particularly the argument that all propositions must cohere within an agent's 'web of belief', aka confirmational holism. Unfortunately, confirmational holism runs up against a potentially devastating argument: adding a belief to a set can make the set more coherent while making it less likely that all of its members are true, so the more coherent set is less likely to be true than the less coherent one. In response, I propose confirmational chorism (CC) to avoid this troubling outcome. CC posits that coherence adds epistemic justification within limited, logically consistent sets of beliefs exhibiting a satisficing degree of strength and of inferential and explanatory connection. Limited coherence may resolve the above argument, but raises the need for another kind of justification: coordination (integration across sets of beliefs). Belief coordination requires suppressing some beliefs and communicating other beliefs to ensure convergence on the right action for performance success. Thus, a Bayesian-formed belief in any particular context is justified not just because it is reliably formed and coherent, but also because of how it is coordinated between local and holistic goals.
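A toy numerical illustration of that argument, with numbers invented purely for exposition: because the probability of a conjunction can never exceed the probability of its conjuncts, enlarging a belief set cannot raise the probability that every member is true, however coherence-raising the addition is.

```python
# Toy example; the numbers are illustrative, not drawn from the paper.
p_small_set = 0.6            # P(B1 & B2): a small, loosely connected belief set
p_new_given_set = 0.9        # P(B3 | B1 & B2): a highly explanatory, coherence-raising addition

p_larger_set = p_small_set * p_new_given_set    # P(B1 & B2 & B3) = 0.54

# The enlarged, more coherent set is less likely to be wholly true.
print(p_larger_set < p_small_set)               # True
```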

skdevitt Bayes poster 2015

Posted in conferences

Cognitive Information Science


Posted in Slides

S. Kate Devitt


Research Associate, Institute for Future Environments and the Faculty of Law, Queensland University of Technology