Poster for Brisbane Research Bazaar at The University of Queensland, 7-9 February 2017.
‘A cognitive decision interface to optimise integrated weed management’ submitted for review to the 7th Asian-Australasian Conference on Precision Agriculture, Hamilton, New Zealand 16-18 October 2017.
Kate Devitt*1, Tristan Perez2, Debra Polson3, Tamara Pearce4, Ryan Quagliata5, Wade Taylor6, David Thornby7, Jenine Beekhuyzen8
1-6 Queensland University of Technology, Brisbane, Australia
7 Innokas Intellectual Services, Upper Coomera, Australia
8 Griffith University, Brisbane, Australia
Weed management is becoming more complicated with the rise of herbicide-resistant weeds that threaten farming sustainability. Integrated weed management strategies are recommended to minimize herbicide resistance. However, weed management can be daunting and uncertain, leading to biased, avoidant or suboptimal decisions. Additionally, existing weed management tools can be insensitive to user needs.
Our team has designed a highly visual interactive tool for cotton growers that allows them to explore a range of different weed management strategies according to their individual preferences and priorities, while representing the uncertainties associated with each decision. Our research tackles the challenge of engaging stakeholders in complex decision making in three ways: 1) recognizing individual cognitive priorities, 2) visualising scientific weed management in an appealing mobile interface, and 3) representing decision uncertainties and risk weighted against cognitive priorities.
We will discuss our smartphone (and tablet) tool for the communication of personalised weed management strategies covering thirteen pre-season and in-season weeding decisions for an irrigated cotton crop. We ranked a set of herbicides including glyphosate, paraquat (shielded and unshielded), Group A herbicides, trifluralin, diuron, pendimethalin, S-metolachlor, fluometuron+, glufosinate, and non-chemical methods such as soil preparation (tickle vs. cultivate), tillage (including inter-row tillage) and sowing method, against individual preferences including:
Our interface provides individualized decision support and quantifies uncertainty about attributes relevant to decision-makers to optimise integrated weed management. The proof of concept has been constructed for cotton integrated weed management decisions, but the framework can be extended to any decision-making context where user priorities and decision uncertainties need to be incorporated alongside scientific best practice.
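The idea of ranking management strategies against individual preferences can be sketched as a simple multi-criteria weighted score. The strategy names, attribute scores and weights below are illustrative assumptions for exposition only, not the actual model or data behind the tool.

```python
# Illustrative multi-criteria ranking of weed management strategies.
# All strategy names, attribute scores and weights are hypothetical.

def rank_strategies(strategies, weights):
    """Score each strategy by the weighted sum of its attributes and
    return the strategies sorted from best to worst."""
    def score(attrs):
        return sum(weights[k] * v for k, v in attrs.items())
    return sorted(strategies, key=lambda s: score(s["attributes"]), reverse=True)

# Hypothetical grower priorities (higher weight = more important).
weights = {"efficacy": 0.5, "cost": 0.3, "resistance_risk": 0.2}

# Hypothetical strategies, scored 0-1 on each attribute
# (cost and resistance_risk are pre-inverted so higher is better).
strategies = [
    {"name": "glyphosate",
     "attributes": {"efficacy": 0.9, "cost": 0.8, "resistance_risk": 0.2}},
    {"name": "inter-row tillage",
     "attributes": {"efficacy": 0.6, "cost": 0.5, "resistance_risk": 0.9}},
    {"name": "paraquat (shielded)",
     "attributes": {"efficacy": 0.8, "cost": 0.6, "resistance_risk": 0.6}},
]

ranked = rank_strategies(strategies, weights)
print([s["name"] for s in ranked])
```

Changing the weights re-ranks the same strategies, which is the basic mechanism by which individual priorities can alter the recommended plan.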
Philosophy is the study of what there is, what we can know, what we ought to be and what we ought to do. The philosophy of robots thus is the study of what robots are, how we can know what robots are and what robots ought to be and do. The philosophy of robots is relevant to the robot team (engineers, designers, programmers), regulators (e.g. government, lawyers, insurers), citizens and academics. Each group has different beliefs and understandings about robots, for example, citizens may consider robots in terms of trust and use, whereas academics may consider robots in terms of theory, hypotheticals and consequences.
What are robots?—Robots are embodied artificial agents designed to serve human needs. But where do robots begin and end? The obvious answer is that a robot simply is the physical parts of which it is composed. So, a robot’s identity begins at a physical inception event, say where the first hinges or components are put together. But what if multiple robots are created at the same time from similar components, yet with radically different functions or purposes? Are physical tokens of the same robot type treated as a single individual for the sake of legislation and culpability? What about robots with different bodies, but the same algorithms controlling them? These questions are not trivial when considering issues of IP and responsibility. Robots are defined by their physical boundaries, their representations of the internal and external world, and the programming that directs their action.
How can we know what robots are? Deterministic robots have limited (though sometimes extraordinary) behaviours that are causally simple. Deterministic robots are epistemically transparent. That is to say, the counterfactual dependencies between programming and outcomes are predictable and reliable. Robots become more difficult to know the more they are indeterminate and free to make their own decisions in complex environments. Instead of following clear directives, semi-autonomous and autonomous robots evaluate inputs from their sensors, programmed rules and empirically generated representations to make decisions for themselves. When indeterministic robots have limited capabilities and are tested and used in constrained and predictable environments, the uncertainties of their behaviours are reduced. Companies and institutions making these indeterminate robots can report reliable claims of likely behaviours in limited contexts of use that are understandable to non-robot-team audiences. As robots become more complex, their operating parameters may exceed human capacity to understand. This is unproblematic so long as robots make decisions consistent with those of human decision makers. However, how will regulators judge decisions inconsistent with human choices? It is likely that future robots will make unfathomable decisions. How will humans negotiate trust relationships with robots when they do not know them?
What ought robots be? No matter the metaphysical and epistemic challenges, it might be supposed that robots ought to be aligned to our best ethical theories and principles of rationality and decision making. However, given continued conceptual disagreement in these areas, I argue that robots must be responsive to social norming. The question remains whether robots should fill a familiar social role (e.g. family member, friend, trusted acquaintance) or a sui generis social niche designed specifically for human-robot relationships.
**Accepted for Poster presentation**
Conference: Annual Conference for the Society for Judgment and Decision Making 2016
Title: Imagining Decisions: Likelihood and Risk Assessment in Counterfactual and Future thinking
Abstract: Imaginative thinking is one way to mitigate the cognitive biases that lead to poor decisions (Montibeller & Winterfeldt, 2015). However, remembering the past, imagining the future, and wondering what might have been share phenomenological, cognitive and neural mechanisms, leading to similar biases (De Brigard, Addis, Ford, Schacter, & Giovanello, 2013; Schacter, Addis, & Buckner, 2007). Understanding how individuals assign likelihoods to imagined decisions is critical to assessing the degree to which these mechanisms represent either a cognitive bias or a rational response to options.
To this end, I consider relationships between emotion, vivacity, and perceived risk on both the number of genuine options we consider and the probabilistic breadth of those options. One concern is that people’s judgements of likelihoods may be spurious. That is to say, episodic counterfactual thinking and episodic future thinking involve likelihood attributions based on emotional priorities and biased risk assessment rather than statistical likelihoods or rational probabilities. Positivity, optimism and narrative bias have all been shown to pervade counterfactual and future thinking (Betsch, Haase, Renkewitz, & Schmid, 2015; De Brigard et al., 2013). In particular, people avoid ‘upward’ counterfactuals (considering positive outcomes that could have occurred) that may devalue their actual decisions and make them feel worse, yet offer alternatives for better future decisions (Markman, McMullen, Elizaga, & Mizoguchi, 2006).
From this I hypothesise that unfamiliar, yet potentially transformative options are suppressed when we imagine what could have been, allowing mundane anticipated outcomes to rise to the fore. I propose a decision framework to free imagined decisions. The framework captures episodic memories of decisions (details, phenomenology, thoughts, emotions) and feeds them into how we represent alternate and future scenarios. The aim is to get more buy-in for speculative imagined possibilities, in particular by making those possibilities more vivid. The claim is that the more buy-in we can get to the imagined, the more likely information will positively influence decision-making behaviour. We then expose decision makers to novel options and scenarios building on their own experiences, increasing the vivacity and weighting of unfamiliar, yet rationally important options and reducing the negative affect associated with familiar, yet critical realities.
In sum, this paper articulates how to construct an information environment that: acknowledges shared cognitive mechanisms of episodic counterfactual thinking and episodic future thinking; predicts how these processes affect rational decision making; and mitigates against cognitive biases by promoting behaviours likely to generate more rational information analysis.
Betsch, C., Haase, N., Renkewitz, F., & Schmid, P. (2015). The Narrative Bias Revisited: What Drives the Biasing Influence of Narrative Information on Risk Perceptions? Judgment and Decision Making, 10(3), 241-264.
De Brigard, F., Addis, D. R., Ford, J. H., Schacter, D. L., & Giovanello, K. S. (2013). Remembering what could have happened: neural correlates of episodic counterfactual thinking. Neuropsychologia, 51(12), 2401-2414. doi:10.1016/j.neuropsychologia.2013.01.015
Markman, K. D., McMullen, M. N., Elizaga, R. A., & Mizoguchi, N. (2006). Counterfactual Thinking and Regulatory Fit. Judgment and Decision Making, 1(2), 98-107.
Montibeller, G., & Winterfeldt, D. (2015). Cognitive and Motivational Biases in Decision and Risk Analysis. Risk Analysis, 35(7), 1230-1251. doi:10.1111/risa.12360
Schacter, D. L., Addis, D. R., & Buckner, R. L. (2007). Remembering the past to imagine the future: the prospective brain. Nature Reviews Neuroscience, 8(9), 657-661.
Devitt, S.K., Pearce, T.R., Perez, T. & Bruza, P. (2016). Mitigating against cognitive bias when eliciting expert intuitions. International Conference on Thinking. Brown University, Providence RI, 4-6 Aug. [.pdf handout]
Abstract: Experts are increasingly being called upon to build decision support systems. Expert intuitions and reflective judgments are subject to a similar range of cognitive biases as ordinary folk, with additional levels of overconfidence bias in their judgments. A formal process of hypothesis elicitation is one way to mitigate against some of the impact of systematic biases such as anchoring bias and overconfidence bias. Normative frameworks for hypothesis or ‘novel option’ elicitation are available across multiple disciplines. All frameworks acknowledge the importance and difficulty of generating hypotheses that are a) sufficiently numerous, b) lateral, c) relevant and d) plausible. This paper explores whether systematic hypothesis generation can produce the desired degree of creative, ‘out-of-the-box’ style options, given that abductive reasoning is one of the least tractable styles of thinking and appears to shirk systematization. I argue that while there is no universal systematic hypothesis generation procedure, experts can be exposed to deliberate and systematic information ecosystems to reduce the prevalence of certain types of cognitive biases and improve decision support systems.
Keywords: Abduction, cognitive bias, option generation
Read .pdf handout
I’m heading to the Gold Coast December 7-9 for Bayes on the Beach
Can coherence solve prior probabilities for Bayesianism?
Coherence between propositions promises to fix the vexing problem of prior probabilities for subjective Bayesians. This paper examines the role of coherence as a source of justification for Bayesian agents, particularly the argument that all propositions must cohere within an agent’s ‘web of belief’, aka confirmational holism. Unfortunately, confirmational holism runs up against a potentially devastating argument that a more coherent set of beliefs resulting from the addition of a belief to a less coherent set of beliefs is less likely to be true than the less coherent set of beliefs. In response, I propose confirmational chorism (CC) to avoid this troubling outcome. CC posits that coherence adds epistemic justification via limited, logically consistent sets of beliefs exhibiting a satisficing degree of strength and of inferential and explanatory connection. Limited coherence may resolve the above argument, but raises the need for another kind of justification: coordination (integration across sets of beliefs). Belief coordination requires suppressing some beliefs and communicating other beliefs to ensure convergence on the right action for performance success. Thus, a Bayesian-formed belief in any particular context is justified not just because it is reliably formed and coherent, but also because of how it is coordinated between local and holistic goals.
Here’s a nice spreadsheet of the top 395 most-cited philosophy papers 2010-2015 created by Josh Knobe.
UPDATE: Due to social media request, I have created a second wordle that cleans up the all-caps.
My paper has been accepted as a paper for the Australasian Society for Cognitive Science’s conference.
KEYWORDS: coherence, coordination, epistemology
This paper examines the role of coherence as a source of epistemic justification, particularly the argument that all beliefs must cohere within one’s ‘web of belief’, aka confirmational holism. Confirmational holism runs up against a potentially devastating argument that a more coherent set of beliefs resulting from the addition of a belief to a less coherent set of beliefs is less likely to be true than the less coherent set of beliefs. I propose confirmational chorism (CC) to avoid this troubling outcome. CC posits that coherence adds epistemic justification via limited, logically consistent sets of beliefs exhibiting a satisficing degree of strength and of inferential and explanatory connection. Limited coherence may resolve the above argument, but raises the need for another kind of justification: coordination (integration across sets of beliefs). Belief coordination requires suppressing some beliefs and communicating other beliefs to ensure convergence on the right action for performance success. Thus, a belief in any particular context is justified not just because it is reliably formed and coherent, but also because of how it is coordinated between local and holistic goals.
I have been constructing metrics that rank philosophy journals based on both subjective (philosopher-ranked) and objective (citation data) criteria; two of my metrics have been linked to and commented on by Brian Leiter on his blog.
DEVITT’S LGS-INDEX Top Philosophy Journals (Leiter + Google Scholar)
This was my first metric that combined the Leiter ranking plus pure Google Scholar data. I was initially intrigued by how many journals Google didn’t include in their category ‘philosophy’ that were highly valued by philosophers. I was also curious why subjective and objective measurements ranked journals so differently.
DEVITT’S LGSCD-INDEX Top Philosophy Journals (Leiter + Google Scholar + Citable Documents)
My second metric modified the impact of the Google Scholar ranking by a third factor, ‘citable documents’. Quite a different result is found by taking volume of publications into consideration, because a given philosophy journal can publish between 13 (e.g. The Philosophical Review) and 153 (Synthese) articles a year.
DEVITT’S GSCD-INDEX Modified Google Scholar Metric Top Philosophy Journals
After publishing my second metric above, some philosophers requested a metric that modified the Google Scholar data but stripped out the reputational data from Leiter’s measure. This final metric ranks interdisciplinary philosophy journals much more highly than many traditionally prestigious philosophy journals, though many top philosophy journals retain their high ranking no matter what the metric (e.g. The Journal of Philosophy always ranks in the top 6).
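The general shape of a combined index like those above can be sketched as follows. This is a minimal illustration only: the formula (an equal-weighted average of a normalized reputation score and a normalized per-article citation score), the weights, and every figure below are assumptions for exposition, not the published metrics or their data.

```python
# Illustrative sketch of a combined journal index: a normalized
# reputational (Leiter-style) score averaged with a citation score
# scaled by the journal's volume of citable documents.
# The formula, weights and all figures are hypothetical.

journals = {
    "The Philosophical Review": {"leiter": 4.8, "h5": 25, "citable_docs": 13},
    "Synthese": {"leiter": 3.9, "h5": 36, "citable_docs": 153},
    "The Journal of Philosophy": {"leiter": 4.6, "h5": 24, "citable_docs": 30},
}

def combined_rank(journals):
    """Rank journals by the average of a normalized reputation score
    and a normalized per-article citation score."""
    # Citation impact per citable document, so high-volume journals
    # are not favoured simply for publishing more articles.
    per_article = {n: d["h5"] / d["citable_docs"] for n, d in journals.items()}
    max_pa = max(per_article.values())
    max_leiter = max(d["leiter"] for d in journals.values())
    scores = {
        n: 0.5 * (d["leiter"] / max_leiter) + 0.5 * (per_article[n] / max_pa)
        for n, d in journals.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

print(combined_rank(journals))
```

Dividing by citable documents is what drives the difference noted above between the raw and volume-adjusted rankings: a journal publishing 13 articles a year and one publishing 153 are put on a per-article footing.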