Philosophy and robots

Philosophy is the study of what there is, what we can know, what we ought to be and what we ought to do. The philosophy of robots is thus the study of what robots are, how we can know what robots are, and what robots ought to be and do. The philosophy of robots is relevant to the robot team (engineers, designers, programmers), regulators (e.g. government, lawyers, insurers), citizens and academics. Each group has different beliefs and understandings about robots: citizens may consider robots in terms of trust and use, for example, whereas academics may consider robots in terms of theory, hypotheticals and consequences.

What are robots?—Robots are embodied artificial agents that serve human needs. But where do robots begin and end? The obvious answer is that robots consist of the physical parts of which they are composed. So a robot’s identity begins at a physical inception event, say when the first hinges or components are put together. What if multiple robots are created at the same time from similar components, yet with radically different functions or purposes? Are physical tokens of the same robot type treated as a single individual for the purposes of legislation and culpability? What about robots with different bodies, but the same algorithms controlling them? These questions are not trivial when considering issues of intellectual property and responsibility. Robots are defined by physical boundaries, representations of the internal and external world, and programming that directs action.

How can we know what robots are? Deterministic robots have limited (though sometimes extraordinary) behaviours that are causally simple. Deterministic robots are epistemically transparent. That is to say, the counterfactual dependencies between programming and outcomes are predictable and reliable. Robots become more difficult to know the more they are indeterminate and free to make their own decisions in complex environments. Instead of clear directives, semi-autonomous and autonomous robots evaluate inputs from their senses, programmed rules and empirically generated representations to make decisions for themselves. When indeterministic robots have limited capabilities and are tested and used in constrained and predictable environments, the uncertainties of their behaviours are reduced. Companies and institutions making these indeterminate robots can report reliable claims of likely behaviours in limited contexts of use that are understandable to non-robot-team audiences. As robots become more complex, their operating parameters may exceed human capacity to understand. This is unproblematic so long as robots make decisions consistent with those of human decision makers. However, how will regulators judge decisions inconsistent with human choices? It is likely that future robots will make unfathomable decisions. How will humans negotiate trust relationships with robots when they do not know them?
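The contrast between epistemically transparent and indeterminate robots can be sketched in code. The controllers below are my own illustration (not from any actual robot system): a fixed rule supports reliable counterfactual claims about behaviour, while a stochastic policy supports only probabilistic ones.

```python
import random

# Illustrative sketch: a deterministic controller is epistemically transparent --
# the same input always yields the same action -- whereas a stochastic
# (indeterministic) policy is not. Both controllers here are hypothetical.

def deterministic_controller(obstacle_distance: float) -> str:
    """Fixed rule: stop whenever an obstacle is within 1.0 m."""
    return "stop" if obstacle_distance < 1.0 else "advance"

def stochastic_controller(obstacle_distance: float, rng: random.Random) -> str:
    """Learned-style policy: the closer the obstacle, the more likely a stop."""
    p_stop = 1.0 / (1.0 + obstacle_distance)
    return "stop" if rng.random() < p_stop else "advance"

# The deterministic rule licenses counterfactuals like
# "had the obstacle been at 0.5 m, the robot would have stopped":
assert deterministic_controller(0.5) == "stop"
assert all(deterministic_controller(2.0) == "advance" for _ in range(100))

# The stochastic policy, run on identical input, produces varied actions;
# only claims about likely behaviour can be reported.
rng = random.Random(0)
actions = {stochastic_controller(0.5, rng) for _ in range(100)}
```

Under repeated identical inputs the stochastic controller yields both actions, which is exactly why claims about such robots must be framed as likelihoods over contexts of use.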

What ought robots be? No matter the metaphysical and epistemic challenges, it might be supposed that robots ought to be aligned with our best ethical theories and principles of rationality and decision making. However, given continued conceptual disagreement in these areas, I argue that robots must be responsive to social norming. The question remains whether robots should fill a familiar social role (e.g. family member, friend, trusted acquaintance) or a sui generis social niche designed specifically for human-robot relationships.

Posted in invited paper

Imagining Decisions: Likelihood and Risk Assessment in Counterfactual and Future Thinking

**Accepted for Poster presentation**


Conference: Annual Conference for the Society for Judgment and Decision Making 2016

Title: Imagining Decisions: Likelihood and Risk Assessment in Counterfactual and Future Thinking

Abstract: Imaginative thinking is a way to mitigate against cognitive biases that lead to poor decisions (Montibeller & Winterfeldt, 2015). However, remembering the past, imagining the future and wondering what might have been share phenomenological, cognitive and neural mechanisms, leading to similar biases (De Brigard, Addis, Ford, Schacter, & Giovanello, 2013; Schacter, Addis, & Buckner, 2007). Understanding how individuals assign likelihoods to imagined decisions is critical to assessing the degree to which these mechanisms represent either a cognitive bias or a rational response to options.

To this end, I consider relationships between emotion, vivacity and perceived risk on both the number of genuine options we consider and the probabilistic breadth of those options. One concern is that people’s judgements of likelihoods may be spurious. That is to say, episodic counterfactual thinking and episodic future thinking involve likelihood attributions based on emotional priorities and biased risk assessment rather than statistical likelihoods or rational probabilities. Positivity, optimism and narrative bias have all been shown to pervade counterfactual and future thinking (Betsch, Haase, Renkewitz, & Schmid, 2015; De Brigard et al., 2013). In particular, people avoid ‘upward’ counterfactuals (considering positive outcomes that could have occurred) that may devalue their actual decisions and make them feel worse, yet offer alternatives for better future decisions (Markman, McMullen, Elizaga, & Mizoguchi, 2006).

From this I hypothesise that unfamiliar, yet potentially transformative options are suppressed when we imagine what could have been, allowing mundane anticipated outcomes to rise to the fore. I propose a decision framework to free imagined decisions. The framework captures episodic memories of decisions (details, phenomenology, thoughts, emotions) and feeds them into how we represent alternate and future scenarios. The aim is to get more buy-in for speculative imagined possibilities, in particular by making possibilities more vivid. The claim is that the more buy-in we can get to the imagined, the more likely information will positively influence decision-making behaviour. Decision makers are then exposed to novel options and scenarios building on their own experiences, increasing the vivacity and weighting of unfamiliar, yet rationally important options and reducing the negative affect associated with familiar, yet critical realities.

In sum, this paper articulates how to construct an information environment that: acknowledges shared cognitive mechanisms of episodic counterfactual thinking and episodic future thinking; predicts how these processes affect rational decision making; and mitigates against cognitive biases by promoting behaviours likely to generate more rational information analysis.
References:

Betsch, C., Haase, N., Renkewitz, F., & Schmid, P. (2015). The Narrative Bias Revisited: What Drives the Biasing Influence of Narrative Information on Risk Perceptions? Judgment and Decision Making, 10(3), 241-264.

De Brigard, F., Addis, D. R., Ford, J. H., Schacter, D. L., & Giovanello, K. S. (2013). Remembering what could have happened: neural correlates of episodic counterfactual thinking. Neuropsychologia, 51(12), 2401-2414. doi:10.1016/j.neuropsychologia.2013.01.015

Markman, K. D., McMullen, M. N., Elizaga, R. A., & Mizoguchi, N. (2006). Counterfactual Thinking and Regulatory Fit. Judgment and Decision Making, 1(2), 98-107.

Montibeller, G., & Winterfeldt, D. (2015). Cognitive and Motivational Biases in Decision and Risk Analysis. Risk Analysis, 35(7), 1230-1251. doi:10.1111/risa.12360

Schacter, D. L., Addis, D. R., & Buckner, R. L. (2007). Remembering the past to imagine the future: the prospective brain. Nature Reviews Neuroscience, 8(9), 657-661.

Posted in conferences

International Conference on Thinking 2016


Devitt, S.K., Pearce, T.R., Perez, T. & Bruza, P. (2016). Mitigating against cognitive bias when eliciting expert intuitions. International Conference on Thinking. Brown University, Providence RI, 4-6 Aug. [.pdf handout]

Abstract: Experts are increasingly being called upon to build decision support systems. Expert intuitions and reflective judgments are subject to a similar range of cognitive biases as those of ordinary folks, with additional levels of overconfidence in their judgments. A formal process of hypothesis elicitation is one way to mitigate some of the impact of systematic biases such as anchoring bias and overconfidence bias. Normative frameworks for hypothesis or ‘novel option’ elicitation are available across multiple disciplines. All frameworks acknowledge the importance and difficulty of generating hypotheses that are a) sufficiently numerous, b) lateral, c) relevant and d) plausible. This paper explores whether systematic hypothesis generation can produce the desired degree of creative, ‘out-of-the-box’ style options, given that abductive reasoning is one of the least tractable styles of thinking and appears to resist systematisation. I argue that while there is no universal systematic hypothesis generation procedure, experts can be exposed to deliberate and systematic information ecosystems that reduce the prevalence of certain types of cognitive biases and improve decision support systems.

Keywords: Abduction, cognitive bias, option generation

Read .pdf handout

Posted in conferences

Bayes on the Beach 2015

I’m heading to the Gold Coast December 7-9 for Bayes on the Beach.

Can coherence solve prior probabilities for Bayesianism?

Coherence between propositions promises to fix the vexing problem of prior probabilities for subjective Bayesians. This paper examines the role of coherence as a source of justification for Bayesian agents, particularly the argument that all propositions must cohere within an agent’s ‘web of belief’, aka confirmational holism. Unfortunately, confirmational holism runs into a potentially devastating objection: a more coherent set of beliefs, formed by adding a belief to a less coherent set, can be less likely to be true than the less coherent set. In response, I propose confirmational chorism (CC) to avoid this troubling outcome. CC posits that coherence adds epistemic justification via limited, logically consistent sets of beliefs exhibiting a satisficing degree of strength and of inferential and explanatory connection. Limited coherence may resolve the above objection, but raises the need for another kind of justification: coordination (integration across sets of beliefs). Belief coordination requires suppressing some beliefs and communicating others to ensure convergence on the right action for performance success. Thus, a Bayesian-formed belief in any particular context is justified not just because it is reliably formed and coherent, but also because of how it is coordinated between local and holistic goals.
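The objection can be made numerically concrete. The figures below are my own toy illustration (not from the paper), using the Shogenji ratio as one standard measure of coherence: adding a positively correlated belief raises the set’s coherence while lowering the probability that every belief in the set is true.

```python
# Toy illustration of the objection to confirmational holism:
# a set of beliefs can become MORE coherent yet LESS likely to be wholly true.
# Coherence is measured by the Shogenji ratio C = P(joint) / product(marginals).

def shogenji_coherence(joint: float, marginals: list[float]) -> float:
    prod = 1.0
    for p in marginals:
        prod *= p
    return joint / prod

# A single belief A with P(A) = 0.5.
coherence_A = shogenji_coherence(0.5, [0.5])        # trivially 1.0
truth_A = 0.5                                       # P(all beliefs true)

# Add a positively correlated belief B: P(B) = 0.5, P(A and B) = 0.4.
coherence_AB = shogenji_coherence(0.4, [0.5, 0.5])  # 0.4 / 0.25 = 1.6
truth_AB = 0.4

assert coherence_AB > coherence_A   # the set became more coherent...
assert truth_AB < truth_A           # ...yet less likely to be entirely true
```

Since conjoining any uncertain belief can only lower the joint probability, greater coherence offers no guarantee of greater truth-likelihood, which is precisely the pressure that motivates limiting coherence to smaller sets.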


Posted in conferences

Cognitive Information Science


Posted in Slides

Most cited philosophy papers (Google Scholar data)


Wordle generated from words in titles of most-cited philosophy papers 2010-2015 (raw data)

 

Here’s a nice spreadsheet of the top 395 most-cited philosophy papers 2010-2015 created by Josh Knobe.

UPDATE: By social media request, I have created a second wordle that cleans up the all-caps.


Wordle generated from words in titles of most-cited philosophy papers 2010-2015 (data cleaned)

Posted in philosophy analytics

Australasian Cognitive Science Conference

 


My paper has been accepted for presentation at the Australasian Society for Cognitive Science’s conference.

TITLE:  Defending confirmational chorism against holism: Limited coherence and coordination as sources of epistemic justification.    

KEYWORDS: coherence, coordination, epistemology

ABSTRACT:

This paper examines the role of coherence as a source of epistemic justification, particularly the argument that all beliefs must cohere within one’s ‘web of belief’, aka confirmational holism. Confirmational holism runs into a potentially devastating objection: a more coherent set of beliefs, formed by adding a belief to a less coherent set, can be less likely to be true than the less coherent set. I propose confirmational chorism (CC) to avoid this troubling outcome. CC posits that coherence adds epistemic justification via limited, logically consistent sets of beliefs exhibiting a satisficing degree of strength and of inferential and explanatory connection. Limited coherence may resolve the above objection, but raises the need for another kind of justification: coordination (integration across sets of beliefs). Belief coordination requires suppressing some beliefs and communicating others to ensure convergence on the right action for performance success. Thus, a belief in any particular context is justified not just because it is reliably formed and coherent, but also because of how it is coordinated between local and holistic goals.

Download

Posted in conferences

New metric for ranking philosophy journals

I have been making metrics ranking philosophy journals based on both subjective (philosopher-ranked) and objective (citation data) criteria. Two of my metrics have been linked and commented on by Brian Leiter on his blog.

DEVITT’S LGS-INDEX Top Philosophy Journals (Leiter + Google Scholar)

This was my first metric, combining the Leiter ranking with pure Google Scholar data. I was initially intrigued by how many journals highly valued by philosophers were not included in Google’s ‘philosophy’ category. I was also curious why subjective and objective measurements ranked journals so differently.

DEVITT’S LGSCD-INDEX Top Philosophy Journals (Leiter + Google Scholar + Citable Documents)

My second metric modified the impact of the Google Scholar ranking by a third factor, ‘citable documents’. Quite a different result is found by taking volume of publications into consideration, because a given philosophy journal can publish between 13 (e.g. The Philosophical Review) and 153 (Synthese) articles a year.
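The effect of the citable-documents adjustment can be sketched as follows. The citation scores here are invented for illustration; only the article counts (13 and 153, from The Philosophical Review and Synthese above) come from the post.

```python
# Hypothetical sketch of why dividing by 'citable documents' reshuffles
# rankings: a selective journal can have a higher per-article impact even
# with a much lower total citation score. Citation scores are invented.

journals = {
    # name: (total_citation_score, citable_documents_per_year)
    "Journal A (selective, 13 articles/yr)": (130, 13),
    "Journal B (high volume, 153 articles/yr)": (600, 153),
}

per_document = {
    name: score / docs for name, (score, docs) in journals.items()
}
# Journal A: 130 / 13 = 10.0 citations per document
# Journal B: 600 / 153 ≈ 3.9 citations per document
```

On raw totals Journal B dominates; per citable document the selective journal comes out well ahead, which is the kind of reshuffle the LGSCD-index produces.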

DEVITT’S GSCD-INDEX Modified Google Scholar Metric Top Philosophy Journals

After publishing my second metric above, some philosophers requested a metric that modified the Google Scholar data but stripped out the reputational data from Leiter’s measure. This final metric ranks interdisciplinary philosophy journals much more highly than many traditionally prestigious philosophy journals, though many top philosophy journals retain their high ranking no matter what the metric (e.g. The Journal of Philosophy always ranks in the top 6).

Posted in philosophy analytics

Association for the Scientific Study of Consciousness 2014


 

Title:  Can reliabilism explain how conscious reflection justifies beliefs?


Abstract:

This poster addresses the justificatory role of conscious reflection within a naturalized, reliabilist epistemology. Reliabilism is the view that implicit, mechanistic (System 1) processes can justify beliefs, e.g. perceptual beliefs formed after a history of consistent exposure to normal lighting conditions are justified in a given context with normal lighting. A popular variant of reliabilism is virtue epistemology, where the cognitive circumstances and abilities of an agent play a justificatory role, e.g. the cooperation of the prefrontal cortex and primary visual cortex of the individual perceiving the Müller-Lyer illusion partly justify the belief that the lines are equal in length. While virtue epistemology is a well-endorsed reliabilism for implicit beliefs, its application to explicit, consciously reflective (System 2) processes is more controversial. Critics ask: How can iterations of dumb reliabilist processes produce higher order justification? To respond to this concern, I draw on another agent-centred, normative and reliabilist epistemology—Bayesian epistemology. A Bayesian virtue epistemology argues that reflective hypothesis-testing generated by (largely) implicit Bayesian mechanisms offers higher order reliabilist justification for beliefs. Iterative Bayesian mechanisms (e.g. hierarchically nested probabilistic models) explain the development of higher order beliefs about abstract concepts such as causation, natural laws and theoretical entities traditionally explained by recourse to vague concepts such as ‘the a priori’, ‘intuition’ or ‘the intellect’. A hybrid Bayesian virtue epistemology offers an iterative reliabilist framework to explain how conscious reflection justifies beliefs. However, I acknowledge limitations on Bayesian accounts of justification such as confirmational holism, commutativity and the frame problem.
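The idea of an iterative Bayesian mechanism can be sketched with the simplest possible case. This is my own minimal illustration (not from the poster): repeated conjugate updating of a Beta prior on a coin’s bias, where each posterior becomes the prior for the next round, so belief is built by iterating one simple rule.

```python
# Minimal sketch of an iterative Bayesian mechanism: conjugate Beta-Binomial
# updating, where each round's posterior is the next round's prior.

def beta_update(alpha: float, beta: float, heads: int, tails: int):
    """Conjugate update: Beta(a, b) prior + data -> Beta(a+heads, b+tails)."""
    return alpha + heads, beta + tails

alpha, beta = 1.0, 1.0                    # flat prior: no opinion on the bias
observations = [(8, 2), (7, 3), (9, 1)]   # three batches of (heads, tails)

for heads, tails in observations:
    alpha, beta = beta_update(alpha, beta, heads, tails)

posterior_mean = alpha / (alpha + beta)   # current estimate of P(heads)
assert round(posterior_mean, 3) == 0.781  # 25 / 32
```

Hierarchically nested models of the kind the abstract mentions stack such updates, with the output of one level serving as evidence for beliefs at the level above.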

Download poster

 

Posted in conferences

Susannah Kate Devitt

I am an Associate Lecturer in the School of Information Systems, Science and Engineering Faculty, Queensland University of Technology.

Posted in biography

S. Kate Devitt


Cognitive Information Scientist