Four Perspectives on Human Bias in Visual Analytics
Emily Wall, Leslie Blaha, Celeste Lyn Paul, Kristin Cook, Alex Endert
Analytic systems, especially mixed-initiative systems, can steer analytical models and adapt views by making inferences from users’ behavioral patterns with the system. Because such systems rely on incorporating implicit and explicit user feedback, they are particularly susceptible to the injection and propagation of human biases. To ultimately guard against the potentially negative effects of systems biased by human users, we must first qualify what we mean by the term bias. Thus, in this paper we describe four different perspectives on human bias that are particularly relevant to visual analytics. We discuss the interplay of human and computer system biases, particularly their roles in mixed-initiative systems. Given that the term bias is used to describe several different concepts, our goal is to facilitate a common language in research and development efforts by encouraging researchers to mindfully choose the perspective(s) considered in their work.
Holistic Reviews in Admissions: Reviewer Biases and Visualization Strategies to Mitigate Them
Poorna Talkad Sukumar, Ronald Metoyer, Shuai He
Visualizations of uncertainty in reasoning are being considered as complementary to visualizations of uncertainty in data and mainly aim to prevent cognitive biases of users to support more accurate decision making. Both of these uncertainty-visualization types work well for applications with large amounts of data and with definite or measurable uncertainties. In this paper, however, we aim to shed light on decision-making applications involving relatively small amounts of data with non-specific, abstract uncertainties but complex cognitive processes. We present an approach to support decision making in such applications wherein the possible biases in their reasoning processes are directly identified and addressed using visualizations. We present an example application – the holistic review process in undergraduate admissions in the United States. We identify potential reviewer biases in the process by matching the descriptions of common biases and reasoning heuristics under uncertainty with reviewer tasks ascertained through interviews and observations. We list examples of the biases identified and provide visualization strategies to mitigate them. While our initial steps look promising, this approach also faces many challenges, some specific to the holistic review process and others more generally applicable.
The Curse of Knowledge in Visual Data Communication
Cindy Xiong, Lisanne van Weelden, Steven Franconeri
The curse of knowledge is an inability to separate one’s own knowledge or expertise from that of an audience. We test the idea that this curse can substantially impair visual communication of data, and has the potential to fixate an analyst on a given pattern in data. Because a viewer can extract many potential relationships and patterns from any set of visualized data values, a viewer may see one pattern in the data as more visually salient than others. We demonstrate this phenomenon in the laboratory, showing that when people are given background information, they see the pattern in the data corresponding to the background information as more visually salient. Critically, they also believe that other viewers will experience the same visual salience, even when they are explicitly told that other viewers are naïve to the background information. The present findings suggest that the curse of knowledge affects the visual perception of data, explaining why presenters, paper authors, and data analysts can fail to connect with audiences when they communicate patterns in those data. Because the curse of knowledge may be difficult for a viewer to inhibit or even detect, analysts making decisions may benefit from visualizing their data in a variety of formats and soliciting the perspectives of others.
Designing Breadth-Oriented Data Exploration for Mitigating Cognitive Biases
Po-Ming Law, Rahul Basole
Exploratory data analysis involves making a series of complex decisions: what should I explore? what questions should I ask? Because users often lack knowledge about the data they are exploring, making these decisions is non-trivial. In making these decisions, heuristics are often applied, potentially causing a biased exploration path. While breadth-oriented data exploration offers a promising way to rectify a biased exploration path, how to design breadth-oriented systems is yet to be explored. In this paper, we propose three considerations in designing systems which support breadth-oriented data exploration. To demonstrate the utility of these design considerations, we illustrate a hypothetical breadth-oriented system. We argue that these design considerations pave the way for understanding how breadth-oriented exploration mitigates biases in exploratory data analysis.
Data Visualization Literacy and Visualization Biases: Cases for Merging Parallel Threads
Hamid Mansoor, Lane Harrison
People are prone to many biases when viewing data visualizations. Recent visualization research has uncovered biases that manifest during visualization use, quantified their impact, and developed strategies for mitigating such biases. In a parallel thread, visualization research has investigated how to measure a person’s data visualization literacy and examine the performance consequences of individual differences in these literacy measures. The aim of this position paper is to make a case for merging these threads. To bridge the gap, we highlight cognitive bias research establishing relationships between the impact of biases and factors such as experience and cognitive ability. Drawing on prior work in visualization biases, we provide examples of how visualization literacy measures may have led to different results in these studies. As research continues to identify and quantify the biases that occur in visualizations, the impact of people’s individual abilities may prove to be an important consideration for analysis and design.
Towards a Bayesian Model of Data Visualization Cognition
Yifan Wu, Larry Xu, Remco Chang, Eugene Wu
Data visualizations are often used to assist decision making with probabilistic data. Different cognitive biases can affect the accuracy of user insights gained during the visual analytics process. However, evaluating bias in visualization usage is challenging and difficult to quantify. In this paper, we propose a Bayesian inference model based on cognitive science research to fill this gap. We outline the details for this model and the evaluation steps, including an end-to-end demonstration experiment that we performed. The results provide initial validation for using a Bayesian inference model to quantitatively measure bias in visual analytics.
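The quantitative idea behind such a model can be illustrated with a minimal sketch (not taken from the paper; the prior, data, and scoring rule here are hypothetical): treat a normative Bayesian observer's posterior as the rational benchmark, then measure how far a user's reported estimate deviates from it.

```python
# Hypothetical sketch: comparing a user's estimate to a normative Bayesian
# posterior. A Beta(a, b) prior over a proportion is updated with Bernoulli
# observations; the signed gap between the user's reported estimate and the
# posterior mean serves as a simple, quantitative bias score.

def posterior_mean(a, b, successes, failures):
    """Posterior mean of a Beta(a, b) prior after observing Bernoulli data."""
    return (a + successes) / (a + b + successes + failures)

def bias_score(user_estimate, a, b, successes, failures):
    """Signed deviation of the user's estimate from the Bayesian posterior mean."""
    return user_estimate - posterior_mean(a, b, successes, failures)

# Uniform Beta(1, 1) prior, 7 successes in 10 trials -> posterior mean 8/12.
print(round(posterior_mean(1, 1, 7, 3), 3))   # 0.667
print(round(bias_score(0.9, 1, 1, 7, 3), 3))  # 0.233, i.e. overestimation
```

A richer model would compare full posterior distributions rather than point estimates, but even this point-wise gap shows how "bias" can be operationalized as deviation from a Bayesian norm.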
Discovering Cognitive Biases in a Visual Analytics Environment
Michael Bedek, Alexander Nussbaumer, Luca Huszar, Dietrich Albert
Cognitive biases as systematic reasoning errors may have severe consequences in law enforcement agencies. The European project VALCRI aims to create a visual analytics environment that supports human reasoning and sense-making processes. VALCRI’s goal is to prevent cognitive biases from occurring in the first place, or at least to minimize their potential negative effects. To empirically evaluate this goal, cognitive biases need to be operationalized and measured so that VALCRI can be compared with other existing software solutions. Three approaches, a theory-driven, a behavioral observation, and a data-driven approach, have been applied in parallel to measure and discover a selected set of cognitive biases.
Promoting Representational Fluency for Cognitive Bias Mitigation in Information Visualization
Information visualization involves the use of visual representations of data to amplify cognition. While visualizations do generally amplify cognition, they also have representational biases that encourage thinking and reasoning in certain ways at the expense of others. I propose that the development of representational fluency by visualization designers and users can help mitigate such biases, and that promoting representational fluency in visualization education and practice can be a useful general strategy for mitigating cognitive biases. Literature from various disciplines is discussed, including perspectives on metavisualization, representational competence, and meta-representational competence. Some implications for visualization research, education, and practice are examined. The need for engaging users in deep, effortful cognitive processing is discussed, and is situated within literature on established bias-mitigating strategies. A preliminary research agenda comprising five challenges is also proposed.
Bias by default? A means for a priori interface measurement
Joseph Cottam, Leslie Blaha
Systems have biases. Their interfaces naturally guide a user toward specific patterns of action. For example, modern word processors and spreadsheets are both capable of handling word wrapping, checking spelling, and calculating formulas. You could write a paper in a spreadsheet or do simple business modeling in a word processor. However, their interfaces naturally communicate which function they are designed for. Visual analytic interfaces also have biases. We outline why simple Markov models are a plausible tool for investigating that bias, even prior to user interactions, and how they might be applied to understand a priori system biases. We also discuss some anticipated difficulties in such modeling and touch briefly on what some Markov model extensions might provide.
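A Markov-model analysis along these lines might look like the following sketch (the states and transition matrix are hypothetical, not from the paper): interface states become nodes, transition probabilities encode how readily the layout affords each move, and the stationary distribution reveals which states the design steers users toward before any user data are collected.

```python
# Hypothetical sketch: quantify an interface's a priori bias via the
# stationary distribution of a Markov chain over interface states.
# States: 0 = overview, 1 = filter panel, 2 = detail view.
# Row i gives transition probabilities out of state i, as suggested
# purely by the interface design (e.g., button prominence, defaults).

def stationary(P, steps=1000):
    """Approximate the stationary distribution by iterating dist <- dist * P."""
    n = len(P)
    dist = [1.0 / n] * n
    for _ in range(steps):
        dist = [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]
    return dist

P = [
    [0.2, 0.6, 0.2],   # the overview mostly invites filtering
    [0.1, 0.3, 0.6],   # filtering leads on to detail views
    [0.5, 0.3, 0.2],   # detail views return to the overview
]

pi = stationary(P)
# The state with the most mass is the one the design most strongly favors.
print([round(p, 3) for p in pi])
```

In this toy chain the filter panel accumulates the largest share of long-run time, which is the kind of "default pattern of action" the abstract describes, computable without observing a single user.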
Black Hat Visualization
Michael Correll, Jeffrey Heer
People lie, mislead, and bullshit in a myriad of ways. Visualizations, as a form of communication, are no exception to these tendencies. Yet, the language we use to describe how people can use visualizations to mislead can be relatively sparse. For instance, one can be “lying with vis” or using “deceptive visualizations.” In this paper, we use the language of computer security to expand the space of ways that unscrupulous people (black hats) can manipulate visualizations for nefarious ends. In addition to forms of deception well-covered in the visualization literature, we also focus on visualizations which have fidelity to the underlying data (and so may not be considered deceptive in the ordinary use of the term in visualization), but still have negative impact on how data are perceived. We encourage designers to think defensively and comprehensively about how their visual designs can result in data being misinterpreted.
The Biases of Thinking Fast and Thinking Slow
Dirk Streeb, Min Chen, Daniel A. Keim
Visualization is a human-centric process, which is inevitably associated with potential biases in humans’ judgment and decision making. While discussions of human biases have been heavily influenced by the work of Daniel Kahneman as summarized in his book “Thinking, Fast and Slow”, there have also been viewpoints in psychology in favor of heuristics. In this paper, we present a balanced discourse on human heuristics and biases as two sides of the same coin. In particular, we examine these two aspects from a probabilistic perspective, and relate them to the notions of global and local sampling. We use three case studies in Kahneman’s book to illustrate the potential biases of human- and machine-centric decision processes. Our discourse leads to a concrete conclusion that visual analytics, where interactive visualization is integrated with statistics and algorithms, offers an effective and efficient means to overcome biases in data intelligence.
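The global-versus-local sampling distinction can be illustrated with a small hypothetical simulation (not from the paper; the population and "salient subset" are invented for illustration): an estimator drawing only on a local, easily available subset of a population arrives at a systematically different answer than one sampling globally.

```python
# Hypothetical sketch: a "local" sampler, analogous to a heuristic judgment
# built on readily available instances, vs. a "global" sampler, both
# estimating the mean of the same population.
import random

random.seed(42)
population = [random.gauss(50, 15) for _ in range(10_000)]

def global_estimate(pop, n=200):
    """Sample uniformly from the whole population: an unbiased estimate."""
    return sum(random.sample(pop, n)) / n

def local_estimate(pop, n=200):
    """Sample only from the top decile: a biased, 'memorable' subset."""
    salient = sorted(pop)[-len(pop) // 10:]
    return sum(random.sample(salient, n)) / n

print(round(global_estimate(population), 1))  # close to the true mean (~50)
print(round(local_estimate(population), 1))   # well above it
```

The local estimator is not irrational, only frugal; it trades accuracy for availability, which is exactly the heuristics-versus-biases tension the abstract frames probabilistically.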
Towards Understanding Familiarity Related Cognitive Biases in Visualization Design and Usage
Experts in domains like biology, climate science, cybersecurity, and energy frequently use visualizations as the principal medium for making analytical judgments or for communicating the results of their analysis to a broad audience. However, scientists are often skeptical about adopting new visualization methods over familiar ones, although the latter might be perceptually sub-optimal. This is due to the familiarity heuristic, where the perceived cognitive ease of processing familiar representations of information leads scientists to discount the benefits of visualization best practices. Recent studies have shown that this often results in a discrepancy between scientists’ perceived and actual performance quality. It has also been shown that in some cases, participatory design sessions and qualitative and quantitative user studies are able to mitigate the effects of such bias. In this paper, we discuss the potential causes and effects of familiarity-related biases with examples from recent studies and reflect on the associated research questions.
A Framework for Studying Biases in Visualization Research
André Calero Valdez, Martina Ziefle, Michael Sedlmair
In this position paper, we propose and discuss a lightweight framework to help organize research questions that arise around biases in visualization and visual analysis. We contrast our framework with the cognitive bias codex by Buster Benson. The framework is inspired by Norman’s Human Action Cycle and classifies biases into three levels: perceptual biases, action biases, and social biases. For each of the levels of cognitive processing, we discuss examples of biases from the cognitive science literature, and speculate how they might also be important to the area of visualization. In addition, we put forward a methodological discussion on how biases might be studied at all three levels, and which pitfalls and threats to validity exist. We hope that the framework will help spark new ideas and discussions on how to proceed in studying the important topic of biases in visualization.
Cognitive Biases in Visual Analytics – A Critical Reflection
Cognitive bias research is an interesting and challenging field. Nevertheless, it is not entirely clear to what extent it is applicable to visual analytics. Visual analytics systems support reasoning processes in ill-structured domains with large amounts of data. It is difficult to transfer cognitive bias findings from laboratory studies based on minimal amounts of information to this area. In this paper, an alternative approach for bias mitigation is suggested: provide context and activate background knowledge. Advantages and limitations of this approach are discussed.