Homeless Scholar Blog ~ COGNITIVE BIASES

In my dissertation on workplace discrimination and perceived (as opposed to documented) disabilities, I included a passing reference to a couple of well-known cognitive biases, presented as heuristics, or mental shortcuts, for decision making. First, there is the “faulty representation” by which any level of impairment is interpreted as indicative of disability; the prospect of a more realistic, nuanced spectrum of impairment is not considered. Second is the “availability heuristic,” an error of facile recall: ongoing observation of the impaired employee tends to amplify the impairment’s severity in the employer’s mind. However, there are a great many more such biases; in fact, a Wikipedia article lists over two hundred! What they all have in common is a “systematic pattern of deviation from norm or rationality in judgment.”

Furthermore, these biases are often implicit, i.e., unconscious, especially in the workplace. The judging person is not motivated by conscious, emotional prejudice but is making decisions automatically and unconsciously, guided by rules of thumb informed by stereotypical assumptions about the behavior of a given group of people, in this case, people with disabilities. This point is worth emphasizing because of the prevalent association of unconscious mental phenomena with emotionalism. The general consensus among behavioral scientists is that most implicit phenomena are largely cognitive in nature. As often as not, the implicit biases operative in employer decision making are “innocent mistakes” reflecting no ill will or, in legal terms, “animus.” (However, they can still be legally actionable, as in Taylor v. Pathmark (1999), which found an employer at fault for the effects of his decisions rather than for his stated intentions.)

The best-known contemporary research project involving unconscious mental processes is the Implicit Association Test (IAT), which can be taken online. This is a computerized assessment of stereotypical associations which, according to its advocates, can reveal implicit bias often at odds with one’s conscious beliefs. Differential response times to psychologically significant words and images are taken to indicate prejudice, with the data suggesting, for example, that most people have a slight preference for their own race, although critics argue that the test merely reflects cultural familiarity. The debate over the test’s validity has gone on for years. Some of these critics published an extensive meta-analysis of the IAT’s predictive capacity and concluded that it did no better at predicting behavior than measures of explicit bias. They scored a point, but the game is hardly over.
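To make the logic of “differential response times” concrete, here is a deliberately simplified sketch of how such latencies could be aggregated into a single effect score. The actual IAT uses the more elaborate D-score algorithm, and all the numbers below are invented for illustration.

```python
# Simplified sketch of inferring an implicit association from response
# latencies.  NOT the official IAT D-score; data are hypothetical.

from statistics import mean, stdev

# Response times (ms) in the two critical blocks: stereotype-congruent
# pairings vs. stereotype-incongruent pairings (invented numbers).
congruent = [620, 580, 650, 600, 590, 640]
incongruent = [720, 760, 700, 740, 690, 750]

def iat_effect(congruent, incongruent):
    """Standardized latency difference: larger positive values mean
    faster responses to congruent pairings, i.e., a stronger implicit
    association between the paired categories."""
    pooled_sd = stdev(congruent + incongruent)
    return (mean(incongruent) - mean(congruent)) / pooled_sd

print(iat_effect(congruent, incongruent))
```

With these made-up latencies the score comes out strongly positive; a score near zero would indicate no measurable differential association.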

Another way of assessing implicit bias, specifically workplace discrimination against disabled people, is through data analysis of decisions made by the U.S. Equal Employment Opportunity Commission (EEOC) on complaints against employers. In 2011, some colleagues and I published research in the Rehabilitation Counseling Bulletin, a peer-reviewed academic journal, which compared EEOC findings for claimants with documented disabilities and those with perceived disabilities (that is, those with only minor impairments which were prejudicially, and, I would argue, often unconsciously, exaggerated by the employer, who then labeled them as “disabled”). This was the basis of much hiring and firing discrimination. In terms of “legal realism,” discrimination (again, often unconscious) occurred in roughly 1 in 4 cases. Our key finding was that the merit resolution rate for perceived disability claims (i.e., “wins” for the plaintiff) proportionately exceeded that for documented disability claims by a statistically significant margin: 26.2% vs. 22.5%. (That is, at the time of the study, out of over 38,000 perceived disability allegations, over 10,100 were considered meritorious.) In another study, we found the same effect when the comparison group was claimants who had only a record of disability but were not currently disabled.
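For readers who want to see how a difference like 26.2% vs. 22.5% is judged statistically significant, here is a back-of-the-envelope two-proportion z-test. The perceived-disability figures are taken from the text; the documented-disability sample size is a hypothetical placeholder, since that count is not given here.

```python
# Two-proportion z-test sketch for the merit resolution rates.
# n1 is chosen to be consistent with the reported "over 38,000
# allegations / over 10,100 meritorious"; n2 is HYPOTHETICAL.

from math import sqrt

n1, p1 = 38_550, 0.262    # perceived-disability claims (from the text)
n2, p2 = 100_000, 0.225   # documented-disability claims (assumed size)

x1, x2 = p1 * n1, p2 * n2
p_pool = (x1 + x2) / (n1 + n2)          # pooled proportion under H0
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se

print(f"meritorious perceived-disability claims: {x1:.0f}")
print(f"z = {z:.1f}")  # well beyond 1.96, the 5% two-tailed cutoff
```

At sample sizes in the tens of thousands, even a 3.7-point gap yields a very large z statistic, which is why a seemingly modest difference can be highly significant.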

This is significant from a social psychological point of view. Information about these subgroups of workplace discrimination claims highlights not only the cultural force of stigma, but also the propensity to engage in unconscious, automatic judgments, which, while they may be free of animosity, can still have deleterious consequences for the workers affected by them. In general, these findings from our research lend support to the assertion that unconscious/implicit bias persists in the workplace.

An estimated 70% of diagnostic errors are due to faulty reasoning. Common biases in the emergency department include aggregate bias, anchoring, availability, confirmation, triage cueing, diagnostic momentum, and premature closure, among others. Premature closure, for instance, is a readiness to accept a diagnosis before it has been fully verified. Another common error is the availability heuristic, a tendency to judge things as more likely if they come readily to mind; debiasing would involve judging cases on their own merits rather than on the recency of one’s experiences.

Such biases also occur in anesthesiology. In one case cited in the recent literature, a patient presented with extreme pain after a lumbar discectomy. He was given anesthesia followed by antibiotics. At some point, recalcitrant hypotension set in. A cardiac cause was presumed, as his profile included morbid obesity and a history of hypertension, diabetes, and other cardiac risk factors. Completely overlooked was anaphylaxis to the antibiotics, which was the actual cause of the problem. The authors opine that numerous biases were at play, including confirmation, framing, anchoring, and representativeness. “It illustrates,” they conclude, “how an entire team of anesthesiologists, nurses and surgeons could miss a seemingly ‘classic’ diagnosis, despite knowledge, skill, and good intentions.”

Many of the biases seen in emergency medicine and anesthesiology also appear in medical imaging. For example, one article reports that a 6-year-old girl presented with a 10-day history of abdominal pain and mild fever. Based on an ultrasound, the interpreting radiologist, who had recently given a lecture on the imaging features of teratomas, diagnosed a teratoma. The patient’s symptoms worsened, and a subsequent CT of the pelvis led to a diagnosis of ruptured appendicitis with a pelvic abscess. Debiasing such availability errors would involve using objective data on the base rate of disease to check one’s own rates of diagnosis and to construct a differential diagnosis. An especially interesting bias to me is the alliterative bias, which “represents the influence one radiologist’s judgment can exert on the diagnostic thinking of another radiologist.” A proposed intervention is to “consider reviewing prior radiologist reports after rendering an interpretation, so as not to be influenced by the prior radiologist’s interpretation.” Another interesting one is the so-called regret bias, which refers to “overestimating the likelihood of a particular disease because of the undesirability of an adverse outcome from a failure to diagnose that disease.” Proposed intervention: “Development of…standardized reporting systems to objectively state the probability of certain disease processes based on the presence of an imaging finding.” (Perhaps this one should instead be called the “defensive medicine” bias?)
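The point about base rates can be made concrete with Bayes’ theorem: the probability that a “classic” imaging finding actually indicates a given disease depends heavily on how common that disease is to begin with. The sensitivity, specificity, and prevalence figures below are hypothetical.

```python
# Why base rates matter: posterior probability of disease given a
# suggestive imaging finding, via Bayes' theorem (hypothetical numbers).

def posterior(prevalence, sensitivity, specificity):
    """P(disease | positive finding) by Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A rare disease (0.1% base rate) with a fairly specific finding:
rare = posterior(prevalence=0.001, sensitivity=0.95, specificity=0.95)
# The same finding for a common condition (10% base rate):
common = posterior(prevalence=0.10, sensitivity=0.95, specificity=0.95)

print(f"rare disease:   {rare:.1%}")    # under 2% despite the finding
print(f"common disease: {common:.1%}")
```

The identical finding yields a posterior under 2% for the rare disease but roughly two-thirds for the common one; an availability-driven diagnosis ignores exactly this asymmetry.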

Diagnostic error rates have been estimated to be between 5% and 15%, which doesn’t sound like much until one considers all the actual human beings who have been victimized thereby. To be fair, the root cause is multifactorial; still, a large proportion of such errors have cognitive components. A mnemonic/checklist has been devised in an attempt to lower the error rate: TWED. T = Threat: is there a life- or limb-threatening worst-case scenario that has not been ruled out by considering alternative diagnoses? W = Wrong: this guards against overattachment to a particular diagnosis (“What if I’m Wrong? What else could it be?”). E = Evidence (“Do I have sufficient Evidence for or against this diagnosis?”). Finally, D = Dispositional factors (“What are the environmental and emotional Dispositional factors influencing my decision?”)
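Since TWED is simply a four-item checklist, one could imagine embedding it in a decision-support tool as a small data structure. This is only an illustrative sketch, with the prompts paraphrased from the descriptions above.

```python
# The TWED checklist as a simple data structure (prompts paraphrased
# from the text; the structure itself is a hypothetical illustration).

TWED = {
    "T": "Threat: is there a life- or limb-threatening worst-case "
         "diagnosis that must be ruled out?",
    "W": "Wrong: what if I'm wrong? What else could this be?",
    "E": "Evidence: do I have sufficient evidence for or against "
         "this diagnosis?",
    "D": "Dispositional: what environmental and emotional factors "
         "are influencing my decision?",
}

for letter, prompt in TWED.items():
    print(f"{letter} - {prompt}")
```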

Madva (2017) cites three standard objections to debiasing: those related to (1) empirical efficacy; (2) practical feasibility; and (3) the failure to appreciate the underlying structural-institutional nature of discrimination. He replies to all these criticisms, but his responses are long and involved, so I will just summarize one, that of presumed practical unfeasibility. Research has shown that debiasing can occur after hundreds of separate “trials”, but what does that mean? A trial is a quite brief exposure to counterconditioning; hundreds of trials could be completed on a PC simply by “liking” posts on social media or playing a video game. Madva claims that working through the researched number of 480 trials takes only 45 minutes. But a further criticism is that there are too many biases to deal with. In that case, they can be prioritized to fit the context/goal.
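Madva’s feasibility figure is easy to sanity-check with a line of arithmetic: 480 trials in 45 minutes leaves only a few seconds per trial, which is consistent with describing each trial as a “quite brief exposure.”

```python
# Sanity check on the feasibility claim: 480 counterconditioning
# trials in 45 minutes implies only a few seconds per trial.

trials = 480
minutes = 45
seconds_per_trial = minutes * 60 / trials
print(seconds_per_trial)  # 5.625
```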

Evidently, though, the issue of debiasing is likely to remain controversial for some time.

~ Rylan Dray, Ph.D., November 2019

SELECTED REFERENCES

Draper, W.R., et al. (2011). Workplace discrimination and the perception of disability. Rehabilitation Counseling Bulletin, 55(1), 29-37.

Larsen (2008). Unconsciously regarded as disabled. UCLA Law Review, 56, 451 ff.

Itri, J.N., & Patel, S.H. (2017). Heuristics and cognitive error in medical imaging. AJR, 210, 1097-1105.

Stiegler, M.P., & Dhillon, A. (2014). Decision-making errors in anaesthesiology. International Anesthesiology Clinics, 52(1), 84-96.

Chew, K.S., et al. (2016). A portable mnemonic to facilitate checking for cognitive errors. BMC Research Notes, 9, 445.

Madva, A. (2017). Biased against debiasing: On the role of (institutionally sponsored) self-transformation in the struggle against prejudice. Ergo, 4(6).