Cognitive biases in clinical decision-making in prehospital critical care: a scoping review

Study characteristics extracted are presented in Tables 1 and 2. Sixteen articles from five countries, published from 1993 to 2023 and describing 28 unique cognitive biases, were included in this scoping review. Most articles described in-hospital critical care; only two of the included articles described prehospital critical care. Of the 16 included articles, 9 were primary research, and of these, 6 of 9 (67%) were quantitative. Most articles included in this scoping review were published in the US (56%) (Fig. 2).

Fig. 2

PRISMA Flow diagram. From: Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ 2021;372:n71. doi: https://doi.org/10.1136/bmj.n71 [13]

Table 1 Source of evidence and country of origin

Table 2 Overview of extracted data for analysis

Clinical environment

Of the 14 articles reporting on specific clinical environments, four described both the ICU and ED [17, 20, 26, 28], three described the ICU only [16, 26, 31], and three described prehospital care [15, 27, 32]. One reported on a post-cardiac arrest service/ICU [22], one on the neonatal ICU [19], and one on the OR, ICU, and ED [30].

Cognitive biases

Most articles reported two or more cognitive biases that influence decision-making in critical care. Three articles reported on a single bias: two of these reported on age bias and one on overconfidence. Of the 28 unique cognitive biases identified, 11 were mentioned in two or more articles, and nine were both reported and described in two or more articles. These are presented categorized by the levels of Endsley's model for situation awareness (Fig. 3).

Fig. 3

Identified cognitive biases categorized by level of situation awareness from Endsley's model [11]

Perception of elements in current situation

Anchoring bias

Anchoring occurs when a clinician focuses too early and too heavily on specific initial findings in the diagnostic process. For example, if an initial clinical presentation of an aortic dissection is assessed by a clinician as a likely myocardial infarction, then instead of broadly evaluating all the available information, the clinician may be unable to use conflicting data to adjust from the presumptive diagnosis of myocardial infarction to aortic dissection [14].

Three studies reported on anchoring bias [15,16,17]. Marsden et al. performed semi-structured individual interviews of experienced prehospital consultants, exploring decision-making for pre-hospital blood transfusion in trauma patients. Their interviews yielded examples of participants over-depending on initial information during clinical decision-making, demonstrating the presence of anchoring bias [15]. A registry study by Lucas et al. of patients admitted to a noncritical care level after being evaluated in the ED and then upgraded to the ICU within 48 h found that, for the year 2019, anchoring bias was the most prominent cognitive bias in non-concordant diagnoses [16]. Fassier et al. carried out a qualitative study of physicians making end-of-life decisions for elderly patients and found that physicians relied prematurely and strongly on prominent presentation features [17].

Framing effect

Framing effect is described as the tendency to base judgements on how information is presented. For example, the information in a patient handover will potentially frame how the receiving team evaluates and makes decisions. Tversky and Kahneman provide a different example: “The odds of survival one month after surgery are 90%” is more reassuring than the equivalent statement that “mortality within one month of surgery is 10%.” [18].

Three studies reported on framing effect [15, 17, 19]. During their interviews, Marsden et al. found examples of framing effect in 5 of the 10 interviews conducted [15]. Fassier et al. observed that framing effect influenced clinical decision-making but did not highlight it as one of the three main influencing biases [17]. A study by Stanak et al. on decision-making at the limit of viability in the neonatal ICU discussed how framing effect influences shared decision-making; they state that how information is presented, and the order in which it is presented, may affect parents and clinicians [19].

Availability bias

Availability bias is described as trusting or relying on the most easily available information. Tversky and Kahneman describe it as “judging frequency by the ease with which instances come to mind”, noting that dramatic events or personal experience increase the availability of such events [18]. For example, a physician who frequently responds to myocardial infarctions and is presented with an aortic dissection might be more likely to trust easily available information suggesting that the presentation resembles a myocardial infarction. Conversely, if the same physician has experienced a dramatic episode involving a misdiagnosed aortic dissection, the opposite is likely to occur under the influence of availability bias [20].

Two studies reported on availability bias [15, 16]. In the study by Marsden et al., prehospital physicians articulated the tendency to trust information that easily comes to mind, thereby demonstrating availability bias [15]. The study by Lucas et al. identified availability bias as one of four main biases in clinical decision-making but found it to have the lowest presence compared with premature closure, anchoring, and confirmation bias [16].

Confirmation bias

Confirmation bias is the tendency to seek information that confirms one’s beliefs and to pay less attention, or be more critical, to information presenting competing possibilities. For example, clinicians might overfocus on findings supporting their diagnosis and undervalue information weakening their hypothesis [20]. Two studies reported on confirmation bias [16, 17]. Fassier et al. suggest that when physicians make end-of-life decisions, they use physiological age as an argument to confirm their belief in a decision rather than seeking information to question it [17]. Lucas et al. found in their registry study that confirmation bias was present in 55.6% of cases of care escalation to the ICU in 2020 [16].

Comprehension of current situation

Overconfidence bias

Overconfidence is the tendency to be more confident in one’s own abilities, presumptions, or predictions than is objectively reasonable. Under the influence of overconfidence bias, people tend to act on incomplete information, making forecasts for patient outcomes with overconfidence in their own accuracy [14, 21].

Three studies reported on overconfidence bias [17, 22, 23]. Fassier et al. suggest that physicians making decisions alone might be more prone to overconfidence bias [17]. A study by Steinberg et al. found that providers were inaccurate in a third of their predictions of survival and functional outcome after cardiac arrest, and that most errors were optimistic [22]. A simulation study by Yang et al. showed that, when identifying patients at risk of a critical event, experienced critical care nurses did not calibrate their confidence to their accuracy any better than nursing students did; in general, nursing students were underconfident and experienced critical care nurses were overconfident relative to their accuracy [23].

Premature closure

Premature closure is described as a tendency to accept a diagnosis without sufficient information, basing judgement on characteristics that may be convincing but not decisive for a particular diagnosis, or anchoring to early perceptions [14]. Three articles describe premature closure [16, 20, 24]. Lucas et al. found premature closure to be the most prominent cognitive bias in their study of ICU upgrades due to non-concordant diagnosis [16]. Two other articles describe premature closure as making a clinical decision before having the information necessary to conclude [20, 24].

Projection of future status

Omission bias

Omission bias is described as judging the potentially harmful consequences of one’s own actions as worse than the same consequences of inaction. For example, a physician unsure about providing thrombolytic treatment to a stroke patient with mild symptoms might incorrectly weigh the consequences of a potential treatment-induced bleed more heavily than the consequences of not providing treatment [14, 20].

Three articles reported on omission bias [20, 25, 26]. Freshwater-Turner et al. present a case report on a 15-year-old male in which they reflect on the occurrence of omission bias: they judged the risk of bleeding from providing anticoagulation treatment as worse than the risk of allowing clot propagation and pulmonary embolism [25]. Lighthall et al. discuss how clinicians in critical care are typically presented with “intervene or not” decisions, giving the example of “deciding to minimize fluid administration and accepting declining renal function versus infusing fluids and risking an abdominal compartment syndrome” [20]. Aberegg et al. present that omission bias can lead to harmful inaction of not providing indicated treatment, or to unnecessary tests. They argue that when omission bias occurs, a degree of uncertainty is present [26].

Contributing factors

Twelve of the included articles reported on contributing factors. Seven of the twelve were primary research, three were discussion articles, and two were evidence syntheses. Following the inductive qualitative content analysis of Elo and Kyngäs [12], 17 condensed meaning codes were constructed and categorized into the three categories that emerged.

Lack of unbiased feedback

Three articles reported on lack of unbiased feedback, which refers to situations where the information used to evaluate decisions is distorted by the assumptions, actions, or systems that produced the decision. Steinberg et al. present that healthcare providers making clinical decisions based on their predictions after cardiac arrest rarely receive unbiased feedback, and that this might result in poorly calibrated decisions [22]. Stanak et al. also describe how biased feedback contributes to the development of self-fulfilling prophecies [19]. Marsden et al. present that during their interviews they found prehospital physicians to have limited awareness of the biases affecting their decisions [15].

Social behaviour and beliefs

Five articles reported on social behaviour and beliefs [15, 19, 20, 26, 27]. Stanak et al. state that low survival rates for infants born at week 24 are affected by policies limiting treatment for these patients, and that this in turn will justify and validate the policy, contributing to social norms and institutional biases [19]. Ordoobadi et al. present that subtle cues, such as grumbling and eye rolling from hospital personnel receiving geriatric trauma patients, make EMS clinicians question their decisions [27]. Aberegg et al. describe how fear of regret, blame, or legal consequences affects clinical decisions [26]. Marsden et al. found that prehospital physicians lacked awareness of the factors affecting their decision-making [15]. Lighthall et al. proposed that group consensus might promote conformity rather than protect against early closure or other biases [20].

Time pressure

Three articles reported on time pressure. Aberegg et al. reported that the time and effort required to change the status quo might be a contributing factor [26]. Lucas et al. present that patient overload may lead to increased use of cognitive shortcuts, which may result in less analytic effort [16].

In their study, Yang et al. [23] found that under time pressure, nurses became more confident in easy cases and less confident in difficult cases. Time pressure affected their motivation and confidence towards their favoured hypothesis through a need for closure, which might contribute to the consideration of fewer hypotheses [23].

Mitigating factors

Eleven of the sixteen included articles suggested mitigating factors. Five of the eleven were primary research, four were discussion articles, and two were evidence syntheses. Several factors were proposed; these are divided into three categories of mitigating factors.

Feedback and follow-up

Five of the included articles describe feedback and follow-up as a mitigating factor [19, 25, 28]. Three of the articles present the importance of creating a collective awareness of cognitive biases as a mitigating factor, suggesting that this is facilitated by providing unbiased feedback and follow-up [19, 25, 28].

Stanak et al. argue that clinicians need to recognize biases [19]. Ordoobadi et al. state that if EMS clinicians received formal feedback and follow-up after handing over geriatric trauma patients, it would allow them to learn from each call [27]. Lighthall et al. present that by providing feedback on cognitive performance, providers are able to build expertise from experience [20].
