Epidemiological studies of adolescents’ suicidality have primarily relied on self-reported survey data [1]. Yet, a growing body of evidence suggests that a non-trivial proportion of respondents provide careless and/or inconsistent responses to suicidality questions [1], [2], [3]. Careless reporting refers to inattentive or random responding patterns and can stem from lack of interest, distraction, or survey fatigue [4], [5], [6]. Inconsistent reporting [3] consists of logically incompatible answers across related questions (e.g., denying any suicide attempt but indicating receipt of medical treatment for such an attempt). Although inconsistent reporting can result from carelessness, the overlap between the two is only partial, as inconsistencies can also arise from item misinterpretation [7] or recall difficulties [1], [2]. Unlike inconsistent reporting, which is detectable through logical contradictions, identifying careless reporting is less straightforward and typically requires validity check items or analyses examining response styles or survey completion time [4], [8].
Both forms of response error introduce the potential for misclassification, either nondifferential (i.e., misreporting occurs independently of other variables) or differential (i.e., misreporting varies across specific subgroups). Consequently, failing to account for careless and inconsistent responding can distort prevalence estimates, generate spurious associations, and mask true relationships in multiple, unpredictable ways [3]. From a prevention perspective, such misclassification can (a) compromise the identification of protective and risk factors, (b) misdirect the allocation of intervention resources, (c) undermine the effectiveness of policies, programs, and practitioners’ efforts, and (d) jeopardize the reliability and validity of policy-relevant conclusions [3], [9].
Despite these well-documented threats, most large-scale epidemiological studies of adolescent suicidality do not explicitly account for such misreporting [10], [11], [12], [13], or do so only partially. For example, publicly available Youth Risk Behavior Survey (YRBS) data are cleaned upstream by coding as missing logically incompatible response patterns, such as reports of no suicide attempt combined with receipt of medical treatment for an attempt [14]. However, the YRBS definition of inconsistent reporting is incomplete, as it fails to capture respondents who deny suicidal ideation while endorsing a suicide plan [7], [15]. By definition, making such a plan implies that the individual has contemplated suicide.
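The two consistency rules described above can be expressed as simple logical checks on paired items. The following sketch illustrates this, assuming hypothetical binary variable names (`attempted`, `medical_treatment`, `ideation`, `plan`) and 0/1 codings; these are illustrative placeholders, not the actual YRBS variable names or response codes.

```python
def flag_inconsistent(resp):
    """Return True if a response record is logically inconsistent.

    Rule 1 (applied in YRBS cleaning): denies any suicide attempt but
    reports receiving medical treatment for an attempt.
    Rule 2 (the gap noted above): denies suicidal ideation but endorses
    having made a suicide plan.
    Variable names and codings here are hypothetical.
    """
    rule1 = resp["attempted"] == 0 and resp["medical_treatment"] == 1
    rule2 = resp["ideation"] == 0 and resp["plan"] == 1
    return rule1 or rule2

# Three illustrative (fabricated) records: one violating each rule, one consistent.
responses = [
    {"attempted": 0, "medical_treatment": 1, "ideation": 1, "plan": 1},
    {"attempted": 0, "medical_treatment": 0, "ideation": 0, "plan": 1},
    {"attempted": 0, "medical_treatment": 0, "ideation": 0, "plan": 0},
]
flags = [flag_inconsistent(r) for r in responses]
print(flags)  # [True, True, False]
```

In practice, flagged records would be recoded as missing (as in the YRBS cleaning pipeline) or retained and compared against cleaned data, which is the contrast examined in this study.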
The present study examined the extent to which careless and inconsistent reporting influences estimates of suicidality prevalence using data from two nationally representative samples of secondary-school students. Specifically, I assessed the prevalence of careless and inconsistent reporters and investigated how their inclusion or exclusion affects key suicidality indicators. In addition, because past research has documented sex differences in suicidal behaviors and in the tendency to disclose depressive symptoms [16], [17], I evaluated whether careless and inconsistent reporting alters sex-specific prevalence estimates.