A hitchhiker’s guide to information theoretical measures in psychology

Information-theoretic measures can serve as alternative non-parametric association measures that shed additional light on data analysis in psychology. In that sense, they can be used as an alternative to, say, regression analysis (Garner & McGill, 1956). They can be especially enlightening in a multivariate context, as information theory introduces useful concepts for understanding higher-order relationships among variables (McGill, 1954; Watanabe, 1960). Some efforts in this direction include the use of entropy in the construction of fit indices used in exploratory graph analysis (Golino et al., 2021). By introducing concepts like redundancy and synergy (Rosas et al., 2016), information theory can give additional structure and understanding to high-dimensional data. Additionally, the information-theoretic framework facilitates the unification of diverse statistical approaches (Ince et al., 2017) on a common scale (e.g., bits): mutual information, for example, allows comparisons between any combination of univariate/multivariate and discrete/continuous variables. This paper aims to reawaken interest in information-theoretic measures that we deem relevant for psychology, comparing them with the more standard measures of association and clearing up some confusion concerning their use and applicability.

Information theory, founded on the contributions of, among others, Nyquist (1924) and Hartley (1928) in the 1920s, and later formalized by Shannon (1948), has since extended beyond its original domain of communications engineering. It has evolved into a field of its own, intersecting many other disciplines such as physics, computer science, and statistics (Cover & Thomas, 2006).
Over the years, information theory has been applied in various scientific fields such as biology (Adami, 2004; Quastler, 1953), economics (Maasoumi, 1993; Philippatos & Wilson, 1972), linguistics (Harris, 1991), neuroscience (Borst & Theunissen, 1999; Dimitrov et al., 2011; Victor, 2006), and of course psychology, which was one of the early adopters of information theory (Attneave, 1959; Berlyne, 1957; McGill, 1954, 1957; Miller, 1956; Miller & Frick, 1949). The many immediate applications of information theory outside of communication engineering were met with some criticism by Shannon (1956) himself, who believed in the usefulness of information-theoretic concepts in other fields but warned against superficial analogies and against wholesale translation into other fields without a good understanding of the theory and strong experimental verification. Within psychology, too, there was cautionary criticism of the use of information theory. Cronbach (1955) argued against the indiscriminate use of information measures based purely on the intuitive analogy between information processing in psychology and information processing in engineering. There was, at that time, already an understanding that information theory and information-theoretic measures were distinct, albeit historically linked (Quastler, 1955), and that variation can be measured in terms of bits outside the context of information or communication. The confusion surrounding the non-semantic meaning of information in the Shannon sense (Susswein, 2013) and the conflation of the various technical terms with their everyday use (Ritchie, 1986) led to critiques claiming that psychology and information theory were incompatible. After the initial burst of activity in the 1950s and 1960s, the use of information theory in psychology slowed down considerably.
A renewed interest in information-theoretic measures has grown, based on a more mature understanding of information theory and of its appropriate applications and shortcomings (Adami, 2004; Dimitrov et al., 2011; Ince et al., 2017; Timme & Lapish, 2018). Beyond applications in experimental and cognitive psychology (Canales-Johnson et al., 2023; Silverstein et al., 2017; Wollstadt et al., 2023) and the use of higher-order information in neuroscience (Gelens et al., 2024; Luppi et al., 2024), information-theoretic measures have not yet percolated throughout the general field of psychology. The belief that information theory and psychology are a bad fit lingers (Luce, 2003); it is rooted mainly in the aforementioned confusions and in attempts to limit the use of information theory to the modelling of cognitive behaviour. Without dismissing this recently maturing application of information theory (Sayood, 2018), we want to stress that the use of information-theoretic measures as promoted in this work is not intended as a model of human action-perception coupling or information processing. We argue that the benefits of using information theory in psychology can be obtained without assuming any deeper similarity between psychological and engineering concepts. As an example application, we will use the data from Briganti et al. (2018) (N = 1973), collected to investigate the construct of empathy. In this paper, we argue that information theory has plenty to offer psychology, and we identify three main benefits: investigating variable dependency beyond linear dependencies, unifying different methods specific to different variable types, and extending naturally towards the investigation of higher-order dependencies.
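To preview the first of these benefits, the following minimal sketch contrasts the Pearson correlation with a simple histogram-based (plug-in) estimate of mutual information on simulated data with a purely nonlinear dependence. The bin count, sample size, and function name are illustrative choices of ours, not part of any cited work; more refined estimators exist for real data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: y depends on x quadratically, so the Pearson correlation
# is near zero even though the two variables are strongly related.
x = rng.normal(size=5000)
y = x**2 + 0.1 * rng.normal(size=5000)

def mutual_information(a, b, bins=16):
    """Plug-in estimate of mutual information (in bits) from a 2-D histogram."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()                      # joint distribution p(x, y)
    px = pxy.sum(axis=1, keepdims=True)            # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)            # marginal p(y)
    nz = pxy > 0                                   # avoid log(0) on empty cells
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

r = np.corrcoef(x, y)[0, 1]
mi = mutual_information(x, y)
print(f"Pearson r = {r:.3f}, MI = {mi:.3f} bits")
# r is near zero, while MI is clearly positive: the dependence is
# invisible to the linear measure but registered in bits.
```

The same bit scale applies whether the variables are discrete or continuous, univariate or multivariate, which is what makes such comparisons possible on a common footing.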
