
Algorithmic Polarization: From Physical Masses to Digital Echo Chambers

Feature article

  • Feb 10
  • 22 min read

Article written in collaboration with @domiziaromani.psicologa


Abstract

This article analyzes the phenomenon of algorithmic polarization as a contemporary evolution of mass dynamics studied by classical social psychology. Starting from the founding theories of Le Bon, Freud, and Tarde, we examine how social media algorithms have automated and amplified collective psychological mechanisms, creating digital ecosystems characterized by filter bubbles and echo chambers. The analysis integrates cognitive and emotional perspectives, highlighting how cognitive biases and emotional contagion contribute to the radicalization of opinions and the fragmentation of the social fabric. The individual psychological costs of polarization are also examined, including reduced empathy and increased cognitive dissonance. The article concludes with reflections on the need to develop digital awareness skills to critically navigate contemporary media environments.


Keywords: algorithmic polarization, echo chamber, filter bubble, cognitive biases, emotional contagion, social identity, online radicalization


Introduction

The advent of social media and digital platforms has radically transformed the ways in which people communicate, interact, and form public opinion. At the heart of this transformation is algorithmic polarization, a process through which automated recommendation systems steer users toward content increasingly aligned with their pre-existing preferences and beliefs (Pariser, 2011). This mechanism is not an absolute novelty in the processes of social influence, but rather a technological evolution of dynamics already observed and theorized by classical mass psychology (Sunstein, 2017).


The issue takes on crucial relevance because algorithmic polarization is not limited to filtering information: it actively contributes to the construction of fragmented social identities, the radicalization of positions, and the erosion of dialogue between groups with divergent worldviews (Bail et al., 2018). Understanding the psychological mechanisms underlying this phenomenon therefore becomes essential for developing media literacy strategies and promoting conscious digital citizenship.


The Historical Roots: The Psychology of Masses

Gustave Le Bon and the Psychology of Crowds

Gustave Le Bon, in his seminal work Psychology of Crowds (1895), describes the masses as psychological entities with their own characteristics, distinct from those of the individuals who compose them. According to Le Bon, the individual immersed in the crowd undergoes a psychological transformation: individual consciousness partially dissolves in favor of a collective soul characterized by impulsivity, suggestibility, and irrationality (Le Bon, 1895/2001). Crowds, according to this perspective, tend to standardize feelings and thoughts in a single direction, amplifying emotions and reducing individual critical capacity.


This theorization anticipates fundamental elements of contemporary algorithmic polarization: the homogenization of opinions, emotional intensification, and the reduction of critical thinking are processes also observable in the digital masses organized by social platforms (Quattrociocchi et al., 2016). The substantial difference lies in the fact that, while Le Bon describes physically co-present aggregates, digital masses are constituted through virtual connections mediated by algorithms.


Sigmund Freud and Libidinal Dynamics

Sigmund Freud, in Group Psychology and the Analysis of the Ego (1921), deepens Le Bon's analysis from a psychodynamic perspective, identifying the libidinal bond as the fundamental glue of the masses. Freud (1921/2013) argues that members of a mass develop both horizontal (with each other) and vertical (with a common leader or ideal) emotional bonds, partially renouncing their critical Ego in favor of a collective one. This identification with one another and with an external authority generates cohesion, but also conformity and susceptibility to influence.


In the digital context, the role of the leader can be assumed by influencers, opinion leaders, or the algorithms themselves, which, through the personalization of content, take on a quasi-editorial function in the selection of information (van Dijck & Poell, 2013). Users develop emotional attachments to these figures or to the content they propose, strengthening group identities and hardening barriers against the outgroup.


Gabriel Tarde and Social Imitation

Gabriel Tarde proposes a theory of imitation as a fundamental mechanism of social life. According to Tarde (1890/1903), ideas, behaviors, and beliefs spread through mutual imitation, creating waves of conformity that shape public opinion. Unlike Le Bon, Tarde does not emphasize the loss of rationality in the crowd, but rather the processes of social influence that operate through the observation and reproduction of others' behavior.


Tarde's insights are particularly relevant to understanding the virality of digital content. Sharing, liking, and retweeting are contemporary forms of social imitation, in which users replicate and amplify messages they observe circulating in their network (Goel et al., 2016). Social media algorithms exploit these imitative tendencies, promoting content that has already demonstrated its capacity to generate engagement and triggering information cascades that can spread rapidly across digital networks.


From Physical Crowds to Digital Masses

Bauman's Liquid Modernity

Zygmunt Bauman, in his analysis of liquid modernity, describes contemporary social aggregations as fluid, temporary, and lacking the stability that characterized traditional communities (Bauman, 2000). Digital masses perfectly embody this liquidity: they form quickly around specific content, hashtags, or events, then dissolve just as quickly. Unlike the physical crowds described by Le Bon, digital masses require neither physical co-presence nor temporal continuity; they are constituted through network connections mediated by technological platforms.


Relative anonymity and technological mediation also modify the dynamics of social inhibition: behaviors that would be censored offline by social pressure can manifest themselves more freely online, favoring phenomena of disinhibition and polarization (Suler, 2004). Digital platforms thus create spaces where users can express extreme opinions without the social consequences of expressing them in face-to-face contexts.


The Role of Algorithms

Recommendation algorithms represent the mechanism through which digital masses are structured and oriented. These systems, designed to maximize user engagement, operate by identifying patterns in browsing behaviors, interactions, and expressed preferences, then proposing content that maximizes the likelihood of further engagement (Bozdag & van den Hoven, 2015). The result is a highly personalized but also progressively limited information experience, in which users are predominantly exposed to content that confirms their pre-existing visions.


This algorithmic logic systematically exploits human cognitive and emotional vulnerabilities. Algorithms do not simply respond passively to user preferences; they actively shape them through a continuous feedback loop: the more a user interacts with certain types of content, the more similar content they will receive, progressively reinforcing specific worldviews and isolating them from alternative perspectives (Gillespie, 2014). This process of automated filtering and amplification creates what Pariser (2011) called a personalized information universe, in which each user inhabits a slightly different version of digital reality.
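
The feedback loop just described can be made concrete with a deliberately simplified sketch (in Python, and not any platform's actual system): every click reinforces the weight of the clicked topic, and the ranker then serves ever more of whatever was already clicked. All names and numbers are invented for illustration.

```python
# Toy illustration of the engagement feedback loop: clicks shift the
# preference profile, and the ranker favors ever more similar content.
import random

TOPICS = ["politics_a", "politics_b", "sports", "science", "celebrity"]

def recommend(profile, n=3):
    """Return the n topics with the highest current preference weight."""
    return sorted(TOPICS, key=lambda t: profile[t], reverse=True)[:n]

def simulate(steps=200, reinforcement=0.1, seed=42):
    random.seed(seed)
    profile = {t: 1.0 for t in TOPICS}            # start with no preference
    for _ in range(steps):
        feed = recommend(profile)
        # The user clicks one item, proportionally to existing preference:
        clicked = random.choices(feed, weights=[profile[t] for t in feed])[0]
        # The platform reinforces the clicked topic (the feedback loop):
        profile[clicked] *= 1 + reinforcement
    return profile

print(simulate())
# After a few hundred steps one topic dominates: the information diet has
# narrowed even though the user never asked for that outcome.
```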


The Cognitive Dimension of Polarization

Cognitive Bias and Algorithms

Algorithmic polarization is grafted onto innate cognitive predispositions that characterize the functioning of the human mind. Confirmation bias is one of the most relevant mechanisms: individuals tend to seek out, interpret, and remember information that confirms their pre-existing beliefs, while minimizing or ignoring discordant information (Nickerson, 1998). Algorithms amplify this bias by providing users with exactly what they are predisposed to search for, creating a vicious cycle in which initial beliefs are constantly reinforced.


Other relevant cognitive biases include motivated reasoning, whereby individuals process information to reach emotionally desirable rather than objectively accurate conclusions (Kunda, 1990), and the availability heuristic, which leads people to overestimate the likelihood of events that are easily recalled, typically because they are recent, emotionally salient, or frequently presented in the media (Tversky & Kahneman, 1973). Algorithms, by favoring content that generates intense emotional reactions and spreads virally, amplify the action of these biases, further distorting the perception of reality.


Filter Bubble and Echo Chamber

The concept of the filter bubble describes the condition of informational isolation created when personalization algorithms select the content shown to each user based on their behavioral profile (Pariser, 2011). This process generates homogeneous information environments in which the diversity of opinions and perspectives is progressively reduced. Filter bubbles often operate invisibly: the user does not know which content is being withheld and may develop the illusion that their information experience corresponds to the full range of available information.


Echo chambers are the collective evolution of filter bubbles: digital environments in which groups of users with similar beliefs interact predominantly with one another, mutually reinforcing their opinions and creating an ideological sounding board (Cinelli et al., 2021). Within echo chambers, opinions are not only confirmed but progressively radicalized, because the absence of dissenting voices removes mechanisms of correction and moderation. The result is increasing internal homogenization and a parallel divergence from other groups, a phenomenon that fuels overall social polarization.
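
The homogenize-internally-while-diverging-externally dynamic can be reproduced with a standard toy model from opinion dynamics (a bounded-confidence model in the style of Deffuant et al., which is an illustration we introduce here, not a model drawn from the sources cited above): agents update their opinion only toward opinions already close to their own.

```python
# A toy bounded-confidence simulation: agents only move toward opinions
# within a tolerance threshold, so clusters homogenize internally while
# the distance between clusters persists or grows.
import random

def interact(opinions, threshold=0.3, rate=0.3):
    i, j = random.sample(range(len(opinions)), 2)
    if abs(opinions[i] - opinions[j]) < threshold:  # only the like-minded meet
        shift = rate * (opinions[j] - opinions[i])
        opinions[i] += shift                         # i moves toward j
        opinions[j] -= shift                         # j moves toward i

random.seed(1)
opinions = [random.uniform(-1.0, 1.0) for _ in range(100)]
for _ in range(50_000):
    interact(opinions)

print(sorted(round(o, 2) for o in opinions))
# Output: a handful of tight, well-separated opinion clusters.
```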


The Emotional Dimension of Polarization

The Role of Negative Emotions

Empirical research shows that content eliciting negative emotions, particularly anger and fear, achieves significantly higher engagement than emotionally neutral or positive content (Berger & Milkman, 2012). This is explained by the fact that negative emotions carry evolutionary value as danger signals and therefore capture attention preferentially, motivating immediate behavioral responses (Brady et al., 2017). Algorithms, optimized to maximize interaction, quickly learn these patterns and systematically promote emotionally charged content, especially content that generates moral outrage or fear.


The result is an information ecosystem characterized by the systematic amplification of negativity. Users are exposed to a constant stream of alarming news, controversy, and divisive content, which fuels emotional states of anxiety, anger, and frustration (Crockett, 2017). This chronic exposure to emotionally negative stimuli not only affects individual psychological well-being but also helps shape a perception of the world as threatening and conflictual, fostering defensive and hostile attitudes toward groups perceived as adversaries.


Emotional Contagion and Radicalization

Emotional contagion is a process through which emotions are transmitted between individuals, influencing each other's affective states (Hatfield et al., 1993). In the digital context, this mechanism takes on particular characteristics: emotions are transmitted not through physical co-presence and non-verbal signals, but through language, images, and symbols shared on social platforms. Studies have shown that exposure to emotionally charged content can significantly influence users' emotional states and their subsequent online expressions (Kramer et al., 2014).


The cycle of emotional contagion mediated by algorithms operates as follows: negative content generates intense emotional reactions, which in turn motivate further interactions (likes, comments, shares); these interactions signal to algorithms that the content is engaging, leading to its greater diffusion; increased exposure amplifies emotional contagion, involving a growing number of users (Fan et al., 2014). This vicious cycle creates spirals of emotional radicalization, in which user groups reinforce each other in states of collective indignation, anger, or fear, facilitating the formation of polarized identities and hostility toward the outgroup.
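
The compounding nature of this cycle can be illustrated with a short simulation (the reaction rates below are assumptions for the sketch, not empirical estimates): exposure is made proportional to accumulated engagement, and anger-laden posts convert exposure into engagement at a higher rate, so their share of total attention grows round after round.

```python
# Illustrative amplification loop: ranking ∝ engagement, and angry posts
# engage viewers at a higher (assumed) rate, so their reach compounds.
import random

random.seed(0)
posts = [{"tone": tone, "engagement": 1.0}
         for tone in ("neutral", "angry") * 100]    # 100 posts of each tone
RATE = {"neutral": 0.05, "angry": 0.15}             # assumed reaction rates

for _ in range(20_000):
    # Ranking step: a post is shown with probability proportional to its
    # accumulated engagement.
    post = random.choices(posts, weights=[p["engagement"] for p in posts])[0]
    # Reaction step: the viewer engages at a tone-dependent rate.
    if random.random() < RATE[post["tone"]]:
        post["engagement"] += 1

angry = sum(p["engagement"] for p in posts if p["tone"] == "angry")
total = sum(p["engagement"] for p in posts)
print(f"angry posts' share of all engagement: {angry / total:.0%}")
# Far above the 50% share they would hold without the ranking loop.
```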


Psychological Mechanisms of Online Radicalization

Digital Relative Deprivation

Relative deprivation theory holds that the sense of injustice and frustration arises not so much from objective conditions of disadvantage, but from comparison with reference groups perceived as unfairly advantaged (Runciman, 1966). In the digital context, algorithms constantly expose users to narratives that emphasize the threats or injustices suffered by their identity group, fueling a sense of relative deprivation. This process is particularly evident in polarized political contexts, where each faction is predominantly exposed to content that reinforces the perception of being victims of unfair treatment or existential threats (Pettigrew, 2016).


Digital relative deprivation generates emotional and behavioral mobilization: users feel motivated to defend their group, denounce perceived injustices, and actively oppose the outgroup. This mobilization can translate into constructive forms of activism, but also into hostility, verbal aggression, and, in extreme cases, extremist behavior. Continued exposure to narratives of victimization and threat also creates a psychological climate of permanent alert, which hinders constructive dialogue and fuels the logic of intergroup conflict.


Groupthink and Conformism

The phenomenon of groupthink, originally described by Janis (1972), occurs when the search for cohesion and unanimity within a group suppresses dissent and critical thinking. In digital echo chambers, this mechanism operates with particular efficiency: pressure to conform is exerted not only through explicit interactions, but also through implicit mechanisms such as the visibility of likes, follower counts, and platform-mediated dynamics of social approval (Moeller et al., 2018).


Users who express opinions that diverge from the group's dominant line risk social sanctions such as criticism, isolation, or exclusion, phenomena amplified by the public visibility typical of social media. As a result, many users self-censor, avoiding the expression of doubts or criticisms even when they consider them valid (Hampton et al., 2014). The result is an impoverished internal debate, a progressive radicalization of the group's positions, and a reduced capacity to correct errors or extremism.


Deindividuation and Disinhibition Online

Deindividuation is a psychological state in which individual awareness and self-control are reduced, favoring impulsive behaviors that conform to group norms (Zimbardo, 1969). In the digital context, factors such as anonymity, physical distance, and the absence of immediate feedback reduce the inhibitory brakes that normally regulate social behavior (Suler, 2004). This phenomenon, known as the online disinhibition effect, explains why individuals who behave respectfully and moderately in offline life can express extreme opinions, use aggressive language, or participate in collective hate campaigns online.


Deindividuation facilitated by digital platforms contributes to polarization in two main ways: on the one hand, it fosters the expression of extreme positions that would otherwise remain unexpressed, shifting the center of gravity of public debate toward extremes; on the other, it normalizes forms of aggression and hostility that erode the norms of civility and mutual respect necessary for constructive dialogue (Coe et al., 2014). The combination of deindividuation and algorithmic amplification creates environments where conflict and extremism are rewarded in terms of visibility and influence.


Social Identity in the Digital Age

The Theory of Social Identity

Social Identity Theory (Tajfel & Turner, 1979) provides a fundamental theoretical framework for understanding group dynamics and polarization processes. According to this theory, each individual's personal identity is partly made up of group memberships: people define themselves not only in terms of individual characteristics, but also through the social categories to which they feel they belong (nationality, religion, political orientation, etc.). These social identities profoundly influence perceptions, attitudes, and behaviors.


Social media algorithms operate as powerful social categorization mechanisms, classifying users into increasingly homogeneous groups based on their behaviors and preferences (Gillespie, 2014). This categorization is not neutral: it determines what content each user will see, which other users they will interact with, and, consequently, which social identities will be strengthened. The result is an accentuation of the distinction between the ingroup (one's own group) and outgroups (other groups), and with it an intensification of favoritism toward one's own group and denigration of others.


In-group Favoritism and Out-group Derogation

In-group favoritism is the systematic tendency to evaluate members of one's own group more positively than those of other groups, attributing more favorable characteristics to them and interpreting their behaviors more benevolently (Brewer, 1999). In the digital context, this mechanism is amplified by constant exposure to content that celebrates the virtues, successes, and moral positions of one's identity group. Algorithms, by prioritizing content that generates approval and sharing within homogeneous communities, create self-celebratory narratives that reinforce the group's sense of belonging and moral superiority.


At the same time, out-group derogation manifests itself in the tendency to emphasize the negative aspects of other groups, attributing negative stereotypical characteristics to them and interpreting their behaviors in a hostile key (Hewstone et al., 2002). Algorithms fuel this process by selectively exposing users to content that portrays the outgroup negatively: errors, scandals, controversial statements, and reprehensible behavior are amplified, while positive or humanizing aspects are omitted. This creates a distorted, caricatural representation of the other that facilitates hostility, contempt, and dehumanization.


AI-Fed Prejudices and Stereotypes

The bidirectional influence between humans and intelligent systems in the production of bias.

Pre-existing beliefs drive the selection and interpretation of information, often in distorted ways, and this dynamic is amplified by the use of artificial intelligence algorithms. In this context, prejudices and stereotypes, deeply rooted in human behavior, can also be found in artificial intelligence systems (Battaglia, 2023).


According to Allport, prejudice is an attitude of rejection or hostility toward a person simply because they belong to a given social group, and who is therefore presumed to possess the objectionable qualities generally attributed to that group (Alietti & Padovan, 2023). A stereotype, in turn, is a set of simplified images through which the members of a group see social reality. Stereotypes serve a cognitive function by reducing the complexity of the social environment, allowing individuals to orient themselves quickly in social contexts. From a psychological point of view, stereotypes are cognitive schemas that associate stable, rigid characteristics with a given social group; these schemas can be activated automatically, influencing judgments and perceptions of individuals (Devine, 1989). Allport clearly distinguishes stereotypes from prejudice: the stereotype is the cognitive component, a belief shared within a group, while prejudice is the affective and evaluative component that can lead to a negative attitude toward a social group (Allport, 1954).


Digital Amplification and the Role of the Media.

In contemporary society, the role of stereotypes and prejudices is amplified by media and digital platforms. Distorted representations in search engines can make stereotypes and biases more pervasive and harder to deconstruct. Safiya Noble (2018) shows how online searches can generate sexist and racist results, further reinforcing the negative image of certain social groups. From this perspective, stereotypes and prejudices are not only products of the individual mind but are bound up with new technologies and cultural dynamics that favor their diffusion (Noble, 2018).


Algorithmic Bias.

Algorithmic bias is a phenomenon that produces automated discrimination when human biases become embedded in artificial intelligence systems, which then reinforce their effects through echo chambers. Digital technology has not produced new prejudices, but it has accelerated the spread of existing ones, contributing to the formation of online identities continuously shaped by the vicious circle generated by the interaction between algorithms and human behavior (Noble, 2018).


Chen (2023), through a literature review, explored this topic in the context of staff recruitment, analyzing the causes and origins of algorithmic discrimination. His research identifies two sources of algorithmic bias in staff recruitment:

Data bias: Artificial intelligence was introduced into the selection process because it allows companies to speed up screening and save money, especially during the initial phase, by identifying the candidates most consistent with the job requirements. Bias arises when the training data used to develop the AI reflects historical discrimination: a model learns from the company's past hiring data, and that data may encode historical inequalities. For example, if the company has mostly hired candidates of one sex or ethnicity for a role, the algorithm will learn that candidates with those characteristics were chosen most often in the past and treat them as better suited, penalizing the others (a minimal sketch of this mechanism follows the list below).


Designer bias: The designer influences how the model is built, a phenomenon summarized as "bias in, bias out": if the input reflects the designer's biases and opinions, the AI selection method will reproduce those tendencies.
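
As announced above, here is a hypothetical, minimal illustration of the data-bias mechanism. The historical hire/no-hire labels are simulated with an arbitrary bonus for one group at equal skill, so any model fit to those labels would score that group higher at identical qualifications. All names and numbers are invented for the sketch.

```python
# Synthetic demonstration of data bias in hiring: biased past decisions
# make group membership look like a predictive "signal".
import random

random.seed(7)

def past_decision(skill, group):
    """Simulated historical labels: group A enjoyed an arbitrary bonus."""
    bonus = 0.2 if group == "A" else 0.0
    return skill + bonus > 0.6

history = [(skill, group, past_decision(skill, group))
           for group in ("A", "B")
           for skill in (random.random() for _ in range(5000))]

def past_hire_rate(group, lo=0.45, hi=0.55):
    """Empirical P(hired | group) within a narrow, identical skill band."""
    hired = [h for s, g, h in history if g == group and lo <= s < hi]
    return sum(hired) / len(hired)

print(f"group A, skill 0.45-0.55: hired {past_hire_rate('A'):.0%} of the time")
print(f"group B, skill 0.45-0.55: hired {past_hire_rate('B'):.0%} of the time")
# A model trained on these labels learns group membership as a criterion,
# reproducing the historical inequality at identical skill levels.
```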


The study also identified various forms of discrimination that can manifest through the use of algorithms, including discrimination by gender, ethnicity, socioeconomic status, and personality (Chen, 2023). In line with Chen's findings, Tilcsik's (2021) theory of statistical discrimination traces these biases back to the origins of evaluation criteria and methods of information collection.


Discrimination in Staff Recruitment.

From this perspective, employers are typically interested in assessing candidates' competitiveness when making recruitment decisions. Obtaining this information directly is difficult, however, so employers rely on several indirect techniques (Tilcsik, 2021).


An extension of the theory of statistical discrimination into the digital world can be observed in the labor market, where the problem of algorithmic discrimination in hiring has emerged. The mechanisms producing discrimination remain similar, since both rest on historical data about specific populations. While AI recruitment can provide numerous benefits, it can also be subject to algorithmic bias: when ratings consistently overestimate or underestimate the scores of a particular group, they produce a predictive bias. These discriminatory outcomes are often overlooked because of the misconception that AI processes are inherently objective and neutral (Raghavan et al., 2020).
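
One way such predictive bias can be surfaced in an audit is to compare predicted scores against actual outcomes per group: a consistently negative mean residual for one group means its members are systematically underestimated. The records below are hypothetical and merely illustrate the calculation.

```python
# Minimal per-group residual audit: predicted minus actual, averaged by
# group. A consistent nonzero gap signals systematic over/underestimation.
def mean_residual(records, group):
    diffs = [r["predicted"] - r["actual"]
             for r in records if r["group"] == group]
    return sum(diffs) / len(diffs)

records = [
    {"group": "A", "predicted": 0.80, "actual": 0.78},
    {"group": "A", "predicted": 0.71, "actual": 0.70},
    {"group": "B", "predicted": 0.55, "actual": 0.69},
    {"group": "B", "predicted": 0.48, "actual": 0.64},
]

for group in ("A", "B"):
    print(f"group {group}: mean residual {mean_residual(records, group):+.2f}")
# Group B's scores sit well below its actual outcomes: a predictive bias.
```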


The use of AI in healthcare

The 2019 study by Obermeyer and colleagues.

Biases and stereotypes associated with artificial intelligence, as well as with organizational context, can be observed in other research areas as well. In particular, the use of artificial intelligence in healthcare can promote diagnostic support and improve clinical efficiency and accuracy, but the literature has also shown that these tools are associated with biases and can, in some cases, produce them. One of the best-known studies is by Obermeyer and colleagues (2019), which analyzes an algorithm used in some US hospitals to identify patients and refer them to intensive care-management programs. Results show that, under the same clinical conditions, the algorithm tended to underestimate the needs of African-American patients compared to white patients (Obermeyer et al., 2019). This bias stems from the fact that the algorithm relied on past healthcare costs, which, due to structural inequalities in access to care, are lower for the African-American population. The algorithm thus reinforced pre-existing inequalities, reducing the likelihood that African-American patients would receive adequate care. These findings on dataset bias are supported by a systematic literature review showing that most health datasets are not only built on economic variables but are also skewed toward Western, male, and Caucasian populations, a bias that leads to lower diagnostic accuracy for women and ethnic minorities (Norori et al., 2021).
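
The proxy-label problem at the heart of that study can be reconstructed schematically (with synthetic numbers, not Obermeyer et al.'s data): clinical need is identically distributed in two groups, but unequal access makes past spending lower for one of them, so selecting patients by predicted cost under-refers that group.

```python
# Schematic sketch of label-choice bias: cost is used as a proxy for need,
# but cost also encodes unequal access to care.
import random

random.seed(3)
patients = []
for group, spend_per_unit_need in (("group_1", 100.0), ("group_2", 60.0)):
    for _ in range(1000):
        need = random.uniform(0, 10)            # true clinical need
        cost = need * spend_per_unit_need       # spending reflects access
        patients.append({"group": group, "need": need, "cost": cost})

# The program refers the top 20% of patients ranked by cost (the proxy).
costs = sorted(p["cost"] for p in patients)
cutoff = costs[int(0.8 * len(costs))]
referred = [p for p in patients if p["cost"] >= cutoff]

for group in ("group_1", "group_2"):
    share = sum(p["group"] == group for p in referred) / len(referred)
    print(f"{group}: {share:.0%} of referrals")
# Equal need distributions, but nearly all referrals go to the
# higher-spending group: the proxy reproduces the access inequality.
```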


Marketing, Advertising, and Face Recognition

In advertising and marketing, digital platforms use artificial intelligence systems to target offers and ads based on user data collected online. The literature shows that this targeting can exclude certain types of users. In the context of job advertisements, positions such as lawyer and engineer have been found to be shown more often to men than to women, a targeting decided not by the employer but by advertising algorithms designed to maximize engagement with the advertising post, without regard for the social and economic consequences, thereby increasing gender inequalities in the workplace (Lambrecht & Tucker, 2019).


Among artificial intelligence systems, facial recognition has been shown by several studies to be unreliable for certain groups. Buolamwini and Gebru (2018) demonstrated that algorithms err especially in recognizing dark-skinned women, while the error is almost zero for light-skinned men. The problem is not only technical but also ethical, since facial recognition is deployed in surveillance and control contexts where such errors can translate into discrimination (Buolamwini & Gebru, 2018).
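
The methodological point of that work, reporting error rates disaggregated by intersectional subgroup rather than a single aggregate accuracy, can be sketched as follows (the records are hypothetical; a real audit would use a benchmark dataset of labeled faces).

```python
# Minimal intersectional audit sketch: tally errors per (skin_type, gender)
# subgroup instead of reporting only an aggregate figure.
from collections import defaultdict

# Each record: (skin_type, gender, prediction_was_correct)
results = [
    ("lighter", "male", True),   ("lighter", "male", True),
    ("lighter", "female", True), ("lighter", "female", False),
    ("darker", "male", True),    ("darker", "male", False),
    ("darker", "female", False), ("darker", "female", False),
]

tallies = defaultdict(lambda: [0, 0])          # subgroup -> [errors, total]
for skin, gender, correct in results:
    tallies[(skin, gender)][0] += not correct
    tallies[(skin, gender)][1] += 1

overall_errors = sum(err for err, _ in tallies.values())
print(f"aggregate error rate: {overall_errors / len(results):.0%}")
for (skin, gender), (err, total) in sorted(tallies.items()):
    print(f"{skin} {gender}: error rate {err / total:.0%}")
# The aggregate figure hides that errors concentrate on darker-skinned women.
```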


Biases and stereotypes embedded in artificial intelligence are a barrier that can negatively affect how new technologies are designed, adopted, and used to support humans. Recognizing these biases and stereotypes does not mean giving up artificial intelligence; rather, it opens the possibility of developing a critical approach that combines transparency, training, and ethical accountability (Mehrabi et al., 2021).


The Psychological Cost of Polarization

Cognitive Dissonance and Informational Anxiety

Cognitive dissonance, theorized by Festinger (1957), describes the state of psychological distress that emerges when an individual simultaneously holds contradictory cognitions. Under the highly polarized conditions created by echo chambers, users can develop belief systems that are rigid and internally coherent, but progressively distant from the complexity of reality. When they inevitably encounter information that contradicts these beliefs, they experience acute cognitive dissonance, generally resolved not by updating their opinions but through strategies of source avoidance, rationalization, or denigration (Hart et al., 2009).


This process generates chronic informational anxiety: users develop an aversion to discordant information, perceiving it not as a learning opportunity but as an identity threat. The result is a progressive narrowing of the information universe considered legitimate and an increase in the psychological stress associated with exposure to alternative perspectives (Garrett & Weeks, 2013). In the long run, this condition can impair critical thinking, cognitive flexibility, and tolerance of ambiguity.


Reduction of Empathy

Empathy, understood as the ability to understand and share the emotional states of others, is an essential foundation of social cohesion and prosocial behavior (Decety & Jackson, 2004). Algorithmic polarization systematically erodes this capability through processes of out-group dehumanization. When individuals are predominantly exposed to stereotypical, demonized, or caricatured representations of those from different groups, their ability to recognize shared humanity and take the perspective of others is reduced (Waytz et al., 2014).


The constant categorization of us against them, fueled by algorithms, activates emotional distancing mechanisms that facilitate hostile attitudes and morally justify aggression towards the outgroup (Cikara et al., 2011). This reduction in empathy not only compromises interpersonal relationships and the quality of public debate but can also have broader consequences in terms of supporting discriminatory policies, tolerance of violence, and the erosion of social solidarity.


Fragmentation of the Social Fabric

Algorithmic polarization is not limited to online interactions but extends to offline interpersonal relationships, fragmenting families, friendships, and communities along increasingly rigid ideological lines (Iyengar et al., 2019). Studies show a significant increase in affective polarization, that is, emotional hostility toward those with different political orientations, resulting in a reduced willingness to interact, collaborate, or maintain relationships with members of the political outgroup (Iyengar & Westwood, 2015).


This fragmentation generates high costs at the individual and collective levels: relational stress, social isolation, reduced social capital, and impaired ability to cooperate to address common problems. In extremely polarized contexts, the very possibility of constructive democratic debate is being questioned, as different factions operate in parallel information universes, share fewer and fewer common references, and consider not only the positions but also the motivations of the other side illegitimate (Sunstein, 2017).


Conclusions: Towards a Digital Awareness

The analysis conducted here highlights how algorithmic polarization represents a complex, multidimensional phenomenon, rooted in the mass psychological dynamics already identified by classical social psychology, but taking on peculiar and potentially more insidious characteristics in the contemporary digital environment. Algorithms do not create polarization tendencies ex nihilo; they systematically amplify them by exploiting innate cognitive and emotional vulnerabilities, creating information ecosystems that foster fragmentation, radicalization, and intergroup conflict.


Developing digital awareness skills therefore becomes crucial for navigating these environments critically. This involves: recognizing the existence and functioning of filter bubbles and echo chambers; cultivating awareness of one's cognitive biases and of the emotional dynamics that influence content consumption; actively seeking exposure to diverse perspectives; developing skills for critically evaluating sources and content; and maintaining openness to dialogue and empathy toward those who express divergent opinions.


At the collective level, an informed public debate is needed on the responsibilities of digital platforms, the transparency of algorithms, and possible regulations that balance freedom of expression, technological innovation, and the protection of individual well-being and social cohesion. Only through a multidimensional approach, integrating individual awareness, media education, corporate responsibility, and public governance, will it be possible to mitigate the negative effects of algorithmic polarization and preserve spaces for democratic dialogue in the digital age.


Bibliographic References

Alietti, A., & Padovan, D. (2023). Le grammatiche del razzismo. Un’introduzione teorica e un percorso di ricerca. Venezia: Edizione Ca’ Foscari. https://doi.org/10.30687/978-88-6969-744-9

Allport, G. W. (1954). The nature of prejudice. Cambridge, MA: Addison-Wesley.

Anthonysamy, L., & Sivakumar, P. (2022). A new digital literacy framework to mitigate misinformation in social media infodemic. Global Knowledge, Memory and Communication, 73(6/7), 809–827.

Bail, C. A., Argyle, L. P., Brown, T. W., Bumpus, J. P., Chen, H., Hunzaker, M. F., Lee, J., Mann, M., Merhout, F., & Volfovsky, A. (2018). Exposure to opposing views on social media can increase political polarization. Proceedings of the National Academy of Sciences, 115(37), 9216–9221. https://doi.org/10.1073/pnas.1804840115


Bauman, Z. (2000). Liquid modernity. Polity Press.

Battaglia, F. (2023). I pregiudizi: un errore solo umano? Come i pregiudizi accomunano umani e algoritmi. In Giornate di studio sul razzismo. Atti della 3ᵃ e 4ᵃ edizione-2023 (pp.19-28). Lecce: Università del Salento.

Berger, J., & Milkman, K. L. (2012). What makes online content viral? Journal of Marketing Research, 49(2), 192–205. https://doi.org/10.1509/jmr.10.0353


Bozdag, E., & van den Hoven, J. (2015). Breaking the filter bubble: Democracy and design. Ethics and Information Technology, 17(4), 249–265. https://doi.org/10.1007/s10676-015-9380-y


Buolamwini, J., & Gebru, T. (2018, January). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on fairness, accountability and transparency (pp. 77-91). Cambridge, MA: PMLR. 


Brady, W. J., Wills, J. A., Jost, J. T., Tucker, J. A., & Van Bavel, J. J. (2017). Emotion shapes the diffusion of moralized content in social networks. Proceedings of the National Academy of Sciences, 114(28), 7313–7318. https://doi.org/10.1073/pnas.1618923114


Brewer, M. B. (1999). The psychology of prejudice: Ingroup love or outgroup hate? Journal of Social Issues, 55(3), 429–444. https://doi.org/10.1111/0022-4537.00126


Chahal, P., & Kaur, H. (2024, November). Analyzing the Impact of Artificial Intelligence on the Glass Ceiling and Glass Cliff Phenomena. In 2024 3rd Edition of IEEE Delhi Section Flagship Conference (DELCON) (pp. 1-4). New Delhi: IEEE. 


Chen, Z. (2023). Ethics and discrimination in artificial intelligence-enabled recruitment practices. Humanities and Social Sciences Communications, 10, 567. https://doi.org/10.1057/s41599-023-02079-x


Cikara, M., Bruneau, E. G., & Saxe, R. R. (2011). Us and them: Intergroup failures of empathy. Current Directions in Psychological Science, 20(3), 149–153. https://doi.org/10.1177/0963721411408713


Cinelli, M., De Francisci Morales, G., Galeazzi, A., Quattrociocchi, W., & Starnini, M. (2021). The echo chamber effect on social media. Proceedings of the National Academy of Sciences, 118(9), e2023301118. https://doi.org/10.1073/pnas.2023301118


Coe, K., Kenski, K., & Rains, S. A. (2014). Online and uncivil? Patterns and determinants of incivility in newspaper website comments. Journal of Communication, 64(4), 658–679. https://doi.org/10.1111/jcom.12104


Crockett, M. J. (2017). Moral outrage in the digital age. Nature Human Behaviour, 1(11), 769–771. https://doi.org/10.1038/s41562-017-0213-3


Decety, J., & Jackson, P. L. (2004). The functional architecture of human empathy. Behavioral and Cognitive Neuroscience Reviews, 3(2), 71–100. https://doi.org/10.1177/1534582304267187

Devine, P. G. (1989). Stereotypes and prejudice: Their automatic and controlled components. Journal of Personality and Social Psychology, 56(1), 5–18.


Fan, R., Zhao, J., Chen, Y., & Xu, K. (2014). Anger is more influential than joy: Sentiment correlation in Weibo. PLoS ONE, 9(10), e110184. https://doi.org/10.1371/journal.pone.0110184


Festinger, L. (1957). A theory of cognitive dissonance. Stanford University Press.


Freud, S. (2013). Psicologia delle masse e analisi dell'Io. Bollati Boringhieri. (Original work published 1921)


Garrett, R. K., & Weeks, B. E. (2013). The promise and peril of real-time corrections to political misperceptions. In Proceedings of the 2013 Conference on Computer Supported Cooperative Work (pp. 1047–1058). ACM. https://doi.org/10.1145/2441776.2441895


Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. J. Boczkowski, & K. A. Foot (Eds.), Media technologies: Essays on communication, materiality, and society (pp. 167–194). MIT Press.


Goel, S., Anderson, A., Hofman, J., & Watts, D. J. (2016). The structural virality of online diffusion. Management Science, 62(1), 180–196. https://doi.org/10.1287/mnsc.2015.2158


Hampton, K. N., Rainie, L., Lu, W., Dwyer, M., Shin, I., & Purcell, K. (2014). Social media and the 'spiral of silence'. Pew Research Center.


Hart, W., Albarracín, D., Eagly, A. H., Brechan, I., Lindberg, M. J., & Merrill, L. (2009). Feeling validated versus being correct: A meta-analysis of selective exposure to information. Psychological Bulletin, 135(4), 555–588. https://doi.org/10.1037/a0015701


Hatfield, E., Cacioppo, J. T., & Rapson, R. L. (1993). Emotional contagion. Current Directions in Psychological Science, 2(3), 96–100. https://doi.org/10.1111/1467-8721.ep10770953


Hewstone, M., Rubin, M., & Willis, H. (2002). Intergroup bias. Annual Review of Psychology, 53(1), 575–604. https://doi.org/10.1146/annurev.psych.53.100901.135109


Iyengar, S., Lelkes, Y., Levendusky, M., Malhotra, N., & Westwood, S. J. (2019). The origins and consequences of affective polarization in the United States. Annual Review of Political Science, 22(1), 129–146. https://doi.org/10.1146/annurev-polisci-051117-073034


Iyengar, S., & Westwood, S. J. (2015). Fear and loathing across party lines: New evidence on group polarization. American Journal of Political Science, 59(3), 690–707. https://doi.org/10.1111/ajps.12152


Janis, I. L. (1972). Victims of groupthink: A psychological study of foreign-policy decisions and fiascoes. Houghton Mifflin.


Kramer, A. D., Guillory, J. E., & Hancock, J. T. (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences, 111(24), 8788–8790. https://doi.org/10.1073/pnas.1320040111


Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480–498. https://doi.org/10.1037/0033-2909.108.3.480


Lambrecht, A., & Tucker, C. (2019). Algorithmic bias? An empirical study of apparent gender-based discrimination in the display of STEM career ads. Management Science, 65(7), 2966–2981.


Le Bon, G. (2001). Psicologia delle folle. TEA. (Original work published 1895)


Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6), 1–35.


Moeller, J., Kühne, R., & de Vreese, C. (2018). Mobilizing youth in the 21st century: How digital media use fosters civic duty, information efficacy, and political participation. Journal of Broadcasting & Electronic Media, 62(3), 445–460. https://doi.org/10.1080/08838151.2018.1451866


Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175–220. https://doi.org/10.1037/1089-2680.2.2.175

Norori, N., Hu, Q., Aellen, F. M., Faraci, F. D., & Tzovara, A. (2021). Addressing bias in big data and AI for health care: A call for open science. Patterns, 2(10).

Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447-453.

Pariser, E. (2011). The filter bubble: What the Internet is hiding from you. Penguin Press.

Pettigrew, T. F. (2016). In pursuit of three theories: Authoritarianism, relative deprivation, and intergroup contact. Annual Review of Psychology, 67(1), 1–21. https://doi.org/10.1146/annurev-psych-122414-033327


Quattrociocchi, W., Scala, A., & Sunstein, C. R. (2016). Echo chambers on Facebook. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.2795110


Raghavan, M., Barocas, S., Kleinberg, J., & Levy, K. (2020, January). Mitigating bias in algorithmic hiring: Evaluating claims and practices. In Proceedings of the 2020 conference on fairness, accountability, and transparency (pp. 469-481). ACM.


Runciman, W. G. (1966). Relative deprivation and social justice: A study of attitudes to social inequality in twentieth-century England. University of California Press.


Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press. https://doi.org/10.2307/j.ctt1pwt9w5


Suler, J. (2004). The online disinhibition effect. CyberPsychology & Behavior, 7(3), 321–326. https://doi.org/10.1089/1094931041291295


Sunstein, C. R. (2017). #Republic: Divided democracy in the age of social media. Princeton University Press.


Tajfel, H., & Turner, J. C. (1979). An integrative theory of intergroup conflict. In W. G. Austin & S. Worchel (Eds.), The social psychology of intergroup relations (pp. 33–47). Brooks/Cole.


Tarde, G. (1903). The laws of imitation. Henry Holt. (Original work published 1890)


Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5(2), 207–232. https://doi.org/10.1016/0010-0285(73)90033-9


Tilcsik, A. (2021). Statistical discrimination and the rationalization of stereotypes. American Sociological Review, 86(1), 93-122. 


Van Dijck, J., & Poell, T. (2013). Understanding social media logic. Media and Communication, 1(1), 2–14. https://doi.org/10.12924/mac2013.01010002


Waytz, A., Hoffman, K. M., & Trawalter, S. (2014). A superhumanization bias in Whites' perceptions of Blacks. Social Psychological and Personality Science, 6(3), 352–359. https://doi.org/10.1177/1948550614553642


Zimbardo, P. G. (1969). The human choice: Individuation, reason, and order versus deindividuation, impulse, and chaos. In W. J. Arnold & D. Levine (Eds.), Nebraska Symposium on Motivation (Vol. 17, pp. 237–307). University of Nebraska Press.


