Beyond Disinformation: How Our Emotions Online Are Steering the 2024 Elections

A wounded ear trumping the Republican campaign, a hung French parliament weighing the returns of tactical voting, and a victorious Labour winning seats far out of proportion to its vote share – 2024 is the year of unpredictable elections.

Design Credits: Simran Thapliyal

This year will witness general elections in 64 countries that represent 49% of the world's population. Tumultuous political developments and a decline in democracy call for careful consideration of the factors affecting voter behaviour. As social media platforms pull democracies into their fold (37% of people in the UK consume political news via social media), their relationship with democracy and politics has new factors in the mix – young voters and their mental health. Whilst a growing number of younger people are supporting far-right groups in the UK and Europe, they are also the cohort experiencing a rise in mental health difficulties. Increasing evidence of a correlation between social media use and worsening mental health makes it imperative to consider the nexus of social media, mental health and political news as a factor influencing elections this year.

Studies on the overall influence of social media platforms on democracies and political dysfunction remain contradictory. Researchers' access to social media data has always been limited, and recent trends are making it worse. Until we are able to mine Meta's data, let's consider how social media triggers emotions like anger and fear, affecting mental health and voter decision-making.

Emotions Run High: From Political News to Fearful Polarisation

When an individual engages with a social media post, their interaction with and response to stimuli set into motion an algorithm specifically designed to keep them engaged. Those who are interested in politics consume high levels of political information on social media (Tucker et al., 2018). These individuals are often media junkies: they consume political news wherever they can find it and, owing to their addictive consumption, are highly exposed to differing views. Thus, they are able to avoid echo chambers, a phenomenon considered crucial in increasing polarisation on social media. Meta repeatedly cites this study to absolve itself of claims that it erodes democracy.

However, what Meta often ignores is its impact on other factors that make up a healthy democracy. The Varieties of Democracy Index characterises democracy as “a political system in which political leaders are elected under comprehensive voting rights in free and fair elections, and freedoms of association and expression are guaranteed”. 

It is particularly ‘freedom of association’ and ‘freedom of expression’ that social media distorts.

Political news on social media is often inflammatory, outrageous or highly sensational, aiming to produce ‘shock’ and so optimise for engagement. Consuming such outrageous news corresponds to high levels of negative emotions such as anger, hatred, fear, disgust or anxiety (Weismueller et al., 2023). When we consume sensational information, negative emotions activate cognitive biases designed to make us pay greater attention to danger for our safety. As individuals become habituated to experiencing heightened anger, fear and the like, they stay hooked on social media content. Such repeated exposure to negative emotions has a detrimental impact on mental health (Nolen-Hoeksema et al., 2008).

Social media interactions worsen anxiety and depression: anxious engagement with emotional content makes us focus more on threatening material without our awareness (Mineka et al., 2003). Similarly, people in depressive states disproportionately remember negative content.

Additionally, the ad-revenue, engagement-based incentives of Meta and other social media platforms optimise for addiction and out-engineer human agency. Such addictive, compulsive consumption sits in a tight feedback loop with mental health: compulsive behaviour creates a gap between intention and action, which in turn takes a further toll on mental health.

Mindless scrolling and reactions to outrageous content work through the psychological processes responsible for responding intuitively to stimuli, called System 1 thinking (Weismueller et al., 2023). Unwinding with mindless scrolling after a long day is how many millennials and Gen Zers regulate their emotions, leaving internal triggers that mark an unprecedented power asymmetry between humans and media technologies. Bypassing the processes that contribute to conscious decision-making, digital platforms keep voters indulged in a mindless barrage of information that dysregulates their emotions and aggravates their mental health.

Content that triggers negative emotions, increases anxiety and depression, plays into our cognitive biases and forges a mindless addiction makes the ‘freedom of association’ characteristic of democracy murky territory.

For example, when a young person engages with content on the impact of a rising immigrant population on housing or health infrastructure, they may experience fear for their future; repeated exposure can exacerbate their anxiety, which in turn makes them more alert to any such threat from ‘immigrants’. This does not make them xenophobic, but when politicians play into this fear for votes, their association with such groups is not a ‘free’ choice; it is orchestrated through psycho-emotional mechanisms that manipulate decision-making.

On social media, associations are often formed on the basis of sensational, outrageous or fake news that manipulates emotional behaviour. Democratic elections in 2024 hinge on volatile political sentiment shaped by the virality of content on social media, as seen in the surge of fake news after the recent assassination attempt on former President Donald Trump.

As a diversity of content reaches individuals through engagement-based algorithms, the likelihood of different groups consuming conflicting content is high – threatening the fabric of a shared social reality.

A Shared Social Reality Under Threat

When people are anxiously hooked on social media for dopamine hits, political sentiment online may not represent their honest views. Precarious engagement with extreme content gives a false picture of opinion that may not reflect the deep-seated needs of citizens. An anxious individual, highly prone to cognitive biases, may be reactive in their politics – supporting groups that channel their anger or frustration, often without offering resolutions. As a result, polarisation becomes a by-product of technology that extracts and exploits humans' affective mechanisms.

Another way social media creates cracks in our shared reality is by giving the perception that certain phenomena are more powerful or enveloping than they actually are, perhaps due to self-censorship by dissenters. For example, the deluge of polarising content on Hindutva before the Indian elections seemed to indicate massive public support and a sweeping win for the ruling BJP government. Yet, in a shocking result, the government emerged on weaker ground than expected: those who were afraid to speak up exercised their dissent with their vote. Similar results can be seen in other countries.

One reason for such perceived power could be that in polarising environments moderate opinions seem weak or indecisive. Affective polarisation, where individuals feel more attached to their partisan groups and more hostile to those on the other side, prevents moderate opinions from surfacing without being criticised or seen as betrayal.

Even when efforts are made to debunk sensational fake news with moderate opinions, they are often unsuccessful (Margolin et al., 2017; Shin et al., 2017, in Tucker et al., 2018). This is because our brains are inclined to believe our senses, especially when they converge. The abundance of highly visual social media (HVSM) content plays on our biological instincts, leaving imprints that are difficult to undo, despite attempts to reverse damaging interactions.

That is not to say that social media has not fostered movements that embolden our freedom of speech. #MeToo, #BlackLivesMatter and #WomanLifeFreedom have all left huge online imprints and sparked massive on-ground protests. However, the longevity required to transform movements into sustained social change is faltering. When online movements do result in on-ground action, it is often fleeting, lacking the sustained dissent and attention needed to keep movements alive.

Microtrends, flash activism and transient rage fade with the next big thing. The impact of social media on our politics has two-fold implications: not only is social media making us mentally ill, it also aggravates psychological exploitation by encouraging behaviour that neither reflects true opinions nor serves the best political interests of citizens.

Fragmentation, Truth and Thinking-Feeling with Social Media 

I do not intend to demonise the revolutionary impact of AI-driven digital media on democracies. AI is empowering students to learn interactively with digital avatars, allowing alternative independent media to flourish, and creating pathways for marginalised issues to reach millions. Some even describe AI systems as facilitating anti-hierarchical or democratic values. Tech optimists hail the latest developments in generative AI for replacing mind-numbing tasks and catalysing ‘thinking outside the box’. Journalists will now be able to create digital avatars for video content, visualise graphics and write articles with a click, increasing productivity and efficiency.

However, such ‘democratic’ components have contradictory implications when students' memory retention and mental health are deteriorating, independent journalists are shadow-banned on platforms and marginalised groups increasingly experience hate crimes. Before we say technology is only as good as the people who use it, it is vital to consider the emotional and cognitive impact of social media on citizens' perception of truth and agency.

As social media and AI impair the brain's deep thinking and memory retention, they also impair people's social skills. In increasingly alienating urban cultures, socially isolated individuals find a greater sense of trust in digital subcultures, often based on ideologies or interests fostered by interactions online. In parallel, decreased trust in public institutions and doom-scrolling through conspiracy theories prevent meaningful dialogue across ideological lines. When interactions across ideological lines do take place online, they leave people feeling stressed and finding those they disagree with ‘uniquely angry’ (Tucker et al., 2018).

Despite recent studies (Boxell et al., 2017; Dubois and Blank, 2018) tempering claims about echo chambers, social media and online usage still exhibit high levels of political segregation (Tucker et al., 2018). This indicates that interactions across partisan lines, despite increased exposure, may not open dialogue; they might simply reinforce strongly held beliefs. Zeynep Tufekci frames it succinctly: “It's like hearing them from the opposing team while sitting with our fellow fans in a football stadium… we bond with our team by yelling at the fans of the other one”.

If interactions on social media worsen mental health, they also exploit anxious or depressive states, pushing people to focus on threatening or negative content for the sake of engagement; so the moderating effect of weaker echo chambers and cross-exposure to ideologically different views does little to change these tendencies.

Most people on social media engage with content mindlessly or intuitively. Today, social media houses different socio-political identities: hundreds of millions of users engage with content targeted at their gender, race, class, age, geography and other identity parameters. Because identities are embedded in unequal social hierarchies, each subgroup's viral content exhibits its prejudices and biases. Since each of these vantage points represents a situated social position, what is perceived as truth by one may be contradictory, or even violent, to another.

Supporters of Reform UK’s Nigel Farage live in a world where anxiety over the deteriorating British economy is interwoven with the fall of traditional British values, and narratives blame both on infiltrating ‘foreigners’. Such narratives connect anecdotes and lived experiences of the cost-of-living crisis and rising crime with ‘foreigners’ who do not embody British values. When supporters do witness multiculturalism in their community or online, such rhetoric reaffirms their fears. Their political standpoint reflects a coherence theory of truth, even if it seems ridiculous to outsiders.

Our perception of truth is not merely facts strung together; it is also experience and content, reality perceived through our fixed framework for viewing the world. Each individual's perception of reality, combining real and virtual lives, aligns with its own coherence theory of truth. We are wired to desire coherence, and in the current precarity of late capitalism, social media builds coherence between fears and lived experiences. It creates subcultures of presumed in-group coherence that may be contradictory, or even violent, to other groups' lived realities. It is fair to say that social media creates mini bubbles of coherence around the various biases and assumed fears of individuals, which may be contributing to fragmentation in social democracies.

When the symbolic ‘box’ is a product of limbic capitalist systems that paralyse human choice, it persists as our pre-engineered frame of reference before any ‘thinking outside the box’ can begin. Emphasising AI innovation to solve problems caused by big tech only increases our overreliance on technology - digital avatars giving mental health advice to teenagers may not be the creative thinking we need or desire. 

We may now understand why interactions across ideological lines may not open dialogue. Two people growing up in the same household may hold vastly contradictory opinions owing to a single identity parameter, say, gender. Their physical and virtual realities diverge along the lines (gender or otherwise) that algorithms use to feed content. Then cross-cutting exposure, i.e., when young boys engage with ‘feminist’ content that calls for male accountability, or when women read about men struggling to meet patriarchal expectations of masculinity, can arouse negative feelings that threaten their internal coherence. The individual may seek compensatory reactions, such as dismissing the content as false or searching for opposing content that validates their own coherence theory of truth. This is evidently detrimental to democratic dialogue.

When coherence is the goal in a post-truth world, it is fulfilled by fragmented realities specially curated for our ‘explore’ feeds. What is then fundamentally under threat is the notion of a shared social reality. Technology that creates fragmentation unknown in the pre-internet age ultimately denies mental freedom and the dignity of choice to hundreds of millions of people. It also undermines our collective capacity and memory to solve the problems we face.

In an age of informational abundance, if people are investing in politics that harms them and cripples their mental health, it is worth considering how such technology is developed. To ensure a healthy democracy, it is vital that these technologies are developed democratically, with open access to data for researchers. As a public good, developments in social media should involve consultation with varied interest groups, much as governmental policies are developed. The cybernetics of democracy must extend to the tech industry if healthy democracies are to stand a chance. Until then, this election year will serve as a testing ground for whether human agency can outmanoeuvre the technologies designed to exploit it.

References: 

Mineka, S., Rafaeli, E., & Yovel, I. (2003). Cognitive biases in emotional disorders: Information processing and social-cognitive perspectives. In R. J. Davidson, K. R. Scherer, & H. H. Goldsmith (Eds.), Handbook of affective sciences (pp. 976–1009). Oxford University Press. 

Nolen-Hoeksema, S., Wisco, B.E. and Lyubomirsky, S. (2008). Rethinking Rumination. Perspectives on Psychological Science, 3(5), pp.400–424. doi:https://doi.org/10.1111/j.1745-6924.2008.00088.x.

Tucker, J.A., Guess, A., Barbera, P., Vaccari, C., Siegel, A., Sanovich, S., Stukal, D. and Nyhan, B. (2018). Social Media, Political Polarization, and Political Disinformation: A Review of the Scientific Literature. [online] SSRN. Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3144139.

Weismueller, J., Gruner, R.L., Harrigan, P., Coussement, K. and Wang, S. (2023). Information sharing and political polarisation on social media: The role of falsehood and partisanship. Information Systems Journal. doi:https://doi.org/10.1111/isj.12453.

*This article was conceptualised, written and edited by a human being, with additional edits based on ChatGPT's feedback.


