Conflict Entrepreneurs. The commercial internet’s once great democratic promise has curdled into something its early evangelists would scarcely recognise. Instead of delivering ‘marketplaces of ideas’, social media platforms have, by and large, deteriorated into cesspits of offensive madness. Simultaneously, as ‘algorithmic societies’ have taken over, local independent journalism, which helps reduce political polarisation and improve social cohesion, continues to vanish.
Regardless of what one may feel about the political views of Charlie Kirk - or indeed the way he conducted his politics - few will disagree that his gruesome, tragic assassination signals a moment of deep concern for the health of America’s democracy. In the last two years, the US has witnessed several high-profile incidents of political violence, including the attacks on Minnesota Democrats State Senator Hoffman and State Representative Hortman (and their families), the arson attempt on the home of Pennsylvania Governor Shapiro, and, most notoriously, the two attempts on the life of President Trump during the campaign. In the first half of 2025 alone, the US saw approximately 150 politically motivated attacks.
In several of these incidents, the perpetrator (or suspect) has fit a common profile: young, socially alienated and disenchanted men who spent significant time online. The meme-related content and references to internet jokes from platforms like Discord and 4chan found at the crime scenes suggest exposure to online radicalising content. To prevent more such incidents, we must examine the role that algorithmically driven content is playing in compounding political polarisation and spreading ‘meme-driven violence’, and urgently discuss policy interventions to address social media platforms’ monetisation of polarising, hateful and extremist content online.
In 2009, Facebook, the then-dominant social media platform, introduced its first ranked, personalised news feed with EdgeRank, utterly transforming how we interact online, how we receive and process information, and even our psychologies. This transformation has had a serious impact on all consumers of social media, but especially on digitally native youths who grew up in this media environment.
Compounding this transformation, and driving this online amplification of hate, are so-called ‘conflict entrepreneurs.’ These actors are not new. The term refers to political elites, media personalities, influencers and even whole institutions who deliberately stoke social and political conflict for their own benefit. One type of conflict entrepreneur, for instance, is the violence entrepreneur: someone who hijacks a social movement and pushes it towards violence, often for personal and political gain. History regards Franjo Tuđman and Slobodan Milošević as prime examples.
Today’s online conflict entrepreneurs similarly see personal benefit in stoking the flames of tension. However, these actors perform this role almost entirely in the online realm, and are primarily driven by profit.
Social media platforms have become ideal ecosystems for these actors. By prioritising engagement above all else, platforms continue to reward controversy as currency. When actors are as divisive as possible in their online interactions, the engagement metrics for a given post, tweet, video or ‘take’ skyrocket, as onlookers from either side of the debate feel obliged to weigh in, often provoking more outrage and hijacking our natural dopamine responses. This is the engagement racket, and it is a highly lucrative one for this new type of conflict entrepreneur. They have benefitted greatly from platform monetisation and the corresponding advertising revenue, and stand to benefit most from direct audience monetisation, affiliate marketing and sponsored content, as their high engagement rates and follower counts are highly prized. Such actors are flourishing on social media because the system has enabled and elevated their hate. To understand the real accelerant of this online polarisation, we must examine the engine fuelling the new, online conflict entrepreneurs.
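To see why divisiveness pays, consider a minimal sketch of engagement-driven ranking. All weights, field names and numbers below are illustrative assumptions, not any platform’s actual formula:

```python
from dataclasses import dataclass

@dataclass
class Post:
    likes: int
    shares: int
    comments: int
    age_hours: float

def engagement_score(post: Post) -> float:
    """Toy engagement-weighted ranking score.

    Comments and shares are weighted more heavily than likes because
    they both signal and provoke further interaction; a time-decay
    term keeps the feed fresh. All weights are illustrative.
    """
    interactions = post.likes + 2 * post.shares + 3 * post.comments
    return interactions / (post.age_hours + 2) ** 1.5

# A divisive post that draws hundreds of angry replies outranks a
# calmer post with identical likes: outrage is mechanically rewarded.
calm = Post(likes=400, shares=20, comments=30, age_hours=5)
divisive = Post(likes=400, shares=120, comments=500, age_hours=5)
assert engagement_score(divisive) > engagement_score(calm)
```

Under any formula of this shape, the rational strategy for a profit-seeking poster is to maximise replies and shares, and nothing generates replies like provocation.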
What’s the Algorithm, Kenneth? A social media algorithm aims to tailor an individual’s feed, elevating certain posts and content on their home page and drawing them to the user’s attention. Prior to 2009, platforms did largely perform the role they initially promised to play: that of a ‘digital public sphere’ with a flowing exchange of ideas and knowledge. With the EdgeRank algorithm, Facebook hoped that by serving users personalised content, they would remain engaged for longer. Longer engagement meant more advertisement impressions, and therefore more revenue. Initially, this saw posts containing photos and high-production videos promoted, alongside high-engagement content such as clickbait articles. In 2018, Facebook changed algorithmic course again, choosing to elevate posts that encouraged interaction and yet more engagement. The favoured content included material popular with users’ friends and family, and viral memes. It also began to promote divisive content.
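EdgeRank itself was publicly described in simple terms: a post’s score for a given viewer summed, over every interaction (‘edge’) attached to the post, the product of affinity, edge weight and time decay. The sketch below follows that public description; the actual coefficients were never published, so every number here is an assumption:

```python
import math
import time

def edgerank_score(edges: list[dict], now: float) -> float:
    """Simplified EdgeRank, per Facebook's public 2010 description:
    score = sum over edges of (affinity * weight * time_decay).

    Each 'edge' is an interaction (like, comment, tag) on the post.
    - affinity: how close the viewer is to whoever created the edge
    - weight:   richer interactions count more (comment > like)
    - decay:    older activity counts less (illustrative one-day scale)
    """
    score = 0.0
    for edge in edges:
        decay = math.exp(-(now - edge["created_at"]) / 86_400)
        score += edge["affinity"] * edge["weight"] * decay
    return score

# A fresh comment from a close friend outweighs a stranger's stale like.
now = time.time()
edges = [
    {"affinity": 0.9, "weight": 3.0, "created_at": now - 3_600},    # friend's comment, 1h old
    {"affinity": 0.1, "weight": 1.0, "created_at": now - 259_200},  # stranger's like, 3 days old
]
print(f"{edgerank_score(edges, now):.2f}")
```

The structure matters more than the coefficients: whatever the exact values, content that keeps accumulating fresh, high-weight interactions keeps rising.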
Algorithms are, by now, central to all social media platforms, including YouTube, TikTok, Facebook, Instagram and, in arguably the most notorious case, X (formerly Twitter). Following Elon Musk’s purchase of the platform in 2022, commentators and users alike have noted a distinct change in the content privileged by its algorithm. As Facebook discovered around 15 years ago, social platforms like X have a monetary incentive to amplify the extremist propaganda, mis- and disinformation, and polarising narratives that generate more engagement. Since Mr. Musk took over, the occurrence of racist slurs on the site has roughly tripled, whilst antisemitic posts have doubled. TikTok’s equivalent feed screen, its ‘For You’ page, regularly presents users with extremist content and incel material, which can lead individuals into online rabbit holes and echo chambers.
A trend gaining increased scrutiny of late is the transfer of content from smaller, fringe messaging apps and less-regulated platforms (like Telegram), and from gaming or gaming-adjacent sites (such as Reddit, Discord, Steam and Twitch), to more mainstream platforms. This content, which tends to be more radical, and often more dangerous, is then similarly amplified through the dominant social media platforms. In addition to being conducive to the promotion of extremist, polarising content, these platforms also facilitate the exploitation and grooming of ‘very online’ vulnerable young people, many of whom desire human connection.
Figure 1. Elon Musk (@elonmusk) / X
Note: Elon Musk acknowledges the volume of negativity being amplified by X’s algorithms, promising change. ‘Unregretted user-seconds’ refers to time spent on social media that users do not feel was a waste, in contrast to mindless scrolling that often involves engaging with divisive, polarised content.
Source: X
Down the Rabbit Hole. What has been the effect of this algorithmic amplification of online hatred? Social media platforms themselves have become crucial vehicles for spreading disinformation and radicalising individuals, ensuring that the more outrageous the post, the more users it reaches. The results are clear. Tech-savvy far-right parties across Europe, like the AfD and Reform UK, have ridden algorithms to electoral success, adopting divisive content as a component of their political messaging. Clever use of algorithms helped guide young men (and Gen Z more broadly) overwhelmingly towards Trump in last year’s election (see our previous blog, Listening to Dog Whistles). Other effects are more startling. Bots and trolls continue to utilise algorithmic amplification to spread their narratives and influence elections. The recent presidential election in Romania demonstrated how foreign governments can do just that: Russian and Chinese agents exploited the Chinese-state-linked TikTok to benefit the campaign of the pro-Russian, ultranationalist candidate Călin Georgescu.
Furthermore, algorithmic recommendations lead users into ‘echo chambers’ or ‘filter bubbles’ where misinformation and extremist narratives are repeatedly reinforced, exploiting the psychological phenomenon of ‘confirmation bias’ to shape the beliefs and actions of their audiences. Once inside these algorithmically reinforced echo chambers, users are presented with ever more extreme, conspiratorial and false content, often packaged in meme format or as ‘real news.’ Eventually, once in-groups form and users become convinced of a threat to their own, they may begin radicalising towards violence. Most affected by this descent of social media are digital natives across the world: young people who had not graduated high school prior to the outbreak of the COVID-19 pandemic. Growing up with unfettered access to the worst of the internet, with limited critical-thinking and media literacy training (compounded by COVID-related interruptions to education), these young people are prime targets for extremist narratives, conspiracy theories and disinformation online.
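The rabbit-hole dynamic can be made concrete with a toy feedback loop; this is a deliberately crude illustration under invented assumptions, not a model of any real recommender. If the system serves content slightly more extreme than whatever the user last engaged with, and engagement in turn shifts the user’s taste, the user drifts steadily outward:

```python
import random

def recommend(user_position: float) -> float:
    """Toy recommender: serve content near the user's current taste,
    biased slightly toward the extreme end of a 0..1 scale, because
    (in this toy model) more extreme content earns more engagement."""
    return min(1.0, user_position + random.uniform(-0.02, 0.06))

def simulate(steps: int = 200) -> float:
    position = 0.1  # user starts near the moderate end
    for _ in range(steps):
        shown = recommend(position)
        # Engaging with what is shown nudges the user's taste toward it.
        position += 0.5 * (shown - position)
    return position

print(f"after 200 recommendations: {simulate():.2f}")  # drifts toward 1.0
```

Even a small systematic bias toward the extreme, applied hundreds of times, is enough to move a notionally moderate user most of the way to the fringe.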
The Road Ahead. The recent tragic assassination of Mr. Kirk should be a wake-up call to the dangerous role social media algorithms are playing, as Utah Governor Cox suggested in the wake of Kirk’s death. States and international institutions appear to be recognising the threat posed, as evidenced most notably by the UK’s recent Online Safety Act and the EU’s Digital Services Act. The latter, for instance, obliges Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) to disclose details of how their algorithms function, and to cooperate with independent researchers evaluating their effects. Britain’s Act requires online platforms to reduce the spread of illegal content, including disinformation, extremist content and hate speech, and to manage the risks associated with harmful material. The law applies to user-to-user and search services active in the UK, and imposes duties to assess and mitigate risks, implement content moderation systems, and ensure transparent reporting. Platforms must act proportionately based on their size and the risk level posed by their services. Australia, meanwhile, plans to ban social media entirely for under-16s.
Despite these efforts, a purely stick-based approach is unlikely to bear fruit in the quest for algorithmic responsibility. On the one hand, the very regulatory mechanisms designed to protect democratic discourse, particularly regarding freedom of speech, can be reinterpreted as tools of suppression depending on one’s political perspective. On the other, there exists an essentially bipartisan consensus in Washington on the importance of this sector, as a component of the information-finance-business-government complex, to the prosperity and security of the US. Platforms are unlikely to regulate themselves, and states will not risk the ire of the US administration by enforcing online safety acts.
There is, however, a third way. ‘Middleware’ is envisioned as a competitive layer of independent content-curation services sitting between platforms and users, enabling individuals, rather than the platforms, to choose how their information environments are filtered and prioritised. Rather than being subject to the engagement-driven algorithms described throughout, users would regain agency over their social media feeds. In the absence of such structural reform, fostering a culture of deliberate and self-aware engagement with algorithms offers a partial and pragmatic step forward. But without middleware or a similar systemic intervention, the underlying incentive to monetise division and hate will continue to corrode democratic discourse and, increasingly, public safety.
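To illustrate the architecture (a hypothetical interface invented for this sketch, not a reference to any existing middleware product), a curation service would receive the platform’s candidate posts and return them re-ranked under a policy the user has chosen:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    engagement: float    # platform's engagement prediction
    divisiveness: float  # hypothetical third-party toxicity estimate, 0..1

# A middleware curator is simply a user-chosen re-ranking function that
# sits between the platform's candidate posts and the user's screen.
def platform_default(candidates: list[Post]) -> list[Post]:
    """The status quo: pure engagement ranking."""
    return sorted(candidates, key=lambda p: p.engagement, reverse=True)

def calm_curator(candidates: list[Post]) -> list[Post]:
    """A user-selected policy that down-weights divisive material."""
    return sorted(candidates,
                  key=lambda p: p.engagement * (1 - p.divisiveness),
                  reverse=True)

feed = [Post("local news roundup", 3.0, 0.1),
        Post("outrage-bait hot take", 9.0, 0.9)]
print([p.text for p in platform_default(feed)])  # hot take first
print([p.text for p in calm_curator(feed)])      # local news first
```

The design point is that the ranking logic lives with the user-chosen curator rather than the platform, so competing curation policies could vie for users on quality rather than on engagement.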