Policing alt-tech for disinformation? VIGILANTly moving in the right direction
Zaur Gouliev and Dr Sarah Anne Dunne
Source: The Telegraph 2023
Policing social media is challenging (Williams et al., 2021), not only because of its scale (De Streel et al., 2020) but also because the decentralised nature of social media platforms makes it difficult for law enforcement authorities to monitor illegal behaviour (Kala, 2024). Researchers have argued both for a police presence on social media (Kershaw, 2023) and about the challenges it might bring (ibid.; Abel, 2022). Social media surveillance is not a new concept in policing operations (Crisp, 2021; Egawhary, 2019), and social media data is routinely used to prosecute individuals for crimes (Murphy, 2013). Some have even, boldly, argued that police departments should routinely monitor social media to prevent crimes (Bousquet, 2018). There is a consensus amongst experts that dis-/misinformation and fake news are the largest social problem on social media platforms (Aïmeur et al., 2023), and one with consequences beyond hashtags and memes. Not all disinformation is illegal, but some is, especially disinformation that attempts to incite violence against vulnerable individuals and groups (Ó Fathaigh et al., 2021; European External Action Service, 2022). These types of disinformation are especially dangerous.
Social media platforms are inherently built to allow information to travel quickly between users, and features like algorithmic amplification, anonymity and networked communities enable disinformation to spread at lightning speed (Shin, 2024; Filimowicz, 2022; Weber, 2021; McBride, 2017). A single post can reach a million people within minutes (Dizikes, 2018), and a single post has the potential to incite a violent reaction from individuals or groups who have either capitalised on the disinformation or fallen victim to it. Disinformation is a coordinated effort (Vargas, 2020), though some of those involved may not realise that they are sharing incorrect or falsified information. This effort can take many forms, from individual trolls to troll armies, and from obvious bot accounts to hard-to-detect botnets (Howard, 2016). Social media companies like X and Meta call this “coordinated inauthentic behaviour” (Meta, 2024), a fair term but one that arguably lacks precision: if those sharing the content are not trolling, and the content is not being disseminated by bots, what is the cut-off point between authentic and inauthentic? Is it the coordination? If so, what about activity that is highly uncoordinated? We thus run into problems of classification, not so different from those found in building machine learning models more generally (Schröder, 2022); and since social media companies often rely on machine learning to detect this type of behaviour, that is not surprising.
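To make that classification problem concrete, one common heuristic in the coordinated-behaviour literature is to flag pairs of accounts that repeatedly share the same content within a narrow time window. The sketch below is a minimal illustration with invented accounts, content IDs and thresholds; it is not how Meta, X or any platform actually implements detection.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical post stream: (account, content_id, unix_timestamp).
# Accounts and content IDs are illustrative, not real data.
posts = [
    ("acct_a", "claim_123", 1_700_000_000),
    ("acct_b", "claim_123", 1_700_000_020),
    ("acct_c", "claim_123", 1_700_000_045),
    ("acct_a", "claim_999", 1_700_003_600),
    ("acct_d", "claim_999", 1_700_090_000),
]

WINDOW_SECONDS = 60   # how close in time two shares must be to look "coordinated" (arbitrary threshold)
MIN_CO_SHARES = 1     # how many near-simultaneous co-shares before a pair of accounts is flagged

def coordinated_pairs(posts, window=WINDOW_SECONDS, min_co_shares=MIN_CO_SHARES):
    """Count, for each pair of accounts, how often they share the same item within `window` seconds."""
    shares_by_item = defaultdict(list)
    for account, item, ts in posts:
        shares_by_item[item].append((account, ts))

    pair_counts = defaultdict(int)
    for shares in shares_by_item.values():
        for (a1, t1), (a2, t2) in combinations(shares, 2):
            if a1 != a2 and abs(t1 - t2) <= window:
                pair_counts[tuple(sorted((a1, a2)))] += 1

    return {pair: n for pair, n in pair_counts.items() if n >= min_co_shares}

print(coordinated_pairs(posts))
# {('acct_a', 'acct_b'): 1, ('acct_a', 'acct_c'): 1, ('acct_b', 'acct_c'): 1}
```

Everything here hinges on arbitrary thresholds: widen the window or lower the co-share count and more accounts fall on the “inauthentic” side of the line, which is precisely the classification problem described above.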
So why police social media?
Scholars have presented a host of reasons why varying authorities should police and engage with social media (Dekker, 2020; Stubbs, 2021; Kershaw, 2023). We argue that a less cited reason is the ability of social media posts and content to influence people’s behaviour, which in turn can promote radicalisation, extremist behaviour and harmful ideas that threaten society and democracy as a whole (Whittaker, 2022; Roberts-Ingleson, 2023; Hui, 2024). The hallmarks of these behaviours often include the use and spread of dis-/misinformation, which many researchers around the world, especially in defence studies, have identified as a national security threat (Gioe, 2021). Disinformation campaigns on alternative social media platforms, referred to as “alt-tech”, like Gab, Parler, BitChute and even larger messenger apps like Telegram, can create real-world harm (Dehghan, 2022; Comerford et al., 2023). These platforms harbour extremist communities where hate speech and incitement to violence lead to the mobilisation of harmful groups (Peucker, 2022; Scott, 2022). Disinformation narratives can fuel dangerous offline actions, including protests, harassment and violence targeting specific locations such as synagogues, mosques or asylum centres (Szakács, 2022; Farinelli, 2021). Police should take these risks seriously, as the jump from coordinated online activity to physical confrontation is not far-fetched, given past instances like the Dublin riots of November 2023 and the riots across the UK in August 2024, both of which were exacerbated and promoted on alt-tech platforms (Coimisiún na Meán, 2023; Owen-Jones, 2023). Telegram in particular was used during these instances to coordinate physical confrontation and to share disinformation about the attacks and riots, so the connection between online activity and offline violence is only a single disinformation artefact away.
A Telegram message forwarded from the channel SOUTHPORT WAKE UP shared the location of the Southport Islamic Society Mosque.
Source: Quantifying Extremism: Institute of Strategic Dialogue
Given that, one might reframe the question: why don’t we police social media? Firstly, past attempts were primarily via regulation, but jurisdictional issues and cross-border regulation have made it hard both to track and to punish those who spread harmful content online (Bickert, 2020). It is also problematic because of free speech laws and because the definition of harmful content differs from one jurisdiction to another, as well as from one platform to another. Content moderation itself often follows Eurocentric and Western models of knowledge: that is, most content moderation is concentrated on English-language content and often reflects limited contextual knowledge (Elswah, 2024). Secondly, and a point that is often overlooked, social media platforms wield huge lobbying power, which makes it difficult for governments to enforce meaningful engagement and to have extreme or problematic content removed. As it stands, the recently founded Irish media regulator, Coimisiún na Meán, is developing an Online Safety Code which will require Irish-based social media companies such as Meta, X (formerly Twitter), Reddit and TikTok to regulate and police online content. The code aims to ensure that social media users, particularly children, are protected on such platforms from digital harms such as cyberbullying, sexual harassment and incitement to hatred or violence.
A post by a channel linked to Tommy Robinson, with reactions celebrating protestors setting fire to a migrant hotel.
Source: Quantifying Extremism: Institute of Strategic Dialogue
A third and final point is the use of self- or user-based regulation, in operation on Meta’s and X’s platforms, which involves general users acting as content moderators via report functions: both companies have long promoted a self-regulatory model where users can flag or report content. Since February, the Coimisiún has also opened a complaints mechanism for the general public to highlight dangerous or extreme content. The report/flag mechanism, and content moderation more generally, depends on the employment of content moderators, which in itself presents numerous problems. Such work, when not completed by AI systems, is often outsourced to third-party companies in the Global South, where employees are frequently undertrained and underpaid in a line of work that can cause varying health issues owing to the distressing nature of the content (Elswah, 2024). Content moderation is further challenged by the sheer volume of material and media types being created and shared across platforms. Furthermore, the employment of content moderation techniques could be seen as contradictory to the end goal of many social media companies, which benefit from the massive profits generated via user engagement with disinformation (whether in favour of or against the content in question). This begs the question: is the company more dedicated to protecting its users from dangerous and harmful content, or are freedom of speech and expression, and the engagement garnered via disinformation campaigns, more significant? The answer to such a query is made visible when tech figures such as Meta’s Mark Zuckerberg tell parents of children who have experienced online harm and sexual harassment: ‘I’m sorry for everything you’ve been through’.
In addressing these problems, content moderation policies, user suspensions and deplatforming have prompted a simultaneous move to alt-tech platforms; this is perhaps unsurprising given that such platforms are often marketed as defenders of free speech and have little to no functioning content moderation. Disinformation and harmful narratives thus remain unchallenged on these platforms and can, as a result, be disseminated easily across the social media sphere. Naturally, these platforms attract a wide range of users, from adherents of fringe political ideologies like Third Positionism, to 9/11, 5G and vaccine conspiracy theorists, to white nationalist activists, provocateurs and internet troll groups like the Groypers, owing to their permissive (read: non-existent) content moderation and laissez-faire hate speech policies. These policies allow disinformation to flourish with little to no intervention, to the point that scholars have highlighted the risk of echo chambers leading to radicalisation or extremism. This means there is not only a higher concentration of harmful content but also the potential for disinformation to spill over into real-world violent action.
Law enforcement agencies have a case for intervening to mitigate these threats, and it is the authors’ opinion that intelligence operations would be justified in monitoring this activity to prevent harm from occurring, and that such action can be taken, as the two case studies presented below via the VIGILANT project will demonstrate.
The VIGILANT Project
The VIGILANT project can be considered an example of a proactive measure: it aims to equip police authorities with the technical capabilities and institutional knowledge necessary to combat disinformation and other related harmful content. Simultaneously, it seeks to provide an understanding of the social drivers and behavioural dynamics behind these phenomena. In VIGILANT, the “I” stands for “intelligence,” a vital (the “V”) component of any law enforcement agency. Intelligence allows agencies to stay ahead of threats by enabling analysts and officers to build a picture of what’s happening, why it’s occurring and who the main actors are in a given environment, be it a neighbourhood, a dark web marketplace or a platform that harbours extremist views. Let’s consider two case studies to demonstrate how policing agencies could have curtailed disinformation that led to violent acts.
I. Southport Riots – Applying VIGILANT
Following the tragic stabbing in Southport in August 2024, the absence of immediate official details led to a disinformation cascade. Prominent political figures like Nigel Farage and influencers such as Andrew Tate exploited the information vacuum to spread anti-immigrant and Islamophobic narratives about the attacker. This rapidly incited violent protests across the UK and Northern Ireland. Alt-tech platforms like Telegram and Gab played a key role in both organising protests and amplifying conspiracy theories, which falsely framed the incident as part of a “two-tier justice system” that allegedly favoured immigrants over native Britons. Disinformation, propagated by far-right groups such as the UK Active Club Network, encouraged combat training and radical mobilisation, leading to attacks on mosques, refugee centres and immigrant-owned businesses. Although some enforcement actions were taken after the fact, such as the removal of extremist Telegram channels, disinformation actors quickly regrouped, demonstrating the difficulty of controlling such decentralised platforms. Senior Met officers attributed the violent outbreaks directly to the disinformation narratives that fuelled public anger.

VIGILANT, if implemented during the Southport riots, could have identified early signs of disinformation through its cross-platform analysis and detection capabilities. By flagging and tracking the spread of harmful narratives, such as the anti-immigrant rhetoric that rapidly circulated on Telegram and X, the platform could have provided law enforcement with real-time intelligence to preemptively counter false claims. This would have enabled police forces to mobilise quickly to protect vulnerable sites, such as mosques and immigrant-owned businesses, and to debunk dangerous conspiracy theories through official channels. VIGILANT’s ability to identify influential users and coordinated disinformation campaigns would have helped disrupt the far-right groups responsible for stoking violence, offering law enforcement a strategic advantage in mitigating further escalation. This proactive intelligence could have played a role in containing the riots and reducing the spread of violence linked to online disinformation.
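To make “identifying early signs” more concrete, the sketch below shows one simple way such a signal could be computed: compare the hourly volume of messages matching a tracked narrative’s keywords against a rolling baseline and raise an alert when it spikes. The data, thresholds and function names are hypothetical, and this is not a description of VIGILANT’s actual tooling.

```python
from statistics import mean, pstdev

# Hypothetical hourly counts of messages matching a tracked narrative's keywords.
hourly_counts = [4, 6, 5, 7, 5, 6, 8, 5, 41, 120]  # the final two hours show a surge

def spike_alerts(counts, baseline_hours=6, threshold_sigmas=3.0):
    """Flag hours whose count exceeds the rolling mean by `threshold_sigmas` standard deviations."""
    alerts = []
    for i in range(baseline_hours, len(counts)):
        baseline = counts[i - baseline_hours:i]
        mu = mean(baseline)
        sigma = pstdev(baseline) or 1.0  # guard against a perfectly flat baseline
        if counts[i] > mu + threshold_sigmas * sigma:
            alerts.append((i, counts[i], round(mu, 1)))
    return alerts

for hour, count, baseline_mean in spike_alerts(hourly_counts):
    print(f"hour {hour}: {count} matching messages vs. baseline mean {baseline_mean} -> alert analysts")
```

In this toy example the last two hours trigger alerts; in practice the difficult work is upstream, in collecting messages across platforms and matching multilingual narratives reliably, rather than in the spike test itself.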
II. Dublin Riots – Applying VIGILANT
The Dublin riots followed an eerily similar narrative to that of the UK and Northern Ireland riots, reflecting the standardised and connected nature of the global far right. In the wake of the Parnell Square stabbing in November 2023, disinformation and conspiracy theories spread rapidly on platforms like Telegram and across X, fuelling anti-immigrant sentiment across the nation and particularly in Dublin. Within hours of the attack, far-right activists used these platforms to organise protests, circulating false claims about the suspect’s nationality and calling for violent actions against immigrants. This led to riots, looting and arson throughout the city centre. Gardaí reported that anonymous Telegram channels were used to coordinate these protests, with many targeting Garda officers and public figures, further escalating tensions. Although Telegram removed some of the most egregious channels, disinformation actors evaded moderation and moved content and organisation to other channels and platforms, continuing to incite violence. Senior Garda officers confirmed that online disinformation played a key role in intensifying the riots, particularly through the targeting of migrant camps and acts of violence across Dublin.

VIGILANT could have served as a useful tool here: had it been operational during the Dublin riots, its disinformation tracking tools could have flagged early warnings about the surge in anti-immigrant rhetoric on Telegram and X. This real-time intelligence could have enabled Gardaí to preemptively counter false claims about the suspect and the event, and to deploy public order units, reducing the potency of narratives that incited violence. By identifying influential far-right users and their coordinated efforts to mobilise offline actions, VIGILANT would have provided authorities with a detailed understanding of how disinformation spread across platforms. This would have allowed Gardaí to strategically deploy officers to vulnerable areas like migrant camps and public transport hubs, potentially preventing further damage and violence. Through cross-platform analysis, VIGILANT could have helped pinpoint the most influential disinformation actors, allowing law enforcement to take swift action to disrupt networks before the riots escalated.
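As a simplified illustration of what “pinpointing the most influential disinformation actors” can involve (again with invented data, and not VIGILANT’s actual method), one can build a forwarding graph from channel messages and rank original authors by how often, and by how many distinct channels, their content is forwarded:

```python
from collections import Counter, defaultdict

# Hypothetical forward events: (forwarding_channel, original_author). All names are invented.
forwards = [
    ("channel_1", "actor_x"),
    ("channel_2", "actor_x"),
    ("channel_3", "actor_x"),
    ("channel_1", "actor_y"),
    ("channel_2", "actor_y"),
    ("channel_4", "actor_z"),
]

def rank_influencers(forwards):
    """Rank original authors by the number of distinct channels amplifying them, then by total forwards."""
    total_forwards = Counter(author for _, author in forwards)
    distinct_amplifiers = defaultdict(set)
    for channel, author in forwards:
        distinct_amplifiers[author].add(channel)

    ranking = sorted(
        total_forwards,
        key=lambda a: (len(distinct_amplifiers[a]), total_forwards[a]),
        reverse=True,
    )
    return [(a, total_forwards[a], len(distinct_amplifiers[a])) for a in ranking]

for author, n_forwards, n_channels in rank_influencers(forwards):
    print(f"{author}: forwarded {n_forwards} times by {n_channels} distinct channels")
```

Ranking by distinct amplifiers rather than raw forward counts is one way to surface accounts whose content moves across many communities, which is closer to the cross-platform picture described above.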
VIGILANT, evidently, offers new opportunities for intelligence agencies and policing authorities not only to combat and undermine disinformation after the fact, but also to recognise emerging patterns before violence occurs. The Online Safety Code, similarly, may further such aims by obliging social media and big tech companies to enforce more coherent and stringent content moderation policies.
References:
- Williams, M., Butler, M., Jurek-Loughrey, A., & Sezer, S. (2021). Offensive communications: exploring the challenges involved in policing social media. Contemporary Social Science, 16(2), 227-240. https://doi.org/10.1080/21582041.2018.1563305
- De Streel, A., Defreyne, E., Jacquemin, H., Ledger, M., Michel, A., Innesti, A., Goubet, M., & Ustowski, D. (2020). Online platforms’ moderation of illegal content online: Law, practices, and options for reform. Study for the Committee on Internal Market and Consumer Protection, Policy Department for Economic, Scientific and Quality of Life Policies, European Parliament, Luxembourg.
- Kala, E. M. (2024). Influence of online platforms on criminal behavior. International Journal of Research Studies in Computer Science and Engineering (IJRSCSE), 10(1), 25-37. https://doi.org/10.20431/2349-4859.1001004
- Dias Kershaw, G. (2023). Police and social media: The need for presence and the challenges this poses. Beijing Law Review, 14, 1758-1771. https://doi.org/10.4236/blr.2023.144097
- Abel, J. (2022). Cop-‘Like’: The First Amendment, criminal procedure, and police social media speech. Stanford Law Review, 74, 1119. Available at SSRN: https://ssrn.com/abstract=3928030
- Crisp, W. (2021, October 25). Tracking without transparency: Met police expands social media surveillance operations. Byline Times. https://bylinetimes.com/2021/10/25/tracking-without-transparency-met-police-expands-social-media-surveillance-operations/
- Egawhary, E. M. (2019). The surveillance dimensions of the use of social media by UK police forces. Surveillance & Society, 17(1/2). https://doi.org/10.24908/ss.v17i1/2.12916
- Murphy, J. P., & Fontecilla, A. (2013). Social media evidence in government investigations and criminal proceedings: A frontier of new legal issues. Richmond Journal of Law & Technology, 19(11). Available at http://jolt.richmond.edu/v19i3/article11.pdf
- Bousquet, C. R. (2018, April 20). Why police should monitor social media to prevent crime. Wired. https://www.wired.com/story/why-police-should-monitor-social-media-to-prevent-crime/
- Aïmeur, E., Amri, S., & Brassard, G. (2023). Fake news, disinformation and misinformation in social media: A review. Social Network Analysis and Mining, 13, 30. https://doi.org/10.1007/s13278-023-01028-5
- European External Action Service. (2022). 2022 report on EEAS activities to counter FIMI: Strategic communications, task forces, and information analysis. European External Action Service. https://euhybnet.eu/wp-content/uploads/2022/11/EEAS-AnnualReport-WEB_v3.4.pdf
- Ó Fathaigh, R., Helberger, N., & Appelman, N. (2021). The perils of legally defining disinformation. Internet Policy Review, 10(4). https://doi.org/10.14763/2021.4.1584
- Shin, D. (2024). Artificial misinformation. Routledge.
- Li, Q. (2023). Review of Deep fakes: Algorithms and society by M. Filimowicz. Routledge, 90, Abingdon. First published, 07 September 2023. ISBN: 978-1-032-00260-6.
- Weber, D., & Neumann, F. (2021). Amplifying influence through coordinated behaviour in social networks. Social Network Analysis and Mining, 11, 111. https://doi.org/10.1007/s13278-021-00815-2
- McBride, P. (2017, November 8). Enrichment and exploitation: How website algorithms affect democracy. Paul McBride. https://paulmcbride.me/2017/11/08/enrichment-and-exploitation-how-algorithms-affect-democracy/
- Dizikes, P. (2018, March 8). Study: On Twitter, false news travels faster than true stories. MIT News Office. https://news.mit.edu/2018/study-twitter-false-news-travels-faster-true-stories-0308
- Vargas, L., Emami, P., & Traynor, P. (2020). On the detection of disinformation campaign activity with network analysis. In Proceedings of the 2020 Workshop on Disinformation, Misinformation, and Fake News.
- Howard, P. N., & Kollanyi, B. (2016, June 20). Bots, #StrongerIn, and #Brexit: Computational propaganda during the UK-EU referendum. Available at SSRN: https://ssrn.com/abstract=279831
- Meta. (2024). Community standards: Inauthentic behaviour. Meta Transparency Center. https://transparency.meta.com/en-gb/policies/community-standards/inauthentic-behavior/
- Schröder, T., & Schulz, M. (2022). Monitoring machine learning models: A categorization of challenges and methods. Data Science and Management, 5(3), 105-116.
- Dekker, R., van den Brink, P., & Meijer, A. (2020). Social media adoption in the police: Barriers and strategies. Government Information Quarterly, 37(2), 101441. https://doi.org/10.1016/j.giq.2019.101441
- Whittaker, J. (2022). Online radicalisation: What we know. European Commission.
- Roberts-Ingleson, E. M., & McCann, W. S. (2023). The link between misinformation and radicalisation: Current knowledge and areas for future inquiry. Perspectives on Terrorism, 17(1), 36-49. https://www.jstor.org/stable/27209215
- Hui, E., Singh, S., Lin, P. K. F., & Dillon, D. (2024). Social media influence on emerging adults’ prosocial behavior: A systematic review. Journal of Applied Social Psychology, 239-265. https://doi.org/10.1080/01973533.2024.234239
- Gioe, D. V., Smith, M., Littell, J., & Dawson, J. (2021). Pride of place: Reconceptualizing disinformation as the United States’ greatest national security challenge. PRISM, 9(3).
- Peucker, M., & Fisher, T. J. (2022). Mainstream media use for far-right mobilisation on the alt-tech online platform Gab. Media, Culture & Society, 45(9). https://doi.org/10.1177/01634437221111943
- Dehghan, E., & Nagappa, A. (2022). Politicization and radicalization of discourses in the alt-tech ecosystem: A case study on Gab social. Social Media and Society, 8(3), 1-12.
- Szakács, J., & Bognár, É. (2021). The impact of disinformation campaigns about migrants and minorities. European Union.
- Coimisiún na Meán. (2023, November 24). Update from Coimisiún na Meán following violent incidents in Dublin on November 23rd. Coimisiún na Meán. https://www.cnam.ie/update-from-coimisiun-na-mean-following-violent-incidents-in-dublin-on-november-23rd/
- Bickert, M. (2020, February). Charting a way forward: Online content regulation. Facebook white paper.
- Elswah, M. (2024, January 30). Investigating content moderation systems in the Global South. Center for Democracy & Technology. Accessed 18 September 2024. https://cdt.org/insights/investigating-content-moderation-systems-in-the-global-south/
Bios:
Zaur Gouliev is a PhD student at the UCD School of Information and Communication Studies researching disinformation, influence operations, state propaganda and foreign information manipulation and interference (FIMI). He is supervised by Dr. Brendan Spillane and Dr. Benjamin Cowan, and is involved in Dr. Spillane’s EU Horizon project ATHENA. The work of the project is crucial for the protection of democratic processes in Europe in light of recent FIMI campaigns using disinformation and the surge in cyber-attacks originating from countries like Russia and China.
Dr Sarah Anne Dunne is a post-doctoral research assistant and administrator for UCD Centre for Digital Policy.
Her research interests include digital cultures and policies, feminism, gender and sexuality studies and critical theories. She has previously worked with Prof Eugenia Siapera on the IRC Platforming Harm Project, examining the circulation of harmful health narratives during the Covid-19 pandemic and subsequently analysing the spread of far-right material and anti-democratic (anti-LGBTQ and anti-immigrant) messages on alt-tech platforms. Her PhD thesis focused on manifestations of rape culture, victim-blaming mentalities, and the feminist interventions that emerged on the microblogging platform Twitter during 2016-2017. She is currently involved in research on the growth of far-right political sentiment and activism emerging online in Ireland.