
 Disinformation at the Paris 2024 Olympics: A Digital Race?

Zaur Gouliev

The Paris 2024 Olympics provide a unique lens through which to examine the relationship between digital policy and disinformation. The convergence of geopolitical tensions, advances in digital technologies, and the heightened visibility of the Games creates a perfect environment for malicious actors seeking to sow discord and fear. Before introducing our case, we should first be precise about what misinformation and disinformation are. Misinformation is false information that is spread without any actual intent to mislead, while disinformation is the more concerning of the two: false or misleading information deliberately intended to deceive [1] an individual, an audience or, in this instance, fans tuning into the Paris 2024 Olympics. The key aspect to consider when distinguishing the two is intent, as disinformation typically serves some sort of objective and purpose [Ibid]. Both have grown in scope, scale and complexity over the last decade, prompting many European nations, including Ireland, to identify disinformation campaigns as a threat to security and human rights [2]. One reason for this is that misinformation and disinformation incidents can now be created and conducted far faster, and with far fewer resources, than before the Internet age [3]. The use of generative AI, the rise of paid social media bots, organised trolling, and the anonymity and ease of disseminating information are just some of the contributing factors.

In 2024, more than half of the world is online and one in three people worldwide use social media [4]. For many, it has become integral to political engagement, and so we can all be vulnerable to this type of malicious activity, which at its core attempts to manipulate public perception and destabilise us. As the technology required to produce online disinformation continues to advance, so do the challenges associated with it. Legislators, policymakers, and the private sector are constantly trying to balance the benefits of innovations in communication and information technologies with the need to protect citizens' rights online and ensure safety and stability [5]. Disinformation actors often exploit a range of platforms in coordination: influence actors spread false narratives on X, Meta, Telegram and TikTok, while the smaller, less regulated Alt-Tech platforms like Gab and Discord provide fertile ground for these campaigns, as less stringent content moderation allows misleading content to flourish unchecked.

This blog aims to do two things: first, to take a close look at how disinformation campaigns are targeting the Olympic Games and the consequences for the integrity of the Olympics; and second, to discuss the importance of data-driven digital policies in countering these types of threats.

Disinformation campaigns targeting the Olympics are not new. Russia, in particular, has in the past sought to undermine international sporting events it could not dominate. During the 1984 Summer Games in Los Angeles, the Soviet Union distributed leaflets to various Olympic committees, falsely claiming that non-white competitors would be targeted by US extremists [6]. This tactic aimed to sow discord and fear among participants and audiences and was an effective means of destabilising the event. Fast forward to the digital age: the tools and methods have evolved, but the core strategy remains unchanged: to undermine the credibility of international events and institutions. The 2018 Winter Olympics in Pyeongchang witnessed the "Olympic Destroyer" cyberattack, attributed to Russian military intelligence, which disrupted several aspects of the event: the official Olympics website was taken offline, the Wi-Fi service became inoperable, live broadcast systems faced heavy disruption, and ticket printing was unavailable during the opening ceremony, affecting tens of thousands of people. Andy Greenberg, a senior writer for WIRED covering cybersecurity, called it the most deceptive hack in history [7, 8]. As we approach the Paris 2024 Olympics, these efforts have become more sophisticated, leveraging artificial intelligence and social media platforms to amplify their reach.

According to a report by the Microsoft Threat Analysis Center (MTAC), Russian actors have been at the forefront of these efforts, creating elaborate disinformation campaigns aimed at discrediting the International Olympic Committee (IOC) and instilling fear about attending the Games [9]. The digital strategies employed are diverse and sophisticated, including AI-generated videos, spoofed news articles, and social media bots that amplify misleading narratives. Russia's discontent with the IOC has intensified, especially following investigations into state-sponsored doping that led to the banning of Russian athletes from the Olympics [10, 11]. This dissatisfaction has fuelled a series of malign influence campaigns designed to discredit the IOC and create an atmosphere of fear and uncertainty surrounding the Games.

These actors employ a range of tactics, from creating fake documentaries to spreading false claims, all aimed at undermining the credibility of the IOC and deterring spectators from attending the Games​. One of the most prominent examples of such disinformation is the “Olympics Has Fallen” disinformation campaign [12]. This fake documentary, promoted on Telegram and other social media platforms, used AI-generated audio to mimic the voice of Tom Cruise and falsely alleged corruption within the IOC. The campaign demonstrated a sophisticated blend of traditional disinformation techniques with modern digital tools like deepfakes, reflecting the increasing complexity of these operations.


Figure 1: A visual from the fake documentary "Olympics Has Fallen", produced by Russia-affiliated influence actor Storm-1679, which targets the International Olympic Committee and advances pro-Kremlin disinformation. The documentary uses the image and likeness of American actor Tom Cruise, who did not participate in any such documentary. Source: MTAC.

Such efforts are designed not just to tarnish the reputation of the Olympics but also to deter spectators and participants from attending, thus potentially destabilising the event. These campaigns also exploit topical news stories to boost engagement and credibility. Fake news clips, purportedly from credible sources like Euronews and France 24, falsely claimed that terrorism fears led to significant ticket returns by Parisians [13]. Another disinformation incident involved a video falsely claiming that Parisians were buying property insurance in anticipation of terrorism during the Games [Ibid]. These videos, designed to appear as legitimate news reports, were widely circulated on social media, further attempting to destabilise the event. This landscape becomes increasingly difficult for policymakers to navigate. The ability to create and disseminate false information quickly and widely complicates efforts to maintain any form of integrity of information online. Digital policy to combat disinformation must, at its best, navigate the fine line between protecting free speech and combating harmful content such as this.

Figure 2: A faked video press release warning the public of possible terror attacks at the 2024 Paris Summer Olympics (left). A fabricated France 24 news clip claiming that nearly a quarter of Paris 2024 tickets have been returned due to concerns over terrorism (right). Both forgeries were produced by Russia-affiliated disinformation actors. Source: MTAC.

Recommendations for a Safer Information Environment

To combat these threats, a multi-layered digital policy strategy is essential. The UCD Centre for Digital Policy emphasises the need for both reactive and proactive measures. Policies should be informed by thorough research and data analysis to address the specific ways in which technologies are used in disinformation campaigns​.

On the reactive side, social media companies must enhance their content moderation processes; the departments that deal with this are typically called Trust and Safety (TnS) [14]. These departments need efficient training plans for content moderation teams to detect and remove known disinformation campaigns. This work can be AI-assisted, utilising state-of-the-art tools capable of detecting and neutralising disinformation in real time. Content moderation teams and tools must also operate in multiple languages to counter non-English campaigns effectively; it is not enough to counter English-only content in an integrated European society.
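The moderation workflow described above can be sketched in miniature. The example below is a toy illustration only, assuming nothing about any real platform's systems: a simple keyword heuristic stands in for the trained multilingual classifier a real Trust and Safety pipeline would use, and the watch-list phrases and language codes are hypothetical.

```python
# Toy sketch of AI-assisted, multilingual moderation triage.
# The keyword heuristic below is a stand-in for a real multilingual
# classifier; phrases and scores are illustrative, not a real system.

from dataclasses import dataclass

# Hypothetical per-language watch-list of known false-narrative phrases.
WATCHLIST = {
    "en": ["tickets returned over terrorism", "ioc corruption documentary"],
    "fr": ["billets rendus par peur du terrorisme"],
}

@dataclass
class Post:
    text: str
    lang: str

def score(post: Post) -> float:
    """Return a naive risk score in [0, 1] for a post."""
    phrases = WATCHLIST.get(post.lang, [])
    hits = sum(p in post.text.lower() for p in phrases)
    # Any hit puts the post over a review threshold of 0.5.
    return min(1.0, hits / max(len(phrases), 1) + 0.5 * bool(hits))

def triage(posts, threshold=0.5):
    """Split posts into those needing human review and the rest."""
    flagged = [p for p in posts if score(p) >= threshold]
    cleared = [p for p in posts if score(p) < threshold]
    return flagged, cleared
```

The design point is the split itself: automated scoring narrows the stream so human moderators, working across languages, review only the high-risk remainder.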

The proactive measures include public education campaigns to improve digital literacy, helping individuals recognise and reject false information. International cooperation is crucial: disinformation is a transnational threat, and combating it effectively requires coordinated efforts across borders, including the sharing of intelligence, best practices, and technological innovations. The European Digital Media Observatory (EDMO) plays an important role in this regard [15], providing a platform for collaboration and data sharing among researchers, policymakers, and industry players. Combining this with pre-bunking techniques, i.e., providing factual information before false narratives gain traction, can also be an effective strategy. The Irish government has taken a significant step forward by attempting to implement and enforce regulations that hold platforms like X and Meta accountable for the spread of disinformation. The co-regulatory framework discussed in the National Counter Disinformation Strategy Working Group can serve as a model for such regulations [16].

There are also frameworks that can be used both reactively and proactively, such as the Disinformation Analysis and Response Measures Framework (DISARM) [17], which is designed to codify and share intelligence on disinformation and influence operations. It is an open-source master framework for fighting disinformation through the coordination of effective action, constituting a knowledge base of techniques used by disinformers and countermeasures used by defenders. The framework provides standardised definitions of techniques and countermeasures to facilitate communication among defenders, and it was built to empower collective defence against disinformation and influence operations. It has recently been used by the European External Action Service (EEAS) [18], the diplomatic service of the European Union, and has been praised as a step in the right direction for combating the spread and effects of disinformation.
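The core of a DISARM-style knowledge base is a shared vocabulary that pairs attacker techniques with defender countermeasures. The sketch below illustrates that idea only; the identifiers and entries are hypothetical and do not reproduce the official DISARM catalogue.

```python
# Minimal sketch of a DISARM-style knowledge base: techniques used by
# disinformers mapped to countermeasures used by defenders. All IDs and
# descriptions are illustrative, not the official DISARM catalogue.

TECHNIQUES = {
    "T0001": "Develop AI-generated video or audio (deepfakes)",
    "T0002": "Spoof the branding of legitimate news outlets",
}

COUNTERMEASURES = {
    # countermeasure ID -> (description, technique IDs it addresses)
    "C0001": ("Publish pre-bunking advisories before narratives spread",
              ["T0001", "T0002"]),
    "C0002": ("Verify media provenance before amplifying content",
              ["T0001"]),
}

def countermeasures_for(technique_id: str) -> list[str]:
    """List countermeasure descriptions that address a given technique."""
    return [desc for desc, targets in COUNTERMEASURES.values()
            if technique_id in targets]
```

Because every defender refers to the same technique IDs, intelligence about an ongoing campaign ("we are seeing T0002 activity") can be shared and acted on without ambiguity, which is the collective-defence property the framework is built around.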

Beyond the technological and policy-driven responses, the human element remains critical. Disinformation thrives on exploiting human emotions and biases. Therefore, fostering a more informed and critical public is as important as any technological solution. Initiatives like community and school engagement projects can empower individuals to critically assess the information they encounter and reduce the overall effectiveness of disinformation campaigns. In this context, the work of organisations like the Institute for Strategic Dialogue (ISD) [19] is a valuable asset. Their mixed-methods approach, combining open-source intelligence, ethnographic research, and large-scale data analysis, provides an understanding of the disinformation landscape and informs more effective countermeasures.

As the 2024 Paris Olympics draw near, the threat of disinformation looms large. However, with concerted efforts from policymakers, technology companies, and civil society, it is possible to mitigate these risks. The UCD Centre for Digital Policy brings attention to the importance of adaptive, evidence-based policies that can evolve with the digital landscape. The Paris 2024 Olympics represent not only a celebration of athletic excellence but also a test of our ability to navigate the complex landscape of digital disinformation and misinformation.

References

[1] Fallis, D. (2015). What is disinformation? Library Trends, 63(3), 401-426.

[2] Boban, M. (2022). Information and disinformation: impact on national security in the digital age. In Economic and Social Development: Book of Proceedings (pp. 309-317).

[3] European Commission. (2018). Tackling online disinformation: A European approach. Brussels.

[4] Ortiz-Ospina, E. (2019). The rise of social media. OurWorldInData.org.

[5] Hook, K., & Verdeja, E. (n.d.). Social media misinformation and the prevention of political instability and mass atrocities.

[6] Wertheim, J. (2020, July 28). Past interference. Sports Illustrated.

[7] West, T. (2024). Olympics, Cyber threats to Paris 2024.

[8] Greenberg, A. (2020, October). Untold story of the 2018 Olympics destroyer cyberattack. 

[9] Microsoft Threat Analysis Center. (2024). Russian influence efforts converge on 2024 Paris Olympic Games. Microsoft Threat Intelligence Report.

[10] NPR. (2023, December 9). Russians and Belarusians can compete in the 2024 Olympics as neutral athletes.

[11] International Olympic Committee. (2023, October). IOC Executive Board suspends Russian Olympic Committee with immediate effect.

[12] France 24. (2024, June 4). Russian influence campaign targets Paris Olympics using fake Tom Cruise documentary.

[13] New York Times. (2024, June 3). Fake news reports and videos seek to undermine the Paris Olympics.

[14] Trust & Safety Professional Association. (2024).

[15] European Digital Media Observatory. (2024).

[16] National Counter Disinformation Strategy. Department of Tourism, Culture, Arts, Gaeltacht, Sport and Media. (2023, March 30).

[17] DISARM Framework. (2024).

[18] The 2nd EEAS Report on Foreign Information Manipulation and Interference (FIMI), European External Action Service. (2024).

[19] Institute for Strategic Dialogue. (2024).


Bio:

Zaur Gouliev is a PhD student at the UCD School of Information and Communication Studies researching disinformation, influence operations, state propaganda and foreign information manipulation and interference (FIMI). He is supervised by Dr. Brendan Spillane and Dr. Benjamin Cowan, and is involved in Dr. Spillane’s EU Horizon project ATHENA. The work of the project is crucial for the protection of democratic processes in Europe in light of recent FIMI campaigns using disinformation and the surge in cyber-attacks originating from countries like Russia and China. 
