Threat Intelligence Report: The Potential For Global Disinformation and Misinformation Campaigns for 2024
2024-01-05 | Author: krypt3ia.wordpress.com

This blog post was created jointly by Scot Terban and ChatGPT-4, using the DisinfoTracker AI Analyst created and trained by Scot Terban.

Overview:

In 2024, the global political landscape is witnessing an unprecedented wave of democratic exercises, with more than 50 elections scheduled across various nations. This phenomenon encompasses a vast demographic, affecting nearly 2 billion individuals worldwide. This period is crucial not only because of the sheer number of citizens involved but also due to the heightened risk posed by sophisticated disinformation campaigns. These campaigns, increasingly leveraging advanced technologies like artificial intelligence, are emerging as formidable threats to the integrity of these democratic processes. The nexus of technology and misinformation creates a complex challenge, as it significantly blurs the line between factual information and fabricated content, potentially misleading voters and undermining the core principles of democracy. The effective management of this threat requires coordinated efforts between various stakeholders, including governments, technology firms, and civil society, to ensure the preservation of trust and fairness in electoral systems globally.

Key Actors and Tactics:

Russia’s Continued Influence Operations:

Russia remains a key player in the disinformation domain, adapting its tactics to current digital trends. After the widespread use of bots and fake social media accounts in previous years, Russia has now shifted focus towards audio and video content, utilizing platforms like YouTube and Clubhouse. These mediums are harder to moderate and offer more persuasive and engaging ways to spread disinformation. Russian agent Andrei Derkach’s use of YouTube to disseminate leaked audio files in the “NABU Leaks” information campaign is a prime example of this strategy​​.

China’s Approach: Economic Leverage and Internet Control:

Unlike Russia, China’s disinformation strategy relies more on economic influence and control over the internet. The Chinese Communist Party (CCP) has expanded state media targeting international audiences and utilized financial leverage in the information space. For instance, Chinese state-backed media outlets have purchased significant advertising space in U.S. newspapers. Furthermore, the CCP’s control over the internet within China, through measures like the Great Firewall, exemplifies their approach to managing information flow. Their influence extends to shaping U.S. policy and Congress, with political donors tied to the CCP making substantial contributions to U.S. political campaigns​​.

Increased Authoritarian Control Over the Internet:

Authoritarian regimes, including Russia and China, have increasingly exercised control over the internet to manage the narrative within their borders and as part of their foreign influence operations. This control ranges from shutting down or slowing web and social media applications to leveraging action against Western tech platforms to shape strategic narratives. The dynamic between these authoritarian governments and Western technology companies, coupled with regulatory pressures, significantly impacts the flow of information and the ways in which manipulators can undermine democratic processes​​.

Exploitation of Less Moderated Platforms:

A significant trend in disinformation campaigns is the utilization of less moderated platforms and non-English content. In the 2020 U.S. elections, Spanish-language misinformation circulated widely online, largely unnoticed and unchecked. Additionally, emergent social media applications like Parler have provided new avenues for spreading falsehoods, especially in smaller, less policed platforms. These platforms become breeding grounds for conspiracies that can then infiltrate larger social media networks or be disseminated in closed-group discussions. The focus on these alternative platforms indicates a strategic shift in how disinformation is propagated by nation-states​​.
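One observable signature of this kind of coordinated spreading is many accounts posting near-identical text. As a minimal, hypothetical sketch (not any platform's actual moderation tooling, and deliberately naive), the following flags clusters of near-duplicate posts using only Python's standard library:

```python
from difflib import SequenceMatcher

def near_duplicate_clusters(posts, threshold=0.9):
    """Group posts whose text similarity exceeds `threshold`.

    A naive O(n^2) pass with difflib; real moderation pipelines
    use scalable techniques such as MinHash/LSH instead.
    """
    clusters = []
    for text in posts:
        placed = False
        for cluster in clusters:
            # Compare against the cluster's first (representative) post.
            if SequenceMatcher(None, text.lower(), cluster[0].lower()).ratio() >= threshold:
                cluster.append(text)
                placed = True
                break
        if not placed:
            clusters.append([text])
    # Only multi-post clusters suggest coordinated amplification.
    return [c for c in clusters if len(c) > 1]

posts = [
    "Breaking: the election results were falsified!!",
    "BREAKING: the election results were falsified!",
    "Breaking: the election results were falsified",
    "I had pancakes for breakfast today.",
]
flagged = near_duplicate_clusters(posts)
print(len(flagged), len(flagged[0]))  # prints: 1 3
```

The point of the sketch is only that near-duplicate amplification is mechanically detectable when it happens in the open; the harder problem the section describes is that much of this activity now occurs on platforms and in closed groups where no such scan is run.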

Additional Attack Vectors of Disinformation and Misinformation Campaigns in Tandem With Cyber Attacks:

Infrastructure Cyber Attacks Aligned with Mis- and Disinformation: Impacts and Responses in 2024

In 2024, the interplay between cyber-attacks on critical infrastructure and disinformation campaigns presents a significant and complex threat to national security, public safety, and democratic processes. The evolution of these threats, fueled by advancements in AI and other technologies, poses new challenges for governments, industries, and societies.

Evolving Nature of Cyberwarfare and Disinformation:

  • Cyberwarfare, coupled with disinformation, has become more sophisticated, involving nation-state actors attacking in both physical and cyber domains. AI technologies play a crucial role, enabling attackers to be more prolific and efficient in their operations​​.
  • Disinformation campaigns now form a critical component of national conflict, often accompanying cyber-attacks. The Russia-Ukraine conflict exemplifies this trend, where cyber-attacks and physical warfare occur simultaneously​​.

Targeting of Critical Infrastructure:

  • Critical infrastructure sectors, including transportation, energy, healthcare, and election systems, are prime targets for cyber-attacks. These attacks not only aim to disrupt services but also to gain access to sensitive networks and information, with adversaries continually adapting their techniques​​​​.
  • The DHS Threat Assessment report highlights the use of AI-driven malware and software in these attacks, allowing for larger-scale, faster, and more evasive cyber operations. This includes potential threats from nation-states like Russia, China, and Iran, known for their sophisticated cyber capabilities​​​​​​.

Impact on Public Services and National Security:

  • Attacks on supply chains and critical infrastructure can have far-reaching consequences, disrupting essential services and posing threats to national security. The Colonial Pipeline attack of 2021 serves as a stark reminder of the potential impacts, where disruption of energy resources led to widespread panic and highlighted vulnerabilities in supply chain security​​.
  • These attacks also let adversaries sow chaos, inflict financial damage, and generate FUD (Fear, Uncertainty, and Doubt). Accompanying mis- and disinformation campaigns then reinforce those sentiments with supporting narratives, amplifying the social effects of the attacks.

Misinformation and Disinformation as Strategic Tools:

  • Adversaries will likely use AI to bolster disinformation campaigns, creating more believable and higher quality synthetic content. This strategy aims to undermine trust in government institutions and disrupt social cohesion. Examples include the use of generative AI by China for spreading false claims and Russia’s creation of deepfake videos to disparage Western leaders​​.

Examples of Misinformation and Disinformation Campaigns Already Seen in 2023

Venezuelan State Media’s Use of AI Newscasters (February 2023)

  • In February 2023, Venezuelan state media utilized AI-generated American newscasters to spread misinformation. These were created using an online tool called Synthesia​​.
    • An AI-generated avatar named Daren was among those used in this misinformation campaign​​.
    • The Venezuelan government employed these faked artificial intelligence-generated television presenters to disseminate disinformation across various platforms​​.
    • The regime of Nicolás Maduro used these AI-generated newscasters to deliver news solely favorable to the regime, thus manipulating public perception​​.
    • Venezuelan state-owned television station VTV reportedly used deepfake English-speaking hosts from a fictitious American news agency to share falsely positive news coverage about the country​​.

Pro-Chinese Communist Party Disinformation (February 2023)

  • In February 2023, deepfake ‘news anchors’ were used in pro-China propaganda videos, raising concerns among researchers about the increasing use of AI-created deepfakes in disinformation campaigns​​.
    • This incident highlighted the growing threat of deepfakes in Asia, particularly in the context of influence campaigns aligned with China​​.
    • Beijing’s major disinformation campaigns in 2023 included fabricating a global environmental catastrophe, using such techniques to target domestic and foreign audiences​​.
    • Deepfake ‘news anchors’ in pro-China footage were part of an increasing trend where state-aligned actors leverage AI technologies for disinformation​​.
    • Videos promoting the interests of the Chinese Communist Party (CCP), identified by the New York-based research firm Graphika, were part of this disinformation strategy​​.

Manipulated Political Videos and Images in the United States (2023)

  • AI-manipulated videos and images of U.S. political leaders circulating on social media included a video depicting President Biden making derogatory comments​​.
  • AI technology has also been used in political ads supporting Florida Governor Ron DeSantis during the Republican presidential nomination contest, involving manipulated audio and images:
    • AI-Generated Trump Voice: A political action committee supporting DeSantis, Never Back Down, utilized AI to manipulate audio in an advertisement. This ad featured a voice resembling that of former President Donald Trump, reading aloud an attack against Iowa Governor Kim Reynolds. While the words spoken were originally written by Trump on Truth Social, he did not actually speak them himself​​.
    • AI-Generated Imagery in Campaign Videos: AI has been used in DeSantis campaign videos, such as one showing AI-generated photos of Trump embracing Dr. Anthony Fauci. Additionally, an ad by the Republican National Committee used AI-generated imagery to depict dystopian scenes as a response to President Biden’s reelection campaign​​.
    • Concerns About Misinformation: Despite the accuracy of the words in the pro-DeSantis ad, there are concerns about the use of AI-generated content in political campaigns. Emma Steiner, a disinformation analyst at Common Cause, expressed concern over the potential of AI content to further complicate an already challenging information environment, especially when social media platforms are reducing enforcement of civic integrity policies​​.
    • Ease of Creating AI-Generated Content: Digital forensics experts indicated that the audio in the pro-DeSantis ad was likely generated using text-to-speech systems, which are easy to use and can create content in just a few minutes. This ease of creation raises concerns about the potential for widespread use of AI in political misinformation​​.
  • Meta announced requirements for advertisers to disclose any AI-generated or digitally altered content in their political ads​​.
  • Clinton and Biden Deepfakes: AI algorithms trained on extensive online footage have created realistic yet fabricated videos of political figures like Hillary Clinton and Joe Biden. These deepfakes, surfacing on social media, blur the lines between fact and fiction, potentially misleading voters​​.
  • Manipulated Content by Trump and RNC: Former President Donald Trump, running in 2024, shared AI-generated content on social media. This included a manipulated video of CNN host Anderson Cooper created with an AI voice-cloning tool. Additionally, the Republican National Committee released a dystopian campaign ad with AI-generated images depicting various politically sensitive scenarios, indicating the growing use of AI in political campaigns​​.
  • AI Images of Trump: AI-generated images showing former President Trump in various scenarios, including a mug shot and resisting arrest, fooled social media users. These instances highlight the growing challenge of distinguishing real from AI-generated content in political discourse​​.

Leaked Recordings of Palanivel Thiagarajan (April 2023)

  • A controversial audio recording of Tamil Nadu Finance Minister Palanivel Thiagarajan allegedly speaking about a Cabinet colleague and the Chief Minister’s son sparked political controversy​​​​.
    • Thiagarajan claimed the viral audio clip attributed to him was fabricated, raising questions about the authenticity of such recordings​​.
    • The controversy in Tamil Nadu involved leaked recordings capturing Thiagarajan disparaging his colleagues​​.

Coordinated Disinformation Campaign Against Maggie Wilson (2023)

  • Maggie Wilson’s battle against disinformation exposed a campaign that paid propagandists to smear her​​.
    • Wilson urged content creators and online users involved in the smear campaign against her and her company to come forward​​.
    • On Instagram, Wilson exposed a coordinated attack against her and her company, highlighting the role of paid influencers​​​​.
    • Wilson considered legal action against those behind the coordinated attempt to smear her name by planting scripted stories on TikTok​​.

Goals and Impacts:

In the context of political discourse and elections, the application of AI in creating and disseminating manipulated media presents significant challenges and risks. This phenomenon extends beyond mere technological intrigue; it has profound implications for the fabric of democratic societies. The ability of AI to fabricate or alter media content can influence public opinion, undermine the credibility of democratic institutions and processes, and exacerbate existing social divisions, including those based on gender and race. The following sections delve into these critical issues, exploring the multifaceted impact of AI-manipulated media on the democratic landscape.

  • Influencing Election Outcomes and Public Opinion:
    • AI-generated content can significantly sway public opinion and election outcomes by spreading false or misleading information about candidates or political issues.
  • Undermining Trust in Democratic Institutions and Processes:
    • The spread of AI-created disinformation can erode public trust in the integrity and reliability of democratic institutions and electoral processes, leading to skepticism and cynicism among voters.
  • Exacerbating Social Divisions:
    • The use of AI to amplify or create content that feeds into existing social divisions, including gender and racial biases, can deepen societal rifts and hamper efforts towards social cohesion and equality.

Conclusions:

The challenges posed by AI-driven disinformation and misinformation represent a significant hurdle in today’s digital landscape. The sophistication of AI technologies in creating convincing yet false content has outpaced our current ability to effectively detect and counteract it. This gap leaves open the potential for widespread manipulation of public opinion, erosion of trust in democratic processes, and deepening societal divisions. The rapid evolution of these technologies further complicates the development of robust solutions. As of now, we are in a race against time and technology to devise effective strategies and tools to combat the spread of AI-generated disinformation and preserve the integrity of our information ecosystem.

To quote Donald Trump before January 6th, 2021: “Will be wild.”

Links:

  1. United States Institute of Peace – Disinformation Casts a Shadow Over Global Elections
  2. Digital Bulletin – Nation-state cyber-incursion and disinformation in 2024
  3. Foreign Policy Research Institute – Foreign Interference in Elections 2022 and 2024: What Should We Prepare For?
  4. CyberTalk – Supply chain trends, critical infrastructure & cyber security in 2024
  5. Industrial Cyber – New DHS threat assessment report sounds alarm on cyber attacks, as AI-driven malware poses threat to critical infrastructure
  6. Venezuelan State Media’s Use of AI Newscasters (February 2023)
  7. Rappler – Disinformation in 2023: Growing AI reliance
  8. El Pais – How Venezuela is using AI avatars
  9. PetaPixel – Venezuelan Government Using Deepfaked Presenters
  10. Diálogo Americas – Artificial Intelligence, the Venezuelan Regime’s New Scam
  11. La Patilla – Venezuelan Government Using Deepfake Presenters
  12. Pro-Chinese Communist Party Disinformation (February 2023)
  13. Tech Xplore – Deepfake ‘news anchors’ in pro-China footage
  14. The Diplomat – Deepfakes and Disinformation in Asia
  15. Polygraph – China’s Disinformation in 2023
  16. Voice of America – Research on Deepfake ‘News Anchors’ in Pro-China Footage
  17. Voice of America – China, Russia Target Audiences Online With Deep Fakes
  18. Manipulated Political Videos and Images in the United States (2023)
  19. MIT Technology Review – Generative AI Boosting Spread of Disinformation
  20. Reuters – Deepfaking it: America’s 2024 Election Collides with AI Boom
  21. USIP – Disinformation in Global Elections
  22. The Register – Meta to Require Disclosure of AI-Altered Political Ads
  23. Modern War Institute at West Point – Deepfakes and Deception
  24. Leaked Recordings of Palanivel Thiagarajan (April 2023)
  25. The South First – Audio Clip of TN Finance Minister PTR
  26. Rest of World – Indian Politician Blames AI for Alleged Leaked Audio
  27. New Indian Express – PTR on Audio Clip about Tamil Nadu CM
  28. Freedom House – The Repressive Power of Artificial Intelligence
  29. Times of India – Finance Minister PTR Says Clips Fabricated

Source: https://krypt3ia.wordpress.com/2024/01/04/threat-intelligence-report-the-potential-for-global-disinformation-and-misinformation-campaigns-for-2024/