Analysis: Disinformation and misinformation campaigns from November 2023 to the present
2023-12-13 02:44:08 | Author: krypt3ia.wordpress.com

This analysis was created by Scot Terban with the DisinfoTracker A.I. Agent Analyst, an agent built on ChatGPT-4 that he has trained for this purpose.

In our analysis of disinformation and misinformation campaigns from November 2023 to the present, we observe a complex and evolving landscape. This period has been marked by an array of strategically crafted campaigns, leveraging advanced technology and social media platforms to shape public opinion and political narratives. These campaigns, ranging from state-sponsored propaganda to grassroots-level misinformation, have targeted various global events and figures, reflecting the increasing sophistication and reach of such tactics in the digital era. Our analysis aims to dissect the methods, impacts, and actors involved in these campaigns, offering insights into their nature and the broader implications for information integrity in today’s interconnected world.

Campaigns & Actors:

Indian DisinfoLab:

This entity has emerged in the Indian digital arena, actively supporting the Modi government. Reports suggest it is run by an Indian intelligence officer and that its name mimics credible organizations such as EU DisinfoLab and DFRLab to gain credibility in Western media. Its main goal appears to be propaganda supporting the Indian government; the campaign described below closely tracks the broader pro-India influence operation documented in EU DisinfoLab's "Indian Chronicles" investigation.

Goals:

Discredit Opposing Nations: The primary aim was to discredit countries in conflict with India, especially Pakistan and, to a lesser extent, China. This involved undermining these nations’ reputations internationally.

Influence International Decision-Making: The operation sought to influence decision-making at important international bodies like the UN Human Rights Council and the European Parliament.

Strengthen Pro-Indian Sentiments: Domestically in India, the campaign aimed to reinforce pro-Indian and anti-Pakistan/China feelings.

Consolidate India’s Global Position: Internationally, the goal was to enhance India’s power and improve its perception globally, thereby gaining more support from international institutions.

Methods:

Use of Fake Entities and Identity Theft: The operation resurrected defunct NGOs and media outlets and even impersonated deceased individuals. It involved identity theft of notable figures and the creation of over 750 fake media outlets across 119 countries.

Manipulation of International Forums: The campaign employed coordinated UN-accredited NGOs to promote Indian interests, often at the expense of Pakistan. These NGOs coordinated with non-accredited think-tanks and minority-rights NGOs in Brussels and Geneva for lobbying, organizing demonstrations, and speaking at UN events.

Misrepresentation in European Institutions: Trips for Members of the European Parliament to regions like Kashmir were organized to create a facade of official EU support for these agendas. Informal groups within the European Parliament were also created to disseminate pro-India and anti-Pakistan narratives.

Media Complicity and Amplification: Asian News International (ANI), a major Indian news agency, played a critical role in repackaging and amplifying the content produced by these fake entities. This content was then further disseminated by a network of over 500 fake local media outlets in 95 countries.

Online Disinformation Tactics: The operation maximized negative content about Pakistan online, using a network of fake local media worldwide. The campaign was ongoing as of the latest reports and had adapted to continue its activities despite initial exposures.

Impact and Concerns:

Wide Reach and Longevity: This 15-year-long operation has been notable for its extensive reach and duration, impacting perceptions in Brussels, Geneva, and across the world.

Influence on Policy and Public Opinion: The operation has influenced European and international policymaking, swayed public opinion, and created controversies in international relations.

Challenges in Detection and Regulation: The sophisticated use of digital tools and media manipulation presents significant challenges for detecting and regulating such disinformation campaigns. The adaptation and evolution of these tactics, even after initial exposure, underline the difficulty in combating such operations.

Dublin Riots:

Meta, TikTok, and Google have responded to misinformation related to the Dublin riots, but X (formerly Twitter) drew criticism for its lack of response, and Elon Musk’s comments on Ireland raised further concerns.

In response to the Dublin riots, major social media platforms such as Meta (Facebook), TikTok, and Google faced scrutiny and were questioned by the Oireachtas media committee about their role in disseminating information related to the riots. These companies, all headquartered in Dublin, were involved in discussions about disinformation, media literacy, and their response to the disorder in Dublin city. X, formerly known as Twitter, was criticized for its absence from these discussions and for its allegedly slow response in removing contentious content related to the riots.

Elon Musk, the CEO of Tesla and owner of X (formerly Twitter), also became a subject of controversy due to his comments on the situation in Ireland. Musk criticized Taoiseach (Irish Prime Minister) Leo Varadkar, accusing him of hating the Irish people. This criticism followed Ireland’s announcement of its intent to modernize its laws against hate and hate speech. Musk’s comments were made in the context of the Dublin stabbings and the subsequent riots, and have been described as stoking up hatred in Ireland. An Oireachtas committee expressed strong disapproval of Musk’s remarks and of his company’s failure to send a representative to discuss these issues.

The Dublin riots broke out on the evening of 23 November 2023, after a knife attack outside a school on Parnell Square East in which three young children and a care assistant were injured. False rumors spread rapidly online that the attacker was a foreign national, and far-right agitators used social media and messaging apps to call people onto the streets. Rioters clashed with the Garda Public Order Unit, throwing glass bottles and other objects at members of the Garda Síochána (Irish police), setting fire to Garda vehicles, buses, and a Luas tram, and looting shops. The unrest was driven in large part by this misinformation, though it also reflected deeper societal issues, such as social inequality and the housing crisis in Ireland. Social media played a significant role in spreading information and misinformation about these events, leading to the involvement and scrutiny of major social media platforms and public figures like Elon Musk.

Reinstatement of Alex Jones on X:

Elon Musk reinstated conspiracy theorist Alex Jones on X, reversing a ban in place since 2018. The decision may appeal to far-right supporters but is concerning for advertisers because of Jones’s promotion of conspiracy theories.

In December 2023, Elon Musk, the owner of X (formerly Twitter), reinstated the account of Alex Jones, a notorious conspiracy theorist. Jones’s account, @RealAlexJones, had been banned since 2018 for abusive behavior. The decision followed a user poll Musk ran on X on December 9, 2023, in which a majority of participants voted in favor of reinstating the account.

Alex Jones was originally banned from the platform for spreading falsehoods about the Sandy Hook school shooting, in which 26 people were killed. Musk’s decision to reinstate Jones came amid a continued loss of advertisers on the platform, which Musk acquired for $44 billion the previous year.

This move by Elon Musk to reinstate Alex Jones has several implications:

Appeal to Far-Right Supporters: By reinstating a figure like Alex Jones, who is well known for promoting conspiracy theories and far-right rhetoric, Musk could potentially appeal to far-right supporters. Jones’s endorsement of such theories, including the notorious Sandy Hook conspiracy, has made him a deeply controversial figure in the mainstream, even as he retains a following in far-right circles.

Concern for Advertisers: The reinstatement of Jones is concerning for advertisers. Given Jones’s controversial history and the nature of his conspiracy theories, advertisers may be wary of associating their brands with a platform that permits such content, particularly considering the reported loss of advertisers on X since Musk’s takeover.

User Poll Influence: The decision to reinstate Jones was influenced by a user poll, suggesting a shift towards community-driven decision-making on the platform. However, this approach raises questions about the governance of content moderation and the potential for controversial or harmful figures to be reinstated based on popular vote, rather than a structured policy on content and behavior.

Implications for Platform Governance: This reinstatement signals a potential shift in the governance and content moderation policies under Musk’s ownership. It highlights the complexities and challenges involved in moderating content on social media platforms, balancing free speech with the need to curb misinformation and harmful content.

AI-Powered Harassment against Alexey Navalny:

A network of X/Twitter accounts used content generated by ChatGPT for a harassment campaign against Alexey Navalny and his Anti-Corruption Foundation, aiming to undermine support among pro-Ukraine Western audiences.

In a recent disinformation campaign, a network of at least 64 accounts on X (formerly Twitter) utilized content generated by OpenAI’s ChatGPT to engage in targeted harassment of Alexey Navalny, his associate Maria Pevchikh, and the Anti-Corruption Foundation (ACF). The primary aim of this campaign was to undermine support for Navalny and the ACF among pro-Ukraine American and Western audiences.

The ChatGPT-generated content, while initially appearing authentic, was part of a campaign that was not technically sophisticated, which made it detectable. The Institute for Strategic Dialogue (ISD) identified the accounts through coordinated posting patterns and shared vocabulary, topics, and hashtags. The accounts adopted a pro-Ukraine, anti-Russia persona and sought to portray Navalny and his associates as controlled opposition run by Russian security and intelligence services. A small number of tweets even posed as supporters of Navalny and Pevchikh, complicating the narrative. The accounts primarily posed as Americans from various states and engaged in activities such as spamming the replies of targeted individuals and posting on their own timelines.
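To make those detection signals concrete, here is a minimal sketch of how coordination might be flagged from shared hashtags and near-simultaneous posting. The post format, field names, and thresholds are illustrative assumptions, not a reconstruction of ISD’s actual tooling.

```python
# Minimal sketch of coordination detection, loosely modeled on the signals
# ISD describes (shared hashtags/vocabulary and synchronized posting).
# The post format, field names, and thresholds are illustrative assumptions.
from collections import defaultdict
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two sets (0.0 if both are empty)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def flag_coordinated_pairs(posts, hashtag_threshold=0.5, window_secs=300):
    """posts: list of dicts like {"account": str, "ts": int, "hashtags": set}.
    Returns account pairs whose hashtag usage overlaps heavily and that
    repeatedly post within window_secs of each other."""
    tags = defaultdict(set)    # account -> union of hashtags used
    times = defaultdict(list)  # account -> post timestamps (epoch seconds)
    for p in posts:
        tags[p["account"]] |= p["hashtags"]
        times[p["account"]].append(p["ts"])
    flagged = []
    for a, b in combinations(tags, 2):
        if jaccard(tags[a], tags[b]) < hashtag_threshold:
            continue
        # Count near-simultaneous posts between the two accounts.
        near = sum(1 for ta in times[a] for tb in times[b]
                   if abs(ta - tb) <= window_secs)
        if near >= 3:  # arbitrary cutoff for the sketch
            flagged.append((a, b))
    return flagged
```

The pairwise comparison is quadratic in the number of accounts, which is fine for a 64-account network like this one but would need indexing for platform-scale analysis.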

The campaign involved two generations of accounts. The first generation, created around 2010 or 2011, began posting in mid-2023 with content that was likely written manually or produced by basic automatic generation. The second generation, active from September 26, 2023, consisted of newly created and likely bulk-purchased accounts that used ChatGPT-generated content for all posts. Some accounts were removed by X around November 6, 2023, but were quickly replaced with fresh accounts.
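One practical signal implied by these account “generations” is that bulk-purchased accounts tend to share tightly clustered creation dates. The short sketch below, using assumed field names and arbitrary cutoffs, buckets accounts into creation-date cohorts and surfaces unusually large ones for manual review.

```python
# Sketch of a cohort check suggested by the account "generations" above:
# bulk-purchased accounts often share tightly clustered creation dates.
# Field names, the window width, and the size cutoff are assumptions.
from collections import defaultdict

def creation_cohorts(accounts, window_days=7, min_size=10):
    """accounts: list of dicts like {"handle": str, "created": datetime.date}.
    Buckets accounts into window_days-wide windows by creation date and
    returns unusually large cohorts for manual review."""
    buckets = defaultdict(list)
    for acct in accounts:
        bucket = acct["created"].toordinal() // window_days
        buckets[bucket].append(acct["handle"])
    return [handles for handles in buckets.values() if len(handles) >= min_size]
```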

The campaign exhibited high coordination, with consistent posting patterns concentrated on weekdays and aligned with business hours in Moscow and St. Petersburg, a scheduling pattern that may hint at the geographic origin of the operators or the targeted audience. Despite the absence of conclusive evidence linking the campaign to any specific actor, such tactics align with previous strategies employed by state actors, including those affiliated with the Kremlin, to covertly promote their interests by impersonating American social media users.
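The business-hours observation suggests a simple heuristic: convert each post’s timestamp into a candidate timezone and measure how much of the activity falls within weekday working hours there. The following is a sketch under assumed inputs (UTC epoch seconds), not the method ISD used.

```python
# Sketch of the timezone-inference heuristic described above: if a network's
# posting activity clusters in another region's business hours, that can hint
# at the operators' location. Timestamps are assumed to be UTC epoch seconds;
# the business-hours definition is illustrative.
from datetime import datetime, timezone, timedelta

MSK = timezone(timedelta(hours=3))  # Moscow/St. Petersburg (UTC+3)

def business_hour_share(timestamps, tz=MSK, start=9, end=18):
    """Fraction of posts falling on weekdays between start and end o'clock
    in the given timezone."""
    if not timestamps:
        return 0.0
    hits = 0
    for ts in timestamps:
        dt = datetime.fromtimestamp(ts, tz)
        if dt.weekday() < 5 and start <= dt.hour < end:
            hits += 1
    return hits / len(timestamps)

# Under uniform random posting, only ~27% of posts (45 of 168 weekly hours)
# would land in this window; a much higher share is consistent with operators
# keeping Moscow business hours.
```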

The overarching narrative accused Navalny and his associates of being enemies of Ukraine and implied cooperation with the Russian government, with particular focus on how Navalny’s account continued to tweet while he was imprisoned. Some tweets ambiguously supported Navalny and the ACF, but these were often interspersed with contradictory statements from the same accounts, suggesting a strategy to sow confusion and discord.

This campaign signifies a shift in strategy, now aiming to divide the ACF from American and Western Anglophone audiences more broadly. Despite some quirks, the content was largely authentic-looking and proficient at spreading messages through inference and implication. This subtlety in approach and language indicates how easily generative AI can be used in cross-cultural and cross-linguistic influence campaigns, raising concerns about the future sophistication of such tactics.

This case underscores the evolving challenge of identifying AI-generated content in influence campaigns. Reliance on social media platforms to detect and moderate such content is likely to increase, as will the necessity for transparency in these processes. The broader implications for public discourse and the online information ecosystem are profound, particularly as disinformation policies evolve ahead of future political events. The use of generative AI in large-scale disinformation campaigns could deepen polarization and increase distrust and hostility on social media.

Fake Celebrity Cameo:

The Russian government used repurposed celebrity videos from platforms like Cameo to create false narratives against Ukraine’s president, Volodymyr Zelensky. Additionally, a campaign involving Facebook and X posts featured photos of celebrities with Kremlin propaganda messages.

Repurposed Celebrity Videos on Cameo:

  • The Microsoft Threat Analysis Center reported that Russian propaganda officials tricked several American celebrities into recording personalized videos on Cameo. These videos were later edited to falsely present President Zelensky as a drug addict.
  • Celebrities such as Elijah Wood, Priscilla Presley, Dean Norris, Kate Flannery, John C. McGinley, and Shavo Odadjian were targeted. They were paid to record a message to someone named “Vladimir,” urging him to seek help for substance abuse. The videos were then altered with links, emojis, and logos to appear authentic and were shared on social media.
  • Russian state-owned news agencies, including RIA Novosti, Sputnik, and Russia-24, covered these videos. Representatives for the actors involved stated that the celebrities believed they were communicating with a fan and did not intend to spread disinformation about Zelensky.

Manipulated Images with Fabricated Quotes:

  • In a separate operation dubbed “Doppelganger,” images of celebrities such as Taylor Swift, Beyoncé, Kim Kardashian, Justin Bieber, and Oprah Winfrey were used to spread anti-Ukrainian propaganda.
  • These images featured fabricated quotes from the celebrities, critical of Ukraine and supportive of Russia’s actions. For example, a quote attributed to Taylor Swift criticized Ukrainians, while another attributed to Oprah Winfrey denounced support for Ukraine.
  • This tactic is particularly effective as it can reach a wider audience and appear more credible than traditional propaganda methods, exploiting the trust and influence of these celebrities to manipulate public opinion.

Role of Social Media Platforms:

  • Colonel Cedric Leighton emphasized the responsibility of social media platforms like Meta’s Facebook, Alphabet’s Google, and X (formerly Twitter) to police fake accounts used in operations like Doppelganger.
  • He suggested that if these platforms do not effectively curb such disinformation campaigns, national and international legal systems should hold them accountable. The spread of this type of disinformation has implications not only for reputational damage but also for national security.

Source: https://krypt3ia.wordpress.com/2023/12/12/analysis-disinformation-and-misinformation-campaigns-from-november-2023-to-the-present/