As the 2024 U.S. Presidential Election approaches, along with other pivotal elections worldwide, the online spread of misinformation is reaching new heights. Manipulative content generated by foreign adversaries, cybercriminals and even AI-powered systems is flooding social media platforms. Algorithms, which most users barely understand, are amplifying this misinformation, turning it into a critical threat to our democratic process and the very notion of a shared, fact-based reality.
Misinformation is dangerous because it erodes trust, undermines public discourse and distorts decision-making. When falsehoods masquerade as truth, the public’s ability to make informed choices is compromised and the democratic process itself becomes tainted. At its worst, misinformation can fuel polarization, incite violence and destabilize societies by making it nearly impossible to discern fact from fiction.
Who is behind this tidal wave of false information? State actors like Russia, China and Iran are notorious for their efforts to manipulate social media in pursuit of political and strategic goals. These foreign entities aim to sow division, disrupt elections and influence public opinion, often through highly targeted disinformation campaigns. Domestic actors and financially motivated cybercriminals also have a hand in it, using misinformation for personal, political, or financial gain. These groups operate with increasing sophistication, leveraging AI to create content that is more convincing and harder to detect.
What makes the situation even worse is how social media companies contribute to the problem. These platforms amplify misinformation through algorithms designed to prioritize engagement — often favoring sensational or divisive content because it keeps users clicking. The more engagement a post generates, the more likely it is to be seen by others, regardless of its accuracy. In other words, the very mechanisms that make these platforms profitable are also what make them breeding grounds for misinformation. These companies’ incentive structures prioritize growth, user retention and advertising revenue over content integrity.
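To see why engagement-first ranking amplifies falsehoods, consider a deliberately simplified sketch of such a feed-ranking score. The weights, field names and numbers below are hypothetical, chosen only for illustration; the point is that nothing in the score rewards accuracy, so a divisive falsehood outranks a sober correction.

```python
from dataclasses import dataclass

@dataclass
class Post:
    clicks: int
    shares: int
    comments: int
    is_accurate: bool  # known to fact-checkers, but never consulted by the ranker

def engagement_score(post: Post) -> float:
    """Hypothetical engagement-only ranking: accuracy never enters the score."""
    return 1.0 * post.clicks + 3.0 * post.shares + 2.0 * post.comments

sensational_falsehood = Post(clicks=5000, shares=1200, comments=900, is_accurate=False)
sober_correction = Post(clicks=400, shares=30, comments=20, is_accurate=True)

# The falsehood wins the ranking purely on engagement.
ranked = sorted([sensational_falsehood, sober_correction],
                key=engagement_score, reverse=True)
print([p.is_accurate for p in ranked])  # [False, True]
```

Any real ranking system is far more complex, but the incentive it encodes is the same: whatever keeps users engaged rises to the top, true or not.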
Why aren’t these platforms doing more to stop the spread of misinformation? The answer lies in their incentives. Social media companies benefit financially from the high levels of engagement that controversial or misleading content can generate. Furthermore, they have little legal obligation to intervene. Section 230 of the Communications Decency Act of 1996, written long before today’s algorithm-driven platforms existed, shields platforms from liability for content posted by their users. This outdated law treats these tech giants like neutral bulletin boards, even though their algorithms actively shape what content users see. Until this law is reformed, social media companies have no real incentive to tackle the problem head-on.
In the absence of strong regulatory oversight, we have defaulted to relying on individual users to combat misinformation. The expectation is that people will be able to recognize fake news, filter it out and make informed decisions about what to believe. But this approach is fundamentally flawed.
Relying on humans to detect and combat misinformation amplified by social media will fail as a strategy. Simply put, people are bad at spotting manipulated content. Decades of experience in cybersecurity have shown us that even the most well-educated and cautious individuals can fall for phishing scams, click fraudulent links and unwittingly approve malicious access requests. The same is true for misinformation. No matter how much media literacy or security awareness training people receive, many will still struggle to identify sophisticated disinformation, especially as AI-generated content becomes more realistic and harder to distinguish from the truth.
So, what can we do about it? First and foremost, we need stronger regulation that holds social media companies accountable. Amending Section 230 so that platforms are liable not for individual user posts, but for how their algorithms amplify and distribute false or harmful content, could finally push social media companies to take meaningful action. It wouldn’t stifle free speech or expression, as some fear, but it would compel these companies to be more responsible for the consequences of how their systems operate. In short, this change could incentivize platforms to develop stronger tools to detect and prevent the spread of false information.
Additionally, AI tools themselves need to be subject to stricter oversight. Despite well-publicized efforts by industry coalitions to craft their own rules and boundaries around the ethical use of AI, cybersecurity experts know all too well that bad actors are not waiting for legislation to catch up. These rules, while positive, are not enough to protect the average person in real time.
Finally, we need to shift the burden off individuals and provide users with more sophisticated tools to help them navigate the digital landscape safely. Just as filtering tools and security monitoring help prevent phishing attacks in corporate environments, we need similar solutions for misinformation — technological aids that work in real time to help users evaluate the credibility of the information they encounter.
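As a minimal sketch of what such a user-facing aid might look like, a browser extension or client could flag links whose sources fall outside a vetted list, much as mail filters flag suspicious senders. The domain list and labels below are invented purely for illustration; a real tool would draw on maintained credibility databases and far richer signals than a domain name.

```python
from urllib.parse import urlparse

# Hypothetical allow-list of vetted outlets; a real tool would query
# maintained credibility databases rather than a hard-coded set.
VETTED_DOMAINS = {"apnews.com", "reuters.com", "bbc.com"}

def credibility_flag(url: str) -> str:
    """Return a coarse label for a link, analogous to a phishing warning banner."""
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    if domain in VETTED_DOMAINS:
        return "recognized source"
    return "unverified source - check before sharing"

print(credibility_flag("https://www.reuters.com/world/some-story"))
print(credibility_flag("https://totally-real-news.example/shocking-claim"))
```

The design goal mirrors corporate security tooling: the check runs automatically at the moment of exposure, so the user gets a nudge before sharing rather than being expected to research every claim on their own.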
The 2024 election is a stark reminder that we are at a critical crossroads. Foreign actors and cybercriminals will continue to exploit social media to undermine our democratic processes, and the threat will extend far beyond November. The real question is whether we will confront this problem head-on or continue to place the burden on individuals, hoping they can solve a crisis that should never have been theirs to manage in the first place.