Despite Ad Bans, Experts Warn Social Media’s Cuts to Safety Teams and Misinformation Policies Could Undermine Election Integrity
In a significant move to tackle misinformation ahead of the U.S. election, major tech platforms including Meta, Google, and YouTube have implemented temporary pauses on political advertising. Meta, owner of Facebook and Instagram, announced last week a temporary ban on new political ads addressing U.S. social issues and elections across its platforms. The ban was initially set to expire on Tuesday, but Meta extended it to remain in effect through election week. Google’s ad pause will begin once polls close and continue for an unspecified duration, while TikTok has maintained a ban on political advertising since 2019.
Unlike other social platforms, Elon Musk’s X (formerly Twitter) lifted its political ad ban last year after Musk acquired the platform, with no indication of a new pause during the election season. This stance has raised concerns about the impact of political misinformation on X, especially given Musk’s prominent support for former President Donald Trump and the platform’s shift in moderation practices.
The timing of these ad bans reflects efforts to prevent political ads from swaying public opinion during a critical period of ballot counting and anticipated delays in official results. The pauses are intended to block ads that claim early victories or undermine public confidence in the voting process. But experts warn that earlier cutbacks, including substantial reductions to content moderation teams, have weakened the platforms’ capacity to counter misinformation.
Sacha Haworth, Executive Director of the Tech Oversight Project, observed, “Since the last presidential election, we’ve seen a dramatic backslide in social media companies’ preparedness, enforcement, and willingness to protect information online.” Platforms like Facebook and Twitter once led in countering misinformation and moderating violent rhetoric. Today, however, social media watchdogs express concern over the proliferation of false narratives on social networks, particularly following recent layoffs and relaxed policies.
Misinformation has been widespread ahead of the election. Federal law enforcement agencies have warned of potential threats from extremist groups with election-related grievances, fueled by debunked claims of fraud. This heightened tension around the election recalls the role social media platforms played in the 2016 and 2020 elections. Since then, platforms instituted policies to curb false claims and interference, only to scale back those policies more recently, allowing claims about the 2020 election being “stolen” to persist online.
The Center for Countering Digital Hate (CCDH), led by CEO Imran Ahmed, has tracked the impact of misinformation in recent months, revealing that misleading content on X, often posted by Musk, has reached more than 2 billion views. As long as platforms prioritize engagement and controversy, Ahmed argues, misinformation will find organic reach regardless of paid ads. He notes, “Stopping ads on platforms designed to promote the most contentious information—whether that’s disinformation or hate—has minimal impact when these platforms are fundamentally structured to amplify it.”
Experts also highlight concerns about the role of artificial intelligence in the spread of election misinformation. AI-driven tools can create manipulated images, videos, and audio, increasing the risk that fake media will lend credibility to false claims. Musk himself has posted AI-generated video content that manipulated Vice President Kamala Harris’s voice, illustrating the challenge AI poses for policing fake and inflammatory content.
While the ad pauses are part of a broader strategy to protect election integrity, platforms are facing criticism over inconsistencies in their policies and enforcement. YouTube, owned by Google, has reiterated its commitment to removing content that encourages violence or spreads conspiracies. YouTube Vice President Leslie Miller noted, “Content that misleads viewers or encourages interference in the democratic process is prohibited on YouTube. We remove content that incites violence, promotes conspiracy theories, or threatens election workers.”
Meta also stated it would lower the reach of false content, while TikTok has introduced an “Elections Integrity Hub” to prevent misinformation from disrupting the democratic process.
In contrast, X’s Civic Integrity Policy, reinstated in August, allows certain election-related claims as long as they don’t explicitly interfere with voting. This distinction allows for polarized and partisan viewpoints, making X a hub for election narratives not permitted elsewhere. Musk’s high-profile posts, some containing false claims, highlight the complex relationship between social media content, free speech, and responsible governance of election discourse.
While the election ad pauses by Meta, Google, YouTube, and TikTok mark an important step toward limiting political manipulation online, critics argue that ad bans alone are insufficient, especially in an era when platforms’ internal safety teams have been cut and their policies scaled back. The bans cannot counteract misinformation already seeded in user feeds or circulating in online communities. Experts suggest that without renewed investment in content moderation and fact-checking, election misinformation will persist, challenging efforts to uphold an informed electorate and a fair democratic process.