
TikTok and Facebook fail to block harmful election disinformation ads

Image Credits: Unsplash
  • A Global Witness study found that TikTok and Facebook approved ads containing election disinformation, with TikTok accepting 50% of false ads despite its no-political-ad policy.
  • The investigation exposed weaknesses in content moderation systems, particularly in detecting "algospeak" and subtle forms of misinformation, highlighting the ongoing challenge of balancing free speech with preventing harmful falsehoods.
  • As the 2024 US presidential election approaches, the findings emphasize the urgent need for improved safeguards against online disinformation, calling for enhanced AI systems, increased human oversight, and greater collaboration between platforms, policymakers, and voters to protect democratic processes.

[UNITED STATES] Just weeks before the 2024 US presidential election, a new study has exposed significant vulnerabilities in the content moderation systems of major social media platforms. The investigation, conducted by the nonprofit Global Witness, found that TikTok and Facebook approved advertisements containing harmful election disinformation, raising concerns about the integrity of the upcoming vote.

Key Findings of the Study

The Global Witness investigation tested the election integrity commitments of three major social media platforms: TikTok, Facebook, and YouTube. Researchers submitted a series of advertisements containing false election claims and threats to assess how well these platforms could detect and block harmful content. The results were eye-opening:

TikTok's Performance: Despite its policy prohibiting all political advertisements, TikTok approved 50% of the submitted ads containing disinformation.

Facebook's Results: While showing improvement from previous tests, Facebook still accepted one ad with harmful disinformation.

YouTube's Response: Initially approving 50% of the ads, YouTube ultimately blocked publication of all ads until formal identification was submitted, demonstrating a more robust barrier against disinformation.

TikTok's Troubling Performance

TikTok's failure to detect and block harmful content is particularly alarming, given its strict policy on political content. The platform explicitly prohibits all political ads, yet it performed the worst in this test. This raises serious questions about the effectiveness of TikTok's content moderation systems and its ability to protect users from misleading information during critical election periods.

Ava Lee, Digital Threats Campaign Lead at Global Witness, expressed her concern: "Days away from a tightly fought US presidential race, it is shocking that social media companies are still approving thoroughly debunked and blatant disinformation on their platforms."

Facebook's Mixed Results

While Facebook showed some improvement compared to previous tests, the fact that it still approved an ad containing harmful disinformation is troubling. This highlights the ongoing challenges faced by even the most established social media platforms in combating the spread of false information during election seasons.

The Threat to Election Integrity

The findings of this study underscore the potential risks to the integrity of the US presidential election. With political debates increasingly taking place online, the inability of major platforms to consistently detect and block disinformation poses a significant threat to informed democratic participation.

"In 2024, everyone knows the danger of electoral disinformation and how important it is to have quality content moderation in place. There's no excuse for these platforms to still be putting democratic processes at risk," Lee emphasized.

The Role of "Algospeak" in Bypassing Moderation

One notable aspect of the study was the use of "algospeak" in the submitted advertisements. This technique involves using numbers and symbols as stand-ins for letters to bypass content moderation filters. The success of this method in getting disinformation approved highlights the need for more sophisticated detection systems on social media platforms.
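Why such substitutions slip past automated review can be illustrated with a minimal, hypothetical sketch. The blocklist, phrases, and character mappings below are invented for illustration and do not reflect any platform's actual moderation system:

```python
# Hypothetical illustration: why a naive keyword filter misses "algospeak",
# where digits and symbols stand in for letters.

BLOCKLIST = {"rigged", "stolen"}

# Common look-alike substitutions used to evade filters (illustrative only).
SUBSTITUTIONS = str.maketrans({"1": "i", "3": "e", "0": "o", "4": "a", "5": "s", "$": "s"})

def naive_filter(text: str) -> bool:
    """Flags text only if a blocked word appears verbatim."""
    return any(word in text.lower() for word in BLOCKLIST)

def normalizing_filter(text: str) -> bool:
    """Maps look-alike characters back to letters before checking."""
    return naive_filter(text.lower().translate(SUBSTITUTIONS))

ad = "The election was r1gg3d and st0len"
print(naive_filter(ad))        # False: the substitutions slip past
print(normalizing_filter(ad))  # True: normalization catches them
```

Real moderation pipelines are far more elaborate, but the sketch captures the core gap the study exploited: exact-match rules fail the moment characters are swapped.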

Comparison to Previous Investigations

Global Witness has conducted similar investigations in the past, including tests during the 2022 US Midterms, the 2022 Brazilian General Election, and the 2024 Indian General Election. The consistent findings across these studies suggest that the problem of disinformation on social media platforms is a global issue that requires urgent attention.

Platform Responses and Commitments

In response to the study's findings, the social media platforms provided statements addressing their content moderation efforts:

TikTok: A spokesperson stated, "Four ads were incorrectly approved during the first stage of moderation, but did not run on our platform. We do not allow political advertising and will continue to enforce this policy on an ongoing basis."

Facebook (Meta): The company acknowledged the limited scope of the study but emphasized its ongoing efforts to improve enforcement of its policies.

YouTube: While not providing a direct comment on this study, YouTube has previously highlighted its multi-layered approach to combating abuse on its platform.

The Broader Impact on US Elections

The implications of this study extend beyond the immediate concerns about disinformation. As American voters increasingly rely on social media for information that shapes their voting decisions, the responsibility of these platforms in safeguarding the integrity of the electoral process becomes even more critical.

A line from Free Malaysia Today's coverage underscores the gravity of the situation: "Five out of eight ads with false election claims submitted by an advocacy group for testing were accepted." This statistic highlights the scale of the problem and the potential for widespread dissemination of false information.

Recommendations for Improvement

Global Witness has called on Facebook and TikTok, in particular, to increase their efforts to protect political debate in the US from harmful disinformation. Some recommendations include:

Enhancing AI-powered content moderation systems to better detect subtle forms of disinformation.

Increasing human oversight in the ad approval process, especially for politically sensitive content.

Implementing stricter verification processes for advertisers seeking to run political or election-related ads.

Improving transparency in the ad approval process and providing more detailed explanations for rejected ads.

Collaborating with fact-checking organizations to quickly identify and remove false claims.

The Ongoing Challenge of Balancing Free Speech and Misinformation

The struggle to combat disinformation while preserving free speech remains a significant challenge for social media platforms. Striking the right balance between allowing open political discourse and preventing the spread of harmful falsehoods is a complex task that requires ongoing refinement of policies and technologies.

Looking Ahead: The 2024 US Presidential Election

As the United States approaches the 2024 presidential election, the findings of this study serve as a wake-up call for both social media companies and voters. The potential for disinformation to influence election outcomes highlights the need for increased vigilance from all stakeholders in the democratic process.

Voters must become more discerning consumers of online information, while platforms must redouble their efforts to create robust safeguards against the spread of false and misleading content. Policymakers, too, have a role to play in establishing clear guidelines and consequences for platforms that fail to adequately protect against election disinformation.

The Global Witness study has exposed significant weaknesses in the content moderation systems of major social media platforms, particularly TikTok and Facebook. As the 2024 US presidential election draws near, these findings underscore the urgent need for improved detection and removal of harmful disinformation.

The integrity of democratic processes in the digital age depends on the ability of social media companies to effectively combat the spread of false information. As voters increasingly turn to these platforms for political information, the responsibility of companies like TikTok, Facebook, and YouTube to safeguard the truth has never been greater.

As we move forward, it is clear that addressing this challenge will require a concerted effort from technology companies, policymakers, and citizens alike. Only through collective action and ongoing vigilance can we hope to preserve the integrity of our elections and the health of our democratic institutions in the face of digital disinformation.

