
Meta failed to stop violent political disinformation in India polls

A new report found Meta approved adverts containing hate speech towards Muslims and calls for violence during India's election, exposing failures in its content moderation.

By Shubham Sawarkar, Editor-in-Chief
May 20, 2024, 10:02 AM EDT
[Image: the Meta Platforms logo. Illustration for GadgetBond]

Meta, the parent company of Facebook and Instagram, approved a disturbing series of political advertisements during India’s recent national election that spread disinformation and incited religious hatred and violence against Muslims, according to a report.

The report, conducted by the organizations India Civil Watch International (ICWI) and Ekō, found that Meta’s systems greenlighted ads containing vicious anti-Muslim slurs like “let’s burn this vermin” as well as Hindu supremacist language falsely accusing opposition leaders of wanting to “erase Hindus from India.”

One ad approved by Meta called for the execution of an opposition leader next to the Pakistani flag, pushing the inflammatory lie that he sought to undermine India’s Hindu majority. Other ads stoked fears about Muslims having more children than Hindus.

The appalling ads were intentionally created and submitted to Meta by the two watchdog groups in order to test the efficacy of the company’s safeguards against hate speech and inflammatory political content during India’s six-week election, which concludes on June 1.

The results were damning: Of 22 test ads submitted in multiple Indian languages, 14 were approved by Meta to run on Facebook and Instagram despite containing hate speech, disinformation, and overt calls for violence that violated the company’s own policies. Three more were approved after minor tweaks by the researchers.

“Supremacists, racists and autocrats know they can use hyper-targeted ads to spread vile hate speech, share images of mosques burning and push violent conspiracy theories – and Meta will gladly take their money, no questions asked,” said Maen Hammad, a campaigner at Ekō.

The findings lay bare Meta’s failure to uphold its public commitment to protecting the integrity of India’s elections. In the lead-up to the pivotal vote that will determine if Hindu nationalist Prime Minister Narendra Modi wins a third term, Meta had touted its preparedness, saying it expanded fact-checking operations and was monitoring 20 Indian languages.

But the tech giant’s inadequate detection systems not only approved the hateful test ads created by the researchers, but also failed to recognize they were political in nature despite many directly targeting candidates and parties. This allowed the ads to bypass policies around authorizing election ads.

Some ads were even scheduled by the researchers to coincide with voting periods when all political advertising is banned in India, underlining how real-world nefarious actors could have exploited Meta’s lapses to influence the electoral process.

Compounding concerns, all 14 approved test ads featured AI-manipulated visuals, despite Meta’s purported pledge to crack down on synthetic and manipulated content around the Indian elections.

The report’s findings confirm previous accusations that Meta has struggled to combat the proliferation of anti-Muslim narratives, conspiracy theories and calls to violence on its platforms in India, some of which have incited real-world riots and lynchings in the past.

India, a crucible for online disinformation and hate given its linguistic diversity and rancorous religious divisions, has emerged as a major test case for how well American social media giants can apply their content moderation policies in the world’s largest democracy.

During the election campaign, Modi was accused of deploying anti-Muslim rhetoric, referring to Muslims as “infiltrators” who outbreed Hindus, before walking back the remarks. The BJP was also compelled to remove a campaign video that demonized Muslims.

In its response to the report’s findings, a Meta spokesperson said that advertisers who want to run political or election ads “must go through the authorization process required on our platforms and are responsible for complying with all applicable laws.”

The company added: “When we find content, including ads, that violates our community standards or community guidelines, we remove it, regardless of its creation mechanism. AI-generated content is also eligible to be reviewed and rated by our network of independent factcheckers – once a content is labeled as ‘altered’ we reduce the content’s distribution. We also require advertisers globally to disclose when they use AI or digital methods to create or alter a political or social issue ad in certain cases.”

However, the report’s authors say Meta’s statement rings hollow given the company’s systemic failures to enforce its own policies during the real-world test case.

“This election has shown once more that Meta doesn’t have a plan to address the landslide of hate speech and disinformation on its platform during these critical elections,” said Hammad. “It can’t even detect a handful of violent AI-generated images. How can we trust them with dozens of other elections worldwide?”

The Indian election debacle is just the latest incident highlighting the ongoing battle between social media platforms and bad actors looking to exploit their products to spread inflammatory, dangerous content. It underscores the immense challenges confronting companies like Meta when it comes to effectively moderating user-generated content at a global scale.

Meta’s Nick Clegg, the company’s president of global affairs, had described the Indian election as “a huge, huge test for us” and said the company spent “months and months and months of preparation” for it. But the report suggests those efforts were inadequate.

The findings could also escalate calls from lawmakers and advocacy groups for more transparency from Meta and stronger systems to block hateful, deceptive content from reaching users—especially during high-stakes events like national elections that can shape the future trajectory of the world’s largest democracy.

For India’s Muslim minority, already facing increased discrimination and violence in the Modi era, the report confirms fears that mainstream online platforms have been weaponized to spread disinformation against them and incite real-world harm.

With concerns persisting around Meta’s ability to enforce its own policies equitably across languages, cultures and regions, the Indian election represents a clarion call for the tech behemoth to substantively improve its content moderation capabilities. Otherwise, the report warns, its platforms risk being continually abused by bad actors—both foreign and domestic—to undermine social cohesion, human rights and democratic norms around the globe.

