AI tools weaponized to spread anti-Muslim hate in India, study warns
CSOH tracked 1,326 AI-generated posts targeting Muslims, showing sexualized images, conspiracy tropes, and dehumanizing rhetoric
NEW DELHI, India (MNTV) — Artificial Intelligence (AI) text-to-image tools are being systematically deployed in India to produce and spread anti-Muslim hate, according to a new report by the Washington-based Center for the Study of Organized Hate (CSOH). The think tank warned that generative AI is becoming a force multiplier for Islamophobic propaganda, deepening risks for India’s 200 million Muslims and threatening the country’s secular fabric.
The study examined 1,326 AI-generated images and videos shared by 297 public accounts in Hindi and English across X (formerly Twitter), Instagram, and Facebook between May 2023 and May 2025. The bulk of the content circulated after mid-2024, coinciding with the growing popularity of tools like Midjourney, DALL·E, and Stable Diffusion. Together, the posts attracted more than 27 million engagements.
Four dominant narratives
Researchers found the hateful content clustered around four main categories: conspiratorial narratives, exclusionary and dehumanizing rhetoric, sexualized depictions of Muslim women, and the aestheticization of violence.
The most heavily engaged content sexualized Muslim women, garnering 6.7 million interactions. This, the report said, highlights the gendered character of Islamophobic propaganda, which fuses misogyny with anti-Muslim hate by portraying women as legitimate targets of harassment or violence.
Conspiracy theories were another major theme. AI-generated visuals amplified tropes like “Love Jihad” — the false claim that Muslim men seduce Hindu women to convert them — as well as “Population Jihad” and fabricated narratives such as “Rail Jihad.” Such content, CSOH said, depicts Muslims as a demographic and security threat to Hindu society.
Exclusionary posts dehumanized Muslims by portraying them as animals — snakes wearing skullcaps were a recurring trope — while aestheticized violent content stylized sectarian imagery to normalize or trivialize violence. Some images borrowed popular animation styles, making hateful propaganda appear humorous and appealing to younger audiences.
Role of Hindu nationalist media
The report found that Hindu nationalist media outlets played a critical role in amplifying AI-generated Islamophobia. Outlets such as OpIndia, Sudarshan News, and Panchjanya embedded synthetic hate into mainstream discourse, ensuring it reached audiences far beyond extremist online communities.
While 187 of the analyzed posts were flagged for violating community guidelines, none were removed, underscoring what CSOH described as a systemic failure by platforms to enforce their own rules. Instagram emerged as the most effective per-post amplifier, drawing 1.8 million interactions across just 462 posts, while X generated the largest total reach, with 24.9 million engagements.
Escalating risks in India’s digital ecosystem
The report situates India’s crisis within a global trend of “slopaganda” — cheap, synthetic, low-veracity content used to flood digital platforms with hate. But it stresses that India’s volatile political and digital environment makes the risks particularly acute. With an estimated 900 million internet users, India is one of the largest markets for AI-generated imagery.
CSOH noted that Hindu nationalist actors already use digital platforms to spread hate through music, videos, and coordinated campaigns. Previous CSOH research documented cow vigilante groups using Instagram to fundraise and glorify violence, while whistleblower Frances Haugen revealed Meta’s awareness of rampant anti-Muslim propaganda in India. Against this backdrop, AI-generated hate has found “fertile ground” to escalate Islamophobic narratives.
The report also warns that the widespread rollout of inexpensive generative AI, including ChatGPT’s $5 subscription plan for Indian users, could accelerate abuse by lowering barriers to creating synthetic propaganda.
Broader implications
The think tank said the proliferation of AI-generated Islamophobic content poses multiple dangers:
- Threats to minorities: Routine exposure to dehumanizing propaganda increases psychological harm and risks of physical violence.
- Social fragmentation: Large-scale dissemination of conspiracies and sexualized hate corrodes interfaith relations and normalizes exclusion.
- Democratic erosion: The spread of synthetic hate undermines constitutional secularism and weakens India’s democratic institutions.
“The proliferation of hateful AI-generated content threatens to further colonize the Indian information sphere, which is already marked by rampant misinformation, anti-minority bias, and a severe crisis of credibility,” the report warned.
Recommendations
CSOH urged a multi-pronged response from lawmakers, platforms, and civil society. Its recommendations include:
- Updating India’s Information Technology Act and intermediary rules to specifically address AI-generated images and videos.
- Mandating provenance metadata in every AI output to trace synthetic content back to its source.
- Expanding the jurisdiction of the News Broadcasting & Digital Standards Authority and the Press Council of India to cover AI-generated visuals in news content.
- Establishing open-source research databases of AI-generated hate imagery to aid monitoring and detection.
- Introducing algorithmic transparency and independent audits of how platform recommendation systems amplify synthetic hate.
- Developing cross-platform early warning systems and “circuit breakers” to limit the viral spread of harmful synthetic campaigns.
CSOH concluded that generative AI has marked a turning point in India’s hate ecosystem. “Unchecked abuse of AI risks normalizing violence against minorities, destabilizing social cohesion, and weakening democratic institutions,” the report said.