Indian female workers forced to watch large amounts of violent and sexually abusive content to train AI


Training AI requires large datasets, and the content in those datasets is largely labelled by humans. The Guardian, a major British daily newspaper, recently reported that female workers in India are being forced to watch large amounts of content containing violence and sexual abuse in order to train AI.

'In the end, you feel blank': India's female workers watching hours of abusive content to train AI | Global development | The Guardian

https://www.theguardian.com/global-development/2026/feb/05/in-the-end-you-feel-blank-indias-female-workers-watching-hours-of-abusive-content-to-train-ai



Monsumi Murmu, a 26-year-old who lives with her family in Jharkhand, India, works as a content moderator for an international technology company. Murmu's job is to review images, videos, and text flagged by AI systems as potentially violating platform rules, and to classify them with appropriate labels. This labeling work further trains the AI systems.

Murmu watches up to 800 videos a day, many of which contain violent and sexually abusive material. The Guardian describes one video on her computer screen: 'a woman being held down by men, the camera shaking, screaming and breathing audible.' The video was so disturbing that Murmu wanted to fast-forward through it, but she had to watch it to the end for her work.

'I couldn't sleep for the first few months. When I closed my eyes, I would see the screen loading,' Murmu told The Guardian. She said she could not shake images of life-threatening accidents, the deaths of family members, and sexual violence from her dreams. The material no longer traumatizes her as acutely as it once did, but 'in the end, instead of anxiety, I just felt empty,' she said, adding that she still has nightmares.

Researchers say such emotional numbness and long-lasting psychological effects are characteristic of moderation work. 'From a risk perspective, content moderation belongs in the category of dangerous work, comparable to any deadly industry,' says Milagros Miceli, a sociologist who leads the Data Workers' Inquiry, a project investigating the role of AI workers.

Previous research has shown that content moderation causes persistent cognitive and emotional stress, often leading to behavioral changes such as increased vigilance. Workers report intrusive thoughts, anxiety, and sleep disturbances, and a 2025 study of content moderators, including workers in India, identified traumatic stress as the most prominent psychological risk.



Major technology companies often outsource data annotation, the work of labeling content for AI training, to overseas firms. According to Nasscom, an Indian IT industry association, as of 2021 approximately 70,000 people were working in data annotation in India, and the market was worth about $250 million (roughly 39 billion yen). Around 60% of that revenue came from American companies; only 10% came from Indian companies.

Approximately 80% of India's data annotation and content moderation workers come from rural areas, low-caste backgrounds, or indigenous communities, and more than half of the workforce is women. For these workers, digital work of any kind is cleaner, better paid by the hour, and more stable than agriculture or mining. Companies also deliberately operate in smaller cities and towns where rent and labor costs are lower, keeping expenses down.

'The dignity of work and the fact that it's valuable paid employment inspires gratitude. This expectation can make workers hesitant to question the psychological toll their work takes,' says Priyam Badaliya, a researcher on AI and data workers at the Aapti Institute in Bengaluru, southern India.

Raina Singh, a data annotator working from her hometown in Uttar Pradesh, was employed by a global technology platform and paid around £330 a month. She initially labeled short messages and spam emails, but after six months her role shifted to flagging content related to child sexual abuse.

'I never imagined this would be part of my job,' Singh told The Guardian. The content was so graphic and unrelenting that she raised concerns with her superiors, but they reassured her: 'This is God's work. You're keeping children safe.' Her job then shifted to classifying pornographic content, which she spent hours looking at each day. She has since quit, but more than a year later she says she still experiences nausea and dissociation when thinking about sex.



According to Badaliya, job advertisements do not state that the work involves labeling extremist content, and some people only discover what the job entails after signing a contract and beginning training.

The Guardian interviewed eight data annotation and content moderation companies in India; only two said they offer psychological support to their employees. The remaining companies told The Guardian that the work is not demanding enough to require mental health care.

'India's labor law does not officially recognize psychological harm, leaving workers without effective protection. Furthermore, content moderators and data workers are bound by strict non-disclosure agreements (NDAs) that prohibit them from discussing their work even with family and friends. This can lead to isolation and further psychological strain,' The Guardian noted.

in AI, Posted by log1h_ik