Artificial intelligence is supposed to be a good thing, helping us shop smarter, automate boring tasks, and even create jaw-dropping art. But now, the very tools meant to make life easier are becoming threats. It sounds like the plot of a tech thriller, but the recent Chhattisgarh AI pornography case proves the threat is real and closer to home than we’d like to believe.
Chhattisgarh AI pornography case
In Naya Raipur, Chhattisgarh, a third-year Electronics and Communication Engineering student at IIIT allegedly used AI to turn photographs of 36 female classmates into explicit images. Imagine the horror of discovering your face in content you never consented to, circulating in a digital space that feels impossible to control.
The case surfaced on October 6, 2025, when 36 students reported the ordeal. The college moved quickly: laptops, phones, and pen drives were seized, and cyber experts began combing through the files. Over 1,000 objectionable images and videos were found, but investigators are still determining exactly how far the content spread. Meanwhile, a three-member committee of female staff was formed to support the victims and handle the investigation with sensitivity.
As disturbing as this is, it isn’t new. Just a couple of months ago, another AI horror story shocked the country. A man named Pritam Bora used his ex-girlfriend’s photographs to create hundreds of fake images and videos, then built a fake Instagram account around them. The account amassed 1.4 million followers and earned him money before his ex-girlfriend and the authorities found out.
Related: 1.4 Mn Followers, ₹10 Lakh In Subscriptions, But She Didn’t Know: Babydoll Archi’s AI Horror Story
Is AI the new tool of abuse?

This isn’t just about one student’s terrible choices. It’s a warning about how easily technology can blur the line between innovation and exploitation. The Internet Watch Foundation recently highlighted a shocking jump in AI-generated child sexual abuse content online. In the first half of 2025 alone, it confirmed 1,286 illegal AI-created videos, over 1,000 of which were considered extremely severe. To put that in perspective, only two such videos had been verified in the whole of the previous year. Many of these videos are made with readily available AI video generation tools using existing illegal material, making it disturbingly easy for perpetrators to produce and share such content. AI-generated “deepfakes” are no longer science fiction; they’re a growing tool for harassment, and society isn’t fully prepared to deal with the moral, ethical, and psychological fallout.
How easily can technology blur the line between reality and fiction? If AI can create convincing but entirely fake images of real people, what does consent even mean? In a country where consent already holds so little meaning, access to these tools threatens to make it a fictional concept altogether. Are we ready to navigate the consequences? This isn’t just a campus scandal; it’s a mirror reflecting the urgent ethical dilemmas of our digital age.
Every innovation comes with responsibility. Just because we can create something doesn’t mean we should. How do we teach digital ethics in a world where reality can be rewritten with a few clicks? And how do we protect those whose lives could be destroyed by tools meant to “help” us? These are questions AI is forcing us to face, whether we like it or not.