The risks of AI: 55% of teens report having created fake nude images
Creating sexually explicit images with artificial intelligence, particularly nude images made without consent, is nothing new, but the practice has intensified as the technology has become more widely accessible. The notorious 2023 case in Almendralejo (Badajoz, Spain), in which many minors were harmed by the circulation of fabricated nude images, exposed just how serious the problem is. In 2024, AI-generated sexually explicit images of singers Taylor Swift and Selena Gomez spread online, and in 2025 the ClothOff app was reported to produce some 200,000 such images every day.
In January of this year, xAI's Grok chatbot caused an uproar by generating nude images of women without their consent, producing up to 6,700 images per hour of women nude or in underwear. xAI eventually restricted Grok's image-editing features to paying users and announced a policy change intended to improve safety.
Deepfake technology is not going away; if anything, it is becoming more widespread, and the use of AI to create sexually explicit images is growing, especially among young people. A new study by Chad Steele of George Mason University (USA), published in the open-access journal PLOS One, reports that more than half of American teenagers have used AI tools to create nude images.
According to the published survey results, 55.3% of young people said they had used nudification apps to create at least one image of themselves or someone else, and 54.4% said they had received AI-generated nude images. In addition, 36.3% reported that an image of them had been created without their consent, and 33.2% said such images were later shared without their permission.
Notably, male participants in the survey reported higher rates of both creating and sharing these images, regardless of consent.
The study highlights the prevalence and accessibility of these practices. The combination of constantly evolving technologies and their widespread availability facilitates the creation and dissemination of harmful content. Therefore, in this context, AI-generated images continue to pose a significant threat, both because of their potential to violate individuals' privacy and dignity, and because of their ability to amplify forms of online harassment.