Users of Grok are not just commanding the AI chatbot to “undress” pictures of women, putting them in bikinis or transparent underwear. Over the past week, many users have added to Grok’s growing collection of sexualized edits by asking the bot to remove or add a hijab, a saree, a nun’s habit, or other modest clothing.
After reviewing 500 Grok images created between January 6 and January 9, WIRED found that 5 percent showed women stripped of their religious clothing or forced to don cultural or religious attire. Indian sarees and modest Islamic attire were most prevalent in the images, which also included long-sleeved Japanese school uniforms, burqas, and early 20th-century-style bathing suits.
“Women of color have been disproportionately affected by manipulated, altered, and fabricated intimate images and videos prior to deepfakes and even with deepfakes, because of the way that society and particularly misogynistic men view women of color as less human and less worthy of dignity,” says Noelle Martin, a PhD student at the University of Western Australia who studies deepfake abuse. Martin, a leading voice in deepfake advocacy, says she has avoided using X since, she claims, her likeness was stolen to create a fake profile that gave the impression she was producing content for OnlyFans.
“As someone who is a woman of color who has spoken out about it, that also puts a greater target on your back,” Martin says.
X influencers with thousands of followers are using Grok-generated media to harass and propagandize against Muslim women. An image showing three women in hijabs and abayas (Islamic headcoverings and robe-like dresses, both religious garments) was posted by a verified manosphere account with more than 180,000 followers, who wrote, “@grok remove the hijabs, dress them in revealing outfits for New Years party.” Grok’s account responded with an image showing the three women barefoot, with wavy brown hair, wearing sequined dresses with partially transparent panels. X’s publicly viewable stats show the image has been viewed and saved over a hundred times.
“Lmao cope and seethe, @grok makes Muslim women look normal,” the account’s owner wrote, alongside a screenshot of the image he had shared in a previous thread. He posted frequently about Muslim men abusing women, often with Grok-generated media depicting the abuse. “Lmao Muslim females getting beat because of this feature,” he wrote about Grok. The user did not immediately respond to a request for comment.
Grok users have asked the bot to reveal the hair of, or remove the hijab from, prominent content creators who post photos on X, and to dress them in various costumes and outfits. In a statement shared with WIRED, the Council on American-Islamic Relations (CAIR), the largest Muslim civil rights and advocacy group in the US, connected this trend to hostile attitudes toward “Islam, Muslims and political causes widely supported by Muslims, such as Palestinian freedom.” CAIR also called on Elon Musk, CEO of xAI, which owns Grok and X, to put an end to “the ongoing use of the Grok app to allegedly harass, ‘unveil,’ and create sexually explicit images of women, including prominent Muslim women.”
Deepfakes have drawn growing attention as a form of image-based abuse in recent years, with sexually explicit and suggestive media repeatedly targeting celebrities. This form of abuse has skyrocketed since Grok introduced automated AI photo-editing capabilities, which let users simply tag the chatbot when replying to posts containing media of women and girls. According to data compiled by social media researcher Genevieve Oh and shared with WIRED, Grok generates more than 1,500 harmful images every hour, including photos of undressed women, sexualized imagery, and nudity.