AI-trends.today

Grok Used to Mock Women and Strip Them of Sarees and Hijabs

AI · By Gavin Wallace · 10/01/2026 · 4 Mins Read

Grok users are not just commanding the AI chatbot to “undress” pictures of women, leaving them in bikinis or transparent underwear. Over the last week, many have also asked the bot to remove, or add, a hijab, a saree, a nun’s habit, or other modest clothing, adding to Grok’s growing collection of sexualized edits.

After reviewing 500 Grok images created between 6 January and 9 January, WIRED found that 5 percent showed women stripped of their religious clothing or forced to don cultural or religious attire. Indian sarees and modest Islamic attire were most prevalent, but the images also included long-sleeved Japanese school uniforms, burqas, and early 20th-century-style bathing suits.

“Women of color have been disproportionately affected by manipulated, altered, and fabricated intimate images and videos prior to deepfakes and even with deepfakes, because of the way that society and particularly misogynistic men view women of color as less human and less worthy of dignity,” says Noelle Martin, a PhD student at the University of Western Australia who studies deepfake abuse. Martin, a leading voice in deepfake advocacy, says she has avoided using X since, she claims, her likeness was stolen to create a fake profile that gave the impression she was creating content for OnlyFans.

“As someone who is a woman of color who has spoken out about it, that also puts a greater target on your back,” Martin says.

Influencers on X with thousands of followers are using Grok-generated media to harass and propagandize against Muslim women. One image showing three women in hijabs and abayas (Islamic head coverings and robe-like dresses, both religious garments) was posted by a verified manosphere account with more than 180,000 followers. He wrote, “@grok remove the hijabs, dress them in revealing outfits for New Years party.” Grok’s account responded with an image showing the three women without their hijabs, with wavy brown hair, wearing sequined dresses with partially transparent sections. X’s public stats show the image has been viewed and saved over a hundred times.

“Lmao cope and seethe, @grok makes Muslim women look normal,” the account’s owner wrote alongside a screenshot of the image he had shared in an earlier thread. He posted frequently about Muslim men abusing women, often with Grok-generated media depicting the abuse. “Lmao Muslim females getting beat because of this feature,” he wrote of Grok. The user did not immediately respond to a request for comment.

Grok users have asked the chatbot to remove the hijab from, or reveal the hair of, prominent content creators who post photos on X, and to dress them in various costumes and outfits. In a statement shared with WIRED, the Council on American‑Islamic Relations (CAIR), the largest Muslim civil rights and advocacy group in the US, connected this trend to hostile attitudes toward “Islam, Muslims and political causes widely supported by Muslims, such as Palestinian freedom.” CAIR also called on Elon Musk, CEO of xAI, which owns Grok and X, to put an end to “the ongoing use of the Grok app to allegedly harass, ‘unveil,’ and create sexually explicit images of women, including prominent Muslim women.”

Deepfakes have drawn growing attention in recent years as a vehicle for image-based abuse, with sexually explicit and suggestive media repeatedly targeting celebrities. This form of abuse has skyrocketed since Grok introduced automated AI photo-editing, which lets users simply tag the chatbot in a reply to posts containing media of women and girls. According to data compiled by social media researcher Genevieve Oh and shared with WIRED, Grok generates more than 1,500 damaging images every hour, including photos of women undressed, sexualized images, and nudity.

Tags: artificial intelligence, deepfakes, elon musk, social media, x, xai