Close Menu
  • AI
  • Content Creation
  • Tech
  • Robotics
AI-trends.todayAI-trends.today
  • AI
  • Content Creation
  • Tech
  • Robotics
Trending
  • xAI Releases Standalone Grok Speech to text and Text to speech APIs, Aimed at Enterprise Voice Developers
  • Anthropic releases Claude Opus 4.7, a major upgrade for agentic coding, high-resolution vision, and long-horizon autonomous tasks
  • The Coding Guide to Property Based Testing with Hypothesis and Stateful, Differential and Metamorphic Test Designs
  • Schematik Is ‘Cursor for Hardware.’ The Anthropics Want In
  • Hacking the EU’s new age-verification app takes only 2 minutes
  • Google AI Releases Google Auto-Diagnosis: A Large Language Model LLM Based System to Diagnose Integrity Test Failures At Scale
  • This is a complete guide to running OpenAI’s GPT-OSS open-weight models using advanced inference workflows.
  • The Huey Code Guide: Build a High-Performance Background Task Processor Using Scheduling with Retries and Pipelines.
AI-trends.todayAI-trends.today
Home»AI»A Single Poisoned Document Could Leak ‘Secret’ Data Via ChatGPT

A Single Poisoned Document Could Leak ‘Secret’ Data Via ChatGPT

AI By Gavin Wallace06/08/20253 Mins Read
Facebook Twitter LinkedIn Email
Let's Talk About ChatGPT and Cheating in the Classroom
Let's Talk About ChatGPT and Cheating in the Classroom
Share
Facebook Twitter LinkedIn Email

This is the latest in generative Models of AI are more than just standalone models text-generating chatbots—instead, they can easily be hooked up to your data to give personalized answers to your questions. OpenAI’s ChatGPT can be linked It is possible to allow others to view your Gmail or Microsoft Calendar, inspect the code on your GitHub site, or even find your appointments. But these connections have the potential to be abused—and researchers have shown it can take just a single “poisoned” document to do so.

Security researchers Michael Bargury, Tamir Ishay, Sharbat revealed their findings at today’s Black Hat hacker event in Las Vegas. They showed how an OpenAI Connectors weakness allowed sensitive data to be extracted using an indirect prompt injection attack. Demonstrating the attack dubbed AgentFlayerBargury shows us how to retrieve API keys that are stored on a demo Drive account.

This vulnerability shows how connecting AI systems to external systems, and transferring more data between them can increase the attack surface and multiply the vulnerabilities.

“There is nothing the user needs to do to be compromised, and there is nothing the user needs to do for the data to go out,” Bargury tells WIRED that he is CTO of the security firm Zenity. “We’ve shown this is completely zero-click; we just need your email, we share the document with you, and that’s it. So yes, this is very, very bad,” Bargury says.

OpenAI didn’t immediately reply to WIRED’s request for comment on the vulnerability of Connectors. Connectors were introduced as a ChatGPT Beta feature by OpenAI earlier this summer. website lists It claims that at least 17 different accounts can be connected with the system. The system allows for a variety of services to be linked with your account. “bring your tools and data into ChatGPT” You can also find out more about the following: “search files, pull live data, and reference content right in the chat.”

Bargury said he had reported OpenAI’s findings earlier in the year. The company responded quickly by introducing mitigations designed to stop the method he was using to extract data through Connectors. The way the attack works means only a limited amount of data could be extracted at once—full documents could not be removed as part of the attack.

“While this issue isn’t specific to Google, it illustrates why developing robust protections against prompt injection attacks is important,” Andy Wen, director senior of product security management at Google Workspace points out the company’s recently enhanced AI security measures.

artificial intelligence black hat chatgpt cybersecurity defcon Google hacking openai security vulnerabilities
Share. Facebook Twitter LinkedIn Email
Avatar
Gavin Wallace

Related Posts

Schematik Is ‘Cursor for Hardware.’ The Anthropics Want In

18/04/2026

Hacking the EU’s new age-verification app takes only 2 minutes

18/04/2026

OpenAI’s Kevin Weil is Leaving The Company

17/04/2026

Looking into Sam Altman’s Orb on Tinder Now proves that you are human

17/04/2026
Top News

Google Pixel 10 phones (and watch) have 10 crazy features.

Meta Warned Facial Recognizer Glasses Could Arm Sexual Assailants

America’s largest bitcoin miners are shifting to AI

The Leaked Memo from Anthropic’s CEO: the company will pursue Gulf State investments after all

Drake and Free Chicken have supplanted Sora as the top app store in America.

Load More
AI-Trends.Today

Your daily source of AI news and trends. Stay up to date with everything AI and automation!

X (Twitter) Instagram
Top Insights

Gemini Embedding-001 Now Available: Multilingual AI Text Embeddings via Google API

15/07/2025

This is a good example of how too much thought can lead to LLMs breaking: Inverse scaling in test-time computation

30/07/2025
Latest News

xAI Releases Standalone Grok Speech to text and Text to speech APIs, Aimed at Enterprise Voice Developers

19/04/2026

Anthropic releases Claude Opus 4.7, a major upgrade for agentic coding, high-resolution vision, and long-horizon autonomous tasks

19/04/2026
X (Twitter) Instagram
  • Privacy Policy
  • Contact Us
  • Terms and Conditions
© 2026 AI-Trends.Today

Type above and press Enter to search. Press Esc to cancel.