Close Menu
  • AI
  • Content Creation
  • Tech
  • Robotics
AI-trends.todayAI-trends.today
  • AI
  • Content Creation
  • Tech
  • Robotics
Trending
  • How to Create AI Agents that Use Short-Term Memory, Long-Term Memory, and Episodic memory
  • A Coding Analysis and Experimentation of Decentralized Federated Education with Gossip protocols and Differential privacy
  • Jeffrey Epstein Had a ‘Personal Hacker,’ Informant Claims
  • PyKEEN: Coding for Training, Optimizing and Evaluating Knowledge Graph Embeddings
  • Robbyant LingBot World – a Real Time World Model of Interactive Simulations and Embodied AI
  • SERA is a Soft Verified Coding agent, built with only Supervised training for practical Repository level Automation Workflows.
  • I Let Google’s ‘Auto Browse’ AI Agent Take Over Chrome. It didn’t quite click
  • DeepSeek AI releases DeepSeek OCR 2 with Causal visual flow encoder for layout-aware document understanding
AI-trends.todayAI-trends.today
Home»AI»A Single Poisoned Document Could Leak ‘Secret’ Data Via ChatGPT

A Single Poisoned Document Could Leak ‘Secret’ Data Via ChatGPT

AI By Gavin Wallace06/08/20253 Mins Read
Facebook Twitter LinkedIn Email
Let's Talk About ChatGPT and Cheating in the Classroom
Let's Talk About ChatGPT and Cheating in the Classroom
Share
Facebook Twitter LinkedIn Email

This is the latest in generative Models of AI are more than just standalone models text-generating chatbots—instead, they can easily be hooked up to your data to give personalized answers to your questions. OpenAI’s ChatGPT can be linked It is possible to allow others to view your Gmail or Microsoft Calendar, inspect the code on your GitHub site, or even find your appointments. But these connections have the potential to be abused—and researchers have shown it can take just a single “poisoned” document to do so.

Security researchers Michael Bargury, Tamir Ishay, Sharbat revealed their findings at today’s Black Hat hacker event in Las Vegas. They showed how an OpenAI Connectors weakness allowed sensitive data to be extracted using an indirect prompt injection attack. Demonstrating the attack dubbed AgentFlayerBargury shows us how to retrieve API keys that are stored on a demo Drive account.

This vulnerability shows how connecting AI systems to external systems, and transferring more data between them can increase the attack surface and multiply the vulnerabilities.

“There is nothing the user needs to do to be compromised, and there is nothing the user needs to do for the data to go out,” Bargury tells WIRED that he is CTO of the security firm Zenity. “We’ve shown this is completely zero-click; we just need your email, we share the document with you, and that’s it. So yes, this is very, very bad,” Bargury says.

OpenAI didn’t immediately reply to WIRED’s request for comment on the vulnerability of Connectors. Connectors were introduced as a ChatGPT Beta feature by OpenAI earlier this summer. website lists It claims that at least 17 different accounts can be connected with the system. The system allows for a variety of services to be linked with your account. “bring your tools and data into ChatGPT” You can also find out more about the following: “search files, pull live data, and reference content right in the chat.”

Bargury said he had reported OpenAI’s findings earlier in the year. The company responded quickly by introducing mitigations designed to stop the method he was using to extract data through Connectors. The way the attack works means only a limited amount of data could be extracted at once—full documents could not be removed as part of the attack.

“While this issue isn’t specific to Google, it illustrates why developing robust protections against prompt injection attacks is important,” Andy Wen, director senior of product security management at Google Workspace points out the company’s recently enhanced AI security measures.

artificial intelligence black hat chatgpt cybersecurity defcon Google hacking openai security vulnerabilities
Share. Facebook Twitter LinkedIn Email
Avatar
Gavin Wallace

Related Posts

Jeffrey Epstein Had a ‘Personal Hacker,’ Informant Claims

31/01/2026

I Let Google’s ‘Auto Browse’ AI Agent Take Over Chrome. It didn’t quite click

30/01/2026

‘Uncanny Valley’: Minneapolis Misinformation, TikTok’s New Owners, and Moltbot Hype

29/01/2026

A Yann LeCun–Linked Startup Charts a New Path to AGI

29/01/2026
Top News

AI-Generated Videos Against ICE Are Being Fanfic-Treated

AI: The Next Frontier A Consciousness Algorithm

Big Tech asked for a looser Clean Water Act permitting. Trump is willing to grant it.

The AGI Battle Between Microsoft and OpenAI is More Than Just a Contract

AI Is Eliminating Jobs for Younger Workers

Load More
AI-Trends.Today

Your daily source of AI news and trends. Stay up to date with everything AI and automation!

X (Twitter) Instagram
Top Insights

A Coding Solution to Create and Train Advanced Architectures Using JAX and Flax.

11/11/2025

How AI Dictation Software Changed My Workflow (and Which Are Worthwhile)

26/11/2025
Latest News

How to Create AI Agents that Use Short-Term Memory, Long-Term Memory, and Episodic memory

02/02/2026

A Coding Analysis and Experimentation of Decentralized Federated Education with Gossip protocols and Differential privacy

02/02/2026
X (Twitter) Instagram
  • Privacy Policy
  • Contact Us
  • Terms and Conditions
© 2026 AI-Trends.Today

Type above and press Enter to search. Press Esc to cancel.