This is the latest in generative Models of AI are more than just standalone models text-generating chatbots—instead, they can easily be hooked up to your data to give personalized answers to your questions. OpenAI’s ChatGPT can be linked It is possible to allow others to view your Gmail or Microsoft Calendar, inspect the code on your GitHub site, or even find your appointments. But these connections have the potential to be abused—and researchers have shown it can take just a single “poisoned” document to do so.
Security researchers Michael Bargury, Tamir Ishay, Sharbat revealed their findings at today’s Black Hat hacker event in Las Vegas. They showed how an OpenAI Connectors weakness allowed sensitive data to be extracted using an indirect prompt injection attack. Demonstrating the attack dubbed AgentFlayerBargury shows us how to retrieve API keys that are stored on a demo Drive account.
This vulnerability shows how connecting AI systems to external systems, and transferring more data between them can increase the attack surface and multiply the vulnerabilities.
“There is nothing the user needs to do to be compromised, and there is nothing the user needs to do for the data to go out,” Bargury tells WIRED that he is CTO of the security firm Zenity. “We’ve shown this is completely zero-click; we just need your email, we share the document with you, and that’s it. So yes, this is very, very bad,” Bargury says.
OpenAI didn’t immediately reply to WIRED’s request for comment on the vulnerability of Connectors. Connectors were introduced as a ChatGPT Beta feature by OpenAI earlier this summer. website lists It claims that at least 17 different accounts can be connected with the system. The system allows for a variety of services to be linked with your account. “bring your tools and data into ChatGPT” You can also find out more about the following: “search files, pull live data, and reference content right in the chat.”
Bargury said he had reported OpenAI’s findings earlier in the year. The company responded quickly by introducing mitigations designed to stop the method he was using to extract data through Connectors. The way the attack works means only a limited amount of data could be extracted at once—full documents could not be removed as part of the attack.
“While this issue isn’t specific to Google, it illustrates why developing robust protections against prompt injection attacks is important,” Andy Wen, director senior of product security management at Google Workspace points out the company’s recently enhanced AI security measures.

