
[DefCon32] Your AI Assistant Has a Big Mouth: A New Side-Channel Attack

As AI assistants like ChatGPT reshape how people interact with technology, their security gaps pose alarming risks. Yisroel Mirsky, a Zuckerman Faculty Scholar at Ben-Gurion University, alongside graduate students Daniel Ayzenshteyn and Roy Weiss, unveils a novel side-channel attack exploiting token lengths in encrypted AI responses. Their research exposes vulnerabilities across major platforms, including OpenAI, Microsoft, and Cloudflare, threatening the confidentiality of personal and sensitive conversations.

Yisroel’s Offensive AI Research Lab focuses on adversarial techniques, and this discovery highlights how subtle data leaks can undermine encryption. By analyzing network traffic, they intercept encrypted responses, reconstructing conversations from medical queries to document edits. Their findings, disclosed responsibly, prompted swift vendor patches, underscoring the urgency of securing AI integrations.

The attack leverages predictable token lengths in JSON responses, allowing adversaries to infer content despite encryption. Demonstrations reveal real-world impacts, from exposing personal advice to compromising corporate data, urging a reevaluation of AI security practices.
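To see why encryption alone does not help, consider a toy illustration (ours, not the researchers' code): AES-GCM, representative of the ciphers protecting HTTPS traffic, produces ciphertext whose length is the plaintext length plus a fixed 16-byte tag, so a response streamed one token per record leaks each token's length.

```python
# Toy demonstration: length-preserving AEAD encryption leaks token lengths.
# AES-GCM ciphertext = plaintext + a fixed 16-byte tag, so sizes pass through.
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(key)

# Hypothetical streamed reply, one token per encrypted record.
tokens = ["I", " have", " a", " red", " rash"]

for tok in tokens:
    nonce = os.urandom(12)
    ct = aead.encrypt(nonce, tok.encode(), None)
    # len(ct) - 16 == len(tok): an observer learns every token's length.
    print(f"token={tok!r:>8} plaintext={len(tok)}B ciphertext={len(ct)}B")
```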

Understanding the Side-Channel Vulnerability

Yisroel explains the attack’s mechanics: AI assistants stream their responses token by token, typically as a sequence of JSON objects, so the size of each encrypted record tracks the length of the token it carries. Encryption hides the content but not the length. By sniffing HTTPS traffic, attackers recover the sequence of token lengths and map it to probable outputs; a query about a medical rash, for instance, yields a distinctive packet-size pattern that permits reconstruction of the reply.
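A minimal sketch of the inference step, under assumed framing (the 16-byte fixed overhead and the record sizes below are illustrative, not measurements from any vendor): subtract the constant per-record overhead to recover token lengths, then test candidate phrasings against the leaked pattern.

```python
# Hedged sketch of length inference, not the published tooling.
FIXED_OVERHEAD = 16  # assumed constant framing per record; varies by TLS stack

def token_lengths(record_sizes):
    """Ciphertext size minus fixed overhead approximates token length."""
    return [size - FIXED_OVERHEAD for size in record_sizes]

def fits(candidate_tokens, observed):
    """Does a candidate tokenization match the leaked length pattern?"""
    return [len(t) for t in candidate_tokens] == observed

observed = token_lengths([17, 21, 18, 20, 21])  # e.g., sniffed record sizes
print(observed)                                  # [1, 5, 2, 4, 5]
print(fits(["I", " have", " a", " red", " rash"], observed))  # True
```

In practice the matching is probabilistic rather than an exact candidate list, since many sentences can share a length pattern.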

Vulnerable vendors, unaware of the flaw until the team’s disclosure in early 2024, included OpenAI and Quora. The team’s tool, GPTQ Logger, automates the traffic analysis, highlighting how easily unpatched systems can be exploited.
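For readers curious about the capture side, a bare-bones sniffer along these lines suffices to collect the raw size signal (our sketch using scapy; the service address is a placeholder, and this is not the team’s tool):

```python
# Illustrative capture loop: log the ciphertext payload size of each packet
# arriving from a (hypothetical) assistant endpoint. Sizes are the only
# signal a length side channel needs; no decryption is involved.
from scapy.all import sniff, IP, Raw

ASSISTANT_IP = "203.0.113.10"  # placeholder service address

def log_record_size(pkt):
    if pkt.haslayer(IP) and pkt.haslayer(Raw) and pkt[IP].src == ASSISTANT_IP:
        print(len(pkt[Raw].load))

sniff(filter=f"tcp port 443 and host {ASSISTANT_IP}",
      prn=log_record_size, store=False)
```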

Vendor Responses and Mitigations

Post-disclosure, vendors acted decisively. OpenAI implemented padding to the nearest 32-byte boundary, obscuring token lengths. Cloudflare adopted random padding, further disrupting the patterns. By March 2024, patches had neutralized the threat, and five vendors offered bug bounties.
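Both fixes amount to a few lines of code. Here is a sketch of the two padding styles (our approximation, not the vendors’ exact implementations; a real deployment must also let the client strip the padding, e.g., via ignorable whitespace in the JSON framing):

```python
# Two padding styles against length inference (illustrative only).
import secrets

def pad_to_block(payload: bytes, block: int = 32) -> bytes:
    """Pad to the next multiple of `block` bytes: many lengths collapse to one."""
    return payload + b" " * ((-len(payload)) % block)

def pad_random(payload: bytes, max_extra: int = 32) -> bytes:
    """Append a random amount of padding, decorrelating size from content."""
    return payload + b" " * secrets.randbelow(max_extra + 1)

msg = b'{"token": " rash"}'
print(len(msg), len(pad_to_block(msg)))  # 18 -> 32: the exact length is hidden
```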

Yisroel emphasizes simple defenses: random padding, fixed-size packets, or increased buffering. These measures, easily implemented, prevent length-based inference, safeguarding user privacy.
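The buffering option deserves a sketch of its own (again an illustration under our own assumptions, not a vendor implementation): by batching tokens server-side before flushing, individual token boundaries never appear as distinct records on the wire.

```python
# Hedged sketch of the buffering defense: group tokens into larger records
# so per-token lengths are never individually observable.
class BufferedStream:
    def __init__(self, flush_at: int = 128):
        self.parts, self.size, self.flush_at = [], 0, flush_at

    def send_token(self, token: str) -> None:
        self.parts.append(token)
        self.size += len(token)
        if self.size >= self.flush_at:
            self.flush()

    def flush(self) -> None:
        if self.parts:
            record = "".join(self.parts)  # one record hides token boundaries
            print(f"-> emitting {len(record)}-byte record")
            self.parts, self.size = [], 0

stream = BufferedStream()
for tok in ["I", " have", " a", " red", " rash"] * 20:
    stream.send_token(tok)
stream.flush()  # drain the tail
```

The trade-off is latency: larger batches leak less but make the assistant feel less responsive, which is why padding is often the lighter-weight choice.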

Implications for AI Security

The discovery underscores a broader issue: AI services, despite their sophistication, inherit historical encryption pitfalls. Yisroel draws parallels to past side-channel attacks, where minor details like timing betrayed secrets. AI’s integration into sensitive domains demands rigorous security, akin to traditional software.

The work encourages offensive research to uncover similar weaknesses, advocating for AI’s dual role in both identifying and mitigating vulnerabilities. As new services emerge, proactive design is critical to prevent data exposure.

Broader Call to Action

Yisroel’s team urges the community to explore additional side channels, from compression ratios to processing delays. Their open-source tools invite further scrutiny, fostering a collaborative defense against evolving threats.

This research redefines AI assistant security, emphasizing meticulous data handling to protect user trust.
