For decades, enterprises relied on strong encryption to protect sensitive data in transit, and encryption used to be the end of the conversation. If an organization could say “we use TLS 1.3 and modern cipher suites,” that was enough to reassure boards, regulators, and customers that data in transit was safe.
AI has quietly introduced a new cybersecurity problem, one that most organizations have not yet recognized and that traditional defenses were never designed to handle. Modern AI systems, from LLMs and agentic frameworks to autonomous machine-to-machine (M2M) workflows, don’t just send encrypted data. They generate highly structured, repetitive, machine-driven communication patterns. Those patterns are now a source of intelligence for attackers, even when the payload is perfectly encrypted.
Two recent developments illustrate this shift.
The first is Microsoft’s Whisper Leak research. Microsoft’s security team demonstrated that an attacker who can observe encrypted LLM traffic may be able to infer the topic of a user’s query by analyzing metadata such as packet timing, size, and sequence. The cryptography remains intact; the attacker never sees plaintext. The risk comes from the shape of the traffic, not the content. Whisper Leak is presented as a research result, not a claim that all deployed systems are equally exposed, but it establishes a critical fact: AI traffic is fingerprintable because AI systems communicate in stable, recognizable ways.
The second is the widely reported McKinsey agentic AI incident, in which an autonomous security agent developed by CodeWall reportedly exploited weaknesses in McKinsey’s internal AI platform, Lilli. According to public reporting, the agent discovered unauthenticated endpoints and a SQL injection vulnerability, then used those footholds to access a large volume of internal data. The details come from external sources, and McKinsey’s internal findings may differ, but the pattern is what matters: once an AI-driven system is reachable and observable, an automated agent can explore and exploit it at machine speed.
Together, these events reveal a new reality for CISOs and technical leaders:
AI is no longer just a workload. It is an attack surface, one that behaves differently from anything enterprises have secured before.
Human-driven applications produce irregular, noisy traffic. People pause, think, click unpredictably, and abandon workflows. AI systems behave differently. Their communication patterns are:

- Highly structured, with consistent message formats and sizes
- Repetitive, replaying the same workflows again and again, at scale
- Machine-driven, with predictable timing and little of the randomness human behavior introduces
From a machine learning perspective, this stability is ideal training data. If an adversary can observe enough encrypted traffic, they can train classifiers to recognize patterns that correlate with specific intents, workflows, or application states.
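To make this concrete, here is a minimal sketch of the idea, not the Whisper Leak methodology itself: the traffic traces, feature choices, and workflow labels below are all synthetic assumptions. A simple nearest-centroid classifier trained only on packet sizes and inter-arrival times (never payloads) can separate two stable machine-driven workflows:

```python
import statistics

def features(trace):
    """Summarize a flow as (mean size, size stdev, mean gap).
    `trace` is a list of (packet_size_bytes, inter_arrival_ms) pairs;
    no plaintext is ever inspected."""
    sizes = [s for s, _ in trace]
    gaps = [g for _, g in trace]
    return (statistics.mean(sizes), statistics.pstdev(sizes), statistics.mean(gaps))

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def train(labeled_traces):
    """Average the feature vectors per label (nearest-centroid training)."""
    centroids = {}
    for label, traces in labeled_traces.items():
        vecs = [features(t) for t in traces]
        centroids[label] = tuple(statistics.mean(dim) for dim in zip(*vecs))
    return centroids

def classify(centroids, trace):
    f = features(trace)
    return min(centroids, key=lambda lbl: distance(centroids[lbl], f))

# Synthetic "stable" traffic: workflow A streams small packets quickly,
# workflow B sends large packets at a slower cadence.
workflow_a = [[(120 + i % 7, 15 + i % 3) for i in range(40)] for _ in range(5)]
workflow_b = [[(1400 - i % 11, 90 + i % 5) for i in range(40)] for _ in range(5)]

model = train({"A": workflow_a, "B": workflow_b})
unseen = [(122, 16)] * 40   # metadata only; the payload could be perfectly encrypted
print(classify(model, unseen))  # → A
```

The point of the sketch is not the classifier, which is deliberately trivial, but the input: everything it consumes is visible to a passive network observer even under TLS 1.3.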
Microsoft’s Whisper Leak research describes a side-channel attack on remote language models that relies on network metadata, not decrypted content. According to Microsoft, the attack “could allow a cyberattacker in a position to observe your network traffic to conclude language model conversation topics, despite being end-to-end encrypted via TLS.”
At a high level:

- An attacker positioned to observe network traffic captures encrypted exchanges between a user and an LLM service.
- Packet sizes, timing, and sequence form distinctive patterns that vary with the conversation topic.
- A classifier trained on those metadata features can infer the likely topic of a query without ever seeing plaintext.
Important nuances:

- The cryptography is never broken; TLS continues to do exactly what it promises.
- Whisper Leak is presented as a research result, not a claim that every deployed system is equally exposed.
- The attack requires a vantage point from which to observe the encrypted traffic.
The implication is structural: if AI traffic is stable and observable, it is likely inferable. This applies not only to LLM prompts but to any AI-driven system that communicates over the network.
The McKinsey case illustrates a complementary risk: once an AI system is reachable and its behavior is observable, an autonomous agent can use that visibility to drive exploitation.
Public reporting describes the following sequence:

- The autonomous agent discovered unauthenticated endpoints on McKinsey’s internal AI platform, Lilli.
- It identified a SQL injection vulnerability.
- It used those footholds to access a large volume of internal data, operating at machine speed throughout.
The details come from external reporting, and the full internal incident analysis has not been published. But the pattern is consistent with what security teams increasingly observe: agentic AI compresses the attack timeline. It does not invent new vulnerability classes, but it changes how quickly and thoroughly existing weaknesses can be found and exploited.
In both Whisper Leak and the McKinsey incident, encryption did what it was designed to do:

- It kept data in transit confidential; no ciphertext was decrypted.
- No keys were compromised and no cryptographic protocol was defeated.
What it did not do:

- Hide the size, timing, and sequence of encrypted messages.
- Conceal which endpoints were reachable or how systems behaved when probed.
- Prevent attackers from learning and exploiting stable patterns of behavior.
TLS, AES, ChaCha, and post-quantum key exchange protect content and keys. They do not erase context:

- Packet sizes and inter-arrival timing remain observable.
- Endpoints, connection frequency, and session structure remain observable.
- The overall shape of a workflow, who talks to what, when, and how much, remains observable.
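As a hedged illustration of what "context" means here, consider a hypothetical packet trace for a single TLS session (the timestamps, sizes, and annotations below are invented for the example). Everything this function extracts is available to an on-path observer without decrypting anything:

```python
# Hypothetical packet trace: (timestamp_ms, direction, size_bytes).
# All three fields are visible on the wire even when the payload is
# encrypted end-to-end with TLS.
trace = [
    (0,   "out", 517),    # handshake-sized record
    (42,  "in",  1380),
    (44,  "in",  1380),
    (60,  "out", 93),     # short request
    (210, "in",  1380),   # response burst begins
    (212, "in",  1380),
    (214, "in",  640),
]

def observable_context(trace):
    """Metadata that encryption does not hide: sizes, directions, timing, volume."""
    gaps = [b[0] - a[0] for a, b in zip(trace, trace[1:])]
    return {
        "packet_sizes": [size for _, _, size in trace],
        "directions": [d for _, d, _ in trace],
        "inter_arrival_ms": gaps,
        "bytes_in": sum(s for _, d, s in trace if d == "in"),
        "bytes_out": sum(s for _, d, s in trace if d == "out"),
    }

ctx = observable_context(trace)
print(ctx["bytes_in"], ctx["bytes_out"])  # → 6160 610
```

Each field in the returned dictionary corresponds to one of the bullets above: sizes and timing, session structure, and overall workflow shape.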
For traditional applications, this contextual leakage has often been considered low risk. For AI systems, it is different:

- AI traffic is stable and repetitive, exactly the kind of signal machine learning classifiers thrive on.
- AI workflows run continuously and at scale, giving an observer abundant training samples.
- What the metadata reveals, such as intent, workflow, and application state, is often sensitive in itself.
Both Whisper Leak and the McKinsey incident depend on two structural conditions:

- The AI system is reachable: an attacker or autonomous agent can interact with it or sit on its network path.
- Its behavior is observable: its traffic and responses form stable patterns that can be learned.
If either condition is removed:

- Without observability, metadata-based inference like Whisper Leak has no stable signal to learn from.
- Without reachability, an autonomous agent like the one in the McKinsey incident has nothing to probe or exploit.
This is the pivot point for the architectural argument that follows in the rest of the series: AI security cannot rely on cryptographic strength alone; it must address observability and stability at the transport layer.
Whisper Leak shows that attackers can infer what your AI systems are doing without breaking encryption, simply by watching how they talk. The McKinsey incident shows that once an AI system is reachable and observable, an autonomous agent can use that visibility to drive exploitation at machine speed. In both cases, the core issue is not weak cryptography but exposed, learnable patterns in AI communication and behavior.
AI has created a new attack surface, one where context can be as revealing as content, and where autonomy accelerates risk. The next blog in this series examines why traditional mitigations such as padding, batching, shaping, noise, and application-layer normalization cannot fully close this gap, and why a structural approach to eliminating observability is required.
If your AI systems rely on encryption alone, they remain observable. Dispersive eliminates that observability at the transport layer. To understand how structural dispersion protects AI, agents, and autonomous workflows, connect with our team for a technical briefing.
📞 Learn more or request a demo: www.dispersive.io
Header image courtesy of wastedgeneration from Pixabay.