Series Note: This article is Part Two of our ongoing series on AI‑driven side‑channel attacks and the architectural shifts required to defend against them. If you missed Part One, you can read it here.
After Whisper Leak and the McKinsey incident, one conclusion is unavoidable: AI systems expose patterns that attackers can exploit. Even when encrypted, AI traffic reveals timing, size, and sequence information that can be used to infer prompts, workflows, or system behavior. And once an AI system is reachable, agentic tools can exploit weaknesses at machine speed.
The natural question for CISOs and technical leaders is: Can we mitigate this with the tools we already have? Padding, batching, shaping, noise injection, and application-layer normalization are all familiar techniques. They’ve been used for decades to reduce side-channel leakage in cryptography, web traffic, and distributed systems.
But AI changes the equation. AI systems generate traffic that is high frequency, highly structured, and extremely repetitive. The very properties that make AI powerful also make its communication patterns predictable. Traditional mitigations were never designed for workloads that behave like this, and in practice they fail for three structural reasons:
The result is a widening gap between what traditional defenses can obscure and what AI workloads reveal.
AI systems communicate differently from human-driven applications. Their traffic is high frequency, highly structured, and extremely repetitive: requests, responses, and workflows emitted at machine speed in stable, recurring patterns.
These characteristics make AI traffic ideal for machine-learning-based inference attacks. Even small, residual patterns can be amplified by an attacker who can observe enough flows. Traditional defenses were not built for this environment.
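These inference attacks need no decryption. A toy illustration (all profile names and byte counts are invented, not taken from any real system) shows how even a naive nearest-neighbor matcher can label encrypted flows purely from their record-size sequences:

```python
import random

random.seed(0)

# Synthetic "traffic profiles": each workload type yields a characteristic
# sequence of encrypted-record sizes (bytes). All values are invented for
# illustration; real attacks learn such profiles from observed flows.
PROFILES = {
    "short_query":  [120, 80, 80, 400],
    "code_request": [900, 80, 80, 80, 80, 2200],
    "summarize":    [3100, 80, 1500],
}

def observe(profile, jitter=10):
    """Simulate one encrypted flow: true sizes plus small network jitter."""
    return [s + random.randint(-jitter, jitter) for s in profile]

def distance(a, b):
    """Compare two size sequences, zero-padding the shorter one."""
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return sum(abs(x - y) for x, y in zip(a, b))

def classify(flow):
    """Nearest-neighbor match of an observed flow against known profiles."""
    return min(PROFILES, key=lambda name: distance(flow, PROFILES[name]))

# Encryption hides content, but the size sequence identifies the workload.
hits = sum(classify(observe(p)) == name
           for name, p in PROFILES.items() for _ in range(100))
print(f"{hits}/300 flows correctly classified")  # 300/300
```

With only three well-separated profiles the matcher is trivially accurate; the point is that repetition and structure, not weak encryption, are what make the classification possible.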
Padding adds extra bytes to messages to obscure their true size. It works reasonably well for simple, uniform traffic. But AI traffic is neither simple nor uniform.
In practice, padding reduces only one dimension of the side channel, while AI exposes many.
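A minimal sketch of that limitation, assuming fixed 512-byte padding buckets and invented per-record sizes: every record ends up the same size, but the record count, and with it the response length, survives untouched.

```python
def pad_to_bucket(size, bucket=512):
    """Round each record size up to the next 512-byte bucket."""
    return ((size + bucket - 1) // bucket) * bucket

# Two hypothetical streamed responses: per-token record sizes in bytes.
short_answer = [72, 80, 75, 81]                  # 4 tokens
long_answer  = [78, 74, 83, 79, 77, 80, 76, 82]  # 8 tokens

padded_short = [pad_to_bucket(s) for s in short_answer]
padded_long  = [pad_to_bucket(s) for s in long_answer]

# Every record now looks identical in size...
assert set(padded_short) == set(padded_long) == {512}

# ...but the record *count* still reveals response length exactly.
print(len(padded_short), len(padded_long))  # 4 8
```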
Batching groups multiple requests together to obscure individual patterns. It works well for high-throughput systems where latency is not critical. AI systems, however, are increasingly interactive and real‑time.
Batching is fundamentally misaligned with the responsiveness AI systems require.
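To see the tension concretely, here is a back-of-the-envelope sketch with an assumed 200 ms batch window and 30 ms service time (both numbers illustrative, not measured):

```python
# Sketch: a 200 ms batching window obscures which client sent which
# request, but every request now waits for the window to close.
BATCH_WINDOW_MS = 200  # hypothetical batching interval

def batched_latency(arrival_ms, service_ms=30):
    """Latency when requests are held until the next batch boundary."""
    next_batch = ((arrival_ms // BATCH_WINDOW_MS) + 1) * BATCH_WINDOW_MS
    return (next_batch - arrival_ms) + service_ms

# A request issued just after a batch departs pays nearly the full
# window on top of service time; one issued just before pays little.
print(batched_latency(arrival_ms=5))    # 225 ms
print(batched_latency(arrival_ms=195))  # 35 ms
```

For a token-by-token interactive stream, that worst-case penalty recurs at every hop, which is why batching and responsiveness pull in opposite directions.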
Traffic shaping smooths out flows to make them appear uniform. It’s effective for predictable workloads like video streaming or VoIP. AI workloads are anything but predictable.
Shaping works when the workload is predictable. AI is not.
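A rough sketch of the cost, assuming a shaper that emits one cell every 50 ms and a hypothetical token stream that bursts during decoding and stalls during tool calls (all timings invented): stalls must be filled with dummy traffic, and bursts must be delayed to the fixed rate.

```python
INTERVAL_MS = 50  # fixed send interval imposed by the shaper

# Inter-arrival times (ms) of real tokens in a hypothetical LLM stream:
# fast bursts while decoding, long stalls while a tool call runs.
real_gaps = [10, 10, 10, 900, 10, 10, 600, 10]

# Dummy cells needed to fill each stall at the fixed rate.
dummy_cells = sum(max(0, gap // INTERVAL_MS - 1) for gap in real_gaps)

# Delay added to real tokens that arrive faster than the fixed rate.
added_delay = sum(max(0, INTERVAL_MS - gap) for gap in real_gaps)

print(f"dummy cells injected: {dummy_cells}")        # 28
print(f"extra latency on real tokens: {added_delay} ms")  # 240 ms
```

The shaper pays twice: wasted bandwidth during stalls and added latency during bursts, and the more bursty the workload, the worse both costs get.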
Noise injection adds randomness to traffic patterns to make them harder to classify. It’s a common technique in privacy-preserving systems. But AI traffic is so structured that noise must be extremely strong to be effective, and strong noise breaks performance.
Noise is a partial mitigation at best and often counterproductive.
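A toy demonstration of why weak noise fails, using synthetic inter-token gaps and uniform ±100 ms jitter (both invented for illustration): a single jittered flow is ambiguous, but an attacker who observes the same repetitive workload many times can average the noise away.

```python
import random
import statistics

random.seed(1)

# Synthetic inter-token gaps (ms): position 2 hides a 40 ms longer stall,
# the kind of signal a tool call or cache lookup might leave behind.
true_gaps = [120, 120, 160, 120]

def noisy_flow():
    """One observed flow with uniform +/-100 ms jitter added per gap."""
    return [g + random.uniform(-100, 100) for g in true_gaps]

# A single jittered flow may or may not show the stall...
print([round(g) for g in noisy_flow()])

# ...but averaging 200 observations shrinks the noise to a few ms.
avg = [statistics.mean(col) for col in zip(*(noisy_flow() for _ in range(200)))]
print([round(g) for g in avg])
assert avg.index(max(avg)) == 2  # the stall re-emerges at position 2
```

Because AI traffic is so repetitive, the attacker gets the repeated observations for free; defeating averaging would require noise strong enough to wreck latency and throughput.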
Normalization attempts to make application behavior uniform so that traffic patterns reveal less. It works well when the application has a small number of predictable paths. AI systems have thousands.
Normalization works when the application is simple. AI systems are not simple.
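One way to see the scaling problem, using invented byte counts: making N distinct behaviors indistinguishable means inflating every flow to the envelope (worst case) of all N, and the envelope grows with diversity.

```python
# Invented per-interaction byte counts, for illustration only.
few_paths  = [1_200, 2_500, 3_100, 1_800, 2_900]  # a few similar endpoints
many_paths = [1_000] * 4_990 + [400_000] * 10     # mostly small chats, a few huge tool runs

def inflation(paths):
    """Average inflation when every flow is normalized to the largest path."""
    return max(paths) / (sum(paths) / len(paths))

print(f"simple app: {inflation(few_paths):.1f}x traffic")   # ~1.3x
print(f"AI system:  {inflation(many_paths):.1f}x traffic")  # ~222x
```

A handful of uniform paths can be normalized cheaply; thousands of wildly different ones cannot, which is the sense in which AI systems are not simple.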
All traditional mitigations fail for the same underlying reason: AI traffic is both visible and stable, while traditional defenses can disrupt only one dimension at a time. AI workloads expose multiple correlated signals simultaneously (size, timing, sequence, and frequency), and obscuring one leaves the others intact. This is why attackers, and increasingly autonomous agents, can still learn from encrypted AI traffic.
AI is accelerating the mismatch between attacker capability and defender tooling. Three trends make this urgent:
The result is a widening gap between what organizations believe encryption protects and what it actually protects in AI-driven environments. Closing that gap requires a shift from incremental mitigations to architectural solutions that eliminate observability at the transport layer — the topic of Blog 3.
Traditional defenses were designed for human-driven applications, not autonomous, high-frequency AI systems. Padding, batching, shaping, noise, and normalization each address one dimension of the side channel, but AI exposes many. The stability and visibility of AI traffic make these techniques insufficient on their own.
To secure AI systems, organizations must eliminate the attacker’s vantage point entirely, not just reduce it. Blog 3 will explore what that architectural shift looks like and why dispersion-based transport hardening is emerging as the most durable path forward.
If your AI systems still rely on padding, shaping, or noise to hide their traffic patterns, they remain observable. Dispersive® Stealth Networking removes that observability at the transport layer. Connect with our team to see how dispersion eliminates the visibility and stability these attacks depend on.
📞 Learn more or request a demo: www.dispersive.io
Header image courtesy of StockSnap from Pixabay.