Back to Articles
high CVSS 7.5

When Your AI Assistant Turns Against You: The Docker Ask Gordon Vulnerability

Researchers discovered DockerDash, a critical vulnerability in Docker Ask Gordon AI feature that lets attackers execute code by hiding malicious instructions in image metadata. The attack exploits blind trust between the AI assistant and MCP Gateway. Patched in Docker Desktop 4.50.0.

By Danny Mercer, CISSP — Lead Security Analyst Feb 4, 2026 14 views
Affected Products
Docker Desktop < 4.50.0 Docker CLI with Ask Gordon Docker Cloud with MCP Gateway

Here's a fun thought experiment: what happens when the AI assistant you're using to help manage your containers is also the thing that compromises your entire Docker environment? That's exactly what researchers at Noma Labs found with "DockerDash," a critical vulnerability in Docker's Ask Gordon AI feature that let attackers execute code just by hiding malicious instructions in image metadata.

The vulnerability, patched in Docker Desktop 4.50.0 back in November, highlights a problem that's going to get a lot more common as we bolt AI assistants onto everything: these systems are built to be helpful, and attackers can weaponize that helpfulness.

The attack is almost elegant in its simplicity. An attacker creates a Docker image with malicious instructions embedded in the LABEL fields of the Dockerfile. These metadata fields look completely innocuous — they're typically used for things like version info, maintainer details, or descriptions. But when someone asks Gordon AI about the image, Gordon reads that metadata, interprets the embedded instructions as legitimate commands, and forwards them to Docker's MCP Gateway for execution. No validation at any step. Just blind trust all the way down.

The researchers called it "Meta-Context Injection," and if that sounds like prompt injection with extra steps, you're not wrong. The core issue is that the MCP Gateway — the middleware sitting between the AI and your local environment — can't tell the difference between a standard metadata description and a malicious command disguised as one. It treats everything from the AI as a trusted request, which means once Gordon gets tricked, everything downstream follows along.

The impact varies depending on how you're running Docker. For cloud and CLI users, this is full remote code execution with whatever privileges your Docker environment has. For Desktop users, it's data exfiltration — attackers could hoover up details about your installed tools, container configurations, network topology, and mounted directories. Not as immediately catastrophic as RCE, but still enough to make your next pentest very interesting.

What makes this particularly concerning is how passive the attack is from the victim's perspective. You don't have to do anything risky. You just have to ask your AI assistant about a container image. The malicious payload sits there in the metadata, waiting for someone to query it. It's supply chain poisoning meets prompt injection meets AI agent exploitation, and it's exactly the kind of creative attack path that's going to keep security teams up at night as AI gets integrated into more developer tooling.

Docker fixed this in version 4.50.0, so if you're running anything older, update now. But the bigger takeaway here is that AI assistants are becoming a legitimate attack surface. Every tool that can read untrusted input and take action based on it is a potential vector. The industry is going to need to figure out zero-trust validation for AI context data, because right now, we're treating AI agents like they have good judgment when they're really just very confident pattern matchers doing exactly what they're told.

Trust your AI assistant, but verify what it's reading before it acts on your behalf.

Tags

DockerAI SecurityPrompt InjectionSupply ChainMCPContainer SecurityMeta-Context Injection

References