
Three Critical Flaws in LangChain and LangGraph Could Drain Your AI Applications Dry

Security researchers at Cyera disclosed three vulnerabilities in LangChain and LangGraph affecting millions of AI applications. Path traversal, deserialization, and SQL injection flaws expose filesystem contents, environment secrets, and conversation histories.

By Danny Mercer, CISSP — Lead Security Analyst | Mar 27, 2026

If you have been racing to integrate AI into your applications, you are probably familiar with LangChain. The open-source framework has become the go-to plumbing for building LLM-powered tools, and its companion project LangGraph handles the more complex agentic workflows that everyone is excited about. With over 52 million downloads of LangChain alone in just the past week, we are talking about infrastructure that touches a staggering number of enterprise deployments worldwide. That popularity just became a problem.

Security researchers at Cyera have disclosed three distinct vulnerabilities affecting these frameworks, and the findings paint an uncomfortable picture. Each flaw opens a different pathway into your enterprise data, exposing filesystem contents in one case, environment secrets in another, and full conversation histories in the third. Vladimir Tokarev, the Cyera researcher who uncovered the issues, put it plainly when he noted that attackers now have three independent routes to drain sensitive information from any LangChain deployment.

The first vulnerability, tracked as CVE-2026-34070, carries a CVSS score of 7.5 and lives in the prompt-loading functionality of LangChain Core. This is a classic path traversal bug, the kind of issue that security professionals have been warning developers about for decades. By crafting a malicious prompt template, an attacker can escape the intended directory structure and access arbitrary files on the filesystem. No validation stands in their way. If your LangChain application runs alongside Docker configurations, SSH keys, or any other sensitive files, those are now fair game for anyone who can feed a specially crafted prompt into your system.
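The defense against this class of bug is the same one that has worked for decades: resolve the requested path and verify it still sits inside the allowed directory before reading anything. A minimal sketch in Python (the prompt directory and `load_prompt` helper are illustrative, not LangChain's actual API):

```python
from pathlib import Path

# Hypothetical directory where an application keeps its prompt templates.
PROMPT_DIR = Path("/app/prompts").resolve()

def load_prompt(name: str) -> str:
    """Resolve the requested template and refuse anything outside PROMPT_DIR."""
    candidate = (PROMPT_DIR / name).resolve()
    # A traversal payload like "../../etc/passwd" resolves to a path outside
    # the allowed directory; is_relative_to() catches the escape.
    if not candidate.is_relative_to(PROMPT_DIR):
        raise ValueError(f"refusing to load prompt outside {PROMPT_DIR}: {name!r}")
    return candidate.read_text()
```

The key detail is resolving the path *before* the containment check; comparing the raw string would let `..` sequences slip through.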

The second flaw is worse. CVE-2025-68664 scores a nasty 9.3 on the CVSS scale and stems from how LangChain handles serialized data. An attacker can pass input structured to trick the application into treating it as an already-serialized LangChain object rather than regular user data. When the framework deserializes this malicious input, it happily leaks API keys, environment variables, and other secrets that developers assumed were safely tucked away. This vulnerability has actually been floating around since late 2025 when researchers at Cyata first identified it under the nickname LangGrinch. If that name did not get your attention back in December, now would be a good time to take it seriously.
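The general fix for this pattern is to never let input declare its own type, and to check every incoming type tag against an explicit allow-list before reconstructing anything. A hedged sketch of that idea (the type names and `safe_load` helper are hypothetical, not LangChain's serialization API):

```python
import json

# Only object types the application explicitly expects may be reconstructed.
ALLOWED_TYPES = {"chat_message", "prompt_template"}

def safe_load(raw: str) -> dict:
    """Parse untrusted JSON and reject any self-declared type we don't expect."""
    data = json.loads(raw)
    obj_type = data.get("type")
    if obj_type not in ALLOWED_TYPES:
        # Anything not on the allow-list is treated as hostile, not as a
        # serialized framework object entitled to special handling.
        raise ValueError(f"refusing to deserialize unknown type: {obj_type!r}")
    return data
```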

The third issue targets LangGraph specifically. CVE-2025-67644 is an SQL injection vulnerability in the SQLite checkpoint implementation, carrying a CVSS score of 7.3. LangGraph uses checkpoints to maintain state in complex workflows, and the way metadata filter keys get processed allows attackers to manipulate SQL queries. If your agentic workflows store sensitive conversation histories or workflow state in SQLite checkpoints, an attacker exploiting this flaw can run arbitrary queries against your database. Think about what lives in those conversation logs for a moment. Customer data, internal discussions, potentially privileged information that your AI agent was helping process. All of it becomes accessible.
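The underlying pattern is worth spelling out: SQL parameter binding protects *values*, but column and key names cannot be bound, so filter keys must be validated against an allow-list before they touch the query string. A sketch of that pattern with Python's `sqlite3` module (the table schema, allowed keys, and function are illustrative, not LangGraph's actual checkpoint code):

```python
import sqlite3

# Filter keys the application knows about; anything else is rejected outright.
ALLOWED_KEYS = {"thread_id", "step", "source"}

def query_checkpoints(conn: sqlite3.Connection, filters: dict) -> list:
    clauses, params = [], []
    for key, value in filters.items():
        if key not in ALLOWED_KEYS:
            # Keys are spliced into SQL text, so they must be allow-listed.
            raise ValueError(f"unexpected filter key: {key!r}")
        clauses.append(f"{key} = ?")  # key is safe: it came from the allow-list
        params.append(value)         # values always go through parameter binding
    where = (" WHERE " + " AND ".join(clauses)) if clauses else ""
    return conn.execute(
        "SELECT thread_id, step, source FROM checkpoints" + where, params
    ).fetchall()
```

With this split, an attacker-supplied key like `thread_id = '' OR 1=1 --` never reaches the SQL text at all.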

What makes this disclosure particularly troubling is the interconnected nature of the AI tooling ecosystem. LangChain does not exist as an isolated library. It sits at the center of a massive dependency web that stretches across hundreds of downstream projects. Other libraries wrap it, extend it, integrate with it, and depend on its core functionality. When something breaks at this level, the damage ripples outward through every tool built on top of it. Your favorite AI wrapper or agent framework might be importing vulnerable LangChain code without you ever realizing it.

The timing here is notable. Just days before this disclosure, a critical flaw in Langflow, tracked as CVE-2026-33017 with a CVSS of 9.3, came under active exploitation within 20 hours of public disclosure. Attackers moved fast to exfiltrate data from developer environments, and the vulnerability shares the same root cause as an earlier flaw from 2025. Unauthenticated endpoints executing arbitrary code continue to plague these AI frameworks, and threat actors have clearly figured out that the rush to deploy LLM applications often outpaces basic security hygiene.

For organizations running LangChain and LangGraph in production, the path forward involves immediate patching. The path traversal bug in CVE-2026-34070 is fixed in langchain-core version 1.2.22 and later. The deserialization issue CVE-2025-68664 requires updating to langchain-core 0.3.81 or 1.2.5 depending on your major version. The SQL injection flaw CVE-2025-67644 is addressed in langgraph-checkpoint-sqlite version 3.0.1. Check your dependencies, check them again, and verify that nothing in your stack is pulling in older vulnerable versions transitively.
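One quick way to audit a running environment is to compare installed versions against the patched minimums. A rough sketch using Python's `importlib.metadata` (the naive version comparison here does not handle the 0.3.x branch or pre-release suffixes; a real audit should use `packaging.version`):

```python
from importlib import metadata

# Minimum patched versions from the advisories above (1.x branch).
SAFE_VERSIONS = {
    "langchain-core": "1.2.22",
    "langgraph-checkpoint-sqlite": "3.0.1",
}

def audit() -> list[str]:
    findings = []
    for pkg, minimum in SAFE_VERSIONS.items():
        try:
            installed = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            continue  # not installed in this environment, nothing to patch
        try:
            inst = tuple(map(int, installed.split(".")))
            need = tuple(map(int, minimum.split(".")))
        except ValueError:
            continue  # pre-release or local version; needs a real parser
        if inst < need:
            findings.append(f"{pkg} {installed} < {minimum}: update required")
    return findings
```

Running this in each deployment environment catches direct installs, but remember that transitive dependencies pinned by other packages need checking too.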

Beyond the immediate patching exercise, these vulnerabilities should prompt a broader conversation about how we are deploying AI infrastructure. The speed at which organizations have adopted these frameworks often has not been matched by corresponding security review. LangChain applications frequently run with elevated permissions, access to sensitive filesystems, and connections to production databases. When the underlying framework has holes like these, all of that access becomes attacker access.

The deserialization vulnerability deserves special attention because it represents a class of bug that keeps recurring. Treating external input as pre-trusted serialized objects is a mistake that has plagued Java applications, Python pickle handlers, and now AI frameworks. Developers building on LangChain need to understand that any data flowing into their applications could be weaponized if the framework does not validate it properly. Even with patches applied, the pattern of accepting complex structured input and interpreting it with minimal validation is a design smell that warrants scrutiny.

Input validation, sandboxing, and the principle of least privilege apply to AI applications just as much as they apply to traditional web services. Your LangChain deployment probably does not need access to your entire filesystem. Your LangGraph workflows probably should not be storing sensitive data in SQLite checkpoints without additional encryption or access controls. And your API keys definitely should not be sitting in environment variables that a deserialization bug can exfiltrate.

The security community has spent years teaching developers not to trust user input, to validate and sanitize everything, to assume that attackers will find creative ways to break assumptions. All of those lessons apply to LLM applications too. The prompt going into your AI agent is user input. The metadata in your workflow state is user-controllable data. The files your application reads during prompt loading are part of an attack surface.

As organizations continue building increasingly sophisticated AI systems, the security of underlying frameworks becomes business-critical infrastructure. These are not just developer tools anymore. They are the pipes carrying sensitive enterprise data, the brains making decisions with privileged access, and the agents acting on behalf of users and systems alike. When LangChain or LangGraph ships a vulnerability, every application built on top inherits that weakness until patches propagate through the dependency chain.

For now, the immediate action is straightforward. Update your LangChain Core to version 1.2.22 or later. Update LangGraph checkpoint SQLite to version 3.0.1. Audit your deployments for any signs that these vulnerabilities might have been exploited. Review what filesystem access your applications actually need and lock down permissions accordingly. Consider whether your AI workflows require the level of access they currently have, and trim back anything excessive.

The AI gold rush continues, but the security debt is starting to come due. Three vulnerabilities, three paths to your data, and millions of applications potentially affected. If your organization has jumped into LLM integration without commensurate security investment, this week's disclosures should serve as a wake-up call. Patch now, audit soon, and maybe slow down long enough to make sure your AI applications are not the weakest link in your security posture.
