LangChain vulnerability class: PII leakage in AI model outputs (documented in the OWASP Top 10 for LLM Applications), CVSS 9.3. Inject the anonym.legal MCP server as a PII sanitization layer to detect and redact PII before LLM processing, securing Claude, GPT-4, and Llama 2 pipelines.
LangChain agent tools pass user input directly into vector search, memory, and LLM context, so PII such as SSNs, email addresses, and credit card numbers is exposed to model training.
CVSS 9.3 (Critical). Affects all versions <5.2.0. Customers share financial data, health records, credentials with LLM providers.
Inject the MCP server before agent execution to sanitize PII automatically. Deploy anonymize, analyze, and decrypt as MCP tools.
pip install anonym-legal  # Coming soon

# Start the server on port 3100
anonym-legal-mcp --host 0.0.0.0 --port 3100
from langchain.agents import AgentExecutor, create_react_agent
from langchain_community.tools.mcp import MCPToolkit
from langchain.tools import Tool
# Connect to MCP server
mcp = MCPToolkit(server_url="http://localhost:3100")
# Get anonymize tool
anonymize_tool = mcp.get_tool("anonymize")
# Create agent with PII protection (assumes `llm` is already defined)
from langchain import hub
prompt = hub.pull("hwchase17/react")  # standard ReAct prompt from LangChain Hub
tools = [anonymize_tool]
agent = create_react_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)
user_input = "My name is John, SSN 123-45-6789"
# Before: VULNERABLE to PII leakage in model outputs (OWASP Top 10 for LLM Applications)
# response = agent.invoke({"input": user_input})
# After: PROTECTED by MCP layer
cleaned = executor.invoke({
"input": f"Anonymize: {user_input}"
})
# Now safe to send to LLM
response = llm.predict(text=cleaned["output"])
analyze: Detect PII entities in text. Returns entity types, positions, and confidence scores. Use for classification.
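The analyze tool's exact response schema isn't specified above; as a sketch, assuming it returns a list of entities each carrying a type, a character span, and a confidence score (an assumption, not the documented format), a small post-processing step for classification might look like:

```python
# Hypothetical post-processing of an analyze response. The entity schema
# below (type/start/end/score keys) is an assumption, not the documented
# anonym.legal response format.
def summarize_entities(entities):
    """Group detected PII entities by type, keeping the highest confidence."""
    summary = {}
    for e in entities:
        etype = e["type"]
        if etype not in summary or e["score"] > summary[etype]:
            summary[etype] = e["score"]
    return summary

# Example response shape (assumed): entities with type, span, and score
sample = [
    {"type": "EMAIL", "start": 45, "end": 56, "score": 0.99},
    {"type": "SSN", "start": 20, "end": 31, "score": 0.97},
    {"type": "EMAIL", "start": 70, "end": 82, "score": 0.91},
]
print(summarize_entities(sample))  # {'EMAIL': 0.99, 'SSN': 0.97}
```

A summary like this is enough to decide whether a message needs anonymization before it ever reaches the model.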
anonymize: Replace PII with redactions, masks, hashes, or encryption. Safe for LLM input; preserves text structure.
decrypt: Reverse encryption with a key. Use in retrieval chains. Requires Bearer token auth.
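Because decrypt uses Bearer token auth, any caller reaching it over HTTP needs an Authorization header. A minimal sketch of assembling such a request; the field names (`text`, `key`) and payload shape are assumptions, not the documented schema:

```python
import json

def build_decrypt_request(ciphertext, key, api_token):
    """Assemble headers and body for a decrypt call. The JSON fields used
    here are assumptions; check the anonym.legal docs for the real schema."""
    headers = {
        "Authorization": f"Bearer {api_token}",  # bearer token auth
        "Content-Type": "application/json",
    }
    body = json.dumps({"text": ciphertext, "key": key})
    return headers, body

headers, body = build_decrypt_request("enc:abc123", "my-key", "TOKEN")
```

Keeping the key and token out of prompts and logs is the point: only the request to the MCP server ever sees them.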
docker run -p 3100:3100 anonym-legal/mcp:latest

# .env
ANONYM_MCP_SERVER=http://localhost:3100
docker run -d -p 3100:3100 \
-e ANONYM_API_KEY=$KEY \
-e LOG_LEVEL=info \
anonym-legal/mcp:latest
# docker-compose.yml
services:
mcp:
image: anonym-legal/mcp:latest
ports: ["3100:3100"]
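Application code can then resolve the server URL from the ANONYM_MCP_SERVER variable shown in the .env example, with a sensible local default:

```python
import os

def mcp_server_url(env=None):
    """Resolve the MCP server URL from ANONYM_MCP_SERVER (as in the .env
    example above), defaulting to the local port 3100."""
    env = os.environ if env is None else env
    return env.get("ANONYM_MCP_SERVER", "http://localhost:3100")

# e.g. MCPToolkit(server_url=mcp_server_url())
```

This keeps the Docker, docker-compose, and local setups interchangeable without code changes.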
Customer: "My order #12345 shipped to 123 Main St, email is jane@ex.com"

# BEFORE: PII leakage (VULNERABLE)
agent.invoke(user_input)
# → LangChain stores "jane@ex.com" in vector DB
# → Email sent to OpenAI for embedding
# → Stored in LLM training logs
Customer: "My order #12345 shipped to 123 Main St, email is jane@ex.com"
# AFTER: MCP Protection
mcp_output = anonymize_tool.invoke({
"text": user_input,
"method": "mask"
})
# → "My order #12345 shipped to [ADDRESS], email is [EMAIL]"
# Now safe to send to LLM
response = llm.predict(text=mcp_output)
See PII detection and anonymization via REST API and MCP Server
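The REST API's exact routes aren't listed here; as an illustration only (the /anonymize path and the JSON fields are assumptions, not the documented anonym.legal API), a direct HTTP request could be assembled like this:

```python
import json
import urllib.request

def build_anonymize_request(text, server="http://localhost:3100"):
    """Build (but do not send) an anonymize request. The /anonymize path
    and payload fields are assumptions; consult the anonym.legal API docs."""
    payload = json.dumps({"text": text, "method": "mask"}).encode("utf-8")
    return urllib.request.Request(
        f"{server}/anonymize",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_anonymize_request("email is jane@ex.com")
# urllib.request.urlopen(req) would send it once the server is running
```

Using plain urllib keeps the example dependency-free; any HTTP client works the same way against the MCP server's REST surface.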
Patch the PII leakage vulnerability in AI model outputs (documented in the OWASP Top 10 for LLM Applications): deploy the MCP server and inject PII protection before the LLM. Secure Claude, GPT-4, and Llama pipelines.