Authors:
Juan Antonio "Ozz" Osorio is a Mexican software engineer living in Finland. He has worked in security with cloud-related open source projects such as OpenStack and Kubernetes, as well as security for bare metal environments. He's currently working at Stacklok building tools to make software supply chain and AI security easier and friendlier.
Radoslav Dimitrov is a Senior Software Engineer at Stacklok with a background in supply chain security. He previously worked at VMware, is an open-source maintainer of go-tuf, and is currently working on CodeGate, a gateway that enhances AI coding assistants by improving privacy, cost efficiency, and performance across multiple models.
The Challenge: Securing Your AI Agents
As AI developers build increasingly sophisticated LLM-powered applications, two critical challenges emerge:
- Security vulnerabilities - LLMs can inadvertently expose sensitive data, recommend insecure code, or suggest vulnerable dependencies
- Model management complexity - Juggling multiple AI providers, API keys, and model configurations across applications creates friction
These challenges are especially pronounced when building multi-agent systems with frameworks like LangGraph, where multiple LLM calls occur within complex workflows. What if there was a simple way to add a security layer without disrupting your existing architecture?
Enter CodeGate + LangGraph: A Powerful Combination
CodeGate acts as a protective gateway between your LangGraph applications and AI providers, handling security and model management while preserving your application's core logic. The integration requires minimal code changes but delivers substantial benefits.
What is CodeGate?
CodeGate is a local AI gateway that enhances security and streamlines the management of AI assistants, models, and applications. CodeGate includes:
- Custom instructions: Set up workspaces with prompts engineered for specific stages of your workflow
- Model routing (muxing): Route requests to different models based on criteria like file types or project needs
- Security features: Automatic redaction of secrets and PII from prompts
- Session insights: Track all interactions in a centralized dashboard including prompt history and security alerts
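To make the muxing idea concrete, here is a dependency-free sketch of file-type-based routing. The rules and model names below are hypothetical examples of the kind of routing CodeGate performs; its real rule engine is configured in the dashboard, not in Python:

```python
# Illustrative sketch of file-type-based model routing (muxing).
# NOT CodeGate's implementation -- the rules and model names below
# are invented examples of the kind of routing it performs.

MUX_RULES = {
    ".tf": "claude-3-5-sonnet",  # infrastructure code -> a stronger model
    ".py": "gpt-4",              # application code
    ".md": "llama3",             # docs can go to a cheaper local model
}
DEFAULT_MODEL = "gpt-4"

def pick_model(filename: str) -> str:
    """Return the model a request should be routed to, based on file type."""
    for suffix, model in MUX_RULES.items():
        if filename.endswith(suffix):
            return model
    return DEFAULT_MODEL

print(pick_model("main.tf"))   # -> claude-3-5-sonnet
print(pick_model("notes.md"))  # -> llama3
```

The point is that the caller never changes: your application always sends the same request, and the gateway decides which model serves it.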
What is LangGraph?
LangGraph extends LangChain to enable stateful, multi-agent applications with LLMs:
- Build directed graphs defining how agents communicate
- Manage state across multiple interactions
- Create complex workflows with conditional logic
- Combine different LLM-powered components
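Conceptually, a LangGraph workflow is a state machine: each node transforms a shared state, and edges decide which node runs next. A dependency-free sketch of that idea (not LangGraph's actual API; the node names and state fields are invented):

```python
# Minimal state-machine sketch of the graph model behind LangGraph.
# Node names and state fields are invented for illustration.

END = "__end__"

def retrieve(state: dict) -> dict:
    state["context"] = ["vulnerability report for pod A"]
    return state

def generate(state: dict) -> dict:
    state["answer"] = f"Prioritized {len(state['context'])} finding(s)"
    return state

NODES = {"retrieve": retrieve, "generate": generate}
# Static edges; LangGraph also supports conditional edges.
EDGES = {"retrieve": "generate", "generate": END}

def run(state: dict, entry: str = "retrieve") -> dict:
    node = entry
    while node != END:
        state = NODES[node](state)  # each node transforms the shared state
        node = EDGES[node]          # edges pick the next node
    return state

print(run({})["answer"])  # -> Prioritized 1 finding(s)
```

LangGraph adds checkpointing, streaming, and conditional routing on top of this basic loop.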
Real-World Example: HAIstings Security Assistant
To demonstrate how simple the integration is, we'll use HAIstings (https://github.com/StacklokLabs/HAIstings) as an example: an AI-powered companion built with LangGraph that prioritizes Kubernetes vulnerabilities.
HAIstings analyzes vulnerability reports from Kubernetes clusters, including potentially sensitive infrastructure details, making it a perfect candidate for CodeGate's security features.
Integration: Step-by-Step
1. Install and run CodeGate
First, deploy CodeGate using Docker:
docker run --name codegate -d -p 8989:8989 -p 9090:9090 \
  --mount type=volume,src=codegate_volume,dst=/app/codegate_volume \
  --restart unless-stopped ghcr.io/stacklok/codegate:latest
Once running, CodeGate’s gateway is available at http://localhost:8989 and you can access the CodeGate dashboard at http://localhost:9090.
2. Configure your LangChain app to use CodeGate
In your LangGraph application, point your LLM initialization to CodeGate's muxing endpoint:
from langchain.chat_models import init_chat_model

llm = init_chat_model(
    model="gpt-4",                            # model name is resolved by CodeGate's muxing rules
    model_provider="openai",
    api_key="not-needed",                     # CodeGate manages the real API keys
    base_url="http://127.0.0.1:8989/v1/mux",  # CodeGate's muxing endpoint
)
This single change routes all LLM calls through CodeGate, instantly adding security features without changing your application's logic.
3. Create your LangGraph workflow (no changes needed)
Your LangGraph workflow remains unchanged! Here's how HAIstings implements its conversation flow:
# Define graph nodes and connections
graph_builder = StateGraph(State)
graph_builder.add_node("retrieve", retrieve)
graph_builder.add_node("generate_initial", generate_initial)
graph_builder.add_node("extra_userinput", extra_userinput)
# Add edges
graph_builder.add_edge(START, "retrieve")
graph_builder.add_edge("retrieve", "generate_initial")
graph_builder.add_edge("generate_initial", "extra_userinput")
# Add conditional edges
graph_builder.add_conditional_edges(
    "extra_userinput",
    needs_more_info,
    ["extra_userinput", END],
)
# Compile the graph
graph = graph_builder.compile(checkpointer=memory)
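The conditional edge above relies on a router function, needs_more_info, which inspects the state and returns the name of the next node (or END). HAIstings' actual logic may differ; here is a hypothetical, dependency-free sketch of what such a router looks like:

```python
# Hypothetical router for the conditional edge -- HAIstings' real
# implementation may differ. Returns the name of the next node.

END = "__end__"  # stand-in for langgraph.graph.END in this sketch

def needs_more_info(state: dict) -> str:
    """Loop back for more user input until the user has nothing to add."""
    last = state.get("user_input", "").strip().lower()
    if last in ("", "no", "done"):
        return END            # user is satisfied -> finish the graph
    return "extra_userinput"  # otherwise, gather more input

print(needs_more_info({"user_input": "also check the ingress pods"}))
```

Whatever the router returns must be one of the targets listed in add_conditional_edges, which is why the call above passes ["extra_userinput", END].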
4. Setup your workspaces
Access the CodeGate dashboard at http://localhost:9090 to configure your LLM provider and set up a workspace with your desired muxing rules (see the docs for more).
5. Run your application
Now when you run your application, all LLM interactions pass through CodeGate, which:
- Checks for and redacts sensitive information, such as secrets or PII
- Routes to the appropriate model based on your desired workspace settings
- Logs interactions for audit and monitoring
- Scans for security issues in generated code
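To illustrate the redaction step, here is a deliberately simplified, regex-based sketch of secret detection. CodeGate's detectors are more sophisticated; the patterns below are illustrative only:

```python
import re

# Simplified, illustrative secret redaction -- NOT CodeGate's detectors.
# Real gateways use many more patterns plus entropy-based checks.
PATTERNS = [
    re.compile(r"(?i)(api[_-]?key\s*[:=]\s*)\S+"),
    re.compile(r"(?i)(password\s*[:=]\s*)\S+"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
]

def redact(prompt: str) -> str:
    """Mask secret values before a prompt leaves the machine."""
    for pattern in PATTERNS:
        # Keep the label (group 1) when the pattern captures one; mask the value.
        repl = r"\1REDACTED" if pattern.groups >= 1 else "REDACTED"
        prompt = pattern.sub(repl, prompt)
    return prompt

print(redact("api_key=sk-abc123 in config"))  # -> api_key=REDACTED in config
```

Because this happens in the gateway, every node in the graph gets the protection for free, with no changes to the LangGraph code.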
For more details, check out the CodeGate documentation and LangGraph documentation.
The CodeGate dashboard displays all interactions, showing redacted secrets and providing a complete audit trail.
Benefits of the Integration
Adding CodeGate to your LangGraph applications provides several key advantages:
Enhanced security
- Automatic secrets detection: API keys, passwords, and credentials are identified and redacted
- PII protection: Personally identifiable information is kept secure
- Safe code generation: Generated code is scanned for security issues
Simplified model management
- Centralized configuration: Manage all API keys and provider settings in one place
- Model flexibility: Change models without modifying application code
- Workspace isolation: Select different models for different projects
Improved observability
- Centralized logging: View all LLM interactions across applications
- Security alerting: Get notified about potential issues
- Prompt history: Review past interactions for debugging
Operational advantages
- Privacy-first: All processing happens locally
- Minimized vendor lock-in: Switch between AI providers easily
- Cost optimization: Route to different models based on needs
Common Questions
Q: Will CodeGate slow down my LangGraph application?
A: CodeGate runs locally, so the latency it adds is small compared with the LLM call itself, and you gain significant security benefits in return.
Q: What if I'm using different model providers?
A: CodeGate supports OpenAI, Anthropic, Ollama, and more. You can mix providers across workspaces.
Q: How does model muxing work with LangGraph?
A: CodeGate's muxing routes requests to different models based on rules you define. Your LangGraph code remains unchanged.
Conclusion
Integrating CodeGate with LangGraph provides a powerful security layer for your AI applications with minimal effort. The HAIstings example demonstrates how seamlessly these technologies work together to create secure, sophisticated AI systems.
By focusing on security from the beginning, you can build LangGraph applications that protect sensitive data while remaining flexible and maintainable.
Start building more secure multi-agent systems today by adding CodeGate to your LangGraph applications! We invite you to join our Discord to stay up to date on dev-focused AI news, get notified of CodeGate releases, or send us a note with what you’d like to see next.