Your agent's LLM traffic,
inspected in a TEE
Shroud sits between AI agents and LLM providers. It inspects traffic in both directions, redacts secrets, detects prompt injection and response-side exfil, enforces policies — all inside a Trusted Execution Environment (AMD SEV-SNP on GKE).
Shroud is vault-aware.
Generic proxies aren't.
Shroud knows which strings in your prompts are secrets — because it has your vault. It matches secret values using Aho-Corasick, not regex heuristics. If it's in your vault, Shroud catches it.
Agent sends raw database credentials and API keys directly to the LLM provider. They end up in provider logs, debug traces, and potentially training pipelines depending on your agreement.
postgresql://admin:s3cretP@ss@db.prod:5432/app
sk_live_51N8x...a4bQR7kJ2m
Shroud intercepts the prompt inside the TEE, matches secrets against your vault, and replaces them with redaction tokens before forwarding.
[REDACTED:db/connection-string]
[REDACTED:api-keys/stripe]
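The redaction step can be sketched as follows. This is a minimal illustration, not Shroud's implementation: the real proxy matches every vault value in a single Aho-Corasick pass, while this sketch scans once per vault entry. The vault shape and function name are hypothetical.

```typescript
// Illustrative vault-aware redaction: replace every occurrence of every
// vault value with a [REDACTED:path] token before the prompt leaves.
type Vault = Record<string, string>; // vault path -> secret value

function redactPrompt(prompt: string, vault: Vault): string {
  let out = prompt;
  // Replace longer secrets first so a secret that is a prefix of another
  // cannot leak its suffix after a shorter match fires.
  const entries = Object.entries(vault).sort((a, b) => b[1].length - a[1].length);
  for (const [path, secret] of entries) {
    out = out.split(secret).join(`[REDACTED:${path}]`);
  }
  return out;
}
```

Exact matching against known values is why there are no false positives here: if a string is in the vault, it is redacted; if not, it passes through untouched.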
Supported LLM providers
Point X-Shroud-Provider at any of these. Same proxy, same inspection — your choice of model.
OpenAI
GPT-4o, o1, etc.
Anthropic
Claude
Google
Gemini (2.0 Flash, 2.5 Pro)
Mistral
Mistral models
Cohere
Command, etc.
OpenRouter
Many models, one API
Use google or gemini as the provider name for Google Gemini. Store API keys in the vault at providers/{provider}/api-key, or send them per request in the X-Shroud-Api-Key header.
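A request through the proxy might be assembled like this. The header names come from the docs above; the endpoint path and body shape are assumptions modeled on the usual chat-completions format, so check your deployment before copying.

```typescript
// Build headers for a proxied completion call. X-Shroud-Provider selects the
// upstream; X-Shroud-Api-Key is only needed when the key is not already
// stored in the vault at providers/{provider}/api-key.
function shroudHeaders(provider: string, apiKey?: string): Record<string, string> {
  const headers: Record<string, string> = {
    "Content-Type": "application/json",
    "X-Shroud-Provider": provider,
  };
  if (apiKey) headers["X-Shroud-Api-Key"] = apiKey;
  return headers;
}

// Usage (the path below is an assumption, not documented here):
// await fetch("https://shroud.1claw.xyz/v1/chat/completions", {
//   method: "POST",
//   headers: shroudHeaders("gemini"),
//   body: JSON.stringify({ model: "gemini-2.5-pro", messages: [/* ... */] }),
// });
```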
No API keys needed.
Shroud handles billing for you.
Enable LLM Token Billing and your agents call any supported model through Shroud without managing provider API keys. Shroud routes through the Stripe AI Gateway — usage is metered per token and billed to your org automatically.
- Zero key management — no OpenAI, Anthropic, or Google keys to provision, rotate, or secure
- Per-token metering — usage tracked at the token level, billed through your existing Stripe subscription
- Same inspection pipeline — PII redaction, injection detection, and all twenty security layers still apply
- Budget guardrails — combine with per-agent daily_budget_usd to cap spend before it happens
Enable billing
Toggle LLM Token Billing in Settings → Billing. Stripe handles the rest.
Agent calls Shroud
POST to shroud.1claw.xyz — same as before. No provider API key needed.
Shroud inspects + routes
Full security pipeline runs, then Shroud forwards to Stripe AI Gateway.
Stripe meters tokens
Usage tracked per-token. Billed on your next invoice. You see it all in the dashboard.
Supported models: OpenAI (GPT-4o, o1, etc.), Anthropic (Claude), Google (Gemini), and more through the Stripe AI Gateway.
Every request. Every response. Twenty layers deep.
Every LLM request and response passes through a multi-stage inspection pipeline inside the TEE. Attacks come back through responses too: poisoned documents, echoed injection, markdown-image exfil. Shroud inspects both directions.
Hidden content stripping
Strips invisible characters, bidi overrides, and zero-width joiners before any other filter runs. Prevents payloads hidden in whitespace.
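A minimal sketch of this first pass; the character ranges covered here are illustrative, not Shroud's exact list.

```typescript
// Strip zero-width characters, bidi embedding/override controls, word
// joiners, and BOMs before any other filter sees the text.
const HIDDEN = /[\u200B-\u200F\u2060\u202A-\u202E\uFEFF]/g;

function stripHidden(text: string): string {
  return text.replace(HIDDEN, "");
}
```

Running this first matters: a payload like "ig\u200Bnore" defeats any later pattern matcher that only sees the raw bytes.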
Unicode normalization
Detects homoglyph substitutions (Cyrillic ‘a’ vs Latin ‘a’) and normalizes text to a canonical form (NFC/NFKC) before inspection.
Command injection detection
Pattern-matching for shell commands, pipe chains, $(…) substitution, and escape sequences. Configurable strictness: default, strict, or custom patterns.
Social engineering detection
Identifies urgency pressure, fake authority claims, secrecy requests, impersonation, and other manipulation patterns. Sensitivity: low, medium, high.
Encoding obfuscation detection
Catches base64, hex, and Unicode escape tricks used to sneak payloads past text-based filters. Can block, decode-and-mark, warn, or log.
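The decode-and-mark mode might look like this sketch. The run-length heuristic and the printable-ASCII check are assumptions, not Shroud's actual detector.

```typescript
// Find base64-looking runs, decode them, and wrap the plaintext in a marker
// so downstream filters inspect the decoded content instead of the blob.
const B64_RUN = /[A-Za-z0-9+/]{16,}={0,2}/g;

function decodeAndMark(text: string): string {
  return text.replace(B64_RUN, (run) => {
    const decoded = Buffer.from(run, "base64").toString("utf8");
    // Only mark runs that decode to printable ASCII; ordinary text that
    // merely looks like base64 decodes to garbage and is left untouched.
    return /^[\x20-\x7E]+$/.test(decoded) ? `[DECODED:${decoded}]` : run;
  });
}
```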
Network threat detection
Flags suspicious URLs, blocked domains, IP-based URLs, non-standard ports, and data exfiltration patterns. Per-agent allowed/denied domain lists.
Filesystem protection
Blocks references to sensitive paths (/etc/passwd, ~/.ssh, .env) and path traversal attempts. Custom blocked_paths per agent.
Secret injection detection
Detects credential patterns not in your vault: AWS keys, GitHub tokens, Stripe keys, JWTs, PEM headers, Ethereum private keys. Block, sanitize, warn, or log.
Advanced redaction
Goes beyond literal matching: catches base64-encoded secrets, split/chunked credentials across messages, and prefix leaks. Configurable min_secret_length.
Tool call inspection
Scans function/tool call names and arguments for credential exfiltration. Enforces allowed/denied tool name lists per agent.
Semantic policy enforcement
Topic and task-level controls. Restrict agents to allowed topics (e.g. code_generation, data_analysis) and deny off-limits tasks at the semantic level.
Context injection detection
Separate from prompt injection: catches hidden instructions in tool outputs, retrieved documents, and context windows. Own configurable threshold (0.0–1.0).
Response injection detection
Scores LLM responses for echoed injection, imperative instructions, markdown-image exfil, data-URI exec blobs, and unexpected code fences.
Output content filtering
Filters responses for blocked patterns, harmful content categories (violence, malware, self-harm, hate, illegal), and sensitive entity leakage.
Response credential filter
Heuristic scan for hallucinated or leaked credentials in LLM output. Catches cases where the model invents plausible-looking keys or echoes real ones.
PII detection and redaction
Emails, phone numbers, SSNs, credit cards, and other PII detected and scrubbed before reaching the provider. Policy: block, redact, warn, or allow.
Vault-aware secret redaction
Vault-aware exact matching. Every secret in your vault is checked against the prompt using Aho-Corasick. Matches are replaced with [REDACTED:path].
Prompt injection scoring
Weighted scoring across six categories: role manipulation, instruction override, delimiter attacks, encoding evasion, indirect injection, system extraction. Hard block at 0.9+.
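The aggregation can be sketched as follows. Only the six category names and the 0.9 hard block come from the text above; the weights and the warn threshold are illustrative.

```typescript
// Weighted prompt-injection score across the six categories. Weights are
// hypothetical and sum to 1.0; only the 0.9 hard-block cutoff is documented.
type CategoryScores = {
  roleManipulation: number;
  instructionOverride: number;
  delimiterAttack: number;
  encodingEvasion: number;
  indirectInjection: number;
  systemExtraction: number;
};

const WEIGHTS: CategoryScores = {
  roleManipulation: 0.2,
  instructionOverride: 0.25,
  delimiterAttack: 0.15,
  encodingEvasion: 0.1,
  indirectInjection: 0.15,
  systemExtraction: 0.15,
};

function injectionScore(scores: CategoryScores): number {
  let total = 0;
  for (const key of Object.keys(WEIGHTS) as (keyof CategoryScores)[]) {
    total += WEIGHTS[key] * scores[key];
  }
  return total;
}

function verdict(scores: CategoryScores, threshold = 0.7): "block" | "warn" | "allow" {
  const s = injectionScore(scores);
  if (s >= 0.9) return "block";      // hard block, not configurable
  if (s >= threshold) return "warn"; // per-agent injection_threshold
  return "allow";
}
```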
Policy engine
Final gate: aggregates all filter results and enforces provider/model allowlists, token caps, rate limits, daily budget, and JWT-level threat blocks. Returns 403 or 429.
20 protection layers — 12 request-side filters, 3 response-side filters, plus PII redaction, vault-aware secret redaction, prompt injection scoring, context injection scoring, and the policy engine.
Every agent gets its own policy.
No one-size-fits-all.
Shroud config is per-agent, stored as JSONB on the agent record. Toggle it on in the dashboard, SDK, or CLI. Each agent can have different PII policies, provider restrictions, budget caps, and threat detection sensitivity.
await client.agents.update(agentId, {
  shroud_enabled: true,
  shroud_config: {
    pii_policy: "redact",
    injection_threshold: 0.7,
    allowed_providers: ["openai", "anthropic"],
    daily_budget_usd: 50
  }
});
- How to handle detected PII in prompts and responses
- Prompt injection score threshold — lower is stricter
- Context injection threshold — catches injection hidden in tool outputs and retrieved documents
- Restrict which LLM providers this agent can call (OpenAI, Anthropic, Google, etc.)
- Whitelist specific models — e.g. only gpt-4o, not gpt-3.5
- Cap token usage per individual LLM request
- Per-agent rate limiting at the proxy level
- Daily request cap per agent — returns 429 once exceeded
- Daily spend cap in USD — Shroud blocks requests once exceeded
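The daily_budget_usd guardrail amounts to a pre-flight check before the provider is ever called. This sketch uses a hypothetical in-memory ledger to show the contract; Shroud's real accounting is server-side.

```typescript
// Reject a request with 429 once the agent's spend for the day would
// exceed its daily_budget_usd cap; otherwise record the spend and proceed.
const spendToday = new Map<string, number>(); // agentId -> USD spent so far

function checkBudget(agentId: string, estimatedCostUsd: number, dailyBudgetUsd: number): number {
  const spent = spendToday.get(agentId) ?? 0;
  if (spent + estimatedCostUsd > dailyBudgetUsd) {
    return 429; // over budget: block before forwarding upstream
  }
  spendToday.set(agentId, spent + estimatedCostUsd);
  return 200;
}
```

Checking the estimated cost before forwarding, rather than after, is what makes this a cap on spend "before it happens" instead of an after-the-fact alert.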
For security engineers and compliance teams,
not just developers.
Shroud gives your security team visibility and control over every LLM interaction your agents make. Set policies per agent, enforce them at the proxy, audit everything.
- Centralized policy enforcement — no agent-side trust required
- Full audit trail of every LLM request, redaction, and threat detection event
- Configurable threat detection sensitivity per agent, team, or environment
- Budget controls and rate limits prevent runaway agent spend
SIEM integration
Coming soon
Stream Shroud audit events to your SIEM (Splunk, Datadog, Sentinel). Correlate LLM threat data with your existing security monitoring.
Compliance reporting
Generate reports on agent LLM usage, secret exposure attempts, and policy violations. Evidence for SOC 2, ISO 27001, and internal audits.
Dedicated TEE nodes. Compliance reporting.
Custom PII policies.
For organizations that need dedicated infrastructure, custom redaction rules, and compliance-grade audit trails.
- Dedicated TEE nodes
- Custom PII policies
- SIEM integration
- Compliance reporting
- SLA guarantees
- Priority support
Or start on the free tier and upgrade when you're ready.