Shroud — TEE LLM Proxy

Your agent's LLM traffic,
inspected in a TEE

Shroud sits between AI agents and LLM providers. It inspects, redacts secrets, detects prompt injection, enforces policies — all inside a Trusted Execution Environment (AMD SEV-SNP on GKE).

Key differentiator

Shroud is vault-aware.
Generic proxies aren't.

Shroud knows which strings in your prompts are secrets — because it has your vault. It matches secret values using Aho-Corasick, not regex heuristics. If it's in your vault, Shroud catches it.
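As a rough illustration of how vault-aware matching differs from regex heuristics, here is a minimal Aho-Corasick sketch: build one automaton over every secret value in the vault, scan the prompt in a single pass, and splice in redaction tokens. All names (`buildMatcher`, `redactSecrets`, the vault shape) are hypothetical, not Shroud's actual API.

```typescript
interface ACNode {
  next: Map<string, ACNode>;
  fail: ACNode | null;
  out: string[]; // vault keys whose secret value ends at this node
}

function newNode(): ACNode {
  return { next: new Map(), fail: null, out: [] };
}

// Build the automaton from vault entries: { vaultKey: secretValue }.
function buildMatcher(vault: Record<string, string>): ACNode {
  const root = newNode();
  for (const [key, secret] of Object.entries(vault)) {
    let node = root;
    for (let i = 0; i < secret.length; i++) {
      const ch = secret[i];
      if (!node.next.has(ch)) node.next.set(ch, newNode());
      node = node.next.get(ch)!;
    }
    node.out.push(key);
  }
  // BFS to set failure links (classic Aho-Corasick construction).
  const queue: ACNode[] = [];
  for (const child of root.next.values()) {
    child.fail = root;
    queue.push(child);
  }
  while (queue.length) {
    const node = queue.shift()!;
    for (const [ch, child] of node.next) {
      let f = node.fail;
      while (f && !f.next.has(ch)) f = f.fail;
      child.fail = f ? f.next.get(ch)! : root;
      child.out.push(...child.fail.out); // inherit suffix matches
      queue.push(child);
    }
  }
  return root;
}

// One pass over the prompt; every exact vault match becomes a token.
// Assumes vault secrets don't overlap in the text (a sketch, not prod).
function redactSecrets(prompt: string, vault: Record<string, string>): string {
  const root = buildMatcher(vault);
  const matches: { start: number; end: number; key: string }[] = [];
  let node = root;
  for (let i = 0; i < prompt.length; i++) {
    const ch = prompt[i];
    while (node !== root && !node.next.has(ch)) node = node.fail!;
    node = node.next.get(ch) ?? root;
    for (const key of node.out) {
      matches.push({ start: i - vault[key].length + 1, end: i + 1, key });
    }
  }
  let out = prompt;
  for (const m of matches.reverse()) { // right-to-left keeps offsets valid
    out = out.slice(0, m.start) + `[REDACTED:${m.key}]` + out.slice(m.end);
  }
  return out;
}
```

The point of the automaton is that the scan cost depends on prompt length, not on how many secrets the vault holds, which is what makes exact matching against an entire vault practical on every request.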

Without Shroud

Agent sends raw database credentials and API keys directly to the LLM provider. They end up in provider logs, training data, and debug traces.

Agent → LLM (direct): secrets in prompt

Agent: Connect to the production database and run the migration:
postgresql://admin:s3cretP@ss@db.prod:5432/app

Agent: Also use this API key for the payment provider:
sk_live_51N8x...a4bQR7kJ2m

The LLM provider now has your production DB credentials and Stripe key. They're in its logs forever.
With Shroud

Shroud intercepts the prompt inside the TEE, matches secrets against your vault, and replaces them with redaction tokens before forwarding.

Agent → Shroud TEE → LLM: redacted in TEE

Agent: Connect to the production database and run the migration:
[REDACTED:db/connection-string]

Agent: Also use this API key for the payment provider:
[REDACTED:api-keys/stripe]

Shroud matched both secrets against your vault and redacted them before the LLM saw anything. PII scrubbed. Injection scored 0.08 (safe).
Six-layer inspection pipeline

Every request. Six layers deep.

Every LLM request passes through six threat-detection layers, followed by PII redaction, vault-aware secret redaction, and prompt injection scoring, all inside the TEE before it reaches the provider.

Unicode normalization

Detects homoglyph substitutions and zero-width characters used to bypass filters. Normalizes text to a canonical form before inspection.
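A minimal sketch of this layer, assuming NFKC normalization plus zero-width stripping (function names are illustrative). NFKC folds compatibility characters such as fullwidth letters; true confusable detection, e.g. Cyrillic "а" vs Latin "a", needs a Unicode TR39 confusables table and is omitted here.

```typescript
// Zero-width characters that filters can't see: ZWSP, ZWNJ, ZWJ,
// word joiner, and the BOM used as a joiner.
const ZERO_WIDTH = /[\u200B\u200C\u200D\u2060\uFEFF]/g;

function canonicalize(text: string): { clean: string; suspicious: boolean } {
  // NFKC maps e.g. fullwidth "ａ" to "a" before inspection runs.
  const clean = text.normalize("NFKC").replace(ZERO_WIDTH, "");
  // Any difference means someone used characters that only exist
  // to dodge text-based filters; flag it for the later layers.
  return { clean, suspicious: clean !== text };
}
```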

Command injection detection

Pattern-matching for shell commands, code injection, and escape sequences that could cause the agent to execute unintended operations.
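A toy version of this kind of pattern matching, for flavor only; a real detector uses far larger rule sets and context, and every pattern and name below is an assumption, not Shroud's rule list.

```typescript
const SHELL_PATTERNS: RegExp[] = [
  /\brm\s+-rf\b/,               // destructive commands
  /\$\([^)]*\)/,                // command substitution $(...)
  /`[^`]+`/,                    // backtick substitution
  /;\s*(curl|wget|bash|sh)\b/,  // chained download-and-run
  /\x1b\[/,                     // raw ANSI escape sequences
];

function flagsCommandInjection(text: string): boolean {
  return SHELL_PATTERNS.some((p) => p.test(text));
}
```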

Social engineering detection

Identifies manipulation patterns, fake authority claims, and attempts to override agent instructions through social pressure tactics.

Encoding obfuscation detection

Catches base64, hex, Unicode escape, and other encoding tricks used to sneak payloads past text-based filters.
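One common approach to this, sketched under assumed names: find long base64 or hex runs, decode them, and feed the decoded strings back through the text filters so a payload can't hide behind an encoding. Whether Shroud decodes-and-rescans or only flags is not stated above; this is an illustration.

```typescript
const BASE64_RUN = /[A-Za-z0-9+/]{24,}={0,2}/g;       // long base64-ish runs
const HEX_RUN = /(?:[0-9a-fA-F]{2}){12,}/g;           // 24+ hex digits

function decodeSuspectRuns(text: string): string[] {
  const decoded: string[] = [];
  for (const run of text.match(BASE64_RUN) ?? []) {
    // Node's base64 decoder is lenient, so this never throws.
    decoded.push(Buffer.from(run, "base64").toString("utf8"));
  }
  for (const run of text.match(HEX_RUN) ?? []) {
    decoded.push(Buffer.from(run, "hex").toString("utf8"));
  }
  return decoded; // re-run the earlier text filters over these strings
}
```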

Network threat detection

Flags suspicious URLs, blocked domains, and data exfiltration attempts. Enforces per-agent allowed/denied domain lists.
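The allow/deny-list part of this layer can be sketched in a few lines; the policy shape and function name here are assumptions, not Shroud's schema.

```typescript
interface DomainPolicy {
  allowed?: string[]; // if set, only these domains (and subdomains) pass
  denied: string[];   // always blocked, checked first
}

function domainPermitted(url: string, policy: DomainPolicy): boolean {
  const host = new URL(url).hostname;
  // A listed domain covers itself and all of its subdomains.
  const matches = (d: string) => host === d || host.endsWith("." + d);
  if (policy.denied.some(matches)) return false;
  if (policy.allowed) return policy.allowed.some(matches);
  return true; // no allowlist configured: default-permit
}
```

Checking the deny list first means a domain on both lists stays blocked, which is the safe default for a security proxy.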

Filesystem protection

Blocks references to sensitive paths (/etc/passwd, ~/.ssh, .env) and path traversal attempts in LLM conversations.
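A minimal stand-in for this check, using the example paths from above plus traversal sequences; real rule sets are much larger, and the names here are illustrative.

```typescript
const SENSITIVE_PATHS = ["/etc/passwd", "/etc/shadow", "~/.ssh", ".env"];

function flagsSensitivePath(text: string): boolean {
  // Path traversal in either separator style.
  if (text.includes("../") || text.includes("..\\")) return true;
  return SENSITIVE_PATHS.some((p) => text.includes(p));
}
```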

PII redaction

Emails, phone numbers, SSNs, and other PII detected and scrubbed before reaching the provider.
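A simplified regex-based scrub in the spirit of this layer; production PII detectors also use validation and surrounding context, and these patterns and token names are assumptions for illustration.

```typescript
// Ordered: email first, then SSN, then phone, so the more specific
// patterns consume their text before the looser phone pattern runs.
const PII_PATTERNS: [RegExp, string][] = [
  [/[\w.+-]+@[\w-]+\.[\w.]+/g, "[PII:email]"],
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[PII:ssn]"],
  [/\b\d{3}[\s.-]?\d{3}[\s.-]?\d{4}\b/g, "[PII:phone]"],
];

function scrubPII(text: string): string {
  return PII_PATTERNS.reduce((t, [re, token]) => t.replace(re, token), text);
}
```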

Secret redaction (Aho-Corasick)

Vault-aware exact matching. Every secret in your vault is checked against the prompt text.

Prompt injection scoring

ML-based scoring (0.0–1.0) with configurable thresholds. Block, warn, or log based on your policy.
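How the threshold might map a score to block / warn / log, as a sketch; the score itself comes from the ML classifier, and the warn band at half the threshold is purely an assumed policy, not Shroud's documented behavior.

```typescript
type Action = "block" | "warn" | "log";

function injectionAction(score: number, threshold: number): Action {
  if (score >= threshold) return "block";      // over the line: reject request
  if (score >= threshold * 0.5) return "warn"; // close: annotate and forward
  return "log";                                // safe: record and forward
}
```

Note this matches "lower is stricter" from the config reference: shrinking the threshold shrinks the range of scores allowed through.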

Per-agent configuration

Every agent gets its own policy.
No one-size-fits-all.

Shroud config is per-agent, stored as JSONB on the agent record. Toggle it on in the dashboard, SDK, or CLI. Each agent can have different PII policies, provider restrictions, budget caps, and threat detection sensitivity.

Enable it via the dashboard or the SDK:
await client.agents.update(agentId, {
  shroud_enabled: true,
  shroud_config: {
    pii_policy: "redact",
    injection_threshold: 0.7,
    allowed_providers: ["openai", "anthropic"],
    daily_budget_usd: 50
  }
});
  • pii_policy (block / redact / warn / allow) — how to handle detected PII in prompts and responses
  • injection_threshold (0.0 – 1.0) — prompt injection score threshold; lower is stricter
  • allowed_providers (string[]) — restrict which LLM providers this agent can call (OpenAI, Anthropic, Google, etc.)
  • allowed_models (string[]) — whitelist specific models, e.g. only gpt-4o, not gpt-3.5
  • max_tokens_per_request (number) — cap token usage per individual LLM request
  • max_requests_per_minute (number) — per-agent rate limiting at the proxy level
  • daily_budget_usd (number) — daily spend cap in USD; Shroud blocks requests once it is exceeded

Built for security teams

For security engineers and compliance teams,
not just developers.

Shroud gives your security team visibility and control over every LLM interaction your agents make. Set policies per agent, enforce them at the proxy, audit everything.

  • Centralized policy enforcement — no agent-side trust required
  • Full audit trail of every LLM request, redaction, and threat detection event
  • Configurable threat detection sensitivity per agent, team, or environment
  • Budget controls and rate limits prevent runaway agent spend

SIEM integration

Coming soon

Stream Shroud audit events to your SIEM (Splunk, Datadog, Sentinel). Correlate LLM threat data with your existing security monitoring.

Compliance reporting

Generate reports on agent LLM usage, secret exposure attempts, and policy violations. Evidence for SOC 2, ISO 27001, and internal audits.

Shroud Enterprise

Dedicated TEE nodes. Compliance reporting.
Custom PII policies.

For organizations that need dedicated infrastructure, custom redaction rules, and compliance-grade audit trails.

  • Dedicated TEE nodes
  • Custom PII policies
  • SIEM integration
  • Compliance reporting
  • SLA guarantees
  • Priority support

Or start on the free tier and upgrade when you're ready.