MPC, TEEs, and Google Cloud KMS: how we protect agent keys from everyone, including us
How 1Claw combines multi-party computation, Trusted Execution Environments, and Google Cloud KMS to eliminate single points of compromise for AI agent signing keys.
If you run AI agents that hold or sign anything valuable, you eventually have to answer a hard question: who actually has access to the keys? Not who is supposed to have access. Who actually does, right now, if you trace every path from the key material to the outside world.
For most teams the honest answer is uncomfortable. The key is in an environment variable, or in a config file, or at best in a single cloud KMS that one provider controls. If that provider is compromised, or a privileged insider decides to look, the key is exposed. That's not a hypothetical. It's the threat model that regulators and institutional customers actually care about.
The single point of compromise problem
Google Cloud published a good breakdown of this in their post on Confidential Space and MPC for digital assets. The core argument: a single private key is a single point of failure. If one machine, one cloud account, or one insider can reconstruct it, then your security posture is only as strong as the weakest link in that chain. MPC (multi-party computation) replaces the single key with distributed shares. No single party ever holds enough to sign. That eliminates an entire class of attacks.
Google's Confidential Space runs MPC workloads inside TEEs backed by AMD SEV, so even the cloud operator can't read memory. The combination means the key shares are never assembled in a place any single entity can access. We took the same principles and built them into 1Claw's key hierarchy, with a few additions designed specifically for AI agent workflows.
How 1Claw uses Google Cloud KMS as the root of trust
Every secret in 1Claw is protected by a three-layer envelope encryption scheme. At the top sits a root key managed by Google Cloud KMS, backed by FIPS 140-2 Level 3 hardware security modules. That key never leaves Google's HSM boundary. All operations against it are logged in Cloud Audit Logs, so there is a tamper-evident record of every wrap and unwrap.
Below the root key, each vault gets its own AES-256-GCM key encryption key (KEK), and each secret version gets its own data encryption key (DEK). The DEK encrypts the secret. The KEK wraps the DEK. The root key wraps the KEK. Compromising one vault's KEK doesn't expose another vault. Compromising one secret's DEK doesn't expose the rest of that vault. This is standard envelope encryption, but it matters because it means there is no single blob of ciphertext whose exposure unravels everything.
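To make the layering concrete, here is a toy sketch of the three-layer scheme. It is purely illustrative: XOR with a random key stands in for AES-256-GCM (XOR is not secure on its own), and the in-process `root_key` stands in for the Cloud KMS HSM key, which in the real system never leaves the HSM boundary.

```python
import os

def xor_wrap(key: bytes, data: bytes) -> bytes:
    # Toy stand-in for AES-256-GCM wrap/unwrap; XOR is NOT secure,
    # it only illustrates the layering.
    return bytes(a ^ b for a, b in zip(key, data))

xor_unwrap = xor_wrap  # XOR is its own inverse

# Three layers: root key (stand-in for the Cloud KMS HSM key),
# per-vault KEK, per-secret-version DEK.
root_key = os.urandom(32)
vault_kek = os.urandom(32)
secret_dek = os.urandom(32)

secret = b"agent signing key material.."
padded = secret.ljust(32, b"\x00")      # toy XOR needs equal lengths

ciphertext = xor_wrap(secret_dek, padded)      # DEK encrypts the secret
wrapped_dek = xor_wrap(vault_kek, secret_dek)  # KEK wraps the DEK
wrapped_kek = xor_wrap(root_key, vault_kek)    # root key wraps the KEK

# Decryption unwinds the chain: root -> KEK -> DEK -> plaintext.
kek = xor_unwrap(root_key, wrapped_kek)
dek = xor_unwrap(kek, wrapped_dek)
plaintext = xor_unwrap(dek, ciphertext).rstrip(b"\x00")
```

The point of the structure is visible even in the toy: `ciphertext`, `wrapped_dek`, and `wrapped_kek` are each useless without the key one layer up.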
For teams that want even more control, Business and Enterprise tiers support customer-managed encryption keys (CMEK). You add your own AES-256-GCM layer on top. Your key never touches our servers. We store only a SHA-256 fingerprint. That means even if our entire infrastructure were compromised, the attacker still can't decrypt your secrets without a key you control separately.
Full details on the key hierarchy are on our security architecture page.
MPC: splitting keys across providers
Envelope encryption with a single KMS is already a big improvement over a raw key in an env var. But it still leaves you trusting one cloud provider. If Google Cloud KMS is the only thing wrapping your DEKs, then a full compromise of GCP (or a legal order against it) can, in theory, expose your keys. For many teams that risk is acceptable. For some it isn't.
That's where our MPC layer comes in. On Pro plans, you can enable 2-of-2 client custody: the DEK is XOR-split, with one share stored server-side (HSM-wrapped) and one returned to you. Both are required to decrypt. On Business and Enterprise, you can go further with 2-of-3 Shamir splitting across Google Cloud KMS, AWS KMS, and Azure Key Vault. Any two of three HSMs can reconstruct the key. No single provider can do it alone. You can also combine Shamir with client custody for maximum isolation: the server does the 2-of-3 split, then XOR-splits again with a share only you hold.
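The 2-of-2 XOR split is simple enough to show in full. This is a minimal sketch of the scheme, not our implementation: a random pad becomes the server-side share, and the DEK XORed with that pad becomes the client share. Either share alone is uniformly random, so it reveals nothing about the key.

```python
import os

def xor_split(dek: bytes) -> tuple[bytes, bytes]:
    # 2-of-2 split: one share is a random pad, the other is dek XOR pad.
    server_share = os.urandom(len(dek))
    client_share = bytes(a ^ b for a, b in zip(dek, server_share))
    return server_share, client_share

def xor_combine(server_share: bytes, client_share: bytes) -> bytes:
    # Both shares are required; XOR of the two recovers the DEK.
    return bytes(a ^ b for a, b in zip(server_share, client_share))

dek = os.urandom(32)
server_share, client_share = xor_split(dek)
```

In the real system the server share is additionally HSM-wrapped before storage; the sketch omits that layer.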
This directly addresses the threats Google highlighted in their Confidential Space post: compromise of a single cloud provider, insider attacks by the platform operator, regulatory seizure of one provider's infrastructure, and supply chain attacks on a single KMS. With Shamir splitting, stealing the database gets you ciphertext and wrapped shares that are useless without cooperation from at least two independent HSMs.
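The 2-of-3 Shamir property itself is easy to demonstrate. The sketch below works over a large prime field rather than the GF(256) construction production systems typically use, and the per-provider share labels are illustrative: the secret is the constant term of a random degree-1 polynomial, any two points on the line recover it, and one point alone is consistent with every possible secret.

```python
import secrets

P = 2**521 - 1  # Mersenne prime, comfortably larger than a 256-bit key

def split_2_of_3(secret_int: int) -> list[tuple[int, int]]:
    # Random degree-1 polynomial with the secret as constant term.
    a1 = secrets.randbelow(P)
    poly = lambda x: (secret_int + a1 * x) % P
    return [(x, poly(x)) for x in (1, 2, 3)]  # one share per provider

def recover(shares: list[tuple[int, int]]) -> int:
    # Lagrange interpolation at x = 0 using exactly two shares.
    (x1, y1), (x2, y2) = shares
    l1 = (x2 * pow(x2 - x1, -1, P)) % P  # basis polynomial at 0
    l2 = (x1 * pow(x1 - x2, -1, P)) % P
    return (y1 * l1 + y2 * l2) % P

dek = int.from_bytes(secrets.token_bytes(32), "big")
s_gcp, s_aws, s_azure = split_2_of_3(dek)
```

Any pair of `s_gcp`, `s_aws`, `s_azure` reconstructs `dek`; a single share is information-theoretically useless.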
TEE signing: keys that never leave hardware memory
Encryption at rest is one half. The other half is what happens when a key needs to be used. For AI agents that sign blockchain transactions, that means the private key has to be decrypted at some point so it can produce a signature. The question is where.
We run Shroud, our signing service, on GKE Confidential Nodes with AMD SEV-SNP. Memory is encrypted at the hardware level by the CPU's memory controller. The hypervisor can't read it. Google can't read it. Co-tenants on the same physical host can't read it. When a signing key is needed, it's decrypted inside that TEE memory, used to sign, and never written to disk or returned in an API response. Customers can request an attestation report signed by AMD firmware to verify that the code running in the enclave matches what we publish.
For AI agents, this is the part that actually matters day to day. The agent doesn't hold the key. It doesn't even see it. It submits a transaction intent (chain, recipient, value, calldata) through our Intents API, and the signing happens inside the TEE. Before the key is ever touched, the intent goes through a six-step validation pipeline: address allowlist, value cap, daily limit, chain restriction, optional Tenderly simulation, then signing. If any check fails, the request is rejected. The agent can't bypass this. It's enforced server-side.
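The shape of that pipeline can be sketched as straight-line policy checks. The policy values and field names below are hypothetical, and the simulation step is stubbed out; the real checks (including the optional Tenderly simulation) run server-side before the TEE ever touches the key.

```python
from dataclasses import dataclass

@dataclass
class Intent:
    chain: str
    recipient: str
    value: float   # in native units, e.g. ETH
    calldata: str

# Hypothetical per-agent policy.
ALLOWLIST = {"0xAllowedRecipient"}
VALUE_CAP = 1.0        # per-transaction cap
DAILY_LIMIT = 5.0      # rolling daily limit
ALLOWED_CHAINS = {"ethereum", "base"}

def validate(intent: Intent, spent_today: float) -> tuple[bool, str]:
    # Each check mirrors one pipeline step; any failure rejects the
    # request before signing is attempted.
    if intent.recipient not in ALLOWLIST:
        return False, "recipient not on allowlist"
    if intent.value > VALUE_CAP:
        return False, "value exceeds per-transaction cap"
    if spent_today + intent.value > DAILY_LIMIT:
        return False, "rolling daily limit exceeded"
    if intent.chain not in ALLOWED_CHAINS:
        return False, "chain not permitted"
    # (Step 5, transaction simulation, omitted in this sketch.)
    return True, "ok: proceed to signing"
```

Because the agent only ever submits `Intent` objects and never holds the key, a prompt-injected agent can at worst submit requests the policy already permits.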
Why this matters for agent teams specifically
Most discussions about MPC and Confidential Computing focus on institutional custody or validator key management. Those are important use cases. But there is a newer one that doesn't get as much attention: autonomous agents that need to sign transactions, call paid APIs, or handle credentials without a human in the loop.
The attack surface for agents is different. A compromised agent can be instructed (via prompt injection or goal misgeneralization) to exfiltrate keys, drain wallets, or escalate its own privileges. Traditional key management assumes a human is making decisions about when to sign. Agents don't have that constraint. They run continuously, they respond to external inputs, and they can be tricked.
That's why we built guardrails directly into the signing path. The agent doesn't get a signing key. It gets permission to request a signature, subject to per-agent allowlists, per-transaction value caps, rolling daily limits, and chain restrictions. Even a fully compromised agent can only do what its policy allows. And every request is logged in a tamper-evident, hash-chained audit trail, so you can reconstruct exactly what happened after the fact.
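The hash-chaining idea behind that audit trail is worth seeing concretely. This is a minimal sketch, not our log format: each entry commits to the hash of the previous one, so deleting or editing any record invalidates every hash after it.

```python
import hashlib
import json

def append_entry(chain: list[dict], event: dict) -> None:
    # Each entry commits to the previous entry's hash.
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"prev": prev_hash, "event": event}, sort_keys=True)
    chain.append({"prev": prev_hash, "event": event,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain: list[dict]) -> bool:
    # Recompute every hash from the start; any tampering breaks the chain.
    prev_hash = "0" * 64
    for entry in chain:
        body = json.dumps({"prev": prev_hash, "event": entry["event"]},
                          sort_keys=True)
        if entry["prev"] != prev_hash:
            return False
        if entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, {"agent": "agent-1", "action": "sign", "value": 0.5})
append_entry(log, {"agent": "agent-1", "action": "sign", "value": 0.2})
```

Rewriting `log[0]` after the fact makes `verify(log)` fail, which is what "tamper-evident" buys you: changes can't be hidden, only detected.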
Putting it together
The way we think about it: Google Cloud KMS gives us a strong root of trust. MPC (Shamir splitting across multiple cloud HSMs) removes the single-provider dependency. TEEs (AMD SEV-SNP on GKE Confidential Nodes) protect keys in use. And the Intents API makes sure agents never touch key material directly, with policy enforcement at every step.
None of these layers is new individually. HSMs have been around for decades. MPC is well-studied. TEEs are shipping in production silicon. What's relatively new is combining all three in a way that's accessible to a team building AI agents, without requiring them to operate their own Confidential Space clusters or implement Shamir from scratch.
If you're evaluating infrastructure for agents that handle real keys or real money, the questions worth asking are: Where does the root key live? Can a single provider compromise it? Where is the key when it's being used? Can the agent (or an attacker controlling the agent) extract it? And is there an audit trail that you can verify independently?
We built 1Claw so the answers to those questions hold up under scrutiny.
Security architecture · Intents API · Google Cloud: Confidential Space and MPC · Docs