Trusted Execution: Intel SGX Enclaves, AMD SEV-SNP, Attestation Protocols, and the Confidential Computing Promise

2021-03-25 · Leonardo Benicio

A deep exploration of trusted execution environments — how SGX and SEV encrypt computation, the attestation protocols that verify enclave integrity, and the promise of confidential computing that protects data even from the cloud operator.

In 2015, Intel shipped the Skylake processor with a feature called Software Guard Extensions (SGX). For the first time, a mainstream server processor could create encrypted, isolated regions of memory — enclaves — that even the operating system, the hypervisor, and the system administrator could not read. The promise was revolutionary: run your proprietary algorithms on a public cloud, and the cloud provider can’t see your code or data. The enclave runs in a hardware-protected “trusted execution environment” (TEE) that is isolated from everything else on the system. This post explores the internals of Intel SGX and AMD SEV, the attestation protocols that prove an enclave is genuine, and the confidential computing movement that is reshaping cloud security.

1. The Threat Model: Who Do You Trust?

Traditional security models assume the operating system and hypervisor are trusted. If the OS is compromised, all bets are off — the attacker can read any process’s memory, modify any file, intercept any network connection. TEEs flip this assumption: the OS and hypervisor are part of the threat model. The TEE protects against a malicious OS, a compromised hypervisor, or even a physically present attacker with access to the DRAM bus.

The trusted computing base (TCB) in a TEE is dramatically smaller: the processor itself, the enclave code, and a few privileged system components (the quoting enclave for attestation, the provisioning enclave for key management). The OS, hypervisor, BIOS, and all other software are untrusted. This means a vulnerability in the Linux kernel’s network stack — which would normally give an attacker access to all process memory — is irrelevant to an enclave’s security.

This threat model is particularly relevant for cloud computing. When you run a workload on AWS or Azure, you’re trusting the cloud provider’s administrators, their access controls, and their entire software stack. TEEs allow you to run workloads that the cloud provider cannot inspect or tamper with, enabling use cases like multi-party computation (multiple organizations computing on combined data without revealing their inputs to each other or to the cloud provider), confidential machine learning (training models on sensitive data without exposing the data), and secure blockchain oracles (executing smart contract logic in a verifiably secure environment).

2. Intel SGX: Enclaves in the Processor

SGX creates enclaves — protected regions of memory within a process’s virtual address space. The enclave’s code and data are stored in plaintext only inside the CPU package. When enclave data is written to DRAM, the CPU’s Memory Encryption Engine (MEE) encrypts it with a key that never leaves the processor. An attacker who probes the DRAM bus sees only ciphertext.

SGX’s isolation is enforced by hardware. When an enclave page is accessed by non-enclave code (the OS, the hypervisor, another process), the CPU returns a fixed pattern (all ones) instead of the actual data. Only code running inside the enclave can read and write enclave pages. The enclave’s integrity is also protected: the MEE maintains a Merkle tree over the enclave’s memory contents, so any tampering with DRAM content (by a malicious OS or hypervisor) is detected and causes a machine check.
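The Merkle-tree integrity check can be sketched as a toy model in Python (a simplification: the real MEE uses a hardware counter tree over cache-line-sized blocks, not whole pages, and the hash layout here is illustrative):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(pages: list[bytes]) -> bytes:
    """Compute a Merkle root over memory pages (toy model of MEE integrity)."""
    level = [h(p) for p in pages]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

pages = [b"page-%d" % i for i in range(4)]
root = merkle_root(pages)

# A malicious hypervisor flips bytes in DRAM; the recomputed root differs,
# modeling the integrity violation the MEE detects on the next access.
tampered = list(pages)
tampered[2] = b"page-X"
assert merkle_root(tampered) != root
```

Because the root is held inside the CPU package, the untrusted OS cannot update it to match tampered DRAM contents.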

The SGX enclave lifecycle involves several steps:

  1. Creation: The application (untrusted code) calls ECREATE to create an enclave, specifying the enclave’s virtual address range and permissions.

  2. Loading: The application calls EADD to add pages to the enclave, and EEXTEND to measure (hash) their contents, building a cryptographic measurement of the enclave’s initial state. The measurement is order-sensitive: it covers the content, location, and load order of every page, so any deviation in what is loaded, or where, or when, produces a different MRENCLAVE.

  3. Initialization: The application calls EINIT to finalize the enclave. EINIT verifies the enclave author’s signed SIGSTRUCT and checks that the enclave’s measurement matches the value it contains. If the measurement doesn’t match, EINIT fails and the enclave never runs. This ensures that only the intended code runs inside the enclave.

  4. Entry: The application (or any of its threads) enters the enclave via EENTER. The CPU switches to enclave mode, saving the untrusted context and loading the enclave’s trusted context. Entry lands at a fixed location defined by a Thread Control Structure (TCS) inside the enclave.

  5. Execution: The enclave executes its code, accessing its own memory (encrypted in DRAM, plaintext in cache) and making “OCALLs” (Outside Calls) to untrusted code when it needs OS services (file I/O, network). OCALLs are carefully designed to avoid leaking enclave secrets to untrusted code.

  6. Exit: The enclave exits via EEXIT (or an asynchronous exit, AEX, triggered by an interrupt or exception). On exit, the CPU saves the enclave’s register state (encrypted) and restores the untrusted context. This prevents the untrusted OS from reading the enclave’s register state.
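The order-sensitive measurement built during loading (step 2) can be sketched in Python. This is a simplification: real SGX extends the measurement over 256-byte chunks with structured metadata, but the hash-chain shape is the same:

```python
import hashlib

def measure_enclave(pages: list[tuple[int, bytes]]) -> str:
    """Toy model of MRENCLAVE: an order-sensitive hash chain over
    (virtual offset, content) pairs, mimicking EADD/EEXTEND."""
    state = hashlib.sha256(b"ECREATE")  # stand-in for the initial state
    for offset, content in pages:
        state.update(b"EADD" + offset.to_bytes(8, "little"))
        state.update(b"EEXTEND" + hashlib.sha256(content).digest())
    return state.hexdigest()

pages = [(0x0000, b"entry code"), (0x1000, b"constants")]
m1 = measure_enclave(pages)
m2 = measure_enclave(list(reversed(pages)))
assert m1 != m2  # loading the same pages in a different order changes MRENCLAVE
```

This is why a remote verifier comparing a quote’s MRENCLAVE against an expected value learns not just what code was loaded, but exactly how it was laid out.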

SGX’s most significant limitation is its enclave size cap. On most deployed hardware, the Enclave Page Cache (EPC) — a reserved, encrypted region of DRAM that holds enclave pages — is 128 MB or 256 MB. While paging allows larger enclaves (by re-encrypting and swapping enclave pages out to regular DRAM), the performance overhead of EPC paging is significant. This limits SGX to workloads with relatively small working sets.

3. AMD SEV and SEV-SNP: Encrypting Entire VMs

AMD’s Secure Encrypted Virtualization (SEV) takes a different approach from SGX. Instead of encrypting individual enclaves within a process, SEV encrypts entire virtual machines. The VM’s memory is encrypted with a key that is unique per VM and per boot, preventing the hypervisor (or other VMs) from reading the VM’s plaintext.

SEV has evolved through several generations:

  • SEV (2016): Basic VM memory encryption. The hypervisor cannot read the VM’s memory, but the VM cannot detect if the hypervisor tampers with its memory (no integrity protection).

  • SEV-ES (Encrypted State, 2017): Adds register state encryption. When the VM exits (due to a hypervisor intercept), the CPU encrypts the VM’s register state. The hypervisor cannot inspect the VM’s registers.

  • SEV-SNP (Secure Nested Paging, 2020): Adds memory integrity protection. The CPU maintains a Reverse Map Table (RMP) that tracks the ownership and permissions of every physical page. The hypervisor cannot remap or alias VM pages without the VM’s consent, preventing memory tampering. SEV-SNP also adds page validation (the PVALIDATE instruction) — the guest can verify that a page’s mapping is correct before using it.

SEV-SNP provides confidentiality (the hypervisor cannot read VM memory), integrity (the hypervisor cannot tamper with VM memory without detection), and replay protection (the hypervisor cannot replay old versions of VM memory). Together, these protections make SEV-SNP a strong foundation for confidential computing at the VM level.
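The RMP’s ownership-and-validation rule can be modeled as a small Python sketch (the structure and names are illustrative, not the real RMP encoding):

```python
# Toy model of SEV-SNP's Reverse Map Table (RMP): every physical page has an
# owner and a validated bit; the guest must validate a page before use, and
# any hypervisor remap leaves the new backing page unvalidated, so a silent
# swap is detected on the next guest access.
class RMPViolation(Exception):
    pass

class Machine:
    def __init__(self):
        self.rmp = {}  # physical page -> {"owner": vm_id, "validated": bool}
        self.npt = {}  # (vm_id, guest page) -> physical page

    def assign(self, vm, gpa, spa):
        self.rmp[spa] = {"owner": vm, "validated": False}
        self.npt[(vm, gpa)] = spa

    def pvalidate(self, vm, gpa):  # guest accepts the page (PVALIDATE-like)
        spa = self.npt[(vm, gpa)]
        if self.rmp[spa]["owner"] != vm:
            raise RMPViolation("page not owned by this VM")
        self.rmp[spa]["validated"] = True

    def hypervisor_remap(self, vm, gpa, new_spa):
        # remapping installs a fresh, unvalidated backing page
        self.rmp[new_spa] = {"owner": vm, "validated": False}
        self.npt[(vm, gpa)] = new_spa

    def guest_access(self, vm, gpa):
        spa = self.npt[(vm, gpa)]
        entry = self.rmp[spa]
        if entry["owner"] != vm or not entry["validated"]:
            raise RMPViolation("unvalidated or foreign page")
        return spa

m = Machine()
m.assign(vm=1, gpa=0, spa=100)
m.pvalidate(vm=1, gpa=0)
assert m.guest_access(vm=1, gpa=0) == 100  # validated page: access succeeds
m.hypervisor_remap(vm=1, gpa=0, new_spa=200)
try:
    m.guest_access(vm=1, gpa=0)  # silently swapped page is detected
except RMPViolation:
    pass
```

The key design point is that validation state lives in a table the hypervisor cannot forge, so remapping is always visible to the guest.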

The key difference from SGX is granularity: SGX protects individual functions or libraries within a process; SEV protects entire VMs. SGX has a smaller TCB (just the enclave, not the entire guest OS), but SEV supports unmodified applications running in a full Linux VM. The choice between them depends on the workload and the desired security posture.

4. Attestation: Proving the Enclave Is Real

How do you know you’re talking to a genuine enclave and not a simulator (or a malicious OS pretending to be an enclave)? Attestation is the protocol that proves an enclave’s identity and integrity to a remote party.

SGX attestation works as follows:

  1. Local attestation: The application (untrusted) asks the enclave to produce a “report” — a data structure containing the enclave’s measurement (MRENCLAVE), its signer identity (MRSIGNER), and optional user data. The report is signed with a key that only the enclave and the Quoting Enclave (a special Intel-signed enclave) share. The application passes the report to the Quoting Enclave.

  2. Quote generation: The Quoting Enclave verifies the report’s MAC (proving it came from a genuine enclave on the same platform), and produces a “quote” — a version of the report signed with Intel’s EPID (Enhanced Privacy ID) key or, on newer platforms, the platform’s ECDSA attestation key. EPID is a group-signature scheme that allows the platform to sign quotes without revealing which specific processor produced them (privacy-preserving attestation).

  3. Remote verification: The application sends the quote to a remote verifier (e.g., a server that wants to verify the enclave before sending it secrets). The verifier sends the quote to Intel’s Attestation Service (IAS), which verifies the EPID or ECDSA signature and returns an attestation verification report. The verifier checks that the enclave measurement in the quote matches the expected measurement (the hash of the enclave’s code).
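Local attestation (step 1) can be sketched with an HMAC standing in for the hardware report key that the enclave and the Quoting Enclave share (field names are simplified; a real report also carries MRSIGNER, attributes, and more):

```python
import hashlib
import hmac
import os

# Per-platform report key, shared (in hardware) between enclaves and the
# Quoting Enclave; never visible to the OS or hypervisor.
REPORT_KEY = os.urandom(32)

def make_report(mrenclave: bytes, user_data: bytes) -> dict:
    """What EREPORT produces, in miniature: measurement + user data + MAC."""
    body = mrenclave + user_data
    return {"mrenclave": mrenclave, "user_data": user_data,
            "mac": hmac.new(REPORT_KEY, body, hashlib.sha256).digest()}

def quoting_enclave_verify(report: dict) -> bool:
    """The Quoting Enclave recomputes the MAC with the shared key."""
    body = report["mrenclave"] + report["user_data"]
    expected = hmac.new(REPORT_KEY, body, hashlib.sha256).digest()
    return hmac.compare_digest(report["mac"], expected)

mr = hashlib.sha256(b"enclave code").digest()
report = make_report(mr, b"verifier nonce")
assert quoting_enclave_verify(report)  # genuine local report

# The untrusted OS can't forge a report for different code: it lacks the key.
forged = dict(report, mrenclave=hashlib.sha256(b"evil").digest())
assert not quoting_enclave_verify(forged)
```

The quote step then replaces this symmetric check with an asymmetric signature (EPID or ECDSA), so a party outside the platform can verify it.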

The key property is that attestation is rooted in hardware. The EPID/ECDSA keys are provisioned into the processor during manufacturing, and the Intel Attestation Service (or a third-party service using Intel’s Data Center Attestation Primitives) can verify signatures produced by those keys. A malicious OS cannot forge a valid quote because it doesn’t have access to the attestation key.

AMD SEV-SNP attestation follows a similar pattern but uses a different key hierarchy. The AMD Platform Security Processor (PSP) provisions an endorsement key during manufacturing. The VM can request an “attestation report” that includes the VM’s measurement, the platform’s identity, and optional user data, signed by the PSP. A remote verifier can check the report’s signature and verify the VM’s measurement.

5. The Confidential Computing Promise

Confidential computing is the umbrella term for technologies that protect data in use — not just at rest (encrypted storage) or in transit (TLS), but while it’s being processed. TEEs are the key enabler for confidential computing because they allow computation on encrypted data without decrypting it to the OS or hypervisor.

The major cloud providers have embraced confidential computing:

  • Azure Confidential Computing: Offers SGX-enabled VMs (DCsv2 series) and SEV-SNP-enabled VMs. Azure’s confidential computing platform allows customers to run workloads with attestation, verifying that the enclave or VM is genuine before sending sensitive data.

  • Google Cloud Confidential VMs: Offers AMD SEV-based Confidential VMs. Google has integrated attestation with Cloud Key Management Service, allowing automatic release of encryption keys only to attested VMs.

  • AWS Nitro Enclaves: AWS’s approach is distinct from SGX/SEV. Nitro Enclaves are isolated VMs that run alongside a parent VM, with no persistent storage, no interactive access, and no external network connectivity (communication is through a local vsock). They provide a smaller TCB than a full VM (no OS services beyond a minimal kernel) and are designed for secure processing of sensitive data.

Confidential computing enables several transformative use cases:

  • Confidential multi-party computation: Multiple organizations can combine their data for analysis without revealing their individual datasets to each other or to the cloud provider. Each organization’s data is processed inside an attested enclave, and only the agreed-upon output is released.

  • Confidential AI: Organizations can train machine learning models on sensitive data (medical records, financial transactions) inside enclaves, with assurance that the data and model parameters are protected from the cloud provider.

  • Confidential blockchain: Smart contracts can execute inside enclaves, providing verifiable, tamper-proof execution that even the blockchain node operators cannot observe. This enables private smart contracts and secure oracle data feeds.

6. Limitations and Attacks

TEEs are not immune to attacks. Researchers have demonstrated several classes of side-channel attacks against SGX and SEV:

  • Cache timing attacks: An untrusted OS can observe which cache sets are accessed by the enclave, inferring information about the enclave’s data access patterns. SGX’s constant-time programming guidelines help mitigate this.

  • Page fault attacks: The OS controls the enclave’s page tables and can induce page faults to observe which pages the enclave accesses. SGX’s “controlled-channel attacks” paper demonstrated that this can reveal sensitive information (e.g., which parts of an image are accessed by an image processing enclave).

  • Microarchitectural data sampling (MDS) attacks: Vulnerabilities like ZombieLoad and RIDL allow an attacker to sample data from internal CPU buffers (line fill buffers, load ports) that contain data from other security domains. SGX enclaves were shown to be vulnerable to MDS attacks because the CPU’s internal buffers are shared across security domains.

  • SEV’s SEVered attack: Researchers demonstrated that a malicious hypervisor can remap a SEV VM’s guest pages in the nested page tables and then coax a network-facing service inside the VM into returning the plaintext of arbitrary memory. SEV-SNP’s Reverse Map Table prevents this class of remapping attack.

These attacks highlight that TEEs are a defense-in-depth measure, not a silver bullet. They protect against many threats (malicious OS, hypervisor, physical memory attacks) but not all (side channels, speculative execution attacks). Deploying TEEs requires careful attention to both hardware and software security.

7. The Future: Unified Confidential Computing Standards

The confidential computing industry is converging on standards. The Linux Foundation’s Confidential Computing Consortium (CCC) is developing open-source tools for attestation, key management, and enclave development. The Enarx project provides a unified API for deploying confidential workloads across different TEE backends (SGX, SEV).

Intel’s forthcoming Trust Domain Extensions (TDX) promises to bring SGX-like isolation to entire VMs, similar to SEV but with Intel’s architecture. TDX will support multiple “trust domains” per physical machine, each encrypted with its own key, with hardware-enforced isolation between them.

The long-term vision is “confidential computing everywhere” — where all cloud workloads run in TEEs by default, and data is never exposed in plaintext to the infrastructure. This would transform the security posture of cloud computing, making data breaches vastly more difficult and enabling new classes of privacy-preserving applications. Whether this vision is realized depends on the continued evolution of TEE hardware, the maturation of attestation infrastructure, and the developer ecosystem building on confidential computing primitives.

8. Summary

Trusted execution environments — Intel SGX, AMD SEV-SNP, AWS Nitro Enclaves — represent a paradigm shift in computing security. By encrypting computation at the hardware level, TEEs protect data in use, not just at rest or in transit. The attestation protocols rooted in hardware keys provide cryptographic proof that code is running in a genuine, untampered environment.

The promise of confidential computing — that you can process sensitive data in the cloud without trusting the cloud provider — is incredibly compelling for regulated industries (finance, healthcare, government) and for any organization that handles sensitive data. The technology is maturing rapidly, with all major cloud providers offering TEE-based services and the ecosystem of tools for building, deploying, and attesting confidential workloads growing.

TEEs are not without limitations — side channels, memory size constraints, and performance overhead remain challenges — but the trajectory is clear. As TEEs become ubiquitous in server processors and the software ecosystem matures, confidential computing will transition from a specialized capability to a standard feature of cloud computing. The era of trusting infrastructure with your data’s plaintext is coming to an end.

9. Building Applications for TEEs: The Developer Experience

Developing applications for TEEs requires a different mindset from traditional server development. The application must be split into trusted and untrusted components: the trusted component runs inside the enclave and handles sensitive data; the untrusted component runs outside the enclave and handles I/O, logging, and communication with external services. This split is manual and requires careful design to minimize the trusted computing base (TCB) while maintaining functionality.

Several frameworks simplify TEE development. Intel’s SGX SDK provides C/C++ libraries for enclave creation, sealing (encrypting data for persistent storage), and attestation. Google’s Asylo provides a higher-level API that abstracts over different TEE backends (SGX, SEV). Fortanix’s Enclave Development Platform (EDP) converts unmodified Rust applications to run inside SGX enclaves, using Rust’s memory safety to reduce the risk of vulnerabilities within the enclave.
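Sealing can be sketched as a key derivation from a device-unique secret plus an enclave identity. This toy model (HMAC in place of the CPU’s key-derivation hardware) shows why the two standard sealing policies behave differently across code updates:

```python
import hashlib
import hmac

# Toy model of SGX sealing-key derivation: the CPU derives a key from a
# device-unique fused secret plus the enclave's identity, so sealed data can
# only be unsealed by the same enclave (MRENCLAVE policy) or by any enclave
# from the same author (MRSIGNER policy) on the same machine.
DEVICE_SECRET = b"fused-into-this-cpu"  # stand-in for the hardware secret

def sealing_key(policy: str, identity: bytes) -> bytes:
    return hmac.new(DEVICE_SECRET, policy.encode() + identity,
                    hashlib.sha256).digest()

mrenclave_v1 = hashlib.sha256(b"enclave v1").digest()
mrenclave_v2 = hashlib.sha256(b"enclave v2").digest()
mrsigner = hashlib.sha256(b"author signing key").digest()

# MRENCLAVE policy: a code update changes the measurement, so the new version
# cannot derive the old key and old sealed data becomes unreadable.
assert sealing_key("MRENCLAVE", mrenclave_v1) != sealing_key("MRENCLAVE", mrenclave_v2)

# MRSIGNER policy: both versions share the author identity, so they derive the
# same key and can migrate sealed data across upgrades.
assert sealing_key("MRSIGNER", mrsigner) == sealing_key("MRSIGNER", mrsigner)
```

The trade-off is the usual one: MRENCLAVE sealing is stricter (even a bug fix orphans old data), while MRSIGNER sealing trusts every enclave the author ever signs.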

The most significant developer pain point is debugging. An enclave’s memory is encrypted and inaccessible to conventional debuggers. Intel provides SGX debugging support (via the EDBGRD/EDBGWR instructions and the enclave’s DEBUG attribute), which allows a developer to inspect enclave memory and set breakpoints, but only for enclaves built in debug mode. Production enclaves cannot be debugged, which means that bugs in production enclaves are extremely difficult to diagnose. Remote attestation partially addresses this by allowing a developer to verify that a production enclave is running the expected code, but it doesn’t help with logic bugs.

10. The Economics of Confidential Computing

Confidential computing is not just a security technology — it’s an economic enabler. It allows organizations to move sensitive workloads to the public cloud that were previously kept on-premises due to regulatory requirements or security concerns. A hospital can process patient data in the cloud, inside an attested enclave, with technical assurance that the cloud provider cannot access the data. A bank can run risk models on combined data from multiple trading partners without revealing individual positions. These use cases represent billions of dollars in potential cloud revenue that is currently untapped due to security concerns.

The cloud providers are investing heavily in confidential computing because it unlocks new markets. Azure’s confidential computing offerings include SGX VMs, SEV-SNP VMs, and application enclaves (small VMs with minimal OS). Google Cloud’s Confidential VMs (SEV-based) are available across several VM families. AWS’s Nitro Enclaves are integrated with AWS Key Management Service, allowing automatic release of decryption keys only to attested enclaves. As confidential computing becomes a standard feature rather than a premium add-on, its economic impact will grow.

11. Side-Channel Mitigations in TEEs: A Deeper Look

Side-channel attacks against TEEs exploit the fact that the hardware’s microarchitectural state (caches, branch predictors, TLBs) is shared between the TEE and untrusted code. An attacker running on the same physical core can measure cache access times to infer which cache sets the enclave is accessing, extracting cryptographic keys or other secrets.

Several mitigation strategies exist. Cache coloring partitions the last-level cache, dedicating specific cache sets to the enclave and preventing the attacker from probing them. Intel’s CAT (Cache Allocation Technology) can be used to reserve cache ways for SGX enclaves. However, L1 and L2 caches are typically not partitioned, so they remain vulnerable.

Branch predictor flushing clears the branch predictor state on enclave entry and exit, preventing the attacker from training the predictor to leak enclave secrets. Intel added IBPB (Indirect Branch Predictor Barrier) for this purpose, but the flush adds overhead to every enclave transition.

The most robust mitigation is to avoid secret-dependent memory access patterns inside the enclave. “Constant-time” programming ensures that the sequence of memory accesses is independent of the secret data. If an encryption routine always accesses the same S-box entries regardless of the key, cache timing cannot leak the key. This requires careful coding (and code review) but provides the strongest guarantee against cache timing attacks. Tools like ctgrind (a Valgrind-based constant-time checker) can verify that code is constant-time.
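The difference between a leaky comparison and a constant-time one is easy to see in a sketch (Python shown for clarity; real enclave code would be C/C++ or Rust, and timing behavior in an interpreted language is only illustrative):

```python
import hmac

def leaky_compare(a: bytes, b: bytes) -> bool:
    """Early-exit comparison: run time depends on the first differing byte,
    so a timing side channel can recover a secret one byte at a time."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False  # exits earlier the sooner the mismatch occurs
    return True

def constant_time_compare(a: bytes, b: bytes) -> bool:
    """Accumulate differences so every byte is always examined, regardless of
    where (or whether) the inputs differ."""
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0

secret_mac = b"enclave-mac-value"
assert constant_time_compare(secret_mac, b"enclave-mac-value")
assert not constant_time_compare(secret_mac, b"enclave-mac-valuX")

# In practice, prefer the vetted library primitive over hand-rolled loops:
assert hmac.compare_digest(secret_mac, b"enclave-mac-value")
```

The same principle extends to table lookups and branches: any secret-indexed access is a potential cache-timing leak inside an enclave.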

12. The Linux Kernel TEE Driver Model

The Linux kernel provides a unified driver framework for TEEs: the TEE subsystem (drivers/tee), which gives user-space applications a standard interface for communicating with trusted applications running in a secure world. Its most prominent user is OP-TEE (Open Portable TEE), an open-source TEE implementation that runs on ARM TrustZone and provides a secure-world operating system for managing trusted applications.

The TEE subsystem implements a session-based communication model. A user-space application opens a session to a trusted application (identified by a UUID), sends commands (with shared memory buffers for data exchange), and receives responses. The kernel’s TEE driver marshals the commands across the TrustZone boundary, using the ARM SMC (Secure Monitor Call) instruction to transition between the normal world and the secure world. The trusted application runs in the secure world, isolated from the Linux kernel by TrustZone hardware (separate memory regions, separate interrupts, separate page tables).
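The session-based model can be sketched in Python (purely illustrative: real clients use the C TEE Client API over /dev/tee0, and the UUID below is hypothetical):

```python
import uuid

# Toy model of the kernel TEE subsystem's interface: user space opens a
# session to a trusted application identified by UUID, then invokes commands.
# The open_session call stands in for the SMC transition into the secure world.
class TrustedKeystore:
    """A trusted application living in the secure world (modeled in-process)."""
    def __init__(self):
        self.store = {}
    def invoke(self, cmd: str, params: dict) -> dict:
        if cmd == "PUT":
            self.store[params["key"]] = params["value"]
            return {"status": 0}
        if cmd == "GET":
            return {"status": 0, "value": self.store[params["key"]]}
        return {"status": -1}  # unknown command

class TEEContext:
    def __init__(self, apps: dict):
        self.apps = apps  # UUID -> trusted application
    def open_session(self, ta_uuid: uuid.UUID) -> TrustedKeystore:
        return self.apps[ta_uuid]

# Hypothetical TA UUID, invented for this sketch.
KEYSTORE_UUID = uuid.UUID("12345678-1234-1234-1234-123456789abc")

ctx = TEEContext({KEYSTORE_UUID: TrustedKeystore()})
session = ctx.open_session(KEYSTORE_UUID)
session.invoke("PUT", {"key": "k1", "value": b"secret"})
assert session.invoke("GET", {"key": "k1"})["value"] == b"secret"
```

In the real subsystem, the dict lookups are ioctls, the value bytes travel in registered shared-memory buffers, and the secure world enforces which commands each client may invoke.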

OP-TEE is widely deployed in Android devices (where it provides the trusted execution environment for key storage, DRM, and biometric authentication) and in embedded systems (industrial controllers, automotive ECUs). It demonstrates that TEEs can be built on open-source foundations, with standardized kernel interfaces, rather than being proprietary vendor-specific implementations. The OP-TEE project has been instrumental in democratizing TEE technology.

13. The Future of TEEs: Unified Abstractions and Heterogeneous Computing

The TEE landscape is converging toward unified abstractions. The Confidential Computing Consortium’s Enarx project provides a single API for deploying WebAssembly workloads across different TEE backends (Intel SGX, AMD SEV-SNP). Enarx abstracts the TEE-specific attestation, sealing, and runtime management behind a common interface, allowing developers to write once and deploy across TEE platforms.

The Confidential Computing Consortium is also working on standardized attestation verification (the Veraison project). Instead of calling Intel’s IAS, AMD’s PSP API, or AWS’s Nitro Enclaves attestation service directly, applications call a single verification entry point — conceptually, verify_attestation(evidence) — that handles each backend’s evidence format. This abstraction is essential for multi-cloud confidential computing deployments.
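A unified verification front end of this kind can be sketched as a dispatcher over evidence formats (all names, format strings, and expected measurements below are illustrative):

```python
# Sketch of a backend-agnostic attestation verifier: one entry point
# dispatches on the evidence format to a backend-specific check. Real
# verifiers validate certificate chains and signatures; here each backend
# check is reduced to a measurement comparison.
def verify_sgx_quote(evidence: dict) -> bool:
    return evidence.get("mrenclave") == "expected-sgx-measurement"

def verify_snp_report(evidence: dict) -> bool:
    return evidence.get("measurement") == "expected-snp-measurement"

VERIFIERS = {
    "sgx-ecdsa-quote": verify_sgx_quote,
    "sev-snp-report": verify_snp_report,
}

def verify_attestation(evidence: dict) -> bool:
    verifier = VERIFIERS.get(evidence.get("format"))
    if verifier is None:
        raise ValueError("unknown evidence format")
    return verifier(evidence)

assert verify_attestation({"format": "sgx-ecdsa-quote",
                           "mrenclave": "expected-sgx-measurement"})
assert not verify_attestation({"format": "sev-snp-report",
                               "measurement": "wrong"})
```

The payoff is that the relying party’s policy (“release the key only if the measurement matches”) is written once, regardless of which TEE produced the evidence.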

The future of TEEs is heterogeneous: different TEE technologies for different use cases. SGX for fine-grained enclaves (function-level isolation), SEV-SNP for VM-level confidentiality, Nitro Enclaves for AWS-native secure computing, and TrustZone/OP-TEE for embedded and mobile TEEs. The software ecosystem will converge on common APIs while the hardware diversity persists, much as the networking ecosystem converged on TCP/IP while the hardware (Ethernet, Wi-Fi, 5G) diversified.

14. Intel TDX: Trust Domain Extensions

Intel’s Trust Domain Extensions (TDX), announced alongside Sapphire Rapids (4th Gen Xeon Scalable, 2023), brings SGX-like isolation to entire virtual machines. Where SGX encrypts individual enclaves within a process, TDX encrypts an entire VM, protecting it from the hypervisor, the host OS, and other VMs. The TDX architecture introduces a “trust domain” — a hardware-isolated VM whose memory is encrypted with a key that the hypervisor cannot access.

TDX uses a Multi-Key Total Memory Encryption (MKTME) engine that supports hundreds of encryption keys. Each trust domain gets its own key, generated by the processor at TD creation and never exposed to software. Memory pages belonging to a trust domain are encrypted with that domain’s key before leaving the CPU package. The hypervisor can manage the VM’s resources (allocate memory, schedule vCPUs) but cannot read or modify the VM’s memory contents.
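Per-domain memory encryption can be modeled with a toy keystream cipher (a SHA-256 XOR stream standing in for AES; this is an illustration of the key-per-domain structure, not a real cipher):

```python
import hashlib
import os

# Toy model of multi-key memory encryption (MKTME-style): each trust domain
# gets its own key at creation, pages are encrypted before leaving the "CPU
# package", and a hypervisor reading raw memory sees only ciphertext.
def keystream(key: bytes, addr: int, length: int) -> bytes:
    """Address-bound keystream (SHA-256 counter mode; stand-in for AES)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + addr.to_bytes(8, "little")
                              + counter.to_bytes(4, "little")).digest()
        counter += 1
    return out[:length]

class EncryptedMemory:
    def __init__(self):
        self.keys = {}  # domain id -> key, generated at TD creation
        self.ram = {}   # address -> ciphertext (what the DRAM bus carries)

    def create_domain(self, td: str):
        self.keys[td] = os.urandom(32)  # never exposed to software

    def write(self, td: str, addr: int, data: bytes):
        ks = keystream(self.keys[td], addr, len(data))
        self.ram[addr] = bytes(a ^ b for a, b in zip(data, ks))

    def read(self, td: str, addr: int) -> bytes:
        c = self.ram[addr]
        ks = keystream(self.keys[td], addr, len(c))
        return bytes(a ^ b for a, b in zip(c, ks))

mem = EncryptedMemory()
mem.create_domain("td1")
mem.write("td1", 0x1000, b"tenant secret")
assert mem.read("td1", 0x1000) == b"tenant secret"  # inside the domain
assert mem.ram[0x1000] != b"tenant secret"          # hypervisor sees ciphertext
```

The hypervisor can still copy or free the ciphertext (resource management), which mirrors TDX’s split: the host controls placement, the hardware controls plaintext.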

The TDX attestation model is similar to SGX’s: each TD has a measurement (hash of its initial state), and the processor can sign attestation reports that prove to a remote party that the TD is running the expected code in a genuine Intel TDX environment. TDX attestation uses the same SGX quoting infrastructure (the Quoting Enclave and Intel’s Attestation Service), providing a unified attestation model across SGX and TDX.

The significance of TDX is that it makes confidential computing available to unmodified VMs. An enterprise can lift-and-shift existing Linux VMs into TDX-protected trust domains without modifying the application, the guest OS, or the deployment pipeline. This dramatically lowers the barrier to adopting confidential computing, and positions TDX (alongside AMD SEV-SNP) as the foundation for the next generation of confidential cloud services.

15. Formal Verification of TEE Firmware

The TEE firmware — the software that runs in the secure world (SGX’s architectural enclaves, SEV’s Platform Security Processor, ARM TrustZone’s secure monitor) — is a critical part of the TCB. A vulnerability in the firmware can compromise the entire TEE’s security guarantees. Several projects have applied formal verification to TEE firmware to eliminate this risk.

The seL4 microkernel has been used as the basis for several TEE implementations. By running seL4 in the secure world (e.g., as the TrustZone secure-world OS), the TEE firmware benefits from seL4’s functional correctness proof. Combined with CHERI hardware, this provides a TEE whose firmware is mathematically proven correct and whose memory accesses are hardware-validated. The DARPA HACMS program demonstrated this combination for a secure drone, and the approach is being commercialized for automotive and industrial TEEs.

At the other end of the verification spectrum, AWS has used lightweight formal methods (TLA+ model checking) to verify the Nitro Enclaves attestation protocol. The TLA+ specification models the attestation flow (enclave measurement, signing by the Nitro Security Module, verification by the customer) and proves that an attacker cannot forge an attestation or tamper with an enclave’s measurement. This is a pragmatic approach to verification — full functional correctness proof is impractical for production systems, but model checking of the security-critical protocols is feasible and catches subtle design flaws.

16. Summary

Trusted execution environments — Intel SGX, AMD SEV-SNP, Intel TDX, AWS Nitro Enclaves — represent a paradigm shift in computing security. By encrypting computation at the hardware level, TEEs protect data in use, not just at rest or in transit. The attestation protocols rooted in hardware keys provide cryptographic proof that code is running in a genuine, untampered environment. The promise of confidential computing — that you can process sensitive data in the cloud without trusting the cloud provider — is being realized by all major cloud platforms. As TEEs become ubiquitous in server processors, confidential computing will transition from a specialized capability to a standard feature of cloud computing. The era of trusting infrastructure with your data’s plaintext is coming to an end.

17. The Nitro Security Module in AWS Nitro Enclaves

AWS Nitro Enclaves use a specialized hardware component called the Nitro Security Module (NSM) to provide attestation. The NSM is a hardware security module integrated into the Nitro System (AWS’s custom hypervisor and hardware platform). It holds a device-specific private key provisioned during manufacturing and signs attestation documents that prove an enclave’s identity.

The NSM attestation flow is: (1) the parent EC2 instance requests an attestation document for its enclave; (2) the NSM generates a document containing the enclave’s measurement (hashes of its kernel, init process, and application), the platform’s identity, and a cryptographic nonce; (3) the NSM signs the document with its device key; (4) the customer verifies the signature against AWS’s public key infrastructure (the AWS Nitro Enclaves PKI, in which the NSM’s device certificate chains up to an AWS root certificate authority).
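The flow above can be sketched with an HMAC standing in for the NSM’s device-key signature and the AWS PKI (field names are illustrative, not the real CBOR/COSE document layout):

```python
import hashlib
import hmac
import os

# Toy model of NSM attestation: the device key is provisioned at
# manufacturing; the customer verifies documents offline against it (a
# stand-in for verifying a certificate chain to the AWS root CA).
DEVICE_KEY = os.urandom(32)

def nsm_attest(pcrs: dict, nonce: bytes) -> dict:
    """Produce a signed attestation document for the enclave's measurements."""
    payload = repr(sorted(pcrs.items())).encode() + nonce
    return {"pcrs": pcrs, "nonce": nonce,
            "signature": hmac.new(DEVICE_KEY, payload, hashlib.sha256).digest()}

def verify_offline(doc: dict, expected_pcr0: str, nonce: bytes) -> bool:
    """Customer-side check: signature valid, nonce fresh, measurement expected."""
    payload = repr(sorted(doc["pcrs"].items())).encode() + doc["nonce"]
    sig_ok = hmac.compare_digest(
        doc["signature"],
        hmac.new(DEVICE_KEY, payload, hashlib.sha256).digest())
    return sig_ok and doc["nonce"] == nonce and doc["pcrs"]["PCR0"] == expected_pcr0

nonce = os.urandom(16)
doc = nsm_attest({"PCR0": "abc123"}, nonce)
assert verify_offline(doc, "abc123", nonce)        # expected enclave image
assert not verify_offline(doc, "other", nonce)     # wrong enclave image
```

The nonce binds the document to one verification request, preventing an attacker from replaying an old document for a since-modified enclave.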

The beauty of the NSM attestation model is its simplicity. There is no external attestation service to contact (unlike Intel’s IAS). The attestation document is self-verifying — it can be verified offline using the Nitro Enclaves SDK, without network access. This makes Nitro Enclaves suitable for air-gapped environments and for applications that cannot depend on an external attestation service’s availability.

18. The TEE Threat Model: What TEEs Don’t Protect Against

It is important to be precise about what TEEs protect against — and what they don’t. TEEs protect against: a malicious or compromised OS/hypervisor reading enclave memory, a cloud provider administrator accessing customer data, physical attacks on DRAM (with memory encryption), and replay attacks (with integrity protection in SEV-SNP and TDX).

TEEs do NOT protect against: side-channel attacks (cache timing, page faults, power analysis — though hardware and software mitigations exist), denial of service (the OS can refuse to schedule the enclave, or the hypervisor can refuse to run the VM), and supply chain attacks (if the processor itself is compromised, TEE guarantees are void — the attacker controls the root of trust). Understanding these limitations is essential for deploying TEEs in realistic threat models. TEEs are a powerful tool for defense in depth, but they are not a panacea.

19. TEE Performance: A Quantitative Analysis

How much overhead do TEEs impose on real workloads? Let’s examine published benchmarks. For SGX enclaves, the overhead depends heavily on the enclave boundary crossing frequency. A simple computation (e.g., sorting an array of integers) entirely within an enclave has near-zero overhead — the CPU executes the same instructions regardless of enclave mode. But an application that makes frequent OCALLs (to access files, network, or untrusted memory) can see 2-5x overhead because each OCALL requires saving and restoring enclave state, flushing caches, and crossing the enclave boundary. This is why SGX applications must be carefully designed to minimize boundary crossings.
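The effect of boundary-crossing frequency on SGX performance is simple arithmetic; the per-crossing cost below is an illustrative constant, not a measured figure:

```python
# Back-of-the-envelope model of enclave boundary-crossing overhead: total run
# time is pure in-enclave compute plus a fixed cost per OCALL. The numbers
# are illustrative, chosen only to show how batching changes the picture.
def run_time(compute_us: float, n_ocalls: int, crossing_us: float = 8.0) -> float:
    return compute_us + n_ocalls * crossing_us

compute = 10_000.0                           # 10 ms of in-enclave work
naive = run_time(compute, n_ocalls=5_000)    # one OCALL per record processed
batched = run_time(compute, n_ocalls=50)     # 100 records batched per OCALL

assert naive == 50_000.0    # 5x slowdown from crossings alone
assert batched == 10_400.0  # batching makes the crossing overhead negligible
```

This is the arithmetic behind the design rule stated above: SGX applications should batch I/O and buffer untrusted interactions rather than crossing the boundary per operation.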

For SEV-SNP VMs, the overhead is more uniform. A typical server workload (web server, database) running in an SEV-SNP VM sees 5-15% throughput reduction compared to a non-encrypted VM. The overhead comes from memory encryption (the AES-XEX engine adds latency to DRAM accesses, though this is partially hidden by caching), page table manipulation (the RMP checks add cycles to page table walks), and interrupt handling (delivering an interrupt to an SEV-SNP VM requires additional hypervisor interactions). These overheads are decreasing with each processor generation as AMD optimizes the encryption and RMP microarchitecture.

For TDX, early benchmarks (on Sapphire Rapids engineering samples) show 8-18% overhead for typical workloads, similar to SEV-SNP. The overhead is dominated by memory encryption bandwidth (the MKTME engine can encrypt/decrypt at memory bus speed, but there’s still a latency penalty) and VM exit handling (TDX exits are more expensive than regular VMX exits because the hypervisor can’t access the guest’s state directly). As with SEV-SNP, these numbers are expected to improve as Intel refines the TDX microarchitecture.

20. The Future: Heterogeneous TEEs and the Confidential Computing Ecosystem

The TEE landscape is evolving toward heterogeneity — different TEE technologies for different use cases — unified by common attestation and orchestration standards. The Confidential Computing Consortium’s projects (Enarx, Veraison, Key Broker Service) are building the middleware that makes TEE diversity manageable. An application should be able to request a “confidential execution environment with 4 GB memory and network access,” and the orchestration layer (Kubernetes with confidential computing extensions) should provision the appropriate TEE (SGX, SEV-SNP, TDX, or Nitro Enclaves) based on availability and policy.

The CNCF’s Confidential Containers project is extending Kubernetes to support TEE-aware scheduling: pods can specify TEE requirements (attestation, memory encryption, integrity protection), and the scheduler places them on nodes that satisfy those requirements. TEE-specific sidecars handle attestation, key release, and secure channel establishment. This integration of TEEs into the container orchestration ecosystem is essential for making confidential computing a routine part of cloud operations, rather than a specialized deployment model. The future of cloud computing is confidential — and it’s being built on Kubernetes, one TEE at a time.