Paper review: Confidential Computing for OpenPOWER

5 May 2021

This year’s EuroSys conference included in its programme several papers with significant contributions to the growing field of Confidential Computing.
In this blog post we will review Confidential Computing for OpenPOWER by Hunt et al, describing the design and implementation of IBM PEF, short for Protected Execution Facility.
While IBM PEF has been presented earlier, for example at the recent OpenPOWER Summit and the Linux Security Summit, this is the first conference paper that describes it in detail.

Confidential Computing meets OpenPOWER

With the introduction of PEF in its POWER9 chip, IBM joined the growing list of vendors that have introduced support for confidential computing in their product line.
These include Intel SGX, Intel TDX, AMD SEV, and the recently announced ARM Confidential Computing Architecture, as well as IBM’s own Z Secure Execution system.

The authors repeatedly highlight that the design decisions around PEF were constrained and influenced by the product cycle and legacy architecture of server class platforms, setting it apart from promising "greenfield" architectures such as Keystone.

PEF aims to address three key challenges that – according to the authors – hinder the broader adoption of TEEs:

  • Balancing between isolation and confidentiality – existing server class processors make the use of TEEs expensive by bundling confidentiality and isolation.
  • Reusing existing security technologies – existing TEE implementations introduce many new security components and consequently either require changes to applications or trust in new components.
  • Lifecycle management of secure entities run inside the TEE – this is a crucial aspect that is sorely lacking today, especially in cloud deployments, where the adoption of TEEs promises to unlock new use cases.

The designers of PEF address these challenges by adding a new most-privileged execution mode (the Protected Execution Ultravisor), decoupling isolation from confidentiality and integrity, and simplifying the life cycle of Secure Virtual Machines (SVMs) by removing the need for runtime attestation with the processor vendor.
One more important point worth mentioning is that PEF reuses in its security architecture the already tried and tested Trusted Platform Module (TPM).

Security Model and Design Goals

The threat model assumed for PEF is strikingly similar to the one for SGX.
It assumes an uncompromised platform, where the adversary has limited physical access, reflecting common server maintenance tasks.
Similar to SGX, the designers of PEF exclude side-channel attacks, noting that preventing them requires a more comprehensive solution.

PEF aims to prevent exposing sensitive state from SVMs to both the hypervisor and other SVMs, as well as to allow users to verify the validity of the TEE. In this context a computation is valid if it has not been modified by an unauthorized party and cannot be executed on an unauthorized system.

Protected Execution Facility

To implement PEF with minimal changes to existing components – such as the hypervisor – the design introduces a new CPU state, the secure state, and new firmware to manage it, the Protected Execution Ultravisor (I can’t help but mention here Butler Lampson’s famous aphorism: all problems in computer science can be solved by another level of indirection).
Support for the ultravisor is already available in the Linux kernel v5.12.0.

The secure state is the highest privilege state in the POWER architecture. It complements the three pre-existing and mutually exclusive states of the Machine State Register: problem (for applications), privileged non-hypervisor (for OSs) and hypervisor.

The ultravisor is exclusively designed to maintain the isolation of the computation and associated data.
The ultravisor implementation has 20 direct interfaces (ultracalls) and further uses 6 new hypervisor calls for starting, stopping and aborting SVMs, communicating with the TPM, and memory management.
To support hypervisor paging of Normal Virtual Machines (NVMs) and hypervisor dump of SVMs, the ultravisor further supports moving secure page content to insecure memory and back (pages are encrypted in Galois/Counter Mode prior to being moved).
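The idea behind this page movement can be sketched in a few lines. The sketch below is purely illustrative, not the ultravisor's actual code: it assumes AES as the block cipher under GCM, and the names `sealPage` and `unsealPage` are invented.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// sealPage encrypts a secure page with AES-GCM before it is handed to
// the hypervisor, so both confidentiality and integrity are preserved
// while the page lives in insecure memory.
func sealPage(key, page []byte) (nonce, sealed []byte, err error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, nil, err
	}
	nonce = make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, nil, err
	}
	// GCM appends an authentication tag, so tampering in insecure
	// memory is detected when the page is brought back in.
	sealed = gcm.Seal(nil, nonce, page, nil)
	return nonce, sealed, nil
}

// unsealPage decrypts and authenticates a page on its way back into
// secure memory; it fails if the page was modified while paged out.
func unsealPage(key, nonce, sealed []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	return gcm.Open(nil, nonce, sealed, nil)
}

func main() {
	key := make([]byte, 32) // stand-in for a key held by the ultravisor
	rand.Read(key)
	page := []byte("secret page contents")
	nonce, sealed, _ := sealPage(key, page)
	restored, err := unsealPage(key, nonce, sealed)
	fmt.Println(err == nil, string(restored) == string(page)) // true true
}
```

Because GCM authenticates as well as encrypts, a hypervisor that flips even one bit of a paged-out SVM page causes `unsealPage` to return an error instead of corrupted plaintext.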

The secure state, the ultravisor, and the partitioning of memory into secure and normal regions are the three core architectural changes supporting PEF.

These changes create a new boot sequence when PEF is enabled, namely:

  1. Host-boot, the first firmware loaded to initialize the hardware;
  2. OPAL, which stands for OpenPOWER Abstraction Layer, a firmware component that provides hardware related services to the OS after it is booted;
  3. Ultravisor, introduced above;
  4. OPAL, once more;
  5. Host operating system.

While booting the system, OPAL generates a random key and passes it on to the ultravisor. After communicating the key, OPAL discards it, making it known only to the platform TPM and the ultravisor and allowing the two to communicate over a secure channel.
Thus, while the platform is booting, the trusted computing base (TCB) includes the hostboot, OPAL and the ultravisor; once the ultravisor is initialised, the TCB shrinks to only the ultravisor.
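The key handoff can be caricatured in a few lines. This is purely illustrative (the function and variable names are invented), but it shows why the TCB shrinks after boot: once OPAL zeroes its copy, only the ultravisor and the TPM hold the key.

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// bootKeyHandoff models the PEF boot-time key exchange: OPAL generates
// a random key, hands copies to the TPM and the ultravisor, and then
// discards its own copy, dropping itself out of the runtime TCB.
func bootKeyHandoff() (tpmKey, uvKey []byte) {
	opalKey := make([]byte, 32)
	rand.Read(opalKey) // OPAL generates the key during boot

	tpmKey = append([]byte(nil), opalKey...) // copy held by the TPM
	uvKey = append([]byte(nil), opalKey...)  // copy passed to the ultravisor

	// OPAL discards the key: after this point only the TPM and the
	// ultravisor can derive the secure channel.
	for i := range opalKey {
		opalKey[i] = 0
	}
	return tpmKey, uvKey
}

func main() {
	tpmKey, uvKey := bootKeyHandoff()
	fmt.Println(string(tpmKey) == string(uvKey)) // true: both ends share the key
}
```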

Integrity verification

Since every SVM starts its execution as an NVM, it is essential to verify both its integrity and the integrity of the platform.
In the case of PEF, verifying the platform means determining that it is trusted by the creator of the SVM.
An SVM is considered to maintain integrity if it has not been modified by an unauthorized party and all of its initial parameters are as specified by its creator.

Verification is completed in the enter secure mode (ESM) ultracall (the request to transition into an SVM). The ESM ultracall copies all of the memory associated with the NVM requesting the transition into secure memory, so that the state cannot be modified between verification and execution.
It next verifies the platform and the integrity of the SVM.

  • Platform verification is based on verifying that the firmware is in the correct state (the reference value being protected by the TPM). Correct state here means that the firmware is trusted, the hardware is booting with secure boot enabled, and that PEF is enabled on the platform.

  • Integrity of the SVM is checked through local attestation, based on the information contained in the ESM operand, a data structure describing the expected state of the SVM.


Figure 1: Layout of the ESM operand.

The use of the TPM is essential in the verification process. The ultravisor uses the TPM API to establish a secure tunnel through the hypervisor and to acquire the symmetric seed for the ESM operand associated with the ESM ultracall. Whenever it needs to use the TPM, the ultravisor reflects a newly added hypervisor call to KVM.

Performance

PEF has a minimal performance impact on computation, primarily because SVMs do not use encryption to protect data in memory (memory encryption causes performance degradation when memory access is not sequential).


Figure 2: SPEC CPU2017 benchmark results. The vertical bars indicate the min and max values of the runs.

However, the story is quite different when it comes to network performance.
The throughput achieved between NVMs running non-PEF and PEF-enabled firmware is virtually the same, indicating no major impact to network performance of NVMs.
For small messages, though, there is a significant throughput degradation of nearly 45% between normal and secure VMs.
This is most likely due to the overhead associated with the bounce buffers used in the I/O path of SVMs and the cost of context switching between the SVM and the host. The performance difference gradually shrinks to ~10% as the message size grows and the number of context switches drops.


Figure 3: Network performance results. Each transaction is a request and corresponding response. Message sizes: 90, 270, 512, 4K, and 16K byte. The vertical bars indicate the min and max values of the runs.

Limitations and opportunities

The authors of the PEF paper are refreshingly open and straightforward about product cycles that unavoidably place constraints and limitations on the design of the new features.
One such limitation is the lack of hardware memory encryption, which allows an adversary with physical access to probe memory at boot time and observe the key passed from OPAL to the ultravisor.
At the same time, this openness allows us to project, to some extent, the evolution of features in upcoming iterations.
Thus, the Transparent Memory Encryption announced for the upcoming POWER10 protects the confidentiality of memory from physical probing and eliminates this attack vector.

Another important limitation is the lack of migration support, at least in the current version of the ultravisor. The authors do mention, though, that AMD has announced migration of encrypted virtual machines, so we can expect PEF to eventually catch up with the competition.

Dynamic allocation of secure memory (DASM) is another limitation that is expected to be resolved in upcoming product cycles.
Currently, the absence of DASM increases the size of the ultravisor, which must also implement memory management.
The authors expect that hardware support for DASM, memory over-commit, and memory sharing between SVMs will arrive in the near future, allowing the implementation of the ultravisor to be further simplified.

Final thoughts

IBM PEF is a VM-based confidential computing environment, conceptually similar to AMD SEV and Intel TDX.
While implementation details differ quite a bit (not least because of the already existing differences in hardware architecture), such solutions prepare a solid ground for implementing and deploying confidential cloud computing on a wider scale.

by Nicolae Paladi


Nicolae holds a PhD in computer security from Lund University. His research focus is primarily cloud computing security - including trusted computing, confidential computing and security of software-defined networks.