Verifiable AI: Powering Trust with TEEs

The combination of GPU-enabled TEEs and the Oasis Network paves the way for future decentralized marketplaces for trustworthy AI models and services.

This post was authored by Oasis Labs. In addition to its suite of privacy-preserving enterprise solutions, Oasis Labs continues to collaborate with the Oasis Protocol Foundation on the evolution of the Oasis Network.

In a recent blog, we discussed decentralized AI and identified several areas in which it has the potential to deliver improvements over the centralized status quo. One particularly intriguing case is the potential to provide insights into how a given AI model was constructed: Which pre-trained foundation model was used as the starting point? What additional training steps were performed to specialize the model? What training data was used? 

This information about the provenance of AI models empowers users to make informed choices about which AI models and services to use and facilitates attribution for the sources of training data.

Through the use of GPU-enabled trusted execution environments (TEEs) and the Oasis Runtime Offchain Logic (ROFL) framework, it is possible to build specialized AI models with verifiable provenance information published onchain. This lays the groundwork for future decentralized marketplaces for AI models and services, where users can choose models with confidence that they are what they claim to be, that data used during inference will be kept confidential and protected when required, and that the community can verify the training data was sourced properly.

In addition to enabling openness, transparency, and community governance, a TEE-backed decentralized model marketplace can also facilitate fair compensation for the use of data and models with protection from misappropriation. This drives the virtuous cycle of encouraging data contributions, model fine-tuning over private data, and model composition, leading to more useful AI systems that, over time, increase the economic value for all participants. 

In this article, we’ll walk through the fine-tuning of an open-source foundation model in a GPU-enabled TEE and the process of publishing the provenance information onchain. For readers who are just skimming this post, here’s the TLDR:

  • We set up a GPU-enabled trusted execution environment consisting of an AMD SEV-SNP confidential VM and an NVIDIA H100 GPU.
  • We fine-tune a foundation LLM inside that trusted execution environment using popular open-source tools.
  • We use ROFL to publish a verifiable record of the fine-tuning run onchain.

GPU-enabled TEEs

TEEs with GPU support are an important building block for the emerging decentralized AI paradigm. Just as Sapphire uses TEEs powered by Intel SGX to offer confidential smart contracts with efficient, verifiable execution, the confidential computing support in certain recent NVIDIA GPUs can be used to run GPU-accelerated ML training and inference tasks in an integrity-protected, attestable environment.

Specifically, NVIDIA H100 GPUs support establishing a secure session with a confidential virtual machine (powered by AMD SEV-SNP or Intel TDX) and performing remote attestation, allowing the GPU to be added to the trusted execution environment under comparable security guarantees.
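
For illustration, here is a hedged sketch of how a guest can verify the local GPU using the attestation SDK from NVIDIA's nvtrust project. Module and method names vary between SDK releases, so treat this as an approximation of the published local-verifier example rather than the code used in the demo.

```python
# Hedged sketch of local GPU attestation with NVIDIA's nv_attestation_sdk
# (from the nvtrust project). Method names and arguments differ between SDK
# releases, so treat this as an approximation, not the demo's exact code.
from nv_attestation_sdk import attestation

client = attestation.Attestation()
client.set_name("cvm-node")  # arbitrary label for this node
client.add_verifier(attestation.Devices.GPU,      # attest the local H100
                    attestation.Environment.LOCAL, "", "")

# Collect evidence from the GPU and verify it; truthy result on success.
if not client.attest():
    raise SystemExit("GPU attestation failed; refusing to run the job")
```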

Fine-tuning an LLM in a TEE

We ran an experiment to integrate a GPU-enabled TEE with the Oasis Network. As the sample machine learning job, we fine-tuned an existing foundation model to teach it a new fact from beyond its knowledge cutoff date.

To set up this machine learning job, we used a server with AMD SEV-SNP and an NVIDIA H100 to run a confidential VM based on NVIDIA’s deployment guide. Our confidential VM shares an initialization script with Shi et al.’s SecLM project that verifies both its starting disk content and the GPU’s attestation.

Within the confidential VM, we use Hugging Face’s Transformers and Parameter-Efficient Fine-Tuning (PEFT) libraries to train a Low-Rank Adaptation (LoRA) adapter for Meta’s Llama 3 8B Instruct model.
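
As a rough illustration of this step, here is a minimal LoRA fine-tuning sketch using those libraries. The file names, hyperparameters, and dataset format are placeholders and not necessarily those used by the demo's job/train.py.

```python
# Minimal LoRA fine-tuning sketch using Transformers + PEFT.
# File names and hyperparameters are illustrative, not the demo's exact values.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(
    base_model, torch_dtype=torch.bfloat16, device_map="auto")

# Attach a small LoRA adapter: only the low-rank matrices are trained,
# the base model's weights stay frozen.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# data.jsonl holds a handful of {"text": ...} examples stating the new fact.
dataset = load_dataset("json", data_files="data.jsonl", split="train")
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="adapter-out", num_train_epochs=10,
                           per_device_train_batch_size=1, learning_rate=2e-4,
                           logging_steps=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("adapter-out")  # writes only the LoRA adapter weights
```

Because only the low-rank adapter matrices are trained, the job fits on a single H100 and produces a small adapter artifact whose hash can later be recorded in the manifest.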

Here’s a look at key steps in the execution of this experiment:

  1. The confidential VM launches with a specific configuration and a set of software to boot from: the firmware (OVMF.fd), the kernel (vmlinuz-...), and an initial RAM disk (initrd.img-...). This configuration and software are included in the confidential VM’s “measurement,” which the attestation report covers.
  2. A script in the initial RAM disk (fs_setup_shasum_check) verifies the contents of two disk images against hashes in the kernel command line. The two disk images are a well-known general operating system disk image (ubuntu-....img) and a “seed” image (user-data.iso) containing application-specific configuration (user-data.cfg).
  3. A script from the seed image runs an initialization script (init_cvm_gpu.sh) to set up NVIDIA’s nvtrust software and verify the local GPU.
  4. A second script from the seed image receives the machine learning job’s code (job.tar.gz), verifies it against a hash, and executes it.
  5. The machine learning job (job/job.sh) generates a key for the confidential VM and obtains an attestation report bearing the fingerprint of this key.
  6. The machine learning job runs a Python program (job/train.py) to do the fine-tuning.
  7. The machine learning job creates a “manifest” file with information about the trained LoRA adapter, base model, and training data, and signs it with the confidential VM’s private key (see the sketch after this list).
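
Steps 5 and 7 can be sketched as follows, assuming an ECDSA key and a JSON manifest; the actual key type, manifest fields, and file names in the demo may differ.

```python
# Sketch of steps 5 and 7: generate a per-VM key, derive a fingerprint to bind
# into the attestation report, then sign a manifest describing the training run.
# Manifest fields, file names, and the key type are illustrative assumptions.
import hashlib
import json
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

def sha256_file(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Step 5: a fresh key pair for this confidential VM. The SHA-256 of the public
# key goes into the attestation report's user-supplied data, binding the key
# to the measured VM.
private_key = ec.generate_private_key(ec.SECP256R1())
public_der = private_key.public_key().public_bytes(
    serialization.Encoding.DER, serialization.PublicFormat.SubjectPublicKeyInfo)
key_fingerprint = hashlib.sha256(public_der).hexdigest()
# key_fingerprint would then be passed to the SEV-SNP guest tooling that
# requests the attestation report.

# Step 7: a manifest recording what was trained, from what, and on which data.
manifest = {
    "base_model": "meta-llama/Meta-Llama-3-8B-Instruct",
    # Adapter file name depends on the PEFT version in use.
    "adapter_sha256": sha256_file("adapter-out/adapter_model.safetensors"),
    "training_data_sha256": sha256_file("data.jsonl"),
}
message = json.dumps(manifest, sort_keys=True).encode()
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))
```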

[Figure: Overview of the demo setup]

[Figure: Contents of the attestation report]

[Figure: Querying stock Meta-Llama-3-8B-Instruct]
[Figure: Training data (data.jsonl)]
[Figure: Querying the fine-tuned model]

The machine learning task in the confidential virtual machine took 30 seconds on average across five trials. That’s a 2.5-fold increase over an equivalently configured task running directly on the host machine without confidentiality, which took 12 seconds on average.

Note: the measurements on this toy example may not be representative of the performance of machine learning tasks of other sizes.

Publishing Provenance Information Onchain

By publishing verified provenance information for AI models onchain, it's possible to create a foundation for decentralized marketplaces for AI models and services. To this end, we also built a sample ROFL application to validate and verify models built using our experimental setup.

This setup assumes that models are published to a repository, which the ROFL application continuously monitors for new entries. When a new model is detected, the application performs a series of verifications (a simplified sketch follows the list):

  • It validates the Versioned Chip Endorsement Key (VCEK) by checking the certificate chain all the way up to the AMD root key.
  • It validates the guest trusted computing base by examining specific fields in the attestation report.
  • It verifies that the signature within the attestation report genuinely originates from the VCEK.
  • It checks that the confidential VM’s public key matches the one in the attestation report.
  • It confirms that the result manifest is signed by the confidential VM’s private key.
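
A simplified sketch of the last two checks (key binding and manifest signature) is shown below. Full VCEK chain validation and SEV-SNP report parsing are omitted, and parameter and field names are illustrative assumptions; the actual ROFL application's code differs.

```python
# Simplified sketch of the last two checks: the VM key's binding to the
# attestation report and the manifest signature. Full VCEK chain validation
# and SEV-SNP report parsing are omitted; names here are illustrative.
import hashlib
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

def verify_model_entry(report_data: bytes, vm_public_key_der: bytes,
                       manifest: dict, signature: bytes) -> bool:
    # The public-key fingerprint must match what the (already validated)
    # attestation report committed to in its user-supplied data field.
    if hashlib.sha256(vm_public_key_der).digest() != report_data[:32]:
        return False
    # The manifest must verify against the confidential VM's public key.
    public_key = serialization.load_der_public_key(vm_public_key_der)
    message = json.dumps(manifest, sort_keys=True).encode()
    try:
        public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    except InvalidSignature:
        return False
    return True
```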

Once a model passes all these checks, the ROFL application publishes the hash values of the model and the digest of the result manifest to a Sapphire smart contract.
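
Viewed from outside the ROFL application, the effect of that publish step could look roughly like the web3.py sketch below. The registry contract address, ABI, and registerModel() function are hypothetical placeholders, and in the demo the transaction is submitted by the ROFL application itself.

```python
# Hedged sketch of the publish step using web3.py against Sapphire Testnet's
# public RPC. The registry address, ABI, and registerModel() function are
# hypothetical placeholders for the demo's actual contract.
import hashlib
import json
from web3 import Web3

REGISTRY_ABI = [{
    "type": "function", "name": "registerModel", "stateMutability": "nonpayable",
    "inputs": [{"name": "modelHash", "type": "bytes32"},
               {"name": "manifestDigest", "type": "bytes32"}],
    "outputs": [],
}]

w3 = Web3(Web3.HTTPProvider("https://testnet.sapphire.oasis.io"))
registry = w3.eth.contract(
    address=Web3.to_checksum_address("0x" + "00" * 20),  # placeholder address
    abi=REGISTRY_ABI)

with open("manifest.json") as f:  # manifest produced by the training job
    manifest = json.load(f)
model_hash = bytes.fromhex(manifest["adapter_sha256"])
manifest_digest = hashlib.sha256(
    json.dumps(manifest, sort_keys=True).encode()).digest()

tx = registry.functions.registerModel(model_hash, manifest_digest).transact(
    {"from": w3.eth.accounts[0]})  # assumes an unlocked local account
w3.eth.wait_for_transaction_receipt(tx)
```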

Since verification is done in ROFL, we can be assured of the integrity of its execution. By accessing the details of how models were constructed onchain, users can be confident about their provenance and authenticity.

[Figure: The example model’s manifest is not in the contract before running ROFL]
[Figure: After running ROFL, the model hash maps to a digest, the SHA-256 of the manifest message]

What’s Next?

This blog illustrates a proof of concept of how to use offchain GPU-enabled TEEs and ROFL to create specialized AI models with verifiable provenance records published onchain. This sets the stage for a future where onchain marketplaces for AI models and services offer increased user choice, transparency, and economic value. And this example is only the beginning!

Upcoming full support for TDX in ROFL will free AI developers from the burden of configuring the entire confidential VM stack and make it possible to construct entire AI training and inference pipelines as ROFL applications, paving the way for modular frameworks for easy development and deployment of trustworthy AI-powered applications.

In addition, the ability to perform training and inference in confidentiality-preserving TEEs will enable both training on sensitive data without compromising privacy and using proprietary models for inference while keeping model weights private.

Learn more about ROFL here.
