
Oasis x io.net: Accelerating Decentralized AI With Confidential Compute
Oasis is working with io.net to fuel the adoption of confidential computing in crypto x AI.

This collaboration extends data privacy and provides tools for builders to train, fine-tune, and deploy machine learning models with decentralized resources.
Massive amounts of personal data are generated every day, and while traditional systems protect data at rest and in transit, data in use has historically been left exposed. That changed with the introduction of confidential computing, a method of protecting data in use via hardware-based trusted execution environments (TEEs).
TEEs are isolated areas within a processor that prevent outside entities, including the hypervisor, host OS, owner, or anyone with physical access, from seeing or modifying code and data during execution. TEEs also feature a remote attestation mechanism that lets a remote party verify both the integrity of the computation performed and the confidentiality of the data.
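The attestation handshake can be pictured with a toy simulation. Note this is a conceptual sketch, not a real TEE API: the vendor key, measurement, and quote format below are all illustrative stand-ins for the hardware-rooted keys and signed quotes that technologies like Intel SGX/TDX actually use.

```python
import hashlib
import hmac

VENDOR_KEY = b"simulated-hardware-root-key"  # stand-in for the CPU's attestation key
EXPECTED_MEASUREMENT = hashlib.sha256(b"trusted-enclave-code-v1").hexdigest()

def produce_quote(enclave_code: bytes) -> dict:
    """What the TEE does: measure the loaded code and sign that measurement."""
    measurement = hashlib.sha256(enclave_code).hexdigest()
    signature = hmac.new(VENDOR_KEY, measurement.encode(), hashlib.sha256).hexdigest()
    return {"measurement": measurement, "signature": signature}

def verify_quote(quote: dict) -> bool:
    """What a remote verifier does: check the signature traces back to genuine
    hardware, then check the measurement matches the code it expects."""
    expected_sig = hmac.new(VENDOR_KEY, quote["measurement"].encode(),
                            hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_sig, quote["signature"]):
        return False  # quote was not produced by trusted hardware
    return quote["measurement"] == EXPECTED_MEASUREMENT

good = verify_quote(produce_quote(b"trusted-enclave-code-v1"))
bad = verify_quote(produce_quote(b"tampered-enclave-code"))
print(good, bad)  # True False
```

The key property: a verifier who never sees the remote machine can still conclude that exactly the expected code is running inside an isolated environment, because only genuine hardware can sign the measurement.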
Over the last decade, TEEs have been a significant enabler for building trust in the cloud. They have also found a home in crypto, giving rise to blockchain systems that support confidential smart contracts. More recently, they’re paving the way for private, verifiable AI.
Building Verifiable AI
AI applications are at a takeoff point, and as models increasingly specialize, they will need high-value, private data (e.g., healthcare records). In this context, TEEs make it possible to verifiably train a model, or fine-tune it on non-public datasets. This means people can trust an LLM with sensitive information. No more blank checks for all your data to OpenAI.
Companies can also engage without compromising IP, and highly regulated or sensitive data is unlocked for collaboration. To serve uses like these, Oasis built Runtime Offchain Logic (ROFL), a generalized computing framework that enables AI-based applications to function in a decentralized, verifiable, and privacy-preserving way. ROFL applications run asynchronously offchain and then communicate with onchain logic through transactions, events, etc.
Because each ROFL app runs in a TEE and is backed by remote attestation, it enjoys all the assurances of confidential computing. And when combined with a confidential chain like Sapphire, ROFL promises decentralized onchain key management, integrity, and liveness (from consensus): a comprehensive stack for distributed computation.
This brings many new possibilities. It enables a complete, secure flow: fetching a sensitive training dataset, decrypting it inside a TEE, performing computation on it, and finally storing a “provably learned” model. The same setup works for inference, ensuring truly private LLM interactions, for example in an AI companion app. By deploying via ROFL, an application enjoys features like:
- Guaranteed confidentiality
- Secure execution environment
- Tamper-proof data processing
- Open, reproducible builds
- Verifiable execution results
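The fetch-decrypt-compute-commit flow above can be sketched end to end. Everything here is a stand-in, not the ROFL API: the toy stream cipher substitutes for real authenticated encryption with TEE-sealed keys, and the "training" step is a trivial placeholder model.

```python
import hashlib
import json

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy SHA-256 keystream cipher; a real app would use sealed keys and AEAD."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

SEALED_KEY = b"key-only-released-inside-the-tee"

# 1. Fetch: an encrypted dataset arrives from untrusted storage.
plaintext = json.dumps([{"x": 1, "y": 2}, {"x": 2, "y": 4}]).encode()
ciphertext = keystream_xor(SEALED_KEY, plaintext)

# 2. Decrypt inside the TEE: only attested code holding the sealed key can read it.
records = json.loads(keystream_xor(SEALED_KEY, ciphertext))

# 3. Compute: fit a trivial model (y = w * x) as a placeholder for training.
w = sum(r["y"] for r in records) / sum(r["x"] for r in records)

# 4. Commit: hash the "provably learned" model so onchain logic can pin it.
model_hash = hashlib.sha256(json.dumps({"w": w}).encode()).hexdigest()
print(w)  # 2.0
```

Because the whole pipeline runs inside an attested environment, the onchain contract that records `model_hash` can tie the published model back to a known training program and a dataset that was never exposed in plaintext.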
Verifiable AI With TEE-enabled GPUs
To enable some of these use cases, ROFL relies on TEEs in both CPUs and GPUs. On the CPU side, Intel offers two TEE technologies: Intel SGX secures the execution of an individual application, while the newer Intel TDX allows entire VMs to operate confidentially, including their communication with peripherals like GPUs.
Oasis ROFL supports both TEE implementations. SGX suits mission-critical applications like blockchain oracles, where the attack surface should be as small as possible. TDX, on the other hand, with its added containerization support, is tailored toward more complex, multi-tenant applications such as AI.
On the GPU side, Nvidia recently implemented TEEs in its H100 and H200 series. These GPUs are essential for AI use cases due to their computational power, i.e., the ability to perform massively parallel processing. TEE-enabled GPUs paired with TDX are the key to running intensive tasks such as training and inference while verifying the computation and ensuring that data remains protected.

Confidential x Decentralized Compute
The Oasis engineering team is currently working toward full support for TEE-enabled GPUs within ROFL. This means verifying Nvidia's remote attestation logic on Sapphire, running H100 test nodes, trialing multiple GPU attestations, and more. All of this requires computing resources.
The team opted for a decentralized solution, which made io.net a natural choice. Distributed compute networks like io.net remove the need to trust centralized cloud providers, aggregating GPUs from underutilized sources into a single platform. This overcomes many limitations and offers benefits like flexibility, scale, performance (lower latency), cost efficiency, and censorship resistance.
In this context, confidential computing becomes particularly relevant. If you own the hardware in your own data center, trust is not a concern. But as soon as you utilize someone else's GPUs, you need a way to verify the environment without handing all your data to its operator. In short, you need confidential compute.
In the future, Oasis will create a ROFL marketplace that connects dApp builders who need confidential computing with partners like io.net that can serve those needs. This will empower more builders to utilize decentralized confidential computing for their AI models, which, in turn, enables the creation of more globally accessible, trustless AI applications.