Five Verification Methods for AI Agents
As agents continue to proliferate, verifying their interactions will become increasingly important.
Decentralized AI is taking off, and machine-powered agents will soon permeate our onchain lives. But as these digital entities gain greater decision-making power and control more capital, the question becomes: will we be able to trust them?
On the decentralized web, honesty is not taken for granted; it’s verified. Once the value of offchain compute (i.e. the models powering agents) gets high enough, it will be necessary to verify which model was used, that the node operator handled the data properly, and that the job was executed as expected. There will also be a need for confidentiality, considering how many people use LLMs with sensitive information. As it turns out, verifiability and confidentiality are both things Web3 is perfectly positioned to solve. Let’s explore.
Machine Learning Verification Methods
If we put the problem of AI alignment aside, there are several ways to minimize the trust requirements of agents, including methods that leverage zero-knowledge proofs (zkML), optimistic verification (opML), and trusted execution environments (teeML). Each has its tradeoffs, but at a high level, here’s how these options stack up:
In a bit more detail…
Zero-Knowledge Proofs - excel in most categories but are complex and expensive
One of the most popular solutions: ZK proofs, with their ability to succinctly represent and verify arbitrary programs. zkML uses mathematical proofs to validate model correctness without revealing the underlying data, guaranteeing that a model or compute provider cannot manipulate results.
While zkML is promising for succinctly proving that a model was faithfully and accurately executed (verifiability), the highly resource-intensive nature of creating ZK proofs often means outsourcing proof generation to third parties, introducing not just latency and cost but also privacy concerns. At present, zkML is unrealistic for anything but the simplest of models. Examples: Giza, RISC Zero
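To make the prove/verify split concrete, here’s a toy non-interactive Schnorr proof in Python. It isn’t model inference and the parameters are deliberately tiny and insecure, but it shows the shape zkML builds on: the prover does the heavy work and publishes a short proof, and the verifier checks it without re-running the computation or learning the secret.

```python
import hashlib
import secrets

p, q, g = 23, 11, 2  # toy parameters: g generates the order-11 subgroup of Z_23*

def fiat_shamir(*vals) -> int:
    """Derive the challenge from the public transcript (Fiat-Shamir heuristic)."""
    data = ",".join(str(v) for v in vals).encode()
    return int(hashlib.sha256(data).hexdigest(), 16) % q

def prove(x: int) -> tuple[int, tuple[int, int]]:
    """Prover knows secret x; publishes y = g^x and a short proof of knowledge of x."""
    y = pow(g, x, p)
    k = secrets.randbelow(q)      # fresh random nonce
    R = pow(g, k, p)
    c = fiat_shamir(g, y, R)      # challenge bound to the transcript
    s = (k + c * x) % q
    return y, (R, s)

def verify(y: int, proof: tuple[int, int]) -> bool:
    """Verifier checks g^s == R * y^c without ever seeing x or redoing the work."""
    R, s = proof
    c = fiat_shamir(g, y, R)
    return pow(g, s, p) == (R * pow(y, c, p)) % p

y, proof = prove(x=4)
print(verify(y, proof))  # True
```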
Optimistic Verification - simple and scalable, but comes with lower privacy
The opML approach involves trusting model outputs while allowing network "watchers" to verify correctness and challenge anything questionable via fraud proofs.
While this method is generally cheaper than ZK and remains secure as long as at least one watcher is honest, users may face increased costs proportional to the number of watchers, and they must also deal with wait times for verification and potential delays should a challenge arise. Example: ORA
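As a rough illustration, here’s a minimal Python sketch of the optimistic flow. All names, the bonding logic, and the challenge handling are hypothetical simplifications; a real opML protocol defines its own bonding, dispute, and settlement rules.

```python
from dataclasses import dataclass

def run_model(inputs: list[float]) -> list[float]:
    """Deterministic stand-in for the model execution that submitter and watchers share."""
    return [x * 2.0 for x in inputs]

@dataclass
class Claim:
    inputs: list[float]
    claimed_output: list[float]
    bond: int                 # stake the submitter forfeits if a challenge succeeds
    challenged: bool = False

def submit(inputs: list[float], claimed_output: list[float], bond: int = 100) -> Claim:
    """Optimistic acceptance: nothing is verified at submission time."""
    return Claim(inputs, claimed_output, bond)

def watch(claim: Claim) -> None:
    """A watcher re-executes the job; any mismatch becomes a fraud proof / challenge."""
    if run_model(claim.inputs) != claim.claimed_output:
        claim.challenged = True   # in practice: post a fraud proof onchain,
                                  # slash the bond, reward the watcher

honest = submit([1.0, 2.0], run_model([1.0, 2.0]))
dishonest = submit([1.0, 2.0], [999.0, 999.0])
for claim in (honest, dishonest):
    watch(claim)
print(honest.challenged, dishonest.challenged)  # False True
```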
Trusted Execution Environment Verification - high privacy and security, but less decentralized
teeML relies on hardware attestations and a decentralized validator set as a root of trust to enable verifiable compute on the blockchain. With TEEs, execution integrity is enforced by the secure enclave, and the relatively lower costs make it a practical option.
The tradeoff is that it comes with a hardware dependency and can be difficult to implement when starting from scratch. There are also hardware limitations today, though this is subject to change with, for example, the introduction of Intel TDX and Amazon Nitro Enclaves. Examples: Oasis, Phala
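Here’s a conceptual Python sketch of the trust model: accept an output only if it arrives with an attestation whose enclave measurement matches the code you expect. The quote format and function names are hypothetical; real systems verify vendor-signed reports (e.g. Intel TDX quotes) and the full signature chain.

```python
from dataclasses import dataclass
import hashlib

# Measurement (code hash) the verifier expects the enclave to be running.
TRUSTED_MEASUREMENT = hashlib.sha256(b"agent-model-v1").hexdigest()

@dataclass
class Attestation:
    measurement: str     # hash of the code/model loaded into the enclave
    output_digest: str   # binds the output to this particular enclave run

def enclave_run(inputs: list[float]) -> tuple[list[float], Attestation]:
    """Pretend execution inside a TEE; the enclave attests to its measurement
    and to a digest of the output it produced."""
    outputs = [x * 2.0 for x in inputs]
    digest = hashlib.sha256(repr(outputs).encode()).hexdigest()
    return outputs, Attestation(TRUSTED_MEASUREMENT, digest)

def accept(outputs: list[float], att: Attestation) -> bool:
    """Verifier side: accept only if the measurement matches the expected code and
    the output is the one attested to. A real verifier also checks the hardware
    vendor's signature chain over the quote, omitted here."""
    return (att.measurement == TRUSTED_MEASUREMENT
            and att.output_digest == hashlib.sha256(repr(outputs).encode()).hexdigest())

outputs, attestation = enclave_run([1.0, 2.0])
print(accept(outputs, attestation))  # True
```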
Cryptoeconomics - straightforward and inexpensive, but less secure
A cryptoeconomic approach uses simple stake-weighted voting. In this scenario, users can customize how many nodes will run their queries, with discrepancies among responses resulting in penalties for the outliers. This way, users can balance cost and trust while keeping latency low.
Employing a cryptoeconomic method is easy and cost-effective, but it also carries arguably the weakest security, as a majority of nodes could collude. In this setup, users must consider what’s at stake for a node operator and how costly it would be for them to cheat. Example: Ritual
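A toy Python sketch of the idea, with purely illustrative names and numbers: the stake-weighted majority answer wins, and nodes that reported something else lose part of their stake.

```python
from collections import defaultdict

def settle(responses: dict[str, str], stakes: dict[str, int], slash_fraction: float = 0.5):
    """responses: node -> answer, stakes: node -> staked amount.
    Returns the stake-weighted winning answer and the penalties for outliers."""
    weight: dict[str, int] = defaultdict(int)
    for node, answer in responses.items():
        weight[answer] += stakes[node]
    winning = max(weight, key=weight.get)          # stake-weighted majority
    penalties = {
        node: int(stakes[node] * slash_fraction)   # outliers lose part of their stake
        for node, answer in responses.items()
        if answer != winning
    }
    return winning, penalties

responses = {"node-a": "42", "node-b": "42", "node-c": "17"}
stakes = {"node-a": 100, "node-b": 120, "node-c": 90}
print(settle(responses, stakes))  # ('42', {'node-c': 45})
```

Note that if colluding nodes hold the majority of stake, the same logic happily settles on their answer, which is exactly the weakness described above.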
Bonus Options
Oracle Networks
Oracle networks provide a secure interface for verifying offchain computation and ensuring that external data inputs are reliable and tamper-proof. This enables smart contracts to access cryptographically verifiable data and users to interact with agents in a trust-minimized way. It is achieved through mechanisms like multi-party computation (MPC) and onchain re-execution.
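For small, deterministic jobs the re-execution idea is simple enough to sketch. The function names below are hypothetical and the "contract" is just simulated in Python; heavier jobs fall back to proofs, MPC, or the methods above rather than full re-execution.

```python
def oracle_report(job_inputs: list[int]) -> int:
    """What an offchain oracle node claims the result of the job is."""
    return sum(job_inputs)

def onchain_verify(job_inputs: list[int], reported: int) -> bool:
    """The 'contract' re-executes the cheap, deterministic job and only
    accepts the report if it matches."""
    return sum(job_inputs) == reported

inputs = [3, 5, 8]
print(onchain_verify(inputs, oracle_report(inputs)))  # True
print(onchain_verify(inputs, 999))                    # False
```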
Fully Homomorphic Encryption ML
There are also open-source frameworks designed to enhance privacy and verifiability by leveraging Fully Homomorphic Encryption (FHE). Generally, FHE allows computations to be performed directly on encrypted data without the need for decryption, ensuring the integrity of the process and keeping sensitive information confidential throughout.
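Here’s a toy Python illustration of the workflow (encrypt, compute on ciphertexts, decrypt). The "scheme" is a trivial additive mask and is neither secure nor fully homomorphic; it only shows that the party doing the computation never sees the plaintext. Real deployments use libraries such as Microsoft SEAL or Zama's Concrete.

```python
import secrets

class ToyAdditiveHE:
    """Toy additively homomorphic 'scheme': Enc(m) = m + mask. Each ciphertext
    carries one copy of the secret mask, so decrypting a sum must subtract one
    mask per term. Not secure, not fully homomorphic; illustration only."""
    def __init__(self) -> None:
        self.mask = secrets.randbelow(1 << 32) + 1   # the client's secret

    def encrypt(self, m: int) -> int:
        return m + self.mask

    def decrypt_sum(self, ciphertext: int, n_terms: int) -> int:
        return ciphertext - self.mask * n_terms

client = ToyAdditiveHE()
ciphertexts = [client.encrypt(m) for m in (3, 5, 8)]   # client encrypts its inputs

# Untrusted server: adds the ciphertexts without ever decrypting them.
encrypted_sum = sum(ciphertexts)

print(client.decrypt_sum(encrypted_sum, n_terms=len(ciphertexts)))  # 16
```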
Wrapping Up
There are quite a few promising solutions, and as activity continues to grow within the crypto x AI sector, more are being explored. However, there’s no escaping the fact that the non-deterministic nature of agents makes verifying their workloads a unique challenge. Trust will remain a sticking point until this problem is conclusively solved.
This leaves us with where we are today: low adoption of and user trust in AI agents, with human-in-the-loop use cases still dominating. At the same time, we’re heading toward a future where blockchains introduce a level of certainty alongside agents. Eventually, agents will be the primary users of these systems, transacting autonomously without a user knowing which RPC, wallet, or network they're using.
Oasis is working to support both privacy and verifiability with ROFL, a teeML framework for extending EVM runtimes (like Oasis Sapphire) to offchain computations.