The Future of Crypto x AI: Trustless Agents

The agentic future is here, but to realize more useful and trustworthy agents, some guarantees are needed.

For all the hype, most current AI agents are just performative tricks designed to fuel speculative token appreciation. Yes, crypto agents account for $15+ billion in market cap, and some have shown real potential. They have also given investors exposure to something like micro-cap “AI.” But the novelty has worn off, and almost all of these agents will fade into irrelevance.

With the AI meme frenzy cooling off a bit, there’s more interest in actually useful agents: action-oriented agents that solve real problems based on user context. To get there, a step change is needed, particularly in the intelligence available to an agent. Experiments are ongoing, but it’s worth remembering that much of what’s happening is just a thin layer over models that actual AI labs build.

However, one place crypto can legitimately lead is in solving for trust. Right now, most agents are controlled by humans. This is a huge risk: developers can simply manipulate the agent or, since they hold the private key, make whatever financial decisions they like. Further, if an agent has no privacy measures, and there’s no way to verify which model it’s running or whether requests are executed as expected, most higher-stakes use cases don’t work.

Would you trust large amounts of capital or sensitive data to an unproven agent controlled by a random anon? No. Or at least you shouldn’t. Agents are probabilistic, not deterministic, which means an LLM-based architecture has a far larger attack surface than a smart contract. Even for narrower models, there are many chances for things to go wrong, and this lack of operational trustworthiness risks undermining agents’ potential.

Projects like the elizawakesup/ai16zdao framework are beautiful, but to build an agent that actually matters, some guarantees are needed.

Toward Trustless Agents

Autonomy 

One way to solve the issue of the dishonest or greedy puppetmaster pulling the strings of the agent is to minimize human involvement. As mentioned, most agents today require high levels of human intervention, which introduces a trust problem. For instance, a user has to take it on faith that agent initialization was performed without tampering, that data sources from external APIs are accurate, and that private keys are properly managed.

The last point is critical. Most agents have a treasury controlled by a human or a group of humans. This is fertile ground for scams, manipulation, or mismanagement. To avoid trusting such third parties, agents need economic agency: they must control their own money in a provably independent manner.

There are already a few examples of this, and the next generation of agents will move further in this direction. Not only is this a form of insurance for users, but it can also shape agent behavior: economic constraints can be placed on an agent so that it must pay for its own inference costs. This introduces an evolutionary element. Each agent must now earn to survive.
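To make that concrete, here’s a minimal sketch of the pay-to-think constraint described above. Everything in it (the `Treasury` interface, `runInference`, the flat `INFERENCE_COST`) is a hypothetical stand-in, not a real API:

```typescript
// Illustrative sketch only: all names here (Treasury, runInference,
// INFERENCE_COST, PROVIDER) are hypothetical, not a real API.

interface Treasury {
  balance(): Promise<bigint>;                 // agent-controlled funds
  pay(to: string, amount: bigint): Promise<void>;
}

// Hypothetical model endpoint the agent pays to use.
declare function runInference(prompt: string): Promise<string>;

const INFERENCE_COST = 1_000n;          // assumed flat fee per call, in smallest token units
const PROVIDER = "0xInferenceProvider"; // placeholder address

async function step(treasury: Treasury, prompt: string): Promise<string | null> {
  // Evolutionary constraint: no funds, no inference. An agent that cannot
  // earn enough to cover its own compute simply stops running.
  if ((await treasury.balance()) < INFERENCE_COST) {
    return null; // the agent "starves" instead of free-riding on its creator
  }
  await treasury.pay(PROVIDER, INFERENCE_COST);
  return runInference(prompt);
}
```

Okay, so how to get there?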

Verifiability 

Verifiability and autonomy are two of the missing pieces in building more trustworthy agents. While there are many approaches to verifiable inference, such as MPC or zkTLS, trusted execution environments (TEEs) are among the most promising current solutions. Oasis has been building with TEEs for the past seven years and is quickly expanding from onchain privacy to offchain compute.

At the center of this evolution is Runtime Offchain Logic (ROFL), a framework that enables arbitrary applications to function in a decentralized, verifiable way. ROFL makes it possible to create custom offchain logic (agents) that can be verified through TEEs.

TEEs are isolated hardware environments that run code in a secure manner, protecting it from external threats and unauthorized intrusion. By deploying an AI agent within a TEE, its processes remain secure, verifiable, and insulated from interference. If an agent is in a TEE, it's possible to be sure which model it's running on and that responses originate from said model. The data contained within the TEE is also inaccessible to outside parties, guaranteeing the protection of sensitive information.

TEEs use remote attestation as a mechanism to provide verifiable proof of the integrity and authenticity of the system running within the secure enclave. This process enables users and developers to confirm that an agent operates with a validated software configuration. 
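As a rough illustration, here is what that check might look like from a client’s perspective. The quote structure and helper function below are hypothetical stand-ins; real TEE stacks (e.g. Intel SGX/TDX) define their own quote formats and verification tooling:

```typescript
// Illustrative sketch of a client-side attestation check. The quote
// shape and verifyVendorSignature are hypothetical placeholders.

interface AttestationQuote {
  measurement: string; // hash of the code/model image inside the enclave
  reportData: string;  // app-chosen data, e.g. the agent's public key
  signature: string;   // signed under the hardware vendor's attestation key
}

// Hypothetical verifier for the vendor's signature chain.
declare function verifyVendorSignature(q: AttestationQuote): Promise<boolean>;

// Published alongside an audited, reproducible build of the agent.
const EXPECTED_MEASUREMENT = "0x<expected-build-hash>";

async function verifyAgent(quote: AttestationQuote): Promise<boolean> {
  // 1. The quote must chain back to the hardware vendor's root of trust.
  if (!(await verifyVendorSignature(quote))) return false;
  // 2. The measurement must match the expected build, proving which code
  //    (and therefore which model and logic) is actually running.
  return quote.measurement === EXPECTED_MEASUREMENT;
}
```

The key property: if the measurement matches a reproducible build whose source has been audited, the user knows exactly what the agent runs without trusting its operator.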

Moreover, TEEs can solve the issue of private key custodianship. 

An agent's private key can exist as a secret inside a smart contract and be passed to the TEE so that not even the hardware owner can view it. Because the TEE holds the private credentials, the human creator never has access to them at any point in the process. If an account is generated within a smart contract on Sapphire, there’s certainty that no one besides the code running in the TEE (the agent) has access to that wallet.
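A minimal sketch of this custody pattern, assuming an ethers-style wallet library (this is illustrative, not the actual Sapphire/ROFL API):

```typescript
// Illustrative sketch of TEE-only key custody. This is not the actual
// Sapphire/ROFL API; it only shows the pattern: the key is generated
// inside the enclave, and nothing but signatures ever leaves it.

import { Wallet } from "ethers"; // assumes an ethers v5-style Wallet

// Runs inside the TEE. The private key exists only in enclave memory
// (or sealed, encrypted state); the outside world sees only the address.
class EnclaveSigner {
  private wallet = Wallet.createRandom(); // key material never crosses the enclave boundary

  get address(): string {
    return this.wallet.address; // safe to publish: identifies the agent's account
  }

  async signMessage(message: string): Promise<string> {
    return this.wallet.signMessage(message); // signatures leave; the key never does
  }
}
```

Combined with remote attestation, the enclave can bind its published address to its code measurement, so anyone can verify that only the audited agent code, and no human, can sign for that account.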

The Future: TEE-Enabled Agents 

Future agents will shift away from traditional B2C models toward something more immediate: a paradigm that integrates directly with existing social layers. In this world, user context is king; no one cares which chain they’re on, and no one is willing to learn new interfaces. Outcomes are all that matter.

Success requires hyper-personalized intents with high autonomy, context-aware execution, and, critically, agents that keep their promises. This future isn’t strictly about better AI. It’s also about verification. Since computation is being delegated to a third party, that computation needs to be verifiable. By combining cryptographic security with hardware isolation, ROFL turns agents into verifiable autonomous entities, opening the door to agents that can reliably handle important tasks.

Bottom line: if an agent is not open source, audited, reproducibly built, backed by a decentralized key management service, and running in a TEE that is periodically attested onchain, it shouldn’t be trusted. Oasis is incubating and supporting the next wave of TEE-enabled agents. More details soon.
