Decentralized AI: Bringing User Sovereignty Back
By applying the mindset of self-sovereignty to AI development, decentralized AI seeks to address concerns over privacy, equity, and accessibility.
Large companies control most of the important technologies developed in the past decade. In many cases, these corporations are oriented toward capturing data, monetizing it, and selling it with little regard for end users. The organizations behind today’s most popular AI models are following a similar trajectory.
On the other hand, the rise of decentralized AI presents an alternative that’s more transparent and confidential while supporting self-sovereignty and equity. However, a more decentralized approach comes with tradeoffs, notably the communication overhead between compute providers and a current lack of standardization. Distributed development is also all about creating Lego pieces and fitting them together, which takes time.
Decentralized AI is nascent, and the jury is still out on what its impact will be. Indeed, it was not mentioned at all in a recent State of AI Report. But the space is also accelerating, and the potential is enormous. So, let's see how things stack up, investigate use cases, and drill into some of the key challenges.
Decentralized vs. Centralized AI
The most concrete way to make this comparison is by asking whether anything in the 'decentralized' bucket can compete on merit. In terms of training, the answer is relatively clear: creating a state-of-the-art AI model today requires massive capital. For example, xAI’s Colossus training cluster comprises over 100,000 H100 GPUs and costs billions of dollars.
Only a select few can train cutting-edge models. However, blockchain projects excel at aligning users via incentive mechanisms. Bitcoin demonstrated how to create the largest computer network worldwide, and many projects are now replicating that approach for AI computation. Beyond coordinating GPUs, blockchains are also well suited to agentic commerce, in particular, payments.
Traditional payment solutions don't suit agents for various reasons, including access and authorization hurdles, the actual input of card details into a checkout solution, cost factors, PCI compliance, geographical limitations, etc. On the flip side, blockchain doesn't have these legacy burdens and can deliver secure, efficient, and transparent transactions that, in turn, facilitate autonomous interactions.
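To make the contrast concrete, here is a toy sketch of how an agent can authorize a payment with a key it holds, rather than typing card details into a checkout flow. The HMAC stands in for a real wallet signature (e.g., ECDSA), and every name here is illustrative, not an actual chain API:

```python
import hashlib
import hmac
import json

# Hypothetical: in practice this would be a private key held in the agent's wallet.
AGENT_KEY = b"agent-secret-key"

def sign_payment(key: bytes, payment: dict) -> str:
    """Canonically serialize the payment and sign it with the agent's key."""
    message = json.dumps(payment, sort_keys=True).encode()
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify_payment(key: bytes, payment: dict, signature: str) -> bool:
    """A recipient (or chain) checks the signature before settling."""
    return hmac.compare_digest(sign_payment(key, payment), signature)

payment = {"to": "0xmerchant", "amount": "4.99", "asset": "USDC", "nonce": 1}
sig = sign_payment(AGENT_KEY, payment)
assert verify_payment(AGENT_KEY, payment, sig)

# Any tampering, such as changing the amount, invalidates the signature.
tampered = dict(payment, amount="499.00")
assert not verify_payment(AGENT_KEY, tampered, sig)
```

No card numbers cross the wire, no PCI scope applies, and authorization is just a signature check, which is what makes the model a natural fit for autonomous agents.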
Decentralized AI Use Cases
It’s worth noting that progress in AI has been top-down, with centralized companies driving most advancements in machine learning over the past few years. The point is that crypto isn’t necessary at all levels of the AI stack but rather should be treated as an enhancement where it makes sense. Below are a few of the more interesting possibilities.
- Data provision & monetization
Pooling GPU resources in exchange for a payment or share of the resulting model is something that has often been cited. This method could similarly apply to data providers, acknowledging the value of their private data in differentiating the model. Such arrangements call for new governance and economic models that enable crowdsourced development while ensuring fair(er) distribution of any revenue. Blockchain can facilitate this process by providing a trustless framework for attributing contributions to relevant stakeholders.
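The accounting such a contract would enforce can be sketched in a few lines, assuming a simple pro-rata split (the names and the flat payout rule are illustrative, not a specific protocol):

```python
# Toy sketch of pro-rata revenue attribution for data or compute
# contributors -- the kind of bookkeeping a smart contract could
# enforce trustlessly once contributions are recorded onchain.

def distribute(revenue: float, contributions: dict) -> dict:
    """Split revenue in proportion to each contributor's recorded share."""
    total = sum(contributions.values())
    return {who: revenue * amount / total for who, amount in contributions.items()}

# Contributions could be GPU-hours, data records, or any agreed metric.
contributions = {"alice": 600.0, "bob": 300.0, "carol": 100.0}
payouts = distribute(1000.0, contributions)
# -> {'alice': 600.0, 'bob': 300.0, 'carol': 100.0}
```

Real designs would layer on quality weighting, vesting, and dispute resolution, but the core idea is that the attribution ledger and the payout rule are public and enforced by code rather than by the model owner.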
- Intellectual property & provenance
AI’s influence on media so far is just scratching the surface. Models will continue improving and extending ever more powerful creation tools. But that raises questions of IP attribution, not to mention authenticity and quality. Fortunately, provenance and fact-checking are things that Web3 has a ready-made solution for. Because crypto has incentivized hundreds of millions of people to hold cryptographic keys, they can now provably authenticate messages. This isn’t a cure-all, but it could help prove the origin of media. More trustworthy reference mechanisms are also possible via crypto, e.g., a model referring users to a block explorer or a Lens profile.
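A minimal version of media provenance is just content hashing: a creator publishes the hash of a file, and anyone can later check a copy against it. In the sketch below, a plain dictionary stands in for an onchain registry, and signing the hash with the creator’s key would additionally bind it to their identity; all names are illustrative:

```python
import hashlib

# Stand-in for an onchain registry mapping content hashes to creators.
registry = {}

def register(media: bytes, creator: str) -> str:
    """Publish the content hash of a piece of media under a creator handle."""
    digest = hashlib.sha256(media).hexdigest()
    registry[digest] = creator
    return digest

def attribute(media: bytes):
    """Look up who registered this exact content, if anyone."""
    return registry.get(hashlib.sha256(media).hexdigest())

original = b"photo bytes ..."
register(original, "alice")
assert attribute(original) == "alice"
assert attribute(b"edited photo bytes") is None  # altered media doesn't match
```

The scheme proves that specific bytes existed and who claimed them, not that the claim is honest, which is why it complements rather than replaces incentive and reputation layers.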
- Verifiability & confidentiality
Just as AI is changing media, fields like medicine, law, and education will be transformed. Today’s LLMs, with their hit-or-miss answers, will give way to, in the case of medicine, specialized models that outperform doctors. The implications of this are vast. But practically, it opens up huge potential for data leakage and exploitation. In this future, distributed ledgers and their ability to verify that a particular model was used, or that data was not tampered with, will be paramount. So will confidentiality-preserving infrastructure that protects inputs and outputs, and in some cases, the models themselves.
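One way to picture "verify that a particular model was used" is a signed inference receipt: the serving infrastructure commits to which weights produced which answer by signing a hash of (model hash, input, output). The HMAC below stands in for a real attestation or chain signature, and every name is an illustrative assumption:

```python
import hashlib
import hmac

# Hypothetical: the key an attested service uses to sign receipts.
SERVICE_KEY = b"attested-service-key"

def receipt(model_hash: str, prompt: str, answer: str) -> str:
    """Commit to (model, input, output) and sign the commitment."""
    commitment = hashlib.sha256(f"{model_hash}|{prompt}|{answer}".encode()).digest()
    return hmac.new(SERVICE_KEY, commitment, hashlib.sha256).hexdigest()

def verify(model_hash: str, prompt: str, answer: str, sig: str) -> bool:
    """A patient or regulator checks the claimed model was actually used."""
    return hmac.compare_digest(receipt(model_hash, prompt, answer), sig)

model_hash = hashlib.sha256(b"medical-model-weights-v1").hexdigest()
sig = receipt(model_hash, "symptom list ...", "diagnosis ...")
assert verify(model_hash, "symptom list ...", "diagnosis ...", sig)

# Claiming a different (e.g., cheaper) model produced the answer fails.
other = hashlib.sha256(b"cheaper-model-weights").hexdigest()
assert not verify(other, "symptom list ...", "diagnosis ...", sig)
```

In production this signature would come from hardware attestation or an onchain commitment rather than a shared secret, but the verification logic a ledger enforces has this shape.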
Decentralized AI & Confidential Computing
Following the privacy thread further, crypto has laid some of the groundwork for AI by creating demand and funding for GPU development over the last decade. In return, current GPU roadmaps, driven in part by the desire to keep model weights private, are moving toward trusted execution environments (TEEs). Web3 stands to benefit greatly from this, especially given that the performance of TEEs on CPUs and GPUs has improved significantly in recent years.
It’s becoming possible to enjoy the flexibility and performance of the cloud without having to trust the cloud or service provider with unencrypted data. In this sense, Oasis is already well down the confidential computing path. This includes the launch of Sapphire, a confidential EVM network that leverages TEEs to create confidential smart contracts. Sapphire is unique within Web3, but what’s possible within a blockchain runtime is limited. What’s needed is a way to account for the non-determinism of AI and to build more complex, flexible applications. Enter ROFL.
Runtime Offchain Logic
Creating an authentication mechanism that allows AI to interface with traditionally rigid systems, like smart contracts, is necessary to realize many of the use cases mentioned above. This is generally the thinking behind ROFL, an all-purpose computing framework that enables arbitrary applications to function in a decentralized, verifiable, and confidentiality-preserving way.
Runtime offchain logic makes it possible to create custom offchain logic that can be easily verified onchain through the magic of trusted execution environments. ROFL enables access to remote network resources while maintaining security and integrity through remote attestations and the Oasis consensus layer. At a high level, here’s how it works:
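As a conceptual illustration of that flow (hypothetical names only, not the actual Oasis/ROFL API), a TEE produces a "quote" that binds a measurement of the code it ran to the hash of its output, and the verifier accepts the result only if both check out. The HMAC stands in for hardware-rooted attestation keys:

```python
import hashlib
import hmac

# Hypothetical stand-in for a key provisioned by the TEE hardware vendor.
ATTESTATION_KEY = b"hardware-rooted-key"

def enclave_run(app_code: bytes, inputs: bytes):
    """Inside the enclave: compute, then attest to code identity + output."""
    measurement = hashlib.sha256(app_code).digest()   # identity of the code
    output = inputs[::-1]                             # placeholder computation
    report = measurement + hashlib.sha256(output).digest()
    quote = hmac.new(ATTESTATION_KEY, report, hashlib.sha256).hexdigest()
    return output, measurement, quote

def onchain_verify(expected_measurement: bytes, output: bytes,
                   measurement: bytes, quote: str) -> bool:
    """On the verifier side: check the quote, then check code identity."""
    report = measurement + hashlib.sha256(output).digest()
    expected = hmac.new(ATTESTATION_KEY, report, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, quote) and measurement == expected_measurement

app = b"def handler(x): ..."
out, m, q = enclave_run(app, b"hello")
assert onchain_verify(hashlib.sha256(app).digest(), out, m, q)

# Output attested by different code is rejected.
assert not onchain_verify(hashlib.sha256(b"other app").digest(), out, m, q)
```

The real mechanism relies on CPU-vendor attestation (e.g., Intel TDX) and verification in the Oasis consensus layer, but the trust argument is the same: the chain accepts an offchain result only when it can check which code produced it.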
In practice, ROFL combines offchain performance and onchain trust. Essentially anything that can be written in software can be put into a ROFL application. However, ROFL is best for things like AI pipelines, as they require intensive computation and a high degree of trust. Full Intel TDX support is coming soon to ROFL, which will enable the running of large models directly inside the framework. This could, among other things, transform agentic applications, allowing for persistent, confidential, and verifiable interactions. Get started with ROFL here.