Backend Layer – The backend layer manages Workers, cluster/GPU operations, customer interactions, billing and usage monitoring, analysis, and automatic scaling.
Database Layer – This layer is the system’s data repository, using primary storage (for structured data) and caching (for frequently accessed temporary data).
Message Broker and Task Layer – This layer facilitates asynchronous communication and task management.
Infrastructure Layer – This layer includes GPU pools, orchestration tools, and manages task deployments.
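The four layers above can be sketched as a minimal pipeline. This is a hypothetical illustration of how a task might flow through them; all class and method names are assumptions for illustration, not the platform's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class DatabaseLayer:
    records: dict = field(default_factory=dict)   # primary storage (structured data)
    cache: dict = field(default_factory=dict)     # frequently accessed temporary data

@dataclass
class MessageBroker:
    queue: list = field(default_factory=list)     # asynchronous task queue

    def enqueue(self, task: str) -> None:
        self.queue.append(task)

@dataclass
class InfrastructureLayer:
    gpu_pool: list = field(default_factory=lambda: ["gpu-0", "gpu-1"])

    def deploy(self, task: str) -> str:
        return f"{task} -> {self.gpu_pool[0]}"    # naive placement for illustration

@dataclass
class BackendLayer:
    broker: MessageBroker
    infra: InfrastructureLayer
    db: DatabaseLayer

    def submit(self, task: str) -> str:
        self.broker.enqueue(task)                 # asynchronous hand-off via the broker
        placement = self.infra.deploy(self.broker.queue.pop(0))
        self.db.records[task] = placement         # persisted for billing/monitoring
        return placement

backend = BackendLayer(MessageBroker(), InfrastructureLayer(), DatabaseLayer())
print(backend.submit("train-job"))                # task lands on a pooled GPU
```

The point of the sketch is the separation of concerns: the backend coordinates, the broker decouples request from execution, the infrastructure layer owns the GPU pool, and the database records the outcome.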
Aethir
Aethir is a cloud computing DePIN that facilitates the sharing of high-performance computing resources in compute-intensive fields and applications. It uses resource pooling to significantly reduce costs, and distributed resource ownership to achieve global GPU allocation and decentralized ownership. Aethir is designed specifically for high-performance workloads, suiting industries like gaming and AI model training and inference. By unifying GPU clusters into a single network, Aethir’s design aims to increase cluster scale, enhancing the overall performance and reliability of the services provided on its network.
Aethir Network is a decentralized economy composed of miners, developers, users, token holders, and the Aethir DAO. The three key roles ensuring the network’s successful operation are containers, checkers, and indexers. Containers are the core nodes of the network, performing essential operations to maintain network activity, including validating transactions and real-time rendering of digital content. Checkers act as quality assurance personnel, continuously monitoring container performance and service quality to ensure reliable and efficient operations for GPU consumers. Indexers serve as matchmakers between users and the best available containers. Supporting this structure is the Arbitrum Layer 2 blockchain, which provides a decentralized settlement layer where goods and services on the Aethir network are paid for in native $ATH tokens.
Proof of Rendering
Nodes in the Aethir network perform two key functions: proof of rendering capacity, in which a group of worker nodes is randomly selected every 15 minutes to validate transactions, and proof of rendering work, in which network performance is closely monitored to ensure users receive optimal service, with resources adjusted based on demand and geography. Miner rewards are allocated to participants running nodes on the Aethir network based on the value of the computing resources they contribute, and are paid out in native $ATH tokens.
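The 15-minute capacity-check cadence can be sketched as a deterministic random draw. This is an illustrative model, not Aethir's actual selection algorithm; the committee size, node count, and seeding scheme are assumptions.

```python
import random

EPOCH_MINUTES = 15     # one validation round per 15-minute epoch (per the text)
COMMITTEE_SIZE = 4     # assumed committee size for illustration

nodes = [f"node-{i}" for i in range(20)]   # hypothetical worker-node registry

def select_committee(nodes, epoch):
    """Deterministically sample a validation committee for one epoch.

    Seeding the RNG with the epoch number means every honest participant
    derives the same committee without extra coordination.
    """
    rng = random.Random(epoch)
    return rng.sample(nodes, COMMITTEE_SIZE)

epoch = 7              # e.g. minutes_elapsed // EPOCH_MINUTES
print(select_committee(nodes, epoch))
```

Because the seed is shared, any node can recompute the committee locally and verify that the right group performed the check.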
Nosana
Nosana is a decentralized GPU network built on Solana. Nosana allows anyone to contribute idle computing resources and receive rewards in the form of $NOS tokens. The DePIN enables economically efficient GPU allocation for running complex AI workloads without the overhead of traditional cloud solutions. Anyone can run Nosana nodes by lending out idle GPUs, earning token rewards proportional to the GPU power they provide to the network.
The network connects two parties involved in allocating computing resources: users seeking access to computing resources and node operators providing computing resources. Critical protocol decisions and upgrades are voted on by NOS token holders and managed by the Nosana DAO.
Nosana has outlined an extensive roadmap. Galactica (v1.0, first/second half of 2024) will launch the mainnet, release the CLI and SDK, and focus on expanding the network through consumer GPU container nodes. Triangulum (v1.X, second half of 2024) will integrate major machine learning protocols and connectors such as PyTorch, HuggingFace, and TensorFlow. Whirlpool (v1.X, first half of 2025) will expand support for diverse GPUs from AMD, Intel, and Apple Silicon. Sombrero (v1.X, second half of 2025) will add support for medium to large enterprises, fiat payments, billing, and team functionality.
Akash
The Akash network is an open-source proof-of-stake network built on the Cosmos SDK, allowing anyone to join and contribute without permission, creating a decentralized cloud computing marketplace. The $AKT token is used to secure the network, facilitate resource payments, and coordinate economic interactions among network participants. The Akash network consists of several key components: the blockchain layer, using Tendermint Core and the Cosmos SDK for consensus; the application layer, managing deployments and resource allocation; the provider layer, managing resources, bidding, and user application deployments; and the user layer, enabling users to interact with the Akash network, manage resources, and monitor application status through the CLI, console, and dashboard.
Initially focusing on storage and CPU leasing services, the network has expanded its service range to cover GPU leasing and allocation to respond to the increasing demand for AI training and inference workloads through its AkashML platform. AkashML uses a “reverse auction” system, where clients (referred to as tenants) submit their desired GPU prices, and providers compete to supply the requested GPUs.
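The reverse auction described above can be sketched in a few lines: the tenant posts a price ceiling, providers bid, and the cheapest qualifying bid wins. This is a minimal illustration of the mechanism, not Akash's actual bidding logic; the provider names and prices are invented.

```python
def reverse_auction(max_price, bids):
    """Return (provider, price) for the cheapest bid at or under the
    tenant's ceiling, or None if no provider qualifies."""
    qualifying = {p: b for p, b in bids.items() if b <= max_price}
    if not qualifying:
        return None                      # no provider met the tenant's ceiling
    winner = min(qualifying, key=qualifying.get)
    return winner, qualifying[winner]

# Hypothetical bids in $/GPU-hour
bids = {"provider-a": 1.20, "provider-b": 0.95, "provider-c": 1.50}
print(reverse_auction(1.30, bids))       # provider-b wins at 0.95
```

The design inverts a normal auction: competition pushes the price down toward the cheapest willing supplier rather than up toward the richest buyer, which is why the model tends to lower costs for tenants.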
As of the time of writing, the Akash blockchain has completed over 12.9 million transactions, with over $535,000 spent on accessing computing resources across more than 189,000 unique deployments.
Honorable Mentions
The computational DePIN field is still evolving, with many teams competing to bring innovative and efficient solutions to the market. Other examples worth further exploration include Hyperbolic, which is building a collaborative, open-access resource-pooling platform for AI development, and Exabits, which is establishing a distributed computing-capacity network supported by computational miners.
Key Considerations and Future Outlook
Now that we have covered the basic principles of computational DePINs and reviewed several current case studies, it is important to consider the impact of these decentralized networks, including their advantages and drawbacks.
Challenges
Building a distributed network at scale often requires trade-offs in performance, security, and resilience. For example, training AI models on a globally distributed commodity hardware network may not be as cost-effective or time-efficient as training on centralized service providers. As mentioned earlier, AI models and their workloads are becoming increasingly complex, requiring more high-performance GPUs rather than commodity GPUs.
This is the reason why large enterprises hoard high-performance GPUs, and it is a challenge for computational DePINs aiming to address GPU shortages (for more information on the challenges faced by decentralized AI protocols, please refer to this post). Protocols can address this issue in two key ways: by setting benchmarks for GPU providers looking to contribute to the network, and by aggregating the computing resources provided to achieve greater overall capacity. However, establishing this model is challenging compared to centralized service providers, who can allocate more funds and deal directly with hardware vendors (such as Nvidia). This is an issue DePINs should consider as they progress. If a decentralized protocol has sufficient funds, its DAO can vote to allocate a portion of them to purchasing high-performance GPUs, which can be managed in a decentralized manner and lent out at a higher price than commodity GPUs.
Another specific challenge for computational DePINs is managing proper resource utilization. In their early stages, most computational DePINs will face insufficient structural demand, much as many startups do today. Generally, DePINs face the challenge of establishing enough supply early on to reach minimum viable product quality. Without supply, the network cannot generate sustainable demand or serve its customers during peak demand periods. On the other hand, excess supply is also a problem. Beyond a certain threshold, more supply only helps when network utilization is close to or at full load. Otherwise, the DePIN risks overpaying for supply and leaving resources underutilized; unless the protocol increases token issuance to keep suppliers participating, supplier income will fall.
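The oversupply risk above comes down to simple arithmetic: with demand fixed, every additional GPU-hour of supply dilutes per-supplier income unless a token subsidy fills the gap. The toy model below illustrates this; all figures are invented.

```python
def supplier_income(demand_gpu_hours, supply_gpu_hours, price_per_hour, subsidy=0.0):
    """Average revenue per supplied GPU-hour: paid utilization plus any
    per-hour token subsidy the protocol issues to retain suppliers."""
    utilization = min(1.0, demand_gpu_hours / supply_gpu_hours)
    return utilization * price_per_hour + subsidy

# Same demand, tight supply vs. a 3x supply glut (hypothetical numbers)
tight = supplier_income(demand_gpu_hours=900, supply_gpu_hours=1000, price_per_hour=1.0)
glut  = supplier_income(demand_gpu_hours=900, supply_gpu_hours=3000, price_per_hour=1.0)
print(tight, glut)   # 0.9 vs 0.3: tripling supply cuts per-hour income to a third
```

The `subsidy` term is where token issuance enters: to hold supplier income at the tight-supply level during a glut, the protocol would have to issue roughly the difference, which is exactly the unsustainable dynamic the text warns about.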
Without broad geographic coverage, a telecommunications network is irrelevant. If passengers have to wait a long time for a taxi, the taxi network is useless. Likewise, if a DePIN has to keep paying resource providers whose resources sit idle for long stretches, it is not viable. Centralized service providers can forecast resource demand and manage supply effectively, while computational DePINs lack a central authority to manage resource utilization. For DePINs, therefore, managing resource utilization strategically is crucial.
A bigger issue is that decentralized GPU markets may eventually no longer face GPU shortages. Mark Zuckerberg recently stated in an interview that he believes energy, rather than computing resources, will become the new bottleneck, as companies shift from hoarding computing resources to racing to build large-scale data centers. While this implies potential cost reductions for GPUs, it also raises a question: if building proprietary data centers raises the overall bar for AI model performance, how will AI startups compete with large corporations on the performance and quality of the goods and services they provide?
Use Cases of Computational DePINs
To reiterate, the gap between the complexity of AI models and their subsequent processing and computing requirements is widening.
Computational DePINs have the potential to be disruptive innovators in the computing market, which is currently dominated by major hardware manufacturers and cloud service providers, based on several key capabilities:
1) Offering lower commodity and service costs.
2) Providing stronger anti-censorship and network resilience guarantees.
3) Benefiting from potential regulatory guidelines that may require AI models to be open and accessible to anyone for fine-tuning and training.
The percentage of US households with access to computers and the internet has grown steadily and is nearing 100%, and the proportion is also rising significantly in many regions globally. This points to a growing number of potential computing resource providers (GPU owners) who, given sufficient monetary incentives and a seamless transaction process, would be willing to lend out idle supply. This is, of course, a very rough estimate, but it suggests that the foundation for a sustainable shared economy of computing resources may already exist.
In addition to AI, future demand for computing will also come from many other industries, such as quantum computing. The quantum computing market size is projected to grow from $928.8 million in 2023 to $6,528.8 million by 2030, with a compound annual growth rate of 32.1%. The industry’s production will require different types of resources, but it will be interesting to see if any quantum computing DePINs will emerge and what they will look like.
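The quoted growth figures are internally consistent: a quick check shows that growing from $928.8 million in 2023 to $6,528.8 million in 2030 does imply the stated 32.1% compound annual growth rate.

```python
# CAGR = (end / start) ** (1 / years) - 1, using the figures from the text
start, end, years = 928.8, 6528.8, 2030 - 2023
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")   # ~32.1%
```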
“A robust ecosystem of open models running on consumer hardware is an important hedge to protect against a future where the value captured by AI is hyper-concentrated and most human thought is read and mediated by a few central servers controlled by a few people.” – Vitalik Buterin
Large enterprises may not be the target audience for DePINs, now or in the future. Computational DePINs give individual developers, independent builders, and fledgling startups with minimal funds and resources a way back into the game: they make it possible to turn idle supply into innovative ideas and solutions through richer access to computing. Artificial intelligence will undoubtedly change the lives of billions of people. Rather than fearing that AI will replace everyone’s jobs, we should embrace the idea that AI can enhance the capabilities of individuals, independent entrepreneurs, startups, and the general public.