At first glance, AI and Web3 appear to be independent technologies, each built on fundamentally different principles and serving different functions. A deeper look, however, reveals that the two can balance each other's trade-offs and complement each other's unique advantages. Balaji Srinivasan articulated this idea of complementary capabilities at the SuperAI conference, prompting a closer comparison of how these technologies interact.
Crypto, which emerged from the decentralized efforts of anonymous cypherpunks, has evolved over the past decade through the collaborative, bottom-up work of countless independent contributors. Artificial intelligence, by contrast, has been developed top-down and is dominated by a handful of tech giants. These companies set the pace and dynamics of the industry, and the barrier to entry is determined more by resource intensity than by technological complexity.
The two technologies are also fundamentally different in nature. Crypto is essentially deterministic, producing immutable and reproducible results, as in the predictability of hash functions or the soundness of zero-knowledge proofs. This contrasts sharply with the probabilistic and often unpredictable nature of artificial intelligence.
Similarly, cryptography excels at verification, ensuring the authenticity and security of transactions and establishing trustless processes and systems, while artificial intelligence excels at generation, creating rich digital content. In producing this digital abundance, however, ensuring content provenance and preventing impersonation becomes a challenge.
Fortunately, crypto offers digital scarcity as the counterpart to digital abundance. It provides mature tools that can be extended to AI systems to guarantee the reliability of content sources and prevent impersonation.
Another notable strength of crypto is its ability to attract large amounts of hardware and capital into coordinated networks serving a specific goal. This is particularly valuable for resource-intensive AI: mobilizing underutilized resources to provide cheaper compute can significantly improve the economics of artificial intelligence.
By comparing these two technologies, we can not only appreciate their respective contributions but also see how together they can open new paths for technology and the economy. Each can compensate for the other's shortcomings, creating a more integrated and innovative future. In this blog post, we explore the emerging AI x Web3 industry landscape, focusing on emerging verticals at the intersection of these technologies.
Source: IOSG Ventures
2.1 Computing Networks
The industry landscape begins with computing networks, which aim to address the constrained supply of GPUs and to reduce the cost of compute in different ways. The key areas of focus are:
Non-uniform GPU interoperability: This is a highly ambitious attempt with high technical risk and uncertainty, but if successful it could create enormous scale and impact by making all computing resources interchangeable. The idea is to build compilers and other prerequisites so that any hardware resource can be plugged in on the supply side, while the non-uniformity of that hardware is fully abstracted away on the demand side, allowing a compute request to be routed to any resource in the network. If this vision succeeds, it would reduce today's dependence on NVIDIA's CUDA software stack, on which AI developers are currently almost entirely reliant. Given the technical risk, many experts remain highly skeptical of this approach's feasibility; a minimal sketch of the demand-side abstraction follows this list.
High-performance GPU aggregation: Aggregating the most popular GPUs worldwide into a distributed and permissionless network, without worrying about interoperability issues between non-uniform GPU resources.
Consumer-grade GPU aggregation: Aimed at aggregating some lower-performance GPUs that may be available in consumer devices, which are the most underutilized resources on the supply side. It caters to those who are willing to sacrifice performance and speed for cheaper and longer training processes.
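To make the demand-side abstraction concrete, here is a minimal, purely illustrative sketch (the Worker, ComputeRequest, and route names are hypothetical, not any project's actual API) of how a scheduler might match hardware-agnostic compute requests against a pool of non-uniform GPUs:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Worker:
    """A supply-side hardware resource; the backend is hidden from the requester."""
    worker_id: str
    backend: str            # e.g. "cuda", "rocm", "metal": non-uniform hardware
    vram_gb: int
    price_per_hour: float
    busy: bool = False

@dataclass
class ComputeRequest:
    """A demand-side job expressed only in hardware-agnostic terms."""
    min_vram_gb: int
    max_price_per_hour: float

def route(request: ComputeRequest, pool: List[Worker]) -> Optional[Worker]:
    """Route the request to the cheapest idle worker that satisfies it,
    regardless of vendor or software backend."""
    candidates = [
        w for w in pool
        if not w.busy
        and w.vram_gb >= request.min_vram_gb
        and w.price_per_hour <= request.max_price_per_hour
    ]
    if not candidates:
        return None
    best = min(candidates, key=lambda w: w.price_per_hour)
    best.busy = True
    return best

# Example: the requester never mentions CUDA, ROCm, or any vendor.
pool = [
    Worker("a", "cuda", vram_gb=80, price_per_hour=2.5),
    Worker("b", "rocm", vram_gb=64, price_per_hour=1.8),
    Worker("c", "metal", vram_gb=32, price_per_hour=0.9),
]
print(route(ComputeRequest(min_vram_gb=48, max_price_per_hour=2.0), pool))  # routed to the ROCm worker
```

The point of the sketch is that the requester only expresses requirements (memory, price), never a vendor or backend; resolving the heterogeneity is the network's job, and doing that for real workloads is exactly where the compiler and tooling challenge lies.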
2.2 Training and Inference
Computing networks serve two main functions: training and inference. Demand for these networks comes from both Web 2.0 and Web 3.0 projects. In the Web 3.0 space, projects like Bittensor use computing resources for model fine-tuning. On the inference side, Web 3.0 projects emphasize the verifiability of the process. This emphasis has given rise to verifiable inference as a market vertical, where projects explore how to integrate AI inference into smart contracts while preserving decentralization principles; a toy sketch of the underlying pattern follows below.
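To illustrate the general pattern behind verifiable inference, here is a deliberately simplified Python sketch (all names are hypothetical) of a re-execution-style check: an off-chain worker commits to the model identity, inputs, and output, and a verifier, playing the role of a smart contract or a quorum of re-executing nodes, accepts the result only if recomputation reproduces the commitment. Production systems typically rely on zero-knowledge proofs (zkML) or optimistic fraud proofs rather than naive re-execution:

```python
import hashlib, json

def model(x):
    """Stand-in for an ML model; in practice this would be a neural network."""
    return [v * 2 for v in x]

def run_inference(model_id: str, inputs):
    """Off-chain worker: run the model and produce a commitment binding
    the model identity, the inputs, and the claimed output together."""
    output = model(inputs)
    commitment = hashlib.sha256(
        json.dumps({"model": model_id, "inputs": inputs, "output": output}).encode()
    ).hexdigest()
    return output, commitment

def verify_inference(model_id: str, inputs, claimed_output, commitment) -> bool:
    """Verifier: accept the result only if re-computation reproduces the commitment."""
    recomputed = hashlib.sha256(
        json.dumps({"model": model_id, "inputs": inputs, "output": model(inputs)}).encode()
    ).hexdigest()
    return recomputed == commitment and claimed_output == model(inputs)

out, c = run_inference("toy-model-v1", [1, 2, 3])
assert verify_inference("toy-model-v1", [1, 2, 3], out, c)
```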
2.3 Intelligent Agent Platforms
Next up are intelligent agent platforms. The industry landscape outlines the core problems that startups in this category need to address:
Agent interoperability, discovery, and communication: agents should be able to discover one another and exchange messages.
Agent cluster building and management: agents should be able to form clusters and manage other agents.
Ownership and marketplaces for AI agents: agents should have verifiable ownership and be tradable in open marketplaces.
These features emphasize the importance of flexible, modular systems that can integrate seamlessly into various blockchain and AI applications. AI agents have the potential to fundamentally change how we interact with the internet, and we believe agents will lean on infrastructure to support their operations. We envision AI agents relying on infrastructure in the following areas (a minimal sketch combining several of these ideas follows the list):
Utilizing distributed crawling networks to access real-time web data.
Using DeFi channels for inter-agent payments.
Requiring economic deposits not only for punishment in case of misconduct but also to enhance agent discoverability (i.e., using deposits as economic signals during the discovery process).
Utilizing consensus to decide which events should lead to slashing.
Open interoperability standards and agent frameworks to support building composable collectives.
Evaluating past performance based on immutable data history and dynamically selecting appropriate agent collectives in real-time.
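As a minimal sketch of how several of these ideas fit together, the hypothetical registry below (not any real project's API) combines stake-weighted discovery, economic deposits, and slashing triggered by a consensus decision:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class AgentRecord:
    owner: str
    capabilities: List[str]
    deposit: float            # economic stake: both a slashing bond and a discovery signal
    reputation: float = 0.0   # e.g. derived from an immutable history of past task outcomes

class AgentRegistry:
    """Illustrative registry: deposits, stake-ranked discovery, and slashing on misconduct."""
    def __init__(self):
        self.agents: Dict[str, AgentRecord] = {}

    def register(self, agent_id: str, owner: str, capabilities: List[str], deposit: float):
        self.agents[agent_id] = AgentRecord(owner, capabilities, deposit)

    def discover(self, capability: str) -> List[str]:
        """Return agent ids offering a capability, ranked by deposit plus reputation."""
        matches = [(aid, a) for aid, a in self.agents.items() if capability in a.capabilities]
        matches.sort(key=lambda kv: kv[1].deposit + kv[1].reputation, reverse=True)
        return [aid for aid, _ in matches]

    def slash(self, agent_id: str, fraction: float):
        """Called once consensus decides that a misconduct event occurred."""
        a = self.agents[agent_id]
        a.deposit *= (1.0 - fraction)

registry = AgentRegistry()
registry.register("scraper-01", owner="0xabc", capabilities=["web-scraping"], deposit=100.0)
registry.register("scraper-02", owner="0xdef", capabilities=["web-scraping"], deposit=40.0)
print(registry.discover("web-scraping"))   # ['scraper-01', 'scraper-02']
registry.slash("scraper-01", 0.5)          # deposit halved after a consensus slashing decision
```

In this toy design, the deposit does double duty: it backs honest behavior (it can be slashed) and it acts as a costly, and therefore credible, signal that pushes an agent up the discovery ranking.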
Source: IOSG Ventures
2.4 Data Layer
Data plays a core role in the fusion of AI and Web3. Data is a strategic asset in the AI race and, together with computing resources, constitutes a critical input. Yet this category is often overlooked, as most of the industry's attention remains on the computing layer. In reality, crypto primitives open up interesting value propositions in data acquisition, along two high-level directions:
Accessing public internet data
Accessing protected data
Accessing public internet data: This direction aims to build distributed web-crawler networks that can crawl the entire internet within days, assembling massive datasets or accessing very specific internet data in real time. Crawling large datasets from the internet, however, requires substantial network bandwidth, and meaningful work only begins with at least hundreds of nodes. Fortunately Grass, a distributed crawler-node network, already has over 2 million nodes actively sharing internet bandwidth with the goal of crawling the entire internet, demonstrating the enormous potential of economic incentives for attracting valuable resources. A rough back-of-envelope calculation below illustrates why node count matters.
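The following back-of-envelope estimate (all figures are our own illustrative assumptions, not Grass's actual parameters) shows why meaningful crawling only starts at hundreds of nodes:

```python
# How many bandwidth-sharing nodes would it take to crawl a large slice of the web in days?
# Every figure below is an illustrative assumption.
pages          = 1_000_000_000            # pages to crawl
avg_page_bytes = 500 * 1024               # ~500 KB per page including assets
node_bandwidth = 10 * 1024 * 1024 / 8     # 10 Mbit/s of shared bandwidth per node, in bytes/s
deadline_days  = 7

total_bytes    = pages * avg_page_bytes
per_node_bytes = node_bandwidth * deadline_days * 24 * 3600
nodes_needed   = total_bytes / per_node_bytes
print(f"{nodes_needed:,.0f} nodes")       # roughly 650 nodes under these assumptions
```

Under these assumptions, roughly 650 nodes of modest shared bandwidth are needed just to cover a billion pages once within a week, and far more for real-time or repeated crawls, which is why token incentives that recruit millions of nodes change the economics of the problem.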
While Grass levels the playing field for public data, there remains the challenge of tapping proprietary datasets, where a significant amount of valuable data is kept private due to its sensitive nature. Many startups are leveraging cryptographic tools that let AI developers build and fine-tune large language models on proprietary datasets while keeping the sensitive underlying data private.
Technologies such as federated learning, differential privacy, trusted execution environments, fully homomorphic encryption, and multi-party computation offer different levels of privacy protection with different trade-offs. Bagel's research article (https://blog.bagel.net/p/with-great-data-comes-great-responsibility-d67) provides an excellent overview of these technologies. They not only protect data privacy during machine learning but also enable comprehensive privacy-preserving AI at the computation level.
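To give a flavor of what these techniques look like in practice, here is a minimal sketch of one of them, differential privacy, in the style of a DP-SGD step: each example's gradient is clipped to bound its influence, and Gaussian noise calibrated to that bound is added before averaging (the parameter values are illustrative):

```python
import numpy as np

def dp_noisy_gradient(per_example_grads: np.ndarray, clip_norm: float, noise_multiplier: float) -> np.ndarray:
    """One DP-SGD-style update ingredient: clip each example's gradient to bound its
    influence, then add Gaussian noise scaled to that bound before averaging."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    summed = np.sum(clipped, axis=0)
    noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / len(per_example_grads)

# Toy usage: 8 per-example gradients over 4 parameters.
grads = np.random.randn(8, 4)
print(dp_noisy_gradient(grads, clip_norm=1.0, noise_multiplier=1.1))
```

The clipping step caps how much any single data point can move the model, and the noise hides whatever influence remains, which is what lets a model be trained on sensitive data without memorizing individual records.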
2.5 Data and Model Provenance
Data and model provenance technologies aim to establish processes that assure users they are interacting with the intended models and data, while providing guarantees of authenticity and origin. Watermarking, for example, is a model-provenance technique that embeds signatures directly into machine learning models, more specifically into the model weights, so that an inference result can later be verified as coming from the expected model.
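The toy sketch below illustrates the spirit of projection-based weight watermarking (it is not any specific production scheme; real watermarks are usually embedded during training via a regularizer): a secret key defines a projection matrix, the weights are minimally perturbed so that the projection carries the owner's bit string, and verification recovers those bits from the weights alone:

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_watermark(weights: np.ndarray, bits: np.ndarray, key: int, margin: float = 0.5) -> np.ndarray:
    """Derive a secret projection matrix from `key`, then apply the smallest
    perturbation that makes the projected weights carry the desired signs."""
    k = len(bits)
    proj = np.random.default_rng(key).standard_normal((k, weights.size))
    target = (2 * bits - 1) * margin                           # +margin for bit 1, -margin for bit 0
    delta = np.linalg.pinv(proj) @ (target - proj @ weights)   # minimum-norm correction
    return weights + delta

def verify_watermark(weights: np.ndarray, bits: np.ndarray, key: int) -> bool:
    """Recover the signs under the secret projection and compare them to the claimed signature."""
    proj = np.random.default_rng(key).standard_normal((len(bits), weights.size))
    recovered = (proj @ weights > 0).astype(int)
    return bool(np.array_equal(recovered, bits))

weights = rng.standard_normal(256)            # stand-in for flattened model weights
signature = rng.integers(0, 2, size=32)       # the owner's secret bit string
marked = embed_watermark(weights, signature, key=42)
print(verify_watermark(marked, signature, key=42))    # True: the marked model carries the signature
print(verify_watermark(weights, signature, key=42))   # almost certainly False for unmarked weights
```

Without the secret key (the seed of the projection matrix), the watermark is statistically invisible in the weights, yet anyone holding the key can check a deployed model's provenance.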
2.6 Applications
In terms of applications, the possibilities are limitless. In the industry landscape above, we list some developments that are particularly exciting as AI technology advances within the Web 3.0 space. Since these use cases are largely self-explanatory, we will not comment on them further here. It is worth noting, however, that the intersection of AI and Web 3.0 has the potential to reshape many verticals, as these new primitives give developers more freedom to create innovative use cases and to optimize existing ones.
Summary
The fusion of AI and Web3 promises a future full of innovation and potential. By leveraging the unique strengths of each technology, we can address a wide range of challenges and open new technological paths. As this emerging industry takes shape, the synergy between AI and Web3 can drive progress and reshape our future digital experiences and how we interact online.
The fusion of digital scarcity and digital abundance, the mobilization of underutilized resources for computing efficiency, and the establishment of secure, privacy-preserving data practices will define the next era of technological evolution.
However, we must recognize that this industry is still in its early stages, and the current industry landscape may become outdated in a short period of time. The rapid pace of innovation means that today’s cutting-edge solutions may soon be replaced by new breakthroughs. Nevertheless, the foundational concepts explored, such as computing networks, agent platforms, and data protocols, highlight the tremendous potential of the fusion of artificial intelligence and Web 3.0.