At first glance, AI and Web3 appear to be independent technologies, each built on fundamentally different principles and serving different purposes. A deeper look, however, reveals that the two can balance each other's trade-offs and complement each other's unique strengths, each enhancing the other. Balaji Srinivasan eloquently expounded on this concept of complementary capabilities at the SuperAI conference, prompting a closer comparison of how these technologies interact.
Crypto has evolved bottom-up over the past decade through the decentralized efforts of the cypherpunk community. Artificial intelligence, by contrast, has been developed top-down, dominated by a handful of tech giants. These companies set the pace and dynamics of the industry, and the barriers to entry are determined more by resource intensity than by technical complexity.
The two technologies are also fundamentally different in nature. Crypto is essentially a deterministic system that produces immutable results, as in the predictability of hash functions or zero-knowledge proofs. This stands in stark contrast to the probabilistic and often unpredictable nature of artificial intelligence.
Similarly, cryptography excels at verification, ensuring the authenticity and security of transactions and establishing trustless processes and systems, while artificial intelligence focuses on generating and creating rich digital content. In the process of creating this digital abundance, however, ensuring content provenance and preventing identity theft become challenges.
Fortunately, crypto offers digital scarcity as the counterpart to digital abundance. It provides mature tools that can be extended to AI to ensure the reliability of content sources and avoid identity theft.
One notable advantage of crypto is its ability to attract large amounts of hardware and capital into coordinated networks serving specific goals. This is particularly valuable for resource-intensive AI: mobilizing underutilized resources to provide cheaper computing power can significantly improve the efficiency of artificial intelligence.
By comparing the two technologies, we can appreciate not only their individual contributions but also how together they open new paths for technology and the economy. Each can compensate for the other's shortcomings, creating a more integrated and innovative future. In this blog post, we explore the emerging AI x Web3 industry landscape, focusing on some of the nascent verticals at the intersection of these technologies.
Source: IOSG Ventures
2.1 Computing Networks
The landscape begins with computing networks, which aim to address the constrained GPU supply problem and reduce computing costs in different ways. The following are worth noting:
Non-uniform GPU interoperability: This is a very ambitious attempt with high technical risk and uncertainty, but if successful, it could create significant scale and impact by making all computing resources interchangeable. The idea is to build compilers and other prerequisites so that any hardware resource can be plugged in on the supply side, while on the demand side the non-uniformity of the hardware is fully abstracted away, so that a computing request can be routed to any resource in the network (a toy sketch of this demand-side abstraction appears after this list). If this vision succeeds, it would reduce the current dependence on CUDA software, which today completely dominates AI development. Despite the potential upside, many experts are highly skeptical of the feasibility of this approach given the technical risks.
High-performance GPU aggregation: Integrating the world's most sought-after GPUs into a distributed, permissionless network, without worrying about interoperability issues between non-uniform GPU resources.
Consumer-grade GPU aggregation: Aggregating the lower-performance GPUs found in consumer devices, which are the most underutilized resources on the supply side. This caters to users willing to sacrifice performance and speed for cheaper, longer training runs.
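To make the demand-side abstraction concrete, here is a minimal, hypothetical sketch in Python: jobs are described in hardware-neutral terms and a router matches them to whatever backends the network offers. The names (`Backend`, `ComputeJob`, `route`) and the pricing figures are illustrative assumptions, not drawn from any real project.

```python
from dataclasses import dataclass

@dataclass
class Backend:
    name: str            # e.g. a datacenter GPU or a consumer card
    vram_gb: int         # usable memory
    tflops: float        # rough throughput
    price_per_hr: float  # what the supplier charges

@dataclass
class ComputeJob:
    min_vram_gb: int     # hardware-neutral requirements only
    min_tflops: float

def route(job: ComputeJob, backends: list[Backend]) -> Backend:
    """Match a job to the cheapest backend that satisfies it; the caller
    never needs to know which vendor or driver stack is underneath."""
    candidates = [b for b in backends
                  if b.vram_gb >= job.min_vram_gb and b.tflops >= job.min_tflops]
    if not candidates:
        raise RuntimeError("no backend in the network can serve this job")
    return min(candidates, key=lambda b: b.price_per_hr)

network = [
    Backend("datacenter A100", vram_gb=80, tflops=312.0, price_per_hr=2.10),
    Backend("consumer RTX 3060", vram_gb=12, tflops=12.7, price_per_hr=0.15),
]
print(route(ComputeJob(min_vram_gb=10, min_tflops=10.0), network).name)
```

The hard part, which this sketch entirely glosses over, is the compiler layer that lets a job written once actually execute on any of the matched backends.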
2.2 Training and Inference
Computing networks are primarily used for two main functions: training and inference, and demand for them comes from both Web 2.0 and Web 3.0 projects. In the Web 3.0 field, projects like Bittensor utilize computing resources for model fine-tuning. On the inference side, Web 3.0 projects emphasize the verifiability of the process. This focus has given rise to verifiable inference as a market vertical, where projects are exploring how to integrate AI inference into smart contracts while maintaining decentralization principles.
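One building block frequently discussed for verifiable inference is a commitment to the model itself: publish a hash of the weights, then bind every inference record to that hash so a smart contract or any other verifier can check which model supposedly produced an output. The sketch below, a hedged illustration with invented function names, shows only this commitment step; it does not prove the computation was performed correctly, which is what zero-knowledge or optimistic fraud proofs add on top.

```python
import hashlib
import json

def commit_to_model(weight_bytes: bytes) -> str:
    """A contract would store only this 32-byte commitment on-chain."""
    return hashlib.sha256(weight_bytes).hexdigest()

def attest_inference(model_commitment: str, prompt: str, output: str) -> str:
    """Bind input, output, and model identity into one checkable record."""
    record = json.dumps(
        {"model": model_commitment, "prompt": prompt, "output": output},
        sort_keys=True,
    )
    return hashlib.sha256(record.encode()).hexdigest()

weights = b"...serialized model weights..."
commitment = commit_to_model(weights)
print(attest_inference(commitment, "2+2?", "4"))
```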
2.3 Intelligent Agent Platforms
Next are intelligent agent platforms. The landscape outlines the core problems that startups in this category need to solve:
Agent interoperability, discovery, and communication: agents can discover and communicate with each other.
Agent cluster building and management: agents can form clusters and manage other agents.
Ownership and markets for AI agents: providing ownership of, and marketplaces for, AI agents.
These features emphasize the importance of flexible, modular systems that can integrate seamlessly into various blockchain and AI applications. AI agents have the potential to fundamentally change how we interact with the internet, and we believe agents will leverage infrastructure to support their operations. We envision AI agents relying on infrastructure in the following areas (a toy registry sketch follows the list):
Accessing real-time web data using a distributed crawling network
Inter-agent payments using DeFi channels
Requiring economic deposits not only as a punishment for misconduct but also to increase agent discoverability (i.e., using deposits as economic signals during the discovery process)
Using consensus to decide which events should lead to slashing
Open interoperability standards and agent frameworks to support building composable collectives
Evaluating past performance based on immutable data history and selecting the appropriate agent collective in real-time
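As a rough illustration of the deposit-and-slashing ideas above, here is a minimal toy registry in Python: agents stake a deposit to become discoverable, ranking by stake serves as the economic signal, and slashing (which in a real network would be triggered by consensus, not a direct call) removes misbehaving agents. Every name here is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    deposit: float = 0.0   # economic stake; doubles as a discovery signal
    slashed: bool = False

class Registry:
    """Toy agent registry: stake to be listed, rank by stake,
    slash on (externally decided) misconduct."""
    def __init__(self):
        self.agents: dict[str, Agent] = {}

    def register(self, name: str, deposit: float) -> None:
        if deposit <= 0:
            raise ValueError("a positive deposit is required to be discoverable")
        self.agents[name] = Agent(name, deposit)

    def discover(self) -> list[Agent]:
        # Higher stake => stronger economic signal => ranked first.
        live = [a for a in self.agents.values() if not a.slashed]
        return sorted(live, key=lambda a: a.deposit, reverse=True)

    def slash(self, name: str, fraction: float = 1.0) -> None:
        # In a real network this would be triggered by consensus on a
        # slashing event, not by a direct method call.
        agent = self.agents[name]
        agent.deposit *= (1.0 - fraction)
        agent.slashed = agent.deposit <= 0

reg = Registry()
reg.register("summarizer-v2", deposit=500.0)
reg.register("scraper-bot", deposit=50.0)
reg.slash("scraper-bot")                 # misconduct decided elsewhere
print([a.name for a in reg.discover()])  # ['summarizer-v2']
```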
Source: IOSG Ventures
2.4 Data Layer
In the fusion of AI and Web3, data is a core component. Alongside computing resources, data is a strategic asset in the AI race. Yet this category is often overlooked, as most of the industry's attention stays on the compute layer. In fact, Web3 primitives offer many interesting directions for data acquisition, primarily along two high-level lines:
Accessing public internet data
Accessing protected data
Accessing public internet data: This direction aims to build distributed web-crawling networks that can crawl the entire internet within days to obtain massive datasets, or access very specific internet data in real time. However, crawling large datasets from the internet demands substantial network resources; at least hundreds of nodes are needed before the work becomes meaningful. Fortunately, Grass, a distributed crawler-node network, has already attracted over 2 million nodes actively sharing internet bandwidth, with the goal of crawling the entire internet. This demonstrates the great potential of economic incentives in attracting valuable resources.
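As a rough sketch of what one node's unit of work might look like, the Python below fetches assigned URLs using the node's own bandwidth. How assignments arrive, and how bandwidth contributions are proven and rewarded, is deliberately left out; none of this reflects Grass's actual protocol.

```python
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def fetch(url: str, timeout: float = 10.0) -> tuple[str, int, bytes]:
    """One unit of crawl work: fetch a page over the node's own connection."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return url, resp.status, resp.read()
    except Exception:
        return url, 0, b""  # report failures back instead of crashing

def crawl_batch(urls: list[str], workers: int = 8):
    # A real node would receive `urls` from a coordinator and report the
    # results (plus some proof of bandwidth spent) back for rewards.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        yield from pool.map(fetch, urls)

for url, status, body in crawl_batch(["https://example.com"]):
    print(url, status, len(body))
```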
While Grass provides a level playing field for public data, there remains the challenge of leveraging proprietary data, namely the problem of accessing datasets that are kept private due to their sensitive nature. Many startups are using cryptographic tools to enable AI developers to build and fine-tune large language models on proprietary datasets while keeping the sensitive information confidential.
Technologies such as federated learning, differential privacy, trusted execution environments, fully homomorphic encryption, and multi-party computation offer different levels of privacy protection with different trade-offs. Bagel's research article (https://blog.bagel.net/p/with-great-data-comes-great-responsibility-d67) provides an excellent overview of these technologies. They not only protect data privacy during the machine learning process but can also enable comprehensive privacy-preserving AI at the compute level.
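To give a flavor of the simplest of these tools, the sketch below implements the textbook Laplace mechanism for differential privacy: noise calibrated to a query's sensitivity is added so that an aggregate statistic can be released without exposing any single record. This is standard textbook material, not code from Bagel or any project mentioned above.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float,
                      epsilon: float, rng=None) -> float:
    """Release true_value with epsilon-differential privacy.
    sensitivity = max change in the query if one record is added/removed."""
    if rng is None:
        rng = np.random.default_rng()
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

incomes = np.array([48_000, 52_000, 61_000, 75_000])
# A counting query has sensitivity 1: one person changes the count by 1.
private_count = laplace_mechanism(float(len(incomes)),
                                  sensitivity=1.0, epsilon=0.5)
print(round(private_count, 2))
```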
2.5 Data and Model Sourcing
Data and model sourcing technologies aim to establish processes that assure users they are interacting with the intended models and data, and provide guarantees of authenticity and provenance. Take watermarking as an example: watermarks are one such technique, embedding signatures directly into the machine learning algorithm, more specifically into the model weights, so that at retrieval time an inference can be verified to have come from the intended model.
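As a deliberately simplified illustration of the weight-embedding idea (assuming we may perturb weights slightly, and glossing over robustness entirely), the toy Python below writes each signature bit into the sign of a weight chosen by a keyed RNG; verification re-derives the positions from the key and reads the signs back. Real schemes embed the watermark during training, for example via a regularizer, rather than patching weights afterwards.

```python
import numpy as np

def embed(weights: np.ndarray, bits: list[int], key: int) -> np.ndarray:
    """Write each signature bit into the sign of a key-selected weight."""
    rng = np.random.default_rng(key)
    idx = rng.choice(weights.size, size=len(bits), replace=False)
    flat = weights.flatten()  # flatten() copies, so the input is untouched
    for i, bit in zip(idx, bits):
        flat[i] = abs(flat[i]) if bit == 1 else -abs(flat[i])
    return flat.reshape(weights.shape)

def extract(weights: np.ndarray, n_bits: int, key: int) -> list[int]:
    """Re-derive the positions from the key and read the signs back."""
    rng = np.random.default_rng(key)
    idx = rng.choice(weights.size, size=n_bits, replace=False)
    flat = weights.flatten()
    return [1 if flat[i] > 0 else 0 for i in idx]

w = np.random.default_rng(0).normal(size=(4, 4))
signature = [1, 0, 1, 1, 0, 0, 1, 0]
watermarked = embed(w, signature, key=42)
assert extract(watermarked, len(signature), key=42) == signature
```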
2.6 Applications
In terms of applications, the possibilities are endless. In the industry landscape above, we list some of the development cases we are most excited about as AI technology advances in the Web 3.0 domain. As these use cases are mostly self-descriptive, we will not comment on them further here. It is worth noting, however, that the intersection of AI and Web 3.0 has the potential to reshape many of the field's verticals, as these new primitives give developers more freedom to create innovative use cases and optimize existing ones.
Source: IOSG Ventures
Summary
The fusion of AI x Web3 brings forth a future full of innovation and potential. By leveraging the unique advantages of each technology, we can address a variety of challenges and open new technological paths. As we explore this emerging industry, the synergy between AI and Web3 can drive progress and reshape our future digital experiences and interactions on the web.
The fusion of digital scarcity and digital abundance, the mobilization of underutilized resources for computing efficiency, and the establishment of secure, privacy-preserving data practices will define the next era of technological evolution.
However, we must recognize that this industry is still in its early stages, and the current landscape may become outdated in a short period of time. The rapid pace of innovation means that today's cutting-edge solutions may soon be replaced by new breakthroughs. Nonetheless, the foundational concepts discussed here, such as computing networks, agent platforms, and data protocols, highlight the tremendous potential of the convergence of artificial intelligence and Web3.
Source: IOSG Ventures