Source: PermaDAO. AI on AO unveils three major technological breakthroughs: WebAssembly 64-bit support, WeaveDrive, and integration of the Llama.cpp large language model inference engine. Two projects, LlaMA Land and Apus Network, were also highlighted at the event. Let’s delve into the details.
The AI on AO conference concluded successfully on June 20. At the event, the AO protocol team showcased three significant technical updates, marking a milestone: smart contracts can now run large language models in a decentralized environment.
Specifically, AO’s key advancements in AI technology include the following:
– **WebAssembly 64-bit Support**: Developers can now build applications that use more than 4 GB of memory. In theory, 64-bit WebAssembly can address up to 16 exabytes (roughly 17 billion GB). In practice, AO can currently execute models of up to 16 GB, a memory ceiling already sufficient to run nearly all models in today’s AI field. This expansion not only improves application performance but also gives developers more flexibility to innovate (a build sketch follows this list).
– **WeaveDrive Technology**: WeaveDrive simplifies how developers access and manage data by exposing Arweave data as if it were a local hard drive and streaming it efficiently into the execution environment, speeding up both development and application performance (a file-access sketch follows this list).
– **Integration of the Llama.cpp Large Language Model Inference Engine**: With Llama.cpp built in, AO now supports running a range of open-source large language models, such as Llama 3 and GPT-2, directly inside smart contracts. Contracts can therefore draw on advanced language models for complex data processing and decision-making, including financial decisions, significantly expanding what decentralized applications can do (a model-loading sketch follows this list).
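To make the memory numbers concrete: 32-bit WebAssembly caps linear memory at 2^32 bytes (4 GB), while a 64-bit address space reaches 2^64 bytes, i.e. 16 exabytes, or about 17 billion GB. The hedged C sketch below allocates a buffer that would be impossible under wasm32. The Emscripten Memory64 flags in the comment are an assumption about one possible toolchain; the conference recap does not describe AO’s own WebAssembly runtime in this detail.

```c
/*
 * Minimal sketch: allocating more than 4 GB inside a WebAssembly module.
 * Assumes a Memory64-capable toolchain, e.g. (flags are an assumption):
 *   emcc -sMEMORY64=1 -sALLOW_MEMORY_GROWTH=1 big_alloc.c -o big_alloc.js
 * Under wasm32 this allocation must fail, because linear memory is capped
 * at 2^32 bytes (4 GB); wasm64 raises the theoretical ceiling to 2^64 bytes.
 */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    size_t six_gib = 6ULL * 1024 * 1024 * 1024;   /* 6 GiB, impossible in wasm32 */
    uint8_t *buf = malloc(six_gib);
    if (buf == NULL) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }
    memset(buf, 0, six_gib);                      /* touch the whole buffer */
    printf("allocated and zeroed %zu bytes\n", six_gib);
    free(buf);
    return 0;
}
```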
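WeaveDrive is described as making Arweave data look like a local drive to the execution environment. The C sketch below shows what that could look like from inside an AO process: an Arweave transaction is streamed through ordinary file I/O in fixed-size chunks. The `/data/<transaction-id>` mount layout and the placeholder path are assumptions made for illustration, not a documented interface.

```c
/*
 * Hedged sketch: streaming Arweave data through WeaveDrive-style file
 * access from inside a Wasm module. The "/data/<transaction-id>" layout
 * is an assumption used only for illustration.
 */
#include <stdio.h>

int main(int argc, char **argv) {
    /* Hypothetical mount path; pass a real one as the first argument. */
    const char *path = (argc > 1) ? argv[1] : "/data/<arweave-transaction-id>";

    FILE *fp = fopen(path, "rb");
    if (fp == NULL) {
        fprintf(stderr, "could not open %s\n", path);
        return 1;
    }

    /* Stream in chunks rather than loading everything into memory at once. */
    unsigned char chunk[64 * 1024];
    size_t total = 0, n;
    while ((n = fread(chunk, 1, sizeof chunk, fp)) > 0) {
        total += n;  /* a real process would hand each chunk to, e.g., a model loader */
    }

    printf("streamed %zu bytes from Arweave\n", total);
    fclose(fp);
    return 0;
}
```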
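For the inference engine itself, the sketch below uses the llama.cpp C API to load a GGUF model and create an inference context, which is the engine the article says AO now embeds. It is written against the API roughly as it stood in mid-2024; function names and build setup vary between llama.cpp releases, and the tokenize/decode/sampling loop is omitted because that part of the API has changed repeatedly. How AO wires this engine into its WebAssembly environment is not shown here.

```c
/*
 * Minimal llama.cpp sketch: load a GGUF model and create a context.
 * Written against the llama.cpp C API circa mid-2024 (llama.h); exact
 * function names and build steps differ between releases, so treat the
 * calls below as an assumption about one snapshot of the library.
 */
#include <stdio.h>
#include "llama.h"

int main(int argc, char **argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s <model.gguf>\n", argv[0]);
        return 1;
    }

    llama_backend_init();

    /* Load the model, e.g. a quantized Llama 3 GGUF file. */
    struct llama_model_params mparams = llama_model_default_params();
    struct llama_model *model = llama_load_model_from_file(argv[1], mparams);
    if (model == NULL) {
        fprintf(stderr, "failed to load %s\n", argv[1]);
        return 1;
    }

    /* Create an inference context; text generation itself is omitted here. */
    struct llama_context_params cparams = llama_context_default_params();
    cparams.n_ctx = 2048;  /* context window for this session */
    struct llama_context *ctx = llama_new_context_with_model(model, cparams);
    if (ctx == NULL) {
        fprintf(stderr, "failed to create context\n");
        llama_free_model(model);
        return 1;
    }

    printf("model loaded, context window: %u tokens\n", (unsigned) llama_n_ctx(ctx));

    llama_free(ctx);
    llama_free_model(model);
    llama_backend_free();
    return 0;
}
```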
Together, these three breakthroughs open up far greater opportunities for developers to build AI applications on AO. As an application example, the conference highlighted LlaMA Land, a new project driven entirely by AI. It also featured Apus Network, a decentralized GPU network that aims to provide the most cost-effective AI model execution environment on AO.
**LlaMA Land**
LlaMA Land, built on AO, is a large-scale online multiplayer game hosted entirely on-chain and powered by AI (the Llama 3 model). Inside LlaMA Land runs a system called Llama Fed, a counterpart to the Federal Reserve operated by Llama models and responsible for monetary policy and for minting the Llama token.
Users can request Llama tokens by offering wrapped Arweave tokens (wAR); Llama Fed autonomously decides whether to grant them based on the quality of the petition (for example, how interesting or valuable the proposed project is), with no human intervention at any point.
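To make that flow concrete, here is a purely hypothetical C sketch of the decision loop just described: a user attaches wAR to a written petition, the model scores it, and tokens are granted or refused with no human in the loop. Every name, the scoring stub, the approval threshold, and the mint ratio are invented for illustration; in the real system, the judgment is made by the on-chain Llama 3 model, not a heuristic.

```c
/*
 * Purely hypothetical sketch of the Llama Fed grant flow described above.
 * score_petition() stands in for a call into the on-chain Llama 3 model;
 * the threshold and mint ratio are invented for illustration only.
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for LLM inference: a real system would ask Llama 3 to judge
 * how interesting or valuable the petition is and return a score. */
static double score_petition(const char *petition) {
    return strlen(petition) > 80 ? 0.9 : 0.3;  /* toy heuristic, not the real model */
}

typedef struct {
    bool granted;
    long llama_minted;
} GrantDecision;

/* Llama Fed's decision is driven only by the model's score, no human input. */
static GrantDecision llama_fed_decide(long war_offered, const char *petition) {
    GrantDecision d = { false, 0 };
    if (score_petition(petition) >= 0.5) {      /* hypothetical approval bar */
        d.granted = true;
        d.llama_minted = war_offered * 10;      /* hypothetical mint ratio */
    }
    return d;
}

int main(void) {
    GrantDecision d = llama_fed_decide(
        5, "We plan to build open tooling for AO developers and document every "
           "module so newcomers can ship their first on-chain process.");
    printf("granted=%s, Llama minted=%ld\n", d.granted ? "yes" : "no", d.llama_minted);
    return 0;
}
```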
LlaMA Land is not yet fully open to the public. Interested users can visit its website and join the waitlist to be among the first to try it.
**Apus Network**
Apus Network is a decentralized, permissionless GPU network. Leveraging Arweave’s permanent storage and AO’s scalability, it uses economic incentives to provide a deterministic GPU execution environment for AI models. In short, Apus Network aims to offer an efficient, secure, and economical computing environment for AI applications on AO, further driving the development of decentralized AI.
Apus Network recently refreshed its website to improve the user experience, and development of its model evaluation and fine-tuning features has reached significant milestones. Going forward, it plans to support AO ecosystem wallets and to complete the related development and testing in its Playground. It will also roll out model evaluation on the AO platform to further strengthen its capabilities and performance.
**Conclusion**
The AI on AO conference not only showcased AO’s ability to host advanced AI models but also gave decentralized AI applications a significant push forward. LlaMA Land, the first example project built on these upgrades, offers a prototype of what autonomous AI agent applications can look like. As AI applications advance, the AO ecosystem will bring in more GPU resources to speed up large language model execution, and Apus Network stands as the first decentralized GPU network integrated with AO.
Looking ahead, AO plans to raise its memory limits further as demand grows, supporting the execution of even larger AI models. It will also continue exploring ways to build autonomous AI agents, expanding their applications in decentralized finance and smart contracts.