Born on the Edge: How Decentralized Computing Power Networks Empower Crypto and AI?

Original author: Jane Doe, Chen Li

Source: Youbi Capital

1 The Intersection of AI and Crypto

On May 23, chip giant NVIDIA released its financial report for the first quarter of fiscal year 2025. The report shows that NVIDIA's first-quarter revenue was $26 billion, of which data center revenue grew 427% year-over-year to an astonishing $22.6 billion. NVIDIA's financial performance, which single-handedly propped up the U.S. stock market, reflects the surging demand for computing power as global technology companies compete in the AI field. The more aggressively top technology companies expand into AI, the larger their ambitions grow, and correspondingly, their demand for computing power grows exponentially as well. According to TrendForce's forecasts, the four major U.S. cloud service providers (Microsoft, Google, AWS, and Meta) are expected to account for 20.2%, 16.6%, 16%, and 10.8% of global demand for high-end AI servers in 2024, together exceeding 60%.

"Chip shortage" has become an annual hot topic in recent years. On the one hand, the training and inference of large language models (LLMs) require a large amount of computing power support; and as the model iterates, the cost and demand for computing power increase exponentially. On the other hand, large companies like Meta purchase a huge number of chips, and global computing resources are tilting towards these tech giants, making it increasingly difficult for small enterprises to obtain the required computing resources. The dilemma faced by small enterprises stems not only from the insufficient supply of chips caused by the surge in demand, but also from structural contradictions in the supply. Currently, there are still a large number of idle GPUs on the supply side, for example, some data centers have a large amount of idle computing power (with a utilization rate of only 12% - 18%), and a large amount of computing power resources are idle in encrypted mining due to the decrease in profits. Although not all of this computing power is suitable for professional applications such as AI training, consumer-grade hardware can still play a huge role in other areas, such as AI inference, cloud gaming rendering, cloud phones, etc. The opportunity to integrate and utilize this part of the computing power resources is enormous.

Shifting the focus from AI to crypto: after a three-year lull in the crypto market, another bull run has finally arrived, with Bitcoin repeatedly hitting new highs and memecoins emerging one after another. Although AI and Crypto have been buzzwords for years, artificial intelligence and blockchain, as two important technologies, have seemed like parallel lines yet to find an intersection. At the beginning of this year, Vitalik published an article titled 'The Promise and Challenges of Crypto + AI Applications', discussing future scenarios for combining AI and crypto. He described many visions, including using blockchain, MPC, and other cryptographic technologies for decentralized training and inference of AI, which could open up the black box of machine learning and make AI models more trustless. There is still a long way to go to realize these visions. However, one use case Vitalik mentioned, using the economic incentives of crypto to empower AI, is an important direction that can be realized in the near term. Decentralized computing power networks are currently one of the scenarios best suited to AI + crypto.

2 Decentralized Computing Power Networks

Currently, many projects are developing in the field of decentralized computing power networks. The underlying logic of these projects is similar and can be summarized as: using tokens to incentivize computing power holders to participate in the network and provide computing power services. These scattered computing resources can be aggregated into a decentralized computing power network of a certain scale. This can not only improve the utilization of idle computing power, but also meet the computing power needs of customers at a lower cost, achieving a win-win situation for both buyers and sellers.

In order to give readers an overall understanding of this track in a short period of time, this article will deconstruct specific projects and the entire track from micro and macro perspectives, aiming to provide readers with analytical perspectives to understand the core competitiveness of each project and the overall development of the decentralized computing power track. The author will introduce and analyze five projects: Aethir, io.net, Render Network, Akash Network, and Gensyn, and summarize and evaluate the project situation and track development.

From the perspective of analysis framework, if we focus on a specific decentralized computing power network, we can break it down into four core components:

  • Hardware Network: The foundation layer of a decentralized computing power network, which integrates decentralized computing power resources and shares and balances them across globally distributed nodes.
  • Bilateral Market: Matching computing power providers with demanders through a reasonable pricing mechanism and discovery mechanism, providing a secure trading platform to ensure transparent, fair, and trustworthy transactions for both supply and demand parties.
  • Consensus Mechanism: Used to ensure that nodes within the network operate correctly and complete tasks. It mainly monitors two things: 1) node liveness: whether the node is online and in an active state, ready to accept tasks at any time; 2) proof of the node's work: that after receiving a task, the node completes it effectively and correctly, and that its computing power is not diverted to other purposes, occupying processes and threads.
  • Token Incentives: The token model is used to incentivize more participants to provide/use services, and capture this network effect with tokens to achieve community benefit sharing.
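To make the consensus component above concrete, here is a minimal Python sketch of the two checks it performs: a heartbeat-based liveness test and spot-check verification of task results by recomputation. All names (`Node`, `is_active`, `verify_result`) and the timeout value are illustrative assumptions, not the mechanism of any particular project.

```python
HEARTBEAT_TIMEOUT = 30  # seconds a node may stay silent before being marked inactive (illustrative)

class Node:
    """A compute node that periodically pings the network."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.last_heartbeat = None

    def heartbeat(self, now):
        self.last_heartbeat = now

def is_active(node, now):
    """Liveness check: the node has pinged within the timeout window."""
    return node.last_heartbeat is not None and now - node.last_heartbeat <= HEARTBEAT_TIMEOUT

def verify_result(task_inputs, claimed_outputs, compute_fn, sample_indices):
    """Proof of work by spot sampling: recompute a subset of the task and
    compare against the outputs the node claims to have produced."""
    return all(compute_fn(task_inputs[i]) == claimed_outputs[i] for i in sample_indices)
```

Real networks replace full recomputation with redundancy (assigning the same task to several nodes) or cryptographic proofs, since recomputing everything would defeat the purpose of outsourcing.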

If you take a bird's-eye view of the entire decentralized computing power track, Blockworks Research's research report provides a good analysis framework, and we can divide the projects in this track into three different layers.

  • Bare metal layer: The foundational layer of the decentralized computing stack, whose main task is to collect raw computing power resources and make them accessible through APIs.
  • Orchestration layer: The intermediate layer that constitutes the decentralized computing stack, whose main task is coordination and abstraction, responsible for scheduling, expansion, operation, load balancing, fault tolerance, etc. The main role is to "abstract" the complexity of underlying hardware management, providing a more advanced user interface for end users, serving specific customer groups.
  • Aggregation layer: The top layer of the decentralized computing stack, the main task is to integrate, responsible for providing a unified interface for users to achieve a variety of computing tasks in one place, such as AI training, rendering, zkML, etc. Equivalent to the orchestration and distribution layer of multiple decentralized computing services.


Image source: Youbi Capital

Based on the above two analysis frameworks, we will make a horizontal comparison of the selected five projects, and evaluate them from four aspects: core business, market positioning, hardware facilities, and financial performance.


2.1 Core Business

From the perspective of the underlying logic, the decentralized computing power network is highly homogeneous, that is, it uses tokens to incentivize idle computing power holders to provide computing power services. Based on this underlying logic, we can understand the differences in the core business of the project from three aspects:

  • Source of Idle Computing Power:
  • The main sources of idle computing power on the market are: 1) data centers, mining companies and other enterprises with idle computing power; 2) idle computing power in the hands of retail investors. The computing power in data centers is usually professional-grade hardware, while retail investors typically purchase consumer-grade chips.
  • Aethir, Akash Network, and Gensyn collect computing power mainly from enterprises. The benefits of doing so are: 1) enterprises and data centers usually have higher-quality hardware and professional maintenance teams, so the computing resources perform better and are more reliable; 2) enterprise and data center resources tend to be more homogeneous, and centralized management and monitoring make resource scheduling and maintenance more efficient. However, this approach places higher demands on the project team, which needs business relationships with the enterprises that possess and control the computing power, and it reduces scalability and decentralization to some extent.
  • Render Network and io.net are mainly designed to incentivize individual users to provide their idle computing power. The advantages of collecting computing power from individual users are: 1) The explicit cost of idle computing power from individual users is lower, which can provide more economical computing resources; 2) The network is more scalable and decentralized, enhancing the resilience and robustness of the system. However, the disadvantages are that individual user resources are widely distributed and not standardized, making management and scheduling more complex and increasing operational difficulties. It is also more difficult to rely on individual user computing power to form initial network effects (more difficult to kickstart). Finally, individual users' devices may have more security risks, which can lead to data leaks and misuse of computing power.
  • Computing Power Consumers
  • From the perspective of computing power consumers, the target customers of Aethir, io.net, and Gensyn are mainly enterprises. For B-side customers, high-performance computing is required for AI and real-time rendering in games. Such workloads have extremely high requirements for computing power resources, usually requiring high-end GPUs or professional-grade hardware. In addition, B-side customers have high requirements for the stability and reliability of computing power resources, so high-quality service level agreements must be provided to ensure the normal operation of projects and provide timely technical support. At the same time, the migration cost of B-side customers is very high. If there is no mature SDK in the decentralized network that allows the project party to deploy quickly (for example, Akash Network requires users to develop based on remote ports by themselves), it is difficult to persuade customers to migrate. Unless there is a very significant price advantage, the willingness of customers to migrate is very low.
  • Render Network and Akash Network primarily provide computing power services to retail users. To serve C-side users, a project needs to design a simple, user-friendly interface and tools to deliver a good consumer experience. Consumers are also price-sensitive, so the project needs to offer competitive pricing.
  • Hardware Type
  • Common computing hardware resources include CPUs, FPGAs, GPUs, ASICs, and SoCs, among others. These devices differ significantly in design goals, performance characteristics, and application domains. In summary, CPUs suit general-purpose computing, FPGAs excel at highly parallel processing with programmability, GPUs perform exceptionally well in parallel computing, ASICs are most efficient for specific tasks, and SoCs integrate multiple functions for highly integrated applications. The choice of hardware depends on application requirements, performance needs, and cost considerations. The decentralized computing power projects discussed here mostly focus on collecting GPU computing power, which is determined both by the projects' business types and by the characteristics of GPUs, which have unique advantages in AI training, parallel computing, multimedia rendering, and similar areas. Although most of these projects integrate GPUs, different applications have different hardware requirements, so the hardware has heterogeneously optimized cores and parameters, including parallelism/serial dependencies, memory, latency, and so on. For example, rendering workloads are actually better suited to consumer-grade GPUs than to more powerful data center GPUs, because rendering places heavy demands on ray tracing, and consumer-grade chips such as the 4090 have enhanced RT cores with computational optimizations specifically for ray-tracing tasks. AI training and inference, on the other hand, require professional-grade GPUs. Therefore, Render Network can collect consumer-grade RTX 3090s and 4090s from retail participants, while io.net needs more professional-grade GPUs such as H100s and A100s to meet the needs of AI startups.
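The workload-to-hardware matching described above can be sketched as a simple lookup. The catalogue and selection rules below are illustrative assumptions mirroring the text (consumer cards with RT cores for rendering, datacenter cards for AI), not any project's actual scheduler.

```python
# Hypothetical catalogue of GPU classes; specs are indicative only.
GPU_POOL = {
    "RTX 4090": {"tier": "consumer",   "vram_gb": 24, "rt_cores": True},
    "RTX 3090": {"tier": "consumer",   "vram_gb": 24, "rt_cores": True},
    "A100":     {"tier": "datacenter", "vram_gb": 80, "rt_cores": False},
    "H100":     {"tier": "datacenter", "vram_gb": 80, "rt_cores": False},
}

def suitable_gpus(workload):
    """Match a workload to GPU models: ray-traced rendering favours consumer
    cards with RT cores; AI training/inference favours datacenter cards."""
    if workload == "rendering":
        return [m for m, spec in GPU_POOL.items() if spec["rt_cores"]]
    if workload in ("ai_training", "ai_inference"):
        return [m for m, spec in GPU_POOL.items() if spec["tier"] == "datacenter"]
    return list(GPU_POOL)  # general compute: anything goes
```

This is why a rendering-focused network can accept retail hardware while an AI-focused one cannot: the two pools barely overlap.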

2.2 Market Positioning

In terms of project positioning, the bare metal layer, orchestration layer, and aggregation layer each face different core problems, optimization priorities, and value capture capabilities.

  • The Bare Metal layer focuses on collecting and utilizing physical resources, while the Orchestration layer focuses on scheduling and optimizing computing power, tuning the physical hardware for optimal performance according to the needs of its customer groups. The Aggregation layer is general-purpose and focuses on integrating and abstracting different resources. In terms of the value chain, each project should start from the Bare Metal layer and strive to climb upwards.
  • From the perspective of value capture, the ability to capture value increases layer by layer from the bare metal layer, orchestration layer to the aggregation layer. The aggregation layer is able to capture the most value because the aggregation platform can obtain the greatest network effect and directly reach the most users, which is equivalent to the traffic entrance of the decentralized network, thus occupying the highest value capture position in the entire computing power resource management stack.
  • Correspondingly, building an aggregation platform is also the hardest. A project must comprehensively solve technological complexity, heterogeneous resource management, system reliability and scalability, network-effect realization, security and privacy protection, and complex operation and maintenance. These challenges make a cold start difficult, and success depends on how the track develops and on timing: attempting the aggregation layer before the orchestration layer has matured and taken meaningful market share is not realistic.
  • Currently, Aethir, Render Network, Akash Network, and Gensyn all belong to the Orchestration layer, and they aim to provide services for specific targets and customer groups. Aethir's main business is real-time rendering for cloud gaming and providing a certain development and deployment environment and tools for B-end customers; Render Network's main business is video rendering; Akash Network's mission is to provide a transaction platform similar to Taobao, while Gensyn focuses on the AI training field. io.net is positioned as the Aggregation layer, but the functionality currently implemented by io is still some distance away from the complete functionality of the aggregation layer. Although it has collected hardware from Render Network and Filecoin, the abstraction and integration of hardware resources have not yet been completed.

2.3 Hardware Facilities

  • Currently, not all projects have disclosed detailed network data. Relatively speaking, the io.net explorer has the best UI, where you can see parameters such as the number, type, price, distribution, network usage, and node income of GPUs/CPUs. However, at the end of April io.net's front end was attacked: because io did not authenticate its PUT/POST interfaces, hackers tampered with the front-end data. The incident is a warning to other projects about privacy and the reliability of network data.
  • In terms of GPU count and models, io.net, as an aggregation layer, has collected the most hardware. Aethir comes second, and the hardware situation of other projects is less transparent. io.net has a wide variety of GPU models, including professional-grade GPUs such as the A100 and consumer-grade GPUs such as the 4090, which fits its aggregation positioning: io.net can choose the most suitable GPU for a given task. However, GPUs of different models and brands may require different drivers and configurations, and software must be optimized for each, which increases management and maintenance complexity. Currently, task allocation in io.net mainly relies on users' own selection.
  • Aethir has released its Mining Rig, and in May the Aethir Edge, developed with support from Qualcomm, was officially launched. It breaks away from the centralized, far-from-the-user GPU cluster deployment model and pushes computing power to the edge. Aethir Edge will combine the computing power of H100 clusters to serve AI scenarios, deploying pre-trained models to provide users with inference services at optimal cost. This solution is closer to users, faster, and more cost-effective.
  • From the perspective of supply and demand, take Akash Network as an example: its statistics show roughly 16k CPUs and 378 GPUs in total. Based on network leasing demand, the utilization rates of CPUs and GPUs are 11.1% and 19.3% respectively. Only the professional-grade H100 shows relatively high utilization; most other models sit largely idle. Other networks face broadly the same situation as Akash: overall demand is low, and apart from popular chips such as the A100 and H100, most computing power is idle.
  • In terms of price advantage, decentralized networks do not show a significant cost advantage over the traditional cloud computing giants.
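As a quick sanity check, the Akash utilization figures cited above translate into absolute leased quantities as follows (simple arithmetic on the reported numbers):

```python
# Quantities reported by Akash Network's statistics, as cited above.
total_cpu, total_gpu = 16_000, 378
cpu_util, gpu_util = 0.111, 0.193

leased_cpu = round(total_cpu * cpu_util)  # CPUs actually leased out
leased_gpu = round(total_gpu * gpu_util)  # GPUs actually leased out
```

That is, only about 1,776 CPUs and 73 GPUs are actually earning, which illustrates how much of the collected supply sits idle.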


2.4 Financial Performance

  • Regardless of how the token model is designed, a healthy tokenomics needs to meet the following basic conditions: 1) User demand for the network needs to be reflected in the token price, which means that the token can capture value; 2) All participants, whether they are developers, nodes, or users, need to receive long-term fair incentives; 3) Ensure decentralized governance to avoid excessive holdings by insiders; 4) Reasonable inflation and deflation mechanisms and token release cycles to avoid significant fluctuations in token price affecting the robustness and sustainability of the network.
  • Token models can be roughly divided into BME (burn-and-mint equilibrium) and SFA (stake for access), and the two derive deflationary pressure from different sides: under BME, deflationary pressure is determined by demand, because tokens are burned when users purchase services; under SFA, service providers/nodes must stake tokens to qualify to provide services, so deflationary pressure comes from the supply side. The advantage of BME is that it suits non-standardized commodities; however, if network demand is insufficient, it may face sustained inflationary pressure. The token models of the projects differ in detail, but overall Aethir leans toward SFA, while io.net, Render Network, and Akash Network lean toward BME. Gensyn's token model is not yet known.
  • In terms of revenue, demand for a network is directly reflected in its overall revenue (we are not discussing miner income here, because besides rewards for completing tasks, miners also receive subsidies from the project). From publicly available data, io.net's revenue is the highest. Although Aethir has not yet disclosed its revenue, public information shows it has announced signed orders with many B-end customers.
  • In terms of token price, Render Network and Akash Network have had tokens on the market for some time, while Aethir and io.net launched theirs only recently; their price performance needs further observation and we will not discuss it in detail here. Gensyn's plans are unclear. Overall, judging from the projects that have launched tokens, together with peers in the same field outside the scope of this article, decentralized computing power networks have shown very impressive price performance, to some extent reflecting the huge market potential and the community's high expectations.
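The difference in deflationary pressure between the two models can be illustrated with a toy BME supply simulation. The function and its parameters are hypothetical; the point is only that supply shrinks when demand-driven burning outpaces emission, and inflates otherwise.

```python
def bme_supply_change(initial_supply, mint_per_epoch, burn_per_epoch, epochs):
    """Toy burn-and-mint equilibrium: a fixed amount is minted each epoch to
    reward providers, while tokens spent on services are burned. Net supply
    falls only when demand-driven burning outpaces emission."""
    supply = initial_supply
    for _ in range(epochs):
        supply += mint_per_epoch   # provider rewards (supply side)
        supply -= burn_per_epoch   # tokens burned by service purchases (demand side)
    return supply
```

With strong demand (burn 15 vs mint 10 per epoch), supply falls from 1000 to 950 over ten epochs; with weak demand (burn 2), it inflates to 1080. This is the "sustained inflationary pressure" risk noted above.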

2.5 Summary

  • The decentralized computing power network track has developed rapidly overall, and there are already many projects that can rely on product services to serve customers and generate certain income. The track has moved away from pure narrative and entered a stage of development that can provide preliminary services.
  • Weak demand is a common problem for decentralized computing power networks: long-term customer demand has not been well validated or explored. However, weak demand has not weighed much on token prices, and the several projects that have issued tokens have performed well.
  • AI is the main narrative of the decentralized computing power network, but not the only business. In addition to being used for AI training and inference, computing power can also be used for real-time rendering in cloud gaming, cloud phone services, and more.
  • The hardware heterogeneity of the computing power network is relatively high, and the quality and scale of the computing power network need to be further improved.
  • For C-end users, cost advantages may not be very obvious. For B-end users, in addition to cost savings, they also need to consider aspects such as service stability, reliability, technical support, compliance, and legal support. However, Web3 projects generally do not perform well in these aspects.

3 Closing thoughts

The massive demand for computing power brought about by the explosive growth of AI is undeniable. Since 2012, the computing power used in AI training tasks has been growing exponentially, currently doubling roughly every 3.5 months (compared to Moore's Law, which doubles every 18 months). Since 2012, the demand for computing power has grown more than 300,000-fold, far exceeding the roughly 12-fold growth Moore's Law would deliver over the same period. It is predicted that the GPU market will grow at a compound annual growth rate of 32% to over $200 billion within the next five years. AMD's estimate is even higher: the company expects the GPU chip market to reach $400 billion by 2027.
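The figures above can be cross-checked with simple doubling arithmetic (illustrative only; the 300,000x and 3.5-month numbers are taken from the text):

```python
import math

def growth_factor(months, doubling_months):
    """Total growth after `months` at a fixed doubling period."""
    return 2 ** (months / doubling_months)

# Months of 3.5-month doublings needed to reach ~300,000x:
months = math.log2(300_000) * 3.5   # about 64 months

# Moore's Law (18-month doubling) over that same span:
moore = growth_factor(months, 18)   # roughly 12x, consistent with the text
```

So the ~300,000-fold and ~12-fold figures are mutually consistent over a span of a bit more than five years.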


The explosive growth of artificial intelligence and other computationally intensive workloads, such as AR/VR rendering, has exposed structural inefficiencies in traditional cloud computing and centralized computing markets. Decentralized computing power networks theoretically offer a more flexible, cost-effective, and efficient solution, leveraging distributed idle computing resources to meet the market's huge demand. The combination of crypto and AI therefore has enormous market potential, but it also faces fierce competition from traditional enterprises, high entry barriers, and a complex market environment. Overall, among all crypto tracks, decentralized computing power networks are one of the verticals most likely to meet genuine, real-world demand.


The future is bright, but the road is tortuous. To achieve the vision above, many problems and challenges still need to be solved. In short, if a project only provides traditional cloud services at this stage, its profit margin is very small. On the demand side, large enterprises generally build their own computing power, while most individual C-end developers choose established cloud services; whether the small and medium-sized enterprises who would actually use decentralized computing resources will generate stable demand still needs further exploration and validation. On the other hand, AI is a vast market with an extremely high ceiling and enormous room for imagination. To tap that broader market, future decentralized computing power providers will need to move toward model/AI services, explore more crypto + AI usage scenarios, and expand the value they can create. At present, however, significant problems and challenges remain on the path into AI:

  • Price advantage is not prominent: The data comparison above shows that the cost advantage of decentralized computing power networks has yet to materialize. A likely reason is that for in-demand professional chips such as the H100 and A100, market dynamics keep hardware prices high. In addition, although decentralized networks can collect idle computing resources, the lack of economies of scale, high network and bandwidth costs, and significant hidden costs such as management and operational complexity further increase the cost of computing power.
  • The speciality of AI training: At the current stage there is a huge technical bottleneck in decentralized AI training, which can be seen directly in the GPU workflow. In training a large language model, GPUs first receive preprocessed data batches and perform forward and backward propagation to generate gradients; each GPU then aggregates the gradients and updates the model parameters, keeping all GPUs synchronized. The process repeats until all batches are processed or the predetermined number of epochs is reached. It involves a large amount of data transmission and synchronization, and there are as yet no good answers to questions such as which parallelization and synchronization strategies to use, how to optimize network bandwidth and latency, and how to reduce communication costs. Using a decentralized computing power network for AI training is not realistic at present.
  • Data security and privacy: During the training of large language models, every link involving data handling and transmission, such as data allocation, model training, and parameter and gradient aggregation, can affect data security and privacy, and the mechanism for protecting data privacy is all the more important. If the data privacy issue cannot be resolved, the demand side cannot truly scale.
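The data-parallel training loop described in the AI-training bullet can be sketched in miniature. The gradient below is a stand-in for a real forward/backward pass; the point is that the `all_reduce_mean` step, which every worker must complete each iteration, is where the bandwidth and synchronization costs concentrate.

```python
def local_gradients(params, batch):
    """Stand-in for a forward + backward pass: the 'gradient' of each parameter
    is its mean signed error on the local batch (illustrative, not a real model)."""
    return [sum(p - x for x in batch) / len(batch) for p in params]

def all_reduce_mean(grad_lists):
    """The synchronization step: average gradients across all workers.
    In a real cluster this is the bandwidth-hungry all-reduce collective."""
    n = len(grad_lists)
    return [sum(g[i] for g in grad_lists) / n for i in range(len(grad_lists[0]))]

def train_step(params, shards, lr=0.1):
    grads = [local_gradients(params, shard) for shard in shards]  # per-GPU, in parallel
    avg = all_reduce_mean(grads)                                  # must cross the network
    return [p - lr * g for p, g in zip(params, avg)]
```

Over a slow, high-latency decentralized network, the all-reduce comes to dominate wall-clock time, which is why decentralized training is called unrealistic today.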

From the most realistic perspective, a decentralized computing power network needs to consider both the current demand exploration and the future market space. It is important to find the product positioning and target customer base, such as focusing on non-AI or Web3 native projects first, starting from relatively marginal needs, and establishing an early user base. At the same time, continuously explore various scenarios of AI and crypto combination, explore the technological frontier, and realize the transformation and upgrading of services.

