Digital Virtual Human Service Management Measures Released for Public Comment: People-Centric, Defining Boundaries, Anchoring the Course
Written by: Zhang Feng
When AI-driven virtual anchors sell products 24/7, when “digital civil servants” in government service halls patiently answer questions, and when tireless “AI doctors” appear in medical science popularization, we are witnessing a new social landscape in which digital virtual humans are deeply embedded. Thanks to their low cost, strong interactivity, high efficiency, and round-the-clock availability, digital virtual humans have quickly become a key driver of the intelligent economy. From e-commerce live streaming to cultural tourism promotion, from medical science popularization to government consulting, their application scenarios are expanding with unprecedented breadth and depth.
However, rapid technological progress often precedes the establishment of rules. When virtual images can be indistinguishable from reality, when AI conversations may contain biases, and when autonomous evolving agents behave unpredictably, a series of sharp questions arise: Where exactly are the service boundaries of digital virtual humans? Who should be responsible for their actions? How can we ensure that technological development remains aligned with the “people-oriented” and “ethical” principles while pursuing efficiency and innovation? These are not only technical issues but also governance questions concerning social trust, ethical bottom lines, and long-term development. Recently, the Cyberspace Administration of China released the “Measures for the Management of Digital Virtual Human Information Services (Draft for Comments)” (hereinafter referred to as the “Measures”), which is a focused response to these pressing questions of the era.
The urgent need to define boundaries for digital virtual human services stems from the multiple, intertwined risks and challenges exposed during their development.
First, safety and ethical risks. Deep synthesis technology significantly lowers the threshold for identity forgery, spreading false information, and emotional deception, which may infringe on personal rights, disrupt social order, and even threaten national security.
Second, the risk of blurred responsibility. The behavior of digital virtual humans is driven by algorithms, and the responsibility chain among their designers, developers, operators, and users is complex. When problems occur, it is easy to fall into a dilemma of “algorithm black box” and responsibility vacuum.
Third, risks of digital divide and bias solidification. If training data for algorithms is biased, digital virtual humans may unconsciously amplify existing social prejudices or create new forms of discrimination in their services.
More profoundly, with the development of frontier technologies such as the Rotifer autonomous evolution protocol for intelligent agents, intelligent entities with self-learning and evolving capabilities may exhibit behaviors beyond preset goals, with long-term societal impacts full of uncertainty.
These risks do not exist in isolation; they are interconnected and point to a core contradiction: the enormous potential of technological progress versus the lagging and disconnected existing governance framework. Therefore, issuing the “Measures” is not only a “firefighting” response to specific chaos but also a foundational effort to ensure the healthy development of the digital economy, embodying the institutionalization of the core concept of “people-oriented and AI for good.”
In response to these challenges, the “Measures” establish a governance framework centered on “full-process regulation” and “transparent responsibility.” Its core strategies can be summarized as “set bottom lines, clarify subjects, strengthen supervision, and promote goodness.”
First, clearly delineate inviolable safety and ethical bottom lines. The Measures specify behaviors prohibited in digital virtual human services, including but not limited to endangering national security, harming public interests, infringing on lawful rights and interests, spreading false information, and disrupting economic and social order. This sets clear red lines for all market participants.
Second, clearly assign responsibility to each actor in the chain. The Measures specify responsibilities for digital virtual human service providers, technical supporters, content creators, and users, requiring providers to fulfill duties such as filing, labeling, content review, data security, and emergency response, thus closing the responsibility chain and enabling traceability.
Third, emphasize a “people-centered” service philosophy. The Measures require that the design, development, and application of digital virtual humans respect social morals and ethics, protect users’ right to know and to choose, avoid misuse of user data and excessive personalization, and ensure that technology genuinely serves human development.
Fourth, implement the “AI for good” principle, encouraging innovation within regulation. The Measures do not restrict technological development but define safe zones to provide stable expectations for responsible innovation, support industry-academia-research collaboration, and guide resources toward ethically aligned, welfare-enhancing applications. This combination aims to turn the abstract concept of “people-centered” into concrete, operable, and regulatable rules.
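To make the labeling and traceability duties above concrete, here is a minimal sketch of what a provider-side disclosure step might look like. All names here (`VirtualHumanReply`, `label_reply`, the provider and model identifiers) are hypothetical illustrations, not anything specified by the Measures themselves:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class VirtualHumanReply:
    """A single response produced by a digital virtual human."""
    text: str
    provider: str   # registered service provider (filing duty)
    model_id: str   # which model/agent generated the content

def label_reply(reply: VirtualHumanReply) -> str:
    """Attach a visible AI-generation disclosure with provenance details,
    in the spirit of the labeling and traceability duties described above."""
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    disclosure = (
        f"[AI-generated content | provider: {reply.provider} "
        f"| model: {reply.model_id} | {stamp}]"
    )
    return f"{reply.text}\n{disclosure}"

labeled = label_reply(VirtualHumanReply(
    text="Hello, I am a virtual service assistant.",
    provider="ExampleCo", model_id="vh-demo-1"))
print(labeled)
```

The point of the sketch is that disclosure becomes a mechanical, auditable step in the output pipeline rather than an afterthought, which is what makes the responsibility chain traceable in practice.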
The issuance and implementation of the “Measures” are expected to have a profound impact on the digital virtual human industry and the entire intelligent economy ecosystem.
In the short term, the industry may experience a “pain period” characterized by increased compliance costs and restrictions on some unregulated growth models. Companies will need to allocate resources for technical rectification, establish internal review mechanisms, and complete filing procedures. Some borderline applications will be forced to adjust or exit.
In the long run, the certainty brought by these regulations will far outweigh the short-term costs. First, it will greatly enhance industry credibility and social acceptance. Clear rules eliminate widespread public concerns about technology misuse, helping to build user trust, which is the social psychological foundation for large-scale industry development. Second, it will optimize the market competition environment. By removing low-quality and illegal competitors, market resources and user attention will be directed toward enterprises with genuine technological strength and compliance awareness, promoting high-quality supply. Third, it provides clear guidance for capital and R&D investments. Investors and research institutions can confidently allocate resources to fields aligned with policy directions and long-term social value, such as education, healthcare, elderly care, and cultural heritage.
Ultimately, a regulated, healthy, and sustainable digital virtual human industry ecosystem will become a solid foundation for deepening the "AI + " initiative and supporting the digital transformation of traditional industries and the creation of new intelligent economic models. From a macro perspective, this is also an important institutional exploration in China’s AI governance, contributing a “Chinese solution” that balances innovation and regulation to the global stage.
Although the “Measures” establish a basic regulatory framework, their specific implementation still faces significant issues and risks, including technological integration, responsibility attribution, lack of standards, and continuous technological evolution.
First, the complexity of technological regulation. Digital virtual human technology integrates AI, graphics rendering, natural language processing, and frontier technologies like blockchain and quantum networks, resulting in dynamic and complex behavior patterns. How to effectively identify violations without overreach or hindering innovation requires highly capable regulatory technology (RegTech).
Second, practical difficulties in responsibility attribution. For example, when a digital virtual human developed within an open-source ecosystem encounters issues, how to precisely allocate responsibility among open-source communities, model fine-tuners, application integrators, and end operators?
Third, the lack of a standardized system. There is a shortage of unified, detailed industry and national standards regarding identity identification, ethical evaluation, algorithm transparency, and performance testing for digital virtual humans, which may lead to inconsistent enforcement and fairness issues across regions.
Finally, the most fundamental risk lies in rapid technological iteration, especially with autonomous agents, multi-agent collaboration, and future integration with quantum computing, which will challenge the foresight and inclusiveness of existing rules. Rules need to be flexible yet firm, and balancing this is difficult. These risks remind us that governance is a dynamic process that cannot be achieved once and for all.
Looking ahead, governance of digital virtual human services will increasingly co-evolve with technological development.
First, governance will become more “technological” and “intelligent.” Regulators will leverage AI tools for supervision, such as developing deepfake detection platforms and establishing digital virtual human behavior monitoring networks, achieving “technology-driven regulation.” Blockchain may be used to create tamper-proof identity and behavior records, enhancing traceability.
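The tamper-proof record-keeping mentioned above can be illustrated without any blockchain at all: a simple hash chain, in which each record commits to the hash of its predecessor, already makes after-the-fact alteration detectable. This is a toy sketch of that idea, not a description of any actual regulatory system:

```python
import hashlib
import json

def append_record(chain: list, event: dict) -> dict:
    """Append an event, linking it to the previous record's hash so any
    later tampering breaks the chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    record = {
        "event": event,
        "prev_hash": prev_hash,
        "hash": hashlib.sha256((prev_hash + payload).encode()).hexdigest(),
    }
    chain.append(record)
    return record

def verify_chain(chain: list) -> bool:
    """Recompute every link; False means some record was altered."""
    prev = "0" * 64
    for rec in chain:
        payload = json.dumps(rec["event"], sort_keys=True)
        if rec["prev_hash"] != prev:
            return False
        if rec["hash"] != hashlib.sha256((prev + payload).encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True

chain = []
append_record(chain, {"actor": "vh-demo-1", "action": "reply", "ts": 1})
append_record(chain, {"actor": "vh-demo-1", "action": "reply", "ts": 2})
assert verify_chain(chain)            # intact chain verifies
chain[0]["event"]["action"] = "edited"  # simulated tampering
assert not verify_chain(chain)          # alteration is detected
```

A distributed ledger adds replication and consensus on top of this basic construction, but the traceability property the article points to comes from the hash linking itself.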
Second, open-source ecosystems will play a key role in compliant innovation. Healthy open-source communities can promote industry best practices, share compliance toolkits (like ethical review algorithm modules), lower compliance barriers for small and medium enterprises, and embed “AI for good” principles into core code.
Third, standards and certification systems will accelerate. Under the guidance of the “Measures,” industry associations and standards organizations are expected to develop comprehensive standards covering data, algorithms, applications, and evaluations, possibly establishing third-party ethical certification mechanisms as important market references.
Fourth, governance focus will shift from “post-event handling” to “prevention” and “mid-course intervention.” This includes bias audits of training data, ethical alignment of algorithm goals, sandbox testing of agent behaviors, and risk control at early stages. For frontier protocols like Rotifer emphasizing autonomous evolution, governance logic may draw on “safety fences” or “Constitutional AI” concepts, setting unbreakable core principles for self-evolving agents.
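As a rough illustration of the training-data bias audits mentioned above, a first-pass screen might simply flag groups whose share of the data deviates sharply from a target distribution. The attribute name, the uniform-share baseline, and the tolerance threshold below are all illustrative assumptions; real audits are far more involved:

```python
from collections import Counter

def representation_audit(records: list, attribute: str, tolerance: float = 0.2) -> dict:
    """Flag attribute values whose share of the data deviates from a uniform
    baseline by more than `tolerance` -- a crude first-pass screen, not a
    substitute for a full fairness audit."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    expected = 1 / len(counts)  # assumed baseline: uniform representation
    report = {}
    for value, n in counts.items():
        share = n / total
        report[value] = {
            "share": round(share, 3),
            "flagged": abs(share - expected) > tolerance,
        }
    return report

# Hypothetical speech-corpus metadata, heavily skewed toward one dialect.
data = ([{"dialect": "mandarin"}] * 80
        + [{"dialect": "cantonese"}] * 15
        + [{"dialect": "other"}] * 5)
report = representation_audit(data, "dialect")
```

Here the over-represented and severely under-represented groups get flagged before training begins, which is exactly the shift from "post-event handling" to "prevention" that the paragraph describes.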
Ultimately, we will not see an industry constrained by rules but one that advances steadily on the path of “responsible intelligence.” Digital virtual humans will truly become human partners in improving productivity, enriching cultural life, and optimizing public services, rather than uncontrollable risks. This process requires ongoing dialogue and joint efforts among policymakers, technologists, enterprises, and society, anchored by the enduring value of “people-centeredness.”