[Rule of Law] The legal risks behind "AI workers" should not be underestimated
Liu Shaohua
Recently, a game media company in Shandong has attracted attention for an attempt to train departing employees into “AI people” so they can continue working. The company’s employee, Xiaoyu, told reporters that the colleague in the incident has truly resigned; this attempt was made with his consent, and he himself also found it quite amusing. Xiaoyu said that before resigning, the colleague was an HR specialist, and his digital double can currently handle simple tasks such as consultations, making invitations, and creating PPTs and spreadsheets.
On the surface, this is a harmless technical trial. The resigned employee “consented” and “found it amusing,” while the company obtains low-cost, high-efficiency “digital labor.” But if we look beyond the appearance, this seemingly gentle attempt actually touches a gray area between workplace rights and technical ethics in the AI era, and is something we should review calmly.
From a legal perspective, although the incident appears to sidestep compliance risks through "the employee's own consent," that is no reason for complacency. The departed employee's chat records, work emails, personal work habits, and the like all fall within the scope of personal information as defined by the Personal Information Protection Law; they are not the company's "assets." A former employee who casually signs away these rights simply because it seems "fun" may well be creating security risks for themselves: a "digital double" is readily associated with the real person behind it, so if the double infringes on others' rights, the individual concerned may also be required to bear joint liability.
In addition, we need to ask whether this "consent" is truly informed and voluntary. In the employer-employee relationship, employees are often in a weaker position. At the moment of resignation, might "consent" be shaped by unspoken expectations to "part on good terms," or by concerns about future references and industry reputation? And where does this "consent" end? Does it cover only the current, rather clumsy form of the double, or also a "higher-end version" that future iterations of the technology may produce, one that simulates the person's thinking and emotions far more deeply? When someone's work habits, communication style, and even parts of their reasoning are turned into data and stored permanently, does this "digital immortality" deprive workers of the right to leave the past behind and begin a new life?
By turning departing employees into "AI people," the company blurs the line between "person" and "tool," further "objectifying" its workers. Employees are no longer individuals with distinct emotions, creativity, and unrepeatable qualities, but "functional modules" that can be decomposed, analyzed, recombined, and reused indefinitely. When a company can easily "distill" an employee's experience and style into AI, the signal it sends is a cold one: individuals are replaceable, and their core value lies only in what can be turned into data. Over time, the workplace may devolve into a dreary algorithmic production line, with workers' agency seriously eroded.
The "Administrative Measures for Information Services of Digital Virtual Humans," currently open for public comment (hereinafter the "Measures"), provides important guidance for regulating such conduct. The Measures emphasize that providers of digital human services must obtain personal consent and establish mechanisms such as risk identification and tiered, graded, and categorized controls, with particular protection for vulnerable groups such as minors. This is a reminder that even where "consent" has been obtained, companies must still shoulder the corresponding management responsibilities to ensure that "digital doubles" are neither misused nor taken beyond their agreed boundaries. Otherwise, once a "digital double" infringes on others' rights or the underlying data is leaked, not only may the individuals involved be drawn into disputes, but the company itself will face significant legal risk.
In the end, technological development is a double-edged sword, and the handle should be held by people themselves. Faced with the wave of artificial intelligence, workers need to learn to protect their data rights and proactively sign data-use limitation clauses when resigning. Companies need to find a balance between pursuing efficiency and respecting personhood. Regulatory authorities also need to accelerate the improvement of relevant laws and regulations, so as to build a line of defense for personal dignity in the digital age.
This column represents only the personal views of the author.
(Editor: Dong Pingping)