[Rule of Law] The legal risks behind "AI workers" should not be underestimated


Liu Shaohua

Recently, a game media company in Shandong attracted attention for attempting to turn a departing employee into an "AI person" who keeps working after leaving. Xiaoyu, an employee of the company, told reporters that the colleague in question has indeed resigned, that the attempt was made with his consent, and that he himself found it amusing. According to Xiaoyu, the colleague was an HR specialist before resigning, and his digital double can currently handle simple tasks such as answering inquiries, sending invitations, and creating PPTs and spreadsheets.

On the surface, this is a harmless technical trial: the departed employee "consented" and "found it amusing," while the company gains low-cost, high-efficiency "digital labor." But beneath the surface, this seemingly gentle experiment touches a gray area between workplace rights and technology ethics in the AI era, and deserves a calm, careful look.

From a legal perspective, although the arrangement appears to sidestep compliance risks through "the employee's own consent," that consent does not make the risks disappear. The former employee's chat records, work emails, personal work habits, and the like all constitute personal information as defined by the Personal Information Protection Law; they are not the company's "assets." If a departing employee casually signs away these rights simply because the idea seems "fun," real security risks may follow: a "digital double" makes it easy to link the digital persona back to the real person, and if the double infringes on others' rights, the person it was modeled on may be required to bear joint liability.

In addition, we also need to ask whether this so-called “consent” is truly sufficient and voluntary. In labor-employer relationships, employees are often in a relatively disadvantaged position. At the time of resignation, could “consent” be influenced by unspoken rules like “let’s part on good terms,” or by concerns about future recommendation letters and industry reputation? Where is the boundary of this “consent”? Is it limited to only the current form of a “clumsy” double, or does it include a “higher-end version” that may emerge after future technological iterations—one that more deeply simulates their thinking and emotions? When someone’s work habits, communication methods, and even parts of their reasoning logic are turned into data and permanently stored, does this “digital immortality” deprive workers of the right to “say goodbye to the past and begin a new life”?

By turning departing employees into “AI people,” the company blurs the line between “person” and “tool,” further “objectifying” workers. Employees are no longer individuals with distinct emotions, creativity, and unrepeatable qualities, but rather “functional modules” that can be broken down, analyzed, recombined, and reused indefinitely. When a company can easily “distill” an employee’s experience and style into AI, the signal it sends is cold: individuals are replaceable, and their core value lies in the parts that can be turned into data. Over time, the workplace may devolve into a tasteless algorithmic production line, with workers’ agency seriously weakened.

The "Administrative Measures for Information Services of Digital Virtual Humans," currently open for public comment (hereinafter the "Measures"), provide important guidance for regulating such conduct. The Measures emphasize that providers of digital human services must obtain personal consent and establish mechanisms such as risk identification and tiered, graded, and categorized controls, with special protection for groups such as minors. This is a reminder that even with "consent" in hand, companies must still shoulder corresponding management responsibilities to ensure that "digital doubles" are neither misused nor allowed to overstep their boundaries. Otherwise, once a "digital double" infringes on others' rights or related data is leaked, not only may the individuals involved be dragged into disputes, but the company itself will face significant legal risks.

In the end, technological development is a double-edged sword, and the handle should be held by people themselves. Faced with the wave of artificial intelligence, workers need to learn to protect their data rights and proactively sign data-use limitation clauses when resigning. Companies need to find a balance between pursuing efficiency and respecting personhood. Regulatory authorities also need to accelerate the improvement of relevant laws and regulations, so as to build a line of defense for personal dignity in the digital age.

This column article only represents the personal views of the author.

(Editor: Dong Pingping)

【Disclaimer】This article only represents the author's personal views and is not related to Hexun. The Hexun website remains neutral regarding the statements and judgments of opinion made in the article, and provides no express or implied guarantee regarding the accuracy, reliability, or completeness of the content contained herein. Readers should use this information for reference only and assume all responsibility themselves. Email: news_center@staff.hexun.com