"Departed colleagues have been digitized and become immortal through technology."
Recently, a game media company in Shandong tried turning a departed employee into an AI digital human so the work could continue, and the topic #公司用AI复刻离职员工继续工作# ("a company uses AI to replicate a departed employee to keep working") quickly shot up the trending list, drawing widespread attention.
According to media reports, a current employee described it as a bold experiment by the company. The colleague in question had genuinely resigned; the replica was made with his consent, and he himself found it rather amusing. Before leaving, he had been an HR specialist. The digital doppelgänger can currently handle simple tasks such as answering inquiries, making outreach and booking arrangements, and producing basic PowerPoints and spreadsheets. It is still a bit clumsy and can only follow certain simple instructions.
Although the people involved don't mind, the internet erupted: "A colleague has been processed into something else," "All my hard work turned into digital fuel," "On the bright side, this counts as digital immortality," and so on. Unease, helplessness, and strained smiles became the collective mood of the comments section. So why does the digital doppelgänger of a departed employee make netizens so tense?
I
“Digital employees” are not a sudden whim, but a technology that is gradually taking shape.
According to reports, the AI community already has a project called "colleague.Skill," which uses a departed colleague's work data to generate an AI "digital colleague" that can take over that person's work. It became extremely popular as soon as it was released, and people even built derivative products like "boss.Skill" and "ex.Skill."
As for how advanced the technology really is: not very. Professionals who took it apart say that today's "digital employees" are essentially a prompt written to an agent-skill specification plus a web-scraping engineering project. It is like handing an actor a script so they can perform in the script's style. The agent has no memory, does not remember what it said yesterday, and cannot distill "professional knowledge and judgment logic."
Put simply, it’s not fundamentally different from a parrot repeating words. At least for now, such digital doppelgängers can’t truly “align granularity,” “close the project loop,” or “connect the underlying logic” like a real employee; they also can’t “take the blame.”
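To make that description concrete, here is a minimal sketch in Python of what such a stateless "digital employee" amounts to: a persona prompt assembled from scraped work data, re-sent on every request, with no memory carried between calls. The function names and the `call_llm` hook are illustrative assumptions for this sketch, not part of any project mentioned in the article.

```python
from typing import Callable

def build_persona_prompt(name: str, scraped_snippets: list[str]) -> str:
    """Assemble a static, agent-skill-style persona prompt from scraped work data."""
    examples = "\n".join(f"- {s}" for s in scraped_snippets)
    return (
        f"You are a digital stand-in for {name}, a former HR specialist.\n"
        f"Imitate the tone and phrasing of these past messages:\n{examples}\n"
        "Answer work questions in that style."
    )

def answer(persona_prompt: str, question: str,
           call_llm: Callable[[str, str], str]) -> str:
    # Every call starts from the same static prompt. Nothing said in a previous
    # call is remembered, so no "professional judgment" accumulates over time.
    return call_llm(persona_prompt, question)

if __name__ == "__main__":
    prompt = build_persona_prompt(
        "the departed colleague",
        ["Noted, I'll book the meeting room.", "Attached is the headcount sheet."],
    )
    # call_llm is a placeholder for whatever chat-completion API one might wire in;
    # here it simply echoes, to keep the sketch self-contained and runnable.
    echo = lambda system, user: f"[styled reply to: {user}]"
    print(answer(prompt, "Can you schedule interviews for Friday?", echo))
```

The point of the sketch is that all the "personality" lives in one static prompt built from scraped text; swap in a different person's snippets and the same machinery produces a different doppelgänger, which is why it resembles a parrot more than a colleague.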
So why does it still make people in the workplace nervous?
This may be an "uncanny valley" effect: when AI gets close enough to being "human" yet is clearly not "human" in the places that matter, people instinctively feel discomfort and unease. The discomfort comes from a perceptual mismatch: it mimics human expression, tone, and behavioral logic, but lacks the genuine understanding and experience to back them up, producing a state that feels at once familiar and strange.
Although digital doppelgängers are not that smart yet, everyone senses that they are different from ordinary robots. A doppelgänger is the lingering image a departed colleague leaves behind, "similar in form but detached in spirit": it can neither be treated purely as a tool nor truly regarded as a person.
That’s why people feel uneasy—not because the thing itself is so powerful, but because a boundary is loosening: can “human” and “non-human” still be clearly defined?
II
The AI boom has brought with it a trend: human beings are losing control over the self. The personality, patterns of thought, and ways of doing things that we consider uniquely ours, those "human qualities," seem to be turning into data.
This is not an unfounded fear. Recently, discussion overseas has sprung up around "surveillance capitalism": large companies in the technology, information, and social media sectors systematically extract individuals' data and claim it as their own. The sheer scale of this collection creates the conditions for powerful algorithms and machine-learning systems to predict people's behavior. The surveillance is also extremely one-sided: one side can see the other, while the other cannot see back.
People in the workplace find it hard to stay outside this process: ordinary people are turned into data and absorbed into the systems of platforms and organizations. Habits of expression, paths of judgment, and communication styles are all broken down into reusable modules. Experience is no longer just "what you have lived through" but becomes an "asset" that can be copied and called upon. A person seems to be emptied out into a set of data collections that can be accessed at any time: the model remains while the subject disappears.
In this process the company benefits, and employees who have not yet resigned may benefit too (they get the virtual companionship of a former colleague). But the person who is truly "retained," in the form of data, is gradually excluded from the distribution of value.
As a result, humans seem to become something of a consumable. Once the data extraction is finished, they have no value. Just imagining this scene is already very unsettling.
Many people will surely think: can we protect our rights through the law?
Legal professionals have already pointed out that this kind of data distillation may be unlawful. Former employees' chat records, work emails, personal work habits, and the like fall under the definition of personal information in the Personal Information Protection Law, and any private communications involved may also constitute sensitive personal information. Collecting and using such data to train AI without the employee's consent directly infringes the employee's rights over the collection, use, and processing of their personal information.
At the same time, under the Interim Measures for the Administration of Generative Artificial Intelligence Services, providers of generative AI services must carry out training-data processing activities such as pre-training and optimization training in accordance with the law; where personal information is involved, they must obtain the individual's consent or fall within other circumstances stipulated by laws and administrative regulations.
Objectively speaking, however, there are quite a few grey areas. Private chats and email inboxes might be recognized as private, but what about remarks in group chats, written reports, and comments made in meetings? If the content is job-related and was produced in public settings, can the company be required not to capture it?
To be fair, what a company finds valuable is probably not someone's "speaking style"; imitating that is more of a novelty act built on in-jokes, and it is not what bosses care about most. What is most likely to be targeted are the parts that can be condensed into processes, judgment, and experience.
But those condensed parts are closely tied to the job itself, and their "ownership" is probably hard to define. You can't say that once an employee resigns they take away every meeting minute and discussion they ever took part in, right? Never mind AI learning from them: even before resigning, haven't employees always been asked to do a proper handover?
III
Of course, this doesn't mean the workplace is "falling apart." Work outputs may be public in nature, but the personal information produced along the way remains private. For example, a written report may be public, but what about the personal details left in chat logs, such as "working overtime at the coffee shop next to my home" or "I've been feeling unwell lately and will submit it a bit late"? Can those be clearly identified as personal information and used to refuse the company's misuse of them?
This may seem like an extremely fine-grained question, yet it is precisely these details that determine whether the boundary can hold. Only by distinguishing clearly, in these seemingly trivial scenarios, what may be used and what may not, and what counts as work and what counts as privacy, can individuals avoid being bundled up and indiscriminately "distilled" in the process of being turned into data.
Gradually clarifying these questions through legal rules is at least a kind of reassurance: working people may find it hard to avoid having their value "squeezed out," but at least they can avoid being "processed into something else" and keep their subjectivity as human beings.
Think about it carefully: the jobs with the most to fear from digital doppelgängers are still those built on the traditional work model: mechanical content, highly standardized processes, and work that relies mainly on accumulated experience. Such work is ultimately the easiest to break down; once it has been "distilled," everything is laid bare and it can be replaced quickly. And this threat is not created by digital doppelgängers alone: as technology keeps advancing, the danger will come regardless.
The discussion this digital doppelgänger has triggered is, in essence, a reminder that we need to rescue ourselves as soon as possible. It could even be read as an inverted reminder: why not "process ourselves" first?
This doesn't mean we all need to replicate ourselves; that would be too strange. Rather, it means gaining the ability to use AI ourselves: proactively organizing and structuring the parts of our work that can be extracted, and defining and using them on our own terms. Which experiences are reusable, which judgments can be condensed into methods, which processes can be delegated to tools.
In one sentence: hand the parts that are copyable to the system, and keep the parts that are irreplaceable with yourself.
It is like the recent "shrimp-raising craze": although it cooled down over safety concerns, the mass enthusiasm it sparked reflected a positive attitude, not passively waiting to be weeded out by technology, but actively participating, understanding it, using it, and finding tools that can take over mechanical work as soon as possible.
This might also be how we respond to every AI challenge: free ourselves quickly from tedious, process-driven work, reclaim our "humanity," find the unpredictable creativity that makes us human, and resist mechanical imitation built on big data.
We have to recognize that the workplace is already being reshaped: AI won't take your job, but people who use AI will. Reshaping yourself doesn't mean turning yourself into data; it means becoming someone who cannot be defined by data.
Source of this article: The Paper (澎湃新闻)