MIT Professor: AI Might Replace Your Financial Advisor—But There's a Major Hurdle


Financial experts say the financial capabilities of artificial intelligence platforms are improving steadily, and that in the future AI will very likely be able to replace human financial advisors.

However, they believe that compared with human advisors, artificial intelligence has a major shortcoming: a lack of fiduciary duty. Moreover, the solution to this legal gray area seems far off.

Fiduciary duty is a legal obligation that many financial advisors and other professionals (such as lawyers and doctors) owe to their clients. In practice, it means putting clients’ best interests above their own.

Andrew Lo, professor of finance and director of the Laboratory for Financial Engineering at the MIT Sloan School of Management, said: “The problem we need to solve isn’t whether AI has enough expertise. The answer today is that AI clearly has (financial) expertise.”

Lo said: “What it doesn’t have is that kind of fiduciary duty. It doesn’t have the ability to bear the same degree of consequences when it makes mistakes as human advisors do.”

Lo said that advisors who violate fiduciary duty may face quite serious consequences, including regulatory penalties, civil liability, and criminal charges.

He said that if there is no responsibility or legal liability, the idea of putting clients’ interests ahead of one’s own “has no teeth.”

A “pending” legal question

Many people are apparently seeking financial advice from large language models—such as OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini.

According to a public opinion survey released last September by Intuit Credit Karma, among Americans who have used generative AI, 66% said they have used it to get financial advice. Among Millennials and Gen Z, the figure is as high as 82%.

The survey shows that about 85% of respondents who used generative AI to obtain financial advice took action according to the advice provided. The survey covered 1,019 adults.

Sebastian Bensol, senior research fellow at the Information Law Institute at New York University School of Law, said: “People are looking to these services for all kinds of advice, and they really are getting advice, and that seems like a huge regulatory openness problem.”

Bensol said: “Who is truly responsible? If there isn’t a company with fiduciary duty standing behind people, can they really rely on a product to do this? This really is unresolved.”

Why you shouldn’t blindly trust AI—or humans

Lo said that even so, AI still has some good application scenarios in financial planning.

He said that AI is “very good” at providing online resources explaining all sorts of financial concepts that ordinary people don’t understand well. For example, if someone wants to understand basic issues related to health insurance, AI can typically provide a reliable overview.

Lo said that although AI outputs in many areas of finance can be complex, consumers generally should not blindly trust answers about personal household finance matters.

He said: “When it comes to calculations involving very, very specific personal circumstances, you have to be very, very careful. And what particularly worries me about large language models is that no matter what you ask, they always give an answer that sounds authoritative, even when it isn’t true.”

He said that in that sense, repeatedly verifying AI’s answers is “very necessary.”

Lo said that, perhaps surprisingly, AI is not good at financial calculations—so it should generally be avoided for any financial planning question that hinges on precise numbers, such as taxes.


In a social media post this March, James Burnham, a legal and government affairs official at Elon Musk’s xAI, said the company’s AI platform Grok “isn’t tax advice, so you have to verify it yourself.”

Of course, many human financial advisors provide recommendations to clients, and then the client decides whether to act on them.

Lo said: “I think that’s how I look at large language models: they can be very, very useful in providing different options and describing how those options work, but you should always remember that the advice they give you may be wrong.”

“But I think that’s also true for human financial advisors,” he said.

Not all human financial advisors have fiduciary duty

In the United States, not every financial advisor owes clients a fiduciary duty.

The financial advisory field is a minefield of differing legal relationships. The obligations owed to a consumer can vary depending on whether the person they are dealing with is a stockbroker, a registered investment adviser, an insurance agent, or another type of intermediary.

For example, a U.S. Department of Labor rule issued during the Biden administration tried to impose fiduciary duty on intermediaries when they recommend moving funds from a 401(k) plan to an individual retirement account—an action that may involve hundreds of thousands of dollars.

However, that rule recently became invalid after the Trump administration stopped defending it in court—meaning many financial intermediaries are not bound by fiduciary duty when making rollover recommendations. Therefore, legal experts advise consumers to be cautious about such rollover recommendations, because there are potential conflicts of interest.


Bensol of New York University raised a similar legal dilemma with AI advice: because the major AI companies are based mainly in the U.S., advice telling investors to put retirement savings into U.S. stocks could be viewed as proprietary trading or a financial conflict of interest.

Jiaying Jiang, an associate professor at the University of Florida’s Levin College of Law who researches AI and fiduciary duty, said that even so, companies providing AI services do not appear to be compensated for giving advice to retail investors, so they do not owe a fiduciary duty.


However, Jiang said that financial advisors who owe fiduciary duty to clients may violate that duty if they use AI.

She said that for example, if an advisor uses AI to make a recommendation to a client, but that recommendation doesn’t align with the client’s best interests, then it is the advisor who is liable, not the company that supports the AI platform.

Lo concluded that he believes government policies need to change so that consumers who seek financial advice from AI are provided with fiduciary duty protections.

Lo said that until then, “we can’t reach the level where we can fully delegate these (financial) decisions.”

“I do believe this will eventually happen,” he said.


Editor in charge: Zhang Jun SF065
