A top law firm that charges more than $2,000 per hour saw its court filings exposed for "AI hallucinations and a string of errors."

ChainNews ABMedia

Top U.S. law firm Sullivan & Cromwell has publicly apologized to a federal judge after the bankruptcy-court filings it submitted were found to be riddled with AI-generated errors, including fake case names, fabricated citations, and non-existent statutes. The lapses were exposed by opposing counsel. Ironically, the firm's partners bill $2,000 per hour, yet even basic review and proofreading were skipped.

Top law firm screws up! Exposed for using AI to write documents riddled with errors

The incident arose in a bankruptcy case before the U.S. Bankruptcy Court in Manhattan involving the Cambodian conglomerate Prince Group. Sullivan & Cromwell appeared as counsel for the liquidator appointed by the authorities of the British Virgin Islands, but the filings it submitted contained more than thirty AI-generated errors.

These errors were not caught by the firm internally; they were revealed in a public filing by Boies Schiller Flexner, the opposing firm in the case. The errors included citations to case names that simply do not exist, quotations that were never said or written, and even wholly invented provisions of the U.S. Bankruptcy Code.

In a letter dated April 18 to Judge Martin Glenn, Andrew Dietderich, head of global restructuring at Sullivan & Cromwell, admitted that some of the errors were AI hallucinations.

Internal review reduced to a formality, even at $2,000 per hour?

In the letter, Dietderich acknowledged that the firm has "comprehensive policies and training requirements" for the use of AI tools: before lawyers are granted access, they must complete training courses that explicitly require them to "believe nothing and verify everything personally." Those policies, however, were not followed when the document in question was prepared, and the secondary review process meant to provide oversight failed to catch any of the errors.

Given that the partners bill more than $2,000 per hour, the matter has sparked widespread discussion in legal circles. The firm said in the letter that, after discovering the errors, it conducted a comprehensive review of every other document in the case, confirmed that the AI hallucinations appeared only in that one filing, and subsequently submitted a corrected version.

AI hallucinations hit the legal world—lawyers’ ethical responsibilities once again under scrutiny

This incident is not the first time the legal industry has made headlines due to AI mishaps. In 2023, two lawyers in Manhattan, New York, were fined $5,000 by a federal judge for submitting to the court a legal brief full of fabricated cases generated by ChatGPT. In recent years, there have been dozens of cases where judges sanctioned lawyers for using AI to conduct legal research and draft documents without sufficiently verifying the content.

The American Bar Association (ABA) has made clear that lawyers must exercise caution when using AI tools and that they bear an ethical duty to ensure the accuracy of all documents filed with the court. The law does not currently prohibit lawyers from using AI, but the duty to verify the output still rests with the individual lawyer.

Trump’s favorite! Esteemed old-line firm apologizes

Sullivan & Cromwell was founded more than a century ago and is one of the oldest and most prestigious firms in U.S. legal history, with more than 900 lawyers and a worldwide reputation in M&A, corporate governance, litigation, and private equity. The firm has also drawn attention recently for representing U.S. President Trump in multiple appeals.

This AI blunder has undoubtedly left a hard-to-ignore crack in the elite firm's brand, and it sounds yet another warning bell for the entire legal industry.

The article "Top law firm charges over $2,000 per hour; court filing exposed for 'AI hallucinations and a string of errors'" first appeared on ChainNews ABMedia.

