Over Half of US Federal Judges Use AI

(MENAFN) A majority of US federal judges have quietly integrated artificial intelligence into their daily judicial work, according to new research from Northwestern University — a finding that is drawing urgent scrutiny from legal experts over the risks such tools pose to the integrity of the courts.

The study, based on survey responses from 112 federal judges drawn from a randomized sample of 502 officials spanning bankruptcy, magistrate, district, and appellate benches, found that 60% of federal judges now employ at least one AI tool in some capacity. Roughly 22% reported using such tools on a daily or weekly basis.

Legal research emerged as the most prevalent application, cited by 30% of respondents, followed by document review at 16%. Drafting and editing were also listed among common uses. The findings land at a particularly fraught moment: AI-generated errors — most notably fabricated legal citations — have already rattled confidence in attorney filings across multiple jurisdictions.

The survey revealed a sharp divide in how courts are managing the technology internally. Approximately one in three judges said they actively permit or encourage AI use within their chambers, while 20% have instituted formal prohibitions. More strikingly, over 45% of judges surveyed reported never having received any AI-related training from court administration — a gap critics say is untenable given the stakes involved.

Legal scholars are sounding the alarm. Eric Posner, a law professor at the University of Chicago, offered a pointed warning: “Judges make decisions that are very important to people and resolve significant disputes. They cannot gamble with a technology that is not fully understood and is known to hallucinate.”

Not all voices are cautionary, however. Advocates of judicial AI adoption contend the technology holds genuine promise for easing crushing caseloads and streamlining court operations. Christopher Patterson, a chief judge in Florida, offered a measured endorsement: “We are cautious but early results are very positive. We are assessing accuracy, suitability, and time savings.”

The debate is unfolding against a backdrop of escalating disciplinary action. In March, New York judges publicly called on attorneys to verify all AI-generated citations after multiple briefs were found to contain wholly fabricated case references. Media outlets reported in December that hallucinated citations had become a systemic problem across the legal profession, and the month prior, several attorneys faced financial sanctions after submitting filings riddled with hundreds of false AI-generated references.

The concerns extend well beyond courtroom walls. Experts globally are raising alarms about AI’s expanding footprint in high-stakes decision-making, warning that its documented tendency to generate false or misleading outputs makes its use in matters of legal consequence — where rulings can determine livelihoods, liberty, and lives — a profound accountability challenge that existing oversight frameworks are ill-equipped to address.
