Over Half of US Federal Judges Use AI
(MENAFN) A majority of US federal judges have quietly integrated artificial intelligence into their daily judicial work, according to new research from Northwestern University — a finding that is drawing urgent scrutiny from legal experts over the risks such tools pose to the integrity of the courts.
The study, based on survey responses from 112 federal judges drawn from a randomized sample of 502 officials spanning bankruptcy, magistrate, district, and appellate benches, found that 60% of federal judges now employ at least one AI tool in some capacity. Roughly 22% reported using such tools on a daily or weekly basis.
Legal research emerged as the most prevalent application, cited by 30% of respondents, followed by document review at 16%. Drafting and editing were also listed among common uses. The findings land at a particularly fraught moment: AI-generated errors — most notably fabricated legal citations — have already rattled confidence in attorney filings across multiple jurisdictions.
The survey revealed a sharp divide in how courts are managing the technology internally. Approximately one in three judges said they actively permit or encourage AI use within their chambers, while 20% have instituted formal prohibitions. More strikingly, over 45% of judges surveyed reported never having received any AI-related training from court administration — a gap critics say is untenable given the stakes involved.
Legal scholars are sounding the alarm. Eric Posner, a law professor at the University of Chicago, offered a pointed warning: “Judges make decisions that are very important to people and resolve significant disputes. They cannot gamble with a technology that is not fully understood and is known to hallucinate.”
Not all voices are cautionary, however. Advocates of judicial AI adoption contend the technology holds genuine promise for easing crushing caseloads and streamlining court operations. Christopher Patterson, a chief judge in Florida, offered a measured endorsement: “We are cautious but early results are very positive. We are assessing accuracy, suitability, and time savings.”
The debate is unfolding against a backdrop of escalating disciplinary action. In March, New York judges publicly called on attorneys to verify all AI-generated citations after multiple briefs were found to contain wholly fabricated case references. Media outlets reported in December that hallucinated citations have become a systemic problem across the legal profession, and the month prior, several attorneys faced financial sanctions after submitting filings riddled with hundreds of false AI-generated references.
The concerns extend well beyond courtroom walls. Experts globally are raising alarms about AI’s expanding footprint in high-stakes decision-making, warning that its documented tendency to generate false or misleading outputs makes its use in matters of legal consequence — where rulings can determine livelihoods, liberty, and lives — a profound accountability challenge that existing oversight frameworks are ill-equipped to address.