By Christopher Mims
If you happen to be on a Texas highway sometime this summer, and see a 50,000-pound semi truck barreling along with nobody behind the wheel, just remember: A self-driving truck is less likely to kill someone than one driven by a human.
At least that's what Chris Urmson, chief executive of autonomous-vehicle software maker Aurora Innovation, insists.
Similar logic applies in a completely different field: legal arbitration. Bridget Mary McCormack, former chief justice of the Michigan Supreme Court and now CEO of the American Arbitration Association, thinks her organization's new AI Arbitrator will settle some disputes better than most humans would.
Insurance companies have been making algorithmic decisions since before it was called artificial intelligence. Along the way, they have been sued for bias and forced to change how they do business. Early on, regulators made it clear that their AI-based systems would be held to the same standards as human ones. This has pushed many insurance companies to make their algorithms "explainable": They show their work, rather than hiding it in an AI black box.
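In practice, "showing your work" often means attaching machine-readable reason codes to every decision, so an auditor or regulator can see exactly which rule produced which outcome. Here is a minimal sketch of that idea in Python; the field names and underwriting thresholds are hypothetical, invented purely for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    """An auditable decision record: the outcome plus the reasons behind it."""
    approved: bool
    reasons: list[str] = field(default_factory=list)

def underwrite(applicant: dict) -> Decision:
    """Toy underwriting rules (hypothetical thresholds, for illustration only).

    Every branch that influences the outcome appends a human-readable
    reason code, so the decision can be explained and audited later.
    """
    reasons = []
    approved = True
    if applicant["claims_last_3_years"] > 2:
        approved = False
        reasons.append("DENY: more than 2 claims in the last 3 years")
    if applicant["years_licensed"] < 1:
        approved = False
        reasons.append("DENY: licensed for less than 1 year")
    if approved:
        reasons.append("APPROVE: no disqualifying factors found")
    return Decision(approved, reasons)

print(underwrite({"claims_last_3_years": 3, "years_licensed": 5}))
```

The point of the record isn't sophistication; it's that the explanation is generated at decision time, not reconstructed after a lawsuit.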
Unlike many of the hype men who say we're mere years away from chatbots that are smarter than us, the people building these decision-making systems go to great lengths to document their "thought" processes, and to limit them to areas where they can be shown to be capable and reliable.
Yet many of us still prefer the judgment of a human.
"You go to a court, and a judge makes a decision, and you don't have any way to see the way her brain worked to get to that decision," says McCormack. "But you can build an AI system that is auditable, and shows its work, and shows the parties how it made the decisions it made."
We are on a philosophical fence, she says: We tolerate the opacity of human decision-making despite years of research showing our own fallibility. Yet many of us aren't ready to believe an automated system can do any better.
The auditor
People are at least as concerned about AI as they are excited about it, says the Pew Research Center. And rightly so: The long history of computer-based decision-making hasn't exactly been a victory march.
Sentencing algorithms used by court systems proved racially biased. Teacher-evaluation software promised accountability and failed to deliver it.
"When I wrote my book 'Weapons of Math Destruction' 10 years ago, I made the point, which was really true at the time, that a lot of these systems were being put into place as a way to avoid accountability," says Cathy O'Neil, who is an algorithmic auditor.
But those early tries were important, even if they failed, she adds, because once a process is digitized, it generates unprecedented amounts of data. When companies are forced to turn over internal records of what their algorithms have been up to, whether to regulators or to opposing counsel, the result is a kind of involuntary transparency.
O'Neil probes decision-making software to determine whether it's operating as intended, and whom it might be harming. On behalf of plaintiffs who might be suing over anything from financial fraud to social-media harm, she examines piles and piles of output from the software.
"One of the most exciting moments of my job is to get the data," she says. "We can see what these people did and they can't deny it -- it's their data."
She focuses on the impacts of an algorithm -- the distribution of the decisions it makes, rather than how it arrived at those decisions -- which is why her methods have changed little even as LLMs and generative AI have taken over the field.
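Auditing impacts rather than internals can be as simple as comparing outcome rates across groups in a system's own decision logs. Here is a minimal sketch of that kind of check, using made-up data and the "four-fifths" threshold familiar from U.S. employment law; it is not O'Neil's actual methodology, just the flavor of it:

```python
from collections import Counter

# Hypothetical decision logs: (group, outcome) pairs pulled from an
# algorithm's output. Real audits work on the same shape of data at scale.
logs = [
    ("A", "approved"), ("A", "approved"), ("A", "denied"), ("A", "approved"),
    ("B", "denied"), ("B", "approved"), ("B", "denied"), ("B", "denied"),
]

totals = Counter(group for group, _ in logs)
approvals = Counter(group for group, outcome in logs if outcome == "approved")
rates = {g: approvals[g] / totals[g] for g in totals}

# Disparate-impact ratio: each group's approval rate vs. the best-off group.
# The "four-fifths rule" flags ratios below 0.8 for closer scrutiny.
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"group {group}: approval rate {rate:.0%}, ratio {ratio:.2f} [{flag}]")
```

Notice that nothing here requires opening the black box: the distribution of outputs is evidence on its own, which is why the approach survives the shift from older statistical models to LLMs.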
O'Neil is hopeful about the future of holding companies accountable for their algorithms, because it's one of the few areas of bipartisan agreement remaining in the U.S. At a recent Wall Street Journal conference, Sen. Lindsey Graham (R., S.C.) said that he believes tech companies should be held responsible for any harms their systems might cause.
Verify, then trust
In 2023, engineers at Aurora, the autonomous-truck software maker, looked at every fatal collision on Interstate 45 between Dallas and Houston from 2018 through 2022. Based on police reports, the team created simulations of every one of those crashes, and then had their AI, known as Aurora Driver, navigate the simulation.
"The Aurora Driver would have avoided the collision in every event," says Urmson, the CEO.
Yet the company also faced a recent setback. Last spring, Aurora put trucks on the road hauling freight in Texas with no driver behind the wheel. Two weeks later, at the request of one of its truck manufacturers, Aurora had to put human observers back in the cab. The company has emphasized that its software remains fully responsible for the driving, and it expects some of its trucks to be completely unmanned again by the middle of 2026.
At launch, the American Arbitration Association is offering its AI Arbitrator only for a specific kind of case for which today's AI is well-suited: those decided solely on documents. The system provides transparency, explainability and monitoring for deviations from what human experts might conclude in a case.
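Monitoring for deviations can amount to scoring the system's conclusions against a benchmark of decisions human arbitrators reached on the same cases. Here is a minimal sketch with hypothetical case IDs and labels; the AAA has not described its process at this level of detail:

```python
# Hypothetical benchmark: for each documents-only case, the outcome a panel
# of human arbitrators reached, next to what the automated system concluded.
benchmark = {
    "case-001": ("claimant", "claimant"),
    "case-002": ("respondent", "respondent"),
    "case-003": ("claimant", "respondent"),  # a deviation to investigate
}

deviations = [cid for cid, (human, ai) in benchmark.items() if human != ai]
agreement = 1 - len(deviations) / len(benchmark)

print(f"agreement with human experts: {agreement:.0%}")
for cid in deviations:
    print(f"deviation flagged for review: {cid}")
```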
Yet even though professional arbitrators, judges and law students have found the AI Arbitrator reliable in testing, no one has opted to use it in a real-life case since its debut last month. McCormack says this may be a matter of novelty and practitioners' lack of familiarity. (Both parties in a dispute must agree to use the bot at the start.)
For those asking the public to trust their AIs, it doesn't help that systems based on related technology are causing harm in ways all of us hear about daily, from suicide-encouraging chatbots to image generators that devour jobs and intellectual property.
"If you just ask people in the abstract, 'Why wouldn't you trust AI to make your disputes?' they automatically think, 'Well, why would I throw my dispute into ChatGPT?' " says McCormack. "And I am not recommending that."
In some areas such as human resources, even AI industry professionals argue that human emotion is important -- and AI decision-making might be too dispassionate.
But responsible development could start to tip the balance away from AI's harms, as long as we can verify that these systems do what their makers claim. Imagine a future in which a busy stretch of highway sees far fewer fatalities. Maybe even zero.
"It's easy to get lost in the statistics and the data," says Urmson, reflecting on the horrific -- and avoidable -- accidents his team has studied. "But when you start to think about the consequences for people's lives, it's a whole different thing."
Write to Christopher Mims at christopher.mims@wsj.com
(END) Dow Jones Newswires
December 12, 2025 22:00 ET (03:00 GMT)
Copyright (c) 2025 Dow Jones & Company, Inc.