by Anna Claire Vollers, Georgia Recorder (This article first appeared in the Georgia Recorder; republished with permission)
November 21, 2025
As states strive to curb health insurers’ use of artificial intelligence, patients and doctors are arming themselves with AI tools to fight claims denials, prior authorizations and soaring medical bills.
Several businesses and nonprofits have launched AI-powered tools to help patients get their insurance claims paid and navigate byzantine medical bills, creating a robotic tug-of-war over who gets care and who foots the bill for it.
Sheer Health, a three-year-old company that helps patients and providers navigate health insurance and billing, now has an app that allows consumers to connect their health insurance account, upload medical bills and claims, and ask questions about deductibles, copays and covered benefits.
“You would think there would be some sort of technology that could explain in real English why I’m getting a bill for $1,500,” said cofounder Jeff Witten. The program uses both AI and humans to provide the answers for free, he said. Patients who want extra support in challenging a denied claim or dealing with out-of-network reimbursements can pay Sheer Health to handle those for them.
In North Carolina, the nonprofit Counterforce Health designed an AI assistant to help patients appeal their denied health insurance claims and fight large medical bills. The free service uses AI models to analyze a patient’s denial letter, then look through the patient’s policy and outside medical research to draft a customized appeal letter.
Other consumer-focused services use AI to catch billing errors or parse medical jargon. Some patients are even turning to AI chatbots like Grok for help.
A quarter of adults under age 30 said they used an AI chatbot at least once a month for health information or advice, according to a poll the health care research nonprofit KFF published in August 2024. But most adults said they were not confident that the health information was accurate.
State legislators on both sides of the aisle, meanwhile, are scrambling to keep pace, passing new regulations that govern how insurers, physicians and others use AI in health care. Already this year, more than a dozen states have passed laws regulating AI in health care, according to Manatt, a consulting firm.
“It doesn’t feel like a satisfying outcome to just have two robots argue back and forth over whether a patient should access a particular type of care,” said Carmel Shachar, assistant clinical professor of law and the faculty director of the Health Law and Policy Clinic at Harvard Law School.
“We don’t want to get on an AI-enabled treadmill that just speeds up.”
A black box
Health care can feel like a black box. If your doctor says you need surgery, for example, the cost depends on a dizzying number of factors, including your health insurance provider, your specific health plan, its copayment requirements, your deductible, where you live, the facility where the surgery will be performed, whether that facility and your doctor are in-network and your specific diagnosis.
Some insurers may require prior authorization before a surgery is approved. That can entail extensive medical documentation. After a surgery, the resulting bill can be difficult to parse.
Witten, of Sheer Health, said his company has seen thousands of cases in which a doctor recommends a procedure, such as surgery, only for the patient to learn a few days beforehand that insurance didn’t approve it.
In recent years, as more health insurance companies have turned to AI to automate claims processing and prior authorizations, the share of denied claims has risen. This year, 41% of physicians and other providers said their claims are denied more than 10% of the time, up from 30% of providers who said that three years ago, according to a September report from credit reporting company Experian.
Insurers on Affordable Care Act marketplaces denied nearly 1 in 5 in-network claims in 2023, up from 17% in 2021, and more than a third of out-of-network claims, according to the most recently available data from KFF.
Insurance giant UnitedHealth Group has come under fire in the media and from federal lawmakers for using algorithms to systematically deny care to seniors, while Humana and other insurers face lawsuits and regulatory investigations that allege they’ve used sophisticated algorithms to block or deny coverage for medical procedures.
Insurers say AI tools can improve efficiency and reduce costs by automating tasks that can involve analyzing vast amounts of data. And companies say they’re monitoring their AI to identify potential problems. A UnitedHealth representative pointed Stateline to the company’s AI Review Board, a team of clinicians, scientists and other experts that reviews its AI models for accuracy and fairness.
“Health plans are committed to responsibly using artificial intelligence to create a more seamless, real-time customer experience and to make claims management faster and more effective for patients and providers,” a spokesperson for America’s Health Insurance Plans, the national trade group representing health insurers, told Stateline.
But states are stepping up oversight.
Arizona, Maryland, Nebraska and Texas, for example, have banned insurance companies from using AI as the sole decisionmaker in prior authorization or medical necessity denials.
Dr. Arvind Venkat is an emergency room physician in the Pittsburgh area. He’s also a Democratic Pennsylvania state representative and the lead sponsor of a bipartisan bill to regulate the use of AI in health care.
He’s seen new technologies reshape health care during his 25 years in medicine, but AI feels wholly different, he said. It’s an “active player” in people’s care in a way that other technologies haven’t been.
“If we’re able to harness this technology to improve the delivery and efficiency of clinical care, that is a huge win,” said Venkat. But he’s worried about AI use without guardrails.
His legislation would force insurers and health care providers in Pennsylvania to be more transparent about how they use AI; require a human to make the final decision any time AI is used; and mandate that they show evidence of minimizing bias in their use of AI.
“In health care, where it’s so personal and the stakes are so high, we need to make sure we’re mandating in every patient’s case that we’re applying artificial intelligence in a way that looks at the individual patient,” Venkat said.
Patient supervision
Historically, consumers have rarely challenged denied claims: A KFF analysis found fewer than 1% of health coverage denials are appealed. And even when they are, patients lose more than half of those appeals.
New consumer-focused AI tools could shift that dynamic by making appeals easier to file and the process easier to understand. But there are limits; without human oversight, experts say, the AI is vulnerable to mistakes.
“It can be difficult for a layperson to understand when AI is doing good work and when it is hallucinating or giving something that isn’t quite accurate,” said Shachar, of Harvard Law School.
For example, an AI tool might draft an appeals letter that a patient thinks looks impressive. But because most patients aren’t medical experts, they may not recognize if the AI misstates medical information, derailing an appeal, she said.
“The challenge is, if the patient is the one driving the process, are they going to be able to properly supervise the AI?” she said.
Earlier this year, Mathew Evins learned just 48 hours before his scheduled back surgery that his insurer wouldn’t cover it. Evins, a 68-year-old public relations executive who lives in Florida, worked with his physician to appeal, but got nowhere. He used an AI chatbot to draft a letter to his insurer, but that failed, too.
On his son’s recommendation, Evins turned to Sheer Health. He said Sheer identified a coding error in his medical records and handled communications with his insurer. The surgery was approved about three weeks later.
“It’s unfortunate that the public health system is so broken that it needs a third party to intervene on the patient’s behalf,” Evins told Stateline. But he’s grateful the technology made it possible to get life-changing surgery.
“AI in and of itself isn’t an answer,” he said. “AI, when used by a professional that understands the issues and ramifications of a particular problem, that’s a different story. Then you’ve got an effective tool.”
Most experts and lawmakers agree a human is needed to keep the robots in check.
AI has made it possible for insurance companies to rapidly assess cases and make decisions about whether to authorize surgeries or cover certain medical care. But that ability to make lightning-fast determinations should be tempered with a human, Venkat said.
“It’s why we need government regulation and why we need to make sure we mandate an individualized assessment with a human decisionmaker.”
Witten said there are situations in which AI works well, such as when it sifts through an insurance policy — which is essentially a contract between the company and the consumer — and connects the dots between the policy’s coverage and a corresponding insurance claim.
But, he said, “there are complicated cases out there AI just can’t resolve.” That’s when a human is needed to review.
“I think there’s a huge opportunity for AI to improve the patient experience and overall provider experience,” Witten said. “Where I worry is when you have insurance companies or other players using AI to completely replace customer support and human interaction.”
Furthermore, a growing body of research has found AI can reinforce bias that’s found elsewhere in medicine, discriminating against women, ethnic and racial minorities, and those with public insurance.
“The conclusions from artificial intelligence can reinforce discriminatory patterns and violate privacy in ways that we have already legislated against,” Venkat said.
Stateline reporter Anna Claire Vollers can be reached at avollers@stateline.org.
This story was originally produced by Stateline, part of States Newsroom, a nonprofit news network that includes Georgia Recorder and is supported by grants and a coalition of donors as a 501(c)(3) public charity. Georgia Recorder maintains editorial independence. Contact Editor Jill Nolin with questions: info@georgiarecorder.com.