AI in health care raises legal, insurance concerns
- July 3, 2025
ORLANDO, Florida – Doctors can use artificial intelligence to treat patients faster and improve outcomes, but its use raises numerous legal and insurance concerns, experts say.
They were speaking Friday during a panel discussion at the Business Insurance World Captive Forum.
Amanda Avila, managing partner at TeleSpecialists LLC, a physician-owned telemedicine provider in Fort Myers, Florida, said AI can enhance decision-making, allow doctors to get to patients quickly and improve the accuracy of diagnosis in acute stroke care, for example.
Based on medical standards of care, “it’s acceptable for a doctor to show up in the emergency room 15 minutes within the time the patient arrives with stroke symptoms 50% of the time,” Dr. Avila said.
“That’s terrible. Fifteen minutes is a long time to be sitting there with a blood clot in your brain,” she said.
TeleSpecialists uses AI to route its doctors to patients in hospitals faster using remote camera technology, Dr. Avila said. “We average just under three minutes to get to the bedside,” she said.
AI medical imaging software then identifies where the stroke is and which blood vessel is blocked within 90 seconds, Dr. Avila said.
AI is relatively new and case law is evolving, but there are various legal concerns, said Noelle Sheehan, Orlando-based partner at Wilson Elser Moskowitz Edelman & Dicker LLP.
Top concerns include the amount of data that goes into AI, the potential for bias in that data and the risk of errors if data sources are inaccurate, Ms. Sheehan said. If something was done that’s not documented in the medical records, “how does AI factor that in?” she said.
Potential discrimination is another concern, she said. “If AI is only looking at certain data that maybe doesn’t include the entire population of folks out there … how does it impact that group?” she said.
“Who is liable for medical advice that’s given via a robot or ChatGPT?” said Lainie Dorneker, Miami-based head of healthcare at Bowhead Specialty Underwriters Inc.
Many different parties could potentially be liable for AI, and the liability of the downstream user is very different from a medical malpractice perspective, said Ms. Dorneker, who was a 2018 Business Insurance Women To Watch winner.
“Doctors have always relied on increasingly sophisticated technology, so we don’t expect them to not do that. They should be doing that. But what if what they’re relying on has some sort of inherent bias in it?” she said.
Different theories of liability, including vicarious liability, products liability and cyber liability, could apply if an injury is caused by AI, she said.
AI is a “fantastic” tool, but it’s not infallible, said Tim Folk, Philadelphia-based executive vice president and partner at Lockton Cos. LLC.
Cyber liability in health care is a significant concern, he said. A London insurer recently paid a $6 million claim after a deepfake video tricked a policyholder into making a wire transfer, he said.
AI can benefit captives by improving the claims process, he said.
From the underwriting perspective, it’s important to ask questions and understand where and how AI is being used, and to make sure insurers are informed when health care entities are using it, Ms. Dorneker said.
“My personal opinion is that the risk is if you don’t use it, you’re behind the eight ball. As the standard of care evolves, I think this will become the standard of care,” Dr. Avila said.