Risk management, insurance evolve to mitigate AI exposures
- August 18, 2025
- Posted by: Web workers
- Category: Finance
CHICAGO — Companies should establish governance policies for artificial intelligence as more employees use generative AI in their daily work with or without their organizations’ approval, experts say.
As potential AI liabilities emerge, gaps in coverage and new exclusions could leave organizations vulnerable to additional risks, they said during a session last week at the Chicagoland Risk Forum, which is produced by the Chicago chapter of the Risk & Insurance Management Society.
AI governance can help organizations manage so-called shadow AI use, where employees use the technology without informing their managers or other company staff, said Donna Haddad, associate general counsel, consulting Americas, and AI ethics global legal leader at IBM in Chicago.
“Your employees are using it, you just may not know how they’re using it,” she said.
The governance policies can address issues such as ensuring there is “a human in the loop” for decisions affecting hiring or credit, for example, Ms. Haddad said.
The policies can also tackle issues such as data ownership, transparency regarding how AI is used, and preventing employees from using unwanted AI-generated results, she said.
With a governance framework in place, organizations should assess potential risks, including client confidentiality, Ms. Haddad said.
“Your employees need to know that if they put something on the internet, it may be swept up by ChatGPT or whatever technology is being used,” she said.
There is also a risk in some circumstances of undermining attorney-client privilege, Ms. Haddad said. “If you put it on the internet, you waive privilege.”
Other risks include intellectual property infringement, where AI tools ingest copyrighted materials, and AI hallucination, where the technology fabricates content, such as fictional case citations in legal documents, she said.
Companies also face risks from their vendors’ use of AI and from the proliferation of AI companies with varying security measures, Ms. Haddad said.
“You really have to get an inventory, and then you need an end-to-end review process,” she said.
Many existing commercial insurance policies provide coverage for AI exposures, said Kevin Kalinich, intangible assets global collaboration leader at Aon in Chicago.
“In about 85% of cases, there are arguments for insurance policies to apply in the different categories of AI,” he said.
For example, if AI is alleged to have encouraged someone to harm themselves, general liability policies, which cover bodily injury or death, could respond, Mr. Kalinich said.
However, similar to the trend with “silent cyber” coverage, insurers are moving toward changing policy wordings so AI exposures are excluded unless they are affirmatively covered, he said.
Several insurers have issued specific AI policies, and brokers are crafting policy endorsements to cover AI exposures, Mr. Kalinich said.