AI brings systematic accumulation risk potential to portfolios: Munich Re’s Berger
- August 19, 2025
- Category: Finance
With artificial intelligence (AI) now touching practically every aspect of everyday life, the number of insurance coverage gaps arising from the use of AI has grown sharply in recent years.
Reinsurance giant Munich Re highlighted this in a recent whitepaper, which showed how AI exposures within traditional insurance policies can become a significant and unexpected risk to insurers’ portfolios.
Reinsurance News recently spoke to Michael Berger, Head of Insure AI at Munich Re, about how the reinsurer is addressing the risks inherent with the technology.
Berger explained that there are two key gaps that insureds need to be aware of when using AI.
“One example is pure economic losses. For example, if a company utilises AI in its internal operations. Let’s say a bank utilises AI for extracting information from documents, but then if the AI essentially produces too many errors, then what has been extracted is a lot of incorrect information. This would mean that people would need to do the job again, which would cause a lot of extra expenses.”
He continued: “The second area of coverage gaps can be AI discrimination. An example would be with credit card applications and credit card limits. The AI might be used to determine what is the appropriate credit limit for the applicant, and with that discrimination could occur, which would not be covered under other insurance policies.”
Berger went on to explain how AI exposures within traditional insurance policies can become a significant and unexpected risk to an insurer’s portfolio.
“With AI comes this kind of systematic accumulation risk potential, especially if one model is being utilised across similar use cases across different companies. Another area to consider is in the domain of copyright infringement risks with generative AI models. Users might make use of a generative AI (GenAI) model to produce texts or images, but the model could potentially produce texts or images that are very similar to copyrighted texts or copyrighted images. If the user decides to use this content, then they may face copyright infringement claims and lawsuits against them.”
Interestingly, Berger noted that many companies may choose to build their own AI models, not from scratch, but by building on big GenAI models and taking them further.
“They might use these models as foundational models. But if the foundational model has a certain risk of producing copyright infringing assets, if it’s used as a foundational model then the risk will carry through even though it’s just being used as a basis for training their own application. This kind of foundational model use raises the potential for systemic accumulation in the copyright infringement area.”
With AI technology making a major impact on many aspects of life, existing insurance policies often provide only partial coverage, which ultimately makes it difficult for both insurer and insured to have full confidence in the extent of that coverage.
Berger addressed these concerns: “There are coverage gaps as I’ve outlined already with the pure economic losses and the AI discrimination. But I do believe that there is a need from a protection perspective to design suitable insurance coverage for those gaps. But then there are also concerns surrounding silent AI exposure. There might be potentially partial coverage, but the coverage might also be potentially silent on it.
“As an industry, it might make sense to structure one bundled insurance product which provides clarity that there is coverage for certain liabilities which emerge out of the usage of AI. This would address the problem in a really proactive way.”
Berger was then asked whether there are any limitations to the guarantees that Munich Re offers when insuring and addressing the risks inherent in AI.
“There are technical limitations because there are different forms of AI risks. Because of this, for certain AI risks we can only offer coverage if certain technical preconditions are met.
“For example, Munich Re can cover the risk of copyright infringement if certain statistical techniques are used that modify the generative AI model such that we can estimate the probability that it will produce a similar output with a high degree of confidence. It’s not possible to avoid the fact that a generative AI model will produce outputs which might be copyright infringing. However, there are certain tools that at least mitigate the probability that something like this could happen.
“It’s the same on the error side. Even if a company has the most well-built AI model, it will never be error-free. Any AI model will produce errors with a certain probability, and that all comes down to a testing process perspective. Are the testing procedures statistically robust enough to allow us to estimate this probability? If they are not, then they will not be insurable.
“We require certain technical preconditions in order to really estimate the risk with confidence and insure it. If those are not given, then we will not be able to provide insurance for these kinds of risks.”
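Berger does not spell out which statistical techniques are involved, but the general idea he describes for copyright risk, estimating the probability that a generative model produces output too similar to protected works, can be illustrated with a simple Monte Carlo sketch. Everything in the example below (the word-overlap similarity measure, the 0.8 threshold and the stand-in `fake_generate` function) is a hypothetical illustration, not Munich Re’s actual methodology.

```python
import math
import random

def similarity(generated: str, reference: str) -> float:
    """Toy word-overlap (Jaccard) similarity; a real pipeline would use
    embeddings, n-gram matching or perceptual hashing for images."""
    a, b = set(generated.lower().split()), set(reference.lower().split())
    return len(a & b) / max(len(a | b), 1)

def estimate_hit_probability(generate, references, n_samples=1000, threshold=0.8):
    """Monte Carlo estimate of the probability that a sampled output is
    'too similar' to any reference work, with a 95% normal-approximation
    confidence interval around that estimate."""
    hits = 0
    for _ in range(n_samples):
        output = generate()
        if any(similarity(output, ref) >= threshold for ref in references):
            hits += 1
    p_hat = hits / n_samples
    half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n_samples)
    return p_hat, (max(0.0, p_hat - half_width), min(1.0, p_hat + half_width))

# Hypothetical stand-in for a GenAI call that occasionally echoes a reference text.
references = ["the quick brown fox jumps over the lazy dog"]
def fake_generate():
    if random.random() < 0.1:
        return references[0]
    return "an unrelated sentence about reinsurance and model risk"

p, ci = estimate_hit_probability(fake_generate, references, n_samples=2000)
print(f"estimated similarity-hit probability: {p:.3f}, 95% CI: {ci}")
```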
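His point about testing procedures being statistically robust enough to estimate an error probability can also be made concrete. One generic way to bound a model’s error rate from a finite test set, not necessarily the approach Munich Re itself applies, is a binomial confidence interval; the sample size determines how tight that bound can be, which is exactly where a thin testing process can leave the risk too uncertain to insure. The test figures below are hypothetical.

```python
import math

def wilson_interval(errors: int, n: int, z: float = 1.96):
    """Wilson score interval for a binomial proportion: bounds a model's
    true error probability given `errors` failures in `n` independent tests."""
    if n == 0:
        raise ValueError("need at least one test case")
    p_hat = errors / n
    denom = 1 + z ** 2 / n
    centre = (p_hat + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z ** 2 / (4 * n ** 2))
    return max(0.0, centre - half), min(1.0, centre + half)

# Hypothetical results: 12 extraction errors observed in 400 test documents.
low, high = wilson_interval(errors=12, n=400)
print(f"observed error rate 3.0%, 95% CI: [{low:.1%}, {high:.1%}]")

# A similar observed rate from only 20 test cases gives a much wider, less
# useful bound, illustrating why a weak testing process may be uninsurable.
low_s, high_s = wilson_interval(errors=1, n=20)
print(f"observed error rate 5.0%, 95% CI: [{low_s:.1%}, {high_s:.1%}]")
```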