Rising AI adoption exposes businesses to new risks: Gallagher
- October 31, 2025
- Posted by: Beth Musselwhite
- Category: Insurance
Adoption of artificial intelligence (AI) is at an all-time high, with nearly two-thirds of businesses testing AI in the past year and half already implementing it. However, Gallagher warns that growing usage also brings new risks, including a greater likelihood of AI-related claims if use is not properly governed.
Gallagher’s research shows that by the end of 2024, 45% of businesses were using AI in daily operations—a 32% increase from the previous year. Larger firms with greater resources report even higher adoption, with 82% using AI, up from 69% in 2023.
This growth is driven by AI’s ability to improve efficiency by automating tasks, enhancing decision-making, and driving innovation. Among business leaders, 44% cite better problem-solving as a key benefit, while 42% say AI boosts employee efficiency and productivity, allowing them to focus on other tasks.
AI is most commonly used for writing emails/agendas (32%), handling customer inquiries (31%), and analysing market dynamics (28%).
As companies increasingly rely on AI for business-critical tasks, Gallagher stresses the importance of recognising the associated risks and managing AI use carefully.
Business leaders report greater awareness of AI-related risks compared to a year ago. With AI evolving rapidly, the risk landscape is shifting just as quickly.
AI errors, or “hallucinations,” where systems generate inaccurate results, are the top concern for just over a third (34%) of business owners, followed by data protection and privacy violations (33%) and legal liabilities (31%) stemming from AI misuse.
Gallagher predicts rising legal liabilities for businesses that fail to properly govern their AI use. A key concern is firms or contractors relying on AI-generated research for professional services or advice. If they use incorrect information, they could face significant legal costs, damages, and settlements.
Ben Waterton, Executive Director of Professional Indemnity at Gallagher, said, “AI is now an intrinsic part of our everyday lives – facial recognition, navigation systems, search recommendations and advertising are examples we encounter on a daily basis. However, the output from AI is only as good as the data that is input and cannot be relied upon blindly. There have been a number of cases where individuals being paid for their professional expertise have been found to be using AI-generated information which was incorrect, and this has exposed them to costly negative outcomes.
“AI systems excel at processing vast amounts of data and identifying patterns that may not be readily apparent to humans. However, they cannot replace the human expertise and judgment that qualified individuals bring to their work. Relying solely on AI without critical examination and human oversight can lead to serious consequences and compromised advice. Quality assurance procedures and oversight of employees must evolve to ensure that this emerging risk is recognised and addressed to prevent professional indemnity losses.”
Gallagher also warns that AI introduces new cyber risks. Its in-house cyber specialists caution that AI could expose businesses to privacy and data protection risks, including data usage rights violations, data poisoning, and regulatory infringement.
AI is also making cybercriminals more sophisticated, enabling them to exploit vulnerabilities with greater efficiency and scale, making attacks harder to detect and defend against.
James Pearce, Cyber Account Executive at Gallagher, said, “Historically, the cyber insurance market has been adept at adjusting to new threats over time, and the expectation is that it will do the same with risks related to AI. However, as AI risks become more defined, insurers are starting to review the cover on their policies, and businesses using AI need to check whether they would have coverage if a cyber-attack or privacy violation arising from AI exposures were to take place.”
AI use also raises liability risks for business leaders, particularly in two key areas: “AI washing,” where firms overstate their AI capabilities, and AI hallucinations, where inaccurate AI-generated information informs business decisions or is provided to clients. Both scenarios could lead to regulatory scrutiny, legal challenges, and compliance issues—potentially exposing boards of directors to lawsuits and breach of fiduciary duty claims for failing to oversee AI governance properly.
Laura Parris, Executive Director of Directors’ & Officers’ Insurance at Gallagher, said, “Corporate misuse of AI presents growing risks for business leaders, with AI washing – overstating or misrepresenting AI capabilities – leading to increased scrutiny. We have seen legal action against companies for misleading AI claims, investor disputes over the effectiveness of AI-driven products, and regulatory enforcement against firms making false statements about their AI use. As oversight intensifies, transparency and accountability in AI-related disclosures are more critical than ever.”