Companies face up to reality of AI use by staff
- July 19, 2025
CHICAGO – Employees across numerous organizations are already using generative artificial intelligence, so companies need to put measures in place to protect against data leaks through the technology, experts say.
Robust training programs should be the first line of defense, but technology-based security controls should also be implemented, they said Monday during a session at Riskworld, the Risk & Insurance Management Society Inc.’s annual conference.
Employees know that AI can help them with everyday tasks, such as composing emails, said Lianne Appelt, Whiteford, Maryland-based head of enterprise risk management at Salesforce Inc.
“They’re going to use these tools because they’re there,” she said.
There have been several well-publicized cases of employees at various companies uploading confidential company data to public AI tools such as ChatGPT. There have also been instances where employees relied on AI-generated work that contained incorrect information, Ms. Appelt said.
Salesforce encourages employees to use AI but provides a protected internal AI tool for staff, she said.
As with other technological risks, such as phishing attacks, people are often the weakest link in the security chain, said Steve Taylor, director, cyber risk and resilience, at consulting firm BDO USA P.C.
“It’s really about educating folks on what is acceptable, what they should be actually using in their day-to-day from an AI perspective, and building the appropriate policy and governance around it,” he said.
But companies should also use technology to guard against data leakage via AI, Mr. Taylor said.
For example, cloud and software-as-a-service monitoring, network and endpoint security measures, data loss prevention tools, and behavior analysis should be implemented, he said.
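To make the data loss prevention idea concrete, the kind of outbound check such a tool performs can be sketched in a few lines. The following is a minimal, hypothetical Python illustration, not any vendor's actual product: it screens a prompt for sensitive patterns before the text is forwarded to a public AI service. The pattern names and regular expressions are assumptions chosen for illustration; a real deployment would use a tuned, organization-specific rule set.

```python
import re

# Illustrative patterns only; these regexes are assumptions, not a
# production rule set. A real DLP tool would use far richer detection.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
    "internal_label": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in an outbound prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def send_to_ai_tool(prompt: str) -> None:
    """Block prompts containing sensitive data; otherwise allow them through."""
    hits = screen_prompt(prompt)
    if hits:
        # Block the request and flag it for security review instead of
        # sending the text to an external service.
        print(f"Blocked: prompt matched {', '.join(hits)}")
    else:
        print("Allowed: prompt forwarded to the approved AI tool")

if __name__ == "__main__":
    send_to_ai_tool("Draft an email about our quarterly roadmap")
    send_to_ai_tool("Summarize this record: SSN 123-45-6789")
```

In practice, checks like this run at the network or endpoint layer rather than in the application itself, which is why Mr. Taylor pairs DLP tools with cloud monitoring and endpoint security controls.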