AI-generated code introducing risk: Cyber expert
- June 11, 2025
CHICAGO — Computer programming code generated with artificial intelligence as well as AI “hallucinations” are introducing new risk and exposures to the cyber threat landscape, according to a cyber security expert speaking Tuesday at the Risk & Insurance Management Society Inc.’s Riskworld 2025 conference.
Generating code using artificial intelligence – now called “vibe coding,” according to Francisco Donoso, chief technology officer at Beazley Security, part of Beazley PLC – produces code that is more likely to be flawed from a security perspective than human-written code, he said.
“Generated code is not secure. The code that we see these AI models generate is really bad from a security perspective,” Mr. Donoso said.
One reason for this is that code-generation systems are trained on publicly available code, which itself may not be secure or of the highest quality.
Another potential exposure stems from AI “hallucinations,” when the technology “makes up stuff that doesn’t exist,” Mr. Donoso said.
The AI technology sometimes hallucinates “packages,” or small pieces of reusable code, that do not actually exist. A threat actor can then register a malicious package under that hallucinated name – a practice called “squatting” – and alter the function or output of any program or tool that installs it.
This can result in myriad bad outcomes, including data or emails being sent to a third party without the knowledge of the person using the program or technology.
“People figured out that AI was hallucinating a bunch of packages that don’t exist, and they just registered those packages to do malicious things instead. Attackers are really finding interesting ways to use all of this,” Mr. Donoso said.
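One first-line defense this implies is to never install an AI-suggested dependency blindly. Below is a minimal sketch of that idea, assuming an internal allow-list of vetted packages; the package names and the `vet_dependencies` helper are hypothetical, for illustration only, and are not drawn from Mr. Donoso's remarks.

```python
# Sketch: screen AI-suggested dependency names against an internal
# allow-list so a hallucinated -- and possibly squatted -- package name
# is flagged for review instead of silently installed.

# Hypothetical allow-list of packages a team has already vetted.
APPROVED_PACKAGES = {"requests", "numpy", "flask"}

def vet_dependencies(suggested):
    """Split suggested dependency names into approved and flagged lists.

    Anything not on the allow-list is flagged: it may be a legitimate
    new package, a typo, or a hallucinated name an attacker has
    registered with malicious code.
    """
    approved = [name for name in suggested if name in APPROVED_PACKAGES]
    flagged = [name for name in suggested if name not in APPROVED_PACKAGES]
    return approved, flagged

# "fastjson-utils" stands in for a plausible-sounding hallucinated name.
approved, flagged = vet_dependencies(["requests", "fastjson-utils"])
print("approved:", approved)  # approved: ['requests']
print("flagged:", flagged)    # flagged: ['fastjson-utils']
```

A check like this does not prove a flagged name is malicious – only that it has not been vetted – so it complements, rather than replaces, registry-side protections and dependency scanning.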