Viewpoint: AI, deepfakes ominous
- September 11, 2025
- Posted by: Web workers
- Category: Finance
Organizations face a monumental challenge in building cyber defenses to ward off the next generation of cyberattacks. Just as they close off one avenue of attack, cybercriminals shift their approach and find a new vulnerability.
Some businesses are stepping up efforts to keep their systems secure by exploring alternatives to traditional passwords. As we report on page 20, organizations are increasingly abandoning passwords altogether in favor of cellphones or biometric devices that rely on fingerprint or facial ID, often combined with multifactor authentication.
Such developments are a positive move toward addressing shifting cyber threats but may introduce additional risks. Last month, a finance worker at a multinational company was duped into paying out $25 million to cybercriminals using “deepfake” technology to pose as the organization’s chief financial officer in a video conference call, according to news reports citing authorities in Hong Kong. The incident was one of a growing number of cases in which a sophisticated digital forgery of an image, audio, or video has been used to disrupt business operations or to facilitate fraud.
Deepfakes are proliferating and becoming more sophisticated, supported by advances in artificial intelligence. AI misinformation and deepfakes in the political arena are a persistent concern in an election year in which synthetic content can be manipulated to disrupt democratic processes. The Federal Communications Commission recently outlawed robocalls that use AI-generated voices. The FCC’s announcement followed a January robocall that used an apparent deepfake of President Biden’s voice to urge voters not to participate in New Hampshire’s Jan. 23 GOP primary election. Major technology companies including Google, Meta, and TikTok have pledged to crack down on AI-generated deepfakes that could undermine the integrity of democratic elections in the U.S. and elsewhere.
Deepfakes are also catching up with biometric security, with researchers pointing to emerging hacking incidents in which criminals use social engineering to steal facial recognition data and then create deepfakes to gain unauthorized access to individuals’ bank accounts, for example. By 2026, attacks using AI-generated deepfakes against facial biometrics will lead 30% of businesses to conclude that these identity verification and authentication processes are no longer reliable on their own, according to Gartner Inc. research. The research highlighted the difficulty of determining whether the face being verified belongs to a live person or a deepfake.
Businesses will need to keep their wits about them, along with every available detection technology, to mitigate the risks of a wide range of emerging cyberattacks, including deepfakes, in the months and years ahead. Fortunately, technologies that can distinguish between live and fake human presence are becoming more advanced, which will help shore up cybersecurity defenses. Multifactor authentication, an account login process that requires users to present multiple forms of identification rather than just a password, has long been touted as a tenet of robust cyber hygiene. In a password-free future, MFA will be an even more important deterrent.
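To make the MFA concept concrete, here is a minimal sketch of one common second factor: the time-based one-time password (TOTP) used by authenticator apps, as standardized in RFC 6238. The secret and parameters below are illustrative only and not tied to any particular vendor or product mentioned in this article.

```python
import hashlib
import hmac
import struct
import time


def totp(secret: bytes, timestamp=None, digits: int = 6, step: int = 30) -> str:
    """Compute an RFC 6238 time-based one-time password.

    The current time is divided into 30-second steps, the step counter is
    HMAC-SHA1-signed with the shared secret, and a short numeric code is
    extracted via the standard dynamic-truncation rule.
    """
    if timestamp is None:
        timestamp = int(time.time())
    counter = struct.pack(">Q", timestamp // step)          # 8-byte big-endian step count
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                              # low nibble picks the offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# Verification tolerates small clock drift by also checking adjacent steps.
def verify(secret: bytes, submitted: str, window: int = 1) -> bool:
    now = int(time.time())
    return any(
        hmac.compare_digest(totp(secret, now + i * 30), submitted)
        for i in range(-window, window + 1)
    )
```

Because the code changes every 30 seconds and is derived from a secret the server and the user's device share, a stolen password alone is not enough to log in, which is the deterrent effect the editorial describes.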


