VestNexus.com


Editorial: AI poses threats, offers benefits

Manipulated videos and audio recordings, or deepfakes, are moving up the list of concerns for cybersecurity professionals.

These can range from the frivolous, like swapping out celebrity faces in iconic movie scenes, to the sinister, like creating racist rants in an unsuspecting person’s voice. This latest use and abuse of new technology is likely only in its infancy.

Often created with the help of artificial intelligence, some of the attempts to fool viewers are easily identifiable. Photos and videos with extra fingers, smudged faces or disproportionately large heads abound on the internet. But you only need to take a few of the “deepfake or real” quizzes online to see that some images are uncannily convincing.

The technology is also being used to execute serious financial crimes. In one widely reported incident, a Hong Kong employee of a British design and engineering company was allegedly duped into paying $25 million to fraudsters after he participated in a video call with deepfake creations of people he thought were in his finance department.

As we report here, a mitigating factor that cybersecurity and risk management professionals can be grateful for is that deepfakes are time-consuming to produce and require relatively sophisticated expertise to pull off convincingly.

But it’s tough not to think that expensive scams will proliferate as the technology and unscrupulous people’s mastery of it mature. We can all think of promised technological innovations that never materialized — how much longer do we all have to wait for a flying car, for example? — but deepfakes have already hit the information superhighway, and newer and better models are constantly being rolled out.

But it’s not all bad news on the AI front. As we report on page 8, the technology is increasingly being used to combat fraud in workers compensation claims.

While claims professionals often know what to look for to identify bogus or inflated claims, sifting through mounds of potential evidence in reports, emails, and other documents can be immensely time-consuming. To speed up investigations and oversight, companies are deploying AI to flag fraudulent or troublesome claims.

Again, use of the technology is just beginning, and there are teething problems and limitations to overcome. But by embracing AI, claims staff should be able to reduce the billions thought to be lost to fraud each year.

So, while AI can threaten personal privacy and the integrity of information and expose companies to significant financial perils, it can also be used as a sophisticated tool to mitigate threats and reduce costs.

As criminals refine their use of AI, ethical use of the technology must develop in parallel to ensure that risk management efforts keep pace in the evolving digital environment.