AI an evolving, unmitigated risk for college campuses

NEW ORLEANS — From falsified research and documents to discrimination, campus violence and sexual crimes, artificial intelligence poses risks unique to academia, according to presenters who spoke Monday at the University Risk Management & Insurance Association’s annual conference.

While AI has helped institutions with administrative functions, the fast-evolving technology has left colleges and universities unprepared to manage instances where it is used maliciously or unintentionally creates problems for operations, according to Benjamin Evans, associate vice president for risk management and insurance at the University of Pennsylvania in Philadelphia.

“I don’t have the answers… and I’m not sure anybody in the audience has answers, or anybody at this conference has the answers,” he told attendees, many of whom had questions about how to investigate falsified research and how to track disinformation, the subject of a separate session on how AI is triggering violence and so-called “sextortion” on campuses, where students can be blackmailed with often-fabricated, embarrassing images.

The risk for universities is so unique that it needs its own “risk bucket” in an enterprise risk management program, Mr. Evans said.

“We’re just trying to put thoughts in your head, things you should be thinking about, people you should be collaborating with on your campuses to come up with plans and procedures on how you are going to tend to this,” he said.

Jim Keller, Philadelphia-based co-chair of the higher education practice at Saul Ewing LLP, said laws have not caught up to the technology but that universities should be enlisting their information technology and risk management departments to explore the challenges posed by AI, some of which raise ethical issues and can trigger anti-discrimination concerns if, for example, AI used in hiring is unintentionally programmed to weed out certain demographics.

“AI can be wrong,” he said.

In a separate session on campus violence, presenters said AI in the form of “deepfakes,” or computer-generated videos or photographs, has been a focus for universities grappling with what happens when maliciously created disinformation hits the internet or social media.

“You just don’t know what is real these days,” said Nisar Siddiqui, a Chicago-based underwriter with Beazley PLC.

College students are particularly vulnerable to false information, and AI — which can generate everything from hoaxes to threats — falls into the category of “inactive violence that creates a ton of chaos,” said Justin Peterson, Atlanta-based underwriter at Beazley.

Tim Wiseman, university risk officer at the University of Oklahoma in Tulsa, said public universities with open campuses are particularly vulnerable. “College campuses are open forums,” he said. “You have the perfect conditions for the intentional spread of disinformation.”

One suggestion is to send frequent notifications and campus updates so that students know where to find accurate information. Another is to engage students by giving them places to report suspected false information, according to presenters.

At the very least, campuses should be aware of the threats. “This is such an emerging issue, and it changes rapidly,” Mr. Peterson said. “There are no finite solutions.”