Banking on Artificial Intelligence


Legal, regulatory, and security experts convened Feb. 28 at Fordham Law School to examine and share insights on cutting-edge issues surrounding “Artificial Intelligence, Machine Learning, and Law in the Financial Services Sector.”

The single-day conference, hosted by Fordham’s Center on Law and Information Policy, featured panels that highlighted how emerging technologies can strengthen regulatory compliance and enhance cybersecurity and fraud detection in an era when billions of data records are stolen annually.

Caleb Barlow, vice president for threat intelligence at IBM Security, said in his opening keynote remarks that a top-tier New York financial institution sees around one billion security events per day, ranging from employee password resets to advanced persistent attacks. He noted that IBM’s primary watch floor sees between 25 and 35 billion security events per day across its 4,000 customers.

“You are not going to get through that with human beings,” Barlow said. “You have to use correlation engines and analytics to go find that needle not in the haystack but in the stack of needles.”

It’s not enough merely to have access to the data. To combat the mountain of risks today’s global businesses face, AI has to “read” text, look at images, and be able to hear and interpret speech; systems have to reason, draw conclusions, and learn from and correct their mistakes, Barlow said.

Given the proper information, machines can make associations linking past and current suspicious behaviors at a speed that humans cannot match, noted Michael Lammie, a partner with PwC, during the first panel, “AI and Machine Learning for Regulatory Compliance.” CLIP Executive Director N. Cameron Russell moderated the panel.

“Banks have access to the information, but they’re not connecting the dots to bad actors, and they’re not investigating the activities the way they need to in a comprehensive way,” Lammie said. “These are areas where artificial intelligence and machine learning can help accelerate that process and give banks the information they need to manage risk better.”
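The “connecting the dots” that Lammie describes generally means scoring activity statistically rather than reviewing it by hand. As a purely illustrative sketch, not a description of any panelist’s or vendor’s actual system, a toy anomaly scorer might flag transactions whose amounts deviate sharply from an account’s historical pattern using a robust modified z-score:

```python
from statistics import median

def flag_anomalies(amounts, cutoff=3.5):
    """Flag amounts whose modified z-score, based on the median
    absolute deviation (robust to the very outliers being sought),
    exceeds `cutoff`. A toy illustration only."""
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)
    if mad == 0:  # all values identical: nothing stands out
        return []
    return [a for a in amounts if 0.6745 * abs(a - med) / mad > cutoff]

# Hypothetical account history: routine payments plus one outlier.
history = [120, 95, 110, 130, 105, 98, 115, 9800]
print(flag_anomalies(history))  # → [9800]
```

Real compliance systems correlate far richer signals (counterparties, geography, timing) across billions of events, but the principle is the same: let the machine surface the small set of anomalies worth a human investigator’s time.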

The enhanced focus on AI and ML initiatives stems from the realization that current protections are not working for the financial services industry, its customers, or regulators, added Donna Daniels, executive director of fraud investigation and dispute services at EY, one of the world’s leading professional services firms. An estimated $800 billion to $2 trillion is laundered annually, and only 1 percent of that is caught, she noted.

“As criminals get more sophisticated, those who combat them need to become more sophisticated,” Daniels said, explaining that if money laundering came wrapped with a bow, it wouldn’t be so hard to stop.

Geri-Lynn Clark, an executive with financial technology solutions provider NextAngles, observed that the ability to process, and draw insights from, vast amounts of data has produced a visible increase in productivity in the regulatory-compliance industry, freeing more senior workers to investigate larger know-your-client and anti-money-laundering threats.

“Investment in AI could eventually fund itself,” Clark predicted.

CLIP Founding Academic Director Joel Reidenberg, the Stanley D. and Nikki Waxberg Chair in Law, moderated the day’s second panel, “AI and Machine Learning for Cybersecurity and Fraud Detection.” The panel explored how emerging intelligence could be used to thwart future cyberattacks, among other topics.
