Time to Design AI Risk Practices

CCG Catalyst Commentary

By: Tyler Brown

SEPTEMBER 3, 2024

Artificial intelligence (AI) presents financial institutions (FIs) with risks that didn’t exist at the same scale a few years ago, particularly under regulations written for another era. The sheer volume of data, the growing number of data sources, and increasingly complex mechanisms for analyzing that data all point toward rules that today are only nascent.

Today, the development of AI in financial services is well ahead of the regulatory process. But regulators don’t want to be caught flat-footed in the face of AI-driven violations or when something unanticipated goes awry. They’re trying to catch up through education, preliminary rulemaking, and the articulation of best practices. In June, for example, the Department of the Treasury, which said it’s working on “initial rulemaking efforts” on AI in financial services, published a request for information (RFI). Public comments are now in.

Risk and compliance leaders at FIs would be wise to keep an eye on how rulemaking and standards-setting for AI evolve. For those who haven’t followed potential AI-related rules, the Treasury document and the comment letters are a good place to start. Regulatory scrutiny will come for AI applications in banking, and bankers had best be prepared for it as AI-driven tools become integral to the tech stack.

First, bankers should educate themselves about what AI means to them and to regulators (in one sample of banking professionals, as we wrote, few bankers were doing much, if anything, to prepare for AI). Per the RFI, the Treasury Department interprets the statutory definition of AI “to describe a wide range of models and tools that utilize data, patterns, and other informational inputs to generate outputs, including statistical relationships, forecasts, content, and recommendations for a given set of objectives.”

Filter out the buzzwords, and the good news is that the Treasury’s RFI doesn’t point to anything shocking. Its outline of the uses, opportunities, and risks of AI, the implications for rulemaking, and best practices retreads a lot of existing ground. One issue that stands out is opacity: AI-driven tools whose behavior is hard to follow. As we also wrote, opaque AI-driven underwriting tools may, for example, violate fair lending and anti-discrimination laws, run afoul of the prohibition on unfair, deceptive, or abusive acts or practices (UDAAP), and perpetuate illegal biases.

For future regulation of AI in financial services, commenters suggested a “risk-based approach,” with rules tailored to the potential negative impact of AI-driven applications and a focus on critical functions. Commenters also suggested rules on transparency and explainability, including disclosure of algorithm design and clear rationales for outputs; on bias mitigation, such as testing for discriminatory or inappropriately skewed outcomes; and on reiterating standards for consumer privacy and data security while preserving the private sector’s ability to innovate.
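To make the bias-testing suggestion concrete, here is a minimal, hypothetical sketch of one common screen, the adverse impact ratio (the “four-fifths rule”), applied to underwriting approvals. The group labels, sample data, and 0.8 threshold are illustrative assumptions, not anything the Treasury RFI or the comment letters prescribe.

```python
# Illustrative sketch only: a simple adverse impact ratio check of the
# kind a bias-testing program might include. Groups, data, and the 0.8
# threshold are hypothetical assumptions, not regulatory requirements.

from collections import defaultdict

def approval_rates(decisions):
    """Compute per-group approval rates from (group, approved) pairs."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Ratio of each group's approval rate to the highest group's rate."""
    benchmark = max(rates.values())
    return {g: rate / benchmark for g, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical underwriting outcomes: (applicant group, approved?)
    decisions = [
        ("group_a", True), ("group_a", True), ("group_a", False),
        ("group_b", True), ("group_b", False), ("group_b", False),
    ]
    rates = approval_rates(decisions)
    for group, ratio in adverse_impact_ratios(rates).items():
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"{group}: approval={rates[group]:.2f} ratio={ratio:.2f} [{flag}]")
```

In practice, a bias-mitigation program would go well beyond a single ratio, covering statistical significance, proxies for protected classes, and documentation of the model’s rationale for individual outputs.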

Following recent publications and outreach by regulators, risk and compliance leaders should have an idea of what rules for AI in financial services will target over the next few years. Now is the time to game out what that will mean for FIs’ technology strategies, vendor selection, and ongoing risk and compliance practices.
