Cybersecurity Is Top of Mind While Banks Ponder AI

April 11, 2024

By: Tyler Brown

Technology Strategy and Compliance

Bankers’ concerns about business risks are on the rise, and a top issue is cybersecurity. According to a Bank Director report, 39% of senior leaders at financial institutions (FIs) say their concern about cybersecurity risk has increased significantly over the past year, ranking it second only to interest rate risk. Compliance and regulatory issues, which tie closely to cybersecurity, were also in the top five. Bankers need to be mindful of those risks as they refine their business and technology strategies, especially in the context of applications for artificial intelligence (AI).

The rise of AI poses a real threat to cybersecurity across industries, and banks are no exception. Issues that stem from generative AI in particular, notes a US Treasury report, can include lower barriers to entry for attackers, more sophisticated and automated attacks, and a shorter window for attackers to exploit a bank’s vulnerabilities. A bank’s own AI algorithms can also be tampered with or attacked.

That raises the question of what banks can and should do about AI-driven cybersecurity risk. The answer is complicated by how well bankers can understand AI systems and assess them in the context of technology and security risk management. Given the complexity of AI models and most banks’ dependence on vendors to implement them, bankers need to consider how they manage controls in two parts.

The first part is leveraging existing cybersecurity controls. The Treasury report recommends that banks apply cybersecurity best practices to secure AI systems and “map their security controls to AI applications,” train employees frequently, patch vulnerabilities diligently, and pay close attention to the handling of data. Critically, this imperative applies even to banks that don’t have their own AI systems: they still need to strengthen their security to ward off increasingly sophisticated AI-based attacks.

The second part is integrating AI systems into existing enterprise risk management. As we’ve written, banks will most likely rely on vendors for AI systems, which increases third-party risk and is further complicated by the financial, legal, and security risks related to the handling of data. According to the Treasury report, the successful use of AI depends on addressing risk management and control issues in four areas:

  • Determining how appropriate a technology is and understanding the resources required to support it
  • Establishing governance and controls, which, for AI, means ensuring robust underlying data, managing model complexity and transparency, and carefully evaluating implementation and performance
  • Managing change successfully, which may include addressing issues related to cybersecurity, resilience, privacy, operations, and fraud
  • Managing third-party risk

Handling risk and compliance issues at the intersection of AI and cybersecurity depends in part on how banks talk about those topics internally, with technology partners, and with regulators. AI itself is not covered by specific frameworks for risk or compliance, notes the Treasury report. Instead of building new frameworks specific to AI, “regulators are focused on institutions’ enterprise risk frameworks” and how AI fits into existing practices for cybersecurity and anti-fraud.

First, instead of discussing AI in general, bankers should talk about its specific applications and the risks those applications create for the bank. Second, they should explain how existing resources and risk management processes already account for potential problems, both from their own use of AI and from external threats.
