Today we’ll discuss our newest and perhaps most ubiquitous buzzword: AI (Artificial Intelligence). Identifying and mitigating the risks of AI is becoming increasingly nuanced and complex due to recent surges in AI technology. And while AI-based decision-making continues to enhance financial services, such as loan origination, the reality is that AI-driven prediction and statistical models have been in use for some time.
These days, when most people talk about AI, I think much of the time they mean generative AI. After all, the most recent advances in AI technology, particularly around generative AI, have profoundly affected and improved the way organizations conduct business. But financial services, along with many other business sectors, have been using statistical and prediction-based models to automate decision-making for well over a decade. And so, I think it’s important to make a distinction between generative AI risk and AI model risk management.
If you find yourself perusing the FFIEC IT Handbook (as a nerdy, overanalytical CISO might do) for guidance on AI, you’re almost certain to come up empty-handed. That’s because regulatory guidelines focus more on model risk than they do on AI. So, what is the difference? The analogy I like to use is the same as that of Cybersecurity vs. Information Security. I won’t digress down a rabbit hole here, but suffice it to say that AI risk is a component of model risk, just as cybersecurity is a component of information security, and thus we must take the broadest approach possible if we are to adequately assess risk overall.
The most concise piece of guidance that I have found (and I use that term loosely) is OCC Bulletin 2011-12, titled Supervisory Guidance on Model Risk Management. Beyond that, you’ll find additional guidance sprinkled among various FFIEC handbooks, including Development and Acquisition, Information Security, Outsourcing Technology Services, and Supervision of Technology Service Providers. So, where do you begin?
Well, just like any sound Information Security Program would suggest: start with a risk assessment. And that means you’ll need to start with an inventory. If you’re prudent in this area, then you likely already have a fairly accurate inventory, but more specifically you’ll want to identify those assets that leverage AI and/or AI models. The most common for financial services is undoubtedly anything providing cybersecurity controls. Think next-gen firewalls, endpoint detection and response (EDR), security information and event management (SIEM), and any other tools that analyze large quantities of data, alerts, etc., and correlate them to either make decisions or to provide humans with meaningful data with which to make informed decisions. Loan decisioning is also becoming increasingly common, and fraud detection systems for wire and ACH transactions, not to mention BSA monitoring platforms, use statistical models to develop baselines of normal activity so that anomalous or suspicious activity can be more easily identified.
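To make the inventory concrete, here’s a minimal sketch (in Python, purely illustrative) of what a model inventory record might capture. Every field name here is hypothetical; adapt it to whatever conventions your existing asset inventory already uses:

```python
from dataclasses import dataclass, field

# Illustrative AI/model inventory record; field names are hypothetical
# and should map onto your existing asset inventory conventions.
@dataclass
class ModelAsset:
    name: str                    # e.g., "Wire fraud detection"
    owner: str                   # business owner accountable for the model
    vendor: str | None           # third-party provider, if any
    purpose: str                 # what decisions the model informs or makes
    data_inputs: list[str] = field(default_factory=list)
    decision_autonomy: str = "advisory"  # "advisory" vs. fully "automated"

inventory = [
    ModelAsset(
        name="EDR behavioral analytics",
        owner="Information Security",
        vendor="ExampleVendor",  # hypothetical vendor name
        purpose="Correlate endpoint telemetry to flag suspicious activity",
        data_inputs=["process events", "network connections"],
    ),
]
```

Even a spreadsheet with these columns is a fine start; the point is knowing what models you have, who owns them, and how much decision-making authority they carry.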
Again, just like your IT/security risk assessment, identifying threats is key. Of course, these aren’t your typical types of risk; AI breeds a whole new ecosystem of threats and controls. For example, risk of bias is among the most common. As we know, humans are biased, and AI is created by humans, so inevitably our own biases bleed into the models. However, now that these types of models have been operating in the wild, so to speak, we have discovered that AI also develops its own bias due to statistical probabilities. But that isn’t the only risk. Explainability and transparency, data privacy, model drift and reliability, and ethical risk should also be considered. It’s important to understand the data that is analyzed by these models and to confirm that our expectations align with actual results. Not only is this important for bias-related risk, but also for model drift and performance. AI models must be continually analyzed, tested, and fine-tuned to meet the challenges of performance degradation, validation, and recalibration.
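To make the drift point tangible, here’s a short sketch of one common drift check, the Population Stability Index (PSI), which compares a model input’s (or score’s) distribution at development time against what you’re seeing in production. The thresholds in the docstring are commonly cited rules of thumb, not regulatory standards:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Population Stability Index (PSI), one common drift metric.

    Compares the distribution of a model input (or score) at development
    time (`expected`) against production (`actual`). Commonly cited rules
    of thumb: < 0.1 stable, 0.1-0.25 monitor, > 0.25 investigate/recalibrate.
    """
    expected = np.asarray(expected, dtype=float)
    actual = np.asarray(actual, dtype=float)

    # Bin edges come from the baseline (development) distribution
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    # Pull out-of-range production values into the outermost bins
    actual = np.clip(actual, edges[0], edges[-1])

    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)

    # Floor the proportions to avoid division by zero / log(0)
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)

    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Example: scores shift upward in production, and PSI flags the drift
rng = np.random.default_rng(0)
baseline = rng.normal(600, 50, 10_000)    # e.g., scores at validation
production = rng.normal(620, 55, 10_000)  # scores observed this quarter
print(round(population_stability_index(baseline, production), 3))
```

Whether you compute something like this yourself or your vendor reports it, the point is the same: someone should be watching for movement away from what the model was validated against.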
Additionally, operational and security threats pose risk to AI models just as they do to any other asset. Familiar cybersecurity concerns, such as vulnerability management, adversarial attacks, and data integrity, are more important than ever; otherwise our models may be subject not just to unfair decision-making but to downright malicious manipulation. Ensuring that our AI models have the same level of protection, including endpoint security, perimeter security, and identity and access management, is vital.
Furthermore, regulatory and compliance risk is paramount. Ensuring that AI-driven decisions comply with fair lending laws, the Equal Credit Opportunity Act, BSA, and other regulatory requirements cannot be overlooked. As we discussed earlier, regulatory guidance from the OCC and FFIEC lays out some of these expectations as they relate to regulatory compliance.
Establishing new security controls in addition to what you already have in place is the next step. The key in any risk management process is undoubtedly governance. Developing robust model governance structures that include policy and procedure, independent validation, and testing plays a large role in risk mitigation. But you also need controls that will mitigate the specific types of risk we discussed, such as bias, transparency, and regulatory compliance. This can present a challenge, especially if you’re utilizing a product or service that uses AI models. I mean…there likely aren’t many small-to-midsize community banks operating their own large language models. Most of us are using a third-party service, and ensuring your AI models are properly tuned and tested falls within the realm of vendor risk management. It’s uncommon (at least for now) for service providers to include AI model risk management in their SOC testing, so you’ll want to develop controls of your own and/or negotiate some of these controls into contractual agreements. Ongoing monitoring and compliance reviews can combat risks such as lack of transparency and can help ensure AI-driven decisions are explainable and auditable.
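For the “explainable and auditable” piece, even a simple append-only decision log goes a long way. Here’s a hedged sketch, with illustrative field names (not any standard schema), of what recording an AI-driven decision might look like:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_model_decision(model, version, inputs, score, decision, reasons,
                       path="model_decisions.log"):
    """Append an audit record for a model-driven decision.

    Field names are illustrative only. The goal: a reviewer (or examiner)
    can later see what the model saw, what it decided, and why.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "version": version,       # ties the decision to a model version
        "inputs": inputs,         # the data the model actually used
        "score": score,
        "decision": decision,     # e.g., "approve", "refer", "decline"
        "reasons": reasons,       # top contributing factors
    }
    line = json.dumps(record, sort_keys=True)
    # A per-record hash helps reviewers spot after-the-fact tampering
    digest = hashlib.sha256(line.encode()).hexdigest()
    with open(path, "a") as f:
        f.write(f"{line}\t{digest}\n")

# Hypothetical usage for a loan-decisioning model
log_model_decision(
    model="loan-decisioning", version="2024.1",
    inputs={"dti": 0.41, "credit_score": 688},
    score=0.62, decision="refer",
    reasons=["debt-to-income ratio", "limited credit history"],
)
```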
Generative AI tools are useful and can certainly improve productivity, but you likely can’t govern their models the same way you would other AI-based decision models. However, data loss prevention (DLP) solutions and methodologies can be used in their place. And I know what you’re saying: “We don’t have a DLP solution” or “They’re expensive and difficult to maintain.” I hear ya. But just because you don’t have a full-blown DLP solution doesn’t mean you don’t have DLP-like controls already in place.
Web content filtering can be highly useful in this case. You can easily block sites like ChatGPT and just as easily restrict the use of Microsoft Copilot. But the list of generative AI platforms is growing; LLMs and text-based AI tools like Google Gemini and Meta’s Llama are readily available. The biggest risk of these platforms is data loss; employees accidentally (or intentionally) uploading sensitive information to them should be treated the same as uploads to file-sharing sites such as Dropbox or ShareFile. So, while generative AI does introduce risk to the financial institution, it can be managed through traditional means. But as I stated, these platforms are growing exponentially, and managing a list of them can become burdensome. So, while the roadblocks to a full-blown DLP solution have, to date, been reasonable, reconsidering these tools may be warranted.
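If your web filter doesn’t offer a managed “AI tools” category, a maintained blocklist is the fallback. Here’s a minimal sketch of the matching logic; the domains below are examples only, and any hand-kept list will always lag reality:

```python
from urllib.parse import urlparse

# Example entries only; real lists grow quickly, so prefer a managed
# category from your web filtering product if one is available.
GENAI_BLOCKLIST = {
    "chatgpt.com",
    "chat.openai.com",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def is_blocked(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    # Match the domain itself and any of its subdomains
    return any(host == d or host.endswith("." + d) for d in GENAI_BLOCKLIST)

assert is_blocked("https://chatgpt.com/c/some-chat")
assert not is_blocked("https://example.com/")
```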
The future of AI in financial services is uncertain. What’s not uncertain is that it’s here to stay, so getting a jump on developing appropriate risk management programs is of the utmost importance. Understand where AI is used in your organization; you may be surprised where you find it. Identify applicable risks and develop compensating controls. And as always, a balanced approach to risk management that fosters innovation while ensuring compliance and risk mitigation is key to your success. If you’re unsure where to begin, just start with a single asset. The scope of all this may seem daunting, so don’t try to boil the ocean. Check out some of our other blog posts for further insight, and contact us if you want to learn more.