Institutions are exploring services that use Artificial Intelligence (AI), such as loan decisioning, resume review, and process automation. These services carry risk not only because the technology is new but also because regulatory expectations for it are still taking shape.
NIST has issued some guidance on identifying risks and governing AI models. It largely follows a risk management lifecycle: the model is designed, data is collected and fed into the model, output is validated, the model is deployed, and the cycle repeats. However, I don't foresee many institutions building, deploying, and managing their own AI models. That work is more likely to be done by a third party, which an institution would then validate through its Third-Party Risk Management program.
Other regulation may focus on the delivery of results and on protecting the rights of customers about whom decisions are made by an AI model. This could largely follow guidance issued by the White House in October 2022, the Blueprint for an AI Bill of Rights, which sets out five principles to protect people from harmful outcomes of automated decision-making.
Regulators of financial institutions have spoken about the use of AI decisioning systems at conferences but have not published official guidance. The U.S. Office of the Comptroller of the Currency (OCC) has outlined five expectations for banks that largely follow current risk management expectations.
As the use of AI models expands, we can expect further and more official guidance for institutions. In the meantime, these emerging principles can help us understand the direction and prepare as more products come on the scene.
https://www.nist.gov/itl/ai-risk-management-framework
https://www.whitehouse.gov/ostp/ai-bill-of-rights/