The Bedel Security Blog

Artificial Intelligence: How Will It Be Regulated?

Written by Stephanie Goetz | Jun 7, 2024

Institutions are looking at services that use Artificial Intelligence (AI), such as loan decisioning, resume review, and process automation. Using these services can be risky not only because the technology is new but also because the regulatory expectations for it are still taking shape.

There is some guidance from NIST on how to identify risks and govern an AI model. It largely follows a risk management lifecycle: the model is designed, data is collected and fed into the model, output is validated, the model is deployed, and the cycle repeats. However, I don’t foresee many institutions building, deploying, and managing an AI model themselves. That work is more likely to be done by a third party, which an institution would then validate through its Third-Party Risk Management program.

Other regulation may focus on the delivery of results and on the rights of the customers about whom decisions are being made by the AI model. This could largely follow guidance issued by the White House in October 2022, called the Blueprint for an AI Bill of Rights. It lays out five principles to protect people from the harmful outcomes of automated decisioning.

  1. Safe and Effective Systems
    A decisioning system should be developed with diverse input, and it should undergo risk assessment, testing, independent validation, and ongoing monitoring to confirm its effectiveness.

  2. Algorithmic Discrimination Protections
    Algorithmic discrimination occurs when the data and/or the algorithm contribute to discrimination based on race, color, ethnicity, sex, religion, or another protected category. Designers, developers, and deployers must take protective measures at every step, through equitable design and ongoing testing (a minimal sketch of one such test appears after this list). This also requires clear oversight and governance.

  3. Data Privacy
    Individuals should know how their data will be used and should be able to opt out and have their data removed, much as under GDPR and other recent data privacy laws.

  4. Notice and Explanation
    Individuals should receive plain-language notice that a decisioning system is in use, an explanation that it was used to make a decision about them, and notice of any key changes to the system’s functionality.

  5. Human Alternatives, Consideration, and Fallback
    When appropriate, individuals should be able to opt out of the decisioning system, including by escalating the decision to a human. Criteria for escalation could include an error, failure, or inequity in the decision.
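
The Blueprint does not prescribe any particular testing method, but one simple and widely used screening check for discrimination in decisioning is the adverse impact ratio, sometimes called the four-fifths rule: compare each group’s approval rate to the most-favored group’s rate and flag any ratio below 0.8. Here is a minimal sketch in Python with hypothetical data and thresholds; nothing in it comes from the Blueprint itself:

```python
# Minimal sketch of an adverse impact ratio ("four-fifths rule") check.
# Assumptions: decisions arrive as (group, approved) pairs, and the 0.8
# threshold is the conventional rule of thumb, not regulatory text.
from collections import defaultdict

def adverse_impact_ratios(decisions, threshold=0.8):
    """decisions: iterable of (group, approved: bool) tuples.
    Returns each group's approval rate relative to the highest group's,
    plus the list of groups falling below the threshold."""
    approvals = defaultdict(int)
    totals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)

    rates = {g: approvals[g] / totals[g] for g in totals}
    best = max(rates.values())
    ratios = {g: rate / best for g, rate in rates.items()}

    # Flag any group whose relative approval rate is below four-fifths.
    flagged = [g for g, r in ratios.items() if r < threshold]
    return ratios, flagged

# Hypothetical example: group B's approval rate is well under 80% of group A's.
sample = [("A", True)] * 80 + [("A", False)] * 20 \
       + [("B", True)] * 50 + [("B", False)] * 50
ratios, flagged = adverse_impact_ratios(sample)
print(ratios)   # {'A': 1.0, 'B': 0.625}
print(flagged)  # ['B']
```

A check like this is only a first-pass heuristic; fair-lending reviews look at many metrics and at the model itself, not a single ratio, but it illustrates the kind of ongoing, repeatable testing the principle calls for.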

Financial institution regulators have spoken about the use of AI decisioning systems at conferences but have not published official guidance. The US Office of the Comptroller of the Currency (OCC) has outlined five expectations for banks, which largely follow current risk management expectations:

  1. Risk and Compliance Programs,
  2. Model Risk Management,
  3. Third-Party Risk Management,
  4. New Products Review, and
  5. Responsible Use of Data.

As AI models expand in use, we can expect more formal and detailed guidance for institutions. In the meantime, these early principles can help us understand the direction regulation is heading and prepare as more products come on the scene.


Sources:

https://www.nist.gov/itl/ai-risk-management-framework

https://www.whitehouse.gov/ostp/ai-bill-of-rights/

https://www.mayerbrown.com/en/insights/publications/2022/05/supervisory-expectations-for-artificial-intelligence-outlined-by-us-occ