The recent release of artificial intelligence (AI) risk management tools by the U.S. Department of the Treasury marks a significant advancement for financial institutions aiming to safely integrate AI technology. This initiative addresses the essential need for consistent terminology and clear guidelines, equipping banks to better handle emerging cyber threats and compliance challenges associated with AI.
- Key insight: The Treasury has introduced an AI lexicon and a risk management framework specifically designed to assist financial institutions in the safe adoption of AI technology.
- What’s at stake: Confusion stemming from inconsistent terminology and generic guidelines can hinder banks’ ability to combat emerging cyber threats, bias, and compliance issues.
- Supporting data: The newly developed framework provides a comprehensive matrix of 230 control objectives to effectively manage risks throughout the AI lifecycle.
The U.S. Department of the Treasury launched two AI risk management tools on Thursday aimed at helping financial institutions safely embrace the technology. The release is the first installment of a wider initiative that will roll out six resources for the banking sector this month.
Created by the Artificial Intelligence Executive Oversight Group—a public-private partnership focused on addressing cybersecurity and operational issues in banking—this lexicon and framework are vital for navigating the complexities of AI’s transformative potential and associated risks.
For U.S. banks, the sector-specific lexicon and framework serve as a navigational guide through opportunities like operational transformation and improved customer service, while simultaneously addressing threats such as emerging cybersecurity vulnerabilities, bias concerns, and compliance obstacles.
Treasury leaders announced the initiative on Wednesday, marking the completion of the tools’ development and the start of their rollout.
The remaining four resources will be released gradually throughout February, focusing on governance and accountability; data integrity and security; fraud and digital identity; and operational resilience, according to the announcement.
The overarching goal is to enhance the security of AI infrastructure, ensure safe deployment, and maintain the resilience of the financial system against sophisticated cyber threats.
“This initiative shows that government and industry can collaborate effectively to support secure AI adoption, thereby enhancing the resilience of our financial system,” said Treasury Secretary Scott Bessent in a Wednesday press release.
The groups behind the framework
Two leading organizations spearheaded the Artificial Intelligence Executive Oversight Group: the Financial Services Sector Coordinating Council (FSSCC) and the Financial and Banking Information Infrastructure Committee (FBIIC).
The FSSCC, which represents the private sector in this public-private partnership, is an industry-led nonprofit that focuses on the security of critical infrastructure.
This council comprises over 70 organizations, including JPMorganChase, Mastercard, the American Council of Life Insurers, the Options Clearing Corp., and the Financial Services Information Sharing and Analysis Center. Deborah Guild of PNC serves as the chair, with Heather Hogsett of the Bank Policy Institute as vice chair.
The FBIIC consists of 18 federal and state regulatory bodies and has operated since shortly after the Sept. 11, 2001, attacks, working under the President’s Working Group on Financial Markets to enhance coordination among regulators and bolster sector resilience.
The FBIIC includes notable organizations such as the Federal Deposit Insurance Corp. and the Federal Reserve Board, with Treasury’s Assistant Secretary for Financial Institutions, Luke Pettit, presiding over the committee.
In the past, Treasury and the FSSCC have collaborated to produce resources, including a set of guidelines for secure cloud computing adoption.
The Cyber Risk Institute (CRI), a nonprofit coalition that develops harmonized risk management standards for cybersecurity, technology, and AI in the financial sector, is also credited as a co-author of the framework released Thursday.
A lexicon for common definitions of sometimes confusing AI terms
The AI Lexicon released on Thursday aims to create a standardized language that helps financial institutions and regulators communicate more effectively regarding AI risks and capabilities.
As banks become increasingly reliant on AI for operational and customer service decisions, the lack of consistency in terminology has led to confusion, which in turn hampers governance and oversight, as noted by the FSSCC and FBIIC.
To resolve this issue, the Treasury and its industry partners compiled common technical and risk management terms, drawing on academic literature, government resources, and existing standards.
This lexicon acts as an optional tool for U.S. banks, rather than a legally binding reference for regulators to interpret laws or agreements.
By fostering a shared understanding, this resource seeks to facilitate smoother communication among the legal, technical, and business teams overseeing bank operations.
“Clear terminology and practical risk management are vital for accelerating AI adoption in financial services,” stated Paras Malik, Chief Artificial Intelligence Officer at Treasury. She emphasized that the lexicon and accompanying resources minimize uncertainty and support consistent implementation within banks.
Risk management framework builds on existing federal guidance
The Financial Services AI Risk Management Framework tailors existing federal guidance on AI risks—initially generic and applicable to multiple sectors—into targeted advice for banks and other financial services providers.
This framework equips institutions with various tools, including a questionnaire to assess their current stage of AI adoption, alongside a matrix of 230 control objectives aimed at managing risks throughout the technology’s lifecycle.
By organizing controls according to adoption stages, banks can allocate resources more efficiently without wasting efforts on controls that may not yet apply to their operations.
Prior to this week, the financial services sector had access to the National Institute of Standards and Technology (NIST) AI Risk Management Framework, released in January 2023, which provided some initial guidance on the matter.
Additionally, industry groups like the Financial Services Information Sharing and Analysis Center (FS-ISAC) have previously published white papers addressing adversarial threats and principles for responsible artificial intelligence.
The AI risk framework released on Thursday serves as an “operationalization” of the NIST framework, specifically customized for the financial services industry, according to the FSSCC.
This new framework translates the broad principles of the NIST guidelines into actionable, sector-specific control objectives that organizations can scale according to their size.
Josh Magri, CEO of the Cyber Risk Institute, praised the framework as providing scalable guidance that can accommodate different stages of adoption.
“It’s an essential resource for community and multinational institutions alike, empowering them to effectively manage AI risks while fostering growth and innovation,” Magri stated in a Thursday press release from the Treasury.
Framework answers specific calls from AI experts
Following the Treasury’s announcement on Wednesday, experts called for specific controls, several of which are reflected in the risk management framework.
“A significant gap in AI governance and security is whether small and mid-sized firms can manage third-party risks associated with AI effectively,” remarked David Brumley, Chief AI and Science Officer at the crowdsourced cybersecurity firm Bugcrowd.
The new framework seeks to bridge this gap by offering scalable guidance tailored for community banks, including a dedicated section for establishing third-party risk management processes and mandating due diligence on vendor data practices.
Chris Radkowski, an expert in governance, risk, and compliance at Pathlock, advocated for clearer discussions on model integrity risks, underscoring that compromised data leads to flawed decisions. He also called for explicit requirements regarding AI model inventories and human review checkpoints for autonomous operations.
The framework addresses these concerns by requiring institutions to create a centralized AI inventory, set data quality standards, and define oversight roles aimed at preventing unchecked autonomous decision-making.
Ram Varadarajan, CEO of Acalvio, called for prioritizing the mitigation of adversarial model manipulation, such as data poisoning and prompt injection.
He further suggested mandating real-time behavioral guardrails and automated disconnects for AI agents whenever their outputs deviate from established ethical or financial standards.
The framework meets these recommendations by explicitly requiring financial institutions to incorporate mechanisms for the rapid and controlled shutdown of systems exhibiting inconsistent performance.