The Financial Industry Regulatory Authority (“FINRA”) and the U.S. Department of the Treasury (“Treasury”), the latter as part of a public-private partnership, have recently issued guidance regarding the use of AI by the financial services industry. This alert summarizes certain AI-related updates from the 2026 FINRA Annual Regulatory Oversight Report (the “Report”) and from the Treasury partnership’s recently published AI Lexicon and Financial Services AI Risk Management Framework.
FINRA
FINRA’s 2026 Report contains a new section specifically devoted to generative AI (“GenAI”). The Report clarifies that “FINRA’s rules… and the securities laws more generally, continue to apply when firms use GenAI or similar technologies in the course of their businesses, just as they apply when firms use any other technology or tool.”1 The Report suggests that existing rules regarding supervision, communications, recordkeeping, and fair dealing may apply to uses of GenAI by securities broker-dealers.2
The Report provides recommendations for firms contemplating GenAI solutions. Such recommendations include:
- “Robust testing of GenAI to understand the capabilities, limitations, and performance of the model. Testing areas to consider include areas such as privacy, integrity, reliability, and accuracy.”
- “Ongoing monitoring of prompts, responses, and outputs to confirm the GenAI solution continues to perform as expected and results in compliant behavior.”
- “Approaches to identify and mitigate associated risks, including, but not limited to, accuracy (e.g., hallucinations) and bias.”
- “Assessing whether the firm’s cybersecurity program appropriately contemplates: risks associated with the firm’s and its third-party vendors’ use of GenAI; and how its technology tools, data provenance, and processes identify how threat actors use AI or GenAI against the firm or its customers.”
- “Developing supervisory processes to develop and use GenAI at an enterprise level.”
- “Establishing a supervision, governance, or model risk management framework that establishes clear policies and procedures to develop, implement, use, and monitor GenAI, while maintaining comprehensive documentation throughout.”3
In addition to the Report, Treasury has issued guidance that is instructive on the use of AI by financial institutions.
U.S. Department of the Treasury
The U.S. Department of the Treasury recently released two new resources to guide AI use in the financial sector as part of the President’s AI Action Plan,4 focusing on “clear standards, shared understanding, and risk-based governance to ensure artificial intelligence is deployed safely and responsibly.”5 Developed through a public-private partnership, the resources are intended to enable the “secure and resilient”6 development and use of AI across the U.S. financial system. The publications consist of an “AI Lexicon,”7 which promotes a shared AI vocabulary, and the “Financial Services AI Risk Management Framework” (the “Framework”), which adapts the NIST AI Risk Management Framework for specific application to the financial services industry.8 These publications are two of six deliverables that the partnership plans to publish to “provide a foundation for the use of AI in financial services, addressing governance, data practices, transparency, fraud, and digital identity in an integrated way.”9
The Treasury partnership describes the Framework as “an industry‑led, sector‑specific AI risk management framework developed through public‑private collaboration with more than 100 financial institutions and input from U.S. and international agencies, including NIST. Structurally aligned with the NIST AI RMF and expanded with 230 Control Objectives, it helps financial organizations of all sizes manage and govern AI risks while enabling responsible innovation.”10
The Framework consists of four components: (1) an AI adoption stage questionnaire; (2) a risk and control matrix; (3) a user guidebook; and (4) a control objective reference guide. The Treasury partnership also published a website which provides information related to, and at times facilitates participation in, the Framework.
Conclusion
These recent developments indicate that AI is now officially on the radar of U.S. financial authorities, regulators, and private industry groups. Though the FINRA Report, the AI Lexicon, and the Framework are all styled as non-binding guidance, they may nevertheless indicate a trend toward increased scrutiny of AI practices and form standards against which financial services companies could be evaluated.
If your business provides financial products or services and utilizes or plans to onboard any AI technologies, now is the time to begin developing an AI compliance strategy and program that takes into account the above guidance in addition to new and emerging state AI and data protection laws.
Taft’s Finance; Privacy, Security, and AI; and FinTech practice groups have experience helping clients in the financial services industry develop risk-based compliance strategies for AI and data protection laws, regulations, and standards.
Footnotes:
- 2026 FINRA Annual Regulatory Oversight Report at p. 24. ↩︎
- Id. ↩︎
- All quoted language in bullets from the 2026 FINRA Annual Regulatory Oversight Report at 26. ↩︎
- Winning the Race: America’s AI Action Plan (July 2025). ↩︎
- U.S. Department of the Treasury, Treasury Releases Two New Resources to Guide AI Use in the Financial Sector. ↩︎
- Financial Services Sector Coordinating Council, Financial Sector Artificial Intelligence Executive Oversight Group Deliverables (Feb. 19, 2026). ↩︎
- Financial Services Sector Coordinating Council, Artificial Intelligence Executive Oversight Group AI Lexicon (Feb. 2026). ↩︎
- Cyber Risk Institute, Financial Services AI Risk Management Framework (Feb. 2026). ↩︎
- U.S. Department of the Treasury, Treasury Releases Two New Resources to Guide AI Use in the Financial Sector. ↩︎
- Cyber Risk Institute AI Risk Management Framework. ↩︎