From fraud detection to personalized investment advice, AI promises to revolutionize how banks operate and serve customers. However, a recent report by the U.S. Department of the Treasury raises concerns about the potential cybersecurity and fraud risks associated with this technology.
This blog will break down the major takeaways of that 50-page report, including:
- How financial institutions already use AI for cybersecurity and fraud detection
- Where AI is creating new fraud and cybersecurity threats
- The impact of regulations on AI for cybersecurity and fraud
- Best practices for managing AI-specific security risks
How AI is Already Reshaping the Financial Industry
For this report, the Treasury interviewed industry stakeholders on how their institutions already use AI.
Most institutions utilize AI for fraud and cybersecurity, transitioning from rule-based systems to advanced anomaly detection and behavioral analysis. This shift has enhanced their ability to combat evolving cyber threats.
For example, traditional threat detection measures identify malicious activities by matching them against a database of known signatures. Sophisticated attackers can bypass those measures by exploiting legitimate system tools. AI helps defend against these advanced threats by flagging anomalous behavior rather than waiting for a known signature to appear.
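To make that contrast concrete, here is a minimal sketch of signature-free anomaly detection, assuming hypothetical transaction features and using scikit-learn's IsolationForest; it is our illustration, not an implementation described in the Treasury's report.

```python
# Minimal illustration of signature-free anomaly detection.
# Feature names, distributions, and thresholds are hypothetical assumptions;
# a real system would use far richer behavioral features and careful tuning.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" activity: [amount_usd, hour_of_day, logins_past_24h]
normal = np.column_stack([
    rng.normal(80, 25, 5000),   # typical purchase amounts
    rng.normal(14, 3, 5000),    # mostly daytime activity
    rng.poisson(2, 5000),       # a couple of logins per day
])

# Train only on observed behavior -- no catalog of attack signatures required.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Two new events: one routine, one unusual (large transfer at 3 a.m. after many logins)
events = np.array([
    [75.0, 13.0, 2.0],
    [9500.0, 3.0, 40.0],
])
for event, flag in zip(events, model.predict(events)):  # -1 = anomaly, 1 = normal
    print(event, "ANOMALY" if flag == -1 else "ok")
```

The second event is flagged because it deviates from learned behavior, not because it matches a known indicator, which is the property the interviewed institutions value.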
Interview participants said AI is helping them become more agile, and they believe it has the potential to significantly improve the cost-efficiencies and overall quality of their cybersecurity and anti-fraud strategies.
How institutions apply AI varies based on size. Larger banks often have more resources for internal AI development, while smaller institutions may rely more on external providers. Cloud adoption and data availability are influencing this choice as well.
The Role of Generative AI
Generative AI is enhancing automation in anti-fraud and cybersecurity by processing more data and enabling institutions to take proactive measures like employee training and policy analysis. However, developing comprehensive policies for Generative AI is difficult. Participants in the Treasury's study recognized that Generative AI models are still evolving, and they find the technology costly to implement and validate.
Participants noted their caution in adopting Generative AI. Their institutions are implementing safeguards to manage associated risks as part of their overall risk management strategy. They're also aligning their existing practices with the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) as part of this effort.
While regulations and standards offer a framework for managing AI risks, financial stakeholders know that AI development and supply chains are vulnerable to data poisoning, leakage, and integrity attacks at every stage, underscoring the importance of data security throughout the entire process.
AI and Fraud
Financial institutions are increasingly relying on AI-powered fraud detection systems, but their effectiveness is hindered by a lack of data sharing within the industry. At the same time, AI tools are lowering the barrier to entry for attackers by helping them develop more sophisticated malware.
Financial leaders surveyed for this report pointed to social engineering (manipulating people into revealing sensitive information or performing actions that compromise security) and identity spoofing (disguising oneself as a trusted entity to gain unauthorized access) as the two most successful types of fraud attacks, but the report also highlights:
- Malware development: Rapidly producing new and evasive malware
- Vulnerability discovery: Accelerating the process of finding and exploiting weaknesses in systems
- Disinformation: Spreading false information to manipulate public opinion and support other attacks
AI is also facilitating the creation of synthetic identities, complete with fabricated financial histories. This type of fraud is on the rise, costing billions of dollars in losses. While the full extent of AI's involvement in synthetic identity fraud is unclear, it is likely exacerbating the problem.
To bolster fraud prevention, financial institutions recognize the need for greater collaboration in sharing fraud data. This would enable the development of more robust AI models capable of identifying emerging fraud trends.
However, there are concerns about privacy and potential biases in historical data. Strong data protection measures and data anonymization techniques are crucial for mitigating these risks.
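As one illustration of what such protection can look like in practice (our example, not a technique the report prescribes), institutions that pool fraud records often pseudonymize customer identifiers with a keyed hash so contributions can still be linked without exposing the raw identifier:

```python
# Illustrative pseudonymization of customer identifiers before sharing fraud
# records. Field names and key handling are simplified assumptions; production
# systems also need key management, access controls, and legal review.
import hashlib
import hmac

SHARED_SECRET = b"replace-with-a-securely-managed-key"  # hypothetical key

def pseudonymize(identifier: str) -> str:
    """Return a keyed, irreversible token for a customer identifier."""
    return hmac.new(SHARED_SECRET, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

fraud_record = {
    "account_id": "ACCT-00123456",
    "amount_usd": 9500.00,
    "channel": "wire",
    "flagged_reason": "suspected synthetic identity",
}

shared_record = {**fraud_record, "account_id": pseudonymize(fraud_record["account_id"])}
print(shared_record)
```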
AI Cybersecurity Risks
The Treasury’s report further revealed that financial institutions are starting to recognize the unique cybersecurity risks posed by AI. They are examining these threats and leveraging resources like NIST’s adversarial machine learning publication to develop countermeasures.
The Treasury identifies the following categories of AI cyberthreats:
- Data poisoning: Bad actors corrupt AI training data to skew the resulting model (illustrated in the toy sketch after this list)
- Data leakage during inference: Malicious actors extract sensitive information by reverse-engineering the model or systematically querying it during its operational phase
- Model extraction: Adversaries reconstruct or steal the model itself, often by systematically harvesting its outputs
- Evasion: Threat actors craft inputs that manipulate a deployed model into producing their desired output
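To ground the first category, here is a toy sketch of data poisoning via targeted label flipping on a synthetic dataset; it is our illustration, not an example from the report or NIST's publication, and the attack is deliberately crude for clarity.

```python
# Toy demonstration of training-data poisoning via targeted label flipping.
# The dataset is synthetic; real poisoning attacks are subtler and harder to detect.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=10, weights=[0.9], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Clean baseline: how much known fraud (class 1) does the model catch?
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoned training set: relabel 40% of fraud examples as legitimate,
# the kind of corruption an attacker with write access to training data might attempt.
rng = np.random.default_rng(0)
fraud_idx = np.where(y_train == 1)[0]
flipped = rng.choice(fraud_idx, size=int(0.4 * len(fraud_idx)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flipped] = 0
poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("fraud recall, clean model:   ", recall_score(y_test, clean.predict(X_test)))
print("fraud recall, poisoned model:", recall_score(y_test, poisoned.predict(X_test)))
```

The poisoned model typically misses noticeably more of the held-out fraud, which is exactly the kind of silent degradation that makes data integrity controls important.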
Regulatory Impacts of AI in Finserv
Financial institutions are working closely with regulators as they try to navigate the expectations and solutions for AI. Financial regulators focus on overall risk management rather than specific technologies like AI, so while there are no regulations specifically designed for AI, the existing laws, regulations, and guidance are foundations for safe and responsible use.
The Treasury Department points out some key risk management principles for fraud and cybersecurity risk, including:
- Risk management processes for introducing AI-supported activities
- Rigorous testing and monitoring to manage AI-related risks, regardless of whether AI is formally classified as a model
- Comprehensive technology risk management frameworks, including robust testing and monitoring, to effectively manage AI-related risks and ensure compliance with regulatory standards
- Compliance with data privacy regulations like the Gramm-Leach-Bliley Act, by ensuring data quality, understanding data limitations, and protecting sensitive customer information
- Robust third-party risk management practices, including rigorous testing and monitoring, to manage AI-related risks
- Applying existing insurance laws and regulations to the entire AI lifecycle by following the NAIC’s Model Bulletin on AI
Financial regulators are actively monitoring and adapting to the evolution of AI in the financial sector. They are establishing dedicated units, conducting research, and collaborating internationally to understand and mitigate potential risks while maximizing benefits.
However, the complex nature of AI requires ongoing assessment and potential regulatory updates to ensure financial stability and consumer protection.
Financial Institutions’ Best Practices for AI Cybersecurity
One of the most important sections of the Treasury's report is its recommendations for managing AI-specific cybersecurity risks. In it, the report's interviewees shared their best practices for mitigating cyber and fraud risks related to AI. Here is a summary of those recommendations:
- Tie-In AI Risk Management with Existing Programs
Financial institutions are integrating AI-specific risk management into their overall enterprise risk management framework. This approach leverages existing structures like the three lines of defense model to manage AI-related risks. It emphasizes the importance of clear governance, transparency, and ongoing monitoring throughout the AI system's lifecycle.
- Develop AI-Specific Risk Management Frameworks
Participants revealed that, while existing AI frameworks and guidelines help, they're also developing their own tailored AI frameworks to match their institution's unique use cases.
- Internal Governance Structure
Institutions are establishing dedicated AI roles or teams, or integrating AI risk management into existing departments like technology, security, or risk management.
- Proactive Approaches to the Data Supply Chain
According to interviewed institutions, many are building a comprehensive inventory of data and mapping its supply chain to better understand how raw data from various systems affects AI models. A corporate data lead like a Chief Data Officer (CDO) is often responsible for this task, but the role is likely to evolve.
- Due Diligence in Third-Party Vendor Selection
Institutions are assessing factors like data privacy, model validation, and security practices to understand how AI is integrated into vendor products and services and the potential impact on customers. Leveraging resources like FS-ISAC's Generative AI Vendor Evaluation & Qualitative Risk Assessment guide and accompanying tool can aid in the evaluation process.
- Analyzing Current Practices Against NIST Cybersecurity Framework
Surveyed institutions are using the NIST Cybersecurity Framework to identify areas where AI can improve their defenses.
- Tiered Multi-Factor Authentication
Financial institutions recognize the need for stronger authentication methods to combat AI-powered threats. While current solutions like biometrics are vulnerable, options like hardware-based authentication and passwordless methods are gaining traction. It's also crucial to avoid disabling signals like geolocation or device fingerprinting, as they can enhance security. A rough sketch of this kind of tiered, risk-based step-up logic follows this list.
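As promised above, here is a sketch of tiered, risk-based step-up authentication. The signals, weights, and thresholds are purely hypothetical assumptions for illustration, not anything the report specifies.

```python
# Hypothetical sketch of tiered, risk-based step-up authentication.
# Signal names, weights, and thresholds are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class LoginContext:
    known_device: bool        # device fingerprint seen before
    usual_geolocation: bool   # request comes from a familiar region
    high_value_action: bool   # e.g., adding a payee or a large transfer

def risk_score(ctx: LoginContext) -> int:
    score = 0
    if not ctx.known_device:
        score += 2
    if not ctx.usual_geolocation:
        score += 2
    if ctx.high_value_action:
        score += 3
    return score

def required_factors(ctx: LoginContext) -> list[str]:
    score = risk_score(ctx)
    if score >= 5:
        # Highest tier: phishing-resistant, hardware-backed factor
        return ["password", "hardware_security_key"]
    if score >= 2:
        return ["password", "one_time_passcode"]
    return ["password"]

risky = LoginContext(known_device=False, usual_geolocation=False, high_value_action=True)
print(required_factors(risky))  # ['password', 'hardware_security_key']
```

The design point is that riskier contexts demand stronger, harder-to-phish factors while routine activity keeps friction low, and that signals like device fingerprinting and geolocation feed the decision rather than being switched off.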
How the Treasury Department and Other Agencies Can Help
The Treasury Department wraps up its report by identifying opportunities for itself and other agencies, regulators, and the private sector to help financial services navigate the impact of AI-related cybersecurity and fraud risks. Here’s a summary of their suggestions:
- Establish a common AI lexicon: The imprecise use of terms like "artificial intelligence" hinders clear discussions with regulators, third parties, and stakeholders. Developing a shared vocabulary will improve risk assessment, transparency, and trust in AI systems. As a starting point, the Treasury included a glossary section in the report based on NIST's AI RMF and Adversarial Machine Learning documents.
- Address the gap between large and small financial institutions: Larger firms have more resources to develop in-house AI solutions, while smaller institutions rely heavily on third-party providers. Cloud adoption can help bridge this gap, but early adopters have a significant advantage. The Treasury aims to facilitate collaboration between core providers, financial institutions, and regulators to address the challenges faced by smaller institutions and promote wider AI adoption.
- Encourage data collaboration: There's a significant data imbalance in anti-fraud AI development, favoring larger financial institutions. Smaller institutions struggle to access sufficient data to build effective anti-fraud models. While cybersecurity data sharing has improved, fraud data sharing remains limited. The American Bankers Association (ABA) is working on a new fraud data sharing platform, and the Treasury is committed to leading efforts to improve fraud data availability, including sharing government data and establishing an internal AI anti-fraud team.
- Coordinate on AI Regulations to Mitigate Risks: The current regulatory environment generally supports financial innovation, with institutions actively collaborating with regulators on AI matters. However, there's a growing concern about potential regulatory fragmentation across different jurisdictions. To address this, the Treasury will work with industry and regulatory partners to map existing and anticipated AI regulations, identify opportunities for coordination, and potentially establish AI-specific coordinating groups to promote responsible AI development while mitigating risks.
- Expand the NIST AI Risk Management Framework: Financial institutions find the NIST AI RMF helpful but seek more detailed guidance on AI governance, especially within the financial sector context. The Treasury will collaborate with NIST to create a financial sector-specific AI RMF profile within the new AI consortium. This will enhance the NIST framework and provide more tailored guidance for financial institutions.
- Standardize Data Supply Chain Mapping & Labeling: Financial institutions are concerned about data privacy and liability risks associated with Generative AI training data. They are closely monitoring their internal data to prevent unauthorized access and use in AI model training. To address these challenges, the industry needs standardized data supply chain mapping and labeling practices, similar to nutrition labels, for AI models. The Treasury will collaborate with government agencies and industry members to explore best practices and build on existing efforts like the Software Bill of Materials (SBOM). An illustrative sketch of what such a data label might contain appears after this list.
- Improve Explainability for Black Box AI Systems: Explainability is a major challenge for financial institutions using advanced ML models, especially Generative AI. There is a need for research and development to improve explainability in black box models. In the meantime, financial institutions should adopt best practices for managing AI without full explainability, such as ensuring data quality and limiting usage to appropriate scenarios. A comprehensive framework for testing and auditing black box AI systems is also necessary. The Treasury will collaborate with industry and government partners to address these challenges and develop solutions for the financial sector.
- Fix the Talent Shortage: The rapid advancement of AI has created a significant talent shortage across the financial industry. This gap affects both AI development and management roles. Financial institutions are struggling to build in-house AI expertise and rely heavily on third-party providers. To address this, tailored training programs for less skilled practitioners and AI-specific training for non-IT roles are crucial. Additionally, general AI awareness training for all employees is essential to mitigate risks and maximize benefits.
- Combat Digital Identity Challenges: The shift to remote financial services has highlighted vulnerabilities in traditional identity verification methods, making financial institutions susceptible to fraud. Emerging digital identity standards and solutions offer potential improvements but face challenges in terms of technology, governance, and security. The Treasury is actively monitoring these developments and collaborating with government agencies to establish strong frameworks for digital identity, including data privacy, security, and inclusivity. Ultimately, the goal is to enhance fraud prevention, strengthen cybersecurity, and promote financial inclusion.
- Establish International Regulation: International AI regulation for financial services is still developing. The Treasury is actively engaged in global discussions to address the risks and benefits of AI. This includes working with partners like the UK, EU, FSB, G20, OECD, and G7 to establish international standards and best practices. The goal is to foster responsible AI innovation while mitigating potential risks to financial stability and cybersecurity.
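Returning to the data supply chain labeling recommendation above: as a purely illustrative sketch (the field names are our assumptions, not a proposed standard), a "nutrition label" for a training dataset might capture provenance, permitted uses, and sensitivity in a simple machine-readable record.

```python
# Illustrative "nutrition label" for a training dataset. Field names are
# hypothetical; an actual standard would be negotiated across the industry,
# much as the Software Bill of Materials (SBOM) was for code components.
import json

data_label = {
    "dataset_name": "retail_transactions_2023_q4",
    "source_systems": ["core_banking", "card_processor_feed"],
    "collected_from": "2023-10-01",
    "collected_to": "2023-12-31",
    "contains_pii": True,
    "pii_treatment": "pseudonymized account identifiers",
    "permitted_uses": ["fraud_model_training", "model_validation"],
    "prohibited_uses": ["marketing", "external_generative_ai_training"],
    "retention_days": 730,
    "data_owner": "Chief Data Officer",
    "last_reviewed": "2024-03-15",
}

print(json.dumps(data_label, indent=2))
```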
Concluding Thoughts
The U.S. Treasury Department’s report ultimately concludes that the successful integration of AI into the financial sector requires a delicate balance between innovation and risk management. By understanding the risks and implementing best practices, financial institutions can harness the power of AI to protect their customers and maintain trust.