S. 2117 · 119th Congress · In Committee

Preventing Deep Fake Scams Act

Introduced: Jun 18, 2025
Technology & Innovation
Standard Summary

The Preventing Deep Fake Scams Act would create a Task Force on Artificial Intelligence in the Financial Services Sector to study and report to Congress on how artificial intelligence is used in banking and related services, with a focus on safeguarding consumers from AI-enabled fraud and deep fake scams. Chaired by the Secretary of the Treasury, the Task Force would include senior representatives from major financial regulators (OCC, Federal Reserve, FDIC, CFPB, NCUA) and enforcement (FinCEN). The Task Force must deliver a comprehensive report within one year, following a public feedback process and consultation with banks, credit unions, AI vendors, and experts. The report would cover protections, standard definitions for AI terms (including deep fakes), potential risks, best practices, and regulatory recommendations. The Task Force would terminate 90 days after the final report is issued. In short, the bill favors a formal, multi-agency study and set of recommendations rather than immediate new rules or enforcement.

Key Points

  1. Establishes the Task Force on Artificial Intelligence in the Financial Services Sector, chaired by the Secretary of the Treasury, with high-level representation from key financial agencies.
  2. Requires a final report to Congress within one year, plus a 90‑day public feedback period after enactment and consultations with a broad set of stakeholders (banks, credit unions, AI vendors, and AI experts).
  3. The report must include: (A) how banks and credit unions proactively protect against AI-enabled fraud; (B) standard definitions for AI usage (e.g., generative AI, machine learning, natural language processing, algorithmic AI, deep fakes); (C) potential risks from AI misuse by criminals; (D) best practices for institutions to protect customers; (E) legislative and regulatory recommendations for AI regulation and consumer protection.
  4. Finds that voice banking and the accessibility of video/audio media online heighten the risk from deep fakes and data theft.
  5. Termination: the Task Force ends 90 days after the final report is issued.

Impact Areas

Primary group/area affected
- Banks and credit unions (and their customers): the bill focuses on how these institutions can defend against AI-enabled fraud, including identity theft and data breaches, and on establishing common definitions and best practices.

Secondary group/area affected
- AI vendors and third-party service providers serving financial institutions: stakeholders to be consulted; the act addresses how AI is used in services and potential regulatory considerations.

Additional impacts
- Regulators and policymakers: potential guidance and recommendations for future AI regulation and consumer protections.
- Consumers: potential improvements in awareness and protections against fraud involving AI (e.g., deep fake voice/video scams).
- The bill signals a potential shift toward standardization of AI terminology and risk assessment within the financial sector, absent immediate prescriptive rules.

The bill is a study-and-report measure, not a new regulatory regime or enforcement mechanism. It emphasizes collaboration across major federal regulators and industry stakeholders to map current practices and propose next steps for safeguarding financial services against AI-driven scams.