Content Origin Protection and Integrity from Edited and Deepfaked Media Act of 2025
The Content Origin Protection and Integrity from Edited and Deepfaked Media Act of 2025 aims to increase transparency about where digital content comes from and how it has been altered, in order to protect artistic works and improve the ability to detect synthetic media. It creates a framework for standards on content provenance, watermarking, and synthetic-content detection, and requires certain digital content tools to support provenance tagging. The bill also establishes a public-private standards effort, funds research and public education, and sets up enforcement mechanisms through the FTC, state attorneys general, and private rights of action for content owners. In short, the bill seeks to (1) make provenance and authenticity information more visible and reliable, (2) encourage safe, standards-based development of detection and watermarking technologies, and (3) provide enforcement and remedies against those who tamper with provenance information or use content to train AI without consent. It responds to ongoing concerns about deepfakes and synthetic media and positions the U.S. to lead in standards and innovation while protecting rights holders.
Key Points
- Establishes a public-private partnership led by the Under Secretary of Commerce for Standards and Technology to develop consensus-based standards and best practices for watermarking, content provenance information, and detection of synthetic and synthetically-modified content across media (images, audio, video, text, etc.), including guidelines for AI training data and transparency practices. It also calls for grand challenges and prizes with DARPA and NSF to advance detection and defense measures.
- Requires certain content-creation tools to support provenance, starting two years after enactment: tools whose primary purpose is to create synthetic or substantially modified content must offer users the option to attach content provenance information identifying the content as synthetic and, when the user chooses to include it, must implement security measures to keep that provenance data machine-readable and difficult to remove or alter.
- Prohibits the non-consensual use of content that has provenance information attached or associated to train AI or to generate synthetic content; such use is permitted only with the owner's express, informed consent and in accordance with applicable terms of use and copyright protections.
- Creates a strong enforcement framework: the FTC enforces violations as unfair or deceptive acts or practices; states’ attorneys general can sue on behalf of residents; and private rights of action are available to owners of covered content, with damages and potential injunctive relief. It includes a four-year statute of limitations for private actions.
- Provides definitions and scope: clarifies terms such as artificial intelligence, content provenance information, deepfakes, synthetic content, synthetically-modified content, covered content (copyrighted material under 17 U.S.C. 102), and covered platforms (large U.S. websites/apps meeting revenue or user thresholds). It also preserves existing copyright rights and clarifies that the bill does not alter other laws.