LegisTrack
H.R. 5681 · 119th Congress · In Committee

STOP HATE Act of 2025

Introduced: Oct 3, 2025
Sponsor: Rep. Josh Gottheimer [D-NJ-5]
Defense & National Security · Technology & Innovation
Standard Summary
Comprehensive overview in 1-2 paragraphs

The STOP HATE Act of 2025 (also called the Stopping Terrorists Online Presence and Holding Accountable Tech Entities Act of 2025) would require large social media platforms to publicly publish their terms of service and to report extensive data to the federal government on how those terms are enforced against content linked to foreign terrorist organizations and Specially Designated Global Terrorists (SDGTs). Within 180 days, platforms meeting the bill’s size threshold would have to publish terms of service for each platform they operate, along with user contact options, processes for flagging content or groups, and response-time commitments. Platforms would also submit detailed triannual reports to the Attorney General describing their terms, violations, actions taken, and trends, with data disaggregated by content type, platform, and flag/action method. The Attorney General could seek civil penalties of up to $5 million per violation per day for noncompliance.

The bill also directs the Director of National Intelligence to provide Congress a National Intelligence Estimate (NIE) on the use of platforms by designated terrorists, and requires periodic Comptroller General (GAO) reports on implementation. The act sunsets five years after enactment, explicitly preserves First Amendment rights, and requires compliance with privacy law. In short, the bill would increase transparency and federal oversight of how major social media platforms apply their rules to terrorist-designated individuals and groups, with significant potential penalties for noncompliance and a five-year sunset unless renewed.

Key Points

  1. Public publication of terms of service and related information: Platforms with at least 25 million U.S. monthly users must publish terms of service for each platform (noting where no terms exist) within 180 days, along with contact information, processes for flagging content and groups, response/resolution time commitments, and the ways content or users can be actioned.
  2. Detailed reporting to the Attorney General: Platforms must provide comprehensive triannual reports detailing term versions, violations (flags, actions, removals/demonetization/deprioritization, views/shares, appeals/reversals, and outcomes), with data disaggregated by content category, type, media, and flagging and action method (employees, AI, moderators, civil society, or users). A thorough evaluation of changes over time is required.
  3. Civil penalties for noncompliance: The Attorney General may sue for civil penalties of up to $5,000,000 per violation per day for failure to publish terms, submit reports on time, or report information accurately.
  4. Additional mandated reports: The Director of National Intelligence must provide a National Intelligence Estimate on platform use by designated terrorists, with an unclassified version made public. The Comptroller General must report on implementation at set intervals.
  5. Sunset provision: The authority to implement this act terminates five years after enactment unless renewed.
  6. Definitions and scope: The act defines terms such as “actioned,” “content,” “social media platform,” “social media company,” and “terms of service.” It targets platforms that meet the FTC’s definition of a social media platform and have at least 25 million unique U.S. monthly users, focusing on foreign terrorist organizations and SDGTs.
  7. Protections and limitations: The act explicitly states that it does not diminish First Amendment rights and requires compliance with privacy and confidentiality laws, including the Privacy Act of 1974.

Impact Areas

Primary group/area affected
- Large social media platforms (those with at least 25 million U.S. monthly users) and their compliance teams. The bill would impose new publication obligations and multi-source data reporting to the DOJ, with risk of substantial civil penalties for noncompliance.

Secondary group/area affected
- U.S. users and content moderators, who would be affected by the emphasis on how content is flagged and actioned, and by transparency requirements that may influence platform moderation practices.

Additional impacts
- National security oversight: The DNI and GAO would play a role in national security analysis and program oversight, potentially increasing federal insight into how platforms handle terrorist-designated content.
- Privacy and confidentiality: The reporting framework requires balancing transparency with privacy protections and compliance with the Privacy Act; data handling and public accessibility of reports are key considerations.
- Public discourse and First Amendment considerations: The bill foregrounds First Amendment protections, but broader transparency and enforcement could influence platform policies and user speech dynamics.
- Operational and legal risk for platforms: Substantial reporting burdens, the possibility of large civil penalties, and the sunset provision may affect platform policy decisions and resource allocation.
Generated by gpt-5-nano on Oct 16, 2025