STOP HATE Act of 2025
The STOP HATE Act of 2025 (also called the Stopping Terrorists Online Presence and Holding Accountable Tech Entities Act of 2025) would compel large social media platforms to publish their terms of service and to report extensive data to the federal government on how those terms are enforced against content linked to foreign terrorist organizations and Specially Designated Global Terrorists (SDGTs). Within 180 days, platforms meeting the bill's size threshold would have to publish terms of service for each platform, along with information about user contact options, how users can flag content or groups, and how quickly the company responds. The bill requires platforms to submit detailed triannual reports to the Attorney General (DOJ) describing the terms, violations, actions taken, and trends, with data disaggregated by content type, platform, and flagging or action method. The Attorney General could seek civil penalties of up to $5 million per violation per day for noncompliance.

The bill also directs the Director of National Intelligence to provide Congress with a National Intelligence Estimate (NIE) on platform use by designated terrorists and requires periodic reports from the Comptroller General (GAO) on implementation. The act sunsets after five years, and it emphasizes First Amendment protections and compliance with privacy law. In short, the bill would increase transparency and federal oversight of how major social media platforms apply their rules to terrorist-designated individuals and groups, with significant potential penalties for noncompliance and a five-year sunset unless the program is renewed.
Key Points
1. Public publication of terms of service and related information: Platforms with at least 25 million monthly U.S. users must publish terms of service for each platform (noting where no terms exist) within 180 days, along with contact information, processes for flagging content and groups, commitments on response and resolution times, and the ways in which content or users can be actioned.
2. Detailed reporting to the Attorney General: Platforms must submit triannual, comprehensive reports detailing the versions of their terms of service, violations, and outcomes (flags, actions, removals, demonetization, deprioritization, views and shares, and appeals and reversals), with data disaggregated by content category, type, and media, and by flagging method and action method (employees, AI, moderators, civil society, or users in each case). The reports must also thoroughly evaluate changes over time.
3. Civil penalties for noncompliance: The Attorney General may bring civil actions seeking penalties of up to $5,000,000 per violation per day for failure to publish terms of service, to submit reports on time, or to report information accurately.
4. Additional mandated reports: The Director of National Intelligence must provide a National Intelligence Estimate on platform use by designated terrorists, with an unclassified version made public. The Comptroller General must report on implementation at set intervals.
5. Sunset provision: The authority to implement this act terminates five years after enactment unless renewed.
6. Definitions and scope: The act defines terms such as “actioned,” “content,” “social media platform,” “social media company,” and “terms of service.” It applies to platforms that meet the FTC’s definition of a social media platform and have at least 25 million unique U.S. monthly users, focusing on foreign terrorist organizations and SDGTs.
7. Protections and limitations: The act explicitly states that it should not diminish First Amendment rights and requires compliance with privacy and confidentiality laws, including the Privacy Act of 1974.