Artificial Intelligence Risk Evaluation Act of 2025
This bill, the Artificial Intelligence Risk Evaluation Act of 2025, would direct the federal government to create a formal, data-driven process for evaluating the risks of advanced artificial intelligence (AI) systems. The Department of Energy (DOE) would establish an Advanced Artificial Intelligence Evaluation Program to systematically test, measure, and report on the likelihood of adverse AI incidents. The bill obligates developers of covered AI systems to participate in the program and to disclose certain internal materials (such as code, training data, model weights, and architecture) upon request, with stiff penalties for noncompliance. It also requires that Congress receive empirical data to inform potential regulatory options and broader oversight decisions, including the possibility of national-level intervention if AI develops toward or beyond superintelligence. The program would operate for seven years unless renewed, and the bill requires a detailed plan for a permanent federal framework for AI oversight, including regular updates and governance options.

In short, the bill would establish a mandatory DOE-led testing regime for high-power AI systems, compel information sharing from developers, impose severe penalties for noncompliance, and create a pathway for ongoing federal oversight and potential regulation of advanced AI, with a sunset provision and a requirement to report back to Congress with evidence-based governance options.
Key Points
1. Establishment of the Advanced Artificial Intelligence Evaluation Program within the Department of Energy, due within 90 days of enactment, to standardize testing and evaluation of advanced AI systems.
2. Mandatory participation for covered AI system developers and compelled disclosure of materials (code, training data, model weights, interfaces, training details) upon request; prohibition on deployment in interstate or foreign commerce without compliance; penalties of at least $1,000,000 per day for violations.
3. Comprehensive testing and evaluation activities, including standardized and classified testing, red-team/adversarial testing, third-party assessments, and blind evaluations to produce reliable risk data; requirement to issue risk reports and containment/mitigation guidance.
4. Plans for a permanent federal oversight framework: within 360 days, the Secretary must propose a detailed plan for overseeing AI, including standards, licensing, and governance structures, with mandatory annual updates to Congress incorporating new data and lessons from the program.
5. Sunset provision: the program expires after seven years unless Congress renews it; the bill also contemplates options such as nationalization or other strategic measures if imminent or existential risks are identified, backed by a robust evidence base to inform regulatory and oversight decisions.