Advanced AI Security Readiness Act
H.R. 3919, the Advanced AI Security Readiness Act, would require the Director of the National Security Agency (NSA), acting through the agency's Artificial Intelligence Security Center (or a successor office), to develop an "AI Security Playbook." The Playbook would outline strategies for defending "covered AI technologies" against technology theft by threat actors.

The bill focuses on identifying vulnerabilities in AI data centers and among AI developers, pinpointing information and components whose theft would meaningfully accelerate adversaries' progress, and detailing strategies for detecting, preventing, and responding to cyber threats. It also directs the government to consider what level of involvement would be necessary to secure highly advanced AI systems, and it envisions a hypothetical government-led, highly secure environment for building such AI, including measures such as protecting model weights, mitigating insider threats, enforcing access controls, and contingency planning.

The Playbook would include both classified and unclassified elements: a detailed, possibly classified methodology with intelligence assessments, plus an unclassified portion suitable for dissemination to relevant parties, including the private sector. The NSA would engage with leading AI developers and researchers and collaborate with a federally funded research center on AI security.

The bill requires initial and final reports to Congress, with unclassified and publicly available versions, and it explicitly states that nothing in the bill authorizes regulatory action by the U.S. government. Key terms include "covered AI technologies," "technology theft," and "threat actors," the last defined to include nation-states and other well-resourced actors.