Executive Order 14110

Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence

Joseph R. Biden
Signed: Oct 30, 2023
Published: Nov 1, 2023
Standard Summary
Comprehensive overview

Executive Order 14110, signed October 30, 2023, establishes a government-wide framework for the safe, secure, and trustworthy development and use of artificial intelligence. It sets out eight guiding policy principles to be followed across government, industry, academia, labor, civil society, and international partners.

The order directs federal agencies to develop and adopt standards, testing, and governance practices for AI, including new requirements for evaluating safety, security, and ethics; labeling and provenance for AI-generated content; and stronger protections for privacy and civil rights. It also mandates concrete actions on high-risk AI systems and large computing resources, including mandatory reporting by certain organizations, and formalizes steps to manage AI's risks in critical infrastructure and cybersecurity.

While it is an executive directive rather than a new law, it significantly shapes how the federal government approaches AI risk management, oversight, and cooperation with industry and international partners. Key parts of the order include: adopting eight guiding principles for AI policy; creating and aligning safety and security guidelines, with strong emphasis on testing, red-teaming, and evaluation; requiring reporting and transparency around dual-use foundation models and large compute clusters; imposing safeguards on foreign use of U.S. AI infrastructure through IaaS providers and their foreign resellers; and integrating AI risk management into critical infrastructure and cybersecurity practices across federal agencies and critical sectors. These measures aim to protect privacy, civil rights, and consumers while supporting responsible innovation and U.S. global leadership in AI safety.

Key Points

  1. Eight guiding principles and priorities: The order directs agencies to govern AI in line with safety and security, responsible innovation and competition, worker interests, equity and civil rights, consumer protections, privacy and civil liberties, strengthened federal AI capabilities, and international leadership on safe AI practices.
  2. Safety, testing, and provenance framework: Within 270 days, the Secretary of Commerce (through NIST) must establish guidelines and resources to promote safe, secure, and trustworthy AI. This includes companion resources to NIST's AI Risk Management Framework and Secure Software Development Framework, plus benchmarks for evaluating AI capabilities, especially those that could cause harm. It also directs development of AI red-teaming guidelines for dual-use foundation models and the creation of testing environments (testbeds) for safe development and privacy-enhancing technologies (PETs).
  3. Reporting and oversight for dual-use foundation models and large compute: Within 90 days, the Secretary of Commerce must require companies developing or possessing dual-use foundation models to report information about training, ownership of model weights, and results of red-team testing. It also requires reporting on large computing clusters used to train such models, and a plan to guard against security risks. Technical conditions will be defined to determine which models and clusters are covered, with initial thresholds based on very large compute-and-data criteria.
  4. Foreign use and IaaS provider safeguards: The order empowers the Secretary of Commerce to regulate United States IaaS providers to address foreign involvement in training and deployment. Within 90-180 days, it directs proposing regulations that require foreign resellers to verify the identities of foreign users, meet minimum identity-verification standards, maintain records securely, and provide reports to the U.S. IaaS provider (and to the Secretary of Commerce). It also authorizes use of authorities under the International Emergency Economic Powers Act to implement these measures.
  5. Critical infrastructure and cybersecurity integration: The order directs risk assessments of AI use in critical infrastructure (within 90 days and on an ongoing basis), calls for a Treasury report on AI cybersecurity best practices for financial institutions, and requires DHS to integrate the AI Risk Management Framework into safety and security guidelines for critical infrastructure within 180 days and to coordinate cross-sector risk assessments. It also directs ongoing collaboration to ensure critical infrastructure owners and operators apply appropriate AI safety guidance.
  6. Defined terms: AI, AI system, AI model, dual-use foundation model, red-teaming, and other terms are defined to ensure consistent understanding across agencies.
  7. "IaaS" refers to Infrastructure as a Service products, i.e., cloud computing services that provide computing resources over the internet.
  8. "Testbeds," "privacy-enhancing technologies" (PETs), "watermarking," and "content provenance" refer to tools and methods used to test, protect, and verify AI outputs and their origins.
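The initial reporting thresholds in the order's interim criteria (Section 4.2(b)) can be sketched as a simple check: roughly 10^26 total training operations for a dual-use foundation model, 10^23 for models trained primarily on biological sequence data, and computing clusters with a theoretical peak above 10^20 FLOP/s networked at over 100 Gbit/s. The function and parameter names below are hypothetical illustrations, not terms from the order:

```python
# Illustrative sketch of the interim reporting thresholds in EO 14110 Sec. 4.2(b).
# Threshold values come from the order's initial criteria; all function and
# parameter names here are hypothetical, not official.

TRAINING_OPS_THRESHOLD = 1e26         # total integer/floating-point ops used in training
BIO_SEQUENCE_OPS_THRESHOLD = 1e23     # for models trained mainly on biological sequence data
CLUSTER_FLOPS_THRESHOLD = 1e20        # theoretical peak FLOP/s of a computing cluster
CLUSTER_NETWORK_GBPS_THRESHOLD = 100  # data-center networking speed, Gbit/s


def model_requires_reporting(training_ops: float, primarily_bio_data: bool) -> bool:
    """Return True if a training run exceeds the interim model-reporting threshold."""
    threshold = BIO_SEQUENCE_OPS_THRESHOLD if primarily_bio_data else TRAINING_OPS_THRESHOLD
    return training_ops > threshold


def cluster_requires_reporting(peak_flops: float, network_gbps: float) -> bool:
    """Return True if a computing cluster exceeds the interim cluster-reporting threshold."""
    return peak_flops > CLUSTER_FLOPS_THRESHOLD and network_gbps > CLUSTER_NETWORK_GBPS_THRESHOLD


# Examples with illustrative numbers:
print(model_requires_reporting(2e26, primarily_bio_data=False))  # True
print(model_requires_reporting(5e24, primarily_bio_data=False))  # False
print(cluster_requires_reporting(3e20, network_gbps=400))        # True
```

The thresholds are deliberately expressed as named constants because the order anticipates that the Secretary of Commerce will update the technical conditions over time.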
Generated by gpt-5-nano on Oct 3, 2025