The Implications of Biden’s National Security Memorandum for Artificial Intelligence


This analysis is a response to the latest news and will be updated. Contact [email protected] to speak with the author.

The White House released the unclassified version of its National Security Memorandum (NSM) on artificial intelligence (AI) today, following remarks by National Security Advisor Jake Sullivan. Broadly, this action aims to guide the use of AI in “national security systems,” continue US leadership in AI development and use, and promote AI adoption in the national security and intelligence arenas. These goals are important, but it is critical to evaluate the NSM and the accompanying guidance document carefully to ensure that the use of AI in national security is not unduly limited and that it maximizes the United States’ ability to lead on AI responsibly.

The NSM is a product of the Executive Order (EO) on the safe, secure, and trustworthy development and use of artificial intelligence, released on October 30, 2023, which some also criticized as heavy-handed. Part of the EO called for an interagency process to develop an NSM addressing AI use, including AI use by adversaries, within 270 days. The cybersecurity and broader national security implications of the EO were previously explored in an R Street analysis, and R Street’s Cybersecurity-AI Working Group has explored ways in which AI can be a positive force for cybersecurity.

There’s a lot to unpack in the NSM, but several high-level items stand out. These are preliminary reactions, with a more detailed analysis to follow.

1. The NSM is not an isolated measure. The Office of Management and Budget previously issued rules for federal agencies’ use of AI, but those rules targeted federal civilian agencies. The NSM is intended to complement and build upon these themes and applies to national security systems, with action steps directed at specific federal agencies and across all federal agencies. At the same time, offices such as the Chief Digital and Artificial Intelligence Office within the Department of Defense (DOD) have already done tremendous work on AI use and policy. Arguably, DOD led the way in AI research long before policymakers’ recent focus on AI; therefore, existing efforts should be reviewed in light of this measure.

2. The NSM is only partial. It is important to remember that there is also a classified version of this document, which means the public cannot access the entire product. Similarly, the NSM is accompanied by a governance and risk management framework that is intended to be more easily updated as needs evolve than the NSM itself (Section 4.2(e)(i)).

3. Maintaining US leadership is critical. The US has many advantages in the development and use of AI, a central point of departure for the NSM. For example, a majority of leading hardware companies, AI developers, and technical talent are based in the US. The NSM aims to support private sector developers with cybersecurity and counterintelligence resources and to designate the AI Safety Institute (AISI) as industry’s primary point of contact with the federal government, though related efforts have been criticized (Section 3.3(c)). The NSM assigns AISI many new tasks, from testing and evaluation guidance to potentially determining whether dual-use foundation models could harm public safety (Section 3.3(e)). AISI’s increased role in national security is something to assess and watch closely, especially considering the role the Department of Commerce would play in national security applications.

4. National security uses are limited. The NSM specifies both prohibited uses of AI and high-impact use cases that require stricter oversight and due diligence (Section 4.2(e)). This guidance must be reviewed at the outset and continually evaluated to ensure that potential national security uses are not unduly limited, particularly as the technology advances rapidly. Likewise, because adversaries will not respect guardrails and borders, it is important that this guidance keeps the US from falling behind while still leveraging AI responsibly.

5. AI adoption must be a priority. National security agencies and the military have already taken advantage of AI, and that trend should continue as adversaries seek to do the same, but for nefarious purposes. The NSM encourages the use of AI systems in these cases. Private sector involvement is imperative, as much of the development has taken place within the private sector; however, it is important to remember that AI is not a panacea. Take cybersecurity, for example, where AI can play a critical role, although humans should still be at the center.

Rivals will continue their efforts to surpass the United States in both AI development and use. They will also seek to undermine US efforts, which could have serious national security implications. This reality must be our guiding light for AI actions in both the civilian and national security arenas.
