President Biden Signs Executive Order Establishing Framework for Artificial Intelligence Regulation


On October 30, 2023, President Biden signed an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the “Order”), which establishes an initial framework for the U.S. Government to develop policies regulating emerging artificial intelligence (“AI”) technologies. The Order is intended to jumpstart a Federal Government-wide effort to formalize standards and laws ensuring the responsible and safe development and use of AI, beginning with a directive to develop guidelines, standards, and best practices for AI safety and security. According to a Fact Sheet the White House released with the Order, these AI standards are intended to protect Americans’ privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, and advance American leadership globally. More specifically, the standards are intended to address chemical, biological, radiological, nuclear, cybersecurity, and other foreseeable national security risks, as well as social harms such as fraud, discrimination, and the dissemination of misinformation.

As initial guidance establishing a framework for AI policies and regulations, the Order tasks several federal agencies with developing and implementing its directives over the next 90 to 365 days. The National Institute of Standards and Technology (NIST) is tasked with developing the standards, tools, and tests needed to help ensure that AI systems are safe, secure, and trustworthy. NIST will coordinate with the Department of Homeland Security (DHS), the Department of Energy (DOE), and other agencies to develop rigorous standards for “red team” testing, which aims to break an AI system in order to expose its vulnerabilities. DHS will apply the NIST testing standards to the United States’ critical infrastructure and will work with DOE to address the threats AI systems pose to critical infrastructure.

The Order follows, and the Fact Sheet refers to, voluntary commitments from fifteen major technology companies, including Microsoft, Google, Amazon, and Meta, to work toward developing regulations and standards for the safety, security, and trustworthiness of AI. Accordingly, by invoking the Defense Production Act, the Order initially requires only developers of large AI systems exceeding a specified threshold of computational power to apply the rigorous NIST testing standards to their systems and report the results before releasing those systems to the public. The Department of Commerce is tasked with further defining the computational power and technical requirements that will subject an AI system to the testing and reporting requirements, to ensure that all AI systems posing national security risks, or the other risks described above, are captured going forward.

In addition to developing standards to ensure AI systems are secure, agencies are tasked with developing AI guidelines that address social and economic issues affecting Americans, including equity and civil rights, fraud or deception caused by AI-generated inauthentic content, AI entering (and increasingly displacing) the workforce, AI’s potential to advance U.S. healthcare and education, and the protection of the individual privacy and data that AI systems capture or are trained on.

In response to concerns that AI systems may embed or develop discriminatory practices, the Department of Justice and Federal civil rights offices are directed to develop policies to prevent and address discriminatory practices in AI systems, including algorithmic discrimination, as well as best practices for investigating and prosecuting civil rights violations. The Department of Commerce is tasked with developing content authentication and watermarking measures to combat fraud and misinformation caused by generative AI, which can be used to create “deepfake” photos and videos, for example. The Department of Labor is tasked with developing best practices to minimize AI’s harms and maximize its benefits to workers, and with producing a report on AI’s potential impacts on the future labor market, including job displacement. The Department of Health and Human Services and the Department of Education are respectively tasked with investigating how AI can best improve the healthcare and education systems, including through the development of medications and AI-based educational tools. Because AI systems require massive amounts of data to learn, a core element of the standards NIST is directed to develop is the preservation and protection of individuals’ privacy and the data used to train AI systems.

Finally, the Order promotes innovation and competition in the AI space by expanding grants and providing access to a publicly available national database of AI research and tools; directs the Secretary of State and the heads of other agencies to coordinate with the international community on AI initiatives to advance U.S. leadership in AI; establishes measures to ensure the responsible and effective governmental use of AI; and charges the National Security Council with developing a National Security Memorandum to keep the United States at the forefront of emerging AI technologies.

The Order is a necessary step toward cementing the United States as a leader in the development and effective use of AI technology to improve Americans’ lives, while protecting Americans from the potential harms of AI misuse, which could drastically outweigh any benefits AI offers. The next year should bring significantly more clarity on AI regulation as the relevant agencies implement their directives under the Order.
