Department of Commerce Announces New Actions to Implement President Biden’s Executive Order on AI
Apr 29, 2024
FOR IMMEDIATE RELEASE
Monday, April 29, 2024
Office of Public Affairs
publicaffairs@doc.gov
Announcements include draft guidance documents, a draft plan for international standards, and a new measurement program opening for public comment
The U.S. Department of Commerce announced today, following the 180-day mark since President Biden’s Executive Order (EO) on the Safe, Secure and Trustworthy Development of AI, several new actions to implement the EO. The Department’s National Institute of Standards and Technology (NIST) has released four draft publications intended to help improve the safety, security and trustworthiness of artificial intelligence (AI) systems. NIST has also launched a challenge series that will support development of methods to distinguish between content produced by humans and content produced by AI. In addition to NIST’s publications, Commerce’s U.S. Patent and Trademark Office (USPTO) is publishing a request for public comment (RFC) seeking feedback on how AI could affect evaluations of the level of ordinary skill in the art, which is used to determine whether an invention is patentable under U.S. law, and earlier this year released guidance on the patentability of AI-assisted inventions.
“In the six months since President Biden enacted his historic Executive Order on AI, the Commerce Department has been working hard to research and develop the guidance needed to safely harness the potential of AI, while minimizing the risks associated with it,” said U.S. Secretary of Commerce Gina Raimondo. “The announcements we are making today show our commitment to transparency and feedback from all stakeholders and the tremendous progress we have made in a short amount of time. With these resources and the previous work on AI from the Department, we are continuing to support responsible innovation in AI and America’s technological leadership.”
The NIST publications cover varied aspects of AI technology: The first two are guidance documents designed to help manage the risks of generative AI — the technology that enables chatbots and text-based image and video creation tools — and serve as companion resources to NIST’s AI Risk Management Framework (AI RMF) and Secure Software Development Framework (SSDF), respectively. A third NIST publication offers approaches for promoting transparency in digital content, which AI can alter; the fourth proposes a plan for developing global AI standards. These publications are initial drafts, which NIST is publishing now to solicit public feedback before submitting final versions later this year.
“For all its potentially transformative benefits, generative AI also brings risks that are significantly different from those we see with traditional software,” said Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio. “These guidance documents will not only inform software creators about these unique risks, but also help them develop ways to mitigate the risks while supporting innovation.”
USPTO is publishing an RFC seeking feedback on how AI could affect evaluations the agency makes as it determines whether an invention is patentable under U.S. law. For example, the use of AI raises questions about what qualifies as prior art and how to assess the level of skill of a person having ordinary skill in the art. USPTO expects the responses to the RFC to help it evaluate the need for further guidance on these matters, aid in the development of any such guidance, and inform its work in the courts and in providing technical advice to Congress.
“As AI assumes a larger role in innovation, we must encourage the responsible and safe use of AI to solve local and world problems and to develop the jobs and industries of the future, while ensuring AI does not derail the critical role IP plays in incentivizing human ingenuity and investment,” said Under Secretary of Commerce for Intellectual Property and Director of the USPTO Kathi Vidal. “This work builds on our inventorship guidance, which carefully set forth when the USPTO will issue a patent for AI-assisted innovations, and our continuing policy work at the intersection of AI and all forms of IP.”
More information on the announcements being made today can be found below. All four of the NIST publications are initial public drafts, and NIST is soliciting comments from the public on each by June 2, 2024. Instructions for submitting comments can be found in the respective publications.
Mitigating the Risks of Generative AI
The AI RMF Generative AI Profile (NIST AI 600-1) can help organizations identify unique risks posed by generative AI and proposes actions for generative AI risk management that best align with their goals and priorities. Developed over the past year and drawing on input from the NIST generative AI public working group of more than 2,500 members, the guidance document is intended to be a companion resource for users of NIST’s AI RMF. It centers on a list of 13 risks and more than 400 actions that developers can take to manage them.
The 13 risks include issues such as easier access to information related to chemical, biological, radiological or nuclear weapons; a lowered barrier to entry for hacking, malware, phishing, and other cybersecurity attacks; and the production of hate speech and toxic, denigrating or stereotyping content. Following the detailed descriptions of these 13 risks is a matrix of the 400 actions that developers can take to mitigate the risks.
Reducing Threats to the Data Used to Train AI Systems
The second publication, Secure Software Development Practices for Generative AI and Dual-Use Foundation Models (NIST Special Publication (SP) 800-218A), is designed to be used alongside the SSDF (SP 800-218). While the SSDF is broadly concerned with securing the software’s lines of code, this companion resource expands the SSDF to help address concerns around malicious training data adversely affecting generative AI systems.
In addition to covering aspects of the training and use of AI systems, the new companion resource offers guidance on dealing with the training data and data collection process, including a matrix that identifies potential risk factors and strategies to address them. Among other recommendations, the document suggests analyzing training data for signs of poisoning, bias, homogeneity and tampering.
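To give a sense of what such analysis of training data can involve, the sketch below is a minimal, hypothetical screening pass over a text corpus: it counts exact duplicates (one coarse homogeneity signal), flags empty records, and reports label imbalance as a rough bias indicator. The function name, sample data, and checks are illustrative assumptions, not prescriptions from NIST SP 800-218A; real poisoning and tampering analysis requires far more sophisticated methods.

```python
# Hypothetical sketch: coarse screening of a text training corpus for
# exact duplicates, empty records, and label imbalance. Illustrative only;
# the NIST guidance does not prescribe this code.
import hashlib
from collections import Counter

def screen_corpus(records):
    """records: iterable of (text, label) pairs."""
    seen_hashes = Counter()
    labels = Counter()
    empty = 0
    for text, label in records:
        digest = hashlib.sha256(text.strip().lower().encode("utf-8")).hexdigest()
        seen_hashes[digest] += 1
        labels[label] += 1
        if not text.strip():
            empty += 1
    duplicates = sum(count - 1 for count in seen_hashes.values() if count > 1)
    total = sum(labels.values())
    return {
        "total_records": total,
        "exact_duplicates": duplicates,
        "empty_records": empty,
        # Share of the most common label; values near 1.0 suggest imbalance.
        "majority_label_share": max(labels.values()) / total if total else 0.0,
    }

if __name__ == "__main__":
    sample = [
        ("The sky is blue.", "neutral"),
        ("The sky is blue.", "neutral"),   # duplicate record
        ("", "neutral"),                   # empty record
        ("This product is terrible.", "negative"),
    ]
    print(screen_corpus(sample))
```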
Reducing Synthetic Content Risks
Accompanying generative AI’s development has been the rise of “synthetic” content, which has been created or altered by AI. Offering technical approaches for promoting transparency in digital content is the goal of NIST’s new draft publication, Reducing Risks Posed by Synthetic Content (NIST AI 100-4). This publication informs, and is complementary to, a separate report on understanding the provenance and detection of synthetic content that AI EO Section 4.5(a) tasks NIST with providing to the White House.
NIST AI 100-4 lays out methods for detecting, authenticating and labeling synthetic content, including digital watermarking and metadata recording, where information indicating the origin or history of content such as an image or sound recording is embedded in the content to assist in verifying its authenticity. The report does not focus only on the dangers of synthetic content; it is intended to reduce risks from synthetic content by understanding and applying technical approaches for improving the content’s transparency, based on use case and context.
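As a simplified illustration of metadata recording for provenance (one of the approaches the report discusses), the hypothetical sketch below writes a small provenance record containing an origin label, a timestamp, and a content hash alongside a media file, and later checks whether the file still matches that record. The file names and functions (record_provenance, verify_provenance) are invented for this example; production provenance systems rely on cryptographically signed manifests and embedded metadata rather than this standalone scheme.

```python
# Hypothetical sketch of metadata-based provenance recording: store a small
# JSON record (origin, timestamp, content hash) next to a media file, then
# verify later that the file still matches the record. Illustrative only.
import hashlib
import json
import time
from pathlib import Path

def record_provenance(media_path: Path, origin: str) -> Path:
    digest = hashlib.sha256(media_path.read_bytes()).hexdigest()
    record = {
        "file": media_path.name,
        "origin": origin,              # e.g. a camera model or generator name
        "recorded_at": time.time(),
        "sha256": digest,
    }
    record_path = media_path.with_suffix(media_path.suffix + ".prov.json")
    record_path.write_text(json.dumps(record, indent=2))
    return record_path

def verify_provenance(media_path: Path, record_path: Path) -> bool:
    record = json.loads(record_path.read_text())
    current = hashlib.sha256(media_path.read_bytes()).hexdigest()
    return current == record["sha256"]

if __name__ == "__main__":
    img = Path("example.png")
    img.write_bytes(b"placeholder image bytes")        # stand-in for real image data
    prov = record_provenance(img, origin="hypothetical-generator-v1")
    print("unaltered:", verify_provenance(img, prov))  # True
    img.write_bytes(b"tampered bytes")
    print("after edit:", verify_provenance(img, prov)) # False
```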
Global Engagement on AI Standards
AI systems are transforming society in America and around the world. A Plan for Global Engagement on AI Standards (NIST AI 100-5) is designed to drive the worldwide development and implementation of AI-related consensus standards, cooperation and coordination, and information sharing.
The draft invites feedback on areas and topics that may be urgent for AI standardization. Topics that may be ready for standardization include mechanisms for enhancing awareness of the origin of digital content, whether authentic or synthetic, and shared practices for testing, evaluation, verification and validation of AI systems. Other topics may require more scientific research and development to establish a foundational understanding of critical components of a potential standard.
NIST GenAI
In addition to the four documents, NIST is also announcing NIST GenAI, a new program to evaluate and measure generative AI technologies. The program is part of NIST’s response to the Executive Order, and its efforts will help inform the work of the U.S. AI Safety Institute at NIST.
The NIST GenAI program will issue a series of challenge problems designed to evaluate and measure the capabilities and limitations of generative AI technologies. These evaluations will be used to identify strategies to promote information integrity and guide the safe and responsible use of digital content. One of the program’s goals is to help people determine whether a human or an AI produced a given text, image, video or audio recording. Registration opens in May for participation in the pilot evaluation, which will seek to understand how human-produced content differs from synthetic content. More information about the challenge and how to register can be found on the NIST GenAI website.
USPTO RFC on Patentability
The full Federal Register Notice is available from the Federal Register. Comments are due July 29, 2024. Please see the Federal Register Notice for instructions on submitting comments.
To incentivize, protect, and encourage investment in innovations made possible through the use of artificial intelligence (AI), and to provide clarity to the public on the patentability of AI-assisted inventions, the USPTO previously published guidance in the Federal Register.