The National Institute of Standards and Technology has released the final version of an addition to its artificial intelligence risk management framework: a “profile” providing guidance on potentially severe risks posed by generative AI and “suggested” steps to mitigate them. The document is likely to become one of the agency’s seminal contributions on AI safety and security.
“As GAI covers risks of models or applications that can be used across use cases or sectors, this document is an AI RMF cross-sectoral profile. Cross-sectoral profiles can be used to govern, map, measure, and manage risks associated with activities or business processes common across sectors, such as the use of large language models (LLMs), cloud-based services, or acquisition,” according to the profile.
“This document defines risks that are novel to or exacerbated by the use of GAI. After introducing and describing these risks, the document provides a set of suggested actions to help organizations govern, map, measure, and manage these risks,” it says.
The gen AI guidance, NIST AI 600-1, is a companion to NIST’s AI risk management framework and was released July 26 under a directive in President Biden’s Oct. 30 executive order on artificial intelligence.
The White House on the same day issued a variety of guidance documents and updates on initiatives tied to a 270-day deadline in the EO. In addition to the gen AI profile, the NIST-run U.S. AI Safety Institute issued draft guidance on managing safety and national security risks associated with “dual-use” artificial intelligence foundation models.
NIST also released final versions of “a comprehensive plan for U.S. engagement on global AI standards” as well as guidance on secure software development practices.
The gen AI profile “centers on a list of 12 risks and just over 200 actions that developers can take to manage them,” according to a NIST release. “The 12 risks include a lowered barrier to entry for cybersecurity attacks, the production of mis- and disinformation or hate speech and other harmful content, and generative AI systems confabulating or ‘hallucinating’ output.”
“After describing each risk,” NIST explains, “the document presents a matrix of actions that developers can take to mitigate it, mapped to the AI RMF.”
The dozen “risks unique to or exacerbated by GAI” range from “lowered barriers to entry or eased access to materially nefarious information related to chemical, biological, radiological, or nuclear (CBRN) weapons, or other dangerous biological materials” to confabulation or hallucinations, data privacy, and intellectual property violations.
The document also describes “primary considerations” that were “derived as overarching themes from the GAI [public working group] consultation process,” including “Governance, Pre-Deployment Testing, Content Provenance, and Incident Disclosure,” which the agency says “are relevant for voluntary use by any organization designing, developing, and using GAI and also inform the Actions to Manage GAI risks. Information included about the primary considerations is not exhaustive, but highlights the most relevant topics derived from the GAI PWG.”
The document says, “Future revisions of this profile will include additional AI RMF subcategories, risks, and suggested actions based on additional considerations of GAI as the space evolves and empirical evidence indicates additional risks.”
Commenting on changes from the draft version, a NIST spokesperson said, “We initially had about 400 suggestions and now have just over 200, consistent with feedback we heard from the community. We also slightly edited some of the risks to make them more distinguishable.”
Tech sector weighs in
Major tech industry groups, in their initial reactions to the gen AI profile, praised NIST’s engagement with industry and its willingness to accept stakeholder input.
Information Technology Industry Council vice president of policy Courtney Lang said in a statement, “We’re pleased to see that NIST incorporated many industry recommendations to make the NIST AI RMF Generative AI Profile more practically implementable and reflective of the current state of practice. In particular, NIST added nuance to reflect the role that risk level plays in the adoption of the various Actions and clarified how to use the Profile in conjunction with the existing AI RMF.”
Lang said, “We encourage NIST to continue its engagement with industry as AI technology continues to evolve to ensure that the Profile also evolves, and to further explore opportunities to internationalize its guidance.”
BSA-The Software Alliance senior vice president of global policy Aaron Cooper said, “NIST has played an important role in developing consensus methods for managing high-risk uses of artificial intelligence. While BSA continues to review the substance of NIST’s new generative AI profile, we welcome efforts to help build trust in all forms of AI, including generative systems. We encourage NIST to continue building upon its existing work and engagement with stakeholders to advance the responsible development and deployment of AI systems.”
