NIST Seeks Comments on Draft AI Risk Management Framework, Offers Guidance on AI Bias

Seeking to promote the development and use of Artificial Intelligence (AI) technologies and systems that are trustworthy and responsible, NIST today released for public comment an initial draft of the AI Risk Management Framework (AI RMF). The draft addresses risks in the design, development, use, and evaluation of AI systems. 


The voluntary Framework is intended to improve understanding of, and help manage, enterprise and societal risks related to AI systems. It aims to provide a flexible, structured, and measurable process for addressing AI risks throughout the AI lifecycle, and it offers guidance on developing and using trustworthy and responsible AI. NIST is also developing a companion practice guide to the AI RMF with additional practical guidance; comments on the Framework will also be taken into account in preparing that guide.


“We have developed this draft with extensive input from the private and public sectors, knowing full well how quickly AI technologies are being developed and put to use and how much there is to be learned about related benefits and risks,” said Elham Tabassi, Chief of Staff of the NIST Information Technology Laboratory (ITL), who is coordinating the agency’s AI work, including the AI RMF.


This draft builds on the concept paper released in December and an earlier Request for Information. Feedback received by April 29 will be incorporated into a second draft issued this summer or fall. On March 29-31, NIST will hold its second workshop on the AI RMF. The first two days will address a …
