
Responsible AI & Risk Checklist

Bias, hallucination, IP, privacy and safety checks before you ship an AI feature.

Attributed to National Institute of Standards and Technology (NIST)

What it is

The NIST AI Risk Management Framework (AI RMF) 1.0, released on January 26, 2023, is a comprehensive framework designed to address the challenges and risks associated with artificial intelligence. Developed by the Information Technology Laboratory (ITL) AI Program at NIST, in collaboration with public and private sectors, the AI RMF provides a common language and approach for managing AI-related risks to individuals, organizations, and society.

The framework is intended for voluntary use and aims to enhance the ability of organizations to integrate trustworthiness principles into every stage of the AI lifecycle, from design and development to use and evaluation of AI products, services, and systems. It emphasizes a consensus-driven, transparent, and collaborative approach, having undergone multiple revisions based on public comments, workshops, and various opportunities for stakeholder input.

NIST has also published a companion AI RMF Playbook, a Roadmap, Crosswalks to other frameworks, and various Perspectives to facilitate its implementation. Furthermore, the Trustworthy and Responsible AI Resource Center (AIRC) was launched to support the framework's adoption and foster international alignment. NIST continues to evolve the framework, releasing profiles such as the Generative Artificial Intelligence Profile (NIST-AI-600-1) and a concept note for a profile on Trustworthy AI in Critical Infrastructure, to address specific AI applications and sectors.

When to use it

  • When developing or deploying new AI systems or features.
  • When evaluating the ethical implications of AI applications.
  • When assessing potential biases and fairness issues in AI models.
  • When ensuring the privacy and security of data used by AI.
  • When establishing governance and accountability for AI systems.
  • When building public trust in AI technologies.
  • When complying with emerging AI regulations and standards.

How to use it

  1. Govern: Establish a culture of AI risk management, with policies, processes, and clear accountability structures across the organization.
  2. Map: Establish the context in which the AI system operates and identify the risks that arise in that context.
  3. Measure: Assess, analyze, and track the identified risks using quantitative and qualitative methods.
  4. Manage: Prioritize risks and allocate resources to respond to them based on their projected impact.
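The four functions can be sketched as a lightweight risk register in code. This is a minimal illustration only, not part of the NIST framework; the `Risk` record, its fields, and the severity-times-likelihood scoring heuristic are hypothetical choices for the example.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """A single entry in an AI risk register (hypothetical structure)."""
    name: str             # Map: a risk identified in the system's context
    severity: int = 0     # Measure: assessed impact, e.g. 1 (low) to 5 (high)
    likelihood: int = 0   # Measure: assessed probability, 1 to 5
    mitigation: str = ""  # Manage: the planned response
    owner: str = ""       # Govern: the accountable role

    def score(self) -> int:
        # A common (non-NIST) heuristic: severity times likelihood.
        return self.severity * self.likelihood

def prioritize(register: list[Risk]) -> list[Risk]:
    """Manage: order risks by score so the highest-impact items are treated first."""
    return sorted(register, key=Risk.score, reverse=True)

register = [
    Risk("Training-data bias", severity=4, likelihood=3,
         mitigation="Bias audit", owner="ML lead"),
    Risk("Hallucinated citations", severity=3, likelihood=4,
         mitigation="Retrieval grounding", owner="Product"),
    Risk("PII leakage in logs", severity=5, likelihood=2,
         mitigation="Log redaction", owner="Security"),
]

for risk in prioritize(register):
    print(f"{risk.score():>2}  {risk.name}  (owner: {risk.owner})")
```

Keeping an explicit owner on every entry mirrors the Govern function's emphasis on accountability: a risk with no named owner is itself a governance gap.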

Key concepts

Trustworthiness

The concept that AI systems should be reliable, safe, secure, resilient, private, fair, transparent, and interpretable. It encompasses the desirable characteristics that foster confidence in AI systems.

AI Risk

Potential negative impacts associated with the design, development, deployment, and use of AI systems. These risks can affect individuals, organizations, and society as a whole, including issues like bias, discrimination, privacy breaches, and safety hazards.

AI Lifecycle

The complete process of an AI system, from initial conception and design through development, deployment, operation, maintenance, and eventual retirement. Risk management should be integrated throughout this entire cycle.

Transparency

The ability to understand how an AI system works, what data it uses, and how it arrives at its decisions. This includes factors like interpretability and explainability, enabling stakeholders to scrutinize and trust the system.

Accountability

The establishment of clear responsibilities and mechanisms for addressing adverse outcomes or failures of AI systems. This ensures that there are recourse and oversight for AI-related incidents and impacts.

Bias

Systematic and repeatable errors in an AI system that lead to unfair or prejudicial outcomes for certain groups or individuals. Bias can originate from data, algorithms, or human decisions during development and deployment.
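One concrete bias check is demographic parity: compare the rate of positive outcomes across groups and flag large gaps. A minimal sketch, assuming binary approve/reject decisions; the function names, sample data, and the choice of metric are illustrative, not prescribed by the framework.

```python
from collections import defaultdict

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Positive-outcome rate per group; outcomes are (group, approved) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, approved in outcomes:
        totals[group] += 1
        positives[group] += approved
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(outcomes: list[tuple[str, bool]]) -> float:
    """Demographic parity difference: max minus min selection rate across groups."""
    rates = selection_rates(outcomes).values()
    return max(rates) - min(rates)

decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]
print(f"parity gap: {parity_gap(decisions):.2f}")  # 0.75 - 0.25 = 0.50
```

Demographic parity is only one of several fairness definitions (others condition on qualifications or error rates), so a low gap here does not by itself establish that a system is unbiased.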

Common pitfalls

  • Failing to involve diverse stakeholders in the risk management process, leading to blind spots and unaddressed concerns.
  • Treating AI risk management as a one-time activity rather than an ongoing, iterative process.
  • Over-reliance on automated tools for risk assessment without human oversight and critical evaluation.
  • Neglecting to consider the societal and ethical implications of AI beyond purely technical risks.
  • Inadequate documentation of AI system design, development, and risk mitigation strategies, hindering transparency and accountability.
  • Underestimating the potential for emergent risks as AI systems interact with complex real-world environments.

Want a Sprinthero coach to apply this with your team?

Our coaches use this — and the rest of the AI-Native Venture Sprint toolkit — live with leadership teams every week.

Talk to a coach