Risk tolerance
Risk tolerance is the defined level and type of AI-related risk that an organization's leadership is willing to accept or retain in pursuit of its objectives. Under the NIST AI RMF, it is formally established during the Govern function and guides all subsequent risk management decisions.
Defining risk tolerance is a strategic policy decision that sets the boundaries for acceptable innovation and operational deployment. AI risk tolerance is typically not a single number but a nuanced statement that may vary by business unit, type of risk (e.g., reputational vs. legal), or application domain (e.g., internal tools vs. customer-facing products). It answers the question: "How much risk is too much?" An explicitly defined risk tolerance creates a crucial benchmark for consistent decision-making, allowing teams to prioritize which risks must be mitigated, which can be accepted, and which are so severe that they require avoiding the activity altogether.
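One way to make such a nuanced statement operational is to record it in a structured, machine-readable form. The following is a minimal sketch assuming a hypothetical schema of domains, risk types, and tolerance levels; the NIST AI RMF does not prescribe any such format, and every name and level below is illustrative.

```python
# Illustrative sketch only: the NIST AI RMF does not prescribe a schema for
# risk tolerance statements. Domains, risk types, and levels are hypothetical
# examples of how tolerance can vary by context.
from dataclasses import dataclass
from enum import Enum

class Tolerance(Enum):
    ZERO = 0      # activity must be avoided entirely
    LOW = 1       # deploy only with strong mitigations and executive sign-off
    MODERATE = 2  # deploy with standard mitigations and routine monitoring

@dataclass(frozen=True)
class ToleranceRule:
    domain: str      # e.g., "customer_facing", "internal_tool"
    risk_type: str   # e.g., "legal", "reputational", "physical_safety"
    level: Tolerance
    rationale: str

# A nuanced tolerance statement: different bounds per domain and risk type.
POLICY = [
    ToleranceRule("any", "physical_safety", Tolerance.ZERO,
                  "Zero tolerance for risks to physical safety."),
    ToleranceRule("customer_facing", "reputational", Tolerance.LOW,
                  "High-visibility systems carry outsized reputational risk."),
    ToleranceRule("internal_tool", "reputational", Tolerance.MODERATE,
                  "Low-impact decision support; standard controls suffice."),
]

def tolerance_for(domain: str, risk_type: str) -> Tolerance:
    """Return the applicable tolerance; 'any' acts as a wildcard domain."""
    matches = [r for r in POLICY
               if r.risk_type == risk_type and r.domain in (domain, "any")]
    if not matches:
        raise LookupError(f"No tolerance defined for {domain}/{risk_type}")
    # Prefer an exact domain match over the wildcard.
    matches.sort(key=lambda r: r.domain != domain)
    return matches[0].level
```

Here, tolerance_for("customer_facing", "physical_safety") resolves to Tolerance.ZERO via the wildcard rule, reflecting a qualitative red line that overrides any domain-specific allowance.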
Establishing AI risk tolerance involves several key considerations:
Quantitative & Qualitative Bounds: Defining acceptable thresholds for key risk metrics (e.g., maximum allowable disparate impact ratio) and articulating qualitative red lines (e.g., "zero tolerance for risks to physical safety" or "no deployment that undermines fundamental rights"); a sketch of such a threshold check appears after this list.
Context-Specific Variation: Recognizing that tolerance may be lower for high-visibility, public-facing systems or for applications in regulated sectors than for internal, low-impact decision-support tools.
Executive Approval & Communication: Requiring formal approval of the risk tolerance statement by the board or senior leadership and communicating it clearly throughout the organization to ensure alignment.
Dynamic Review: Committing to periodically revisiting and adjusting risk tolerance in response to changes in the regulatory landscape, societal expectations, or the organization's strategic direction.
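As noted in the first consideration above, a quantitative bound can be checked mechanically once the metric and threshold are fixed. The sketch below orients the disparate impact ratio as the lower selection rate over the higher one, so the tolerance is expressed as a minimum acceptable ratio; the common four-fifths (0.8) rule of thumb is used as an assumed example threshold, and the function names and data are illustrative, not a prescribed method.

```python
# Minimal sketch of enforcing a quantitative risk-tolerance bound. The 0.8
# threshold (the "four-fifths rule") is an assumed example, not a mandate.

def selection_rate(outcomes: list[int]) -> float:
    """Fraction of positive (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of the lower group's selection rate to the higher one (<= 1.0)."""
    low, high = sorted((selection_rate(group_a), selection_rate(group_b)))
    return 1.0 if high == 0 else low / high

def within_tolerance(group_a: list[int], group_b: list[int],
                     min_ratio: float = 0.8) -> bool:
    """True if the measured ratio meets the defined tolerance threshold."""
    return disparate_impact_ratio(group_a, group_b) >= min_ratio

# Example: 50% vs. 70% selection rates yield a ratio of ~0.71, breaching the
# assumed 0.8 tolerance and flagging the system for mitigation or escalation.
approved_a = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]  # 50% selection rate
approved_b = [1, 1, 1, 0, 1, 1, 0, 1, 0, 1]  # 70% selection rate
assert not within_tolerance(approved_a, approved_b)
```

Encoding the threshold this way keeps the tolerance statement auditable: a breach becomes an objective comparison against a leadership-approved number rather than an ad hoc judgment.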
Regulatory Context: While regulators do not set an organization's risk tolerance, they establish minimum compliance floors that cap how much risk any organization may lawfully tolerate. The EU AI Act's classification of "unacceptable risk" effectively sets a societal risk tolerance of zero for certain AI practices. An organization's internal risk tolerance must be at least as stringent as these legal mandates.
Strategic Enabler and Guardrail: A clearly articulated risk tolerance is both an enabler of innovation (providing a safe space to operate within defined boundaries) and a critical guardrail. It prevents the organization from inadvertently taking on excessive, potentially existential risks from its AI ventures and provides a clear justification for allocating resources to risk mitigation.