Many organisations talk about “responsible AI” but struggle to translate it into decision-making. A risk appetite statement provides that bridge: it defines where the organisation is willing to accept uncertainty, and where it is not.
For AI systems—especially tool-enabled agents—risk appetite is primarily about autonomy: what the system can do without human confirmation, and under what conditions.
Tier use cases by impact and reversibility
A practical tiering model looks like this:
- Tier 1: Informational. Summaries and recommendations with low harm potential.
- Tier 2: Advisory. Outputs influence decisions; requires evidence and monitoring.
- Tier 3: Action-taking. Tool-enabled actions that change systems or affect customers; requires strong controls.
- Tier 4: High-stakes. Regulated advice, payments, safety-critical actions; autonomy is tightly constrained.
Risk appetite is not “yes/no”. It is “which tier, with which safeguards.”
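To make the tiering concrete, here is a minimal sketch in Python. The `Tier` enum mirrors the list above; `classify_use_case` and its inputs (`impact`, `reversible`, `takes_actions`) are hypothetical names, and the thresholds are illustrative rather than prescriptive.

```python
from enum import IntEnum

class Tier(IntEnum):
    """Illustrative autonomy tiers, ordered by risk."""
    INFORMATIONAL = 1   # summaries and recommendations, low harm potential
    ADVISORY = 2        # outputs influence decisions
    ACTION_TAKING = 3   # tool-enabled actions that change systems or affect customers
    HIGH_STAKES = 4     # regulated advice, payments, safety-critical actions

def classify_use_case(impact: str, reversible: bool, takes_actions: bool) -> Tier:
    """Hypothetical classification by impact and reversibility.

    `impact` is "low", "medium", or "high"; a real scheme would use richer
    criteria agreed with risk and compliance owners.
    """
    if takes_actions and impact == "high":
        return Tier.HIGH_STAKES      # e.g. payments, regulated advice
    if takes_actions:
        return Tier.ACTION_TAKING    # tool calls that change state
    if impact != "low" or not reversible:
        return Tier.ADVISORY         # outputs influence real decisions
    return Tier.INFORMATIONAL

# Example: an agent that can move money sits firmly in the highest tier.
assert classify_use_case("high", reversible=False, takes_actions=True) == Tier.HIGH_STAKES
```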
Match tiers to controls
Controls should scale with risk, as the sketch after this list illustrates:
- Evidence and citations. Required for Tier 2 and above (see citations and grounding).
- Evaluation gates. Stronger evaluation and review for higher tiers (see evaluation loops).
- Human approvals. Step-up approvals for high-impact actions (see agent approvals).
- Policy enforcement. Tool authorisation and least privilege for Tier 3+ (see tool authorisation).
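One way to encode this scaling, reusing the `Tier` enum from the sketch above, is a tier-to-controls mapping that both release gates and runtime policy checks can consult. The control names and levels are placeholders for whatever your evidence, evaluation, approval, and tool-authorisation mechanisms are actually called.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ControlSet:
    """Controls that must be in place for a use case at a given tier."""
    citations_required: bool     # evidence and grounding for claims
    evaluation_level: int        # 1 = basic, 2 = standard, 3 = extended review
    human_approval: bool         # step-up approval for high-impact actions
    least_privilege_tools: bool  # tool authorisation scoped to the use case

# Illustrative mapping; the real thresholds belong in the risk appetite statement.
CONTROLS_BY_TIER = {
    Tier.INFORMATIONAL: ControlSet(False, 1, False, False),
    Tier.ADVISORY:      ControlSet(True,  2, False, False),
    Tier.ACTION_TAKING: ControlSet(True,  3, True,  True),
    Tier.HIGH_STAKES:   ControlSet(True,  3, True,  True),
}

def release_gate_passes(tier: Tier, in_place: ControlSet) -> bool:
    """Block release if any control required at this tier is missing."""
    required = CONTROLS_BY_TIER[tier]
    return (
        (in_place.citations_required or not required.citations_required)
        and in_place.evaluation_level >= required.evaluation_level
        and (in_place.human_approval or not required.human_approval)
        and (in_place.least_privilege_tools or not required.least_privilege_tools)
    )
```

The same table can drive runtime behaviour: a Tier 3 tool call without an approval on file should fail closed rather than proceed.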
Make the boundaries operational
Risk appetite is only useful when it is reflected in system design and governance artefacts:
- Use-case register with tier classification and owners (sketched after this list).
- Release and rollback processes (see canary rollouts).
- Incident response playbooks (see incident response).
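A register entry can be as simple as a structured record per use case, kept alongside the tier definitions and reviewed whenever scope or tooling changes. A minimal sketch follows; the field names, paths, and example values are illustrative only.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class UseCaseRecord:
    """One row of an illustrative use-case register."""
    name: str
    tier: int                 # 1-4, per the tiering model above
    owner: str                # accountable business owner
    controls: list[str]       # e.g. ["citations", "eval-gate", "approvals"]
    rollback_runbook: str     # link to the release and rollback process
    incident_playbook: str    # link to the incident response playbook
    last_reviewed: date

# Hypothetical entry; every value here is a placeholder.
register = [
    UseCaseRecord(
        name="customer-email-drafting",
        tier=2,
        owner="head-of-service-operations",
        controls=["citations", "eval-gate"],
        rollback_runbook="runbooks/email-drafting-rollback.md",
        incident_playbook="runbooks/ai-incident-response.md",
        last_reviewed=date.today(),
    ),
]
```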
Boards do not need to approve every prompt change. They do need clarity that the organisation understands where autonomy is acceptable—and has the controls to keep it safe.