Principle #E3
Classify AI failures by severity and respond with internal review, customer disclosure, and support practices
Ensure that AI vendors undergo risk assessments to meet security, privacy, and compliance requirements.
Controls
Vendor questions
1. How do you define and categorize AI incidents?
Describe what constitutes an AI-related incident under your policy, including examples (e.g., model failures, tool misuse, safety violations, regulatory exposure). Explain how incidents are classified by severity and how they differ from routine product issues or bugs.

2. Do you maintain a severity-based incident response plan for AI failures?
Describe how your AI incident response plan is structured, including the severity tiers you use, how impact is assessed, and the corresponding escalation and resolution actions for each tier. Provide illustrative scenarios if available.

3. How do you conduct post-incident reviews for significant AI incidents?
Detail your process for reviewing serious AI failures, including when a review is triggered, who participates, how findings are documented, and how identified changes are tracked and implemented. Summarize a recent review (if shareable) to illustrate.

4. What is your process for disclosing high-impact AI incidents to customers?
Describe the conditions under which customers are notified of AI incidents, including how you determine materiality, what information is shared, the timeline for notification, and how ongoing transparency is maintained during resolution.

5. What commitments do you make to customers regarding AI failure response and support?
Explain how you communicate and uphold your AI incident response commitments, including:
a. Operational support (e.g., service-level agreements, incident response timelines)
b. Legal practices (e.g., notification obligations)
c. Financial remedies (e.g., indemnities, credits, insurance coverage)
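To make the severity-tier structure asked about above concrete, the sketch below shows one way a vendor might encode tiers and their escalation, disclosure, and review obligations. All tier names, descriptions, and response windows here are hypothetical assumptions for illustration, not taken from any particular vendor's policy or a standard:

```python
from dataclasses import dataclass

# Hypothetical severity tiers for AI incidents; names and timelines
# below are illustrative assumptions, not a published standard.
@dataclass(frozen=True)
class SeverityTier:
    name: str
    description: str
    response_window_hours: int   # time allowed before escalation begins
    customer_disclosure: bool    # whether affected customers are notified
    post_incident_review: bool   # whether a formal review is triggered

TIERS = {
    "SEV1": SeverityTier("SEV1", "Safety violation or regulatory exposure", 1, True, True),
    "SEV2": SeverityTier("SEV2", "Model failure degrading customer outcomes", 4, True, True),
    "SEV3": SeverityTier("SEV3", "Tool misuse contained internally", 24, False, True),
    "SEV4": SeverityTier("SEV4", "Routine product issue or bug", 72, False, False),
}

def required_actions(tier_name: str) -> list[str]:
    """Return the response actions a given tier triggers under this sketch."""
    tier = TIERS[tier_name]
    actions = [f"escalate within {tier.response_window_hours}h"]
    if tier.customer_disclosure:
        actions.append("notify affected customers")
    if tier.post_incident_review:
        actions.append("schedule post-incident review")
    return actions

print(required_actions("SEV2"))
```

A mapping like this makes the distinction in question 1 explicit: a SEV4 "routine bug" triggers neither disclosure nor a formal review, while higher tiers do.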