Principle A1: Mitigate generation of harmful outputs
Ensure that AI vendors undergo risk assessments to meet security, privacy, and compliance requirements.
Controls
Vendor questions
For the purposes of this questionnaire, harmful outputs include responses from your AI system that may cause emotional distress, reinforce dangerous behavior, enable unsafe decision-making, or violate expectations in high-risk domains such as health, finance, or law. This includes both direct harms (e.g., offensive language, risky advice) and indirect harms (e.g., escalation of user distress, poor refusal behavior, or unsafe tone modulation).

1. Do you maintain a harm severity taxonomy to classify and triage AI-generated outputs? Describe the harm tiers or levels (e.g., low, moderate, severe), the criteria used to assign severity, and provide representative examples of each level. If available, include documentation or policy artifacts. (An illustrative taxonomy sketch follows this list.)
2. How do you evaluate your AI system’s behavior in response to emotionally charged prompts, such as distressed or angry user inputs? Describe the evaluation methodology and frequency, and provide examples of actual responses to both distressed and angry prompts.
3. What measures are in place to prevent escalation, de-escalate risk, or appropriately hand off to humans?
4. Does your AI system provide responses in high-risk domains such as health, finance, or law?
   a. If yes, describe the types of responses typically provided and the guardrails or safeguards in place (e.g., disclaimers, refusal patterns, escalation triggers; see the guardrail sketch after this list).
   b. If no, describe how the system avoids or refuses to engage with these topics, and provide sample responses.
   c. In either case, describe your audit process for reviewing the AI’s behavior in high-risk domains, including recent findings if available.
5. Do you conduct regular audits or reviews focused on harmful or risky outputs in high-risk deployment domains? Describe audit frequency and scope, who conducts the reviews, and how findings are documented and remediated.
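To make question 1 concrete, the following is a minimal, hypothetical sketch of how a harm severity taxonomy might be represented in code. The tier names, criteria, examples, and the `triage` rule are illustrative assumptions, not a prescribed standard; a vendor's actual taxonomy and remediation paths would be defined by its own policy artifacts.

```python
from dataclasses import dataclass
from enum import Enum


class HarmSeverity(Enum):
    """Illustrative harm tiers; real tiers come from the vendor's own policy."""
    LOW = 1       # e.g., mildly insensitive tone, recoverable in conversation
    MODERATE = 2  # e.g., unqualified advice touching a high-risk domain
    SEVERE = 3    # e.g., content that reinforces dangerous behavior


@dataclass
class HarmClassification:
    """A triage record tying one output to a severity tier and a rationale."""
    output_id: str
    severity: HarmSeverity
    criteria: str          # which policy criterion triggered this rating
    example_snippet: str   # representative excerpt kept for audit documentation


def triage(record: HarmClassification) -> str:
    """Hypothetical triage rule: severity drives the remediation path."""
    if record.severity is HarmSeverity.SEVERE:
        return "block-and-escalate"   # human review before any further use
    if record.severity is HarmSeverity.MODERATE:
        return "flag-for-audit"       # sampled into the periodic review queue
    return "log-only"                 # retained for trend analysis
```

A vendor answering question 1 well would be able to map each tier in such a structure to written criteria and real, documented examples.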
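Similarly, for question 4, here is a hedged sketch of the kind of guardrail logic the questionnaire probes for: a high-risk-domain check that attaches a disclaimer, refuses, or triggers escalation. The domain set, the `distress_score` threshold, and the action names are invented for illustration; a production system would rely on trained classifiers and a reviewed policy rather than keyword membership and a fixed cutoff.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical high-risk domains, taken from the questionnaire's definition.
HIGH_RISK_DOMAINS = {"health", "finance", "law"}


@dataclass
class GuardrailDecision:
    action: str                         # "answer", "answer-with-disclaimer",
                                        # "refuse", or "escalate"
    disclaimer: Optional[str] = None


def apply_guardrails(domain: str, distress_score: float) -> GuardrailDecision:
    """Toy guardrail policy for illustration only."""
    if distress_score > 0.8:
        # De-escalate and hand off to a human (questions 2 and 3).
        return GuardrailDecision(action="escalate")
    if domain in HIGH_RISK_DOMAINS:
        # Respond, but with an explicit disclaimer and softened scope (4a).
        return GuardrailDecision(
            action="answer-with-disclaimer",
            disclaimer="This is general information, not professional advice.",
        )
    return GuardrailDecision(action="answer")
```

The audit questions (4c and 5) then amount to asking how often decisions like these are sampled, who reviews them, and where the findings are recorded.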