Principle #E2
Mitigate hallucinations
Ensure that AI vendors undergo risk assessments to meet security, privacy, and compliance requirements.
Controls
Vendor questions
1. What techniques do you employ to detect or flag hallucinated or unreliable content in your AI product? Please provide documentation or examples of how these techniques are implemented in production, including any filtering, scoring, or user-facing indicators (an illustrative flagging sketch follows this list).
2. Does your system provide source attribution or citations for factual claims? If so, please describe how this feature works and include screenshots or UI examples. Indicate whether citation is programmatically enforced, user-optional, or available on request.
3. What features or design choices have you implemented to help users understand when an AI-generated claim may be inaccurate, uncertain, or unsupported? These may include confidence signals, visual disclaimers, retrieval grounding, or prompt-based disclaimers. Please describe them and provide examples.
4. How have you evaluated your system's performance in reducing or identifying hallucinations (see the evaluation sketch after this list)? Include any structured evaluations of:
   (a) factual accuracy (e.g., correctness against ground truth);
   (b) logical consistency (e.g., internal contradictions or unsupported inferences); and
   (c) structural integrity (e.g., broken references, incomplete citations, jumbled summaries).
   Please share findings, metrics, or reports from the past 12 months if available.
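The sketch below illustrates one way a vendor might answer question 1: each generated sentence is scored against the retrieved source passages and flagged when its overlap falls below a threshold. The token-overlap metric, the 0.5 threshold, and the flag_unsupported helper are assumptions made for this example, not a description of any particular vendor's implementation.

```python
# Illustrative grounding check: flag answer sentences with little support in
# the retrieved sources. The metric and threshold are assumptions for this sketch.
import re
from dataclasses import dataclass


@dataclass
class FlaggedSentence:
    text: str
    support_score: float  # 0.0 (no source overlap) to 1.0 (fully covered)


def _tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", text.lower()))


def flag_unsupported(answer: str, sources: list[str], threshold: float = 0.5) -> list[FlaggedSentence]:
    """Flag answer sentences whose token overlap with every source is below the threshold."""
    source_tokens = [_tokens(s) for s in sources]
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        toks = _tokens(sentence)
        if not toks:
            continue
        # Best coverage of this sentence by any single retrieved passage.
        score = max((len(toks & st) / len(toks) for st in source_tokens), default=0.0)
        if score < threshold:
            flagged.append(FlaggedSentence(sentence, round(score, 2)))
    return flagged


if __name__ == "__main__":
    sources = ["The data retention period is 30 days for audit logs."]
    answer = "Audit logs are retained for 30 days. Backups are kept for 10 years."
    for item in flag_unsupported(answer, sources):
        print(f"UNSUPPORTED ({item.support_score}): {item.text}")
```

In production, a vendor would typically replace the overlap metric with an entailment or retrieval-grounding model and surface the flag as a user-facing indicator, which is the kind of detail question 1 asks them to document.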
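For question 4, a structured evaluation can be as simple as scoring model answers against a ground-truth set and reporting the resulting metric. The sketch below assumes a hypothetical exact-match setup; the factual_accuracy function, dataset format, and normalization rules are illustrative only, and real evaluations would typically add checks for logical consistency and citation integrity on top of this.

```python
# Illustrative factual-accuracy evaluation against a ground-truth answer set.
# The exact-match metric and data layout are assumptions for this sketch.
import json
import re


def normalize(text: str) -> str:
    """Lowercase and strip punctuation so trivial formatting differences don't count as errors."""
    return re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()


def factual_accuracy(predictions: dict[str, str], ground_truth: dict[str, str]) -> float:
    """Fraction of questions where the model answer matches the reference after normalization."""
    if not ground_truth:
        return 0.0
    correct = sum(
        1 for qid, ref in ground_truth.items()
        if normalize(predictions.get(qid, "")) == normalize(ref)
    )
    return correct / len(ground_truth)


if __name__ == "__main__":
    ground_truth = {"q1": "30 days", "q2": "AES-256"}
    predictions = {"q1": "30 days.", "q2": "AES-128"}
    print(json.dumps({"factual_accuracy": factual_accuracy(predictions, ground_truth)}))
    # -> {"factual_accuracy": 0.5}
```

Vendors reporting this kind of metric should also describe the provenance and size of the ground-truth set, since question 4 asks for findings and reports from the past 12 months.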