Principle #A3

Allow escalation of AI interactions to a human for review

This principle is part of a broader effort to ensure that AI vendors undergo risk assessments to meet security, privacy, and compliance requirements.

Vendor questions

1. Can users escalate AI interactions to a human for immediate review? If yes, describe how escalation is initiated (e.g., via UI controls, keywords, or model-detected conditions). Include a walkthrough of the user steps and examples of escalation mechanisms, including any automation or AI-based detection involved. (A minimal sketch of such mechanisms appears after this list.)

2. Can users flag AI interactions for later review? If yes, describe how flagging works, including the user flow, interface elements, and an example of a typical flagging scenario.

3. What systems are in place to retain the history of user escalations and flagged interactions? Include details on retention periods, access controls, and any measures taken to protect this data.

4. Can you provide documentation or summaries of past user-initiated escalations that were reviewed by a human? What were the outcomes of these reviews, and how were they tracked and remediated?

5. Have you undergone any third-party audits or assessments of your escalation and flagging processes? If so, provide details on the scope, the assessor, and any findings or remediations from the most recent evaluation.
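To make the intent of questions 1 through 3 concrete, below is a minimal, hypothetical sketch of keyword-triggered escalation and user-initiated flagging backed by a retained review history. Every name in it (ReviewQueue, ReviewRecord, ESCALATION_KEYWORDS, and so on) is an illustrative assumption, not any particular vendor's API; real systems would typically pair keyword triggers with a classifier for model-detected conditions and back the queue with a durable, access-controlled store.

```python
"""Illustrative sketch only: a minimal escalation/flagging service.

All names are hypothetical and do not correspond to any vendor's API.
"""

from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional
import uuid

# Hypothetical keyword triggers (question 1); real systems may also use
# a classifier for model-detected conditions and explicit UI controls.
ESCALATION_KEYWORDS = {"talk to a human", "agent", "representative"}


@dataclass
class ReviewRecord:
    """One retained escalation or flag, kept for later human review."""
    record_id: str
    interaction_id: str
    kind: str                 # "escalation" (immediate) or "flag" (later review)
    reason: str
    created_at: datetime
    resolved: bool = False
    outcome: Optional[str] = None


class ReviewQueue:
    """In-memory stand-in for the durable, access-controlled history store
    that question 3 asks about."""

    def __init__(self) -> None:
        self._records: list[ReviewRecord] = []

    def add(self, interaction_id: str, kind: str, reason: str) -> ReviewRecord:
        record = ReviewRecord(
            record_id=str(uuid.uuid4()),
            interaction_id=interaction_id,
            kind=kind,
            reason=reason,
            created_at=datetime.now(timezone.utc),
        )
        self._records.append(record)
        return record

    def history(self) -> list[ReviewRecord]:
        # A production store would enforce retention periods and
        # role-based access controls here.
        return list(self._records)


def maybe_escalate(queue: ReviewQueue, interaction_id: str,
                   user_message: str) -> Optional[ReviewRecord]:
    """Escalate immediately if the message matches a trigger keyword (question 1)."""
    lowered = user_message.lower()
    for keyword in ESCALATION_KEYWORDS:
        if keyword in lowered:
            return queue.add(interaction_id, "escalation", f"keyword: {keyword!r}")
    return None


def flag_for_review(queue: ReviewQueue, interaction_id: str,
                    user_reason: str) -> ReviewRecord:
    """Record a user-initiated flag for asynchronous human review (question 2)."""
    return queue.add(interaction_id, "flag", user_reason)


if __name__ == "__main__":
    queue = ReviewQueue()
    maybe_escalate(queue, "conv-001", "This answer is wrong, I want to talk to a human.")
    flag_for_review(queue, "conv-002", "Response may contain outdated pricing.")
    for rec in queue.history():
        print(rec.kind, rec.interaction_id, rec.reason, rec.created_at.isoformat())
```

A vendor's answers would describe where their real system diverges from this sketch: how triggers are detected, what the user-facing flow looks like, and how the retained records are protected, reviewed, and remediated.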
