Principle #D1
Clearly identify AI-generated content, conversations, and decisions
Ensure that AI vendors undergo risk assessments to meet security, privacy, and compliance requirements.
Controls
Vendor questions
Answer the following questions where applicable based on your product’s capabilities. If a feature (e.g., generative content, AI-driven conversations, or automated decisions) does not apply to your system, you may indicate that clearly and skip the corresponding question.

1. How is AI-generated content labeled in your product? Describe the visual or textual indicators used (e.g., banners, icons, badges). Are these labels configurable or removable by us or by our end users? Please include examples, screenshots, or documentation if available. (An illustrative sketch of such labeling appears after this list.)

2. Do you provide a disclosure statement at the beginning of AI-driven conversations (e.g., chat, voice, or phone-based interactions)? Describe when and how this disclosure is presented. Include representative language or transcripts if available.

3. Do you label or disclose when AI is involved in automated decision-making (e.g., filtering, ranking, approvals)? If so, describe the form and placement of the disclosure. Include examples of how this appears to users or affected individuals.

4. How do you manage updates to your labeling and disclosure practices? Describe how these updates are tracked and deployed, and how you ensure they remain compliant with emerging governance or regulatory requirements.
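To make questions 1 and 2 concrete, the sketch below shows one way a product might carry a machine-readable AI-generation label alongside user-visible indicators and a conversation-start disclosure. It is only an illustration of the kind of evidence a vendor could provide, not a required design: all names here (`ChatMessage`, `aiGenerated`, `CONVERSATION_DISCLOSURE`, `renderMessage`, `renderTranscript`) are hypothetical and do not correspond to any specific vendor's API.

```typescript
// Illustrative sketch only; field and function names are hypothetical.

/** A chat message as a product API might return it, with provenance metadata. */
interface ChatMessage {
  id: string;
  role: "user" | "assistant";
  text: string;
  aiGenerated: boolean;   // machine-readable label (question 1)
  modelVersion?: string;  // supports tracking labeling changes over time (question 4)
}

/** Disclosure shown once, at the start of an AI-driven conversation (question 2). */
const CONVERSATION_DISCLOSURE =
  "You are chatting with an AI assistant. Responses are generated automatically " +
  "and may be reviewed by a human.";

/** Render a user-visible textual indicator for a single message (question 1). */
function renderMessage(message: ChatMessage): string {
  const badge = message.aiGenerated ? "[AI-generated] " : "";
  return `${badge}${message.text}`;
}

/** Render a transcript, prepending the disclosure when any AI turn is present. */
function renderTranscript(messages: ChatMessage[]): string {
  const lines = messages.map(renderMessage);
  const hasAiTurns = messages.some((m) => m.aiGenerated);
  return (hasAiTurns ? [CONVERSATION_DISCLOSURE, ...lines] : lines).join("\n");
}

// Example usage
const transcript: ChatMessage[] = [
  { id: "1", role: "user", text: "Can I change my delivery address?", aiGenerated: false },
  { id: "2", role: "assistant", text: "Yes, open Orders and choose Edit address.", aiGenerated: true, modelVersion: "2024-06" },
];
console.log(renderTranscript(transcript));
```

Keeping the label as a separate machine-readable flag rather than embedding it only in the display text is one design choice that helps labels survive export, reuse, or downstream processing of the content.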