Principle #
D
3
Assess AI vendors for security, privacy, and compliance
Ensure that AI vendors undergo risk assessments to meet security, privacy, and compliance requirements.
Controls
Prohibit vendors from training on customer data without consent
#
D
3
.
1
Require vendors to implement PII controls and data minimization
#
D
3
.
2
Require vendors to disclose security posture and certifications
#
D
3
.
3
Require vendors to disclose geographic scope of AI operations
#
D
3
.
4
Require vendors to mitigate harmful and high-risk AI behaviors
#
D
3
.
5
Require vendors to comply with applicable export controls
#
D
3
.
6
Vendor questions
For the purposes of this questionnaire, a third-party AI vendor is an external service provider that processes, transmits, or stores customer data on behalf of the primary AI system provider and applies generative artificial intelligence models to that data. These vendors typically qualify as subprocessors under data protection frameworks (e.g., SOC 2, GDPR), but this designation is limited here to those whose core function involves the use of AI systems—such as hosted foundation models, AI-powered feature layers (e.g., summarization, classification), or embedded LLM infrastructure. For each third-party AI provider, please describe: 1. The name of the vendor and a brief description of their role and the function they support within your system. 2. Whether this vendor’s obligations are obligated through contractual agreements in place with us. Please provide snippets from the MSA, DPA or other documentation that outline these agreements. 3. Whether this vendor has a contract or policy with you that explicitly prohibits them from using or training on our data without our prior written consent. Describe how this policy is enforced in practice. 4. Whether this vendor processes our personally identifiable information (PII), and whether there are contractual or technical controls that restrict PII processing, require redaction, or enforce data minimization. 5. Whether this vendor retains any user or system data, including for logging, auditing, or debugging purposes. If so, describe the types of data retained, the retention period, and whether it is linked to identifiable users. 6. What security certifications this vendor holds (e.g., SOC 2, ISO 27001). Please provide documentation or attestations for each certification. 7. Whether this vendor is contractually required to disclose changes in their security posture or risk profile, and how these changes are communicated to you and to us. 8. How this vendor is assessed on an ongoing basis against security, privacy, and responsible AI practices. Include frequency and scope of reassessment. 9. Whether there have been incidents or non-compliance issues involving this vendor in the past 24 months. If so, describe the issue and the remediation steps taken. 10. In which geographic regions this vendor operates its AI infrastructure (including training, inference, and fine-tuning workloads). 11. Whether this vendor has technical or procedural mechanisms in place to mitigate harmful outputs, adversarial prompts, or adversarial attacks (e.g., prompt injection, model exploitation). Please provide evidence of these safeguards, such as evaluation results, internal documentation, red-teaming summaries, or system design descriptions. 12. Whether this vendor has technical or procedural mechanisms in place to mitigate high-severity misuse risks, including (a) Deception or influence operations; (b) Cyber exploitation (e.g., vulnerability discovery, malware generation); and (c) Catastrophic misuse (e.g., CBRN, autonomous weaponization). Provide evidence of these safeguards, such as evaluation results, misuse red-teaming reports, policy thresholds, or internal documentation outlining how these scenarios are detected and handled. 13. Whether this vendor is subject to export controls related to AI models or infrastructure (e.g., U.S. EAR, ITAR). If so, describe how you confirm compliance.