Control # D.3.2
Require AI vendors to disclose security posture and certifications
Ask AI vendors to provide relevant security attestations (e.g., SOC 2, ISO 27001) or documentation of their internal security and privacy practices. Review these at onboarding and periodically thereafter.
Evidence
We'll list specific evidence that demonstrates compliance with this control. Typically, this includes screenshots, proof of a legal or operational policy, or product demonstrations.
Recommended actions
Create an AI vendor questionnaire
An example questionnaire might ask:
Model Weights and Infrastructure
Are model weights accessible to end users or third parties?
Can model weights be exfiltrated, fine-tuned, or repurposed in a way that introduces new risks?
What protections are in place for models deployed on shared infrastructure (e.g., container breakout, memory leakage risks)?
API-Level Security
Are rate limits, authentication, and abuse prevention mechanisms in place?
Is the vendor aware of prompt injection or prompt leaking risks in their API? What mitigations are used?
Are logs of API usage stored securely, and for how long?
Data Handling and Retention
What data is logged or retained from inference calls?
Is user data used for retraining or tuning?
What safeguards are in place to prevent data leakage between customers (e.g., via embedding caches or fine-tuning contamination)?
Are requests encrypted in transit and at rest?
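To make questionnaire responses reviewable over time, the questions above can be captured as structured data rather than a free-form document. The sketch below is a minimal illustration of that idea in Python; the class names, fields, and the sample vendor name are assumptions for illustration, not part of any specific GRC tool.

```python
# Minimal sketch: track AI vendor questionnaire responses as structured data.
# Categories and question text mirror the example questionnaire above;
# everything else (class/field names, vendor name) is illustrative.
from dataclasses import dataclass, field


@dataclass
class Question:
    category: str
    text: str
    answer: str = ""  # vendor's response, filled in during review

    @property
    def answered(self) -> bool:
        return bool(self.answer.strip())


@dataclass
class VendorQuestionnaire:
    vendor: str
    questions: list = field(default_factory=list)

    def unanswered(self) -> list:
        """Questions still awaiting a vendor response."""
        return [q for q in self.questions if not q.answered]


# Seed the questionnaire with a few of the example questions.
questionnaire = VendorQuestionnaire(
    vendor="ExampleAI",  # hypothetical vendor
    questions=[
        Question("Model Weights and Infrastructure",
                 "Are model weights accessible to end users or third parties?"),
        Question("API-Level Security",
                 "Are rate limits, authentication, and abuse prevention "
                 "mechanisms in place?"),
        Question("Data Handling and Retention",
                 "Is user data used for retraining or tuning?"),
    ],
)

# Record one answer during vendor review, then list what remains open.
questionnaire.questions[0].answer = "No; weights are not exposed to customers."
open_items = questionnaire.unanswered()
print(len(open_items))  # number of questions still awaiting a response
```

Keeping responses as data like this makes periodic re-review straightforward: the same structure can be re-sent at each review cycle and diffed against the prior answers.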