Principle #C3
Don’t allow AI access to PII
Ensure that AI vendors undergo risk assessments to meet security, privacy, and compliance requirements.
Controls
Vendor questions
For the purposes of this questionnaire, personally identifiable information (PII) refers to any data that can directly or indirectly identify an individual. Because definitions vary by regulation and context, we ask that you describe your internal definition of PII and how it informs your practices.

1. How do you define PII within your organization, and what sources inform this definition (e.g., GDPR, CCPA, NIST)? Describe any categories or examples you explicitly include or exclude.
2. What mechanisms do you use to detect and redact unnecessary PII from AI inputs? Are these protections applied automatically? How do you ensure consistency and minimize false negatives?
3. Do you scan or review AI outputs for the presence of PII before delivery to the user? If yes, describe how this process is implemented and whether it runs in real time, batch mode, or asynchronously. Provide examples if available. (For reference, a minimal input/output filtering sketch follows this list.)
4. How do you evaluate the performance of your PII filtering systems (input and output)? Include any benchmarks, detection thresholds, test sets, or error rates. Share evaluation reports if available. (A brief evaluation-metrics sketch also follows this list.)
5. Have you undergone any third-party reviews or audits of your PII protection mechanisms in the past 12 months? If so, describe the scope, findings, and any remediations or improvements made as a result.
6. What policies or procedures are in place in the event that PII is inadvertently processed or exposed by an AI system? Include how incidents are detected, reported, triaged, and resolved.
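For reviewers assessing answers to questions 2 and 3, the sketch below shows the general shape of input- and output-side PII filtering. It is a minimal illustration only, assuming a Python service and a purely regex-based detector; the patterns, the redact_pii and guarded_completion helpers, and the call_model stand-in are hypothetical, and production systems generally rely on trained entity recognizers or dedicated detection services rather than a handful of regexes.

```python
import re

# Illustrative patterns only; real deployments typically use trained
# entity-recognition models or dedicated PII-detection services.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_phone": re.compile(r"\b(?:\+1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def redact_pii(text: str) -> tuple[str, list[str]]:
    """Replace detected PII spans with typed placeholders.

    Returns the redacted text plus the categories found, so the caller
    can log *that* PII was removed without logging the PII itself.
    """
    found: list[str] = []
    for kind, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            found.append(kind)
            text = pattern.sub(f"[REDACTED:{kind}]", text)
    return text, found


def guarded_completion(prompt: str, call_model) -> str:
    """Filter PII on the way in (question 2) and on the way out
    (question 3). `call_model` is a stand-in for the vendor's actual
    model invocation."""
    safe_prompt, _ = redact_pii(prompt)        # input-side redaction
    raw_output = call_model(safe_prompt)
    safe_output, _ = redact_pii(raw_output)    # output-side scan before delivery
    return safe_output
```

A synchronous guard like this corresponds to the "real time" option in question 3; batch or asynchronous review would instead apply the same redaction step to stored outputs before they are released.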
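Question 4 asks for error rates on filters of this kind. The sketch below shows one minimal way such figures can be produced, assuming a small hand-labelled test set; the example sentences, the naive_detect stand-in, and the metric selection are illustrative, not a prescribed benchmark.

```python
import re

# Minimal evaluation harness for a PII detector (illustrative only).
# Each case is (text, has_pii), where has_pii is a hand-assigned label.
TEST_SET = [
    ("Contact me at jane.doe@example.com", True),
    ("The meeting moved to 3pm on Tuesday", False),
    ("SSN on file: 123-45-6789", True),
    ("Ticket closed, no further action needed", False),
]

_EMAIL_OR_SSN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+|\b\d{3}-\d{2}-\d{4}\b")


def naive_detect(text: str) -> bool:
    """Toy detector standing in for a vendor's real PII classifier."""
    return bool(_EMAIL_OR_SSN.search(text))


def evaluate(detect) -> dict[str, float]:
    """Score a detector that returns True when it finds PII in a string."""
    tp = fp = fn = 0
    for text, has_pii in TEST_SET:
        flagged = detect(text)
        if flagged and has_pii:
            tp += 1
        elif flagged and not has_pii:
            fp += 1
        elif has_pii:
            fn += 1
    return {
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
        # PII that slipped through the filter entirely.
        "false_negative_rate": fn / (tp + fn) if tp + fn else 0.0,
    }


if __name__ == "__main__":
    print(evaluate(naive_detect))
```

The false-negative rate (PII that slips through entirely) is the figure most relevant to questions 2 and 4, since false positives merely over-redact while false negatives expose data.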