Principle #B2

Protect access to AI systems, data, and model assets

Ensure that AI vendors undergo risk assessments to meet security, privacy, and compliance requirements.

Vendor questions

For the purposes of this questionnaire, AI systems include infrastructure used to train, manage, or deploy models; model artifacts themselves; training data and prompt logs; and associated APIs or interfaces used to operate or monitor AI components.

1. What access controls and logging practices are in place for AI training data, models, and outputs? Describe how access is restricted (e.g., RBAC, least privilege), how access is logged, and how these controls are maintained. (An illustrative sketch of such controls follows this list.)

2. Is multi-factor authentication (MFA) required for access to AI model management or deployment systems? If so, specify which systems are covered and how MFA is enforced.

3. How do you monitor for anomalous API usage within your AI systems? Describe any detection tools, thresholds, or behavioral baselines used, and how anomalies are triaged and responded to. (A baseline-based detection sketch follows this list.)

4. How do you assess AI systems for unauthorized access risks? Outline your assessment process, including frequency, scope (e.g., penetration tests, cloud audits), and mitigation workflows for identified risks.
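To make question 1 concrete, the following is a minimal sketch of a least-privilege role check with audit logging for AI model assets. The role names, actions, and resources are hypothetical examples, not requirements or any specific vendor's implementation; real deployments would typically rely on an identity provider or cloud IAM service rather than an in-process map.

```python
# Illustrative sketch only: a minimal role-based access check with audit
# logging for AI model assets. Role names, actions, and resources below
# are hypothetical, not drawn from any specific vendor's system.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai_asset_audit")

# Least-privilege role map: each role lists only the actions it needs.
ROLE_PERMISSIONS = {
    "ml_engineer": {"read_training_data", "deploy_model"},
    "data_annotator": {"read_training_data"},
    "auditor": {"read_access_logs"},
}

def check_access(user: str, role: str, action: str, resource: str) -> bool:
    """Allow the action only if the role grants it; log every decision."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    audit_log.info(
        "%s user=%s role=%s action=%s resource=%s decision=%s",
        datetime.now(timezone.utc).isoformat(),
        user, role, action, resource,
        "ALLOW" if allowed else "DENY",
    )
    return allowed

# Example: a denied request still produces an audit record for review.
check_access("alice", "data_annotator", "deploy_model", "models/fraud-v2")
```

Logging both allowed and denied decisions, as above, is what lets a reviewer answer the "how access is logged" part of question 1 with evidence rather than policy statements.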
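For question 3, one common approach is to compare current API usage against a per-client behavioral baseline. Below is a minimal sketch of that idea using a mean-plus-k-standard-deviations threshold; the window contents and the threshold k are hypothetical tuning parameters, and production systems would usually combine this with rate limits and richer signals (endpoints called, geographies, token scopes).

```python
# Illustrative sketch only: flagging anomalous API call volumes against a
# per-client behavioral baseline (mean + k standard deviations). The
# threshold k and the history window are hypothetical tuning parameters.
from statistics import mean, stdev

def is_anomalous(history: list[int], current: int, k: float = 3.0) -> bool:
    """Return True if the current request count deviates from the baseline.

    history: recent per-interval request counts for one client or API key.
    current: the count observed in the latest interval.
    """
    if len(history) < 2:              # not enough data to form a baseline
        return False
    baseline = mean(history)
    spread = stdev(history) or 1e-9   # avoid a zero-width threshold
    return abs(current - baseline) > k * spread

# Example: a sudden burst well above the baseline is flagged for triage.
recent = [102, 98, 110, 95, 105, 99]
print(is_anomalous(recent, 450))   # True: possible abuse or a leaked key
print(is_anomalous(recent, 108))   # False: within normal variation
```

Flagged intervals would then feed the triage and response workflow the question asks vendors to describe.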
