Principle #
B
1
Limit AI access to external systems
Ensure that AI vendors undergo risk assessments to meet security, privacy, and compliance requirements.
Controls
Vendor questions
For the purposes of this questionnaire, AI tool calls refer to actions initiated by the AI system that interact with external tools, services, APIs, or system components—such as retrieving files, triggering workflows, calling APIs, executing commands, or performing transactions. These capabilities may introduce security, operational, or compliance risk depending on how they are scoped and governed. 1. Do your AI systems have the ability to call external tools or systems (e.g., APIs, file systems, code execution, third-party services)? If yes, describe the criteria used to determine which tools are enabled and under what conditions the AI is permitted to initiate calls. 2. Do you detect and respond to anomalous or unexpected AI tool call usage? Describe your detection methods and response process. Include examples of anomalies identified and remediated in the past 12 months. 3. Are any AI-initiated tool calls subject to human approval prior to execution? If yes, describe the approval workflow, criteria for determining high-risk calls, and how approvals are documented. 4. Do you log AI tool calls? Describe what information is captured (e.g., inputs, timestamps, parameters, outcomes), how long logs are retained, and how they are protected. 5. How do you evaluate your AI systems for unauthorized or excessive tool usage? Include the frequency of evaluations, the types of misuse you look for, and how you address issues when found.