Principle #3

Prevent catastrophic misuse (CBRN)

Ensure that AI vendors undergo risk assessments to meet security, privacy, and compliance requirements.

Vendor questions

For the purposes of this questionnaire, catastrophic harm refers to AI outputs that could materially assist in the creation, deployment, or proliferation of Chemical, Biological, Radiological, or Nuclear (CBRN) threats—or other forms of extreme, high-impact misuse. This includes step-by-step guidance, novel synthesis pathways, or outputs that could directly lower the barrier to catastrophic capabilities.

1. How do you prevent your AI system from producing outputs that contain or enable access to CBRN-related or other catastrophic misuse knowledge? Describe the filtering, refusal mechanisms, or redaction systems used. Include examples of prohibited queries or system-level restrictions designed to address these risks.

2. Do you maintain logs of prompts or outputs related to CBRN or catastrophic misuse? If so, describe what is logged, how frequently these logs are reviewed, and how novel misuse scenarios are identified and escalated.

3. How do you evaluate your AI system's vulnerability to CBRN or other catastrophic misuse scenarios? Describe how you conduct red-teaming or adversarial evaluations for high-risk misuse, including prompt design, test cases, and evaluation cadence. Include any evaluations performed in the last 12 months.

4. How do you assess the effectiveness of your CBRN-related safeguards? What metrics, performance thresholds, or real-world testing methods do you use to validate that filtering and mitigation systems are functioning as intended?

5. Have any third-party audits or evaluations been conducted on your CBRN safeguards? If yes, provide details of the most recent external reviews, including scope, evaluator, findings, and remediation actions taken.
