Control # F3.1

Deploy models that limit CBRN and dangerous knowledge

Deploy or use models that employ hardcoded filters, refusal triggers, or classifiers to prevent the generation of dangerous knowledge (e.g., chemical synthesis routes, weapon assembly instructions).
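As an illustration only, the kind of hardcoded filter this control describes can be sketched as a pre-generation gate that matches incoming prompts against a blocklist and returns a refusal instead of passing the prompt to the model. The pattern list, refusal message, and function names below are hypothetical placeholders, not a real or sufficient policy; production systems typically combine such filters with trained classifiers.

```python
import re

# Hypothetical blocklist for illustration only -- a real deployment would
# maintain a vetted, regularly updated policy, not two example patterns.
BLOCKED_PATTERNS = [
    re.compile(r"\bsynthesi[sz]e\b.*\bnerve agent\b", re.IGNORECASE),
    re.compile(r"\bweapon assembly\b", re.IGNORECASE),
]

REFUSAL_MESSAGE = "I can't help with that request."


def filter_prompt(prompt: str) -> tuple[bool, str]:
    """Gate a prompt before it reaches the model.

    Returns (allowed, response): if any blocked pattern matches,
    allowed is False and response carries the hardcoded refusal.
    """
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(prompt):
            return False, REFUSAL_MESSAGE
    return True, ""
```

A keyword filter like this is only the crudest layer; it catches literal phrasing but not paraphrases, which is why the control also mentions refusal triggers and classifiers operating on model inputs and outputs.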

Evidence

We'll list specific evidence that demonstrates compliance with this control. Typically, this includes screenshots, proof of a legal or operational policy, or product demonstrations.

Recommended actions

We'll recommend specific practices and actions for complying with this control.
