Control # F2.1
Restrict outputs that assist in code exploits or scalable abuse
Use filters, classifiers, or guardrails to prevent the model from generating content that supports vulnerability discovery, exploit development, scraping, or spam at scale.
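A minimal sketch of what such an output guardrail might look like is shown below. This is an illustration only, not a prescribed implementation: the category names, regex patterns, refusal message, and function names are placeholders, and a production system would typically rely on a trained safety classifier or moderation model rather than keyword matching, with flagged events logged as audit evidence.

import re

# Hypothetical illustration: a minimal output guardrail that screens a model
# response before it is returned to the user. Categories, patterns, and the
# refusal message are placeholders, not a real policy.

BLOCKED_CATEGORIES = {
    "exploit_development": [
        r"\bshellcode\b",
        r"\bprivilege escalation exploit\b",
        r"\bbuffer overflow payload\b",
    ],
    "scalable_abuse": [
        r"\bbulk scraping\b",
        r"\bcaptcha bypass\b",
        r"\bmass email sender\b",
    ],
}

REFUSAL_MESSAGE = (
    "This response was withheld because it may enable exploitation or scaled abuse."
)


def classify_output(text: str) -> list[str]:
    """Return the blocked categories matched by the model output, if any."""
    matched = []
    for category, patterns in BLOCKED_CATEGORIES.items():
        if any(re.search(p, text, re.IGNORECASE) for p in patterns):
            matched.append(category)
    return matched


def guarded_response(model_output: str) -> str:
    """Replace the output with a refusal if any blocked category is detected."""
    flagged = classify_output(model_output)
    if flagged:
        # In a real deployment this event would also be logged for audit evidence.
        return REFUSAL_MESSAGE
    return model_output


if __name__ == "__main__":
    print(guarded_response("Step 1: inject the shellcode into the target process..."))
    print(guarded_response("Here is how to parse a date string in Python."))

Keeping the detection step (classify_output) separate from the enforcement step (guarded_response) makes it straightforward to swap the keyword matcher for a classifier or third-party moderation API without changing the calling code.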
Evidence
We'll list specific evidence that demonstrates compliance with this control. Typically, this includes screenshots, documentation of a legal or operational policy, or product demonstrations.
Recommended actions
We'll recommend specific practices and actions for complying with this control.