Control # F2.1

Restrict outputs that assist in code exploits or scalable abuse

Block AI outputs that support technical exploitation, including writing or modifying code to exploit vulnerabilities, bypassing security mechanisms, or automating cyberattacks at scale.
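In practice, a control like this is often enforced as a policy layer that screens generated text before it is returned to the user. The sketch below is illustrative only: the category names, the keyword heuristic, and the `classify_output` / `gate_output` functions are hypothetical stand-ins, not a real exploit detector or any specific vendor's implementation.

```python
# Minimal sketch of an output-gating check. The keyword cues below are
# placeholders for a real misuse classifier; any production system would
# use a trained model or richer policy engine instead.

BLOCKED_CATEGORIES = {"exploit_code", "security_bypass", "attack_automation"}

def classify_output(text: str) -> set:
    """Hypothetical misuse classifier: flags categories via keyword cues."""
    cues = {
        "exploit_code": ("shellcode", "buffer overflow payload"),
        "security_bypass": ("disable authentication", "bypass 2fa"),
        "attack_automation": ("mass-scan", "botnet controller"),
    }
    lowered = text.lower()
    return {cat for cat, words in cues.items()
            if any(w in lowered for w in words)}

def gate_output(text: str) -> str:
    """Return the text unchanged if clean; otherwise a refusal marker."""
    flagged = classify_output(text) & BLOCKED_CATEGORIES
    if flagged:
        return "[blocked: " + ", ".join(sorted(flagged)) + "]"
    return text
```

A benign answer passes through unchanged, while text matching a blocked category is replaced with a refusal marker, which could then be logged as evidence for this control.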

Evidence

We'll list specific evidence that demonstrates compliance with this control. Typically, this is screenshots, proof of a legal or operational policy, or product demonstrations.

Recommended actions

We'll recommend specific practices and actions for complying with this control.
