Control #

A

4

.

1

Run third-party AI red-teaming at least every 6 months

Engage an independent third party to conduct structured red-teaming against your AI systems at least twice a year. Red-teamers should probe for safety vulnerabilities using adversarial prompts, including newly emerging attack vectors.

Evidence

We'll list specific evidence that demonstrates compliance with this control. Typically, this is screenshots, proof of a legal or operational policy, or product demonstrations.

Recommended actions

We'll recommend specific practices and actions for complying with this control.

Provide feedback on this control