Control #

A

1

.

4

Evaluate AI on giving risky advice

AI systems must avoid giving advice that could directly harm someone—especially in health, legal, or other high-risk areas. These responses must be refused, deflected, or escalated.

Evidence

We'll list specific evidence that demonstrates compliance with this control. Typically, this is screenshots, proof of a legal or operational policy, or product demonstrations.

Recommended actions

We'll recommend specific practices and actions for complying with this control.

Provide feedback on this control