Keep people safe by building AI that avoids harm and handles risky situations responsibly.
Protect AI systems—and the people using them—from unauthorized access and dangerous actions.
Respect users’ data by keeping it private, secure, and under their control.
Stay accountable—track how AI is used, follow the rules, and make thoughtful decisions over time.
Build AI that works—producing helpful, accurate, and high-quality results.
Build AI that can't be used to cause large-scale harm or exploit critical systems.
Our Top 10 Concerns subset highlights the most urgent risks AI buyers are raising, but often don't yet know how to evaluate.
Ask your vendors the right questions. Our questionnaire exhaustively reviews vendor practices against existing and emerging AI risks.
Released
Consolidated adversarial-input risks into a single principle, and redesigned the principles covering deceptive AI, cyber exploitation, and catastrophic misuse.