Principle E4

Measure and reduce unfair bias in AI outcomes

Ensure that AI vendors undergo risk assessments to meet security, privacy, and compliance requirements.

Vendor questions

1. How do you evaluate AI outcomes for potential bias across relevant demographic or stakeholder groups? Please describe your evaluation process, including which attributes or fairness metrics are considered and how often evaluations are performed. Provide examples or summaries of recent evaluations. (An illustrative metric computation appears after this list.)

2. What actions do you take when material fairness issues are identified in your AI systems? Describe your remediation process and decision-making criteria. Share examples of past fairness issues and how they were addressed.

3. How do you determine which AI systems are considered high-risk in terms of fairness impact? Provide criteria or frameworks used in this classification, and list examples of systems that you've classified as high-risk.

4. Do you validate fairness in high-risk systems through independent, third-party audits? If yes, provide details of your most recent fairness audit, including the auditor, scope, findings, and remediation steps taken.

5. What mechanisms are in place for ongoing monitoring and improvement of fairness in your AI systems? Describe tools, frameworks, or governance processes used to track fairness over time and adapt to changing conditions or data distributions.
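Question 1 asks vendors to describe the fairness metrics they compute. As a minimal, illustrative sketch of what such a check can look like, the Python snippet below computes two common group-fairness gaps for a binary classifier: demographic parity difference (do selection rates differ across groups?) and equal opportunity difference (do true-positive rates differ across groups?). All names and data here are hypothetical; production evaluations typically rely on dedicated tooling such as fairlearn or AIF360 and on validated demographic attributes.

```python
from collections import defaultdict

def group_rates(y_true, y_pred, groups):
    """Per-group selection rate and true-positive rate for binary labels."""
    stats = defaultdict(lambda: {"n": 0, "selected": 0, "pos": 0, "tp": 0})
    for yt, yp, g in zip(y_true, y_pred, groups):
        s = stats[g]
        s["n"] += 1
        s["selected"] += yp          # predicted positive
        s["pos"] += yt               # actual positive
        s["tp"] += yt and yp         # true positive (both 1)
    return {
        g: {
            "selection_rate": s["selected"] / s["n"],
            "tpr": s["tp"] / s["pos"] if s["pos"] else float("nan"),
        }
        for g, s in stats.items()
    }

def fairness_gaps(rates):
    """Largest between-group gap for two common fairness metrics."""
    sel = [r["selection_rate"] for r in rates.values()]
    tpr = [r["tpr"] for r in rates.values() if r["tpr"] == r["tpr"]]  # drop NaN
    return {
        # Demographic parity: selection rates should be similar across groups.
        "demographic_parity_diff": max(sel) - min(sel),
        # Equal opportunity: true-positive rates should be similar across groups.
        "equal_opportunity_diff": max(tpr) - min(tpr) if tpr else float("nan"),
    }

# Hypothetical predictions for two demographic groups "a" and "b".
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(fairness_gaps(group_rates(y_true, y_pred, groups)))
# {'demographic_parity_diff': 0.0, 'equal_opportunity_diff': 0.333...}
```

In practice, gaps like these are compared against a pre-agreed threshold (for example, the well-known four-fifths rule for selection-rate ratios), and anything above the threshold feeds into the remediation process asked about in question 2.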
