Principle #D4

Align with AI regulation and bias/anti-discrimination law

Ensure that AI vendors undergo risk assessments to meet security, privacy, and compliance requirements.

Vendor questions

This section focuses on how your organization tracks and complies with laws that govern AI systems, including both AI-specific regulations (e.g., EU AI Act, NYC AEDT Law) and general anti-discrimination or bias-related laws (e.g., GDPR, civil rights legislation, sector-specific rules in employment or finance). These laws may impose obligations such as explainability, fairness, impact assessments, or public disclosures. Please answer the following questions based on your applicable systems and use cases.

1. Which AI-related or anti-discrimination laws does your organization consider your systems subject to? For each law or regulation you’ve identified (e.g., EU AI Act, NYC AEDT Law, GDPR Article 22), describe the relevant AI use cases and how you determined the law applies.

2. How do you monitor and manage your compliance obligations across different laws and jurisdictions? Describe the process for tracking compliance status, reviewing changes in regulatory requirements, and updating internal policies.

3. Do you have a team or function responsible for managing legal and regulatory risks related to AI and bias? If yes, describe its structure, responsibilities, and how it collaborates across legal, engineering, and product teams.

4. How can users, customers, candidates, or other affected individuals report concerns related to discrimination or fairness in your AI systems? Describe how these reports are submitted, reviewed, and resolved.

5. Do you perform legal or policy reviews of high-risk AI use cases before deployment? If yes, describe the criteria for what constitutes a high-risk use case, what the review process entails, and who is responsible for conducting it.
