What's now in force
- Transparency disclosures for AI-generated content: chatbots, image generation, and deepfake-style synthetic media.
- High-risk system requirements for systems used in credit scoring, hiring, education, and critical infrastructure.
- General-purpose AI model requirements for foundation-model providers (Anthropic, OpenAI, Mistral, etc.).
Five product-team check items
- Disclose when content is AI-generated. Required for anything user-facing.
- Document your system's purpose. A short, written "what this does and why" — auditors will ask.
- Audit logs. Record enough context (inputs, model version, output) to reproduce a decision when it is challenged. Three months of retention is the safe minimum.
- Bias testing for hiring/credit-style features. Document the test set, the metric, and remediation steps.
- Right to human review. Users in scope must be able to escalate to a human reviewer.
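The audit-log and bias-testing items above can be sketched in a few lines. This is an illustrative sketch, not a compliance implementation: the field names (`model_version`, `input_payload`, `human_review_requested`) are hypothetical, and the 0.8 threshold in the bias check is the common "four-fifths rule" of thumb, which the text does not mandate.

```python
import hashlib
import json
from datetime import datetime, timezone

def decision_record(model_version: str, input_payload: dict, output: dict) -> dict:
    """Build one audit-log entry with enough context to reproduce the decision."""
    canonical = json.dumps(input_payload, sort_keys=True)  # stable serialization
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,        # pin the exact model that decided
        "input_sha256": hashlib.sha256(canonical.encode()).hexdigest(),
        "input_payload": input_payload,        # full inputs, so the decision can be replayed
        "output": output,
        "human_review_requested": False,       # flipped if the user escalates
    }

def selection_rate_ratio(selected_a: int, total_a: int,
                         selected_b: int, total_b: int) -> float:
    """Disparate-impact style metric: ratio of the two groups' selection rates.
    A common rule of thumb flags ratios below 0.8 (the 'four-fifths rule')."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Illustrative usage with made-up values
record = decision_record("credit-scorer-2.3.1",
                         {"income": 52000, "tenure_months": 18},
                         {"decision": "approve", "score": 0.81})
ratio = selection_rate_ratio(40, 100, 50, 100)
```

Hashing a canonical serialization of the inputs makes it cheap to prove, months later, that a replayed decision used exactly the same inputs; the documented metric and test set from the bias-testing item would live alongside `selection_rate_ratio` results in the same audit trail.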
What's exempt (mostly)
Low-risk consumer chatbots, internal productivity tools, and content-generation aids that don't feed consequential decisions. The test is whether the AI is making consequential decisions about people.
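The scoping test above can be written down as a triage helper. This is a rough decision aid under the section's own framing, not language from any regulation; the function name and inputs are illustrative.

```python
def compliance_tier(makes_consequential_decisions_about_people: bool,
                    user_facing_generated_content: bool) -> str:
    """Triage mirroring the exemption logic: consequential decisions about
    people pull a system into the full checklist; user-facing generated
    content still needs a disclosure even when otherwise low-risk."""
    if makes_consequential_decisions_about_people:
        return "high-risk: full checklist applies"
    if user_facing_generated_content:
        return "low-risk: disclosure only"
    return "exempt: internal tooling, no consequential decisions"

# Illustrative usage
tier = compliance_tier(makes_consequential_decisions_about_people=False,
                       user_facing_generated_content=True)
```

A hiring screener would land in the first branch regardless of how it is surfaced; an internal summarizer lands in the last.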