AI Safety & Alignment for Engineers
Ship AI features that don't become incidents.
Safety as engineering practice, not research debate. Covers prompt injection (direct and indirect), output filtering, jailbreak red-teaming, PII handling, bias evaluation, EU AI Act and procurement compliance, and a copy-pasteable pre-launch checklist.
Duration: 9h · Lessons: 10 · Learners: 1.1k
Course map
Lessons unlock as you complete the previous one.
Lesson 1
What "AI safety" means for product engineers
9m · 35 XP

Lesson 2
Prompt injection — direct and indirect
11m · 40 XP

Lesson 3
Building a red-team eval set
10m · 40 XP

Lesson 4
Output filtering and harm classification
9m · 35 XP

Lesson 5
Jailbreaks — what works in 2026 and how to test
9m · 35 XP

Lesson 6
PII handling and data minimization
8m · 35 XP

Lesson 7
Evaluating for bias and disparate quality
9m · 35 XP

Lesson 8
Policy framework — EU AI Act, US AI Bill of Rights, customer audits
11m · 45 XP

Lesson 9
Responsible disclosure and incident response
8m · 35 XP

Lesson 10
A safety checklist you can actually ship with
8m · 40 XP