The unstated goal of this series is to cultivate critical thinking about AI in a world dominated by enthusiasm.
This series is an invitation to think critically about artificial intelligence—not just what it can do, but what it should do, and what assumptions shape that "should." While many discussions focus on utility and progress, this series draws attention to unspoken risks, misaligned incentives, and philosophical dilemmas—particularly as AI becomes embedded in decision-making, culture, and governance.
This series is a passion project designed to introduce readers to AI ethics and existential risk. While the ideas aren't uniquely mine, I've carefully curated them into what I consider my own essential "critical AI literacy course." It remains a work in progress: I plan to eventually add a lesson on containment and to call out surveillance capitalism explicitly by name.
Beyond its educational mission, this project holds personal significance—it marked my first opportunity to explore Articulate 360 through an instructional design lens.
Apologies for any typos; this outline is secondary to my Articulate course, found at USC.edu.