Multiple AI safety researchers holding prominent positions at OpenAI, Google DeepMind, and other major firms have publicly quit in recent weeks, warning that the rapid pace of AI development poses unprecedented risks to society. The exodus represents one of the most significant waves of internal dissent in the AI industry's history.
Dr. Zoe Hitzig, a respected AI safety researcher, said she resigned from OpenAI over the company's decision to begin testing advertisements in ChatGPT. "When profit motives start dictating the direction of AGI development, we've crossed a dangerous line," Hitzig wrote in her resignation letter, which was made public this week.
The resignations coincide with alarming market signals. A Bloomberg analysis shows that mentions of AI disruption on corporate earnings calls nearly doubled quarter over quarter, a trend that has accompanied selloffs in the software and technology sectors. Morgan Stanley has warned of potential threats to the $1.5 trillion U.S. software credit market as AI capabilities accelerate.
The UK government announced it will tighten enforcement of the Online Safety Act to cover AI chatbots, signaling growing regulatory concern. The move treats AI systems as platforms in their own right, capable of generating illegal or harmful content and therefore in need of stricter oversight.