The new model, available as a research preview to ChatGPT Pro users, delivers code generation up to 15x faster than previous versions while maintaining strong performance on real-world coding tasks. Running on Cerebras' Wafer Scale Engine 3, a purpose-built accelerator for low-latency inference, Codex-Spark changes how developers interact with AI coding assistants.
"This is designed as a daily productivity driver, helping users with rapid prototyping," OpenAI explained in the announcement. The model offers a 128k-token context window and is tuned for quick coding tasks rather than longer, more complex projects.
In demonstration videos, OpenAI showed Codex-Spark building a snake game in seconds, highlighting the speed gain over standard models. Near-instant code generation lets developers iterate rapidly and see results in real time as they type.