OpenAI has released GPT-5, its most capable model to date, featuring real-time video analysis and multi-step reasoning that the company says surpasses expert human performance on a range of benchmarks.
The model, announced at a San Francisco event, processes text, images, audio and video simultaneously. In demos, GPT-5 was shown diagnosing a mechanical fault from a short clip, drafting a legal brief from scanned documents and composing functional code from a hand-drawn wireframe.
Chief executive Sam Altman called it "the most capable AI system ever deployed to the public," while cautioning that the technology still makes mistakes and requires human oversight for high-stakes decisions.
Independent researchers who received early access praised GPT-5's coherence over long documents and its sharply reduced tendency to "hallucinate" plausible-sounding falsehoods. On the MMLU benchmark, which tests knowledge across 57 academic subjects, GPT-5 scored 92.4 percent, compared with around 86 percent for its predecessor GPT-4.
Pricing starts at $0.01 per thousand input tokens for API users, roughly half the cost of GPT-4 Turbo at launch. A version integrated into ChatGPT Plus will roll out over the coming weeks.
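At the quoted rate, input-token costs scale linearly with request size. The sketch below is purely illustrative: the function name and example figures are not from the announcement, and output-token pricing, which was not specified, is excluded.

```python
def input_cost_usd(num_tokens: int, rate_per_1k: float = 0.01) -> float:
    """Estimate input-token cost in dollars at a given per-1,000-token rate."""
    return num_tokens / 1000 * rate_per_1k

# Example: a 100,000-token batch of input at $0.01 per 1K tokens.
print(f"${input_cost_usd(100_000):.2f}")  # prints "$1.00"
```

So processing roughly a novel's worth of input text would cost on the order of a dollar at the stated launch rate.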
Critics from the AI safety community urged the company to slow deployment until independent auditors can fully assess risks. "Capability gains of this magnitude deserve more than a four-week red-team window," said Helen Park, a researcher at the Center for AI Safety.
OpenAI said it has implemented new safeguards against misuse, including a stricter content policy and an enhanced detection system for synthetic media generated by the model.