GPT-4o

OpenAI · GPT-4o

Widely deployed multimodal model kept as a legacy reference after retirement from ChatGPT defaults.

Type
multimodal
Context
128K tokens
Max Output
16K tokens
Status
legacy
Input
$2.50/1M tok
Output
$10/1M tok
API Access
Yes
License
proprietary
multimodal reasoning vision tool-use general-purpose
Released May 2024 · Updated March 6, 2026

Overview

Freshness note: Model capabilities, limits, and pricing can change quickly. This profile is a point-in-time snapshot last verified on February 15, 2026.

GPT-4o was OpenAI’s widely deployed multimodal model tier for mixed workloads spanning text, vision, and tool-enabled workflows. On February 13, 2026, OpenAI retired GPT-4o from ChatGPT while keeping API access available, so it is best treated as a legacy reference rather than a current default.

Capabilities

The model handles instruction-following, general analysis, structured outputs, and multimodal interpretation with strong consistency. It is suitable for customer-facing copilots, workflow automation, and broad product integrations.
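For the structured-output use case above, a request to GPT-4o typically enables JSON mode via the Chat Completions API. The sketch below only builds the request body; the helper name, prompt text, and `max_tokens` value are illustrative, and the payload shape assumes OpenAI's Chat Completions request format.

```python
import json

def build_structured_request(user_text: str) -> dict:
    """Build a Chat Completions request body asking GPT-4o for a JSON reply.

    Helper name and prompt wording are illustrative; the payload shape
    follows OpenAI's Chat Completions API with JSON mode enabled.
    """
    return {
        "model": "gpt-4o",
        "messages": [
            {"role": "system", "content": "Reply with a JSON object only."},
            {"role": "user", "content": user_text},
        ],
        "response_format": {"type": "json_object"},  # JSON mode
        "max_tokens": 1024,
    }

req = build_structured_request("Extract the invoice number and total.")
print(json.dumps(req, indent=2))
```

In a live integration this dict would be passed to the API client (or POSTed to the completions endpoint) with authentication; verify field names against the current OpenAI API reference before relying on them.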

Technical Details

GPT-4o pairs a 128K-token context window with a 16K-token maximum output, a practical budget for most production tasks. It is often used as a single default model where teams want one tier that serves multiple use cases without complex routing.
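Because the context window is shared between prompt and completion, a pre-flight budget check is a common guard. This is a rough sketch using the limits listed above; the ~4-characters-per-token heuristic is an assumption, not the model's actual tokenizer (use a real tokenizer such as tiktoken for accurate counts).

```python
# Published GPT-4o limits from this profile (verify against current docs).
CONTEXT_WINDOW = 128_000  # tokens, shared by prompt + completion
MAX_OUTPUT = 16_000       # tokens reserved for the completion

def rough_token_count(text: str) -> int:
    """Crude estimate: roughly 4 characters per token for English text."""
    return max(1, len(text) // 4)

def fits_in_context(prompt: str, reserved_output: int = MAX_OUTPUT) -> bool:
    """True if the prompt leaves room for the reserved completion budget."""
    return rough_token_count(prompt) + reserved_output <= CONTEXT_WINDOW

print(fits_in_context("Summarize this document."))  # short prompt: fits
print(fits_in_context("x" * 1_000_000))             # ~250K tokens: too large
```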

Pricing & Access

Available via the OpenAI API and product surfaces with published per-token pricing. Verify current rates and model options in the official OpenAI docs before making budget commitments.
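As a worked example of the listed rates ($2.50 per 1M input tokens, $10.00 per 1M output tokens), a small estimator like the one below can sanity-check a budget. The function name is illustrative, and the rates are a point-in-time snapshot that should be re-checked against OpenAI's pricing page.

```python
# Listed GPT-4o rates from this profile; rates change, so verify first.
INPUT_RATE_PER_M = 2.50    # USD per 1M input tokens
OUTPUT_RATE_PER_M = 10.00  # USD per 1M output tokens

def estimate_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Rough USD cost at the listed per-million-token rates."""
    return (input_tokens / 1_000_000) * INPUT_RATE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_RATE_PER_M

# Example: 10M input tokens + 1M output tokens
#   10 * $2.50 + 1 * $10.00 = $35.00
monthly = estimate_cost_usd(10_000_000, 1_000_000)
print(f"${monthly:.2f}")  # → $35.00
```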

Best Use Cases

Use GPT-4o when you still need compatibility with existing API integrations or established multimodal workflows. For new ChatGPT-facing deployments, OpenAI’s current defaults have moved forward to newer GPT-5 family models.

Comparisons

Compared with GPT-4o mini, GPT-4o usually offers higher quality on complex reasoning and multimodal tasks. Compared with GPT-5.3, GPT-4o is now the older route in OpenAI’s public product lineup. Compared with Gemini 2.5 Flash, tradeoffs often depend on ecosystem fit and latency goals.