Configure AI models for Chat, Composer, and Agent mode. Cursor supports models from OpenAI, Anthropic, Google, and its own proprietary models. Choosing the right model per task is a major productivity lever.
Standard models:

| Model | Provider | Best For | Context |
|---|---|---|---|
| GPT-4o | OpenAI | General coding, fast responses | 128K |
| GPT-4o-mini | OpenAI | Simple tasks, cost-efficient | 128K |
| Claude Sonnet | Anthropic | Code quality, detailed explanations | 200K |
| Claude Haiku | Anthropic | Fast simple tasks | 200K |
| cursor-small | Cursor | Quick completions, simple edits | 8K |
| Auto | Cursor | Automatic model selection per query | Varies |
Premium models:

| Model | Provider | Best For | Context |
|---|---|---|---|
| Claude Opus | Anthropic | Complex architecture, hard bugs | 200K |
| GPT-5 | OpenAI | Advanced reasoning, complex code | 128K+ |
| o1 / o3 | OpenAI | Deep reasoning, mathematical logic | 128K |
| Gemini 2.5 Pro | Google | Design, large context analysis | 1M |
- Bug fix in one file → GPT-4o or Claude Sonnet
- Multi-file refactoring → Claude Sonnet or Opus
- Architecture planning → Claude Opus or GPT-5
- Test generation → GPT-4o (fast + good patterns)
- Complex algorithm design → o1/o3 reasoning models
- Large codebase analysis → Gemini 2.5 Pro (1M context)
- Simple autocomplete → cursor-small (automatic via Tab)
- "I don't know" → Auto mode
Switching models:

- Per conversation: click the model name in the top right of the Chat or Composer panel.
- Default model: Cursor Settings > Models > set the default for Chat and Composer separately.
- Auto mode: select "Auto" as the model. Cursor picks the best model per query based on complexity and current server load.
Use your own API keys to bypass Cursor's quota system. You pay the provider directly at their rates.
Cursor Settings > Models > enable Use own API key:
```
OpenAI:
  API Key: sk-proj-xxxxxxxxxxxxxxxxxxxx

Anthropic:
  API Key: sk-ant-xxxxxxxxxxxxxxxxxxxx

Google (Gemini):
  API Key: AIzaSyxxxxxxxxxxxxxxxxxxxxxxxxx
```
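Before pasting a key into Cursor, it can save a debugging round-trip to confirm the key works against the provider directly. A minimal sketch using the official `openai` Python package (the key shown is a placeholder):

```python
# Sanity-check an OpenAI key before adding it to Cursor.
# Requires: pip install openai
from openai import OpenAI

client = OpenAI(api_key="sk-proj-xxxxxxxxxxxxxxxxxxxx")  # placeholder key

# Listing models is a cheap call that fails fast on a bad or revoked key.
print([m.id for m in client.models.list().data][:5])
```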
For enterprise Azure deployments:
Cursor Settings > Models > Azure:

```
API Key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Endpoint: https://my-instance.openai.azure.com
Deployment: gpt-4o-deployment-name
API Version: 2024-10-21
```
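The same kind of smoke test works for Azure. A sketch using the `openai` package's `AzureOpenAI` client with the placeholder values from above:

```python
# Verify an Azure OpenAI deployment responds before wiring it into Cursor.
# Requires: pip install openai
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",  # placeholder
    azure_endpoint="https://my-instance.openai.azure.com",
    api_version="2024-10-21",
)

# On Azure, `model` is the deployment name, not the base model name.
resp = client.chat.completions.create(
    model="gpt-4o-deployment-name",
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```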
For OpenAI-compatible providers (Ollama, LM Studio, Together AI):
Cursor Settings > Models > Add Model
```
Model Name (e.g. llama-3.1-70b)
Override OpenAI Base URL: http://localhost:11434/v1 (Ollama) or the provider URL
```
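Before adding a local endpoint, you can confirm it actually speaks the OpenAI wire protocol. A sketch assuming Ollama's default port and a model name that matches one you have pulled locally:

```python
# Smoke-test an OpenAI-compatible local endpoint (Ollama defaults assumed).
# Requires: pip install openai, and the local server running
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="ollama",  # Ollama ignores the key, but the client requires one
)

resp = client.chat.completions.create(
    model="llama-3.1-70b",  # must match a model available on the server
    messages=[{"role": "user", "content": "Say hello in one word."}],
)
print(resp.choices[0].message.content)
```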
Only some features route through your key:

| Feature | Uses BYOK Key? | Uses Cursor Model? |
|---|---|---|
| Chat | Yes | -- |
| Composer | Yes | -- |
| Agent mode | Yes | -- |
| Tab Completion | No | Always Cursor model |
| Apply from Chat | No | Always Cursor model |
Tab Completion and Apply always use Cursor's proprietary models regardless of BYOK configuration.
- Tier 1 (Fast + Cheap): cursor-small, GPT-4o-mini, Claude Haiku. Use for simple questions, syntax help, boilerplate.
- Tier 2 (Balanced): GPT-4o, Claude Sonnet. Use for most coding tasks, debugging, refactoring.
- Tier 3 (Premium): Claude Opus, GPT-5, o1/o3. Use for architecture decisions, critical bugs, complex logic.
Cursor subscription includes a monthly quota of "fast requests" (premium model uses). When exceeded, requests queue behind other users ("slow requests").
Check your current usage at cursor.com/settings > Usage.

Model families also have recognizably different coding styles. The same prompt tends to produce different code:

```python
# Claude models: Verbose, well-documented, defensive
def process_order(order: Order) -> Result[ProcessedOrder, OrderError]:
    """Process an order through the payment and fulfillment pipeline.

    Args:
        order: The order to process.

    Returns:
        Result containing the processed order or an error.

    Raises:
        Never raises -- errors returned as Result.Err.
    """
    if not order.items:
        return Err(OrderError.EMPTY_ORDER)
    ...
```
```python
# GPT models: Concise, pragmatic, fewer comments
def process_order(order: Order) -> ProcessedOrder:
    if not order.items:
        raise ValueError("Order has no items")
    ...
```
Reasoning models like o1 and o3 "think" before responding. They are slower but significantly better at multi-step logic, mathematical reasoning, and complex algorithm design. They are overkill for simple tasks; use them deliberately for hard problems.