
Anthropic API Anti-Patterns Guide

v20260423
anth-known-pitfalls
A comprehensive guide detailing common pitfalls and anti-patterns encountered when integrating with the Anthropic Claude API. It highlights crucial differences from other providers' APIs (e.g., OpenAI's), such as the mandatory nature of `max_tokens`, correct handling of system prompts, and precise response parsing techniques. Essential for developers debugging subtle API integration issues and ensuring best practices.

Anthropic Known Pitfalls

Pitfall 1: Wrong Import / Class Name

Python:

# WRONG — common mistake from OpenAI muscle memory
from anthropic import AnthropicClient  # Does not exist

# CORRECT
import anthropic
client = anthropic.Anthropic()

TypeScript:

// WRONG
import { Anthropic } from '@anthropic-ai/sdk';

// CORRECT
import Anthropic from '@anthropic-ai/sdk';  // Default export

Pitfall 2: Forgetting max_tokens (Required)

# WRONG — max_tokens is REQUIRED, unlike OpenAI
msg = client.messages.create(
    model="claude-sonnet-4-20250514",
    messages=[{"role": "user", "content": "Hello"}]
)  # Error: max_tokens is required

# CORRECT
msg = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,  # Always specify
    messages=[{"role": "user", "content": "Hello"}]
)

Pitfall 3: System Prompt in Messages Array

# WRONG — putting system message in messages array (OpenAI pattern)
messages = [
    {"role": "system", "content": "You are helpful."},  # Will cause error
    {"role": "user", "content": "Hello"}
]

# CORRECT — use the system parameter
msg = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    system="You are helpful.",  # Separate parameter
    messages=[{"role": "user", "content": "Hello"}]
)
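
If you are porting an existing OpenAI integration, a small adapter can perform this split mechanically. A minimal sketch (the helper name `split_system` is invented for illustration):

```python
def split_system(messages):
    """Split an OpenAI-style message list into Anthropic's (system, messages) pair."""
    system_parts = [m["content"] for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return "\n\n".join(system_parts), rest
```

Pass the first element as the `system=` parameter and the second as `messages=` when calling `client.messages.create(...)`.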

Pitfall 4: Accessing the Response Incorrectly

# WRONG — OpenAI response pattern
text = response.choices[0].message.content  # AttributeError

# CORRECT — Anthropic response pattern
text = response.content[0].text  # content is array of blocks

# SAFER — handle multiple content blocks
text_blocks = [b.text for b in response.content if b.type == "text"]
text = "\n".join(text_blocks)

Pitfall 5: Ignoring Stop Reason

# WRONG — assuming response is always complete
text = msg.content[0].text  # Might be truncated!

# CORRECT — check stop_reason
if msg.stop_reason == "max_tokens":
    print("WARNING: Response was truncated. Increase max_tokens.")
elif msg.stop_reason == "tool_use":
    print("Claude wants to call a tool — process tool_use blocks")
elif msg.stop_reason == "end_turn":
    print("Complete response")
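
One way to make this check routine is a small helper that every call site shares. The function below is an illustrative sketch (names invented) covering the stop reasons shown above:

```python
def check_stop_reason(stop_reason: str) -> str:
    """Map a Messages API stop_reason to an application-level action."""
    if stop_reason == "max_tokens":
        # Truncated output is rarely usable; fail loudly so callers can retry.
        raise RuntimeError("Response truncated; retry with a larger max_tokens")
    if stop_reason == "tool_use":
        return "run_tools"  # process tool_use blocks, then continue the turn
    if stop_reason == "end_turn":
        return "done"
    return "other"  # e.g. "stop_sequence"; handle per application
```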

Pitfall 6: Not Handling tool_use_id Properly

# WRONG — fabricating tool_use_id
tool_results = [{"type": "tool_result", "tool_use_id": "some-id", "content": "..."}]

# CORRECT — use the exact ID from Claude's response
for block in response.content:
    if block.type == "tool_use":
        result = execute_tool(block.name, block.input)
        tool_results.append({
            "type": "tool_result",
            "tool_use_id": block.id,  # Must match exactly
            "content": result
        })
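
The loop above can be factored into a helper that echoes Claude's IDs verbatim. This sketch uses plain dicts for the content blocks (the SDK returns typed objects with the same fields), and `execute_tool` stands in for your own dispatcher:

```python
def collect_tool_results(content_blocks, execute_tool):
    """Build tool_result blocks keyed by the tool_use IDs Claude generated."""
    results = []
    for block in content_blocks:
        if block.get("type") == "tool_use":
            output = execute_tool(block["name"], block["input"])
            results.append({
                "type": "tool_result",
                "tool_use_id": block["id"],  # echoed verbatim, never fabricated
                "content": output,
            })
    return results
```

The resulting list goes back to the API as the content of a user-role message on the next request.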

Pitfall 7: Hardcoding Model IDs Without Versioning

# RISKY — model aliases may change behavior
model = "claude-3-5-sonnet"  # Alias, might point to different version

# BETTER — use dated version for reproducibility
model = "claude-sonnet-4-20250514"  # Pinned version
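
A common pattern is to pin the dated ID in one place so every call site agrees. A minimal sketch (the environment-variable name is an arbitrary choice for this example, not an SDK convention):

```python
import os

# Single source of truth for the model ID; override via env var for experiments.
MODEL = os.environ.get("ANTHROPIC_MODEL", "claude-sonnet-4-20250514")
```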

Pitfall 8: Not Using SDK Auto-Retry

# UNNECESSARY — writing custom retry logic for 429/5xx
for attempt in range(3):
    try:
        msg = client.messages.create(...)
        break
    except Exception:
        time.sleep(2 ** attempt)

# BETTER — SDK handles this automatically
client = anthropic.Anthropic(max_retries=5)  # Built-in exponential backoff
msg = client.messages.create(...)  # Auto-retries 429 and 5xx

Pitfall 9: Inflated max_tokens

# WASTEFUL — setting max_tokens higher than needed
# You only pay for tokens actually generated, but an inflated cap
# raises worst-case latency and removes a guard against runaway output
msg = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=200000,  # Way more than needed for a classification
    messages=[{"role": "user", "content": "Classify: positive or negative?"}]
)

# BETTER — right-size for the task
msg = client.messages.create(
    model="claude-haiku-4-20250514",  # Use Haiku for classification
    max_tokens=16,  # Only need one word
    messages=[{"role": "user", "content": "Classify: positive or negative?"}]
)

Pitfall 10: No Cost Tracking

# Every response includes usage data — track it
msg = client.messages.create(...)
# Example Sonnet rates ($3/MTok input, $15/MTok output); check current pricing
cost = (msg.usage.input_tokens * 3.00 + msg.usage.output_tokens * 15.00) / 1_000_000
# Log cost per request to catch runaway spend early
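
A tiny accounting helper makes this systematic. The per-million-token prices below are illustrative assumptions for this sketch; always check current pricing for the models you actually use:

```python
# (input $/MTok, output $/MTok); example figures, not authoritative pricing
PRICES = {
    "claude-sonnet-4-20250514": (3.00, 15.00),
    "claude-haiku-4-20250514": (0.80, 4.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request, given the counts from msg.usage."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000
```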

Quick Reference: Anthropic vs OpenAI Differences

Feature        | OpenAI                      | Anthropic
---------------|-----------------------------|--------------------
max_tokens     | Optional                    | Required
System prompt  | In messages array           | system parameter
Response text  | .choices[0].message.content | .content[0].text
Default import | Named export                | Default export
Auto-retry     | No                          | Yes (configurable)
Streaming      | Yields chunks               | SSE events

Updated: 2026-04-26