Tips & Patterns

Practical patterns for common development scenarios with Claude Code.


Context Management

The context window (200K tokens by default, 1M on Max/Team/Enterprise plans) is a finite resource. Managing it well directly impacts output quality.

Commands

| Command | When to Use |
| --- | --- |
| /clear | Between unrelated tasks — always |
| /compact | Mid-session when context is large but you need to keep working |
| /context | Check current token usage |

Context Rot

As context fills, Claude starts ignoring rules from earlier in the session (CLAUDE.md, earlier decisions, corrected mistakes). Watch for:

  • Inconsistent code style within a session
  • Ignoring CLAUDE.md conventions
  • Repeating already-corrected mistakes

Fix: /compact to refresh, or /clear to start fresh.

Context Budget

| Activity | Approximate Tokens |
| --- | --- |
| Session baseline (CLAUDE.md, system prompt) | ~20K |
| Reading a typical source file | ~1-5K |
| Git diff / test output | ~2-20K |
  • Simple bug fix: 40-60K — plenty of room
  • Medium feature: 80-120K — use /compact midway
  • Complex feature: Split into multiple /clear sessions per phase
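
The budget figures above can be sanity-checked before pasting a large file into a session. A minimal sketch, assuming the common rule of thumb of roughly 4 characters per token for English text and code — real tokenizers vary, so treat the result as a ballpark only (`estimate_tokens` and `fits_budget` are illustrative names, not part of any tool):

```python
# Rough token estimate, assuming the ~4 chars/token rule of thumb.
# Real tokenizers differ; this is a ballpark, not a measurement.

def estimate_tokens(text: str) -> int:
    """Approximate token count of a string."""
    return max(1, len(text) // 4)

def fits_budget(text: str, budget_tokens: int = 5_000) -> bool:
    """Check whether text fits within a per-read token budget."""
    return estimate_tokens(text) <= budget_tokens

# A 4,000-character file is roughly 1K tokens — well within a
# "reading a typical source file" budget of ~1-5K.
source = "def handler(request):\n    return request.json()\n" * 80
print(estimate_tokens(source), fits_budget(source))
```

Running an estimate like this before asking Claude to read a directory makes the "be specific about which files to read" advice below concrete.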

Anti-Patterns

| Anti-Pattern | Fix |
| --- | --- |
| Never clearing between tasks | /clear before each new ticket |
| Reading entire directories "just in case" | Be specific about which files to read |
| Letting test output accumulate | Focus on failed tests, not full output |
| Re-reading files Claude already has | Trust current-session memory |

Bug Fix Workflow

Full Workflow (7 Steps)

  1. Understand — Read the issue, summarize bug/expected/actual behavior
  2. Reproduce — Find the relevant code, explain how the bug occurs
  3. Explore — Use a subagent to find all callers, check if the bug affects other code
  4. Plan — Think about the minimal fix without changing unrelated behavior
  5. Test first — Write a test that reproduces the bug (should fail)
  6. Fix — Implement the fix, run full test suite
  7. Commit — run /sq-dev:make-commit (generates fix(scope): description) and include closes #N
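
Step 5 (test first) can be sketched as a regression test that encodes the bug before the fix exists. Everything here is hypothetical — `validate_email` and `InvalidEmailError` are illustrative stand-ins for your project's own code, not real APIs:

```python
# Test-first sketch: capture the bug in a test, watch it fail,
# then implement the fix until it passes. All names are hypothetical.

import re

class InvalidEmailError(ValueError):
    """Typed error the fix raises instead of letting a 500 surface."""

def validate_email(email: str) -> str:
    # The eventual fix: reject malformed input explicitly rather than
    # letting a downstream exception crash the request handler.
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        raise InvalidEmailError(f"invalid email: {email!r}")
    return email

def test_invalid_email_raises_typed_error():
    # Written first, before the fix — it should fail until the
    # validation above exists.
    try:
        validate_email("not-an-email")
    except InvalidEmailError:
        pass
    else:
        raise AssertionError("expected InvalidEmailError")
```

Because the test exists before the fix, it doubles as the regression test the tips below call for.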

Quick Fix Shortcut

For obvious, low-risk bugs:

```
/clear
There's a bug in src/auth/router.py — invalid emails cause a 500 error.
Read the file, find the issue, write a regression test, fix it, and commit.
```

Tips

  • Always write a regression test — prevent the bug from recurring
  • Minimal changes — fix only the bug, don't refactor surrounding code
  • Check related code — the same bug pattern might exist elsewhere

Feature Implementation

Complexity Tiers

| Tier | Scope | Approach |
| --- | --- | --- |
| SIMPLE (< 4 hours) | Follow existing patterns exactly | Fast-track — explore, code, test, commit |
| MEDIUM (1-3 days) | Some variation from patterns | Full Explore → Plan → Code → Commit workflow |
| COMPLEX (3+ days) | Architectural changes | Split into multiple PRs, consider an ADR, writer/reviewer pattern |

Step by Step

  1. Explore existing patterns in the relevant domain
  2. Plan with high effort level — file-by-file changes, tests to write
  3. Implement one step at a time, test after each
  4. Integration test — run full suite after all steps
  5. Create PR — link to the issue, summarize what/why/how-to-test

Tips

  • Keep PRs under 400 lines — split large features into incremental PRs
  • Start with the data model — schema first, then service, then API
  • Don't add what's not asked — avoid gold-plating
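
The "start with the data model" tip can be sketched as: schema first, then a service layer, then a thin API handler. This is an illustrative ordering, not a framework prescription — the dataclass model, in-memory store, and all names below are assumptions for the example:

```python
# Schema-first sketch: model → service → API, built in that order.
# All names and the in-memory store are illustrative assumptions.

from dataclasses import dataclass
from typing import Dict

# 1. Data model first — the schema anchors everything downstream.
@dataclass
class Project:
    id: int
    name: str
    archived: bool = False

# 2. Service layer next — business rules written against the model only.
class ProjectService:
    def __init__(self) -> None:
        self._store: Dict[int, Project] = {}
        self._next_id = 1

    def create(self, name: str) -> Project:
        project = Project(id=self._next_id, name=name)
        self._store[project.id] = project
        self._next_id += 1
        return project

    def archive(self, project_id: int) -> Project:
        project = self._store[project_id]
        project.archived = True
        return project

# 3. API last — a thin handler that only translates to/from the service.
def handle_create_project(service: ProjectService, payload: dict) -> dict:
    project = service.create(payload["name"])
    return {"id": project.id, "name": project.name}
```

Building in this order keeps each incremental PR small: the model PR, the service PR, and the API PR can land and be reviewed separately.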

Code Review

Use /sq-dev:full-review to run the automated review pipeline before committing. For the full pipeline docs, see Code Review Pipeline.

AI-generated code requires different review attention. Claude excels at syntax and patterns but can miss business logic, hallucinate APIs, and ignore constraints from long conversations.

AI Code Review Checklist

  • Read line-by-line — don't trust "looks right"
  • Check for hallucinated APIs or methods
  • Verify constraints from the ticket weren't ignored
  • Look for deleted tests (fixed by removing, not updating)
  • No type casting to silence errors (as any, as unknown as Type)
  • All types explicitly defined
  • Follows existing naming and error handling patterns
  • Input validation on user data, no secrets in code
  • Tests cover happy path and edge cases

Common AI-Generated Issues

| Issue | What to Look For |
| --- | --- |
| Hallucinated APIs | Methods that don't exist in the library version |
| Type casting | as any, as unknown as Type — hiding real errors |
| Deleted tests | Tests removed instead of updated |
| Over-engineering | Unnecessary abstractions, unused configurability |
| Stale context | References to patterns from early in a long session |
| Import issues | Importing from wrong paths or non-existent modules |
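
Some of these issues are mechanically detectable before a human reads the diff. A minimal sketch of a pre-review scan — the pattern list is an illustrative starting point covering type-cast escapes and deleted tests, not an exhaustive or authoritative rule set:

```python
# Sketch: scan a unified diff for review red flags. The patterns
# below are illustrative assumptions, not a complete detector.

import re

RED_FLAGS = {
    "type cast": re.compile(r"as any|as unknown as|#\s*type:\s*ignore"),
    # Removed lines ("-" prefix) that delete a test definition.
    "deleted test": re.compile(r"^-.*def test_|^-.*\bit\(", re.MULTILINE),
}

def scan_diff(diff: str) -> list:
    """Return the red-flag labels whose pattern appears in the diff."""
    return [label for label, rx in RED_FLAGS.items() if rx.search(diff)]

diff = """\
-def test_rejects_invalid_email():
-    ...
+result = value as any
"""
print(scan_diff(diff))
```

A scan like this only flags candidates; each hit still needs the line-by-line human review described above.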

Tips

  • Don't rubber-stamp — AI code needs the same rigor as human code
  • Check the diff, not the PR description — Claude's summaries can be optimistic
  • Verify imports — hallucinated imports are among the most common AI mistakes
  • Run tests yourself — don't trust "all tests pass" without verification

Further Reading