# Tips & Patterns
Practical patterns for common development scenarios with Claude Code.
## Context Management
The context window (200K tokens by default, 1M on Max/Team/Enterprise plans) is a finite resource. Managing it well directly impacts output quality.
### Commands

| Command | When to Use |
|---|---|
| `/clear` | Between unrelated tasks — always |
| `/compact` | Mid-session when context is large but you need to keep working |
| `/context` | Check current token usage |
### Context Rot
As context fills, Claude starts ignoring rules from earlier in the session (CLAUDE.md, earlier decisions, corrected mistakes). Watch for:
- Inconsistent code style within a session
- Ignoring CLAUDE.md conventions
- Repeating already-corrected mistakes
Fix: `/compact` to refresh, or `/clear` to start fresh.
### Context Budget
| Activity | Approximate Tokens |
|---|---|
| Session baseline (CLAUDE.md, system prompt) | ~20K |
| Reading a typical source file | ~1-5K |
| Git diff / test output | ~2-20K |
- Simple bug fix: 40-60K — plenty of room
- Medium feature: 80-120K — use `/compact` midway
- Complex feature: split into multiple `/clear` sessions, one per phase
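To gauge whether a set of files fits the remaining budget before asking Claude to read them, a rough sketch using the common ~4 characters per token heuristic can help (this is an approximation; actual tokenizer counts vary by content):

```python
def estimate_tokens(chars: int) -> int:
    """Rough token count from character count (~4 chars/token heuristic)."""
    return chars // 4

def remaining_budget(files_chars: list[int], baseline: int = 20_000,
                     window: int = 200_000) -> int:
    """Tokens left after the session baseline and reading the given files."""
    return window - baseline - sum(estimate_tokens(c) for c in files_chars)

# Reading three ~20KB source files from a fresh 200K-token session:
print(remaining_budget([20_000, 20_000, 20_000]))  # 165000
```

If the result dips below the range needed for the task tier above, plan a `/compact` or split the work across sessions.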
### Anti-Patterns

| Anti-Pattern | Fix |
|---|---|
| Never clearing between tasks | `/clear` before each new ticket |
| Reading entire directories "just in case" | Be specific about which files to read |
| Letting test output accumulate | Focus on failed tests, not full output |
| Re-reading files Claude already has | Trust current-session memory |
## Bug Fix Workflow

### Full Workflow (7 Steps)

1. Understand — read the issue; summarize the bug, expected, and actual behavior
2. Reproduce — find the relevant code and explain how the bug occurs
3. Explore — use a subagent to find all callers and check whether the bug affects other code
4. Plan — think through the minimal fix that doesn't change unrelated behavior
5. Test first — write a test that reproduces the bug (it should fail)
6. Fix — implement the fix, run the full test suite
7. Commit — `/sq-dev:make-commit` (generates `fix(scope): description`); include `closes #N`
### Quick Fix Shortcut

For obvious, low-risk bugs:

```
/clear
There's a bug in src/auth/router.py — invalid emails cause a 500 error.
Read the file, find the issue, write a regression test, fix it, and commit.
```
### Tips
- Always write a regression test — prevent the bug from recurring
- Minimal changes — fix only the bug, don't refactor surrounding code
- Check related code — the same bug pattern might exist elsewhere
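For the email bug in the shortcut above, a regression test might look like the sketch below. The `validate_email` helper and its `ValueError` behavior are hypothetical stand-ins for whatever your actual handler in `src/auth/router.py` does:

```python
import re

# Hypothetical stand-in for the real handler logic in src/auth/router.py.
def validate_email(email: str) -> str:
    """Return a normalized address, or raise ValueError for bad input
    (so the router can return a 400 instead of crashing with a 500)."""
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        raise ValueError(f"invalid email: {email!r}")
    return email.lower()

# Regression test: written first, it fails until the fix lands.
def test_invalid_email_raises_clean_error():
    try:
        validate_email("not-an-email")
    except ValueError:
        return
    raise AssertionError("expected ValueError for invalid email")
```

Keeping the test in the suite ensures the 500 never silently comes back.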
## Feature Implementation

### Complexity Tiers
| Tier | Scope | Approach |
|---|---|---|
| SIMPLE (< 4 hours) | Follow existing patterns exactly | Fast-track — explore, code, test, commit |
| MEDIUM (1-3 days) | Some variation from patterns | Full Explore → Plan → Code → Commit workflow |
| COMPLEX (3+ days) | Architectural changes | Split into multiple PRs, consider ADR, writer/reviewer pattern |
### Step by Step
- Explore existing patterns in the relevant domain
- Plan with high effort level — file-by-file changes, tests to write
- Implement one step at a time, test after each
- Integration test — run full suite after all steps
- Create PR — link to the issue, summarize what/why/how-to-test
### Tips
- Keep PRs under 400 lines — split large features into incremental PRs
- Start with the data model — schema first, then service, then API
- Don't add what's not asked — avoid gold-plating
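"Schema first, then service, then API" can be sketched as three thin layers. All names here (`Invoice`, `InvoiceService`, `post_invoice`) are illustrative, not from any real codebase:

```python
from dataclasses import dataclass

# 1. Data model first (illustrative schema).
@dataclass
class Invoice:
    id: int
    amount_cents: int
    paid: bool = False

# 2. Service layer next: business rules, no transport concerns.
class InvoiceService:
    def __init__(self) -> None:
        self._store: dict[int, Invoice] = {}

    def create(self, id: int, amount_cents: int) -> Invoice:
        inv = Invoice(id, amount_cents)
        self._store[id] = inv
        return inv

    def mark_paid(self, id: int) -> Invoice:
        inv = self._store[id]
        inv.paid = True
        return inv

# 3. API layer last: a thin handler that delegates to the service.
def post_invoice(service: InvoiceService, payload: dict) -> dict:
    inv = service.create(payload["id"], payload["amount_cents"])
    return {"id": inv.id, "paid": inv.paid}
```

Building in this order keeps each step independently testable, which fits the "implement one step at a time, test after each" workflow above.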
## Code Review

Use `/sq-dev:full-review` to run the automated review pipeline before committing. For the full pipeline docs, see Code Review Pipeline.
AI-generated code requires different review attention. Claude excels at syntax and patterns but can miss business logic, hallucinate APIs, and ignore constraints from long conversations.
### AI Code Review Checklist
- Read line-by-line — don't trust "looks right"
- Check for hallucinated APIs or methods
- Verify constraints from the ticket weren't ignored
- Look for deleted tests (fixed by removing, not updating)
- No type casting to silence errors (`as any`, `as unknown as Type`)
- All types explicitly defined
- Follows existing naming and error handling patterns
- Input validation on user data, no secrets in code
- Tests cover happy path and edge cases
### Common AI-Generated Issues
| Issue | What to Look For |
|---|---|
| Hallucinated APIs | Methods that don't exist in the library version |
| Type casting | `as any`, `as unknown as Type` — hiding real errors |
| Deleted tests | Tests removed instead of updated |
| Over-engineering | Unnecessary abstractions, unused configurability |
| Stale context | References to patterns from early in a long session |
| Import issues | Importing from wrong paths or non-existent modules |
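A lightweight pre-review check for the "type casting" row is to scan the diff's added lines for known type-checker escape hatches. The patterns below are illustrative (TypeScript casts plus Python's `# type: ignore`); extend the list for your stack:

```python
import re

# Illustrative escape-hatch patterns; add more for your languages.
SUSPECT_PATTERNS = [
    r"\bas any\b",
    r"\bas unknown as\b",
    r"#\s*type:\s*ignore",
]

def flag_type_silencing(diff: str) -> list[str]:
    """Return added lines in a unified diff that silence the type checker."""
    flagged = []
    for line in diff.splitlines():
        # Added lines start with "+"; skip the "+++" file header.
        if line.startswith("+") and not line.startswith("+++"):
            if any(re.search(p, line) for p in SUSPECT_PATTERNS):
                flagged.append(line)
    return flagged

diff = """\
+++ b/src/payments/client.ts
+ user = load_user(raw)  # type: ignore
+ const x = payload as any;
- const x = parse(payload);
"""
print(flag_type_silencing(diff))  # flags both added escape-hatch lines
```

A hit isn't automatically wrong, but each one deserves a human look at why the real type didn't fit.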
### Tips
- Don't rubber-stamp — AI code needs the same rigor as human code
- Check the diff, not the PR description — Claude's summaries can be optimistic
- Verify imports — hallucinated imports are the #1 AI mistake
- Run tests yourself — don't trust "all tests pass" without verification
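"Verify imports" can be partly automated: the standard library's `importlib.util.find_spec` reports whether a module actually resolves in the current environment. Note it only checks that the module exists; it won't catch a hallucinated function on a real module:

```python
import importlib.util

def module_exists(name: str) -> bool:
    """True if the module resolves in the current environment."""
    try:
        return importlib.util.find_spec(name) is not None
    except (ModuleNotFoundError, ValueError):
        # find_spec raises for submodules of missing packages, etc.
        return False

print(module_exists("json"))                          # stdlib, resolves
print(module_exists("definitely_not_a_real_module"))  # does not resolve
```

Run it over the import names in a diff before spending review time on the logic.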
## Further Reading
- Official docs: Best Practices
- Explore → Plan → Code → Commit — the core workflow
- Daily Workflow — day-to-day development loop