Extracts only the essentials from your changes for precise reviews.
Claude, GPT, Gemini — pick the model that fits.
One-click install, no API key required
Analyze code changes and receive quality improvement suggestions with a simple command
Cutting-edge AI technology meets developer-friendly workflows to deliver a completely new code review experience
Register Selvage as an MCP server in Cursor, Claude Code, and other clients, then request code reviews in natural language
Host agents perform code reviews with their own LLM using structured context, so no API key is required
Leverage the latest LLMs, including OpenAI GPT-5, Anthropic Claude, and Google Gemini
Supports analysis of staged changes, unstaged changes, and specific commits or branches
Provides optimized context based on Tree-sitter AST analysis
Stable large-scale code review support even when exceeding model context limits
Detects bugs and logic errors, suggests code quality and readability improvements
Free to use and modify under Apache-2.0 license
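The large-scale review support mentioned above (handling diffs that exceed a model's context limit) can be sketched generically. This is an illustrative chunking approach, not Selvage's actual implementation, and the four-characters-per-token estimate is a rough heuristic:

```python
# Illustrative sketch: split a large diff into chunks that fit within a
# model's context budget. NOT Selvage's actual implementation; the
# 4-chars-per-token estimate is a rough heuristic.

def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English/code."""
    return max(1, len(text) // 4)

def chunk_hunks(hunks: list[str], budget: int) -> list[list[str]]:
    """Group diff hunks into chunks whose estimated token count stays
    under budget. A single oversized hunk becomes its own chunk."""
    chunks, current, used = [], [], 0
    for hunk in hunks:
        cost = estimate_tokens(hunk)
        if current and used + cost > budget:
            chunks.append(current)
            current, used = [], 0
        current.append(hunk)
        used += cost
    if current:
        chunks.append(current)
    return chunks

hunks = ["+" + "x" * 390 for _ in range(5)]   # five ~97-token hunks
chunks = chunk_hunks(hunks, budget=200)
print(len(chunks))  # → 3
```

Each chunk can then be reviewed independently and the findings merged, which is the usual way tools stay stable past a model's context window.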
Selvage uses AST to precisely extract only the code blocks related to changed lines, ensuring both cost efficiency and review quality.
Extracts only the minimal function/class blocks containing changed lines and related dependencies (e.g., import statements)
Significantly reduces token usage by sending only necessary context instead of entire files
Maintains high review accuracy through AST-based precise code structure understanding
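The Smart Context idea above can be sketched in a few lines. This uses Python's stdlib `ast` module as a stand-in for Tree-sitter and is purely illustrative, not Selvage's actual implementation:

```python
# Illustrative sketch of Smart Context-style extraction: given the line
# numbers a diff touched, keep only the enclosing top-level function/class
# blocks plus import statements. Uses Python's stdlib ast module as a
# stand-in for Tree-sitter; NOT Selvage's actual implementation.
import ast

def smart_context(source: str, changed_lines: set[int]) -> str:
    tree = ast.parse(source)
    lines = source.splitlines()
    keep: set[int] = set()
    for node in tree.body:
        start, end = node.lineno, node.end_lineno
        is_import = isinstance(node, (ast.Import, ast.ImportFrom))
        touched = any(start <= n <= end for n in changed_lines)
        if is_import or touched:
            keep.update(range(start, end + 1))
    return "\n".join(lines[i - 1] for i in sorted(keep))

src = """import os

def untouched():
    return 1

def changed():
    return os.getcwd()
"""
# Line 7 was changed: only the import and changed() survive.
print(smart_context(src, {7}))
```

Sending only the surviving blocks instead of the whole file is what drives the token savings, while keeping enough structure for the model to review the change accurately.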
Selvage analyzes file size and change scope to automatically select the most efficient review method:
Auto Optimization: The optimal analysis method for each situation is automatically applied without additional configuration.
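A selection heuristic of this shape could look like the following. The thresholds and method names here are invented for the sketch and are not Selvage's actual logic:

```python
# Illustrative heuristic for auto-selecting a review method from file size
# and change scope. Thresholds and method names are invented for this
# sketch; NOT Selvage's actual selection logic.
def select_method(file_lines: int, changed_lines: int,
                  context_limit: int = 100_000) -> str:
    if file_lines <= 300:
        return "full_file"       # small file: cheapest to send whole
    if changed_lines / file_lines < 0.2:
        return "smart_context"   # few changes in a big file: AST extraction
    # heavily-changed large file: chunk only if it exceeds the context limit
    return "chunked" if file_lines > context_limit else "full_file"

print(select_method(file_lines=5000, changed_lines=40))  # → smart_context
```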
Supports general context extraction for major programming languages
Delivers strong code review quality across major programming languages through general context extraction.
Smart Context supported languages are continuously expanding.
Easily use various cutting-edge AI models with a single API key through OpenRouter integration
Access all AI models with a single API key through OpenRouter. Individual Provider API keys are also supported.
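Because OpenRouter exposes an OpenAI-compatible API, one key really does cover all the models below. The sketch builds such a request without sending it; the model id is just an example:

```python
# Illustrative sketch: build an OpenAI-compatible chat-completions request
# against OpenRouter's API. The request is constructed but not sent; the
# model id is an example, and the key is a placeholder.
import json
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_review_request(api_key: str, model: str,
                         diff: str) -> urllib.request.Request:
    body = {
        # e.g. "anthropic/claude-sonnet-4" in OpenRouter's model namespace
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a code reviewer."},
            {"role": "user", "content": f"Review this diff:\n{diff}"},
        ],
    }
    return urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",  # one key, many models
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_review_request("placeholder-key", "anthropic/claude-sonnet-4",
                           "+ print('hi')")
print(req.full_url)  # → https://openrouter.ai/api/v1/chat/completions
```

Switching providers is then a one-string change to the `model` field rather than a new client, key, and SDK per vendor.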
High-reasoning Codex model (400K context)
Most capable model with extended thinking (200K context)
High-quality reasoning model (200K context)
Extended thinking process support (200K context)
Hybrid reasoning model optimized for coding (200K context)
Large context and advanced reasoning (1M+ tokens)
Optimized for response speed and cost efficiency (1M+ tokens)
480B-parameter MoE coding-specialized model (1M+ tokens)
1T-parameter MoE large-scale reasoning model (128K tokens)
Advanced reasoning with 16K thinking tokens
Thinking-enabled reasoning model (16K thinking tokens)