AI-powered code review tool using Claude to analyze PRs for code quality and security issues. Uses a unified multi-agent approach for comprehensive analysis in a single pass.
```
claudecode/
├── github_action_audit.py   # Main orchestrator - entry point
├── prompts.py               # Review prompt templates
├── findings_filter.py       # False positive filtering
├── claude_api_client.py     # Claude API client
├── json_parser.py           # JSON extraction utilities
├── constants.py             # Configuration constants
└── evals/                   # Evaluation framework
```
- `get_unified_review_prompt()` - Combined code quality + security review
- Prompts require JSON-only output with specific schema
- Always include confidence scores (0.7-1.0 threshold)
- Code Quality: correctness, reliability, performance, maintainability, testing
- Security: injection, auth, crypto, data exposure
- HIGH: Production bugs, data loss, exploitable vulnerabilities
- MEDIUM: Limited scope, specific conditions required
- LOW: Minor issues, use sparingly
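As a minimal sketch of how the confidence threshold above might be applied, the snippet below filters parsed findings by confidence. The field names (`confidence`, `severity`, `description`) and the helper itself are hypothetical, not the tool's actual schema:

```python
# Hypothetical sketch: drop findings below the 0.7 confidence threshold.
# Field names are assumptions for illustration only.
CONFIDENCE_THRESHOLD = 0.7

def filter_by_confidence(findings):
    """Keep only findings at or above the confidence threshold."""
    return [f for f in findings if f.get("confidence", 0.0) >= CONFIDENCE_THRESHOLD]

findings = [
    {"severity": "HIGH", "confidence": 0.9, "description": "SQL injection"},
    {"severity": "LOW", "confidence": 0.4, "description": "Style nit"},
]
print(filter_by_confidence(findings))  # only the HIGH finding survives
```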
- Hard exclusion rules (regex patterns in `HardExclusionRules`)
- Claude API validation (optional, uses `claude_api_client.py`)
- Directory exclusion filtering
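The regex-based hard exclusions above can be sketched as follows. The patterns here are invented examples; the real rules live in `HardExclusionRules` in `findings_filter.py` and may differ:

```python
import re

# Hypothetical exclusion patterns illustrating regex-based filtering.
EXCLUSION_PATTERNS = [
    re.compile(r"\bTODO\b"),
    re.compile(r"missing docstring", re.IGNORECASE),
]

def is_excluded(description: str) -> bool:
    """Return True if any exclusion pattern matches the finding text."""
    return any(p.search(description) for p in EXCLUSION_PATTERNS)

print(is_excluded("Missing docstring on helper"))       # True
print(is_excluded("Unvalidated user input in SQL query"))  # False
```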
```bash
# Python tests
pytest claudecode -v  # Run all tests (177 passing)

# JavaScript tests
~/.bun/bin/bun test scripts/comment-pr-findings.bun.test.js
```

- Python 3.9+
- Type hints encouraged
- Comprehensive docstrings
- Tests required for new functionality
| File | Purpose |
|---|---|
| `action.yml` | GitHub Action definition |
| `.claude/commands/review.md` | Slash command for Claude Code |
| `docs/` | Customization documentation |
- Update category lists in `prompts.py`
- Add exclusion patterns in `findings_filter.py` if needed
- Add tests in `test_prompts.py`
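The steps above might look like the following sketch. The list name and entries are assumptions; the actual structure in `prompts.py` may be different:

```python
# Hypothetical category list as it might appear in prompts.py.
QUALITY_CATEGORIES = [
    "correctness",
    "reliability",
    "performance",
    "maintainability",
    "testing",
]

# Adding a new review category:
QUALITY_CATEGORIES.append("accessibility")

# A matching test, as might go in test_prompts.py:
def test_new_category_present():
    assert "accessibility" in QUALITY_CATEGORIES
```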
- Edit `HardExclusionRules` in `findings_filter.py`
- Add tests in `test_hard_exclusion_rules.py`
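A sketch of that workflow, assuming a hypothetical shape for the rule class (the real `HardExclusionRules` in `findings_filter.py` may be structured differently):

```python
import re

# Hypothetical stand-in for HardExclusionRules.
class HardExclusionRules:
    PATTERNS = [re.compile(r"\bTODO\b")]

    @classmethod
    def matches(cls, text: str) -> bool:
        """Return True if any registered pattern matches the text."""
        return any(p.search(text) for p in cls.PATTERNS)

# Registering a new rule:
HardExclusionRules.PATTERNS.append(re.compile(r"\bFIXME\b"))

# A matching test, as might go in test_hard_exclusion_rules.py:
def test_fixme_excluded():
    assert HardExclusionRules.matches("FIXME: handle nulls")
```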