diff --git a/.agents/README-code-reviewer-simone.md b/.agents/README-code-reviewer-simone.md new file mode 100644 index 0000000000..ec81db6f9b --- /dev/null +++ b/.agents/README-code-reviewer-simone.md @@ -0,0 +1,310 @@ +# Code Reviewer (Simone Style) Agent + +A comprehensive code review agent that implements the Simone methodology with a rigorous 7-step workflow. This agent performs thorough code reviews with zero tolerance for specification deviations. + +## Overview + +The Code Reviewer agent follows the exact methodology from the [Simone project](https://github.com/Helmi/claude-simone), providing: + +- **7-step systematic workflow** for comprehensive code reviews +- **Zero-tolerance compliance checking** against specifications +- **Automated quality checks** (linting, type-checking, formatting) +- **Multi-language support** (Python, JavaScript/TypeScript, Rust, Go, Ruby, PHP) +- **Severity scoring** (1-10) for all identified issues +- **PASS/FAIL verdicts** with detailed findings and recommendations + +## Usage + +### Basic Usage + +```bash +# Review latest commit +codebuff --agent code-reviewer-simone "Review my latest changes" + +# Review specific commit +codebuff --agent code-reviewer-simone "Review commit abc123" + +# Review specific branch +codebuff --agent code-reviewer-simone "Review branch feature/new-api" + +# Review specific files +codebuff --agent code-reviewer-simone "Review changes in src/api/" +``` + +### SDK Usage + +```typescript +import { CodebuffClient } from '@codebuff/sdk' + +const client = new CodebuffClient({ + apiKey: 'your-api-key', + cwd: '/path/to/your/project' +}) + +const result = await client.run({ + agent: 'code-reviewer-simone', + prompt: 'Review my latest changes', + params: { + strictMode: true, // Zero-tolerance mode (default: true) + autoFix: false // Auto-fix formatting issues (default: false) + } +}) +``` + +## 7-Step Workflow + +### 1. 
Analyze Scope +- Validates the review scope (commit, branch, files) +- Ensures meaningful changes exist to review +- Checks git repository status + +### 2. Find Code Changes +- Uses `git diff` to identify all changes within scope +- Lists modified files for targeted analysis +- Filters out irrelevant changes (e.g., generated files) + +### 3. Run Automated Quality Checks +Automatically detects and runs project-specific tools: + +**Python Projects:** +- `ruff check .` (if ruff.toml or pyproject.toml with ruff) +- `mypy .` (if mypy.ini or mypy in configs) +- `black --check .` (if black configured) +- `flake8` (if .flake8 or setup.cfg) + +**JavaScript/TypeScript Projects:** +- `npm run lint` or `npx eslint .` +- `npm run type-check` or `npx tsc --noEmit` +- `npm run format:check` or `npx prettier --check .` + +**Rust Projects:** +- `cargo fmt --check` +- `cargo clippy -- -D warnings` + +**Go Projects:** +- `go fmt ./...` +- `go vet ./...` + +**Other Languages:** +- Ruby: `rubocop` (if .rubocop.yml) +- PHP: `phpcs` or `php-cs-fixer` (if configured) + +### 4. Find Specifications and Documentation +Searches for project documentation in order of priority: + +1. **Simone Structure** (`.simone/` directory): + - `00_PROJECT_MANIFEST.md` - Current sprint and milestone context + - `03_SPRINTS/` - Sprint-specific deliverables and tasks + - `02_REQUIREMENTS/` - Project requirements and specifications + - `01_PROJECT_DOCS/` - General project documentation + +2. **Standard Documentation:** + - `README.md`, `SPEC.md`, `DESIGN.md` + - API documentation files + - Configuration documentation + +### 5. 
Compare Changes Against Specifications +Performs deep analysis comparing code changes against documentation: + +**Data Models/Schemas:** +- Field names, types, constraints +- Database relationships +- Schema migrations + +**APIs/Interfaces:** +- Endpoint definitions +- Request/response parameters +- Status codes and error handling +- Authentication requirements + +**Configuration:** +- Environment variables +- Settings and defaults +- Required vs optional parameters + +**Behavior:** +- Business rules and logic +- Side effects and error handling +- Workflow compliance + +**Quality:** +- Naming conventions +- Code formatting +- Test coverage +- Documentation completeness + +### 6. Analyze Differences +- Assigns severity scores (1-10) to all issues +- Categorizes issues by type (data_model, api, config, behavior, quality) +- Identifies critical issues that mandate FAIL verdict + +**Severity Scale:** +- **1-3:** Minor issues (style, documentation) +- **4-6:** Moderate issues (missing documentation, minor deviations) +- **7-8:** Major issues (undocumented changes, spec violations) +- **9-10:** Critical issues (breaking changes, security issues) + +### 7. 
Provide PASS/FAIL Verdict +Generates comprehensive verdict with: + +- **Result:** PASS or FAIL decision +- **Scope:** Description of what was reviewed +- **Findings:** Detailed list of all issues with severity scores +- **Summary:** High-level overview of problems found +- **Recommendation:** Specific next steps for resolution + +## Configuration Options + +### Input Parameters + +```typescript +{ + scope: string, // Review scope (default: "HEAD~1") + params: { + strictMode: boolean, // Zero-tolerance mode (default: true) + autoFix: boolean // Auto-fix safe issues (default: false) + } +} +``` + +### Strict Mode Behavior + +**Enabled (default):** +- Any deviation from specifications = FAIL +- Zero tolerance for undocumented changes +- Even minor issues cause FAIL verdict + +**Disabled:** +- Only critical issues (severity ≥7) cause FAIL +- Allows minor deviations with warnings +- More permissive for iterative development + +### Auto-Fix Capability + +When enabled, automatically fixes: +- Code formatting issues (safe changes only) +- Import sorting +- Trailing whitespace +- Basic linting violations + +**Never auto-fixes:** +- Logic changes +- API modifications +- Configuration changes +- Anything that could affect behavior + +## Output Format + +The agent outputs results in a structured format: + +``` +[2025-01-11 14:30]: Code Review - FAIL + +Result: **FAIL** + +Scope: git diff HEAD~1 (3 files changed: src/api/users.ts, src/models/user.ts, tests/user.test.ts) + +Findings: +1. [Severity 8] API Compliance: New endpoint POST /api/users/bulk not found in specifications + - Expected: All API endpoints must be documented in specifications + - Actual: Added undocumented endpoint: POST /api/users/bulk + - File: src/api/users.ts:45 + +2. [Severity 6] Data Model: New field 'lastLoginIP' in User model not in specifications + - Expected: User model should only have specified fields + - Actual: Added undocumented field: lastLoginIP + - File: src/models/user.ts:12 + +3. 
[Severity 9] Quality Check: TypeScript compilation error + - Expected: Code should pass all quality checks + - Actual: Property 'email' does not exist on type 'UserInput' + - File: src/api/users.ts:23 + +Summary: Found 3 issues (2 critical). Code changes do not fully comply with specifications. New API endpoint and data model field lack documentation. + +Recommendation: Fix critical TypeScript error and document new API endpoint and User model field in project specifications before proceeding. +``` + +## Integration with Project Structures + +### Simone Projects +Fully supports the Simone project structure: +- Reads current sprint context from manifest +- Focuses on current sprint deliverables +- Validates against sprint-specific requirements +- Updates task output logs with review results + +### Standard Projects +Works with any project structure: +- Searches for common documentation patterns +- Uses README and API docs for specifications +- Adapts quality checks to detected project type +- Provides general compliance recommendations + +## Best Practices + +### For Development Teams + +1. **Run Early and Often:** Use the agent on every commit or PR +2. **Maintain Documentation:** Keep specifications up-to-date +3. **Address Issues Promptly:** Fix critical issues before they accumulate +4. **Use Strict Mode:** Enable zero-tolerance for production code + +### For Project Setup + +1. **Document APIs:** Maintain clear API specifications +2. **Define Data Models:** Document all schemas and relationships +3. **Configure Quality Tools:** Set up linting and type-checking +4. **Establish Conventions:** Define coding standards and patterns + +### For Code Reviews + +1. **Review Scope:** Be specific about what to review +2. **Check Dependencies:** Ensure all changes are documented +3. **Validate Quality:** Run automated checks before manual review +4. 
**Follow Up:** Address all findings before merging + +## Troubleshooting + +### Common Issues + +**"No meaningful changes found"** +- Check git status and ensure changes are committed +- Verify the scope parameter is correct +- Ensure you're in a git repository + +**"Quality tools not found"** +- Install project-specific linting tools +- Check configuration files are present +- Verify tools are in PATH + +**"No specifications found"** +- Add README.md or API documentation +- Consider adopting Simone project structure +- Document requirements and specifications + +**"All changes marked as violations"** +- Review strict mode setting +- Update specifications to match intended changes +- Ensure documentation is current + +### Performance Optimization + +- Use specific scopes to limit review size +- Keep documentation organized and accessible +- Configure quality tools for fast execution +- Use auto-fix for routine formatting issues + +## Contributing + +To extend or modify the agent: + +1. **Core Logic:** Edit `code-reviewer-simone.ts` +2. **Quality Checks:** Modify `helpers/quality-checker.ts` +3. **Spec Analysis:** Update `helpers/spec-analyzer.ts` +4. **Add Languages:** Extend detection patterns in helpers +5. **Test Changes:** Run validation with test projects + +## License + +This agent follows the same license as the Codebuff project (Apache-2.0). 
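As a closing example, the PASS/FAIL output shown in the **Output Format** section above can be consumed programmatically. This is a minimal sketch, assuming the agent's final message arrives as a plain string (for instance via the SDK result; the exact field name on the SDK result object is an assumption, so only the parsing itself is shown):

```typescript
// Minimal verdict parser for the agent's plain-text output.
// Assumes the "Output Format" shape documented above; this helper is
// illustrative and not part of the agent itself.

interface ParsedVerdict {
  result: 'PASS' | 'FAIL'
  findingsCount: number
}

function parseVerdict(output: string): ParsedVerdict | null {
  // The verdict line looks like: Result: **FAIL**
  const resultMatch = output.match(/Result:\s*\*\*(PASS|FAIL)\*\*/)
  if (!resultMatch) return null

  // Findings are numbered lines like: 1. [Severity 8] ...
  const findings = output.match(/^\d+\.\s+\[Severity \d+\]/gm) ?? []

  return {
    result: resultMatch[1] as 'PASS' | 'FAIL',
    findingsCount: findings.length,
  }
}

const sample = [
  '[2025-01-11 14:30]: Code Review - FAIL',
  '',
  'Result: **FAIL**',
  '',
  'Findings:',
  '1. [Severity 8] API Compliance: undocumented endpoint',
  '2. [Severity 6] Data Model: undocumented field',
].join('\n')

console.log(parseVerdict(sample)) // { result: 'FAIL', findingsCount: 2 }
```

With strict mode disabled, a caller might gate a CI step on `findingsCount` or on individual severity scores rather than on the binary verdict alone.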
\ No newline at end of file diff --git a/.agents/code-reviewer-simone.ts b/.agents/code-reviewer-simone.ts new file mode 100644 index 0000000000..6248e89bd8 --- /dev/null +++ b/.agents/code-reviewer-simone.ts @@ -0,0 +1,301 @@ +import { publisher } from './constants' + +import type { SecretAgentDefinition } from './types/secret-agent-definition' + +const definition: SecretAgentDefinition = { + id: 'code-reviewer-simone', + publisher, + model: 'anthropic/claude-4-sonnet-20250522', + displayName: 'Code Reviewer (Simone Style)', + spawnerPrompt: 'Performs comprehensive code reviews following the Simone methodology with 7-step workflow including scope analysis, automated quality checks, and strict compliance verification.', + + inputSchema: { + scope: { + type: 'string', + description: 'Review scope - can be commit hash, branch, or file path. Defaults to HEAD~1 if empty', + default: 'HEAD~1' + }, + params: { + type: 'object', + properties: { + strictMode: { + type: 'boolean', + description: 'Enable zero-tolerance mode for spec compliance', + default: true + }, + autoFix: { + type: 'boolean', + description: 'Automatically fix formatting and linting issues when safe', + default: false + } + }, + required: [] + } + }, + + outputMode: 'last_message', + includeMessageHistory: false, + + toolNames: [ + 'read_files', + 'run_terminal_command', + 'code_search', + 'str_replace', + 'write_file', + 'spawn_agent_inline', + 'think_deeply', + 'end_turn' + ], + + spawnableAgents: [ + 'researcher', + 'thinker' + ], + + systemPrompt: `You are a meticulous code reviewer following the Simone methodology. You perform comprehensive 7-step code reviews with zero tolerance for deviations from specifications. + +Your review process: +1. Analyze scope and identify meaningful changes +2. Find code changes using git diff +3. Run automated quality checks (linting, type-checking) +4. Find relevant specifications and documentation +5. Compare changes against requirements with deep analysis +6. 
Analyze differences with severity scoring (1-10) +7. Provide PASS/FAIL verdict with detailed findings + +Key principles: +- Zero tolerance for spec deviations, even small ones +- Focus on current sprint deliverables, not future features +- Be extremely picky about compliance +- When in doubt, call FAIL and ask the user +- Always provide detailed findings with severity scores`, + + instructionsPrompt: `Execute a comprehensive code review following the 7-step Simone methodology. + +**CRITICAL REQUIREMENTS:** +- Follow ALL 7 steps in exact order +- Zero tolerance for any deviations from specifications +- Be extremely thorough in compliance checking +- Provide severity scores (1-10) for all issues +- Only PASS if absolutely no discrepancies found + +**7-STEP WORKFLOW:** +1. **Analyze Scope**: Identify meaningful changes to review +2. **Find Changes**: Use git diff to locate code changes +3. **Quality Checks**: Run linting, type-checking, formatting checks +4. **Find Specs**: Locate project documentation and requirements +5. **Compare**: Deep analysis of changes vs specifications +6. **Analyze Differences**: Score issues by severity (1-10) +7. **Verdict**: Provide PASS/FAIL with detailed findings + +**INPUT:** +- scope: {scope} (review scope - commit, branch, or file path) +- strictMode: {params.strictMode} (zero-tolerance compliance mode) +- autoFix: {params.autoFix} (auto-fix safe formatting issues) + +**OUTPUT FORMAT:** +\`\`\` +[YYYY-MM-DD HH:MM]: Code Review - PASS/FAIL + +Result: **FAIL/PASS** +Scope: [scope description] +Findings: [detailed list with severity scores] +Summary: [what's wrong or not] +Recommendation: [next steps] +\`\`\` + +Execute each step methodically with zero tolerance for spec deviations.`, + + stepPrompt: `Continue with the next step in the 7-step code review process. 
Be thorough and maintain zero tolerance for spec deviations.`, + + async *handleSteps({ scope, params }) { + // Step 1: Analyze scope and validate we have meaningful changes + yield { + toolName: 'think_deeply', + input: { + query: `STEP 1: Analyzing review scope "${scope}". + + I need to: + - Validate the scope parameter + - Check if we're in a git repository + - Determine what changes to review + - Ensure there are meaningful changes to analyze + + Parameters: strictMode=${params?.strictMode ?? true}, autoFix=${params?.autoFix ?? false}` + } + } + + yield { + toolName: 'run_terminal_command', + input: { + command: 'pwd && git status' + } + } + + // Step 2: Find code changes within scope + const gitDiffCommand = scope === 'HEAD~1' ? 'git diff HEAD~1' : `git diff ${scope}` + yield { + toolName: 'run_terminal_command', + input: { + command: gitDiffCommand + } + } + + // Also get list of changed files + const gitDiffFilesCommand = scope === 'HEAD~1' ? 'git diff --name-only HEAD~1' : `git diff --name-only ${scope}` + yield { + toolName: 'run_terminal_command', + input: { + command: gitDiffFilesCommand + } + } + + // Step 3: Run automated quality checks + yield { + toolName: 'think_deeply', + input: { + query: `STEP 3: Running automated quality checks. 
I need to: + - Detect project type and available tools + - Run linting, type-checking, formatting checks + - Identify critical issues that would cause FAIL verdict + - Apply auto-fixes if enabled and safe` + } + } + + // Detect project configuration files + yield { + toolName: 'read_files', + input: { + paths: [ + 'package.json', 'pyproject.toml', 'Cargo.toml', 'go.mod', 'Gemfile', + '.eslintrc.js', '.eslintrc.json', '.eslintrc.yml', + 'tsconfig.json', '.prettierrc', '.prettierrc.json', + 'ruff.toml', '.ruff.toml', 'mypy.ini', '.flake8', 'setup.cfg', + '.rubocop.yml', 'phpcs.xml', '.php-cs-fixer.php' + ] + } + } + + // Run quality checks based on detected project type + // Python projects + yield { + toolName: 'run_terminal_command', + input: { + command: 'if [ -f "pyproject.toml" ] || [ -f ".ruff.toml" ]; then echo "Python project detected"; ruff check . || true; fi' + } + } + + yield { + toolName: 'run_terminal_command', + input: { + command: 'if [ -f "mypy.ini" ] || grep -q "mypy" pyproject.toml 2>/dev/null; then echo "Running mypy"; mypy . || true; fi' + } + } + + // JavaScript/TypeScript projects + yield { + toolName: 'run_terminal_command', + input: { + command: 'if [ -f "package.json" ]; then echo "JS/TS project detected"; npm run lint 2>/dev/null || npx eslint . 2>/dev/null || true; fi' + } + } + + yield { + toolName: 'run_terminal_command', + input: { + command: 'if [ -f "tsconfig.json" ]; then echo "Running TypeScript check"; npm run type-check 2>/dev/null || npx tsc --noEmit 2>/dev/null || true; fi' + } + } + + // Rust projects + yield { + toolName: 'run_terminal_command', + input: { + command: 'if [ -f "Cargo.toml" ]; then echo "Rust project detected"; cargo fmt --check || true; cargo clippy -- -D warnings || true; fi' + } + } + + // Go projects + yield { + toolName: 'run_terminal_command', + input: { + command: 'if [ -f "go.mod" ]; then echo "Go project detected"; go fmt ./... || true; go vet ./... 
|| true; fi' + } + } + + // Step 4: Find relevant specifications and documentation + yield { + toolName: 'think_deeply', + input: { + query: `STEP 4: Finding specifications and documentation. I need to: + - Look for Simone-style project structure (.simone/) + - Read project manifest and current sprint info + - Find relevant requirements and task documentation + - Identify what specs the changes should comply with` + } + } + + // Look for Simone project structure + yield { + toolName: 'run_terminal_command', + input: { + command: 'find . -name ".simone" -type d 2>/dev/null || echo "No .simone directory found"' + } + } + + yield { + toolName: 'read_files', + input: { + paths: ['.simone/00_PROJECT_MANIFEST.md'] + } + } + + // Find sprint and requirements documentation + yield { + toolName: 'run_terminal_command', + input: { + command: 'find . -path "*/.simone/03_SPRINTS/*" -name "*.md" 2>/dev/null | head -10 || echo "No sprint docs found"' + } + } + + yield { + toolName: 'run_terminal_command', + input: { + command: 'find . -path "*/.simone/02_REQUIREMENTS/*" -name "*.md" 2>/dev/null | head -10 || echo "No requirements docs found"' + } + } + + // Look for other common documentation + yield { + toolName: 'run_terminal_command', + input: { + command: 'find . -maxdepth 2 -name "README*" -o -name "SPEC*" -o -name "DESIGN*" -o -name "API*" 2>/dev/null | head -10' + } + } + + // Step 5: Compare changes against documentation (handled by LLM) + yield { + toolName: 'think_deeply', + input: { + query: `STEP 5: Deep comparison of code changes against specifications. + + I must analyze: + - Data models/schemas: fields, types, constraints, relationships + - APIs/interfaces: endpoints, params, return shapes, status codes, errors + - Config/environment: keys, defaults, required/optional + - Behavior: business rules, side-effects, error handling + - Quality: naming, formatting, tests, linter compliance + + CRITICAL: Zero tolerance for deviations. 
Even small changes not in specs = FAIL.` } } + + // Step 6 & 7: Analysis and verdict (handled by LLM in subsequent steps) + while (true) { + const { stepsComplete } = yield 'STEP' + if (stepsComplete) break + } + } +} + +export default definition \ No newline at end of file diff --git a/.agents/helpers/quality-checker.ts b/.agents/helpers/quality-checker.ts new file mode 100644 index 0000000000..d597bbcd2c --- /dev/null +++ b/.agents/helpers/quality-checker.ts @@ -0,0 +1,293 @@ +/** + * Quality checker helper functions for the code review agent + * Detects and runs project-specific linting and type-checking tools + */ + +export interface QualityCheckResult { + tool: string + command: string + passed: boolean + output: string + issues: QualityIssue[] +} + +export interface QualityIssue { + file: string + line?: number + column?: number + severity: 'error' | 'warning' | 'info' + message: string + rule?: string +} + +export interface ProjectTools { + language: string + linters: string[] + formatters: string[] + typeCheckers: string[] + commands: string[] +} + +/** + * Detect available quality tools based on project configuration files + * (maps config file name to its contents) + */ +export function detectProjectTools(configFiles: Record<string, string>): ProjectTools { + const tools: ProjectTools = { + language: 'unknown', + linters: [], + formatters: [], + typeCheckers: [], + commands: [] + } + + // Python projects + if (configFiles['pyproject.toml'] || configFiles['setup.py']) { + tools.language = 'python' + + if (configFiles['pyproject.toml']?.includes('ruff') || configFiles['.ruff.toml'] || configFiles['ruff.toml']) { + tools.linters.push('ruff') + tools.commands.push('ruff check .') + } + + if (configFiles['pyproject.toml']?.includes('black') || configFiles['.black']) { + tools.formatters.push('black') + tools.commands.push('black --check .') + } + + if (configFiles['pyproject.toml']?.includes('mypy') || configFiles['mypy.ini']) { + tools.typeCheckers.push('mypy') + tools.commands.push('mypy .') + } + + if 
(configFiles['.flake8'] || configFiles['setup.cfg']?.includes('flake8')) { + tools.linters.push('flake8') + tools.commands.push('flake8') + } + } + + // JavaScript/TypeScript projects + if (configFiles['package.json']) { + const packageJson = configFiles['package.json'] + tools.language = packageJson?.includes('typescript') ? 'typescript' : 'javascript' + + if (configFiles['.eslintrc.js'] || configFiles['.eslintrc.json'] || packageJson?.includes('eslint')) { + tools.linters.push('eslint') + tools.commands.push('npm run lint || npx eslint .') + } + + if (configFiles['.prettierrc'] || packageJson?.includes('prettier')) { + tools.formatters.push('prettier') + tools.commands.push('npm run format:check || npx prettier --check .') + } + + if (configFiles['tsconfig.json']) { + tools.typeCheckers.push('typescript') + tools.commands.push('npm run type-check || npx tsc --noEmit') + } + } + + // Rust projects + if (configFiles['Cargo.toml']) { + tools.language = 'rust' + tools.formatters.push('rustfmt') + tools.linters.push('clippy') + tools.commands.push('cargo fmt --check', 'cargo clippy -- -D warnings') + } + + // Go projects + if (configFiles['go.mod']) { + tools.language = 'go' + tools.formatters.push('gofmt') + tools.linters.push('go vet') + tools.commands.push('go fmt ./...', 'go vet ./...') + } + + // Ruby projects + if (configFiles['Gemfile'] || configFiles['.rubocop.yml']) { + tools.language = 'ruby' + if (configFiles['.rubocop.yml']) { + tools.linters.push('rubocop') + tools.commands.push('rubocop') + } + } + + return tools +} + +/** + * Generate quality check commands based on detected tools + */ +export function generateQualityCommands(tools: ProjectTools): string[] { + return tools.commands +} + +/** + * Parse linter output to extract structured issues + */ +export function parseLinterOutput(tool: string, output: string): QualityIssue[] { + const issues: QualityIssue[] = [] + + switch (tool) { + case 'ruff': + return parseRuffOutput(output) + case 'eslint': + 
return parseEslintOutput(output) + case 'mypy': + return parseMypyOutput(output) + case 'tsc': + return parseTscOutput(output) + case 'clippy': + return parseClippyOutput(output) + default: + return parseGenericOutput(output) + } +} + +function parseRuffOutput(output: string): QualityIssue[] { + const issues: QualityIssue[] = [] + const lines = output.split('\n') + + for (const line of lines) { + // Ruff format: path/file.py:line:column: CODE message + const match = line.match(/^(.+):(\d+):(\d+):\s+(\w+)\s+(.+)$/) + if (match) { + issues.push({ + file: match[1], + line: parseInt(match[2]), + column: parseInt(match[3]), + severity: match[4].startsWith('E') ? 'error' : 'warning', + message: match[5], + rule: match[4] + }) + } + } + + return issues +} + +function parseEslintOutput(output: string): QualityIssue[] { + const issues: QualityIssue[] = [] + + try { + // Try to parse as JSON first + const parsed = JSON.parse(output) + if (Array.isArray(parsed)) { + for (const file of parsed) { + for (const message of file.messages || []) { + issues.push({ + file: file.filePath, + line: message.line, + column: message.column, + severity: message.severity === 2 ? 'error' : 'warning', + message: message.message, + rule: message.ruleId + }) + } + } + } + } catch { + // Fall back to text parsing + return parseGenericOutput(output) + } + + return issues +} + +function parseMypyOutput(output: string): QualityIssue[] { + const issues: QualityIssue[] = [] + const lines = output.split('\n') + + for (const line of lines) { + // MyPy format: file.py:line: error: message + const match = line.match(/^(.+):(\d+):\s+(error|warning|note):\s+(.+)$/) + if (match) { + issues.push({ + file: match[1], + line: parseInt(match[2]), + severity: match[3] === 'error' ? 'error' : match[3] === 'warning' ? 
'warning' : 'info', + message: match[4] + }) + } + } + + return issues +} + +function parseTscOutput(output: string): QualityIssue[] { + const issues: QualityIssue[] = [] + const lines = output.split('\n') + + for (const line of lines) { + // TypeScript format: file.ts(line,column): error TS2345: message + const match = line.match(/^(.+)\((\d+),(\d+)\):\s+(error|warning)\s+TS\d+:\s+(.+)$/) + if (match) { + issues.push({ + file: match[1], + line: parseInt(match[2]), + column: parseInt(match[3]), + severity: match[4] as 'error' | 'warning', + message: match[5] + }) + } + } + + return issues +} + +function parseClippyOutput(output: string): QualityIssue[] { + const issues: QualityIssue[] = [] + const lines = output.split('\n') + + // Clippy format: warning: message, followed by --> file.rs:line:column. + // Iterate by index so the location line that follows each diagnostic can + // be read directly (indexOf would return the first duplicate line). + for (let i = 0; i < lines.length; i++) { + const line = lines[i] + if (line.includes('warning:') || line.includes('error:')) { + const severity = line.includes('error:') ? 'error' : 'warning' + const message = line.split(': ').slice(1).join(': ') + + if (i + 1 < lines.length) { + const locationMatch = lines[i + 1].match(/-->\s+(.+):(\d+):(\d+)/) + if (locationMatch) { + issues.push({ + file: locationMatch[1], + line: parseInt(locationMatch[2]), + column: parseInt(locationMatch[3]), + severity, + message: message || 'Clippy warning' + }) + } + } + } + } + + return issues +} + +function parseGenericOutput(output: string): QualityIssue[] { + const issues: QualityIssue[] = [] + const lines = output.split('\n') + + for (const line of lines) { + if (line.includes('error') || line.includes('warning')) { + issues.push({ + file: 'unknown', + severity: line.toLowerCase().includes('error') ? 
'error' : 'warning', + message: line.trim() + }) + } + } + + return issues +} + +/** + * Determine if a quality issue should cause a FAIL verdict + */ +export function isCriticalIssue(issue: QualityIssue): boolean { + // Critical issues that should cause FAIL + return issue.severity === 'error' || + issue.message.toLowerCase().includes('syntax error') || + issue.message.toLowerCase().includes('type error') || + issue.message.toLowerCase().includes('security') +} \ No newline at end of file diff --git a/.agents/helpers/spec-analyzer.ts b/.agents/helpers/spec-analyzer.ts new file mode 100644 index 0000000000..a0e83033c3 --- /dev/null +++ b/.agents/helpers/spec-analyzer.ts @@ -0,0 +1,512 @@ +/** + * Specification analyzer helper functions for code review + * Handles documentation parsing and compliance checking + */ + +export interface SpecDocument { + path: string + type: 'manifest' | 'sprint' | 'requirement' | 'task' | 'api' | 'readme' | 'other' + content: string + lastModified?: Date +} + +export interface ComplianceIssue { + type: 'data_model' | 'api' | 'config' | 'behavior' | 'quality' | 'other' + severity: number // 1-10 + description: string + expectedBehavior: string + actualBehavior: string + file?: string + line?: number + specReference?: string +} + +export interface ReviewVerdict { + result: 'PASS' | 'FAIL' + scope: string + findings: ComplianceIssue[] + summary: string + recommendation: string + timestamp: string +} + +/** + * Parse and categorize specification documents + */ +export function categorizeSpecDocument(path: string, content: string): SpecDocument { + let type: SpecDocument['type'] = 'other' + + if (path.includes('PROJECT_MANIFEST')) { + type = 'manifest' + } else if (path.includes('SPRINTS/') || path.includes('sprint')) { + type = 'sprint' + } else if (path.includes('REQUIREMENTS/') || path.includes('requirement')) { + type = 'requirement' + } else if (path.includes('task') || path.includes('TASK')) { + type = 'task' + } else if 
(path.toLowerCase().includes('api') || content.toLowerCase().includes('endpoint')) { + type = 'api' + } else if (path.toLowerCase().includes('readme')) { + type = 'readme' + } + + return { + path, + type, + content + } +} + +/** + * Extract API specifications from documentation + */ +export function extractApiSpecs(docs: SpecDocument[]): ApiSpec[] { + const apiSpecs: ApiSpec[] = [] + + for (const doc of docs) { + if (doc.type === 'api' || doc.content.toLowerCase().includes('endpoint')) { + const endpoints = extractEndpoints(doc.content) + apiSpecs.push(...endpoints.map(endpoint => ({ + ...endpoint, + source: doc.path + }))) + } + } + + return apiSpecs +} + +interface ApiSpec { + method: string + path: string + parameters?: Parameter[] + responses?: Response[] + source: string +} + +interface Parameter { + name: string + type: string + required: boolean + description?: string +} + +interface Response { + status: number + description: string + schema?: any +} + +function extractEndpoints(content: string): Omit<ApiSpec, 'source'>[] { + const endpoints: Omit<ApiSpec, 'source'>[] = [] + + // Look for common API documentation patterns; the method is always + // capture group 1 and the path capture group 2 + const endpointPatterns = [ + /(GET|POST|PUT|DELETE|PATCH)\s+([\/\w\-\{\}:]+)/gi, + /`(GET|POST|PUT|DELETE|PATCH)\s+([\/\w\-\{\}:]+)`/gi, + /\*\*(GET|POST|PUT|DELETE|PATCH)\*\*\s+`([\/\w\-\{\}:]+)`/gi + ] + + for (const pattern of endpointPatterns) { + let match + while ((match = pattern.exec(content)) !== null) { + endpoints.push({ + method: match[1].toUpperCase(), + path: match[2] || match[1] + }) + } + } + + return endpoints +} + +/** + * Extract data model specifications from documentation + */ +export function extractDataModels(docs: SpecDocument[]): DataModel[] { + const models: DataModel[] = [] + + for (const doc of docs) { + const docModels = extractModelsFromContent(doc.content, doc.path) + models.push(...docModels) + } + + return models +} + +interface DataModel { + name: string + fields: ModelField[] + source: string +} + +interface ModelField { + name: string
type: string + required: boolean + constraints?: string[] + description?: string +} + +function extractModelsFromContent(content: string, source: string): DataModel[] { + const models: DataModel[] = [] + + // Look for schema definitions, table structures, etc. + const schemaPatterns = [ + /(?:interface|type|class|model)\s+(\w+)\s*\{([^}]+)\}/gi, + /CREATE TABLE\s+(\w+)\s*\(([^)]+)\)/gi, + /```(?:typescript|javascript|sql)\s*((?:interface|type|class|CREATE TABLE)[^`]+)```/gi + ] + + for (const pattern of schemaPatterns) { + let match + while ((match = pattern.exec(content)) !== null) { + const modelName = match[1] + const fieldsText = match[2] + // The fenced-code pattern has only one capture group; skip matches + // without a separate field body rather than splitting undefined + if (!fieldsText) continue + const fields = parseModelFields(fieldsText) + + models.push({ + name: modelName, + fields, + source + }) + } + } + + return models +} + +function parseModelFields(fieldsText: string): ModelField[] { + const fields: ModelField[] = [] + const lines = fieldsText.split('\n') + + for (const line of lines) { + const trimmed = line.trim() + if (!trimmed || trimmed.startsWith('//') || trimmed.startsWith('*')) continue + + // TypeScript/JavaScript field pattern + const tsMatch = trimmed.match(/(\w+)(\?)?:\s*([^;,]+)/) + if (tsMatch) { + fields.push({ + name: tsMatch[1], + type: tsMatch[3].trim(), + required: !tsMatch[2] // no ?
means required + }) + continue + } + + // SQL column pattern + const sqlMatch = trimmed.match(/(\w+)\s+(\w+)(?:\([^)]+\))?\s*(NOT NULL|NULL)?/) + if (sqlMatch) { + fields.push({ + name: sqlMatch[1], + type: sqlMatch[2], + required: sqlMatch[3] === 'NOT NULL' + }) + } + } + + return fields +} + +/** + * Compare code changes against specifications + */ +export function compareAgainstSpecs( + codeChanges: string, + changedFiles: string[], + specs: { + apis: ApiSpec[] + models: DataModel[] + docs: SpecDocument[] + } +): ComplianceIssue[] { + const issues: ComplianceIssue[] = [] + + // Check API compliance + issues.push(...checkApiCompliance(codeChanges, changedFiles, specs.apis)) + + // Check data model compliance + issues.push(...checkDataModelCompliance(codeChanges, changedFiles, specs.models)) + + // Check configuration compliance + issues.push(...checkConfigCompliance(codeChanges, changedFiles, specs.docs)) + + // Check behavior compliance + issues.push(...checkBehaviorCompliance(codeChanges, changedFiles, specs.docs)) + + return issues +} + +function checkApiCompliance(codeChanges: string, changedFiles: string[], apiSpecs: ApiSpec[]): ComplianceIssue[] { + const issues: ComplianceIssue[] = [] + + // Look for new endpoints in code that aren't in specs + const codeEndpoints = extractEndpointsFromCode(codeChanges) + + for (const endpoint of codeEndpoints) { + const specMatch = apiSpecs.find(spec => + spec.method === endpoint.method && spec.path === endpoint.path + ) + + if (!specMatch) { + issues.push({ + type: 'api', + severity: 8, + description: `New API endpoint ${endpoint.method} ${endpoint.path} not found in specifications`, + expectedBehavior: 'All API endpoints must be documented in specifications', + actualBehavior: `Added undocumented endpoint: ${endpoint.method} ${endpoint.path}`, + file: endpoint.file + }) + } + } + + return issues +} + +function checkDataModelCompliance(codeChanges: string, changedFiles: string[], modelSpecs: DataModel[]): 
ComplianceIssue[] { + const issues: ComplianceIssue[] = [] + + // Look for schema changes in code + const codeModels = extractModelsFromCode(codeChanges) + + for (const codeModel of codeModels) { + const specModel = modelSpecs.find(spec => spec.name === codeModel.name) + + if (!specModel) { + issues.push({ + type: 'data_model', + severity: 7, + description: `New data model ${codeModel.name} not found in specifications`, + expectedBehavior: 'All data models must be documented in specifications', + actualBehavior: `Added undocumented model: ${codeModel.name}` + }) + continue + } + + // Check field compliance + for (const field of codeModel.fields) { + const specField = specModel.fields.find(f => f.name === field.name) + + if (!specField) { + issues.push({ + type: 'data_model', + severity: 6, + description: `New field ${field.name} in model ${codeModel.name} not in specifications`, + expectedBehavior: `Model ${codeModel.name} should only have specified fields`, + actualBehavior: `Added undocumented field: ${field.name}` + }) + } else if (specField.type !== field.type) { + issues.push({ + type: 'data_model', + severity: 9, + description: `Field ${field.name} type mismatch in model ${codeModel.name}`, + expectedBehavior: `Field ${field.name} should be type ${specField.type}`, + actualBehavior: `Field ${field.name} is type ${field.type}` + }) + } + } + } + + return issues +} + +function checkConfigCompliance(codeChanges: string, changedFiles: string[], docs: SpecDocument[]): ComplianceIssue[] { + const issues: ComplianceIssue[] = [] + + // Look for configuration changes + const configFiles = changedFiles.filter(file => + file.includes('config') || + file.includes('.env') || + file.includes('settings') || + file.endsWith('.json') || + file.endsWith('.yml') || + file.endsWith('.yaml') + ) + + if (configFiles.length > 0) { + // Check if config changes are documented + const hasConfigDocs = docs.some(doc => + doc.content.toLowerCase().includes('config') || + 
doc.content.toLowerCase().includes('environment') || + doc.content.toLowerCase().includes('settings') + ) + + if (!hasConfigDocs) { + issues.push({ + type: 'config', + severity: 5, + description: 'Configuration changes made without documentation', + expectedBehavior: 'Configuration changes should be documented in specifications', + actualBehavior: `Modified config files: ${configFiles.join(', ')}` + }) + } + } + + return issues +} + +function checkBehaviorCompliance(codeChanges: string, changedFiles: string[], docs: SpecDocument[]): ComplianceIssue[] { + const issues: ComplianceIssue[] = [] + + // Look for business logic changes + const businessLogicPatterns = [ + /function\s+(\w+)/g, + /const\s+(\w+)\s*=/g, + /class\s+(\w+)/g, + /if\s*\(/g, + /switch\s*\(/g, + /throw\s+/g, + /return\s+/g + ] + + let hasBusinessLogicChanges = false + for (const pattern of businessLogicPatterns) { + if (pattern.test(codeChanges)) { + hasBusinessLogicChanges = true + break + } + } + + if (hasBusinessLogicChanges) { + // Check if behavior is documented + const hasBehaviorDocs = docs.some(doc => + doc.content.toLowerCase().includes('behavior') || + doc.content.toLowerCase().includes('business rule') || + doc.content.toLowerCase().includes('logic') || + doc.content.toLowerCase().includes('workflow') + ) + + if (!hasBehaviorDocs) { + issues.push({ + type: 'behavior', + severity: 6, + description: 'Business logic changes made without behavioral documentation', + expectedBehavior: 'Business logic changes should be documented with behavior specifications', + actualBehavior: 'Modified business logic without corresponding documentation' + }) + } + } + + return issues +} + +function extractEndpointsFromCode(code: string): Array<{method: string, path: string, file?: string}> { + const endpoints: Array<{method: string, path: string, file?: string}> = [] + + // Express.js patterns + const expressPatterns = [ + /app\.(get|post|put|delete|patch)\s*\(\s*['"`]([^'"`]+)['"`]/gi, + 
/router\.(get|post|put|delete|patch)\s*\(\s*['"`]([^'"`]+)['"`]/gi + ] + + // FastAPI patterns + const fastApiPatterns = [ + /@app\.(get|post|put|delete|patch)\s*\(\s*['"`]([^'"`]+)['"`]/gi + ] + + const allPatterns = [...expressPatterns, ...fastApiPatterns] + + for (const pattern of allPatterns) { + let match + while ((match = pattern.exec(code)) !== null) { + endpoints.push({ + method: match[1].toUpperCase(), + path: match[2] + }) + } + } + + return endpoints +} + +function extractModelsFromCode(code: string): Array<{name: string, fields: ModelField[]}> { + const models: Array<{name: string, fields: ModelField[]}> = [] + + // TypeScript interface/type patterns + const tsPatterns = [ + /(?:interface|type)\s+(\w+)\s*\{([^}]+)\}/gi + ] + + // Python class patterns + const pythonPatterns = [ + /class\s+(\w+).*?:\s*((?:\s*\w+:.*?\n)*)/gi + ] + + const allPatterns = [...tsPatterns, ...pythonPatterns] + + for (const pattern of allPatterns) { + let match + while ((match = pattern.exec(code)) !== null) { + const modelName = match[1] + const fieldsText = match[2] + const fields = parseModelFields(fieldsText) + + models.push({ + name: modelName, + fields + }) + } + } + + return models +} + +/** + * Generate final review verdict + */ +export function generateVerdict( + scope: string, + qualityIssues: any[], + complianceIssues: ComplianceIssue[], + strictMode: boolean = true +): ReviewVerdict { + const allIssues = [...complianceIssues] + + // Add quality issues as compliance issues + for (const qIssue of qualityIssues) { + if (qIssue.severity === 'error') { + allIssues.push({ + type: 'quality', + severity: 8, + description: `Quality check failed: ${qIssue.message}`, + expectedBehavior: 'Code should pass all quality checks', + actualBehavior: qIssue.message, + file: qIssue.file, + line: qIssue.line + }) + } + } + + // Determine result + const criticalIssues = allIssues.filter(issue => issue.severity >= 7) + const hasAnyIssues = allIssues.length > 0 + + const result = 
(strictMode && hasAnyIssues) || criticalIssues.length > 0 ? 'FAIL' : 'PASS' + + // Generate summary + const summary = result === 'PASS' + ? 'All checks passed. Code changes comply with specifications.' + : `Found ${allIssues.length} issues (${criticalIssues.length} critical). Code changes do not fully comply with specifications.` + + // Generate recommendation + let recommendation = '' + if (result === 'FAIL') { + if (criticalIssues.length > 0) { + recommendation = 'Fix critical issues before proceeding. Review specifications and ensure all changes are documented.' + } else { + recommendation = 'Address minor issues and ensure all changes align with project specifications.' + } + } else { + recommendation = 'Code review passed. Changes are ready for merge.' + } + + return { + result, + scope, + findings: allIssues, + summary, + recommendation, + timestamp: new Date().toISOString().replace('T', ' ').substring(0, 16) + } +} \ No newline at end of file diff --git a/.devcontainer/devcontainer.json b/.devcontainer/devcontainer.json new file mode 100644 index 0000000000..885398412f --- /dev/null +++ b/.devcontainer/devcontainer.json @@ -0,0 +1,35 @@ +// The Dev Container format allows you to configure your environment. At the heart of it +// is a Docker image or Dockerfile which controls the tools available in your environment. +// +// See https://aka.ms/devcontainer.json for more information. +{ + "name": "Ona", + // This universal image (~10GB) includes many development tools and languages, + // providing a convenient all-in-one development environment. + // + // This image is already available on remote runners for fast startup. On desktop + // and linux runners, it will need to be downloaded, which may take longer. 
+  //
+  // For faster startup on desktop/linux, consider a smaller, language-specific image:
+  //   • For Python: mcr.microsoft.com/devcontainers/python:3.13
+  //   • For Node.js: mcr.microsoft.com/devcontainers/javascript-node:24
+  //   • For Go: mcr.microsoft.com/devcontainers/go:1.24
+  //   • For Java: mcr.microsoft.com/devcontainers/java:21
+  //
+  // Browse more options at: https://hub.docker.com/r/microsoft/devcontainers
+  // or build your own using the Dockerfile option below.
+  "image": "mcr.microsoft.com/devcontainers/universal:3.0.3"
+  // Use "build" instead of "image"
+  // to build the image from a Dockerfile:
+  // "build": {
+  //   "context": ".",
+  //   "dockerfile": "Dockerfile"
+  // }
+  // Features add additional features to your environment. See https://containers.dev/features
+  // Beware: features are not supported on all platforms and may have unintended side-effects.
+  // "features": {
+  //   "ghcr.io/devcontainers/features/docker-in-docker": {
+  //     "moby": false
+  //   }
+  // }
+}