SkillCheck validates your Agent Skills against the open standard. Pro adds accessibility, security, and anti-slop checks. Built for Claude Code.
Built for Claude Code · Follows the Agent Skills open standard · v3.12.0
Validates Agent Skills anywhere they're used. Pro connects as an MCP tool server.
Skill Discovery (NEW)
`skillcheck discover` scans your AI assistant config.
Multi-Platform
SkillCheck Pro full changelog · SkillCheck Free changelog on GitHub
Agent Skills are now an open standard adopted by Microsoft, Cursor, and dozens of coding agents. Your skill needs to work everywhere.
You won't know until someone complains. Or you run SkillCheck.
We scanned 1,613 public skills across 8 repositories, including Microsoft's and Anthropic's own collections.
1,613
skills scanned
65
average score out of 100
84%
missing proper descriptions
2%
scored Excellent
Last scanned: March 2026 with SkillCheck v3.12.0. Scanned repositories include Microsoft's and Anthropic's public skill collections.
| Category | Tier | What We Catch |
|---|---|---|
| Structure | Free | Missing fields, invalid names, broken YAML, XML injection, argument-hint validation |
| Body | Free | Content requirements, length, formatting, anti-pattern format lint, MCP tool qualification |
| Naming | Free | Conventions, specificity, reserved words, gerund naming |
| Semantics | Free | Contradictions, ambiguous instructions, wisdom/platitude detection, workflow-steps-in-description |
| Quality Patterns | Free | Examples, error handling, triggers, output format, structured instructions, prerequisites |
| Anti-Slop | Pro | "Let's dive in", hedge words, filler phrases |
| Security | Pro | PII detection, credential safety, path traversal |
| Token Budget | Pro | Context efficiency, budget analysis |
| WCAG | Pro | Color contrast, accessibility for visual skills |
| Enterprise | Pro | Hardcoded paths, env config, audit support, metadata validation |
| Production Readiness | Pro | Workflow structure, troubleshooting sections, documentation score, success criteria |
| Agent Readiness | Pro | Maturity scoring from L0 (manual, no guardrails) to L3 (autonomous-ready with eval hooks, rollback, and structured output). Checks autonomy design, composability, and observability. |
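As an illustration of the Free-tier Structure checks, a minimal frontmatter validator might look like the sketch below. The required fields and the lowercase-hyphen name pattern follow the Agent Skills convention, but the function and its rules are illustrative assumptions, not SkillCheck's actual implementation:

```python
import re

# Sketch of a Structure check: required fields and name convention.
# The field names and name pattern are assumptions based on the
# Agent Skills convention, not SkillCheck's real rules.
NAME_RE = re.compile(r"^[a-z0-9]+(?:-[a-z0-9]+)*$")

def check_structure(frontmatter: dict) -> list:
    """Return a list of issue strings for a SKILL.md frontmatter dict."""
    issues = []
    for field in ("name", "description"):
        if not frontmatter.get(field):
            issues.append("Missing required field: %s" % field)
    name = frontmatter.get("name", "")
    if name and not NAME_RE.match(name):
        issues.append("Invalid name %r: use lowercase-hyphen format" % name)
    return issues
```

For example, `check_structure({"name": "My Skill"})` flags both the missing description and the non-conforming name.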
Free: Claude reads a skill file and runs the checks inline. No install, no binary.
Pro: a standalone binary that runs locally and connects to Claude Code as an MCP tool server.
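As one possible setup, the Pro binary could be registered in a project's `.mcp.json`; the server entry below is a sketch (the `skillcheck` command name comes from this page, but the `mcp` argument is an assumption, so check the Pro docs for the real invocation):

```json
{
  "mcpServers": {
    "skillcheck": {
      "command": "skillcheck",
      "args": ["mcp"]
    }
  }
}
```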
## my-awesome-skill Check Results [PRO]
### Critical Issues (1)
- Description missing WHEN clause. Add trigger context.
### Warnings (2)
- Line 47: Vague term "several" — specify a number
- No error handling documented
### Passed Checks: 75 / 82 applicable
### Pro Scores
✓ Anti-slop: 92/100
✓ WCAG AA: Pass
✓ Enterprise ready: Yes
✓ Agent Readiness: L2 Orchestratable (78/100)
Status: Needs Attention
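The vague-term warning in the sample report above can be approximated with a simple word-list scan; the term list and regex below are illustrative assumptions, not SkillCheck's actual check:

```python
import re

# Illustrative Semantics check: flag vague quantifiers with line numbers.
# The term list is an assumption; SkillCheck's real list may differ.
VAGUE_TERMS = ("several", "various", "many", "a few")

def find_vague_terms(body: str) -> list:
    """Return (line_number, term) pairs for vague quantifiers in a skill body."""
    hits = []
    for lineno, line in enumerate(body.splitlines(), start=1):
        for term in VAGUE_TERMS:
            if re.search(r"\b%s\b" % re.escape(term), line, re.IGNORECASE):
                hits.append((lineno, term))
    return hits
```

On a body containing "Retry several times", this reports the line number and the term "several", letting a report suggest a specific number instead.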
Skills validated by SkillCheck get a public report page with scores, badges, and shareable links.
Report pages are coming soon. Run `skillcheck report your-skill/SKILL.md` to generate your first report.
| Tier | Price |
|---|---|
| Free | $0 forever |
| Pro | $79 lifetime |
| Team | $49/month, up to 10 users |
**Is SkillCheck an Anthropic product?** No. SkillCheck is an independent project. The Free tier validates against the Agent Skills open standard (created by Anthropic). Pro adds extra quality checks like anti-slop detection, WCAG accessibility, and security scanning.
**Does it work outside Claude Code?** It is built and tested for Claude Code. Since it follows the Agent Skills open standard and the MCP protocol, it should also work with compatible tools like Cursor, VS Code, and Windsurf.
**Does SkillCheck validate MCP servers?** No. SkillCheck validates skill definitions, not MCP server implementations. Skills and MCP are complementary: MCP provides connectivity, skills provide procedural knowledge.
**Can I use it in CI?** Pro and Team tiers will include GitHub Actions integration, so you can block PRs that introduce low-quality skills.
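Once that integration ships, a CI gate might look roughly like the step below. The workflow syntax is standard GitHub Actions, but the command is a placeholder based on the `skillcheck report` invocation shown on this page; the official integration is not yet released:

```yaml
# Hypothetical workflow step; not the official integration.
- name: Validate skills
  run: skillcheck report your-skill/SKILL.md
```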
**Do I have to fix everything it flags?** No. SkillCheck reports issues; you decide what to fix. Some checks are suggestions, not requirements.