An enterprise-grade agent skill registry — publish, discover, and manage reusable skill packages across your organization.
SkillHub is a self-hosted platform that gives teams a private, governed place to share agent skills. Publish a skill package, push it to a namespace, and let others find it through search or install it via CLI. Built for on-premise deployment behind your firewall, with the same polish you'd expect from a public registry.
- Self-Hosted & Private — Deploy on your own infrastructure. Keep proprietary skills behind your firewall with full data sovereignty. One `make dev-all` command gets you running locally.
- Publish & Version — Upload agent skill packages with semantic versioning, custom tags (`beta`, `stable`), and automatic `latest` tracking.
- Discover — Full-text search with filters by namespace, downloads, ratings, and recency. Visibility rules ensure users only see what they're authorized to.
- Team Namespaces — Organize skills under team or global scopes. Each namespace has its own members, roles (Owner / Admin / Member), and publishing policies.
- Review & Governance — Team admins review within their namespace; platform admins gate promotions to the global scope. Governance actions are audit-logged for compliance.
- CLI-First — Native REST API plus a compatibility layer for existing ClawHub-style registry clients. The native API is the primary supported path while protocol compatibility continues to expand.
- Pluggable Storage — Local filesystem for development, S3 / MinIO for production. Swap via config.
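The automatic `latest` tracking mentioned above can be pictured as picking the highest semantic version among a skill's published releases. A minimal shell sketch of that idea (illustrative only, not SkillHub's actual implementation), using `sort -V` for version-aware ordering:

```shell
# Illustrative: pick the "latest" release from a set of semver tags.
# sort -V compares version components numerically, so 1.10.0 > 1.9.0.
versions='1.2.0
1.10.0
1.3.0'
latest=$(printf '%s\n' "$versions" | sort -V | tail -n1)
echo "latest -> $latest"
```

Plain lexicographic sorting would rank `1.3.0` above `1.10.0`, which is why version-aware comparison matters for `latest` resolution.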
Start the full local stack with:

```bash
curl -fsSL https://raw.githubusercontent.com/iflytek/skillhub/main/scripts/runtime.sh | sh -s -- up
```
Prerequisites:

- Docker & Docker Compose

Run:

```bash
make dev-all
```

Then open:

- Web UI: http://localhost:3000
- Backend API: http://localhost:8080
The local profile seeds two mock-auth users automatically:

- `local-user` for normal publishing and namespace operations
- `local-admin` with `SUPER_ADMIN` for review and admin flows

Use them with the `X-Mock-User-Id` header in local development.
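As a concrete shape for that header, a hypothetical local request acting as the seeded admin user (the target path is illustrative; any backend endpoint on port 8080 takes the header the same way):

```shell
# Local-only mock auth: act as the seeded admin user.
MOCK_USER=local-admin
curl -s -H "X-Mock-User-Id: ${MOCK_USER}" "http://localhost:8080/actuator/health" \
  || echo "backend not reachable on :8080"
```

Mock auth is for local development only; production deployments should configure real authentication.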
Stop everything with:
```bash
make dev-all-down
```

Reset local dependencies and start from a clean slate with:

```bash
make dev-all-reset
```

Run `make help` to see all available commands.
OpenAPI types for the web client are checked into the repository. When backend API contracts change, regenerate the SDK and commit the updated generated file:
```bash
make generate-api
```

For a stricter end-to-end drift check, run:

```bash
./scripts/check-openapi-generated.sh
```

This starts local dependencies, boots the backend, regenerates the frontend schema, and fails if the checked-in SDK is stale.
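The "fail if stale" behavior boils down to a byte-for-byte comparison between the checked-in SDK and a fresh regeneration. A toy sketch of that final step (not the real script; the file paths are placeholders):

```shell
# Toy illustration of the drift check's core idea: regenerate, compare,
# and exit non-zero when the checked-in copy differs.
printf 'type A = { id: string }\n' > /tmp/sdk.checked-in
printf 'type A = { id: string }\n' > /tmp/sdk.regenerated
if cmp -s /tmp/sdk.checked-in /tmp/sdk.regenerated; then
  echo "SDK up to date"
else
  echo "SDK stale: regenerate and commit" >&2
  exit 1
fi
```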
Published runtime images are built by GitHub Actions and pushed to GHCR.
This is the supported path for anyone who wants a ready-to-use local
environment without building the backend or frontend on their machine.
Published images target both linux/amd64 and linux/arm64.
- Copy the runtime environment template.
- Pick an image tag.
- Start the stack with Docker Compose.
```bash
cp .env.release.example .env.release
```

Recommended image tags:

- `SKILLHUB_VERSION=edge` for the latest `main` build
- `SKILLHUB_VERSION=vX.Y.Z` for a fixed release
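For example, a pinned `.env.release` could contain just the version variable (the release number below is a placeholder):

```shell
# .env.release (illustrative): pin a specific release for reproducible
# deployments; use SKILLHUB_VERSION=edge to track the latest main build.
SKILLHUB_VERSION=v1.2.3
```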
Start the runtime:
```bash
docker compose --env-file .env.release -f compose.release.yml up -d
```

Then open:

- Web UI: http://localhost
- Backend API: http://localhost:8080
Stop it with:
```bash
docker compose --env-file .env.release -f compose.release.yml down
```

The runtime stack uses its own Compose project name, so it does not
collide with containers from `make dev-all`.
The runtime uses the existing `local,docker` profile combination, so it
is immediately usable with the same mock-auth flow as local development.
Available seeded users:

- `local-user`
- `local-admin`
Pass `X-Mock-User-Id` to the backend when you need an authenticated
session without configuring GitHub OAuth. If the GHCR package remains
private, run `docker login ghcr.io` before `docker compose up -d`.
The Phase 4 monitoring stack lives under monitoring/.
It provides a local Prometheus + Grafana pair that scrapes the backend's
Actuator Prometheus endpoint.
Start it with:
```bash
cd monitoring
docker compose -f docker-compose.monitoring.yml up -d
```

Then open:

- Prometheus: http://localhost:9090
- Grafana: http://localhost:3001 (admin/admin)
By default Prometheus scrapes `http://host.docker.internal:8080/actuator/prometheus`,
so start the backend locally on port 8080 first.
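That default corresponds to a Prometheus scrape job roughly like the following (a sketch of the configuration described above, not the checked-in file in `monitoring/`):

```yaml
# Sketch only: scrape the backend's Actuator Prometheus endpoint
# from inside the monitoring containers via host.docker.internal.
scrape_configs:
  - job_name: "skillhub-backend"
    metrics_path: "/actuator/prometheus"
    static_configs:
      - targets: ["host.docker.internal:8080"]
```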
Basic Kubernetes manifests are available under deploy/k8s/:
- `configmap.yaml`
- `secret.yaml.example`
- `backend-deployment.yaml`
- `frontend-deployment.yaml`
- `services.yaml`
- `ingress.yaml`
Apply them after creating your own secret:
```bash
kubectl apply -f deploy/k8s/configmap.yaml
kubectl apply -f deploy/k8s/secret.yaml
kubectl apply -f deploy/k8s/backend-deployment.yaml
kubectl apply -f deploy/k8s/frontend-deployment.yaml
kubectl apply -f deploy/k8s/services.yaml
kubectl apply -f deploy/k8s/ingress.yaml
```

A lightweight smoke test script is available at `scripts/smoke-test.sh`.
Run it against a local backend:

```bash
./scripts/smoke-test.sh http://localhost:8080
```

```
┌─────────────┐     ┌─────────────┐     ┌──────────────┐
│   Web UI    │     │  CLI Tools  │     │   REST API   │
└──────┬──────┘     └──────┬──────┘     └──────┬───────┘
       │                   │                   │
       └───────────────────┼───────────────────┘
                           │
                    ┌──────▼──────┐
                    │    Nginx    │
                    └──────┬──────┘
                           │
                    ┌──────▼──────┐
                    │ Spring Boot │  Auth · RBAC · Core Services
                    └──────┬──────┘
                           │
              ┌────────────┼────────────┐
              │            │            │
       ┌──────▼───┐  ┌─────▼────┐  ┌────▼────┐
       │PostgreSQL│  │  Redis   │  │ Storage │
       └──────────┘  └──────────┘  └─────────┘
```
Contributions are welcome. Please open an issue first to discuss what you'd like to change.
- Contribution guide: CONTRIBUTING.md
- Code of conduct: CODE_OF_CONDUCT.md
Apache License 2.0