It started with a code review skill. I had spent a couple of evenings teaching my AI coding assistant how I like code reviews done: what to check for, which patterns to flag, how to structure feedback. It worked well. Then I built a second skill for a different workflow, and it needed some of the same foundational instructions: the same tone guidelines, the same severity rubric, the same output format.
So I copied the shared parts. Pasted them into the new skill file. Moved on.
A week later I refined the severity rubric in one skill and forgot to update the other. My two skills now disagreed on what “critical” meant. I caught it by accident. The fix was another round of copy-paste.
This is a small, familiar kind of frustration. If you have written more than a couple of skills for your AI coding assistant, you have probably felt it too. You build something that works, you want to reuse part of it somewhere else, and the only mechanism available is copy-paste. There is no way to say “this skill depends on that skill” and have the shared parts pulled in automatically.
Every other ecosystem solved this problem a long time ago. Java has Maven. Go has modules. Python has pip. JavaScript has npm. But AI skills, the reusable units of expertise we encode for our AI assistants, have nothing. No manifest. No dependency declaration. No version pinning. No install command.
What if skills could declare dependencies on other skills?
## AI Skills Have No Distribution Story
If you use an AI coding assistant, you have probably built custom skills or rules that make it better at your specific workflow. Maybe it is a testing strategy, a deployment checklist, or a set of conventions for your team’s codebase. These are valuable artifacts that encode real expertise and took time to develop.
But the infrastructure around them has not caught up. The duplication problem I described is just one symptom of deeper structural gaps:
- A skill is a file (maybe a markdown file, maybe a few of them) with no manifest, no declared dependencies, and no version number. There is nothing that says “this is version 1.2.0 of the code-review skill, and it requires the severity-rubric skill at version 1.0.0.”
- Your teammate updates a shared skill. How do you know? How do you get the update? How do you pin to the version that works for your project while they experiment with changes?
- Sharing a skill means dropping a file in Slack, committing it to a repo, or writing setup instructions in a README. There is no `install` command.
- And if you use multiple tools, you install everything twice, manually, in different directories. Cursor expects skills in `~/.cursor/skills/`. Claude Code expects them in `~/.claude/skills/`.
## The Landscape, and What It Is Missing
Several tools have appeared recently to address this gap. Most take one of two approaches: wrap Git repositories with some convenience commands (clone, symlink, copy files into the right directory), or build custom registries from scratch.
These tools get the problem right. But they tend to miss three things.
First, transitive dependency resolution. If skill A depends on skill B, and skill B depends on skill C, a simple “git clone” does not help you. You need a resolver that walks the dependency tree, detects conflicts when two skills require different versions of the same dependency, and catches cycles before they cause infinite loops.
Second, a manifest format designed for AI artifacts. Skills are not code libraries. They have entrypoints (the main file your AI assistant loads), file lists (what to package), and dependencies that might come from different sources (an OCI registry, a Git repository, or both). Bolting this onto an existing package format means fighting the format instead of using it.
Third, and this is the one that changed my thinking: leveraging infrastructure that already exists. Most organizations already run OCI-compliant container registries. Quay, Docker Hub, GitHub Container Registry, Amazon ECR, Google Artifact Registry, Azure Container Registry. These registries come with access control, RBAC, image signing, vulnerability scanning, audit logs, and retention policies. They are already approved by security teams, already integrated into CI/CD pipelines, already understood by operations.
If you push your AI skills to the same registry that hosts your container images, they inherit all of that governance for free. No new vendor to evaluate. No new security review. No new infrastructure to provision. This matters when your team is fifty people, not five.
The OCI specification already supports distributing arbitrary opaque artifacts, not just container images. The infrastructure is already there. Why build something new?
## Introducing Striatum
Striatum is an OCI-native CLI for packaging, versioning, and distributing AI skills using standard OCI-compliant registries.
The name comes from the striatum, a brain region involved in reward, learning, and habit formation. Fitting for a tool that packages learned AI behaviors so they can be shared and reused.
Striatum is built on three design principles:
- Striatum artifacts are real OCI images with custom media types. They push to Quay, Docker Hub, GHCR, ECR, or any OCI-compliant registry using your existing registry credentials. There is no Striatum-specific registry, no new account to create, no proprietary protocol. If your organization already runs a container registry, you can push skills to it today.
- The `artifact.json` manifest declares dependencies with source type (OCI registry or Git repository), version, and coordinates. The resolver walks the full dependency tree transitively, deduplicates artifacts by name and version, detects cycles, and reports version conflicts with clear error messages listing which artifacts require which versions.
- A single `install` command places a skill and all its dependencies into Cursor's `~/.cursor/skills/` or Claude Code's `~/.claude/skills/` directory, globally or project-scoped. An install tracking database records what is installed where, so uninstalling a skill also cleans up orphaned dependencies.
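The version-conflict reporting mentioned in the second principle amounts to grouping resolved requirements by dependency name and flagging any name required at more than one version. A hedged Go sketch (the `Requirement` type and `conflicts` function are hypothetical, not Striatum's internals):

```go
package main

import (
	"fmt"
	"sort"
)

// Requirement records that one artifact requires another at a version.
type Requirement struct {
	Requirer, Name, Version string
}

// conflicts reports every dependency required at more than one version,
// listing which artifacts require which versions. Illustrative only.
func conflicts(reqs []Requirement) []string {
	byName := map[string]map[string][]string{} // name -> version -> requirers
	for _, r := range reqs {
		if byName[r.Name] == nil {
			byName[r.Name] = map[string][]string{}
		}
		byName[r.Name][r.Version] = append(byName[r.Name][r.Version], r.Requirer)
	}
	var msgs []string
	for name, versions := range byName {
		if len(versions) < 2 {
			continue // single version: deduplicated, no conflict
		}
		var vs []string
		for v := range versions {
			vs = append(vs, v)
		}
		sort.Strings(vs) // stable, readable error message
		msg := fmt.Sprintf("conflict on %s:", name)
		for _, v := range vs {
			msg += fmt.Sprintf(" %v require %s;", versions[v], v)
		}
		msgs = append(msgs, msg)
	}
	return msgs
}

func main() {
	msgs := conflicts([]Requirement{
		{"code-review", "severity-rubric", "1.0.0"},
		{"pr-triage", "severity-rubric", "2.0.0"},
	})
	for _, m := range msgs {
		fmt.Println(m)
	}
}
```

The point is the shape of the error: it names the contested dependency and attributes each version to its requirers, rather than failing with an opaque message.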
The current API version is `striatum.dev/v1alpha2`, supporting `kind: Skill`. The core workflow (`init`, `validate`, `pack`, `push`, `pull`, `install`) is stable and tested.
## The Artifact Manifest
Every Striatum artifact starts with an `artifact.json`:

```json
{
  "apiVersion": "striatum.dev/v1alpha2",
  "kind": "Skill",
  "metadata": {
    "name": "code-review",
    "version": "1.0.0",
    "description": "Opinionated code review skill with language-specific rules",
    "authors": ["Helber Belmiro"],
    "license": "Apache-2.0",
    "tags": ["code-review", "best-practices"]
  },
  "spec": {
    "entrypoint": "SKILL.md",
    "files": ["SKILL.md", "rules/go.md", "rules/python.md"]
  },
  "dependencies": [
    {
      "source": "oci",
      "registry": "quay.io",
      "repository": "hbelmiro/severity-rubric",
      "tag": "1.0.0"
    },
    {
      "source": "git",
      "url": "https://github.com/org/output-format.git",
      "ref": "v2.1.0"
    }
  ]
}
```
The `apiVersion` and `kind` fields give the manifest a typed, versioned schema. `metadata` carries identity and discoverability: name, version, description, authors, license, tags. `spec` defines what the artifact contains: an `entrypoint` (the main file your AI assistant loads) and a `files` list (everything to include in the package).
`dependencies` is where it gets interesting. Each dependency declares its `source` (currently `oci` or `git`) with source-specific fields. An OCI dependency points to a registry, repository, and tag. A Git dependency points to a URL and ref, with an optional `path` for monorepo subdirectories. The source-discriminated design means new backends can be added without breaking existing manifests.
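The source-discriminated shape is easy to consume from code: decode the shared `source` field, then interpret the source-specific coordinates. A minimal Go sketch, assuming a flat struct for both variants (the `Dependency` type and `Coordinates` method are illustrative, not Striatum's actual types):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Dependency covers both source types from the manifest; which fields
// are meaningful depends on Source. Illustrative, not Striatum's types.
type Dependency struct {
	Source     string `json:"source"` // "oci" or "git"
	Registry   string `json:"registry,omitempty"`
	Repository string `json:"repository,omitempty"`
	Tag        string `json:"tag,omitempty"`
	URL        string `json:"url,omitempty"`
	Ref        string `json:"ref,omitempty"`
	Path       string `json:"path,omitempty"` // optional monorepo subdirectory
}

// Coordinates renders a human-readable location for a dependency,
// validating the source discriminator along the way.
func (d Dependency) Coordinates() (string, error) {
	switch d.Source {
	case "oci":
		return fmt.Sprintf("%s/%s:%s", d.Registry, d.Repository, d.Tag), nil
	case "git":
		return fmt.Sprintf("%s@%s", d.URL, d.Ref), nil
	default:
		return "", fmt.Errorf("unknown source %q", d.Source)
	}
}

func main() {
	raw := `[
	  {"source":"oci","registry":"quay.io","repository":"hbelmiro/severity-rubric","tag":"1.0.0"},
	  {"source":"git","url":"https://github.com/org/output-format.git","ref":"v2.1.0"}
	]`
	var deps []Dependency
	if err := json.Unmarshal([]byte(raw), &deps); err != nil {
		panic(err)
	}
	for _, d := range deps {
		c, _ := d.Coordinates()
		fmt.Println(d.Source, c)
	}
}
```

Because unknown `source` values fail loudly in the `default` branch, adding a new backend later means extending one switch rather than reshaping the manifest.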
## A Quick Taste
Install Striatum via Homebrew or from source:
```shell
# Homebrew
brew tap hbelmiro/striatum https://github.com/hbelmiro/striatum
brew install striatum

# Or from source
go install github.com/hbelmiro/striatum/cmd/striatum@latest
```
The workflow from idea to installed skill:
```shell
# Scaffold a new skill project
striatum init --name my-skill --kind Skill --entrypoint SKILL.md

# Validate the manifest and check that dependencies resolve
striatum validate --check-deps

# Package into OCI format
striatum pack

# Push to a registry
striatum push quay.io/yourorg/skills/my-skill:1.0.0

# On another machine, install into Claude Code
striatum skill install --target claude quay.io/yourorg/skills/my-skill:1.0.0
```
Under the hood, `pack` bundles your files into an OCI Image Layout with custom media types. `push` uploads the image to the registry using standard OCI distribution and your existing registry credentials. `install` pulls the artifact and all its transitive dependencies, caches them locally with OCI digest verification (so unchanged artifacts are never re-downloaded), and copies everything into the target tool's skills directory.
You can also inspect a remote artifact before installing it:
```shell
striatum inspect quay.io/yourorg/skills/my-skill:1.0.0
```
For the full walkthrough, including Git dependencies, dependency resolution, and the install/uninstall lifecycle, see the demo in the docs.
## What's Next
Striatum is still early. The core workflow works, the manifest schema is stabilizing, and feedback from early users is shaping the direction.
The next major expansion is supporting artifact kinds beyond `Skill`. Prompt templates and RAG configurations are natural fits for the same packaging and distribution model, with the same needs around versioning, dependencies, and multi-target installation.
If this resonates with how you work, or if you have felt the friction of copy-pasting skills between projects or machines, I would love to hear from you. Try it, open an issue, or start a discussion on GitHub.
## Distribution Is a Feature
Those duplicated instructions I started with? They are now a shared dependency. I update the severity rubric in one place, bump the version, push it, and every skill that depends on it gets the update on the next install. No copy-paste. No drift. No forgetting.
AI skills deserve the same packaging infrastructure that applications have had for years. OCI registries give us that infrastructure. Battle-tested, widely deployed, already trusted by the organizations we work in. Striatum just connects the dots.
To see how I use it in practice, check out my skills monorepo.
