March 30, 2026 · 10 minutes
We are excited to announce OCM v2 — a ground-up rebuild of the Open Component Model tooling stack: a new CLI, Kubernetes controllers, and a Go library, designed from the start for modularity, security, and community contribution. The entire stack continues to implement the OCM Specification v2, ensuring full compatibility with the standard that defines how components, resources, and signatures are represented.
All of OCM v2 lives in a single repository: github.com/open-component-model/open-component-model
The original OCM libraries served the project well, but incremental fixes were not enough.
OCM v2 is the result: modular design, decoupled APIs, a smaller dependency footprint, and a codebase built for community contribution from day one. Learn more in our Overview.
OCM v2 ships three components — a CLI for interactive and CI/CD use, Kubernetes controllers for GitOps-native deployment, and a Go library that underpins both. All three are developed, versioned, and released together from a single monorepo, sharing one dependency tree and one test suite.
flowchart TD
subgraph mono["Monorepo"]
subgraph cli["CLI"]
direction LR
Pack["Pack"] --> Sign["Sign"] --> Transport["Transport"]
end
subgraph ctrl["Kubernetes Controllers"]
direction LR
Repo["Repository CR"] --> Comp["Component CR"] --> Res["Resource CR"] --> Deployer["Deployer CR"]
end
subgraph lib["Go Bindings"]
direction LR
Bindings["Bindings"] --- Deps["Shared Deps"]
end
cli --- lib
ctrl --- lib
end
The v2 CLI implements the full Pack, Sign, Transport, Deploy workflow:
Bundle software artifacts into component versions.
Establish provenance with RSA-based PKI signatures.
Move components between registries or across air gaps.
Deploy applications using OCM controllers.
# Create a component version from a constructor file
ocm add cv --file ./transport-archive component-constructor.yaml
# Sign it
ocm sign cv --signature release ghcr.io/acme.org/product:1.0.0
# Transfer to a CTF archive for air-gapped delivery
ocm transfer cv --copy-resources --recursive \
ghcr.io/acme.org/product:1.0.0 \
ctf::./airgap-transport.ctf
# Import into the target registry on the other side
ocm transfer cv --copy-resources --recursive \
ctf::./airgap-transport.ctf//acme.org/product:1.0.0 \
target-registry.internal
# Verify signatures survived the journey
ocm verify cv --signature release target-registry.internal//acme.org/product:1.0.0

Get started by installing the CLI.
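For reference, the constructor file consumed in the first step could look roughly like this. This is an illustrative sketch, not an authoritative schema: the component name, resource names, and image reference are invented, and the field layout follows the familiar v1 constructor format — consult the component-constructor reference docs for the exact v2 schema.

```yaml
# component-constructor.yaml — illustrative sketch, not authoritative
components:
  - name: acme.org/product
    version: 1.0.0
    provider:
      name: acme.org
    resources:
      # a container image referenced from an external OCI registry
      - name: app-image
        type: ociImage
        version: 1.0.0
        access:
          type: ociArtifact
          imageReference: ghcr.io/acme/app:1.0.0
      # a Helm chart embedded into the component version as a local blob
      - name: app-chart
        type: helmChart
        version: 1.0.0
        input:
          type: helm
          path: ./charts/app
```

Resources with an access entry stay references to external locations, while resources with an input entry are read from disk and embedded as local blobs at build time.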
The new controllers bring GitOps-native deployment of OCM component versions to Kubernetes with three core custom resources:
flowchart LR
Repo["Repository"] --> Comp["Component"] --> Res["Resource"] --> Deployer["Deployer"]
The Deployer takes an OCM Resource (containing Kubernetes manifests, a ResourceGraphDefinition, or other deployable content) and applies it directly to the cluster using ApplySet semantics.
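To make the chain concrete, a minimal set of custom resources might look as follows. This is a hedged sketch: the API group and version shown here and every field name are assumptions for illustration, not the authoritative v2 CRD schemas — see the Kubernetes API reference section for those.

```yaml
# Illustrative sketch — consult the Kubernetes API reference for the real CRD schemas
apiVersion: delivery.ocm.software/v1alpha1   # assumed group/version
kind: Repository
metadata:
  name: acme-repo
spec:
  repositorySpec:
    baseUrl: ghcr.io/acme.org                # OCM repository to watch
---
apiVersion: delivery.ocm.software/v1alpha1
kind: Component
metadata:
  name: product
spec:
  repositoryRef:
    name: acme-repo
  component: acme.org/product                # component to track
  semver: ">=1.0.0"                          # version constraint
---
apiVersion: delivery.ocm.software/v1alpha1
kind: Resource
metadata:
  name: product-manifests
spec:
  componentRef:
    name: product
  resource:
    name: manifests                          # resource inside the component version
---
apiVersion: delivery.ocm.software/v1alpha1
kind: Deployer
metadata:
  name: product-deployer
spec:
  resourceRef:
    name: product-manifests                  # applied to the cluster with ApplySet semantics
```

Each object references the one before it, mirroring the Repository → Component → Resource → Deployer chain above.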
For advanced deployment workflows, the recommended pattern is to package a
kro ResourceGraphDefinition inside an OCM component. The Deployer applies the RGD to the cluster, and kro reconciles it into a CRD that operators can instantiate — allowing you to ship complex, parameterised deployment instructions alongside the software itself.
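As a sketch of this pattern, the OCM resource could carry an RGD like the one below. The names and the deployment it templates are invented for illustration; the schema syntax follows kro's published ResourceGraphDefinition format, which should be checked against the kro documentation.

```yaml
# Illustrative ResourceGraphDefinition packaged as an OCM resource
apiVersion: kro.run/v1alpha1
kind: ResourceGraphDefinition
metadata:
  name: product-app
spec:
  schema:
    apiVersion: v1alpha1
    kind: ProductApp                   # the CRD kro generates for operators to instantiate
    spec:
      replicas: integer | default=2    # parameter exposed to operators
  resources:
    - id: deployment
      template:
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: product
        spec:
          replicas: ${schema.spec.replicas}
          selector:
            matchLabels: {app: product}
          template:
            metadata:
              labels: {app: product}
            spec:
              containers:
                - name: app
                  image: ghcr.io/acme/app:1.0.0
```

Once the Deployer has applied the RGD, operators create ProductApp objects with their own parameter values, and kro reconciles the underlying resources.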
FluxCD integrates naturally for GitOps-style delivery on top of this. We remain active upstream contributors to kro and are invested in its continued growth as a key part of the OCM deployment story.
Understand the OCM controller design.
How the Deployer CR applies resources using server-side apply.
Prepare a local cluster with kro and Flux.
Your first controller-based deployment.
Deploy Kubernetes manifests directly with the Deployer.
The reference docs for component-constructor and component-descriptor now feature an interactive schema renderer — a TypeScript/Preact-based viewer that lets you explore the JSON schema inline. A new Kubernetes API reference section covers the CRD schemas for Repository, Component, Resource, and Deployer objects.
Clean, well-documented Go bindings for programmatic OCM interaction. Library, CLI, and controllers share a single set of dependencies and are versioned together — no more compatibility issues from separate release cycles.
We welcome everyone to adopt the bindings and contribute as the Apeiro ecosystem grows.
OCM v2 is now natively compliant at the resource level with both the OCI Image Format Specification and the OCI Distribution Specification.
This is not an afterthought but a first-class design constraint. Every component version is stored as a standard OCI Image Index, pushable to and pullable from any spec-compliant registry without OCM-specific extensions. Tools that speak OCI can now work natively with OCM resources.
This includes full support for the OCI Referrers API: component version descriptors are attached as referrers to their subject manifests, enabling efficient version listing and discovery directly through the Distribution Spec — no OCM-specific registry queries needed.
The most significant OCI compatibility improvement in v2 concerns local blobs: resources embedded directly inside a component version rather than referenced from an external location.
In the legacy stack, every local blob — including native OCI artifacts like container images and Helm charts — was wrapped in an OCM-specific ArtifactSet format, serialised as a tar archive, and stored as an opaque layer. This meant that even a fully valid OCI image inside an OCM component could not be pulled with docker pull or helm pull. OCI-native tooling had no way in.
OCM v2 solves this with a new OCI-compatible storage mapping. The top-level OCI Image Index now distinguishes between two kinds of local blobs:
Native OCI artifacts such as container images and Helm charts, stored as standard OCI manifests that OCI tooling can address directly.
Other local blobs, stored with a globalAccess pointer that allows direct access.

The result: a Helm chart packaged as an OCM local blob can be pulled natively with Helm’s OCI support (helm pull oci://...) and a container image can be fetched with docker pull — without any OCM tooling in the path.
This layout is fully backwards-compatible: the new CLI reads and writes the index-based format, while the legacy CLI can read it by falling back to the descriptor manifest when no index is present.
The transport pipeline benefits too. Because native OCI manifests are stored as proper OCI objects, layer deduplication and concurrent uploads work at the registry level — no more tar-wrapping every artifact before transit.
The CLI and controllers are validated through conformance scenarios that exercise the entire stack end-to-end.
The first conformance scenario builds OCM components, signs them, transfers them through a simulated air gap via CTF archives, imports them into an isolated cluster registry, and deploys them using OCM controllers — validating that signatures, resources, and references survive the entire journey intact.
We plan to add more conformance scenarios covering additional delivery patterns, ensuring every release meets a growing baseline of real-world validation.
OCM v2 is not just a technical reboot — it is a community reboot. Since the adoption of OCM by NeoNephos, we have established new governance structures and communication channels to make collaboration easier and more transparent.
The OCM TSC operates as part of NeoNephos and provides strategic oversight for the OCM project. It sets the technical direction, coordinates across SIGs, and ensures that the project evolves in alignment with the needs of its growing community.
The SIG Runtime is the primary governance body for the OCM runtime implementation. It oversees the development of the CLI, controllers, and library, ensuring that technical decisions are made openly and aligned with the broader project direction. Technical decisions are centrally tracked and aligned with the TSC via ADRs.
There are multiple ways to participate in the OCM community (see our community engagement page for the full overview):
the #open-component-model channel
docs/community/ in the monorepo

Alongside the v2 release, we are planning the kickstart of SIG Spec — a dedicated special interest group for the OCM specification itself. SIG Spec will own the evolution of the specification, coordinate community input on proposals, and ensure the spec stays aligned with real-world implementation needs.
This goes hand in hand with the move to the Community Specification License, planned for Q2 2026. Together, SIG Spec and the license change make the specification a truly community-governed artifact — anyone can participate in its evolution under clear, fair terms.
We are working on Sigstore integration for keyless signing and verification. This will complement the existing RSA-based PKI approach with a zero-trust model, removing the need to manage and distribute signing keys while providing the same level of provenance assurance.
These improvements will steadily close the feature gap with the previous implementation, with full parity as the goal.
The legacy OCM stack will be supported until at least the end of 2026. Existing users can migrate at their own pace.
The documentation site now serves two versions: one for v1 and one for v2.

Both v1 and v2 implement the same OCM Specification, so component versions created with either stack are fully interoperable. Many CLI commands are cross-compatible, and we made a deliberate effort to keep the .ocmconfig structure and command syntax consistent, so existing users and CI/CD pipelines should find the transition straightforward.
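For illustration, a typical .ocmconfig carrying registry credentials looks roughly like this. The layout follows the v1 configuration format, which v2 aims to keep consistent; the hostname and credential values are placeholders.

```yaml
# .ocmconfig — sketch based on the v1 configuration format
type: generic.config.ocm.software/v1
configurations:
  - type: credentials.config.ocm.software
    consumers:
      - identity:
          type: OCIRegistry
          hostname: ghcr.io            # registry these credentials apply to
        credentials:
          - type: Credentials
            properties:
              username: my-user        # illustrative values
              password: my-token
```

Because the structure carries over, a pipeline that already provisions this file for the v1 CLI should need little or no change for v2.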
Ready to try OCM v2? Pick your path:
Install the OCM CLI and learn the Pack, Sign, Transport, Deploy workflow.
Set up a Kubernetes environment with OCM controllers, kro, and Flux.
Install the OCM CLI
Create Component Versions
Sign and Verify
Transfer across an Air Gap
Set Up your Runtime
Deploy a Helm Chart
Deploy Raw Manifests with the Deployer
OCM is open source and we welcome contributions of all kinds — code, documentation, bug reports, and feature requests.
We are looking forward to building the future of sovereign cloud delivery together.