This chapter provides guidelines for common scenarios when working with the Open Component Model, focusing on CI/CD, build, and publishing processes.
Use Public Schema for Validation and Auto-Completion of Component Descriptors
The Open Component Model (OCM) provides a public schema to validate and offer auto-completion of component constructor files
used to create component descriptors.
This schema is available at https://ocm.software/schemas/configuration-schema.yaml.
To use this schema in your IDE, you can add the following line to your component constructor file:
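Assuming your editor uses the yaml-language-server (e.g., via the VS Code YAML extension), the line is a modeline comment at the top of the file:

```yaml
# yaml-language-server: $schema=https://ocm.software/schemas/configuration-schema.yaml
```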
This line tells the YAML language server to use the OCM schema for validation and auto-completion.
Traditional automated builds often have unrestricted internet access, which can lead to several challenges in enterprise environments:
Limited control over downloaded artifacts
Potential unavailability of required resources
Security risks associated with write permissions to external repositories
Best practice: Implement a two-step process:
a) Build: Create artifacts in a controlled environment, using local mirrors when possible.
b) Publish: Use a separate, secured process to distribute build results.
OCM supports this approach through filesystem-based OCM repositories, allowing you to generate Common Transport Format (CTF) archives for component versions. These archives can then be securely processed and distributed.
Typical automated builds have access to the complete internet ecosystem. This involves downloading content required for a build (e.g., go mod tidy), but also uploading build results to repositories (e.g., OCI image registries).
For builds in enterprise environments this can lead to several challenges:
Limited control over downloaded artifacts
Potential unavailability of required resources
Security risks associated with write permissions to external repositories
The first problem might be acceptable, because the build results can later be analyzed by scanners to figure out what has been packaged, and the findings can be triaged in an asynchronous step.
The second problem can be solved by mirroring the required artifacts and instrumenting the build to use the artifacts from the local mirror. Such a mirror would also offer a solution for the first problem and act as a target for various scanning tools.
The third problem may pose severe security risks: the build procedure as well as the downloaded artifacts could be used to steal registry credentials or, at the very least, corrupt the content of those repositories.
This can be avoided by establishing a contract between the build procedure of a component/project and the build system: the build provides its result as a local file or file structure, which the build system then pushes to wherever it needs to go. This way, the execution of the build procedure does not need write permissions to any repository, because it never pushes build results itself.
The Open Component Model supports such processes through filesystem-based OCM repositories, which are able to host any type of content, regardless of its technology. The task of the build is then to provide such a CTF archive for the OCM component versions generated by the build. This archive can then be used by the build system to do whatever is required to make the content accessible to others.
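As a sketch, such a two-step process might look like this (file and repository names are illustrative):

```shell
# Step 1 (build, no registry write access): compose the component
# versions into a local, filesystem-based CTF archive.
ocm add componentversions --create --file gen/ctf component-constructor.yaml

# Step 2 (publish, executed by the build system with its own credentials):
# transfer the CTF archive to the target OCI registry.
ocm transfer ctf gen/ctf ghcr.io/acme/ocm-repo
```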
The composition of such archives is described in the Getting Started section.
To secure further processes, a certified build system could even sign the content with its build-system certificate, enabling follow-up processes to verify that the involved component versions were provided by accepted and well-known processes.
Note: This section provides information only on building multi-arch images. Referencing a multi-arch image does not differ from referencing an image for a single platform; see the ocm add resources command of the OCM CLI.
At the time of writing this guide, Docker is not able to build multi-architecture (multi-arch / multi-platform) images natively. Instead, the buildx plugin is used. However, this implies building and pushing images in one step to a remote container registry, as the local Docker image store does not support multi-arch images (for additional information, see the Multi-arch build and images, the simple way blog post).
The OCM CLI has some built-in support for dealing with multi-arch images during the
component version composition (ocm add resources).
This allows building all artifacts locally and pushing them to a container registry in a separate step. It is done by building single-arch images in a first step (still using buildx for cross-platform building). In a second step, all images are bundled into a multi-arch image, which is stored as a local artifact in a component archive (CA) or common transport format (CTF) archive. This archive can be processed as usual (e.g., for signing or transfer to other locations).
When pushed to an image registry, multi-arch images are generated with a multi-arch image manifest (an OCI image index).
The following steps illustrate this procedure. For a simple project with a Go binary and a Helm chart, assume the following file structure:
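A minimal layout might look like this (file names are illustrative):

```
.
├── Dockerfile
├── go.mod
├── main.go
└── helmchart
    ├── Chart.yaml
    ├── values.yaml
    └── templates/
```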
The Dockerfile has the following content:
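A Dockerfile for such a project could look like the following multi-stage build (a sketch; the exact content in the example project may differ). buildx automatically sets TARGETOS and TARGETARCH for the requested platform:

```dockerfile
FROM golang:1.22 AS build
ARG TARGETOS
ARG TARGETARCH
WORKDIR /app
COPY . .
# Cross-compile for the platform requested via --platform.
RUN CGO_ENABLED=0 GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o /simpleserver .

FROM scratch
COPY --from=build /simpleserver /simpleserver
ENTRYPOINT ["/simpleserver"]
```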
Now we want to build images for two platforms using Docker and buildx. Note the --load option for
buildx to store the image in the local Docker store. Note the architecture suffix in the tag to be
able to distinguish the images for the different platforms. Also note that the tag has a different
syntax than the --platform argument for buildx as slashes are not allowed in tags.
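For example, for linux/amd64 (the image name simpleserver and the version are hypothetical):

```shell
docker buildx build --load --platform linux/amd64 -t simpleserver:0.1.0-linux-amd64 .
```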
Repeat this command for the second platform:
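Using the hypothetical image name simpleserver, this could be:

```shell
docker buildx build --load --platform linux/arm64 -t simpleserver:0.1.0-linux-arm64 .
```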
Check that the images were created correctly:
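For example, by listing the images in the local Docker store (hypothetical image name):

```shell
docker image ls simpleserver
```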
In the next step, we create a component archive and a transport archive.
Create the file resources.yaml. Note the variants in the image input and the type dockermulti:
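Such a resources.yaml might look like the following sketch (the image names correspond to the hypothetical tags used above; the exact file in the example project may differ):

```yaml
---
name: chart
type: helmChart
input:
  type: helm
  path: helmchart
---
name: image
type: ociImage
version: "0.1.0"
input:
  type: dockermulti
  repository: simpleserver
  variants:
    - "simpleserver:0.1.0-linux-amd64"
    - "simpleserver:0.1.0-linux-arm64"
```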
The input type dockermulti adds a multi-arch image composed by the given dedicated images from the local Docker
image store as local artifact to the component archive.
Add the described resources to your component archive:
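Assuming the component archive was created in gen/ca, this could look like:

```shell
ocm add resources gen/ca resources.yaml
```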
What happened?
The input type dockermulti is used to compose a multi-arch image on the fly. Like the input type docker, it reads images from the local Docker daemon. In contrast to docker, you can list multiple images, created for different platforms, for which an OCI index manifest is created to describe a multi-arch image. The complete set of blobs is then packaged as an artifact set archive and put into the component version as a single resource.
The resulting component-descriptor.yaml in gen/ca is:
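An abbreviated sketch of what this descriptor might look like (component name, provider, and digest are placeholders):

```yaml
meta:
  schemaVersion: v2
component:
  name: acme.org/demo/simpleserver
  version: 0.1.0
  provider: acme.org
  resources:
    - name: image
      type: ociImage
      version: 0.1.0
      relation: local
      access:
        type: localBlob
        localReference: sha256.4e26...
        mediaType: application/vnd.oci.image.index.v1+tar+gzip
```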
Note that there is only one resource of type image with media-type application/vnd.oci.image.index.v1+tar+gzip
which is the standard media type for multi-arch images.
The file sha256.4e26… contains the multi-arch image packaged as OCI artifact set:
You can create a common transport archive from the component archive.
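For example (paths are illustrative):

```shell
ocm transfer componentarchive gen/ca gen/ctf
```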
Or you can push it directly to the OCM repository:
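For example (the target repository is hypothetical):

```shell
ocm transfer componentarchive gen/ca ghcr.io/acme/ocm-repo
```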
The repository should now contain three additional artifacts. Depending on the OCI registry and the corresponding UI, you may see that the uploaded OCI image is a multi-arch image. For example, on GitHub Packages, under the attribute OS/Arch you can see two platforms, linux/amd64 and linux/arm64.
For automation and reuse purposes you may consider templating resource files and Makefiles (see below).
Developing with the Open Component Model is usually an iterative process of building artifacts, generating component descriptors, analyzing, and finally publishing them. To simplify and speed up this process, it should be automated using a build tool. One option is to use a Makefile.
The following example can be used as a starting point and can be modified according to your needs.
In this example we will automate the same example as in the sections before:
Creating a multi-arch image from Go sources from a Git repository using the Docker CLI
Packaging the image and a Helm chart into a common transport archive
The OCM CLI must be installed and be available in your PATH
The Makefile is located in the top-level folder of a Git project
Operating system is Unix/Linux
A sub-directory local can be used for local settings, e.g., environment variables, RSA keys, …
A sub-directory gen will be used for artifacts generated by the make build command
It is recommended to add local/ and gen/ to the .gitignore file
We use the following file system layout for the example:
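For example (directory names as described above, project files illustrative):

```
<project-root>
├── Makefile
├── Dockerfile
├── resources.yaml
├── go.mod
├── main.go
├── helmchart/
├── local/   # local settings (ignored by Git)
└── gen/     # generated artifacts (ignored by Git)
```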
This Makefile can be used and adapted for your own projects.
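A condensed sketch of such a Makefile, covering a subset of the targets listed below (variable names, paths, and command options are illustrative and may differ between OCM CLI versions; recipe lines must be indented with tabs):

```makefile
NAME      ?= simpleserver
PROVIDER  ?= acme.org
COMPONENT  = $(PROVIDER)/demo/$(NAME)
OCM_REPO  ?= ghcr.io/acme/ocm-repo
VERSION   ?= $(shell git describe --tags --always)
COMMIT     = $(shell git rev-parse HEAD)
PLATFORMS ?= linux/amd64 linux/arm64
MULTI     ?= true
GEN        = gen

build:
	go build -o $(GEN)/$(NAME) .

multi:
	@for p in $(PLATFORMS); do \
	  tag="$(NAME):$(VERSION)-$$(echo $$p | tr / -)"; \
	  docker buildx build --load --platform $$p -t $$tag .; \
	done

ca: multi
	ocm create componentarchive --file $(GEN)/ca $(COMPONENT) $(VERSION) --provider $(PROVIDER)
	ocm add resources $(GEN)/ca --templater spiff \
	  NAME=$(NAME) VERSION=$(VERSION) COMMIT=$(COMMIT) \
	  MULTI=$(MULTI) PLATFORMS="$(PLATFORMS)" resources.yaml

ctf: ca
	ocm transfer componentarchive $(GEN)/ca $(GEN)/ctf

push: ctf
	ocm transfer ctf $(GEN)/ctf $(OCM_REPO)

clean:
	rm -rf $(GEN)
```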
The Makefile supports the following targets:
build (default) simple Go build
version show current VERSION of the GitHub repository
image build a local Docker image
multi build multi-arch images with Docker
ca execute build and create a component archive
ctf create a common transport format archive
push push the common transport archive to an OCI registry
info show variables used in Makefile (version, commit, etc.)
describe display the component version in a tree-form
descriptor show the component descriptor of the component version
transport transport the component from the upload repository into another OCM repository
clean delete all generated files (but does not delete Docker images)
The variables assigned with ?= at the beginning can be set from outside and override the default
declared in the Makefile. Use either an environment variable or an argument when calling make.
The Makefile uses a dynamic list of generated platforms for the images. You can just set the PLATFORMS variable:
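For example, to build only a single variant (variable values are illustrative):

```shell
PLATFORMS="linux/amd64" MULTI=true make ctf
```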
If MULTI is set to true, the variable PLATFORMS is evaluated to decide which image variants will be built. This has to be reflected in the resources.yaml: it has to use the input type dockermulti and list all the variants which should be packaged into a multi-arch image. This list depends on the content of the PLATFORMS Make variable.
The OCM CLI supports this by enabling templating mechanisms for the content by selecting a templater
using the option --templater .... The example uses the Spiff templater.
The variables given to the add resources command are passed to the templater. The template looks
like:
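A simplified sketch of such a Spiff template (the exact expressions in the example project may differ; the `(( ... ))` parts are dynaml expressions):

```yaml
name: image
type: ociImage
version: (( values.VERSION ))
input:
  # Choose the input type depending on the MULTI flag.
  type: (( bool(values.MULTI) ? "dockermulti" :"docker" ))
  repository: (( values.NAME ))
  # For multi-arch builds, map the platform list to the image tags
  # produced by docker buildx; otherwise undefine the field with ~~.
  variants: (( bool(values.MULTI) ? map[split(" ", values.PLATFORMS)|p|-> values.IMAGE "-" replace(p, "/", "-")] :~~ ))
  path: (( bool(values.MULTI) ? ~~ :values.IMAGE ))
```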
By using a variable values.MULTI, the command distinguishes between a single Docker image and a multi-arch image.
With map[], the platform list from the Makefile is mapped to a list of tags created by the
docker buildx command used in the Makefile. The value ~~ is used to undefine the yaml fields not
required for the selected case (the template can be used for multi- and single-arch builds).
Pipeline infrastructures are heterogeneous, so there is no universal answer on how to integrate a build pipeline with OCM. Usually, the simplest way is to use the OCM command line interface. Below you will find an example using GitHub Actions.
There are two repositories dealing with GitHub actions:
The first one provides various actions that can be
called from a workflow. The second one
provides the required installations of the OCM parts into the container.
A typical workflow for a build step will create a component version and a transport archive:
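A sketch of such a workflow step, using the setup action from the repositories mentioned above to install the OCM CLI (action reference, file names, and paths are illustrative):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Install the OCM CLI into the runner.
      - uses: open-component-model/ocm-setup-action@main
      # Compose the component version into a local CTF archive.
      - name: create component version
        run: ocm add componentversions --create --file gen/ctf component-constructor.yaml
```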
This creates a component version for the current build. Additionally, a transport archive
may be created or the component version along with the built container images may be uploaded to an
OCI registry, etc.
More documentation is available here. A full example can be found in the sample GitHub repository.
Looking at the settings file shows that some variables, like the version or the commit, change with every build or release. In many cases, these variables will be auto-generated during the build. Other variables, like the versions of third-party components, change only from time to time and are often set manually by an engineer or release manager. It is useful to distinguish between static and dynamic variables. Static files can be checked into the source control system and maintained manually. Dynamic variables can be generated during the build.
For analyzing and debugging the content of a transport archive, there are some helpful commands to analyze what is contained in the archive and what is stored in which blob:
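For example (archive path illustrative; the exact option set may differ between CLI versions):

```shell
# Show the component versions contained in the archive, as a tree.
ocm get componentversions --recursive -o tree gen/ctf

# List the resources of all component versions, including access information.
ocm get resources --recursive -o wide gen/ctf
```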
The transport archive created from a component-constructor file using the command ocm add componentversions --create ... does not automatically resolve image references to external OCI registries and store them in the archive. If you want to create a self-contained transport archive with all images stored as local artifacts, you need to use the --copy-resources option of the ocm transfer ctf command. This will copy all external images to the blobs directory of the archive.
Note that this archive can become huge if there are many external images involved!
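For example (paths are illustrative):

```shell
# Create a self-contained archive by copying external resources
# (e.g., referenced OCI images) into the archive as local blobs.
ocm transfer ctf --copy-resources gen/ctf gen/ctf-complete
```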
Configure rarely changing variables in a static file and generate dynamic variables
during the build from the environment. See the Static and Dynamic Variable Substitution section above.