Best Practices

This chapter contains guidelines for common scenarios when working with the Open Component Model.

Separate Build and Publish

Typical automated builds have access to the complete internet ecosystem. This involves downloading content required for a build (e.g. go mod tidy), but also uploading build results to repositories (e.g. OCI image registries).

For enterprise builds this approach has several disadvantages:

  • there is no control over what kinds of artifacts are downloaded from the internet.
  • there is no guarantee that these artifacts are still available tomorrow; they might also be temporarily inaccessible.
  • the build procedure needs write permissions to external repositories.

The first problem might be acceptable, because the build results can be analyzed by scanners later to figure out what has been packaged, and the findings can be triaged in an asynchronous step.

The second problem is more severe. It can be solved by mirroring the required artifacts and instrumenting the build (in a technology-dependent way) to use the local mirror. Such a mirror also gives back control over the possible input vector of a build by restricting access to the mirror's content.
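For a Go build, for instance, such a mirror can be enforced with the standard GOPROXY mechanism; the following sketch illustrates this, where the mirror URL is a made-up placeholder:

```shell
# Route all Go module downloads through a company-internal mirror
# (the mirror URL below is a placeholder, not from this guide).
export GOPROXY="https://mirror.example.com/goproxy"
# Note: without a trailing ",direct" entry, the go tool will not fall
# back to fetching modules directly from the internet.
echo "$GOPROXY"
```

Other build technologies offer similar settings to pin their download sources to a mirror.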

The third problem poses severe security risks, because the build procedure as well as the downloaded artifacts may be used to steal registry credentials or to corrupt the content of those repositories.

This can be avoided by establishing a contract between the build procedure of a component/project and the build system: the build provides its result as a local file or file structure, which the build system then pushes wherever it needs to go. This way the execution of the build procedure does not need write permissions to any repository, because it never pushes build results.

The Open Component Model supports such processes with filesystem-based OCM repositories (Common Transport Format, CTF), which are able to host any type of content, regardless of its technology. The task of the build is then to provide such a CTF archive for the OCM component versions generated by the build. This archive can be used by the build system to do whatever is required to make the content accessible to others.
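As a sketch, such a contract could look as follows; the file names and the $OCM_REPO variable are assumptions, while the commands themselves are the ocm CLI commands used later in this chapter:

```shell
# Build step: compose the component versions into a local CTF archive.
# No write permissions to any external repository are required here.
build() {
  ocm add componentversions --create --file gen/ctf components.yaml
}

# Publish step: executed later by the build system with its own credentials;
# it pushes the CTF archive produced by the build step.
publish() {
  ocm transfer ctf gen/ctf "$OCM_REPO"
}
```

The important point is the separation: only the publish step, run under the control of the build system, ever holds registry credentials.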

The composition of such archives is described in the Getting Started Section.

To secure further processes, a certified build system could even sign the content with its build-system certificate to enable follow-up processes to verify that the involved component versions were provided by accepted and well-known processes.
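As a sketch, such a signing step could be applied to the CTF archive before publishing; the signature name and key file below are assumptions:

```shell
# Sign the component versions in the CTF archive with the build system's
# private key (signature name and key path are placeholders).
sign_build_result() {
  ocm sign componentversion --signature acme-build --private-key build.key gen/ctf
}
```

Consumers can then verify the signature against the build system's public key before processing the content further.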

Building multi-arch images

At the time of writing this guide, Docker is not able to build multi-architecture (multi-arch) images natively. Instead, the buildx plugin is used. However, this implies building and pushing images to a remote container registry in one step, as the local Docker image store does not support multi-arch images.

The OCM CLI therefore has built-in support for dealing with multi-arch images during component version composition (ocm add resources). This allows building all artifacts locally and pushing them to a container registry in a separate step. In a first step, single-arch images are built (still using buildx for cross-platform building). In a second step, all images are bundled into a multi-arch image, which is stored as a local artifact in a component archive (CA) or common transport format (CTF) archive. This archive can be processed as usual (e.g. for signing or transfer to other locations). When pushed to an image registry, a multi-arch image with a multi-arch image manifest is generated.

The following steps illustrate this procedure. For a simple project with a Go binary and a Helm chart, assume the following file structure:

$ tree .
├── Dockerfile
├── go.mod
├── helmchart
│   ├── Chart.yaml
│   ├── templates
│   │   ├── ...
│   └── values.yaml
└── main.go

The Dockerfile has the following content:

FROM golang:1.19 AS builder

WORKDIR /app
COPY go.mod ./
COPY main.go ./
# RUN go mod download
RUN go build -o /helloserver main.go

# Create a new release build stage

# Set the working directory to the root directory path
# Copy over the binary built from the previous stage
COPY --from=builder /helloserver /helloserver

ENTRYPOINT ["/helloserver"]

Now we want to build images for two platforms using docker and buildx. Note the --load option for buildx, which stores the image in the local Docker store. Note the architecture suffix in the tag, which distinguishes the images for the different platforms. Note also that the tag has a different syntax than the --platform argument for buildx, as slashes are not allowed in tags.
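The tag suffix can be derived mechanically from the platform string, as is also done later in the Makefile:

```shell
# Convert a buildx platform name into a tag-safe suffix by replacing
# slashes (not allowed in image tags) with dashes.
platform="linux/amd64"
tag_suffix=$(echo "$platform" | sed -e 's:/:-:g')
echo "$tag_suffix"   # prints "linux-amd64"
```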

$ # path to your OCI registry
$ PLATFORM=linux/amd64
$ VERSION=0.1.0

$ docker buildx build --load -t ${TAG_PREFIX}/simpleserver:0.1.0-linux-amd64 --platform linux/amd64 .
[+] Building 54.4s (14/14) FINISHED
 => [internal] load build definition from Dockerfile                                                                          0.0s
 => => transferring dockerfile: 660B                                                                                          0.0s
 => [internal] load .dockerignore                                                                                             0.0s
 => => transferring context: 2B                                                                                               0.0s
 => [internal] load metadata for                                                       1.6s
 => [internal] load metadata for                                                                1.2s
 => [builder 1/5] FROM  49.2s
 => => resolve          0.0s
 => => sha256:14a70245b07c7f5056bdd90a3d93e37417ec26542def5a37ac8f19e437562533 156B / 156B                                    0.2s
 => => sha256:a2b47720d601b6c6c6e7763b4851e25475118d80a76be466ef3aa388abf2defd 148.91MB / 148.91MB                           46.3s
 => => sha256:52908dc1c386fab0271a2b84b6ef4d96205a98a8c8801169554767172e45d8c7 85.97MB / 85.97MB                             42.9s
 => => sha256:195ea6a58ca87a18477965a6e6a8623112bde82c5b568a29c56ce4581b6e6695 54.59MB / 54.59MB                             33.8s
 => => sha256:c85a0be79bfba309d1f05dc40b447aa82b604593531fed1e7e12e4bef63483a5 10.88MB / 10.88MB                              3.4s
 => => sha256:e4e46864aba2e62ba7c75965e4aa33ec856ee1b1074dda6b478101c577b63abd 5.16MB / 5.16MB                                1.5s
 => => sha256:a8ca11554fce00d9177da2d76307bdc06df7faeb84529755c648ac4886192ed1 55.04MB / 55.04MB                             19.3s
 => => extracting sha256:a8ca11554fce00d9177da2d76307bdc06df7faeb84529755c648ac4886192ed1                                     1.1s
 => => extracting sha256:e4e46864aba2e62ba7c75965e4aa33ec856ee1b1074dda6b478101c577b63abd                                     0.1s
 => => extracting sha256:c85a0be79bfba309d1f05dc40b447aa82b604593531fed1e7e12e4bef63483a5                                     0.1s
 => => extracting sha256:195ea6a58ca87a18477965a6e6a8623112bde82c5b568a29c56ce4581b6e6695                                     1.1s
 => => extracting sha256:52908dc1c386fab0271a2b84b6ef4d96205a98a8c8801169554767172e45d8c7                                     1.5s
 => => extracting sha256:a2b47720d601b6c6c6e7763b4851e25475118d80a76be466ef3aa388abf2defd                                     2.8s
 => => extracting sha256:14a70245b07c7f5056bdd90a3d93e37417ec26542def5a37ac8f19e437562533                                     0.0s
 => [stage-1 1/3] FROM  30.7s
 => => resolve        0.0s
 => => sha256:f291067d32d8d06c3b996ba726b9aa93a71f6f573098880e05d16660cfc44491 8.12MB / 8.12MB                               30.6s
 => => sha256:2445dbf7678f5ec17f5654ac2b7ad14d7b1ea3af638423fc68f5b38721f25fa4 657.02kB / 657.02kB                            1.3s
 => => extracting sha256:2445dbf7678f5ec17f5654ac2b7ad14d7b1ea3af638423fc68f5b38721f25fa4                                     0.1s
 => => extracting sha256:f291067d32d8d06c3b996ba726b9aa93a71f6f573098880e05d16660cfc44491                                     0.1s
 => [internal] load build context                                                                                             0.1s
 => => transferring context: 575B                                                                                             0.0s
 => [builder 2/5] WORKDIR /app                                                                                                0.1s
 => [builder 3/5] COPY go.mod ./                                                                                              0.0s
 => [builder 4/5] COPY main.go ./                                                                                             0.0s
 => [builder 5/5] RUN go build -o /helloserver main.go                                                                        2.4s
 => [stage-1 2/3] COPY --from=builder /helloserver /helloserver                                                               0.0s
 => exporting to oci image format                                                                                             0.8s
 => => exporting layers                                                                                                       0.2s
 => => exporting manifest sha256:04d69fc3245757d327d96b1a83b7a64543d970953c61d1014ae6980ed8b3ba2a                             0.0s
 => => exporting config sha256:08641d64f612661a711587b07cfeeb6d2804b97998cfad85864a392c1aabcd06                               0.0s
 => => sending tarball                                                                                                        0.6s
 => importing to docker

Repeat this command for the second platform:

$ docker buildx build --load -t ${TAG_PREFIX}/simpleserver:0.1.0-linux-arm64 --platform linux/arm64 .
[+] Building 40.1s (14/14) FINISHED
 => [internal] load .dockerignore                                                                                             0.0s
 => => transferring context: 2B                                                                                               0.0s
 => [internal] load build definition from Dockerfile                                                                          0.0s
 => => transferring dockerfile: 660B                                                                                          0.0s
 => [internal] load metadata for                                                       1.0s
 => [internal] load metadata for                                                                1.1s
 => [builder 1/5] FROM  37.7s
 => => resolve          0.0s
 => => sha256:cd807e8b483974845eabbdbbaa4bb3a66f74facd8c061e01e923e9f1da608271 157B / 157B                                    0.2s
 => => sha256:fecd6ba4b3f93b6c90f4058b512f1b0a44223ccb3244f0049b16fe2c1b41cf45 115.13MB / 115.13MB                           35.6s
 => => sha256:4fb255e3f99867ec7a2286dfbbef990491cde0a5d226d92be30bad4f9e917fa4 81.37MB / 81.37MB                             31.8s
 => => sha256:426e8acfed2a5373bd99b22b5a968d55a148e14bc0e0f51c5cf0d779afefe291 54.68MB / 54.68MB                             26.7s
 => => sha256:3d7b1480fa4dae5cbbb7d091c46ae0ae52f501418d4cfeb849b87023364e2564 10.87MB / 10.87MB                              3.0s
 => => sha256:a3e29af4daf3531efcc63588162e8bdcf3434aa5d72df4eabeb5e20c6695e303 5.15MB / 5.15MB                                1.3s
 => => sha256:077c13527d405646e2f6bb426e04716ae4f8dd2fdd8966dcb0194564a2b57896 53.70MB / 53.70MB                             13.3s
 => => extracting sha256:077c13527d405646e2f6bb426e04716ae4f8dd2fdd8966dcb0194564a2b57896                                     0.9s
 => => extracting sha256:a3e29af4daf3531efcc63588162e8bdcf3434aa5d72df4eabeb5e20c6695e303                                     0.3s
 => => extracting sha256:3d7b1480fa4dae5cbbb7d091c46ae0ae52f501418d4cfeb849b87023364e2564                                     0.1s
 => => extracting sha256:426e8acfed2a5373bd99b22b5a968d55a148e14bc0e0f51c5cf0d779afefe291                                     1.2s
 => => extracting sha256:4fb255e3f99867ec7a2286dfbbef990491cde0a5d226d92be30bad4f9e917fa4                                     1.4s
 => => extracting sha256:fecd6ba4b3f93b6c90f4058b512f1b0a44223ccb3244f0049b16fe2c1b41cf45                                     2.0s
 => => extracting sha256:cd807e8b483974845eabbdbbaa4bb3a66f74facd8c061e01e923e9f1da608271                                     0.0s
 => [stage-1 1/3] FROM  25.7s
 => => resolve        0.0s
 => => sha256:21d6a6c3921f47fb0a96eb028b4c3441944a6e5a44b30cd058425ccc66279760 7.13MB / 7.13MB                               25.5s
 => => sha256:7d441aeb75fe3c941ee4477191c6b19edf2ad8310bac7356a799c20df198265c 657.02kB / 657.02kB                            1.3s
 => => extracting sha256:7d441aeb75fe3c941ee4477191c6b19edf2ad8310bac7356a799c20df198265c                                     0.1s
 => => extracting sha256:21d6a6c3921f47fb0a96eb028b4c3441944a6e5a44b30cd058425ccc66279760                                     0.1s
 => [internal] load build context                                                                                             0.0s
 => => transferring context: 54B                                                                                              0.0s
 => [builder 2/5] WORKDIR /app                                                                                                0.2s
 => [builder 3/5] COPY go.mod ./                                                                                              0.0s
 => [builder 4/5] COPY main.go ./                                                                                             0.0s
 => [builder 5/5] RUN go build -o /helloserver main.go                                                                        0.3s
 => [stage-1 2/3] COPY --from=builder /helloserver /helloserver                                                               0.0s
 => exporting to oci image format                                                                                             0.5s
 => => exporting layers                                                                                                       0.2s
 => => exporting manifest sha256:267ed1266b2b0ed74966e72d4ae8a2dfcf77777425d32a9a46f0938c962d9600                             0.0s
 => => exporting config sha256:67102364e254bf5a8e58fa21ea56eb40645851d844f5c4d9651b4af7a40be780                               0.0s
 => => sending tarball                                                                                                        0.3s
 => importing to docker

Check that the images were created correctly:

$ docker image ls
REPOSITORY   TAG                 IMAGE ID       CREATED              SIZE
             0.1.0-linux-arm64   67102364e254   6 seconds ago        22.4MB
             0.1.0-linux-amd64   08641d64f612   About a minute ago   25.7MB

In the next steps we create a component archive and a transport archive:

$ VERSION=0.1.0
$ mkdir gen
$ ocm create ca ${COMPONENT} ${VERSION} --provider ${PROVIDER} --file gen/ca

Create the file resources.yaml. Note the variants in the image input and the type dockermulti:

---
name: chart
type: helmChart
input:
  type: helm
  path: helmchart
---
name: image
type: ociImage
version: 0.1.0
input:
  type: dockermulti
  variants:
  - ""
  - ""

The input type dockermulti adds a multi-arch image, composed of the given dedicated images from the local Docker image store, as a local artifact to the component archive.

Add the described resources to your component archive:

$ ocm add resources ./gen/ca resources.yaml
processing resources.yaml...
  processing document 1...
    processing index 1
  processing document 2...
    processing index 1
found 2 resources
adding resource helmChart: "name"="chart","version"="<componentversion>"...
adding resource ociImage: "name"="image","version"="0.1.0"...
  image 0:
  image 1:
  image 2: INDEX
locator:, repo:, version 0.1.0

What happened?

The input type dockermulti is used to compose a multi-arch image on the fly. Like the input type docker, it reads images from the local Docker daemon. In contrast to docker, you can list multiple images created for different platforms, for which an OCI index manifest is created to describe a multi-arch image. The complete set of blobs is then packaged as an artifact set archive and put into the component version as a single resource.

The resulting component-descriptor.yaml in gen/ca is:

component:
  componentReferences: []
  provider: acme
  repositoryContexts: []
  resources:
  - access:
      localReference: sha256.9dd0f2cbae3b8e6eb07fa947c05666d544c0419a6e44bd607e9071723186333b
      mediaType: application/vnd.oci.image.manifest.v1+tar+gzip
      type: localBlob
    name: chart
    relation: local
    type: helmChart
    version: 0.1.0
  - access:
      localReference: sha256.4e26c7dd46e13c9b1672e4b28a138bdcb086e9b9857b96c21e12839827b48c0c
      mediaType: application/vnd.oci.image.index.v1+tar+gzip
      type: localBlob
    name: image
    relation: local
    type: ociImage
    version: 0.1.0
  sources: []
  version: 0.1.0
meta:
  schemaVersion: v2

Note that there is only one resource of type ociImage, with media type application/vnd.oci.image.index.v1+tar+gzip, which is the standard media type for multi-arch images.

$ ls -l gen/ca/blobs
total 24M
-rw-r--r-- 1 d058463 staff  24M Dec  1 09:50 sha256.4e26c7dd46e13c9b1672e4b28a138bdcb086e9b9857b96c21e12839827b48c0c
-rw-r--r-- 1 d058463 staff 4.7K Dec  1 09:50 sha256.9dd0f2cbae3b8e6eb07fa947c05666d544c0419a6e44bd607e9071723186333b

The file sha256.4e26… contains the multi-arch image packaged as OCI artifact set:

$ tar tvf gen/ca/blobs/sha256.4e26c7dd46e13c9b1672e4b28a138bdcb086e9b9857b96c21e12839827b48c0c
-rw-r--r--  0 0      0         741 Jan  1  2022 index.json
-rw-r--r--  0 0      0          38 Jan  1  2022 oci-layout
drwxr-xr-x  0 0      0           0 Jan  1  2022 blobs
-rw-r--r--  0 0      0     3051520 Jan  1  2022 blobs/sha256.05ef21d763159987b9ec5cfb3377a61c677809552dcac3301c0bde4e9fd41bbb
-rw-r--r--  0 0      0         723 Jan  1  2022 blobs/sha256.117f12f0012875471168250f265af9872d7de23e19f0d4ef05fbe99a1c9a6eb3
-rw-r--r--  0 0      0     6264832 Jan  1  2022 blobs/sha256.1496e46acd50a8a67ce65bac7e7287440071ad8d69caa80bcf144892331a95d3
-rw-r--r--  0 0      0     6507520 Jan  1  2022 blobs/sha256.66817c8096ad97c6039297dc984ebc17c5ac9325200bfa9ddb555821912adbe4
-rw-r--r--  0 0      0         491 Jan  1  2022 blobs/sha256.75a096351fe96e8be1847a8321bd66535769c16b2cf47ac03191338323349355
-rw-r--r--  0 0      0     3051520 Jan  1  2022 blobs/sha256.77192cf194ddc77d69087b86b763c47c7f2b0f215d0e4bf4752565cae5ce728d
-rw-r--r--  0 0      0        1138 Jan  1  2022 blobs/sha256.91018e67a671bbbd7ab875c71ca6917484ce76cde6a656351187c0e0e19fe139
-rw-r--r--  0 0      0    17807360 Jan  1  2022 blobs/sha256.91f7bcfdfda81b6c6e51b8e1da58b48759351fa4fae9e6841dd6031528f63b4a
-rw-r--r--  0 0      0        1138 Jan  1  2022 blobs/sha256.992b3b72df9922293c05f156f0e460a220bf601fa46158269ce6b7d61714a084
-rw-r--r--  0 0      0    14755840 Jan  1  2022 blobs/sha256.a83c9b56bbe0f6c26c4b1d86e6de3a4862755d208c9dfae764f64b210eafa58c
-rw-r--r--  0 0      0         723 Jan  1  2022 blobs/sha256.e624040295fb78a81f4b4b08b43b4de419f31f21074007df8feafc10dfb654e6

$ tar xvf gen/ca/blobs/sha256.4e26c7dd46e13c9b1672e4b28a138bdcb086e9b9857b96c21e12839827b48c0c -O - index.json | jq .
x index.json
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.index.v1+json",
  "manifests": [
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "digest": "sha256:e624040295fb78a81f4b4b08b43b4de419f31f21074007df8feafc10dfb654e6",
      "size": 723
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "digest": "sha256:117f12f0012875471168250f265af9872d7de23e19f0d4ef05fbe99a1c9a6eb3",
      "size": 723
      "mediaType": "application/vnd.oci.image.index.v1+json",
      "digest": "sha256:75a096351fe96e8be1847a8321bd66535769c16b2cf47ac03191338323349355",
      "size": 491,
      "annotations": {
        "": "0.1.0",
        "software.ocm/tags": "0.1.0"
  "annotations": {
    "software.ocm/main": "sha256:75a096351fe96e8be1847a8321bd66535769c16b2cf47ac03191338323349355"

You can create a transport archive from the component archive.

$ ocm transfer ca gen/ca gen/ctf
transferring version ""...
...resource 0(
...resource 1(
...adding component version...

Or you can push it directly to the OCM repository:

$ ocm transfer ca gen/ca $OCMREPO
transferring version ""...
...resource 0(
...resource 1(
...adding component version...

The repository should then contain three additional artifacts. Depending on the OCI registry and its corresponding UI, you may see that the uploaded OCI image is a multi-arch image. For example, on GitHub Packages you can see under OS/Arch that there are two platforms: linux/amd64 and linux/arm64.

For better automation and reuse you may consider templating resource files and Makefiles (see below).

Using Makefiles

Developing with the Open Component Model is usually an iterative process of building artifacts, generating component descriptors, analyzing them, and publishing them. To simplify and speed up this process, it should be automated with a build tool. One option is to use a Makefile. The following example can be used as a starting point and modified according to your needs.

In this example we will automate the example shown in the previous sections:

  • Create a multi-arch image from Go sources in a Git repository with Docker
  • Package the image and a Helm chart into a common transport archive
  • Sign and publish the build result

Prerequisites:

  • The ocm CLI must be installed and available in the PATH
  • The Makefile is in the top-level folder of a Git project
  • Operating system is Unix
  • A sub-directory local can be used for local settings, e.g. environment variables, RSA keys, …
  • A sub-directory gen will be used for generated artifacts from the make build
  • It is recommended to add local/ and gen/ to the .gitignore file
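Following the last recommendation, the corresponding .gitignore entries would simply be:

```
local/
gen/
```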

We use the following file system layout for this example:

$ tree .
├── Dockerfile
├── Makefile
├── go.mod
├── helmchart
│   ├── Chart.yaml
│   ├── templates
│   │   ├── NOTES.txt
│   │   ├── _helpers.tpl
│   │   ├── deployment.yaml
│   │   ├── hpa.yaml
│   │   ├── ingress.yaml
│   │   ├── service.yaml
│   │   ├── serviceaccount.yaml
│   │   └── tests
│   │       └── test-connection.yaml
│   └── values.yaml
├── local
│   └──
├── main.go
├── resources.yaml

The following Makefile can be used:

NAME      ?= simpleserver
IMAGE     =$(GITHUBORG)/demo/$(NAME)
MULTI     ?= true
PLATFORMS ?= linux/amd64 linux/arm64
REPO_ROOT           = .
VERSION             = $(shell git describe --tags --exact-match 2>/dev/null|| echo "$$(cat $(REPO_ROOT)/VERSION)")
COMMIT              = $(shell git rev-parse HEAD)
GIT_TREE_STATE      := $(shell [ -z "$(git status --porcelain 2>/dev/null)" ] && echo clean || echo dirty)
GEN = ./gen
OCM = ocm

CHART_SRCS=$(shell find helmchart -type f)
GO_SRCS=$(shell find . -name \*.go -type f)

ifeq ($(MULTI),true)
FLAGSUF     = .multi
endif

.PHONY: build
build: $(GEN)/build

.PHONY: version
version:
	@echo $(VERSION)

.PHONY: ca
ca: $(GEN)/ca

$(GEN)/ca: $(GEN)/.exists $(GEN)/image.$(NAME)$(FLAGSUF) resources.yaml $(CHART_SRCS)
	$(OCM) create ca -f $(COMPONENT) "$(VERSION)" --provider $(PROVIDER) --file $(GEN)/ca
	$(OCM) add resources --templater spiff $(GEN)/ca COMMIT="$(COMMIT)" VERSION="$(VERSION)" \
	  IMAGE="$(IMAGE):$(VERSION)" PLATFORMS="$(PLATFORMS)" MULTI=$(MULTI) resources.yaml
	@touch $(GEN)/ca

$(GEN)/build: $(GO_SRCS)
	go build .
	@touch $(GEN)/build

.PHONY: image
image: $(GEN)/image.$(NAME)

$(GEN)/image.$(NAME): $(GEN)/.exists Dockerfile $(GO_SRCS)
	docker build -t $(IMAGE):$(VERSION) --file Dockerfile .
	@touch $(GEN)/image.$(NAME)

.PHONY: multi
multi: $(GEN)/image.$(NAME).multi

$(GEN)/image.$(NAME).multi: $(GEN)/.exists Dockerfile $(GO_SRCS)
	echo "Building Multi $(PLATFORMS)"
	for i in $(PLATFORMS); do \
	  tag=$$(echo $$i | sed -e s:/:-:g); \
	  echo "Building platform $$i with tag: $$tag"; \
	  docker buildx build --load -t $(IMAGE):$(VERSION)-$$tag --platform $$i .; \
	done
	@touch $(GEN)/image.$(NAME).multi

.PHONY: ctf
ctf: $(GEN)/ctf

$(GEN)/ctf: $(GEN)/ca
	@rm -rf $(GEN)/ctf
	$(OCM) transfer ca $(GEN)/ca $(GEN)/ctf
	touch $(GEN)/ctf

.PHONY: push
push: $(GEN)/ctf $(GEN)/push.$(NAME)

$(GEN)/push.$(NAME): $(GEN)/ctf
	$(OCM) transfer ctf -f $(GEN)/ctf $(OCMREPO)
	@touch $(GEN)/push.$(NAME)

.PHONY: transport
transport:
ifneq ($(TARGETREPO),)
	$(OCM) transfer component -Vc $(OCMREPO)//$(COMPONENT):$(VERSION) $(TARGETREPO)
else
	@echo "Cannot transport, no TARGETREPO defined as destination" && exit 1
endif

$(GEN)/.exists:
	@mkdir -p $(GEN)
	@touch $@

.PHONY: info
info:
	@echo "VERSION:  $(VERSION)"
	@echo "COMMIT:   $(COMMIT)"

.PHONY: describe
describe: $(GEN)/ctf
	ocm get resources --lookup $(OCMREPO) -r -o treewide $(GEN)/ctf

.PHONY: descriptor
descriptor: $(GEN)/ctf
	ocm get component -S v3alpha1 -o yaml $(GEN)/ctf

.PHONY: clean
clean:
	rm -rf $(GEN)

It supports the following targets:

  • build (default) runs a simple go build
  • version shows the current VERSION of the Git repository
  • image builds a local Docker image
  • multi builds multi-arch images with Docker
  • ca executes build and creates a component archive
  • ctf creates a common transport format archive
  • push pushes the common transport format archive to an OCI registry
  • info shows variables used in the Makefile (version, commit, etc.)
  • describe displays the component version in tree form
  • descriptor shows the component descriptor for this component version
  • transport transports the component from the upload repository into another OCM repository
  • clean deletes all generated files (but does not delete Docker images)

The variables at the beginning, assigned with ?=, can be set from outside to override the defaults declared in the Makefile. Use either an environment variable or an argument when calling make.


$ PROVIDER=foo make ca

Templating the resources

The Makefile uses a dynamic list of platforms for the generated images. You can just set the PLATFORMS variable:

MULTI     ?= true
PLATFORMS ?= linux/amd64 linux/arm64

If MULTI is set to true, the variable PLATFORMS is evaluated to decide which image variants will be built. This has to be reflected in resources.yaml: it has to use the input type dockermulti and list all variants that should be aggregated into a multi-arch image. This list depends on the content of the make variable.

The ocm CLI supports this by offering templating mechanisms for the file content, selected with the option --templater. This example uses the Spiff templater.

$(GEN)/ca: $(GEN)/.exists $(GEN)/image.$(NAME)$(FLAGSUF) resources.yaml $(CHART_SRCS)
	$(OCM) create ca -f $(COMPONENT) "$(VERSION)" --provider $(PROVIDER) --file $(GEN)/ca
	$(OCM) add resources --templater spiff $(GEN)/ca COMMIT="$(COMMIT)" VERSION="$(VERSION)" \
	  IMAGE="$(IMAGE):$(VERSION)" PLATFORMS="$(PLATFORMS)" MULTI=$(MULTI) resources.yaml
	@touch $(GEN)/ca

The variables given to the add resources command are passed to the templater. The template looks like:

name: image
type: ociImage
version: (( values.VERSION ))
input:
  type: (( bool(values.MULTI) ? "dockermulti" :"docker" ))
  repository: (( index(values.IMAGE, ":") >= 0 ? substr(values.IMAGE,0,index(values.IMAGE,":")) :values.IMAGE ))
  variants: (( bool(values.MULTI) ? map[split(" ", values.PLATFORMS)|v|-> values.IMAGE "-" replace(v,"/","-")] :~~ ))
  path: (( bool(values.MULTI) ? ~~ :values.IMAGE ))

The variable values.MULTI distinguishes between a single Docker image and a multi-arch image. With map[], the platform list from the Makefile is mapped to the list of tags created by the docker buildx commands in the Makefile. The value ~~ is used to undefine the YAML fields not required for the selected case (so the template can be used for both multi- and single-arch builds).
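For illustration, with MULTI=true, VERSION=0.1.0, and PLATFORMS="linux/amd64 linux/arm64", the template would roughly evaluate to the following; the image name ghcr.io/acme/demo/simpleserver is a made-up placeholder:

```yaml
name: image
type: ociImage
version: 0.1.0
input:
  type: dockermulti
  repository: ghcr.io/acme/demo/simpleserver
  variants:
  - ghcr.io/acme/demo/simpleserver:0.1.0-linux-amd64
  - ghcr.io/acme/demo/simpleserver:0.1.0-linux-arm64
```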

$(GEN)/image.$(NAME).multi: $(GEN)/.exists Dockerfile $(GO_SRCS)
	echo "Building Multi $(PLATFORMS)"
	for i in $(PLATFORMS); do \
	  tag=$$(echo $$i | sed -e s:/:-:g); \
	  echo "Building platform $$i with tag: $$tag"; \
	  docker buildx build --load -t $(IMAGE):$(VERSION)-$$tag --platform $$i .; \
	done
	@touch $(GEN)/image.$(NAME).multi

Pipeline integration

Pipeline infrastructures are heterogeneous, so there is no universal answer for how to integrate OCM. Usually the simplest way is to use the command line interface of the ocm project. As one example, we use GitHub Actions to show how an integration can be done.

There are two repositories dealing with GitHub Actions: the first one provides various actions that can be called from a workflow; the second one provides the necessary installation of ocm into the container.

A typical workflow for a build step will create a component version. It contains the following steps:

    runs-on: ubuntu-latest
    steps:
      - name: setup OCM
        uses: open-component-model/ocm-setup-action@main
      - name: create OCM component version
        uses: open-component-model/ocm-action@main
        with:
          action: create_component
          provider: ${{ env.PROVIDER }}

This creates a component version for the current build. Additionally, a transport archive may be created, or the component version, along with the built container images, may be uploaded to an OCI registry, etc.

The documentation is available in the respective action repositories. A full example can be found in the sample GitHub repository.

Static and Dynamic Variable Substitution

Looking at the settings file shows that some variables like the version or the commit will change frequently with every build or release. Often these will be auto-generated during build.

Other variables, like the versions of used third-party components, change only from time to time and are set manually by a developer or release manager. It is useful to separate static and dynamic variables: static files can be checked into source control and maintained manually, while dynamic variables can be generated during the build.
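A build script can generate the dynamic settings file, for example like this (a sketch; the values are hardcoded for illustration, a real script would derive them from git describe and git rev-parse):

```shell
# Generate the dynamic settings file from the build state
# (values hardcoded here for illustration purposes).
VERSION="0.23.1"
COMMIT="5f03021059c7dbe760ac820a014a8a84166ef8b4"
cat > dynamic_settings.yaml <<EOF
VERSION: ${VERSION}
COMMIT: ${COMMIT}
EOF
cat dynamic_settings.yaml
```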

Example: manually maintained (static_settings.yaml):

NAME: microblog

Auto-generated by a build script (dynamic_settings.yaml):

VERSION: 0.23.1
COMMIT: 5f03021059c7dbe760ac820a014a8a84166ef8b4

Both settings files are then passed to the build command:

$ ocm add componentversions --create --file ../gen/ctf --settings ../gen/dynamic_settings.yaml --settings static_settings.yaml components.yaml

Debugging: Explain the blobs directory

For analyzing and debugging the transport archive, some commands can help to find out what is contained in the archive and what is stored in which blob:

$ tree ../gen/ctf
├── artifact-index.json
└── blobs
    ├── ...
    ├── sha256.59ff88331c53a2a94cdd98df58bc6952f056e4b2efc8120095fbc0a870eb0b67
    ├── ...

$ ocm get resources -r -o wide ../gen/ctf
NAME         : nginx-controller-chart
VERSION      : 1.5.1
TYPE         : helmChart
RELATION     : local
ACCESSTYPE   : localBlob
ACCESSSPEC   : {"localReference":"sha256:59ff88331c53a2a94cdd98df58bc6952f056e4b2efc8120095fbc0a870eb0b67","mediaType":"application/vnd.oci.image.manifest.v1+tar+gzip","referenceName":""}

Self-contained transport archives

The transport archive created from a components file using the command ocm add componentversions --create ... does not automatically resolve image references to external registries. If you want to create a transport archive with all images contained as local artifacts, you need to convert it in a second step:

ocm transfer ctf --copy-resources <ctf-dir> <new-ctf-dir-or-oci-repo-url>

Note that this archive can become huge if many external images are involved!

CICD integration

Configure rarely changing variables in a static file and generate dynamic variables from the environment during the build. See the explanation above.