Repo Structures: Stages/Environments with base/shared resources in Kustomize or Helm in trunk-based development

After my talk at Continuous Lifecycle, where I advocated a trunk-based approach for GitOps repositories (slide, see also my post about repo structures), two people approached me with the same question:

When using Kustomize and changing an attribute in a shared base YAML - how can we avoid these changes being deployed to every stage at once?

For example:

├── base
│   └── deployment.yaml
└── overlays
    ├── production
    │   ├── deployment.yaml
    │   └── kustomization.yaml
    └── staging
        ├── deployment.yaml
        └── kustomization.yaml
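The overlays pull in the shared base via their kustomization.yaml, which is why an edit to base/deployment.yaml reaches every stage on the next sync. A minimal sketch of one overlay (file names as in the tree above, everything else illustrative):

```yaml
# overlays/staging/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base              # shared base - changes here affect ALL stages
patchesStrategicMerge:
  - deployment.yaml         # stage-specific overrides only
```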

By the way, the same applies to Helm when using multiple values.yaml files with one shared by all stages.
For example, we have the convention of using a values-shared.yaml plus one values file per stage, e.g. values-production.yaml; see the example in the GitOps Playground.
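With Helm, values files passed later via `-f` override earlier ones, so a stage file only needs the attributes that differ from the shared file - and a change to values-shared.yaml likewise hits every stage at once. Sketched with a hypothetical chart path and release name:

```sh
# values-shared.yaml holds defaults for all stages;
# values-production.yaml overrides only what differs.
# Later -f files take precedence over earlier ones.
helm template petclinic ./chart \
  -f values-shared.yaml \
  -f values-production.yaml
```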

Possible Solutions

What options do we have?

#1 Change attribute in each overlay

Instead of changing the attribute in the base YAML, we could use a two-step process.

  1. Set the attribute in each stage’s overlay. Then promote it through all stages.
  2. Once this is done, set it in the base YAML and remove it from the overlays.
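For Kustomize, step 1 could be sketched as a strategic-merge patch in each overlay (the attribute and its value here are purely illustrative):

```yaml
# overlays/staging/deployment.yaml - step 1: set the new value here first,
# then promote the same change to the production overlay. Only once all
# stages carry it does it move into base/deployment.yaml (step 2).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
        - name: my-app
          resources:
            limits:
              memory: 512Mi
```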

That’s a bit clumsy, but a process like this reminds me of making breaking changes to APIs or databases without downtime: first you migrate to the new version/column, keeping the old one intact or pointing/adapting to the new one. Later, when every client has moved on, you delete the old version/column.

#2 Render on CI-server

An alternative would be to keep the infrastructure as code in the application’s repository and let the CI server transfer it (see the image below or the slide from my talk). On the CI server we use kustomize build or helm template to render our YAML into its final form and push that to the GitOps repo. The result no longer has a base or shared YAML but is the plain form, as it would have been applied to the API server in an imperative deployment.
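The rendering step on the CI server might look roughly like this (repo paths and file names hypothetical; the GitOps Playground implements this in a Jenkins pipeline):

```sh
# Render the final, stage-specific YAML - no base/overlay is left afterwards
kustomize build k8s/overlays/staging > "$GITOPS_REPO/staging/deployment.yaml"

# Or, for Helm:
# helm template my-app ./chart -f values-shared.yaml -f values-staging.yaml \
#   > "$GITOPS_REPO/staging/manifests.yaml"

# Push the rendered result to the GitOps repo for the operator to pick up
cd "$GITOPS_REPO" && git add . && git commit -m "Deploy staging" && git push
```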

You can see this in action in our argocd/petclinic-helm example in the GitOps Playground. Here, Jenkins deploys

  • from http://localhost:9091/scm/repo/argocd/petclinic-helm/code/sources/main/k8s/
  • to http://localhost:9091/scm/repo/argocd/gitops/code/sources/main/staging/spring-petclinic-helm/applicationRelease.yaml/ and also to production via a Pull Request.
