Today, I compared the then-latest versions of flux (2.0.1) and Argo CD (v2.7.7) in terms of images, vulnerabilities and resource requirements.
Preamble: There are results, but it’s difficult to compare them.
- To make an equal comparison, I compared weave-gitops, including the GitOpsSets controller, with Argo CD.
- I also considered comparing vanilla flux with argocd-core, but this is not an equal match, because argocd-core contains the appset-controller but no notifications controller (if you’re interested in the results anyway, see below).
- Images and components: Argo CD uses the same image for all controllers. So the total number of components cannot be compared directly, as the flux total counts components that its controller images have in common more than once (see the snippet after this list).
- Resources: Argo CD does not define CPU or memory by default. I used the values that are commented out in the values.yaml of the Helm chart. Looking at the data, they don’t seem too plausible (e.g. the limits are rather low). Weave-Gitops also does not specify any resources. So in general these can’t be compared too well.
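To illustrate the point about shared images: listing the distinct images that the Argo CD pods actually run shows most containers referencing one and the same argocd image, with only dex and redis using their own (a quick check, assuming the argocd namespace from the test setup below):

kubectl get pods -n argocd -o jsonpath='{range .items[*]}{range .spec.containers[*]}{.image}{"\n"}{end}{end}' | sort | uniq -c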
After brooding over the figures for a while, this is my personal conclusion:
The bottom line is that Argo CD tends to have slightly higher resource requirements and slightly more known vulnerabilities.
Benchmarking actual resource usage under load would be interesting, though. In the future, Argo CD might change to a distroless image, which would decrease the number of CVEs.
And: I wouldn’t base my decision for one of the tools on these facts, as there are clearer distinguishing features.
Some more notes on the data itself
- The image size is the local (uncompressed) size, not the one that goes over the wire (see the snippet after these notes)
- The image component count is based on a CycloneDX SBOM, created with trivy
- Init containers were excluded to keep the whole process a bit simpler
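To make the size note concrete: the local size comes straight from docker, while the size that goes over the wire would have to be read from the registry manifest. A rough sketch using the redis image from the tables (for multi-arch images, docker manifest inspect returns a manifest list, so the platform-specific manifest would have to be inspected by digest first):

# local (uncompressed) size, as used in the tables below
docker image ls --format "{{.Size}}" public.ecr.aws/docker/library/redis:7.0.11-alpine
# compressed size, i.e. roughly what goes over the wire: sum of the layer sizes in the manifest
docker manifest inspect public.ecr.aws/docker/library/redis:7.0.11-alpine | jq '[.layers[].size] | add'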
Raw data
argocd
number of components/pods: 7
number of containers at runtime (init excluded): 7
number of images: 3
image details (as of Wed 12 Jul 11:19:32 CEST 2023):
image | nVulns/critical | nComponents | size (MB) |
---|---|---|---|
ghcr.io/dexidp/dex | 2 (0) | 309 | 96.6 |
public.ecr.aws/docker/library/redis:7.0.11-alpine | 0 (0) | 19 | 30.2 |
quay.io/argoproj/argocd | 49 (0) | 471 | 391 |
TOTAL | 51 (0) | 799 | 517.8 |
component details:
Component | nContainers | CPU Requests | Memory Requests | CPU Limits | Memory Limits |
---|---|---|---|---|---|
argocdargocd-application-controller | 1 | 250m | 256Mi | 500m | 512Mi |
argocdargocd-applicationset-controller | 1 | 100m | 128Mi | 100m | 128Mi |
argocdargocd-dex-server | 1 | 10m | 32Mi | 50m | 64Mi |
argocdargocd-notifications-controller | 1 | 100m | 128Mi | 100m | 128Mi |
argocdargocd-redis | 1 | 100m | 64Mi | 200m | 128Mi |
argocdargocd-repo-server | 1 | 10m | 64Mi | 50m | 128Mi |
argocdargocd-server | 1 | 50m | 64Mi | 100m | 128Mi |
weave-gitops
number of components/pods: 6
number of containers at runtime (init excluded): 7
number of images: 7
image details (as of Wed 12 Jul 11:19:40 CEST 2023):
image | nVulns/critical | nComponents | size (MB) |
---|---|---|---|
Google Cloud console | 4 (0) | 95 | 55.2 |
ghcr.io/fluxcd/helm-controller | 0 (0) | 167 | 80 |
ghcr.io/fluxcd/kustomize-controller | 2 (0) | 245 | 117 |
ghcr.io/fluxcd/notification-controller | 0 (0) | 160 | 83.7 |
ghcr.io/fluxcd/source-controller | 0 (0) | 300 | 88 |
ghcr.io/weaveworks/gitopssets-controller | 0 (0) | 125 | 62.1 |
ghcr.io/weaveworks/wego-app | 12 (0) | 153 | 110 |
TOTAL | 18 (0) | 1245 | 596.0 |
component details:
Component | nContainers | CPU Requests | Memory Requests | CPU Limits | Memory Limits |
---|---|---|---|---|---|
gitopssets-controllergitopssets-controller | 2 | 5m, 10m | 64Mi, 64Mi | 500m, 500m | 128Mi, 128Mi |
helm-controller | 1 | 100m | 64Mi | 1 | 1Gi |
kustomize-controller | 1 | 100m | 64Mi | 1 | 1Gi |
notification-controller | 1 | 100m | 64Mi | 1 | 1Gi |
source-controller | 1 | 50m | 64Mi | 1 | 1Gi |
ww-gitopsweave-gitops | 1 | | | | |
Test setup
❯ k3d version
k3d version v4.4.8
❯ flux --version
flux version 2.0.1
❯ gitops version
Current Version: 0.27.0
❯ trivy --version
Version: 0.43.1
k3d cluster create flux-vs-argocd --image rancher/k3s:v1.25.9-k3s1
# Install plain flux
flux install --version=v2.0.1 --namespace flux
# Install weave gitops + gitopssets-controller
flux install --version=v2.0.1 --namespace weave-gitops
gitops create dashboard ww-gitops --namespace weave-gitops \
--password=admin
# Based on this:
# https://github.com/weaveworks/weave-gitops/blob/v0.27.0/website/docs/gitopssets/installation.mdx#L23
# Only the namespace was changed, so that all of flux is deployed in a single namespace
kubectl apply -f - <<EOF
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: weaveworks-artifacts-charts
  namespace: weave-gitops
spec:
  interval: 1m
  url: https://artifacts.wge.dev.weave.works/dev/charts
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: gitopssets-controller
  namespace: weave-gitops
spec:
  interval: 10m
  chart:
    spec:
      chart: gitopssets-controller
      sourceRef:
        kind: HelmRepository
        name: weaveworks-artifacts-charts
        namespace: weave-gitops
      version: 0.6.1
  install:
    crds: CreateReplace
  upgrade:
    crds: CreateReplace
EOF
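# A quick sanity check (not in the original steps): verify that the gitopssets-controller HelmRelease reconciled
flux get helmreleases --namespace weave-gitops
kubectl get pods -n weave-gitops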
# Install Argo CD Core
# Does not include Resource Limits/Requests
k create ns argocd-core
k apply -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.7.7/manifests/core-install.yaml --server-side -n argocd-core
# Install plain Argo CD, using the resource requests/limits that are commented out in the helm chart's values.yaml as defaults
ARGOCD_CHART_VERSION=5.38.1 # App Version: v2.7.7
curl https://raw.githubusercontent.com/argoproj/argo-helm/argo-cd-$ARGOCD_CHART_VERSION/charts/argo-cd/values.yaml | sed 's/resources: {}/resources:/g' | sed 's/#\( *limits:\)/\1/g' | sed 's/#\( *cpu:\)/\1/g' | sed 's/#\( *memory:\)/\1/g' | sed 's/#\( *requests:\)/\1/g' > values.yaml
helm repo add argo https://argoproj.github.io/argo-helm
helm upgrade -i argocd --values values.yaml --version $ARGOCD_CHART_VERSION argo/argo-cd -n argocd --create-namespace
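# A quick sanity check (not in the original steps): confirm the un-commented defaults actually landed on the pods
kubectl get pods -n argocd -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].resources.requests.cpu}{"/"}{.spec.containers[0].resources.requests.memory}{"\n"}{end}'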
./images.sh argocd-core > argocd-core.md
./images.sh weave-gitops > weave-gitops.md
./images.sh flux > flux.md
./images.sh argocd > argocd.md
images.sh:
#!/bin/bash
ns=$1
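# Collect the image of every running container and the name of every pod in the namespace
# (.spec.containers does not include init containers, so they are excluded automatically)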
containers=$(kubectl get pods -n $ns -o jsonpath="{range .items[*]}{range .spec.containers[*]}{'\n'}{.image}{end}{end}" | grep -v '^$')
images=$(echo "$containers" | sort | uniq)
pods=$(kubectl get pods -n $ns -o jsonpath='{range .items[*]}{"\n"}{.metadata.name}{end}' | grep -v '^$' )
total_components=0
total_size=0
total_vulnerabilities=0
total_critical=0
echo "# $ns"
echo "number of components/pods: $(echo "$pods" | wc -l) "
echo "number of containers at runtime (init excluded): $(echo "$containers" | wc -l) "
echo "number of images: $(echo "$images" | wc -l) "
echo
echo "image details (as of $(date)):"
echo
echo "| image | nVulns/critical | nComponents | size (MB) |"
echo "| --- | --- | --- | --- | "
while IFS= read -r image; do
docker pull -q $image > /dev/null
vulnerabilities=$(trivy -q image --scanners vuln -f json $image | jq '[.Results[].Vulnerabilities | length] | add')
components=$(trivy -q image --format cyclonedx $image | jq '.components | length')
size=$(docker image ls --format "{{.Size}}" $image| tr -d 'MB')
critical=$(trivy -q image --scanners vuln '--severity=CRITICAL' -f json $image | jq '[.Results[].Vulnerabilities | length] | add')
echo "| $image | $vulnerabilities ($critical) | $components | $size |"
total_vulnerabilities=$((total_vulnerabilities + vulnerabilities))
total_components=$((total_components + components))
total_size=$(echo $total_size + $size | bc)
total_critical=$((total_critical + critical))
done <<< "$images"
echo "| TOTAL | $total_vulnerabilities ($total_critical) | $total_components | $total_size |"
echo
echo "component details:"
echo
echo "| Component | nPods | CPU Requests | Memory Requests | CPU Limits | Memory Limits |"
echo "| --- | --- | --- | --- | --- | --- |"
kubectl get pods -n $ns -o json \
| jq -jr '.items[]
| "| \(.metadata.labels["app.kubernetes.io/instance"]//"")\(.metadata.labels["app"]//"")\(.metadata.labels["app.kubernetes.io/name"]//"") | ",
(.spec.containers | length), " | ",
([.spec.containers[].resources.requests.cpu] | join(", ")), " | ",
([.spec.containers[].resources.requests.memory] | join(", ")), " | ",
([.spec.containers[].resources.limits.cpu] | join(", ")), " | ",
([.spec.containers[].resources.limits.memory] | join(", ")),
" |\n"' \
| sort
echo
ArgoCD Core vs Flux
As stated above, I considered comparing vanilla flux with argocd-core, but this is not an equal match, because argocd-core contains the appset-controller but no notifications controller.
If you’re interested in the results, here they are.
argocd-core
number of components/pods: 4
number of containers at runtime (init excluded): 4
number of images: 2
image details (as of Wed 12 Jul 11:20:05 CEST 2023):
image | nVulns/critical | nComponents | size (MB) |
---|---|---|---|
quay.io/argoproj/argocd | 49 (0) | 471 | 391 |
redis:7.0.11-alpine | 0 (0) | 19 | 30.2 |
TOTAL | 49 (0) | 490 | 421.2 |
component details:
Component | nContainers | CPU Requests | Memory Requests | CPU Limits | Memory Limits |
---|---|---|---|---|---|
argocd-application-controller | 1 | | | | |
argocd-applicationset-controller | 1 | | | | |
argocd-redis | 1 | | | | |
argocd-repo-server | 1 | | | | |
flux (vanilla)
number of components/pods: 4
number of containers at runtime (init excluded): 4
number of images: 4
image details (as of Wed 12 Jul 11:20:13 CEST 2023):
image | nVulns/critical | nComponents | size (MB) |
---|---|---|---|
ghcr.io/fluxcd/helm-controller | 0 (0) | 167 | 80 |
ghcr.io/fluxcd/kustomize-controller | 2 (0) | 245 | 117 |
ghcr.io/fluxcd/notification-controller | 0 (0) | 160 | 83.7 |
ghcr.io/fluxcd/source-controller | 0 (0) | 300 | 88 |
TOTAL | 2 (0) | 872 | 368.7 |
component details:
Component | nContainers | CPU Requests | Memory Requests | CPU Limits | Memory Limits |
---|---|---|---|---|---|
helm-controller | 1 | 100m | 64Mi | 1 | 1Gi |
kustomize-controller | 1 | 100m | 64Mi | 1 | 1Gi |
notification-controller | 1 | 100m | 64Mi | 1 | 1Gi |
source-controller | 1 | 50m | 64Mi | 1 | 1Gi |