Technology Apr 24, 2026 · 13 min read

Kargo: The Missing Piece Between CI and ArgoCD

DEV Community
by Szymon Matuszewski

ArgoCD deploys what's in Git. That's the whole point. It watches a repository, diffs the desired state against the cluster, and reconciles.

The question nobody answers cleanly: who updates Git?

The Problem

In a typical GitOps setup you have two repositories. The application repo where developers write code, and the GitOps repo where Kubernetes manifests live. CI builds a container image, pushes it to a registry, and then... someone or something needs to update the image tag in the GitOps repo so ArgoCD picks it up.

The common approach is to have the CI pipeline commit directly to the GitOps repo. GitLab CI finishes building your Docker image, pushes it to ECR, then runs a job that clones the GitOps repo, updates a YAML file, commits, and pushes. It works.

It also couples your application pipeline to the structure of your GitOps repository. If you reorganize your Kustomize overlays, you're updating CI pipelines in every service repo. If a team wants to change how deployments flow through environments, they're editing .gitlab-ci.yml files. The CI pipeline knows too much about things that aren't its concern.
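For concreteness, the coupled approach typically looks something like this sketch of a GitLab CI job. The repo URL, overlay path, and `DEPLOY_TOKEN` variable are illustrative, not taken from a real pipeline:

```yaml
# .gitlab-ci.yml (sketch) - the tightly coupled approach described above.
# Repo URL, file path, and DEPLOY_TOKEN are placeholders.
update-gitops:
  stage: deploy
  image: alpine/git
  script:
    - git clone "https://deploy:${DEPLOY_TOKEN}@gitlab.com/my-org/gitops-repo.git"
    - cd gitops-repo
    # The application pipeline must know this exact overlay path and YAML key:
    - sed -i "s/tag:.*/tag: ${CI_COMMIT_SHORT_SHA}/" applications/overlays/dev/my-backend/patches.yaml
    - git commit -am "Deploy my-backend ${CI_COMMIT_SHORT_SHA}"
    - git push origin master
```

Every service repo carries a copy of this job, and every GitOps reorganization means touching all of them.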

I hit this exact scenario on a greenfield project. We had GitLab CI for builds, AWS ECR as the registry, and ArgoCD managing deployments. The CI-commits-to-GitOps approach felt too tightly coupled. I wanted something that owned the space between "artifact exists in a registry" and "manifest is updated in Git."

That's where Kargo comes in.

What Kargo Does

Kargo is a continuous promotion platform built by Akuity, the company behind ArgoCD. It doesn't replace ArgoCD - it complements it. ArgoCD handles deployment (cluster state matches Git). Kargo handles promotion (Git state gets updated when new artifacts appear).

The core concepts map cleanly to what you'd build yourself if you had the time:

  • Warehouses monitor registries for new artifacts (container images, Helm charts, Git commits). When something new appears, a Warehouse packages the references into a piece of Freight.
  • Freight is a meta-artifact - a box containing references to specific versions of your images and charts. It travels through your pipeline as a unit.
  • Stages are promotion targets, roughly equivalent to environments. They form a pipeline: dev, staging, prod. Freight moves from one Stage to the next.
  • PromotionTasks define the actual steps to execute when promoting Freight to a Stage. Clone a repo, update a YAML value, commit, push.
  • Projects provide tenancy and policy. Each project gets its own namespace and RBAC.

For the full picture, the Kargo documentation covers these concepts well.

The Architecture

Here's the flow we ended up with:

Developer commits code
       |
       v
GitLab CI: test, build, push
       |
       v
Container image + Helm chart land in AWS ECR
       |
       v
Kargo Warehouse detects new artifact
       |
       v
Freight is created
       |
       v
Stage "dev" promotes automatically (or manually)
  - ClusterPromotionTask clones GitOps repo
  - Updates image tag / chart version in overlay YAML
  - Commits and pushes
       |
       v
ArgoCD detects Git change, syncs to cluster
       |
       v
Stage "prod" promotes manually from Kargo UI
  - Same ClusterPromotionTask, different overlay path
       |
       v
ArgoCD syncs prod

The CI pipeline stops caring after the artifact hits the registry. It doesn't know about Kustomize overlays, environment structures, or ArgoCD. Kargo handles all of that.

ClusterPromotionTasks: Standardized and Reusable

One of the things I appreciated most about Kargo is the separation between ClusterPromotionTasks and PromotionTasks.

A ClusterPromotionTask is cluster-wide. You define it once, and every Kargo project can reference it. This is where you standardize your promotion logic. In our case, we have two:

Image promotion - clones the GitOps repo, updates the image tag in a specific YAML file, commits, and pushes:

apiVersion: kargo.akuity.io/v1alpha1
kind: ClusterPromotionTask
metadata:
  name: cluster-promote-image-git
spec:
  vars:
    - name: branch
      value: master
    - name: repoURL
      value: git@gitlab.com:my-org/gitops-repo.git
    - name: yamlFilename
    - name: envPath
    - name: image
  steps:
    - uses: git-clone
      config:
        repoURL: ${{ vars.repoURL }}
        checkout:
          - branch: ${{ vars.branch }}
            create: true
            path: ./out-image
    - uses: yaml-update
      as: update-image
      config:
        path: ./out-image/${{ vars.envPath }}/${{ vars.yamlFilename }}
        updates:
          - key: spec.source.helm.valuesObject.image.tag
            value: ${{ quote(imageFrom( vars.image ).Tag) }}
    - uses: git-commit
      as: commit
      config:
        path: ./out-image
        message: ${{ task.outputs['update-image'].commitMessage }}
    - uses: git-push
      config:
        path: ./out-image

Chart promotion - same pattern, but updates the Helm chart version:

apiVersion: kargo.akuity.io/v1alpha1
kind: ClusterPromotionTask
metadata:
  name: cluster-promote-helm-git
spec:
  vars:
    - name: branch
      value: master
    - name: repoURL
      value: git@gitlab.com:my-org/gitops-repo.git
    - name: chart
    - name: envPath
    - name: yamlFilename
  steps:
    - uses: git-clone
      config:
        repoURL: ${{ vars.repoURL }}
        checkout:
          - branch: ${{ vars.branch }}
            create: true
            path: ./out-chart
    - uses: yaml-update
      as: update-chart
      config:
        path: ./out-chart/${{ vars.envPath }}/${{ vars.yamlFilename }}
        updates:
          - key: spec.source.targetRevision
            value: ${{ quote(chartFrom( vars.chart ).Version) }}
    - uses: git-commit
      as: commit
      config:
        path: ./out-chart
        message: ${{ task.outputs['update-chart'].commitMessage }}
    - uses: git-push
      config:
        path: ./out-chart

Both tasks use variables for the environment path and filename, making them work across any application and any environment overlay. The defaults (repo URL, branch) are set once at the cluster level. Individual Stages only provide what's unique to them.

[Screenshot: Kargo UI showing a completed promotion with step-by-step execution of two ClusterPromotionTasks]

A PromotionTask (without the "Cluster" prefix) is namespaced - scoped to a single Kargo project. If a specific application needs custom promotion logic (maybe it requires a database migration step, or a Slack notification, or a different Git update strategy), you define a PromotionTask in that project's namespace. It overrides or extends the cluster-wide standard without affecting anything else.

This gives you standardized pipelines across the cluster, with per-project escape hatches when needed. Check the promotion steps reference for the full list of built-in steps.
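As an illustration, a hypothetical project-scoped task that posts a Slack notification via Kargo's built-in `http` step might look like the sketch below. The task name, variable, and message body are invented for this example; check the promotion steps reference for the exact `http` config schema:

```yaml
apiVersion: kargo.akuity.io/v1alpha1
kind: PromotionTask            # namespaced, unlike ClusterPromotionTask
metadata:
  name: notify-slack           # hypothetical project-specific task
  namespace: my-backend
spec:
  vars:
    - name: webhookURL         # Slack incoming-webhook URL, supplied by the Stage
  steps:
    - uses: http
      config:
        method: POST
        url: ${{ vars.webhookURL }}
        headers:
          - name: Content-Type
            value: application/json
        body: |
          {"text": "Promotion running for stage ${{ ctx.stage }} in project ${{ ctx.project }}"}
```

Because it lives in the project's namespace, only `my-backend` Stages can reference it; the cluster-wide tasks stay untouched.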

Per-Application Pipelines

Each application gets its own Kargo Project with Warehouses, Stages, and promotion policies. Here's what a typical backend service looks like.

Project and policies - auto-promote to dev, manual gate for prod:

apiVersion: kargo.akuity.io/v1alpha1
kind: Project
metadata:
  name: my-backend
---
apiVersion: kargo.akuity.io/v1alpha1
kind: ProjectConfig
metadata:
  name: my-backend
  namespace: my-backend
spec:
  promotionPolicies:
    - stage: dev
      autoPromotionEnabled: true
    - stage: prod
      autoPromotionEnabled: false

Warehouses - watching ECR for new images and Helm charts:

apiVersion: kargo.akuity.io/v1alpha1
kind: Warehouse
metadata:
  name: ecr-images
  namespace: my-backend
spec:
  freightCreationPolicy: Automatic
  subscriptions:
    - image:
        repoURL: 123456789.dkr.ecr.eu-south-1.amazonaws.com/my-org/backend
        allowTagsRegexes:
          - ^[a-f0-9]{8}$
        ignoreTagsRegexes:
          - ^latest$
        imageSelectionStrategy: NewestBuild
        discoveryLimit: 20
        cacheByTag: true
---
apiVersion: kargo.akuity.io/v1alpha1
kind: Warehouse
metadata:
  name: ecr-charts
  namespace: my-backend
spec:
  freightCreationPolicy: Automatic
  subscriptions:
    - chart:
        repoURL: oci://123456789.dkr.ecr.eu-south-1.amazonaws.com/my-org/backend
        discoveryLimit: 5

In a trunk-based workflow, every merge to main produces an image tagged with the commit SHA. A single Warehouse watches for those tags and creates Freight. The same Freight then travels through the pipeline - dev first, prod after. The NewestBuild strategy picks the most recently built image, not just the lexicographically newest tag.
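A build job that produces tags matching the `^[a-f0-9]{8}$` filter above could be as simple as this sketch (GitLab's predefined `$CI_COMMIT_SHORT_SHA` is the first eight characters of the commit SHA by default; `$ECR_REPO` is a placeholder variable):

```yaml
# .gitlab-ci.yml (sketch) - tag images with the short commit SHA so the
# Warehouse's ^[a-f0-9]{8}$ filter picks them up. ECR_REPO is a placeholder.
build:
  stage: build
  script:
    - docker build -t "${ECR_REPO}:${CI_COMMIT_SHORT_SHA}" .
    - docker push "${ECR_REPO}:${CI_COMMIT_SHORT_SHA}"
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
```

The CI job neither knows nor cares what happens to the tag afterwards.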

Stages - dev gets Freight directly from the Warehouse, prod requires Freight that passed through dev first:

kind: Stage
apiVersion: kargo.akuity.io/v1alpha1
metadata:
  name: dev
  namespace: my-backend
spec:
  requestedFreight:
    - origin:
        kind: Warehouse
        name: ecr-images
      sources:
        direct: true
    - origin:
        kind: Warehouse
        name: ecr-charts
      sources:
        direct: true
  promotionTemplate:
    spec:
      steps:
        - task:
            name: cluster-promote-image-git
            kind: ClusterPromotionTask
          vars:
            - name: image
              value: 123456789.dkr.ecr.eu-south-1.amazonaws.com/my-org/backend
            - name: envPath
              value: applications/overlays/dev/my-backend
            - name: yamlFilename
              value: patches.yaml
        - task:
            name: cluster-promote-helm-git
            kind: ClusterPromotionTask
          vars:
            - name: chart
              value: oci://123456789.dkr.ecr.eu-south-1.amazonaws.com/my-org/backend
            - name: envPath
              value: applications/overlays/dev/my-backend
            - name: yamlFilename
              value: patches.yaml
---
kind: Stage
apiVersion: kargo.akuity.io/v1alpha1
metadata:
  name: prod
  namespace: my-backend
spec:
  requestedFreight:
    - origin:
        kind: Warehouse
        name: ecr-images
      sources:
        availabilityStrategy: All
        stages:
          - dev
    - origin:
        kind: Warehouse
        name: ecr-charts
      sources:
        availabilityStrategy: All
        stages:
          - dev
  promotionTemplate:
    spec:
      steps:
        - task:
            name: cluster-promote-image-git
            kind: ClusterPromotionTask
          vars:
            - name: image
              value: 123456789.dkr.ecr.eu-south-1.amazonaws.com/my-org/backend
            - name: envPath
              value: applications/overlays/prod/my-backend
            - name: yamlFilename
              value: patches.yaml
        - task:
            name: cluster-promote-helm-git
            kind: ClusterPromotionTask
          vars:
            - name: chart
              value: oci://123456789.dkr.ecr.eu-south-1.amazonaws.com/my-org/backend
            - name: envPath
              value: applications/overlays/prod/my-backend
            - name: yamlFilename
              value: patches.yaml

The key detail in the prod Stage: stages: [dev] under sources. Freight isn't available for promotion to prod until it has been successfully promoted to dev. This is the natural flow - the same artifact proves itself in dev before it can move forward.

Adapting to Branch-Based Workflows

Not every team uses trunk-based development, and Kargo adapts to that. If your CI produces branch-prefixed tags (e.g., dev-abc1234 from the dev branch, master-abc1234 from master), you can split the image Warehouses per branch:

apiVersion: kargo.akuity.io/v1alpha1
kind: Warehouse
metadata:
  name: ecr-images-dev
  namespace: my-backend
spec:
  subscriptions:
    - image:
        repoURL: 123456789.dkr.ecr.eu-south-1.amazonaws.com/my-org/backend
        allowTagsRegexes:
          - ^dev-.*
        imageSelectionStrategy: NewestBuild
---
apiVersion: kargo.akuity.io/v1alpha1
kind: Warehouse
metadata:
  name: ecr-images-master
  namespace: my-backend
spec:
  subscriptions:
    - image:
        repoURL: 123456789.dkr.ecr.eu-south-1.amazonaws.com/my-org/backend
        allowTagsRegexes:
          - ^master-.*
        imageSelectionStrategy: NewestBuild

In this setup, the dev Stage pulls from ecr-images-dev and the prod Stage pulls from ecr-images-master directly, rather than requiring Freight to flow through dev first. The Warehouses, tag filters, and Stage wiring change - the ClusterPromotionTasks stay exactly the same. That's the flexibility: Kargo's promotion logic is decoupled from how your team manages branches and tagging.
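The matching CI side of this workflow is a small change: prefix the tag with the branch name. A sketch, again with `$ECR_REPO` as a placeholder (`$CI_COMMIT_REF_SLUG` is GitLab's sanitized branch name):

```yaml
# .gitlab-ci.yml (sketch) - produce branch-prefixed tags like dev-abc1234
# and master-abc1234, matching the per-branch Warehouse filters above.
build:
  stage: build
  script:
    - docker build -t "${ECR_REPO}:${CI_COMMIT_REF_SLUG}-${CI_COMMIT_SHORT_SHA}" .
    - docker push "${ECR_REPO}:${CI_COMMIT_REF_SLUG}-${CI_COMMIT_SHORT_SHA}"
  rules:
    - if: $CI_COMMIT_BRANCH == "dev" || $CI_COMMIT_BRANCH == "master"
```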

[Screenshot: Kargo UI pipeline view showing Warehouses, Freight, and Stages for a backend service]

Notice how the Stages just reference the ClusterPromotionTasks and pass environment-specific variables. The promotion logic itself is defined once. Adding a new application means creating a new set of Warehouses, Stages, and a Project - not writing new promotion logic.

Kargo for Infrastructure Too

This is where Kargo surprised me. I initially set it up for application deployments, but quickly realized it works just as well for tracking infrastructure component versions.

We have Traefik deployed via Helm from the upstream chart. Keeping Helm charts up to date across environments is one of those tasks that's easy to forget. You check the release page every few weeks, maybe.

With Kargo, a Warehouse watches the upstream Helm chart repository:

apiVersion: kargo.akuity.io/v1alpha1
kind: Warehouse
metadata:
  name: ghcr-charts
  namespace: kargo-traefik
spec:
  freightCreationPolicy: Automatic
  subscriptions:
    - chart:
        repoURL: oci://ghcr.io/traefik/helm/traefik
        discoveryLimit: 10

When a new Traefik chart version is published, Freight appears in the Kargo UI. I can see what version I'm running, what's available, and promote with a click. It's effectively a dependency dashboard that also handles the update.

[Screenshot: Kargo UI showing available Traefik Helm chart versions ready to promote]

For infrastructure components, we keep auto-promotion disabled for both dev and prod. These updates should be intentional. Kargo just makes sure you know there's something new and gives you a one-click path to deploy it.
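In ProjectConfig terms, that means disabling auto-promotion for every stage - a sketch mirroring the application example earlier, with the project name taken from the Warehouse above:

```yaml
apiVersion: kargo.akuity.io/v1alpha1
kind: ProjectConfig
metadata:
  name: kargo-traefik
  namespace: kargo-traefik
spec:
  promotionPolicies:
    - stage: dev
      autoPromotionEnabled: false   # infra updates should be intentional,
    - stage: prod                   # even in dev
      autoPromotionEnabled: false
```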

What I Liked

Rollbacks are git commits. When you roll back in Kargo, it doesn't manipulate ArgoCD's internal state. It makes a new commit to your Git repo with the previous version. Your Git history stays the single source of truth. No hidden state, no "ArgoCD thinks it's running X but Git says Y" confusion.

Developers can self-serve. The Kargo UI is clean and intuitive for daily use. A developer can see what version is in dev, what's available, and promote to prod - all without touching Git or CI. For teams where you want to expose deployment controls without giving everyone access to ArgoCD or the GitOps repo, this is valuable.

Clean separation of concerns. CI builds artifacts and pushes them to a registry. Full stop. It doesn't know about environments, Kustomize overlays, or ArgoCD. Kargo owns the promotion logic. ArgoCD owns the deployment. Each tool does one thing.

The abstraction layer works for everyone. CI/CD pipelines just build and forget. Developers get a visual pipeline with drag-and-drop (or one-click) promotions. DevOps can define and enforce promotion policies per project. Platform teams get a standardized, reusable promotion framework. Different stakeholders interact with the same system at different levels.

The Rough Edges

Namespace proliferation. Every Kargo Project creates its own namespace. If you have ten services, you get ten Kargo namespaces on top of your existing ones. This was mostly an aesthetic annoyance - it works fine, but kubectl get ns gets busy. This might have been my own design choice rather than a Kargo requirement, but I wanted to separate Kargo resources from application workloads.

ArgoCD annotation limitation. Kargo can trigger an ArgoCD Application refresh after pushing a commit, saving you from waiting for ArgoCD's polling interval. It uses the kargo.akuity.io/authorized-stage annotation on the ArgoCD Application resource to authorize this. The problem: this annotation only supports a single Kargo project and stage. If you use the app-of-apps pattern (which is common), you'd want multiple Kargo projects to be able to refresh your root Application. That's currently not possible - no comma-delimited or glob support. We fell back to ArgoCD's default polling, which means there's a small delay between Kargo's git push and ArgoCD picking it up. Functional, not ideal.
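For reference, the annotation sits on the ArgoCD Application and, in our experience, accepts exactly one `project:stage` pair (names below are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-backend-dev
  namespace: argocd
  annotations:
    # Only ONE project:stage pair is accepted - no comma lists, no globs.
    kargo.akuity.io/authorized-stage: my-backend:dev
spec: {}  # trimmed for brevity
```

With an app-of-apps root Application, there's no value you can put here that authorizes refreshes from more than one Kargo project.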

Polling vs webhooks. Kargo Warehouses poll registries at intervals by default. If the interval is too long, you get delayed Freight creation on top of ArgoCD's polling delay. Too short, and you're hammering your registry. For AWS ECR, there's no official webhook receiver yet. A generic webhook triggered by CI after a successful push would solve this, though it slightly re-couples CI with Kargo (even if it's just a notification). The Kargo docs acknowledge this trade-off.
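The polling cadence is tunable per Warehouse via `spec.interval` - my understanding is that the default is on the order of minutes. A sketch of shortening it for one hot repository:

```yaml
apiVersion: kargo.akuity.io/v1alpha1
kind: Warehouse
metadata:
  name: ecr-images
  namespace: my-backend
spec:
  interval: 2m0s   # shorter = fresher Freight, but more registry API calls
  subscriptions:
    - image:
        repoURL: 123456789.dkr.ecr.eu-south-1.amazonaws.com/my-org/backend
```

Tuning the interval narrows the window but never closes it; only a webhook-style trigger removes the polling delay entirely.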

Active development means change. Kargo is still maturing. The product itself has been rock-solid in our experience - zero failures or downtime. The concern is more about API surface evolution and potential breaking changes between versions. The Akuity team is responsive and the project moves fast, which is both a strength and something to plan for.

Would I Use It Again?

Yes. Without hesitation for a project of similar scope - a small-to-medium team with some tolerance for adopting relatively new tooling.

Kargo filled a real gap in our GitOps workflow. The CI pipeline is simpler because it doesn't manage deployment concerns. The GitOps repo stays clean because updates come through a structured, auditable process rather than arbitrary CI commits. Developers have visibility and control over promotions without needing to understand the underlying Git structure.

For larger organizations or risk-averse environments, I'd wait until the API surface stabilizes more and the annotation limitation gets resolved. The product is stable. The ecosystem around it is still catching up.

One thing I'd do differently: set up webhook notifications from CI to Kargo from day one, rather than relying purely on polling. The extra latency from double-polling (Kargo polls registry + ArgoCD polls Git) is noticeable when you're used to instant feedback.

Kargo is built by the ArgoCD creators and feels like a natural part of the ecosystem. If you're running ArgoCD and have solved the "who updates Git?" question differently, I'd be curious how. Are you committing from CI? Using image-updater? Something custom? And if you've tried Kargo - what's been your experience with it at scale?

Source

This article was originally published by DEV Community and written by Szymon Matuszewski.
