Apr 26, 2026

Setting Up Docker CI for Rust with cargo-dist


The Core Idea

Building Rust inside Docker is slow. A typical multi-stage Dockerfile compiles the binary in one stage and copies it into a minimal image in another. That works fine for local builds, but in CI it takes a long time, especially when you're emulating arm64 through QEMU.

The better approach: let cargo-dist handle the compilation as part of the release workflow. By the time the Docker job runs, the binaries are already built and available as GitHub Actions artifacts. Docker just copies them in. QEMU is still needed for the final multi-arch manifest, but it's only moving files around rather than running a compiler through emulation, so arm64 builds don't take nearly as long.

The Setup

The starting point is the cargo-dist quickstart guide. Once that's in place, you need a few configuration pieces to trigger the Docker build after the release.

In release.yml, add a custom-docker-publish job that calls your docker-publish workflow and passes the plan output and binary name as inputs.
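
As a sketch, that hand-added job might look like the following. The job and output names follow cargo-dist's generated workflow, where a `plan` job exposes the dist plan JSON as `needs.plan.outputs.val`; the binary name `my-app` is a placeholder:

```yaml
  # Added to the generated release.yml (allow-dirty = ["ci"] keeps
  # cargo-dist from overwriting the edit when it regenerates CI).
  custom-docker-publish:
    needs:
      - plan
      - announce
    uses: ./.github/workflows/docker-publish.yml
    with:
      plan: ${{ needs.plan.outputs.val }}
      binary_name: my-app  # placeholder
    secrets: inherit
    permissions:
      # mirrors the github-custom-job-permissions setting below
      packages: write
      contents: read
```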

In dist-workspace.toml, set post-announce-jobs to point at your docker workflow:

post-announce-jobs = ["./docker-publish"]
github-custom-job-permissions = { "docker-publish" = { packages = "write", contents = "read" } }
allow-dirty = ["ci"]

The permissions block was needed because, by default, the Docker job didn't have write access to GHCR packages.

The Docker Workflow

The workflow runs as a workflow_call and takes the dist plan JSON, binary name, and target triple suffix as inputs. Here's the overall structure:

on:
  workflow_call:
    inputs:
      plan:
        required: true
        type: string
      binary_name:
        required: true
        type: string
      target_triple_suffix:
        required: false
        type: string
        default: "unknown-linux-musl"

The job itself:

  1. Set up QEMU and Docker Buildx
  2. Log in to GHCR
  3. Extract the version from the dist plan's announcement_tag
  4. Generate Docker metadata (semver tags, major.minor, major, and latest for non-prereleases)
  5. Download the amd64 and arm64 artifacts produced by cargo-dist
  6. Extract and normalize the artifacts, moving binaries into the right folders
  7. Build and push with docker/build-push-action@v6 targeting both linux/amd64 and linux/arm64

The version tags are pulled from the dist plan, so they stay in sync with cargo-dist's release process. The latest tag is skipped for prereleases.
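
A sketch of steps 3 and 4, assuming jq is available on the runner and using docker/metadata-action for tag generation (the image name and step ids are illustrative, not from the original workflow):

```yaml
- name: Extract version from dist plan
  id: version
  run: |
    # announcement_tag looks like "v1.2.3"; drop the leading "v"
    TAG=$(jq -r '.announcement_tag' <<< '${{ inputs.plan }}')
    echo "version=${TAG#v}" >> "$GITHUB_OUTPUT"

- name: Generate Docker metadata
  id: meta
  uses: docker/metadata-action@v5
  with:
    images: ghcr.io/${{ github.repository }}
    # semver types also emit "latest", but only for non-prereleases
    tags: |
      type=semver,pattern={{version}},value=v${{ steps.version.outputs.version }}
      type=semver,pattern={{major}}.{{minor}},value=v${{ steps.version.outputs.version }}
      type=semver,pattern={{major}},value=v${{ steps.version.outputs.version }}
```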

- name: Build and push
  uses: docker/build-push-action@v6
  with:
    context: .
    platforms: linux/amd64,linux/arm64
    push: true
    tags: ${{ steps.meta.outputs.tags }}
    build-args: BINARY_NAME=${{ inputs.binary_name }}
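
Steps 5 and 6 might be sketched like this. The tarball names follow cargo-dist's `<binary>-<target-triple>.tar.xz` convention; the artifact pattern and directory layout are assumptions about this particular setup, chosen so the binaries land where the Dockerfile expects them:

```yaml
- name: Download cargo-dist artifacts
  uses: actions/download-artifact@v4
  with:
    pattern: artifacts-*
    path: dist-artifacts
    merge-multiple: true

- name: Normalize artifacts for the Docker build
  run: |
    BIN='${{ inputs.binary_name }}'
    SUFFIX='${{ inputs.target_triple_suffix }}'
    mkdir -p artifacts/amd64 artifacts/arm64
    # cargo-dist tarballs unpack into a directory named after the tarball
    tar -xJf "dist-artifacts/${BIN}-x86_64-${SUFFIX}.tar.xz"
    mv "${BIN}-x86_64-${SUFFIX}/${BIN}" artifacts/amd64/
    tar -xJf "dist-artifacts/${BIN}-aarch64-${SUFFIX}.tar.xz"
    mv "${BIN}-aarch64-${SUFFIX}/${BIN}" artifacts/arm64/
```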

The Dockerfile

The Dockerfile depends on what your binary needs. I used distroless images and determined the right base image by running ldd on the compiled binary:

linux-vdso.so.1 (0x00007ffdfb764000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6
/lib64/ld-linux-x86-64.so.2

Since this binary needed libc and libm, I went with gcr.io/distroless/cc-debian13:nonroot.

FROM gcr.io/distroless/cc-debian13:nonroot

ARG TARGETARCH
ARG BINARY_NAME

COPY --chmod=755 artifacts/${TARGETARCH}/${BINARY_NAME} /usr/local/bin/app

EXPOSE 8000

USER nonroot:nonroot

ENTRYPOINT ["/usr/local/bin/app"]

The full version with the complete workflow YAML and more context is on my blog.

Source

This article was originally published by DEV Community and written by Wayne.
