Motivation: Color as a “Type System” for Pictures
If you have ever looked at a black-and-white cellular automaton that felt like pure static, you are not alone. Many classical diagrams of elementary rules compress rich structure into just two colors.
As programmers, we already accept that types expose structure in code that is invisible in raw bytes. In this article, I will argue that well-designed color palettes can play a similar role for visual models of discrete systems such as cellular automata.
We will treat color not as decoration, but as a small “visual DSL” layered on top of the underlying automaton. The goal is to make patterns, invariants, and probabilistic behavior easier to see, reason about, and debug.
By the end, you should be able to design color mappings that behave like syntax highlighting for your automata: they do not change the semantics, but they radically change what you perceive.
A Quick, Precise Recap of 1D Elementary CA
Let us fix a standard playground: one-dimensional binary cellular automata with radius 1 (Wolfram’s elementary CA).
The configuration at time step (t) is a bi-infinite (or large finite) sequence of bits. Each cell updates based on its left neighbor, itself, and its right neighbor.
A rule is a function (f) that maps each neighborhood triple to the cell's next state. In Wolfram’s notation, you pack the eight output bits into an 8-bit integer (R \in {0,\dots,255}) by ordering the input triples from 111 down to 000.
Formally, if (c_i^t) denotes the state of cell (i) at time (t), the update is (c_i^{t+1} = f(c_{i-1}^t, c_i^t, c_{i+1}^t)).
To make this more concrete, here is a minimal Python sketch that builds the local rule table from the Wolfram rule number:
from typing import Dict, Tuple

def rule_table(rule_number: int) -> Dict[Tuple[int, int, int], int]:
    """Return the mapping (l, c, r) -> new_state for an elementary rule."""
    assert 0 <= rule_number <= 255
    mapping = {}
    # Neighborhoods in Wolfram order: 111, 110, ..., 000
    neighborhoods = [(a, b, c)
                     for a in (1, 0)
                     for b in (1, 0)
                     for c in (1, 0)]
    for i, nbh in enumerate(neighborhoods):
        # neighborhoods[0] is 111, which reads bit 7; neighborhoods[7] is 000, bit 0
        bit = (rule_number >> (7 - i)) & 1
        mapping[nbh] = bit
    return mapping

def step(config, mapping):
    """One CA step on a finite config with wrap-around."""
    n = len(config)
    return [
        mapping[(config[(i - 1) % n], config[i], config[(i + 1) % n])]
        for i in range(n)
    ]
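As a quick sanity check (re-declaring the two functions so the snippet runs on its own), rule 90 grown from a single seed should trace out the familiar alternating pattern:

```python
from typing import Dict, Tuple

def rule_table(rule_number: int) -> Dict[Tuple[int, int, int], int]:
    # Neighborhoods in Wolfram order 111 .. 000; 111 reads bit 7.
    neighborhoods = [(a, b, c) for a in (1, 0) for b in (1, 0) for c in (1, 0)]
    return {nbh: (rule_number >> (7 - i)) & 1 for i, nbh in enumerate(neighborhoods)}

def step(config, mapping):
    n = len(config)
    return [mapping[(config[(i - 1) % n], config[i], config[(i + 1) % n])]
            for i in range(n)]

mapping = rule_table(90)
rows = [[0, 0, 0, 1, 0, 0, 0]]          # single seed, width 7, wrap-around
for _ in range(3):
    rows.append(step(rows[-1], mapping))
for row in rows:
    print("".join(".#"[cell] for cell in row))
# ...#...
# ..#.#..
# .#...#.
# #.#.#.#
```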
Color as a Semantics Layer over Configurations
The usual elementary CA pictures map state 0 to white and state 1 to black.
That representation is faithful, but extremely low bandwidth. It tells you which cells are alive, yet it hides almost everything about why they are alive, where they came from, and how different regions of the diagram relate to one another.
This is where the type-system analogy becomes useful.
In source code, a type annotation does not change the machine instructions, but it changes how a human reads the program. It reveals categories, constraints, and intended use. Likewise, a good color mapping does not alter the cellular automaton. It adds an interpretable layer that exposes latent structure.
Here are four kinds of structure color can encode:
- State: the obvious case, such as dead vs alive.
- Time: older vs newer generations, or early vs late phases.
- Origin: which seed, source, or layer caused a cell to appear.
- Confidence or probability: how stable, random, or likely a transition was.
Once you start thinking this way, color stops being ornamental. It becomes metadata.
Why Black and White Often Fails
Binary rendering is ideal when your only question is "is this cell 0 or 1?" But in practice, we usually want to ask better questions:
- Where are repeated motifs emerging?
- Which fronts are colliding?
- Is the rule producing symmetry or merely noise?
- Which visible structures are deterministic, and which are artifacts of probabilistic updates?
- If multiple seeds are active, which region belongs to which seed?
With black and white, all affirmative answers collapse to the same visual token: a dark pixel.
That is similar to printing an entire typed AST in one monochrome font. The information is technically there, but human parsing cost becomes much higher.
Three Practical Palette Strategies
Let us make the idea concrete. Here are three palette designs that are immediately useful for elementary CA.
1. Time-Gradient Coloring
The simplest upgrade is to color live cells by generation index instead of a single foreground color.
For example, early generations might start in amber, move through gold, and end in blue. Now a single image carries both occupancy and temporal depth.
This is especially effective for rules where geometry matters more than local density:
- Rule 90 reveals nested self-similarity more clearly.
- Rule 30 separates early ordered structure from later pseudo-random growth.
- Rules with expanding fronts become easier to read as directional processes.
A tiny implementation looks like this:
def lerp(a, b, t):
    return int(round(a + (b - a) * t))

def gradient(colors, steps):
    if len(colors) == 1:
        return [colors[0]] * steps
    out = []
    segments = len(colors) - 1
    for i in range(steps):
        u = i / max(steps - 1, 1)
        s = min(int(u * segments), segments - 1)
        local = u * segments - s
        c1, c2 = colors[s], colors[s + 1]
        out.append(tuple(lerp(x, y, local) for x, y in zip(c1, c2)))
    return out

def colorize_rows(grid, colors):
    row_colors = gradient(colors, len(grid))
    pixels = []
    for y, row in enumerate(grid):
        pixels.append([
            row_colors[y] if cell == 1 else (8, 8, 12)
            for cell in row
        ])
    return pixels
Notice what happened: we did not change the automaton at all. We only changed the projection from symbolic state to pixels.
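For completeness, here is a self-contained miniature of the same projection, using a hypothetical two-stop amber-to-blue ramp keyed directly to the row index:

```python
def lerp_color(c1, c2, t):
    # Componentwise linear interpolation between two RGB triples, t in [0, 1].
    return tuple(int(round(a + (b - a) * t)) for a, b in zip(c1, c2))

AMBER, BLUE = (255, 191, 0), (40, 80, 220)   # hypothetical palette endpoints
BG = (8, 8, 12)

grid = [[0, 1, 0],
        [1, 1, 1],
        [1, 0, 1]]                            # toy occupancy grid
h = len(grid)
pixels = [
    [lerp_color(AMBER, BLUE, y / max(h - 1, 1)) if cell else BG for cell in row]
    for y, row in enumerate(grid)
]
print(pixels[0][1])   # top-row live cell: pure amber (255, 191, 0)
print(pixels[2][0])   # bottom-row live cell: pure blue (40, 80, 220)
```

The occupancy data and the color ramp never touch: swap the endpoints and the automaton's history is untouched.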
2. Origin-Based Coloring
Suppose you start from several seeds instead of one. In black and white, their descendants merge into a single mass. You lose lineage.
Instead, assign each initial seed its own palette. Descendants inherit that palette, possibly with a per-row gradient. When fronts meet, you can literally see interaction boundaries.
This is one of the strongest arguments for color as semantics: the rendered image now carries provenance.
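The article does not fix an inheritance rule, so here is one minimal sketch, assuming (this heuristic is an invention for illustration) that a newborn live cell inherits the origin label of a live parent, preferring center, then left, then right:

```python
def step_with_origin(states, origins, mapping):
    """One CA step that propagates an origin label alongside the state.

    Inheritance heuristic (an assumption, not part of the rule itself):
    prefer the center parent, then left, then right; a 1 born from an
    all-dead neighborhood is tagged "spontaneous".
    """
    n = len(states)
    new_states, new_origins = [], []
    for i in range(n):
        l, c, r = states[(i - 1) % n], states[i], states[(i + 1) % n]
        s = mapping[(l, c, r)]
        if s == 0:
            o = None
        elif c:
            o = origins[i]
        elif l:
            o = origins[(i - 1) % n]
        elif r:
            o = origins[(i + 1) % n]
        else:
            o = "spontaneous"
        new_states.append(s)
        new_origins.append(o)
    return new_states, new_origins
```

Rendering then keys hue off the origin label, so regions descended from different seeds stay visually distinct until their fronts actually collide.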
In a tool like Cellcosmos, this becomes especially powerful because each point can carry:
- its own rule,
- its own palette,
- its own probability settings,
- its own propagation direction.
At that point the image is no longer just "a rule diagram." It becomes a visual trace of several local processes sharing the same space.
3. Probability-Aware Coloring
Many interesting systems are not purely deterministic. You might flip the output with probability (p), modulate (p) by row, or vary it across regions.
A flat color map hides this. A probability-aware map can expose it.
For example:
- use saturation to represent certainty,
- use brightness to represent transition probability,
- use a cool-to-warm ramp to show low-to-high stochastic influence.
This is analogous to showing uncertainty in a scientific heatmap. The automaton still has binary states, but the rendering communicates how those states were obtained.
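A minimal sketch of both halves, assuming a per-cell flip probability (p) and a saturation ramp where deterministic cells stay fully vivid:

```python
import colorsys
import random

def stochastic_step(config, mapping, p, rng):
    """Elementary CA step whose output bit is flipped with probability p."""
    n = len(config)
    out = []
    for i in range(n):
        bit = mapping[(config[(i - 1) % n], config[i], config[(i + 1) % n])]
        if rng.random() < p:
            bit ^= 1          # stochastic corruption of the deterministic rule
        out.append(bit)
    return out

def certainty_color(hue, p):
    """Desaturate toward gray as p approaches 0.5 (maximum uncertainty)."""
    saturation = 1.0 - 2.0 * min(p, 0.5)
    r, g, b = colorsys.hsv_to_rgb(hue, saturation, 0.9)
    return tuple(int(round(255 * v)) for v in (r, g, b))

# p = 0 reproduces the deterministic rule; p = 0.5 renders as pure gray.
```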
A Useful Design Rule: Encode One Idea Per Visual Channel
It is tempting to throw every signal into hue, saturation, brightness, alpha, texture, and blending all at once. That usually creates mud.
A better approach is to assign meaning deliberately:
- Hue for category or origin
- Brightness for time or intensity
- Saturation for certainty or confidence
- Texture for qualitative mode differences
- Blend mode for interaction between layers
This is exactly how we design readable APIs and type systems: each construct should carry a stable meaning.
If red sometimes means "left seed," sometimes means "late time," and sometimes means "high entropy," the picture becomes semantically inconsistent.
Color Mappings as a Visual DSL
Once the mapping is deliberate, we can describe it almost like a tiny language.
For example:
Alive cells from seed A use an amber-to-gold gradient by time.
Alive cells from seed B use a cyan-to-blue gradient by time.
Cells created under high stochasticity lose saturation.
Overlapping layers use screen blending.
That is not just an aesthetic recipe. It is a domain-specific specification for how semantic features become visible.
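That specification can even be written down as data. Here is a hypothetical encoding of the four rules above (the key names and structure are invented for illustration), plus the standard screen-blend formula the last rule refers to:

```python
# Hypothetical palette spec mirroring the prose rules above; keys are invented.
PALETTE_SPEC = {
    "seed_A":     {"channel": "hue", "ramp": ["amber", "gold"], "by": "time"},
    "seed_B":     {"channel": "hue", "ramp": ["cyan", "blue"],  "by": "time"},
    "stochastic": {"channel": "saturation", "scale": "1 - 2 * min(p, 0.5)"},
    "overlap":    {"blend": "screen"},
}

def screen_blend(c1, c2):
    """Screen blending: result = 255 - (255 - a) * (255 - b) / 255 per channel."""
    return tuple(255 - (255 - a) * (255 - b) // 255 for a, b in zip(c1, c2))

print(screen_blend((255, 0, 0), (0, 0, 255)))  # (255, 0, 255): light accumulates
```

A spec like this can be validated, versioned, and diffed, exactly like any other interface contract.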
This framing is helpful because it lets us evaluate palettes the same way we evaluate abstractions in code:
- Is the mapping compositional?
- Is it consistent?
- Does it preserve important distinctions?
- Does it make debugging easier?
- Does it fail gracefully under complexity?
Reading Classical Rules Through Better Color
Let us briefly revisit a few canonical elementary rules.
Rule 30
Rule 30 is famous because it looks random despite being deterministic. In monochrome, the eye often over-focuses on the noisy right side.
A temporal gradient helps separate:
- the highly regular left boundary,
- the central growth cone,
- the later pseudo-random texture.
That turns "interesting chaos blob" into a more analyzable object with phases.
Rule 90
Rule 90 is the XOR rule and generates the Sierpinski triangle from a single seed.
Monochrome already shows the triangle, but a gradient emphasizes recursive depth. Nested triangles stop looking like empty holes and start reading as levels in a construction.
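The XOR claim is easy to verify mechanically against the Wolfram bit ordering introduced earlier:

```python
# Rule 90's output for every neighborhood equals the XOR of the outer cells.
neighborhoods = [(a, b, c) for a in (1, 0) for b in (1, 0) for c in (1, 0)]
for i, (l, _, r) in enumerate(neighborhoods):
    assert (90 >> (7 - i)) & 1 == l ^ r   # neighborhoods[0] = 111 reads bit 7
print("rule 90 = left XOR right")
```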
Rule 110
Rule 110 is celebrated for computational universality. The challenge is that meaningful localized structures can be hard to track in a crowded render.
Origin-aware palettes and layer-based blending help distinguish traveling structures, interactions, and collision zones. In other words, color can make the computation less opaque.
Implementation Pattern: Separate Simulation from Rendering
There is also a software-engineering lesson here.
Do not bake palette logic into the update rule itself. Keep the simulation symbolic and the rendering declarative.
That means:
- The automaton computes states and any auxiliary metadata.
- A rendering layer maps those states and metadata into colors, textures, and blend decisions.
That separation has several benefits:
- you can compare multiple visualizations of the same run,
- you can debug rule behavior independently of style,
- you can experiment with palettes without risking semantic changes,
- you can export both raw state and rendered imagery.
This mirrors a principle we already value in programming: keep computation separate from presentation.
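The split described above can be sketched in a few lines: `simulate` knows nothing about pixels, and `render` is a pure function you can swap freely (the rule `mapping` is assumed to be a neighborhood-to-bit table as built earlier):

```python
def simulate(mapping, initial, steps):
    """Simulation layer: produces symbolic states only, no color decisions."""
    history = [list(initial)]
    n = len(initial)
    for _ in range(steps):
        prev = history[-1]
        history.append([
            mapping[(prev[(i - 1) % n], prev[i], prev[(i + 1) % n])]
            for i in range(n)
        ])
    return history

def render(history, live_color, dead_color):
    """Rendering layer: a pure projection from states to pixels."""
    return [[live_color if cell else dead_color for cell in row]
            for row in history]
```

Because `render` never mutates `history`, you can produce several differently styled images of the same run and still export the raw state alongside them.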
A Minimal Mental Checklist for Palette Design
When designing a palette for a CA experiment, I find these questions useful:
- What is the single most important hidden variable I want to reveal: time, origin, probability, or class of event?
- Which visual channel should encode that variable?
- Will the mapping remain legible if the grid becomes dense?
- Does the background support contrast without overwhelming the cells?
- If two regions interact, will I still be able to tell them apart?
If you can answer those five questions clearly, the palette usually ends up carrying real semantic weight.
From Toy Aesthetic to Research Instrument
This is the bigger point of the article.
When we add thoughtfully designed color to discrete visual systems, we are not merely decorating output. We are building better instruments for human perception.
A syntax highlighter helps us see control flow, names, and errors more quickly. A good CA palette helps us see symmetry, causality, growth, interaction, and uncertainty more quickly.
That is why I like the phrase "color as a type system for pictures." It suggests discipline, not ornament. The palette is a contract between the model and the viewer.
Closing
Elementary cellular automata are small enough to fit in a tweet-sized rule table, yet rich enough to generate order, randomness, self-similarity, and computation. That makes them a perfect place to practice semantic visualization.
If you are building your own automaton explorer, try one experiment: keep the rule fixed and change only the color semantics. Add a time gradient. Then add seed-based palettes. Then add probability-aware desaturation. You will likely discover that the "same" automaton becomes much easier to read.
That is the core claim: in discrete systems, color can function like a lightweight type layer over raw state. It does not alter the machine, but it dramatically improves the human interface.
And once you see that, it is hard to go back to pure black and white.
This article was originally published by DEV Community and written by John Samuel.