Technology Apr 30, 2026 · 6 min read

I built a textile pattern generation API because PatternedAI has no API

DEV Community
by Om Prakash

There's a real category gap in the AI-pattern space.

PatternedAI has 600K users. Spoonflower's design tools are everywhere. Both are excellent GUIs for textile designers. Neither has a public REST API. So if you're a print-on-demand shop, a Shopify store auto-generating colorways, or an indie game studio that needs seamless fabric textures — you're stuck either copy-pasting through a web UI or paying enterprise rates for a custom integration.

I shipped PixelAPI's /v1/pattern endpoint yesterday — 8 styles, 512px or 1024px output, recolor and upscale ops, and seamlessly tileable output. At $0.008/pattern, it's 2-5× cheaper than PatternedAI's GUI sessions.

This isn't a "Show HN, please clap." This is the story of what almost shipped at 2/10 quality, why I caught it before customers did, and the open-source-only tooling that got us to 8.4/10 average.

What's in the box

# 1. Generate
curl -X POST https://api.pixelapi.dev/v1/pattern/generate \
  -H "Authorization: Bearer YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "style": "ikat",
    "resolution": "512",
    "prompt": "indigo and cream traditional ikat textile"
  }'
# → {"generation_id":"...","credits_used":8,"poll_url":"/v1/pattern/{id}"}

# 2. Poll
curl https://api.pixelapi.dev/v1/pattern/{id} \
  -H "Authorization: Bearer YOUR_KEY"
# → {"status":"completed","output_url":"https://api.pixelapi.dev/outputs/.../1495e592...png"}

# 3. (Optional) recolor a copy
curl -X POST https://api.pixelapi.dev/v1/pattern/recolor \
  -d '{"source_url":"...","hue_shift":180}'
# → 2 credits, hue rotation in HSV

# 4. (Optional) upscale to 2048px print-ready
curl -X POST https://api.pixelapi.dev/v1/pattern/upscale \
  -d '{"source_url":"..."}'
# → 3 credits, Lanczos
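The generate-then-poll flow above wraps into a few lines of client code. A sketch, not an official SDK — the helper name, timeout, and interval are my choices; `fetch_status` is injected (e.g. a thin wrapper around `requests.get`) so the loop is testable without network access:

```python
import time

def wait_for_pattern(fetch_status, poll_url, timeout_s=120, interval_s=2.0):
    """Poll a pattern job until it completes, then return its output URL.

    `fetch_status` is any callable that GETs `poll_url` and returns the
    decoded JSON dict from the API."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        job = fetch_status(poll_url)
        if job["status"] == "completed":
            return job["output_url"]
        if job["status"] == "failed":
            raise RuntimeError(job.get("error", "generation failed"))
        time.sleep(interval_s)
    raise TimeoutError(f"gave up waiting on {poll_url}")
```

Injecting the fetcher also makes it trivial to add retries or auth headers without touching the polling logic.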

The 8 styles + the model behind each

| Style | Model | Why |
| --- | --- | --- |
| Floral | PatternDiffusion | SD2 fine-tuned on 6.8M tileable patterns; ditsy-print sweet spot |
| Geometric | PatternDiffusion | Tessellation + grid prompts |
| Ikat | PatternDiffusion | Traditional Indian woven patterns |
| Paisley | PatternDiffusion | Boteh motif training data |
| Tribal | PatternDiffusion | Bold symmetrical Aztec-style |
| Animal-print | PatternDiffusion | Leopard/zebra texture repeat |
| Abstract | SDXL-seamless | Free-form abstract benefits from SDXL's broader training |
| Stripes | PIL algorithm | See below — this one almost destroyed me |

Why "stripes" needed an algorithm and not an AI

Both PatternDiffusion and SDXL-seamless failed at clean parallel stripes during my QC audit. PatternDiffusion produced rainbow plaid noise. SDXL-seamless produced "shirt motifs" because it saw "shirt" in the prompt. Neither model was trained on enough plain-stripe samples to handle a request as simple as "navy blue and white classic shirt vertical stripes."

Spending 4 hours iterating prompt engineering on something Pillow does in 10 lines made no sense. So:

# /home/om/pixelapi-worker-code/models/pattern_model.py
from PIL import Image, ImageDraw

# Minimal color-word lookup (trimmed for the post)
color_table = {
    "navy": (10, 30, 90), "white": (255, 255, 255), "black": (20, 20, 20),
    "red": (200, 30, 30), "green": (20, 120, 50), "yellow": (240, 200, 30),
}

def synthesize_stripes(width=512, height=512, prompt=""):
    """Algorithmic stripe / plaid / gingham. Zero VRAM, deterministic."""
    p = prompt.lower()
    is_horizontal = "horizontal" in p
    is_plaid = any(k in p for k in ("plaid", "tartan", "gingham", "check"))
    stripe_w = 6 if "thin" in p else 36 if "thick" in p else 18

    # Pull color words from the prompt
    colors = [color_table[k] for k in color_table if k in p]
    if not colors:
        colors = [(10, 30, 90), (255, 255, 255)]  # navy/white default
    if len(colors) == 1:
        colors.append((255, 255, 255))  # one color named: pair it with white

    img = Image.new("RGB", (width, height), colors[0])
    draw = ImageDraw.Draw(img)
    limit = height if is_horizontal else width  # step along the stripe axis
    for x in range(0, limit, stripe_w * 2):
        if is_horizontal:
            draw.rectangle([0, x, width, x + stripe_w], fill=colors[1])
        else:
            draw.rectangle([x, 0, x + stripe_w, height], fill=colors[1])
    if is_plaid:
        pass  # ... overlay perpendicular stripes
    return img

Result: 10/10 quality, 10ms generation, zero GPU usage. A user asks for "navy blue and white classic shirt vertical stripes" and gets exactly that, every single time. A user asks for "red green plaid tartan" and gets a clean tartan. Diagonal "yellow and black warning stripes" works the same way.

The lesson — and one I want to underline because it's the part nobody talks about: AI is overkill for half of what people use AI for. Pattern recognition is not the same as pattern generation. SDXL is a multi-billion-parameter monster that's wasting electricity to render content that a Bresenham line algorithm could produce in microseconds.
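Seamlessness itself is checkable algorithmically too. A quick wrap-seam heuristic — my sketch, not part of the shipped QC gate: compare the pixel difference across the tile seam against the typical difference between neighbouring interior columns. A seamless horizontal tile should show no spike at the seam:

```python
import numpy as np
from PIL import Image

def horizontal_seam_score(img):
    """Return (seam, interior): mean abs pixel difference across the
    horizontal wrap seam, and between neighbouring interior columns.
    A seamless tile has seam roughly comparable to interior."""
    a = np.asarray(img.convert("RGB"), dtype=np.float32)
    seam = float(np.abs(a[:, -1] - a[:, 0]).mean())      # last column vs first
    interior = float(np.abs(np.diff(a, axis=1)).mean())  # neighbouring columns
    return seam, interior
```

The same idea transposed (first/last rows) covers vertical tiling.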

The QC ladder that caught the silent failures

Generating bad output is one thing. Charging customers for it is the bigger sin.

The endpoint sits behind a structural-QC gate that runs on every output:

  1. Pass-through detection: if the output is pixel-equivalent to the input (when there's an input at all), reject. (Caught a real case where a remove-text job returned the input unchanged and was billed as "completed.")
  2. Scene-destruction detection: if more than 35% of pixels changed for an edit operation, reject (catches FireRed-style hallucinations where the model replaces the subject entirely).
  3. VLM verification: a Qwen2.5-VL-7B QA pass that compares input + output + prompt and emits a good/bad/unsure verdict.
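Gates 1 and 2 are cheap pixel math. A minimal sketch — the 35% threshold is from the post, but the function name and return shape are my assumptions:

```python
import numpy as np
from PIL import Image

SCENE_DESTRUCTION = 0.35  # gate 2 threshold for edit operations

def structural_qc(input_img, output_img, is_edit):
    """Gates 1 and 2 of the QC ladder. Returns (passed, reason)."""
    a = np.asarray(input_img.convert("RGB"))
    b = np.asarray(output_img.convert("RGB"))
    if a.shape != b.shape:
        return True, "ok (resized output, pixel gates skipped)"
    changed = float(np.any(a != b, axis=-1).mean())  # fraction of changed pixels
    if changed == 0.0:
        return False, "pass-through: output is pixel-identical to input"
    if is_edit and changed > SCENE_DESTRUCTION:
        return False, f"scene destruction: {changed:.0%} of pixels changed"
    return True, "ok"
```

Gate 3 (the VLM pass) only runs on outputs that survive these two, which keeps the expensive model off the obviously-broken cases.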

When any gate fails, the job goes through an iteration ladder (up to 5 attempts with different prompt strategies / fallback models). After exhaustion: automatic refund of credits, no email asking the customer to fight for it.

The two recolor jobs I broke during a refactor today were caught by an integration test before any customer saw them. The underlying bug was a missing "operation": "recolor" flag in the Redis params dict, which made the worker's dispatch logic fall through to the "generate" branch. It was a five-minute fix, but without the test a customer would have seen unrelated pattern output instead of a hue shift.
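The regression test for that class of bug is tiny. A simplified sketch of the dispatch fall-through, with function and key names assumed:

```python
def dispatch_operation(params):
    """Simplified worker dispatch: with no "operation" key in the params
    dict, the job silently falls through to the generate branch."""
    op = params.get("operation", "generate")
    if op == "recolor":
        return "run_recolor"
    if op == "upscale":
        return "run_upscale"
    return "run_generate"

def test_recolor_params_carry_operation_flag():
    params = {"source_url": "https://example/in.png", "hue_shift": 180}
    params["operation"] = "recolor"  # the line the refactor dropped
    assert dispatch_operation(params) == "run_recolor"
```

A companion test asserting that a flag-less dict routes to generate documents the fall-through as intentional rather than accidental.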

Pricing reality check

| Operation | Credits | USD | PatternedAI / competitor |
| --- | --- | --- | --- |
| 512px generate | 8 | $0.008 | $0.015–0.045 (GUI only) |
| 1024px print-ready | 15 | $0.015 | $0.030–0.090 (GUI only) |
| Recolor (HSV hue shift) | 2 | $0.002 | N/A — no API competitor |
| Upscale to 2048px | 3 | $0.003 | N/A — no API competitor |
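Working from the table, credits map to dollars at a flat $0.001/credit, so budgeting a batch is one line of arithmetic. Names here are illustrative, not API constants:

```python
CREDIT_USD = 0.001  # implied by the table: 8 credits = $0.008

CREDITS = {
    "generate_512": 8,
    "generate_1024": 15,
    "recolor": 2,
    "upscale": 3,
}

def batch_cost_usd(ops):
    """Dollar cost of a batch, e.g. {"generate_512": 100, "recolor": 300}."""
    return sum(CREDITS[op] * n for op, n in ops.items()) * CREDIT_USD
```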

There's also no per-seat licensing, no monthly minimum, no 30-day deprecation cycles. Pay-per-use, period.

What's NOT 10/10 yet (honest list)

  • Style overlap: pattern-diffusion sometimes drifts toward "tribal" when asked for plain "geometric." Workaround: include "minimalist scandinavian" or "two-tone" in the prompt. The default style hint now does this for you.
  • Romanized non-English prompts: if you write "kuradedan" (Hindi-in-Latin for "trash bin") instead of native Devanagari "कूड़ेदान," the langdetect-based translator can't recognize it and the model gets the romanized string. The QC ladder catches the bad output and refunds, but you lose 5 minutes.
  • Single-character motif requests ("just one big paisley") aren't this endpoint's job — try the regular /v1/image/generate instead.

Try it

100 free credits on signup at pixelapi.dev. That's 12 generations, or 50 recolors, or 33 upscales — enough to validate the API for your use case.

If you find a generation under 8/10, hit reply on the auto-email. The QC ladder will already have refunded you, and the failure case becomes the next prompt-engineering iteration here.

— posted from a 24h debugging-and-shipping run; corrections welcome.
