Technology Apr 23, 2026 · 4 min read

Stop Configuring the Same LLMs Over and Over: Introducing LLMC

DEV Community
by GroverTek

As I dove deeper into the world of LLMs and AI agents, I found myself trapped in a tedious loop: every time I tried a new tool, I spent an hour repeating the same setup process. I'd find the models that actually worked for my workflow, only to copy those configurations by hand into every new agent I installed.

I finally hit a breaking point and decided to automate it.

Introducing LLM Chooser (LLMC)

I created LLMC (LLM Chooser) to serve as a "single source of truth" for my AI model preferences.

Instead of updating five different config files, I define my preferred models in one place. LLMC then automatically syncs those preferences into the configurations for tools like opencode, pi, and other coding agents.

Check out the README for the technical setup, but I want to talk about why this felt necessary.

The "Artificial Lock-in" Problem

Currently, the AI agent landscape is incredible, but there's a subtle, growing pressure toward "ecosystem lock-in."

Take Claude Code: while you can use other models, there is a persistent nudge suggesting that things "just work better" if you stay within the Anthropic paid subscription. We see similar patterns with GeminiCLI, Qwen Code, and OpenClaw.

This pressure gives me pause because I've seen this movie before:

  • Networking: The battle between Novell and Banyan VINES, which eventually gave way to the open standard of TCP/IP.
  • Operating Systems: The fragmented era of DOS, AmigaOS, and OS/2 Warp before Linux and Windows solidified.
  • Web Dev: The endless cycle of "The One True Framework" (React vs. Vue vs. Next.js vs. Astro).

Corporations want conformity because it's profitable and easier to manage at scale. But for the individual developer, conformity often means giving up liberty and choice.

Resisting the Walled Garden

Using a hosted AI provider is convenient. But that convenience comes with a trade-off: your logic flows through their servers, and you hope they aren't using your data for training. If their service goes down or their pricing pivots, you're stranded.

By decoupling my model preferences from the agent itself, I'm reclaiming a bit of that control. I don't want my "preferred stack" to be dictated by the tool I'm using; I want the tool to adapt to my stack.

How LLMC Works (and where it's headed)

LLMC manages a ~/.config/aimodels folder containing three key files:

  • providers.json: System-level provider configuration.
  • agents.json: Definition of which agents are active.
  • models.json: Your curated list of "gold standard" models.
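The README documents the actual schemas; purely as an illustration (the field names below are my invention, not LLMC's real format), a curated models.json might look something like:

```json
[
  { "id": "anthropic/claude-sonnet", "provider": "openrouter", "role": "coding" },
  { "id": "qwen/qwen-coder", "provider": "openrouter", "role": "fallback" }
]
```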

Any model in your models.json is synced to your desired agents. This means my preferences are now portable and independent.
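To make the sync idea concrete, here is a minimal sketch of that step in Python. This is not LLMC's actual implementation; it assumes agents.json is a list of entries with an "active" flag and a "config_path", and that each agent accepts a top-level "models" key in its own JSON config.

```python
import json
from pathlib import Path

# Where LLMC keeps its three config files, per the article.
CONFIG_DIR = Path.home() / ".config" / "aimodels"

def sync(config_dir: Path = CONFIG_DIR) -> None:
    """Push the curated model list into every active agent's config.

    Sketch only: agent config layout is an assumption.
    """
    models = json.loads((config_dir / "models.json").read_text())
    agents = json.loads((config_dir / "agents.json").read_text())
    for agent in agents:
        if not agent.get("active"):
            continue  # skip agents you've turned off in agents.json
        target = Path(agent["config_path"]).expanduser()
        cfg = json.loads(target.read_text()) if target.exists() else {}
        cfg["models"] = models  # models.json is the single source of truth
        target.write_text(json.dumps(cfg, indent=2))
```

The key design point is direction: preferences flow one way, from models.json out to the agents, so editing a single file updates every tool at once.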

The Road Map

It's early days, and there are things I want to improve:

  1. The UX: The CLI is currently a bit clunky. I'm considering a TUI (Terminal User Interface) or a simple web dashboard to make model selection more intuitive.
  2. API Key Security: While I use keyring to avoid plain-text storage in LLMC, the agents themselves often require plain-text keys in their own config files. I'm exploring how to resolve this "last mile" security gap.
  3. Agent Expansion: I'm working on verifying compatibility across more agents like Hermes and OpenClaw.

Is this just "scratching my own itch"?

I've built this to solve my own frustration, but I'm curious if others feel the same. Do you find yourself fighting with config files every time you switch AI tools? Or is the current "walled garden" approach acceptable for the convenience it provides?

If this resonates with you, I'd love for you to clone the project and give it a spin. I'm looking for feedback, bug reports, and suggestions for other agents that should be supported.

👉 Check out LLMC on GitHub

Source

This article was originally published by DEV Community and written by GroverTek.
