Every AI app I've shipped recently rewrote the same plumbing. The OAuth dance for Slack. Encrypted storage for an API key. Refresh-token logic that finally fails on the 3rd call after an hour. Wiring up an MCP client to a server behind a bearer token someone pasted into a Notion page.
I'd write it, copy-paste it into the next app, watch it rot. Each new agent built by a different teammate, slightly differently, with slightly different bugs. We were a small team and the integration code became most of the code.
## The pattern under all of it
Strip away the providers and the AI-specific bits, and every app needed the same four things from the platform:

- **Env vars** — a database URL, a Stripe key, the boring stuff. Not in a `.env` file in a Docker image. Not in a CI secret. Somewhere the app can ask for them at runtime.
- **Pre-built integrations** — Gmail, Calendar, Drive. The user logs in once on the platform; every app gets typed access on their behalf.
- **Custom OAuth** — the providers no platform pre-builds. Slack, Notion, the company's SSO. The customer holds the `client_id`/`client_secret`; their app shouldn't.
- **Custom MCP** — internal MCP servers, third-party MCPs. The customer holds the URL and the bearer token; their app shouldn't.
That's the spine of the SDK we ended up shipping. Four primitives, every app uses some of them, none of them require integration code in the app.
## Register once at the org level
The flip is registration. The org owner registers their things one time on the dashboard:
- Drop a Slack `client_id` + `client_secret` into the "Custom OAuth providers" card. Encrypted with the org's KMS key. The app never sees it.
- Drop the URL of an internal MCP server + a bearer token into the "Custom MCP servers" card. Same treatment.
- Connect Doppler / 1Password / GCP Secret Manager as a secret source — or just type secrets into the dashboard.
Now every app you deploy in that org gets typed access through four SDK calls.
## The four calls
```typescript
import { LeashIntegrations } from '@leash/sdk/integrations'

const client = new LeashIntegrations({ apiKey: process.env.LEASH_API_KEY })

// 1. Env var (resolves through your configured secret source)
const dbUrl = await client.getEnv('DATABASE_URL')

// 2. Pre-built integration
const messages = await client.gmail.listMessages({ maxResults: 5 })

// 3. Custom OAuth — fresh access token for any provider you've registered
const slackToken = await client.getAccessToken('slack')

// 4. Custom MCP — { url, headers } including bearer Authorization
const mcp = await client.getCustomMcpConfig('acme-tools')
```

Same shape across TypeScript, Python, Go, Ruby, Rust, and Java. No `client_secret` in the app code. No refresh-token handler. No MCP boilerplate.
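To make the custom-OAuth call concrete, here's a sketch of what the app might do with that `slackToken`. `buildSlackRequest` is a hypothetical helper, not part of the SDK; the point is that the app only ever handles a short-lived bearer token, never the `client_secret`:

```typescript
// Hypothetical helper: build a Slack Web API request from a token obtained
// via client.getAccessToken('slack'). The app never sees the client_secret.
function buildSlackRequest(token: string, method: string, body: unknown): Request {
  return new Request(`https://slack.com/api/${method}`, {
    method: 'POST',
    headers: {
      // The short-lived access token is the only credential the app touches.
      Authorization: `Bearer ${token}`,
      'Content-Type': 'application/json; charset=utf-8',
    },
    body: JSON.stringify(body),
  })
}
```

`chat.postMessage` is Slack's real Web API method; the resulting request can be sent as-is with `fetch(req)`.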
## Your .env collapses to one line
The thing we noticed only after living with it: once you're using this, the only secret your app's .env actually needs is the platform API key.
```bash
# .env (yes, this is the whole thing)
LEASH_API_KEY=lsk_live_...
```
That's it. No more `.env.example` drift. No more "did we set `DATABASE_URL` in staging?" debugging at 11pm. Rotation happens at the source — no rebuild, no redeploy.
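For teams migrating gradually, here's a hedged sketch of a tiny resolver that prefers the platform's `getEnv` and falls back to `process.env` while old deployments still carry local values. `resolveEnv` and the `EnvSource` type are illustrative, not part of the SDK:

```typescript
// Hypothetical migration helper, not part of the SDK. Prefer the platform's
// secret source; fall back to local process.env during the transition.
type EnvSource = (name: string) => Promise<string | undefined>

async function resolveEnv(name: string, platform: EnvSource): Promise<string> {
  // Ask the platform first (e.g. client.getEnv under the hood).
  const fromPlatform = await platform(name)
  if (fromPlatform !== undefined) return fromPlatform
  // Fall back to whatever the old deployment still has locally.
  const local = process.env[name]
  if (local !== undefined) return local
  throw new Error(`Missing env var: ${name}`)
}
```

Once every value resolves through the platform, the fallback branch stops firing and the local `.env` really is one line.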
## What it deliberately doesn't do
A few decisions that came up that I'll defend:
**Doesn't proxy MCP traffic.** We hand the app `{ url, headers }` (with bearer `Authorization` already attached) and the app calls the MCP directly. Leash isn't in the request path. Tool calls are on the LLM's critical path; an extra hop hurts. We also didn't want to reimplement every MCP transport (streamable HTTP, SSE, stdio) with our own bugs.
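A minimal sketch of the handoff from the app's side, assuming the `{ url, headers }` shape returned by `getCustomMcpConfig`. `toTransportInit` is a hypothetical adapter, not part of the SDK:

```typescript
// Assumed shape of the config returned by getCustomMcpConfig.
interface McpConfig {
  url: string
  headers: Record<string, string> // includes the bearer Authorization header
}

// Hypothetical adapter: turn the config into the pieces an MCP client
// transport needs. The app attaches the headers and calls the server
// directly; Leash is not in the request path.
function toTransportInit(config: McpConfig): { url: URL; requestInit: RequestInit } {
  return {
    url: new URL(config.url),
    requestInit: { headers: { ...config.headers } },
  }
}
```

The result plugs into, for example, the official MCP TypeScript SDK's `StreamableHTTPClientTransport`, which takes a URL plus options carrying a `requestInit` for extra headers. The credential rides along on each request the app makes itself.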
**Doesn't force you to use the platform for secrets.** If you'd rather hold them in Doppler or 1Password, point the platform at your existing source. `getEnv` resolves through whichever the org configured.
**Doesn't pretend to be multi-cloud.** Single-region GCP today. If you're betting on us, you're betting on a small surface area — not a multi-cloud promise.
## The why behind the shape
Customer apps can't hold credentials safely. Their AI agent runs on someone's laptop, in CI, on a Cloud Run revision someone's about to redeploy. Putting `client_secret` in the app means rotating it everywhere whenever it leaks. So we put the credential in one place and gave the app a thin retrieval call instead.
Same logic for MCP. The bearer token for a customer's internal tool server isn't something we want their AI app to know. The app gets a config dictionary right before it calls the MCP; that's the longest the credential lives anywhere near user code.
The four-primitive surface area is small on purpose. Anything else (token caching, retries, pagination on Gmail, etc.) lives in the SDK or in the customer's code, not in the platform contract. We'd rather grow the SDK than the API.
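As an example of the kind of logic that belongs in the SDK rather than the platform contract, here's a sketch of client-side token caching. `cachedTokenSource` is illustrative, not real SDK code; it refreshes a token shortly before expiry via whatever fetcher you hand it:

```typescript
// Illustrative SDK-side caching, not part of the platform API contract.
// Wraps a token fetcher and reuses the token until ~30s before expiry.
function cachedTokenSource(
  getToken: () => Promise<{ token: string; expiresInMs: number }>,
  now: () => number = Date.now, // injectable clock for testing
): () => Promise<string> {
  let cached: { token: string; expiresAt: number } | null = null
  return async () => {
    // Reuse the cached token unless it expires within 30 seconds.
    if (cached && now() < cached.expiresAt - 30_000) return cached.token
    const fresh = await getToken()
    cached = { token: fresh.token, expiresAt: now() + fresh.expiresInMs }
    return fresh.token
  }
}
```

If the caching policy needs to change, that's an SDK release, not an API version bump — which is exactly why it lives on this side of the contract.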
## Try it
```bash
curl -fsSL https://leash.build/install.sh | sh
leash login
leash deploy
```
Or just sign up at leash.build, register a Slack app or an internal MCP, and call the SDK from any project. Custom OAuth + custom MCP are gated to the Growth plan; built-in integrations work on every plan including free.
Curious what others have done for this. Especially the proxy-vs-config-handoff call for MCP — I made the bet, but it's the architecture choice I'd most welcome a counterargument on.
This article was originally published by DEV Community and written by Arvin Gopi.