Hey everyone, it's been a while. For the last few years I've been playing around with various AI technologies and building things. Recently, I built a product I want to share.
Montage is a generative UI runtime for AI agents. Your agent makes a tool call. Montage returns a fully rendered, interactive HTML artifact. You mount it. That's it.
This post is about why I built it, what it actually does, and how to drop it into your stack in 30 seconds.
The thing that broke me
If you've shipped anything backed by an LLM in the last year, you've hit this wall.
The model has the data. The model has the right answer. But the output is either plain text or formatted Markdown. And worse, when you ask for something visual, you get poorly rendered HTML or a React app with broken features.
Asking an LLM to inline a major UI render is asking for:
- Inconsistent styling on every response.
- Buttons that don't work.
- Broken imports.
- Long error recoveries.
- A full redesign every time the user asks the same thing twice.
When you build real apps with AI, you don't ask the model to freestyle UI at runtime. We shouldn't ship agentic UI that way either.
So what is Montage?
Two ideas behind it:
- Your agent should never write HTML. It should describe what the user needs to see and hand off the rendering.
- There needs to be a runtime that does the rendering: applying your app's design system consistently, with real interactions that actually work.
Montage is that runtime.
The use case is simple:
- `POST /v1/generate` — one endpoint.
- `@montage/sdk` — adapters for Anthropic, OpenAI, Vercel AI SDK, LangChain, Mastra, MCP, and raw tool calls.
- `@montage/sdk/react` — a `<HtmlBlock />` component to mount the result.
The agent decides what. Montage decides how.
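If you'd rather skip the SDK, a raw call to the endpoint might look like the sketch below. This is a hedged guess: the request and response field names (`prompt`, `dataInfo`, `html`) are inferred from the SDK example later in this post, and the Bearer-token auth header is an assumption, not documented API.

```typescript
// Hypothetical direct call to Montage's endpoint, without the SDK.
// Field names and auth scheme are assumptions, not documented API.
const MONTAGE_API = "https://api.usemontage.ai/v1/generate";

interface GenerateRequest {
  prompt: string;
  dataInfo?: string; // JSON-encoded data the artifact should render
}

// Pure helper: build the request body, JSON-encoding any data payload.
function buildGenerateRequest(prompt: string, data?: unknown): GenerateRequest {
  return {
    prompt,
    ...(data !== undefined ? { dataInfo: JSON.stringify(data) } : {}),
  };
}

// POST the request and return the rendered HTML artifact.
async function generateArtifact(
  apiKey: string,
  req: GenerateRequest
): Promise<string> {
  const res = await fetch(MONTAGE_API, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`, // assumed auth scheme
    },
    body: JSON.stringify(req),
  });
  if (!res.ok) throw new Error(`Montage request failed: ${res.status}`);
  const { html } = await res.json();
  return html;
}
```

The point either way: one endpoint, a brief plus data in, HTML out.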
Say a user asks your agent: "Show me our Q1 revenue."
Without Montage, the agent returns this in chat:
Q1 2026 revenue: $3.72M (+18.3% QoQ).
| Segment | Revenue | Growth |
|---|---|---|
| Enterprise | $684K | +18.2% |
| Mid-Market | $328K | +9.6% |
| SMB | $157K | +5.1% |
| Self-Serve | $73K | +2.3% |
Functional. Forgettable. Not a UI.
With Montage, the agent calls one tool:
import { createMontageTools } from "@montage/sdk";
const montage = createMontageTools({
apiKey: process.env.MONTAGE_API_KEY!,
});
const result = await montage.execute({
prompt:
"Board-ready Q1 2026 revenue report for the finance team. " +
"Lead with an executive summary covering growth drivers and risks. " +
"Render KPI tiles (total revenue, new customers, churn, net retention, ARPU, " +
"pipeline, sales velocity, gross margin, CAC payback), a Revenue by Month bar chart, " +
"a Revenue by Segment breakdown bar, a per-segment revenue table, " +
"key insights, and Q2 priorities. Mark as Confidential / Internal Use Only.",
dataInfo: JSON.stringify({
period: { label: "Q1 2026", start: "2026-01-01", end: "2026-03-31" },
preparedBy: "Finance Team — Maya Okonkwo",
generatedAt: "2026-04-25",
classification: "Confidential — Internal Use Only",
executiveSummary: {
headline: "Revenue grew 18.3% QoQ, driven by enterprise expansion and a 0.8pt improvement in net retention.",
notes: [
"Three new logos closed in February (combined ARR $1.4M) account for the bulk of expansion.",
"Mid-market churn improved 6 points after the Q4 onboarding overhaul.",
],
risks: [
"SMB cohort is decelerating (+2.3% vs +5.1% last quarter). Targeted retention play recommended for Q2.",
],
},
kpis: {
totalRevenue: { value: 3_720_000, qoq: 0.183, note: "Quarterly run-rate trending toward $15.2M ARR" },
newCustomers: { value: 1_847, delta: 0.081 },
churnRate: { value: 0.024, deltaPt: -0.3 },
netRetention: { value: 1.18, deltaPt: 0.8 },
arpu: { value: 672, delta: 0.047 },
pipeline90d: { value: 8_400_000, delta: 0.22, note: "weighted forecast" },
salesVelocity: { value: 162_000, delta: 0.092, note: "rolling 4-week" },
grossMargin: { value: 0.824, deltaPt: 1.1, note: "vs. 81.3% Q4" },
cacPayback: { value: 11.2, delta: -0.6, note: "lower is better" },
},
revenueByMonth: [
{ month: "Oct", value: 68 },
{ month: "Nov", value: 74 },
{ month: "Dec", value: 82 },
{ month: "Jan", value: 79 },
{ month: "Feb", value: 91 },
{ month: "Mar", value: 100 },
],
smbAlert: {
title: "SMB cohort deceleration",
detail:
"SMB segment growth dropped from +5.1% in Q4 to +2.3% in Q1. The Q4 onboarding revamp landed for mid-market but didn't reach SMB. " +
"Recommend running the same audit framework on the SMB pod in May.",
},
revenueBySegment: [
{ name: "Enterprise", revenue: 684_200, customers: 124, growth: 0.182, mrrShare: 0.55 },
{ name: "Mid-Market", revenue: 328_400, customers: 412, growth: 0.096, mrrShare: 0.26 },
{ name: "SMB", revenue: 156_800, customers: 847, growth: 0.051, mrrShare: 0.12 },
{ name: "Self-Serve", revenue: 72_600, customers: 464, growth: 0.023, mrrShare: 0.07 },
],
keyInsights: [
"Enterprise drives 55% of revenue. Q1's 18.2% QoQ increase is primarily attributed to three new logos closed in February.",
"Churn rate decreased for the third consecutive month to 2.4%. Mid-market retention improved 6 percentage points on the back of Q4 customer success initiatives.",
],
q2Priorities: [
"Run the SMB onboarding audit.",
"Lock the three Q3-pipeline enterprise logos before May 30.",
"Stand up the partner-channel KPI before earnings.",
],
}),
});
You hand back `result.html` and mount it:
import { HtmlBlock } from "@montage/sdk/react";
<HtmlBlock html={result.html} />
And voilà:
What you get is a real, board-ready Q1 report: a Revenue by Month chart, a segment breakdown, key insights, and Q2 priorities, styled with your design system and ready to mount on a customer surface or export.
Why now
Over the last year, two things have become standard practice.
1. Tool calling is now a stable contract across Anthropic, OpenAI, Gemini, and almost every AI agent framework. The agent loop is solved. New innovation builds on this.
2. People have built enough chat-shaped products to notice that chat is not actually a great UI for most things. A dashboard isn't a chat thread. A pipeline isn't a chat thread.
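The "stable contract" in point 1 is the name + description + JSON-Schema shape that Anthropic, OpenAI, Gemini, and most agent frameworks all accept for tool definitions. A rendering tool slots straight into it. The sketch below is illustrative, not Montage's actual schema: the tool name and parameter fields are my assumptions.

```typescript
// Illustrative tool definition in the provider-neutral shape:
// a name, a description, and JSON-Schema parameters. The specific
// name and fields here are hypothetical, not Montage's real schema.
const renderUiTool = {
  name: "render_ui",
  description:
    "Render an interactive HTML artifact for the user. Describe WHAT to " +
    "show; the runtime decides HOW it looks.",
  input_schema: {
    type: "object",
    properties: {
      prompt: {
        type: "string",
        description: "A design brief: audience, layout, components to include.",
      },
      dataInfo: {
        type: "string",
        description: "JSON-encoded data the artifact should render.",
      },
    },
    required: ["prompt"],
  },
} as const;
```

Because every major provider accepts this shape (modulo minor key-name differences like `input_schema` vs. `parameters`), one rendering tool can ride along with any agent loop.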
Where it fits
v0 and Cursor solved generative UI at design time, for engineers, in the IDE. Claude Artifacts and ChatGPT Canvas surface generative UI inside their own products, so you're locked in. Nothing ships generative UI at runtime, on real customer data, that you can use anywhere, at any time, within your own stack. That's what Montage does.
Montage would be a great fit if you are building:
- Copilots and chat agents rendering dashboards, charts, and forms inline instead of dumping markdown.
- Internal tools where the surface needs to adapt to the task at hand.
- Vertical AI apps where every customer wants a slightly different view, and you'd rather not ship 40 React components by hand.
- Onboarding and import flows where the agent collects what's needed, and Montage renders the right form, file picker, or preview.
Wherever your agent currently returns a markdown table, there's probably a better artifact waiting.
What's working today
- Anthropic and OpenAI tool calling out of the box.
- Adapters for Vercel AI SDK, Mastra, LangChain, MCP.
- A React mount component (`<HtmlBlock />`) that isolates artifacts cleanly via Shadow DOM.
- Per-call design-system overrides for your brand, typography, and light/dark mode.
- Empty-state aware: tracker and CRM-style apps start as the actual workspace, not static renders.
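For the curious, the Shadow DOM isolation `<HtmlBlock />` relies on works roughly like the sketch below: the artifact's markup and styles live inside a shadow root, so your page's CSS can't leak in and the artifact's CSS can't leak out. I've used minimal structural types in place of the real DOM types so the sketch runs anywhere; the actual component's internals may differ.

```typescript
// Conceptual sketch of Shadow DOM isolation. Structural types stand in
// for the browser's DOM types; this is not Montage's real implementation.
interface ShadowRootLike {
  innerHTML: string;
}

interface HostLike {
  attachShadow(init: { mode: "open" | "closed" }): ShadowRootLike;
}

// Mount an HTML artifact inside a shadow root attached to a host element.
// Styles inside `html` are scoped to this root and can't affect the page.
function mountArtifact(host: HostLike, html: string): ShadowRootLike {
  const root = host.attachShadow({ mode: "open" });
  root.innerHTML = html;
  return root;
}
```

In a browser, `host` would just be a `div` your component renders, and `attachShadow` is the standard `Element.attachShadow` API.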
Try it in 30 seconds
pnpm add @montage/sdk
import { createMontageTools } from "@montage/sdk";

const montage = createMontageTools({ apiKey: process.env.MONTAGE_API_KEY });
Drop it into your existing agent loop. Pass a real brief and real data. See what comes back.
- Docs: usemontage.ai/docs
- API: https://api.usemontage.ai
If you build something with it, feel free to share at founders@usemontage.ai. If you've thought about generative UI at runtime and have any opinions, I want to hear them. If you break it, send me what broke. I read every bug report.
This article was originally published by DEV Community and written by Ashish Bailkeri.