SQLite at the edge sounds like a joke until you see the query latency numbers. Cloudflare D1 puts a SQLite database inside your Worker — queries that used to cross a continent now resolve in single-digit milliseconds.
But D1's raw API is verbose. Drizzle ORM fixes that with a thin, type-safe layer that generates correct SQL without the overhead of Prisma's query engine.
Here's how to wire them together for a production app.
## Why D1 + Drizzle
**D1 advantages:**
- Read replicas in every Cloudflare region automatically
- Free tier: 5GB storage, 25M reads/day, 50K writes/day
- No connection pools — Workers are stateless
- SQLite is surprisingly capable for read-heavy workloads
**Drizzle advantages:**
- SQL-first: the ORM generates SQL you'd write yourself
- Zero runtime overhead — just generates query strings
- Full TypeScript inference from your schema
- First-class D1 support via `drizzle-orm/d1`
## Project Setup

```bash
npm create cloudflare@latest my-d1-app -- --type hello-world
cd my-d1-app
npm install drizzle-orm
npm install -D drizzle-kit wrangler
```
## Wrangler Config

```toml
# wrangler.toml
name = "my-d1-app"
main = "src/index.ts"
compatibility_date = "2025-04-01"

[[d1_databases]]
binding = "DB"
database_name = "my-database"
database_id = "your-database-id-here"
```

```bash
# Create the database
npx wrangler d1 create my-database
# Copy the database_id from its output into wrangler.toml
```
## Define Your Schema

```typescript
// src/db/schema.ts
import { sqliteTable, text, integer, index } from 'drizzle-orm/sqlite-core';
import { sql } from 'drizzle-orm';

export const users = sqliteTable('users', {
  id: text('id').primaryKey(),
  email: text('email').notNull().unique(),
  name: text('name').notNull(),
  plan: text('plan', { enum: ['free', 'pro', 'enterprise'] }).notNull().default('free'),
  createdAt: integer('created_at', { mode: 'timestamp' }).notNull().default(sql`(unixepoch())`),
}, (table) => ({
  emailIdx: index('users_email_idx').on(table.email),
  planIdx: index('users_plan_idx').on(table.plan),
}));

export const posts = sqliteTable('posts', {
  id: text('id').primaryKey(),
  userId: text('user_id').notNull().references(() => users.id, { onDelete: 'cascade' }),
  title: text('title').notNull(),
  slug: text('slug').notNull().unique(),
  content: text('content').notNull(),
  publishedAt: integer('published_at', { mode: 'timestamp' }),
  createdAt: integer('created_at', { mode: 'timestamp' }).notNull().default(sql`(unixepoch())`),
}, (table) => ({
  userIdIdx: index('posts_user_id_idx').on(table.userId),
  slugIdx: index('posts_slug_idx').on(table.slug),
}));

export type User = typeof users.$inferSelect;
export type NewUser = typeof users.$inferInsert;
export type Post = typeof posts.$inferSelect;
```
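A detail worth knowing about the `{ mode: 'timestamp' }` columns: SQLite's `unixepoch()` stores whole seconds, while JavaScript's `Date` counts milliseconds, and Drizzle converts between the two on read and write. A plain TypeScript sketch of that convention (illustrating the unit conversion, not Drizzle internals):

```typescript
// SQLite's unixepoch() yields whole seconds; JS Date counts milliseconds.
// Drizzle's { mode: 'timestamp' } bridges the two on read/write.
const storedSeconds = 1_700_000_000;            // value persisted in created_at
const asDate = new Date(storedSeconds * 1000);  // what a select hands back
const writtenSeconds = Math.floor(asDate.getTime() / 1000); // what an insert stores
console.log(writtenSeconds === storedSeconds);  // true
```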
## Drizzle Config

```typescript
// drizzle.config.ts
import { defineConfig } from 'drizzle-kit';

export default defineConfig({
  schema: './src/db/schema.ts',
  out: './migrations',
  dialect: 'sqlite',
  driver: 'd1-http',
  dbCredentials: {
    accountId: process.env.CLOUDFLARE_ACCOUNT_ID!,
    databaseId: process.env.CLOUDFLARE_DATABASE_ID!,
    token: process.env.CLOUDFLARE_D1_TOKEN!,
  },
});
```
## Generate and Apply Migrations

```bash
# Generate migration SQL from the schema
npx drizzle-kit generate

# Apply to local D1 (for development)
npx wrangler d1 migrations apply my-database --local

# Apply to production D1
npx wrangler d1 migrations apply my-database --remote
```
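For reference, the generated migration for the `users` table looks roughly like this (illustrative only; the exact file name, statement order, and formatting that drizzle-kit emits may differ):

```sql
-- Illustrative shape of a file drizzle-kit writes into ./migrations
CREATE TABLE `users` (
  `id` text PRIMARY KEY NOT NULL,
  `email` text NOT NULL,
  `name` text NOT NULL,
  `plan` text DEFAULT 'free' NOT NULL,
  `created_at` integer DEFAULT (unixepoch()) NOT NULL
);
CREATE UNIQUE INDEX `users_email_unique` ON `users` (`email`);
CREATE INDEX `users_email_idx` ON `users` (`email`);
CREATE INDEX `users_plan_idx` ON `users` (`plan`);
```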
## Worker with Drizzle Queries

```typescript
// src/index.ts
import { Hono } from 'hono';
import { drizzle } from 'drizzle-orm/d1';
import { eq, desc, isNotNull } from 'drizzle-orm';
import * as schema from './db/schema';

type Env = {
  DB: D1Database;
};

const app = new Hono<{ Bindings: Env }>();

// Initialize Drizzle — one line, no connection pool needed
const getDb = (env: Env) => drizzle(env.DB, { schema });

// Get user by email
app.get('/users/:email', async (c) => {
  const db = getDb(c.env);
  const email = c.req.param('email');
  const user = await db
    .select()
    .from(schema.users)
    .where(eq(schema.users.email, email))
    .get();
  if (!user) return c.json({ error: 'User not found' }, 404);
  return c.json(user);
});

// Get published posts with author
app.get('/posts', async (c) => {
  const db = getDb(c.env);
  const posts = await db
    .select({
      id: schema.posts.id,
      title: schema.posts.title,
      slug: schema.posts.slug,
      publishedAt: schema.posts.publishedAt,
      author: {
        id: schema.users.id,
        name: schema.users.name,
      },
    })
    .from(schema.posts)
    .innerJoin(schema.users, eq(schema.posts.userId, schema.users.id))
    .where(isNotNull(schema.posts.publishedAt))
    .orderBy(desc(schema.posts.publishedAt))
    .limit(20)
    .all();
  return c.json(posts);
});

// Create user
app.post('/users', async (c) => {
  const db = getDb(c.env);
  const body = await c.req.json<{ email: string; name: string }>();
  const newUser: schema.NewUser = {
    id: crypto.randomUUID(),
    email: body.email,
    name: body.name,
    plan: 'free',
  };
  const user = await db.insert(schema.users).values(newUser).returning().get();
  return c.json(user, 201);
});

// Batch operations (D1 native)
app.post('/batch', async (c) => {
  const db = getDb(c.env);
  // D1 batch: multiple statements in one round trip
  const userId = crypto.randomUUID();
  const postId = crypto.randomUUID();
  await db.batch([
    db.insert(schema.users).values({
      id: userId,
      email: 'batch@example.com',
      name: 'Batch User',
      plan: 'pro',
    }),
    db.insert(schema.posts).values({
      id: postId,
      userId,
      title: 'First Post',
      slug: 'first-post',
      content: 'Hello from the edge!',
    }),
  ]);
  return c.json({ userId, postId });
});

export default app;
```
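The batch route above hard-codes its slug; in a real app you would derive the unique `posts.slug` column from the title. A minimal sketch (a hypothetical helper, not part of Drizzle or the code above):

```typescript
// Derive a URL-safe slug from a post title (hypothetical helper).
function slugify(title: string): string {
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9\s-]/g, '') // drop punctuation
    .replace(/\s+/g, '-')         // spaces to hyphens
    .replace(/-+/g, '-');         // collapse repeated hyphens
}

console.log(slugify('Hello from the Edge!')); // "hello-from-the-edge"
```

Because `slug` is declared `.unique()`, a production version would also need to handle collisions, for example by appending a short random suffix on conflict.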
## Local Development

The local workflow is smooth:

```bash
# Start the local dev server with a local D1 instance
npx wrangler dev --local

# The local D1 database persists between restarts.
# Inspect it directly:
npx wrangler d1 execute my-database --local --command "SELECT * FROM users"
```
## Performance Characteristics

D1 read replicas are automatic but eventually consistent. For most read patterns this is invisible; for patterns that must read their own write immediately:

```typescript
// D1 routes writes to the primary and reads to the nearest replica.
// If you need read-after-write consistency, use the Durable Objects
// pattern, add a short cache TTL, or simply return the written row
// from the insert itself instead of issuing a follow-up read:
const newPost = await db
  .insert(schema.posts)
  .values({ ...postData })
  .returning() // returns the inserted row from the primary
  .get();
// Safe to use newPost immediately — it came from the same write
```
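When you do have to re-read through a replica anyway (for example, a redirect to a GET endpoint right after a write), a short retry loop is one generic mitigation. This is a plain TypeScript sketch of the pattern, not a D1 API:

```typescript
// Retry an async lookup until the write becomes visible on the replica.
// `readFn` is any lookup that resolves to null while the row is missing.
async function readAfterWrite<T>(
  readFn: () => Promise<T | null>,
  attempts = 3,
  delayMs = 50,
): Promise<T | null> {
  for (let i = 0; i < attempts; i++) {
    const row = await readFn();
    if (row !== null) return row;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  return null; // caller decides how to handle a still-missing row
}

// Simulated replica that lags behind by one read:
let reads = 0;
const laggingLookup = async () => (++reads < 2 ? null : { id: 'p1' });
readAfterWrite(laggingLookup).then((row) => console.log(row?.id)); // "p1"
```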
## Limitations to Know Before Committing

- **No full-text search** — SQLite's FTS5 extension is not available in D1. Use Cloudflare Vectorize or an external search service for search.
- **Max DB size is 10GB (paid plan)** — fine for most apps, but not for data warehousing.
- **No connection URLs** — D1 only works inside a Cloudflare Worker. You can't connect from a local Node.js script directly (use the `drizzle-kit` CLI + the D1 HTTP API for migrations).
- **Write throughput** — D1 isn't designed for high-write workloads. For write-heavy apps (logging, analytics, events), consider Cloudflare Queues + periodic batch writes.
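The Queues-plus-batch-writes suggestion above amounts to buffering rows and flushing them in chunks. A plain TypeScript sketch of that shape (the `flush` callback stands in for a D1 batch insert; everything here is hypothetical plumbing, not a Cloudflare API):

```typescript
// Buffer writes in memory and flush them in fixed-size batches.
class WriteBuffer<T> {
  private pending: T[] = [];
  constructor(
    private flush: (rows: T[]) => void, // stand-in for a D1 batch insert
    private maxSize = 100,
  ) {}
  add(row: T) {
    this.pending.push(row);
    if (this.pending.length >= this.maxSize) this.drain();
  }
  drain() {
    if (this.pending.length === 0) return;
    this.flush(this.pending);
    this.pending = [];
  }
}

const flushed: number[][] = [];
const buf = new WriteBuffer<number>((rows) => flushed.push([...rows]), 3);
[1, 2, 3, 4].forEach((n) => buf.add(n));
buf.drain(); // flush the remainder
console.log(flushed); // [[1, 2, 3], [4]]
```

In a Worker, the queue consumer would play the role of `drain`, turning many small producer writes into a few D1 batches.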
## When D1 + Drizzle is the Right Call
This stack shines for:
- Content sites with heavy reads, light writes
- SaaS apps where data locality matters for latency
- Apps already on Cloudflare Workers (Pages, Workers AI)
- Prototypes that need a real database without ops overhead
For apps needing full Postgres features, complex transactions, or write-heavy workloads — stick with Supabase or Neon.
Building on the Cloudflare edge? The AI SaaS Starter Kit at whoffagents.com includes production-ready patterns for Cloudflare Workers, D1 database setup, and a typed API layer — skip the boilerplate and ship.
This article was originally published by DEV Community and written by Atlas Whoff.