Open a new app. Create an account. Verify your email. Choose a plan. Grant permissions. Now you can take notes.
This is the default in 2026. And most developers just accept it.
I've been building local-first terminal tools for a while now, and the more I do it, the more convinced I am that the industry defaulted to cloud sync not because it's better for users, but because it's better for retention metrics and business models. The user pays with their data, their privacy, and their dependency.
This post is a case for the other way.
## What "offline-first" actually means
Offline-first doesn't mean "works without internet sometimes." It means the local copy is the source of truth. The network, if it exists at all, is secondary. Your data lives on your machine, in a format you can read, in a location you control.
For developer tools specifically, this means:
- No account required to use the tool
- Data stored in a documented, open format (plaintext, SQLite, JSON, Markdown)
- Full functionality without a connection
- Sync is opt-in and user-controlled, not baked into the product
## The performance argument
Local I/O is fast. Not "pretty fast" — orders of magnitude faster than a network round trip.
A SQLite query on a local database completes in microseconds. The same read from a cloud API takes 50-300ms at minimum, plus serialization, TLS handshake overhead, and whatever latency your connection adds that day. If you're on a train, in a coffee shop with bad wifi, or just in a part of your office with poor signal, that 300ms becomes 2 seconds or a spinner that never resolves.
For tools you use dozens of times a day — snippet managers, note tools, task lists, config stores — that latency compounds. Local tools just feel instant because they are.
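You can measure this yourself. A minimal sketch in Python: time a single indexed read against a local SQLite database. The schema is illustrative, and absolute timings vary by machine, but the order of magnitude is the point.

```python
import sqlite3
import time

# Illustrative local database: 10,000 rows, primary-key lookup.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany("INSERT INTO notes (body) VALUES (?)",
                 [(f"note {i}",) for i in range(10_000)])
conn.commit()

start = time.perf_counter()
row = conn.execute("SELECT body FROM notes WHERE id = ?", (4242,)).fetchone()
elapsed_us = (time.perf_counter() - start) * 1_000_000

# A cloud round trip for the same read starts around 50,000 µs
# before serialization, TLS, and whatever your connection adds.
print(f"local read: {elapsed_us:.0f} µs -> {row[0]}")
```

On any reasonable machine the local read lands in the single- to double-digit microsecond range, three to four orders of magnitude under the network floor.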
## The reliability argument
Cloud services go down. Not often, but they do. And when your note-taking app, your snippet manager, or your task tracker is behind an API, their outage is your outage.
More importantly: cloud services get acquired, pivoted, or shut down. Evernote nearly collapsed. Notion has had serious outages. Wunderlist was killed by Microsoft. Any tool that stores your data remotely is making a bet that the company behind it will exist and be well-run indefinitely. That's a bad bet to make with data you depend on.
A SQLite file on your machine doesn't have a status page. It doesn't have planned maintenance windows. It doesn't send you emails about pricing changes.
## The privacy argument
This one is simple. Data you don't send to a server can't be leaked from a server, subpoenaed from a server, sold by a server, or used to train a model by a server.
Developers in particular often store sensitive things in their tools — API keys in notes, internal hostnames in snippets, proprietary code in editors. Routing that through a third-party cloud is a risk that often goes unexamined because the convenience is high and the breach probability feels low. Until it isn't.
Local-first means the attack surface is your machine. That's a problem you already have and already manage.
## The portability argument
Here's where it gets interesting. People assume cloud sync means better portability. In practice it often means the opposite.
Cloud tools lock your data in proprietary formats behind APIs that can change or disappear. Try exporting your entire Notion workspace as something a script can parse. Try getting your Roam graph in a format that another tool understands. "Export" buttons are often second-class features built to satisfy regulators, not to actually be useful.
Local-first tools that use open formats — Markdown files, SQLite databases, JSON — are natively portable. Every editor can open a Markdown file. Every language has a SQLite driver. You can grep your data, script against it, version it, diff it, and move it without asking anyone's permission.
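That scripting claim is concrete. A minimal sketch of "grep your data" over a folder of Markdown notes; the directory layout and function name are hypothetical, but nothing here needs a vendor API:

```python
from pathlib import Path

def search_notes(root: str, term: str) -> list[str]:
    """Case-insensitive full-text search across Markdown files under root."""
    hits = []
    for path in Path(root).rglob("*.md"):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            if term.lower() in line.lower():
                hits.append(f"{path}:{lineno}: {line.strip()}")
    return hits
```

Twenty lines, no SDK, no token, no rate limit. The same data is equally reachable from grep, ripgrep, or a shell one-liner, because the format, not the tool, is the interface.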
## Git is all the sync you need
The most common objection to local-first is sync. "What about my other machines? What about my phone?"
For developer tools, git solves this completely, and it's already installed everywhere you work.
If your tool stores data as flat files — Markdown, JSON, TOML — you track that directory in a git repo, push to a private remote (GitHub, Gitea, a VPS, anywhere), and pull on other machines. You get:
- Full history and diffs of every change
- Conflict resolution you already know how to do
- Works on any machine with git installed
- Works offline; syncs when you're back online
- No third-party service in the middle
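The whole sync loop is three git commands. A hedged sketch, assuming `git` is on PATH and the data directory is already a clone with its remote and upstream configured; the function name and flow are illustrative, and conflict resolution stays in git, by hand:

```python
import subprocess

def sync(data_dir: str, message: str = "sync") -> None:
    """Commit everything in a git-tracked data directory, then pull and push."""
    def git(*args: str, check: bool = True) -> subprocess.CompletedProcess:
        return subprocess.run(["git", "-C", data_dir, *args], check=check)

    git("add", "-A")
    # `diff --cached --quiet` exits non-zero only when something is staged,
    # so we skip the commit when nothing changed.
    if git("diff", "--cached", "--quiet", check=False).returncode != 0:
        git("commit", "-m", message)
    git("pull", "--rebase")  # replay local commits on top of the remote
    git("push")
```

Bind that to a shell alias or run it on a timer and you have sync with full history, no third party, and failure modes you already know how to debug.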
This is exactly the architecture snip v0.7.0 moved to — snippets as individual Markdown files with frontmatter, tracked in git, with a SQLite index rebuilt from files on startup. The index is git-ignored because it's derived data. The files are the truth.
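The "files are the truth, the index is derived" half of that pattern fits in a page. A sketch, assuming a minimal `key: value` frontmatter block; the schema and field names here are illustrative, not snip's actual format:

```python
import sqlite3
from pathlib import Path

def rebuild_index(data_dir: str, db_path: str = ":memory:") -> sqlite3.Connection:
    """Rebuild a throwaway SQLite index from Markdown files on disk."""
    conn = sqlite3.connect(db_path)
    conn.execute("DROP TABLE IF EXISTS snippets")
    conn.execute(
        "CREATE TABLE snippets (path TEXT PRIMARY KEY, title TEXT, body TEXT)")
    for md in Path(data_dir).rglob("*.md"):
        text = md.read_text()
        title, body = md.stem, text  # fall back to the filename as title
        if text.startswith("---\n"):
            header, _, body = text[4:].partition("\n---\n")
            for line in header.splitlines():
                key, _, value = line.partition(":")
                if key.strip() == "title":
                    title = value.strip()
        conn.execute("INSERT OR REPLACE INTO snippets VALUES (?, ?, ?)",
                     (str(md), title, body))
    conn.commit()
    return conn
```

Because the index is rebuilt from files on every startup, it can be deleted, corrupted, or git-ignored with zero data loss, which is exactly why it belongs in `.gitignore`.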
The same pattern works for dotfiles, notes, task lists, configs, and anything else a developer touches daily.
## What local-first costs
I want to be honest about the tradeoffs. Local-first is not free.
No automatic sync. You have to set up git, remember to push, and handle conflicts yourself. For non-developers this is a real barrier. For developers it's an afternoon of setup.
No real-time collaboration. If you need multiple people editing the same document simultaneously, you need a server. Google Docs exists for a reason.
No mobile without extra work. Mobile apps can't easily read from your local filesystem. Working Copy on iOS and similar tools close this gap partially, but it's not seamless.
You're responsible for backups. The cloud provider was implicitly handling this. Now you're not.
For developer tools specifically, most of these tradeoffs are acceptable. You're usually working alone or in async collaboration, you already understand git, and you have backup infrastructure.
## A checklist for building local-first tools
If you're building a developer tool and want to get this right:
- Store data in an open, documented format (Markdown, SQLite, JSON, TOML)
- Default storage path under `~/.config/<tool>/` or `~/.local/share/<tool>/`
- Expose a `--data-dir` or equivalent flag so users can point it at a git-tracked directory
- Auto-create a `.gitignore` that excludes derived files (indexes, caches, lock files)
- Provide `export` and `import` commands so data can move in and out freely
- Never require a network connection for core functionality
- Document the storage format so users can script against it without reading your source
The tools I trust most are the ones that store my data in a folder I can see, in a format I can read, that I can back up with `cp` if everything else fails.
That's a low bar. More tools should clear it.
Source: https://github.com/phlx0
This article was originally published by DEV Community and written by phlx0.