Multi-tenancy is mostly a schema and RLS problem. The core idea is simple: every table gets a tenant_id, every policy filters by it, and the current tenant flows in from the request. The reality is full of edge cases — deleted tenants, cross-tenant admin queries, N+1 lookups, and migration pain nobody talks about until you're already in it.
These are the patterns I use when building multi-tenant apps on Supabase, and the things that cost me time.
Before the schema, the real question is whether to put all tenants in one project or give each one their own.
Separate projects give you hard isolation, no shared connection limits, and no risk of one tenant's slow query degrading another's. The downside is operational overhead — if you have 50 tenants, you have 50 Supabase projects to manage, update, and monitor.
Row-level tenancy in a shared project makes sense when tenant counts are high enough that per-project overhead would dominate, and when no tenant needs hard isolation or a contractual guarantee of it.
I use separate projects for my own apps — it's simpler at low tenant counts. For client SaaS work with many tenants, shared-project row-level tenancy is what I reach for. Everything below is about that pattern.
Every table that holds tenant-specific data gets a tenant_id column:
CREATE TABLE items (
id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
tenant_id uuid NOT NULL REFERENCES tenants(id) ON DELETE CASCADE,
name text NOT NULL,
created_at timestamptz DEFAULT now()
);
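The items table assumes a tenants parent to point at. A minimal sketch (the slug column is my addition, anticipating the subdomain lookup in the middleware section; create this table before items so the foreign key resolves):

CREATE TABLE tenants (
  id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
  slug text UNIQUE NOT NULL, -- resolves subdomains to tenants
  name text NOT NULL,
  created_at timestamptz DEFAULT now()
);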
The index on tenant_id is not optional:
CREATE INDEX ON items(tenant_id);
Without it, every RLS policy evaluation on this table is a sequential scan. On a small table you won't notice. On a table with 500k rows spread across 500 tenants, every query pays the full scan cost. The index is the difference between Postgres jumping to your tenant's rows and reading all of them.
The cleanest way to pass the current tenant to RLS policies is via the JWT app_metadata. Set it when the user authenticates or when they join a workspace:
-- RLS policy reading from JWT
CREATE POLICY "tenant_isolation" ON items
FOR ALL TO authenticated
USING (
tenant_id = (auth.jwt() -> 'app_metadata' ->> 'tenant_id')::uuid
);
Populate app_metadata from your backend using the Admin SDK:
// server-side only: never expose the service role key to the client
import { createClient } from "@supabase/supabase-js";
const supabaseAdmin = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_SERVICE_ROLE_KEY!);

await supabaseAdmin.auth.admin.updateUserById(userId, {
  app_metadata: { tenant_id: tenantId },
});
The caveat: JWT claims are baked at auth time. If a user's tenant changes mid-session, they need a new token. For most apps this is fine. If users switch between multiple tenants in one session, use request-time config instead.
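When you do update a user's tenant mid-session, the client can force a fresh token so the new claim shows up in auth.jwt(). A minimal sketch with supabase-js; when to trigger it is app-specific:

// exchange the refresh token for a new access token carrying the updated app_metadata
const { data, error } = await supabase.auth.refreshSession();
if (error) throw error;
// data.session?.access_token now includes the new tenant_id claim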
For background jobs, service-role API calls, or multi-tenant switching, set the tenant as a Postgres config variable at the start of each request:
CREATE OR REPLACE FUNCTION set_tenant(p_tenant_id uuid)
RETURNS void LANGUAGE plpgsql SECURITY DEFINER AS $$
BEGIN
PERFORM set_config('app.current_tenant', p_tenant_id::text, true);
END;
$$;
-- replaces the JWT-based policy above; a table can't hold two policies with the same name
CREATE POLICY "tenant_isolation" ON items
FOR ALL TO authenticated
USING (
tenant_id = current_setting('app.current_tenant', true)::uuid
);
From a server action or route handler, the obvious move is to call it through the client:

await supabase.rpc("set_tenant", { p_tenant_id: tenantId });

The catch: the Supabase JS client goes through PostgREST, which wraps every request in its own transaction, so a transaction-scoped setting made in one rpc call is already gone when the next query runs. For this pattern to work, set_tenant and the queries that depend on it must share a transaction, which means a direct Postgres connection or a single database function.
The true flag on set_config makes the setting transaction-scoped, so it resets automatically at transaction end. The true flag on current_setting makes it return NULL instead of throwing when the variable isn't set, which is the safer default.
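A sketch of the direct-connection version using node-postgres, assuming a DATABASE_URL for your Supabase database; the withTenant helper is illustrative, not part of any SDK:

import { Pool, PoolClient } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

async function withTenant<T>(
  tenantId: string,
  fn: (c: PoolClient) => Promise<T>
): Promise<T> {
  const client = await pool.connect();
  try {
    await client.query("BEGIN");
    // run as a role the policies target (they're defined TO authenticated)
    await client.query("SET LOCAL ROLE authenticated");
    // transaction-scoped: resets automatically at COMMIT/ROLLBACK
    await client.query("SELECT set_tenant($1)", [tenantId]);
    const result = await fn(client);
    await client.query("COMMIT");
    return result;
  } catch (err) {
    await client.query("ROLLBACK");
    throw err;
  } finally {
    client.release();
  }
}

// usage: every query in the callback is scoped to the tenant
const items = await withTenant(tenantId, async (c) => {
  const res = await c.query("SELECT * FROM items");
  return res.rows;
});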
This is the same pattern from RLS basics but worth repeating because it's even more important in multi-tenant schemas where policies fire on every row.
-- Slow: evaluated on every row
USING (tenant_id = (auth.jwt() -> 'app_metadata' ->> 'tenant_id')::uuid)
-- Fast: evaluated once per query, result cached
USING (tenant_id = ((SELECT auth.jwt()) -> 'app_metadata' ->> 'tenant_id')::uuid)
The SELECT wrapper lets Postgres evaluate the expression once as an initPlan and reuse the result for every row, instead of calling auth.jwt() per row. On tables with thousands of rows per tenant this difference is measurable.
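To confirm the rewrite took effect, look at the query plan while connected as a role the policy applies to; the exact shape varies by Postgres version, but the wrapped form hoists the jwt lookup out of the per-row filter:

EXPLAIN SELECT * FROM items;
-- wrapped form: an "InitPlan 1" node above the scan, referenced from the Filter
-- unwrapped form: the auth.jwt() call sits inside the Filter and runs per row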
In App Router, resolve the tenant once in middleware and attach it to a request header:
// middleware.ts
import { NextRequest, NextResponse } from "next/server";

export async function middleware(request: NextRequest) {
  const host = request.headers.get("host") ?? "";
  // subdomain: acme.yourapp.com → "acme"
  const tenantSlug = host.split(".")[0];
  // set the header on the request, not the response: headers() in server
  // components reads the incoming request headers
  const requestHeaders = new Headers(request.headers);
  requestHeaders.set("x-tenant-slug", tenantSlug);
  return NextResponse.next({ request: { headers: requestHeaders } });
}
Then in any server component or action:
import { headers } from "next/headers";

async function getCurrentTenant() {
  const slug = (await headers()).get("x-tenant-slug");
  if (!slug) throw new Error("x-tenant-slug header missing");
  return getTenantBySlug(slug); // wrap with React cache() or unstable_cache
}
Cache the slug→tenant lookup. Without caching you're hitting the DB on every page render. A 60-second TTL is usually fine — tenant records change infrequently, and a briefly stale lookup is harmless.
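One way to get that TTL with Next.js's unstable_cache, reusing the supabaseAdmin client from earlier; the key and column names are illustrative:

import { unstable_cache } from "next/cache";

// slug → tenant, cached across requests for 60 seconds
const getTenantBySlug = unstable_cache(
  async (slug: string) => {
    const { data, error } = await supabaseAdmin
      .from("tenants")
      .select("*")
      .eq("slug", slug)
      .single();
    if (error) throw error;
    return data;
  },
  ["tenant-by-slug"], // key prefix; arguments are appended automatically
  { revalidate: 60 }
);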
Tables without RLS
Every new table needs ENABLE ROW LEVEL SECURITY, plus FORCE ROW LEVEL SECURITY so the table owner (the postgres role in Supabase) doesn't silently bypass the policies. Easy to miss when you're moving fast.
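For reference, both statements, here on items:

ALTER TABLE items ENABLE ROW LEVEL SECURITY;
-- make policies apply to the table owner as well
ALTER TABLE items FORCE ROW LEVEL SECURITY;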
-- Find tables in public schema missing RLS
SELECT tablename FROM pg_tables
WHERE schemaname = 'public' AND rowsecurity = false;
Run this before every deploy. Anything unexpected means a tenant can read another tenant's rows.
UPDATE without a SELECT policy
PostgreSQL also applies SELECT policies when an UPDATE reads existing rows, which any UPDATE with a WHERE clause or RETURNING does. If there's no SELECT policy, the UPDATE matches 0 rows: no error, no warning, just silence. Always define both:
CREATE POLICY "tenant_select" ON items FOR SELECT TO authenticated
USING ((SELECT auth.jwt()) -> 'app_metadata' ->> 'tenant_id' = tenant_id::text);
CREATE POLICY "tenant_update" ON items FOR UPDATE TO authenticated
USING ((SELECT auth.jwt()) -> 'app_metadata' ->> 'tenant_id' = tenant_id::text);
Cross-tenant admin queries
Your admin dashboard needs to see all tenants. RLS blocks it. Pattern: use the service role key for admin routes only, never expose it to the client, and set tenant context explicitly when narrowing to a specific tenant.
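A sketch of that split as an App Router route handler; the path and table are placeholders, and the admin check is left to your auth setup:

// app/api/admin/tenants/route.ts (server-only; the service role key bypasses RLS)
import { createClient } from "@supabase/supabase-js";
import { NextResponse } from "next/server";

const admin = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

export async function GET() {
  // gate this route behind your own admin auth check before shipping
  const { data, error } = await admin.from("tenants").select("id, name");
  if (error) return NextResponse.json({ error: error.message }, { status: 500 });
  return NextResponse.json(data);
}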
Deleting a tenant
Don't rely entirely on ON DELETE CASCADE for complex schemas; cascades are hard to debug when they fire in an unexpected order. Write an explicit deletion function instead. A plpgsql body runs in a single transaction, so the delete is all-or-nothing:
CREATE OR REPLACE FUNCTION delete_tenant(p_id uuid)
RETURNS void LANGUAGE plpgsql SECURITY DEFINER AS $$
BEGIN
DELETE FROM items WHERE tenant_id = p_id;
DELETE FROM workspaces WHERE tenant_id = p_id;
DELETE FROM members WHERE tenant_id = p_id;
DELETE FROM tenants WHERE id = p_id;
END;
$$;
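Because the function is SECURITY DEFINER and Supabase grants EXECUTE on new public functions to the API roles by default, revoke it from regular users first:

REVOKE EXECUTE ON FUNCTION delete_tenant(uuid) FROM anon, authenticated;

Then call it from an admin route with the service-role client (supabaseAdmin from earlier):

await supabaseAdmin.rpc("delete_tenant", { p_id: tenantId });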
Create two test tenants, insert rows for each, then query as each tenant to verify they can't see each other's data.
-- SET LOCAL needs a transaction; the policies target the authenticated role
BEGIN;
SET LOCAL ROLE authenticated;
SET LOCAL app.current_tenant = '<tenant_a_id>';
SELECT * FROM items; -- should only return tenant A rows
ROLLBACK;
BEGIN;
SET LOCAL ROLE authenticated;
SET LOCAL app.current_tenant = '<tenant_b_id>';
SELECT * FROM items; -- should only return tenant B rows
ROLLBACK;
Also run the rowsecurity = false check against your schema before every release. RLS misconfiguration is silent and the consequences are serious — one missing policy and every tenant can read all rows.
When a tenant's load starts degrading everyone else, or when they need a different schema, a different region, or a contractual isolation guarantee, it's time to move them to their own project.
The migration path: COPY TO for the data snapshot, restore to the new project with COPY FROM, sync rows written during migration, cut over the connection string. Budget a maintenance window.
Build the export tooling early: an export_tenant function that snapshots a tenant's data to CSV is useful for debugging, migration, and churned tenants asking for their data. You don't want to be writing this under pressure when an enterprise client is waiting.
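Until that function exists, per-table \copy from psql covers the same ground; paths and tables here are illustrative, and \copy runs client-side so it needs no server file access:

\copy (SELECT * FROM items WHERE tenant_id = '<tenant_id>') TO 'items.csv' WITH (FORMAT csv, HEADER)
\copy (SELECT * FROM members WHERE tenant_id = '<tenant_id>') TO 'members.csv' WITH (FORMAT csv, HEADER)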