Keycloak with Docker Compose: multi-tenant by realm (mx:ver:lerdo example)
The case I keep running into on multi-tenant projects
Every time a system with more than one "customer" shows up —a municipal platform, a SaaS with several organizations, an internal portal where each department has its own users— the pattern repeats with a precision that is almost folklore. The team wants to isolate users by jurisdiction, wants to name each tenant with something a human can read out loud (mx:ver:lerdo, mx:cdmx:cuauhtemoc, mx:jal:guadalajara), and does not want to end up with a forest of ifs in the backend every time someone asks for a login.
I have tried several routes to solve this: a database per customer, separate PostgreSQL schemas, a tenant claim in the JWT with manual branching, even a proxy rewriting paths. They all work. None of them age well. The one that does age well, and the one I recommend when people ask, is one Keycloak realm per tenant. It is boring, well documented, and free of surprises.
Source: Venti Views — Unsplash
This article is the concrete guide I wish I had the first time: how to bring up Keycloak with Docker Compose, create the mx-ver-lerdo realm, preload it with users, roles and a client, and verify that login actually works. All copy-paste, all reproducible, zero magic.
Versions used: Keycloak 26.0.5, PostgreSQL 16-alpine, Docker Engine 27.x, Docker Compose v2.29+. I tested it on Linux and on Docker Desktop for Windows. The commands are identical.
1. Why a realm per tenant (and not a claim in the JWT)
Before the docker-compose.yml, it is worth saying why this decision. If you already made it, skip to section 2 without remorse.
A Keycloak realm is an isolated universe: its own users, its own roles, its own clients, its own themes, its own authentication flows, its own tokens signed with an independent key. That means four very practical things:
- Real isolation. A user in realm `mx-ver-lerdo` cannot get a valid token for `mx-jal-guadalajara` by accident or by oversight, because the signing keys are different.
- Independent rotation. If the city of Lerdo wants to force a password reset, I do not have to touch any other tenant.
- Clean auditing. Keycloak events come already tagged by realm. No query-side filtering needed.
- Per-tenant branding. Each tenant can have its own logo, colors and even default language on the login screen without touching the backend.
The honest tradeoff is that each tenant adds a new endpoint to the gateway (/realms/{tenant}/...) and the frontend has to know which one to point to. In exchange you skip the army of ifs that shows up when everything is resolved through a single claim.
2. What we are building — the full picture
By the end of this article you will have running on your machine:
- A Keycloak 26 container listening on `http://localhost:8080`.
- A PostgreSQL 16 container as Keycloak's persistent store.
- A realm named `mx-ver-lerdo` preloaded at boot, with:
  - Three roles: `admin`, `editor`, `viewer`.
  - Three users with known passwords (I give them to you below — they are examples).
  - Two clients: `lerdo-web` (confidential, for an SPA with a backend) and `lerdo-cli` (public, for `curl` smoke tests).
- A working endpoint to request tokens with `curl` and decode them with `jq`.
The convention mx:ver:lerdo stands for country:state:city. Keycloak uses the realm name inside URLs, and colons there annoy proxies and badly configured HTTP clients, so inside Keycloak the realm is called mx-ver-lerdo with dashes. The displayName can be whatever you want — here I leave it as "MX / Veracruz / Lerdo" so any human can read it out loud.
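The colon-to-dash mapping is mechanical, so it can live in a one-line helper instead of someone's memory. A minimal sketch — the function name `tenant_to_realm` is mine, not part of the stack:

```shell
# Map a human-readable tenant id (country:state:city) to a Keycloak-safe
# realm name by swapping colons for dashes.
tenant_to_realm() {
  printf '%s' "$1" | tr ':' '-'
}

tenant_to_realm "mx:ver:lerdo"; echo        # mx-ver-lerdo
tenant_to_realm "mx:jal:guadalajara"; echo  # mx-jal-guadalajara
```

Keep the colon form in your own configuration and documentation, and apply the mapping only at the boundary where a Keycloak realm name or URL is built.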
3. Project layout
A minimalist tree. Three real files, one directory for the realm, and nothing else:
```
keycloak-mx-ver-lerdo/
├── .env
├── docker-compose.yml
└── realms/
    └── mx-ver-lerdo-realm.json
```
Create the folder and move into it:
```shell
mkdir -p keycloak-mx-ver-lerdo/realms
cd keycloak-mx-ver-lerdo
```
4. Environment variables — the .env file
Save this as .env at the project root. These are the only real secrets in the stack. Do not commit them.
```shell
# Database used by Keycloak
POSTGRES_DB=keycloak
POSTGRES_USER=keycloak
POSTGRES_PASSWORD=Keycloak.DB#2026

# Bootstrap admin for the master realm (/admin console)
KC_BOOTSTRAP_ADMIN_USERNAME=kcadmin
KC_BOOTSTRAP_ADMIN_PASSWORD=ChangeMe.Master#2026

# Public hostname of Keycloak (change for production)
KC_HOSTNAME=localhost
```
5. docker-compose.yml, explained without hand-waving
This is the core of the article. Save it as docker-compose.yml at the project root:
```yaml
services:
  postgres:
    image: postgres:16-alpine
    container_name: kc-postgres
    restart: unless-stopped
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - kc-pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 10s
      timeout: 5s
      retries: 10
    networks:
      - kc-net

  keycloak:
    image: quay.io/keycloak/keycloak:26.0.5
    container_name: kc-server
    restart: unless-stopped
    command:
      - start-dev
      - --import-realm
    environment:
      # Database
      KC_DB: postgres
      KC_DB_URL: jdbc:postgresql://postgres:5432/${POSTGRES_DB}
      KC_DB_USERNAME: ${POSTGRES_USER}
      KC_DB_PASSWORD: ${POSTGRES_PASSWORD}

      # Master realm bootstrap admin
      KC_BOOTSTRAP_ADMIN_USERNAME: ${KC_BOOTSTRAP_ADMIN_USERNAME}
      KC_BOOTSTRAP_ADMIN_PASSWORD: ${KC_BOOTSTRAP_ADMIN_PASSWORD}

      # Hostname and HTTP (only acceptable in dev)
      KC_HOSTNAME: ${KC_HOSTNAME}
      KC_HTTP_ENABLED: "true"
      KC_HOSTNAME_STRICT: "false"

      # Observability
      KC_HEALTH_ENABLED: "true"
      KC_METRICS_ENABLED: "true"
    ports:
      - "8080:8080"
    volumes:
      - ./realms:/opt/keycloak/data/import:ro
    depends_on:
      postgres:
        condition: service_healthy
    healthcheck:
      test: ["CMD-SHELL", "exec 3<>/dev/tcp/localhost/9000 && echo -e 'GET /health/ready HTTP/1.1\r\nHost: localhost\r\n\r\n' >&3 && cat <&3 | grep -q '200 OK'"]
      interval: 15s
      timeout: 5s
      start_period: 40s
      retries: 10
    networks:
      - kc-net

volumes:
  kc-pgdata:

networks:
  kc-net:
    driver: bridge
```
The points that usually trip people up in this file:
- `start-dev` runs over HTTP with no TLS and in development mode. For production you use `start` or `start --optimized` after a `build`. More on that in section 9.
- `--import-realm` tells Keycloak to read every `*.json` file under `/opt/keycloak/data/import` at boot and create the realms if they do not exist. Subsequent restarts do not re-import, so it is safe.
- `KC_BOOTSTRAP_ADMIN_*` replaced the old `KEYCLOAK_ADMIN` / `KEYCLOAK_ADMIN_PASSWORD` in Keycloak 26. If you copy an older tutorial with the old variables, the container starts but you cannot log into the console.
- Port `9000` is the internal management port since Keycloak 25. That is where `/health` and `/metrics` live. We do not expose it to the host because there is no need.
6. The mx-ver-lerdo realm — users, roles and clients
Save this file as realms/mx-ver-lerdo-realm.json. This is the one Keycloak will import automatically on boot. Notice that users carry their password in plain text (type: "password", temporary: false). Keycloak hashes them on import — they are not stored that way in the database:
```json
{
  "realm": "mx-ver-lerdo",
  "displayName": "MX / Veracruz / Lerdo",
  "enabled": true,
  "sslRequired": "external",
  "registrationAllowed": false,
  "loginWithEmailAllowed": true,
  "duplicateEmailsAllowed": false,
  "resetPasswordAllowed": true,
  "editUsernameAllowed": false,
  "bruteForceProtected": true,
  "defaultSignatureAlgorithm": "RS256",
  "accessTokenLifespan": 900,

  "roles": {
    "realm": [
      { "name": "admin", "description": "Administers the Lerdo tenant" },
      { "name": "editor", "description": "Creates and modifies content" },
      { "name": "viewer", "description": "Read-only access" }
    ]
  },

  "users": [
    {
      "username": "admin.lerdo",
      "email": "[email protected]",
      "firstName": "Admin",
      "lastName": "Lerdo",
      "enabled": true,
      "emailVerified": true,
      "credentials": [
        { "type": "password", "value": "Admin.Lerdo#2026", "temporary": false }
      ],
      "realmRoles": ["admin", "editor", "viewer"]
    },
    {
      "username": "juan.perez",
      "email": "[email protected]",
      "firstName": "Juan",
      "lastName": "Perez",
      "enabled": true,
      "emailVerified": true,
      "credentials": [
        { "type": "password", "value": "Juan.Perez#2026", "temporary": false }
      ],
      "realmRoles": ["editor", "viewer"]
    },
    {
      "username": "maria.lopez",
      "email": "[email protected]",
      "firstName": "Maria",
      "lastName": "Lopez",
      "enabled": true,
      "emailVerified": true,
      "credentials": [
        { "type": "password", "value": "Maria.Lopez#2026", "temporary": false }
      ],
      "realmRoles": ["viewer"]
    }
  ],

  "clients": [
    {
      "clientId": "lerdo-web",
      "name": "Lerdo Portal (SPA + backend)",
      "enabled": true,
      "protocol": "openid-connect",
      "publicClient": false,
      "secret": "lerdo-web-secret-change-me-in-prod",
      "standardFlowEnabled": true,
      "directAccessGrantsEnabled": false,
      "serviceAccountsEnabled": false,
      "redirectUris": [
        "http://localhost:3000/*",
        "https://portal.lerdo.example.mx/*"
      ],
      "webOrigins": [
        "http://localhost:3000",
        "https://portal.lerdo.example.mx"
      ],
      "attributes": {
        "pkce.code.challenge.method": "S256"
      }
    },
    {
      "clientId": "lerdo-cli",
      "name": "Test CLI (curl / httpie)",
      "enabled": true,
      "protocol": "openid-connect",
      "publicClient": true,
      "standardFlowEnabled": false,
      "directAccessGrantsEnabled": true,
      "redirectUris": [],
      "webOrigins": []
    }
  ]
}
```
Three things worth noticing:
- `lerdo-web` is confidential (`publicClient: false`), uses the Authorization Code Flow with PKCE and has a client secret. This is the "real" client that your portal is going to consume.
- `lerdo-cli` is public with Direct Access Grants enabled. That means I can request a token with `username` + `password` over `curl`, no browser, no redirects. It is an antipattern in production but pure gold for dev smoke tests.
- `bruteForceProtected: true` and `accessTokenLifespan: 900` (15 minutes) are sane defaults. I spell them out so you see them and change them on purpose, not by accident.
So there is no doubt, here is the table of example credentials:
| User | Password | Roles |
|---|---|---|
| `admin.lerdo` | `Admin.Lerdo#2026` | admin, editor, viewer |
| `juan.perez` | `Juan.Perez#2026` | editor, viewer |
| `maria.lopez` | `Maria.Lopez#2026` | viewer |
7. Bring up the stack and verify everything started
With the three files in place, one single command:
```shell
docker compose up -d
```
The first run takes one or two minutes — it pulls the images and Keycloak creates its schema. Watch what is happening:
```shell
docker compose logs -f keycloak
```
When you see something like this, the service is ready:
```
kc-server | Imported realm mx-ver-lerdo from file /opt/keycloak/data/import/mx-ver-lerdo-realm.json
kc-server | Keycloak 26.0.5 on JVM (powered by Quarkus 3.x.x) started in 18.412s.
kc-server | Listening on: http://0.0.0.0:8080
kc-server | Management interface listening on http://0.0.0.0:9000
```
Three URLs to confirm everything responds:
- `http://localhost:8080/` — Keycloak landing page.
- `http://localhost:8080/admin` — admin console. Log in with `kcadmin` / `ChangeMe.Master#2026`. Switch the realm selector in the top-left to `mx-ver-lerdo` and verify the three users and the two clients already exist.
- `http://localhost:8080/realms/mx-ver-lerdo/.well-known/openid-configuration` — the tenant's OIDC discovery document. If it returns JSON, everything is wired.
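The discovery document is where every client library finds the realm's endpoints, so it is worth knowing its shape. Below is a trimmed stand-in of what that URL returns (the real document has many more fields), with `jq` pulling out the two values the rest of this article uses:

```shell
# Trimmed sample of the OIDC discovery document for mx-ver-lerdo.
discovery='{
  "issuer": "http://localhost:8080/realms/mx-ver-lerdo",
  "authorization_endpoint": "http://localhost:8080/realms/mx-ver-lerdo/protocol/openid-connect/auth",
  "token_endpoint": "http://localhost:8080/realms/mx-ver-lerdo/protocol/openid-connect/token",
  "jwks_uri": "http://localhost:8080/realms/mx-ver-lerdo/protocol/openid-connect/certs"
}'

# Extract the issuer and the token endpoint.
printf '%s' "$discovery" | jq -r '.issuer, .token_endpoint'
```

Against the live stack you would feed the same filter from the network: `curl -s http://localhost:8080/realms/mx-ver-lerdo/.well-known/openid-configuration | jq -r '.issuer, .token_endpoint'`.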
8. Test login with curl — the best smoke test
Before writing a single line of frontend, I prove the tokens get issued. Request an access token for juan.perez using the public lerdo-cli client:
```shell
curl -s -X POST \
  http://localhost:8080/realms/mx-ver-lerdo/protocol/openid-connect/token \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=password" \
  -d "client_id=lerdo-cli" \
  -d "username=juan.perez" \
  -d "password=Juan.Perez#2026" \
  -d "scope=openid"
```
The response comes with access_token, refresh_token, expires_in and so on. To inspect the JWT payload without installing anything beyond jq and base64:
```shell
TOKEN=$(curl -s -X POST \
  http://localhost:8080/realms/mx-ver-lerdo/protocol/openid-connect/token \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=password" \
  -d "client_id=lerdo-cli" \
  -d "username=juan.perez" \
  -d "password=Juan.Perez#2026" \
  -d "scope=openid" | jq -r .access_token)

echo "$TOKEN" | cut -d'.' -f2 | base64 -d 2>/dev/null | jq .
```
You should see something like this (trimmed):
```json
{
  "exp": 1744823412,
  "iat": 1744822512,
  "iss": "http://localhost:8080/realms/mx-ver-lerdo",
  "aud": "account",
  "sub": "c9a3...",
  "typ": "Bearer",
  "realm_access": {
    "roles": ["editor", "viewer", "default-roles-mx-ver-lerdo"]
  },
  "azp": "lerdo-cli",
  "preferred_username": "juan.perez",
  "email": "[email protected]"
}
```
Notice two things: iss points to the specific realm (not the master), and realm_access.roles contains exactly the roles defined in the JSON. If your backend validates iss — and it should — that is the mechanism that prevents a token from mx-jal-guadalajara from slipping into Lerdo's endpoints.
9. Minimum hardening before moving this to a real server
start-dev is convenient, but it is not production. Here is the short —not exhaustive— list of things I never skip:
Source: FLY:D — Unsplash
- Change the master bootstrap password. `KC_BOOTSTRAP_ADMIN_PASSWORD` is only used the first time. After that, log into the console, create an admin user with your own email, grant it `admin` in the master realm, and delete the bootstrap user.
- Real TLS, not HTTP. `KC_HTTP_ENABLED=false`, `KC_HTTPS_CERTIFICATE_FILE` and `KC_HTTPS_CERTIFICATE_KEY_FILE` pointing to a Let's Encrypt cert. Better yet, a reverse proxy (nginx, Traefik, Caddy) terminating TLS with `KC_PROXY_HEADERS=xforwarded` on Keycloak.
- `start --optimized` after a `build`. `start-dev` re-builds providers on every boot. In production you run `kc.sh build` once in the image and `kc.sh start --optimized` at runtime. Startup drops from 18 seconds to 3.
- Secrets out of `.env`. Docker Secrets, HashiCorp Vault, AWS Secrets Manager, Azure Key Vault. Anything beats a flat file on the host.
- Real `KC_HOSTNAME`. On localhost it is `localhost`, in production it is the service's public FQDN. If Keycloak issues a token whose `iss` does not match what your clients expect, the backend rejects it and you spend three hours hunting the bug.
- Database backups. The realm lives in Postgres. The import JSON is a seed, not a backup. Schedule `pg_dump` on a cron and store the dumps off-host.
- Change the `lerdo-web` client secret and disable `lerdo-cli`. The public client with Direct Access Grants is for dev smoke tests. In production it gets deleted or disabled.
The tempting shortcut is to expose port 8080 directly to the internet and leave `KC_HOSTNAME_STRICT=false`. That works, but it puts you in an open-relay-of-tokens situation. Always put a reverse proxy in front.
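To make those points tangible, here is a sketch of a production override file. It assumes a TLS-terminating reverse proxy in front of Keycloak and that `kc.sh build` was already baked into the image; the FQDN is a placeholder, and Compose merges the `ports` lists of both files, so check the effective config with `docker compose config` before trusting it:

```yaml
# docker-compose.prod.yml — a sketch, not a drop-in file.
# Run with: docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
services:
  keycloak:
    command:
      - start
      - --optimized
      - --import-realm
    environment:
      KC_HOSTNAME: https://auth.lerdo.example.mx   # placeholder FQDN
      KC_HOSTNAME_STRICT: "true"
      KC_HTTP_ENABLED: "true"       # plain HTTP only on the internal network
      KC_PROXY_HEADERS: xforwarded  # trust X-Forwarded-* from the proxy
    ports:
      - "127.0.0.1:8080:8080"       # reachable only by the local reverse proxy
```

The important shift versus `start-dev` is that hostname strictness is back on, so Keycloak refuses to mint tokens for any host other than the one you declared.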
10. Adding the next tenant — the pattern repeats
The whole point of picking "one realm per tenant" is that adding the next one is boring in the best sense. Copy mx-ver-lerdo-realm.json and change:
"realm": "mx-jal-guadalajara""displayName": "MX / Jalisco / Guadalajara"- The
username,emailandclientIdvalues (do not share clients across realms, even if the name is the same).
Drop the file into ./realms/, restart the container (docker compose restart keycloak), and you are done: two isolated tenants, two token URLs (/realms/mx-ver-lerdo/... and /realms/mx-jal-guadalajara/...), two sets of signing keys. Your backend only needs to know which one to point at based on host, path or a header — and that decision lives in the gateway, not inside Keycloak.
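The rename itself is `sed`-friendly. A sketch, shown here against a trimmed stand-in of the realm file so it runs anywhere; on disk you would point the same command at `realms/mx-ver-lerdo-realm.json` (and then still do the manual pass over usernames, emails and clientIds):

```shell
# Trimmed stand-in for realms/mx-ver-lerdo-realm.json.
cat > /tmp/mx-ver-lerdo-realm.json <<'EOF'
{
  "realm": "mx-ver-lerdo",
  "displayName": "MX / Veracruz / Lerdo",
  "enabled": true
}
EOF

# Stamp out the Guadalajara tenant from the Lerdo template.
sed -e 's/mx-ver-lerdo/mx-jal-guadalajara/g' \
    -e 's|MX / Veracruz / Lerdo|MX / Jalisco / Guadalajara|g' \
    /tmp/mx-ver-lerdo-realm.json > /tmp/mx-jal-guadalajara-realm.json

jq -r .realm /tmp/mx-jal-guadalajara-realm.json   # mx-jal-guadalajara
```

A global `sed` is deliberately blunt: it also renames `default-roles-mx-ver-lerdo` and any URL containing the realm name, which is exactly what you want for a clone.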
11. The ritual I run after every new deploy
When I finish bringing up a Keycloak like this, I run exactly these three validations before handing it off to anyone. It is my personal checklist and it has saved me more than one after-hours phone call:
- Do I get a token for every user in the realm? One `curl` per user. If any of them fails, the import file has a typo.
- Does the token's `iss` match exactly what my backend validates? I copy `iss` from the payload and paste it into the resource server config. One extra space and everything breaks.
- Is a token issued in realm A rejected by realm B? I test it. Two tenants, one shared endpoint, I cross the tokens. If either of them passes, something is very wrong in the gateway.
Keycloak is not magic. It is well documented software that accepts being treated as such. If you give it a decent database, a tidy realm.json and a serious reverse proxy, it gives you an identity layer that runs for years without begging for attention. Which, in the end, is the only thing you ask of an infrastructure component: to disappear from the radar and stay there until it is needed again.
If this article helped, keep it. Next time someone asks you for "multi-tenant authentication", 80% of the work is already written down here.