# Deployment

## Overview

The POS system deploys as a set of Docker containers orchestrated by Docker Compose. A single deploy script (`tools/scripts/deploy.sh`) runs the full pipeline: typecheck, test, validate i18n, build, Docker build, deploy, and verify.
**Important:** Never run `pnpm build`, `docker compose build`, or `docker compose up` directly. Always use the deploy script.
## Deploy Script

```bash
bash tools/scripts/deploy.sh   # timeout 600000 (10 minutes)
```

### Pipeline Stages
1. **Typecheck** - Runs `tsc --noEmit` in parallel for shared-types, hq-server, and store-server, and `vue-tsc --noEmit` for web-client
2. **Tests** - Runs test suites in parallel for hq-server, store-server, and web-client
3. **i18n Validation** - Runs `validate-i18n.js` to ensure en.json and es.json have matching keys
4. **Build Packages** - Builds all shared packages in order: shared-types, i18n, backend-adapter, tax-engine, event-bus, sync-protocol
5. **Build Web** - Builds the web client SPA (`vite build`)
6. **Build Docker** - Builds Docker images for hq-server, store1-server, store2-server
7. **Deploy** - Runs `docker compose up -d --force-recreate` for server containers, then restarts web containers
8. **Verify** - Checks container logs for startup confirmation
If any stage fails, the script exits immediately (`set -e`).
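The stage-by-stage abort behavior can be sketched as a minimal script skeleton. This is illustrative only — the `run_stage` helper and the echoed commands are hypothetical, not the actual contents of `deploy.sh`; the stage names come from the list above.

```bash
#!/usr/bin/env bash
# Illustrative skeleton of the deploy pipeline (hypothetical helper;
# real stage commands are in tools/scripts/deploy.sh).
set -e  # any failing stage aborts the whole pipeline immediately

run_stage() {
  local name="$1"; shift
  echo "==> ${name}"
  "$@"   # run the stage; under set -e, a non-zero exit stops the script
}

run_stage "Typecheck"       echo "tsc --noEmit / vue-tsc --noEmit (parallel)"
run_stage "Tests"           echo "run test suites (parallel)"
run_stage "i18n Validation" echo "node validate-i18n.js"
run_stage "Build Packages"  echo "build shared packages, in order"
run_stage "Build Web"       echo "vite build"
run_stage "Build Docker"    echo "docker build (hq, store1, store2)"
run_stage "Deploy"          echo "docker compose up -d --force-recreate"
run_stage "Verify"          echo "check container logs"
```

Because `set -e` is active, the first stage whose command exits non-zero terminates the script before any later stage runs.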
## Docker Compose Architecture

The production Docker Compose file (`docker-compose.prod.yml`) defines these services:
### HQ Stack

| Service | Container | Description |
|---|---|---|
| hq-postgres | pos-hq-postgres | PostgreSQL 16 for HQ database |
| redis | pos-redis | Redis 7 for BullMQ notification queue |
| hq-server | pos-hq-server | HQ API server (Fastify) |
| hq-web | pos-hq-web | nginx serving SPA + API proxy |
### Per-Store Stack (repeated for each store)

| Service | Container | Description |
|---|---|---|
| store1-postgres | pos-store1-postgres | PostgreSQL 16 for store database |
| store1-server | pos-store1-server | Store API server (Fastify) |
| store1-web | pos-store1-web | nginx serving SPA + API proxy |
### Portal

| Service | Container | Description |
|---|---|---|
| portal | pos-portal | Landing page with links to all apps |
### Networks

- `pos-internal`: Internal network for server-to-database and server-to-server communication
- `coolify`: External network for the Traefik reverse proxy (TLS termination)
### Volumes

Persistent volumes for each PostgreSQL instance and for Redis:

- `hq-pgdata`, `store1-pgdata`, `store2-pgdata`, `redis-data`
## nginx Configuration

Each web container runs nginx with:

### SPA Fallback

```nginx
location / {
    try_files $uri $uri/ /index.html;
}
```

### API Proxy
```nginx
location /api/ {
    proxy_pass http://pos-hq-server:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```

The WebSocket upgrade headers allow the sync WebSocket to work through nginx.
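Note that hardcoding `Connection "upgrade"` sends the upgrade header on every proxied request. A common nginx pattern — sketched here as an option, not taken from this project's config — derives the header from `$http_upgrade` so plain HTTP requests keep normal keep-alive semantics:

```nginx
# Hypothetical variant: the map must live in the http context.
# Sends Connection: upgrade only when the client requested an upgrade.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

location /api/ {
    proxy_pass http://pos-hq-server:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
}
```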
### Cache Headers

| Path | Cache Policy |
|---|---|
| /index.html | no-cache, no-store, must-revalidate |
| /sw.js | no-cache, no-store, must-revalidate |
| Static assets (*.js, *.css, images, fonts) | 1 year, public, immutable (Vite hashes filenames) |
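The policy in this table typically maps to location blocks like the following sketch (not the project's exact nginx.conf; the matched extensions are illustrative):

```nginx
# Never cache the HTML shell or the service worker: clients must always
# revalidate so new deployments are picked up immediately.
location = /index.html {
    add_header Cache-Control "no-cache, no-store, must-revalidate";
}
location = /sw.js {
    add_header Cache-Control "no-cache, no-store, must-revalidate";
}

# Hashed static assets are safe to cache for a year: a content change
# produces a new filename, so stale entries are never served.
location ~* \.(js|css|png|jpg|svg|woff2?)$ {
    add_header Cache-Control "public, max-age=31536000, immutable";
}
```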
### Gzip

Enabled for text/plain, text/css, application/json, application/javascript, and text/xml.
### Store-Specific nginx

Store web containers use a different nginx config that proxies `/api/` to the store server instead of HQ:

```nginx
location /api/ {
    proxy_pass http://pos-store1-server:3002;
}
```

## Runtime Configuration
Each web deployment serves a different `config.json` at the web root, mounted as a Docker volume:

### HQ Config (deploy/hq/config.json)

```json
{
  "mode": "hq",
  "apiUrl": ""
}
```

### Store Config (deploy/store1/config.json)

```json
{
  "mode": "store",
  "apiUrl": "",
  "storeId": "da92c128-...",
  "storeCode": "ST01",
  "storeName": "Downtown Store"
}
```

The `apiUrl` is empty because the nginx proxy handles `/api/` routing. The `mode` determines which Vue Router routes are registered.
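Since mounting the wrong `config.json` silently changes which routes an app registers, a quick pre-deploy sanity check can help. This helper is hypothetical (not part of the repo) and, being grep-based, assumes the pretty-printed key layout shown above:

```bash
# Hypothetical pre-deploy check: confirm a config.json declares the
# expected mode ("hq" or "store") before it is mounted into a web container.
check_mode() {
  local file="$1" expected="$2"
  grep -q "\"mode\": \"${expected}\"" "$file"
}

# Example usage:
#   check_mode deploy/hq/config.json hq && echo "mode ok"
```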
## Environment Variables

### HQ Server Container

| Variable | Value |
|---|---|
| DATABASE_URL | postgresql://pos_admin:pos_dev_password@hq-postgres:5432/pos_hq |
| REDIS_URL | redis://redis:6379 |
| JWT_SECRET | Production secret |
| PORT | 3000 |
| HOST | 0.0.0.0 |
### Store Server Container

| Variable | Value |
|---|---|
| DATABASE_URL | postgresql://pos_store:pos_store1_password@store1-postgres:5432/pos_store |
| JWT_SECRET | Same as HQ (required for sync token verification) |
| STORE_ID | UUID of this store in HQ database |
| STORE_CODE | Short code (e.g., ST01) |
| HQ_URL | http://hq-server:3000 (Docker internal network) |
| SYNC_TOKEN | JWT with type: "store_sync" |
| SYNC_INTERVAL_MS | 30000 |
| PORT | 3002 |
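Put together, a store server service entry might look like the sketch below. It is assembled from the values in this section; the image name, the use of `${...}` substitution for secrets, and the exact healthcheck wiring are assumptions, not a copy of `docker-compose.prod.yml`:

```yaml
# Hypothetical sketch of a store server service entry.
store1-server:
  image: pos-store1-server          # assumed image name
  container_name: pos-store1-server
  environment:
    DATABASE_URL: postgresql://pos_store:pos_store1_password@store1-postgres:5432/pos_store
    JWT_SECRET: ${JWT_SECRET}       # same secret as HQ (sync token verification)
    STORE_ID: ${STORE1_ID}          # UUID of this store in the HQ database
    STORE_CODE: ST01
    HQ_URL: http://hq-server:3000   # Docker internal network
    SYNC_TOKEN: ${STORE1_SYNC_TOKEN}
    SYNC_INTERVAL_MS: "30000"
    PORT: "3002"
  depends_on:
    store1-postgres:
      condition: service_healthy
  networks:
    - pos-internal
```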
## TLS / Domain Routing

Traefik handles TLS termination and routes requests by hostname:

| Domain | Service |
|---|---|
| hq.pos.example.com | pos-hq-web |
| tienda1.pos.example.com | pos-store1-web |
| tienda2.pos.example.com | pos-store2-web |
| pos.example.com | pos-portal |
Traefik labels on each web container configure the routing rules and Let's Encrypt certificates.
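Such labels typically take the following shape. This is a sketch of the standard Traefik label conventions, not this project's actual labels — the router name (`pos-hq`), entrypoint (`websecure`), and certificate resolver name (`letsencrypt`) are assumptions:

```yaml
# Hypothetical labels on the HQ web container.
hq-web:
  labels:
    - traefik.enable=true
    - traefik.http.routers.pos-hq.rule=Host(`hq.pos.example.com`)
    - traefik.http.routers.pos-hq.entrypoints=websecure
    - traefik.http.routers.pos-hq.tls.certresolver=letsencrypt
    - traefik.http.services.pos-hq.loadbalancer.server.port=80
```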
## Adding a New Store

1. Create a new store record in the HQ database (via the Stores admin UI or API)
2. Generate a sync token: `app.jwt.sign({ storeId, storeCode, type: 'store_sync' }, { expiresIn: '1y' })`
3. Add new services to `docker-compose.prod.yml`: postgres, server, web
4. Create `deploy/storeN/config.json` and `deploy/storeN/nginx.conf`
5. Set environment variables on the store server container
6. Deploy
## Health Checks

Docker Compose uses health checks on the PostgreSQL and Redis containers. The server containers depend on their database being healthy before starting:

```yaml
depends_on:
  store1-postgres:
    condition: service_healthy
```

The application itself exposes `GET /health`, which returns `{ status: 'ok' }`.
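For `service_healthy` to mean anything, the database containers need a `healthcheck` of their own. A typical form — sketched here with the credentials from this document, but not necessarily the project's exact intervals — uses `pg_isready` and `redis-cli ping`:

```yaml
# Hypothetical healthcheck definitions; intervals/retries are illustrative.
hq-postgres:
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U pos_admin -d pos_hq"]
    interval: 5s
    timeout: 3s
    retries: 10

redis:
  healthcheck:
    test: ["CMD", "redis-cli", "ping"]
    interval: 5s
    timeout: 3s
    retries: 10
```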
## Troubleshooting

### Stale Database Connections

If the deploy script fails because of stale DB connections (PostgreSQL reports "too many connections"), the containers may need a harder restart. The deploy script handles this by using `--force-recreate`.
### Checking Logs

```bash
docker logs pos-hq-server
docker logs pos-store1-server
docker logs pos-hq-web
```

### Database Access

```bash
# HQ database
docker exec -it pos-hq-postgres psql -U pos_admin -d pos_hq

# Store database
docker exec -it pos-store1-postgres psql -U pos_store -d pos_store
```