
Deployment

Overview

The POS system deploys as a set of Docker containers orchestrated by Docker Compose. A single deploy script (tools/scripts/deploy.sh) runs the full pipeline: typecheck, test, validate i18n, build, Docker build, deploy, and verify.

Important: Never run pnpm build, docker compose build, or docker compose up directly. Always use the deploy script.

Deploy Script

```bash
bash tools/scripts/deploy.sh   # timeout: 600000 ms (10 minutes)
```

Pipeline Stages

  1. Typecheck - Runs tsc --noEmit in parallel for shared-types, hq-server, store-server, and vue-tsc --noEmit for web-client
  2. Tests - Runs test suites in parallel for hq-server, store-server, and web-client
  3. i18n Validation - Runs validate-i18n.js to ensure en.json and es.json have matching keys
  4. Build Packages - Builds all shared packages in order: shared-types, i18n, backend-adapter, tax-engine, event-bus, sync-protocol
  5. Build Web - Builds the web client SPA (vite build)
  6. Build Docker - Builds Docker images for hq-server, store1-server, store2-server
  7. Deploy - Runs docker compose up -d --force-recreate for server containers, then restarts web containers
  8. Verify - Checks container logs for startup confirmation

If any stage fails, the script exits immediately (set -e).
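The stage-and-abort structure can be sketched in plain bash (a hypothetical sketch: `run_stage` and the placeholder `true` commands are illustrative, not the script's actual internals, which invoke tsc, the test runners, validate-i18n.js, vite, and docker compose):

```shell
#!/usr/bin/env bash
set -e  # exit immediately if any stage fails

# Run one pipeline stage; because of set -e, a failing stage
# aborts the whole deploy.
run_stage() {
  local name="$1"; shift
  echo "==> $name"
  "$@"
}

# Placeholder commands stand in for the real tools.
run_stage "typecheck"      true
run_stage "tests"          true
run_stage "i18n"           true
run_stage "build-packages" true
run_stage "build-web"      true
run_stage "build-docker"   true
run_stage "deploy"         true
run_stage "verify"         true
echo "deploy complete"
```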

Docker Compose Architecture

The production Docker Compose file (docker-compose.prod.yml) defines these services:

HQ Stack

| Service | Container | Description |
| --- | --- | --- |
| hq-postgres | pos-hq-postgres | PostgreSQL 16 for HQ database |
| redis | pos-redis | Redis 7 for BullMQ notification queue |
| hq-server | pos-hq-server | HQ API server (Fastify) |
| hq-web | pos-hq-web | nginx serving SPA + API proxy |

Per-Store Stack (repeated for each store)

| Service | Container | Description |
| --- | --- | --- |
| store1-postgres | pos-store1-postgres | PostgreSQL 16 for store database |
| store1-server | pos-store1-server | Store API server (Fastify) |
| store1-web | pos-store1-web | nginx serving SPA + API proxy |

Portal

| Service | Container | Description |
| --- | --- | --- |
| portal | pos-portal | Landing page with links to all apps |

Networks

  • pos-internal: Internal network for server-to-database and server-to-server communication
  • coolify: External network for Traefik reverse proxy (TLS termination)

Volumes

Persistent volumes for each PostgreSQL instance and Redis:

  • hq-pgdata, store1-pgdata, store2-pgdata, redis-data
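In docker-compose.prod.yml, the top-level declarations for these might look like the following (a sketch; the `bridge` driver and the `external` flag are assumptions based on the descriptions above):

```yaml
networks:
  pos-internal:
    driver: bridge   # server-to-database and server-to-server traffic
  coolify:
    external: true   # pre-existing network owned by the Traefik setup

volumes:
  hq-pgdata:
  store1-pgdata:
  store2-pgdata:
  redis-data:
```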

nginx Configuration

Each web container runs nginx with:

SPA Fallback

```nginx
location / {
    try_files $uri $uri/ /index.html;
}
```

API Proxy

```nginx
location /api/ {
    proxy_pass http://pos-hq-server:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```

The WebSocket upgrade headers allow the sync WebSocket to work through nginx.

Cache Headers

| Path | Cache Policy |
| --- | --- |
| /index.html | no-cache, no-store, must-revalidate |
| /sw.js | no-cache, no-store, must-revalidate |
| Static assets (*.js, *.css, images, fonts) | 1 year, public, immutable (Vite hashes filenames) |
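In the nginx config, these policies might be expressed roughly as follows (a hypothetical sketch; the actual nginx.conf and its exact file-extension list are not shown in this document):

```nginx
location = /index.html {
    add_header Cache-Control "no-cache, no-store, must-revalidate";
}

location = /sw.js {
    add_header Cache-Control "no-cache, no-store, must-revalidate";
}

# Vite fingerprints asset filenames, so these can be cached indefinitely.
location ~* \.(js|css|png|jpg|svg|ico|woff2?)$ {
    add_header Cache-Control "public, max-age=31536000, immutable";
}
```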

Gzip

Enabled for text/plain, text/css, application/json, application/javascript, text/xml.
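In nginx directive form, this corresponds to something like (a sketch):

```nginx
gzip on;
gzip_types text/plain text/css application/json application/javascript text/xml;
```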

Store-Specific nginx

Store web containers use a different nginx config that proxies /api/ to the store server instead of HQ:

```nginx
location /api/ {
    proxy_pass http://pos-store1-server:3002;
}
```

Runtime Configuration

Each web deployment serves a different config.json at the web root, mounted as a Docker volume:

HQ Config (deploy/hq/config.json)

```json
{
  "mode": "hq",
  "apiUrl": ""
}
```

Store Config (deploy/store1/config.json)

```json
{
  "mode": "store",
  "apiUrl": "",
  "storeId": "da92c128-...",
  "storeCode": "ST01",
  "storeName": "Downtown Store"
}
```

The apiUrl is empty because the nginx proxy handles /api/ routing. The mode determines which Vue Router routes are registered.

Environment Variables

HQ Server Container

| Variable | Value |
| --- | --- |
| DATABASE_URL | `postgresql://pos_admin:pos_dev_password@hq-postgres:5432/pos_hq` |
| REDIS_URL | `redis://redis:6379` |
| JWT_SECRET | Production secret |
| PORT | 3000 |
| HOST | 0.0.0.0 |

Store Server Container

| Variable | Value |
| --- | --- |
| DATABASE_URL | `postgresql://pos_store:pos_store1_password@store1-postgres:5432/pos_store` |
| JWT_SECRET | Same as HQ (required for sync token verification) |
| STORE_ID | UUID of this store in HQ database |
| STORE_CODE | Short code (e.g., ST01) |
| HQ_URL | `http://hq-server:3000` (Docker internal network) |
| SYNC_TOKEN | JWT with `type: "store_sync"` |
| SYNC_INTERVAL_MS | 30000 |
| PORT | 3002 |
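In docker-compose.prod.yml, the store server's environment section might look like this (a sketch; the `${...}` substitution variables are hypothetical names, not taken from the actual compose file):

```yaml
  store1-server:
    environment:
      DATABASE_URL: postgresql://pos_store:pos_store1_password@store1-postgres:5432/pos_store
      JWT_SECRET: ${JWT_SECRET}          # must match the HQ secret
      STORE_ID: ${STORE1_ID}             # UUID from the HQ stores table
      STORE_CODE: ST01
      HQ_URL: http://hq-server:3000
      SYNC_TOKEN: ${STORE1_SYNC_TOKEN}   # JWT with type: "store_sync"
      SYNC_INTERVAL_MS: "30000"
      PORT: "3002"
```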

TLS / Domain Routing

Traefik handles TLS termination and routes requests by hostname:

| Domain | Service |
| --- | --- |
| hq.pos.example.com | pos-hq-web |
| tienda1.pos.example.com | pos-store1-web |
| tienda2.pos.example.com | pos-store2-web |
| pos.example.com | pos-portal |

Traefik labels on each web container configure the routing rules and Let's Encrypt certificates.
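For the HQ web container, those labels might look like the following (a sketch using standard Traefik v2 Docker-provider label syntax; the router name `hq-web`, the entrypoint name, and the `letsencrypt` resolver name are assumptions):

```yaml
  hq-web:
    labels:
      - traefik.enable=true
      - traefik.http.routers.hq-web.rule=Host(`hq.pos.example.com`)
      - traefik.http.routers.hq-web.entrypoints=websecure
      - traefik.http.routers.hq-web.tls.certresolver=letsencrypt
```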

Adding a New Store

  1. Create a new store record in the HQ database (via the Stores admin UI or API)
  2. Generate a sync token: app.jwt.sign({ storeId, storeCode, type: 'store_sync' }, { expiresIn: '1y' })
  3. Add new services to docker-compose.prod.yml: postgres, server, web
  4. Create deploy/storeN/config.json and deploy/storeN/nginx.conf
  5. Set environment variables on the store server container
  6. Deploy
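Step 3 amounts to cloning the existing per-store stack under new names. A condensed sketch for a hypothetical third store (image names, build contexts, and most settings omitted; everything here mirrors the store1 services rather than quoting the actual compose file):

```yaml
  store3-postgres:
    image: postgres:16
    volumes:
      - store3-pgdata:/var/lib/postgresql/data
    networks:
      - pos-internal

  store3-server:
    container_name: pos-store3-server
    depends_on:
      store3-postgres:
        condition: service_healthy
    networks:
      - pos-internal

  store3-web:
    container_name: pos-store3-web
    networks:
      - pos-internal
      - coolify
```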

Health Checks

Docker Compose uses health checks on PostgreSQL and Redis containers. The server containers depend on their database being healthy before starting:

```yaml
depends_on:
  store1-postgres:
    condition: service_healthy
```

The application itself exposes GET /health which returns { status: 'ok' }.
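The health check on a PostgreSQL container might be defined like this (a sketch; the `pg_isready` invocation and the interval/retry values are assumptions, not quoted from the compose file):

```yaml
  hq-postgres:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U pos_admin -d pos_hq"]
      interval: 5s
      timeout: 5s
      retries: 10
```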

Troubleshooting

Stale Database Connections

If the deploy script fails on stale DB connections (PostgreSQL reports "too many connections"), the containers may need a harder restart. The deploy script handles this by using --force-recreate.

Checking Logs

```bash
docker logs pos-hq-server
docker logs pos-store1-server
docker logs pos-hq-web
```

Database Access

```bash
# HQ database
docker exec -it pos-hq-postgres psql -U pos_admin -d pos_hq

# Store database
docker exec -it pos-store1-postgres psql -U pos_store -d pos_store
```