Architecture

Overview

Claritools is a set of Docker Compose files, Traefik configuration, and a Taskfile that together provide an HTTPS reverse proxy, service dashboard, and monitoring stack. The architecture is designed to be layered — each environment builds on the previous one by adding compose files.

Core components

Traefik

Traefik is the reverse proxy at the heart of claritools. It handles:

  • TLS termination — self-signed certificates locally, Let's Encrypt in production
  • Service discovery — reads Docker labels to automatically create routes
  • HTTP to HTTPS redirection — all HTTP traffic on port 80 is redirected to HTTPS on port 443
  • Load balancing — routes requests to the correct container by hostname

Traefik is configured via a static config file (config/<environment>/traefik.yml) and dynamic configuration from Docker labels and a file provider.
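A minimal static configuration along these lines would wire up the two entrypoints and both providers. This is a sketch, not the repository's actual file: the entrypoint names and the dynamic-config directory are assumptions.

```yaml
# Sketch of a static traefik.yml — names and paths are illustrative,
# not copied from config/<environment>/.
entryPoints:
  web:
    address: ":80"
    http:
      redirections:            # global HTTP -> HTTPS redirect
        entryPoint:
          to: websecure
          scheme: https
  websecure:
    address: ":443"

providers:
  docker:
    exposedByDefault: false    # only containers opting in via labels get routes
  file:
    directory: /etc/traefik/dynamic   # file provider for non-Docker routes
```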

Homepage

Homepage provides a service dashboard that auto-discovers running containers via Docker labels. Each service can specify its group, name, icon, and URL through labels prefixed with `homepage.`.
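As an illustration (the service and label values are hypothetical), a connected container might advertise itself to Homepage like this:

```yaml
services:
  whoami:                        # hypothetical example service
    image: traefik/whoami
    labels:
      homepage.group: Tools      # dashboard section
      homepage.name: Whoami      # display name
      homepage.icon: mdi-account
      homepage.href: https://whoami.local.ciservers.net
```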

Prometheus and cAdvisor

Prometheus scrapes metrics from cAdvisor, which monitors container resource usage (CPU, memory, disk, network). This provides basic observability for all running containers.
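The scrape configuration for this pairing could look like the following fragment. The job name is an assumption; the target relies on cAdvisor's default metrics port (8080) and on both containers sharing a Docker network.

```yaml
# prometheus.yml fragment — sketch, not the repository's actual config
scrape_configs:
  - job_name: cadvisor
    static_configs:
      - targets: ["cadvisor:8080"]   # resolved over the shared Docker network
```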

Network architecture

All claritools services and connected projects communicate over an external Docker network called autoproxy.

                        autoproxy network
    +---------------------------------------------------------+
    |                                                         |
    |  Traefik ←──→ Homepage                                  |
    |    ↕            ↕                                       |
    |  Prometheus ←──→ cAdvisor                               |
    |    ↕                                                    |
    |  Portainer    IT Tools     (local/dev only)             |
    |    ↕            ↕                                       |
    |  Keycloak ←──→ oauth2-proxy (dev only)                  |
    |    ↕                                                    |
    |  your-app-1   your-app-2   (connected projects)         |
    |                                                         |
    +---------------------------------------------------------+
                        |
                        | ports 80, 443
                        v
                     Browser

The autoproxy network is created automatically on task start and is declared as external: true in connected projects so they can join it.
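In a connected project's compose file, joining the network is a small addition (the service name and image here are placeholders):

```yaml
services:
  your-app-1:
    image: nginx            # placeholder image
    networks:
      - autoproxy           # join the shared proxy network

networks:
  autoproxy:
    external: true          # created by claritools, not by this project
```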

Compose file layering

The environment is assembled from multiple Docker Compose files:

docker-compose.yml          ← Core services (all environments)
  |
  +── docker-compose.local.yml   ← Portainer, IT Tools (local + dev)
        |
        +── docker-compose.dev.yml   ← Keycloak, oauth2-proxy, ForwardAuth labels (dev only)

When task start runs, the Taskfile selects the appropriate compose files:

| Environment | Command |
| --- | --- |
| local | `docker compose -f docker-compose.yml -f docker-compose.local.yml up -d` |
| dev | `docker compose -f docker-compose.yml -f docker-compose.local.yml -f docker-compose.dev.yml up -d` |
| cd | `docker compose -f docker-compose.yml up -d` |
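A Taskfile sketch of that selection logic might read as follows. The task and variable names are assumptions about how such a Taskfile could be written, not a copy of the repository's:

```yaml
# Taskfile.yml fragment — illustrative only
version: "3"
tasks:
  start:
    vars:
      COMPOSE_FILES: >-
        {{if eq .CLARITOOLS_ENVIRONMENT "dev"}}-f docker-compose.yml -f docker-compose.local.yml -f docker-compose.dev.yml{{else if eq .CLARITOOLS_ENVIRONMENT "local"}}-f docker-compose.yml -f docker-compose.local.yml{{else}}-f docker-compose.yml{{end}}
    cmds:
      - docker compose {{.COMPOSE_FILES}} up -d
```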

Environment variables

Two environment variables drive the configuration:

CLARITOOLS_ENVIRONMENT — Set in .env by task init. Determines which config directory is mounted and which Taskfile tasks are invoked. Values: local, dev, cd.

CLARITOOLS_DOMAIN — Derived automatically in the Taskfile (not stored in .env). Controls the URL subdomain pattern. Maps to local for both local and dev environments, and cd for the cd environment. This decoupling allows dev mode to use *.local.ciservers.net URLs while loading configuration from config/dev/.
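The derivation can be pictured as a small shell mapping. The variable names come from the section above; the logic itself is a sketch of what the Taskfile does, not a copy of it.

```shell
# Map CLARITOOLS_ENVIRONMENT to CLARITOOLS_DOMAIN — sketch only
CLARITOOLS_ENVIRONMENT="${CLARITOOLS_ENVIRONMENT:-local}"
case "$CLARITOOLS_ENVIRONMENT" in
  local|dev) CLARITOOLS_DOMAIN="local" ;;  # dev reuses *.local.ciservers.net URLs
  cd)        CLARITOOLS_DOMAIN="cd" ;;
esac
echo "$CLARITOOLS_DOMAIN"
```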

Authentication architecture (dev mode)

Dev mode adds an OIDC authentication layer using three components:

Browser
  |
  | 1. Request to *.local.ciservers.net
  v
Traefik
  |
  | 2. ForwardAuth middleware
  |    GET /oauth2/auth → oauth2-proxy (internal)
  |
  |--- Authenticated (202) → proxy to backend service
  |
  |--- Not authenticated (401) → errors middleware
  |    serves /oauth2/sign_in page from oauth2-proxy
  |      |
  |      | 3. User clicks sign-in
  |      v
  |    oauth2-proxy redirects to Keycloak (external HTTPS URL)
  |      |
  |      | 4. User authenticates
  |      v
  |    Keycloak redirects to /oauth2/callback (same host)
  |      |
  |      | 5. oauth2-proxy validates token, sets cookie
  |      v
  |    Redirect to original service
  v
Backend service
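The middleware chain in step 2 could be declared with Traefik labels roughly like this. The middleware names are assumptions, and 4180 is simply oauth2-proxy's default port:

```yaml
labels:
  # ForwardAuth: every request is checked against oauth2-proxy first
  - "traefik.http.middlewares.forward-auth.forwardauth.address=http://oauth2-proxy:4180/oauth2/auth"
  - "traefik.http.middlewares.forward-auth.forwardauth.trustForwardHeader=true"
  # On 401, serve the sign-in page instead of a bare error
  - "traefik.http.middlewares.auth-errors.errors.status=401"
  - "traefik.http.middlewares.auth-errors.errors.service=oauth2-proxy"
  - "traefik.http.middlewares.auth-errors.errors.query=/oauth2/sign_in"
```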

Split URL architecture

A key design decision is the split between internal and external URLs:

| URL type | Used by | Points to | Why |
| --- | --- | --- | --- |
| Login URL | Browser | `https://keycloak.local.ciservers.net/...` | User must reach Keycloak in their browser |
| Token URL | oauth2-proxy | `http://keycloak:8080/...` | Server-to-server call over Docker network |
| JWKS URL | oauth2-proxy | `http://keycloak:8080/...` | Server-to-server call over Docker network |

OIDC auto-discovery is disabled (SKIP_OIDC_DISCOVERY=true) because Keycloak's discovery endpoint returns internal URLs when accessed over the Docker network. Each endpoint is configured explicitly to ensure the browser is always directed to the external HTTPS URL.
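With discovery off, the three endpoints are pinned individually. In oauth2-proxy's environment-variable form this looks roughly like the following; the realm name `claritools` is a placeholder, not taken from the repository:

```yaml
# oauth2-proxy environment — sketch; the realm name is hypothetical
environment:
  OAUTH2_PROXY_SKIP_OIDC_DISCOVERY: "true"
  # Browser-facing: external HTTPS URL
  OAUTH2_PROXY_LOGIN_URL: "https://keycloak.local.ciservers.net/realms/claritools/protocol/openid-connect/auth"
  # Server-to-server: internal Docker hostname
  OAUTH2_PROXY_REDEEM_URL: "http://keycloak:8080/realms/claritools/protocol/openid-connect/token"
  OAUTH2_PROXY_OIDC_JWKS_URL: "http://keycloak:8080/realms/claritools/protocol/openid-connect/certs"
```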

Catch-all /oauth2/ router

A high-priority Traefik router matches ``PathPrefix(`/oauth2/`)`` on **all hosts** and routes these requests to oauth2-proxy. This ensures that the OAuth callback (`/oauth2/callback`) works on whichever host initiated the sign-in flow, keeping the CSRF cookie on the same domain throughout the authentication process.
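Such a router can be expressed as labels on the oauth2-proxy service. The router name and priority value here are illustrative, not the repository's:

```yaml
labels:
  - "traefik.enable=true"
  # Match /oauth2/ on any host, ahead of the per-service Host() routers
  - "traefik.http.routers.oauth2.rule=PathPrefix(`/oauth2/`)"
  - "traefik.http.routers.oauth2.priority=1000"
  - "traefik.http.routers.oauth2.entrypoints=websecure"
  - "traefik.http.services.oauth2.loadbalancer.server.port=4180"
```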