This guide walks you through setting up Currents on-premises using Docker Compose.
First, clone the repository and enter the on-prem directory:

```shell
git clone https://github.com/currents-dev/docker.git currents-docker
cd currents-docker/on-prem
```
A .env file is required to run the services. You have two options:
The interactive setup wizard will guide you through configuration:
```shell
./scripts/setup.sh
```
This will:

- Walk you through selecting the appropriate `docker-compose.yml` profile
- Create a `.env` file with auto-generated secrets

If you prefer to configure manually:
```shell
cp .env.example .env
```
Then edit .env to fill in the required secrets. See Configuration Reference for generation commands.
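For example, random secrets can be generated with `openssl`. The variable name below is a placeholder, not an actual Currents setting; use the names and lengths the Configuration Reference specifies:

```shell
# EXAMPLE_SECRET is a placeholder name, not a real Currents variable.
EXAMPLE_SECRET=$(openssl rand -hex 32)   # 32 random bytes as 64 hex characters
echo "EXAMPLE_SECRET=${EXAMPLE_SECRET}"
```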
| Profile | File | Services Included | Use Case |
|---|---|---|---|
| `full` | `docker-compose.full.yml` | Redis, MongoDB, ClickHouse, RustFS | Running everything locally |
| `database` | `docker-compose.database.yml` | Redis, MongoDB, ClickHouse | Using external S3-compatible storage |
| `cache` | `docker-compose.cache.yml` | Redis | Using external MongoDB, ClickHouse, and S3 |
Review and customize .env as needed.
Configure the URLs where Currents will be accessible. For production, we recommend using subdomains:
```shell
# Dashboard and API
APP_BASE_URL=https://currents-app.example.com

# Recording endpoint (where test reporters send data)
CURRENTS_RECORD_API_URL=https://currents-record.example.com
```
For local development, use localhost with ports:
```shell
APP_BASE_URL=http://localhost:4000
CURRENTS_RECORD_API_URL=http://localhost:1234
```
The ON_PREM_EMAIL is the email address used to create the initial root admin user:
```shell
ON_PREM_EMAIL=admin@example.com
```
We recommend using your own S3-compatible object storage (AWS S3, Google Cloud Storage, etc.) rather than the included RustFS service. Configure your storage provider:
```shell
# Your S3-compatible endpoint
FILE_STORAGE_ENDPOINT=https://s3.us-east-1.amazonaws.com

# Bucket name (must already exist)
FILE_STORAGE_BUCKET=currents-artifacts

# Credentials
FILE_STORAGE_ACCESS_KEY_ID=<credentials>
FILE_STORAGE_SECRET_ACCESS_KEY=<credentials>

# Region (required for AWS S3)
FILE_STORAGE_REGION=us-east-1

# Use path-style URLs (required for MinIO and most S3-compatible services, not needed for AWS S3)
# FILE_STORAGE_FORCE_PATH_STYLE=true
```
If using the included RustFS for testing, configure the RUSTFS_* variables instead. The RustFS profile automatically sets FILE_STORAGE_FORCE_PATH_STYLE=true for all services.
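To see why path-style matters, here is a small illustrative sketch (not Currents code) of how the two S3 URL styles differ:

```python
def object_url(endpoint: str, bucket: str, key: str, force_path_style: bool = False) -> str:
    """Illustrates S3 object URL construction; not the actual client implementation."""
    scheme, host = endpoint.split("://", 1)
    if force_path_style:
        # Path-style: bucket appears in the path (MinIO, RustFS, most S3 clones)
        return f"{scheme}://{host}/{bucket}/{key}"
    # Virtual-hosted style: bucket appears as a subdomain (AWS S3 default)
    return f"{scheme}://{bucket}.{host}/{key}"

print(object_url("https://s3.us-east-1.amazonaws.com", "currents-artifacts", "run/shot.png"))
# → https://currents-artifacts.s3.us-east-1.amazonaws.com/run/shot.png
```

Services like MinIO and RustFS typically listen on a single hostname, so the bucket cannot be resolved as a subdomain; that is why they require path-style URLs.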
⚠️ Production Note: RustFS is intended for local development and testing only. The included Docker Compose configuration targets local development; production deployments should use an external, production-grade object storage backend such as AWS S3, Google Cloud Storage, or a managed MinIO cluster.
Email is required for notifications, invitations, and reports:
# SMTP server
SMTP_HOST=smtp.example.com
SMTP_PORT=587
SMTP_SECURE=false
# SMTP credentials
SMTP_USER=your-smtp-username
SMTP_PASS=your-smtp-password
# From address for automated emails
AUTOMATED_REPORTS_EMAIL_FROM=Currents Report <reports@example.com>
Note: `SMTP_SECURE=false` uses STARTTLS (explicit TLS), which starts unencrypted and then upgrades to TLS; this is the standard for port 587 and is recommended for most providers. Set `SMTP_SECURE=true` for implicit TLS connections (port 465), which establish TLS immediately without upgrading.
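As an illustration of the two modes (using Python's `smtplib` for the example, not Currents' own mailer):

```python
import smtplib

def smtp_class(secure: bool):
    # SMTP_SECURE=true  -> implicit TLS from the first byte (typically port 465)
    # SMTP_SECURE=false -> plain connection, upgraded via STARTTLS (typically port 587)
    return smtplib.SMTP_SSL if secure else smtplib.SMTP

client_cls = smtp_class(False)  # with SMTP_SECURE=false, a client later calls .starttls()
```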
Common SMTP configurations:
| Provider | Host | Port | Secure |
|---|---|---|---|
| Amazon SES | `email-smtp.us-east-1.amazonaws.com` | 587 | false |
| SendGrid | `smtp.sendgrid.net` | 587 | false |
| Mailgun | `smtp.mailgun.org` | 587 | false |
| Gmail | `smtp.gmail.com` | 587 | false |
See Configuration Reference for all available options.
Start the services:

```shell
docker compose up -d
```
Monitor startup progress:
```shell
docker compose logs -f
```
Once all services are running, the dashboard is available at your configured `APP_BASE_URL` (e.g. `http://localhost:4000`).
Check service health:
```shell
docker compose ps
```
All services should show as “healthy” or “running”.
For production deployments, we recommend setting up a reverse proxy with TLS termination in front of the Currents services. You can either bring your own reverse proxy or use the included Traefik service with the `tls` profile.
Configure your reverse proxy to route:
- `https://currents-app.example.com` → `http://localhost:4000` (API/Dashboard)
- `https://currents-record.example.com` → `http://localhost:1234` (Director)

Update your `.env` to match the external URLs:
```shell
APP_BASE_URL=https://currents-app.example.com
CURRENTS_RECORD_API_URL=https://currents-record.example.com
```
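If you bring your own proxy, the routing might look like this in nginx (a sketch only; server names, certificate paths, and the body-size limit are assumptions to adapt to your deployment):

```nginx
server {
    listen 443 ssl;
    server_name currents-app.example.com;
    ssl_certificate     /etc/nginx/certs/wildcard.crt;
    ssl_certificate_key /etc/nginx/certs/wildcard.key;
    location / {
        proxy_pass http://localhost:4000;   # API/Dashboard
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
server {
    listen 443 ssl;
    server_name currents-record.example.com;
    ssl_certificate     /etc/nginx/certs/wildcard.crt;
    ssl_certificate_key /etc/nginx/certs/wildcard.key;
    location / {
        proxy_pass http://localhost:1234;   # Director (recording endpoint)
        client_max_body_size 100m;          # assumed limit; test artifacts can be large
    }
}
```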
Place your certificate files in `data/traefik/certs/`:

- `wildcard.crt` - fullchain certificate file (server cert + intermediate certs concatenated)
- `wildcard.key` - private key file

Important: `wildcard.crt` must be a fullchain certificate containing your server certificate followed by the intermediate certificate(s). Without the full chain, clients will fail with "unable to verify certificate" errors. You can create it by concatenating:

```shell
cat server.crt intermediate.crt > wildcard.crt
```
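You can also sanity-check that the certificate and private key belong together by comparing their moduli. The block below generates a throwaway self-signed pair purely to demonstrate the check; run the two comparison commands against your real `wildcard.crt` and `wildcard.key` instead:

```shell
# Demo only: create a throwaway RSA key + cert (substitute your real files).
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.crt \
  -days 1 -subj "/CN=example.com" 2>/dev/null

cert_mod=$(openssl x509 -noout -modulus -in demo.crt | openssl md5)
key_mod=$(openssl rsa -noout -modulus -in demo.key | openssl md5)
[ "$cert_mod" = "$key_mod" ] && echo "key matches certificate"
```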
Set the Traefik domains in `.env`:

```shell
TRAEFIK_DOMAIN=example.com
TRAEFIK_API_SUBDOMAIN=currents-app
TRAEFIK_DIRECTOR_SUBDOMAIN=currents-record
```
Start the stack with the `tls` profile enabled:

```shell
docker compose --profile tls up -d
```
```shell
# All services
docker compose logs -f

# Specific service
docker compose logs -f api
```
Stop all services:

```shell
docker compose down
```
Restart a single service:

```shell
docker compose restart api
```
```shell
# Pull latest images
docker compose pull

# Restart with new images
docker compose up -d
```
To regenerate secrets, re-run the setup wizard:

```shell
./scripts/setup.sh
# Select "Y" when asked to regenerate secrets
```
Check logs for errors:
```shell
docker compose logs --tail=50
```
Verify that the `.env` file exists and that all required secrets are populated.
Ensure MongoDB has initialized its replica set. Check logs:
```shell
docker compose logs mongodb
```
The replica set initialization runs automatically on first start.
If ports are already in use, customize them in .env:
```shell
DC_API_PORT=4001
DC_DIRECTOR_PORT=1235
```
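A quick way to see whether the default ports are actually taken before overriding them (an illustrative Python check, not part of the Currents tooling):

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        return s.connect_ex((host, port)) == 0

for port in (4000, 1234):
    print(port, "in use" if port_in_use(port) else "free")
```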
If you’re using Podman and see permission errors like:
```
mongodb-1 | chown: changing ownership of '/data/db': Permission denied
mongodb-1 | bash: /data/db/replica.key: Permission denied
```
This is due to Podman’s rootless mode and UID mapping. Follow these steps:
Create the data directories manually before starting services:
```shell
mkdir -p data/mongodb data/redis data/clickhouse data/rustfs data/startup data/traefik/certs data/traefik/config
```
Set ownership to match container UIDs:
For rootless Podman (running as a regular user):
```shell
# MongoDB runs as uid 999
podman unshare chown -R 999:999 data/mongodb

# ClickHouse runs as uid 101
podman unshare chown -R 101:101 data/clickhouse

# Redis runs as uid 999
podman unshare chown -R 999:999 data/redis

# RustFS runs as uid 10001 (if using local object storage)
podman unshare chown -R 10001:10001 data/rustfs

# Scheduler runs as uid 1000
podman unshare chown -R 1000:1000 data/startup

# Traefik runs as root (uid 0) - no chown needed, just create dirs
```
For rootful Podman (running as root or with sudo):
```shell
# MongoDB runs as uid 999
sudo chown -R 999:999 data/mongodb

# ClickHouse runs as uid 101
sudo chown -R 101:101 data/clickhouse

# Redis runs as uid 999
sudo chown -R 999:999 data/redis

# RustFS runs as uid 10001 (if using local object storage)
sudo chown -R 10001:10001 data/rustfs

# Scheduler runs as uid 1000
sudo chown -R 1000:1000 data/startup

# Traefik runs as root (uid 0) - no chown needed
```
Tip: To check whether you're running rootless Podman, run `podman info --format '{{.Host.Security.Rootless}}'`. If it returns `true`, use `podman unshare`; otherwise use `sudo chown`.
If SELinux is enabled, you need to relabel the data directories so containers can access them:
```shell
# Relabel data directories for container access
sudo chcon -Rt svirt_sandbox_file_t data/
```
Or for each directory individually:
```shell
sudo chcon -Rt svirt_sandbox_file_t data/mongodb
sudo chcon -Rt svirt_sandbox_file_t data/redis
sudo chcon -Rt svirt_sandbox_file_t data/clickhouse
sudo chcon -Rt svirt_sandbox_file_t data/rustfs
sudo chcon -Rt svirt_sandbox_file_t data/startup
sudo chcon -Rt svirt_sandbox_file_t data/traefik
```
To verify the labels are set correctly:
```shell
ls -lZ data/
```
You should see svirt_sandbox_file_t in the output.
Named volumes avoid permission issues entirely since Podman manages them:
```shell
# Create named volumes
podman volume create mongodb-data
podman volume create redis-data
podman volume create clickhouse-data
podman volume create rustfs-data
podman volume create scheduler-startup
podman volume create traefik-certs
podman volume create traefik-config

# Configure in .env
DC_MONGODB_VOLUME=mongodb-data
DC_REDIS_VOLUME=redis-data
DC_CLICKHOUSE_VOLUME=clickhouse-data
DC_RUSTFS_VOLUME=rustfs-data
DC_SCHEDULER_STARTUP_VOLUME=scheduler-startup
DC_TRAEFIK_CERTS_DIR=traefik-certs
DC_TRAEFIK_CONFIG_DIR=traefik-config
```
Note: Named volumes are stored in Podman's volume directory (typically `~/.local/share/containers/storage/volumes/`) rather than the current directory.
By default, application ports (API, Director) bind to all interfaces while database ports bind to localhost only. You can customize this behavior using DC_*_PORT variables.
To restrict a service to localhost only (not accessible from other machines):
```shell
# Bind API to localhost only
DC_API_PORT=127.0.0.1:4000

# Bind Director to localhost only
DC_DIRECTOR_PORT=127.0.0.1:1234
```
To expose a database service to all interfaces (use with caution):
```shell
# Expose MongoDB to all interfaces
DC_MONGODB_PORT=27017

# Expose Redis to all interfaces
DC_REDIS_PORT=6379
```
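The binding format is the standard Docker Compose one: a bare port publishes on all interfaces, while a HOST:PORT prefix restricts the bind address. A small sketch of how such a value is interpreted (illustrative only; Compose handles this itself):

```python
def parse_port_binding(value: str, default_host: str = "0.0.0.0"):
    """Interpret a DC_*_PORT value as (bind_address, port)."""
    host, _, port = value.rpartition(":")
    return (host or default_host, int(port))

print(parse_port_binding("4000"))             # ('0.0.0.0', 4000)
print(parse_port_binding("127.0.0.1:27017"))  # ('127.0.0.1', 27017)
```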
| Variable | Default | Description |
|---|---|---|
| `DC_API_PORT` | `4000` | Dashboard/API (all interfaces) |
| `DC_DIRECTOR_PORT` | `1234` | Director API (all interfaces) |
| `DC_MONGODB_PORT` | `127.0.0.1:27017` | MongoDB (localhost only) |
| `DC_REDIS_PORT` | `127.0.0.1:6379` | Redis (localhost only) |
| `DC_CLICKHOUSE_HTTP_PORT` | `127.0.0.1:8123` | ClickHouse HTTP (localhost only) |
| `DC_CLICKHOUSE_TCP_PORT` | `127.0.0.1:9123` | ClickHouse TCP (localhost only) |
| `DC_RUSTFS_S3_PORT` | `9000` | RustFS S3 API (all interfaces) |
| `DC_RUSTFS_CONSOLE_PORT` | `9001` | RustFS Console (all interfaces) |
Security Note: Database ports default to localhost-only binding to prevent unintended external access. Only expose them to all interfaces if you have proper firewall rules in place.
By default, data is stored in local directories under ./data/. You can customize volume paths or use named Docker volumes for more advanced storage configurations.
To store data in different directories:
```shell
# Store MongoDB data on a separate disk
DC_MONGODB_VOLUME=/mnt/ssd/mongodb

# Store ClickHouse data on high-performance storage
DC_CLICKHOUSE_VOLUME=/mnt/nvme/clickhouse
```
For advanced volume management (encryption, network storage, custom drivers), create Docker volumes first and reference them by name:
```shell
# Create volumes with custom options
docker volume create --driver local \
  --opt type=none \
  --opt o=bind \
  --opt device=/mnt/encrypted/mongodb \
  mongodb-data

docker volume create --driver local \
  --opt type=none \
  --opt o=bind \
  --opt device=/mnt/encrypted/clickhouse \
  clickhouse-data
```
Then reference them in .env:
```shell
DC_MONGODB_VOLUME=mongodb-data
DC_CLICKHOUSE_VOLUME=clickhouse-data
```
| Variable | Default | Description |
|---|---|---|
| `DC_REDIS_VOLUME` | `./data/redis` | Redis data storage |
| `DC_MONGODB_VOLUME` | `./data/mongodb` | MongoDB data storage |
| `DC_CLICKHOUSE_VOLUME` | `./data/clickhouse` | ClickHouse data storage |
| `DC_RUSTFS_VOLUME` | `./data/rustfs` | RustFS object storage |
| `DC_SCHEDULER_STARTUP_VOLUME` | `./data/startup` | Scheduler startup state |
Tip: Named Docker volumes are useful when you need encryption, network-attached storage, or custom volume drivers that aren’t possible with bind mounts.