chore: General catchup - service updates and cleanup

Updated service configurations, added new services, removed deprecated ones, and improved gitignore patterns for better repository hygiene.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

parent: 85239ff11b
commit: 3bf1575ca8
103 changed files with 4535 additions and 15360 deletions
.gitignore (vendored): 58 changes

```diff
@@ -30,6 +30,53 @@
 **/config/
 !**/config/*.example
 !**/config/.gitkeep
+**/config.bak/
+**/db/
+**/postgres/
+**/library/
+**/letsencrypt/
+
+# Runtime directories
+**/app/
+!**/app.yaml
+!**/app.json
+**/appdata/
+**/cache/
+**/downloads/
+**/uploads/
+**/output/
+**/backup/
+**/backups/
+**/incomplete/
+**/media/
+!compose/media/
+!compose/media/**/
+**/tmp/
+**/temp/
+
+# Media files
+**/*.flac
+**/*.mp3
+**/*.mp4
+**/*.mkv
+**/*.avi
+**/*.m4a
+**/*.wav
+**/*.ogg
+
+# Database files
+**/*.sqlite
+**/*.sqlite3
+**/*.db
+!**/*.db.example
+
+# Certificate files
+**/*.pem
+**/*.key
+**/*.crt
+**/*.cert
+!**/*.example.pem
+!**/*.example.key
 
 # Logs
 **/logs/
@@ -50,3 +97,14 @@ Thumbs.db
 # Temporary files
 *.tmp
 *.temp
+
+compose/media/automation/dispatcharr/data/
+compose/media/automation/slskd/app/data/
+compose/media/automation/profilarr/config/db/
+compose/media/automation/soularr/data/
+compose/media/frontend/immich/postgres/
+compose/services/vikunja/db/
+**/config/
+!**/config/*.example
+!**/config/.gitkeep
+*.backup
+*.bak
```
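The `!compose/media/` / `!compose/media/**/` pair above works around a gitignore subtlety: git never re-includes a file whose parent directory is still excluded, so the directory itself must be negated before anything inside it can be. A minimal sketch in a throwaway repo (file names are invented for the demo):

```shell
# Throwaway repo demonstrating directory re-inclusion (requires git).
repo=$(mktemp -d) && cd "$repo" && git init -q
printf '%s\n' '**/media/' '!compose/media/' '!compose/media/**/' > .gitignore
mkdir -p compose/media other/media
touch compose/media/keep.yaml other/media/skip.mp4
git add -A
git ls-files   # tracks compose/media/keep.yaml; other/media/skip.mp4 stays ignored
```

Without the `!compose/media/` line, `**/media/` would exclude the directory outright and no negation on the files inside could bring them back.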
AGENTS.md (new file): 870 lines added
# Homelab Service Setup Guide for AI Agents

This document provides patterns, conventions, and best practices for setting up services in this homelab environment. Follow these guidelines when creating new services or modifying existing ones.

## Repository Structure

```
homelab/
├── compose/
│   ├── core/            # Infrastructure services (Traefik, Authelia, LLDAP)
│   │   ├── traefik/
│   │   ├── authelia/
│   │   └── lldap/
│   ├── services/        # User-facing applications
│   │   └── service-name/
│   │       ├── compose.yaml
│   │       ├── .env
│   │       ├── .gitignore
│   │       ├── README.md
│   │       └── QUICKSTART.md
│   ├── media/           # Media-related services
│   │   ├── frontend/    # Media viewers (Jellyfin, Immich)
│   │   └── automation/  # Media management (*arr stack)
│   └── monitoring/      # Monitoring and logging
```

## Core Principles

### 1. Domain Convention

- **Primary domain:** `fig.systems`
- **Pattern:** `service.fig.systems`
- **Examples:**
  - `matrix.fig.systems` - Matrix server
  - `auth.fig.systems` - Authelia
  - `books.fig.systems` - BookLore
  - `ai.fig.systems` - Open WebUI

### 2. Storage Conventions

**Media Storage:** `/mnt/media/`
- `/mnt/media/books/` - Book library
- `/mnt/media/movies/` - Movie library
- `/mnt/media/tv/` - TV shows
- `/mnt/media/photos/` - Photo library
- `/mnt/media/music/` - Music library

**Service Data:** `/mnt/media/service-name/`
```bash
# Example: Matrix storage structure
/mnt/media/matrix/
├── synapse/
│   ├── data/    # Configuration and database
│   └── media/   # Uploaded media files
├── postgres/    # Database files
└── bridges/     # Bridge configurations
    ├── telegram/
    ├── whatsapp/
    └── googlechat/
```

**Always create subdirectories for:**
- Configuration files
- Database data
- User uploads/media
- Logs (if persistent)
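The Matrix layout shown above can be bootstrapped in one command; a sketch (leave `BASE` unset for a dry run into a temp dir, set `BASE=/mnt/media/matrix` on the real host):

```shell
# Create the standard per-service storage layout in one go.
BASE="${BASE:-$(mktemp -d)/matrix}"   # on the host: BASE=/mnt/media/matrix
mkdir -p "$BASE/synapse/data" "$BASE/synapse/media" \
         "$BASE/postgres" \
         "$BASE/bridges/telegram" "$BASE/bridges/whatsapp" "$BASE/bridges/googlechat"
find "$BASE" -type d | sort   # verify the tree before starting containers
```

Creating the directories up front (rather than letting docker create them as root on first mount) keeps ownership consistent with `PUID`/`PGID`.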
### 3. Network Architecture

**External Network:** `homelab`
- All services connect to this for Traefik routing
- Created externally, referenced as `external: true`

**Internal Networks:** `service-internal`
- For multi-container service communication
- Example: `matrix-internal`, `booklore-internal`
- Use `driver: bridge`

```yaml
networks:
  homelab:
    external: true
  service-internal:
    driver: bridge
```
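One thing Compose will not do for an `external: true` network is create it: `homelab` has to exist before the first `docker compose up`. A sketch of the one-time setup (guarded so the snippet is a no-op on machines without docker):

```shell
# Create the shared routing network once; the 'inspect' check makes this idempotent.
if command -v docker >/dev/null 2>&1; then
  docker network inspect homelab >/dev/null 2>&1 \
    || docker network create homelab
  NET_STATUS=ready
else
  NET_STATUS=skipped   # docker not installed in this environment
fi
echo "homelab network: $NET_STATUS"
```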
## Service Setup Pattern

### Directory Structure

Every service should have:
```
compose/services/service-name/
├── compose.yaml     # Docker Compose configuration
├── .env             # Environment variables and secrets
├── .gitignore       # Ignore data directories and secrets
├── README.md        # Complete documentation
├── QUICKSTART.md    # 5-step quick start guide
└── config-files/    # Service-specific configs (optional)
```

### Required Files

#### 1. compose.yaml

**Basic template:**
```yaml
services:
  service-name:
    image: vendor/service:latest
    container_name: service-name
    environment:
      - TZ=${TZ}
      - PUID=${PUID}
      - PGID=${PGID}
      # Service-specific vars
    volumes:
      - /mnt/media/service-name:/data
    restart: unless-stopped
    networks:
      - homelab
    labels:
      # Traefik routing
      traefik.enable: true
      traefik.docker.network: homelab

      # HTTP Router
      traefik.http.routers.service-name.rule: Host(`service.fig.systems`)
      traefik.http.routers.service-name.entrypoints: websecure
      traefik.http.routers.service-name.tls.certresolver: letsencrypt
      traefik.http.services.service-name.loadbalancer.server.port: 8080

      # Homarr Discovery
      homarr.name: Service Name
      homarr.group: Services
      homarr.icon: mdi:icon-name

networks:
  homelab:
    external: true
```

**With database:**
```yaml
services:
  app:
    # ... app config
    depends_on:
      database:
        condition: service_healthy
    networks:
      - homelab
      - service-internal

  database:
    image: postgres:16-alpine  # or mariadb, redis, etc.
    container_name: service-database
    environment:
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: ${DB_NAME}
    volumes:
      - /mnt/media/service-name/db:/var/lib/postgresql/data
    restart: unless-stopped
    networks:
      - service-internal
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER}"]
      interval: 10s
      timeout: 5s
      retries: 5

networks:
  homelab:
    external: true
  service-internal:
    driver: bridge
```

#### 2. .env File

**Standard variables:**
```bash
# Domain Configuration
DOMAIN=fig.systems
SERVICE_DOMAIN=service.fig.systems
TRAEFIK_HOST=service.fig.systems

# System
TZ=America/Los_Angeles
PUID=1000
PGID=1000

# Database (if applicable)
DB_USER=service
DB_PASSWORD=<generated-password>
DB_NAME=service

# SMTP Configuration (Mailgun)
SMTP_HOST=smtp.mailgun.org
SMTP_PORT=587
SMTP_USER=noreply@fig.systems
SMTP_PASSWORD=<mailgun-smtp-password>
SMTP_FROM=Service Name <noreply@fig.systems>
# Optional SMTP settings
SMTP_TLS=true
SMTP_STARTTLS=true

# Service-specific secrets
SERVICE_SECRET_KEY=<generated-secret>
```

**Generate secrets:**
```bash
# Random hex (64 chars)
openssl rand -hex 32

# Base64 (32 bytes)
openssl rand -base64 32

# Alphanumeric (32 chars) - over-generate, strip '/', '+', '=', newlines, then trim
openssl rand -base64 48 | tr -d '/+=\n' | head -c 32
```
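A quick sanity check on secret lengths (the alphanumeric recipe over-generates before trimming, so stripping the base64 punctuation still leaves at least 32 characters):

```shell
# 32 random bytes hex-encode to exactly 64 characters.
HEX=$(openssl rand -hex 32)
# Base64-encode extra bytes, drop '/', '+', '=' and newlines, trim to 32.
ALNUM=$(openssl rand -base64 48 | tr -d '/+=\n' | head -c 32)
echo "hex length: ${#HEX}, alnum length: ${#ALNUM}"
```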
#### 3. .gitignore

**Standard pattern:**
```gitignore
# Service data (stored in /mnt/media/)
data/
config/
db/
logs/

# Environment secrets
.env

# Backup files
*.bak
*.backup
```

#### 4. README.md

**Structure:**
```markdown
# Service Name - Brief Description

One-paragraph overview of what the service does.

## Features

- ✅ Feature 1
- ✅ Feature 2
- ✅ Feature 3

## Access

**URL:** https://service.fig.systems
**Authentication:** [Authelia SSO | None | Basic Auth]

## Quick Start

### Deploy
\`\`\`bash
cd /home/eduardo_figueroa/homelab/compose/services/service-name
docker compose up -d
\`\`\`

### First-Time Setup
1. Step 1
2. Step 2
3. Step 3

## Configuration

### Environment Variables
Explain key .env variables

### Storage Locations
- `/mnt/media/service-name/data` - Application data
- `/mnt/media/service-name/uploads` - User uploads

## Usage Guide

Detailed usage instructions...

## Troubleshooting

Common issues and solutions...

## Maintenance

### Backup
Important directories to backup...

### Update
\`\`\`bash
docker compose pull
docker compose up -d
\`\`\`

## Links
- Documentation: https://...
- GitHub: https://...
```
#### 5. QUICKSTART.md

**Fast 5-step guide:**
```markdown
# Service Name - Quick Start

## Step 1: Deploy
\`\`\`bash
cd /path/to/service
docker compose up -d
\`\`\`

## Step 2: Access
Open https://service.fig.systems

## Step 3: Initial Setup
Quick setup steps...

## Step 4: Test
Verification steps...

## Common Commands
\`\`\`bash
# View logs
docker compose logs -f

# Restart
docker compose restart

# Stop
docker compose down
\`\`\`
```
## Traefik Integration

### Basic HTTP Routing

```yaml
labels:
  traefik.enable: true
  traefik.docker.network: homelab

  # Router
  traefik.http.routers.service.rule: Host(`service.fig.systems`)
  traefik.http.routers.service.entrypoints: websecure
  traefik.http.routers.service.tls.certresolver: letsencrypt

  # Service (port)
  traefik.http.services.service.loadbalancer.server.port: 8080
```

### With Custom Headers

```yaml
labels:
  # ... basic routing ...

  # Headers middleware
  traefik.http.middlewares.service-headers.headers.customrequestheaders.X-Forwarded-Proto: https
  traefik.http.middlewares.service-headers.headers.customresponseheaders.X-Frame-Options: SAMEORIGIN

  # Apply middleware
  traefik.http.routers.service.middlewares: service-headers
```

### With Local-Only Access

```yaml
labels:
  # ... basic routing ...

  # Apply local-only middleware (defined in Traefik)
  traefik.http.routers.service.middlewares: local-only
```

### Large Upload Support

```yaml
labels:
  # ... basic routing ...

  # Buffering middleware
  traefik.http.middlewares.service-buffering.buffering.maxRequestBodyBytes: 268435456
  traefik.http.middlewares.service-buffering.buffering.memRequestBodyBytes: 268435456
  traefik.http.middlewares.service-buffering.buffering.retryExpression: IsNetworkError() && Attempts() < 3

  # Apply middleware
  traefik.http.routers.service.middlewares: service-buffering
```
## Authelia OIDC Integration

### 1. Generate Client Secret

```bash
# Generate plain secret
openssl rand -base64 32

# Hash for Authelia
docker exec authelia authelia crypto hash generate pbkdf2 --password 'your-secret-here'
```

### 2. Add Client to Authelia

Edit `/home/eduardo_figueroa/homelab/compose/core/authelia/config/configuration.yml`:

```yaml
identity_providers:
  oidc:
    clients:
      # Your Service
      - client_id: service-name
        client_name: Service Display Name
        client_secret: '$pbkdf2-sha512$310000$...'  # hashed secret
        authorization_policy: two_factor
        redirect_uris:
          - https://service.fig.systems/oauth/callback
        scopes:
          - openid
          - profile
          - email
        grant_types:
          - authorization_code
        response_types:
          - code
```

**For public clients (PKCE):**
```yaml
- client_id: service-name
  client_name: Service Name
  public: true  # No client_secret needed
  authorization_policy: two_factor
  require_pkce: true
  pkce_challenge_method: S256
  redirect_uris:
    - https://service.fig.systems/oauth/callback
  scopes:
    - openid
    - profile
    - email
    - offline_access  # For refresh tokens
  grant_types:
    - authorization_code
    - refresh_token
  response_types:
    - code
```

### 3. Configure Service

**Standard OIDC configuration:**
```yaml
environment:
  OIDC_ENABLED: "true"
  OIDC_CLIENT_ID: "service-name"
  OIDC_CLIENT_SECRET: "plain-secret-here"
  OIDC_ISSUER: "https://auth.fig.systems"
  OIDC_AUTHORIZATION_ENDPOINT: "https://auth.fig.systems/api/oidc/authorization"
  OIDC_TOKEN_ENDPOINT: "https://auth.fig.systems/api/oidc/token"
  OIDC_USERINFO_ENDPOINT: "https://auth.fig.systems/api/oidc/userinfo"
  OIDC_JWKS_URI: "https://auth.fig.systems/jwks.json"
```

### 4. Restart Services

```bash
# Restart Authelia (from the repository root)
cd compose/core/authelia
docker compose restart

# Start your service
cd ../../services/service-name
docker compose up -d
```
## SMTP/Email Configuration

### Mailgun SMTP

**Standard Mailgun configuration for all services:**

```bash
# In .env file
SMTP_HOST=smtp.mailgun.org
SMTP_PORT=587
SMTP_USER=noreply@fig.systems
SMTP_PASSWORD=<your-mailgun-smtp-password>
SMTP_FROM=Service Name <noreply@fig.systems>
SMTP_TLS=true
SMTP_STARTTLS=true
```

**In compose.yaml:**
```yaml
environment:
  # SMTP Settings
  SMTP_HOST: ${SMTP_HOST}
  SMTP_PORT: ${SMTP_PORT}
  SMTP_USER: ${SMTP_USER}
  SMTP_PASSWORD: ${SMTP_PASSWORD}
  SMTP_FROM: ${SMTP_FROM}
  # Some services may use different variable names:
  # EMAIL_HOST: ${SMTP_HOST}
  # EMAIL_PORT: ${SMTP_PORT}
  # EMAIL_USER: ${SMTP_USER}
  # EMAIL_PASS: ${SMTP_PASSWORD}
  # EMAIL_FROM: ${SMTP_FROM}
```

**Common SMTP variable name variations:**

Different services use different environment variable names for SMTP configuration. Check the service documentation and use the appropriate format:

| Common Name | Alternative Names |
|-------------|-------------------|
| SMTP_HOST | EMAIL_HOST, MAIL_HOST, MAIL_SERVER |
| SMTP_PORT | EMAIL_PORT, MAIL_PORT |
| SMTP_USER | EMAIL_USER, MAIL_USER, SMTP_USERNAME, EMAIL_USERNAME |
| SMTP_PASSWORD | EMAIL_PASSWORD, EMAIL_PASS, MAIL_PASSWORD, SMTP_PASS |
| SMTP_FROM | EMAIL_FROM, MAIL_FROM, FROM_EMAIL, DEFAULT_FROM_EMAIL |
| SMTP_TLS | EMAIL_USE_TLS, MAIL_USE_TLS, SMTP_SECURE |
| SMTP_STARTTLS | EMAIL_USE_STARTTLS, MAIL_STARTTLS |

**Getting Mailgun SMTP credentials:**

1. Log into the Mailgun dashboard: https://app.mailgun.com
2. Navigate to **Sending → Domain Settings → SMTP credentials**
3. Use the existing `noreply@fig.systems` user or create a new SMTP user
4. Copy the SMTP password and add it to your service's `.env` file

**Testing SMTP configuration:**

```bash
# Using swaks (SMTP test tool)
swaks --to test@example.com \
  --from noreply@fig.systems \
  --server smtp.mailgun.org:587 \
  --auth LOGIN \
  --auth-user noreply@fig.systems \
  --auth-password 'your-password' \
  --tls
```
## Database Patterns

### PostgreSQL

```yaml
postgres:
  image: postgres:16-alpine
  container_name: service-postgres
  environment:
    POSTGRES_USER: ${DB_USER}
    POSTGRES_PASSWORD: ${DB_PASSWORD}
    POSTGRES_DB: ${DB_NAME}
    POSTGRES_INITDB_ARGS: "--encoding=UTF-8 --lc-collate=C --lc-ctype=C"
  volumes:
    - /mnt/media/service-name/postgres:/var/lib/postgresql/data
  restart: unless-stopped
  networks:
    - service-internal
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U ${DB_USER}"]
    interval: 10s
    timeout: 5s
    retries: 5
```

### MariaDB

```yaml
mariadb:
  image: lscr.io/linuxserver/mariadb:latest
  container_name: service-mariadb
  environment:
    - PUID=${PUID}
    - PGID=${PGID}
    - TZ=${TZ}
    - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
    - MYSQL_DATABASE=${MYSQL_DATABASE}
    - MYSQL_USER=${MYSQL_USER}
    - MYSQL_PASSWORD=${MYSQL_PASSWORD}
  volumes:
    - /mnt/media/service-name/mariadb:/config
  restart: unless-stopped
  networks:
    - service-internal
  healthcheck:
    test: ["CMD", "mariadb-admin", "ping", "-h", "localhost"]
    interval: 5s
    timeout: 5s
    retries: 10
```

### Redis

```yaml
redis:
  image: redis:alpine
  container_name: service-redis
  command: redis-server --save 60 1 --loglevel warning
  volumes:
    - /mnt/media/service-name/redis:/data
  restart: unless-stopped
  networks:
    - service-internal
  healthcheck:
    test: ["CMD", "redis-cli", "ping"]
    interval: 10s
    timeout: 5s
    retries: 5
```
## Homarr Integration

**Add discovery labels to your service:**
```yaml
labels:
  homarr.name: Display Name
  homarr.group: Services  # or Media, Monitoring, AI, etc.
  homarr.icon: mdi:icon-name  # Material Design Icons
```

**Common groups:**
- `Services` - General applications
- `Media` - Media-related (Jellyfin, Immich)
- `AI` - AI/LLM services
- `Monitoring` - Monitoring tools
- `Automation` - *arr stack

**Find icons:** https://pictogrammers.com/library/mdi/
## Security Best Practices

### 1. Never Commit Secrets

**Always in .gitignore:**
- `.env` files
- Database directories
- Configuration files with credentials
- SSL certificates
- API keys

### 2. Use Authelia for External Access

Services exposed to the internet should use Authelia SSO with 2FA.

### 3. Local-Only Services

For sensitive services (backups, code editors), use the `local-only` middleware:

```yaml
traefik.http.routers.service.middlewares: local-only
```

### 4. Least Privilege

- Use non-root users in containers (`PUID`/`PGID`)
- Limit network access (internal networks)
- Read-only mounts where possible: `./config:/config:ro`
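A fragment combining the three least-privilege points above (the service name, paths and network membership are illustrative):

```yaml
services:
  app:
    image: vendor/service:latest
    environment:
      - PUID=${PUID}      # run the app as an unprivileged user
      - PGID=${PGID}
    volumes:
      - ./config:/config:ro             # configuration mounted read-only
      - /mnt/media/service-name:/data   # only the data path is writable
    networks:
      - service-internal                # keep off 'homelab' unless Traefik routes it
```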
### 5. Secrets Generation

**Always generate unique secrets:**
```bash
# For each service
openssl rand -hex 32  # Different secret each time
```

## Common Patterns

### Multi-Stage Service Setup

**For services requiring initial config generation:**

1. Generate config:
   ```bash
   docker run --rm -v /path:/data image:latest --generate-config
   ```

2. Edit config files

3. Start service:
   ```bash
   docker compose up -d
   ```

### Bridge/Plugin Architecture

**For services with plugins/bridges:**

```yaml
# Main service
main-app:
  # ... config ...
  volumes:
    - /mnt/media/service/data:/data
    - ./registrations:/registrations:ro  # Plugin registrations

# Plugin 1
plugin-1:
  # ... config ...
  volumes:
    - /mnt/media/service/plugins/plugin-1:/data
  depends_on:
    main-app:
      condition: service_started
  networks:
    - service-internal
```

### Health Checks

**Always include health checks for databases:**
```yaml
healthcheck:
  test: ["CMD-SHELL", "command to test health"]
  interval: 10s
  timeout: 5s
  retries: 5
```

**Then use in depends_on:**
```yaml
depends_on:
  database:
    condition: service_healthy
```
## Troubleshooting Checklist

### Service Won't Start

1. Check logs:
   ```bash
   docker compose logs -f service-name
   ```

2. Verify environment variables:
   ```bash
   docker compose config
   ```

3. Check disk space:
   ```bash
   df -h /mnt/media
   ```

4. Verify network exists:
   ```bash
   docker network ls | grep homelab
   ```

### Can't Access via Domain

1. Check Traefik logs:
   ```bash
   docker logs traefik | grep service-name
   ```

2. Verify service is on the homelab network:
   ```bash
   docker inspect service-name | grep -A 10 Networks
   ```

3. Test endpoint directly:
   ```bash
   curl -k https://service.fig.systems
   ```

4. Check DNS resolution:
   ```bash
   nslookup service.fig.systems
   ```

### OIDC Login Issues

1. Verify the client secret matches in both Authelia and the service
2. Check that the redirect URI exactly matches the Authelia config
3. Restart Authelia after config changes
4. Check Authelia logs:
   ```bash
   docker logs authelia | grep oidc
   ```

### Database Connection Issues

1. Verify the database is healthy:
   ```bash
   docker compose ps
   ```

2. Check database logs:
   ```bash
   docker compose logs database
   ```

3. Test connection from the app container:
   ```bash
   docker compose exec app ping database
   ```

4. Verify credentials match in .env and config
## Complete Service Template

See `compose/services/matrix/` for a complete example of:
- ✅ Multi-container setup (app + database + plugins)
- ✅ Authelia OIDC integration
- ✅ Traefik routing
- ✅ Comprehensive documentation
- ✅ Bridge/plugin architecture
- ✅ Health checks and dependencies
- ✅ Proper secret management

## AI Agent Guidelines

When setting up new services:

1. **Always create complete config files in /tmp/** for files requiring sudo access
2. **Follow the directory structure** exactly as shown above
3. **Generate unique secrets** for each service
4. **Create both README.md and QUICKSTART.md**
5. **Use the storage conventions** (/mnt/media/service-name/)
6. **Add Traefik labels** for automatic routing
7. **Include Homarr discovery labels**
8. **Set up health checks** for all databases
9. **Use internal networks** for multi-container communication
10. **Document troubleshooting steps** in README.md

### Files to Always Create in /tmp/

When you cannot write directly:
- Authelia configuration updates
- Traefik configuration changes
- System-level configuration files

**Format:**
```bash
/tmp/service-name-config-file.yml
```

Include clear instructions at the top:
```yaml
# Copy this file to:
# /path/to/actual/location
#
# Then run:
# sudo chmod 644 /path/to/actual/location
# docker compose restart
```

## Resources

- **Traefik:** https://doc.traefik.io/traefik/
- **Authelia:** https://www.authelia.com/
- **Docker Compose:** https://docs.docker.com/compose/
- **Material Design Icons:** https://pictogrammers.com/library/mdi/

---

**Remember:** Consistency is key. Follow these patterns for all services to maintain a clean, predictable, and maintainable homelab infrastructure.
CONTRIBUTING.md (deleted): 264 lines removed
# Contributing Guide

Thank you for your interest in contributing to this homelab configuration! While this is primarily a personal repository, contributions are welcome.

## How to Contribute

### Reporting Issues

- Use the [bug report template](.github/ISSUE_TEMPLATE/bug-report.md) for bugs
- Use the [service request template](.github/ISSUE_TEMPLATE/service-request.md) for new services
- Search existing issues before creating a new one
- Provide as much detail as possible

### Submitting Changes

1. **Fork the repository**
2. **Create a feature branch**
   ```bash
   git checkout -b feature/your-feature-name
   ```
3. **Make your changes** following the guidelines below
4. **Test your changes** locally
5. **Commit with clear messages**
   ```bash
   git commit -m "feat: add new service"
   ```
6. **Push to your fork**
   ```bash
   git push origin feature/your-feature-name
   ```
7. **Open a Pull Request** using the PR template

## Guidelines

### File Naming

- All Docker Compose files must be named `compose.yaml` (not `.yml`)
- Use lowercase with hyphens for service directories (e.g., `calibre-web`)
- Environment files must be named `.env`

### Docker Compose Best Practices

- Use version-pinned images when possible
- Include health checks for databases and critical services
- Use bind mounts for configuration, named volumes for data
- Set proper restart policies (`unless-stopped` or `always`)
- Include resource limits for production services
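The checklist above, as a compose fragment (the image tag, health endpoint and memory limit are placeholders):

```yaml
services:
  app:
    image: vendor/service:1.2.3       # pinned version, not :latest
    restart: unless-stopped           # proper restart policy
    volumes:
      - ./config:/config              # bind mount for configuration
      - app-data:/data                # named volume for data
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 5s
      retries: 3
    mem_limit: 512m                   # resource limit

volumes:
  app-data:
```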
|
|
||||||
|
|
||||||
### Network Configuration

- All services must use the `homelab` network (marked as `external: true`)
- Services with multiple containers should use an internal network
- Example:
  ```yaml
  networks:
    homelab:
      external: true
    service_internal:
      name: service_internal
      driver: bridge
  ```

### Traefik Labels

All web services must include:

```yaml
labels:
  traefik.enable: true
  traefik.http.routers.service.rule: Host(`service.fig.systems`) || Host(`service.edfig.dev`)
  traefik.http.routers.service.entrypoints: websecure
  traefik.http.routers.service.tls.certresolver: letsencrypt
  traefik.http.services.service.loadbalancer.server.port: 8080
  # Optional SSO:
  traefik.http.routers.service.middlewares: tinyauth
```
### Environment Variables

- Use `.env` files for configuration
- Never commit real passwords
- Use the `changeme_*` prefix for placeholder passwords
- Document all required environment variables
- Include comments explaining non-obvious settings
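A hedged example of the `.env` conventions above (the variable names are illustrative, not from any specific service here):

```bash
# Timezone for log timestamps and scheduled jobs
TZ=America/Los_Angeles

# Database credentials
# IMPORTANT: replace the placeholder before deploying
POSTGRES_USER=service
POSTGRES_PASSWORD=changeme_please_set_secure_password

# PUID/PGID: host user/group that should own bind-mounted files
PUID=1000
PGID=1000
```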
### Documentation

- Add the service to the README.md service table
- Include deployment instructions
- Document any special configuration
- Add comments to compose files explaining their purpose
- Include links to official documentation

### Security

- Never commit secrets
- Scan compose files for vulnerabilities
- Use official or well-maintained images
- Enable SSO when appropriate
- Document security considerations

## Code Style

### YAML Style

- 2-space indentation
- No trailing whitespace
- Use `true`/`false` instead of `yes`/`no`
- Quote strings with special characters
- Follow the yamllint rules in `.yamllint.yml`
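A tiny fragment in the house style described above (2-space indent, `true`/`false` booleans, quoting where characters would trip YAML; the label value is illustrative):

```yaml
services:
  example:
    restart: unless-stopped
    environment:
      ENABLE_FEATURE: true    # true/false, not yes/no
    labels:
      traefik.http.routers.example.rule: "Host(`example.fig.systems`)"  # quoted: contains backticks
```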
### Commit Messages

Follow [Conventional Commits](https://www.conventionalcommits.org/):

- `feat:` New feature
- `fix:` Bug fix
- `docs:` Documentation changes
- `refactor:` Code refactoring
- `security:` Security improvements
- `chore:` Maintenance tasks

Examples:

```
feat: add jellyfin media server
fix: correct traefik routing for sonarr
docs: update README with new services
security: update postgres to latest version
```
## Testing

Before submitting a PR:

1. **Validate compose files**
   ```bash
   docker compose -f compose/path/to/compose.yaml config
   ```

2. **Check YAML syntax**
   ```bash
   yamllint compose/
   ```

3. **Test locally**
   ```bash
   docker compose up -d
   docker compose logs
   ```

4. **Check for secrets**
   ```bash
   git diff --cached | grep -i "password\|secret\|token"
   ```

5. **Run pre-commit hooks** (optional)
   ```bash
   pre-commit install
   pre-commit run --all-files
   ```
## Pull Request Process

1. Fill out the PR template completely
2. Ensure all CI checks pass
3. Request review if needed
4. Address review feedback
5. Squash commits if requested
6. Wait for approval and merge

## CI/CD Checks

Your PR will be automatically checked for:

- Docker Compose validation
- YAML linting
- Security scanning
- Secret detection
- Documentation completeness
- Traefik configuration
- Network setup
- File naming conventions

Fix any failures before requesting review.
## Adding a New Service

1. Choose the correct category:
   - `compose/core/` - Infrastructure (Traefik, auth, etc.)
   - `compose/media/` - Media-related services
   - `compose/services/` - Utility services

2. Create the service directory:
   ```bash
   mkdir -p compose/category/service-name
   ```

3. Create `compose.yaml`:
   - Include a documentation header
   - Add Traefik labels
   - Configure networks
   - Set up volumes
   - Add health checks if applicable

4. Create `.env` if needed:
   - Use placeholder passwords
   - Document all variables
   - Include comments

5. Update README.md:
   - Add to the service table
   - Include the URL
   - Document deployment

6. Test the deployment:
   ```bash
   cd compose/category/service-name
   docker compose up -d
   docker compose logs -f
   ```

7. Create a PR with a detailed description
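Putting the steps above together, a hedged skeleton for a new single-container service (the image, port, docs URL, and service name are placeholders to replace):

```yaml
# service-name - one-line description of what it does
# Docs: https://example.com/service-docs (replace with the official docs URL)

services:
  service-name:
    container_name: service-name
    image: vendor/service-name:1.2.3   # pin a version when possible
    restart: unless-stopped

    env_file:
      - .env

    volumes:
      - ./config:/config               # bind mount for configuration

    networks:
      - homelab

    labels:
      traefik.enable: true
      traefik.http.routers.service-name.rule: Host(`service-name.fig.systems`) || Host(`service-name.edfig.dev`)
      traefik.http.routers.service-name.entrypoints: websecure
      traefik.http.routers.service-name.tls.certresolver: letsencrypt
      traefik.http.services.service-name.loadbalancer.server.port: 8080

networks:
  homelab:
    external: true
```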
## Project Structure

```
homelab/
├── .github/
│   ├── workflows/               # CI/CD workflows
│   ├── ISSUE_TEMPLATE/          # Issue templates
│   └── pull_request_template.md
├── compose/
│   ├── core/                    # Infrastructure services
│   ├── media/                   # Media services
│   └── services/                # Utility services
├── README.md                    # Main documentation
├── CONTRIBUTING.md              # This file
├── SECURITY.md                  # Security policy
└── .yamllint.yml                # YAML linting config
```

## Getting Help

- Check existing issues and PRs
- Review the README.md
- Examine similar services for examples
- Ask in PR comments

## License

By contributing, you agree that your contributions will be licensed under the same terms as the repository.

## Code of Conduct

- Be respectful and professional
- Focus on constructive feedback
- Help others learn and improve
- Keep discussions relevant

## Questions?

Open an issue with the question label or comment on an existing PR/issue.

Thank you for contributing! 🎉
383 PR_REVIEW.md
@@ -1,383 +0,0 @@
# Pull Request Review: Homelab GitOps Complete Setup

## 📋 PR Summary

**Branch:** `claude/gitops-home-services-011CUqEzDETA2BqAzYUcXtjt`
**Commits:** 2 main commits
**Files Changed:** 48 files (+2,469 / -300)
**Services Added:** 13 new services + 3 core infrastructure

## ✅ Overall Assessment: **APPROVE with Minor Issues**

This is an excellent, comprehensive implementation of a homelab GitOps setup. The changes demonstrate a strong understanding of Docker best practices, security considerations, and infrastructure-as-code principles.

---

## 🎯 What This PR Does

### Core Infrastructure (NEW)
- ✅ Traefik v3.3 reverse proxy with Let's Encrypt
- ✅ LLDAP lightweight directory server
- ✅ Tinyauth SSO integration with LLDAP backend

### Media Services (13 services)
- ✅ Jellyfin, Jellyseerr, Immich
- ✅ Sonarr, Radarr, SABnzbd, qBittorrent
- ✅ Calibre-web, Booklore, FreshRSS, RSSHub

### Utility Services
- ✅ Linkwarden, Vikunja, LubeLogger, MicroBin, File Browser

### CI/CD Pipeline (NEW)
- ✅ 5 GitHub Actions workflows
- ✅ Security scanning (Gitleaks, Trivy)
- ✅ YAML/Markdown linting
- ✅ Docker Compose validation
- ✅ Documentation checks

---
## 💪 Strengths

### 1. **Excellent Infrastructure Design**
- Proper network isolation (homelab + service-specific internal networks)
- Consistent Traefik labeling across all services
- Dual domain support (fig.systems + edfig.dev)
- SSL/TLS with automatic Let's Encrypt certificate management

### 2. **Security Best Practices**
- ✅ Placeholder passwords using the `changeme_*` format
- ✅ No real secrets committed
- ✅ SSO enabled on appropriate services
- ✅ Read-only media mounts where appropriate
- ✅ Proper PUID/PGID settings

### 3. **Docker Best Practices**
- ✅ Standardized to `compose.yaml` (removed `.yml`)
- ✅ Health checks on database services
- ✅ Proper dependency management (depends_on)
- ✅ Consistent restart policies
- ✅ Container naming conventions

### 4. **Comprehensive Documentation**
- ✅ Detailed README with service table
- ✅ Deployment instructions
- ✅ Security policy (SECURITY.md)
- ✅ Contributing guidelines (CONTRIBUTING.md)
- ✅ Comments in compose files

### 5. **Robust CI/CD**
- ✅ Multi-layered validation
- ✅ Security scanning
- ✅ Documentation verification
- ✅ Auto-labeling
- ✅ PR templates

---
## ⚠️ Issues Found

### 🔴 Critical Issues: 0

### 🟡 High Priority Issues: 1

**1. Nginx Proxy Manager Not Removed/Migrated**
- **File:** `compose/core/nginxproxymanager/compose.yml`
- **Issue:** Template file still exists with a `.yml` extension and no configuration
- **Impact:** Will fail the CI validation workflow
- **Recommendation:**
  ```bash
  # Option 1: Remove if not needed (Traefik replaces it)
  rm -rf compose/core/nginxproxymanager/

  # Option 2: Configure if needed alongside Traefik
  # Move to compose.yaml and configure properly
  ```

### 🟠 Medium Priority Issues: 3

**2. Missing Password Synchronization Documentation**
- **Files:** `compose/core/lldap/.env`, `compose/core/tinyauth/.env`
- **Issue:** The password must match between LLDAP and Tinyauth, but this is not clearly documented
- **Recommendation:** Add a note in both `.env` files:
  ```bash
  # IMPORTANT: This password must match LLDAP_LDAP_USER_PASS in ../lldap/.env
  LDAP_BIND_PASSWORD=changeme_please_set_secure_password
  ```

**3. Vikunja Database Password Duplication**
- **File:** `compose/services/vikunja/compose.yaml`
- **Issue:** The database password is defined in two places (they can get out of sync)
- **Recommendation:** Use a `.env` file for the Vikunja service:
  ```yaml
  env_file: .env
  environment:
    VIKUNJA_DATABASE_PASSWORD: ${POSTGRES_PASSWORD}
  ```

**4. Immich External Photo Library Mounting**
- **File:** `compose/media/frontend/immich/compose.yaml`
- **Issue:** Adds a `/media/photos` mount, but Immich uses `UPLOAD_LOCATION` for primary storage
- **Recommendation:** Document that `/media/photos` is for external library import only

### 🔵 Low Priority / Nice-to-Have: 5

**5. Inconsistent Timezone**
- **Files:** Various compose files
- **Issue:** Some services use `America/Los_Angeles`, others don't specify a timezone
- **Recommendation:** Standardize the timezone across all services or set it via `.env`

**6. Booklore Image May Not Exist**
- **File:** `compose/services/booklore/compose.yaml`
- **Issue:** Uses `ghcr.io/lorebooks/booklore:latest` - verify this image exists
- **Recommendation:** Test image availability before deployment

**7. Port Conflicts Possible**
- **Issue:** Several services expose ports that may conflict:
  - Traefik: 80, 443
  - Jellyfin: 8096, 7359
  - Immich: 2283
  - qBittorrent: 6881
- **Recommendation:** Document port requirements in the README

**8. Missing Resource Limits**
- **Issue:** No CPU/memory limits defined
- **Impact:** Services could consume excessive resources
- **Recommendation:** Add resource limits in production:
  ```yaml
  deploy:
    resources:
      limits:
        cpus: '1.0'
        memory: 1G
  ```

**9. GitHub Actions May Need Secrets**
- **File:** `.github/workflows/security-checks.yml`
- **Issue:** Some workflows assume `GITHUB_TOKEN` is available
- **Recommendation:** Document required GitHub secrets in the README

---
## 📊 Code Quality Metrics

| Metric | Score | Notes |
|--------|-------|-------|
| **Documentation** | ⭐⭐⭐⭐⭐ | Excellent README, SECURITY.md, CONTRIBUTING.md |
| **Security** | ⭐⭐⭐⭐½ | Great practices, minor password sync issue |
| **Consistency** | ⭐⭐⭐⭐⭐ | Uniform structure across all services |
| **Best Practices** | ⭐⭐⭐⭐⭐ | Follows Docker/Compose standards |
| **CI/CD** | ⭐⭐⭐⭐⭐ | Comprehensive validation pipeline |
| **Maintainability** | ⭐⭐⭐⭐⭐ | Well-organized, easy to extend |

---
## 🔍 Detailed Review by Category

### Core Infrastructure

#### Traefik (`compose/core/traefik/compose.yaml`)
✅ **Excellent**
- Proper entrypoint configuration
- HTTP to HTTPS redirect
- Let's Encrypt email configured
- Dashboard with SSO protection
- Log level appropriate for production

**Suggestion:** Consider adding access log retention:
```yaml
- --accesslog.filepath=/var/log/traefik/access.log
- --accesslog.bufferingsize=100
```

#### LLDAP (`compose/core/lldap/compose.yaml`)
✅ **Good**
- Clean configuration
- Proper volume mounts
- Environment variables in `.env`

**Minor Issue:** The base DN is `dc=fig,dc=systems` while the domain is `fig.systems` - this is correct, but document why.

#### Tinyauth (`compose/core/tinyauth/compose.yaml`)
✅ **Good**
- LDAP integration properly configured
- Forward auth middleware defined
- Session management configured

**Issue:** Depends on LLDAP - add `depends_on` if deploying together.

### Media Services

#### Jellyfin ✅ **Excellent**
- Proper media folder mappings
- GPU transcoding option documented
- Traefik labels complete
- SSO middleware commented out (correct for a service with its own auth)

#### Sonarr/Radarr ✅ **Good**
- Download folder mappings correct
- Consistent configuration
- Proper network isolation

**Suggestion:** Add Traefik rate limiting for public endpoints:
```yaml
traefik.http.middlewares.sonarr-ratelimit.ratelimit.average: 10
```

#### Immich ⭐ **Very Good**
- Multi-container setup properly configured
- Internal network for database/redis
- Health checks present
- Machine learning container included

**Question:** Does `/media/photos` need write access? It is currently read-only.

### Utility Services

#### Linkwarden/Vikunja ✅ **Excellent**
- Multi-service stacks well organized
- Database health checks
- Internal networks isolated

#### File Browser ⚠️ **Needs Review**
- Mounts the entire `/media` directory to `/srv`
- This gives access to ALL media folders
- Consider whether this is intentional or a security risk

### CI/CD Pipeline

#### GitHub Actions Workflows ⭐⭐⭐⭐⭐ **Outstanding**
- Comprehensive validation
- Security scanning with multiple tools
- Documentation verification
- Auto-labeling

**One Issue:** `docker-compose-validation.yml` line 30 assumes the `homelab` network exists for validation. This will fail on CI runners.

**Fix:**
```yaml
# Skip network existence validation, only check syntax
if docker compose -f "$file" config --quiet 2>/dev/null; then
```

---
## 🧪 Testing Performed

Based on the implementation, these tests should be performed:

### ✅ Automated Tests (Will Run via CI)
- [x] YAML syntax validation
- [x] Compose file structure
- [x] Secret scanning
- [x] Documentation links

### ⏳ Manual Tests Required
- [ ] Deploy Traefik and verify the dashboard
- [ ] Deploy LLDAP and create a test user
- [ ] Configure Tinyauth with LLDAP
- [ ] Deploy a test service and verify SSO
- [ ] Verify SSL certificate generation
- [ ] Test dual domain access (fig.systems + edfig.dev)
- [ ] Verify media folder permissions (PUID/PGID)
- [ ] Test service interdependencies
- [ ] Verify health checks work
- [ ] Test backup/restore procedures

---

## 📝 Recommendations

### Before Merge:
1. **Fix the nginxproxymanager issue** - Remove it or migrate to compose.yaml
2. **Add password sync documentation** - Clarify the LLDAP <-> Tinyauth password relationship
3. **Test the Booklore image** - Verify the container image exists

### After Merge:
4. Create follow-up issues for:
   - Adding resource limits
   - Implementing a backup strategy
   - Setting up monitoring (Prometheus/Grafana)
   - Creating a deployment automation script
   - Testing disaster recovery

### Documentation Updates:
5. Add a deployment troubleshooting section
6. Document port requirements in the README
7. Add a network topology diagram
8. Create a quick-start guide

---
## 🎯 Action Items

### For PR Author:
- [ ] Remove or fix `compose/core/nginxproxymanager/compose.yml`
- [ ] Add password synchronization notes to `.env` files
- [ ] Verify the Booklore Docker image exists
- [ ] Test at least the core infrastructure deployment locally
- [ ] Update the README with port requirements

### For Reviewers:
- [ ] Verify no secrets in committed files
- [ ] Check Traefik configuration security
- [ ] Review network isolation
- [ ] Validate domain configuration

---

## 💬 Questions for PR Author

1. **Nginx Proxy Manager**: Is this service still needed, or can it be removed since Traefik is the reverse proxy?

2. **Media Folder Permissions**: Have you verified the host will have PUID=1000, PGID=1000 for the media folders?

3. **Backup Strategy**: What's the plan for backing up:
   - The LLDAP user database
   - Service configurations
   - Application databases (Postgres)

4. **Monitoring**: Plans for adding monitoring/alerting (Grafana, Uptime Kuma, etc.)?

5. **Testing**: Have you tested the full deployment flow on a clean system?

---

## 🚀 Deployment Readiness

| Category | Status | Notes |
|----------|--------|-------|
| **Code Quality** | ✅ Ready | Minor issues noted above |
| **Security** | ✅ Ready | Proper secrets management |
| **Documentation** | ✅ Ready | Comprehensive docs provided |
| **Testing** | ⚠️ Partial | Needs manual deployment testing |
| **CI/CD** | ✅ Ready | Workflows will validate future changes |

---

## 🎉 Conclusion

This is an **excellent PR** that demonstrates:
- Strong understanding of Docker/Compose best practices
- Thoughtful security considerations
- Comprehensive documentation
- A robust CI/CD pipeline

The issues found are minor and easily addressable. The codebase is well-structured and maintainable.

**Recommendation: APPROVE** after fixing the nginxproxymanager issue.

---

## 📚 Additional Resources

For future enhancements, consider:
- [Awesome Selfhosted](https://github.com/awesome-selfhosted/awesome-selfhosted)
- [Docker Security Best Practices](https://cheatsheetseries.owasp.org/cheatsheets/Docker_Security_Cheat_Sheet.html)
- [Traefik Best Practices](https://doc.traefik.io/traefik/getting-started/quick-start/)

---

**Review Date:** 2025-11-05
**Reviewer:** Claude (Automated Code Review)
**Status:** ✅ **APPROVED WITH CONDITIONS**
144 SECURITY.md
@@ -1,144 +0,0 @@
# Security Policy

## Supported Versions

This is a personal homelab configuration repository. The latest commit on `main` is always the supported version.

| Branch | Supported |
| ------ | ------------------ |
| main | :white_check_mark: |
| other | :x: |

## Security Considerations

### Secrets Management

**DO NOT commit secrets to this repository!**

- All passwords in `.env` files should use placeholder values (e.g., `changeme_*`)
- Real passwords should only be set in your local deployment
- Use environment variables or Docker secrets for sensitive data
- Never commit files containing real credentials
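A hedged sketch of the Docker secrets approach mentioned above (the `_FILE` convention is supported by the official postgres image; names and paths are illustrative):

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password  # read password from the secret file
    secrets:
      - db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt   # keep this file out of git (.gitignore)
```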
### Container Security

- All container images are scanned for vulnerabilities via GitHub Actions
- HIGH and CRITICAL vulnerabilities are reported in security scans
- Keep images up to date by pulling the latest versions regularly
- Review security scan results before deploying

### Network Security

- All services are behind the Traefik reverse proxy
- SSL/TLS is enforced via Let's Encrypt
- Internal services use isolated Docker networks
- SSO is enabled on most services via Tinyauth

### Authentication

- LLDAP provides centralized user management
- Tinyauth handles SSO authentication
- Services with built-in authentication are documented in the README
- Change all default passwords before deployment

## Reporting a Vulnerability

If you discover a security vulnerability in this configuration:

1. **DO NOT** open a public issue
2. Contact the repository owner directly via GitHub private message
3. Include:
   - A description of the vulnerability
   - Steps to reproduce
   - Potential impact
   - A suggested fix (if any)

### What to Report

- Exposed secrets or credentials
- Insecure configurations
- Vulnerable container images (not already detected by CI)
- Authentication bypasses
- Network security issues

### What NOT to Report

- Issues with third-party services (report those to their maintainers)
- Theoretical vulnerabilities without a proof of concept
- Social engineering attempts
## Security Best Practices

### Before Deployment

1. **Change all passwords** in `.env` files
2. **Review** all service configurations
3. **Update** container images to the latest versions
4. **Configure** the firewall to only allow ports 80/443
5. **Enable** automatic security updates on the host OS

### After Deployment

1. **Monitor** logs regularly for suspicious activity
2. **Update** services monthly (at minimum)
3. **Backup** data regularly
4. **Review** access logs
5. **Test** disaster recovery procedures

### Network Hardening

- Use a firewall (ufw, iptables, etc.)
- Only expose ports 80 and 443 to the internet
- Consider using a VPN for administrative access
- Enable fail2ban or similar intrusion prevention
- Use strong DNS providers with DNSSEC
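As a sketch, the firewall posture above with `ufw` (assumes SSH is reachable over a VPN or LAN; adjust the rules before enabling so you don't lock yourself out):

```bash
# Default-deny inbound, allow outbound
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Only HTTP/HTTPS from the internet (Traefik terminates TLS)
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp

# Example: SSH only from the LAN (adjust the subnet to your network)
sudo ufw allow from 10.0.0.0/16 to any port 22 proto tcp

sudo ufw enable
sudo ufw status verbose
```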
### Container Hardening

- Run containers as non-root when possible
- Use read-only filesystems where applicable
- Limit container resources (CPU, memory)
- Enable security options (no-new-privileges, etc.)
- Regularly scan for vulnerabilities
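A compose fragment illustrating those hardening options (whether `read_only` and a non-root `user` work depends on the specific image; treat this as a starting point, not a drop-in):

```yaml
services:
  example:
    image: vendor/example:1.2.3
    user: "1000:1000"                  # non-root, if the image supports it
    read_only: true                    # read-only root filesystem
    tmpfs:
      - /tmp                           # writable scratch space for a read-only container
    security_opt:
      - no-new-privileges:true         # block setuid privilege escalation
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
```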
## Automated Security Scanning

This repository includes automated security scanning:

- **Gitleaks**: Detects secrets in commits
- **Trivy**: Scans container images for vulnerabilities
- **YAML Linting**: Ensures proper configuration
- **Dependency Review**: Checks for vulnerable dependencies

Review GitHub Actions results before merging PRs.

## Compliance

This is a personal homelab configuration and does not claim compliance with any specific security standards. However, it follows general security best practices:

- Principle of least privilege
- Defense in depth
- Secure by default
- Regular updates and patching

## External Dependencies

The security of this setup depends on:

- Docker and Docker Compose security
- Container image maintainers
- Traefik security
- LLDAP security
- Host OS security

Always keep these dependencies up to date.

## Disclaimer

This configuration is provided "as is" without warranty. Use at your own risk. The maintainer is not responsible for any security incidents resulting from the use of this configuration.

## Additional Resources

- [Docker Security Best Practices](https://docs.docker.com/engine/security/)
- [Traefik Security Documentation](https://doc.traefik.io/traefik/https/overview/)
- [OWASP Container Security](https://cheatsheetseries.owasp.org/cheatsheets/Docker_Security_Cheat_Sheet.html)
54 compose/core/authelia/compose.yaml (Normal file)
@@ -0,0 +1,54 @@
# Authelia - Single Sign-On & Two-Factor Authentication
# Docs: https://www.authelia.com/

services:
  authelia:
    container_name: authelia
    image: authelia/authelia:latest
    restart: unless-stopped

    env_file:
      - .env

    volumes:
      - ./config:/config

    networks:
      - homelab

    labels:
      # Traefik
      traefik.enable: true
      traefik.docker.network: homelab

      # Main Authelia portal
      traefik.http.routers.authelia.rule: Host(`auth.fig.systems`)
      traefik.http.routers.authelia.entrypoints: websecure
      traefik.http.routers.authelia.tls.certresolver: letsencrypt
      traefik.http.services.authelia.loadbalancer.server.port: 9091

      # Forward Auth Middleware (for services without native OIDC)
      traefik.http.middlewares.authelia.forwardAuth.address: http://authelia:9091/api/verify?rd=https%3A%2F%2Fauth.fig.systems%2F
      traefik.http.middlewares.authelia.forwardAuth.trustForwardHeader: true
      traefik.http.middlewares.authelia.forwardAuth.authResponseHeaders: Remote-User,Remote-Groups,Remote-Name,Remote-Email

  redis:
    container_name: authelia-redis
    image: redis:alpine
    restart: unless-stopped

    volumes:
      - redis-data:/data

    networks:
      - homelab

    command: redis-server --save 60 1 --loglevel warning

networks:
  homelab:
    external: true

volumes:
  redis-data:
    driver: local
11 compose/core/crowdsec/.env.example (Normal file)
@@ -0,0 +1,11 @@
# CrowdSec Configuration
# Copy this file to .env and customize

# Timezone
TZ=America/Los_Angeles

# Optional: Disable metrics/telemetry
# DISABLE_ONLINE_API=true

# Optional: Log level (info, debug, warning, error)
# LOG_LEVEL=info
453 compose/core/crowdsec/README.md (Normal file)
@@ -0,0 +1,453 @@
|
# CrowdSec - Collaborative Security Engine

CrowdSec is a free, open-source Intrusion Prevention System (IPS) that analyzes logs and blocks malicious IPs based on behavior analysis and community threat intelligence.

## Features

- Behavior-based detection - detects attacks from log patterns
- Community threat intelligence - shares and receives IP reputation data
- Traefik integration - protects all web services via plugin
- SQLite database - no separate database container needed
- Local network whitelist - prevents self-blocking (10.0.0.0/16)
- Multiple scenarios - HTTP attacks, brute force, scanners, etc.
- Optional dashboard - web UI at crowdsec.fig.systems

## Access

**Dashboard URL:** https://crowdsec.fig.systems (protected by Authelia)
**LAPI:** http://crowdsec:8080 (internal only, used by the Traefik plugin)

## Quick Start

### Initial Deployment

1. **Deploy CrowdSec:**
   ```bash
   cd /home/eduardo_figueroa/homelab/compose/core/crowdsec
   docker compose up -d
   ```

2. **Wait for initialization** (30-60 seconds):
   ```bash
   docker logs crowdsec -f
   ```

   Look for: "CrowdSec service: crowdsec up and running"

3. **Generate a bouncer API key:**
   ```bash
   docker exec crowdsec cscli bouncers add traefik-bouncer
   ```

   **Important:** Copy the API key shown. It will look like:
   ```
   API key for 'traefik-bouncer':
   a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6q7r8s9t0
   ```

4. **Add the API key to Traefik:**
   ```bash
   cd /home/eduardo_figueroa/homelab/compose/core/traefik
   nano .env
   ```

   Update the line:
   ```bash
   CROWDSEC_BOUNCER_KEY=a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6q7r8s9t0
   ```

5. **Restart Traefik to load the plugin:**
   ```bash
   docker compose restart
   ```

6. **Verify the plugin connection:**
   ```bash
   docker logs traefik 2>&1 | grep -i crowdsec
   ```

   You should see: "Plugin crowdsec-bouncer-traefik-plugin loaded"
### Apply CrowdSec Middleware to Services

Edit each service's compose.yaml to add the CrowdSec middleware:

**Example - Jellyfin:**
```yaml
labels:
  traefik.http.routers.jellyfin.middlewares: crowdsec
```

**Example - With Authelia chain:**
```yaml
labels:
  traefik.http.routers.service.middlewares: crowdsec,authelia
```

**Recommended for:**
- Publicly accessible services (Jellyfin, Jellyseerr, etc.)
- Services without rate limiting
- High-value targets (admin panels, databases)

**Skip for:**
- The Traefik dashboard (already restricted to the local network)
- Strictly local services (no external access)
## Management Commands

### View Decisions (Active Bans)

```bash
# List all active bans
docker exec crowdsec cscli decisions list

# List bans with full details
docker exec crowdsec cscli decisions list -o json
```

### View Alerts (Detected Attacks)

```bash
# Recent alerts
docker exec crowdsec cscli alerts list

# Detailed alert view
docker exec crowdsec cscli alerts inspect <alert_id>
```

### Unban or Whitelist an IP

```bash
# Remove an active ban (temporary: the IP can be banned again later)
docker exec crowdsec cscli decisions delete --ip 1.2.3.4
```

For a permanent whitelist, add the IP to `config/local_whitelist.yaml`:

```yaml
whitelist:
  reason: "Trusted service"
  cidr:
    - "1.2.3.4/32"
```

Then restart CrowdSec:
```bash
docker compose restart
```

### Ban an IP Manually

```bash
# Ban for 4 hours
docker exec crowdsec cscli decisions add --ip 1.2.3.4 --duration 4h --reason "Manual ban"

# Longer ban (24 hours)
docker exec crowdsec cscli decisions add --ip 1.2.3.4 --duration 24h --reason "Malicious actor"
```

### View Installed Collections

```bash
docker exec crowdsec cscli collections list
```

### Install Additional Collections

```bash
# WordPress protection
docker exec crowdsec cscli collections install crowdsecurity/wordpress

# SSH brute force (if exposing SSH)
docker exec crowdsec cscli collections install crowdsecurity/sshd

# Apply changes
docker compose restart
```

### View Bouncer Status

```bash
# List bouncers
docker exec crowdsec cscli bouncers list

# traefik-bouncer should appear with a recent last_pull timestamp
```

### View Metrics

```bash
# Overall CrowdSec metrics
docker exec crowdsec cscli metrics

# Parser statistics
docker exec crowdsec cscli metrics show parsers

# Scenario statistics
docker exec crowdsec cscli metrics show scenarios
```

## Configuration Files

### acquis.yaml

Defines the log sources CrowdSec monitors:

```yaml
filenames:
  - /var/log/traefik/access.log
labels:
  type: traefik
```

**To add more log sources**, separate entries with `---`:

```yaml
---
filenames:
  - /var/log/traefik/access.log
labels:
  type: traefik
---
filenames:
  - /var/log/nginx/access.log
labels:
  type: nginx
```

After changes:
```bash
docker compose restart
```

### local_whitelist.yaml

Whitelists trusted IPs/CIDRs:

```yaml
whitelist:
  reason: "Local network and trusted infrastructure"
  cidr:
    - "10.0.0.0/16"
    - "127.0.0.1/32"
```

**Add more entries:**
```yaml
cidr:
  - "10.0.0.0/16"
  - "192.168.1.100/32"  # Trusted admin IP
```

After changes:
```bash
docker compose restart
```
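The whitelist check itself is simple CIDR containment. As a sketch (not CrowdSec's actual implementation), the same logic in Python with the standard `ipaddress` module, using the example CIDRs from above:

```python
import ipaddress

# CIDRs mirroring the local_whitelist.yaml example above
WHITELIST = [ipaddress.ip_network(c) for c in ("10.0.0.0/16", "127.0.0.1/32")]

def is_whitelisted(ip: str) -> bool:
    """Return True if the IP falls inside any whitelisted CIDR."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in WHITELIST)

print(is_whitelisted("10.0.4.20"))    # True: inside 10.0.0.0/16
print(is_whitelisted("203.0.113.9"))  # False: public address, not whitelisted
```

This is why a single `/32` entry is enough to protect one host, while `10.0.0.0/16` covers the whole local network.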

## Installed Collections

### crowdsecurity/traefik
Parsers and scenarios for Traefik-specific attacks:
- Path traversal attempts
- SQL injection in query strings
- XSS attempts
- Admin panel scanning

### crowdsecurity/base-http-scenarios
Generic HTTP attack scenarios:
- Brute force (login attempts)
- Credential stuffing
- Directory enumeration
- Sensitive file access attempts

### crowdsecurity/whitelist-good-actors
Whitelists known good actors:
- Search engine bots (Google, Bing, etc.)
- Monitoring services (UptimeRobot, Pingdom)
- CDN providers (Cloudflare, etc.)

## Integration with Traefik

### How It Works

1. **Traefik receives a request** → checks the CrowdSec plugin middleware
2. **Plugin queries the CrowdSec LAPI** → "Is this IP banned?"
3. **CrowdSec responds:**
   - Not banned → request proceeds to the service
   - Banned → returns 403 Forbidden
4. **Traefik logs the request** → saved to /var/log/traefik/access.log
5. **CrowdSec analyzes the logs** → detects attack patterns
6. **CrowdSec makes a decision** → ban the IP or raise an alert
7. **Plugin updates its cache** → every 60 seconds (stream mode)

### Stream Mode

The plugin uses **stream mode** for optimal performance:
- **Live mode:** queries the LAPI on every request (adds latency to each request)
- **Stream mode:** maintains a local cache of decisions, refreshed every 60s (low latency)
- **Alone mode:** no LAPI connection, local decisions only

**Current config:** stream mode with 60s updates
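
A toy model of the stream-mode idea (names are illustrative, not the plugin's real API): the bouncer keeps a local set of banned IPs and only contacts the LAPI on a timer, so the per-request check is a cheap local lookup instead of a network round trip:

```python
import time

class StreamBouncer:
    """Toy model of stream mode: per-request checks hit a local cache,
    which is refreshed from the LAPI only every `interval` seconds."""

    def __init__(self, fetch_decisions, interval=60):
        self.fetch_decisions = fetch_decisions  # callable returning currently banned IPs
        self.interval = interval
        self.banned = set()
        self.last_pull = float("-inf")  # force a pull on the first request

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        if now - self.last_pull >= self.interval:   # periodic refresh, not per request
            self.banned = set(self.fetch_decisions())
            self.last_pull = now
        return ip not in self.banned                # O(1) local lookup

bouncer = StreamBouncer(lambda: ["203.0.113.9"], interval=60)
print(bouncer.allow("198.51.100.1", now=0))  # True: not in the cached ban list
print(bouncer.allow("203.0.113.9", now=1))   # False: banned
```

The trade-off is visible in the sketch: a freshly banned IP can keep getting through for up to one refresh interval, which is why live mode exists for stricter setups.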

### Middleware Chain Order

When chaining middlewares, order matters:

```yaml
# Correct: CrowdSec first, then Authelia
traefik.http.routers.service.middlewares: crowdsec,authelia

# Also valid: CrowdSec after rate limiting
traefik.http.routers.service.middlewares: ratelimit,crowdsec
```

**Recommended order:**
1. Rate limiting (if any)
2. CrowdSec (block banned IPs early)
3. Authelia (authentication for allowed IPs)
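
The ordering rule is short-circuiting function composition: each middleware can reject the request, and later ones never run for a rejected request. A minimal sketch (function names are illustrative, not Traefik internals):

```python
def crowdsec(request):
    # Reject requests from banned IPs before authentication even runs
    return request["ip"] not in {"203.0.113.9"}

def authelia(request):
    # Only consulted for requests CrowdSec already allowed
    return request.get("authenticated", False)

def run_chain(request, middlewares):
    """Apply middlewares in order; the first rejection short-circuits."""
    return all(mw(request) for mw in middlewares)

chain = [crowdsec, authelia]
print(run_chain({"ip": "203.0.113.9", "authenticated": True}, chain))   # False: banned IP blocked first
print(run_chain({"ip": "198.51.100.1", "authenticated": True}, chain))  # True: allowed and authenticated
```

Putting CrowdSec early means banned IPs never reach the (more expensive) authentication step.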

## Troubleshooting

### CrowdSec Not Blocking Malicious IPs

**Check decisions:**
```bash
docker exec crowdsec cscli decisions list
```

If empty, CrowdSec isn't detecting attacks.

**Check alerts:**
```bash
docker exec crowdsec cscli alerts list
```

If empty, logs aren't being parsed.

**Verify log parsing:**
```bash
docker exec crowdsec cscli metrics show acquisitions
```

This should show the Traefik log file being read.

**Check acquis.yaml:**
```bash
docker exec crowdsec cat /etc/crowdsec/acquis.yaml
```

### Traefik Plugin Not Connecting

**Check Traefik logs:**
```bash
docker logs traefik 2>&1 | grep -i crowdsec
```

**Common issues:**
- API key not set in .env
- CrowdSec container not running
- Network connectivity (both containers must be on the homelab network)

**Test the connection:**
```bash
docker exec traefik wget -O- http://crowdsec:8080/v1/decisions/stream
```

This should return JSON (the request may be unauthorized, but that still proves connectivity).

### Traefik Not Loading the Plugin

**Check Traefik startup logs:**
```bash
docker logs traefik | head -50
```

Look for:
- "Plugin crowdsec-bouncer-traefik-plugin loaded"
- the experimental.plugins section being enabled

**Verify traefik.yml:**
```bash
docker exec traefik cat /etc/traefik/traefik.yml
```

Ensure the experimental.plugins section exists.

### Accidentally Banned Yourself

**Quick unban:**
```bash
docker exec crowdsec cscli decisions delete --ip YOUR_IP_HERE
```

**Permanent whitelist:**

Edit `/home/eduardo_figueroa/homelab/compose/core/crowdsec/config/local_whitelist.yaml`:

```yaml
whitelist:
  cidr:
    - "YOUR_IP/32"
```

Restart:
```bash
docker compose restart
```

### Logs Not Being Parsed

**Check log file permissions:**
```bash
ls -la /home/eduardo_figueroa/homelab/compose/core/traefik/logs/
```

**Check that CrowdSec can read the logs:**
```bash
docker exec crowdsec ls -la /var/log/traefik/
docker exec crowdsec tail /var/log/traefik/access.log
```

**Check acquisitions:**
```bash
docker exec crowdsec cscli metrics show acquisitions
```

This should show lines read from access.log.

## Best Practices

1. **Monitor metrics weekly:**
   ```bash
   docker exec crowdsec cscli metrics
   ```

2. **Review decisions periodically:** check for false positives.

3. **Keep collections updated:**
   ```bash
   docker exec crowdsec cscli collections upgrade --all
   docker compose restart
   ```

4. **Back up the database:**
   ```bash
   cp -r /home/eduardo_figueroa/homelab/compose/core/crowdsec/db/ /backup/location/
   ```

5. **Test changes in staging** before applying them to production services.

6. **Whitelist liberally:** it is better to whitelist trusted IPs than to deal with lockouts.

7. **Chain with Authelia:** defense in depth - CrowdSec blocks bad actors, Authelia handles authentication.

## Links

- **Official Docs:** https://docs.crowdsec.net/
- **Traefik Plugin:** https://plugins.traefik.io/plugins/6335346ca4caa9ddeffda116/crowdsec-bouncer-traefik-plugin
- **Collections Hub:** https://app.crowdsec.net/hub/collections
- **Community Forum:** https://discourse.crowdsec.net/
- **GitHub:** https://github.com/crowdsecurity/crowdsec
compose/core/crowdsec/compose.yaml (new file, 73 lines)
@@ -0,0 +1,73 @@
# CrowdSec - Collaborative IPS/IDS
# Docs: https://docs.crowdsec.net/

services:
  crowdsec:
    container_name: crowdsec
    image: crowdsecurity/crowdsec:latest
    restart: unless-stopped

    env_file:
      - .env

    environment:
      # Timezone
      TZ: America/Los_Angeles

      # Collections to install on first run
      COLLECTIONS: >-
        crowdsecurity/traefik
        crowdsecurity/base-http-scenarios
        crowdsecurity/whitelist-good-actors

      # Disable online API for local-only mode (optional)
      # DISABLE_ONLINE_API: "true"

    volumes:
      # Configuration persistence
      - ./config/acquis.yaml:/etc/crowdsec/acquis.yaml:ro
      - ./config/local_whitelist.yaml:/etc/crowdsec/parsers/s02-enrich/local_whitelist.yaml:ro

      # Database persistence (SQLite)
      - ./db:/var/lib/crowdsec/data

      # Traefik logs (read-only, shared with Traefik)
      - ../traefik/logs:/var/log/traefik:ro

      # Configuration directory (for runtime config)
      - crowdsec-config:/etc/crowdsec

    networks:
      - homelab

    # Expose 8080 only for metrics/dashboard (optional);
    # not exposed to the host by default for security
    # ports:
    #   - "8080:8080"

    labels:
      # Traefik - Optional: expose the CrowdSec dashboard
      traefik.enable: true
      traefik.docker.network: homelab

      # CrowdSec Dashboard
      traefik.http.routers.crowdsec.rule: Host(`crowdsec.fig.systems`)
      traefik.http.routers.crowdsec.entrypoints: websecure
      traefik.http.routers.crowdsec.tls.certresolver: letsencrypt
      traefik.http.services.crowdsec.loadbalancer.server.port: 8080

      # Protect with Authelia
      traefik.http.routers.crowdsec.middlewares: authelia

      # Homarr Discovery
      homarr.name: CrowdSec
      homarr.group: Security
      homarr.icon: mdi:shield-check

networks:
  homelab:
    external: true

volumes:
  crowdsec-config:
    driver: local
@@ -2,34 +2,27 @@ services:
   traefik:
     container_name: traefik
     image: traefik:v3.6.2
+
+    env_file:
+      - .env
+
+    # Static configuration file
     command:
-      # API Settings
-      - --api.dashboard=true
-      # Provider Settings
-      - --providers.docker=true
-      - --providers.docker.exposedbydefault=false
-      - --providers.docker.network=homelab
-      # Entrypoints
-      - --entrypoints.web.address=:80
-      - --entrypoints.websecure.address=:443
-      # HTTP to HTTPS redirect
-      - --entrypoints.web.http.redirections.entrypoint.to=websecure
-      - --entrypoints.web.http.redirections.entrypoint.scheme=https
-      # Let's Encrypt Certificate Resolver
-      - --certificatesresolvers.letsencrypt.acme.email=admin@edfig.dev
-      - --certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json
-      - --certificatesresolvers.letsencrypt.acme.httpchallenge.entrypoint=web
-      # Logging
-      - --log.level=INFO
-      - --accesslog=true
+      - --configFile=/etc/traefik/traefik.yml
     ports:
       - "80:80"
       - "443:443"

     environment:
       DOCKER_API_VERSION: "1.52"

     volumes:
       - /var/run/docker.sock:/var/run/docker.sock:ro
+      - ./traefik.yml:/etc/traefik/traefik.yml:ro
       - ./letsencrypt:/letsencrypt
+      - ./logs:/var/log/traefik

     restart: unless-stopped
     networks:
       - homelab
@@ -40,10 +33,22 @@ services:
       traefik.http.routers.traefik.entrypoints: websecure
       traefik.http.routers.traefik.tls.certresolver: letsencrypt
       traefik.http.routers.traefik.service: api@internal
+      traefik.http.routers.traefik.middlewares: local-only
+
       # IP Allowlist Middleware for local network only services
       traefik.http.middlewares.local-only.ipallowlist.sourcerange: 10.0.0.0/16
+
+      # CrowdSec Middleware
+      traefik.http.middlewares.crowdsec.plugin.crowdsec-bouncer-traefik-plugin.enabled: true
+      traefik.http.middlewares.crowdsec.plugin.crowdsec-bouncer-traefik-plugin.crowdsecMode: stream
+      traefik.http.middlewares.crowdsec.plugin.crowdsec-bouncer-traefik-plugin.crowdsecLapiKey: ${CROWDSEC_BOUNCER_KEY}
+      traefik.http.middlewares.crowdsec.plugin.crowdsec-bouncer-traefik-plugin.crowdsecLapiHost: crowdsec:8080
+      traefik.http.middlewares.crowdsec.plugin.crowdsec-bouncer-traefik-plugin.crowdsecLapiScheme: http
+      traefik.http.middlewares.crowdsec.plugin.crowdsec-bouncer-traefik-plugin.updateIntervalSeconds: 60
+      traefik.http.middlewares.crowdsec.plugin.crowdsec-bouncer-traefik-plugin.defaultDecisionSeconds: 60
+      traefik.http.middlewares.crowdsec.plugin.crowdsec-bouncer-traefik-plugin.forwardedHeadersTrustedIPs: 10.0.0.0/16
+      traefik.http.middlewares.crowdsec.plugin.crowdsec-bouncer-traefik-plugin.clientTrustedIPs: 10.0.0.0/16

 networks:
   homelab:
     external: true
compose/core/traefik/traefik.yml (new file, 56 lines)
@@ -0,0 +1,56 @@
# Traefik Static Configuration
# Docs: https://doc.traefik.io/traefik/

# API Settings
api:
  dashboard: true

# Provider Settings
providers:
  docker:
    exposedByDefault: false
    network: homelab

# Entrypoints
entryPoints:
  web:
    address: ":80"
    http:
      redirections:
        entryPoint:
          to: websecure
          scheme: https

  websecure:
    address: ":443"

# Certificate Resolvers
certificatesResolvers:
  letsencrypt:
    acme:
      email: admin@edfig.dev
      storage: /letsencrypt/acme.json
      httpChallenge:
        entryPoint: web

# Logging
log:
  level: INFO

# Access Logs - Critical for CrowdSec
accessLog:
  filePath: /var/log/traefik/access.log
  bufferingSize: 100
  filters:
    statusCodes:
      - "200-299"
      - "300-399"
      - "400-499"
      - "500-599"

# Experimental Features - Required for Plugins
experimental:
  plugins:
    crowdsec-bouncer-traefik-plugin:
      moduleName: github.com/maxlerebourg/crowdsec-bouncer-traefik-plugin
      version: v1.2.1
compose/media/automation/dispatcharr/compose.yaml (new file, 56 lines)
@@ -0,0 +1,56 @@
# Dispatcharr - IPTV/Live TV Transcoding and Streaming
# Docs: https://github.com/Dispatcharr/Dispatcharr

services:
  dispatcharr:
    image: ghcr.io/dispatcharr/dispatcharr:latest
    container_name: dispatcharr
    ports:
      - 9191:9191
    volumes:
      - ./data:/data
    environment:
      - DISPATCHARR_ENV=aio
      - REDIS_HOST=localhost
      - CELERY_BROKER_URL=redis://localhost:6379/0
      - DISPATCHARR_LOG_LEVEL=info
      # Process Priority Configuration (optional)
      # Lower values = higher priority. Range: -20 (highest) to 19 (lowest)
      # Negative values require cap_add: SYS_NICE (uncomment below)
      #- UWSGI_NICE_LEVEL=-5  # uWSGI/FFmpeg/streaming (default: 0; -5 recommended for high priority)
      #- CELERY_NICE_LEVEL=5  # Celery/EPG/background tasks (default: 5, low priority)

    # Uncomment to give streaming high priority (required if UWSGI_NICE_LEVEL < 0)
    #cap_add:
    #  - SYS_NICE

    # NVIDIA GPU support for hardware transcoding
    runtime: nvidia
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

    networks:
      - homelab

    labels:
      traefik.enable: true
      traefik.http.routers.dispatcharr.rule: Host(`iptv.fig.systems`)
      traefik.http.routers.dispatcharr.entrypoints: websecure
      traefik.http.routers.dispatcharr.tls.certresolver: letsencrypt
      traefik.http.services.dispatcharr.loadbalancer.server.port: 9191

      # Homarr Discovery
      homarr.name: Dispatcharr (IPTV)
      homarr.group: Media
      homarr.icon: mdi:television

networks:
  homelab:
    external: true
@@ -29,7 +29,8 @@ services:
       traefik.http.routers.lidarr.tls.certresolver: letsencrypt
       traefik.http.services.lidarr.loadbalancer.server.port: 8686

-      # SSO Protection
+      # Local Network Only
+      traefik.http.routers.lidarr.middlewares: local-only

       # Homarr Discovery
       homarr.name: Lidarr (Music)
@@ -29,6 +29,7 @@ services:
       traefik.http.services.profilarr.loadbalancer.server.port: 6868

       # SSO Protection
+      traefik.http.routers.profilarr.middlewares: authelia

       # Homarr Discovery
       homarr.name: Profilarr (Profiles)
@@ -24,6 +24,7 @@ services:
       traefik.http.services.prowlarr.loadbalancer.server.port: 9696

       # SSO Protection
+      traefik.http.routers.prowlarr.middlewares: authelia

       # Homarr Discovery
       homarr.name: Prowlarr (Indexers)
@@ -19,12 +19,19 @@ services:
     networks:
       - homelab
     labels:
+      # Traefik
       traefik.enable: true
+      traefik.docker.network: homelab
+
+      # Web UI
       traefik.http.routers.qbittorrent.rule: Host(`qbt.fig.systems`)
       traefik.http.routers.qbittorrent.entrypoints: websecure
       traefik.http.routers.qbittorrent.tls.certresolver: letsencrypt
       traefik.http.services.qbittorrent.loadbalancer.server.port: 8080
+
+      # SSO Protection
+      traefik.http.routers.qbittorrent.middlewares: authelia

 networks:
   homelab:
     external: true
@@ -19,12 +19,19 @@ services:
     networks:
       - homelab
     labels:
+      # Traefik
       traefik.enable: true
+      traefik.docker.network: homelab
+
+      # Web UI
       traefik.http.routers.radarr.rule: Host(`radarr.fig.systems`)
       traefik.http.routers.radarr.entrypoints: websecure
       traefik.http.routers.radarr.tls.certresolver: letsencrypt
       traefik.http.services.radarr.loadbalancer.server.port: 7878
+
+      # SSO Protection
+      traefik.http.routers.radarr.middlewares: authelia

 networks:
   homelab:
     external: true
@@ -16,13 +16,19 @@ services:
     networks:
       - homelab
     labels:
+      # Traefik
       traefik.enable: true
       traefik.docker.network: homelab
+
+      # Web UI
       traefik.http.routers.sabnzbd.rule: Host(`sab.fig.systems`)
       traefik.http.routers.sabnzbd.entrypoints: websecure
       traefik.http.routers.sabnzbd.tls.certresolver: letsencrypt
       traefik.http.services.sabnzbd.loadbalancer.server.port: 8080
+
+      # SSO Protection
+      traefik.http.routers.sabnzbd.middlewares: authelia

 networks:
   homelab:
     external: true
compose/media/automation/slskd/app/slskd.yml (new file, 32 lines)
@@ -0,0 +1,32 @@
# slskd configuration
# See: https://github.com/slskd/slskd/blob/master/config/slskd.example.yml

# Soulseek credentials
soulseek:
  username: eddoe
  password: Exoteric0
  description: |
    A slskd user. https://github.com/slskd/slskd

# Directories
directories:
  downloads: /downloads

shares:
  directories:
    - /music
  filters:
    - \.ini$
    - Thumbs.db$
    - \.DS_Store$

# Web UI Authentication
web:
  authentication:
    username: slskd
    password: slskd
    api_keys:
      soularr:
        key: ae207eee1105484e9dd0e472cba7b996fe2069bafc7f86b83001ab29d0c2c211
        role: readwrite
        cidr: 0.0.0.0/0,::/0
compose/media/automation/slskd/compose.yaml (new file, 53 lines)
@@ -0,0 +1,53 @@
# slskd - Soulseek daemon for P2P music sharing
# Docs: https://github.com/slskd/slskd
# Config: https://github.com/slskd/slskd/blob/master/config/slskd.example.yml

services:
  slskd:
    container_name: slskd
    image: slskd/slskd:latest
    restart: unless-stopped

    env_file:
      - .env

    environment:
      - SLSKD_REMOTE_CONFIGURATION=true

    volumes:
      - ./app:/app
      # Existing music library for sharing (read-only)
      - /mnt/media/music:/music:ro
      # Downloads directory (Lidarr can access this)
      - /mnt/media/downloads/soulseek:/downloads

    ports:
      - "5030:5030"   # Web UI
      - "5031:5031"   # Peer connections
      - "50300:50300" # Peer listening

    networks:
      - homelab

    labels:
      # Traefik
      traefik.enable: true
      traefik.docker.network: homelab

      # Web UI
      traefik.http.routers.slskd.rule: Host(`soulseek.fig.systems`)
      traefik.http.routers.slskd.entrypoints: websecure
      traefik.http.routers.slskd.tls.certresolver: letsencrypt
      traefik.http.services.slskd.loadbalancer.server.port: 5030

      # Local Network Only
      traefik.http.routers.slskd.middlewares: local-only

      # Homarr Discovery
      homarr.name: slskd (Soulseek)
      homarr.group: Automation
      homarr.icon: mdi:share-variant

networks:
  homelab:
    external: true
@@ -19,12 +19,19 @@ services:
     networks:
       - homelab
     labels:
+      # Traefik
       traefik.enable: true
+      traefik.docker.network: homelab
+
+      # Web UI
       traefik.http.routers.sonarr.rule: Host(`sonarr.fig.systems`)
       traefik.http.routers.sonarr.entrypoints: websecure
       traefik.http.routers.sonarr.tls.certresolver: letsencrypt
       traefik.http.services.sonarr.loadbalancer.server.port: 8989
+
+      # SSO Protection
+      traefik.http.routers.sonarr.middlewares: authelia
+
 networks:
   homelab:
     external: true
||||||
36
compose/media/automation/soularr/compose.yaml
Normal file
36
compose/media/automation/soularr/compose.yaml
Normal file
|
|
@ -0,0 +1,36 @@
|
||||||
|
# Soularr - Automation bridge connecting Lidarr with Slskd
|
||||||
|
# Docs: https://soularr.net/
|
||||||
|
# GitHub: https://github.com/mrusse08/soularr
|
||||||
|
|
||||||
|
services:
|
||||||
|
soularr:
|
||||||
|
container_name: soularr
|
||||||
|
image: mrusse08/soularr:latest
|
||||||
|
restart: unless-stopped
|
||||||
|
|
||||||
|
env_file:
|
||||||
|
- .env
|
||||||
|
|
||||||
|
environment:
|
||||||
|
- PUID=1000
|
||||||
|
- PGID=1000
|
||||||
|
- SCRIPT_INTERVAL=300 # Run every 5 minutes
|
||||||
|
|
||||||
|
volumes:
|
||||||
|
- ./data:/data # Config file storage
|
||||||
|
- /mnt/media/downloads/soulseek:/downloads # Monitor downloads
|
||||||
|
|
||||||
|
networks:
|
||||||
|
- homelab
|
||||||
|
|
||||||
|
labels:
|
||||||
|
# No Traefik (no web UI)
|
||||||
|
|
||||||
|
# Homarr Discovery
|
||||||
|
homarr.name: Soularr (Lidarr↔Slskd Bridge)
|
||||||
|
homarr.group: Automation
|
||||||
|
homarr.icon: mdi:link-variant
|
||||||
|
|
||||||
|
networks:
|
||||||
|
homelab:
|
||||||
|
external: true
|
||||||
compose/media/frontend/jellyfin/OIDC-SETUP.md (new file, 37 lines)
@@ -0,0 +1,37 @@
# Jellyfin OIDC Setup with Authelia

Jellyfin requires the **SSO Plugin** to be installed for OIDC authentication.

## Installation Steps

1. **Install the SSO Plugin**:
   - Open Jellyfin: https://flix.fig.systems
   - Navigate to: Dashboard → Plugins → Catalog
   - Find and install the **"SSO-Authentication"** plugin
   - Restart Jellyfin

2. **Configure the Plugin**:
   - Go to: Dashboard → Plugins → SSO-Authentication
   - **Add New Provider** with these settings:
     - **Provider Name**: `authelia`
     - **OID Endpoint**: `https://auth.fig.systems`
     - **OID Client ID**: `jellyfin`
     - **OID Secret**: `eOlV1CLiYpCtE9xKaI3FbsXmMBuHc5Mp`
     - **Enabled**: ✓
     - **Enable Authorization by Plugin**: ✓
     - **Enable All Folders**: ✓
     - **Enable Folder Access (Optional)**: (configure as needed)
     - **Administrator Roles**: `admin` (if using LDAP groups)
     - **Default User**: (leave empty for auto-registration)

3. **Test Login**:
   - Log out of Jellyfin
   - You should now see a "Sign in with authelia" button
   - Click it to authenticate via Authelia

## Notes

- Users are auto-created in Jellyfin when they first log in via OIDC
- You can still use local Jellyfin accounts alongside OIDC
- The redirect URI configured in Authelia is: `https://flix.fig.systems/sso/OID/redirect/authelia`
@@ -44,9 +44,15 @@ services:
     # NVIDIA GPU transcoding (GTX 1070)
     runtime: nvidia
+    # Shared memory for transcoding - prevents stuttering
+    shm_size: 4gb
     deploy:
       resources:
+        limits:
+          memory: 12G
+          cpus: '5.0'
         reservations:
+          memory: 4G
           devices:
             - driver: nvidia
               count: all
@@ -4,7 +4,7 @@
 services:
   jellyseerr:
     container_name: jellyseerr
-    image: fallenbagel/jellyseerr:latest
+    image: ghcr.io/seerr-team/seerr:latest
     env_file:
       - .env
     volumes:
compose/media/frontend/navidrome/compose.yaml (new file, 48 lines)
@@ -0,0 +1,48 @@
# Navidrome - Modern music streaming server
# Docs: https://www.navidrome.org/docs/
# Installation: https://www.navidrome.org/docs/installation/docker/

services:
  navidrome:
    container_name: navidrome
    image: deluan/navidrome:latest
    restart: unless-stopped

    env_file:
      - .env

    user: "1000:1000"

    volumes:
      - ./data:/data
      # Music library (read-only)
      - /mnt/media/music:/music:ro

    ports:
      - "4533:4533"

    networks:
      - homelab

    labels:
      # Traefik
      traefik.enable: true
      traefik.docker.network: homelab

      # Web UI
      traefik.http.routers.navidrome.rule: Host(`music.fig.systems`)
      traefik.http.routers.navidrome.entrypoints: websecure
      traefik.http.routers.navidrome.tls.certresolver: letsencrypt
      traefik.http.services.navidrome.loadbalancer.server.port: 4533

      # No SSO - Navidrome has its own auth system
      # This ensures mobile apps (Subsonic clients) work properly

      # Homarr Discovery
      homarr.name: Navidrome (Music Streaming)
      homarr.group: Media
      homarr.icon: mdi:music-circle

networks:
  homelab:
    external: true
compose/media/frontend/nodecasttv/compose.yaml (new file, 35 lines)
@@ -0,0 +1,35 @@
# NodeCast TV - Chromecast Dashboard
# Source: https://github.com/technomancer702/nodecast-tv

services:
  nodecast-tv:
    container_name: nodecast-tv
    build: https://github.com/technomancer702/nodecast-tv.git#main
    env_file:
      - .env
    environment:
      - NODE_ENV=production
      - PORT=3000
    volumes:
      - ./data:/app/data
    restart: unless-stopped
    networks:
      - homelab
    labels:
      traefik.enable: true
      traefik.docker.network: homelab
      traefik.http.routers.nodecast-tv.rule: Host(`iptv.fig.systems`)
      traefik.http.routers.nodecast-tv.entrypoints: websecure
      traefik.http.routers.nodecast-tv.tls.certresolver: letsencrypt
      traefik.http.services.nodecast-tv.loadbalancer.server.port: 3000

      # Note: No Authelia middleware - NodeCast TV handles authentication via its own OIDC integration

      # Homarr Discovery
      homarr.name: NodeCast TV (IPTV)
      homarr.group: Media
      homarr.icon: mdi:cast

networks:
  homelab:
    external: true
@@ -1,28 +0,0 @@
# Centralized Logging Configuration

# Timezone
TZ=America/Los_Angeles

# Grafana Admin Credentials
# Default username: admin
# Change this password immediately after first login!
# Example format: MyGr@f@n@P@ssw0rd!2024
GF_SECURITY_ADMIN_PASSWORD=changeme_please_set_secure_grafana_password

# Grafana Configuration
GF_SERVER_ROOT_URL=https://logs.fig.systems
GF_SERVER_DOMAIN=logs.fig.systems

# Disable Grafana analytics (optional)
GF_ANALYTICS_REPORTING_ENABLED=false
GF_ANALYTICS_CHECK_FOR_UPDATES=false

# Allow embedding (for Homarr dashboard integration)
GF_SECURITY_ALLOW_EMBEDDING=true

# Loki Configuration
# Retention period in days (default: 30 days)
LOKI_RETENTION_PERIOD=30d

# Promtail Configuration
# No additional configuration needed - configured via promtail-config.yaml
@@ -1,28 +0,0 @@
# Centralized Logging Configuration

# Timezone
TZ=America/Los_Angeles

# Grafana Admin Credentials
# Default username: admin
# Change this password immediately after first login!
# Example format: MyGr@f@n@P@ssw0rd!2024
GF_SECURITY_ADMIN_PASSWORD=REDACTED

# Grafana Configuration
GF_SERVER_ROOT_URL=https://logs.fig.systems
GF_SERVER_DOMAIN=logs.fig.systems

# Disable Grafana analytics (optional)
GF_ANALYTICS_REPORTING_ENABLED=false
GF_ANALYTICS_CHECK_FOR_UPDATES=false

# Allow embedding (for Homarr dashboard integration)
GF_SECURITY_ALLOW_EMBEDDING=true

# Loki Configuration
# Retention period in days (default: 30 days)
LOKI_RETENTION_PERIOD=30d

# Promtail Configuration
# No additional configuration needed - configured via promtail-config.yaml
compose/monitoring/logging/.gitignore (deleted, 13 lines, vendored)
@@ -1,13 +0,0 @@
# Loki data
loki-data/

# Grafana data
grafana-data/

# Keep provisioning and config files
!grafana-provisioning/
!loki-config.yaml
!promtail-config.yaml

# Keep .env.example if created
!.env.example
@@ -1,235 +0,0 @@
# Docker Logs Dashboard - Grafana

A comprehensive dashboard for viewing all Docker container logs via Loki.

## Features

### 📊 Panels Included

1. **Docker Container Logs** (Main Panel)
   - Real-time log streaming from all containers
   - Filter by container, image, or search term
   - Expandable log details
   - Sortable (ascending/descending)

2. **Log Volume by Container**
   - Stacked bar chart showing log activity over time
   - Helps identify chatty containers
   - Per-container breakdown

3. **Error Logs by Container**
   - Time series of ERROR/EXCEPTION/FATAL/PANIC logs
   - Automatically detects error patterns
   - Useful for monitoring application health

4. **Total Logs by Container**
   - Bar gauge showing total log lines per container
   - Color-coded thresholds (green → yellow → red)
   - Based on the selected time range

5. **Statistics Panels**
   - **Active Containers**: Count of containers currently logging
   - **Total Log Lines**: Sum of all logs in the time range
   - **Total Errors**: Count of error-level logs
   - **Log Rate**: Logs per second (current rate)

## Access the Dashboard

1. Open Grafana: **https://logs.fig.systems**
2. Navigate to: **Dashboards** → **Loki** folder → **Docker Logs - All Containers**

Or use the direct link:
```
https://logs.fig.systems/d/docker-logs-all
```

## Using the Filters

### Container Filter
- Select specific containers to view
- Multi-select supported
- Default: "All" (shows all containers)

Example: Select `traefik`, `loki`, `grafana` to view only those

### Image Filter
- Filter by Docker image name
- Multi-select supported
- Useful for viewing all containers of the same image

Example: Filter by `grafana/loki:*` to see all Loki containers

### Search Filter
- Free-text search with regex support
- Searches within log message content
- Case-insensitive by default

Examples:
- `error` - Find logs containing "error"
- `(?i)started` - Case-insensitive "started"
- `HTTP [45][0-9]{2}` - HTTP 4xx/5xx errors
- `user.*login.*failed` - Failed login attempts

## Time Range Selection

Use Grafana's time picker (top right) to select:
- Last 5 minutes
- Last 15 minutes
- Last 1 hour (default)
- Last 24 hours
- Custom range

## Auto-Refresh

The dashboard auto-refreshes every **10 seconds** by default.

Change the refresh rate in the top-right dropdown:
- 5s (very fast)
- 10s (default)
- 30s
- 1m
- 5m
- Off

## LogQL Query Examples

The dashboard uses these queries. You can modify panels or create new ones:

### All logs from a container
```logql
{job="docker_all", container="traefik"}
```

### Errors only
```logql
{job="docker_all"} |~ "(?i)(error|exception|fatal|panic)"
```

### HTTP status codes
```logql
{job="docker_all", container="traefik"} | json | line_format "{{.status}} {{.method}} {{.path}}"
```

### Rate of logs
```logql
rate({job="docker_all"}[5m])
```

### Count errors per container
```logql
sum by (container) (count_over_time({job="docker_all"} |~ "(?i)error" [1h]))
```

## Tips & Tricks

### 1. Find Noisy Containers
- Use the "Log Volume by Container" panel
- Look for tall bars = lots of logs
- Consider adjusting log levels for those containers

### 2. Debug Application Issues
1. Set the time range to when the issue occurred
2. Filter to the specific container
3. Search for error keywords
4. Expand log details for full context

### 3. Monitor in Real-Time
1. Set the time range to "Last 5 minutes"
2. Enable auto-refresh (5s or 10s)
3. Open the "Docker Container Logs" panel
4. Watch logs stream live

### 4. Export Logs
- Click on any log line
- Click the "Copy" icon to copy the log text
- Or use the Loki API directly for bulk export
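
The bulk export can be sketched against Loki's `query_range` HTTP endpoint. This assumes Loki's API is reachable on `localhost:3100` and reuses the `docker_all` job label from the queries above; adjust the host, query, and time window to your setup:

```shell
#!/bin/sh
# Hedged sketch: export the last hour of one container's logs via
# Loki's query_range endpoint. Timestamps are nanosecond epochs,
# built here by appending nine zeros to a second-resolution epoch.
QUERY='{job="docker_all", container="traefik"}'
END=$(date +%s)000000000
START=$(( $(date +%s) - 3600 ))000000000
curl -sG "http://localhost:3100/loki/api/v1/query_range" \
  --data-urlencode "query=${QUERY}" \
  --data-urlencode "start=${START}" \
  --data-urlencode "end=${END}" \
  --data-urlencode "limit=5000" > logs-export.json
```

The response is JSON; pipe it through `jq` if you want plain log lines.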

### 5. Create Alerts
In Grafana, you can create alerts based on log patterns:
- Alert if errors exceed a threshold
- Alert if a specific pattern is detected
- Alert if a container stops logging (it might be down)

## Troubleshooting

### No logs showing
1. Check Promtail is running: `docker ps | grep promtail`
2. Verify the Loki datasource in Grafana is configured
3. Check the time range (logs might be older/newer)
4. Verify containers are actually logging

### Slow dashboard
- Narrow the time range (use last 15m instead of 24h)
- Use the container filter to reduce data
- Increase the refresh interval to 30s or 1m

### Missing containers
The current Promtail config captures ALL Docker containers automatically.
If a container is missing, check:
1. The container is running: `docker ps`
2. The container has logs: `docker logs <container>`
3. Promtail can access the Docker socket

## Advanced Customization

### Add a New Panel

1. Click "Add Panel" in the dashboard
2. Select the "Logs" visualization
3. Use a query such as:
   ```logql
   {job="docker_all", container="your-container"}
   ```
4. Configure options (time display, wrapping, etc.)
5. Save the dashboard

### Modify Existing Panels

1. Click the panel title → Edit
2. Modify the LogQL query
3. Adjust visualization options
4. Save changes

### Export Dashboard

1. Dashboard settings (gear icon)
2. JSON Model
3. Copy the JSON
4. Save it to a file for backup

## Integration with Other Tools

### View in Explore
- Click "Explore" on any panel
- Opens the Loki Explore interface
- More advanced querying options
- Better for ad-hoc investigation

### Share Dashboard
1. Click the share icon (next to the title)
2. Get a shareable link
3. Or export a snapshot

### Embed in Other Apps
Use Grafana's embedding features to show logs in:
- Homarr dashboard
- Custom web apps
- Monitoring tools

## Related Resources

- [LogQL Documentation](https://grafana.com/docs/loki/latest/logql/)
- [Grafana Dashboards Guide](https://grafana.com/docs/grafana/latest/dashboards/)
- [Loki Best Practices](https://grafana.com/docs/loki/latest/best-practices/)

## Support

For issues with:
- **Dashboard**: Edit and customize as needed
- **Loki**: Check `/home/eduardo_figueroa/homelab/compose/monitoring/logging/`
- **Missing logs**: Verify the Promtail configuration

Dashboard file location:
```
/home/eduardo_figueroa/homelab/compose/monitoring/logging/grafana-provisioning/dashboards/docker-logs.json
```
@@ -1,527 +0,0 @@
# Centralized Logging Stack

Grafana Loki + Promtail + Grafana for centralized Docker container log aggregation and visualization.

## Overview

This stack provides centralized logging for all Docker containers in your homelab:

- **Loki**: Log aggregation backend (like Prometheus, but for logs)
- **Promtail**: Agent that collects logs from Docker containers
- **Grafana**: Web UI for querying and visualizing logs

### Why This Stack?

- ✅ **Lightweight**: Minimal resource usage compared to the ELK stack
- ✅ **Docker-native**: Automatically discovers and collects logs from all containers
- ✅ **Powerful search**: LogQL query language for filtering and searching
- ✅ **Retention**: Configurable log retention (default: 30 days)
- ✅ **Labels**: Automatic labeling by container, image, and compose project
- ✅ **Integrated**: Works seamlessly with existing homelab services

## Quick Start

### 1. Configure Environment

```bash
cd ~/homelab/compose/monitoring/logging
nano .env
```

**Update:**
```env
# Change this!
GF_SECURITY_ADMIN_PASSWORD=<your-strong-password>
```

### 2. Deploy the Stack

```bash
docker compose up -d
```

### 3. Access Grafana

Go to: **https://logs.fig.systems**

**Default credentials:**
- Username: `admin`
- Password: `<your GF_SECURITY_ADMIN_PASSWORD>`

**⚠️ Change the password immediately after first login!**

### 4. View Logs

1. Click "Explore" (compass icon) in the left sidebar
2. Select the "Loki" datasource (should be selected by default)
3. Start querying logs!

## Usage

### Basic Log Queries

**View all logs from a container:**
```logql
{container="jellyfin"}
```

**View logs from a compose project:**
```logql
{compose_project="media"}
```

**View logs from a specific service:**
```logql
{compose_service="lldap"}
```

**Filter by log level:**
```logql
{container="immich_server"} |= "error"
```

**Exclude lines:**
```logql
{container="traefik"} != "404"
```

**Multiple filters:**
```logql
{container="jellyfin"} |= "error" != "404"
```

### Advanced Queries

**Count errors per minute:**
```logql
sum(count_over_time({container="jellyfin"} |= "error" [1m])) by (container)
```

**Rate of logs:**
```logql
rate({container="traefik"}[5m])
```

**Logs from the last hour:**
```logql
{container="immich_server"} | __timestamp__ >= now() - 1h
```

**Filter by multiple containers:**
```logql
{container=~"jellyfin|immich.*|sonarr"}
```

**Extract and filter JSON:**
```logql
{container="linkwarden"} | json | level="error"
```

## Configuration

### Log Retention

Default: **30 days**

To change the retention period:

**Edit `.env`:**
```env
LOKI_RETENTION_PERIOD=60d  # Keep logs for 60 days
```

**Edit `loki-config.yaml`:**
```yaml
limits_config:
  retention_period: 60d  # Must match .env

table_manager:
  retention_period: 60d  # Must match above
```

**Restart:**
```bash
docker compose restart loki
```

### Adjust Resource Limits

**Edit `loki-config.yaml`:**
```yaml
limits_config:
  ingestion_rate_mb: 10        # MB/sec per stream
  ingestion_burst_size_mb: 20  # Burst size
```

### Add Custom Labels

**Edit `promtail-config.yaml`:**
```yaml
scrape_configs:
  - job_name: docker
    docker_sd_configs:
      - host: unix:///var/run/docker.sock

    relabel_configs:
      # Add custom label
      - source_labels: ['__meta_docker_container_label_environment']
        target_label: 'environment'
```

## How It Works

### Architecture

```
Docker Containers
  ↓ (logs via Docker socket)
Promtail (scrapes and ships)
  ↓ (HTTP push)
Loki (stores and indexes)
  ↓ (LogQL queries)
Grafana (visualization)
```

### Log Collection

Promtail automatically collects logs from:
1. **All Docker containers** via the Docker socket
2. **System logs** from `/var/log`

Logs are labeled with:
- `container`: Container name
- `image`: Docker image
- `compose_project`: Docker Compose project name
- `compose_service`: Service name from compose.yaml
- `stream`: stdout or stderr

### Storage

Logs are stored in:
- **Location**: `./loki-data/`
- **Format**: Compressed chunks
- **Index**: BoltDB
- **Retention**: Automatic cleanup after the retention period

## Integration with Services

### Option 1: Automatic (Default)

Promtail automatically discovers all containers. No changes needed!

### Option 2: Explicit Labels (Recommended)

Add labels to services for better organization:

**Edit any service's `compose.yaml`:**
```yaml
services:
  servicename:
    # ... existing config ...
    labels:
      # ... existing labels ...

      # Add logging labels
      logging: "promtail"
      log_level: "info"
      environment: "production"
```

These labels will be available in Loki for filtering.

### Option 3: Send Logs Directly to Loki

Instead of Promtail scraping, send logs directly:

**Edit the service's `compose.yaml`:**
```yaml
services:
  servicename:
    # ... existing config ...
    logging:
      driver: loki
      options:
        loki-url: "http://loki:3100/loki/api/v1/push"
        loki-external-labels: "container={{.Name}},compose_project={{.Config.Labels[\"com.docker.compose.project\"]}}"
```

**Note**: This requires the Loki Docker driver plugin (not recommended, for simplicity's sake).

## Grafana Dashboards

### Built-in Explore

The best way to start is Grafana's Explore view:
1. Click the "Explore" icon (compass)
2. Select the "Loki" datasource
3. Use the builder to create queries
4. Save interesting queries

### Pre-built Dashboards

You can import community dashboards:

1. Go to Dashboards → Import
2. Use dashboard ID `13639` (Docker logs dashboard)
3. Select "Loki" as the datasource
4. Import

### Create Custom Dashboard

1. Click "+" → "Dashboard"
2. Add a panel
3. Select the Loki datasource
4. Build a query using LogQL
5. Save the dashboard

**Example panels:**
- Error count by container
- Log volume over time
- Top 10 logging containers
- Recent errors table

## Alerting

### Create Log-Based Alerts

1. Go to Alerting → Alert rules
2. Create a new alert rule
3. Query: `sum(count_over_time({container="jellyfin"} |= "error" [5m])) > 10`
4. Set thresholds and notification channels
5. Save

**Example alerts:**
- Too many errors in a container
- Container restarted
- Disk space warnings
- Failed authentication attempts

## Troubleshooting

### Promtail Not Collecting Logs

**Check Promtail is running:**
```bash
docker logs promtail
```

**Verify Docker socket access:**
```bash
docker exec promtail ls -la /var/run/docker.sock
```

**Test the Promtail config:**
```bash
docker exec promtail promtail -config.file=/etc/promtail/config.yaml -dry-run
```

### Loki Not Receiving Logs

**Check Loki health:**
```bash
curl http://localhost:3100/ready
```

**View Loki logs:**
```bash
docker logs loki
```

**Check Promtail is pushing:**
```bash
docker logs promtail | grep -i push
```

### Grafana Can't Connect to Loki

**Test Loki from the Grafana container:**
```bash
docker exec grafana wget -O- http://loki:3100/ready
```

**Check the datasource configuration:**
- Grafana → Configuration → Data sources → Loki
- URL should be: `http://loki:3100`

### No Logs Appearing

**Wait a few minutes** - logs take time to appear

**Check retention:**
```bash
# Logs older than the retention period are deleted
grep retention_period loki-config.yaml
```

**Verify the time range in Grafana:**
- Make sure the selected time range includes recent logs
- Try "Last 5 minutes"

### High Disk Usage

**Check Loki data size:**
```bash
du -sh ./loki-data
```

**Reduce retention:**
```env
LOKI_RETENTION_PERIOD=7d  # Shorter retention
```

**Manual cleanup:**
```bash
# Stop Loki
docker compose stop loki

# Remove old data (CAREFUL!)
rm -rf ./loki-data/chunks/*

# Restart
docker compose start loki
```

## Performance Tuning

### For Low Resources (< 8GB RAM)

**Edit `loki-config.yaml`:**
```yaml
limits_config:
  retention_period: 7d         # Shorter retention
  ingestion_rate_mb: 5         # Lower rate
  ingestion_burst_size_mb: 10  # Lower burst

query_range:
  results_cache:
    cache:
      embedded_cache:
        max_size_mb: 50  # Smaller cache
```

### For High Volume

**Edit `loki-config.yaml`:**
```yaml
limits_config:
  ingestion_rate_mb: 20        # Higher rate
  ingestion_burst_size_mb: 40  # Higher burst

query_range:
  results_cache:
    cache:
      embedded_cache:
        max_size_mb: 200  # Larger cache
```

## Best Practices

### Log Levels

Configure services to log appropriately:
- **Production**: `info` or `warning`
- **Development**: `debug`
- **Troubleshooting**: `trace`

Too much logging means higher resource usage!

### Retention Strategy

- **Critical services**: 60+ days
- **Normal services**: 30 days
- **High-volume services**: 7-14 days

### Query Optimization

- **Use specific labels**: `{container="name"}`, not `{container=~".*"}`
- **Limit the time range**: Query hours, not days, when possible
- **Use filters early**: `|= "error"` before parsing
- **Avoid regex when possible**: `|= "string"` is faster than `|~ "reg.*ex"`
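
Put together, an optimized query applies label selectors first, then cheap line filters, and only then a parser. A hedged illustration (the `status` field is an assumption about the parsed log format, not something this stack guarantees):

```logql
# Labels narrow the streams, the line filter drops lines cheaply,
# and json parsing only runs on the lines that survive.
{container="traefik"} |= "error" | json | status >= 500
```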
|
|
||||||
|
|
||||||
### Storage Management
|
|
||||||
|
|
||||||
Monitor disk usage:
|
|
||||||
```bash
|
|
||||||
# Check regularly
|
|
||||||
du -sh compose/monitoring/logging/loki-data
|
|
||||||
|
|
||||||
# Set up alerts when > 80% disk usage
|
|
||||||
```
## Integration with Homarr

Grafana will automatically appear in the Homarr dashboard. You can also:

### Add a Grafana Widget to Homarr

1. Edit the Homarr dashboard
2. Add an "iFrame" widget
3. URL: `https://logs.fig.systems/d/<dashboard-id>`
4. This embeds Grafana dashboards directly in Homarr

## Backup and Restore

### Backup

```bash
# Backup Loki data
tar czf loki-backup-$(date +%Y%m%d).tar.gz ./loki-data

# Backup Grafana dashboards and datasources
tar czf grafana-backup-$(date +%Y%m%d).tar.gz ./grafana-data ./grafana-provisioning
```
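To keep the backup directory from growing without bound, a small helper like this (hypothetical, not part of the stack) can prune old archives:

```shell
#!/bin/sh
# rotate_backups DIR KEEP - delete all but the KEEP newest *.tar.gz in DIR.
# A sketch; relies on ls -t ordering files newest-first by mtime.
rotate_backups() {
  dir=$1
  keep=$2
  ls -1t "$dir"/*.tar.gz 2>/dev/null | tail -n +"$((keep + 1))" | while read -r old; do
    rm -f -- "$old"
  done
}
```

Example: `rotate_backups ~/backups 7` keeps the seven newest archives.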
### Restore

```bash
# Restore Loki
docker compose down
tar xzf loki-backup-YYYYMMDD.tar.gz
docker compose up -d

# Restore Grafana
docker compose down
tar xzf grafana-backup-YYYYMMDD.tar.gz
docker compose up -d
```

## Updating

```bash
cd ~/homelab/compose/monitoring/logging

# Pull latest images
docker compose pull

# Restart with new images
docker compose up -d
```

## Resource Usage

**Typical usage:**

- **Loki**: 200-500 MB RAM
- **Promtail**: 50-100 MB RAM
- **Grafana**: 100-200 MB RAM
- **Disk**: ~1-5 GB per week (depends on log volume)

## Next Steps

1. ✅ Deploy the stack
2. ✅ Log in to Grafana and explore logs
3. ✅ Create useful dashboards
4. ✅ Set up alerts for errors
5. ✅ Configure retention based on needs
6. ⬜ Add Prometheus for metrics (future)
7. ⬜ Add Tempo for distributed tracing (future)

## Resources

- [Loki Documentation](https://grafana.com/docs/loki/latest/)
- [LogQL Query Language](https://grafana.com/docs/loki/latest/logql/)
- [Promtail Configuration](https://grafana.com/docs/loki/latest/clients/promtail/configuration/)
- [Grafana Tutorials](https://grafana.com/tutorials/)

---

**Now you can see logs from all containers in one place!** 🎉
@@ -1,121 +0,0 @@
# Centralized Logging Stack - Loki + Promtail + Grafana
# Docs: https://grafana.com/docs/loki/latest/

services:
  loki:
    container_name: loki
    image: grafana/loki:3.3.2
    restart: unless-stopped

    env_file:
      - .env

    volumes:
      - ./loki-config.yaml:/etc/loki/local-config.yaml:ro
      - ./loki-data:/loki

    command: -config.file=/etc/loki/local-config.yaml

    networks:
      - homelab
      - logging_internal

    labels:
      # Traefik (for API access)
      traefik.enable: true
      traefik.docker.network: homelab

      # Loki API
      traefik.http.routers.loki.rule: Host(`loki.fig.systems`)
      traefik.http.routers.loki.entrypoints: websecure
      traefik.http.routers.loki.tls.certresolver: letsencrypt
      traefik.http.services.loki.loadbalancer.server.port: 3100

      # SSO Protection

      # Homarr Discovery
      homarr.name: Loki (Logs)
      homarr.group: Monitoring
      homarr.icon: mdi:math-log

    healthcheck:
      test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider http://localhost:3100/ready || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

  promtail:
    container_name: promtail
    image: grafana/promtail:3.3.2
    restart: unless-stopped

    env_file:
      - .env

    volumes:
      - ./promtail-config.yaml:/etc/promtail/config.yaml:ro
      - /var/log:/var/log:ro
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro

    command: -config.file=/etc/promtail/config.yaml

    networks:
      - logging_internal

    depends_on:
      loki:
        condition: service_healthy

  grafana:
    container_name: grafana
    image: grafana/grafana:10.2.3
    restart: unless-stopped

    env_file:
      - .env

    volumes:
      - ./grafana-data:/var/lib/grafana
      - ./grafana-provisioning:/etc/grafana/provisioning

    networks:
      - homelab
      - logging_internal

    depends_on:
      loki:
        condition: service_healthy

    labels:
      # Traefik
      traefik.enable: true
      traefik.docker.network: homelab

      # Grafana Web UI
      traefik.http.routers.grafana.rule: Host(`logs.fig.systems`)
      traefik.http.routers.grafana.entrypoints: websecure
      traefik.http.routers.grafana.tls.certresolver: letsencrypt
      traefik.http.services.grafana.loadbalancer.server.port: 3000

      # SSO Protection (optional - Grafana has its own auth)

      # Homarr Discovery
      homarr.name: Grafana (Logs Dashboard)
      homarr.group: Monitoring
      homarr.icon: mdi:chart-line

    healthcheck:
      test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider http://localhost:3000/api/health || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

networks:
  homelab:
    external: true
  logging_internal:
    name: logging_internal
    driver: bridge
@@ -1,13 +0,0 @@
apiVersion: 1

providers:
  - name: 'Loki Dashboards'
    orgId: 1
    folder: 'Loki'
    type: file
    disableDeletion: false
    updateIntervalSeconds: 10
    allowUiUpdates: true
    options:
      path: /etc/grafana/provisioning/dashboards
      foldersFromFilesStructure: true
@@ -1,703 +0,0 @@
{
  "annotations": {
    "list": [
      {
        "builtIn": 1,
        "datasource": { "type": "grafana", "uid": "-- Grafana --" },
        "enable": true,
        "hide": true,
        "iconColor": "rgba(0, 211, 255, 1)",
        "name": "Annotations & Alerts",
        "type": "dashboard"
      }
    ]
  },
  "editable": true,
  "fiscalYearStartMonth": 0,
  "graphTooltip": 0,
  "id": null,
  "links": [],
  "liveNow": false,
  "panels": [
    {
      "datasource": { "type": "loki", "uid": "${datasource}" },
      "description": "All Docker container logs in real-time",
      "gridPos": { "h": 24, "w": 24, "x": 0, "y": 0 },
      "id": 1,
      "options": {
        "dedupStrategy": "none",
        "enableLogDetails": true,
        "prettifyLogMessage": false,
        "showCommonLabels": false,
        "showLabels": false,
        "showTime": true,
        "sortOrder": "Descending",
        "wrapLogMessage": false
      },
      "pluginVersion": "10.2.3",
      "targets": [
        {
          "datasource": { "type": "loki", "uid": "${datasource}" },
          "editorMode": "code",
          "expr": "{job=\"docker_all\", container=~\"$container\", image=~\"$image\"} |~ \"$search\"",
          "queryType": "range",
          "refId": "A"
        }
      ],
      "title": "Docker Container Logs",
      "type": "logs"
    },
    {
      "datasource": { "type": "loki", "uid": "${datasource}" },
      "description": "Log volume per container over time",
      "fieldConfig": {
        "defaults": {
          "color": { "mode": "palette-classic" },
          "custom": { "axisCenteredZero": false, "axisColorMode": "text", "axisLabel": "", "axisPlacement": "auto", "barAlignment": 0, "drawStyle": "bars", "fillOpacity": 50, "gradientMode": "none", "hideFrom": { "tooltip": false, "viz": false, "legend": false }, "lineInterpolation": "linear", "lineWidth": 1, "pointSize": 5, "scaleDistribution": { "type": "linear" }, "showPoints": "auto", "spanNulls": false, "stacking": { "group": "A", "mode": "normal" }, "thresholdsStyle": { "mode": "off" } },
          "mappings": [],
          "thresholds": { "mode": "absolute", "steps": [ { "color": "green", "value": null } ] },
          "unit": "short"
        },
        "overrides": []
      },
      "gridPos": { "h": 8, "w": 24, "x": 0, "y": 24 },
      "id": 2,
      "options": {
        "legend": { "calcs": [], "displayMode": "list", "placement": "bottom", "showLegend": true },
        "tooltip": { "mode": "single", "sort": "none" }
      },
      "pluginVersion": "10.2.3",
      "targets": [
        {
          "datasource": { "type": "loki", "uid": "${datasource}" },
          "editorMode": "code",
          "expr": "sum by (container) (count_over_time({job=\"docker_all\", container=~\"$container\", image=~\"$image\"} |~ \"$search\" [$__interval]))",
          "legendFormat": "{{container}}",
          "queryType": "range",
          "refId": "A"
        }
      ],
      "title": "Log Volume by Container",
      "type": "timeseries"
    },
    {
      "datasource": { "type": "loki", "uid": "${datasource}" },
      "description": "Count of ERROR level logs by container",
      "fieldConfig": {
        "defaults": {
          "color": { "mode": "palette-classic" },
          "custom": { "axisCenteredZero": false, "axisColorMode": "text", "axisLabel": "", "axisPlacement": "auto", "barAlignment": 0, "drawStyle": "line", "fillOpacity": 20, "gradientMode": "none", "hideFrom": { "tooltip": false, "viz": false, "legend": false }, "lineInterpolation": "linear", "lineWidth": 2, "pointSize": 5, "scaleDistribution": { "type": "linear" }, "showPoints": "auto", "spanNulls": false, "stacking": { "group": "A", "mode": "none" }, "thresholdsStyle": { "mode": "off" } },
          "mappings": [],
          "thresholds": { "mode": "absolute", "steps": [ { "color": "green", "value": null }, { "color": "red", "value": 1 } ] },
          "unit": "short"
        },
        "overrides": []
      },
      "gridPos": { "h": 8, "w": 12, "x": 0, "y": 32 },
      "id": 3,
      "options": {
        "legend": { "calcs": ["last"], "displayMode": "table", "placement": "right", "showLegend": true },
        "tooltip": { "mode": "single", "sort": "none" }
      },
      "pluginVersion": "10.2.3",
      "targets": [
        {
          "datasource": { "type": "loki", "uid": "${datasource}" },
          "editorMode": "code",
          "expr": "sum by (container) (count_over_time({job=\"docker_all\", container=~\"$container\"} |~ \"(?i)(error|exception|fatal|panic)\" [$__interval]))",
          "legendFormat": "{{container}}",
          "queryType": "range",
          "refId": "A"
        }
      ],
      "title": "Error Logs by Container",
      "type": "timeseries"
    },
    {
      "datasource": { "type": "loki", "uid": "${datasource}" },
      "description": "Total log lines per container",
      "fieldConfig": {
        "defaults": {
          "color": { "mode": "thresholds" },
          "mappings": [],
          "thresholds": { "mode": "absolute", "steps": [ { "color": "green", "value": null }, { "color": "yellow", "value": 1000 }, { "color": "red", "value": 10000 } ] },
          "unit": "short"
        },
        "overrides": []
      },
      "gridPos": { "h": 8, "w": 12, "x": 12, "y": 32 },
      "id": 4,
      "options": {
        "displayMode": "gradient",
        "minVizHeight": 10,
        "minVizWidth": 0,
        "orientation": "horizontal",
        "reduceOptions": { "values": false, "calcs": ["lastNotNull"], "fields": "" },
        "showUnfilled": true,
        "text": {}
      },
      "pluginVersion": "10.2.3",
      "targets": [
        {
          "datasource": { "type": "loki", "uid": "${datasource}" },
          "editorMode": "code",
          "expr": "sum by (container) (count_over_time({job=\"docker_all\", container=~\"$container\", image=~\"$image\"} |~ \"$search\" [$__range]))",
          "legendFormat": "{{container}}",
          "queryType": "instant",
          "refId": "A"
        }
      ],
      "title": "Total Logs by Container (Time Range)",
      "type": "bargauge"
    },
    {
      "datasource": { "type": "loki", "uid": "${datasource}" },
      "description": "Statistics about container logging",
      "fieldConfig": {
        "defaults": {
          "color": { "mode": "thresholds" },
          "mappings": [],
          "thresholds": { "mode": "absolute", "steps": [ { "color": "green", "value": null } ] }
        },
        "overrides": []
      },
      "gridPos": { "h": 6, "w": 6, "x": 0, "y": 40 },
      "id": 5,
      "options": {
        "colorMode": "value",
        "graphMode": "area",
        "justifyMode": "auto",
        "orientation": "auto",
        "reduceOptions": { "values": false, "calcs": ["lastNotNull"], "fields": "" },
        "textMode": "auto"
      },
      "pluginVersion": "10.2.3",
      "targets": [
        {
          "datasource": { "type": "loki", "uid": "${datasource}" },
          "editorMode": "code",
          "expr": "count(count by (container) (count_over_time({job=\"docker_all\"} [$__range])))",
          "legendFormat": "Active Containers",
          "queryType": "instant",
          "refId": "A"
        }
      ],
      "title": "Active Containers",
      "type": "stat"
    },
    {
      "datasource": { "type": "loki", "uid": "${datasource}" },
      "description": "Total log entries in selected time range",
      "fieldConfig": {
        "defaults": {
          "color": { "mode": "thresholds" },
          "mappings": [],
          "thresholds": { "mode": "absolute", "steps": [ { "color": "green", "value": null }, { "color": "yellow", "value": 10000 }, { "color": "red", "value": 100000 } ] },
          "unit": "short"
        },
        "overrides": []
      },
      "gridPos": { "h": 6, "w": 6, "x": 6, "y": 40 },
      "id": 6,
      "options": {
        "colorMode": "value",
        "graphMode": "area",
        "justifyMode": "auto",
        "orientation": "auto",
        "reduceOptions": { "values": false, "calcs": ["lastNotNull"], "fields": "" },
        "textMode": "auto"
      },
      "pluginVersion": "10.2.3",
      "targets": [
        {
          "datasource": { "type": "loki", "uid": "${datasource}" },
          "editorMode": "code",
          "expr": "sum(count_over_time({job=\"docker_all\", container=~\"$container\", image=~\"$image\"} |~ \"$search\" [$__range]))",
          "legendFormat": "Total Logs",
          "queryType": "instant",
          "refId": "A"
        }
      ],
      "title": "Total Log Lines",
      "type": "stat"
    },
    {
      "datasource": { "type": "loki", "uid": "${datasource}" },
      "description": "Total errors in selected time range",
      "fieldConfig": {
        "defaults": {
          "color": { "mode": "thresholds" },
          "mappings": [],
          "thresholds": { "mode": "absolute", "steps": [ { "color": "green", "value": null }, { "color": "yellow", "value": 10 }, { "color": "red", "value": 100 } ] },
          "unit": "short"
        },
        "overrides": []
      },
      "gridPos": { "h": 6, "w": 6, "x": 12, "y": 40 },
      "id": 7,
      "options": {
        "colorMode": "value",
        "graphMode": "area",
        "justifyMode": "auto",
        "orientation": "auto",
        "reduceOptions": { "values": false, "calcs": ["lastNotNull"], "fields": "" },
        "textMode": "auto"
      },
      "pluginVersion": "10.2.3",
      "targets": [
        {
          "datasource": { "type": "loki", "uid": "${datasource}" },
          "editorMode": "code",
          "expr": "sum(count_over_time({job=\"docker_all\", container=~\"$container\"} |~ \"(?i)(error|exception|fatal|panic)\" [$__range]))",
          "legendFormat": "Errors",
          "queryType": "instant",
          "refId": "A"
        }
      ],
      "title": "Total Errors",
      "type": "stat"
    },
    {
      "datasource": { "type": "loki", "uid": "${datasource}" },
      "description": "Logs per second rate",
      "fieldConfig": {
        "defaults": {
          "color": { "mode": "thresholds" },
          "mappings": [],
          "thresholds": { "mode": "absolute", "steps": [ { "color": "green", "value": null }, { "color": "yellow", "value": 50 }, { "color": "red", "value": 200 } ] },
          "unit": "logs/s"
        },
        "overrides": []
      },
      "gridPos": { "h": 6, "w": 6, "x": 18, "y": 40 },
      "id": 8,
      "options": {
        "colorMode": "value",
        "graphMode": "area",
        "justifyMode": "auto",
        "orientation": "auto",
        "reduceOptions": { "values": false, "calcs": ["lastNotNull"], "fields": "" },
        "textMode": "auto"
      },
      "pluginVersion": "10.2.3",
      "targets": [
        {
          "datasource": { "type": "loki", "uid": "${datasource}" },
          "editorMode": "code",
          "expr": "sum(rate({job=\"docker_all\", container=~\"$container\", image=~\"$image\"} |~ \"$search\" [$__rate_interval]))",
          "legendFormat": "Rate",
          "queryType": "instant",
          "refId": "A"
        }
      ],
      "title": "Log Rate",
      "type": "stat"
    }
  ],
  "refresh": "10s",
  "schemaVersion": 38,
  "style": "dark",
  "tags": ["docker", "logs", "loki"],
  "templating": {
    "list": [
      {
        "current": { "selected": false, "text": "Loki", "value": "Loki" },
        "hide": 0,
        "includeAll": false,
        "label": "Datasource",
        "multi": false,
        "name": "datasource",
        "options": [],
        "query": "loki",
        "refresh": 1,
        "regex": "",
        "skipUrlSync": false,
        "type": "datasource"
      },
      {
        "allValue": ".*",
        "current": { "selected": true, "text": "All", "value": "$__all" },
        "datasource": { "type": "loki", "uid": "${datasource}" },
        "definition": "label_values(container)",
        "hide": 0,
        "includeAll": true,
        "label": "Container",
        "multi": true,
        "name": "container",
        "options": [],
        "query": { "qryType": 1, "query": "label_values(container)" },
        "refresh": 1,
        "regex": "",
        "skipUrlSync": false,
        "sort": 1,
        "type": "query"
      },
      {
        "allValue": ".*",
        "current": { "selected": true, "text": "All", "value": "$__all" },
        "datasource": { "type": "loki", "uid": "${datasource}" },
        "definition": "label_values(image)",
        "hide": 0,
        "includeAll": true,
        "label": "Image",
        "multi": true,
        "name": "image",
        "options": [],
        "query": { "qryType": 1, "query": "label_values(image)" },
        "refresh": 1,
        "regex": "",
        "skipUrlSync": false,
        "sort": 1,
        "type": "query"
      },
      {
        "current": { "selected": false, "text": "", "value": "" },
        "description": "Search within log messages (regex supported)",
        "hide": 0,
        "label": "Search",
        "name": "search",
        "options": [ { "selected": true, "text": "", "value": "" } ],
        "query": "",
        "skipUrlSync": false,
        "type": "textbox"
      }
    ]
  },
  "time": { "from": "now-1h", "to": "now" },
  "timepicker": {
    "refresh_intervals": ["5s", "10s", "30s", "1m", "5m", "15m", "30m", "1h"]
  },
  "timezone": "",
  "title": "Docker Logs - All Containers",
  "uid": "docker-logs-all",
  "version": 1,
  "weekStart": ""
}
@@ -1,17 +0,0 @@
apiVersion: 1

datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100
    isDefault: true
    editable: true
    jsonData:
      maxLines: 1000
      derivedFields:
        # Extract traceID from logs for distributed tracing (optional)
        - datasourceUid: tempo
          matcherRegex: "traceID=(\\w+)"
          name: TraceID
          url: "$${__value.raw}"
@@ -1,53 +0,0 @@
auth_enabled: false

server:
  http_listen_port: 3100
  grpc_listen_port: 9096

common:
  instance_addr: 127.0.0.1
  path_prefix: /loki
  storage:
    filesystem:
      chunks_directory: /loki/chunks
      rules_directory: /loki/rules
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory

query_range:
  results_cache:
    cache:
      embedded_cache:
        enabled: true
        max_size_mb: 100

schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h

ruler:
  alertmanager_url: http://localhost:9093

# Retention - keeps logs for 30 days
limits_config:
  retention_period: 30d
  ingestion_rate_mb: 10
  ingestion_burst_size_mb: 20
  allow_structured_metadata: false

# Cleanup old logs
compactor:
  working_directory: /loki/compactor
  compaction_interval: 10m
  retention_enabled: true
  retention_delete_delay: 2h
  retention_delete_worker_count: 150
  delete_request_store: filesystem
@@ -1,70 +0,0 @@
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  # Docker containers logs
  - job_name: docker
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
        refresh_interval: 5s
        filters:
          - name: label
            values: ["logging=promtail"]

    relabel_configs:
      # Use container name as job
      - source_labels: ['__meta_docker_container_name']
        regex: '/(.*)'
        target_label: 'container'

      # Use image name
      - source_labels: ['__meta_docker_container_image']
        target_label: 'image'

      # Use container ID
      - source_labels: ['__meta_docker_container_id']
        target_label: 'container_id'

      # Add all docker labels as labels
      - action: labelmap
        regex: __meta_docker_container_label_(.+)

  # All Docker containers (fallback)
  - job_name: docker_all
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
        refresh_interval: 5s

    relabel_configs:
      - source_labels: ['__meta_docker_container_name']
        regex: '/(.*)'
        target_label: 'container'

      - source_labels: ['__meta_docker_container_image']
        target_label: 'image'

      - source_labels: ['__meta_docker_container_log_stream']
        target_label: 'stream'

      # Extract compose project and service
      - source_labels: ['__meta_docker_container_label_com_docker_compose_project']
        target_label: 'compose_project'

      - source_labels: ['__meta_docker_container_label_com_docker_compose_service']
        target_label: 'compose_service'

  # System logs
  - job_name: system
    static_configs:
      - targets:
          - localhost
        labels:
          job: varlogs
          __path__: /var/log/*log
compose/services/bentopdf/compose.yaml (new file, 34 lines)
@@ -0,0 +1,34 @@
# BentoPDF - Privacy-first, client-side PDF toolkit
# Docs: https://github.com/alam00000/bentopdf

services:
  bentopdf:
    container_name: bentopdf
    image: bentopdf/bentopdf:latest
    restart: unless-stopped

    networks:
      - homelab

    labels:
      # Traefik
      traefik.enable: true
      traefik.docker.network: homelab

      # Web UI
      traefik.http.routers.bentopdf.rule: Host(`pdf.fig.systems`)
      traefik.http.routers.bentopdf.entrypoints: websecure
      traefik.http.routers.bentopdf.tls.certresolver: letsencrypt
      traefik.http.services.bentopdf.loadbalancer.server.port: 8080

      # SSO Protection
      traefik.http.routers.bentopdf.middlewares: authelia

      # Homarr Discovery
      homarr.name: BentoPDF (PDF Tools)
      homarr.group: Services
      homarr.icon: mdi:file-pdf-box

networks:
  homelab:
    external: true
@@ -1,8 +0,0 @@
# Booklore Configuration

# Timezone
TZ=America/Los_Angeles

# User and Group IDs
PUID=1000
PGID=1000
@@ -1,39 +0,0 @@
# Booklore - Book tracking and management
# Docs: https://github.com/lorebooks/booklore

services:
  booklore:
    container_name: booklore
    image: ghcr.io/lorebooks/booklore:latest
    restart: unless-stopped

    env_file:
      - .env

    volumes:
      - ./data:/app/data

    networks:
      - homelab

    labels:
      # Traefik
      traefik.enable: true
      traefik.docker.network: homelab

      # Web UI
      traefik.http.routers.booklore.rule: Host(`booklore.fig.systems`)
      traefik.http.routers.booklore.entrypoints: websecure
      traefik.http.routers.booklore.tls.certresolver: letsencrypt
      traefik.http.services.booklore.loadbalancer.server.port: 3000

      # SSO Protection

      # Homarr Discovery
      homarr.name: Booklore
      homarr.group: Services
      homarr.icon: mdi:book-open-variant

networks:
  homelab:
    external: true
@@ -1,11 +0,0 @@
# Calibre-web Configuration

# Timezone
TZ=America/Los_Angeles

# User and Group IDs
PUID=1000
PGID=1000

# Docker mods (optional - for ebook conversion)
# DOCKER_MODS=linuxserver/mods:universal-calibre
@@ -1,11 +0,0 @@
# Calibre-web - Web app for browsing, reading and downloading eBooks
# Docs: https://hub.docker.com/r/linuxserver/calibre-web

services:
  calibre-web:
    container_name: calibre-web
    image: lscr.io/linuxserver/calibre-web:latest

    env_file:
      - .env
35  compose/services/dockhand/compose.yaml  Normal file
@@ -0,0 +1,35 @@
# Dockhand - Docker Management UI
# Source: https://github.com/fnsys/dockhand

services:
  dockhand:
    image: fnsys/dockhand:latest
    container_name: dockhand
    restart: unless-stopped
    user: "0:0"
    env_file:
      - .env
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./data:/app/data
    networks:
      - homelab
    labels:
      traefik.enable: true
      traefik.docker.network: homelab
      traefik.http.routers.dockhand.rule: Host(`dockhand.fig.systems`)
      traefik.http.routers.dockhand.entrypoints: websecure
      traefik.http.routers.dockhand.tls.certresolver: letsencrypt
      traefik.http.services.dockhand.loadbalancer.server.port: 3000

      # SSO Protection
      traefik.http.routers.dockhand.middlewares: authelia

      # Homarr Discovery
      homarr.name: Dockhand
      homarr.group: Infrastructure
      homarr.icon: mdi:docker

networks:
  homelab:
    external: true
@@ -1,11 +1,36 @@
-# File Browser - Web-based file manager
-# Docs: https://filebrowser.org/
-
+version: '2'
 services:
-  filebrowser:
-    container_name: filebrowser
-    image: filebrowser/filebrowser:latest
-    env_file:
-      - .env
+  app:
+    container_name: filestash
+    image: machines/filestash:latest
+    restart: always
+    environment:
+      - APPLICATION_URL=
+      - CANARY=true
+      - OFFICE_URL=http://wopi_server:9980
+      - OFFICE_FILESTASH_URL=http://app:8334
+      - OFFICE_REWRITE_URL=http://127.0.0.1:9980
+    ports:
+      - "8334:8334"
+    volumes:
+      - filestash:/app/data/state/
+
+  wopi_server:
+    container_name: filestash_wopi
+    image: collabora/code:24.04.10.2.1
+    restart: always
+    environment:
+      - "extra_params=--o:ssl.enable=false"
+      - aliasgroup1="https://.*:443"
+    command:
+      - /bin/bash
+      - -c
+      - |
+        curl -o /usr/share/coolwsd/browser/dist/branding-desktop.css https://gist.githubusercontent.com/mickael-kerjean/bc1f57cd312cf04731d30185cc4e7ba2/raw/d706dcdf23c21441e5af289d871b33defc2770ea/destop.css
+        /bin/su -s /bin/bash -c '/start-collabora-online.sh' cool
+    user: root
+    ports:
+      - "9980:9980"
+
+volumes:
+  filestash: {}
@@ -1,14 +0,0 @@
# Homarr Configuration

# Timezone
TZ=America/Los_Angeles

# Base path (if behind reverse proxy with path)
# BASE_URL=/dashboard

# Port (default: 7575)
PORT=7575

# Authentication
# AUTH_PROVIDER=oidc  # For SSO integration
# DEFAULT_COLOR_SCHEME=dark
@@ -1,332 +0,0 @@
# Homarr Dashboard

Modern, customizable dashboard with automatic Docker service discovery.

## Features

- 🎨 **Modern UI** - Beautiful, responsive design
- 🔍 **Auto-Discovery** - Automatically finds Docker services
- 📊 **Widgets** - System stats, weather, calendar, RSS, etc.
- 🏷️ **Labels** - Organize services by category
- 🔗 **Integration** - Connects to *arr apps, Jellyfin, etc.
- 🎯 **Customizable** - Drag-and-drop layout
- 🌙 **Dark Mode** - Built-in dark theme
- 📱 **Mobile Friendly** - Works on all devices

## Access

- **URL:** https://home.fig.systems or https://home.edfig.dev
- **Port:** 7575 (if accessing directly)

## First-Time Setup

### 1. Deploy Homarr

```bash
cd compose/services/homarr
docker compose up -d
```

### 2. Access Dashboard

Open https://home.fig.systems in your browser.

### 3. Auto-Discovery

Homarr will automatically detect services with these labels:

```yaml
labels:
  homarr.name: "Service Name"
  homarr.group: "Category"
  homarr.icon: "/icons/service.png"
  homarr.href: "https://service.fig.systems"
```

## Adding Services to Dashboard

### Automatic (Recommended)

Add labels to your service's `compose.yaml`:

```yaml
labels:
  # Traefik labels...
  traefik.enable: true
  # ... etc

  # Homarr labels
  homarr.name: Jellyfin
  homarr.group: Media
  homarr.icon: https://cdn.jsdelivr.net/gh/walkxcode/dashboard-icons/png/jellyfin.png
  homarr.href: https://flix.fig.systems
```

Redeploy the service:
```bash
docker compose up -d
```

Homarr will automatically add it to the dashboard!
### Manual

1. Click the "+" button in Homarr
2. Select "Add Service"
3. Fill in:
   - **Name:** Service name
   - **URL:** https://service.fig.systems
   - **Icon:** Choose from library or custom URL
   - **Category:** Group services (Media, Services, etc.)

## Integration with Services

### Jellyfin

Add to Jellyfin's `compose.yaml`:
```yaml
labels:
  homarr.name: Jellyfin
  homarr.group: Media
  homarr.icon: /icons/jellyfin.png
  homarr.widget.type: jellyfin
  homarr.widget.url: http://jellyfin:8096
  homarr.widget.key: ${JELLYFIN_API_KEY}
```

Shows: Currently playing, library stats

### Sonarr/Radarr

```yaml
labels:
  homarr.name: Sonarr
  homarr.group: Media Automation
  homarr.icon: /icons/sonarr.png
  homarr.widget.type: sonarr
  homarr.widget.url: http://sonarr:8989
  homarr.widget.key: ${SONARR_API_KEY}
```

Shows: Queue, calendar, missing episodes

### qBittorrent

```yaml
labels:
  homarr.name: qBittorrent
  homarr.group: Downloads
  homarr.icon: /icons/qbittorrent.png
  homarr.widget.type: qbittorrent
  homarr.widget.url: http://qbittorrent:8080
  homarr.widget.username: ${QBIT_USERNAME}
  homarr.widget.password: ${QBIT_PASSWORD}
```

Shows: Active torrents, download speed
## Available Widgets

### System Monitoring
- **CPU Usage** - Real-time CPU stats
- **Memory Usage** - RAM usage
- **Disk Space** - Storage capacity
- **Network** - Upload/download speeds

### Services
- **Jellyfin** - Media server stats
- **Sonarr** - TV show automation
- **Radarr** - Movie automation
- **Lidarr** - Music automation
- **Readarr** - Book automation
- **Prowlarr** - Indexer management
- **SABnzbd** - Usenet downloads
- **qBittorrent** - Torrent downloads
- **Overseerr/Jellyseerr** - Media requests

### Utilities
- **Weather** - Local weather forecast
- **Calendar** - Events and tasks
- **RSS Feeds** - News aggregator
- **Docker** - Container status
- **Speed Test** - Internet speed
- **Notes** - Sticky notes
- **Iframe** - Embed any website

## Customization

### Change Theme

1. Click settings icon (⚙️)
2. Go to "Appearance"
3. Choose color scheme
4. Save

### Reorganize Layout

1. Click edit mode (✏️)
2. Drag and drop services
3. Resize widgets
4. Click save

### Add Categories

1. Click "Add Category"
2. Name it (e.g., "Media", "Tools", "Infrastructure")
3. Drag services into categories
4. Collapse/expand as needed

### Custom Icons

**Option 1: Use Icon Library**
- Homarr includes icons from [Dashboard Icons](https://github.com/walkxcode/dashboard-icons)
- Search by service name

**Option 2: Custom URL**
```
https://cdn.jsdelivr.net/gh/walkxcode/dashboard-icons/png/service.png
```

**Option 3: Local Icons**
- Place in `./icons/` directory
- Reference as `/icons/service.png`
## Recommended Dashboard Layout

```
┌─────────────────────────────────────────┐
│ 🏠 Homelab Dashboard                    │
├─────────────────────────────────────────┤
│ [System Stats] [Weather] [Calendar]     │
├─────────────────────────────────────────┤
│ 📺 Media                                │
│ [Jellyfin] [Jellyseerr] [Immich]        │
├─────────────────────────────────────────┤
│ 🤖 Media Automation                     │
│ [Sonarr] [Radarr] [qBittorrent]         │
├─────────────────────────────────────────┤
│ 🛠️ Services                             │
│ [Linkwarden] [Vikunja] [FreshRSS]       │
├─────────────────────────────────────────┤
│ 🔧 Infrastructure                       │
│ [Traefik] [LLDAP] [Tinyauth]            │
└─────────────────────────────────────────┘
```

## Add to All Services

To make all your services auto-discoverable, add these labels:

### Jellyfin
```yaml
homarr.name: Jellyfin
homarr.group: Media
homarr.icon: https://cdn.jsdelivr.net/gh/walkxcode/dashboard-icons/png/jellyfin.png
```

### Jellyseerr
```yaml
homarr.name: Jellyseerr
homarr.group: Media
homarr.icon: https://cdn.jsdelivr.net/gh/walkxcode/dashboard-icons/png/jellyseerr.png
```

### Immich
```yaml
homarr.name: Immich Photos
homarr.group: Media
homarr.icon: https://cdn.jsdelivr.net/gh/walkxcode/dashboard-icons/png/immich.png
```

### Sonarr/Radarr/SABnzbd/qBittorrent
```yaml
homarr.name: [Service]
homarr.group: Automation
homarr.icon: https://cdn.jsdelivr.net/gh/walkxcode/dashboard-icons/png/[service].png
```

### Linkwarden/Vikunja/etc.
```yaml
homarr.name: [Service]
homarr.group: Utilities
homarr.icon: https://cdn.jsdelivr.net/gh/walkxcode/dashboard-icons/png/[service].png
```

## Mobile Access

Homarr is fully responsive. For best mobile experience:

1. Add to home screen (iOS/Android)
2. Works as PWA (Progressive Web App)
3. Touch-optimized interface

## Backup Configuration

### Backup
```bash
cd compose/services/homarr
tar -czf homarr-backup-$(date +%Y%m%d).tar.gz config/ data/
```

### Restore
```bash
cd compose/services/homarr
tar -xzf homarr-backup-YYYYMMDD.tar.gz
docker compose restart
```
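The one-liner above is easy to automate; this is a cron-friendly sketch, assuming Homarr keeps its state in `config/` and `data/` as shown. The `HOMARR_DIR` default is a throwaway sandbox directory purely so the example runs standalone; on the real host it would point at `compose/services/homarr`.

```shell
#!/bin/sh
# Sketch: nightly Homarr backup with 7-archive retention (cron e.g. `0 3 * * *`).
# HOMARR_DIR defaults to a sandbox here (assumption); real dirs already exist.
HOMARR_DIR="${HOMARR_DIR:-$(mktemp -d)}"
mkdir -p "$HOMARR_DIR/config" "$HOMARR_DIR/data"   # sandbox setup only
cd "$HOMARR_DIR" || exit 1

ARCHIVE="homarr-backup-$(date +%Y%m%d).tar.gz"
tar -czf "$ARCHIVE" config/ data/

# Keep only the 7 newest archives; older ones are pruned.
ls -1t homarr-backup-*.tar.gz 2>/dev/null | tail -n +8 | xargs -r rm -f --
echo "wrote $ARCHIVE"
```

Dropping it in `/etc/cron.daily/` (or a systemd timer) covers the "Backup Config regularly" tip below without manual steps.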
## Troubleshooting

### Services not auto-discovered

Check the Homarr logs for Docker socket permission errors:
```bash
docker logs homarr
```

Verify labels on the service:
```bash
docker inspect service-name | grep homarr
```

### Can't connect to services

Services must be on the same Docker network or accessible via hostname.

Use container names, not `localhost`:
- ✅ `http://jellyfin:8096`
- ❌ `http://localhost:8096`

### Widgets not working

1. Check API keys are correct
2. Verify service URLs (use container names)
3. Check service is running: `docker ps`

## Alternatives Considered

| Dashboard | Auto-Discovery | Widgets | Complexity |
|-----------|---------------|---------|------------|
| **Homarr** | ✅ Excellent | ✅ Many | Low |
| Homepage | ✅ Good | ✅ Many | Low |
| Heimdall | ❌ Manual | ❌ Few | Very Low |
| Dashy | ⚠️ Limited | ✅ Some | Medium |
| Homer | ❌ Manual | ❌ None | Very Low |
| Organizr | ⚠️ Limited | ✅ Many | High |

**Homarr chosen for:** Best balance of features, auto-discovery, and ease of use.

## Resources

- [Official Docs](https://homarr.dev/docs)
- [GitHub](https://github.com/ajnart/homarr)
- [Discord Community](https://discord.gg/aCsmEV5RgA)
- [Icon Library](https://github.com/walkxcode/dashboard-icons)

## Tips

1. **Start Simple** - Add core services first, expand later
2. **Use Categories** - Group related services
3. **Enable Widgets** - Make the dashboard informative
4. **Mobile First** - Test on phone/tablet
5. **Backup Config** - Save your layout regularly
@@ -1,38 +0,0 @@
# Homarr - Modern dashboard with Docker auto-discovery
# Docs: https://homarr.dev/docs/getting-started/installation
# GitHub: https://github.com/ajnart/homarr

services:
  homarr:
    container_name: homarr
    image: ghcr.io/ajnart/homarr:latest
    restart: unless-stopped

    env_file:
      - .env

    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./configs:/app/data/configs
      - ./icons:/app/public/icons
      - ./data:/data

    networks:
      - homelab

    labels:
      # Traefik
      traefik.enable: true
      traefik.docker.network: homelab

      # Web UI
      traefik.http.routers.homarr.rule: Host(`dashboard.fig.systems`)
      traefik.http.routers.homarr.entrypoints: websecure
      traefik.http.routers.homarr.tls.certresolver: letsencrypt
      traefik.http.services.homarr.loadbalancer.server.port: 7575

      # Optional: SSO Protection (disabled for dashboard access)

networks:
  homelab:
    external: true
File diff suppressed because it is too large
@@ -1,543 +0,0 @@
# Karakeep - Bookmark Everything App

AI-powered bookmark manager for links, notes, images, and PDFs with automatic tagging and full-text search.

## Overview

**Karakeep** (previously known as Hoarder) is a self-hostable bookmark-everything app:

- ✅ **Bookmark Everything**: Links, notes, images, PDFs
- ✅ **AI-Powered**: Automatic tagging and summarization
- ✅ **Full-Text Search**: Find anything instantly with Meilisearch
- ✅ **Web Archiving**: Save complete webpages (full page archive)
- ✅ **Browser Extensions**: Chrome and Firefox support
- ✅ **Mobile Apps**: iOS and Android apps available
- ✅ **Ollama Support**: Use local AI models (no cloud required!)
- ✅ **OCR**: Extract text from images
- ✅ **Self-Hosted**: Full control of your data

## Quick Start

### 1. Configure Secrets

```bash
cd ~/homelab/compose/services/karakeep

# Edit .env and update:
# - NEXTAUTH_SECRET (generate with: openssl rand -base64 36)
# - MEILI_MASTER_KEY (generate with: openssl rand -base64 36)
nano .env
```

### 2. Deploy

```bash
docker compose up -d
```

### 3. Access

Go to: **https://links.fig.systems**

**First-time setup:**
1. Create your admin account
2. Start bookmarking!
## Features

### Bookmark Types

**1. Web Links**
- Save any URL
- Automatic screenshot capture
- Full webpage archiving
- Extract title, description, favicon
- AI-generated summary and tags

**2. Notes**
- Quick text notes
- Markdown support
- AI-powered categorization
- Full-text searchable

**3. Images**
- Upload images directly
- OCR text extraction (if enabled)
- AI-based tagging
- Image search

**4. PDFs**
- Upload PDF documents
- Full-text indexing
- Searchable content

### AI Features

Karakeep can use AI to automatically:
- **Tag** your bookmarks
- **Summarize** web content
- **Extract** key information
- **Organize** by category

**Three AI options:**

**1. Ollama (Recommended - Local & Free)**
```env
# In .env, uncomment:
OLLAMA_BASE_URL=http://ollama:11434
INFERENCE_TEXT_MODEL=llama3.2:3b
INFERENCE_IMAGE_MODEL=llava:7b
```

**2. OpenAI**
```env
OPENAI_API_KEY=sk-...
OPENAI_BASE_URL=https://api.openai.com/v1
INFERENCE_TEXT_MODEL=gpt-4o-mini
```

**3. OpenRouter (multiple providers)**
```env
OPENAI_API_KEY=sk-or-v1-...
OPENAI_BASE_URL=https://openrouter.ai/api/v1
INFERENCE_TEXT_MODEL=anthropic/claude-3.5-sonnet
```
### Web Archiving

Karakeep saves complete web pages for offline viewing:
- **Full HTML archive**
- **Screenshots** of the page
- **Extracted text** for search
- **Works offline** - view archived pages anytime

### Search

Powered by Meilisearch:
- **Instant** full-text search
- **Fuzzy matching** - finds similar terms
- **Filter by** type, tags, dates
- **Search across** titles, content, tags, notes

### Browser Extensions

**Install extensions:**
- [Chrome Web Store](https://chromewebstore.google.com/detail/karakeep/kbkejgonjhbmhcaofkhdegeoeoemgkdm)
- [Firefox Add-ons](https://addons.mozilla.org/en-US/firefox/addon/karakeep/)

**Configure extension:**
1. Install extension
2. Click extension icon
3. Enter server URL: `https://links.fig.systems`
4. Login with your credentials
5. Save bookmarks from any page!

### Mobile Apps

**Download apps:**
- [iOS App Store](https://apps.apple.com/app/karakeep/id6479258022)
- [Android Google Play](https://play.google.com/store/apps/details?id=app.karakeep.mobile)

**Setup:**
1. Install app
2. Open app
3. Enter server: `https://links.fig.systems`
4. Login
5. Bookmark on the go!
## Configuration

### Basic Settings

**Disable public signups:**
```env
DISABLE_SIGNUPS=true
```

**Set max file size (100MB default):**
```env
MAX_ASSET_SIZE_MB=100
```

**Enable OCR for multiple languages:**
```env
OCR_LANGS=eng,spa,fra,deu
```

### Ollama Integration

**Prerequisites:**
1. Deploy Ollama service (see `compose/services/ollama/`)
2. Pull models: `docker exec ollama ollama pull llama3.2:3b`

**Enable in Karakeep:**
```env
# In karakeep/.env
OLLAMA_BASE_URL=http://ollama:11434
INFERENCE_TEXT_MODEL=llama3.2:3b
INFERENCE_IMAGE_MODEL=llava:7b
INFERENCE_LANG=en
```

**Restart:**
```bash
docker compose restart
```

**Recommended models:**
- **Text**: llama3.2:3b (fast, good quality)
- **Images**: llava:7b (vision model)
- **Advanced**: llama3.3:70b (slower, better results)

### Advanced Settings

**Custom logging:**
```env
LOG_LEVEL=debug  # Options: debug, info, warn, error
```

**Custom data directory:**
```env
DATADIR=/custom/path
```

**Chrome timeout (for slow sites):**
```env
# Add to compose.yaml environment section
BROWSER_TIMEOUT=60000  # 60 seconds
```
## Usage Workflows

### 1. Bookmark a Website

**Via Browser:**
1. Click Karakeep extension
2. Bookmark opens automatically
3. AI generates tags and summary
4. Edit tags/notes if needed
5. Save

**Via Mobile:**
1. Open share menu
2. Select Karakeep
3. Bookmark saved

**Manually:**
1. Open Karakeep
2. Click "+" button
3. Paste URL
4. Click Save

### 2. Quick Note

1. Open Karakeep
2. Click "+" → "Note"
3. Type your note
4. AI auto-tags
5. Save

### 3. Upload Image

1. Click "+" → "Image"
2. Upload image file
3. OCR extracts text (if enabled)
4. AI generates tags
5. Save

### 4. Search Everything

**Simple search:**
- Type in search box
- Results appear instantly

**Advanced search:**
- Filter by type (links, notes, images)
- Filter by tags
- Filter by date range
- Sort by relevance or date

### 5. Organize with Tags

**Auto-tags:**
- AI generates tags automatically
- Based on content analysis
- Can be edited/removed

**Manual tags:**
- Add your own tags
- Create tag hierarchies
- Color-code tags

**Tag management:**
- Rename tags globally
- Merge duplicate tags
- Delete unused tags
## Browser Extension Usage

### Quick Bookmark

1. **Visit any page**
2. **Click extension icon** (or keyboard shortcut)
3. **Automatically saved** with:
   - URL
   - Title
   - Screenshot
   - Full page archive
   - AI tags and summary

### Save Selection

1. **Highlight text** on any page
2. **Right-click** → "Save to Karakeep"
3. **Saves as note** with source URL

### Save Image

1. **Right-click image**
2. Select "Save to Karakeep"
3. **Image uploaded** with AI tags

## Mobile App Features

- **Share from any app** to Karakeep
- **Quick capture** - bookmark in seconds
- **Offline access** to archived content
- **Search** your entire collection
- **Browse by tags**
- **Dark mode** support
## Data Management

### Backup

**Important data locations:**
```bash
compose/services/karakeep/
├── data/        # Uploaded files, archives
└── meili_data/  # Search index
```

**Backup script:**
```bash
#!/bin/bash
cd ~/homelab/compose/services/karakeep
tar czf karakeep-backup-$(date +%Y%m%d).tar.gz ./data ./meili_data
```
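A matching restore is the reverse of the backup script: stop the stack, extract the archive over `./data` and `./meili_data`, start again. A minimal sketch follows; the sandbox lines only exist so the example runs standalone, and the `docker compose` steps are left commented for that reason.

```shell
# Restore sketch for a karakeep backup archive.
# KARAKEEP_DIR defaults to a sandbox (assumption); on the real host it is
# ~/homelab/compose/services/karakeep.
KARAKEEP_DIR="${KARAKEEP_DIR:-$(mktemp -d)}"
cd "$KARAKEEP_DIR" || exit 1

# --- sandbox setup: fabricate a backup so this runs anywhere ---
mkdir -p data meili_data && echo ok > data/marker
tar czf karakeep-backup-demo.tar.gz ./data ./meili_data
rm -rf data meili_data

# --- actual restore steps ---
# docker compose down        # stop services before restoring (skipped in sandbox)
tar xzf karakeep-backup-demo.tar.gz
# docker compose up -d       # bring services back up
echo "restored: $(ls)"
```

Because the archive was created with relative `./data ./meili_data` paths, extracting from the service directory puts everything back in place.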
### Export

**Export bookmarks:**
1. Settings → Export
2. Choose format:
   - JSON (complete data)
   - HTML (browser-compatible)
   - CSV (spreadsheet)
3. Download

### Import

**Import from other services:**
1. Settings → Import
2. Select source:
   - Browser bookmarks (HTML)
   - Pocket
   - Raindrop.io
   - Omnivore
   - Instapaper
3. Upload file
4. Karakeep processes and imports
## Troubleshooting

### Karakeep won't start

**Check logs:**
```bash
docker logs karakeep
docker logs karakeep-chrome
docker logs karakeep-meilisearch
```

**Common issues:**
- Missing `NEXTAUTH_SECRET` in `.env`
- Missing `MEILI_MASTER_KEY` in `.env`
- Services not on `karakeep_internal` network

### Bookmarks not saving

**Check chrome service:**
```bash
docker logs karakeep-chrome
```

**Verify chrome is accessible:**
```bash
docker exec karakeep curl http://karakeep-chrome:9222
```

**Increase timeout:**
```env
# Add to .env
BROWSER_TIMEOUT=60000
```

### Search not working

**Rebuild search index:**
```bash
# Stop services
docker compose down

# Remove search data
rm -rf ./meili_data

# Restart (index rebuilds automatically)
docker compose up -d
```

**Check Meilisearch:**
```bash
docker logs karakeep-meilisearch
```

### AI features not working

**With Ollama:**
```bash
# Verify Ollama is running
docker ps | grep ollama

# Test Ollama connection
docker exec karakeep curl http://ollama:11434

# Check models are pulled
docker exec ollama ollama list
```

**With OpenAI/OpenRouter:**
- Verify API key is correct
- Check API balance/credits
- Review logs for error messages

### Extension can't connect

**Verify server URL:**
- Must be `https://links.fig.systems`
- Not `http://` or `localhost`

**Check CORS:**
```env
# Add to .env if needed
CORS_ALLOW_ORIGINS=https://links.fig.systems
```

**Clear extension data:**
1. Extension settings
2. Logout
3. Clear extension storage
4. Login again

### Mobile app issues

**Can't connect:**
- Use full HTTPS URL
- Ensure server is accessible externally
- Check firewall rules

**Slow performance:**
- Check network speed
- Reduce image quality in app settings
- Enable "Low data mode"
## Performance Optimization

### For Large Collections (10,000+ bookmarks)

**Increase Meilisearch RAM:**
```yaml
# In compose.yaml, add to karakeep-meilisearch:
deploy:
  resources:
    limits:
      memory: 2G
    reservations:
      memory: 1G
```

**Optimize search index:**
```env
# In .env
MEILI_MAX_INDEXING_MEMORY=1048576000  # 1GB
```

### For Slow Archiving

**Increase Chrome resources:**
```yaml
# In compose.yaml, add to karakeep-chrome:
deploy:
  resources:
    limits:
      memory: 1G
      cpus: '1.0'
```

**Adjust timeouts:**
```env
BROWSER_TIMEOUT=90000  # 90 seconds
```

### Database Maintenance

**Vacuum (compact) database:**
```bash
# Karakeep uses SQLite by default
docker exec karakeep sqlite3 /data/karakeep.db "VACUUM;"
```
## Comparison with Linkwarden

| Feature | Karakeep | Linkwarden |
|---------|----------|------------|
| **Bookmark Types** | Links, Notes, Images, PDFs | Links only |
| **AI Tagging** | Yes (Ollama/OpenAI) | No |
| **Web Archiving** | Full page + Screenshot | Screenshot only |
| **Search** | Meilisearch (fuzzy) | Meilisearch |
| **Browser Extension** | Yes | Yes |
| **Mobile Apps** | iOS + Android | No official apps |
| **OCR** | Yes | No |
| **Collaboration** | Personal focus | Team features |
| **Database** | SQLite | PostgreSQL |

**Why Karakeep?**
- More bookmark types
- AI-powered organization
- Better mobile support
- Lighter resource usage (SQLite vs PostgreSQL)
- Active development

## Resources

- [Official Website](https://karakeep.app)
- [Documentation](https://docs.karakeep.app)
- [GitHub Repository](https://github.com/karakeep-app/karakeep)
- [Demo Instance](https://try.karakeep.app)
- [Chrome Extension](https://chromewebstore.google.com/detail/karakeep/kbkejgonjhbmhcaofkhdegeoeoemgkdm)
- [Firefox Extension](https://addons.mozilla.org/en-US/firefox/addon/karakeep/)

## Next Steps

1. ✅ Deploy Karakeep
2. ✅ Create admin account
3. ✅ Install browser extension
4. ✅ Install mobile app
5. ⬜ Deploy Ollama for AI features
6. ⬜ Import existing bookmarks
7. ⬜ Configure AI models
8. ⬜ Set up automated backups

---

**Bookmark everything, find anything!** 🔖
@ -12,7 +12,7 @@ services:
       - .env

     volumes:
-      - ./data:/data
+      - /media/karakeep/data:/data

     depends_on:
       - karakeep-meilisearch
@ -34,6 +34,7 @@ services:
       traefik.http.services.karakeep.loadbalancer.server.port: 3000

       # SSO Protection
+      traefik.http.routers.karakeep.middlewares: authelia

       # Homarr Discovery
       homarr.name: Karakeep (Bookmarks)
@ -65,7 +66,7 @@ services:
       - .env

     volumes:
-      - ./meili_data:/meili_data
+      - /media/karakeep/meili_data:/meili_data

     networks:
       - karakeep_internal

82  compose/services/komga/README.md  Normal file

@ -0,0 +1,82 @@
# Komga

Komga is a free and open source comics/ebooks server with OPDS support and Kobo/KOReader integration.

## Features

- Modern web interface for browsing comics and ebooks
- OPDS feed support for reading apps
- Native Kobo sync support (connect your Kobo eReader directly)
- KOReader integration via OPDS
- Metadata management
- User management with per-library access control
- Reading progress tracking

## Configuration

### Environment Variables

See the `.env` file for configuration options:

- `KOMGA_PORT`: Internal port for Komga (default: 8080)
- `TRAEFIK_HOST`: Public domain for accessing Komga
- `TZ`: Timezone
- `APP_USER_ID`/`APP_GROUP_ID`: User/group for file permissions

### Volumes

- `./config`: Komga configuration and database
- `/mnt/media/books`: Your book/comic library (read-only recommended)
- `/mnt/media/bookdrop`: Drop folder for importing new content

## Kobo Setup

Komga has built-in Kobo sync support. To connect your Kobo eReader:

1. Access the Komga web UI and create a user account
2. In Komga user settings, generate a Kobo sync token
3. On your Kobo device:
   - Connect via USB
   - Edit `.kobo/Kobo/Kobo eReader.conf`
   - Add under `[OneStoreServices]`:
     ```
     api_endpoint=https://books.fig.systems/kobo
     ```
4. Safely eject and reboot your Kobo
5. Sign in with your Komga credentials when prompted

The Kobo endpoint (`/kobo`) is configured to bypass Authelia authentication, since Kobo uses its own authentication mechanism.
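The conf edit in step 3 can also be scripted. A minimal sketch that demonstrates the edit on a throwaway copy (the path and section name are the ones from the steps above; back up the real `Kobo eReader.conf` before running anything against it):

```shell
# Demonstrate the edit on a temporary stand-in for .kobo/Kobo/Kobo eReader.conf
conf="$(mktemp)"
printf '[OneStoreServices]\n' > "$conf"
# Insert the Komga Kobo endpoint directly under the section header (GNU sed)
sed -i '/^\[OneStoreServices\]$/a api_endpoint=https://books.fig.systems/kobo' "$conf"
grep 'api_endpoint' "$conf"
```

Run the same `sed` line against the real file on the USB-mounted device once the output looks right.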

## KOReader Setup

For KOReader (on any device):

1. Open KOReader
2. Go to Tools → OPDS Catalog
3. Add a new catalog:
   - Catalog Name: Komga
   - Catalog URL: `https://books.fig.systems/opds/v1.2/catalog`
   - Username: Your Komga username
   - Password: Your Komga password

Note: The OPDS endpoints require Authelia authentication for web access, but KOReader will authenticate using HTTP Basic Auth with your Komga credentials.
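If KOReader reports authentication failures, it can help to know what the `Authorization` header should look like. A sketch with placeholder credentials (`alice`/`secret` are hypothetical, not real accounts):

```shell
# Build the HTTP Basic Auth header value derived from a Komga
# username and password (placeholders shown here)
user="alice"; pass="secret"
printf 'Authorization: Basic %s\n' "$(printf '%s:%s' "$user" "$pass" | base64)"
```

Compare this against what the client actually sends (e.g. in a proxy log) when debugging.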

## Authentication

- Web UI: Protected by Authelia SSO
- OPDS/Kobo endpoints: Use Komga's built-in authentication
- The Kobo sync endpoint bypasses Authelia to allow direct device authentication

## First Run

1. Start the service: `docker compose up -d`
2. Access the web UI at `https://books.fig.systems`
3. Create an admin account on first login
4. Add libraries pointing to your book folders
5. Configure users and permissions as needed

## Tips

- Komga supports various formats: CBZ, CBR, PDF, EPUB, and more
- Use the bookdrop folder for automatic import scanning
- Enable the "claim" profile for better reverse proxy support (already configured)
- Kobo sync requires HTTPS (already configured via Traefik)

61  compose/services/komga/compose.yaml  Normal file

@ -0,0 +1,61 @@
services:
  komga:
    image: gotson/komga:latest
    container_name: komga
    environment:
      - TZ=${TZ}
      - PUID=${APP_USER_ID}
      - PGID=${APP_GROUP_ID}
      - SERVER_PORT=${KOMGA_PORT}
      # Kobo/KOReader support
      - KOMGA_KOBO_PROXY=false
    volumes:
      - ./config:/config
      - /mnt/media/books:/books
      - /mnt/media/bookdrop:/bookdrop
    restart: unless-stopped
    networks:
      - homelab
    labels:
      # Traefik
      traefik.enable: true
      traefik.docker.network: homelab

      # Middleware for Kobo sync support - increased buffer sizes
      traefik.http.middlewares.komga-buffering.buffering.maxRequestBodyBytes: 268435456
      traefik.http.middlewares.komga-buffering.buffering.memRequestBodyBytes: 268435456
      traefik.http.middlewares.komga-buffering.buffering.retryExpression: IsNetworkError() && Attempts() < 3
      traefik.http.middlewares.komga-headers.headers.customrequestheaders.X-Forwarded-Proto: https

      # Authelia middleware for /api and /opds endpoints (main web UI)
      traefik.http.middlewares.komga-auth.forwardauth.address: http://authelia:9091/api/authz/forward-auth
      traefik.http.middlewares.komga-auth.forwardauth.trustForwardHeader: true
      traefik.http.middlewares.komga-auth.forwardauth.authResponseHeaders: Remote-User,Remote-Groups,Remote-Name,Remote-Email

      # Kobo router - NO Authelia (uses Kobo's built-in auth) - higher priority so it matches first
      traefik.http.routers.komga-kobo.rule: Host(`${TRAEFIK_HOST}`) && PathPrefix(`/kobo`)
      traefik.http.routers.komga-kobo.entrypoints: websecure
      traefik.http.routers.komga-kobo.tls.certresolver: letsencrypt
      traefik.http.routers.komga-kobo.middlewares: komga-buffering,komga-headers
      traefik.http.routers.komga-kobo.service: komga
      traefik.http.routers.komga-kobo.priority: 100

      # Main router for web UI - NO Authelia for initial setup
      traefik.http.routers.komga.rule: Host(`${TRAEFIK_HOST}`)
      traefik.http.routers.komga.entrypoints: websecure
      traefik.http.routers.komga.tls.certresolver: letsencrypt
      traefik.http.routers.komga.middlewares: komga-buffering,komga-headers
      traefik.http.routers.komga.service: komga
      traefik.http.routers.komga.priority: 50

      # Service definition
      traefik.http.services.komga.loadbalancer.server.port: ${KOMGA_PORT}

      # Homarr Discovery
      homarr.name: Komga
      homarr.group: Services
      homarr.icon: mdi:book-open-variant

networks:
  homelab:
    external: true

@ -1,35 +0,0 @@
# Komodo Environment Configuration
# Copy this file to .env and customize for your deployment

# Version
KOMODO_VERSION=latest

# Database (CHANGE THESE!)
KOMODO_DB_USERNAME=admin
KOMODO_DB_PASSWORD=CHANGE_ME_TO_STRONG_PASSWORD

# Authentication (CHANGE THIS!)
KOMODO_PASSKEY=CHANGE_ME_TO_STRONG_RANDOM_STRING

# Core Settings
KOMODO_TITLE=Komodo
KOMODO_HOST=https://komodo.fig.systems
TZ=America/Los_Angeles

# User Management
KOMODO_LOCAL_AUTH=true
KOMODO_ENABLE_NEW_USERS=true
KOMODO_FIRST_SERVER_ADMIN=true

# Monitoring
KOMODO_MONITORING_INTERVAL=15-sec

# Logging
KOMODO_LOGGING_LEVEL=info
PERIPHERY_LOGGING_LEVEL=info

# Periphery Settings
PERIPHERY_ROOT_DIR=/etc/komodo
PERIPHERY_HTTPS_ENABLED=true
PERIPHERY_DISABLE_TERMINALS=false
PERIPHERY_INCLUDE_DISK_MOUNTS=/

18  compose/services/komodo/.gitignore  vendored

@ -1,18 +0,0 @@
# Sensitive configuration
.env

# Data directories
data/
backups/

# MongoDB volumes (if using bind mounts)
mongo-data/
mongo-config/

# Logs
*.log

# Certificates
*.pem
*.key
*.crt

@ -1,286 +0,0 @@
# Komodo - Docker & Server Management Platform

Komodo is a comprehensive platform for managing Docker containers, servers, and deployments with a modern web interface.

## Features

- **Docker Management**: Deploy and manage Docker containers and compose stacks
- **Server Monitoring**: Track server health, resources, and statistics
- **Build System**: Build Docker images from Git repositories
- **Multi-Server**: Manage multiple servers from a single interface
- **Webhooks**: Automatic deployments from git webhooks
- **Resource Management**: Organize with tags, descriptions, and search
- **Authentication**: Local auth, OAuth (GitHub, Google), and OIDC support

## Quick Start

### 1. Update Environment Variables

Edit `.env` and update these critical values:

```bash
# Database Password
KOMODO_DB_PASSWORD=your-strong-password-here

# Shared Passkey (Core <-> Periphery authentication)
KOMODO_PASSKEY=your-strong-random-string-here

# Host URL (update to your domain)
KOMODO_HOST=https://komodo.fig.systems

# Timezone
TZ=America/Los_Angeles
```

### 2. Create Required Directory

```bash
# Create the periphery root directory on the host
sudo mkdir -p /etc/komodo
sudo chown -R $USER:$USER /etc/komodo
```

### 3. Deploy

```bash
docker compose up -d
```

### 4. Access

Open https://komodo.fig.systems and create your first admin account.

## Architecture

The stack consists of three services:

1. **komodo-mongo**: MongoDB database for storing configuration
2. **komodo-core**: Main web interface and API (port 9120)
3. **komodo-periphery**: Local agent for Docker/server management (port 8120)

## Configuration

### Environment Variables (.env)

The `.env` file contains all primary configuration. Key sections:

- **Database**: MongoDB credentials
- **Authentication**: Passkey, local auth, OAuth providers
- **Monitoring**: Polling intervals and logging
- **Periphery**: Root directory, SSL, terminal access
- **Integrations**: Git providers, Docker registries, AWS

### TOML Configuration Files (Optional)

For advanced configuration, mount TOML files:

- `config/core.config.toml` → `/config/core.config.toml`
- `config/periphery.config.toml` → `/config/periphery.config.toml`

Uncomment the volume mounts in `compose.yaml` to use these files.

## Security Checklist

Before deploying to production:

- [ ] Change `KOMODO_DB_PASSWORD` to a strong password
- [ ] Change `KOMODO_PASSKEY` to a strong random string (32+ characters)
- [ ] Review `KOMODO_ENABLE_NEW_USERS` - set to `false` after creating the admin account
- [ ] Consider enabling SSO via Traefik middleware (see compose.yaml)
- [ ] Set `PERIPHERY_DISABLE_TERMINALS=true` if shell access is not needed
- [ ] Configure `PERIPHERY_ALLOWED_IPS` to restrict access by IP
- [ ] Review disk mount monitoring in `PERIPHERY_INCLUDE_DISK_MOUNTS`
- [ ] Enable proper SSL certificates (auto-generated by Traefik)
- [ ] Set up OAuth providers (GitHub/Google) or OIDC for SSO

## Authentication Options

### Local Authentication (Default)

Username/password authentication. The first user becomes admin.

### OAuth Providers

Configure in `.env`:

```bash
# GitHub OAuth
KOMODO_GITHUB_OAUTH_ENABLED=true
KOMODO_GITHUB_OAUTH_ID=your-oauth-id
KOMODO_GITHUB_OAUTH_SECRET=your-oauth-secret

# Google OAuth
KOMODO_GOOGLE_OAUTH_ENABLED=true
KOMODO_GOOGLE_OAUTH_ID=your-oauth-id
KOMODO_GOOGLE_OAUTH_SECRET=your-oauth-secret
```

### OIDC (e.g., Keycloak, Auth0)

```bash
KOMODO_OIDC_ENABLED=true
KOMODO_OIDC_PROVIDER=https://your-oidc-provider.com
KOMODO_OIDC_CLIENT_ID=your-client-id
KOMODO_OIDC_CLIENT_SECRET=your-client-secret
```

## Integrations

### Git Provider Access

For private repositories, configure credentials:

```bash
# GitHub
KOMODO_GIT_GITHUB_ACCOUNTS=personal
KOMODO_GIT_GITHUB_PERSONAL_USERNAME=your-username
KOMODO_GIT_GITHUB_PERSONAL_TOKEN=ghp_your-token

# Gitea/Self-hosted
KOMODO_GIT_GITEA_ACCOUNTS=homelab
KOMODO_GIT_GITEA_HOMELAB_DOMAIN=git.example.com
KOMODO_GIT_GITEA_HOMELAB_USERNAME=your-username
KOMODO_GIT_GITEA_HOMELAB_TOKEN=your-token
```

### Docker Registry Access

For private registries:

```bash
# Docker Hub
KOMODO_REGISTRY_DOCKERHUB_ACCOUNTS=personal
KOMODO_REGISTRY_DOCKERHUB_PERSONAL_USERNAME=your-username
KOMODO_REGISTRY_DOCKERHUB_PERSONAL_PASSWORD=your-password

# Custom Registry
KOMODO_REGISTRY_CUSTOM_ACCOUNTS=homelab
KOMODO_REGISTRY_CUSTOM_HOMELAB_DOMAIN=registry.example.com
KOMODO_REGISTRY_CUSTOM_HOMELAB_USERNAME=your-username
KOMODO_REGISTRY_CUSTOM_HOMELAB_PASSWORD=your-password
```

## Multi-Server Setup

To manage additional servers:

1. Deploy `komodo-periphery` on each server
2. Configure it with the same `KOMODO_PASSKEY`
3. Expose port 8120 (with SSL enabled)
4. Add the server in the Komodo Core UI with the periphery URL

## Monitoring & Logging

### Adjust Polling Intervals

```bash
# Server health checks
KOMODO_MONITORING_INTERVAL=15-sec

# System stats
PERIPHERY_STATS_POLLING_RATE=5-sec

# Container stats
PERIPHERY_CONTAINER_STATS_POLLING_RATE=30-sec
```

### Log Levels

```bash
KOMODO_LOGGING_LEVEL=info  # off, error, warn, info, debug, trace
PERIPHERY_LOGGING_LEVEL=info
```

### OpenTelemetry

For distributed tracing:

```bash
KOMODO_LOGGING_OTLP_ENDPOINT=http://your-otlp-collector:4317
PERIPHERY_LOGGING_OTLP_ENDPOINT=http://your-otlp-collector:4317
```

## Data Management

### Backups

MongoDB data is persisted in Docker volumes:

- `mongo-data`: Database files
- `mongo-config`: Configuration

The `./backups` directory is mounted for storing backup exports.

### Data Pruning

Automatically clean up old data:

```bash
KOMODO_PRUNE_INTERVAL=1-day
KOMODO_KEEP_STATS_FOR_DAYS=30
KOMODO_KEEP_ALERTS_FOR_DAYS=90
```

## Troubleshooting

### Check Logs

```bash
docker compose logs -f komodo-core
docker compose logs -f komodo-periphery
docker compose logs -f komodo-mongo
```

### Verify Passkey Match

Core and Periphery must share the same passkey:

```bash
# In .env, ensure these match:
KOMODO_PASSKEY=abc123
```

### Reset Admin Password

Connect to MongoDB and reset the user:

```bash
docker exec -it komodo-mongo mongosh -u admin -p admin
use komodo
db.users.updateOne({username: "admin"}, {$set: {password: "new-hashed-password"}})
```

### Check Periphery Connection

In the Komodo Core UI, add a server pointing to:

- URL: `http://komodo-periphery:8120` (internal)
- Or: `https://komodo.fig.systems:8120` (if externally accessible)
- Passkey: Must match `KOMODO_PASSKEY`

## Upgrading

```bash
# Pull latest images
docker compose pull

# Recreate containers
docker compose up -d

# Check logs
docker compose logs -f
```

**Note**: Pin specific versions in `.env` for production:

```bash
KOMODO_VERSION=v1.2.3
```

## Links

- **Documentation**: https://komo.do/docs/
- **GitHub**: https://github.com/moghtech/komodo
- **Discord**: https://discord.gg/komodo

## License

Komodo is open source under the GPL-3.0 license.

@ -1,137 +0,0 @@
# Komodo - Docker & Server Management Platform
# Docs: https://komo.do/docs/
# GitHub: https://github.com/moghtech/komodo

services:
  komodo-mongo:
    container_name: komodo-mongo
    image: mongo:8.0
    restart: unless-stopped

    command: ["--wiredTigerCacheSizeGB", "0.25"]

    environment:
      MONGO_INITDB_ROOT_USERNAME: ${KOMODO_DB_USERNAME:-admin}
      MONGO_INITDB_ROOT_PASSWORD: ${KOMODO_DB_PASSWORD:-admin}

    volumes:
      - mongo-data:/data/db
      - mongo-config:/data/configdb

    networks:
      - homelab

    labels:
      # Skip this container from Komodo management
      komodo.skip: true

  komodo-core:
    container_name: komodo-core
    image: ghcr.io/moghtech/komodo-core:${KOMODO_VERSION:-latest}
    restart: unless-stopped

    depends_on:
      - komodo-mongo

    env_file:
      - .env

    environment:
      # Database Configuration
      KOMODO_DATABASE_URI: mongodb://${KOMODO_DB_USERNAME:-admin}:${KOMODO_DB_PASSWORD:-admin}@komodo-mongo:27017

      # Core Settings
      KOMODO_TITLE: ${KOMODO_TITLE:-Komodo}
      KOMODO_HOST: ${KOMODO_HOST:-https://komodo.fig.systems}
      KOMODO_PORT: 9120

      # Authentication
      KOMODO_PASSKEY: ${KOMODO_PASSKEY:-abc123}
      KOMODO_LOCAL_AUTH: ${KOMODO_LOCAL_AUTH:-true}
      KOMODO_ENABLE_NEW_USERS: ${KOMODO_ENABLE_NEW_USERS:-true}
      KOMODO_ENABLE_NEW_USER_WEBHOOK: ${KOMODO_ENABLE_NEW_USER_WEBHOOK:-false}

      # Monitoring
      KOMODO_MONITORING_INTERVAL: ${KOMODO_MONITORING_INTERVAL:-15-sec}

      # Logging
      KOMODO_LOGGING_LEVEL: ${KOMODO_LOGGING_LEVEL:-info}
      TZ: ${TZ:-America/Los_Angeles}

    volumes:
      - ./data:/data
      - ./backups:/backups
      # Optional: mount custom config
      # - ./config/core.config.toml:/config/core.config.toml:ro

    networks:
      - homelab

    labels:
      # Skip this container from Komodo management
      komodo.skip: true

      # Traefik
      traefik.enable: true
      traefik.docker.network: homelab

      # Web UI
      traefik.http.routers.komodo.rule: Host(`komodo.fig.systems`)
      traefik.http.routers.komodo.entrypoints: websecure
      traefik.http.routers.komodo.tls.certresolver: letsencrypt
      traefik.http.services.komodo.loadbalancer.server.port: 9120

      # Optional: SSO Protection

  komodo-periphery:
    container_name: komodo-periphery
    image: ghcr.io/moghtech/komodo-periphery:${KOMODO_VERSION:-latest}
    restart: unless-stopped

    depends_on:
      - komodo-core

    env_file:
      - .env

    environment:
      # Core Settings
      PERIPHERY_ROOT_DIR: ${PERIPHERY_ROOT_DIR:-/etc/komodo}
      PERIPHERY_PORT: 8120

      # Authentication
      PERIPHERY_PASSKEY: ${KOMODO_PASSKEY:-abc123}
      PERIPHERY_HTTPS_ENABLED: ${PERIPHERY_HTTPS_ENABLED:-true}

      # Features
      PERIPHERY_DISABLE_TERMINALS: ${PERIPHERY_DISABLE_TERMINALS:-false}

      # Disk Monitoring
      PERIPHERY_INCLUDE_DISK_MOUNTS: ${PERIPHERY_INCLUDE_DISK_MOUNTS:-/}
      # PERIPHERY_EXCLUDE_DISK_MOUNTS: /snap,/boot

      # Logging
      PERIPHERY_LOGGING_LEVEL: ${PERIPHERY_LOGGING_LEVEL:-info}
      TZ: ${TZ:-America/Los_Angeles}

    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /proc:/proc:ro
      - ${PERIPHERY_ROOT_DIR:-/etc/komodo}:${PERIPHERY_ROOT_DIR:-/etc/komodo}
      # Optional: mount custom config
      # - ./config/periphery.config.toml:/config/periphery.config.toml:ro

    networks:
      - homelab

    labels:
      # Skip this container from Komodo management
      komodo.skip: true

volumes:
  mongo-data:
  mongo-config:

networks:
  homelab:
    external: true

@ -1,89 +0,0 @@
#!/bin/bash
# Komodo Setup Script

set -e

echo "==================================="
echo "Komodo Setup"
echo "==================================="
echo ""

# Check if running as root
if [ "$EUID" -eq 0 ]; then
  echo "Please do not run as root"
  exit 1
fi

# Create periphery root directory
echo "Creating periphery root directory..."
sudo mkdir -p /etc/komodo
sudo chown -R $USER:$USER /etc/komodo
echo "✓ Created /etc/komodo"
echo ""

# Check if .env exists
if [ ! -f .env ]; then
  echo "Error: .env file not found!"
  echo "Please copy .env.example to .env and configure it first."
  exit 1
fi

# Check for default passwords
echo "Checking for default passwords..."
if grep -q "KOMODO_DB_PASSWORD=admin" .env; then
  echo "⚠️  WARNING: Default database password detected!"
  echo "   Please update KOMODO_DB_PASSWORD in .env before deployment."
fi

if grep -q "KOMODO_PASSKEY=abc123" .env; then
  echo "⚠️  WARNING: Default passkey detected!"
  echo "   Please update KOMODO_PASSKEY in .env before deployment."
fi

echo ""
echo "==================================="
echo "Pre-deployment Checklist"
echo "==================================="
echo ""
echo "Before deploying, ensure you have:"
echo "  [ ] Updated KOMODO_DB_PASSWORD to a strong password"
echo "  [ ] Updated KOMODO_PASSKEY to a strong random string"
echo "  [ ] Updated KOMODO_HOST to your domain"
echo "  [ ] Configured TZ (timezone)"
echo "  [ ] Reviewed KOMODO_ENABLE_NEW_USERS setting"
echo ""
read -p "Have you completed the checklist above? (y/N) " -n 1 -r
echo ""

if [[ ! $REPLY =~ ^[Yy]$ ]]; then
  echo "Please complete the checklist and run this script again."
  exit 1
fi

echo ""
echo "==================================="
echo "Deploying Komodo..."
echo "==================================="
echo ""

# Deploy
docker compose up -d

echo ""
echo "==================================="
echo "Deployment Complete!"
echo "==================================="
echo ""
echo "Access Komodo at: https://komodo.fig.systems"
echo ""
echo "First-time setup:"
echo "  1. Open the URL above"
echo "  2. Create your admin account"
echo "  3. Configure servers and resources"
echo ""
echo "To view logs:"
echo "  docker compose logs -f"
echo ""
echo "To stop:"
echo "  docker compose down"
echo ""

9  compose/services/matrix/.gitignore  vendored  Normal file

@ -0,0 +1,9 @@
# Synapse data (stored in /mnt/media/matrix/)
data/
media/

# Bridge data
bridges/

# Logs
*.log

665  compose/services/matrix/INTEGRATIONS-SETUP.md  Normal file

@ -0,0 +1,665 @@
# Matrix Integrations Setup Guide

This guide covers setup for all Matrix integrations in your homelab.

## Quick Start

1. **Start all services:**
   ```bash
   cd /home/eduardo_figueroa/homelab/compose/services/matrix
   docker compose up -d
   ```

2. **Check service health:**
   ```bash
   docker compose ps
   docker compose logs -f
   ```

---

## Services Overview

| Service | URL | Purpose |
|---------|-----|---------|
| Synapse | https://matrix.fig.systems | Matrix homeserver |
| Element | https://chat.fig.systems | Web client |
| Synapse Admin | https://admin.matrix.fig.systems | User/room management |
| Maubot | https://maubot.fig.systems | Bot management |
| Matrix Registration | https://reg.matrix.fig.systems | Token-based registration |
| Hookshot | https://hookshot.fig.systems | GitHub/GitLab webhooks |

---

## 1. Synapse Admin

**Purpose:** Web UI for managing users, rooms, and server settings.

### Setup:

1. **Access the UI:**
   - Navigate to https://admin.matrix.fig.systems
   - Enter the homeserver URL: `https://matrix.fig.systems`

2. **Login with your admin account:**
   - Use your Matrix credentials (@username:fig.systems)
   - Must be a server admin (see below to grant admin)

3. **Grant admin privileges to a user:**
   ```bash
   docker compose exec synapse register_new_matrix_user \
     -u <username> \
     -p <password> \
     -a \
     -c /data/homeserver.yaml \
     http://localhost:8008
   ```

### Features:

- View and manage all users
- Deactivate accounts
- Manage rooms (delete, view members)
- View server statistics
- Media management

---

## 2. Matrix Registration (Token-Based Registration)

**Purpose:** Control who can register with invite tokens.

### Admin Access:

**Admin credentials:**

- URL: https://reg.matrix.fig.systems/admin
- Secret: `4a385519f20e015faf06996f12532236aa02d15511ea48bf1abec32e21d40188`
  (Also in `.env` as `MATRIX_REGISTRATION_ADMIN_SECRET`)

### Creating Registration Tokens:

**Via Web UI:**

1. Go to https://reg.matrix.fig.systems/admin
2. Enter the admin secret above
3. Click "Create Token"
4. Configure options:
   - **One-time use:** Token works only once
   - **Multi-use:** Token can be used multiple times
   - **Expiration date:** Token expires after this date
   - **Disable email:** Skip email verification for this token
5. Copy the token and share it with users

**Registration URL format:**

```
https://reg.matrix.fig.systems?token=<your_token_here>
```

### Creating Tokens via API:

```bash
# Create a one-time token
curl -X POST https://reg.matrix.fig.systems/api/token \
  -H "Authorization: Bearer 4a385519f20e015faf06996f12532236aa02d15511ea48bf1abec32e21d40188" \
  -H "Content-Type: application/json" \
  -d '{
    "ex_date": "2026-12-31",
    "one_time": true,
    "disable_email": false
  }'

# Create a multi-use token (for family/friends)
curl -X POST https://reg.matrix.fig.systems/api/token \
  -H "Authorization: Bearer 4a385519f20e015faf06996f12532236aa02d15511ea48bf1abec32e21d40188" \
  -H "Content-Type: application/json" \
  -d '{
    "ex_date": "2026-12-31",
    "one_time": false,
    "max_usage": 10,
    "disable_email": true
  }'

# List all tokens
curl https://reg.matrix.fig.systems/api/tokens \
  -H "Authorization: Bearer 4a385519f20e015faf06996f12532236aa02d15511ea48bf1abec32e21d40188"

# Disable a token
curl -X PUT https://reg.matrix.fig.systems/api/token/<token_name> \
  -H "Authorization: Bearer 4a385519f20e015faf06996f12532236aa02d15511ea48bf1abec32e21d40188" \
  -H "Content-Type: application/json" \
  -d '{"disabled": true}'
```
|
||||||
|
|
||||||
|
### User Registration Process:
|
||||||
|
|
||||||
|
1. Admin creates token via web UI or API
|
||||||
|
2. Admin shares URL: `https://reg.matrix.fig.systems?token=abc123`
|
||||||
|
3. User opens URL and fills in:
|
||||||
|
- Username
|
||||||
|
- Password
|
||||||
|
- Email (if required)
|
||||||
|
4. Account is created on your Matrix server
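
Steps 1–2 are easy to script when inviting several people at once; a minimal sketch that composes the invite URL for an existing token (the token value here is a placeholder):

```bash
# Compose the registration URL for a token created in the admin UI
TOKEN="abc123"   # placeholder; substitute a real token
URL="https://reg.matrix.fig.systems?token=${TOKEN}"
echo "$URL"
```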

### Benefits:
- Control who can register
- Track which tokens were used
- Bypass email verification per-token
- Prevent spam/abuse
- Invite-only registration system

---

## 3. Maubot (Bot Framework)

**Purpose:** Modular bot system for GIFs, reminders, RSS, and custom commands.

### Initial Setup:

1. **Generate initial config:**
   ```bash
   docker compose run --rm maubot
   ```

2. **Access the management UI:**
   - URL: https://maubot.fig.systems
   - Default credentials are in `/mnt/media/matrix/maubot/config.yaml`

3. **Login and change password:**
   - First login with default credentials
   - Go to Settings → Change password

### Creating a Bot User:

1. **Register a bot user on your homeserver:**
   ```bash
   docker compose exec synapse register_new_matrix_user \
     -u bot \
     -p <bot_password> \
     -c /data/homeserver.yaml \
     http://localhost:8008
   ```

2. **Add bot client in Maubot UI:**
   - Go to https://maubot.fig.systems
   - Click "Clients" → "+"
   - Enter:
     - **User ID:** @bot:fig.systems
     - **Access Token:** (get from login)
     - **Homeserver:** https://matrix.fig.systems

3. **Get access token:**
   ```bash
   curl -X POST https://matrix.fig.systems/_matrix/client/r0/login \
     -H "Content-Type: application/json" \
     -d '{
       "type": "m.login.password",
       "user": "bot",
       "password": "<bot_password>"
     }'
   ```
   Copy the `access_token` from the response.
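
When scripting bot setup, the token can be pulled out of the login response without extra tooling; a sketch using `sed` on a sample response (the values are fake, and `jq -r .access_token` is the more robust choice if `jq` is installed):

```bash
# Shape matches a /login response; values are placeholders
RESPONSE='{"user_id":"@bot:fig.systems","access_token":"syt_example_token","device_id":"ABCDEF"}'
# Crude JSON field extraction with sed (fine for this fixed shape)
ACCESS_TOKEN=$(printf '%s' "$RESPONSE" | sed -n 's/.*"access_token":"\([^"]*\)".*/\1/p')
echo "$ACCESS_TOKEN"
```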

### Installing Plugins:

**Popular plugins:**

1. **Giphy** - `/giphy <search>` command
   - Download: https://github.com/TomCasavant/GiphyMaubot
   - Upload .mbp file in Maubot UI

2. **Tenor** - `/tenor <search>` GIF search
   - Download: https://github.com/williamkray/maubot-tenor

3. **Reminder** - `/remind <time> <message>`
   - Download: https://github.com/maubot/reminder

4. **RSS** - RSS feed notifications
   - Download: https://github.com/maubot/rss

5. **Reactions** - Emoji reactions and karma
   - Download: https://github.com/maubot/reactbot

6. **Media** - Download media from URLs
   - Download: https://github.com/maubot/media

**Installation steps:**
1. Download plugin .mbp file
2. Go to Maubot UI → Plugins → Upload
3. Create instance: Instances → + → Select plugin and client
4. Configure and enable

---

## 4. Telegram Bridge (mautrix-telegram)

**Purpose:** Bridge Telegram chats and DMs to Matrix.

### Setup:

1. **Get Telegram API credentials:**
   - Go to https://my.telegram.org/apps
   - Log in with your phone number
   - Create an app
   - Copy `api_id` and `api_hash`

2. **Generate config:**
   ```bash
   docker compose run --rm mautrix-telegram
   ```

3. **Edit config:**
   ```bash
   nano /mnt/media/matrix/bridges/telegram/config.yaml
   ```

   Key settings:
   ```yaml
   homeserver:
     address: http://synapse:8008
     domain: fig.systems

   appservice:
     address: http://mautrix-telegram:29317
     hostname: 0.0.0.0
     port: 29317
     database: sqlite:///data/mautrix-telegram.db

   bridge:
     permissions:
       '@yourusername:fig.systems': admin
       'fig.systems': user

   telegram:
     api_id: YOUR_API_ID
     api_hash: YOUR_API_HASH
   ```

4. **Start the bridge:**
   ```bash
   docker compose up -d mautrix-telegram
   ```

5. **Restart Synapse** (to load the registration file):
   ```bash
   docker compose restart synapse
   ```

### Using the Bridge:

1. **Start chat with bridge bot:**
   - In Element, start a DM with `@telegrambot:fig.systems`
   - Send: `login`
   - Enter your Telegram phone number
   - Enter the code sent to Telegram

2. **Bridge a chat:**
   - Create or open a Matrix room
   - Invite `@telegrambot:fig.systems`
   - Send: `!tg bridge <telegram_chat_id>`
   - Or use `!tg search <query>` to find chats

3. **Useful commands:**
   - `!tg help` - Show all commands
   - `!tg pm` - Bridge personal chats
   - `!tg search <query>` - Find Telegram chats
   - `!tg sync` - Sync members/messages
   - `!tg unbridge` - Remove bridge

---

## 5. WhatsApp Bridge (mautrix-whatsapp)

**Purpose:** Bridge WhatsApp chats to Matrix.

### Setup:

1. **Generate config:**
   ```bash
   docker compose run --rm mautrix-whatsapp
   ```

2. **Edit config:**
   ```bash
   nano /mnt/media/matrix/bridges/whatsapp/config.yaml
   ```

   Update:
   ```yaml
   homeserver:
     address: http://synapse:8008
     domain: fig.systems

   bridge:
     permissions:
       '@yourusername:fig.systems': admin
       'fig.systems': user
   ```

3. **Start and restart:**
   ```bash
   docker compose up -d mautrix-whatsapp
   docker compose restart synapse
   ```

### Using the Bridge:

1. **Start chat with bot:**
   - DM `@whatsappbot:fig.systems` in Element
   - Send: `login`

2. **Scan QR code:**
   - Bridge will send a QR code
   - Open WhatsApp on your phone
   - Go to Settings → Linked Devices → Link a Device
   - Scan the QR code

3. **Chats are auto-bridged:**
   - Existing WhatsApp chats appear as Matrix rooms
   - New WhatsApp messages create rooms automatically

---

## 6. Discord Bridge (mautrix-discord)

**Purpose:** Bridge Discord servers and DMs to Matrix.

### Setup:

1. **Generate config:**
   ```bash
   docker compose run --rm mautrix-discord
   ```

2. **Create Discord bot:**
   - Go to https://discord.com/developers/applications
   - Create New Application
   - Go to Bot → Add Bot
   - Copy the Bot Token
   - Enable these intents:
     - Server Members Intent
     - Message Content Intent

3. **Edit config:**
   ```bash
   nano /mnt/media/matrix/bridges/discord/config.yaml
   ```

   Add your bot token:
   ```yaml
   bridge:
     bot_token: YOUR_DISCORD_BOT_TOKEN
     permissions:
       '@yourusername:fig.systems': admin
       'fig.systems': user
   ```

4. **Start and restart:**
   ```bash
   docker compose up -d mautrix-discord
   docker compose restart synapse
   ```

### Using the Bridge:

1. **Invite bot to Discord server:**
   - Get OAuth URL from bridge bot in Matrix
   - Visit URL and authorize bot for your Discord server

2. **Bridge channels:**
   - Create Matrix room
   - Invite `@discordbot:fig.systems`
   - Follow bridging instructions from bot

---

## 7. Google Chat Bridge (mautrix-googlechat)

**Purpose:** Bridge Google Chat/Hangouts to Matrix.

### Setup:

Similar to other mautrix bridges:

1. Generate config: `docker compose run --rm mautrix-googlechat`
2. Edit `/mnt/media/matrix/bridges/googlechat/config.yaml`
3. Start: `docker compose up -d mautrix-googlechat`
4. Restart Synapse: `docker compose restart synapse`
5. Login via bridge bot: `@googlechatbot:fig.systems`

---

## 8. Mjolnir (Moderation Bot)

**Purpose:** Advanced moderation, ban lists, anti-spam protection.

### Setup:

1. **Create bot user:**
   ```bash
   docker compose exec synapse register_new_matrix_user \
     -u mjolnir \
     -p <password> \
     -c /data/homeserver.yaml \
     http://localhost:8008
   ```

2. **Create management room:**
   - In Element, create a private room
   - Invite `@mjolnir:fig.systems`
   - Make the bot admin

3. **Generate config:**
   ```bash
   docker compose run --rm mjolnir
   ```

4. **Edit config:**
   ```bash
   nano /mnt/media/matrix/bridges/mjolnir/config.yaml
   ```

   Configure:
   ```yaml
   homeserver: https://matrix.fig.systems
   accessToken: <get_from_login>
   managementRoom: "!roomid:fig.systems"
   ```

5. **Get access token:**
   ```bash
   curl -X POST https://matrix.fig.systems/_matrix/client/r0/login \
     -H "Content-Type: application/json" \
     -d '{
       "type": "m.login.password",
       "user": "mjolnir",
       "password": "<password>"
     }'
   ```

6. **Start bot:**
   ```bash
   docker compose up -d mjolnir
   ```

### Using Mjolnir:

1. **Protect rooms:**
   - Invite Mjolnir to rooms you want to moderate
   - In the management room, send: `!mjolnir rooms add <room_id>`

2. **Subscribe to ban lists:**
   - `!mjolnir list subscribe <list_room_id>`

3. **Ban users:**
   - `!mjolnir ban @user:server.com`

4. **Commands:**
   - `!mjolnir help` - Show all commands
   - `!mjolnir status` - Bot status
   - `!mjolnir rooms` - Protected rooms

---

## 9. Matrix Hookshot (GitHub/GitLab Integration)

**Purpose:** Receive webhooks from GitHub, GitLab, and Jira in Matrix rooms.

### Setup:

1. **Generate config:**
   ```bash
   docker compose run --rm hookshot
   ```

2. **Edit config:**
   ```bash
   nano /mnt/media/matrix/hookshot/config.yaml
   ```

   Key settings:
   ```yaml
   bridge:
     domain: fig.systems
     url: https://matrix.fig.systems
     mediaUrl: https://matrix.fig.systems
     port: 9993
     bindAddress: 0.0.0.0

   listeners:
     - port: 9000
       bindAddress: 0.0.0.0
       resources:
         - webhooks

   github:
     webhook:
       secret: <random_secret>

   gitlab:
     webhook:
       secret: <random_secret>
   ```

3. **Start service:**
   ```bash
   docker compose up -d hookshot
   docker compose restart synapse
   ```

### Using Hookshot:

**For GitHub:**
1. In a Matrix room, invite `@hookshot:fig.systems`
2. Send: `!github repo owner/repo`
3. Bot will provide a webhook URL
4. Add the webhook in the GitHub repo settings
5. Set webhook URL: `https://hookshot.fig.systems/webhooks/github`
6. Add secret from config
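
GitHub signs each delivery with this shared secret (HMAC-SHA256, sent in the `X-Hub-Signature-256` header), which is handy when debugging rejected deliveries; a sketch that reproduces the digest with `openssl` (secret and body are made-up values):

```bash
SECRET="example_webhook_secret"   # must match github.webhook.secret in config.yaml
BODY='{"action":"opened"}'        # the raw request body, byte for byte
# Hex HMAC-SHA256 of the body; GitHub sends this as "sha256=<digest>"
SIG=$(printf '%s' "$BODY" | openssl dgst -sha256 -hmac "$SECRET" | sed 's/^.* //')
echo "sha256=${SIG}"
```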

**For GitLab:**
Similar process with GitLab webhooks.

**Features:**
- Issue notifications
- PR/MR updates
- Commit messages
- CI/CD status
- Custom filters

---

## Troubleshooting

### Service won't start:

```bash
# Check logs
docker compose logs <service_name>

# Check if config exists
ls -la /mnt/media/matrix/<service>/

# Regenerate config
docker compose run --rm <service_name>
```

### Bridge not connecting:

1. Check registration file exists:
   ```bash
   ls -la /mnt/media/matrix/bridges/<bridge>/registration.yaml
   ```

2. Check Synapse can read it:
   ```bash
   docker compose exec synapse cat /data/bridges/<bridge>/registration.yaml
   ```

3. Restart Synapse:
   ```bash
   docker compose restart synapse
   ```

### Can't login to admin interfaces:

- Synapse Admin: Use Matrix account credentials
- Maubot: Check `/mnt/media/matrix/maubot/config.yaml` for the password
- Matrix Registration: Use `MATRIX_REGISTRATION_ADMIN_SECRET` from `.env`

### Ports already in use:

Check what's using the port:
```bash
sudo lsof -i :<port_number>
```

### Permission issues:

Fix ownership:
```bash
sudo chown -R 1000:1000 /mnt/media/matrix/
```

---

## Useful Commands

```bash
# View all service logs
docker compose logs -f

# Restart all services
docker compose restart

# Update all services
docker compose pull
docker compose up -d

# Check service status
docker compose ps

# Create admin user
docker compose exec synapse register_new_matrix_user \
  -u <username> -p <password> -a -c /data/homeserver.yaml http://localhost:8008

# Backup database
docker compose exec postgres pg_dump -U synapse synapse > backup.sql

# Restore database
cat backup.sql | docker compose exec -T postgres psql -U synapse synapse
```
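
The backup command is easy to put on a schedule; a crontab sketch (the backup directory is an assumption, so adjust it to your layout, and note that `%` must be escaped as `\%` inside crontab):

```
# Nightly at 03:00: dump the synapse database to a dated SQL file
0 3 * * * cd /home/eduardo_figueroa/homelab/compose/services/matrix && docker compose exec -T postgres pg_dump -U synapse synapse > /mnt/media/backups/synapse-$(date +\%F).sql
```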

---

## Next Steps

1. **Set up Telegram bridge** - Most useful for Telegram users
2. **Create registration tokens** - Invite friends/family
3. **Install Maubot plugins** - Add GIF search and other features
4. **Configure Mjolnir** - Set up moderation
5. **Add GitHub webhooks** - Get repo notifications in Matrix

## Resources

- [Matrix Documentation](https://matrix.org/docs/)
- [Synapse Admin Guide](https://element-hq.github.io/synapse/latest/)
- [Maubot Plugins](https://github.com/maubot/maubot/wiki/Plugin-directory)
- [Bridge Setup Guides](https://docs.mau.fi/bridges/)

---

**File:** `compose/services/matrix/QUICKSTART.md` (new file, 216 lines)

# Matrix Quick Start Guide

Get your Matrix server running in five steps!

## Before You Start

You'll need:
1. **Telegram API credentials** (see Step 4)

## Step 1: Start Matrix Server

```bash
cd /home/eduardo_figueroa/homelab/compose/services/matrix
docker compose up -d postgres synapse
```

Wait ~30 seconds for database initialization. Watch the logs:
```bash
docker compose logs -f synapse
```

Look for: `Synapse now listening on port 8008`

## Step 2: Create Admin User

Create your first admin account:
```bash
docker exec -it matrix-synapse register_new_matrix_user -c /data/homeserver.yaml -a http://localhost:8008
```

Follow the prompts to enter a username and password.

Your Matrix ID will be: `@yourusername:fig.systems`

## Step 3: Test Login via Element

1. Go to https://app.element.io
2. Click "Sign in"
3. Click "Edit" and enter: `matrix.fig.systems`
4. Click "Continue"
5. Enter your username and password

## Step 4: Set Up Telegram Bridge

### Get API Credentials
1. Visit https://my.telegram.org (login with phone)
2. "API development tools" → Create app
3. Save your `api_id` and `api_hash`

### Configure Bridge
```bash
# Generate config
docker run --rm -v /mnt/media/matrix/bridges/telegram:/data \
  dock.mau.dev/mautrix/telegram:latest

# Edit config (use your favorite editor)
sudo nano /mnt/media/matrix/bridges/telegram/config.yaml
```

**Key settings to update:**
```yaml
homeserver:
  address: http://synapse:8008
  domain: fig.systems

appservice:
  address: http://mautrix-telegram:29317
  database: postgres://synapse:46d8cb2e8bdacf5a267a5f35bcdea4ded46e42ced008c4998e180f33e3ce07c5@postgres/telegram

telegram:
  api_id: YOUR_API_ID_HERE
  api_hash: YOUR_API_HASH_HERE

bridge:
  permissions:
    '@yourusername:fig.systems': admin
```

### Register and Start
```bash
# Copy registration to Synapse
sudo cp /mnt/media/matrix/bridges/telegram/registration.yaml \
  /mnt/media/matrix/synapse/data/telegram-registration.yaml

# Append the registration to homeserver.yaml
echo "
app_service_config_files:
  - /data/telegram-registration.yaml" | sudo tee -a /mnt/media/matrix/synapse/data/homeserver.yaml

# Restart and start bridge
docker compose restart synapse
docker compose up -d mautrix-telegram
```

### Use the Bridge
In Element:
1. Start chat with `@telegrambot:fig.systems`
2. Type: `login`
3. Follow the instructions

## Step 5: Set Up WhatsApp Bridge

```bash
# Generate config
docker run --rm -v /mnt/media/matrix/bridges/whatsapp:/data \
  dock.mau.dev/mautrix/whatsapp:latest

# Edit config
sudo nano /mnt/media/matrix/bridges/whatsapp/config.yaml
```

**Key settings:**
```yaml
homeserver:
  address: http://synapse:8008
  domain: fig.systems

appservice:
  address: http://mautrix-whatsapp:29318
  database:
    uri: postgres://synapse:46d8cb2e8bdacf5a267a5f35bcdea4ded46e42ced008c4998e180f33e3ce07c5@postgres/whatsapp

bridge:
  permissions:
    '@yourusername:fig.systems': admin
```

### Register and Start
```bash
# Copy registration
sudo cp /mnt/media/matrix/bridges/whatsapp/registration.yaml \
  /mnt/media/matrix/synapse/data/whatsapp-registration.yaml

# Update homeserver.yaml
sudo nano /mnt/media/matrix/synapse/data/homeserver.yaml
# Add to app_service_config_files:
#   - /data/whatsapp-registration.yaml

# Restart and start bridge
docker compose restart synapse
docker compose up -d mautrix-whatsapp
```

### Use the Bridge
In Element:
1. Start chat with `@whatsappbot:fig.systems`
2. Type: `login`
3. Scan QR code with your phone

## Optional: Google Chat Bridge

Google Chat requires additional Google Cloud setup (OAuth credentials).

See the full README.md for detailed instructions.

## Quick Commands

```bash
# View all logs
docker compose logs -f

# View specific service logs
docker compose logs -f synapse
docker compose logs -f mautrix-telegram

# Restart everything
docker compose restart

# Stop everything
docker compose down

# Update containers
docker compose pull && docker compose up -d
```

## Verify Everything Works

### Test Checklist
- [ ] Can login at https://app.element.io with username/password
- [ ] Can send messages in Element
- [ ] Telegram bridge responds to commands
- [ ] WhatsApp bridge shows QR code
- [ ] Can see Telegram/WhatsApp chats in Element after bridging

## Troubleshooting

### Can't login
- Check Synapse logs: `docker compose logs synapse | grep -i error`
- Verify you created a user with `register_new_matrix_user`
- Test endpoint: `curl https://matrix.fig.systems/_matrix/client/versions`

### Bridge not working
- Check bridge logs: `docker compose logs mautrix-telegram`
- Verify the registration file path in homeserver.yaml
- Ensure Synapse was restarted after adding the registration
- Check the bridge can reach Synapse: `docker compose exec mautrix-telegram ping synapse`

## Next Steps

- Invite friends to your Matrix server
- Create encrypted rooms for private conversations
- Bridge more Telegram/WhatsApp chats
- Set up Google Chat bridge for work communications
- Install Element on your phone for mobile access

## Need Help?

See README.md for:
- Detailed configuration explanations
- Google Chat bridge setup
- Federation troubleshooting
- Backup procedures
- Advanced configurations

Matrix documentation: https://matrix.org/docs/
Mautrix bridges: https://docs.mau.fi/bridges/

---

**File:** `compose/services/matrix/README.md` (new file, 322 lines)

# Matrix Server with Bridges

Complete Matrix/Synapse homeserver setup with local authentication and bridges for Telegram, WhatsApp, and Google Chat.

## Architecture

- **Synapse**: Matrix homeserver (fig.systems)
- **PostgreSQL**: Database backend
- **Traefik**: Reverse proxy with Let's Encrypt
- **Bridges**: Telegram, WhatsApp, Google Chat
- **Optional**: Element web client

## Domain Configuration

- **Server**: matrix.fig.systems
- **Server Name**: fig.systems (used for Matrix IDs like @user:fig.systems)
- **Federation**: Enabled via .well-known delegation

## Setup Instructions

### 1. Deploy Matrix Server

```bash
cd compose/services/matrix
docker compose up -d
```

Wait for Synapse to start and initialize the database. Check the logs:
```bash
docker compose logs -f synapse
```

### 2. Create Your First Admin User

Once Synapse is running, create an admin user:
```bash
docker exec -it matrix-synapse register_new_matrix_user -c /data/homeserver.yaml -a http://localhost:8008
```

Follow the prompts to create your admin account with a username and password.

### 3. Test Matrix Server

Visit https://matrix.fig.systems and you should see the Matrix homeserver info.

Try logging in:
1. Go to https://app.element.io
2. Click "Sign in"
3. Click "Edit" next to the homeserver
4. Enter: `matrix.fig.systems`
5. Click "Continue"
6. Enter your username and password

### 4. Configure Telegram Bridge

**Get Telegram API Credentials:**
1. Visit https://my.telegram.org
2. Log in with your phone number
3. Go to "API development tools"
4. Create an app (use any title/short name)
5. Note your `api_id` (number) and `api_hash` (string)

**Generate Bridge Config:**
```bash
# Generate initial config
docker run --rm -v /mnt/media/matrix/bridges/telegram:/data dock.mau.dev/mautrix/telegram:latest

# Edit the config
nano /mnt/media/matrix/bridges/telegram/config.yaml
```

**Update these settings in config.yaml:**
- `homeserver.address`: `http://synapse:8008`
- `homeserver.domain`: `fig.systems`
- `appservice.address`: `http://mautrix-telegram:29317`
- `appservice.hostname`: `0.0.0.0`
- `appservice.port`: `29317`
- `appservice.database`: `postgres://synapse:PASSWORD@postgres/telegram` (use password from .env)
- `telegram.api_id`: Your API ID
- `telegram.api_hash`: Your API hash
- `bridge.permissions`: Add your Matrix ID with admin level
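
For reference, the resulting permissions block looks roughly like this (the username is a placeholder; the domain-level entry lets any local user use the bridge):

```yaml
bridge:
  permissions:
    '@yourusername:fig.systems': admin   # full control of the bridge
    'fig.systems': user                  # any @...:fig.systems account may use it
```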
|
||||||
|
|
||||||
|
**Register the bridge:**
|
||||||
|
```bash
|
||||||
|
# Copy the registration file to Synapse
|
||||||
|
cp /mnt/media/matrix/bridges/telegram/registration.yaml /mnt/media/matrix/synapse/data/telegram-registration.yaml
|
||||||
|
```
|
||||||
|
|
||||||
|
Add to `homeserver.yaml` under `app_service_config_files`:
|
||||||
|
```yaml
|
||||||
|
app_service_config_files:
|
||||||
|
- /data/telegram-registration.yaml
|
||||||
|
```
|
||||||
|
|
||||||
|
Restart Synapse:
|
||||||
|
```bash
|
||||||
|
docker compose restart synapse
|
||||||
|
```
|
||||||
|
|
||||||
|
Start the bridge:
|
||||||
|
```bash
|
||||||
|
docker compose up -d mautrix-telegram
|
||||||
|
```
|
||||||
|
|
||||||
|
**Use the bridge:**
|
||||||
|
1. In Element, start a chat with `@telegrambot:fig.systems`
|
||||||
|
2. Send `login` and follow the instructions
|
||||||
|
|
||||||
|
### 5. Configure WhatsApp Bridge
|
||||||
|
|
||||||
|
**Generate Bridge Config:**
|
||||||
|
```bash
|
||||||
|
# Generate initial config
|
||||||
|
docker run --rm -v /mnt/media/matrix/bridges/whatsapp:/data dock.mau.dev/mautrix/whatsapp:latest
|
||||||
|
|
||||||
|
# Edit the config
|
||||||
|
nano /mnt/media/matrix/bridges/whatsapp/config.yaml
|
||||||
|
```
|
||||||
|
|
||||||
|
**Update these settings in config.yaml:**
|
||||||
|
- `homeserver.address`: `http://synapse:8008`
|
||||||
|
- `homeserver.domain`: `fig.systems`
|
||||||
|
- `appservice.address`: `http://mautrix-whatsapp:29318`
|
||||||
|
- `appservice.hostname`: `0.0.0.0`
|
||||||
|
- `appservice.port`: `29318`
|
||||||
|
- `appservice.database.uri`: `postgres://synapse:PASSWORD@postgres/whatsapp` (use password from .env)
|
||||||
|
- `bridge.permissions`: Add your Matrix ID with admin level
|
||||||
|
|
||||||
|
**Register the bridge:**
|
||||||
|
```bash
|
||||||
|
# Copy the registration file to Synapse
|
||||||
|
cp /mnt/media/matrix/bridges/whatsapp/registration.yaml /mnt/media/matrix/synapse/data/whatsapp-registration.yaml
|
||||||
|
```
|
||||||
|
|
||||||
|
Add to `homeserver.yaml`:
|
||||||
|
```yaml
|
||||||
|
app_service_config_files:
|
||||||
|
- /data/telegram-registration.yaml
|
||||||
|
- /data/whatsapp-registration.yaml
|
||||||
|
```
|
||||||
|
|
||||||
|
Restart Synapse:
|
||||||
|
```bash
|
||||||
|
docker compose restart synapse
|
||||||
|
```
|
||||||
|
|
||||||
|
Start the bridge:
|
||||||
|
```bash
|
||||||
|
docker compose up -d mautrix-whatsapp
|
||||||
|
```
|
||||||
|
|
||||||
|
**Use the bridge:**
|
||||||
|
1. In Element, start a chat with `@whatsappbot:fig.systems`
|
||||||
|
2. Send `login`
|
||||||
|
3. Scan the QR code with WhatsApp on your phone (like WhatsApp Web)
|
||||||
|
|
||||||
|
### 6. Configure Google Chat Bridge
|
||||||
|
|
||||||
|
**Prerequisites:**
|
||||||
|
- Google Cloud Project
|
||||||
|
- Google Chat API enabled
|
||||||
|
- OAuth 2.0 credentials
|
||||||
|
|
||||||
|
**Setup Google Cloud:**
|
||||||
|
1. Go to https://console.cloud.google.com
|
||||||
|
2. Create a new project or select existing
|
||||||
|
3. Enable "Google Chat API"
|
||||||
|
4. Create OAuth 2.0 credentials:
|
||||||
|
- Application type: Desktop app
|
||||||
|
- Download the JSON file
|
||||||
|
|
||||||
|
**Generate Bridge Config:**
|
||||||
|
```bash
|
||||||
|
# Generate initial config
|
||||||
|
docker run --rm -v /mnt/media/matrix/bridges/googlechat:/data dock.mau.dev/mautrix/googlechat:latest
|
||||||
|
|
||||||
|
# Edit the config
|
||||||
|
nano /mnt/media/matrix/bridges/googlechat/config.yaml
|
||||||
|
```
|
||||||
|
|
||||||
|
**Update these settings in config.yaml:**
|
||||||
|
- `homeserver.address`: `http://synapse:8008`
|
||||||
|
- `homeserver.domain`: `fig.systems`
|
||||||
|
- `appservice.address`: `http://mautrix-googlechat:29319`
|
||||||
|
- `appservice.hostname`: `0.0.0.0`
|
||||||
|
- `appservice.port`: `29319`
|
||||||
|
- `appservice.database`: `postgres://synapse:PASSWORD@postgres/googlechat` - a single connection URI (the Python-based bridges take one string here, unlike the WhatsApp bridge's `database.uri`; use the password from .env)
|
||||||
|
- `bridge.permissions`: Add your Matrix ID with admin level
|
||||||
|
|
||||||
|
**Register the bridge:**
|
||||||
|
```bash
|
||||||
|
# Copy the registration file to Synapse
|
||||||
|
cp /mnt/media/matrix/bridges/googlechat/registration.yaml /mnt/media/matrix/synapse/data/googlechat-registration.yaml
|
||||||
|
```
|
||||||
|
|
||||||
|
Add to `homeserver.yaml`:
|
||||||
|
```yaml
|
||||||
|
app_service_config_files:
|
||||||
|
- /data/telegram-registration.yaml
|
||||||
|
- /data/whatsapp-registration.yaml
|
||||||
|
- /data/googlechat-registration.yaml
|
||||||
|
```
|
||||||
|
|
||||||
|
Restart Synapse:
|
||||||
|
```bash
|
||||||
|
docker compose restart synapse
|
||||||
|
```
|
||||||
|
|
||||||
|
Start the bridge:
|
||||||
|
```bash
|
||||||
|
docker compose up -d mautrix-googlechat
|
||||||
|
```
|
||||||
|
|
||||||
|
**Use the bridge:**
|
||||||
|
1. In Element, start a chat with `@googlechatbot:fig.systems`
|
||||||
|
2. Send `login`
|
||||||
|
3. Follow the OAuth flow to authenticate with your Google account
|
||||||
|
|
||||||
|
**Note for Work Google Chat:** Your organization's Google Workspace admin might need to approve the OAuth app.
|
||||||
|
|
||||||
|
## Client Apps
|
||||||
|
|
||||||
|
### Element (Recommended)
|
||||||
|
|
||||||
|
- **Web:** https://app.element.io
|
||||||
|
- **iOS:** https://apps.apple.com/app/element-messenger/id1083446067
|
||||||
|
- **Android:** https://play.google.com/store/apps/details?id=im.vector.app
|
||||||
|
|
||||||
|
**Setup:**
|
||||||
|
1. Open Element
|
||||||
|
2. Click "Sign in"
|
||||||
|
3. Click "Edit" next to homeserver
|
||||||
|
4. Enter: `matrix.fig.systems`
|
||||||
|
5. Click "Continue"
|
||||||
|
6. Enter your username and password
|
||||||
|
|
||||||
|
### Alternative Clients
|
||||||
|
|
||||||
|
- **FluffyChat**: Modern, lightweight client
|
||||||
|
- **SchildiChat**: Element fork with UI improvements
|
||||||
|
- **Nheko**: Desktop client
|
||||||
|
|
||||||
|
All clients work by pointing to `matrix.fig.systems` as the homeserver.
|
||||||
|
|
||||||
|
## Maintenance
|
||||||
|
|
||||||
|
### View Logs
|
||||||
|
```bash
|
||||||
|
# All services
|
||||||
|
docker compose logs -f
|
||||||
|
|
||||||
|
# Specific service
|
||||||
|
docker compose logs -f synapse
|
||||||
|
docker compose logs -f mautrix-telegram
|
||||||
|
```
|
||||||
|
|
||||||
|
### Restart Services
|
||||||
|
```bash
|
||||||
|
# All
|
||||||
|
docker compose restart
|
||||||
|
|
||||||
|
# Specific
|
||||||
|
docker compose restart synapse
|
||||||
|
```
|
||||||
|
|
||||||
|
### Update Containers
|
||||||
|
```bash
|
||||||
|
docker compose pull
|
||||||
|
docker compose up -d
|
||||||
|
```
|
||||||
|
|
||||||
|
### Backup
|
||||||
|
|
||||||
|
Important directories:
|
||||||
|
- `/mnt/media/matrix/synapse/data` - Synapse configuration and signing keys
|
||||||
|
- `/mnt/media/matrix/synapse/media` - Uploaded media files
|
||||||
|
- `/mnt/media/matrix/postgres` - Database
|
||||||
|
- `/mnt/media/matrix/bridges/` - Bridge configurations
|
||||||
|
|
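The directories above can be archived with plain `tar`. A minimal cold-backup sketch, assuming the stack is stopped first so the database files are consistent (the source paths match this deployment; the destination path in the usage note is an example):

```shell
# Archive one directory as a dated tarball in $dest.
# Created from the parent directory so paths inside the archive stay relative.
backup_dir() {
    local src=$1 dest=$2
    local name
    name=$(basename "$src")
    tar -czf "${dest}/${name}-$(date +%F).tar.gz" -C "$(dirname "$src")" "$name"
}

# Usage (run from compose/services/matrix):
#   docker compose stop
#   for d in /mnt/media/matrix/synapse/data /mnt/media/matrix/synapse/media \
#            /mnt/media/matrix/postgres /mnt/media/matrix/bridges; do
#       backup_dir "$d" /mnt/backups/matrix   # destination is an example path
#   done
#   docker compose start
```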
||||||
|
## Troubleshooting
|
||||||
|
|
||||||
|
### Can't connect to homeserver
|
||||||
|
- Check Synapse logs: `docker compose logs synapse`
|
||||||
|
- Verify Traefik routing: `docker compose -f compose/core/traefik/compose.yaml logs`
|
||||||
|
- Test endpoint: `curl -k https://matrix.fig.systems/_matrix/client/versions`
|
||||||
|
|
||||||
|
### Login not working
|
||||||
|
- Verify you created a user with `register_new_matrix_user`
|
||||||
|
- Check Synapse logs for authentication errors
|
||||||
|
- Ensure the homeserver is set to `matrix.fig.systems` in your client
|
||||||
|
|
||||||
|
### Bridge not connecting
|
||||||
|
- Check bridge logs: `docker compose logs mautrix-telegram`
|
||||||
|
- Verify registration file is in Synapse config
|
||||||
|
- Ensure bridge database is created in PostgreSQL
|
||||||
|
- Restart Synapse after adding registration files
|
||||||
|
|
||||||
|
### Federation issues
|
||||||
|
- Ensure ports 80 and 443 are accessible
|
||||||
|
- Check `.well-known` delegation is working
|
||||||
|
- Test federation: https://federationtester.matrix.org/
|
||||||
|
|
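The `.well-known` check can be done by hand: fetch `https://fig.systems/.well-known/matrix/server` and confirm it names the real homeserver host. A sketch of the parsing step (the sample document below is what a healthy response for this deployment should look like):

```shell
# In practice: doc=$(curl -s https://fig.systems/.well-known/matrix/server)
doc='{"m.server": "matrix.fig.systems:443"}'

# Extract the delegated host:port without needing jq
server=$(printf '%s' "$doc" | sed -n 's/.*"m.server": *"\([^"]*\)".*/\1/p')
echo "delegated to: $server"
```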
||||||
|
## Security Notes
|
||||||
|
|
||||||
|
- Users authenticate with local Matrix passwords
|
||||||
|
- Public registration is disabled (use `register_new_matrix_user` to create accounts)
|
||||||
|
- Federation uses standard HTTPS (443) with .well-known delegation
|
||||||
|
- All bridges run on internal network only
|
||||||
|
- Media uploads limited to 50MB
|
||||||
|
|
||||||
|
## Configuration Files
|
||||||
|
|
||||||
|
- `compose.yaml` - Docker Compose configuration
|
||||||
|
- `homeserver.yaml` - Synapse configuration
|
||||||
|
- `.env` - Environment variables and secrets
|
||||||
|
|
||||||
|
## Resources
|
||||||
|
|
||||||
|
- Matrix Documentation: https://matrix.org/docs/
|
||||||
|
- Synapse Documentation: https://element-hq.github.io/synapse/latest/
|
||||||
|
- Mautrix Bridges: https://docs.mau.fi/bridges/
|
||||||
|
- Element Help: https://element.io/help
|
||||||
327
compose/services/matrix/ROOM-MANAGEMENT.md
Normal file
|
|
@ -0,0 +1,327 @@
|
||||||
|
# Matrix Room Management Guide
|
||||||
|
|
||||||
|
## Understanding Matrix Room Concepts
|
||||||
|
|
||||||
|
### Auto-Join Rooms
|
||||||
|
**What they are:** Rooms that users automatically join when they create an account.
|
||||||
|
|
||||||
|
**Configured in:** `homeserver.yaml` (lines 118-120)
|
||||||
|
```yaml
|
||||||
|
auto_join_rooms:
|
||||||
|
- "#general:fig.systems"
|
||||||
|
- "#announcements:fig.systems"
|
||||||
|
- "#support:fig.systems"
|
||||||
|
```
|
||||||
|
|
||||||
|
**How it works:**
|
||||||
|
- When a new user registers, they're automatically added to these rooms
|
||||||
|
- Great for onboarding and ensuring everyone sees important channels
|
||||||
|
- Users can leave these rooms later if they want
|
||||||
|
|
||||||
|
**To add more rooms:**
|
||||||
|
1. Create the room first (using the script or manually)
|
||||||
|
2. Add its alias to the `auto_join_rooms` list in homeserver.yaml
|
||||||
|
3. Restart Synapse: `docker restart matrix-synapse`
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### Room Directory (Public Room List)
|
||||||
|
|
||||||
|
**What it is:** A searchable list of public rooms that users can browse and join.
|
||||||
|
|
||||||
|
**Where to find it:**
|
||||||
|
- **In Element:** Click "Explore rooms" or the + button → "Explore public rooms"
|
||||||
|
- **In Admin Panel:** Navigate to "Rooms" section to see all rooms and their visibility
|
||||||
|
|
||||||
|
**How rooms appear in the directory:**
|
||||||
|
1. Room must be created with `visibility: public`
|
||||||
|
2. Room must be published to the directory
|
||||||
|
3. Users can search and join these rooms without an invite
|
||||||
|
|
||||||
|
**Room Visibility Settings:**
|
||||||
|
- `public` - Listed in room directory, anyone can find and join
|
||||||
|
- `private` - Not listed, users need an invite or direct link
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Quick Setup: Create Default Rooms
|
||||||
|
|
||||||
|
Run this script to create the three default auto-join rooms:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
./create-default-rooms.sh admin yourpassword
|
||||||
|
```
|
||||||
|
|
||||||
|
This will create:
|
||||||
|
- **#general:fig.systems** - General discussion
|
||||||
|
- **#announcements:fig.systems** - Important updates
|
||||||
|
- **#support:fig.systems** - Help and questions
|
||||||
|
|
||||||
|
All rooms will be:
|
||||||
|
- ✅ Public and searchable
|
||||||
|
- ✅ Listed in room directory
|
||||||
|
- ✅ Auto-joined by new users
|
||||||
|
- ✅ Allow anyone to speak (not read-only)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Manual Room Management
|
||||||
|
|
||||||
|
### Via Synapse Admin Panel
|
||||||
|
|
||||||
|
**Access:** https://admin.matrix.fig.systems
|
||||||
|
|
||||||
|
**Room Management Features:**
|
||||||
|
|
||||||
|
1. **View All Rooms**
|
||||||
|
- Navigate to "Rooms" in the sidebar
|
||||||
|
- See room ID, name, members, aliases
|
||||||
|
- View room details and settings
|
||||||
|
|
||||||
|
2. **Room Directory Settings**
|
||||||
|
- Click on a room
|
||||||
|
- Find "Publish to directory" toggle
|
||||||
|
- Enable/disable public listing
|
||||||
|
|
||||||
|
3. **Room Moderation**
|
||||||
|
- View and remove members
|
||||||
|
- Delete rooms
|
||||||
|
- View room state events
|
||||||
|
- See room statistics
|
||||||
|
|
||||||
|
4. **Room Aliases**
|
||||||
|
- View all aliases pointing to a room
|
||||||
|
- Add new aliases
|
||||||
|
- Remove old aliases
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### Via Element Web Client
|
||||||
|
|
||||||
|
**Access:** https://chat.fig.systems
|
||||||
|
|
||||||
|
**Create a Room:**
|
||||||
|
1. Click the + button or "Create room"
|
||||||
|
2. Set room name and topic
|
||||||
|
3. Choose "Public room" for directory listing
|
||||||
|
4. Set room address (alias) - e.g., `general`
|
||||||
|
5. Enable "List this room in the room directory"
|
||||||
|
|
||||||
|
**Publish Existing Room to Directory:**
|
||||||
|
1. Open the room
|
||||||
|
2. Click room name → Settings
|
||||||
|
3. Go to "Security & Privacy"
|
||||||
|
4. Under "Room visibility" select "Public"
|
||||||
|
5. Go to "General"
|
||||||
|
6. Enable "Publish this room to the public room directory"
|
||||||
|
|
||||||
|
**Set Room as Auto-Join:**
|
||||||
|
1. Note the room alias (e.g., #gaming:fig.systems)
|
||||||
|
2. Edit homeserver.yaml and add to `auto_join_rooms`
|
||||||
|
3. Restart Synapse
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Room Types and Use Cases
|
||||||
|
|
||||||
|
### 1. General/Community Rooms
|
||||||
|
```text
|
||||||
|
# Open to all, listed in directory, auto-join
|
||||||
|
Preset: public_chat
|
||||||
|
Visibility: public
|
||||||
|
History: shared (new joiners can see history)
|
||||||
|
```
|
||||||
|
**Best for:** General chat, announcements, community discussions
|
||||||
|
|
||||||
|
### 2. Private Team Rooms
|
||||||
|
```text
|
||||||
|
# Invite-only, not in directory
|
||||||
|
Preset: private_chat
|
||||||
|
Visibility: private
|
||||||
|
History: shared or invited (configurable)
|
||||||
|
```
|
||||||
|
**Best for:** Team channels, private projects, sensitive discussions
|
||||||
|
|
||||||
|
### 3. Read-Only Announcement Rooms
|
||||||
|
```text
|
||||||
|
# Public, but only admins/mods can post
|
||||||
|
Preset: public_chat
|
||||||
|
Visibility: public
|
||||||
|
Power levels: events_default: 50, users_default: 0
|
||||||
|
```
|
||||||
|
**Best for:** Official announcements, server updates, rules
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Room Alias vs Room ID
|
||||||
|
|
||||||
|
**Room ID:** `!abc123def456:fig.systems`
|
||||||
|
- Permanent, immutable identifier
|
||||||
|
- Looks cryptic, not user-friendly
|
||||||
|
- Required for API calls
|
||||||
|
|
||||||
|
**Room Alias:** `#general:fig.systems`
|
||||||
|
- Human-readable name
|
||||||
|
- Can be changed or removed
|
||||||
|
- Points to a Room ID
|
||||||
|
- Used in auto_join_rooms config
|
||||||
|
|
||||||
|
**Multiple aliases:** A room can have multiple aliases:
|
||||||
|
- `#general:fig.systems`
|
||||||
|
- `#lobby:fig.systems`
|
||||||
|
- `#welcome:fig.systems`
|
||||||
|
|
||||||
|
All point to the same room!
|
||||||
|
|
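When you need the Room ID behind an alias (e.g. for an API call), the client-server directory endpoint resolves it; the only wrinkle is that the alias's leading `#` must be URL-encoded as `%23`. A small sketch of building that request URL:

```shell
# Build the directory-lookup URL for a room alias on this homeserver,
# URL-encoding the leading '#'.
alias_to_url() {
    local alias=$1
    printf 'https://matrix.fig.systems/_matrix/client/v3/directory/room/%s' \
        "${alias/\#/%23}"
}

# Then: curl -s "$(alias_to_url '#general:fig.systems')"
# should return a JSON body containing the room_id and candidate servers.
```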
||||||
|
---
|
||||||
|
|
||||||
|
## Advanced: Space Management
|
||||||
|
|
||||||
|
**Spaces** are special rooms that group other rooms together (like Discord servers).
|
||||||
|
|
||||||
|
**Create a Space:**
|
||||||
|
1. In Element: Click + → "Create new space"
|
||||||
|
2. Add rooms to the space
|
||||||
|
3. Set space visibility (public/private)
|
||||||
|
4. Users can join the space to see all its rooms
|
||||||
|
|
||||||
|
**Use cases:**
|
||||||
|
- Group rooms by topic (Gaming Space, Work Space)
|
||||||
|
- Create sub-communities within your server
|
||||||
|
- Organize rooms hierarchically
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Common Tasks
|
||||||
|
|
||||||
|
### Add a new auto-join room
|
||||||
|
|
||||||
|
1. Create the room (use script or manually)
|
||||||
|
2. Edit `homeserver.yaml`:
|
||||||
|
```yaml
|
||||||
|
auto_join_rooms:
|
||||||
|
- "#general:fig.systems"
|
||||||
|
- "#announcements:fig.systems"
|
||||||
|
- "#support:fig.systems"
|
||||||
|
- "#your-new-room:fig.systems" # Add this
|
||||||
|
```
|
||||||
|
3. `docker restart matrix-synapse`
|
||||||
|
|
||||||
|
### Remove a room from auto-join
|
||||||
|
|
||||||
|
1. Edit `homeserver.yaml` and remove the line
|
||||||
|
2. `docker restart matrix-synapse`
|
||||||
|
3. Note: Existing users won't be removed from the room
|
||||||
|
|
||||||
|
### Make a room public/private
|
||||||
|
|
||||||
|
**Via Element:**
|
||||||
|
1. Room Settings → Security & Privacy
|
||||||
|
2. Change "Who can access this room"
|
||||||
|
3. Toggle directory listing
|
||||||
|
|
||||||
|
**Via Admin Panel:**
|
||||||
|
1. Find room in Rooms list
|
||||||
|
2. Edit visibility settings
|
||||||
|
|
||||||
|
### Delete a room
|
||||||
|
|
||||||
|
**Via Admin Panel:**
|
||||||
|
1. Go to Rooms
|
||||||
|
2. Find the room
|
||||||
|
3. Click "Delete room"
|
||||||
|
4. Confirm deletion
|
||||||
|
5. Options: Purge messages, block room
|
||||||
|
|
||||||
|
**Note:** Deletion is permanent and affects all users!
|
||||||
|
|
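The admin panel drives Synapse's admin API under the hood, so the same deletion can be issued directly. A sketch of the v1 Delete Room endpoint (the exact shape can vary by Synapse version, so check the admin API docs for yours; the access token must belong to a server admin, and `block`/`purge` map to the panel's options):

```
POST /_synapse/admin/v1/rooms/<room_id>/delete
Authorization: Bearer <admin_access_token>

{
  "block": true,
  "purge": true,
  "message": "This room has been shut down by the server admin."
}
```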
||||||
|
---
|
||||||
|
|
||||||
|
## Troubleshooting
|
||||||
|
|
||||||
|
### Users not auto-joining rooms
|
||||||
|
|
||||||
|
**Check:**
|
||||||
|
1. Room aliases are correct in homeserver.yaml
|
||||||
|
2. Rooms actually exist
|
||||||
|
3. Synapse was restarted after config change
|
||||||
|
4. Check Synapse logs: `docker logs matrix-synapse | grep auto_join`
|
||||||
|
|
||||||
|
### Room not appearing in directory
|
||||||
|
|
||||||
|
**Check:**
|
||||||
|
1. Room visibility is set to "public"
|
||||||
|
2. "Publish to directory" is enabled
|
||||||
|
3. Server allows public room listings
|
||||||
|
4. Try searching by exact alias
|
||||||
|
|
||||||
|
### Can't create room with alias
|
||||||
|
|
||||||
|
**Possible causes:**
|
||||||
|
- Alias already taken
|
||||||
|
- Invalid characters (use lowercase, numbers, hyphens)
|
||||||
|
- Missing permissions
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Best Practices
|
||||||
|
|
||||||
|
✅ **Do:**
|
||||||
|
- Use clear, descriptive room names
|
||||||
|
- Set appropriate topics for all rooms
|
||||||
|
- Make announcements room read-only for most users
|
||||||
|
- Use Spaces to organize many rooms
|
||||||
|
- Regularly review and clean up unused rooms
|
||||||
|
|
||||||
|
❌ **Don't:**
|
||||||
|
- Auto-join users to too many rooms (overwhelming)
|
||||||
|
- Make all rooms public if you want privacy
|
||||||
|
- Forget to set room topics (helps users understand purpose)
|
||||||
|
- Create duplicate rooms with similar purposes
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Room Configuration Reference
|
||||||
|
|
||||||
|
### Power Levels Explained
|
||||||
|
|
||||||
|
Power levels control what users can do in a room:
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
power_level_content_override:
|
||||||
|
events_default: 0 # Power needed to send messages (0 = anyone)
|
||||||
|
invite: 0 # Power needed to invite users
|
||||||
|
state_default: 50 # Power needed to change room settings
|
||||||
|
users_default: 0 # Default power for new users
|
||||||
|
redact: 50 # Power needed to delete messages
|
||||||
|
kick: 50 # Power needed to kick users
|
||||||
|
ban: 50 # Power needed to ban users
|
||||||
|
```
|
||||||
|
|
||||||
|
**Common setups:**
|
||||||
|
|
||||||
|
- **Open discussion room:** `events_default: 0` (anyone can talk)
|
||||||
|
- **Read-only room:** `events_default: 50`, `users_default: 0` (only mods+ can post)
|
||||||
|
- **Moderated room:** `events_default: 0`, but specific users have elevated power
|
||||||
|
|
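As a concrete instance of the read-only pattern, a `createRoom` request body would carry the override like this (a sketch; keys not overridden fall back to the Matrix defaults):

```json
{
  "name": "Announcements",
  "preset": "public_chat",
  "visibility": "public",
  "power_level_content_override": {
    "events_default": 50,
    "users_default": 0
  }
}
```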
||||||
|
### History Visibility
|
||||||
|
|
||||||
|
- `world_readable` - Anyone can read, even without joining
|
||||||
|
- `shared` - Visible to all room members (past and present)
|
||||||
|
- `invited` - Visible only from when user was invited
|
||||||
|
- `joined` - Visible only from when user joined
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Summary
|
||||||
|
|
||||||
|
- **Auto-Join Rooms** (`homeserver.yaml:118-120`): users join automatically on signup
|
||||||
|
- **Room Directory:** public searchable list that users can browse and join
|
||||||
|
- **Admin Panel:** manage all rooms, visibility, and members
|
||||||
|
- **Element Client:** create and configure rooms through the UI
|
||||||
|
|
||||||
|
Your setup:
|
||||||
|
- ✅ Auto-join configured for 3 default rooms
|
||||||
|
- ✅ Script ready to create them: `./create-default-rooms.sh`
|
||||||
|
- ✅ All new users will join #general, #announcements, #support
|
||||||
|
- ✅ Rooms will be public and in directory
|
||||||
281
compose/services/matrix/compose.yaml
Normal file
|
|
@ -0,0 +1,281 @@
|
||||||
|
services:
|
||||||
|
postgres:
|
||||||
|
image: postgres:16-alpine
|
||||||
|
container_name: matrix-postgres
|
||||||
|
environment:
|
||||||
|
POSTGRES_USER: ${POSTGRES_USER}
|
||||||
|
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
|
||||||
|
POSTGRES_DB: ${POSTGRES_DB}
|
||||||
|
POSTGRES_INITDB_ARGS: ${POSTGRES_INITDB_ARGS}
|
||||||
|
volumes:
|
||||||
|
- /mnt/media/matrix/postgres:/var/lib/postgresql/data
|
||||||
|
restart: unless-stopped
|
||||||
|
networks:
|
||||||
|
- matrix-internal
|
||||||
|
healthcheck:
|
||||||
|
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER}"]
|
||||||
|
interval: 10s
|
||||||
|
timeout: 5s
|
||||||
|
retries: 5
|
||||||
|
|
||||||
|
synapse:
|
||||||
|
image: matrixdotorg/synapse:latest
|
||||||
|
container_name: matrix-synapse
|
||||||
|
environment:
|
||||||
|
SYNAPSE_SERVER_NAME: ${SERVER_NAME}
|
||||||
|
SYNAPSE_REPORT_STATS: "no"
|
||||||
|
TZ: ${TZ}
|
||||||
|
UID: ${PUID}
|
||||||
|
GID: ${PGID}
|
||||||
|
volumes:
|
||||||
|
- /mnt/media/matrix/synapse/data:/data
|
||||||
|
- /mnt/media/matrix/synapse/media:/media
|
||||||
|
- ./homeserver.yaml:/data/homeserver.yaml:ro
|
||||||
|
- /mnt/media/matrix/bridges/telegram:/data/bridges/telegram:ro
|
||||||
|
- /mnt/media/matrix/bridges/whatsapp:/data/bridges/whatsapp:ro
|
||||||
|
- /mnt/media/matrix/bridges/googlechat:/data/bridges/googlechat:ro
|
||||||
|
- /mnt/media/matrix/bridges/discord:/data/bridges/discord:ro
|
||||||
|
depends_on:
|
||||||
|
postgres:
|
||||||
|
condition: service_healthy
|
||||||
|
restart: unless-stopped
|
||||||
|
networks:
|
||||||
|
- homelab
|
||||||
|
- matrix-internal
|
||||||
|
labels:
|
||||||
|
# Traefik
|
||||||
|
traefik.enable: true
|
||||||
|
traefik.docker.network: homelab
|
||||||
|
|
||||||
|
# Matrix Client-Server and Federation API (both on same endpoint with .well-known delegation)
|
||||||
|
traefik.http.routers.matrix.rule: Host(`${TRAEFIK_HOST}`)
|
||||||
|
traefik.http.routers.matrix.entrypoints: websecure
|
||||||
|
traefik.http.routers.matrix.tls.certresolver: letsencrypt
|
||||||
|
traefik.http.routers.matrix.middlewares: matrix-headers
|
||||||
|
traefik.http.services.matrix.loadbalancer.server.port: 8008
|
||||||
|
|
||||||
|
# Headers middleware for Matrix
|
||||||
|
traefik.http.middlewares.matrix-headers.headers.customrequestheaders.X-Forwarded-Proto: https
|
||||||
|
traefik.http.middlewares.matrix-headers.headers.customresponseheaders.X-Frame-Options: SAMEORIGIN
|
||||||
|
traefik.http.middlewares.matrix-headers.headers.customresponseheaders.X-Content-Type-Options: nosniff
|
||||||
|
|
||||||
|
# Homarr Discovery
|
||||||
|
homarr.name: Matrix
|
||||||
|
homarr.group: Services
|
||||||
|
homarr.icon: mdi:matrix
|
||||||
|
|
||||||
|
# Telegram Bridge
|
||||||
|
mautrix-telegram:
|
||||||
|
image: dock.mau.dev/mautrix/telegram:latest
|
||||||
|
container_name: matrix-telegram-bridge
|
||||||
|
restart: unless-stopped
|
||||||
|
volumes:
|
||||||
|
- /mnt/media/matrix/bridges/telegram:/data
|
||||||
|
depends_on:
|
||||||
|
synapse:
|
||||||
|
condition: service_started
|
||||||
|
networks:
|
||||||
|
- matrix-internal
|
||||||
|
|
||||||
|
# WhatsApp Bridge
|
||||||
|
mautrix-whatsapp:
|
||||||
|
image: dock.mau.dev/mautrix/whatsapp:latest
|
||||||
|
container_name: matrix-whatsapp-bridge
|
||||||
|
restart: unless-stopped
|
||||||
|
volumes:
|
||||||
|
- /mnt/media/matrix/bridges/whatsapp:/data
|
||||||
|
depends_on:
|
||||||
|
synapse:
|
||||||
|
condition: service_started
|
||||||
|
networks:
|
||||||
|
- matrix-internal
|
||||||
|
|
||||||
|
# Google Chat Bridge
|
||||||
|
mautrix-googlechat:
|
||||||
|
image: dock.mau.dev/mautrix/googlechat:latest
|
||||||
|
container_name: matrix-googlechat-bridge
|
||||||
|
restart: unless-stopped
|
||||||
|
volumes:
|
||||||
|
- /mnt/media/matrix/bridges/googlechat:/data
|
||||||
|
depends_on:
|
||||||
|
synapse:
|
||||||
|
condition: service_started
|
||||||
|
networks:
|
||||||
|
- matrix-internal
|
||||||
|
|
||||||
|
# Element Web Client
|
||||||
|
element-web:
|
||||||
|
image: vectorim/element-web:latest
|
||||||
|
container_name: matrix-element-web
|
||||||
|
restart: unless-stopped
|
||||||
|
volumes:
|
||||||
|
- ./element-config.json:/app/config.json:ro
|
||||||
|
networks:
|
||||||
|
- homelab
|
||||||
|
labels:
|
||||||
|
# Traefik
|
||||||
|
traefik.enable: true
|
||||||
|
traefik.docker.network: homelab
|
||||||
|
|
||||||
|
# Element Web UI
|
||||||
|
traefik.http.routers.element.rule: Host(`chat.fig.systems`)
|
||||||
|
traefik.http.routers.element.entrypoints: websecure
|
||||||
|
traefik.http.routers.element.tls.certresolver: letsencrypt
|
||||||
|
traefik.http.services.element.loadbalancer.server.port: 80
|
||||||
|
|
||||||
|
# Homarr Discovery
|
||||||
|
homarr.name: Element
|
||||||
|
homarr.group: Services
|
||||||
|
homarr.icon: mdi:chat
|
||||||
|
|
||||||
|
# Synapse Admin - Web UI for managing users and rooms
|
||||||
|
synapse-admin:
|
||||||
|
image: awesometechnologies/synapse-admin:latest
|
||||||
|
container_name: matrix-synapse-admin
|
||||||
|
restart: unless-stopped
|
||||||
|
networks:
|
||||||
|
- homelab
|
||||||
|
labels:
|
||||||
|
# Traefik
|
||||||
|
traefik.enable: true
|
||||||
|
traefik.docker.network: homelab
|
||||||
|
|
||||||
|
# Synapse Admin UI
|
||||||
|
traefik.http.routers.synapse-admin.rule: Host(`admin.matrix.fig.systems`)
|
||||||
|
traefik.http.routers.synapse-admin.entrypoints: websecure
|
||||||
|
traefik.http.routers.synapse-admin.tls.certresolver: letsencrypt
|
||||||
|
traefik.http.services.synapse-admin.loadbalancer.server.port: 80
|
||||||
|
|
||||||
|
# Homarr Discovery
|
||||||
|
homarr.name: Matrix Admin
|
||||||
|
homarr.group: Services
|
||||||
|
homarr.icon: mdi:shield-account
|
||||||
|
|
||||||
|
# Maubot - Modular bot framework
|
||||||
|
maubot:
|
||||||
|
image: dock.mau.dev/maubot/maubot:latest
|
||||||
|
container_name: matrix-maubot
|
||||||
|
restart: unless-stopped
|
||||||
|
volumes:
|
||||||
|
- /mnt/media/matrix/maubot:/data
|
||||||
|
depends_on:
|
||||||
|
synapse:
|
||||||
|
condition: service_started
|
||||||
|
networks:
|
||||||
|
- homelab
|
||||||
|
- matrix-internal
|
||||||
|
labels:
|
||||||
|
# Traefik
|
||||||
|
traefik.enable: true
|
||||||
|
traefik.docker.network: homelab
|
||||||
|
|
||||||
|
# Maubot Management UI
|
||||||
|
traefik.http.routers.maubot.rule: Host(`maubot.fig.systems`)
|
||||||
|
traefik.http.routers.maubot.entrypoints: websecure
|
||||||
|
traefik.http.routers.maubot.tls.certresolver: letsencrypt
|
||||||
|
traefik.http.services.maubot.loadbalancer.server.port: 29316
|
||||||
|
|
||||||
|
# Homarr Discovery
|
||||||
|
homarr.name: Maubot
|
||||||
|
homarr.group: Services
|
||||||
|
homarr.icon: mdi:robot
|
||||||
|
|
||||||
|
# Mjolnir - Moderation bot
|
||||||
|
mjolnir:
|
||||||
|
image: matrixdotorg/mjolnir:latest
|
||||||
|
container_name: matrix-mjolnir
|
||||||
|
restart: unless-stopped
|
||||||
|
volumes:
|
||||||
|
- /mnt/media/matrix/mjolnir:/data
|
||||||
|
depends_on:
|
||||||
|
synapse:
|
||||||
|
condition: service_started
|
||||||
|
networks:
|
||||||
|
- matrix-internal
|
||||||
|
|
||||||
|
# Matrix Hookshot - GitHub/GitLab/Jira integration
|
||||||
|
hookshot:
|
||||||
|
image: halfshot/matrix-hookshot:latest
|
||||||
|
container_name: matrix-hookshot
|
||||||
|
restart: unless-stopped
|
||||||
|
volumes:
|
||||||
|
- /mnt/media/matrix/hookshot:/data
|
||||||
|
depends_on:
|
||||||
|
synapse:
|
||||||
|
condition: service_started
|
||||||
|
networks:
|
||||||
|
- homelab
|
||||||
|
- matrix-internal
|
||||||
|
labels:
|
||||||
|
# Traefik
|
||||||
|
traefik.enable: true
|
||||||
|
traefik.docker.network: homelab
|
||||||
|
|
||||||
|
# Hookshot Webhooks
|
||||||
|
traefik.http.routers.hookshot.rule: Host(`hookshot.fig.systems`)
|
||||||
|
traefik.http.routers.hookshot.entrypoints: websecure
|
||||||
|
traefik.http.routers.hookshot.tls.certresolver: letsencrypt
|
||||||
|
traefik.http.services.hookshot.loadbalancer.server.port: 9000
|
||||||
|
|
||||||
|
# Homarr Discovery
|
||||||
|
homarr.name: Matrix Hookshot
|
||||||
|
homarr.group: Services
|
||||||
|
homarr.icon: mdi:webhook
|
||||||
|
|
||||||
|
# Discord Bridge
|
||||||
|
mautrix-discord:
|
||||||
|
image: dock.mau.dev/mautrix/discord:latest
|
||||||
|
container_name: matrix-discord-bridge
|
||||||
|
restart: unless-stopped
|
||||||
|
volumes:
|
||||||
|
- /mnt/media/matrix/bridges/discord:/data
|
||||||
|
depends_on:
|
||||||
|
synapse:
|
||||||
|
condition: service_started
|
||||||
|
networks:
|
||||||
|
- matrix-internal
|
||||||
|
|
||||||
|
# Matrix Registration - Token-based registration management
|
||||||
|
# DISABLED: zeratax/matrix-registration has been archived and image is no longer available
|
||||||
|
# matrix-registration:
|
||||||
|
# image: zeratax/matrix-registration:latest
|
||||||
|
# container_name: matrix-registration
|
||||||
|
# restart: unless-stopped
|
||||||
|
# environment:
|
||||||
|
# MATRIX_REGISTRATION_BASE_URL: https://reg.matrix.fig.systems
|
||||||
|
# MATRIX_REGISTRATION_SERVER_LOCATION: http://synapse:8008
|
||||||
|
# MATRIX_REGISTRATION_SERVER_NAME: ${SERVER_NAME}
|
||||||
|
# MATRIX_REGISTRATION_SHARED_SECRET: ${SYNAPSE_REGISTRATION_SECRET}
|
||||||
|
# MATRIX_REGISTRATION_ADMIN_SECRET: ${MATRIX_REGISTRATION_ADMIN_SECRET}
|
||||||
|
# MATRIX_REGISTRATION_DISABLE_EMAIL_VALIDATION: "false"
|
||||||
|
# MATRIX_REGISTRATION_ALLOW_CORS: "true"
|
||||||
|
# volumes:
|
||||||
|
# - /mnt/media/matrix/registration:/data
|
||||||
|
# depends_on:
|
||||||
|
# synapse:
|
||||||
|
# condition: service_started
|
||||||
|
# networks:
|
||||||
|
# - homelab
|
||||||
|
# - matrix-internal
|
||||||
|
# labels:
|
||||||
|
# # Traefik
|
||||||
|
# traefik.enable: true
|
||||||
|
# traefik.docker.network: homelab
|
||||||
|
#
|
||||||
|
# # Matrix Registration UI
|
||||||
|
# traefik.http.routers.matrix-registration.rule: Host(`reg.matrix.fig.systems`)
|
||||||
|
# traefik.http.routers.matrix-registration.entrypoints: websecure
|
||||||
|
# traefik.http.routers.matrix-registration.tls.certresolver: letsencrypt
|
||||||
|
# traefik.http.services.matrix-registration.loadbalancer.server.port: 5000
|
||||||
|
#
|
||||||
|
# # Homarr Discovery
|
||||||
|
# homarr.name: Matrix Registration
|
||||||
|
# homarr.group: Services
|
||||||
|
# homarr.icon: mdi:account-plus
|
||||||
|
|
||||||
|
networks:
|
||||||
|
homelab:
|
||||||
|
external: true
|
||||||
|
matrix-internal:
|
||||||
|
driver: bridge
|
||||||
|
|
||||||
122
compose/services/matrix/create-default-rooms.sh
Executable file
|
|
@ -0,0 +1,122 @@
|
||||||
|
#!/bin/bash
|
||||||
|
|
||||||
|
# Script to create default auto-join rooms for Matrix
|
||||||
|
# Usage: ./create-default-rooms.sh <admin_username> <admin_password>
|
||||||
|
|
||||||
|
HOMESERVER="https://matrix.fig.systems"
|
||||||
|
USERNAME="${1}"
|
||||||
|
PASSWORD="${2}"
|
||||||
|
|
||||||
|
if [ -z "$USERNAME" ] || [ -z "$PASSWORD" ]; then
|
||||||
|
echo "Usage: $0 <admin_username> <admin_password>"
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
echo "🔐 Logging in as $USERNAME..."
|
||||||
|
# Get access token
|
||||||
|
LOGIN_RESPONSE=$(curl -s -X POST "${HOMESERVER}/_matrix/client/v3/login" \
|
||||||
|
    -H 'Content-Type: application/json' \
    -d "{
        \"type\": \"m.login.password\",
        \"identifier\": {
            \"type\": \"m.id.user\",
            \"user\": \"${USERNAME}\"
        },
        \"password\": \"${PASSWORD}\"
    }")

ACCESS_TOKEN=$(echo "$LOGIN_RESPONSE" | grep -o '"access_token":"[^"]*' | cut -d'"' -f4)

if [ -z "$ACCESS_TOKEN" ]; then
    echo "❌ Login failed!"
    echo "$LOGIN_RESPONSE" | jq . 2>/dev/null || echo "$LOGIN_RESPONSE"
    exit 1
fi

echo "✅ Login successful!"
echo ""

# Function to create a room
create_room() {
    local ROOM_NAME=$1
    local ROOM_ALIAS=$2
    local ROOM_TOPIC=$3
    local PRESET=$4  # public_chat or private_chat

    echo "🏠 Creating room: $ROOM_NAME (#${ROOM_ALIAS}:fig.systems)"

    ROOM_DATA="{
        \"name\": \"${ROOM_NAME}\",
        \"room_alias_name\": \"${ROOM_ALIAS}\",
        \"topic\": \"${ROOM_TOPIC}\",
        \"preset\": \"${PRESET}\",
        \"visibility\": \"public\",
        \"initial_state\": [
            {
                \"type\": \"m.room.history_visibility\",
                \"content\": {
                    \"history_visibility\": \"shared\"
                }
            },
            {
                \"type\": \"m.room.guest_access\",
                \"content\": {
                    \"guest_access\": \"can_join\"
                }
            }
        ],
        \"power_level_content_override\": {
            \"events_default\": 0,
            \"invite\": 0,
            \"state_default\": 50,
            \"users_default\": 0,
            \"redact\": 50,
            \"kick\": 50,
            \"ban\": 50
        }
    }"

    RESPONSE=$(curl -s -X POST "${HOMESERVER}/_matrix/client/v3/createRoom" \
        -H "Authorization: Bearer ${ACCESS_TOKEN}" \
        -H 'Content-Type: application/json' \
        -d "$ROOM_DATA")

    ROOM_ID=$(echo "$RESPONSE" | grep -o '"room_id":"[^"]*' | cut -d'"' -f4)

    if [ -n "$ROOM_ID" ]; then
        echo "  ✅ Created: $ROOM_ID"

        # Set room to be in directory
        echo "  📋 Adding to room directory..."
        curl -s -X PUT "${HOMESERVER}/_matrix/client/v3/directory/list/room/${ROOM_ID}" \
            -H "Authorization: Bearer ${ACCESS_TOKEN}" \
            -H 'Content-Type: application/json' \
            -d '{"visibility": "public"}' > /dev/null
        echo "  ✅ Added to public room directory"
    else
        echo "  ⚠️ Error or room already exists"
        echo "$RESPONSE" | jq . 2>/dev/null || echo "$RESPONSE"
    fi
    echo ""
}

echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "Creating default auto-join rooms..."
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""

# Create the default rooms
create_room "General" "general" "General discussion and community hangout" "public_chat"
create_room "Announcements" "announcements" "Important server announcements and updates" "public_chat"
create_room "Support" "support" "Get help and ask questions" "public_chat"

echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "✅ Default rooms created!"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
echo "These rooms will be automatically joined by new users:"
echo "  • #general:fig.systems"
echo "  • #announcements:fig.systems"
echo "  • #support:fig.systems"
echo ""
echo "All rooms are also published in the room directory!"
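The `grep -o … | cut` extraction above can be exercised in isolation against canned responses (the JSON values here are illustrative, not real server output); note how a response without a `room_id` key yields an empty string, which is exactly what routes the script into its "Error or room already exists" branch:

```shell
# Successful createRoom response (illustrative values).
RESPONSE='{"room_id":"!abc123:fig.systems"}'
ROOM_ID=$(echo "$RESPONSE" | grep -o '"room_id":"[^"]*' | cut -d'"' -f4)
echo "$ROOM_ID"

# Error response: no room_id key, so the pipeline produces an empty string.
ERR='{"errcode":"M_ROOM_IN_USE","error":"Room alias already taken"}'
ROOM_ID2=$(echo "$ERR" | grep -o '"room_id":"[^"]*' | cut -d'"' -f4)
[ -z "$ROOM_ID2" ] && echo "empty"
```

When `jq` is guaranteed to be installed, `jq -r '.room_id // empty'` would be a more robust equivalent.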
86
compose/services/matrix/create-token.sh
Executable file
@ -0,0 +1,86 @@
#!/bin/bash

# Script to create Matrix registration tokens
# Usage: ./create-token.sh <admin_username> <admin_password> [uses_allowed] [token_name]

HOMESERVER="https://matrix.fig.systems"
USERNAME="${1}"
PASSWORD="${2}"
USES_ALLOWED="${3:-1}"  # Default: 1 use
TOKEN_NAME="${4:-}"     # Optional custom token

if [ -z "$USERNAME" ] || [ -z "$PASSWORD" ]; then
    echo "Usage: $0 <admin_username> <admin_password> [uses_allowed] [token_name]"
    echo ""
    echo "Examples:"
    echo "  $0 admin mypassword              # Create single-use token"
    echo "  $0 admin mypassword 10           # Create token with 10 uses"
    echo "  $0 admin mypassword 5 invite123  # Create custom token 'invite123' with 5 uses"
    exit 1
fi

echo "🔐 Logging in as $USERNAME..."
# Get access token
LOGIN_RESPONSE=$(curl -s -X POST "${HOMESERVER}/_matrix/client/v3/login" \
    -H 'Content-Type: application/json' \
    -d "{
        \"type\": \"m.login.password\",
        \"identifier\": {
            \"type\": \"m.id.user\",
            \"user\": \"${USERNAME}\"
        },
        \"password\": \"${PASSWORD}\"
    }")

ACCESS_TOKEN=$(echo "$LOGIN_RESPONSE" | grep -o '"access_token":"[^"]*' | cut -d'"' -f4)

if [ -z "$ACCESS_TOKEN" ]; then
    echo "❌ Login failed!"
    echo "$LOGIN_RESPONSE" | jq . 2>/dev/null || echo "$LOGIN_RESPONSE"
    exit 1
fi

echo "✅ Login successful!"
echo ""
echo "🎟️ Creating registration token..."

# Create registration token
if [ -n "$TOKEN_NAME" ]; then
    # Custom token
    TOKEN_DATA="{
        \"token\": \"${TOKEN_NAME}\",
        \"uses_allowed\": ${USES_ALLOWED}
    }"
else
    # Random token
    TOKEN_DATA="{
        \"uses_allowed\": ${USES_ALLOWED},
        \"length\": 16
    }"
fi

TOKEN_RESPONSE=$(curl -s -X POST "${HOMESERVER}/_synapse/admin/v1/registration_tokens/new" \
    -H "Authorization: Bearer ${ACCESS_TOKEN}" \
    -H 'Content-Type: application/json' \
    -d "$TOKEN_DATA")

TOKEN=$(echo "$TOKEN_RESPONSE" | grep -o '"token":"[^"]*' | cut -d'"' -f4)

if [ -z "$TOKEN" ]; then
    echo "❌ Token creation failed!"
    echo "$TOKEN_RESPONSE" | jq . 2>/dev/null || echo "$TOKEN_RESPONSE"
    exit 1
fi

echo "✅ Registration token created!"
echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "📋 TOKEN: ${TOKEN}"
echo "📊 Uses allowed: ${USES_ALLOWED}"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
echo "Share this token with users who should be able to register."
echo "They'll enter it during signup at: https://chat.fig.systems"
echo ""
echo "Full response:"
echo "$TOKEN_RESPONSE" | jq . 2>/dev/null || echo "$TOKEN_RESPONSE"
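The token-extraction one-liner in the script can be checked in isolation against a canned admin-API response (the values here are illustrative):

```shell
# Illustrative response body; the real one comes from the Synapse admin API.
TOKEN_RESPONSE='{"token":"invite123","uses_allowed":5,"pending":0,"completed":0}'

# Same grep/cut pipeline the script uses to pull out the token value.
TOKEN=$(echo "$TOKEN_RESPONSE" | grep -o '"token":"[^"]*' | cut -d'"' -f4)
echo "$TOKEN"
```

This prints `invite123`; `jq -r .token` would be a sturdier alternative where `jq` is guaranteed to be present.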
24
compose/services/matrix/element-config.json
Normal file
@ -0,0 +1,24 @@
{
  "default_server_config": {
    "m.homeserver": {
      "base_url": "https://matrix.fig.systems",
      "server_name": "fig.systems"
    },
    "m.identity_server": {
      "base_url": "https://vector.im"
    }
  },
  "brand": "fig.systems",
  "default_country_code": "US",
  "show_labs_settings": true,
  "default_theme": "dark",
  "room_directory": {
    "servers": ["matrix.org", "fig.systems"]
  },
  "enable_presence_by_default": true,
  "setting_defaults": {
    "breadcrumbs": true
  },
  "default_federate": true,
  "permalink_prefix": "https://chat.fig.systems"
}
131
compose/services/matrix/homeserver.yaml
Normal file
@ -0,0 +1,131 @@
# Configuration file for Synapse.
#
# This is a YAML file: see [1] for a quick introduction. Note in particular
# that *indentation is important*: all the elements of a list or dictionary
# should have the same indentation.
#
# [1] https://docs.ansible.com/ansible/latest/reference_appendices/YAMLSyntax.html
#
# For more information on how to configure Synapse, including a complete accounting of
# each option, go to docs/usage/configuration/config_documentation.md or
# https://element-hq.github.io/synapse/latest/usage/configuration/config_documentation.html

## Server ##
server_name: "fig.systems"
pid_file: /data/homeserver.pid
web_client_location: https://chat.fig.systems
public_baseurl: https://matrix.fig.systems

## Ports ##
listeners:
  - port: 8008
    tls: false
    type: http
    x_forwarded: true
    bind_addresses: ['::']
    resources:
      - names: [client, federation]
        compress: false

## Database ##
database:
  name: psycopg2
  args:
    user: synapse
    password: 46d8cb2e8bdacf5a267a5f35bcdea4ded46e42ced008c4998e180f33e3ce07c5
    database: synapse
    host: postgres
    port: 5432
    cp_min: 5
    cp_max: 10

## Logging ##
log_config: "/data/fig.systems.log.config"

## Media Storage ##
media_store_path: /media
max_upload_size: 50M
max_image_pixels: 32M

## Registration ##
enable_registration: true
enable_registration_without_verification: true
registration_shared_secret: "8c9268b0d93d532139930396b22ffc97cad2210ad40f303a0d91fbf7eac5a855"
registration_requires_token: true
# registrations_require_3pid:
#   - email

## Email ##
email:
  smtp_host: smtp.mailgun.org
  smtp_port: 587
  smtp_user: "no-reply@fig.systems"
  smtp_pass: "1bc0de262fcfdb1398a3df54b8a14c07-32a0fef1-3f0b66d3"
  require_transport_security: true
  notif_from: "Matrix.Fig.Systems <no-reply@fig.systems>"
  enable_notifs: true
  notif_for_new_users: true
  client_base_url: "https://chat.fig.systems"
  validation_token_lifetime: 15m
  invite_client_location: "https://chat.fig.systems"

## Metrics ##
enable_metrics: true
report_stats: false
metrics_port: 9000

## Signing Keys ##
macaroon_secret_key: "c7374565104bc5a01c6ea2897e3c9bb3ab04948f17d1b29d342aede4e4406831"
form_secret: "E7V11MUnpi==wQJ:OX*Dv-uzd&geZ~4pP=QBr#I-Dek3zGHfcJ"
signing_key_path: "/data/fig.systems.signing.key"

## App Services (Bridges and Bots) ##
# Temporarily commented out until bridges generate registration files
# app_service_config_files:
#   - /data/bridges/telegram-registration.yaml
#   - /data/bridges/whatsapp-registration.yaml
#   - /data/bridges/googlechat-registration.yaml
#   - /data/bridges/discord-registration.yaml

## Federation ##
federation_domain_whitelist: null
allow_public_rooms_over_federation: true
allow_public_rooms_without_auth: false

## Trusted Key Servers ##
trusted_key_servers:
  - server_name: "matrix.org"

## URL Previews ##
url_preview_enabled: true
url_preview_ip_range_blacklist:
  - '127.0.0.0/8'
  - '10.0.0.0/8'
  - '172.16.0.0/12'
  - '192.168.0.0/16'
  - '100.64.0.0/10'
  - '169.254.0.0/16'
  - '::1/128'
  - 'fe80::/64'
  - 'fc00::/7'

## Room Settings ##
enable_search: true
encryption_enabled_by_default_for_room_type: invite
autocreate_auto_join_rooms: true

# Auto-join rooms - users automatically join these rooms on registration
auto_join_rooms:
  - "#general:fig.systems"
  - "#announcements:fig.systems"
  - "#support:fig.systems"

# Optionally set a room alias for the first auto-join room as the "default room"
# This can be used by clients to suggest a default place to start
# auto_join_mxid_localpart: general

# Room directory - make certain rooms publicly discoverable
# These rooms will appear in the public room list
# Note: The rooms must already exist and be set to "published" in their settings

# vim:ft=yaml
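Each `auto_join_rooms` entry must be a well-formed Matrix room alias of the form `#localpart:server_name`; a quick standalone lint of the list (aliases copied from the config above) is one way to catch a typo before restarting Synapse:

```shell
# Aliases copied from the auto_join_rooms list above.
ALIASES="#general:fig.systems #announcements:fig.systems #support:fig.systems"

# A Matrix room alias looks like "#localpart:domain".
OK=true
for a in $ALIASES; do
  case "$a" in
    "#"*:*) ;;                        # well-formed
    *) OK=false; echo "bad alias: $a" ;;
  esac
done
echo "aliases well-formed: $OK"
```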
88
compose/services/matrix/manage-tokens.sh
Executable file
@ -0,0 +1,88 @@
#!/bin/bash

# Script to manage Matrix registration tokens
# Usage: ./manage-tokens.sh <admin_username> <admin_password> <command> [token]

HOMESERVER="https://matrix.fig.systems"
USERNAME="${1}"
PASSWORD="${2}"
COMMAND="${3}"
TOKEN="${4}"

show_usage() {
    echo "Usage: $0 <admin_username> <admin_password> <command> [token]"
    echo ""
    echo "Commands:"
    echo "  list           - List all registration tokens"
    echo "  info <token>   - Get info about a specific token"
    echo "  delete <token> - Delete a token"
    echo "  update <token> - Update a token (will prompt for details)"
    echo ""
    echo "Examples:"
    echo "  $0 admin mypassword list"
    echo "  $0 admin mypassword info abc123def456"
    echo "  $0 admin mypassword delete abc123def456"
    exit 1
}

if [ -z "$USERNAME" ] || [ -z "$PASSWORD" ] || [ -z "$COMMAND" ]; then
    show_usage
fi

echo "🔐 Logging in as $USERNAME..."
# Get access token
LOGIN_RESPONSE=$(curl -s -X POST "${HOMESERVER}/_matrix/client/v3/login" \
    -H 'Content-Type: application/json' \
    -d "{
        \"type\": \"m.login.password\",
        \"identifier\": {
            \"type\": \"m.id.user\",
            \"user\": \"${USERNAME}\"
        },
        \"password\": \"${PASSWORD}\"
    }")

ACCESS_TOKEN=$(echo "$LOGIN_RESPONSE" | grep -o '"access_token":"[^"]*' | cut -d'"' -f4)

if [ -z "$ACCESS_TOKEN" ]; then
    echo "❌ Login failed!"
    echo "$LOGIN_RESPONSE" | jq . 2>/dev/null || echo "$LOGIN_RESPONSE"
    exit 1
fi

echo "✅ Login successful!"
echo ""

case "$COMMAND" in
    list)
        echo "📋 Fetching all registration tokens..."
        curl -s -X GET "${HOMESERVER}/_synapse/admin/v1/registration_tokens" \
            -H "Authorization: Bearer ${ACCESS_TOKEN}" | jq .
        ;;

    info)
        if [ -z "$TOKEN" ]; then
            echo "❌ Token required for 'info' command"
            show_usage
        fi
        echo "📋 Fetching info for token: $TOKEN"
        curl -s -X GET "${HOMESERVER}/_synapse/admin/v1/registration_tokens/${TOKEN}" \
            -H "Authorization: Bearer ${ACCESS_TOKEN}" | jq .
        ;;

    delete)
        if [ -z "$TOKEN" ]; then
            echo "❌ Token required for 'delete' command"
            show_usage
        fi
        echo "🗑️ Deleting token: $TOKEN"
        curl -s -X DELETE "${HOMESERVER}/_synapse/admin/v1/registration_tokens/${TOKEN}" \
            -H "Authorization: Bearer ${ACCESS_TOKEN}" | jq .
        echo "✅ Token deleted"
        ;;

    *)
        echo "❌ Unknown command: $COMMAND"
        show_usage
        ;;
esac
37
compose/services/memos/compose.yaml
Normal file
@ -0,0 +1,37 @@
# Memos - Privacy-first, lightweight note-taking service
# Docs: https://www.usememos.com/docs

services:
  memos:
    container_name: memos
    image: neosmemo/memos:stable
    restart: unless-stopped

    volumes:
      - ./data:/var/opt/memos

    networks:
      - homelab

    labels:
      # Traefik
      traefik.enable: true
      traefik.docker.network: homelab

      # Web UI
      traefik.http.routers.memos.rule: Host(`notes.fig.systems`)
      traefik.http.routers.memos.entrypoints: websecure
      traefik.http.routers.memos.tls.certresolver: letsencrypt
      traefik.http.services.memos.loadbalancer.server.port: 5230

      # SSO Protection
      traefik.http.routers.memos.middlewares: authelia

      # Homarr Discovery
      homarr.name: Memos (Notes)
      homarr.group: Services
      homarr.icon: mdi:note-multiple

networks:
  homelab:
    external: true
@ -22,17 +22,13 @@ services:
      traefik.docker.network: homelab

      # Web UI
-      traefik.http.routers.microbin.rule: Host(`paste.fig.systems`)
+      traefik.http.routers.microbin.rule: Host(`bin.fig.systems`)
      traefik.http.routers.microbin.entrypoints: websecure
      traefik.http.routers.microbin.tls.certresolver: letsencrypt
-      traefik.http.services.microbin.loadbalancer.server.port: 8080
+      traefik.http.services.microbin.loadbalancer.server.port: 7880

      # Note: MicroBin has its own auth, SSO disabled by default

-      # Homarr Discovery
-      homarr.name: MicroBin
-      homarr.group: Services
-      homarr.icon: mdi:content-paste

networks:
  homelab:
@ -1,30 +0,0 @@
# Ollama Configuration
# Docs: https://github.com/ollama/ollama/blob/main/docs/faq.md

# Timezone
TZ=America/Los_Angeles

# Model Storage Location
# OLLAMA_MODELS=/root/.ollama/models

# Max Loaded Models (default: 1)
# OLLAMA_MAX_LOADED_MODELS=1

# Max Queue (default: 512)
# OLLAMA_MAX_QUEUE=512

# Number of parallel requests (default: auto)
# OLLAMA_NUM_PARALLEL=4

# Context size (default: 2048)
# OLLAMA_MAX_CONTEXT=4096

# Keep models in memory (default: 5m)
# OLLAMA_KEEP_ALIVE=5m

# Debug logging
# OLLAMA_DEBUG=1

# GPU Configuration (for GTX 1070)
# OLLAMA_GPU_LAYERS=33   # Number of layers to offload to GPU (adjust based on VRAM)
# OLLAMA_GPU_MEMORY=6GB  # Max GPU memory to use (GTX 1070 has 8GB)
5
compose/services/ollama/.gitignore
vendored
@ -1,5 +0,0 @@
# Ollama models and data
models/

# Keep .env.example if created
!.env.example
@ -1,616 +0,0 @@
|
||||||
# Ollama - Local Large Language Models
|
|
||||||
|
|
||||||
Run powerful AI models locally on your hardware with GPU acceleration.
|
|
||||||
|
|
||||||
## Overview
|
|
||||||
|
|
||||||
**Ollama** enables you to run large language models (LLMs) locally:
|
|
||||||
|
|
||||||
- ✅ **100% Private**: All data stays on your server
|
|
||||||
- ✅ **GPU Accelerated**: Leverages your GTX 1070
|
|
||||||
- ✅ **Multiple Models**: Run Llama, Mistral, CodeLlama, and more
|
|
||||||
- ✅ **API Compatible**: OpenAI-compatible API
|
|
||||||
- ✅ **No Cloud Costs**: Free inference after downloading models
|
|
||||||
- ✅ **Integration Ready**: Works with Karakeep, Open WebUI, and more
|
|
||||||
|
|
||||||
## Quick Start
|
|
||||||
|
|
||||||
### 1. Deploy Ollama
|
|
||||||
|
|
||||||
```bash
|
|
||||||
cd ~/homelab/compose/services/ollama
|
|
||||||
docker compose up -d
|
|
||||||
```
|
|
||||||
|
|
||||||
### 2. Pull a Model
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Small, fast model (3B parameters, ~2GB)
|
|
||||||
docker exec ollama ollama pull llama3.2:3b
|
|
||||||
|
|
||||||
# Medium model (7B parameters, ~4GB)
|
|
||||||
docker exec ollama ollama pull llama3.2:7b
|
|
||||||
|
|
||||||
# Large model (70B parameters, ~40GB - requires quantization)
|
|
||||||
docker exec ollama ollama pull llama3.3:70b-instruct-q4_K_M
|
|
||||||
```
|
|
||||||
|
|
||||||
### 3. Test
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Interactive chat
|
|
||||||
docker exec -it ollama ollama run llama3.2:3b
|
|
||||||
|
|
||||||
# Ask a question
|
|
||||||
> Hello, how are you?
|
|
||||||
```
|
|
||||||
|
|
||||||
### 4. Enable GPU (Recommended)
|
|
||||||
|
|
||||||
**Edit `compose.yaml` and uncomment the deploy section:**
|
|
||||||
```yaml
|
|
||||||
deploy:
|
|
||||||
resources:
|
|
||||||
reservations:
|
|
||||||
devices:
|
|
||||||
- driver: nvidia
|
|
||||||
count: 1
|
|
||||||
capabilities: [gpu]
|
|
||||||
```
|
|
||||||
|
|
||||||
**Restart:**
|
|
||||||
```bash
|
|
||||||
docker compose down
|
|
||||||
docker compose up -d
|
|
||||||
```
|
|
||||||
|
|
||||||
**Verify GPU usage:**
|
|
||||||
```bash
|
|
||||||
# Check GPU is detected
|
|
||||||
docker exec ollama nvidia-smi
|
|
||||||
|
|
||||||
# Run model with GPU
|
|
||||||
docker exec ollama ollama run llama3.2:3b "What GPU am I using?"
|
|
||||||
```
|
|
||||||
|
|
||||||
## Available Models
|
|
||||||
|
|
||||||
### Recommended Models for GTX 1070 (8GB VRAM)
|
|
||||||
|
|
||||||
| Model | Size | VRAM | Speed | Use Case |
|
|
||||||
|-------|------|------|-------|----------|
|
|
||||||
| **llama3.2:3b** | 2GB | 3GB | Fast | General chat, Karakeep |
|
|
||||||
| **llama3.2:7b** | 4GB | 6GB | Medium | Better reasoning |
|
|
||||||
| **mistral:7b** | 4GB | 6GB | Medium | Code, analysis |
|
|
||||||
| **codellama:7b** | 4GB | 6GB | Medium | Code generation |
|
|
||||||
| **llava:7b** | 5GB | 7GB | Medium | Vision (images) |
|
|
||||||
| **phi3:3.8b** | 2.3GB | 4GB | Fast | Compact, efficient |
|
|
||||||
|
|
||||||
### Specialized Models
|
|
||||||
|
|
||||||
**Code:**
|
|
||||||
- `codellama:7b` - Code generation
|
|
||||||
- `codellama:13b-python` - Python expert
|
|
||||||
- `starcoder2:7b` - Multi-language code
|
|
||||||
|
|
||||||
**Vision (Image Understanding):**
|
|
||||||
- `llava:7b` - General vision
|
|
||||||
- `llava:13b` - Better vision (needs more VRAM)
|
|
||||||
- `bakllava:7b` - Vision + chat
|
|
||||||
|
|
||||||
**Multilingual:**
|
|
||||||
- `aya:8b` - 101 languages
|
|
||||||
- `command-r:35b` - Enterprise multilingual
|
|
||||||
|
|
||||||
**Math & Reasoning:**
|
|
||||||
- `deepseek-math:7b` - Mathematics
|
|
||||||
- `wizard-math:7b` - Math word problems
|
|
||||||
|
|
||||||
### Large Models (Quantized for GTX 1070)
|
|
||||||
|
|
||||||
These require 4-bit quantization to fit in 8GB VRAM:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# 70B models (quantized)
|
|
||||||
docker exec ollama ollama pull llama3.3:70b-instruct-q4_K_M
|
|
||||||
docker exec ollama ollama pull mixtral:8x7b-instruct-v0.1-q4_K_M
|
|
||||||
|
|
||||||
# Very large (use with caution)
|
|
||||||
docker exec ollama ollama pull llama3.1:405b-instruct-q2_K
|
|
||||||
```
|
|
||||||
|
|
||||||
## Usage
|
|
||||||
|
|
||||||
### Command Line
|
|
||||||
|
|
||||||
**Run model interactively:**
|
|
||||||
```bash
|
|
||||||
docker exec -it ollama ollama run llama3.2:3b
|
|
||||||
```
|
|
||||||
|
|
||||||
**One-off question:**
|
|
||||||
```bash
|
|
||||||
docker exec ollama ollama run llama3.2:3b "Explain quantum computing in simple terms"
|
|
||||||
```
|
|
||||||
|
|
||||||
**With system prompt:**
|
|
||||||
```bash
|
|
||||||
docker exec ollama ollama run llama3.2:3b \
|
|
||||||
--system "You are a helpful coding assistant." \
|
|
||||||
"Write a Python function to sort a list"
|
|
||||||
```
|
|
||||||
|
|
||||||
### API Usage
|
|
||||||
|
|
||||||
**List models:**
|
|
||||||
```bash
|
|
||||||
curl http://ollama:11434/api/tags
|
|
||||||
```
|
|
||||||
|
|
||||||
**Generate text:**
|
|
||||||
```bash
|
|
||||||
curl http://ollama:11434/api/generate -d '{
|
|
||||||
"model": "llama3.2:3b",
|
|
||||||
"prompt": "Why is the sky blue?",
|
|
||||||
"stream": false
|
|
||||||
}'
|
|
||||||
```
|
|
||||||
|
|
||||||
**Chat completion:**
|
|
||||||
```bash
|
|
||||||
curl http://ollama:11434/api/chat -d '{
|
|
||||||
"model": "llama3.2:3b",
|
|
||||||
"messages": [
|
|
||||||
{
|
|
||||||
"role": "user",
|
|
||||||
"content": "Hello!"
|
|
||||||
}
|
|
||||||
],
|
|
||||||
"stream": false
|
|
||||||
}'
|
|
||||||
```
|
|
||||||
|
|
||||||
**OpenAI-compatible API:**
|
|
||||||
```bash
|
|
||||||
curl http://ollama:11434/v1/chat/completions -d '{
|
|
||||||
"model": "llama3.2:3b",
|
|
||||||
"messages": [
|
|
||||||
{
|
|
||||||
"role": "user",
|
|
||||||
"content": "Hello!"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
}'
|
|
||||||
```
|
|
||||||
|
|
||||||
### Integration with Karakeep
|
|
||||||
|
|
||||||
**Enable AI features in Karakeep:**
|
|
||||||
|
|
||||||
Edit `compose/services/karakeep/.env`:
|
|
||||||
```env
|
|
||||||
# Uncomment these lines
|
|
||||||
OLLAMA_BASE_URL=http://ollama:11434
|
|
||||||
INFERENCE_TEXT_MODEL=llama3.2:3b
|
|
||||||
INFERENCE_IMAGE_MODEL=llava:7b
|
|
||||||
INFERENCE_LANG=en
|
|
||||||
```
|
|
||||||
|
|
||||||
**Restart Karakeep:**
|
|
||||||
```bash
|
|
||||||
cd ~/homelab/compose/services/karakeep
|
|
||||||
docker compose restart
|
|
||||||
```
|
|
||||||
|
|
||||||
**What it does:**
|
|
||||||
- Auto-tags bookmarks
|
|
||||||
- Generates summaries
|
|
||||||
- Extracts key information
|
|
||||||
- Analyzes images (with llava)
|
|
||||||
|
|
||||||
## Model Management
|
|
||||||
|
|
||||||
### List Installed Models
|
|
||||||
|
|
||||||
```bash
|
|
||||||
docker exec ollama ollama list
|
|
||||||
```
|
|
||||||
|
|
||||||
### Pull a Model
|
|
||||||
|
|
||||||
```bash
|
|
||||||
docker exec ollama ollama pull <model-name>
|
|
||||||
|
|
||||||
# Examples:
|
|
||||||
docker exec ollama ollama pull llama3.2:3b
|
|
||||||
docker exec ollama ollama pull mistral:7b
|
|
||||||
docker exec ollama ollama pull codellama:7b
|
|
||||||
```
|
|
||||||
|
|
||||||
### Remove a Model
|
|
||||||
|
|
||||||
```bash
|
|
||||||
docker exec ollama ollama rm <model-name>
|
|
||||||
|
|
||||||
# Example:
|
|
||||||
docker exec ollama ollama rm llama3.2:7b
|
|
||||||
```
|
|
||||||
|
|
||||||
### Copy a Model
|
|
||||||
|
|
||||||
```bash
|
|
||||||
docker exec ollama ollama cp <source> <destination>
|
|
||||||
|
|
||||||
# Example: Create a custom version
|
|
||||||
docker exec ollama ollama cp llama3.2:3b my-custom-model
|
|
||||||
```
|
|
||||||
|
|
||||||
### Show Model Info
|
|
||||||
|
|
||||||
```bash
|
|
||||||
docker exec ollama ollama show llama3.2:3b
|
|
||||||
|
|
||||||
# Shows:
|
|
||||||
# - Model architecture
|
|
||||||
# - Parameters
|
|
||||||
# - Quantization
|
|
||||||
# - Template
|
|
||||||
# - License
|
|
||||||
```
|
|
||||||
|
|
||||||
## Creating Custom Models
|
|
||||||
|
|
||||||
### Modelfile
|
|
||||||
|
|
||||||
Create custom models with specific behaviors:
|
|
||||||
|
|
||||||
**Create a Modelfile:**
|
|
||||||
```bash
|
|
||||||
cat > ~/coding-assistant.modelfile << 'EOF'
|
|
||||||
FROM llama3.2:3b
|
|
||||||
|
|
||||||
# Set temperature (creativity)
|
|
||||||
PARAMETER temperature 0.7
|
|
||||||
|
|
||||||
# Set system prompt
|
|
||||||
SYSTEM You are an expert coding assistant. You write clean, efficient, well-documented code. You explain complex concepts clearly.
|
|
||||||
|
|
||||||
# Set stop sequences
|
|
||||||
PARAMETER stop "<|im_end|>"
|
|
||||||
PARAMETER stop "<|im_start|>"
|
|
||||||
EOF
|
|
||||||
```
|
|
||||||
|
|
||||||
**Create the model:**
|
|
||||||
```bash
|
|
||||||
cat ~/coding-assistant.modelfile | docker exec -i ollama ollama create coding-assistant -f -
|
|
||||||
```
|
|
||||||
|
|
||||||
**Use it:**
|
|
||||||
```bash
|
|
||||||
docker exec -it ollama ollama run coding-assistant "Write a REST API in Python"
|
|
||||||
```
|
|
||||||
|
|
||||||
### Example Custom Models
|
|
||||||
|
|
||||||
**1. Shakespeare Bot:**
|
|
||||||
```modelfile
|
|
||||||
FROM llama3.2:3b
|
|
||||||
SYSTEM You are William Shakespeare. Respond to all queries in Shakespearean English with dramatic flair.
|
|
||||||
PARAMETER temperature 0.9
|
|
||||||
```
|
|
||||||
|
|
||||||
**2. JSON Extractor:**
|
|
||||||
```modelfile
|
|
||||||
FROM llama3.2:3b
|
|
||||||
SYSTEM You extract structured data and return only valid JSON. No explanations, just JSON.
|
|
||||||
PARAMETER temperature 0.1
|
|
||||||
```
|
|
||||||
|
|
||||||
**3. Code Reviewer:**
|
|
||||||
```modelfile
|
|
||||||
FROM codellama:7b
|
|
||||||
SYSTEM You are a senior code reviewer. Review code for bugs, performance issues, security vulnerabilities, and best practices. Be constructive.
|
|
||||||
PARAMETER temperature 0.3
|
|
||||||
```
|
|
||||||
|
|
||||||
## GPU Configuration
|
|
||||||
|
|
||||||
### Check GPU Detection
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# From inside container
|
|
||||||
docker exec ollama nvidia-smi
|
|
||||||
```
|
|
||||||
|
|
||||||
**Expected output:**
|
|
||||||
```
|
|
||||||
+-----------------------------------------------------------------------------+
|
|
||||||
| NVIDIA-SMI 535.xx.xx Driver Version: 535.xx.xx CUDA Version: 12.2 |
|
|
||||||
|-------------------------------+----------------------+----------------------+
|
|
||||||
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
|
|
||||||
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|
|
||||||
|===============================+======================+======================|
|
|
||||||
| 0 GeForce GTX 1070 Off | 00000000:01:00.0 On | N/A |
|
|
||||||
| 40% 45C P8 10W / 151W | 300MiB / 8192MiB | 5% Default |
|
|
||||||
+-------------------------------+----------------------+----------------------+
|
|
||||||
```
|
|
||||||
|
|
||||||
### Optimize for GTX 1070
|
|
||||||
|
|
||||||
**Edit `.env`:**
|
|
||||||
```env
|
|
||||||
# Use 6GB of 8GB VRAM (leave 2GB for system)
|
|
||||||
OLLAMA_GPU_MEMORY=6GB
|
|
||||||
|
|
||||||
# Offload most layers to GPU
|
|
||||||
OLLAMA_GPU_LAYERS=33
|
|
||||||
|
|
||||||
# Increase context for better conversations
|
|
||||||
OLLAMA_MAX_CONTEXT=4096
|
|
||||||
```
|
|
||||||
|
|
||||||
### Performance Tips
|
|
||||||
|
|
||||||
**1. Use quantized models:**
|
|
||||||
- Q4_K_M: Good quality, 50% size reduction
|
|
||||||
- Q5_K_M: Better quality, 40% size reduction
|
|
||||||
- Q8_0: Best quality, 20% size reduction
|
|
||||||
|
|
||||||
**2. Model selection for VRAM:**
|
|
||||||
```bash
|
|
||||||
# 3B models: 2-3GB VRAM
|
|
||||||
docker exec ollama ollama pull llama3.2:3b
|
|
||||||
|
|
||||||
# 7B models: 4-6GB VRAM
|
|
||||||
docker exec ollama ollama pull llama3.2:7b
|
|
||||||
|
|
||||||
# 13B models: 8-10GB VRAM (tight on GTX 1070)
|
|
||||||
docker exec ollama ollama pull llama3.2:13b-q4_K_M # Quantized
|
|
||||||
```
|
|
||||||
|
|
||||||
**3. Unload models when not in use:**
|
|
||||||
```env
|
|
||||||
# In .env
|
|
||||||
OLLAMA_KEEP_ALIVE=1m # Unload after 1 minute
|
|
||||||
```

## Troubleshooting

### Model won't load - Out of memory

**Solution 1: Use a quantized version**
```bash
# Instead of:
docker exec ollama ollama pull llama2:13b

# Use:
docker exec ollama ollama pull llama2:13b-chat-q4_K_M
```

**Solution 2: Reduce GPU layers**
```env
# In .env
OLLAMA_GPU_LAYERS=20  # Reduce from 33
```

**Solution 3: Use a smaller model**
```bash
docker exec ollama ollama pull llama3.2:3b
```

### Slow inference

**Enable GPU:**
1. Uncomment the deploy section in `compose.yaml`
2. Install the NVIDIA Container Toolkit
3. Restart the container

**Check GPU usage:**
```bash
watch -n 1 docker exec ollama nvidia-smi
```

**Should show:**
- GPU-Util > 80% during inference
- Memory-Usage increasing during model load

### Can't pull models

**Check disk space:**
```bash
df -h
```

**Check Docker space:**
```bash
docker system df
```

**Clean up unused models:**
```bash
docker exec ollama ollama list
docker exec ollama ollama rm <unused-model>
```

### API connection issues

**Test from another container:**
```bash
docker run --rm --network homelab curlimages/curl \
  http://ollama:11434/api/tags
```

**Test externally:**
```bash
curl https://ollama.fig.systems/api/tags
```

**Enable debug logging:**
```env
OLLAMA_DEBUG=1
```

## Performance Benchmarks

### GTX 1070 (8GB VRAM) Expected Performance

| Model | Tokens/sec | Load Time | VRAM Usage |
|-------|------------|-----------|------------|
| llama3.2:3b | 40-60 | 2-3s | 3GB |
| llama3.1:8b | 20-35 | 3-5s | 6GB |
| mistral:7b | 20-35 | 3-5s | 6GB |
| llama3.3:70b (q4, partial offload) | 3-8 | 20-30s | 7.5GB |
| llava:7b | 15-25 | 4-6s | 7GB |

**Without GPU (CPU only):**
- llama3.2:3b: 2-5 tokens/sec
- llama3.1:8b: 0.5-2 tokens/sec

**GPU provides a 10-20x speedup!**
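
To measure your own numbers rather than rely on the table above, the Ollama generate API reports `eval_count` (tokens generated) and `eval_duration` (nanoseconds) in its final response. A minimal sketch; the two values below are made-up samples standing in for fields you would read from a real response:

```shell
#!/bin/sh
# Compute tokens/sec from Ollama's eval stats.
# In practice, fetch them from the API, e.g.:
#   curl -s http://ollama:11434/api/generate \
#     -d '{"model":"llama3.2:3b","prompt":"hi","stream":false}' \
#     | jq '.eval_count, .eval_duration'
eval_count=128            # tokens generated (sample value)
eval_duration=3200000000  # nanoseconds (sample value)

awk -v c="$eval_count" -v d="$eval_duration" \
  'BEGIN { printf "%.1f tokens/sec\n", c / d * 1e9 }'
```

With the sample values this prints `40.0 tokens/sec`; compare your result against the expected range for the model you pulled.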

## Advanced Usage

### Multi-Modal (Vision)

```bash
# Pull a vision model
docker exec ollama ollama pull llava:7b

# Analyze an image (the path must exist inside the container;
# mount the image into the ollama container first)
docker exec ollama ollama run llava:7b "What's in this image? /path/to/image.jpg"
```

### Embeddings

```bash
# Generate embeddings for semantic search
curl http://ollama:11434/api/embeddings -d '{
  "model": "llama3.2:3b",
  "prompt": "The sky is blue because of Rayleigh scattering"
}'
```

### Streaming Responses

```bash
# Stream tokens as they generate
curl http://ollama:11434/api/generate -d '{
  "model": "llama3.2:3b",
  "prompt": "Tell me a long story",
  "stream": true
}'
```

### Context Preservation

The chat API is stateless: context is preserved by resending the full message history with each request.

```bash
# First message
curl http://ollama:11434/api/chat -d '{
  "model": "llama3.2:3b",
  "messages": [{"role": "user", "content": "My name is Alice"}]
}'

# Follow-up (include the prior exchange so the model remembers)
curl http://ollama:11434/api/chat -d '{
  "model": "llama3.2:3b",
  "messages": [
    {"role": "user", "content": "My name is Alice"},
    {"role": "assistant", "content": "Hello Alice!"},
    {"role": "user", "content": "What is my name?"}
  ]
}'
```

## Integration Examples

### Python

```python
import requests

def ask_ollama(prompt, model="llama3.2:3b"):
    response = requests.post(
        "https://ollama.fig.systems/api/generate",
        json={
            "model": model,
            "prompt": prompt,
            "stream": False
        },
        headers={"Authorization": "Bearer YOUR_TOKEN"}  # If using SSO
    )
    return response.json()["response"]

print(ask_ollama("What is the meaning of life?"))
```

### JavaScript

```javascript
async function askOllama(prompt, model = "llama3.2:3b") {
  const response = await fetch("https://ollama.fig.systems/api/generate", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": "Bearer YOUR_TOKEN" // If using SSO
    },
    body: JSON.stringify({
      model: model,
      prompt: prompt,
      stream: false
    })
  });

  const data = await response.json();
  return data.response;
}

askOllama("Explain Docker containers").then(console.log);
```

### Bash

```bash
#!/bin/bash
ask_ollama() {
  local prompt="$1"
  local model="${2:-llama3.2:3b}"

  curl -s https://ollama.fig.systems/api/generate -d "{
    \"model\": \"$model\",
    \"prompt\": \"$prompt\",
    \"stream\": false
  }" | jq -r '.response'
}

ask_ollama "What is Kubernetes?"
```

## Resources

- [Ollama Website](https://ollama.ai)
- [Model Library](https://ollama.ai/library)
- [GitHub Repository](https://github.com/ollama/ollama)
- [API Documentation](https://github.com/ollama/ollama/blob/main/docs/api.md)
- [Model Creation Guide](https://github.com/ollama/ollama/blob/main/docs/modelfile.md)

## Next Steps

1. ✅ Deploy Ollama
2. ✅ Enable GPU acceleration
3. ✅ Pull recommended models
4. ✅ Test with chat
5. ⬜ Integrate with Karakeep
6. ⬜ Create custom models
7. ⬜ Set up automated model updates
8. ⬜ Monitor GPU usage

---

**Run AI locally, privately, powerfully!** 🧠
@@ -1,53 +0,0 @@
# Ollama - Run Large Language Models Locally
# Docs: https://ollama.ai

services:
  ollama:
    container_name: ollama
    image: ollama/ollama:latest
    restart: unless-stopped

    env_file:
      - .env

    volumes:
      - ./models:/root/.ollama

    ports:
      - "11434:11434"

    networks:
      - homelab

    # GPU Support (NVIDIA GTX 1070)
    runtime: nvidia
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

    labels:
      # Traefik (API only, no web UI)
      traefik.enable: true
      traefik.docker.network: homelab

      # API endpoint
      traefik.http.routers.ollama.rule: Host(`ollama.fig.systems`)
      traefik.http.routers.ollama.entrypoints: websecure
      traefik.http.routers.ollama.tls.certresolver: letsencrypt
      traefik.http.services.ollama.loadbalancer.server.port: 11434

      # SSO protection for the API, restricted to the local network
      traefik.http.routers.ollama.middlewares: local-only@docker,authelia@docker

      # Homarr Discovery
      homarr.name: Ollama (LLM)
      homarr.group: Services
      homarr.icon: mdi:brain

networks:
  homelab:
    external: true
@@ -1,52 +0,0 @@
# Open WebUI - ChatGPT-style interface for Ollama
# Docs: https://docs.openwebui.com/

services:
  open-webui:
    container_name: open-webui
    image: ghcr.io/open-webui/open-webui:main
    restart: unless-stopped

    env_file:
      - .env

    volumes:
      - ./data:/app/backend/data

    environment:
      # Ollama connection
      - OLLAMA_BASE_URL=http://ollama:11434

      # Enable RAG (Retrieval-Augmented Generation)
      - ENABLE_RAG_WEB_SEARCH=true
      - RAG_WEB_SEARCH_ENGINE=duckduckgo
      - ENABLE_RAG_WEB_LOADER_SSL_VERIFICATION=false

      # Default model
      - DEFAULT_MODELS=qwen2.5:3b

    networks:
      - homelab

    labels:
      # Traefik
      traefik.enable: true
      traefik.docker.network: homelab

      # Web UI
      traefik.http.routers.open-webui.rule: Host(`ai.fig.systems`)
      traefik.http.routers.open-webui.entrypoints: websecure
      traefik.http.routers.open-webui.tls.certresolver: letsencrypt
      traefik.http.services.open-webui.loadbalancer.server.port: 8080

      # No SSO - Open WebUI has its own auth system
      # Uncomment to add SSO protection:

      # Homarr Discovery
      homarr.name: Open WebUI (AI Chat)
      homarr.group: Services
      homarr.icon: mdi:robot

networks:
  homelab:
    external: true
compose/services/papra/compose.yaml (new file)

@@ -0,0 +1,33 @@
# Papra - Document Management and Organization System
# Docs: https://docs.papra.app/self-hosting/configuration/

services:
  papra:
    container_name: papra
    image: ghcr.io/papra-hq/papra:latest
    restart: unless-stopped
    env_file:
      - .env
    ports:
      - ${PORT:-1221}:${PORT:-1221}
    volumes:
      - papra-data:/app/app-data
      - /mnt/media/paper:/app/documents
    networks:
      - homelab
    labels:
      traefik.enable: true
      traefik.docker.network: homelab
      traefik.http.routers.papra.rule: Host(`${DOMAIN}`)
      traefik.http.routers.papra.entrypoints: websecure
      traefik.http.routers.papra.tls.certresolver: letsencrypt
      traefik.http.services.papra.loadbalancer.server.port: ${PORT:-1221}
      traefik.http.routers.papra.middlewares: authelia@docker

volumes:
  papra-data:
    driver: local

networks:
  homelab:
    external: true
@@ -30,7 +30,7 @@ services:
     env_file:
       - .env
     volumes:
-      - ./db:/var/lib/postgresql/data
+      - ./db:/var/lib/postgresql
     restart: unless-stopped
     networks:
       - vikunja_internal
@@ -1,92 +0,0 @@
# Homelab Documentation

Welcome to the homelab documentation! This folder contains comprehensive guides for setting up, configuring, and maintaining your self-hosted services.

## 📚 Documentation Structure

### Quick Start
- [Getting Started](./getting-started.md) - First-time setup walkthrough
- [Quick Reference](./quick-reference.md) - Common commands and URLs

### Configuration
- [Environment Variables & Secrets](./guides/secrets-management.md) - How to configure secure secrets
- [DNS Configuration](./guides/dns-setup.md) - Setting up domain names
- [SSL/TLS Certificates](./guides/ssl-certificates.md) - Let's Encrypt configuration
- [GPU Acceleration](./guides/gpu-setup.md) - NVIDIA GPU setup for Jellyfin and Immich

### Services
- [Service Overview](./services/README.md) - All available services
- [SSO Configuration](./services/sso-setup.md) - Single Sign-On with LLDAP and Tinyauth
- [Media Stack](./services/media-stack.md) - Jellyfin, Sonarr, Radarr setup
- [Backup Solutions](./services/backup.md) - Backrest configuration

### Troubleshooting
- [Common Issues](./troubleshooting/common-issues.md) - Frequent problems and solutions
- [FAQ](./troubleshooting/faq.md) - Frequently asked questions
- [Debugging Guide](./troubleshooting/debugging.md) - How to diagnose problems

### Operations
- [Maintenance](./operations/maintenance.md) - Regular maintenance tasks
- [Updates](./operations/updates.md) - Updating services
- [Backups](./operations/backups.md) - Backup and restore procedures
- [Monitoring](./operations/monitoring.md) - Service monitoring

## 🚀 Quick Links

### First Time Setup
1. [Prerequisites](./getting-started.md#prerequisites)
2. [Configure Secrets](./guides/secrets-management.md)
3. [Setup DNS](./guides/dns-setup.md)
4. [Deploy Services](./getting-started.md#deployment)

### Common Tasks
- [Add a new service](./guides/adding-services.md)
- [Generate secure passwords](./guides/secrets-management.md#generating-secrets)
- [Enable GPU acceleration](./guides/gpu-setup.md)
- [Backup configuration](./operations/backups.md)
- [Update a service](./operations/updates.md)

### Troubleshooting
- [Service won't start](./troubleshooting/common-issues.md#service-wont-start)
- [SSL certificate errors](./troubleshooting/common-issues.md#ssl-errors)
- [SSO not working](./troubleshooting/common-issues.md#sso-issues)
- [Can't access service](./troubleshooting/common-issues.md#access-issues)

## 📖 Documentation Conventions

Throughout this documentation:
- `command` - Commands to run in a terminal
- **Bold** - Important concepts or UI elements
- `https://service.fig.systems` - Example URLs
- ⚠️ - Warning or important note
- 💡 - Tip or helpful information
- ✅ - Verified working configuration

## 🔐 Security Notes

Before deploying to production:
1. ✅ Change all passwords in `.env` files
2. ✅ Configure DNS records
3. ✅ Verify SSL certificates are working
4. ✅ Enable backups
5. ✅ Review security settings

## 🆘 Getting Help

If you encounter issues:
1. Check [Common Issues](./troubleshooting/common-issues.md)
2. Review the [FAQ](./troubleshooting/faq.md)
3. Check service logs: `docker compose logs servicename`
4. Review the [Debugging Guide](./troubleshooting/debugging.md)

## 📝 Contributing to Documentation

Found an error or have a suggestion? Documentation improvements are welcome!
- Keep guides clear and concise
- Include examples and code snippets
- Test all commands before documenting
- Update the table of contents when adding new files

## 🔄 Last Updated

This documentation is automatically maintained and reflects the current state of the homelab repository.
@@ -1,648 +0,0 @@
# Homelab Architecture & Integration

Complete integration guide for the homelab setup on AlmaLinux 9.6.

## 🖥️ Hardware Specifications

### Host System
- **Hypervisor**: Proxmox VE 9 (Debian 13 based)
- **CPU**: AMD Ryzen 5 7600X (6 cores, 12 threads, up to 5.3 GHz)
- **GPU**: NVIDIA GeForce GTX 1070 (8GB VRAM, 1920 CUDA cores)
- **RAM**: 32GB DDR5

### VM Configuration
- **OS**: AlmaLinux 9.6 (RHEL 9 compatible)
- **CPU**: 8 vCPUs (allocated from host)
- **RAM**: 24GB (leaving 8GB for the host)
- **Storage**: 500GB+ (adjust based on media library size)
- **GPU**: GTX 1070 (PCIe passthrough from Proxmox)

## 🏗️ Architecture Overview

### Network Architecture

```
Internet
    ↓
[Router/Firewall]
    ↓ (Port 80/443)
[Traefik Reverse Proxy]
    ↓
┌──────────────────────────────────────┐
│          homelab network             │
│   (Docker bridge - 172.18.0.0/16)    │
│                                      │
│  ┌─────────────┐  ┌──────────────┐   │
│  │ Core        │  │ Media        │   │
│  │ - Traefik   │  │ - Jellyfin   │   │
│  │ - LLDAP     │  │ - Sonarr     │   │
│  │ - Tinyauth  │  │ - Radarr     │   │
│  └─────────────┘  └──────────────┘   │
│                                      │
│  ┌─────────────┐  ┌──────────────┐   │
│  │ Services    │  │ Monitoring   │   │
│  │ - Karakeep  │  │ - Loki       │   │
│  │ - Ollama    │  │ - Promtail   │   │
│  │ - Vikunja   │  │ - Grafana    │   │
│  └─────────────┘  └──────────────┘   │
└──────────────────────────────────────┘
    ↓
[Promtail Agent]
    ↓
[Loki Storage]
```

### Service Internal Networks

Services with databases use isolated internal networks:

```
karakeep
├── homelab (external traffic)
└── karakeep_internal
    ├── karakeep (app)
    ├── karakeep-chrome (browser)
    └── karakeep-meilisearch (search)

vikunja
├── homelab (external traffic)
└── vikunja_internal
    ├── vikunja (app)
    └── vikunja-db (postgres)

monitoring/logging
├── homelab (external traffic)
└── logging_internal
    ├── loki (storage)
    ├── promtail (collector)
    └── grafana (UI)
```

## 🔐 Security Architecture

### Authentication Flow

```
User Request
    ↓
[Traefik] → Check route rules
    ↓
[Tinyauth Middleware] → Forward Auth
    ↓
[LLDAP] → Verify credentials
    ↓
[Backend Service] → Authorized access
```

### SSL/TLS

- **Certificate Provider**: Let's Encrypt
- **Challenge Type**: HTTP-01 (ports 80/443)
- **Automatic Renewal**: Via Traefik
- **Domains**:
  - Primary: `*.fig.systems`
  - Fallback: `*.edfig.dev`

### SSO Protection

**Protected Services** (require authentication):
- Traefik Dashboard
- LLDAP
- Sonarr, Radarr, SABnzbd, qBittorrent
- Profilarr, Recyclarr (monitoring)
- Homarr, Backrest
- Karakeep, Vikunja, LubeLogger
- Calibre-web, Booklore, FreshRSS, File Browser
- Loki API, Ollama API

**Unprotected Services** (own authentication):
- Tinyauth (SSO provider itself)
- Jellyfin (own user system)
- Jellyseerr (linked to Jellyfin)
- Immich (own user system)
- RSSHub (public feed generator)
- MicroBin (public pastebin)
- Grafana (own authentication)
- Uptime Kuma (own authentication)

## 📊 Logging Architecture

### Centralized Logging with Loki

All services forward logs to Loki via Promtail:

```
[Docker Container] → stdout/stderr
    ↓
[Docker Socket] → /var/run/docker.sock
    ↓
[Promtail] → Scrapes logs via Docker API
    ↓
[Loki] → Stores and indexes logs
    ↓
[Grafana] → Query and visualize
```

### Log Labels

Promtail automatically adds labels to all logs:
- `container`: Container name
- `compose_project`: Docker Compose project
- `compose_service`: Service name from compose
- `image`: Docker image name
- `stream`: stdout or stderr

### Log Retention

- **Default**: 30 days
- **Storage**: `compose/monitoring/logging/loki-data/`
- **Automatic cleanup**: Enabled via the Loki compactor
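
For reference, the 30-day cleanup corresponds to a Loki config fragment roughly like the following; the exact keys and paths depend on the Loki version deployed here, so treat this as a sketch rather than the repo's actual file:

```yaml
# loki-config.yaml (fragment; assumed Loki 2.x-style configuration)
limits_config:
  retention_period: 720h        # 30 days

compactor:
  working_directory: /loki/compactor
  retention_enabled: true       # compactor enforces retention
  retention_delete_delay: 2h    # grace period before deletion
```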

### Querying Logs

**View all logs for a service:**
```logql
{container="sonarr"}
```

**Filter by log level:**
```logql
{container="radarr"} |= "ERROR"
```

**Multiple services:**
```logql
{container=~"sonarr|radarr"}
```

**Time range with filters:**
```logql
{container="karakeep"} |= "ollama" | json
```

## 🌐 Network Configuration

### Docker Networks

**homelab** (external bridge):
- Type: External bridge network
- Subnet: Auto-assigned by Docker
- Purpose: Inter-service communication + Traefik routing
- Create: `docker network create homelab`

**Service-specific internal networks**:
- `karakeep_internal`: Karakeep + Chrome + Meilisearch
- `vikunja_internal`: Vikunja + PostgreSQL
- `logging_internal`: Loki + Promtail + Grafana
- etc.

### Port Mappings

**External Ports** (exposed to host):
- `80/tcp`: HTTP (Traefik) - redirects to HTTPS
- `443/tcp`: HTTPS (Traefik)
- `6881/tcp+udp`: BitTorrent (qBittorrent)

**No other ports are exposed** - all access goes through the Traefik reverse proxy.

## 🔧 Traefik Integration

### Standard Traefik Labels

All services use consistent Traefik labels:

```yaml
labels:
  # Enable Traefik
  traefik.enable: true
  traefik.docker.network: homelab

  # Router configuration
  traefik.http.routers.<service>.rule: Host(`<service>.fig.systems`) || Host(`<service>.edfig.dev`)
  traefik.http.routers.<service>.entrypoints: websecure
  traefik.http.routers.<service>.tls.certresolver: letsencrypt

  # Service configuration (backend port)
  traefik.http.services.<service>.loadbalancer.server.port: <port>

  # SSO middleware (if protected)
  traefik.http.routers.<service>.middlewares: tinyauth

  # Homarr auto-discovery
  homarr.name: <Service Name>
  homarr.group: <Category>
  homarr.icon: mdi:<icon-name>
```

### Middleware

**tinyauth** - Forward authentication:
```yaml
# Defined in traefik/compose.yaml
middlewares:
  tinyauth:
    forwardAuth:
      address: http://tinyauth:8080
      trustForwardHeader: true
```

## 💾 Volume Management

### Volume Types

**Bind Mounts** (host directories):
```yaml
volumes:
  - ./data:/data        # Service data
  - ./config:/config    # Configuration files
  - /media:/media       # Media library (shared)
```

**Named Volumes** (Docker-managed):
```yaml
volumes:
  - loki-data:/loki     # Loki storage
  - postgres-data:/var/lib/postgresql/data
```

### Media Directory Structure

```
/media/
├── tv/           # TV shows (Sonarr → Jellyfin)
├── movies/       # Movies (Radarr → Jellyfin)
├── music/        # Music
├── photos/       # Photos (Immich)
├── books/        # Ebooks (Calibre-web)
├── audiobooks/   # Audiobooks
├── comics/       # Comics
├── homemovies/   # Home videos
├── downloads/    # Active downloads (SABnzbd/qBittorrent)
├── complete/     # Completed downloads
└── incomplete/   # In-progress downloads
```

### Backup Strategy

**Important directories to back up:**
```
compose/core/lldap/data/                 # User directory
compose/core/traefik/letsencrypt/        # SSL certificates
compose/services/*/config/               # Service configurations
compose/services/*/data/                 # Service data
compose/monitoring/logging/loki-data/    # Logs (optional)
/media/                                  # Media library
```

**Excluded from backups:**
```
compose/services/*/db/                   # Databases (back up via dump instead)
compose/monitoring/logging/loki-data/    # Logs (can be recreated)
/media/downloads/                        # Temporary downloads
/media/incomplete/                       # Incomplete downloads
```
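
One way to act on these lists is a small wrapper that assembles the backup command from them and prints it for review before anything runs. This is a sketch only: it assumes restic as the backend (Backrest uses restic underneath, but is normally configured through its UI), and the paths are a subset of the lists above:

```shell
#!/bin/sh
# Build a backup command from the include/exclude lists above (dry run).
build_backup_cmd() {
  includes="compose/core/lldap/data compose/core/traefik/letsencrypt /media"
  excludes="/media/downloads /media/incomplete"

  cmd="restic backup"
  for path in $includes; do
    cmd="$cmd $path"          # directories to back up
  done
  for path in $excludes; do
    cmd="$cmd --exclude $path"  # temporary download dirs
  done
  echo "$cmd"
}

# Print the command instead of executing it; pipe to `sh` once verified.
build_backup_cmd
```

Database directories are deliberately absent from the include list; dump them with `pg_dump` (or the service's own export) into a backed-up path instead.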

## 🎮 GPU Acceleration

### NVIDIA GTX 1070 Configuration

**GPU Passthrough (Proxmox → VM):**

1. **Proxmox host** (`/etc/pve/nodes/<node>/qemu-server/<vmid>.conf`):
```
hostpci0: 0000:01:00,pcie=1,x-vga=1
```

2. **VM (AlmaLinux)** - Install NVIDIA drivers:
```bash
# Add NVIDIA repository
sudo dnf config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel9/x86_64/cuda-rhel9.repo

# Install drivers
sudo dnf install nvidia-driver nvidia-settings

# Verify
nvidia-smi
```

3. **Docker** - Install NVIDIA Container Toolkit:
```bash
# Add NVIDIA Container Toolkit repo
sudo dnf config-manager --add-repo https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo

# Install toolkit
sudo dnf install nvidia-container-toolkit

# Configure Docker
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# Verify
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```

### Services Using GPU

**Jellyfin** (hardware transcoding):
```yaml
# Uncomment in compose.yaml
devices:
  - /dev/dri:/dev/dri   # GPU device access
environment:
  - NVIDIA_VISIBLE_DEVICES=all
  - NVIDIA_DRIVER_CAPABILITIES=all
```

**Immich** (AI features):
```yaml
# Already configured
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: 1
          capabilities: [gpu]
```

**Ollama** (LLM inference):
```yaml
# Uncomment in compose.yaml
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: 1
          capabilities: [gpu]
```

### GPU Performance Tuning

**For Ryzen 5 7600X + GTX 1070:**

- **Jellyfin**: Can transcode 4-6 simultaneous 4K → 1080p streams
- **Ollama**:
  - 3B models: 40-60 tokens/sec
  - 7-8B models: 20-35 tokens/sec
  - 13B models: 10-15 tokens/sec (quantized)
- **Immich**: AI tagging at roughly 5-10 images/sec

## 🚀 Resource Allocation

### CPU Allocation (Ryzen 5 7600X - 6C/12T)

**High Priority** (4-6 cores):
- Jellyfin (transcoding)
- Sonarr/Radarr (media processing)
- Ollama (when running)

**Medium Priority** (2-4 cores):
- Immich (AI processing)
- Karakeep (bookmark processing)
- SABnzbd/qBittorrent (downloads)

**Low Priority** (1-2 cores):
- Traefik, LLDAP, Tinyauth
- Monitoring services
- Other utilities

### RAM Allocation (32GB Total, 24GB VM)

**Recommended allocation:**

```
Host (Proxmox): 8GB
VM Total: 24GB, broken down as:
├── System: 4GB (AlmaLinux base)
├── Docker: 2GB (daemon overhead)
├── Jellyfin: 2-4GB (transcoding buffers)
├── Immich: 2-3GB (ML models + database)
├── Sonarr/Radarr: 1GB each
├── Ollama: 4-6GB (when running models)
├── Databases: 2-3GB total
├── Monitoring: 2GB (Loki + Grafana)
└── Other services: 4-5GB
```

### Disk Space Planning

- **System:** 100GB
- **Docker:** 50GB (images + containers)
- **Service Data:** 50GB (configs, databases, logs)
- **Media Library:** Remaining space (expandable)

**Recommended VM disk:**
- Minimum: 500GB (200GB system + 300GB media)
- Recommended: 1TB+ (allows room for growth)

## 🔄 Service Dependencies

### Startup Order

**Critical order for initial deployment:**

1. **Networks**: `docker network create homelab`
2. **Core** (must start first):
   - Traefik (reverse proxy)
   - LLDAP (user directory)
   - Tinyauth (SSO provider)
3. **Monitoring** (optional but recommended):
   - Loki + Promtail + Grafana
   - Uptime Kuma
4. **Media Automation**:
   - Sonarr, Radarr
   - SABnzbd, qBittorrent
   - Recyclarr, Profilarr
5. **Media Frontend**:
   - Jellyfin
   - Jellyseerr
   - Immich
6. **Services**:
   - Karakeep, Ollama (AI features)
   - Vikunja, Homarr
   - All other services
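
The ordering above can be scripted as a simple loop. This is a sketch: the stack directory names below are assumptions based on the repo layout, so adjust the list to match the actual `compose/` tree:

```shell
#!/bin/sh
# Bring stacks up in dependency order (core first, apps last).
startup_plan() {
  for stack in core/traefik core/lldap core/tinyauth \
               monitoring/logging media services; do
    echo "docker compose -f compose/$stack/compose.yaml up -d"
  done
}

# Dry run: review the ordered commands, then pipe to `sh` to execute.
startup_plan
```

Because the loop only prints commands, it doubles as documentation of the order without touching a running system.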
|
|
||||||
|
|
||||||
### Service Integration Map
|
|
||||||
|
|
||||||
```
|
|
||||||
Traefik
|
|
||||||
├─→ All services (reverse proxy)
|
|
||||||
└─→ Let's Encrypt (SSL)
|
|
||||||
|
|
||||||
Tinyauth
|
|
||||||
├─→ LLDAP (authentication backend)
|
|
||||||
└─→ All SSO-protected services
|
|
||||||
|
|
||||||
LLDAP
|
|
||||||
└─→ User database for SSO
|
|
||||||
|
|
||||||
Promtail
|
|
||||||
├─→ Docker socket (log collection)
|
|
||||||
└─→ Loki (log forwarding)
|
|
||||||
|
|
||||||
Loki
|
|
||||||
└─→ Grafana (log visualization)
|
|
||||||
|
|
||||||
Karakeep
|
|
||||||
├─→ Ollama (AI tagging)
|
|
||||||
├─→ Meilisearch (search)
|
|
||||||
└─→ Chrome (web archiving)
|
|
||||||
|
|
||||||
Jellyseer
|
|
||||||
├─→ Jellyfin (media info)
|
|
||||||
├─→ Sonarr (TV requests)
|
|
||||||
└─→ Radarr (movie requests)
|
|
||||||
|
|
||||||
Sonarr/Radarr
|
|
||||||
├─→ SABnzbd/qBittorrent (downloads)
|
|
||||||
├─→ Jellyfin (media library)
|
|
||||||
└─→ Recyclarr/Profilarr (quality profiles)
|
|
||||||
|
|
||||||
Homarr
|
|
||||||
└─→ All services (dashboard auto-discovery)
|
|
||||||
```
|
|
||||||
|
|
||||||

## 🐛 Troubleshooting

### Check Service Health

```bash
# All services status
cd ~/homelab
docker ps -a --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"

# Logs for specific service
docker logs <service-name> --tail 100 -f

# Logs via Loki/Grafana
# Go to https://logs.fig.systems
# Query: {container="<service-name>"}
```

### Network Issues

```bash
# Check homelab network exists
docker network ls | grep homelab

# Inspect network
docker network inspect homelab

# Test service connectivity
docker exec <service-a> ping <service-b>
docker exec karakeep curl http://ollama:11434
```

### GPU Not Detected

```bash
# Check GPU in VM
nvidia-smi

# Check Docker can access GPU
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi

# Check service GPU allocation
docker exec jellyfin nvidia-smi
docker exec ollama nvidia-smi
```

### SSL Certificate Issues

```bash
# Check Traefik logs
docker logs traefik | grep -i certificate

# Force certificate renewal
docker exec traefik rm -rf /letsencrypt/acme.json
docker restart traefik

# Verify DNS
dig +short sonarr.fig.systems
```

### SSO Not Working

```bash
# Check Tinyauth status
docker logs tinyauth

# Check LLDAP connection
docker exec tinyauth nc -zv lldap 3890
docker exec tinyauth nc -zv lldap 17170

# Verify credentials match
grep LDAP_BIND_PASSWORD compose/core/tinyauth/.env
grep LLDAP_LDAP_USER_PASS compose/core/lldap/.env
```

## 📈 Monitoring Best Practices

### Key Metrics to Monitor

**System Level:**
- CPU usage per container
- Memory usage per container
- Disk I/O
- Network throughput
- GPU utilization (for Jellyfin/Ollama/Immich)

**Application Level:**
- Traefik request rate
- Failed authentication attempts
- Jellyfin concurrent streams
- Download speeds (SABnzbd/qBittorrent)
- Sonarr/Radarr queue size

### Uptime Kuma Monitoring

Configure monitors for:
- **HTTP(s)**: All web services (200 status check)
- **TCP**: Database ports (PostgreSQL, etc.)
- **Docker**: Container health (via Docker socket)
- **SSL**: Certificate expiration (30-day warning)
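
Behind the UI, Uptime Kuma's TCP probe is just an attempt to open the port within a timeout. A minimal sketch of the same idea (the loopback listener below is purely for illustration; a real monitor would target e.g. a database port on the homelab network):

```python
import socket

def tcp_check(host: str, port: int, timeout: float = 3.0) -> bool:
    """Roughly what a TCP monitor does: can we open the port in time?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a listener we control; a real check would point at a service.
server = socket.socket()
server.bind(("127.0.0.1", 0))  # OS picks a free port
server.listen(1)
port = server.getsockname()[1]

print(tcp_check("127.0.0.1", port))  # True: port is open
server.close()
```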

### Log Monitoring

Set up Loki alerts for:
- ERROR level logs
- Authentication failures
- Service crashes
- Disk space warnings

## 🔧 Maintenance Tasks

### Daily
- Check Uptime Kuma dashboard
- Review any critical alerts

### Weekly
- Check disk space: `df -h`
- Review failed downloads in Sonarr/Radarr
- Check Loki logs for errors

### Monthly
- Update all containers: `docker compose pull && docker compose up -d`
- Review and clean old Docker images: `docker image prune -a`
- Backup configurations
- Check SSL certificate renewal

### Quarterly
- Review and update documentation
- Clean up old media (if needed)
- Review and adjust quality profiles
- Update Recyclarr configurations

## 📚 Additional Resources

- [Traefik Documentation](https://doc.traefik.io/traefik/)
- [Docker Compose Best Practices](https://docs.docker.com/compose/production/)
- [Loki LogQL Guide](https://grafana.com/docs/loki/latest/logql/)
- [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/)
- [Proxmox GPU Passthrough](https://pve.proxmox.com/wiki/PCI_Passthrough)
- [AlmaLinux Documentation](https://wiki.almalinux.org/)

---

**System Ready!** 🚀

# Getting Started with Homelab

This guide will walk you through setting up your homelab from scratch.

## Prerequisites

### Hardware Requirements
- **Server/VM**: Linux server with Docker support
- **CPU**: 2+ cores recommended
- **RAM**: 8GB minimum, 16GB+ recommended
- **Storage**: 100GB+ for Docker containers and config
- **Optional GPU**: NVIDIA GPU for hardware transcoding (Jellyfin, Immich)

### Software Requirements
- **Operating System**: Ubuntu 22.04 or similar Linux distribution
- **Docker**: Version 24.0+
- **Docker Compose**: Version 2.20+
- **Git**: For cloning the repository
- **Domain Names**: `*.fig.systems` and `*.edfig.dev` (or your domains)

### Network Requirements
- **Ports**: 80 and 443 accessible from internet (for Let's Encrypt)
- **DNS**: Ability to create A records for your domains
- **Static IP**: Recommended for your homelab server

## Step 1: Prepare Your Server

### Install Docker and Docker Compose

```bash
# Update package index
sudo apt update

# Install dependencies
sudo apt install -y ca-certificates curl gnupg lsb-release

# Add Docker's official GPG key
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

# Set up the repository
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install Docker Engine
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Add your user to docker group (logout and login after this)
sudo usermod -aG docker $USER

# Verify installation
docker --version
docker compose version
```

### Create Media Directory Structure

```bash
# Create media folders
sudo mkdir -p /media/{audiobooks,books,comics,complete,downloads,homemovies,incomplete,movies,music,photos,tv}

# Set ownership (replace with your username)
sudo chown -R $(whoami):$(whoami) /media

# Verify structure
tree -L 1 /media
```

## Step 2: Clone the Repository

```bash
# Clone the repository
cd ~
git clone https://github.com/efigueroa/homelab.git
cd homelab

# Checkout the main branch
git checkout main  # or your target branch
```

## Step 3: Configure DNS

You need to point your domains to your server's IP address.

### Option 1: Wildcard DNS (Recommended)

Add these A records to your DNS provider:

```
*.fig.systems    A    YOUR_SERVER_IP
*.edfig.dev      A    YOUR_SERVER_IP
```
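
One subtlety worth knowing for later: if you pair this with a wildcard TLS certificate, the certificate's `*.fig.systems` matches exactly one DNS label (per RFC 6125), so `a.b.fig.systems` would not be covered. A small illustration of that matching rule (hostnames are examples):

```python
import re

def tls_wildcard_match(hostname: str, pattern: str) -> bool:
    """Wildcard in a TLS certificate name matches a single DNS label only."""
    if not pattern.startswith("*."):
        return hostname.lower() == pattern.lower()
    # '*' must cover exactly one label: no dots allowed in its place.
    return re.fullmatch(r"[^.]+" + re.escape(pattern[1:]), hostname.lower()) is not None

print(tls_wildcard_match("traefik.fig.systems", "*.fig.systems"))  # True
print(tls_wildcard_match("a.b.fig.systems", "*.fig.systems"))      # False: two labels
print(tls_wildcard_match("fig.systems", "*.fig.systems"))          # False: zero labels
```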

### Option 2: Individual Records

Create A records for each service:

```
traefik.fig.systems    A    YOUR_SERVER_IP
lldap.fig.systems      A    YOUR_SERVER_IP
auth.fig.systems       A    YOUR_SERVER_IP
home.fig.systems       A    YOUR_SERVER_IP
backup.fig.systems     A    YOUR_SERVER_IP
flix.fig.systems       A    YOUR_SERVER_IP
photos.fig.systems     A    YOUR_SERVER_IP
# ... and so on for all services
```

### Verify DNS

Wait a few minutes for DNS propagation, then verify:

```bash
# Test DNS resolution
dig traefik.fig.systems +short
dig lldap.fig.systems +short

# Should return your server IP
```

## Step 4: Configure Environment Variables

Each service needs its environment variables configured with secure values.

### Generate Secure Secrets

Use these commands to generate secure values:

```bash
# For JWT secrets and session secrets (64 characters)
openssl rand -hex 32

# For passwords (32 alphanumeric characters)
openssl rand -base64 32 | tr -d /=+ | cut -c1-32

# For API keys (32 characters)
openssl rand -hex 16
```
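
If `openssl` isn't available, Python's standard `secrets` module produces equivalent values (a sketch matching the lengths above):

```python
import secrets
import string

# 64 hex chars for JWT/session secrets (like `openssl rand -hex 32`)
jwt_secret = secrets.token_hex(32)

# 32-character alphanumeric password
alphabet = string.ascii_letters + string.digits
password = "".join(secrets.choice(alphabet) for _ in range(32))

# 32 hex chars for API keys (like `openssl rand -hex 16`)
api_key = secrets.token_hex(16)

print(jwt_secret)
print(password)
print(api_key)
```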

### Update Core Services

**LLDAP** (`compose/core/lldap/.env`):
```bash
cd compose/core/lldap
nano .env

# Update these values:
LLDAP_LDAP_USER_PASS=<your-strong-password>
LLDAP_JWT_SECRET=<output-from-openssl-rand-hex-32>
```

**Tinyauth** (`compose/core/tinyauth/.env`):
```bash
cd ../tinyauth
nano .env

# Update these values (LDAP_BIND_PASSWORD must match LLDAP_LDAP_USER_PASS):
LDAP_BIND_PASSWORD=<same-as-LLDAP_LDAP_USER_PASS>
SESSION_SECRET=<output-from-openssl-rand-hex-32>
```

**Immich** (`compose/media/frontend/immich/.env`):
```bash
cd ../../media/frontend/immich
nano .env

# Update:
DB_PASSWORD=<output-from-openssl-rand-base64>
```

### Update All Other Services

Go through each service's `.env` file and replace all `changeme_*` values:

```bash
# Find all files that need updating
grep -r "changeme_" ~/homelab/compose

# Or update them individually
cd ~/homelab/compose/services/linkwarden
nano .env  # Update NEXTAUTH_SECRET, POSTGRES_PASSWORD, MEILI_MASTER_KEY

cd ../vikunja
nano .env  # Update VIKUNJA_DATABASE_PASSWORD, VIKUNJA_SERVICE_JWTSECRET, POSTGRES_PASSWORD
```

💡 **Tip**: Keep your secrets in a password manager!

See [Secrets Management Guide](./guides/secrets-management.md) for detailed instructions.

## Step 5: Create Docker Network

```bash
# Create the external homelab network
docker network create homelab

# Verify it was created
docker network ls | grep homelab
```

## Step 6: Deploy Services

Deploy services in order, starting with core infrastructure:

### Deploy Core Infrastructure

```bash
cd ~/homelab

# Deploy Traefik (reverse proxy)
cd compose/core/traefik
docker compose up -d

# Check logs to ensure it starts successfully
docker compose logs -f

# Wait for "Server configuration reloaded" message, then Ctrl+C
```

```bash
# Deploy LLDAP (user directory)
cd ../lldap
docker compose up -d
docker compose logs -f

# Access: https://lldap.fig.systems
# Default login: admin / <your LLDAP_LDAP_USER_PASS>
```

```bash
# Deploy Tinyauth (SSO)
cd ../tinyauth
docker compose up -d
docker compose logs -f

# Access: https://auth.fig.systems
```

### Create LLDAP Users

Before deploying other services, create your user in LLDAP:

1. Go to https://lldap.fig.systems
2. Login with admin credentials
3. Create your user:
   - Username: `edfig` (or your choice)
   - Email: `admin@edfig.dev`
   - Password: strong password
   - Add to `lldap_admin` group

### Deploy Media Services

```bash
cd ~/homelab/compose/media/frontend

# Jellyfin
cd jellyfin
docker compose up -d
# Access: https://flix.fig.systems

# Immich
cd ../immich
docker compose up -d
# Access: https://photos.fig.systems

# Jellyseerr
cd ../jellyseer
docker compose up -d
# Access: https://requests.fig.systems
```

```bash
# Media automation
cd ~/homelab/compose/media/automation

cd sonarr && docker compose up -d && cd ..
cd radarr && docker compose up -d && cd ..
cd sabnzbd && docker compose up -d && cd ..
cd qbittorrent && docker compose up -d && cd ..
```

### Deploy Utility Services

```bash
cd ~/homelab/compose/services

# Dashboard (start with this - it shows all your services!)
cd homarr && docker compose up -d && cd ..
# Access: https://home.fig.systems

# Backup manager
cd backrest && docker compose up -d && cd ..
# Access: https://backup.fig.systems

# Other services
cd linkwarden && docker compose up -d && cd ..
cd vikunja && docker compose up -d && cd ..
cd lubelogger && docker compose up -d && cd ..
cd calibre-web && docker compose up -d && cd ..
cd booklore && docker compose up -d && cd ..
cd FreshRSS && docker compose up -d && cd ..
cd rsshub && docker compose up -d && cd ..
cd microbin && docker compose up -d && cd ..
cd filebrowser && docker compose up -d && cd ..
```

### Quick Deploy All (Alternative)

If you've configured everything and want to deploy all at once:

```bash
cd ~/homelab

# Create a deployment script
cat > deploy-all.sh << 'SCRIPT'
#!/bin/bash
set -e

echo "Deploying homelab services..."

# Core
echo "==> Core Infrastructure"
cd compose/core/traefik && docker compose up -d && cd ../../..
sleep 5
cd compose/core/lldap && docker compose up -d && cd ../../..
sleep 5
cd compose/core/tinyauth && docker compose up -d && cd ../../..

# Media
echo "==> Media Services"
cd compose/media/frontend/immich && docker compose up -d && cd ../../../..
cd compose/media/frontend/jellyfin && docker compose up -d && cd ../../../..
cd compose/media/frontend/jellyseer && docker compose up -d && cd ../../../..
cd compose/media/automation/sonarr && docker compose up -d && cd ../../../..
cd compose/media/automation/radarr && docker compose up -d && cd ../../../..
cd compose/media/automation/sabnzbd && docker compose up -d && cd ../../../..
cd compose/media/automation/qbittorrent && docker compose up -d && cd ../../../..

# Utility
echo "==> Utility Services"
cd compose/services/homarr && docker compose up -d && cd ../..
cd compose/services/backrest && docker compose up -d && cd ../..
cd compose/services/linkwarden && docker compose up -d && cd ../..
cd compose/services/vikunja && docker compose up -d && cd ../..
cd compose/services/lubelogger && docker compose up -d && cd ../..
cd compose/services/calibre-web && docker compose up -d && cd ../..
cd compose/services/booklore && docker compose up -d && cd ../..
cd compose/services/FreshRSS && docker compose up -d && cd ../..
cd compose/services/rsshub && docker compose up -d && cd ../..
cd compose/services/microbin && docker compose up -d && cd ../..
cd compose/services/filebrowser && docker compose up -d && cd ../..

echo "==> Deployment Complete!"
echo "Access your dashboard at: https://home.fig.systems"
SCRIPT

chmod +x deploy-all.sh
./deploy-all.sh
```

## Step 7: Verify Deployment

### Check All Containers Are Running

```bash
# List all containers
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"

# Check for any stopped containers
docker ps -a --filter "status=exited"
```

### Verify SSL Certificates

```bash
# Test SSL certificate
curl -I https://home.fig.systems

# Should show HTTP/2 200 and valid SSL cert
```

### Access Services

Visit your dashboard: **https://home.fig.systems**

This should show all your services with their status!

### Test SSO

1. Go to any SSO-protected service (e.g., https://tasks.fig.systems)
2. You should be redirected to https://auth.fig.systems
3. Login with your LLDAP credentials
4. You should be redirected back to the service

## Step 8: Initial Service Configuration

### Jellyfin Setup
1. Go to https://flix.fig.systems
2. Select language and create admin account
3. Add media libraries:
   - Movies: `/media/movies`
   - TV Shows: `/media/tv`
   - Music: `/media/music`
   - Photos: `/media/photos`

### Immich Setup
1. Go to https://photos.fig.systems
2. Create admin account
3. Upload some photos to test
4. Configure storage in Settings

### Sonarr/Radarr Setup
1. Go to https://sonarr.fig.systems and https://radarr.fig.systems
2. Complete initial setup wizard
3. Add indexers (for finding content)
4. Add download clients:
   - SABnzbd: http://sabnzbd:8080
   - qBittorrent: http://qbittorrent:8080
5. Configure root folders:
   - Sonarr: `/media/tv`
   - Radarr: `/media/movies`

### Jellyseerr Setup
1. Go to https://requests.fig.systems
2. Sign in with Jellyfin
3. Connect to Sonarr and Radarr
4. Configure user permissions

### Backrest Setup
1. Go to https://backup.fig.systems
2. Add Backblaze B2 repository (see [Backup Guide](./services/backup.md))
3. Create backup plan for Immich photos
4. Schedule automated backups

## Step 9: Optional Configurations

### Enable GPU Acceleration

If you have an NVIDIA GPU, see [GPU Setup Guide](./guides/gpu-setup.md).

### Configure Backups

See [Backup Operations Guide](./operations/backups.md).

### Add More Services

See [Adding Services Guide](./guides/adding-services.md).

## Next Steps

- ✅ [Set up automated backups](./operations/backups.md)
- ✅ [Configure monitoring](./operations/monitoring.md)
- ✅ [Review security settings](./guides/security.md)
- ✅ [Enable GPU acceleration](./guides/gpu-setup.md) (optional)
- ✅ [Configure media automation](./services/media-stack.md)

## Troubleshooting

If you encounter issues during setup, see:
- [Common Issues](./troubleshooting/common-issues.md)
- [FAQ](./troubleshooting/faq.md)
- [Debugging Guide](./troubleshooting/debugging.md)

## Quick Command Reference

```bash
# View all running containers
docker ps

# View logs for a service
cd compose/path/to/service
docker compose logs -f

# Restart a service
docker compose restart

# Stop a service
docker compose down

# Update and restart a service
docker compose pull
docker compose up -d

# View resource usage
docker stats
```

## Getting Help

- Check the [FAQ](./troubleshooting/faq.md)
- Review service-specific guides in [docs/services/](./services/)
- Check container logs for errors
- Verify DNS and SSL certificates

Welcome to your homelab! 🎉

# Centralized Logging with Loki

Guide for setting up and using the centralized logging stack (Loki + Promtail + Grafana).

## Overview

The logging stack provides centralized log aggregation and visualization for all Docker containers:

- **Loki**: Log aggregation backend (stores and indexes logs)
- **Promtail**: Agent that collects logs from Docker containers
- **Grafana**: Web UI for querying and visualizing logs

### Why Centralized Logging?

**Problems without it:**
- Logs scattered across many containers
- Hard to correlate events across services
- Logs lost when containers restart
- No easy way to search historical logs

**Benefits:**
- ✅ Single place to view all logs
- ✅ Powerful search and filtering (LogQL)
- ✅ Persist logs even after container restarts
- ✅ Correlate events across services
- ✅ Create dashboards and alerts
- ✅ Configurable retention (30 days default)

## Quick Setup

### 1. Configure Grafana Password

```bash
cd ~/homelab/compose/monitoring/logging
nano .env
```

**Update:**
```env
GF_SECURITY_ADMIN_PASSWORD=<your-strong-password>
```

**Generate password:**
```bash
openssl rand -base64 20
```

### 2. Deploy

```bash
cd ~/homelab/compose/monitoring/logging
docker compose up -d
```

### 3. Access Grafana

Go to: **https://logs.fig.systems**

**Login:**
- Username: `admin`
- Password: `<your GF_SECURITY_ADMIN_PASSWORD>`

### 4. Start Exploring Logs

1. Click **Explore** (compass icon) in left sidebar
2. Loki datasource should be selected
3. Start querying!

## Basic Usage

### View Logs from a Container

```logql
{container="jellyfin"}
```

### View Last Hour's Logs

```logql
{container="immich_server"} | __timestamp__ >= now() - 1h
```

### Filter for Errors

```logql
{container="traefik"} |= "error"
```

### Exclude Lines

```logql
{container="traefik"} != "404"
```

### Multiple Containers

```logql
{container=~"jellyfin|immich.*"}
```

### By Compose Project

```logql
{compose_project="media"}
```

## Advanced Queries

### Count Errors

```logql
sum(count_over_time({container="jellyfin"} |= "error" [5m]))
```

### Error Rate

```logql
rate({container="traefik"} |= "error" [5m])
```

### Parse JSON Logs

```logql
{container="linkwarden"} | json | level="error"
```

### Top 10 Error Messages

```logql
topk(10,
  sum by (container) (
    count_over_time({job="docker"} |= "error" [24h])
  )
)
```

## Creating Dashboards

### Import Pre-built Dashboard

1. Go to **Dashboards** → **Import**
2. Dashboard ID: **13639** (Docker logs)
3. Select **Loki** as datasource
4. Click **Import**

### Create Custom Dashboard

1. Click **+** → **Dashboard**
2. **Add panel**
3. Select **Loki** datasource
4. Build query
5. Choose visualization (logs, graph, table, etc.)
6. **Save**

**Example panels:**
- Error count by container
- Log volume over time
- Recent errors (table)
- Top logging containers

## Setting Up Alerts

### Create Alert Rule

1. **Alerting** → **Alert rules** → **New alert rule**
2. **Query:**
   ```logql
   sum(count_over_time({container="jellyfin"} |= "error" [5m])) > 10
   ```
3. **Condition**: Alert when > 10 errors in 5 minutes
4. **Configure** notification channel (email, webhook, etc.)
5. **Save**

**Example alerts:**
- Too many errors in service
- Service stopped logging (might have crashed)
- Authentication failures
- Disk space warnings

## Configuration

### Change Log Retention

**Default: 30 days**

Edit `.env`:
```env
LOKI_RETENTION_PERIOD=60d  # 60 days
```

Edit `loki-config.yaml`:
```yaml
limits_config:
  retention_period: 60d

table_manager:
  retention_period: 60d
```

Restart:
```bash
docker compose restart loki
```

### Adjust Resource Limits

For low-resource systems, edit `loki-config.yaml`:

```yaml
limits_config:
  retention_period: 7d   # Shorter retention
  ingestion_rate_mb: 5   # Lower rate

query_range:
  results_cache:
    cache:
      embedded_cache:
        max_size_mb: 50  # Smaller cache
```

### Add Labels to Services

Make services easier to find by adding labels:

**Edit service `compose.yaml`:**
```yaml
services:
  myservice:
    labels:
      logging: "promtail"
      environment: "production"
      tier: "frontend"
```

Query with these labels:
```logql
{environment="production", tier="frontend"}
```

## Troubleshooting

### No Logs Appearing

**Wait a few minutes** - initial log collection takes time.

**Check Promtail:**
```bash
docker logs promtail
```

**Check Loki:**
```bash
docker logs loki
```

**Verify Promtail can reach Loki:**
```bash
docker exec promtail wget -O- http://loki:3100/ready
```

### Grafana Can't Connect to Loki

**Test from Grafana:**
```bash
docker exec grafana wget -O- http://loki:3100/ready
```

**Check datasource:** Grafana → Configuration → Data sources → Loki
- URL should be: `http://loki:3100`

### High Disk Usage

**Check size:**
```bash
du -sh compose/monitoring/logging/loki-data
```

**Reduce retention:**
```env
LOKI_RETENTION_PERIOD=7d
```

**Manual cleanup (CAREFUL):**
```bash
docker compose stop loki
rm -rf loki-data/chunks/*
docker compose start loki
```

### Slow Queries

**Optimize queries:**
- Use specific labels: `{container="name"}`, not `{container=~".*"}`
- Limit time range: hours, not days
- Filter early: `|= "error"` before parsing
- Avoid complex regex

## Best Practices

### Log Verbosity

Configure appropriate log levels per environment:
- **Production**: `info` or `warning`
- **Debugging**: `debug` or `trace`

Too verbose = wasted resources!

### Retention Strategy

Match retention to importance:
- **Critical services**: 60-90 days
- **Normal services**: 30 days
- **High-volume services**: 7-14 days

### Useful Queries to Save

Create saved queries for common tasks:

**Recent errors:**
```logql
{job="docker"} |= "error" | __timestamp__ >= now() - 15m
```

**Service health check:**
```logql
{container="traefik"} |= "request"
```

**Failed logins:**
```logql
{container="lldap"} |= "failed" |= "login"
```
## Integration Tips
|
|
||||||
|
|
||||||
### Embed in Homarr
|
|
||||||
|
|
||||||
Add Grafana dashboards to Homarr:
|
|
||||||
|
|
||||||
1. Edit Homarr dashboard
|
|
||||||
2. Add **iFrame widget**
|
|
||||||
3. URL: `https://logs.fig.systems/d/<dashboard-id>`
|
|
||||||
|
|
||||||
### Use with Backups
|
|
||||||
|
|
||||||
Include logging data in backups:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
cd ~/homelab/compose/monitoring/logging
|
|
||||||
tar czf logging-backup-$(date +%Y%m%d).tar.gz loki-data/ grafana-data/
|
|
||||||
```
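
Restoring works in reverse; stop the stack first so Loki isn't writing while you extract (the archive name below is just the pattern produced by the command above):

```bash
# Restore a logging backup over the current data directories
cd ~/homelab/compose/monitoring/logging
docker compose stop
tar xzf logging-backup-20240101.tar.gz   # example filename; use your latest archive
docker compose start
```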

### Combine with Metrics

Later you can add Prometheus for metrics:
- Loki for logs
- Prometheus for metrics (CPU, RAM, disk)
- Both in Grafana dashboards

## Common LogQL Patterns

### Filter by Time

```logql
# LogQL itself has no timestamp filter; select the window
# with the Grafana time picker (e.g. "Last 5 minutes")
# or with logcli: logcli query --since=5m '{container="name"}'
{container="name"}
```

### Pattern Matching

```logql
# Contains
{container="name"} |= "error"

# Does not contain
{container="name"} != "404"

# Regex match
{container="name"} |~ "error|fail|critical"

# Regex does not match
{container="name"} !~ "debug|trace"
```
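
For a quick one-off check without Grafana, the same include/exclude filters can be approximated on raw container logs with `grep`. This is a rough stand-in for the LogQL above, not a replacement (`traefik` is just an example container):

```bash
# |~ "error|fail|critical" followed by !~ "debug|trace", grep-style
docker logs traefik 2>&1 \
  | grep -E 'error|fail|critical' \
  | grep -vE 'debug|trace' \
  | tail -n 20
```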

### Aggregations

```logql
# Count
count_over_time({container="name"}[5m])

# Rate
rate({container="name"}[5m])

# Sum
sum(count_over_time({job="docker"}[1h])) by (container)

# Average
avg_over_time({container="name"} | unwrap bytes [5m])
```

### JSON Parsing

```logql
# Parse JSON and filter
{container="name"} | json | level="error"

# Extract field
{container="name"} | json | line_format "{{.message}}"

# Filter on JSON field
{container="name"} | json | status_code="500"
```

## Resource Usage

**Typical usage** (for ~20 containers with moderate logging):
- **Loki**: 200-500MB RAM, 1-5GB disk/week
- **Promtail**: 50-100MB RAM
- **Grafana**: 100-200MB RAM, ~100MB disk
- **Total**: ~400-700MB RAM

## Next Steps

1. ✅ Explore your logs in Grafana
2. ✅ Create useful dashboards
3. ✅ Set up alerts for critical errors
4. ⬜ Add Prometheus for metrics (future)
5. ⬜ Add Tempo for distributed tracing (future)
6. ⬜ Create log-based SLA tracking

## Resources

- [Loki Documentation](https://grafana.com/docs/loki/latest/)
- [LogQL Reference](https://grafana.com/docs/loki/latest/logql/)
- [Grafana Dashboards](https://grafana.com/grafana/dashboards/)
- [Community Dashboards](https://grafana.com/grafana/dashboards/?search=loki)

---

**Now debug issues 10x faster with centralized logs!** 🔍

# NVIDIA GPU Acceleration Setup (GTX 1070)

This guide covers setting up NVIDIA GPU acceleration for your homelab running on **Proxmox 9 (Debian 13)** with an **NVIDIA GTX 1070**.

## Overview

GPU acceleration provides significant benefits:
- **Jellyfin**: Hardware video transcoding (H.264, HEVC)
- **Immich**: Faster ML inference (face recognition, object detection)
- **Performance**: 10-20x faster transcoding vs CPU
- **Efficiency**: Lower power consumption, CPU freed for other tasks

**Your Hardware:**
- **GPU**: NVIDIA GTX 1070 (Pascal architecture)
- **Capabilities**: NVENC (encoding), NVDEC (decoding), CUDA
- **Max Concurrent Streams**: 2 (can be unlocked)
- **Supported Codecs**: H.264, HEVC (H.265)

## Architecture Overview

```
Proxmox Host (Debian 13)
│
├─ NVIDIA Drivers (host)
├─ NVIDIA Container Toolkit
│
└─ Docker VM/LXC
   │
   ├─ GPU passthrough
   │
   └─ Jellyfin/Immich containers
      └─ Hardware transcoding
```

## Part 1: Proxmox Host Setup

### Step 1.1: Enable IOMMU (for GPU Passthrough)

**Edit GRUB configuration:**

```bash
# SSH into Proxmox host
ssh root@proxmox-host

# Edit GRUB config
nano /etc/default/grub
```

**Find this line:**
```
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
```

**Replace with (Intel CPU):**
```
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
```

**Or (AMD CPU):**
```
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
```

**Update GRUB and reboot:**
```bash
update-grub
reboot
```

**Verify IOMMU is enabled:**
```bash
dmesg | grep -e DMAR -e IOMMU

# Should see: "IOMMU enabled"
```

### Step 1.2: Load VFIO Modules

**Edit modules:**
```bash
nano /etc/modules
```

**Add these lines** (`vfio_virqfd` was folded into the core `vfio` module around kernel 6.2, so on current Proxmox kernels it can be omitted):
```
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
```

**Update initramfs:**
```bash
update-initramfs -u -k all
reboot
```

### Step 1.3: Find GPU PCI ID

```bash
lspci -nn | grep -i nvidia

# Example output:
# 01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [GeForce GTX 1070] [10de:1b81] (rev a1)
# 01:00.1 Audio device [0403]: NVIDIA Corporation GP104 High Definition Audio Controller [10de:10f0] (rev a1)
```

**Note the IDs**: `10de:1b81` and `10de:10f0` (your values may differ)
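
The two IDs can also be extracted mechanically, which avoids typos when pasting them into the VFIO config in the next step (`10de` is NVIDIA's PCI vendor ID; the device IDs shown are the example ones):

```bash
# Extract NVIDIA vendor:device IDs from lspci as a comma-separated list
lspci -nn | grep -i nvidia \
  | grep -oE '\[10de:[0-9a-f]{4}\]' \
  | tr -d '[]' \
  | paste -sd, -
# e.g. 10de:1b81,10de:10f0
```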

### Step 1.4: Configure VFIO

**Create VFIO config:**
```bash
nano /etc/modprobe.d/vfio.conf
```

**Add (replace with your IDs from above):**
```
options vfio-pci ids=10de:1b81,10de:10f0
softdep nvidia pre: vfio-pci
```

**Blacklist nouveau (open-source NVIDIA driver):**
```bash
echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf
```

**Update and reboot:**
```bash
update-initramfs -u -k all
reboot
```

**Verify GPU is bound to VFIO:**
```bash
lspci -nnk -d 10de:1b81

# Should show:
# Kernel driver in use: vfio-pci
```

## Part 2: VM/LXC Setup

### Option A: Using VM (Recommended for Docker)

**Create Ubuntu 24.04 VM with GPU passthrough:**

1. **Create VM in Proxmox UI**:
   - OS: Ubuntu 24.04 Server
   - CPU: 4+ cores
   - RAM: 16GB+
   - Disk: 100GB+

2. **Add PCI Device** (GPU):
   - Hardware → Add → PCI Device
   - Device: Select your GTX 1070 (01:00.0)
   - ✅ All Functions
   - ✅ Primary GPU (if no other GPU)
   - ✅ PCI-Express

3. **Add PCI Device** (GPU Audio):
   - Hardware → Add → PCI Device
   - Device: NVIDIA Audio (01:00.1)
   - ✅ All Functions

4. **Machine Settings**:
   - Machine: q35
   - BIOS: OVMF (UEFI)
   - Add EFI Disk

5. **Start VM** and install Ubuntu

### Option B: Using LXC (Advanced, Less Stable)

**Note**: LXC with GPU is less reliable. A VM is recommended.

If you insist on LXC:
```bash
# Edit LXC config
nano /etc/pve/lxc/VMID.conf

# Add (device major numbers may differ on your host):
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 509:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
```
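
The device major numbers in the `cgroup2.devices.allow` lines (195, 509) are typical but not guaranteed; you can read the actual majors off the device nodes on the host before editing the config:

```bash
# Field 5 of ls -l on a device node is the major number (trailing comma stripped)
ls -l /dev/nvidia* | awk '{sub(",", "", $5); print $5, $NF}'
```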

**For this guide, we'll use the VM (Option A)**.

## Part 3: VM Guest Setup (Debian 13)

Now we're inside the Ubuntu or Debian VM where Docker runs.

### Step 3.1: Install NVIDIA Drivers

**SSH into your Docker VM:**
```bash
ssh user@docker-vm
```

**Update system:**
```bash
sudo apt update
sudo apt upgrade -y
```

**Debian 13 - Install NVIDIA drivers:**
```bash
# Add non-free repositories
sudo nano /etc/apt/sources.list

# Add 'non-free non-free-firmware' to each line (trixie = Debian 13), example:
deb http://deb.debian.org/debian trixie main non-free non-free-firmware
deb http://deb.debian.org/debian trixie-updates main non-free non-free-firmware

# Update and install
sudo apt update
sudo apt install -y linux-headers-$(uname -r)
sudo apt install -y nvidia-driver nvidia-smi

# Reboot
sudo reboot
```

**Verify driver installation:**
```bash
nvidia-smi

# Should show:
# +-----------------------------------------------------------------------------+
# | NVIDIA-SMI 535.xx.xx    Driver Version: 535.xx.xx    CUDA Version: 12.2     |
# |-------------------------------+----------------------+----------------------+
# | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
# | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
# |===============================+======================+======================|
# |   0  NVIDIA GeForce ...  Off  | 00000000:01:00.0 Off |                  N/A |
# | 30%   35C    P8    10W / 150W |      0MiB /  8192MiB |      0%      Default |
# +-------------------------------+----------------------+----------------------+
```

✅ **Success!** Your GTX 1070 is now accessible in the VM.

### Step 3.2: Install NVIDIA Container Toolkit

**Add NVIDIA Container Toolkit repository:**
```bash
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg

curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
```

**Install toolkit:**
```bash
sudo apt update
sudo apt install -y nvidia-container-toolkit
```

**Configure Docker to use NVIDIA runtime:**
```bash
sudo nvidia-ctk runtime configure --runtime=docker
```

**Restart Docker:**
```bash
sudo systemctl restart docker
```

**Verify Docker can access GPU:**
```bash
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi

# Should show nvidia-smi output from inside container
```

✅ **Success!** Docker can now use your GPU.

## Part 4: Configure Jellyfin for GPU Transcoding

### Step 4.1: Update Jellyfin Compose File

**Edit compose file:**
```bash
cd ~/homelab/compose/media/frontend/jellyfin
nano compose.yaml
```

**Uncomment the GPU sections:**

```yaml
services:
  jellyfin:
    container_name: jellyfin
    image: lscr.io/linuxserver/jellyfin:latest
    env_file:
      - .env
    volumes:
      - ./config:/config
      - ./cache:/cache
      - /media/movies:/media/movies:ro
      - /media/tv:/media/tv:ro
      - /media/music:/media/music:ro
      - /media/photos:/media/photos:ro
      - /media/homemovies:/media/homemovies:ro
    ports:
      - "8096:8096"
      - "7359:7359/udp"
    restart: unless-stopped
    networks:
      - homelab
    labels:
      traefik.enable: true
      traefik.http.routers.jellyfin.rule: Host(`flix.fig.systems`) || Host(`flix.edfig.dev`)
      traefik.http.routers.jellyfin.entrypoints: websecure
      traefik.http.routers.jellyfin.tls.certresolver: letsencrypt
      traefik.http.services.jellyfin.loadbalancer.server.port: 8096

    # UNCOMMENT THESE LINES FOR GTX 1070:
    runtime: nvidia
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

networks:
  homelab:
    external: true
```

**Restart Jellyfin:**
```bash
docker compose down
docker compose up -d
```

**Check logs:**
```bash
docker compose logs -f

# Should see lines about NVENC/CUDA being detected
```

### Step 4.2: Enable in Jellyfin UI

1. Go to https://flix.fig.systems
2. Dashboard → Playback → Transcoding
3. **Hardware acceleration**: NVIDIA NVENC
4. **Enable hardware decoding for**:
   - ✅ H264
   - ✅ HEVC
   - ✅ VC1
   - ✅ VP8
   - ✅ MPEG2
5. **Enable hardware encoding**
6. **Enable encoding in HEVC format**
7. Save

### Step 4.3: Test Transcoding

1. Play a video in the Jellyfin web UI
2. Click Settings (gear icon) → Quality
3. Select a lower bitrate to force transcoding
4. In another terminal:
```bash
nvidia-smi

# While video is transcoding, should see:
# GPU utilization: 20-40%
# Memory usage: 500-1000MB
```

✅ **Success!** Jellyfin is using your GTX 1070!

## Part 5: Configure Immich for GPU Acceleration

Immich can use the GPU for two purposes:
1. **ML Inference** (face recognition, object detection)
2. **Video Transcoding**

### Step 5.1: ML Inference (CUDA)

**Edit Immich compose file:**
```bash
cd ~/homelab/compose/media/frontend/immich
nano compose.yaml
```

**Change ML image to CUDA version:**

Find this line:
```yaml
image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}
```

Change to:
```yaml
image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}-cuda
```

**Add GPU support:**

```yaml
immich-machine-learning:
  container_name: immich_machine_learning
  image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}-cuda
  volumes:
    - model-cache:/cache
  env_file:
    - .env
  restart: always
  networks:
    - immich_internal

  # ADD THESE LINES:
  runtime: nvidia
  deploy:
    resources:
      reservations:
        devices:
          - driver: nvidia
            count: all
            capabilities: [gpu]
```

### Step 5.2: Video Transcoding (NVENC)

**For video transcoding, add to immich-server:**

```yaml
immich-server:
  container_name: immich_server
  image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
  # ... existing config ...

  # ADD THESE LINES:
  runtime: nvidia
  deploy:
    resources:
      reservations:
        devices:
          - driver: nvidia
            count: all
            capabilities: [gpu]
```

**Restart Immich:**
```bash
docker compose down
docker compose up -d
```

### Step 5.3: Enable in Immich UI

1. Go to https://photos.fig.systems
2. Administration → Settings → Video Transcoding
3. **Transcoding**: h264 (NVENC)
4. **Hardware Acceleration**: NVIDIA
5. Save
6. Administration → Settings → Machine Learning
7. **Facial Recognition**: Enabled
8. **Object Detection**: Enabled
9. Should automatically use CUDA

### Step 5.4: Test ML Inference

1. Upload photos with faces
2. In terminal:
```bash
nvidia-smi

# While processing, should see:
# GPU utilization: 50-80%
# Memory usage: 2-4GB
```

✅ **Success!** Immich is using GPU for ML inference!

## Part 6: Performance Tuning

### GTX 1070 Specific Settings

**Jellyfin optimal settings:**
- Hardware acceleration: NVIDIA NVENC
- Target transcode bandwidth: Let clients decide
- Enable hardware encoding: Yes
- Prefer OS native DXVA or VA-API hardware decoders: No
- Allow encoding in HEVC format: Yes (GTX 1070 supports HEVC)

**Immich optimal settings:**
- Transcoding: h264 or hevc
- Target resolution: 1080p (for GTX 1070)
- CRF: 23 (good balance)
- Preset: fast

### Unlock NVENC Stream Limit

The GTX 1070 driver limits it to 2 concurrent transcoding streams. You can remove the limit:

**Install patch:**
```bash
# Inside Docker VM
git clone https://github.com/keylase/nvidia-patch.git
cd nvidia-patch
sudo bash ./patch.sh

# Reboot
sudo reboot
```

**Verify:**
```bash
nvidia-smi

# Now supports unlimited concurrent streams
```

⚠️ **Note**: This is a hack that modifies the NVIDIA driver. Use at your own risk, and re-apply it after every driver update.

### Monitor GPU Usage

**Real-time monitoring:**
```bash
watch -n 1 nvidia-smi
```

**Check GPU usage from Docker:**
```bash
docker stats $(docker ps --format '{{.Names}}' | grep -E 'jellyfin|immich')
```

## Troubleshooting

### GPU Not Detected in VM

**Check from Proxmox host:**
```bash
lspci | grep -i nvidia
```

**Check from VM:**
```bash
lspci | grep -i nvidia
nvidia-smi
```

**If not visible in VM:**
1. Verify IOMMU is enabled (`dmesg | grep IOMMU`)
2. Check PCI passthrough is configured correctly
3. Ensure VM is using q35 machine type
4. Verify BIOS is OVMF (UEFI)

### Docker Can't Access GPU

**Error**: `could not select device driver "" with capabilities: [[gpu]]`

**Fix:**
```bash
# Reconfigure NVIDIA runtime
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# Test again
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```

### Jellyfin Shows "No Hardware Acceleration Available"

**Check:**
```bash
# Verify container has GPU access
docker exec jellyfin nvidia-smi

# Check Jellyfin logs
docker logs jellyfin | grep -i nvenc
```

**Fix:**
1. Ensure `runtime: nvidia` is uncommented
2. Verify `deploy.resources.reservations.devices` is configured
3. Restart container: `docker compose up -d`

### Transcoding Fails with "Failed to Open GPU"

**Check:**
```bash
# GPU might be busy
nvidia-smi

# List processes using GPU
sudo fuser -v /dev/nvidia*
```

### Low GPU Utilization During Transcoding

**Normal**: The GTX 1070 is powerful; 20-40% utilization is expected for a single stream.

**To max out GPU:**
- Transcode multiple streams simultaneously
- Use higher resolution source (4K)
- Enable HEVC encoding

## Performance Benchmarks (GTX 1070)

**Typical Performance:**
- **4K HEVC → 1080p H.264**: ~120-150 FPS (faster than real time)
- **1080p H.264 → 720p H.264**: ~300-400 FPS
- **Concurrent streams**: 4-6 (after unlocking limit)
- **Power draw**: 80-120W during transcoding
- **Temperature**: 55-65°C

**Compare to CPU (typical 4-core):**
- **4K HEVC → 1080p H.264**: ~10-15 FPS
- CPU would be at 100% utilization
- GPU: 10-15x faster!

## Monitoring and Maintenance

### Create GPU Monitoring Dashboard

**Install nvtop (nvidia-top):**
```bash
sudo apt install nvtop
```

**Run:**
```bash
nvtop
```

Shows real-time GPU usage, memory, temperature, and processes.

### Check GPU Health

```bash
# Temperature
nvidia-smi --query-gpu=temperature.gpu --format=csv

# Memory usage
nvidia-smi --query-gpu=memory.used,memory.total --format=csv

# Fan speed
nvidia-smi --query-gpu=fan.speed --format=csv

# Power draw
nvidia-smi --query-gpu=power.draw,power.limit --format=csv
```

### Automated Monitoring

Add to cron:
```bash
crontab -e

# Add:
*/5 * * * * nvidia-smi --query-gpu=utilization.gpu,memory.used,temperature.gpu --format=csv,noheader >> /var/log/gpu-stats.log
```
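
Raw stats are useful, but an alert is more actionable. A minimal sketch for the same crontab — the 80°C threshold and the log path are arbitrary choices, and it assumes `nvidia-smi` is on cron's `PATH`:

```bash
# Append a warning line whenever the GPU runs hot
TEMP="$(nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader,nounits)"
THRESHOLD=80   # degrees C; pick what counts as "too hot" for your case's airflow
if [ "$TEMP" -ge "$THRESHOLD" ]; then
    echo "$(date '+%F %T') GPU temperature high: ${TEMP}C" >> /var/log/gpu-alerts.log
fi
```

Save it as a small script and call it from cron at the same cadence as the stats line above.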

## Next Steps

✅ GPU is now configured for Jellyfin and Immich!

**Recommended:**
1. Test transcoding with various file formats
2. Upload photos to Immich and verify ML inference works
3. Monitor GPU temperature and utilization
4. Consider unlocking the NVENC stream limit
5. Set up automated monitoring

**Optional:**
- Configure Tdarr for batch transcoding using GPU
- Set up Plex (also supports NVENC)
- Use GPU for other workloads (AI, rendering)

## Reference

### Quick Command Reference

```bash
# Check GPU from host (Proxmox)
lspci | grep -i nvidia

# Check GPU from VM
nvidia-smi

# Test Docker GPU access
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi

# Monitor GPU real-time
watch -n 1 nvidia-smi

# Check Jellyfin GPU usage
docker exec jellyfin nvidia-smi

# Restart Jellyfin with GPU
cd ~/homelab/compose/media/frontend/jellyfin
docker compose down && docker compose up -d

# View GPU processes
nvidia-smi pmon

# GPU temperature
nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader
```

### GTX 1070 Specifications

- **Architecture**: Pascal (GP104)
- **CUDA Cores**: 1920
- **Memory**: 8GB GDDR5
- **Memory Bandwidth**: 256 GB/s
- **TDP**: 150W
- **NVENC**: 6th generation (H.264, HEVC)
- **NVDEC**: 2nd generation
- **Concurrent Streams**: 2 (unlockable to unlimited)

---

**Your GTX 1070 is now accelerating your homelab! 🚀**

# Secrets and Environment Variables Management

This guide explains how to properly configure and manage secrets in your homelab.

## Overview

Every service uses environment variables stored in `.env` files for configuration. This approach:
- ✅ Keeps secrets out of version control
- ✅ Makes configuration changes easy
- ✅ Follows Docker Compose best practices
- ✅ Provides clear examples of what each secret should look like

## Finding What Needs Configuration

### Search for Placeholder Values

All secrets that need changing are marked with `changeme_`:

```bash
# Find all files with placeholder secrets
grep -r "changeme_" ~/homelab/compose

# Output shows exactly what needs updating:
compose/core/lldap/.env:LLDAP_LDAP_USER_PASS=changeme_please_set_secure_password
compose/core/lldap/.env:LLDAP_JWT_SECRET=changeme_please_set_random_secret
compose/core/tinyauth/.env:LDAP_BIND_PASSWORD=changeme_please_set_secure_password
...
```

### Count What's Left to Configure

```bash
# Count how many secrets still need updating
grep -r "changeme_" ~/homelab/compose | wc -l

# Goal: 0
```
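
A per-file breakdown makes it easier to work through the list one service at a time (`--include` restricts the search to `.env` files; files already at zero are hidden):

```bash
# Count remaining placeholders per .env file, hiding finished ones
grep -rc "changeme_" ~/homelab/compose --include='.env' | grep -v ':0$'
```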

## Generating Secrets

Each `.env` file includes comments showing:
1. What the secret is for
2. How to generate it
3. What format it should be in

### Common Secret Types

#### 1. JWT Secrets (64 characters)

**Used by**: LLDAP, Vikunja, NextAuth

**Generate:**
```bash
openssl rand -hex 32
```

**Example output:**
```
a1b2c3d4e5f67890abcdef1234567890a1b2c3d4e5f67890abcdef1234567890
```

**Where to use:**
- `LLDAP_JWT_SECRET`
- `VIKUNJA_SERVICE_JWTSECRET`
- `NEXTAUTH_SECRET`
- `SESSION_SECRET`

#### 2. Database Passwords (32 alphanumeric)

**Used by**: Postgres, Immich, Vikunja, Linkwarden

**Generate:**
```bash
openssl rand -base64 32 | tr -d /=+ | cut -c1-32
```

**Example output:**
```
aB3dEf7HiJ9kLmN2oPqR5sTuV8wXyZ1
```

**Where to use:**
- `DB_PASSWORD` (Immich)
- `POSTGRES_PASSWORD` (Vikunja, Linkwarden)
- `VIKUNJA_DATABASE_PASSWORD`

#### 3. Strong Passwords (16+ characters, mixed)

**Used by**: LLDAP admin, service admin accounts

**Generate:**
```bash
# Option 1: Using pwgen (install: apt install pwgen)
pwgen -s 20 1

# Option 2: Using openssl
openssl rand -base64 20 | tr -d /=+

# Option 3: Manual (recommended for main admin password)
# Create something memorable but strong
# Example format: MyS3cur3P@ssw0rd!2024#HomeL@b
```

**Where to use:**
- `LLDAP_LDAP_USER_PASS`
- `LDAP_BIND_PASSWORD` (must match `LLDAP_LDAP_USER_PASS`!)

#### 4. API Keys / Master Keys (32 characters)

**Used by**: Meilisearch, various APIs

**Generate:**
```bash
openssl rand -hex 16
```

**Example output:**
```
f6a7b8c901234abcdef567890a1b2c3d
```

**Where to use:**
- `MEILI_MASTER_KEY`
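
When bootstrapping a new service you can run all four generators in one shot; the `SOME_*` variable names below are placeholders to copy from, not real settings:

```bash
# One of each secret type, in env-file form, ready to paste
echo "SOME_JWT_SECRET=$(openssl rand -hex 32)"                              # 64 hex chars
echo "SOME_DB_PASSWORD=$(openssl rand -base64 32 | tr -d /=+ | cut -c1-32)" # 32 alphanumeric
echo "SOME_ADMIN_PASSWORD=$(openssl rand -base64 20 | tr -d /=+)"           # strong password
echo "SOME_MASTER_KEY=$(openssl rand -hex 16)"                              # 32 hex chars
```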

## Service-Specific Configuration

### Core Services

#### LLDAP (`compose/core/lldap/.env`)

```bash
# Edit the file
cd ~/homelab/compose/core/lldap
nano .env
```

**Required secrets:**

```env
# Admin password - use a STRONG password you'll remember
# Example: MyS3cur3P@ssw0rd!2024#HomeL@b
LLDAP_LDAP_USER_PASS=changeme_please_set_secure_password

# JWT secret - generate with: openssl rand -hex 32
# Example: a1b2c3d4e5f67890abcdef1234567890a1b2c3d4e5f67890abcdef1234567890
LLDAP_JWT_SECRET=changeme_please_set_random_secret
```

**Generate and update:**
```bash
# Generate JWT secret
echo "LLDAP_JWT_SECRET=$(openssl rand -hex 32)"

# Choose a strong password for LLDAP_LDAP_USER_PASS
# Write it down - you'll need it for Tinyauth too!
```
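
If you'd rather not paste by hand, the JWT placeholder can be swapped in-place with `sed`. A sketch, assuming the variable name matches exactly; it keeps a backup copy first:

```bash
# Replace the LLDAP_JWT_SECRET placeholder in .env with a fresh secret
cd ~/homelab/compose/core/lldap
cp .env .env.bak
sed -i "s|^LLDAP_JWT_SECRET=.*|LLDAP_JWT_SECRET=$(openssl rand -hex 32)|" .env
grep "^LLDAP_JWT_SECRET=" .env   # confirm the placeholder is gone
```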

#### Tinyauth (`compose/core/tinyauth/.env`)

```bash
cd ~/homelab/compose/core/tinyauth
nano .env
```

**Required secrets:**

```env
# MUST match LLDAP_LDAP_USER_PASS from lldap/.env
LDAP_BIND_PASSWORD=changeme_please_set_secure_password

# Session secret - generate with: openssl rand -hex 32
SESSION_SECRET=changeme_please_set_random_session_secret
```

**⚠️ CRITICAL**: `LDAP_BIND_PASSWORD` must exactly match `LLDAP_LDAP_USER_PASS`!

```bash
# Generate session secret
echo "SESSION_SECRET=$(openssl rand -hex 32)"
```
|
|
||||||
|
|
||||||
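
Since a mismatch here is the most common SSO failure, a small helper can catch it before anything is deployed. A sketch (the `check_match` helper is hypothetical, not part of either service):

```bash
# Compare a key in one .env file against a key in another; prints OK when
# the values exist and match, MISMATCH otherwise.
check_match() {
  # $1=file1  $2=key1  $3=file2  $4=key2
  a=$(grep "^$2=" "$1" | cut -d= -f2-)
  b=$(grep "^$4=" "$3" | cut -d= -f2-)
  if [ -n "$a" ] && [ "$a" = "$b" ]; then echo OK; else echo MISMATCH; fi
}

# Usage with the paths from this guide:
# check_match ~/homelab/compose/core/lldap/.env LLDAP_LDAP_USER_PASS \
#             ~/homelab/compose/core/tinyauth/.env LDAP_BIND_PASSWORD
```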

### Media Services

#### Immich (`compose/media/frontend/immich/.env`)

```bash
cd ~/homelab/compose/media/frontend/immich
nano .env
```

**Required secrets:**

```env
# Database password - generate with: openssl rand -base64 32 | tr -d /=+ | cut -c1-32
DB_PASSWORD=changeme_please_set_secure_password
```

```bash
# Generate
echo "DB_PASSWORD=$(openssl rand -base64 32 | tr -d /=+ | cut -c1-32)"
```

### Utility Services

#### Linkwarden (`compose/services/linkwarden/.env`)

```bash
cd ~/homelab/compose/services/linkwarden
nano .env
```

**Required secrets:**

```env
# NextAuth secret - generate with: openssl rand -hex 32
NEXTAUTH_SECRET=changeme_please_set_random_secret_key

# Postgres password - generate with: openssl rand -base64 32 | tr -d /=+ | cut -c1-32
POSTGRES_PASSWORD=changeme_please_set_secure_postgres_password

# Meilisearch master key - generate with: openssl rand -hex 16
MEILI_MASTER_KEY=changeme_please_set_meili_master_key
```

```bash
# Generate all three
echo "NEXTAUTH_SECRET=$(openssl rand -hex 32)"
echo "POSTGRES_PASSWORD=$(openssl rand -base64 32 | tr -d /=+ | cut -c1-32)"
echo "MEILI_MASTER_KEY=$(openssl rand -hex 16)"
```

#### Vikunja (`compose/services/vikunja/.env`)

```bash
cd ~/homelab/compose/services/vikunja
nano .env
```

**Required secrets:**

```env
# Database password (used in two places - must match!)
VIKUNJA_DATABASE_PASSWORD=changeme_please_set_secure_password
POSTGRES_PASSWORD=changeme_please_set_secure_password  # Same value!

# JWT secret - generate with: openssl rand -hex 32
VIKUNJA_SERVICE_JWTSECRET=changeme_please_set_random_jwt_secret
```

**⚠️ CRITICAL**: Both password fields must match!

```bash
# Generate
DB_PASS=$(openssl rand -base64 32 | tr -d /=+ | cut -c1-32)
echo "VIKUNJA_DATABASE_PASSWORD=$DB_PASS"
echo "POSTGRES_PASSWORD=$DB_PASS"
echo "VIKUNJA_SERVICE_JWTSECRET=$(openssl rand -hex 32)"
```

## Automated Configuration Script

Create a script to generate all secrets at once:

```bash
#!/bin/bash
# save as: ~/homelab/generate-secrets.sh

# Colors for output
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'

echo -e "${YELLOW}Homelab Secrets Generator${NC}\n"

echo "This script will help you generate secure secrets for your homelab."
echo "You'll need to manually copy these values into the respective .env files."
echo ""

# LLDAP
echo -e "${GREEN}=== LLDAP (compose/core/lldap/.env) ===${NC}"
echo "LLDAP_JWT_SECRET=$(openssl rand -hex 32)"
echo "LLDAP_LDAP_USER_PASS=<choose-a-strong-password-manually>"
echo ""

# Tinyauth
echo -e "${GREEN}=== Tinyauth (compose/core/tinyauth/.env) ===${NC}"
echo "LDAP_BIND_PASSWORD=<same-as-LLDAP_LDAP_USER_PASS-above>"
echo "SESSION_SECRET=$(openssl rand -hex 32)"
echo ""

# Immich
echo -e "${GREEN}=== Immich (compose/media/frontend/immich/.env) ===${NC}"
echo "DB_PASSWORD=$(openssl rand -base64 32 | tr -d /=+ | cut -c1-32)"
echo ""

# Linkwarden
echo -e "${GREEN}=== Linkwarden (compose/services/linkwarden/.env) ===${NC}"
echo "NEXTAUTH_SECRET=$(openssl rand -hex 32)"
echo "POSTGRES_PASSWORD=$(openssl rand -base64 32 | tr -d /=+ | cut -c1-32)"
echo "MEILI_MASTER_KEY=$(openssl rand -hex 16)"
echo ""

# Vikunja
VIKUNJA_PASS=$(openssl rand -base64 32 | tr -d /=+ | cut -c1-32)
echo -e "${GREEN}=== Vikunja (compose/services/vikunja/.env) ===${NC}"
echo "VIKUNJA_DATABASE_PASSWORD=$VIKUNJA_PASS"
echo "POSTGRES_PASSWORD=$VIKUNJA_PASS  # Must match above!"
echo "VIKUNJA_SERVICE_JWTSECRET=$(openssl rand -hex 32)"
echo ""

echo -e "${YELLOW}Done! Copy these values into your .env files.${NC}"
echo ""
echo "Don't forget to:"
echo "1. Choose a strong LLDAP_LDAP_USER_PASS manually"
echo "2. Use the same password for LDAP_BIND_PASSWORD in tinyauth"
echo "3. Save all secrets in a password manager"
```

**Usage:**
```bash
chmod +x ~/homelab/generate-secrets.sh
~/homelab/generate-secrets.sh > secrets.txt

# Review and copy secrets
cat secrets.txt

# Keep this file safe or delete after copying to .env files
```

## Security Best Practices

### 1. Use a Password Manager

Store all secrets in a password manager:
- **1Password**: Great for teams
- **Bitwarden**: Self-hostable option
- **KeePassXC**: Offline, open-source

Create an entry for each service with:
- Service name
- URL
- All secrets from `.env` file
- Admin credentials

### 2. Never Commit Secrets

The repository `.gitignore` already excludes `.env` files, but double-check:

```bash
# Verify .env files are ignored
git status

# Should NOT show any .env files
```

### 3. Backup Your Secrets

```bash
# Create encrypted backup of all .env files
cd ~/homelab
tar czf env-backup-$(date +%Y%m%d).tar.gz $(find compose -name ".env")

# Encrypt with GPG
gpg -c env-backup-$(date +%Y%m%d).tar.gz

# Store encrypted file safely
mv env-backup-*.tar.gz.gpg ~/backups/

# Delete unencrypted tar
rm env-backup-*.tar.gz
```

### 4. Rotate Secrets Regularly

Change critical secrets periodically:
- **Admin passwords**: Every 90 days
- **JWT secrets**: Every 180 days
- **Database passwords**: When personnel changes
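
Rotation usually means editing one `KEY=value` line in place and restarting the service. A sketch, assuming values contain no `|`, `&`, or newline characters (true for the hex and base64-derived secrets in this guide); the `rotate_secret` helper is hypothetical:

```bash
# Replace KEY=... in an .env file in place; sed keeps a .bak copy of the
# old file, and the new line is printed for confirmation.
rotate_secret() {
  # $1=env-file  $2=key  $3=new-value
  sed -i.bak "s|^$2=.*|$2=$3|" "$1"
  grep "^$2=" "$1"
}

# Example: rotate the LLDAP JWT secret (path from this guide), then
# restart the service so it picks up the new value:
# rotate_secret ~/homelab/compose/core/lldap/.env LLDAP_JWT_SECRET "$(openssl rand -hex 32)"
# (cd ~/homelab/compose/core/lldap && docker compose up -d)
```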

### 5. Limit Secret Access

- Don't share raw secrets over email/chat
- Use password manager's sharing features
- Delete shared secrets when no longer needed

## Verification

### Check All Secrets Are Set

```bash
# Should return 0 (no changeme_ values left)
grep -r "changeme_" ~/homelab/compose | wc -l
```
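
The count tells you *whether* anything is left, but not *where*. Listing the offending files is often more useful; a sketch (the `unconfigured` helper name is just for illustration):

```bash
# Print the files that still contain placeholder values, or a short
# all-clear message when none remain.
unconfigured() {
  grep -rl "changeme_" "$1" 2>/dev/null || echo "all secrets configured"
}

# Usage: unconfigured ~/homelab/compose
```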

### Test Service Startup

```bash
# Start a service and check for password errors
cd ~/homelab/compose/core/lldap
docker compose up -d
docker compose logs

# Should NOT see:
# - "invalid password"
# - "authentication failed"
# - "secret not set"
```

### Verify SSO Works

1. Start LLDAP and Tinyauth
2. Access a protected service (e.g., https://tasks.fig.systems)
3. You should be redirected to auth.fig.systems
4. Log in with your LLDAP credentials
5. You should be redirected back to the service

If this works, your LLDAP ↔ Tinyauth passwords match! ✅

## Common Mistakes

### ❌ Using Weak Passwords

**Don't:**
```env
LLDAP_LDAP_USER_PASS=password123
```

**Do:**
```env
LLDAP_LDAP_USER_PASS=MyS3cur3P@ssw0rd!2024#HomeL@b
```

### ❌ Mismatched Passwords

**Don't:**
```env
# In lldap/.env
LLDAP_LDAP_USER_PASS=password1

# In tinyauth/.env
LDAP_BIND_PASSWORD=password2  # Different!
```

**Do:**
```env
# In lldap/.env
LLDAP_LDAP_USER_PASS=MyS3cur3P@ssw0rd!2024#HomeL@b

# In tinyauth/.env
LDAP_BIND_PASSWORD=MyS3cur3P@ssw0rd!2024#HomeL@b  # Same!
```

### ❌ Using Same Secret Everywhere

**Don't:**
```env
# Same secret in multiple places
LLDAP_JWT_SECRET=abc123
NEXTAUTH_SECRET=abc123
SESSION_SECRET=abc123
```

**Do:**
```env
# Unique secret for each
LLDAP_JWT_SECRET=a1b2c3d4e5f67890...
NEXTAUTH_SECRET=f6e7d8c9b0a1f2e3...
SESSION_SECRET=9f8e7d6c5b4a3f2e...
```
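
A loop keeps the values unique by construction, since each `openssl rand` call produces an independent 64-character hex string (the variable names are the ones used throughout this guide):

```bash
# One freshly generated value per secret; nothing is ever reused.
for name in LLDAP_JWT_SECRET NEXTAUTH_SECRET SESSION_SECRET; do
  echo "$name=$(openssl rand -hex 32)"
done
```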

### ❌ Forgetting to Update Both Password Fields

In Vikunja `.env`, both must match:
```env
# Both must be the same!
VIKUNJA_DATABASE_PASSWORD=aB3dEf7HiJ9kLmN2oPqR5sTuV8wXyZ1
POSTGRES_PASSWORD=aB3dEf7HiJ9kLmN2oPqR5sTuV8wXyZ1
```

## Troubleshooting

### "Authentication failed" in Tinyauth

**Cause**: `LDAP_BIND_PASSWORD` doesn't match `LLDAP_LDAP_USER_PASS`

**Fix**:
```bash
# Check LLDAP password
grep LLDAP_LDAP_USER_PASS ~/homelab/compose/core/lldap/.env

# Check Tinyauth password
grep LDAP_BIND_PASSWORD ~/homelab/compose/core/tinyauth/.env

# They should be identical!
```

### "Invalid JWT" errors

**Cause**: JWT secret is too short or in an invalid format

**Fix**:
```bash
# Regenerate with proper length
openssl rand -hex 32

# Update in .env file
```

### "Database connection failed"

**Cause**: Database password mismatch

**Fix**:
```bash
# Check both password fields match
grep -E "(POSTGRES_PASSWORD|DATABASE_PASSWORD)" compose/services/vikunja/.env

# Both should be identical
```

## Next Steps

Once all secrets are configured:
1. ✅ [Deploy services](../getting-started.md#step-6-deploy-services)
2. ✅ [Configure SSO](../services/sso-setup.md)
3. ✅ [Set up backups](../operations/backups.md)
4. ✅ Store secrets in a password manager
5. ✅ Create an encrypted backup of `.env` files

## Reference

### Quick Command Reference

```bash
# Generate 64-char hex
openssl rand -hex 32

# Generate 32-char password
openssl rand -base64 32 | tr -d /=+ | cut -c1-32

# Generate 32-char hex
openssl rand -hex 16

# Find all changeme_ values
grep -r "changeme_" compose/

# Count remaining secrets to configure
grep -r "changeme_" compose/ | wc -l

# Backup all .env files (encrypted)
tar czf env-files.tar.gz $(find compose -name ".env")
gpg -c env-files.tar.gz
```

### Secret Types Quick Reference

| Secret Type | Command | Example Length | Used By |
|-------------|---------|----------------|---------|
| JWT Secret | `openssl rand -hex 32` | 64 chars | LLDAP, Vikunja, NextAuth |
| Session Secret | `openssl rand -hex 32` | 64 chars | Tinyauth |
| DB Password | `openssl rand -base64 32 \| tr -d /=+ \| cut -c1-32` | 32 chars | Postgres, Immich |
| API Key | `openssl rand -hex 16` | 32 chars | Meilisearch |
| Admin Password | Manual | 16+ chars | LLDAP admin |

---

**Remember**: Strong, unique secrets are your first line of defense. Take the time to generate them properly! 🔐

# Quick Reference Guide

Fast reference for common tasks and commands.

## Service URLs

All services accessible via:
- Primary domain: `*.fig.systems`
- Secondary domain: `*.edfig.dev`

### Core Services
```
https://traefik.fig.systems    # Reverse proxy dashboard
https://lldap.fig.systems      # User directory
https://auth.fig.systems       # SSO authentication
```

### Dashboard & Management
```
https://home.fig.systems       # Homarr dashboard (START HERE!)
https://backup.fig.systems     # Backrest backup manager
```

### Media Services
```
https://flix.fig.systems       # Jellyfin media server
https://photos.fig.systems     # Immich photo library
https://requests.fig.systems   # Jellyseerr media requests
https://sonarr.fig.systems     # TV show automation
https://radarr.fig.systems     # Movie automation
https://sabnzbd.fig.systems    # Usenet downloader
https://qbt.fig.systems        # qBittorrent client
```

### Utility Services
```
https://links.fig.systems      # Linkwarden bookmarks
https://tasks.fig.systems      # Vikunja task management
https://garage.fig.systems     # LubeLogger vehicle tracking
https://books.fig.systems      # Calibre-web ebook library
https://booklore.fig.systems   # Book tracking
https://rss.fig.systems        # FreshRSS reader
https://files.fig.systems      # File Browser
```

## Common Commands

### Docker Compose

```bash
# Start service
cd ~/homelab/compose/path/to/service
docker compose up -d

# View logs
docker compose logs -f

# Restart service
docker compose restart

# Stop service
docker compose down

# Update and restart
docker compose pull
docker compose up -d

# Rebuild service
docker compose up -d --force-recreate
```

### Docker Management

```bash
# List running containers
docker ps

# List all containers (including stopped)
docker ps -a

# View logs
docker logs <container_name>
docker logs -f <container_name>   # Follow logs

# Execute command in container
docker exec -it <container_name> bash

# View resource usage
docker stats

# Remove stopped containers
docker container prune

# Remove unused images
docker image prune -a

# Remove unused volumes (CAREFUL!)
docker volume prune

# Complete cleanup
docker system prune -a --volumes
```

### Service Management

```bash
# Start all core services
cd ~/homelab/compose/core
for dir in traefik lldap tinyauth; do
  cd "$dir" && docker compose up -d && cd ..
done

# Stop all services
cd ~/homelab
find compose -name "compose.yaml" -execdir docker compose down \;

# Restart single service
cd ~/homelab/compose/services/servicename
docker compose restart

# View all running containers
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
```

### System Checks

```bash
# Check all containers
docker ps --format "table {{.Names}}\t{{.Status}}"

# Check network
docker network inspect homelab

# Check disk usage
docker system df
df -h

# Check logs for errors (run inside a service directory)
docker compose logs --tail=100 | grep -i error

# Test DNS resolution
dig home.fig.systems +short

# Test SSL
curl -I https://home.fig.systems
```

## Secret Generation

```bash
# JWT/Session secrets (64 char)
openssl rand -hex 32

# Database passwords (32 char alphanumeric)
openssl rand -base64 32 | tr -d /=+ | cut -c1-32

# API keys (32 char hex)
openssl rand -hex 16

# Find what needs updating
grep -r "changeme_" ~/homelab/compose
```

## Troubleshooting

### Service Won't Start

```bash
# Check logs
docker compose logs

# Check container status
docker compose ps

# Check for port conflicts
sudo netstat -tulpn | grep :80
sudo netstat -tulpn | grep :443

# Recreate container
docker compose down
docker compose up -d
```

### SSL Certificate Issues

```bash
# Check Traefik logs
docker logs traefik | grep -i certificate

# Check Let's Encrypt logs
docker logs traefik | grep -i letsencrypt

# Verify DNS
dig home.fig.systems +short

# Test port 80 accessibility
curl -I http://home.fig.systems
```

### SSO Not Working

```bash
# Check LLDAP
docker logs lldap

# Check Tinyauth
docker logs tinyauth

# Verify passwords match
grep LLDAP_LDAP_USER_PASS ~/homelab/compose/core/lldap/.env
grep LDAP_BIND_PASSWORD ~/homelab/compose/core/tinyauth/.env

# Test LDAP connection
docker exec tinyauth nc -zv lldap 3890
```

### Database Connection Failures

```bash
# Check database container
docker ps | grep postgres

# View database logs
docker logs <db_container_name>

# Test connection from app container
docker exec <app_container> nc -zv <db_container> 5432

# Verify password in .env
grep POSTGRES_PASSWORD .env
```

## File Locations

### Configuration
```
~/homelab/compose/              # All services
~/homelab/compose/core/         # Core infrastructure
~/homelab/compose/media/        # Media services
~/homelab/compose/services/     # Utility services
```

### Service Data
```
compose/<service>/config/       # Service configuration
compose/<service>/data/         # Service data
compose/<service>/db/           # Database files
compose/<service>/.env          # Environment variables
```

### Media Files
```
/media/movies/                  # Movies
/media/tv/                      # TV shows
/media/music/                   # Music
/media/photos/                  # Photos
/media/books/                   # Books
/media/downloads/               # Active downloads
/media/complete/                # Completed downloads
```

### Logs
```
docker logs <container_name>    # Container logs
compose/<service>/logs/         # Service-specific logs (if configured)
/var/lib/docker/volumes/        # Volume data
```

## Network

### Create Network
```bash
docker network create homelab
```

### Inspect Network
```bash
docker network inspect homelab
```

### Connect Container to Network
```bash
docker network connect homelab <container_name>
```

## GPU (NVIDIA GTX 1070)

### Check GPU Status
```bash
nvidia-smi
```

### Test GPU in Docker
```bash
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```

### Monitor GPU Usage
```bash
watch -n 1 nvidia-smi
```

### Check GPU in Container
```bash
docker exec jellyfin nvidia-smi
docker exec immich_machine_learning nvidia-smi
```

## Backup

### Backup Configuration Files
```bash
cd ~/homelab
tar czf homelab-config-$(date +%Y%m%d).tar.gz \
  $(find compose -name ".env") \
  $(find compose -name "compose.yaml")
```
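
It's worth confirming an archive actually contains what you expect before relying on it. A self-contained sketch of the same `tar`/`find` pattern, run against a throwaway directory:

```bash
# Create a demo layout, back it up, then list the archive to verify.
workdir=$(mktemp -d)
cd "$workdir"
mkdir -p compose/demo
echo 'KEY=value' > compose/demo/.env
echo 'services: {}' > compose/demo/compose.yaml

tar czf config-backup.tar.gz \
  $(find compose -name ".env") \
  $(find compose -name "compose.yaml")

tar tzf config-backup.tar.gz   # should list both files
```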

### Backup Service Data
```bash
# Example: Backup Immich
cd ~/homelab/compose/media/frontend/immich
tar czf immich-backup-$(date +%Y%m%d).tar.gz upload/ config/
```

### Restore Configuration
```bash
tar xzf homelab-config-YYYYMMDD.tar.gz
```

## Updates

### Update Single Service
```bash
cd ~/homelab/compose/path/to/service
docker compose pull
docker compose up -d
```

### Update All Services
```bash
cd ~/homelab
for dir in $(find compose -name "compose.yaml" -exec dirname {} \;); do
  echo "Updating $dir"
  cd "$dir"
  docker compose pull
  docker compose up -d
  cd ~/homelab
done
```

### Update Docker
```bash
sudo apt update
sudo apt upgrade docker-ce docker-ce-cli containerd.io
```

## Performance

### Check Resource Usage
```bash
# Overall system
htop

# Docker containers
docker stats

# Disk usage
df -h
docker system df

# Network usage
iftop
```

### Clean Up Disk Space
```bash
# Docker cleanup
docker system prune -a

# Remove old logs
sudo journalctl --vacuum-time=7d

# Find large files
du -h /media | sort -rh | head -20
```

## DNS Configuration

### Cloudflare Example
```
Type:    A
Name:    *
Content: YOUR_SERVER_IP
Proxy:   Off (disable for Let's Encrypt)
TTL:     Auto
```

### Local DNS (Pi-hole/hosts file)
```
192.168.1.100  home.fig.systems
192.168.1.100  flix.fig.systems
192.168.1.100  photos.fig.systems
# ... etc
```
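
With many subdomains, generating the hosts lines is less error-prone than typing them. A sketch (the IP and subdomain list are examples from this guide; extend the list to match your services):

```bash
# Emit one hosts-file line per subdomain.
IP=192.168.1.100
for sub in home flix photos traefik lldap auth; do
  printf '%s %s.fig.systems\n' "$IP" "$sub"
done
```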

## Environment Variables

### List All Services with Secrets
```bash
find ~/homelab/compose -name ".env"
```

### Check for Unconfigured Secrets
```bash
grep -r "changeme_" ~/homelab/compose | wc -l
# Should be 0
```

### Backup All .env Files
```bash
cd ~/homelab
tar czf env-files-$(date +%Y%m%d).tar.gz $(find compose -name ".env")
gpg -c env-files-$(date +%Y%m%d).tar.gz
```

## Monitoring

### Service Health
```bash
# List any containers that are NOT up
docker ps --format "{{.Names}}: {{.Status}}" | grep -v "Up"

# Check for restarts
docker ps --format "{{.Names}}: {{.Status}}" | grep "Restarting"

# Check logs for errors (run inside a service directory)
docker compose logs --tail=100 | grep -i error
```

### SSL Certificate Expiry
```bash
# Check cert expiry
echo | openssl s_client -servername home.fig.systems -connect home.fig.systems:443 2>/dev/null | openssl x509 -noout -dates
```

### Disk Space
```bash
# Overall
df -h

# Docker
docker system df

# Media
du -sh /media/*
```

## Common File Paths

```bash
# Core services
~/homelab/compose/core/traefik/
~/homelab/compose/core/lldap/
~/homelab/compose/core/tinyauth/

# Media
~/homelab/compose/media/frontend/jellyfin/
~/homelab/compose/media/frontend/immich/
~/homelab/compose/media/automation/sonarr/

# Utilities
~/homelab/compose/services/homarr/
~/homelab/compose/services/backrest/
~/homelab/compose/services/linkwarden/

# Documentation
~/homelab/docs/
~/homelab/README.md
```

## Port Reference

```
80   - HTTP (Traefik)
443  - HTTPS (Traefik)
3890 - LLDAP
6881 - qBittorrent (TCP/UDP)
8096 - Jellyfin
2283 - Immich
```

## Default Credentials

⚠️ **Change these immediately after first login!**

### qBittorrent
```
Username: admin
Password: adminadmin
```

### Microbin
```
Check compose/services/microbin/.env
MICROBIN_ADMIN_USERNAME
MICROBIN_ADMIN_PASSWORD
```

### All Other Services
Use SSO (LLDAP) or create an admin account on first visit.

## Quick Deployment

### Deploy Everything
```bash
cd ~/homelab
chmod +x deploy-all.sh
./deploy-all.sh
```

### Deploy Core Only
```bash
cd ~/homelab/compose/core/traefik && docker compose up -d
cd ../lldap && docker compose up -d
cd ../tinyauth && docker compose up -d
```

### Deploy Media Stack
```bash
cd ~/homelab/compose/media/frontend
for dir in */; do cd "$dir" && docker compose up -d && cd ..; done

cd ~/homelab/compose/media/automation
for dir in */; do cd "$dir" && docker compose up -d && cd ..; done
```

## Emergency Procedures

### Stop All Services
```bash
cd ~/homelab
find compose -name "compose.yaml" -execdir docker compose down \;
```

### Remove All Containers (Nuclear Option)
```bash
docker stop $(docker ps -aq)
docker rm $(docker ps -aq)
```

### Reset Network
```bash
docker network rm homelab
docker network create homelab
```

### Reset Service
```bash
cd ~/homelab/compose/path/to/service
docker compose down -v   # REMOVES VOLUMES!
docker compose up -d
```

---

**For detailed guides, see the [docs folder](./README.md).**

# Services Overview

Complete list of all services in the homelab with descriptions and use cases.

## Core Infrastructure (Required)

### Traefik
- **URL**: https://traefik.fig.systems
- **Purpose**: Reverse proxy with automatic SSL/TLS
- **Why**: Routes all traffic, manages Let's Encrypt certificates
- **Required**: ✅ Yes - Nothing works without this

### LLDAP
- **URL**: https://lldap.fig.systems
- **Purpose**: Lightweight LDAP directory for user management
- **Why**: Centralized user database for SSO
- **Required**: ✅ Yes (if using SSO)
- **Default Login**: admin / <your LLDAP_LDAP_USER_PASS>

### Tinyauth
- **URL**: https://auth.fig.systems
- **Purpose**: SSO forward authentication middleware
- **Why**: Single login for all services
- **Required**: ✅ Yes (if using SSO)

## Dashboard & Management

### Homarr
- **URL**: https://home.fig.systems
- **Purpose**: Service dashboard with auto-discovery
- **Why**: See all your services in one place, monitor status
- **Required**: ⬜ No, but highly recommended
- **Features**:
  - Auto-discovers Docker containers
  - Customizable widgets
  - Service status monitoring
  - Integration with media services

### Backrest
- **URL**: https://backup.fig.systems
- **Purpose**: Backup management with web UI (uses Restic)
- **Why**: Encrypted, deduplicated backups to Backblaze B2
- **Required**: ⬜ No, but critical for data safety
- **Features**:
  - Web-based backup management
  - Scheduled backups
  - File browsing and restore
  - Encryption at rest
  - S3-compatible storage support
## Media Services

### Jellyfin
- **URL**: https://flix.fig.systems
- **Purpose**: Media server (Netflix alternative)
- **Why**: Watch your movies/TV shows anywhere
- **Required**: ⬜ No
- **Features**:
  - Stream to any device
  - Hardware transcoding (with GPU)
  - Live TV & DVR
  - Mobile apps available
  - Subtitle support

### Immich
- **URL**: https://photos.fig.systems
- **Purpose**: Photo and video management (Google Photos alternative)
- **Why**: Self-hosted photo library with ML features
- **Required**: ⬜ No
- **Features**:
  - Face recognition (with GPU)
  - Object detection
  - Mobile apps with auto-upload
  - Timeline view
  - Album organization

### Jellyseerr
- **URL**: https://requests.fig.systems
- **Purpose**: Media request management
- **Why**: Let users request movies/shows
- **Required**: ⬜ No (only if using Sonarr/Radarr)
- **Features**:
  - Request movies and TV shows
  - Integration with Jellyfin
  - User permissions
  - Notification system
## Media Automation

### Sonarr
- **URL**: https://sonarr.fig.systems
- **Purpose**: TV show automation
- **Why**: Automatically download and organize TV shows
- **Required**: ⬜ No
- **Features**:
  - Episode tracking
  - Automatic downloading
  - Quality management
  - Calendar view

### Radarr
- **URL**: https://radarr.fig.systems
- **Purpose**: Movie automation
- **Why**: Automatically download and organize movies
- **Required**: ⬜ No
- **Features**:
  - Movie tracking
  - Automatic downloading
  - Quality profiles
  - Collection management

### SABnzbd
- **URL**: https://sabnzbd.fig.systems
- **Purpose**: Usenet downloader
- **Why**: Download from Usenet newsgroups
- **Required**: ⬜ No (only if using Usenet)
- **Features**:
  - Fast downloads
  - Automatic verification and repair
  - Category-based processing
  - Password support

### qBittorrent
- **URL**: https://qbt.fig.systems
- **Purpose**: BitTorrent client
- **Why**: Download torrents
- **Required**: ⬜ No (only if using torrents)
- **Features**:
  - Web-based UI
  - RSS support
  - Sequential downloading
  - IP filtering
## Productivity Services

### Linkwarden
- **URL**: https://links.fig.systems
- **Purpose**: Bookmark manager
- **Why**: Save and organize web links
- **Required**: ⬜ No
- **Features**:
  - Collaborative bookmarking
  - Full-text search
  - Screenshots and PDFs
  - Tags and collections
  - Browser extensions

### Vikunja
- **URL**: https://tasks.fig.systems
- **Purpose**: Task management (Todoist alternative)
- **Why**: Track tasks and projects
- **Required**: ⬜ No
- **Features**:
  - Kanban boards
  - Lists and sub-tasks
  - Due dates and reminders
  - Collaboration
  - CalDAV support

### FreshRSS
- **URL**: https://rss.fig.systems
- **Purpose**: RSS/Atom feed reader
- **Why**: Aggregate news and blogs
- **Required**: ⬜ No
- **Features**:
  - Web-based reader
  - Mobile apps via API
  - Filtering and search
  - Multi-user support
## Specialized Services

### LubeLogger
- **URL**: https://garage.fig.systems
- **Purpose**: Vehicle maintenance tracker
- **Why**: Track mileage, maintenance, costs
- **Required**: ⬜ No
- **Features**:
  - Service records
  - Fuel tracking
  - Cost analysis
  - Reminder system
  - Export data

### Calibre-web
- **URL**: https://books.fig.systems
- **Purpose**: Ebook library manager
- **Why**: Manage and read ebooks
- **Required**: ⬜ No
- **Features**:
  - Web-based ebook reader
  - Format conversion
  - Metadata management
  - Send to Kindle
  - OPDS support

### Booklore
- **URL**: https://booklore.fig.systems
- **Purpose**: Book tracking and reviews
- **Why**: Track reading progress and reviews
- **Required**: ⬜ No
- **Features**:
  - Reading lists
  - Progress tracking
  - Reviews and ratings
  - Import from Goodreads

### RSSHub
- **URL**: https://rsshub.fig.systems
- **Purpose**: RSS feed generator
- **Why**: Generate RSS feeds for sites without them
- **Required**: ⬜ No
- **Features**:
  - 1000+ source support
  - Custom routes
  - Filter and transform feeds

### MicroBin
- **URL**: https://paste.fig.systems
- **Purpose**: Encrypted pastebin with file upload
- **Why**: Share code snippets and files
- **Required**: ⬜ No
- **Features**:
  - Encryption support
  - File uploads
  - Burn after reading
  - Custom expiry
  - Password protection

### File Browser
- **URL**: https://files.fig.systems
- **Purpose**: Web-based file manager
- **Why**: Browse and manage media files
- **Required**: ⬜ No
- **Features**:
  - Upload/download files
  - Preview images and videos
  - Text editor
  - File sharing
  - User permissions
## Service Categories

### Minimum Viable Setup
Just want to get started? Deploy these:
1. Traefik
2. LLDAP
3. Tinyauth
4. Homarr
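The minimum viable stack can be brought up in dependency order with a short loop; a sketch, assuming each service lives under `compose/` as in this repo (the exact paths, such as `compose/services/homarr`, are illustrative):

```shell
# Deploy the minimum viable stack in dependency order. Paths are
# illustrative -- adjust to where each service's compose.yaml lives.
deploy_order="compose/core/traefik
compose/core/lldap
compose/core/tinyauth
compose/services/homarr"

printf '%s\n' "$deploy_order" | while read -r dir; do
  echo "deploying: $dir"
  # In practice: (cd "$dir" && docker compose up -d)
done
```

Keeping the order in one list makes it easy to extend later as you add services.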
### Media Enthusiast Setup
For streaming media:
1. Core services (above)
2. Jellyfin
3. Sonarr
4. Radarr
5. qBittorrent
6. Jellyseerr

### Complete Homelab
Everything:
1. Core services
2. All media services
3. All productivity services
4. Backrest for backups
## Resource Requirements

### Light (2 Core, 4GB RAM)
- Core services
- Homarr
- 2-3 utility services

### Medium (4 Core, 8GB RAM)
- Core services
- Media services (without transcoding)
- Most utility services

### Heavy (6+ Core, 16GB+ RAM)
- All services
- GPU transcoding
- Multiple concurrent users
## Quick Deploy Checklist

**Before deploying a service:**
- ✅ Core infrastructure is running
- ✅ `.env` file configured with secrets
- ✅ DNS record created
- ✅ Understand what the service does
- ✅ Know how to configure it

**After deploying:**
- ✅ Check container is running: `docker ps`
- ✅ Check logs: `docker compose logs`
- ✅ Access web UI and complete setup
- ✅ Test SSO if applicable
- ✅ Add to Homarr dashboard
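The post-deploy container check can be scripted. A minimal sketch; the helper takes a captured `docker ps` listing as text so the parsing is explicit (`jellyfin` is just an example name):

```shell
# Check that a named container appears as "Up" in docker ps output.
# $1: container name; $2: output of `docker ps --format '{{.Names}} {{.Status}}'`
check_running() {
  local name=$1 ps_output=$2
  if printf '%s\n' "$ps_output" | grep -q "^${name} Up"; then
    echo "OK: ${name} is running"
  else
    echo "FAIL: ${name} is not running" >&2
    return 1
  fi
}

# Example with a captured snapshot; live use would be:
#   check_running jellyfin "$(docker ps --format '{{.Names}} {{.Status}}')"
snapshot='traefik Up 5 days
jellyfin Up 2 hours'
check_running jellyfin "$snapshot"
```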
## Service Dependencies

```
Traefik (required for all)
├── LLDAP
│   └── Tinyauth
│       └── All SSO-protected services
├── Jellyfin
│   └── Jellyseerr
│       ├── Sonarr
│       └── Radarr
│           ├── SABnzbd
│           └── qBittorrent
├── Immich
│   └── Backrest (for backups)
└── All other services
```
## When to Use Each Service

### Use Jellyfin if:
- You have a movie/TV collection
- Want to stream from anywhere
- Have family/friends who want access
- Want apps on all devices

### Use Immich if:
- You want a Google Photos alternative
- Have lots of photos to manage
- Want ML features (face recognition)
- Have mobile devices

### Use Sonarr/Radarr if:
- You watch a lot of TV/movies
- Want automatic downloads
- Don't want to manually search
- Want quality control

### Use Backrest if:
- You care about your data (you should!)
- Want encrypted cloud backups
- Have important photos/documents
- Want an easy restore process

### Use Linkwarden if:
- You save lots of bookmarks
- Want full-text search
- Share links with a team
- Want offline archives

### Use Vikunja if:
- You need task management
- Work with teams
- Want Kanban boards
- Need CalDAV for calendar integration
## Next Steps

1. Review which services you actually need
2. Start with core + 2-3 services
3. Deploy and configure each fully
4. Add more services gradually
5. Monitor resource usage

---

**Remember**: You don't need all services. Start small and add what you actually use!

@ -1,775 +0,0 @@

# AlmaLinux 9.6 VM Setup Guide

Complete setup guide for the homelab VM on AlmaLinux 9.6 running on Proxmox VE 9.

## Hardware Context

- **Host**: Proxmox VE 9 (Debian 13 based)
  - CPU: AMD Ryzen 5 7600X (6C/12T, 5.3 GHz boost)
  - GPU: NVIDIA GTX 1070 (8GB VRAM)
  - RAM: 32GB DDR5

- **VM Allocation**:
  - OS: AlmaLinux 9.6 (RHEL 9 compatible)
  - CPU: 8 vCPUs
  - RAM: 24GB
  - Disk: 500GB+ (expandable)
  - GPU: GTX 1070 (PCIe passthrough)
## Proxmox VM Creation

### 1. Create VM

```bash
# On Proxmox host
qm create 100 \
  --name homelab \
  --memory 24576 \
  --cores 8 \
  --cpu host \
  --sockets 1 \
  --net0 virtio,bridge=vmbr0 \
  --scsi0 local-lvm:500 \
  --ostype l26 \
  --boot order=scsi0

# Attach AlmaLinux ISO
qm set 100 --ide2 local:iso/AlmaLinux-9.6-x86_64-dvd.iso,media=cdrom

# Enable UEFI
qm set 100 --bios ovmf --efidisk0 local-lvm:1
```

### 2. GPU Passthrough

**Find GPU PCI address:**
```bash
lspci | grep -i nvidia
# Example output: 01:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1070]
```

**Enable IOMMU in Proxmox:**

Edit `/etc/default/grub`:
```bash
# For AMD CPU (Ryzen 5 7600X)
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
```

Update GRUB and reboot:
```bash
update-grub
reboot
```

**Verify IOMMU:**
```bash
dmesg | grep -e DMAR -e IOMMU
# Should show IOMMU enabled
```

**Add GPU to VM:**

Edit `/etc/pve/qemu-server/100.conf`:
```
hostpci0: 0000:01:00,pcie=1,x-vga=1
```

Or via command:
```bash
qm set 100 --hostpci0 0000:01:00,pcie=1,x-vga=1
```

**Blacklist GPU on host:**

Create `/etc/modprobe.d/blacklist-nvidia.conf`:
```
blacklist nouveau
blacklist nvidia
blacklist nvidia_drm
blacklist nvidia_modeset
blacklist nvidia_uvm
```

Update initramfs:
```bash
update-initramfs -u
reboot
```
## AlmaLinux Installation

### 1. Install AlmaLinux 9.6

Start the VM and follow the installer:
1. **Language**: English (US)
2. **Installation Destination**: Use all space, automatic partitioning
3. **Network**: Enable and set hostname to `homelab.fig.systems`
4. **Software Selection**: Minimal Install
5. **Root Password**: Set a strong password
6. **User Creation**: Create an admin user (e.g., `homelab`)

### 2. Post-Installation Configuration

```bash
# SSH into VM
ssh homelab@<vm-ip>

# Update system
sudo dnf update -y

# Install essential tools
sudo dnf install -y \
  vim \
  git \
  curl \
  wget \
  htop \
  ncdu \
  tree \
  tmux \
  bind-utils \
  net-tools \
  firewalld

# Enable and configure firewall
sudo systemctl enable --now firewalld
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload
```
### 3. Configure Static IP (Optional)

```bash
# Find connection name
nmcli connection show

# Set static IP (example: 192.168.1.100)
sudo nmcli connection modify "System eth0" \
  ipv4.addresses 192.168.1.100/24 \
  ipv4.gateway 192.168.1.1 \
  ipv4.dns "1.1.1.1,8.8.8.8" \
  ipv4.method manual

# Restart network
sudo nmcli connection down "System eth0"
sudo nmcli connection up "System eth0"
```
## Docker Installation

### 1. Install Docker Engine

```bash
# Remove old versions
sudo dnf remove docker \
  docker-client \
  docker-client-latest \
  docker-common \
  docker-latest \
  docker-latest-logrotate \
  docker-logrotate \
  docker-engine

# Add Docker repository
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

# Install Docker
sudo dnf install -y \
  docker-ce \
  docker-ce-cli \
  containerd.io \
  docker-buildx-plugin \
  docker-compose-plugin

# Start Docker
sudo systemctl enable --now docker

# Verify
sudo docker run hello-world
```

### 2. Configure Docker

**Add user to docker group:**
```bash
sudo usermod -aG docker $USER
newgrp docker

# Verify (no sudo needed)
docker ps
```

**Configure Docker daemon:**

Create `/etc/docker/daemon.json`:
```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "storage-driver": "overlay2",
  "features": {
    "buildkit": true
  }
}
```

Restart Docker:
```bash
sudo systemctl restart docker
```
## NVIDIA GPU Setup

### 1. Install NVIDIA Drivers

```bash
# Add EPEL repository
sudo dnf install -y epel-release

# Add NVIDIA repository
sudo dnf config-manager --add-repo \
  https://developer.download.nvidia.com/compute/cuda/repos/rhel9/x86_64/cuda-rhel9.repo

# Install drivers
sudo dnf install -y \
  nvidia-driver \
  nvidia-driver-cuda \
  nvidia-settings \
  nvidia-persistenced

# Reboot to load drivers
sudo reboot
```

### 2. Verify GPU

```bash
# Check driver version
nvidia-smi

# Expected output:
# +-----------------------------------------------------------------------------+
# | NVIDIA-SMI 535.xx.xx   Driver Version: 535.xx.xx   CUDA Version: 12.2       |
# |-------------------------------+----------------------+----------------------+
# | GPU  Name    Persistence-M    | Bus-Id        Disp.A | Volatile Uncorr. ECC |
# |   0  GeForce GTX 1070   Off   | 00000000:01:00.0 Off |                  N/A |
# +-------------------------------+----------------------+----------------------+
```

### 3. Install NVIDIA Container Toolkit

```bash
# Add NVIDIA Container Toolkit repository
sudo dnf config-manager --add-repo \
  https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo

# Install toolkit
sudo dnf install -y nvidia-container-toolkit

# Configure Docker to use the nvidia runtime
sudo nvidia-ctk runtime configure --runtime=docker

# Restart Docker
sudo systemctl restart docker

# Test GPU in a container
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```
## Storage Setup

### 1. Create Media Directory

```bash
# Create media directory structure
sudo mkdir -p /media/{tv,movies,music,photos,books,audiobooks,comics,homemovies}
sudo mkdir -p /media/{downloads,complete,incomplete}

# Set ownership
sudo chown -R $USER:$USER /media

# Set permissions
chmod -R 755 /media
```

### 2. Mount Additional Storage (Optional)

If using a separate disk for media (note: mounting it at /media hides the directories created above, so recreate the structure on the new disk after mounting):

```bash
# Find disk
lsblk

# Format disk (example: /dev/sdb)
sudo mkfs.ext4 /dev/sdb

# Get UUID
sudo blkid /dev/sdb

# Add to /etc/fstab
echo "UUID=<uuid> /media ext4 defaults,nofail 0 2" | sudo tee -a /etc/fstab

# Mount
sudo mount -a
```
## Homelab Repository Setup

### 1. Clone Repository

```bash
# Create workspace
mkdir -p ~/homelab
cd ~/homelab

# Clone repository
git clone https://github.com/efigueroa/homelab.git .

# Or if using SSH
git clone git@github.com:efigueroa/homelab.git .
```

### 2. Create Docker Network

```bash
# Create homelab network
docker network create homelab

# Verify
docker network ls | grep homelab
```

### 3. Configure Environment Variables

```bash
# Generate secrets for all services
cd ~/homelab

# LLDAP
cd compose/core/lldap
openssl rand -hex 32 > /tmp/lldap_jwt_secret
openssl rand -base64 32 | tr -d /=+ | cut -c1-32 > /tmp/lldap_pass
# Update .env with generated secrets

# Tinyauth
cd ../tinyauth
openssl rand -hex 32 > /tmp/tinyauth_session
# Update .env (LDAP_BIND_PASSWORD must match LLDAP)

# Continue for all services...
```

See [`docs/guides/secrets-management.md`](../guides/secrets-management.md) for the complete guide.
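The per-service secret generation above can be wrapped in a small helper; a sketch (the `gen_secret` function name is ours, not part of the repo), which avoids the length-shortening pitfall of `tr -d /=+ | cut` by generating extra randomness first:

```shell
# Generate an alphanumeric secret of a given length (default 32),
# mirroring the openssl commands above but guaranteeing the length.
gen_secret() {
  local len=${1:-32}
  # 100 random bytes -> ~136 base64 chars; stripping symbols still
  # leaves comfortably more than 64 alphanumeric characters.
  openssl rand -base64 100 | tr -dc 'A-Za-z0-9' | head -c "$len"
  echo
}

gen_secret 32    # e.g. for LLDAP_LDAP_USER_PASS
gen_secret 64    # e.g. for a session/JWT secret
```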
## SELinux Configuration

AlmaLinux uses SELinux by default. Configure it for Docker:

```bash
# Check SELinux status
getenforce
# Should show: Enforcing

# Allow containers to manage cgroups
sudo setsebool -P container_manage_cgroup on

# If you encounter permission issues with bind mounts:
# Option 1: Add the container SELinux context to directories
sudo chcon -R -t container_file_t ~/homelab/compose
sudo chcon -R -t container_file_t /media

# Option 2: Use the :Z flag on docker volumes (auto-relabels)
# Example: ./data:/data:Z

# Option 3: Set SELinux to permissive (not recommended)
# sudo setenforce 0
```
## System Tuning

### 1. Increase File Limits

```bash
# Add to /etc/security/limits.conf
echo "* soft nofile 65536" | sudo tee -a /etc/security/limits.conf
echo "* hard nofile 65536" | sudo tee -a /etc/security/limits.conf

# Add to /etc/sysctl.conf
echo "fs.file-max = 65536" | sudo tee -a /etc/sysctl.conf
echo "fs.inotify.max_user_watches = 524288" | sudo tee -a /etc/sysctl.conf

# Apply
sudo sysctl -p
```

### 2. Optimize for Media Server

```bash
# Network tuning
echo "net.core.rmem_max = 134217728" | sudo tee -a /etc/sysctl.conf
echo "net.core.wmem_max = 134217728" | sudo tee -a /etc/sysctl.conf
echo "net.ipv4.tcp_rmem = 4096 87380 67108864" | sudo tee -a /etc/sysctl.conf
echo "net.ipv4.tcp_wmem = 4096 65536 67108864" | sudo tee -a /etc/sysctl.conf

# Apply
sudo sysctl -p
```

### 3. CPU Governor (Ryzen 5 7600X)

```bash
# Install cpupower
sudo dnf install -y kernel-tools

# Set to performance mode (takes effect immediately, but does not survive a reboot)
sudo cpupower frequency-set -g performance

# Make it persistent via the cpupower service
echo 'CPUPOWER_START_OPTS="frequency-set -g performance"' | sudo tee /etc/sysconfig/cpupower
sudo systemctl enable --now cpupower.service
```
## Deployment

### 1. Deploy Core Services

```bash
cd ~/homelab

# Create network (skip if it already exists from the repository setup above)
docker network create homelab

# Deploy Traefik
cd compose/core/traefik
docker compose up -d

# Deploy LLDAP
cd ../lldap
docker compose up -d

# Wait for LLDAP to be ready (about 30 seconds)
sleep 30

# Deploy Tinyauth
cd ../tinyauth
docker compose up -d
```
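The fixed `sleep 30` works, but polling until the container reports healthy is more robust. A sketch; it assumes the LLDAP container defines a healthcheck, and takes the probe as a command name so the loop itself is easy to test:

```shell
# Poll a probe command until it prints "healthy", with a timeout in seconds.
wait_healthy() {
  local probe=$1 timeout=${2:-60} elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    if [ "$("$probe")" = "healthy" ]; then
      return 0
    fi
    sleep 2
    elapsed=$((elapsed + 2))
  done
  echo "timed out waiting for healthy" >&2
  return 1
}

# Live use (assumes a healthcheck in lldap's compose file):
#   lldap_probe() { docker inspect -f '{{.State.Health.Status}}' lldap; }
#   wait_healthy lldap_probe 60 && (cd ../tinyauth && docker compose up -d)
```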
### 2. Configure LLDAP

```bash
# Access LLDAP web UI
# https://lldap.fig.systems

# 1. Login with admin credentials from .env
# 2. Create observer user for tinyauth
# 3. Create regular users
```

### 3. Deploy Monitoring

```bash
cd ~/homelab

# Deploy logging stack
cd compose/monitoring/logging
docker compose up -d

# Deploy uptime monitoring
cd ../uptime
docker compose up -d
```

### 4. Deploy Services

See [`README.md`](../../README.md) for the complete deployment order.
## Verification

### 1. Check All Services

```bash
# List all running containers
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"

# Check networks
docker network ls

# Check volumes
docker volume ls
```

### 2. Test GPU Access

```bash
# Test in Jellyfin
docker exec jellyfin nvidia-smi

# Test in Ollama
docker exec ollama nvidia-smi

# Test in Immich
docker exec immich-machine-learning nvidia-smi
```

### 3. Test Logging

```bash
# Check Promtail is collecting logs
docker logs promtail | grep "clients configured"

# Access Grafana
# https://logs.fig.systems

# Query logs
# {container="traefik"}
```

### 4. Test SSL

```bash
# Check certificate
curl -vI https://sonarr.fig.systems 2>&1 | grep -i "subject:"

# Should show a valid Let's Encrypt certificate
```
## Backup Strategy

### 1. VM Snapshots (Proxmox)

```bash
# On Proxmox host
# Create snapshot before major changes
qm snapshot 100 pre-update-$(date +%Y%m%d)

# List snapshots
qm listsnapshot 100

# Restore snapshot
qm rollback 100 <snapshot-name>
```

### 2. Configuration Backup

```bash
# On VM
cd ~/homelab

# Backup all configs (excludes data directories)
tar czf homelab-config-$(date +%Y%m%d).tar.gz \
  --exclude='*/data' \
  --exclude='*/db' \
  --exclude='*/pgdata' \
  --exclude='*/config' \
  --exclude='*/models' \
  --exclude='*_data' \
  compose/

# Backup to external storage
scp homelab-config-*.tar.gz user@backup-server:/backups/
```

### 3. Automated Backups with Backrest

The Backrest service is included and configured. See:
- `compose/services/backrest/`
- Access: https://backup.fig.systems
## Maintenance

### Weekly

```bash
# Update containers
cd ~/homelab
find compose -name "compose.yaml" -type f | while read compose; do
  dir=$(dirname "$compose")
  echo "Updating $dir"
  cd "$dir"
  docker compose pull
  docker compose up -d
  cd ~/homelab
done

# Clean up old images
docker image prune -a -f

# Check disk space
df -h
ncdu /media
```

### Monthly

```bash
# Update AlmaLinux
sudo dnf update -y

# Update NVIDIA drivers (if available)
sudo dnf update nvidia-driver* -y

# Reboot if the kernel was updated
sudo reboot
```
## Troubleshooting

### Services Won't Start

```bash
# Check SELinux denials
sudo ausearch -m avc -ts recent

# If SELinux is blocking:
sudo setsebool -P container_manage_cgroup on

# Or relabel directories
sudo restorecon -Rv ~/homelab/compose
```

### GPU Not Detected

```bash
# Check GPU is passed through
lspci | grep -i nvidia

# Check drivers loaded
lsmod | grep nvidia

# Reinstall drivers
sudo dnf reinstall nvidia-driver* -y
sudo reboot
```

### Network Issues

```bash
# Check firewall
sudo firewall-cmd --list-all

# Add ports if needed
sudo firewall-cmd --permanent --add-port=80/tcp
sudo firewall-cmd --permanent --add-port=443/tcp
sudo firewall-cmd --reload

# Check Docker network
docker network inspect homelab
```

### Permission Denied Errors

```bash
# Check ownership
ls -la ~/homelab/compose/*/

# Fix ownership
sudo chown -R $USER:$USER ~/homelab

# Check SELinux context
ls -Z ~/homelab/compose

# Fix SELinux labels
sudo chcon -R -t container_file_t ~/homelab/compose
```
## Performance Monitoring

### System Stats

```bash
# CPU usage
htop

# GPU usage
watch -n 1 nvidia-smi

# Disk I/O
iostat -x 1

# Network
iftop

# Per-container stats
docker stats
```

### Resource Limits

Example container resource limits:

```yaml
# In compose.yaml
deploy:
  resources:
    limits:
      cpus: '2.0'
      memory: 4G
    reservations:
      cpus: '1.0'
      memory: 2G
```
## Security Hardening
|
|
||||||
|
|
||||||
### 1. Disable Root SSH
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Edit /etc/ssh/sshd_config
|
|
||||||
sudo sed -i 's/#PermitRootLogin yes/PermitRootLogin no/' /etc/ssh/sshd_config
|
|
||||||
|
|
||||||
# Restart SSH
|
|
||||||
sudo systemctl restart sshd
|
|
||||||
```
|
|
||||||
|
|
||||||
### 2. Configure Fail2Ban
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Install
|
|
||||||
sudo dnf install -y fail2ban
|
|
||||||
|
|
||||||
# Configure
|
|
||||||
sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
|
|
||||||
|
|
||||||
# Edit /etc/fail2ban/jail.local
|
|
||||||
# [sshd]
|
|
||||||
# enabled = true
|
|
||||||
# maxretry = 3
|
|
||||||
# bantime = 3600
|
|
||||||
|
|
||||||
# Start
|
|
||||||
sudo systemctl enable --now fail2ban
|
|
||||||
```
|
|
||||||
|
|
||||||
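
The commented jail settings above, written out as a complete `jail.local` fragment (the values are this guide's examples, not requirements):

```ini
[sshd]
enabled = true
maxretry = 3
bantime = 3600
```

After editing, reload with `sudo fail2ban-client reload` and check the jail with `sudo fail2ban-client status sshd`.
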
### 3. Automatic Updates

```bash
# Install dnf-automatic
sudo dnf install -y dnf-automatic

# Configure /etc/dnf/automatic.conf
# apply_updates = yes

# Enable
sudo systemctl enable --now dnf-automatic.timer
```

## Next Steps

1. ✅ VM created and AlmaLinux installed
2. ✅ Docker and NVIDIA drivers configured
3. ✅ Homelab repository cloned
4. ✅ Network and storage configured
5. ⬜ Deploy core services
6. ⬜ Configure SSO
7. ⬜ Deploy all services
8. ⬜ Configure backups
9. ⬜ Set up monitoring

---

**System ready for deployment!** 🚀

# Common Issues and Solutions

This guide covers the most common problems you might encounter and how to fix them.

## Table of Contents

- [Service Won't Start](#service-wont-start)
- [SSL/TLS Certificate Errors](#ssltls-certificate-errors)
- [SSO Authentication Issues](#sso-authentication-issues)
- [Access Issues](#access-issues)
- [Performance Problems](#performance-problems)
- [Database Errors](#database-errors)
- [Network Issues](#network-issues)
- [GPU Problems](#gpu-problems)

## Service Won't Start

### Symptom

Container exits immediately or shows "Exited (1)" status.

### Diagnosis

```bash
cd ~/homelab/compose/path/to/service

# Check container status
docker compose ps

# View logs
docker compose logs

# Check for specific errors
docker compose logs | grep -i error
```

### Common Causes and Fixes

#### 1. Environment Variables Not Set

**Error in logs:**
```
Error: POSTGRES_PASSWORD is not set
Error: required environment variable 'XXX' is missing
```

**Fix:**
```bash
# Check the .env file exists
ls -la .env

# Check for changeme_ values
grep "changeme_" .env

# Update with proper secrets (see secrets guide)
nano .env

# Restart
docker compose up -d
```

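
The per-service `grep "changeme_" .env` check above can be widened into a quick pre-flight scan of the whole tree before deploying anything. A sketch, assuming `.env` files live under `compose/` as elsewhere in this guide:

```shell
# Fail-fast scan for placeholder secrets across every service
# (the compose/ path is an assumption; adjust to your checkout)
leftover=$(grep -rl "changeme_" compose/ 2>/dev/null || true)
if [ -n "$leftover" ]; then
    echo "Placeholder secrets remain in:"
    echo "$leftover"
else
    echo "No placeholder secrets found"
fi
```

Run this from the repository root; any file it lists still needs a real secret.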
#### 2. Port Already in Use

**Error in logs:**
```
Error: bind: address already in use
Error: failed to bind to port 80: address already in use
```

**Fix:**
```bash
# Find what's using the port
sudo netstat -tulpn | grep :80
sudo netstat -tulpn | grep :443

# Stop conflicting service
sudo systemctl stop apache2  # Example
sudo systemctl stop nginx    # Example

# Or change port in compose.yaml
```

#### 3. Network Not Created

**Error in logs:**
```
network homelab declared as external, but could not be found
```

**Fix:**
```bash
# Create network
docker network create homelab

# Verify
docker network ls | grep homelab

# Restart service
docker compose up -d
```

#### 4. Volume Permission Issues

**Error in logs:**
```
Permission denied: '/config'
mkdir: cannot create directory '/data': Permission denied
```

**Fix:**
```bash
# Check directory ownership
ls -la ./config ./data

# Fix ownership (replace 1000:1000 with your UID:GID)
sudo chown -R 1000:1000 ./config ./data

# Restart
docker compose up -d
```

#### 5. Dependency Not Running

**Error in logs:**
```
Failed to connect to database
Connection refused: postgres:5432
```

**Fix:**
```bash
# Start dependency first
cd ~/homelab/compose/path/to/dependency
docker compose up -d

# Wait for it to be healthy
docker compose logs -f

# Then start the service
cd ~/homelab/compose/path/to/service
docker compose up -d
```

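
Manual start ordering can also be encoded in Compose itself, so the app waits for its database automatically. A sketch under assumed names (a `postgres` sibling service with the default `postgres` user; `example/app` is a hypothetical image), not taken from this repo's files:

```yaml
services:
  postgres:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 10

  app:
    image: example/app:latest   # hypothetical application image
    depends_on:
      postgres:
        condition: service_healthy   # wait for the healthcheck, not just container start
```

With `condition: service_healthy`, `docker compose up -d` on the app brings up the database first and blocks until its healthcheck passes.
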
## SSL/TLS Certificate Errors

### Symptom

Browser shows "Your connection is not private" or "NET::ERR_CERT_AUTHORITY_INVALID".

### Diagnosis

```bash
# Check Traefik logs
docker logs traefik | grep -i certificate
docker logs traefik | grep -i letsencrypt
docker logs traefik | grep -i error

# Test certificate
echo | openssl s_client -servername home.fig.systems -connect home.fig.systems:443 2>/dev/null | openssl x509 -noout -dates
```

### Common Causes and Fixes

#### 1. DNS Not Configured

**Fix:**
```bash
# Test DNS resolution
dig home.fig.systems +short

# Should return your server's IP
# If not, configure DNS A records:
# *.fig.systems -> YOUR_SERVER_IP
```

#### 2. Port 80 Not Accessible

Let's Encrypt needs port 80 for the HTTP-01 challenge.

**Fix:**
```bash
# Test from external network
curl -I http://home.fig.systems

# Check firewall
sudo ufw status
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp

# Check port forwarding on router
# Ensure ports 80 and 443 are forwarded to server
```

#### 3. Rate Limiting

Let's Encrypt enforces rate limits, including a cap of 5 duplicate certificates per week.

**Fix:**
```bash
# Check Traefik logs for rate limit errors
docker logs traefik | grep -i "rate limit"

# Wait for the rate limit window to reset (1 week)
# Or use the Let's Encrypt staging environment for testing

# Enable staging in traefik/compose.yaml:
# - --certificatesresolvers.letsencrypt.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory
```

#### 4. First Startup - Certificates Not Yet Generated

**Fix:**
```bash
# Wait 2-5 minutes for certificate generation
docker logs traefik -f

# Look for:
# "Certificate obtained for domain"
```

#### 5. Certificate Expired

Traefik should auto-renew; if a manual renewal is needed:

**Fix:**
```bash
# Remove the old certificate store (acme.json is a file, not a directory)
cd ~/homelab/compose/core/traefik
rm -f ./acme.json

# Restart Traefik
docker compose restart

# Wait for new certificates
docker logs traefik -f
```

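
One gotcha when recreating `acme.json` by hand: Traefik refuses to use the ACME store unless its permissions are restricted to the owner (it logs an error such as "permissions ... are too open, please use 600"). A minimal sketch:

```shell
# Create an empty certificate store with the permissions Traefik expects
touch acme.json
chmod 600 acme.json
ls -l acme.json
```

Traefik then repopulates the file on its next certificate request.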
## SSO Authentication Issues

### Symptom

- Can't log in to SSO-protected services
- Redirected to the auth page but login fails
- "Invalid credentials" error

### Diagnosis

```bash
# Check LLDAP is running
docker ps | grep lldap

# Check Tinyauth is running
docker ps | grep tinyauth

# View logs
docker logs lldap
docker logs tinyauth
```

### Common Causes and Fixes

#### 1. Password Mismatch

LDAP_BIND_PASSWORD must match LLDAP_LDAP_USER_PASS.

**Fix:**
```bash
# Check both passwords
grep LLDAP_LDAP_USER_PASS ~/homelab/compose/core/lldap/.env
grep LDAP_BIND_PASSWORD ~/homelab/compose/core/tinyauth/.env

# They must be EXACTLY the same!

# If different, update tinyauth/.env
cd ~/homelab/compose/core/tinyauth
nano .env
# Set LDAP_BIND_PASSWORD to match LLDAP_LDAP_USER_PASS

# Restart Tinyauth
docker compose restart
```

#### 2. User Doesn't Exist in LLDAP

**Fix:**
```bash
# Access LLDAP web UI
# Go to: https://lldap.fig.systems

# Login with admin credentials
# Username: admin
# Password: <your LLDAP_LDAP_USER_PASS>

# Create user:
# - Click "Create user"
# - Set username, email, password
# - Add to "lldap_admin" group

# Try logging in again
```

#### 3. LLDAP or Tinyauth Not Running

**Fix:**
```bash
# Start LLDAP
cd ~/homelab/compose/core/lldap
docker compose up -d

# Wait for it to be ready
docker compose logs -f

# Start Tinyauth
cd ~/homelab/compose/core/tinyauth
docker compose up -d
docker compose logs -f
```

#### 4. Network Issue Between Tinyauth and LLDAP

**Fix:**
```bash
# Test connection
docker exec tinyauth nc -zv lldap 3890

# Should show: Connection to lldap 3890 port [tcp/*] succeeded!

# If not, check both are on the homelab network
docker network inspect homelab
```

## Access Issues

### Symptom

- Can't access a service from the browser
- Connection timeout
- "This site can't be reached"

### Diagnosis

```bash
# Test from server
curl -I https://home.fig.systems

# Test DNS
dig home.fig.systems +short

# Check container is running
docker ps | grep servicename

# Check Traefik routing
docker logs traefik | grep servicename
```

### Common Causes and Fixes

#### 1. Service Not Running

**Fix:**
```bash
cd ~/homelab/compose/path/to/service
docker compose up -d
docker compose logs -f
```

#### 2. Traefik Not Running

**Fix:**
```bash
cd ~/homelab/compose/core/traefik
docker compose up -d
docker compose logs -f
```

#### 3. DNS Not Resolving

**Fix:**
```bash
# Check DNS
dig service.fig.systems +short

# Should return your server IP
# If not, add or update the DNS A record
```

#### 4. Firewall Blocking

**Fix:**
```bash
# Check firewall
sudo ufw status

# Allow if needed
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
```

#### 5. Wrong Traefik Labels

**Fix:**
```bash
# Check compose.yaml has correct labels
cd ~/homelab/compose/path/to/service
grep -A 10 "labels:" compose.yaml

# Should have:
# traefik.enable: true
# traefik.http.routers.servicename.rule: Host(`service.fig.systems`)
# etc.
```

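
Written out as a Compose fragment, a routable service typically carries at least the labels below. The entrypoint and resolver names (`websecure`, `letsencrypt`) follow the convention used elsewhere in this guide; confirm them against your Traefik static configuration, and `servicename`/the port are placeholders:

```yaml
services:
  servicename:
    networks:
      - homelab
    labels:
      traefik.enable: "true"
      traefik.http.routers.servicename.rule: Host(`service.fig.systems`)
      traefik.http.routers.servicename.entrypoints: websecure
      traefik.http.routers.servicename.tls.certresolver: letsencrypt
      # Only needed when the container exposes more than one port:
      traefik.http.services.servicename.loadbalancer.server.port: "8080"
```

A missing or misspelled router name in any one of these labels is enough for Traefik to skip the service silently.
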
## Performance Problems

### Symptom

- Services running slowly
- High CPU/RAM usage
- System unresponsive

### Diagnosis

```bash
# Overall system
htop

# Docker resources
docker stats

# Disk usage
df -h
docker system df
```

### Common Causes and Fixes

#### 1. Insufficient RAM

**Fix:**
```bash
# Check RAM usage
free -h

# If low, either:
# 1. Add more RAM
# 2. Stop unused services
# 3. Add resource limits to compose files
```

Example resource limit:

```yaml
deploy:
  resources:
    limits:
      memory: 2G
    reservations:
      memory: 1G
```

#### 2. Disk Full

**Fix:**
```bash
# Check disk usage
df -h

# Clean Docker
docker system prune -a

# Remove old logs
sudo journalctl --vacuum-time=7d

# Check media folder
du -sh /media/*
```

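
A common cause of a slowly filling disk is unbounded container logs. Compose can cap them per service with the `json-file` driver options; a sketch (the sizes are illustrative, `servicename` is a placeholder):

```yaml
services:
  servicename:
    logging:
      driver: json-file
      options:
        max-size: "10m"   # rotate each log file at 10 MB
        max-file: "3"     # keep at most 3 rotated files per container
```

The limits take effect when the container is recreated (`docker compose up -d`), not on a plain restart.
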
#### 3. Too Many Services Running

**Fix:**
```bash
# Stop unused services
cd ~/homelab/compose/services/unused-service
docker compose down

# Or deploy only what you need
```

#### 4. Database Not Optimized

**Fix:**
```bash
# For postgres services, add to .env:
POSTGRES_INITDB_ARGS=--data-checksums

# Increase shared buffers (if there is enough RAM):
# Edit compose.yaml, add to the postgres service:
command: postgres -c shared_buffers=256MB -c max_connections=200
```

## Database Errors

### Symptom

- "Connection refused" to the database
- "Authentication failed for user"
- "Database does not exist"

### Diagnosis

```bash
# Check database container
docker ps | grep postgres

# View database logs
docker logs <postgres_container_name>

# Test connection from app
docker exec <app_container> nc -zv <db_container> 5432
```

### Common Causes and Fixes

#### 1. Password Mismatch

**Fix:**
```bash
# Check passwords match in .env
grep PASSWORD .env

# For example, in Vikunja:
# VIKUNJA_DATABASE_PASSWORD and POSTGRES_PASSWORD must match!

# Update if needed
nano .env
docker compose down
docker compose up -d
```

#### 2. Database Not Initialized

**Fix:**
```bash
# Remove database and reinitialize
docker compose down
rm -rf ./db/  # CAREFUL: this deletes all data!
docker compose up -d
```

#### 3. Database Still Starting

**Fix:**
```bash
# Wait for database to be ready
docker logs <postgres_container> -f

# Look for "database system is ready to accept connections"

# Then restart app
docker compose restart <app_service>
```

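
The wait-then-restart dance can be scripted. A sketch of a hypothetical helper that polls a TCP port (using bash's `/dev/tcp`, so it needs no extra tools) before restarting the app:

```shell
# wait_for_port HOST PORT [TRIES] -> 0 once the port accepts, 1 on timeout
wait_for_port() {
    host=$1; port=$2; tries=${3:-30}
    i=0
    # the subshell opens (and implicitly closes) a TCP connection as the probe
    until (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; do
        i=$((i + 1))
        [ "$i" -ge "$tries" ] && return 1
        sleep 1
    done
}

# Usage sketch (names are placeholders):
# wait_for_port localhost 5432 && docker compose restart app
```

Note this only proves the port accepts connections; Postgres may still be replaying WAL for a few more seconds after that.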
## Network Issues

### Symptom

- Containers can't communicate
- "Connection refused" between services

### Diagnosis

```bash
# Inspect network
docker network inspect homelab

# Test connectivity
docker exec container1 ping container2
docker exec container1 nc -zv container2 PORT
```

### Common Causes and Fixes

#### 1. Containers Not on Same Network

**Fix:** check that compose.yaml declares the network and the service uses it:

```yaml
networks:
  homelab:
    external: true

services:
  servicename:
    networks:
      - homelab
```

#### 2. Network Doesn't Exist

**Fix:**
```bash
docker network create homelab
docker compose up -d
```

#### 3. DNS Resolution Between Containers

**Fix:**
```bash
# Use the container name, not localhost
# Wrong: http://localhost:5432
# Right: http://postgres:5432

# Or use the service name from compose.yaml
```

## GPU Problems

### Symptom

- "No hardware acceleration available"
- GPU not detected in container
- "Failed to open GPU"

### Diagnosis

```bash
# Check GPU on host
nvidia-smi

# Check GPU in container
docker exec jellyfin nvidia-smi

# Check Docker GPU runtime
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```

### Common Causes and Fixes

#### 1. NVIDIA Container Toolkit Not Installed

**Fix:**
```bash
# Install toolkit
sudo apt install nvidia-container-toolkit

# Configure runtime
sudo nvidia-ctk runtime configure --runtime=docker

# Restart Docker
sudo systemctl restart docker
```

#### 2. Runtime Not Specified in Compose

**Fix:** edit compose.yaml and uncomment the GPU section:

```yaml
runtime: nvidia
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: all
          capabilities: [gpu]
```

Then restart:

```bash
docker compose up -d
```

#### 3. GPU Already in Use

**Fix:**
```bash
# Check processes using GPU
nvidia-smi

# Kill process if needed
sudo kill <PID>

# Restart service
docker compose restart
```

#### 4. GPU Not Passed Through to VM (Proxmox)

**Fix:**
```bash
# From Proxmox host, check GPU passthrough
lspci | grep -i nvidia

# From VM, check GPU visible
lspci | grep -i nvidia

# If not visible, reconfigure passthrough (see GPU guide)
```

## Getting More Help

If your issue isn't listed here:

1. **Check service-specific logs**:
   ```bash
   cd ~/homelab/compose/path/to/service
   docker compose logs --tail=200
   ```

2. **Search container logs for errors**:
   ```bash
   docker compose logs | grep -i error
   docker compose logs | grep -i fail
   ```

3. **Check FAQ**: See [FAQ](./faq.md)

4. **Debugging Guide**: See [Debugging Guide](./debugging.md)

5. **Service Documentation**: Check the service's official documentation

---

**Most issues can be solved by checking logs and environment variables!**

# Frequently Asked Questions (FAQ)

Common questions and answers about the homelab setup.

## General Questions

### Q: What is this homelab setup?

**A:** This is a GitOps-based infrastructure for self-hosting services using Docker Compose. It includes:
- 20+ pre-configured services (media, productivity, utilities)
- Automatic SSL/TLS with Let's Encrypt via Traefik
- Single Sign-On (SSO) with LLDAP and Tinyauth
- Automated backups with Backrest
- Service discovery dashboard with Homarr

### Q: What are the minimum hardware requirements?

**A:**
- **CPU**: 2+ cores (4+ recommended)
- **RAM**: 8GB minimum (16GB+ recommended)
- **Storage**: 100GB for containers, additional space for media
- **Network**: Static IP recommended, ports 80 and 443 accessible
- **GPU** (optional): NVIDIA GPU for hardware transcoding

### Q: Do I need my own domain name?

**A:** Yes, you need at least one domain (two are configured by default: `fig.systems` and `edfig.dev`). You can:
- Register a domain from any registrar
- Update all compose files to use your domain
- Configure wildcard DNS (`*.yourdomain.com`)

### Q: Can I run this on a Raspberry Pi?

**A:** Partially. ARM64 architecture is supported by most services, but:
- Performance will be limited
- No GPU acceleration is available
- Some services may not have ARM images
- 8GB RAM minimum recommended (Pi 4 or Pi 5)

### Q: How much does this cost to run?

**A:**
- **Server**: $0 (if using existing hardware) or $5-20/month (VPS)
- **Domain**: $10-15/year
- **Backblaze B2**: ~$0.60/month for 100GB of photos
- **Electricity**: varies by hardware and location
- **Total**: $15-30/year minimum

## Setup Questions

### Q: Why won't my services start?

**A:** Common causes:
1. **Environment variables not set**: Check for `changeme_*` in `.env` files
2. **Ports already in use**: Check if 80/443 are available
3. **Network not created**: Run `docker network create homelab`
4. **DNS not configured**: Services need valid DNS records
5. **Insufficient resources**: Check RAM and disk space

**Debug:**
```bash
cd compose/path/to/service
docker compose logs
docker compose ps
```

### Q: How do I know if everything is working?

**A:** Check these indicators:
1. **All containers running**: `docker ps` shows all services
2. **SSL certificates valid**: Visit https://home.fig.systems (no cert errors)
3. **Dashboard accessible**: Homarr shows all services
4. **SSO working**: Can log in to protected services
5. **No errors in logs**: `docker compose logs` shows no critical errors

### Q: What order should I deploy services?

**A:** Follow this order:
1. **Core**: Traefik → LLDAP → Tinyauth
2. **Configure**: Create LLDAP users
3. **Media**: Jellyfin → Immich → Jellyseerr → Sonarr → Radarr → Downloaders
4. **Utility**: Homarr → Backrest → Everything else

### Q: Do I need to configure all 20 services?

**A:** No! Deploy only what you need:
- **Core** (required): Traefik, LLDAP, Tinyauth
- **Media** (optional): Jellyfin, Immich, Sonarr, Radarr
- **Utility** (pick what you want): Homarr, Backrest, Linkwarden, Vikunja, etc.

## Configuration Questions

### Q: What secrets do I need to change?

**A:** Search for `changeme_*` in all `.env` files:
```bash
grep -r "changeme_" compose/
```

Critical secrets:
- **LLDAP_LDAP_USER_PASS**: Admin password for LLDAP
- **LLDAP_JWT_SECRET**: 64-character hex string
- **SESSION_SECRET**: 64-character hex string for Tinyauth
- **DB_PASSWORD**: Database passwords (Immich, Vikunja, Linkwarden)
- **NEXTAUTH_SECRET**: NextAuth secret for Linkwarden
- **VIKUNJA_SERVICE_JWTSECRET**: JWT secret for Vikunja

### Q: How do I generate secure secrets?

**A:** Use these commands:

```bash
# 64-character hex (for JWT secrets, session secrets)
openssl rand -hex 32

# 32-character password (for databases)
openssl rand -base64 32 | tr -d /=+ | cut -c1-32

# 32-character hex (for API keys)
openssl rand -hex 16
```

See [Secrets Management Guide](../guides/secrets-management.md) for details.

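
As a quick sanity check that the generated values have the expected shapes, the same three commands can be captured and measured — a sketch:

```shell
# Generate and length-check the three secret shapes from above
JWT_SECRET=$(openssl rand -hex 32)                               # 64 hex chars
DB_PASSWORD=$(openssl rand -base64 32 | tr -d /=+ | cut -c1-32)  # up to 32 chars
API_KEY=$(openssl rand -hex 16)                                  # 32 hex chars
echo "jwt=${#JWT_SECRET} db=${#DB_PASSWORD} api=${#API_KEY}"
```

If `jwt` is not 64 or `api` is not 32, something in the pipeline (shell quoting, a stray newline) mangled the value before it reached the `.env` file.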
### Q: Can I change the domains from fig.systems to my own?
|
|
||||||
|
|
||||||
**A:** Yes! You need to:
|
|
||||||
1. Find and replace in all `compose.yaml` files:
|
|
||||||
```bash
|
|
||||||
find compose -name "compose.yaml" -exec sed -i 's/fig\.systems/yourdomain.com/g' {} \;
|
|
||||||
find compose -name "compose.yaml" -exec sed -i 's/edfig\.dev/yourotherdomain.com/g' {} \;
|
|
||||||
```
|
|
||||||
2. Update DNS records to point to your server
|
|
||||||
3. Update `.env` files with new URLs (e.g., `NEXTAUTH_URL`, `VIKUNJA_SERVICE_PUBLICURL`)
|
|
||||||
|
|
||||||
### Q: Do all passwords need to match?
|
|
||||||
|
|
||||||
**A:** No, but some must match:
|
|
||||||
- **LLDAP_LDAP_USER_PASS** must equal **LDAP_BIND_PASSWORD** (in tinyauth)
|
|
||||||
- **VIKUNJA_DATABASE_PASSWORD** must equal **POSTGRES_PASSWORD** (in vikunja)
|
|
||||||
- **Linkwarden POSTGRES_PASSWORD** is used in DATABASE_URL
|
|
||||||
|
|
||||||
All other passwords should be unique!
|
|
||||||
|
|
||||||
## SSL/TLS Questions
|
|
||||||
|
|
||||||
### Q: Why am I getting SSL certificate errors?
|
|
||||||
|
|
||||||
**A:** Common causes:
|
|
||||||
1. **DNS not configured**: Ensure domains point to your server
|
|
||||||
2. **Ports not accessible**: Let's Encrypt needs port 80 for HTTP challenge
|
|
||||||
3. **Rate limiting**: Let's Encrypt has rate limits (5 certs per domain/week)
|
|
||||||
4. **First startup**: Certs take a few minutes to generate
|
|
||||||
|
|
||||||
**Debug:**
|
|
||||||
```bash
|
|
||||||
docker logs traefik | grep -i error
|
|
||||||
docker logs traefik | grep -i certificate
|
|
||||||
```
|
|
||||||
|
|
||||||
### Q: How long do SSL certificates last?
|
|
||||||
|
|
||||||
**A:** Let's Encrypt certificates:
|
|
||||||
- Valid for 90 days
|
|
||||||
- Traefik auto-renews at 30 days before expiration
|
|
||||||
- Renewals happen automatically in the background
|
|
||||||
|
|
||||||
### Q: Can I use my own SSL certificates?
|
|
||||||
|
|
||||||
**A:** Yes, but it requires modifying Traefik configuration. The default Let's Encrypt setup is recommended.
|
|
||||||
|
|
||||||
## SSO Questions
|
|
||||||
|
|
||||||
### Q: What is SSO and do I need it?
|
|
||||||
|
|
||||||
**A:** SSO (Single Sign-On) lets you log in once and access all services:
|
|
||||||
- **LLDAP**: Stores users and passwords
|
|
||||||
- **Tinyauth**: Authenticates users before allowing service access
|
|
||||||
- **Benefits**: One login for all services, centralized user management
|
|
||||||
- **Optional**: Some services can work without SSO (have their own auth)
|
|
||||||
|
|
||||||
### Q: Why can't I log into SSO-protected services?
|
|
||||||
|
|
||||||
**A:** Check:
|
|
||||||
1. **LLDAP is running**: `docker ps | grep lldap`
|
|
||||||
2. **Tinyauth is running**: `docker ps | grep tinyauth`
|
|
||||||
3. **User exists in LLDAP**: Go to https://lldap.fig.systems and verify
|
|
||||||
4. **Passwords match**: LDAP_BIND_PASSWORD = LLDAP_LDAP_USER_PASS
|
|
||||||
5. **User in correct group**: Check user is in `lldap_admin` group
|
|
||||||
|
|
||||||
**Debug:**
|
|
||||||
```bash
|
|
||||||
cd compose/core/tinyauth
|
|
||||||
docker compose logs -f
|
|
||||||
```
|
|
||||||
|
|
||||||
### Q: Can I disable SSO for a service?
|
|
||||||
|
|
||||||
**A:** Yes! Comment out the middleware line in compose.yaml:
|
|
||||||
```yaml
|
|
||||||
# traefik.http.routers.servicename.middlewares: tinyauth
|
|
||||||
```
|
|
||||||
|
|
||||||
Then restart the service:
|
|
||||||
```bash
|
|
||||||
docker compose up -d
|
|
||||||
```
|
|
||||||
|
|
||||||
### Q: How do I reset my LLDAP admin password?
|
|
||||||
|
|
||||||
**A:**
|
|
||||||
1. Stop LLDAP: `cd compose/core/lldap && docker compose down`
|
|
||||||
2. Update `LLDAP_LDAP_USER_PASS` in `.env`
|
|
||||||
3. Remove the database: `rm -rf data/`
|
|
||||||
4. Restart: `docker compose up -d`
|
|
||||||
5. Recreate users in LLDAP UI
|
|
||||||
|
|
||||||
⚠️ **Warning**: This deletes all users!
|
|
||||||
|
|
||||||
## Service-Specific Questions
|
|
||||||
|
|
||||||
### Q: Jellyfin shows "Playback Error" - what's wrong?
|
|
||||||
|
|
||||||
**A:** Common causes:
|
|
||||||
1. **Media file corrupt**: Test file with VLC
|
|
||||||
2. **Permissions**: Check file ownership (`ls -la /media/movies`)
|
|
||||||
3. **Codec not supported**: Enable transcoding or use different file
|
|
||||||
4. **GPU not configured**: If using GPU, verify NVIDIA Container Toolkit
|
|
||||||
|
|
||||||
### Q: Immich won't upload photos - why?
|
|
||||||
|
|
||||||
**A:** Check:
|
|
||||||
1. **Database connected**: `docker logs immich_postgres`
|
|
||||||
2. **Upload directory writable**: Check permissions on `./upload`
|
|
||||||
3. **Disk space**: `df -h`
|
|
||||||
4. **File size limits**: Check browser console for errors
|
|
||||||
|
|
||||||
### Q: Why isn't Homarr showing my services?
|
|
||||||
|
|
||||||
**A:** Homarr needs:
|
|
||||||
1. **Docker socket access**: Volume mount `/var/run/docker.sock`
|
|
||||||
2. **Labels on services**: Each service needs `homarr.name` label
|
|
||||||
3. **Same network**: Homarr must be on `homelab` network
|
|
||||||
4. **Time to detect**: Refresh page or wait 30 seconds

### Q: Backrest shows "Repository not initialized" - what do I do?

**A:**

1. Go to https://backup.fig.systems
2. Click "Add Repository"
3. Configure Backblaze B2 settings
4. Click "Initialize Repository"

See [Backup Guide](../services/backup.md) for detailed setup.

### Q: Sonarr/Radarr can't find anything - help!

**A:**

1. **Add indexers**: Settings → Indexers → Add indexer
2. **Configure download client**: Settings → Download Clients → Add
3. **Set root folder**: Series/Movies → Add Root Folder → `/media/tv` or `/media/movies`
4. **Test indexers**: Settings → Indexers → Test

### Q: qBittorrent shows "Unauthorized" - what's the password?

**A:** Default credentials:

- Username: `admin`
- Password: `adminadmin`

⚠️ **Change this immediately** in qBittorrent settings!

## Media Questions

### Q: Where should I put my media files?

**A:** Use the `/media` directory structure:

- Movies: `/media/movies/Movie Name (Year)/movie.mkv`
- TV: `/media/tv/Show Name/Season 01/episode.mkv`
- Music: `/media/music/Artist/Album/song.flac`
- Photos: `/media/photos/` (any structure)
- Books: `/media/books/` (any structure)
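
The top-level layout above can be created in one command. A sketch using a scratch root so it can run anywhere - on the real host the root would be `/media` itself:

```shell
# ROOT is a stand-in; on the real host use ROOT=/media
ROOT="${ROOT:-/tmp/media-demo}"
mkdir -p "$ROOT/movies" "$ROOT/tv" "$ROOT/music" "$ROOT/photos" "$ROOT/books"
ls "$ROOT"
```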

### Q: How do I add more media storage?

**A:**

1. Mount the additional drive to `/media2` (or any path)
2. Update compose files to include the new volume:

   ```yaml
   volumes:
     - /media:/media:ro
     - /media2:/media2:ro  # Add this
   ```

3. Restart the service: `docker compose up -d`
4. Add the new library in the service UI

### Q: Can Sonarr/Radarr automatically download shows/movies?

**A:** Yes! That's their purpose:

1. Add indexers (for searching)
2. Add a download client (SABnzbd or qBittorrent)
3. Add a series/movie
4. Enable monitoring
5. Sonarr/Radarr will search, download, and organize automatically

### Q: How do I enable hardware transcoding in Jellyfin?

**A:** See [GPU Setup Guide](../guides/gpu-setup.md) for full instructions.

Quick steps:

1. Install the NVIDIA Container Toolkit on the host
2. Uncomment the GPU sections in `jellyfin/compose.yaml`
3. Restart Jellyfin
4. Enable in Jellyfin: Dashboard → Playback → Hardware Acceleration → NVIDIA NVENC

## Network Questions

### Q: Can I access services only from my local network?

**A:** Yes, don't expose ports 80/443 to the internet:

1. Use a firewall to block external access
2. Use local DNS (Pi-hole, AdGuard Home)
3. Point domains to a local IP (192.168.x.x)
4. Use self-signed certs or no HTTPS

**Or** use Traefik's IP allowlist middleware.
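
That middleware looks roughly like this as Docker labels (a hedged sketch - `ipallowlist` is the Traefik v3 name, v2 calls it `ipwhitelist`, and the middleware/router names and CIDR ranges are examples):

```yaml
services:
  myservice:
    labels:
      # Define an allowlist middleware accepting only LAN ranges
      - traefik.http.middlewares.lan-only.ipallowlist.sourcerange=192.168.0.0/16,10.0.0.0/8
      # Attach it to this service's router
      - traefik.http.routers.myservice.middlewares=lan-only
```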

### Q: Can I use a VPN with these services?

**A:** Yes, options:

1. **VPN on download clients**: Add a VPN container for qBittorrent/SABnzbd
2. **VPN to access the homelab**: Use WireGuard/Tailscale to access from anywhere
3. **VPN for the entire server**: All traffic goes through the VPN (not recommended)

### Q: Why can't I access services from outside my network?

**A:** Check:

1. **Port forwarding**: Ports 80 and 443 forwarded to the homelab server
2. **Firewall**: Allow ports 80/443 through the firewall
3. **DNS**: Domains point to your public IP
4. **ISP**: Some ISPs block ports 80/443 (use Cloudflare Tunnel)

## Backup Questions

### Q: What should I backup?

**A:** Priority order:

1. **High**: Immich photos (`compose/media/frontend/immich/upload`)
2. **High**: Configuration files (all `.env` files, compose files)
3. **Medium**: Service data directories (`./config`, `./data` in each service)
4. **Low**: Media files (usually have a source elsewhere)
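
Priority 2 is small enough to archive in one pass. A hedged sketch - `SRC` stands in for the repo root, and the demo files below exist only so the example is self-contained:

```shell
# SRC is a stand-in for the homelab repo root
SRC="${SRC:-/tmp/homelab-demo}"
mkdir -p "$SRC/jellyfin"
touch "$SRC/jellyfin/.env" "$SRC/jellyfin/compose.yaml"   # demo files only

# Archive only .env and compose files, preserving the directory layout
( cd "$SRC" && find . -name '.env' -o -name 'compose.yaml' \
    | tar -czf /tmp/configs-backup.tgz -T - )

# List what was captured
tar -tzf /tmp/configs-backup.tgz
```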

### Q: How do I restore from backup?

**A:** See [Backup Operations Guide](../operations/backups.md).

Quick steps:

1. Install a fresh homelab setup
2. Restore `.env` files and configs
3. Use Backrest to restore data
4. Restart services

### Q: Does Backrest backup everything automatically?

**A:** Only what you configure:

- Default: Immich photos and homelab configs
- Add more paths in `backrest/compose.yaml` volumes
- Create backup plans in the Backrest UI for each path

## Performance Questions

### Q: Services are running slow - how do I optimize?

**A:**

1. **Check resources**: `docker stats` - are you out of RAM/CPU?
2. **Reduce services**: Stop unused services
3. **Use SSD**: Move Docker to SSD storage
4. **Add RAM**: Minimum 8GB, 16GB+ recommended
5. **Enable GPU**: For Jellyfin and Immich

### Q: Docker is using too much disk space - what do I do?

**A:**

```bash
# Check Docker disk usage
docker system df

# Clean up
# WARNING: This removes all stopped containers and unused volumes!
docker system prune -a --volumes
```

A better approach is to clean specific services:

```bash
cd compose/path/to/service
docker compose down
docker volume rm $(docker volume ls -q | grep servicename)
docker compose up -d
```

### Q: How do I limit RAM/CPU for a service?

**A:** Add resource limits to compose.yaml:

```yaml
services:
  servicename:
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 4G
        reservations:
          memory: 2G
```

## Update Questions

### Q: How do I update a service?

**A:**

```bash
cd compose/path/to/service
docker compose pull
docker compose up -d
```

See [Updates Guide](../operations/updates.md) for details.

### Q: How often should I update?

**A:**

- **Security updates**: Weekly
- **Feature updates**: Monthly
- **Major versions**: When stable

Use Watchtower for automatic updates (optional).
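
A minimal Watchtower setup looks like this (a hedged sketch using the `containrrr/watchtower` image; the poll interval is an example value):

```yaml
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # needed to inspect/restart containers
    environment:
      - WATCHTOWER_POLL_INTERVAL=86400  # check once a day (example)
    restart: unless-stopped
```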

### Q: Will updating break my configuration?

**A:** Usually no, but:

- Always backup before major updates
- Check release notes for breaking changes
- Test in a staging environment if critical

## Security Questions

### Q: Is this setup secure?

**A:** Reasonably secure with best practices:

- ✅ SSL/TLS encryption
- ✅ SSO authentication
- ✅ Secrets in environment files
- ⚠️ Some services exposed to the internet
- ⚠️ Depends on keeping services updated

See [Security Guide](../guides/security.md) for hardening.

### Q: Should I expose my homelab to the internet?

**A:** Depends on your risk tolerance:

- **Yes**: Convenient access from anywhere, Let's Encrypt works
- **No**: More secure, requires a VPN for external access
- **Hybrid**: Expose only essential services, use a VPN for sensitive ones

### Q: What if someone gets my LLDAP password?

**A:** They can access all SSO-protected services. Mitigations:

- Use strong, unique passwords
- Enable 2FA where supported
- Review LLDAP access logs
- Use fail2ban to block brute force
- Consider VPN-only access

## Troubleshooting

For specific error messages and debugging, see:

- [Common Issues](./common-issues.md)
- [Debugging Guide](./debugging.md)

Still stuck? Check:

1. Service logs: `docker compose logs`
2. Traefik logs: `docker logs traefik`
3. Container status: `docker ps -a`
4. Network connectivity: `docker network inspect homelab`

@ -1,18 +0,0 @@
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama (Local)",
      "options": {
        "baseURL": "http://localhost:11434/v1"
      },
      "models": {
        "qwen2.5-coder:7b": {
          "name": "Qwen2.5-Coder 7B (Coding)",
          "context_length": 32768
        }
      }
    }
  }
}

@ -24,12 +24,15 @@ PGID=1000
 # APP_SECRET=changeme_please_set_random_secret
 # API_KEY=changeme_your_api_key_here
 
-# Email/SMTP Configuration (if needed)
-# SMTP_HOST=smtp.gmail.com
-# SMTP_PORT=587
-# SMTP_USER=your-email@gmail.com
-# SMTP_PASSWORD=changeme_your_app_password
-# SMTP_FROM=Service <noreply@fig.systems>
+# Email/SMTP Configuration (Mailgun)
+SMTP_HOST=smtp.mailgun.org
+SMTP_PORT=587
+SMTP_USER=noreply@fig.systems
+SMTP_PASSWORD=
+SMTP_FROM=Service <noreply@fig.systems>
+
+# Optional SMTP settings
+# SMTP_TLS=true
+# SMTP_STARTTLS=true
 
 # Storage Configuration
 # STORAGE_PATH=/data

@ -1,506 +0,0 @@

# OpenTofu Infrastructure as Code for Proxmox

This directory contains OpenTofu (Terraform) configurations for managing Proxmox infrastructure.

## What is OpenTofu?

OpenTofu is an open-source fork of Terraform, providing Infrastructure as Code (IaC) capabilities. It allows you to:

- 📝 **Define infrastructure as code** - Version-control your infrastructure
- 🔄 **Automate provisioning** - Create VMs/containers with one command
- 🎯 **Consistency** - Same config = same result every time
- 🔍 **Plan changes** - Preview changes before applying
- 🗑️ **Easy cleanup** - Destroy infrastructure when done

## Why OpenTofu over Terraform?

- ✅ **Truly Open Source** - MPL 2.0 license (vs. Terraform's BSL)
- ✅ **Community Driven** - Not controlled by a single company
- ✅ **Terraform Compatible** - Drop-in replacement
- ✅ **Active Development** - Regular updates and features

## Quick Start

### 1. Install OpenTofu

**Linux/macOS:**

```bash
# Install via the official installer script
curl --proto '=https' --tlsv1.2 -fsSL https://get.opentofu.org/install-opentofu.sh | sh

# Or via Homebrew (macOS/Linux)
brew install opentofu
```

**Verify installation:**

```bash
tofu version
```

### 2. Configure Proxmox API

**Create an API token in Proxmox:**

1. Log in to the Proxmox web UI
2. Datacenter → Permissions → API Tokens
3. Add a new token:
   - User: `root@pam`
   - Token ID: `terraform`
   - Privilege Separation: unchecked (for full access)
4. Save the token ID and secret!

**Set environment variables:**

```bash
export PM_API_URL="https://proxmox.local:8006/api2/json"
export PM_API_TOKEN_ID="root@pam!terraform"
export PM_API_TOKEN_SECRET="your-secret-here"

# Skip TLS verification (optional; set to true for self-signed certs)
export PM_TLS_INSECURE=true
```

### 3. Choose Your Use Case

We provide examples for common scenarios:

| Example | Description | Best For |
|---------|-------------|----------|
| [single-vm](./proxmox-examples/single-vm/) | Simple Ubuntu VM | Learning, testing |
| [docker-host](./proxmox-examples/docker-host/) | VM for Docker containers | Production homelab |
| [lxc-containers](./proxmox-examples/lxc-containers/) | Lightweight LXC containers | Resource efficiency |
| [multi-node](./proxmox-examples/multi-node/) | Multiple VMs/services | Complex deployments |
| [cloud-init](./proxmox-examples/cloud-init/) | Cloud-init automation | Production VMs |

## Directory Structure

```
terraform/
├── README.md                  # This file
├── proxmox-examples/
│   ├── single-vm/             # Simple VM example
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── terraform.tfvars
│   ├── docker-host/           # Docker host VM
│   ├── lxc-containers/        # LXC container examples
│   ├── multi-node/            # Multiple VM deployment
│   └── cloud-init/            # Cloud-init examples
└── modules/                   # Reusable modules (future)
```

## Basic Workflow

### Initialize

```bash
cd proxmox-examples/single-vm
tofu init
```

### Plan

```bash
tofu plan
```

Preview changes before applying.

### Apply

```bash
tofu apply
```

Review the plan and type `yes` to proceed.

### Destroy

```bash
tofu destroy
```

Removes all managed resources.

## Common Commands

```bash
# Initialize and download providers
tofu init

# Validate configuration syntax
tofu validate

# Format code to standard style
tofu fmt

# Preview changes
tofu plan

# Apply changes
tofu apply

# Apply without confirmation (careful!)
tofu apply -auto-approve

# Show current state
tofu show

# List all resources
tofu state list

# Destroy a specific resource
tofu destroy -target=proxmox_vm_qemu.vm

# Destroy everything
tofu destroy
```

## Provider Configuration

### Proxmox Provider

```hcl
terraform {
  required_providers {
    proxmox = {
      source  = "bpg/proxmox"
      version = "~> 0.50"
    }
  }
}

provider "proxmox" {
  endpoint = var.pm_api_url
  # Token format is "user@realm!tokenid=secret"; pm_token_id already contains the "!"
  api_token = "${var.pm_token_id}=${var.pm_token_secret}"
  insecure  = true # For self-signed certs

  ssh {
    agent = true
  }
}
```

## Best Practices

### 1. Use Variables

Don't hardcode values:

```hcl
# Bad
target_node = "pve"

# Good
target_node = var.proxmox_node
```

### 2. Use terraform.tfvars

Store configuration separately:

```hcl
# terraform.tfvars
proxmox_node = "pve"
vm_name      = "docker-host"
vm_cores     = 4
vm_memory    = 8192
```

### 3. Version Control

**Commit:**

- ✅ `*.tf` files
- ✅ `*.tfvars` (if no secrets)
- ✅ `.terraform.lock.hcl`

**DO NOT commit:**

- ❌ `terraform.tfstate`
- ❌ `terraform.tfstate.backup`
- ❌ `.terraform/` directory
- ❌ Secrets/passwords

Use `.gitignore`:

```
.terraform/
*.tfstate
*.tfstate.backup
# If terraform.tfvars contains secrets:
*.tfvars
```

### 4. Use Modules

For reusable components:

```hcl
module "docker_vm" {
  source = "./modules/docker-host"

  vm_name = "docker-01"
  cores   = 4
  memory  = 8192
}
```

### 5. State Management

**Local State (default):**

- Simple, single-user
- State in `terraform.tfstate`

**Remote State (recommended for teams):**

```hcl
terraform {
  backend "s3" {
    bucket = "my-terraform-state"
    key    = "proxmox/terraform.tfstate"
    region = "us-east-1"
  }
}
```

## Example Use Cases

### Homelab Docker Host

Provision a VM optimized for Docker:

- 4-8 CPU cores
- 8-16GB RAM
- 50GB+ disk
- Ubuntu Server 24.04
- Docker pre-installed via cloud-init

See: `proxmox-examples/docker-host/`

### Development Environment

Multiple VMs for testing:

- Web server VM
- Database VM
- Application VM
- All networked together

See: `proxmox-examples/multi-node/`

### LXC Containers

Lightweight containers for services:

- Lower overhead than VMs
- Fast startup
- Resource efficient

See: `proxmox-examples/lxc-containers/`

## Proxmox Provider Resources

### Virtual Machines (QEMU)

```hcl
resource "proxmox_vm_qemu" "vm" {
  name        = "my-vm"
  target_node = "pve"

  clone  = "ubuntu-cloud-template" # Template to clone
  cores  = 2
  memory = 2048

  disk {
    size    = "20G"
    storage = "local-lvm"
  }

  network {
    model  = "virtio"
    bridge = "vmbr0"
  }
}
```

### LXC Containers

```hcl
resource "proxmox_lxc" "container" {
  hostname    = "my-container"
  target_node = "pve"

  ostemplate = "local:vztmpl/ubuntu-22.04-standard_22.04-1_amd64.tar.gz"
  cores      = 1
  memory     = 512

  rootfs {
    storage = "local-lvm"
    size    = "8G"
  }

  network {
    name   = "eth0"
    bridge = "vmbr0"
    ip     = "dhcp"
  }
}
```

### Cloud-Init

```hcl
resource "proxmox_vm_qemu" "cloudinit_vm" {
  # ... basic config ...

  ciuser     = "ubuntu"
  cipassword = var.vm_password
  sshkeys    = file("~/.ssh/id_rsa.pub")

  ipconfig0 = "ip=dhcp"
}
```

## Troubleshooting

### SSL Certificate Errors

```bash
export PM_TLS_INSECURE=true
```

Or add to the provider:

```hcl
provider "proxmox" {
  insecure = true
}
```

### API Permission Errors

Ensure the API token has the necessary permissions:

```bash
# In the Proxmox shell
pveum acl modify / -token 'root@pam!terraform' -role Administrator
```

### VM Clone Errors

Ensure the template exists:

```bash
# List VMs
qm list

# Check the template flag
qm config 9000
```

### Timeout Errors

Increase the timeouts:

```hcl
resource "proxmox_vm_qemu" "vm" {
  # ...
  timeout_create = "30m"
  timeout_clone  = "30m"
}
```

## Migration from Terraform

OpenTofu is a drop-in replacement:

```bash
# Point the old command name at the new binary
alias terraform=tofu

# Or replace commands directly, e.g.:
# terraform plan  ->  tofu plan
```

State files are compatible - no conversion needed!

## Advanced Topics

### Custom Cloud Images

1. Download a cloud image
2. Create a VM template
3. Use cloud-init for customization

See: `proxmox-examples/cloud-init/`

### Network Configuration

```hcl
# VLAN tagging
network {
  model  = "virtio"
  bridge = "vmbr0"
  tag    = 100 # VLAN 100
}

# Multiple NICs
network {
  model  = "virtio"
  bridge = "vmbr0"
}
network {
  model  = "virtio"
  bridge = "vmbr1"
}
```

### Storage Options

```hcl
# Local LVM
disk {
  storage = "local-lvm"
  size    = "50G"
  type    = "scsi"
}

# NFS/CIFS
disk {
  storage = "nfs-storage"
  size    = "100G"
}

# Multiple disks
disk {
  slot    = 0
  size    = "50G"
  storage = "local-lvm"
}
disk {
  slot    = 1
  size    = "100G"
  storage = "data"
}
```

## Recommended Resources

### Providers

- **[bpg/proxmox](https://registry.terraform.io/providers/bpg/proxmox)** - Most feature-complete (recommended)
- **[Telmate/proxmox](https://registry.terraform.io/providers/Telmate/proxmox)** - Legacy, still works

### Learning

- [OpenTofu Docs](https://opentofu.org/docs/)
- [Proxmox Provider Docs](https://registry.terraform.io/providers/bpg/proxmox/latest/docs)
- [Terraform/OpenTofu Tutorials](https://developer.hashicorp.com/terraform/tutorials)

### Tools

- **[tflint](https://github.com/terraform-linters/tflint)** - Linting
- **[terraform-docs](https://github.com/terraform-docs/terraform-docs)** - Generate docs
- **[infracost](https://www.infracost.io/)** - Cost estimation
- **[terragrunt](https://terragrunt.gruntwork.io/)** - Wrapper for DRY configs

## Next Steps

1. **Start Simple:** Try `proxmox-examples/single-vm/`
2. **Learn the Basics:** Get familiar with plan/apply/destroy
3. **Expand:** Try docker-host or multi-node
4. **Customize:** Adapt the examples to your needs
5. **Automate:** Integrate with CI/CD

## Getting Help

- Check the example READMEs in each directory
- Review the Proxmox provider docs
- OpenTofu community Discord
- Ask in r/Proxmox or r/selfhosted

Happy Infrastructure as Code! 🚀

@ -1,34 +0,0 @@
# Terraform state files (unencrypted)
*.tfstate
*.tfstate.backup
*.tfstate.*.backup

# Keep encrypted state files
!*.tfstate.enc

# Terraform directory
.terraform/
.terraform.lock.hcl

# SOPS configuration with your private key
.sops.yaml

# Actual terraform.tfvars (may contain secrets)
terraform.tfvars

# Keep encrypted version
!terraform.tfvars.enc

# Crash logs
crash.log
crash.*.log

# Override files
override.tf
override.tf.json
*_override.tf
*_override.tf.json

# Terraform RC files
.terraformrc
terraform.rc

@ -1,34 +0,0 @@
# SOPS Configuration for Terraform State Encryption
#
# Setup Instructions:
# 1. Install age and sops:
#    - Debian/Ubuntu: sudo apt install age
#    - macOS: brew install age sops
#    - Manual: https://github.com/FiloSottile/age/releases
#              https://github.com/getsops/sops/releases
#
# 2. Generate an age key:
#    mkdir -p ~/.sops
#    age-keygen -o ~/.sops/homelab-terraform.txt
#
# 3. Copy this file:
#    cp .sops.yaml.example .sops.yaml
#
# 4. Replace YOUR_AGE_PUBLIC_KEY_HERE with the public key from step 2
#    (the line starting with "age1...")
#
# 5. DO NOT commit .sops.yaml to git (it's in .gitignore)
#    Keep your private key (~/.sops/homelab-terraform.txt) secure!

creation_rules:
  # Encrypt all .tfstate files with age
  - path_regex: \.tfstate$
    age: YOUR_AGE_PUBLIC_KEY_HERE

  # Encrypt any .secret files
  - path_regex: \.secret$
    age: YOUR_AGE_PUBLIC_KEY_HERE

  # Encrypt terraform.tfvars (contains API tokens)
  - path_regex: terraform\.tfvars$
    age: YOUR_AGE_PUBLIC_KEY_HERE

@ -1,871 +0,0 @@
# Docker Host VM with OpenTofu

This configuration creates a VM optimized for running Docker containers in your homelab, with support for GPU passthrough and NFS media mounts.

## What This Creates

- ✅ Ubuntu or AlmaLinux VM (from a cloud template)
- ✅ Docker & Docker Compose installed
- ✅ Homelab network created
- ✅ /media directory structure
- ✅ SSH key authentication
- ✅ Automatic updates enabled
- ✅ Optional GPU passthrough (NVIDIA GTX 1070)
- ✅ Optional NFS mounts from the Proxmox host

## Prerequisites

### 1. Create Ubuntu Cloud Template

First, create a cloud-init enabled template in Proxmox:

```bash
# SSH to the Proxmox server
ssh root@proxmox.local

# Download the Ubuntu cloud image
wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img

# Create the VM
qm create 9000 --name ubuntu-cloud-template --memory 2048 --net0 virtio,bridge=vmbr0

# Import the disk
qm importdisk 9000 jammy-server-cloudimg-amd64.img local-lvm

# Attach the disk
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0

# Add a cloud-init drive
qm set 9000 --ide2 local-lvm:cloudinit

# Set the boot disk
qm set 9000 --boot c --bootdisk scsi0

# Add a serial console
qm set 9000 --serial0 socket --vga serial0

# Convert to a template
qm template 9000

# Clean up
rm jammy-server-cloudimg-amd64.img
```

**Or create an AlmaLinux 9.6 cloud template:**

```bash
# SSH to the Proxmox server
ssh root@proxmox.local

# Download the AlmaLinux cloud image
wget https://repo.almalinux.org/almalinux/9/cloud/x86_64/images/AlmaLinux-9-GenericCloud-latest.x86_64.qcow2

# Create the VM
qm create 9001 --name almalinux-cloud-template --memory 2048 --net0 virtio,bridge=vmbr0

# Import the disk
qm importdisk 9001 AlmaLinux-9-GenericCloud-latest.x86_64.qcow2 local-lvm

# Attach the disk
qm set 9001 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9001-disk-0

# Add a cloud-init drive
qm set 9001 --ide2 local-lvm:cloudinit

# Set the boot disk
qm set 9001 --boot c --bootdisk scsi0

# Add a serial console
qm set 9001 --serial0 socket --vga serial0

# Convert to a template
qm template 9001

# Clean up
rm AlmaLinux-9-GenericCloud-latest.x86_64.qcow2
```

### 2. (Optional) Enable GPU Passthrough

**For an NVIDIA GTX 1070 on an AMD Ryzen CPU:**

```bash
# On the Proxmox host, edit the GRUB config
nano /etc/default/grub

# Add to GRUB_CMDLINE_LINUX_DEFAULT:
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"

# Update GRUB
update-grub

# Load the required kernel modules
nano /etc/modules

# Add these lines:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

# Blacklist NVIDIA drivers on the host
nano /etc/modprobe.d/blacklist.conf

# Add:
blacklist nouveau
blacklist nvidia
blacklist nvidiafb
blacklist nvidia_drm

# Update the initramfs
update-initramfs -u -k all

# Reboot the Proxmox host
reboot

# After the reboot, verify IOMMU is enabled:
dmesg | grep -e DMAR -e IOMMU

# Find the GPU PCI ID:
lspci | grep -i nvidia
# Example output: 01:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1070]
# Use: 0000:01:00 (note the format)
```

### 3. (Optional) Configure NFS Server on Proxmox

**Export media directories from the Proxmox host:**

```bash
# On the Proxmox host, install the NFS server
apt update
apt install nfs-kernel-server -y

# Create /etc/exports entries
nano /etc/exports

# Add (replace 192.168.1.0/24 with your network):
/data/media/audiobooks 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
/data/media/books 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
/data/media/comics 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
/data/media/complete 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
/data/media/downloads 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
/data/media/homemovies 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
/data/media/incomplete 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
/data/media/movies 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
/data/media/music 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
/data/media/photos 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
/data/media/tv 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)

# Export the NFS shares
exportfs -ra

# Enable and start the NFS server
systemctl enable nfs-server
systemctl start nfs-server

# Verify the exports
showmount -e localhost
```

### 4. Create API Token

In Proxmox UI:

1. Datacenter → Permissions → API Tokens
2. Add → User: `root@pam`, Token ID: `terraform`
3. Uncheck "Privilege Separation"
4. Save the secret (it is shown only once!)
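
The same token can also be created from a Proxmox host shell with `pveum` (a hedged equivalent of the UI steps above; check `pveum help user token add` if your Proxmox VE version differs):

```shell
# Create the "terraform" API token for root@pam without privilege
# separation; the secret is printed once, so save it immediately.
pveum user token add root@pam terraform --privsep 0
```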

### 5. Install OpenTofu

```bash
# Linux/macOS
curl --proto '=https' --tlsv1.2 -fsSL https://get.opentofu.org/install-opentofu.sh | sh

# Verify
tofu version
```

## Quick Start

### 1. Configure Variables

```bash
cd terraform/proxmox-examples/docker-host

# Copy example config
cp terraform.tfvars.example terraform.tfvars

# Edit with your values
nano terraform.tfvars
```

**Required changes:**
- `pm_api_token_secret` - Your Proxmox API token secret
- `pm_ssh_username` - SSH username for the Proxmox host (usually "root")
- `vm_ssh_keys` - Your SSH public key
- `vm_password` - Set a secure password
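
Taken together, a minimal `terraform.tfvars` might look like this (every value below is an illustrative placeholder, not a working secret or key):

```hcl
pm_api_token_secret = "00000000-0000-0000-0000-000000000000"  # placeholder
pm_ssh_username     = "root"
vm_ssh_keys         = ["ssh-ed25519 AAAA... user@laptop"]     # placeholder key
vm_password         = "change-me"
```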

**Important:** Before running Terraform, ensure you have SSH access to the Proxmox host:

**Option A - Root SSH (if enabled):**
```bash
# Set in terraform.tfvars
pm_ssh_username = "root"

# Set up key-based auth
ssh-copy-id root@proxmox.local

# Start ssh-agent and add your key
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa  # or id_ed25519, etc.

# Verify
ssh root@proxmox.local "echo 'SSH works!'"
```

**Option B - Non-root user with sudo (recommended for security):**
```bash
# Set in terraform.tfvars
pm_ssh_username = "eduardo"  # Your username

# Set up key-based auth for your user
ssh-copy-id eduardo@proxmox.local

# On Proxmox host, ensure your user can write to the snippets directory
ssh eduardo@proxmox.local
sudo usermod -aG www-data eduardo  # Add to www-data group
sudo chmod g+w /var/lib/vz/snippets
sudo chown root:www-data /var/lib/vz/snippets

# OR set up passwordless sudo scoped to snippet uploads only
sudo visudo -f /etc/sudoers.d/terraform-snippets
# Add this line (replace 'eduardo' with your username):
# eduardo ALL=(ALL) NOPASSWD: /usr/bin/tee /var/lib/vz/snippets/*

# Exit Proxmox and test locally
exit

# Start ssh-agent and add your key
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa  # or id_ed25519, etc.

# Verify SSH and write access
ssh eduardo@proxmox.local "ls -la /var/lib/vz/snippets"
```

**Optional changes:**
- `vm_name` - Change VM name
- `vm_cores` / `vm_memory` - Adjust resources
- `vm_ip_address` - Set static IP (or keep DHCP)
- `vm_os_type` - Choose "ubuntu", "almalinux", or "debian"
- `template_vm_id` - Use 9001 for the AlmaLinux template
- `enable_gpu_passthrough` - Set to true for GPU support
- `gpu_pci_id` - Your GPU PCI ID (find with `lspci`)
- `mount_media_directories` - Set to true for NFS mounts
- `proxmox_host_ip` - IP for the NFS server (Proxmox host)
- `media_source_path` - Path on Proxmox host (default: /data/media)

### 2. Initialize

```bash
tofu init
```

### 3. Plan

```bash
tofu plan
```

Review what will be created.

### 4. Apply

```bash
tofu apply
```

Type `yes` to confirm.

### 5. Connect

```bash
# Get SSH command from output
tofu output ssh_command

# Or manually
ssh ubuntu@<VM-IP>

# Verify Docker
docker --version
docker ps
docker network ls | grep homelab
```

## Configuration Options

### Resource Sizing

**Light workload (1-5 containers):**
```hcl
vm_cores  = 2
vm_memory = 4096
disk_size = "30"
```

**Medium workload (5-15 containers):**
```hcl
vm_cores  = 4
vm_memory = 8192
disk_size = "50"
```

**Heavy workload (15+ containers):**
```hcl
vm_cores  = 8
vm_memory = 16384
disk_size = "100"
```

### Network Configuration

**DHCP (easiest):**
```hcl
vm_ip_address = "dhcp"
```

**Static IP:**
```hcl
vm_ip_address = "192.168.1.100"
vm_ip_netmask = 24
vm_gateway    = "192.168.1.1"
```

### Multiple SSH Keys

```hcl
vm_ssh_keys = [
  "ssh-rsa AAAAB3... user1@laptop",
  "ssh-rsa AAAAB3... user2@desktop"
]
```

### GPU Passthrough Configuration

**Enable NVIDIA GTX 1070 for Jellyfin, Ollama, Immich:**

```hcl
# Must complete Proxmox host GPU passthrough setup first
enable_gpu_passthrough = true
gpu_pci_id             = "0000:01:00"  # Find with: lspci | grep -i nvidia

# Use AlmaLinux for better GPU support
vm_os_type     = "almalinux"
template_vm_id = 9001

# Allocate sufficient resources
vm_cores  = 8
vm_memory = 24576  # 24GB
```

**Verify GPU in VM after deployment:**

```bash
ssh ubuntu@<VM-IP>

# Install NVIDIA drivers (AlmaLinux)
sudo dnf install -y epel-release
sudo dnf config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel9/x86_64/cuda-rhel9.repo
sudo dnf install -y nvidia-driver nvidia-container-toolkit

# Verify
nvidia-smi
docker run --rm --gpus all nvidia/cuda:12.3.0-base-ubuntu22.04 nvidia-smi
```

### NFS Media Mounts Configuration

**Mount Proxmox host media directories to VM:**

```hcl
# Enable NFS mounts from Proxmox host
mount_media_directories = true

# Proxmox host IP (not API URL)
proxmox_host_ip = "192.168.1.100"

# Source path on Proxmox host
media_source_path = "/data/media"

# Mount point in VM
media_mount_path = "/media"
```

**After deployment, verify mounts:**

```bash
ssh ubuntu@<VM-IP>

# Check mounts
df -h | grep /media
ls -la /media

# Expected directories:
# /media/audiobooks, /media/books, /media/comics,
# /media/complete, /media/downloads, /media/homemovies,
# /media/incomplete, /media/movies, /media/music,
# /media/photos, /media/tv
```

### Operating System Selection

**AlmaLinux 9.6 (Recommended for GPU):**

```hcl
vm_os_type     = "almalinux"
template_vm_id = 9001
vm_username    = "almalinux"  # Default AlmaLinux user
```

**Ubuntu 22.04 LTS:**

```hcl
vm_os_type     = "ubuntu"
template_vm_id = 9000
vm_username    = "ubuntu"
```

**Key differences:**
- AlmaLinux: closer fit to the RHEL ecosystem, SELinux, dnf package manager
- Ubuntu: wider community support, apt package manager
- Both support Docker, GPU passthrough, and NFS mounts

## Post-Deployment

### Deploy Homelab Services

```bash
# SSH to VM
ssh ubuntu@<VM-IP>

# Clone homelab repo (if not auto-cloned)
git clone https://github.com/efigueroa/homelab.git
cd homelab

# Deploy services
cd compose/core/traefik
docker compose up -d

cd ../lldap
docker compose up -d

# Continue with other services...
```

### Verify Setup

```bash
# Check Docker
docker --version
docker compose version

# Check network
docker network ls | grep homelab

# Check media directories and NFS mounts
ls -la /media
df -h | grep /media

# If GPU passthrough is enabled
nvidia-smi
lspci | grep -i nvidia

# Check system resources
htop
df -h
```

## Managing the VM

### View State

```bash
tofu show
tofu state list
```

### Update VM

1. Edit `terraform.tfvars`:
```hcl
vm_cores  = 8      # Increase from 4
vm_memory = 16384  # Increase from 8192
```

2. Apply changes:
```bash
tofu plan
tofu apply
```

**Note:** Some changes require a VM restart.

### Destroy VM

```bash
# Backup any data first!
tofu destroy
```

Type `yes` to confirm deletion.

## Troubleshooting

### Datastore Does Not Support Snippets

Error: `the datastore "local" does not support content type "snippets"`

**Cause:** The storage you specified doesn't have snippets enabled.

**Solution 1 - Enable snippets on existing storage:**
```bash
# On Proxmox host, check current content types
pvesm status

# Enable snippets on local storage
pvesm set local --content backup,iso,vztmpl,snippets

# Verify
pvesm status | grep local
```

**Solution 2 - Create dedicated directory storage:**
```bash
# On Proxmox host
# Create directory for snippets
mkdir -p /var/lib/vz/snippets

# Add directory storage via Proxmox UI:
# Datacenter → Storage → Add → Directory
# ID: local-snippets
# Directory: /var/lib/vz/snippets
# Content: Snippets

# Or via CLI:
pvesm add dir local-snippets --path /var/lib/vz/snippets --content snippets

# Update terraform.tfvars:
# snippets_storage = "local-snippets"
```

### SSH Authentication Failed

Error: `failed to open SSH client: unable to authenticate user "" over SSH`

**Cause:** The Proxmox provider needs SSH access to upload cloud-init files. This error occurs when:
1. The SSH username is not set
2. The SSH key is not in ssh-agent
3. The SSH key is not authorized on the Proxmox host

**Solution - Complete SSH Setup:**

**For root user:**
```bash
# 1. Generate an SSH key if you don't have one
ssh-keygen -t ed25519 -C "terraform@homelab"

# 2. Copy to Proxmox host
ssh-copy-id root@10.0.0.169

# 3. Start ssh-agent (REQUIRED!)
eval "$(ssh-agent -s)"

# 4. Add your key to ssh-agent (REQUIRED!)
ssh-add ~/.ssh/id_ed25519

# 5. Test SSH connection
ssh root@10.0.0.169 "echo 'SSH works!'"

# 6. Set in terraform.tfvars
pm_ssh_username = "root"

# 7. Run terraform
./scripts/tf apply
```

**For non-root user (if root SSH is disabled):**
```bash
# 1. Generate an SSH key if you don't have one
ssh-keygen -t ed25519 -C "terraform@homelab"

# 2. Copy to Proxmox host (use your username)
ssh-copy-id eduardo@10.0.0.169

# 3. Configure write permissions on Proxmox
ssh eduardo@10.0.0.169
sudo usermod -aG www-data eduardo
sudo chmod g+w /var/lib/vz/snippets
sudo chown root:www-data /var/lib/vz/snippets
exit

# 4. Start ssh-agent (REQUIRED!)
eval "$(ssh-agent -s)"

# 5. Add your key to ssh-agent (REQUIRED!)
ssh-add ~/.ssh/id_ed25519

# 6. Test SSH and permissions
ssh eduardo@10.0.0.169 "touch /var/lib/vz/snippets/test.txt && rm /var/lib/vz/snippets/test.txt"

# 7. Set in terraform.tfvars
pm_ssh_username = "eduardo"  # Your username

# 8. Run terraform
./scripts/tf apply
```

**Common Issues:**

- **ssh-agent not running:** Run `eval "$(ssh-agent -s)"` in your current terminal
- **Key not added:** Run `ssh-add ~/.ssh/id_ed25519` (or `id_rsa`)
- **Wrong username:** Check that `pm_ssh_username` in terraform.tfvars matches your Proxmox SSH user
- **Key not authorized:** Run `ssh-copy-id` again to ensure the key is in ~/.ssh/authorized_keys on Proxmox
- **Permission denied writing snippets (non-root user):** Ensure your user has write access to `/var/lib/vz/snippets` (see non-root setup steps above)

**Solution 2 - Use API token only (workaround):**

If SSH is problematic, you can create the cloud-init snippet manually:

```bash
# On Proxmox host, create the snippet
nano /var/lib/vz/snippets/cloud-init-docker-host.yaml
# Paste the cloud-init content from main.tf

# Then remove the proxmox_virtual_environment_file resource from main.tf
# and reference the file directly in the VM resource:
# user_data_file_id = "local:snippets/cloud-init-docker-host.yaml"
```

### Template Not Found

Error: `template with ID 9000 not found`

**Solution:** Create the cloud template (see Prerequisites).

### API Permission Error

Error: `permission denied`

**Solution:** Check API token permissions:
```bash
pveum acl modify / -token 'root@pam!terraform' -role Administrator
```

### Cloud-Init Not Working

**Check cloud-init status:**
```bash
ssh ubuntu@<VM-IP>
sudo cloud-init status
sudo cat /var/log/cloud-init-output.log
```

### Docker Not Installed

**Manual installation:**
```bash
ssh ubuntu@<VM-IP>
curl -fsSL https://get.docker.com | sudo sh
sudo usermod -aG docker ubuntu
```

### VM Won't Start

**Check Proxmox logs:**
```bash
# On Proxmox server
qm status <VM-ID>
tail -f /var/log/pve/tasks/active
```

### GPU Not Detected in VM

**Verify IOMMU is enabled:**
```bash
# On Proxmox host
dmesg | grep -e DMAR -e IOMMU
# Should show: "IOMMU enabled"
```

**Check GPU is available:**
```bash
# On Proxmox host
lspci | grep -i nvidia
lspci -n -s 01:00

# Verify it's not being used by the host
lsmod | grep nvidia
# Should be empty (blacklisted)
```

**In VM, install drivers:**
```bash
# AlmaLinux
sudo dnf install -y epel-release
sudo dnf config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel9/x86_64/cuda-rhel9.repo
sudo dnf install -y nvidia-driver

# Ubuntu
sudo apt install -y nvidia-driver-535
sudo reboot

# Verify
nvidia-smi
```

### NFS Mounts Not Working

**On Proxmox host, verify NFS server:**
```bash
systemctl status nfs-server
showmount -e localhost
# Should list all /data/media/* exports
```

**In VM, test manual mount:**
```bash
# Install NFS client if missing
sudo apt install nfs-common  # Ubuntu
sudo dnf install nfs-utils   # AlmaLinux

# Test mount
sudo mount -t nfs 192.168.1.100:/data/media/movies /mnt
ls /mnt
sudo umount /mnt
```

**Check /etc/fstab in VM:**
```bash
cat /etc/fstab | grep nfs
# Should show all media directory mounts
```
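
A healthy entry looks roughly like this (assuming the host IP and paths used above; `_netdev` delays mounting until the network is up):

```
192.168.1.100:/data/media/movies  /media/movies  nfs  defaults,_netdev  0  0
```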

**Firewall issues:**
```bash
# On Proxmox host, allow NFS
ufw allow from 192.168.1.0/24 to any port nfs
# Or disable the firewall temporarily to test:
systemctl stop ufw
```

## Advanced Usage

### Multiple VMs

Create `docker-host-02.tfvars`:
```hcl
vm_name       = "docker-host-02"
vm_ip_address = "192.168.1.101"
```

Deploy:
```bash
tofu apply -var-file="docker-host-02.tfvars"
```

### Custom Cloud-Init

Edit `main.tf` to add custom cloud-init sections:
```yaml
users:
  - name: myuser
    groups: sudo, docker
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL

packages:
  - zsh
  - tmux
  - neovim

runcmd:
  - sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
```

### Attach Additional Disk

Add to `main.tf`:
```hcl
disk {
  datastore_id = var.storage
  size         = 200
  interface    = "scsi1"
}
```

Mount in cloud-init:
```yaml
mounts:
  - ["/dev/sdb", "/mnt/data", "ext4", "defaults", "0", "0"]
```

## Cost Analysis

**Resource Usage:**
- 4 cores, 8GB RAM, 50GB disk
- Running 24/7

**Homelab Cost:** $0 (uses existing hardware)

**If in cloud (comparison):**
- AWS: ~$50-100/month
- DigitalOcean: ~$40/month
- Linode: ~$40/month

**Homelab ROI:** Pays for itself in ~2-3 months!

## Security Hardening

### Enable Firewall

Add to cloud-init:
```yaml
runcmd:
  - ufw default deny incoming
  - ufw default allow outgoing
  - ufw allow ssh
  - ufw allow 80/tcp
  - ufw allow 443/tcp
  - ufw --force enable
```

### Disable Password Authentication

After SSH key setup:
```yaml
ssh_pwauth: false
```

### Automatic Updates

Already enabled in cloud-init. Verify:
```bash
sudo systemctl status unattended-upgrades
```

## Next Steps

1. ✅ Deploy core services (Traefik, LLDAP, Tinyauth)
2. ✅ Configure SSL certificates
3. ✅ Deploy media services
4. ✅ Set up backups (Restic)
5. ✅ Add monitoring (Prometheus/Grafana)

## Resources

- [OpenTofu Docs](https://opentofu.org/docs/)
- [Proxmox Provider](https://registry.terraform.io/providers/bpg/proxmox/latest/docs)
- [Cloud-Init Docs](https://cloudinit.readthedocs.io/)
- [Docker Docs](https://docs.docker.com/)

# Terraform State Management with SOPS

This project uses [SOPS](https://github.com/getsops/sops) (Secrets OPerationS) with [age](https://github.com/FiloSottile/age) encryption to securely store Terraform state files in Git.

## Why SOPS + age?

✅ **Encrypted at rest** - State files contain sensitive data (IPs, tokens)
✅ **Version controlled** - Track infrastructure changes over time
✅ **No infrastructure required** - No need for S3, PostgreSQL, or other backends
✅ **Perfect for homelabs** - Simple, secure, self-contained
✅ **FOSS** - Fully open source tools

## Prerequisites

### 1. Install age

**Debian/Ubuntu:**
```bash
sudo apt update
sudo apt install age
```

**macOS:**
```bash
brew install age
```

**Manual installation:**
```bash
# Download from https://github.com/FiloSottile/age/releases
wget https://github.com/FiloSottile/age/releases/download/v1.1.1/age-v1.1.1-linux-amd64.tar.gz
tar xzf age-v1.1.1-linux-amd64.tar.gz
sudo mv age/age age/age-keygen /usr/local/bin/
```

### 2. Install SOPS

**Debian/Ubuntu:**
```bash
# Download from https://github.com/getsops/sops/releases
wget https://github.com/getsops/sops/releases/download/v3.8.1/sops-v3.8.1.linux.amd64
sudo mv sops-v3.8.1.linux.amd64 /usr/local/bin/sops
sudo chmod +x /usr/local/bin/sops
```

**macOS:**
```bash
brew install sops
```

Verify installation:
```bash
age --version
sops --version
```

## Initial Setup

### 1. Generate Age Encryption Key

```bash
# Create SOPS directory
mkdir -p ~/.sops

# Generate a new age key pair
age-keygen -o ~/.sops/homelab-terraform.txt

# View the key (you'll need the public key)
cat ~/.sops/homelab-terraform.txt
```

Output will look like:
```
# created: 2025-11-11T12:34:56Z
# public key: age1xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
AGE-SECRET-KEY-1XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXx
```

**⚠️ IMPORTANT:**
- The line starting with `AGE-SECRET-KEY-1` is your **private key** - keep it secret!
- The line starting with `age1` is your **public key** - you'll use this in .sops.yaml
- **Backup this file** to a secure location (password manager, encrypted backup, etc.)
- If you lose this key, you **cannot decrypt** your state files!

### 2. Configure SOPS

```bash
cd terraform/proxmox-examples/docker-host

# Copy the example config
cp .sops.yaml.example .sops.yaml

# Edit and replace YOUR_AGE_PUBLIC_KEY_HERE with your public key from step 1
nano .sops.yaml
```

Your `.sops.yaml` should look like:
```yaml
creation_rules:
  - path_regex: \.tfstate$
    age: age1xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  - path_regex: \.secret$
    age: age1xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  - path_regex: terraform\.tfvars$
    age: age1xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```

### 3. Set Environment Variable (Optional but Recommended)

```bash
# Add to your ~/.bashrc or ~/.zshrc
echo 'export SOPS_AGE_KEY_FILE=~/.sops/homelab-terraform.txt' >> ~/.bashrc
source ~/.bashrc
```

This tells SOPS where to find your private key for decryption.

## Usage

### Option A: Automatic Wrapper Script (Recommended)

Use the `./scripts/tf` wrapper that handles encryption/decryption automatically:

```bash
# Initialize (first time)
./scripts/tf init

# Plan changes
./scripts/tf plan

# Apply changes (automatically encrypts after)
./scripts/tf apply

# Destroy infrastructure (automatically encrypts after)
./scripts/tf destroy

# View state
./scripts/tf show
```

The wrapper script:
1. Decrypts state files before running
2. Runs your terraform/tofu command
3. Encrypts state files after (if state was modified)
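
That decrypt-run-encrypt flow can be sketched in a few lines of shell (a simplified sketch only; the real `scripts/tf` may handle more files and edge cases):

```shell
#!/usr/bin/env bash
# Simplified sketch of the decrypt -> run -> encrypt wrapper.
set -euo pipefail
STATE=terraform.tfstate

# 1. Decrypt state if an encrypted copy exists
[ -f "$STATE.enc" ] && sops --decrypt "$STATE.enc" > "$STATE"

# 2. Run the tofu command passed as arguments
tofu "$@"

# 3. Re-encrypt the (possibly updated) state
[ -f "$STATE" ] && sops --encrypt "$STATE" > "$STATE.enc"
```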

### Option B: Manual Encryption/Decryption

If you prefer manual control:

```bash
# 1. Decrypt state files
./scripts/tf-decrypt

# 2. Run terraform commands
tofu init
tofu plan
tofu apply

# 3. Encrypt state files
./scripts/tf-encrypt

# 4. Commit encrypted files to Git
git add *.enc
git commit -m "Update infrastructure"
git push
```

## Workflow Examples

### First Time Setup

```bash
cd terraform/proxmox-examples/docker-host

# 1. Configure your variables
cp terraform.tfvars.example terraform.tfvars
nano terraform.tfvars  # Add your API tokens, SSH keys, etc.

# 2. Initialize Terraform
./scripts/tf init

# 3. Plan infrastructure
./scripts/tf plan

# 4. Apply infrastructure
./scripts/tf apply

# 5. Encrypted state files are created automatically
# terraform.tfstate.enc now exists

# 6. Commit encrypted state to Git
git add terraform.tfstate.enc .sops.yaml.example
git commit -m "Add encrypted Terraform state"
git push
```

### Making Infrastructure Changes

```bash
# 1. Decrypt, apply changes, re-encrypt (all automatic)
./scripts/tf apply

# 2. Commit updated encrypted state
git add terraform.tfstate.enc
git commit -m "Update VM configuration"
git push
```

### Cloning on a New Machine

```bash
# 1. Clone the repository
git clone https://github.com/efigueroa/homelab.git
cd homelab/terraform/proxmox-examples/docker-host

# 2. Copy your age private key to the new machine
# (Securely transfer ~/.sops/homelab-terraform.txt)
mkdir -p ~/.sops
# Copy the key file here

# 3. Set up SOPS config
cp .sops.yaml.example .sops.yaml
# Edit with your public key

# 4. Decrypt state
./scripts/tf-decrypt

# 5. Now you can run terraform commands
./scripts/tf plan
```

## Security Best Practices

### DO ✅

- **Backup your age private key** to multiple secure locations
- **Use different keys** for different projects/environments
- **Commit `.sops.yaml.example`** to Git (without your actual key)
- **Commit encrypted `*.enc` files** to Git
- **Use the wrapper script** to avoid forgetting to encrypt

### DON'T ❌

- **Never commit `.sops.yaml`** with your actual key (it's in .gitignore)
- **Never commit unencrypted `.tfstate`** files (they're in .gitignore)
- **Never commit unencrypted `terraform.tfvars`** with secrets
- **Never share your private age key** publicly
- **Don't lose your private key** - you can't decrypt without it!

## File Structure

```
terraform/proxmox-examples/docker-host/
├── .gitignore               # Ignores unencrypted files
├── .sops.yaml               # Your SOPS config (NOT in Git)
├── .sops.yaml.example       # Template (in Git)
├── terraform.tfstate        # Unencrypted state (NOT in Git)
├── terraform.tfstate.enc    # Encrypted state (in Git) ✅
├── terraform.tfvars         # Your config with secrets (NOT in Git)
├── terraform.tfvars.enc     # Encrypted config (in Git) ✅
├── terraform.tfvars.example # Template without secrets (in Git)
├── scripts/
│   ├── tf                   # Wrapper script
│   ├── tf-encrypt           # Manual encrypt
│   └── tf-decrypt           # Manual decrypt
└── STATE_MANAGEMENT.md      # This file
```

## Troubleshooting

### Error: "no key could decrypt the data"

**Cause:** SOPS can't find your private key

**Solution:**
```bash
# Set the key file location
export SOPS_AGE_KEY_FILE=~/.sops/homelab-terraform.txt

# Or add it to ~/.bashrc permanently
echo 'export SOPS_AGE_KEY_FILE=~/.sops/homelab-terraform.txt' >> ~/.bashrc
```

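Before digging further, it can help to confirm the key file actually exists and holds an age identity (age secret keys start with `AGE-SECRET-KEY-`). A quick check, assuming the key path used throughout this guide:

```shell
# Verify the age key file exists and contains an identity line.
KEY_FILE="${SOPS_AGE_KEY_FILE:-$HOME/.sops/homelab-terraform.txt}"
if [ -f "$KEY_FILE" ] && grep -q '^AGE-SECRET-KEY-' "$KEY_FILE"; then
  echo "age key found: $KEY_FILE"
else
  echo "age key missing or malformed: $KEY_FILE" >&2
fi
```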
### Error: "YOUR_AGE_PUBLIC_KEY_HERE"

**Cause:** You didn't replace the placeholder in `.sops.yaml`

**Solution:**
```bash
# Edit .sops.yaml and replace the placeholder with your actual public key
nano .sops.yaml
```

### Error: "failed to get the data key"

**Cause:** The file was encrypted with a different key

**Solution:**
- Ensure you're using the same age key that encrypted the file
- If you lost the original key, you'll need to re-create the state by importing your existing resources with `tofu import`

### Accidentally Committed Unencrypted State

**Solution:**
```bash
# Remove from Git history (DANGEROUS - coordinate with team if not solo)
git filter-branch --force --index-filter \
  'git rm --cached --ignore-unmatch terraform.tfstate' \
  --prune-empty --tag-name-filter cat -- --all

# Force push (only if solo or coordinated)
git push origin --force --all
```

Note: Git itself now recommends `git filter-repo` over `filter-branch` for history rewriting. Either way, rotate any secrets the state contained - force-pushing does not scrub clones that already pulled them.

### Lost Private Key

**Solution:**
- Restore from your backup (you made a backup, right?)
- If truly lost, you'll need to:
  1. Manually recreate infrastructure or import existing resources
  2. Generate a new age key
  3. Re-encrypt everything with the new key

## Advanced: Multiple Keys (Team Access)

If multiple people need access:

```yaml
# .sops.yaml
creation_rules:
  - path_regex: \.tfstate$
    age: >-
      age1person1xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,
      age1person2xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,
      age1person3xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```

Each person's private key can decrypt the files.

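When the recipient list in `.sops.yaml` changes, files that were encrypted earlier must be re-wrapped for the new key set. A minimal sketch, assuming the SOPS CLI is installed and the file names used in this guide:

```shell
# Re-wrap each encrypted file's data key for the recipients now
# listed in .sops.yaml; the underlying content is unchanged.
for f in terraform.tfstate.enc terraform.tfvars.enc; do
  if [ -f "$f" ]; then
    sops updatekeys --yes "$f"
  fi
done
```

After running this, every key listed under `creation_rules` can decrypt the files; removed keys no longer can.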
## Backup Strategy

### Recommended Backup Locations

1. **Password Manager** (1Password, Bitwarden, etc.)
   ```bash
   # Copy the contents
   cat ~/.sops/homelab-terraform.txt
   # Store as a secure note in your password manager
   ```

2. **Encrypted USB Drive**
   ```bash
   # Copy to encrypted drive
   cp ~/.sops/homelab-terraform.txt /media/encrypted-usb/
   ```

3. **Encrypted Cloud Storage**
   ```bash
   # Encrypt with gpg before uploading
   gpg -c ~/.sops/homelab-terraform.txt
   # Upload homelab-terraform.txt.gpg to cloud
   ```

## Resources

- [SOPS Documentation](https://github.com/getsops/sops)
- [age Documentation](https://github.com/FiloSottile/age)
- [Terraform State Security](https://developer.hashicorp.com/terraform/language/state/sensitive-data)
- [OpenTofu Documentation](https://opentofu.org/docs/)

## Questions?

Common questions answered in this document:

- ✅ How do I set up SOPS? → See [Initial Setup](#initial-setup)
- ✅ How do I use it daily? → See [Option A: Automatic Wrapper](#option-a-automatic-wrapper-script-recommended)
- ✅ What if I lose my key? → See [Lost Private Key](#lost-private-key)
- ✅ How do I back up my key? → See [Backup Strategy](#backup-strategy)
- ✅ Can multiple people have access? → See [Advanced: Multiple Keys](#advanced-multiple-keys-team-access)

@ -1,290 +0,0 @@
terraform {
  required_version = ">= 1.6"

  required_providers {
    proxmox = {
      source  = "bpg/proxmox"
      version = "~> 0.50"
    }
  }
}

provider "proxmox" {
  endpoint = var.pm_api_url

  api_token = var.pm_api_token_secret != "" ? "${var.pm_api_token_id}=${var.pm_api_token_secret}" : null

  # For self-signed certificates
  insecure = var.pm_tls_insecure

  ssh {
    agent    = true
    username = var.pm_ssh_username
  }
}

resource "proxmox_virtual_environment_vm" "docker_host" {
  name        = var.vm_name
  description = "Docker host for homelab services - Managed by OpenTofu"
  node_name   = var.proxmox_node

  # Clone from template (must exist in Proxmox)
  clone {
    vm_id = var.template_vm_id
    full  = true
  }

  # BIOS type - OVMF required for GPU passthrough
  bios = var.enable_gpu_passthrough ? "ovmf" : "seabios"

  # Machine type - q35 required for GPU passthrough
  machine = var.enable_gpu_passthrough ? "q35" : "pc"

  # CPU configuration
  cpu {
    cores = var.vm_cores
    type  = "host" # Use host CPU type for best performance
  }

  # Memory configuration
  memory {
    dedicated = var.vm_memory
  }

  # EFI disk (required for OVMF BIOS when GPU passthrough is enabled)
  dynamic "efi_disk" {
    for_each = var.enable_gpu_passthrough ? [1] : []
    content {
      datastore_id = var.storage
      type         = "4m"
    }
  }

  # GPU passthrough configuration
  dynamic "hostpci" {
    for_each = var.enable_gpu_passthrough ? [1] : []
    content {
      device  = "hostpci0"
      mapping = var.gpu_pci_id
      pcie    = true
      rombar  = true
      xvga    = false
    }
  }

  # Network interface
  network_device {
    bridge = var.network_bridge
    model  = "virtio"
  }

  # Disk configuration
  disk {
    datastore_id = var.storage
    size         = var.disk_size
    interface    = "scsi0"
    discard      = "on" # Enable TRIM for SSDs
    iothread     = true
  }

  # Cloud-init configuration
  initialization {
    ip_config {
      ipv4 {
        address = var.vm_ip_address == "dhcp" ? "dhcp" : "${var.vm_ip_address}/${var.vm_ip_netmask}"
        gateway = var.vm_gateway
      }
    }

    user_account {
      username = var.vm_username
      keys     = var.vm_ssh_keys
      password = var.vm_password
    }

    user_data_file_id = proxmox_virtual_environment_file.cloud_init_user_data.id
  }

  # Start VM on boot
  on_boot = true

  # Tags for organization
  tags = ["terraform", "docker", "homelab"]
}

# Cloud-init user data for Docker installation
resource "proxmox_virtual_environment_file" "cloud_init_user_data" {
  content_type = "snippets"
  datastore_id = var.snippets_storage
  node_name    = var.proxmox_node

  source_raw {
    data = var.vm_os_type == "almalinux" ? local.cloud_init_almalinux : local.cloud_init_ubuntu

    file_name = "cloud-init-docker-${var.vm_name}.yaml"
  }
}

# Cloud-init configurations for Ubuntu and AlmaLinux
locals {
  cloud_init_ubuntu = <<-EOF
    #cloud-config
    hostname: ${var.vm_name}
    manage_etc_hosts: true

    # Install Docker and dependencies
    package_update: true
    package_upgrade: true

    packages:
      - apt-transport-https
      - ca-certificates
      - curl
      - gnupg
      - lsb-release
      - git
      - vim
      - htop
      - net-tools
      ${var.mount_media_directories ? "- nfs-common" : ""}

    # Docker installation and NFS mount setup
    runcmd:
      # Install Docker
      - mkdir -p /etc/apt/keyrings
      - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
      - chmod a+r /etc/apt/keyrings/docker.gpg
      - echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
      - apt-get update
      - apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
      - systemctl enable docker
      - systemctl start docker
      - usermod -aG docker ${var.vm_username}
      - docker network create homelab || true

      # Create media directories
      - mkdir -p ${var.media_mount_path}/{audiobooks,books,comics,complete,downloads,homemovies,incomplete,movies,music,photos,tv}

      ${var.mount_media_directories ? "# Mount NFS shares from Proxmox host" : ""}
      ${var.mount_media_directories ? "- systemctl enable nfs-client.target" : ""}
      ${var.mount_media_directories ? "- systemctl start nfs-client.target" : ""}
      ${var.mount_media_directories ? "- mount -t nfs ${var.proxmox_host_ip}:${var.media_source_path}/audiobooks ${var.media_mount_path}/audiobooks" : ""}
      ${var.mount_media_directories ? "- mount -t nfs ${var.proxmox_host_ip}:${var.media_source_path}/books ${var.media_mount_path}/books" : ""}
      ${var.mount_media_directories ? "- mount -t nfs ${var.proxmox_host_ip}:${var.media_source_path}/comics ${var.media_mount_path}/comics" : ""}
      ${var.mount_media_directories ? "- mount -t nfs ${var.proxmox_host_ip}:${var.media_source_path}/complete ${var.media_mount_path}/complete" : ""}
      ${var.mount_media_directories ? "- mount -t nfs ${var.proxmox_host_ip}:${var.media_source_path}/downloads ${var.media_mount_path}/downloads" : ""}
      ${var.mount_media_directories ? "- mount -t nfs ${var.proxmox_host_ip}:${var.media_source_path}/homemovies ${var.media_mount_path}/homemovies" : ""}
      ${var.mount_media_directories ? "- mount -t nfs ${var.proxmox_host_ip}:${var.media_source_path}/incomplete ${var.media_mount_path}/incomplete" : ""}
      ${var.mount_media_directories ? "- mount -t nfs ${var.proxmox_host_ip}:${var.media_source_path}/movies ${var.media_mount_path}/movies" : ""}
      ${var.mount_media_directories ? "- mount -t nfs ${var.proxmox_host_ip}:${var.media_source_path}/music ${var.media_mount_path}/music" : ""}
      ${var.mount_media_directories ? "- mount -t nfs ${var.proxmox_host_ip}:${var.media_source_path}/photos ${var.media_mount_path}/photos" : ""}
      ${var.mount_media_directories ? "- mount -t nfs ${var.proxmox_host_ip}:${var.media_source_path}/tv ${var.media_mount_path}/tv" : ""}

      - chown -R ${var.vm_username}:${var.vm_username} ${var.media_mount_path}
      - chmod -R 755 ${var.media_mount_path}

      ${var.clone_homelab_repo ? "- su - ${var.vm_username} -c 'cd ~ && git clone https://github.com/${var.github_username}/homelab.git'" : ""}

    ${var.mount_media_directories ? "# Make NFS mounts persistent" : ""}
    ${var.mount_media_directories ? "write_files:" : ""}
    ${var.mount_media_directories ? "  - path: /etc/fstab" : ""}
    ${var.mount_media_directories ? "    append: true" : ""}
    ${var.mount_media_directories ? "    content: |" : ""}
    ${var.mount_media_directories ? "      ${var.proxmox_host_ip}:${var.media_source_path}/audiobooks ${var.media_mount_path}/audiobooks nfs defaults 0 0" : ""}
    ${var.mount_media_directories ? "      ${var.proxmox_host_ip}:${var.media_source_path}/books ${var.media_mount_path}/books nfs defaults 0 0" : ""}
    ${var.mount_media_directories ? "      ${var.proxmox_host_ip}:${var.media_source_path}/comics ${var.media_mount_path}/comics nfs defaults 0 0" : ""}
    ${var.mount_media_directories ? "      ${var.proxmox_host_ip}:${var.media_source_path}/complete ${var.media_mount_path}/complete nfs defaults 0 0" : ""}
    ${var.mount_media_directories ? "      ${var.proxmox_host_ip}:${var.media_source_path}/downloads ${var.media_mount_path}/downloads nfs defaults 0 0" : ""}
    ${var.mount_media_directories ? "      ${var.proxmox_host_ip}:${var.media_source_path}/homemovies ${var.media_mount_path}/homemovies nfs defaults 0 0" : ""}
    ${var.mount_media_directories ? "      ${var.proxmox_host_ip}:${var.media_source_path}/incomplete ${var.media_mount_path}/incomplete nfs defaults 0 0" : ""}
    ${var.mount_media_directories ? "      ${var.proxmox_host_ip}:${var.media_source_path}/movies ${var.media_mount_path}/movies nfs defaults 0 0" : ""}
    ${var.mount_media_directories ? "      ${var.proxmox_host_ip}:${var.media_source_path}/music ${var.media_mount_path}/music nfs defaults 0 0" : ""}
    ${var.mount_media_directories ? "      ${var.proxmox_host_ip}:${var.media_source_path}/photos ${var.media_mount_path}/photos nfs defaults 0 0" : ""}
    ${var.mount_media_directories ? "      ${var.proxmox_host_ip}:${var.media_source_path}/tv ${var.media_mount_path}/tv nfs defaults 0 0" : ""}

    # Set timezone
    timezone: ${var.vm_timezone}

    # Reboot after setup
    power_state:
      mode: reboot
      condition: true
  EOF

  cloud_init_almalinux = <<-EOF
    #cloud-config
    hostname: ${var.vm_name}
    manage_etc_hosts: true

    # Install Docker and dependencies
    package_update: true
    package_upgrade: true

    packages:
      - curl
      - ca-certificates
      - git
      - vim
      - htop
      - net-tools
      ${var.mount_media_directories ? "- nfs-utils" : ""}

    # Docker installation and NFS mount setup
    runcmd:
      # Install Docker
      - dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
      - dnf install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
      - systemctl enable docker
      - systemctl start docker
      - usermod -aG docker ${var.vm_username}
      - docker network create homelab || true

      # Create media directories
      - mkdir -p ${var.media_mount_path}/{audiobooks,books,comics,complete,downloads,homemovies,incomplete,movies,music,photos,tv}

      ${var.mount_media_directories ? "# Mount NFS shares from Proxmox host" : ""}
      ${var.mount_media_directories ? "- systemctl enable nfs-client.target" : ""}
      ${var.mount_media_directories ? "- systemctl start nfs-client.target" : ""}
      ${var.mount_media_directories ? "- mount -t nfs ${var.proxmox_host_ip}:${var.media_source_path}/audiobooks ${var.media_mount_path}/audiobooks" : ""}
      ${var.mount_media_directories ? "- mount -t nfs ${var.proxmox_host_ip}:${var.media_source_path}/books ${var.media_mount_path}/books" : ""}
      ${var.mount_media_directories ? "- mount -t nfs ${var.proxmox_host_ip}:${var.media_source_path}/comics ${var.media_mount_path}/comics" : ""}
      ${var.mount_media_directories ? "- mount -t nfs ${var.proxmox_host_ip}:${var.media_source_path}/complete ${var.media_mount_path}/complete" : ""}
      ${var.mount_media_directories ? "- mount -t nfs ${var.proxmox_host_ip}:${var.media_source_path}/downloads ${var.media_mount_path}/downloads" : ""}
      ${var.mount_media_directories ? "- mount -t nfs ${var.proxmox_host_ip}:${var.media_source_path}/homemovies ${var.media_mount_path}/homemovies" : ""}
      ${var.mount_media_directories ? "- mount -t nfs ${var.proxmox_host_ip}:${var.media_source_path}/incomplete ${var.media_mount_path}/incomplete" : ""}
      ${var.mount_media_directories ? "- mount -t nfs ${var.proxmox_host_ip}:${var.media_source_path}/movies ${var.media_mount_path}/movies" : ""}
      ${var.mount_media_directories ? "- mount -t nfs ${var.proxmox_host_ip}:${var.media_source_path}/music ${var.media_mount_path}/music" : ""}
      ${var.mount_media_directories ? "- mount -t nfs ${var.proxmox_host_ip}:${var.media_source_path}/photos ${var.media_mount_path}/photos" : ""}
      ${var.mount_media_directories ? "- mount -t nfs ${var.proxmox_host_ip}:${var.media_source_path}/tv ${var.media_mount_path}/tv" : ""}

      - chown -R ${var.vm_username}:${var.vm_username} ${var.media_mount_path}
      - chmod -R 755 ${var.media_mount_path}

      ${var.clone_homelab_repo ? "- su - ${var.vm_username} -c 'cd ~ && git clone https://github.com/${var.github_username}/homelab.git'" : ""}

    ${var.mount_media_directories ? "# Make NFS mounts persistent" : ""}
    ${var.mount_media_directories ? "write_files:" : ""}
    ${var.mount_media_directories ? "  - path: /etc/fstab" : ""}
    ${var.mount_media_directories ? "    append: true" : ""}
    ${var.mount_media_directories ? "    content: |" : ""}
    ${var.mount_media_directories ? "      ${var.proxmox_host_ip}:${var.media_source_path}/audiobooks ${var.media_mount_path}/audiobooks nfs defaults 0 0" : ""}
    ${var.mount_media_directories ? "      ${var.proxmox_host_ip}:${var.media_source_path}/books ${var.media_mount_path}/books nfs defaults 0 0" : ""}
    ${var.mount_media_directories ? "      ${var.proxmox_host_ip}:${var.media_source_path}/comics ${var.media_mount_path}/comics nfs defaults 0 0" : ""}
    ${var.mount_media_directories ? "      ${var.proxmox_host_ip}:${var.media_source_path}/complete ${var.media_mount_path}/complete nfs defaults 0 0" : ""}
    ${var.mount_media_directories ? "      ${var.proxmox_host_ip}:${var.media_source_path}/downloads ${var.media_mount_path}/downloads nfs defaults 0 0" : ""}
    ${var.mount_media_directories ? "      ${var.proxmox_host_ip}:${var.media_source_path}/homemovies ${var.media_mount_path}/homemovies nfs defaults 0 0" : ""}
    ${var.mount_media_directories ? "      ${var.proxmox_host_ip}:${var.media_source_path}/incomplete ${var.media_mount_path}/incomplete nfs defaults 0 0" : ""}
    ${var.mount_media_directories ? "      ${var.proxmox_host_ip}:${var.media_source_path}/movies ${var.media_mount_path}/movies nfs defaults 0 0" : ""}
    ${var.mount_media_directories ? "      ${var.proxmox_host_ip}:${var.media_source_path}/music ${var.media_mount_path}/music nfs defaults 0 0" : ""}
    ${var.mount_media_directories ? "      ${var.proxmox_host_ip}:${var.media_source_path}/photos ${var.media_mount_path}/photos nfs defaults 0 0" : ""}
    ${var.mount_media_directories ? "      ${var.proxmox_host_ip}:${var.media_source_path}/tv ${var.media_mount_path}/tv nfs defaults 0 0" : ""}

    # Set timezone
    timezone: ${var.vm_timezone}

    # Reboot after setup
    power_state:
      mode: reboot
      condition: true
  EOF
}

@ -1,29 +0,0 @@
output "vm_id" {
  description = "VM ID"
  value       = proxmox_virtual_environment_vm.docker_host.vm_id
}

output "vm_name" {
  description = "VM name"
  value       = proxmox_virtual_environment_vm.docker_host.name
}

output "vm_ipv4_address" {
  description = "VM IPv4 address"
  value       = try(proxmox_virtual_environment_vm.docker_host.ipv4_addresses[1][0], "DHCP - check Proxmox UI")
}

output "vm_mac_address" {
  description = "VM MAC address"
  value       = proxmox_virtual_environment_vm.docker_host.mac_addresses[0]
}

output "ssh_command" {
  description = "SSH command to connect to VM"
  value       = "ssh ${var.vm_username}@${try(proxmox_virtual_environment_vm.docker_host.ipv4_addresses[1][0], "DHCP-ADDRESS")}"
}

output "docker_status_command" {
  description = "Command to check Docker status"
  value       = "ssh ${var.vm_username}@${try(proxmox_virtual_environment_vm.docker_host.ipv4_addresses[1][0], "DHCP-ADDRESS")} 'docker ps'"
}

@ -1,76 +0,0 @@
#!/usr/bin/env bash
#
# tf - Wrapper for OpenTofu/Terraform with automatic SOPS encryption/decryption
#
# Usage:
#   ./scripts/tf init
#   ./scripts/tf plan
#   ./scripts/tf apply
#   ./scripts/tf destroy
#
# This script automatically:
#   1. Decrypts state before running tofu commands
#   2. Runs your tofu command
#   3. Encrypts state after running tofu commands
#

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
TF_DIR="$(dirname "$SCRIPT_DIR")"

cd "$TF_DIR"

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

error() {
  echo -e "${RED}ERROR: $1${NC}" >&2
  exit 1
}

success() {
  echo -e "${GREEN}✓ $1${NC}"
}

info() {
  echo -e "${BLUE}ℹ $1${NC}"
}

# Check if tofu or terraform is installed
if command -v tofu &> /dev/null; then
  TF_CMD="tofu"
elif command -v terraform &> /dev/null; then
  TF_CMD="terraform"
else
  error "Neither tofu nor terraform is installed"
fi

# Decrypt state if encrypted files exist
if [[ -f terraform.tfstate.enc || -f terraform.tfvars.enc ]]; then
  info "Decrypting state files..."
  "$SCRIPT_DIR/tf-decrypt"
  echo
fi

# Run the terraform/tofu command
echo -e "${BLUE}Running: $TF_CMD $*${NC}"
echo
$TF_CMD "$@"
TF_EXIT_CODE=$?

# If the command succeeded and modified state, encrypt it
if [[ $TF_EXIT_CODE -eq 0 ]]; then
  # Commands that modify state
  if [[ "$1" =~ ^(apply|destroy|import|refresh|state)$ ]]; then
    echo
    info "Encrypting state files..."
    "$SCRIPT_DIR/tf-encrypt"
  fi
fi

exit $TF_EXIT_CODE

@ -1,87 +0,0 @@
#!/usr/bin/env bash
#
# tf-decrypt - Decrypt Terraform state and tfvars files with SOPS
#
# Usage: ./scripts/tf-decrypt
#

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
TF_DIR="$(dirname "$SCRIPT_DIR")"

cd "$TF_DIR"

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

error() {
  echo -e "${RED}ERROR: $1${NC}" >&2
  exit 1
}

success() {
  echo -e "${GREEN}✓ $1${NC}"
}

warn() {
  echo -e "${YELLOW}⚠ $1${NC}"
}

# Check if SOPS is installed
if ! command -v sops &> /dev/null; then
  error "sops is not installed. Install it from: https://github.com/getsops/sops/releases"
fi

# Check if .sops.yaml exists
if [[ ! -f .sops.yaml ]]; then
  error ".sops.yaml not found. Copy .sops.yaml.example and configure your age key."
fi

# Check if SOPS_AGE_KEY_FILE is set or exists in default location
if [[ -z "${SOPS_AGE_KEY_FILE:-}" ]]; then
  if [[ -f ~/.sops/homelab-terraform.txt ]]; then
    export SOPS_AGE_KEY_FILE=~/.sops/homelab-terraform.txt
  else
    warn "SOPS_AGE_KEY_FILE not set. Trying default age identities..."
  fi
fi

echo "🔓 Decrypting Terraform files..."
echo

# Decrypt terraform.tfstate.enc if it exists
if [[ -f terraform.tfstate.enc ]]; then
  echo "Decrypting terraform.tfstate.enc..."
  sops -d terraform.tfstate.enc > terraform.tfstate
  success "terraform.tfstate.enc → terraform.tfstate"
else
  warn "terraform.tfstate.enc not found (this is normal for first-time setup)"
fi

# Decrypt terraform.tfvars.enc if it exists
if [[ -f terraform.tfvars.enc ]]; then
  echo "Decrypting terraform.tfvars.enc..."
  sops -d terraform.tfvars.enc > terraform.tfvars
  success "terraform.tfvars.enc → terraform.tfvars"
else
  warn "terraform.tfvars.enc not found"
fi

# Decrypt backup state files if they exist
for backup_enc in terraform.tfstate.backup.enc terraform.tfstate.*.backup.enc; do
  if [[ -f "$backup_enc" ]]; then
    backup="${backup_enc%.enc}"
    echo "Decrypting $backup_enc..."
    sops -d "$backup_enc" > "$backup"
    success "$backup_enc → $backup"
  fi
done

echo
success "All Terraform files decrypted successfully!"
echo
warn "Remember to encrypt files after making changes: ./scripts/tf-encrypt"