Compare commits
No commits in common. "4c1c18f5c76019e07c69a409932fb50a708aa407" and "a1824a40437626440641424743f7460d9615614f" have entirely different histories.
4c1c18f5c7
...
a1824a4043
116 changed files with 15397 additions and 4979 deletions

@@ -1,303 +0,0 @@
# Wiki Documentation Skill

Create and manage markdown documentation files that sync to Wiki.js.

## Context

**Repository Location:** `/mnt/media/wikijs-content/`
**Git Remote:** `git.fig.systems/eddie/wiki.git`
**Wiki.js URL:** https://wiki.fig.systems

This repository is synchronized with Wiki.js. Any markdown files created here automatically appear in the wiki after a sync (typically within 5 minutes, or immediately if triggered manually).

## Capabilities

1. **Create Documentation Pages**
   - Write markdown files with proper Wiki.js frontmatter
   - Organize content in directories (maps to the wiki hierarchy)
   - Add tags and metadata

2. **Git Operations**
   - Commit changes with descriptive messages
   - Push to the remote repository
   - Pull the latest changes before writing

3. **Frontmatter Format**

   All wiki pages require this YAML frontmatter:

   ```yaml
   ---
   title: Page Title
   description: Brief description of the page
   published: true
   date: 2026-03-15T00:00:00.000Z
   tags: tag1, tag2, tag3
   editor: markdown
   dateCreated: 2026-03-15T00:00:00.000Z
   ---
   ```

   **Important:** Tags must be comma-separated, not YAML array format!

## Workflow

When creating wiki documentation:

1. **Navigate to the repo:**
   ```bash
   cd /mnt/media/wikijs-content
   ```

2. **Pull the latest changes:**
   ```bash
   git pull
   ```

3. **Write the markdown file:**
   - Use clear, descriptive filenames (lowercase-with-dashes.md)
   - Include proper frontmatter
   - Use standard markdown formatting
   - Organize in subdirectories as needed (e.g., `home/containers/services/service-name.md`)

4. **Scan for secrets with Gitleaks:**
   ```bash
   # Install gitleaks if not already installed
   # On Ubuntu/Debian: apt install gitleaks
   # Or download from: https://github.com/gitleaks/gitleaks/releases

   # Scan working-tree files before committing
   gitleaks detect --source . --verbose --no-git

   # Or scan a specific file
   gitleaks detect --source path/to/file.md --verbose --no-git
   ```

   **If secrets are found:**
   - **Remove them immediately**: replace with environment variables or placeholders
   - Use patterns like `${SECRET_KEY}`, `YOUR_KEY_HERE`, or `TBD`
   - Never commit actual passwords, API keys, tokens, or credentials
   - Check `.gitleaks.toml` for allowlist patterns

5. **Commit and push:**
   ```bash
   git add <filename>
   git commit -m "Add/Update: brief description"
   git push
   ```

   **Note:** Gitleaks runs in CI/CD on every push and fails the build if secrets are detected.

6. **Verify:** Changes appear at https://wiki.fig.systems after the next sync.
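The six steps above can be collected into one helper. This is a minimal sketch under the conventions in this document, not tooling that exists in the repo: `publish_page`, `slugify`, and the `WIKI_REPO` variable are illustrative names.

```bash
#!/usr/bin/env bash
# Sketch of the pull -> scan -> commit -> push workflow above.
set -euo pipefail

WIKI_REPO="${WIKI_REPO:-/mnt/media/wikijs-content}"

# Turn a page title into a lowercase-with-dashes filename stem.
slugify() {
  printf '%s' "$1" | tr '[:upper:]' '[:lower:]' | tr -s ' _' '-' | tr -cd 'a-z0-9-'
}

# Pull, scan for secrets, then commit and push a single page.
publish_page() {
  local file="$1" msg="$2"
  cd "$WIKI_REPO"
  git pull                               # step 2: avoid conflicts
  gitleaks detect --source . --no-git    # step 4: aborts on secrets (set -e)
  git add "$file"
  git commit -m "$msg"                   # step 5: "Add/Update: ..."
  git push
}

# Example (not run here):
# publish_page "homelab/services/$(slugify 'Jellyfin Media Server').md" "Add: Jellyfin docs"
```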
## File Organization

Suggested directory structure:
```
/mnt/media/wikijs-content/
├── homelab/
│   ├── services/
│   │   └── service-name.md
│   ├── networking/
│   │   └── traefik-setup.md
│   └── guides/
│       └── how-to-guide.md
├── development/
│   └── project-docs.md
└── reference/
    └── commands.md
```

Directories in the repo map to the page hierarchy in Wiki.js.

## Examples

### Create a Service Documentation Page

```markdown
---
title: Jellyfin Media Server
description: Jellyfin configuration and usage guide
published: true
date: 2026-03-15T00:00:00.000Z
tags: homelab, media, jellyfin
editor: markdown
dateCreated: 2026-03-15T00:00:00.000Z
---

# Jellyfin Media Server

Jellyfin is a free software media system...

## Access
- **URL:** https://jellyfin.fig.systems
- **Authentication:** Authelia SSO

## Configuration
...
```

### Create a How-To Guide

```markdown
---
title: How to Add a New Service
description: Step-by-step guide for adding services to the homelab
published: true
date: 2026-03-15T00:00:00.000Z
tags: homelab, guide, docker
editor: markdown
dateCreated: 2026-03-15T00:00:00.000Z
---

# How to Add a New Service

This guide walks through the process...
```

## Git Configuration

The repository is already configured:
- **User:** Claude
- **Email:** claude@fig.systems
- **Authentication:** Token-based (embedded in the remote URL)

## Best Practices

1. **Always pull before writing** to avoid conflicts
2. **Scan for secrets with Gitleaks** before committing
3. **Use descriptive commit messages** following the pattern "Add: X" or "Update: Y"
4. **Include proper frontmatter**: pages without it won't render correctly
5. **Use semantic filenames**: lowercase with dashes instead of spaces
6. **Organize logically**: use subdirectories for categories
7. **Add relevant tags**: helps with wiki navigation and search
8. **Set `published: true`**: pages with `published: false` won't be visible
9. **Never commit secrets**: use placeholders like `TBD`, `${VAR}`, or `YOUR_KEY_HERE`
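Practices 4 and 8 can be spot-checked before committing. The function below is an illustrative sketch, not repo tooling: it does naive `grep`-based field matching rather than real YAML parsing, and `check_frontmatter` is a hypothetical name.

```bash
# Report frontmatter problems in a wiki page; prints nothing when the page
# looks publishable. Naive line matching, not a YAML parser.
check_frontmatter() {
  local file="$1" field ok=0
  head -n1 "$file" | grep -qx -- '---' || { echo "missing frontmatter block"; return 1; }
  for field in title description published date tags editor dateCreated; do
    grep -q "^${field}:" "$file" || { echo "missing field: $field"; ok=1; }
  done
  grep -q '^published: true$' "$file" || { echo "published must be true"; ok=1; }
  grep -q '^tags: *\[' "$file" && { echo "tags must be comma-separated, not a YAML array"; ok=1; }
  return $ok
}
```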
## Secret Management with Gitleaks

### What is Gitleaks?

Gitleaks is a secret scanner that detects hardcoded secrets, passwords, API keys, and tokens in Git repositories.

### CI/CD Integration

The wiki repository has automated Gitleaks scanning:
- **Workflow:** `.forgejo/workflows/gitleaks.yaml`
- **Config:** `.gitleaks.toml`
- **Triggers:** Every push to main, all pull requests
- **Action:** Fails the build if secrets are detected

### Local Scanning

**Before committing:**
```bash
cd /mnt/media/wikijs-content

# Scan all working-tree files
gitleaks detect --source . --verbose --no-git

# Scan a specific file
gitleaks detect --source path/to/file.md --verbose --no-git

# Scan staged changes only
gitleaks protect --staged --verbose
```

### Handling Detected Secrets

**If Gitleaks finds secrets:**

1. **Immediate action:**
   - DO NOT commit
   - Replace the secret with a placeholder
   - Use `TBD`, `${SECRET_KEY}`, or `YOUR_KEY_HERE`

2. **Examples of safe placeholders:**
   ```markdown
   API_KEY=YOUR_API_KEY_HERE
   PASSWORD=${DB_PASSWORD}
   TOKEN=TBD
   ```

3. **Allowlisted patterns** (in `.gitleaks.toml`):
   - `example.com` domains
   - `localhost` and `127.0.0.1`
   - `TBD` placeholders
   - Environment variable syntax `${VAR}`

### What Gitleaks Detects

- AWS keys (`AKIA...`)
- GitHub tokens (`ghp_...`)
- GitLab tokens (`glpat-...`)
- Private keys (`-----BEGIN PRIVATE KEY-----`)
- Generic API keys and secrets
- Passwords in configuration files

### False Positives

If Gitleaks flags safe content:

1. **Update the `.gitleaks.toml` allowlist:**
   ```toml
   [allowlist]
   regexes = [
     '''safe-pattern-here''',
   ]
   ```

2. **Commit the config update:**
   ```bash
   git add .gitleaks.toml
   git commit -m "chore: Update Gitleaks allowlist"
   ```

### Git History Scanning

To scan the entire git history:
```bash
gitleaks detect --source . --verbose
```

This checks all commits, not just the current files.

## Troubleshooting

**If a page doesn't appear in Wiki.js:**
- Check the Wiki.js logs: `docker compose logs wikijs`
- Manually trigger a sync in the Wiki.js admin panel (Storage section)
- Verify the frontmatter is valid YAML
- Ensure the file has a `.md` extension

**If `git push` fails:**
- Check that the authentication token is still valid
- Verify network connectivity to git.fig.systems
- Try pulling first to resolve conflicts

**If the Gitleaks CI/CD job fails:**
- View the Forgejo Actions logs at https://git.fig.systems/eddie/wiki/actions
- Identify the detected secrets in the workflow output
- Remove or replace the secrets with placeholders
- Update `.gitleaks.toml` if it's a false positive
- Commit and push again

**If Gitleaks is not installed locally:**
```bash
# Ubuntu/Debian
sudo apt install gitleaks

# Or download the latest release
wget https://github.com/gitleaks/gitleaks/releases/latest/download/gitleaks_linux_amd64.tar.gz
tar -xzf gitleaks_linux_amd64.tar.gz
sudo mv gitleaks /usr/local/bin/
```

## Integration with Other Services

This wiki can document:
- **Homelab services** (compose/services/*)
- **Infrastructure setup** (Traefik, Authelia, LLDAP)
- **Media management** (*arr stack, Jellyfin)
- **Development projects**
- **Personal notes and references**

All documentation is version-controlled and backed up via Git!
58  .gitignore  vendored

@@ -30,53 +30,6 @@
 **/config/
 !**/config/*.example
 !**/config/.gitkeep
-**/config.bak/
-**/db/
-**/postgres/
-**/library/
-**/letsencrypt/
-
-# Runtime directories
-**/app/
-!**/app.yaml
-!**/app.json
-**/appdata/
-**/cache/
-**/downloads/
-**/uploads/
-**/output/
-**/backup/
-**/backups/
-**/incomplete/
-**/media/
-!compose/media/
-!compose/media/**/
-**/tmp/
-**/temp/
-
-# Media files
-**/*.flac
-**/*.mp3
-**/*.mp4
-**/*.mkv
-**/*.avi
-**/*.m4a
-**/*.wav
-**/*.ogg
-
-# Database files
-**/*.sqlite
-**/*.sqlite3
-**/*.db
-!**/*.db.example
-
-# Certificate files
-**/*.pem
-**/*.key
-**/*.crt
-**/*.cert
-!**/*.example.pem
-!**/*.example.key
-
 # Logs
 **/logs/

@@ -97,14 +50,3 @@ Thumbs.db
 # Temporary files
 *.tmp
 *.temp
-compose/media/automation/dispatcharr/data/
-compose/media/automation/slskd/app/data/
-compose/media/automation/profilarr/config/db/
-compose/media/automation/soularr/data/
-compose/media/frontend/immich/postgres/
-compose/services/vikunja/db/
-**/config/
-!**/config/*.example
-!**/config/.gitkeep
-*.backup
-*.bak
974  AGENTS.md

@@ -1,974 +0,0 @@
# Homelab Service Setup Guide for AI Agents

This document provides patterns, conventions, and best practices for setting up services in this homelab environment. Follow these guidelines when creating new services or modifying existing ones.

## Repository Structure

```
homelab/
├── .claude/                  # Claude Code configuration
│   └── skills/               # Custom skills for AI agents
│       └── wiki-docs.md      # Wiki documentation skill
├── compose/
│   ├── core/                 # Infrastructure services (Traefik, Authelia, LLDAP)
│   │   ├── traefik/
│   │   ├── authelia/
│   │   └── lldap/
│   ├── services/             # User-facing applications
│   │   └── service-name/
│   │       ├── compose.yaml
│   │       ├── .env
│   │       ├── .gitignore
│   │       ├── README.md
│   │       └── QUICKSTART.md
│   ├── media/                # Media-related services
│   │   ├── frontend/         # Media viewers (Jellyfin, Immich)
│   │   └── automation/       # Media management (*arr stack)
│   └── monitoring/           # Monitoring and logging
├── AGENTS.md                 # AI agent guidelines (this file)
└── README.md                 # Repository overview
```

**External Directories:**
- `/mnt/media/wikijs-content/` - Wiki.js content repository (Git-backed)

## Core Principles

### 1. Domain Convention
- **Primary domain:** `fig.systems`
- **Secondary domain:** `edfig.dev`
- **Pattern:** `service.fig.systems` or `service.edfig.dev`
- **Examples:**
  - `matrix.fig.systems` - Matrix server
  - `auth.fig.systems` - Authelia
  - `books.fig.systems` - BookLore
  - `ai.fig.systems` - Open WebUI

#### DNS and DDNS Setup

**Automatic DNS Resolution:**
- Wildcard DNS records are automatically updated via DDNS Updater
- `*.fig.systems` → points to the current public IP (Cloudflare)
- `*.edfig.dev` → points to the current public IP (Porkbun)
- `fig.systems` (root) → points to the current public IP
- `edfig.dev` (root) → points to the current public IP

**What this means for new services:**
- ✅ DNS is automatic: any `newservice.fig.systems` will resolve to the homelab IP
- ✅ No manual DNS record creation needed
- ✅ Works for all subdomains automatically
- ⚠️ You still need Traefik labels to route traffic to containers (see the Traefik Integration section)

**DDNS Updater Service:**
- Location: `compose/services/ddns-updater/`
- Monitors: public IP changes every 5 minutes
- Updates: both Cloudflare (fig.systems) and Porkbun (edfig.dev)
- Web UI: https://ddns.fig.systems (local network only)

**Adding a new service:**
1. DNS resolution is already handled by the wildcard records
2. Add Traefik labels to your compose.yaml (see the Service Setup Pattern below)
3. Start the container: Traefik auto-detects it and routes traffic
4. A Let's Encrypt SSL certificate is generated automatically

### 2. Storage Conventions

**Media Storage:** `/mnt/media/`
- `/mnt/media/books/` - Book library
- `/mnt/media/movies/` - Movie library
- `/mnt/media/tv/` - TV shows
- `/mnt/media/photos/` - Photo library
- `/mnt/media/music/` - Music library

**Service Data:** `/mnt/media/service-name/`
```bash
# Example: Matrix storage structure
/mnt/media/matrix/
├── synapse/
│   ├── data/            # Configuration and database
│   └── media/           # Uploaded media files
├── postgres/            # Database files
└── bridges/             # Bridge configurations
    ├── telegram/
    ├── whatsapp/
    └── googlechat/
```

**Always create subdirectories for:**
- Configuration files
- Database data
- User uploads/media
- Logs (if persistent)
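The subdirectory convention above can be bootstrapped with a short helper. A minimal sketch: `make_service_dirs` is a hypothetical function, and the Matrix example mirrors the tree shown above.

```bash
#!/usr/bin/env bash
# Create the conventional data skeleton for a new service under a storage root.
set -euo pipefail

make_service_dirs() {
  local root="$1" name="$2"; shift 2
  local sub
  for sub in "$@"; do
    mkdir -p "$root/$name/$sub"   # e.g. /mnt/media/matrix/synapse/data
  done
}

# Example, mirroring the Matrix layout above (not run here):
# make_service_dirs /mnt/media matrix synapse/data synapse/media postgres \
#   bridges/telegram bridges/whatsapp bridges/googlechat
```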
### 3. Network Architecture

**External Network:** `homelab`
- All services connect to this for Traefik routing
- Created externally, referenced as `external: true`

**Internal Networks:** `service-internal`
- For multi-container service communication
- Examples: `matrix-internal`, `booklore-internal`
- Use `driver: bridge`

```yaml
networks:
  homelab:
    external: true
  service-internal:
    driver: bridge
```

## Service Setup Pattern

### Directory Structure

Every service should have:
```
compose/services/service-name/
├── compose.yaml      # Docker Compose configuration
├── .env              # Environment variables and secrets
├── .gitignore        # Ignore data directories and secrets
├── README.md         # Complete documentation
├── QUICKSTART.md     # 5-step quick start guide
└── config-files/     # Service-specific configs (optional)
```

### Required Files

#### 1. compose.yaml

**Basic template:**
```yaml
services:
  service-name:
    image: vendor/service:latest
    container_name: service-name
    environment:
      - TZ=${TZ}
      - PUID=${PUID}
      - PGID=${PGID}
      # Service-specific vars
    volumes:
      - /mnt/media/service-name:/data
    restart: unless-stopped
    networks:
      - homelab
    labels:
      # Traefik routing
      traefik.enable: true
      traefik.docker.network: homelab

      # HTTP router
      traefik.http.routers.service-name.rule: Host(`service.fig.systems`)
      traefik.http.routers.service-name.entrypoints: websecure
      traefik.http.routers.service-name.tls.certresolver: letsencrypt
      traefik.http.services.service-name.loadbalancer.server.port: 8080

      # Homarr discovery
      homarr.name: Service Name
      homarr.group: Services
      homarr.icon: mdi:icon-name

networks:
  homelab:
    external: true
```

**With a database:**
```yaml
services:
  app:
    # ... app config
    depends_on:
      database:
        condition: service_healthy
    networks:
      - homelab
      - service-internal

  database:
    image: postgres:16-alpine  # or mariadb, redis, etc.
    container_name: service-database
    environment:
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: ${DB_NAME}
    volumes:
      - /mnt/media/service-name/db:/var/lib/postgresql/data
    restart: unless-stopped
    networks:
      - service-internal
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER}"]
      interval: 10s
      timeout: 5s
      retries: 5

networks:
  homelab:
    external: true
  service-internal:
    driver: bridge
```

#### 2. .env File

**Standard variables:**
```bash
# Domain Configuration
DOMAIN=fig.systems
SERVICE_DOMAIN=service.fig.systems
TRAEFIK_HOST=service.fig.systems

# System
TZ=America/Los_Angeles
PUID=1000
PGID=1000

# Database (if applicable)
DB_USER=service
DB_PASSWORD=<generated-password>
DB_NAME=service

# SMTP Configuration (Mailgun)
SMTP_HOST=smtp.mailgun.org
SMTP_PORT=587
SMTP_USER=noreply@fig.systems
SMTP_PASSWORD=<mailgun-smtp-password>
SMTP_FROM=Service Name <noreply@fig.systems>
# Optional SMTP settings
SMTP_TLS=true
SMTP_STARTTLS=true

# Service-specific secrets
SERVICE_SECRET_KEY=<generated-secret>
```

**Generate secrets:**
```bash
# Random hex (64 chars)
openssl rand -hex 32

# Base64 (32 bytes)
openssl rand -base64 32

# Alphanumeric (32 chars)
openssl rand -base64 24 | tr -d '/+=' | head -c 32
```

#### 3. .gitignore

**Standard pattern:**
```gitignore
# Service data (stored in /mnt/media/)
data/
config/
db/
logs/

# Environment secrets
.env

# Backup files
*.bak
*.backup
```

#### 4. README.md

**Structure:**
```markdown
# Service Name - Brief Description

One-paragraph overview of what the service does.

## Features

- ✅ Feature 1
- ✅ Feature 2
- ✅ Feature 3

## Access

**URL:** https://service.fig.systems
**Authentication:** [Authelia SSO | None | Basic Auth]

## Quick Start

### Deploy
\`\`\`bash
cd /home/eduardo_figueroa/homelab/compose/services/service-name
docker compose up -d
\`\`\`

### First-Time Setup
1. Step 1
2. Step 2
3. Step 3

## Configuration

### Environment Variables
Explain key .env variables

### Storage Locations
- `/mnt/media/service-name/data` - Application data
- `/mnt/media/service-name/uploads` - User uploads

## Usage Guide

Detailed usage instructions...

## Troubleshooting

Common issues and solutions...

## Maintenance

### Backup
Important directories to backup...

### Update
\`\`\`bash
docker compose pull
docker compose up -d
\`\`\`

## Links
- Documentation: https://...
- GitHub: https://...
```

#### 5. QUICKSTART.md

**Fast 5-step guide:**
```markdown
# Service Name - Quick Start

## Step 1: Deploy
\`\`\`bash
cd /path/to/service
docker compose up -d
\`\`\`

## Step 2: Access
Open https://service.fig.systems

## Step 3: Initial Setup
Quick setup steps...

## Step 4: Test
Verification steps...

## Common Commands
\`\`\`bash
# View logs
docker compose logs -f

# Restart
docker compose restart

# Stop
docker compose down
\`\`\`
```

## Traefik Integration

### Basic HTTP Routing

```yaml
labels:
  traefik.enable: true
  traefik.docker.network: homelab

  # Router
  traefik.http.routers.service.rule: Host(`service.fig.systems`)
  traefik.http.routers.service.entrypoints: websecure
  traefik.http.routers.service.tls.certresolver: letsencrypt

  # Service (port)
  traefik.http.services.service.loadbalancer.server.port: 8080
```

### With Custom Headers

```yaml
labels:
  # ... basic routing ...

  # Headers middleware
  traefik.http.middlewares.service-headers.headers.customrequestheaders.X-Forwarded-Proto: https
  traefik.http.middlewares.service-headers.headers.customresponseheaders.X-Frame-Options: SAMEORIGIN

  # Apply the middleware
  traefik.http.routers.service.middlewares: service-headers
```

### With Local-Only Access

```yaml
labels:
  # ... basic routing ...

  # Apply the local-only middleware (defined in Traefik)
  traefik.http.routers.service.middlewares: local-only
```

### Large Upload Support

```yaml
labels:
  # ... basic routing ...

  # Buffering middleware (268435456 bytes = 256 MiB request bodies)
  traefik.http.middlewares.service-buffering.buffering.maxRequestBodyBytes: 268435456
  traefik.http.middlewares.service-buffering.buffering.memRequestBodyBytes: 268435456
  traefik.http.middlewares.service-buffering.buffering.retryExpression: IsNetworkError() && Attempts() < 3

  # Apply the middleware
  traefik.http.routers.service.middlewares: service-buffering
```

## Authelia OIDC Integration

### 1. Generate a Client Secret

```bash
# Generate the plain secret
openssl rand -base64 32

# Hash it for Authelia
docker exec authelia authelia crypto hash generate pbkdf2 --password 'your-secret-here'
```

### 2. Add the Client to Authelia

Edit `/home/eduardo_figueroa/homelab/compose/core/authelia/config/configuration.yml`:

```yaml
identity_providers:
  oidc:
    clients:
      # Your Service
      - client_id: service-name
        client_name: Service Display Name
        client_secret: '$pbkdf2-sha512$310000$...'  # hashed secret
        authorization_policy: two_factor
        redirect_uris:
          - https://service.fig.systems/oauth/callback
        scopes:
          - openid
          - profile
          - email
        grant_types:
          - authorization_code
        response_types:
          - code
```

**For public clients (PKCE):**
```yaml
- client_id: service-name
  client_name: Service Name
  public: true  # no client_secret needed
  authorization_policy: two_factor
  require_pkce: true
  pkce_challenge_method: S256
  redirect_uris:
    - https://service.fig.systems/oauth/callback
  scopes:
    - openid
    - profile
    - email
    - offline_access  # for refresh tokens
  grant_types:
    - authorization_code
    - refresh_token
  response_types:
    - code
```

### 3. Configure the Service

**Standard OIDC configuration:**
```yaml
environment:
  OIDC_ENABLED: "true"
  OIDC_CLIENT_ID: "service-name"
  OIDC_CLIENT_SECRET: "plain-secret-here"
  OIDC_ISSUER: "https://auth.fig.systems"
  OIDC_AUTHORIZATION_ENDPOINT: "https://auth.fig.systems/api/oidc/authorization"
  OIDC_TOKEN_ENDPOINT: "https://auth.fig.systems/api/oidc/token"
  OIDC_USERINFO_ENDPOINT: "https://auth.fig.systems/api/oidc/userinfo"
  OIDC_JWKS_URI: "https://auth.fig.systems/jwks.json"
```

### 4. Restart Services

```bash
# Restart Authelia
cd compose/core/authelia
docker compose restart

# Start your service
cd compose/services/service-name
docker compose up -d
```

## SMTP/Email Configuration

### Mailgun SMTP

**Standard Mailgun configuration for all services:**

```bash
# In the .env file
SMTP_HOST=smtp.mailgun.org
SMTP_PORT=587
SMTP_USER=noreply@fig.systems
SMTP_PASSWORD=<your-mailgun-smtp-password>
SMTP_FROM=Service Name <noreply@fig.systems>
SMTP_TLS=true
SMTP_STARTTLS=true
```

**In compose.yaml:**
```yaml
environment:
  # SMTP settings
  SMTP_HOST: ${SMTP_HOST}
  SMTP_PORT: ${SMTP_PORT}
  SMTP_USER: ${SMTP_USER}
  SMTP_PASSWORD: ${SMTP_PASSWORD}
  SMTP_FROM: ${SMTP_FROM}
  # Some services use different variable names:
  # EMAIL_HOST: ${SMTP_HOST}
  # EMAIL_PORT: ${SMTP_PORT}
  # EMAIL_USER: ${SMTP_USER}
  # EMAIL_PASS: ${SMTP_PASSWORD}
  # EMAIL_FROM: ${SMTP_FROM}
```

**Common SMTP variable name variations:**

Different services use different environment variable names for SMTP configuration. Check the service documentation and use the appropriate format:

| Common Name | Alternative Names |
|-------------|-------------------|
| SMTP_HOST | EMAIL_HOST, MAIL_HOST, MAIL_SERVER |
| SMTP_PORT | EMAIL_PORT, MAIL_PORT |
| SMTP_USER | EMAIL_USER, MAIL_USER, SMTP_USERNAME, EMAIL_USERNAME |
| SMTP_PASSWORD | EMAIL_PASSWORD, EMAIL_PASS, MAIL_PASSWORD, SMTP_PASS |
| SMTP_FROM | EMAIL_FROM, MAIL_FROM, FROM_EMAIL, DEFAULT_FROM_EMAIL |
| SMTP_TLS | EMAIL_USE_TLS, MAIL_USE_TLS, SMTP_SECURE |
| SMTP_STARTTLS | EMAIL_USE_STARTTLS, MAIL_STARTTLS |
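The name mapping above can be applied mechanically, keeping one canonical set of SMTP values in `.env`. An illustrative sketch: `render_smtp_env` is a hypothetical helper (it relies on bash indirect expansion), not project tooling.

```bash
# Print a service's SMTP env lines using its preferred variable names.
# Canonical values come from the shared SMTP_* shell variables.
render_smtp_env() {
  # args: pairs of SERVICE_NAME=CANONICAL_NAME
  local pair svc canon
  for pair in "$@"; do
    svc="${pair%%=*}"
    canon="${pair#*=}"
    printf '%s=%s\n' "$svc" "${!canon}"   # ${!canon}: bash indirect expansion
  done
}

# Example: a service that expects EMAIL_* names (per the table above)
# SMTP_HOST=smtp.mailgun.org SMTP_PORT=587 \
#   render_smtp_env EMAIL_HOST=SMTP_HOST EMAIL_PORT=SMTP_PORT
```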
**Getting Mailgun SMTP credentials:**

1. Log into Mailgun dashboard: https://app.mailgun.com
2. Navigate to **Sending → Domain Settings → SMTP credentials**
3. Use the existing `noreply@fig.systems` user or create a new SMTP user
4. Copy the SMTP password and add it to your service's `.env` file

**Testing SMTP configuration:**

```bash
# Using swaks (SMTP test tool)
swaks --to test@example.com \
  --from noreply@fig.systems \
  --server smtp.mailgun.org:587 \
  --auth LOGIN \
  --auth-user noreply@fig.systems \
  --auth-password 'your-password' \
  --tls
```
## Database Patterns

### PostgreSQL

```yaml
postgres:
  image: postgres:16-alpine
  container_name: service-postgres
  environment:
    POSTGRES_USER: ${DB_USER}
    POSTGRES_PASSWORD: ${DB_PASSWORD}
    POSTGRES_DB: ${DB_NAME}
    POSTGRES_INITDB_ARGS: "--encoding=UTF-8 --lc-collate=C --lc-ctype=C"
  volumes:
    - /mnt/media/service-name/postgres:/var/lib/postgresql/data
  restart: unless-stopped
  networks:
    - service-internal
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U ${DB_USER}"]
    interval: 10s
    timeout: 5s
    retries: 5
```
### MariaDB

```yaml
mariadb:
  image: lscr.io/linuxserver/mariadb:latest
  container_name: service-mariadb
  environment:
    - PUID=${PUID}
    - PGID=${PGID}
    - TZ=${TZ}
    - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
    - MYSQL_DATABASE=${MYSQL_DATABASE}
    - MYSQL_USER=${MYSQL_USER}
    - MYSQL_PASSWORD=${MYSQL_PASSWORD}
  volumes:
    - /mnt/media/service-name/mariadb:/config
  restart: unless-stopped
  networks:
    - service-internal
  healthcheck:
    test: ["CMD", "mariadb-admin", "ping", "-h", "localhost"]
    interval: 5s
    timeout: 5s
    retries: 10
```
### Redis

```yaml
redis:
  image: redis:alpine
  container_name: service-redis
  command: redis-server --save 60 1 --loglevel warning
  volumes:
    - /mnt/media/service-name/redis:/data
  restart: unless-stopped
  networks:
    - service-internal
  healthcheck:
    test: ["CMD", "redis-cli", "ping"]
    interval: 10s
    timeout: 5s
    retries: 5
```
## Homarr Integration

**Add discovery labels to your service:**

```yaml
labels:
  homarr.name: Display Name
  homarr.group: Services  # or Media, Monitoring, AI, etc.
  homarr.icon: mdi:icon-name  # Material Design Icons
```

**Common groups:**

- `Services` - General applications
- `Media` - Media-related (Jellyfin, Immich)
- `AI` - AI/LLM services
- `Monitoring` - Monitoring tools
- `Automation` - *arr stack

**Find icons:** https://pictogrammers.com/library/mdi/
## Security Best Practices

### 1. Never Commit Secrets

**Always in .gitignore:**

- `.env` files
- Database directories
- Configuration files with credentials
- SSL certificates
- API keys

### 2. Use Authelia for External Access

Services exposed to the internet should use Authelia SSO with 2FA.

### 3. Local-Only Services

For sensitive services (backups, code editors), use the `local-only` middleware:

```yaml
traefik.http.routers.service.middlewares: local-only
```

### 4. Least Privilege

- Use non-root users in containers (`PUID`/`PGID`)
- Limit network access (internal networks)
- Read-only mounts where possible: `./config:/config:ro`

### 5. Secrets Generation

**Always generate unique secrets:**

```bash
# For each service
openssl rand -hex 32  # Different secret each time
```
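Generating secrets one at a time invites copy-paste reuse. A small sketch that stamps a fresh secret per variable into a service `.env` (the file path and variable names are illustrative, not a project convention):

```shell
#!/usr/bin/env sh
# Sketch: one unique secret per variable, appended to a service .env.
set -eu
ENV_FILE=/tmp/example.env
: > "$ENV_FILE"   # start from an empty file
for var in APP_SECRET SESSION_SECRET ENCRYPTION_KEY; do
  # openssl rand -hex 32 -> 64 hex characters, unique each call
  printf '%s=%s\n' "$var" "$(openssl rand -hex 32)" >> "$ENV_FILE"
done
echo "wrote $(wc -l < "$ENV_FILE") secrets to $ENV_FILE"
```

Re-running the script rotates all three values at once, which is also a convenient way to invalidate leaked credentials.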
## Common Patterns

### Multi-Stage Service Setup

**For services requiring initial config generation:**

1. Generate config:
   ```bash
   docker run --rm -v /path:/data image:latest --generate-config
   ```

2. Edit config files

3. Start service:
   ```bash
   docker compose up -d
   ```
### Bridge/Plugin Architecture

**For services with plugins/bridges:**

```yaml
# Main service
main-app:
  # ... config ...
  volumes:
    - /mnt/media/service/data:/data
    - ./registrations:/registrations:ro  # Plugin registrations

# Plugin 1
plugin-1:
  # ... config ...
  volumes:
    - /mnt/media/service/plugins/plugin-1:/data
  depends_on:
    main-app:
      condition: service_started
  networks:
    - service-internal
```
### Health Checks

**Always include health checks for databases:**

```yaml
healthcheck:
  test: ["CMD-SHELL", "command to test health"]
  interval: 10s
  timeout: 5s
  retries: 5
```

**Then use in depends_on:**

```yaml
depends_on:
  database:
    condition: service_healthy
```
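The same `interval`/`retries` semantics can be mirrored in plain shell, which is useful in deploy scripts that must wait for a dependency outside of Compose. A sketch, where `check_cmd` is a placeholder for the real probe (e.g. `pg_isready` or `redis-cli ping`):

```shell
#!/usr/bin/env sh
# Sketch: retry a health probe up to $retries times, $interval apart.
set -eu
check_cmd() { true; }   # stand-in probe that succeeds immediately

retries=5
interval=1
i=0
until check_cmd; do
  i=$((i + 1))
  if [ "$i" -ge "$retries" ]; then
    echo "unhealthy after $retries tries" >&2
    exit 1
  fi
  sleep "$interval"
done
echo healthy
```

With a real probe substituted in, this is the script-side equivalent of `condition: service_healthy`.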
## Troubleshooting Checklist

### Service Won't Start

1. Check logs:
   ```bash
   docker compose logs -f service-name
   ```

2. Verify environment variables:
   ```bash
   docker compose config
   ```

3. Check disk space:
   ```bash
   df -h /mnt/media
   ```

4. Verify network exists:
   ```bash
   docker network ls | grep homelab
   ```
### Can't Access via Domain

1. Check Traefik logs:
   ```bash
   docker logs traefik | grep service-name
   ```

2. Verify service is on homelab network:
   ```bash
   docker inspect service-name | grep -A 10 Networks
   ```

3. Test endpoint directly:
   ```bash
   curl -k https://service.fig.systems
   ```

4. Check DNS resolution:
   ```bash
   nslookup service.fig.systems
   ```
### OIDC Login Issues

1. Verify client secret matches in both Authelia and service
2. Check redirect URI exactly matches in Authelia config
3. Restart Authelia after config changes
4. Check Authelia logs:
   ```bash
   docker logs authelia | grep oidc
   ```
### Database Connection Issues

1. Verify database is healthy:
   ```bash
   docker compose ps
   ```

2. Check database logs:
   ```bash
   docker compose logs database
   ```

3. Test connection from app container:
   ```bash
   docker compose exec app ping database
   ```

4. Verify credentials match in .env and config
## Complete Service Template

See `compose/services/matrix/` for a complete example of:

- ✅ Multi-container setup (app + database + plugins)
- ✅ Authelia OIDC integration
- ✅ Traefik routing
- ✅ Comprehensive documentation
- ✅ Bridge/plugin architecture
- ✅ Health checks and dependencies
- ✅ Proper secret management
## AI Agent Guidelines

When setting up new services:

1. **Always create complete config files in /tmp/** for files requiring sudo access
2. **Follow the directory structure** exactly as shown above
3. **Generate unique secrets** for each service
4. **Create both README.md and QUICKSTART.md**
5. **Use the storage conventions** (`/mnt/media/service-name/`)
6. **Add Traefik labels** for automatic routing
7. **Include Homarr discovery labels**
8. **Set up health checks** for all databases
9. **Use internal networks** for multi-container communication
10. **Document troubleshooting steps** in README.md

### Files to Always Create in /tmp/

When you cannot write directly:

- Authelia configuration updates
- Traefik configuration changes
- System-level configuration files

**Format:**

```bash
/tmp/service-name-config-file.yml
```

Include clear instructions at the top:

```yaml
# Copy this file to:
# /path/to/actual/location
#
# Then run:
# sudo chmod 644 /path/to/actual/location
# docker compose restart
```
## Claude Code Skills

This repository includes custom skills for Claude Code to enhance productivity and maintain consistency.

### Available Skills

#### wiki-docs (Documentation Management)

**Purpose:** Create and manage markdown documentation files that automatically sync to Wiki.js

**Location:** `.claude/skills/wiki-docs.md`

**When to use:**

- Documenting new services or infrastructure changes
- Creating how-to guides or tutorials
- Recording configuration details for future reference
- Building a knowledge base for the homelab

**Repository:** `/mnt/media/wikijs-content/`
**Wiki URL:** https://wiki.fig.systems
**Git Remote:** `git.fig.systems/eddie/wiki.git`

**How it works:**

1. Markdown files are written to `/mnt/media/wikijs-content/`
2. Files are committed and pushed to the Git repository
3. Wiki.js automatically syncs changes (within 5 minutes)
4. Content appears at https://wiki.fig.systems

**Frontmatter format:**

```yaml
---
title: Page Title
description: Brief description
published: true
date: 2026-03-15T00:00:00.000Z
tags: tag1, tag2, tag3
editor: markdown
dateCreated: 2026-03-15T00:00:00.000Z
---
```

**Note:** Tags must be comma-separated, not YAML array format!
**Example usage:**

```bash
# Create documentation for a service
/mnt/media/wikijs-content/homelab/services/jellyfin.md

# Commit and push
cd /mnt/media/wikijs-content
git pull
git add homelab/services/jellyfin.md
git commit -m "Add: Jellyfin service documentation"
git push
```
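Hand-writing the frontmatter is error-prone (the comma-separated tags and ISO dates in particular). A sketch of a page generator that stamps valid frontmatter with the current UTC time (path, title, and tags are illustrative):

```shell
#!/usr/bin/env sh
# Sketch: generate a wiki page skeleton with valid Wiki.js frontmatter.
set -eu
PAGE=/tmp/example-page.md
NOW=$(date -u +%Y-%m-%dT%H:%M:%S.000Z)   # Wiki.js expects ISO 8601 UTC
cat > "$PAGE" <<EOF
---
title: Example Page
description: Generated page skeleton
published: true
date: $NOW
tags: homelab, docker
editor: markdown
dateCreated: $NOW
---

# Example Page
EOF
grep '^tags:' "$PAGE"
```

Note the tags line stays comma-separated, matching the format requirement above.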
**Benefits:**

- Version-controlled documentation
- Accessible via web interface (Wiki.js)
- Searchable and organized
- Supports markdown with frontmatter
- Automatic synchronization

### Using Skills

To invoke a skill in Claude Code, use the appropriate skill when the task matches its purpose. The wiki-docs skill is automatically available for documentation tasks.

## Resources

- **Traefik:** https://doc.traefik.io/traefik/
- **Authelia:** https://www.authelia.com/
- **Docker Compose:** https://docs.docker.com/compose/
- **Material Design Icons:** https://pictogrammers.com/library/mdi/
- **Wiki.js:** https://docs.requarks.io/

---

**Remember:** Consistency is key. Follow these patterns for all services to maintain a clean, predictable, and maintainable homelab infrastructure.
---

**CONTRIBUTING.md** (new file, 264 lines)

# Contributing Guide

Thank you for your interest in contributing to this homelab configuration! While this is primarily a personal repository, contributions are welcome.

## How to Contribute

### Reporting Issues

- Use the [bug report template](.github/ISSUE_TEMPLATE/bug-report.md) for bugs
- Use the [service request template](.github/ISSUE_TEMPLATE/service-request.md) for new services
- Search existing issues before creating a new one
- Provide as much detail as possible
### Submitting Changes

1. **Fork the repository**
2. **Create a feature branch**
   ```bash
   git checkout -b feature/your-feature-name
   ```
3. **Make your changes** following the guidelines below
4. **Test your changes** locally
5. **Commit with clear messages**
   ```bash
   git commit -m "feat: add new service"
   ```
6. **Push to your fork**
   ```bash
   git push origin feature/your-feature-name
   ```
7. **Open a Pull Request** using the PR template
## Guidelines

### File Naming

- All Docker Compose files must be named `compose.yaml` (not `.yml`)
- Use lowercase with hyphens for service directories (e.g., `calibre-web`)
- Environment files must be named `.env`

### Docker Compose Best Practices

- Use version-pinned images when possible
- Include health checks for databases and critical services
- Use bind mounts for configuration, named volumes for data
- Set proper restart policies (`unless-stopped` or `always`)
- Include resource limits for production services

### Network Configuration

- All services must use the `homelab` network (marked as `external: true`)
- Services with multiple containers should use an internal network
- Example:

  ```yaml
  networks:
    homelab:
      external: true
    service_internal:
      name: service_internal
      driver: bridge
  ```
### Traefik Labels

All web services must include:

```yaml
labels:
  traefik.enable: true
  traefik.http.routers.service.rule: Host(`service.fig.systems`) || Host(`service.edfig.dev`)
  traefik.http.routers.service.entrypoints: websecure
  traefik.http.routers.service.tls.certresolver: letsencrypt
  traefik.http.services.service.loadbalancer.server.port: 8080
  # Optional SSO:
  traefik.http.routers.service.middlewares: tinyauth
```
### Environment Variables

- Use `.env` files for configuration
- Never commit real passwords
- Use `changeme_*` prefix for placeholder passwords
- Document all required environment variables
- Include comments explaining non-obvious settings

### Documentation

- Add service to README.md service table
- Include deployment instructions
- Document any special configuration
- Add comments to compose files explaining purpose
- Include links to official documentation

### Security

- Never commit secrets
- Scan compose files for vulnerabilities
- Use official or well-maintained images
- Enable SSO when appropriate
- Document security considerations
## Code Style

### YAML Style

- 2-space indentation
- No trailing whitespace
- Use `true/false` instead of `yes/no`
- Quote strings with special characters
- Follow yamllint rules in `.yamllint.yml`

### Commit Messages

Follow [Conventional Commits](https://www.conventionalcommits.org/):

- `feat:` New feature
- `fix:` Bug fix
- `docs:` Documentation changes
- `refactor:` Code refactoring
- `security:` Security improvements
- `chore:` Maintenance tasks

Examples:

```
feat: add jellyfin media server
fix: correct traefik routing for sonarr
docs: update README with new services
security: update postgres to latest version
```
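The prefix list above can be enforced locally with a `commit-msg` hook. A sketch (a real hook would read the message from `"$1"`; the sample message here is hard-coded for illustration):

```shell
#!/usr/bin/env sh
# Sketch of a commit-msg hook: save as .git/hooks/commit-msg, chmod +x.
# Rejects messages that don't match the Conventional Commits prefixes
# used in this guide.
msg="feat: add jellyfin media server"   # a real hook uses: msg=$(cat "$1")
if printf '%s' "$msg" | grep -qE '^(feat|fix|docs|refactor|security|chore)(\([a-z-]+\))?: .+'; then
  echo "ok"
else
  echo "commit message must follow Conventional Commits" >&2
  exit 1
fi
```

The optional `(scope)` group also accepts messages like `fix(traefik): correct router rule`.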
## Testing

Before submitting a PR:

1. **Validate compose files**
   ```bash
   docker compose -f compose/path/to/compose.yaml config
   ```

2. **Check YAML syntax**
   ```bash
   yamllint compose/
   ```

3. **Test locally**
   ```bash
   docker compose up -d
   docker compose logs
   ```

4. **Check for secrets**
   ```bash
   git diff --cached | grep -i "password\|secret\|token"
   ```

5. **Run pre-commit hooks** (optional)
   ```bash
   pre-commit install
   pre-commit run --all-files
   ```
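The secret check in step 4 only inspects the staged diff; it can also be run over files on disk, excluding `.env` files where placeholders are expected. A sketch (the scanned directory and its contents are illustrative):

```shell
#!/usr/bin/env sh
# Sketch: flag likely secrets in tracked-style files, ignoring .env.
set -eu
mkdir -p /tmp/scan-demo
printf 'password: hunter2\n' > /tmp/scan-demo/compose.yaml

matches=$(grep -riE 'password|secret|token' /tmp/scan-demo | grep -v '\.env' || true)
if [ -n "$matches" ]; then
  echo "potential secrets found"
fi
```

Like the `git diff` version, this is a heuristic: it over-matches on words like `POSTGRES_PASSWORD:` in compose files, so treat hits as prompts for review rather than failures.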
## Pull Request Process

1. Fill out the PR template completely
2. Ensure all CI checks pass
3. Request review if needed
4. Address review feedback
5. Squash commits if requested
6. Wait for approval and merge

## CI/CD Checks

Your PR will be automatically checked for:

- Docker Compose validation
- YAML linting
- Security scanning
- Secret detection
- Documentation completeness
- Traefik configuration
- Network setup
- File naming conventions

Fix any failures before requesting review.
## Adding a New Service

1. Choose the correct category:
   - `compose/core/` - Infrastructure (Traefik, auth, etc.)
   - `compose/media/` - Media-related services
   - `compose/services/` - Utility services

2. Create service directory:
   ```bash
   mkdir -p compose/category/service-name
   ```

3. Create `compose.yaml`:
   - Include documentation header
   - Add Traefik labels
   - Configure networks
   - Set up volumes
   - Add health checks if applicable

4. Create `.env` if needed:
   - Use placeholder passwords
   - Document all variables
   - Include comments

5. Update README.md:
   - Add to service table
   - Include URL
   - Document deployment

6. Test deployment:
   ```bash
   cd compose/category/service-name
   docker compose up -d
   docker compose logs -f
   ```

7. Create PR with detailed description
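Steps 1-4 above can be scaffolded in one go. A sketch (the category, service name, and `/tmp/homelab` root are illustrative; in practice you would run this from the repository root):

```shell
#!/usr/bin/env sh
# Sketch: scaffold a new service directory with a compose.yaml skeleton
# and a placeholder .env.
set -eu
CATEGORY=services
NAME=example-app
DIR="/tmp/homelab/compose/$CATEGORY/$NAME"

mkdir -p "$DIR"
cat > "$DIR/compose.yaml" <<'EOF'
# Example App - purpose and link to upstream docs go here
services:
  example-app:
    image: example/app:1.0.0
    container_name: example-app
    restart: unless-stopped
    networks:
      - homelab
networks:
  homelab:
    external: true
EOF

# Placeholder password per the changeme_* convention
printf 'APP_PASSWORD=changeme_set_secure_password\n' > "$DIR/.env"
echo "scaffolded $DIR"
```

The skeleton deliberately leaves Traefik labels, volumes, and health checks to be filled in by hand, since those vary per service.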
## Project Structure

```
homelab/
├── .github/
│   ├── workflows/              # CI/CD workflows
│   ├── ISSUE_TEMPLATE/         # Issue templates
│   └── pull_request_template.md
├── compose/
│   ├── core/                   # Infrastructure services
│   ├── media/                  # Media services
│   └── services/               # Utility services
├── README.md                   # Main documentation
├── CONTRIBUTING.md             # This file
├── SECURITY.md                 # Security policy
└── .yamllint.yml               # YAML linting config
```
## Getting Help

- Check existing issues and PRs
- Review the README.md
- Examine similar services for examples
- Ask in PR comments

## License

By contributing, you agree that your contributions will be licensed under the same terms as the repository.

## Code of Conduct

- Be respectful and professional
- Focus on constructive feedback
- Help others learn and improve
- Keep discussions relevant

## Questions?

Open an issue with the question label or comment on an existing PR/issue.

Thank you for contributing! 🎉
---

**PR_REVIEW.md** (new file, 383 lines)

# Pull Request Review: Homelab GitOps Complete Setup

## 📋 PR Summary

**Branch:** `claude/gitops-home-services-011CUqEzDETA2BqAzYUcXtjt`
**Commits:** 2 main commits
**Files Changed:** 48 files (+2,469 / -300)
**Services Added:** 13 new services + 3 core infrastructure

## ✅ Overall Assessment: **APPROVE with Minor Issues**

This is an excellent, comprehensive implementation of a homelab GitOps setup. The changes demonstrate strong understanding of Docker best practices, security considerations, and infrastructure-as-code principles.

---
## 🎯 What This PR Does

### Core Infrastructure (NEW)

- ✅ Traefik v3.3 reverse proxy with Let's Encrypt
- ✅ LLDAP lightweight directory server
- ✅ Tinyauth SSO integration with LLDAP backend

### Media Services (13 services)

- ✅ Jellyfin, Jellyseerr, Immich
- ✅ Sonarr, Radarr, SABnzbd, qBittorrent
- ✅ Calibre-web, Booklore, FreshRSS, RSSHub

### Utility Services

- ✅ Linkwarden, Vikunja, LubeLogger, MicroBin, File Browser

### CI/CD Pipeline (NEW)

- ✅ 5 GitHub Actions workflows
- ✅ Security scanning (Gitleaks, Trivy)
- ✅ YAML/Markdown linting
- ✅ Docker Compose validation
- ✅ Documentation checks

---
## 💪 Strengths

### 1. **Excellent Infrastructure Design**

- Proper network isolation (homelab + service-specific internal networks)
- Consistent Traefik labeling across all services
- Dual domain support (fig.systems + edfig.dev)
- SSL/TLS with automatic Let's Encrypt certificate management

### 2. **Security Best Practices**

- ✅ Placeholder passwords using `changeme_*` format
- ✅ No real secrets committed
- ✅ SSO enabled on appropriate services
- ✅ Read-only media mounts where appropriate
- ✅ Proper PUID/PGID settings

### 3. **Docker Best Practices**

- ✅ Standardized to `compose.yaml` (removed `.yml`)
- ✅ Health checks on database services
- ✅ Proper dependency management (depends_on)
- ✅ Consistent restart policies
- ✅ Container naming conventions

### 4. **Comprehensive Documentation**

- ✅ Detailed README with service table
- ✅ Deployment instructions
- ✅ Security policy (SECURITY.md)
- ✅ Contributing guidelines (CONTRIBUTING.md)
- ✅ Comments in compose files

### 5. **Robust CI/CD**

- ✅ Multi-layered validation
- ✅ Security scanning
- ✅ Documentation verification
- ✅ Auto-labeling
- ✅ PR templates

---
## ⚠️ Issues Found

### 🔴 Critical Issues: 0

### 🟡 High Priority Issues: 1

**1. Nginx Proxy Manager Not Removed/Migrated**

- **File:** `compose/core/nginxproxymanager/compose.yml`
- **Issue:** Template file still exists with `.yml` extension and no configuration
- **Impact:** Will fail CI validation workflow
- **Recommendation:**

  ```bash
  # Option 1: Remove if not needed (Traefik replaces it)
  rm -rf compose/core/nginxproxymanager/

  # Option 2: Configure if needed alongside Traefik
  # Move to compose.yaml and configure properly
  ```
### 🟠 Medium Priority Issues: 3

**2. Missing Password Synchronization Documentation**

- **Files:** `compose/core/lldap/.env`, `compose/core/tinyauth/.env`
- **Issue:** Password must match between LLDAP and Tinyauth, not clearly documented
- **Recommendation:** Add a note in both .env files:

  ```bash
  # IMPORTANT: This password must match LLDAP_LDAP_USER_PASS in ../lldap/.env
  LDAP_BIND_PASSWORD=changeme_please_set_secure_password
  ```

**3. Vikunja Database Password Duplication**

- **File:** `compose/services/vikunja/compose.yaml`
- **Issue:** Database password defined in two places (can get out of sync)
- **Recommendation:** Use `.env` file for Vikunja service

  ```yaml
  env_file: .env
  environment:
    VIKUNJA_DATABASE_PASSWORD: ${POSTGRES_PASSWORD}
  ```

**4. Immich External Photo Library Mounting**

- **File:** `compose/media/frontend/immich/compose.yaml`
- **Issue:** Added `/media/photos` mount, but Immich uses `UPLOAD_LOCATION` for primary storage
- **Recommendation:** Document that `/media/photos` is for external library import only
### 🔵 Low Priority / Nice-to-Have: 5

**5. Inconsistent Timezone**

- **Files:** Various compose files
- **Issue:** Some services use `America/Los_Angeles`, others don't specify
- **Recommendation:** Standardize timezone across all services or use `.env`

**6. Booklore Image May Not Exist**

- **File:** `compose/services/booklore/compose.yaml`
- **Issue:** Using `ghcr.io/lorebooks/booklore:latest` - verify this image exists
- **Recommendation:** Test image availability before deployment

**7. Port Conflicts Possible**

- **Issue:** Several services expose ports that may conflict
  - Traefik: 80, 443
  - Jellyfin: 8096, 7359
  - Immich: 2283
  - qBittorrent: 6881
- **Recommendation:** Document port requirements in README

**8. Missing Resource Limits**

- **Issue:** No CPU/memory limits defined
- **Impact:** Services could consume excessive resources
- **Recommendation:** Add resource limits in production:

  ```yaml
  deploy:
    resources:
      limits:
        cpus: '1.0'
        memory: 1G
  ```

**9. GitHub Actions May Need Secrets**

- **File:** `.github/workflows/security-checks.yml`
- **Issue:** Some workflows assume `GITHUB_TOKEN` is available
- **Recommendation:** Document required GitHub secrets in README

---
## 📊 Code Quality Metrics

| Metric | Score | Notes |
|--------|-------|-------|
| **Documentation** | ⭐⭐⭐⭐⭐ | Excellent README, SECURITY.md, CONTRIBUTING.md |
| **Security** | ⭐⭐⭐⭐½ | Great practices, minor password sync issue |
| **Consistency** | ⭐⭐⭐⭐⭐ | Uniform structure across all services |
| **Best Practices** | ⭐⭐⭐⭐⭐ | Follows Docker/Compose standards |
| **CI/CD** | ⭐⭐⭐⭐⭐ | Comprehensive validation pipeline |
| **Maintainability** | ⭐⭐⭐⭐⭐ | Well-organized, easy to extend |

---
## 🔍 Detailed Review by Category

### Core Infrastructure

#### Traefik (`compose/core/traefik/compose.yaml`)

✅ **Excellent**

- Proper entrypoint configuration
- HTTP to HTTPS redirect
- Let's Encrypt email configured
- Dashboard with SSO protection
- Log level appropriate for production

**Suggestion:** Consider adding access log retention:

```yaml
- --accesslog.filepath=/var/log/traefik/access.log
- --accesslog.bufferingsize=100
```

#### LLDAP (`compose/core/lldap/compose.yaml`)

✅ **Good**

- Clean configuration
- Proper volume mounts
- Environment variables in .env

**Minor Issue:** Base DN is `dc=fig,dc=systems` but domain is `fig.systems` - this is correct but document why.

#### Tinyauth (`compose/core/tinyauth/compose.yaml`)

✅ **Good**

- LDAP integration properly configured
- Forward auth middleware defined
- Session management configured

**Issue:** Depends on LLDAP - add `depends_on` if deploying together.
### Media Services

#### Jellyfin ✅ **Excellent**

- Proper media folder mappings
- GPU transcoding option documented
- Traefik labels complete
- SSO middleware commented (correct for service with own auth)

#### Sonarr/Radarr ✅ **Good**

- Download folder mappings correct
- Consistent configuration
- Proper network isolation

**Suggestion:** Add Traefik rate limiting for public endpoints:

```yaml
traefik.http.middlewares.sonarr-ratelimit.ratelimit.average: 10
```

#### Immich ⭐ **Very Good**

- Multi-container setup properly configured
- Internal network for database/redis
- Health checks present
- Machine learning container included

**Question:** Does `/media/photos` need write access? Currently read-only.
### Utility Services
|
||||||
|
|
||||||
|
#### Linkwarden/Vikunja ✅ **Excellent**
|
||||||
|
- Multi-service stacks well organized
|
||||||
|
- Database health checks
|
||||||
|
- Internal networks isolated
|
||||||
|
|
||||||
|
#### File Browser ⚠️ **Needs Review**
|
||||||
|
- Mounts entire `/media` to `/srv`
|
||||||
|
- This gives access to ALL media folders
|
||||||
|
- Consider if this is intentional or security risk
|
||||||
|
|
||||||
|
### CI/CD Pipeline
|
||||||
|
|
||||||
|
#### GitHub Actions Workflows ⭐⭐⭐⭐⭐ **Outstanding**
|
||||||
|
- Comprehensive validation
|
||||||
|
- Security scanning with multiple tools
|
||||||
|
- Documentation verification
|
||||||
|
- Auto-labeling
|
||||||
|
|
||||||
|
**One Issue:** `docker-compose-validation.yml` line 30 assumes `homelab` network exists for validation. This will fail on CI runners.
|
||||||
|
|
||||||
|
**Fix:**
|
||||||
|
```yaml
|
||||||
|
# Skip network existence validation, only check syntax
|
||||||
|
if docker compose -f "$file" config --quiet 2>/dev/null; then
|
||||||
|
```
|
||||||
|
|
||||||
|
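An alternative to skipping the check is to create the external network on the runner before validation; a sketch of such a workflow step (step name and placement are assumptions):

```yaml
- name: Create external homelab network for validation
  run: docker network create homelab || true
```

The `|| true` keeps the step green if the network already exists.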
---
## 🧪 Testing Performed

Based on the implementation, these tests should be performed:

### ✅ Automated Tests (Will Run via CI)
- [x] YAML syntax validation
- [x] Compose file structure
- [x] Secret scanning
- [x] Documentation links

### ⏳ Manual Tests Required
- [ ] Deploy Traefik and verify dashboard
- [ ] Deploy LLDAP and create a test user
- [ ] Configure Tinyauth with LLDAP
- [ ] Deploy a test service and verify SSO
- [ ] Verify SSL certificate generation
- [ ] Test dual-domain access (fig.systems + edfig.dev)
- [ ] Verify media folder permissions (PUID/PGID)
- [ ] Test service interdependencies
- [ ] Verify health checks work
- [ ] Test backup/restore procedures

---
## 📝 Recommendations

### Before Merge:
1. **Fix nginxproxymanager issue** - Remove or migrate to compose.yaml
2. **Add password sync documentation** - Clarify the LLDAP <-> Tinyauth password relationship
3. **Test Booklore image** - Verify the container image exists

### After Merge:
4. Create follow-up issues for:
   - Adding resource limits
   - Implementing a backup strategy
   - Setting up monitoring (Prometheus/Grafana)
   - Creating a deployment automation script
   - Testing disaster recovery

### Documentation Updates:
5. Add a deployment troubleshooting section
6. Document port requirements in the README
7. Add a network topology diagram
8. Create a quick-start guide

---

## 🎯 Action Items

### For PR Author:
- [ ] Remove or fix `compose/core/nginxproxymanager/compose.yml`
- [ ] Add password synchronization notes to .env files
- [ ] Verify the Booklore Docker image exists
- [ ] Test at least the core infrastructure deployment locally
- [ ] Update the README with port requirements

### For Reviewers:
- [ ] Verify no secrets in committed files
- [ ] Check Traefik configuration security
- [ ] Review network isolation
- [ ] Validate domain configuration

---
## 💬 Questions for PR Author

1. **Nginx Proxy Manager**: Is this service still needed, or can it be removed now that Traefik is the reverse proxy?

2. **Media Folder Permissions**: Have you verified that the host uses PUID=1000, PGID=1000 for the media folders?

3. **Backup Strategy**: What's the plan for backing up:
   - LLDAP user database
   - Service configurations
   - Application databases (Postgres)

4. **Monitoring**: Plans for adding monitoring/alerting (Grafana, Uptime Kuma, etc.)?

5. **Testing**: Have you tested the full deployment flow on a clean system?

---
## 🚀 Deployment Readiness

| Category | Status | Notes |
|----------|--------|-------|
| **Code Quality** | ✅ Ready | Minor issues noted above |
| **Security** | ✅ Ready | Proper secrets management |
| **Documentation** | ✅ Ready | Comprehensive docs provided |
| **Testing** | ⚠️ Partial | Needs manual deployment testing |
| **CI/CD** | ✅ Ready | Workflows will validate future changes |

---
## 🎉 Conclusion

This is an **excellent PR** that demonstrates:
- Strong understanding of Docker/Compose best practices
- Thoughtful security considerations
- Comprehensive documentation
- A robust CI/CD pipeline

The issues found are minor and easily addressable. The codebase is well-structured and maintainable.

**Recommendation: APPROVE** after fixing the nginxproxymanager issue.

---

## 📚 Additional Resources

For future enhancements, consider:
- [Awesome Selfhosted](https://github.com/awesome-selfhosted/awesome-selfhosted)
- [Docker Security Best Practices](https://cheatsheetseries.owasp.org/cheatsheets/Docker_Security_Cheat_Sheet.html)
- [Traefik Best Practices](https://doc.traefik.io/traefik/getting-started/quick-start/)

---

**Review Date:** 2025-11-05
**Reviewer:** Claude (Automated Code Review)
**Status:** ✅ **APPROVED WITH CONDITIONS**
144 SECURITY.md Normal file
@@ -0,0 +1,144 @@
# Security Policy

## Supported Versions

This is a personal homelab configuration repository. The latest commit on `main` is always the supported version.

| Branch | Supported |
| ------ | ------------------ |
| main | :white_check_mark: |
| other | :x: |

## Security Considerations

### Secrets Management

**DO NOT commit secrets to this repository!**

- All passwords in `.env` files should use placeholder values (e.g., `changeme_*`)
- Real passwords should only be set in your local deployment
- Use environment variables or Docker secrets for sensitive data
- Never commit files containing real credentials
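For example, a committed `.env` would carry only placeholders (variable names here are illustrative):

```bash
# Placeholder values only - set real secrets locally, never commit them
LLDAP_ADMIN_PASSWORD=changeme_lldap_admin
POSTGRES_PASSWORD=changeme_postgres
```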
### Container Security

- All container images are scanned for vulnerabilities via GitHub Actions
- HIGH and CRITICAL vulnerabilities are reported in security scans
- Keep images up to date by pulling latest versions regularly
- Review security scan results before deploying

### Network Security

- All services are behind the Traefik reverse proxy
- SSL/TLS is enforced via Let's Encrypt
- Internal services use isolated Docker networks
- SSO is enabled on most services via Tinyauth

### Authentication

- LLDAP provides centralized user management
- Tinyauth handles SSO authentication
- Services with built-in authentication are documented in the README
- Change all default passwords before deployment

## Reporting a Vulnerability

If you discover a security vulnerability in this configuration:

1. **DO NOT** open a public issue
2. Contact the repository owner directly via GitHub private message
3. Include:
   - Description of the vulnerability
   - Steps to reproduce
   - Potential impact
   - Suggested fix (if any)

### What to Report

- Exposed secrets or credentials
- Insecure configurations
- Vulnerable container images (not already detected by CI)
- Authentication bypasses
- Network security issues

### What NOT to Report

- Issues with third-party services (report to their maintainers)
- Theoretical vulnerabilities without proof of concept
- Social engineering attempts

## Security Best Practices

### Before Deployment

1. **Change all passwords** in `.env` files
2. **Review** all service configurations
3. **Update** container images to latest versions
4. **Configure** the firewall to only allow ports 80/443
5. **Enable** automatic security updates on the host OS

### After Deployment

1. **Monitor** logs regularly for suspicious activity
2. **Update** services monthly (at minimum)
3. **Backup** data regularly
4. **Review** access logs
5. **Test** disaster recovery procedures

### Network Hardening

- Use a firewall (ufw, iptables, etc.)
- Only expose ports 80 and 443 to the internet
- Consider using a VPN for administrative access
- Enable fail2ban or similar intrusion prevention
- Use strong DNS providers with DNSSEC

### Container Hardening

- Run containers as non-root when possible
- Use read-only filesystems where applicable
- Limit container resources (CPU, memory)
- Enable security options (no-new-privileges, etc.)
- Regularly scan for vulnerabilities
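These hardening options map onto Compose keys roughly as follows; a sketch, not a drop-in config (service name and limit values are illustrative):

```yaml
services:
  example:
    user: "1000:1000"            # run as a non-root UID/GID
    read_only: true              # read-only root filesystem
    security_opt:
      - no-new-privileges:true   # block privilege escalation
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 512M
```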
## Automated Security Scanning

This repository includes automated security scanning:

- **Gitleaks**: Detects secrets in commits
- **Trivy**: Scans container images for vulnerabilities
- **YAML Linting**: Ensures proper configuration
- **Dependency Review**: Checks for vulnerable dependencies

Review GitHub Actions results before merging PRs.
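A minimal sketch of such a workflow job; the action names and inputs are assumptions to verify against the GitHub Marketplace:

```yaml
jobs:
  security-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0              # Gitleaks scans the full history
      - name: Secret scan
        uses: gitleaks/gitleaks-action@v2
      - name: Image vulnerability scan
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: traefik:v3.6.2   # example image from this repo
          severity: HIGH,CRITICAL
```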
## Compliance

This is a personal homelab configuration and does not claim compliance with any specific security standard. However, it follows general security best practices:

- Principle of least privilege
- Defense in depth
- Secure by default
- Regular updates and patching

## External Dependencies

Security of this setup depends on:

- Docker and Docker Compose security
- Container image maintainers
- Traefik security
- LLDAP security
- Host OS security

Always keep these dependencies up to date.

## Disclaimer

This configuration is provided "as is" without warranty. Use at your own risk. The maintainer is not responsible for any security incidents resulting from the use of this configuration.

## Additional Resources

- [Docker Security Best Practices](https://docs.docker.com/engine/security/)
- [Traefik Security Documentation](https://doc.traefik.io/traefik/https/overview/)
- [OWASP Container Security](https://cheatsheetseries.owasp.org/cheatsheets/Docker_Security_Cheat_Sheet.html)
@@ -1,54 +0,0 @@
# Authelia - Single Sign-On & Two-Factor Authentication
# Docs: https://www.authelia.com/

services:
  authelia:
    container_name: authelia
    image: authelia/authelia:latest
    restart: unless-stopped

    env_file:
      - .env

    volumes:
      - ./config:/config

    networks:
      - homelab

    labels:
      # Traefik
      traefik.enable: true
      traefik.docker.network: homelab

      # Main Authelia portal
      traefik.http.routers.authelia.rule: Host(`auth.fig.systems`)
      traefik.http.routers.authelia.entrypoints: websecure
      traefik.http.routers.authelia.tls.certresolver: letsencrypt
      traefik.http.services.authelia.loadbalancer.server.port: 9091

      # Forward Auth Middleware (for services without native OIDC)
      traefik.http.middlewares.authelia.forwardAuth.address: http://authelia:9091/api/verify?rd=https%3A%2F%2Fauth.fig.systems%2F
      traefik.http.middlewares.authelia.forwardAuth.trustForwardHeader: true
      traefik.http.middlewares.authelia.forwardAuth.authResponseHeaders: Remote-User,Remote-Groups,Remote-Name,Remote-Email

  redis:
    container_name: authelia-redis
    image: redis:alpine
    restart: unless-stopped

    volumes:
      - redis-data:/data

    networks:
      - homelab

    command: redis-server --save 60 1 --loglevel warning

networks:
  homelab:
    external: true

volumes:
  redis-data:
    driver: local
@@ -1,11 +0,0 @@
# CrowdSec Configuration
# Copy this file to .env and customize

# Timezone
TZ=America/Los_Angeles

# Optional: Disable metrics/telemetry
# DISABLE_ONLINE_API=true

# Optional: Log level (info, debug, warning, error)
# LOG_LEVEL=info
@@ -1,453 +0,0 @@
# CrowdSec - Collaborative Security Engine

CrowdSec is a free, open-source Intrusion Prevention System (IPS) that analyzes logs and blocks malicious IPs based on behavior analysis and community threat intelligence.

## Features

- Behavior-based detection - detects attacks from log patterns
- Community threat intelligence - shares and receives IP reputation data
- Traefik integration - protects all web services via plugin
- SQLite database - no separate database container needed
- Local network whitelist - prevents self-blocking (10.0.0.0/16)
- Multiple scenarios - HTTP attacks, brute force, scanners, etc.
- Optional dashboard - web UI at crowdsec.fig.systems

## Access

**Dashboard URL:** https://crowdsec.fig.systems (protected by Authelia)
**LAPI:** http://crowdsec:8080 (internal only, used by Traefik plugin)
## Quick Start

### Initial Deployment

1. **Deploy CrowdSec:**
   ```bash
   cd /home/eduardo_figueroa/homelab/compose/core/crowdsec
   docker compose up -d
   ```

2. **Wait for initialization** (30-60 seconds):
   ```bash
   docker logs crowdsec -f
   ```

   Look for: "CrowdSec service: crowdsec up and running"

3. **Generate Bouncer API Key:**
   ```bash
   docker exec crowdsec cscli bouncers add traefik-bouncer
   ```

   **Important:** Copy the API key shown. It will look like:
   ```
   API key for 'traefik-bouncer':
   a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6q7r8s9t0
   ```

4. **Add API key to Traefik:**
   ```bash
   cd /home/eduardo_figueroa/homelab/compose/core/traefik
   nano .env
   ```

   Update the line:
   ```bash
   CROWDSEC_BOUNCER_KEY=a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6q7r8s9t0
   ```

5. **Restart Traefik to load the plugin:**
   ```bash
   docker compose restart
   ```

6. **Verify plugin connection:**
   ```bash
   docker logs traefik 2>&1 | grep -i crowdsec
   ```

   You should see: "Plugin crowdsec-bouncer-traefik-plugin loaded"

### Apply CrowdSec Middleware to Services

Edit service compose.yaml files to add the CrowdSec middleware:

**Example - Jellyfin:**
```yaml
labels:
  traefik.http.routers.jellyfin.middlewares: crowdsec
```

**Example - With Authelia chain:**
```yaml
labels:
  traefik.http.routers.service.middlewares: crowdsec,authelia
```

**Recommended for:**
- Publicly accessible services (jellyfin, jellyseer, etc.)
- Services without rate limiting
- High-value targets (admin panels, databases)

**Skip for:**
- Traefik dashboard (already local-only)
- Strictly local services (no external access)
## Management Commands

### View Decisions (Active Bans)

```bash
# List all active bans
docker exec crowdsec cscli decisions list

# List bans with details
docker exec crowdsec cscli decisions list -o json
```

### View Alerts (Detected Attacks)

```bash
# Recent alerts
docker exec crowdsec cscli alerts list

# Detailed alert view
docker exec crowdsec cscli alerts inspect <alert_id>
```

### Whitelist an IP

```bash
# One-off unban (removes the active decision; not a permanent whitelist)
docker exec crowdsec cscli decisions delete --ip 1.2.3.4
```

For a permanent whitelist, add the IP to `config/local_whitelist.yaml`:

```yaml
whitelist:
  reason: "Trusted service"
  cidr:
    - "1.2.3.4/32"
```

Then restart CrowdSec:
```bash
docker compose restart
```

### Ban an IP Manually

```bash
# Ban for 4 hours
docker exec crowdsec cscli decisions add --ip 1.2.3.4 --duration 4h --reason "Manual ban"

# Longer ban (24 hours)
docker exec crowdsec cscli decisions add --ip 1.2.3.4 --duration 24h --reason "Malicious actor"
```

### View Installed Collections

```bash
docker exec crowdsec cscli collections list
```

### Install Additional Collections

```bash
# WordPress protection
docker exec crowdsec cscli collections install crowdsecurity/wordpress

# SSH brute force (if exposing SSH)
docker exec crowdsec cscli collections install crowdsecurity/sshd

# Apply changes
docker compose restart
```

### View Bouncer Status

```bash
# List bouncers
docker exec crowdsec cscli bouncers list

# Should show traefik-bouncer with a last_pull timestamp
```

### View Metrics

```bash
# CrowdSec metrics
docker exec crowdsec cscli metrics

# Show parser statistics
docker exec crowdsec cscli metrics show parsers

# Show scenario statistics
docker exec crowdsec cscli metrics show scenarios
```
## Configuration Files

### acquis.yaml

Defines log sources for CrowdSec to monitor:

```yaml
filenames:
  - /var/log/traefik/access.log
labels:
  type: traefik
```

**Modify to add more log sources:**
```yaml
---
filenames:
  - /var/log/traefik/access.log
labels:
  type: traefik
---
filenames:
  - /var/log/nginx/access.log
labels:
  type: nginx
```

After changes:
```bash
docker compose restart
```

### local_whitelist.yaml

Whitelists trusted IPs/CIDRs:

```yaml
whitelist:
  reason: "Local network and trusted infrastructure"
  cidr:
    - "10.0.0.0/16"
    - "127.0.0.1/32"
```

**Add more entries:**
```yaml
cidr:
  - "10.0.0.0/16"
  - "192.168.1.100/32" # Trusted admin IP
```

After changes:
```bash
docker compose restart
```
## Installed Collections

### crowdsecurity/traefik
Parsers and scenarios for Traefik-specific attacks:
- Path traversal attempts
- SQLi in query strings
- XSS attempts
- Admin panel scanning

### crowdsecurity/base-http-scenarios
Generic HTTP attack scenarios:
- Brute force (login attempts)
- Credential stuffing
- Directory enumeration
- Sensitive file access attempts

### crowdsecurity/whitelist-good-actors
Whitelists known good actors:
- Search engine bots (Google, Bing, etc.)
- Monitoring services (UptimeRobot, Pingdom)
- CDN providers (Cloudflare, etc.)
## Integration with Traefik

### How It Works

1. **Traefik receives a request** → checks the CrowdSec plugin middleware
2. **Plugin queries the CrowdSec LAPI** → "Is this IP banned?"
3. **CrowdSec responds:**
   - Not banned → request proceeds to the service
   - Banned → returns 403 Forbidden
4. **Traefik logs the request** → saved to /var/log/traefik/access.log
5. **CrowdSec analyzes logs** → detects attack patterns
6. **CrowdSec makes a decision** → ban IP or alert
7. **Plugin updates its cache** → every 60 seconds (stream mode)

### Stream Mode

The plugin uses **stream mode** for optimal performance:
- **Live mode:** queries the LAPI on every request (high latency)
- **Stream mode:** maintains a local cache, updated every 60s (low latency)
- **Alone mode:** no LAPI connection, local decisions only

**Current config:** stream mode with 60s updates

### Middleware Chain Order

When chaining middlewares, order matters:

```yaml
# Correct: CrowdSec first, then Authelia
traefik.http.routers.service.middlewares: crowdsec,authelia

# Also valid: CrowdSec after rate limiting
traefik.http.routers.service.middlewares: ratelimit,crowdsec
```

**Recommended order:**
1. Rate limiting (if any)
2. CrowdSec (block banned IPs early)
3. Authelia (authentication for allowed IPs)
## Troubleshooting

### CrowdSec Not Blocking Malicious IPs

**Check decisions:**
```bash
docker exec crowdsec cscli decisions list
```

If empty, CrowdSec isn't detecting attacks.

**Check alerts:**
```bash
docker exec crowdsec cscli alerts list
```

If empty, logs aren't being parsed.

**Verify log parsing:**
```bash
docker exec crowdsec cscli metrics show acquisitions
```

This should show the Traefik log file being read.

**Check acquis.yaml:**
```bash
docker exec crowdsec cat /etc/crowdsec/acquis.yaml
```

### Traefik Plugin Not Connecting

**Check Traefik logs:**
```bash
docker logs traefik 2>&1 | grep -i crowdsec
```

**Common issues:**
- API key not set in .env
- CrowdSec container not running
- Network connectivity (both containers must be on the homelab network)

**Test connection:**
```bash
docker exec traefik wget -O- http://crowdsec:8080/v1/decisions/stream
```

This should return JSON (it may be unauthorized, but that proves the connection works).

### Traefik Not Loading Plugin

**Check Traefik startup logs:**
```bash
docker logs traefik | head -50
```

Look for:
- "Plugin crowdsec-bouncer-traefik-plugin loaded"
- "experimental.plugins" enabled

**Verify traefik.yml:**
```bash
docker exec traefik cat /etc/traefik/traefik.yml
```

Ensure the experimental.plugins section exists.

### Accidentally Banned Yourself

**Quick unban:**
```bash
docker exec crowdsec cscli decisions delete --ip YOUR_IP_HERE
```

**Permanent whitelist:**

Edit `/home/eduardo_figueroa/homelab/compose/core/crowdsec/config/local_whitelist.yaml`:

```yaml
whitelist:
  cidr:
    - "YOUR_IP/32"
```

Restart:
```bash
docker compose restart
```

### Logs Not Being Parsed

**Check log file permissions:**
```bash
ls -la /home/eduardo_figueroa/homelab/compose/core/traefik/logs/
```

**Check that CrowdSec can read the logs:**
```bash
docker exec crowdsec ls -la /var/log/traefik/
docker exec crowdsec tail /var/log/traefik/access.log
```

**Check acquisitions:**
```bash
docker exec crowdsec cscli metrics show acquisitions
```

This should show lines read from access.log.
## Best Practices

1. **Monitor metrics weekly:**
   ```bash
   docker exec crowdsec cscli metrics
   ```

2. **Review decisions periodically:**
   Check for false positives.

3. **Keep collections updated:**
   ```bash
   docker exec crowdsec cscli collections upgrade --all
   docker compose restart
   ```

4. **Back up the database:**
   ```bash
   cp -r /home/eduardo_figueroa/homelab/compose/core/crowdsec/db/ /backup/location/
   ```

5. **Test changes in staging:**
   Before applying them to production services.

6. **Use the whitelist liberally:**
   Better to whitelist trusted IPs than to deal with lockouts.

7. **Chain with Authelia:**
   Defense in depth - CrowdSec blocks bad actors, Authelia handles authentication.

## Links

- **Official Docs:** https://docs.crowdsec.net/
- **Traefik Plugin:** https://plugins.traefik.io/plugins/6335346ca4caa9ddeffda116/crowdsec-bouncer-traefik-plugin
- **Collections Hub:** https://app.crowdsec.net/hub/collections
- **Community Forum:** https://discourse.crowdsec.net/
- **GitHub:** https://github.com/crowdsecurity/crowdsec
@@ -1,73 +0,0 @@
# CrowdSec - Collaborative IPS/IDS
# Docs: https://docs.crowdsec.net/

services:
  crowdsec:
    container_name: crowdsec
    image: crowdsecurity/crowdsec:latest
    restart: unless-stopped

    env_file:
      - .env

    environment:
      # Timezone
      TZ: America/Los_Angeles

      # Collections to install on first run
      COLLECTIONS: >-
        crowdsecurity/traefik
        crowdsecurity/base-http-scenarios
        crowdsecurity/whitelist-good-actors

      # Disable online API for local-only mode (optional)
      # DISABLE_ONLINE_API: "true"

    volumes:
      # Configuration persistence
      - ./config/acquis.yaml:/etc/crowdsec/acquis.yaml:ro
      - ./config/local_whitelist.yaml:/etc/crowdsec/parsers/s02-enrich/local_whitelist.yaml:ro

      # Database persistence (SQLite)
      - ./db:/var/lib/crowdsec/data

      # Traefik logs (read-only, shared with Traefik)
      - ../traefik/logs:/var/log/traefik:ro

      # Configuration directory (for runtime config)
      - crowdsec-config:/etc/crowdsec

    networks:
      - homelab

    # Expose 8080 only for metrics/dashboard (optional)
    # Not exposed to host by default for security
    # ports:
    #   - "8080:8080"

    labels:
      # Traefik - Optional: Expose CrowdSec dashboard
      traefik.enable: true
      traefik.docker.network: homelab

      # CrowdSec Dashboard
      traefik.http.routers.crowdsec.rule: Host(`crowdsec.fig.systems`)
      traefik.http.routers.crowdsec.entrypoints: websecure
      traefik.http.routers.crowdsec.tls.certresolver: letsencrypt
      traefik.http.services.crowdsec.loadbalancer.server.port: 8080

      # Protect with Authelia
      traefik.http.routers.crowdsec.middlewares: authelia

      # Homarr Discovery
      homarr.name: CrowdSec
      homarr.group: Security
      homarr.icon: mdi:shield-check

networks:
  homelab:
    external: true

volumes:
  crowdsec-config:
    driver: local
@@ -2,27 +2,34 @@ services:
   traefik:
     container_name: traefik
     image: traefik:v3.6.2
-    env_file:
-      - .env
-    # Static configuration file
     command:
-      - --configFile=/etc/traefik/traefik.yml
+      # API Settings
+      - --api.dashboard=true
+      # Provider Settings
+      - --providers.docker=true
+      - --providers.docker.exposedbydefault=false
+      - --providers.docker.network=homelab
+      # Entrypoints
+      - --entrypoints.web.address=:80
+      - --entrypoints.websecure.address=:443
+      # HTTP to HTTPS redirect
+      - --entrypoints.web.http.redirections.entrypoint.to=websecure
+      - --entrypoints.web.http.redirections.entrypoint.scheme=https
+      # Let's Encrypt Certificate Resolver
+      - --certificatesresolvers.letsencrypt.acme.email=admin@edfig.dev
+      - --certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json
+      - --certificatesresolvers.letsencrypt.acme.httpchallenge.entrypoint=web
+      # Logging
+      - --log.level=INFO
+      - --accesslog=true
     ports:
       - "80:80"
       - "443:443"
     environment:
       DOCKER_API_VERSION: "1.52"
     volumes:
       - /var/run/docker.sock:/var/run/docker.sock:ro
-      - ./traefik.yml:/etc/traefik/traefik.yml:ro
       - ./letsencrypt:/letsencrypt
-      - ./logs:/var/log/traefik
     restart: unless-stopped
     networks:
       - homelab

@@ -33,22 +40,10 @@ services:
       traefik.http.routers.traefik.entrypoints: websecure
       traefik.http.routers.traefik.tls.certresolver: letsencrypt
       traefik.http.routers.traefik.service: api@internal
-      traefik.http.routers.traefik.middlewares: local-only

       # IP Allowlist Middleware for local network only services
       traefik.http.middlewares.local-only.ipallowlist.sourcerange: 10.0.0.0/16

-      # CrowdSec Middleware
-      traefik.http.middlewares.crowdsec.plugin.crowdsec-bouncer-traefik-plugin.enabled: true
-      traefik.http.middlewares.crowdsec.plugin.crowdsec-bouncer-traefik-plugin.crowdsecMode: stream
-      traefik.http.middlewares.crowdsec.plugin.crowdsec-bouncer-traefik-plugin.crowdsecLapiKey: ${CROWDSEC_BOUNCER_KEY}
-      traefik.http.middlewares.crowdsec.plugin.crowdsec-bouncer-traefik-plugin.crowdsecLapiHost: crowdsec:8080
-      traefik.http.middlewares.crowdsec.plugin.crowdsec-bouncer-traefik-plugin.crowdsecLapiScheme: http
-      traefik.http.middlewares.crowdsec.plugin.crowdsec-bouncer-traefik-plugin.updateIntervalSeconds: 60
-      traefik.http.middlewares.crowdsec.plugin.crowdsec-bouncer-traefik-plugin.defaultDecisionSeconds: 60
-      traefik.http.middlewares.crowdsec.plugin.crowdsec-bouncer-traefik-plugin.forwardedHeadersTrustedIPs: 10.0.0.0/16
-      traefik.http.middlewares.crowdsec.plugin.crowdsec-bouncer-traefik-plugin.clientTrustedIPs: 10.0.0.0/16

 networks:
   homelab:
     external: true
@@ -1,56 +0,0 @@
-# Traefik Static Configuration
-# Docs: https://doc.traefik.io/traefik/
-
-# API Settings
-api:
-  dashboard: true
-
-# Provider Settings
-providers:
-  docker:
-    exposedByDefault: false
-    network: homelab
-
-# Entrypoints
-entryPoints:
-  web:
-    address: ":80"
-    http:
-      redirections:
-        entryPoint:
-          to: websecure
-          scheme: https
-
-  websecure:
-    address: ":443"
-
-# Certificate Resolvers
-certificatesResolvers:
-  letsencrypt:
-    acme:
-      email: admin@edfig.dev
-      storage: /letsencrypt/acme.json
-      httpChallenge:
-        entryPoint: web
-
-# Logging
-log:
-  level: INFO
-
-# Access Logs - Critical for CrowdSec
-accessLog:
-  filePath: /var/log/traefik/access.log
-  bufferingSize: 100
-  filters:
-    statusCodes:
-      - "200-299"
-      - "300-399"
-      - "400-499"
-      - "500-599"
-
-# Experimental Features - Required for Plugins
-experimental:
-  plugins:
-    crowdsec-bouncer-traefik-plugin:
-      moduleName: github.com/maxlerebourg/crowdsec-bouncer-traefik-plugin
-      version: v1.2.1
@@ -1,56 +0,0 @@
-# Dispatcharr - IPTV/Live TV Transcoding and Streaming
-# Docs: https://github.com/DispatchArr/DispatchArr
-
-services:
-  dispatcharr:
-    image: ghcr.io/dispatcharr/dispatcharr:latest
-    container_name: dispatcharr
-    ports:
-      - 9191:9191
-    volumes:
-      - ./data:/data
-    environment:
-      - DISPATCHARR_ENV=aio
-      - REDIS_HOST=localhost
-      - CELERY_BROKER_URL=redis://localhost:6379/0
-      - DISPATCHARR_LOG_LEVEL=info
-
-    # NVIDIA GPU support for hardware transcoding
-    runtime: nvidia
-    deploy:
-      resources:
-        reservations:
-          devices:
-            - driver: nvidia
-              count: all
-              capabilities: [gpu]
-
-    networks:
-      - homelab
-
-    labels:
-      traefik.enable: true
-      traefik.http.routers.dispatcharr.rule: Host(`iptv.fig.systems`)
-      traefik.http.routers.dispatcharr.entrypoints: websecure
-      traefik.http.routers.dispatcharr.tls.certresolver: letsencrypt
-      traefik.http.services.dispatcharr.loadbalancer.server.port: 9191
-
-      # Homarr Discovery
-      homarr.name: Dispatcharr (IPTV)
-      homarr.group: Media
-      homarr.icon: mdi:television
-
-    # Process Priority Configuration (Optional)
-    # Lower values = higher priority. Range: -20 (highest) to 19 (lowest)
-    # Negative values require cap_add: SYS_NICE (uncomment below)
-    #- UWSGI_NICE_LEVEL=-5   # uWSGI/FFmpeg/Streaming (default: 0, recommended: -5 for high priority)
-    #- CELERY_NICE_LEVEL=5   # Celery/EPG/Background tasks (default: 5, low priority)
-    #
-    # Uncomment to enable high priority for streaming (required if UWSGI_NICE_LEVEL < 0)
-    #cap_add:
-    #  - SYS_NICE
-
-networks:
-  homelab:
    external: true
@@ -29,8 +29,7 @@ services:
       traefik.http.routers.lidarr.tls.certresolver: letsencrypt
       traefik.http.services.lidarr.loadbalancer.server.port: 8686

-      # Local Network Only
-      traefik.http.routers.lidarr.middlewares: local-only
+      # SSO Protection

       # Homarr Discovery
       homarr.name: Lidarr (Music)
@@ -29,7 +29,6 @@ services:
       traefik.http.services.profilarr.loadbalancer.server.port: 6868

       # SSO Protection
-      traefik.http.routers.profilarr.middlewares: authelia

       # Homarr Discovery
       homarr.name: Profilarr (Profiles)
@@ -24,7 +24,6 @@ services:
       traefik.http.services.prowlarr.loadbalancer.server.port: 9696

       # SSO Protection
-      traefik.http.routers.prowlarr.middlewares: authelia

       # Homarr Discovery
       homarr.name: Prowlarr (Indexers)
@@ -19,19 +19,12 @@ services:
     networks:
       - homelab
     labels:
-      # Traefik
       traefik.enable: true
-      traefik.docker.network: homelab
-
-      # Web UI
       traefik.http.routers.qbittorrent.rule: Host(`qbt.fig.systems`)
       traefik.http.routers.qbittorrent.entrypoints: websecure
       traefik.http.routers.qbittorrent.tls.certresolver: letsencrypt
       traefik.http.services.qbittorrent.loadbalancer.server.port: 8080
-
-      # SSO Protection
-      traefik.http.routers.qbittorrent.middlewares: authelia

 networks:
   homelab:
     external: true
@@ -19,19 +19,12 @@ services:
     networks:
       - homelab
     labels:
-      # Traefik
       traefik.enable: true
-      traefik.docker.network: homelab
-
-      # Web UI
       traefik.http.routers.radarr.rule: Host(`radarr.fig.systems`)
       traefik.http.routers.radarr.entrypoints: websecure
       traefik.http.routers.radarr.tls.certresolver: letsencrypt
       traefik.http.services.radarr.loadbalancer.server.port: 7878
-
-      # SSO Protection
-      traefik.http.routers.radarr.middlewares: authelia

 networks:
   homelab:
     external: true
@@ -16,19 +16,13 @@ services:
     networks:
       - homelab
     labels:
-      # Traefik
       traefik.enable: true
       traefik.docker.network: homelab
-
-      # Web UI
       traefik.http.routers.sabnzbd.rule: Host(`sab.fig.systems`)
       traefik.http.routers.sabnzbd.entrypoints: websecure
       traefik.http.routers.sabnzbd.tls.certresolver: letsencrypt
       traefik.http.services.sabnzbd.loadbalancer.server.port: 8080
-
-      # SSO Protection
-      traefik.http.routers.sabnzbd.middlewares: authelia

 networks:
   homelab:
     external: true
@@ -1,32 +0,0 @@
-# slskd configuration
-# See: https://github.com/slskd/slskd/blob/master/config/slskd.example.yml
-
-# Soulseek credentials
-soulseek:
-  username: eddoe
-  password: Exoteric0
-  description: |
-    A slskd user. https://github.com/slskd/slskd
-
-# Directories
-directories:
-  downloads: /downloads
-
-shares:
-  directories:
-    - /music
-  filters:
-    - \.ini$
-    - Thumbs.db$
-    - \.DS_Store$
-
-# Web UI Authentication
-web:
-  authentication:
-    username: slskd
-    password: slskd
-    api_keys:
-      soularr:
-        key: ae207eee1105484e9dd0e472cba7b996fe2069bafc7f86b83001ab29d0c2c211
-        role: readwrite
-        cidr: 0.0.0.0/0,::/0
@@ -1,53 +0,0 @@
-# slskd - Soulseek daemon for P2P music sharing
-# Docs: https://github.com/slskd/slskd
-# Config: https://github.com/slskd/slskd/blob/master/config/slskd.example.yml
-
-services:
-  slskd:
-    container_name: slskd
-    image: slskd/slskd:latest
-    restart: unless-stopped
-
-    env_file:
-      - .env
-
-    environment:
-      - SLSKD_REMOTE_CONFIGURATION=true
-
-    volumes:
-      - ./app:/app
-      # Existing music library for sharing (read-only)
-      - /mnt/media/music:/music:ro
-      # Downloads directory (Lidarr can access this)
-      - /mnt/media/downloads/soulseek:/downloads
-
-    ports:
-      - "5030:5030"    # Web UI
-      - "5031:5031"    # Peer connections
-      - "50300:50300"  # Peer listening
-
-    networks:
-      - homelab
-
-    labels:
-      # Traefik
-      traefik.enable: true
-      traefik.docker.network: homelab
-
-      # Web UI
-      traefik.http.routers.slskd.rule: Host(`soulseek.fig.systems`)
-      traefik.http.routers.slskd.entrypoints: websecure
-      traefik.http.routers.slskd.tls.certresolver: letsencrypt
-      traefik.http.services.slskd.loadbalancer.server.port: 5030
-
-      # Local Network Only
-      traefik.http.routers.slskd.middlewares: local-only
-
-      # Homarr Discovery
-      homarr.name: slskd (Soulseek)
-      homarr.group: Automation
-      homarr.icon: mdi:share-variant
-
-networks:
-  homelab:
-    external: true
@@ -19,19 +19,12 @@ services:
     networks:
       - homelab
     labels:
-      # Traefik
       traefik.enable: true
-      traefik.docker.network: homelab
-
-      # Web UI
       traefik.http.routers.sonarr.rule: Host(`sonarr.fig.systems`)
       traefik.http.routers.sonarr.entrypoints: websecure
       traefik.http.routers.sonarr.tls.certresolver: letsencrypt
       traefik.http.services.sonarr.loadbalancer.server.port: 8989
-
-      # SSO Protection
-      traefik.http.routers.sonarr.middlewares: authelia

 networks:
   homelab:
     external: true
@@ -1,36 +0,0 @@
-# Soularr - Automation bridge connecting Lidarr with Slskd
-# Docs: https://soularr.net/
-# GitHub: https://github.com/mrusse08/soularr
-
-services:
-  soularr:
-    container_name: soularr
-    image: mrusse08/soularr:latest
-    restart: unless-stopped
-
-    env_file:
-      - .env
-
-    environment:
-      - PUID=1000
-      - PGID=1000
-      - SCRIPT_INTERVAL=300  # Run every 5 minutes
-
-    volumes:
-      - ./data:/data                              # Config file storage
-      - /mnt/media/downloads/soulseek:/downloads  # Monitor downloads
-
-    networks:
-      - homelab
-
-    labels:
-      # No Traefik (no web UI)
-
-      # Homarr Discovery
-      homarr.name: Soularr (Lidarr↔Slskd Bridge)
-      homarr.group: Automation
-      homarr.icon: mdi:link-variant
-
-networks:
-  homelab:
-    external: true
@@ -10,7 +10,7 @@ DB_DATA_LOCATION=./postgres
 TZ=America/Los_Angeles

 # The Immich version to use. You can pin this to a specific version like "v1.71.0"
-IMMICH_VERSION=V2.3.1
+IMMICH_VERSION=V2.1.0

 # Connection secret for postgres. You should change it to a random password
 # Please use only the characters `A-Za-z0-9`, without special characters or spaces

@@ -18,17 +18,6 @@ IMMICH_VERSION=V2.3.1
 # Example format: aB3dEf7HiJ9kLmN2oPqR5sTuV8wXyZ1
 DB_PASSWORD=changeme_please_set_secure_password

-# OAuth/OIDC Configuration (Authelia)
-# Docs: https://immich.app/docs/administration/oauth
-OAUTH_ENABLED=true
-OAUTH_ISSUER_URL=https://auth.fig.systems
-OAUTH_CLIENT_ID=immich
-OAUTH_CLIENT_SECRET=UXmLznRcvsyZexV0GUeJcJren7FwW8cr
-OAUTH_SCOPE=openid profile email
-OAUTH_BUTTON_TEXT=Login with Authelia
-OAUTH_AUTO_REGISTER=true
-OAUTH_AUTO_LAUNCH=false
-
 # The values below this line do not need to be changed
 ###################################################################################
 DB_USERNAME=postgres
@@ -45,6 +45,7 @@ services:
       traefik.http.routers.immich.tls.certresolver: letsencrypt
       traefik.http.services.immich.loadbalancer.server.port: 2283
       # Optional: Enable SSO (note: Immich has its own user management)
+      # traefik.http.routers.immich.middlewares: tinyauth

   immich-machine-learning:
     container_name: immich_machine_learning
@@ -1,37 +0,0 @@
-# Jellyfin OIDC Setup with Authelia
-
-Jellyfin requires the **SSO Plugin** to be installed for OIDC authentication.
-
-## Installation Steps
-
-1. **Install the SSO Plugin**:
-   - Open Jellyfin: https://flix.fig.systems
-   - Navigate to: Dashboard → Plugins → Catalog
-   - Find and install: **"SSO-Authentication"** plugin
-   - Restart Jellyfin
-
-2. **Configure the Plugin**:
-   - Go to: Dashboard → Plugins → SSO-Authentication
-   - **Add New Provider** with these settings:
-     - **Provider Name**: `authelia`
-     - **OID Endpoint**: `https://auth.fig.systems`
-     - **OID Client ID**: `jellyfin`
-     - **OID Secret**: `eOlV1CLiYpCtE9xKaI3FbsXmMBuHc5Mp`
-     - **Enabled**: ✓
-     - **Enable Authorization by Plugin**: ✓
-     - **Enable All Folders**: ✓
-     - **Enable Folder Access (Optional)**: (configure as needed)
-     - **Administrator Roles**: `admin` (if using LDAP groups)
-     - **Default User**: (leave empty for auto-registration)
-
-3. **Test Login**:
-   - Log out of Jellyfin
-   - You should now see a "Sign in with authelia" button
-   - Click it to authenticate via Authelia
-
-## Notes
-
-- Users will be auto-created in Jellyfin when they first login via OIDC
-- You can still use local Jellyfin accounts alongside OIDC
-- The redirect URI configured in Authelia is: `https://flix.fig.systems/sso/OID/redirect/authelia`
@@ -8,9 +8,6 @@ services:
     image: lscr.io/linuxserver/jellyfin:latest
     env_file:
      - .env
-    environment:
-      - NVIDIA_VISIBLE_DEVICES=all
-      - NVIDIA_DRIVER_CAPABILITIES=compute,video,utility
     volumes:
       - ./config:/config
       - ./cache:/cache

@@ -40,23 +37,19 @@ services:
       homarr.icon: simple-icons:jellyfin

       # Note: Jellyfin has its own auth system, SSO middleware disabled by default
-      # Uncomment the line below to enable SSO (requires users to auth via Authelia first)
+      # Uncomment the line below to enable SSO (requires users to auth via tinyauth first)
+      # traefik.http.routers.jellyfin.middlewares: tinyauth

-    # NVIDIA GPU transcoding (GTX 1070)
-    runtime: nvidia
-    # Shared memory for transcoding - prevents stuttering
-    shm_size: 4gb
-    deploy:
-      resources:
-        limits:
-          memory: 12G
-          cpus: '5.0'
-        reservations:
-          memory: 4G
-          devices:
-            - driver: nvidia
-              count: all
-              capabilities: [gpu]
+    # Uncomment for NVIDIA GPU transcoding (GTX 1070)
+    # Requires NVIDIA Container Toolkit installed on host
+    # runtime: nvidia
+    # deploy:
+    #   resources:
+    #     reservations:
+    #       devices:
+    #         - driver: nvidia
+    #           count: all
+    #           capabilities: [gpu]

     networks:
       homelab:
@@ -4,7 +4,7 @@
 services:
   jellyseerr:
     container_name: jellyseerr
-    image: ghcr.io/seerr-team/seerr:latest
+    image: fallenbagel/jellyseerr:latest
     env_file:
       - .env
     volumes:
@@ -1,48 +0,0 @@
-# Navidrome - Modern music streaming server
-# Docs: https://www.navidrome.org/docs/
-# Installation: https://www.navidrome.org/docs/installation/docker/
-
-services:
-  navidrome:
-    container_name: navidrome
-    image: deluan/navidrome:latest
-    restart: unless-stopped
-
-    env_file:
-      - .env
-
-    user: "1000:1000"
-
-    volumes:
-      - ./data:/data
-      # Music library (read-only)
-      - /mnt/media/music:/music:ro
-
-    ports:
-      - "4533:4533"
-
-    networks:
-      - homelab
-
-    labels:
-      # Traefik
-      traefik.enable: true
-      traefik.docker.network: homelab
-
-      # Web UI
-      traefik.http.routers.navidrome.rule: Host(`music.fig.systems`)
-      traefik.http.routers.navidrome.entrypoints: websecure
-      traefik.http.routers.navidrome.tls.certresolver: letsencrypt
-      traefik.http.services.navidrome.loadbalancer.server.port: 4533
-
-      # No SSO - Navidrome has its own auth system
-      # This ensures mobile apps (Subsonic clients) work properly
-
-      # Homarr Discovery
-      homarr.name: Navidrome (Music Streaming)
-      homarr.group: Media
-      homarr.icon: mdi:music-circle
-
-networks:
-  homelab:
-    external: true
@@ -1,35 +0,0 @@
-# NodeCast TV - Chromecast Dashboard
-# Source: https://github.com/technomancer702/nodecast-tv
-
-services:
-  nodecast-tv:
-    container_name: nodecast-tv
-    build: https://github.com/technomancer702/nodecast-tv.git#main
-    env_file:
-      - .env
-    environment:
-      - NODE_ENV=production
-      - PORT=3000
-    volumes:
-      - ./data:/app/data
-    restart: unless-stopped
-    networks:
-      - homelab
-    labels:
-      traefik.enable: true
-      traefik.docker.network: homelab
-      traefik.http.routers.nodecast-tv.rule: Host(`iptv.fig.systems`)
-      traefik.http.routers.nodecast-tv.entrypoints: websecure
-      traefik.http.routers.nodecast-tv.tls.certresolver: letsencrypt
-      traefik.http.services.nodecast-tv.loadbalancer.server.port: 3000
-
-      # Note: No Authelia middleware - NodeCast TV handles authentication via its own OIDC integration
-
-      # Homarr Discovery
-      homarr.name: NodeCast TV (IPTV)
-      homarr.group: Media
-      homarr.icon: mdi:cast
-
-networks:
-  homelab:
-    external: true
compose/monitoring/logging/.env (new file, 28 lines)
@@ -0,0 +1,28 @@
+# Centralized Logging Configuration
+
+# Timezone
+TZ=America/Los_Angeles
+
+# Grafana Admin Credentials
+# Default username: admin
+# Change this password immediately after first login!
+# Example format: MyGr@f@n@P@ssw0rd!2024
+GF_SECURITY_ADMIN_PASSWORD=changeme_please_set_secure_grafana_password
+
+# Grafana Configuration
+GF_SERVER_ROOT_URL=https://logs.fig.systems
+GF_SERVER_DOMAIN=logs.fig.systems
+
+# Disable Grafana analytics (optional)
+GF_ANALYTICS_REPORTING_ENABLED=false
+GF_ANALYTICS_CHECK_FOR_UPDATES=false
+
+# Allow embedding (for Homarr dashboard integration)
+GF_SECURITY_ALLOW_EMBEDDING=true
+
+# Loki Configuration
+# Retention period in days (default: 30 days)
+LOKI_RETENTION_PERIOD=30d
+
+# Promtail Configuration
+# No additional configuration needed - configured via promtail-config.yaml
compose/monitoring/logging/.env.example (new file, 28 lines)
@@ -0,0 +1,28 @@
+# Centralized Logging Configuration
+
+# Timezone
+TZ=America/Los_Angeles
+
+# Grafana Admin Credentials
+# Default username: admin
+# Change this password immediately after first login!
+# Example format: MyGr@f@n@P@ssw0rd!2024
+GF_SECURITY_ADMIN_PASSWORD=REDACTED
+
+# Grafana Configuration
+GF_SERVER_ROOT_URL=https://logs.fig.systems
+GF_SERVER_DOMAIN=logs.fig.systems
+
+# Disable Grafana analytics (optional)
+GF_ANALYTICS_REPORTING_ENABLED=false
+GF_ANALYTICS_CHECK_FOR_UPDATES=false
+
+# Allow embedding (for Homarr dashboard integration)
+GF_SECURITY_ALLOW_EMBEDDING=true
+
+# Loki Configuration
+# Retention period in days (default: 30 days)
+LOKI_RETENTION_PERIOD=30d
+
+# Promtail Configuration
+# No additional configuration needed - configured via promtail-config.yaml
compose/monitoring/logging/.gitignore (vendored, new file, 13 lines)
@@ -0,0 +1,13 @@
+# Loki data
+loki-data/
+
+# Grafana data
+grafana-data/
+
+# Keep provisioning and config files
+!grafana-provisioning/
+!loki-config.yaml
+!promtail-config.yaml
+
+# Keep .env.example if created
+!.env.example
compose/monitoring/logging/DOCKER-LOGS-DASHBOARD.md (new file, 235 lines)
@@ -0,0 +1,235 @@
+# Docker Logs Dashboard - Grafana
+
+A comprehensive dashboard for viewing all Docker container logs via Loki.
+
+## Features
+
+### 📊 Panels Included
+
+1. **Docker Container Logs** (Main Panel)
+   - Real-time log streaming from all containers
+   - Filter by container, image, or search term
+   - Expandable log details
+   - Sortable (ascending/descending)
+
+2. **Log Volume by Container**
+   - Stacked bar chart showing log activity over time
+   - Helps identify chatty containers
+   - Per-container breakdown
+
+3. **Error Logs by Container**
+   - Time series of ERROR/EXCEPTION/FATAL/PANIC logs
+   - Automatically detects error patterns
+   - Useful for monitoring application health
+
+4. **Total Logs by Container**
+   - Bar gauge showing total log lines per container
+   - Color-coded thresholds (green → yellow → red)
+   - Based on selected time range
+
+5. **Statistics Panels**
+   - **Active Containers**: Count of containers currently logging
+   - **Total Log Lines**: Sum of all logs in time range
+   - **Total Errors**: Count of error-level logs
+   - **Log Rate**: Logs per second (current rate)
+
+## Access the Dashboard
+
+1. Open Grafana: **https://logs.fig.systems**
+2. Navigate to: **Dashboards** → **Loki** folder → **Docker Logs - All Containers**
+
+Or use direct link:
+```
+https://logs.fig.systems/d/docker-logs-all
+```
+
+## Using the Filters
+
+### Container Filter
+- Select specific containers to view
+- Multi-select supported
+- Default: "All" (shows all containers)
+
+Example: Select `traefik`, `loki`, `grafana` to view only those
+
+### Image Filter
+- Filter by Docker image name
+- Multi-select supported
+- Useful for viewing all containers of same image
+
+Example: Filter by `grafana/loki:*` to see all Loki containers
+
+### Search Filter
+- Free-text search with regex support
+- Searches within log message content
+- Case-insensitive by default
+
+Examples:
+- `error` - Find logs containing "error"
+- `(?i)started` - Case-insensitive "started"
+- `HTTP [45][0-9]{2}` - HTTP 4xx/5xx errors
+- `user.*login.*failed` - Failed login attempts
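The filter patterns above can be sanity-checked outside Grafana. LogQL's regex matching uses RE2 syntax, which for these simple patterns behaves the same as Python's `re`; the sample log lines below are made up for illustration:

```python
import re

# Hypothetical sample lines, paired with the filter patterns from the examples above
cases = [
    (r"HTTP [45][0-9]{2}", "10.0.0.5 - GET /api/v1/status HTTP 404"),
    (r"user.*login.*failed", "user alice login attempt failed: bad password"),
    (r"(?i)started", "Server Started, listening on :8080"),
]

for pattern, line in cases:
    # re.search matches anywhere in the line, like a LogQL line filter
    assert re.search(pattern, line), f"{pattern!r} did not match {line!r}"

print("all patterns match")
```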
+## Time Range Selection
+
+Use Grafana's time picker (top right) to select:
+- Last 5 minutes
+- Last 15 minutes
+- Last 1 hour (default)
+- Last 24 hours
+- Custom range
+
+## Auto-Refresh
+
+Dashboard auto-refreshes every **10 seconds** by default.
+
+Change refresh rate in top-right dropdown:
+- 5s (very fast)
+- 10s (default)
+- 30s
+- 1m
+- 5m
+- Off
+
+## LogQL Query Examples
+
+The dashboard uses these queries. You can modify panels or create new ones:
+
+### All logs from a container
+```logql
+{job="docker_all", container="traefik"}
+```
+
+### Errors only
+```logql
+{job="docker_all"} |~ "(?i)(error|exception|fatal|panic)"
+```
+
+### HTTP status codes
+```logql
+{job="docker_all", container="traefik"} | json | line_format "{{.status}} {{.method}} {{.path}}"
+```
+
+### Rate of logs
+```logql
+rate({job="docker_all"}[5m])
+```
+
+### Count errors per container
+```logql
+sum by (container) (count_over_time({job="docker_all"} |~ "(?i)error" [1h]))
+```
## Tips & Tricks
|
||||||
|
|
||||||
|
### 1. Find Noisy Containers
|
||||||
|
- Use "Log Volume by Container" panel
|
||||||
|
- Look for tall bars = lots of logs
|
||||||
|
- Consider adjusting log levels for those containers
|
||||||
|
|
||||||
|
### 2. Debug Application Issues
|
||||||
|
1. Set time range to when issue occurred
|
||||||
|
2. Filter to specific container
|
||||||
|
3. Search for error keywords
|
||||||
|
4. Expand log details for full context
|
||||||
|
|
||||||
|
### 3. Monitor in Real-Time
|
||||||
|
1. Set time range to "Last 5 minutes"
|
||||||
|
2. Enable auto-refresh (5s or 10s)
|
||||||
|
3. Open "Docker Container Logs" panel
|
||||||
|
4. Watch logs stream live
|
||||||
|
|
||||||
|
### 4. Export Logs
|
||||||
|
- Click on any log line
|
||||||
|
- Click "Copy" icon to copy log text
|
||||||
|
- Or use Loki API directly for bulk export
|
||||||
|
|
||||||
|
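Bulk export goes through Loki's `query_range` HTTP endpoint, which takes a LogQL query plus nanosecond `start`/`end` timestamps. A minimal Python sketch (the `docker_all` job label matches this setup; the host and port are assumptions for a local deployment):

```python
import time
import urllib.parse

# Build a Loki query_range request URL for bulk export of the last hour
# of one container's logs. Loki expects nanosecond epoch timestamps.
base = "http://localhost:3100/loki/api/v1/query_range"
end = int(time.time() * 1e9)       # now, in nanoseconds
start = end - int(3600 * 1e9)      # one hour ago

params = urllib.parse.urlencode({
    "query": '{job="docker_all", container="traefik"}',
    "start": start,
    "end": end,
    "limit": 5000,
})
url = f"{base}?{params}"
print(url)
# Fetch with urllib.request.urlopen(url); to export more than `limit`
# lines, page by moving `end` back to the timestamp of the oldest
# entry returned by the previous batch.
```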
### 5. Create Alerts

In Grafana, you can create alerts based on log patterns:
- Alert if errors exceed threshold
- Alert if specific pattern detected
- Alert if container stops logging (might be down)

## Troubleshooting

### No logs showing

1. Check Promtail is running: `docker ps | grep promtail`
2. Verify Loki datasource in Grafana is configured
3. Check time range (logs might be older/newer)
4. Verify containers are actually logging

### Slow dashboard

- Narrow time range (use last 15m instead of 24h)
- Use container filter to reduce data
- Increase refresh interval to 30s or 1m

### Missing containers

Your current Promtail config captures ALL Docker containers automatically.
If a container is missing, check:
1. Container is running: `docker ps`
2. Container has logs: `docker logs <container>`
3. Promtail can access Docker socket

## Advanced Customization

### Add a New Panel

1. Click "Add Panel" in dashboard
2. Select "Logs" visualization
3. Use query:

   ```logql
   {job="docker_all", container="your-container"}
   ```

4. Configure options (time display, wrapping, etc.)
5. Save dashboard

### Modify Existing Panels

1. Click panel title → Edit
2. Modify LogQL query
3. Adjust visualization options
4. Save changes

### Export Dashboard

1. Dashboard settings (gear icon)
2. JSON Model
3. Copy JSON
4. Save to file for backup

## Integration with Other Tools

### View in Explore

- Click "Explore" on any panel
- Opens Loki Explore interface
- More advanced querying options
- Better for ad-hoc investigation

### Share Dashboard

1. Click share icon (next to title)
2. Get shareable link
3. Or export snapshot

### Embed in Other Apps

Use Grafana's embedding features to show logs in:
- Homarr dashboard
- Custom web apps
- Monitoring tools

## Related Resources

- [LogQL Documentation](https://grafana.com/docs/loki/latest/logql/)
- [Grafana Dashboards Guide](https://grafana.com/docs/grafana/latest/dashboards/)
- [Loki Best Practices](https://grafana.com/docs/loki/latest/best-practices/)

## Support

For issues with:
- **Dashboard**: Edit and customize as needed
- **Loki**: Check `/home/eduardo_figueroa/homelab/compose/monitoring/logging/`
- **Missing logs**: Verify Promtail configuration

Dashboard file location:

```
/home/eduardo_figueroa/homelab/compose/monitoring/logging/grafana-provisioning/dashboards/docker-logs.json
```

527 compose/monitoring/logging/README.md Normal file
@@ -0,0 +1,527 @@

# Centralized Logging Stack

Grafana Loki + Promtail + Grafana for centralized Docker container log aggregation and visualization.

## Overview

This stack provides centralized logging for all Docker containers in your homelab:

- **Loki**: Log aggregation backend (like Prometheus but for logs)
- **Promtail**: Agent that collects logs from Docker containers
- **Grafana**: Web UI for querying and visualizing logs

### Why This Stack?

- ✅ **Lightweight**: Minimal resource usage compared to ELK stack
- ✅ **Docker-native**: Automatically discovers and collects logs from all containers
- ✅ **Powerful search**: LogQL query language for filtering and searching
- ✅ **Retention**: Configurable log retention (default: 30 days)
- ✅ **Labels**: Automatic labeling by container, image, compose project
- ✅ **Integrated**: Works seamlessly with existing homelab services

## Quick Start

### 1. Configure Environment

```bash
cd ~/homelab/compose/monitoring/logging
nano .env
```

**Update:**

```env
# Change this!
GF_SECURITY_ADMIN_PASSWORD=<your-strong-password>
```

### 2. Deploy the Stack

```bash
docker compose up -d
```

### 3. Access Grafana

Go to: **https://logs.fig.systems**

**Default credentials:**
- Username: `admin`
- Password: `<your GF_SECURITY_ADMIN_PASSWORD>`

**⚠️ Change the password immediately after first login!**

### 4. View Logs

1. Click "Explore" (compass icon) in left sidebar
2. Select "Loki" datasource (should be selected by default)
3. Start querying logs!

## Usage

### Basic Log Queries

**View all logs from a container:**

```logql
{container="jellyfin"}
```

**View logs from a compose project:**

```logql
{compose_project="media"}
```

**View logs from a specific service:**

```logql
{compose_service="lldap"}
```

**Filter by log level:**

```logql
{container="immich_server"} |= "error"
```

**Exclude lines:**

```logql
{container="traefik"} != "404"
```

**Multiple filters:**

```logql
{container="jellyfin"} |= "error" != "404"
```

### Advanced Queries

**Count errors per minute:**

```logql
sum(count_over_time({container="jellyfin"} |= "error" [1m])) by (container)
```

**Rate of logs:**

```logql
rate({container="traefik"}[5m])
```

**Logs from last hour:**

LogQL itself has no timestamp filter; restrict the time window with Grafana's time picker (e.g. "Last 1 hour") while querying:

```logql
{container="immich_server"}
```

**Filter by multiple containers:**

```logql
{container=~"jellyfin|immich.*|sonarr"}
```

**Extract and filter JSON:**

```logql
{container="linkwarden"} | json | level="error"
```

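Conceptually, `| json | level="error"` parses each log line as JSON and then filters on the extracted field. A small Python sketch of that pipeline (the sample lines are invented; Loki tags unparseable lines with a `__error__` label rather than dropping them silently):

```python
import json

# Hypothetical raw log lines as Loki would receive them
lines = [
    '{"level": "info", "msg": "link archived"}',
    '{"level": "error", "msg": "fetch failed"}',
    "not json at all",
]

errors = []
for line in lines:
    try:
        entry = json.loads(line)   # the `| json` stage
    except json.JSONDecodeError:
        continue                   # Loki would keep this line, flagged with __error__
    if entry.get("level") == "error":   # the `| level="error"` stage
        errors.append(entry)

print(errors)
```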

## Configuration

### Log Retention

Default: **30 days**

To change the retention period:

**Edit `.env`:**

```env
LOKI_RETENTION_PERIOD=60d  # Keep logs for 60 days
```

**Edit `loki-config.yaml`:**

```yaml
limits_config:
  retention_period: 60d  # Must match .env

table_manager:
  retention_period: 60d  # Must match above
```

**Restart:**

```bash
docker compose restart loki
```

### Adjust Resource Limits

**Edit `loki-config.yaml`:**

```yaml
limits_config:
  ingestion_rate_mb: 10        # MB/sec per stream
  ingestion_burst_size_mb: 20  # Burst size
```

### Add Custom Labels

**Edit `promtail-config.yaml`:**

```yaml
scrape_configs:
  - job_name: docker
    docker_sd_configs:
      - host: unix:///var/run/docker.sock

    relabel_configs:
      # Add custom label
      - source_labels: ['__meta_docker_container_label_environment']
        target_label: 'environment'
```

## How It Works

### Architecture

```
Docker Containers
    ↓ (logs via Docker socket)
Promtail (scrapes and ships)
    ↓ (HTTP push)
Loki (stores and indexes)
    ↓ (LogQL queries)
Grafana (visualization)
```

### Log Collection

Promtail automatically collects logs from:
1. **All Docker containers** via the Docker socket
2. **System logs** from `/var/log`

Logs are labeled with:
- `container`: Container name
- `image`: Docker image
- `compose_project`: Docker Compose project name
- `compose_service`: Service name from compose.yaml
- `stream`: stdout or stderr

### Storage

Logs are stored in:
- **Location**: `./loki-data/`
- **Format**: Compressed chunks
- **Index**: BoltDB
- **Retention**: Automatic cleanup after the retention period

## Integration with Services

### Option 1: Automatic (Default)

Promtail automatically discovers all containers. No changes needed!

### Option 2: Explicit Labels (Recommended)

Add labels to services for better organization:

**Edit any service's `compose.yaml`:**

```yaml
services:
  servicename:
    # ... existing config ...
    labels:
      # ... existing labels ...

      # Add logging labels
      logging: "promtail"
      log_level: "info"
      environment: "production"
```

These labels will be available in Loki for filtering.

### Option 3: Send Logs Directly to Loki

Instead of Promtail scraping, send logs directly:

**Edit the service's `compose.yaml`:**

```yaml
services:
  servicename:
    # ... existing config ...
    logging:
      driver: loki
      options:
        loki-url: "http://loki:3100/loki/api/v1/push"
        loki-external-labels: "container={{.Name}},compose_project={{.Config.Labels[\"com.docker.compose.project\"]}}"
```

**Note**: This requires the Loki Docker driver plugin, so it is not recommended; Promtail keeps the setup simpler.

## Grafana Dashboards

### Built-in Explore

The best way to start is Grafana's Explore view:
1. Click "Explore" icon (compass)
2. Select "Loki" datasource
3. Use the builder to create queries
4. Save interesting queries

### Pre-built Dashboards

You can import community dashboards:

1. Go to Dashboards → Import
2. Use dashboard ID: `13639` (Docker logs dashboard)
3. Select "Loki" as datasource
4. Import

### Create Custom Dashboard

1. Click "+" → "Dashboard"
2. Add panel
3. Select Loki datasource
4. Build query using LogQL
5. Save dashboard

**Example panels:**
- Error count by container
- Log volume over time
- Top 10 logging containers
- Recent errors table

## Alerting

### Create Log-Based Alerts

1. Go to Alerting → Alert rules
2. Create new alert rule
3. Query: `sum(count_over_time({container="jellyfin"} |= "error" [5m])) > 10`
4. Set thresholds and notification channels
5. Save

**Example alerts:**
- Too many errors in a container
- Container restarted
- Disk space warnings
- Failed authentication attempts

## Troubleshooting

### Promtail Not Collecting Logs

**Check Promtail is running:**

```bash
docker logs promtail
```

**Verify Docker socket access:**

```bash
docker exec promtail ls -la /var/run/docker.sock
```

**Test Promtail config:**

```bash
docker exec promtail promtail -config.file=/etc/promtail/config.yaml -dry-run
```

### Loki Not Receiving Logs

**Check Loki health:**

```bash
curl http://localhost:3100/ready
```

**View Loki logs:**

```bash
docker logs loki
```

**Check Promtail is pushing:**

```bash
docker logs promtail | grep -i push
```

### Grafana Can't Connect to Loki

**Test Loki from the Grafana container:**

```bash
docker exec grafana wget -O- http://loki:3100/ready
```

**Check datasource configuration:**
- Grafana → Configuration → Data sources → Loki
- URL should be: `http://loki:3100`

### No Logs Appearing

**Wait a few minutes** - logs take time to appear

**Check retention:**

```bash
# Logs older than retention period are deleted
grep retention_period loki-config.yaml
```

**Verify time range in Grafana:**
- Make sure the selected time range includes recent logs
- Try "Last 5 minutes"

### High Disk Usage

**Check Loki data size:**

```bash
du -sh ./loki-data
```

**Reduce retention:**

```env
LOKI_RETENTION_PERIOD=7d  # Shorter retention
```

**Manual cleanup:**

```bash
# Stop Loki
docker compose stop loki

# Remove old data (CAREFUL!)
rm -rf ./loki-data/chunks/*

# Restart
docker compose start loki
```

## Performance Tuning

### For Low Resources (< 8GB RAM)

**Edit `loki-config.yaml`:**

```yaml
limits_config:
  retention_period: 7d         # Shorter retention
  ingestion_rate_mb: 5         # Lower rate
  ingestion_burst_size_mb: 10  # Lower burst

query_range:
  results_cache:
    cache:
      embedded_cache:
        max_size_mb: 50  # Smaller cache
```

### For High Volume

**Edit `loki-config.yaml`:**

```yaml
limits_config:
  ingestion_rate_mb: 20        # Higher rate
  ingestion_burst_size_mb: 40  # Higher burst

query_range:
  results_cache:
    cache:
      embedded_cache:
        max_size_mb: 200  # Larger cache
```

## Best Practices

### Log Levels

Configure services to log appropriately:
- **Production**: `info` or `warning`
- **Development**: `debug`
- **Troubleshooting**: `trace`

Too much logging = higher resource usage!

### Retention Strategy

- **Critical services**: 60+ days
- **Normal services**: 30 days
- **High volume services**: 7-14 days

### Query Optimization

- **Use specific labels**: `{container="name"}` not `{container=~".*"}`
- **Limit time range**: Query hours, not days, when possible
- **Use filters early**: `|= "error"` before parsing
- **Avoid regex when possible**: `|= "string"` is faster than `|~ "reg.*ex"`

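The substring-vs-regex advice holds in general, not just in Loki; a small Python sketch (invented sample line, stdlib `timeit`) compares the two filter styles:

```python
import re
import timeit

# A hypothetical log line with no "error" in it
line = "2026-03-15 INFO request served in 12ms" * 3

# `|= "error"` is a plain substring scan; `|~ "err.*or"` runs a regex per line
substring_time = timeit.timeit(lambda: "error" in line, number=100_000)
regex_time = timeit.timeit(lambda: re.search("err.*or", line), number=100_000)

print(f"substring: {substring_time:.3f}s  regex: {regex_time:.3f}s")
```

The same ordering applies inside Loki's filter pipeline, which is why `|=` is preferred when a fixed string is enough.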
### Storage Management

Monitor disk usage:

```bash
# Check regularly
du -sh compose/monitoring/logging/loki-data

# Set up alerts when > 80% disk usage
```
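The 80% check is easy to automate; a minimal Python sketch (the path is an assumption from this doc's layout, adjust it to the filesystem that actually holds `loki-data`):

```python
import shutil

# Percentage of the filesystem containing `path` that is in use
def disk_usage_percent(path: str) -> float:
    usage = shutil.disk_usage(path)
    return 100.0 * usage.used / usage.total

if __name__ == "__main__":
    pct = disk_usage_percent("/")  # point at the loki-data mount in practice
    if pct > 80.0:
        print(f"WARNING: disk {pct:.1f}% full; consider lowering LOKI_RETENTION_PERIOD")
```

Run it from cron (or a healthcheck) and wire the warning into whatever notifier you already use.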

## Integration with Homarr

Grafana will automatically appear in the Homarr dashboard. You can also:

### Add Grafana Widget to Homarr

1. Edit Homarr dashboard
2. Add "iFrame" widget
3. URL: `https://logs.fig.systems/d/<dashboard-id>`
4. This embeds Grafana dashboards in Homarr

## Backup and Restore

### Backup

```bash
# Backup Loki data
tar czf loki-backup-$(date +%Y%m%d).tar.gz ./loki-data

# Backup Grafana dashboards and datasources
tar czf grafana-backup-$(date +%Y%m%d).tar.gz ./grafana-data ./grafana-provisioning
```

### Restore

```bash
# Restore Loki
docker compose down
tar xzf loki-backup-YYYYMMDD.tar.gz
docker compose up -d

# Restore Grafana
docker compose down
tar xzf grafana-backup-YYYYMMDD.tar.gz
docker compose up -d
```

## Updating

```bash
cd ~/homelab/compose/monitoring/logging

# Pull latest images
docker compose pull

# Restart with new images
docker compose up -d
```

## Resource Usage

**Typical usage:**
- **Loki**: 200-500MB RAM
- **Promtail**: 50-100MB RAM
- **Grafana**: 100-200MB RAM
- **Disk**: ~1-5GB per week (depends on log volume)

## Next Steps

1. ✅ Deploy the stack
2. ✅ Log in to Grafana and explore logs
3. ✅ Create useful dashboards
4. ✅ Set up alerts for errors
5. ✅ Configure retention based on needs
6. ⬜ Add Prometheus for metrics (future)
7. ⬜ Add Tempo for distributed tracing (future)

## Resources

- [Loki Documentation](https://grafana.com/docs/loki/latest/)
- [LogQL Query Language](https://grafana.com/docs/loki/latest/logql/)
- [Promtail Configuration](https://grafana.com/docs/loki/latest/clients/promtail/configuration/)
- [Grafana Tutorials](https://grafana.com/tutorials/)

---

**Now you can see logs from all containers in one place!** 🎉

123 compose/monitoring/logging/compose.yaml Normal file
@@ -0,0 +1,123 @@

# Centralized Logging Stack - Loki + Promtail + Grafana
# Docs: https://grafana.com/docs/loki/latest/

services:
  loki:
    container_name: loki
    image: grafana/loki:3.3.2
    restart: unless-stopped

    env_file:
      - .env

    volumes:
      - ./loki-config.yaml:/etc/loki/local-config.yaml:ro
      - ./loki-data:/loki

    command: -config.file=/etc/loki/local-config.yaml

    networks:
      - homelab
      - logging_internal

    labels:
      # Traefik (for API access)
      traefik.enable: true
      traefik.docker.network: homelab

      # Loki API
      traefik.http.routers.loki.rule: Host(`loki.fig.systems`)
      traefik.http.routers.loki.entrypoints: websecure
      traefik.http.routers.loki.tls.certresolver: letsencrypt
      traefik.http.services.loki.loadbalancer.server.port: 3100

      # SSO Protection
      traefik.http.routers.loki.middlewares: tinyauth

      # Homarr Discovery
      homarr.name: Loki (Logs)
      homarr.group: Monitoring
      homarr.icon: mdi:math-log

    healthcheck:
      test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider http://localhost:3100/ready || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

  promtail:
    container_name: promtail
    image: grafana/promtail:3.3.2
    restart: unless-stopped

    env_file:
      - .env

    volumes:
      - ./promtail-config.yaml:/etc/promtail/config.yaml:ro
      - /var/log:/var/log:ro
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro

    command: -config.file=/etc/promtail/config.yaml

    networks:
      - logging_internal

    depends_on:
      loki:
        condition: service_healthy

  grafana:
    container_name: grafana
    image: grafana/grafana:10.2.3
    restart: unless-stopped

    env_file:
      - .env

    volumes:
      - ./grafana-data:/var/lib/grafana
      - ./grafana-provisioning:/etc/grafana/provisioning

    networks:
      - homelab
      - logging_internal

    depends_on:
      loki:
        condition: service_healthy

    labels:
      # Traefik
      traefik.enable: true
      traefik.docker.network: homelab

      # Grafana Web UI
      traefik.http.routers.grafana.rule: Host(`logs.fig.systems`)
      traefik.http.routers.grafana.entrypoints: websecure
      traefik.http.routers.grafana.tls.certresolver: letsencrypt
      traefik.http.services.grafana.loadbalancer.server.port: 3000

      # SSO Protection (optional - Grafana has its own auth)
      # traefik.http.routers.grafana.middlewares: tinyauth

      # Homarr Discovery
      homarr.name: Grafana (Logs Dashboard)
      homarr.group: Monitoring
      homarr.icon: mdi:chart-line

    healthcheck:
      test: ["CMD-SHELL", "wget --no-verbose --tries=1 --spider http://localhost:3000/api/health || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

networks:
  homelab:
    external: true
  logging_internal:
    name: logging_internal
    driver: bridge

@@ -0,0 +1,13 @@

apiVersion: 1

providers:
  - name: 'Loki Dashboards'
    orgId: 1
    folder: 'Loki'
    type: file
    disableDeletion: false
    updateIntervalSeconds: 10
    allowUiUpdates: true
    options:
      path: /etc/grafana/provisioning/dashboards
      foldersFromFilesStructure: true

@@ -0,0 +1,703 @@

{
  "annotations": {
    "list": [
      {
        "builtIn": 1,
        "datasource": {
          "type": "grafana",
          "uid": "-- Grafana --"
        },
        "enable": true,
        "hide": true,
        "iconColor": "rgba(0, 211, 255, 1)",
        "name": "Annotations & Alerts",
        "type": "dashboard"
      }
    ]
  },
  "editable": true,
  "fiscalYearStartMonth": 0,
  "graphTooltip": 0,
  "id": null,
  "links": [],
  "liveNow": false,
  "panels": [
    {
      "datasource": {
        "type": "loki",
        "uid": "${datasource}"
      },
      "description": "All Docker container logs in real-time",
      "gridPos": {
        "h": 24,
        "w": 24,
        "x": 0,
        "y": 0
      },
      "id": 1,
      "options": {
        "dedupStrategy": "none",
        "enableLogDetails": true,
        "prettifyLogMessage": false,
        "showCommonLabels": false,
        "showLabels": false,
        "showTime": true,
        "sortOrder": "Descending",
        "wrapLogMessage": false
      },
      "pluginVersion": "10.2.3",
      "targets": [
        {
          "datasource": {
            "type": "loki",
            "uid": "${datasource}"
          },
          "editorMode": "code",
          "expr": "{job=\"docker_all\", container=~\"$container\", image=~\"$image\"} |~ \"$search\"",
          "queryType": "range",
          "refId": "A"
        }
      ],
      "title": "Docker Container Logs",
      "type": "logs"
    },
    {
      "datasource": {
        "type": "loki",
        "uid": "${datasource}"
      },
      "description": "Log volume per container over time",
      "fieldConfig": {
        "defaults": {
          "color": {
            "mode": "palette-classic"
          },
          "custom": {
            "axisCenteredZero": false,
            "axisColorMode": "text",
            "axisLabel": "",
            "axisPlacement": "auto",
            "barAlignment": 0,
            "drawStyle": "bars",
            "fillOpacity": 50,
            "gradientMode": "none",
            "hideFrom": {
              "tooltip": false,
              "viz": false,
              "legend": false
            },
            "lineInterpolation": "linear",
            "lineWidth": 1,
            "pointSize": 5,
            "scaleDistribution": {
              "type": "linear"
            },
            "showPoints": "auto",
            "spanNulls": false,
            "stacking": {
              "group": "A",
              "mode": "normal"
            },
            "thresholdsStyle": {
              "mode": "off"
            }
          },
          "mappings": [],
          "thresholds": {
            "mode": "absolute",
            "steps": [
              {
                "color": "green",
                "value": null
              }
            ]
          },
          "unit": "short"
        },
        "overrides": []
      },
      "gridPos": {
        "h": 8,
        "w": 24,
        "x": 0,
        "y": 24
      },
      "id": 2,
      "options": {
        "legend": {
          "calcs": [],
          "displayMode": "list",
          "placement": "bottom",
          "showLegend": true
        },
        "tooltip": {
          "mode": "single",
          "sort": "none"
        }
      },
      "pluginVersion": "10.2.3",
      "targets": [
        {
          "datasource": {
            "type": "loki",
            "uid": "${datasource}"
          },
          "editorMode": "code",
          "expr": "sum by (container) (count_over_time({job=\"docker_all\", container=~\"$container\", image=~\"$image\"} |~ \"$search\" [$__interval]))",
          "legendFormat": "{{container}}",
          "queryType": "range",
          "refId": "A"
        }
      ],
      "title": "Log Volume by Container",
      "type": "timeseries"
    },
    {
      "datasource": {
        "type": "loki",
        "uid": "${datasource}"
      },
      "description": "Count of ERROR level logs by container",
      "fieldConfig": {
        "defaults": {
          "color": {
            "mode": "palette-classic"
          },
          "custom": {
            "axisCenteredZero": false,
            "axisColorMode": "text",
            "axisLabel": "",
            "axisPlacement": "auto",
            "barAlignment": 0,
            "drawStyle": "line",
            "fillOpacity": 20,
            "gradientMode": "none",
            "hideFrom": {
              "tooltip": false,
              "viz": false,
              "legend": false
            },
            "lineInterpolation": "linear",
            "lineWidth": 2,
            "pointSize": 5,
            "scaleDistribution": {
              "type": "linear"
            },
            "showPoints": "auto",
            "spanNulls": false,
            "stacking": {
              "group": "A",
              "mode": "none"
            },
            "thresholdsStyle": {
              "mode": "off"
            }
          },
          "mappings": [],
          "thresholds": {
            "mode": "absolute",
            "steps": [
              {
                "color": "green",
                "value": null
              },
              {
                "color": "red",
                "value": 1
              }
            ]
          },
          "unit": "short"
        },
        "overrides": []
      },
      "gridPos": {
        "h": 8,
        "w": 12,
        "x": 0,
        "y": 32
      },
      "id": 3,
      "options": {
        "legend": {
          "calcs": ["last"],
          "displayMode": "table",
          "placement": "right",
          "showLegend": true
        },
        "tooltip": {
          "mode": "single",
          "sort": "none"
        }
      },
      "pluginVersion": "10.2.3",
      "targets": [
        {
          "datasource": {
            "type": "loki",
            "uid": "${datasource}"
          },
          "editorMode": "code",
          "expr": "sum by (container) (count_over_time({job=\"docker_all\", container=~\"$container\"} |~ \"(?i)(error|exception|fatal|panic)\" [$__interval]))",
          "legendFormat": "{{container}}",
          "queryType": "range",
          "refId": "A"
        }
      ],
      "title": "Error Logs by Container",
      "type": "timeseries"
    },
    {
      "datasource": {
        "type": "loki",
        "uid": "${datasource}"
      },
      "description": "Total log lines per container",
      "fieldConfig": {
        "defaults": {
          "color": {
            "mode": "thresholds"
          },
          "mappings": [],
          "thresholds": {
            "mode": "absolute",
            "steps": [
              {
                "color": "green",
                "value": null
              },
              {
                "color": "yellow",
                "value": 1000
              },
              {
                "color": "red",
                "value": 10000
              }
            ]
          },
          "unit": "short"
        },
        "overrides": []
      },
      "gridPos": {
        "h": 8,
        "w": 12,
        "x": 12,
        "y": 32
      },
      "id": 4,
      "options": {
        "displayMode": "gradient",
        "minVizHeight": 10,
        "minVizWidth": 0,
        "orientation": "horizontal",
        "reduceOptions": {
          "values": false,
          "calcs": ["lastNotNull"],
          "fields": ""
        },
        "showUnfilled": true,
        "text": {}
      },
      "pluginVersion": "10.2.3",
      "targets": [
        {
          "datasource": {
            "type": "loki",
            "uid": "${datasource}"
          },
          "editorMode": "code",
          "expr": "sum by (container) (count_over_time({job=\"docker_all\", container=~\"$container\", image=~\"$image\"} |~ \"$search\" [$__range]))",
          "legendFormat": "{{container}}",
          "queryType": "instant",
          "refId": "A"
        }
      ],
      "title": "Total Logs by Container (Time Range)",
      "type": "bargauge"
    },
    {
      "datasource": {
        "type": "loki",
        "uid": "${datasource}"
      },
      "description": "Statistics about container logging",
      "fieldConfig": {
        "defaults": {
          "color": {
            "mode": "thresholds"
          },
          "mappings": [],
          "thresholds": {
            "mode": "absolute",
            "steps": [
              {
                "color": "green",
                "value": null
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"overrides": []
|
||||||
|
},
|
||||||
|
"gridPos": {
|
||||||
|
"h": 6,
|
||||||
|
"w": 6,
|
||||||
|
"x": 0,
|
||||||
|
"y": 40
|
||||||
|
},
|
||||||
|
"id": 5,
|
||||||
|
"options": {
|
||||||
|
"colorMode": "value",
|
||||||
|
"graphMode": "area",
|
||||||
|
"justifyMode": "auto",
|
||||||
|
"orientation": "auto",
|
||||||
|
"reduceOptions": {
|
||||||
|
"values": false,
|
||||||
|
"calcs": ["lastNotNull"],
|
||||||
|
"fields": ""
|
||||||
|
},
|
||||||
|
"textMode": "auto"
|
||||||
|
},
|
||||||
|
"pluginVersion": "10.2.3",
|
||||||
|
"targets": [
|
||||||
|
{
|
||||||
|
"datasource": {
|
||||||
|
"type": "loki",
|
||||||
|
"uid": "${datasource}"
|
||||||
|
},
|
||||||
|
"editorMode": "code",
|
||||||
|
"expr": "count(count by (container) (count_over_time({job=\"docker_all\"} [$__range])))",
|
||||||
|
"legendFormat": "Active Containers",
|
||||||
|
"queryType": "instant",
|
||||||
|
"refId": "A"
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"title": "Active Containers",
|
||||||
|
"type": "stat"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"datasource": {
|
||||||
|
"type": "loki",
|
||||||
|
"uid": "${datasource}"
|
||||||
|
},
|
||||||
|
"description": "Total log entries in selected time range",
|
||||||
|
"fieldConfig": {
|
||||||
|
"defaults": {
|
||||||
|
"color": {
|
||||||
|
"mode": "thresholds"
|
||||||
|
},
|
||||||
|
"mappings": [],
|
||||||
|
"thresholds": {
|
||||||
|
"mode": "absolute",
|
||||||
|
"steps": [
|
||||||
|
{
|
||||||
|
"color": "green",
|
||||||
|
"value": null
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"color": "yellow",
|
||||||
|
"value": 10000
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"color": "red",
|
||||||
|
"value": 100000
|
||||||
|
}
|
||||||
|
]
|
||||||
|
},
|
||||||
|
"unit": "short"
|
||||||
|
},
|
||||||
|
"overrides": []
|
||||||
|
},
|
||||||
|
"gridPos": {
|
||||||
|
"h": 6,
|
||||||
|
"w": 6,
|
||||||
|
"x": 6,
|
||||||
|
"y": 40
|
||||||
|
},
|
||||||
|
"id": 6,
|
||||||
|
"options": {
|
||||||
|
"colorMode": "value",
|
||||||
|
"graphMode": "area",
|
||||||
|
"justifyMode": "auto",
|
||||||
|
"orientation": "auto",
|
||||||
|
"reduceOptions": {
|
||||||
|
"values": false,
|
||||||
|
"calcs": ["lastNotNull"],
|
||||||
|
"fields": ""
|
||||||
|
},
|
||||||
|
"textMode": "auto"
|
||||||
|
},
|
||||||
|
"pluginVersion": "10.2.3",
|
||||||
|
"targets": [
|
||||||
|
{
|
||||||
|
"datasource": {
|
||||||
|
"type": "loki",
|
||||||
|
"uid": "${datasource}"
|
||||||
|
},
|
||||||
|
"editorMode": "code",
|
||||||
|
"expr": "sum(count_over_time({job=\"docker_all\", container=~\"$container\", image=~\"$image\"} |~ \"$search\" [$__range]))",
|
||||||
|
"legendFormat": "Total Logs",
|
||||||
|
"queryType": "instant",
|
||||||
|
"refId": "A"
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"title": "Total Log Lines",
|
||||||
|
"type": "stat"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"datasource": {
|
||||||
|
"type": "loki",
|
||||||
|
"uid": "${datasource}"
|
||||||
|
},
|
||||||
|
"description": "Total errors in selected time range",
|
||||||
|
"fieldConfig": {
|
||||||
|
"defaults": {
|
||||||
|
"color": {
|
||||||
|
"mode": "thresholds"
|
||||||
|
},
|
||||||
|
"mappings": [],
|
||||||
|
"thresholds": {
|
||||||
|
"mode": "absolute",
|
||||||
|
"steps": [
|
||||||
|
{
|
||||||
|
"color": "green",
|
||||||
|
"value": null
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"color": "yellow",
|
||||||
|
"value": 10
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"color": "red",
|
||||||
|
"value": 100
|
||||||
|
}
|
||||||
|
]
|
||||||
|
},
|
||||||
|
"unit": "short"
|
||||||
|
},
|
||||||
|
"overrides": []
|
||||||
|
},
|
||||||
|
"gridPos": {
|
||||||
|
"h": 6,
|
||||||
|
"w": 6,
|
||||||
|
"x": 12,
|
||||||
|
"y": 40
|
||||||
|
},
|
||||||
|
"id": 7,
|
||||||
|
"options": {
|
||||||
|
"colorMode": "value",
|
||||||
|
"graphMode": "area",
|
||||||
|
"justifyMode": "auto",
|
||||||
|
"orientation": "auto",
|
||||||
|
"reduceOptions": {
|
||||||
|
"values": false,
|
||||||
|
"calcs": ["lastNotNull"],
|
||||||
|
"fields": ""
|
||||||
|
},
|
||||||
|
"textMode": "auto"
|
||||||
|
},
|
||||||
|
"pluginVersion": "10.2.3",
|
||||||
|
"targets": [
|
||||||
|
{
|
||||||
|
"datasource": {
|
||||||
|
"type": "loki",
|
||||||
|
"uid": "${datasource}"
|
||||||
|
},
|
||||||
|
"editorMode": "code",
|
||||||
|
"expr": "sum(count_over_time({job=\"docker_all\", container=~\"$container\"} |~ \"(?i)(error|exception|fatal|panic)\" [$__range]))",
|
||||||
|
"legendFormat": "Errors",
|
||||||
|
"queryType": "instant",
|
||||||
|
"refId": "A"
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"title": "Total Errors",
|
||||||
|
"type": "stat"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"datasource": {
|
||||||
|
"type": "loki",
|
||||||
|
"uid": "${datasource}"
|
||||||
|
},
|
||||||
|
"description": "Logs per second rate",
|
||||||
|
"fieldConfig": {
|
||||||
|
"defaults": {
|
||||||
|
"color": {
|
||||||
|
"mode": "thresholds"
|
||||||
|
},
|
||||||
|
"mappings": [],
|
||||||
|
"thresholds": {
|
||||||
|
"mode": "absolute",
|
||||||
|
"steps": [
|
||||||
|
{
|
||||||
|
"color": "green",
|
||||||
|
"value": null
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"color": "yellow",
|
||||||
|
"value": 50
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"color": "red",
|
||||||
|
"value": 200
|
||||||
|
}
|
||||||
|
]
|
||||||
|
},
|
||||||
|
"unit": "logs/s"
|
||||||
|
},
|
||||||
|
"overrides": []
|
||||||
|
},
|
||||||
|
"gridPos": {
|
||||||
|
"h": 6,
|
||||||
|
"w": 6,
|
||||||
|
"x": 18,
|
||||||
|
"y": 40
|
||||||
|
},
|
||||||
|
"id": 8,
|
||||||
|
"options": {
|
||||||
|
"colorMode": "value",
|
||||||
|
"graphMode": "area",
|
||||||
|
"justifyMode": "auto",
|
||||||
|
"orientation": "auto",
|
||||||
|
"reduceOptions": {
|
||||||
|
"values": false,
|
||||||
|
"calcs": ["lastNotNull"],
|
||||||
|
"fields": ""
|
||||||
|
},
|
||||||
|
"textMode": "auto"
|
||||||
|
},
|
||||||
|
"pluginVersion": "10.2.3",
|
||||||
|
"targets": [
|
||||||
|
{
|
||||||
|
"datasource": {
|
||||||
|
"type": "loki",
|
||||||
|
"uid": "${datasource}"
|
||||||
|
},
|
||||||
|
"editorMode": "code",
|
||||||
|
"expr": "sum(rate({job=\"docker_all\", container=~\"$container\", image=~\"$image\"} |~ \"$search\" [$__rate_interval]))",
|
||||||
|
"legendFormat": "Rate",
|
||||||
|
"queryType": "instant",
|
||||||
|
"refId": "A"
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"title": "Log Rate",
|
||||||
|
"type": "stat"
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"refresh": "10s",
|
||||||
|
"schemaVersion": 38,
|
||||||
|
"style": "dark",
|
||||||
|
"tags": ["docker", "logs", "loki"],
|
||||||
|
"templating": {
|
||||||
|
"list": [
|
||||||
|
{
|
||||||
|
"current": {
|
||||||
|
"selected": false,
|
||||||
|
"text": "Loki",
|
||||||
|
"value": "Loki"
|
||||||
|
},
|
||||||
|
"hide": 0,
|
||||||
|
"includeAll": false,
|
||||||
|
"label": "Datasource",
|
||||||
|
"multi": false,
|
||||||
|
"name": "datasource",
|
||||||
|
"options": [],
|
||||||
|
"query": "loki",
|
||||||
|
"refresh": 1,
|
||||||
|
"regex": "",
|
||||||
|
"skipUrlSync": false,
|
||||||
|
"type": "datasource"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"allValue": ".*",
|
||||||
|
"current": {
|
||||||
|
"selected": true,
|
||||||
|
"text": "All",
|
||||||
|
"value": "$__all"
|
||||||
|
},
|
||||||
|
"datasource": {
|
||||||
|
"type": "loki",
|
||||||
|
"uid": "${datasource}"
|
||||||
|
},
|
||||||
|
"definition": "label_values(container)",
|
||||||
|
"hide": 0,
|
||||||
|
"includeAll": true,
|
||||||
|
"label": "Container",
|
||||||
|
"multi": true,
|
||||||
|
"name": "container",
|
||||||
|
"options": [],
|
||||||
|
"query": {
|
||||||
|
"qryType": 1,
|
||||||
|
"query": "label_values(container)"
|
||||||
|
},
|
||||||
|
"refresh": 1,
|
||||||
|
"regex": "",
|
||||||
|
"skipUrlSync": false,
|
||||||
|
"sort": 1,
|
||||||
|
"type": "query"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"allValue": ".*",
|
||||||
|
"current": {
|
||||||
|
"selected": true,
|
||||||
|
"text": "All",
|
||||||
|
"value": "$__all"
|
||||||
|
},
|
||||||
|
"datasource": {
|
||||||
|
"type": "loki",
|
||||||
|
"uid": "${datasource}"
|
||||||
|
},
|
||||||
|
"definition": "label_values(image)",
|
||||||
|
"hide": 0,
|
||||||
|
"includeAll": true,
|
||||||
|
"label": "Image",
|
||||||
|
"multi": true,
|
||||||
|
"name": "image",
|
||||||
|
"options": [],
|
||||||
|
"query": {
|
||||||
|
"qryType": 1,
|
||||||
|
"query": "label_values(image)"
|
||||||
|
},
|
||||||
|
"refresh": 1,
|
||||||
|
"regex": "",
|
||||||
|
"skipUrlSync": false,
|
||||||
|
"sort": 1,
|
||||||
|
"type": "query"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"current": {
|
||||||
|
"selected": false,
|
||||||
|
"text": "",
|
||||||
|
"value": ""
|
||||||
|
},
|
||||||
|
"description": "Search within log messages (regex supported)",
|
||||||
|
"hide": 0,
|
||||||
|
"label": "Search",
|
||||||
|
"name": "search",
|
||||||
|
"options": [
|
||||||
|
{
|
||||||
|
"selected": true,
|
||||||
|
"text": "",
|
||||||
|
"value": ""
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"query": "",
|
||||||
|
"skipUrlSync": false,
|
||||||
|
"type": "textbox"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
},
|
||||||
|
"time": {
|
||||||
|
"from": "now-1h",
|
||||||
|
"to": "now"
|
||||||
|
},
|
||||||
|
"timepicker": {
|
||||||
|
"refresh_intervals": ["5s", "10s", "30s", "1m", "5m", "15m", "30m", "1h"]
|
||||||
|
},
|
||||||
|
"timezone": "",
|
||||||
|
"title": "Docker Logs - All Containers",
|
||||||
|
"uid": "docker-logs-all",
|
||||||
|
"version": 1,
|
||||||
|
"weekStart": ""
|
||||||
|
}
|
||||||
|
|
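The error panels in the dashboard above all rely on the same case-insensitive LogQL line filter, `|~ "(?i)(error|exception|fatal|panic)"`. A minimal Python sketch of the equivalent matching (the sample log lines are invented for illustration):

```python
import re

# Same pattern as the LogQL line filter |~ "(?i)(error|exception|fatal|panic)"
ERROR_RE = re.compile(r"(?i)(error|exception|fatal|panic)")

def is_error_line(line: str) -> bool:
    """Return True if a log line would be counted by the error panels."""
    return ERROR_RE.search(line) is not None

# Hypothetical sample lines to show which ones the filter catches.
lines = [
    "2024-01-01T00:00:00Z INFO started",
    "2024-01-01T00:00:01Z ERROR connection refused",
    "Unhandled Exception in worker",
    "kernel: PANIC at init",
]
print(sum(is_error_line(line) for line in lines))  # → 3
```

Only the substring needs to match, anywhere in the line, in any case — which is why "Exception" and "PANIC" count alongside "ERROR".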
@@ -0,0 +1,17 @@
apiVersion: 1

datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100
    isDefault: true
    editable: true
    jsonData:
      maxLines: 1000
      derivedFields:
        # Extract traceID from logs for distributed tracing (optional)
        - datasourceUid: tempo
          matcherRegex: "traceID=(\\w+)"
          name: TraceID
          url: "$${__value.raw}"
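The derived field above pulls a trace ID out of each log line with `matcherRegex`; the first capture group becomes the field's value. A small Python sketch of that extraction (the example log line is made up):

```python
import re

# Same pattern as the derived field's matcherRegex: traceID=(\w+)
TRACE_RE = re.compile(r"traceID=(\w+)")

def extract_trace_id(line: str):
    """Return the captured trace ID, or None if the line has no traceID field."""
    match = TRACE_RE.search(line)
    return match.group(1) if match else None

print(extract_trace_id('level=info msg="done" traceID=abc123'))  # → abc123
print(extract_trace_id("no trace here"))                          # → None
```

Lines without a `traceID=` token simply get no derived field, so the Tempo link only appears where it can resolve.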
53  compose/monitoring/logging/loki-config.yaml  Normal file
@@ -0,0 +1,53 @@
auth_enabled: false

server:
  http_listen_port: 3100
  grpc_listen_port: 9096

common:
  instance_addr: 127.0.0.1
  path_prefix: /loki
  storage:
    filesystem:
      chunks_directory: /loki/chunks
      rules_directory: /loki/rules
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory

query_range:
  results_cache:
    cache:
      embedded_cache:
        enabled: true
        max_size_mb: 100

schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h

ruler:
  alertmanager_url: http://localhost:9093

# Retention - keeps logs for 30 days
limits_config:
  retention_period: 30d
  ingestion_rate_mb: 10
  ingestion_burst_size_mb: 20
  allow_structured_metadata: false

# Cleanup old logs
compactor:
  working_directory: /loki/compactor
  compaction_interval: 10m
  retention_enabled: true
  retention_delete_delay: 2h
  retention_delete_worker_count: 150
  delete_request_store: filesystem
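The config above uses Loki's duration shorthand (`30d` retention, `2h` delete delay, `10m` compaction interval). A rough sketch of how that shorthand maps to seconds — simplified, since Loki's actual parser accepts more unit forms than this:

```python
# Rough conversion of duration shorthand like "30d", "2h", "10m" into seconds.
# Simplified for illustration; Loki's own parser accepts additional units.
UNIT_SECONDS = {"s": 1, "m": 60, "h": 3600, "d": 86400}

def duration_seconds(value: str) -> int:
    """Convert e.g. '30d' -> 2592000 by splitting the number from its unit."""
    number, unit = int(value[:-1]), value[-1]
    return number * UNIT_SECONDS[unit]

print(duration_seconds("30d"))  # → 2592000  (the retention_period above)
print(duration_seconds("2h"))   # → 7200     (the retention_delete_delay)
```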
70  compose/monitoring/logging/promtail-config.yaml  Normal file
@@ -0,0 +1,70 @@
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  # Docker containers logs
  - job_name: docker
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
        refresh_interval: 5s
        filters:
          - name: label
            values: ["logging=promtail"]

    relabel_configs:
      # Use container name as job
      - source_labels: ['__meta_docker_container_name']
        regex: '/(.*)'
        target_label: 'container'

      # Use image name
      - source_labels: ['__meta_docker_container_image']
        target_label: 'image'

      # Use container ID
      - source_labels: ['__meta_docker_container_id']
        target_label: 'container_id'

      # Add all docker labels as labels
      - action: labelmap
        regex: __meta_docker_container_label_(.+)

  # All Docker containers (fallback)
  - job_name: docker_all
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
        refresh_interval: 5s

    relabel_configs:
      - source_labels: ['__meta_docker_container_name']
        regex: '/(.*)'
        target_label: 'container'

      - source_labels: ['__meta_docker_container_image']
        target_label: 'image'

      - source_labels: ['__meta_docker_container_log_stream']
        target_label: 'stream'

      # Extract compose project and service
      - source_labels: ['__meta_docker_container_label_com_docker_compose_project']
        target_label: 'compose_project'

      - source_labels: ['__meta_docker_container_label_com_docker_compose_service']
        target_label: 'compose_service'

  # System logs
  - job_name: system
    static_configs:
      - targets:
          - localhost
        labels:
          job: varlogs
          __path__: /var/log/*log
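The `regex: '/(.*)'` relabel rule above exists because Docker service discovery reports container names with a leading slash; relabel regexes are fully anchored, and the capture group becomes the `container` label value. A Python sketch of the same behavior:

```python
import re

# The relabel rule's pattern: '/(.*)'. Prometheus-style relabeling anchors the
# regex to the full string, so fullmatch() is the right equivalent here.
NAME_RE = re.compile(r"/(.*)")

def container_label(meta_name: str) -> str:
    """Strip the leading slash Docker puts on container names; pass through otherwise."""
    match = NAME_RE.fullmatch(meta_name)
    return match.group(1) if match else meta_name

print(container_label("/loki"))         # → loki
print(container_label("/uptime-kuma"))  # → uptime-kuma
```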
@@ -29,6 +29,7 @@ services:
   # SSO Protection (optional - Uptime Kuma has its own auth)
   # Uncomment to require SSO:
+  # traefik.http.routers.uptime-kuma.middlewares: tinyauth

   # Homarr Discovery
   homarr.name: Uptime Kuma (Status)
@@ -29,6 +29,7 @@ services:
   traefik.http.services.freshrss.loadbalancer.passhostheader: true

   # SSO Protection removed - using FreshRSS built-in auth
+  # traefik.http.routers.freshrss.middlewares: tinyauth

   # Homarr Discovery
   homarr.name: FreshRSS
@@ -27,7 +27,7 @@ services:
   traefik.http.services.backrest.loadbalancer.server.port: 9898

   # Require authentication and restrict to local network
-  traefik.http.routers.backrest.middlewares: local-only
+  traefik.http.routers.backrest.middlewares: tinyauth,local-only

   # Homarr Discovery
   homarr.name: Backrest Backup
@ -1,34 +0,0 @@
|
||||||
# BentoPDF - Privacy-first, client-side PDF toolkit
|
|
||||||
# Docs: https://github.com/alam00000/bentopdf
|
|
||||||
|
|
||||||
services:
|
|
||||||
bentopdf:
|
|
||||||
container_name: bentopdf
|
|
||||||
image: bentopdf/bentopdf:latest
|
|
||||||
restart: unless-stopped
|
|
||||||
|
|
||||||
networks:
|
|
||||||
- homelab
|
|
||||||
|
|
||||||
labels:
|
|
||||||
# Traefik
|
|
||||||
traefik.enable: true
|
|
||||||
traefik.docker.network: homelab
|
|
||||||
|
|
||||||
# Web UI
|
|
||||||
traefik.http.routers.bentopdf.rule: Host(`pdf.fig.systems`)
|
|
||||||
traefik.http.routers.bentopdf.entrypoints: websecure
|
|
||||||
traefik.http.routers.bentopdf.tls.certresolver: letsencrypt
|
|
||||||
traefik.http.services.bentopdf.loadbalancer.server.port: 8080
|
|
||||||
|
|
||||||
# SSO Protection
|
|
||||||
traefik.http.routers.bentopdf.middlewares: authelia
|
|
||||||
|
|
||||||
# Homarr Discovery
|
|
||||||
homarr.name: BentoPDF (PDF Tools)
|
|
||||||
homarr.group: Services
|
|
||||||
homarr.icon: mdi:file-pdf-box
|
|
||||||
|
|
||||||
networks:
|
|
||||||
homelab:
|
|
||||||
external: true
|
|
||||||
8  compose/services/booklore/.env  Normal file
@@ -0,0 +1,8 @@
# Booklore Configuration

# Timezone
TZ=America/Los_Angeles

# User and Group IDs
PUID=1000
PGID=1000
40  compose/services/booklore/compose.yaml  Normal file
@@ -0,0 +1,40 @@
# Booklore - Book tracking and management
# Docs: https://github.com/lorebooks/booklore

services:
  booklore:
    container_name: booklore
    image: ghcr.io/lorebooks/booklore:latest
    restart: unless-stopped

    env_file:
      - .env

    volumes:
      - ./data:/app/data

    networks:
      - homelab

    labels:
      # Traefik
      traefik.enable: true
      traefik.docker.network: homelab

      # Web UI
      traefik.http.routers.booklore.rule: Host(`booklore.fig.systems`)
      traefik.http.routers.booklore.entrypoints: websecure
      traefik.http.routers.booklore.tls.certresolver: letsencrypt
      traefik.http.services.booklore.loadbalancer.server.port: 3000

      # SSO Protection
      traefik.http.routers.booklore.middlewares: tinyauth

      # Homarr Discovery
      homarr.name: Booklore
      homarr.group: Services
      homarr.icon: mdi:book-open-variant

networks:
  homelab:
    external: true
11  compose/services/calibre-web/.env  Normal file
@@ -0,0 +1,11 @@
# Calibre-web Configuration

# Timezone
TZ=America/Los_Angeles

# User and Group IDs
PUID=1000
PGID=1000

# Docker mods (optional - for ebook conversion)
# DOCKER_MODS=linuxserver/mods:universal-calibre
11  compose/services/calibre-web/compose.yaml  Normal file
@@ -0,0 +1,11 @@
# Calibre-web - Web app for browsing, reading and downloading eBooks
# Docs: https://hub.docker.com/r/linuxserver/calibre-web

services:
  calibre-web:
    container_name: calibre-web
    image: lscr.io/linuxserver/calibre-web:latest

    env_file:
      - .env
@@ -30,7 +30,7 @@ services:
   traefik.http.services.code-server.loadbalancer.server.port: 8443

   # SSO Protection and restrict to local network
-  traefik.http.routers.code-server.middlewares: local-only
+  traefik.http.routers.code-server.middlewares: tinyauth,local-only

   # Homarr Discovery
   homarr.name: Code Server (IDE)
@@ -1,35 +0,0 @@
-# Dockhand - Docker Management UI
-# Source: https://github.com/fnsys/dockhand
-
-services:
-  dockhand:
-    image: fnsys/dockhand:latest
-    container_name: dockhand
-    restart: unless-stopped
-    user: "0:0"
-    env_file:
-      - .env
-    volumes:
-      - /var/run/docker.sock:/var/run/docker.sock
-      - ./data:/app/data
-    networks:
-      - homelab
-    labels:
-      traefik.enable: true
-      traefik.docker.network: homelab
-      traefik.http.routers.dockhand.rule: Host(`dockhand.fig.systems`)
-      traefik.http.routers.dockhand.entrypoints: websecure
-      traefik.http.routers.dockhand.tls.certresolver: letsencrypt
-      traefik.http.services.dockhand.loadbalancer.server.port: 3000
-
-      # SSO Protection
-      traefik.http.routers.dockhand.middlewares: authelia
-
-      # Homarr Discovery
-      homarr.name: Dockhand
-      homarr.group: Infrastructure
-      homarr.icon: mdi:docker
-
-networks:
-  homelab:
-    external: true
@@ -1,36 +1,11 @@
-version: '2'
+# File Browser - Web-based file manager
+# Docs: https://filebrowser.org/
+
 services:
-  app:
-    container_name: filestash
-    image: machines/filestash:latest
-    restart: always
-    environment:
-      - APPLICATION_URL=
-      - CANARY=true
-      - OFFICE_URL=http://wopi_server:9980
-      - OFFICE_FILESTASH_URL=http://app:8334
-      - OFFICE_REWRITE_URL=http://127.0.0.1:9980
-    ports:
-      - "8334:8334"
-    volumes:
-      - filestash:/app/data/state/
-
-  wopi_server:
-    container_name: filestash_wopi
-    image: collabora/code:24.04.10.2.1
-    restart: always
-    environment:
-      - "extra_params=--o:ssl.enable=false"
-      - aliasgroup1="https://.*:443"
-    command:
-      - /bin/bash
-      - -c
-      - |
-        curl -o /usr/share/coolwsd/browser/dist/branding-desktop.css https://gist.githubusercontent.com/mickael-kerjean/bc1f57cd312cf04731d30185cc4e7ba2/raw/d706dcdf23c21441e5af289d871b33defc2770ea/destop.css
-        /bin/su -s /bin/bash -c '/start-collabora-online.sh' cool
-    user: root
-    ports:
-      - "9980:9980"
-
-volumes:
-  filestash: {}
+  filebrowser:
+    container_name: filebrowser
+    image: filebrowser/filebrowser:latest
+
+    env_file:
+      - .env
14  compose/services/homarr/.env  Normal file
@@ -0,0 +1,14 @@
# Homarr Configuration

# Timezone
TZ=America/Los_Angeles

# Base path (if behind reverse proxy with path)
# BASE_URL=/dashboard

# Port (default: 7575)
PORT=7575

# Authentication
# AUTH_PROVIDER=oidc  # For SSO integration
# DEFAULT_COLOR_SCHEME=dark
332
compose/services/homarr/README.md
Normal file
332
compose/services/homarr/README.md
Normal file
|
|
@ -0,0 +1,332 @@
|
||||||
|
# Homarr Dashboard
|
||||||
|
|
||||||
|
Modern, customizable dashboard with automatic Docker service discovery.
|
||||||
|
|
||||||
|
## Features
|
||||||
|
|
||||||
|
- 🎨 **Modern UI** - Beautiful, responsive design
|
||||||
|
- 🔍 **Auto-Discovery** - Automatically finds Docker services
|
||||||
|
- 📊 **Widgets** - System stats, weather, calendar, RSS, etc.
|
||||||
|
- 🏷️ **Labels** - Organize services by category
|
||||||
|
- 🔗 **Integration** - Connects to *arr apps, Jellyfin, etc.
|
||||||
|
- 🎯 **Customizable** - Drag-and-drop layout
|
||||||
|
- 🌙 **Dark Mode** - Built-in dark theme
|
||||||
|
- 📱 **Mobile Friendly** - Works on all devices
|
||||||
|
|
||||||
|
## Access
|
||||||
|
|
||||||
|
- **URL:** https://home.fig.systems or https://home.edfig.dev
|
||||||
|
- **Port:** 7575 (if accessing directly)
|
||||||
|
|
||||||
|
## First-Time Setup
|
||||||
|
|
||||||
|
### 1. Deploy Homarr
|
||||||
|
|
||||||
|
```bash
|
||||||
|
cd compose/services/homarr
|
||||||
|
docker compose up -d
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Access Dashboard
|
||||||
|
|
||||||
|
Open https://home.fig.systems in your browser.
|
||||||
|
|
||||||
|
### 3. Auto-Discovery
|
||||||
|
|
||||||
|
Homarr will automatically detect services with these labels:
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
labels:
|
||||||
|
homarr.name: "Service Name"
|
||||||
|
homarr.group: "Category"
|
||||||
|
homarr.icon: "/icons/service.png"
|
||||||
|
homarr.href: "https://service.fig.systems"
|
||||||
|
```
|
||||||
|
|
||||||
|
## Adding Services to Dashboard
|
||||||
|
|
||||||
|
### Automatic (Recommended)
|
||||||
|
|
||||||
|
Add labels to your service's `compose.yaml`:
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
labels:
|
||||||
|
# Traefik labels...
|
||||||
|
traefik.enable: true
|
||||||
|
# ... etc
|
||||||
|
|
||||||
|
# Homarr labels
|
||||||
|
homarr.name: Jellyfin
|
||||||
|
homarr.group: Media
|
||||||
|
homarr.icon: https://cdn.jsdelivr.net/gh/walkxcode/dashboard-icons/png/jellyfin.png
|
||||||
|
homarr.href: https://flix.fig.systems
|
||||||
|
```
|
||||||
|
|
||||||
|
Redeploy the service:
|
||||||
|
```bash
|
||||||
|
docker compose up -d
|
||||||
|
```
|
||||||
|
|
||||||
|
Homarr will automatically add it to the dashboard!
|
||||||
|
|
||||||
|
### Manual
|
||||||
|
|
||||||
|
1. Click the "+" button in Homarr
|
||||||
|
2. Select "Add Service"
|
||||||
|
3. Fill in:
|
||||||
|
- **Name:** Service name
|
||||||
|
- **URL:** https://service.fig.systems
|
||||||
|
- **Icon:** Choose from library or custom URL
|
||||||
|
- **Category:** Group services (Media, Services, etc.)
|
||||||
|
|
||||||
|
## Integration with Services
|
||||||
|
|
||||||
|
### Jellyfin
|
||||||
|
|
||||||
|
Add to Jellyfin's `compose.yaml`:
|
||||||
|
```yaml
|
||||||
|
labels:
|
||||||
|
homarr.name: Jellyfin
|
||||||
|
homarr.group: Media
|
||||||
|
homarr.icon: /icons/jellyfin.png
|
||||||
|
homarr.widget.type: jellyfin
|
||||||
|
homarr.widget.url: http://jellyfin:8096
|
||||||
|
homarr.widget.key: ${JELLYFIN_API_KEY}
|
||||||
|
```
|
||||||
|
|
||||||
|
Shows: Currently playing, library stats
|
||||||
|
|
||||||
|
### Sonarr/Radarr
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
labels:
|
||||||
|
homarr.name: Sonarr
|
||||||
|
homarr.group: Media Automation
|
||||||
|
homarr.icon: /icons/sonarr.png
|
||||||
|
homarr.widget.type: sonarr
|
||||||
|
homarr.widget.url: http://sonarr:8989
|
||||||
|
homarr.widget.key: ${SONARR_API_KEY}
|
||||||
|
```
|
||||||
|
|
||||||
|
Shows: Queue, calendar, missing episodes
|
||||||
|
|
||||||
|
### qBittorrent
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
labels:
|
||||||
|
homarr.name: qBittorrent
|
||||||
|
homarr.group: Downloads
|
||||||
|
homarr.icon: /icons/qbittorrent.png
|
||||||
|
homarr.widget.type: qbittorrent
|
||||||
|
homarr.widget.url: http://qbittorrent:8080
|
||||||
|
homarr.widget.username: ${QBIT_USERNAME}
|
||||||
|
homarr.widget.password: ${QBIT_PASSWORD}
|
||||||
|
```
|
||||||
|
|
||||||
|
Shows: Active torrents, download speed
|
||||||
|
|
||||||
|
## Available Widgets
|
||||||
|
|
||||||
|
### System Monitoring
|
||||||
|
- **CPU Usage** - Real-time CPU stats
|
||||||
|
- **Memory Usage** - RAM usage
|
||||||
|
- **Disk Space** - Storage capacity
|
||||||
|
- **Network** - Upload/download speeds
|
||||||
|
|
||||||
|
### Services
|
||||||
|
- **Jellyfin** - Media server stats
|
||||||
|
- **Sonarr** - TV show automation
|
||||||
|
- **Radarr** - Movie automation
|
||||||
|
- **Lidarr** - Music automation
|
||||||
|
- **Readarr** - Book automation
|
||||||
|
- **Prowlarr** - Indexer management
|
||||||
|
- **SABnzbd** - Usenet downloads
|
||||||
|
- **qBittorrent** - Torrent downloads
|
||||||
|
- **Overseerr/Jellyseerr** - Media requests
|
||||||
|
|
||||||
|
### Utilities
|
||||||
|
- **Weather** - Local weather forecast
|
||||||
|
- **Calendar** - Events and tasks
|
||||||
|
- **RSS Feeds** - News aggregator
|
||||||
|
- **Docker** - Container status
|
||||||
|
- **Speed Test** - Internet speed
|
||||||
|
- **Notes** - Sticky notes
|
||||||
|
- **Iframe** - Embed any website
|
||||||
|
|
||||||
|
## Customization
|
||||||
|
|
||||||
|
### Change Theme
|
||||||
|
|
||||||
|
1. Click settings icon (⚙️)
|
||||||
|
2. Go to "Appearance"
|
||||||
|
3. Choose color scheme
|
||||||
|
4. Save
|
||||||
|
|
||||||
|
### Reorganize Layout
|
||||||
|
|
||||||
|
1. Click edit mode (✏️)
|
||||||
|
2. Drag and drop services
|
||||||
|
3. Resize widgets
|
||||||
|
4. Click save
|
||||||
|
|
||||||
|
### Add Categories
|
||||||
|
|
||||||
|
1. Click "Add Category"
|
||||||
|
2. Name it (e.g., "Media", "Tools", "Infrastructure")
|
||||||
|
3. Drag services into categories
|
||||||
|
4. Collapse/expand as needed
|
||||||
|
|
||||||
|
### Custom Icons
|
||||||
|
|
||||||
|
**Option 1: Use Icon Library**
|
||||||
|
- Homarr includes icons from [Dashboard Icons](https://github.com/walkxcode/dashboard-icons)
|
||||||
|
- Search by service name
|
||||||
|
|
||||||
|
**Option 2: Custom URL**
|
||||||
|
```
|
||||||
|
https://cdn.jsdelivr.net/gh/walkxcode/dashboard-icons/png/service.png
|
||||||
|
```
|
||||||
|
|
||||||
|
**Option 3: Local Icons**
|
||||||
|
- Place in `./icons/` directory
|
||||||
|
- Reference as `/icons/service.png`
|
||||||
|
|
||||||
|
## Recommended Dashboard Layout

```
┌─────────────────────────────────────────┐
│  🏠 Homelab Dashboard                   │
├─────────────────────────────────────────┤
│  [System Stats] [Weather] [Calendar]    │
├─────────────────────────────────────────┤
│  📺 Media                               │
│  [Jellyfin] [Jellyseerr] [Immich]       │
├─────────────────────────────────────────┤
│  🤖 Media Automation                    │
│  [Sonarr] [Radarr] [qBittorrent]        │
├─────────────────────────────────────────┤
│  🛠️ Services                            │
│  [Linkwarden] [Vikunja] [FreshRSS]      │
├─────────────────────────────────────────┤
│  🔧 Infrastructure                      │
│  [Traefik] [LLDAP] [Tinyauth]           │
└─────────────────────────────────────────┘
```
## Add to All Services

To make all your services auto-discoverable, add these labels:

### Jellyfin

```yaml
homarr.name: Jellyfin
homarr.group: Media
homarr.icon: https://cdn.jsdelivr.net/gh/walkxcode/dashboard-icons/png/jellyfin.png
```

### Jellyseerr

```yaml
homarr.name: Jellyseerr
homarr.group: Media
homarr.icon: https://cdn.jsdelivr.net/gh/walkxcode/dashboard-icons/png/jellyseerr.png
```

### Immich

```yaml
homarr.name: Immich Photos
homarr.group: Media
homarr.icon: https://cdn.jsdelivr.net/gh/walkxcode/dashboard-icons/png/immich.png
```

### Sonarr/Radarr/SABnzbd/qBittorrent

```yaml
homarr.name: [Service]
homarr.group: Automation
homarr.icon: https://cdn.jsdelivr.net/gh/walkxcode/dashboard-icons/png/[service].png
```

### Linkwarden/Vikunja/etc.

```yaml
homarr.name: [Service]
homarr.group: Utilities
homarr.icon: https://cdn.jsdelivr.net/gh/walkxcode/dashboard-icons/png/[service].png
```
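Since the label blocks all follow one pattern, they can be generated in one pass. A hedged sketch (it assumes the display name and the icon slug coincide with the lowercase service name, which is not true for every service):

```shell
# Emit Homarr discovery labels for several services in one pass.
for svc in sonarr radarr qbittorrent; do
  printf 'homarr.name: %s\n' "$svc"
  printf 'homarr.group: Automation\n'
  printf 'homarr.icon: https://cdn.jsdelivr.net/gh/walkxcode/dashboard-icons/png/%s.png\n\n' "$svc"
done
```

Paste the output under each service's `labels:` section, adjusting names where the display name differs.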
## Mobile Access

Homarr is fully responsive. For the best mobile experience:

1. Add it to your home screen (iOS/Android)
2. It works as a PWA (Progressive Web App)
3. The interface is touch-optimized

## Backup Configuration

### Backup

```bash
cd compose/services/homarr
tar -czf homarr-backup-$(date +%Y%m%d).tar.gz config/ data/
```

### Restore

```bash
cd compose/services/homarr
tar -xzf homarr-backup-YYYYMMDD.tar.gz
docker compose restart
```
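It is worth verifying a backup archive right after creating it. A sketch using stand-in files in a temp directory (in practice, run the `tar` commands from `compose/services/homarr`):

```shell
# Create a dated backup archive and list its contents to verify it.
cd "$(mktemp -d)"
mkdir -p config data
echo '{}' > config/settings.json   # stand-in for real config (hypothetical file)
backup="homarr-backup-$(date +%Y%m%d).tar.gz"
tar -czf "$backup" config/ data/
tar -tzf "$backup"
```

If `tar -tzf` lists the expected paths without errors, the archive is intact.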
## Troubleshooting

### Services not auto-discovered

Check Docker socket permissions:

```bash
docker logs homarr
```

Verify the labels on the service:

```bash
docker inspect service-name | grep homarr
```

### Can't connect to services

Services must be on the same Docker network or accessible via hostname.

Use container names, not `localhost`:
- ✅ `http://jellyfin:8096`
- ❌ `http://localhost:8096`

### Widgets not working

1. Check that API keys are correct
2. Verify service URLs (use container names)
3. Check that the service is running: `docker ps`
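As a quick illustration of the container-name rule, a widget URL that points at `localhost` can be rewritten mechanically (`jellyfin` here is this stack's example container name):

```shell
# Rewrite a localhost service URL to use the container name instead.
url="http://localhost:8096"
echo "$url" | sed 's/localhost/jellyfin/'
# → http://jellyfin:8096
```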
## Alternatives Considered

| Dashboard | Auto-Discovery | Widgets | Complexity |
|-----------|---------------|---------|------------|
| **Homarr** | ✅ Excellent | ✅ Many | Low |
| Homepage | ✅ Good | ✅ Many | Low |
| Heimdall | ❌ Manual | ❌ Few | Very Low |
| Dashy | ⚠️ Limited | ✅ Some | Medium |
| Homer | ❌ Manual | ❌ None | Very Low |
| Organizr | ⚠️ Limited | ✅ Many | High |

**Homarr chosen for:** the best balance of features, auto-discovery, and ease of use.

## Resources

- [Official Docs](https://homarr.dev/docs)
- [GitHub](https://github.com/ajnart/homarr)
- [Discord Community](https://discord.gg/aCsmEV5RgA)
- [Icon Library](https://github.com/walkxcode/dashboard-icons)

## Tips

1. **Start Simple** - Add core services first, expand later
2. **Use Categories** - Group related services
3. **Enable Widgets** - Make the dashboard informative
4. **Mobile First** - Test on phone/tablet
5. **Backup Config** - Save your layout regularly
39	compose/services/homarr/compose.yaml	Normal file
@@ -0,0 +1,39 @@
# Homarr - Modern dashboard with Docker auto-discovery
# Docs: https://homarr.dev/docs/getting-started/installation
# GitHub: https://github.com/ajnart/homarr

services:
  homarr:
    container_name: homarr
    image: ghcr.io/ajnart/homarr:latest
    restart: unless-stopped

    env_file:
      - .env

    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./configs:/app/data/configs
      - ./icons:/app/public/icons
      - ./data:/data

    networks:
      - homelab

    labels:
      # Traefik
      traefik.enable: true
      traefik.docker.network: homelab

      # Web UI
      traefik.http.routers.homarr.rule: Host(`dashboard.fig.systems`)
      traefik.http.routers.homarr.entrypoints: websecure
      traefik.http.routers.homarr.tls.certresolver: letsencrypt
      traefik.http.services.homarr.loadbalancer.server.port: 7575

      # Optional: SSO Protection (disabled for dashboard access)
      # traefik.http.routers.homarr.middlewares: tinyauth

networks:
  homelab:
    external: true
1847	compose/services/homarr/configs/default.json	Normal file
File diff suppressed because it is too large
@@ -30,6 +30,7 @@ services:
       traefik.http.services.homepage.loadbalancer.server.port: 3000

       # Optional: SSO Protection (disabled for easy dashboard access)
+      # traefik.http.routers.homepage.middlewares: tinyauth

 networks:
   homelab:
543	compose/services/karakeep/README.md	Normal file
@@ -0,0 +1,543 @@
# Karakeep - Bookmark Everything App

AI-powered bookmark manager for links, notes, images, and PDFs with automatic tagging and full-text search.

## Overview

**Karakeep** (previously known as Hoarder) is a self-hostable bookmark-everything app:

- ✅ **Bookmark Everything**: Links, notes, images, PDFs
- ✅ **AI-Powered**: Automatic tagging and summarization
- ✅ **Full-Text Search**: Find anything instantly with Meilisearch
- ✅ **Web Archiving**: Save complete webpages (full page archive)
- ✅ **Browser Extensions**: Chrome and Firefox support
- ✅ **Mobile Apps**: iOS and Android apps available
- ✅ **Ollama Support**: Use local AI models (no cloud required!)
- ✅ **OCR**: Extract text from images
- ✅ **Self-Hosted**: Full control of your data

## Quick Start
### 1. Configure Secrets

```bash
cd ~/homelab/compose/services/karakeep

# Edit .env and update:
# - NEXTAUTH_SECRET (generate with: openssl rand -base64 36)
# - MEILI_MASTER_KEY (generate with: openssl rand -base64 36)
nano .env
```
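The two secrets can also be generated non-interactively; `openssl rand -base64 36` emits 36 random bytes as 48 base64 characters:

```shell
# Generate the two required secrets (the values differ on every run).
NEXTAUTH_SECRET=$(openssl rand -base64 36)
MEILI_MASTER_KEY=$(openssl rand -base64 36)
echo "${#NEXTAUTH_SECRET} ${#MEILI_MASTER_KEY}"
# → 48 48
```

Copy each value into the matching line of `.env`.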
### 2. Deploy

```bash
docker compose up -d
```

### 3. Access

Go to: **https://links.fig.systems**

**First-time setup:**
1. Create your admin account
2. Start bookmarking!
## Features

### Bookmark Types

**1. Web Links**
- Save any URL
- Automatic screenshot capture
- Full webpage archiving
- Extracts title, description, and favicon
- AI-generated summary and tags

**2. Notes**
- Quick text notes
- Markdown support
- AI-powered categorization
- Full-text searchable

**3. Images**
- Upload images directly
- OCR text extraction (if enabled)
- AI-based tagging
- Image search

**4. PDFs**
- Upload PDF documents
- Full-text indexing
- Searchable content
### AI Features

Karakeep can use AI to automatically:
- **Tag** your bookmarks
- **Summarize** web content
- **Extract** key information
- **Organize** by category

**Three AI options:**

**1. Ollama (Recommended - Local & Free)**
```env
# In .env, uncomment:
OLLAMA_BASE_URL=http://ollama:11434
INFERENCE_TEXT_MODEL=llama3.2:3b
INFERENCE_IMAGE_MODEL=llava:7b
```

**2. OpenAI**
```env
OPENAI_API_KEY=sk-...
OPENAI_BASE_URL=https://api.openai.com/v1
INFERENCE_TEXT_MODEL=gpt-4o-mini
```

**3. OpenRouter (multiple providers)**
```env
OPENAI_API_KEY=sk-or-v1-...
OPENAI_BASE_URL=https://openrouter.ai/api/v1
INFERENCE_TEXT_MODEL=anthropic/claude-3.5-sonnet
```
|
### Web Archiving

Karakeep saves complete web pages for offline viewing:
- **Full HTML archive**
- **Screenshots** of the page
- **Extracted text** for search
- **Works offline** - view archived pages anytime

### Search

Powered by Meilisearch:
- **Instant** full-text search
- **Fuzzy matching** - finds similar terms
- **Filter by** type, tags, dates
- **Search across** titles, content, tags, notes
### Browser Extensions

**Install extensions:**
- [Chrome Web Store](https://chromewebstore.google.com/detail/karakeep/kbkejgonjhbmhcaofkhdegeoeoemgkdm)
- [Firefox Add-ons](https://addons.mozilla.org/en-US/firefox/addon/karakeep/)

**Configure the extension:**
1. Install the extension
2. Click the extension icon
3. Enter the server URL: `https://links.fig.systems`
4. Log in with your credentials
5. Save bookmarks from any page!

### Mobile Apps

**Download apps:**
- [iOS App Store](https://apps.apple.com/app/karakeep/id6479258022)
- [Android Google Play](https://play.google.com/store/apps/details?id=app.karakeep.mobile)

**Setup:**
1. Install the app
2. Open the app
3. Enter the server: `https://links.fig.systems`
4. Log in
5. Bookmark on the go!
## Configuration

### Basic Settings

**Disable public signups:**
```env
DISABLE_SIGNUPS=true
```

**Set max file size (100MB default):**
```env
MAX_ASSET_SIZE_MB=100
```

**Enable OCR for multiple languages:**
```env
OCR_LANGS=eng,spa,fra,deu
```
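`OCR_LANGS` is a comma-separated list of language codes (the values above look like Tesseract-style codes). A sketch assembling the value from a whitespace-separated list:

```shell
# Build a comma-separated OCR_LANGS value from a list of language codes.
langs="eng spa fra deu"
echo "OCR_LANGS=$(printf '%s' "$langs" | tr ' ' ',')"
# → OCR_LANGS=eng,spa,fra,deu
```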
### Ollama Integration

**Prerequisites:**
1. Deploy the Ollama service (see `compose/services/ollama/`)
2. Pull the models: `docker exec ollama ollama pull llama3.2:3b`

**Enable in Karakeep:**
```env
# In karakeep/.env
OLLAMA_BASE_URL=http://ollama:11434
INFERENCE_TEXT_MODEL=llama3.2:3b
INFERENCE_IMAGE_MODEL=llava:7b
INFERENCE_LANG=en
```

**Restart:**
```bash
docker compose restart
```

**Recommended models:**
- **Text**: llama3.2:3b (fast, good quality)
- **Images**: llava:7b (vision model)
- **Advanced**: llama3.3:70b (slower, better results)
### Advanced Settings

**Custom logging:**
```env
LOG_LEVEL=debug  # Options: debug, info, warn, error
```

**Custom data directory:**
```env
DATADIR=/custom/path
```

**Chrome timeout (for slow sites):**
```env
# Add to compose.yaml environment section
BROWSER_TIMEOUT=60000  # 60 seconds
```
|
## Usage Workflows

### 1. Bookmark a Website

**Via Browser:**
1. Click the Karakeep extension
2. The bookmark opens automatically
3. AI generates tags and a summary
4. Edit tags/notes if needed
5. Save

**Via Mobile:**
1. Open the share menu
2. Select Karakeep
3. Bookmark saved

**Manually:**
1. Open Karakeep
2. Click the "+" button
3. Paste the URL
4. Click Save

### 2. Quick Note

1. Open Karakeep
2. Click "+" → "Note"
3. Type your note
4. AI auto-tags it
5. Save

### 3. Upload Image

1. Click "+" → "Image"
2. Upload the image file
3. OCR extracts text (if enabled)
4. AI generates tags
5. Save

### 4. Search Everything

**Simple search:**
- Type in the search box
- Results appear instantly

**Advanced search:**
- Filter by type (links, notes, images)
- Filter by tags
- Filter by date range
- Sort by relevance or date

### 5. Organize with Tags

**Auto-tags:**
- AI generates tags automatically
- Based on content analysis
- Can be edited/removed

**Manual tags:**
- Add your own tags
- Create tag hierarchies
- Color-code tags

**Tag management:**
- Rename tags globally
- Merge duplicate tags
- Delete unused tags
## Browser Extension Usage

### Quick Bookmark

1. **Visit any page**
2. **Click the extension icon** (or use the keyboard shortcut)
3. **Automatically saved** with:
   - URL
   - Title
   - Screenshot
   - Full page archive
   - AI tags and summary

### Save Selection

1. **Highlight text** on any page
2. **Right-click** → "Save to Karakeep"
3. **Saves as a note** with the source URL

### Save Image

1. **Right-click an image**
2. Select "Save to Karakeep"
3. **Image uploaded** with AI tags

## Mobile App Features

- **Share from any app** to Karakeep
- **Quick capture** - bookmark in seconds
- **Offline access** to archived content
- **Search** your entire collection
- **Browse by tags**
- **Dark mode** support
## Data Management

### Backup

**Important data locations:**
```bash
compose/services/karakeep/
├── data/        # Uploaded files, archives
└── meili_data/  # Search index
```

**Backup script:**
```bash
#!/bin/bash
cd ~/homelab/compose/services/karakeep
tar czf karakeep-backup-$(date +%Y%m%d).tar.gz ./data ./meili_data
```
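Dated archives accumulate, so a pruning step pairs well with the backup script above. A sketch that keeps the 7 most recent archives (assumes GNU coreutils for `head -n -7`; the demo runs in a temp dir with empty stand-in archives):

```shell
# Keep only the 7 most recent karakeep backup archives.
cd "$(mktemp -d)"
for d in 01 02 03 04 05 06 07 08 09 10; do
  touch "karakeep-backup-202601${d}.tar.gz"   # stand-in archives
done
ls -1 karakeep-backup-*.tar.gz | sort | head -n -7 | xargs -r rm --
ls -1 karakeep-backup-*.tar.gz | wc -l
# → 7
```

The date in the filename sorts lexicographically, so `sort` orders archives oldest-first and `head -n -7` selects everything except the newest seven for deletion.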
### Export

**Export bookmarks:**
1. Settings → Export
2. Choose a format:
   - JSON (complete data)
   - HTML (browser-compatible)
   - CSV (spreadsheet)
3. Download

### Import

**Import from other services:**
1. Settings → Import
2. Select the source:
   - Browser bookmarks (HTML)
   - Pocket
   - Raindrop.io
   - Omnivore
   - Instapaper
3. Upload the file
4. Karakeep processes and imports it
## Troubleshooting

### Karakeep won't start

**Check logs:**
```bash
docker logs karakeep
docker logs karakeep-chrome
docker logs karakeep-meilisearch
```

**Common issues:**
- Missing `NEXTAUTH_SECRET` in `.env`
- Missing `MEILI_MASTER_KEY` in `.env`
- Services not on the `karakeep_internal` network

### Bookmarks not saving

**Check the Chrome service:**
```bash
docker logs karakeep-chrome
```

**Verify Chrome is accessible:**
```bash
docker exec karakeep curl http://karakeep-chrome:9222
```

**Increase the timeout:**
```env
# Add to .env
BROWSER_TIMEOUT=60000
```
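The two missing-secret cases can be caught with a quick pre-flight check. A sketch (the demo writes a sample `.env` in a temp dir; in practice run the loop against the real `.env` in the service directory):

```shell
# Flag required secrets that are missing or empty in .env.
cd "$(mktemp -d)"
printf 'NEXTAUTH_SECRET=abc123\nMEILI_MASTER_KEY=\n' > .env   # sample file for the demo
for var in NEXTAUTH_SECRET MEILI_MASTER_KEY; do
  if grep -q "^${var}=..*" .env; then
    echo "$var: ok"
  else
    echo "$var: MISSING"
  fi
done
# → NEXTAUTH_SECRET: ok
# → MEILI_MASTER_KEY: MISSING
```

The pattern `^VAR=..*` requires at least one character after the `=`, so it catches both absent and empty assignments.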
### Search not working

**Rebuild the search index:**
```bash
# Stop services
docker compose down

# Remove search data
rm -rf ./meili_data

# Restart (the index rebuilds automatically)
docker compose up -d
```

**Check Meilisearch:**
```bash
docker logs karakeep-meilisearch
```
### AI features not working

**With Ollama:**
```bash
# Verify Ollama is running
docker ps | grep ollama

# Test the Ollama connection
docker exec karakeep curl http://ollama:11434

# Check the models are pulled
docker exec ollama ollama list
```

**With OpenAI/OpenRouter:**
- Verify the API key is correct
- Check your API balance/credits
- Review logs for error messages

### Extension can't connect

**Verify the server URL:**
- It must be `https://links.fig.systems`
- Not `http://` or `localhost`

**Check CORS:**
```env
# Add to .env if needed
CORS_ALLOW_ORIGINS=https://links.fig.systems
```

**Clear extension data:**
1. Open the extension settings
2. Log out
3. Clear extension storage
4. Log in again
### Mobile app issues

**Can't connect:**
- Use the full HTTPS URL
- Ensure the server is accessible externally
- Check firewall rules

**Slow performance:**
- Check network speed
- Reduce image quality in app settings
- Enable "Low data mode"
## Performance Optimization

### For Large Collections (10,000+ bookmarks)

**Increase Meilisearch RAM:**
```yaml
# In compose.yaml, add to karakeep-meilisearch:
deploy:
  resources:
    limits:
      memory: 2G
    reservations:
      memory: 1G
```

**Optimize the search index:**
```env
# In .env
MEILI_MAX_INDEXING_MEMORY=1048576000  # 1GB
```
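`MEILI_MAX_INDEXING_MEMORY` is given in bytes; the value above is 1000 MiB. The conversion, if you want a different budget:

```shell
# Convert a MiB budget to the byte value used above.
mib=1000
echo "MEILI_MAX_INDEXING_MEMORY=$(( mib * 1024 * 1024 ))"
# → MEILI_MAX_INDEXING_MEMORY=1048576000
```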
### For Slow Archiving

**Increase Chrome resources:**
```yaml
# In compose.yaml, add to karakeep-chrome:
deploy:
  resources:
    limits:
      memory: 1G
      cpus: '1.0'
```

**Adjust timeouts:**
```env
BROWSER_TIMEOUT=90000  # 90 seconds
```

### Database Maintenance

**Vacuum (compact) the database:**
```bash
# Karakeep uses SQLite by default
docker exec karakeep sqlite3 /data/karakeep.db "VACUUM;"
```
## Comparison with Linkwarden

| Feature | Karakeep | Linkwarden |
|---------|----------|------------|
| **Bookmark Types** | Links, Notes, Images, PDFs | Links only |
| **AI Tagging** | Yes (Ollama/OpenAI) | No |
| **Web Archiving** | Full page + Screenshot | Screenshot only |
| **Search** | Meilisearch (fuzzy) | Meilisearch |
| **Browser Extension** | Yes | Yes |
| **Mobile Apps** | iOS + Android | No official apps |
| **OCR** | Yes | No |
| **Collaboration** | Personal focus | Team features |
| **Database** | SQLite | PostgreSQL |

**Why Karakeep?**
- More bookmark types
- AI-powered organization
- Better mobile support
- Lighter resource usage (SQLite vs PostgreSQL)
- Active development
## Resources

- [Official Website](https://karakeep.app)
- [Documentation](https://docs.karakeep.app)
- [GitHub Repository](https://github.com/karakeep-app/karakeep)
- [Demo Instance](https://try.karakeep.app)
- [Chrome Extension](https://chromewebstore.google.com/detail/karakeep/kbkejgonjhbmhcaofkhdegeoeoemgkdm)
- [Firefox Extension](https://addons.mozilla.org/en-US/firefox/addon/karakeep/)

## Next Steps

1. ✅ Deploy Karakeep
2. ✅ Create admin account
3. ✅ Install browser extension
4. ✅ Install mobile app
5. ⬜ Deploy Ollama for AI features
6. ⬜ Import existing bookmarks
7. ⬜ Configure AI models
8. ⬜ Set up automated backups

---

**Bookmark everything, find anything!** 🔖
@@ -12,7 +12,7 @@ services:
       - .env

     volumes:
-      - /media/karakeep/data:/data
+      - ./data:/data

     depends_on:
       - karakeep-meilisearch

@@ -34,7 +34,7 @@ services:
       traefik.http.services.karakeep.loadbalancer.server.port: 3000

       # SSO Protection
-      traefik.http.routers.karakeep.middlewares: authelia
+      traefik.http.routers.karakeep.middlewares: tinyauth

       # Homarr Discovery
       homarr.name: Karakeep (Bookmarks)

@@ -66,7 +66,7 @@ services:
       - .env

     volumes:
-      - /media/karakeep/meili_data:/meili_data
+      - ./meili_data:/meili_data

     networks:
       - karakeep_internal
@@ -1,82 +0,0 @@
# Komga

Komga is a free and open source comics/ebooks server with OPDS support and Kobo/KOReader integration.

## Features

- Modern web interface for browsing comics and ebooks
- OPDS feed support for reading apps
- Native Kobo sync support (connect your Kobo eReader directly)
- KOReader integration via OPDS
- Metadata management
- User management with per-library access control
- Reading progress tracking

## Configuration

### Environment Variables

See the `.env` file for configuration options:
- `KOMGA_PORT`: Internal port for Komga (default: 8080)
- `TRAEFIK_HOST`: Public domain for accessing Komga
- `TZ`: Timezone
- `APP_USER_ID`/`APP_GROUP_ID`: User/group for file permissions

### Volumes

- `./config`: Komga configuration and database
- `/mnt/media/books`: Your book/comic library (read-only recommended)
- `/mnt/media/bookdrop`: Drop folder for importing new content
## Kobo Setup

Komga has built-in Kobo sync support. To connect your Kobo eReader:

1. Access the Komga web UI and create a user account
2. In the Komga user settings, generate a Kobo sync token
3. On your Kobo device:
   - Connect via USB
   - Edit `.kobo/Kobo/Kobo eReader.conf`
   - Add under `[OneStoreServices]`:
     ```
     api_endpoint=https://books.fig.systems/kobo
     ```
4. Safely eject and reboot your Kobo
5. Sign in with your Komga credentials when prompted

The Kobo endpoint (`/kobo`) is configured to bypass Authelia authentication since Kobo uses its own authentication mechanism.

## KOReader Setup

For KOReader (on any device):

1. Open KOReader
2. Go to Tools → OPDS Catalog
3. Add a new catalog:
   - Catalog Name: Komga
   - Catalog URL: `https://books.fig.systems/opds/v1.2/catalog`
   - Username: Your Komga username
   - Password: Your Komga password

Note: The OPDS endpoints require Authelia authentication for web access, but KOReader will authenticate using HTTP Basic Auth with your Komga credentials.
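The config edit in step 3 can be scripted once the Kobo is mounted over USB. A sketch using GNU `sed` on a stand-in copy of the file (on the device, the real path is `.kobo/Kobo/Kobo eReader.conf`):

```shell
# Insert the api_endpoint line directly under [OneStoreServices].
cd "$(mktemp -d)"
printf '[OneStoreServices]\n' > 'Kobo eReader.conf'   # stand-in file for the demo
sed -i '/\[OneStoreServices\]/a api_endpoint=https://books.fig.systems/kobo' 'Kobo eReader.conf'
cat 'Kobo eReader.conf'
# → [OneStoreServices]
# → api_endpoint=https://books.fig.systems/kobo
```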
## Authentication

- Web UI: Protected by Authelia SSO
- OPDS/Kobo endpoints: Use Komga's built-in authentication
- The Kobo sync endpoint bypasses Authelia to allow direct device authentication

## First Run

1. Start the service: `docker compose up -d`
2. Access the web UI at `https://books.fig.systems`
3. Create an admin account on first login
4. Add libraries pointing to your book folders
5. Configure users and permissions as needed

## Tips

- Komga supports various formats: CBZ, CBR, PDF, EPUB, and more
- Use the bookdrop folder for automatic import scanning
- Enable the "claim" profile for better reverse proxy support (already configured)
- Kobo sync requires HTTPS (already configured via Traefik)
@@ -1,61 +0,0 @@
services:
  komga:
    image: gotson/komga:latest
    container_name: komga
    environment:
      - TZ=${TZ}
      - PUID=${APP_USER_ID}
      - PGID=${APP_GROUP_ID}
      - SERVER_PORT=${KOMGA_PORT}
      # Kobo/KOReader support
      - KOMGA_KOBO_PROXY=false
    volumes:
      - ./config:/config
      - /mnt/media/books:/books
      - /mnt/media/bookdrop:/bookdrop
    restart: unless-stopped
    networks:
      - homelab
    labels:
      # Traefik
      traefik.enable: true
      traefik.docker.network: homelab

      # Middleware for Kobo sync support - increased buffer sizes
      traefik.http.middlewares.komga-buffering.buffering.maxRequestBodyBytes: 268435456
      traefik.http.middlewares.komga-buffering.buffering.memRequestBodyBytes: 268435456
      traefik.http.middlewares.komga-buffering.buffering.retryExpression: IsNetworkError() && Attempts() < 3
      traefik.http.middlewares.komga-headers.headers.customrequestheaders.X-Forwarded-Proto: https

      # Authelia middleware for /api and /opds endpoints (main web UI)
      traefik.http.middlewares.komga-auth.forwardauth.address: http://authelia:9091/api/authz/forward-auth
      traefik.http.middlewares.komga-auth.forwardauth.trustForwardHeader: true
      traefik.http.middlewares.komga-auth.forwardauth.authResponseHeaders: Remote-User,Remote-Groups,Remote-Name,Remote-Email

      # Kobo router - NO Authelia (uses Kobo's built-in auth) - higher priority to match first
      traefik.http.routers.komga-kobo.rule: Host(`${TRAEFIK_HOST}`) && PathPrefix(`/kobo`)
      traefik.http.routers.komga-kobo.entrypoints: websecure
      traefik.http.routers.komga-kobo.tls.certresolver: letsencrypt
      traefik.http.routers.komga-kobo.middlewares: komga-buffering,komga-headers
      traefik.http.routers.komga-kobo.service: komga
      traefik.http.routers.komga-kobo.priority: 100

      # Main router for web UI - NO Authelia for initial setup
      traefik.http.routers.komga.rule: Host(`${TRAEFIK_HOST}`)
      traefik.http.routers.komga.entrypoints: websecure
      traefik.http.routers.komga.tls.certresolver: letsencrypt
      traefik.http.routers.komga.middlewares: komga-buffering,komga-headers
      traefik.http.routers.komga.service: komga
      traefik.http.routers.komga.priority: 50

      # Service definition
      traefik.http.services.komga.loadbalancer.server.port: ${KOMGA_PORT}

      # Homarr Discovery
      homarr.name: Komga
      homarr.group: Services
      homarr.icon: mdi:book-open-variant

networks:
  homelab:
    external: true
35	compose/services/komodo/.env.example	Normal file
@@ -0,0 +1,35 @@
# Komodo Environment Configuration
# Copy this file to .env and customize for your deployment

# Version
KOMODO_VERSION=latest

# Database (CHANGE THESE!)
KOMODO_DB_USERNAME=admin
KOMODO_DB_PASSWORD=CHANGE_ME_TO_STRONG_PASSWORD

# Authentication (CHANGE THIS!)
KOMODO_PASSKEY=CHANGE_ME_TO_STRONG_RANDOM_STRING

# Core Settings
KOMODO_TITLE=Komodo
KOMODO_HOST=https://komodo.fig.systems
TZ=America/Los_Angeles

# User Management
KOMODO_LOCAL_AUTH=true
KOMODO_ENABLE_NEW_USERS=true
KOMODO_FIRST_SERVER_ADMIN=true

# Monitoring
KOMODO_MONITORING_INTERVAL=15-sec

# Logging
KOMODO_LOGGING_LEVEL=info
PERIPHERY_LOGGING_LEVEL=info

# Periphery Settings
PERIPHERY_ROOT_DIR=/etc/komodo
PERIPHERY_HTTPS_ENABLED=true
PERIPHERY_DISABLE_TERMINALS=false
PERIPHERY_INCLUDE_DISK_MOUNTS=/
18 compose/services/komodo/.gitignore vendored Normal file
@ -0,0 +1,18 @@
# Sensitive configuration
.env

# Data directories
data/
backups/

# MongoDB volumes (if using bind mounts)
mongo-data/
mongo-config/

# Logs
*.log

# Certificates
*.pem
*.key
*.crt
286 compose/services/komodo/README.md Normal file
@ -0,0 +1,286 @@
# Komodo - Docker & Server Management Platform

Komodo is a comprehensive platform for managing Docker containers, servers, and deployments with a modern web interface.

## Features

- **Docker Management**: Deploy and manage Docker containers and compose stacks
- **Server Monitoring**: Track server health, resources, and statistics
- **Build System**: Build Docker images from Git repositories
- **Multi-Server**: Manage multiple servers from a single interface
- **Webhooks**: Automatic deployments from git webhooks
- **Resource Management**: Organize with tags, descriptions, and search
- **Authentication**: Local auth, OAuth (GitHub, Google), and OIDC support

## Quick Start

### 1. Update Environment Variables

Edit `.env` and update these critical values:

```bash
# Database Password
KOMODO_DB_PASSWORD=your-strong-password-here

# Shared Passkey (Core <-> Periphery authentication)
KOMODO_PASSKEY=your-strong-random-string-here

# Host URL (update to your domain)
KOMODO_HOST=https://komodo.fig.systems

# Timezone
TZ=America/Los_Angeles
```
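The passkey only needs to be a long random string. A minimal sketch for generating one, assuming `openssl` is available on the host, is:

```shell
# Print a ready-to-paste KOMODO_PASSKEY line built from 32 random bytes
# (64 hex characters). Any other CSPRNG source works just as well.
passkey=$(openssl rand -hex 32)
echo "KOMODO_PASSKEY=${passkey}"
```

The same value must end up in `KOMODO_PASSKEY` for Core and in the periphery passkey on every agent.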
### 2. Create Required Directory

```bash
# Create the periphery root directory on the host
sudo mkdir -p /etc/komodo
sudo chown -R $USER:$USER /etc/komodo
```

### 3. Deploy

```bash
docker compose up -d
```

### 4. Access

Open https://komodo.fig.systems and create your first admin account.

## Architecture

The stack consists of three services:

1. **komodo-mongo**: MongoDB database for storing configuration
2. **komodo-core**: Main web interface and API (port 9120)
3. **komodo-periphery**: Local agent for Docker/server management (port 8120)

## Configuration

### Environment Variables (.env)

The `.env` file contains all primary configuration. Key sections:

- **Database**: MongoDB credentials
- **Authentication**: Passkey, local auth, OAuth providers
- **Monitoring**: Polling intervals and logging
- **Periphery**: Root directory, SSL, terminal access
- **Integrations**: Git providers, Docker registries, AWS

### TOML Configuration Files (Optional)

For advanced configuration, mount TOML files:

- `config/core.config.toml` → `/config/core.config.toml`
- `config/periphery.config.toml` → `/config/periphery.config.toml`

Uncomment the volume mounts in `compose.yaml` to use these files.

## Security Checklist

Before deploying to production:

- [ ] Change `KOMODO_DB_PASSWORD` to a strong password
- [ ] Change `KOMODO_PASSKEY` to a strong random string (32+ characters)
- [ ] Review `KOMODO_ENABLE_NEW_USERS` - set to `false` after creating admin
- [ ] Consider enabling SSO via Traefik middleware (see compose.yaml)
- [ ] Set `PERIPHERY_DISABLE_TERMINALS=true` if shell access not needed
- [ ] Configure `PERIPHERY_ALLOWED_IPS` to restrict access by IP
- [ ] Review disk mount monitoring in `PERIPHERY_INCLUDE_DISK_MOUNTS`
- [ ] Enable proper SSL certificates (auto-generated by Traefik)
- [ ] Set up OAuth providers (GitHub/Google) or OIDC for SSO

## Authentication Options

### Local Authentication (Default)

Username/password authentication. First user becomes admin.

### OAuth Providers

Configure in `.env`:

```bash
# GitHub OAuth
KOMODO_GITHUB_OAUTH_ENABLED=true
KOMODO_GITHUB_OAUTH_ID=your-oauth-id
KOMODO_GITHUB_OAUTH_SECRET=your-oauth-secret

# Google OAuth
KOMODO_GOOGLE_OAUTH_ENABLED=true
KOMODO_GOOGLE_OAUTH_ID=your-oauth-id
KOMODO_GOOGLE_OAUTH_SECRET=your-oauth-secret
```

### OIDC (e.g., Keycloak, Auth0)

```bash
KOMODO_OIDC_ENABLED=true
KOMODO_OIDC_PROVIDER=https://your-oidc-provider.com
KOMODO_OIDC_CLIENT_ID=your-client-id
KOMODO_OIDC_CLIENT_SECRET=your-client-secret
```

## Integrations

### Git Provider Access

For private repositories, configure credentials:

```bash
# GitHub
KOMODO_GIT_GITHUB_ACCOUNTS=personal
KOMODO_GIT_GITHUB_PERSONAL_USERNAME=your-username
KOMODO_GIT_GITHUB_PERSONAL_TOKEN=ghp_your-token

# Gitea/Self-hosted
KOMODO_GIT_GITEA_ACCOUNTS=homelab
KOMODO_GIT_GITEA_HOMELAB_DOMAIN=git.example.com
KOMODO_GIT_GITEA_HOMELAB_USERNAME=your-username
KOMODO_GIT_GITEA_HOMELAB_TOKEN=your-token
```

### Docker Registry Access

For private registries:

```bash
# Docker Hub
KOMODO_REGISTRY_DOCKERHUB_ACCOUNTS=personal
KOMODO_REGISTRY_DOCKERHUB_PERSONAL_USERNAME=your-username
KOMODO_REGISTRY_DOCKERHUB_PERSONAL_PASSWORD=your-password

# Custom Registry
KOMODO_REGISTRY_CUSTOM_ACCOUNTS=homelab
KOMODO_REGISTRY_CUSTOM_HOMELAB_DOMAIN=registry.example.com
KOMODO_REGISTRY_CUSTOM_HOMELAB_USERNAME=your-username
KOMODO_REGISTRY_CUSTOM_HOMELAB_PASSWORD=your-password
```

## Multi-Server Setup

To manage additional servers:

1. Deploy `komodo-periphery` on each server
2. Configure with the same `KOMODO_PASSKEY`
3. Expose port 8120 (with SSL enabled)
4. Add server in Komodo Core UI with periphery URL
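For step 1 on a remote host, a periphery-only compose file is one option. This sketch reuses the image and variable names from this repo's `compose.yaml`; the port mapping, network choice, and paths are assumptions to adapt per server:

```yaml
# Hypothetical agent-only stack for an additional server (adapt before use).
services:
  komodo-periphery:
    image: ghcr.io/moghtech/komodo-periphery:${KOMODO_VERSION:-latest}
    restart: unless-stopped
    ports:
      - "8120:8120"   # exposed so Core on another host can reach this agent
    environment:
      PERIPHERY_PASSKEY: ${KOMODO_PASSKEY}   # must match Core's passkey
      PERIPHERY_HTTPS_ENABLED: "true"
      PERIPHERY_ROOT_DIR: /etc/komodo
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /etc/komodo:/etc/komodo
```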
## Monitoring & Logging

### Adjust Polling Intervals

```bash
# Server health checks
KOMODO_MONITORING_INTERVAL=15-sec

# System stats
PERIPHERY_STATS_POLLING_RATE=5-sec

# Container stats
PERIPHERY_CONTAINER_STATS_POLLING_RATE=30-sec
```

### Log Levels

```bash
KOMODO_LOGGING_LEVEL=info  # off, error, warn, info, debug, trace
PERIPHERY_LOGGING_LEVEL=info
```

### OpenTelemetry

For distributed tracing:

```bash
KOMODO_LOGGING_OTLP_ENDPOINT=http://your-otlp-collector:4317
PERIPHERY_LOGGING_OTLP_ENDPOINT=http://your-otlp-collector:4317
```

## Data Management

### Backups

MongoDB data is persisted in Docker volumes:
- `mongo-data`: Database files
- `mongo-config`: Configuration

The `./backups` directory is mounted for storing backup exports.

### Data Pruning

Automatically clean old data:

```bash
KOMODO_PRUNE_INTERVAL=1-day
KOMODO_KEEP_STATS_FOR_DAYS=30
KOMODO_KEEP_ALERTS_FOR_DAYS=90
```

## Troubleshooting

### Check Logs

```bash
docker compose logs -f komodo-core
docker compose logs -f komodo-periphery
docker compose logs -f komodo-mongo
```

### Verify Passkey Match

Core and Periphery must share the same passkey:

```bash
# In .env, ensure these match:
KOMODO_PASSKEY=abc123
```

### Reset Admin Password

Connect to MongoDB and reset user:

```bash
docker exec -it komodo-mongo mongosh -u admin -p admin
use komodo
db.users.updateOne({username: "admin"}, {$set: {password: "new-hashed-password"}})
```

### Check Periphery Connection

In Komodo Core UI, add a server pointing to:
- URL: `http://komodo-periphery:8120` (internal)
- Or: `https://komodo.fig.systems:8120` (if externally accessible)
- Passkey: Must match `KOMODO_PASSKEY`

## Upgrading

```bash
# Pull latest images
docker compose pull

# Recreate containers
docker compose up -d

# Check logs
docker compose logs -f
```

**Note**: Pin specific versions in `.env` for production:

```bash
KOMODO_VERSION=v1.2.3
```

## Links

- **Documentation**: https://komo.do/docs/
- **GitHub**: https://github.com/moghtech/komodo
- **Discord**: https://discord.gg/komodo

## License

Komodo is open source under the GPL-3.0 license.
138 compose/services/komodo/compose.yaml Normal file
@ -0,0 +1,138 @@
# Komodo - Docker & Server Management Platform
# Docs: https://komo.do/docs/
# GitHub: https://github.com/moghtech/komodo

services:
  komodo-mongo:
    container_name: komodo-mongo
    image: mongo:8.0
    restart: unless-stopped

    command: ["--wiredTigerCacheSizeGB", "0.25"]

    environment:
      MONGO_INITDB_ROOT_USERNAME: ${KOMODO_DB_USERNAME:-admin}
      MONGO_INITDB_ROOT_PASSWORD: ${KOMODO_DB_PASSWORD:-admin}

    volumes:
      - mongo-data:/data/db
      - mongo-config:/data/configdb

    networks:
      - homelab

    labels:
      # Skip this container from Komodo management
      komodo.skip: true

  komodo-core:
    container_name: komodo-core
    image: ghcr.io/moghtech/komodo-core:${KOMODO_VERSION:-latest}
    restart: unless-stopped

    depends_on:
      - komodo-mongo

    env_file:
      - .env

    environment:
      # Database Configuration
      KOMODO_DATABASE_URI: mongodb://${KOMODO_DB_USERNAME:-admin}:${KOMODO_DB_PASSWORD:-admin}@komodo-mongo:27017

      # Core Settings
      KOMODO_TITLE: ${KOMODO_TITLE:-Komodo}
      KOMODO_HOST: ${KOMODO_HOST:-https://komodo.fig.systems}
      KOMODO_PORT: 9120

      # Authentication
      KOMODO_PASSKEY: ${KOMODO_PASSKEY:-abc123}
      KOMODO_LOCAL_AUTH: ${KOMODO_LOCAL_AUTH:-true}
      KOMODO_ENABLE_NEW_USERS: ${KOMODO_ENABLE_NEW_USERS:-true}
      KOMODO_ENABLE_NEW_USER_WEBHOOK: ${KOMODO_ENABLE_NEW_USER_WEBHOOK:-false}

      # Monitoring
      KOMODO_MONITORING_INTERVAL: ${KOMODO_MONITORING_INTERVAL:-15-sec}

      # Logging
      KOMODO_LOGGING_LEVEL: ${KOMODO_LOGGING_LEVEL:-info}
      TZ: ${TZ:-America/Los_Angeles}

    volumes:
      - ./data:/data
      - ./backups:/backups
      # Optional: mount custom config
      # - ./config/core.config.toml:/config/core.config.toml:ro

    networks:
      - homelab

    labels:
      # Skip this container from Komodo management
      komodo.skip: true

      # Traefik
      traefik.enable: true
      traefik.docker.network: homelab

      # Web UI
      traefik.http.routers.komodo.rule: Host(`komodo.fig.systems`)
      traefik.http.routers.komodo.entrypoints: websecure
      traefik.http.routers.komodo.tls.certresolver: letsencrypt
      traefik.http.services.komodo.loadbalancer.server.port: 9120

      # Optional: SSO Protection
      # traefik.http.routers.komodo.middlewares: tinyauth

  komodo-periphery:
    container_name: komodo-periphery
    image: ghcr.io/moghtech/komodo-periphery:${KOMODO_VERSION:-latest}
    restart: unless-stopped

    depends_on:
      - komodo-core

    env_file:
      - .env

    environment:
      # Core Settings
      PERIPHERY_ROOT_DIR: ${PERIPHERY_ROOT_DIR:-/etc/komodo}
      PERIPHERY_PORT: 8120

      # Authentication
      PERIPHERY_PASSKEY: ${KOMODO_PASSKEY:-abc123}
      PERIPHERY_HTTPS_ENABLED: ${PERIPHERY_HTTPS_ENABLED:-true}

      # Features
      PERIPHERY_DISABLE_TERMINALS: ${PERIPHERY_DISABLE_TERMINALS:-false}

      # Disk Monitoring
      PERIPHERY_INCLUDE_DISK_MOUNTS: ${PERIPHERY_INCLUDE_DISK_MOUNTS:-/}
      # PERIPHERY_EXCLUDE_DISK_MOUNTS: /snap,/boot

      # Logging
      PERIPHERY_LOGGING_LEVEL: ${PERIPHERY_LOGGING_LEVEL:-info}
      TZ: ${TZ:-America/Los_Angeles}

    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /proc:/proc:ro
      - ${PERIPHERY_ROOT_DIR:-/etc/komodo}:${PERIPHERY_ROOT_DIR:-/etc/komodo}
      # Optional: mount custom config
      # - ./config/periphery.config.toml:/config/periphery.config.toml:ro

    networks:
      - homelab

    labels:
      # Skip this container from Komodo management
      komodo.skip: true

volumes:
  mongo-data:
  mongo-config:

networks:
  homelab:
    external: true
89 compose/services/komodo/setup.sh Executable file
@ -0,0 +1,89 @@
#!/bin/bash
# Komodo Setup Script

set -e

echo "==================================="
echo "Komodo Setup"
echo "==================================="
echo ""

# Check if running as root
if [ "$EUID" -eq 0 ]; then
    echo "Please do not run as root"
    exit 1
fi

# Create periphery root directory
echo "Creating periphery root directory..."
sudo mkdir -p /etc/komodo
sudo chown -R $USER:$USER /etc/komodo
echo "✓ Created /etc/komodo"
echo ""

# Check if .env exists
if [ ! -f .env ]; then
    echo "Error: .env file not found!"
    echo "Please copy .env.example to .env and configure it first."
    exit 1
fi

# Check for default passwords
echo "Checking for default passwords..."
if grep -q "KOMODO_DB_PASSWORD=admin" .env; then
    echo "⚠️  WARNING: Default database password detected!"
    echo "   Please update KOMODO_DB_PASSWORD in .env before deployment."
fi

if grep -q "KOMODO_PASSKEY=abc123" .env; then
    echo "⚠️  WARNING: Default passkey detected!"
    echo "   Please update KOMODO_PASSKEY in .env before deployment."
fi

echo ""
echo "==================================="
echo "Pre-deployment Checklist"
echo "==================================="
echo ""
echo "Before deploying, ensure you have:"
echo "  [ ] Updated KOMODO_DB_PASSWORD to a strong password"
echo "  [ ] Updated KOMODO_PASSKEY to a strong random string"
echo "  [ ] Updated KOMODO_HOST to your domain"
echo "  [ ] Configured TZ (timezone)"
echo "  [ ] Reviewed KOMODO_ENABLE_NEW_USERS setting"
echo ""
read -p "Have you completed the checklist above? (y/N) " -n 1 -r
echo ""

if [[ ! $REPLY =~ ^[Yy]$ ]]; then
    echo "Please complete the checklist and run this script again."
    exit 1
fi

echo ""
echo "==================================="
echo "Deploying Komodo..."
echo "==================================="
echo ""

# Deploy
docker compose up -d

echo ""
echo "==================================="
echo "Deployment Complete!"
echo "==================================="
echo ""
echo "Access Komodo at: https://komodo.fig.systems"
echo ""
echo "First-time setup:"
echo "  1. Open the URL above"
echo "  2. Create your admin account"
echo "  3. Configure servers and resources"
echo ""
echo "To view logs:"
echo "  docker compose logs -f"
echo ""
echo "To stop:"
echo "  docker compose down"
echo ""
9 compose/services/matrix/.gitignore vendored
@ -1,9 +0,0 @@
# Synapse data (stored in /mnt/media/matrix/)
data/
media/

# Bridge data
bridges/

# Logs
*.log
@ -1,665 +0,0 @@
# Matrix Integrations Setup Guide

This guide covers setup for all Matrix integrations in your homelab.

## Quick Start

1. **Start all services:**
   ```bash
   cd /home/eduardo_figueroa/homelab/compose/services/matrix
   docker compose up -d
   ```

2. **Check service health:**
   ```bash
   docker compose ps
   docker compose logs -f
   ```

---

## Services Overview

| Service | URL | Purpose |
|---------|-----|---------|
| Synapse | https://matrix.fig.systems | Matrix homeserver |
| Element | https://chat.fig.systems | Web client |
| Synapse Admin | https://admin.matrix.fig.systems | User/room management |
| Maubot | https://maubot.fig.systems | Bot management |
| Matrix Registration | https://reg.matrix.fig.systems | Token-based registration |
| Hookshot | https://hookshot.fig.systems | GitHub/GitLab webhooks |

---

## 1. Synapse Admin

**Purpose:** Web UI for managing users, rooms, and server settings.

### Setup:

1. **Access the UI:**
   - Navigate to https://admin.matrix.fig.systems
   - Enter homeserver URL: `https://matrix.fig.systems`

2. **Login with your admin account:**
   - Use your Matrix credentials (@username:fig.systems)
   - Must be a server admin (see below to grant admin)

3. **Grant admin privileges to a user:**
   ```bash
   docker compose exec synapse register_new_matrix_user \
     -u <username> \
     -p <password> \
     -a \
     -c /data/homeserver.yaml \
     http://localhost:8008
   ```

### Features:
- View and manage all users
- Deactivate accounts
- Manage rooms (delete, view members)
- View server statistics
- Media management

---

## 2. Matrix Registration (Token-Based Registration)

**Purpose:** Control who can register with invite tokens.

### Admin Access:

**Admin credentials:**
- URL: https://reg.matrix.fig.systems/admin
- Secret: `4a385519f20e015faf06996f12532236aa02d15511ea48bf1abec32e21d40188`
  (Also in `.env` as `MATRIX_REGISTRATION_ADMIN_SECRET`)

### Creating Registration Tokens:

**Via Web UI:**
1. Go to https://reg.matrix.fig.systems/admin
2. Enter the admin secret above
3. Click "Create Token"
4. Configure options:
   - **One-time use:** Token works only once
   - **Multi-use:** Token can be used multiple times
   - **Expiration date:** Token expires after this date
   - **Disable email:** Skip email verification for this token
5. Copy the token and share with users

**Registration URL format:**
```
https://reg.matrix.fig.systems?token=<your_token_here>
```

### Creating Tokens via API:

```bash
# Create a one-time token
curl -X POST https://reg.matrix.fig.systems/api/token \
  -H "Authorization: Bearer 4a385519f20e015faf06996f12532236aa02d15511ea48bf1abec32e21d40188" \
  -H "Content-Type: application/json" \
  -d '{
    "ex_date": "2026-12-31",
    "one_time": true,
    "disable_email": false
  }'

# Create a multi-use token (for family/friends)
curl -X POST https://reg.matrix.fig.systems/api/token \
  -H "Authorization: Bearer 4a385519f20e015faf06996f12532236aa02d15511ea48bf1abec32e21d40188" \
  -H "Content-Type: application/json" \
  -d '{
    "ex_date": "2026-12-31",
    "one_time": false,
    "max_usage": 10,
    "disable_email": true
  }'

# List all tokens
curl https://reg.matrix.fig.systems/api/tokens \
  -H "Authorization: Bearer 4a385519f20e015faf06996f12532236aa02d15511ea48bf1abec32e21d40188"

# Disable a token
curl -X PUT https://reg.matrix.fig.systems/api/token/<token_name> \
  -H "Authorization: Bearer 4a385519f20e015faf06996f12532236aa02d15511ea48bf1abec32e21d40188" \
  -H "Content-Type: application/json" \
  -d '{"disabled": true}'
```
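The `ex_date` values in the token API calls are plain `YYYY-MM-DD` strings. A small helper to compute one N days out (GNU `date` syntax assumed) could look like:

```shell
# Compute an expiry date 30 days from now, in the YYYY-MM-DD form that
# the token API's "ex_date" field expects (GNU date assumed).
days=30
ex_date=$(date -d "+${days} days" +%Y-%m-%d)
echo "$ex_date"
```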
### User Registration Process:
|
|
||||||
|
|
||||||
1. Admin creates token via web UI or API
|
|
||||||
2. Admin shares URL: `https://reg.matrix.fig.systems?token=abc123`
|
|
||||||
3. User opens URL and fills in:
|
|
||||||
- Username
|
|
||||||
- Password
|
|
||||||
- Email (if required)
|
|
||||||
4. Account is created on your Matrix server
|
|
||||||
|
|
||||||
### Benefits:
|
|
||||||
- Control who can register
|
|
||||||
- Track which tokens were used
|
|
||||||
- Bypass email verification per-token
|
|
||||||
- Prevent spam/abuse
|
|
||||||
- Invite-only registration system
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 3. Maubot (Bot Framework)
|
|
||||||
|
|
||||||
**Purpose:** Modular bot system for GIFs, reminders, RSS, and custom commands.
|
|
||||||
|
|
||||||
### Initial Setup:
|
|
||||||
|
|
||||||
1. **Generate initial config:**
|
|
||||||
```bash
|
|
||||||
docker compose run --rm maubot
|
|
||||||
```
|
|
||||||
|
|
||||||
2. **Access the management UI:**
|
|
||||||
- URL: https://maubot.fig.systems
|
|
||||||
- Default credentials are in `/mnt/media/matrix/maubot/config.yaml`
|
|
||||||
|
|
||||||
3. **Login and change password:**
|
|
||||||
- First login with default credentials
|
|
||||||
- Go to Settings → Change password
|
|
||||||
|
|
||||||
### Creating a Bot User:
|
|
||||||
|
|
||||||
1. **Register a bot user on your homeserver:**
|
|
||||||
```bash
|
|
||||||
docker compose exec synapse register_new_matrix_user \
|
|
||||||
-u bot \
|
|
||||||
-p <bot_password> \
|
|
||||||
-c /data/homeserver.yaml \
|
|
||||||
http://localhost:8008
|
|
||||||
```
|
|
||||||
|
|
||||||
2. **Add bot client in Maubot UI:**
|
|
||||||
- Go to https://maubot.fig.systems
|
|
||||||
- Click "Clients" → "+"
|
|
||||||
- Enter:
|
|
||||||
- **User ID:** @bot:fig.systems
|
|
||||||
- **Access Token:** (get from login)
|
|
||||||
- **Homeserver:** https://matrix.fig.systems
|
|
||||||
|
|
||||||
3. **Get access token:**
|
|
||||||
```bash
|
|
||||||
curl -X POST https://matrix.fig.systems/_matrix/client/r0/login \
|
|
||||||
-H "Content-Type: application/json" \
|
|
||||||
-d '{
|
|
||||||
"type": "m.login.password",
|
|
||||||
"user": "bot",
|
|
||||||
"password": "<bot_password>"
|
|
||||||
}'
|
|
||||||
```
|
|
||||||
Copy the `access_token` from the response.
|
|
||||||
|
|
||||||
### Installing Plugins:
|
|
||||||
|
|
||||||
**Popular plugins:**
|
|
||||||
|
|
||||||
1. **Giphy** - `/giphy <search>` command
|
|
||||||
- Download: https://github.com/TomCasavant/GiphyMaubot
|
|
||||||
- Upload .mbp file in Maubot UI
|
|
||||||
|
|
||||||
2. **Tenor** - `/tenor <search>` GIF search
|
|
||||||
- Download: https://github.com/williamkray/maubot-tenor
|
|
||||||
|
|
||||||
3. **Reminder** - `/remind <time> <message>`
|
|
||||||
- Download: https://github.com/maubot/reminder
|
|
||||||
|
|
||||||
4. **RSS** - RSS feed notifications
|
|
||||||
- Download: https://github.com/maubot/rss
|
|
||||||
|
|
||||||
5. **Reactions** - Emoji reactions and karma
|
|
||||||
- Download: https://github.com/maubot/reactbot
|
|
||||||
|
|
||||||
6. **Media** - Download media from URLs
|
|
||||||
- Download: https://github.com/maubot/media
|
|
||||||
|
|
||||||
**Installation steps:**
|
|
||||||
1. Download plugin .mbp file
|
|
||||||
2. Go to Maubot UI → Plugins → Upload
|
|
||||||
3. Create instance: Instances → + → Select plugin and client
|
|
||||||
4. Configure and enable
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 4. Telegram Bridge (mautrix-telegram)
|
|
||||||
|
|
||||||
**Purpose:** Bridge Telegram chats and DMs to Matrix.
|
|
||||||
|
|
||||||
### Setup:
|
|
||||||
|
|
||||||
1. **Get Telegram API credentials:**
|
|
||||||
- Go to https://my.telegram.org/apps
|
|
||||||
- Log in with your phone number
|
|
||||||
- Create an app
|
|
||||||
- Copy `api_id` and `api_hash`
|
|
||||||
|
|
||||||
2. **Generate config:**
|
|
||||||
```bash
|
|
||||||
docker compose run --rm mautrix-telegram
|
|
||||||
```
|
|
||||||
|
|
||||||
3. **Edit config:**
|
|
||||||
```bash
|
|
||||||
nano /mnt/media/matrix/bridges/telegram/config.yaml
|
|
||||||
```
|
|
||||||
|
|
||||||
Key settings:
|
|
||||||
```yaml
|
|
||||||
homeserver:
|
|
||||||
address: http://synapse:8008
|
|
||||||
domain: fig.systems
|
|
||||||
|
|
||||||
appservice:
|
|
||||||
address: http://mautrix-telegram:29317
|
|
||||||
hostname: 0.0.0.0
|
|
||||||
port: 29317
|
|
||||||
database: sqlite:///data/mautrix-telegram.db
|
|
||||||
|
|
||||||
bridge:
|
|
||||||
permissions:
|
|
||||||
'@yourusername:fig.systems': admin
|
|
||||||
'fig.systems': user
|
|
||||||
|
|
||||||
telegram:
|
|
||||||
api_id: YOUR_API_ID
|
|
||||||
api_hash: YOUR_API_HASH
|
|
||||||
```
|
|
||||||
|
|
||||||
4. **Start the bridge:**
|
|
||||||
```bash
|
|
||||||
docker compose up -d mautrix-telegram
|
|
||||||
```
|
|
||||||
|
|
||||||
5. **Restart Synapse** (to load the registration file):
|
|
||||||
```bash
|
|
||||||
docker compose restart synapse
|
|
||||||
```
|
|
||||||
|
|
||||||
### Using the Bridge:
|
|
||||||
|
|
||||||
1. **Start chat with bridge bot:**
|
|
||||||
- In Element, start a DM with `@telegrambot:fig.systems`
|
|
||||||
- Send: `login`
|
|
||||||
- Enter your Telegram phone number
|
|
||||||
- Enter the code sent to Telegram
|
|
||||||
|
|
||||||
2. **Bridge a chat:**
|
|
||||||
- Create or open a Matrix room
|
|
||||||
- Invite `@telegrambot:fig.systems`
|
|
||||||
- Send: `!tg bridge <telegram_chat_id>`
|
|
||||||
- Or use `!tg search <query>` to find chats
|
|
||||||
|
|
||||||
3. **Useful commands:**
|
|
||||||
- `!tg help` - Show all commands
|
|
||||||
- `!tg pm` - Bridge personal chats
|
|
||||||
- `!tg search <query>` - Find Telegram chats
|
|
||||||
- `!tg sync` - Sync members/messages
|
|
||||||
- `!tg unbridge` - Remove bridge
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 5. WhatsApp Bridge (mautrix-whatsapp)
|
|
||||||
|
|
||||||
**Purpose:** Bridge WhatsApp chats to Matrix.
|
|
||||||
|
|
||||||
### Setup:
|
|
||||||
|
|
||||||
1. **Generate config:**
|
|
||||||
```bash
|
|
||||||
docker compose run --rm mautrix-whatsapp
|
|
||||||
```
|
|
||||||
|
|
||||||
2. **Edit config:**
|
|
||||||
```bash
|
|
||||||
nano /mnt/media/matrix/bridges/whatsapp/config.yaml
|
|
||||||
```
|
|
||||||
|
|
||||||
Update:
|
|
||||||
```yaml
|
|
||||||
homeserver:
|
|
||||||
address: http://synapse:8008
|
|
||||||
domain: fig.systems
|
|
||||||
|
|
||||||
bridge:
|
|
||||||
permissions:
|
|
||||||
'@yourusername:fig.systems': admin
|
|
||||||
'fig.systems': user
|
|
||||||
```
|
|
||||||
|
|
||||||
3. **Start and restart:**
|
|
||||||
```bash
|
|
||||||
docker compose up -d mautrix-whatsapp
|
|
||||||
docker compose restart synapse
|
|
||||||
```
|
|
||||||
|
|
||||||
### Using the Bridge:
|
|
||||||
|
|
||||||
1. **Start chat with bot:**
|
|
||||||
- DM `@whatsappbot:fig.systems` in Element
|
|
||||||
- Send: `login`
|
|
||||||
|
|
||||||
2. **Scan QR code:**
|
|
||||||
- Bridge will send a QR code
|
|
||||||
- Open WhatsApp on your phone
|
|
||||||
- Go to Settings → Linked Devices → Link a Device
|
|
||||||
- Scan the QR code
|
|
||||||
|
|
||||||
3. **Chats are auto-bridged:**
|
|
||||||
- Existing WhatsApp chats appear as Matrix rooms
|
|
||||||
- New WhatsApp messages create rooms automatically
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 6. Discord Bridge (mautrix-discord)

**Purpose:** Bridge Discord servers and DMs to Matrix.

### Setup:

1. **Generate config:**
   ```bash
   docker compose run --rm mautrix-discord
   ```

2. **Create Discord bot:**
   - Go to https://discord.com/developers/applications
   - Create New Application
   - Go to Bot → Add Bot
   - Copy the Bot Token
   - Enable these intents:
     - Server Members Intent
     - Message Content Intent

3. **Edit config:**
   ```bash
   nano /mnt/media/matrix/bridges/discord/config.yaml
   ```

   Add your bot token:
   ```yaml
   bridge:
     bot_token: YOUR_DISCORD_BOT_TOKEN
     permissions:
       '@yourusername:fig.systems': admin
       'fig.systems': user
   ```

4. **Start and restart:**
   ```bash
   docker compose up -d mautrix-discord
   docker compose restart synapse
   ```

### Using the Bridge:

1. **Invite bot to Discord server:**
   - Get OAuth URL from bridge bot in Matrix
   - Visit URL and authorize bot for your Discord server

2. **Bridge channels:**
   - Create Matrix room
   - Invite `@discordbot:fig.systems`
   - Follow bridging instructions from bot

---
## 7. Google Chat Bridge (mautrix-googlechat)

**Purpose:** Bridge Google Chat/Hangouts to Matrix.

### Setup:

Similar to other mautrix bridges:

1. Generate config: `docker compose run --rm mautrix-googlechat`
2. Edit `/mnt/media/matrix/bridges/googlechat/config.yaml`
3. Start: `docker compose up -d mautrix-googlechat`
4. Restart Synapse: `docker compose restart synapse`
5. Login via bridge bot: `@googlechatbot:fig.systems`

---
## 8. Mjolnir (Moderation Bot)

**Purpose:** Advanced moderation, ban lists, anti-spam protection.

### Setup:

1. **Create bot user:**
   ```bash
   docker compose exec synapse register_new_matrix_user \
     -u mjolnir \
     -p <password> \
     -c /data/homeserver.yaml \
     http://localhost:8008
   ```

2. **Create management room:**
   - In Element, create a private room
   - Invite `@mjolnir:fig.systems`
   - Make the bot admin

3. **Generate config:**
   ```bash
   docker compose run --rm mjolnir
   ```

4. **Edit config:**
   ```bash
   nano /mnt/media/matrix/bridges/mjolnir/config.yaml
   ```

   Configure:
   ```yaml
   homeserver: https://matrix.fig.systems
   accessToken: <get_from_login>
   managementRoom: "!roomid:fig.systems"
   ```

5. **Get access token:**
   ```bash
   curl -X POST https://matrix.fig.systems/_matrix/client/r0/login \
     -H "Content-Type: application/json" \
     -d '{
       "type": "m.login.password",
       "user": "mjolnir",
       "password": "<password>"
     }'
   ```
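The login response is a JSON object containing `access_token`. One quick way to pull just the token out for `config.yaml` is shown below against a sample response (the field names match the standard Matrix login response; `jq -r .access_token` works equally well if `jq` is installed):

```shell
# Sample login response; in practice, capture the output of the curl call above
response='{"user_id":"@mjolnir:fig.systems","access_token":"syt_example_token","device_id":"ABCD"}'
# Extract the value of the access_token field
echo "$response" | grep -o '"access_token":"[^"]*"' | cut -d'"' -f4
```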
6. **Start bot:**
   ```bash
   docker compose up -d mjolnir
   ```

### Using Mjolnir:

1. **Protect rooms:**
   - Invite Mjolnir to rooms you want to moderate
   - In management room, send: `!mjolnir rooms add <room_id>`

2. **Subscribe to ban lists:**
   - `!mjolnir list subscribe <list_room_id>`

3. **Ban users:**
   - `!mjolnir ban @user:server.com`

4. **Commands:**
   - `!mjolnir help` - Show all commands
   - `!mjolnir status` - Bot status
   - `!mjolnir rooms` - Protected rooms

---
## 9. Matrix Hookshot (GitHub/GitLab Integration)

**Purpose:** Receive webhooks from GitHub, GitLab, Jira in Matrix rooms.

### Setup:

1. **Generate config:**
   ```bash
   docker compose run --rm hookshot
   ```

2. **Edit config:**
   ```bash
   nano /mnt/media/matrix/hookshot/config.yaml
   ```

   Key settings:
   ```yaml
   bridge:
     domain: fig.systems
     url: https://matrix.fig.systems
     mediaUrl: https://matrix.fig.systems
     port: 9993
     bindAddress: 0.0.0.0

   listeners:
     - port: 9000
       bindAddress: 0.0.0.0
       resources:
         - webhooks

   github:
     webhook:
       secret: <random_secret>

   gitlab:
     webhook:
       secret: <random_secret>
   ```
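The two `<random_secret>` values can be any unguessable strings. One common way to generate them, assuming `openssl` is available:

```shell
# 32 random bytes, hex-encoded: yields a 64-character secret
openssl rand -hex 32
```

Run it once per secret and paste the output into the config.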
3. **Start service:**
   ```bash
   docker compose up -d hookshot
   docker compose restart synapse
   ```

### Using Hookshot:

**For GitHub:**
1. In Matrix room, invite `@hookshot:fig.systems`
2. Send: `!github repo owner/repo`
3. Bot will provide webhook URL
4. Add webhook in GitHub repo settings
5. Set webhook URL: `https://hookshot.fig.systems/webhooks/github`
6. Add secret from config

**For GitLab:**
Follow the same process using GitLab's webhook settings.

**Features:**
- Issue notifications
- PR/MR updates
- Commit messages
- CI/CD status
- Custom filters

---
## Troubleshooting

### Service won't start:

```bash
# Check logs
docker compose logs <service_name>

# Check if config exists
ls -la /mnt/media/matrix/<service>/

# Regenerate config
docker compose run --rm <service_name>
```

### Bridge not connecting:

1. Check registration file exists:
   ```bash
   ls -la /mnt/media/matrix/bridges/<bridge>/registration.yaml
   ```

2. Check Synapse can read it:
   ```bash
   docker compose exec synapse cat /data/bridges/<bridge>/registration.yaml
   ```

3. Restart Synapse:
   ```bash
   docker compose restart synapse
   ```

### Can't login to admin interfaces:

- Synapse Admin: Use Matrix account credentials
- Maubot: Check `/mnt/media/matrix/maubot/config.yaml` for password
- Matrix Registration: Use `MATRIX_REGISTRATION_ADMIN_SECRET` from `.env`

### Ports already in use:

Check what's using the port:
```bash
sudo lsof -i :<port_number>
```

### Permission issues:

Fix ownership:
```bash
sudo chown -R 1000:1000 /mnt/media/matrix/
```

---
## Useful Commands

```bash
# View all service logs
docker compose logs -f

# Restart all services
docker compose restart

# Update all services
docker compose pull
docker compose up -d

# Check service status
docker compose ps

# Create admin user
docker compose exec synapse register_new_matrix_user \
  -u <username> -p <password> -a -c /data/homeserver.yaml http://localhost:8008

# Backup database
docker compose exec postgres pg_dump -U synapse synapse > backup.sql

# Restore database
cat backup.sql | docker compose exec -T postgres psql -U synapse synapse
```

---
## Next Steps

1. **Set up Telegram bridge** - Most useful for Telegram users
2. **Create registration tokens** - Invite friends/family
3. **Install Maubot plugins** - Add GIF search and other features
4. **Configure Mjolnir** - Set up moderation
5. **Add GitHub webhooks** - Get repo notifications in Matrix

## Resources

- [Matrix Documentation](https://matrix.org/docs/)
- [Synapse Admin Guide](https://element-hq.github.io/synapse/latest/)
- [Maubot Plugins](https://github.com/maubot/maubot/wiki/Plugin-directory)
- [Bridge Setup Guides](https://docs.mau.fi/bridges/)

---
# Matrix Quick Start Guide

Get your Matrix server running in five steps!

## Before You Start

You'll need:
1. **Telegram API credentials** (see step 4)

## Step 1: Start Matrix Server

```bash
cd /home/eduardo_figueroa/homelab/compose/services/matrix
docker compose up -d postgres synapse
```

Wait ~30 seconds for database initialization. Watch logs:
```bash
docker compose logs -f synapse
```

Look for: `Synapse now listening on port 8008`
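If you'd rather not tail the full log, a one-liner that checks for that same log message:

```shell
# Prints matching lines once Synapse has started listening
docker compose logs synapse | grep "listening on port 8008"
```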
## Step 2: Create Admin User

Create your first admin account:
```bash
docker exec -it matrix-synapse register_new_matrix_user -c /data/homeserver.yaml -a http://localhost:8008
```

Follow the prompts to enter a username and password.

Your Matrix ID will be: `@yourusername:fig.systems`

## Step 3: Test Login via Element

1. Go to https://app.element.io
2. Click "Sign in"
3. Click "Edit" and enter: `matrix.fig.systems`
4. Click "Continue"
5. Enter your username and password
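You can also verify credentials from the command line before touching a client. This uses the standard Matrix client-server login endpoint; the username and password here are placeholders:

```shell
curl -s -X POST https://matrix.fig.systems/_matrix/client/v3/login \
  -H "Content-Type: application/json" \
  -d '{"type":"m.login.password","identifier":{"type":"m.id.user","user":"yourusername"},"password":"yourpassword"}'
```

A successful login returns JSON containing `user_id` and `access_token`.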
## Step 4: Set Up Telegram Bridge

### Get API Credentials
1. Visit https://my.telegram.org (login with phone)
2. "API development tools" → Create app
3. Save your `api_id` and `api_hash`

### Configure Bridge
```bash
# Generate config
docker run --rm -v /mnt/media/matrix/bridges/telegram:/data \
  dock.mau.dev/mautrix/telegram:latest

# Edit config (use your favorite editor)
sudo nano /mnt/media/matrix/bridges/telegram/config.yaml
```

**Key settings to update:**
```yaml
homeserver:
  address: http://synapse:8008
  domain: fig.systems

appservice:
  address: http://mautrix-telegram:29317
  database: postgres://synapse:<POSTGRES_PASSWORD>@postgres/telegram  # password from .env

telegram:
  api_id: YOUR_API_ID_HERE
  api_hash: YOUR_API_HASH_HERE

bridge:
  permissions:
    '@yourusername:fig.systems': admin
```

### Register and Start
```bash
# Copy registration to Synapse
sudo cp /mnt/media/matrix/bridges/telegram/registration.yaml \
  /mnt/media/matrix/synapse/data/telegram-registration.yaml

# Add to homeserver.yaml (add these lines at the end)
echo "
app_service_config_files:
  - /data/telegram-registration.yaml" | sudo tee -a /mnt/media/matrix/synapse/data/homeserver.yaml

# Restart and start bridge
docker compose restart synapse
docker compose up -d mautrix-telegram
```

### Use the Bridge
In Element:
1. Start chat with `@telegrambot:fig.systems`
2. Type: `login`
3. Follow the instructions
## Step 5: Set Up WhatsApp Bridge

```bash
# Generate config
docker run --rm -v /mnt/media/matrix/bridges/whatsapp:/data \
  dock.mau.dev/mautrix/whatsapp:latest

# Edit config
sudo nano /mnt/media/matrix/bridges/whatsapp/config.yaml
```

**Key settings:**
```yaml
homeserver:
  address: http://synapse:8008
  domain: fig.systems

appservice:
  address: http://mautrix-whatsapp:29318
  database:
    uri: postgres://synapse:<POSTGRES_PASSWORD>@postgres/whatsapp  # password from .env

bridge:
  permissions:
    '@yourusername:fig.systems': admin
```

### Register and Start
```bash
# Copy registration
sudo cp /mnt/media/matrix/bridges/whatsapp/registration.yaml \
  /mnt/media/matrix/synapse/data/whatsapp-registration.yaml

# Update homeserver.yaml
sudo nano /mnt/media/matrix/synapse/data/homeserver.yaml
# Add to app_service_config_files:
#   - /data/whatsapp-registration.yaml

# Restart and start bridge
docker compose restart synapse
docker compose up -d mautrix-whatsapp
```

### Use the Bridge
In Element:
1. Start chat with `@whatsappbot:fig.systems`
2. Type: `login`
3. Scan QR code with your phone
## Optional: Google Chat Bridge

Google Chat requires additional Google Cloud setup (OAuth credentials).

See full README.md for detailed instructions.

## Quick Commands

```bash
# View all logs
docker compose logs -f

# View specific service logs
docker compose logs -f synapse
docker compose logs -f mautrix-telegram

# Restart everything
docker compose restart

# Stop everything
docker compose down

# Update containers
docker compose pull && docker compose up -d
```
## Verify Everything Works

### Test Checklist
- [ ] Can login at https://app.element.io with username/password
- [ ] Can send messages in Element
- [ ] Telegram bridge responds to commands
- [ ] WhatsApp bridge shows QR code
- [ ] Can see Telegram/WhatsApp chats in Element after bridging

## Troubleshooting

### Can't login
- Check Synapse logs: `docker compose logs synapse | grep -i error`
- Verify you created a user with `register_new_matrix_user`
- Test endpoint: `curl https://matrix.fig.systems/_matrix/client/versions`

### Bridge not working
- Check bridge logs: `docker compose logs mautrix-telegram`
- Verify registration file path in homeserver.yaml
- Ensure Synapse was restarted after adding registration
- Check bridge can reach Synapse: `docker compose exec mautrix-telegram ping synapse`

## Next Steps

- Invite friends to your Matrix server
- Create encrypted rooms for private conversations
- Bridge more Telegram/WhatsApp chats
- Set up Google Chat bridge for work communications
- Install Element on your phone for mobile access

## Need Help?

See README.md for:
- Detailed configuration explanations
- Google Chat bridge setup
- Federation troubleshooting
- Backup procedures
- Advanced configurations

Matrix documentation: https://matrix.org/docs/
Mautrix bridges: https://docs.mau.fi/bridges/

---
# Matrix Server with Bridges

Complete Matrix/Synapse homeserver setup with local authentication and bridges for Telegram, WhatsApp, and Google Chat.

## Architecture

- **Synapse**: Matrix homeserver (fig.systems)
- **PostgreSQL**: Database backend
- **Traefik**: Reverse proxy with Let's Encrypt
- **Bridges**: Telegram, WhatsApp, Google Chat
- **Optional**: Element web client

## Domain Configuration

- **Server**: matrix.fig.systems
- **Server Name**: fig.systems (used for Matrix IDs like @user:fig.systems)
- **Federation**: Enabled via .well-known delegation

## Setup Instructions

### 1. Deploy Matrix Server

```bash
cd compose/services/matrix
docker compose up -d
```

Wait for Synapse to start and initialize the database. Check logs:
```bash
docker compose logs -f synapse
```

### 2. Create Your First Admin User

Once Synapse is running, create an admin user:
```bash
docker exec -it matrix-synapse register_new_matrix_user -c /data/homeserver.yaml -a http://localhost:8008
```

Follow the prompts to create your admin account with a username and password.

### 3. Test Matrix Server

Visit https://matrix.fig.systems and you should see the Matrix homeserver info.

Try logging in:
1. Go to https://app.element.io
2. Click "Sign in"
3. Click "Edit" next to the homeserver
4. Enter: `matrix.fig.systems`
5. Click "Continue"
6. Enter your username and password
### 4. Configure Telegram Bridge

**Get Telegram API Credentials:**
1. Visit https://my.telegram.org
2. Log in with your phone number
3. Go to "API development tools"
4. Create an app (use any title/short name)
5. Note your `api_id` (number) and `api_hash` (string)

**Generate Bridge Config:**
```bash
# Generate initial config
docker run --rm -v /mnt/media/matrix/bridges/telegram:/data dock.mau.dev/mautrix/telegram:latest

# Edit the config
nano /mnt/media/matrix/bridges/telegram/config.yaml
```

**Update these settings in config.yaml:**
- `homeserver.address`: `http://synapse:8008`
- `homeserver.domain`: `fig.systems`
- `appservice.address`: `http://mautrix-telegram:29317`
- `appservice.hostname`: `0.0.0.0`
- `appservice.port`: `29317`
- `appservice.database`: `postgres://synapse:PASSWORD@postgres/telegram` (use password from .env)
- `telegram.api_id`: Your API ID
- `telegram.api_hash`: Your API hash
- `bridge.permissions`: Add your Matrix ID with admin level

**Register the bridge:**
```bash
# Copy the registration file to Synapse
cp /mnt/media/matrix/bridges/telegram/registration.yaml /mnt/media/matrix/synapse/data/telegram-registration.yaml
```

Add to `homeserver.yaml` under `app_service_config_files`:
```yaml
app_service_config_files:
  - /data/telegram-registration.yaml
```

Restart Synapse:
```bash
docker compose restart synapse
```

Start the bridge:
```bash
docker compose up -d mautrix-telegram
```

**Use the bridge:**
1. In Element, start a chat with `@telegrambot:fig.systems`
2. Send `login` and follow the instructions
### 5. Configure WhatsApp Bridge

**Generate Bridge Config:**
```bash
# Generate initial config
docker run --rm -v /mnt/media/matrix/bridges/whatsapp:/data dock.mau.dev/mautrix/whatsapp:latest

# Edit the config
nano /mnt/media/matrix/bridges/whatsapp/config.yaml
```

**Update these settings in config.yaml:**
- `homeserver.address`: `http://synapse:8008`
- `homeserver.domain`: `fig.systems`
- `appservice.address`: `http://mautrix-whatsapp:29318`
- `appservice.hostname`: `0.0.0.0`
- `appservice.port`: `29318`
- `appservice.database.uri`: `postgres://synapse:PASSWORD@postgres/whatsapp` (use password from .env)
- `bridge.permissions`: Add your Matrix ID with admin level

**Register the bridge:**
```bash
# Copy the registration file to Synapse
cp /mnt/media/matrix/bridges/whatsapp/registration.yaml /mnt/media/matrix/synapse/data/whatsapp-registration.yaml
```

Add to `homeserver.yaml`:
```yaml
app_service_config_files:
  - /data/telegram-registration.yaml
  - /data/whatsapp-registration.yaml
```

Restart Synapse:
```bash
docker compose restart synapse
```

Start the bridge:
```bash
docker compose up -d mautrix-whatsapp
```

**Use the bridge:**
1. In Element, start a chat with `@whatsappbot:fig.systems`
2. Send `login`
3. Scan the QR code with WhatsApp on your phone (like WhatsApp Web)
### 6. Configure Google Chat Bridge

**Prerequisites:**
- Google Cloud Project
- Google Chat API enabled
- OAuth 2.0 credentials

**Setup Google Cloud:**
1. Go to https://console.cloud.google.com
2. Create a new project or select existing
3. Enable "Google Chat API"
4. Create OAuth 2.0 credentials:
   - Application type: Desktop app
   - Download the JSON file

**Generate Bridge Config:**
```bash
# Generate initial config
docker run --rm -v /mnt/media/matrix/bridges/googlechat:/data dock.mau.dev/mautrix/googlechat:latest

# Edit the config
nano /mnt/media/matrix/bridges/googlechat/config.yaml
```

**Update these settings in config.yaml:**
- `homeserver.address`: `http://synapse:8008`
- `homeserver.domain`: `fig.systems`
- `appservice.address`: `http://mautrix-googlechat:29319`
- `appservice.hostname`: `0.0.0.0`
- `appservice.port`: `29319`
- `appservice.database`: `postgres://synapse:PASSWORD@postgres/googlechat` (use password from .env)
- `bridge.permissions`: Add your Matrix ID with admin level
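Put together, the fragment of `config.yaml` these bullets describe looks roughly like this (values illustrative; `PASSWORD` comes from `.env`):

```yaml
homeserver:
  address: http://synapse:8008
  domain: fig.systems

appservice:
  address: http://mautrix-googlechat:29319
  hostname: 0.0.0.0
  port: 29319
  database: postgres://synapse:PASSWORD@postgres/googlechat

bridge:
  permissions:
    '@yourusername:fig.systems': admin
```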
**Register the bridge:**
```bash
# Copy the registration file to Synapse
cp /mnt/media/matrix/bridges/googlechat/registration.yaml /mnt/media/matrix/synapse/data/googlechat-registration.yaml
```

Add to `homeserver.yaml`:
```yaml
app_service_config_files:
  - /data/telegram-registration.yaml
  - /data/whatsapp-registration.yaml
  - /data/googlechat-registration.yaml
```

Restart Synapse:
```bash
docker compose restart synapse
```

Start the bridge:
```bash
docker compose up -d mautrix-googlechat
```

**Use the bridge:**
1. In Element, start a chat with `@googlechatbot:fig.systems`
2. Send `login`
3. Follow the OAuth flow to authenticate with your Google account

**Note for Work Google Chat:** Your organization's Google Workspace admin might need to approve the OAuth app.

## Client Apps

### Element (Recommended)

**Web:** https://app.element.io
**iOS:** https://apps.apple.com/app/element-messenger/id1083446067
**Android:** https://play.google.com/store/apps/details?id=im.vector.app

**Setup:**
1. Open Element
2. Click "Sign in"
3. Click "Edit" next to homeserver
4. Enter: `matrix.fig.systems`
5. Click "Continue"
6. Enter your username and password

### Alternative Clients

- **FluffyChat**: Modern, lightweight client
- **SchildiChat**: Element fork with UI improvements
- **Nheko**: Desktop client

All clients work by pointing to `matrix.fig.systems` as the homeserver.

## Maintenance

### View Logs
```bash
# All services
docker compose logs -f

# Specific service
docker compose logs -f synapse
docker compose logs -f mautrix-telegram
```

### Restart Services
```bash
# All
docker compose restart

# Specific
docker compose restart synapse
```

### Update Containers
```bash
docker compose pull
docker compose up -d
```

### Backup

Important directories:
- `/mnt/media/matrix/synapse/data` - Synapse configuration and signing keys
- `/mnt/media/matrix/synapse/media` - Uploaded media files
- `/mnt/media/matrix/postgres` - Database
- `/mnt/media/matrix/bridges/` - Bridge configurations
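A minimal backup sketch covering those directories (assumes services are stopped first for a consistent copy; the PostgreSQL data is better captured with `pg_dump` than by copying the raw directory):

```shell
docker compose down

# Archive configuration, signing keys, media, and bridge configs
sudo tar -czf matrix-backup-$(date +%F).tar.gz \
  /mnt/media/matrix/synapse/data \
  /mnt/media/matrix/synapse/media \
  /mnt/media/matrix/bridges

# Dump the database separately
docker compose up -d postgres
docker compose exec -T postgres pg_dump -U synapse synapse > synapse-$(date +%F).sql
docker compose up -d
```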
## Troubleshooting

### Can't connect to homeserver
- Check Synapse logs: `docker compose logs synapse`
- Verify Traefik routing: `docker compose -f compose/core/traefik/compose.yaml logs`
- Test endpoint: `curl -k https://matrix.fig.systems/_matrix/client/versions`

### Login not working
- Verify you created a user with `register_new_matrix_user`
- Check Synapse logs for authentication errors
- Ensure the homeserver is set to `matrix.fig.systems` in your client

### Bridge not connecting
- Check bridge logs: `docker compose logs mautrix-telegram`
- Verify registration file is in Synapse config
- Ensure bridge database is created in PostgreSQL
- Restart Synapse after adding registration files

### Federation issues
- Ensure ports 80 and 443 are accessible
- Check `.well-known` delegation is working
- Test federation: https://federationtester.matrix.org/
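To check the delegation directly, query the standard `.well-known` paths that Matrix clients and servers use for discovery:

```shell
# Server-to-server delegation
curl -s https://fig.systems/.well-known/matrix/server

# Client discovery
curl -s https://fig.systems/.well-known/matrix/client
```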
## Security Notes

- Users authenticate with local Matrix passwords
- Public registration is disabled (use `register_new_matrix_user` to create accounts)
- Federation uses standard HTTPS (443) with .well-known delegation
- All bridges run on internal network only
- Media uploads limited to 50MB

## Configuration Files

- `compose.yaml` - Docker Compose configuration
- `homeserver.yaml` - Synapse configuration
- `.env` - Environment variables and secrets

## Resources

- Matrix Documentation: https://matrix.org/docs/
- Synapse Documentation: https://element-hq.github.io/synapse/latest/
- Mautrix Bridges: https://docs.mau.fi/bridges/
- Element Help: https://element.io/help

---
# Matrix Room Management Guide

## Understanding Matrix Room Concepts

### Auto-Join Rooms
**What they are:** Rooms that users automatically join when they create an account.

**Configured in:** `homeserver.yaml` (lines 118-120)
```yaml
auto_join_rooms:
  - "#general:fig.systems"
  - "#announcements:fig.systems"
  - "#support:fig.systems"
```

**How it works:**
- When a new user registers, they're automatically added to these rooms
- Great for onboarding and ensuring everyone sees important channels
- Users can leave these rooms later if they want

**To add more rooms:**
1. Create the room first (using the script or manually)
2. Add its alias to the `auto_join_rooms` list in homeserver.yaml
3. Restart Synapse: `docker restart matrix-synapse`

---

### Room Directory (Public Room List)

**What it is:** A searchable list of public rooms that users can browse and join.

**Where to find it:**
- **In Element:** Click "Explore rooms" or the + button → "Explore public rooms"
- **In Admin Panel:** Navigate to "Rooms" section to see all rooms and their visibility

**How rooms appear in the directory:**
1. Room must be created with `visibility: public`
2. Room must be published to the directory
3. Users can search and join these rooms without an invite

**Room Visibility Settings:**
- `public` - Listed in room directory, anyone can find and join
- `private` - Not listed, users need an invite or direct link

---

## Quick Setup: Create Default Rooms

Run this script to create the three default auto-join rooms:

```bash
./create-default-rooms.sh admin yourpassword
```

This will create:
- **#general:fig.systems** - General discussion
- **#announcements:fig.systems** - Important updates
- **#support:fig.systems** - Help and questions

All rooms will be:
- ✅ Public and searchable
- ✅ Listed in room directory
- ✅ Auto-joined by new users
- ✅ Allow anyone to speak (not read-only)

---

## Manual Room Management

### Via Synapse Admin Panel

**Access:** https://admin.matrix.fig.systems

**Room Management Features:**

1. **View All Rooms**
   - Navigate to "Rooms" in the sidebar
   - See room ID, name, members, aliases
   - View room details and settings

2. **Room Directory Settings**
   - Click on a room
   - Find "Publish to directory" toggle
   - Enable/disable public listing

3. **Room Moderation**
   - View and remove members
   - Delete rooms
   - View room state events
   - See room statistics

4. **Room Aliases**
   - View all aliases pointing to a room
   - Add new aliases
   - Remove old aliases
---
|
|
||||||
|
|
||||||
### Via Element Web Client
|
|
||||||
|
|
||||||
**Access:** https://chat.fig.systems
|
|
||||||
|
|
||||||
**Create a Room:**
|
|
||||||
1. Click the + button or "Create room"
|
|
||||||
2. Set room name and topic
|
|
||||||
3. Choose "Public room" for directory listing
|
|
||||||
4. Set room address (alias) - e.g., `general`
|
|
||||||
5. Enable "List this room in the room directory"
|
|
||||||
|
|
||||||
**Publish Existing Room to Directory:**
|
|
||||||
1. Open the room
|
|
||||||
2. Click room name → Settings
|
|
||||||
3. Go to "Security & Privacy"
|
|
||||||
4. Under "Room visibility" select "Public"
|
|
||||||
5. Go to "General"
|
|
||||||
6. Enable "Publish this room to the public room directory"
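
Under the hood, the publish toggle is a single PUT to the client-server directory API — the same endpoint `create-default-rooms.sh` calls. A minimal sketch; the room ID and access token below are hypothetical placeholders, not real values:

```shell
# Publishing a room to the directory is one PUT to the directory API.
# ROOM_ID and ACCESS_TOKEN are hypothetical placeholders - substitute real values.
HOMESERVER="https://matrix.fig.systems"
ROOM_ID='!abc123def456:fig.systems'
ACCESS_TOKEN='syt_example'
PAYLOAD='{"visibility": "public"}'

# The real call (needs a live server and a valid admin token):
#   curl -X PUT "${HOMESERVER}/_matrix/client/v3/directory/list/room/${ROOM_ID}" \
#     -H "Authorization: Bearer ${ACCESS_TOKEN}" \
#     -H 'Content-Type: application/json' \
#     -d "$PAYLOAD"
echo "PUT ${HOMESERVER}/_matrix/client/v3/directory/list/room/${ROOM_ID}"
echo "$PAYLOAD"
```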

**Set Room as Auto-Join:**
1. Note the room alias (e.g., `#gaming:fig.systems`)
2. Edit homeserver.yaml and add to `auto_join_rooms`
3. Restart Synapse

---

## Room Types and Use Cases

### 1. General/Community Rooms
```text
# Open to all, listed in directory, auto-join
Preset: public_chat
Visibility: public
History: shared (new joiners can see history)
```
**Best for:** General chat, announcements, community discussions

### 2. Private Team Rooms
```text
# Invite-only, not in directory
Preset: private_chat
Visibility: private
History: shared or invited (configurable)
```
**Best for:** Team channels, private projects, sensitive discussions

### 3. Read-Only Announcement Rooms
```text
# Public, but only admins/mods can post
Preset: public_chat
Visibility: public
Power levels: events_default: 50, users_default: 0
```
**Best for:** Official announcements, server updates, rules

---

## Room Alias vs Room ID

**Room ID:** `!abc123def456:fig.systems`
- Permanent, immutable identifier
- Looks cryptic, not user-friendly
- Required for API calls

**Room Alias:** `#general:fig.systems`
- Human-readable name
- Can be changed or removed
- Points to a Room ID
- Used in `auto_join_rooms` config

**Multiple aliases:** A room can have multiple aliases:
- `#general:fig.systems`
- `#lobby:fig.systems`
- `#welcome:fig.systems`

All point to the same room!
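
Resolving an alias to its room ID is one client-server API call (`GET /_matrix/client/v3/directory/room/{alias}`). A sketch of parsing its response using the same grep/cut idiom as the bundled scripts; the JSON below is a made-up sample response, not live data:

```shell
# Made-up sample of what a homeserver returns for
#   GET /_matrix/client/v3/directory/room/%23general:fig.systems
RESPONSE='{"room_id":"!abc123def456:fig.systems","servers":["fig.systems"]}'

# Extract the room ID the same way create-default-rooms.sh parses responses
ROOM_ID=$(echo "$RESPONSE" | grep -o '"room_id":"[^"]*' | cut -d'"' -f4)
echo "$ROOM_ID"
```

With a live server you would fetch `RESPONSE` with `curl` against the URL in the comment (note the `#` in the alias must be URL-encoded as `%23`).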

---

## Advanced: Space Management

**Spaces** are special rooms that group other rooms together (like Discord servers).

**Create a Space:**
1. In Element: Click + → "Create new space"
2. Add rooms to the space
3. Set space visibility (public/private)
4. Users can join the space to see all its rooms

**Use cases:**
- Group rooms by topic (Gaming Space, Work Space)
- Create sub-communities within your server
- Organize rooms hierarchically

---

## Common Tasks

### Add a new auto-join room

1. Create the room (use script or manually)
2. Edit `homeserver.yaml`:

```yaml
auto_join_rooms:
  - "#general:fig.systems"
  - "#announcements:fig.systems"
  - "#support:fig.systems"
  - "#your-new-room:fig.systems"  # Add this
```

3. `docker restart matrix-synapse`
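
After editing, a quick sanity check of what Synapse will read — a sketch with the config block inlined rather than read from the real homeserver.yaml:

```shell
# Inline sample of the auto_join_rooms block from homeserver.yaml
CONFIG='auto_join_rooms:
  - "#general:fig.systems"
  - "#announcements:fig.systems"
  - "#support:fig.systems"'

# List the aliases Synapse will auto-join new users into
echo "$CONFIG" | grep -o '#[^"]*'
```

Against the real file, the equivalent would be `grep -A10 '^auto_join_rooms:' homeserver.yaml`.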

### Remove a room from auto-join

1. Edit `homeserver.yaml` and remove the line
2. `docker restart matrix-synapse`
3. Note: Existing users won't be removed from the room

### Make a room public/private

**Via Element:**
1. Room Settings → Security & Privacy
2. Change "Who can access this room"
3. Toggle directory listing

**Via Admin Panel:**
1. Find room in Rooms list
2. Edit visibility settings

### Delete a room

**Via Admin Panel:**
1. Go to Rooms
2. Find the room
3. Click "Delete room"
4. Confirm deletion
5. Options: Purge messages, block room

**Note:** Deletion is permanent and affects all users!

---

## Troubleshooting

### Users not auto-joining rooms

**Check:**
1. Room aliases are correct in homeserver.yaml
2. Rooms actually exist
3. Synapse was restarted after config change
4. Check Synapse logs: `docker logs matrix-synapse | grep auto_join`

### Room not appearing in directory

**Check:**
1. Room visibility is set to "public"
2. "Publish to directory" is enabled
3. Server allows public room listings
4. Try searching by exact alias

### Can't create room with alias

**Possible causes:**
- Alias already taken
- Invalid characters (use lowercase, numbers, hyphens)
- Missing permissions

---

## Best Practices

✅ **Do:**
- Use clear, descriptive room names
- Set appropriate topics for all rooms
- Make announcements room read-only for most users
- Use Spaces to organize many rooms
- Regularly review and clean up unused rooms

❌ **Don't:**
- Auto-join users to too many rooms (overwhelming)
- Make all rooms public if you want privacy
- Forget to set room topics (helps users understand purpose)
- Create duplicate rooms with similar purposes

---

## Room Configuration Reference

### Power Levels Explained

Power levels control what users can do in a room:

```yaml
power_level_content_override:
  events_default: 0   # Power needed to send messages (0 = anyone)
  invite: 0           # Power needed to invite users
  state_default: 50   # Power needed to change room settings
  users_default: 0    # Default power for new users
  redact: 50          # Power needed to delete messages
  kick: 50            # Power needed to kick users
  ban: 50             # Power needed to ban users
```

**Common setups:**

**Open discussion room:** events_default: 0 (anyone can talk)
**Read-only room:** events_default: 50, users_default: 0 (only mods+ can post)
**Moderated room:** events_default: 0, but specific users have elevated power
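
The rule behind all three setups is the same: a user can send a message when their power level is at least `events_default`. A minimal sketch of that check:

```shell
# A user may send message events when their power level >= events_default
can_post() {
  local user_level=$1 events_default=$2
  if [ "$user_level" -ge "$events_default" ]; then echo yes; else echo no; fi
}

can_post 0 0    # open room, regular user: prints "yes"
can_post 0 50   # read-only room, regular user: prints "no"
can_post 50 50  # moderator in a read-only room: prints "yes"
```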

### History Visibility

- `world_readable` - Anyone can read, even without joining
- `shared` - Visible to all room members (past and present)
- `invited` - Visible only from when user was invited
- `joined` - Visible only from when user joined
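
History visibility is just a state event (`m.room.history_visibility`), which is why `create-default-rooms.sh` can set it at room-creation time via `initial_state`. A sketch of the event body the default rooms use:

```shell
# The state event controlling history visibility; "shared" is what the
# default rooms use so new joiners can read past messages.
EVENT='{
  "type": "m.room.history_visibility",
  "content": { "history_visibility": "shared" }
}'
echo "$EVENT"
```

On an existing room, the same content can be sent as a state event to change visibility after creation.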

---

## Summary

**Auto-Join Rooms:** homeserver.yaml:118-120 - Users join automatically on signup
**Room Directory:** Public searchable list - users browse and join
**Admin Panel:** Manage all rooms, visibility, members
**Element Client:** Create/configure rooms with UI

Your setup:
- ✅ Auto-join configured for 3 default rooms
- ✅ Script ready to create them: `./create-default-rooms.sh`
- ✅ All new users will join #general, #announcements, #support
- ✅ Rooms will be public and in directory
@ -1,281 +0,0 @@
services:
  postgres:
    image: postgres:16-alpine
    container_name: matrix-postgres
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_INITDB_ARGS: ${POSTGRES_INITDB_ARGS}
    volumes:
      - /mnt/media/matrix/postgres:/var/lib/postgresql/data
    restart: unless-stopped
    networks:
      - matrix-internal
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER}"]
      interval: 10s
      timeout: 5s
      retries: 5

  synapse:
    image: matrixdotorg/synapse:latest
    container_name: matrix-synapse
    environment:
      SYNAPSE_SERVER_NAME: ${SERVER_NAME}
      SYNAPSE_REPORT_STATS: "no"
      TZ: ${TZ}
      UID: ${PUID}
      GID: ${PGID}
    volumes:
      - /mnt/media/matrix/synapse/data:/data
      - /mnt/media/matrix/synapse/media:/media
      - ./homeserver.yaml:/data/homeserver.yaml:ro
      - /mnt/media/matrix/bridges/telegram:/data/bridges/telegram:ro
      - /mnt/media/matrix/bridges/whatsapp:/data/bridges/whatsapp:ro
      - /mnt/media/matrix/bridges/googlechat:/data/bridges/googlechat:ro
      - /mnt/media/matrix/bridges/discord:/data/bridges/discord:ro
    depends_on:
      postgres:
        condition: service_healthy
    restart: unless-stopped
    networks:
      - homelab
      - matrix-internal
    labels:
      # Traefik
      traefik.enable: true
      traefik.docker.network: homelab

      # Matrix Client-Server and Federation API (both on same endpoint with .well-known delegation)
      traefik.http.routers.matrix.rule: Host(`${TRAEFIK_HOST}`)
      traefik.http.routers.matrix.entrypoints: websecure
      traefik.http.routers.matrix.tls.certresolver: letsencrypt
      traefik.http.routers.matrix.middlewares: matrix-headers
      traefik.http.services.matrix.loadbalancer.server.port: 8008

      # Headers middleware for Matrix
      traefik.http.middlewares.matrix-headers.headers.customrequestheaders.X-Forwarded-Proto: https
      traefik.http.middlewares.matrix-headers.headers.customresponseheaders.X-Frame-Options: SAMEORIGIN
      traefik.http.middlewares.matrix-headers.headers.customresponseheaders.X-Content-Type-Options: nosniff

      # Homarr Discovery
      homarr.name: Matrix
      homarr.group: Services
      homarr.icon: mdi:matrix

  # Telegram Bridge
  mautrix-telegram:
    image: dock.mau.dev/mautrix/telegram:latest
    container_name: matrix-telegram-bridge
    restart: unless-stopped
    volumes:
      - /mnt/media/matrix/bridges/telegram:/data
    depends_on:
      synapse:
        condition: service_started
    networks:
      - matrix-internal

  # WhatsApp Bridge
  mautrix-whatsapp:
    image: dock.mau.dev/mautrix/whatsapp:latest
    container_name: matrix-whatsapp-bridge
    restart: unless-stopped
    volumes:
      - /mnt/media/matrix/bridges/whatsapp:/data
    depends_on:
      synapse:
        condition: service_started
    networks:
      - matrix-internal

  # Google Chat Bridge
  mautrix-googlechat:
    image: dock.mau.dev/mautrix/googlechat:latest
    container_name: matrix-googlechat-bridge
    restart: unless-stopped
    volumes:
      - /mnt/media/matrix/bridges/googlechat:/data
    depends_on:
      synapse:
        condition: service_started
    networks:
      - matrix-internal

  # Element Web Client
  element-web:
    image: vectorim/element-web:latest
    container_name: matrix-element-web
    restart: unless-stopped
    volumes:
      - ./element-config.json:/app/config.json:ro
    networks:
      - homelab
    labels:
      # Traefik
      traefik.enable: true
      traefik.docker.network: homelab

      # Element Web UI
      traefik.http.routers.element.rule: Host(`chat.fig.systems`)
      traefik.http.routers.element.entrypoints: websecure
      traefik.http.routers.element.tls.certresolver: letsencrypt
      traefik.http.services.element.loadbalancer.server.port: 80

      # Homarr Discovery
      homarr.name: Element
      homarr.group: Services
      homarr.icon: mdi:chat

  # Synapse Admin - Web UI for managing users and rooms
  synapse-admin:
    image: awesometechnologies/synapse-admin:latest
    container_name: matrix-synapse-admin
    restart: unless-stopped
    networks:
      - homelab
    labels:
      # Traefik
      traefik.enable: true
      traefik.docker.network: homelab

      # Synapse Admin UI
      traefik.http.routers.synapse-admin.rule: Host(`admin.matrix.fig.systems`)
      traefik.http.routers.synapse-admin.entrypoints: websecure
      traefik.http.routers.synapse-admin.tls.certresolver: letsencrypt
      traefik.http.services.synapse-admin.loadbalancer.server.port: 80

      # Homarr Discovery
      homarr.name: Matrix Admin
      homarr.group: Services
      homarr.icon: mdi:shield-account

  # Maubot - Modular bot framework
  maubot:
    image: dock.mau.dev/maubot/maubot:latest
    container_name: matrix-maubot
    restart: unless-stopped
    volumes:
      - /mnt/media/matrix/maubot:/data
    depends_on:
      synapse:
        condition: service_started
    networks:
      - homelab
      - matrix-internal
    labels:
      # Traefik
      traefik.enable: true
      traefik.docker.network: homelab

      # Maubot Management UI
      traefik.http.routers.maubot.rule: Host(`maubot.fig.systems`)
      traefik.http.routers.maubot.entrypoints: websecure
      traefik.http.routers.maubot.tls.certresolver: letsencrypt
      traefik.http.services.maubot.loadbalancer.server.port: 29316

      # Homarr Discovery
      homarr.name: Maubot
      homarr.group: Services
      homarr.icon: mdi:robot

  # Mjolnir - Moderation bot
  mjolnir:
    image: matrixdotorg/mjolnir:latest
    container_name: matrix-mjolnir
    restart: unless-stopped
    volumes:
      - /mnt/media/matrix/mjolnir:/data
    depends_on:
      synapse:
        condition: service_started
    networks:
      - matrix-internal

  # Matrix Hookshot - GitHub/GitLab/Jira integration
  hookshot:
    image: halfshot/matrix-hookshot:latest
    container_name: matrix-hookshot
    restart: unless-stopped
    volumes:
      - /mnt/media/matrix/hookshot:/data
    depends_on:
      synapse:
        condition: service_started
    networks:
      - homelab
      - matrix-internal
    labels:
      # Traefik
      traefik.enable: true
      traefik.docker.network: homelab

      # Hookshot Webhooks
      traefik.http.routers.hookshot.rule: Host(`hookshot.fig.systems`)
      traefik.http.routers.hookshot.entrypoints: websecure
      traefik.http.routers.hookshot.tls.certresolver: letsencrypt
      traefik.http.services.hookshot.loadbalancer.server.port: 9000

      # Homarr Discovery
      homarr.name: Matrix Hookshot
      homarr.group: Services
      homarr.icon: mdi:webhook

  # Discord Bridge
  mautrix-discord:
    image: dock.mau.dev/mautrix/discord:latest
    container_name: matrix-discord-bridge
    restart: unless-stopped
    volumes:
      - /mnt/media/matrix/bridges/discord:/data
    depends_on:
      synapse:
        condition: service_started
    networks:
      - matrix-internal

  # Matrix Registration - Token-based registration management
  # DISABLED: zeratax/matrix-registration has been archived and image is no longer available
  # matrix-registration:
  #   image: zeratax/matrix-registration:latest
  #   container_name: matrix-registration
  #   restart: unless-stopped
  #   environment:
  #     MATRIX_REGISTRATION_BASE_URL: https://reg.matrix.fig.systems
  #     MATRIX_REGISTRATION_SERVER_LOCATION: http://synapse:8008
  #     MATRIX_REGISTRATION_SERVER_NAME: ${SERVER_NAME}
  #     MATRIX_REGISTRATION_SHARED_SECRET: ${SYNAPSE_REGISTRATION_SECRET}
  #     MATRIX_REGISTRATION_ADMIN_SECRET: ${MATRIX_REGISTRATION_ADMIN_SECRET}
  #     MATRIX_REGISTRATION_DISABLE_EMAIL_VALIDATION: "false"
  #     MATRIX_REGISTRATION_ALLOW_CORS: "true"
  #   volumes:
  #     - /mnt/media/matrix/registration:/data
  #   depends_on:
  #     synapse:
  #       condition: service_started
  #   networks:
  #     - homelab
  #     - matrix-internal
  #   labels:
  #     # Traefik
  #     traefik.enable: true
  #     traefik.docker.network: homelab
  #
  #     # Matrix Registration UI
  #     traefik.http.routers.matrix-registration.rule: Host(`reg.matrix.fig.systems`)
  #     traefik.http.routers.matrix-registration.entrypoints: websecure
  #     traefik.http.routers.matrix-registration.tls.certresolver: letsencrypt
  #     traefik.http.services.matrix-registration.loadbalancer.server.port: 5000
  #
  #     # Homarr Discovery
  #     homarr.name: Matrix Registration
  #     homarr.group: Services
  #     homarr.icon: mdi:account-plus

networks:
  homelab:
    external: true
  matrix-internal:
    driver: bridge
@ -1,122 +0,0 @@
#!/bin/bash

# Script to create default auto-join rooms for Matrix
# Usage: ./create-default-rooms.sh <admin_username> <admin_password>

HOMESERVER="https://matrix.fig.systems"
USERNAME="${1}"
PASSWORD="${2}"

if [ -z "$USERNAME" ] || [ -z "$PASSWORD" ]; then
  echo "Usage: $0 <admin_username> <admin_password>"
  exit 1
fi

echo "🔐 Logging in as $USERNAME..."
# Get access token
LOGIN_RESPONSE=$(curl -s -X POST "${HOMESERVER}/_matrix/client/v3/login" \
  -H 'Content-Type: application/json' \
  -d "{
    \"type\": \"m.login.password\",
    \"identifier\": {
      \"type\": \"m.id.user\",
      \"user\": \"${USERNAME}\"
    },
    \"password\": \"${PASSWORD}\"
  }")

ACCESS_TOKEN=$(echo "$LOGIN_RESPONSE" | grep -o '"access_token":"[^"]*' | cut -d'"' -f4)

if [ -z "$ACCESS_TOKEN" ]; then
  echo "❌ Login failed!"
  echo "$LOGIN_RESPONSE" | jq . 2>/dev/null || echo "$LOGIN_RESPONSE"
  exit 1
fi

echo "✅ Login successful!"
echo ""

# Function to create a room
create_room() {
  local ROOM_NAME=$1
  local ROOM_ALIAS=$2
  local ROOM_TOPIC=$3
  local PRESET=$4  # public_chat or private_chat

  echo "🏠 Creating room: $ROOM_NAME (#${ROOM_ALIAS}:fig.systems)"

  ROOM_DATA="{
    \"name\": \"${ROOM_NAME}\",
    \"room_alias_name\": \"${ROOM_ALIAS}\",
    \"topic\": \"${ROOM_TOPIC}\",
    \"preset\": \"${PRESET}\",
    \"visibility\": \"public\",
    \"initial_state\": [
      {
        \"type\": \"m.room.history_visibility\",
        \"content\": {
          \"history_visibility\": \"shared\"
        }
      },
      {
        \"type\": \"m.room.guest_access\",
        \"content\": {
          \"guest_access\": \"can_join\"
        }
      }
    ],
    \"power_level_content_override\": {
      \"events_default\": 0,
      \"invite\": 0,
      \"state_default\": 50,
      \"users_default\": 0,
      \"redact\": 50,
      \"kick\": 50,
      \"ban\": 50
    }
  }"

  RESPONSE=$(curl -s -X POST "${HOMESERVER}/_matrix/client/v3/createRoom" \
    -H "Authorization: Bearer ${ACCESS_TOKEN}" \
    -H 'Content-Type: application/json' \
    -d "$ROOM_DATA")

  ROOM_ID=$(echo "$RESPONSE" | grep -o '"room_id":"[^"]*' | cut -d'"' -f4)

  if [ -n "$ROOM_ID" ]; then
    echo "  ✅ Created: $ROOM_ID"

    # Set room to be in directory
    echo "  📋 Adding to room directory..."
    curl -s -X PUT "${HOMESERVER}/_matrix/client/v3/directory/list/room/${ROOM_ID}" \
      -H "Authorization: Bearer ${ACCESS_TOKEN}" \
      -H 'Content-Type: application/json' \
      -d '{"visibility": "public"}' > /dev/null
    echo "  ✅ Added to public room directory"
  else
    echo "  ⚠️ Error or room already exists"
    echo "$RESPONSE" | jq . 2>/dev/null || echo "$RESPONSE"
  fi
  echo ""
}

echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "Creating default auto-join rooms..."
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""

# Create the default rooms
create_room "General" "general" "General discussion and community hangout" "public_chat"
create_room "Announcements" "announcements" "Important server announcements and updates" "public_chat"
create_room "Support" "support" "Get help and ask questions" "public_chat"

echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "✅ Default rooms created!"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
echo "These rooms will be automatically joined by new users:"
echo "  • #general:fig.systems"
echo "  • #announcements:fig.systems"
echo "  • #support:fig.systems"
echo ""
echo "All rooms are also published in the room directory!"
@ -1,86 +0,0 @@
#!/bin/bash

# Script to create Matrix registration tokens
# Usage: ./create-token.sh <admin_username> <admin_password> [uses_allowed] [token_name]

HOMESERVER="https://matrix.fig.systems"
USERNAME="${1}"
PASSWORD="${2}"
USES_ALLOWED="${3:-1}"  # Default: 1 use
TOKEN_NAME="${4:-}"     # Optional custom token

if [ -z "$USERNAME" ] || [ -z "$PASSWORD" ]; then
  echo "Usage: $0 <admin_username> <admin_password> [uses_allowed] [token_name]"
  echo ""
  echo "Examples:"
  echo "  $0 admin mypassword               # Create single-use token"
  echo "  $0 admin mypassword 10            # Create token with 10 uses"
  echo "  $0 admin mypassword 5 invite123   # Create custom token 'invite123' with 5 uses"
  exit 1
fi

echo "🔐 Logging in as $USERNAME..."
# Get access token
LOGIN_RESPONSE=$(curl -s -X POST "${HOMESERVER}/_matrix/client/v3/login" \
  -H 'Content-Type: application/json' \
  -d "{
    \"type\": \"m.login.password\",
    \"identifier\": {
      \"type\": \"m.id.user\",
      \"user\": \"${USERNAME}\"
    },
    \"password\": \"${PASSWORD}\"
  }")

ACCESS_TOKEN=$(echo "$LOGIN_RESPONSE" | grep -o '"access_token":"[^"]*' | cut -d'"' -f4)

if [ -z "$ACCESS_TOKEN" ]; then
  echo "❌ Login failed!"
  echo "$LOGIN_RESPONSE" | jq . 2>/dev/null || echo "$LOGIN_RESPONSE"
  exit 1
fi

echo "✅ Login successful!"
echo ""
echo "🎟️ Creating registration token..."

# Create registration token
if [ -n "$TOKEN_NAME" ]; then
  # Custom token
  TOKEN_DATA="{
    \"token\": \"${TOKEN_NAME}\",
    \"uses_allowed\": ${USES_ALLOWED}
  }"
else
  # Random token
  TOKEN_DATA="{
    \"uses_allowed\": ${USES_ALLOWED},
    \"length\": 16
  }"
fi

TOKEN_RESPONSE=$(curl -s -X POST "${HOMESERVER}/_synapse/admin/v1/registration_tokens/new" \
  -H "Authorization: Bearer ${ACCESS_TOKEN}" \
  -H 'Content-Type: application/json' \
  -d "$TOKEN_DATA")

TOKEN=$(echo "$TOKEN_RESPONSE" | grep -o '"token":"[^"]*' | cut -d'"' -f4)

if [ -z "$TOKEN" ]; then
  echo "❌ Token creation failed!"
  echo "$TOKEN_RESPONSE" | jq . 2>/dev/null || echo "$TOKEN_RESPONSE"
  exit 1
fi

echo "✅ Registration token created!"
echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "📋 TOKEN: ${TOKEN}"
echo "📊 Uses allowed: ${USES_ALLOWED}"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo ""
echo "Share this token with users who should be able to register."
echo "They'll enter it during signup at: https://chat.fig.systems"
echo ""
echo "Full response:"
echo "$TOKEN_RESPONSE" | jq . 2>/dev/null || echo "$TOKEN_RESPONSE"
@ -1,24 +0,0 @@
{
  "default_server_config": {
    "m.homeserver": {
      "base_url": "https://matrix.fig.systems",
      "server_name": "fig.systems"
    },
    "m.identity_server": {
      "base_url": "https://vector.im"
    }
  },
  "brand": "fig.systems",
  "default_country_code": "US",
  "show_labs_settings": true,
  "default_theme": "dark",
  "room_directory": {
    "servers": ["matrix.org", "fig.systems"]
  },
  "enable_presence_by_default": true,
  "setting_defaults": {
    "breadcrumbs": true
  },
  "default_federate": true,
  "permalink_prefix": "https://chat.fig.systems"
}
@ -1,131 +0,0 @@
# Configuration file for Synapse.
#
# This is a YAML file: see [1] for a quick introduction. Note in particular
# that *indentation is important*: all the elements of a list or dictionary
# should have the same indentation.
#
# [1] https://docs.ansible.com/ansible/latest/reference_appendices/YAMLSyntax.html
#
# For more information on how to configure Synapse, including a complete accounting of
# each option, go to docs/usage/configuration/config_documentation.md or
# https://element-hq.github.io/synapse/latest/usage/configuration/config_documentation.html

## Server ##
server_name: "fig.systems"
pid_file: /data/homeserver.pid
web_client_location: https://chat.fig.systems
public_baseurl: https://matrix.fig.systems

## Ports ##
listeners:
  - port: 8008
    tls: false
    type: http
    x_forwarded: true
    bind_addresses: ['::']
    resources:
      - names: [client, federation]
        compress: false

## Database ##
database:
  name: psycopg2
  args:
    user: synapse
    password: 46d8cb2e8bdacf5a267a5f35bcdea4ded46e42ced008c4998e180f33e3ce07c5
    database: synapse
    host: postgres
    port: 5432
    cp_min: 5
    cp_max: 10

## Logging ##
log_config: "/data/fig.systems.log.config"

## Media Storage ##
media_store_path: /media
max_upload_size: 50M
max_image_pixels: 32M

## Registration ##
enable_registration: true
enable_registration_without_verification: true
registration_shared_secret: "8c9268b0d93d532139930396b22ffc97cad2210ad40f303a0d91fbf7eac5a855"
registration_requires_token: true
# registrations_require_3pid:
#   - email

## Email ##
email:
  smtp_host: smtp.mailgun.org
  smtp_port: 587
  smtp_user: "no-reply@fig.systems"
  smtp_pass: "1bc0de262fcfdb1398a3df54b8a14c07-32a0fef1-3f0b66d3"
  require_transport_security: true
  notif_from: "Matrix.Fig.Systems <no-reply@fig.systems>"
  enable_notifs: true
  notif_for_new_users: true
  client_base_url: "https://chat.fig.systems"
  validation_token_lifetime: 15m
  invite_client_location: "https://chat.fig.systems"

## Metrics ##
enable_metrics: true
report_stats: false
metrics_port: 9000

## Signing Keys ##
macaroon_secret_key: "c7374565104bc5a01c6ea2897e3c9bb3ab04948f17d1b29d342aede4e4406831"
form_secret: "E7V11MUnpi==wQJ:OX*Dv-uzd&geZ~4pP=QBr#I-Dek3zGHfcJ"
signing_key_path: "/data/fig.systems.signing.key"

## App Services (Bridges and Bots) ##
# Temporarily commented out until bridges generate registration files
# app_service_config_files:
#   - /data/bridges/telegram-registration.yaml
#   - /data/bridges/whatsapp-registration.yaml
#   - /data/bridges/googlechat-registration.yaml
#   - /data/bridges/discord-registration.yaml

## Federation ##
federation_domain_whitelist: null
allow_public_rooms_over_federation: true
allow_public_rooms_without_auth: false

## Trusted Key Servers ##
trusted_key_servers:
  - server_name: "matrix.org"

## URL Previews ##
url_preview_enabled: true
url_preview_ip_range_blacklist:
  - '127.0.0.0/8'
  - '10.0.0.0/8'
  - '172.16.0.0/12'
  - '192.168.0.0/16'
  - '100.64.0.0/10'
  - '169.254.0.0/16'
  - '::1/128'
  - 'fe80::/64'
  - 'fc00::/7'

## Room Settings ##
enable_search: true
encryption_enabled_by_default_for_room_type: invite
autocreate_auto_join_rooms: true

# Auto-join rooms - users automatically join these rooms on registration
auto_join_rooms:
  - "#general:fig.systems"
  - "#announcements:fig.systems"
  - "#support:fig.systems"

# Optionally set a room alias for the first auto-join room as the "default room"
# This can be used by clients to suggest a default place to start
# auto_join_mxid_localpart: general

# Room directory - make certain rooms publicly discoverable
# These rooms will appear in the public room list
# Note: The rooms must already exist and be set to "published" in their settings

# vim:ft=yaml
@ -1,88 +0,0 @@
#!/bin/bash

# Script to manage Matrix registration tokens
# Usage: ./manage-tokens.sh <admin_username> <admin_password> <command> [token]

HOMESERVER="https://matrix.fig.systems"
USERNAME="${1}"
PASSWORD="${2}"
COMMAND="${3}"
TOKEN="${4}"

show_usage() {
    echo "Usage: $0 <admin_username> <admin_password> <command> [token]"
    echo ""
    echo "Commands:"
    echo "  list           - List all registration tokens"
    echo "  info <token>   - Get info about a specific token"
    echo "  delete <token> - Delete a token"
    echo "  update <token> - Update a token (will prompt for details)"
    echo ""
    echo "Examples:"
    echo "  $0 admin mypassword list"
    echo "  $0 admin mypassword info abc123def456"
    echo "  $0 admin mypassword delete abc123def456"
    exit 1
}

if [ -z "$USERNAME" ] || [ -z "$PASSWORD" ] || [ -z "$COMMAND" ]; then
    show_usage
fi

echo "🔐 Logging in as $USERNAME..."
# Get access token
LOGIN_RESPONSE=$(curl -s -X POST "${HOMESERVER}/_matrix/client/v3/login" \
    -H 'Content-Type: application/json' \
    -d "{
        \"type\": \"m.login.password\",
        \"identifier\": {
            \"type\": \"m.id.user\",
            \"user\": \"${USERNAME}\"
        },
        \"password\": \"${PASSWORD}\"
    }")

ACCESS_TOKEN=$(echo "$LOGIN_RESPONSE" | grep -o '"access_token":"[^"]*' | cut -d'"' -f4)

if [ -z "$ACCESS_TOKEN" ]; then
    echo "❌ Login failed!"
    echo "$LOGIN_RESPONSE" | jq . 2>/dev/null || echo "$LOGIN_RESPONSE"
    exit 1
fi

echo "✅ Login successful!"
echo ""

case "$COMMAND" in
    list)
        echo "📋 Fetching all registration tokens..."
        curl -s -X GET "${HOMESERVER}/_synapse/admin/v1/registration_tokens" \
            -H "Authorization: Bearer ${ACCESS_TOKEN}" | jq .
        ;;

    info)
        if [ -z "$TOKEN" ]; then
            echo "❌ Token required for 'info' command"
            show_usage
        fi
        echo "📋 Fetching info for token: $TOKEN"
        curl -s -X GET "${HOMESERVER}/_synapse/admin/v1/registration_tokens/${TOKEN}" \
            -H "Authorization: Bearer ${ACCESS_TOKEN}" | jq .
        ;;

    delete)
        if [ -z "$TOKEN" ]; then
            echo "❌ Token required for 'delete' command"
            show_usage
        fi
        echo "🗑️ Deleting token: $TOKEN"
        curl -s -X DELETE "${HOMESERVER}/_synapse/admin/v1/registration_tokens/${TOKEN}" \
            -H "Authorization: Bearer ${ACCESS_TOKEN}" | jq .
        echo "✅ Token deleted"
        ;;

    *)
        echo "❌ Unknown command: $COMMAND"
        show_usage
        ;;
esac
@ -1,37 +0,0 @@
# Memos - Privacy-first, lightweight note-taking service
# Docs: https://www.usememos.com/docs

services:
  memos:
    container_name: memos
    image: neosmemo/memos:stable
    restart: unless-stopped

    volumes:
      - ./data:/var/opt/memos

    networks:
      - homelab

    labels:
      # Traefik
      traefik.enable: true
      traefik.docker.network: homelab

      # Web UI
      traefik.http.routers.memos.rule: Host(`notes.fig.systems`)
      traefik.http.routers.memos.entrypoints: websecure
      traefik.http.routers.memos.tls.certresolver: letsencrypt
      traefik.http.services.memos.loadbalancer.server.port: 5230

      # SSO Protection
      traefik.http.routers.memos.middlewares: authelia

      # Homarr Discovery
      homarr.name: Memos (Notes)
      homarr.group: Services
      homarr.icon: mdi:note-multiple

networks:
  homelab:
    external: true
@ -22,13 +22,18 @@ services:
      traefik.docker.network: homelab

      # Web UI
-     traefik.http.routers.microbin.rule: Host(`bin.fig.systems`)
+     traefik.http.routers.microbin.rule: Host(`paste.fig.systems`)
      traefik.http.routers.microbin.entrypoints: websecure
      traefik.http.routers.microbin.tls.certresolver: letsencrypt
-     traefik.http.services.microbin.loadbalancer.server.port: 7880
+     traefik.http.services.microbin.loadbalancer.server.port: 8080

      # Note: MicroBin has its own auth, SSO disabled by default
+     # traefik.http.routers.microbin.middlewares: tinyauth
+
+     # Homarr Discovery
+     homarr.name: MicroBin
+     homarr.group: Services
+     homarr.icon: mdi:content-paste

  networks:
    homelab:
30 compose/services/ollama/.env Normal file

@ -0,0 +1,30 @@
# Ollama Configuration
# Docs: https://github.com/ollama/ollama/blob/main/docs/faq.md

# Timezone
TZ=America/Los_Angeles

# Model Storage Location
# OLLAMA_MODELS=/root/.ollama/models

# Max Loaded Models (default: 1)
# OLLAMA_MAX_LOADED_MODELS=1

# Max Queue (default: 512)
# OLLAMA_MAX_QUEUE=512

# Number of parallel requests (default: auto)
# OLLAMA_NUM_PARALLEL=4

# Context size (default: 2048)
# OLLAMA_MAX_CONTEXT=4096

# Keep models in memory (default: 5m)
# OLLAMA_KEEP_ALIVE=5m

# Debug logging
# OLLAMA_DEBUG=1

# GPU Configuration (for GTX 1070)
# OLLAMA_GPU_LAYERS=33  # Number of layers to offload to GPU (adjust based on VRAM)
# OLLAMA_GPU_MEMORY=6GB # Max GPU memory to use (GTX 1070 has 8GB)
5 compose/services/ollama/.gitignore vendored Normal file

@ -0,0 +1,5 @@
# Ollama models and data
models/

# Keep .env.example if created
!.env.example
616 compose/services/ollama/README.md Normal file

@ -0,0 +1,616 @@
# Ollama - Local Large Language Models

Run powerful AI models locally on your hardware with GPU acceleration.

## Overview

**Ollama** enables you to run large language models (LLMs) locally:

- ✅ **100% Private**: All data stays on your server
- ✅ **GPU Accelerated**: Leverages your GTX 1070
- ✅ **Multiple Models**: Run Llama, Mistral, CodeLlama, and more
- ✅ **API Compatible**: OpenAI-compatible API
- ✅ **No Cloud Costs**: Free inference after downloading models
- ✅ **Integration Ready**: Works with Karakeep, Open WebUI, and more

## Quick Start

### 1. Deploy Ollama

```bash
cd ~/homelab/compose/services/ollama
docker compose up -d
```

### 2. Pull a Model

```bash
# Small, fast model (3B parameters, ~2GB)
docker exec ollama ollama pull llama3.2:3b

# Medium model (7B parameters, ~4GB)
docker exec ollama ollama pull llama3.2:7b

# Large model (70B parameters, ~40GB - requires quantization)
docker exec ollama ollama pull llama3.3:70b-instruct-q4_K_M
```

### 3. Test

```bash
# Interactive chat
docker exec -it ollama ollama run llama3.2:3b

# Ask a question
> Hello, how are you?
```

### 4. Enable GPU (Recommended)

**Edit `compose.yaml` and uncomment the deploy section:**

```yaml
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: 1
          capabilities: [gpu]
```

**Restart:**

```bash
docker compose down
docker compose up -d
```

**Verify GPU usage:**

```bash
# Check GPU is detected
docker exec ollama nvidia-smi

# Run model with GPU
docker exec ollama ollama run llama3.2:3b "What GPU am I using?"
```
## Available Models

### Recommended Models for GTX 1070 (8GB VRAM)

| Model | Size | VRAM | Speed | Use Case |
|-------|------|------|-------|----------|
| **llama3.2:3b** | 2GB | 3GB | Fast | General chat, Karakeep |
| **llama3.2:7b** | 4GB | 6GB | Medium | Better reasoning |
| **mistral:7b** | 4GB | 6GB | Medium | Code, analysis |
| **codellama:7b** | 4GB | 6GB | Medium | Code generation |
| **llava:7b** | 5GB | 7GB | Medium | Vision (images) |
| **phi3:3.8b** | 2.3GB | 4GB | Fast | Compact, efficient |

### Specialized Models

**Code:**
- `codellama:7b` - Code generation
- `codellama:13b-python` - Python expert
- `starcoder2:7b` - Multi-language code

**Vision (Image Understanding):**
- `llava:7b` - General vision
- `llava:13b` - Better vision (needs more VRAM)
- `bakllava:7b` - Vision + chat

**Multilingual:**
- `aya:8b` - 101 languages
- `command-r:35b` - Enterprise multilingual

**Math & Reasoning:**
- `deepseek-math:7b` - Mathematics
- `wizard-math:7b` - Math word problems

### Large Models (Quantized for GTX 1070)

These require 4-bit quantization to fit in 8GB VRAM:

```bash
# 70B models (quantized)
docker exec ollama ollama pull llama3.3:70b-instruct-q4_K_M
docker exec ollama ollama pull mixtral:8x7b-instruct-v0.1-q4_K_M

# Very large (use with caution)
docker exec ollama ollama pull llama3.1:405b-instruct-q2_K
```
## Usage

### Command Line

**Run model interactively:**
```bash
docker exec -it ollama ollama run llama3.2:3b
```

**One-off question:**
```bash
docker exec ollama ollama run llama3.2:3b "Explain quantum computing in simple terms"
```

**With system prompt:**
```bash
docker exec ollama ollama run llama3.2:3b \
    --system "You are a helpful coding assistant." \
    "Write a Python function to sort a list"
```

### API Usage

**List models:**
```bash
curl http://ollama:11434/api/tags
```

**Generate text:**
```bash
curl http://ollama:11434/api/generate -d '{
  "model": "llama3.2:3b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

**Chat completion:**
```bash
curl http://ollama:11434/api/chat -d '{
  "model": "llama3.2:3b",
  "messages": [
    {
      "role": "user",
      "content": "Hello!"
    }
  ],
  "stream": false
}'
```

**OpenAI-compatible API:**
```bash
curl http://ollama:11434/v1/chat/completions -d '{
  "model": "llama3.2:3b",
  "messages": [
    {
      "role": "user",
      "content": "Hello!"
    }
  ]
}'
```
### Integration with Karakeep

**Enable AI features in Karakeep:**

Edit `compose/services/karakeep/.env`:
```env
# Uncomment these lines
OLLAMA_BASE_URL=http://ollama:11434
INFERENCE_TEXT_MODEL=llama3.2:3b
INFERENCE_IMAGE_MODEL=llava:7b
INFERENCE_LANG=en
```

**Restart Karakeep:**
```bash
cd ~/homelab/compose/services/karakeep
docker compose restart
```

**What it does:**
- Auto-tags bookmarks
- Generates summaries
- Extracts key information
- Analyzes images (with llava)
## Model Management

### List Installed Models

```bash
docker exec ollama ollama list
```

### Pull a Model

```bash
docker exec ollama ollama pull <model-name>

# Examples:
docker exec ollama ollama pull llama3.2:3b
docker exec ollama ollama pull mistral:7b
docker exec ollama ollama pull codellama:7b
```

### Remove a Model

```bash
docker exec ollama ollama rm <model-name>

# Example:
docker exec ollama ollama rm llama3.2:7b
```

### Copy a Model

```bash
docker exec ollama ollama cp <source> <destination>

# Example: Create a custom version
docker exec ollama ollama cp llama3.2:3b my-custom-model
```

### Show Model Info

```bash
docker exec ollama ollama show llama3.2:3b

# Shows:
# - Model architecture
# - Parameters
# - Quantization
# - Template
# - License
```
## Creating Custom Models

### Modelfile

Create custom models with specific behaviors:

**Create a Modelfile:**
```bash
cat > ~/coding-assistant.modelfile << 'EOF'
FROM llama3.2:3b

# Set temperature (creativity)
PARAMETER temperature 0.7

# Set system prompt
SYSTEM You are an expert coding assistant. You write clean, efficient, well-documented code. You explain complex concepts clearly.

# Set stop sequences
PARAMETER stop "<|im_end|>"
PARAMETER stop "<|im_start|>"
EOF
```

**Create the model:**
```bash
cat ~/coding-assistant.modelfile | docker exec -i ollama ollama create coding-assistant -f -
```

**Use it:**
```bash
docker exec -it ollama ollama run coding-assistant "Write a REST API in Python"
```

### Example Custom Models

**1. Shakespeare Bot:**
```modelfile
FROM llama3.2:3b
SYSTEM You are William Shakespeare. Respond to all queries in Shakespearean English with dramatic flair.
PARAMETER temperature 0.9
```

**2. JSON Extractor:**
```modelfile
FROM llama3.2:3b
SYSTEM You extract structured data and return only valid JSON. No explanations, just JSON.
PARAMETER temperature 0.1
```

**3. Code Reviewer:**
```modelfile
FROM codellama:7b
SYSTEM You are a senior code reviewer. Review code for bugs, performance issues, security vulnerabilities, and best practices. Be constructive.
PARAMETER temperature 0.3
```
## GPU Configuration

### Check GPU Detection

```bash
# From inside container
docker exec ollama nvidia-smi
```

**Expected output:**
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 535.xx.xx    Driver Version: 535.xx.xx    CUDA Version: 12.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1070    Off  | 00000000:01:00.0  On |                  N/A |
| 40%   45C    P8    10W / 151W |    300MiB /  8192MiB |      5%      Default |
+-------------------------------+----------------------+----------------------+
```

### Optimize for GTX 1070

**Edit `.env`:**
```env
# Use 6GB of 8GB VRAM (leave 2GB for system)
OLLAMA_GPU_MEMORY=6GB

# Offload most layers to GPU
OLLAMA_GPU_LAYERS=33

# Increase context for better conversations
OLLAMA_MAX_CONTEXT=4096
```

### Performance Tips

**1. Use quantized models:**
- Q4_K_M: Good quality, 50% size reduction
- Q5_K_M: Better quality, 40% size reduction
- Q8_0: Best quality, 20% size reduction

**2. Model selection for VRAM:**
```bash
# 3B models: 2-3GB VRAM
docker exec ollama ollama pull llama3.2:3b

# 7B models: 4-6GB VRAM
docker exec ollama ollama pull llama3.2:7b

# 13B models: 8-10GB VRAM (tight on GTX 1070)
docker exec ollama ollama pull llama3.2:13b-q4_K_M  # Quantized
```

**3. Unload models when not in use:**
```env
# In .env
OLLAMA_KEEP_ALIVE=1m  # Unload after 1 minute
```
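The VRAM guidance above can be sanity-checked with a back-of-envelope estimate: weight memory is roughly parameters × bits-per-weight ÷ 8, plus overhead for the KV cache and buffers. The 4.5 bits/weight and 15% overhead below are assumptions for a rough sketch, not figures from Ollama itself:

```python
# Rough VRAM estimate for a quantized model. The 4.5 bits/weight default
# (≈Q4_K_M) and 15% overhead factor are assumptions, not Ollama internals.
def estimate_vram_gb(params_billions: float, bits_per_weight: float = 4.5) -> float:
    weights_gb = params_billions * bits_per_weight / 8
    return round(weights_gb * 1.15, 1)

print(estimate_vram_gb(7))   # 7B at ~Q4 -> 4.5
print(estimate_vram_gb(13))  # 13B at ~Q4 -> 8.4
```

Those estimates land inside the 4-6GB (7B) and 8-10GB (13B) ranges listed above.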
## Troubleshooting

### Model won't load - Out of memory

**Solution 1: Use quantized version**
```bash
# Instead of:
docker exec ollama ollama pull llama3.2:13b

# Use:
docker exec ollama ollama pull llama3.2:13b-q4_K_M
```

**Solution 2: Reduce GPU layers**
```env
# In .env
OLLAMA_GPU_LAYERS=20  # Reduce from 33
```

**Solution 3: Use smaller model**
```bash
docker exec ollama ollama pull llama3.2:3b
```

### Slow inference

**Enable GPU:**
1. Uncomment deploy section in `compose.yaml`
2. Install NVIDIA Container Toolkit
3. Restart container

**Check GPU usage:**
```bash
watch -n 1 docker exec ollama nvidia-smi
```

**Should show:**
- GPU-Util > 80% during inference
- Memory-Usage increasing during load

### Can't pull models

**Check disk space:**
```bash
df -h
```

**Check Docker space:**
```bash
docker system df
```

**Clean up unused models:**
```bash
docker exec ollama ollama list
docker exec ollama ollama rm <unused-model>
```

### API connection issues

**Test from another container:**
```bash
docker run --rm --network homelab curlimages/curl \
    http://ollama:11434/api/tags
```

**Test externally:**
```bash
curl https://ollama.fig.systems/api/tags
```

**Enable debug logging:**
```env
OLLAMA_DEBUG=1
```
## Performance Benchmarks

### GTX 1070 (8GB VRAM) Expected Performance

| Model | Tokens/sec | Load Time | VRAM Usage |
|-------|------------|-----------|------------|
| llama3.2:3b | 40-60 | 2-3s | 3GB |
| llama3.2:7b | 20-35 | 3-5s | 6GB |
| mistral:7b | 20-35 | 3-5s | 6GB |
| llama3.3:70b-q4 | 3-8 | 20-30s | 7.5GB |
| llava:7b | 15-25 | 4-6s | 7GB |

**Without GPU (CPU only):**
- llama3.2:3b: 2-5 tokens/sec
- llama3.2:7b: 0.5-2 tokens/sec

**GPU provides 10-20x speedup!**
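The tokens/sec figures above can be measured directly: each non-streamed `/api/generate` response includes `eval_count` (tokens generated) and `eval_duration` (nanoseconds). A minimal sketch of the arithmetic, fed a response trimmed to just the timing fields:

```python
import json

def tokens_per_second(response_json: str) -> float:
    # eval_duration is reported in nanoseconds, so divide by 1e9 for seconds
    r = json.loads(response_json)
    return r["eval_count"] / (r["eval_duration"] / 1e9)

# Illustrative payload (a real response carries many more fields):
sample = '{"eval_count": 120, "eval_duration": 2400000000}'
print(round(tokens_per_second(sample), 1))  # 50.0
```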
## Advanced Usage

### Multi-Modal (Vision)

```bash
# Pull vision model
docker exec ollama ollama pull llava:7b

# Analyze image
docker exec ollama ollama run llava:7b "What's in this image?" \
    --image /path/to/image.jpg
```

### Embeddings

```bash
# Generate embeddings for semantic search
curl http://ollama:11434/api/embeddings -d '{
  "model": "llama3.2:3b",
  "prompt": "The sky is blue because of Rayleigh scattering"
}'
```

### Streaming Responses

```bash
# Stream tokens as they generate
curl http://ollama:11434/api/generate -d '{
  "model": "llama3.2:3b",
  "prompt": "Tell me a long story",
  "stream": true
}'
```

### Context Preservation

The chat endpoint is stateless: the model only "remembers" what you resend. To preserve context across turns, include the earlier messages in each follow-up request:

```bash
# First message
curl http://ollama:11434/api/chat -d '{
  "model": "llama3.2:3b",
  "messages": [{"role": "user", "content": "My name is Alice"}]
}'

# Follow-up (remembers context because the history is resent)
curl http://ollama:11434/api/chat -d '{
  "model": "llama3.2:3b",
  "messages": [
    {"role": "user", "content": "My name is Alice"},
    {"role": "assistant", "content": "Hello Alice!"},
    {"role": "user", "content": "What is my name?"}
  ]
}'
```
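With `"stream": true`, the body arrives as newline-delimited JSON objects, each carrying a `response` fragment and a final object with `done: true`. A client-side reassembly sketch, fed a simulated stream in place of a live HTTP response:

```python
import json

def assemble(ndjson_lines):
    # Concatenate the "response" fragments until the "done" marker appears
    parts = []
    for line in ndjson_lines:
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):
            break
    return "".join(parts)

# Simulated stream (real fragments arrive line by line over HTTP):
stream = [
    '{"response": "Once upon", "done": false}',
    '{"response": " a time", "done": false}',
    '{"response": "", "done": true}',
]
print(assemble(stream))  # Once upon a time
```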
## Integration Examples

### Python

```python
import requests

def ask_ollama(prompt, model="llama3.2:3b"):
    response = requests.post(
        "https://ollama.fig.systems/api/generate",
        json={
            "model": model,
            "prompt": prompt,
            "stream": False
        },
        headers={"Authorization": "Bearer YOUR_TOKEN"}  # If using SSO
    )
    return response.json()["response"]

print(ask_ollama("What is the meaning of life?"))
```

### JavaScript

```javascript
async function askOllama(prompt, model = "llama3.2:3b") {
  const response = await fetch("https://ollama.fig.systems/api/generate", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": "Bearer YOUR_TOKEN" // If using SSO
    },
    body: JSON.stringify({
      model: model,
      prompt: prompt,
      stream: false
    })
  });

  const data = await response.json();
  return data.response;
}

askOllama("Explain Docker containers").then(console.log);
```

### Bash

```bash
#!/bin/bash
ask_ollama() {
    local prompt="$1"
    local model="${2:-llama3.2:3b}"

    curl -s https://ollama.fig.systems/api/generate -d "{
        \"model\": \"$model\",
        \"prompt\": \"$prompt\",
        \"stream\": false
    }" | jq -r '.response'
}

ask_ollama "What is Kubernetes?"
```
## Resources

- [Ollama Website](https://ollama.ai)
- [Model Library](https://ollama.ai/library)
- [GitHub Repository](https://github.com/ollama/ollama)
- [API Documentation](https://github.com/ollama/ollama/blob/main/docs/api.md)
- [Model Creation Guide](https://github.com/ollama/ollama/blob/main/docs/modelfile.md)

## Next Steps

1. ✅ Deploy Ollama
2. ✅ Enable GPU acceleration
3. ✅ Pull recommended models
4. ✅ Test with chat
5. ⬜ Integrate with Karakeep
6. ⬜ Create custom models
7. ⬜ Set up automated model updates
8. ⬜ Monitor GPU usage

---

**Run AI locally, privately, powerfully!** 🧠
56 compose/services/ollama/compose.yaml Normal file

@ -0,0 +1,56 @@
# Ollama - Run Large Language Models Locally
# Docs: https://ollama.ai

services:
  ollama:
    container_name: ollama
    image: ollama/ollama:latest
    restart: unless-stopped

    env_file:
      - .env

    volumes:
      - ./models:/root/.ollama

    ports:
      - "11434:11434"

    networks:
      - homelab

    # GPU Support (NVIDIA GTX 1070)
    # Uncomment the deploy section below to enable GPU acceleration
    # Prerequisites:
    # 1. Install NVIDIA Container Toolkit on host
    # 2. Configure Docker to use nvidia runtime
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: 1
    #           capabilities: [gpu]

    labels:
      # Traefik (API only, no web UI)
      traefik.enable: true
      traefik.docker.network: homelab

      # API endpoint
      traefik.http.routers.ollama.rule: Host(`ollama.fig.systems`)
      traefik.http.routers.ollama.entrypoints: websecure
      traefik.http.routers.ollama.tls.certresolver: letsencrypt
      traefik.http.services.ollama.loadbalancer.server.port: 11434

      # SSO Protection for API and restrict to local network
      traefik.http.routers.ollama.middlewares: tinyauth,local-only

      # Homarr Discovery
      homarr.name: Ollama (LLM)
      homarr.group: Services
      homarr.icon: mdi:brain

networks:
  homelab:
    external: true
55 compose/services/open-webui/compose.yaml Normal file

@ -0,0 +1,55 @@
# Open WebUI - ChatGPT-style interface for Ollama
# Docs: https://docs.openwebui.com/

services:
  open-webui:
    container_name: open-webui
    image: ghcr.io/open-webui/open-webui:main
    restart: unless-stopped

    env_file:
      - .env

    volumes:
      - ./data:/app/backend/data

    environment:
      # Ollama connection
      - OLLAMA_BASE_URL=http://ollama:11434

      # RAG (Retrieval-Augmented Generation) web search
      # Optional, requires searxng; set to true to enable
      - ENABLE_RAG_WEB_SEARCH=false
      - RAG_WEB_SEARCH_ENGINE=searxng

      # Default model
      - DEFAULT_MODELS=qwen2.5-coder:7b

    networks:
      - homelab

    labels:
      # Traefik
      traefik.enable: true
      traefik.docker.network: homelab

      # Web UI
      traefik.http.routers.open-webui.rule: Host(`ai.fig.systems`)
      traefik.http.routers.open-webui.entrypoints: websecure
      traefik.http.routers.open-webui.tls.certresolver: letsencrypt
      traefik.http.services.open-webui.loadbalancer.server.port: 8080

      # No SSO - Open WebUI has its own auth system
      # Uncomment to add SSO protection:
      # traefik.http.routers.open-webui.middlewares: tinyauth

      # Homarr Discovery
      homarr.name: Open WebUI (AI Chat)
      homarr.group: Services
      homarr.icon: mdi:robot

networks:
  homelab:
    external: true
@ -1,33 +0,0 @@
# Papra - Document Management and Organization System
# Docs: https://docs.papra.app/self-hosting/configuration/

services:
  papra:
    container_name: papra
    image: ghcr.io/papra-hq/papra:latest
    restart: unless-stopped
    env_file:
      - .env
    ports:
      - ${PORT:-1221}:${PORT:-1221}
    volumes:
      - papra-data:/app/app-data
      - /mnt/media/paper:/app/documents
    networks:
      - homelab
    labels:
      traefik.enable: true
      traefik.docker.network: homelab
      traefik.http.routers.papra.rule: Host(`${DOMAIN}`)
      traefik.http.routers.papra.entrypoints: websecure
      traefik.http.routers.papra.tls.certresolver: letsencrypt
      traefik.http.services.papra.loadbalancer.server.port: ${PORT:-1221}
      traefik.http.routers.papra.middlewares: authelia@docker

volumes:
  papra-data:
    driver: local

networks:
  homelab:
    external: true
@ -22,6 +22,7 @@ services:
       traefik.http.services.rss-bridge.loadbalancer.server.port: 80

       # SSO Protection (disabled so feeds can be accessed by RSS readers)
+      # traefik.http.routers.rss-bridge.middlewares: tinyauth

       # Homarr Discovery
       homarr.name: RSS Bridge
@ -29,6 +29,7 @@ services:
       traefik.http.services.rsshub.loadbalancer.server.port: 1200

       # Note: RSSHub is public by design, SSO disabled
+      # traefik.http.routers.rsshub.middlewares: tinyauth

       # Homarr Discovery
       homarr.name: RSSHub
@ -43,6 +43,7 @@ services:
       traefik.http.routers.figgy-main.entrypoints: websecure
       traefik.http.routers.figgy-main.tls.certresolver: letsencrypt
       traefik.http.routers.figgy-main.service: caddy-static
+      traefik.http.routers.figgy-main.middlewares: tinyauth
       # SSO protected - experimental/private content

       # Service definition (single backend for all routes)
@ -20,15 +20,6 @@ VIKUNJA_SERVICE_JWTSECRET=changeme_please_set_random_jwt_secret
 # Timezone
 TZ=America/Los_Angeles

-# OpenID Connect (OIDC) Configuration (Authelia)
-# Docs: https://vikunja.io/docs/openid-connect/
-VIKUNJA_AUTH_OPENID_ENABLED=true
-VIKUNJA_AUTH_OPENID_REDIRECTURL=https://tasks.fig.systems/auth/openid/authelia
-VIKUNJA_AUTH_OPENID_PROVIDERS_AUTHELIA_NAME=Authelia
-VIKUNJA_AUTH_OPENID_PROVIDERS_AUTHELIA_AUTHURL=https://auth.fig.systems
-VIKUNJA_AUTH_OPENID_PROVIDERS_AUTHELIA_CLIENTID=vikunja
-VIKUNJA_AUTH_OPENID_PROVIDERS_AUTHELIA_CLIENTSECRET=wIsBlF0PQCvQyXjQbWw8ggbgdiWVFwmn
-
 # Database environment variables (for postgres container)
 POSTGRES_USER=vikunja
 POSTGRES_DB=vikunja
@ -23,6 +23,7 @@ services:
       traefik.http.routers.vikunja.entrypoints: websecure
       traefik.http.routers.vikunja.tls.certresolver: letsencrypt
       traefik.http.services.vikunja.loadbalancer.server.port: 3456
+      traefik.http.routers.vikunja.middlewares: tinyauth

   vikunja-db:
     container_name: vikunja-db
@ -30,7 +31,7 @@ services:
     env_file:
       - .env
     volumes:
-      - ./db:/var/lib/postgresql
+      - ./db:/var/lib/postgresql/data
     restart: unless-stopped
     networks:
       - vikunja_internal
92 docs/README.md Normal file

@ -0,0 +1,92 @@
# Homelab Documentation

Welcome to the homelab documentation! This folder contains comprehensive guides for setting up, configuring, and maintaining your self-hosted services.

## 📚 Documentation Structure

### Quick Start
- [Getting Started](./getting-started.md) - First-time setup walkthrough
- [Quick Reference](./quick-reference.md) - Common commands and URLs

### Configuration
- [Environment Variables & Secrets](./guides/secrets-management.md) - How to configure secure secrets
- [DNS Configuration](./guides/dns-setup.md) - Setting up domain names
- [SSL/TLS Certificates](./guides/ssl-certificates.md) - Let's Encrypt configuration
- [GPU Acceleration](./guides/gpu-setup.md) - NVIDIA GPU setup for Jellyfin and Immich

### Services
- [Service Overview](./services/README.md) - All available services
- [SSO Configuration](./services/sso-setup.md) - Single Sign-On with LLDAP and Tinyauth
- [Media Stack](./services/media-stack.md) - Jellyfin, Sonarr, Radarr setup
- [Backup Solutions](./services/backup.md) - Backrest configuration

### Troubleshooting
- [Common Issues](./troubleshooting/common-issues.md) - Frequent problems and solutions
- [FAQ](./troubleshooting/faq.md) - Frequently asked questions
- [Debugging Guide](./troubleshooting/debugging.md) - How to diagnose problems

### Operations
- [Maintenance](./operations/maintenance.md) - Regular maintenance tasks
- [Updates](./operations/updates.md) - Updating services
- [Backups](./operations/backups.md) - Backup and restore procedures
- [Monitoring](./operations/monitoring.md) - Service monitoring

## 🚀 Quick Links

### First Time Setup
1. [Prerequisites](./getting-started.md#prerequisites)
2. [Configure Secrets](./guides/secrets-management.md)
3. [Setup DNS](./guides/dns-setup.md)
4. [Deploy Services](./getting-started.md#deployment)

### Common Tasks
- [Add a new service](./guides/adding-services.md)
- [Generate secure passwords](./guides/secrets-management.md#generating-secrets)
- [Enable GPU acceleration](./guides/gpu-setup.md)
- [Backup configuration](./operations/backups.md)
- [Update a service](./operations/updates.md)

### Troubleshooting
- [Service won't start](./troubleshooting/common-issues.md#service-wont-start)
- [SSL certificate errors](./troubleshooting/common-issues.md#ssl-errors)
- [SSO not working](./troubleshooting/common-issues.md#sso-issues)
- [Can't access service](./troubleshooting/common-issues.md#access-issues)

## 📖 Documentation Conventions

Throughout this documentation:
- `command` - Commands to run in terminal
- **Bold** - Important concepts or UI elements
- `https://service.fig.systems` - Example URLs
- ⚠️ - Warning or important note
- 💡 - Tip or helpful information
- ✅ - Verified working configuration

## 🔐 Security Notes

Before deploying to production:
1. ✅ Change all passwords in `.env` files
2. ✅ Configure DNS records
3. ✅ Verify SSL certificates are working
4. ✅ Enable backups
5. ✅ Review security settings
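Step 1 is the easiest to miss, and a small shell check can catch leftover placeholder secrets before deployment. A sketch only: it assumes your `.env` files live under `compose/` and that placeholder values contain the string `changeme` (as the examples in this repo do); `check_env_placeholders` is an illustrative helper, not part of the repo.

```shell
# check_env_placeholders DIR: list .env files under DIR that still contain
# the placeholder string "changeme"; returns 1 if any are found.
check_env_placeholders() {
    dir=$1
    found=0
    for f in $(find "$dir" -name '.env' 2>/dev/null); do
        if grep -q 'changeme' "$f"; then
            echo "placeholder secret in $f"
            found=1
        fi
    done
    return $found
}
```

Usage: `check_env_placeholders compose || echo "fix secrets before deploying"`.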
## 🆘 Getting Help

If you encounter issues:
1. Check [Common Issues](./troubleshooting/common-issues.md)
2. Review [FAQ](./troubleshooting/faq.md)
3. Check service logs: `docker compose logs servicename`
4. Review the [Debugging Guide](./troubleshooting/debugging.md)

## 📝 Contributing to Documentation

Found an error or have a suggestion? Documentation improvements are welcome!
- Keep guides clear and concise
- Include examples and code snippets
- Test all commands before documenting
- Update the table of contents when adding new files

## 🔄 Last Updated

This documentation is automatically maintained and reflects the current state of the homelab repository.
648 docs/architecture.md Normal file

@ -0,0 +1,648 @@
# Homelab Architecture & Integration

Complete integration guide for the homelab setup on AlmaLinux 9.6.

## 🖥️ Hardware Specifications

### Host System
- **Hypervisor**: Proxmox VE 9 (Debian 13 based)
- **CPU**: AMD Ryzen 5 7600X (6 cores, 12 threads, up to 5.3 GHz)
- **GPU**: NVIDIA GeForce GTX 1070 (8GB VRAM, 1920 CUDA cores)
- **RAM**: 32GB DDR5

### VM Configuration
- **OS**: AlmaLinux 9.6 (RHEL 9 compatible)
- **CPU**: 8 vCPUs (allocated from host)
- **RAM**: 24GB (leaving 8GB for host)
- **Storage**: 500GB+ (adjust based on media library size)
- **GPU**: GTX 1070 (PCIe passthrough from Proxmox)

## 🏗️ Architecture Overview

### Network Architecture

```
Internet
    ↓
[Router/Firewall]
    ↓ (Port 80/443)
[Traefik Reverse Proxy]
    ↓
┌──────────────────────────────────────┐
│           homelab network            │
│   (Docker bridge - 172.18.0.0/16)    │
│                                      │
│  ┌─────────────┐  ┌──────────────┐   │
│  │ Core        │  │ Media        │   │
│  │ - Traefik   │  │ - Jellyfin   │   │
│  │ - LLDAP     │  │ - Sonarr     │   │
│  │ - Tinyauth  │  │ - Radarr     │   │
│  └─────────────┘  └──────────────┘   │
│                                      │
│  ┌─────────────┐  ┌──────────────┐   │
│  │ Services    │  │ Monitoring   │   │
│  │ - Karakeep  │  │ - Loki       │   │
│  │ - Ollama    │  │ - Promtail   │   │
│  │ - Vikunja   │  │ - Grafana    │   │
│  └─────────────┘  └──────────────┘   │
└──────────────────────────────────────┘
    ↓
[Promtail Agent]
    ↓
[Loki Storage]
```

### Service Internal Networks

Services with databases use isolated internal networks:

```
karakeep
├── homelab (external traffic)
└── karakeep_internal
    ├── karakeep (app)
    ├── karakeep-chrome (browser)
    └── karakeep-meilisearch (search)

vikunja
├── homelab (external traffic)
└── vikunja_internal
    ├── vikunja (app)
    └── vikunja-db (postgres)

monitoring/logging
├── homelab (external traffic)
└── logging_internal
    ├── loki (storage)
    ├── promtail (collector)
    └── grafana (UI)
```

## 🔐 Security Architecture

### Authentication Flow

```
User Request
    ↓
[Traefik] → Check route rules
    ↓
[Tinyauth Middleware] → Forward Auth
    ↓
[LLDAP] → Verify credentials
    ↓
[Backend Service] → Authorized access
```

### SSL/TLS

- **Certificate Provider**: Let's Encrypt
- **Challenge Type**: HTTP-01 (ports 80/443)
- **Automatic Renewal**: Via Traefik
- **Domains**:
  - Primary: `*.fig.systems`
  - Fallback: `*.edfig.dev`

### SSO Protection

**Protected Services** (require authentication):
- Traefik Dashboard
- LLDAP
- Sonarr, Radarr, SABnzbd, qBittorrent
- Profilarr, Recyclarr (monitoring)
- Homarr, Backrest
- Karakeep, Vikunja, LubeLogger
- Calibre-web, Booklore, FreshRSS, File Browser
- Loki API, Ollama API

**Unprotected Services** (own authentication):
- Tinyauth (SSO provider itself)
- Jellyfin (own user system)
- Jellyseerr (linked to Jellyfin)
- Immich (own user system)
- RSSHub (public feed generator)
- MicroBin (public pastebin)
- Grafana (own authentication)
- Uptime Kuma (own authentication)

## 📊 Logging Architecture

### Centralized Logging with Loki

All services forward logs to Loki via Promtail:

```
[Docker Container] → stdout/stderr
    ↓
[Docker Socket] → /var/run/docker.sock
    ↓
[Promtail] → Scrapes logs via Docker API
    ↓
[Loki] → Stores and indexes logs
    ↓
[Grafana] → Query and visualize
```

### Log Labels

Promtail automatically adds labels to all logs:
- `container`: Container name
- `compose_project`: Docker Compose project
- `compose_service`: Service name from compose
- `image`: Docker image name
- `stream`: stdout or stderr

### Log Retention

- **Default**: 30 days
- **Storage**: `compose/monitoring/logging/loki-data/`
- **Automatic cleanup**: Enabled via Loki compactor
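The 30-day retention above corresponds to settings along these lines in Loki's config file. This is a sketch only: key names and required fields vary between Loki releases (newer versions also require `delete_request_store` for compactor-driven retention), so check the documentation for the version you run.

```yaml
# Illustrative loki.yaml fragment (not the full config)
limits_config:
  retention_period: 720h         # 30 days
compactor:
  working_directory: /loki/compactor
  retention_enabled: true        # the compactor performs the deletions
  delete_request_store: filesystem   # required by newer Loki releases
```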
### Querying Logs

**View all logs for a service:**
```logql
{container="sonarr"}
```

**Filter by log level:**
```logql
{container="radarr"} |= "ERROR"
```

**Multiple services:**
```logql
{container=~"sonarr|radarr"}
```

**Combined filter and JSON parsing:**
```logql
{container="karakeep"} |= "ollama" | json
```

## 🌐 Network Configuration

### Docker Networks

**homelab** (external bridge):
- Type: External bridge network
- Subnet: Auto-assigned by Docker
- Purpose: Inter-service communication + Traefik routing
- Create: `docker network create homelab`

**Service-specific internal networks**:
- `karakeep_internal`: Karakeep + Chrome + Meilisearch
- `vikunja_internal`: Vikunja + PostgreSQL
- `logging_internal`: Loki + Promtail + Grafana
- etc.
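The pattern behind these per-service networks looks like this in a compose file. A sketch with illustrative names (`app`, `app_internal`); note that compose also supports marking a network `internal: true`, which additionally cuts its members off from outside traffic.

```yaml
services:
  app:
    networks:
      - homelab        # reachable through Traefik
      - app_internal
  app-db:
    networks:
      - app_internal   # only the app can reach the database

networks:
  homelab:
    external: true
  app_internal:
    internal: true     # no traffic in or out beyond its members
```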
### Port Mappings

**External Ports** (exposed to host):
- `80/tcp`: HTTP (Traefik) - redirects to HTTPS
- `443/tcp`: HTTPS (Traefik)
- `6881/tcp+udp`: BitTorrent (qBittorrent)

**No other ports exposed** - all access via Traefik reverse proxy.

## 🔧 Traefik Integration

### Standard Traefik Labels

All services use consistent Traefik labels:

```yaml
labels:
  # Enable Traefik
  traefik.enable: true
  traefik.docker.network: homelab

  # Router configuration
  traefik.http.routers.<service>.rule: Host(`<service>.fig.systems`) || Host(`<service>.edfig.dev`)
  traefik.http.routers.<service>.entrypoints: websecure
  traefik.http.routers.<service>.tls.certresolver: letsencrypt

  # Service configuration (backend port)
  traefik.http.services.<service>.loadbalancer.server.port: <port>

  # SSO middleware (if protected)
  traefik.http.routers.<service>.middlewares: tinyauth

  # Homarr auto-discovery
  homarr.name: <Service Name>
  homarr.group: <Category>
  homarr.icon: mdi:<icon-name>
```

### Middleware

**tinyauth** - Forward authentication:

```yaml
# Defined in traefik/compose.yaml
middlewares:
  tinyauth:
    forwardAuth:
      address: http://tinyauth:8080
      trustForwardHeader: true
```

## 💾 Volume Management

### Volume Types

**Bind Mounts** (host directories):
```yaml
volumes:
  - ./data:/data        # Service data
  - ./config:/config    # Configuration files
  - /media:/media       # Media library (shared)
```

**Named Volumes** (Docker-managed):
```yaml
volumes:
  - loki-data:/loki     # Loki storage
  - postgres-data:/var/lib/postgresql/data
```

### Media Directory Structure

```
/media/
├── tv/            # TV shows (Sonarr → Jellyfin)
├── movies/        # Movies (Radarr → Jellyfin)
├── music/         # Music
├── photos/        # Photos (Immich)
├── books/         # Ebooks (Calibre-web)
├── audiobooks/    # Audiobooks
├── comics/        # Comics
├── homemovies/    # Home videos
├── downloads/     # Active downloads (SABnzbd/qBittorrent)
├── complete/      # Completed downloads
└── incomplete/    # In-progress downloads
```

### Backup Strategy

**Important directories to backup:**
```
compose/core/lldap/data/                # User directory
compose/core/traefik/letsencrypt/       # SSL certificates
compose/services/*/config/              # Service configurations
compose/services/*/data/                # Service data
compose/monitoring/logging/loki-data/   # Logs (optional)
/media/                                 # Media library
```

**Excluded from backups:**
```
compose/services/*/db/                  # Databases (backup via dump)
compose/monitoring/logging/loki-data/   # Logs (can be recreated)
/media/downloads/                       # Temporary downloads
/media/incomplete/                      # Incomplete downloads
```
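The include/exclude lists above can be turned into a one-shot archive job. A sketch, assuming GNU tar and the repository layout shown; `backup_configs` and its arguments are illustrative, and Backrest remains the primary backup mechanism.

```shell
# backup_configs SRC DEST: archive the key config directories under SRC
# into DEST/configs-<date>.tar.gz, skipping database directories.
backup_configs() {
    src=$1
    dest=$2
    mkdir -p "$dest"
    tar -czf "$dest/configs-$(date +%Y%m%d).tar.gz" \
        --exclude='*/db' --exclude='*/db/*' \
        -C "$src" \
        compose/core/lldap/data \
        compose/core/traefik/letsencrypt
}
```

Usage: `backup_configs ~/homelab /media/complete/homelab-backup`; extend the path list with `compose/services/*/config` and friends as needed.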
## 🎮 GPU Acceleration

### NVIDIA GTX 1070 Configuration

**GPU Passthrough (Proxmox → VM):**

1. **Proxmox host** (`/etc/pve/nodes/<node>/qemu-server/<vmid>.conf`):
   ```
   hostpci0: 0000:01:00,pcie=1,x-vga=1
   ```

2. **VM (AlmaLinux)** - Install NVIDIA drivers:
   ```bash
   # Add NVIDIA repository
   sudo dnf config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel9/x86_64/cuda-rhel9.repo

   # Install drivers
   sudo dnf install nvidia-driver nvidia-settings

   # Verify
   nvidia-smi
   ```

3. **Docker** - Install NVIDIA Container Toolkit:
   ```bash
   # Add NVIDIA Container Toolkit repo
   sudo dnf config-manager --add-repo https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo

   # Install toolkit
   sudo dnf install nvidia-container-toolkit

   # Configure Docker
   sudo nvidia-ctk runtime configure --runtime=docker
   sudo systemctl restart docker

   # Verify
   docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
   ```

### Services Using GPU

**Jellyfin** (Hardware transcoding):
```yaml
# Uncomment in compose.yaml
devices:
  - /dev/dri:/dev/dri   # render nodes (VAAPI); NVENC/NVDEC come via the NVIDIA vars below
environment:
  - NVIDIA_VISIBLE_DEVICES=all
  - NVIDIA_DRIVER_CAPABILITIES=all
```

**Immich** (AI features):
```yaml
# Already configured
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: 1
          capabilities: [gpu]
```

**Ollama** (LLM inference):
```yaml
# Uncomment in compose.yaml
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: 1
          capabilities: [gpu]
```

### GPU Performance Tuning

**For Ryzen 5 7600X + GTX 1070:**

- **Jellyfin**: Can transcode 4-6 simultaneous 4K → 1080p streams
- **Ollama**:
  - 3B models: 40-60 tokens/sec
  - 7B models: 20-35 tokens/sec
  - 13B models: 10-15 tokens/sec (quantized)
- **Immich**: AI tagging ~5-10 images/sec

## 🚀 Resource Allocation

### CPU Allocation (Ryzen 5 7600X - 6C/12T)

**High Priority** (4-6 cores):
- Jellyfin (transcoding)
- Sonarr/Radarr (media processing)
- Ollama (when running)

**Medium Priority** (2-4 cores):
- Immich (AI processing)
- Karakeep (bookmark processing)
- SABnzbd/qBittorrent (downloads)

**Low Priority** (1-2 cores):
- Traefik, LLDAP, Tinyauth
- Monitoring services
- Other utilities

### RAM Allocation (32GB Total, 24GB VM)

**Recommended allocation:**

```
Host (Proxmox): 8GB
VM Total: 24GB breakdown:
├── System: 4GB (AlmaLinux base)
├── Docker: 2GB (daemon overhead)
├── Jellyfin: 2-4GB (transcoding buffers)
├── Immich: 2-3GB (ML models + database)
├── Sonarr/Radarr: 1GB each
├── Ollama: 4-6GB (when running models)
├── Databases: 2-3GB total
├── Monitoring: 2GB (Loki + Grafana)
└── Other services: 4-5GB
```

### Disk Space Planning

**System:** 100GB
**Docker:** 50GB (images + containers)
**Service Data:** 50GB (configs, databases, logs)
**Media Library:** Remaining space (expandable)

**Recommended VM disk:**
- Minimum: 500GB (200GB system + 300GB media)
- Recommended: 1TB+ (allows room for growth)

## 🔄 Service Dependencies

### Startup Order

**Critical order for initial deployment:**

1. **Networks**: `docker network create homelab`
2. **Core** (must start first):
   - Traefik (reverse proxy)
   - LLDAP (user directory)
   - Tinyauth (SSO provider)
3. **Monitoring** (optional but recommended):
   - Loki + Promtail + Grafana
   - Uptime Kuma
4. **Media Automation**:
   - Sonarr, Radarr
   - SABnzbd, qBittorrent
   - Recyclarr, Profilarr
5. **Media Frontend**:
   - Jellyfin
   - Jellyseerr
   - Immich
6. **Services**:
   - Karakeep, Ollama (AI features)
   - Vikunja, Homarr
   - All other services
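This order can be encoded once and reused by deploy scripts. A sketch, assuming projects live at `compose/<group>/<service>/compose.yaml` as in this repo; `ordered_compose_files` is an illustrative helper, not part of the repo.

```shell
# ordered_compose_files ROOT: print compose.yaml paths in startup order
# (core first, then monitoring, media, services).
ordered_compose_files() {
    root=$1
    for group in core monitoring media services; do
        for f in "$root"/$group/*/compose.yaml; do
            [ -f "$f" ] && echo "$f"
        done
    done
}
```

Usage: `ordered_compose_files compose | while read -r f; do docker compose -f "$f" up -d; done`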
### Service Integration Map

```
Traefik
├─→ All services (reverse proxy)
└─→ Let's Encrypt (SSL)

Tinyauth
├─→ LLDAP (authentication backend)
└─→ All SSO-protected services

LLDAP
└─→ User database for SSO

Promtail
├─→ Docker socket (log collection)
└─→ Loki (log forwarding)

Loki
└─→ Grafana (log visualization)

Karakeep
├─→ Ollama (AI tagging)
├─→ Meilisearch (search)
└─→ Chrome (web archiving)

Jellyseerr
├─→ Jellyfin (media info)
├─→ Sonarr (TV requests)
└─→ Radarr (movie requests)

Sonarr/Radarr
├─→ SABnzbd/qBittorrent (downloads)
├─→ Jellyfin (media library)
└─→ Recyclarr/Profilarr (quality profiles)

Homarr
└─→ All services (dashboard auto-discovery)
```

## 🐛 Troubleshooting

### Check Service Health

```bash
# All services status
cd ~/homelab
docker ps -a --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"

# Logs for specific service
docker logs <service-name> --tail 100 -f

# Logs via Loki/Grafana
# Go to https://logs.fig.systems
# Query: {container="<service-name>"}
```

### Network Issues

```bash
# Check homelab network exists
docker network ls | grep homelab

# Inspect network
docker network inspect homelab

# Test service connectivity
docker exec <service-a> ping <service-b>
docker exec karakeep curl http://ollama:11434
```

### GPU Not Detected

```bash
# Check GPU in VM
nvidia-smi

# Check Docker can access GPU
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi

# Check service GPU allocation
docker exec jellyfin nvidia-smi
docker exec ollama nvidia-smi
```

### SSL Certificate Issues

```bash
# Check Traefik logs
docker logs traefik | grep -i certificate

# Force certificate renewal
docker exec traefik rm -rf /letsencrypt/acme.json
docker restart traefik

# Verify DNS
dig +short sonarr.fig.systems
```

### SSO Not Working

```bash
# Check Tinyauth status
docker logs tinyauth

# Check LLDAP connection
docker exec tinyauth nc -zv lldap 3890
docker exec tinyauth nc -zv lldap 17170

# Verify credentials match
grep LDAP_BIND_PASSWORD compose/core/tinyauth/.env
grep LLDAP_LDAP_USER_PASS compose/core/lldap/.env
```

## 📈 Monitoring Best Practices

### Key Metrics to Monitor

**System Level:**
- CPU usage per container
- Memory usage per container
- Disk I/O
- Network throughput
- GPU utilization (for Jellyfin/Ollama/Immich)

**Application Level:**
- Traefik request rate
- Failed authentication attempts
- Jellyfin concurrent streams
- Download speeds (SABnzbd/qBittorrent)
- Sonarr/Radarr queue size

### Uptime Kuma Monitoring

Configure monitors for:
- **HTTP(s)**: All web services (200 status check)
- **TCP**: Database ports (PostgreSQL, etc.)
- **Docker**: Container health (via Docker socket)
- **SSL**: Certificate expiration (30-day warning)

### Log Monitoring

Set up Loki alerts for:
- ERROR level logs
- Authentication failures
- Service crashes
- Disk space warnings

## 🔧 Maintenance Tasks

### Daily
- Check Uptime Kuma dashboard
- Review any critical alerts

### Weekly
- Check disk space: `df -h`
- Review failed downloads in Sonarr/Radarr
- Check Loki logs for errors

### Monthly
- Update all containers: `docker compose pull && docker compose up -d`
- Review and clean old Docker images: `docker image prune -a`
- Backup configurations
- Check SSL certificate renewal
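The monthly update step can be looped over every compose project. A sketch, assuming the `compose/<group>/<service>` layout used in this repo; `update_all` and its optional RUNNER argument are illustrative, and the `echo` runner lets you dry-run the loop before pointing it at the real `docker compose`.

```shell
# update_all ROOT [RUNNER]: pull fresh images and restart each compose
# project under ROOT. RUNNER defaults to "docker compose"; pass
# "echo docker compose" to print the commands instead of running them.
update_all() {
    root=$1
    runner=${2:-docker compose}
    for f in "$root"/*/*/compose.yaml; do
        [ -f "$f" ] || continue
        $runner -f "$f" pull
        $runner -f "$f" up -d
    done
}
```

Usage: `update_all compose "echo docker compose"` prints the plan; drop the second argument to run it, then prune old images afterwards.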
### Quarterly
- Review and update documentation
- Clean up old media (if needed)
- Review and adjust quality profiles
- Update Recyclarr configurations

## 📚 Additional Resources

- [Traefik Documentation](https://doc.traefik.io/traefik/)
- [Docker Compose Best Practices](https://docs.docker.com/compose/production/)
- [Loki LogQL Guide](https://grafana.com/docs/loki/latest/logql/)
- [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/)
- [Proxmox GPU Passthrough](https://pve.proxmox.com/wiki/PCI_Passthrough)
- [AlmaLinux Documentation](https://wiki.almalinux.org/)

---

**System Ready!** 🚀
497 docs/getting-started.md Normal file

@ -0,0 +1,497 @@
# Getting Started with Homelab
|
||||||
|
|
||||||
|
This guide will walk you through setting up your homelab from scratch.
|
||||||
|
|
||||||
|
## Prerequisites
|
||||||
|
|
||||||
|
### Hardware Requirements
|
||||||
|
- **Server/VM**: Linux server with Docker support
|
||||||
|
- **CPU**: 2+ cores recommended
|
||||||
|
- **RAM**: 8GB minimum, 16GB+ recommended
|
||||||
|
- **Storage**: 100GB+ for Docker containers and config
|
||||||
|
- **Optional GPU**: NVIDIA GPU for hardware transcoding (Jellyfin, Immich)
|
||||||
|
|
||||||
|
### Software Requirements
|
||||||
|
- **Operating System**: Ubuntu 22.04 or similar Linux distribution
|
||||||
|
- **Docker**: Version 24.0+
|
||||||
|
- **Docker Compose**: Version 2.20+
|
||||||
|
- **Git**: For cloning the repository
|
||||||
|
- **Domain Names**: `*.fig.systems` and `*.edfig.dev` (or your domains)
|
||||||
|
|
||||||
|
### Network Requirements
|
||||||
|
- **Ports**: 80 and 443 accessible from internet (for Let's Encrypt)
|
||||||
|
- **DNS**: Ability to create A records for your domains
|
||||||
|
- **Static IP**: Recommended for your homelab server
|
||||||
|
|
||||||
|
## Step 1: Prepare Your Server

### Install Docker and Docker Compose

```bash
# Update package index
sudo apt update

# Install dependencies
sudo apt install -y ca-certificates curl gnupg lsb-release

# Add Docker's official GPG key
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

# Set up the repository
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install Docker Engine
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Add your user to the docker group (log out and log in after this)
sudo usermod -aG docker $USER

# Verify installation
docker --version
docker compose version
```

### Create Media Directory Structure

```bash
# Create media folders
sudo mkdir -p /media/{audiobooks,books,comics,complete,downloads,homemovies,incomplete,movies,music,photos,tv}

# Set ownership (replace with your username)
sudo chown -R $(whoami):$(whoami) /media

# Verify structure
tree -L 1 /media
```

## Step 2: Clone the Repository

```bash
# Clone the repository
cd ~
git clone https://github.com/efigueroa/homelab.git
cd homelab

# Check out the main branch
git checkout main  # or your target branch
```

## Step 3: Configure DNS

You need to point your domains to your server's IP address.

### Option 1: Wildcard DNS (Recommended)

Add these A records to your DNS provider:

```
*.fig.systems    A    YOUR_SERVER_IP
*.edfig.dev      A    YOUR_SERVER_IP
```

### Option 2: Individual Records

Create A records for each service:

```
traefik.fig.systems    A    YOUR_SERVER_IP
lldap.fig.systems      A    YOUR_SERVER_IP
auth.fig.systems       A    YOUR_SERVER_IP
home.fig.systems       A    YOUR_SERVER_IP
backup.fig.systems     A    YOUR_SERVER_IP
flix.fig.systems       A    YOUR_SERVER_IP
photos.fig.systems     A    YOUR_SERVER_IP
# ... and so on for all services
```

### Verify DNS

Wait a few minutes for DNS propagation, then verify:

```bash
# Test DNS resolution
dig traefik.fig.systems +short
dig lldap.fig.systems +short

# Should return your server IP
```

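When many subdomains point at the same server, the two `dig` calls above can be looped over a list; the following is a sketch (the host list is illustrative, and `resolve` wraps `dig` so it can be swapped out or stubbed):

```shell
# Resolve a hostname to its first A record.
resolve() { dig +short "$1" | head -n1; }

# Check that every given host resolves to the expected IP.
# Usage: check_dns YOUR_SERVER_IP host1 host2 ...
check_dns() {
  expected="$1"; shift
  ok=0; fail=0
  for host in "$@"; do
    if [ "$(resolve "$host")" = "$expected" ]; then
      ok=$((ok+1))
    else
      echo "MISMATCH: $host" >&2
      fail=$((fail+1))
    fi
  done
  echo "$ok ok, $fail mismatched"
}
```

For example: `check_dns YOUR_SERVER_IP traefik.fig.systems lldap.fig.systems home.fig.systems`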
## Step 4: Configure Environment Variables

Each service needs its environment variables configured with secure values.

### Generate Secure Secrets

Use these commands to generate secure values:

```bash
# For JWT secrets and session secrets (64 characters)
openssl rand -hex 32

# For passwords (32 alphanumeric characters)
openssl rand -base64 32 | tr -d /=+ | cut -c1-32

# For API keys (32 characters)
openssl rand -hex 16
```

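The three recipes above can be wrapped as small shell functions so every `.env` gets its secrets the same way; a sketch (the function names are just suggestions):

```shell
# Helpers mirroring the recipes above; each prints one fresh secret.
gen_jwt_secret() { openssl rand -hex 32; }                                # 64 hex chars
gen_password()   { openssl rand -base64 32 | tr -d '/=+' | cut -c1-32; }  # 32 chars
gen_api_key()    { openssl rand -hex 16; }                                # 32 hex chars
```

Then, for example: `LLDAP_JWT_SECRET="$(gen_jwt_secret)"`.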
### Update Core Services

**LLDAP** (`compose/core/lldap/.env`):

```bash
cd compose/core/lldap
nano .env

# Update these values:
LLDAP_LDAP_USER_PASS=<your-strong-password>
LLDAP_JWT_SECRET=<output-from-openssl-rand-hex-32>
```

**Tinyauth** (`compose/core/tinyauth/.env`):

```bash
cd ../tinyauth
nano .env

# Update these values (LDAP_BIND_PASSWORD must match LLDAP_LDAP_USER_PASS):
LDAP_BIND_PASSWORD=<same-as-LLDAP_LDAP_USER_PASS>
SESSION_SECRET=<output-from-openssl-rand-hex-32>
```

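A mismatch between these two files is an easy mistake, so a quick comparison helps; the following is a sketch assuming plain `KEY=value` lines in each `.env` (helper names are illustrative):

```shell
# Print the value of variable $2 from env file $1 (first match wins).
env_get() { sed -n "s/^$2=//p" "$1" | head -n1; }

# Compare LLDAP's admin password with Tinyauth's bind password.
check_bind_password() {
  a="$(env_get "$1" LLDAP_LDAP_USER_PASS)"
  b="$(env_get "$2" LDAP_BIND_PASSWORD)"
  if [ -n "$a" ] && [ "$a" = "$b" ]; then
    echo "passwords match"
  else
    echo "MISMATCH between $1 and $2"
  fi
}
```

For example: `check_bind_password compose/core/lldap/.env compose/core/tinyauth/.env`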
**Immich** (`compose/media/frontend/immich/.env`):

```bash
cd ../../media/frontend/immich
nano .env

# Update:
DB_PASSWORD=<output-from-openssl-rand-base64>
```

### Update All Other Services

Go through each service's `.env` file and replace all `changeme_*` values:

```bash
# Find all files that need updating
grep -r "changeme_" ~/homelab/compose

# Or update them individually
cd ~/homelab/compose/services/linkwarden
nano .env  # Update NEXTAUTH_SECRET, POSTGRES_PASSWORD, MEILI_MASTER_KEY

cd ../vikunja
nano .env  # Update VIKUNJA_DATABASE_PASSWORD, VIKUNJA_SERVICE_JWTSECRET, POSTGRES_PASSWORD
```

💡 **Tip**: Keep your secrets in a password manager!

See the [Secrets Management Guide](./guides/secrets-management.md) for detailed instructions.

## Step 5: Create Docker Network

```bash
# Create the external homelab network
docker network create homelab

# Verify it was created
docker network ls | grep homelab
```

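Each service's compose file then joins this network by declaring it external; a minimal sketch of the relevant fragment (`myservice` is a placeholder name):

```yaml
services:
  myservice:            # placeholder - any service Traefik should reach
    networks:
      - homelab

networks:
  homelab:
    external: true      # created once with `docker network create homelab`
```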
## Step 6: Deploy Services

Deploy services in order, starting with core infrastructure:

### Deploy Core Infrastructure

```bash
cd ~/homelab

# Deploy Traefik (reverse proxy)
cd compose/core/traefik
docker compose up -d

# Check logs to ensure it starts successfully
docker compose logs -f

# Wait for "Server configuration reloaded" message, then Ctrl+C
```

```bash
# Deploy LLDAP (user directory)
cd ../lldap
docker compose up -d
docker compose logs -f

# Access: https://lldap.fig.systems
# Default login: admin / <your LLDAP_LDAP_USER_PASS>
```

```bash
# Deploy Tinyauth (SSO)
cd ../tinyauth
docker compose up -d
docker compose logs -f

# Access: https://auth.fig.systems
```

### Create LLDAP Users

Before deploying other services, create your user in LLDAP:

1. Go to https://lldap.fig.systems
2. Log in with the admin credentials
3. Create your user:
   - Username: `edfig` (or your choice)
   - Email: `admin@edfig.dev`
   - Password: a strong password
   - Add to the `lldap_admin` group

### Deploy Media Services

```bash
cd ~/homelab/compose/media/frontend

# Jellyfin
cd jellyfin
docker compose up -d
# Access: https://flix.fig.systems

# Immich
cd ../immich
docker compose up -d
# Access: https://photos.fig.systems

# Jellyseerr
cd ../jellyseer
docker compose up -d
# Access: https://requests.fig.systems
```

```bash
# Media automation
cd ~/homelab/compose/media/automation

cd sonarr && docker compose up -d && cd ..
cd radarr && docker compose up -d && cd ..
cd sabnzbd && docker compose up -d && cd ..
cd qbittorrent && docker compose up -d && cd ..
```

### Deploy Utility Services

```bash
cd ~/homelab/compose/services

# Dashboard (start with this - it shows all your services!)
cd homarr && docker compose up -d && cd ..
# Access: https://home.fig.systems

# Backup manager
cd backrest && docker compose up -d && cd ..
# Access: https://backup.fig.systems

# Other services
cd linkwarden && docker compose up -d && cd ..
cd vikunja && docker compose up -d && cd ..
cd lubelogger && docker compose up -d && cd ..
cd calibre-web && docker compose up -d && cd ..
cd booklore && docker compose up -d && cd ..
cd FreshRSS && docker compose up -d && cd ..
cd rsshub && docker compose up -d && cd ..
cd microbin && docker compose up -d && cd ..
cd filebrowser && docker compose up -d && cd ..
```

### Quick Deploy All (Alternative)

If you've configured everything and want to deploy all at once:

```bash
cd ~/homelab

# Create a deployment script
cat > deploy-all.sh << 'SCRIPT'
#!/bin/bash
set -e

echo "Deploying homelab services..."

# Core
echo "==> Core Infrastructure"
cd compose/core/traefik && docker compose up -d && cd ../../..
sleep 5
cd compose/core/lldap && docker compose up -d && cd ../../..
sleep 5
cd compose/core/tinyauth && docker compose up -d && cd ../../..

# Media
echo "==> Media Services"
cd compose/media/frontend/immich && docker compose up -d && cd ../../../..
cd compose/media/frontend/jellyfin && docker compose up -d && cd ../../../..
cd compose/media/frontend/jellyseer && docker compose up -d && cd ../../../..
cd compose/media/automation/sonarr && docker compose up -d && cd ../../../..
cd compose/media/automation/radarr && docker compose up -d && cd ../../../..
cd compose/media/automation/sabnzbd && docker compose up -d && cd ../../../..
cd compose/media/automation/qbittorrent && docker compose up -d && cd ../../../..

# Utility
echo "==> Utility Services"
cd compose/services/homarr && docker compose up -d && cd ../..
cd compose/services/backrest && docker compose up -d && cd ../..
cd compose/services/linkwarden && docker compose up -d && cd ../..
cd compose/services/vikunja && docker compose up -d && cd ../..
cd compose/services/lubelogger && docker compose up -d && cd ../..
cd compose/services/calibre-web && docker compose up -d && cd ../..
cd compose/services/booklore && docker compose up -d && cd ../..
cd compose/services/FreshRSS && docker compose up -d && cd ../..
cd compose/services/rsshub && docker compose up -d && cd ../..
cd compose/services/microbin && docker compose up -d && cd ../..
cd compose/services/filebrowser && docker compose up -d && cd ../..

echo "==> Deployment Complete!"
echo "Access your dashboard at: https://home.fig.systems"
SCRIPT

chmod +x deploy-all.sh
./deploy-all.sh
```

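The script above repeats the same `cd … && docker compose up -d` line for every service; a list-driven sketch of the same idea, so adding a service means adding one path (the paths mirror the script, and `deploy_one` is wrapped so the docker invocation is a single place to change):

```shell
# One relative compose directory per line, in boot order (abbreviated here).
SERVICES="compose/core/traefik
compose/core/lldap
compose/core/tinyauth
compose/media/frontend/jellyfin"

# Deploy a single compose directory; a subshell keeps the cwd unchanged.
deploy_one() { (cd "$1" && docker compose up -d); }

deploy_all() {
  for dir in $SERVICES; do
    echo "==> $dir"
    deploy_one "$dir"
  done
}
```

Run `deploy_all` from `~/homelab` after extending `SERVICES` to your full stack.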
## Step 7: Verify Deployment

### Check All Containers Are Running

```bash
# List all containers
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"

# Check for any stopped containers
docker ps -a --filter "status=exited"
```

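To make this check repeatable, you can compare the `docker ps` output against the set of containers you expect; a sketch (the expected list is illustrative — fill in your own stack):

```shell
# Names of containers that should be up; adjust to your deployment.
EXPECTED="traefik lldap tinyauth jellyfin immich_server"

# Wrapped so the docker call can be tested or swapped out.
running_names() { docker ps --format '{{.Names}}'; }

check_running() {
  names="$(running_names)"
  missing=""
  for c in $EXPECTED; do
    echo "$names" | grep -qx "$c" || missing="$missing $c"
  done
  if [ -n "$missing" ]; then
    echo "MISSING:$missing"
    return 1
  fi
  echo "all expected containers running"
}
```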
### Verify SSL Certificates

```bash
# Test SSL certificate
curl -I https://home.fig.systems

# Should show HTTP/2 200 and a valid SSL cert
```

### Access Services

Visit your dashboard: **https://home.fig.systems**

This should show all your services with their status!

### Test SSO

1. Go to any SSO-protected service (e.g., https://tasks.fig.systems)
2. You should be redirected to https://auth.fig.systems
3. Log in with your LLDAP credentials
4. You should be redirected back to the service

## Step 8: Initial Service Configuration

### Jellyfin Setup

1. Go to https://flix.fig.systems
2. Select a language and create the admin account
3. Add media libraries:
   - Movies: `/media/movies`
   - TV Shows: `/media/tv`
   - Music: `/media/music`
   - Photos: `/media/photos`

### Immich Setup

1. Go to https://photos.fig.systems
2. Create the admin account
3. Upload some photos to test
4. Configure storage in Settings

### Sonarr/Radarr Setup

1. Go to https://sonarr.fig.systems and https://radarr.fig.systems
2. Complete the initial setup wizard
3. Add indexers (for finding content)
4. Add download clients:
   - SABnzbd: http://sabnzbd:8080
   - qBittorrent: http://qbittorrent:8080
5. Configure root folders:
   - Sonarr: `/media/tv`
   - Radarr: `/media/movies`

### Jellyseerr Setup

1. Go to https://requests.fig.systems
2. Sign in with Jellyfin
3. Connect to Sonarr and Radarr
4. Configure user permissions

### Backrest Setup

1. Go to https://backup.fig.systems
2. Add a Backblaze B2 repository (see the [Backup Guide](./services/backup.md))
3. Create a backup plan for Immich photos
4. Schedule automated backups

## Step 9: Optional Configurations

### Enable GPU Acceleration

If you have an NVIDIA GPU, see the [GPU Setup Guide](./guides/gpu-setup.md).

### Configure Backups

See the [Backup Operations Guide](./operations/backups.md).

### Add More Services

See the [Adding Services Guide](./guides/adding-services.md).

## Next Steps

- ✅ [Set up automated backups](./operations/backups.md)
- ✅ [Configure monitoring](./operations/monitoring.md)
- ✅ [Review security settings](./guides/security.md)
- ✅ [Enable GPU acceleration](./guides/gpu-setup.md) (optional)
- ✅ [Configure media automation](./services/media-stack.md)

## Troubleshooting

If you encounter issues during setup, see:

- [Common Issues](./troubleshooting/common-issues.md)
- [FAQ](./troubleshooting/faq.md)
- [Debugging Guide](./troubleshooting/debugging.md)

## Quick Command Reference

```bash
# View all running containers
docker ps

# View logs for a service
cd compose/path/to/service
docker compose logs -f

# Restart a service
docker compose restart

# Stop a service
docker compose down

# Update and restart a service
docker compose pull
docker compose up -d

# View resource usage
docker stats
```

## Getting Help

- Check the [FAQ](./troubleshooting/faq.md)
- Review service-specific guides in [docs/services/](./services/)
- Check container logs for errors
- Verify DNS and SSL certificates

Welcome to your homelab! 🎉

---

<!-- New file: docs/guides/centralized-logging.md -->

# Centralized Logging with Loki

Guide for setting up and using the centralized logging stack (Loki + Promtail + Grafana).

## Overview

The logging stack provides centralized log aggregation and visualization for all Docker containers:

- **Loki**: Log aggregation backend (stores and indexes logs)
- **Promtail**: Agent that collects logs from Docker containers
- **Grafana**: Web UI for querying and visualizing logs

### Why Centralized Logging?

**Problems without it:**
- Logs scattered across many containers
- Hard to correlate events across services
- Logs lost when containers restart
- No easy way to search historical logs

**Benefits:**
- ✅ Single place to view all logs
- ✅ Powerful search and filtering (LogQL)
- ✅ Logs persist even after container restarts
- ✅ Correlate events across services
- ✅ Create dashboards and alerts
- ✅ Configurable retention (30 days default)

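The stack in `compose/monitoring/logging` corresponds roughly to a compose file like the following sketch; the image tags, mount paths, and network wiring here are assumptions — the repository's own file is authoritative:

```yaml
services:
  loki:
    image: grafana/loki:latest
    command: -config.file=/etc/loki/loki-config.yaml
    volumes:
      - ./loki-config.yaml:/etc/loki/loki-config.yaml:ro
      - ./loki-data:/loki

  promtail:
    image: grafana/promtail:latest
    volumes:
      # Read container logs and discover containers via the Docker socket.
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./promtail-config.yaml:/etc/promtail/config.yml:ro

  grafana:
    image: grafana/grafana:latest
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=${GF_SECURITY_ADMIN_PASSWORD}
    networks:
      - default    # reaches loki on the stack's internal network
      - homelab    # reachable by Traefik at logs.fig.systems

networks:
  homelab:
    external: true
```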
## Quick Setup

### 1. Configure Grafana Password

```bash
cd ~/homelab/compose/monitoring/logging
nano .env
```

**Update:**
```env
GF_SECURITY_ADMIN_PASSWORD=<your-strong-password>
```

**Generate a password:**
```bash
openssl rand -base64 20
```

### 2. Deploy

```bash
cd ~/homelab/compose/monitoring/logging
docker compose up -d
```

### 3. Access Grafana

Go to: **https://logs.fig.systems**

**Login:**
- Username: `admin`
- Password: `<your GF_SECURITY_ADMIN_PASSWORD>`

### 4. Start Exploring Logs

1. Click **Explore** (compass icon) in the left sidebar
2. The Loki datasource should be selected
3. Start querying!

## Basic Usage

### View Logs from a Container

```logql
{container="jellyfin"}
```

### View the Last Hour's Logs

Time ranges come from the Grafana time picker (or the query API's `start`/`end` parameters), not from the LogQL query itself; select "Last 1 hour" and run:

```logql
{container="immich_server"}
```

### Filter for Errors

```logql
{container="traefik"} |= "error"
```

### Exclude Lines

```logql
{container="traefik"} != "404"
```

### Multiple Containers

```logql
{container=~"jellyfin|immich.*"}
```

### By Compose Project

```logql
{compose_project="media"}
```

## Advanced Queries

### Count Errors

```logql
sum(count_over_time({container="jellyfin"} |= "error" [5m]))
```

### Error Rate

```logql
rate({container="traefik"} |= "error" [5m])
```

### Parse JSON Logs

```logql
{container="linkwarden"} | json | level="error"
```

### Top 10 Containers by Error Count

```logql
topk(10,
  sum by (container) (
    count_over_time({job="docker"} |= "error" [24h])
  )
)
```

## Creating Dashboards

### Import a Pre-built Dashboard

1. Go to **Dashboards** → **Import**
2. Dashboard ID: **13639** (Docker logs)
3. Select **Loki** as the datasource
4. Click **Import**

### Create a Custom Dashboard

1. Click **+** → **Dashboard**
2. **Add panel**
3. Select the **Loki** datasource
4. Build a query
5. Choose a visualization (logs, graph, table, etc.)
6. **Save**

**Example panels:**
- Error count by container
- Log volume over time
- Recent errors (table)
- Top logging containers

## Setting Up Alerts

### Create an Alert Rule

1. **Alerting** → **Alert rules** → **New alert rule**
2. **Query:**
   ```logql
   sum(count_over_time({container="jellyfin"} |= "error" [5m])) > 10
   ```
3. **Condition**: Alert when > 10 errors in 5 minutes
4. **Configure** a notification channel (email, webhook, etc.)
5. **Save**

**Example alerts:**
- Too many errors in a service
- Service stopped logging (might have crashed)
- Authentication failures
- Disk space warnings

## Configuration

### Change Log Retention

**Default: 30 days**

Edit `.env`:
```env
LOKI_RETENTION_PERIOD=60d  # 60 days
```

Edit `loki-config.yaml`:
```yaml
limits_config:
  retention_period: 60d

table_manager:
  retention_period: 60d
```

Restart:
```bash
docker compose restart loki
```

### Adjust Resource Limits

For low-resource systems, edit `loki-config.yaml`:

```yaml
limits_config:
  retention_period: 7d   # Shorter retention
  ingestion_rate_mb: 5   # Lower ingestion rate

query_range:
  results_cache:
    cache:
      embedded_cache:
        max_size_mb: 50  # Smaller cache
```

### Add Labels to Services

Make services easier to find by adding labels:

**Edit the service's `compose.yaml`:**
```yaml
services:
  myservice:
    labels:
      logging: "promtail"
      environment: "production"
      tier: "frontend"
```

Query with these labels:
```logql
{environment="production", tier="frontend"}
```

## Troubleshooting

### No Logs Appearing

**Wait a few minutes** - initial log collection takes time.

**Check Promtail:**
```bash
docker logs promtail
```

**Check Loki:**
```bash
docker logs loki
```

**Verify Promtail can reach Loki:**
```bash
docker exec promtail wget -O- http://loki:3100/ready
```

### Grafana Can't Connect to Loki

**Test from Grafana:**
```bash
docker exec grafana wget -O- http://loki:3100/ready
```

**Check the datasource:** Grafana → Configuration → Data sources → Loki
- URL should be: `http://loki:3100`

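Rather than fixing the datasource by hand each time, Grafana can provision it from a file mounted into the container; a minimal sketch (the file path `provisioning/datasources/loki.yaml` and mount location are assumptions):

```yaml
# provisioning/datasources/loki.yaml
# Mount into the Grafana container at /etc/grafana/provisioning/datasources/
apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100
    isDefault: true
```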
### High Disk Usage

**Check the size:**
```bash
du -sh compose/monitoring/logging/loki-data
```

**Reduce retention:**
```env
LOKI_RETENTION_PERIOD=7d
```

**Manual cleanup (CAREFUL - this deletes stored logs):**
```bash
docker compose stop loki
rm -rf loki-data/chunks/*
docker compose start loki
```

### Slow Queries

**Optimize queries:**
- Use specific labels: `{container="name"}`, not `{container=~".*"}`
- Limit the time range: hours, not days
- Filter early: `|= "error"` before parsing
- Avoid complex regex

## Best Practices

### Log Verbosity

Configure appropriate log levels per environment:
- **Production**: `info` or `warning`
- **Debugging**: `debug` or `trace`

Too verbose = wasted resources!

### Retention Strategy

Match retention to importance:
- **Critical services**: 60-90 days
- **Normal services**: 30 days
- **High-volume services**: 7-14 days

### Useful Queries to Save

Create saved queries for common tasks:

**Recent errors** (set the time picker to the last 15 minutes):
```logql
{job="docker"} |= "error"
```

**Service health check:**
```logql
{container="traefik"} |= "request"
```

**Failed logins:**
```logql
{container="lldap"} |= "failed" |= "login"
```

## Integration Tips

### Embed in Homarr

Add Grafana dashboards to Homarr:

1. Edit the Homarr dashboard
2. Add an **iFrame widget**
3. URL: `https://logs.fig.systems/d/<dashboard-id>`

### Use with Backups

Include logging data in backups:

```bash
cd ~/homelab/compose/monitoring/logging
tar czf logging-backup-$(date +%Y%m%d).tar.gz loki-data/ grafana-data/
```

### Combine with Metrics

Later you can add Prometheus for metrics:
- Loki for logs
- Prometheus for metrics (CPU, RAM, disk)
- Both in Grafana dashboards

## Common LogQL Patterns

### Filter by Time

Time ranges are set in the Grafana time picker (or via the query API's `start`/`end` parameters), not inside the LogQL query:

```logql
# Select "Last 5 minutes" (or a specific range) in the time picker, then run:
{container="name"}
```

### Pattern Matching

```logql
# Contains
{container="name"} |= "error"

# Does not contain
{container="name"} != "404"

# Regex match
{container="name"} |~ "error|fail|critical"

# Regex does not match
{container="name"} !~ "debug|trace"
```

### Aggregations

```logql
# Count
count_over_time({container="name"}[5m])

# Rate
rate({container="name"}[5m])

# Sum by container
sum(count_over_time({job="docker"}[1h])) by (container)

# Average of an unwrapped numeric field
avg_over_time({container="name"} | unwrap bytes [5m])
```

### JSON Parsing

```logql
# Parse JSON and filter
{container="name"} | json | level="error"

# Extract a field
{container="name"} | json | line_format "{{.message}}"

# Filter on a JSON field
{container="name"} | json | status_code="500"
```

## Resource Usage

**Typical usage** (for ~20 containers with moderate logging):

- **Loki**: 200-500MB RAM, 1-5GB disk/week
- **Promtail**: 50-100MB RAM
- **Grafana**: 100-200MB RAM, ~100MB disk
- **Total**: ~400-700MB RAM

## Next Steps

1. ✅ Explore your logs in Grafana
2. ✅ Create useful dashboards
3. ✅ Set up alerts for critical errors
4. ⬜ Add Prometheus for metrics (future)
5. ⬜ Add Tempo for distributed tracing (future)
6. ⬜ Create log-based SLA tracking

## Resources

- [Loki Documentation](https://grafana.com/docs/loki/latest/)
- [LogQL Reference](https://grafana.com/docs/loki/latest/logql/)
- [Grafana Dashboards](https://grafana.com/grafana/dashboards/)
- [Community Dashboards](https://grafana.com/grafana/dashboards/?search=loki)

---

**Now debug issues 10x faster with centralized logs!** 🔍

---

<!-- New file: docs/guides/gpu-setup.md -->

# NVIDIA GPU Acceleration Setup (GTX 1070)

This guide covers setting up NVIDIA GPU acceleration for your homelab running on **Proxmox 9 (Debian 13)** with an **NVIDIA GTX 1070**.

## Overview

GPU acceleration provides significant benefits:
- **Jellyfin**: Hardware video transcoding (H.264, HEVC)
- **Immich**: Faster ML inference (face recognition, object detection)
- **Performance**: 10-20x faster transcoding vs CPU
- **Efficiency**: Lower power consumption, CPU freed for other tasks

**Your Hardware:**
- **GPU**: NVIDIA GTX 1070 (Pascal architecture)
- **Capabilities**: NVENC (encoding), NVDEC (decoding), CUDA
- **Max Concurrent Streams**: 2 (can be unlocked)
- **Supported Codecs**: H.264, HEVC (H.265)

## Architecture Overview

```
Proxmox Host (Debian 13)
│
├─ NVIDIA Drivers (host)
├─ NVIDIA Container Toolkit
│
└─ Docker VM/LXC
   │
   ├─ GPU passthrough
   │
   └─ Jellyfin/Immich containers
      └─ Hardware transcoding
```

## Part 1: Proxmox Host Setup

### Step 1.1: Enable IOMMU (for GPU Passthrough)

**Edit the GRUB configuration:**

```bash
# SSH into the Proxmox host
ssh root@proxmox-host

# Edit the GRUB config
nano /etc/default/grub
```

**Find this line:**
```
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
```

**Replace with (Intel CPU):**
```
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
```

**Or (AMD CPU):**
```
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
```

**Update GRUB and reboot:**
```bash
update-grub
reboot
```

**Verify IOMMU is enabled:**
```bash
dmesg | grep -e DMAR -e IOMMU

# Should see: "IOMMU enabled"
```

### Step 1.2: Load VFIO Modules

**Edit the modules file:**
```bash
nano /etc/modules
```

**Add these lines:**
```
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
```

**Update the initramfs:**
```bash
update-initramfs -u -k all
reboot
```

### Step 1.3: Find GPU PCI ID

```bash
lspci -nn | grep -i nvidia

# Example output:
# 01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [GeForce GTX 1070] [10de:1b81] (rev a1)
# 01:00.1 Audio device [0403]: NVIDIA Corporation GP104 High Definition Audio Controller [10de:10f0] (rev a1)
```

**Note the IDs**: `10de:1b81` and `10de:10f0` (your values may differ)

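The bracketed vendor:device IDs can also be extracted automatically, ready to paste into `vfio.conf` in the next step; a sketch (`pci_list` wraps `lspci -nn` so the parsing can be exercised on captured output):

```shell
# Wraps lspci so the parsing below can run against captured output.
pci_list() { lspci -nn; }

# Print the NVIDIA vendor:device IDs, comma-separated.
nvidia_ids() {
  pci_list | grep -i 'nvidia' \
    | sed -n 's/.*\[\(10de:[0-9a-f]\{4\}\)\].*/\1/p' \
    | paste -sd, -
}
```

For example, the vfio.conf line can be built as: `echo "options vfio-pci ids=$(nvidia_ids)"`.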
### Step 1.4: Configure VFIO

**Create the VFIO config:**
```bash
nano /etc/modprobe.d/vfio.conf
```

**Add (replace with your IDs from above):**
```
options vfio-pci ids=10de:1b81,10de:10f0
softdep nvidia pre: vfio-pci
```

**Blacklist nouveau (the open-source NVIDIA driver):**
```bash
echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf
```

**Update and reboot:**
```bash
update-initramfs -u -k all
reboot
```

**Verify the GPU is bound to VFIO:**
```bash
lspci -nnk -d 10de:1b81

# Should show:
# Kernel driver in use: vfio-pci
```

## Part 2: VM/LXC Setup

### Option A: Using a VM (Recommended for Docker)

**Create an Ubuntu 24.04 VM with GPU passthrough:**

1. **Create VM in Proxmox UI**:
   - OS: Ubuntu 24.04 Server
   - CPU: 4+ cores
   - RAM: 16GB+
   - Disk: 100GB+

2. **Add PCI Device** (GPU):
   - Hardware → Add → PCI Device
   - Device: Select your GTX 1070 (01:00.0)
   - ✅ All Functions
   - ✅ Primary GPU (if no other GPU)
   - ✅ PCI-Express

3. **Add PCI Device** (GPU Audio):
   - Hardware → Add → PCI Device
   - Device: NVIDIA Audio (01:00.1)
   - ✅ All Functions

4. **Machine Settings**:
   - Machine: q35
   - BIOS: OVMF (UEFI)
   - Add EFI Disk

5. **Start VM** and install Ubuntu

### Option B: Using LXC (Advanced, Less Stable)

**Note**: LXC with GPU passthrough is less reliable; a VM is recommended.

If you insist on LXC:
```bash
# Edit LXC config
nano /etc/pve/lxc/VMID.conf

# Add:
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 509:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
```

**For this guide, we'll use a VM (Option A)**.
## Part 3: VM Guest Setup (Debian 13)

Now we're inside the Ubuntu/Debian VM where Docker runs.

### Step 3.1: Install NVIDIA Drivers

**SSH into your Docker VM:**
```bash
ssh user@docker-vm
```

**Update system:**
```bash
sudo apt update
sudo apt upgrade -y
```

**Debian 13 - Install NVIDIA drivers:**
```bash
# Add non-free repositories
sudo nano /etc/apt/sources.list

# Add 'non-free non-free-firmware' to each line, for example:
deb http://deb.debian.org/debian trixie main non-free non-free-firmware
deb http://deb.debian.org/debian trixie-updates main non-free non-free-firmware

# Update and install
sudo apt update
sudo apt install -y linux-headers-$(uname -r)
sudo apt install -y nvidia-driver nvidia-smi

# Reboot
sudo reboot
```

**Verify driver installation:**
```bash
nvidia-smi

# Should show a table similar to:
# +-----------------------------------------------------------------------------+
# | NVIDIA-SMI 535.xx.xx    Driver Version: 535.xx.xx    CUDA Version: 12.2     |
# |-------------------------------+----------------------+----------------------+
# | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
# | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
# |===============================+======================+======================|
# |   0  NVIDIA GeForce ...  Off  | 00000000:01:00.0 Off |                  N/A |
# | 30%   35C    P8    10W / 150W |      0MiB /  8192MiB |      0%      Default |
# +-------------------------------+----------------------+----------------------+
```

✅ **Success!** Your GTX 1070 is now accessible in the VM.
### Step 3.2: Install NVIDIA Container Toolkit

**Add the NVIDIA Container Toolkit repository:**
```bash
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg

curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
```

**Install toolkit:**
```bash
sudo apt update
sudo apt install -y nvidia-container-toolkit
```

**Configure Docker to use the NVIDIA runtime:**
```bash
sudo nvidia-ctk runtime configure --runtime=docker
```

**Restart Docker:**
```bash
sudo systemctl restart docker
```

**Verify Docker can access the GPU:**
```bash
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi

# Should show nvidia-smi output from inside the container
```

✅ **Success!** Docker can now use your GPU.
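If the container test fails, a quick check of Docker's config can tell you whether the runtime was actually registered. A sketch (assumes Docker's default config path `/etc/docker/daemon.json`; adjust if yours differs):

```bash
# Sketch: confirm nvidia-ctk registered the nvidia runtime in Docker's config.
if grep -q '"nvidia"' /etc/docker/daemon.json; then
  echo "nvidia runtime registered"
else
  echo "nvidia runtime missing - rerun nvidia-ctk runtime configure"
fi
```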
## Part 4: Configure Jellyfin for GPU Transcoding

### Step 4.1: Update Jellyfin Compose File

**Edit compose file:**
```bash
cd ~/homelab/compose/media/frontend/jellyfin
nano compose.yaml
```

**Uncomment the GPU sections:**

```yaml
services:
  jellyfin:
    container_name: jellyfin
    image: lscr.io/linuxserver/jellyfin:latest
    env_file:
      - .env
    volumes:
      - ./config:/config
      - ./cache:/cache
      - /media/movies:/media/movies:ro
      - /media/tv:/media/tv:ro
      - /media/music:/media/music:ro
      - /media/photos:/media/photos:ro
      - /media/homemovies:/media/homemovies:ro
    ports:
      - "8096:8096"
      - "7359:7359/udp"
    restart: unless-stopped
    networks:
      - homelab
    labels:
      traefik.enable: true
      traefik.http.routers.jellyfin.rule: Host(`flix.fig.systems`) || Host(`flix.edfig.dev`)
      traefik.http.routers.jellyfin.entrypoints: websecure
      traefik.http.routers.jellyfin.tls.certresolver: letsencrypt
      traefik.http.services.jellyfin.loadbalancer.server.port: 8096

    # UNCOMMENT THESE LINES FOR GTX 1070:
    runtime: nvidia
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

networks:
  homelab:
    external: true
```

**Restart Jellyfin:**
```bash
docker compose down
docker compose up -d
```

**Check logs:**
```bash
docker compose logs -f

# Should see lines about NVENC/CUDA being detected
```
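A stray `#` left on the GPU lines silently falls back to CPU transcoding, so it can be worth verifying the stanza is active before restarting. A sketch (assumes you are in the jellyfin compose directory):

```bash
# Sketch: verify the GPU stanza is uncommented in compose.yaml.
if grep -qE '^\s*runtime:\s*nvidia' compose.yaml; then
  echo "GPU stanza active"
else
  echo "runtime: nvidia is still commented out"
fi
```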
### Step 4.2: Enable in Jellyfin UI

1. Go to https://flix.fig.systems
2. Dashboard → Playback → Transcoding
3. **Hardware acceleration**: NVIDIA NVENC
4. **Enable hardware decoding for**:
   - ✅ H264
   - ✅ HEVC
   - ✅ VC1
   - ✅ VP8
   - ✅ MPEG2
5. **Enable hardware encoding**
6. **Enable encoding in HEVC format**
7. Save

### Step 4.3: Test Transcoding

1. Play a video in the Jellyfin web UI
2. Click Settings (gear icon) → Quality
3. Select a lower bitrate to force transcoding
4. In another terminal:

```bash
nvidia-smi

# While the video is transcoding, should see:
# GPU utilization: 20-40%
# Memory usage: 500-1000MB
```

✅ **Success!** Jellyfin is using your GTX 1070!
## Part 5: Configure Immich for GPU Acceleration

Immich can use the GPU for two purposes:
1. **ML inference** (face recognition, object detection)
2. **Video transcoding**

### Step 5.1: ML Inference (CUDA)

**Edit the Immich compose file:**
```bash
cd ~/homelab/compose/media/frontend/immich
nano compose.yaml
```

**Change the ML image to the CUDA version:**

Find this line:
```yaml
image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}
```

Change to:
```yaml
image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}-cuda
```

**Add GPU support:**

```yaml
immich-machine-learning:
  container_name: immich_machine_learning
  image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}-cuda
  volumes:
    - model-cache:/cache
  env_file:
    - .env
  restart: always
  networks:
    - immich_internal

  # ADD THESE LINES:
  runtime: nvidia
  deploy:
    resources:
      reservations:
        devices:
          - driver: nvidia
            count: all
            capabilities: [gpu]
```
### Step 5.2: Video Transcoding (NVENC)

**For video transcoding, add to immich-server:**

```yaml
immich-server:
  container_name: immich_server
  image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
  # ... existing config ...

  # ADD THESE LINES:
  runtime: nvidia
  deploy:
    resources:
      reservations:
        devices:
          - driver: nvidia
            count: all
            capabilities: [gpu]
```

**Restart Immich:**
```bash
docker compose down
docker compose up -d
```
### Step 5.3: Enable in Immich UI

1. Go to https://photos.fig.systems
2. Administration → Settings → Video Transcoding
3. **Transcoding**: h264 (NVENC)
4. **Hardware Acceleration**: NVIDIA
5. Save
6. Administration → Settings → Machine Learning
7. **Facial Recognition**: Enabled
8. **Object Detection**: Enabled
9. Should automatically use CUDA

### Step 5.4: Test ML Inference

1. Upload photos with faces
2. In a terminal:

```bash
nvidia-smi

# While processing, should see:
# GPU utilization: 50-80%
# Memory usage: 2-4GB
```

✅ **Success!** Immich is using the GPU for ML inference!
## Part 6: Performance Tuning

### GTX 1070 Specific Settings

**Jellyfin optimal settings:**
- Hardware acceleration: NVIDIA NVENC
- Target transcode bandwidth: Let clients decide
- Enable hardware encoding: Yes
- Prefer OS native DXVA or VA-API hardware decoders: No
- Allow encoding in HEVC format: Yes (GTX 1070 supports HEVC)

**Immich optimal settings:**
- Transcoding: h264 or hevc
- Target resolution: 1080p (for GTX 1070)
- CRF: 23 (good balance)
- Preset: fast

### Unlock NVENC Stream Limit

The GTX 1070 is limited to 2 concurrent transcoding streams. You can unlock unlimited streams:

**Install patch:**
```bash
# Inside the Docker VM
git clone https://github.com/keylase/nvidia-patch.git
cd nvidia-patch
sudo bash ./patch.sh

# Reboot
sudo reboot
```

**Verify:**
```bash
nvidia-smi

# Now supports unlimited concurrent streams
```

⚠️ **Note**: This is a hack that modifies the NVIDIA driver. Use at your own risk.

### Monitor GPU Usage

**Real-time monitoring:**
```bash
watch -n 1 nvidia-smi
```

**Check GPU usage from Docker:**
```bash
docker stats $(docker ps --format '{{.Names}}' | grep -E 'jellyfin|immich')
```
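The full `nvidia-smi` table is verbose for quick checks; its CSV query mode pipes cleanly into awk. A sketch using standard `--query-gpu` property names:

```bash
# Sketch: one-line GPU summary from nvidia-smi's CSV query output.
nvidia-smi --query-gpu=utilization.gpu,memory.used,temperature.gpu \
           --format=csv,noheader,nounits |
  awk -F', ' '{printf "GPU util %s%%, mem %s MiB, temp %s C\n", $1, $2, $3}'
```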
## Troubleshooting

### GPU Not Detected in VM

**Check from Proxmox host:**
```bash
lspci | grep -i nvidia
```

**Check from VM:**
```bash
lspci | grep -i nvidia
nvidia-smi
```

**If not visible in VM:**
1. Verify IOMMU is enabled (`dmesg | grep IOMMU`)
2. Check PCI passthrough is configured correctly
3. Ensure the VM is using the q35 machine type
4. Verify BIOS is OVMF (UEFI)

### Docker Can't Access GPU

**Error**: `could not select device driver "" with capabilities: [[gpu]]`

**Fix:**
```bash
# Reconfigure NVIDIA runtime
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# Test again
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```

### Jellyfin Shows "No Hardware Acceleration Available"

**Check:**
```bash
# Verify container has GPU access
docker exec jellyfin nvidia-smi

# Check Jellyfin logs
docker logs jellyfin | grep -i nvenc
```

**Fix:**
1. Ensure `runtime: nvidia` is uncommented
2. Verify `deploy.resources.reservations.devices` is configured
3. Restart the container: `docker compose up -d`

### Transcoding Fails with "Failed to Open GPU"

**Check:**
```bash
# GPU might be busy
nvidia-smi

# List processes using the GPU
sudo fuser -v /dev/nvidia*
```

### Low GPU Utilization During Transcoding

**Normal**: The GTX 1070 is powerful; 20-40% utilization is expected for a single stream.

**To max out the GPU:**
- Transcode multiple streams simultaneously
- Use a higher resolution source (4K)
- Enable HEVC encoding
## Performance Benchmarks (GTX 1070)

**Typical Performance:**
- **4K HEVC → 1080p H.264**: ~120-150 FPS (real-time)
- **1080p H.264 → 720p H.264**: ~300-400 FPS
- **Concurrent streams**: 4-6 (after unlocking limit)
- **Power draw**: 80-120W during transcoding
- **Temperature**: 55-65°C

**Compare to CPU (typical 4-core):**
- **4K HEVC → 1080p H.264**: ~10-15 FPS
- CPU would be at 100% utilization
- GPU: 10-15x faster!
## Monitoring and Maintenance

### Create GPU Monitoring Dashboard

**Install nvtop (nvidia-top):**
```bash
sudo apt install nvtop
```

**Run:**
```bash
nvtop
```

Shows real-time GPU usage, memory, temperature, and processes.

### Check GPU Health

```bash
# Temperature
nvidia-smi --query-gpu=temperature.gpu --format=csv

# Memory usage
nvidia-smi --query-gpu=memory.used,memory.total --format=csv

# Fan speed
nvidia-smi --query-gpu=fan.speed --format=csv

# Power draw
nvidia-smi --query-gpu=power.draw,power.limit --format=csv
```

### Automated Monitoring

Add to cron:
```bash
crontab -e

# Add:
*/5 * * * * nvidia-smi --query-gpu=utilization.gpu,memory.used,temperature.gpu --format=csv,noheader >> /var/log/gpu-stats.log
```
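The cron job above accumulates one CSV sample every five minutes; a short awk pass turns the log into an average. A sketch, assuming the `utilization.gpu,memory.used,temperature.gpu` column order used in the cron line:

```bash
# Sketch: average GPU utilization from the cron-collected stats log.
# Each line looks like "35 %, 812 MiB, 58" with --format=csv,noheader.
awk -F',' '{gsub(/[ %]/, "", $1); sum += $1; n++}
           END {if (n) printf "avg GPU util: %.1f%% over %d samples\n", sum/n, n}' \
    /var/log/gpu-stats.log
```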
## Next Steps

✅ GPU is now configured for Jellyfin and Immich!

**Recommended:**
1. Test transcoding with various file formats
2. Upload photos to Immich and verify ML inference works
3. Monitor GPU temperature and utilization
4. Consider unlocking the NVENC stream limit
5. Set up automated monitoring

**Optional:**
- Configure Tdarr for batch transcoding using the GPU
- Set up Plex (also supports NVENC)
- Use the GPU for other workloads (AI, rendering)
## Reference

### Quick Command Reference

```bash
# Check GPU from host (Proxmox)
lspci | grep -i nvidia

# Check GPU from VM
nvidia-smi

# Test Docker GPU access
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi

# Monitor GPU real-time
watch -n 1 nvidia-smi

# Check Jellyfin GPU usage
docker exec jellyfin nvidia-smi

# Restart Jellyfin with GPU
cd ~/homelab/compose/media/frontend/jellyfin
docker compose down && docker compose up -d

# View GPU processes
nvidia-smi pmon

# GPU temperature
nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader
```

### GTX 1070 Specifications

- **Architecture**: Pascal (GP104)
- **CUDA Cores**: 1920
- **Memory**: 8GB GDDR5
- **Memory Bandwidth**: 256 GB/s
- **TDP**: 150W
- **NVENC**: 6th generation (H.264, HEVC)
- **NVDEC**: 2nd generation
- **Concurrent Streams**: 2 (unlockable to unlimited)

---

**Your GTX 1070 is now accelerating your homelab! 🚀**
567 docs/guides/secrets-management.md (new file)
@@ -0,0 +1,567 @@
# Secrets and Environment Variables Management

This guide explains how to properly configure and manage secrets in your homelab.

## Overview

Every service uses environment variables stored in `.env` files for configuration. This approach:
- ✅ Keeps secrets out of version control
- ✅ Makes configuration changes easy
- ✅ Follows Docker Compose best practices
- ✅ Provides clear examples of what each secret should look like

## Finding What Needs Configuration

### Search for Placeholder Values

All secrets that need changing are marked with `changeme_`:

```bash
# Find all files with placeholder secrets
grep -r "changeme_" ~/homelab/compose

# Output shows exactly what needs updating:
compose/core/lldap/.env:LLDAP_LDAP_USER_PASS=changeme_please_set_secure_password
compose/core/lldap/.env:LLDAP_JWT_SECRET=changeme_please_set_random_secret
compose/core/tinyauth/.env:LDAP_BIND_PASSWORD=changeme_please_set_secure_password
...
```

### Count What's Left to Configure

```bash
# Count how many secrets still need updating
grep -r "changeme_" ~/homelab/compose | wc -l

# Goal: 0
```
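Beyond the total count, a per-file breakdown shows where the remaining work is. A sketch using grep's recursive count mode (files with zero placeholders are filtered out):

```bash
# Sketch: placeholders remaining per .env file.
grep -rc "changeme_" ~/homelab/compose --include=".env" | grep -v ':0$'
```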
## Generating Secrets

Each `.env` file includes comments showing:
1. What the secret is for
2. How to generate it
3. What format it should be in

### Common Secret Types

#### 1. JWT Secrets (64 characters)

**Used by**: LLDAP, Vikunja, NextAuth

**Generate:**
```bash
openssl rand -hex 32
```

**Example output:**
```
a1b2c3d4e5f67890abcdef1234567890a1b2c3d4e5f67890abcdef1234567890
```

**Where to use:**
- `LLDAP_JWT_SECRET`
- `VIKUNJA_SERVICE_JWTSECRET`
- `NEXTAUTH_SECRET`
- `SESSION_SECRET`

#### 2. Database Passwords (32 alphanumeric)

**Used by**: Postgres, Immich, Vikunja, Linkwarden

**Generate:**
```bash
openssl rand -base64 32 | tr -d /=+ | cut -c1-32
```

**Example output:**
```
aB3dEf7HiJ9kLmN2oPqR5sTuV8wXyZ1
```

**Where to use:**
- `DB_PASSWORD` (Immich)
- `POSTGRES_PASSWORD` (Vikunja, Linkwarden)
- `VIKUNJA_DATABASE_PASSWORD`

#### 3. Strong Passwords (16+ characters, mixed)

**Used by**: LLDAP admin, service admin accounts

**Generate:**
```bash
# Option 1: Using pwgen (install: apt install pwgen)
pwgen -s 20 1

# Option 2: Using openssl
openssl rand -base64 20 | tr -d /=+

# Option 3: Manual (recommended for main admin password)
# Create something memorable but strong
# Example format: MyS3cur3P@ssw0rd!2024#HomeL@b
```

**Where to use:**
- `LLDAP_LDAP_USER_PASS`
- `LDAP_BIND_PASSWORD` (must match LLDAP_LDAP_USER_PASS!)

#### 4. API Keys / Master Keys (32 characters)

**Used by**: Meilisearch, various APIs

**Generate:**
```bash
openssl rand -hex 16
```

**Example output:**
```
f6a7b8c901234abcdef567890a1b2c3d
```

**Where to use:**
- `MEILI_MASTER_KEY`
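Once generated, it is worth confirming each secret has the expected shape before pasting it into a `.env` file. A sketch checking the lengths produced by the commands above (the database password can come out slightly under 32 characters after stripping `/`, `=`, and `+`):

```bash
# Sketch: sanity-check generated secret lengths.
JWT=$(openssl rand -hex 32)                                   # expect 64 hex chars
DBPASS=$(openssl rand -base64 32 | tr -d /=+ | cut -c1-32)    # expect up to 32 alnum chars
APIKEY=$(openssl rand -hex 16)                                # expect 32 hex chars
echo "jwt=${#JWT} dbpass=${#DBPASS} apikey=${#APIKEY}"
```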
## Service-Specific Configuration

### Core Services

#### LLDAP (`compose/core/lldap/.env`)

```bash
# Edit the file
cd ~/homelab/compose/core/lldap
nano .env
```

**Required secrets:**

```env
# Admin password - use a STRONG password you'll remember
# Example: MyS3cur3P@ssw0rd!2024#HomeL@b
LLDAP_LDAP_USER_PASS=changeme_please_set_secure_password

# JWT secret - generate with: openssl rand -hex 32
# Example: a1b2c3d4e5f67890abcdef1234567890a1b2c3d4e5f67890abcdef1234567890
LLDAP_JWT_SECRET=changeme_please_set_random_secret
```

**Generate and update:**
```bash
# Generate JWT secret
echo "LLDAP_JWT_SECRET=$(openssl rand -hex 32)"

# Choose a strong password for LLDAP_LDAP_USER_PASS
# Write it down - you'll need it for Tinyauth too!
```

#### Tinyauth (`compose/core/tinyauth/.env`)

```bash
cd ~/homelab/compose/core/tinyauth
nano .env
```

**Required secrets:**

```env
# MUST match LLDAP_LDAP_USER_PASS from lldap/.env
LDAP_BIND_PASSWORD=changeme_please_set_secure_password

# Session secret - generate with: openssl rand -hex 32
SESSION_SECRET=changeme_please_set_random_session_secret
```

**⚠️ CRITICAL**: `LDAP_BIND_PASSWORD` must exactly match `LLDAP_LDAP_USER_PASS`!

```bash
# Generate session secret
echo "SESSION_SECRET=$(openssl rand -hex 32)"
```
### Media Services

#### Immich (`compose/media/frontend/immich/.env`)

```bash
cd ~/homelab/compose/media/frontend/immich
nano .env
```

**Required secrets:**

```env
# Database password - generate with: openssl rand -base64 32 | tr -d /=+ | cut -c1-32
DB_PASSWORD=changeme_please_set_secure_password
```

```bash
# Generate
echo "DB_PASSWORD=$(openssl rand -base64 32 | tr -d /=+ | cut -c1-32)"
```
### Utility Services

#### Linkwarden (`compose/services/linkwarden/.env`)

```bash
cd ~/homelab/compose/services/linkwarden
nano .env
```

**Required secrets:**

```env
# NextAuth secret - generate with: openssl rand -hex 32
NEXTAUTH_SECRET=changeme_please_set_random_secret_key

# Postgres password - generate with: openssl rand -base64 32 | tr -d /=+ | cut -c1-32
POSTGRES_PASSWORD=changeme_please_set_secure_postgres_password

# Meilisearch master key - generate with: openssl rand -hex 16
MEILI_MASTER_KEY=changeme_please_set_meili_master_key
```

```bash
# Generate all three
echo "NEXTAUTH_SECRET=$(openssl rand -hex 32)"
echo "POSTGRES_PASSWORD=$(openssl rand -base64 32 | tr -d /=+ | cut -c1-32)"
echo "MEILI_MASTER_KEY=$(openssl rand -hex 16)"
```

#### Vikunja (`compose/services/vikunja/.env`)

```bash
cd ~/homelab/compose/services/vikunja
nano .env
```

**Required secrets:**

```env
# Database password (used in two places - must match!)
VIKUNJA_DATABASE_PASSWORD=changeme_please_set_secure_password
POSTGRES_PASSWORD=changeme_please_set_secure_password  # Same value!

# JWT secret - generate with: openssl rand -hex 32
VIKUNJA_SERVICE_JWTSECRET=changeme_please_set_random_jwt_secret
```

**⚠️ CRITICAL**: Both password fields must match!

```bash
# Generate
DB_PASS=$(openssl rand -base64 32 | tr -d /=+ | cut -c1-32)
echo "VIKUNJA_DATABASE_PASSWORD=$DB_PASS"
echo "POSTGRES_PASSWORD=$DB_PASS"
echo "VIKUNJA_SERVICE_JWTSECRET=$(openssl rand -hex 32)"
```
## Automated Configuration Script

Create a script to generate all secrets at once:

```bash
#!/bin/bash
# save as: ~/homelab/generate-secrets.sh

# Colors for output
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'

echo -e "${YELLOW}Homelab Secrets Generator${NC}\n"

echo "This script will help you generate secure secrets for your homelab."
echo "You'll need to manually copy these values into the respective .env files."
echo ""

# LLDAP
echo -e "${GREEN}=== LLDAP (compose/core/lldap/.env) ===${NC}"
echo "LLDAP_JWT_SECRET=$(openssl rand -hex 32)"
echo "LLDAP_LDAP_USER_PASS=<choose-a-strong-password-manually>"
echo ""

# Tinyauth
echo -e "${GREEN}=== Tinyauth (compose/core/tinyauth/.env) ===${NC}"
echo "LDAP_BIND_PASSWORD=<same-as-LLDAP_LDAP_USER_PASS-above>"
echo "SESSION_SECRET=$(openssl rand -hex 32)"
echo ""

# Immich
echo -e "${GREEN}=== Immich (compose/media/frontend/immich/.env) ===${NC}"
echo "DB_PASSWORD=$(openssl rand -base64 32 | tr -d /=+ | cut -c1-32)"
echo ""

# Linkwarden
echo -e "${GREEN}=== Linkwarden (compose/services/linkwarden/.env) ===${NC}"
echo "NEXTAUTH_SECRET=$(openssl rand -hex 32)"
echo "POSTGRES_PASSWORD=$(openssl rand -base64 32 | tr -d /=+ | cut -c1-32)"
echo "MEILI_MASTER_KEY=$(openssl rand -hex 16)"
echo ""

# Vikunja
VIKUNJA_PASS=$(openssl rand -base64 32 | tr -d /=+ | cut -c1-32)
echo -e "${GREEN}=== Vikunja (compose/services/vikunja/.env) ===${NC}"
echo "VIKUNJA_DATABASE_PASSWORD=$VIKUNJA_PASS"
echo "POSTGRES_PASSWORD=$VIKUNJA_PASS  # Must match above!"
echo "VIKUNJA_SERVICE_JWTSECRET=$(openssl rand -hex 32)"
echo ""

echo -e "${YELLOW}Done! Copy these values into your .env files.${NC}"
echo ""
echo "Don't forget to:"
echo "1. Choose a strong LLDAP_LDAP_USER_PASS manually"
echo "2. Use the same password for LDAP_BIND_PASSWORD in tinyauth"
echo "3. Save all secrets in a password manager"
```

**Usage:**
```bash
chmod +x ~/homelab/generate-secrets.sh
~/homelab/generate-secrets.sh > secrets.txt

# Review and copy secrets
cat secrets.txt

# Keep this file safe or delete after copying to .env files
```
## Security Best Practices
|
||||||
|
|
||||||
|
### 1. Use a Password Manager
|
||||||
|
|
||||||
|
Store all secrets in a password manager:
|
||||||
|
- **1Password**: Great for teams
|
||||||
|
- **Bitwarden**: Self-hostable option
|
||||||
|
- **KeePassXC**: Offline, open-source
|
||||||
|
|
||||||
|
Create an entry for each service with:
|
||||||
|
- Service name
|
||||||
|
- URL
|
||||||
|
- All secrets from `.env` file
|
||||||
|
- Admin credentials
|
||||||
|
|
||||||
|
### 2. Never Commit Secrets
|
||||||
|
|
||||||
|
The repository `.gitignore` already excludes `.env` files, but double-check:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Verify .env files are ignored
|
||||||
|
git status
|
||||||
|
|
||||||
|
# Should NOT show any .env files
|
||||||
|
```
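You can also ask git directly whether a path is covered by `.gitignore`. This throwaway-repo sketch (the paths and pattern are illustrative, not your actual repo) shows `git check-ignore`, which exits 0 when a path matches an ignore rule:

```shell
# Demo in a temporary repo: confirm a .env path is ignored before you ever `git add` it.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
echo '*.env' > .gitignore
mkdir -p compose/core/traefik
touch compose/core/traefik/.env
git check-ignore -q compose/core/traefik/.env && echo "ignored" || echo "NOT ignored"
```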

### 3. Backup Your Secrets

```bash
# Create encrypted backup of all .env files
cd ~/homelab
tar czf env-backup-$(date +%Y%m%d).tar.gz $(find compose -name ".env")

# Encrypt with GPG
gpg -c env-backup-$(date +%Y%m%d).tar.gz

# Store encrypted file safely
mv env-backup-*.tar.gz.gpg ~/backups/

# Delete unencrypted tar
rm env-backup-*.tar.gz
```
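The restore path is the mirror image: decrypt with `gpg -d`, then unpack. This sketch round-trips a dummy `.env` in batch mode so the whole flow can be exercised without prompts (the passphrase and file names are placeholders):

```shell
# Encrypt, then decrypt and unpack: what a real restore of env-backup-*.tar.gz.gpg looks like.
cd "$(mktemp -d)"
echo "SECRET=abc" > demo.env
tar czf env-backup.tar.gz demo.env
gpg --batch --yes --pinentry-mode loopback --passphrase demo -c env-backup.tar.gz
rm env-backup.tar.gz demo.env
gpg --batch --yes --pinentry-mode loopback --passphrase demo -d env-backup.tar.gz.gpg 2>/dev/null > env-backup.tar.gz
tar xzf env-backup.tar.gz
cat demo.env
```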

### 4. Rotate Secrets Regularly

Change critical secrets periodically:
- **Admin passwords**: Every 90 days
- **JWT secrets**: Every 180 days
- **Database passwords**: When personnel changes
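Rotation usually means regenerating the value and swapping it in place. A minimal sketch (the key name and file are examples; restart the affected containers afterwards, and rotate matching fields together):

```shell
# Replace KEY=<old> with KEY=<new> in a .env file, keeping a .bak copy.
rotate_secret() {
  file="$1"; key="$2"; new="$3"
  sed -i.bak "s|^${key}=.*|${key}=${new}|" "$file"
}

# Demo on a throwaway file:
cd "$(mktemp -d)"
printf 'LLDAP_JWT_SECRET=oldvalue\n' > demo.env
rotate_secret demo.env LLDAP_JWT_SECRET "$(openssl rand -hex 32)"
```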

### 5. Limit Secret Access

- Don't share raw secrets over email/chat
- Use your password manager's sharing features
- Delete shared secrets when no longer needed

## Verification

### Check All Secrets Are Set

```bash
# Should return 0 (no changeme_ values left)
grep -r "changeme_" ~/homelab/compose | wc -l
```

### Test Service Startup

```bash
# Start a service and check for password errors
cd ~/homelab/compose/core/lldap
docker compose up -d
docker compose logs

# Should NOT see:
# - "invalid password"
# - "authentication failed"
# - "secret not set"
```

### Verify SSO Works

1. Start LLDAP and Tinyauth
2. Access a protected service (e.g., https://tasks.fig.systems)
3. You should be redirected to auth.fig.systems
4. Log in with LLDAP credentials
5. You should be redirected back to the service

If this works, your LLDAP ↔ Tinyauth passwords match! ✅
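The same check can be scripted. This sketch inspects response headers (e.g. captured with `curl -sI https://tasks.fig.systems`) for a redirect to the auth host; the header block passed in below is canned sample data, not a live response:

```shell
# Pass captured response headers; expect a 301/302/307 pointing at the auth host.
check_sso_redirect() {
  echo "$1" | grep -qi '^HTTP/.* 30[127]' && \
  echo "$1" | grep -qi '^location: https://auth\.' && \
  echo "SSO redirect OK" || echo "no SSO redirect"
}

check_sso_redirect "$(printf 'HTTP/2 302\nlocation: https://auth.fig.systems/login\n')"   # → SSO redirect OK
```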

## Common Mistakes

### ❌ Using Weak Passwords

**Don't:**
```env
LLDAP_LDAP_USER_PASS=password123
```

**Do:**
```env
LLDAP_LDAP_USER_PASS=MyS3cur3P@ssw0rd!2024#HomeL@b
```

### ❌ Mismatched Passwords

**Don't:**
```env
# In lldap/.env
LLDAP_LDAP_USER_PASS=password1

# In tinyauth/.env
LDAP_BIND_PASSWORD=password2  # Different!
```

**Do:**
```env
# In lldap/.env
LLDAP_LDAP_USER_PASS=MyS3cur3P@ssw0rd!2024#HomeL@b

# In tinyauth/.env
LDAP_BIND_PASSWORD=MyS3cur3P@ssw0rd!2024#HomeL@b  # Same!
```

### ❌ Using the Same Secret Everywhere

**Don't:**
```env
# Same secret in multiple places
LLDAP_JWT_SECRET=abc123
NEXTAUTH_SECRET=abc123
SESSION_SECRET=abc123
```

**Do:**
```env
# Unique secret for each
LLDAP_JWT_SECRET=a1b2c3d4e5f67890...
NEXTAUTH_SECRET=f6g7h8i9j0k1l2m3...
SESSION_SECRET=x9y8z7w6v5u4t3s2...
```

### ❌ Forgetting to Update Both Password Fields

In Vikunja `.env`, both must match:
```env
# Both must be the same!
VIKUNJA_DATABASE_PASSWORD=aB3dEf7HiJ9kLmN2oPqR5sTuV8wXyZ1
POSTGRES_PASSWORD=aB3dEf7HiJ9kLmN2oPqR5sTuV8wXyZ1
```
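A quick way to catch this mistake is a helper that compares the two fields directly (the file path is whatever your layout uses):

```shell
# Print "match" when both Vikunja password fields in a .env file are equal and non-empty.
check_vikunja_env() {
  a=$(grep '^VIKUNJA_DATABASE_PASSWORD=' "$1" | cut -d= -f2-)
  b=$(grep '^POSTGRES_PASSWORD=' "$1" | cut -d= -f2-)
  if [ -n "$a" ] && [ "$a" = "$b" ]; then echo "match"; else echo "MISMATCH"; fi
}

# Demo on a throwaway file:
printf 'VIKUNJA_DATABASE_PASSWORD=s3cret\nPOSTGRES_PASSWORD=s3cret\n' > /tmp/vikunja-demo.env
check_vikunja_env /tmp/vikunja-demo.env   # → match
```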

## Troubleshooting

### "Authentication failed" in Tinyauth

**Cause**: `LDAP_BIND_PASSWORD` doesn't match `LLDAP_LDAP_USER_PASS`

**Fix**:
```bash
# Check LLDAP password
grep LLDAP_LDAP_USER_PASS ~/homelab/compose/core/lldap/.env

# Check Tinyauth password
grep LDAP_BIND_PASSWORD ~/homelab/compose/core/tinyauth/.env

# They should be identical!
```

### "Invalid JWT" errors

**Cause**: The JWT secret is too short or malformed

**Fix**:
```bash
# Regenerate with proper length
openssl rand -hex 32

# Update in .env file
```

### "Database connection failed"

**Cause**: Database password mismatch

**Fix**:
```bash
# Check both password fields match
grep -E "(POSTGRES_PASSWORD|DATABASE_PASSWORD)" compose/services/vikunja/.env

# Both should be identical
```

## Next Steps

Once all secrets are configured:
1. ✅ [Deploy services](../getting-started.md#step-6-deploy-services)
2. ✅ [Configure SSO](../services/sso-setup.md)
3. ✅ [Set up backups](../operations/backups.md)
4. ✅ Store secrets in a password manager
5. ✅ Create an encrypted backup of `.env` files

## Reference

### Quick Command Reference

```bash
# Generate 64-char hex
openssl rand -hex 32

# Generate 32-char password
openssl rand -base64 32 | tr -d /=+ | cut -c1-32

# Generate 32-char hex
openssl rand -hex 16

# Find all changeme_ values
grep -r "changeme_" compose/

# Count remaining secrets to configure
grep -r "changeme_" compose/ | wc -l

# Backup all .env files (encrypted)
tar czf env-files.tar.gz $(find compose -name ".env")
gpg -c env-files.tar.gz
```

### Secret Types Quick Reference

| Secret Type | Command | Example Length | Used By |
|-------------|---------|----------------|---------|
| JWT Secret | `openssl rand -hex 32` | 64 chars | LLDAP, Vikunja, NextAuth |
| Session Secret | `openssl rand -hex 32` | 64 chars | Tinyauth |
| DB Password | `openssl rand -base64 32 \| tr -d /=+ \| cut -c1-32` | 32 chars | Postgres, Immich |
| API Key | `openssl rand -hex 16` | 32 chars | Meilisearch |
| Admin Password | Manual | 16+ chars | LLDAP admin |

---

**Remember**: Strong, unique secrets are your first line of defense. Take the time to generate them properly! 🔐

567 docs/quick-reference.md (new file)
@@ -0,0 +1,567 @@
# Quick Reference Guide

Fast reference for common tasks and commands.

## Service URLs

All services are accessible via:
- Primary domain: `*.fig.systems`
- Secondary domain: `*.edfig.dev`

### Core Services
```
https://traefik.fig.systems   # Reverse proxy dashboard
https://lldap.fig.systems     # User directory
https://auth.fig.systems      # SSO authentication
```

### Dashboard & Management
```
https://home.fig.systems      # Homarr dashboard (START HERE!)
https://backup.fig.systems    # Backrest backup manager
```

### Media Services
```
https://flix.fig.systems      # Jellyfin media server
https://photos.fig.systems    # Immich photo library
https://requests.fig.systems  # Jellyseerr media requests
https://sonarr.fig.systems    # TV show automation
https://radarr.fig.systems    # Movie automation
https://sabnzbd.fig.systems   # Usenet downloader
https://qbt.fig.systems       # qBittorrent client
```

### Utility Services
```
https://links.fig.systems     # Linkwarden bookmarks
https://tasks.fig.systems     # Vikunja task management
https://garage.fig.systems    # LubeLogger vehicle tracking
https://books.fig.systems     # Calibre-web ebook library
https://booklore.fig.systems  # Book tracking
https://rss.fig.systems       # FreshRSS reader
https://files.fig.systems     # File Browser
```

## Common Commands

### Docker Compose

```bash
# Start service
cd ~/homelab/compose/path/to/service
docker compose up -d

# View logs
docker compose logs -f

# Restart service
docker compose restart

# Stop service
docker compose down

# Update and restart
docker compose pull
docker compose up -d

# Rebuild service
docker compose up -d --force-recreate
```

### Docker Management

```bash
# List running containers
docker ps

# List all containers (including stopped)
docker ps -a

# View logs
docker logs <container_name>
docker logs -f <container_name>  # Follow logs

# Execute command in container
docker exec -it <container_name> bash

# View resource usage
docker stats

# Remove stopped containers
docker container prune

# Remove unused images
docker image prune -a

# Remove unused volumes (CAREFUL!)
docker volume prune

# Complete cleanup
docker system prune -a --volumes
```

### Service Management

```bash
# Start all core services (subshell so a failure can't strand the loop in a subdirectory)
cd ~/homelab/compose/core
for dir in traefik lldap tinyauth; do
  (cd "$dir" && docker compose up -d)
done

# Stop all services
cd ~/homelab
find compose -name "compose.yaml" -execdir docker compose down \;

# Restart single service
cd ~/homelab/compose/services/servicename
docker compose restart

# View all running containers
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
```

### System Checks

```bash
# Check all containers
docker ps --format "table {{.Names}}\t{{.Status}}"

# Check network
docker network inspect homelab

# Check disk usage
docker system df
df -h

# Check logs for errors
docker compose logs --tail=100 | grep -i error

# Test DNS resolution
dig home.fig.systems +short

# Test SSL
curl -I https://home.fig.systems
```

## Secret Generation

```bash
# JWT/Session secrets (64 char)
openssl rand -hex 32

# Database passwords (32 char alphanumeric)
openssl rand -base64 32 | tr -d /=+ | cut -c1-32

# API keys (32 char hex)
openssl rand -hex 16

# Find what needs updating
grep -r "changeme_" ~/homelab/compose
```

## Troubleshooting

### Service Won't Start

```bash
# Check logs
docker compose logs

# Check container status
docker compose ps

# Check for port conflicts
sudo netstat -tulpn | grep :80
sudo netstat -tulpn | grep :443

# Recreate container
docker compose down
docker compose up -d
```

### SSL Certificate Issues

```bash
# Check Traefik logs
docker logs traefik | grep -i certificate

# Check Let's Encrypt logs
docker logs traefik | grep -i letsencrypt

# Verify DNS
dig home.fig.systems +short

# Test port 80 accessibility
curl -I http://home.fig.systems
```

### SSO Not Working

```bash
# Check LLDAP
docker logs lldap

# Check Tinyauth
docker logs tinyauth

# Verify passwords match
grep LLDAP_LDAP_USER_PASS ~/homelab/compose/core/lldap/.env
grep LDAP_BIND_PASSWORD ~/homelab/compose/core/tinyauth/.env

# Test LDAP connection
docker exec tinyauth nc -zv lldap 3890
```

### Database Connection Failures

```bash
# Check database container
docker ps | grep postgres

# View database logs
docker logs <db_container_name>

# Test connection from app container
docker exec <app_container> nc -zv <db_container> 5432

# Verify password in .env
grep POSTGRES_PASSWORD .env
```

## File Locations

### Configuration
```
~/homelab/compose/           # All services
~/homelab/compose/core/      # Core infrastructure
~/homelab/compose/media/     # Media services
~/homelab/compose/services/  # Utility services
```

### Service Data
```
compose/<service>/config/  # Service configuration
compose/<service>/data/    # Service data
compose/<service>/db/      # Database files
compose/<service>/.env     # Environment variables
```

### Media Files
```
/media/movies/     # Movies
/media/tv/         # TV shows
/media/music/      # Music
/media/photos/     # Photos
/media/books/      # Books
/media/downloads/  # Active downloads
/media/complete/   # Completed downloads
```

### Logs
```
docker logs <container_name>  # Container logs
compose/<service>/logs/       # Service-specific logs (if configured)
/var/lib/docker/volumes/      # Volume data
```

## Network

### Create Network
```bash
docker network create homelab
```

### Inspect Network
```bash
docker network inspect homelab
```

### Connect Container to Network
```bash
docker network connect homelab <container_name>
```

## GPU (NVIDIA GTX 1070)

### Check GPU Status
```bash
nvidia-smi
```

### Test GPU in Docker
```bash
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```

### Monitor GPU Usage
```bash
watch -n 1 nvidia-smi
```

### Check GPU in Container
```bash
docker exec jellyfin nvidia-smi
docker exec immich_machine_learning nvidia-smi
```

## Backup

### Backup Configuration Files
```bash
cd ~/homelab
tar czf homelab-config-$(date +%Y%m%d).tar.gz \
  $(find compose -name ".env") \
  $(find compose -name "compose.yaml")
```

### Backup Service Data
```bash
# Example: Backup Immich
cd ~/homelab/compose/media/frontend/immich
tar czf immich-backup-$(date +%Y%m%d).tar.gz upload/ config/
```

### Restore Configuration
```bash
tar xzf homelab-config-YYYYMMDD.tar.gz
```

## Updates

### Update Single Service
```bash
cd ~/homelab/compose/path/to/service
docker compose pull
docker compose up -d
```

### Update All Services
```bash
cd ~/homelab
for dir in $(find compose -name "compose.yaml" -exec dirname {} \;); do
  echo "Updating $dir"
  cd "$dir"
  docker compose pull
  docker compose up -d
  cd ~/homelab
done
```

### Update Docker
```bash
sudo apt update
sudo apt upgrade docker-ce docker-ce-cli containerd.io
```

## Performance

### Check Resource Usage
```bash
# Overall system
htop

# Docker containers
docker stats

# Disk usage
df -h
docker system df

# Network usage
iftop
```

### Clean Up Disk Space
```bash
# Docker cleanup
docker system prune -a

# Remove old logs
sudo journalctl --vacuum-time=7d

# Find large files
du -h /media | sort -rh | head -20
```

## DNS Configuration

### Cloudflare Example
```
Type: A
Name: *
Content: YOUR_SERVER_IP
Proxy: Off (disable for Let's Encrypt)
TTL: Auto
```

### Local DNS (Pi-hole/hosts file)
```
192.168.1.100 home.fig.systems
192.168.1.100 flix.fig.systems
192.168.1.100 photos.fig.systems
# ... etc
```
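Local DNS entries are mechanical, so a tiny loop can generate them (adjust the IP and the subdomain list to your own setup):

```shell
# Emit one hosts-file line per subdomain; append the output to /etc/hosts or a Pi-hole custom list.
IP=192.168.1.100
for sub in home flix photos requests sonarr radarr tasks links rss files; do
  echo "$IP $sub.fig.systems"
done
```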

## Environment Variables

### List All Services with Secrets
```bash
find ~/homelab/compose -name ".env"
```

### Check for Unconfigured Secrets
```bash
grep -r "changeme_" ~/homelab/compose | wc -l
# Should be 0
```

### Backup All .env Files
```bash
cd ~/homelab
tar czf env-files-$(date +%Y%m%d).tar.gz $(find compose -name ".env")
gpg -c env-files-$(date +%Y%m%d).tar.gz
```

## Monitoring

### Service Health
```bash
# List containers that are NOT up (use -a so stopped containers show too)
docker ps -a --format "{{.Names}}: {{.Status}}" | grep -v "Up"

# Check for restarts
docker ps --format "{{.Names}}: {{.Status}}" | grep "Restarting"

# Check logs for errors
docker compose logs --tail=100 | grep -i error
```

### SSL Certificate Expiry
```bash
# Check cert expiry
echo | openssl s_client -servername home.fig.systems -connect home.fig.systems:443 2>/dev/null | openssl x509 -noout -dates
```
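To turn the `notAfter=` line into a number you can alert on, this sketch (GNU `date`; the sample value below is canned, not from a live cert) computes the days remaining:

```shell
# Days until expiry, given the notAfter line from `openssl x509 -noout -enddate`.
days_left() {
  end=${1#notAfter=}
  echo $(( ($(date -d "$end" +%s) - $(date +%s)) / 86400 ))
}

days_left "notAfter=Jan  1 00:00:00 2030 GMT"
```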

### Disk Space
```bash
# Overall
df -h

# Docker
docker system df

# Media
du -sh /media/*
```

## Common File Paths

```bash
# Core services
~/homelab/compose/core/traefik/
~/homelab/compose/core/lldap/
~/homelab/compose/core/tinyauth/

# Media
~/homelab/compose/media/frontend/jellyfin/
~/homelab/compose/media/frontend/immich/
~/homelab/compose/media/automation/sonarr/

# Utilities
~/homelab/compose/services/homarr/
~/homelab/compose/services/backrest/
~/homelab/compose/services/linkwarden/

# Documentation
~/homelab/docs/
~/homelab/README.md
```

## Port Reference

```
80   - HTTP (Traefik)
443  - HTTPS (Traefik)
3890 - LLDAP
6881 - qBittorrent (TCP/UDP)
8096 - Jellyfin
2283 - Immich
```

## Default Credentials

⚠️ **Change these immediately after first login!**

### qBittorrent
```
Username: admin
Password: adminadmin
```

### MicroBin
```
Check compose/services/microbin/.env
MICROBIN_ADMIN_USERNAME
MICROBIN_ADMIN_PASSWORD
```

### All Other Services
Use SSO (LLDAP) or create an admin account on first visit.

## Quick Deployment

### Deploy Everything
```bash
cd ~/homelab
chmod +x deploy-all.sh
./deploy-all.sh
```

### Deploy Core Only
```bash
cd ~/homelab/compose/core/traefik && docker compose up -d
cd ../lldap && docker compose up -d
cd ../tinyauth && docker compose up -d
```

### Deploy Media Stack
```bash
# Subshells keep the loop in the parent directory even if one service fails to start
cd ~/homelab/compose/media/frontend
for dir in */; do (cd "$dir" && docker compose up -d); done

cd ~/homelab/compose/media/automation
for dir in */; do (cd "$dir" && docker compose up -d); done
```

## Emergency Procedures

### Stop All Services
```bash
cd ~/homelab
find compose -name "compose.yaml" -execdir docker compose down \;
```

### Remove All Containers (Nuclear Option)
```bash
docker stop $(docker ps -aq)
docker rm $(docker ps -aq)
```

### Reset Network
```bash
docker network rm homelab
docker network create homelab
```

### Reset Service
```bash
cd ~/homelab/compose/path/to/service
docker compose down -v  # REMOVES VOLUMES!
docker compose up -d
```

---

**For detailed guides, see the [docs folder](./README.md).**

366 docs/services/README.md (new file)
@@ -0,0 +1,366 @@
# Services Overview

Complete list of all services in the homelab with descriptions and use cases.

## Core Infrastructure (Required)

### Traefik
- **URL**: https://traefik.fig.systems
- **Purpose**: Reverse proxy with automatic SSL/TLS
- **Why**: Routes all traffic, manages Let's Encrypt certificates
- **Required**: ✅ Yes - Nothing works without this

### LLDAP
- **URL**: https://lldap.fig.systems
- **Purpose**: Lightweight LDAP directory for user management
- **Why**: Centralized user database for SSO
- **Required**: ✅ Yes (if using SSO)
- **Default Login**: admin / <your LLDAP_LDAP_USER_PASS>

### Tinyauth
- **URL**: https://auth.fig.systems
- **Purpose**: SSO forward-authentication middleware
- **Why**: Single login for all services
- **Required**: ✅ Yes (if using SSO)

## Dashboard & Management

### Homarr
- **URL**: https://home.fig.systems
- **Purpose**: Service dashboard with auto-discovery
- **Why**: See all your services in one place, monitor status
- **Required**: ⬜ No, but highly recommended
- **Features**:
  - Auto-discovers Docker containers
  - Customizable widgets
  - Service status monitoring
  - Integration with media services

### Backrest
- **URL**: https://backup.fig.systems
- **Purpose**: Backup management with web UI (uses Restic)
- **Why**: Encrypted, deduplicated backups to Backblaze B2
- **Required**: ⬜ No, but critical for data safety
- **Features**:
  - Web-based backup management
  - Scheduled backups
  - File browsing and restore
  - Encryption at rest
  - S3-compatible storage support

## Media Services

### Jellyfin
- **URL**: https://flix.fig.systems
- **Purpose**: Media server (Netflix alternative)
- **Why**: Watch your movies/TV shows anywhere
- **Required**: ⬜ No
- **Features**:
  - Stream to any device
  - Hardware transcoding (with GPU)
  - Live TV & DVR
  - Mobile apps available
  - Subtitle support

### Immich
- **URL**: https://photos.fig.systems
- **Purpose**: Photo and video management (Google Photos alternative)
- **Why**: Self-hosted photo library with ML features
- **Required**: ⬜ No
- **Features**:
  - Face recognition (with GPU)
  - Object detection
  - Mobile apps with auto-upload
  - Timeline view
  - Album organization

### Jellyseerr
- **URL**: https://requests.fig.systems
- **Purpose**: Media request management
- **Why**: Let users request movies/shows
- **Required**: ⬜ No (only if using Sonarr/Radarr)
- **Features**:
  - Request movies and TV shows
  - Integration with Jellyfin
  - User permissions
  - Notification system

## Media Automation

### Sonarr
- **URL**: https://sonarr.fig.systems
- **Purpose**: TV show automation
- **Why**: Automatically download and organize TV shows
- **Required**: ⬜ No
- **Features**:
  - Episode tracking
  - Automatic downloading
  - Quality management
  - Calendar view

### Radarr
- **URL**: https://radarr.fig.systems
- **Purpose**: Movie automation
- **Why**: Automatically download and organize movies
- **Required**: ⬜ No
- **Features**:
  - Movie tracking
  - Automatic downloading
  - Quality profiles
  - Collection management

### SABnzbd
- **URL**: https://sabnzbd.fig.systems
- **Purpose**: Usenet downloader
- **Why**: Download from Usenet newsgroups
- **Required**: ⬜ No (only if using Usenet)
- **Features**:
  - Fast downloads
  - Automatic verification and repair
  - Category-based processing
  - Password support

### qBittorrent
- **URL**: https://qbt.fig.systems
- **Purpose**: BitTorrent client
- **Why**: Download torrents
- **Required**: ⬜ No (only if using torrents)
- **Features**:
  - Web-based UI
  - RSS support
  - Sequential downloading
  - IP filtering

## Productivity Services

### Linkwarden
- **URL**: https://links.fig.systems
- **Purpose**: Bookmark manager
- **Why**: Save and organize web links
- **Required**: ⬜ No
- **Features**:
  - Collaborative bookmarking
  - Full-text search
  - Screenshots and PDFs
  - Tags and collections
  - Browser extensions

### Vikunja
- **URL**: https://tasks.fig.systems
- **Purpose**: Task management (Todoist alternative)
- **Why**: Track tasks and projects
- **Required**: ⬜ No
- **Features**:
  - Kanban boards
  - Lists and sub-tasks
  - Due dates and reminders
  - Collaboration
  - CalDAV support

### FreshRSS
- **URL**: https://rss.fig.systems
- **Purpose**: RSS/Atom feed reader
- **Why**: Aggregate news and blogs
- **Required**: ⬜ No
- **Features**:
  - Web-based reader
  - Mobile apps via API
  - Filtering and search
  - Multi-user support

## Specialized Services

### LubeLogger
- **URL**: https://garage.fig.systems
- **Purpose**: Vehicle maintenance tracker
- **Why**: Track mileage, maintenance, costs
- **Required**: ⬜ No
- **Features**:
  - Service records
  - Fuel tracking
  - Cost analysis
  - Reminder system
  - Export data

### Calibre-web
- **URL**: https://books.fig.systems
- **Purpose**: Ebook library manager
- **Why**: Manage and read ebooks
- **Required**: ⬜ No
- **Features**:
  - Web-based ebook reader
  - Format conversion
  - Metadata management
  - Send to Kindle
  - OPDS support

### Booklore
- **URL**: https://booklore.fig.systems
- **Purpose**: Book tracking and reviews
- **Why**: Track reading progress and reviews
- **Required**: ⬜ No
- **Features**:
  - Reading lists
  - Progress tracking
  - Reviews and ratings
  - Import from Goodreads

### RSSHub
- **URL**: https://rsshub.fig.systems
- **Purpose**: RSS feed generator
- **Why**: Generate RSS feeds for sites without them
- **Required**: ⬜ No
- **Features**:
  - 1000+ source support
  - Custom routes
  - Filter and transform feeds

### MicroBin
- **URL**: https://paste.fig.systems
- **Purpose**: Encrypted pastebin with file upload
- **Why**: Share code snippets and files
- **Required**: ⬜ No
- **Features**:
  - Encryption support
  - File uploads
  - Burn after reading
  - Custom expiry
  - Password protection

### File Browser
- **URL**: https://files.fig.systems
- **Purpose**: Web-based file manager
- **Why**: Browse and manage media files
- **Required**: ⬜ No
- **Features**:
  - Upload/download files
  - Preview images and videos
  - Text editor
  - File sharing
  - User permissions
|
||||||
|
|
||||||
|
## Service Categories

### Minimum Viable Setup

Just want to get started? Deploy these:

1. Traefik
2. LLDAP
3. Tinyauth
4. Homarr

### Media Enthusiast Setup

For streaming media:

1. Core services (above)
2. Jellyfin
3. Sonarr
4. Radarr
5. qBittorrent
6. Jellyseerr

### Complete Homelab

Everything:

1. Core services
2. All media services
3. All productivity services
4. Backrest for backups
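The tiers above imply a deployment order. A small helper can make that order explicit and scriptable; this is a sketch, and the tier names and service lists are taken from the lists above (the `compose/` layout is illustrative):

```shell
# deploy_order TIER prints the services for that tier in the order
# they should be brought up. Tiers mirror the categories above.
deploy_order() {
  core="traefik lldap tinyauth homarr"
  media="jellyfin sonarr radarr qbittorrent jellyseerr"
  case "$1" in
    minimal) echo $core ;;
    media)   echo $core $media ;;
    *)       echo "unknown tier: $1" >&2; return 1 ;;
  esac
}

# e.g. for svc in $(deploy_order media); do (cd compose/*/$svc && docker compose up -d); done
deploy_order minimal   # -> traefik lldap tinyauth homarr
```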
## Resource Requirements

### Light (2 Core, 4GB RAM)

- Core services
- Homarr
- 2-3 utility services

### Medium (4 Core, 8GB RAM)

- Core services
- Media services (without transcoding)
- Most utility services

### Heavy (6+ Core, 16GB+ RAM)

- All services
- GPU transcoding
- Multiple concurrent users
## Quick Deploy Checklist

**Before deploying a service:**

- ✅ Core infrastructure is running
- ✅ `.env` file configured with secrets
- ✅ DNS record created
- ✅ Understand what the service does
- ✅ Know how to configure it

**After deploying:**

- ✅ Check container is running: `docker ps`
- ✅ Check logs: `docker compose logs`
- ✅ Access web UI and complete setup
- ✅ Test SSO if applicable
- ✅ Add to Homarr dashboard
## Service Dependencies

```
Traefik (required for all)
├── LLDAP
│   └── Tinyauth
│       └── All SSO-protected services
├── Jellyfin
│   └── Jellyseerr
│       ├── Sonarr
│       └── Radarr
│           ├── SABnzbd
│           └── qBittorrent
├── Immich
│   └── Backrest (for backups)
└── All other services
```
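Within a single compose file, part of this ordering can be encoded with `depends_on` so Docker enforces it; across separate compose projects (as in this repo) the order has to be enforced by deploying in sequence. A sketch of the in-file form, where image names and the healthcheck are illustrative assumptions, not taken from the repo:

```yaml
# Sketch: start tinyauth only after lldap reports healthy.
services:
  lldap:
    image: lldap/lldap:stable
    healthcheck:
      test: ["CMD", "wget", "-q", "--spider", "http://localhost:17170"]
      interval: 10s
      retries: 5
  tinyauth:
    image: tinyauth:latest
    depends_on:
      lldap:
        condition: service_healthy
```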
## When to Use Each Service

### Use Jellyfin if:

- You have a movie/TV collection
- Want to stream from anywhere
- Have family/friends who want access
- Want apps on all devices

### Use Immich if:

- You want a Google Photos alternative
- Have lots of photos to manage
- Want ML features (face recognition)
- Have mobile devices

### Use Sonarr/Radarr if:

- You watch a lot of TV/movies
- Want automatic downloads
- Don't want to manually search
- Want quality control

### Use Backrest if:

- You care about your data (you should!)
- Want encrypted cloud backups
- Have important photos/documents
- Want an easy restore process

### Use Linkwarden if:

- You save lots of bookmarks
- Want full-text search
- Share links with a team
- Want offline archives

### Use Vikunja if:

- You need task management
- Work with teams
- Want Kanban boards
- Need CalDAV for calendar integration
## Next Steps

1. Review which services you actually need
2. Start with core + 2-3 services
3. Deploy and configure each fully
4. Add more services gradually
5. Monitor resource usage

---

**Remember**: You don't need all services. Start small and add what you actually use!

---

**File:** `docs/setup/almalinux-vm.md` (new file, 775 lines)

# AlmaLinux 9.6 VM Setup Guide

Complete setup guide for the homelab VM on AlmaLinux 9.6 running on Proxmox VE 9.

## Hardware Context

- **Host**: Proxmox VE 9 (Debian 13 based)
  - CPU: AMD Ryzen 5 7600X (6C/12T, 5.3 GHz boost)
  - GPU: NVIDIA GTX 1070 (8GB VRAM)
  - RAM: 32GB DDR5
- **VM Allocation**:
  - OS: AlmaLinux 9.6 (RHEL 9 compatible)
  - CPU: 8 vCPUs
  - RAM: 24GB
  - Disk: 500GB+ (expandable)
  - GPU: GTX 1070 (PCIe passthrough)
## Proxmox VM Creation

### 1. Create VM

```bash
# On Proxmox host
qm create 100 \
  --name homelab \
  --memory 24576 \
  --cores 8 \
  --cpu host \
  --sockets 1 \
  --net0 virtio,bridge=vmbr0 \
  --scsi0 local-lvm:500 \
  --ostype l26 \
  --boot order=scsi0

# Attach AlmaLinux ISO
qm set 100 --ide2 local:iso/AlmaLinux-9.6-x86_64-dvd.iso,media=cdrom

# Enable UEFI
qm set 100 --bios ovmf --efidisk0 local-lvm:1
```

### 2. GPU Passthrough

**Find GPU PCI address:**
```bash
lspci | grep -i nvidia
# Example output: 01:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1070]
```

**Enable IOMMU in Proxmox:**

Edit `/etc/default/grub`:
```bash
# For AMD CPU (Ryzen 5 7600X)
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
```

Update GRUB and reboot:
```bash
update-grub
reboot
```

**Verify IOMMU:**
```bash
dmesg | grep -e DMAR -e IOMMU
# Should show IOMMU enabled
```

**Add GPU to VM:**

Edit `/etc/pve/qemu-server/100.conf`:
```
hostpci0: 0000:01:00,pcie=1,x-vga=1
```

Or via command:
```bash
qm set 100 --hostpci0 0000:01:00,pcie=1,x-vga=1
```

**Blacklist GPU on host:**

Edit `/etc/modprobe.d/blacklist-nvidia.conf`:
```
blacklist nouveau
blacklist nvidia
blacklist nvidia_drm
blacklist nvidia_modeset
blacklist nvidia_uvm
```

Update initramfs:
```bash
update-initramfs -u
reboot
```
## AlmaLinux Installation

### 1. Install AlmaLinux 9.6

Start VM and follow installer:

1. **Language**: English (US)
2. **Installation Destination**: Use all space, automatic partitioning
3. **Network**: Enable and set hostname to `homelab.fig.systems`
4. **Software Selection**: Minimal Install
5. **Root Password**: Set strong password
6. **User Creation**: Create admin user (e.g., `homelab`)

### 2. Post-Installation Configuration

```bash
# SSH into VM
ssh homelab@<vm-ip>

# Update system
sudo dnf update -y

# Install essential tools
sudo dnf install -y \
  vim \
  git \
  curl \
  wget \
  htop \
  ncdu \
  tree \
  tmux \
  bind-utils \
  net-tools \
  firewalld

# Enable and configure firewall
sudo systemctl enable --now firewalld
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload
```

### 3. Configure Static IP (Optional)

```bash
# Find connection name
nmcli connection show

# Set static IP (example: 192.168.1.100)
sudo nmcli connection modify "System eth0" \
  ipv4.addresses 192.168.1.100/24 \
  ipv4.gateway 192.168.1.1 \
  ipv4.dns "1.1.1.1,8.8.8.8" \
  ipv4.method manual

# Restart network
sudo nmcli connection down "System eth0"
sudo nmcli connection up "System eth0"
```
## Docker Installation

### 1. Install Docker Engine

```bash
# Remove old versions
sudo dnf remove docker \
  docker-client \
  docker-client-latest \
  docker-common \
  docker-latest \
  docker-latest-logrotate \
  docker-logrotate \
  docker-engine

# Add Docker repository
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

# Install Docker
sudo dnf install -y \
  docker-ce \
  docker-ce-cli \
  containerd.io \
  docker-buildx-plugin \
  docker-compose-plugin

# Start Docker
sudo systemctl enable --now docker

# Verify
sudo docker run hello-world
```

### 2. Configure Docker

**Add user to docker group:**
```bash
sudo usermod -aG docker $USER
newgrp docker

# Verify (no sudo needed)
docker ps
```

**Configure Docker daemon:**

Create `/etc/docker/daemon.json`:
```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "storage-driver": "overlay2",
  "features": {
    "buildkit": true
  }
}
```

Restart Docker:
```bash
sudo systemctl restart docker
```
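A malformed `daemon.json` will stop the Docker daemon from starting at all, so it is worth validating the file before restarting. A sketch, assuming `python3` is installed (it validates a candidate file first; the final copy and restart are left commented):

```shell
# Write the candidate config, validate it, and only then install it.
# python3 -m json.tool exits non-zero on malformed JSON, so a typo
# (stray comma, missing quote) is caught before Docker restarts.
cat > /tmp/daemon.json.candidate <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m", "max-file": "3" },
  "storage-driver": "overlay2"
}
EOF

if python3 -m json.tool /tmp/daemon.json.candidate > /dev/null; then
  echo "daemon.json candidate is valid JSON"
  # sudo cp /tmp/daemon.json.candidate /etc/docker/daemon.json
  # sudo systemctl restart docker
else
  echo "invalid JSON: fix before installing" >&2
fi
```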
## NVIDIA GPU Setup

### 1. Install NVIDIA Drivers

```bash
# Add EPEL repository
sudo dnf install -y epel-release

# Add NVIDIA repository
sudo dnf config-manager --add-repo \
  https://developer.download.nvidia.com/compute/cuda/repos/rhel9/x86_64/cuda-rhel9.repo

# Install drivers
sudo dnf install -y \
  nvidia-driver \
  nvidia-driver-cuda \
  nvidia-settings \
  nvidia-persistenced

# Reboot to load drivers
sudo reboot
```

### 2. Verify GPU

```bash
# Check driver version
nvidia-smi

# Expected output:
# +-----------------------------------------------------------------------------+
# | NVIDIA-SMI 535.xx.xx    Driver Version: 535.xx.xx    CUDA Version: 12.2     |
# |-------------------------------+----------------------+----------------------+
# | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
# |   0  GeForce GTX 1070    Off  | 00000000:01:00.0 Off |                  N/A |
# +-------------------------------+----------------------+----------------------+
```

### 3. Install NVIDIA Container Toolkit

```bash
# Add NVIDIA Container Toolkit repository
sudo dnf config-manager --add-repo \
  https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo

# Install toolkit
sudo dnf install -y nvidia-container-toolkit

# Configure Docker to use nvidia runtime
sudo nvidia-ctk runtime configure --runtime=docker

# Restart Docker
sudo systemctl restart docker

# Test GPU in container
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```
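For Compose-managed services, the same access that `--gpus all` grants on the command line is requested declaratively via a device reservation. A sketch using the Compose specification's GPU syntax (the service and image names are illustrative):

```yaml
# Sketch: expose the passed-through GPU to one service.
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```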
## Storage Setup

### 1. Create Media Directory

```bash
# Create media directory structure
sudo mkdir -p /media/{tv,movies,music,photos,books,audiobooks,comics,homemovies}
sudo mkdir -p /media/{downloads,complete,incomplete}

# Set ownership
sudo chown -R $USER:$USER /media

# Set permissions
chmod -R 755 /media
```

### 2. Mount Additional Storage (Optional)

If using a separate disk for media:

```bash
# Find disk
lsblk

# Format disk (example: /dev/sdb)
sudo mkfs.ext4 /dev/sdb

# Get UUID
sudo blkid /dev/sdb

# Add to /etc/fstab
echo "UUID=<uuid> /media ext4 defaults,nofail 0 2" | sudo tee -a /etc/fstab

# Mount
sudo mount -a
```
## Homelab Repository Setup

### 1. Clone Repository

```bash
# Create workspace
mkdir -p ~/homelab
cd ~/homelab

# Clone repository
git clone https://github.com/efigueroa/homelab.git .

# Or if using SSH
git clone git@github.com:efigueroa/homelab.git .
```

### 2. Create Docker Network

```bash
# Create homelab network
docker network create homelab

# Verify
docker network ls | grep homelab
```

### 3. Configure Environment Variables

```bash
# Generate secrets for all services
cd ~/homelab

# LLDAP
cd compose/core/lldap
openssl rand -hex 32 > /tmp/lldap_jwt_secret
openssl rand -base64 32 | tr -d /=+ | cut -c1-32 > /tmp/lldap_pass
# Update .env with generated secrets

# Tinyauth
cd ../tinyauth
openssl rand -hex 32 > /tmp/tinyauth_session
# Update .env (LDAP_BIND_PASSWORD must match LLDAP)

# Continue for all services...
```

See [`docs/guides/secrets-management.md`](../guides/secrets-management.md) for the complete guide.
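The repeated `openssl rand` calls can be wrapped in a small helper that appends a `KEY=value` line directly to a `.env` file and refuses to overwrite an existing key. A sketch; the variable names and the demo file path are illustrative:

```shell
# gen_secret KEY FILE appends KEY=<64-hex-char secret> to FILE,
# skipping keys that are already present so reruns are safe.
gen_secret() {
  key="$1"; file="$2"
  if grep -q "^${key}=" "$file" 2>/dev/null; then
    echo "$key already set in $file, skipping"
    return 0
  fi
  echo "${key}=$(openssl rand -hex 32)" >> "$file"
}

rm -f /tmp/demo.env   # start fresh for the demo
gen_secret LLDAP_JWT_SECRET /tmp/demo.env
gen_secret TINYAUTH_SESSION_SECRET /tmp/demo.env
```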
## SELinux Configuration

AlmaLinux uses SELinux by default. Configure for Docker:

```bash
# Check SELinux status
getenforce
# Should show: Enforcing

# Allow Docker to access bind mounts
sudo setsebool -P container_manage_cgroup on

# If you encounter permission issues:

# Option 1: Add SELinux context to directories
sudo chcon -R -t container_file_t ~/homelab/compose
sudo chcon -R -t container_file_t /media

# Option 2: Use :Z flag in docker volumes (auto-relabels)
# Example: ./data:/data:Z

# Option 3: Set SELinux to permissive (not recommended)
# sudo setenforce 0
```
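Option 2 in compose form: the `:Z` suffix tells Docker to relabel the bind mount with a private SELinux context, while lowercase `:z` applies a shared label usable by several containers. A sketch (the service and image names are illustrative):

```yaml
# Sketch: SELinux-aware bind mounts for a service on AlmaLinux.
services:
  app:
    image: example/app:latest
    volumes:
      - ./config:/config:Z   # private label, this container only
      - /media:/media:z      # shared label, multiple containers
```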
## System Tuning

### 1. Increase File Limits

```bash
# Add to /etc/security/limits.conf
echo "* soft nofile 65536" | sudo tee -a /etc/security/limits.conf
echo "* hard nofile 65536" | sudo tee -a /etc/security/limits.conf

# Add to /etc/sysctl.conf
echo "fs.file-max = 65536" | sudo tee -a /etc/sysctl.conf
echo "fs.inotify.max_user_watches = 524288" | sudo tee -a /etc/sysctl.conf

# Apply
sudo sysctl -p
```

### 2. Optimize for Media Server

```bash
# Network tuning
echo "net.core.rmem_max = 134217728" | sudo tee -a /etc/sysctl.conf
echo "net.core.wmem_max = 134217728" | sudo tee -a /etc/sysctl.conf
echo "net.ipv4.tcp_rmem = 4096 87380 67108864" | sudo tee -a /etc/sysctl.conf
echo "net.ipv4.tcp_wmem = 4096 65536 67108864" | sudo tee -a /etc/sysctl.conf

# Apply
sudo sysctl -p
```

### 3. CPU Governor (Ryzen 5 7600X)

```bash
# Install cpupower
sudo dnf install -y kernel-tools

# Set to performance mode
sudo cpupower frequency-set -g performance

# Apply to all cores (note: this does not survive a reboot;
# reapply it at boot, e.g. from a systemd unit)
echo "performance" | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
```
## Deployment

### 1. Deploy Core Services

```bash
cd ~/homelab

# Create network
docker network create homelab

# Deploy Traefik
cd compose/core/traefik
docker compose up -d

# Deploy LLDAP
cd ../lldap
docker compose up -d

# Wait for LLDAP to be ready (30 seconds)
sleep 30

# Deploy Tinyauth
cd ../tinyauth
docker compose up -d
```

### 2. Configure LLDAP

```bash
# Access LLDAP web UI
# https://lldap.fig.systems

# 1. Login with admin credentials from .env
# 2. Create observer user for tinyauth
# 3. Create regular users
```

### 3. Deploy Monitoring

```bash
cd ~/homelab

# Deploy logging stack
cd compose/monitoring/logging
docker compose up -d

# Deploy uptime monitoring
cd ../uptime
docker compose up -d
```

### 4. Deploy Services

See [`README.md`](../../README.md) for complete deployment order.
## Verification

### 1. Check All Services

```bash
# List all running containers
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"

# Check networks
docker network ls

# Check volumes
docker volume ls
```

### 2. Test GPU Access

```bash
# Test in Jellyfin
docker exec jellyfin nvidia-smi

# Test in Ollama
docker exec ollama nvidia-smi

# Test in Immich
docker exec immich-machine-learning nvidia-smi
```

### 3. Test Logging

```bash
# Check Promtail is collecting logs
docker logs promtail | grep "clients configured"

# Access Grafana
# https://logs.fig.systems

# Query logs
# {container="traefik"}
```

### 4. Test SSL

```bash
# Check certificate
curl -vI https://sonarr.fig.systems 2>&1 | grep -i "subject:"

# Should show valid Let's Encrypt certificate
```
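The container check above can be rolled into a small report. To keep the logic testable, the function below reads the output of `docker ps -a --format '{{.Names}}\t{{.Status}}'` as plain text and flags anything not in an `Up` state; this is a sketch, not part of the repo:

```shell
# flag_down reads "name<TAB>status" lines on stdin and prints the
# names of containers whose status does not start with "Up".
flag_down() {
  while IFS="$(printf '\t')" read -r name status; do
    case "$status" in
      Up*) ;;
      *)   echo "DOWN: $name ($status)" ;;
    esac
  done
}

# Typical use:
#   docker ps -a --format '{{.Names}}\t{{.Status}}' | flag_down
printf 'traefik\tUp 2 days\njellyfin\tExited (1) 3 hours ago\n' | flag_down
# -> DOWN: jellyfin (Exited (1) 3 hours ago)
```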
## Backup Strategy

### 1. VM Snapshots (Proxmox)

```bash
# On Proxmox host
# Create snapshot before major changes
qm snapshot 100 pre-update-$(date +%Y%m%d)

# List snapshots
qm listsnapshot 100

# Restore snapshot
qm rollback 100 <snapshot-name>
```

### 2. Configuration Backup

```bash
# On VM
cd ~/homelab

# Backup all configs (excludes data directories)
tar czf homelab-config-$(date +%Y%m%d).tar.gz \
  --exclude='*/data' \
  --exclude='*/db' \
  --exclude='*/pgdata' \
  --exclude='*/config' \
  --exclude='*/models' \
  --exclude='*_data' \
  compose/

# Backup to external storage
scp homelab-config-*.tar.gz user@backup-server:/backups/
```

### 3. Automated Backups with Backrest

Backrest service is included and configured. See:
- `compose/services/backrest/`
- Access: https://backup.fig.systems
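Dated config archives accumulate on the backup target, so a retention sweep is useful. A sketch using `find -mtime`; the directory and 30-day window are illustrative assumptions:

```shell
# prune_backups DIR [DAYS] deletes config archives older than DAYS
# (default 30) from DIR, printing each file it removes.
prune_backups() {
  dir="$1"; days="${2:-30}"
  find "$dir" -name 'homelab-config-*.tar.gz' -type f -mtime +"$days" -print -delete
}

# e.g. prune_backups /backups 30
```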
## Maintenance

### Weekly

```bash
# Update containers
cd ~/homelab
find compose -name "compose.yaml" -type f | while read compose; do
  dir=$(dirname "$compose")
  echo "Updating $dir"
  cd "$dir"
  docker compose pull
  docker compose up -d
  cd ~/homelab
done

# Clean up old images
docker image prune -a -f

# Check disk space
df -h
ncdu /media
```

### Monthly

```bash
# Update AlmaLinux
sudo dnf update -y

# Update NVIDIA drivers (if available)
sudo dnf update nvidia-driver* -y

# Reboot if kernel updated
sudo reboot
```
## Troubleshooting

### Services Won't Start

```bash
# Check SELinux denials
sudo ausearch -m avc -ts recent

# If SELinux is blocking:
sudo setsebool -P container_manage_cgroup on

# Or relabel directories
sudo restorecon -Rv ~/homelab/compose
```

### GPU Not Detected

```bash
# Check GPU is passed through
lspci | grep -i nvidia

# Check drivers loaded
lsmod | grep nvidia

# Reinstall drivers
sudo dnf reinstall nvidia-driver* -y
sudo reboot
```

### Network Issues

```bash
# Check firewall
sudo firewall-cmd --list-all

# Add ports if needed
sudo firewall-cmd --permanent --add-port=80/tcp
sudo firewall-cmd --permanent --add-port=443/tcp
sudo firewall-cmd --reload

# Check Docker network
docker network inspect homelab
```

### Permission Denied Errors

```bash
# Check ownership
ls -la ~/homelab/compose/*/

# Fix ownership
sudo chown -R $USER:$USER ~/homelab

# Check SELinux context
ls -Z ~/homelab/compose

# Fix SELinux labels
sudo chcon -R -t container_file_t ~/homelab/compose
```
## Performance Monitoring

### System Stats

```bash
# CPU usage
htop

# GPU usage
watch -n 1 nvidia-smi

# Disk I/O
iostat -x 1

# Network
iftop

# Per-container stats
docker stats
```

### Resource Limits

Example container resource limits:

```yaml
# In compose.yaml
deploy:
  resources:
    limits:
      cpus: '2.0'
      memory: 4G
    reservations:
      cpus: '1.0'
      memory: 2G
```
## Security Hardening

### 1. Disable Root SSH

```bash
# Edit /etc/ssh/sshd_config
sudo sed -i 's/#PermitRootLogin yes/PermitRootLogin no/' /etc/ssh/sshd_config

# Restart SSH
sudo systemctl restart sshd
```

### 2. Configure Fail2Ban

```bash
# Install
sudo dnf install -y fail2ban

# Configure
sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

# Edit /etc/fail2ban/jail.local
# [sshd]
# enabled = true
# maxretry = 3
# bantime = 3600

# Start
sudo systemctl enable --now fail2ban
```

### 3. Automatic Updates

```bash
# Install dnf-automatic
sudo dnf install -y dnf-automatic

# Configure /etc/dnf/automatic.conf
# apply_updates = yes

# Enable
sudo systemctl enable --now dnf-automatic.timer
```
## Next Steps

1. ✅ VM created and AlmaLinux installed
2. ✅ Docker and NVIDIA drivers configured
3. ✅ Homelab repository cloned
4. ✅ Network and storage configured
5. ⬜ Deploy core services
6. ⬜ Configure SSO
7. ⬜ Deploy all services
8. ⬜ Configure backups
9. ⬜ Set up monitoring

---

**System ready for deployment!** 🚀

---

**File:** `docs/troubleshooting/common-issues.md` (new file, 707 lines)

# Common Issues and Solutions

This guide covers the most common problems you might encounter and how to fix them.

## Table of Contents

- [Service Won't Start](#service-wont-start)
- [SSL/TLS Certificate Errors](#ssltls-certificate-errors)
- [SSO Authentication Issues](#sso-authentication-issues)
- [Access Issues](#access-issues)
- [Performance Problems](#performance-problems)
- [Database Errors](#database-errors)
- [Network Issues](#network-issues)
- [GPU Problems](#gpu-problems)
## Service Won't Start

### Symptom

Container exits immediately or shows "Exited (1)" status.

### Diagnosis

```bash
cd ~/homelab/compose/path/to/service

# Check container status
docker compose ps

# View logs
docker compose logs

# Check for specific errors
docker compose logs | grep -i error
```
### Common Causes and Fixes

#### 1. Environment Variables Not Set

**Error in logs:**
```
Error: POSTGRES_PASSWORD is not set
Error: required environment variable 'XXX' is missing
```

**Fix:**
```bash
# Check .env file exists
ls -la .env

# Check for changeme_ values
grep "changeme_" .env

# Update with proper secrets (see secrets guide)
nano .env

# Restart
docker compose up -d
```
#### 2. Port Already in Use

**Error in logs:**
```
Error: bind: address already in use
Error: failed to bind to port 80: address already in use
```

**Fix:**
```bash
# Find what's using the port
sudo netstat -tulpn | grep :80
sudo netstat -tulpn | grep :443

# Stop conflicting service
sudo systemctl stop apache2  # Example
sudo systemctl stop nginx    # Example

# Or change port in compose.yaml
```
#### 3. Network Not Created

**Error in logs:**
```
network homelab declared as external, but could not be found
```

**Fix:**
```bash
# Create network
docker network create homelab

# Verify
docker network ls | grep homelab

# Restart service
docker compose up -d
```
|
#### 4. Volume Permission Issues
|
||||||
|
|
||||||
|
**Error in logs:**
|
||||||
|
```
|
||||||
|
Permission denied: '/config'
|
||||||
|
mkdir: cannot create directory '/data': Permission denied
|
||||||
|
```
|
||||||
|
|
||||||
|
**Fix:**
|
||||||
|
```bash
|
||||||
|
# Check directory ownership
|
||||||
|
ls -la ./config ./data
|
||||||
|
|
||||||
|
# Fix ownership (replace 1000:1000 with your UID:GID)
|
||||||
|
sudo chown -R 1000:1000 ./config ./data
|
||||||
|
|
||||||
|
# Restart
|
||||||
|
docker compose up -d
|
||||||
|
```
|
||||||
|
|
||||||
|
#### 5. Dependency Not Running
|
||||||
|
|
||||||
|
**Error in logs:**
|
||||||
|
```
|
||||||
|
Failed to connect to database
|
||||||
|
Connection refused: postgres:5432
|
||||||
|
```
|
||||||
|
|
||||||
|
**Fix:**
|
||||||
|
```bash
|
||||||
|
# Start dependency first
|
||||||
|
cd ~/homelab/compose/path/to/dependency
|
||||||
|
docker compose up -d
|
||||||
|
|
||||||
|
# Wait for it to be healthy
|
||||||
|
docker compose logs -f
|
||||||
|
|
||||||
|
# Then start the service
|
||||||
|
cd ~/homelab/compose/path/to/service
|
||||||
|
docker compose up -d
|
||||||
|
```
|
||||||
|
|
||||||
|
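The five causes above can be triaged mechanically: match the logs against each error signature and print the corresponding fix. A sketch keyed on the exact strings listed in this section:

```shell
# hint_for reads log text on stdin and prints a hint for the first
# known error signature it matches (signatures from causes 1-5 above).
hint_for() {
  log=$(cat)
  case "$log" in
    *"is not set"*|*"is missing"*)  echo "hint: fill in .env (cause 1)" ;;
    *"address already in use"*)     echo "hint: free the port (cause 2)" ;;
    *"could not be found"*)         echo "hint: docker network create homelab (cause 3)" ;;
    *"Permission denied"*)          echo "hint: fix ownership/SELinux labels (cause 4)" ;;
    *"Connection refused"*)         echo "hint: start the dependency first (cause 5)" ;;
    *)                              echo "hint: none matched; read the full logs" ;;
  esac
}

# Typical use: docker compose logs 2>&1 | hint_for
echo "Error: bind: address already in use" | hint_for
# -> hint: free the port (cause 2)
```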
## SSL/TLS Certificate Errors

### Symptom

Browser shows "Your connection is not private" or "NET::ERR_CERT_AUTHORITY_INVALID".

### Diagnosis

```bash
# Check Traefik logs
docker logs traefik | grep -i certificate
docker logs traefik | grep -i letsencrypt
docker logs traefik | grep -i error

# Test certificate
echo | openssl s_client -servername home.fig.systems -connect home.fig.systems:443 2>/dev/null | openssl x509 -noout -dates
```

### Common Causes and Fixes
#### 1. DNS Not Configured

**Fix:**
```bash
# Test DNS resolution
dig home.fig.systems +short

# Should return your server's IP
# If not, configure DNS A records:
# *.fig.systems -> YOUR_SERVER_IP
```
#### 2. Port 80 Not Accessible

Let's Encrypt needs port 80 for the HTTP-01 challenge.

**Fix:**
```bash
# Test from an external network
curl -I http://home.fig.systems

# Check firewall (AlmaLinux uses firewalld)
sudo firewall-cmd --list-all
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload

# Check port forwarding on router
# Ensure ports 80 and 443 are forwarded to server
```
#### 3. Rate Limiting

Let's Encrypt enforces rate limits: 50 certificates per registered domain per week, and 5 duplicate certificates (the same exact set of names) per week.

**Fix:**
```bash
# Check Traefik logs for rate limit errors
docker logs traefik | grep -i "rate limit"

# Wait for the rate limit window to pass (1 week)
# Or use the Let's Encrypt staging environment for testing

# Enable staging in traefik/compose.yaml:
# - --certificatesresolvers.letsencrypt.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory
```
#### 4. First Startup - Certificates Not Yet Generated

**Fix:**
```bash
# Wait 2-5 minutes for certificate generation
docker logs traefik -f

# Look for:
# "Certificate obtained for domain"
```
#### 5. Certificate Expired

Traefik should auto-renew, but if a manual renewal is needed:

**Fix:**
```bash
# Remove the old certificate store (acme.json is a single file;
# deleting it forces re-issuance, which counts against rate limits)
cd ~/homelab/compose/core/traefik
rm -f ./acme.json

# Restart Traefik
docker compose restart

# Wait for new certificates
docker logs traefik -f
```
## SSO Authentication Issues

### Symptom

- Can't log in to SSO-protected services
- Redirected to the auth page, but login fails
- "Invalid credentials" error

### Diagnosis

```bash
# Check LLDAP is running
docker ps | grep lldap

# Check Tinyauth is running
docker ps | grep tinyauth

# View logs
docker logs lldap
docker logs tinyauth
```
### Common Causes and Fixes

#### 1. Password Mismatch

`LDAP_BIND_PASSWORD` must match `LLDAP_LDAP_USER_PASS`.

**Fix:**
```bash
# Check both passwords
grep LLDAP_LDAP_USER_PASS ~/homelab/compose/core/lldap/.env
grep LDAP_BIND_PASSWORD ~/homelab/compose/core/tinyauth/.env

# They must be EXACTLY the same!

# If different, update tinyauth/.env
cd ~/homelab/compose/core/tinyauth
nano .env
# Set LDAP_BIND_PASSWORD to match LLDAP_LDAP_USER_PASS

# Restart Tinyauth
docker compose restart
```
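The two `grep` commands above print the secrets to the terminal. A sketch that compares them without echoing the values; `env_get` and `check_match` are hypothetical helpers written for this guide, not part of LLDAP or Tinyauth:

```shell
#!/usr/bin/env bash
# Read KEY=value from an env file; print only the value.
env_get() { grep -m1 "^$2=" "$1" | cut -d= -f2-; }

# Compare one key across two env files without printing the secret.
# $1/$2: env files, $3/$4: variable names to compare
check_match() {
  if [ -n "$(env_get "$1" "$3")" ] && [ "$(env_get "$1" "$3")" = "$(env_get "$2" "$4")" ]; then
    echo "match"
  else
    echo "MISMATCH"
  fi
}

# Usage against the paths from this guide:
# check_match ~/homelab/compose/core/lldap/.env ~/homelab/compose/core/tinyauth/.env \
#   LLDAP_LDAP_USER_PASS LDAP_BIND_PASSWORD
```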
#### 2. User Doesn't Exist in LLDAP

**Fix:**
```bash
# Access the LLDAP web UI
# Go to: https://lldap.fig.systems

# Login with admin credentials
# Username: admin
# Password: <your LLDAP_LDAP_USER_PASS>

# Create user:
# - Click "Create user"
# - Set username, email, password
# - Add to "lldap_admin" group

# Try logging in again
```
#### 3. LLDAP or Tinyauth Not Running

**Fix:**
```bash
# Start LLDAP
cd ~/homelab/compose/core/lldap
docker compose up -d

# Wait for it to be ready
docker compose logs -f

# Start Tinyauth
cd ~/homelab/compose/core/tinyauth
docker compose up -d
docker compose logs -f
```
#### 4. Network Issue Between Tinyauth and LLDAP

**Fix:**
```bash
# Test connection
docker exec tinyauth nc -zv lldap 3890

# Should show: Connection to lldap 3890 port [tcp/*] succeeded!

# If not, check both are on the homelab network
docker network inspect homelab
```
## Access Issues

### Symptom

- Can't access a service from the browser
- Connection timeout
- "This site can't be reached"

### Diagnosis

```bash
# Test from the server
curl -I https://home.fig.systems

# Test DNS
dig home.fig.systems +short

# Check the container is running
docker ps | grep servicename

# Check Traefik routing
docker logs traefik | grep servicename
```
### Common Causes and Fixes

#### 1. Service Not Running

**Fix:**
```bash
cd ~/homelab/compose/path/to/service
docker compose up -d
docker compose logs -f
```

#### 2. Traefik Not Running

**Fix:**
```bash
cd ~/homelab/compose/core/traefik
docker compose up -d
docker compose logs -f
```
#### 3. DNS Not Resolving

**Fix:**
```bash
# Check DNS
dig service.fig.systems +short

# Should return your server IP
# If not, add/update the DNS A record
```
#### 4. Firewall Blocking

**Fix:**
```bash
# Check firewall
sudo ufw status

# Allow if needed
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
```
#### 5. Wrong Traefik Labels

**Fix:**
```bash
# Check compose.yaml has the correct labels
cd ~/homelab/compose/path/to/service
grep -A 10 "labels:" compose.yaml

# Should have:
# traefik.enable: true
# traefik.http.routers.servicename.rule: Host(`service.fig.systems`)
# etc.
```
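For reference, a complete label set might look like the sketch below. The entrypoint name (`websecure`), resolver name (`letsencrypt`), and port are assumptions and must match your Traefik static configuration and the service's internal port:

```yaml
services:
  servicename:
    labels:
      traefik.enable: true
      traefik.http.routers.servicename.rule: Host(`service.fig.systems`)
      traefik.http.routers.servicename.entrypoints: websecure
      traefik.http.routers.servicename.tls.certresolver: letsencrypt
      traefik.http.services.servicename.loadbalancer.server.port: 8080
    networks:
      - homelab
```

The `loadbalancer.server.port` label is only needed when the container exposes more than one port or Traefik can't infer the right one.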
## Performance Problems

### Symptom

- Services running slowly
- High CPU/RAM usage
- System unresponsive

### Diagnosis

```bash
# Overall system
htop

# Docker resources
docker stats

# Disk usage
df -h
docker system df
```
### Common Causes and Fixes

#### 1. Insufficient RAM

**Fix:**
```bash
# Check RAM usage
free -h

# If low, either:
# 1. Add more RAM
# 2. Stop unused services
# 3. Add resource limits to compose files
```

Example resource limit in a service's `compose.yaml`:

```yaml
deploy:
  resources:
    limits:
      memory: 2G
    reservations:
      memory: 1G
```
#### 2. Disk Full

**Fix:**
```bash
# Check disk usage
df -h

# Clean Docker (removes all unused images; they will be re-pulled on next start)
docker system prune -a

# Remove old logs
sudo journalctl --vacuum-time=7d

# Check the media folder
du -sh /media/*
```
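Container logs are a common hidden disk consumer: Docker's default `json-file` log driver grows without bound unless capped. A per-service cap can be added to `compose.yaml`:

```yaml
services:
  servicename:
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
```

This keeps at most three rotated 10 MB log files per container; run `docker compose up -d` to apply (the container is recreated).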
#### 3. Too Many Services Running

**Fix:**
```bash
# Stop unused services
cd ~/homelab/compose/services/unused-service
docker compose down

# Or deploy only what you need
```
#### 4. Database Not Optimized

**Fix:**
```bash
# For postgres services, add to .env:
# POSTGRES_INITDB_ARGS=--data-checksums

# Increase shared buffers (if enough RAM):
# edit compose.yaml and add to the postgres service:
# command: postgres -c shared_buffers=256MB -c max_connections=200
```
## Database Errors

### Symptom

- "Connection refused" to the database
- "Authentication failed for user"
- "Database does not exist"

### Diagnosis

```bash
# Check the database container
docker ps | grep postgres

# View database logs
docker logs <postgres_container_name>

# Test the connection from the app
docker exec <app_container> nc -zv <db_container> 5432
```
### Common Causes and Fixes

#### 1. Password Mismatch

**Fix:**
```bash
# Check the passwords match in .env
grep PASSWORD .env

# For example, in Vikunja:
# VIKUNJA_DATABASE_PASSWORD and POSTGRES_PASSWORD must match!

# Update if needed
nano .env
docker compose down
docker compose up -d
```
#### 2. Database Not Initialized

**Fix:**
```bash
# Remove the database and reinitialize
docker compose down
rm -rf ./db/  # CAREFUL: this deletes all data!
docker compose up -d
```
#### 3. Database Still Starting

**Fix:**
```bash
# Wait for the database to be ready
docker logs <postgres_container> -f

# Look for "database system is ready to accept connections"

# Then restart the app
docker compose restart <app_service>
```
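The wait-then-restart pattern above can be scripted with a generic retry helper. `wait_for` is an illustrative sketch, not part of Docker or Compose:

```shell
#!/usr/bin/env bash
# Retry a command until it succeeds or the timeout (in seconds) elapses.
wait_for() {
  local timeout=$1; shift
  local start=$SECONDS
  until "$@"; do
    if (( SECONDS - start >= timeout )); then
      echo "timed out waiting for: $*" >&2
      return 1
    fi
    sleep 1
  done
}

# Usage sketch (container name is a placeholder):
# wait_for 60 docker exec my_postgres pg_isready -U postgres \
#   && docker compose restart app
wait_for 5 true && echo "ready"
```

Because `wait_for` takes an arbitrary command, the same helper works for waiting on ports (`nc -z`), HTTP endpoints (`curl -fs`), or container health.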
## Network Issues

### Symptom

- Containers can't communicate
- "Connection refused" between services

### Diagnosis

```bash
# Inspect the network
docker network inspect homelab

# Test connectivity
docker exec container1 ping container2
docker exec container1 nc -zv container2 PORT
```
### Common Causes and Fixes

#### 1. Containers Not on Same Network

**Fix:** check that `compose.yaml` declares the network and that the service joins it:

```yaml
# compose.yaml must declare the external network
networks:
  homelab:
    external: true

# and the service must use it
services:
  servicename:
    networks:
      - homelab
```
#### 2. Network Doesn't Exist

**Fix:**
```bash
docker network create homelab
docker compose up -d
```
#### 3. DNS Resolution Between Containers

**Fix:** use the container (or Compose service) name, not `localhost` — inside a container, `localhost` is the container itself:

```bash
# Wrong: localhost:5432
# Right: postgres:5432 (the service name from compose.yaml)
```
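Concretely, a connection string should reference the service name. A sketch with placeholder names and credentials:

```yaml
services:
  postgres:
    image: postgres:16
  app:
    image: example/app:latest  # placeholder
    environment:
      # "postgres" is the Compose service name, resolved by Docker's embedded DNS
      DATABASE_URL: postgres://appuser:apppass@postgres:5432/appdb
```

This only works when both services share a network, which is why the checks in the previous two fixes come first.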
## GPU Problems

### Symptom

- "No hardware acceleration available"
- GPU not detected in the container
- "Failed to open GPU"

### Diagnosis

```bash
# Check the GPU on the host
nvidia-smi

# Check the GPU in a container
docker exec jellyfin nvidia-smi

# Check the Docker GPU runtime
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```
### Common Causes and Fixes

#### 1. NVIDIA Container Toolkit Not Installed

**Fix:**
```bash
# Install the toolkit (requires NVIDIA's apt repository to be configured)
sudo apt install nvidia-container-toolkit

# Configure the runtime
sudo nvidia-ctk runtime configure --runtime=docker

# Restart Docker
sudo systemctl restart docker
```
#### 2. Runtime Not Specified in Compose

**Fix:** edit `compose.yaml` and uncomment the GPU settings on the service:

```yaml
runtime: nvidia
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: all
          capabilities: [gpu]
```

Then restart:

```bash
docker compose up -d
```
#### 3. GPU Already in Use

**Fix:**
```bash
# Check processes using the GPU
nvidia-smi

# Kill the process if needed
sudo kill <PID>

# Restart the service
docker compose restart
```
#### 4. GPU Not Passed Through to VM (Proxmox)

**Fix:**
```bash
# From the Proxmox host, check GPU passthrough
lspci | grep -i nvidia

# From the VM, check the GPU is visible
lspci | grep -i nvidia

# If not visible, reconfigure passthrough (see the GPU guide)
```
## Getting More Help

If your issue isn't listed here:

1. **Check service-specific logs**:
   ```bash
   cd ~/homelab/compose/path/to/service
   docker compose logs --tail=200
   ```

2. **Search container logs for errors**:
   ```bash
   docker compose logs | grep -i error
   docker compose logs | grep -i fail
   ```

3. **Check the FAQ**: see [FAQ](./faq.md)

4. **Debugging Guide**: see [Debugging Guide](./debugging.md)

5. **Service documentation**: check the service's official documentation

---

**Most issues can be solved by checking logs and environment variables!**