Merge pull request #6 from efigueroa/claude/karakeep-ollama-configs-011CUqEzDETA2BqAzYUcXtjt

Author: Eduardo Figueroa, 2025-11-10 19:44:09 -08:00 (committed via GitHub)
Commit: c6132361c7 (GPG key ID: B5690EEEBB952194)
47 changed files with 5111 additions and 150 deletions

README.md (141 changed lines)

@@ -2,6 +2,23 @@
 This repository contains Docker Compose configurations for self-hosted home services.
 
+## 💻 Hardware Specifications
+
+- **Host**: Proxmox VE 9 (Debian 13)
+  - CPU: AMD Ryzen 5 7600X (6 cores, 12 threads, up to 5.3 GHz)
+  - GPU: NVIDIA GeForce GTX 1070 (8GB VRAM)
+  - RAM: 32GB DDR5
+- **VM**: AlmaLinux 9.6 (RHEL 9 compatible)
+  - CPU: 8 vCPUs
+  - RAM: 24GB
+  - Storage: 500GB+ (expandable)
+  - GPU: GTX 1070 (PCIe passthrough)
+
+**Documentation:**
+
+- [Complete Architecture Guide](docs/architecture.md) - Integration, networking, logging, GPU setup
+- [AlmaLinux VM Setup](docs/setup/almalinux-vm.md) - Full installation and configuration guide
+
 ## 🏗️ Infrastructure
 
 ### Core Services (Port 80/443)
@@ -43,7 +60,9 @@ compose/
 └── services/          # Utility services
     ├── homarr/        # Dashboard (home.fig.systems)
     ├── backrest/      # Backup manager (backup.fig.systems)
-    ├── linkwarden/    # Bookmark manager (links.fig.systems)
+    ├── static-sites/  # Static websites (Caddy)
+    ├── karakeep/      # Bookmark manager with AI (links.fig.systems)
+    ├── ollama/        # Local LLM server (ollama.fig.systems)
     ├── vikunja/       # Task management (tasks.fig.systems)
     ├── lubelogger/    # Vehicle tracker (garage.fig.systems)
     ├── calibre-web/   # Ebook library (books.fig.systems)
@@ -56,9 +75,21 @@ compose/
 ## 🌐 Domains
 
-All services are accessible via:
-- Primary: `*.fig.systems`
-- Secondary: `*.edfig.dev`
+Three domains are used with different purposes:
+
+### fig.systems (Homelab Services)
+Primary domain for all self-hosted homelab services:
+- `*.fig.systems` - All services listed below
+
+### edfig.dev (Professional/Public)
+Professional and public-facing sites:
+- `edfig.dev` / `www.edfig.dev` - Personal website/portfolio
+- `blog.edfig.dev` - Technical blog
+
+### figgy.foo (Experimental/Private)
+Testing and experimental services:
+- `figgy.foo` - Experimental lab (SSO protected)
+- `*.figgy.foo` - Test instances of services
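A `*.figgy.foo` test instance would use the same Traefik label pattern seen throughout this repo. A hypothetical fragment for a service called `myapp-test` (service name and subdomain are placeholders; the `tinyauth` middleware name matches the labels used elsewhere in these compose files):

```yaml
# Hypothetical compose labels for a test instance at myapp.figgy.foo.
services:
  myapp-test:
    labels:
      traefik.enable: true
      traefik.http.routers.myapp-test.rule: Host(`myapp.figgy.foo`)
      traefik.http.routers.myapp-test.entrypoints: websecure
      traefik.http.routers.myapp-test.tls.certresolver: letsencrypt
      # SSO protection, as on figgy.foo itself
      traefik.http.routers.myapp-test.middlewares: tinyauth
```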
 ### Service URLs
@@ -67,6 +98,10 @@ All services are accessible via:
 | Traefik Dashboard | traefik.fig.systems | ✅ |
 | LLDAP | lldap.fig.systems | ✅ |
 | Tinyauth | auth.fig.systems | ❌ |
+| **Static Sites** | | |
+| Personal Site | edfig.dev | ❌ |
+| Blog | blog.edfig.dev | ❌ |
+| Experimental Lab | figgy.foo | ✅ |
 | **Monitoring** | | |
 | Grafana (Logs) | logs.fig.systems | ❌* |
 | Loki (API) | loki.fig.systems | ✅ |
@@ -82,7 +117,8 @@ All services are accessible via:
 | SABnzbd | sabnzbd.fig.systems | ✅ |
 | qBittorrent | qbt.fig.systems | ✅ |
 | Profilarr | profilarr.fig.systems | ✅ |
-| Linkwarden | links.fig.systems | ✅ |
+| Karakeep | links.fig.systems | ✅ |
+| Ollama (API) | ollama.fig.systems | ✅ |
 | Vikunja | tasks.fig.systems | ✅ |
 | LubeLogger | garage.fig.systems | ✅ |
 | Calibre-web | books.fig.systems | ✅ |
@@ -164,7 +200,9 @@ cd compose/media/automation/recyclarr && docker compose up -d
 cd compose/media/automation/profilarr && docker compose up -d
 
 # Utility services
-cd compose/services/linkwarden && docker compose up -d
+cd compose/services/static-sites && docker compose up -d  # Static websites (edfig.dev, blog, figgy.foo)
+cd compose/services/karakeep && docker compose up -d
+cd compose/services/ollama && docker compose up -d
 cd compose/services/vikunja && docker compose up -d
 cd compose/services/homarr && docker compose up -d
 cd compose/services/backrest && docker compose up -d
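Since every utility service follows the same `cd … && docker compose up -d` pattern, those commands can be generated with a loop. A dry-run sketch (it only prints the commands; pipe the output to `sh` to actually deploy):

```shell
# Print the deploy command for each utility service (dry run).
for svc in static-sites karakeep ollama vikunja homarr backrest; do
  echo "cd compose/services/$svc && docker compose up -d"
done
```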
@@ -196,9 +234,21 @@ Each service has its own `.env` file where applicable. Key files to review:
 - `core/lldap/.env` - LDAP configuration and admin credentials
 - `core/tinyauth/.env` - LDAP connection and session settings
 - `media/frontend/immich/.env` - Photo management configuration
-- `services/linkwarden/.env` - Bookmark manager settings
+- `services/karakeep/.env` - AI-powered bookmark manager
+- `services/ollama/.env` - Local LLM configuration
 - `services/microbin/.env` - Pastebin configuration
+
+**Example Configuration Files:**
+
+Several services include `.example` config files for reference:
+- `media/automation/sonarr/config.xml.example`
+- `media/automation/radarr/config.xml.example`
+- `media/automation/sabnzbd/sabnzbd.ini.example`
+- `media/automation/qbittorrent/qBittorrent.conf.example`
+- `services/vikunja/config.yml.example`
+- `services/FreshRSS/config.php.example`
+
+Copy these to the appropriate location (usually `./config/`) and customize as needed.
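That copy step can be scripted. A sketch using a hypothetical `copy_examples` helper (assumes the `.example` files sit in the service directory and the target is `./config/` beneath it, dropping the `.example` suffix; individual services may expect different destinations):

```shell
# Copy every FILE.example in a service directory to ./config/FILE,
# without overwriting configs that already exist.
copy_examples() {
  svc_dir="$1"
  mkdir -p "$svc_dir/config"
  for f in "$svc_dir"/*.example; do
    [ -e "$f" ] || continue   # no .example files present
    cp -n "$f" "$svc_dir/config/$(basename "${f%.example}")"
  done
}

# Example: copy_examples compose/media/automation/sonarr
```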
## 🔧 Maintenance ## 🔧 Maintenance
### Viewing Logs ### Viewing Logs
@@ -238,6 +288,83 @@ Important data locations:
 2. Check LLDAP connection in tinyauth logs
 3. Verify LDAP bind credentials match in both services
 
+### GPU not detected
+
+1. Check GPU passthrough: `lspci | grep -i nvidia`
+2. Verify drivers: `nvidia-smi`
+3. Test in container: `docker exec ollama nvidia-smi`
+4. See [AlmaLinux VM Setup](docs/setup/almalinux-vm.md) for GPU configuration
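Step 3 only succeeds if the Ollama compose file actually reserves the GPU. A minimal sketch of the relevant fragment, assuming the NVIDIA Container Toolkit is installed on the VM (the real definition in `compose/services/ollama` may differ):

```yaml
services:
  ollama:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1              # or `all`
              capabilities: [gpu]
```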
+## 📊 Monitoring & Logging
+
+### Centralized Logging (Loki + Promtail + Grafana)
+
+All container logs are automatically collected and stored in Loki.
+
+**Access Grafana**: https://logs.fig.systems
+
+**Query examples:**
+
+```logql
+# View logs for a specific service
+{container="sonarr"}
+
+# Filter by log level
+{container="radarr"} |= "ERROR"
+
+# Multiple services
+{container=~"sonarr|radarr"}
+
+# Search with JSON parsing
+{container="karakeep"} |= "ollama" | json
+```
+
+**Retention**: 30 days (configurable in `compose/monitoring/logging/loki-config.yaml`)
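To change the retention window, the relevant keys in `loki-config.yaml` typically look like this (key names as in recent Loki releases; verify against the file shipped in this repo):

```yaml
limits_config:
  retention_period: 720h      # 30 days

compactor:
  retention_enabled: true     # the compactor enforces the retention period
```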
+### Uptime Monitoring (Uptime Kuma)
+
+Monitor service availability and performance.
+
+**Access Uptime Kuma**: https://status.fig.systems
+
+**Features:**
+- HTTP(s) monitoring for all web services
+- Docker container health checks
+- SSL certificate expiration alerts
+- Public/private status pages
+- 90+ notification integrations (Discord, Slack, Email, etc.)
+### Service Integration
+
+**How services integrate:**
+
+```
+Traefik (Reverse Proxy)
+  ├─→ All services (SSL + routing)
+  └─→ Let's Encrypt (certificates)
+
+Tinyauth (SSO)
+  ├─→ LLDAP (user authentication)
+  └─→ Protected services (authorization)
+
+Promtail (Log Collection)
+  ├─→ Docker socket (all containers)
+  └─→ Loki (log storage)
+
+Loki (Log Storage)
+  └─→ Grafana (visualization)
+
+Karakeep (Bookmarks)
+  ├─→ Ollama (AI tagging)
+  ├─→ Meilisearch (search)
+  └─→ Chrome (web archiving)
+
+Sonarr/Radarr (Media Automation)
+  ├─→ SABnzbd/qBittorrent (downloads)
+  ├─→ Jellyfin (media library)
+  └─→ Recyclarr/Profilarr (quality management)
+```
+
+See [Architecture Guide](docs/architecture.md) for complete integration details.
 ## 📄 License
 
 This is a personal homelab configuration. Use at your own risk.


@@ -13,7 +13,7 @@ services:
       - homelab
     labels:
       traefik.enable: true
-      traefik.http.routers.lldap.rule: Host(`lldap.fig.systems`) || Host(`lldap.edfig.dev`)
+      traefik.http.routers.lldap.rule: Host(`lldap.fig.systems`)
       traefik.http.routers.lldap.entrypoints: websecure
       traefik.http.routers.lldap.tls.certresolver: letsencrypt
       traefik.http.services.lldap.loadbalancer.server.port: 17170


@@ -11,7 +11,7 @@ services:
     labels:
       traefik.enable: true
       # Web UI routing
-      traefik.http.routers.tinyauth.rule: Host(`auth.fig.systems`) || Host(`auth.edfig.dev`)
+      traefik.http.routers.tinyauth.rule: Host(`auth.fig.systems`)
       traefik.http.routers.tinyauth.entrypoints: websecure
       traefik.http.routers.tinyauth.tls.certresolver: letsencrypt
       traefik.http.routers.tinyauth.service: tinyauth-ui


@@ -34,7 +34,7 @@ services:
     labels:
       traefik.enable: true
       # Dashboard routing
-      traefik.http.routers.traefik.rule: Host(`traefik.fig.systems`) || Host(`traefik.edfig.dev`)
+      traefik.http.routers.traefik.rule: Host(`traefik.fig.systems`)
       traefik.http.routers.traefik.entrypoints: websecure
       traefik.http.routers.traefik.tls.certresolver: letsencrypt
       traefik.http.routers.traefik.service: api@internal


@@ -22,7 +22,7 @@ services:
       traefik.docker.network: homelab
       # Web UI
-      traefik.http.routers.profilarr.rule: Host(`profilarr.fig.systems`) || Host(`profilarr.edfig.dev`)
+      traefik.http.routers.profilarr.rule: Host(`profilarr.fig.systems`)
       traefik.http.routers.profilarr.entrypoints: websecure
       traefik.http.routers.profilarr.tls.certresolver: letsencrypt
       traefik.http.services.profilarr.loadbalancer.server.port: 6868


@@ -20,7 +20,7 @@ services:
       - homelab
     labels:
       traefik.enable: true
-      traefik.http.routers.qbittorrent.rule: Host(`qbt.fig.systems`) || Host(`qbt.edfig.dev`)
+      traefik.http.routers.qbittorrent.rule: Host(`qbt.fig.systems`)
       traefik.http.routers.qbittorrent.entrypoints: websecure
       traefik.http.routers.qbittorrent.tls.certresolver: letsencrypt
       traefik.http.services.qbittorrent.loadbalancer.server.port: 8080


@@ -0,0 +1,200 @@
# qBittorrent Configuration Example
# This file will be auto-generated on first run
# Copy to ./config/qBittorrent/qBittorrent.conf and modify as needed
# Docs: https://github.com/qbittorrent/qBittorrent/wiki
[Application]
# File Logger
FileLogger\Enabled=true
FileLogger\Path=/config/qBittorrent/logs
FileLogger\Backup=true
FileLogger\DeleteOld=true
FileLogger\MaxSize=6MiB
FileLogger\Age=1
FileLogger\AgeType=1
# Memory
MemoryWorkingSetLimit=512
[BitTorrent]
# Session Settings
Session\DefaultSavePath=/downloads
Session\TempPath=/incomplete
Session\TempPathEnabled=true
# Port for incoming connections
Session\Port=6881
# Use UPnP/NAT-PMP
Session\UseUPnP=false
# Encryption mode
Session\Encryption=1
# 0 = Prefer encryption
# 1 = Require encryption
# 2 = Disable encryption
# Anonymous mode
Session\AnonymousMode=false
# Max connections
Session\MaxConnections=500
Session\MaxConnectionsPerTorrent=100
Session\MaxUploads=20
Session\MaxUploadsPerTorrent=4
# DHT
Session\DHTEnabled=true
Session\PeXEnabled=true
Session\LSDEnabled=true
# Queuing
Session\QueueingSystemEnabled=true
Session\MaxActiveDownloads=5
Session\MaxActiveTorrents=10
Session\MaxActiveUploads=5
# Seeding limits
Session\GlobalMaxSeedingMinutes=-1
Session\MaxRatioAction=0
# 0 = Pause torrent
# 1 = Remove torrent
Session\MaxRatio=2.0
# Torrent tracking
Session\AddTrackersEnabled=true
Session\AdditionalTrackers=
# Categories
Session\SubcategoriesEnabled=true
# Performance
Session\BTProtocol=Both
# TCP, UTP, Both
Session\uTPRateLimited=true
Session\DiskCacheSize=64
Session\DiskCacheTTL=60
# Speed limits (in KiB/s, 0 = unlimited)
Session\GlobalDLSpeedLimit=0
Session\GlobalUPSpeedLimit=0
# Alternative speed limits (scheduled)
Session\AltGlobalDLSpeedLimit=512
Session\AltGlobalUPSpeedLimit=256
Session\BandwidthSchedulerEnabled=false
# IP Filtering
Session\IPFilteringEnabled=false
Session\IPFilterFile=
# Proxy
Session\ProxyType=None
# Options: None, HTTP, SOCKS5, SOCKS4
Session\ProxyIP=
Session\ProxyPort=8080
Session\ProxyPeerConnections=false
Session\ProxyTorrentOnly=false
[LegalNotice]
Accepted=true
[Preferences]
# Downloads
Downloads\SavePath=/downloads
Downloads\TempPath=/incomplete
Downloads\TempPathEnabled=true
Downloads\ScanDirsV2=
Downloads\FinishedTorrentExportDir=
Downloads\PreAllocation=false
Downloads\UseIncompleteExtension=true
# Connection
Connection\PortRangeMin=6881
Connection\PortRangeMax=6881
Connection\UPnP=false
Connection\GlobalDLLimitAlt=512
Connection\GlobalUPLimitAlt=256
# Speed
Bittorrent\MaxConnecs=500
Bittorrent\MaxConnecsPerTorrent=100
Bittorrent\MaxUploads=20
Bittorrent\MaxUploadsPerTorrent=4
# Queue
Queueing\QueueingEnabled=true
Queueing\MaxActiveDownloads=5
Queueing\MaxActiveTorrents=10
Queueing\MaxActiveUploads=5
Queueing\IgnoreSlowTorrents=false
Queueing\SlowTorrentsDownloadRate=2
Queueing\SlowTorrentsUploadRate=2
# Scheduler
Scheduler\Enabled=false
Scheduler\days=EveryDay
Scheduler\start_time=@Variant(\0\0\0\xf\x4J\xa2\0)
Scheduler\end_time=@Variant(\0\0\0\xf\x1\x90\x1\0)
# RSS
RSS\AutoDownloader\DownloadRepacks=true
RSS\AutoDownloader\SmartEpisodeFilter=s(\\d+)e(\\d+), (\\d+)x(\\d+), "(\\d{4}[.\\-]\\d{1,2}[.\\-]\\d{1,2})", "(\\d{1,2}[.\\-]\\d{1,2}[.\\-]\\d{4})"
# Web UI
WebUI\Enabled=true
WebUI\LocalHostAuth=false
WebUI\Port=8080
WebUI\Address=*
WebUI\ServerDomains=*
WebUI\UseUPnP=false
# Web UI Authentication
WebUI\Username=admin
WebUI\Password_PBKDF2=GENERATED_ON_FIRST_RUN
# Security
WebUI\CSRFProtection=true
WebUI\SecureCookie=true
WebUI\ClickjackingProtection=true
WebUI\HostHeaderValidation=true
# Custom HTTP Headers
WebUI\CustomHTTPHeaders=
WebUI\CustomHTTPHeadersEnabled=false
# Reverse Proxy
WebUI\ReverseProxySupportEnabled=true
WebUI\TrustedReverseProxiesList=
# Alternative WebUI
WebUI\AlternativeUIEnabled=false
WebUI\RootFolder=
# Locale
General\Locale=en
WebUI\UseCustomHTTPHeaders=false
# Advanced
Advanced\RecheckOnCompletion=false
Advanced\AnonymousMode=false
Advanced\SuperSeeding=false
Advanced\IgnoreLimitsLAN=true
Advanced\IncludeOverhead=false
Advanced\AnnounceToAllTrackers=false
Advanced\AnnounceToAllTiers=true
# Tracker
Advanced\trackerPort=9000
# Embedded tracker
Advanced\trackerEnabled=false
# Logging
AdvancedSettings\LogFileEnabled=true
[RSS]
AutoDownloader\Enabled=false


@@ -20,7 +20,7 @@ services:
       - homelab
     labels:
       traefik.enable: true
-      traefik.http.routers.radarr.rule: Host(`radarr.fig.systems`) || Host(`radarr.edfig.dev`)
+      traefik.http.routers.radarr.rule: Host(`radarr.fig.systems`)
       traefik.http.routers.radarr.entrypoints: websecure
       traefik.http.routers.radarr.tls.certresolver: letsencrypt
       traefik.http.services.radarr.loadbalancer.server.port: 7878


@@ -0,0 +1,50 @@
<Config>
<!-- Radarr Configuration Example -->
<!-- This file will be auto-generated on first run -->
<!-- Copy to ./config/config.xml and modify as needed -->
<Port>7878</Port>
<SslPort>9897</SslPort>
<EnableSsl>False</EnableSsl>
<LaunchBrowser>False</LaunchBrowser>
<ApiKey>GENERATED_ON_FIRST_RUN</ApiKey>
<AuthenticationMethod>None</AuthenticationMethod>
<!-- Options: None, Basic, Forms, External -->
<!-- Use External when behind Traefik with SSO -->
<UrlBase></UrlBase>
<!-- Set to /radarr if using a path-based proxy -->
<UpdateMechanism>Docker</UpdateMechanism>
<Branch>master</Branch>
<!-- Options: master (stable), develop (beta), nightly -->
<LogLevel>info</LogLevel>
<!-- Options: trace, debug, info, warn, error, fatal -->
<!-- Analytics (optional) -->
<AnalyticsEnabled>False</AnalyticsEnabled>
<!-- Backup -->
<BackupFolder>/config/Backups</BackupFolder>
<BackupInterval>7</BackupInterval>
<BackupRetention>28</BackupRetention>
<!-- Proxy Settings (if needed) -->
<ProxyEnabled>False</ProxyEnabled>
<ProxyType>Http</ProxyType>
<ProxyHostname></ProxyHostname>
<ProxyPort>8080</ProxyPort>
<ProxyUsername></ProxyUsername>
<ProxyPassword></ProxyPassword>
<ProxyBypassFilter></ProxyBypassFilter>
<ProxyBypassLocalAddresses>True</ProxyBypassLocalAddresses>
<!-- Radarr-specific settings -->
<MinimumAge>0</MinimumAge>
<!-- Delay before grabbing release (in minutes) -->
<Retention>0</Retention>
<!-- Maximum age of usenet posts (0 = unlimited) -->
</Config>


@@ -17,7 +17,7 @@ services:
       - homelab
     labels:
       traefik.enable: true
-      traefik.http.routers.sabnzbd.rule: Host(`sabnzbd.fig.systems`) || Host(`sabnzbd.edfig.dev`)
+      traefik.http.routers.sabnzbd.rule: Host(`sabnzbd.fig.systems`)
       traefik.http.routers.sabnzbd.entrypoints: websecure
       traefik.http.routers.sabnzbd.tls.certresolver: letsencrypt
       traefik.http.services.sabnzbd.loadbalancer.server.port: 8080


@@ -0,0 +1,137 @@
# SABnzbd Configuration Example
# This file will be auto-generated on first run
# Copy to ./config/sabnzbd.ini and modify as needed
# Docs: https://sabnzbd.org/wiki/configuration/4.3/
[misc]
# Host and Port
host = 0.0.0.0
port = 8080
# URL Base (if using path-based proxy)
url_base =
# API Key (generated on first run)
api_key = GENERATED_ON_FIRST_RUN
nzb_key = GENERATED_ON_FIRST_RUN
# Authentication
# Use 'None' when behind Traefik with SSO
username =
password =
# Directories
download_dir = /incomplete
complete_dir = /complete
dirscan_dir =
script_dir =
# Performance
cache_limit = 500M
article_cache_max = 500M
# Adjust based on available RAM
# Download Settings
bandwidth_max =
bandwidth_perc = 100
# 0 = unlimited bandwidth
# Post-processing
enable_all_par = 0
# 0 = Download only needed par2 files
# 1 = Download all par2 files
par2_multicore = 1
# Use multiple CPU cores for par2 repair
nice =
ionice =
# Unpacking
enable_unzip = 1
enable_7zip = 1
enable_filejoin = 1
enable_tsjoin = 0
enable_par_cleanup = 1
safe_postproc = 1
# Quota
quota_size =
quota_day =
quota_resume = 0
quota_period = m
# Scheduling
schedlines =
# Format: hour minute day_of_week action
# SSL/TLS for Usenet servers
ssl_type = v23
ssl_ciphers =
# IPv6
enable_ipv6 = 1
ipv6_servers = 0
# Logging
log_level = 1
# 0 = No logging
# 1 = Errors/warnings (default)
# 2 = Info
max_log_size = 5242880
log_backups = 5
# Email notifications (optional)
email_endjob = 0
email_full = 0
email_server =
email_to =
email_from =
email_account =
email_pwd =
# RSS (optional)
rss_rate = 60
# External scripts (optional)
pre_script =
post_script =
# Misc
permissions =
folder_rename = 1
replace_spaces = 0
replace_dots = 0
auto_browser = 0
propagation_delay = 0
[servers]
# Usenet servers configured via web UI
# Or add manually here:
# [[server_name]]
# host = news.example.com
# port = 563
# ssl = 1
# username = your_username
# password = your_password
# connections = 20
# priority = 0
# retention = 3000
# enable = 1
[categories]
# Categories configured via web UI
# Default categories: Movies, TV, Audio, Software
[[*]]
name = *
order = 0
pp = 3
# 0 = Download
# 1 = +Repair
# 2 = +Unpack
# 3 = +Delete (recommended)
script = Default
dir =
newzbin =
priority = 0


@@ -20,7 +20,7 @@ services:
       - homelab
     labels:
       traefik.enable: true
-      traefik.http.routers.sonarr.rule: Host(`sonarr.fig.systems`) || Host(`sonarr.edfig.dev`)
+      traefik.http.routers.sonarr.rule: Host(`sonarr.fig.systems`)
       traefik.http.routers.sonarr.entrypoints: websecure
       traefik.http.routers.sonarr.tls.certresolver: letsencrypt
       traefik.http.services.sonarr.loadbalancer.server.port: 8989


@@ -0,0 +1,43 @@
<Config>
<!-- Sonarr Configuration Example -->
<!-- This file will be auto-generated on first run -->
<!-- Copy to ./config/config.xml and modify as needed -->
<Port>8989</Port>
<SslPort>9898</SslPort>
<EnableSsl>False</EnableSsl>
<LaunchBrowser>False</LaunchBrowser>
<ApiKey>GENERATED_ON_FIRST_RUN</ApiKey>
<AuthenticationMethod>None</AuthenticationMethod>
<!-- Options: None, Basic, Forms, External -->
<!-- Use External when behind Traefik with SSO -->
<UrlBase></UrlBase>
<!-- Set to /sonarr if using a path-based proxy -->
<UpdateMechanism>Docker</UpdateMechanism>
<Branch>main</Branch>
<!-- Options: main (stable), develop (beta) -->
<LogLevel>info</LogLevel>
<!-- Options: trace, debug, info, warn, error, fatal -->
<!-- Analytics (optional) -->
<AnalyticsEnabled>False</AnalyticsEnabled>
<!-- Backup -->
<BackupFolder>/config/Backups</BackupFolder>
<BackupInterval>7</BackupInterval>
<BackupRetention>28</BackupRetention>
<!-- Proxy Settings (if needed) -->
<ProxyEnabled>False</ProxyEnabled>
<ProxyType>Http</ProxyType>
<ProxyHostname></ProxyHostname>
<ProxyPort>8080</ProxyPort>
<ProxyUsername></ProxyUsername>
<ProxyPassword></ProxyPassword>
<ProxyBypassFilter></ProxyBypassFilter>
<ProxyBypassLocalAddresses>True</ProxyBypassLocalAddresses>
</Config>


@@ -40,7 +40,7 @@ services:
     labels:
       traefik.enable: true
       traefik.docker.network: homelab
-      traefik.http.routers.immich.rule: Host(`photos.fig.systems`) || Host(`photos.edfig.dev`)
+      traefik.http.routers.immich.rule: Host(`photos.fig.systems`)
       traefik.http.routers.immich.entrypoints: websecure
       traefik.http.routers.immich.tls.certresolver: letsencrypt
       traefik.http.services.immich.loadbalancer.server.port: 2283


@@ -25,7 +25,7 @@ services:
       - homelab
     labels:
       traefik.enable: true
-      traefik.http.routers.jellyfin.rule: Host(`flix.fig.systems`) || Host(`flix.edfig.dev`)
+      traefik.http.routers.jellyfin.rule: Host(`flix.fig.systems`)
       traefik.http.routers.jellyfin.entrypoints: websecure
       traefik.http.routers.jellyfin.tls.certresolver: letsencrypt
       traefik.http.services.jellyfin.loadbalancer.server.port: 8096


@@ -14,7 +14,7 @@ services:
       - homelab
     labels:
       traefik.enable: true
-      traefik.http.routers.jellyseerr.rule: Host(`requests.fig.systems`) || Host(`requests.edfig.dev`)
+      traefik.http.routers.jellyseerr.rule: Host(`requests.fig.systems`)
       traefik.http.routers.jellyseerr.entrypoints: websecure
       traefik.http.routers.jellyseerr.tls.certresolver: letsencrypt
       traefik.http.services.jellyseerr.loadbalancer.server.port: 5055


@@ -26,7 +26,7 @@ services:
       traefik.docker.network: homelab
       # Loki API
-      traefik.http.routers.loki.rule: Host(`loki.fig.systems`) || Host(`loki.edfig.dev`)
+      traefik.http.routers.loki.rule: Host(`loki.fig.systems`)
       traefik.http.routers.loki.entrypoints: websecure
       traefik.http.routers.loki.tls.certresolver: letsencrypt
       traefik.http.services.loki.loadbalancer.server.port: 3100
@@ -95,7 +95,7 @@ services:
       traefik.docker.network: homelab
       # Grafana Web UI
-      traefik.http.routers.grafana.rule: Host(`logs.fig.systems`) || Host(`logs.edfig.dev`)
+      traefik.http.routers.grafana.rule: Host(`logs.fig.systems`)
       traefik.http.routers.grafana.entrypoints: websecure
       traefik.http.routers.grafana.tls.certresolver: letsencrypt
       traefik.http.services.grafana.loadbalancer.server.port: 3000


@@ -22,7 +22,7 @@ services:
       traefik.docker.network: homelab
       # Web UI
-      traefik.http.routers.uptime-kuma.rule: Host(`status.fig.systems`) || Host(`status.edfig.dev`)
+      traefik.http.routers.uptime-kuma.rule: Host(`status.fig.systems`)
       traefik.http.routers.uptime-kuma.entrypoints: websecure
       traefik.http.routers.uptime-kuma.tls.certresolver: letsencrypt
       traefik.http.services.uptime-kuma.loadbalancer.server.port: 3001


@@ -5,7 +5,36 @@ services:
   freshrss:
     container_name: freshrss
     image: lscr.io/linuxserver/freshrss:latest
+    restart: unless-stopped
     env_file:
       - .env
+    volumes:
+      - ./config:/config
+    networks:
+      - homelab
+    labels:
+      # Traefik
+      traefik.enable: true
+      traefik.docker.network: homelab
+      # Web UI
+      traefik.http.routers.freshrss.rule: Host(`rss.fig.systems`)
+      traefik.http.routers.freshrss.entrypoints: websecure
+      traefik.http.routers.freshrss.tls.certresolver: letsencrypt
+      traefik.http.services.freshrss.loadbalancer.server.port: 80
+      # SSO Protection
+      traefik.http.routers.freshrss.middlewares: tinyauth
+      # Homarr Discovery
+      homarr.name: FreshRSS
+      homarr.group: Services
+      homarr.icon: mdi:rss
+
+networks:
+  homelab:
+    external: true


@@ -0,0 +1,130 @@
<?php
/**
* FreshRSS Configuration Example
* Copy to ./config/www/freshrss/data/config.php
* Docs: https://freshrss.github.io/FreshRSS/en/admins/03_Troubleshooting.html
*/
return array(
// Environment (production or development)
'environment' => 'production',
// Base URL
'base_url' => 'https://rss.fig.systems',
// Database type (sqlite, mysql, pgsql)
'db' => array(
'type' => 'sqlite',
'host' => '',
'user' => '',
'password' => '',
'base' => 'freshrss',
// For MySQL/PostgreSQL:
// 'type' => 'mysql',
// 'host' => 'localhost:3306',
// 'user' => 'freshrss',
// 'password' => 'changeme',
// 'base' => 'freshrss',
'prefix' => 'freshrss_',
'pdo_options' => array(),
),
// Salt for password hashing (auto-generated)
'salt' => 'GENERATED_ON_FIRST_RUN',
// Authentication method
// Options: form, http_auth, none
'auth_type' => 'form',
// Use Form auth when behind Traefik with SSO
// Allow self-registration
'allow_anonymous' => false,
'allow_anonymous_refresh' => false,
// Default language
'language' => 'en',
// Theme
'theme' => 'Origine',
// Timezone
'default_timezone' => 'America/Los_Angeles',
// Auto-load more articles when scrolling
'auto_load_more' => true,
// Articles per page
'posts_per_page' => 100,
// Old articles (keep for X months)
'old_entries' => 3,
// Caching
'cache' => array(
'enabled' => true,
'duration' => 3600, // seconds
),
// Simplify HTML in articles
'simplify_html' => false,
// Disable update checking
'disable_update_check' => true,
// API settings
'api_enabled' => true,
// Fever API compatibility
'fever_api' => true,
// Shortcuts
'shortcuts' => array(
'mark_read' => 'r',
'mark_favorite' => 'f',
'go_website' => 'v',
'next_entry' => 'j',
'prev_entry' => 'k',
'first_entry' => 'shift+k',
'last_entry' => 'shift+j',
'collapse_entry' => 'c',
'load_more' => 'm',
'auto_share' => 's',
'focus_search' => '/',
'user_filter' => 'u',
'help' => 'h',
'close_dropdown' => 'esc',
'prev_feed' => 'shift+up',
'next_feed' => 'shift+down',
),
// Extensions
'extensions_enabled' => array(),
// Proxy (if needed)
'proxy' => array(
'address' => '',
'port' => '',
'type' => '',
'username' => '',
'password' => '',
),
// Limits
'limits' => array(
// Max feed checks per user per hour
'max_feeds_refresh_per_user_per_hour' => 10,
// Max articles per feed
'max_articles_per_feed' => 10000,
// Max registrations per IP per day
'max_registrations_per_ip_per_day' => 5,
),
// Logging
'logging' => array(
'level' => 'warning',
// Options: emergency, alert, critical, error, warning, notice, info, debug
),
);


@@ -21,7 +21,7 @@ services:
     labels:
       # Traefik
       traefik.enable: true
-      traefik.http.routers.backrest.rule: Host(`backup.fig.systems`) || Host(`backup.edfig.dev`)
+      traefik.http.routers.backrest.rule: Host(`backup.fig.systems`)
       traefik.http.routers.backrest.entrypoints: websecure
       traefik.http.routers.backrest.tls.certresolver: letsencrypt
       traefik.http.services.backrest.loadbalancer.server.port: 9898


@@ -5,7 +5,36 @@ services:
   booklore:
     container_name: booklore
     image: ghcr.io/lorebooks/booklore:latest
+    restart: unless-stopped
     env_file:
       - .env
+    volumes:
+      - ./data:/app/data
+    networks:
+      - homelab
+    labels:
+      # Traefik
+      traefik.enable: true
+      traefik.docker.network: homelab
+      # Web UI
+      traefik.http.routers.booklore.rule: Host(`booklore.fig.systems`)
+      traefik.http.routers.booklore.entrypoints: websecure
+      traefik.http.routers.booklore.tls.certresolver: letsencrypt
+      traefik.http.services.booklore.loadbalancer.server.port: 3000
+      # SSO Protection
+      traefik.http.routers.booklore.middlewares: tinyauth
+      # Homarr Discovery
+      homarr.name: Booklore
+      homarr.group: Services
+      homarr.icon: mdi:book-open-variant
+
+networks:
+  homelab:
+    external: true


@@ -0,0 +1,56 @@
# Karakeep Configuration
# Docs: https://docs.karakeep.app
# NextAuth Configuration
NEXTAUTH_URL=https://links.fig.systems
# Generate with: openssl rand -base64 36
# Example format: aB2cD4eF6gH8iJ0kL2mN4oP6qR8sT0uV2wX4yZ6aB8cD0eF2gH4i
NEXTAUTH_SECRET=changeme_please_set_random_secret_key
# Meilisearch Master Key
# Generate with: openssl rand -base64 36
# Example format: gH4iJ6kL8mN0oP2qR4sT6uV8wX0yZ2aB4cD6eF8gH0iJ2kL4mN6o
MEILI_MASTER_KEY=changeme_please_set_meili_master_key
# Data Directory
DATADIR=/data
# Chrome Service URL (for web archiving)
BROWSER_WEB_URL=http://karakeep-chrome:9222
# Meilisearch URL
MEILI_ADDR=http://karakeep-meilisearch:7700
# Timezone
TZ=America/Los_Angeles
# Optional: Disable public signups
# DISABLE_SIGNUPS=true
# Optional: Maximum file size for uploads (in MB, default: 100)
# MAX_ASSET_SIZE_MB=100
# Optional: Enable OCR for images
# OCR_LANGS=eng,spa,fra,deu
# Optional: Ollama Integration (for AI features with local models)
# Uncomment these after deploying Ollama service
# OLLAMA_BASE_URL=http://ollama:11434
# INFERENCE_TEXT_MODEL=llama3.2:3b
# INFERENCE_IMAGE_MODEL=llava:7b
# INFERENCE_LANG=en
# Optional: OpenAI Integration (for AI features via cloud)
# OPENAI_API_KEY=sk-...
# OPENAI_BASE_URL=https://api.openai.com/v1
# INFERENCE_TEXT_MODEL=gpt-4o-mini
# INFERENCE_IMAGE_MODEL=gpt-4o-mini
# Optional: OpenRouter Integration (for AI features)
# OPENAI_API_KEY=sk-or-v1-...
# OPENAI_BASE_URL=https://openrouter.ai/api/v1
# INFERENCE_TEXT_MODEL=anthropic/claude-3.5-sonnet
# INFERENCE_IMAGE_MODEL=anthropic/claude-3.5-sonnet
# Optional: Logging
# LOG_LEVEL=info

compose/services/karakeep/.gitignore vendored Normal file
View file

@ -0,0 +1,6 @@
# Karakeep data
data/
meili_data/
# Keep .env.example if created
!.env.example

View file

@ -0,0 +1,543 @@
# Karakeep - Bookmark Everything App
AI-powered bookmark manager for links, notes, images, and PDFs with automatic tagging and full-text search.
## Overview
**Karakeep** (previously known as Hoarder) is a self-hostable bookmark-everything app:
- ✅ **Bookmark Everything**: Links, notes, images, PDFs
- ✅ **AI-Powered**: Automatic tagging and summarization
- ✅ **Full-Text Search**: Find anything instantly with Meilisearch
- ✅ **Web Archiving**: Save complete webpages (full page archive)
- ✅ **Browser Extensions**: Chrome and Firefox support
- ✅ **Mobile Apps**: iOS and Android apps available
- ✅ **Ollama Support**: Use local AI models (no cloud required!)
- ✅ **OCR**: Extract text from images
- ✅ **Self-Hosted**: Full control of your data
## Quick Start
### 1. Configure Secrets
```bash
cd ~/homelab/compose/services/karakeep
# Edit .env and update:
# - NEXTAUTH_SECRET (generate with: openssl rand -base64 36)
# - MEILI_MASTER_KEY (generate with: openssl rand -base64 36)
nano .env
```
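The two secrets can also be generated and written into `.env` in one step — a minimal sketch (the `sed` delimiter is `|` because base64 output may contain `/`):

```shell
# Generate both secrets and replace the shipped "changeme_..." placeholders in .env.
# Assumes you are in the karakeep directory and .env still holds the placeholders.
auth_secret=$(openssl rand -base64 36)
meili_key=$(openssl rand -base64 36)
sed -i.bak \
  -e "s|^NEXTAUTH_SECRET=.*|NEXTAUTH_SECRET=${auth_secret}|" \
  -e "s|^MEILI_MASTER_KEY=.*|MEILI_MASTER_KEY=${meili_key}|" \
  .env
```

`sed -i.bak` leaves a `.env.bak` copy behind in case the substitution needs to be reverted.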
### 2. Deploy
```bash
docker compose up -d
```
### 3. Access
Go to: **https://links.fig.systems**
**First-time setup:**
1. Create your admin account
2. Start bookmarking!
## Features
### Bookmark Types
**1. Web Links**
- Save any URL
- Automatic screenshot capture
- Full webpage archiving
- Extract title, description, favicon
- AI-generated summary and tags
**2. Notes**
- Quick text notes
- Markdown support
- AI-powered categorization
- Full-text searchable
**3. Images**
- Upload images directly
- OCR text extraction (if enabled)
- AI-based tagging
- Image search
**4. PDFs**
- Upload PDF documents
- Full-text indexing
- Searchable content
### AI Features
Karakeep can use AI to automatically:
- **Tag** your bookmarks
- **Summarize** web content
- **Extract** key information
- **Organize** by category
**Three AI options:**
**1. Ollama (Recommended - Local & Free)**
```env
# In .env, uncomment:
OLLAMA_BASE_URL=http://ollama:11434
INFERENCE_TEXT_MODEL=llama3.2:3b
INFERENCE_IMAGE_MODEL=llava:7b
```
**2. OpenAI**
```env
OPENAI_API_KEY=sk-...
OPENAI_BASE_URL=https://api.openai.com/v1
INFERENCE_TEXT_MODEL=gpt-4o-mini
```
**3. OpenRouter (multiple providers)**
```env
OPENAI_API_KEY=sk-or-v1-...
OPENAI_BASE_URL=https://openrouter.ai/api/v1
INFERENCE_TEXT_MODEL=anthropic/claude-3.5-sonnet
```
### Web Archiving
Karakeep saves complete web pages for offline viewing:
- **Full HTML archive**
- **Screenshots** of the page
- **Extracted text** for search
- **Works offline** - view archived pages anytime
### Search
Powered by Meilisearch:
- **Instant** full-text search
- **Fuzzy matching** - finds similar terms
- **Filter by** type, tags, dates
- **Search across** titles, content, tags, notes
### Browser Extensions
**Install extensions:**
- [Chrome Web Store](https://chromewebstore.google.com/detail/karakeep/kbkejgonjhbmhcaofkhdegeoeoemgkdm)
- [Firefox Add-ons](https://addons.mozilla.org/en-US/firefox/addon/karakeep/)
**Configure extension:**
1. Install extension
2. Click extension icon
3. Enter server URL: `https://links.fig.systems`
4. Login with your credentials
5. Save bookmarks from any page!
### Mobile Apps
**Download apps:**
- [iOS App Store](https://apps.apple.com/app/karakeep/id6479258022)
- [Android Google Play](https://play.google.com/store/apps/details?id=app.karakeep.mobile)
**Setup:**
1. Install app
2. Open app
3. Enter server: `https://links.fig.systems`
4. Login
5. Bookmark on the go!
## Configuration
### Basic Settings
**Disable public signups:**
```env
DISABLE_SIGNUPS=true
```
**Set max file size (100MB default):**
```env
MAX_ASSET_SIZE_MB=100
```
**Enable OCR for multiple languages:**
```env
OCR_LANGS=eng,spa,fra,deu
```
### Ollama Integration
**Prerequisites:**
1. Deploy Ollama service (see `compose/services/ollama/`)
2. Pull models: `docker exec ollama ollama pull llama3.2:3b`
**Enable in Karakeep:**
```env
# In karakeep/.env
OLLAMA_BASE_URL=http://ollama:11434
INFERENCE_TEXT_MODEL=llama3.2:3b
INFERENCE_IMAGE_MODEL=llava:7b
INFERENCE_LANG=en
```
**Restart:**
```bash
docker compose restart
```
**Recommended models:**
- **Text**: llama3.2:3b (fast, good quality)
- **Images**: llava:7b (vision model)
- **Advanced**: llama3.3:70b (slower, better results)
### Advanced Settings
**Custom logging:**
```env
LOG_LEVEL=debug # Options: debug, info, warn, error
```
**Custom data directory:**
```env
DATADIR=/custom/path
```
**Chrome timeout (for slow sites):**
```env
# Add to compose.yaml environment section
BROWSER_TIMEOUT=60000 # 60 seconds
```
## Usage Workflows
### 1. Bookmark a Website
**Via Browser:**
1. Click Karakeep extension
2. Bookmark opens automatically
3. AI generates tags and summary
4. Edit tags/notes if needed
5. Save
**Via Mobile:**
1. Open share menu
2. Select Karakeep
3. Bookmark saved
**Manually:**
1. Open Karakeep
2. Click "+" button
3. Paste URL
4. Click Save
### 2. Quick Note
1. Open Karakeep
2. Click "+" → "Note"
3. Type your note
4. AI auto-tags
5. Save
### 3. Upload Image
1. Click "+" → "Image"
2. Upload image file
3. OCR extracts text (if enabled)
4. AI generates tags
5. Save
### 4. Search Everything
**Simple search:**
- Type in search box
- Results appear instantly
**Advanced search:**
- Filter by type (links, notes, images)
- Filter by tags
- Filter by date range
- Sort by relevance or date
### 5. Organize with Tags
**Auto-tags:**
- AI generates tags automatically
- Based on content analysis
- Can be edited/removed
**Manual tags:**
- Add your own tags
- Create tag hierarchies
- Color-code tags
**Tag management:**
- Rename tags globally
- Merge duplicate tags
- Delete unused tags
## Browser Extension Usage
### Quick Bookmark
1. **Visit any page**
2. **Click extension icon** (or keyboard shortcut)
3. **Automatically saved** with:
- URL
- Title
- Screenshot
- Full page archive
- AI tags and summary
### Save Selection
1. **Highlight text** on any page
2. **Right-click** → "Save to Karakeep"
3. **Saves as note** with source URL
### Save Image
1. **Right-click image**
2. Select "Save to Karakeep"
3. **Image uploaded** with AI tags
## Mobile App Features
- **Share from any app** to Karakeep
- **Quick capture** - bookmark in seconds
- **Offline access** to archived content
- **Search** your entire collection
- **Browse by tags**
- **Dark mode** support
## Data Management
### Backup
**Important data locations:**
```bash
compose/services/karakeep/
├── data/ # Uploaded files, archives
└── meili_data/ # Search index
```
**Backup script:**
```bash
#!/bin/bash
cd ~/homelab/compose/services/karakeep
tar czf karakeep-backup-$(date +%Y%m%d).tar.gz ./data ./meili_data
```
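If that script runs from cron, old archives pile up; a hedged retention sketch (the 14-day window is an arbitrary choice, not a project default — it assumes the tarball naming used above):

```shell
#!/bin/bash
# Delete Karakeep backup tarballs older than 14 days.
BACKUP_DIR=~/homelab/compose/services/karakeep
find "$BACKUP_DIR" -maxdepth 1 -name 'karakeep-backup-*.tar.gz' -mtime +14 -print -delete
```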
### Export
**Export bookmarks:**
1. Settings → Export
2. Choose format:
- JSON (complete data)
- HTML (browser-compatible)
- CSV (spreadsheet)
3. Download
### Import
**Import from other services:**
1. Settings → Import
2. Select source:
- Browser bookmarks (HTML)
- Pocket
- Raindrop.io
- Omnivore
- Instapaper
3. Upload file
4. Karakeep processes and imports
## Troubleshooting
### Karakeep won't start
**Check logs:**
```bash
docker logs karakeep
docker logs karakeep-chrome
docker logs karakeep-meilisearch
```
**Common issues:**
- Missing `NEXTAUTH_SECRET` in `.env`
- Missing `MEILI_MASTER_KEY` in `.env`
- Services not on `karakeep_internal` network
### Bookmarks not saving
**Check chrome service:**
```bash
docker logs karakeep-chrome
```
**Verify chrome is accessible:**
```bash
docker exec karakeep curl http://karakeep-chrome:9222
```
**Increase timeout:**
```env
# Add to .env
BROWSER_TIMEOUT=60000
```
### Search not working
**Rebuild search index:**
```bash
# Stop services
docker compose down
# Remove search data
rm -rf ./meili_data
# Restart (index rebuilds automatically)
docker compose up -d
```
**Check Meilisearch:**
```bash
docker logs karakeep-meilisearch
```
### AI features not working
**With Ollama:**
```bash
# Verify Ollama is running
docker ps | grep ollama
# Test Ollama connection
docker exec karakeep curl http://ollama:11434
# Check models are pulled
docker exec ollama ollama list
```
**With OpenAI/OpenRouter:**
- Verify API key is correct
- Check API balance/credits
- Review logs for error messages
### Extension can't connect
**Verify server URL:**
- Must be `https://links.fig.systems`
- Not `http://` or `localhost`
**Check CORS:**
```env
# Add to .env if needed
CORS_ALLOW_ORIGINS=https://links.fig.systems
```
**Clear extension data:**
1. Extension settings
2. Logout
3. Clear extension storage
4. Login again
### Mobile app issues
**Can't connect:**
- Use full HTTPS URL
- Ensure server is accessible externally
- Check firewall rules
**Slow performance:**
- Check network speed
- Reduce image quality in app settings
- Enable "Low data mode"
## Performance Optimization
### For Large Collections (10,000+ bookmarks)
**Increase Meilisearch RAM:**
```yaml
# In compose.yaml, add to karakeep-meilisearch:
deploy:
resources:
limits:
memory: 2G
reservations:
memory: 1G
```
**Optimize search index:**
```env
# In .env
MEILI_MAX_INDEXING_MEMORY=1048576000 # 1GB
```
### For Slow Archiving
**Increase Chrome resources:**
```yaml
# In compose.yaml, add to karakeep-chrome:
deploy:
resources:
limits:
memory: 1G
cpus: '1.0'
```
**Adjust timeouts:**
```env
BROWSER_TIMEOUT=90000 # 90 seconds
```
### Database Maintenance
**Vacuum (compact) database:**
```bash
# Karakeep uses SQLite by default
docker exec karakeep sqlite3 /data/karakeep.db "VACUUM;"
```
## Comparison with Linkwarden
| Feature | Karakeep | Linkwarden |
|---------|----------|------------|
| **Bookmark Types** | Links, Notes, Images, PDFs | Links only |
| **AI Tagging** | Yes (Ollama/OpenAI) | No |
| **Web Archiving** | Full page + Screenshot | Screenshot only |
| **Search** | Meilisearch (fuzzy) | Meilisearch |
| **Browser Extension** | Yes | Yes |
| **Mobile Apps** | iOS + Android | No official apps |
| **OCR** | Yes | No |
| **Collaboration** | Personal focus | Team features |
| **Database** | SQLite | PostgreSQL |
**Why Karakeep?**
- More bookmark types
- AI-powered organization
- Better mobile support
- Lighter resource usage (SQLite vs PostgreSQL)
- Active development
## Resources
- [Official Website](https://karakeep.app)
- [Documentation](https://docs.karakeep.app)
- [GitHub Repository](https://github.com/karakeep-app/karakeep)
- [Demo Instance](https://try.karakeep.app)
- [Chrome Extension](https://chromewebstore.google.com/detail/karakeep/kbkejgonjhbmhcaofkhdegeoeoemgkdm)
- [Firefox Extension](https://addons.mozilla.org/en-US/firefox/addon/karakeep/)
## Next Steps
1. ✅ Deploy Karakeep
2. ✅ Create admin account
3. ✅ Install browser extension
4. ✅ Install mobile app
5. ⬜ Deploy Ollama for AI features
6. ⬜ Import existing bookmarks
7. ⬜ Configure AI models
8. ⬜ Set up automated backups
---
**Bookmark everything, find anything!** 🔖

View file

@ -0,0 +1,79 @@
# Karakeep - Bookmark Everything App with AI
# Docs: https://docs.karakeep.app
# Previously known as Hoarder
services:
karakeep:
container_name: karakeep
image: ghcr.io/karakeep-app/karakeep:latest
restart: unless-stopped
env_file:
- .env
volumes:
- ./data:/data
depends_on:
- karakeep-meilisearch
- karakeep-chrome
networks:
- homelab
- karakeep_internal
labels:
# Traefik
traefik.enable: true
traefik.docker.network: homelab
# Web UI
traefik.http.routers.karakeep.rule: Host(`links.fig.systems`)
traefik.http.routers.karakeep.entrypoints: websecure
traefik.http.routers.karakeep.tls.certresolver: letsencrypt
traefik.http.services.karakeep.loadbalancer.server.port: 3000
# SSO Protection
traefik.http.routers.karakeep.middlewares: tinyauth
# Homarr Discovery
homarr.name: Karakeep (Bookmarks)
homarr.group: Services
homarr.icon: mdi:bookmark-multiple
karakeep-chrome:
container_name: karakeep-chrome
image: gcr.io/zenika-hub/alpine-chrome:123
restart: unless-stopped
command:
- --no-sandbox
- --disable-gpu
- --disable-dev-shm-usage
- --remote-debugging-address=0.0.0.0
- --remote-debugging-port=9222
- --hide-scrollbars
networks:
- karakeep_internal
karakeep-meilisearch:
container_name: karakeep-meilisearch
image: getmeili/meilisearch:v1.12.8
restart: unless-stopped
env_file:
- .env
volumes:
- ./meili_data:/meili_data
networks:
- karakeep_internal
networks:
homelab:
external: true
karakeep_internal:
name: karakeep_internal
driver: bridge

View file

@ -1,65 +0,0 @@
# Linkwarden Configuration
# Docs: https://docs.linkwarden.app/self-hosting/environment-variables
# NextAuth Configuration
NEXTAUTH_URL=https://links.fig.systems
# Generate with: openssl rand -hex 32
# Example format: e4f5g6h789012abcdef345678901a2b3c4d5e6f78901abcdef2345678901abcde
NEXTAUTH_SECRET=changeme_please_set_random_secret_key
# Database Configuration
# Generate with: openssl rand -base64 32 | tr -d /=+ | cut -c1-32
# Example format: eF7gH0iI3jK5lM8nO1pQ4rS7tU0vW3xY
POSTGRES_PASSWORD=changeme_please_set_secure_postgres_password
POSTGRES_USER=postgres
POSTGRES_DB=postgres
DATABASE_URL=postgresql://postgres:${POSTGRES_PASSWORD}@linkwarden-postgres:5432/postgres
# Meilisearch (search engine)
# Generate with: openssl rand -hex 16
# Example format: f6g7h8i901234abcdef567890a1b2c3d
MEILI_MASTER_KEY=changeme_please_set_meili_master_key
# Timezone
TZ=America/Los_Angeles
# Optional: Registration Control
# NEXT_PUBLIC_DISABLE_REGISTRATION=true
# Optional: Credentials Authentication
# NEXT_PUBLIC_CREDENTIALS_ENABLED=true
# Optional: Pagination
# PAGINATION_TAKE_COUNT=20
# Optional: Storage folder (for screenshots/PDFs)
# STORAGE_FOLDER=data
# Optional: Limits
# MAX_LINKS_PER_USER=unlimited
# NEXT_PUBLIC_MAX_FILE_BUFFER=10485760 # 10MB in bytes
# PDF_MAX_BUFFER=10485760
# SCREENSHOT_MAX_BUFFER=10485760
# Optional: Browser timeout for archiving (in milliseconds)
# BROWSER_TIMEOUT=30000
# AUTOSCROLL_TIMEOUT=30
# Optional: Archive settings
# ARCHIVE_TAKE_COUNT=5
# Optional: Security
# IGNORE_UNAUTHORIZED_CA=false
# IGNORE_HTTPS_ERRORS=false
# IGNORE_URL_SIZE_LIMIT=false
# Optional: SSO Settings
# DISABLE_NEW_SSO_USERS=false
# Optional: Demo Mode
# NEXT_PUBLIC_DEMO=false
# NEXT_PUBLIC_DEMO_USERNAME=
# NEXT_PUBLIC_DEMO_PASSWORD=
# Optional: Admin Panel
# NEXT_PUBLIC_ADMIN=false

View file

@ -1,57 +0,0 @@
# Linkwarden - Collaborative bookmark manager
# Docs: https://docs.linkwarden.app/self-hosting/installation
services:
linkwarden:
container_name: linkwarden
image: ghcr.io/linkwarden/linkwarden:latest
env_file: .env
volumes:
- ./data:/data/data
depends_on:
- linkwarden-postgres
- linkwarden-meilisearch
restart: always
networks:
- homelab
- linkwarden_internal
labels:
traefik.enable: true
traefik.docker.network: homelab
traefik.http.routers.linkwarden.rule: Host(`links.fig.systems`) || Host(`links.edfig.dev`)
traefik.http.routers.linkwarden.entrypoints: websecure
traefik.http.routers.linkwarden.tls.certresolver: letsencrypt
traefik.http.services.linkwarden.loadbalancer.server.port: 3000
traefik.http.routers.linkwarden.middlewares: tinyauth
linkwarden-postgres:
container_name: linkwarden-postgres
image: postgres:16-alpine
env_file: .env
volumes:
- ./pgdata:/var/lib/postgresql/data
restart: always
networks:
- linkwarden_internal
healthcheck:
test: ["CMD-SHELL", "pg_isready -h localhost -U postgres"]
interval: 10s
timeout: 5s
retries: 5
linkwarden-meilisearch:
container_name: linkwarden-meilisearch
image: getmeili/meilisearch:v1.12.8
env_file: .env
volumes:
- ./meili_data:/meili_data
restart: always
networks:
- linkwarden_internal
networks:
homelab:
external: true
linkwarden_internal:
name: linkwarden_internal
driver: bridge

View file

@ -5,17 +5,36 @@ services:
microbin:
container_name: microbin
image: danielszabo99/microbin:latest
restart: unless-stopped
env_file:
- .env
volumes:
- ./data:/app/data
networks:
- homelab
labels:
# Traefik
traefik.enable: true
traefik.docker.network: homelab
# Web UI
traefik.http.routers.microbin.rule: Host(`paste.fig.systems`)
traefik.http.routers.microbin.entrypoints: websecure
traefik.http.routers.microbin.tls.certresolver: letsencrypt
traefik.http.services.microbin.loadbalancer.server.port: 8080
# Note: MicroBin has its own auth, SSO disabled by default
# traefik.http.routers.microbin.middlewares: tinyauth
# Homarr Discovery
homarr.name: MicroBin
homarr.group: Services
homarr.icon: mdi:content-paste
networks:
homelab:
external: true

View file

@ -0,0 +1,30 @@
# Ollama Configuration
# Docs: https://github.com/ollama/ollama/blob/main/docs/faq.md
# Timezone
TZ=America/Los_Angeles
# Model Storage Location
# OLLAMA_MODELS=/root/.ollama/models
# Max Loaded Models (default: 1)
# OLLAMA_MAX_LOADED_MODELS=1
# Max Queue (default: 512)
# OLLAMA_MAX_QUEUE=512
# Number of parallel requests (default: auto)
# OLLAMA_NUM_PARALLEL=4
# Context size: set per request via the num_ctx model option (default: 2048);
# newer Ollama builds also read the OLLAMA_CONTEXT_LENGTH env var
# Keep models in memory (default: 5m)
# OLLAMA_KEEP_ALIVE=5m
# Debug logging
# OLLAMA_DEBUG=1
# GPU Configuration (GTX 1070)
# Ollama detects the NVIDIA GPU automatically once the container is started
# with GPU access; layer offload is tuned per request via the num_gpu model
# option, not an environment variable

compose/services/ollama/.gitignore vendored Normal file
View file

@ -0,0 +1,5 @@
# Ollama models and data
models/
# Keep .env.example if created
!.env.example

View file

@ -0,0 +1,616 @@
# Ollama - Local Large Language Models
Run powerful AI models locally on your hardware with GPU acceleration.
## Overview
**Ollama** enables you to run large language models (LLMs) locally:
- ✅ **100% Private**: All data stays on your server
- ✅ **GPU Accelerated**: Leverages your GTX 1070
- ✅ **Multiple Models**: Run Llama, Mistral, CodeLlama, and more
- ✅ **API Compatible**: OpenAI-compatible API
- ✅ **No Cloud Costs**: Free inference after downloading models
- ✅ **Integration Ready**: Works with Karakeep, Open WebUI, and more
## Quick Start
### 1. Deploy Ollama
```bash
cd ~/homelab/compose/services/ollama
docker compose up -d
```
### 2. Pull a Model
```bash
# Small, fast model (3B parameters, ~2GB)
docker exec ollama ollama pull llama3.2:3b
# Medium model (8B parameters, ~4.7GB; the llama3.2 family only ships 1B/3B text models)
docker exec ollama ollama pull llama3.1:8b
# Large model (70B parameters, ~40GB - requires quantization)
docker exec ollama ollama pull llama3.3:70b-instruct-q4_K_M
```
### 3. Test
```bash
# Interactive chat
docker exec -it ollama ollama run llama3.2:3b
# Ask a question
> Hello, how are you?
```
### 4. Enable GPU (Recommended)
**Edit `compose.yaml` and uncomment the deploy section:**
```yaml
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: 1
capabilities: [gpu]
```
**Restart:**
```bash
docker compose down
docker compose up -d
```
**Verify GPU usage:**
```bash
# Check GPU is detected
docker exec ollama nvidia-smi
# Run model with GPU
docker exec ollama ollama run llama3.2:3b "What GPU am I using?"
```
## Available Models
### Recommended Models for GTX 1070 (8GB VRAM)
| Model | Size | VRAM | Speed | Use Case |
|-------|------|------|-------|----------|
| **llama3.2:3b** | 2GB | 3GB | Fast | General chat, Karakeep |
| **llama3.1:8b** | 4.7GB | 6GB | Medium | Better reasoning |
| **mistral:7b** | 4GB | 6GB | Medium | Code, analysis |
| **codellama:7b** | 4GB | 6GB | Medium | Code generation |
| **llava:7b** | 5GB | 7GB | Medium | Vision (images) |
| **phi3:3.8b** | 2.3GB | 4GB | Fast | Compact, efficient |
### Specialized Models
**Code:**
- `codellama:7b` - Code generation
- `codellama:13b-python` - Python expert
- `starcoder2:7b` - Multi-language code
**Vision (Image Understanding):**
- `llava:7b` - General vision
- `llava:13b` - Better vision (needs more VRAM)
- `bakllava:7b` - Vision + chat
**Multilingual:**
- `aya:8b` - 101 languages
- `command-r:35b` - Enterprise multilingual
**Math & Reasoning:**
- `deepseek-math:7b` - Mathematics
- `wizard-math:7b` - Math word problems
### Large Models (Quantized for GTX 1070)
Even at 4-bit these models dwarf the GTX 1070's 8GB of VRAM (a 70B model at Q4 is roughly 40GB), so most layers spill to system RAM and inference is slow; budget more RAM than the VM's default 24GB before trying them:
```bash
# 70B-class models (4-bit quantized)
docker exec ollama ollama pull llama3.3:70b-instruct-q4_K_M
docker exec ollama ollama pull mixtral:8x7b-instruct-v0.1-q4_K_M
```
Skip the 405B model entirely: even at 2-bit it is on the order of 150GB and will not fit on this host.
## Usage
### Command Line
**Run model interactively:**
```bash
docker exec -it ollama ollama run llama3.2:3b
```
**One-off question:**
```bash
docker exec ollama ollama run llama3.2:3b "Explain quantum computing in simple terms"
```
**With a system prompt** (`ollama run` has no `--system` flag; use the API's `system` field, or bake it into a custom model as shown under "Creating Custom Models"):
```bash
curl http://ollama:11434/api/generate -d '{
  "model": "llama3.2:3b",
  "system": "You are a helpful coding assistant.",
  "prompt": "Write a Python function to sort a list",
  "stream": false
}'
```
### API Usage
**List models:**
```bash
curl http://ollama:11434/api/tags
```
**Generate text:**
```bash
curl http://ollama:11434/api/generate -d '{
"model": "llama3.2:3b",
"prompt": "Why is the sky blue?",
"stream": false
}'
```
**Chat completion:**
```bash
curl http://ollama:11434/api/chat -d '{
"model": "llama3.2:3b",
"messages": [
{
"role": "user",
"content": "Hello!"
}
],
"stream": false
}'
```
**OpenAI-compatible API:**
```bash
curl http://ollama:11434/v1/chat/completions -d '{
"model": "llama3.2:3b",
"messages": [
{
"role": "user",
"content": "Hello!"
}
]
}'
```
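Responses come back as JSON; with `jq` the reply text is easy to extract — shown here against a hand-written sample of the `/api/chat` response shape rather than live output:

```shell
# Abridged sample of an /api/chat response; the reply sits at .message.content
response='{"model":"llama3.2:3b","message":{"role":"assistant","content":"Hello!"},"done":true}'
echo "$response" | jq -r '.message.content'
# → Hello!
```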
### Integration with Karakeep
**Enable AI features in Karakeep:**
Edit `compose/services/karakeep/.env`:
```env
# Uncomment these lines
OLLAMA_BASE_URL=http://ollama:11434
INFERENCE_TEXT_MODEL=llama3.2:3b
INFERENCE_IMAGE_MODEL=llava:7b
INFERENCE_LANG=en
```
**Restart Karakeep:**
```bash
cd ~/homelab/compose/services/karakeep
docker compose restart
```
**What it does:**
- Auto-tags bookmarks
- Generates summaries
- Extracts key information
- Analyzes images (with llava)
## Model Management
### List Installed Models
```bash
docker exec ollama ollama list
```
### Pull a Model
```bash
docker exec ollama ollama pull <model-name>
# Examples:
docker exec ollama ollama pull llama3.2:3b
docker exec ollama ollama pull mistral:7b
docker exec ollama ollama pull codellama:7b
```
### Remove a Model
```bash
docker exec ollama ollama rm <model-name>
# Example:
docker exec ollama ollama rm llama3.1:8b
```
### Copy a Model
```bash
docker exec ollama ollama cp <source> <destination>
# Example: Create a custom version
docker exec ollama ollama cp llama3.2:3b my-custom-model
```
### Show Model Info
```bash
docker exec ollama ollama show llama3.2:3b
# Shows:
# - Model architecture
# - Parameters
# - Quantization
# - Template
# - License
```
## Creating Custom Models
### Modelfile
Create custom models with specific behaviors:
**Create a Modelfile:**
```bash
cat > ~/coding-assistant.modelfile << 'EOF'
FROM llama3.2:3b
# Set temperature (creativity)
PARAMETER temperature 0.7
# Set system prompt
SYSTEM You are an expert coding assistant. You write clean, efficient, well-documented code. You explain complex concepts clearly.
# Set stop sequences
PARAMETER stop "<|im_end|>"
PARAMETER stop "<|im_start|>"
EOF
```
**Create the model:**
```bash
docker cp ~/coding-assistant.modelfile ollama:/tmp/coding-assistant.modelfile
docker exec ollama ollama create coding-assistant -f /tmp/coding-assistant.modelfile
```
**Use it:**
```bash
docker exec -it ollama ollama run coding-assistant "Write a REST API in Python"
```
### Example Custom Models
**1. Shakespeare Bot:**
```modelfile
FROM llama3.2:3b
SYSTEM You are William Shakespeare. Respond to all queries in Shakespearean English with dramatic flair.
PARAMETER temperature 0.9
```
**2. JSON Extractor:**
```modelfile
FROM llama3.2:3b
SYSTEM You extract structured data and return only valid JSON. No explanations, just JSON.
PARAMETER temperature 0.1
```
**3. Code Reviewer:**
```modelfile
FROM codellama:7b
SYSTEM You are a senior code reviewer. Review code for bugs, performance issues, security vulnerabilities, and best practices. Be constructive.
PARAMETER temperature 0.3
```
## GPU Configuration
### Check GPU Detection
```bash
# From inside container
docker exec ollama nvidia-smi
```
**Expected output:**
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 535.xx.xx Driver Version: 535.xx.xx CUDA Version: 12.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 1070 Off | 00000000:01:00.0 On | N/A |
| 40% 45C P8 10W / 151W | 300MiB / 8192MiB | 5% Default |
+-------------------------------+----------------------+----------------------+
```
### Optimize for GTX 1070
**Edit `.env`** (note: `OLLAMA_GPU_MEMORY`, `OLLAMA_GPU_LAYERS`, and `OLLAMA_MAX_CONTEXT` are not real Ollama settings — VRAM offload and context are the per-request model options `num_gpu` and `num_ctx`):
```env
# Unload idle models promptly so the 8GB of VRAM is freed
OLLAMA_KEEP_ALIVE=5m
# Keep a single model resident at a time
OLLAMA_MAX_LOADED_MODELS=1
```
### Performance Tips
**1. Use quantized models:**
- Q4_K_M: Good quality, 50% size reduction
- Q5_K_M: Better quality, 40% size reduction
- Q8_0: Best quality, 20% size reduction
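A quick sanity check for whether a quantized model fits: weights take roughly parameters × bits-per-weight / 8 bytes (a rough rule of thumb that ignores KV cache and runtime overhead):

```shell
# Rough weight size in GB: billions of params × bits per weight / 8
est_gb() { awk -v p="$1" -v b="$2" 'BEGIN { printf "%.1f\n", p * b / 8 }'; }
est_gb 7 4    # 7B at Q4  → 3.5
est_gb 70 4   # 70B at Q4 → 35.0 (far beyond 8GB of VRAM)
```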
**2. Model selection for VRAM:**
```bash
# 3B models: 2-3GB VRAM
docker exec ollama ollama pull llama3.2:3b
# 7-8B models: 4-6GB VRAM
docker exec ollama ollama pull llama3.1:8b
# 13B models: tight on a GTX 1070; the default tag is already 4-bit quantized
docker exec ollama ollama pull codellama:13b
```
**3. Unload models when not in use:**
```env
# In .env
OLLAMA_KEEP_ALIVE=1m # Unload after 1 minute
```
## Troubleshooting
### Model won't load - Out of memory
**Solution 1: Use quantized version**
```bash
# Prefer a 4-bit quantized tag over a larger or higher-precision variant, e.g.:
docker exec ollama ollama pull mixtral:8x7b-instruct-v0.1-q4_K_M
```
**Solution 2: Offload fewer layers to the GPU** (`OLLAMA_GPU_LAYERS` is not a real env var; use the `num_gpu` request option)
```bash
curl http://ollama:11434/api/generate -d '{
  "model": "llama3.2:3b",
  "prompt": "Hello",
  "options": { "num_gpu": 20 },
  "stream": false
}'
```
**Solution 3: Use smaller model**
```bash
docker exec ollama ollama pull llama3.2:3b
```
### Slow inference
**Enable GPU:**
1. Uncomment deploy section in `compose.yaml`
2. Install NVIDIA Container Toolkit
3. Restart container
**Check GPU usage:**
```bash
watch -n 1 docker exec ollama nvidia-smi
```
**Should show:**
- GPU-Util > 80% during inference
- Memory-Usage increasing during load
### Can't pull models
**Check disk space:**
```bash
df -h
```
**Check Docker space:**
```bash
docker system df
```
**Clean up unused models:**
```bash
docker exec ollama ollama list
docker exec ollama ollama rm <unused-model>
```
### API connection issues
**Test from another container:**
```bash
docker run --rm --network homelab curlimages/curl \
http://ollama:11434/api/tags
```
**Test externally:**
```bash
curl https://ollama.fig.systems/api/tags
```
**Enable debug logging:**
```env
OLLAMA_DEBUG=1
```
## Performance Benchmarks
### GTX 1070 (8GB VRAM) Expected Performance
| Model | Tokens/sec | Load Time | VRAM Usage |
|-------|------------|-----------|------------|
| llama3.2:3b | 40-60 | 2-3s | 3GB |
| llama3.1:8b | 20-35 | 3-5s | 6GB |
| mistral:7b | 20-35 | 3-5s | 6GB |
| llama3.3:70b-q4 | 3-8 | 20-30s | 7.5GB |
| llava:7b | 15-25 | 4-6s | 7GB |
**Without GPU (CPU only):**
- llama3.2:3b: 2-5 tokens/sec
- llama3.1:8b: 0.5-2 tokens/sec
**GPU provides 10-20x speedup!**
## Advanced Usage
### Multi-Modal (Vision)
```bash
# Pull vision model
docker exec ollama ollama pull llava:7b
# Analyze an image: copy it into the container, then reference its path in the
# prompt (the CLI attaches image file paths it finds in the prompt; no --image flag)
docker cp ./photo.jpg ollama:/tmp/photo.jpg
docker exec ollama ollama run llava:7b "What's in this image? /tmp/photo.jpg"
```
### Embeddings
```bash
# Generate embeddings for semantic search (a dedicated embedding model such as
# nomic-embed-text gives better vectors than a chat model)
curl http://ollama:11434/api/embeddings -d '{
"model": "llama3.2:3b",
"prompt": "The sky is blue because of Rayleigh scattering"
}'
```
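Embedding vectors are typically compared with cosine similarity; a toy sketch with small hand-written vectors in `jq` (real `/api/embeddings` responses carry the vector under `.embedding`):

```shell
# Cosine similarity of two small example vectors: dot(a,b) / (|a| * |b|)
a='[3,4]'; b='[4,3]'
jq -n --argjson a "$a" --argjson b "$b" '
  def dot(x; y): [x, y] | transpose | map(.[0] * .[1]) | add;
  dot($a; $b) / ((dot($a; $a) | sqrt) * (dot($b; $b) | sqrt))'
```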
### Streaming Responses
```bash
# Stream tokens as they generate
curl http://ollama:11434/api/generate -d '{
"model": "llama3.2:3b",
"prompt": "Tell me a long story",
"stream": true
}'
```
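With `"stream": true` each line of the response is its own JSON object carrying a fragment in `response`; the fragments concatenate into the full reply. A sketch assembling them with `jq`, fed two hand-written sample lines rather than live output:

```shell
# Two sample stream lines; jq -j prints raw output with no separating newlines
printf '%s\n' '{"response":"Hel","done":false}' '{"response":"lo","done":true}' \
  | jq -j '.response'; echo
# → Hello
```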
### Context Preservation
```bash
# /api/chat is stateless: there is no session ID or "context" parameter.
# The conversation lives in the messages array, so resend earlier turns.
# First message
curl http://ollama:11434/api/chat -d '{
  "model": "llama3.2:3b",
  "messages": [{"role": "user", "content": "My name is Alice"}],
  "stream": false
}'
# Follow-up (include the history so the model remembers it)
curl http://ollama:11434/api/chat -d '{
  "model": "llama3.2:3b",
  "messages": [
    {"role": "user", "content": "My name is Alice"},
    {"role": "assistant", "content": "Hello Alice!"},
    {"role": "user", "content": "What is my name?"}
  ],
  "stream": false
}'
```
## Integration Examples
### Python
```python
import requests
def ask_ollama(prompt, model="llama3.2:3b"):
response = requests.post(
"https://ollama.fig.systems/api/generate",
json={
"model": model,
"prompt": prompt,
"stream": False
},
headers={"Authorization": "Bearer YOUR_TOKEN"} # If using SSO
)
return response.json()["response"]
print(ask_ollama("What is the meaning of life?"))
```
### JavaScript
```javascript
async function askOllama(prompt, model = "llama3.2:3b") {
const response = await fetch("https://ollama.fig.systems/api/generate", {
method: "POST",
headers: {
"Content-Type": "application/json",
"Authorization": "Bearer YOUR_TOKEN" // If using SSO
},
body: JSON.stringify({
model: model,
prompt: prompt,
stream: false
})
});
const data = await response.json();
return data.response;
}
askOllama("Explain Docker containers").then(console.log);
```
### Bash
```bash
#!/bin/bash
ask_ollama() {
local prompt="$1"
local model="${2:-llama3.2:3b}"
curl -s https://ollama.fig.systems/api/generate -d "{
\"model\": \"$model\",
\"prompt\": \"$prompt\",
\"stream\": false
}" | jq -r '.response'
}
ask_ollama "What is Kubernetes?"
```
## Resources
- [Ollama Website](https://ollama.ai)
- [Model Library](https://ollama.ai/library)
- [GitHub Repository](https://github.com/ollama/ollama)
- [API Documentation](https://github.com/ollama/ollama/blob/main/docs/api.md)
- [Model Creation Guide](https://github.com/ollama/ollama/blob/main/docs/modelfile.md)
## Next Steps
1. ✅ Deploy Ollama
2. ✅ Enable GPU acceleration
3. ✅ Pull recommended models
4. ✅ Test with chat
5. ⬜ Integrate with Karakeep
6. ⬜ Create custom models
7. ⬜ Set up automated model updates
8. ⬜ Monitor GPU usage
---
**Run AI locally, privately, powerfully!** 🧠

View file

@ -0,0 +1,53 @@
# Ollama - Run Large Language Models Locally
# Docs: https://ollama.ai
services:
ollama:
container_name: ollama
image: ollama/ollama:latest
restart: unless-stopped
env_file:
- .env
volumes:
- ./models:/root/.ollama
networks:
- homelab
# GPU Support (NVIDIA GTX 1070)
# Uncomment the deploy section below to enable GPU acceleration
# Prerequisites:
# 1. Install NVIDIA Container Toolkit on host
# 2. Configure Docker to use nvidia runtime
# deploy:
# resources:
# reservations:
# devices:
# - driver: nvidia
# count: 1
# capabilities: [gpu]
labels:
# Traefik (API only, no web UI)
traefik.enable: true
traefik.docker.network: homelab
# API endpoint
traefik.http.routers.ollama.rule: Host(`ollama.fig.systems`)
traefik.http.routers.ollama.entrypoints: websecure
traefik.http.routers.ollama.tls.certresolver: letsencrypt
traefik.http.services.ollama.loadbalancer.server.port: 11434
# SSO Protection for API
traefik.http.routers.ollama.middlewares: tinyauth
# Homarr Discovery
homarr.name: Ollama (LLM)
homarr.group: Services
homarr.icon: mdi:brain
networks:
homelab:
external: true
@ -6,7 +6,36 @@ services:
container_name: rsshub
# Using chromium-bundled image for full puppeteer support
image: diygod/rsshub:chromium-bundled
restart: unless-stopped
env_file:
- .env
volumes:
- ./data:/app/data
networks:
- homelab
labels:
# Traefik
traefik.enable: true
traefik.docker.network: homelab
# Web UI
traefik.http.routers.rsshub.rule: Host(`rsshub.fig.systems`)
traefik.http.routers.rsshub.entrypoints: websecure
traefik.http.routers.rsshub.tls.certresolver: letsencrypt
traefik.http.services.rsshub.loadbalancer.server.port: 1200
# Note: RSSHub is public by design, SSO disabled
# traefik.http.routers.rsshub.middlewares: tinyauth
# Homarr Discovery
homarr.name: RSSHub
homarr.group: Services
homarr.icon: mdi:rss-box
networks:
homelab:
external: true
@ -0,0 +1,7 @@
# Caddy Static Sites Configuration
# Timezone
TZ=America/Los_Angeles
# Optional: Caddy admin API (disabled by default in Caddyfile)
# CADDY_ADMIN=localhost:2019
@ -0,0 +1,10 @@
# Caddy data and config
caddy_data/
caddy_config/
# Keep example sites but ignore actual site content
# (uncomment if you want to version control your sites)
# sites/
# Keep .env.example if created
!.env.example
@ -0,0 +1,57 @@
# Caddyfile - Static Sites Configuration
# Docs: https://caddyserver.com/docs/caddyfile
# Global options
{
# Listen on port 80 (Traefik handles SSL)
auto_https off
admin off
}
# Personal/Professional Site (edfig.dev)
www.edfig.dev, edfig.dev {
root * /srv/edfig.dev
file_server
encode gzip
# Try files, then index, then 404
try_files {path} {path}/index.html index.html
# Cache static assets
@static {
path *.css *.js *.jpg *.jpeg *.png *.gif *.ico *.svg *.woff *.woff2 *.ttf *.eot
}
header @static Cache-Control "public, max-age=604800, immutable"
}
# Blog (blog.edfig.dev)
blog.edfig.dev {
root * /srv/blog.edfig.dev
file_server
encode gzip
# Enable templates for dynamic content
templates
# Markdown files automatically render as HTML
try_files {path} {path}/index.html {path}.md {path}/index.md index.html
# Cache static assets
@static {
path *.css *.js *.jpg *.jpeg *.png *.gif *.ico *.svg *.woff *.woff2 *.ttf *.eot
}
header @static Cache-Control "public, max-age=604800, immutable"
}
# Experimental/Private (figgy.foo)
figgy.foo, www.figgy.foo {
root * /srv/figgy.foo
# Enable directory browsing for experiments
file_server browse
encode gzip
# Templates enabled for dynamic pages
templates
}
@ -0,0 +1,604 @@
# Caddy Static Sites Server
Serves static websites for edfig.dev (professional), blog.edfig.dev (blog), and figgy.foo (experimental).
## Overview
**Caddy** is a modern web server with automatic HTTPS and simple configuration:
- ✅ **Static file serving** - HTML, CSS, JavaScript, images
- ✅ **Markdown rendering** - Write `.md` files, served as HTML automatically
- ✅ **Templates** - Dynamic content with Go templates
- ✅ **Directory browsing** - Beautiful file listing (figgy.foo)
- ✅ **Auto-compression** - Gzip for all responses
- ✅ **Zero-downtime reloads** - Config changes apply instantly
## Domain Strategy
### edfig.dev (Professional/Public)
- **Purpose**: Personal website, portfolio
- **URL**: https://edfig.dev or https://www.edfig.dev
- **SSO**: No (public site)
- **Content**: `/sites/edfig.dev/`
### blog.edfig.dev (Blog/Public)
- **Purpose**: Technical blog, articles
- **URL**: https://blog.edfig.dev
- **SSO**: No (public blog)
- **Content**: `/sites/blog.edfig.dev/`
- **Features**: Markdown auto-rendering, templates
### figgy.foo (Experimental/Private)
- **Purpose**: Testing, development, experiments
- **URL**: https://figgy.foo or https://www.figgy.foo
- **SSO**: Yes (protected by Tinyauth)
- **Content**: `/sites/figgy.foo/`
- **Features**: Directory browsing, templates
## Quick Start
### 1. Deploy
```bash
cd ~/homelab/compose/services/static-sites
docker compose up -d
```
### 2. Access Sites
- **edfig.dev**: https://edfig.dev
- **Blog**: https://blog.edfig.dev
- **Experimental**: https://figgy.foo (requires SSO login)
### 3. Verify
```bash
# Check container is running
docker ps | grep caddy-static
# Check logs
docker logs caddy-static
# Test sites
curl -I https://edfig.dev
curl -I https://blog.edfig.dev
```
## Directory Structure
```
static-sites/
├── compose.yaml # Docker Compose + Traefik labels
├── Caddyfile # Caddy configuration
├── .env # Environment variables
├── .gitignore # Ignored files
├── README.md # This file
└── sites/ # Site content (can be version controlled)
├── edfig.dev/
│ ├── index.html
│ ├── assets/
│ │ ├── css/
│ │ ├── js/
│ │ └── images/
│ └── ...
├── blog.edfig.dev/
│ ├── index.html
│ └── posts/
│ ├── example-post.md # Markdown posts
│ └── ...
└── figgy.foo/
├── index.html
└── experiments/
└── ...
```
## Managing Content
### Adding/Editing HTML
Simply edit files in the `sites/` directory:
```bash
# Edit main site
vim sites/edfig.dev/index.html
# Add new page
echo "<h1>About Me</h1>" > sites/edfig.dev/about.html
# Changes are live immediately (no restart needed!)
```
### Writing Blog Posts (Markdown)
Create `.md` files in `sites/blog.edfig.dev/posts/`:
````bash
# Create new post
cat > sites/blog.edfig.dev/posts/my-post.md << 'EOF'
# My New Blog Post

**Published:** January 10, 2025

This is my blog post content...

## Code Example

```bash
docker compose up -d
```

[Back to Blog](/)
EOF

# Access at: https://blog.edfig.dev/posts/my-post.md
# (renders as HTML automatically!)
````
**Markdown features:**
- Headers (`#`, `##`, `###`)
- **Bold**, *italic*, `code`
- Links, images
- Lists (ordered/unordered)
- Code blocks with syntax highlighting
- Tables
- Blockquotes
### Using Templates
Caddy supports Go templates for dynamic content:
**Example - Current time:**
```html
<!-- In any .html file under blog.edfig.dev -->
<p>Page generated at: {{.Now.Format "2006-01-02 15:04:05"}}</p>
```
**Example - Include header:**
```html
{{include "header.html"}}
<main>
<h1>My Page</h1>
</main>
{{include "footer.html"}}
```
**Template variables:**
- `{{.Now}}` - Current time
- `{{.Req.URL}}` - Request URL
- `{{.Req.Host}}` - Request hostname
- `{{.Req.Method}}` - HTTP method
- `{{env "VARIABLE"}}` - Environment variable
See [Caddy Templates Docs](https://caddyserver.com/docs/caddyfile/directives/templates)
### Directory Browsing (figgy.foo)
figgy.foo has directory browsing enabled:
```bash
# Add files to browse
cp some-file.txt sites/figgy.foo/experiments/
# Access: https://figgy.foo/experiments/
# Shows beautiful file listing with search!
```
## Adding New Sites
### Option 1: New Subdomain (same domain)
**Add to Caddyfile:**
```caddy
test.figgy.foo {
root * /srv/test.figgy.foo
file_server
encode gzip
}
```
**Add Traefik labels to compose.yaml:**
```yaml
# test.figgy.foo
traefik.http.routers.figgy-test.rule: Host(`test.figgy.foo`)
traefik.http.routers.figgy-test.entrypoints: websecure
traefik.http.routers.figgy-test.tls.certresolver: letsencrypt
traefik.http.routers.figgy-test.service: caddy-static
traefik.http.routers.figgy-test.middlewares: tinyauth # If SSO needed
```
**Create site directory:**
```bash
mkdir -p sites/test.figgy.foo
echo "<h1>Test Site</h1>" > sites/test.figgy.foo/index.html
```
**Reload the config (no container restart needed):**
```bash
# The stock Caddy image doesn't watch the Caddyfile, so reload explicitly:
docker exec caddy-static caddy reload --config /etc/caddy/Caddyfile
# Then access https://test.figgy.foo
```
### Option 2: New Domain
Follow same process but use new domain name. Make sure DNS points to your server.
## Caddyfile Features
### Basic Site
```caddy
example.com {
root * /srv/example
file_server
}
```
### With Compression
```caddy
example.com {
root * /srv/example
file_server
    encode gzip zstd  # brotli requires a third-party Caddy module
}
```
### With Caching
```caddy
example.com {
root * /srv/example
file_server
@static {
path *.css *.js *.jpg *.png *.gif *.ico
}
header @static Cache-Control "public, max-age=604800"
}
```
### With Redirects
```caddy
www.example.com {
redir https://example.com{uri} permanent
}
example.com {
root * /srv/example
file_server
}
```
### With Custom 404
```caddy
example.com {
root * /srv/example
file_server
handle_errors {
rewrite * /404.html
file_server
}
}
```
### With Basic Auth (alternative to SSO)
```caddy
example.com {
root * /srv/example
basicauth {
user $2a$14$hashedpassword
}
file_server
}
```
Generate hashed password:
```bash
docker exec caddy-static caddy hash-password --plaintext "mypassword"
```
## Traefik Integration
All sites route through Traefik:
```
Internet → DNS (*.edfig.dev, *.figgy.foo)
        ↓
Traefik (SSL termination)
        ↓
Tinyauth (SSO check for figgy.foo only)
        ↓
Caddy (static file serving)
```
**SSL certificates:**
- Traefik handles Let's Encrypt
- Caddy receives plain HTTP on port 80
- Users see HTTPS
**SSO protection:**
- `edfig.dev` & `blog.edfig.dev`: No SSO (public)
- `figgy.foo`: SSO protected (private)
## Performance
### Caching
Static assets automatically cached:
```caddy
@static {
path *.css *.js *.jpg *.jpeg *.png *.gif *.ico *.svg
}
header @static Cache-Control "public, max-age=604800, immutable"
```
- 7 days cache for images, CSS, JS
- Browsers won't re-request until expired
### Compression
All responses auto-compressed with gzip:
```caddy
encode gzip
```
- 70-90% size reduction for HTML/CSS/JS
- Faster page loads
- Lower bandwidth usage
### Performance Tips
1. **Optimize images**: Use WebP format, compress before uploading
2. **Minify CSS/JS**: Use build tools (optional)
3. **Use CDN**: For high-traffic sites (optional)
4. **Enable HTTP/2**: Traefik handles this automatically
## Monitoring
### Check Service Status
```bash
# Container status
docker ps | grep caddy-static
# Logs
docker logs caddy-static -f
# Resource usage
docker stats caddy-static
```
### Check Specific Site
```bash
# Test site is reachable
curl -I https://edfig.dev
# Test with timing
curl -w "@curl-format.txt" -o /dev/null -s https://edfig.dev
# Check SSL certificate
echo | openssl s_client -connect edfig.dev:443 -servername edfig.dev 2>/dev/null | openssl x509 -noout -dates
```
### Access Logs
Caddy logs to stdout (captured by Docker):
```bash
# View logs
docker logs caddy-static
# Follow logs
docker logs caddy-static -f
# Last 100 lines
docker logs caddy-static --tail 100
```
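By default Caddy only logs errors; to get per-request access logs on stdout (so they reach `docker logs` and Loki), a sketch of Caddy's `log` directive, added per site block:

```caddy
edfig.dev {
    root * /srv/edfig.dev
    file_server
    # Structured access logs to stdout, picked up by Docker and Promtail
    log {
        output stdout
        format json
    }
}
```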
### Grafana Logs
All logs forwarded to Loki automatically:
**Query in Grafana** (https://logs.fig.systems):
```logql
{container="caddy-static"}
```
Filter by status code:
```logql
{container="caddy-static"} |= "404"
```
## Troubleshooting
### Site not loading
**Check container:**
```bash
docker ps | grep caddy-static
# If not running:
docker compose up -d
```
**Check logs:**
```bash
docker logs caddy-static
# Look for errors in Caddyfile or file not found
```
**Check DNS:**
```bash
dig +short edfig.dev
# Should point to your server IP
```
**Check Traefik:**
```bash
# See if Traefik sees the route
docker logs traefik | grep edfig
```
### 404 Not Found
**Check file exists:**
```bash
ls -la sites/edfig.dev/index.html
```
**Check path in Caddyfile:**
```bash
grep "root" Caddyfile
# Should show: root * /srv/edfig.dev
```
**Check permissions:**
```bash
# Files should be readable
chmod -R 755 sites/
```
### Changes not appearing
**Content edits need no reload** (Caddy reads site files from disk on each request), but double-check:
```bash
# Check file modification time
ls -lh sites/edfig.dev/index.html
# Force reload (shouldn't be needed)
docker exec caddy-static caddy reload --config /etc/caddy/Caddyfile
```
**Browser cache:**
```bash
# Force refresh in browser: Ctrl+Shift+R (Linux/Win) or Cmd+Shift+R (Mac)
# Or open in incognito/private window
```
### Markdown not rendering
**Check templates enabled:**
```caddy
# In Caddyfile for blog.edfig.dev
blog.edfig.dev {
templates # <-- This must be present!
# ...
}
```
**Check file extension:**
```bash
# Must be .md
mv post.txt post.md
```
**Test rendering:**
```bash
curl https://blog.edfig.dev/posts/example-post.md
# Should return HTML, not raw markdown
```
### SSO not working on figgy.foo
**Check middleware:**
```yaml
# In compose.yaml
traefik.http.routers.figgy-main.middlewares: tinyauth
```
**Check Tinyauth is running:**
```bash
docker ps | grep tinyauth
```
**Test without SSO:**
```bash
# Temporarily remove SSO to isolate issue
# Comment out middleware line in compose.yaml
# docker compose up -d
```
## Backup
### Backup Site Content
```bash
# Backup all sites
cd ~/homelab/compose/services/static-sites
tar czf sites-backup-$(date +%Y%m%d).tar.gz sites/
# Backup to external storage
scp sites-backup-*.tar.gz user@backup-server:/backups/
```
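To automate this (Next Steps item 7), a crontab sketch — the schedule and backup destination are assumptions to adapt:

```
# crontab -e — nightly at 03:00; note % must be escaped as \% inside cron
0 3 * * * cd ~/homelab/compose/services/static-sites && tar czf ~/backups/sites-$(date +\%Y\%m\%d).tar.gz sites/
```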
### Version Control (Optional)
Consider using Git for your sites:
```bash
cd sites/
git init
git add .
git commit -m "Initial site content"
# Add remote
git remote add origin git@github.com:efigueroa/sites.git
git push -u origin main
```
## Security
### Public vs Private
**Public sites** (`edfig.dev`, `blog.edfig.dev`):
- No SSO middleware
- Accessible to everyone
- Use for portfolio, blog, public content
**Private sites** (`figgy.foo`):
- SSO middleware enabled
- Requires LLDAP authentication
- Use for experiments, private content
### Content Security
**Don't commit:**
- API keys
- Passwords
- Private information
- Sensitive data
**Do commit:**
- HTML, CSS, JS
- Images, assets
- Markdown blog posts
- Public content
### File Permissions
```bash
# Sites should be read-only to Caddy
chmod -R 755 sites/
chown -R $USER:$USER sites/
```
## Resources
- [Caddy Documentation](https://caddyserver.com/docs/)
- [Caddyfile Tutorial](https://caddyserver.com/docs/caddyfile-tutorial)
- [Templates Documentation](https://caddyserver.com/docs/caddyfile/directives/templates)
- [Markdown Rendering](https://caddyserver.com/docs/caddyfile/directives/templates#markdown)
## Next Steps
1. ✅ Deploy Caddy static sites
2. ✅ Access edfig.dev, blog.edfig.dev, figgy.foo
3. ⬜ Customize edfig.dev with your content
4. ⬜ Write first blog post in Markdown
5. ⬜ Add experiments to figgy.foo
6. ⬜ Set up Git version control for sites
7. ⬜ Configure automated backups
---
**Serve static content, simply and securely!** 🌐
@ -0,0 +1,63 @@
# Caddy - Static Sites Server
# Docs: https://caddyserver.com/docs/
services:
caddy:
container_name: caddy-static
image: caddy:2-alpine
restart: unless-stopped
env_file:
- .env
volumes:
- ./Caddyfile:/etc/caddy/Caddyfile:ro
- ./sites:/srv:ro
- caddy_data:/data
- caddy_config:/config
networks:
- homelab
labels:
# Traefik
traefik.enable: true
traefik.docker.network: homelab
# edfig.dev (personal/professional site)
traefik.http.routers.edfig-www.rule: Host(`www.edfig.dev`) || Host(`edfig.dev`)
traefik.http.routers.edfig-www.entrypoints: websecure
traefik.http.routers.edfig-www.tls.certresolver: letsencrypt
traefik.http.routers.edfig-www.service: caddy-static
# No SSO - public personal site
# blog.edfig.dev (blog)
traefik.http.routers.edfig-blog.rule: Host(`blog.edfig.dev`)
traefik.http.routers.edfig-blog.entrypoints: websecure
traefik.http.routers.edfig-blog.tls.certresolver: letsencrypt
traefik.http.routers.edfig-blog.service: caddy-static
# No SSO - public blog
# figgy.foo (experimental/private)
traefik.http.routers.figgy-main.rule: Host(`figgy.foo`) || Host(`www.figgy.foo`)
traefik.http.routers.figgy-main.entrypoints: websecure
traefik.http.routers.figgy-main.tls.certresolver: letsencrypt
traefik.http.routers.figgy-main.service: caddy-static
traefik.http.routers.figgy-main.middlewares: tinyauth
# SSO protected - experimental/private content
# Service definition (single backend for all routes)
traefik.http.services.caddy-static.loadbalancer.server.port: 80
# Homarr Discovery
homarr.name: Static Sites (Caddy)
homarr.group: Services
homarr.icon: mdi:web
volumes:
caddy_data:
caddy_config:
networks:
homelab:
external: true
@ -0,0 +1,160 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Blog | Eduardo Figueroa</title>
<style>
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
body {
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, Cantarell, sans-serif;
line-height: 1.6;
color: #333;
background: #f5f5f5;
}
header {
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
color: white;
padding: 2em 0;
text-align: center;
box-shadow: 0 2px 10px rgba(0, 0, 0, 0.1);
}
header h1 {
font-size: 2.5em;
margin-bottom: 0.3em;
}
header p {
font-size: 1.1em;
opacity: 0.9;
}
.container {
max-width: 800px;
margin: 0 auto;
padding: 40px 20px;
}
.post {
background: white;
border-radius: 10px;
padding: 30px;
margin-bottom: 30px;
box-shadow: 0 2px 10px rgba(0, 0, 0, 0.1);
transition: transform 0.2s, box-shadow 0.2s;
}
.post:hover {
transform: translateY(-2px);
box-shadow: 0 4px 20px rgba(0, 0, 0, 0.15);
}
.post h2 {
color: #667eea;
margin-bottom: 0.5em;
font-size: 1.8em;
}
.post-meta {
color: #999;
font-size: 0.9em;
margin-bottom: 1em;
}
.post p {
color: #555;
line-height: 1.8;
margin-bottom: 1em;
}
.read-more {
color: #667eea;
text-decoration: none;
font-weight: 500;
display: inline-flex;
align-items: center;
gap: 5px;
}
.read-more:hover {
text-decoration: underline;
}
.no-posts {
text-align: center;
padding: 60px 20px;
color: #999;
}
.no-posts h2 {
font-size: 2em;
margin-bottom: 0.5em;
color: #667eea;
}
nav {
text-align: center;
margin-top: 2em;
}
nav a {
color: #667eea;
text-decoration: none;
font-weight: 500;
}
nav a:hover {
text-decoration: underline;
}
</style>
</head>
<body>
<header>
<h1>Blog</h1>
<p>Thoughts on technology, systems, and automation</p>
</header>
<div class="container">
<!-- Example post structure - replace with real posts -->
<div class="no-posts">
<h2>Coming Soon</h2>
<p>Blog posts will appear here. Stay tuned!</p>
<p style="margin-top: 2em;">
In the meantime, you can write posts as:<br>
<code style="background: #f5f5f5; padding: 5px 10px; border-radius: 5px;">
/srv/blog.edfig.dev/posts/my-post.md
</code>
</p>
<p style="margin-top: 1em; font-size: 0.9em; color: #666;">
Markdown files (.md) will automatically render as HTML!
</p>
</div>
<!-- Example of how posts would look -->
<!--
<article class="post">
<h2>Setting Up a Homelab with Docker and Traefik</h2>
<div class="post-meta">January 10, 2025 • 5 min read</div>
<p>
Learn how to set up a complete homelab infrastructure using Docker Compose,
Traefik for reverse proxy, and automated SSL certificates...
</p>
<a href="/posts/homelab-setup.html" class="read-more">
Read more →
</a>
</article>
-->
<nav>
<a href="https://edfig.dev">← Back to Home</a>
</nav>
</div>
</body>
</html>
@ -0,0 +1,68 @@
# Example Blog Post
**Published:** January 10, 2025
**Tags:** #homelab #docker #traefik
---
## Introduction
This is an example blog post written in Markdown. Caddy automatically renders `.md` files as HTML!
## Why Markdown?
Markdown is perfect for writing blog posts because:
1. **Simple syntax** - Easy to write and read
2. **Fast** - No build step required
3. **Portable** - Works everywhere
4. **Clean** - Focus on content, not formatting
## Code Examples
Here's some example code:
```bash
# Deploy a service
cd ~/homelab/compose/services/example
docker compose up -d
# Check logs
docker logs example-service -f
```
## Features
### Supported Elements
- **Bold text**
- *Italic text*
- `Code snippets`
- [Links](https://edfig.dev)
- Lists (ordered and unordered)
- Code blocks with syntax highlighting
- Blockquotes
- Tables
### Example Table
| Service | URL | Purpose |
|---------|-----|---------|
| Traefik | traefik.fig.systems | Reverse Proxy |
| Sonarr | sonarr.fig.systems | TV Automation |
| Radarr | radarr.fig.systems | Movie Automation |
## Blockquote Example
> "The best way to predict the future is to invent it."
> — Alan Kay
## Conclusion
This is just an example post. Delete this file and create your own posts in the `posts/` directory!
Each `.md` file will be automatically rendered when accessed via the browser.
---
[← Back to Blog](/)
@ -0,0 +1,121 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Eduardo Figueroa</title>
<style>
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
body {
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, Cantarell, sans-serif;
line-height: 1.6;
color: #333;
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
min-height: 100vh;
display: flex;
align-items: center;
justify-content: center;
padding: 20px;
}
.container {
max-width: 800px;
background: white;
border-radius: 20px;
box-shadow: 0 20px 60px rgba(0, 0, 0, 0.3);
padding: 60px 40px;
text-align: center;
}
h1 {
font-size: 3em;
margin-bottom: 0.5em;
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
background-clip: text;
}
.subtitle {
font-size: 1.5em;
color: #666;
margin-bottom: 2em;
}
.description {
font-size: 1.1em;
color: #555;
margin-bottom: 2em;
line-height: 1.8;
}
.links {
display: flex;
gap: 20px;
justify-content: center;
flex-wrap: wrap;
margin-top: 2em;
}
.link {
display: inline-block;
padding: 12px 30px;
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
color: white;
text-decoration: none;
border-radius: 50px;
transition: transform 0.2s, box-shadow 0.2s;
font-weight: 500;
}
.link:hover {
transform: translateY(-2px);
box-shadow: 0 10px 20px rgba(102, 126, 234, 0.4);
}
.link.secondary {
background: white;
color: #667eea;
border: 2px solid #667eea;
}
.link.secondary:hover {
box-shadow: 0 10px 20px rgba(102, 126, 234, 0.2);
}
footer {
margin-top: 3em;
padding-top: 2em;
border-top: 1px solid #eee;
color: #999;
font-size: 0.9em;
}
</style>
</head>
<body>
<div class="container">
<h1>Eduardo Figueroa</h1>
<p class="subtitle">Software Engineer & DevOps Enthusiast</p>
<p class="description">
Welcome to my personal site. I build scalable systems, automate infrastructure,
and explore the intersection of technology and efficiency.
</p>
<div class="links">
<a href="https://blog.edfig.dev" class="link">Blog</a>
<a href="https://github.com/efigueroa" class="link secondary" target="_blank">GitHub</a>
<a href="https://home.fig.systems" class="link secondary">Homelab Dashboard</a>
</div>
<footer>
<p>&copy; 2025 Eduardo Figueroa | <a href="mailto:admin@edfig.dev" style="color: #667eea; text-decoration: none;">admin@edfig.dev</a></p>
</footer>
</div>
</body>
</html>
@ -0,0 +1,191 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>figgy.foo | Experimental Lab</title>
<style>
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
body {
font-family: 'Courier New', monospace;
line-height: 1.6;
color: #0f0;
background: #000;
min-height: 100vh;
display: flex;
align-items: center;
justify-content: center;
padding: 20px;
}
.terminal {
max-width: 800px;
width: 100%;
background: #1a1a1a;
border: 2px solid #0f0;
border-radius: 5px;
box-shadow: 0 0 20px rgba(0, 255, 0, 0.3);
}
.terminal-header {
background: #0f0;
color: #000;
padding: 10px 15px;
font-weight: bold;
display: flex;
align-items: center;
gap: 10px;
}
.terminal-buttons {
display: flex;
gap: 8px;
}
.terminal-button {
width: 12px;
height: 12px;
border-radius: 50%;
background: #000;
}
.terminal-body {
padding: 20px;
font-size: 14px;
}
.prompt {
color: #0ff;
}
.command {
color: #ff0;
}
.output {
color: #0f0;
margin-left: 20px;
}
.line {
margin-bottom: 10px;
}
.cursor {
display: inline-block;
width: 8px;
height: 16px;
background: #0f0;
animation: blink 1s infinite;
}
@keyframes blink {
0%, 50% { opacity: 1; }
51%, 100% { opacity: 0; }
}
.warning {
color: #ff0;
text-decoration: underline;
}
a {
color: #0ff;
text-decoration: none;
}
a:hover {
text-decoration: underline;
}
.ascii-art {
color: #0f0;
font-size: 10px;
line-height: 1.2;
white-space: pre;
margin: 20px 0;
}
</style>
</head>
<body>
<div class="terminal">
<div class="terminal-header">
<div class="terminal-buttons">
<div class="terminal-button"></div>
<div class="terminal-button"></div>
<div class="terminal-button"></div>
</div>
<span>figgy.foo — Terminal</span>
</div>
<div class="terminal-body">
<div class="ascii-art">
______ _ ______
| ____(_) | ____|
| |__ _ __ _ __ _ _ _| |__ ___ ___
| __| | |/ _` |/ _` | | | | __| / _ \ / _ \
| | | | (_| | (_| | |_| | | | (_) | (_) |
|_| |_|\__, |\__, |\__, |_| \___/ \___/
__/ | __/ | __/ |
|___/ |___/ |___/
</div>
<div class="line">
<span class="prompt">root@figgy:~$</span> <span class="command">whoami</span>
</div>
<div class="output line">experimental lab user</div>
<div class="line">
<span class="prompt">root@figgy:~$</span> <span class="command">ls -la</span>
</div>
<div class="output line">
drwxr-xr-x 3 root root 4096 Jan 10 2025 .
drwxr-xr-x 24 root root 4096 Jan 10 2025 ..
drwxr-xr-x 2 root root 4096 Jan 10 2025 experiments
-rw-r--r-- 1 root root 420 Jan 10 2025 README.txt
</div>
<div class="line">
<span class="prompt">root@figgy:~$</span> <span class="command">cat README.txt</span>
</div>
<div class="output line">
╔════════════════════════════════════════════╗
║ FIGGY.FOO EXPERIMENTAL LAB ║
╚════════════════════════════════════════════╝
<span class="warning">WARNING:</span> This is a private experimental environment.
Access is restricted to authorized users only.
Purpose:
• Test new services before production
• Develop and debug configurations
• Experiment with new technologies
• Break things safely
Directory Browsing: ENABLED
Authentication: REQUIRED (SSO)
Access Dashboard: <a href="https://home.fig.systems">home.fig.systems</a>
Status Monitor: <a href="https://status.fig.systems">status.fig.systems</a>
</div>
<div class="line">
<span class="prompt">root@figgy:~$</span> <span class="command">./list-services.sh</span>
</div>
<div class="output line">
Production Homelab: <a href="https://home.fig.systems">home.fig.systems</a>
Professional Site: <a href="https://edfig.dev">edfig.dev</a>
Blog: <a href="https://blog.edfig.dev">blog.edfig.dev</a>
</div>
<div class="line">
<span class="prompt">root@figgy:~$</span> <span class="cursor"></span>
</div>
</div>
</div>
</body>
</html>
@ -19,7 +19,7 @@ services:
labels:
traefik.enable: true
traefik.docker.network: homelab
traefik.http.routers.vikunja.rule: Host(`tasks.fig.systems`)
traefik.http.routers.vikunja.entrypoints: websecure
traefik.http.routers.vikunja.tls.certresolver: letsencrypt
traefik.http.services.vikunja.loadbalancer.server.port: 3456
@ -0,0 +1,198 @@
# Vikunja Configuration Example
# Docs: https://vikunja.io/docs/config-options/
# Copy to ./config.yml and mount in compose.yaml
service:
# Interface and port the server listens on
interface: ':3456'
# Public URL
publicurl: https://tasks.fig.systems
# Frontend URL (if different from publicurl)
frontendurl: https://tasks.fig.systems
# Maximum items returned per page
maxitemsperpage: 50
# Enable registration
enableregistration: true
# Enable user deletion
enableuserdeletion: true
# Enable task attachments
enabletaskattachments: true
# Enable task comments
enabletaskcomments: true
# Enable email reminders
enableemailreminders: true
# Enable caldav
enablecaldav: true
# Timezone
timezone: America/Los_Angeles
database:
type: postgres
host: vikunja-db:5432
database: vikunja
user: vikunja
password: changeme_from_env
# Use environment variable: VIKUNJA_DATABASE_PASSWORD
redis:
enabled: false
# Enable for better performance with multiple users
# host: 'localhost:6379'
# password: ''
# db: 0
cache:
enabled: true
type: memory
# Options: memory, redis, keyvalue
mailer:
enabled: false
# SMTP settings for email notifications
# host: smtp.example.com
# port: 587
# username: vikunja@example.com
# password: changeme
# fromemail: vikunja@example.com
# skiptlsverify: false
# forcessl: true
log:
# Log level
level: INFO
# Options: CRITICAL, ERROR, WARNING, INFO, DEBUG
# Log format
standard: plain
# Options: plain, json
# Database logging
database: 'off'
# Options: off, error, warn, info, debug
# HTTP request logging
http: 'off'
# Events logging
events: 'off'
# Mail logging
mail: 'off'
ratelimit:
enabled: false
# kind: user
# period: 60
# limit: 100
files:
# Base path for file storage
basepath: /app/vikunja/files
# Maximum file size (in bytes, 20MB default)
maxsize: 20971520
migration:
# Enable to import from other services
todoist:
enable: false
# clientid: ''
# clientsecret: ''
# redirecturl: ''
trello:
enable: false
# key: ''
# redirecturl: ''
microsofttodo:
enable: false
# clientid: ''
# clientsecret: ''
# redirecturl: ''
cors:
# Enable CORS (usually not needed behind proxy)
enable: false
# origins:
# - https://tasks.fig.systems
# maxage: 0
# Authentication providers
auth:
local:
enabled: true
# OpenID Connect (for SSO integration)
openid:
enabled: false
# redirecturl: https://tasks.fig.systems/auth/openid/
# providers:
# - name: Authelia
# authurl: https://auth.example.com
# clientid: vikunja
# clientsecret: changeme
backgrounds:
enabled: true
# Unsplash integration (optional)
providers:
upload:
enabled: true
# Webhooks
webhooks:
enabled: true
# timeoutseconds: 30
# Legal URLs (optional)
legal:
imprinturl: ''
privacyurl: ''
# Avatar provider
avatar:
# Options: default, initials, gravatar, marble, upload
gravatarexpiration: 3600
# Background jobs
backgroundhandlers:
enabled: true
# Metrics (Prometheus)
metrics:
enabled: false
# username: ''
# password: ''
# Key-value storage
keyvalue:
type: memory
# Options: memory, redis
# Default settings for new users
defaultsettings:
avatar_provider: initials
avatar_file_id: 0
email_reminders_enabled: true
discoverable_by_name: false
discoverable_by_email: false
overduereminders_enabled: true
overduereminders_time: '09:00'
default_project_id: 0
week_start: 0
# 0 = Sunday, 1 = Monday
timezone: America/Los_Angeles
language: en
frontend_settings: {}
docs/architecture.md Normal file
@ -0,0 +1,648 @@
# Homelab Architecture & Integration
Complete integration guide for the homelab setup on AlmaLinux 9.6.
## 🖥️ Hardware Specifications
### Host System
- **Hypervisor**: Proxmox VE 9 (Debian 13 based)
- **CPU**: AMD Ryzen 5 7600X (6 cores, 12 threads, up to 5.3 GHz)
- **GPU**: NVIDIA GeForce GTX 1070 (8GB VRAM, 1920 CUDA cores)
- **RAM**: 32GB DDR5
### VM Configuration
- **OS**: AlmaLinux 9.6 (RHEL 9 compatible)
- **CPU**: 8 vCPUs (allocated from host)
- **RAM**: 24GB (leaving 8GB for host)
- **Storage**: 500GB+ (adjust based on media library size)
- **GPU**: GTX 1070 (PCIe passthrough from Proxmox)
## 🏗️ Architecture Overview
### Network Architecture
```
Internet
    ↓
[Router/Firewall]
    ↓ (Port 80/443)
[Traefik Reverse Proxy]
    ↓
┌──────────────────────────────────────┐
│ homelab network │
│ (Docker bridge - 172.18.0.0/16) │
│ │
│ ┌─────────────┐ ┌──────────────┐ │
│ │ Core │ │ Media │ │
│ │ - Traefik │ │ - Jellyfin │ │
│ │ - LLDAP │ │ - Sonarr │ │
│ │ - Tinyauth │ │ - Radarr │ │
│ └─────────────┘ └──────────────┘ │
│ │
│ ┌─────────────┐ ┌──────────────┐ │
│ │ Services │ │ Monitoring │ │
│ │ - Karakeep │ │ - Loki │ │
│ │ - Ollama │ │ - Promtail │ │
│ │ - Vikunja │ │ - Grafana │ │
│ └─────────────┘ └──────────────┘ │
└──────────────────────────────────────┘
    ↓
[Promtail Agent]
    ↓
[Loki Storage]
```
### Service Internal Networks
Services with databases use isolated internal networks:
```
karakeep
├── homelab (external traffic)
└── karakeep_internal
├── karakeep (app)
├── karakeep-chrome (browser)
└── karakeep-meilisearch (search)
vikunja
├── homelab (external traffic)
└── vikunja_internal
├── vikunja (app)
└── vikunja-db (postgres)
monitoring/logging
├── homelab (external traffic)
└── logging_internal
├── loki (storage)
├── promtail (collector)
└── grafana (UI)
```
## 🔐 Security Architecture
### Authentication Flow
```
User Request
[Traefik] → Check route rules
[Tinyauth Middleware] → Forward Auth
[LLDAP] → Verify credentials
[Backend Service] → Authorized access
```
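The forward-auth chain can be sanity-checked from outside with a single request (hostname illustrative; any SSO-protected route works):

```bash
# An unauthenticated request to a protected route should be redirected
# to the Tinyauth login page, not served directly
curl -sI https://sonarr.fig.systems | head -n 1
# expect an HTTP 3xx status pointing at the SSO login
```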
### SSL/TLS
- **Certificate Provider**: Let's Encrypt
- **Challenge Type**: HTTP-01 (ports 80/443)
- **Automatic Renewal**: Via Traefik
- **Domains**:
- Primary: `*.fig.systems`
- Fallback: `*.edfig.dev`
### SSO Protection
**Protected Services** (require authentication):
- Traefik Dashboard
- LLDAP
- Sonarr, Radarr, SABnzbd, qBittorrent
- Profilarr, Recyclarr (monitoring)
- Homarr, Backrest
- Karakeep, Vikunja, LubeLogger
- Calibre-web, Booklore, FreshRSS, File Browser
- Loki API, Ollama API
**Unprotected Services** (own authentication):
- Tinyauth (SSO provider itself)
- Jellyfin (own user system)
- Jellyseerr (linked to Jellyfin)
- Immich (own user system)
- RSSHub (public feed generator)
- MicroBin (public pastebin)
- Grafana (own authentication)
- Uptime Kuma (own authentication)
## 📊 Logging Architecture
### Centralized Logging with Loki
All services forward logs to Loki via Promtail:
```
[Docker Container] → stdout/stderr
[Docker Socket] → /var/run/docker.sock
[Promtail] → Scrapes logs via Docker API
[Loki] → Stores and indexes logs
[Grafana] → Query and visualize
```
### Log Labels
Promtail automatically adds labels to all logs:
- `container`: Container name
- `compose_project`: Docker Compose project
- `compose_service`: Service name from compose
- `image`: Docker image name
- `stream`: stdout or stderr
### Log Retention
- **Default**: 30 days
- **Storage**: `compose/monitoring/logging/loki-data/`
- **Automatic cleanup**: Enabled via Loki compactor
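The 30-day retention above corresponds to a Loki config along these lines (a sketch; paths and the rest of the config must match the deployed loki-config):

```yaml
limits_config:
  retention_period: 720h   # 30 days
compactor:
  working_directory: /loki/compactor
  retention_enabled: true
  retention_delete_delay: 2h
```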
### Querying Logs
**View all logs for a service:**
```logql
{container="sonarr"}
```
**Filter by log level:**
```logql
{container="radarr"} |= "ERROR"
```
**Multiple services:**
```logql
{container=~"sonarr|radarr"}
```
**Time range with filters:**
```logql
{container="karakeep"} |= "ollama" | json
```
## 🌐 Network Configuration
### Docker Networks
**homelab** (external bridge):
- Type: External bridge network
- Subnet: Auto-assigned by Docker
- Purpose: Inter-service communication + Traefik routing
- Create: `docker network create homelab`
**Service-specific internal networks**:
- `karakeep_internal`: Karakeep + Chrome + Meilisearch
- `vikunja_internal`: Vikunja + PostgreSQL
- `logging_internal`: Loki + Promtail + Grafana
- etc.
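In compose terms the pattern looks like this sketch (following the Vikunja example; only the app joins `homelab`, so Traefik can reach it while the database stays unroutable):

```yaml
networks:
  homelab:
    external: true
  vikunja_internal:
    internal: true   # no outbound route, not reachable from the host

services:
  vikunja:
    networks:
      - homelab
      - vikunja_internal
  vikunja-db:
    networks:
      - vikunja_internal
```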
### Port Mappings
**External Ports** (exposed to host):
- `80/tcp`: HTTP (Traefik) - redirects to HTTPS
- `443/tcp`: HTTPS (Traefik)
- `6881/tcp+udp`: BitTorrent (qBittorrent)
**No other ports exposed** - all access via Traefik reverse proxy.
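A quick audit on the VM confirms nothing else publishes a host port:

```bash
# Only traefik (80/443) and qbittorrent (6881) should appear
docker ps --format '{{.Names}}\t{{.Ports}}' | grep -E '(0\.0\.0\.0|\[::\]):'
```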
## 🔧 Traefik Integration
### Standard Traefik Labels
All services use consistent Traefik labels:
```yaml
labels:
# Enable Traefik
traefik.enable: true
traefik.docker.network: homelab
# Router configuration
traefik.http.routers.<service>.rule: Host(`<service>.fig.systems`) || Host(`<service>.edfig.dev`)
traefik.http.routers.<service>.entrypoints: websecure
traefik.http.routers.<service>.tls.certresolver: letsencrypt
# Service configuration (backend port)
traefik.http.services.<service>.loadbalancer.server.port: <port>
# SSO middleware (if protected)
traefik.http.routers.<service>.middlewares: tinyauth
# Homarr auto-discovery
homarr.name: <Service Name>
homarr.group: <Category>
homarr.icon: mdi:<icon-name>
```
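Filled in for a concrete service (Sonarr's default backend port is 8989; hostnames follow the domains above, Homarr values illustrative):

```yaml
labels:
  traefik.enable: true
  traefik.docker.network: homelab
  traefik.http.routers.sonarr.rule: Host(`sonarr.fig.systems`) || Host(`sonarr.edfig.dev`)
  traefik.http.routers.sonarr.entrypoints: websecure
  traefik.http.routers.sonarr.tls.certresolver: letsencrypt
  traefik.http.services.sonarr.loadbalancer.server.port: 8989
  traefik.http.routers.sonarr.middlewares: tinyauth
  homarr.name: Sonarr
  homarr.group: Media
```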
### Middleware
**tinyauth** - Forward authentication:
```yaml
# Defined in traefik/compose.yaml
middlewares:
tinyauth:
forwardAuth:
address: http://tinyauth:8080
trustForwardHeader: true
```
## 💾 Volume Management
### Volume Types
**Bind Mounts** (host directories):
```yaml
volumes:
- ./data:/data # Service data
- ./config:/config # Configuration files
- /media:/media # Media library (shared)
```
**Named Volumes** (Docker-managed):
```yaml
volumes:
- loki-data:/loki # Loki storage
- postgres-data:/var/lib/postgresql/data
```
### Media Directory Structure
```
/media/
├── tv/ # TV shows (Sonarr → Jellyfin)
├── movies/ # Movies (Radarr → Jellyfin)
├── music/ # Music
├── photos/ # Photos (Immich)
├── books/ # Ebooks (Calibre-web)
├── audiobooks/ # Audiobooks
├── comics/ # Comics
├── homemovies/ # Home videos
├── downloads/ # Active downloads (SABnzbd/qBittorrent)
├── complete/ # Completed downloads
└── incomplete/ # In-progress downloads
```
### Backup Strategy
**Important directories to backup:**
```
compose/core/lldap/data/ # User directory
compose/core/traefik/letsencrypt/ # SSL certificates
compose/services/*/config/ # Service configurations
compose/services/*/data/ # Service data
compose/monitoring/logging/loki-data/ # Logs (optional)
/media/ # Media library
```
**Excluded from backups:**
```
compose/services/*/db/ # Databases (backup via dump)
compose/monitoring/logging/loki-data/ # Logs (can be recreated)
/media/downloads/ # Temporary downloads
/media/incomplete/ # Incomplete downloads
```
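"Backup via dump" for the databases can look like this (container, user, and database names are illustrative; check each service's .env):

```bash
# PostgreSQL (e.g., Vikunja): dump instead of copying the live data directory
docker exec vikunja-db pg_dump -U vikunja vikunja | gzip > vikunja-$(date +%F).sql.gz
```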
## 🎮 GPU Acceleration
### NVIDIA GTX 1070 Configuration
**GPU Passthrough (Proxmox → VM):**
1. **Proxmox host** (`/etc/pve/nodes/<node>/qemu-server/<vmid>.conf`):
```
hostpci0: 0000:01:00,pcie=1,x-vga=1
```
2. **VM (AlmaLinux)** - Install NVIDIA drivers:
```bash
# Add NVIDIA repository
sudo dnf config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel9/x86_64/cuda-rhel9.repo
# Install drivers
sudo dnf install nvidia-driver nvidia-settings
# Verify
nvidia-smi
```
3. **Docker** - Install NVIDIA Container Toolkit:
```bash
# Add NVIDIA Container Toolkit repo
sudo dnf config-manager --add-repo https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo
# Install toolkit
sudo dnf install nvidia-container-toolkit
# Configure Docker
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
# Verify
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```
### Services Using GPU
**Jellyfin** (Hardware transcoding):
```yaml
# Uncomment in compose.yaml
devices:
  - /dev/dri:/dev/dri # VAAPI (Intel/AMD); NVIDIA NVENC/NVDEC uses the env vars below
environment:
- NVIDIA_VISIBLE_DEVICES=all
- NVIDIA_DRIVER_CAPABILITIES=all
```
**Immich** (AI features):
```yaml
# Already configured
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: 1
capabilities: [gpu]
```
**Ollama** (LLM inference):
```yaml
# Uncomment in compose.yaml
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: 1
capabilities: [gpu]
```
### GPU Performance Tuning
**For Ryzen 5 7600X + GTX 1070:**
- **Jellyfin**: Can transcode 4-6 simultaneous 4K → 1080p streams
- **Ollama**:
- 3B models: 40-60 tokens/sec
- 7B models: 20-35 tokens/sec
- 13B models: 10-15 tokens/sec (quantized)
- **Immich**: AI tagging ~5-10 images/sec
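The Ollama token rates can be spot-checked against the running container (`--verbose` prints the eval rate; model tag illustrative):

```bash
docker exec -it ollama ollama run llama3.2:3b --verbose "Say hello"
# look for "eval rate: ... tokens/s" in the stats footer
```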
## 🚀 Resource Allocation
### CPU Allocation (Ryzen 5 7600X - 6C/12T)
**High Priority** (4-6 cores):
- Jellyfin (transcoding)
- Sonarr/Radarr (media processing)
- Ollama (when running)
**Medium Priority** (2-4 cores):
- Immich (AI processing)
- Karakeep (bookmark processing)
- SABnzbd/qBittorrent (downloads)
**Low Priority** (1-2 cores):
- Traefik, LLDAP, Tinyauth
- Monitoring services
- Other utilities
### RAM Allocation (32GB Total, 24GB VM)
**Recommended allocation:**
```
Host (Proxmox): 8GB
VM Total (24GB) breakdown:
├── System: 4GB (AlmaLinux base)
├── Docker: 2GB (daemon overhead)
├── Jellyfin: 2-4GB (transcoding buffers)
├── Immich: 2-3GB (ML models + database)
├── Sonarr/Radarr: 1GB each
├── Ollama: 4-6GB (when running models)
├── Databases: 2-3GB total
├── Monitoring: 2GB (Loki + Grafana)
└── Other services: 4-5GB
```
### Disk Space Planning
**System:** 100GB
**Docker:** 50GB (images + containers)
**Service Data:** 50GB (configs, databases, logs)
**Media Library:** Remaining space (expandable)
**Recommended VM disk:**
- Minimum: 500GB (200GB system + 300GB media)
- Recommended: 1TB+ (allows room for growth)
## 🔄 Service Dependencies
### Startup Order
**Critical order for initial deployment:**
1. **Networks**: `docker network create homelab`
2. **Core** (must start first):
- Traefik (reverse proxy)
- LLDAP (user directory)
- Tinyauth (SSO provider)
3. **Monitoring** (optional but recommended):
- Loki + Promtail + Grafana
- Uptime Kuma
4. **Media Automation**:
- Sonarr, Radarr
- SABnzbd, qBittorrent
- Recyclarr, Profilarr
5. **Media Frontend**:
- Jellyfin
   - Jellyseerr
- Immich
6. **Services**:
- Karakeep, Ollama (AI features)
- Vikunja, Homarr
- All other services
### Service Integration Map
```
Traefik
├─→ All services (reverse proxy)
└─→ Let's Encrypt (SSL)
Tinyauth
├─→ LLDAP (authentication backend)
└─→ All SSO-protected services
LLDAP
└─→ User database for SSO
Promtail
├─→ Docker socket (log collection)
└─→ Loki (log forwarding)
Loki
└─→ Grafana (log visualization)
Karakeep
├─→ Ollama (AI tagging)
├─→ Meilisearch (search)
└─→ Chrome (web archiving)
Jellyseerr
├─→ Jellyfin (media info)
├─→ Sonarr (TV requests)
└─→ Radarr (movie requests)
Sonarr/Radarr
├─→ SABnzbd/qBittorrent (downloads)
├─→ Jellyfin (media library)
└─→ Recyclarr/Profilarr (quality profiles)
Homarr
└─→ All services (dashboard auto-discovery)
```
## 🐛 Troubleshooting
### Check Service Health
```bash
# All services status
cd ~/homelab
docker ps -a --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
# Logs for specific service
docker logs <service-name> --tail 100 -f
# Logs via Loki/Grafana
# Go to https://logs.fig.systems
# Query: {container="<service-name>"}
```
### Network Issues
```bash
# Check homelab network exists
docker network ls | grep homelab
# Inspect network
docker network inspect homelab
# Test service connectivity
docker exec <service-a> ping <service-b>
docker exec karakeep curl http://ollama:11434
```
### GPU Not Detected
```bash
# Check GPU in VM
nvidia-smi
# Check Docker can access GPU
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
# Check service GPU allocation
docker exec jellyfin nvidia-smi
docker exec ollama nvidia-smi
```
### SSL Certificate Issues
```bash
# Check Traefik logs
docker logs traefik | grep -i certificate
# Force certificate reissue (deletes ALL stored certificates)
docker exec traefik rm /letsencrypt/acme.json
docker restart traefik
# Verify DNS
dig +short sonarr.fig.systems
```
### SSO Not Working
```bash
# Check Tinyauth status
docker logs tinyauth
# Check LLDAP connection
docker exec tinyauth nc -zv lldap 3890
docker exec tinyauth nc -zv lldap 17170
# Verify credentials match
grep LDAP_BIND_PASSWORD compose/core/tinyauth/.env
grep LLDAP_LDAP_USER_PASS compose/core/lldap/.env
```
## 📈 Monitoring Best Practices
### Key Metrics to Monitor
**System Level:**
- CPU usage per container
- Memory usage per container
- Disk I/O
- Network throughput
- GPU utilization (for Jellyfin/Ollama/Immich)
**Application Level:**
- Traefik request rate
- Failed authentication attempts
- Jellyfin concurrent streams
- Download speeds (SABnzbd/qBittorrent)
- Sonarr/Radarr queue size
### Uptime Kuma Monitoring
Configure monitors for:
- **HTTP(s)**: All web services (200 status check)
- **TCP**: Database ports (PostgreSQL, etc.)
- **Docker**: Container health (via Docker socket)
- **SSL**: Certificate expiration (30-day warning)
### Log Monitoring
Set up Loki alerts for:
- ERROR level logs
- Authentication failures
- Service crashes
- Disk space warnings
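As a sketch, an ERROR-rate alert in the Loki ruler might look like this (labels match the Promtail labels above; the threshold is a placeholder):

```yaml
groups:
  - name: homelab-log-alerts
    rules:
      - alert: HighErrorRate
        expr: sum by (container) (count_over_time({stream="stderr"} |= "ERROR" [5m])) > 10
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: '{{ $labels.container }} is logging errors'
```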
## 🔧 Maintenance Tasks
### Daily
- Check Uptime Kuma dashboard
- Review any critical alerts
### Weekly
- Check disk space: `df -h`
- Review failed downloads in Sonarr/Radarr
- Check Loki logs for errors
### Monthly
- Update all containers: `docker compose pull && docker compose up -d`
- Review and clean old Docker images: `docker image prune -a`
- Backup configurations
- Check SSL certificate renewal
### Quarterly
- Review and update documentation
- Clean up old media (if needed)
- Review and adjust quality profiles
- Update Recyclarr configurations
## 📚 Additional Resources
- [Traefik Documentation](https://doc.traefik.io/traefik/)
- [Docker Compose Best Practices](https://docs.docker.com/compose/production/)
- [Loki LogQL Guide](https://grafana.com/docs/loki/latest/logql/)
- [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/)
- [Proxmox GPU Passthrough](https://pve.proxmox.com/wiki/PCI_Passthrough)
- [AlmaLinux Documentation](https://wiki.almalinux.org/)
---
**System Ready!** 🚀

docs/setup/almalinux-vm.md (new file, 775 lines)
# AlmaLinux 9.6 VM Setup Guide
Complete setup guide for the homelab VM on AlmaLinux 9.6 running on Proxmox VE 9.
## Hardware Context
- **Host**: Proxmox VE 9 (Debian 13 based)
- CPU: AMD Ryzen 5 7600X (6C/12T, 5.3 GHz boost)
- GPU: NVIDIA GTX 1070 (8GB VRAM)
- RAM: 32GB DDR5
- **VM Allocation**:
- OS: AlmaLinux 9.6 (RHEL 9 compatible)
- CPU: 8 vCPUs
- RAM: 24GB
- Disk: 500GB+ (expandable)
- GPU: GTX 1070 (PCIe passthrough)
## Proxmox VM Creation
### 1. Create VM
```bash
# On Proxmox host
qm create 100 \
--name homelab \
--memory 24576 \
--cores 8 \
--cpu host \
--sockets 1 \
--net0 virtio,bridge=vmbr0 \
--scsi0 local-lvm:500 \
--ostype l26 \
--boot order=scsi0
# Attach AlmaLinux ISO
qm set 100 --ide2 local:iso/AlmaLinux-9.6-x86_64-dvd.iso,media=cdrom
# Enable UEFI
qm set 100 --bios ovmf --efidisk0 local-lvm:1
```
### 2. GPU Passthrough
**Find GPU PCI address:**
```bash
lspci | grep -i nvidia
# Example output: 01:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1070]
```
**Enable IOMMU in Proxmox:**
Edit `/etc/default/grub`:
```bash
# For AMD CPU (Ryzen 5 7600X)
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
```
Update GRUB and reboot:
```bash
update-grub
reboot
```
**Verify IOMMU:**
```bash
dmesg | grep -e DMAR -e IOMMU
# Should show IOMMU enabled
```
**Add GPU to VM:**
Edit `/etc/pve/qemu-server/100.conf`:
```
hostpci0: 0000:01:00,pcie=1,x-vga=1
```
Or via command:
```bash
qm set 100 --hostpci0 0000:01:00,pcie=1,x-vga=1
```
**Blacklist GPU on host:**
Edit `/etc/modprobe.d/blacklist-nvidia.conf`:
```
blacklist nouveau
blacklist nvidia
blacklist nvidia_drm
blacklist nvidia_modeset
blacklist nvidia_uvm
```
Update initramfs:
```bash
update-initramfs -u
reboot
```
## AlmaLinux Installation
### 1. Install AlmaLinux 9.6
Start VM and follow installer:
1. **Language**: English (US)
2. **Installation Destination**: Use all space, automatic partitioning
3. **Network**: Enable and set hostname to `homelab.fig.systems`
4. **Software Selection**: Minimal Install
5. **Root Password**: Set strong password
6. **User Creation**: Create admin user (e.g., `homelab`)
### 2. Post-Installation Configuration
```bash
# SSH into VM
ssh homelab@<vm-ip>
# Update system
sudo dnf update -y
# Install essential tools
sudo dnf install -y \
vim \
git \
curl \
wget \
htop \
ncdu \
tree \
tmux \
bind-utils \
net-tools \
firewalld
# Enable and configure firewall
sudo systemctl enable --now firewalld
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload
```
### 3. Configure Static IP (Optional)
```bash
# Find connection name
nmcli connection show
# Set static IP (example: 192.168.1.100)
sudo nmcli connection modify "System eth0" \
ipv4.addresses 192.168.1.100/24 \
ipv4.gateway 192.168.1.1 \
ipv4.dns "1.1.1.1,8.8.8.8" \
ipv4.method manual
# Restart network
sudo nmcli connection down "System eth0"
sudo nmcli connection up "System eth0"
```
## Docker Installation
### 1. Install Docker Engine
```bash
# Remove old versions
sudo dnf remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-engine
# Add Docker repository
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# Install Docker
sudo dnf install -y \
docker-ce \
docker-ce-cli \
containerd.io \
docker-buildx-plugin \
docker-compose-plugin
# Start Docker
sudo systemctl enable --now docker
# Verify
sudo docker run hello-world
```
### 2. Configure Docker
**Add user to docker group:**
```bash
sudo usermod -aG docker $USER
newgrp docker
# Verify (no sudo needed)
docker ps
```
**Configure Docker daemon:**
Create `/etc/docker/daemon.json`:
```json
{
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3"
},
"storage-driver": "overlay2",
"features": {
"buildkit": true
}
}
```
Restart Docker:
```bash
sudo systemctl restart docker
```
## NVIDIA GPU Setup
### 1. Install NVIDIA Drivers
```bash
# Add EPEL repository
sudo dnf install -y epel-release
# Add NVIDIA repository
sudo dnf config-manager --add-repo \
https://developer.download.nvidia.com/compute/cuda/repos/rhel9/x86_64/cuda-rhel9.repo
# Install drivers
sudo dnf install -y \
nvidia-driver \
nvidia-driver-cuda \
nvidia-settings \
nvidia-persistenced
# Reboot to load drivers
sudo reboot
```
### 2. Verify GPU
```bash
# Check driver version
nvidia-smi
# Expected output:
# +-----------------------------------------------------------------------------+
# | NVIDIA-SMI 535.xx.xx Driver Version: 535.xx.xx CUDA Version: 12.2 |
# |-------------------------------+----------------------+----------------------+
# | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
# | 0 GeForce GTX 1070 Off | 00000000:01:00.0 Off | N/A |
# +-------------------------------+----------------------+----------------------+
```
### 3. Install NVIDIA Container Toolkit
```bash
# Add NVIDIA Container Toolkit repository
sudo dnf config-manager --add-repo \
https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo
# Install toolkit
sudo dnf install -y nvidia-container-toolkit
# Configure Docker to use nvidia runtime
sudo nvidia-ctk runtime configure --runtime=docker
# Restart Docker
sudo systemctl restart docker
# Test GPU in container
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```
## Storage Setup
### 1. Create Media Directory
```bash
# Create media directory structure
sudo mkdir -p /media/{tv,movies,music,photos,books,audiobooks,comics,homemovies}
sudo mkdir -p /media/{downloads,complete,incomplete}
# Set ownership
sudo chown -R $USER:$USER /media
# Set permissions
chmod -R 755 /media
```
### 2. Mount Additional Storage (Optional)
If using separate disk for media:
```bash
# Find disk
lsblk
# Format disk (example: /dev/sdb)
sudo mkfs.ext4 /dev/sdb
# Get UUID
sudo blkid /dev/sdb
# Add to /etc/fstab
echo "UUID=<uuid> /media ext4 defaults,nofail 0 2" | sudo tee -a /etc/fstab
# Mount
sudo mount -a
```
## Homelab Repository Setup
### 1. Clone Repository
```bash
# Create workspace
mkdir -p ~/homelab
cd ~/homelab
# Clone repository
git clone https://github.com/efigueroa/homelab.git .
# Or if using SSH
git clone git@github.com:efigueroa/homelab.git .
```
### 2. Create Docker Network
```bash
# Create homelab network
docker network create homelab
# Verify
docker network ls | grep homelab
```
### 3. Configure Environment Variables
```bash
# Generate secrets for all services
cd ~/homelab
# LLDAP
cd compose/core/lldap
openssl rand -hex 32 > /tmp/lldap_jwt_secret
openssl rand -base64 32 | tr -d /=+ | cut -c1-32 > /tmp/lldap_pass
# Update .env with generated secrets
# Tinyauth
cd ../tinyauth
openssl rand -hex 32 > /tmp/tinyauth_session
# Update .env (LDAP_BIND_PASSWORD must match LLDAP)
# Continue for all services...
```
See [`docs/guides/secrets-management.md`](../guides/secrets-management.md) for complete guide.
## SELinux Configuration
AlmaLinux uses SELinux by default. Configure for Docker:
```bash
# Check SELinux status
getenforce
# Should show: Enforcing
# Allow containers to manage cgroups (needed by some systemd-based containers)
sudo setsebool -P container_manage_cgroup on
# If you encounter permission issues:
# Option 1: Add SELinux context to directories
sudo chcon -R -t container_file_t ~/homelab/compose
sudo chcon -R -t container_file_t /media
# Option 2: Use :Z flag in docker volumes (auto-relabels)
# Example: ./data:/data:Z
# Option 3: Set SELinux to permissive (not recommended)
# sudo setenforce 0
```
## System Tuning
### 1. Increase File Limits
```bash
# Add to /etc/security/limits.conf
echo "* soft nofile 65536" | sudo tee -a /etc/security/limits.conf
echo "* hard nofile 65536" | sudo tee -a /etc/security/limits.conf
# Add to /etc/sysctl.conf
echo "fs.file-max = 65536" | sudo tee -a /etc/sysctl.conf
echo "fs.inotify.max_user_watches = 524288" | sudo tee -a /etc/sysctl.conf
# Apply
sudo sysctl -p
```
### 2. Optimize for Media Server
```bash
# Network tuning
echo "net.core.rmem_max = 134217728" | sudo tee -a /etc/sysctl.conf
echo "net.core.wmem_max = 134217728" | sudo tee -a /etc/sysctl.conf
echo "net.ipv4.tcp_rmem = 4096 87380 67108864" | sudo tee -a /etc/sysctl.conf
echo "net.ipv4.tcp_wmem = 4096 65536 67108864" | sudo tee -a /etc/sysctl.conf
# Apply
sudo sysctl -p
```
### 3. CPU Governor (Ryzen 5 7600X)
```bash
# Install cpupower
sudo dnf install -y kernel-tools
# Set to performance mode (does not persist across reboots)
sudo cpupower frequency-set -g performance
# Persist via the cpupower service shipped with kernel-tools
echo 'CPUPOWER_START_OPTS="frequency-set -g performance"' | sudo tee /etc/sysconfig/cpupower
sudo systemctl enable --now cpupower
```
## Deployment
### 1. Deploy Core Services
```bash
cd ~/homelab
# Create network
docker network create homelab
# Deploy Traefik
cd compose/core/traefik
docker compose up -d
# Deploy LLDAP
cd ../lldap
docker compose up -d
# Wait for LLDAP to be ready (30 seconds)
sleep 30
# Deploy Tinyauth
cd ../tinyauth
docker compose up -d
```
### 2. Configure LLDAP
```bash
# Access LLDAP web UI
# https://lldap.fig.systems
# 1. Login with admin credentials from .env
# 2. Create observer user for tinyauth
# 3. Create regular users
```
### 3. Deploy Monitoring
```bash
cd ~/homelab
# Deploy logging stack
cd compose/monitoring/logging
docker compose up -d
# Deploy uptime monitoring
cd ../uptime
docker compose up -d
```
### 4. Deploy Services
See [`README.md`](../../README.md) for complete deployment order.
## Verification
### 1. Check All Services
```bash
# List all running containers
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
# Check networks
docker network ls
# Check volumes
docker volume ls
```
### 2. Test GPU Access
```bash
# Test in Jellyfin
docker exec jellyfin nvidia-smi
# Test in Ollama
docker exec ollama nvidia-smi
# Test in Immich
docker exec immich-machine-learning nvidia-smi
```
### 3. Test Logging
```bash
# Check Promtail is collecting logs
docker logs promtail | grep "clients configured"
# Access Grafana
# https://logs.fig.systems
# Query logs
# {container="traefik"}
```
### 4. Test SSL
```bash
# Check certificate
curl -vI https://sonarr.fig.systems 2>&1 | grep -i "subject:"
# Should show valid Let's Encrypt certificate
```
## Backup Strategy
### 1. VM Snapshots (Proxmox)
```bash
# On Proxmox host
# Create snapshot before major changes
qm snapshot 100 pre-update-$(date +%Y%m%d)
# List snapshots
qm listsnapshot 100
# Restore snapshot
qm rollback 100 <snapshot-name>
```
### 2. Configuration Backup
```bash
# On VM
cd ~/homelab
# Backup compose files and .env secrets (data/config/database dirs excluded)
tar czf homelab-config-$(date +%Y%m%d).tar.gz \
--exclude='*/data' \
--exclude='*/db' \
--exclude='*/pgdata' \
--exclude='*/config' \
--exclude='*/models' \
--exclude='*_data' \
compose/
# Backup to external storage
scp homelab-config-*.tar.gz user@backup-server:/backups/
```
### 3. Automated Backups with Backrest
Backrest service is included and configured. See:
- `compose/services/backrest/`
- Access: https://backup.fig.systems
## Maintenance
### Weekly
```bash
# Update containers
cd ~/homelab
find compose -name "compose.yaml" -type f | while read -r compose; do
dir=$(dirname "$compose")
echo "Updating $dir"
cd "$dir"
docker compose pull
docker compose up -d
cd ~/homelab
done
# Clean up old images
docker image prune -a -f
# Check disk space
df -h
ncdu /media
```
### Monthly
```bash
# Update AlmaLinux
sudo dnf update -y
# Update NVIDIA drivers (if available)
sudo dnf update nvidia-driver* -y
# Reboot if kernel updated
sudo reboot
```
## Troubleshooting
### Services Won't Start
```bash
# Check SELinux denials
sudo ausearch -m avc -ts recent
# If SELinux is blocking:
sudo setsebool -P container_manage_cgroup on
# Or relabel directories
sudo restorecon -Rv ~/homelab/compose
```
### GPU Not Detected
```bash
# Check GPU is passed through
lspci | grep -i nvidia
# Check drivers loaded
lsmod | grep nvidia
# Reinstall drivers
sudo dnf reinstall nvidia-driver* -y
sudo reboot
```
### Network Issues
```bash
# Check firewall
sudo firewall-cmd --list-all
# Add ports if needed
sudo firewall-cmd --permanent --add-port=80/tcp
sudo firewall-cmd --permanent --add-port=443/tcp
sudo firewall-cmd --reload
# Check Docker network
docker network inspect homelab
```
### Permission Denied Errors
```bash
# Check ownership
ls -la ~/homelab/compose/*/
# Fix ownership
sudo chown -R $USER:$USER ~/homelab
# Check SELinux context
ls -Z ~/homelab/compose
# Fix SELinux labels
sudo chcon -R -t container_file_t ~/homelab/compose
```
## Performance Monitoring
### System Stats
```bash
# CPU usage
htop
# GPU usage
watch -n 1 nvidia-smi
# Disk I/O
iostat -x 1
# Network
iftop
# Per-container stats
docker stats
```
### Resource Limits
Example container resource limits:
```yaml
# In compose.yaml
deploy:
resources:
limits:
cpus: '2.0'
memory: 4G
reservations:
cpus: '1.0'
memory: 2G
```
## Security Hardening
### 1. Disable Root SSH
```bash
# Use a drop-in so the change survives package updates
echo "PermitRootLogin no" | sudo tee /etc/ssh/sshd_config.d/50-no-root-login.conf
# Restart SSH
sudo systemctl restart sshd
```
### 2. Configure Fail2Ban
```bash
# Install
sudo dnf install -y fail2ban
# Configure
sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
# Edit /etc/fail2ban/jail.local
# [sshd]
# enabled = true
# maxretry = 3
# bantime = 3600
# Start
sudo systemctl enable --now fail2ban
```
### 3. Automatic Updates
```bash
# Install dnf-automatic
sudo dnf install -y dnf-automatic
# Configure /etc/dnf/automatic.conf
# apply_updates = yes
# Enable
sudo systemctl enable --now dnf-automatic.timer
```
## Next Steps
1. ✅ VM created and AlmaLinux installed
2. ✅ Docker and NVIDIA drivers configured
3. ✅ Homelab repository cloned
4. ✅ Network and storage configured
5. ⬜ Deploy core services
6. ⬜ Configure SSO
7. ⬜ Deploy all services
8. ⬜ Configure backups
9. ⬜ Set up monitoring
---
**System ready for deployment!** 🚀