# SGO: Security Groups Observatory

A web-based tool for exploring AWS EC2 instances and Security Groups with direct AWS import, MFA support, and CSV export capabilities.

## TL;DR - Get Started in 30 Seconds
```bash
# 1. Create .env file with your AWS credentials path
cat > .env << EOF
AWS_CONFIG_PATH=$HOME/.aws
PUID=$(id -u)
PGID=$(id -g)
EOF

# 2. Start the container
docker-compose up --build
# or with Podman:
podman-compose up --build

# 3. Open browser to http://localhost:5000
# 4. Select AWS profiles, enter MFA codes, and import!
```
## Features

- **Direct AWS Import**: Import data directly from AWS using `~/.aws/config` with MFA/OTP support
- **Parallel Import**: Import from multiple AWS accounts simultaneously
- **Search & Filter**: Search by EC2 name, SG name, instance ID, group ID, or IP address
- **Regex Search**: Enable the regex checkbox for advanced pattern matching
- **Filter by Type**: View all resources, only EC2 instances, or only Security Groups
- **CSV Export**: Export search results, EC2 details, SG details, and security group rules to CSV
- **Detailed Views**:
  - **EC2 View**: Shows EC2 instance details with nested boxes for attached Security Groups
  - **Security Group View**: Shows SG details with nested boxes for all attached EC2 instances
  - **Security Group Rules**: View and search ingress/egress rules for any security group
- **Statistics Dashboard**: Quick overview of total SGs, EC2s, and accounts
## Quick Start (Container - Recommended)

The easiest way to run SGO is with Docker or Podman. Works on Linux, macOS, and Windows.

### Prerequisites

Install either:

- Docker with Docker Compose
- Podman with podman-compose
### 1. Run the Application

```bash
# Docker
docker-compose up --build

# Podman
podman-compose up --build
```
### 2. Import Data via GUI

- Open your browser to `http://localhost:5000`
- You'll see the Import Page with all your AWS profiles
- **Select profiles**: Check the AWS accounts you want to import
- **Enter MFA codes**: Paste your MFA/OTP codes for each selected profile
- **Click "Start Import"**: Watch real-time progress as data is fetched in parallel
- **Auto-redirect**: When complete, you're taken to the Explorer
**Parallel Import**: All selected profiles are imported simultaneously in separate threads, so the total time is the longest single import rather than the sum. This prevents MFA timeout issues.
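The parallel import described above can be sketched with Python's standard `ThreadPoolExecutor`. This is an illustrative outline, not the actual `app.py` implementation; `import_profile` is a hypothetical placeholder for the real per-profile AWS fetch.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def import_profile(profile, mfa_code):
    # Placeholder for the real work: authenticate with the MFA code,
    # then describe EC2 instances and security groups for this profile.
    return f"{profile}: imported"

def import_all(selected):
    """Run every selected profile's import in its own thread.

    selected maps profile name -> MFA code. Total wall-clock time is
    roughly the slowest single import, not the sum of all of them.
    """
    results = {}
    with ThreadPoolExecutor(max_workers=max(len(selected), 1)) as pool:
        futures = {pool.submit(import_profile, p, code): p
                   for p, code in selected.items()}
        for fut in as_completed(futures):
            results[futures[fut]] = fut.result()
    return results

print(import_all({"nonprod": "123456", "prod": "654321"}))
```

Because each profile authenticates in its own thread, no MFA code sits idle waiting for another account's import to finish.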
### 3. Explore Your Data
- Search for EC2 instances and Security Groups
- View detailed information
- Inspect security group rules
- Filter and search using regex
### 4. Refresh Data

- Click the **Refresh Data** button to refresh data using cached AWS sessions (valid for 55 minutes)
- Click the **Change Profiles** button to switch to different AWS accounts
## Container Configuration

### Environment Variables

SGO supports configuration through environment variables. Create a `.env` file:
```bash
# Copy the example file
cp .env.example .env

# Edit with your settings
nano .env  # or your preferred editor
```
Or create it manually:
```bash
cat > .env << EOF
AWS_CONFIG_PATH=$HOME/.aws
PUID=$(id -u)
PGID=$(id -g)
EOF
```
**Available Options:**

| Variable | Description | Required | Default |
|---|---|---|---|
| `AWS_CONFIG_PATH` | Absolute path to AWS credentials directory | Yes | None |
| `PUID` | User ID for file permissions | No | 1000 |
| `PGID` | Group ID for file permissions | No | 1000 |
| `DATA_PATH` | Path for database storage (local mode) | No | ./data |
| `SGO_PORT` | Port to expose on host | No | 5000 |
| `DEBUG` | Enable Flask debug logging | No | false |
| `FLASK_ENV` | Flask environment | No | production |
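The defaults in the table can be expressed as a small Python sketch of how the app might read its environment. This is illustrative, assuming the variable names above; the actual parsing in `app.py`/`entrypoint.sh` may differ.

```python
import os

def load_config(env=None):
    """Read SGO settings, applying the documented defaults.

    AWS_CONFIG_PATH has no default and must be set.
    """
    env = os.environ if env is None else env
    aws_path = env.get("AWS_CONFIG_PATH")
    if not aws_path:
        raise ValueError("AWS_CONFIG_PATH is required")
    return {
        "AWS_CONFIG_PATH": aws_path,
        "PUID": int(env.get("PUID", "1000")),
        "PGID": int(env.get("PGID", "1000")),
        "DATA_PATH": env.get("DATA_PATH", "./data"),
        "SGO_PORT": int(env.get("SGO_PORT", "5000")),
        "DEBUG": env.get("DEBUG", "false").lower() == "true",
        "FLASK_ENV": env.get("FLASK_ENV", "production"),
    }

print(load_config({"AWS_CONFIG_PATH": "/home/me/.aws"}))
```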
### Data Storage Options

**Option 1: Docker Volume (Default - Recommended)**

- Data stored in Docker-managed volume `sgo-data`
- Survives container restarts and rebuilds
- Better performance on macOS/Windows
- Use the default `docker-compose.yml`
**Option 2: Local Directory**

- Data stored in the `./data` directory
- Easy to back up and access
- Better for development
- Use `docker-compose.local.yml`:

```bash
docker-compose -f docker-compose.local.yml up --build
# or
podman-compose -f docker-compose.local.yml up --build
```
Or edit `docker-compose.yml` and swap the volume configuration as indicated in the comments.
### User/Group Configuration

To avoid permission issues, set `PUID` and `PGID` to match your host user:
```bash
# Find your IDs
id -u  # Your PUID
id -g  # Your PGID

# Add to .env file
echo "PUID=$(id -u)" >> .env
echo "PGID=$(id -g)" >> .env
```
### Stopping the Application

```bash
# Stop with Ctrl+C, or:
docker-compose down  # Docker
podman-compose down  # Podman

# To also remove the data volume:
docker-compose down -v
```
## Quick Start (Local Python)

If you prefer to run without containers:
### 1. Install Dependencies

```bash
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt
```
### 2. Start the Application

```bash
python app.py
```
### 3. Open Browser

Navigate to `http://localhost:5000`
## Important Notes

- **Database Persistence**: When using containers, the database persists in the `./data` directory
- **Session Caching**: AWS sessions are cached for 55 minutes, allowing multiple refreshes without re-authentication
- **Parallel Import**: All selected AWS accounts are imported simultaneously for maximum speed
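The 55-minute session cache noted above can be sketched as a timestamped lookup that expires just under the 1-hour STS token lifetime. This is a hypothetical outline, not the app's actual cache code.

```python
import time

SESSION_TTL = 55 * 60  # seconds; kept under the 1-hour STS token lifetime

_sessions = {}  # profile name -> (created_at, session object)

def cache_session(profile, session, now=None):
    """Record a freshly authenticated session for this profile."""
    _sessions[profile] = (time.time() if now is None else now, session)

def get_cached_session(profile, now=None):
    """Return the cached session if still inside the 55-minute window, else None."""
    now = time.time() if now is None else now
    entry = _sessions.get(profile)
    if entry and now - entry[0] < SESSION_TTL:
        return entry[1]
    return None
```

A "Refresh Data" request would call `get_cached_session` first and only prompt for a new MFA code when it returns `None`.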
## AWS Configuration

### MFA Device Setup

For profiles that require MFA, add your MFA device ARN to `~/.aws/config`:
```ini
[profile nonprod-p1p2-admin]
region = us-west-2
mfa_serial = arn:aws:iam::131340773912:mfa/your-username
```
### Finding Your MFA Device ARN
- Go to AWS IAM Console
- Navigate to Users → Your User → Security Credentials
- Copy the ARN from "Assigned MFA device"
### How MFA Works in the GUI

- The import page shows all profiles from `~/.aws/config`
- Select the profiles you want to import
- Enter MFA codes in the text boxes (one per profile)
- Click "Start Import" to begin
- Real-time progress shows authentication and data fetching
- MFA session is valid for 1 hour - refresh without re-entering codes during this window
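Listing profiles from `~/.aws/config` can be done with the standard-library `configparser`, since AWS config files use INI syntax with `[profile <name>]` sections. A minimal sketch (not necessarily how `import_from_aws.py` does it):

```python
import configparser

def list_profiles(config_text):
    """Return {profile_name: mfa_serial or None} from AWS config file text."""
    parser = configparser.ConfigParser()
    parser.read_string(config_text)
    profiles = {}
    for section in parser.sections():
        # AWS names sections "profile <name>", except the "default" profile.
        name = section[len("profile "):] if section.startswith("profile ") else section
        profiles[name] = parser[section].get("mfa_serial")
    return profiles

sample = """
[default]
region = us-east-1

[profile nonprod-p1p2-admin]
region = us-west-2
mfa_serial = arn:aws:iam::123456789012:mfa/username
"""
print(list_profiles(sample))
```

Profiles whose entry has an `mfa_serial` are the ones the import page would show an MFA code box for.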
## Usage

### Search
- Type in the search box (minimum 2 characters)
- Results appear instantly as you type
- Filter by resource type using the buttons: All Resources | EC2 Instances | Security Groups
- Enable Regex: Check the "Regex" box to use regular expressions
  - Example: `^prod-.*-\d+$` finds names starting with "prod-" and ending with numbers
  - Example: `(dev|test|qa)` finds names containing dev, test, or qa
  - Example: `10\.0\.\d+\.\d+` finds IP addresses in the 10.0.x.x range
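The example patterns behave like standard Python regular expressions. A quick sketch with the `re` module (the sample names are made up for illustration):

```python
import re

names = ["prod-web-01", "dev-db", "test-cache", "10.0.3.25", "staging-api"]

def search(pattern, items):
    """Filter items by regex, similar to searching with the Regex box checked."""
    rx = re.compile(pattern, re.IGNORECASE)
    return [i for i in items if rx.search(i)]

print(search(r"^prod-.*-\d+$", names))    # starts with "prod-", ends with digits
print(search(r"(dev|test|qa)", names))    # contains dev, test, or qa
print(search(r"10\.0\.\d+\.\d+", names))  # IPs in the 10.0.x.x range
```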
### View Details

**EC2 Instance View:**
- Click on any EC2 instance from search results
- Main card shows EC2 details (Instance ID, IP, State, Account, Tags)
- Nested cards show all attached Security Groups with their details
**Security Group View:**
- Click on any Security Group from search results
- Main card shows SG details (Group ID, Name, Ingress Rules, Wave, Tags)
- Nested cards show all EC2 instances using this Security Group
### View Security Group Rules
When viewing security groups (either attached to an EC2 or directly):
- Click the View Rules button on any security group card
- A modal opens showing all ingress and egress rules
- Switch between Ingress and Egress tabs
- Use the search box to filter rules by protocol, port, source, or description
- Rules are displayed in a compact table format with:
- Protocol (TCP, UDP, ICMP, All)
- Port Range
- Source Type (CIDR, Security Group, Prefix List)
- Source (IP range or SG ID)
- Description
### Navigate
- Click ← Back to Search to return to search results
- Perform a new search at any time
- Click outside the rules modal to close it
### Export to CSV
SGO provides comprehensive CSV export capabilities:
Search Results Export:
- Click the 💾 Export button in the view controls (top right)
- Exports all current search results with filters applied
- Includes: Type, Name, ID, Account, State, IP, Security Groups count, Wave, Git info
EC2 Instance Details Export:
- Click the 💾 Export button in any EC2 detail card
- Exports complete EC2 information including:
- Instance details (ID, name, state, IP, account info)
- All AWS tags
- Attached security groups with their details
Security Group Details Export:
- Click the 💾 Export button in any SG detail card
- Exports complete SG information including:
- Group details (ID, name, wave, rule counts)
- All AWS tags
- Attached EC2 instances with their details
Security Group Rules Export:
- Click the 💾 Export button in the rules modal
- Exports all ingress and egress rules with:
- Rule details (direction, protocol, ports, source)
- Group ID, account ID
- Git file and commit information from tags
All exports include timestamps in filenames and proper CSV escaping.
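The timestamped filenames and CSV escaping can be sketched with Python's `csv` module, which handles quoting automatically. Function and field names here are illustrative, not the app's actual API.

```python
import csv
import io
from datetime import datetime

def export_rules_csv(rules):
    """Render security-group rules as CSV text; csv.writer handles escaping."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["Direction", "Protocol", "Port Range", "Source", "Description"])
    for r in rules:
        writer.writerow([r["direction"], r["protocol"], r["ports"],
                         r["source"], r["description"]])
    return buf.getvalue()

def export_filename(prefix="sg_rules"):
    # Timestamped filename, e.g. sg_rules_20250101_120000.csv
    return f"{prefix}_{datetime.now():%Y%m%d_%H%M%S}.csv"

rules = [{"direction": "ingress", "protocol": "tcp", "ports": "443",
          "source": "10.0.0.0/16", "description": 'allow "internal" https'}]
print(export_rules_csv(rules))
```

Note how the embedded quotes in the description come out doubled and wrapped, so commas and quotes in tags or descriptions never break the CSV.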
## Data Structure

### Security Groups Table
- Account ID & Name
- Group ID & Name
- Tag Name
- Wave Tag
- Git Repo Tag
- Ingress Rule Count
### EC2 Instances Table
- Account ID & Name
- Instance ID
- Tag Name
- State (running, stopped, etc.)
- Private IP Address
- Security Groups (IDs and Names)
- Git Repo Tag
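The two tables above could be modeled in SQLite roughly as follows. This schema is a hypothetical sketch mirroring the listed columns; the real table and column names inside `aws_export.db` may differ.

```python
import sqlite3

# Illustrative schema only; mirrors the fields listed above.
SCHEMA = """
CREATE TABLE security_groups (
    account_id TEXT, account_name TEXT,
    group_id TEXT PRIMARY KEY, group_name TEXT,
    tag_name TEXT, wave TEXT, git_repo TEXT,
    ingress_rule_count INTEGER
);
CREATE TABLE ec2_instances (
    account_id TEXT, account_name TEXT,
    instance_id TEXT PRIMARY KEY,
    tag_name TEXT, state TEXT, private_ip TEXT,
    security_groups TEXT,  -- e.g. "sg-1 (web), sg-2 (db)"
    git_repo TEXT
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
conn.execute(
    "INSERT INTO ec2_instances VALUES (?,?,?,?,?,?,?,?)",
    ("123456789012", "nonprod", "i-0abc", "web-01", "running",
     "10.0.3.25", "sg-1 (web)", "repo-a"),
)
print(conn.execute("SELECT state, private_ip FROM ec2_instances").fetchone())
```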
## File Structure

```
sgo/
├── app.py                    # Flask web application
├── import_from_aws.py        # AWS direct import functions
├── import_data.py            # CSV to SQLite import (legacy)
├── requirements.txt          # Python dependencies
├── Dockerfile                # Container image definition
├── docker-compose.yml        # Container orchestration (Docker volume)
├── docker-compose.local.yml  # Alternative with local directory storage
├── entrypoint.sh             # Container entrypoint with PUID/PGID support
├── .dockerignore             # Files to exclude from container
├── .env.example              # Example environment configuration
├── .gitignore                # Git ignore patterns
├── README.md                 # This file
├── data/                     # Local data directory (if using local mode)
│   └── aws_export.db         # SQLite database
├── static/
│   ├── css/
│   │   └── style.css         # Application styles
│   └── images/
│       └── logo.svg          # Application logo
└── templates/
    ├── import.html           # Import/profile selection page
    └── index.html            # Main explorer interface
```
## Configuration Examples

### Example 1: Basic Setup (Default)
Minimal configuration with Docker volume:
```bash
# Create .env file
cat > .env << EOF
AWS_CONFIG_PATH=$HOME/.aws
PUID=$(id -u)
PGID=$(id -g)
EOF

# Run
docker-compose up --build
# or: podman-compose up --build
```
### Example 2: Local Data Directory
Store database in local directory for easy access:
```bash
# Create .env file
cat > .env << EOF
AWS_CONFIG_PATH=$HOME/.aws
PUID=$(id -u)
PGID=$(id -g)
DATA_PATH=./data
EOF

# Use the local compose file
docker-compose -f docker-compose.local.yml up --build
```
### Example 3: Custom AWS Credentials Location
If your AWS credentials are in a non-standard location:
```bash
# Create .env file
cat > .env << EOF
AWS_CONFIG_PATH=/path/to/custom/.aws
PUID=$(id -u)
PGID=$(id -g)
EOF

docker-compose up --build
```
### Example 4: Debug Mode
Enable detailed logging for troubleshooting:
```bash
# Create .env file
cat > .env << EOF
AWS_CONFIG_PATH=$HOME/.aws
PUID=$(id -u)
PGID=$(id -g)
DEBUG=true
EOF

docker-compose up --build
```
### Example 5: Custom Port
Run on a different port (e.g., 8080):
```bash
# Create .env file
cat > .env << EOF
AWS_CONFIG_PATH=$HOME/.aws
PUID=$(id -u)
PGID=$(id -g)
SGO_PORT=8080
EOF

# Access at http://localhost:8080
docker-compose up --build
```
## Troubleshooting

### Container Issues

**Cannot connect to Docker daemon:**
```bash
# Start Docker service
sudo systemctl start docker  # Linux
# or open Docker Desktop     # macOS/Windows
```
**Port 5000 already in use:**

Change the port mapping in `docker-compose.yml`:

```yaml
ports:
  - "8080:5000"  # Use port 8080 instead
```
**AWS credentials not found in container:**

- Ensure you have created a `.env` file with `AWS_CONFIG_PATH` set
- Verify `~/.aws/config` and `~/.aws/credentials` exist on your host machine
- Check file permissions on the `~/.aws` directory
- Example:

```bash
cat > .env << EOF
AWS_CONFIG_PATH=$HOME/.aws
PUID=$(id -u)
PGID=$(id -g)
EOF
```
**Permission denied errors:**

```bash
# Set PUID/PGID to match your user
cat > .env << EOF
PUID=$(id -u)
PGID=$(id -g)
EOF

# Rebuild container
docker-compose down && docker-compose up --build
```
**Database locked or permission errors:**

```bash
# If using local directory mode, ensure proper ownership
sudo chown -R $(id -u):$(id -g) ./data

# Or use Docker volume mode (default), which handles permissions automatically
```
### AWS Configuration Issues

**No AWS profiles found:**

Ensure you have a `~/.aws/config` file with profiles configured:
```ini
[profile nonprod-p1p2-admin]
region = us-west-2
mfa_serial = arn:aws:iam::123456789012:mfa/username
```
**MFA authentication fails:**

- Verify your MFA code is current (codes expire every 30 seconds)
- Check that `mfa_serial` is configured in `~/.aws/config`
- Ensure your AWS credentials in `~/.aws/credentials` are valid
**Import fails:**
- Check network connectivity to AWS
- Verify you have permissions to describe EC2 instances and security groups
- Look at the progress log for specific error messages
### Platform-Specific Notes

**Windows:**
- Use Git Bash, WSL2, or PowerShell for running scripts
- Docker Desktop must be running
- WSL2 backend recommended for better performance
**macOS:**
- Docker Desktop must be running
- Ensure Docker has access to your home directory in Docker Desktop settings
**Linux:**

- You may need to run Docker commands with `sudo` or add your user to the `docker` group
- Podman works without root by default
## Quick Reference

### Essential Commands
```bash
# Start
docker-compose up --build
# or: podman-compose up --build

# Stop
docker-compose down
# or: Ctrl+C

# View logs
docker-compose logs -f

# Rebuild after changes
docker-compose up --build

# Remove everything including data
docker-compose down -v
```
### Quick .env Setup
```bash
# Minimal configuration for most users
cat > .env << EOF
AWS_CONFIG_PATH=$HOME/.aws
PUID=$(id -u)
PGID=$(id -g)
EOF

# Then run
docker-compose up --build
```
### Data Location

- **Docker volume (default)**: Managed by Docker, survives rebuilds

  ```bash
  # Inspect volume
  docker volume inspect sgo-data

  # Backup volume
  docker run --rm -v sgo-data:/data -v $(pwd):/backup alpine tar czf /backup/sgo-backup.tar.gz -C /data .
  ```

- **Local directory**: `./data/aws_export.db`

  ```bash
  # Use local mode
  docker-compose -f docker-compose.local.yml up --build
  ```
## License
This project is dual-licensed:
- FREE for individual, personal, educational, and non-commercial use
- PAID LICENSE REQUIRED for commercial use by businesses and organizations
You may NOT modify this software for the purpose of selling or commercially distributing it.
See the LICENSE file for full details.
For commercial licensing inquiries, please open an issue in this repository.