Initial Commit

This commit is contained in:
Eduardo Figueroa 2025-11-20 12:03:30 -08:00
commit 6886c8871c
No known key found for this signature in database
GPG key ID: E4B7BBE6F7D53330
20 changed files with 4903 additions and 0 deletions

16
.dockerignore Normal file

@@ -0,0 +1,16 @@
*.md
.git
.gitignore
__pycache__
*.pyc
*.pyo
*.pyd
.Python
*.so
*.egg
*.egg-info
dist
build
.env
venv/
ENV/

39
.env.example Normal file

@@ -0,0 +1,39 @@
# SGO Configuration
# Copy this file to .env and customize as needed
# ===== User/Group Configuration =====
# Set these to match your host user ID for proper file permissions
# Find your IDs with: id -u (PUID) and id -g (PGID)
PUID=1000
PGID=1000
# ===== AWS Credentials Path (REQUIRED) =====
# Absolute path to your AWS configuration directory on the host
# REQUIRED: Must be set to your actual path
# Example: AWS_CONFIG_PATH=/home/username/.aws
AWS_CONFIG_PATH=/home/yourusername/.aws
# ===== Data Storage =====
# By default, uses a Docker volume (sgo-data)
# To use a local directory instead:
# 1. Edit docker-compose.yml
# 2. Comment out: - sgo-data:/app/data
# 3. Uncomment: - ${DATA_PATH:-./data}:/app/data
# DATA_PATH=./data
# ===== Port Configuration =====
# Port to expose the web interface on
# Default: 5000
# SGO_PORT=5000
# ===== Debug Mode =====
# Enable Flask debug mode for detailed logging
# Options: true, false
# Default: false
DEBUG=false
# ===== Flask Environment =====
# Flask environment setting
# Options: production, development
# Default: production
FLASK_ENV=production

30
.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file

@@ -0,0 +1,30 @@
---
name: Bug Report
about: Report a bug or issue with SGO
title: '[BUG] '
labels: bug
assignees: ''
---
## Description
A clear description of the bug.
## Steps to Reproduce
1.
2.
3.
## Expected Behavior
What you expected to happen.
## Actual Behavior
What actually happened.
## Environment
- OS: [e.g., Linux, macOS, Windows]
- Container Runtime: [Docker or Podman]
- Version: [Docker/Podman version]
- SGO Version: [commit hash or release]
## Additional Context
Add any other context, logs, or screenshots about the problem here.


@@ -0,0 +1,26 @@
---
name: Commercial License Inquiry
about: Request information about commercial licensing
title: '[COMMERCIAL LICENSE] '
labels: commercial
assignees: ''
---
## Organization Information
- Company Name:
- Industry:
- Size:
## Use Case
How do you plan to use SGO?
## Contact Information
- Name:
- Email:
- Phone (optional):
## Additional Information
Any other details about your requirements.
---
**Note:** For commercial use, a paid license is required. The maintainer will contact you to discuss licensing options.


@@ -0,0 +1,19 @@
---
name: Feature Request
about: Suggest a new feature for SGO
title: '[FEATURE] '
labels: enhancement
assignees: ''
---
## Problem
A clear description of the problem you're trying to solve.
## Proposed Solution
Describe your proposed solution.
## Alternatives Considered
What other solutions have you considered?
## Additional Context
Add any other context, mockups, or examples here.

37
.gitignore vendored Normal file

@@ -0,0 +1,37 @@
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
venv/
ENV/
env/
*.egg-info/
dist/
build/
# Data
data/
*.db
*.sqlite
*.sqlite3
# IDE
.vscode/
.idea/
*.swp
*.swo
*~
# OS
.DS_Store
Thumbs.db
# Environment
.env
.env.local
.env.*.local
# Logs
*.log

57
CONTRIBUTING.md Normal file

@@ -0,0 +1,57 @@
# Contributing to SGO
Thank you for your interest in contributing to the Security Groups Observatory (SGO)!
## License Considerations
Before contributing, please note that this project uses a dual-license model:
- FREE for personal, educational, and non-commercial use
- PAID license required for commercial use
By contributing to this project, you agree that your contributions will be licensed under the same terms.
## How to Contribute
### Reporting Bugs
If you find a bug, please open an issue with:
- A clear description of the problem
- Steps to reproduce
- Expected vs actual behavior
- Your environment (OS, Docker/Podman version, etc.)
### Suggesting Features
Feature requests are welcome! Please open an issue describing:
- The problem you're trying to solve
- Your proposed solution
- Any alternatives you've considered
### Pull Requests
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Make your changes
4. Test thoroughly with both Docker and Podman
5. Commit your changes (`git commit -m 'Add amazing feature'`)
6. Push to your branch (`git push origin feature/amazing-feature`)
7. Open a Pull Request
### Code Style
- Use comments starting with `#` (no multiline comment markers)
- Keep code clean and well-documented
- Follow existing patterns in the codebase
### Testing
Before submitting a PR, please test:
- Docker functionality: `docker-compose up --build`
- Podman functionality: `podman-compose up --build`
- Both Docker volume and local directory modes
- CSV export features
- AWS credential handling
## Questions?
Feel free to open an issue for any questions about contributing!

39
Dockerfile Normal file

@@ -0,0 +1,39 @@
FROM python:3.11-slim
# Install gosu for user switching
RUN apt-get update && \
    apt-get install -y --no-install-recommends gosu && \
    rm -rf /var/lib/apt/lists/*
WORKDIR /app
# Install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy application files
COPY . .
# Create default directories
RUN mkdir -p /app/data /home/sgo
# Copy entrypoint script
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
# Expose port
EXPOSE 5000
# Set environment variables
ENV FLASK_APP=app.py \
    PYTHONUNBUFFERED=1 \
    PUID=1000 \
    PGID=1000 \
    DEBUG=false \
    HOME=/home/sgo
# Use entrypoint for PUID/PGID handling
ENTRYPOINT ["/entrypoint.sh"]
# Run the application
CMD ["python", "app.py"]

39
LICENSE Normal file

@@ -0,0 +1,39 @@
SGO (Security Groups Observatory) License
Copyright (c) 2024
TERMS AND CONDITIONS
1. DEFINITIONS
- "Software" refers to the SGO application and all associated files
- "Individual Use" means use by a natural person for personal, educational, or non-commercial purposes
- "Commercial Use" means use by any business entity, organization, or for profit-generating activities
2. LICENSE GRANT
2.1 Individual Use (FREE)
This Software is free to use and copy for Individual Use, subject to the following conditions:
- The above copyright notice and this license shall be included in all copies
- The Software is provided "AS IS", without warranty of any kind
- Modifications for personal use are permitted
2.2 Commercial Use (PAID LICENSE REQUIRED)
Commercial Use of this Software requires a separate paid license agreement.
Companies, businesses, or organizations wishing to use this Software must contact the copyright holder to obtain a commercial license.
3. RESTRICTIONS
- Commercial entities may NOT use this Software without obtaining a paid commercial license
- You may NOT modify this Software for the purpose of selling, licensing, or otherwise commercially distributing it
- You may NOT sell, sublicense, or commercially distribute modified or unmodified versions of this Software
- Redistribution in commercial products requires explicit written permission
4. PERMITTED USES
- Personal, educational, and non-profit use
- Modification for personal use only
- Academic research and teaching
- Non-commercial open source projects
5. DISCLAIMER
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
For commercial licensing inquiries, please contact: [Your contact information]

580
README.md Normal file

@@ -0,0 +1,580 @@
# SGO: Security Groups Observatory
A web-based tool for exploring AWS EC2 instances and Security Groups with direct AWS import, MFA support, and CSV export capabilities.
## TL;DR - Get Started in 30 Seconds
```bash
# 1. Create .env file with your AWS credentials path
cat > .env << EOF
AWS_CONFIG_PATH=$HOME/.aws
PUID=$(id -u)
PGID=$(id -g)
EOF
# 2. Start the container
docker-compose up --build
# or with Podman:
podman-compose up --build
# 3. Open browser to http://localhost:5000
# 4. Select AWS profiles, enter MFA codes, and import!
```
## Features
- **Direct AWS Import**: Import data directly from AWS using `~/.aws/config` with MFA/OTP support
- **Parallel Import**: Import from multiple AWS accounts simultaneously
- **Search & Filter**: Search by EC2 name, SG name, instance ID, group ID, or IP address
- **Regex Search**: Enable regex checkbox for advanced pattern matching
- **Filter by Type**: View all resources, only EC2 instances, or only Security Groups
- **CSV Export**: Export search results, EC2 details, SG details, and security group rules to CSV
- **Detailed Views**:
- **EC2 View**: Shows EC2 instance details with nested boxes for attached Security Groups
- **Security Group View**: Shows SG details with nested boxes for all attached EC2 instances
- **Security Group Rules**: View and search ingress/egress rules for any security group
- **Statistics Dashboard**: Quick overview of total SGs, EC2s, and accounts
## Quick Start (Container - Recommended)
The easiest way to run SGO is using Docker or Podman. Works on Linux, macOS, and Windows.
### Prerequisites
Install either:
- **Docker**: https://docs.docker.com/get-docker/
- **Podman**: https://podman.io/getting-started/installation
### 1. Run the Application
```bash
# Docker
docker-compose up --build
# Podman
podman-compose up --build
```
### 2. Import Data via GUI
1. Open your browser to `http://localhost:5000`
2. You'll see the **Import Page** with all your AWS profiles
3. **Select profiles**: Check the AWS accounts you want to import
4. **Enter MFA codes**: Paste your MFA/OTP codes for each selected profile
5. **Click "Start Import"**: Watch real-time progress as data is fetched **in parallel**
6. **Auto-redirect**: When complete, you're taken to the Explorer
**Parallel Import**: All selected profiles are imported simultaneously in separate threads, so total time is the max of any single import, not the sum. This prevents MFA timeout issues.
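The "max, not sum" behavior described above can be sketched with Python's `ThreadPoolExecutor` (an illustrative stand-in, not the application's actual import code; profile names and the 0.2 s delay are made up):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_import(profile):
    # Stand-in for per-profile authentication + data fetching (~0.2 s each here)
    time.sleep(0.2)
    return profile

profiles = ["dev", "staging", "prod"]  # hypothetical profile names
start = time.time()
with ThreadPoolExecutor(max_workers=len(profiles)) as pool:
    results = list(pool.map(fake_import, profiles))
elapsed = time.time() - start
# Concurrent wall time approximates the slowest single import, not the sum
print(results, round(elapsed, 1))
```

With three 0.2 s imports, wall time stays near 0.2 s rather than 0.6 s, which is why MFA codes are less likely to expire mid-import.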
### 3. Explore Your Data
- Search for EC2 instances and Security Groups
- View detailed information
- Inspect security group rules
- Filter and search using regex
### 4. Refresh Data
- Click the **Refresh Data** button to refresh data using cached AWS sessions (valid for 55 minutes)
- Click the **Change Profiles** button to switch to different AWS accounts
## Container Configuration
### Environment Variables
SGO supports configuration through environment variables. Create a `.env` file:
```bash
# Copy the example file
cp .env.example .env
# Edit with your settings
nano .env # or your preferred editor
```
Or create it manually:
```bash
cat > .env << EOF
AWS_CONFIG_PATH=$HOME/.aws
PUID=$(id -u)
PGID=$(id -g)
EOF
```
**Available Options:**
| Variable | Description | Required | Default |
|----------|-------------|----------|---------|
| `AWS_CONFIG_PATH` | Absolute path to AWS credentials directory | Yes | None |
| `PUID` | User ID for file permissions | No | `1000` |
| `PGID` | Group ID for file permissions | No | `1000` |
| `DATA_PATH` | Path for database storage (local mode) | No | `./data` |
| `SGO_PORT` | Port to expose on host | No | `5000` |
| `DEBUG` | Enable Flask debug logging | No | `false` |
| `FLASK_ENV` | Flask environment | No | `production` |
### Data Storage Options
**Option 1: Docker Volume (Default - Recommended)**
- Data stored in Docker-managed volume `sgo-data`
- Survives container restarts and rebuilds
- Better performance on macOS/Windows
- Use default `docker-compose.yml`
**Option 2: Local Directory**
- Data stored in `./data` directory
- Easy to backup and access
- Better for development
- Use `docker-compose.local.yml`:
```bash
docker-compose -f docker-compose.local.yml up --build
# or
podman-compose -f docker-compose.local.yml up --build
```
Or edit `docker-compose.yml` and swap the volume configuration as indicated in comments.
### User/Group Configuration
To avoid permission issues, set `PUID` and `PGID` to match your host user:
```bash
# Find your IDs
id -u # Your PUID
id -g # Your PGID
# Add to .env file
echo "PUID=$(id -u)" >> .env
echo "PGID=$(id -g)" >> .env
```
### Stopping the Application
```bash
# Stop with Ctrl+C, or:
docker-compose down # Docker
podman-compose down # Podman
# To also remove the data volume:
docker-compose down -v
```
## Quick Start (Local Python)
If you prefer to run without containers:
### 1. Install Dependencies
```bash
python3 -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
pip install -r requirements.txt
```
### 2. Start the Application
```bash
python app.py
```
### 3. Open Browser
Navigate to `http://localhost:5000`
## Important Notes
- **Database Persistence**: When using containers, the database persists in the `sgo-data` Docker volume by default, or in the `./data` directory when using local directory mode
- **Session Caching**: AWS sessions are cached for 55 minutes, allowing multiple refreshes without re-authentication
- **Parallel Import**: All selected AWS accounts are imported simultaneously for maximum speed
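The 55-minute cache window can be modeled as a simple timestamp check (a minimal sketch, assuming a dict keyed by profile name; the real application caches boto3 session objects rather than the placeholder string used here):

```python
import time

SESSION_TTL = 55 * 60  # 55 minutes, leaving margin before the 1-hour STS expiry
session_cache = {}  # {profile: {"session": ..., "timestamp": ...}}

def cache_session(profile, session):
    session_cache[profile] = {"session": session, "timestamp": time.time()}

def get_cached_session(profile):
    entry = session_cache.get(profile)
    if entry and time.time() - entry["timestamp"] < SESSION_TTL:
        return entry["session"]
    return None  # expired or never cached: re-authentication needed

cache_session("dev", "fake-session-object")
print(get_cached_session("dev"))      # returns the cached object
print(get_cached_session("staging"))  # returns None
```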
## AWS Configuration
### MFA Device Setup
For profiles that require MFA, add your MFA device ARN to `~/.aws/config`:
```ini
[profile nonprod-p1p2-admin]
region = us-west-2
mfa_serial = arn:aws:iam::131340773912:mfa/your-username
```
### Finding Your MFA Device ARN
1. Go to AWS IAM Console
2. Navigate to Users → Your User → Security Credentials
3. Copy the ARN from "Assigned MFA device"
### How MFA Works in the GUI
1. The import page shows all profiles from `~/.aws/config`
2. Select the profiles you want to import
3. Enter MFA codes in the text boxes (one per profile)
4. Click "Start Import" to begin
5. Real-time progress shows authentication and data fetching
6. MFA session is valid for 1 hour - refresh without re-entering codes during this window
## Usage
### Search
1. Type in the search box (minimum 2 characters)
2. Results appear instantly as you type
3. Filter by resource type using the buttons: **All Resources** | **EC2 Instances** | **Security Groups**
4. **Enable Regex**: Check the "Regex" box to use regular expressions
- Example: `^prod-.*-\d+$` finds names starting with "prod-" and ending with numbers
- Example: `(dev|test|qa)` finds names containing dev, test, or qa
- Example: `10\.0\.\d+\.\d+` finds IP addresses in the 10.0.x.x range
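The example patterns behave as described; a quick check with Python's `re` module (case-insensitive search mirrors how the app wires its SQLite REGEXP function; the sample names are made up):

```python
import re

assert re.search(r"^prod-.*-\d+$", "prod-web-42", re.IGNORECASE)
assert not re.search(r"^prod-.*-\d+$", "dev-web-42", re.IGNORECASE)
assert re.search(r"(dev|test|qa)", "my-test-box", re.IGNORECASE)
assert re.search(r"10\.0\.\d+\.\d+", "10.0.3.17")
assert not re.search(r"10\.0\.\d+\.\d+", "192.168.1.1")
print("all patterns behave as documented")
```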
### View Details
**EC2 Instance View:**
- Click on any EC2 instance from search results
- Main card shows EC2 details (Instance ID, IP, State, Account, Tags)
- Nested cards show all attached Security Groups with their details
**Security Group View:**
- Click on any Security Group from search results
- Main card shows SG details (Group ID, Name, Ingress Rules, Wave, Tags)
- Nested cards show all EC2 instances using this Security Group
### View Security Group Rules
When viewing security groups (either attached to an EC2 or directly):
1. Click the **View Rules** button on any security group card
2. A modal opens showing all ingress and egress rules
3. Switch between **Ingress** and **Egress** tabs
4. Use the search box to filter rules by protocol, port, source, or description
5. Rules are displayed in a compact table format with:
- Protocol (TCP, UDP, ICMP, All)
- Port Range
- Source Type (CIDR, Security Group, Prefix List)
- Source (IP range or SG ID)
- Description
### Navigate
- Click **← Back to Search** to return to search results
- Perform a new search at any time
- Click outside the rules modal to close it
### Export to CSV
SGO provides comprehensive CSV export capabilities:
**Search Results Export:**
- Click the **💾 Export** button in the view controls (top right)
- Exports all current search results with filters applied
- Includes: Type, Name, ID, Account, State, IP, Security Groups count, Wave, Git info
**EC2 Instance Details Export:**
- Click the **💾 Export** button in any EC2 detail card
- Exports complete EC2 information including:
- Instance details (ID, name, state, IP, account info)
- All AWS tags
- Attached security groups with their details
**Security Group Details Export:**
- Click the **💾 Export** button in any SG detail card
- Exports complete SG information including:
- Group details (ID, name, wave, rule counts)
- All AWS tags
- Attached EC2 instances with their details
**Security Group Rules Export:**
- Click the **💾 Export** button in the rules modal
- Exports all ingress and egress rules with:
- Rule details (direction, protocol, ports, source)
- Group ID, account ID
- Git file and commit information from tags
All exports include timestamps in filenames and proper CSV escaping.
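The export convention can be sketched with the standard `csv` module, whose default quoting handles commas and quotes inside tag values (an illustration under assumed field names, not the app's actual export code):

```python
import csv
import io
from datetime import datetime

# Timestamped filename, as the exports described above use
filename = f"sgo_export_{datetime.now():%Y%m%d_%H%M%S}.csv"

rows = [
    {"group_id": "sg-0abc", "name": 'web, "prod" tier', "ingress_rules": 4},
]
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["group_id", "name", "ingress_rules"])
writer.writeheader()
writer.writerows(rows)
# Embedded commas and quotes are escaped per RFC 4180 (quotes doubled)
print(filename)
print(buf.getvalue())
```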
## Data Structure
### Security Groups Table
- Account ID & Name
- Group ID & Name
- Tag Name
- Wave Tag
- Git Repo Tag
- Ingress Rule Count
### EC2 Instances Table
- Account ID & Name
- Instance ID
- Tag Name
- State (running, stopped, etc.)
- Private IP Address
- Security Groups (IDs and Names)
- Git Repo Tag
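Given these tables, a typical lookup finds every instance attached to a given security group by matching against the stored ID list (illustrative SQLite against an in-memory stand-in for `data/aws_export.db`; only the columns needed for the example are created):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for data/aws_export.db
conn.execute("""
    CREATE TABLE ec2_instances (
        instance_id TEXT UNIQUE,
        tag_name TEXT,
        security_groups_id_list TEXT
    )
""")
conn.execute(
    "INSERT INTO ec2_instances VALUES (?, ?, ?)",
    ("i-0abc123", "web-1", "sg-111,sg-222"),
)
rows = conn.execute(
    "SELECT instance_id, tag_name FROM ec2_instances "
    "WHERE security_groups_id_list LIKE ?",
    ("%sg-111%",),
).fetchall()
print(rows)  # [('i-0abc123', 'web-1')]
```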
## File Structure
```
sgo/
├── app.py # Flask web application
├── import_from_aws.py # AWS direct import functions
├── import_data.py # CSV to SQLite import (legacy)
├── requirements.txt # Python dependencies
├── Dockerfile # Container image definition
├── docker-compose.yml # Container orchestration (Docker volume)
├── docker-compose.local.yml # Alternative with local directory storage
├── entrypoint.sh # Container entrypoint with PUID/PGID support
├── .dockerignore # Files to exclude from container
├── .env.example # Example environment configuration
├── .gitignore # Git ignore patterns
├── README.md # This file
├── data/ # Local data directory (if using local mode)
│ └── aws_export.db # SQLite database
├── static/
│ ├── css/
│ │ └── style.css # Application styles
│ └── images/
│ └── logo.svg # Application logo
└── templates/
├── import.html # Import/profile selection page
└── index.html # Main explorer interface
```
## Configuration Examples
### Example 1: Basic Setup (Default)
Minimal configuration with Docker volume:
```bash
# Create .env file
cat > .env << EOF
AWS_CONFIG_PATH=$HOME/.aws
PUID=$(id -u)
PGID=$(id -g)
EOF
# Run
docker-compose up --build
# or: podman-compose up --build
```
### Example 2: Local Data Directory
Store database in local directory for easy access:
```bash
# Create .env file
cat > .env << EOF
AWS_CONFIG_PATH=$HOME/.aws
PUID=$(id -u)
PGID=$(id -g)
DATA_PATH=./data
EOF
# Use the local compose file
docker-compose -f docker-compose.local.yml up --build
```
### Example 3: Custom AWS Credentials Location
If your AWS credentials are in a non-standard location:
```bash
# Create .env file
cat > .env << EOF
AWS_CONFIG_PATH=/path/to/custom/.aws
PUID=$(id -u)
PGID=$(id -g)
EOF
docker-compose up --build
```
### Example 4: Debug Mode
Enable detailed logging for troubleshooting:
```bash
# Create .env file
cat > .env << EOF
AWS_CONFIG_PATH=$HOME/.aws
PUID=$(id -u)
PGID=$(id -g)
DEBUG=true
EOF
docker-compose up --build
```
### Example 5: Custom Port
Run on a different port (e.g., 8080):
```bash
# Create .env file
cat > .env << EOF
AWS_CONFIG_PATH=$HOME/.aws
PUID=$(id -u)
PGID=$(id -g)
SGO_PORT=8080
EOF
# Access at http://localhost:8080
docker-compose up --build
```
## Troubleshooting
### Container Issues
**Cannot connect to Docker daemon:**
```bash
# Start Docker service
sudo systemctl start docker # Linux
# or open Docker Desktop # macOS/Windows
```
**Port 5000 already in use:**
```bash
# Change port in docker-compose.yml:
ports:
  - "8080:5000"  # Use port 8080 instead
```
**AWS credentials not found in container:**
- Ensure you have created a `.env` file with `AWS_CONFIG_PATH` set
- Verify `~/.aws/config` and `~/.aws/credentials` exist on your host machine
- Check file permissions on `~/.aws` directory
- Example:
```bash
cat > .env << EOF
AWS_CONFIG_PATH=$HOME/.aws
PUID=$(id -u)
PGID=$(id -g)
EOF
```
**Permission denied errors:**
```bash
# Set PUID/PGID to match your user
cat > .env << EOF
PUID=$(id -u)
PGID=$(id -g)
EOF
# Rebuild container
docker-compose down && docker-compose up --build
```
**Database locked or permission errors:**
```bash
# If using local directory mode, ensure proper ownership
sudo chown -R $(id -u):$(id -g) ./data
# Or use Docker volume mode (default) which handles permissions automatically
```
### AWS Configuration Issues
**No AWS profiles found:**
Ensure you have `~/.aws/config` file with profiles configured:
```ini
[profile nonprod-p1p2-admin]
region = us-west-2
mfa_serial = arn:aws:iam::123456789012:mfa/username
```
**MFA authentication fails:**
- Verify your MFA code is current (codes expire every 30 seconds)
- Check that `mfa_serial` is configured in `~/.aws/config`
- Ensure your AWS credentials in `~/.aws/credentials` are valid
**Import fails:**
- Check network connectivity to AWS
- Verify you have permissions to describe EC2 instances and security groups
- Look at the progress log for specific error messages
### Platform-Specific Notes
**Windows:**
- Use Git Bash, WSL2, or PowerShell for running scripts
- Docker Desktop must be running
- WSL2 backend recommended for better performance
**macOS:**
- Docker Desktop must be running
- Ensure Docker has access to your home directory in Docker Desktop settings
**Linux:**
- You may need to run Docker commands with `sudo` or add your user to the `docker` group
- Podman works without root by default
## Quick Reference
### Essential Commands
```bash
# Start
docker-compose up --build
# or: podman-compose up --build
# Stop
docker-compose down
# or: Ctrl+C
# View logs
docker-compose logs -f
# Rebuild after changes
docker-compose up --build
# Remove everything including data
docker-compose down -v
```
### Quick .env Setup
```bash
# Minimal configuration for most users
cat > .env << EOF
AWS_CONFIG_PATH=$HOME/.aws
PUID=$(id -u)
PGID=$(id -g)
EOF
# Then run
docker-compose up --build
```
### Data Location
- **Docker volume (default)**: Managed by Docker, survives rebuilds
```bash
# Inspect volume
docker volume inspect sgo-data
# Backup volume
docker run --rm -v sgo-data:/data -v $(pwd):/backup alpine tar czf /backup/sgo-backup.tar.gz -C /data .
```
- **Local directory**: `./data/aws_export.db`
```bash
# Use local mode
docker-compose -f docker-compose.local.yml up --build
```
## License
This project is dual-licensed:
- **FREE** for individual, personal, educational, and non-commercial use
- **PAID LICENSE REQUIRED** for commercial use by businesses and organizations
You may NOT modify this software for the purpose of selling or commercially distributing it.
See the [LICENSE](LICENSE) file for full details.
For commercial licensing inquiries, please open an issue in this repository.

869
app.py Executable file

@@ -0,0 +1,869 @@
#!/usr/bin/env python3
"""
Flask web application for exploring AWS EC2 and Security Group exports
"""
from flask import Flask, render_template, request, jsonify, Response, stream_with_context
import sqlite3
import os
import re
import atexit
import signal
import sys
import boto3
import configparser
from pathlib import Path
import json
import time
from concurrent.futures import ThreadPoolExecutor, as_completed
import threading
import queue
app = Flask(__name__)
DB_PATH = os.path.join(os.path.dirname(__file__), 'data', 'aws_export.db')
data_imported = False
# Cache for AWS session credentials (valid for 1 hour)
session_cache = {} # {profile: {'credentials': {...}, 'region': ..., 'timestamp': ...}}
def regexp(pattern, value):
    """Custom REGEXP function for SQLite"""
    if value is None:
        return False
    try:
        return re.search(pattern, value, re.IGNORECASE) is not None
    except re.error:
        return False
def get_db():
    """Get database connection"""
    # Ensure data directory exists
    os.makedirs(os.path.dirname(DB_PATH), exist_ok=True)
    conn = sqlite3.connect(DB_PATH)
    conn.row_factory = sqlite3.Row
    conn.create_function("REGEXP", 2, regexp)
    # Create tables if they don't exist
    cursor = conn.cursor()
    cursor.execute("""
        CREATE TABLE IF NOT EXISTS security_groups (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            account_id TEXT,
            account_name TEXT,
            group_id TEXT UNIQUE,
            group_name TEXT,
            tag_name TEXT,
            tag_wave TEXT,
            tag_git_repo TEXT,
            tag_git_org TEXT,
            tag_git_file TEXT,
            tags_json TEXT,
            ingress_rule_count INTEGER,
            egress_rule_count INTEGER
        )
    """)
    cursor.execute("""
        CREATE TABLE IF NOT EXISTS ec2_instances (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            account_id TEXT,
            account_name TEXT,
            tag_name TEXT,
            instance_id TEXT UNIQUE,
            state TEXT,
            private_ip_address TEXT,
            security_groups_id_list TEXT,
            security_groups_name_list TEXT,
            tag_git_repo TEXT,
            tag_git_org TEXT,
            tag_git_file TEXT,
            tags_json TEXT
        )
    """)
    cursor.execute("""
        CREATE TABLE IF NOT EXISTS sg_rules (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            group_id TEXT,
            direction TEXT,
            protocol TEXT,
            port_range TEXT,
            source_type TEXT,
            source TEXT,
            description TEXT
        )
    """)
    cursor.execute("""
        CREATE TABLE IF NOT EXISTS refresh_timestamps (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            account_id TEXT,
            account_name TEXT,
            last_refresh TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
            UNIQUE(account_id)
        )
    """)
    conn.commit()
    return conn
@app.route('/')
def index():
    """Import page - always shown first"""
    global data_imported
    # If data already imported, redirect to explorer
    if data_imported and os.path.exists(DB_PATH):
        return render_template('index.html')
    return render_template('import.html')

@app.route('/explorer')
def explorer():
    """Main explorer interface"""
    # Always show explorer, will display empty state if no data
    return render_template('index.html')
@app.route('/api/profiles')
def get_profiles():
    """Get list of AWS profiles"""
    try:
        config_path = Path.home() / '.aws' / 'config'
        if not config_path.exists():
            return jsonify({'error': f'AWS config file not found at {config_path}'}), 404
        config = configparser.ConfigParser()
        config.read(config_path)
        profiles = []
        for section in config.sections():
            if section.startswith('profile '):
                profile_name = section.replace('profile ', '')
                profiles.append(profile_name)
            elif section == 'default':
                profiles.append('default')
        # Sort profiles alphabetically, but keep 'default' at the top
        profiles.sort(key=lambda x: ('0' if x == 'default' else '1' + x.lower()))
        return jsonify({'profiles': profiles})
    except Exception as e:
        return jsonify({'error': str(e)}), 500
def send_progress(message, status='info'):
    """Send progress update via Server-Sent Events"""
    return f"data: {json.dumps({'message': message, 'status': status})}\n\n"
def get_account_info_inline(session):
    """Get AWS account ID and alias (inline version)"""
    sts = session.client('sts')
    identity = sts.get_caller_identity()
    account_id = identity['Account']
    try:
        iam = session.client('iam')
        aliases = iam.list_account_aliases()
        account_name = aliases['AccountAliases'][0] if aliases['AccountAliases'] else account_id
    except Exception:
        # Fall back to the bare account ID if the alias lookup is not permitted
        account_name = account_id
    return account_id, account_name
def import_profile(profile, mfa_code, progress_queue):
    """Import data from a single AWS profile (runs in thread)"""
    try:
        from import_from_aws import fetch_security_groups, fetch_ec2_instances
        progress_queue.put(('info', f"[{profile}] Starting authentication..."))
        # Read AWS config to get MFA serial
        config_path = Path.home() / '.aws' / 'config'
        config = configparser.ConfigParser()
        config.read(config_path)
        section_name = f'profile {profile}' if profile != 'default' else 'default'
        mfa_serial = None
        region = None
        source_profile = None
        role_arn = None
        if section_name in config:
            mfa_serial = config[section_name].get('mfa_serial')
            region = config[section_name].get('region', 'us-east-1')
            source_profile = config[section_name].get('source_profile')
            role_arn = config[section_name].get('role_arn')
        # Debug output
        progress_queue.put(('info', f"[{profile}] Config: region={region}, mfa_serial={bool(mfa_serial)}, source_profile={source_profile}, role_arn={role_arn}"))
        # Read base credentials from ~/.aws/credentials
        creds_path = Path.home() / '.aws' / 'credentials'
        creds_config = configparser.ConfigParser()
        creds_config.read(creds_path)
        # Determine which credentials section to use
        # Priority: source_profile > profile name > default
        if source_profile and source_profile in creds_config:
            cred_section = source_profile
        elif profile in creds_config:
            cred_section = profile
        elif 'default' in creds_config:
            cred_section = 'default'
        else:
            progress_queue.put(('error', f"✗ [{profile}] Credentials not found in ~/.aws/credentials"))
            return None
        if cred_section not in creds_config:
            progress_queue.put(('error', f"✗ [{profile}] Credentials not found in ~/.aws/credentials"))
            return None
        base_access_key = creds_config[cred_section].get('aws_access_key_id')
        base_secret_key = creds_config[cred_section].get('aws_secret_access_key')
        if not base_access_key or not base_secret_key:
            progress_queue.put(('error', f"✗ [{profile}] Invalid credentials in ~/.aws/credentials"))
            return None
        # If MFA is configured and we have a code, use it
        if mfa_serial and mfa_code:
            progress_queue.put(('info', f"[{profile}] Using MFA authentication..."))
            # Create STS client with base credentials (no session)
            sts = boto3.client(
                'sts',
                aws_access_key_id=base_access_key,
                aws_secret_access_key=base_secret_key,
                region_name=region or 'us-east-1'
            )
            try:
                # Get temporary credentials with MFA
                response = sts.get_session_token(
                    DurationSeconds=3600,
                    SerialNumber=mfa_serial,
                    TokenCode=mfa_code
                )
                credentials = response['Credentials']
                progress_queue.put(('success', f"✓ [{profile}] MFA authentication successful"))
                # If there's a role to assume, assume it
                if role_arn:
                    progress_queue.put(('info', f"[{profile}] Assuming role {role_arn}..."))
                    # Create STS client with MFA session credentials
                    sts_with_mfa = boto3.client(
                        'sts',
                        aws_access_key_id=credentials['AccessKeyId'],
                        aws_secret_access_key=credentials['SecretAccessKey'],
                        aws_session_token=credentials['SessionToken'],
                        region_name=region or 'us-east-1'
                    )
                    try:
                        # Assume the role
                        role_response = sts_with_mfa.assume_role(
                            RoleArn=role_arn,
                            RoleSessionName=f"{profile}-session"
                        )
                        role_credentials = role_response['Credentials']
                        session = boto3.Session(
                            aws_access_key_id=role_credentials['AccessKeyId'],
                            aws_secret_access_key=role_credentials['SecretAccessKey'],
                            aws_session_token=role_credentials['SessionToken'],
                            region_name=region or 'us-east-1'
                        )
                        progress_queue.put(('success', f"✓ [{profile}] Role assumption successful"))
                    except Exception as role_error:
                        progress_queue.put(('error', f"✗ [{profile}] Role assumption failed - {str(role_error)}"))
                        return None
                else:
                    # No role to assume, use MFA session directly
                    session = boto3.Session(
                        aws_access_key_id=credentials['AccessKeyId'],
                        aws_secret_access_key=credentials['SecretAccessKey'],
                        aws_session_token=credentials['SessionToken'],
                        region_name=region or 'us-east-1'
                    )
            except Exception as mfa_error:
                progress_queue.put(('error', f"✗ [{profile}] MFA authentication failed - {str(mfa_error)}"))
                return None
        else:
            # No MFA configured or no code provided
            if mfa_serial and not mfa_code:
                progress_queue.put(('error', f"✗ [{profile}] MFA code required but not provided"))
                return None
            progress_queue.put(('info', f"[{profile}] Using direct authentication (no MFA)..."))
            # If there's a role to assume (without MFA)
            if role_arn:
                progress_queue.put(('info', f"[{profile}] Assuming role {role_arn}..."))
                sts = boto3.client(
                    'sts',
                    aws_access_key_id=base_access_key,
                    aws_secret_access_key=base_secret_key,
                    region_name=region or 'us-east-1'
                )
                try:
                    role_response = sts.assume_role(
                        RoleArn=role_arn,
                        RoleSessionName=f"{profile}-session"
                    )
                    role_credentials = role_response['Credentials']
                    session = boto3.Session(
                        aws_access_key_id=role_credentials['AccessKeyId'],
                        aws_secret_access_key=role_credentials['SecretAccessKey'],
                        aws_session_token=role_credentials['SessionToken'],
                        region_name=region or 'us-east-1'
                    )
                    progress_queue.put(('success', f"✓ [{profile}] Role assumption successful"))
                except Exception as role_error:
                    progress_queue.put(('error', f"✗ [{profile}] Role assumption failed - {str(role_error)}"))
                    return None
            else:
                # No role, use base credentials directly
                session = boto3.Session(
                    aws_access_key_id=base_access_key,
                    aws_secret_access_key=base_secret_key,
                    region_name=region or 'us-east-1'
                )
        # Verify it works
        try:
            sts = session.client('sts')
            sts.get_caller_identity()
            progress_queue.put(('success', f"✓ [{profile}] Authentication successful"))
        except Exception as e:
            progress_queue.put(('error', f"✗ [{profile}] Authentication failed - {str(e)}"))
            return None
        # Get account info
        account_id, account_name = get_account_info_inline(session)
        progress_queue.put(('info', f" [{profile}] Account: {account_name} ({account_id})"))
        # Cache the session credentials for reuse (valid for 1 hour)
        global session_cache
        session_cache[profile] = {
            'session': session,
            'region': region,
            'timestamp': time.time(),
            'account_id': account_id,
            'account_name': account_name
        }
        # Fetch data
        progress_queue.put(('info', f" [{profile}] Fetching security groups..."))
        security_groups, sg_rules = fetch_security_groups(session, account_id, account_name)
        progress_queue.put(('success', f" ✓ [{profile}] Found {len(security_groups)} security groups with {len(sg_rules)} rules"))
        progress_queue.put(('info', f" [{profile}] Fetching EC2 instances..."))
        ec2_instances = fetch_ec2_instances(session, account_id, account_name)
        progress_queue.put(('success', f" ✓ [{profile}] Found {len(ec2_instances)} EC2 instances"))
        return {
            'profile': profile,
            'security_groups': security_groups,
            'ec2_instances': ec2_instances,
            'sg_rules': sg_rules
        }
    except Exception as e:
        progress_queue.put(('error', f"✗ [{profile}] Error - {str(e)}"))
        return None
@app.route('/api/import', methods=['POST'])
def import_data():
"""Import data from AWS with parallel execution and streaming progress"""
global data_imported
data = request.json
selected_profiles = data.get('profiles', [])
mfa_codes = data.get('mfa_codes', {})
def generate():
try:
from import_from_aws import import_to_database
yield send_progress(f"Starting parallel import from {len(selected_profiles)} profile(s)...", 'info')
# Create a queue for progress messages from threads
progress_queue = queue.Queue()
# Submit all profiles for parallel execution
with ThreadPoolExecutor(max_workers=max(1, len(selected_profiles))) as executor:
# Submit all import tasks
futures = {}
for profile in selected_profiles:
mfa_code = mfa_codes.get(profile, '')
future = executor.submit(import_profile, profile, mfa_code, progress_queue)
futures[future] = profile
# Process results as they complete and drain progress queue
all_security_groups = []
all_ec2_instances = []
all_sg_rules = []
completed = 0
while completed < len(selected_profiles):
# Check for progress messages
while not progress_queue.empty():
status, message = progress_queue.get()
yield send_progress(message, status)
# Collect results from futures that have finished
# (polling done() avoids as_completed(timeout=...), which raises
# TimeoutError whenever no future completes within the timeout)
for future in [f for f in futures if f.done()]:
result = future.result()
completed += 1
if result:
all_security_groups.extend(result['security_groups'])
all_ec2_instances.extend(result['ec2_instances'])
all_sg_rules.extend(result['sg_rules'])
del futures[future]
time.sleep(0.1)  # Small delay to prevent busy waiting
# Drain any remaining progress messages
while not progress_queue.empty():
status, message = progress_queue.get()
yield send_progress(message, status)
# Import to database
if all_security_groups or all_ec2_instances:
yield send_progress("Importing to database...", 'info')
import_to_database(DB_PATH, all_security_groups, all_ec2_instances, all_sg_rules, append=False)
yield send_progress(f"✓ Import complete!", 'success')
yield send_progress(f" Total Security Groups: {len(all_security_groups)}", 'success')
yield send_progress(f" Total EC2 Instances: {len(all_ec2_instances)}", 'success')
yield send_progress(f" Total SG Rules: {len(all_sg_rules)}", 'success')
data_imported = True
yield send_progress("Redirecting to explorer...", 'complete')
else:
yield send_progress("✗ No data imported", 'error')
except Exception as e:
yield send_progress(f"✗ Import failed: {str(e)}", 'error')
return Response(stream_with_context(generate()), mimetype='text/event-stream')
@app.route('/api/import-profile', methods=['POST'])
def import_single_profile():
"""Import data from a single AWS profile with streaming progress"""
global data_imported
data = request.json
profile = data.get('profile')
mfa_code = data.get('mfa_code', '')
def generate():
try:
from import_from_aws import import_to_database
yield send_progress(f"Starting import from {profile}...", 'info')
# Create a queue for progress messages
progress_queue = queue.Queue()
# Import the profile
result = import_profile(profile, mfa_code, progress_queue)
# Drain progress messages
while not progress_queue.empty():
status, message = progress_queue.get()
yield send_progress(message, status)
# Import to database
if result:
yield send_progress("Importing to database...", 'info')
import_to_database(
DB_PATH,
result['security_groups'],
result['ec2_instances'],
result['sg_rules'],
append=True # Append mode for individual imports
)
yield send_progress(f"✓ Import complete for {profile}!", 'success')
yield send_progress(f" Security Groups: {len(result['security_groups'])}", 'success')
yield send_progress(f" EC2 Instances: {len(result['ec2_instances'])}", 'success')
yield send_progress(f" SG Rules: {len(result['sg_rules'])}", 'success')
data_imported = True
yield send_progress("Done", 'complete')
else:
yield send_progress(f"✗ Import failed for {profile}", 'error')
except Exception as e:
yield send_progress(f"✗ Import failed: {str(e)}", 'error')
return Response(stream_with_context(generate()), mimetype='text/event-stream')
@app.route('/api/refresh-cached', methods=['POST'])
def refresh_cached():
"""Refresh data using cached AWS sessions (if still valid)"""
global session_cache, data_imported
if not session_cache:
return jsonify({'error': 'No cached sessions', 'redirect': True})
def generate():
try:
from import_from_aws import fetch_security_groups, fetch_ec2_instances, import_to_database
# Check if cached sessions are still valid (< 1 hour old)
current_time = time.time()
valid_profiles = []
for profile, cache_data in session_cache.items():
age_minutes = (current_time - cache_data['timestamp']) / 60
if age_minutes < 55: # Use 55 minutes to be safe
valid_profiles.append(profile)
else:
yield send_progress(f"[{profile}] Session expired ({age_minutes:.1f} min old)", 'error')
if not valid_profiles:
yield send_progress("All sessions expired. Please re-authenticate.", 'error')
yield send_progress("REDIRECT", 'redirect')
return
yield send_progress(f"Refreshing data from {len(valid_profiles)} cached session(s)...", 'info')
all_security_groups = []
all_ec2_instances = []
all_sg_rules = []
for profile in valid_profiles:
cache_data = session_cache[profile]
session = cache_data['session']
account_id = cache_data['account_id']
account_name = cache_data['account_name']
try:
yield send_progress(f"[{profile}] Fetching security groups...", 'info')
security_groups, sg_rules = fetch_security_groups(session, account_id, account_name)
yield send_progress(f"✓ [{profile}] Found {len(security_groups)} security groups", 'success')
yield send_progress(f"[{profile}] Fetching EC2 instances...", 'info')
ec2_instances = fetch_ec2_instances(session, account_id, account_name)
yield send_progress(f"✓ [{profile}] Found {len(ec2_instances)} EC2 instances", 'success')
all_security_groups.extend(security_groups)
all_ec2_instances.extend(ec2_instances)
all_sg_rules.extend(sg_rules)
except Exception as e:
error_msg = str(e)
if 'ExpiredToken' in error_msg or 'InvalidToken' in error_msg:
yield send_progress(f"✗ [{profile}] Session expired", 'error')
yield send_progress("REDIRECT", 'redirect')
return
else:
yield send_progress(f"✗ [{profile}] Error: {error_msg}", 'error')
# Import to database
if all_security_groups or all_ec2_instances:
yield send_progress("Updating database...", 'info')
import_to_database(DB_PATH, all_security_groups, all_ec2_instances, all_sg_rules, append=False)
yield send_progress(f"✓ Refresh complete!", 'success')
yield send_progress(f" Total Security Groups: {len(all_security_groups)}", 'success')
yield send_progress(f" Total EC2 Instances: {len(all_ec2_instances)}", 'success')
data_imported = True
yield send_progress("COMPLETE", 'complete')
else:
yield send_progress("✗ No data refreshed", 'error')
except Exception as e:
yield send_progress(f"✗ Refresh failed: {str(e)}", 'error')
return Response(stream_with_context(generate()), mimetype='text/event-stream')
@app.route('/api/refresh', methods=['POST'])
def refresh_data():
"""Refresh data from AWS - reuses existing MFA session if valid"""
return import_data()
@app.route('/api/tags')
def get_tags():
"""Get all available tag values for filtering"""
conn = get_db()
# Get distinct tag_wave values
waves = conn.execute("""
SELECT DISTINCT tag_wave FROM security_groups
WHERE tag_wave IS NOT NULL AND tag_wave != ''
ORDER BY tag_wave
""").fetchall()
# Get distinct tag_git_repo values from both tables
repos = conn.execute("""
SELECT DISTINCT tag_git_repo FROM security_groups
WHERE tag_git_repo IS NOT NULL AND tag_git_repo != ''
UNION
SELECT DISTINCT tag_git_repo FROM ec2_instances
WHERE tag_git_repo IS NOT NULL AND tag_git_repo != ''
ORDER BY tag_git_repo
""").fetchall()
conn.close()
return jsonify({
'waves': [w['tag_wave'] for w in waves],
'repos': [r['tag_git_repo'] for r in repos]
})
@app.route('/api/search')
def search():
"""Search for EC2 instances or security groups"""
query = request.args.get('q', '').strip()
search_type = request.args.get('type', 'all')
use_regex = request.args.get('regex', 'false').lower() == 'true'
filter_wave = request.args.get('wave', '').strip()
filter_repo = request.args.get('repo', '').strip()
conn = get_db()
results = []
try:
if search_type in ['all', 'sg']:
# Build WHERE clause with tag filters
where_clauses = []
params = []
if query:
if use_regex:
try:
re.compile(query)
except re.error as e:
conn.close()
return jsonify({'error': f'Invalid regex pattern: {str(e)}', 'results': []})
where_clauses.append("(group_id REGEXP ? OR group_name REGEXP ? OR tag_name REGEXP ?)")
params.extend([query, query, query])
else:
where_clauses.append("(group_id LIKE ? OR group_name LIKE ? OR tag_name LIKE ?)")
params.extend([f'%{query}%', f'%{query}%', f'%{query}%'])
if filter_wave:
where_clauses.append("tag_wave = ?")
params.append(filter_wave)
if filter_repo:
where_clauses.append("tag_git_repo = ?")
params.append(filter_repo)
where_sql = " AND ".join(where_clauses) if where_clauses else "1=1"
sg_results = conn.execute(f"""
SELECT 'sg' as type, group_id as id, group_name as name, tag_name,
account_name, account_id, tag_wave, tag_git_repo, tag_git_org, tag_git_file,
ingress_rule_count
FROM security_groups
WHERE {where_sql}
ORDER BY tag_name, group_name
LIMIT 500
""", params).fetchall()
for row in sg_results:
results.append(dict(row))
if search_type in ['all', 'ec2']:
# Build WHERE clause with tag filters
where_clauses = []
params = []
if query:
if use_regex:
where_clauses.append("(instance_id REGEXP ? OR tag_name REGEXP ? OR private_ip_address REGEXP ?)")
params.extend([query, query, query])
else:
where_clauses.append("(instance_id LIKE ? OR tag_name LIKE ? OR private_ip_address LIKE ?)")
params.extend([f'%{query}%', f'%{query}%', f'%{query}%'])
if filter_repo:
where_clauses.append("tag_git_repo = ?")
params.append(filter_repo)
where_sql = " AND ".join(where_clauses) if where_clauses else "1=1"
ec2_results = conn.execute(f"""
SELECT 'ec2' as type, instance_id as id, tag_name as name, tag_name,
account_name, account_id, state, private_ip_address,
security_groups_id_list, security_groups_name_list, tag_git_repo,
tag_git_org, tag_git_file
FROM ec2_instances
WHERE {where_sql}
ORDER BY tag_name
LIMIT 500
""", params).fetchall()
for row in ec2_results:
results.append(dict(row))
except Exception as e:
conn.close()
return jsonify({'error': f'Search error: {str(e)}', 'results': []})
conn.close()
return jsonify({'results': results})
@app.route('/api/ec2/<instance_id>')
def get_ec2_details(instance_id):
"""Get detailed information about an EC2 instance and its security groups"""
conn = get_db()
ec2 = conn.execute("""
SELECT * FROM ec2_instances WHERE instance_id = ?
""", (instance_id,)).fetchone()
if not ec2:
conn.close()
return jsonify({'error': 'EC2 instance not found'}), 404
ec2_dict = dict(ec2)
sg_ids = ec2_dict['security_groups_id_list'].split(';') if ec2_dict['security_groups_id_list'] else []
security_groups = []
for sg_id in sg_ids:
if sg_id:
sg = conn.execute("""
SELECT * FROM security_groups WHERE group_id = ?
""", (sg_id,)).fetchone()
if sg:
security_groups.append(dict(sg))
conn.close()
return jsonify({
'ec2': ec2_dict,
'security_groups': security_groups
})
@app.route('/api/sg/<group_id>')
def get_sg_details(group_id):
"""Get detailed information about a security group and attached EC2 instances"""
conn = get_db()
sg = conn.execute("""
SELECT * FROM security_groups WHERE group_id = ?
""", (group_id,)).fetchone()
if not sg:
conn.close()
return jsonify({'error': 'Security group not found'}), 404
sg_dict = dict(sg)
ec2_instances = conn.execute("""
SELECT * FROM ec2_instances
WHERE security_groups_id_list LIKE ?
""", (f'%{group_id}%',)).fetchall()
ec2_list = [dict(row) for row in ec2_instances]
conn.close()
return jsonify({
'security_group': sg_dict,
'ec2_instances': ec2_list
})
@app.route('/api/sg/<group_id>/rules')
def get_sg_rules(group_id):
"""Get all rules for a security group"""
conn = get_db()
ingress_rules = conn.execute("""
SELECT * FROM sg_rules
WHERE group_id = ? AND direction = 'ingress'
ORDER BY protocol, port_range, source
""", (group_id,)).fetchall()
egress_rules = conn.execute("""
SELECT * FROM sg_rules
WHERE group_id = ? AND direction = 'egress'
ORDER BY protocol, port_range, source
""", (group_id,)).fetchall()
conn.close()
return jsonify({
'ingress': [dict(row) for row in ingress_rules],
'egress': [dict(row) for row in egress_rules]
})
@app.route('/api/stats')
def get_stats():
"""Get database statistics"""
conn = get_db()
sg_count = conn.execute("SELECT COUNT(*) as count FROM security_groups").fetchone()['count']
ec2_count = conn.execute("SELECT COUNT(*) as count FROM ec2_instances").fetchone()['count']
accounts = conn.execute("""
SELECT DISTINCT account_name FROM security_groups
UNION
SELECT DISTINCT account_name FROM ec2_instances
ORDER BY account_name
""").fetchall()
# Get refresh timestamps
refresh_times = conn.execute("""
SELECT account_name, last_refresh
FROM refresh_timestamps
ORDER BY last_refresh DESC
""").fetchall()
conn.close()
return jsonify({
'security_groups': sg_count,
'ec2_instances': ec2_count,
'accounts': [a['account_name'] for a in accounts],
'refresh_timestamps': [{'account': r['account_name'], 'timestamp': r['last_refresh']} for r in refresh_times]
})
if __name__ == '__main__':
# Get debug mode from environment variable
debug_mode = os.getenv('DEBUG', 'false').lower() in ('true', '1', 'yes')
print("\n" + "="*60)
print("🔭 SGO: Security Groups (and Instances) Observatory")
print("="*60)
print(f"\n Database location: {DB_PATH}")
print(" Database is persistent - data will be preserved between runs")
print(" Access the application at: http://localhost:5000")
print(f" Debug mode: {'enabled' if debug_mode else 'disabled'}")
print("\n" + "="*60 + "\n")
app.run(host='0.0.0.0', port=5000, debug=debug_mode)
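For reference, the import endpoints above respond with a `text/event-stream` body. The sketch below parses `data:`-prefixed SSE lines into Python dicts; the `message`/`status` payload shape is an assumption, since `send_progress()` is defined earlier in this file and its exact output is not shown in this section.

```python
import json

def parse_sse_lines(lines):
    """Parse 'data: <json>' lines from an SSE stream into dicts.

    The {"message": ..., "status": ...} payload shape is assumed,
    not taken verbatim from send_progress().
    """
    events = []
    for line in lines:
        line = line.strip()
        if line.startswith('data:'):
            events.append(json.loads(line[len('data:'):].strip()))
    return events

sample = [
    'data: {"message": "Starting parallel import from 2 profile(s)...", "status": "info"}',
    '',  # SSE events are separated by blank lines
    'data: {"message": "Redirecting to explorer...", "status": "complete"}',
]
events = parse_sse_lines(sample)
print(len(events), events[-1]['status'])  # → 2 complete
```

A real client would iterate over the response with `requests.post(..., stream=True).iter_lines()` and feed each decoded line through the same parser.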

28
docker-compose.local.yml Normal file

@ -0,0 +1,28 @@
version: '3.8'
# Alternative compose file using local directory for data storage
# Usage: docker-compose -f docker-compose.local.yml up --build
# or: podman-compose -f docker-compose.local.yml up --build
services:
sgo:
build: .
container_name: sgo
ports:
- "${SGO_PORT:-5000}:5000"
volumes:
# AWS credentials - mounted to temp location, copied by entrypoint
# IMPORTANT: You must set AWS_CONFIG_PATH in .env file
- ${AWS_CONFIG_PATH}:/tmp/aws-host:ro,z
# Database storage - uses local directory
- ${DATA_PATH:-./data}:/app/data
environment:
# User/Group IDs - set to match your host user for proper permissions
- PUID=${PUID:-1000}
- PGID=${PGID:-1000}
# Debug mode - set to true for Flask debug logging
- DEBUG=${DEBUG:-false}
# Flask environment
- FLASK_ENV=${FLASK_ENV:-production}
- PYTHONUNBUFFERED=1
restart: unless-stopped

32
docker-compose.yml Normal file

@ -0,0 +1,32 @@
version: '3.8'
services:
sgo:
build: .
container_name: sgo
ports:
- "${SGO_PORT:-5000}:5000"
volumes:
# AWS credentials - mounted to temp location, copied by entrypoint
# IMPORTANT: You must set AWS_CONFIG_PATH in .env file
# Example: AWS_CONFIG_PATH=/home/username/.aws
- ${AWS_CONFIG_PATH}:/tmp/aws-host:ro,z
# Database storage - uses Docker volume by default
# To use local directory instead, comment the volume line and uncomment the bind mount
- sgo-data:/app/data
# - ${DATA_PATH:-./data}:/app/data
environment:
# User/Group IDs - set to match your host user for proper permissions
- PUID=${PUID:-1000}
- PGID=${PGID:-1000}
# Debug mode - set to true for Flask debug logging
- DEBUG=${DEBUG:-false}
# Flask environment
- FLASK_ENV=${FLASK_ENV:-production}
- PYTHONUNBUFFERED=1
restart: unless-stopped
volumes:
# Named volume for persistent database storage
# Data persists across container restarts and rebuilds
sgo-data:

39
entrypoint.sh Executable file

@ -0,0 +1,39 @@
#!/bin/bash
set -e
# Default PUID/PGID if not set
PUID=${PUID:-1000}
PGID=${PGID:-1000}
# Create group if it doesn't exist
if ! getent group sgo >/dev/null 2>&1; then
groupadd -g ${PGID} sgo
fi
# Create or modify user
if ! id -u sgo >/dev/null 2>&1; then
useradd -u ${PUID} -g ${PGID} -d /home/sgo -m -s /bin/bash sgo
else
# Update existing user
usermod -u ${PUID} sgo 2>/dev/null || true
groupmod -g ${PGID} sgo 2>/dev/null || true
fi
# Copy AWS credentials from mounted location to user directory
# This ensures proper permissions regardless of host UID/GID
if [ -d "/tmp/aws-host" ]; then
mkdir -p /home/sgo/.aws
cp -r /tmp/aws-host/* /home/sgo/.aws/ 2>/dev/null || true
chmod 700 /home/sgo/.aws
chmod 600 /home/sgo/.aws/* 2>/dev/null || true
chown -R sgo:sgo /home/sgo/.aws
fi
# Ensure proper ownership of app files and data directory
chown -R sgo:sgo /app
# Ensure home directory ownership
chown sgo:sgo /home/sgo 2>/dev/null || true
# Execute the command as the sgo user
exec gosu sgo "$@"
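The `${PUID:-1000}` expansions above fall back to `1000` only when the variable is unset or empty; a quick standalone illustration of that shell behavior:

```shell
#!/bin/bash
# ${VAR:-default} keeps an existing value and substitutes only when unset/empty
unset PUID
echo "${PUID:-1000}"   # prints 1000 (unset, so the default applies)
PGID=2000
echo "${PGID:-1000}"   # prints 2000 (already set, the default is ignored)
```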

548
import_from_aws.py Executable file

@ -0,0 +1,548 @@
#!/usr/bin/env python3
"""
Import AWS EC2 and Security Group data directly from AWS accounts using boto3
Supports MFA/OTP authentication
"""
import boto3
import sqlite3
import os
import sys
import configparser
from pathlib import Path
from getpass import getpass
def get_aws_profiles():
"""Read available AWS profiles from ~/.aws/config"""
config_path = Path.home() / '.aws' / 'config'
if not config_path.exists():
print(f"Error: AWS config file not found at {config_path}")
return []
config = configparser.ConfigParser()
config.read(config_path)
profiles = []
for section in config.sections():
if section.startswith('profile '):
profile_name = section.replace('profile ', '')
profiles.append(profile_name)
elif section == 'default':
profiles.append('default')
# Sort profiles alphabetically, but keep 'default' at the top
profiles.sort(key=lambda x: (x != 'default', x.lower()))
return profiles
def get_session_with_mfa(profile_name):
"""
Create a boto3 session with MFA authentication
"""
print(f"\nAuthenticating with profile: {profile_name}")
# Create initial session
session = boto3.Session(profile_name=profile_name)
sts = session.client('sts')
try:
# Try to get caller identity (will fail if MFA is required)
identity = sts.get_caller_identity()
print(f"✓ Authenticated as: {identity['Arn']}")
return session
except Exception as e:
# Check if MFA is required
if 'MultiFactorAuthentication' in str(e) or 'MFA' in str(e):
print("MFA/OTP required for this profile")
# Get MFA device ARN from config or prompt
config_path = Path.home() / '.aws' / 'config'
config = configparser.ConfigParser()
config.read(config_path)
section_name = f'profile {profile_name}' if profile_name != 'default' else 'default'
mfa_serial = None
if section_name in config:
mfa_serial = config[section_name].get('mfa_serial')
if not mfa_serial:
print("\nMFA device ARN not found in config.")
print("Enter MFA device ARN (e.g., arn:aws:iam::123456789012:mfa/username):")
mfa_serial = input("MFA ARN: ").strip()
else:
print(f"Using MFA device: {mfa_serial}")
# Get OTP token
token_code = getpass("Enter MFA token code: ")
# Get temporary credentials
try:
response = sts.get_session_token(
DurationSeconds=3600, # 1 hour
SerialNumber=mfa_serial,
TokenCode=token_code
)
credentials = response['Credentials']
# Create new session with temporary credentials
session = boto3.Session(
aws_access_key_id=credentials['AccessKeyId'],
aws_secret_access_key=credentials['SecretAccessKey'],
aws_session_token=credentials['SessionToken']
)
print("✓ MFA authentication successful")
return session
except Exception as mfa_error:
print(f"Error: MFA authentication failed: {mfa_error}")
return None
else:
print(f"Error: Authentication failed: {e}")
return None
def get_account_info(session):
"""Get AWS account ID and alias"""
sts = session.client('sts')
identity = sts.get_caller_identity()
account_id = identity['Account']
# Try to get account alias
try:
iam = session.client('iam')
aliases = iam.list_account_aliases()
account_name = aliases['AccountAliases'][0] if aliases['AccountAliases'] else account_id
except Exception:
# Fall back to the account ID when the alias lookup is not permitted
account_name = account_id
return account_id, account_name
def fetch_security_groups(session, account_id, account_name):
"""Fetch all security groups from AWS"""
ec2 = session.client('ec2')
print("Fetching security groups...")
paginator = ec2.get_paginator('describe_security_groups')
security_groups = []
sg_rules = []
for page in paginator.paginate():
for sg in page['SecurityGroups']:
# Extract tags
tags = {tag['Key']: tag['Value'] for tag in sg.get('Tags', [])}
# Parse rules first to get accurate counts
ingress_rules = []
egress_rules = []
for rule in sg.get('IpPermissions', []):
ingress_rules.extend(parse_sg_rule(sg['GroupId'], 'ingress', rule))
for rule in sg.get('IpPermissionsEgress', []):
egress_rules.extend(parse_sg_rule(sg['GroupId'], 'egress', rule))
sg_data = {
'account_id': account_id,
'account_name': account_name,
'group_id': sg['GroupId'],
'group_name': sg['GroupName'],
'tag_name': tags.get('Name', ''),
'tag_wave': tags.get('ucsb:dept:INFR:wave', 'none'),
'tag_git_repo': tags.get('git_repo', 'none'),
'tag_git_org': tags.get('git_org', ''),
'tag_git_file': tags.get('git_file', ''),
'tags_json': tags,
'ingress_rule_count': len(ingress_rules),
'egress_rule_count': len(egress_rules)
}
security_groups.append(sg_data)
# Add parsed rules to the list
sg_rules.extend(ingress_rules)
sg_rules.extend(egress_rules)
print(f"✓ Found {len(security_groups)} security groups with {len(sg_rules)} rules")
return security_groups, sg_rules
def parse_sg_rule(group_id, direction, rule):
"""Parse a security group rule into individual entries"""
rules = []
protocol = rule.get('IpProtocol', '-1')
from_port = rule.get('FromPort', '')
to_port = rule.get('ToPort', '')
# Normalize protocol (boto3 reports names like 'tcp' for common
# protocols, '-1' for all traffic, or a protocol number otherwise)
if protocol == '-1':
protocol_str = 'All'
port_range = 'All'
elif protocol in ('tcp', '6'):
protocol_str = 'TCP'
port_range = f"{from_port}-{to_port}" if from_port != to_port else str(from_port)
elif protocol in ('udp', '17'):
protocol_str = 'UDP'
port_range = f"{from_port}-{to_port}" if from_port != to_port else str(from_port)
elif protocol in ('icmp', '1', 'icmpv6', '58'):
protocol_str = 'ICMP'
port_range = 'N/A'
else:
protocol_str = protocol
port_range = f"{from_port}-{to_port}" if from_port and to_port else 'N/A'
# Parse IP ranges
for ip_range in rule.get('IpRanges', []):
rules.append({
'group_id': group_id,
'direction': direction,
'protocol': protocol_str,
'port_range': port_range,
'source_type': 'CIDR',
'source': ip_range['CidrIp'],
'description': ip_range.get('Description', '')
})
# Parse IPv6 ranges
for ip_range in rule.get('Ipv6Ranges', []):
rules.append({
'group_id': group_id,
'direction': direction,
'protocol': protocol_str,
'port_range': port_range,
'source_type': 'CIDR',
'source': ip_range['CidrIpv6'],
'description': ip_range.get('Description', '')
})
# Parse security group references
for sg_ref in rule.get('UserIdGroupPairs', []):
source = sg_ref.get('GroupId', '')
if sg_ref.get('GroupName'):
source += f" ({sg_ref['GroupName']})"
rules.append({
'group_id': group_id,
'direction': direction,
'protocol': protocol_str,
'port_range': port_range,
'source_type': 'Security Group',
'source': source,
'description': sg_ref.get('Description', '')
})
# Parse prefix lists
for prefix in rule.get('PrefixListIds', []):
rules.append({
'group_id': group_id,
'direction': direction,
'protocol': protocol_str,
'port_range': port_range,
'source_type': 'Prefix List',
'source': prefix['PrefixListId'],
'description': prefix.get('Description', '')
})
return rules
def fetch_ec2_instances(session, account_id, account_name):
"""Fetch all EC2 instances from AWS"""
ec2 = session.client('ec2')
print("Fetching EC2 instances...")
paginator = ec2.get_paginator('describe_instances')
instances = []
for page in paginator.paginate():
for reservation in page['Reservations']:
for instance in reservation['Instances']:
# Extract tags
tags = {tag['Key']: tag['Value'] for tag in instance.get('Tags', [])}
# Extract security groups
sg_ids = [sg['GroupId'] for sg in instance.get('SecurityGroups', [])]
sg_names = [sg['GroupName'] for sg in instance.get('SecurityGroups', [])]
instance_data = {
'account_id': account_id,
'account_name': account_name,
'tag_name': tags.get('Name', ''),
'instance_id': instance['InstanceId'],
'state': instance['State']['Name'],
'private_ip_address': instance.get('PrivateIpAddress', ''),
'security_groups_id_list': ';'.join(sg_ids),
'security_groups_name_list': ';'.join(sg_names),
'tag_git_repo': tags.get('git_repo', 'none'),
'tag_git_org': tags.get('git_org', ''),
'tag_git_file': tags.get('git_file', ''),
'tags_json': tags
}
instances.append(instance_data)
print(f"✓ Found {len(instances)} EC2 instances")
return instances
def get_db(db_path):
"""Get database connection and create schema if needed"""
conn = sqlite3.connect(db_path)
cursor = conn.cursor()
# Create tables if they don't exist
cursor.execute("""
CREATE TABLE IF NOT EXISTS security_groups (
id INTEGER PRIMARY KEY AUTOINCREMENT,
account_id TEXT,
account_name TEXT,
group_id TEXT UNIQUE,
group_name TEXT,
tag_name TEXT,
tag_wave TEXT,
tag_git_repo TEXT,
tag_git_org TEXT,
tag_git_file TEXT,
tags_json TEXT,
ingress_rule_count INTEGER,
egress_rule_count INTEGER
)
""")
cursor.execute("""
CREATE TABLE IF NOT EXISTS ec2_instances (
id INTEGER PRIMARY KEY AUTOINCREMENT,
account_id TEXT,
account_name TEXT,
tag_name TEXT,
instance_id TEXT UNIQUE,
state TEXT,
private_ip_address TEXT,
security_groups_id_list TEXT,
security_groups_name_list TEXT,
tag_git_repo TEXT,
tag_git_org TEXT,
tag_git_file TEXT,
tags_json TEXT
)
""")
cursor.execute("""
CREATE TABLE IF NOT EXISTS sg_rules (
id INTEGER PRIMARY KEY AUTOINCREMENT,
group_id TEXT,
direction TEXT,
protocol TEXT,
port_range TEXT,
source_type TEXT,
source TEXT,
description TEXT,
FOREIGN KEY (group_id) REFERENCES security_groups(group_id)
)
""")
cursor.execute("""
CREATE TABLE IF NOT EXISTS refresh_timestamps (
id INTEGER PRIMARY KEY AUTOINCREMENT,
account_id TEXT,
account_name TEXT,
last_refresh TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
UNIQUE(account_id)
)
""")
# Create indexes
cursor.execute("CREATE INDEX IF NOT EXISTS idx_sg_group_id ON security_groups(group_id)")
cursor.execute("CREATE INDEX IF NOT EXISTS idx_sg_account_name ON security_groups(account_name)")
cursor.execute("CREATE INDEX IF NOT EXISTS idx_ec2_instance_id ON ec2_instances(instance_id)")
cursor.execute("CREATE INDEX IF NOT EXISTS idx_ec2_account_name ON ec2_instances(account_name)")
cursor.execute("CREATE INDEX IF NOT EXISTS idx_sg_rules_group_id ON sg_rules(group_id)")
cursor.execute("CREATE INDEX IF NOT EXISTS idx_sg_rules_direction ON sg_rules(direction)")
conn.commit()
return conn
def import_to_database(db_path, security_groups, ec2_instances, sg_rules=None, append=False):
"""Import data into SQLite database"""
import json
conn = get_db(db_path)
cursor = conn.cursor()
if not append:
# Clear existing data (but keep refresh_timestamps)
print("Clearing existing data...")
cursor.execute("DELETE FROM security_groups")
cursor.execute("DELETE FROM ec2_instances")
cursor.execute("DELETE FROM sg_rules")
# Import security groups
print(f"Importing {len(security_groups)} security groups...")
for sg in security_groups:
cursor.execute("""
INSERT OR REPLACE INTO security_groups
(account_id, account_name, group_id, group_name, tag_name, tag_wave, tag_git_repo,
tag_git_org, tag_git_file, tags_json, ingress_rule_count, egress_rule_count)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
""", (
sg['account_id'], sg['account_name'], sg['group_id'], sg['group_name'],
sg['tag_name'], sg['tag_wave'], sg['tag_git_repo'],
sg.get('tag_git_org', ''), sg.get('tag_git_file', ''),
json.dumps(sg.get('tags_json', {})),
sg['ingress_rule_count'], sg.get('egress_rule_count', 0)
))
# Import EC2 instances
print(f"Importing {len(ec2_instances)} EC2 instances...")
for instance in ec2_instances:
cursor.execute("""
INSERT OR REPLACE INTO ec2_instances
(account_id, account_name, tag_name, instance_id, state, private_ip_address,
security_groups_id_list, security_groups_name_list, tag_git_repo,
tag_git_org, tag_git_file, tags_json)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
""", (
instance['account_id'], instance['account_name'], instance['tag_name'],
instance['instance_id'], instance['state'], instance['private_ip_address'],
instance['security_groups_id_list'], instance['security_groups_name_list'],
instance['tag_git_repo'], instance.get('tag_git_org', ''),
instance.get('tag_git_file', ''), json.dumps(instance.get('tags_json', {}))
))
# Import security group rules
if sg_rules:
print(f"Importing {len(sg_rules)} security group rules...")
# If appending, delete existing rules for these security groups to avoid duplicates
if append:
unique_group_ids = set(rule['group_id'] for rule in sg_rules)
for group_id in unique_group_ids:
cursor.execute("DELETE FROM sg_rules WHERE group_id = ?", (group_id,))
for rule in sg_rules:
cursor.execute("""
INSERT INTO sg_rules
(group_id, direction, protocol, port_range, source_type, source, description)
VALUES (?, ?, ?, ?, ?, ?, ?)
""", (
rule['group_id'], rule['direction'], rule['protocol'], rule['port_range'],
rule['source_type'], rule['source'], rule['description']
))
# Update refresh timestamps for all accounts
print("Updating refresh timestamps...")
accounts = set()
for sg in security_groups:
accounts.add((sg['account_id'], sg['account_name']))
for instance in ec2_instances:
accounts.add((instance['account_id'], instance['account_name']))
for account_id, account_name in accounts:
cursor.execute("""
INSERT INTO refresh_timestamps (account_id, account_name, last_refresh)
VALUES (?, ?, CURRENT_TIMESTAMP)
ON CONFLICT(account_id) DO UPDATE SET
last_refresh = CURRENT_TIMESTAMP,
account_name = excluded.account_name
""", (account_id, account_name))
conn.commit()
conn.close()
print("✓ Import complete")
def main():
# Database path
db_path = os.path.join(os.path.dirname(__file__), 'data', 'aws_export.db')
os.makedirs(os.path.dirname(db_path), exist_ok=True)
print("=" * 60)
print("AWS Direct Import Tool")
print("=" * 60)
# Get available profiles
profiles = get_aws_profiles()
if not profiles:
print("No AWS profiles found in ~/.aws/config")
sys.exit(1)
print("\nAvailable AWS profiles:")
for i, profile in enumerate(profiles, 1):
print(f" {i}. {profile}")
# Let user select profile(s)
print("\nEnter profile number(s) to import (comma-separated, or 'all'):")
selection = input("Selection: ").strip()
if selection.lower() == 'all':
selected_profiles = profiles
else:
try:
indices = [int(x.strip()) - 1 for x in selection.split(',')]
# Reject out-of-range numbers explicitly; otherwise an input of "0"
# would silently select the last profile via negative indexing
if any(i < 0 or i >= len(profiles) for i in indices):
raise IndexError("profile number out of range")
selected_profiles = [profiles[i] for i in indices]
except (ValueError, IndexError):
print("Invalid selection")
sys.exit(1)
# Ask whether to append to or replace existing data
append_mode = False
if len(selected_profiles) > 1:
append_choice = input("\nAppend to existing data? (y/N): ").strip().lower()
append_mode = append_choice == 'y'
# Process each profile
all_security_groups = []
all_ec2_instances = []
all_sg_rules = []
for i, profile in enumerate(selected_profiles):
print(f"\n{'=' * 60}")
print(f"Processing profile {i+1}/{len(selected_profiles)}: {profile}")
print('=' * 60)
# Authenticate
session = get_session_with_mfa(profile)
if not session:
print(f"✗ Skipping profile {profile} due to authentication failure")
continue
# Get account info
account_id, account_name = get_account_info(session)
print(f"Account: {account_name} ({account_id})")
# Fetch data
security_groups, sg_rules = fetch_security_groups(session, account_id, account_name)
ec2_instances = fetch_ec2_instances(session, account_id, account_name)
all_security_groups.extend(security_groups)
all_ec2_instances.extend(ec2_instances)
all_sg_rules.extend(sg_rules)
# Import to database
if all_security_groups or all_ec2_instances:
print(f"\n{'=' * 60}")
print("Importing to database...")
print('=' * 60)
import_to_database(db_path, all_security_groups, all_ec2_instances, all_sg_rules,
append=append_mode and len(selected_profiles) > 1)
print(f"\n✓ Finished processing {len(selected_profiles)} selected profile(s)")
print(f" Database: {db_path}")
print(f" Total Security Groups: {len(all_security_groups)}")
print(f" Total EC2 Instances: {len(all_ec2_instances)}")
print(f" Total SG Rules: {len(all_sg_rules)}")
else:
print("\n✗ No data imported")
if __name__ == "__main__":
main()
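The importer above writes its results to a plain SQLite file, so the data can be inspected without the web UI. A minimal sketch, using only the column names visible in the `INSERT INTO sg_rules` statement above (the `CREATE TABLE` here is a reconstruction with assumed TEXT columns, and the world-open filter is purely illustrative):

```python
import sqlite3

# Build a throwaway copy of the sg_rules schema used by the importer above.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE sg_rules (
        group_id TEXT, direction TEXT, protocol TEXT, port_range TEXT,
        source_type TEXT, source TEXT, description TEXT
    )
""")
conn.executemany(
    "INSERT INTO sg_rules VALUES (?, ?, ?, ?, ?, ?, ?)",
    [
        ("sg-111", "ingress", "tcp", "22", "cidr", "0.0.0.0/0", "ssh from anywhere"),
        ("sg-111", "ingress", "tcp", "443", "cidr", "10.0.0.0/8", "internal https"),
    ],
)

# Example audit query: ingress rules open to the world.
open_rules = conn.execute(
    "SELECT group_id, protocol, port_range FROM sg_rules "
    "WHERE direction = 'ingress' AND source = '0.0.0.0/0'"
).fetchall()
print(open_rules)  # [('sg-111', 'tcp', '22')]
```

Against the real `data/aws_export.db` the same query works unchanged; only the connection path differs.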

3
requirements.txt Normal file
View file

@ -0,0 +1,3 @@
Flask==3.0.0
Werkzeug==3.0.1
boto3==1.34.0

819
static/css/style.css Normal file
View file

@ -0,0 +1,819 @@
:root {
--primary-color: #2563eb;
--primary-hover: #1d4ed8;
--success-color: #10b981;
--warning-color: #f59e0b;
--danger-color: #ef4444;
--bg-color: #f8fafc;
--card-bg: #ffffff;
--border-color: #e2e8f0;
--text-primary: #1e293b;
--text-secondary: #64748b;
--shadow-sm: 0 1px 2px 0 rgb(0 0 0 / 0.05);
--shadow-md: 0 4px 6px -1px rgb(0 0 0 / 0.1), 0 2px 4px -2px rgb(0 0 0 / 0.1);
--shadow-lg: 0 10px 15px -3px rgb(0 0 0 / 0.1), 0 4px 6px -4px rgb(0 0 0 / 0.1);
}
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
body {
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, 'Helvetica Neue', Arial, sans-serif;
background: var(--bg-color);
color: var(--text-primary);
line-height: 1.4;
font-size: 0.875rem;
}
.container {
max-width: 1600px;
margin: 0 auto;
padding: 0.75rem;
}
header {
background: var(--card-bg);
border-bottom: 1px solid var(--border-color);
box-shadow: var(--shadow-sm);
margin-bottom: 0.75rem;
padding: 0.75rem 0;
}
.header-content {
max-width: 1600px;
margin: 0 auto;
padding: 0 0.75rem;
}
h1 {
color: var(--primary-color);
font-size: 1.5rem;
font-weight: 700;
margin-bottom: 0.25rem;
}
.subtitle {
color: var(--text-secondary);
font-size: 0.8rem;
}
.stats-bar {
background: var(--card-bg);
border-radius: 0.375rem;
padding: 0.5rem 0.75rem;
margin-bottom: 0.75rem;
display: flex;
gap: 1.5rem;
box-shadow: var(--shadow-sm);
border: 1px solid var(--border-color);
}
.stat-item {
display: flex;
flex-direction: column;
}
.stat-label {
font-size: 0.7rem;
color: var(--text-secondary);
text-transform: uppercase;
letter-spacing: 0.05em;
}
.stat-value {
font-size: 1.1rem;
font-weight: 700;
color: var(--primary-color);
}
.search-section {
background: var(--card-bg);
border-radius: 0.375rem;
padding: 0.75rem;
margin-bottom: 0.75rem;
box-shadow: var(--shadow-md);
border: 1px solid var(--border-color);
}
.search-box {
display: flex;
gap: 0.5rem;
margin-bottom: 0.5rem;
}
.search-input {
flex: 1;
padding: 0.5rem 0.75rem;
border: 2px solid var(--border-color);
border-radius: 0.375rem;
font-size: 0.875rem;
transition: all 0.2s;
}
.search-input:focus {
outline: none;
border-color: var(--primary-color);
box-shadow: 0 0 0 3px rgba(37, 99, 235, 0.1);
}
.regex-checkbox {
display: flex;
align-items: center;
gap: 0.5rem;
padding: 0.75rem 1rem;
background: var(--bg-color);
border: 2px solid var(--border-color);
border-radius: 0.5rem;
cursor: pointer;
transition: all 0.2s;
user-select: none;
}
.regex-checkbox:hover {
border-color: var(--primary-color);
}
.regex-checkbox input[type="checkbox"] {
width: 1.25rem;
height: 1.25rem;
cursor: pointer;
accent-color: var(--primary-color);
}
.regex-checkbox span {
font-weight: 500;
color: var(--text-primary);
font-size: 0.95rem;
}
.filter-buttons {
display: flex;
gap: 0.5rem;
}
.filter-btn {
padding: 0.75rem 1.5rem;
border: 2px solid var(--border-color);
background: white;
border-radius: 0.5rem;
cursor: pointer;
font-size: 0.95rem;
font-weight: 500;
transition: all 0.2s;
}
.filter-btn:hover {
border-color: var(--primary-color);
color: var(--primary-color);
}
.filter-btn.active {
background: var(--primary-color);
border-color: var(--primary-color);
color: white;
}
.search-results {
margin-bottom: 2rem;
}
.result-item {
background: var(--card-bg);
border: 1px solid var(--border-color);
border-radius: 0.5rem;
padding: 1rem;
margin-bottom: 0.5rem;
cursor: pointer;
transition: all 0.2s;
}
.result-item:hover {
box-shadow: var(--shadow-md);
border-color: var(--primary-color);
transform: translateY(-2px);
}
.result-header {
display: flex;
align-items: center;
gap: 0.75rem;
margin-bottom: 0.5rem;
}
.result-type-badge {
padding: 0.25rem 0.75rem;
border-radius: 0.25rem;
font-size: 0.75rem;
font-weight: 600;
text-transform: uppercase;
}
.result-type-badge.ec2 {
background: #dbeafe;
color: #1e40af;
}
.result-type-badge.sg {
background: #d1fae5;
color: #065f46;
}
.result-name {
font-weight: 600;
font-size: 1.1rem;
}
.result-meta {
color: var(--text-secondary);
font-size: 0.875rem;
}
.details-view {
display: none;
margin-top: 2rem;
}
.details-view.active {
display: block;
}
.main-card {
background: var(--card-bg);
border-radius: 0.75rem;
padding: 2rem;
box-shadow: var(--shadow-lg);
border: 2px solid var(--primary-color);
margin-bottom: 2rem;
}
.card-header {
display: flex;
align-items: start;
justify-content: space-between;
margin-bottom: 1.5rem;
padding-bottom: 1rem;
border-bottom: 2px solid var(--border-color);
}
.card-title {
font-size: 1.75rem;
font-weight: 700;
color: var(--primary-color);
margin-bottom: 0.5rem;
}
.card-subtitle {
color: var(--text-secondary);
font-size: 1rem;
}
.status-badge {
padding: 0.5rem 1rem;
border-radius: 0.375rem;
font-size: 0.875rem;
font-weight: 600;
}
.status-badge.running {
background: #d1fae5;
color: #065f46;
}
.status-badge.stopped {
background: #fee2e2;
color: #991b1b;
}
.info-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(250px, 1fr));
gap: 1.5rem;
margin-bottom: 2rem;
}
.info-item {
display: flex;
flex-direction: column;
}
.info-label {
font-size: 0.875rem;
color: var(--text-secondary);
font-weight: 500;
margin-bottom: 0.25rem;
}
.info-value {
font-size: 1rem;
color: var(--text-primary);
font-weight: 600;
word-break: break-all;
}
.nested-cards {
margin-top: 2rem;
}
.nested-cards-header {
font-size: 1.25rem;
font-weight: 600;
margin-bottom: 1rem;
color: var(--text-primary);
}
.nested-card {
background: linear-gradient(to right, #f8fafc, #ffffff);
border: 1px solid var(--border-color);
border-left: 4px solid var(--primary-color);
border-radius: 0.5rem;
padding: 1.5rem;
margin-bottom: 1rem;
box-shadow: var(--shadow-sm);
transition: all 0.2s;
}
.nested-card:hover {
box-shadow: var(--shadow-md);
transform: translateX(4px);
}
.nested-card[style*="cursor: pointer"]:hover {
border-left-color: var(--primary-hover);
background: linear-gradient(to right, #f1f5f9, #ffffff);
}
.nested-card-title {
font-size: 1.1rem;
font-weight: 600;
color: var(--primary-color);
margin-bottom: 1rem;
}
.tag-list {
display: flex;
flex-wrap: wrap;
gap: 0.5rem;
margin-top: 1rem;
}
.tag {
padding: 0.375rem 0.75rem;
background: var(--bg-color);
border: 1px solid var(--border-color);
border-radius: 0.375rem;
font-size: 0.875rem;
}
.tag-key {
font-weight: 600;
color: var(--text-secondary);
}
.tag-value {
color: var(--text-primary);
}
.back-button {
padding: 0.75rem 1.5rem;
background: var(--primary-color);
color: white;
border: none;
border-radius: 0.5rem;
cursor: pointer;
font-size: 1rem;
font-weight: 500;
transition: all 0.2s;
margin-bottom: 1rem;
}
.back-button:hover {
background: var(--primary-hover);
transform: translateY(-2px);
box-shadow: var(--shadow-md);
}
.empty-state {
text-align: center;
padding: 3rem;
color: var(--text-secondary);
}
.empty-state-icon {
font-size: 3rem;
margin-bottom: 1rem;
}
.loading {
text-align: center;
padding: 2rem;
color: var(--text-secondary);
}
.spinner {
border: 3px solid var(--border-color);
border-top: 3px solid var(--primary-color);
border-radius: 50%;
width: 40px;
height: 40px;
animation: spin 1s linear infinite;
margin: 0 auto 1rem;
}
@keyframes spin {
0% { transform: rotate(0deg); }
100% { transform: rotate(360deg); }
}
/* Security Group Rules Styles */
.rules-section {
margin-top: 1rem;
background: var(--bg-color);
border-radius: 0.375rem;
padding: 0.75rem;
}
.rules-header {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 0.5rem;
}
.rules-title {
font-size: 0.95rem;
font-weight: 600;
color: var(--text-primary);
}
.rules-search {
padding: 0.375rem 0.5rem;
border: 1px solid var(--border-color);
border-radius: 0.25rem;
font-size: 0.8rem;
width: 200px;
}
.rules-tabs {
display: flex;
gap: 0.5rem;
margin-bottom: 0.5rem;
border-bottom: 1px solid var(--border-color);
}
.rule-tab {
padding: 0.375rem 0.75rem;
background: none;
border: none;
border-bottom: 2px solid transparent;
cursor: pointer;
font-size: 0.85rem;
font-weight: 500;
color: var(--text-secondary);
transition: all 0.2s;
}
.rule-tab:hover {
color: var(--primary-color);
}
.rule-tab.active {
color: var(--primary-color);
border-bottom-color: var(--primary-color);
}
.rules-table-container {
overflow-x: auto;
max-height: calc(85vh - 250px);
overflow-y: auto;
}
.rules-table {
width: 100%;
border-collapse: collapse;
font-size: 0.8rem;
}
.rules-table th {
background: var(--border-color);
padding: 0.375rem 0.5rem;
text-align: left;
font-weight: 600;
font-size: 0.75rem;
position: sticky;
top: 0;
z-index: 1;
}
.rules-table td {
padding: 0.375rem 0.5rem;
border-bottom: 1px solid var(--border-color);
}
.rules-table tr:hover {
background: var(--bg-color);
}
.rule-protocol {
font-weight: 600;
color: var(--primary-color);
}
.rule-port {
font-family: 'Courier New', monospace;
font-size: 0.75rem;
}
.rule-source {
font-family: 'Courier New', monospace;
font-size: 0.75rem;
}
.rule-description {
color: var(--text-secondary);
font-size: 0.75rem;
font-style: italic;
}
.no-rules {
text-align: center;
padding: 1.5rem;
color: var(--text-secondary);
font-size: 0.85rem;
}
.view-rules-btn {
padding: 0.25rem 0.5rem;
background: var(--primary-color);
color: white;
border: none;
border-radius: 0.25rem;
cursor: pointer;
font-size: 0.75rem;
margin-top: 0.5rem;
transition: all 0.2s;
}
.view-rules-btn:hover {
background: var(--primary-hover);
}
/* Tag Filters */
.tag-filters {
display: flex;
gap: 0.5rem;
margin-bottom: 0.5rem;
flex-wrap: wrap;
}
.tag-filter-group {
display: flex;
flex-direction: column;
gap: 0.25rem;
}
.tag-filter-label {
font-size: 0.75rem;
font-weight: 600;
color: var(--text-secondary);
text-transform: uppercase;
letter-spacing: 0.05em;
}
.tag-filter-select {
padding: 0.5rem 0.75rem;
border: 2px solid var(--border-color);
border-radius: 0.375rem;
font-size: 0.875rem;
background: white;
cursor: pointer;
transition: all 0.2s;
min-width: 200px;
}
.tag-filter-select:focus {
outline: none;
border-color: var(--primary-color);
}
.tag-filter-select:hover {
border-color: var(--primary-color);
}
.clear-filters-btn {
padding: 0.5rem 1rem;
background: var(--bg-color);
border: 2px solid var(--border-color);
border-radius: 0.375rem;
cursor: pointer;
font-size: 0.875rem;
font-weight: 500;
align-self: flex-end;
transition: all 0.2s;
}
.clear-filters-btn:hover {
background: var(--danger-color);
border-color: var(--danger-color);
color: white;
}
/* View Toggle */
.view-controls {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 0.5rem;
}
.view-toggle {
display: flex;
gap: 0.25rem;
background: var(--bg-color);
padding: 0.25rem;
border-radius: 0.375rem;
border: 1px solid var(--border-color);
}
.view-toggle-btn {
padding: 0.5rem 1rem;
border: none;
background: transparent;
border-radius: 0.25rem;
cursor: pointer;
font-size: 0.875rem;
font-weight: 500;
color: var(--text-secondary);
transition: all 0.2s;
}
.view-toggle-btn:hover {
color: var(--primary-color);
}
.view-toggle-btn.active {
background: var(--card-bg);
color: var(--primary-color);
box-shadow: var(--shadow-sm);
}
/* Table View */
.results-table-view {
display: none;
background: var(--card-bg);
border-radius: 0.5rem;
overflow: hidden;
box-shadow: var(--shadow-md);
border: 1px solid var(--border-color);
}
.results-table-view.active {
display: block;
}
.results-table-container {
overflow-x: auto;
max-height: calc(100vh - 400px);
overflow-y: auto;
}
.results-table {
width: 100%;
border-collapse: collapse;
font-size: 0.875rem;
}
.results-table thead {
background: var(--bg-color);
position: sticky;
top: 0;
z-index: 10;
}
.results-table th {
padding: 0.75rem;
text-align: left;
font-weight: 600;
font-size: 0.875rem;
color: var(--text-primary);
border-bottom: 2px solid var(--border-color);
white-space: nowrap;
}
.results-table td {
padding: 0.75rem;
border-bottom: 1px solid var(--border-color);
vertical-align: top;
}
.results-table tbody tr {
transition: all 0.15s;
cursor: pointer;
}
.results-table tbody tr:hover {
background: var(--bg-color);
}
.table-type-badge {
padding: 0.25rem 0.5rem;
border-radius: 0.25rem;
font-size: 0.75rem;
font-weight: 600;
text-transform: uppercase;
display: inline-block;
}
.table-type-badge.ec2 {
background: #dbeafe;
color: #1e40af;
}
.table-type-badge.sg {
background: #d1fae5;
color: #065f46;
}
.table-cell-mono {
font-family: 'Courier New', monospace;
font-size: 0.8rem;
}
.table-cell-secondary {
color: var(--text-secondary);
font-size: 0.85rem;
}
.table-status-badge {
padding: 0.25rem 0.5rem;
border-radius: 0.25rem;
font-size: 0.75rem;
font-weight: 600;
display: inline-block;
}
.table-status-badge.running {
background: #d1fae5;
color: #065f46;
}
.table-status-badge.stopped {
background: #fee2e2;
color: #991b1b;
}
/* Refresh Tooltip */
.refresh-tooltip {
display: none;
position: absolute;
top: 100%;
left: 50%;
transform: translateX(-50%);
margin-top: 0.5rem;
background: var(--text-primary);
color: white;
padding: 0.75rem;
border-radius: 0.375rem;
font-size: 0.75rem;
white-space: nowrap;
z-index: 1000;
box-shadow: var(--shadow-lg);
min-width: 200px;
}
.refresh-tooltip::after {
content: '';
position: absolute;
bottom: 100%;
left: 50%;
transform: translateX(-50%);
border: 6px solid transparent;
border-bottom-color: var(--text-primary);
}
.stat-item:hover .refresh-tooltip {
display: block;
}
.refresh-tooltip-item {
padding: 0.25rem 0;
border-bottom: 1px solid rgba(255, 255, 255, 0.2);
}
.refresh-tooltip-item:last-child {
border-bottom: none;
}
.refresh-tooltip-account {
font-weight: 600;
/* Colors now set via hash-based inline styles */
}
.refresh-tooltip-time {
color: rgba(255, 255, 255, 0.8);
font-size: 0.7rem;
}
/* Account Name Color Coding - Now using hash-based inline styles */

3
static/images/logo.svg Normal file
View file

@ -0,0 +1,3 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
<text x="50" y="50" font-size="80" text-anchor="middle" dominant-baseline="middle">🔭</text>
</svg>


432
templates/import.html Normal file
View file

@ -0,0 +1,432 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>SGO - Import</title>
<link rel="icon" href="{{ url_for('static', filename='images/logo.svg') }}" type="image/svg+xml">
<link rel="stylesheet" href="{{ url_for('static', filename='css/style.css') }}">
<style>
.import-container {
max-width: 800px;
margin: 4rem auto;
padding: 2rem;
}
.import-card {
background: var(--card-bg);
border-radius: 0.5rem;
padding: 2rem;
box-shadow: var(--shadow-lg);
border: 1px solid var(--border-color);
}
.import-title {
font-size: 2rem;
font-weight: 700;
color: var(--primary-color);
margin-bottom: 0.5rem;
}
.import-subtitle {
color: var(--text-secondary);
margin-bottom: 2rem;
}
.profile-list {
max-height: 300px;
overflow-y: auto;
border: 1px solid var(--border-color);
border-radius: 0.375rem;
padding: 0.5rem;
margin-bottom: 1rem;
}
.profile-item {
padding: 0.75rem;
border-radius: 0.25rem;
margin-bottom: 0.25rem;
cursor: pointer;
transition: all 0.2s;
}
.profile-item:hover {
background: var(--bg-color);
}
.profile-item label {
display: flex;
align-items: center;
gap: 0.75rem;
cursor: pointer;
}
.profile-item input[type="checkbox"] {
width: 1.25rem;
height: 1.25rem;
cursor: pointer;
}
.mfa-section {
display: none;
margin-top: 1.5rem;
padding: 1.5rem;
background: var(--bg-color);
border-radius: 0.375rem;
}
.mfa-section.active {
display: block;
}
.mfa-inputs {
display: grid;
gap: 1rem;
}
.mfa-input-group {
display: flex;
flex-direction: column;
gap: 0.5rem;
}
.mfa-input-group label {
font-weight: 600;
color: var(--text-primary);
}
.mfa-input-row {
display: flex;
gap: 0.5rem;
align-items: center;
}
.mfa-input-group input {
flex: 1;
padding: 0.75rem;
border: 2px solid var(--border-color);
border-radius: 0.375rem;
font-size: 1rem;
}
.mfa-input-group input:focus {
outline: none;
border-color: var(--primary-color);
}
.profile-import-btn {
padding: 0.75rem 1.5rem;
background: var(--primary-color);
color: white;
border: none;
border-radius: 0.375rem;
font-weight: 600;
cursor: pointer;
white-space: nowrap;
transition: all 0.2s;
}
.profile-import-btn:hover:not(:disabled) {
background: var(--primary-hover);
transform: translateY(-1px);
}
.profile-import-btn:disabled {
opacity: 0.5;
cursor: not-allowed;
}
.profile-import-btn.success {
background: var(--success-color);
}
.profile-import-btn.error {
background: var(--danger-color);
}
.import-btn {
width: 100%;
padding: 1rem;
background: var(--primary-color);
color: white;
border: none;
border-radius: 0.5rem;
font-size: 1.1rem;
font-weight: 600;
cursor: pointer;
transition: all 0.2s;
margin-top: 1rem;
}
.import-btn:hover:not(:disabled) {
background: var(--primary-hover);
transform: translateY(-2px);
box-shadow: var(--shadow-md);
}
.import-btn:disabled {
opacity: 0.5;
cursor: not-allowed;
}
.select-all-btn {
padding: 0.5rem 1rem;
background: var(--bg-color);
border: 1px solid var(--border-color);
border-radius: 0.375rem;
cursor: pointer;
font-size: 0.875rem;
margin-bottom: 0.5rem;
}
.select-all-btn:hover {
background: var(--border-color);
}
.progress-section {
display: none;
margin-top: 1.5rem;
padding: 1.5rem;
background: var(--bg-color);
border-radius: 0.375rem;
}
.progress-section.active {
display: block;
}
#progressLog {
max-height: 400px;
overflow-y: auto;
border: 1px solid var(--border-color);
border-radius: 0.375rem;
padding: 0.5rem;
background: var(--bg-color);
}
.progress-item {
padding: 0.5rem 0;
color: var(--text-secondary);
}
.progress-item.success {
color: var(--success-color);
}
.progress-item.error {
color: var(--danger-color);
}
</style>
</head>
<body>
<div class="import-container">
<div class="import-card">
<h1 class="import-title">SG Observatory</h1>
<p class="import-subtitle">Select AWS profiles to import EC2 instances and Security Groups</p>
<div id="loadingProfiles" class="loading">
<div class="spinner"></div>
Loading AWS profiles...
</div>
<div id="profileSelection" style="display: none;">
<button class="select-all-btn" onclick="toggleSelectAll()">Select All / Deselect All</button>
<div class="profile-list" id="profileList"></div>
<div class="mfa-section" id="mfaSection">
<h3>MFA Codes</h3>
<p style="color: var(--text-secondary); font-size: 0.9rem; margin-bottom: 1rem;">
Enter MFA/OTP codes for profiles that require authentication
</p>
<div class="mfa-inputs" id="mfaInputs"></div>
</div>
<button class="import-btn" id="doneBtn" onclick="goToExplorer()">
Done - Go to Explorer
</button>
<div class="progress-section" id="progressSection">
<h3>Import Progress</h3>
<div id="progressLog"></div>
</div>
</div>
</div>
</div>
<script>
let profiles = [];
let selectedProfiles = new Set();
let importedProfiles = new Set();
async function loadProfiles() {
try {
const response = await fetch('/api/profiles');
const data = await response.json();
if (data.error) {
document.getElementById('loadingProfiles').innerHTML =
`<div class="empty-state"><div class="empty-state-icon">⚠️</div><p>${data.error}</p></div>`;
return;
}
profiles = data.profiles;
renderProfiles();
document.getElementById('loadingProfiles').style.display = 'none';
document.getElementById('profileSelection').style.display = 'block';
} catch (error) {
document.getElementById('loadingProfiles').innerHTML =
`<div class="empty-state"><div class="empty-state-icon">⚠️</div><p>Error loading profiles: ${error.message}</p></div>`;
}
}
function renderProfiles() {
const list = document.getElementById('profileList');
list.innerHTML = profiles.map((profile, idx) => `
<div class="profile-item">
<label>
<input type="checkbox"
id="profile-${idx}"
value="${profile}"
onchange="handleProfileSelection()">
<span>${profile}</span>
</label>
</div>
`).join('');
}
function toggleSelectAll() {
const checkboxes = document.querySelectorAll('.profile-item input[type="checkbox"]');
const allChecked = Array.from(checkboxes).every(cb => cb.checked);
checkboxes.forEach(cb => cb.checked = !allChecked);
handleProfileSelection();
}
function handleProfileSelection() {
selectedProfiles.clear();
document.querySelectorAll('.profile-item input[type="checkbox"]:checked').forEach(cb => {
selectedProfiles.add(cb.value);
});
if (selectedProfiles.size > 0) {
renderMfaInputs();
document.getElementById('mfaSection').classList.add('active');
} else {
document.getElementById('mfaSection').classList.remove('active');
}
}
function renderMfaInputs() {
const container = document.getElementById('mfaInputs');
// Save current MFA values and button states before re-rendering
const savedMfaValues = {};
const savedButtonStates = {};
selectedProfiles.forEach(profile => {
const input = document.getElementById(`mfa-${profile}`);
const btn = document.getElementById(`btn-${profile}`);
if (input) {
savedMfaValues[profile] = input.value;
}
if (btn) {
savedButtonStates[profile] = {
text: btn.textContent,
disabled: btn.disabled,
classes: btn.className
};
}
});
// Render inputs
container.innerHTML = Array.from(selectedProfiles).map(profile => `
<div class="mfa-input-group">
<label for="mfa-${profile}">${profile}</label>
<div class="mfa-input-row">
<input type="text"
id="mfa-${profile}"
placeholder="Enter MFA code (leave blank if not required)"
maxlength="6"
pattern="[0-9]*">
<button class="profile-import-btn"
id="btn-${profile}"
onclick="startProfileImport('${profile}')">
Start Import
</button>
</div>
</div>
`).join('');
// Restore saved values and button states
selectedProfiles.forEach(profile => {
const input = document.getElementById(`mfa-${profile}`);
const btn = document.getElementById(`btn-${profile}`);
if (savedMfaValues[profile] !== undefined) {
input.value = savedMfaValues[profile];
}
if (savedButtonStates[profile]) {
btn.textContent = savedButtonStates[profile].text;
btn.disabled = savedButtonStates[profile].disabled;
btn.className = savedButtonStates[profile].classes;
}
});
}
async function startProfileImport(profile) {
const btn = document.getElementById(`btn-${profile}`);
const mfaInput = document.getElementById(`mfa-${profile}`);
const progressSection = document.getElementById('progressSection');
const progressLog = document.getElementById('progressLog');
// Disable button and show progress
btn.disabled = true;
btn.textContent = 'Importing...';
progressSection.classList.add('active');
// Get MFA code for this profile
const mfaCode = mfaInput.value.trim();
try {
const response = await fetch('/api/import-profile', {
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify({
profile: profile,
mfa_code: mfaCode
})
});
const reader = response.body.getReader();
const decoder = new TextDecoder();
let importSuccess = false;
let buffer = '';
while (true) {
const {value, done} = await reader.read();
if (done) break;
// Stream-decode and buffer any partial trailing line so JSON events
// split across chunks (or multi-byte characters) parse correctly
buffer += decoder.decode(value, {stream: true});
const parts = buffer.split('\n');
buffer = parts.pop();
const lines = parts.filter(l => l.trim());
lines.forEach(line => {
if (line.startsWith('data: ')) {
const data = JSON.parse(line.substring(6));
let className = 'progress-item';
if (data.status === 'success') className += ' success';
if (data.status === 'error') className += ' error';
progressLog.innerHTML += `<div class="${className}">${data.message}</div>`;
progressLog.scrollTop = progressLog.scrollHeight;
if (data.status === 'complete') {
importSuccess = true;
importedProfiles.add(profile);
btn.textContent = '✓ Imported';
btn.classList.add('success');
} else if (data.status === 'error' && data.message.includes('✗')) {
btn.textContent = '✗ Failed';
btn.classList.add('error');
btn.disabled = false;
}
}
});
}
if (!importSuccess && !btn.classList.contains('error')) {
btn.textContent = 'Start Import';
btn.disabled = false;
}
} catch (error) {
progressLog.innerHTML += `<div class="progress-item error">Error: ${error.message}</div>`;
btn.textContent = '✗ Failed';
btn.classList.add('error');
btn.disabled = false;
}
}
function goToExplorer() {
window.location.href = '/explorer';
}
// Load profiles on page load
loadProfiles();
</script>
</body>
</html>
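The progress reader in the page above expects newline-delimited `data: {json}` messages with the status values it checks (`success`, `error`, `complete`). The `/api/import-profile` endpoint itself is not part of this diff, but the wire format can be sketched with the standard library (function names here are illustrative, not the app's actual API):

```python
import json

def format_event(status, message):
    # One line per event, matching the `line.startsWith('data: ')`
    # check in the import page's reader loop.
    return "data: " + json.dumps({"status": status, "message": message}) + "\n"

def parse_events(stream_text):
    # Mirror of the client-side parsing: keep only `data: ` lines.
    events = []
    for line in stream_text.split("\n"):
        if line.startswith("data: "):
            events.append(json.loads(line[len("data: "):]))
    return events

raw = format_event("success", "✓ Imported 12 security groups") \
    + format_event("complete", "Done")
print([e["status"] for e in parse_events(raw)])  # ['success', 'complete']
```

A Flask view would yield `format_event(...)` strings from a generator response to produce this stream; any partial line at a chunk boundary is reassembled on the client.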

1248
templates/index.html Normal file

File diff suppressed because it is too large