Update README.md

edfig 2025-11-20 22:55:21 +01:00
parent 6886c8871c
commit 7cb647d437


@@ -2,7 +2,7 @@
A web-based tool for exploring AWS EC2 instances and Security Groups with direct AWS import, MFA support, and CSV export capabilities.
## Quick Start
```bash
# 1. Create .env file with your AWS credentials path
@@ -56,31 +56,6 @@ docker-compose up --build
podman-compose up --build
```
### 2. Import Data via GUI
1. Open your browser to `http://localhost:5000`
2. You'll see the **Import Page** with all your AWS profiles
3. **Select profiles**: Check the AWS accounts you want to import
4. **Enter MFA codes**: Paste your MFA/OTP codes for each selected profile
5. **Click "Start Import"**: Watch real-time progress as data is fetched **in parallel**
6. **Auto-redirect**: When complete, you're taken to the Explorer
**Parallel Import**: All selected profiles are imported simultaneously in separate threads, so total time is the max of any single import, not the sum. This prevents MFA timeout issues.
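The effect is easy to see with plain shell jobs (an illustration only, not the app's code): tasks started in the background together finish in roughly the time of the slowest one.

```bash
# Sequential: two 1-second tasks take ~2 seconds
time ( sleep 1; sleep 1 )

# Parallel: the same tasks in the background take ~1 second (the max, not the sum)
time ( sleep 1 & sleep 1 & wait )
```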
### 3. Explore Your Data
- Search for EC2 instances and Security Groups
- View detailed information
- Inspect security group rules
- Filter and search using regex
### 4. Refresh Data
- Click the **Refresh Data** button to re-import using cached AWS sessions (valid for 55 minutes)
- Click the **Change Profiles** button to switch to different AWS accounts
## Container Configuration
### Environment Variables
SGO supports configuration through environment variables. Create a `.env` file:
@@ -129,189 +104,6 @@ EOF
- Better for development
- Use `docker-compose.local.yml`:
```bash
docker-compose -f docker-compose.local.yml up --build
# or
podman-compose -f docker-compose.local.yml up --build
```
Or edit `docker-compose.yml` and swap the volume configuration as indicated in comments.
### User/Group Configuration
To avoid permission issues, set `PUID` and `PGID` to match your host user:
```bash
# Find your IDs
id -u # Your PUID
id -g # Your PGID
# Add to .env file
echo "PUID=$(id -u)" >> .env
echo "PGID=$(id -g)" >> .env
```
### Stopping the Application
```bash
# Stop with Ctrl+C, or:
docker-compose down # Docker
podman-compose down # Podman
# To also remove the data volume:
docker-compose down -v
```
## Quick Start (Local Python)
If you prefer to run without containers:
### 1. Install Dependencies
```bash
python3 -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
pip install -r requirements.txt
```
### 2. Start the Application
```bash
python app.py
```
### 3. Open Browser
Navigate to `http://localhost:5000`
## Important Notes
- **Database Persistence**: When using containers, the database persists in the `./data` directory
- **Session Caching**: AWS sessions are cached for 55 minutes, allowing multiple refreshes without re-authentication
- **Parallel Import**: All selected AWS accounts are imported simultaneously for maximum speed
## AWS Configuration
### MFA Device Setup
For profiles that require MFA, add your MFA device ARN to `~/.aws/config`:
```ini
[profile nonprod-p1p2-admin]
region = us-west-2
mfa_serial = arn:aws:iam::123456789012:mfa/your-username
```
### Finding Your MFA Device ARN
1. Go to AWS IAM Console
2. Navigate to Users → Your User → Security Credentials
3. Copy the ARN from "Assigned MFA device"
### How MFA Works in the GUI
1. The import page shows all profiles from `~/.aws/config`
2. Select the profiles you want to import
3. Enter MFA codes in the text boxes (one per profile)
4. Click "Start Import" to begin
5. Real-time progress shows authentication and data fetching
6. MFA session is valid for 1 hour - refresh without re-entering codes during this window
## Usage
### Search
1. Type in the search box (minimum 2 characters)
2. Results appear instantly as you type
3. Filter by resource type using the buttons: **All Resources** | **EC2 Instances** | **Security Groups**
4. **Enable Regex**: Check the "Regex" box to use regular expressions
- Example: `^prod-.*-\d+$` finds names starting with "prod-" and ending with numbers
- Example: `(dev|test|qa)` finds names containing dev, test, or qa
- Example: `10\.0\.\d+\.\d+` finds IP addresses in the 10.0.x.x range
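The same patterns can be tried outside the app with `grep -E` (note: POSIX ERE uses `[0-9]` where the search box accepts `\d`; the sample names below are made up):

```bash
# A few made-up resource names to search over
printf '%s\n' prod-web-01 dev-api qa-runner staging-db 10.0.3.17 > /tmp/sgo-names.txt

grep -E '^prod-.*-[0-9]+$'      /tmp/sgo-names.txt   # prod-web-01
grep -E '(dev|test|qa)'         /tmp/sgo-names.txt   # dev-api, qa-runner
grep -E '10\.0\.[0-9]+\.[0-9]+' /tmp/sgo-names.txt   # 10.0.3.17
```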
### View Details
**EC2 Instance View:**
- Click on any EC2 instance from search results
- Main card shows EC2 details (Instance ID, IP, State, Account, Tags)
- Nested cards show all attached Security Groups with their details
**Security Group View:**
- Click on any Security Group from search results
- Main card shows SG details (Group ID, Name, Ingress Rules, Wave, Tags)
- Nested cards show all EC2 instances using this Security Group
### View Security Group Rules
When viewing security groups (either attached to an EC2 or directly):
1. Click the **View Rules** button on any security group card
2. A modal opens showing all ingress and egress rules
3. Switch between **Ingress** and **Egress** tabs
4. Use the search box to filter rules by protocol, port, source, or description
5. Rules are displayed in a compact table format with:
- Protocol (TCP, UDP, ICMP, All)
- Port Range
- Source Type (CIDR, Security Group, Prefix List)
- Source (IP range or SG ID)
- Description
### Navigate
- Click **← Back to Search** to return to search results
- Perform a new search at any time
- Click outside the rules modal to close it
### Export to CSV
SGO provides comprehensive CSV export capabilities:
**Search Results Export:**
- Click the **💾 Export** button in the view controls (top right)
- Exports all current search results with filters applied
- Includes: Type, Name, ID, Account, State, IP, Security Groups count, Wave, Git info
**EC2 Instance Details Export:**
- Click the **💾 Export** button in any EC2 detail card
- Exports complete EC2 information including:
- Instance details (ID, name, state, IP, account info)
- All AWS tags
- Attached security groups with their details
**Security Group Details Export:**
- Click the **💾 Export** button in any SG detail card
- Exports complete SG information including:
- Group details (ID, name, wave, rule counts)
- All AWS tags
- Attached EC2 instances with their details
**Security Group Rules Export:**
- Click the **💾 Export** button in the rules modal
- Exports all ingress and egress rules with:
- Rule details (direction, protocol, ports, source)
- Group ID, account ID
- Git file and commit information from tags
All exports include timestamps in filenames and proper CSV escaping.
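For reference, "proper CSV escaping" means a field containing commas or quotes is wrapped in double quotes, with embedded quotes doubled. A minimal shell sketch (an illustration, not the app's export code):

```bash
# Escape one CSV field: wrap in quotes, double any embedded quotes
name='web,"primary"'
escaped="\"$(printf '%s' "$name" | sed 's/"/""/g')\""
echo "i-0abc123,$escaped"   # -> i-0abc123,"web,""primary"""
```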
## Data Structure
### Security Groups Table
- Account ID & Name
- Group ID & Name
- Tag Name
- Wave Tag
- Git Repo Tag
- Ingress Rule Count
### EC2 Instances Table
- Account ID & Name
- Instance ID
- Tag Name
- State (running, stopped, etc.)
- Private IP Address
- Security Groups (IDs and Names)
- Git Repo Tag
## File Structure
@@ -319,7 +111,6 @@ All exports include timestamps in filenames and proper CSV escaping.
sgo/
├── app.py # Flask web application
├── import_from_aws.py # AWS direct import functions
├── import_data.py # CSV to SQLite import (legacy)
├── requirements.txt # Python dependencies
├── Dockerfile # Container image definition
├── docker-compose.yml # Container orchestration (Docker volume)
@@ -331,240 +122,10 @@ sgo/
├── README.md # This file
├── data/ # Local data directory (if using local mode)
│ └── aws_export.db # SQLite database
├── static/
│ ├── css/
│ │ └── style.css # Application styles
│ └── images/
│ └── logo.svg # Application logo
└── templates/
├── import.html # Import/profile selection page
└── index.html # Main explorer interface
```
## Configuration Examples
### Example 1: Basic Setup (Default)
Minimal configuration with Docker volume:
```bash
# Create .env file
cat > .env << EOF
AWS_CONFIG_PATH=$HOME/.aws
PUID=$(id -u)
PGID=$(id -g)
EOF
# Run
docker-compose up --build
# or: podman-compose up --build
```
### Example 2: Local Data Directory
Store database in local directory for easy access:
```bash
# Create .env file
cat > .env << EOF
AWS_CONFIG_PATH=$HOME/.aws
PUID=$(id -u)
PGID=$(id -g)
DATA_PATH=./data
EOF
# Use the local compose file
docker-compose -f docker-compose.local.yml up --build
```
### Example 3: Custom AWS Credentials Location
If your AWS credentials are in a non-standard location:
```bash
# Create .env file
cat > .env << EOF
AWS_CONFIG_PATH=/path/to/custom/.aws
PUID=$(id -u)
PGID=$(id -g)
EOF
docker-compose up --build
```
### Example 4: Debug Mode
Enable detailed logging for troubleshooting:
```bash
# Create .env file
cat > .env << EOF
AWS_CONFIG_PATH=$HOME/.aws
PUID=$(id -u)
PGID=$(id -g)
DEBUG=true
EOF
docker-compose up --build
```
### Example 5: Custom Port
Run on a different port (e.g., 8080):
```bash
# Create .env file
cat > .env << EOF
AWS_CONFIG_PATH=$HOME/.aws
PUID=$(id -u)
PGID=$(id -g)
SGO_PORT=8080
EOF
# Access at http://localhost:8080
docker-compose up --build
```
## Troubleshooting
### Container Issues
**Cannot connect to Docker daemon:**
```bash
# Start Docker service
sudo systemctl start docker # Linux
# or open Docker Desktop # macOS/Windows
```
**Port 5000 already in use:**
```bash
# Change port in docker-compose.yml:
ports:
  - "8080:5000"  # Use port 8080 instead
```
**AWS credentials not found in container:**
- Ensure you have created a `.env` file with `AWS_CONFIG_PATH` set
- Verify `~/.aws/config` and `~/.aws/credentials` exist on your host machine
- Check file permissions on `~/.aws` directory
- Example:
```bash
cat > .env << EOF
AWS_CONFIG_PATH=$HOME/.aws
PUID=$(id -u)
PGID=$(id -g)
EOF
```
**Permission denied errors:**
```bash
# Set PUID/PGID to match your user
cat > .env << EOF
PUID=$(id -u)
PGID=$(id -g)
EOF
# Rebuild container
docker-compose down && docker-compose up --build
```
**Database locked or permission errors:**
```bash
# If using local directory mode, ensure proper ownership
sudo chown -R $(id -u):$(id -g) ./data
# Or use Docker volume mode (default) which handles permissions automatically
```
### AWS Configuration Issues
**No AWS profiles found:**
Ensure you have `~/.aws/config` file with profiles configured:
```ini
[profile nonprod-p1p2-admin]
region = us-west-2
mfa_serial = arn:aws:iam::123456789012:mfa/username
```
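To check which profiles will be detected, the section names can be listed straight from the config file (a sketch assuming the standard `[default]` / `[profile name]` layout; point it at your real `~/.aws/config`):

```bash
# Sample config for illustration
cat > /tmp/sample-aws-config << 'EOF'
[default]
region = us-east-1

[profile nonprod-p1p2-admin]
region = us-west-2
mfa_serial = arn:aws:iam::123456789012:mfa/username
EOF

# Print each profile name ([default] plus every [profile xyz] section)
awk -F'[][]' '/^\[default\]/{print "default"} /^\[profile /{sub(/^profile /, "", $2); print $2}' /tmp/sample-aws-config
```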
**MFA authentication fails:**
- Verify your MFA code is current (codes expire every 30 seconds)
- Check that `mfa_serial` is configured in `~/.aws/config`
- Ensure your AWS credentials in `~/.aws/credentials` are valid
**Import fails:**
- Check network connectivity to AWS
- Verify you have permissions to describe EC2 instances and security groups
- Look at the progress log for specific error messages
### Platform-Specific Notes
**Windows:**
- Use Git Bash, WSL2, or PowerShell for running scripts
- Docker Desktop must be running
- WSL2 backend recommended for better performance
**macOS:**
- Docker Desktop must be running
- Ensure Docker has access to your home directory in Docker Desktop settings
**Linux:**
- You may need to run Docker commands with `sudo` or add your user to the `docker` group
- Podman works without root by default
## Quick Reference
### Essential Commands
```bash
# Start
docker-compose up --build
# or: podman-compose up --build
# Stop
docker-compose down
# or: Ctrl+C
# View logs
docker-compose logs -f
# Rebuild after changes
docker-compose up --build
# Remove everything including data
docker-compose down -v
```
### Quick .env Setup
```bash
# Minimal configuration for most users
cat > .env << EOF
AWS_CONFIG_PATH=$HOME/.aws
PUID=$(id -u)
PGID=$(id -g)
EOF
# Then run
docker-compose up --build
```
### Data Location
- **Docker volume (default)**: Managed by Docker, survives rebuilds
```bash
# Inspect volume
docker volume inspect sgo-data
# Backup volume
docker run --rm -v sgo-data:/data -v "$(pwd)":/backup alpine tar czf /backup/sgo-backup.tar.gz -C /data .
```
- **Local directory**: `./data/aws_export.db`
```bash
# Use local mode
docker-compose -f docker-compose.local.yml up --build
```
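Restoring a backup is the same `docker run` pattern with `tar xzf`; the tar round-trip itself can be rehearsed locally first (a sketch; stop the app before restoring into the volume):

```bash
# Local dry run of the backup/restore round-trip (no container needed)
mkdir -p /tmp/sgo-demo/data /tmp/sgo-demo/restore
echo demo > /tmp/sgo-demo/data/aws_export.db
tar czf /tmp/sgo-demo/backup.tar.gz -C /tmp/sgo-demo/data .
tar xzf /tmp/sgo-demo/backup.tar.gz -C /tmp/sgo-demo/restore
ls /tmp/sgo-demo/restore   # aws_export.db

# Restore into the Docker volume (mirrors the backup command above):
#   docker run --rm -v sgo-data:/data -v "$(pwd)":/backup alpine tar xzf /backup/sgo-backup.tar.gz -C /data
```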
## License