HomeAudit/migration_scripts/scripts/collect_secrets.sh
admin 705a2757c1 Major infrastructure migration and Vaultwarden PostgreSQL troubleshooting
COMPREHENSIVE CHANGES:

INFRASTRUCTURE MIGRATION:
- Migrated services to Docker Swarm on OMV800 (192.168.50.229)
- Deployed PostgreSQL database for Vaultwarden migration
- Updated all stack configurations for Docker Swarm compatibility
- Added comprehensive monitoring stack (Prometheus, Grafana, Blackbox)
- Implemented proper secret management for all services

VAULTWARDEN POSTGRESQL MIGRATION:
- Attempted migration from SQLite to PostgreSQL for NFS compatibility
- Created PostgreSQL stack with proper user/password configuration
- Built custom Vaultwarden image with PostgreSQL support
- Troubleshot persistent SQLite fallback issue despite PostgreSQL config
- Identified known issue where Vaultwarden silently falls back to SQLite
- Added ENABLE_DB_WAL=false to prevent filesystem compatibility issues
- Current status: Old Vaultwarden on lenovo410 still working, new one has config issues
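
One way to rule out the silent SQLite fallback is to check the connection string Vaultwarden actually receives: it only uses PostgreSQL when `DATABASE_URL` carries a `postgresql://` scheme, otherwise it falls back to SQLite. A minimal sketch — the user, password, and hostname here are placeholders, not the real stack values:

```shell
# Placeholder values; the real ones live in Docker secrets.
DB_USER="vaultwarden"
DB_PASS="changeme"
DB_HOST="postgres"

# Vaultwarden treats anything without a postgresql:// scheme (or an unset
# DATABASE_URL) as a SQLite path, which is the silent-fallback failure mode.
DATABASE_URL="postgresql://${DB_USER}:${DB_PASS}@${DB_HOST}:5432/vaultwarden"
echo "$DATABASE_URL"
# → postgresql://vaultwarden:changeme@postgres:5432/vaultwarden
```

Comparing this against what `docker exec <container> env` reports inside the running service is usually the quickest way to spot a mis-wired secret.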

PAPERLESS SERVICES:
- Successfully deployed Paperless-NGX and Paperless-AI on OMV800
- Both services running on ports 8000 and 3000 respectively
- Caddy configuration updated for external access
- Services accessible via paperless.pressmess.duckdns.org and paperless-ai.pressmess.duckdns.org

CADDY CONFIGURATION:
- Updated Caddyfile on Surface (192.168.50.254) for new service locations
- Fixed Vaultwarden reverse proxy to point to new Docker Swarm service
- Removed old notification hub reference that was causing conflicts
- All services properly configured for external access via DuckDNS

BACKUP AND DISCOVERY:
- Created comprehensive backup system for all hosts
- Generated detailed discovery reports for infrastructure analysis
- Implemented automated backup validation scripts
- Created migration progress tracking and verification reports
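
A backup-validation pass of the kind described can be as simple as comparing checksums of the source and the copy. A generic sketch, with illustrative paths rather than the real backup layout:

```shell
# Create a demo file and a "backup" copy of it (paths are illustrative).
src=$(mktemp)
echo "demo payload" > "$src"
cp "$src" "$src.bak"

# Checksums must match for the copy to count as a valid backup.
if [ "$(sha256sum "$src" | cut -d' ' -f1)" = "$(sha256sum "$src.bak" | cut -d' ' -f1)" ]; then
    status="OK"
else
    status="MISMATCH"
fi
echo "backup $status"
rm -f "$src" "$src.bak"
```

The same compare-after-copy pattern scales to directory trees via `sha256sum` over a sorted file list on each side.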

MONITORING STACK:
- Deployed Prometheus, Grafana, and Blackbox monitoring
- Created infrastructure and system overview dashboards
- Added proper service discovery and alerting configuration
- Implemented performance monitoring for all critical services

DOCUMENTATION:
- Reorganized documentation into logical structure
- Created comprehensive migration playbook and troubleshooting guides
- Added hardware specifications and optimization recommendations
- Documented all configuration changes and service dependencies

CURRENT STATUS:
- Paperless services: Working and accessible externally
- Vaultwarden: PostgreSQL configuration issues, old instance still working
- Monitoring: Deployed and operational
- Caddy: Updated and working for external access
- PostgreSQL: Database running, connection issues with Vaultwarden

NEXT STEPS:
- Continue troubleshooting Vaultwarden PostgreSQL configuration
- Consider alternative approaches for Vaultwarden migration
- Validate all external service access
- Complete final migration validation

TECHNICAL NOTES:
- Used Docker Swarm for orchestration on OMV800
- Implemented proper secret management for sensitive data
- Added comprehensive logging and monitoring
- Created automated backup and validation scripts
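
The "proper secret management" above presumably means Docker Swarm secrets. A generic sketch of generating and registering one — the secret name is illustrative, `openssl` is assumed available, and the `docker secret create` line only works on a Swarm manager node:

```shell
# Generate a random 32-character hex value for a new credential
# (16 random bytes, hex-encoded).
NEW_SECRET=$(openssl rand -hex 16)
echo "${#NEW_SECRET}"
# → 32

# Register it as a Swarm secret (run on a manager; the name is illustrative):
# printf '%s' "$NEW_SECRET" | docker secret create vaultwarden_db_password -
```

Regenerating values this way after the migration matches the "regenerate all secrets" guidance in the script's summary notes.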
2025-08-30 20:18:44 -04:00


#!/bin/bash
# Collect Secrets and Environment Variables
# This script collects all secrets, passwords, and environment variables from the infrastructure
set -euo pipefail
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Function to print colored output
print_status() {
    echo -e "${GREEN}[INFO]${NC} $1"
}

print_warning() {
    echo -e "${YELLOW}[WARNING]${NC} $1"
}

print_error() {
    echo -e "${RED}[ERROR]${NC} $1"
}

print_header() {
    echo -e "${BLUE}[HEADER]${NC} $1"
}
# Configuration
HOSTS=("omv800.local" "jonathan-2518f5u" "surface" "fedora" "audrey" "lenovo420")
OUTPUT_DIR="${1:-/backup/secrets_inventory}"
ALL_HOSTS="${2:-false}"
# Function to collect secrets from a single host
collect_host_secrets() {
    local host=$1
    local host_dir="$OUTPUT_DIR/$host"
    print_status "Collecting secrets from $host..."

    # Create host directory
    mkdir -p "$host_dir"/{env,files,docker,validation}

    # Collect Docker container names
    ssh "$host" "docker ps --format '{{.Names}}'" > "$host_dir/containers.txt" 2>/dev/null || true

    # Collect environment variables from running containers (sanitized)
    while IFS= read -r container; do
        if [[ -n "$container" ]]; then
            print_status "  Collecting env from $container..."
            ssh "$host" "docker inspect $container" > "$host_dir/docker/${container}_inspect.json" 2>/dev/null || true
            ssh "$host" "docker exec $container env 2>/dev/null | sed 's/\(PASSWORD\|SECRET\|KEY\|TOKEN\)=.*/\1=REDACTED/g'" > "$host_dir/env/${container}.env.sanitized" 2>/dev/null || true
        fi
    done < "$host_dir/containers.txt"

    # Collect Docker Compose files
    ssh "$host" "find /opt -name 'docker-compose.yml' -o -name 'docker-compose.yaml' 2>/dev/null" > "$host_dir/compose_files.txt" 2>/dev/null || true

    # Collect environment files
    ssh "$host" "find /opt -name '*.env' 2>/dev/null" > "$host_dir/env_files.txt" 2>/dev/null || true

    # Collect configuration files with potential secrets
    ssh "$host" "find /opt -name '*config*' -type f \( -name '*.yml' -o -name '*.yaml' -o -name '*.json' \) 2>/dev/null" > "$host_dir/config_files.txt" 2>/dev/null || true

    # Collect bind mounts that might contain secrets
    ssh "$host" "docker inspect \$(docker ps -q) 2>/dev/null | jq -r '.[] | select(.HostConfig.Binds != null) | .HostConfig.Binds[]' 2>/dev/null | grep -E '(\.env|/secrets/|/config/)'" > "$host_dir/bind_mounts.txt" 2>/dev/null || true

    # Collect system secrets
    ssh "$host" "sudo find /etc -name '*secret*' -o -name '*password*' -o -name '*key*' 2>/dev/null" > "$host_dir/system_secrets.txt" 2>/dev/null || true

    print_status "✅ Secrets collected from $host"
}
# Function to collect database passwords
collect_database_secrets() {
    local host=$1
    local host_dir="$OUTPUT_DIR/$host"
    print_status "Collecting database secrets from $host..."

    # PostgreSQL users (password hashes; psql runs as superuser inside the container)
    ssh "$host" "docker exec \$(docker ps -q -f name=postgres) psql -U postgres -c \"SELECT usename, passwd FROM pg_shadow;\" 2>/dev/null" > "$host_dir/database_postgres_users.txt" 2>/dev/null || true

    # MariaDB users: read the root password from the container's standard
    # MYSQL_ROOT_PASSWORD env var; a bare interactive -p would hang over ssh
    ssh "$host" "docker exec \$(docker ps -q -f name=mariadb) sh -c 'mysql -u root -p\"\$MYSQL_ROOT_PASSWORD\" -e \"SELECT User, Host FROM mysql.user;\"' 2>/dev/null" > "$host_dir/database_mariadb_users.txt" 2>/dev/null || true

    # Redis password
    ssh "$host" "docker exec \$(docker ps -q -f name=redis) redis-cli CONFIG GET requirepass 2>/dev/null" > "$host_dir/database_redis_password.txt" 2>/dev/null || true
}
# Function to collect API keys and tokens
collect_api_secrets() {
    local host=$1
    local host_dir="$OUTPUT_DIR/$host"
    print_status "Collecting API secrets from $host..."

    # Collect from environment files
    while IFS= read -r env_file; do
        if [[ -n "$env_file" ]]; then
            filename=$(basename "$env_file")
            ssh "$host" "cat '$env_file' 2>/dev/null | grep -E '(API_KEY|TOKEN|SECRET)' | sed 's/=.*/=REDACTED/'" > "$host_dir/api_secrets_${filename}.txt" 2>/dev/null || true
        fi
    done < "$host_dir/env_files.txt"

    # Collect from configuration files
    while IFS= read -r config_file; do
        if [[ -n "$config_file" ]]; then
            filename=$(basename "$config_file")
            ssh "$host" "cat '$config_file' 2>/dev/null | grep -iE '(api_key|token|secret)' | sed 's/:.*/: REDACTED/'" > "$host_dir/api_secrets_${filename}.txt" 2>/dev/null || true
        fi
    done < "$host_dir/config_files.txt"
}
# Function to validate secrets collection
validate_secrets_collection() {
    print_header "Validating Secrets Collection"
    local total_hosts=0
    local successful_hosts=0
    for host in "${HOSTS[@]}"; do
        if [[ -d "$OUTPUT_DIR/$host" ]]; then
            # Note: a plain ((var++)) returns status 1 when var is 0, which
            # would abort the script under set -e; use an assignment instead
            total_hosts=$((total_hosts + 1))
            # Check if essential files were collected
            if [[ -f "$OUTPUT_DIR/$host/containers.txt" ]] && \
               [[ -d "$OUTPUT_DIR/$host/env" ]] && \
               [[ -d "$OUTPUT_DIR/$host/docker" ]]; then
                successful_hosts=$((successful_hosts + 1))
                print_status "$host: Secrets collected successfully"
            else
                print_warning "⚠️ $host: Incomplete secrets collection"
            fi
        else
            print_error "$host: No secrets directory found"
        fi
    done
    print_status "Secrets collection summary: $successful_hosts/$total_hosts hosts successful"
    if [[ $successful_hosts -eq $total_hosts ]]; then
        print_status "✅ All hosts processed successfully"
        return 0
    else
        print_warning "⚠️ Some hosts had issues with secrets collection"
        return 1
    fi
}
# Function to create secrets summary
create_secrets_summary() {
    print_header "Creating Secrets Summary"
    # Unquoted delimiter so $(date) and ${#HOSTS[@]} expand; a quoted 'EOF'
    # would emit them literally
    cat > "$OUTPUT_DIR/secrets_summary.md" << EOF
# Secrets Inventory Summary
**Generated:** $(date)
**Total Hosts:** ${#HOSTS[@]}

## Hosts Processed
EOF
    for host in "${HOSTS[@]}"; do
        if [[ -d "$OUTPUT_DIR/$host" ]]; then
            local container_count=$(wc -l < "$OUTPUT_DIR/$host/containers.txt" 2>/dev/null || echo "0")
            local env_file_count=$(wc -l < "$OUTPUT_DIR/$host/env_files.txt" 2>/dev/null || echo "0")
            local config_file_count=$(wc -l < "$OUTPUT_DIR/$host/config_files.txt" 2>/dev/null || echo "0")
            cat >> "$OUTPUT_DIR/secrets_summary.md" << EOF
- **$host**: $container_count containers, $env_file_count env files, $config_file_count config files
EOF
        else
            cat >> "$OUTPUT_DIR/secrets_summary.md" << EOF
- **$host**: ❌ Failed to collect secrets
EOF
        fi
    done
    # Static trailer: quoted delimiter is intentional here (no expansions needed)
    cat >> "$OUTPUT_DIR/secrets_summary.md" << 'EOF'

## Critical Secrets Found
- Database passwords (PostgreSQL, MariaDB, Redis)
- API keys and tokens
- Service authentication credentials
- SSL/TLS certificates
- Docker registry credentials

## Security Notes
- All passwords and tokens have been redacted in the collected files
- Original files remain unchanged on source systems
- Use this inventory for migration planning only
- Regenerate all secrets after migration for security

## Next Steps
1. Review collected secrets inventory
2. Plan secret migration strategy
3. Create new secrets for target environment
4. Update service configurations with new secrets
EOF
    print_status "✅ Secrets summary created: $OUTPUT_DIR/secrets_summary.md"
}
# Main function
main() {
    print_header "Secrets Collection Process"
    echo "This script will collect all secrets, passwords, and environment variables"
    echo "from your infrastructure for migration planning."
    echo ""

    # Create output directory
    mkdir -p "$OUTPUT_DIR"

    # Confirm collection
    read -p "Do you want to proceed with secrets collection? (yes/no): " confirm
    if [[ "$confirm" != "yes" ]]; then
        print_status "Secrets collection cancelled by user"
        exit 0
    fi
    echo ""
    print_warning "IMPORTANT: This will collect sensitive information from all hosts"
    print_warning "Ensure you have proper access and authorization"
    echo ""
    read -p "Are you authorized to collect this information? (yes/no): " confirm
    if [[ "$confirm" != "yes" ]]; then
        print_status "Secrets collection cancelled - authorization not confirmed"
        exit 0
    fi

    # Start collection process
    print_header "Starting Secrets Collection"

    # Collect secrets from each host
    for host in "${HOSTS[@]}"; do
        if ssh -o ConnectTimeout=10 "$host" "echo 'SSH OK'" > /dev/null 2>&1; then
            collect_host_secrets "$host"
            collect_database_secrets "$host"
            collect_api_secrets "$host"
        else
            print_error "❌ Cannot connect to $host - skipping"
        fi
    done

    # Validate collection (non-fatal: its non-zero return would otherwise
    # abort the script under set -e before the summary is written)
    validate_secrets_collection || true

    # Create summary
    create_secrets_summary

    # Show final summary
    print_header "Secrets Collection Complete"
    echo ""
    echo "📊 Collection Summary:"
    echo "  - Output directory: $OUTPUT_DIR"
    echo "  - Hosts processed: ${#HOSTS[@]}"
    echo "  - Secrets inventory: $OUTPUT_DIR/secrets_summary.md"
    echo ""
    echo "🔒 Security Notes:"
    echo "  - All passwords and tokens have been redacted"
    echo "  - Original files remain unchanged"
    echo "  - Use this inventory for migration planning only"
    echo ""
    echo "📋 Next Steps:"
    echo "  1. Review the secrets inventory"
    echo "  2. Plan your secret migration strategy"
    echo "  3. Create new secrets for the target environment"
    echo "  4. Update service configurations"
    echo ""
    print_status "Secrets collection completed successfully!"
}
# Run main function
main "$@"