

Rumen Damyanov edited this page Aug 23, 2025 · 1 revision

Monitoring & Logging

This guide covers monitoring, logging, and observability for nginx-torblocker deployments: metrics collection, log analysis, alerting, and integration with external monitoring systems.

Overview

Effective monitoring and logging are crucial for keeping an nginx-torblocker deployment secure and performant. The sections below range from basic logging through advanced integrations with Prometheus, Grafana, and the ELK stack.

Logging Configuration

Basic Logging Setup

Configure comprehensive logging for nginx-torblocker:

# Basic logging configuration
http {
    # Standard access log with torblock information
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                   '$status $body_bytes_sent "$http_referer" '
                   '"$http_user_agent" "$http_x_forwarded_for" '
                   'torblock_status=$torblock_status '
                   'tor_exit=$torblock_exit_node';
    
    # Detailed torblock-specific log format
    log_format torblock '$remote_addr [$time_local] "$request" $status '
                       'torblock_status=$torblock_status '
                       'tor_exit=$torblock_exit_node '
                       'cache_status=$torblock_cache_status '
                       'lookup_time=$torblock_lookup_time '
                       'last_update=$torblock_last_update';
    
    # Security-focused log format
    log_format security '$time_iso8601 $remote_addr "$request" $status '
                       'tor_blocked=$torblock_blocked '
                       'user_agent="$http_user_agent" '
                       'referer="$http_referer"';
    
    # Performance monitoring log format
    log_format performance '$remote_addr [$time_local] "$request" $status '
                          'rt=$request_time '
                          'ut=$upstream_response_time '
                          'tor_lookup=$torblock_lookup_time '
                          'cache_hit=$torblock_cache_status '
                          'bytes=$body_bytes_sent';
    
    # Configure log files
    access_log /var/log/nginx/access.log main;
    access_log /var/log/nginx/torblock.log torblock;
    access_log /var/log/nginx/security.log security;
    access_log /var/log/nginx/performance.log performance;
    
    error_log /var/log/nginx/error.log warn;
}

server {
    listen 80;
    server_name example.com;
    
    # Enable torblock debugging (for troubleshooting only)
    torblock_debug off;  # Set to 'on' only during debugging
    
    location / {
        torblock on;
        # Your application configuration
    }
}

Structured Logging (JSON)

Enable JSON logging for better parsing by log aggregation systems:

# JSON log format for structured logging
http {
    log_format json escape=json '{'
        '"timestamp": "$time_iso8601", '
        '"remote_addr": "$remote_addr", '
        '"request": "$request", '
        '"status": $status, '
        '"body_bytes_sent": $body_bytes_sent, '
        '"request_time": $request_time, '
        '"user_agent": "$http_user_agent", '
        '"torblock": {'
            '"status": "$torblock_status", '
            '"exit_node": "$torblock_exit_node", '
            '"blocked": "$torblock_blocked", '
            '"cache_status": "$torblock_cache_status", '
            '"lookup_time": $torblock_lookup_time, '
            '"last_update": "$torblock_last_update"'
        '}'
    '}';
    
    access_log /var/log/nginx/access.json json;
}
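Entries in this format can be consumed line-by-line by any JSON-aware tool. A minimal sketch of reading one entry in Python — the sample line is hand-written to mirror the format above, not real log output:

```python
import json

# A sample entry shaped like the "json" log format above
line = ('{"timestamp": "2025-08-23T10:00:00+00:00", "remote_addr": "203.0.113.7", '
        '"request": "GET / HTTP/1.1", "status": 403, "body_bytes_sent": 153, '
        '"request_time": 0.001, "user_agent": "curl/8.0", '
        '"torblock": {"status": "blocked", "exit_node": "true", "blocked": "1", '
        '"cache_status": "hit", "lookup_time": 2, "last_update": "1724407200"}}')

entry = json.loads(line)

# Nested torblock fields are plain dict lookups
if entry["torblock"]["status"] == "blocked":
    print(f"{entry['remote_addr']} blocked (cache={entry['torblock']['cache_status']})")
    # 203.0.113.7 blocked (cache=hit)
```

Note that `status`, `body_bytes_sent`, `request_time`, and `lookup_time` are emitted unquoted in the log format, so they arrive as JSON numbers rather than strings.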

Log Rotation Configuration

Set up proper log rotation to prevent disk space issues:

# /etc/logrotate.d/nginx-torblocker
# Note: torblock.log and security.log get longer retention in the block
# below. logrotate rejects duplicate entries, so they must not also be
# matched by a glob here -- list the remaining logs explicitly.
/var/log/nginx/access.log /var/log/nginx/error.log /var/log/nginx/performance.log /var/log/nginx/access.json {
    daily
    missingok
    rotate 52
    compress
    delaycompress
    notifempty
    create 644 nginx nginx
    
    postrotate
        if [ -f /var/run/nginx.pid ]; then
            kill -USR1 `cat /var/run/nginx.pid`
        fi
    endscript
}

# Torblock-specific logs with different retention
/var/log/nginx/torblock.log /var/log/nginx/security.log {
    daily
    missingok
    rotate 90
    compress
    delaycompress
    notifempty
    create 644 nginx nginx
    
    postrotate
        if [ -f /var/run/nginx.pid ]; then
            kill -USR1 `cat /var/run/nginx.pid`
        fi
    endscript
}

Metrics Collection

Nginx Module Status

Create a status endpoint for metrics collection:

# Metrics collection endpoint
server {
    listen 127.0.0.1:8080;
    server_name localhost;
    
    location /nginx_status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        deny all;
    }
    
    location /torblock_status {
        access_log off;
        allow 127.0.0.1;
        deny all;
        
        default_type text/plain;
        return 200 "torblock_cache_size: $torblock_cache_size
torblock_cache_hits: $torblock_cache_hits
torblock_cache_misses: $torblock_cache_misses
torblock_blocked_total: $torblock_blocked_total
torblock_last_update: $torblock_last_update
torblock_exit_count: $torblock_exit_count
";
    }
}

Custom Metrics Exporter

Python script to collect and export torblock metrics:

#!/usr/bin/env python3
# torblock-metrics-exporter.py

import re
import time
import json
import urllib.request
from datetime import datetime
from typing import Dict, Any

class TorblockMetricsExporter:
    def __init__(self, nginx_status_url="http://127.0.0.1:8080"):
        self.nginx_status_url = nginx_status_url
        self.metrics_history = []
    
    def collect_nginx_status(self) -> Dict[str, Any]:
        """Collect basic nginx status metrics"""
        try:
            with urllib.request.urlopen(f"{self.nginx_status_url}/nginx_status") as response:
                content = response.read().decode('utf-8')
            
            # Parse nginx status
            active_match = re.search(r'Active connections: (\d+)', content)
            # Third number on the "accepts handled requests" line = total requests
            requests_match = re.search(r'(\d+)\s+(\d+)\s+(\d+)', content)
            
            metrics = {
                'active_connections': int(active_match.group(1)) if active_match else 0,
                'total_requests': int(requests_match.group(3)) if requests_match else 0,
                'timestamp': datetime.now().isoformat()
            }
            
            return metrics
        except Exception as e:
            print(f"Error collecting nginx status: {e}")
            return {}
    
    def collect_torblock_status(self) -> Dict[str, Any]:
        """Collect torblock-specific metrics"""
        try:
            with urllib.request.urlopen(f"{self.nginx_status_url}/torblock_status") as response:
                content = response.read().decode('utf-8')
            
            metrics = {}
            for line in content.strip().split('\n'):
                if ':' in line:
                    key, value = line.split(':', 1)
                    key = key.strip()
                    value = value.strip()
                    
                    # Convert numeric values
                    if value.isdigit():
                        metrics[key] = int(value)
                    else:
                        metrics[key] = value
            
            return metrics
        except Exception as e:
            print(f"Error collecting torblock status: {e}")
            return {}
    
    def parse_access_logs(self, log_file="/var/log/nginx/torblock.log") -> Dict[str, Any]:
        """Parse access logs for additional metrics"""
        metrics = {
            'blocked_requests_last_hour': 0,
            'cache_hit_rate': 0,
            'average_lookup_time': 0
        }
        
        try:
            current_time = time.time()
            hour_ago = current_time - 3600
            
            blocked_count = 0
            cache_hits = 0
            cache_total = 0
            lookup_times = []
            
            with open(log_file, 'r') as f:
                for line in f:
                    # Parse timestamp
                    timestamp_match = re.search(r'\[([^\]]+)\]', line)
                    if not timestamp_match:
                        continue
                    
                    try:
                        log_time = datetime.strptime(
                            timestamp_match.group(1), 
                            '%d/%b/%Y:%H:%M:%S %z'
                        ).timestamp()
                    except ValueError:
                        continue
                    
                    # Only process last hour
                    if log_time < hour_ago:
                        continue
                    
                    # Count blocked requests
                    if 'torblock_status=blocked' in line:
                        blocked_count += 1
                    
                    # Cache statistics
                    if 'cache_status=' in line:
                        cache_total += 1
                        if 'cache_status=hit' in line:
                            cache_hits += 1
                    
                    # Lookup times
                    lookup_match = re.search(r'lookup_time=(\d+)', line)
                    if lookup_match:
                        lookup_times.append(int(lookup_match.group(1)))
            
            metrics['blocked_requests_last_hour'] = blocked_count
            metrics['cache_hit_rate'] = (cache_hits / cache_total * 100) if cache_total > 0 else 0
            metrics['average_lookup_time'] = (sum(lookup_times) / len(lookup_times)) if lookup_times else 0
            
        except Exception as e:
            print(f"Error parsing access logs: {e}")
        
        return metrics
    
    def export_prometheus_format(self) -> str:
        """Export metrics in Prometheus format"""
        nginx_metrics = self.collect_nginx_status()
        torblock_metrics = self.collect_torblock_status()
        log_metrics = self.parse_access_logs()
        
        prometheus_output = []
        
        # Nginx metrics
        if 'active_connections' in nginx_metrics:
            prometheus_output.append(f"nginx_active_connections {nginx_metrics['active_connections']}")
        if 'total_requests' in nginx_metrics:
            prometheus_output.append(f"nginx_total_requests {nginx_metrics['total_requests']}")
        
        # Torblock metrics
        for key, value in torblock_metrics.items():
            if isinstance(value, (int, float)):
                metric_name = f"torblock_{key}"
                prometheus_output.append(f"{metric_name} {value}")
        
        # Log-derived metrics
        for key, value in log_metrics.items():
            metric_name = f"torblock_{key}"
            prometheus_output.append(f"{metric_name} {value}")
        
        return '\n'.join(prometheus_output)
    
    def export_json_format(self) -> str:
        """Export metrics in JSON format"""
        all_metrics = {
            'timestamp': datetime.now().isoformat(),
            'nginx': self.collect_nginx_status(),
            'torblock': self.collect_torblock_status(),
            'logs': self.parse_access_logs()
        }
        
        return json.dumps(all_metrics, indent=2)

def main():
    import sys
    
    exporter = TorblockMetricsExporter()
    
    format_type = sys.argv[1] if len(sys.argv) > 1 else 'json'
    
    if format_type == 'prometheus':
        print(exporter.export_prometheus_format())
    else:
        print(exporter.export_json_format())

if __name__ == "__main__":
    main()

Prometheus Integration

Prometheus Configuration

Set up Prometheus to scrape nginx-torblocker metrics:

# prometheus.yml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'nginx-torblocker'
    static_configs:
      - targets: ['localhost:9113']  # nginx-prometheus-exporter
    scrape_interval: 30s
    metrics_path: /metrics
    
  - job_name: 'torblock-custom'
    static_configs:
      - targets: ['localhost:8080']
    scrape_interval: 60s
    metrics_path: /torblock_metrics

Nginx Prometheus Exporter Setup

#!/bin/bash
# setup-prometheus-exporter.sh

# Download and setup nginx-prometheus-exporter
wget https://github.com/nginxinc/nginx-prometheus-exporter/releases/download/v0.10.0/nginx-prometheus-exporter_0.10.0_linux_amd64.tar.gz
tar xzf nginx-prometheus-exporter_0.10.0_linux_amd64.tar.gz

# Create systemd service
cat > /etc/systemd/system/nginx-prometheus-exporter.service << 'EOF'
[Unit]
Description=Nginx Prometheus Exporter
After=network.target

[Service]
Type=simple
User=nginx
ExecStart=/usr/local/bin/nginx-prometheus-exporter -nginx.scrape-uri=http://127.0.0.1:8080/nginx_status
Restart=always

[Install]
WantedBy=multi-user.target
EOF

# Install and start service
sudo mv nginx-prometheus-exporter /usr/local/bin/
sudo systemctl daemon-reload
sudo systemctl enable nginx-prometheus-exporter
sudo systemctl start nginx-prometheus-exporter

echo "Nginx Prometheus exporter installed and started"
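The exporter serves plain Prometheus exposition text. A hedged sketch of turning such output into a dict for ad-hoc scripting — the sample text and metric names are illustrative, not guaranteed exporter output:

```python
def parse_prometheus_text(text: str) -> dict:
    """Parse simple (un-labelled) Prometheus exposition lines into name -> float."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        # Skip comment lines (# HELP / # TYPE) and blanks
        if not line or line.startswith('#'):
            continue
        parts = line.rsplit(None, 1)  # split on last whitespace: name vs value
        if len(parts) == 2:
            name, value = parts
            try:
                metrics[name] = float(value)
            except ValueError:
                pass  # non-numeric sample, ignore
    return metrics

sample = """# HELP nginx_up Status of the last scrape
nginx_up 1
nginx_connections_active 3
torblock_blocked_total 120
"""
print(parse_prometheus_text(sample))
```

For anything beyond quick checks, use an actual client such as `prometheus_client`'s parser rather than hand-rolled splitting.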

Custom Torblock Metrics Endpoint

# Add to nginx configuration for Prometheus scraping
server {
    listen 127.0.0.1:8080;
    
    location /torblock_metrics {
        access_log off;
        allow 127.0.0.1;
        deny all;
        
        # Call external script to generate Prometheus metrics
        # (requires ngx_http_lua_module, e.g. an OpenResty build)
        content_by_lua_block {
            local handle = io.popen("/usr/local/bin/torblock-metrics-exporter.py prometheus")
            local result = handle:read("*a")
            handle:close()
            
            ngx.header.content_type = "text/plain"
            ngx.say(result)
        }
    }
}

Grafana Dashboards

Torblock Overview Dashboard

{
  "dashboard": {
    "id": null,
    "title": "Nginx Torblock Overview",
    "tags": ["nginx", "torblock", "security"],
    "timezone": "browser",
    "panels": [
      {
        "id": 1,
        "title": "Blocked Requests Rate",
        "type": "stat",
        "targets": [
          {
            "expr": "rate(torblock_blocked_total[5m])",
            "legendFormat": "Blocked/sec"
          }
        ],
        "fieldConfig": {
          "defaults": {
            "color": {
              "mode": "thresholds"
            },
            "thresholds": {
              "steps": [
                {"color": "green", "value": null},
                {"color": "yellow", "value": 10},
                {"color": "red", "value": 50}
              ]
            }
          }
        }
      },
      {
        "id": 2,
        "title": "Cache Hit Rate",
        "type": "stat",
        "targets": [
          {
            "expr": "torblock_cache_hits / (torblock_cache_hits + torblock_cache_misses) * 100",
            "legendFormat": "Hit Rate %"
          }
        ],
        "fieldConfig": {
          "defaults": {
            "unit": "percent",
            "min": 0,
            "max": 100
          }
        }
      },
      {
        "id": 3,
        "title": "Lookup Performance",
        "type": "timeseries",
        "targets": [
          {
            "expr": "torblock_average_lookup_time",
            "legendFormat": "Avg Lookup Time (ms)"
          }
        ],
        "fieldConfig": {
          "defaults": {
            "unit": "ms"
          }
        }
      },
      {
        "id": 4,
        "title": "Exit List Updates",
        "type": "timeseries",
        "targets": [
          {
            "expr": "torblock_exit_count",
            "legendFormat": "Known Exit Nodes"
          }
        ]
      }
    ],
    "time": {
      "from": "now-1h",
      "to": "now"
    },
    "refresh": "30s"
  }
}

Performance Dashboard

{
  "dashboard": {
    "title": "Nginx Torblock Performance",
    "panels": [
      {
        "id": 1,
        "title": "Request Rate",
        "type": "timeseries",
        "targets": [
          {
            "expr": "rate(nginx_total_requests[5m])",
            "legendFormat": "Total Requests/sec"
          },
          {
            "expr": "rate(torblock_blocked_total[5m])",
            "legendFormat": "Blocked Requests/sec"
          }
        ]
      },
      {
        "id": 2,
        "title": "Response Time Distribution",
        "type": "heatmap",
        "targets": [
          {
            "expr": "histogram_quantile(0.95, rate(nginx_request_duration_seconds_bucket[5m]))",
            "legendFormat": "95th Percentile"
          }
        ]
      },
      {
        "id": 3,
        "title": "Memory Usage",
        "type": "timeseries",
        "targets": [
          {
            "expr": "process_resident_memory_bytes{job=\"nginx-torblocker\"}",
            "legendFormat": "Memory Usage"
          }
        ],
        "fieldConfig": {
          "defaults": {
            "unit": "bytes"
          }
        }
      }
    ]
  }
}

ELK Stack Integration

Logstash Configuration

Configure Logstash to parse nginx-torblocker logs:

# /etc/logstash/conf.d/nginx-torblocker.conf
input {
  file {
    path => "/var/log/nginx/torblock.log"
    start_position => "beginning"
    sincedb_path => "/tmp/logstash_torblock.sincedb"
    codec => "plain"
    tags => ["nginx", "torblock"]
  }
  
  file {
    path => "/var/log/nginx/access.json"
    start_position => "beginning"
    sincedb_path => "/tmp/logstash_access.sincedb"
    codec => "json"
    tags => ["nginx", "access", "json"]
  }
}

filter {
  if "torblock" in [tags] {
    grok {
      match => { 
        "message" => "%{IPORHOST:client_ip} \[%{HTTPDATE:timestamp}\] \"%{WORD:method} %{URIPATH:uri}(?:%{URIPARAM:params})? HTTP/%{NUMBER:http_version}\" %{NUMBER:response_code} torblock_status=%{WORD:torblock_status} tor_exit=%{WORD:tor_exit} cache_status=%{WORD:cache_status} lookup_time=%{NUMBER:lookup_time} last_update=%{WORD:last_update}"
      }
    }
    
    date {
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
    
    mutate {
      convert => { 
        "response_code" => "integer"
        "lookup_time" => "integer"
      }
    }
    
    if [torblock_status] == "blocked" {
      mutate {
        add_tag => ["blocked", "security_event"]
      }
    }
    
    if [tor_exit] != "false" {
      mutate {
        add_tag => ["tor_exit_node"]
      }
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "nginx-torblock-%{+YYYY.MM.dd}"
  }
  
  # Send security events to separate index
  if "security_event" in [tags] {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "security-events-%{+YYYY.MM.dd}"
    }
  }
}
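The grok pattern above can be dry-run outside Logstash. A rough Python equivalent — the regex is a simplification of the grok pattern, not a drop-in replacement, and the sample line is hand-written in the `torblock` log format defined earlier:

```python
import re

# Simplified equivalent of the grok match in the Logstash filter above
pattern = re.compile(
    r'(?P<client_ip>\S+) \[(?P<timestamp>[^\]]+)\] "(?P<method>\w+) (?P<uri>\S+) [^"]+" '
    r'(?P<response_code>\d+) torblock_status=(?P<torblock_status>\w+) '
    r'tor_exit=(?P<tor_exit>\w+) cache_status=(?P<cache_status>\w+) '
    r'lookup_time=(?P<lookup_time>\d+)'
)

line = ('203.0.113.7 [23/Aug/2025:10:00:00 +0000] "GET /index.html HTTP/1.1" 403 '
        'torblock_status=blocked tor_exit=true cache_status=hit '
        'lookup_time=3 last_update=1724407200')

m = pattern.search(line)
if m:
    print(m.group('torblock_status'), m.group('cache_status'))
    # blocked hit
```

Testing the pattern like this before deploying the pipeline avoids silently dropping events on `_grokparsefailure`.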

Elasticsearch Index Template

{
  "index_patterns": ["nginx-torblock-*"],
  "template": {
    "settings": {
      "number_of_shards": 1,
      "number_of_replicas": 1,
      "index.refresh_interval": "30s"
    },
    "mappings": {
      "properties": {
        "timestamp": {
          "type": "date"
        },
        "client_ip": {
          "type": "ip"
        },
        "torblock_status": {
          "type": "keyword"
        },
        "tor_exit": {
          "type": "keyword"
        },
        "cache_status": {
          "type": "keyword"
        },
        "lookup_time": {
          "type": "integer"
        },
        "response_code": {
          "type": "integer"
        },
        "uri": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        }
      }
    }
  }
}

Kibana Visualizations

Create Kibana visualizations for torblock data:

#!/bin/bash
# create-kibana-dashboards.sh

# Create index pattern
curl -X POST "kibana:5601/api/saved_objects/index-pattern/nginx-torblock" \
  -H "Content-Type: application/json" \
  -H "kbn-xsrf: true" \
  -d '{
    "attributes": {
      "title": "nginx-torblock-*",
      "timeFieldName": "timestamp"
    }
  }'

# Create blocked requests visualization
curl -X POST "kibana:5601/api/saved_objects/visualization" \
  -H "Content-Type: application/json" \
  -H "kbn-xsrf: true" \
  -d '{
    "attributes": {
      "title": "Blocked Requests Over Time",
      "visState": "{\"title\":\"Blocked Requests Over Time\",\"type\":\"histogram\",\"params\":{\"grid\":{\"categoryLines\":false,\"style\":{\"color\":\"#eee\"}},\"categoryAxes\":[{\"id\":\"CategoryAxis-1\",\"type\":\"category\",\"position\":\"bottom\",\"show\":true,\"style\":{},\"scale\":{\"type\":\"linear\"},\"labels\":{\"show\":true,\"truncate\":100},\"title\":{}}],\"valueAxes\":[{\"id\":\"ValueAxis-1\",\"name\":\"LeftAxis-1\",\"type\":\"value\",\"position\":\"left\",\"show\":true,\"style\":{},\"scale\":{\"type\":\"linear\",\"mode\":\"normal\"},\"labels\":{\"show\":true,\"rotate\":0,\"filter\":false,\"truncate\":100},\"title\":{\"text\":\"Count\"}}],\"seriesParams\":[{\"show\":\"true\",\"type\":\"histogram\",\"mode\":\"stacked\",\"data\":{\"label\":\"Count\",\"id\":\"1\"},\"valueAxis\":\"ValueAxis-1\",\"drawLinesBetweenPoints\":true,\"showCircles\":true}],\"addTooltip\":true,\"addLegend\":true,\"legendPosition\":\"right\",\"times\":[],\"addTimeMarker\":false},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"count\",\"schema\":\"metric\",\"params\":{}},{\"id\":\"2\",\"enabled\":true,\"type\":\"date_histogram\",\"schema\":\"segment\",\"params\":{\"field\":\"timestamp\",\"interval\":\"auto\",\"customInterval\":\"2h\",\"min_doc_count\":1,\"extended_bounds\":{}}}]}",
      "uiStateJSON": "{}",
      "description": "",
      "version": 1,
      "kibanaSavedObjectMeta": {
        "searchSourceJSON": "{\"index\":\"nginx-torblock\",\"query\":{\"match\":{\"torblock_status\":\"blocked\"}},\"filter\":[]}"
      }
    }
  }'

Alerting Systems

Prometheus Alerting Rules

# /etc/prometheus/rules/torblock.yml
groups:
- name: nginx-torblock
  rules:
  - alert: HighBlockedRequestRate
    expr: rate(torblock_blocked_total[5m]) > 50
    for: 2m
    labels:
      severity: warning
    annotations:
      summary: "High rate of blocked Tor requests"
      description: "Blocked request rate is {{ $value }} per second"
  
  - alert: LowCacheHitRate
    expr: (torblock_cache_hits / (torblock_cache_hits + torblock_cache_misses)) * 100 < 80
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Low cache hit rate"
      description: "Cache hit rate is {{ $value }}%"
  
  - alert: SlowTorblockLookups
    expr: torblock_average_lookup_time > 1000
    for: 3m
    labels:
      severity: critical
    annotations:
      summary: "Slow torblock lookups detected"
      description: "Average lookup time is {{ $value }}ms"
  
  - alert: TorblockUpdateFailure
    expr: changes(torblock_last_update[1h]) == 0
    for: 1h
    labels:
      severity: critical
    annotations:
      summary: "Torblock exit list not updated"
      description: "Exit list hasn't been updated in the last hour"
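For a monotonically increasing counter, the `rate()` expressions above reduce to a delta divided by elapsed seconds. A tiny offline sanity check of the 50/s threshold — the counter values are made up:

```python
def counter_rate(earlier: float, later: float, window_seconds: float) -> float:
    """Per-second rate of a monotonic counter over a window (ignores resets)."""
    return (later - earlier) / window_seconds

# Two samples of torblock_blocked_total taken 5 minutes (300 s) apart
rate = counter_rate(10_000, 28_000, 300)
print(f"{rate:.1f} blocked/s")    # 60.0 blocked/s
print("alert fires:", rate > 50)  # alert fires: True
```

Real `rate()` also extrapolates to the window edges and handles counter resets, so the numbers won't match Prometheus exactly; this is only for reasoning about thresholds.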

Slack Alerting Integration

#!/bin/bash
# slack-alert-script.sh

SLACK_WEBHOOK="https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK"
ALERT_THRESHOLD=100  # Blocked requests within the most recent 1000 log lines

# Monitor blocked requests
BLOCKED_COUNT=$(tail -n 1000 /var/log/nginx/torblock.log | grep "torblock_status=blocked" | wc -l)

if [ "$BLOCKED_COUNT" -gt "$ALERT_THRESHOLD" ]; then
    TIMESTAMP=$(date -Iseconds)
    MESSAGE="🚨 High Tor blocking activity detected: $BLOCKED_COUNT blocked requests in last 1000 log entries at $TIMESTAMP"
    
    curl -X POST -H 'Content-type: application/json' \
        --data "{\"text\":\"$MESSAGE\"}" \
        "$SLACK_WEBHOOK"
fi
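The last-1000-lines check above is a coarse approximation of a per-minute rate. A sketch of a true trailing-window count in Python — the timestamps here are synthetic epoch seconds, not parsed from the log:

```python
from collections import deque

class SlidingWindowCounter:
    """Count events that fall within the trailing `window` seconds."""
    def __init__(self, window: float = 60.0):
        self.window = window
        self.events = deque()  # event timestamps, oldest first

    def add(self, ts: float) -> None:
        self.events.append(ts)

    def count(self, now: float) -> int:
        # Drop events that have aged out of the window
        while self.events and self.events[0] < now - self.window:
            self.events.popleft()
        return len(self.events)

counter = SlidingWindowCounter(window=60)
for ts in [0, 10, 30, 55, 70]:  # synthetic blocked-request timestamps
    counter.add(ts)
print(counter.count(now=70))    # -> 4 (events at 10, 30, 55, 70 remain)
```

Fed from a log tail, this alerts on an actual requests-per-minute figure instead of a line count that shifts with overall traffic volume.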

Email Alerting Script

#!/usr/bin/env python3
# email-alerts.py

import smtplib
import sys
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
from datetime import datetime

class TorblockEmailAlerter:
    def __init__(self, smtp_server, smtp_port, username, password):
        self.smtp_server = smtp_server
        self.smtp_port = smtp_port
        self.username = username
        self.password = password
    
    def send_alert(self, to_addresses, subject, message, severity="INFO"):
        """Send email alert"""
        try:
            msg = MIMEMultipart()
            msg['From'] = self.username
            msg['To'] = ', '.join(to_addresses)
            msg['Subject'] = f"[{severity}] Nginx Torblock Alert: {subject}"
            
            # Create HTML message
            html_body = f"""
            <html>
            <body>
                <h2>Nginx Torblock Alert</h2>
                <p><strong>Severity:</strong> {severity}</p>
                <p><strong>Timestamp:</strong> {datetime.now().isoformat()}</p>
                <p><strong>Subject:</strong> {subject}</p>
                <hr>
                <pre>{message}</pre>
            </body>
            </html>
            """
            
            msg.attach(MIMEText(html_body, 'html'))
            
            # Send email
            with smtplib.SMTP(self.smtp_server, self.smtp_port) as server:
                server.starttls()
                server.login(self.username, self.password)
                server.send_message(msg)
            
            print(f"Alert sent successfully to {to_addresses}")
            
        except Exception as e:
            print(f"Failed to send alert: {e}")

def main():
    if len(sys.argv) < 4:
        print("Usage: email-alerts.py <severity> <subject> <message>")
        sys.exit(1)
    
    severity = sys.argv[1]
    subject = sys.argv[2]
    message = sys.argv[3]
    
    # Configuration
    alerter = TorblockEmailAlerter(
        smtp_server="smtp.gmail.com",
        smtp_port=587,
        username="[email protected]",
        password="your-app-password"
    )
    
    # Send alert
    alerter.send_alert(
        to_addresses=["[email protected]", "[email protected]"],
        subject=subject,
        message=message,
        severity=severity
    )

if __name__ == "__main__":
    main()

Log Analysis Tools

Automated Log Analysis Script

#!/bin/bash
# analyze-torblock-logs.sh

LOG_FILE="/var/log/nginx/torblock.log"
ANALYSIS_PERIOD="1 hour ago"

echo "=== Nginx Torblock Log Analysis ==="
echo "Analysis Period: $ANALYSIS_PERIOD to now"
echo "Log File: $LOG_FILE"
echo

# Extract recent logs
TEMP_LOG="/tmp/torblock_analysis.log"
journalctl --since "$ANALYSIS_PERIOD" -u nginx 2>/dev/null | grep torblock > "$TEMP_LOG" || \
gawk -v start_time="$(date -d "$ANALYSIS_PERIOD" +%s)" '
BEGIN {
    # Month-name -> number lookup for mktime()
    split("Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec", names, " ")
    for (i = 1; i <= 12; i++) mon[names[i]] = i
}
match($0, /\[([0-9]+)\/([A-Za-z]+)\/([0-9]+):([0-9]+):([0-9]+):([0-9]+)/, t) {
    # Convert the nginx timestamp to epoch seconds (requires gawk)
    epoch = mktime(t[3] " " mon[t[2]] " " t[1] " " t[4] " " t[5] " " t[6])
    if (epoch >= start_time) print
}' "$LOG_FILE" > "$TEMP_LOG"

if [ ! -s "$TEMP_LOG" ]; then
    echo "No torblock log entries found for the specified period."
    exit 0
fi

# Basic statistics
TOTAL_REQUESTS=$(wc -l < "$TEMP_LOG")
BLOCKED_REQUESTS=$(grep -c "torblock_status=blocked" "$TEMP_LOG")
ALLOWED_REQUESTS=$(grep -c "torblock_status=allowed" "$TEMP_LOG")
CACHE_HITS=$(grep -c "cache_status=hit" "$TEMP_LOG")
CACHE_MISSES=$(grep -c "cache_status=miss" "$TEMP_LOG")

echo "📊 Request Statistics:"
echo "  Total Requests: $TOTAL_REQUESTS"
echo "  Blocked Requests: $BLOCKED_REQUESTS ($(( BLOCKED_REQUESTS * 100 / TOTAL_REQUESTS ))%)"
echo "  Allowed Requests: $ALLOWED_REQUESTS"
echo

if [ $((CACHE_HITS + CACHE_MISSES)) -gt 0 ]; then
    HIT_RATE=$(( CACHE_HITS * 100 / (CACHE_HITS + CACHE_MISSES) ))
    echo "🗄️ Cache Performance:"
    echo "  Cache Hits: $CACHE_HITS"
    echo "  Cache Misses: $CACHE_MISSES"
    echo "  Hit Rate: ${HIT_RATE}%"
    echo
fi

# Top blocked IPs
echo "🚫 Top 10 Blocked IP Addresses:"
grep "torblock_status=blocked" "$TEMP_LOG" | \
awk '{print $1}' | sort | uniq -c | sort -nr | head -10 | \
while read count ip; do
    echo "  $ip: $count requests"
done
echo

# Performance analysis
echo "⚡ Performance Analysis:"
LOOKUP_TIMES=$(grep -o "lookup_time=[0-9]*" "$TEMP_LOG" | cut -d= -f2)
if [ -n "$LOOKUP_TIMES" ]; then
    AVG_LOOKUP=$(echo "$LOOKUP_TIMES" | awk '{sum+=$1; count++} END {print sum/count}')
    MAX_LOOKUP=$(echo "$LOOKUP_TIMES" | sort -n | tail -1)
    echo "  Average Lookup Time: ${AVG_LOOKUP}ms"
    echo "  Maximum Lookup Time: ${MAX_LOOKUP}ms"
    
    SLOW_LOOKUPS=$(echo "$LOOKUP_TIMES" | awk '$1 > 1000' | wc -l)
    echo "  Slow Lookups (>1s): $SLOW_LOOKUPS"
fi
echo

# Geographic analysis (if available)
echo "🌍 Geographic Distribution of Blocked IPs:"
grep "torblock_status=blocked" "$TEMP_LOG" | awk '{print $1}' | sort -u | \
while read ip; do
    # Requires an external GeoIP service (rate-limited; for ad-hoc use only)
    curl -s "http://ip-api.com/json/$ip" | jq -r '.country // "Unknown"' 2>/dev/null || echo "Unknown"
done | sort | uniq -c | sort -nr | head -5 | \
while read count country; do
    echo "  $country: $count unique IPs"
done

# Cleanup
rm -f "$TEMP_LOG"

echo
echo "Analysis complete. $(date)"

Real-time Log Monitoring

#!/usr/bin/env python3
# realtime-monitor.py

import time
import re
import sys
from collections import defaultdict, deque
from datetime import datetime, timedelta

class TorblockRealTimeMonitor:
    def __init__(self, log_file="/var/log/nginx/torblock.log", window_size=300):
        self.log_file = log_file
        self.window_size = window_size  # 5 minute window
        self.stats = {
            'blocked_ips': deque(maxlen=1000),
            'lookup_times': deque(maxlen=1000),
            'cache_hits': 0,
            'cache_misses': 0,
            'blocked_count': 0,
            'allowed_count': 0
        }
        
    def parse_log_line(self, line):
        """Parse nginx torblock log line"""
        patterns = {
            'ip': r'^(\d+\.\d+\.\d+\.\d+)',
            'timestamp': r'\[([^\]]+)\]',
            'status': r'torblock_status=(\w+)',
            'cache': r'cache_status=(\w+)',
            'lookup_time': r'lookup_time=(\d+)'
        }
        
        result = {}
        for key, pattern in patterns.items():
            match = re.search(pattern, line)
            if match:
                result[key] = match.group(1)
        
        return result
    
    def update_stats(self, parsed_line):
        """Update running statistics"""
        current_time = datetime.now()
        
        # Update counters
        if parsed_line.get('status') == 'blocked':
            self.stats['blocked_count'] += 1
            self.stats['blocked_ips'].append({
                'ip': parsed_line.get('ip'),
                'timestamp': current_time
            })
        elif parsed_line.get('status') == 'allowed':
            self.stats['allowed_count'] += 1
        
        # Cache statistics
        if parsed_line.get('cache') == 'hit':
            self.stats['cache_hits'] += 1
        elif parsed_line.get('cache') == 'miss':
            self.stats['cache_misses'] += 1
        
        # Lookup time tracking
        if parsed_line.get('lookup_time'):
            self.stats['lookup_times'].append({
                'time': int(parsed_line['lookup_time']),
                'timestamp': current_time
            })
    
    def get_recent_stats(self):
        """Get statistics for recent time window"""
        cutoff_time = datetime.now() - timedelta(seconds=self.window_size)
        
        # Recent blocked IPs
        recent_blocked = [
            entry for entry in self.stats['blocked_ips']
            if entry['timestamp'] > cutoff_time
        ]
        
        # Recent lookup times
        recent_lookups = [
            entry for entry in self.stats['lookup_times']
            if entry['timestamp'] > cutoff_time
        ]
        
        avg_lookup = 0
        if recent_lookups:
            avg_lookup = sum(entry['time'] for entry in recent_lookups) / len(recent_lookups)
        
        return {
            'blocked_ips_recent': len(recent_blocked),
            'unique_blocked_ips': len(set(entry['ip'] for entry in recent_blocked)),
            'avg_lookup_time': avg_lookup,
            'max_lookup_time': max((entry['time'] for entry in recent_lookups), default=0),
            'total_blocked': self.stats['blocked_count'],
            'total_allowed': self.stats['allowed_count'],
            'cache_hit_rate': (
                self.stats['cache_hits'] / 
                (self.stats['cache_hits'] + self.stats['cache_misses']) * 100
                if (self.stats['cache_hits'] + self.stats['cache_misses']) > 0 else 0
            )
        }
    
    def display_stats(self):
        """Display current statistics"""
        stats = self.get_recent_stats()
        
        print(f"\n{'='*60}")
        print(f"Nginx Torblock Real-time Monitor - {datetime.now().strftime('%H:%M:%S')}")
        print(f"{'='*60}")
        print(f"📊 Last {self.window_size//60} minutes:")
        print(f"  Blocked Requests: {stats['blocked_ips_recent']}")
        print(f"  Unique Blocked IPs: {stats['unique_blocked_ips']}")
        print(f"  Avg Lookup Time: {stats['avg_lookup_time']:.1f}ms")
        print(f"  Max Lookup Time: {stats['max_lookup_time']}ms")
        print()
        print(f"📈 Session Totals:")
        print(f"  Total Blocked: {stats['total_blocked']}")
        print(f"  Total Allowed: {stats['total_allowed']}")
        # Cache hit rate is computed over the whole session, not the window
        print(f"  Cache Hit Rate: {stats['cache_hit_rate']:.1f}%")
        
        # Alert conditions
        if stats['avg_lookup_time'] > 1000:
            print(f"⚠️  High lookup times detected!")
        if stats['cache_hit_rate'] < 80:
            print(f"⚠️  Low cache hit rate!")
        if stats['blocked_ips_recent'] > 100:
            print(f"🚨 High blocking activity!")
    
    def monitor(self):
        """Start real-time monitoring"""
        print("Starting nginx-torblocker real-time monitor...")
        print("Press Ctrl+C to stop")
        
        try:
            with open(self.log_file, 'r') as f:
                # Go to end of file
                f.seek(0, 2)
                
                while True:
                    line = f.readline()
                    if line:
                        parsed = self.parse_log_line(line.strip())
                        if parsed:
                            self.update_stats(parsed)
                    else:
                        # No new line, display stats and wait
                        self.display_stats()
                        time.sleep(5)
                        
        except KeyboardInterrupt:
            print("\nMonitoring stopped.")
        except FileNotFoundError:
            print(f"Log file not found: {self.log_file}")
        except Exception as e:
            print(f"Error: {e}")

def main():
    log_file = sys.argv[1] if len(sys.argv) > 1 else "/var/log/nginx/torblock.log"
    monitor = TorblockRealTimeMonitor(log_file)
    monitor.monitor()

if __name__ == "__main__":
    main()

Security Event Monitoring

SIEM Integration Script

#!/usr/bin/env python3
# siem-integration.py

import json
import time
import requests
from datetime import datetime

class SIEMIntegration:
    def __init__(self, siem_endpoint, api_key):
        self.siem_endpoint = siem_endpoint
        self.api_key = api_key
        self.headers = {
            'Authorization': f'Bearer {api_key}',
            'Content-Type': 'application/json'
        }
    
    def send_security_event(self, event_data):
        """Send security event to SIEM"""
        payload = {
            'timestamp': datetime.now().isoformat(),
            'source': 'nginx-torblocker',
            'event_type': 'tor_blocking',
            'severity': self.calculate_severity(event_data),
            'data': event_data
        }
        
        try:
            response = requests.post(
                self.siem_endpoint,
                headers=self.headers,
                json=payload,
                timeout=10
            )
            
            # Accept any 2xx response (some SIEM APIs return 201/202)
            if response.ok:
                print(f"Event sent to SIEM: {event_data.get('blocked_ip')}")
            else:
                print(f"SIEM error: {response.status_code}")
                
        except Exception as e:
            print(f"Failed to send to SIEM: {e}")
    
    def calculate_severity(self, event_data):
        """Calculate event severity based on criteria"""
        # High severity conditions
        if event_data.get('repeated_attempts', 0) > 10:
            return 'HIGH'
        if event_data.get('targeting_sensitive_paths'):
            return 'HIGH'
        
        # Medium severity conditions  
        if event_data.get('repeated_attempts', 0) > 3:
            return 'MEDIUM'
        
        return 'LOW'

# Example usage
def process_blocked_request(ip, uri, user_agent):
    """Process a blocked request and send to SIEM if needed"""
    siem = SIEMIntegration('https://siem.company.com/api/events', 'your-api-key')
    
    event_data = {
        'blocked_ip': ip,
        'requested_uri': uri,
        'user_agent': user_agent,
        'repeated_attempts': count_recent_attempts(ip),
        'targeting_sensitive_paths': is_sensitive_path(uri)
    }
    
    siem.send_security_event(event_data)

def count_recent_attempts(ip):
    """Count recent attempts from IP (simplified)"""
    # Implementation would check recent logs
    return 1

def is_sensitive_path(uri):
    """Check if URI targets sensitive paths"""
    sensitive_paths = ['/admin', '/api', '/login', '/wp-admin']
    return any(path in uri for path in sensitive_paths)
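To drive this pipeline from live traffic, blocked entries can be parsed out of the security log and fed into process_blocked_request. The sketch below assumes the security log_format defined earlier in this guide (ISO timestamp, client IP, quoted request, status, tor_blocked flag, quoted user agent); the regex and field names are illustrative, so adjust them to your actual format.

```python
import re

# Assumes the 'security' log_format shown earlier:
#   $time_iso8601 $remote_addr "$request" $status tor_blocked=... user_agent="..." referer="..."
SECURITY_LINE = re.compile(
    r'^(?P<ts>\S+) (?P<ip>\S+) "(?P<method>\S+) (?P<uri>\S+)[^"]*" (?P<status>\d+) '
    r'tor_blocked=(?P<blocked>\S+) user_agent="(?P<ua>[^"]*)"'
)

def parse_security_line(line):
    """Parse one security-log line; returns a dict of fields, or None on mismatch."""
    match = SECURITY_LINE.match(line)
    return match.groupdict() if match else None

sample = ('2025-08-23T10:00:00+00:00 198.51.100.7 "GET /wp-admin HTTP/1.1" 403 '
          'tor_blocked=1 user_agent="curl/8.0" referer="-"')
entry = parse_security_line(sample)

# Wiring into the SIEM pipeline (process_blocked_request from the script above):
# if entry and entry['blocked'] == '1':
#     process_blocked_request(entry['ip'], entry['uri'], entry['ua'])
```

Combined with a tail loop like the one in the real-time monitor, this turns every blocked request into a SIEM event without touching nginx itself.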

Best Practices

Monitoring Checklist

  • Log Configuration: Comprehensive logging with structured formats
  • Metrics Collection: Key performance and security metrics
  • Alerting Rules: Proactive alerting for critical conditions
  • Dashboard Setup: Visual monitoring with Grafana or similar
  • Log Rotation: Proper log rotation to prevent disk issues
  • Security Integration: SIEM integration for security events
  • Performance Monitoring: Regular performance analysis
  • Backup Strategy: Log backup and retention policies
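The log-rotation item above is typically handled with logrotate. A minimal sketch follows; paths match the logs configured in this guide, and the USR1 signal asks the nginx master process to reopen its log files after rotation. Tune the retention (rotate 14) to your own policy.

```
/var/log/nginx/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    sharedscripts
    postrotate
        [ -f /var/run/nginx.pid ] && kill -USR1 $(cat /var/run/nginx.pid)
    endscript
}
```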

Configuration Template

# Production monitoring configuration
http {
    # Comprehensive logging
    log_format combined_torblock '$remote_addr - $remote_user [$time_local] '
                               '"$request" $status $body_bytes_sent '
                               '"$http_referer" "$http_user_agent" '
                               'torblock_status=$torblock_status '
                               'cache_status=$torblock_cache_status '
                               'lookup_time=$torblock_lookup_time';
    
    # JSON format for log aggregation
    log_format json_torblock escape=json '{'
        '"timestamp": "$time_iso8601", '
        '"client_ip": "$remote_addr", '
        '"request": "$request", '
        '"status": $status, '
        '"torblock_status": "$torblock_status", '
        '"cache_status": "$torblock_cache_status", '
        '"lookup_time": $torblock_lookup_time'
    '}';
    
    access_log /var/log/nginx/access.log combined_torblock;
    access_log /var/log/nginx/torblock.json json_torblock;
    error_log /var/log/nginx/error.log warn;
    
    # Metrics endpoint
    server {
        listen 127.0.0.1:8080;
        location /metrics { 
            # Metrics configuration
        }
    }
}
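Because the json_torblock format above emits one JSON object per line, log shippers and ad-hoc scripts can consume it without custom regexes. A minimal sketch (the sample line is illustrative):

```python
import json

def parse_json_log_line(line):
    """Parse one json_torblock line into a dict; returns None on malformed input."""
    try:
        return json.loads(line)
    except json.JSONDecodeError:
        return None

# Sample line matching the json_torblock format above
sample = ('{"timestamp": "2025-08-23T10:00:00+00:00", '
          '"client_ip": "198.51.100.7", '
          '"request": "GET / HTTP/1.1", '
          '"status": 403, '
          '"torblock_status": "blocked", '
          '"cache_status": "hit", '
          '"lookup_time": 2}')
entry = parse_json_log_line(sample)
```

Numeric fields such as status and lookup_time arrive as real numbers, so aggregation (rates, percentiles) needs no further casting.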

Next Steps

With monitoring and logging in place:

  1. Performance Optimization: Use insights from Performance Tuning
  2. Troubleshooting: Diagnose issues with Troubleshooting Guide
  3. Testing: Validate configuration with Testing Procedures
  4. Advanced Configuration: Implement complex policies from Advanced Configuration