The voices... They're getting louder.

Aditya Pulipaka
2025-10-19 06:58:39 -05:00
parent 89924c1580
commit 154283e7c6
13 changed files with 1167 additions and 25 deletions

.env.production.example (new file, 40 lines)

@@ -0,0 +1,40 @@
# Production Environment Configuration for Render.com
# Copy this to .env and fill in your values when deploying
# Gemini API Configuration
GEMINI_API_KEY=your_gemini_api_key_here
GEMINI_MODEL=gemini-2.5-flash
# Environment (MUST be "production" for deployment)
ENVIRONMENT=production
# Service Configuration
AI_SERVICE_PORT=9000
AI_SERVICE_HOST=0.0.0.0
# Production URL (your Render.com app URL)
# Example: https://hpc-simulation-ai.onrender.com
PRODUCTION_URL=https://your-app-name.onrender.com
# Enrichment Service Integration
# In production on Render, services communicate internally
# Leave as localhost since both services run in same container
ENRICHMENT_SERVICE_URL=http://localhost:8000
ENRICHMENT_FETCH_LIMIT=10
# Demo Mode (enables caching and consistent responses for demos)
DEMO_MODE=false
# Fast Mode (use shorter prompts for faster responses)
FAST_MODE=true
# Strategy Generation Settings
STRATEGY_COUNT=3 # Number of strategies to generate (3 for testing, 20 for production)
# Performance Settings
BRAINSTORM_TIMEOUT=90
ANALYZE_TIMEOUT=120
GEMINI_MAX_RETRIES=3
# ElevenLabs API Key (optional, for voice features)
ELEVENLABS_API_KEY=your_elevenlabs_api_key_here

ENVIRONMENT_CONFIG.md (new file, 249 lines)

@@ -0,0 +1,249 @@
# Environment Configuration Guide
## Overview
The HPC Simulation system automatically adapts between **development** and **production** environments based on the `ENVIRONMENT` variable.
## Configuration Files
### Development (Local)
- File: `.env` (in project root)
- `ENVIRONMENT=development`
- Uses `localhost` URLs
### Production (Render.com)
- Set environment variables in Render dashboard
- `ENVIRONMENT=production`
- `PRODUCTION_URL=https://your-app.onrender.com`
- Automatically adapts all URLs
## Key Environment Variables
### Required for Both Environments
| Variable | Description | Example |
|----------|-------------|---------|
| `GEMINI_API_KEY` | Google Gemini API key | `AIzaSy...` |
| `ENVIRONMENT` | Environment mode | `development` or `production` |
### Production-Specific
| Variable | Description | Example |
|----------|-------------|---------|
| `PRODUCTION_URL` | Your Render.com app URL | `https://hpc-ai.onrender.com` |
### Optional
| Variable | Default | Description |
|----------|---------|-------------|
| `ELEVENLABS_API_KEY` | - | Voice synthesis (optional) |
| `GEMINI_MODEL` | `gemini-1.5-pro` | AI model version |
| `STRATEGY_COUNT` | `3` | Strategies per lap |
| `FAST_MODE` | `true` | Use shorter prompts |
## How It Works
### URL Auto-Configuration
`config.py` automatically provides environment-aware URLs:
```python
settings = get_settings()
# Automatically returns correct URL based on environment:
settings.base_url # http://localhost:9000 OR https://your-app.onrender.com
settings.websocket_url # ws://localhost:9000 OR wss://your-app.onrender.com
settings.internal_enrichment_url # Always http://localhost:8000 (internal)
```
### Development Environment
```bash
ENVIRONMENT=development
# Result:
# - base_url: http://localhost:9000
# - websocket_url: ws://localhost:9000
# - Dashboard connects to: ws://localhost:9000/ws/dashboard
```
### Production Environment
```bash
ENVIRONMENT=production
PRODUCTION_URL=https://hpc-ai.onrender.com
# Result:
# - base_url: https://hpc-ai.onrender.com
# - websocket_url: wss://hpc-ai.onrender.com
# - Dashboard connects to: wss://hpc-ai.onrender.com/ws/dashboard
```
## Component-Specific Configuration
### 1. AI Intelligence Layer
**Development:**
- Binds to: `0.0.0.0:9000`
- Enrichment client connects to: `http://localhost:8000`
- Dashboard WebSocket: `ws://localhost:9000/ws/dashboard`
**Production:**
- Binds to: `0.0.0.0:9000` (Render exposes externally)
- Enrichment client connects to: `http://localhost:8000` (internal)
- Dashboard WebSocket: `wss://your-app.onrender.com/ws/dashboard`
### 2. Enrichment Service
**Development:**
- Binds to: `0.0.0.0:8000`
- Accessed at: `http://localhost:8000`
**Production:**
- Binds to: `0.0.0.0:8000` (internal only)
- Accessed internally at: `http://localhost:8000`
- Not exposed externally (behind AI layer)
### 3. Dashboard (Frontend)
**Auto-detects environment:**
```javascript
// Automatically uses current host and protocol
const protocol = window.location.protocol === 'https:' ? 'wss:' : 'ws:';
const host = window.location.host;
const wsUrl = `${protocol}//${host}/ws/dashboard`;
```
### 4. Pi Simulator (Client)
**Development:**
```bash
python simulate_pi_websocket.py \
--ws-url ws://localhost:9000/ws/pi \
--enrichment-url http://localhost:8000
```
**Production:**
```bash
python simulate_pi_websocket.py \
--ws-url wss://your-app.onrender.com/ws/pi \
--enrichment-url https://your-app.onrender.com # If exposed
```
## Quick Setup
### Local Development
1. **Create `.env` in project root:**
```bash
GEMINI_API_KEY=your_key_here
ENVIRONMENT=development
```
2. **Start services:**
```bash
./start.sh
```
### Render.com Production
1. **Set environment variables in Render dashboard:**
```
GEMINI_API_KEY=your_key_here
ENVIRONMENT=production
PRODUCTION_URL=https://your-app-name.onrender.com
```
2. **Deploy** - URLs auto-configure!
## Troubleshooting
### Issue: WebSocket connection fails in production
**Check:**
1. `ENVIRONMENT=production` is set
2. `PRODUCTION_URL` matches your actual Render URL (including `https://`)
3. Dashboard uses `wss://` protocol (should auto-detect)
### Issue: Enrichment service unreachable
**In production:**
- Both services run in same container
- Internal communication uses `http://localhost:8000`
- This is automatic, no configuration needed
**In development:**
- Ensure enrichment service is running: `python scripts/serve.py`
- Check `http://localhost:8000/health`
### Issue: Pi simulator can't connect
**Development:**
```bash
# Test connection
curl http://localhost:9000/health
wscat -c ws://localhost:9000/ws/pi
```
**Production:**
```bash
# Test connection
curl https://your-app.onrender.com/health
wscat -c wss://your-app.onrender.com/ws/pi
```
## Environment Variable Priority
1. **Render Environment Variables** (highest priority in production)
2. **.env file** (development)
3. **Default values** (in config.py)
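The precedence order above can be illustrated with a minimal sketch. This uses plain dicts instead of pydantic-settings, and the `resolve` helper is hypothetical, but the lookup order mirrors what the config system does: a real environment variable (what Render injects) beats a `.env` value, which beats the code default.

```python
# Minimal sketch of the precedence order: process env > .env file > default.
# `resolve`, `DEFAULTS`, and the sample values are illustrative only.
import os

DEFAULTS = {"GEMINI_MODEL": "gemini-1.5-pro", "STRATEGY_COUNT": "3"}

def resolve(key, dotenv_values):
    # 1. process environment (Render dashboard), 2. .env file, 3. code default
    return os.environ.get(key) or dotenv_values.get(key) or DEFAULTS.get(key)

dotenv = {"GEMINI_MODEL": "gemini-2.5-flash"}  # as if parsed from .env
os.environ["GEMINI_MODEL"] = "gemini-2.5-pro"  # as if set in the Render dashboard
print(resolve("GEMINI_MODEL", dotenv))    # the dashboard value wins
print(resolve("STRATEGY_COUNT", dotenv))  # falls back to the code default
```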
## Best Practices
### Development
- ✅ Use `.env` file
- ✅ Keep `ENVIRONMENT=development`
- ✅ Use `localhost` URLs
- ❌ Don't commit `.env` to git
### Production
- ✅ Set all variables in Render dashboard
- ✅ Use `ENVIRONMENT=production`
- ✅ Set `PRODUCTION_URL` after deployment
- ✅ Use HTTPS/WSS protocols
- ❌ Don't hardcode URLs in code
## Example Configurations
### .env (Development)
```bash
GEMINI_API_KEY=your_gemini_api_key_here
GEMINI_MODEL=gemini-2.5-flash
ENVIRONMENT=development
ELEVENLABS_API_KEY=your_key_here
STRATEGY_COUNT=3
FAST_MODE=true
```
### Render Environment Variables (Production)
```bash
GEMINI_API_KEY=your_gemini_api_key_here
GEMINI_MODEL=gemini-2.5-flash
ENVIRONMENT=production
PRODUCTION_URL=https://hpc-simulation-ai.onrender.com
ELEVENLABS_API_KEY=your_key_here
STRATEGY_COUNT=3
FAST_MODE=true
```
## Migration Checklist
When deploying to production:
- [ ] Set `ENVIRONMENT=production` in Render
- [ ] Deploy and get Render URL
- [ ] Set `PRODUCTION_URL` with your Render URL
- [ ] Test health endpoint: `https://your-app.onrender.com/health`
- [ ] Test WebSocket: `wss://your-app.onrender.com/ws/pi`
- [ ] Open dashboard: `https://your-app.onrender.com/dashboard`
- [ ] Verify logs show production URLs
Done! The system will automatically use production URLs for all connections.
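The endpoints in the checklist above can be derived from `PRODUCTION_URL` programmatically. A small sketch (the `deployment_endpoints` helper is hypothetical; the URL is a placeholder) that mirrors the same `https://` to `wss://` substitution the config system performs:

```python
# Hypothetical helper: derive the endpoints to smoke-test from PRODUCTION_URL.
def deployment_endpoints(production_url: str) -> dict:
    # Same scheme substitution the config system uses for WebSocket URLs
    ws_base = production_url.replace("https://", "wss://").replace("http://", "ws://")
    return {
        "health": f"{production_url}/health",
        "pi_ws": f"{ws_base}/ws/pi",
        "dashboard": f"{production_url}/dashboard",
    }

print(deployment_endpoints("https://your-app.onrender.com"))
```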

Procfile (new file, 1 line)

@@ -0,0 +1 @@
web: ./start.sh

RENDER_DEPLOYMENT.md (new file, 211 lines)

@@ -0,0 +1,211 @@
# Render.com Deployment Guide
## Quick Start
### 1. Render.com Configuration
**Service Type:** Web Service
**Build Command:**
```bash
pip install -r requirements.txt
```
**Start Command (choose one):**
#### Option A: Shell Script (Recommended)
```bash
./start.sh
```
#### Option B: Python Supervisor
```bash
python start.py
```
#### Option C: Direct Command
```bash
python scripts/serve.py & python ai_intelligence_layer/main.py
```
### 2. Environment Variables
Set these in Render.com dashboard:
**Required:**
```bash
GEMINI_API_KEY=your_gemini_api_key_here
ENVIRONMENT=production
PRODUCTION_URL=https://your-app-name.onrender.com # Your Render app URL
```
**Optional:**
```bash
ELEVENLABS_API_KEY=your_elevenlabs_api_key_here # For voice features
GEMINI_MODEL=gemini-2.5-flash
STRATEGY_COUNT=3
FAST_MODE=true
```
**Auto-configured (no need to set):**
```bash
# These are handled automatically by the config system
AI_SERVICE_PORT=9000
AI_SERVICE_HOST=0.0.0.0
ENRICHMENT_SERVICE_URL=http://localhost:8000 # Internal communication
```
### Important: Production URL
After deploying to Render, you'll get a URL like:
```
https://your-app-name.onrender.com
```
**You MUST set this URL in the environment variables:**
1. Go to Render dashboard → your service → Environment
2. Add: `PRODUCTION_URL=https://your-app-name.onrender.com`
3. The app will automatically use this for WebSocket connections and API URLs
### 3. Health Check
**Health Check Path:** `/health` (on port 9000)
**Health Check Command:**
```bash
curl http://localhost:9000/health
```
### 4. Port Configuration
- **Enrichment Service:** 8000 (internal)
- **AI Intelligence Layer:** 9000 (external, Render will expose this)
Render automatically sets the `PORT` environment variable; the web-facing service should bind to it.
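A minimal sketch of honoring the injected `PORT`, assuming the AI layer is the web-facing service (the `resolve_bind` helper is illustrative, not project code; 9000 is the local fallback):

```python
# Sketch: bind to Render's injected PORT, falling back to 9000 for local runs.
import os

def resolve_bind() -> tuple:
    host = "0.0.0.0"  # bind all interfaces so Render can route traffic in
    port = int(os.environ.get("PORT", "9000"))
    return host, port

host, port = resolve_bind()
print(f"AI Intelligence Layer binding to {host}:{port}")
```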
### 5. Files Required
- `start.sh` - Shell startup script
- `start.py` - Python startup supervisor
- `Procfile` - Render configuration
- `requirements.txt` - Python dependencies
### 6. Testing Locally
Test the startup script before deploying:
```bash
# Make executable
chmod +x start.sh
# Run locally
./start.sh
```
Or with Python supervisor:
```bash
python start.py
```
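The Python supervisor route can be sketched as follows. This is an assumed shape for `start.py` (only the script paths come from this guide; the process-management details are illustrative): spawn both services, then watch for the first one to exit so Render restarts the container.

```python
# Hypothetical sketch of a start.py supervisor for the two services.
import subprocess
import sys

SERVICES = [
    ("enrichment", [sys.executable, "scripts/serve.py"]),             # port 8000
    ("ai-layer", [sys.executable, "ai_intelligence_layer/main.py"]),  # port 9000
]

def launch_all(services=SERVICES):
    """Spawn each service as a child process and return the handles."""
    procs = []
    for name, cmd in services:
        print(f"Starting {name}: {' '.join(cmd)}")
        procs.append(subprocess.Popen(cmd))
    return procs

def supervise(procs):
    """Block until the first child exits, then stop the rest."""
    exit_code = procs[0].wait()
    for p in procs[1:]:
        p.terminate()
    return exit_code
```

In a real `start.py`, a `__main__` guard would call `launch_all()` and exit with the code returned by `supervise()`.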
### 7. Deployment Steps
1. **Push to GitHub:**
```bash
git add .
git commit -m "Add Render deployment configuration"
git push
```
2. **Create Render Service:**
- Go to [render.com](https://render.com)
- New → Web Service
- Connect your GitHub repository
- Select branch (main)
3. **Configure Service:**
- Name: `hpc-simulation-ai`
- Environment: `Python 3`
- Build Command: `pip install -r requirements.txt`
- Start Command: `./start.sh`
4. **Add Environment Variables:**
- `GEMINI_API_KEY`
- `ELEVENLABS_API_KEY` (optional)
5. **Deploy!** 🚀
### 8. Monitoring
Check logs in Render dashboard for:
- `📊 Starting Enrichment Service on port 8000...`
- `🤖 Starting AI Intelligence Layer on port 9000...`
- `✨ All services running!`
### 9. Connecting Clients
**WebSocket URL:**
```
wss://your-app-name.onrender.com/ws/pi
```
**Enrichment API:**
```
https://your-app-name.onrender.com/ingest/telemetry
```
### 10. Troubleshooting
**Services won't start:**
- Check environment variables are set
- Verify `start.sh` is executable: `chmod +x start.sh`
- Check build logs for dependency issues
**Port conflicts:**
- Render will set `PORT` automatically (9000 by default)
- Services bind to `0.0.0.0` for external access
**Memory issues:**
- Consider Render's paid plans for more resources
- Free tier may struggle with AI model loading
## Architecture on Render
```
┌─────────────────────────────────────┐
│ Render.com Container │
├─────────────────────────────────────┤
│ │
│ ┌───────────────────────────────┐ │
│ │ start.sh / start.py │ │
│ └──────────┬────────────────────┘ │
│ │ │
│ ┌────────┴─────────┐ │
│ │ │ │
│ ▼ ▼ │
│ ┌──────────┐ ┌──────────────┐ │
│ │Enrichment│ │AI Intelligence│ │
│ │Service │ │Layer │ │
│ │:8000 │◄──│:9000 │ │
│ └──────────┘ └──────┬────────┘ │
│ │ │
└────────────────────────┼────────────┘
Internet
┌────▼─────┐
│ Client │
│(Pi/Web) │
└──────────┘
```
## Next Steps
1. Test locally with `./start.sh`
2. Commit and push to GitHub
3. Create Render service
4. Configure environment variables
5. Deploy and monitor logs
6. Update client connection URLs
Good luck! 🎉


@@ -1,6 +1,8 @@
""" """
Configuration management for AI Intelligence Layer. Configuration management for AI Intelligence Layer.
Uses pydantic-settings for environment variable validation. Uses pydantic-settings for environment variable validation.
Environment variables are loaded via load_dotenv() in main.py.
Automatically adapts URLs for development vs production environments.
""" """
from pydantic_settings import BaseSettings, SettingsConfigDict from pydantic_settings import BaseSettings, SettingsConfigDict
from typing import Optional from typing import Optional
@@ -9,6 +11,10 @@ from typing import Optional
class Settings(BaseSettings): class Settings(BaseSettings):
"""Application settings loaded from environment variables.""" """Application settings loaded from environment variables."""
# Environment Configuration
environment: str = "development" # "development" or "production"
production_url: Optional[str] = None # e.g., "https://your-app.onrender.com"
# Gemini API Configuration # Gemini API Configuration
gemini_api_key: str gemini_api_key: str
gemini_model: str = "gemini-1.5-pro" gemini_model: str = "gemini-1.5-pro"
@@ -28,7 +34,7 @@ class Settings(BaseSettings):
fast_mode: bool = True fast_mode: bool = True
# Strategy Generation Settings # Strategy Generation Settings
strategy_count: int = 3 # Number of strategies to generate (3 for fast testing) strategy_count: int = 3 # Number of strategies to generate (3 for testing, 20 for production)
# Performance Settings # Performance Settings
brainstorm_timeout: int = 30 brainstorm_timeout: int = 30
@@ -36,11 +42,37 @@ class Settings(BaseSettings):
gemini_max_retries: int = 3 gemini_max_retries: int = 3
model_config = SettingsConfigDict( model_config = SettingsConfigDict(
env_file=".env",
env_file_encoding="utf-8",
case_sensitive=False, case_sensitive=False,
extra="ignore" extra="ignore"
) )
@property
def is_production(self) -> bool:
"""Check if running in production environment."""
return self.environment.lower() == "production"
@property
def base_url(self) -> str:
"""Get the base URL for the application."""
if self.is_production and self.production_url:
return self.production_url
return f"http://localhost:{self.ai_service_port}"
@property
def websocket_url(self) -> str:
"""Get the WebSocket URL for the application."""
if self.is_production and self.production_url:
# Replace https:// with wss:// or http:// with ws://
return self.production_url.replace("https://", "wss://").replace("http://", "ws://")
return f"ws://localhost:{self.ai_service_port}"
@property
def internal_enrichment_url(self) -> str:
"""Get the enrichment service URL (internal on Render)."""
if self.is_production:
# On Render, services communicate internally via localhost
return "http://localhost:8000"
return self.enrichment_service_url
# Global settings instance # Global settings instance


```diff
@@ -15,6 +15,10 @@ import random
 from typing import Dict, Any, List
 from datetime import datetime
 import json
+from dotenv import load_dotenv
+
+# Load environment variables from .env file in project root
+load_dotenv()
 
 from config import get_settings
 from models.input_models import (
@@ -445,14 +449,12 @@ async def websocket_pi_endpoint(websocket: WebSocket):
             logger.info(f"LAP {lap_number} - GENERATING STRATEGY")
             logger.info(f"{'='*60}")
 
-            # Send immediate acknowledgment while processing
-            # Use last known control values instead of resetting to neutral
+            # Send SILENT acknowledgment to prevent timeout (no control update)
+            # This tells the Pi "we're working on it" without triggering voice/controls
             await websocket.send_json({
-                "type": "control_command",
+                "type": "acknowledgment",
                 "lap": lap_number,
-                "brake_bias": last_control_command["brake_bias"],
-                "differential_slip": last_control_command["differential_slip"],
-                "message": "Processing strategies (maintaining previous settings)..."
+                "message": "Processing strategies, please wait..."
             })
 
             # Generate strategies (this is the slow part)
@@ -500,6 +502,7 @@ async def websocket_pi_endpoint(websocket: WebSocket):
                 "brake_bias": control_command["brake_bias"],
                 "differential_slip": control_command["differential_slip"],
                 "strategy_name": top_strategy.strategy_name if top_strategy else "N/A",
+                "risk_level": top_strategy.risk_level if top_strategy else "medium",
                 "total_strategies": len(response.strategies),
                 "reasoning": control_command.get("reasoning", "")
             })
```


```diff
@@ -16,7 +16,8 @@ class TelemetryClient:
     def __init__(self):
         """Initialize telemetry client."""
         settings = get_settings()
-        self.base_url = settings.enrichment_service_url
+        # Use internal_enrichment_url which adapts for production
+        self.base_url = settings.internal_enrichment_url
         self.fetch_limit = settings.enrichment_fetch_limit
         logger.info(f"Telemetry client initialized for {self.base_url}")
```


```diff
@@ -683,7 +683,13 @@
         }
 
         function connect() {
-            ws = new WebSocket('ws://localhost:9000/ws/dashboard');
+            // Dynamically determine WebSocket URL based on current location
+            const protocol = window.location.protocol === 'https:' ? 'wss:' : 'ws:';
+            const host = window.location.host;
+            const wsUrl = `${protocol}//${host}/ws/dashboard`;
+            console.log(`Connecting to WebSocket: ${wsUrl}`);
+
+            ws = new WebSocket(wsUrl);
 
             ws.onopen = () => {
                 console.log('Dashboard WebSocket connected');
```

```diff
@@ -6,9 +6,10 @@ Connects to AI Intelligence Layer via WebSocket and:
 1. Streams lap telemetry to AI layer
 2. Receives control commands (brake_bias, differential_slip) from AI layer
 3. Applies control adjustments in real-time
+4. Generates voice announcements for strategy updates
 
 Usage:
-    python simulate_pi_websocket.py --interval 5 --ws-url ws://localhost:9000/ws/pi
+    python simulate_pi_websocket.py --interval 5 --ws-url ws://localhost:9000/ws/pi --enable-voice
 """
 
 from __future__ import annotations
@@ -19,6 +20,8 @@ import logging
 from pathlib import Path
 from typing import Dict, Any, Optional
 import sys
+import os
+from datetime import datetime
 
 try:
     import pandas as pd
@@ -29,6 +32,19 @@ except ImportError:
     print("Run: pip install pandas websockets")
     sys.exit(1)
 
+# Optional voice support
+try:
+    from elevenlabs.client import ElevenLabs
+    from elevenlabs import save
+    from dotenv import load_dotenv
+    # Load .env from root directory (default behavior)
+    load_dotenv()
+    VOICE_AVAILABLE = True
+except ImportError:
+    VOICE_AVAILABLE = False
+    print("Note: elevenlabs not installed. Voice features disabled.")
+    print("To enable voice: pip install elevenlabs python-dotenv")
+
 # Configure logging
 logging.basicConfig(
     level=logging.INFO,
```
```diff
@@ -37,10 +53,260 @@ logging.basicConfig(
 logger = logging.getLogger(__name__)
 
-class PiSimulator:
-    """WebSocket-based Pi simulator with control feedback."""
-
-    def __init__(self, csv_path: Path, ws_url: str, interval: float = 60.0, enrichment_url: str = "http://localhost:8000"):
+class VoiceAnnouncer:
+    """ElevenLabs text-to-speech announcer for race engineer communications."""
+
+    def __init__(self, enabled: bool = True):
+        """Initialize ElevenLabs voice engine if available."""
+        self.enabled = enabled and VOICE_AVAILABLE
+        self.client = None
+        self.audio_dir = Path("data/audio")
+        # Use exact same voice as voice_service.py
+        self.voice_id = "mbBupyLcEivjpxh8Brkf"  # Rachel voice
+
+        if self.enabled:
+            try:
+                api_key = os.getenv("ELEVENLABS_API_KEY")
+                if not api_key:
+                    logger.warning("⚠ ELEVENLABS_API_KEY not found in environment")
+                    self.enabled = False
+                    return
+                self.client = ElevenLabs(api_key=api_key)
+                self.audio_dir.mkdir(parents=True, exist_ok=True)
+                logger.info("✓ Voice announcer initialized (ElevenLabs)")
+            except Exception as e:
+                logger.warning(f"⚠ Voice engine initialization failed: {e}")
+                self.enabled = False
+
+    def _format_strategy_message(self, data: Dict[str, Any]) -> str:
+        """
+        Format strategy update into natural race engineer speech.
+
+        Args:
+            data: Control command update from AI layer
+
+        Returns:
+            Formatted message string
+        """
+        lap = data.get('lap', 0)
+        strategy_name = data.get('strategy_name', 'Unknown')
+        brake_bias = data.get('brake_bias', 5)
+        diff_slip = data.get('differential_slip', 5)
+        reasoning = data.get('reasoning', '')
+        risk_level = data.get('risk_level', '')
+
+        # Build natural message
+        parts = []
+
+        # Opening with lap number
+        parts.append(f"Lap {lap}.")
+
+        # Strategy announcement with risk level
+        if strategy_name and strategy_name != "N/A":
+            # Simplify strategy name for speech
+            clean_strategy = strategy_name.replace('-', ' ').replace('_', ' ')
+            if risk_level:
+                parts.append(f"Running {clean_strategy} strategy, {risk_level} risk.")
+            else:
+                parts.append(f"Running {clean_strategy} strategy.")
+
+        # Control adjustments with specific values
+        control_messages = []
+
+        # Brake bias announcement with context
+        if brake_bias < 4:
+            control_messages.append(f"Brake bias set to {brake_bias}, forward biased for sharper turn in response")
+        elif brake_bias == 4:
+            control_messages.append(f"Brake bias {brake_bias}, slightly forward to help rotation")
+        elif brake_bias > 6:
+            control_messages.append(f"Brake bias set to {brake_bias}, rearward to protect front tire wear")
+        elif brake_bias == 6:
+            control_messages.append(f"Brake bias {brake_bias}, slightly rear for front tire management")
+        else:
+            control_messages.append(f"Brake bias neutral at {brake_bias}")
+
+        # Differential slip announcement with context
+        if diff_slip < 4:
+            control_messages.append(f"Differential at {diff_slip}, tightened for better rotation through corners")
+        elif diff_slip == 4:
+            control_messages.append(f"Differential {diff_slip}, slightly tight for rotation")
+        elif diff_slip > 6:
+            control_messages.append(f"Differential set to {diff_slip}, loosened to reduce rear tire degradation")
+        elif diff_slip == 6:
+            control_messages.append(f"Differential {diff_slip}, slightly loose for tire preservation")
+        else:
+            control_messages.append(f"Differential neutral at {diff_slip}")
+
+        if control_messages:
+            parts.append(". ".join(control_messages) + ".")
+
+        # Key reasoning excerpt (first sentence only)
+        if reasoning:
+            # Extract first meaningful sentence
+            sentences = reasoning.split('.')
+            if sentences:
+                key_reason = sentences[0].strip()
+                if len(key_reason) > 20 and len(key_reason) < 150:  # Slightly longer for more context
+                    parts.append(key_reason + ".")
+
+        return " ".join(parts)
+
+    def _format_control_message(self, data: Dict[str, Any]) -> str:
+        """
+        Format control command into brief message.
+
+        Args:
+            data: Control command from AI layer
+
+        Returns:
+            Formatted message string
+        """
+        lap = data.get('lap', 0)
+        brake_bias = data.get('brake_bias', 5)
+        diff_slip = data.get('differential_slip', 5)
+        message = data.get('message', '')
+
+        # For early laps or non-strategy updates
+        if message and "Collecting data" in message:
+            return f"Lap {lap}. Collecting baseline data."
+
+        if brake_bias == 5 and diff_slip == 5:
+            return f"Lap {lap}. Maintaining neutral settings."
+
+        return f"Lap {lap}. Controls adjusted."
+
+    async def announce_strategy(self, data: Dict[str, Any]):
+        """
+        Announce strategy update with ElevenLabs voice synthesis.
+
+        Args:
+            data: Control command update from AI layer
+        """
+        if not self.enabled:
+            return
+
+        try:
+            # Format message
+            message = self._format_strategy_message(data)
+            logger.info(f"[VOICE] Announcing: {message}")
+
+            # Generate unique audio filename
+            lap = data.get('lap', 0)
+            timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
+            audio_path = self.audio_dir / f"lap_{lap}_{timestamp}.mp3"
+
+            # Synthesize with ElevenLabs (exact same settings as voice_service.py)
+            def synthesize():
+                try:
+                    audio = self.client.text_to_speech.convert(
+                        voice_id=self.voice_id,
+                        text=message,
+                        model_id="eleven_multilingual_v2",  # Fast, low-latency model
+                        voice_settings={
+                            "stability": 0.4,
+                            "similarity_boost": 0.95,
+                            "style": 0.7,
+                            "use_speaker_boost": True
+                        }
+                    )
+                    save(audio, str(audio_path))
+                    logger.info(f"[VOICE] Saved to {audio_path}")
+
+                    # Play the audio
+                    if sys.platform == "darwin":  # macOS
+                        os.system(f"afplay {audio_path}")
+                    elif sys.platform == "linux":
+                        os.system(f"mpg123 {audio_path} || ffplay -nodisp -autoexit {audio_path}")
+                    elif sys.platform == "win32":
+                        os.system(f"start {audio_path}")
+
+                    # Clean up audio file after playing
+                    try:
+                        if audio_path.exists():
+                            audio_path.unlink()
+                            logger.info(f"[VOICE] Cleaned up {audio_path}")
+                    except Exception as cleanup_error:
+                        logger.warning(f"[VOICE] Failed to delete audio file: {cleanup_error}")
+                except Exception as e:
+                    logger.error(f"[VOICE] Synthesis error: {e}")
+
+            # Run in separate thread to avoid blocking
+            loop = asyncio.get_event_loop()
+            await loop.run_in_executor(None, synthesize)
+        except Exception as e:
+            logger.error(f"[VOICE] Announcement failed: {e}")
+
+    async def announce_control(self, data: Dict[str, Any]):
+        """
+        Announce control command with ElevenLabs voice synthesis (brief version).
+
+        Args:
+            data: Control command from AI layer
+        """
+        if not self.enabled:
+            return
+
+        try:
+            # Format message
+            message = self._format_control_message(data)
+            logger.info(f"[VOICE] Announcing: {message}")
+
+            # Generate unique audio filename
+            lap = data.get('lap', 0)
+            timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
+            audio_path = self.audio_dir / f"lap_{lap}_control_{timestamp}.mp3"
+
+            # Synthesize with ElevenLabs (exact same settings as voice_service.py)
+            def synthesize():
+                try:
+                    audio = self.client.text_to_speech.convert(
+                        voice_id=self.voice_id,
+                        text=message,
+                        model_id="eleven_multilingual_v2",  # Fast, low-latency model
+                        voice_settings={
+                            "stability": 0.4,
+                            "similarity_boost": 0.95,
+                            "style": 0.7,
+                            "use_speaker_boost": True
+                        }
+                    )
+                    save(audio, str(audio_path))
+                    logger.info(f"[VOICE] Saved to {audio_path}")
+
+                    # Play the audio
+                    if sys.platform == "darwin":  # macOS
+                        os.system(f"afplay {audio_path}")
+                    elif sys.platform == "linux":
+                        os.system(f"mpg123 {audio_path} || ffplay -nodisp -autoexit {audio_path}")
+                    elif sys.platform == "win32":
+                        os.system(f"start {audio_path}")
+
+                    # Clean up audio file after playing
+                    try:
+                        if audio_path.exists():
+                            audio_path.unlink()
+                            logger.info(f"[VOICE] Cleaned up {audio_path}")
+                    except Exception as cleanup_error:
+                        logger.warning(f"[VOICE] Failed to delete audio file: {cleanup_error}")
+                except Exception as e:
+                    logger.error(f"[VOICE] Synthesis error: {e}")
+
+            # Run in separate thread to avoid blocking
+            loop = asyncio.get_event_loop()
+            await loop.run_in_executor(None, synthesize)
+        except Exception as e:
+            logger.error(f"[VOICE] Announcement failed: {e}")
+
+
+class PiSimulator:
+    """WebSocket-based Pi simulator with control feedback and voice announcements."""
+
+    def __init__(self, csv_path: Path, ws_url: str, interval: float = 60.0, enrichment_url: str = "http://localhost:8000", voice_enabled: bool = False):
         self.csv_path = csv_path
         self.ws_url = ws_url
         self.enrichment_url = enrichment_url
```
@@ -50,6 +316,12 @@ class PiSimulator:
"brake_bias": 5, "brake_bias": 5,
"differential_slip": 5 "differential_slip": 5
} }
self.previous_controls = {
"brake_bias": 5,
"differential_slip": 5
}
self.current_risk_level: Optional[str] = None
self.voice_announcer = VoiceAnnouncer(enabled=voice_enabled)
def load_lap_csv(self) -> pd.DataFrame: def load_lap_csv(self) -> pd.DataFrame:
"""Load lap-level CSV data.""" """Load lap-level CSV data."""
@@ -245,12 +517,73 @@ class PiSimulator:
response = await asyncio.wait_for(websocket.recv(), timeout=5.0) response = await asyncio.wait_for(websocket.recv(), timeout=5.0)
response_data = json.loads(response) response_data = json.loads(response)
if response_data.get("type") == "control_command": # Handle silent acknowledgment (no control update, no voice)
if response_data.get("type") == "acknowledgment":
message = response_data.get("message", "")
logger.info(f"[ACK] {message}")
# Now wait for the actual control command update
try:
update = await asyncio.wait_for(websocket.recv(), timeout=45.0)
update_data = json.loads(update)
if update_data.get("type") == "control_command_update":
brake_bias = update_data.get("brake_bias", 5)
diff_slip = update_data.get("differential_slip", 5)
strategy_name = update_data.get("strategy_name", "N/A")
risk_level = update_data.get("risk_level", "medium")
reasoning = update_data.get("reasoning", "")
# Check if controls changed from previous
controls_changed = (
self.current_controls["brake_bias"] != brake_bias or
self.current_controls["differential_slip"] != diff_slip
)
# Check if risk level changed
risk_level_changed = (
self.current_risk_level is not None and
self.current_risk_level != risk_level
)
self.previous_controls = self.current_controls.copy()
self.current_controls["brake_bias"] = brake_bias
self.current_controls["differential_slip"] = diff_slip
self.current_risk_level = risk_level
logger.info(f"[UPDATED] Strategy-Based Control:")
logger.info(f" ├─ Brake Bias: {brake_bias}/10")
logger.info(f" ├─ Differential Slip: {diff_slip}/10")
logger.info(f" ├─ Strategy: {strategy_name}")
logger.info(f" ├─ Risk Level: {risk_level}")
if reasoning:
logger.info(f" └─ Reasoning: {reasoning[:100]}...")
self.apply_controls(brake_bias, diff_slip)
# Voice announcement if controls OR risk level changed
if controls_changed or risk_level_changed:
if risk_level_changed and not controls_changed:
logger.info(f"[VOICE] Risk level changed to {risk_level}")
await self.voice_announcer.announce_strategy(update_data)
else:
logger.info(f"[VOICE] Skipping announcement - controls and risk level unchanged")
except asyncio.TimeoutError:
logger.warning("[TIMEOUT] Strategy generation took too long")
elif response_data.get("type") == "control_command":
brake_bias = response_data.get("brake_bias", 5) brake_bias = response_data.get("brake_bias", 5)
diff_slip = response_data.get("differential_slip", 5) diff_slip = response_data.get("differential_slip", 5)
strategy_name = response_data.get("strategy_name", "N/A") strategy_name = response_data.get("strategy_name", "N/A")
message = response_data.get("message") message = response_data.get("message")
# Store previous values before updating
controls_changed = (
self.current_controls["brake_bias"] != brake_bias or
self.current_controls["differential_slip"] != diff_slip
)
self.previous_controls = self.current_controls.copy()
self.current_controls["brake_bias"] = brake_bias self.current_controls["brake_bias"] = brake_bias
self.current_controls["differential_slip"] = diff_slip self.current_controls["differential_slip"] = diff_slip
@@ -265,6 +598,10 @@ class PiSimulator:
# Apply controls (in real Pi, this would adjust hardware) # Apply controls (in real Pi, this would adjust hardware)
self.apply_controls(brake_bias, diff_slip) self.apply_controls(brake_bias, diff_slip)
# Voice announcement ONLY if controls changed
if controls_changed:
await self.voice_announcer.announce_control(response_data)
# If message indicates processing, wait for update # If message indicates processing, wait for update
if message and "Processing" in message: if message and "Processing" in message:
logger.info(" AI is generating strategies, waiting for update...") logger.info(" AI is generating strategies, waiting for update...")
@@ -276,16 +613,43 @@ class PiSimulator:
brake_bias = update_data.get("brake_bias", 5)
diff_slip = update_data.get("differential_slip", 5)
strategy_name = update_data.get("strategy_name", "N/A")
risk_level = update_data.get("risk_level", "medium")
reasoning = update_data.get("reasoning", "")
# Check if controls changed from previous
controls_changed = (
self.current_controls["brake_bias"] != brake_bias or
self.current_controls["differential_slip"] != diff_slip
)
# Check if risk level changed
risk_level_changed = (
self.current_risk_level is not None and
self.current_risk_level != risk_level
)
self.previous_controls = self.current_controls.copy()
self.current_controls["brake_bias"] = brake_bias
self.current_controls["differential_slip"] = diff_slip
self.current_risk_level = risk_level
logger.info(f"[UPDATED] Strategy-Based Control:")
logger.info(f" ├─ Brake Bias: {brake_bias}/10")
logger.info(f" ├─ Differential Slip: {diff_slip}/10")
logger.info(f" ├─ Strategy: {strategy_name}")
logger.info(f" ├─ Risk Level: {risk_level}")
if reasoning:
logger.info(f" └─ Reasoning: {reasoning[:100]}...")
self.apply_controls(brake_bias, diff_slip)
# Voice announcement if controls OR risk level changed
if controls_changed or risk_level_changed:
if risk_level_changed and not controls_changed:
logger.info(f"[VOICE] Risk level changed to {risk_level}")
await self.voice_announcer.announce_strategy(update_data)
else:
logger.info(f"[VOICE] Skipping announcement - controls and risk level unchanged")
except asyncio.TimeoutError:
logger.warning("[TIMEOUT] Strategy generation took too long")
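Both hunks above gate the voice announcer on the same predicate: announce when controls change, or when the risk level changes after it has been seen at least once. A minimal sketch of that decision as a standalone helper (`should_announce` is hypothetical, not a name from the diff):

```python
def should_announce(prev_controls: dict, new_controls: dict,
                    prev_risk, new_risk) -> bool:
    """Mirror the diff's `controls_changed or risk_level_changed` gating:
    announce on any control change, or on a risk-level change once a
    previous risk level exists (the very first reading never triggers)."""
    controls_changed = prev_controls != new_controls
    risk_changed = prev_risk is not None and prev_risk != new_risk
    return controls_changed or risk_changed
```

Factoring the predicate out like this also makes the skip path easy to unit-test without a WebSocket connection.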
@@ -344,7 +708,7 @@ class PiSimulator:
async def main():
parser = argparse.ArgumentParser(
description="WebSocket-based Raspberry Pi Telemetry Simulator with Voice Announcements"
)
parser.add_argument(
"--interval",
@@ -370,6 +734,11 @@ async def main():
default=None,
help="Path to lap CSV file (default: scripts/ALONSO_2023_MONZA_LAPS.csv)"
)
parser.add_argument(
"--enable-voice",
action="store_true",
help="Enable voice announcements for strategy updates (requires elevenlabs and ELEVENLABS_API_KEY)"
)
args = parser.parse_args()
@@ -389,7 +758,8 @@ async def main():
csv_path=csv_path,
ws_url=args.ws_url,
enrichment_url=args.enrichment_url,
interval=args.interval,
voice_enabled=args.enable_voice
)
logger.info("Starting WebSocket Pi Simulator")
@@ -397,6 +767,7 @@ async def main():
logger.info(f"Enrichment Service: {args.enrichment_url}")
logger.info(f"WebSocket URL: {args.ws_url}")
logger.info(f"Interval: {args.interval}s per lap")
logger.info(f"Voice Announcements: {'Enabled' if args.enable_voice and VOICE_AVAILABLE else 'Disabled'}")
logger.info("-" * 60)
await simulator.stream_telemetry()

scripts/test_voice.py Normal file

@@ -0,0 +1,76 @@
#!/usr/bin/env python3
"""
Quick test script for ElevenLabs voice announcements.
"""
import sys
import os
from pathlib import Path
sys.path.insert(0, '.')
try:
from elevenlabs.client import ElevenLabs
from elevenlabs import save
from dotenv import load_dotenv
load_dotenv()
# Check API key
api_key = os.getenv("ELEVENLABS_API_KEY")
if not api_key:
print("✗ ELEVENLABS_API_KEY not found in environment")
print("Create a .env file with: ELEVENLABS_API_KEY=your_key_here")
sys.exit(1)
# Initialize client with same settings as voice_service.py
client = ElevenLabs(api_key=api_key)
voice_id = "mbBupyLcEivjpxh8Brkf" # Rachel voice
# Test message
test_message = "Lap 3. Strategy: Conservative One Stop. Brake bias forward for turn in. Current tire degradation suggests extended first stint."
print(f"Testing ElevenLabs voice announcement...")
print(f"Voice ID: {voice_id} (Rachel)")
print(f"Message: {test_message}")
print("-" * 60)
# Synthesize
audio = client.text_to_speech.convert(
voice_id=voice_id,
text=test_message,
model_id="eleven_multilingual_v2",
voice_settings={
"stability": 0.4,
"similarity_boost": 0.95,
"style": 0.7,
"use_speaker_boost": True
}
)
# Save audio
output_dir = Path("data/audio")
output_dir.mkdir(parents=True, exist_ok=True)
output_path = output_dir / "test_voice.mp3"
save(audio, str(output_path))
print(f"✓ Audio saved to: {output_path}")
# Play audio
print("✓ Playing audio...")
if sys.platform == "darwin": # macOS
os.system(f"afplay {output_path}")
elif sys.platform == "linux":
os.system(f"mpg123 {output_path} || ffplay -nodisp -autoexit {output_path}")
elif sys.platform == "win32":
os.system(f"start {output_path}")
print("✓ Voice test completed successfully!")
except ImportError as e:
print(f"✗ elevenlabs not available: {e}")
print("Install with: pip install elevenlabs python-dotenv")
except Exception as e:
print(f"✗ Voice test failed: {e}")
import traceback
traceback.print_exc()
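The platform dispatch at the end of the test script can be factored into a pure helper, which makes the branch logic testable without actually playing audio. A sketch (the `playback_command` name is illustrative, not from the script):

```python
def playback_command(platform: str, path: str):
    """Return the shell command used to play an MP3 on each platform,
    mirroring the os.system() branches in test_voice.py."""
    if platform == "darwin":           # macOS
        return f"afplay {path}"
    if platform.startswith("linux"):   # Linux: try mpg123, fall back to ffplay
        return f"mpg123 {path} || ffplay -nodisp -autoexit {path}"
    if platform == "win32":            # Windows
        return f"start {path}"
    return None                        # unknown platform: caller skips playback
```

The caller would pass `sys.platform` and skip playback on `None` instead of silently running an empty command.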


@@ -13,13 +13,8 @@ from dotenv import load_dotenv
load_dotenv()
class RaceEngineerVoice:
def __init__(self, voice_id: str = "mbBupyLcEivjpxh8Brkf"):
"""
Initialize ElevenLabs voice service.
Args:
voice_id: ElevenLabs voice ID (Rachel is default, professional female voice)
"""
self.client = ElevenLabs(api_key=os.getenv("ELEVENLABS_API_KEY"))
self.voice_id = voice_id

start.py Executable file

@@ -0,0 +1,99 @@
#!/usr/bin/env python3
"""
Startup supervisor for HPC Simulation Services.
Manages both enrichment service and AI intelligence layer.
"""
import subprocess
import sys
import time
import signal
import os
processes = []
def cleanup(signum=None, frame=None):
"""Clean up all child processes."""
print("\n🛑 Shutting down all services...")
for proc in processes:
try:
proc.terminate()
proc.wait(timeout=5)
except subprocess.TimeoutExpired:
proc.kill()
except Exception as e:
print(f"Error stopping process: {e}")
sys.exit(0)
# Register signal handlers
signal.signal(signal.SIGINT, cleanup)
signal.signal(signal.SIGTERM, cleanup)
def main():
print("🚀 Starting HPC Simulation Services...")
# Start enrichment service
print("📊 Starting Enrichment Service on port 8000...")
enrichment_proc = subprocess.Popen(
[sys.executable, "scripts/serve.py"],
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
text=True,
bufsize=1
)
processes.append(enrichment_proc)
print(f" ├─ PID: {enrichment_proc.pid}")
# Give it time to start
time.sleep(5)
# Check if still running
if enrichment_proc.poll() is not None:
print("❌ Enrichment service failed to start")
cleanup()
return 1
print(" └─ ✅ Enrichment service started successfully")
# Start AI Intelligence Layer
print("🤖 Starting AI Intelligence Layer on port 9000...")
ai_proc = subprocess.Popen(
[sys.executable, "ai_intelligence_layer/main.py"],
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
text=True,
bufsize=1
)
processes.append(ai_proc)
print(f" ├─ PID: {ai_proc.pid}")
# Give it time to start
time.sleep(3)
# Check if still running
if ai_proc.poll() is not None:
print("❌ AI Intelligence Layer failed to start")
cleanup()
return 1
print(" └─ ✅ AI Intelligence Layer started successfully")
print("\n✨ All services running!")
print(" 📊 Enrichment Service: http://0.0.0.0:8000")
print(" 🤖 AI Intelligence Layer: ws://0.0.0.0:9000/ws/pi")
print("\nPress Ctrl+C to stop all services\n")
# Monitor processes
try:
while True:
# Check if any process has died
for proc in processes:
if proc.poll() is not None:
print(f"⚠️ Process {proc.pid} died unexpectedly!")
cleanup()
return 1
time.sleep(1)
except KeyboardInterrupt:
cleanup()
return 0
if __name__ == "__main__":
sys.exit(main())
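The monitor loop in `start.py` relies on `Popen.poll()` returning `None` while a child is alive and the exit code afterwards. A self-contained sketch of that liveness check, using a short-lived stand-in child instead of the real services:

```python
import subprocess
import sys

# Spawn a short-lived child, as start.py does for each service.
proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(0.5)"])

alive_at_start = proc.poll() is None   # None => still running
proc.wait(timeout=10)                  # let it exit cleanly
exit_code = proc.poll()                # now the exit status (0 on success)

print(alive_at_start, exit_code)
```

This is the same primitive used both for the startup health check (`poll()` right after the sleep) and for the death detection in the monitoring loop.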

start.sh Executable file

@@ -0,0 +1,58 @@
#!/bin/bash
# Startup script for Render.com deployment
# Starts both the enrichment service and AI intelligence layer
set -e # Exit on error
echo "🚀 Starting HPC Simulation Services..."
# Trap to handle cleanup on exit
cleanup() {
echo "🛑 Shutting down services..."
kill $ENRICHMENT_PID $AI_PID 2>/dev/null || true
exit
}
trap cleanup SIGINT SIGTERM
# Start enrichment service in background
echo "📊 Starting Enrichment Service on port 8000..."
python scripts/serve.py &
ENRICHMENT_PID=$!
echo " ├─ PID: $ENRICHMENT_PID"
# Give enrichment service time to start
sleep 5
# Check if enrichment service is still running
if ! kill -0 $ENRICHMENT_PID 2>/dev/null; then
echo "❌ Enrichment service failed to start"
exit 1
fi
echo " └─ ✅ Enrichment service started successfully"
# Start AI Intelligence Layer in background
echo "🤖 Starting AI Intelligence Layer on port 9000..."
python ai_intelligence_layer/main.py &
AI_PID=$!
echo " ├─ PID: $AI_PID"
# Give AI layer time to start
sleep 3
# Check if AI layer is still running
if ! kill -0 $AI_PID 2>/dev/null; then
echo "❌ AI Intelligence Layer failed to start"
kill $ENRICHMENT_PID 2>/dev/null || true
exit 1
fi
echo " └─ ✅ AI Intelligence Layer started successfully"
echo ""
echo "✨ All services running!"
echo " 📊 Enrichment Service: http://0.0.0.0:8000"
echo " 🤖 AI Intelligence Layer: ws://0.0.0.0:9000/ws/pi"
echo ""
echo "Press Ctrl+C to stop all services"
# Wait for both processes (this keeps the script running)
wait $ENRICHMENT_PID $AI_PID
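The `kill -0 $PID` checks in the script deliver no signal at all: signal 0 only tests whether the process exists and is signalable, which is why it works as a liveness probe. A minimal sketch of the pattern in isolation:

```shell
# Signal 0 probes for existence without delivering anything.
sleep 2 &
CHILD=$!
if kill -0 "$CHILD" 2>/dev/null; then STATUS="alive"; else STATUS="dead"; fi
kill "$CHILD" 2>/dev/null
wait "$CHILD" 2>/dev/null || true
echo "$STATUS"
```

After the child exits (or is killed), the same `kill -0` would fail and the `else` branch would report `dead`.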