Compare commits
10 commits: 89924c1580...main
| Author | SHA1 | Date |
|---|---|---|
| | c8f4063e6b | |
| | 06a1c93c8a | |
| | 0628901dc3 | |
| | bae972aaef | |
| | aab4ab8767 | |
| | 0ee6d64969 | |
| | 076dc1659e | |
| | 23ad5ded24 | |
| | 37aaef3f62 | |
| | 154283e7c6 | |
40  .env.production.example  Normal file
@@ -0,0 +1,40 @@
# Production Environment Configuration for Render.com
# Copy this to .env and fill in your values when deploying

# Gemini API Configuration
GEMINI_API_KEY=your_gemini_api_key_here
GEMINI_MODEL=gemini-2.5-flash

# Environment (MUST be "production" for deployment)
ENVIRONMENT=production

# Service Configuration
AI_SERVICE_PORT=9000
AI_SERVICE_HOST=0.0.0.0

# Production URL (your Render.com app URL)
# Example: https://hpc-simulation-ai.onrender.com
PRODUCTION_URL=https://your-app-name.onrender.com

# Enrichment Service Integration
# In production on Render, services communicate internally
# Leave as localhost since both services run in same container
ENRICHMENT_SERVICE_URL=http://localhost:8000
ENRICHMENT_FETCH_LIMIT=10

# Demo Mode (enables caching and consistent responses for demos)
DEMO_MODE=false

# Fast Mode (use shorter prompts for faster responses)
FAST_MODE=true

# Strategy Generation Settings
STRATEGY_COUNT=3 # Number of strategies to generate (3 for testing, 20 for production)

# Performance Settings
BRAINSTORM_TIMEOUT=90
ANALYZE_TIMEOUT=120
GEMINI_MAX_RETRIES=3

# ElevenLabs API Key (optional, for voice features)
ELEVENLABS_API_KEY=your_elevenlabs_api_key_here
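The template above can be exercised locally with a minimal stdlib parser (a sketch only; in this project the values are actually loaded via `load_dotenv()` and pydantic-settings):

```python
import tempfile

def load_env_file(path: str) -> dict:
    """Parse KEY=VALUE lines from a .env-style file, skipping blanks and full-line comments."""
    values = {}
    with open(path) as fh:
        for raw in fh:
            line = raw.strip()
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            # Strip trailing inline comments like "STRATEGY_COUNT=3 # ..."
            # (naive: assumes values never contain a literal '#')
            value = value.split("#", 1)[0].strip()
            values[key.strip()] = value
    return values

if __name__ == "__main__":
    with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as fh:
        fh.write("# comment\nENVIRONMENT=production\nSTRATEGY_COUNT=3 # inline note\n")
        path = fh.name
    print(load_env_file(path))
```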
249  ENVIRONMENT_CONFIG.md  Normal file
@@ -0,0 +1,249 @@
# Environment Configuration Guide

## Overview

The HPC Simulation system automatically adapts between **development** and **production** environments based on the `ENVIRONMENT` variable.

## Configuration Files

### Development (Local)
- File: `.env` (in project root)
- `ENVIRONMENT=development`
- Uses `localhost` URLs

### Production (Render.com)
- Set environment variables in Render dashboard
- `ENVIRONMENT=production`
- `PRODUCTION_URL=https://your-app.onrender.com`
- Automatically adapts all URLs

## Key Environment Variables

### Required for Both Environments

| Variable | Description | Example |
|----------|-------------|---------|
| `GEMINI_API_KEY` | Google Gemini API key | `AIzaSy...` |
| `ENVIRONMENT` | Environment mode | `development` or `production` |

### Production-Specific

| Variable | Description | Example |
|----------|-------------|---------|
| `PRODUCTION_URL` | Your Render.com app URL | `https://hpc-ai.onrender.com` |

### Optional

| Variable | Default | Description |
|----------|---------|-------------|
| `ELEVENLABS_API_KEY` | - | Voice synthesis (optional) |
| `GEMINI_MODEL` | `gemini-1.5-pro` | AI model version |
| `STRATEGY_COUNT` | `3` | Strategies per lap |
| `FAST_MODE` | `true` | Use shorter prompts |

## How It Works

### URL Auto-Configuration

`config.py` automatically provides environment-aware URLs:

```python
settings = get_settings()

# Automatically returns correct URL based on environment:
settings.base_url                 # http://localhost:9000 OR https://your-app.onrender.com
settings.websocket_url            # ws://localhost:9000 OR wss://your-app.onrender.com
settings.internal_enrichment_url  # Always http://localhost:8000 (internal)
```

### Development Environment

```bash
ENVIRONMENT=development
# Result:
# - base_url: http://localhost:9000
# - websocket_url: ws://localhost:9000
# - Dashboard connects to: ws://localhost:9000/ws/dashboard
```

### Production Environment

```bash
ENVIRONMENT=production
PRODUCTION_URL=https://hpc-ai.onrender.com
# Result:
# - base_url: https://hpc-ai.onrender.com
# - websocket_url: wss://hpc-ai.onrender.com
# - Dashboard connects to: wss://hpc-ai.onrender.com/ws/dashboard
```

## Component-Specific Configuration

### 1. AI Intelligence Layer

**Development:**
- Binds to: `0.0.0.0:9000`
- Enrichment client connects to: `http://localhost:8000`
- Dashboard WebSocket: `ws://localhost:9000/ws/dashboard`

**Production:**
- Binds to: `0.0.0.0:9000` (Render exposes externally)
- Enrichment client connects to: `http://localhost:8000` (internal)
- Dashboard WebSocket: `wss://your-app.onrender.com/ws/dashboard`

### 2. Enrichment Service

**Development:**
- Binds to: `0.0.0.0:8000`
- Accessed at: `http://localhost:8000`

**Production:**
- Binds to: `0.0.0.0:8000` (internal only)
- Accessed internally at: `http://localhost:8000`
- Not exposed externally (behind AI layer)

### 3. Dashboard (Frontend)

**Auto-detects environment:**
```javascript
// Automatically uses current host and protocol
const protocol = window.location.protocol === 'https:' ? 'wss:' : 'ws:';
const host = window.location.host;
const wsUrl = `${protocol}//${host}/ws/dashboard`;
```

### 4. Pi Simulator (Client)

**Development:**
```bash
python simulate_pi_websocket.py \
  --ws-url ws://localhost:9000/ws/pi \
  --enrichment-url http://localhost:8000
```

**Production:**
```bash
python simulate_pi_websocket.py \
  --ws-url wss://your-app.onrender.com/ws/pi \
  --enrichment-url https://your-app.onrender.com  # If exposed
```

## Quick Setup

### Local Development

1. **Create `.env` in project root:**
   ```bash
   GEMINI_API_KEY=your_key_here
   ENVIRONMENT=development
   ```

2. **Start services:**
   ```bash
   ./start.sh
   ```

### Render.com Production

1. **Set environment variables in Render dashboard:**
   ```
   GEMINI_API_KEY=your_key_here
   ENVIRONMENT=production
   PRODUCTION_URL=https://your-app-name.onrender.com
   ```

2. **Deploy** - URLs auto-configure!
## Troubleshooting

### Issue: WebSocket connection fails in production

**Check:**
1. `ENVIRONMENT=production` is set
2. `PRODUCTION_URL` matches your actual Render URL (including `https://`)
3. Dashboard uses `wss://` protocol (should auto-detect)

### Issue: Enrichment service unreachable

**In production:**
- Both services run in same container
- Internal communication uses `http://localhost:8000`
- This is automatic, no configuration needed

**In development:**
- Ensure enrichment service is running: `python scripts/serve.py`
- Check `http://localhost:8000/health`

### Issue: Pi simulator can't connect

**Development:**
```bash
# Test connection
curl http://localhost:9000/health
wscat -c ws://localhost:9000/ws/pi
```

**Production:**
```bash
# Test connection
curl https://your-app.onrender.com/health
wscat -c wss://your-app.onrender.com/ws/pi
```
## Environment Variable Priority

1. **Render Environment Variables** (highest priority in production)
2. **`.env` file** (development)
3. **Default values** (in `config.py`)
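Illustratively, that lookup order reduces to a small chain (a sketch of the behavior only, not the pydantic-settings implementation):

```python
import os
from typing import Optional

# Code defaults, lowest priority (values mirror the tables above)
CONFIG_DEFAULTS = {"GEMINI_MODEL": "gemini-1.5-pro", "STRATEGY_COUNT": "3"}

def resolve_setting(key: str, dotenv_values: dict) -> Optional[str]:
    """Process environment (set by Render) wins, then .env file values, then code defaults."""
    if key in os.environ:
        return os.environ[key]
    if key in dotenv_values:
        return dotenv_values[key]
    return CONFIG_DEFAULTS.get(key)
```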

## Best Practices

### Development
- ✅ Use `.env` file
- ✅ Keep `ENVIRONMENT=development`
- ✅ Use `localhost` URLs
- ❌ Don't commit `.env` to git

### Production
- ✅ Set all variables in Render dashboard
- ✅ Use `ENVIRONMENT=production`
- ✅ Set `PRODUCTION_URL` after deployment
- ✅ Use HTTPS/WSS protocols
- ❌ Don't hardcode URLs in code

## Example Configurations

### .env (Development)
```bash
GEMINI_API_KEY=your_gemini_api_key_here
GEMINI_MODEL=gemini-2.5-flash
ENVIRONMENT=development
ELEVENLABS_API_KEY=your_key_here
STRATEGY_COUNT=3
FAST_MODE=true
```

### Render Environment Variables (Production)
```bash
GEMINI_API_KEY=your_gemini_api_key_here
GEMINI_MODEL=gemini-2.5-flash
ENVIRONMENT=production
PRODUCTION_URL=https://hpc-simulation-ai.onrender.com
ELEVENLABS_API_KEY=your_key_here
STRATEGY_COUNT=3
FAST_MODE=true
```

## Migration Checklist

When deploying to production:

- [ ] Set `ENVIRONMENT=production` in Render
- [ ] Deploy and get Render URL
- [ ] Set `PRODUCTION_URL` with your Render URL
- [ ] Test health endpoint: `https://your-app.onrender.com/health`
- [ ] Test WebSocket: `wss://your-app.onrender.com/ws/pi`
- [ ] Open dashboard: `https://your-app.onrender.com/dashboard`
- [ ] Verify logs show production URLs

Done! The system will automatically use production URLs for all connections.
117  README.md
@@ -1,94 +1,54 @@
-# HPCSimSite
-HPC simulation site
+# Guido.tech:
+## An F1 AI Race Engineer System
 
-# F1 Virtual Race Engineer — Enrichment Module
+Real-time F1 race strategy system combining telemetry enrichment with AI-powered strategy generation. The system receives lap-by-lap telemetry from vehicle controller simulation, enriches it with performance analytics, and generates dynamic race strategies using Google Gemini AI before sending control updates back to the vehicle controller simulation.
 
-This repo contains a minimal, dependency-free Python module to enrich Raspberry Pi telemetry (derived from FastF1) with HPC-style analytics features. It simulates the first LLM stage (data enrichment) using deterministic heuristics so you can run the pipeline locally and in CI without external services.
+## Architecture
 
-## What it does
-- Accepts lap-level telemetry JSON records.
-- Produces an enriched record with:
-  - aero_efficiency (0..1)
-  - tire_degradation_index (0..1, higher=worse)
-  - ers_charge (0..1)
-  - fuel_optimization_score (0..1)
-  - driver_consistency (0..1)
-  - weather_impact (low|medium|high)
+The system consists of two main services:
 
-## Expected input schema
-Fields are extensible; these cover the initial POC.
+1. **Enrichment Service** (`hpcsim/`) - Port 8000
+   - Receives raw telemetry from Raspberry Pi simulator
+   - Enriches data with tire degradation, pace trends, pit window predictions
+   - Forwards to AI Layer via webhook
 
-Required (or sensible defaults applied):
-- lap: int
-- speed: float (km/h)
-- throttle: float (0..1)
-- brake: float (0..1)
-- tire_compound: string (soft|medium|hard|inter|wet)
-- fuel_level: float (0..1)
+2. **AI Intelligence Layer** (`ai_intelligence_layer/`) - Port 9000
+   - WebSocket server for real-time Pi communication
+   - Generates race strategies using Google Gemini AI
+   - Sends control commands (brake bias, differential slip) back to Pi
+   - Web dashboard for monitoring (**visit port 9000 after starting the server to view the dashboard**)
 
-Optional:
-- ers: float (0..1)
-- track_temp: float (Celsius)
-- rain_probability: float (0..1)
+## Quick Start
 
-Example telemetry line (JSONL):
-{"lap":27,"speed":282,"throttle":0.91,"brake":0.05,"tire_compound":"medium","fuel_level":0.47}
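Whether fed over the old CLI path or the WebSocket path, a record like the example above can be normalized against the documented required fields (a toy sketch; the default values chosen here are illustrative, not the repo's actual ones):

```python
from typing import Any, Dict

# Illustrative defaults; "Required (or sensible defaults applied)" leaves the exact values open
TELEMETRY_DEFAULTS: Dict[str, Any] = {
    "lap": 0,
    "speed": 0.0,             # km/h
    "throttle": 0.0,          # 0..1
    "brake": 0.0,             # 0..1
    "tire_compound": "medium",
    "fuel_level": 1.0,        # 0..1
}

def normalize_telemetry(record: Dict[str, Any]) -> Dict[str, Any]:
    """Fill missing fields with defaults and clamp ratio fields into the documented 0..1 range."""
    out = {**TELEMETRY_DEFAULTS, **record}
    for key in ("throttle", "brake", "fuel_level"):
        out[key] = min(1.0, max(0.0, float(out[key])))
    return out
```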
+### Prerequisites
 
-## Output schema (enriched)
-Example:
-{"lap":27,"aero_efficiency":0.83,"tire_degradation_index":0.65,"ers_charge":0.72,"fuel_optimization_score":0.91,"driver_consistency":0.89,"weather_impact":"medium"}
+- Python 3.9+
+- Google Gemini API key
 
-## Quick start
 
-### Run the CLI
-The CLI reads JSON Lines (one JSON object per line) from stdin or a file and writes enriched JSON lines to stdout or a file.
+### 1. Install Dependencies
 
 ```bash
-python3 scripts/enrich_telemetry.py -i telemetry.jsonl -o enriched.jsonl
+# Install enrichment service dependencies
+pip install -r requirements.txt
+
+# Install AI layer dependencies
+pip install -r ai_intelligence_layer/requirements.txt
 ```
 
-Or stream:
+### 2. Configure Environment
 
+Create `ai_intelligence_layer/.env`:
 
 ```bash
-cat telemetry.jsonl | python3 scripts/enrich_telemetry.py > enriched.jsonl
+GEMINI_API_KEY=your_gemini_api_key_here
+ENVIRONMENT=development
+FAST_MODE=true
+STRATEGY_COUNT=3
 ```
 
-### Library usage
+### 3. Run the System
 
-```python
-from hpcsim.enrichment import Enricher
-
-enricher = Enricher()
-out = enricher.enrich({
-    "lap": 1,
-    "speed": 250,
-    "throttle": 0.8,
-    "brake": 0.1,
-    "tire_compound": "medium",
-    "fuel_level": 0.6,
-})
-print(out)
-```
 
-## Notes
-- The enrichment maintains state across laps (e.g., cumulative tire wear, consistency from last up to 5 laps). If you restart the process mid-race, these will reset; you can re-feed prior laps to restore state.
-- If your FastF1-derived telemetry has a different shape, share a sample and we can add adapters.
 
-## Tests
-
-Run minimal tests:
-
-```bash
-python3 -m unittest tests/test_enrichment.py -v
-```
 
-## API reference (Enrichment Service)
-
-Base URL (local): http://localhost:8000
-
-Interactive docs: http://localhost:8000/docs (Swagger) and http://localhost:8000/redoc
 
-### Run the API server
+**Option A: Run both services together (recommended)**
 
 ```bash
 python3 scripts/serve.py
@@ -122,16 +82,7 @@ Accepts raw Raspberry Pi or FastF1-style telemetry, normalizes field names, enri
Example request:

```bash
curl -s -X POST http://localhost:8000/ingest/telemetry \
  -H "Content-Type: application/json" \
  -d '{
    "LapNumber": 27,
    "Speed": 282,
    "Throttle": 0.91,
    "Brakes": 0.05,
    "TyreCompound": "medium",
    "FuelRel": 0.47
  }'
```

Response 200 (application/json):
211  RENDER_DEPLOYMENT.md  Normal file
@@ -0,0 +1,211 @@
# Render.com Deployment Guide

## Quick Start

### 1. Render.com Configuration

**Service Type:** Web Service

**Build Command:**
```bash
pip install -r requirements.txt
```

**Start Command (choose one):**

#### Option A: Shell Script (Recommended)
```bash
./start.sh
```

#### Option B: Python Supervisor
```bash
python start.py
```

#### Option C: Direct Command
```bash
python scripts/serve.py & python ai_intelligence_layer/main.py
```
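`start.py` itself is not shown in this diff; a supervisor in that spirit could look roughly like this (a sketch under the assumption that it simply launches both entry points and exits when either dies):

```python
import subprocess
import sys
import time

def run_supervised(commands):
    """Start every command, poll until one exits, then terminate the rest."""
    procs = [subprocess.Popen(cmd) for cmd in commands]
    try:
        while True:
            for proc in procs:
                code = proc.poll()
                if code is not None:
                    return code  # first service to exit decides the outcome
            time.sleep(0.2)
    finally:
        for proc in procs:
            if proc.poll() is None:
                proc.terminate()

if __name__ == "__main__":
    sys.exit(run_supervised([
        [sys.executable, "scripts/serve.py"],
        [sys.executable, "ai_intelligence_layer/main.py"],
    ]))
```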

### 2. Environment Variables

Set these in Render.com dashboard:

**Required:**
```bash
GEMINI_API_KEY=your_gemini_api_key_here
ENVIRONMENT=production
PRODUCTION_URL=https://your-app-name.onrender.com  # Your Render app URL
```

**Optional:**
```bash
ELEVENLABS_API_KEY=your_elevenlabs_api_key_here  # For voice features
GEMINI_MODEL=gemini-2.5-flash
STRATEGY_COUNT=3
FAST_MODE=true
```

**Auto-configured (no need to set):**
```bash
# These are handled automatically by the config system
AI_SERVICE_PORT=9000
AI_SERVICE_HOST=0.0.0.0
ENRICHMENT_SERVICE_URL=http://localhost:8000  # Internal communication
```

### Important: Production URL

After deploying to Render, you'll get a URL like:
```
https://your-app-name.onrender.com
```

**You MUST set this URL in the environment variables:**
1. Go to Render dashboard → your service → Environment
2. Add: `PRODUCTION_URL=https://your-app-name.onrender.com`
3. The app will automatically use this for WebSocket connections and API URLs
### 3. Health Check

**Health Check Path:** `/health` (on port 9000)

**Health Check Command:**
```bash
curl http://localhost:9000/health
```
### 4. Port Configuration

- **Enrichment Service:** 8000 (internal)
- **AI Intelligence Layer:** 9000 (external, Render will expose this)

Render will automatically bind to the `PORT` environment variable.
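Honoring Render's injected `PORT` while keeping the 9000 default can be a one-liner (a sketch; the repo's startup code may already handle this):

```python
import os

def resolve_port(default: int = 9000) -> int:
    """Prefer the platform-injected PORT, fall back to the configured default."""
    raw = os.environ.get("PORT", "").strip()
    return int(raw) if raw else default
```

Usage would then be along the lines of `uvicorn.run(app, host="0.0.0.0", port=resolve_port())`.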

### 5. Files Required

- ✅ `start.sh` - Shell startup script
- ✅ `start.py` - Python startup supervisor
- ✅ `Procfile` - Render configuration
- ✅ `requirements.txt` - Python dependencies

### 6. Testing Locally

Test the startup script before deploying:

```bash
# Make executable
chmod +x start.sh

# Run locally
./start.sh
```

Or with Python supervisor:

```bash
python start.py
```
### 7. Deployment Steps

1. **Push to GitHub:**
   ```bash
   git add .
   git commit -m "Add Render deployment configuration"
   git push
   ```

2. **Create Render Service:**
   - Go to [render.com](https://render.com)
   - New → Web Service
   - Connect your GitHub repository
   - Select branch (main)

3. **Configure Service:**
   - Name: `hpc-simulation-ai`
   - Environment: `Python 3`
   - Build Command: `pip install -r requirements.txt`
   - Start Command: `./start.sh`

4. **Add Environment Variables:**
   - `GEMINI_API_KEY`
   - `ELEVENLABS_API_KEY` (optional)

5. **Deploy!** 🚀
### 8. Monitoring

Check logs in Render dashboard for:
- `📊 Starting Enrichment Service on port 8000...`
- `🤖 Starting AI Intelligence Layer on port 9000...`
- `✨ All services running!`

### 9. Connecting Clients

**WebSocket URL:**
```
wss://your-app-name.onrender.com/ws/pi
```

**Enrichment API:**
```
https://your-app-name.onrender.com/ingest/telemetry
```
### 10. Troubleshooting

**Services won't start:**
- Check environment variables are set
- Verify `start.sh` is executable: `chmod +x start.sh`
- Check build logs for dependency issues

**Port conflicts:**
- Render will set `PORT` automatically (9000 by default)
- Services bind to `0.0.0.0` for external access

**Memory issues:**
- Consider Render's paid plans for more resources
- Free tier may struggle with AI model loading
## Architecture on Render

```
┌─────────────────────────────────────┐
│        Render.com Container         │
├─────────────────────────────────────┤
│                                     │
│  ┌───────────────────────────────┐  │
│  │      start.sh / start.py      │  │
│  └──────────┬────────────────────┘  │
│             │                       │
│    ┌────────┴─────────┐             │
│    │                  │             │
│    ▼                  ▼             │
│  ┌──────────┐  ┌────────────────┐   │
│  │Enrichment│  │AI Intelligence │   │
│  │Service   │◄─│Layer           │   │
│  │:8000     │  │:9000           │   │
│  └──────────┘  └───────┬────────┘   │
│                        │            │
└────────────────────────┼────────────┘
                         │
                     Internet
                         │
                    ┌────▼─────┐
                    │  Client  │
                    │ (Pi/Web) │
                    └──────────┘
```
## Next Steps

1. Test locally with `./start.sh`
2. Commit and push to GitHub
3. Create Render service
4. Configure environment variables
5. Deploy and monitor logs
6. Update client connection URLs

Good luck! 🎉
@@ -1,4 +1,4 @@
-# F1 AI Intelligence Layer
+r# F1 AI Intelligence Layer
 
 **The core innovation of our HPC-powered race strategy system**
@@ -1,6 +1,8 @@
 """
 Configuration management for AI Intelligence Layer.
 Uses pydantic-settings for environment variable validation.
+Environment variables are loaded via load_dotenv() in main.py.
+Automatically adapts URLs for development vs production environments.
 """
 from pydantic_settings import BaseSettings, SettingsConfigDict
 from typing import Optional
@@ -9,6 +11,10 @@ from typing import Optional
 class Settings(BaseSettings):
     """Application settings loaded from environment variables."""
 
+    # Environment Configuration
+    environment: str = "development"  # "development" or "production"
+    production_url: Optional[str] = None  # e.g., "https://your-app.onrender.com"
+
     # Gemini API Configuration
     gemini_api_key: str
     gemini_model: str = "gemini-1.5-pro"
@@ -28,7 +34,7 @@ class Settings(BaseSettings):
     fast_mode: bool = True
 
     # Strategy Generation Settings
-    strategy_count: int = 3  # Number of strategies to generate (3 for fast testing)
+    strategy_count: int = 3  # Number of strategies to generate (3 for testing, 20 for production)
 
     # Performance Settings
     brainstorm_timeout: int = 30
@@ -36,12 +42,38 @@ class Settings(BaseSettings):
     gemini_max_retries: int = 3
 
     model_config = SettingsConfigDict(
         env_file=".env",
         env_file_encoding="utf-8",
         case_sensitive=False,
         extra="ignore"
     )
 
+    @property
+    def is_production(self) -> bool:
+        """Check if running in production environment."""
+        return self.environment.lower() == "production"
+
+    @property
+    def base_url(self) -> str:
+        """Get the base URL for the application."""
+        if self.is_production and self.production_url:
+            return self.production_url
+        return f"http://localhost:{self.ai_service_port}"
+
+    @property
+    def websocket_url(self) -> str:
+        """Get the WebSocket URL for the application."""
+        if self.is_production and self.production_url:
+            # Replace https:// with wss:// or http:// with ws://
+            return self.production_url.replace("https://", "wss://").replace("http://", "ws://")
+        return f"ws://localhost:{self.ai_service_port}"
+
+    @property
+    def internal_enrichment_url(self) -> str:
+        """Get the enrichment service URL (internal on Render)."""
+        if self.is_production:
+            # On Render, services communicate internally via localhost
+            return "http://localhost:8000"
+        return self.enrichment_service_url
 
 
 # Global settings instance
 settings: Optional[Settings] = None
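The URL properties added above boil down to a scheme swap plus a localhost fallback; a self-contained stand-in (mirroring, not importing, the `Settings` logic) can verify the behavior:

```python
from typing import Optional

class UrlSettings:
    """Minimal stand-in for the URL logic in the Settings class above."""

    def __init__(self, environment: str = "development",
                 production_url: Optional[str] = None,
                 ai_service_port: int = 9000):
        self.environment = environment
        self.production_url = production_url
        self.ai_service_port = ai_service_port

    @property
    def is_production(self) -> bool:
        return self.environment.lower() == "production"

    @property
    def base_url(self) -> str:
        if self.is_production and self.production_url:
            return self.production_url
        return f"http://localhost:{self.ai_service_port}"

    @property
    def websocket_url(self) -> str:
        if self.is_production and self.production_url:
            # https:// -> wss://, http:// -> ws://
            return self.production_url.replace("https://", "wss://").replace("http://", "ws://")
        return f"ws://localhost:{self.ai_service_port}"
```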
@@ -15,6 +15,10 @@ import random
 from typing import Dict, Any, List
 from datetime import datetime
 import json
+from dotenv import load_dotenv
+
+# Load environment variables from .env file in project root
+load_dotenv()
 
 from config import get_settings
 from models.input_models import (
@@ -354,7 +358,52 @@ async def websocket_dashboard_endpoint(websocket: WebSocket):
     await dashboard_manager.connect(websocket)
 
     try:
-        # Keep connection alive
+        # Send historical data from current session immediately after connection
+        buffer_data = telemetry_buffer.get_all()
+        if buffer_data and current_race_context:
+            logger.info(f"[Dashboard] Sending {len(buffer_data)} historical lap records to new dashboard")
+
+            # Reverse to get chronological order (oldest to newest)
+            buffer_data.reverse()
+
+            # Send each historical lap as a lap_data message
+            for i, telemetry in enumerate(buffer_data):
+                try:
+                    # Find matching strategy from history if available
+                    lap_strategy = None
+                    for strat in strategy_history:
+                        if strat.get("lap") == telemetry.lap:
+                            lap_strategy = {
+                                "strategy_name": strat.get("strategy_name"),
+                                "risk_level": strat.get("risk_level"),
+                                "brief_description": strat.get("brief_description"),
+                                "reasoning": strat.get("reasoning")
+                            }
+                            break
+
+                    # Send historical lap data
+                    await websocket.send_json({
+                        "type": "lap_data",
+                        "vehicle_id": 1,  # Assume single vehicle for now
+                        "lap_data": telemetry.model_dump(),
+                        "race_context": {
+                            "position": current_race_context.driver_state.current_position,
+                            "gap_to_leader": current_race_context.driver_state.gap_to_leader,
+                            "gap_to_ahead": current_race_context.driver_state.gap_to_ahead
+                        },
+                        "control_output": last_control_command if i == len(buffer_data) - 1 else {"brake_bias": 5, "differential_slip": 5},
+                        "strategy": lap_strategy,
+                        "timestamp": datetime.now().isoformat(),
+                        "historical": True  # Mark as historical data
+                    })
+                except Exception as e:
+                    logger.error(f"[Dashboard] Error sending historical lap {telemetry.lap}: {e}")
+
+            logger.info(f"[Dashboard] Historical data transmission complete")
+        else:
+            logger.info("[Dashboard] No historical data to send (buffer empty or no race context)")
+
+        # Keep connection alive and handle incoming messages
         while True:
+            # Receive any messages (mostly just keepalive pings)
             data = await websocket.receive_text()
@@ -445,16 +494,36 @@ async def websocket_pi_endpoint(websocket: WebSocket):
             logger.info(f"LAP {lap_number} - GENERATING STRATEGY")
             logger.info(f"{'='*60}")
 
-            # Send immediate acknowledgment while processing
-            # Use last known control values instead of resetting to neutral
+            # Send SILENT acknowledgment to prevent timeout (no control update)
+            # This tells the Pi "we're working on it" without triggering voice/controls
             await websocket.send_json({
-                "type": "control_command",
+                "type": "acknowledgment",
                 "lap": lap_number,
-                "brake_bias": last_control_command["brake_bias"],
-                "differential_slip": last_control_command["differential_slip"],
-                "message": "Processing strategies (maintaining previous settings)..."
+                "message": "Processing strategies, please wait..."
             })
 
+            # Create a background task to send periodic keepalive pings during strategy generation
+            # This prevents WebSocket timeout during long AI operations
+            keepalive_active = asyncio.Event()
+
+            async def send_keepalive():
+                """Send periodic pings to keep WebSocket alive during long operations."""
+                while not keepalive_active.is_set():
+                    try:
+                        await asyncio.sleep(10)  # Send keepalive every 10 seconds
+                        if not keepalive_active.is_set():
+                            await websocket.send_json({
+                                "type": "keepalive",
+                                "timestamp": datetime.now().isoformat()
+                            })
+                            logger.debug(f"[WebSocket] Sent keepalive ping for lap {lap_number}")
+                    except Exception as e:
+                        logger.error(f"[WebSocket] Keepalive error: {e}")
+                        break
+
+            # Start keepalive task
+            keepalive_task = asyncio.create_task(send_keepalive())
+
             # Generate strategies (this is the slow part)
             try:
                 response = await strategy_generator.generate(
@@ -463,6 +532,10 @@ async def websocket_pi_endpoint(websocket: WebSocket):
                     strategy_history=strategy_history
                 )
 
+                # Stop keepalive task
+                keepalive_active.set()
+                await keepalive_task
+
                 # Extract top strategy (first one)
                 top_strategy = response.strategies[0] if response.strategies else None
@@ -500,6 +573,7 @@ async def websocket_pi_endpoint(websocket: WebSocket):
                     "brake_bias": control_command["brake_bias"],
                     "differential_slip": control_command["differential_slip"],
                     "strategy_name": top_strategy.strategy_name if top_strategy else "N/A",
+                    "risk_level": top_strategy.risk_level if top_strategy else "medium",
                     "total_strategies": len(response.strategies),
                     "reasoning": control_command.get("reasoning", "")
                 })
@@ -530,6 +604,13 @@ async def websocket_pi_endpoint(websocket: WebSocket):
                 logger.info(f"{'='*60}\n")
 
             except Exception as e:
+                # Stop keepalive task on error
+                keepalive_active.set()
+                try:
+                    await keepalive_task
+                except:
+                    pass
+
                 logger.error(f"[WebSocket] Strategy generation failed: {e}")
                 # Send error but keep neutral controls
                 await websocket.send_json({
@@ -691,7 +772,10 @@ if __name__ == "__main__":
         "main:app",
         host=settings.ai_service_host,
         port=settings.ai_service_port,
-        reload=True
+        reload=True,
+        ws_ping_interval=20,  # Send ping every 20 seconds
+        ws_ping_timeout=60,  # Wait up to 60 seconds for pong response
+        timeout_keep_alive=75  # HTTP keepalive timeout
     )
@@ -16,7 +16,8 @@ class TelemetryClient:
     def __init__(self):
         """Initialize telemetry client."""
         settings = get_settings()
-        self.base_url = settings.enrichment_service_url
+        # Use internal_enrichment_url which adapts for production
+        self.base_url = settings.internal_enrichment_url
         self.fetch_limit = settings.enrichment_fetch_limit
         logger.info(f"Telemetry client initialized for {self.base_url}")
@@ -683,7 +683,13 @@
 }
 
 function connect() {
-    ws = new WebSocket('ws://localhost:9000/ws/dashboard');
+    // Dynamically determine WebSocket URL based on current location
+    const protocol = window.location.protocol === 'https:' ? 'wss:' : 'ws:';
+    const host = window.location.host;
+    const wsUrl = `${protocol}//${host}/ws/dashboard`;
+
+    console.log(`Connecting to WebSocket: ${wsUrl}`);
+    ws = new WebSocket(wsUrl);
 
     ws.onopen = () => {
         console.log('Dashboard WebSocket connected');
@@ -6,9 +6,10 @@ Connects to AI Intelligence Layer via WebSocket and:
 1. Streams lap telemetry to AI layer
 2. Receives control commands (brake_bias, differential_slip) from AI layer
 3. Applies control adjustments in real-time
+4. Generates voice announcements for strategy updates
 
 Usage:
-    python simulate_pi_websocket.py --interval 5 --ws-url ws://localhost:9000/ws/pi
+    python simulate_pi_websocket.py --interval 5 --ws-url ws://10.159.65.108:9000/ws/pi --enable-voice
 """
 from __future__ import annotations
@@ -19,6 +20,8 @@ import logging
 from pathlib import Path
 from typing import Dict, Any, Optional
 import sys
+import os
+from datetime import datetime

 try:
     import pandas as pd
@@ -29,6 +32,19 @@ except ImportError:
     print("Run: pip install pandas websockets")
     sys.exit(1)

+# Optional voice support
+try:
+    from elevenlabs.client import ElevenLabs
+    from elevenlabs import save
+    from dotenv import load_dotenv
+    # Load .env from root directory (default behavior)
+    load_dotenv()
+    VOICE_AVAILABLE = True
+except ImportError:
+    VOICE_AVAILABLE = False
+    print("Note: elevenlabs not installed. Voice features disabled.")
+    print("To enable voice: pip install elevenlabs python-dotenv")
+
 # Configure logging
 logging.basicConfig(
     level=logging.INFO,
@@ -37,10 +53,260 @@ logging.basicConfig(
 logger = logging.getLogger(__name__)


-class PiSimulator:
-    """WebSocket-based Pi simulator with control feedback."""
-
-    def __init__(self, csv_path: Path, ws_url: str, interval: float = 60.0, enrichment_url: str = "http://localhost:8000"):
+class VoiceAnnouncer:
+    """ElevenLabs text-to-speech announcer for race engineer communications."""
+
+    def __init__(self, enabled: bool = True):
+        """Initialize ElevenLabs voice engine if available."""
+        self.enabled = enabled and VOICE_AVAILABLE
+        self.client = None
+        self.audio_dir = Path("data/audio")
+        # Use exact same voice as voice_service.py
+        self.voice_id = "mbBupyLcEivjpxh8Brkf"  # Rachel voice
+
+        if self.enabled:
+            try:
+                api_key = os.getenv("ELEVENLABS_API_KEY")
+                if not api_key:
+                    logger.warning("⚠ ELEVENLABS_API_KEY not found in environment")
+                    self.enabled = False
+                    return
+
+                self.client = ElevenLabs(api_key=api_key)
+                self.audio_dir.mkdir(parents=True, exist_ok=True)
+                logger.info("✓ Voice announcer initialized (ElevenLabs)")
+            except Exception as e:
+                logger.warning(f"⚠ Voice engine initialization failed: {e}")
+                self.enabled = False
+
+    def _format_strategy_message(self, data: Dict[str, Any]) -> str:
+        """
+        Format strategy update into natural race engineer speech.
+
+        Args:
+            data: Control command update from AI layer
+
+        Returns:
+            Formatted message string
+        """
+        lap = data.get('lap', 0)
+        strategy_name = data.get('strategy_name', 'Unknown')
+        brake_bias = data.get('brake_bias', 5)
+        diff_slip = data.get('differential_slip', 5)
+        reasoning = data.get('reasoning', '')
+        risk_level = data.get('risk_level', '')
+
+        # Build natural message
+        parts = []
+
+        # Opening with lap number
+        parts.append(f"Lap {lap}.")
+
+        # Strategy announcement with risk level
+        if strategy_name and strategy_name != "N/A":
+            # Simplify strategy name for speech
+            clean_strategy = strategy_name.replace('-', ' ').replace('_', ' ')
+            if risk_level:
+                parts.append(f"Running {clean_strategy} strategy, {risk_level} risk.")
+            else:
+                parts.append(f"Running {clean_strategy} strategy.")
+
+        # Control adjustments with specific values
+        control_messages = []
+
+        # Brake bias announcement with context
+        if brake_bias < 4:
+            control_messages.append(f"Brake bias set to {brake_bias}, forward biased for sharper turn in response")
+        elif brake_bias == 4:
+            control_messages.append(f"Brake bias {brake_bias}, slightly forward to help rotation")
+        elif brake_bias > 6:
+            control_messages.append(f"Brake bias set to {brake_bias}, rearward to protect front tire wear")
+        elif brake_bias == 6:
+            control_messages.append(f"Brake bias {brake_bias}, slightly rear for front tire management")
+        else:
+            control_messages.append(f"Brake bias neutral at {brake_bias}")
+
+        # Differential slip announcement with context
+        if diff_slip < 4:
+            control_messages.append(f"Differential at {diff_slip}, tightened for better rotation through corners")
+        elif diff_slip == 4:
+            control_messages.append(f"Differential {diff_slip}, slightly tight for rotation")
+        elif diff_slip > 6:
+            control_messages.append(f"Differential set to {diff_slip}, loosened to reduce rear tire degradation")
+        elif diff_slip == 6:
+            control_messages.append(f"Differential {diff_slip}, slightly loose for tire preservation")
+        else:
+            control_messages.append(f"Differential neutral at {diff_slip}")
+
+        if control_messages:
+            parts.append(". ".join(control_messages) + ".")
+
+        # Key reasoning excerpt (first sentence only)
+        if reasoning:
+            # Extract first meaningful sentence
+            sentences = reasoning.split('.')
+            if sentences:
+                key_reason = sentences[0].strip()
+                if len(key_reason) > 20 and len(key_reason) < 150:  # Slightly longer for more context
+                    parts.append(key_reason + ".")
+
+        return " ".join(parts)
+
+    def _format_control_message(self, data: Dict[str, Any]) -> str:
+        """
+        Format control command into brief message.
+
+        Args:
+            data: Control command from AI layer
+
+        Returns:
+            Formatted message string
+        """
+        lap = data.get('lap', 0)
+        brake_bias = data.get('brake_bias', 5)
+        diff_slip = data.get('differential_slip', 5)
+        message = data.get('message', '')
+
+        # For early laps or non-strategy updates
+        if message and "Collecting data" in message:
+            return f"Lap {lap}. Collecting baseline data."
+
+        if brake_bias == 5 and diff_slip == 5:
+            return f"Lap {lap}. Maintaining neutral settings."
+
+        return f"Lap {lap}. Controls adjusted."
+
+    async def announce_strategy(self, data: Dict[str, Any]):
+        """
+        Announce strategy update with ElevenLabs voice synthesis.
+
+        Args:
+            data: Control command update from AI layer
+        """
+        if not self.enabled:
+            return
+
+        try:
+            # Format message
+            message = self._format_strategy_message(data)
+
+            logger.info(f"[VOICE] Announcing: {message}")
+
+            # Generate unique audio filename
+            lap = data.get('lap', 0)
+            timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
+            audio_path = self.audio_dir / f"lap_{lap}_{timestamp}.mp3"
+
+            # Synthesize with ElevenLabs (exact same settings as voice_service.py)
+            def synthesize():
+                try:
+                    audio = self.client.text_to_speech.convert(
+                        voice_id=self.voice_id,
+                        text=message,
+                        model_id="eleven_multilingual_v2",  # Fast, low-latency model
+                        voice_settings={
+                            "stability": 0.4,
+                            "similarity_boost": 0.95,
+                            "style": 0.7,
+                            "use_speaker_boost": True
+                        }
+                    )
+                    save(audio, str(audio_path))
+                    logger.info(f"[VOICE] Saved to {audio_path}")
+
+                    # Play the audio
+                    if sys.platform == "darwin":  # macOS
+                        os.system(f"afplay {audio_path}")
+                    elif sys.platform == "linux":
+                        os.system(f"mpg123 {audio_path} || ffplay -nodisp -autoexit {audio_path}")
+                    elif sys.platform == "win32":
+                        os.system(f"start {audio_path}")
+
+                    # Clean up audio file after playing
+                    try:
+                        if audio_path.exists():
+                            audio_path.unlink()
+                            logger.info(f"[VOICE] Cleaned up {audio_path}")
+                    except Exception as cleanup_error:
+                        logger.warning(f"[VOICE] Failed to delete audio file: {cleanup_error}")
+                except Exception as e:
+                    logger.error(f"[VOICE] Synthesis error: {e}")
+
+            # Run in separate thread to avoid blocking
+            loop = asyncio.get_event_loop()
+            await loop.run_in_executor(None, synthesize)
+
+        except Exception as e:
+            logger.error(f"[VOICE] Announcement failed: {e}")
+
+    async def announce_control(self, data: Dict[str, Any]):
+        """
+        Announce control command with ElevenLabs voice synthesis (brief version).
+
+        Args:
+            data: Control command from AI layer
+        """
+        if not self.enabled:
+            return
+
+        try:
+            # Format message
+            message = self._format_control_message(data)
+
+            logger.info(f"[VOICE] Announcing: {message}")
+
+            # Generate unique audio filename
+            lap = data.get('lap', 0)
+            timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
+            audio_path = self.audio_dir / f"lap_{lap}_control_{timestamp}.mp3"
+
+            # Synthesize with ElevenLabs (exact same settings as voice_service.py)
+            def synthesize():
+                try:
+                    audio = self.client.text_to_speech.convert(
+                        voice_id=self.voice_id,
+                        text=message,
+                        model_id="eleven_multilingual_v2",  # Fast, low-latency model
+                        voice_settings={
+                            "stability": 0.4,
+                            "similarity_boost": 0.95,
+                            "style": 0.7,
+                            "use_speaker_boost": True
+                        }
+                    )
+                    save(audio, str(audio_path))
+                    logger.info(f"[VOICE] Saved to {audio_path}")
+
+                    # Play the audio
+                    if sys.platform == "darwin":  # macOS
+                        os.system(f"afplay {audio_path}")
+                    elif sys.platform == "linux":
+                        os.system(f"mpg123 {audio_path} || ffplay -nodisp -autoexit {audio_path}")
+                    elif sys.platform == "win32":
+                        os.system(f"start {audio_path}")
+
+                    # Clean up audio file after playing
+                    try:
+                        if audio_path.exists():
+                            audio_path.unlink()
+                            logger.info(f"[VOICE] Cleaned up {audio_path}")
+                    except Exception as cleanup_error:
+                        logger.warning(f"[VOICE] Failed to delete audio file: {cleanup_error}")
+                except Exception as e:
+                    logger.error(f"[VOICE] Synthesis error: {e}")
+
+            # Run in separate thread to avoid blocking
+            loop = asyncio.get_event_loop()
+            await loop.run_in_executor(None, synthesize)
+
+        except Exception as e:
+            logger.error(f"[VOICE] Announcement failed: {e}")
+
+
+class PiSimulator:
+    """WebSocket-based Pi simulator with control feedback and voice announcements."""
+
+    def __init__(self, csv_path: Path, ws_url: str, interval: float = 60.0, enrichment_url: str = "http://10.159.65.108:8000", voice_enabled: bool = False):
         self.csv_path = csv_path
         self.ws_url = ws_url
         self.enrichment_url = enrichment_url
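The strategy formatter added in this commit is a pure function of the payload, so its phrasing can be checked without a WebSocket or an API key. A standalone paraphrase of just its opening lines (a hypothetical helper, for illustration only — the real method also appends control and reasoning sentences) shows the kind of sentence it produces:

```python
def format_strategy_opening(lap: int, strategy_name: str, risk_level: str = "") -> str:
    # Paraphrase of _format_strategy_message's opening: lap number,
    # then a cleaned-up strategy name with optional risk level.
    parts = [f"Lap {lap}."]
    if strategy_name and strategy_name != "N/A":
        clean = strategy_name.replace('-', ' ').replace('_', ' ')
        if risk_level:
            parts.append(f"Running {clean} strategy, {risk_level} risk.")
        else:
            parts.append(f"Running {clean} strategy.")
    return " ".join(parts)


msg = format_strategy_opening(12, "aggressive-two_stop", "high")
# → "Lap 12. Running aggressive two stop strategy, high risk."
```

Keeping the formatting side-effect-free like this is what makes the announcer easy to exercise in isolation before wiring it to ElevenLabs.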
@@ -50,6 +316,12 @@ class PiSimulator:
             "brake_bias": 5,
             "differential_slip": 5
         }
+        self.previous_controls = {
+            "brake_bias": 5,
+            "differential_slip": 5
+        }
+        self.current_risk_level: Optional[str] = None
+        self.voice_announcer = VoiceAnnouncer(enabled=voice_enabled)

     def load_lap_csv(self) -> pd.DataFrame:
         """Load lap-level CSV data."""
@@ -192,7 +464,13 @@ class PiSimulator:
         logger.info(f"Connecting to WebSocket: {self.ws_url}")

         try:
-            async with websockets.connect(self.ws_url) as websocket:
+            # Configure WebSocket with longer ping timeout and interval
+            async with websockets.connect(
+                self.ws_url,
+                ping_interval=20,   # Send ping every 20 seconds
+                ping_timeout=60,    # Wait up to 60 seconds for pong response
+                close_timeout=10    # Timeout for close handshake
+            ) as websocket:
                 logger.info("WebSocket connected!")

                 # Wait for welcome message
@@ -245,12 +523,92 @@
                 response = await asyncio.wait_for(websocket.recv(), timeout=5.0)
                 response_data = json.loads(response)

-                if response_data.get("type") == "control_command":
+                # Handle silent acknowledgment (no control update, no voice)
+                if response_data.get("type") == "acknowledgment":
+                    message = response_data.get("message", "")
+                    logger.info(f"[ACK] {message}")
+
+                    # Now wait for the actual control command update
+                    # Keep receiving messages until we get the update (ignoring keepalives)
+                    try:
+                        timeout_remaining = 45.0
+                        start_time = asyncio.get_event_loop().time()
+
+                        while timeout_remaining > 0:
+                            update = await asyncio.wait_for(websocket.recv(), timeout=timeout_remaining)
+                            update_data = json.loads(update)
+
+                            # Ignore keepalive messages
+                            if update_data.get("type") == "keepalive":
+                                logger.debug(f"[KEEPALIVE] Received ping from server during strategy generation")
+                                elapsed = asyncio.get_event_loop().time() - start_time
+                                timeout_remaining = 45.0 - elapsed
+                                continue
+
+                            # Process control command update
+                            if update_data.get("type") == "control_command_update":
+                                brake_bias = update_data.get("brake_bias", 5)
+                                diff_slip = update_data.get("differential_slip", 5)
+                                strategy_name = update_data.get("strategy_name", "N/A")
+                                risk_level = update_data.get("risk_level", "medium")
+                                reasoning = update_data.get("reasoning", "")
+
+                                # Check if controls changed from previous
+                                controls_changed = (
+                                    self.current_controls["brake_bias"] != brake_bias or
+                                    self.current_controls["differential_slip"] != diff_slip
+                                )
+
+                                # Check if risk level changed
+                                risk_level_changed = (
+                                    self.current_risk_level is not None and
+                                    self.current_risk_level != risk_level
+                                )
+
+                                self.previous_controls = self.current_controls.copy()
+                                self.current_controls["brake_bias"] = brake_bias
+                                self.current_controls["differential_slip"] = diff_slip
+                                self.current_risk_level = risk_level
+
+                                logger.info(f"[UPDATED] Strategy-Based Control:")
+                                logger.info(f"  ├─ Brake Bias: {brake_bias}/10")
+                                logger.info(f"  ├─ Differential Slip: {diff_slip}/10")
+                                logger.info(f"  ├─ Strategy: {strategy_name}")
+                                logger.info(f"  ├─ Risk Level: {risk_level}")
+                                if reasoning:
+                                    logger.info(f"  └─ Reasoning: {reasoning[:100]}...")
+
+                                self.apply_controls(brake_bias, diff_slip)
+
+                                # Voice announcement if controls OR risk level changed
+                                if controls_changed or risk_level_changed:
+                                    if risk_level_changed and not controls_changed:
+                                        logger.info(f"[VOICE] Risk level changed to {risk_level}")
+                                    await self.voice_announcer.announce_strategy(update_data)
+                                else:
+                                    logger.info(f"[VOICE] Skipping announcement - controls and risk level unchanged")
+                                break  # Exit loop after processing update
+
+                            # Update timeout
+                            elapsed = asyncio.get_event_loop().time() - start_time
+                            timeout_remaining = 45.0 - elapsed
+
+                    except asyncio.TimeoutError:
+                        logger.warning("[TIMEOUT] Strategy generation took too long")
+
+                elif response_data.get("type") == "control_command":
                     brake_bias = response_data.get("brake_bias", 5)
                     diff_slip = response_data.get("differential_slip", 5)
                     strategy_name = response_data.get("strategy_name", "N/A")
                     message = response_data.get("message")

+                    # Store previous values before updating
+                    controls_changed = (
+                        self.current_controls["brake_bias"] != brake_bias or
+                        self.current_controls["differential_slip"] != diff_slip
+                    )
+
+                    self.previous_controls = self.current_controls.copy()
                     self.current_controls["brake_bias"] = brake_bias
                     self.current_controls["differential_slip"] = diff_slip
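The acknowledgment branch above uses a shrinking overall deadline: each keepalive recomputes `timeout_remaining` so the 45-second budget applies to the whole wait, not to each individual `recv()`. The same pattern can be sketched in a self-contained way over an `asyncio.Queue` instead of a live socket (names here are illustrative, not from the repo):

```python
import asyncio


async def recv_until_update(queue: asyncio.Queue, budget: float = 45.0):
    """Wait for a control_command_update, skipping keepalives, under one
    overall deadline that keeps shrinking between receives."""
    loop = asyncio.get_event_loop()
    start = loop.time()
    remaining = budget
    while remaining > 0:
        try:
            msg = await asyncio.wait_for(queue.get(), timeout=remaining)
        except asyncio.TimeoutError:
            return None  # overall budget exhausted
        if msg.get("type") == "keepalive":
            # Recompute the remaining budget instead of resetting it.
            remaining = budget - (loop.time() - start)
            continue
        return msg
    return None


async def demo():
    q: asyncio.Queue = asyncio.Queue()
    q.put_nowait({"type": "keepalive"})
    q.put_nowait({"type": "control_command_update", "brake_bias": 6})
    return await recv_until_update(q, budget=1.0)


result = asyncio.run(demo())
```

The key design point is that keepalives refresh the transport (so the 20s/60s ping settings do not kill the connection) without extending the application-level wait for the real update.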
@@ -265,6 +623,10 @@ class PiSimulator:
                     # Apply controls (in real Pi, this would adjust hardware)
                     self.apply_controls(brake_bias, diff_slip)

+                    # Voice announcement ONLY if controls changed
+                    if controls_changed:
+                        await self.voice_announcer.announce_control(response_data)
+
                     # If message indicates processing, wait for update
                     if message and "Processing" in message:
                         logger.info("  AI is generating strategies, waiting for update...")
@@ -276,16 +638,43 @@
                         brake_bias = update_data.get("brake_bias", 5)
                         diff_slip = update_data.get("differential_slip", 5)
                         strategy_name = update_data.get("strategy_name", "N/A")
+                        risk_level = update_data.get("risk_level", "medium")
+                        reasoning = update_data.get("reasoning", "")
+
+                        # Check if controls changed from previous
+                        controls_changed = (
+                            self.current_controls["brake_bias"] != brake_bias or
+                            self.current_controls["differential_slip"] != diff_slip
+                        )
+
+                        # Check if risk level changed
+                        risk_level_changed = (
+                            self.current_risk_level is not None and
+                            self.current_risk_level != risk_level
+                        )

+                        self.previous_controls = self.current_controls.copy()
                         self.current_controls["brake_bias"] = brake_bias
                         self.current_controls["differential_slip"] = diff_slip
+                        self.current_risk_level = risk_level

                         logger.info(f"[UPDATED] Strategy-Based Control:")
                         logger.info(f"  ├─ Brake Bias: {brake_bias}/10")
                         logger.info(f"  ├─ Differential Slip: {diff_slip}/10")
-                        logger.info(f"  └─ Strategy: {strategy_name}")
+                        logger.info(f"  ├─ Strategy: {strategy_name}")
+                        logger.info(f"  ├─ Risk Level: {risk_level}")
+                        if reasoning:
+                            logger.info(f"  └─ Reasoning: {reasoning[:100]}...")

                         self.apply_controls(brake_bias, diff_slip)
+
+                        # Voice announcement if controls OR risk level changed
+                        if controls_changed or risk_level_changed:
+                            if risk_level_changed and not controls_changed:
+                                logger.info(f"[VOICE] Risk level changed to {risk_level}")
+                            await self.voice_announcer.announce_strategy(update_data)
+                        else:
+                            logger.info(f"[VOICE] Skipping announcement - controls and risk level unchanged")
                     except asyncio.TimeoutError:
                         logger.warning("[TIMEOUT] Strategy generation took too long")
@@ -307,6 +696,15 @@ class PiSimulator:
                 # Send disconnect message
                 await websocket.send(json.dumps({"type": "disconnect"}))

+        except websockets.exceptions.ConnectionClosedError as e:
+            if e.code == 1011:
+                logger.error(f"WebSocket keepalive timeout (1011): Connection lost due to ping/pong failure")
+                logger.error("Possible causes:")
+                logger.error("  - Server took too long to respond (strategy generation > 60s)")
+                logger.error("  - Network latency or congestion")
+                logger.error("  - Server overloaded or unresponsive")
+            else:
+                logger.error(f"WebSocket connection closed: {e} (code: {e.code})")
         except websockets.exceptions.WebSocketException as e:
             logger.error(f"WebSocket error: {e}")
             logger.error("Is the AI Intelligence Layer running on port 9000?")
@@ -344,7 +742,7 @@ class PiSimulator:

 async def main():
     parser = argparse.ArgumentParser(
-        description="WebSocket-based Raspberry Pi Telemetry Simulator"
+        description="WebSocket-based Raspberry Pi Telemetry Simulator with Voice Announcements"
     )
     parser.add_argument(
         "--interval",
@@ -355,14 +753,14 @@ async def main():
     parser.add_argument(
         "--ws-url",
         type=str,
-        default="ws://localhost:9000/ws/pi",
-        help="WebSocket URL for AI layer (default: ws://localhost:9000/ws/pi)"
+        default="ws://10.159.65.108:9000/ws/pi",
+        help="WebSocket URL for AI layer (default: ws://10.159.65.108:9000/ws/pi)"
     )
     parser.add_argument(
         "--enrichment-url",
         type=str,
-        default="http://localhost:8000",
-        help="Enrichment service URL (default: http://localhost:8000)"
+        default="http://10.159.65.108:8000",
+        help="Enrichment service URL (default: http://10.159.65.108:8000)"
     )
     parser.add_argument(
         "--csv",
@@ -370,6 +768,11 @@ async def main():
         default=None,
         help="Path to lap CSV file (default: scripts/ALONSO_2023_MONZA_LAPS.csv)"
     )
+    parser.add_argument(
+        "--enable-voice",
+        action="store_true",
+        help="Enable voice announcements for strategy updates (requires elevenlabs and ELEVENLABS_API_KEY)"
+    )

     args = parser.parse_args()
@@ -389,7 +792,8 @@ async def main():
         csv_path=csv_path,
         ws_url=args.ws_url,
         enrichment_url=args.enrichment_url,
-        interval=args.interval
+        interval=args.interval,
+        voice_enabled=args.enable_voice
     )

     logger.info("Starting WebSocket Pi Simulator")
@@ -397,6 +801,7 @@ async def main():
     logger.info(f"Enrichment Service: {args.enrichment_url}")
     logger.info(f"WebSocket URL: {args.ws_url}")
     logger.info(f"Interval: {args.interval}s per lap")
+    logger.info(f"Voice Announcements: {'Enabled' if args.enable_voice and VOICE_AVAILABLE else 'Disabled'}")
     logger.info("-" * 60)

     await simulator.stream_telemetry()
@@ -13,13 +13,8 @@ from dotenv import load_dotenv
 load_dotenv()

 class RaceEngineerVoice:
-    def __init__(self, voice_id: str = "mbBupyLcEivjpxh8Brkf"):  # Default: Rachel
-        """
-        Initialize ElevenLabs voice service.
-
-        Args:
-            voice_id: ElevenLabs voice ID (Rachel is default, professional female voice)
-        """
+    def __init__(self, voice_id: str = "mbBupyLcEivjpxh8Brkf"):
         self.client = ElevenLabs(api_key=os.getenv("ELEVENLABS_API_KEY"))
         self.voice_id = voice_id
99
start.py
Executable file
@@ -0,0 +1,99 @@
#!/usr/bin/env python3
"""
Startup supervisor for HPC Simulation Services.
Manages both enrichment service and AI intelligence layer.
"""
import subprocess
import sys
import time
import signal
import os

processes = []

def cleanup(signum=None, frame=None):
    """Clean up all child processes."""
    print("\n🛑 Shutting down all services...")
    for proc in processes:
        try:
            proc.terminate()
            proc.wait(timeout=5)
        except subprocess.TimeoutExpired:
            proc.kill()
        except Exception as e:
            print(f"Error stopping process: {e}")
    sys.exit(0)

# Register signal handlers
signal.signal(signal.SIGINT, cleanup)
signal.signal(signal.SIGTERM, cleanup)

def main():
    print("🚀 Starting HPC Simulation Services...")

    # Start enrichment service
    print("📊 Starting Enrichment Service on port 8000...")
    enrichment_proc = subprocess.Popen(
        [sys.executable, "scripts/serve.py"],
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        text=True,
        bufsize=1
    )
    processes.append(enrichment_proc)
    print(f"  ├─ PID: {enrichment_proc.pid}")

    # Give it time to start
    time.sleep(5)

    # Check if still running
    if enrichment_proc.poll() is not None:
        print("❌ Enrichment service failed to start")
        cleanup()
        return 1
    print("  └─ ✅ Enrichment service started successfully")

    # Start AI Intelligence Layer
    print("🤖 Starting AI Intelligence Layer on port 9000...")
    ai_proc = subprocess.Popen(
        [sys.executable, "ai_intelligence_layer/main.py"],
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        text=True,
        bufsize=1
    )
    processes.append(ai_proc)
    print(f"  ├─ PID: {ai_proc.pid}")

    # Give it time to start
    time.sleep(3)

    # Check if still running
    if ai_proc.poll() is not None:
        print("❌ AI Intelligence Layer failed to start")
        cleanup()
        return 1
    print("  └─ ✅ AI Intelligence Layer started successfully")

    print("\n✨ All services running!")
    print("  📊 Enrichment Service: http://0.0.0.0:8000")
    print("  🤖 AI Intelligence Layer: ws://0.0.0.0:9000/ws/pi")
    print("\nPress Ctrl+C to stop all services\n")

    # Monitor processes
    try:
        while True:
            # Check if any process has died
            for proc in processes:
                if proc.poll() is not None:
                    print(f"⚠️ Process {proc.pid} died unexpectedly!")
                    cleanup()
                    return 1
            time.sleep(1)
    except KeyboardInterrupt:
        cleanup()

    return 0

if __name__ == "__main__":
    sys.exit(main())
58
start.sh
Executable file
@@ -0,0 +1,58 @@
#!/bin/bash
# Startup script for Render.com deployment
# Starts both the enrichment service and AI intelligence layer

set -e  # Exit on error

echo "🚀 Starting HPC Simulation Services..."

# Trap to handle cleanup on exit
cleanup() {
    echo "🛑 Shutting down services..."
    kill $ENRICHMENT_PID $AI_PID 2>/dev/null || true
    exit
}
trap cleanup SIGINT SIGTERM

# Start enrichment service in background
echo "📊 Starting Enrichment Service on port 8000..."
python scripts/serve.py &
ENRICHMENT_PID=$!
echo "  ├─ PID: $ENRICHMENT_PID"

# Give enrichment service time to start
sleep 5

# Check if enrichment service is still running
if ! kill -0 $ENRICHMENT_PID 2>/dev/null; then
    echo "❌ Enrichment service failed to start"
    exit 1
fi
echo "  └─ ✅ Enrichment service started successfully"

# Start AI Intelligence Layer in background
echo "🤖 Starting AI Intelligence Layer on port 9000..."
python ai_intelligence_layer/main.py &
AI_PID=$!
echo "  ├─ PID: $AI_PID"

# Give AI layer time to start
sleep 3

# Check if AI layer is still running
if ! kill -0 $AI_PID 2>/dev/null; then
    echo "❌ AI Intelligence Layer failed to start"
    kill $ENRICHMENT_PID 2>/dev/null || true
    exit 1
fi
echo "  └─ ✅ AI Intelligence Layer started successfully"

echo ""
echo "✨ All services running!"
echo "  📊 Enrichment Service: http://0.0.0.0:8000"
echo "  🤖 AI Intelligence Layer: ws://0.0.0.0:9000/ws/pi"
echo ""
echo "Press Ctrl+C to stop all services"

# Wait for both processes (this keeps the script running)
wait $ENRICHMENT_PID $AI_PID
76
tests/test_voice.py
Normal file
@@ -0,0 +1,76 @@
#!/usr/bin/env python3
"""
Quick test script for ElevenLabs voice announcements.
"""

import sys
import os
from pathlib import Path

sys.path.insert(0, '.')

try:
    from elevenlabs.client import ElevenLabs
    from elevenlabs import save
    from dotenv import load_dotenv

    load_dotenv()

    # Check API key
    api_key = os.getenv("ELEVENLABS_API_KEY")
    if not api_key:
        print("✗ ELEVENLABS_API_KEY not found in environment")
        print("Create a .env file with: ELEVENLABS_API_KEY=your_key_here")
        sys.exit(1)

    # Initialize client with same settings as voice_service.py
    client = ElevenLabs(api_key=api_key)
    voice_id = "mbBupyLcEivjpxh8Brkf"  # Rachel voice

    # Test message
    test_message = "Lap 3. Strategy: Conservative One Stop. Brake bias forward for turn in. Current tire degradation suggests extended first stint."

    print(f"Testing ElevenLabs voice announcement...")
    print(f"Voice ID: {voice_id} (Rachel)")
    print(f"Message: {test_message}")
    print("-" * 60)

    # Synthesize
    audio = client.text_to_speech.convert(
        voice_id=voice_id,
        text=test_message,
        model_id="eleven_multilingual_v2",
        voice_settings={
            "stability": 0.4,
            "similarity_boost": 0.95,
            "style": 0.7,
            "use_speaker_boost": True
        }
    )

    # Save audio
    output_dir = Path("data/audio")
    output_dir.mkdir(parents=True, exist_ok=True)
    output_path = output_dir / "test_voice.mp3"

    save(audio, str(output_path))
    print(f"✓ Audio saved to: {output_path}")

    # Play audio
    print("✓ Playing audio...")
    if sys.platform == "darwin":  # macOS
        os.system(f"afplay {output_path}")
    elif sys.platform == "linux":
        os.system(f"mpg123 {output_path} || ffplay -nodisp -autoexit {output_path}")
    elif sys.platform == "win32":
        os.system(f"start {output_path}")

    print("✓ Voice test completed successfully!")

except ImportError as e:
    print(f"✗ elevenlabs not available: {e}")
    print("Install with: pip install elevenlabs python-dotenv")
except Exception as e:
    print(f"✗ Voice test failed: {e}")
    import traceback
    traceback.print_exc()