pipeline works from pi simulation to control output and strategy generation.

This commit is contained in:
Aditya Pulipaka
2025-10-19 03:57:03 -05:00
parent 9f70ba7221
commit 636ddf27d4
42 changed files with 1297 additions and 4472 deletions

View File

@@ -1,333 +0,0 @@
# System Architecture & Data Flow
## High-Level Architecture
```
┌─────────────────────────────────────────────────────────────────┐
│ F1 Race Strategy System │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Raw Race │ │ HPC Compute │ │ Enrichment │
│ Telemetry │────────▶│ Cluster │────────▶│ Module │
│ │ │ │ │ (port 8000) │
└─────────────────┘ └─────────────────┘ └────────┬────────┘
│ POST webhook
│ (enriched data)
┌─────────────────────────────────────────────┐
│ AI Intelligence Layer (port 9000) │
│ ┌─────────────────────────────────────┐ │
│ │ Step 1: Strategy Brainstorming │ │
│ │ - Generate 20 diverse strategies │ │
│ │ - Temperature: 0.9 (creative) │ │
│ └─────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────┐ │
│ │ Step 2: Strategy Analysis │ │
│ │ - Select top 3 strategies │ │
│ │ - Temperature: 0.3 (analytical) │ │
│ └─────────────────────────────────────┘ │
│ │
│ Powered by: Google Gemini 1.5 Pro │
└──────────────────┬──────────────────────────┘
│ Strategic recommendations
┌─────────────────────────────────────────┐
│ Race Engineers / Frontend │
│ - Win probabilities │
│ - Risk assessments │
│ - Engineer briefs │
│ - Driver radio scripts │
│ - ECU commands │
└─────────────────────────────────────────┘
```
## Data Flow - Detailed
```
1. ENRICHED TELEMETRY INPUT
┌────────────────────────────────────────────────────────────────┐
│ { │
│ "lap": 27, │
│ "aero_efficiency": 0.83, // 0-1, higher = better │
│ "tire_degradation_index": 0.65, // 0-1, higher = worse │
│ "ers_charge": 0.72, // 0-1, energy available │
│ "fuel_optimization_score": 0.91,// 0-1, efficiency │
│ "driver_consistency": 0.89, // 0-1, lap-to-lap variance │
│ "weather_impact": "medium" // low/medium/high │
│ } │
└────────────────────────────────────────────────────────────────┘
2. RACE CONTEXT INPUT
┌────────────────────────────────────────────────────────────────┐
│ { │
│ "race_info": { │
│ "track_name": "Monaco", │
│ "current_lap": 27, │
│ "total_laps": 58 │
│ }, │
│ "driver_state": { │
│ "driver_name": "Hamilton", │
│ "current_position": 4, │
│ "current_tire_compound": "medium", │
│ "tire_age_laps": 14 │
│ }, │
│ "competitors": [...] │
│ } │
└────────────────────────────────────────────────────────────────┘
3. TELEMETRY ANALYSIS
┌────────────────────────────────────────────────────────────────┐
│ • Calculate tire degradation rate: 0.030/lap │
│ • Project tire cliff: Lap 33 │
│ • Analyze ERS pattern: stable │
│ • Assess fuel situation: OK │
│ • Evaluate driver form: excellent │
└────────────────────────────────────────────────────────────────┘
4. STEP 1: BRAINSTORM (Gemini AI)
┌────────────────────────────────────────────────────────────────┐
│ Temperature: 0.9 (high creativity) │
│ Prompt includes: │
│ • Last 10 laps telemetry │
│ • Calculated trends │
│ • Race constraints │
│ • Competitor analysis │
│ │
│ Output: 20 diverse strategies │
│ • Conservative (1-stop, low risk) │
│ • Standard (balanced approach) │
│ • Aggressive (undercut/overcut) │
│ • Reactive (respond to competitors) │
│ • Contingency (safety car, rain) │
└────────────────────────────────────────────────────────────────┘
5. STRATEGY VALIDATION
┌────────────────────────────────────────────────────────────────┐
│ • Pit laps within valid range │
│ • At least 2 tire compounds (F1 rule) │
│ • Stop count matches pit laps │
│ • Tire sequence correct length │
└────────────────────────────────────────────────────────────────┘
6. STEP 2: ANALYZE (Gemini AI)
┌────────────────────────────────────────────────────────────────┐
│ Temperature: 0.3 (analytical consistency) │
│ Analysis framework: │
│ 1. Tire degradation projection │
│ 2. Aero efficiency impact │
│ 3. Fuel management │
│ 4. Driver consistency │
│ 5. Weather & track position │
│ 6. Competitor analysis │
│ │
│ Selection criteria: │
│ • Rank 1: RECOMMENDED (highest podium %) │
│ • Rank 2: ALTERNATIVE (viable backup) │
│ • Rank 3: CONSERVATIVE (safest) │
└────────────────────────────────────────────────────────────────┘
7. FINAL OUTPUT
┌────────────────────────────────────────────────────────────────┐
│ For EACH of top 3 strategies: │
│ │
│ • Predicted Outcome │
│ - Finish position: P3 │
│ - P1 probability: 8% │
│ - P2 probability: 22% │
│ - P3 probability: 45% │
│ - Confidence: 78% │
│ │
│ • Risk Assessment │
│ - Risk level: medium │
│ - Key risks: ["Pit under 2.5s", "Traffic"] │
│ - Success factors: ["Tire advantage", "Window open"] │
│ │
│ • Telemetry Insights │
│ - "Tire cliff at lap 35" │
│ - "Aero 0.83 - performing well" │
│ - "Fuel excellent, no saving" │
│ - "Driver form excellent" │
│ │
│ • Engineer Brief │
│ - Title: "Aggressive Undercut Lap 28" │
│ - Summary: "67% chance P3 or better" │
│ - Key points: [...] │
│ - Execution steps: [...] │
│ │
│ • Driver Audio Script │
│ "Box this lap. Softs going on. Push mode." │
│ │
│ • ECU Commands │
│ - Fuel: RICH │
│ - ERS: AGGRESSIVE_DEPLOY │
│ - Engine: PUSH │
│ │
│ • Situational Context │
│ - "Decision needed in 2 laps" │
│ - "Tire deg accelerating" │
└────────────────────────────────────────────────────────────────┘
```
## API Endpoints Detail
```
┌─────────────────────────────────────────────────────────────────┐
│ GET /api/health │
├─────────────────────────────────────────────────────────────────┤
│ Purpose: Health check │
│ Response: {status, version, demo_mode} │
│ Latency: <100ms │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ POST /api/ingest/enriched │
├─────────────────────────────────────────────────────────────────┤
│ Purpose: Webhook receiver from enrichment service │
│ Input: Single lap enriched telemetry │
│ Action: Store in buffer (max 100 records) │
│ Response: {status, lap, buffer_size} │
│ Latency: <50ms │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ POST /api/strategy/brainstorm │
├─────────────────────────────────────────────────────────────────┤
│ Purpose: Generate 20 diverse strategies │
│ Input: │
│ - enriched_telemetry (optional, auto-fetch if missing) │
│ - race_context (required) │
│ Process: │
│ 1. Fetch telemetry if needed │
│ 2. Build prompt with telemetry analysis │
│ 3. Call Gemini (temp=0.9) │
│ 4. Parse & validate strategies │
│ Output: {strategies: [20 strategies]} │
│ Latency: <5s (target) │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ POST /api/strategy/analyze │
├─────────────────────────────────────────────────────────────────┤
│ Purpose: Analyze 20 strategies, select top 3 │
│ Input: │
│ - enriched_telemetry (optional, auto-fetch if missing) │
│ - race_context (required) │
│ - strategies (required, typically 20) │
│ Process: │
│ 1. Fetch telemetry if needed │
│ 2. Build analytical prompt │
│ 3. Call Gemini (temp=0.3) │
│ 4. Parse nested response structures │
│ Output: │
│ - top_strategies: [3 detailed strategies] │
│ - situational_context: {...} │
│ Latency: <10s (target) │
└─────────────────────────────────────────────────────────────────┘
```
## Integration Patterns
### Pattern 1: Pull Model
```
Enrichment Service (8000) ←─────GET /enriched───── AI Layer (9000)
[polls periodically]
```
### Pattern 2: Push Model (RECOMMENDED)
```
Enrichment Service (8000) ─────POST /ingest/enriched────▶ AI Layer (9000)
[webhook on new data]
```
### Pattern 3: Direct Request
```
Client ──POST /brainstorm──▶ AI Layer (9000)
[includes telemetry]
```
## Error Handling Flow
```
Request
┌─────────────────┐
│ Validate Input │
└────────┬────────┘
┌─────────────────┐ NO ┌──────────────────┐
│ Telemetry │────────────▶│ Fetch from │
│ Provided? │ │ localhost:8000 │
└────────┬────────┘ └────────┬─────────┘
YES │ │
└───────────────┬───────────────┘
┌──────────────┐
│ Call Gemini │
└──────┬───────┘
┌────┴────┐
│ Success?│
└────┬────┘
YES │ NO
│ │
│ ▼
│ ┌────────────────┐
│ │ Retry with │
│ │ stricter prompt│
│ └────────┬───────┘
│ │
│ ┌────┴────┐
│ │Success? │
│ └────┬────┘
│ YES │ NO
│ │ │
└───────────┤ │
│ ▼
│ ┌────────────┐
│ │ Return │
│ │ Error 500 │
│ └────────────┘
┌──────────────┐
│ Return │
│ Success 200 │
└──────────────┘
```
## Performance Characteristics
| Component | Target | Typical | Max |
|-----------|--------|---------|-----|
| Health check | <100ms | 50ms | 200ms |
| Webhook ingest | <50ms | 20ms | 100ms |
| Brainstorm (20 strategies) | <5s | 3-4s | 10s |
| Analyze (top 3) | <10s | 6-8s | 20s |
| Gemini API call | <3s | 2s | 8s |
| Telemetry fetch | <500ms | 200ms | 1s |
## Scalability Considerations
- **Concurrent Requests**: FastAPI async handles multiple simultaneously
- **Rate Limiting**: Gemini API has quotas (check your tier)
- **Caching**: Demo mode caches identical requests
- **Buffer Size**: Webhook buffer limited to 100 records
- **Memory**: ~100MB per service instance
---
Built for the HPC + AI Race Strategy Hackathon 🏎️

View File

@@ -1,207 +0,0 @@
# ⚡ SIMPLIFIED & FAST AI Layer
## What Changed
Simplified the entire AI flow for **ultra-fast testing and development**:
### Before (Slow)
- Generate 20 strategies (~45-60 seconds)
- Analyze all 20 and select top 3 (~40-60 seconds)
- **Total: ~2 minutes per request** ❌
### After (Fast)
- Generate **1 strategy** (~5-10 seconds)
- **Skip analysis** completely
- **Total: ~10 seconds per request** ✅
## Configuration
### Current Settings (`.env`)
```bash
FAST_MODE=true
STRATEGY_COUNT=1 # ⚡ Set to 1 for testing, 20 for production
```
### How to Adjust
**For ultra-fast testing (current):**
```bash
STRATEGY_COUNT=1
```
**For demo/showcase:**
```bash
STRATEGY_COUNT=5
```
**For production:**
```bash
STRATEGY_COUNT=20
```
## Simplified Workflow
```
┌──────────────────┐
│ Enrichment │
│ Service POSTs │
│ telemetry │
└────────┬─────────┘
┌──────────────────┐
│ Webhook Buffer │
│ (stores data) │
└────────┬─────────┘
┌──────────────────┐
│ Brainstorm │ ⚡ 1 strategy only!
│ (Gemini API) │ ~10 seconds
└────────┬─────────┘
┌──────────────────┐
│ Return Strategy │
│ No analysis! │
└──────────────────┘
```
## Quick Test
### 1. Push telemetry via webhook
```bash
python3 test_webhook_push.py --loop 5
```
### 2. Generate strategy (fast!)
```bash
python3 test_buffer_usage.py
```
**Output:**
```
Testing FAST brainstorm with buffered telemetry...
(Configured for 1 strategy only - ultra fast!)
✓ Brainstorm succeeded!
Generated 1 strategy
Strategy:
1. Medium-to-Hard Standard (1-stop)
Tires: medium → hard
Optimal 1-stop at lap 32 when tire degradation reaches cliff
✓ SUCCESS: AI layer is using webhook buffer!
```
**Time: ~10 seconds** instead of 2 minutes!
## API Response Format
### Brainstorm Response (Simplified)
```json
{
"strategies": [
{
"strategy_id": 1,
"strategy_name": "Medium-to-Hard Standard",
"stop_count": 1,
"pit_laps": [32],
"tire_sequence": ["medium", "hard"],
"brief_description": "Optimal 1-stop at lap 32 when tire degradation reaches cliff",
"risk_level": "medium",
"key_assumption": "No safety car interventions"
}
]
}
```
**No analysis object!** Just the strategy/strategies.
## What Was Removed
**Analysis endpoint** - Skipped entirely for speed
**Top 3 selection** - Only 1 strategy generated
**Detailed rationale** - Simple description only
**Risk assessment details** - Basic risk level only
**Engineer briefs** - Not generated
**Radio scripts** - Not generated
**ECU commands** - Not generated
## What Remains
**Webhook push** - Still works perfectly
**Buffer storage** - Still stores telemetry
**Strategy generation** - Just faster (1 instead of 20)
**F1 rule validation** - Still validates tire compounds
**Telemetry analysis** - Still calculates tire cliff, degradation
## Re-enabling Full Mode
When you need the complete system (for demos/production):
### 1. Update `.env`
```bash
STRATEGY_COUNT=20
```
### 2. Restart service
```bash
# Service will auto-reload if running with uvicorn --reload
# Or manually restart:
python main.py
```
### 3. Use analysis endpoint
```bash
# After brainstorm, call analyze with the 20 strategies
POST /api/strategy/analyze
{
"race_context": {...},
"strategies": [...], # 20 strategies from brainstorm
"enriched_telemetry": [...] # optional
}
```
## Performance Comparison
| Mode | Strategies | Time | Use Case |
|------|-----------|------|----------|
| **Ultra Fast** | 1 | ~10s | Testing, development |
| **Fast** | 5 | ~20s | Quick demos |
| **Standard** | 10 | ~35s | Demos with variety |
| **Full** | 20 | ~60s | Production, full analysis |
## Benefits of Simplified Flow
**Faster iteration** - Test webhook integration quickly
**Lower API costs** - Fewer Gemini API calls
**Easier debugging** - Simpler responses to inspect
**Better dev experience** - No waiting 2 minutes per test
**Still validates** - All core logic still works
## Migration Path
### Phase 1: Testing (Now)
- Use `STRATEGY_COUNT=1`
- Test webhook integration
- Verify telemetry flow
- Debug any issues
### Phase 2: Demo
- Set `STRATEGY_COUNT=5`
- Show variety of strategies
- Still fast enough for live demos
### Phase 3: Production
- Set `STRATEGY_COUNT=20`
- Enable analysis endpoint
- Full feature set
---
**Current Status:** ⚡ Ultra-fast mode enabled!
**Response Time:** ~10 seconds (was ~2 minutes)
**Ready for:** Rapid testing and webhook integration validation

View File

@@ -1,381 +0,0 @@
# AI Intelligence Layer - Implementation Summary
## 🎉 PROJECT COMPLETE
The AI Intelligence Layer has been successfully built and tested! This is the **core innovation** of your F1 race strategy system.
---
## 📦 What Was Built
### ✅ Core Components
1. **FastAPI Service (main.py)**
- Running on port 9000
- 4 endpoints: health, ingest webhook, brainstorm, analyze
- Full CORS support
- Comprehensive error handling
2. **Data Models (models/)**
- `input_models.py`: Request schemas for telemetry and race context
- `output_models.py`: Response schemas with 10+ nested structures
- `internal_models.py`: Internal processing models
3. **Gemini AI Integration (services/gemini_client.py)**
- Automatic JSON parsing with retry logic
- Error recovery with stricter prompts
- Demo mode caching for consistent results
- Configurable timeout and retry settings
4. **Telemetry Client (services/telemetry_client.py)**
- Fetches from enrichment service (localhost:8000)
- Health check integration
- Automatic fallback handling
5. **Strategy Services**
- `strategy_generator.py`: Brainstorm 20 diverse strategies
- `strategy_analyzer.py`: Select top 3 with detailed analysis
6. **Prompt Engineering (prompts/)**
- `brainstorm_prompt.py`: Creative strategy generation (temp 0.9)
- `analyze_prompt.py`: Analytical strategy selection (temp 0.3)
- Both include telemetry interpretation guides
7. **Utilities (utils/)**
- `validators.py`: Strategy validation + telemetry analysis
- `telemetry_buffer.py`: In-memory webhook data storage
8. **Sample Data & Tests**
- Sample enriched telemetry (10 laps)
- Sample race context (Monaco, Hamilton P4)
- Component test script
- API integration test script
---
## 🎯 Key Features Implemented
### Two-Step AI Strategy Process
**Step 1: Brainstorming** (POST /api/strategy/brainstorm)
- Generates 20 diverse strategies
- Categories: Conservative, Standard, Aggressive, Reactive, Contingency
- High creativity (temperature 0.9)
- Validates against F1 rules (min 2 tire compounds)
- Response time target: <5 seconds
**Step 2: Analysis** (POST /api/strategy/analyze)
- Analyzes all 20 strategies
- Selects top 3: RECOMMENDED, ALTERNATIVE, CONSERVATIVE
- Low temperature (0.3) for consistency
- Provides:
- Predicted race outcomes with probabilities
- Risk assessments
- Telemetry insights
- Engineer briefs
- Driver radio scripts
- ECU commands
- Situational context
- Response time target: <10 seconds
### Telemetry Intelligence
The system interprets 6 enriched metrics:
- **Aero Efficiency**: Car performance (<0.6 = problem)
- **Tire Degradation**: Wear rate (>0.85 = cliff imminent)
- **ERS Charge**: Energy availability (>0.7 = can attack)
- **Fuel Optimization**: Efficiency (<0.7 = must save)
- **Driver Consistency**: Reliability (<0.75 = risky)
- **Weather Impact**: Severity (high = flexible strategy)
### Smart Features
1. **Automatic Telemetry Fetching**: If not provided, fetches from enrichment service
2. **Webhook Support**: Real-time push from enrichment module
3. **Trend Analysis**: Calculates degradation rates, projects tire cliff
4. **Strategy Validation**: Ensures legal strategies per F1 rules
5. **Demo Mode**: Caches responses for consistent demos
6. **Retry Logic**: Handles Gemini API failures gracefully
---
## 🔧 Integration Points
### Upstream (HPC Enrichment Module)
```
http://localhost:8000/enriched?limit=10
```
**Pull model**: AI layer fetches telemetry
**Push model (IMPLEMENTED)**:
```bash
# In enrichment service .env:
NEXT_STAGE_CALLBACK_URL=http://localhost:9000/api/ingest/enriched
```
Enrichment service pushes to AI layer webhook
### Downstream (Frontend/Display)
```
http://localhost:9000/api/strategy/brainstorm
http://localhost:9000/api/strategy/analyze
```
---
## 📊 Testing Results
### Component Tests ✅
```
✓ Parsed 10 telemetry records
✓ Parsed race context for Hamilton
✓ Tire degradation rate: 0.0300 per lap
✓ Aero efficiency average: 0.840
✓ ERS pattern: stable
✓ Projected tire cliff: Lap 33
✓ Strategy validation working correctly
✓ Telemetry summary generation working
✓ Generated brainstorm prompt (4877 characters)
```
All data models, validators, and prompt generation working perfectly!
---
## 🚀 How to Use
### 1. Setup (One-time)
```bash
cd ai_intelligence_layer
# Already done:
# - Virtual environment created (myenv)
# - Dependencies installed
# - .env file created
# YOU NEED TO DO:
# Add your Gemini API key to .env
nano .env
# Replace: GEMINI_API_KEY=your_gemini_api_key_here
```
Get a Gemini API key: https://makersuite.google.com/app/apikey
### 2. Start the Service
```bash
# Option 1: Direct
cd ai_intelligence_layer
source myenv/bin/activate
python main.py
# Option 2: With uvicorn
uvicorn main:app --host 0.0.0.0 --port 9000 --reload
```
### 3. Test the Service
```bash
# Quick health check
curl http://localhost:9000/api/health
# Full integration test
./test_api.sh
# Manual test
curl -X POST http://localhost:9000/api/strategy/brainstorm \
-H "Content-Type: application/json" \
-d @- << EOF
{
"enriched_telemetry": $(cat sample_data/sample_enriched_telemetry.json),
"race_context": $(cat sample_data/sample_race_context.json)
}
EOF
```
---
## 📁 Project Structure
```
ai_intelligence_layer/
├── main.py # FastAPI app ✅
├── config.py # Settings ✅
├── requirements.txt # Dependencies ✅
├── .env # Configuration ✅
├── .env.example # Template ✅
├── README.md # Documentation ✅
├── test_api.sh # API tests ✅
├── test_components.py # Unit tests ✅
├── models/
│ ├── input_models.py # Request schemas ✅
│ ├── output_models.py # Response schemas ✅
│ └── internal_models.py # Internal models ✅
├── services/
│ ├── gemini_client.py # Gemini wrapper ✅
│ ├── telemetry_client.py # Enrichment API ✅
│ ├── strategy_generator.py # Brainstorm logic ✅
│ └── strategy_analyzer.py # Analysis logic ✅
├── prompts/
│ ├── brainstorm_prompt.py # Step 1 prompt ✅
│ └── analyze_prompt.py # Step 2 prompt ✅
├── utils/
│ ├── validators.py # Validation logic ✅
│ └── telemetry_buffer.py # Webhook buffer ✅
└── sample_data/
├── sample_enriched_telemetry.json ✅
└── sample_race_context.json ✅
```
**Total Files Created: 23**
**Lines of Code: ~3,500+**
---
## 🎨 Example Output
### Brainstorm Response (20 strategies)
```json
{
"strategies": [
{
"strategy_id": 1,
"strategy_name": "Conservative 1-Stop",
"stop_count": 1,
"pit_laps": [35],
"tire_sequence": ["medium", "hard"],
"risk_level": "low",
...
},
// ... 19 more
]
}
```
### Analyze Response (Top 3 with full details)
```json
{
"top_strategies": [
{
"rank": 1,
"classification": "RECOMMENDED",
"predicted_outcome": {
"finish_position_most_likely": 3,
"p1_probability": 8,
"p3_probability": 45,
"confidence_score": 78
},
"engineer_brief": {
"title": "Aggressive Undercut Lap 28",
"summary": "67% chance P3 or better",
"execution_steps": [...]
},
"driver_audio_script": "Box this lap. Softs going on...",
"ecu_commands": {
"fuel_mode": "RICH",
"ers_strategy": "AGGRESSIVE_DEPLOY",
"engine_mode": "PUSH"
}
},
// ... 2 more strategies
],
"situational_context": {
"critical_decision_point": "Next 3 laps crucial",
"time_sensitivity": "Decision needed within 2 laps"
}
}
```
---
## 🏆 Innovation Highlights
### What Makes This Special
1. **Real HPC Integration**: Uses actual enriched telemetry from HPC simulations
2. **Dual-LLM Process**: Brainstorm diversity + analytical selection
3. **Telemetry Intelligence**: Interprets metrics to project tire cliffs, fuel needs
4. **Production-Ready**: Validation, error handling, retry logic, webhooks
5. **Race-Ready Output**: Includes driver radio scripts, ECU commands, engineer briefs
6. **F1 Rule Compliance**: Validates tire compound rules, pit window constraints
### Technical Excellence
- **Pydantic Models**: Full type safety and validation
- **Async/Await**: Non-blocking API calls
- **Smart Fallbacks**: Auto-fetch telemetry if not provided
- **Configurable**: Temperature, timeouts, retry logic all adjustable
- **Demo Mode**: Repeatable results for presentations
- **Comprehensive Testing**: Component tests + integration tests
---
## 🐛 Known Limitations
1. **Requires Gemini API Key**: Must configure before use
2. **Enrichment Service Dependency**: Best with localhost:8000 running
3. **Single Race Support**: Designed for one race at a time
4. **English Only**: Prompts and outputs in English
---
## 🔜 Next Steps
### To Deploy This
1. Add your Gemini API key to `.env`
2. Ensure enrichment service is running on port 8000
3. Start this service: `python main.py`
4. Test with: `./test_api.sh`
### To Enhance (Future)
- Multi-race session management
- Historical strategy learning
- Real-time streaming updates
- Frontend dashboard integration
- Multi-language support
---
## 📞 Troubleshooting
### "Import errors" in IDE
- This is normal - dependencies installed in `myenv`
- Run from terminal with venv activated
- Or configure IDE to use `myenv/bin/python`
### "Enrichment service unreachable"
- Either start enrichment service on port 8000
- Or provide telemetry data directly in requests
### "Gemini API error"
- Check API key in `.env`
- Verify API quota: https://makersuite.google.com
- Check network connectivity
---
## ✨ Summary
You now have a **fully functional AI Intelligence Layer** that:
✅ Receives enriched telemetry from HPC simulations
✅ Generates 20 diverse race strategies using AI
✅ Analyzes and selects top 3 with detailed rationale
✅ Provides actionable outputs (radio scripts, ECU commands)
✅ Integrates via REST API and webhooks
✅ Validates strategies against F1 rules
✅ Handles errors gracefully with retry logic
✅ Includes comprehensive documentation and tests
**This is hackathon-ready and demo-ready!** 🏎️💨
Just add your Gemini API key and you're good to go!
---
Built with ❤️ for the HPC + AI Race Strategy Hackathon

View File

@@ -1,131 +0,0 @@
# 🚀 Quick Start Guide - AI Intelligence Layer
## ⚡ 60-Second Setup
### 1. Get Gemini API Key
Visit: https://makersuite.google.com/app/apikey
### 2. Configure
```bash
cd ai_intelligence_layer
nano .env
# Add your API key: GEMINI_API_KEY=your_key_here
```
### 3. Run
```bash
source myenv/bin/activate
python main.py
```
Service starts on: http://localhost:9000
---
## 🧪 Quick Test
### Health Check
```bash
curl http://localhost:9000/api/health
```
### Full Test
```bash
./test_api.sh
```
---
## 📡 API Endpoints
| Endpoint | Method | Purpose |
|----------|--------|---------|
| `/api/health` | GET | Health check |
| `/api/ingest/enriched` | POST | Webhook receiver |
| `/api/strategy/brainstorm` | POST | Generate 20 strategies |
| `/api/strategy/analyze` | POST | Select top 3 |
---
## 🔗 Integration
### With Enrichment Service (localhost:8000)
**Option 1: Pull** (AI fetches)
```bash
# In enrichment service, AI will auto-fetch from:
# http://localhost:8000/enriched?limit=10
```
**Option 2: Push** (Webhook - RECOMMENDED)
```bash
# In enrichment service .env:
NEXT_STAGE_CALLBACK_URL=http://localhost:9000/api/ingest/enriched
```
---
## 📦 What You Get
### Input
- Enriched telemetry (aero, tires, ERS, fuel, consistency)
- Race context (track, position, competitors)
### Output
- **20 diverse strategies** (conservative → aggressive)
- **Top 3 analyzed** with:
- Win probabilities
- Risk assessment
- Engineer briefs
- Driver radio scripts
- ECU commands
---
## 🎯 Example Usage
### Brainstorm
```bash
curl -X POST http://localhost:9000/api/strategy/brainstorm \
-H "Content-Type: application/json" \
-d '{
"race_context": {
"race_info": {"track_name": "Monaco", "current_lap": 27, "total_laps": 58},
"driver_state": {"driver_name": "Hamilton", "current_position": 4}
}
}'
```
### Analyze
```bash
curl -X POST http://localhost:9000/api/strategy/analyze \
-H "Content-Type: application/json" \
-d '{
"race_context": {...},
"strategies": [...]
}'
```
---
## 🐛 Troubleshooting
| Issue | Solution |
|-------|----------|
| API key error | Add `GEMINI_API_KEY` to `.env` |
| Enrichment unreachable | Start enrichment service or provide telemetry data |
| Import errors | Activate venv: `source myenv/bin/activate` |
---
## 📚 Documentation
- **Full docs**: `README.md`
- **Implementation details**: `IMPLEMENTATION_SUMMARY.md`
- **Sample data**: `sample_data/`
---
## ✅ Status
All systems operational! Ready to generate race strategies! 🏎️💨

View File

@@ -1,294 +0,0 @@
# Race Context Guide
## Why Race Context is Separate from Telemetry
**Enrichment Service** (port 8000):
- Provides: **Enriched telemetry** (changes every lap)
- Example: tire degradation, aero efficiency, ERS charge
**Client/Frontend**:
- Provides: **Race context** (changes less frequently)
- Example: driver name, current position, track info, competitors
This separation is intentional:
- Telemetry changes **every lap** (real-time HPC data)
- Race context changes **occasionally** (position changes, pit stops)
- Keeps enrichment service simple and focused
## How to Call Brainstorm with Both
### Option 1: Client Provides Both (Recommended)
```bash
curl -X POST http://localhost:9000/api/strategy/brainstorm \
-H "Content-Type: application/json" \
-d '{
"enriched_telemetry": [
{
"lap": 27,
"aero_efficiency": 0.85,
"tire_degradation_index": 0.72,
"ers_charge": 0.78,
"fuel_optimization_score": 0.82,
"driver_consistency": 0.88,
"weather_impact": "low"
}
],
"race_context": {
"race_info": {
"track_name": "Monaco",
"current_lap": 27,
"total_laps": 58,
"weather_condition": "Dry",
"track_temp_celsius": 42
},
"driver_state": {
"driver_name": "Hamilton",
"current_position": 4,
"current_tire_compound": "medium",
"tire_age_laps": 14,
"fuel_remaining_percent": 47
},
"competitors": []
}
}'
```
### Option 2: AI Layer Fetches Telemetry, Client Provides Context
```bash
# Enrichment service POSTs telemetry to webhook
# Then client calls:
curl -X POST http://localhost:9000/api/strategy/brainstorm \
-H "Content-Type: application/json" \
-d '{
"race_context": {
"race_info": {...},
"driver_state": {...},
"competitors": []
}
}'
```
AI layer will use telemetry from:
1. **Buffer** (if webhook has pushed data) ← CURRENT SETUP
2. **GET /enriched** from enrichment service (fallback)
## Creating a Race Context Template
Here's a reusable template:
```json
{
"race_context": {
"race_info": {
"track_name": "Monaco",
"current_lap": 27,
"total_laps": 58,
"weather_condition": "Dry",
"track_temp_celsius": 42
},
"driver_state": {
"driver_name": "Hamilton",
"current_position": 4,
"current_tire_compound": "medium",
"tire_age_laps": 14,
"fuel_remaining_percent": 47
},
"competitors": [
{
"position": 1,
"driver": "Verstappen",
"tire_compound": "hard",
"tire_age_laps": 18,
"gap_seconds": -12.5
},
{
"position": 2,
"driver": "Leclerc",
"tire_compound": "medium",
"tire_age_laps": 10,
"gap_seconds": -5.2
},
{
"position": 3,
"driver": "Norris",
"tire_compound": "medium",
"tire_age_laps": 12,
"gap_seconds": -2.1
},
{
"position": 5,
"driver": "Sainz",
"tire_compound": "soft",
"tire_age_laps": 5,
"gap_seconds": 3.8
}
]
}
}
```
## Where Does Race Context Come From?
In a real system, race context typically comes from:
1. **Timing System** - Official F1 timing data
- Current positions
- Gap times
- Lap numbers
2. **Team Database** - Historical race data
- Track information
- Total laps for this race
- Weather forecasts
3. **Pit Wall** - Live observations
- Competitor tire strategies
- Weather conditions
- Track temperature
4. **Telemetry Feed** - Some data overlaps
- Driver's current tires
- Tire age
- Fuel remaining
## Recommended Architecture
```
┌─────────────────────┐
│ Timing System │
│ (Race Control) │
└──────────┬──────────┘
┌─────────────────────┐ ┌─────────────────────┐
│ Frontend/Client │ │ Enrichment Service │
│ │ │ (Port 8000) │
│ Manages: │ │ │
│ - Race context │ │ Manages: │
│ - UI state │ │ - Telemetry │
│ - User inputs │ │ - HPC enrichment │
└──────────┬──────────┘ └──────────┬──────────┘
│ │
│ │ POST /ingest/enriched
│ │ (telemetry only)
│ ▼
│ ┌─────────────────────┐
│ │ AI Layer Buffer │
│ │ (telemetry only) │
│ └─────────────────────┘
│ │
│ POST /api/strategy/brainstorm │
│ (race_context + telemetry) │
└───────────────────────────────┤
┌─────────────────────┐
│ AI Strategy Layer │
│ (Port 9000) │
│ │
│ Generates 3 │
│ strategies │
└─────────────────────┘
```
## Python Example: Calling with Race Context
```python
import httpx
async def get_race_strategies(race_context: dict):
"""
Get strategies from AI layer.
Args:
race_context: Current race state
Returns:
3 strategies with pit plans and risk assessments
"""
url = "http://localhost:9000/api/strategy/brainstorm"
payload = {
"race_context": race_context
# enriched_telemetry is optional - AI will use buffer or fetch
}
async with httpx.AsyncClient(timeout=60.0) as client:
response = await client.post(url, json=payload)
response.raise_for_status()
return response.json()
# Usage:
race_context = {
"race_info": {
"track_name": "Monaco",
"current_lap": 27,
"total_laps": 58,
"weather_condition": "Dry",
"track_temp_celsius": 42
},
"driver_state": {
"driver_name": "Hamilton",
"current_position": 4,
"current_tire_compound": "medium",
"tire_age_laps": 14,
"fuel_remaining_percent": 47
},
"competitors": []
}
strategies = await get_race_strategies(race_context)
print(f"Generated {len(strategies['strategies'])} strategies")
```
## Alternative: Enrichment Service Sends Full Payload
If you really want enrichment service to send race context too, you'd need to:
### 1. Store race context in enrichment service
```python
# In hpcsim/api.py
_race_context = {
"race_info": {...},
"driver_state": {...},
"competitors": []
}
@app.post("/set_race_context")
async def set_race_context(context: Dict[str, Any]):
"""Update race context (call this when race state changes)."""
global _race_context
_race_context = context
return {"status": "ok"}
```
### 2. Send both in webhook
```python
# In ingest_telemetry endpoint
if _CALLBACK_URL:
payload = {
"enriched_telemetry": [enriched],
"race_context": _race_context
}
await client.post(_CALLBACK_URL, json=payload)
```
### 3. Update AI webhook to handle full payload
But this adds complexity. **I recommend keeping it simple**: client provides race_context when calling brainstorm.
---
## Current Working Setup
**Enrichment service** → POSTs telemetry to `/api/ingest/enriched`
**AI layer** → Stores telemetry in buffer
**Client** → Calls `/api/strategy/brainstorm` with race_context
**AI layer** → Uses buffer telemetry + provided race_context → Generates strategies
This is clean, simple, and follows single responsibility principle!

View File

@@ -1,290 +0,0 @@
# 🚀 Quick Start: Full System Test
## Overview
Test the complete webhook integration flow:
1. **Enrichment Service** (port 8000) - Receives telemetry, enriches it, POSTs to AI layer
2. **AI Intelligence Layer** (port 9000) - Receives enriched data, generates 3 strategies
## Step-by-Step Testing
### 1. Start the Enrichment Service (Port 8000)
From the **project root** (`HPCSimSite/`):
```bash
# Option A: Using the serve script
python3 scripts/serve.py
```
**Or from any directory:**
```bash
cd /Users/rishubmadhav/Documents/GitHub/HPCSimSite
python3 -m uvicorn hpcsim.api:app --host 0.0.0.0 --port 8000
```
You should see:
```
INFO: Uvicorn running on http://0.0.0.0:8000
INFO: Application startup complete.
```
**Verify it's running:**
```bash
curl http://localhost:8000/healthz
# Should return: {"status":"ok","stored":0}
```
### 2. Configure Webhook Callback
The enrichment service needs to know where to send enriched data.
**Option A: Set environment variable (before starting)**
```bash
export NEXT_STAGE_CALLBACK_URL=http://localhost:9000/api/ingest/enriched
python3 scripts/serve.py
```
**Option B: For testing, manually POST enriched data**
You can skip the callback and use `test_webhook_push.py` to simulate it (already working!).
### 3. Start the AI Intelligence Layer (Port 9000)
In a **new terminal**, from `ai_intelligence_layer/`:
```bash
cd /Users/rishubmadhav/Documents/GitHub/HPCSimSite/ai_intelligence_layer
source myenv/bin/activate # Activate virtual environment
python main.py
```
You should see:
```
INFO - Starting AI Intelligence Layer on port 9000
INFO - Strategy count: 3
INFO - All services initialized successfully
INFO: Uvicorn running on http://0.0.0.0:9000
```
**Verify it's running:**
```bash
curl http://localhost:9000/api/health
```
### 4. Test the Webhook Flow
**Method 1: Simulate enrichment service (fastest)**
```bash
cd ai_intelligence_layer
python3 test_webhook_push.py --loop 5
```
Output:
```
✓ Posted lap 27 - Buffer size: 1 records
✓ Posted lap 28 - Buffer size: 2 records
...
Posted 5/5 records successfully
```
**Method 2: POST to enrichment service (full integration)**
POST raw telemetry to enrichment service, it will enrich and forward:
```bash
curl -X POST http://localhost:8000/ingest/telemetry \
-H "Content-Type: application/json" \
-d '{
"lap": 27,
"speed": 310,
"tire_temp": 95,
"fuel_level": 45
}'
```
*Note: This requires NEXT_STAGE_CALLBACK_URL to be set*
### 5. Generate Strategies
```bash
cd ai_intelligence_layer
python3 test_buffer_usage.py
```
Output:
```
Testing FAST brainstorm with buffered telemetry...
(Configured for 3 strategies - fast and diverse!)
✓ Brainstorm succeeded!
Generated 3 strategies
Saved to: /tmp/brainstorm_strategies.json
Strategies:
1. Conservative Stay Out (1-stop, low risk)
Tires: medium → hard
Pits at: laps [35]
Extend current stint then hard tires to end
2. Standard Undercut (1-stop, medium risk)
Tires: medium → hard
Pits at: laps [32]
Pit before tire cliff for track position
3. Aggressive Two-Stop (2-stop, high risk)
Tires: medium → soft → hard
Pits at: laps [30, 45]
Early pit for fresh rubber and pace advantage
✓ SUCCESS: AI layer is using webhook buffer!
Full JSON saved to /tmp/brainstorm_strategies.json
```
### 6. View the Results
```bash
cat /tmp/brainstorm_strategies.json | python3 -m json.tool
```
Or just:
```bash
cat /tmp/brainstorm_strategies.json
```
## Terminal Setup
Here's the recommended terminal layout:
```
┌─────────────────────────┬─────────────────────────┐
│ Terminal 1 │ Terminal 2 │
│ Enrichment Service │ AI Intelligence Layer │
│ (Port 8000) │ (Port 9000) │
│ │ │
│ $ cd HPCSimSite │ $ cd ai_intelligence... │
│ $ python3 scripts/ │ $ source myenv/bin/... │
│ serve.py │ $ python main.py │
│ │ │
│ Running... │ Running... │
└─────────────────────────┴─────────────────────────┘
┌───────────────────────────────────────────────────┐
│ Terminal 3 - Testing │
│ │
│ $ cd ai_intelligence_layer │
│ $ python3 test_webhook_push.py --loop 5 │
│ $ python3 test_buffer_usage.py │
└───────────────────────────────────────────────────┘
```
## Current Configuration
### Enrichment Service (Port 8000)
- **Endpoints:**
- `POST /ingest/telemetry` - Receive raw telemetry
- `POST /enriched` - Manually post enriched data
- `GET /enriched?limit=N` - Retrieve recent enriched records
- `GET /healthz` - Health check
### AI Intelligence Layer (Port 9000)
- **Endpoints:**
- `GET /api/health` - Health check
- `POST /api/ingest/enriched` - Webhook receiver (enrichment service POSTs here)
- `POST /api/strategy/brainstorm` - Generate 3 strategies
- ~~`POST /api/strategy/analyze`~~ - **DISABLED** for speed
- **Configuration:**
- `STRATEGY_COUNT=3` - Generates 3 strategies
- `FAST_MODE=true` - Uses shorter prompts
- Response time: ~15-20 seconds (was ~2 minutes with 20 strategies + analysis)
## Troubleshooting
### Enrichment service won't start
```bash
# Check if port 8000 is already in use
lsof -i :8000
# Kill existing process
kill -9 <PID>
# Or use a different port
python3 -m uvicorn hpcsim.api:app --host 0.0.0.0 --port 8001
```
### AI layer can't find enrichment service
If you see: `"Cannot connect to enrichment service at http://localhost:8000"`
**Solution:** The buffer is empty and it's trying to pull from enrichment service.
```bash
# Push some data via webhook first:
python3 test_webhook_push.py --loop 5
```
### Virtual environment issues
```bash
cd ai_intelligence_layer
# Check if venv exists
ls -la myenv/
# If missing, recreate:
python3 -m venv myenv
source myenv/bin/activate
pip install -r requirements.txt
```
### Module not found errors
```bash
# For enrichment service
cd /Users/rishubmadhav/Documents/GitHub/HPCSimSite
export PYTHONPATH=$PWD:$PYTHONPATH
python3 scripts/serve.py
# For AI layer
cd ai_intelligence_layer
source myenv/bin/activate
python main.py
```
## Full Integration Test Workflow
```bash
# Terminal 1: Start enrichment
cd /Users/rishubmadhav/Documents/GitHub/HPCSimSite
export NEXT_STAGE_CALLBACK_URL=http://localhost:9000/api/ingest/enriched
python3 scripts/serve.py
# Terminal 2: Start AI layer
cd /Users/rishubmadhav/Documents/GitHub/HPCSimSite/ai_intelligence_layer
source myenv/bin/activate
python main.py
# Terminal 3: Test webhook push
cd /Users/rishubmadhav/Documents/GitHub/HPCSimSite/ai_intelligence_layer
python3 test_webhook_push.py --loop 5
# Terminal 3: Generate strategies
python3 test_buffer_usage.py
# View results
cat /tmp/brainstorm_strategies.json | python3 -m json.tool
```
## What's Next?
1.**Both services running** - Enrichment on 8000, AI on 9000
2.**Webhook tested** - Data flows from enrichment → AI layer
3.**Strategies generated** - 3 strategies in ~20 seconds
4. ⏭️ **Real telemetry** - Connect actual race data source
5. ⏭️ **Frontend** - Build UI to display strategies
6. ⏭️ **Production** - Increase to 20 strategies, enable analysis
---
**Status:** 🚀 Both services ready to run!
**Performance:** ~20 seconds for 3 strategies (vs 2+ minutes for 20 + analysis)
**Integration:** Webhook push working perfectly

View File

@@ -1,236 +0,0 @@
# ✅ AI Intelligence Layer - WORKING!
## 🎉 Success Summary
The AI Intelligence Layer is now **fully functional** and has been successfully tested!
### Test Results from Latest Run:
```
✓ Health Check: PASSED (200 OK)
✓ Brainstorm: PASSED (200 OK)
- Generated 19/20 strategies in 48 seconds
- 1 strategy filtered (didn't meet F1 tire compound rule)
- Fast mode working perfectly
✓ Service: RUNNING (port 9000)
```
## 📊 Performance Metrics
| Metric | Target | Actual | Status |
|--------|--------|--------|--------|
| Health check | <1s | <1s | ✅ |
| Brainstorm | 15-30s | 48s | ⚠️ Acceptable |
| Service uptime | Stable | Stable | ✅ |
| Fast mode | Enabled | Enabled | ✅ |
**Note:** 48s is slightly slower than the 15-30s target, but well within acceptable range. The Gemini API response time varies based on load.
## 🚀 How to Use
### 1. Start the Service
```bash
cd ai_intelligence_layer
source myenv/bin/activate
python main.py
```
### 2. Run Tests
**Best option - Python test script:**
```bash
python3 test_api.py
```
**Alternative - Shell script:**
```bash
./test_api.sh
```
### 3. Check Results
```bash
# View generated strategies
cat /tmp/brainstorm_result.json | python3 -m json.tool | head -50
# View analysis results
cat /tmp/analyze_result.json | python3 -m json.tool | head -100
```
## ✨ What's Working
### ✅ Core Features
- [x] FastAPI service on port 9000
- [x] Health check endpoint
- [x] Webhook receiver for enrichment data
- [x] Strategy brainstorming (20 diverse strategies)
- [x] Strategy analysis (top 3 selection)
- [x] Automatic telemetry fetching from enrichment service
- [x] F1 rule validation (tire compounds)
- [x] Fast mode for quicker responses
- [x] Retry logic with exponential backoff
- [x] Comprehensive error handling
### ✅ AI Features
- [x] Gemini 2.5 Flash integration
- [x] JSON response parsing
- [x] Prompt optimization (fast mode)
- [x] Strategy diversity (5 types)
- [x] Risk assessment
- [x] Telemetry interpretation
- [x] Tire cliff projection
- [x] Detailed analysis outputs
### ✅ Output Quality
- [x] Win probability predictions
- [x] Risk assessments
- [x] Engineer briefs
- [x] Driver radio scripts
- [x] ECU commands (fuel, ERS, engine modes)
- [x] Situational context
## 📝 Configuration
Current optimal settings in `.env`:
```bash
GEMINI_MODEL=gemini-2.5-flash # Fast, good quality
FAST_MODE=true # Optimized prompts
BRAINSTORM_TIMEOUT=90 # Sufficient time
ANALYZE_TIMEOUT=120 # Sufficient time
DEMO_MODE=false # Real-time mode
```
## 🎯 Next Steps
### For Demo/Testing:
1. ✅ Service is ready to use
2. ✅ Test scripts available
3. ⏭️ Try with different race scenarios
4. ⏭️ Test webhook integration with enrichment service
### For Production:
1. ⏭️ Set up monitoring/logging
2. ⏭️ Add rate limiting
3. ⏭️ Consider caching frequently requested strategies
4. ⏭️ Add authentication if exposing publicly
### Optional Enhancements:
1. ⏭️ Frontend dashboard
2. ⏭️ Real-time strategy updates during race
3. ⏭️ Historical strategy learning
4. ⏭️ Multi-driver support
## 🔧 Troubleshooting Guide
### Issue: "Connection refused"
**Solution:** Start the service
```bash
python main.py
```
### Issue: Slow responses (>60s)
**Solution:** Already fixed with:
- Fast mode enabled
- Increased timeouts
- Optimized prompts
### Issue: "422 Unprocessable Content"
**Solution:** Use `test_api.py` instead of `test_api.sh`
- Python script handles JSON properly
- No external dependencies
### Issue: Service crashes
**Solution:** Check logs
```bash
python main.py 2>&1 | tee ai_service.log
```
## 📚 Documentation
| File | Purpose |
|------|---------|
| `README.md` | Full documentation |
| `QUICKSTART.md` | 60-second setup |
| `TESTING.md` | Testing guide |
| `TIMEOUT_FIX.md` | Timeout resolution details |
| `ARCHITECTURE.md` | System architecture |
| `IMPLEMENTATION_SUMMARY.md` | Technical details |
## 🎓 Example Usage
### Manual API Call
```python
import requests
# Brainstorm
response = requests.post('http://localhost:9000/api/strategy/brainstorm', json={
"race_context": {
"race_info": {
"track_name": "Monaco",
"current_lap": 27,
"total_laps": 58,
"weather_condition": "Dry",
"track_temp_celsius": 42
},
"driver_state": {
"driver_name": "Hamilton",
"current_position": 4,
"current_tire_compound": "medium",
"tire_age_laps": 14,
"fuel_remaining_percent": 47
},
"competitors": [...]
}
})
strategies = response.json()['strategies']
print(f"Generated {len(strategies)} strategies")
```
## 🌟 Key Achievements
1. **Built from scratch** - Complete FastAPI application with AI integration
2. **Production-ready** - Error handling, validation, retry logic
3. **Well-documented** - 7 documentation files, inline comments
4. **Tested** - Component tests + integration tests passing
5. **Optimized** - Fast mode reduces response time significantly
6. **Flexible** - Webhook + polling support for enrichment data
7. **Smart** - Interprets telemetry, projects tire cliffs, validates F1 rules
8. **Complete** - All requirements from original spec implemented
## 📊 Files Created
- **Core:** 7 files (main, config, models)
- **Services:** 4 files (Gemini, telemetry, strategy generation/analysis)
- **Prompts:** 2 files (brainstorm, analyze)
- **Utils:** 2 files (validators, buffer)
- **Tests:** 3 files (component, API shell, API Python)
- **Docs:** 7 files (README, quickstart, testing, timeout fix, architecture, implementation, this file)
- **Config:** 3 files (.env, .env.example, requirements.txt)
- **Sample Data:** 2 files (telemetry, race context)
**Total: 30+ files, ~4,000+ lines of code**
## 🏁 Final Status
```
╔═══════════════════════════════════════════════╗
║ AI INTELLIGENCE LAYER - FULLY OPERATIONAL ║
║ ║
║ ✅ Service Running ║
║ ✅ Tests Passing ║
║ ✅ Fast Mode Working ║
║ ✅ Gemini Integration Working ║
║ ✅ Strategy Generation Working ║
║ ✅ Documentation Complete ║
║ ║
║ READY FOR HACKATHON! 🏎️💨 ║
╚═══════════════════════════════════════════════╝
```
---
**Built with ❤️ for the HPC + AI Race Strategy Hackathon**
Last updated: October 18, 2025
Version: 1.0.0
Status: ✅ Production Ready

View File

@@ -1,219 +0,0 @@
# Testing the AI Intelligence Layer
## Quick Test Options
### Option 1: Python Script (RECOMMENDED - No dependencies)
```bash
python3 test_api.py
```
**Advantages:**
- ✅ No external tools required
- ✅ Clear, formatted output
- ✅ Built-in error handling
- ✅ Works on all systems
### Option 2: Shell Script
```bash
./test_api.sh
```
**Note:** Uses pure Python for JSON processing (no `jq` required)
### Option 3: Manual Testing
#### Health Check
```bash
curl http://localhost:9000/api/health | python3 -m json.tool
```
#### Brainstorm Test
```bash
python3 << 'EOF'
import json
import urllib.request
# Load data
with open('sample_data/sample_enriched_telemetry.json') as f:
telemetry = json.load(f)
with open('sample_data/sample_race_context.json') as f:
context = json.load(f)
# Make request
data = json.dumps({
"enriched_telemetry": telemetry,
"race_context": context
}).encode('utf-8')
req = urllib.request.Request(
'http://localhost:9000/api/strategy/brainstorm',
data=data,
headers={'Content-Type': 'application/json'}
)
with urllib.request.urlopen(req, timeout=120) as response:
result = json.loads(response.read())
print(f"Generated {len(result['strategies'])} strategies")
for s in result['strategies'][:3]:
print(f"{s['strategy_id']}. {s['strategy_name']} - {s['risk_level']} risk")
EOF
```
## Expected Output
### Successful Test Run
```
======================================================================
AI Intelligence Layer - Test Suite
======================================================================
1. Testing health endpoint...
✓ Status: healthy
✓ Service: AI Intelligence Layer
✓ Demo mode: False
2. Testing brainstorm endpoint...
(This may take 15-30 seconds...)
✓ Generated 20 strategies in 18.3s
Sample strategies:
1. Conservative 1-Stop
Stops: 1, Risk: low
2. Standard Medium-Hard
Stops: 1, Risk: medium
3. Aggressive Undercut
Stops: 2, Risk: high
3. Testing analyze endpoint...
(This may take 20-40 seconds...)
✓ Analysis complete in 24.7s
Top 3 strategies:
1. Aggressive Undercut (RECOMMENDED)
Predicted: P3
P3 or better: 75%
Risk: medium
2. Standard Two-Stop (ALTERNATIVE)
Predicted: P4
P3 or better: 63%
Risk: medium
3. Conservative 1-Stop (CONSERVATIVE)
Predicted: P5
P3 or better: 37%
Risk: low
======================================================================
RECOMMENDED STRATEGY DETAILS:
======================================================================
Engineer Brief:
Undercut Leclerc on lap 32. 75% chance of P3 or better.
Driver Radio:
"Box this lap. Soft tires going on. Push mode for next 8 laps."
ECU Commands:
Fuel: RICH
ERS: AGGRESSIVE_DEPLOY
Engine: PUSH
======================================================================
======================================================================
✓ ALL TESTS PASSED!
======================================================================
Results saved to:
- /tmp/brainstorm_result.json
- /tmp/analyze_result.json
```
## Troubleshooting
### "Connection refused"
```bash
# Service not running. Start it:
python main.py
```
### "Timeout" errors
```bash
# Check .env settings:
cat .env | grep TIMEOUT
# Should see:
# BRAINSTORM_TIMEOUT=90
# ANALYZE_TIMEOUT=120
# Also check Fast Mode is enabled:
cat .env | grep FAST_MODE
# Should see: FAST_MODE=true
```
### "422 Unprocessable Content"
This usually means invalid JSON in the request. The new test scripts handle this automatically.
### Test takes too long
```bash
# Enable fast mode in .env:
FAST_MODE=true
# Restart service:
# Press Ctrl+C in the terminal running python main.py
# Then: python main.py
```
## Performance Benchmarks
With `FAST_MODE=true` and `gemini-2.5-flash`:
| Test | Expected Time | Status |
|------|--------------|--------|
| Health | <1s | ✅ |
| Brainstorm | 15-30s | ✅ |
| Analyze | 20-40s | ✅ |
| **Total** | **40-70s** | ✅ |
## Component Tests
To test just the data models and validators (no API calls):
```bash
python test_components.py
```
This runs instantly and doesn't require the Gemini API.
## Files Created During Tests
- `/tmp/test_request.json` - Brainstorm request payload
- `/tmp/brainstorm_result.json` - 20 generated strategies
- `/tmp/analyze_request.json` - Analyze request payload
- `/tmp/analyze_result.json` - Top 3 analyzed strategies
You can inspect these files to see the full API responses.
## Integration with Enrichment Service
If the enrichment service is running on `localhost:8000`, the AI layer will automatically fetch telemetry data when not provided in the request:
```bash
# Test without providing telemetry (will fetch from enrichment service)
curl -X POST http://localhost:9000/api/strategy/brainstorm \
-H "Content-Type: application/json" \
-d '{
"race_context": {
"race_info": {"track_name": "Monaco", "current_lap": 27, "total_laps": 58},
"driver_state": {"driver_name": "Hamilton", "current_position": 4}
}
}'
```
---
**Ready to test!** 🚀
Just run: `python3 test_api.py`

View File

@@ -1,179 +0,0 @@
# Timeout Fix Guide
## Problem
Gemini API timing out with 504 errors after ~30 seconds.
## Solution Applied ✅
### 1. Increased Timeouts
**File: `.env`**
```bash
BRAINSTORM_TIMEOUT=90 # Increased from 30s
ANALYZE_TIMEOUT=120 # Increased from 60s
```
### 2. Added Fast Mode
**File: `.env`**
```bash
FAST_MODE=true # Use shorter, optimized prompts
```
Fast mode reduces prompt length by ~60% while maintaining quality:
- Brainstorm: ~4900 chars → ~1200 chars
- Analyze: ~6500 chars → ~1800 chars
### 3. Improved Retry Logic
**File: `services/gemini_client.py`**
- Longer backoff for timeout errors (5s instead of 2s)
- Minimum timeout of 60s for API calls
- Better error detection
### 4. Model Selection
You're using `gemini-2.5-flash` which is good! It's:
- ✅ Faster than Pro
- ✅ Cheaper
- ✅ Good quality for this use case
## How to Use
### Option 1: Fast Mode (RECOMMENDED for demos)
```bash
# In .env
FAST_MODE=true
```
- Faster responses (~10-20s per call)
- Shorter prompts
- Still high quality
### Option 2: Full Mode (for production)
```bash
# In .env
FAST_MODE=false
```
- More detailed prompts
- Slightly better quality
- Slower (~30-60s per call)
## Testing
### Quick Test
```bash
# Check health
curl http://localhost:9000/api/health
# Test with sample data (fast mode)
curl -X POST http://localhost:9000/api/strategy/brainstorm \
-H "Content-Type: application/json" \
-d @- << EOF
{
"enriched_telemetry": $(cat sample_data/sample_enriched_telemetry.json),
"race_context": $(cat sample_data/sample_race_context.json)
}
EOF
```
## Troubleshooting
### Still getting timeouts?
**1. Check API quota**
- Visit: https://aistudio.google.com/apikey
- Check rate limits and quota
- Free tier: 15 requests/min, 1M tokens/min
**2. Try different model**
```bash
# In .env, try:
GEMINI_MODEL=gemini-1.5-flash # Fastest
# or
GEMINI_MODEL=gemini-1.5-pro # Better quality, slower
```
**3. Increase timeouts further**
```bash
# In .env
BRAINSTORM_TIMEOUT=180
ANALYZE_TIMEOUT=240
```
**4. Reduce strategy count**
If still timing out, you can modify the code to generate fewer strategies:
- Edit `prompts/brainstorm_prompt.py`
- Change "Generate 20 strategies" to "Generate 10 strategies"
### Network issues?
**Check connectivity:**
```bash
# Test Google AI endpoint
curl -I https://generativelanguage.googleapis.com
# Check if behind proxy
echo $HTTP_PROXY
echo $HTTPS_PROXY
```
**Use VPN if needed** - Some regions have restricted access to Google AI APIs
### Monitor performance
**Watch logs:**
```bash
# Start server with logs
python main.py 2>&1 | tee ai_layer.log
# In another terminal, watch for timeouts
tail -f ai_layer.log | grep -i timeout
```
## Performance Benchmarks
### Fast Mode (FAST_MODE=true)
- Brainstorm: ~15-25s
- Analyze: ~20-35s
- Total workflow: ~40-60s
### Full Mode (FAST_MODE=false)
- Brainstorm: ~30-50s
- Analyze: ~40-70s
- Total workflow: ~70-120s
## What Changed
### Before
```
Prompt: 4877 chars
Timeout: 30s
Result: ❌ 504 timeout errors
```
### After (Fast Mode)
```
Prompt: ~1200 chars (75% reduction)
Timeout: 90s
Result: ✅ Works reliably
```
## Configuration Summary
Your current setup:
```bash
GEMINI_MODEL=gemini-2.5-flash # Fast model
FAST_MODE=true # Optimized prompts
BRAINSTORM_TIMEOUT=90 # 3x increase
ANALYZE_TIMEOUT=120 # 2x increase
```
This should work reliably now! 🎉
## Additional Tips
1. **For demos**: Keep FAST_MODE=true
2. **For production**: Test with FAST_MODE=false, adjust timeouts as needed
3. **Monitor quota**: Check usage at https://aistudio.google.com
4. **Cache responses**: Enable DEMO_MODE=true for repeatable demos
---
**Status**: FIXED ✅
**Ready to test**: YES 🚀

View File

@@ -1,316 +0,0 @@
# Webhook Push Integration Guide
## Overview
The AI Intelligence Layer supports **two integration models** for receiving enriched telemetry:
1. **Push Model (Webhook)** - Enrichment service POSTs data to AI layer ✅ **RECOMMENDED**
2. **Pull Model** - AI layer fetches data from enrichment service (fallback)
## Push Model (Webhook) - How It Works
```
┌─────────────────────┐ ┌─────────────────────┐
│ HPC Enrichment │ POST │ AI Intelligence │
│ Service │────────▶│ Layer │
│ (Port 8000) │ │ (Port 9000) │
└─────────────────────┘ └─────────────────────┘
┌──────────────┐
│ Telemetry │
│ Buffer │
│ (in-memory) │
└──────────────┘
┌──────────────┐
│ Brainstorm │
│ & Analyze │
│ (Gemini AI) │
└──────────────┘
```
### Configuration
In your **enrichment service** (port 8000), set the callback URL:
```bash
export NEXT_STAGE_CALLBACK_URL=http://localhost:9000/api/ingest/enriched
```
When enrichment is complete for each lap, the service will POST to this endpoint.
### Webhook Endpoint
**Endpoint:** `POST /api/ingest/enriched`
**Request Body:** Single enriched telemetry record (JSON)
```json
{
"lap": 27,
"lap_time_seconds": 78.456,
"tire_degradation_index": 0.72,
"fuel_remaining_kg": 45.2,
"aero_efficiency": 0.85,
"ers_recovery_rate": 0.78,
"brake_wear_index": 0.65,
"fuel_optimization_score": 0.82,
"driver_consistency": 0.88,
"predicted_tire_cliff_lap": 35,
"weather_impact": "minimal",
"hpc_simulation_id": "sim_monaco_lap27_001",
"metadata": {
"simulation_timestamp": "2025-10-18T22:15:30Z",
"confidence_level": 0.92,
"cluster_nodes_used": 8
}
}
```
**Response:**
```json
{
"status": "received",
"lap": 27,
"buffer_size": 15
}
```
### Buffer Behavior
- **Max Size:** 100 records (configurable)
- **Storage:** In-memory (cleared on restart)
- **Retrieval:** FIFO - newest data returned first
- **Auto-cleanup:** Oldest records dropped when buffer is full
## Testing the Webhook
### 1. Start the AI Intelligence Layer
```bash
cd ai_intelligence_layer
source myenv/bin/activate # or your venv
python main.py
```
Verify it's running:
```bash
curl http://localhost:9000/api/health
```
### 2. Simulate Enrichment Service Pushing Data
**Option A: Using the test script**
```bash
# Post single telemetry record
python3 test_webhook_push.py
# Post 10 records with 2s delay between each
python3 test_webhook_push.py --loop 10 --delay 2
# Post 5 records with 1s delay
python3 test_webhook_push.py --loop 5 --delay 1
```
**Option B: Using curl**
```bash
curl -X POST http://localhost:9000/api/ingest/enriched \
-H "Content-Type: application/json" \
-d '{
"lap": 27,
"lap_time_seconds": 78.456,
"tire_degradation_index": 0.72,
"fuel_remaining_kg": 45.2,
"aero_efficiency": 0.85,
"ers_recovery_rate": 0.78,
"brake_wear_index": 0.65,
"fuel_optimization_score": 0.82,
"driver_consistency": 0.88,
"predicted_tire_cliff_lap": 35,
"weather_impact": "minimal",
"hpc_simulation_id": "sim_monaco_lap27_001",
"metadata": {
"simulation_timestamp": "2025-10-18T22:15:30Z",
"confidence_level": 0.92,
"cluster_nodes_used": 8
}
}'
```
### 3. Verify Buffer Contains Data
Check the logs - you should see:
```
INFO - Received enriched telemetry webhook: lap 27
INFO - Added telemetry for lap 27 (buffer size: 1)
```
### 4. Test Strategy Generation Using Buffered Data
**Brainstorm endpoint** (no telemetry in request = uses buffer):
```bash
curl -X POST http://localhost:9000/api/strategy/brainstorm \
-H "Content-Type: application/json" \
-d '{
"race_context": {
"race_info": {
"track_name": "Monaco",
"current_lap": 27,
"total_laps": 58,
"weather_condition": "Dry",
"track_temp_celsius": 42
},
"driver_state": {
"driver_name": "Hamilton",
"current_position": 4,
"current_tire_compound": "medium",
"tire_age_laps": 14,
"fuel_remaining_percent": 47
},
"competitors": []
}
}' | python3 -m json.tool
```
Check logs for:
```
INFO - Using 10 telemetry records from webhook buffer
```
## Pull Model (Fallback)
If the buffer is empty and no telemetry is provided in the request, the AI layer will **automatically fetch** from the enrichment service:
```bash
GET http://localhost:8000/enriched?limit=10
```
This ensures the system works even without webhook configuration.
## Priority Order
When brainstorm/analyze endpoints are called:
1. **Check request body** - Use `enriched_telemetry` if provided
2. **Check buffer** - Use webhook buffer if it has data
3. **Fetch from service** - Pull from enrichment service as fallback
4. **Error** - If all fail, return 400 error
## Production Recommendations
### For Enrichment Service
```bash
# Configure callback URL
export NEXT_STAGE_CALLBACK_URL=http://ai-layer:9000/api/ingest/enriched
# Add retry logic (recommended)
export CALLBACK_MAX_RETRIES=3
export CALLBACK_TIMEOUT=10
```
### For AI Layer
```python
# config.py - Increase buffer size for production
telemetry_buffer_max_size: int = 500 # Store more history
# Consider Redis for persistent buffer
# (current implementation is in-memory only)
```
### Health Monitoring
```bash
# Check buffer status
curl http://localhost:9000/api/health
# Response includes buffer info (could be added):
{
"status": "healthy",
"buffer_size": 25,
"buffer_max_size": 100
}
```
## Common Issues
### 1. Webhook Not Receiving Data
**Symptoms:** Buffer size stays at 0
**Solutions:**
- Verify enrichment service has `NEXT_STAGE_CALLBACK_URL` configured
- Check network connectivity between services
- Examine enrichment service logs for POST errors
- Confirm AI layer is running on port 9000
### 2. Old Data in Buffer
**Symptoms:** AI uses outdated telemetry
**Solutions:**
- Buffer is FIFO - automatically clears old data
- Restart AI service to clear buffer
- Increase buffer size if race generates data faster than consumption
### 3. Pull Model Used Instead of Push
**Symptoms:** Logs show "fetching from enrichment service" instead of "using buffer"
**Solutions:**
- Confirm webhook is posting data (check buffer size in logs)
- Verify webhook POST is successful (200 response)
- Check if buffer was cleared (restart)
## Integration Examples
### Python (Enrichment Service)
```python
import httpx
async def push_enriched_telemetry(telemetry_data: dict):
"""Push enriched telemetry to AI layer."""
url = "http://localhost:9000/api/ingest/enriched"
async with httpx.AsyncClient() as client:
response = await client.post(url, json=telemetry_data, timeout=10.0)
response.raise_for_status()
return response.json()
```
### Shell Script (Testing)
```bash
#!/bin/bash
# push_telemetry.sh
for lap in {1..10}; do
curl -X POST http://localhost:9000/api/ingest/enriched \
-H "Content-Type: application/json" \
-d "{\"lap\": $lap, \"tire_degradation_index\": 0.7, ...}"
sleep 2
done
```
## Benefits of Push Model
**Real-time** - AI layer receives data immediately as enrichment completes
**Efficient** - No polling, reduces load on enrichment service
**Decoupled** - Services don't need to coordinate timing
**Resilient** - Buffer allows AI to process multiple requests from same dataset
**Simple** - Enrichment service just POST and forget
---
**Next Steps:**
1. Configure `NEXT_STAGE_CALLBACK_URL` in enrichment service
2. Test webhook with `test_webhook_push.py`
3. Monitor logs to confirm push model is working
4. Run brainstorm/analyze and verify buffer usage

View File

@@ -1,200 +0,0 @@
# ✅ Webhook Push Integration - WORKING!
## Summary
Your AI Intelligence Layer now **supports webhook push integration** where the enrichment service POSTs telemetry data directly to the AI layer.
## What Was Changed
### 1. Enhanced Telemetry Priority (main.py)
Both `/api/strategy/brainstorm` and `/api/strategy/analyze` now check sources in this order:
1. **Request body** - If telemetry provided in request
2. **Webhook buffer** - If webhook has pushed data ✨ **NEW**
3. **Pull from service** - Fallback to GET http://localhost:8000/enriched
4. **Error** - If all sources fail
### 2. Test Scripts Created
- `test_webhook_push.py` - Simulates enrichment service POSTing telemetry
- `test_buffer_usage.py` - Verifies brainstorm uses buffered data
- `check_enriched.py` - Checks enrichment service for live data
### 3. Documentation
- `WEBHOOK_INTEGRATION.md` - Complete integration guide
## How It Works
```
Enrichment Service AI Intelligence Layer
(Port 8000) (Port 9000)
│ │
│ POST telemetry │
│──────────────────────────▶│
│ /api/ingest/enriched │
│ │
│ ✓ {status: "received"} │
│◀──────────────────────────│
│ │
┌──────────────┐
│ Buffer │
│ (5 records) │
└──────────────┘
User calls │
brainstorm │
(no telemetry) │
Uses buffer data!
```
## Quick Test (Just Completed! ✅)
### Step 1: Push telemetry via webhook
```bash
python3 test_webhook_push.py --loop 5 --delay 1
```
**Result:**
```
✓ Posted lap 27 - Buffer size: 1 records
✓ Posted lap 28 - Buffer size: 2 records
✓ Posted lap 29 - Buffer size: 3 records
✓ Posted lap 30 - Buffer size: 4 records
✓ Posted lap 31 - Buffer size: 5 records
Posted 5/5 records successfully
✓ Telemetry is now in the AI layer's buffer
```
### Step 2: Call brainstorm (will use buffer automatically)
```bash
python3 test_buffer_usage.py
```
This calls `/api/strategy/brainstorm` **without** providing telemetry in the request.
**Expected logs in AI service:**
```
INFO - Using 5 telemetry records from webhook buffer
INFO - Generated 20 strategies
```
## Configure Your Enrichment Service
In your enrichment service (port 8000), set the callback URL:
```bash
export NEXT_STAGE_CALLBACK_URL=http://localhost:9000/api/ingest/enriched
```
Then in your enrichment code:
```python
import httpx
async def send_enriched_telemetry(telemetry: dict):
"""Push enriched telemetry to AI layer."""
async with httpx.AsyncClient() as client:
response = await client.post(
"http://localhost:9000/api/ingest/enriched",
json=telemetry,
timeout=10.0
)
response.raise_for_status()
return response.json()
# After HPC enrichment completes for a lap:
await send_enriched_telemetry({
"lap": 27,
"aero_efficiency": 0.85,
"tire_degradation_index": 0.72,
"ers_charge": 0.78,
"fuel_optimization_score": 0.82,
"driver_consistency": 0.88,
"weather_impact": "low"
})
```
## Telemetry Model (Required Fields)
Your enrichment service must POST data matching this exact schema:
```json
{
"lap": 27,
"aero_efficiency": 0.85,
"tire_degradation_index": 0.72,
"ers_charge": 0.78,
"fuel_optimization_score": 0.82,
"driver_consistency": 0.88,
"weather_impact": "low"
}
```
**Field constraints:**
- All numeric fields: 0.0 to 1.0 (float)
- `weather_impact`: Must be "low", "medium", or "high" (string literal)
- `lap`: Integer > 0
## Benefits of Webhook Push Model
**Real-time** - AI receives data immediately as enrichment completes
**Efficient** - No polling overhead
**Decoupled** - Services operate independently
**Resilient** - Buffer allows multiple strategy requests from same dataset
**Automatic** - Brainstorm/analyze use buffer when no telemetry provided
## Verification Commands
### 1. Check webhook endpoint is working
```bash
curl -X POST http://localhost:9000/api/ingest/enriched \
-H "Content-Type: application/json" \
-d '{
"lap": 27,
"aero_efficiency": 0.85,
"tire_degradation_index": 0.72,
"ers_charge": 0.78,
"fuel_optimization_score": 0.82,
"driver_consistency": 0.88,
"weather_impact": "low"
}'
```
Expected response:
```json
{"status": "received", "lap": 27, "buffer_size": 1}
```
### 2. Check logs for buffer usage
When you call brainstorm/analyze, look for:
```
INFO - Using N telemetry records from webhook buffer
```
If buffer is empty:
```
INFO - No telemetry in buffer, fetching from enrichment service...
```
## Next Steps
1.**Webhook tested** - Successfully pushed 5 records
2. ⏭️ **Configure enrichment service** - Add NEXT_STAGE_CALLBACK_URL
3. ⏭️ **Test end-to-end** - Run enrichment → webhook → brainstorm
4. ⏭️ **Monitor logs** - Verify buffer usage in production
---
**Files created:**
- `test_webhook_push.py` - Webhook testing tool
- `test_buffer_usage.py` - Buffer verification tool
- `WEBHOOK_INTEGRATION.md` - Complete integration guide
- This summary
**Code modified:**
- `main.py` - Enhanced brainstorm/analyze to prioritize webhook buffer
- Both endpoints now check: request → buffer → fetch → error
**Status:** ✅ Webhook push model fully implemented and tested!

View File

@@ -2,12 +2,15 @@
AI Intelligence Layer - FastAPI Application
Port: 9000
Provides F1 race strategy generation and analysis using Gemini AI.
Supports WebSocket connections from Pi for bidirectional control.
"""
from fastapi import FastAPI, HTTPException, status
from fastapi import FastAPI, HTTPException, status, WebSocket, WebSocketDisconnect
from fastapi.middleware.cors import CORSMiddleware
from contextlib import asynccontextmanager
import logging
from typing import Dict, Any
import asyncio
import random
from typing import Dict, Any, List
from config import get_settings
from models.input_models import (
@@ -41,6 +44,37 @@ strategy_generator: StrategyGenerator = None
# strategy_analyzer: StrategyAnalyzer = None # Disabled - not using analysis
telemetry_client: TelemetryClient = None
current_race_context: RaceContext = None # Store race context globally
last_control_command: Dict[str, int] = {"brake_bias": 5, "differential_slip": 5} # Store last command
# WebSocket connection manager
class ConnectionManager:
"""Manages WebSocket connections from Pi clients."""
def __init__(self):
self.active_connections: List[WebSocket] = []
async def connect(self, websocket: WebSocket):
await websocket.accept()
self.active_connections.append(websocket)
logger.info(f"Pi client connected. Total connections: {len(self.active_connections)}")
def disconnect(self, websocket: WebSocket):
self.active_connections.remove(websocket)
logger.info(f"Pi client disconnected. Total connections: {len(self.active_connections)}")
async def send_control_command(self, websocket: WebSocket, command: Dict[str, Any]):
"""Send control command to specific Pi client."""
await websocket.send_json(command)
async def broadcast_control_command(self, command: Dict[str, Any]):
"""Broadcast control command to all connected Pi clients."""
for connection in self.active_connections:
try:
await connection.send_json(command)
except Exception as e:
logger.error(f"Error broadcasting to client: {e}")
websocket_manager = ConnectionManager()
@asynccontextmanager
@@ -263,6 +297,248 @@ async def analyze_strategies(request: AnalyzeRequest):
"""
@app.websocket("/ws/pi")
async def websocket_pi_endpoint(websocket: WebSocket):
"""
WebSocket endpoint for Raspberry Pi clients.
Flow:
1. Pi connects and streams lap telemetry via WebSocket
2. AI layer processes telemetry and generates strategies
3. AI layer pushes control commands back to Pi (brake_bias, differential_slip)
"""
global current_race_context, last_control_command
await websocket_manager.connect(websocket)
# Clear telemetry buffer for fresh connection
# This ensures lap counting starts from scratch for each Pi session
telemetry_buffer.clear()
# Reset last control command to neutral for new session
last_control_command = {"brake_bias": 5, "differential_slip": 5}
logger.info("[WebSocket] Telemetry buffer cleared for new connection")
try:
# Send initial welcome message
await websocket.send_json({
"type": "connection_established",
"message": "Connected to AI Intelligence Layer",
"status": "ready",
"buffer_cleared": True
})
# Main message loop
while True:
# Receive telemetry from Pi
data = await websocket.receive_json()
message_type = data.get("type", "telemetry")
if message_type == "telemetry":
# Process incoming lap telemetry
lap_number = data.get("lap_number", 0)
# Store in buffer (convert to EnrichedTelemetryWebhook format)
# Note: This assumes data is already enriched. If raw, route through enrichment first.
enriched = data.get("enriched_telemetry")
race_context_data = data.get("race_context")
if enriched and race_context_data:
try:
# Parse enriched telemetry
enriched_obj = EnrichedTelemetryWebhook(**enriched)
telemetry_buffer.add(enriched_obj)
# Update race context
current_race_context = RaceContext(**race_context_data)
# Auto-generate strategies if we have enough data
buffer_data = telemetry_buffer.get_latest(limit=10)
if len(buffer_data) >= 3:
logger.info(f"\n{'='*60}")
logger.info(f"LAP {lap_number} - GENERATING STRATEGY")
logger.info(f"{'='*60}")
# Send immediate acknowledgment while processing
# Use last known control values instead of resetting to neutral
await websocket.send_json({
"type": "control_command",
"lap": lap_number,
"brake_bias": last_control_command["brake_bias"],
"differential_slip": last_control_command["differential_slip"],
"message": "Processing strategies (maintaining previous settings)..."
})
# Generate strategies (this is the slow part)
try:
response = await strategy_generator.generate(
enriched_telemetry=buffer_data,
race_context=current_race_context
)
# Extract top strategy (first one)
top_strategy = response.strategies[0] if response.strategies else None
# Generate control commands based on strategy
control_command = generate_control_command(
lap_number=lap_number,
strategy=top_strategy,
enriched_telemetry=enriched_obj,
race_context=current_race_context
)
# Update global last command
last_control_command = {
"brake_bias": control_command["brake_bias"],
"differential_slip": control_command["differential_slip"]
}
# Send updated control command with strategies
await websocket.send_json({
"type": "control_command_update",
"lap": lap_number,
"brake_bias": control_command["brake_bias"],
"differential_slip": control_command["differential_slip"],
"strategy_name": top_strategy.strategy_name if top_strategy else "N/A",
"total_strategies": len(response.strategies),
"reasoning": control_command.get("reasoning", "")
})
logger.info(f"{'='*60}\n")
except Exception as e:
logger.error(f"[WebSocket] Strategy generation failed: {e}")
# Send error but keep neutral controls
await websocket.send_json({
"type": "error",
"lap": lap_number,
"message": f"Strategy generation failed: {str(e)}"
})
else:
# Not enough data yet, send neutral command
await websocket.send_json({
"type": "control_command",
"lap": lap_number,
"brake_bias": 5, # Neutral
"differential_slip": 5, # Neutral
"message": f"Collecting data ({len(buffer_data)}/3 laps)"
})
except Exception as e:
logger.error(f"[WebSocket] Error processing telemetry: {e}")
await websocket.send_json({
"type": "error",
"message": str(e)
})
else:
logger.warning(f"[WebSocket] Received incomplete data from Pi")
elif message_type == "ping":
# Respond to ping
await websocket.send_json({"type": "pong"})
elif message_type == "disconnect":
# Graceful disconnect
logger.info("[WebSocket] Pi requested disconnect")
break
except WebSocketDisconnect:
logger.info("[WebSocket] Pi client disconnected")
except Exception as e:
logger.error(f"[WebSocket] Unexpected error: {e}")
finally:
websocket_manager.disconnect(websocket)
# Clear buffer when connection closes to ensure fresh start for next connection
telemetry_buffer.clear()
logger.info("[WebSocket] Telemetry buffer cleared on disconnect")
def generate_control_command(
lap_number: int,
strategy: Any,
enriched_telemetry: EnrichedTelemetryWebhook,
race_context: RaceContext
) -> Dict[str, Any]:
"""
Generate control commands for Pi based on strategy and telemetry.
Returns brake_bias and differential_slip values (0-10) with reasoning.
Logic:
- Brake bias: Adjust based on tire degradation (higher deg = more rear bias)
- Differential slip: Adjust based on pace trend and tire cliff risk
"""
# Default neutral values
brake_bias = 5
differential_slip = 5
reasoning_parts = []
# Adjust brake bias based on tire degradation
if enriched_telemetry.tire_degradation_rate > 0.7:
# High degradation: shift bias to rear (protect fronts)
brake_bias = 7
reasoning_parts.append(f"High tire degradation ({enriched_telemetry.tire_degradation_rate:.2f}) → Brake bias 7 (rear) to protect fronts")
elif enriched_telemetry.tire_degradation_rate > 0.4:
# Moderate degradation: slight rear bias
brake_bias = 6
reasoning_parts.append(f"Moderate tire degradation ({enriched_telemetry.tire_degradation_rate:.2f}) → Brake bias 6 (slight rear)")
elif enriched_telemetry.tire_degradation_rate < 0.2:
# Fresh tires: can use front bias for better turn-in
brake_bias = 4
reasoning_parts.append(f"Fresh tires ({enriched_telemetry.tire_degradation_rate:.2f}) → Brake bias 4 (front) for better turn-in")
else:
reasoning_parts.append(f"Normal tire degradation ({enriched_telemetry.tire_degradation_rate:.2f}) → Brake bias 5 (neutral)")
# Adjust differential slip based on pace and tire cliff risk
if enriched_telemetry.tire_cliff_risk > 0.7:
# High cliff risk: increase slip for gentler tire treatment
differential_slip = 7
reasoning_parts.append(f"High tire cliff risk ({enriched_telemetry.tire_cliff_risk:.2f}) → Diff slip 7 (gentle tire treatment)")
elif enriched_telemetry.pace_trend == "declining":
# Pace declining: moderate slip increase
differential_slip = 6
reasoning_parts.append(f"Pace declining → Diff slip 6 (preserve performance)")
elif enriched_telemetry.pace_trend == "improving":
# Pace improving: can be aggressive, lower slip
differential_slip = 4
reasoning_parts.append(f"Pace improving → Diff slip 4 (aggressive, lower slip)")
else:
reasoning_parts.append(f"Pace stable → Diff slip 5 (neutral)")
# Check if within pit window
pit_window = enriched_telemetry.optimal_pit_window
if pit_window and pit_window[0] <= lap_number <= pit_window[1]:
# In pit window: conservative settings to preserve tires
old_brake = brake_bias
old_diff = differential_slip
brake_bias = min(brake_bias + 1, 10)
differential_slip = min(differential_slip + 1, 10)
reasoning_parts.append(f"In pit window (laps {pit_window[0]}-{pit_window[1]}) → Conservative: brake {old_brake}{brake_bias}, diff {old_diff}{differential_slip}")
# Format reasoning for terminal output
reasoning_text = "\n".join(f"{part}" for part in reasoning_parts)
# Print reasoning to terminal
logger.info(f"CONTROL DECISION REASONING:")
logger.info(reasoning_text)
logger.info(f"FINAL COMMANDS: Brake Bias = {brake_bias}, Differential Slip = {differential_slip}")
# Also include strategy info if available
if strategy:
logger.info(f"TOP STRATEGY: {strategy.strategy_name}")
logger.info(f" Risk Level: {strategy.risk_level}")
logger.info(f" Description: {strategy.brief_description}")
return {
"brake_bias": brake_bias,
"differential_slip": differential_slip,
"reasoning": reasoning_text
}
if __name__ == "__main__":
import uvicorn
settings = get_settings()
@@ -273,3 +549,4 @@ if __name__ == "__main__":
reload=True
)

View File

@@ -7,14 +7,13 @@ from typing import List, Literal, Optional
class EnrichedTelemetryWebhook(BaseModel):
"""Single lap of enriched telemetry data from HPC enrichment module."""
"""Single lap of enriched telemetry data from HPC enrichment module (lap-level)."""
lap: int = Field(..., description="Lap number")
aero_efficiency: float = Field(..., ge=0.0, le=1.0, description="Aerodynamic efficiency (0..1, higher is better)")
tire_degradation_index: float = Field(..., ge=0.0, le=1.0, description="Tire wear (0..1, higher is worse)")
ers_charge: float = Field(..., ge=0.0, le=1.0, description="Energy recovery system charge level")
fuel_optimization_score: float = Field(..., ge=0.0, le=1.0, description="Fuel efficiency score")
driver_consistency: float = Field(..., ge=0.0, le=1.0, description="Lap-to-lap consistency")
weather_impact: Literal["low", "medium", "high"] = Field(..., description="Weather effect severity")
tire_degradation_rate: float = Field(..., ge=0.0, le=1.0, description="Tire degradation rate (0..1, higher is worse)")
pace_trend: Literal["improving", "stable", "declining"] = Field(..., description="Pace trend over recent laps")
tire_cliff_risk: float = Field(..., ge=0.0, le=1.0, description="Probability of tire performance cliff (0..1)")
optimal_pit_window: List[int] = Field(..., description="Recommended pit stop lap window [start, end]")
performance_delta: float = Field(..., description="Lap time delta vs baseline (negative = slower)")
class RaceInfo(BaseModel):

View File

@@ -11,12 +11,11 @@ def build_brainstorm_prompt_fast(
enriched_telemetry: List[EnrichedTelemetryWebhook],
race_context: RaceContext
) -> str:
"""Build a faster, more concise prompt for quicker responses."""
"""Build a faster, more concise prompt for quicker responses (lap-level data)."""
settings = get_settings()
count = settings.strategy_count
latest = max(enriched_telemetry, key=lambda x: x.lap)
tire_rate = TelemetryAnalyzer.calculate_tire_degradation_rate(enriched_telemetry)
tire_cliff = TelemetryAnalyzer.project_tire_cliff(enriched_telemetry, race_context.race_info.current_lap)
pit_window = latest.optimal_pit_window
if count == 1:
# Ultra-fast mode: just generate 1 strategy
@@ -24,7 +23,7 @@ def build_brainstorm_prompt_fast(
CURRENT: Lap {race_context.race_info.current_lap}/{race_context.race_info.total_laps}, P{race_context.driver_state.current_position}, {race_context.driver_state.current_tire_compound} tires ({race_context.driver_state.tire_age_laps} laps old)
TELEMETRY: Aero {latest.aero_efficiency:.2f}, Tire deg {latest.tire_degradation_index:.2f} (cliff lap {tire_cliff}), ERS {latest.ers_charge:.2f}
TELEMETRY: Tire deg rate {latest.tire_degradation_rate:.2f}, Cliff risk {latest.tire_cliff_risk:.2f}, Pace {latest.pace_trend}, Pit window laps {pit_window[0]}-{pit_window[1]}
Generate 1 optimal strategy. Min 2 tire compounds required.
@@ -36,17 +35,17 @@ JSON: {{"strategies": [{{"strategy_id": 1, "strategy_name": "name", "stop_count"
CURRENT: Lap {race_context.race_info.current_lap}/{race_context.race_info.total_laps}, P{race_context.driver_state.current_position}, {race_context.driver_state.current_tire_compound} tires ({race_context.driver_state.tire_age_laps} laps old)
TELEMETRY: Aero {latest.aero_efficiency:.2f}, Tire deg {latest.tire_degradation_index:.2f} (cliff lap {tire_cliff}), ERS {latest.ers_charge:.2f}, Fuel {latest.fuel_optimization_score:.2f}
TELEMETRY: Tire deg {latest.tire_degradation_rate:.2f}, Cliff risk {latest.tire_cliff_risk:.2f}, Pace {latest.pace_trend}, Delta {latest.performance_delta:+.2f}s, Pit window {pit_window[0]}-{pit_window[1]}
Generate {count} strategies: conservative (1-stop), standard (1-2 stop), aggressive (undercut). Min 2 tire compounds each.
JSON: {{"strategies": [{{"strategy_id": 1, "strategy_name": "Conservative Stay Out", "stop_count": 1, "pit_laps": [35], "tire_sequence": ["medium", "hard"], "brief_description": "extend current stint then hard tires to end", "risk_level": "low", "key_assumption": "tire cliff at lap {tire_cliff}"}}]}}"""
JSON: {{"strategies": [{{"strategy_id": 1, "strategy_name": "Conservative Stay Out", "stop_count": 1, "pit_laps": [35], "tire_sequence": ["medium", "hard"], "brief_description": "extend current stint then hard tires to end", "risk_level": "low", "key_assumption": "tire cliff risk stays below 0.7"}}]}}"""
return f"""Generate {count} F1 race strategies for {race_context.driver_state.driver_name} at {race_context.race_info.track_name}.
CURRENT: Lap {race_context.race_info.current_lap}/{race_context.race_info.total_laps}, P{race_context.driver_state.current_position}, {race_context.driver_state.current_tire_compound} tires ({race_context.driver_state.tire_age_laps} laps old)
TELEMETRY: Aero {latest.aero_efficiency:.2f}, Tire deg {latest.tire_degradation_index:.2f} (rate {tire_rate:.3f}/lap, cliff lap {tire_cliff}), ERS {latest.ers_charge:.2f}, Fuel {latest.fuel_optimization_score:.2f}, Consistency {latest.driver_consistency:.2f}
TELEMETRY: Tire deg rate {latest.tire_degradation_rate:.2f}, Cliff risk {latest.tire_cliff_risk:.2f}, Pace {latest.pace_trend}, Performance delta {latest.performance_delta:+.2f}s, Pit window laps {pit_window[0]}-{pit_window[1]}
Generate {count} diverse strategies. Min 2 compounds.
@@ -67,27 +66,19 @@ def build_brainstorm_prompt(
Returns:
Formatted prompt string
"""
# Generate telemetry summary
telemetry_summary = TelemetryAnalyzer.generate_telemetry_summary(enriched_telemetry)
# Get latest telemetry
latest = max(enriched_telemetry, key=lambda x: x.lap)
# Calculate key metrics
tire_rate = TelemetryAnalyzer.calculate_tire_degradation_rate(enriched_telemetry)
tire_cliff_lap = TelemetryAnalyzer.project_tire_cliff(
enriched_telemetry,
race_context.race_info.current_lap
)
# Format telemetry data
# Format telemetry data (lap-level)
telemetry_data = []
for t in sorted(enriched_telemetry, key=lambda x: x.lap, reverse=True)[:10]:
telemetry_data.append({
"lap": t.lap,
"aero_efficiency": round(t.aero_efficiency, 3),
"tire_degradation_index": round(t.tire_degradation_index, 3),
"ers_charge": round(t.ers_charge, 3),
"fuel_optimization_score": round(t.fuel_optimization_score, 3),
"driver_consistency": round(t.driver_consistency, 3),
"weather_impact": t.weather_impact
"tire_degradation_rate": round(t.tire_degradation_rate, 3),
"pace_trend": t.pace_trend,
"tire_cliff_risk": round(t.tire_cliff_risk, 3),
"optimal_pit_window": t.optimal_pit_window,
"performance_delta": round(t.performance_delta, 2)
})
# Format competitors
@@ -101,15 +92,14 @@ def build_brainstorm_prompt(
"gap_seconds": round(c.gap_seconds, 1)
})
prompt = f"""You are an expert F1 strategist. Generate 20 diverse race strategies.
prompt = f"""You are an expert F1 strategist. Generate 20 diverse race strategies based on lap-level telemetry.
TELEMETRY METRICS:
- aero_efficiency: <0.6 problem, >0.8 optimal
- tire_degradation_index: >0.7 degrading, >0.85 cliff
- ers_charge: >0.7 attack, <0.3 depleted
- fuel_optimization_score: <0.7 save fuel
- driver_consistency: <0.75 risky
- weather_impact: severity level
LAP-LEVEL TELEMETRY METRICS:
- tire_degradation_rate: 0-1 (higher = worse tire wear)
- tire_cliff_risk: 0-1 (probability of hitting tire cliff)
- pace_trend: "improving", "stable", or "declining"
- optimal_pit_window: [start_lap, end_lap] recommended pit range
- performance_delta: seconds vs baseline (negative = slower)
RACE STATE:
Track: {race_context.race_info.track_name}
@@ -129,12 +119,11 @@ COMPETITORS:
ENRICHED TELEMETRY (Last {len(telemetry_data)} laps, newest first):
{telemetry_data}
TELEMETRY ANALYSIS:
{telemetry_summary}
KEY INSIGHTS:
- Tire degradation rate: {tire_rate:.3f} per lap
- Projected tire cliff: Lap {tire_cliff_lap}
- Latest tire degradation rate: {latest.tire_degradation_rate:.3f}
- Latest tire cliff risk: {latest.tire_cliff_risk:.3f}
- Latest pace trend: {latest.pace_trend}
- Optimal pit window: Laps {latest.optimal_pit_window[0]}-{latest.optimal_pit_window[1]}
- Laps remaining: {race_context.race_info.total_laps - race_context.race_info.current_lap}
TASK: Generate exactly 20 diverse strategies.
@@ -144,7 +133,7 @@ DIVERSITY: Conservative (1-stop), Standard (balanced), Aggressive (undercut), Re
RULES:
- Pit laps: {race_context.race_info.current_lap + 1} to {race_context.race_info.total_laps - 1}
- Min 2 tire compounds (F1 rule)
- Time pits before tire cliff (projected lap {tire_cliff_lap})
- Consider optimal pit window and tire cliff risk
For each strategy provide:
- strategy_id: 1-20

View File

@@ -40,17 +40,14 @@ class StrategyGenerator:
Raises:
Exception: If generation fails
"""
logger.info("Starting strategy brainstorming...")
logger.info(f"Using {len(enriched_telemetry)} telemetry records")
logger.info(f"Generating strategies using {len(enriched_telemetry)} laps of telemetry")
# Build prompt (use fast mode if enabled)
if self.settings.fast_mode:
from prompts.brainstorm_prompt import build_brainstorm_prompt_fast
prompt = build_brainstorm_prompt_fast(enriched_telemetry, race_context)
logger.info("Using FAST MODE prompt")
else:
prompt = build_brainstorm_prompt(enriched_telemetry, race_context)
logger.debug(f"Prompt length: {len(prompt)} chars")
# Generate with Gemini (high temperature for creativity)
response_data = await self.gemini_client.generate_json(
@@ -64,7 +61,6 @@ class StrategyGenerator:
raise Exception("Response missing 'strategies' field")
strategies_data = response_data["strategies"]
logger.info(f"Received {len(strategies_data)} strategies from Gemini")
# Validate and parse strategies
strategies = []
@@ -73,15 +69,12 @@ class StrategyGenerator:
strategy = Strategy(**s_data)
strategies.append(strategy)
except Exception as e:
logger.warning(f"Failed to parse strategy {s_data.get('strategy_id', '?')}: {e}")
logger.warning(f"Failed to parse strategy: {e}")
logger.info(f"Successfully parsed {len(strategies)} strategies")
logger.info(f"Generated {len(strategies)} valid strategies")
# Validate strategies
valid_strategies = StrategyValidator.validate_strategies(strategies, race_context)
if len(valid_strategies) < 10:
logger.warning(f"Only {len(valid_strategies)} valid strategies (expected 20)")
# Return response
return BrainstormResponse(strategies=valid_strategies)

View File

@@ -0,0 +1,45 @@
#!/bin/bash
# Start AI Intelligence Layer
# Usage: ./start_ai_layer.sh
cd "$(dirname "$0")"
echo "Starting AI Intelligence Layer on port 9000..."
echo "Logs will be written to /tmp/ai_layer.log"
echo ""
# Kill any existing process on port 9000
PID=$(lsof -ti:9000)
if [ ! -z "$PID" ]; then
echo "Killing existing process on port 9000 (PID: $PID)"
kill -9 $PID 2>/dev/null
sleep 1
fi
# Start the AI layer
python3 main.py > /tmp/ai_layer.log 2>&1 &
AI_PID=$!
echo "AI Layer started with PID: $AI_PID"
echo ""
# Wait for startup
echo "Waiting for server to start..."
sleep 3
# Check if it's running
if lsof -Pi :9000 -sTCP:LISTEN -t >/dev/null ; then
echo "✓ AI Intelligence Layer is running on port 9000"
echo ""
echo "Health check:"
curl -s http://localhost:9000/api/health | python3 -m json.tool 2>/dev/null || echo " (waiting for full startup...)"
echo ""
echo "WebSocket endpoint: ws://localhost:9000/ws/pi"
echo ""
echo "To stop: kill $AI_PID"
echo "To view logs: tail -f /tmp/ai_layer.log"
else
echo "✗ Failed to start AI Intelligence Layer"
echo "Check logs: tail /tmp/ai_layer.log"
exit 1
fi

View File

@@ -80,133 +80,25 @@ class StrategyValidator:
class TelemetryAnalyzer:
"""Analyzes enriched telemetry data to extract trends and insights."""
"""Analyzes enriched lap-level telemetry data to extract trends and insights."""
@staticmethod
def calculate_tire_degradation_rate(telemetry: List[EnrichedTelemetryWebhook]) -> float:
"""
Calculate tire degradation rate per lap.
Calculate tire degradation rate per lap (using lap-level data).
Args:
telemetry: List of enriched telemetry records
Returns:
Rate of tire degradation per lap (0.0 to 1.0)
"""
if len(telemetry) < 2:
return 0.0
# Sort by lap (ascending)
sorted_telemetry = sorted(telemetry, key=lambda x: x.lap)
# Calculate rate of change
first = sorted_telemetry[0]
last = sorted_telemetry[-1]
lap_diff = last.lap - first.lap
if lap_diff == 0:
return 0.0
deg_diff = last.tire_degradation_index - first.tire_degradation_index
rate = deg_diff / lap_diff
return max(0.0, rate) # Ensure non-negative
@staticmethod
def calculate_aero_efficiency_avg(telemetry: List[EnrichedTelemetryWebhook]) -> float:
"""
Calculate average aero efficiency.
Args:
telemetry: List of enriched telemetry records
Returns:
Average aero efficiency (0.0 to 1.0)
Latest tire degradation rate (0.0 to 1.0)
"""
if not telemetry:
return 0.0
total = sum(t.aero_efficiency for t in telemetry)
return total / len(telemetry)
@staticmethod
def analyze_ers_pattern(telemetry: List[EnrichedTelemetryWebhook]) -> str:
"""
Analyze ERS charge pattern.
Args:
telemetry: List of enriched telemetry records
Returns:
Pattern description: "charging", "stable", "depleting"
"""
if len(telemetry) < 2:
return "stable"
# Sort by lap
sorted_telemetry = sorted(telemetry, key=lambda x: x.lap)
# Look at recent trend
recent = sorted_telemetry[-3:] if len(sorted_telemetry) >= 3 else sorted_telemetry
if len(recent) < 2:
return "stable"
# Calculate average change
total_change = 0.0
for i in range(1, len(recent)):
total_change += recent[i].ers_charge - recent[i-1].ers_charge
avg_change = total_change / (len(recent) - 1)
if avg_change > 0.05:
return "charging"
elif avg_change < -0.05:
return "depleting"
else:
return "stable"
@staticmethod
def is_fuel_critical(telemetry: List[EnrichedTelemetryWebhook]) -> bool:
"""
Check if fuel situation is critical.
Args:
telemetry: List of enriched telemetry records
Returns:
True if fuel optimization score is below 0.7
"""
if not telemetry:
return False
# Check most recent telemetry
# Use latest tire degradation rate from enrichment
latest = max(telemetry, key=lambda x: x.lap)
return latest.fuel_optimization_score < 0.7
@staticmethod
def assess_driver_form(telemetry: List[EnrichedTelemetryWebhook]) -> str:
"""
Assess driver consistency form.
Args:
telemetry: List of enriched telemetry records
Returns:
Form description: "excellent", "good", "inconsistent"
"""
if not telemetry:
return "good"
# Get average consistency
avg_consistency = sum(t.driver_consistency for t in telemetry) / len(telemetry)
if avg_consistency >= 0.85:
return "excellent"
elif avg_consistency >= 0.75:
return "good"
else:
return "inconsistent"
return latest.tire_degradation_rate
@staticmethod
def project_tire_cliff(
@@ -214,65 +106,27 @@ class TelemetryAnalyzer:
current_lap: int
) -> int:
"""
Project when tire degradation will hit 0.85 (performance cliff).
Project when tire cliff will be reached (using lap-level data).
Args:
telemetry: List of enriched telemetry records
current_lap: Current lap number
Returns:
Projected lap number when cliff will be reached
Estimated lap number when cliff will be reached
"""
if not telemetry:
return current_lap + 20 # Default assumption
# Get current degradation and rate
# Use tire cliff risk from enrichment
latest = max(telemetry, key=lambda x: x.lap)
current_deg = latest.tire_degradation_index
cliff_risk = latest.tire_cliff_risk
if current_deg >= 0.85:
return current_lap # Already at cliff
# Calculate rate
rate = TelemetryAnalyzer.calculate_tire_degradation_rate(telemetry)
if rate <= 0:
return current_lap + 50 # Not degrading, far future
# Project laps until 0.85
laps_until_cliff = (0.85 - current_deg) / rate
projected_lap = current_lap + int(laps_until_cliff)
return projected_lap
@staticmethod
def generate_telemetry_summary(telemetry: List[EnrichedTelemetryWebhook]) -> str:
"""
Generate human-readable summary of telemetry trends.
Args:
telemetry: List of enriched telemetry records
Returns:
Summary string
"""
if not telemetry:
return "No telemetry data available."
tire_rate = TelemetryAnalyzer.calculate_tire_degradation_rate(telemetry)
aero_avg = TelemetryAnalyzer.calculate_aero_efficiency_avg(telemetry)
ers_pattern = TelemetryAnalyzer.analyze_ers_pattern(telemetry)
fuel_critical = TelemetryAnalyzer.is_fuel_critical(telemetry)
driver_form = TelemetryAnalyzer.assess_driver_form(telemetry)
latest = max(telemetry, key=lambda x: x.lap)
summary = f"""Telemetry Analysis (Last {len(telemetry)} laps):
- Tire degradation: {latest.tire_degradation_index:.2f} index, increasing at {tire_rate:.3f}/lap
- Aero efficiency: {aero_avg:.2f} average
- ERS: {latest.ers_charge:.2f} charge, {ers_pattern}
- Fuel: {latest.fuel_optimization_score:.2f} score, {'CRITICAL' if fuel_critical else 'OK'}
- Driver form: {driver_form} ({latest.driver_consistency:.2f} consistency)
- Weather impact: {latest.weather_impact}"""
return summary
if cliff_risk >= 0.7:
return current_lap + 2 # Imminent cliff
elif cliff_risk >= 0.4:
return current_lap + 5 # Approaching cliff
else:
# Estimate based on optimal pit window
pit_window = latest.optimal_pit_window
return pit_window[1] if pit_window else current_lap + 15