# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## What This Is

LockInBro API — the FastAPI backend for an ADHD-aware productivity system. Runs on a DigitalOcean droplet (1GB RAM) behind nginx with SSL at `https://wahwa.com/api/v1`. PostgreSQL database `focusapp` on the same droplet.

## Design Doc (Source of Truth)

**Always consult `/home/devuser/.github/profile/README.md` before making architectural decisions.** This is the full technical design document. Keep it current by pulling and pushing, but avoid frequent churn; batch updates when possible.

## Related Repositories

- `/home/devuser/BlindMaster/blinds_express` — Previous Express.js project on the same droplet. Contains **server-side auth workflow** (Argon2 + JWT) used as scaffolding reference for this project's auth module.
- `/home/devuser/BlindMaster/blinds_flutter` — Flutter app with **app-side auth flow** being ported to iOS/SwiftUI.

## Stack

- **Python / FastAPI** on port 3000, served by uvicorn
- **PostgreSQL** on port 5432, database `focusapp`, `root` superuser
- **Auth:** Argon2id (email/password) + Apple Sign In + JWT (stateless)
- **AI:** Claude API or Gemini API (auto-selects based on which key is set), used for brain-dump parsing, VLM screenshot analysis, step planning, and context resume
- **Analytics:** Hex API (notebooks query Postgres directly)
- **nginx** reverse-proxies `wahwa.com` → `localhost:3000`, WebSocket support enabled, SSL via Certbot
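
The stateless-JWT half of the auth stack can be sketched with the standard library alone. This is only an illustration of the token flow; the real `middleware/auth.py` would use `argon2-cffi` for Argon2id hashing and a maintained JWT library (e.g. PyJWT), with the secret loaded from `.env`. All names below are illustrative, not the project's actual API.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"change-me"  # stands in for JWT_SECRET from .env


def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def encode_jwt(claims: dict) -> str:
    # "Stateless" means the server stores nothing: the signed claim set IS the session.
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = _b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"


def decode_jwt(token: str) -> dict:
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = _b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    pad = "=" * (-len(payload) % 4)
    claims = json.loads(base64.urlsafe_b64decode(payload + pad))
    if claims.get("exp", float("inf")) < time.time():
        raise ValueError("expired")
    return claims
```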

## Commands

```bash
# Activate the conda environment (must do first)
source ~/miniconda3/bin/activate && conda activate lockinbro

# Run the server
uvicorn app.main:app --host 0.0.0.0 --port 3000 --reload

# Install dependencies
pip install -r requirements.txt

# Export the environment after adding packages
conda env export --no-builds > environment.yml
pip freeze > requirements.txt

# Database migrations (requires a Postgres connection)
alembic upgrade head                 # apply migrations
alembic revision -m "description"    # create a new migration (manual SQL in upgrade/downgrade)
```
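
Since migrations use manual SQL in `upgrade`/`downgrade` rather than autogenerate, a new revision file might look like the skeleton below. This is a hypothetical example: the revision ids are filled in by `alembic revision`, and the table name and columns are only illustrative (loosely based on the steps-as-rows design described under Critical Design Decisions).

```python
"""create steps table"""

from alembic import op

# revision identifiers, written by `alembic revision`
revision = "abc123"        # placeholder
down_revision = None       # placeholder

def upgrade() -> None:
    # Manual SQL: each step is its own row so the VLM can update
    # statuses without read-modify-write races on a JSONB blob.
    op.execute("""
        CREATE TABLE steps (
            id          SERIAL PRIMARY KEY,
            task_id     INTEGER NOT NULL,
            title       TEXT NOT NULL,
            status      TEXT NOT NULL DEFAULT 'pending',
            created_at  TIMESTAMPTZ NOT NULL DEFAULT now()
        )
    """)

def downgrade() -> None:
    op.execute("DROP TABLE steps")
```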

## Architecture

### Project Structure

```
├── app/
│   ├── main.py              # FastAPI entry point
│   ├── config.py            # Settings from env vars
│   ├── middleware/auth.py   # JWT validation + Argon2 utils
│   ├── routers/             # auth, tasks, steps, sessions, distractions, analytics
│   ├── services/
│   │   ├── llm.py           # All Claude API calls + prompt templates
│   │   ├── hex_service.py   # Hex notebook trigger + poll
│   │   └── db.py            # asyncpg Postgres client
│   ├── models.py            # Pydantic request/response schemas
│   └── types.py
├── alembic/
├── requirements.txt
└── .env
```

### Key Data Flows

1. **Brain-dump parsing:** iOS sends raw text → `POST /tasks/brain-dump` → Claude extracts structured tasks → optionally `POST /tasks/{id}/plan` → Claude generates 5-15 min ADHD-friendly steps
2. **Distraction detection:** macOS sends screenshot (raw JPEG binary via multipart) + task context → `POST /distractions/analyze-screenshot` → Claude Vision analyzes → backend auto-updates step statuses + writes `checkpoint_note` → returns nudge if distracted (confidence > 0.7)
3. **Context resume:** `GET /sessions/{id}/resume` → uses `checkpoint_note` to generate hyper-specific "welcome back" card
4. **Analytics:** Backend writes events to Postgres → Hex notebooks query directly → results served via `/analytics/*` endpoints

### Critical Design Decisions

- **Steps are a separate table, not JSONB** — VLM updates individual step statuses every ~20s. Separate rows avoid read-modify-write races.
- **No dynamic step splitting** — Growing a task list mid-work causes ADHD decision paralysis. Use `checkpoint_note` for within-step progress instead.
- **Screenshots are never persisted** — base64 in-memory only, discarded after VLM response. Privacy-critical.
- **Step auto-update is a backend side-effect** — `/analyze-screenshot` applies step changes server-side before responding. Client doesn't make separate step-update calls.
- **1GB RAM constraint** — Limit concurrent VLM requests to 1-2 in-flight. FastAPI + uvicorn uses ~40-60MB.

### AI/LLM Guidelines

All AI calls go through `services/llm.py`. Key principles:

- Non-judgmental tone (never "you got distracted again")
- Concise, scannable outputs (ADHD working memory)
- All AI responses are structured JSON
- Every prompt includes full task + step context
- Graceful degradation if the Claude API is down
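
The degradation principle might be wrapped like this. `call_model` is a hypothetical stand-in for the real SDK call inside `llm.py`; the fallback payload wording is illustrative, chosen to match the non-judgmental tone rule.

```python
# Calm fallback returned when the LLM provider is unreachable.
FALLBACK_PLAN = {
    "steps": [],
    "message": "Planning is temporarily unavailable; your task was saved.",
}


def plan_steps_safely(call_model, task: dict) -> dict:
    """Return the model's structured JSON, or a neutral fallback if the API is down."""
    try:
        return call_model(task)
    except Exception:
        # Never surface a raw error to the ADHD-facing client;
        # degrade to a non-judgmental payload instead.
        return FALLBACK_PLAN
```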

### Environment Variables (.env)

```
DATABASE_URL=postgresql://devuser@/focusapp
JWT_SECRET=<secret>
ANTHROPIC_API_KEY=<key>            # set one of these two
GEMINI_API_KEY=<key>               # prefers Anthropic if both set
HEX_API_TOKEN=<token>
HEX_NB_DISTRACTIONS=<notebook_id>
HEX_NB_FOCUS_TRENDS=<notebook_id>
HEX_NB_WEEKLY_REPORT=<notebook_id>
```
```

## Development Notes

- Run Claude Code from a local machine over SSH, not on the droplet (saves RAM and disk)
- The droplet hostname is `blindmaster-ubuntu`
- PM2 has been killed and `blinds_express` stopped, so port 3000 is free for this project