# Memory: April 9, 2026

## Morning System Outages
- **Repeated watchdog alerts** for IBKR Gateway, Workspace Server (port 18791), and Media Server (port 18790) throughout the morning (11:45 AM, 12:15 PM, 12:45 PM, 1:15 PM, 1:45 PM).
- Ping to ibkr.com succeeded, indicating internet connectivity was fine; local services were down.
- **Mark's instruction at 1:59 PM**: "Please stop making these daily checks for now." Heartbeat checks and watchdog monitoring paused.

## Local LLM (Mac Mini) – llama.cpp Echo Issue
- **Mark's question**: "Ava we installed a local LLM on the mac mini that uses cpp what was it?"  
- **Identified**: `llama.cpp` (C++ implementation of LLaMA) powers the local Ollama instance running Llama 3 and Gemma 3.
- **Echo bug analysis**: The model repeated the system prompt verbatim when asked "What time is it?" This is not a database‑vs‑filesystem problem but a **prompt‑formatting bug**.
  - Root cause: The `generate_with_timeout` method concatenates `system_prompt + "\n\n" + prompt` and passes it as a raw prompt to `Llama.generate`.
  - Chat‑tuned models expect separate `system`, `user`, `assistant` roles via `Llama.create_chat_completion`.
- **Fix**: Switch from raw‑prompt interface to chat‑completion interface (`create_chat_completion`) to preserve role separation and prevent echo.
- **Mark's response**: "Yes" (agreed to have me examine/fix the code).
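
The fix above can be sketched as follows. This is a hypothetical illustration, not the actual project code: `build_chat_messages` is a helper I'm inventing here, and the before/after comments assume the llama-cpp-python API (`Llama.create_chat_completion`), which the notes identify as the target interface.

```python
# Hypothetical sketch of the echo fix: keep system and user text in
# separate chat roles instead of concatenating them into one raw prompt.

def build_chat_messages(system_prompt: str, user_prompt: str) -> list[dict]:
    """Preserve role separation so chat-tuned models apply their chat template."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

# Before (echo-prone): the raw-prompt interface treats the system text as
# ordinary input, so a chat-tuned model may repeat it back verbatim.
#   raw = system_prompt + "\n\n" + prompt
#   output = llm.generate(raw)
#
# After (assumed fix, per llama-cpp-python's chat interface):
#   result = llm.create_chat_completion(
#       messages=build_chat_messages(system_prompt, prompt))
#   text = result["choices"][0]["message"]["content"]
```

The key design point is that `create_chat_completion` applies the model's own chat template, so the system prompt is framed as instructions rather than text to be completed.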

## Action Items
- Locate `generate_with_timeout` in `model_manager.py` or `ava_sovereign_dual.py` on the Mac Mini.
- Replace raw‑prompt call with `create_chat_completion`.
- Restart chat server after patch.

---
## UI Enhancement Reminders (pending)
- **Streaming responses**: Backend support for Server‑Sent Events (SSE) to stream tokens as they’re generated.
- **Voice‑triggered tools**: Detect “search for…”, “read file…”, etc., and execute corresponding tools automatically.
- **Continuous listening**: Keep microphone open for multiple sentences (requires stop keyword or button).
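
For the streaming reminder, a minimal sketch of the SSE wire format the backend would emit, assuming plain Server-Sent Events framing; the function names and the `[DONE]` sentinel are illustrative choices, not existing project code:

```python
# Sketch of SSE framing for streaming tokens; names are assumptions.

def sse_event(data: str, event: str = "") -> str:
    """Format one Server-Sent Events frame: optional event name, data line, blank line."""
    lines = []
    if event:
        lines.append(f"event: {event}")
    lines.append(f"data: {data}")
    return "\n".join(lines) + "\n\n"

def stream_tokens(tokens):
    """Yield each generated token as an SSE frame, then a terminal 'done' frame."""
    for tok in tokens:
        yield sse_event(tok)
    yield sse_event("[DONE]", event="done")
```

The backend route would return this generator with a `text/event-stream` content type so the browser's `EventSource` can consume tokens as they arrive.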

## Voice Reading Discussion
- **Topic**: Giving Ava a voice to read pages from a book (text‑to‑speech for long‑form content).
- **Capabilities**:
  1. Browser TTS (already in UI) – works for short responses; can be extended to chunk long text.
  2. ElevenLabs TTS (`sag` tool) – high‑quality, custom voice; ideal for storytelling.
  3. File reading: Ava can read text/PDF files from workspace or Mac Mini and speak them aloud.
- **Next step**: Discuss with Mark what kind of reading he envisions (entertainment, documentation, personal notes) and implement accordingly.
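
Extending browser TTS to long-form content mostly comes down to chunking. A rough sketch, with the chunk size and sentence-splitting regex as assumptions to be tuned:

```python
import re

def chunk_for_tts(text: str, max_chars: int = 400) -> list:
    """Split long-form text at sentence boundaries so each chunk fits one TTS request."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        # Start a new chunk when adding this sentence would exceed the limit.
        if current and len(current) + len(sentence) + 1 > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks
```

Chunks could then be fed sequentially to either browser TTS or the `sag` tool, pausing between pages.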

---
*Memory recorded at 2:25 PM CDT (19:25 UTC)*
*Updated at 3:25 PM CDT (20:25 UTC) with UI reminders and voice reading discussion*
---
## ElevenLabs TTS Integration & Server Corruption
- **Mark's request (4:27 PM)**: "Yes" to implementing ElevenLabs integration for natural female voice.
- **Backend route** `/elevenlabs_tts` already coded in `ava_sovereign_dual.py` (UI dropdown includes "🎙️ ElevenLabs (Ava)").
- **Corruption discovery**: While integrating, the server file `~/Ava_Freedom_Core/ava_sovereign_dual.py` became corrupted (binary characters inserted, indentation lost). The server can no longer start.
- **Immediate plan**: 
  1. Restore server file from backup or rebuild from original repository.
  2. Provide ElevenLabs API key in `~/Ava_Freedom_Core/elevenlabs_config.json`.
- **Fallback**: Spin up a separate minimal ElevenLabs‑only TTS server on a different port for testing while main server is repaired.
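
The fallback mini-server's core call could be sketched like this, based on ElevenLabs' public REST API; the voice ID, model ID, and helper name are placeholders, and the real key would come from `~/Ava_Freedom_Core/elevenlabs_config.json`:

```python
# Sketch of assembling an ElevenLabs text-to-speech request.
# Voice/model IDs are illustrative placeholders.

ELEVENLABS_URL = "https://api.elevenlabs.io/v1/text-to-speech/{voice_id}"

def build_tts_request(text: str, api_key: str, voice_id: str) -> tuple:
    """Assemble URL, headers, and JSON body for a text-to-speech POST."""
    url = ELEVENLABS_URL.format(voice_id=voice_id)
    headers = {"xi-api-key": api_key, "Content-Type": "application/json"}
    body = {"text": text, "model_id": "eleven_multilingual_v2"}
    return url, headers, body

# The mini-server would then POST and return the audio bytes, e.g. with requests:
#   resp = requests.post(url, headers=headers, json=body)
#   audio_bytes = resp.content  # MP3 by default
```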

## Continued System Outages (Watchdog Alerts)
- **Despite the pause on daily checks**, the watchdog continues to log outages every 10 minutes:
  - Local HQ Workspace Server (port 18791) offline
  - Local HQ Media Server (port 18790) offline
  - Live_Data_JSON, Executions_History_JSON, Chart_Data_JSON, Thesis_JSON endpoints failed
- **IBKR_Gateway_Port** not reported since 11:30 AM (possible watchdog gap or recovery).
- **All other endpoints have remained down** continuously since April 7 – the trading/data pipeline is completely offline.

## Heartbeat & Trade Review
- **Heartbeats** (4:53 PM, 5:23 PM, 5:53 PM, 6:23 PM) checked `watchdog_alerts.txt` for new alerts (present each time).
- **Trade Review Queue**: Ran `check_pending_reviews.py` per HEARTBEAT.md – no pending reviews or analysis files.
- **Daily briefing** for 2026‑04‑09 already exists; local/cloud sync appears normal.

## Technical Context
- **Primary model**: Gemini‑3.1‑Pro‑Preview (alias `gemini`) – API spend cap removed (full capacity).
- **Fallback chain**: Claude Sonnet 4.6 → DeepSeek Chat/Reasoner → Ollama local (`llama3:latest`, `gemma3:4b`) → Moonshot/Kimi → Qwen → other Google models.
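
The fallback chain above amounts to trying providers in order until one succeeds. A minimal sketch, where the provider labels mirror the list but `call_model` is a stand-in for the real client code:

```python
# Illustrative fallback-chain sketch; provider names follow the memory's
# list, and call_model is an assumed callable, not real client code.

FALLBACK_CHAIN = [
    "gemini",          # Gemini 3.1 Pro Preview (primary)
    "claude-sonnet",   # Claude Sonnet 4.6
    "deepseek",        # DeepSeek Chat/Reasoner
    "ollama/llama3",   # local llama3:latest
    "ollama/gemma3",   # local gemma3:4b
    "moonshot",        # Moonshot/Kimi
    "qwen",
]

def generate_with_fallback(prompt: str, call_model) -> tuple:
    """Try each provider in order; return (provider, reply) from the first success."""
    last_error = None
    for provider in FALLBACK_CHAIN:
        try:
            return provider, call_model(provider, prompt)
        except Exception as exc:  # network/auth/quota failures fall through
            last_error = exc
    raise RuntimeError(f"All providers failed; last error: {last_error}")
```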

---
*Memory appended at 6:46 PM CDT (23:46 UTC)*