Alina Lozovskaya committed
Commit 1acbcb3 · 1 Parent(s): 7054b54

Fix README

Files changed (1): README.md +3 -3
README.md CHANGED
@@ -7,7 +7,7 @@ Conversational demo for the Reachy Mini robot combining OpenAI's realtime APIs,
 - Real-time audio conversation loop powered by the OpenAI realtime API and `fastrtc` for low-latency streaming.
 - Local vision processing using SmolVLM2 model running on-device (CPU/GPU/MPS).
 - Layered motion system queues primary moves (dances, emotions, goto poses, breathing) while blending speech-reactive wobble and face-tracking.
-- Async tool dispatch integrates robot motion, camera capture, and optional facial-recognition helpers through a Gradio web UI with live transcripts.
+- Async tool dispatch integrates robot motion, camera capture, and optional face-tracking capabilities through a Gradio web UI with live transcripts.
 
 ## Installation
 
@@ -93,7 +93,7 @@ By default, the app runs in console mode for direct audio interaction. Use the `
 
 | Option | Default | Description |
 |--------|---------|-------------|
-| `--head-tracker {yolo,mediapipe}` | `None` | Select a face-tracking backend when a camera is available. Requires the matching optional extra. |
+| `--head-tracker {yolo,mediapipe}` | `None` | Select a face-tracking backend when a camera is available. YOLO is implemented locally, MediaPipe comes from the `reachy_mini_toolbox` package. Requires the matching optional extra. |
 | `--no-camera` | `False` | Run without camera capture or face tracking. |
 | `--gradio` | `False` | Launch the Gradio web UI. Without this flag, runs in console mode. Required when running in simulation mode. |
 | `--debug` | `False` | Enable verbose logging for troubleshooting. |
@@ -118,7 +118,7 @@ By default, the app runs in console mode for direct audio interaction. Use the `
 |------|--------|--------------|
 | `move_head` | Queue a head pose change (left/right/up/down/front). | Core install only. |
 | `camera` | Capture the latest camera frame and optionally query a vision backend. | Requires camera worker; vision analysis depends on selected extras. |
-| `head_tracking` | Enable or disable face-tracking offsets. | Camera worker with configured head tracker. |
+| `head_tracking` | Enable or disable face-tracking offsets (not facial recognition - only detects and tracks face position). | Camera worker with configured head tracker. |
 | `dance` | Queue a dance from `reachy_mini_dances_library`. | Core install only. |
 | `stop_dance` | Clear queued dances. | Core install only. |
 | `play_emotion` | Play a recorded emotion clip via Hugging Face assets. | Needs `HF_TOKEN` for the recorded emotions dataset. |
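The option table touched by this commit documents a small CLI surface. As a minimal illustrative sketch of how those four flags could be parsed (the real app defines its own entry point and parser, so everything here beyond the flag names and defaults is an assumption):

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    # Hypothetical parser mirroring the README's option table; the actual
    # app's CLI wiring may differ - only the flags and defaults come from
    # the documented table.
    parser = argparse.ArgumentParser(
        description="Reachy Mini conversational demo (sketch)"
    )
    parser.add_argument(
        "--head-tracker",
        choices=["yolo", "mediapipe"],
        default=None,
        help="Face-tracking backend; requires the matching optional extra.",
    )
    parser.add_argument(
        "--no-camera",
        action="store_true",
        help="Run without camera capture or face tracking.",
    )
    parser.add_argument(
        "--gradio",
        action="store_true",
        help="Launch the Gradio web UI instead of console mode.",
    )
    parser.add_argument(
        "--debug",
        action="store_true",
        help="Enable verbose logging for troubleshooting.",
    )
    return parser


args = build_parser().parse_args(["--head-tracker", "yolo", "--gradio"])
print(args.head_tracker, args.gradio, args.no_camera)
```

With `choices=["yolo", "mediapipe"]`, argparse rejects any other backend name at parse time, matching the `{yolo,mediapipe}` notation in the table.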
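The tools table in the last hunk implies a name-to-handler dispatch for async tool calls. A minimal sketch of that pattern, covering the clarified `head_tracking` toggle (class, method names, and signatures are assumptions for illustration, not the app's real API):

```python
import asyncio


class ToolDispatcher:
    """Hypothetical registry mapping tool names from the README table to
    async handlers; the real app's dispatch logic may differ."""

    def __init__(self) -> None:
        self.head_tracking_enabled = False
        self._tools = {
            "head_tracking": self._head_tracking,
            "move_head": self._move_head,
        }

    async def _head_tracking(self, enabled: bool) -> str:
        # Toggles face-tracking offsets only; per the commit, this is face
        # position tracking, not facial recognition.
        self.head_tracking_enabled = enabled
        return f"head tracking {'on' if enabled else 'off'}"

    async def _move_head(self, direction: str) -> str:
        # Queues a pose change (left/right/up/down/front in the table).
        return f"queued head pose: {direction}"

    async def dispatch(self, name: str, **kwargs) -> str:
        # Look up the handler by tool name and await it.
        return await self._tools[name](**kwargs)


result = asyncio.run(ToolDispatcher().dispatch("head_tracking", enabled=True))
print(result)
```

Keeping the toggle as plain state on the dispatcher lets the camera worker read `head_tracking_enabled` without the tool call blocking on motion.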