RemiFabre committed on
Commit b0cb5ad · 1 Parent(s): 0ecbc6c

Minor README tweaks

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -4,8 +4,8 @@ Conversational demo for the Reachy Mini robot combining OpenAI's realtime APIs,
 
 ## Overview
 - Real-time audio conversation loop powered by the OpenAI realtime API and `fastrtc` for low-latency streaming.
-- Layered motion system queues primary moves (dances, emotions, goto poses, breathing) while blending speech-reactive wobble and face-tracking.
 - Camera capture can route to OpenAI multimodal vision or stay on-device with SmolVLM2 local analysis.
+- Layered motion system queues primary moves (dances, emotions, goto poses, breathing) while blending speech-reactive wobble and face-tracking.
 - Async tool dispatch integrates robot motion, camera capture, and optional facial-recognition helpers through a Gradio web UI with live transcripts.
 
 ## Installation
@@ -120,14 +120,14 @@ The app starts a Gradio UI served locally (http://127.0.0.1:7860/). When running
 | `stop_dance` | Clear queued dances. | Core install only. |
 | `play_emotion` | Play a recorded emotion clip via Hugging Face assets. | Needs `HF_TOKEN` for the recorded emotions dataset. |
 | `stop_emotion` | Clear queued emotions. | Core install only. |
-| `get_person_name` | Attempt DeepFace-based recognition of the current person. | Disabled by default (`ENABLE_FACE_RECOGNITION=False`); requires `deepface` and a local face database. |
+| `get_person_name` | DeepFace-based recognition of the current person. | Disabled by default (`ENABLE_FACE_RECOGNITION=False`); requires `deepface` and a local face database. |
 | `do_nothing` | Explicitly remain idle. | Core install only. |
 
 ## Development workflow
 - Install the dev group extras: `uv sync --group dev` or `pip install -e .[dev]`.
 - Run formatting and linting: `ruff check .`.
 - Execute the test suite: `pytest`.
-- When iterating on robot motions, keep the control loop responsiveoffload blocking work using the helpers in `tools.py`.
+- When iterating on robot motions, keep the control loop responsive => offload blocking work using the helpers in `tools.py`.
 
 ## License
 Apache 2.0
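The `get_person_name` row in the diffed table says the tool is DeepFace-based with a local face database. A minimal sketch of that kind of lookup, assuming a database laid out as `faces_db/<name>/<image>.jpg` (the paths and helper names here are illustrative, not the app's actual wiring in `tools.py`):

```python
from pathlib import Path


def label_from_match(identity_path: str) -> str:
    """Derive a person label from a matched image path,
    assuming the database is organised as db_path/<name>/<image>.jpg."""
    return Path(identity_path).parent.name


def get_person_name(frame_path: str, db_path: str = "faces_db"):
    """Look up the closest identity for a captured frame in a local face DB."""
    # Imported lazily: deepface is an optional dependency, only needed
    # when face recognition is enabled (ENABLE_FACE_RECOGNITION=True).
    from deepface import DeepFace

    # DeepFace.find returns a list of DataFrames (one per detected face),
    # each row pointing at a matching image file inside db_path.
    results = DeepFace.find(
        img_path=frame_path, db_path=db_path, enforce_detection=False
    )
    if not results or results[0].empty:
        return None  # nobody recognised
    return label_from_match(results[0].iloc[0]["identity"])
```

The lazy import mirrors the table's note that the feature is disabled by default: the core install never pays the `deepface` import cost.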
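The reworded workflow bullet ("keep the control loop responsive => offload blocking work") describes a standard asyncio pattern. The actual helpers in `tools.py` are not shown in this commit; the sketch below uses plain `asyncio.to_thread` as a stand-in to show the idea:

```python
import asyncio
import time


def capture_frame() -> str:
    """Placeholder for a blocking call (camera I/O, a DeepFace lookup, ...)."""
    time.sleep(0.05)  # simulate slow hardware access
    return "frame"


async def control_loop() -> list:
    """Toy control loop: the blocking call runs in a worker thread so the
    event loop (audio streaming, motion blending) keeps running meanwhile."""
    frames = []
    for _ in range(3):
        frame = await asyncio.to_thread(capture_frame)
        frames.append(frame)
    return frames


if __name__ == "__main__":
    print(asyncio.run(control_loop()))
```

Calling `capture_frame()` directly inside the coroutine would stall every other task for the duration of the call; routing it through `asyncio.to_thread` keeps the loop's latency bounded by scheduling, not by the slowest blocking helper.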