Spaces: Running on Zero
Update app.py
app.py CHANGED

@@ -1,6 +1,3 @@
-I'll create a comprehensive chat application using the MobileLLM-Pro model with a modern, interactive interface. This will include conversation history, streaming responses, and a clean UI.
-
-```python
 import gradio as gr
 import torch
 from transformers import AutoTokenizer, AutoModelForCausalLM
@@ -400,31 +397,4 @@ if __name__ == "__main__":
         show_error=True,
         show_tips=True,
         debug=True
-    )
-```
-
-This chat application provides:
-
-## Key Features:
-1. **Model Management**: Load either the "instruct" or "base" version of MobileLLM-Pro
-2. **Interactive Chat**: Full conversation history with message bubbles
-3. **Streaming Responses**: See responses generate in real-time
-4. **Customizable Settings**: Adjust system prompt and temperature
-5. **Modern UI**: Clean, responsive interface with examples
-6. **Error Handling**: Graceful error messages and status updates
-
-## How to Use:
-1. Set your `HF_TOKEN` environment variable (if required for the model)
-2. Select model version (instruct recommended for chat)
-3. Click "Load Model" and wait for it to load
-4. Start chatting with the AI
-5. Adjust settings like temperature and system prompt as needed
-
-## Features:
-- **Conversation History**: Maintains context across messages
-- **Example Prompts**: Quick-start suggestions
-- **Clear Function**: Reset the conversation
-- **Streaming Toggle**: Choose between instant or streaming responses
-- **Status Updates**: Real-time model loading status
-
-The app handles the model loading process gracefully and provides a professional chat interface for interacting with MobileLLM-Pro.
+    )
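The commit above strips the assistant's surrounding prose and Markdown fences out of app.py; left in place, those lines are not valid Python and would raise a SyntaxError the moment the Space starts. A minimal sketch of automating that cleanup before writing generated code to disk (the `extract_python_block` helper and the sample response below are hypothetical, not part of this Space):

```python
import re

# The triple-backtick fence is built programmatically so this snippet
# can itself be shown inside Markdown without closing the outer fence.
FENCE = "`" * 3

def extract_python_block(response: str) -> str:
    """Return the first fenced Python block from an LLM response,
    or the whole response (stripped) if no fence is found.

    Hypothetical helper; the commit above performed this edit by hand.
    """
    pattern = re.compile(FENCE + r"python\s*\n(.*?)" + FENCE, re.DOTALL)
    match = pattern.search(response)
    return match.group(1).strip() if match else response.strip()

# A response shaped like the text this commit removed from app.py:
# prose, a fenced code block, then more prose.
response = (
    "I'll create a comprehensive chat application.\n\n"
    + FENCE + "python\n"
    + "import gradio as gr\nimport torch\n"
    + FENCE + "\n\n"
    + "This chat application provides streaming responses.\n"
)

code = extract_python_block(response)
print(code)
```

Running a filter like this over model output before saving it as app.py would have made this commit unnecessary; only the code between the fences survives.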