Update README.md
README.md CHANGED
@@ -529,7 +529,7 @@ We re-evaluate the metrics of the Qwen series models, and the metrics of other s
1. The latest version of [transformers](https://github.com/huggingface/transformers) is recommended (at least 4.42.0).
2. We evaluate our models with `python=3.8` and `torch==2.1.2`.
-3. If you use Rodimus, you need to install [flash-linear-attention](https://github.com/sustcsonglin/flash-linear-attention), [causal_conv1d](https://github.com/Dao-AILab/causal-conv1d)
+3. If you use Rodimus, you need to install [flash-linear-attention](https://github.com/sustcsonglin/flash-linear-attention), [causal_conv1d](https://github.com/Dao-AILab/causal-conv1d), and [triton>=2.2.0](https://github.com/triton-lang/triton). If you use Rodimus+, you additionally need to install [flash-attention](https://github.com/Dao-AILab/flash-attention).
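For reference, an environment satisfying the requirements above could be set up roughly as follows. The PyPI names `flash-linear-attention`, `causal-conv1d`, and `flash-attn` are assumptions inferred from the linked repositories; fall back to installing from the GitHub URLs if a name does not resolve.

```bash
# Rough environment sketch for requirements 1-3 (PyPI names assumed).
pip install "transformers>=4.42.0" "torch==2.1.2"
pip install "triton>=2.2.0" flash-linear-attention causal-conv1d  # Rodimus
pip install flash-attn --no-build-isolation                       # Rodimus+ only
```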
## Generation
`generate` API
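The hunk ends here, but since the models load through Hugging Face `transformers`, a call to the `generate` API would follow the standard pattern below. This is a minimal sketch: the checkpoint path is a hypothetical placeholder, and `trust_remote_code=True` is assumed because Rodimus ships custom modeling code.

```python
# Minimal sketch of generation with the transformers `generate` API.
# "path/to/rodimus-checkpoint" is a placeholder, not a published model id;
# trust_remote_code=True is assumed for the custom Rodimus modeling code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/rodimus-checkpoint"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).eval()

inputs = tokenizer("Efficient attention mechanisms", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```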