WilhelmT committed · Commit a001915 · verified · 1 parent: 6acc4ed

Update README.md

# Llama-3.2-1B-Instruct-FlashHead

**Optimized version of Llama-3.2-1B-Instruct using FlashHead, Embedl’s efficient replacement for the language model head, reducing size while preserving accuracy.**
Designed for **low-latency inference** on **NVIDIA RTX GPUs**, leveraging:

- FlashHead
- Custom vLLM generation via `embedl-models`

FlashHead matches the baseline **Llama-3.2-1B** within rounding on standard evaluations (MMLU-Pro, HellaSwag, GSM8K, etc.) and, in combination with quantization, achieves **H200-level latency** on **RTX Ada** GPUs.

---

## Model Details

| **Field** | **Value** |
|------------|------------|
| **Base Model** | Llama-3.2-1B-Instruct |
| **Input / Output** | Text → Text |
| **Release Date** | 2025-12-08 |
| **Version** | 1.0 |
| **Optimizations** | FlashHead LM Head, W4A16 Mixed Precision |
| **Developers** | Embedl |
| **Licenses** | Upstream: Meta Llama 3.2 License. Built with Llama. <br>Optimized components: Embedl Models Community Licence v1.0 *(no redistribution)* |
| **Intended Use** | Text generation, reasoning, assistant-style interaction, and general-purpose NLP on NVIDIA RTX GPUs |

---

## Optimizations

- **FlashHead LM Head** - lightweight replacement for the dense LM head, significantly improving throughput.
- **Mixed-Precision Quantization (W4A16)** - 4-bit weights with 16-bit activations, balancing memory footprint and accuracy (a sketch of the scheme follows this list).
- **Custom Runtime Integration** - compatible with **vLLM (0.10.2)** via the `embedl-models` package.
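
FlashHead's internals are not public, but W4A16 itself is a standard scheme: weights are stored in 4-bit groups and dequantized on the fly, while activations and the matmul stay in 16-bit. A minimal NumPy sketch of symmetric group-wise 4-bit weight quantization, purely illustrative; the group size (128) and symmetric scaling are assumptions, not the shipped configuration:

```python
import numpy as np

def quantize_w4(w: np.ndarray, group_size: int = 128):
    """Symmetric group-wise 4-bit weight quantization (the W4 part)."""
    groups = w.reshape(-1, group_size)
    scale = np.abs(groups).max(axis=1, keepdims=True) / 7.0  # int4 range [-8, 7]
    q = np.clip(np.round(groups / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_w4(q: np.ndarray, scale: np.ndarray, shape) -> np.ndarray:
    """Dequantize to fp16 so the matmul runs at 16 bits (the A16 part)."""
    return (q.astype(np.float16) * scale.astype(np.float16)).reshape(shape)

# Toy example: quantize a weight matrix, then run a 16-bit activation through it.
w = np.random.randn(256, 256).astype(np.float16)
q, scale = quantize_w4(w)
w_hat = dequantize_w4(q, scale, w.shape)
x = np.random.randn(1, 256).astype(np.float16)  # 16-bit activation
y = x @ w_hat                                    # compute stays in fp16
print("max weight reconstruction error:", np.abs(w - w_hat).max())
```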

---

## Performance

### Token Generation Speed (RTX 3500 Ada, batch size = 1)

| **Precision** | **Tokens/sec** | **Speedup vs BF16** |
|----------------|----------------|----------------------|
| BF16 baseline | 130 | 1.0× |
| **FlashHead (Embedl)** | **163** | **1.25×** |
| W4A16 baseline | 278 | 2.14× |
| **FlashHead W4A16 (Embedl)** | **485** | **3.73×** |

FlashHead improves end-to-end generation speed by **1.75×** over the state-of-the-art W4A16 baseline (485 vs. 278 tokens/sec) while maintaining accuracy parity.
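
The table reflects Embedl's measurements. As a rough way to sanity-check tokens/sec on your own GPU, the following sketch times a single batch-size-1 generation after one warmup call (the prompt and token budget are arbitrary choices, not the benchmark protocol behind the table):

```python
import time

from vllm import SamplingParams
from embedl.models.vllm import LLM

model_id = "embedl/Llama-3.2-1B-Instruct-FlashHead"
llm = LLM(model=model_id, trust_remote_code=True)
sampling = SamplingParams(max_tokens=256, temperature=0.0)
prompt = "Explain the attention mechanism in transformers."

llm.generate([prompt], sampling)  # warmup: compile kernels, fill caches

start = time.perf_counter()
out = llm.generate([prompt], sampling)
elapsed = time.perf_counter() - start

n_tokens = len(out[0].outputs[0].token_ids)
print(f"{n_tokens / elapsed:.1f} tokens/sec (batch size = 1)")
```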

---

## Accuracy (Parity with Baseline)

| **Method** | **MMLU-Pro** | **HellaSwag** | **IFEval** | **BoolQ** | **BBH** | **TruthfulQA** | **GSM8K** |
|-------------|---------------|----------------|--------------|-------------|-------------|----------------|--------------|
| **Baseline** | 0.18 | 0.59 | 0.45 | 0.69 | 0.38 | 0.36 | 0.46 |
| **FlashHead** | 0.18 | 0.59 | 0.45 | 0.69 | 0.38 | 0.36 | 0.46 |

FlashHead matches the baseline score on every reported benchmark.

---

## Installation

```bash
pip install embedl-models
```

The `embedl-models` package is required; it provides the optimized FlashHead implementation and the quantized model runtime.
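
A quick import check (using the same import path as the usage example below) confirms the install resolved correctly:

```python
# Smoke test: both imports should succeed after `pip install embedl-models`.
from vllm import SamplingParams      # pulls in the pinned vLLM build
from embedl.models.vllm import LLM   # Embedl's FlashHead-aware wrapper
print("embedl-models import OK")
```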

---

## Usage Examples

### vLLM Inference

```python
from vllm import SamplingParams
from embedl.models.vllm import LLM

model_id = "embedl/Llama-3.2-1B-Instruct-FlashHead"

sampling = SamplingParams(max_tokens=128, temperature=0.0)
llm = LLM(model=model_id, trust_remote_code=True)

prompt = "Write a haiku about coffee."
output = llm.generate([prompt], sampling)
print(output[0].outputs[0].text)
```
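
### Chat-Formatted Inference

For chat-style prompting, the model's chat template can be applied before calling `generate`. A minimal sketch, assuming the repository ships the standard Llama 3.2 tokenizer and chat template (the `transformers` dependency and the system prompt are illustrative, not part of the documented API):

```python
from transformers import AutoTokenizer
from vllm import SamplingParams
from embedl.models.vllm import LLM

model_id = "embedl/Llama-3.2-1B-Instruct-FlashHead"

# Build a chat-formatted prompt with the tokenizer's chat template.
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Summarize what an LM head does in one sentence."},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

llm = LLM(model=model_id, trust_remote_code=True)
sampling = SamplingParams(max_tokens=128, temperature=0.0)
output = llm.generate([prompt], sampling)
print(output[0].outputs[0].text)
```
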
---

## Limitations

- Limited to **vLLM 0.10.2** (pinned dependency)
- Optimized for **batch size = 1** (real-time generation)
- Currently optimized for **NVIDIA RTX GPUs**

---

## Roadmap

Planned improvements:

- Hugging Face Transformers generation support
- vLLM CLI benchmarking for detailed latency evaluation
- `lm-eval-harness` integration for detailed accuracy evaluation
- Upstream support in **Transformers** and **vLLM**
- Compatibility with **GGUF**, **MLC**, **Llama.cpp**, **TGI**, etc.
- Broader model coverage (larger models, VLMs, VLAs)

---

## License

- **Upstream:** Meta Llama 3.2 License
- **Optimized Components:** Embedl Models Community Licence v1.0 *(no redistribution)*

---

## Contact

**Enterprise & Commercial Inquiries**
[sales@embedl.com](mailto:sales@embedl.com)

**Technical Issues & Early Access**
[https://github.com/embedl/embedl-models](https://github.com/embedl/embedl-models)

**More Information & Model Releases**
[https://embedl.com](https://embedl.com)

---

### Partner & Developer Opportunities

If you are evaluating on-device inference, building products on SLMs, or exploring custom model optimization, reach out for:

- Embedl SDK - AI optimization tools & profiling
- Embedl HUB - benchmarking platform
- Engineering support for on-prem/edge deployments
- Migration guidance (Llama / Qwen / Gemma)
- Early access & partner co-marketing opportunities

Contact: [sales@embedl.com](mailto:sales@embedl.com)

Files changed (1): README.md (+7 −5)

```diff
@@ -1,5 +1,7 @@
- ---
- license: other
- license_name: embedl-models-community-licence-agreement-1.0
- license_link: https://github.com/embedl/embedl-models/blob/main/LICENSE
- ---
+ ---
+ license: other
+ license_name: embedl-models-community-licence-agreement-1.0
+ license_link: https://github.com/embedl/embedl-models/blob/main/LICENSE
+ base_model:
+ - meta-llama/Llama-3.2-1B-Instruct
+ ---
```