This is a layer-wise pruned variant of cerebras/Qwen3-Coder-REAP-25B-A3B, resulting in a ~20B-parameter model with ~3B active parameters. It has not been fine-tuned yet.
Prune info:
- Original model: 48 layers
- New model: 38 layers
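For readers unfamiliar with layer-wise pruning, the sketch below shows one way such a reduction could be performed with transformers, assuming the standard layout where decoder blocks live in `model.model.layers`. The card does not state which 10 layers were removed, so the indices here are purely illustrative.

```python
# Illustrative layer-wise pruning sketch (not the exact procedure used
# for this checkpoint). Assumes decoder blocks live in model.model.layers.
import torch
from torch import nn
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "cerebras/Qwen3-Coder-REAP-25B-A3B",
    torch_dtype=torch.bfloat16,
)

# Hypothetical selection: drop 10 layers from the upper-middle of the
# stack. The actual layers removed for this model are not documented.
drop = set(range(26, 36))
kept = nn.ModuleList(
    layer for i, layer in enumerate(model.model.layers) if i not in drop
)
model.model.layers = kept
model.config.num_hidden_layers = len(kept)  # 48 - 10 = 38

# Note: a real pruning script would also re-index each layer's
# self-attention layer_idx so KV caching stays consistent.
model.save_pretrained("qwen3-coder-pruned-38-layers")
```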
Result: the model remains coherent and can technically be used as is, but current performance is degraded, so fine-tuning is strongly recommended before use.
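If you want to experiment with the checkpoint as is, or load it as a starting point for fine-tuning, a minimal loading sketch follows. The repo id is taken from the model tree below, and `device_map="auto"` assumes accelerate is installed.

```python
# Minimal loading sketch; fine-tuning data and trainer setup are up to you.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Pinkstackorg/Qwen3-Coder-pruned-20B-A3B"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires accelerate
)
```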
Model tree for Pinkstackorg/Qwen3-Coder-pruned-20B-A3B:
- Base model: Qwen/Qwen3-Coder-30B-A3B-Instruct
- Finetuned from: cerebras/Qwen3-Coder-REAP-25B-A3B