This is a layer-wise pruned variant of cerebras/Qwen3-Coder-REAP-25B-A3B, resulting in a ~20B-parameter model with ~3B active parameters. It has not yet been fine-tuned.

Prune info: the original model has 48 layers; the pruned model has 38 layers.
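
For context, below is a minimal sketch of what layer-wise pruning can look like with the transformers library. The specific layers removed from this model are not documented, so the dropped indices and the re-indexing step are illustrative assumptions, not the actual pruning recipe:

```python
# Illustrative layer-wise pruning sketch (assumed approach, not this model's exact recipe).
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "cerebras/Qwen3-Coder-REAP-25B-A3B",
    torch_dtype=torch.bfloat16,
)

# Hypothetical choice: drop 10 of the 48 decoder layers. The indices here are
# arbitrary; which layers were actually removed from this checkpoint is not stated.
drop = set(range(28, 38))
kept = torch.nn.ModuleList(
    layer for i, layer in enumerate(model.model.layers) if i not in drop
)
model.model.layers = kept
model.config.num_hidden_layers = len(kept)  # 38

# Re-index so each layer's KV-cache slot matches its new position (assumes the
# decoder layers expose a self_attn.layer_idx attribute, as in recent transformers).
for new_idx, layer in enumerate(model.model.layers):
    layer.self_attn.layer_idx = new_idx

model.save_pretrained("Qwen3-Coder-pruned-20B-A3B")
```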

Result: a similar model, but fine-tuning is strongly recommended before use, as out-of-the-box performance is degraded by the pruning. It will run as-is, but output quality will suffer until it is fine-tuned.
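
If you do fine-tune it, a minimal loading sketch might look like the following (this assumes the standard transformers API; the dataset and training loop are left out and are not part of this card):

```python
# Minimal load-for-fine-tuning sketch; the model id is from this card,
# everything else is generic boilerplate.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Pinkstackorg/Qwen3-Coder-pruned-20B-A3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the checkpoint's BF16 weights
    device_map="auto",
)
model.train()  # fine-tune before relying on outputs, per the note above
```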

Safetensors · Model size: 20B params · Tensor type: BF16