arxiv:2601.01046

KV-Embedding: Training-free Text Embedding via Internal KV Re-routing in Decoder-only LLMs

Published on Jan 3 · Submitted by yixuan on Jan 6
Abstract

AI-generated summary: KV-Embedding enables training-free representation learning from frozen LLMs by utilizing key-value states for enhanced context access and automated layer selection.

While LLMs are powerful embedding backbones, their application in training-free settings faces two structural challenges: causal attention restricts early tokens from accessing subsequent context, and the next-token prediction objective biases representations toward generation rather than semantic compression. To address these limitations, we propose KV-Embedding, a framework that activates the latent representation power of frozen LLMs. Our method leverages the observation that the key-value (KV) states of the final token at each layer encode a compressed view of the sequence. By re-routing these states as a prepended prefix, we enable all tokens to access sequence-level context within a single forward pass. To ensure model-agnostic applicability, we introduce an automated layer selection strategy based on intrinsic dimensionality. Evaluations on MTEB across Qwen, Mistral, and Llama backbones show that KV-Embedding outperforms existing training-free baselines by up to 10%, while maintaining robust performance on sequences up to 4,096 tokens. These results demonstrate that internal state manipulation offers an efficient alternative to input modification, and we hope this work encourages further exploration of LLM internals for representation learning.
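The abstract mentions automated layer selection based on intrinsic dimensionality but does not spell out the estimator or the selection rule. Below is a minimal, hedged sketch of one plausible realization: the TwoNN intrinsic-dimensionality estimator applied to per-layer token representations, with the lowest-ID layer treated as the most compressed. The estimator choice, the lowest-ID heuristic, and the use of hidden states rather than KV states are all assumptions, not the paper's confirmed procedure.

```python
import torch

def two_nn_id(x: torch.Tensor) -> float:
    """TwoNN intrinsic-dimensionality estimate for a (num_points, dim) matrix."""
    dists = torch.cdist(x, x)                       # pairwise Euclidean distances
    dists.fill_diagonal_(float("inf"))              # exclude self-distances
    r, _ = torch.sort(dists, dim=1)
    mu = (r[:, 1] / r[:, 0].clamp_min(1e-12)).clamp_min(1.0 + 1e-6)  # 2nd/1st NN ratio
    return (x.shape[0] / torch.log(mu).sum()).item()  # Pareto MLE for the dimensionality

def select_layer(per_layer_states: list[torch.Tensor]) -> int:
    """Pick the layer whose token representations have the lowest intrinsic
    dimensionality (assumed proxy for the most compressed, semantic layer).
    Each element of per_layer_states is a (seq_len, dim) tensor."""
    ids = [two_nn_id(h.float()) for h in per_layer_states]
    return min(range(len(ids)), key=lambda i: ids[i])
```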

Community

Paper author · Paper submitter · edited 1 day ago

Turn any decoder-only LLM into a powerful embedding model—zero training needed!

The Trick: Re-route the final token's key-value states as an internal prefix, giving all tokens access to global context in one forward pass. No input modification, no mask removal, just smart internal state manipulation.
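As a rough illustration of the re-routing idea, here is a minimal sketch using a two-pass approximation: one pass to harvest the final token's per-layer KV states, and a second pass in which those states are supplied as a one-token prefix cache, followed by mean pooling. The backbone name, the two-pass formulation, and the pooling choice are assumptions for illustration, not the authors' released implementation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2-1.5B"   # assumed backbone; any decoder-only LLM should work
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16).eval()

@torch.no_grad()
def kv_embed(text: str) -> torch.Tensor:
    input_ids = tok(text, return_tensors="pt")["input_ids"]

    # Pass 1: harvest per-layer key/value caches from the frozen model.
    past = model(input_ids, use_cache=True).past_key_values
    if hasattr(past, "to_legacy_cache"):            # newer transformers return a Cache object
        past = past.to_legacy_cache()

    # Keep only the final token's KV states at each layer: a compressed summary
    # of the whole sequence, used as a one-token internal prefix.
    prefix = tuple((k[:, :, -1:, :], v[:, :, -1:, :]) for k, v in past)
    try:
        from transformers import DynamicCache       # wrap back into a cache if available
        prefix = DynamicCache.from_legacy_cache(prefix)
    except ImportError:
        pass

    # Pass 2: re-run the same tokens with the summary prefix prepended via the
    # cache, so causal attention lets every token see sequence-level context.
    attn = torch.ones(1, input_ids.shape[1] + 1, dtype=torch.long)
    out = model(input_ids, past_key_values=prefix, attention_mask=attn,
                output_hidden_states=True)

    # Mean-pool the last hidden layer as the embedding (assumed pooling choice).
    return out.hidden_states[-1].mean(dim=1).squeeze(0)

print(kv_embed("KV re-routing turns a frozen decoder-only LLM into an embedder.").shape)
```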


