Alibaba-NLP/gte-reranker-modernbert-base
We are excited to introduce the `gte-modernbert` series of models, built on the latest ModernBERT pre-trained encoder-only foundation models. The series includes both text embedding models and reranking models.
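A reranker scores (query, document) pairs and sorts retrieved candidates by that score; on Hugging Face the model is typically loaded as a sequence-classification cross-encoder. Below is a minimal sketch of the rerank step itself, with a simple token-overlap scorer standing in for the model (an assumption for illustration, not the model's actual scoring):

```python
def score(query: str, doc: str) -> float:
    # Stand-in relevance scorer using token overlap. In practice this would
    # be the cross-encoder's logit for the (query, doc) pair.
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

def rerank(query: str, docs: list[str], top_k: int = 3) -> list[str]:
    # Score every candidate against the query, sort descending, keep top_k.
    scored = [(score(query, doc), doc) for doc in docs]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:top_k]]

docs = [
    "how do I update packages on ubuntu",
    "best pizza recipe",
    "upgrading ubuntu packages from the terminal",
]
print(rerank("update ubuntu packages", docs, top_k=2))
```

Swapping the stand-in `score` for the model's pairwise score leaves the pipeline unchanged: the reranker is a drop-in scoring function over candidate lists produced by a first-stage retriever.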
Benchmarks
AskUbuntuDupQuestions
Duplicate question detection from AskUbuntu
Corpus: 6,743 | Queries: 360
Quality
nDCG@10: 0.6701
MAP@10: 0.5148
MRR@10: 0.7570
Performance (NVIDIA L4, b1, c16)
Query throughput: 6.2K tok/s
Query p50 latency: 41.9 ms
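The quality numbers above are standard ranking metrics over each query's ranked candidate list. A minimal sketch of how nDCG@10, AP@10, and MRR@10 are computed from binary relevance labels (toy data, not the benchmark's; MAP@10 is the mean of AP@10 over all queries):

```python
import math

def dcg_at_k(rels, k=10):
    # Discounted cumulative gain: relevance discounted by log2 of rank.
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(rels[:k]))

def ndcg_at_k(rels, k=10):
    # Normalize DCG by the DCG of the ideal (relevance-sorted) ranking.
    ideal = dcg_at_k(sorted(rels, reverse=True), k)
    return dcg_at_k(rels, k) / ideal if ideal > 0 else 0.0

def ap_at_k(rels, k=10):
    # Average precision: mean of precision values at each relevant rank.
    hits, total = 0, 0.0
    for i, rel in enumerate(rels[:k]):
        if rel:
            hits += 1
            total += hits / (i + 1)
    return total / hits if hits else 0.0

def mrr_at_k(rels, k=10):
    # Reciprocal rank of the first relevant result within the cutoff.
    for i, rel in enumerate(rels[:k]):
        if rel:
            return 1.0 / (i + 1)
    return 0.0

# Ranked relevance labels for one query: 1 = relevant duplicate, 0 = not.
rels = [1, 0, 1, 0]
print(ndcg_at_k(rels), ap_at_k(rels), mrr_at_k(rels))
```

For the list `[1, 0, 1, 0]` this yields MRR@10 = 1.0 (first hit at rank 1) and AP@10 = (1/1 + 2/3) / 2 ≈ 0.833.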
CMedQAv1Reranking
Chinese medical question answering reranking (v1)
Corpus: 100,000 | Queries: 2,000
Quality
MAP@10: 0.4989
MRR@10: 0.5905
CMedQAv2Reranking
Chinese medical question answering reranking (v2)
Corpus: 108,000 | Queries: 4,000
Quality
MAP@10: 0.5024
MRR@10: 0.5880
MMarcoReranking
Multilingual MARCO passage reranking (Chinese)
Quality
MAP@10: 0.2271
MRR@10: 0.2373
Performance (NVIDIA L4, b1, c16)