
naver/splade-v3

This checkpoint starts from SPLADE++SelfDistil (`naver/splade-cocondenser-selfdistil`) and is trained with a mix of KL-Div and MarginMSE losses, using 8 negatives per query sampled from SPLADE++SelfDistil. Training used the original MS MARCO collection without the titles.
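SPLADE models produce a sparse vocabulary-sized vector by applying log(1 + ReLU(·)) to the MLM logits and max-pooling over token positions (the standard SPLADE-max aggregation). A minimal sketch of that aggregation, using dummy NumPy arrays in place of real model logits (running the actual checkpoint would require loading it via a masked-LM head, which is not shown here):

```python
import numpy as np

def splade_pool(logits: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """SPLADE-max aggregation: w_j = max_i log(1 + relu(logit_ij)),
    taken only over non-padding token positions.

    logits: (seq_len, vocab_size) MLM logits
    attention_mask: (seq_len,) with 1 for real tokens, 0 for padding
    """
    activated = np.log1p(np.maximum(logits, 0.0))    # log(1 + relu(x)), elementwise
    activated = activated * attention_mask[:, None]  # zero out padding positions
    return activated.max(axis=0)                     # max-pool over the sequence

# Dummy logits standing in for real MLM outputs (30,522 = BERT vocab size).
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 30522))
mask = np.array([1, 1, 1, 0])  # last position is padding
vec = splade_pool(logits, mask)
print(vec.shape)  # one non-negative weight per vocabulary term
```

With a trained checkpoint, most of these weights are exactly zero, which is what makes the output usable with an inverted index.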

Architecture: BERT
Parameters: 110M
Tasks: Encode
Outputs: Sparse
Dimensions: Sparse, 30,522 (BERT WordPiece vocabulary)
Max Sequence Length: 512 tokens
License: cc-by-nc-sa-4.0
Languages: en
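Because the output is a sparse vector over vocabulary terms, retrieval scoring reduces to a dot product over the overlapping non-zero dimensions. A minimal, purely illustrative sketch with made-up term weights (not output from the actual model):

```python
def sparse_dot(q: dict[str, float], d: dict[str, float]) -> float:
    """Dot product over the terms present in both sparse vectors."""
    if len(q) > len(d):          # iterate over the smaller vector
        q, d = d, q
    return sum(w * d[t] for t, w in q.items() if t in d)

# Toy sparse vectors keyed by vocabulary term; weights are invented.
query = {"splade": 1.2, "sparse": 0.8, "model": 0.3}
docs = {
    "doc1": {"splade": 0.9, "retrieval": 0.7},  # overlaps on "splade"
    "doc2": {"dense": 1.1, "model": 0.2},       # overlaps on "model"
}
ranked = sorted(docs, key=lambda k: sparse_dot(query, docs[k]), reverse=True)
print(ranked)  # ['doc1', 'doc2'] — 1.2*0.9 = 1.08 beats 0.3*0.2 = 0.06
```

In production this dot product is typically delegated to an inverted-index engine rather than computed per document in Python.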

Benchmarks

CQADupstackPhysicsRetrieval

scientific · retrieval · en

Duplicate question retrieval from StackExchange Physics

Corpus: 38,314 documents · Queries: 1,039

Quality
nDCG@10: 0.3639
MAP@10: 0.3082
MRR@10: 0.3560

Performance (L4, b1, c16)
Corpus: 20.7K tok/s, p50 73.7 ms
Query: 3.0K tok/s, p50 52.9 ms

CosQA

technology · retrieval · en

Code search with natural language queries

Corpus: 6,267 documents · Queries: 500

Quality
nDCG@10: 0.2045
MAP@10: 0.1567
MRR@10: 0.1702

Performance (L4, b1, c16)
Corpus: 9.1K tok/s, p50 62.2 ms
Query: 1.6K tok/s, p50 51.3 ms

FiQA2018

finance · retrieval · en

Financial opinion mining and question answering

Corpus: 57,599 documents · Queries: 648

Quality
nDCG@10: 0.2768
MAP@10: 0.2113
MRR@10: 0.3451

Performance (L4, b1, c16)
Corpus: 24.5K tok/s, p50 67.6 ms
Query: 3.3K tok/s, p50 51.2 ms

LegalBenchConsumerContractsQA

legal · retrieval · en

Question answering on consumer contracts

Corpus: 153 documents · Queries: 396

Quality
nDCG@10: 0.7393
MAP@10: 0.6784
MRR@10: 0.6805

Performance (L4, b1, c16)
Corpus: 56.0K tok/s, p50 115.2 ms
Query: 4.5K tok/s, p50 52.2 ms

NFCorpus

medical · retrieval · en

Biomedical literature search from NutritionFacts.org

Corpus: 3,593 documents · Queries: 323

Quality
nDCG@10: 0.3404
MAP@10: 0.1300
MRR@10: 0.5417

Performance (L4, b1, c16)
Corpus: 37.0K tok/s, p50 98.4 ms
Query: 1.4K tok/s, p50 52.6 ms

Performance (RTX-4090, b1, c16)
Corpus: 108.8K tok/s, p50 40.3 ms
Query: 3.5K tok/s, p50 19.1 ms

SCIDOCS

scientific · retrieval · en

Citation prediction, document classification, and recommendation for scientific papers

Corpus: 25,656 documents · Queries: 1,000

Quality
nDCG@10: 0.1543
MAP@10: 0.0878
MRR@10: 0.2686

Performance (L4, b1, c16)
Corpus: 26.2K tok/s, p50 83.4 ms
Query: 3.0K tok/s, p50 53.6 ms

SciFact

scientific · retrieval · en

Scientific claim verification using research literature

Corpus: 5,183 documents · Queries: 300

Quality
nDCG@10: 0.6846
MAP@10: 0.6371
MRR@10: 0.6524

Performance (L4, b1, c16)
Corpus: 35.8K tok/s, p50 89.7 ms
Query: 4.5K tok/s, p50 52.1 ms

StackOverflowQA

technology · retrieval · en

Programming question answering from Stack Overflow

Corpus: 19,931 documents · Queries: 1,994

Quality
nDCG@10: 0.7380
MAP@10: 0.7057
MRR@10: 0.7057

Performance (L4, b1, c16)
Corpus: 33.0K tok/s, p50 84.0 ms
Query: 50.8K tok/s, p50 91.0 ms
