
opensearch-project/opensearch-neural-sparse-encoding-doc-v3-gte

A model should be selected by weighing search relevance against model inference and retrieval efficiency (FLOPS). We benchmark model performance on a subset of the BEIR benchmark: TrecCovid, NFCorpus, NQ, HotpotQA, FiQA, ArguAna, Touche, DBPedia, SCIDOCS, FEVER, Climate-FEVER, SciFact, and Quora.
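As a rough intuition for the FLOPS efficiency criterion: for sparse retrieval it is commonly taken as the expected number of multiplications in one query-document dot product, i.e. the sum over vocabulary dimensions of the probability that the dimension is nonzero in both the query and the document. The sketch below computes this on synthetic nonzero patterns (the densities and shapes are made-up for illustration, not measured from this model):

```python
import numpy as np

VOCAB = 30522  # sparse output dimensionality of the model
rng = np.random.default_rng(0)

# Hypothetical nonzero patterns: rows = texts, cols = vocab dimensions.
q_nz = rng.random((100, VOCAB)) < 0.001  # query vectors are very sparse
d_nz = rng.random((200, VOCAB)) < 0.005  # doc vectors activate more dims

# A dimension costs a multiplication only when nonzero on both sides,
# so expected FLOPS = sum_j P(nonzero in query) * P(nonzero in doc).
q_density = q_nz.mean(axis=0)
d_density = d_nz.mean(axis=0)
flops = float((q_density * d_density).sum())
print(f"expected FLOPS per query-doc pair: {flops:.3f}")
```

Lower is cheaper at retrieval time; dense models, by contrast, always pay the full dimensionality.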

Architecture: ModernBERT
Parameters: 137M
Tasks: Encode
Outputs: Sparse
Dimensions: Sparse (30,522)
Max Sequence Length: 512 tokens
License: apache-2.0
Languages: en
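The model expands each text into a sparse vector over its 30,522-token vocabulary. A common recipe for doc-only neural sparse encoders (assumed here as a sketch, not taken from this repository's code) applies log(1 + ReLU(x)) to the MLM-head logits and max-pools over token positions; the example below runs that transform on dummy logits:

```python
import numpy as np

VOCAB = 30522
rng = np.random.default_rng(0)

# Dummy MLM-head logits for a 12-token input: shape (seq_len, vocab).
logits = rng.normal(-2.0, 1.0, size=(12, VOCAB))

# log(1 + ReLU(x)) keeps weights non-negative and saturates large logits;
# max-pooling over positions yields one sparse vector per text.
weights = np.log1p(np.maximum(logits, 0.0))
sparse_vec = weights.max(axis=0)

nonzero = int((sparse_vec > 0).sum())
print(f"active dims: {nonzero} / {VOCAB}")
```

Only the activated dimensions need to be stored, which is what makes the output compatible with an inverted index.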

Benchmarks

CQADupstackPhysicsRetrieval (scientific · retrieval · en)

Duplicate question retrieval from StackExchange Physics
Corpus: 38,314 · Queries: 1,039

Quality: NDCG@10 0.4057 · MAP@10 0.3518 · MRR@10 0.4049
Performance (A10G, b1 c16):
  Corpus: 1 tok/s, p50 4.0s · Query: 0 tok/s, p50 32.5s
Performance (L4, b1 c16):
  Corpus: 24.3K tok/s, p50 75.0ms · Query: 4.2K tok/s, p50 40.6ms
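The quality numbers above use standard ranking metrics. As a reference for how they are computed, here is a minimal sketch of NDCG@10 and MRR@10 for a single query with binary relevance labels (the toy ranking is hypothetical):

```python
import numpy as np

def ndcg_at_k(relevance, k=10):
    """NDCG@k for one ranked list of binary (or graded) relevance labels."""
    rel = np.asarray(relevance[:k], dtype=float)
    discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))  # 1/log2(rank+1)
    dcg = float((rel * discounts).sum())
    ideal = np.sort(np.asarray(relevance, dtype=float))[::-1][:k]
    idcg = float((ideal * discounts[:ideal.size]).sum())
    return dcg / idcg if idcg > 0 else 0.0

def mrr_at_k(relevance, k=10):
    """Reciprocal rank of the first relevant result within the top k."""
    for rank, rel in enumerate(relevance[:k], start=1):
        if rel > 0:
            return 1.0 / rank
    return 0.0

ranked = [0, 1, 0, 1, 1]  # relevance of the top-5 retrieved docs
print(round(ndcg_at_k(ranked), 4), mrr_at_k(ranked))
```

The reported scores average these per-query values over all queries in each benchmark.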

CosQA (technology · retrieval · en)

Code search with natural language queries
Corpus: 6,267 · Queries: 500

Quality: NDCG@10 0.2244 · MAP@10 0.1739 · MRR@10 0.1860
Performance (A10G, b1 c16):
  Corpus: 24 tok/s, p50 42.5s · Query: 24 tok/s, p50 3.6s
Performance (L4, b1 c16):
  Corpus: 12.4K tok/s, p50 61.4ms · Query: 2.1K tok/s, p50 43.2ms

FiQA2018 (finance · retrieval · en)

Financial opinion mining and question answering
Corpus: 57,599 · Queries: 648

Quality: NDCG@10 0.4062 · MAP@10 0.3301 · MRR@10 0.4849
Performance (A10G, b1 c16):
  Corpus: 0 tok/s, p50 2.0s · Query: 0 tok/s, p50 0.0ms
Performance (L4, b1 c16):
  Corpus: 29.2K tok/s, p50 78.9ms · Query: 4.4K tok/s, p50 40.6ms

LegalBenchConsumerContractsQA (legal · retrieval · en)

Question answering on consumer contracts
Corpus: 153 · Queries: 396

Quality: NDCG@10 0.7290 · MAP@10 0.6704 · MRR@10 0.6712
Performance (L4, b1 c16):
  Corpus: 59.1K tok/s, p50 127.0ms · Query: 6.2K tok/s, p50 41.7ms

NFCorpus (medical · retrieval · en)

Biomedical literature search from NutritionFacts.org
Corpus: 3,593 · Queries: 323

Quality: NDCG@10 0.3606 · MAP@10 0.1391 · MRR@10 0.5725
Performance (L4, b1 c16):
  Corpus: 37.7K tok/s, p50 114.2ms · Query: 1.7K tok/s, p50 43.9ms

SCIDOCS (scientific · retrieval · en)

Citation prediction, document classification, and recommendation for scientific papers
Corpus: 25,656 · Queries: 1,000

Quality: NDCG@10 0.1586 · MAP@10 0.0918 · MRR@10 0.2747
Performance (L4, b1 c16):
  Corpus: 34.2K tok/s, p50 86.0ms · Query: 4.2K tok/s, p50 41.2ms

SciFact (scientific · retrieval · en)

Scientific claim verification using research literature
Corpus: 5,183 · Queries: 300

Quality: NDCG@10 0.6262 · MAP@10 0.5830 · MRR@10 0.5966
Performance (L4, b1 c16):
  Corpus: 40.0K tok/s, p50 103.7ms · Query: 5.9K tok/s, p50 43.4ms

StackOverflowQA (technology · retrieval · en)

Programming question answering from Stack Overflow
Corpus: 19,931 · Queries: 1,994

Quality: NDCG@10 0.7470 · MAP@10 0.7160 · MRR@10 0.7160
Performance (L4, b1 c16):
  Corpus: 34.1K tok/s, p50 101.3ms · Query: 78.1K tok/s, p50 57.0ms
