
jinaai/jina-colbert-v2 (Encode)

Trained by Jina AI.

Architecture: XLM-RoBERTa
Parameters: 559M
Tasks: Encode
Outputs: Multi-Vec
Dimensions: Multi-Vec 128
Max Sequence Length: 8,192 tokens
License: CC BY-NC 4.0
Languages: multilingual (af, am, ar, as, az, be, bg, bn, br, bs, ca, cs, cy, da, de, el, en, eo, es, et, eu, fa, fi, fr, fy, ga, gd, gl, gu, ha, he, hi, hr, hu, hy, id, is, it, ja, jv, ka, kk, km, kn, ko, ku, ky, la, lo, lt, lv, mg, mk, ml, mn, mr, ms, my, ne, nl, no, om, or, pa, pl, ps, pt, ro, ru, sa, sd, si, sk, sl, so, sq, sr, su, sv, sw, ta, te, th, tl, tr, ug, uk, ur, uz, vi, xh, yi, zh)
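The Multi-Vec output means the model emits one 128-dimensional vector per token rather than a single pooled embedding, and query–document relevance is scored with ColBERT-style late interaction (MaxSim). A minimal sketch of that scoring step, with random arrays standing in for real model output (the `maxsim` helper and shapes are illustrative, not the Jina API):

```python
import numpy as np

def maxsim(query_vecs: np.ndarray, doc_vecs: np.ndarray) -> float:
    """ColBERT late-interaction score: for each query token vector, take its
    maximum cosine similarity over all document token vectors, then sum."""
    # L2-normalize rows so the dot product below is cosine similarity.
    q = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sim = q @ d.T                        # (num_query_tokens, num_doc_tokens)
    return float(sim.max(axis=1).sum())  # max over doc tokens, sum over query tokens

rng = np.random.default_rng(0)
query = rng.normal(size=(8, 128))      # 8 query tokens, 128 dims each
doc_a = rng.normal(size=(100, 128))    # unrelated document, 100 tokens
doc_b = np.vstack([query, rng.normal(size=(92, 128))])  # contains the query tokens

# A document containing the query's token vectors scores strictly higher.
assert maxsim(query, doc_b) > maxsim(query, doc_a)
```

Because each query token finds an exact match in `doc_b`, its score is the number of query tokens (8.0, since cosine similarity is capped at 1); the mismatched document scores far lower.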

Benchmarks

CQADupstackPhysicsRetrieval

scientific · retrieval · en

Duplicate question retrieval from StackExchange Physics

Corpus: 38,314 · Queries: 1,039
Quality: nDCG@10 0.4047 · MAP@10 0.3496 · MRR@10 0.4005
Performance (L4, b1, c16): corpus 24.9K tok/s, p50 81.3 ms · query 3.0K tok/s, p50 55.9 ms
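The Quality rows above report standard ranking metrics at cutoff 10. A minimal sketch of how nDCG@10, MAP@10, and MRR@10 are computed for a single query from binary relevance judgments (helper names are illustrative; MAP denominator conventions vary slightly across toolkits):

```python
import math

def ndcg_at_k(rels, k=10, total_relevant=None):
    """Normalized discounted cumulative gain at rank k (binary relevance)."""
    dcg = sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))
    n_rel = total_relevant if total_relevant is not None else sum(rels)
    ideal = sum(1 / math.log2(i + 2) for i in range(min(k, n_rel)))
    return dcg / ideal if ideal > 0 else 0.0

def map_at_k(rels, k=10, total_relevant=None):
    """Average precision at k for one query (one common convention)."""
    hits, precision_sum = 0, 0.0
    for i, r in enumerate(rels[:k]):
        if r:
            hits += 1
            precision_sum += hits / (i + 1)
    n_rel = total_relevant if total_relevant is not None else sum(rels)
    return precision_sum / min(n_rel, k) if n_rel else 0.0

def mrr_at_k(rels, k=10):
    """Reciprocal rank of the first relevant result within the top k."""
    for i, r in enumerate(rels[:k]):
        if r:
            return 1.0 / (i + 1)
    return 0.0

# Relevance of the top-10 results for one hypothetical query (1 = relevant).
rels = [1, 0, 1, 0, 0, 0, 0, 0, 0, 0]
```

The benchmark numbers are these per-query scores averaged over all queries in the dataset.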

CosQA

technology · retrieval · en

Code search with natural language queries

Corpus: 6,267 · Queries: 500
Quality: nDCG@10 0.2607 · MAP@10 0.2037 · MRR@10 0.1946
Performance (L4, b1, c16): corpus 13.9K tok/s, p50 63.3 ms · query 1.5K tok/s, p50 59.7 ms

FiQA2018

finance · retrieval · en

Financial opinion mining and question answering

Corpus: 57,599 · Queries: 648
Quality: nDCG@10 0.4051 · MAP@10 0.3240 · MRR@10 0.4875
Performance (L4, b1, c16): corpus 27.1K tok/s, p50 93.4 ms · query 3.0K tok/s, p50 59.5 ms

LegalBenchConsumerContractsQA

legal · retrieval · en

Question answering on consumer contracts

Corpus: 153 · Queries: 396
Quality: nDCG@10 0.7615 · MAP@10 0.7107 · MRR@10 0.7116
Performance (L4, b1, c16): corpus 30.7K tok/s, p50 259.5 ms · query 3.4K tok/s, p50 60.1 ms

NFCorpus

medical · retrieval · en

Biomedical literature search from NutritionFacts.org

Corpus: 3,593 · Queries: 323
Quality: nDCG@10 0.3583 · MAP@10 0.1422 · MRR@10 0.5724
Performance (L4, b1, c16): corpus 33.2K tok/s, p50 146.1 ms · query 1.5K tok/s, p50 55.3 ms

NanoFiQA2018Retrieval

finance · retrieval · en

Smaller subset of the FiQA financial QA dataset

Quality: nDCG@10 0.5208 · MAP@10 0.4318 · MRR@10 0.5644
Performance (L4, b1, c16): corpus 28.9K tok/s, p50 77.4 ms · query 2.6K tok/s, p50 49.5 ms

SCIDOCS

scientific · retrieval · en

Citation prediction, document classification, and recommendation for scientific papers

Corpus: 25,656 · Queries: 1,000
Quality: nDCG@10 0.1779 · MAP@10 0.1045 · MRR@10 0.3091
Performance (L4, b1, c16): corpus 28.5K tok/s, p50 105.7 ms · query 2.9K tok/s, p50 57.3 ms

SciFact

scientific · retrieval · en

Scientific claim verification using research literature

Corpus: 5,183 · Queries: 300
Quality: nDCG@10 0.6702 · MAP@10 0.6266 · MRR@10 0.6391
Performance (L4, b1, c16): corpus 30.9K tok/s, p50 137.3 ms · query 4.8K tok/s, p50 56.4 ms

StackOverflowQA

technology · retrieval · en

Programming question answering from Stack Overflow

Corpus: 19,931 · Queries: 1,994
Quality: nDCG@10 0.6085 · MAP@10 0.5717 · MRR@10 0.5717
Performance (L4, b1, c16): corpus 27.4K tok/s, p50 127.2 ms · query 58.2K tok/s, p50 80.3 ms
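The Performance rows pair encoding throughput (tokens per second) with median latency (p50, in milliseconds). A sketch of how such figures are derived from per-request timings, assuming sequential requests; the sample numbers are made up and are not the benchmark data above:

```python
import statistics

def throughput_and_p50(token_counts, latencies_s):
    """Aggregate per-request token counts and wall-clock latencies (seconds)
    into tokens/sec throughput and p50 (median) latency in milliseconds."""
    tok_per_s = sum(token_counts) / sum(latencies_s)
    p50_ms = statistics.median(latencies_s) * 1000
    return tok_per_s, p50_ms

# Hypothetical timings for five encode requests.
tokens = [512, 480, 530, 501, 512]
times_s = [0.081, 0.079, 0.085, 0.080, 0.082]
tps, p50 = throughput_and_p50(tokens, times_s)
```

Under concurrent load (the c16 setting above), throughput is instead total tokens divided by total wall-clock time across in-flight requests, which is why corpus and query throughput can differ sharply at the same p50.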
