
jinaai/jina-colbert-v2 (Score)

Trained by Jina AI.

Architecture: XLM-RoBERTa
Parameters: 559M
Tasks: Encode
Outputs: Multi-Vec
Dimensions: 128 (Multi-Vec)
Max Sequence Length: 8,192 tokens
License: cc-by-nc-4.0
Languages: multilingual (af, am, ar, as, az, be, bg, bn, br, bs, ca, cs, cy, da, de, el, en, eo, es, et, eu, fa, fi, fr, fy, ga, gd, gl, gu, ha, he, hi, hr, hu, hy, id, is, it, ja, jv, ka, kk, km, kn, ko, ku, ky, la, lo, lt, lv, mg, mk, ml, mn, mr, ms, my, ne, nl, no, om, or, pa, pl, ps, pt, ro, ru, sa, sd, si, sk, sl, so, sq, sr, su, sv, sw, ta, te, th, tl, tr, ug, uk, ur, uz, vi, xh, yi, zh)

Benchmarks

CQADupstackPhysicsRetrieval

scientific · retrieval · en

Duplicate question retrieval from StackExchange Physics

Corpus: 38,314 · Queries: 1,039

Performance (L4-SPOT, b1 c16): Corpus 3.3K tok/s, p50 325.6 ms · Query 163 tok/s, p50 490.5 ms
Performance (L4, b1 c16): Corpus 22.9K tok/s, p50 83.9 ms · Query 2.9K tok/s, p50 59.2 ms

CosQA

technology · retrieval · en

Code search with natural language queries

Corpus: 6,267 · Queries: 500

Performance (L4-SPOT, b1 c16): Corpus 798 tok/s, p50 555.8 ms · Query 127 tok/s, p50 364.0 ms
Performance (L4, b1 c16): Corpus 13.4K tok/s, p50 65.7 ms · Query 1.6K tok/s, p50 60.4 ms

FiQA2018

finance · retrieval · en

Financial opinion mining and question answering

Corpus: 57,599 · Queries: 648

Performance (L4-SPOT, b1 c16): Corpus 1.7K tok/s, p50 656.4 ms · Query 218 tok/s, p50 412.6 ms
Performance (L4, b1 c16): Corpus 25.8K tok/s, p50 95.6 ms · Query 3.0K tok/s, p50 60.7 ms

LegalBenchConsumerContractsQA

legal · retrieval · en

Question answering on consumer contracts

Corpus: 153 · Queries: 396

Performance (L4-SPOT, b1 c16): Corpus 5.8K tok/s, p50 568.2 ms · Query 216 tok/s, p50 517.1 ms
Performance (L4, b1 c16): Corpus 27.5K tok/s, p50 274.0 ms · Query 3.1K tok/s, p50 74.2 ms

NFCorpus

medical · retrieval · en

Biomedical literature search from NutritionFacts.org

Corpus: 3,593 · Queries: 323

Performance (L4-SPOT, b1 c16): Corpus 3.0K tok/s, p50 622.8 ms · Query 77 tok/s, p50 440.1 ms
Performance (L4, b1 c16): Corpus 30.3K tok/s, p50 156.2 ms · Query 1.2K tok/s, p50 63.6 ms

SCIDOCS

scientific · retrieval · en

Citation prediction, document classification, and recommendation for scientific papers

Corpus: 25,656 · Queries: 1,000

Performance (L4-SPOT, b1 c16): Corpus 2.8K tok/s, p50 521.5 ms · Query 165 tok/s, p50 520.0 ms
Performance (L4, b1 c16): Corpus 26.7K tok/s, p50 108.8 ms · Query 2.7K tok/s, p50 62.0 ms

SciFact

scientific · retrieval · en

Scientific claim verification using research literature

Corpus: 5,183 · Queries: 300

Performance (L4-SPOT, b1 c16): Corpus 4.6K tok/s, p50 397.7 ms · Query 269 tok/s, p50 436.5 ms
Performance (L4, b1 c16): Corpus 29.3K tok/s, p50 145.2 ms · Query 3.9K tok/s, p50 65.2 ms

StackOverflowQA

technology · retrieval · en

Programming question answering from Stack Overflow

Corpus: 19,931 · Queries: 1,994

Performance (L4-SPOT, b1 c16): Corpus 3.4K tok/s, p50 554.5 ms · Query 3.9K tok/s, p50 505.2 ms
Performance (L4, b1 c16): Corpus 19.2K tok/s, p50 162.2 ms · Query 40.0K tok/s, p50 88.2 ms
