
mixedbread-ai/mxbai-edge-colbert-v0-32m

The crispy, lightweight ColBERT family from Mixedbread.

Architecture: ModernBERT
Parameters: 32M
Tasks: Encode
Outputs: Multi-Vec
Dimensions: 64 (Multi-Vec)
Max Sequence Length: 8,192 tokens
License: apache-2.0
Languages: en
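The Multi-Vec output means each text is encoded into one 64-dimensional vector per token, and relevance is scored with ColBERT-style late interaction (MaxSim). A minimal sketch of that scoring rule, with random arrays standing in for real embeddings — only the shapes (64 dims per token vector) and the MaxSim rule come from the card; the data and function name are illustrative:

```python
import numpy as np

def maxsim_score(query_vecs: np.ndarray, doc_vecs: np.ndarray) -> float:
    """ColBERT late interaction: for each query token vector, take its
    maximum cosine similarity over all document token vectors, then sum."""
    # Normalize rows so dot products are cosine similarities.
    q = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sim = q @ d.T                        # (num_query_tokens, num_doc_tokens)
    return float(sim.max(axis=1).sum())  # MaxSim per query token, summed

# Illustrative shapes: 8 query tokens, 100 doc tokens, 64 dims per the card.
rng = np.random.default_rng(0)
query = rng.normal(size=(8, 64))
doc = rng.normal(size=(100, 64))
print(maxsim_score(query, doc))
```

Because scores are sums over query tokens, a query with 8 tokens has a maximum possible score of 8.0 (every token finds a perfect match in the document).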

Benchmarks

CQADupstackPhysicsRetrieval

Tags: scientific · retrieval · en

Duplicate question retrieval from StackExchange Physics

Corpus: 38,314 documents · Queries: 1,039
Performance (L4 b1 c16)
Corpus: 28.3K tok/s · p50: 60.5 ms
Query: 3.0K tok/s · p50: 55.7 ms

CosQA

Tags: technology · retrieval · en

Code search with natural language queries

Corpus: 6,267 documents · Queries: 500
Performance (L4 b1 c16)
Corpus: 15.0K tok/s · p50: 51.8 ms
Query: 1.9K tok/s · p50: 48.5 ms

FiQA2018

Tags: finance · retrieval · en

Financial opinion mining and question answering

Corpus: 57,599 documents · Queries: 648
Performance (L4 b1 c16)
Corpus: 38.9K tok/s · p50: 57.5 ms
Query: 3.7K tok/s · p50: 48.0 ms

LegalBenchConsumerContractsQA

Tags: legal · retrieval · en

Question answering on consumer contracts

Corpus: 153 documents · Queries: 396
Performance (L4 b1 c16)
Corpus: 87.3K tok/s · p50: 76.5 ms
Query: 5.4K tok/s · p50: 48.3 ms

NFCorpus

Tags: medical · retrieval · en

Biomedical literature search from NutritionFacts.org

Corpus: 3,593 documents · Queries: 323
Quality
nDCG@10: 0.3376
MAP@10: 0.1285
MRR@10: 0.5432
Performance (L4 b1 c16)
Corpus: 67.7K tok/s · p50: 58.6 ms
Query: 1.4K tok/s · p50: 53.0 ms
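The quality figures above use the standard ranking metrics. As a reference for how they are computed, here is a minimal sketch of nDCG@10 and MRR@10 — the function names and the toy ranking are illustrative, not taken from the card or the benchmark harness:

```python
import numpy as np

def ndcg_at_k(relevances, k=10):
    """nDCG@k: DCG of the given ranking divided by DCG of the ideal ranking.
    `relevances` lists graded relevance in ranked order (rank 1 first)."""
    rel = np.asarray(relevances, dtype=float)[:k]
    discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))  # 1/log2(rank+1)
    dcg = float((rel * discounts).sum())
    ideal = np.sort(np.asarray(relevances, dtype=float))[::-1][:k]
    idcg = float((ideal * discounts[:ideal.size]).sum())
    return dcg / idcg if idcg > 0 else 0.0

def mrr_at_k(relevances, k=10):
    """MRR@k over binary relevance: reciprocal rank of the first hit."""
    for rank, rel in enumerate(relevances[:k], start=1):
        if rel > 0:
            return 1.0 / rank
    return 0.0

# Toy ranking with relevant documents at ranks 1 and 3.
ranking = [1, 0, 1, 0, 0]
print(ndcg_at_k(ranking), mrr_at_k(ranking))
```

The benchmark numbers are these per-query values averaged over all queries in the task.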

SCIDOCS

Tags: scientific · retrieval · en

Citation prediction, document classification, and recommendation for scientific papers

Corpus: 25,656 documents · Queries: 1,000
Performance (L4 b1 c16)
Corpus: 36.4K tok/s · p50: 70.2 ms
Query: 2.7K tok/s · p50: 61.8 ms

SciFact

Tags: scientific · retrieval · en

Scientific claim verification using research literature

Corpus: 5,183 documents · Queries: 300
Performance (L4 b1 c16)
Corpus: 58.8K tok/s · p50: 61.2 ms
Query: 5.4K tok/s · p50: 49.0 ms

StackOverflowQA

Tags: technology · retrieval · en

Programming question answering from Stack Overflow

Corpus: 19,931 documents · Queries: 1,994
Performance (L4 b1 c16)
Corpus: 52.8K tok/s · p50: 58.9 ms
Query: 68.3K tok/s · p50: 66.6 ms
