VictorMorand committed · Commit 4bb8858 · verified · 1 Parent(s): 5b4fa03

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md (+92, −0)
README.md ADDED

---
language: en
license: apache-2.0
library_name: transformers
base_model: FacebookAI/roberta-base
model_name: cross-encoder-RoBERTa-infoNCE
source: https://github.com/xpmir/cross-encoders
paper: http://arxiv.org/abs/2603.03010
tags:
- cross-encoder
- sequence-classification
- tensorboard
datasets:
- msmarco
pipeline_tag: text-classification
---

# cross-encoder-RoBERTa-infoNCE

[![Paper](https://img.shields.io/badge/Paper-Arxiv-red)](http://arxiv.org/abs/2603.03010)
[![All Models](https://img.shields.io/badge/🤗%20Hugging%20Face%20Models-blue)](https://huggingface.co/collections/xpmir/reproducing-cross-encoders)
[![GitHub](https://img.shields.io/badge/GitHub-Code-blue)](https://github.com/xpmir/cross-encoders)

This model is a cross-encoder based on `FacebookAI/roberta-base`. It was trained on MS MARCO with the `infoNCE` loss, as part of the reproducibility study on training cross-encoders "**[Reproducing and Comparing Distillation Techniques for Cross-Encoders](http://arxiv.org/abs/2603.03010)**"; see the paper for details.

### Contents
- [Model Description](#model-description)
- [Usage](#usage)
- [Evaluations](#evaluations)

## Model Description

This model is intended for **re-ranking** the top results returned by a first-stage retrieval system (such as BM25, a bi-encoder, or SPLADE).

- **Training Data:** MS MARCO Passage
- **Language:** English
- **Loss:** infoNCE (an illustrative sketch follows below)

Training can be reproduced with the associated repository.
The exact training configuration used for this model is also detailed in [config.yaml](./config.yaml).
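
The exact training code lives in the associated repository; as a rough illustration only, an `infoNCE` objective for a cross-encoder can be written as a cross-entropy over candidate scores, with the positive passage as the target. Everything below (the function name, how negatives are obtained) is a hypothetical sketch, not the repository's implementation:

```python
import torch
import torch.nn.functional as F

def info_nce_loss(model, tokenizer, query, positive, negatives):
    """Hypothetical InfoNCE sketch: score one positive and several negative
    passages against the same query, then take cross-entropy with the
    positive (index 0) as the target. Assumes a single-logit relevance head."""
    passages = [positive] + negatives
    features = tokenizer([query] * len(passages), passages,
                         padding=True, truncation=True, return_tensors="pt")
    logits = model(**features).logits.squeeze(-1)   # one score per (q, p) pair
    target = torch.zeros(1, dtype=torch.long)       # positive sits at index 0
    return F.cross_entropy(logits.unsqueeze(0), target)
```

In practice the negatives are typically in-batch passages or hard negatives mined from a first-stage retriever; see the paper and `config.yaml` for the exact setup.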

## Usage

Quick start:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# The tokenizer comes from the base model; the fine-tuned weights from this repository.
tokenizer = AutoTokenizer.from_pretrained("FacebookAI/roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("xpmir/cross-encoder-RoBERTa-infoNCE")

# A (query, passage) pair is encoded as a single sequence.
features = tokenizer("What is experimaestro ?", "Experimaestro is a powerful framework for ML experiments management...", padding=True, truncation=True, return_tensors="pt")

# The logit is the relevance score: higher means more relevant.
model.eval()
with torch.no_grad():
    scores = model(**features).logits
print(scores)
```
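
Re-ranking scores a list of candidate passages against one query and sorts them by score. A minimal sketch reusing the objects above (the candidate passages are made-up examples):

```python
query = "What is experimaestro ?"
candidates = [
    "Experimaestro is a powerful framework for ML experiments management...",
    "RoBERTa is a robustly optimized BERT pretraining approach.",
    "MS MARCO is a large-scale passage ranking dataset.",
]

# Batch-score all (query, candidate) pairs, then sort by descending relevance.
features = tokenizer([query] * len(candidates), candidates,
                     padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    scores = model(**features).logits.squeeze(-1)

for score, passage in sorted(zip(scores.tolist(), candidates), reverse=True):
    print(f"{score:.3f}  {passage}")
```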

## Evaluations

We provide evaluations of this cross-encoder re-ranking the top `1000` documents retrieved by `naver/splade-v3-distilbert`.

| Dataset            | RR@10     | nDCG@10   |
|:-------------------|:----------|:----------|
| msmarco_dev        | 38.91     | 45.72     |
| trec2019           | 95.35     | 73.74     |
| trec2020           | 93.21     | 72.00     |
| fever              | 77.82     | 78.27     |
| arguana            | 21.78     | 32.36     |
| climate_fever      | 26.02     | 19.42     |
| dbpedia            | 75.34     | 44.45     |
| fiqa               | 48.16     | 40.59     |
| hotpotqa           | 86.88     | 70.72     |
| nfcorpus           | 54.94     | 33.58     |
| nq                 | 54.68     | 59.47     |
| quora              | 75.73     | 78.56     |
| scidocs            | 27.99     | 15.66     |
| scifact            | 68.45     | 71.15     |
| touche             | 59.24     | 34.76     |
| trec_covid         | 91.07     | 71.85     |
| robust04           | 70.84     | 48.48     |
| lotte_writing      | 70.65     | 61.44     |
| lotte_recreation   | 61.61     | 56.37     |
| lotte_science      | 46.90     | 39.01     |
| lotte_technology   | 56.38     | 47.20     |
| lotte_lifestyle    | 74.03     | 64.38     |
| **Mean In Domain** | **75.82** | **63.82** |
| **BEIR 13**        | **59.08** | **50.06** |
| **LoTTE (OOD)**    | **63.40** | **52.81** |
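
To compute metrics like these on your own runs, one option is the [`ir_measures`](https://github.com/terrierteam/ir_measures) package; a brief sketch, assuming TREC-format qrels and run files (the file names are placeholders):

```python
import ir_measures
from ir_measures import RR, nDCG

# Placeholder paths: standard TREC qrels and run formats are assumed.
qrels = ir_measures.read_trec_qrels("msmarco_dev.qrels")
run = ir_measures.read_trec_run("reranked.run")
print(ir_measures.calc_aggregate([RR @ 10, nDCG @ 10], qrels, run))
```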