---
license: apache-2.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: id
    dtype: string
  - name: contents
    dtype: string
  - name: title
    dtype: string
  - name: wikipedia_id
    dtype: string
  splits:
  - name: train
    num_bytes: 18038881943
    num_examples: 35678076
  download_size: 10150820540
  dataset_size: 18038881943
language:
- en
---

# KILT Corpus

This dataset contains approximately 36 million Wikipedia passages from the paper "[Multi-task retrieval for knowledge-intensive tasks](https://arxiv.org/pdf/2101.00117)". It is also the retrieval corpus used in the paper [Chain-of-Retrieval Augmented Generation](https://arxiv.org/pdf/2501.14342).

## Fields

* `id`: A unique identifier for each passage.
* `title`: The title of the Wikipedia page from which the passage originates.
* `contents`: The textual content of the passage.
* `wikipedia_id`: The unique identifier of the Wikipedia page, used for KILT evaluation.

## How to Load the Dataset

You can load this dataset with the Hugging Face `datasets` library. Make sure it is installed (`pip install datasets`).

```python
from datasets import load_dataset

ds = load_dataset('corag/kilt-corpus', split='train')

# Inspect the dataset structure and the first example
print(ds)
print(ds[0])
```

## References

```bibtex
@article{maillard2021multi,
  title={Multi-task retrieval for knowledge-intensive tasks},
  author={Maillard, Jean and Karpukhin, Vladimir and Petroni, Fabio and Yih, Wen-tau and O{\u{g}}uz, Barlas and Stoyanov, Veselin and Ghosh, Gargi},
  journal={arXiv preprint arXiv:2101.00117},
  year={2021}
}

@article{wang2025chain,
  title={Chain-of-Retrieval Augmented Generation},
  author={Wang, Liang and Chen, Haonan and Yang, Nan and Huang, Xiaolong and Dou, Zhicheng and Wei, Furu},
  journal={arXiv preprint arXiv:2501.14342},
  year={2025}
}
```