🏆 **News:** Our [OWSM v4 paper](https://www.isca-archive.org/interspeech_2025/peng25c_interspeech.html) won the [Best Student Paper Award](https://isca-speech.org/ISCA-Awards) at INTERSPEECH 2025!

[Open Whisper-style Speech Model (OWSM)](https://www.wavlab.org/activities/2024/owsm/) is the first **fully open** Whisper-style speech foundation model.
It reproduces and advances OpenAI's Whisper-style training using publicly available data and open-source toolkits.
The code, pre-trained model weights, and training logs are publicly released to promote open science in speech foundation models.

This repo contains the newly curated training data for [OWSM v4](https://www.isca-archive.org/interspeech_2025/peng25c_interspeech.html), the latest version in the OWSM series.
This dataset is a high-quality subset of [YODAS2](https://huggingface.co/datasets/espnet/yodas2), comprising 166,000 hours of multilingual speech spanning 75 languages.
Utterances are segmented into clips of up to 30 seconds, consistent with OWSM training.
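The 30-second cap can be illustrated with a simple greedy packing of timestamped utterances into segments (a sketch only — not necessarily the exact OWSM preprocessing; the function name and data layout are hypothetical):

```python
# Greedy packing of timestamped utterances into segments of at most 30 s.
# Illustrative sketch -- the actual OWSM preprocessing may differ.

MAX_LEN = 30.0  # seconds

def pack_utterances(utts):
    """utts: list of (start, end, text) tuples, sorted by start time.
    Returns lists of consecutive utterances whose total span <= MAX_LEN."""
    segments, current = [], []
    for start, end, text in utts:
        if end - start > MAX_LEN:
            continue  # skip utterances that alone exceed the cap
        if current and end - current[0][0] > MAX_LEN:
            segments.append(current)  # adding this one would exceed 30 s
            current = []
        current.append((start, end, text))
    if current:
        segments.append(current)
    return segments

utts = [(0.0, 12.0, "a"), (12.5, 28.0, "b"), (28.5, 41.0, "c"), (41.5, 50.0, "d")]
segs = pack_utterances(utts)
# "a" and "b" fit in one segment (span 28.0 s); "c" would push it past 30 s,
# so "c" starts a new segment, which "d" then joins (span 21.5 s).
```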
## YODAS Data Cleaning

Due to the nature of web-sourced data, the original YODAS2 dataset contains inaccurate language labels and misaligned audio-text pairs.
Our preliminary experiments suggest that such noise hurts the performance of downstream ASR models.
To address this, we developed a scalable data-cleaning pipeline using publicly available toolkits, resulting in a curated subset of the original dataset.
This cleaned dataset forms a core part of the training data for our OWSM v4 models, which, when combined with existing OWSM data, significantly outperform previous versions on multilingual benchmarks.

- **Data Cleaning Scripts:** [ESPnet](https://github.com/espnet/espnet/tree/master/egs2/owsm_v4/s2t1)
- **Model Demo:** [Gradio](https://huggingface.co/spaces/espnet/OWSM_V4_Demo)

The data cleaning process consists of three stages.

### Stage 1: Resegmentation (Section 2.1.1 in Paper)
YODAS provides unsegmented long-form recordings, each of which is accompanied by a list of text transcriptions annotated with start and end timestamps.
However, some timestamps are inaccurate. Consequently, our first step is to realign the audio and text using the CTC segmentation algorithm.
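CTC segmentation also assigns each aligned utterance a confidence score, which the later filtering stage relies on. The toy sketch below conveys the intuition (score an utterance by its worst-aligned character); it is not the exact formula used by the CTC segmentation toolkit, and the data layout is hypothetical:

```python
# Toy illustration of alignment confidence from CTC posteriors: an utterance
# is scored by the character least supported within its aligned frame span.
# This mirrors the spirit of the CTC segmentation confidence, not the
# toolkit's exact formula.
import math

def alignment_confidence(char_spans, posteriors):
    """char_spans: list of (char_index, frame_start, frame_end) per character.
    posteriors: posteriors[frame][char_index] = P(char | frame).
    Returns the minimum over characters of the best per-frame log-probability."""
    scores = []
    for ci, fs, fe in char_spans:
        best = max(posteriors[f][ci] for f in range(fs, fe))
        scores.append(math.log(max(best, 1e-10)))
    return min(scores)

# Well-aligned utterance: each character has a frame that strongly supports it.
good = alignment_confidence(
    [(0, 0, 2), (1, 2, 4)],
    [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]],
)
# Misaligned utterance: character 1's frame span never supports it.
bad = alignment_confidence(
    [(0, 0, 2), (1, 2, 4)],
    [[0.9, 0.1], [0.8, 0.2], [0.9, 0.1], [0.8, 0.2]],
)
# bad < good: low confidence flags misalignment.
```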
### Stage 2: LID-based filtering (Section 2.1.2 in Paper)
Some utterances have incorrect language labels. We perform language identification on both audio and text using public models.
Then, we remove utterances where the language label does not match the identified language from either audio or text.
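The agreement check can be sketched as follows. The LID predictions would come from public models applied to the audio and the text; here they are treated as precomputed inputs, and the field names are illustrative:

```python
# Keep an utterance only when both the audio-based and the text-based
# language identification agree with the dataset's language label.
# A sketch: field names are illustrative, and in practice the 'audio_lid'
# and 'text_lid' values come from public LID models.

def lid_filter(utts):
    """utts: list of dicts with 'label', 'audio_lid', 'text_lid' codes."""
    return [u for u in utts
            if u["audio_lid"] == u["label"] and u["text_lid"] == u["label"]]

utts = [
    {"id": "a", "label": "en", "audio_lid": "en", "text_lid": "en"},  # kept
    {"id": "b", "label": "en", "audio_lid": "de", "text_lid": "en"},  # dropped
    {"id": "c", "label": "fr", "audio_lid": "fr", "text_lid": "en"},  # dropped
]
kept = lid_filter(utts)
```

Requiring agreement from *both* modalities is the conservative choice: a mismatch on either side is enough to discard the utterance.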
### Stage 3: CTC-score-based filtering (Section 2.1.3 in Paper)
The CTC segmentation algorithm in Stage 1 assigns a confidence score to each utterance, which measures the speech-text alignment quality.
We filter out utterances with low CTC scores.
The CTC confidence score is language-dependent; therefore, we rank the scores of short utterances within each language and select a relative threshold (quantile).
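A per-language relative threshold can be sketched like this (illustrative only; the exact quantile convention and grouping in the released pipeline may differ):

```python
# Per-language relative threshold: within each language, rank the CTC
# confidence scores and drop the lowest q fraction. q=0.10 matches the
# threshold of the released `yodas0.10` subset; the code is a sketch.
from collections import defaultdict

def quantile_filter(utts, q=0.10):
    """utts: list of (utt_id, lang, ctc_score). Drops the bottom q per language."""
    by_lang = defaultdict(list)
    for utt in utts:
        by_lang[utt[1]].append(utt)
    kept = []
    for lang, group in by_lang.items():
        scores = sorted(u[2] for u in group)
        cutoff = scores[int(q * len(scores))]  # score at the q-quantile rank
        kept.extend(u for u in group if u[2] >= cutoff)
    return kept

# 10 English utterances with scores 0..9: q=0.10 drops only the lowest one.
utts = [(f"en{i}", "en", float(i)) for i in range(10)]
kept = quantile_filter(utts)
```

Because the threshold is a rank within each language, no language is penalized for having systematically lower absolute CTC scores.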
In this repo, we release two subsets:
- `dump/raw/yodas0.00` (low quality, not recommended): The filtering threshold is 0.00, i.e., no filtering based on CTC score. This subset contains all data at the end of Stage 2.
  - ⚠️ **This subset contains low-quality data that hurts ASR performance, as shown in Table 2 of our paper. We do NOT recommend using it unless further filtering is performed.**
- `dump/raw/yodas0.10` (good quality, recommended): The filtering threshold is 0.10. This is the actual training data used to develop OWSM v4.
## Usage
This dataset follows the ESPnet OWSM data format as described in the [`s2t1` recipe](https://github.com/espnet/espnet/tree/master/egs2/TEMPLATE/s2t1).
Two subsets are released, as described in the YODAS Data Cleaning section above.
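The `s2t1` template organizes data into Kaldi-style files; a minimal reader for the `text` file (one `<utt-id> <transcription>` pair per line) is sketched below. The file layout follows the ESPnet template as I understand it — verify against the recipe before relying on it:

```python
# Minimal reader for a Kaldi-style `text` file as used by ESPnet recipes:
# each line is "<utt-id> <transcription>". A sketch; check the s2t1 recipe
# for the full set of data files (wav.scp, segments, etc.).
import io

def read_text(fileobj):
    """Parse a Kaldi-style `text` file into {utt_id: transcription}."""
    table = {}
    for line in fileobj:
        line = line.rstrip("\n")
        if not line:
            continue
        utt_id, _, transcription = line.partition(" ")
        table[utt_id] = transcription
    return table

sample = io.StringIO("utt1 hello world\nutt2 bonjour\n")
texts = read_text(sample)
```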
## OWSM v4 Results
[OWSM v4](https://www.isca-archive.org/interspeech_2025/peng25c_interspeech.html) is trained on a combination of this cleaned YODAS data and existing OWSM data.
Please refer to our paper for comprehensive evaluations. Below are some notable results.