Transcription and translation of videos using fine-tuned XLSR Wav2Vec2 on custom dataset and mBART
Paper: arXiv:2403.00212
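As a rough illustration of the pipeline named in the title (speech recognition with a fine-tuned XLSR Wav2Vec2 model, followed by translation with mBART-50), the sketch below transcribes a single clip and translates the transcript. The checkpoint names, the Hindi source language, and the file `clip.wav` are assumptions for illustration, not the exact models or data used here.

```python
# Hedged sketch of the transcription + translation pipeline.
# ASR_CHECKPOINT is a hypothetical placeholder for the fine-tuned XLSR Wav2Vec2 model.
import torch
import librosa
from transformers import (
    Wav2Vec2ForCTC,
    Wav2Vec2Processor,
    MBartForConditionalGeneration,
    MBart50TokenizerFast,
)

ASR_CHECKPOINT = "your-username/wav2vec2-xlsr-finetuned"    # hypothetical fine-tuned model
MT_CHECKPOINT = "facebook/mbart-large-50-many-to-many-mmt"  # public mBART-50 checkpoint

processor = Wav2Vec2Processor.from_pretrained(ASR_CHECKPOINT)
asr_model = Wav2Vec2ForCTC.from_pretrained(ASR_CHECKPOINT)
tokenizer = MBart50TokenizerFast.from_pretrained(MT_CHECKPOINT, src_lang="hi_IN")
mt_model = MBartForConditionalGeneration.from_pretrained(MT_CHECKPOINT)

# 1) Transcription: XLSR Wav2Vec2 expects 16 kHz mono audio.
speech, _ = librosa.load("clip.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = asr_model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
transcript = processor.batch_decode(pred_ids)[0]

# 2) Translation: mBART-50 translates the transcript (here assumed Hindi -> English).
batch = tokenizer(transcript, return_tensors="pt")
generated = mt_model.generate(
    **batch, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"]
)
translation = tokenizer.batch_decode(generated, skip_special_tokens=True)[0]
print(transcript)
print(translation)
```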
A custom Common Voice 16.0 corpus with a custom voice, created using RVC (Retrieval-Based Voice Conversion). The model was trained for 200 epochs on a total of 14 minutes of audio clips scraped from YouTube; the voice in the generated dataset is that of a YouTuber named Ajay Pandey.
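Assuming the data files are reachable on the Hub, a sketch along these lines should load the dataset and resample the audio for Wav2Vec2. The split name and the `audio`/`sentence` column names follow the Common Voice convention and are assumptions here.

```python
# Minimal sketch: load this dataset from the Hub and resample to 16 kHz for Wav2Vec2.
# The "train" split and the "audio"/"sentence" columns are assumed (Common Voice style).
from datasets import load_dataset, Audio

ds = load_dataset(
    "Aniket-Tathe-08/Custom_Common_Voice_16.0_dataset_using_RVC_14min_data",
    split="train",
)
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

sample = ds[0]
print(sample["audio"]["array"].shape)
print(sample["sentence"])
```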
License: Public Domain (CC0)
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}