---
language:
- ha
license: cc-by-4.0
task_categories:
- text-to-speech
- audio-classification
pretty_name: Hausa TTS Dataset
size_categories:
- n<1K
tags:
- audio
- tts
- hausa
- speech-synthesis
- multi-speaker
---

# Hausa TTS Dataset
## Dataset Description
This dataset contains Hausa language text-to-speech (TTS) recordings from multiple speakers. It includes audio files paired with their corresponding Hausa text transcriptions.
## Dataset Structure

The dataset is organized as follows:

```
├── data/
│   ├── metadata.csv              # Metadata (source, audio paths, text)
│   └── audio_files/
│       ├── 97f373e8-f6e6-.../    # Speaker 1 audio files
│       ├── b0db0a87-2206-.../    # Speaker 2 audio files
│       └── c3621689-ca53-.../    # Speaker 3 audio files
└── README.md
```
### Data Fields

The `metadata.csv` file contains the following columns:

- `file_name`: Relative path to the audio file in WAV format
- `source`: Speaker ID (UUID format) identifying the speaker
- `text`: Hausa text transcription corresponding to the audio
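As a quick sanity check, the metadata can also be inspected directly from a local clone of the repository. This is a minimal sketch, assuming `pandas` is installed and the layout shown above; whether `file_name` resolves relative to `data/` or to the repository root is an assumption to verify.

```python
import pandas as pd

# Inspect the metadata from a local clone of the repository
metadata = pd.read_csv("data/metadata.csv")

# Expected columns: file_name, source, text
print(metadata.columns.tolist())
print(metadata["source"].nunique(), "unique speakers")

# Assumption: file_name paths resolve relative to the data/ directory
first_clip = "data/" + metadata.loc[0, "file_name"]
print(first_clip)
```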
## Dataset Statistics

- Total Samples: ~100 recordings
- Number of Speakers: 3 unique speakers
- Audio Format: WAV files
- Sample Rate: 24,000 Hz (target)
- Language: Hausa (ha)
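These figures can be reproduced approximately by counting rows and unique speaker UUIDs after loading the dataset. A minimal sketch, assuming the dataset loads with the `source` column described above:

```python
from collections import Counter

from datasets import load_dataset

dataset = load_dataset("Aybee5/ha-tts-csv", split="train")

# Count recordings per speaker UUID
per_speaker = Counter(dataset["source"])
print(f"Total samples: {len(dataset)}")
print(f"Number of speakers: {len(per_speaker)}")
for speaker, count in per_speaker.most_common():
    print(f"  {speaker}: {count} recordings")
```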
## Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load from Hugging Face
dataset = load_dataset("Aybee5/ha-tts-csv", split="train")

# Access the data
print(dataset[0])
# Output: {'source': 'speaker_id', 'audio': {'path': '...', 'array': [...], 'sampling_rate': 24000}, 'text': '...'}
```
### Example with Audio Processing

```python
from datasets import load_dataset, Audio

# Load dataset
dataset = load_dataset("Aybee5/ha-tts-csv", split="train")

# Cast audio column to a specific sampling rate
dataset = dataset.cast_column("audio", Audio(sampling_rate=24000))

# Process audio
for example in dataset:
    audio_array = example["audio"]["array"]
    sampling_rate = example["audio"]["sampling_rate"]
    text = example["text"]
    speaker_id = example["source"]
    # Your processing here...
```
## Speaker Information

The dataset includes recordings from 3 unique speakers, each identified by a UUID in the `source` field. For training multi-speaker TTS models, you can map these UUIDs to numeric speaker IDs:

```python
unique_speakers = sorted(set(dataset["source"]))
speaker_to_id = {speaker: idx for idx, speaker in enumerate(unique_speakers)}
```
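If you want the numeric IDs attached to every example, the mapping can be applied with `Dataset.map`; the `speaker_id` column name below is only an illustrative choice, not part of the dataset.

```python
# Add a hypothetical speaker_id column derived from the UUID mapping above
dataset = dataset.map(lambda example: {"speaker_id": speaker_to_id[example["source"]]})

print(dataset[0]["source"], "->", dataset[0]["speaker_id"])
```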
## Use Cases
This dataset is suitable for:
- Text-to-Speech (TTS) model training
- Multi-speaker voice synthesis
- Hausa language speech research
- Voice cloning applications
- Speech corpus analysis
## Languages
- Hausa (ha)
## Licensing Information
This dataset is released under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.
## Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{hausa_tts_dataset,
  title={Hausa TTS Dataset},
  author={Your Name},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/datasets/Aybee5/ha-tts-csv}}
}
```
## Data Collection and Processing

The audio data was collected with the MimicStudio recording system and features native Hausa speakers reading text prompts. All audio files are stored in WAV format, with metadata tracked in an SQLite database that has been exported to CSV for easy dataset distribution.
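For reference, an export along those lines could look like the sketch below; the database path, table name, and column names are placeholders chosen for illustration and will not match the actual MimicStudio schema exactly.

```python
import csv
import sqlite3

# Hypothetical export script: the database file, table, and column names
# below are assumptions, not the real MimicStudio schema.
conn = sqlite3.connect("recording_studio.db")
rows = conn.execute(
    "SELECT audio_path, speaker_uuid, prompt_text FROM recordings"
).fetchall()
conn.close()

with open("data/metadata.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["file_name", "source", "text"])
    writer.writerows(rows)
```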
## Considerations for Using the Data
- Audio quality may vary between speakers
- Some audio files may have background noise
- Text transcriptions are in Hausa language using Latin script
- Speaker characteristics (gender, age, accent) are not explicitly labeled
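Because quality and noise levels are not annotated, it can help to screen clips yourself before training. A rough sketch, assuming `numpy` is available and using RMS level only as a crude proxy for recording level and noise floor:

```python
import numpy as np
from datasets import load_dataset, Audio

dataset = load_dataset("Aybee5/ha-tts-csv", split="train")
dataset = dataset.cast_column("audio", Audio(sampling_rate=24000))

# Per-clip duration and RMS level, grouped visually by speaker UUID prefix
for example in dataset:
    audio = np.asarray(example["audio"]["array"], dtype=np.float32)
    sr = example["audio"]["sampling_rate"]
    duration = len(audio) / sr
    rms = float(np.sqrt(np.mean(audio ** 2))) if len(audio) else 0.0
    print(f"{example['source'][:8]}...  {duration:5.2f}s  RMS={rms:.4f}")
```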
## Additional Information
For questions or issues with this dataset, please open an issue on the dataset repository or contact the dataset maintainer.