---
language:
- ha
license: cc-by-4.0
task_categories:
- text-to-speech
- audio-classification
pretty_name: Hausa TTS Dataset
size_categories:
- n<1K
tags:
- audio
- tts
- hausa
- speech-synthesis
- multi-speaker
---
# Hausa TTS Dataset
## Dataset Description
This dataset contains Hausa language text-to-speech (TTS) recordings from multiple speakers. It includes audio files paired with their corresponding Hausa text transcriptions.
### Dataset Structure
The dataset is organized as follows:
```
├── data/
│   ├── metadata.csv                # Metadata (source, audio paths, text)
│   └── audio_files/
│       ├── 97f373e8-f6e6-.../      # Speaker 1 audio files
│       ├── b0db0a87-2206-.../      # Speaker 2 audio files
│       └── c3621689-ca53-.../      # Speaker 3 audio files
└── README.md
```
### Data Fields
The `metadata.csv` contains the following columns:
- **file_name**: Relative path to the audio file in WAV format
- **source**: Speaker identifier in UUID format
- **text**: Hausa text transcription corresponding to the audio
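For illustration, a row of `metadata.csv` might look like the following (the file name, truncated speaker UUID, and example sentence are hypothetical placeholders, not actual entries from the dataset):
```
file_name,source,text
audio_files/97f373e8-f6e6-.../0001.wav,97f373e8-f6e6-...,Sannu da zuwa
```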
### Dataset Statistics
- **Total Samples**: ~100 recordings
- **Number of Speakers**: 3 unique speakers
- **Audio Format**: WAV files
- **Sample Rate**: 24,000 Hz (target)
- **Language**: Hausa (ha)
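These figures can be recomputed from the loaded dataset as a quick sanity check (a minimal sketch, assuming the `source` column loads as shown in the Usage section below):
```python
from datasets import load_dataset

# Load the training split and recount samples and speakers
dataset = load_dataset("Aybee5/ha-tts-csv", split="train")
print(f"Total samples: {len(dataset)}")                    # expected: ~100
print(f"Unique speakers: {len(set(dataset['source']))}")   # expected: 3
```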
### Usage
#### Loading the Dataset
```python
from datasets import load_dataset
# Load from Hugging Face
dataset = load_dataset("Aybee5/ha-tts-csv", split="train")
# Access the data
print(dataset[0])
# Output: {'source': 'speaker_id', 'audio': {'path': '...', 'array': [...], 'sampling_rate': 24000}, 'text': '...'}
```
#### Example with Audio Processing
```python
from datasets import load_dataset, Audio
# Load dataset
dataset = load_dataset("Aybee5/ha-tts-csv", split="train")
# Cast audio column to specific sampling rate
dataset = dataset.cast_column("audio", Audio(sampling_rate=24000))
# Process audio
for example in dataset:
    audio_array = example["audio"]["array"]
    sampling_rate = example["audio"]["sampling_rate"]
    text = example["text"]
    speaker_id = example["source"]
    # Your processing here...
```
### Speaker Information
The dataset includes recordings from 3 unique speakers, each identified by a UUID in the `source` field. For training multi-speaker TTS models, you can map these UUIDs to numeric speaker IDs:
```python
unique_speakers = sorted(set(dataset["source"]))
speaker_to_id = {speaker: idx for idx, speaker in enumerate(unique_speakers)}
```
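The mapping can then be attached to each example, for instance with `Dataset.map` (the `speaker_idx` column name is just an illustrative choice):
```python
# Add an integer speaker index next to the original UUID
dataset = dataset.map(lambda ex: {"speaker_idx": speaker_to_id[ex["source"]]})
print(dataset[0]["source"], "->", dataset[0]["speaker_idx"])
```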
### Use Cases
This dataset is suitable for:
- Text-to-Speech (TTS) model training
- Multi-speaker voice synthesis
- Hausa language speech research
- Voice cloning applications
- Speech corpus analysis
### Languages
- Hausa (ha)
### Licensing Information
This dataset is released under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.
### Citation
If you use this dataset in your research, please cite:
```
@dataset{hausa_tts_dataset,
  title={Hausa TTS Dataset},
  author={Your Name},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/datasets/Aybee5/ha-tts-csv}}
}
```
### Data Collection and Processing
The audio data was collected with the MimicStudio recording system, with native Hausa speakers reading text prompts. All audio files are stored in WAV format, and metadata was tracked in an SQLite database that has been exported to CSV for easy dataset distribution.
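The export script itself is not part of this repository; a minimal sketch of such an SQLite-to-CSV export is shown below, where the database file, table, and column names are assumptions for illustration only:
```python
import csv
import sqlite3

# Hypothetical database path and schema; the actual MimicStudio layout may differ
conn = sqlite3.connect("mimicstudio.db")
rows = conn.execute("SELECT file_name, source, text FROM recordings").fetchall()
conn.close()

with open("data/metadata.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["file_name", "source", "text"])  # header matches the dataset's columns
    writer.writerows(rows)
```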
### Considerations for Using the Data
- Audio quality may vary between speakers
- Some audio files may have background noise
- Text transcriptions are in Hausa language using Latin script
- Speaker characteristics (gender, age, accent) are not explicitly labeled
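Because clip quality and length vary, it can help to screen the audio before training. A minimal sketch using `Dataset.filter` is shown below; the one-second minimum duration is an arbitrary example threshold:
```python
from datasets import load_dataset, Audio

dataset = load_dataset("Aybee5/ha-tts-csv", split="train")
dataset = dataset.cast_column("audio", Audio(sampling_rate=24000))

def long_enough(example, min_seconds=1.0):
    # Keep only clips that are at least `min_seconds` long
    audio = example["audio"]
    return len(audio["array"]) / audio["sampling_rate"] >= min_seconds

filtered = dataset.filter(long_enough)
print(f"Kept {len(filtered)} of {len(dataset)} clips")
```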
### Additional Information
For questions or issues with this dataset, please open an issue on the dataset repository or contact the dataset maintainer.