
Afrivoice Ethiopia

Dataset Summary

| Language | Category | Total hours | Transcribed hours | Number of clips | Size (GB) |
| --- | --- | --- | --- | --- | --- |
| Amharic | Unscripted | 351.17 | 81.67 | 71119 | 20.7 |
| Amharic | Scripted | 113.55 | 113.55 | 22412 | 7.9 |
| Amharic | Expert | 153.44 | 24.28 | 31144 | 8.1 |
| Afaan Oromo | Unscripted | 338.36 | 80.44 | 65555 | 21.8 |
| Afaan Oromo | Scripted | 111.93 | 111.93 | 23051 | 9.7 |
| Afaan Oromo | Expert | 153.7 | 29.88 | 29638 | 8.2 |
| Sidama | Unscripted | 348.67 | 74.94 | 72801 | 23.5 |
| Sidama | Scripted | 114.93 | 114.93 | 22717 | 10.1 |
| Sidama | Expert | 139.55 | 35.55 | 27920 | 7.5 |
| Wolaytta | Unscripted | 339.95 | 104.29 | 68671 | 19.4 |
| Wolaytta | Scripted | 108.83 | 108.83 | 25200 | 7.8 |
| Wolaytta | Expert | 154.36 | 5.83 | 33303 | 8 |
| Tigrinya | Unscripted | 348.86 | 97.73 | 69954 | 23.7 |
| Tigrinya | Scripted | 107.25 | 107.25 | 26624 | 10.8 |
| Tigrinya | Expert | 144.98 | 13.6 | 29830 | 7.6 |

Supported tasks

How to use

The datasets library lets you load and pre-process the dataset in pure Python, at scale. The dataset can be downloaded and prepared on your local drive in a single call to the load_dataset function. Note that the repository is gated: you must accept the access conditions on the Hugging Face Hub and authenticate before the files can be downloaded.
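A minimal sketch of authenticating from Python with huggingface_hub, assuming you have already accepted the access conditions on the Hub and created a User Access Token:

from huggingface_hub import login

# Prompts for a Hugging Face User Access Token; needed because the repository is gated
login()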

For the Full Dataset

Run the following Python script:

from datasets import load_dataset
data = load_dataset("DigitalUmuganda/Afrivoice_Ethiopia")

For a Single Language Dataset

from datasets import load_dataset
data = load_dataset("DigitalUmuganda/Afrivoice_Ethiopia", name="am")

Dataset Structure

Data Instance

{ "voice_creator_id":"hqiqompEWaQa9CICqOwTYjyj3fc2", "transcription_creator_id":"1jMeIatBUBXgmHI1FDjtwIkW1E03", "image_filepath":"Hair-(ጸጉር)_WOLAYTTA_037.jpg", "image_category":"scripted", "image_sub_category":"Free", "category":"scripted", "audio_filepath":"00DioM00qNeWGYsLF7KG.wav", "transcription":null, "script":"Macca naati bantta huuphiyaa dumma dumma hanotan dadisoosona. Hegaappe guyyiyan bantta dadissido huuphiyaappe guyye baggaara qassi aybakkonne huuphiyaa bolla aattoosona.", "age_group":"25-35", "gender":"Female", "project_name":"Digital Umuganda", "locale":"wal_ET", "year":2025, "duration":15.96, "location":null, "uid":"hqiqompEWaQa9CICqOwTYjyj3fc2", "key":"00DioM00qNeWGYsLF7KG", "dir_path":"wal/train/scripted", "chunk_id":0, "workflow":"scripted", "prosodic varieties":"Free", "expert categories":null }

Data Fields

  • voice_creator_id (string): ID of the client (voice) that made the recording
  • transcription (string): original audio transcription, with punctuation and capitalization; only present for the unscripted workflow
  • script (string): the original script that the contributor read aloud; only applies to the scripted workflow
  • image_filepath (string): file path of the image file inside the dataset
  • audio_filepath (string): file path of the audio file inside the dataset
  • age_group (string): age range of the speaker
  • gender (string): gender of the speaker
  • location (string): geographical location of the speaker
  • duration (float): length of the audio file in seconds
  • image_category (string): domain of the image (e.g., health, agriculture, finance), used as a prompt during audio creation
  • image_sub_category (string): sub-domain label of the image (e.g., within agriculture: “seed farming” or “forestry”), used to guide audio creation
  • locale (string): locale of the speaker
  • year (int): year of recording
  • project_name (string): name of the project
  • workflow (string): workflow of the data collection; either scripted or unscripted (see the filtering sketch after this list)
  • unscripted datatype (string): the type of unscripted recording: image-prompted, spontaneous, or expert
  • prosodic varieties (string): the prosodic variety used when recording the audio (questions, emphatic, expressive, or free); applies to both scripted and unscripted workflows
  • expert categories (string): the category used in the unscripted expert domain
  • key (string): key identifier of the data point
  • dir_path (string): path to the parent directory holding the manifest file and the audio and image directories of the data point
  • chunk_id (int): chunk ID used to locate the manifest file a data point belongs to, together with the matching audio/image directories
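These fields work directly with the standard filtering API of the datasets library. A minimal sketch, assuming the "am" configuration and "train" split loaded above, that keeps only unscripted clips from female speakers:

# Keep only unscripted clips recorded by female speakers
unscripted_female = data["train"].filter(
    lambda ex: ex["workflow"] == "unscripted" and ex["gender"] == "Female"
)
print(len(unscripted_female))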

Data splits

Each domain/category in the dataset is divided into train, validation, and test splits. The splits are speaker-disjoint: a contributor who appears in the train split does not appear in the validation or test split, and the validation and test splits likewise share no contributors.
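A quick sketch for checking this speaker-disjointness via the voice_creator_id field, assuming the splits are exposed as "train", "validation", and "test":

# The speaker sets of the train and test splits should not overlap
train_speakers = set(data["train"]["voice_creator_id"])
test_speakers = set(data["test"]["voice_creator_id"])
assert train_speakers.isdisjoint(test_speakers)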

Licensing Information

All datasets are licensed under the Creative Commons Attribution 4.0 International license (CC BY 4.0).
