This repository contains the dMelodies dataset, designed for exploring disentanglement in the context of symbolic music. Please cite the following paper if you use the code or data in this repository in any manner.
Ashis Pati, Siddharth Gururani, Alexander Lerch. "dMelodies: A Music Dataset for Disentanglement Learning", 21st International Society for Music Information Retrieval Conference (ISMIR), Montréal, Canada, 2020.
```
@inproceedings{pati2020dmelodies,
    title={dMelodies: A Music Dataset for Disentanglement Learning},
    author={Pati, Ashis and Gururani, Siddharth and Lerch, Alexander},
    booktitle={21st International Society for Music Information Retrieval Conference (ISMIR)},
    year={2020},
    address={Montréal, Canada}
}
```
Over the last few years, representation learning research has paid significant attention to disentangling the underlying factors of variation in data. However, most existing studies rely on datasets from the image/computer vision domain (such as the dSprites dataset). The purpose of this work is to create a standardized dataset for conducting disentanglement studies on symbolic music data. The key motivation is that such a dataset would help researchers working on disentanglement problems demonstrate their algorithms across diverse domains.
dMelodies is a dataset of simple 2-bar melodies generated using 9 independent latent factors of variation, where each data point represents a unique melody based on the following constraints:
- Each melody corresponds to a unique scale (major, minor, blues, etc.).
- Each melody plays the arpeggios using the standard I-IV-V-I cadence chord pattern.
- Bar 1 plays the first 2 chords (6 notes), Bar 2 plays the second 2 chords (6 notes).
- Each played note is an 8th note.
A typical example is shown below.
The following factors of variation are considered:
- Tonic (Root): 12 options from C to B
- Octave: 3 options from C4 through C6
- Mode/Scale: 3 options (Major, Minor, Blues)
- Rhythm Bar 1: 28 options based on where the 6 note onsets are located among the 8 eighth-note positions of the first bar (8 choose 6 = 28).
- Rhythm Bar 2: 28 options based on where the 6 note onsets are located among the 8 eighth-note positions of the second bar.
- Arpeggiation Direction Chord 1: 2 options (up/down) based on how the arpeggio is played
- Arpeggiation Direction Chord 2: 2 options (up/down)
- Arpeggiation Direction Chord 3: 2 options (up/down)
- Arpeggiation Direction Chord 4: 2 options (up/down)
Consequently, the total number of data points is 12 × 3 × 3 × 28 × 28 × 2 × 2 × 2 × 2 = 1,354,752.
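As a quick sanity check, the factor counts listed above multiply out to exactly this total. The short snippet below is purely illustrative (it is not part of the repository) and verifies the arithmetic:

```python
from math import comb, prod

# Number of options per latent factor, as listed above.
factor_options = {
    'tonic': 12,
    'octave': 3,
    'mode': 3,
    'rhythm_bar1': comb(8, 6),  # 6 onsets placed on an 8-step eighth-note grid = 28
    'rhythm_bar2': comb(8, 6),
    'arp_chord1': 2,
    'arp_chord2': 2,
    'arp_chord3': 2,
    'arp_chord4': 2,
}

assert prod(factor_options.values()) == 1_354_752
```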
The data is provided as a numpy `.npz` archive with six fields:
- `score_array`: (1354752 × 16, int32) tokenized score representation
- `latent_array`: (1354752 × 9, int32) integer indices of the latent factor values
- `note2index_dict`: dictionary mapping musical note names/symbols to token indices
- `index2note_dict`: dictionary mapping token indices to musical note names/symbols
- `latent_dicts`: dictionary mapping the different latent factor values to the corresponding integer indices
- `metadata`: additional information (title, authors, date of creation, etc.)
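A minimal sketch of loading the archive directly with numpy is shown below. The file path is an assumption (adjust it to wherever the `.npz` lives in your checkout), and the dictionaries are assumed to round-trip as 0-d object arrays, which is numpy's usual behavior when dicts are saved into an `.npz`:

```python
import numpy as np

# Path is an assumption; point it at the .npz in your local copy of the repo.
data = np.load('data/dMelodies_dataset.npz', allow_pickle=True)

scores = data['score_array']    # (1354752, 16) int32 token indices
latents = data['latent_array']  # (1354752, 9) int32 latent-factor indices

# Dicts saved into an .npz typically come back as 0-d object arrays;
# .item() recovers the underlying Python dict.
index2note = data['index2note_dict'].item()
latent_dicts = data['latent_dicts'].item()

# First melody rendered as note symbols.
print([index2note[token] for token in scores[0]])
```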
1. Install anaconda or miniconda by following the instructions here.
2. Create a new conda environment using the `environment.yml` file located in the root folder of this repository. The instructions for the same can be found here.
3. Activate the `dmelodies` environment using the following command:
   ```
   conda activate dmelodies
   ```
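If you prefer a single command for step 2, an environment can typically be created from the file with the standard conda CLI (this is generic conda usage, not specific to this repository):

```
conda env create -f environment.yml
```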
The `DMelodiesDataset` class (see `dmelodies_dataset.py`) is a wrapper around the dataset and provides methods to read and load it. If you are using PyTorch, you can use the `DMelodiesTorchDataset` class (see `dmelodies_torch_dataloader.py`), which implements a torch DataLoader. Examples for using both can be found in the `dmelodies_loading.ipynb` notebook.
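A minimal usage sketch follows. The `make_or_load_dataset` entry point and the attribute names are assumptions based on the `.npz` fields described above, so refer to `dmelodies_loading.ipynb` for the actual API:

```python
import torch
from dmelodies_dataset import DMelodiesDataset

# Assumed entry point: reads the .npz if present, otherwise regenerates it.
dataset = DMelodiesDataset()
dataset.make_or_load_dataset()

# Attribute names assumed to mirror the .npz fields described above.
scores = torch.from_numpy(dataset.score_array)    # (1354752, 16)
latents = torch.from_numpy(dataset.latent_array)  # (1354752, 9)

# Wrap the tensors in a standard PyTorch DataLoader for mini-batch training.
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(scores, latents),
    batch_size=512,
    shuffle=True,
)
batch_scores, batch_latents = next(iter(loader))
```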
Explore the benchmarking experiments on dMelodies using different unsupervised learning methods.
In case you want to create your own version of the dataset (as a `.npz` file), delete the contents of the `data` folder and then run `script_create_dataset.py` from the root folder of this repository. The additional arguments `--save-midi` and `--save-xml` can be used to save the individual melodies as `.mid` or `.musicxml` files. The files will be saved in a `raw_data` folder.
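For example (the script name and flags are from this repository; the `python` invocation assumes the `dmelodies` environment is active):

```
python script_create_dataset.py --save-midi --save-xml
```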
Note: Saving individual melodies will require approximately 16.5 GB of space (5.5 GB for the `.mid` format and 11 GB for the `.musicxml` format).