Data for "Quantifying Memorization Across Neural Language Models".
This repository provides the prefixes and model continuations that we used in our analysis of memorization in large language models.
> [!TIP]
> The data can be downloaded from here.
Because obtaining the prefixes would otherwise require downloading the entire 800 GB Pile dataset, this repository contains the extracted data (570 MB), in the format described here.
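The exact file layout is documented at the link above, so the snippet below is only a minimal sketch: it assumes hypothetical filenames (`prefixes.npy`, `continuations.npy`, `generations.npy`) holding token-ID arrays, and shows the exact-match test the paper uses to count a prefix as memorized (the model's greedy continuation equals the true continuation from The Pile).

```python
# Minimal sketch of the paper's exact-match memorization check.
# The filenames and array shapes below are assumptions for illustration;
# consult the linked format description for the actual layout.
import numpy as np

# Hypothetical files of token IDs, shape (num_examples, seq_len).
prefixes = np.load("prefixes.npy")           # prompts taken from The Pile
ground_truth = np.load("continuations.npy")  # true continuations from The Pile
generations = np.load("generations.npy")     # greedy model continuations

# A prefix counts as memorized when the model's greedy continuation
# matches the ground-truth continuation token for token.
memorized = np.all(generations == ground_truth, axis=1)
print(f"memorized: {memorized.mean():.2%} of {len(memorized)} prefixes")
```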
If you use this data, please cite the paper:

```bibtex
@article{lm-memorization,
  title={Quantifying Memorization Across Neural Language Models},
  author={Carlini, Nicholas and Ippolito, Daphne and Jagielski, Matthew and Lee, Katherine and Tram\`er, Florian and Zhang, Chiyuan},
  journal={arXiv:2202.07646},
  url={https://arxiv.org/abs/2202.07646},
  year={2022}
}
```