# Geographical Biases in Large Language Models (LLMs)

This tutorial aims to identify geographical biases propagated by LLMs. For this purpose, four indicators are proposed:

1. Spatial disparities in geographical knowledge. Open In Colab
2. Spatial information coverage in training datasets. Open In Colab
3. Correlation between geographic distance and semantic distance (see the sketch after this list). Open In Colab
4. Anomaly between geographic distance and semantic distance. Open In Colab
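
As a rough illustration of indicator 3, the sketch below embeds a handful of city names with BERT, measures semantic distance as one minus the cosine similarity of the embeddings, and correlates it with great-circle distance. This is a minimal sketch, not the notebooks' actual pipeline: the model choice (`bert-base-uncased`), the mean-pooling strategy, the four sample cities, and the use of Spearman correlation are illustrative assumptions.

```python
# Minimal sketch: correlate geographic distance with BERT semantic distance.
# Assumptions: bert-base-uncased, mean pooling, a hand-picked city sample.
import torch
from transformers import AutoTokenizer, AutoModel
from scipy.stats import spearmanr
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points, in kilometres.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(name):
    # Mean-pool the last hidden states of the city name's tokens.
    inputs = tokenizer(name, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs).last_hidden_state
    return out.mean(dim=1).squeeze(0)

# Hypothetical sample: (city, latitude, longitude).
cities = [("Paris", 48.86, 2.35), ("Berlin", 52.52, 13.41),
          ("Tokyo", 35.68, 139.69), ("Nairobi", -1.29, 36.82)]

geo, sem = [], []
for i in range(len(cities)):
    for j in range(i + 1, len(cities)):
        (a, la1, lo1), (b, la2, lo2) = cities[i], cities[j]
        geo.append(haversine_km(la1, lo1, la2, lo2))
        # Semantic distance = 1 - cosine similarity of the two embeddings.
        sem.append(1 - torch.nn.functional.cosine_similarity(
            embed(a), embed(b), dim=0).item())

rho, p = spearmanr(geo, sem)
print(f"Spearman rho = {rho:.3f} (p = {p:.3f})")
```

A positive correlation would suggest the model's embedding space partially encodes geography; the accompanying notebook applies the same idea at a much larger scale.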

Fig. 1: Average semantic distances (using BERT) between a country's three most populous cities and other cities worldwide.

Fig. 2: Percentage of correct country predictions for cities with more than 100K inhabitants, spatially aggregated into 5° by 5° pixels.


## Authors

- Rémy Decoupes
- Maguelonne Teisseire
- Mathieu Roche

## Acknowledgement

This study was partially funded by EU grant 874850 MOOD and is catalogued as MOOD099. The contents of this publication are the sole responsibility of the authors and do not necessarily reflect the views of the European Commission.



## Citing this work

If you find this work helpful or refer to it in your research, please consider citing:

- Evaluation of Geographical Distortions in Language Models: A Crucial Step Towards Equitable Representations, Rémy Decoupes, Roberto Interdonato, Mathieu Roche, Maguelonne Teisseire, Sarah Valentin. arXiv

## Reproducing the article

Figures and tables can be reproduced by following these instructions. Please note that you will need a GPU with at least 24 GB of memory. The total estimated execution time, if the indicators are run sequentially, is approximately 3 to 4 days.

## This tutorial has been presented at

AgroParisTech, CIRAD, CNRS, INRAE