Holistic Evaluation of Language Models

Welcome! The crfm-helm Python package contains code used in the Holistic Evaluation of Language Models (HELM) project by Stanford CRFM (paper: https://arxiv.org/abs/2211.09110). This package includes the following features:

  • Collection of datasets in a standard format (e.g., NaturalQuestions)
  • Collection of models accessible via a unified API (e.g., GPT-3, MT-NLG, OPT, BLOOM)
  • Collection of metrics beyond accuracy (efficiency, bias, toxicity, etc.)
  • Collection of perturbations for evaluating robustness and fairness (e.g., typos, dialect)
  • Modular framework for constructing prompts from datasets
  • Proxy server for managing accounts and providing a unified interface for accessing models

To get started, refer to the documentation on Read the Docs for how to install and run the package.
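
For orientation, the snippet below sketches a minimal install-and-evaluate workflow. It is illustrative rather than authoritative: the run entry, model, suite name, and instance count are placeholder assumptions, and flag names have changed across releases (older versions use --run-specs instead of --run-entries), so follow the Read the Docs instructions for your version.

```bash
# Install the package from PyPI
pip install crfm-helm

# Run a small benchmark (run entry, model, and suite name are placeholders)
helm-run --run-entries mmlu:subject=philosophy,model=openai/gpt2 \
    --suite my-suite --max-eval-instances 10

# Aggregate the results of the run
helm-summarize --suite my-suite

# Serve a local web UI for browsing the results
helm-server
```

Each run entry pairs a scenario with a model: helm-run executes the evaluation, helm-summarize aggregates the metrics, and helm-server renders them with the front-end code described below.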

Directory Structure

The directory structure for this repo is as follows:

```
├── docs                 # MD used to generate readthedocs
│
├── scripts              # Python utility scripts for HELM
│   ├── cache
│   ├── data_overlap     # Calculate train-test overlap
│   │   ├── common
│   │   ├── scenarios
│   │   └── test
│   ├── efficiency
│   ├── fact_completion
│   ├── offline_eval
│   └── scale
└── src
    ├── helm             # Benchmarking scripts for HELM
    │   ├── benchmark    # Main Python code for running HELM
    │   │   ├── static   # Current JS (jQuery) code for rendering the front-end
    │   │   └── ...
    │   ├── common       # Additional Python code for running HELM
    │   └── proxy        # Python code for external web requests
    └── helm-frontend    # New React front-end
```

Holistic Evaluation of Text-To-Image Models

Significant effort has recently been made in developing text-to-image generation models, which take textual prompts as input and generate images. As these models are widely used in real-world applications, there is an urgent need to comprehensively understand their capabilities and risks. However, existing evaluations primarily focus on image-text alignment and image quality. To address this limitation, we introduce a new benchmark, Holistic Evaluation of Text-To-Image Models (HEIM).

We identify 12 aspects that are important in real-world model deployment:

  • image-text alignment
  • image quality
  • aesthetics
  • originality
  • reasoning
  • knowledge
  • bias
  • toxicity
  • fairness
  • robustness
  • multilinguality
  • efficiency

By curating scenarios encompassing these aspects, we evaluate state-of-the-art text-to-image models using this benchmark. Unlike previous evaluations that focused on alignment and quality, HEIM significantly improves coverage by evaluating all models across all aspects. Our results reveal that no single model excels in all aspects, with different models demonstrating strengths in different aspects.

This repository contains the code used to produce the results presented on the HEIM website and in the paper.
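
HEIM evaluations reuse the same helm-run entry point as the language-model benchmarks. The sketch below is a hedged example: the mscoco scenario name, the model identifier, and the suite name are assumptions made for illustration, and the HEIM documentation lists the actual supported run entries.

```bash
# Illustrative HEIM run evaluating a text-to-image model
# (scenario, model, and suite names are assumptions; see the HEIM docs)
helm-run --run-entries mscoco:model=huggingface/stable-diffusion-v1-4 \
    --suite heim --max-eval-instances 10

# Aggregate the results, as with the language-model benchmarks
helm-summarize --suite heim
```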
