MLCommons

Better ML for everyone

The mission of MLCommons™ is to make machine learning better for everyone. Together with its 50+ founding Members and Affiliates, including startups, leading companies, academics, and non-profits from around the globe, MLCommons will help grow machine learning from a research field into a mature industry through benchmarks, public datasets, and best practices. MLCommons firmly believes in the power of open source and open data: our software projects are generally available under the Apache 2.0 license, and our datasets are generally released under CC BY 4.0.

You can visit the MLCommons website for more information, or head straight to our Community page to join our Working Groups.

Individuals, companies, and other entities can become members and/or affiliates.

Policies, License and Code of Conduct

Pinned repositories

  1. training Public

    Reference implementations of MLPerf™ training benchmarks

    Python · 1.6k stars · 567 forks

  2. inference Public

    Reference implementations of MLPerf™ inference benchmarks

    Python · 1.3k stars · 543 forks

  3. inference_results_v4.1 Public

    This repository contains the results and code for the MLPerf™ Inference v4.1 benchmark.

    Python · 3 stars · 10 forks

  4. training_results_v4.1 Public

    This repository contains the results and code for the MLPerf™ Training v4.1 benchmark.

    Python · 4 stars · 7 forks

  5. modelbench Public

    Run safety benchmarks against AI models and view detailed reports showing how well they performed.

    Python · 79 stars · 13 forks

  6. ailuminate Public

    The AILuminate v1.1 benchmark suite is an AI risk assessment benchmark developed with broad involvement from leading AI companies, academia, and civil society.

    4 stars · 3 forks

Repositories

Showing 10 of 103 repositories
  • mlperf_inference_test_submissions_v5.0 Public
    Mermaid · 0 stars · 4 forks · 0 issues · 0 PRs · Updated Feb 23, 2025
  • mlperf-automations Public

    This repository contains automation scripts designed to run MLPerf Inference benchmarks. Originally developed for the Collective Mind (CM) automation framework, these scripts have been adapted to leverage the MLC automation framework, maintained by the MLCommons Benchmark Infrastructure Working Group.

    Python · 1 star · Apache-2.0 · 10 forks · 60 issues (1 needs help) · 1 PR · Updated Feb 23, 2025
  • mlcflow Public

    MLCFlow: Simplifying MLPerf Automations

    Python · 3 stars · Apache-2.0 · 10 forks · 6 issues · 2 PRs · Updated Feb 23, 2025
  • medperf Public

    An open benchmarking platform for medical artificial intelligence using Federated Evaluation.

    Python · 152 stars · Apache-2.0 · 34 forks · 53 issues (1 needs help) · 23 PRs · Updated Feb 23, 2025
  • mobile_app_open Public

    Mobile App Open

    C++ · 53 stars · Apache-2.0 · 26 forks · 32 issues · 2 PRs · Updated Feb 23, 2025
  • croissant Public

    Croissant is a high-level format for machine learning datasets that brings together four rich layers: metadata, resource file descriptions, data structure, and default ML semantics (see the usage sketch after this list).

    Python · 523 stars · Apache-2.0 · 54 forks · 134 issues (4 need help) · 19 PRs · Updated Feb 22, 2025
  • modelbench Public

    Run safety benchmarks against AI models and view detailed reports showing how well they performed.

    Python · 79 stars · Apache-2.0 · 13 forks · 223 issues · 3 PRs · Updated Feb 21, 2025
  • logging Public

    MLPerf™ logging library (see the usage sketch after this list)

    Python · 32 stars · Apache-2.0 · 47 forks · 40 issues · 0 PRs · Updated Feb 21, 2025
  • inference Public

    Reference implementations of MLPerf™ inference benchmarks

    Python · 1,317 stars · Apache-2.0 · 543 forks · 185 issues · 13 PRs · Updated Feb 21, 2025
  • dynabench Public
    Python · 21 stars · MIT · 17 forks · 8 issues (1 needs help) · 4 PRs · Updated Feb 21, 2025
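
As noted in the croissant entry above, here is a minimal sketch of reading a dataset described in the Croissant format with the mlcroissant Python package; the dataset URL and record-set name are hypothetical placeholders, and the exact API may vary between releases.

```python
# Minimal sketch, assuming the mlcroissant package (pip install mlcroissant).
# The dataset URL and record-set name are hypothetical placeholders.
import mlcroissant as mlc

# Load the Croissant JSON-LD file describing the dataset's metadata,
# resources, structure, and default ML semantics.
dataset = mlc.Dataset("https://example.org/my-dataset/croissant.json")

# Stream records from one of the dataset's record sets.
for record in dataset.records(record_set="default"):
    print(record)
    break  # peek at the first record only
```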
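And for the logging entry, a minimal sketch of emitting MLPerf-style log events with the mlperf_logging package; the event keys, file name, and metadata below are illustrative assumptions rather than a prescribed configuration.

```python
# Minimal sketch, assuming the mlperf_logging package from this repository.
# The event keys, file name, and metadata are illustrative assumptions.
from mlperf_logging import mllog

mllog.config(filename="mlperf_run.log")  # write events to a log file
mllogger = mllog.get_mllogger()

# Bracket a (hypothetical) run with start/stop markers and record a
# hyperparameter as a structured event in between.
mllogger.start(key=mllog.constants.RUN_START)
mllogger.event(key="seed", value=42)
mllogger.end(key=mllog.constants.RUN_STOP, metadata={"status": "success"})
```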