AI-Hypercomputer/tpu-recipes

Cloud TPU performance recipes

This repository provides instructions to reproduce specific workloads on Google Cloud TPUs. The focus is on reliably achieving a stated performance metric (e.g., throughput) that demonstrates the combined hardware and software stack on TPUs.

Organization

  • ./training: instructions to reproduce the training performance of popular LLMs, diffusion models, and other models with PyTorch and JAX.

  • ./inference: instructions to reproduce inference performance.

  • ./microbenchmarks: instructions for low-level TPU benchmarks such as matrix multiplication performance and memory bandwidth.
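As an illustration of what a low-level throughput microbenchmark looks like (this sketch is not from the repository, and the function name `matmul_gflops` is made up for this example), here is a minimal NumPy version that times a matrix multiplication and reports achieved FLOP/s. The recipes in ./microbenchmarks perform the analogous measurement on TPU hardware with accelerator-specific tooling:

```python
import time
import numpy as np

def matmul_gflops(n: int, repeats: int = 5) -> float:
    """Time an n x n float32 matmul; return best-of-`repeats` GFLOP/s."""
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        np.matmul(a, b)
        best = min(best, time.perf_counter() - start)
    flops = 2.0 * n**3  # one multiply and one add per inner-product term
    return flops / best / 1e9

print(f"{matmul_gflops(512):.1f} GFLOP/s")
```

Taking the best of several repeats, rather than the mean, is a common microbenchmarking choice: it filters out warm-up and scheduler noise and approximates the hardware's attainable peak for that shape.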

Contributor notes

Note: This is not an officially supported Google product. This project is not eligible for the Google Open Source Software Vulnerability Rewards Program.
