Add GKE A3 Ultra support #940
base: main
Conversation
samos123 commented on Jan 22, 2025
- Makes it easier to add support for more GKE GPU accelerators by introducing a base class
- Adds Fuji v2 70B benchmark results
Dockerfile
Outdated
@@ -101,14 +93,63 @@ COPY . .
# GPU container spec. #
################################################################################

FROM base AS gpu
# This causes INTERNAL: No valid engine configs for Matmul error
@markblee @kelvin-zou are you fine with moving to nvidia/cuda:12.6.3-cudnn-devel-ubuntu22.04 as the base image? I couldn't get A3 Ultra working on the original Python base image.
Dockerfile
Outdated
# TODO(markblee): Support extras.
ENV PIP_FIND_LINKS=https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
RUN pip install .[core,gpu]
COPY . .
RUN pip install -U "jax[gpu]==0.4.38" "jax==0.4.38" "jaxlib==0.4.38" \
TODO: this should be removed. Maybe we should wait until axlearn main upgrades to 0.4.38.
Thanks!
# So we're using the NVIDIA provided cuda image instead which works.
FROM nvidia/cuda:12.6.3-cudnn-devel-ubuntu22.04 as gpu

# Copy from original base
Does this mean we will need to keep the following commands consistent with those in the base stage? Does Dockerfile support functions?
I did some research and sadly the answer is no. Instead of copy-pasting code, we could create a bash script that both stages use (rough sketch below). I agree that we should figure out a way to reuse the same setup steps.
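For illustration, here is a minimal sketch of that idea, assuming a hypothetical scripts/setup_common.sh helper (not part of this PR) and an illustrative python base tag; both stages copy and run the same script so the setup steps are defined once:

```Dockerfile
# Hypothetical sketch only: share setup steps between stages via a script.
# scripts/setup_common.sh is an assumed helper, not part of this PR.
FROM python:3.10-slim AS base
COPY scripts/setup_common.sh /tmp/setup_common.sh
# Install the shared system and Python dependencies defined once in the script.
RUN bash /tmp/setup_common.sh

FROM nvidia/cuda:12.6.3-cudnn-devel-ubuntu22.04 AS gpu
COPY scripts/setup_common.sh /tmp/setup_common.sh
# Re-run the same script so the gpu stage stays consistent with base.
RUN bash /tmp/setup_common.sh
```

The script would have to work on both base images (e.g. make sure python/pip are present in the CUDA image), but at least the setup steps would live in one place.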
# Using `FROM base as GPU` causes INTERNAL: No valid engine configs for Matmul error.
# So we're using the NVIDIA provided cuda image instead which works.
FROM nvidia/cuda:12.6.3-cudnn-devel-ubuntu22.04 as gpu
Let's make sure this change doesn't break our GPU training on AWS.
I think it's fine but defer to @kelvin-zou
Seconding what Ruoming recommended: maybe split this into a Dockerfile.gcp if there are GCP-specific things, and have it build on this Dockerfile as its base. The base image change itself is fine though.
There isn't anything GCP-specific except the installation of gcloud, which also happens in the axlearn base image. Happy to move to a Dockerfile.gcp though, since it uses a different base image; I think that makes sense (rough sketch below).
I still have access to internal capacity, so I wanted to get some reviews and make changes before I lose the ability to verify them.
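For what it's worth, a rough sketch of what a Dockerfile.gcp could look like, assuming the main Dockerfile's base stage is built and tagged first (the axlearn:base tag and the copied paths are assumptions, not part of this PR):

```Dockerfile
# Hypothetical Dockerfile.gcp sketch; names, tags, and paths are assumptions.
# Assumes the main Dockerfile's base stage was built and tagged first, e.g.:
#   docker build --target base -t axlearn:base .
ARG BASE_IMAGE=axlearn:base
FROM ${BASE_IMAGE} AS axlearn_base

# Use the NVIDIA CUDA image to avoid the
# "INTERNAL: No valid engine configs for Matmul" error seen with the python base image.
FROM nvidia/cuda:12.6.3-cudnn-devel-ubuntu22.04 AS gpu

# Reuse artifacts from the shared base image instead of reinstalling them
# (the gcloud SDK path is illustrative, not verified).
COPY --from=axlearn_base /root/google-cloud-sdk /root/google-cloud-sdk
COPY . .
```

Whether copying gcloud over like this is enough, or whether it's simpler to just reinstall it in the gpu stage, would need to be checked.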
Dockerfile
Outdated
# TODO(markblee): Support extras.
ENV PIP_FIND_LINKS=https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
RUN pip install .[core,gpu]
COPY . .

# TODO(samos123): remove this once axlearn upgrades to Jax 0.4.38.
RUN pip install -U "jax[gpu]==0.4.38" "jax==0.4.38" "jaxlib==0.4.38"
can we rely on axlearn's jax version instead?
Yes, will remove this before merge since I think 0.4.37 works as well.