Pre-Training LLama3.1 on AWS Trainium using Ray and PyTorch Lightning #725
base: main
Conversation
@omrishiv Would you be able to review or find someone who can do that? Thanks
Yes, I can take a look. Hopefully by EoW.
Thank you for putting this together. I left a few comments to start with. I'm wondering, though, if this is similar to optimum-neuron? Have you tried that? If it's similar, is it possible to reuse some of that framework without as many static files?
mountPath: /shared
# Node Selector for Karpenter
# Karpenter will provision this head pod on a node with the specified labels.
nodeSelector:
Are these necessary? The pod won't land on a non-CPU node due to taints, and the keys/values may be different in other deployments.
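The reviewer's point is that taints already repel the head pod from accelerator nodes. A minimal sketch of what that relies on, with the taint key taken from the toleration elsewhere in this diff (the value is an assumption):

```yaml
# Assumed taint on the Trn1/Neuron nodes. Any pod WITHOUT a matching
# toleration (such as the Ray head pod) cannot be scheduled onto them,
# so an explicit nodeSelector on the head pod may be redundant.
taints:
  - key: aws.amazon.com/neuron
    value: "true"
    effect: NoSchedule
```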
persistentVolumeClaim:
claimName: fsx-claim # Reference the PVC for shared storage
rayStartParams:
dashboard-host: 0.0.0.0 # Make dashboard accessible
Please set `num-cpus: 0` so we don't schedule actors on the head node.
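Applied to the head group spec shown in this diff, the suggestion would look roughly like this (surrounding keys reproduced from the hunk; only `num-cpus` is new):

```yaml
# Head group rayStartParams: advertising zero CPUs means Ray's scheduler
# places no actors or tasks on the head node, keeping it free for
# cluster-management processes (GCS, dashboard, autoscaler).
rayStartParams:
  dashboard-host: "0.0.0.0"  # Make dashboard accessible
  num-cpus: "0"              # Keep workloads off the head node
```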
name: log-volume # Mount for Ray logs
# Node Selector for Managed Node Group (with Cluster Autoscaler)
# These workers will run on Trn1 instances provisioned by the cluster autoscaler.
# This is necessary as Karpenter doesn't currently support EFA (required for Neuron distributed training).
Is this true? Per aws/karpenter-provider-aws#5068, I think you can request the resource.
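If the linked issue is right and EFA can be requested as an extended resource, the worker pods could declare it directly and Karpenter could provision matching capacity. A hedged sketch (the resource counts are assumptions for a trn1.32xlarge, not from the PR):

```yaml
# Hypothetical pod resource block requesting both Neuron devices and EFA
# interfaces as Kubernetes extended resources (exposed by the Neuron and
# EFA device plugins respectively). Counts below are assumed, not verified.
resources:
  limits:
    aws.amazon.com/neuron: "16"  # Neuron devices per trn1.32xlarge (assumed)
    vpc.amazonaws.com/efa: "8"   # EFA interfaces per trn1.32xlarge (assumed)
```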
- key: "aws.amazon.com/neuron"
operator: "Exists"
effect: "NoSchedule"
- key: "hub.jupyter.org/dedicated"
If we are trying to use the Jupyter taints to keep other pods off of Jupyter nodes, we shouldn't add a toleration for them here.
name: llama3.1-parallel-compile-job
spec:
submissionMode: K8sJobMode
entrypoint: "NEURON_NUM_DEVICES=32 bash run_llama3.1_8b.sh -r 2 -n 16 -l 4e-4 -s 8192 -p 1"
It might be worth extracting these values into environment variables; that way you can do some hyperparameter tuning, or otherwise update them in one place by mounting them directly from a config file. Just a thought.
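One way the suggestion could look in the RayJob spec, sketched with assumed variable names (none of these identifiers are from the PR):

```yaml
# Hypothetical sketch: hyperparameters pulled out of the entrypoint into
# env vars, so they can be tuned in one place (e.g. sourced from a
# ConfigMap) without editing the command line itself.
spec:
  submissionMode: K8sJobMode
  entrypoint: "NEURON_NUM_DEVICES=$NEURON_NUM_DEVICES bash run_llama3.1_8b.sh -r $REPLICAS -n $NODES -l $LR -s $SEQ_LEN -p $PARALLEL_COMPILE"
  runtimeEnvYAML: |
    env_vars:
      NEURON_NUM_DEVICES: "32"
      REPLICAS: "2"
      NODES: "16"
      LR: "4e-4"
      SEQ_LEN: "8192"
      PARALLEL_COMPILE: "1"
```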
# Login to ECR
echo -e "\nLogging in to ECR"
aws ecr get-login-password --region "$region" | docker login --username AWS --password-stdin $ECR_REPO_URI
aws ecr get-login-password --region "$region" | docker login --username AWS --password-stdin 763104351884.dkr.ecr.${region}.amazonaws.com/pytorch-training-neuronx
You have the region set dynamically here, but it is hardcoded in the `image:` key in the YAML deployment. Please double-check.
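One way to keep the two in sync would be to render the manifest from a placeholder instead of hardcoding the region. A hedged sketch (the `REGION_PLACEHOLDER` token and the default region are assumptions, not from the PR):

```shell
# Keep one placeholder in the manifest's image: key and substitute the
# detected region at deploy time, so it always matches the region used
# for the ECR login above.
region="${region:-us-west-2}"   # in the real script this comes from the user/CLI
image="763104351884.dkr.ecr.REGION_PLACEHOLDER.amazonaws.com/pytorch-training-neuronx"
rendered=$(printf '%s\n' "$image" | sed "s|REGION_PLACEHOLDER|${region}|")
echo "$rendered"
```

The same `sed` substitution can be run over the whole YAML file before `kubectl apply`.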
import ray
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer
# from ray.train.torch.xla import TorchXLAConfig
If you don't need this, please remove it.
@@ -0,0 +1,9 @@
pytorch-lightning
Please freeze all dependencies; this may break otherwise.
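The unpinned entry above can drift as upstream releases land. A pinned form might look like the following, where the version strings are placeholders to be filled in from a known-working environment (e.g. via `pip freeze`), not values from the PR:

```text
# requirements.txt -- pinned; take exact versions from `pip freeze`
# in the environment the example was actually tested in.
pytorch-lightning==<pinned-version>
ray[train]==<pinned-version>
torch-neuronx==<pinned-version>
```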
# warmup steps
WARMUP_STEPS=100
# learning rate
#LR=3.0e-4
Commented-out code can be removed; same with MODEL_PATH.
@@ -0,0 +1,362 @@
---
sidebar_position: 1
Why are we repositioning all of the documents?
What does this PR do?
An example showing a combination of technologies (Ray + PTL + Neuron) for pre-training the Llama 3.1 model on Trn1 instances. This example was requested by multiple customers.
The integration of Ray, PyTorch Lightning (PTL), and AWS Neuron combines PTL's intuitive model-development API, Ray Train's robust distributed computing for seamless scaling across multiple nodes, and AWS Neuron's hardware optimization for Trainium. Together, they significantly simplify the setup and management of distributed training environments for large-scale AI projects, particularly computationally intensive tasks like training large language models.
Motivation
Issue: #724
More
- [x] Yes, I have added a new example under the website/docs or website/blog section for this feature
- [x] Yes, I have run pre-commit run -a with this PR (link for installing pre-commit locally)

For Moderators
Additional Notes
We tested this out for a customer use-case and even demoed the solution to the customer.
The customer was impressed with the results.