[TensorFlow][Inference][Sagemaker] TensorFlow 2.19.0 Currency Release #4883
Conversation
dockerd-entrypoint:
  source: docker/build_artifacts/dockerd-entrypoint.py
  target: dockerd-entrypoint.py
dockerd_ec2_entrypoint:
We don't need this for inference.
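For reference, a minimal sketch of the resolved hunk, assuming the inference buildspec should keep only the SageMaker entrypoint artifact:

```yaml
# Inference images only need the SageMaker entrypoint artifact;
# the dockerd_ec2_entrypoint entry is dropped.
dockerd-entrypoint:
  source: docker/build_artifacts/dockerd-entrypoint.py
  target: dockerd-entrypoint.py
```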
ARG TFS_SHORT_VERSION=2.19
ARG CUDA_DASH=12-2

ARG TF_SERVING_VERSION_GIT_COMMIT=HEAD
We shouldn't use HEAD here for TF2.19; we can use a specific commit for TF2.19. You can refer to https://hub.docker.com/layers/tensorflow/serving/2.19.0-gpu/images/sha256-d32afbfadf8cc3fdc14ce3e613badc109317dd952cef50dd0c899253f6373809
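A sketch of the pinned value; the hash below is a placeholder, not the real commit, and should be taken from the commit the tensorflow/serving 2.19.0 release tag points to:

```dockerfile
# Pin TF Serving to the exact 2.19.0 release commit instead of HEAD.
# <commit-of-2.19.0-tag> is a placeholder for the real hash.
ARG TF_SERVING_VERSION_GIT_COMMIT=<commit-of-2.19.0-tag>
```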
# Specify accept-bind-to-port LABEL for inference pipelines to use SAGEMAKER_BIND_TO_PORT
# https://docs.aws.amazon.com/sagemaker/latest/dg/inference-pipeline-real-time.html
LABEL com.amazonaws.sagemaker.capabilities.accept-bind-to-port=true
LABEL com.amazonaws.sagemaker.inference.cuda.verified_versions=12.5
We use 12.2, not 12.5, here.
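The corrected label, consistent with ARG CUDA_DASH=12-2 declared earlier in the Dockerfile:

```dockerfile
LABEL com.amazonaws.sagemaker.inference.cuda.verified_versions=12.2
```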
done
test/dlc_tests/conftest.py
"bashrc": {"pytorch": ["2.4.0", "2.5.1", "2.6.0"], "tensorflow": ["2.18.0"]}, | ||
"framework": {"pytorch": [""], "tensorflow": [""]}, | ||
"entrypoint": {"pytorch": ["2.4.0", "2.5.1", "2.6.0"], "tensorflow": ["2.18.0", "2.19.0"]}, | ||
"bashrc": {"pytorch": ["2.4.0", "2.5.1", "2.6.0"], "tensorflow": ["2.18.0", "2.19.0"]}, |
We have bashrc and entrypoint telemetry in TF2.19 inference; we shouldn't skip it.
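A sketch of the corrected skip map (the variable name here is hypothetical; the structure follows the conftest.py hunk above). "2.19.0" is dropped from the entrypoint and bashrc tensorflow lists so those telemetry tests run for TF2.19 inference:

```python
# Telemetry tests skipped per framework version; TF 2.19.0 is deliberately
# absent from "entrypoint" and "bashrc", so those tests execute for TF 2.19.
telemetry_skip_versions = {
    "framework": {"pytorch": [""], "tensorflow": [""]},
    "entrypoint": {"pytorch": ["2.4.0", "2.5.1", "2.6.0"], "tensorflow": ["2.18.0"]},
    "bashrc": {"pytorch": ["2.4.0", "2.5.1", "2.6.0"], "tensorflow": ["2.18.0"]},
}
```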
done
You can also try pinning openssl to >= a specific version to see if that works.
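For illustration only (both the package choice and the version floor below are assumptions, not taken from this PR), such a pin could look like:

```dockerfile
# Hypothetical pin: enforce a minimum version of the OpenSSL Python binding.
# Package name and floor version are assumptions for illustration.
RUN pip install --no-cache-dir "pyopenssl>=24.0.0"
```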
Force-pushed from 58ad489 to 03faa70.
#we use different installation method other than this
#RUN curl $TFS_URL -o /usr/bin/tensorflow_model_server \
#    && chmod 555 /usr/bin/tensorflow_model_server
You can remove these lines.
You can revert the toml file before submitting the PR for review.
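For example (the remote and branch names here are assumptions), the toml can be restored with:

```
git checkout origin/master -- dlc_developer_config.toml
```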
…libreadline-dev in gpu file
… also skipped telemetry tests
…pping of bashrc and entrypoint telemetry
…lities & checking only security tests
* Add license file content test
* use short version
* test no build
* print string
* enable build
* fix allowlist
* rebuild
* buildtest ec2
* test arm
* build test inference
* disable arm64 mode
* disable build
* revert toml
* update EFA to 1.41.0, vllm to 0.9.0.1
ARG PYTHON_PIP=python3-pip
ARG PIP=pip3
ARG PYTHON_VERSION=3.12.10
#ARG TFS_URL=https://framework-binaries.s3.us-west-2.amazonaws.com/tensorflow_serving/r2.18_aws/cpu/2025-01-15-19-48/tensorflow_model_server
You can also remove this line.
ARG PYTHON_PIP=python3-pip
ARG PIP=pip3
ARG PYTHON_VERSION=3.12.10
#ARG TFS_URL=https://framework-binaries.s3.us-west-2.amazonaws.com/tensorflow_serving/r2.18_aws/gpu/2025-01-17-21-54/tensorflow_model_server
Remove this line too.
GitHub Issue #, if available:
Note:
If merging this PR should also close the associated Issue, please also add that Issue # to the Linked Issues section on the right.
All PRs are checked weekly for staleness. This PR will be closed if not updated in 30 days.
Description
Tests run
NOTE: By default, docker builds are disabled. In order to build your container, please update dlc_developer_config.toml and specify the framework to build in "build_frameworks"
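For this PR, that would be an entry along these lines in dlc_developer_config.toml (the value is assumed for a TensorFlow build):

```toml
# Enable docker builds for the TensorFlow images in this PR branch.
build_frameworks = ["tensorflow"]
```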
Confused on how to run tests? Try using the helper utility...
Assuming your remote is called origin (you can find out more with git remote -v):
python src/prepare_dlc_dev_environment.py -b </path/to/buildspec.yml> -cp origin
python src/prepare_dlc_dev_environment.py -b </path/to/buildspec.yml> -t sanity_tests -cp origin
python src/prepare_dlc_dev_environment.py -rcp origin
NOTE: If you are creating a PR for a new framework version, please ensure success of the standard, rc, and efa sagemaker remote tests by updating the dlc_developer_config.toml file:
sagemaker_remote_tests = true
sagemaker_efa_tests = true
sagemaker_rc_tests = true
Additionally, please run the sagemaker local tests in at least one revision:
sagemaker_local_tests = true
Formatting
I have run black -l 100 on my code (formatting tool: https://black.readthedocs.io/en/stable/getting_started.html)
DLC image/dockerfile
Builds to Execute
Fill out the template and click the checkbox of the builds you'd like to execute
Note: Replace <X.Y> with the major.minor framework version (e.g. 2.2) you would like to start.
build_pytorch_training_<X.Y>_sm
build_pytorch_training_<X.Y>_ec2
build_pytorch_inference_<X.Y>_sm
build_pytorch_inference_<X.Y>_ec2
build_pytorch_inference_<X.Y>_graviton
build_tensorflow_training_<X.Y>_sm
build_tensorflow_training_<X.Y>_ec2
build_tensorflow_inference_<X.Y>_sm
build_tensorflow_inference_<X.Y>_ec2
build_tensorflow_inference_<X.Y>_graviton
Additional context
PR Checklist
NEURON/GRAVITON Testing Checklist
I've modified dlc_developer_config.toml in my PR branch by setting neuron_mode = true or graviton_mode = true
Benchmark Testing Checklist
I've modified dlc_developer_config.toml in my PR branch by setting ec2_benchmark_tests = true or sagemaker_benchmark_tests = true
Pytest Marker Checklist
@pytest.mark.model("<model-type>")
to the new tests which I have added, to specify the Deep Learning model that is used in the test (use"N/A"
if the test doesn't use a model)@pytest.mark.integration("<feature-being-tested>")
to the new tests which I have added, to specify the feature that will be tested@pytest.mark.multinode(<integer-num-nodes>)
to the new tests which I have added, to specify the number of nodes used on a multi-node test@pytest.mark.processor(<"cpu"/"gpu"/"eia"/"neuron">)
to the new tests which I have added, if a test is specifically applicable to only one processor typeBy submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license. I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.