
Commit 849217f

fixed intro
1 parent c548a6d commit 849217f

File tree: 1 file changed (+3, -1)


7-estimators-multi-gpus.ipynb

Lines changed: 3 additions & 1 deletion
@@ -8,7 +8,9 @@
     "\n",
     "We can train on multiple GPUs directly via `tf.keras`'s distributed strategy scope.\n",
     "\n",
-    "TensorFlow's [Estimators](https://www.tensorflow.org/programmers_guide/estimators) API is another useful way to training models in a distributed environment such as on nodes with multiple GPUs or on many nodes with GPUs. This is particularly useful when training on huge datasets especially when used with the `tf.keras` API. Here we will first present the API for the tiny Fashion-MNIST dataset and then show a practical usecase in the end.\n",
+    "TensorFlow's [Estimators](https://www.tensorflow.org/programmers_guide/estimators) API is another useful way to training models in a distributed environment such as on nodes with multiple GPUs or on many nodes with GPUs. This is particularly useful when training on huge datasets especially when used with the `tf.keras` API. \n",
+    "\n",
+    "Here we will first present the `tf.keras` API for the tiny Fashion-MNIST dataset and then show a practical usecase in the end via Estimators.\n",
     "\n",
     "**TL;DR**: Essentially what we want to remember is that a `tf.keras.Model` can be trained with `tf.estimator` API by converting it to an `tf.estimator.Estimator` object via the `tf.keras.estimator.model_to_estimator` method. Once converted we can apply the machinery that `Estimator` provides to train on different hardware configurations."
]
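The conversion the TL;DR describes can be sketched as follows. This is a minimal illustration, not code from the notebook: it assumes a TensorFlow release that still ships the Estimator API (roughly 1.x through 2.15; Estimators were removed later), and the model architecture and the use of `tf.estimator.RunConfig` with `tf.distribute.MirroredStrategy` are illustrative choices for the multi-GPU case, not something the diff specifies.

```python
import tensorflow as tf

# A small tf.keras.Model for Fashion-MNIST-shaped inputs
# (28x28 grayscale images, 10 classes); architecture is illustrative.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
# The model must be compiled before conversion.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Mirror training across all visible GPUs (falls back to a single
# device when no GPU is present).
config = tf.estimator.RunConfig(
    train_distribute=tf.distribute.MirroredStrategy())

# Convert the compiled tf.keras.Model into a tf.estimator.Estimator.
estimator = tf.keras.estimator.model_to_estimator(
    keras_model=model, config=config)
```

Training then goes through the usual Estimator machinery, e.g. `estimator.train(input_fn=...)`, where `input_fn` returns a `tf.data.Dataset` of `(features, labels)` pairs whose feature keys match the Keras model's input names.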
