Example of Distributed PyTorch
Updated Mar 23, 2019 - Python
Training on your own image dataset with TensorFlow, using multiple GPUs
Performance test of MNIST handwriting recognition using MXNet + TF
Deep learning using TensorFlow low-level APIs
TensorFlow 2 training code with JIT compilation on multiple GPUs.
Transfer Learning applied to Image Classification (VGG16 - Distributed Training on Multi-GPUs)
PyTorch/Lightning implementation of https://github.com/kang205/SASRec
Efficient and Scalable Physics-Informed Deep Learning and Scientific Machine Learning on top of Tensorflow for multi-worker distributed computing
SHUKUN Technology Co., Ltd algorithm intern (2020/12-2021/5). Multi-GPU, multi-node training for deep learning models: Horovod, NVIDIA Clara Train SDK, configuration tutorials, performance testing.
A PyTorch project template for intensive AI research. Separates data modules and models, thus supporting multiple data loaders and multiple models in the same project.
ALBERT model pretraining and fine-tuning using TF 2.0
Code for various probabilistic deep learning models
Jupyter notebooks to fine-tune Whisper models on Vietnamese using Colab, Kaggle, and/or AWS EC2
A lightweight Python template for deep learning project or research with PyTorch.
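Several of the entries above center on distributed PyTorch training. As a rough illustration of the common pattern they share, here is a minimal DistributedDataParallel (DDP) sketch; the toy linear model is a placeholder, and the single-process gloo setup is only so the snippet runs standalone on CPU. A real multi-GPU run would launch one process per GPU via torchrun and use the nccl backend.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun normally sets these environment variables; we set defaults
    # here so the sketch runs standalone as a single process.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("gloo", rank=0, world_size=1)

    model = torch.nn.Linear(10, 1)  # toy model, stands in for a real network
    ddp_model = DDP(model)          # wraps the model; gradients are all-reduced across ranks
    opt = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

    # One training step on random data (a real loop would iterate a
    # DataLoader with a DistributedSampler so each rank sees its own shard).
    x, y = torch.randn(8, 10), torch.randn(8, 1)
    loss = torch.nn.functional.mse_loss(ddp_model(x), y)
    opt.zero_grad()
    loss.backward()  # DDP synchronizes gradients here
    opt.step()

    dist.destroy_process_group()
    return loss.item()

loss_value = main()
print(f"step loss: {loss_value:.4f}")
```

With more than one process, the only change is reading the rank and world size from the environment and moving each model replica to its own device; DDP keeps the replicas in sync by averaging gradients during `backward()`.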