A lightweight Python template for deep learning projects and research with PyTorch.
Jupyter notebooks for fine-tuning Whisper models on Vietnamese speech, using Colab, Kaggle, or AWS EC2.
Code for various probabilistic deep learning models
Pretraining and fine-tuning of the ALBERT model using TensorFlow 2.0.
A PyTorch project template for intensive AI research. Datamodules and models are kept separate, supporting multiple data loaders and multiple models in the same project.
SHUKUN Technology Co., Ltd. algorithm internship (2020/12 to 2021/5): multi-GPU, multi-node training of deep learning models with Horovod and the NVIDIA Clara Train SDK, including configuration tutorials and performance testing.
Efficient and scalable physics-informed deep learning and scientific machine learning on top of TensorFlow, for multi-worker distributed computing.
PyTorch/Lightning implementation of https://github.com/kang205/SASRec
Transfer learning applied to image classification (VGG16, with distributed training on multiple GPUs).
TensorFlow 2 training code with JIT compilation on multiple GPUs.
Deep learning using TensorFlow low-level APIs
Performance test on MNIST handwritten digits using MXNet and TensorFlow.
Training on your own images with TensorFlow, using multiple GPUs.
Example of distributed PyTorch training.