
Run Python on a remote computer

pip install labml_remote
cd [PATH TO YOUR PROJECT FOLDER]
labml_remote --init
# Give it SSH credentials
labml_remote python [PATH TO YOUR PYTHON CODE] [ARGUMENTS TO YOUR PYTHON CODE]

That's it!

Configurations

labml_remote --init asks for your SSH credentials and creates two files: .remote/configs.yaml and .remote/exclude.txt. .remote/configs.yaml keeps the remote configuration for the project. Here's a sample .remote/configs.yaml:

hostname: ec2-18-219-46-175.us-east-2.compute.amazonaws.com
name: labml_samples
private_key: .remote/private_key
username: ubuntu

.remote/exclude.txt is like .gitignore - it specifies the files and folders that you don't need to sync up with the remote server. The exclude list generated by labml_remote --init covers things like .git, .remote, logs and __pycache__. You should edit this file if you have other things that you don't want synced to your remote computer.
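For reference, an exclude file covering the defaults mentioned above would look like this (one pattern per line, rsync-style; the exact contents generated for your project may differ):

```
.git
.remote
logs
__pycache__
```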

How it works

labml_remote python ... will run your code on the remote computer.

It does a bunch of things, and you should be able to see the progress in the console. It sets up miniconda if it's not already installed and creates a new environment for the project. Then it creates a folder named after the project inside the home folder and syncs up the contents of your local folder. It syncs using rsync, so subsequent syncs only need to send the changes. Then it installs packages from requirements.txt, or with pipenv if a Pipfile is found. Finally it runs your Python file, using pipenv if a Pipfile is present. The outputs of your program are streamed to the console.
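The sync step is why subsequent runs are fast: rsync only transfers files that changed since the last sync. As a rough sketch of what such an invocation looks like (the exact flags labml_remote uses are an assumption here, and the host/folder names are taken from the sample config above):

```python
# Sketch of an rsync invocation similar to what the sync step performs.
# NOTE: the flags and paths below are illustrative assumptions, not the
# exact command labml_remote runs.
import shlex


def build_rsync_command(local_dir, remote_host, remote_dir, exclude_file):
    """Build an rsync command that sends only changed files."""
    return [
        "rsync",
        "-az",                              # archive mode + compression
        "--delete",                         # drop remote files deleted locally
        f"--exclude-from={exclude_file}",   # honor .remote/exclude.txt
        f"{local_dir}/",                    # trailing slash: sync contents
        f"{remote_host}:{remote_dir}/",
    ]


cmd = build_rsync_command(
    ".",
    "ubuntu@ec2-18-219-46-175.us-east-2.compute.amazonaws.com",
    "~/labml_samples",
    ".remote/exclude.txt",
)
print(shlex.join(cmd))
```

Because rsync compares file checksums/timestamps between the two ends, only the delta travels over SSH on each run.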

What it doesn't do

This won't install things like drivers or CUDA. So if you need them, you should pick an instance image that comes with them pre-installed. For example, on AWS, pick a Deep Learning AMI if you want to use an instance with GPUs.

Hope this helps!