Code for the paper "Improving Generalization for Neural Adaptive Video Streaming via Meta Reinforcement Learning" - Nuowen Kan et al. (ACM MM22)
This document illustrates how to obtain the results shown in the paper "Improving Generalization for Neural Adaptive Video Streaming via Meta Reinforcement Learning".
We suggest installing Anaconda to manage the test environments.
- Linux or macOS
- Python >= 3.6
- PyTorch >= 1.6.0
- numpy, pandas
- tqdm
- seaborn
- matplotlib
- CPU or NVIDIA GPU + CUDA CuDNN
Install PyTorch. Note that the PyTorch installation command depends on the compute platform of your machine; choose the appropriate version by following the official guide page. For example, if you have installed CUDA 10.2, you can install the latest PyTorch by running:

```bash
conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
```
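You can verify the installation and GPU visibility with:

```bash
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```

The second value prints `True` if CUDA is set up correctly (it will be `False` on a CPU-only machine, which is also supported).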
Alternatively, you can create a dedicated environment (note that many redundant dependencies will be installed) via:

```bash
conda env create -f torch.yaml
```
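The name of the created environment is defined inside `torch.yaml`; the name below is a placeholder:

```bash
conda env list            # shows the environment created from torch.yaml
conda activate <env-name> # replace with the name listed above
```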
The main training loop for MERINA can be found in `main.py`, the trained models are stored in `models/`, and the corresponding meta-RL algorithms live in `algos/`. In addition, `envs/` contains the emulator code that simulates the environment of the ABR virtual player, and some of the baseline algorithms are located in `baselines/`.
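As a rough mental model of how such an emulator is driven, here is a toy, self-contained sketch of the usual simulated-player loop; the class and its interface are invented for illustration and do not reflect the actual API in `envs/`:

```python
import random

class ToyEmulator:
    """Toy stand-in for the ABR emulator in envs/ (the real API may differ)."""
    def __init__(self, n_chunks=10):
        self.n_chunks = n_chunks
        self.i = 0

    def reset(self):
        self.i = 0
        return {"buffer_s": 0.0, "throughput_mbps": 1.0}  # initial observation

    def step(self, bitrate_level):
        # "Download" one chunk; the reward mimics a QoE signal.
        self.i += 1
        reward = bitrate_level - random.random()
        done = self.i >= self.n_chunks
        obs = {"buffer_s": 1.0, "throughput_mbps": 1.0}
        return obs, reward, done

env = ToyEmulator()
obs, done = env.reset(), False
while not done:
    action = 0  # a trained policy would choose the bitrate level here
    obs, reward, done = env.step(action)
```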
There's quite a bit of documentation in the respective scripts, so have a look there for details.
Improvements to the code and methods (including a journal version) are currently underway and should be finished in a few months; I will update this repository accordingly.
Type `python main.py -h` in the terminal for usage instructions, or read the ArgumentParser section of the code.
For example, you can choose the work mode by passing `--test` (evaluate the model) or `--adp` (run the meta adaptation procedure), or use the default setting (run the meta training procedure). The default QoE metric is the linear form; you can switch to the logarithmic form by adding the argument `--log`.
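For context, the linear and logarithmic metrics referred to here follow the form commonly used in the ABR literature (e.g., Pensieve); the exact penalty weights used by this code are set in the scripts, so take this as the generic form rather than a specification:

$$\mathrm{QoE} = \sum_{n=1}^{N} q(R_n) \;-\; \mu \sum_{n=1}^{N} T_n \;-\; \sum_{n=1}^{N-1} \big| q(R_{n+1}) - q(R_n) \big|$$

where $R_n$ is the bitrate of chunk $n$, $T_n$ is the rebuffering time it incurs, and $\mu$ weights the rebuffering penalty. The linear form uses $q(R_n) = R_n$, while `--log` switches to $q(R_n) = \log(R_n / R_{\min})$ with $R_{\min}$ the lowest available bitrate.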
A network throughput dataset must be chosen in test mode. The datasets shown in the paper are available here, and you can select them following the mappings below:

[`--tf` FCC traces] [`--tfh` FCC and HSDPA traces] [`--t3g` HSDPA traces] [`--to` Oboe traces] [`--tp` Puffer-Oct.17-21 traces] [`--tp2` Puffer-Feb.18-22 traces]

You can also rename the label of the results via [`--name "yourname"`].
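For orientation, the flags above would typically be declared along the following lines; this is a hypothetical sketch, not the actual parser in `main.py` (check the ArgumentParser section there for the authoritative definitions):

```python
import argparse

parser = argparse.ArgumentParser(description="MERINA: meta training / adaptation / testing")
parser.add_argument("--test", action="store_true", help="evaluate a trained model")
parser.add_argument("--adp", action="store_true", help="run the meta adaptation procedure")
parser.add_argument("--log", action="store_true", help="use the logarithmic QoE metric (default: linear)")
parser.add_argument("--tf", action="store_true", help="test on FCC traces")
parser.add_argument("--tfh", action="store_true", help="test on FCC and HSDPA traces")
parser.add_argument("--t3g", action="store_true", help="test on HSDPA traces")
parser.add_argument("--to", action="store_true", help="test on Oboe traces")
parser.add_argument("--tp", action="store_true", help="test on Puffer-Oct.17-21 traces")
parser.add_argument("--tp2", action="store_true", help="test on Puffer-Feb.18-22 traces")
parser.add_argument("--name", type=str, default="merina", help="label for the result files")
args = parser.parse_args()
```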
The repository is organized as follows:

```
|--models
|--Results        # !!
|   |--sim
|       |--test
|           |--lin
|           |--log
|--utils
|   |--log_results  # !!
|--main.py
```
The public bandwidth traces are stored in this repository. Download them and put them in the directory `./envs/traces/`.
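For example, assuming the traces were downloaded as an archive named `traces.zip` (file name hypothetical):

```bash
mkdir -p ./envs/traces
unzip traces.zip -d ./envs/traces/   # adjust to match how the archive actually unpacks
```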
To evaluate MERINA on the in-distribution throughput traces with the logarithmic QoE metric, run one of the following:

- FCC traces:

```bash
python main.py --test --tf --log
```

- or HSDPA traces:

```bash
python main.py --test --t3g --log
```
Plot an example result:

```bash
cd utils
python plt_v2.py --log --merina --bola --mpc
```
To train a model using the FCC and HSDPA training dataset with the logarithmic QoE metric, run:

```bash
python main.py --log
```
The exploration trajectories will be recorded in `./Results/sim/merina/log_record` and the validation results in `./Results/sim/merina/log_test`. In addition, you can monitor the training process with TensorBoard:

```bash
tensorboard --logdir=./Results/sim
```

Then open http://localhost:6006 (TensorBoard's default address) in a browser. Wait patiently and manually interrupt the training (`Ctrl + C` in the terminal) once the validation results converge. Cross your fingers!!!
- The script `imrl_light.py` is a variant that employs lightweight neural networks to build the VAE and the policy network. Because it is an unfinished version, problems may arise if you use it to train models.