Simultaneous Neural Architecture Search and Hyperparameter Optimization of a CNN

Built with PyTorch Lightning and a Hydra config template.

Description

Your task is to automatically improve and analyze the performance of a neural network for a fashion classification dataset. Instead of only considering the architecture and hyperparameters separately, you should build a system to jointly optimize them. You are allowed a maximum runtime of 6 hours. We have provided a standard vision model as a baseline. In the end, you should convince us that you indeed improved the performance of the network compared to the default approach. To this end, you could consider one or several of the following:

  • (must) Apply HPO to obtain a well-performing hyperparameter configuration (e.g., BO or EAs);
  • (must) Apply NAS (e.g., BOHB or DARTS) to improve the architecture of the network;
  • (can) Extend the configuration space to cover preprocessing, data augmentation and regularization;
  • (can) Apply one or several of the speedup techniques for HPO/NAS;
  • (can) Apply meta-learning, such as algorithm selection or warmstarting, to improve the performance;
  • (can) Apply a learning to learn approach to learn how to optimize the network;
  • (can) Determine the importance of the algorithm’s hyperparameters;
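
To make "jointly optimize" concrete, the sketch below draws architecture choices and training hyperparameters from one shared search space and evaluates each configuration as a whole. The search space, the `evaluate` stub, and all names here are hypothetical illustrations, not part of this repository; in practice `evaluate` would train the CNN and return its validation misclassification error.

```python
import random

# Hypothetical joint search space: architecture choices and training
# hyperparameters live in the same configuration dictionary.
SEARCH_SPACE = {
    "n_conv_layers": [1, 2, 3],           # architecture
    "n_channels":    [16, 32, 64],        # architecture
    "kernel_size":   [3, 5],              # architecture
    "lr":            [1e-4, 1e-3, 1e-2],  # hyperparameter
    "batch_size":    [32, 64, 128],       # hyperparameter
}

def sample_config(rng):
    """Draw one joint architecture + hyperparameter configuration."""
    return {key: rng.choice(values) for key, values in SEARCH_SPACE.items()}

def evaluate(config):
    """Stub standing in for training the CNN and returning its
    validation misclassification error (lower is better)."""
    # Toy score so the sketch runs end-to-end; replace with real training.
    return 0.5 - 0.01 * config["n_conv_layers"] - 0.1 * config["lr"]

def random_search(n_trials=20, seed=0):
    """Plain random search over the joint space; BO or EAs would
    replace the sampling strategy, not the joint space itself."""
    rng = random.Random(seed)
    best_config, best_error = None, float("inf")
    for _ in range(n_trials):
        config = sample_config(rng)
        error = evaluate(config)
        if error < best_error:
            best_config, best_error = config, error
    return best_config, best_error

best, err = random_search()
print(best, err)
```

The point of the sketch is that a single configuration dictionary carries both kinds of decisions, so the optimizer can exploit interactions between them (e.g. deeper networks tolerating different learning rates).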

From the optional approaches (denoted by can), pick the ones that you think are most appropriate. Also choose a sound way to evaluate your approach; you could consider the following:

  • Measure and compare against the default performance of the given network;
  • Plot a confusion matrix;
  • Plot the performance of your AutoML approach over time;
  • Apply a statistical test;
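
For instance, a confusion matrix can be built directly from the validation predictions. The labels below are toy placeholders, not results from this repository:

```python
from collections import Counter

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows are true classes, columns are predicted classes."""
    counts = Counter(zip(y_true, y_pred))
    return [[counts[(t, p)] for p in range(n_classes)]
            for t in range(n_classes)]

# Toy predictions for a 3-class problem.
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
cm = confusion_matrix(y_true, y_pred, n_classes=3)
print(cm)  # [[1, 1, 0], [0, 2, 0], [1, 0, 1]]
```

Off-diagonal entries show which Fashion-MNIST classes the network confuses, which is more informative than the scalar error alone.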

Experimental Constraints

  • Your code for making design decisions should run no longer than 6 hours (without additional validation) on a single machine.
  • You can use any kind of hardware that is available to you. For example, you could consider using Google Colab (which offers a VM with a GPU for up to 12 hours at a time for free) or Amazon SageMaker (which offers substantial free resources to first-time customers). Don’t forget to state in your paper what kind of hardware you used!

Metrics

  • The final performance has to be measured in terms of misclassification error.
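
Misclassification error is simply the fraction of examples whose predicted label differs from the true label, i.e. 1 − accuracy:

```python
def misclassification_error(y_true, y_pred):
    """Fraction of examples predicted incorrectly (1 - accuracy)."""
    assert len(y_true) == len(y_pred)
    wrong = sum(t != p for t, p in zip(y_true, y_pred))
    return wrong / len(y_true)

print(misclassification_error([0, 1, 2, 1], [0, 2, 2, 1]))  # 0.25
```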

How to run

Install dependencies

```bash
# clone project
git clone https://github.com/marcomoldovan/joint-nas-hpo
cd joint-nas-hpo

# install the correct Python version
sudo apt-get install python3.10     # Linux, Python 3.7 or higher
brew install python@3.10            # macOS, Python 3.7 or higher
choco install python --version=3.9  # Windows, Python 3.7-3.9

# create a Python virtual environment and activate it
python3 -m venv myenv
source myenv/bin/activate

# if you have several versions of Python, you can create a virtual environment with a specific one:
virtualenv --python=/usr/bin/<python3.x> myenv
source myenv/bin/activate

# on Windows, activate with:
myenv\Scripts\activate.bat

# [ALTERNATIVE] create a conda environment
conda create -n myenv python=<3.x>
conda activate myenv

# install PyTorch according to the official instructions:
# https://pytorch.org/get-started/

# install requirements
pip install -r requirements.txt
```

Default training

Train model with default configuration

```bash
# train on CPU
python src/train.py trainer=cpu

# train on GPU
python src/train.py trainer=gpu
```

Train model with chosen experiment configuration from configs/experiment/

```bash
python src/train.py experiment=experiment_name.yaml
```

You can override any parameter from the command line like this:

```bash
python src/train.py trainer.max_epochs=20 data.batch_size=64
```

Hyperparameter search

To run a hyperparameter search with Optuna, you can use the following command:

```bash
python src/train.py -m hparams_search=fashion_mnist_optuna experiment=example
```
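
The sweep is configured in `configs/hparams_search/fashion_mnist_optuna.yaml`. A minimal sketch of such a file follows; the exact keys and the metric name are assumptions based on the usual lightning-hydra-template layout, not verified against this repository:

```yaml
# @package _global_
defaults:
  - override /hydra/sweeper: optuna

# metric logged by the LightningModule that the sweeper optimizes
optimized_metric: "val/acc_best"

hydra:
  mode: "MULTIRUN"
  sweeper:
    direction: maximize
    n_trials: 25
    params:
      model.optimizer.lr: interval(0.0001, 0.1)
      data.batch_size: choice(32, 64, 128)
```

The `params` block uses the Optuna sweeper's search-space syntax (`interval`, `choice`), so both training hyperparameters and any exposed architecture options can be swept from the same file.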

Running a hyperparameter sweep with Weights & Biases is also supported:

```bash
wandb sweep configs/hparams_search/fashion_mnist_wandb.yaml
wandb agent <sweep_id>
```
