
Regularized Meta-Learning for Neural Architecture Search

Rob van Gastel, Joaquin Vanschoren
Neural architecture search (NAS) methods have successfully enabled the automated search of neural architectures in various domains. However, most techniques start from scratch with every new task. Techniques have been proposed that generalize across tasks, but don't always adapt well to new tasks. In this work, we consider meta-learning approaches that effectively leverage prior experience to adapt to unseen tasks. We analyze different regularization and training methods to improve the generalizability of meta-learning for NAS. Empirical results on standard few-shot classification benchmarks show that the added regularization and adjustments in the network optimization improve upon previous approaches, such as MetaNAS. Moreover, our results show that learned reinforcement learning policies help find smaller task-specific architectures.

The code accompanying the paper is based on the implementations of MetaNAS, SpinningUp, and recurrent PPO. For the regularization methods, the following repositories are used: SharpDARTS, P-DARTS, DARTS-, and PC-DARTS.

The results from the paper are located in the metanas/results directory on the results branch, which keeps the large results directory separate from the code base.
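To browse the results locally, check out that branch, for example,

git checkout results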

Requirements

Install the required packages by running,

conda env create -f environment.yml
conda activate reg_metanas

to create and activate the reg_metanas environment.

Download the Datasets

Download the Omniglot, TripleMNIST, or MiniImageNet dataset by setting download=True in the data loaders of torchmeta_loader.py, which are built on Torchmeta.
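For reference, a minimal sketch of loading one of these datasets directly through Torchmeta's helper API (the data path and the 5-way 1-shot settings below are illustrative assumptions; the repository's own loaders in torchmeta_loader.py wrap this API),

# Sketch: download Omniglot via Torchmeta; path and few-shot settings are illustrative
from torchmeta.datasets.helpers import omniglot
from torchmeta.utils.data import BatchMetaDataLoader

# download=True fetches the dataset into the given folder on first use
dataset = omniglot("data", ways=5, shots=1, test_shots=15,
                   meta_train=True, download=True)
loader = BatchMetaDataLoader(dataset, batch_size=16, num_workers=2)

batch = next(iter(loader))
inputs, targets = batch["train"]  # support set, shape (16, 5, 1, 28, 28)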

Setup

Please refer to the metanas/scripts folder for examples of how to run the different experiments. For every method an adjustable bash script is provided (a usage example follows the list),

  • Running meta-training for MetaNAS with meta-RL pre-optimization is provided in run_preopt_darts.sh
  • Running meta-training for MetaNAS with TSE-DARTS is provided in run_tse_darts.sh
  • Regularization methods
    • Running meta-training for MetaNAS with DARTS- is provided in run_dartsminus.sh
    • Running meta-training for MetaNAS with P-DARTS is provided in run_pdarts.sh
    • Running meta-training for MetaNAS with SharpDARTS is provided in run_sharpdarts.sh
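For example, after activating the conda environment, a run could look like the following (the arguments inside the scripts, such as dataset paths, may need adjusting to your setup),

conda activate reg_metanas
bash metanas/scripts/run_pdarts.sh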

Graphs

For the graphs generated in the paper, we refer to the notebook in metanas/notebook.

Citation

If you use any part of this code in your research, please cite our paper:

@misc{robvangastel2021_reg_metanas,
  title={Regularized Meta-Learning for Neural Architecture Search},
  author={Rob van Gastel and Joaquin Vanschoren},
  year={2022},
  url={https://2022.automl.cc/wp-content/uploads/2022/07/regularized_meta_learning_for_.pdf},
}