This folder replicates experiments from the paper:
Franc, Prusa, Yermakov. Consistent and Tractable Learning of Markov Networks. ECML 2022.
The goal is to evaluate the performance of a Markov Network (MN) classifier learned by the M3N algorithm with two different proxy losses: the LP Margin-rescaling loss and the Markov Network Adversarial (MANA) loss. The proxy losses are evaluated on synthetically generated sequences and on the problem of learning symbolic and visual Sudoku solvers.
Sequences of observable and hidden labels generated from known Hidden Markov Chain:
python3 create_hmc_dataset.py
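The script's internals are not shown here, but the core idea of generating such data can be sketched as follows (a minimal illustration in pure Python; the function name `sample_hmc` and the example parameters are assumptions, not taken from the script):

```python
import random

def sample_hmc(trans, emis, init, seq_len, rng):
    """Sample hidden labels y and observations x from a hidden Markov
    chain given transition, emission and initial distributions.
    (Illustrative sketch; parameter values below are made up.)"""
    n_states = len(trans)
    n_obs = len(emis[0])
    y = [rng.choices(range(n_states), weights=init)[0]]
    x = [rng.choices(range(n_obs), weights=emis[y[0]])[0]]
    for t in range(1, seq_len):
        y.append(rng.choices(range(n_states), weights=trans[y[-1]])[0])
        x.append(rng.choices(range(n_obs), weights=emis[y[-1]])[0])
    return x, y

rng = random.Random(0)
trans = [[0.9, 0.1], [0.2, 0.8]]   # P(y_t | y_{t-1})
emis  = [[0.7, 0.3], [0.1, 0.9]]   # P(x_t | y_t)
init  = [0.5, 0.5]                 # P(y_1)
x, y = sample_hmc(trans, emis, init, 20, rng)
```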
Examples of symbolic Sudoku puzzles with solutions:
python3 create_sudoku_dataset.py
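For reference, the constraint that every generated solution must satisfy can be sketched as a validity check (a hedged illustration; the helper `is_valid_solution` and the pattern used to build the example grid are not part of the script):

```python
def is_valid_solution(grid):
    """Check a 9x9 grid: every row, column and 3x3 block must
    contain the digits 1..9 exactly once. (Illustrative sketch.)"""
    full = set(range(1, 10))
    rows = all(set(row) == full for row in grid)
    cols = all({grid[r][c] for r in range(9)} == full for c in range(9))
    blocks = all(
        {grid[br + r][bc + c] for r in range(3) for c in range(3)} == full
        for br in (0, 3, 6) for bc in (0, 3, 6)
    )
    return rows and cols and blocks

# A classic shifting pattern that yields one valid solved grid.
grid = [[(r * 3 + r // 3 + c) % 9 + 1 for c in range(9)] for r in range(9)]
```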
Examples of visual Sudoku puzzles created from MNIST digits along with solutions:
python3 create_visual_sudoku_dataset.py
Each dataset consists of 5 splits of the examples into training/validation and test parts. The number of splits and the number of trn/val/tst examples can be specified in the header of the scripts.
For each dataset, the scripts generate examples with different amounts of (randomly) missing labels. The amount of missing labels is set to 0%, 10% and 20%; it can be modified in the header of the scripts.
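The label-removal step can be sketched as follows (a minimal illustration; the function name `mask_labels` and the `-1` missing-label marker are assumptions, not taken from the scripts):

```python
import random

def mask_labels(y, frac, rng, missing=-1):
    """Replace a given fraction of labels with a 'missing' marker,
    choosing the positions uniformly at random. (Illustrative sketch.)"""
    y = list(y)
    n_miss = round(frac * len(y))
    for i in rng.sample(range(len(y)), n_miss):
        y[i] = missing
    return y

rng = random.Random(0)
labels = list(range(10))
masked = mask_labels(labels, 0.2, rng)  # 20% of labels removed
```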
The configuration of experiments for the M3N algorithm with the MANA loss is config/adam_advhomo.yaml. The configuration of the M3N algorithm with the Margin-rescaling loss is config/adam_mrhomo.yaml. The regularization constants to try are defined by the item lambda and the sizes of the training set by n_examples.
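As a hypothetical illustration, the two items might look like the following fragment (only the keys lambda and n_examples come from this README; the values shown are placeholders, not the actual configuration):

```yaml
# Illustrative values only -- edit to match your experiment grid.
lambda: [0.001, 0.01, 0.1, 1.0]   # regularization constants to try
n_examples: [100, 500, 1000]      # training set sizes to try
```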
On a computer with a single CPU, run the following scripts:
./train_hmc.sh
./train_sudoku.sh
./train_visual_sudoku.sh
On a computer cluster with SLURM, invoke the following scripts:
sbatch -n15 train_hmc.slurm
sbatch -n15 train_sudoku.slurm
sbatch -n15 train_visual_sudoku.slurm
On a computer with a single CPU, run the following scripts:
./eval_hmc.sh
./eval_sudoku.sh
./eval_visual_sudoku.sh
On a computer cluster with SLURM, invoke the following scripts:
sbatch -n15 eval_hmc.slurm
sbatch -n15 eval_sudoku.slurm
sbatch -n15 eval_visual_sudoku.slurm
Result visualization is in show_results.ipynb.