This is the code for the paper Planning spatial networks with Monte Carlo tree search by Victor-Alexandru Darvariu, Stephen Hailes and Mirco Musolesi, Proceedings of the Royal Society A, 479(2269):20220383, 2023. If you use this code, please consider citing:
```
@article{darvariu2023planning,
  author    = {Darvariu, Victor-Alexandru and Hailes, Stephen and Musolesi, Mirco},
  title     = {Planning spatial networks with {Monte Carlo} tree search},
  journal   = {Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences},
  year      = {2023},
  publisher = {The Royal Society Publishing},
  volume    = {479},
  number    = {2269},
  pages     = {20220383},
}
```
This code is released under the MIT license.
Please ensure that you clone this repository into a directory named `relnet`, e.g.:

```
git clone git@github.com:VictorDarvariu/planning-spatial-networks-mcts.git relnet
```
Currently tested on Linux and macOS (specifically, CentOS 7.4.1708 and macOS Big Sur 11.2.3); it can also be adapted to Windows through WSL. The project makes heavy use of Docker (see the Docker documentation for how to install it, e.g. on CentOS) and has been tested with Docker 19.03. The use of Docker largely does away with dependency and setup headaches, making it significantly easier to reproduce the reported results.
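A quick way to confirm the prerequisites on the host, for example (Compose is assumed here because `docker-compose.yml` is referenced later in this README):

```
docker --version          # tested with Docker 19.03
docker-compose --version  # Compose is used to manage the containers (see docker-compose.yml below)
```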
The Docker setup uses Unix groups to control permissions. You can reuse an existing group that you are a member of, or create a new group:

```
groupadd -g GID GNAME
```

and add your user to it:

```
usermod -a -G GNAME MYUSERNAME
```
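For instance, with concrete but purely illustrative values (pick a free group ID, group name and user name appropriate for your system):

```
# Illustrative values only.
sudo groupadd -g 1750 relnetgroup
sudo usermod -a -G relnetgroup john
```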
Create a file `relnet.env` at the root of the project (see `relnet_example.env`) and adjust the paths within: this is where some data generated by the container will be stored. Also specify the group ID and name created/selected above.
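As a rough sketch, the file might look like the following; the definitive variable names are in `relnet_example.env`, the values below are placeholders, and the `RN_GID` name in particular is an assumption inferred from the rest of this README:

```
# relnet.env -- illustrative values only; see relnet_example.env for the authoritative template.
RN_GID=1750                                    # group ID selected above (variable name assumed)
RN_GNAME=relnetgroup                           # group name selected above
RN_EXPERIMENT_DATA_DIR=/home/john/relnet_data  # where container-generated data will be stored
```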
Add the following lines to your `.bashrc`, replacing `/home/john/git/relnet` with the path where the repository is cloned:

```
export RN_SOURCE_DIR='/home/john/git/relnet'
set -a
. $RN_SOURCE_DIR/relnet.env
set +a
export PATH=$PATH:$RN_SOURCE_DIR/scripts
```
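After reloading your shell, a quick sanity check (purely illustrative) confirms the variables and scripts are visible:

```
source ~/.bashrc
echo $RN_SOURCE_DIR        # should print the path where the repository is cloned
ls $RN_SOURCE_DIR/scripts  # should list the helper scripts referenced below
```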
Make the scripts executable (e.g. `chmod u+x scripts/*`) the first time after cloning the repository, and run `apply_permissions.sh` in order to create the necessary directories and set their permissions.
Some scripts are provided for convenience. To build the containers (note: this will take a significant amount of time, e.g. 2 hours, as some packages are built from source):

```
update_container.sh
```

To start them:

```
manage_container.sh up
```

To stop them:

```
manage_container.sh stop
```

To purge the queue and restart the containers (useful for killing tasks that were launched):

```
purge_and_restart.sh
```
To take maximum advantage of your machine's capacity, you may want to tweak the number of threads for the workers. This configuration is provided in `projectconfig.py`.

Additionally, you may want to enforce certain memory limits for your workers to avoid OOM errors. This can be tweaked in `docker-compose.yml`.
It is also relatively straightforward to add more workers from different machines you control. For this, you will need to mount the volumes on network-attached storage (i.e., make sure the paths provided in `relnet.env` are network-accessible) and adjust the locations of the backend and queue in `projectconfig.py` to a network location instead of localhost. On the other machines, only start the worker container (see e.g. `manage_container.sh`).
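A rough sketch of the per-machine steps (not a definitive recipe; the compose service name `relnet-worker-cpu` is assumed from the container name used elsewhere in this README, so adjust it to match `docker-compose.yml`):

```
# On each additional worker machine:
# 1. Clone the repository and set up relnet.env / .bashrc as on the main machine,
#    pointing RN_EXPERIMENT_DATA_DIR at the shared, network-accessible storage.
# 2. Start only the worker service:
docker-compose up -d relnet-worker-cpu
```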
- Copy the `rw_network_data.zip` file provided to `$RN_EXPERIMENT_DATA_DIR/real_world_graphs/raw_data`.
- Then, unzip it: `unzip rw_network_data.zip`
- Re-grant group permissions so the files can be read/modified by container users: `chgrp -R $RN_GNAME $RN_EXPERIMENT_DATA_DIR/real_world_graphs; chmod -R g+rwx $RN_EXPERIMENT_DATA_DIR/real_world_graphs/`
- Delete the zip file: `rm rw_network_data.zip`
- Run the following commands to wrangle the data into the expected formats (an equivalent loop form is sketched after the commands):

```
docker exec -it relnet-worker-cpu /bin/bash -c "python relnet/data_wrangling/process_networks.py --include_geom_coords --dataset metro --task clean"
docker exec -it relnet-worker-cpu /bin/bash -c "python relnet/data_wrangling/process_networks.py --include_geom_coords --dataset metro --task process"
docker exec -it relnet-worker-cpu /bin/bash -c "python relnet/data_wrangling/process_networks.py --include_geom_coords --dataset internet_topology --task clean"
docker exec -it relnet-worker-cpu /bin/bash -c "python relnet/data_wrangling/process_networks.py --include_geom_coords --dataset internet_topology --task process"
```
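As a small convenience sketch, the same four invocations can be expressed as a loop over the two datasets and tasks (this only re-uses the commands above):

```
for ds in metro internet_topology; do
  for task in clean process; do
    docker exec -it relnet-worker-cpu /bin/bash -c \
      "python relnet/data_wrangling/process_networks.py --include_geom_coords --dataset $ds --task $task"
  done
done
```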
Synthetic data will be automatically generated when the experiments are run and stored to `$RN_EXPERIMENT_DIR/stored_graphs`.
There are several services running on the `manager` node:

- Jupyter notebook server: http://localhost:8888
- Flower for queue statistics: http://localhost:5555
- Tensorboard (currently disabled due to its large memory footprint): http://localhost:6006
- RabbitMQ management: http://localhost:15672
The first time Jupyter is accessed, it will prompt for a token to enable password configuration. The token can be grabbed by running `docker exec -it relnet-manager /bin/bash -c "jupyter notebook list"`.
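If only the token itself is needed, it can be extracted from that output, for example (the pattern assumes the usual hexadecimal token format):

```
docker exec relnet-manager /bin/bash -c "jupyter notebook list" | grep -o 'token=[0-9a-f]*'
```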
Experiment data and results are stored partly as files (under your configured `$RN_EXPERIMENT_DATA_DIR`) and partly in a MongoDB database.
To access the MongoDB database with a GUI, you can use a MongoDB client such as Robo3T and point it to `localhost:27017`.
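Alternatively, from the command line (a sketch that assumes a MongoDB shell client is installed on the host; it is not part of this project's setup):

```
mongo mongodb://localhost:27017
# then, inside the shell:
#   show dbs
#   use <database name>
#   show collections
```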
Some functionality is provided in `relnet/evaluation/storage.py` to insert and retrieve data; you can use it in e.g. analysis notebooks.
Experiments are launched from the manager container and processed in parallel by the workers.
The file `relnet/evaluation/experiment_conditions.py` contains the configuration for the experiments reported in the paper, but you may modify e.g. the agents, objective functions, hyperparameters etc. to suit your needs.
Then, you can launch all the experiments as follows:

```
run_synth.sh
run_ablations.sh
run_synth_sg_uct.sh
run_rw.sh
run_timings_rw.sh
run_timings_synth.sh 25
# ...
run_timings_synth.sh 200
run_ar_experiment.sh
```
- You can navigate to http://localhost:5555 for the Flower interface, which shows the progress of the tasks being processed in the queue. You may also check the logs for both manager and workers at `$RN_EXPERIMENT_DATA_DIR/logs` (a small example follows below).
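For example, to watch the logs from the host while experiments are running (the exact file names under `logs/` will depend on your setup):

```
ls $RN_EXPERIMENT_DATA_DIR/logs
tail -f $RN_EXPERIMENT_DATA_DIR/logs/*.log   # adjust the glob to the actual log file names
```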
Jupyter notebooks are used to perform the data analysis and produce tables and figures. Navigate to http://localhost:8888, then to the notebooks folder.
All tables and result figures can be obtained by opening the notebooks below, selecting the `py3-relnet` kernel and running all cells. The resulting .pdf figures and .tex tables can be found at `$RN_EXPERIMENT_DIR/aggregate`, except for Figure 4, which will be under `$RN_EXPERIMENT_DIR/ar_test`.
The relationship between notebooks and figures/tables is as follows:

- `Evaluation.ipynb`: Tables 2, 3, 4; Figure 4
- `Real_World_Graphs_Summary.ipynb`: Table 1
- `Timings_Experiments.ipynb`: Table 5, Figure 5
- `AR_Experiment.ipynb`: Figure 1 in the supplementary material
You may also use the additional `Hyperparam_Optimisation.ipynb` notebook to visualize the results of hyperparameter optimization at a granular level.
In case the `py3-relnet` kernel is not found, try reinstalling the kernel by running `docker exec -it -u 0 relnet-manager /bin/bash -c "source activate relnet-cenv; python -m ipykernel install --user --name relnet --display-name py3-relnet"`.
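To check that the kernel is now registered, the installed kernelspecs can be listed inside the manager container, e.g.:

```
docker exec -it relnet-manager /bin/bash -c "source activate relnet-cenv; jupyter kernelspec list"
```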