This repository houses the code for the pCQO-MIS method. It provides the tools and implementations used for the experiments performed in the submission.
- Install LibTorch.
- Clone the repository and navigate to the ./cpp_impl/build directory.
- Run the following cmake command:

    cmake -DCMAKE_PREFIX_PATH={path to libtorch} ..

- Build the program:

    cmake --build . --config Release

- Execute the program:

    ./pcqomis ./path/to/directory/with/graphs > results.txt

  Make sure that the graphs you test are in DIMACS text format (a small example is shown after this list).
- Optional: analyze the results of the solver using ./cpp_impl/output.py.
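For reference, a graph in DIMACS text format starts with a `p` line giving the vertex and edge counts, followed by one `e` line per edge (lines starting with `c` are comments). A minimal example for a path graph on four vertices:

```
c path graph on 4 vertices
p edge 4 3
e 1 2
e 2 3
e 3 4
```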
- Python 3.10
- Required libraries:
  - pandas
  - torch
  - gurobipy
  - ortools
- Clone the repository and navigate to the repository's root directory.
- Create a virtual environment using venv:

    python3 -m venv .venv

- Activate the environment:

    source .venv/bin/activate

- Install the required Python libraries:

    pip install -r requirements.txt

- (If you want to run Gurobi) Obtain a Gurobi license and install it on the machine you will be running this repository on.
- (If you want to run ReduMIS) Clone the KaMIS project, build a copy of the ReduMIS program, and place the binary in the external folder of this repository.
- Browse the /graphs folder to retrieve the datasets used in the original experiments.
- Run the benchmarking suite:

    python benchmark.py

Specify the directories containing the graph data by uncommenting the appropriate lines in the graph_directories list within benchmark.py. For example:

    graph_directories = [
        "./graphs/satlib/m403",
        "./graphs/satlib/m411",
        "./graphs/satlib/m418",
    ]
Define the solvers you want to use in the solvers list. Uncomment the solvers you want to benchmark and specify their parameters. For example:

    solvers = [
        {
            "name": "Gurobi",
            "class": GurobiMIS,
            "params": {"time_limit": 100},
        },
        {
            "name": "CPSAT",
            "class": CPSATMIS,
            "params": {"time_limit": 30},
        },
    ]
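The pCQO-MIS solver itself is enabled the same way. The entry below is only an illustrative sketch: the class name `PCQOMIS` and the `time_limit` value are assumptions (check the commented entries in benchmark.py for the actual names); the `std` parameter is the one read by the degree-based initializer shown further down.

```python
solvers = [
    {
        "name": "pCQO-MIS",
        "class": PCQOMIS,       # hypothetical class name; see benchmark.py for the real one
        "params": {
            "time_limit": 30,   # assumed, mirroring the entries above
            "std": 0.1,         # consumed by the degree-based initializer below
        },
    },
]
```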
Run the script to start the benchmarking process:

    python benchmark.py

The script saves intermediate results at regular intervals defined by SOLUTION_SAVE_INTERVAL. The final results are saved at the end of the benchmarking process. Output files are saved as CSV files named zero_to_stage_{current_stage}_of_{total_stages}_total_stages.csv.
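Because the checkpoint files follow that naming pattern, the most recent one can be located with a few lines of Python; this is a small sketch assuming the CSVs are written to the working directory:

```python
import glob
import os

# Find the most recent checkpoint CSV written by benchmark.py,
# based on the naming pattern described above.
checkpoints = glob.glob("zero_to_stage_*_of_*_total_stages.csv")
latest = max(checkpoints, key=os.path.getmtime) if checkpoints else None
print(latest)
```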
The script includes optional initializers for the solvers:
- Default initializer: Uniform distribution.
- Degree-based initializer: Initialize values based on node degrees.
Specify the relevant initializer in the solver's params. For example, the degree-based initializer builds a per-node mean vector from the node degrees and uses it as follows:
    import torch

    # Build a per-node mean: low-degree nodes get means near 1,
    # the maximum-degree node gets a mean of 0.
    mean_vector = []
    degrees = dict(graph["data"].degree())
    # Find the maximum degree
    max_degree = max(degrees.values())
    for _, degree in graph["data"].degree():
        degree_init = 1 - degree / max_degree
        mean_vector.append(degree_init)

    # Rescale so the minimum-degree node is initialized at exactly 1.
    min_degree_initialization = max(mean_vector)
    for i in range(len(mean_vector)):
        mean_vector[i] = mean_vector[i] / min_degree_initialization

    # Sample initial values from a normal distribution centered on the degree-based
    # means, with the standard deviation taken from the solver's params.
    solver_instance.value_initializer = lambda _: torch.normal(
        mean=torch.Tensor(mean_vector), std=solver["params"]["std"]
    )
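For comparison, the default uniform initializer can be expressed through the same hook. This is only a sketch, assuming the same context as the degree-based example above (the initializer may ignore its argument, and graph["data"] is a NetworkX graph):

```python
import torch

# Hedged sketch of a default uniform initializer: one value per node, drawn from U[0, 1).
num_nodes = graph["data"].number_of_nodes()
solver_instance.value_initializer = lambda _: torch.rand(num_nodes)
```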
The script outputs a CSV file containing the results for each graph and solver, including solution sizes and the time taken by each solver.
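A quick way to inspect a finished results file with pandas (the filename below is just an example of the naming pattern; the exact columns depend on which solvers were run):

```python
import pandas as pd

# Load one results CSV produced by benchmark.py and summarize it.
df = pd.read_csv("zero_to_stage_10_of_10_total_stages.csv")  # example filename
print(df.columns.tolist())        # inspect the solution-size and timing columns
print(df.describe(include="all"))
```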
- Ensure the graph data and solver implementations are correctly set up and accessible.
- Adjust the SOLUTION_SAVE_INTERVAL as needed to control the frequency of checkpoint saves.
- The benchmarking process may be time-consuming depending on the number and size of graphs and the solvers used.
- Large datasets that exceed local RAM can be run using the benchmark_large_graphs.py script.