Surrogate-Driven Design Optimization with Uncertainty Constraints

This repository (aims-umich/surrogate-uncertainty) contains the datasets and scripts used to analyze how training data quality, in particular simulation uncertainty, affects the performance of surrogate models in high-fidelity Monte Carlo–driven optimization workflows. It includes all notebooks used for data preparation, surrogate hyperparameter tuning, optimization, validation, and the 10-run uncertainty analysis for both the converter and reflector benchmark cases.

Paper

Erdem, O. F., Broughton, D. P., Svoboda, J., Huang, C., & Radaideh, M. I. (2025).
Surrogate-driven design optimization with uncertainty constraints in Monte Carlo simulations.
Energy and AI.
https://www.sciencedirect.com/science/article/pii/S2666546825001879?via%3Dihub

Environment Installation

A single Conda environment is used for all steps of the pipeline.

conda env create -f environment.yml
conda activate surrogate_uncertainty
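Before launching the notebooks, it can be worth confirming that the environment actually resolved. A small, optional sanity check — generic conda commands of my own, not a script shipped with the repository:

```shell
# Check that conda is on the path and that the environment from environment.yml
# was created; print a one-line status either way.
if command -v conda >/dev/null 2>&1 && conda env list | grep -q "surrogate_uncertainty"; then
  status_msg="surrogate_uncertainty environment is available"
else
  status_msg="surrogate_uncertainty environment not found; run: conda env create -f environment.yml"
fi
echo "$status_msg"
```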

How to Generate the Results

The Converter and Reflector subdirectories contain an identical six-step workflow. Running the notebooks in order fully reproduces the datasets, surrogate models, optimization runs, and uncertainty analyses presented in the paper.

The Converter workflow is shown below; run each command from the directory listed for its step (each writes its log to output.log in that directory). Repeat the same steps under the Reflector/ directory to generate the reflector results.


Step 1 — Data Exploration and Plotting

Directory: Converter/Data/
Notebook: 1. conv_data_plot.ipynb

nohup jupyter nbconvert --to notebook --execute --inplace "1. conv_data_plot.ipynb" > output.log 2>&1 &

Step 2 — Surrogate Hyperparameter Tuning

Directory: Converter/Surrogate Hypertuner/
Notebook: 2. conv_hypertuner.ipynb

nohup jupyter nbconvert --to notebook --execute --inplace "2. conv_hypertuner.ipynb" > output.log 2>&1 &

Step 3 — Surrogate-Based Optimization

Directory: Converter/
Notebook: 3. converter_optimizer.nbconvert.ipynb

nohup jupyter nbconvert --to notebook --execute --inplace "3. converter_optimizer.nbconvert.ipynb" > output.log 2>&1 &

Step 4 — Model Validation

Directory: Converter/Validation/
Notebook: 4. conv_model_predictions.ipynb

nohup jupyter nbconvert --to notebook --execute --inplace "4. conv_model_predictions.ipynb" > output.log 2>&1 &

Step 5 — 10-Run Uncertainty Assessment

Directory: Converter/Test with 10 runs/
Notebook: 5. converter_10fold_uncertainty_runner.ipynb

nohup jupyter nbconvert --to notebook --execute --inplace "5. converter_10fold_uncertainty_runner.ipynb" > output.log 2>&1 &

Step 6 — 10-Run Postprocessing

Directory: Converter/Test with 10 runs/
Notebook: 6. converter_10fold_postprocessor.ipynb

nohup jupyter nbconvert --to notebook --execute --inplace "6. converter_10fold_postprocessor.ipynb" > output.log 2>&1 &

Reflector Case

To reproduce the reflector results, repeat Steps 1–6 under the Reflector/ directory.
The file structure and notebook names are identical.
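Since both cases share the same six-step layout, a single helper can drive either one by relying only on the "N. <name>.ipynb" numbering convention shown above. run_case is a hypothetical helper of my own, not a script shipped with the repository:

```shell
# Run all six numbered notebooks found under one case directory, in step order.
# Usage: run_case Converter   (then: run_case Reflector)
set -eu

run_case() {
  case_dir="$1"
  for n in 1 2 3 4 5 6; do
    # Locate the notebook for step n by its leading number.
    nb="$(find "$case_dir" -name "$n. *.ipynb" -print | head -n 1)"
    if [ -z "$nb" ]; then
      echo "step $n: no notebook found under $case_dir" >&2
      continue
    fi
    # Execute in place from the notebook's own directory.
    ( cd "$(dirname "$nb")" && \
      jupyter nbconvert --to notebook --execute --inplace "$(basename "$nb")" )
  done
}
```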
