This repository was archived by the owner on Nov 16, 2023. It is now read-only.

Commit 904157c

Hotfix master (#310)

Author: yalaudah
Parent: e136533

39 files changed (+13, -4661 lines)

docker/Dockerfile

Lines changed: 7 additions & 5 deletions

```diff
@@ -13,17 +13,19 @@ ENV LANG=C.UTF-8 LC_ALL=C.UTF-8
 ENV PATH /opt/conda/bin:$PATH
 SHELL ["/bin/bash", "-c"]
 
+WORKDIR /home/username
+
 # Install Anaconda and download the seismic-deeplearning repo
 RUN wget --quiet https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh && \
     /bin/bash ~/miniconda.sh -b -p /opt/conda && \
     rm ~/miniconda.sh && \
     ln -s /opt/conda/etc/profile.d/conda.sh /etc/profile.d/conda.sh && \
     echo ". /opt/conda/etc/profile.d/conda.sh" >> ~/.bashrc && \
     apt-get install -y zip && \
-    wget --quiet https://github.com/microsoft/seismic-deeplearning/archive/staging.zip -O staging.zip && \
-    unzip staging.zip && rm staging.zip
+    wget --quiet https://github.com/microsoft/seismic-deeplearning/archive/master.zip -O master.zip && \
+    unzip master.zip && rm master.zip
 
-RUN cd seismic-deeplearning-staging && \
+RUN cd seismic-deeplearning-master && \
     conda env create -n seismic-interpretation --file environment/anaconda/local/environment.yml && \
     source activate seismic-interpretation && \
     python -m ipykernel install --user --name seismic-interpretation && \
@@ -32,7 +34,7 @@ RUN cd seismic-deeplearning-staging && \
 
 # TODO: add back in later when Penobscot notebook is available
 # Download Penobscot dataset:
-# RUN cd seismic-deeplearning-staging && \
+# RUN cd seismic-deeplearning-master && \
 #    data_dir="/home/username/data/penobscot" && \
 #    mkdir -p "$data_dir" && \
 #    ./scripts/download_penobscot.sh "$data_dir" && \
@@ -42,7 +44,7 @@ RUN cd seismic-deeplearning-staging && \
 #    cd ..
 
 # Download F3 dataset:
-RUN cd seismic-deeplearning-staging && \
+RUN cd seismic-deeplearning-master && \
     data_dir="/home/username/data/dutch" && \
     mkdir -p "$data_dir" && \
     ./scripts/download_dutch_f3.sh "$data_dir" && \
```
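The renames above follow from how GitHub branch archives unpack: a `<branch>.zip` archive extracts to a `<repo>-<branch>` directory, so switching the download from `staging.zip` to `master.zip` forces every `cd seismic-deeplearning-staging` to become `cd seismic-deeplearning-master`. A short shell sketch of that naming convention (illustrative only; not part of the commit):

```shell
# GitHub branch archives unzip to a "<repo>-<branch>" directory,
# which is why changing the downloaded branch also changes the
# directory every RUN step must cd into.
repo="seismic-deeplearning"
branch="master"
dir="${repo}-${branch}"
echo "${dir}"   # seismic-deeplearning-master
```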

docker/README.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -22,9 +22,9 @@ This process will take a few minutes to complete.
 # Run the Docker image:
 Once the Docker image is built, you can run it anytime using the following command:
 ```bash
-sudo docker run --rm -it -p 9000:9000 -p 9001:9001 --gpus=all --shm-size 11G --network host --mount type=bind,source=$PWD/hrnetv2_w48_imagenet_pretrained.pth,target=/home/models/hrnetv2_w48_imagenet_pretrained.pth -v ~/:/home/username seismic-deeplearning
+sudo docker run --rm -it -p 9000:9000 -p 9001:9001 --gpus=all --shm-size 11G --mount type=bind,source=$PWD/hrnetv2_w48_imagenet_pretrained.pth,target=/home/models/hrnetv2_w48_imagenet_pretrained.pth seismic-deeplearning
 ```
-If you have saved the pretrained model in a different directory, make sure you replace `$PWD/hrnetv2_w48_imagenet_pretrained.pth` with the **absolute** path to the pretrained HRNet model. The command above will run a Jupyter Lab instance that you can access by clicking on the link in your terminal. You can then navigate to the notebook or script that you would like to run. By default, running the command above would mount your home directory to the Docker container, allowing you to access your files and data from within Jupyter Lab.
+If you have saved the pretrained model in a different directory, make sure you replace `$PWD/hrnetv2_w48_imagenet_pretrained.pth` with the **absolute** path to the pretrained HRNet model. The command above will run a Jupyter Lab instance that you can access by clicking on the link in your terminal. You can then navigate to the notebook or script that you would like to run.
 
 # Run TensorBoard:
 To run Tensorboard to visualize the logged metrics and results, open a terminal in Jupyter Lab, navigate to the parent of the `output` directory of your model, and run the following command:
````
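The TensorBoard command itself falls outside this hunk. A minimal sketch of what it might look like, assuming logs are written under `output` and reusing port 9001 from the `docker run` port mappings above (both the log directory and the port are assumptions, not shown in this commit):

```shell
# Hypothetical TensorBoard invocation; the logdir and port are
# assumptions based on the surrounding README text and the
# -p 9001:9001 mapping in the docker run command.
logdir="output"
port=9001
cmd="tensorboard --logdir ${logdir} --port ${port}"
echo "${cmd}"   # run this from the parent of the output directory
```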

docs/README.md

Lines changed: 0 additions & 6 deletions
This file was deleted.

examples/interpretation/README.md

Lines changed: 1 addition & 6 deletions

```diff
@@ -1,8 +1,3 @@
 The folder contains notebook examples illustrating the use of segmentation algorithms on openly available datasets. Make sure you have followed the [set up instructions](../README.md) before running these examples. We provide the following notebook examples
 
-* [Dutch F3 dataset](notebooks/F3_block_training_and_evaluation_local.ipynb): This notebook illustrates section and patch based segmentation approaches on the [Dutch F3](https://terranubis.com/datainfo/Netherlands-Offshore-F3-Block-Complete) open dataset. This notebook uses denconvolution based segmentation algorithm on 2D patches. The notebook will guide you through visualization of the input volume, setting up model training and evaluation.
-
-
-* [Penobscot dataset](notebooks/HRNet_Penobscot_demo_notebook.ipynb):
-In this notebook, we demonstrate how to train an [HRNet](https://github.com/HRNet/HRNet-Semantic-Segmentation) model for facies prediction using [Penobscot](https://terranubis.com/datainfo/Penobscot) dataset. The Penobscot 3D seismic dataset was acquired in the Scotian shelf, offshore Nova Scotia, Canada. This notebook illustrates the use of HRNet based segmentation algorithm on the dataset. Details of HRNet based model can be found [here](https://arxiv.org/abs/1904.04514)
-
+* [Dutch F3 dataset](notebooks/Dutch_F3_patch_model_training_and_evaluation.ipynb): This notebook illustrates section and patch based segmentation approaches on the [Dutch F3](https://terranubis.com/datainfo/Netherlands-Offshore-F3-Block-Complete) open dataset. This notebook uses denconvolution based segmentation algorithm on 2D patches. The notebook will guide you through visualization of the input volume, setting up model training and evaluation.
```
