Add a short description here!
A longer description of your project goes here...
In order to set up the necessary environment:

- create an environment `test_ds` with the help of conda:
  `conda env create -f environment.yaml`
- activate the new environment with:
  `conda activate test_ds`
- install `test_ds` with:
  `python setup.py install` (or `python setup.py develop`)
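To confirm that everything went through, a quick sanity check along these lines should work (a minimal sketch; it only assumes the package installed cleanly into the new environment):

```bash
# Activate the environment and try importing the freshly installed package.
conda activate test_ds
python -c "import test_ds; print(test_ds.__name__)"  # should print: test_ds
```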
Optional and needed only once after `git clone`:

- install several pre-commit git hooks with:
  `pre-commit install`
  and checkout the configuration under `.pre-commit-config.yaml`.
  The `-n, --no-verify` flag of `git commit` can be used to deactivate pre-commit hooks temporarily (see the sketch after this list).
- install nbstripout git hooks to remove the output cells of committed notebooks with:
  `nbstripout --install --attributes notebooks/.gitattributes`
  This is useful to avoid large diffs due to plots in your notebooks. A simple `nbstripout --uninstall` will revert these changes.
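As a small illustration of the hook workflow described above (the commit message is just a placeholder):

```bash
# Run all configured pre-commit hooks once against the whole repository,
# e.g. right after `pre-commit install`.
pre-commit run --all-files

# Temporarily bypass the hooks for a single commit (-n is short for --no-verify).
git commit -n -m "WIP: bypass pre-commit hooks for this commit"
```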
Then take a look into the `scripts` and `notebooks` folders.
- Always keep your abstract (unpinned) dependencies updated in `environment.yaml` and eventually in `setup.cfg` if you want to ship and install your package via `pip` later on.
- Create concrete dependencies as `environment.lock.yaml` for the exact reproduction of your environment with:
  `conda env export -n test_ds -f environment.lock.yaml`
  For multi-OS development, consider using `--no-builds` during the export (see the sketch after this list).
- Update your current environment with respect to a new `environment.lock.yaml` using:
  `conda env update -f environment.lock.yaml --prune`
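Put together, a typical round trip after adding a new dependency to `environment.yaml` might look like this (a sketch of the commands listed above, not an additional requirement):

```bash
# Apply the updated abstract spec to the live environment.
conda env update -n test_ds -f environment.yaml --prune

# Pin the result; --no-builds omits platform-specific build strings,
# which keeps the lock file usable across operating systems.
conda env export -n test_ds --no-builds -f environment.lock.yaml

# On another machine (or in CI), reproduce the pinned environment.
conda env update -f environment.lock.yaml --prune
```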
The project is organized as follows:

```
├── AUTHORS.rst             <- List of developers and maintainers.
├── CHANGELOG.rst           <- Changelog to keep track of new features and fixes.
├── LICENSE.txt             <- License as chosen on the command-line.
├── README.md               <- The top-level README for developers.
├── configs                 <- Directory for configurations of model & application.
├── data
│   ├── external            <- Data from third party sources.
│   ├── interim             <- Intermediate data that has been transformed.
│   ├── processed           <- The final, canonical data sets for modeling.
│   └── raw                 <- The original, immutable data dump.
├── docs                    <- Directory for Sphinx documentation in rst or md.
├── environment.yaml        <- The conda environment file for reproducibility.
├── models                  <- Trained and serialized models, model predictions,
│                              or model summaries.
├── notebooks               <- Jupyter notebooks. Naming convention is a number (for
│                              ordering), the creator's initials and a description,
│                              e.g. `1.0-fw-initial-data-exploration`.
├── references              <- Data dictionaries, manuals, and all other materials.
├── reports                 <- Generated analysis as HTML, PDF, LaTeX, etc.
│   └── figures             <- Generated plots and figures for reports.
├── scripts                 <- Analysis and production scripts which import the
│                              actual PYTHON_PKG, e.g. train_model.
├── setup.cfg               <- Declarative configuration of your project.
├── setup.py                <- Make this project pip installable with `pip install -e`
│                              or `python setup.py develop`.
├── src
│   └── test_ds             <- Actual Python package where the main functionality goes.
├── tests                   <- Unit tests which can be run with `py.test` or
│                              `python setup.py test`.
├── .coveragerc             <- Configuration for coverage reports of unit tests.
├── .isort.cfg              <- Configuration for git hook that sorts imports.
└── .pre-commit-config.yaml <- Configuration of pre-commit git hooks.
```
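For orientation, a few common entry points into this layout (the script name `train_model.py` is only the hypothetical example mentioned in the tree above):

```bash
conda activate test_ds

# Run the unit tests under `tests/` (either invocation listed above works).
py.test

# Run an analysis script from `scripts/`; it imports the package under `src/`.
python scripts/train_model.py
```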
This project has been set up using PyScaffold 3.2.2b1. For details and usage information on PyScaffold see https://pyscaffold.org/.