
End-to-end ML recipe showcasing project layout, MLflow tracking, persistent storage with SQLite3 and MinIO, containerization with Docker, and industry coding standards.


End-to-End-ML-Recipe (employee burnout prediction)

Setup conda env

Set up the conda virtual environment from the environment.yml file in this repo:

conda env create -f environment.yml
conda activate emp_burnout

Install the package

Install the codebase as a package (via setup.py):

pip install .
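
If you intend to modify the code while keeping it installed, pip's standard editable mode may be more convenient (a general pip feature, not specific to this repo):

    pip install -e .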

Usage

  • To run the servers (MLflow server, MinIO, NGINX), start docker-compose:

    docker-compose up 
    • Make sure to configure the volumes (check host paths) in the docker-compose.yml file as needed.
    • These servers are only needed for experiment tracking. If you don't need MLflow tracking, you can skip this step; otherwise, a sketch of how a job can log to this stack follows the list below.
  • To serve the client REST APIs, start the app with uvicorn:

    uvicorn emp_burnout.app:app
  • Head over to localhost:8000/docs to view the Swagger UI for the exposed REST APIs (a quick programmatic check of the routes is sketched after this list).

  • For the job config files, refer to configs/train.yml and configs/predict.yml.

    • train.yml also contains the training hyperparameters, which can be modified as needed; a sketch of loading such a config with Pydantic follows this list.
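
As a rough illustration of how a training job can talk to the tracking stack started by docker-compose, the snippet below points MLflow at the tracking server and at the MinIO-backed artifact store before logging a run. The URLs, credentials, experiment name, and parameter/metric names are assumptions for illustration only; check docker-compose.yml and the job configs for the actual values.

    import os
    import mlflow

    # Assumed endpoints/credentials -- verify against docker-compose.yml.
    os.environ["MLFLOW_S3_ENDPOINT_URL"] = "http://localhost:9000"  # MinIO
    os.environ["AWS_ACCESS_KEY_ID"] = "minio"
    os.environ["AWS_SECRET_ACCESS_KEY"] = "minio123"

    mlflow.set_tracking_uri("http://localhost:5000")  # MLflow server (or the NGINX front)
    mlflow.set_experiment("emp_burnout")

    with mlflow.start_run():
        mlflow.log_param("model", "example_model")  # hypothetical parameter
        mlflow.log_metric("rmse", 0.0)              # hypothetical metric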
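
Since Swagger is served at /docs, the app likely follows FastAPI defaults, in which case the OpenAPI schema is also available at /openapi.json. The minimal sketch below (using the requests library) lists the exposed routes without assuming any particular endpoint names; the host and port match the uvicorn command above.

    import requests

    # Assumes the app is running locally via uvicorn on the default port 8000.
    schema = requests.get("http://localhost:8000/openapi.json").json()

    # Print every exposed route and its HTTP methods.
    for path, methods in schema["paths"].items():
        print(path, sorted(methods.keys()))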
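
The TODO list below mentions Pydantic config parsing; purely as a sketch of how a YAML job config such as configs/train.yml could be loaded into a validated object (the field names here are hypothetical, not the repo's actual schema):

    import yaml
    from pydantic import BaseModel

    class TrainConfig(BaseModel):
        # Hypothetical fields -- the real schema lives in the emp_burnout package.
        model_name: str = "random_forest"
        n_estimators: int = 100
        test_size: float = 0.2

    with open("configs/train.yml") as f:
        raw = yaml.safe_load(f)

    cfg = TrainConfig(**raw)
    print(cfg)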

TODOs:

  • Groundwork
  • Environment setup
  • Ingestion to DB
  • Docker setup
  • MLFlow Server setup
  • Training Job
  • Batch prediction Job
  • Single input prediction Job
  • Cleanup
  • Pydantic config parsing
  • REST APIs
  • Update README
