Releases: zenml-io/zenml

0.3.7

22 Apr 15:43

0.3.7 is a much-needed, long-awaited, big refactor of the Datasources paradigm of ZenML. There are also bug fixes, improvements, and more!

If you are upgrading from an older version of ZenML, please delete your old pipelines dir and .zenml folders and start afresh with zenml init.

If only working locally, this is as simple as:

cd zenml_enabled_repo
rm -rf pipelines/
rm -rf .zenml/

And then another ZenML init:

pip install --upgrade zenml
cd zenml_enabled_repo
zenml init

New Features

  • The inner workings of the BaseDatasource have been modified along with the concrete implementations. Now, there is no relation between a DataStep and a Datasource: a Datasource holds all the logic to version and track itself via the new commit paradigm.

  • Introduced a new interface for datasources, the process method, which is responsible for ingesting data and writing TFRecords to be consumed by later steps.

  • Datasource versions (snapshots) can be accessed directly via the commits paradigm: every commit is a new version of the data (a minimal sketch follows this list).

  • Added JSONDatasource and TFRecordsDatasource.
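
As a rough sketch of the new paradigm, the snippet below shows how a datasource commit might be created and read back. The import path and the commit()/commits names are assumptions based on this description, not a verified 0.3.7 API:

# Minimal sketch, assuming the import path and the commit()/commits names.
from zenml.datasources import JSONDatasource  # import path assumed

ds = JSONDatasource(name="my_json_data", path="gs://my-bucket/data.json")

# Creating a commit snapshots (versions) the data at this point in time.
commit_id = ds.commit()

# Every commit is a new version of the data and can be listed later.
print(ds.commits)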

Bug Fixes + Refactor

A big thanks to our new contributor @aak7912 for the help in this release with issue #71 and PR #75.

  • Added an example for regression.
  • compare_training_runs() now takes an optional datasource parameter to filter runs by datasource (see the sketch after this list).
  • The Trainer interface has been refined to focus on run_fn rather than other helper functions.
  • New docs released with a streamlined vision and coherent storyline: https://docs.zenml.io
  • Removed the unnecessary Torch dependency from the base ZenML installation.
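
As a rough usage sketch of the new datasource filter (the Repository access pattern and the keyword name are assumptions based on these notes, not a verified 0.3.7 API):

from zenml.repo import Repository             # import path assumed
from zenml.datasources import JSONDatasource  # import path assumed

ds = JSONDatasource(name="my_json_data", path="gs://my-bucket/data.json")

repo = Repository.get_instance()
# Only compare training runs that used `ds`; the keyword name is assumed.
repo.compare_training_runs(datasource=ds)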

0.3.6

30 Mar 14:09

0.3.6 is a more inward-facing release, part of a bigger effort to create a more flexible ZenML. As a first step, ZenML now natively supports arbitrary splits for all components, freeing us from the train/eval split paradigm. Here is an overview of the changes:

New Features

  • The inner workings of the BaseTrainerStep, the BaseEvaluatorStep and the BasePreprocesserStep have been modified along with their respective components to work with the new split_mapping. Users can now define arbitrary splits (not just train/eval), e.g. a train/eval/test split.

  • Within an instance of a TrainerStep, the user has access to input_patterns and output_patterns, which provide the required URIs per split for the input and output (test_results) examples (see the sketch after this list).

  • The built-in trainers have been modified to work with these changes.
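
A minimal sketch of a custom trainer step reading per-split URIs, assuming the base class import path and that input_patterns/output_patterns are dictionaries keyed by split name (not a verified API):

from zenml.steps.trainer import BaseTrainerStep  # import path assumed


class MyTrainer(BaseTrainerStep):
    def run_fn(self):
        # input_patterns maps each split name to the URI of its examples,
        # e.g. {"train": "...", "eval": "...", "test": "..."} (shape assumed).
        train_uri = self.input_patterns["train"]
        eval_uri = self.input_patterns["eval"]
        # output_patterns provides the URIs where outputs such as the
        # test_results should be written (exact keys assumed).
        test_results_uri = self.output_patterns["test"]
        # ... build datasets from the URIs and train the model here ...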

Bug Fixes + Refactor

A big thanks to our new super supporter @zyfzjsc988 for most of the feedback that led to bug fixes and enhancements for this release:

  • #63: Now one can specify which ports ZenML opens for its add-on applications.
  • #64: Now there is a way to list integrations with the following code:
from zenml.utils.requirements_utils import list_integrations
list_integrations()
  • Fixed #61: view_anomalies() breaking in the quickstart.
  • Analytics is now opt-in by default, to get rid of the unnecessary prompt at zenml init. Users can still freely opt out by using the CLI:
zenml config analytics opt-out

Again, the telemetry data is fully anonymized and just used to improve the product. Read more here

0.3.5

18 Mar 15:59

This release finally brings model-agnostic automatic evaluation to ZenML! Now you can easily use TFMA with any model type to produce evaluation visualizations. This means you can now use TFMA with PyTorch or Scikit - a big win for automated sliced evaluation! It also introduces a new naming scheme for differentiating between features, raw features, labels and predictions, and it solves a few big bugs in the examples directory. Read more below.

As has been the case in the last few releases, this release is yet another breaking upgrade.

If you are upgrading from an older version of ZenML, please delete your old pipelines dir and .zenml folders and start afresh with zenml init.

If only working locally, this is as simple as:

cd zenml_enabled_repo
rm -rf pipelines/
rm -rf .zenml/

And then another ZenML init:

pip install --upgrade zenml
cd zenml_enabled_repo
zenml init

New Features

  • Added a new interface to the trainer step called test_fn, which is used to produce model predictions and save them as test results.

  • Implemented a new evaluator step called AgnosticEvaluator, which is designed to work regardless of the model type, as long as you run test_fn in your trainer step.

  • The first two changes allow Torch trainer steps to be followed by an agnostic evaluator step; see the example here and the sketch after this list.

  • Proposed a new naming scheme, which is now integrated into the built-in steps, in order to make it easier to handle feature/label names.

  • Modified the TorchFeedForwardTrainer to showcase how to use TensorBoard in conjunction with PyTorch.
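
A minimal sketch of wiring the agnostic evaluator into a pipeline, assuming the evaluator import path and the constructor keywords shown here (label_key, prediction_key); consult the linked example for the real usage:

from zenml.pipelines import TrainingPipeline
from zenml.steps.evaluator import AgnosticEvaluator  # import path assumed

pipeline = TrainingPipeline(name="torch-with-agnostic-eval")

# A trainer step whose test_fn saves test results would be added here,
# e.g. pipeline.add_trainer(MyTorchTrainer(...)).

# The evaluator only consumes the saved predictions, so it works for any
# model type (TensorFlow, PyTorch, scikit, ...).
pipeline.add_evaluator(
    AgnosticEvaluator(
        label_key="label",        # assumed parameter name
        prediction_key="output",  # assumed parameter name
    )
)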

Bug Fixes + Refactor

  • Refactored how ZenML treats relative imports for custom steps. Now, rather than doing absolute imports like:
from examples.scikit.step.trainer import MyScikitTrainer 

One can also do the following:

from step.trainer import MyScikitTrainer

ZenML automatically figures out the absolute path of the module based on the root of the directory.

Big shout out to @SaraKingGH in issue #55 for raising the above issues!

0.3.4

11 Mar 17:53

This release is a big design change and refactor. It involves a significant change in the Configuration file structure, meaning this is a breaking upgrade.

If you are upgrading from an older version of ZenML, please delete your old pipelines dir and .zenml folders and start afresh with zenml init.

If only working locally, this is as simple as:

cd zenml_enabled_repo
rm -rf pipelines/
rm -rf .zenml/

And then another ZenML init:

pip install --upgrade zenml
cd zenml_enabled_repo
zenml init

New Features

  • Introduced another higher-level pipeline: the NLPPipeline. This is a generic
    NLP pipeline for a text-datasource based training task. A full example of how to use the NLPPipeline can be found here.
  • Introduced a BaseTokenizerStep as a simple mechanism to define how to train and encode using any generic
    tokenizer (again for NLP-based tasks).
  • Introduced a new HuggingFace integration, with the first concrete implementation of the BaseTokenizerStep, i.e. the HuggingFaceTokenizer.
  • Showcased how to use HuggingFace with the ZenML TrainerStep in the NLP example (a minimal sketch follows this list).
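
A minimal sketch of the new NLP pipeline with the HuggingFace tokenizer step, assuming the import paths and the add_tokenizer/text_feature names (see the linked NLP example for the real usage):

from zenml.pipelines import NLPPipeline                  # import path assumed
from zenml.steps.tokenizer import HuggingFaceTokenizer   # import path assumed

nlp_pipeline = NLPPipeline(name="my-nlp-pipeline")

# The tokenizer step defines how to train and encode with a generic
# tokenizer; the method and argument names here are illustrative only.
nlp_pipeline.add_tokenizer(HuggingFaceTokenizer(text_feature="text"))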

Bug Fixes + Refactor

  • Significant change to imports: imports are now way simpler and more user-friendly. E.g. instead of:
from zenml.core.pipelines.training_pipeline import TrainingPipeline

A user can simply do:

from zenml.pipelines import TrainingPipeline

The caveat is of course that this might involve a re-write of older ZenML code imports.

Note: Future releases are also expected to be breaking. Until announced, please expect that upgrading ZenML versions may cause older-ZenML generated pipelines to behave unexpectedly.

Special shout-out to @nicholasmaiot for major contributions to this release!

0.3.3

26 Feb 16:02

This release is a significant one as it includes the first version of the AWS integration. It allows you to use ZenML to launch an EC2 instance as an orchestrator and execute a ZenML pipeline, possibly coupled with an S3 artifact store and an RDS metadata store.

It is a new feature and it does not include any breaking changes.

To install ZenML with the AWS integration attached, run:

pip install --upgrade zenml[aws]
zenml init

New Features

  • OrchestratorAWSBackend implemented to launch an EC2 instance as the orchestrator.
  • The new orchestrator backend can be coupled with S3 and RDS.
  • Implemented an example which covers the basic workflow if you would like to start testing it right away (a minimal sketch follows this list).
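
A minimal sketch of running a pipeline on the AWS orchestrator backend, assuming the import path, the constructor keywords (instance_type, region) and the run(backend=...) call shape; see the linked example for the real usage:

from zenml.pipelines import TrainingPipeline
from zenml.backends.orchestrator import OrchestratorAWSBackend  # import path assumed

pipeline = TrainingPipeline(name="aws-orchestrated-pipeline")
# ... add datasource, preprocesser, trainer and evaluator steps here ...

pipeline.run(
    backend=OrchestratorAWSBackend(
        instance_type="t2.medium",  # assumed parameter name
        region="eu-central-1",      # assumed parameter name
    )
)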

Bug Fixes + Refactor

  • More examples for advanced use-cases will follow in the future.
  • Fixed numerous small bugs and made small refinements.

0.3.2

12 Feb 12:27

This release was pushed out early to get the PostgreSQL datasource out quicker.

To upgrade:

pip install --upgrade zenml

New Features

Bug Fixes + Refactor

  • Slight change to telemetry utils: opting out now also sends a signal.

0.3.1

05 Feb 18:41

This release is a big design change and refactor. It involves a significant change in the Configuration file structure, meaning this is a breaking upgrade. If you are upgrading from 0.2.0, please delete your old pipelines dir and .zenml folders and start afresh with zenml init.

If only working locally, this is as simple as:

cd zenml_enabled_repo
rm -rf pipelines/
rm -rf .zenml/

And then another init:

pip install --upgrade zenml
zenml init

New Features

Bug Fixes + Refactor

  • Now you can run pipelines from within any subdirectory in the repo.
  • Relaxed the restriction on custom steps having sub-directories within their module.
  • Relationship between Datasource and Data Step refined.
  • Numerous small bugs and refinements to facilitate flexible API design.

Note: Future releases are also expected to be breaking. Until announced, please expect that upgrading ZenML versions may cause older-ZenML generated pipelines to behave unexpectedly.

0.2.0

22 Jan 16:20

This new release is a major one. It's the first to introduce our new integrations system, which is meant to be used to easily extend ZenML with various other ML/MLOps libraries. The first big advantage one gets is 🚀 PyTorch Support 🚀!

To upgrade:

pip install --upgrade zenml

And to enable the PyTorch extension:

pip install zenml[pytorch]

New Features

  • Introduced integrations for ZenML via the extras_require setuptools paradigm.
  • Added PyTorchTrainer support with an easily extendable TorchBaseTrainer example.
  • Restructured trainer steps to be more intuitive to extend, for both TensorFlow and PyTorch. Now, we have a TrainerStep, followed by a TFBaseTrainerStep and a TorchBaseTrainerStep (a minimal sketch follows this list).
  • The input_fn of the TorchTrainer has been implemented in a way that it can ingest from a TFRecords file. This makes ZenML one of the few projects out there
    that natively support ingesting the TFRecords format into PyTorch directly.
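
A minimal sketch of extending the new Torch trainer base class, assuming the import path and the model_fn hook name and signature (the built-in TorchBaseTrainer example is the authoritative reference):

import torch
from zenml.steps.trainer import TorchBaseTrainerStep  # import path assumed


class MyTorchTrainer(TorchBaseTrainerStep):
    def model_fn(self, train_dataset, eval_dataset):
        # Both datasets are already fed from TFRecords files by the
        # built-in input_fn, so no custom data loading is needed here.
        return torch.nn.Sequential(
            torch.nn.Linear(8, 64),
            torch.nn.ReLU(),
            torch.nn.Linear(64, 1),
        )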

Bug Fixes

  • Fixed an issue with Repository.get_zenml_dir() that caused any pipeline created below the root level to fail on creation.

Documentation Announcement

The docs are almost complete! We are at 80% completion. Keep an eye out as we update with more details on how to use/extend ZenML, and let us know via Slack if there is something missing!

0.1.5

19 Jan 20:45

New Features

  • Added a Kubernetes Orchestrator to run pipelines on a Kubernetes cluster.
  • Added timeseries support with the StandardSequencerStep.
  • Added more CLI groups such as step, datasource and pipelines. E.g. zenml pipeline list gives a list of pipelines in the current repo.
  • Completed a significant portion of the docs.
  • Refactored Step interfaces for easier integration into other libraries.
  • Added a GAN example to showcase the ImageDatasource.
  • Set up the base for more Trainer interfaces like PyTorch, scikit, etc.
  • Added the ability to see historical steps.

Bug Fixes

  • Fixed parsing of the pipelines_dir so that files other than YAML files are no longer picked up, addressing concerns raised in #13.

Upcoming changes

  • Next release will be a major one and will involve refactoring of design decisions that might cause backward incompatible changes to existing ZenML repos.

0.1.4

08 Jan 13:37

New Features

  • Ability to add a custom image to the Dataflow ProcessingBackend (a minimal sketch follows).
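
A minimal sketch of passing a custom image to the Dataflow processing backend, assuming the class name, import path and keyword names shown here:

from zenml.backends.processing import ProcessingDataFlowBackend  # import path and class name assumed

backend = ProcessingDataFlowBackend(
    project="my-gcp-project",                                # assumed parameter name
    image="eu.gcr.io/my-gcp-project/my-custom-beam:latest",  # assumed parameter name
)
# The backend would then be attached to a pipeline run, e.g.
# pipeline.run(backends=[backend]) (call shape assumed).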

Bug Fixes

  • Fixed requirements.txt and setup.py to enable local builds.
  • The pip package should now install without any requirement conflicts.
  • Added custom docs made with Jupyter Book in the docs/book folder.