
Releases: zenml-io/zenml

0.57.1

14 May 09:54
7f97fdc

This is a minor release that brings a variety of enhancements for
the new dashboard, a new update to the LLMOps guide (covering the use of rerankers in RAG pipelines), and an updated README. It also introduces some improvements to the service connectors.

We'd like to give a special thanks to @ruvilonix for their first contribution.

What's Changed

New Contributors

Full Changelog: 0.57.0...0.57.1

0.57.0

02 May 15:15
dab417c

We're excited to announce that we're open-sourcing our new and improved dashboard. This unifies the experience for OSS and cloud users, though OSS users will initially see some dashboard features unavailable in this launch release.

We're open-sourcing our dashboard for a few reasons:

  • to ensure that the dashboard experience is consistent across all users, for both the open-source and cloud versions
  • to make it easier for us to maintain and develop the dashboard, as we can share components between the two versions
  • to allow OSS contributions (and self-hosting and modifications) to the new dashboard
  • to open up possibilities for future features, particularly for our OSS users

New users of ZenML will have a better dashboard experience thanks to a much-improved onboarding sequence:

The dashboard will guide you through connecting to your server, setting up a stack, configuring service connectors, and running a pipeline.

We’ve also improved the ‘Settings’ section of the dashboard, which is now the home for configuring your repositories, secrets, and connectors, along with some other options.


What It Means for You

If you're already a cloud user, not much will change for you. You're already using the new dashboard for pipelines, models and artifacts. Your experience won’t change and for the moment you’ll continue using the old dashboard for certain components (notably for stacks and components).

If you're an open-source user, the new dashboard is now available to you as part of our latest release (0.57.0). You'll notice a completely refreshed design and a new DAG visualizer.


Unfortunately, some dashboard features are not yet ready, so you'll see instructions on how to access them via the CLI. We hope to restore these features to the product soon. (If you have a strong opinion as to which you'd like to see first, please let us know!) Specifically, secrets, stacks, and service connectors are not yet implemented in the new dashboard.

How to use the legacy dashboard

The old dashboard is still available to you. To run ZenML with the legacy dashboard, pass the --legacy flag when spinning it up:

zenml up --legacy

Note that you can’t use both the new and old dashboard at the same time.

If you’re self-hosting ZenML instead of using ZenML Cloud, you can specify which dashboard you want to use by setting the ZEN_SERVER_USE_LEGACY_DASHBOARD environment variable before deployment. The boolean value you set for this variable determines which dashboard is served for your deployment. (Dynamic switching between dashboards is not supported, so if you wish to change which dashboard a deployed server uses, you’ll need to redeploy the server after updating the environment variable.)

If you’re using ZenML Cloud, your experience won’t change with this release and your use of the dashboard remains the same.

What's Changed

Full Changelog: 0.56.4...0.57.0

0.56.4

24 Apr 13:21
17d6209

This release brings a variety of bug fixes and enhancements, including a new Comet Experiment Tracker integration, additional support for the uv package installer for zenml integration ... commands, which significantly improves the speed of integration installations and dependency management, and a new evaluation section in the LLMOps guide.

In addition, it includes a number of bug fixes and documentation updates, such as a fix for linking cached artifacts produced via save_artifact inside steps to the Model Control Plane (MCP).

🥳 Community Contributions 🥳

We'd like to give a special thanks to @christianversloot, who contributed to this release by bumping the mlflow version to 2.12.1.

What's Changed

Full Changelog: 0.56.3...0.56.4

0.56.3

09 Apr 16:44
46d40c4

This release comes with a number of bug fixes and enhancements.

With this release you can benefit from a new Lambda Labs GPU orchestrator integration in your pipelines. Lambda Labs is a cloud provider that offers GPU instances for machine learning workloads.

In this release we have also implemented a few important security improvements to the ZenML Server, mostly around Content Security Policies. Users are now also required to provide their previous password during the password change process.

The documentation was also significantly improved with the new AWS Cloud guide and the LLMOps guide, covering various aspects of the LLM lifecycle.

🥳 Community Contributions 🥳

We'd like to give a special thanks to @christianversloot who contributed to this release by adding support for Schedule.start_time to the HyperAI orchestrator.

What's Changed

Full Changelog: 0.56.2...0.56.3

0.56.2

25 Mar 22:27
68bcb3b

This release introduces a wide array of new features, enhancements, and bug fixes, with a strong emphasis on elevating the user experience and streamlining machine
learning workflows. Most notably, you can now deploy models using Hugging Face inference endpoints thanks to an open-source community contribution of this model deployer stack component!

Note that 0.56.0 and 0.56.1 were yanked and removed from PyPI due to an issue with the
Alembic versions and migrations, which could affect the database state. This release
fixes that issue.

This release also comes with a breaking change to the services
architecture.

Breaking Change

A significant change in this release is the migration of the Service (ZenML's technical term for deployment)
registration and deployment from local or remote environments to the ZenML server.
This change will be reflected in an upcoming tab in the dashboard which will
allow users to explore and see the deployed models in the dashboard with their live
status and metadata. This architectural shift also simplifies the model deployer
abstraction and streamlines the model deployment process for users by moving from
limited built-in steps to a more documented and flexible approach.

Important note: If you have models that you previously deployed with ZenML, you might
want to redeploy them to have them stored in the ZenML server and tracked by ZenML,
ensuring they appear in the dashboard.

Additionally, the find_model_server method now retrieves models (services) from the
ZenML server instead of local or remote deployment environments. As a result, any
usage of find_model_server will only return newly deployed models stored in the server.

It is also no longer recommended to call service functions like service.start().
Instead, use model_deployer.start_model_server(service_id), which will allow ZenML
to update the changed status of the service in the server.

Starting a service

Old syntax:

from zenml import step
from zenml.integrations.bentoml.services.bentoml_deployment import BentoMLDeploymentService

@step
def predictor(
    service: BentoMLDeploymentService,
) -> None:
    # starting the service
    service.start(timeout=10)

New syntax:

from zenml import step
from zenml.integrations.bentoml.model_deployers import BentoMLModelDeployer
from zenml.integrations.bentoml.services.bentoml_deployment import BentoMLDeploymentService

@step
def predictor(
    service: BentoMLDeploymentService,
) -> None:
    # starting the service
    model_deployer = BentoMLModelDeployer.get_active_model_deployer()
    model_deployer.start_model_server(service_id=service.service_id, timeout=10)

Enabling continuous deployment

Instead of the parameter previously used in the deploy_model method to replace an
existing service (if it matched the exact same pipeline name and step name, without
taking other parameters or configurations into account), there is now a new parameter,
continuous_deployment_mode, that allows you to enable continuous deployment for
the service. This ensures that the service is updated with the latest version
if it belongs to the same pipeline and step and is not already running. Otherwise,
any new deployment with a different configuration will create a new service.

from typing import Optional

from zenml import step, get_step_context
from zenml.client import Client
from zenml.integrations.mlflow.services.mlflow_deployment import (
    MLFlowDeploymentConfig,
    MLFlowDeploymentService,
)
from zenml.logger import get_logger

logger = get_logger(__name__)

@step
def deploy_model() -> Optional[MLFlowDeploymentService]:
    # Deploy a model using the MLflow Model Deployer
    zenml_client = Client()
    model_deployer = zenml_client.active_stack.model_deployer
    mlflow_deployment_config = MLFlowDeploymentConfig(
        name="mlflow-model-deployment-example",
        description="An example of deploying a model using the MLflow Model Deployer",
        pipeline_name=get_step_context().pipeline_name,
        pipeline_step_name=get_step_context().step_name,
        model_uri="runs:/<run_id>/model",  # or "models:/<model_name>/<model_version>"
        model_name="model",
        workers=1,
        mlserver=False,
        timeout=300,  # i.e. DEFAULT_SERVICE_START_STOP_TIMEOUT
    )
    service = model_deployer.deploy_model(
        mlflow_deployment_config, continuous_deployment_mode=True
    )
    logger.info(f"The deployed service info: {model_deployer.get_model_server_info(service)}")
    return service
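Conceptually, the matching behind continuous_deployment_mode can be pictured with a small pure-Python sketch. This is only an illustration of the described behavior, not ZenML's actual implementation; the DeployedService class and resolve_service helper are hypothetical:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DeployedService:
    # Hypothetical stand-in for a ZenML service record
    pipeline_name: str
    step_name: str
    model_version: str

def resolve_service(
    existing: List[DeployedService],
    pipeline_name: str,
    step_name: str,
    model_version: str,
) -> DeployedService:
    """Reuse a service only when pipeline and step names match;
    otherwise register a new one, mirroring the behavior described above."""
    for svc in existing:
        if svc.pipeline_name == pipeline_name and svc.step_name == step_name:
            svc.model_version = model_version  # update the existing service
            return svc
    new_svc = DeployedService(pipeline_name, step_name, model_version)
    existing.append(new_svc)
    return new_svc

services: List[DeployedService] = []
first = resolve_service(services, "train_pipeline", "deploy_model", "v1")
second = resolve_service(services, "train_pipeline", "deploy_model", "v2")
third = resolve_service(services, "other_pipeline", "deploy_model", "v1")
```

Under this model, the second call updates the first service in place, while the third call (a different pipeline) registers a separate service.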

Major Features and Enhancements:

  • A new Huggingface Model Deployer has been introduced, allowing you to seamlessly
    deploy your Huggingface models using ZenML. (Thank you so much @dudeperf3ct for the contribution!)
  • Faster Integration and Dependency Management: ZenML now leverages the uv library,
    significantly improving the speed of integration installations and dependency management,
    resulting in a more streamlined and efficient workflow.
  • Enhanced Logging and Status Tracking: Logging has been improved, providing better
    visibility into the state of your ZenML services.
  • Improved Artifact Store Isolation: ZenML now prevents unsafe operations that access
    data outside the scope of the artifact store, ensuring better isolation and security.
  • Added an admin user notion for user accounts and restricted certain operations
    performed via the REST interface to admin users only.
  • Rate limiting for the login API to prevent abuse and protect the server from potential
    security threats.
  • The LLM template is now supported in ZenML, allowing you to use the LLM template
    for your pipelines.
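To illustrate the rate-limiting idea from the list above, here is a hedged sketch of a sliding-window limiter. This is not ZenML's actual implementation; the LoginRateLimiter class and its parameters are hypothetical:

```python
import time
from collections import defaultdict
from typing import Dict, List, Optional

class LoginRateLimiter:
    """Allow at most `max_attempts` login attempts per user
    within a sliding window of `window_seconds`."""

    def __init__(self, max_attempts: int, window_seconds: float) -> None:
        self.max_attempts = max_attempts
        self.window = window_seconds
        self._attempts: Dict[str, List[float]] = defaultdict(list)

    def allow(self, username: str, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        # Drop attempts that fell out of the window, then check the count.
        recent = [t for t in self._attempts[username] if now - t < self.window]
        if len(recent) >= self.max_attempts:
            self._attempts[username] = recent
            return False  # too many recent attempts: reject this login
        recent.append(now)
        self._attempts[username] = recent
        return True

limiter = LoginRateLimiter(max_attempts=3, window_seconds=60.0)
# The first three attempts pass; the fourth and fifth are rejected.
results = [limiter.allow("alice", now=float(i)) for i in range(5)]
```

Per-user windows mean one account being throttled does not affect others, and old attempts expire once they age past the window.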

🥳 Community Contributions 🥳

We'd like to give a special thanks to @dudeperf3ct, who contributed to this release
by introducing the Huggingface Model Deployer. We'd also like to thank @moesio-f
for contributing a new attribute to the Kaniko image builder.
Additionally, we'd like to thank @christianversloot for his contributions to this release.

What's Changed

Read more

0.56.1 [YANKED]

21 Mar 16:40
55305bc

[NOTICE] This version introduced the new services class, which causes a bug for users migrating from older versions. 0.56.3 will be out shortly in place of this release. For now, this release has been yanked.

This is a patch release aiming to solve a dependency problem that was brought in with the new rate-limiting functionality. With 0.56.1 you no longer need starlette to run client code or to run ZenML CLI commands.

🥳 Community Contributions 🥳

We'd like to thank @christianversloot for his contribution to this release.

What's Changed

Full Changelog: 0.56.0...0.56.1

0.56.0 [YANKED]

21 Mar 09:54
75f5ece

[NOTICE] This version introduced the new services class, which causes a bug for users migrating from older versions. 0.56.3 will be out shortly in place of this release. For now, this release has been yanked.

ZenML 0.56.0 introduces a wide array of new features, enhancements, and bug fixes,
with a strong emphasis on elevating the user experience and streamlining machine
learning workflows. Most notably, you can now deploy models using Hugging Face inference endpoints thanks to an open-source community contribution of this model deployer stack component!

This release also comes with a breaking change to the services
architecture.

Breaking Change

A significant change in this release is the migration of the Service (ZenML's technical term for deployment)
registration and deployment from local or remote environments to the ZenML server.
This change will be reflected in an upcoming tab in the dashboard which will
allow users to explore and see the deployed models in the dashboard with their live
status and metadata. This architectural shift also simplifies the model deployer
abstraction and streamlines the model deployment process for users by moving from
limited built-in steps to a more documented and flexible approach.

Important note: If you have models that you previously deployed with ZenML, you might
want to redeploy them to have them stored in the ZenML server and tracked by ZenML,
ensuring they appear in the dashboard.

Additionally, the find_model_server method now retrieves models (services) from the
ZenML server instead of local or remote deployment environments. As a result, any
usage of find_model_server will only return newly deployed models stored in the server.

It is also no longer recommended to call service functions like service.start().
Instead, use model_deployer.start_model_server(service_id), which will allow ZenML
to update the changed status of the service in the server.

Starting a service

Old syntax:

from zenml import step
from zenml.integrations.bentoml.services.bentoml_deployment import BentoMLDeploymentService

@step
def predictor(
    service: BentoMLDeploymentService,
) -> None:
    # starting the service
    service.start(timeout=10)

New syntax:

from zenml import step
from zenml.integrations.bentoml.model_deployers import BentoMLModelDeployer
from zenml.integrations.bentoml.services.bentoml_deployment import BentoMLDeploymentService

@step
def predictor(
    service: BentoMLDeploymentService,
) -> None:
    # starting the service
    model_deployer = BentoMLModelDeployer.get_active_model_deployer()
    model_deployer.start_model_server(service_id=service.service_id, timeout=10)

Enabling continuous deployment

Instead of the parameter previously used in the deploy_model method to replace an
existing service (if it matched the exact same pipeline name and step name, without
taking other parameters or configurations into account), there is now a new parameter,
continuous_deployment_mode, that allows you to enable continuous deployment for
the service. This ensures that the service is updated with the latest version
if it belongs to the same pipeline and step and is not already running. Otherwise,
any new deployment with a different configuration will create a new service.

from typing import Optional

from zenml import step, get_step_context
from zenml.client import Client
from zenml.integrations.mlflow.services.mlflow_deployment import (
    MLFlowDeploymentConfig,
    MLFlowDeploymentService,
)
from zenml.logger import get_logger

logger = get_logger(__name__)

@step
def deploy_model() -> Optional[MLFlowDeploymentService]:
    # Deploy a model using the MLflow Model Deployer
    zenml_client = Client()
    model_deployer = zenml_client.active_stack.model_deployer
    mlflow_deployment_config = MLFlowDeploymentConfig(
        name="mlflow-model-deployment-example",
        description="An example of deploying a model using the MLflow Model Deployer",
        pipeline_name=get_step_context().pipeline_name,
        pipeline_step_name=get_step_context().step_name,
        model_uri="runs:/<run_id>/model",  # or "models:/<model_name>/<model_version>"
        model_name="model",
        workers=1,
        mlserver=False,
        timeout=300,  # i.e. DEFAULT_SERVICE_START_STOP_TIMEOUT
    )
    service = model_deployer.deploy_model(
        mlflow_deployment_config, continuous_deployment_mode=True
    )
    logger.info(f"The deployed service info: {model_deployer.get_model_server_info(service)}")
    return service

Major Features and Enhancements:

  • A new Huggingface Model Deployer has been introduced, allowing you to seamlessly
    deploy your Huggingface models using ZenML. (Thank you so much @dudeperf3ct for the contribution!)
  • Faster Integration and Dependency Management: ZenML now leverages the uv library,
    significantly improving the speed of integration installations and dependency management,
    resulting in a more streamlined and efficient workflow.
  • Enhanced Logging and Status Tracking: Logging has been improved, providing better
    visibility into the state of your ZenML services.
  • Improved Artifact Store Isolation: ZenML now prevents unsafe operations that access
    data outside the scope of the artifact store, ensuring better isolation and security.
  • Added an admin user notion for user accounts and restricted certain operations
    performed via the REST interface to admin users only.
  • Rate limiting for the login API to prevent abuse and protect the server from potential
    security threats.
  • The LLM template is now supported in ZenML, allowing you to use the LLM template
    for your pipelines.

🥳 Community Contributions 🥳

We'd like to give a special thanks to @dudeperf3ct, who contributed to this release
by introducing the Huggingface Model Deployer. We'd also like to thank @moesio-f
for contributing a new attribute to the Kaniko image builder.
Additionally, we'd like to thank @christianversloot for his contributions to this release.

All changes:

Read more

0.55.5

06 Mar 16:01
8e13b42

This patch contains a number of bug fixes and security improvements.

We improved the isolation of artifact stores so that various artifacts cannot be stored or accessed outside of the configured artifact store scope. Such unsafe operations are no longer allowed. This may have an impact on existing codebases if you have used unsafe file operations in the past.

To illustrate such a side effect, suppose a remote S3 artifact store is configured for the path s3://some_bucket/some_sub_folder and your code calls artifact_store.open("s3://some_bucket/some_other_folder/dummy.txt", "w"). This operation is considered unsafe because it accesses data outside the scope of the artifact store. If you really need this to achieve your goals, consider switching to s3fs or similar libraries for such cases.
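The scope check can be pictured roughly like this. This is a simplified sketch of the idea; is_within_store is a hypothetical helper, not ZenML's actual code:

```python
def is_within_store(store_root: str, path: str) -> bool:
    """Return True only if `path` falls inside the artifact store root."""
    root = store_root.rstrip("/")
    # Require the root itself or a path under it; a plain prefix match
    # would wrongly accept sibling folders like "some_sub_folder_2".
    return path == root or path.startswith(root + "/")

allowed = is_within_store(
    "s3://some_bucket/some_sub_folder",
    "s3://some_bucket/some_sub_folder/dummy.txt",
)
blocked = is_within_store(
    "s3://some_bucket/some_sub_folder",
    "s3://some_bucket/some_other_folder/dummy.txt",
)
```

Under a check like this, the example above would be rejected because the target path resolves outside the configured store root.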

Also with this release, the server global configuration is no longer stored on the server file system to prevent exposure of sensitive information.

User entities are now uniquely constrained to prevent the creation of duplicate users under certain race conditions.

What's Changed

Full Changelog: 0.55.4...0.55.5

0.55.4

29 Feb 16:49
a24ccb1

This release brings a host of enhancements and fixes across the board, including
significant improvements to our services logging and status, the integration of
model saving to the registry via CLI methods, and more robust handling of
parallel pipelines and database entities. We've also made strides in optimizing
MLflow interactions, enhancing our documentation, and ensuring our CI processes
are more robust.

Additionally, we've tackled several bug fixes and performance improvements,
making our platform even more reliable and user-friendly.

We'd like to give a special thanks to @christianversloot and @francoisserra for
their contributions.

What's Changed

Full Changelog: 0.55.3...0.55.4

0.55.3

20 Feb 09:52

This patch comes with a variety of bug fixes and documentation updates.

With this release you can now download files directly from artifact versions
that you get back from the client without the need to materialize them. If you
would like to bypass materialization entirely and just download the data or
files associated with a particular artifact version, you can use the
download_files method:

from zenml.client import Client

client = Client()
artifact = client.get_artifact_version(name_id_or_prefix="iris_dataset")
artifact.download_files("path/to/save.zip")

What's Changed

Full Changelog: 0.55.2...0.55.3