---
description: Chat with your ZenML server
---

# Chat with your ZenML server

The ZenML server supports a chat interface that allows you to interact with the server using natural language through the [Model Context Protocol (MCP)](https://modelcontextprotocol.io/). This feature enables you to query your ML pipelines, analyze performance metrics, and generate reports using conversational language instead of traditional CLI commands or dashboard interfaces.

## What is MCP?

The Model Context Protocol (MCP) is an open protocol that standardizes how applications provide context to Large Language Models (LLMs). Think of it as a "USB-C port for AI applications": a standardized way to connect AI models to different data sources and tools.

MCP follows a client-server architecture where:

- **MCP Clients**: Programs like Claude Desktop or IDEs (Cursor, Windsurf, etc.) that want to access data through MCP
- **MCP Servers**: Lightweight programs that expose specific capabilities through the standardized protocol. Our implementation is an MCP server that connects to your ZenML server.
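Under the hood, MCP messages are JSON-RPC 2.0 objects exchanged between client and server. As a minimal sketch of that exchange, the snippet below builds the standard `tools/list` request a client sends to discover a server's capabilities; the `list_pipelines` tool in the mocked-up response is hypothetical, not the ZenML MCP server's actual tool name:

```python
import json

# A JSON-RPC 2.0 request asking an MCP server which tools it exposes.
# "tools/list" is a standard MCP method; the id correlates the reply.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}
wire_message = json.dumps(request)

# A hypothetical server response advertising one tool:
response = json.loads(
    '{"jsonrpc": "2.0", "id": 1,'
    ' "result": {"tools": [{"name": "list_pipelines",'
    ' "description": "List pipelines on the ZenML server"}]}}'
)
tool_names = [tool["name"] for tool in response["result"]["tools"]]
print(tool_names)  # ['list_pipelines']
```

In practice your MCP client (Claude Desktop, Cursor, etc.) handles this handshake for you; you only see the natural-language interface on top.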
| 19 | + |
| 20 | +## Why use MCP with ZenML? |
| 21 | + |
| 22 | +The ZenML MCP Server offers several advantages for developers and teams: |
| 23 | + |
| 24 | +1. **Natural Language Interaction**: Query your ZenML metadata, code and logs using conversational language instead of memorizing CLI commands or navigating dashboard interfaces. |
| 25 | +2. **Contextual Development**: Get insights about failing pipelines or performance metrics without switching away from your development environment. |
| 26 | +3. **Accessible Analytics**: Generate custom reports and visualizations about your pipelines directly through conversation. |
| 27 | +4. **Streamlined Workflows**: Trigger pipeline runs via natural language requests when you're ready to execute. |
| 28 | + |
| 29 | +You can get a sense of how it works in the following video: |
| 30 | + |
| 31 | +[](https://www.loom.com/share/4cac0c90bd424df287ed5700e7680b14?sid=200acd11-2f1b-4953-8577-6fe0c65cad3c) |
| 32 | + |
| 33 | +## Features |
| 34 | + |
| 35 | +The ZenML MCP server provides access to core read functionality from your ZenML server, allowing you to get live information about: |
| 36 | + |
| 37 | +- Users |
| 38 | +- Stacks |
| 39 | +- Pipelines |
| 40 | +- Pipeline runs |
| 41 | +- Pipeline steps |
| 42 | +- Services |
| 43 | +- Stack components |
| 44 | +- Flavors |
| 45 | +- Pipeline run templates |
| 46 | +- Schedules |
| 47 | +- Artifacts (metadata about data artifacts, not the data itself) |
| 48 | +- Service Connectors |
| 49 | +- Step code |
| 50 | +- Step logs (if the step was run on a cloud-based stack) |
| 51 | + |
| 52 | +It also allows you to trigger new pipeline runs through existing run templates. |
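In MCP terms, triggering a run from a template is an invocation of the server's `tools/call` method. The sketch below is purely illustrative: the tool name `trigger_pipeline_run` and its argument schema are assumptions for the example, not the ZenML MCP server's actual interface.

```python
import json

# A JSON-RPC 2.0 "tools/call" request as an MCP client would send it.
# Tool name and arguments below are hypothetical placeholders.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "trigger_pipeline_run",
        "arguments": {"template_name": "simple_pipeline_template"},
    },
}
payload = json.dumps(call_request)

# Round-trip to confirm the message is well-formed JSON:
decoded = json.loads(payload)
print(decoded["params"]["name"])  # trigger_pipeline_run
```

When you ask your chat client to "run the simple pipeline again", it translates that request into a call of this shape on your behalf.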
| 53 | + |
| 54 | +## Getting Started |
| 55 | + |
| 56 | +For the most up-to-date setup instructions and code, please refer to the [ZenML |
| 57 | +MCP Server GitHub repository](https://github.com/zenml-io/mcp-zenml). We |
| 58 | +recommend using the `uv` package manager to install the dependencies since it's |
| 59 | +the most reliable and fastest setup experience. |
| 60 | + |
| 61 | +The setup process for the ZenML MCP Server is straightforward: |
| 62 | + |
| 63 | +### Prerequisites: |
| 64 | +- Access to a ZenML Cloud server |
| 65 | +- [`uv`](https://docs.astral.sh/uv/) installed locally |
| 66 | +- A local clone of the repository |
| 67 | + |
| 68 | +### Configuration: |
| 69 | + |
| 70 | +- Create an MCP config file with your ZenML server details |
| 71 | +- Configure your preferred MCP client (Claude Desktop or Cursor) |
| 72 | + |
| 73 | +For detailed setup instructions, please refer to the [GitHub repository](https://github.com/zenml-io/mcp-zenml). |
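As a rough illustration of what such an MCP config file can look like for Claude Desktop, here is a sketch; the `uv` path, the server script path, and the environment variable values are placeholders to replace with your own, and the repository README is the authoritative reference for the exact layout:

```json
{
  "mcpServers": {
    "zenml": {
      "command": "/usr/local/bin/uv",
      "args": ["run", "path/to/zenml_server.py"],
      "env": {
        "ZENML_STORE_URL": "https://your-zenml-server.example.com",
        "ZENML_STORE_API_KEY": "your-api-key"
      }
    }
  }
}
```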
| 74 | + |
| 75 | +## Example Usage |
| 76 | + |
| 77 | +Once set up, you can interact with your ZenML infrastructure through natural language. Here are some example prompts you can try: |
| 78 | + |
| 79 | +1. **Pipeline Analysis Report**: |
| 80 | + ``` |
| 81 | + Can you write me a report (as a markdown artifact) about the 'simple_pipeline' and tell the story of the history of its runs, which were successful etc., and what stacks worked, which didn't, as well as some performance metrics + recommendations? |
| 82 | + ``` |
| 83 | + |
| 84 | + |
| 85 | + |
| 86 | +2. **Comparative Pipeline Analysis**: |
| 87 | + ``` |
| 88 | + Could you analyze all our ZenML pipelines and create a comparison report (as a markdown artifact) that highlights differences in success rates, average run times, and resource usage? Please include a section on which stacks perform best for each pipeline type. |
| 89 | + ``` |
| 90 | + |
| 91 | + |
| 92 | + |
| 93 | +3. **Stack Component Analysis**: |
| 94 | + ``` |
| 95 | + Please generate a comprehensive report or dashboard on our ZenML stack components, showing which ones are most frequently used across our pipelines. Include information about version compatibility issues and performance variations. |
| 96 | + ``` |
| 97 | + |
| 98 | + |
| 99 | + |
| 100 | +## Get Involved |
| 101 | + |
| 102 | +We invite you to try the [ZenML MCP Server](https://github.com/zenml-io/mcp-zenml) and share your experiences with us through our [Slack community](https://zenml.io/slack). We're particularly interested in: |
| 103 | + |
| 104 | +- Whether you need additional write actions (creating stacks, registering components, etc.) |
| 105 | +- Examples of how you're using the server in your workflows |
| 106 | +- Suggestions for additional features or improvements |
| 107 | + |
| 108 | +Contributions and pull requests to [the core repository](https://github.com/zenml-io/mcp-zenml) are always welcome! |