Migrate Elyra to KFP v2 (cont.) #3273

Open · wants to merge 9 commits into base: main
2 changes: 1 addition & 1 deletion Makefile
@@ -165,7 +165,7 @@ uninstall-server-package:
@$(PYTHON_PIP) uninstall elyra -y

install-server-package: uninstall-server-package
$(PYTHON_PIP) install --upgrade --upgrade-strategy $(UPGRADE_STRATEGY) "$(shell find dist -name "elyra-*-py3-none-any.whl")[kfp-tekton]"
$(PYTHON_PIP) install --upgrade --upgrade-strategy $(UPGRADE_STRATEGY) "$(shell find dist -name "elyra-*-py3-none-any.whl")"

install-server: build-dependencies lint-server build-server install-server-package ## Build and install backend

3 changes: 1 addition & 2 deletions docs/source/getting_started/installation.md
@@ -39,7 +39,6 @@ You can install elyra with all optional dependencies or with specific dependenci
- `elyra[airflow]` - install the Elyra core features and support for [Apache Airflow pipelines](https://github.com/apache/airflow)
- `elyra[airflow-gitlab]` - install the Elyra core features and GitLab support for [Apache Airflow pipelines](https://github.com/apache/airflow)
- `elyra[kfp]` - install the Elyra core features and support for [Kubeflow Pipelines](https://github.com/kubeflow/pipelines)
- `elyra[kfp-tekton]` - install the Elyra core features and support for [Kubeflow Pipelines on Tekton](https://github.com/kubeflow/kfp-tekton)
- `elyra[kfp-examples]` - install the Elyra core features and [Kubeflow Pipelines custom component examples](https://github.com/elyra-ai/examples/tree/main/component-catalog-connectors/kfp-example-components-connector)


@@ -92,7 +91,7 @@ conda install -c conda-forge "elyra[all]"
```

**NOTE:**
The Elyra packaging process was changed in version 4.0. The [Apache Airflow pipelines](https://github.com/apache/airflow) or [Kubeflow Pipelines on Tekton](https://github.com/kubeflow/kfp-tekton) dependencies are no longer installed by default. To install this dependency, you must specify `elyra[all]`, `elyra[kfp]` or `elyra[kfp-tekton]`.
The Elyra packaging process was changed in version 4.0. The [Apache Airflow pipelines](https://github.com/apache/airflow) and [Kubeflow Pipelines](https://github.com/kubeflow/pipelines) dependencies are no longer installed by default. To install these dependencies, you must specify `elyra[all]` or `elyra[kfp]`.

You can also install the Pipeline editor, Code Snippet, Code Viewer, or Script editor extensions individually:

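For reviewers, a minimal sketch of what the trimmed extras list means in practice (the pip invocation is illustrative, not taken verbatim from the docs):

```bash
# KFP support (Argo engine) now comes from the plain kfp extra
pip install --upgrade "elyra[kfp]"

# The all extra still bundles every optional dependency, minus kfp-tekton
pip install --upgrade "elyra[all]"

# Note: pip only warns about unknown extras, so "elyra[kfp-tekton]" would
# quietly install the core package without any Tekton support.
```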
4 changes: 2 additions & 2 deletions docs/source/user_guide/pipelines.md
@@ -36,7 +36,7 @@ Please review the [_Best practices for file-based pipeline nodes_ topic in the _

Elyra pipelines support three runtime platforms:
- Local/JupyterLab
- [Kubeflow Pipelines](https://www.kubeflow.org/docs/components/pipelines/) (with Argo or [Tekton](https://github.com/kubeflow/kfp-tekton/) workflow engines)
- [Kubeflow Pipelines](https://www.kubeflow.org/docs/components/pipelines/) (with Argo workflow engine)
- [Apache Airflow](https://airflow.apache.org/)

#### Generic pipelines
@@ -373,7 +373,7 @@ Use the [`elyra-pipeline`](command-line-interface.html#working-with-pipelines) `

```bash
$ elyra-pipeline submit elyra-pipelines/a-kubeflow.pipeline \
--runtime-config kfp-shared-tekton
--runtime-config kfp
```

For Kubeflow Pipelines the `--monitor` option is supported. If specified, the pipeline execution status is monitored for up to `--monitor-timeout` minutes (default: 60) and the `elyra-pipeline submit` command terminates as follows:
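A hedged illustration of the `--monitor` option described in the paragraph above, reusing the example pipeline and the updated `kfp` runtime configuration (the 30-minute timeout is arbitrary):

```bash
elyra-pipeline submit elyra-pipelines/a-kubeflow.pipeline \
    --runtime-config kfp \
    --monitor \
    --monitor-timeout 30
```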
5 changes: 1 addition & 4 deletions docs/source/user_guide/runtime-conf.md
@@ -259,14 +259,11 @@ A password or token is required for most authentication types. Refer to the [Kub
Example: `mypassword`

##### Kubeflow Pipelines engine (engine)
The engine being used by Kubeflow Pipelines to run pipelines: `Argo` or `Tekton`. If you have access to the Kubernetes cluster where Kubeflow Pipelines is deployed, run these commands in a terminal window to determine the engine type.
The engine being used by Kubeflow Pipelines to run pipelines: `Argo`. If you have access to the Kubernetes cluster where Kubeflow Pipelines is deployed, run this command in a terminal window to confirm the engine type.

```
# If this command completes successfully, the engine type is Argo.
kubectl describe configmap -n kubeflow workflow-controller-configmap

# If this command completes successfully, the engine type is Tekton.
kubectl describe configmap -n kubeflow kfp-tekton-config
```

The default is `Argo`.
3 changes: 1 addition & 2 deletions elyra/cli/pipeline_app.py
@@ -428,8 +428,7 @@ def _monitor_kfp_submission(runtime_config: dict, runtime_config_name: str, run_
raise click.ClickException(f"Kubeflow authentication failed: {ae}")

try:
# Create a Kubeflow Pipelines client. There is no need to use a Tekton client,
# because the monitoring API is agnostic.
# Create a Kubeflow Pipelines client.
client = ArgoClient(
host=runtime_config.metadata["api_endpoint"].rstrip("/"),
cookies=auth_info.get("cookies", None),
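For context, a rough sketch of the monitoring flow around this hunk, assuming `ArgoClient` is the standard `kfp.Client` and that the run id and endpoint come from the runtime configuration (all names below are illustrative, not the PR's actual code):

```python
from kfp import Client as ArgoClient  # assumption: ArgoClient aliases kfp.Client


def monitor_run(api_endpoint: str, run_id: str, timeout_minutes: int = 60) -> str:
    """Poll a Kubeflow Pipelines run until it reaches a terminal state."""
    client = ArgoClient(host=api_endpoint.rstrip("/"))
    # Blocks until the run finishes; raises TimeoutError once the
    # timeout (in seconds) is exceeded.
    run = client.wait_for_run_completion(run_id, timeout=timeout_minutes * 60)
    return run.state  # e.g. "SUCCEEDED" or "FAILED" in the v2 REST API
```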
2 changes: 1 addition & 1 deletion elyra/metadata/schemas/kfp.json
@@ -77,7 +77,7 @@
"title": "Kubeflow Pipelines engine",
"description": "The Kubeflow Pipelines engine in use",
"type": "string",
"enum": ["Argo", "Tekton"],
"enum": ["Argo"],
"default": "Argo",
"uihints": {
"category": "Kubeflow Pipelines"
17 changes: 0 additions & 17 deletions elyra/metadata/schemasproviders.py
@@ -23,12 +23,6 @@
import entrypoints
from traitlets import log # noqa H306

try:
from kfp_tekton import TektonClient
except ImportError:
# We may not have kfp-tekton available and that's okay, for example when only using airflow!
TektonClient = None

from elyra.metadata.schema import SchemasProvider
from elyra.metadata.schemaspaces import CodeSnippets
from elyra.metadata.schemaspaces import ComponentCatalogs
@@ -98,17 +92,6 @@ def get_schemas(self) -> List[Dict]:
)

if kfp_schema_present: # Update the kfp engine enum to reflect current packages...
# If TektonClient package is missing, navigate to the engine property
# and remove 'tekton' entry if present and return updated result.
if not TektonClient:
# locate the schema and update the enum
for schema in runtime_schemas:
if schema["name"] == "kfp":
engine_enum: list = schema["properties"]["metadata"]["properties"]["engine"]["enum"]
if "Tekton" in engine_enum:
engine_enum.remove("Tekton")
schema["properties"]["metadata"]["properties"]["engine"]["enum"] = engine_enum

# For KFP schemas replace placeholders:
# - properties.metadata.properties.auth_type.enum ({AUTH_PROVIDER_PLACEHOLDERS})
# - properties.metadata.properties.auth_type.default ({DEFAULT_AUTH_PROVIDER_PLACEHOLDER})
30 changes: 15 additions & 15 deletions elyra/pipeline/kfp/PipelineConf.py
@@ -1,3 +1,18 @@
#
# Copyright 2018-2025 Elyra Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from typing import Union

from kubernetes.client.models import V1PodDNSConfig
@@ -20,7 +35,6 @@ def __init__(self):

def set_image_pull_secrets(self, image_pull_secrets):
"""Configures the pipeline level imagepullsecret.

Args:
image_pull_secrets: a list of Kubernetes V1LocalObjectReference For
detailed description, check Kubernetes V1LocalObjectReference definition
@@ -31,7 +45,6 @@ def set_image_pull_secrets(self, image_pull_secrets):

def set_timeout(self, seconds: int):
"""Configures the pipeline level timeout.

Args:
seconds: number of seconds for timeout
"""
@@ -41,7 +54,6 @@ def set_timeout(self, seconds: int):
def set_parallelism(self, max_num_pods: int):
"""Configures the max number of total parallel pods that can execute at
the same time in a workflow.

Args:
max_num_pods: max number of total parallel pods.
"""
@@ -53,7 +65,6 @@ def set_parallelism(self, max_num_pods: int):

def set_ttl_seconds_after_finished(self, seconds: int):
"""Configures the ttl after the pipeline has finished.

Args:
seconds: number of seconds for the workflow to be garbage collected after
it is finished.
@@ -64,7 +75,6 @@ def set_ttl_seconds_after_finished(self, seconds: int):
def set_pod_disruption_budget(self, min_available: Union[int, str]):
"""PodDisruptionBudget holds the number of concurrent disruptions that
you allow for pipeline Pods.

Args:
min_available (Union[int, str]): An eviction is allowed if at least
"minAvailable" pods selected by "selector" will still be available after
@@ -77,12 +87,9 @@ def set_pod_disruption_budget(self, min_available: Union[int, str]):

def set_default_pod_node_selector(self, label_name: str, value: str):
"""Add a constraint for nodeSelector for a pipeline.

Each constraint is a key-value pair label.

For the container to be eligible to run on a node, the node must have each
of the constraints appeared as labels.

Args:
label_name: The name of the constraint label.
value: The value of the constraint label.
@@ -92,7 +99,6 @@ def set_default_pod_node_selector(self, label_name: str, value: str):

def set_image_pull_policy(self, policy: str):
"""Configures the default image pull policy.

Args:
policy: the pull policy, has to be one of: Always, Never, IfNotPresent.
For more info:
@@ -104,23 +110,19 @@ def set_image_pull_policy(self, policy: str):
def add_op_transformer(self, transformer):
"""Configures the op_transformers which will be applied to all ops in
the pipeline. The ops can be ResourceOp, VolumeOp, or ContainerOp.

Args:
transformer: A function that takes a kfp Op as input and returns a kfp Op
"""
self.op_transformers.append(transformer)

def set_dns_config(self, dns_config: V1PodDNSConfig):
"""Set the dnsConfig to be given to each pod.

Args:
dns_config: Kubernetes V1PodDNSConfig For detailed description, check
Kubernetes V1PodDNSConfig definition
https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1PodDNSConfig.md

Example:
::

import kfp
from kubernetes.client.models import V1PodDNSConfig, V1PodDNSConfigOption
pipeline_conf = kfp.dsl.PipelineConf()
@@ -139,10 +141,8 @@ def data_passing_method(self):
def data_passing_method(self, value):
"""Sets the object representing the method used for intermediate data
passing.

Example:
::

from kfp.dsl import PipelineConf, data_passing_methods
from kubernetes.client.models import V1Volume, V1PersistentVolumeClaimVolumeSource
pipeline_conf = PipelineConf()
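Because the KFP v2 SDK dropped `kfp.dsl.PipelineConf`, this module carries Elyra's own copy. A minimal usage sketch, assuming the class is importable from the module path added by this PR and using only methods visible in this diff:

```python
from kubernetes.client.models import V1LocalObjectReference

# Assumption: the class lives at the module path introduced by this PR.
from elyra.pipeline.kfp.PipelineConf import PipelineConf

conf = PipelineConf()
conf.set_image_pull_secrets([V1LocalObjectReference(name="my-registry-secret")])
conf.set_image_pull_policy("IfNotPresent")
conf.set_timeout(3600)                     # give up after one hour
conf.set_parallelism(4)                    # at most four pods in flight
conf.set_ttl_seconds_after_finished(600)   # garbage-collect runs after 10 minutes
conf.set_default_pod_node_selector("kubernetes.io/arch", "amd64")
```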
6 changes: 3 additions & 3 deletions elyra/pipeline/kfp/kfp_authentication.py
@@ -27,9 +27,9 @@
from typing import Tuple
from urllib.parse import urlsplit

from kfp.auth import KF_PIPELINES_SA_TOKEN_ENV
from kfp.auth import KF_PIPELINES_SA_TOKEN_PATH
from kfp.auth import ServiceAccountTokenVolumeCredentials
from kfp.client import KF_PIPELINES_SA_TOKEN_ENV
from kfp.client import KF_PIPELINES_SA_TOKEN_PATH
from kfp.client import ServiceAccountTokenVolumeCredentials
import requests


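The change here is purely the import location (the v2 SDK removed `kfp.auth`). A sketch of how these credentials are typically wired into a client, assuming the import path used by this PR resolves in the installed `kfp` version; the endpoint is illustrative:

```python
from kfp import Client
from kfp.client import ServiceAccountTokenVolumeCredentials  # path used by this PR

# In-cluster auth: read the projected service-account token mounted into the
# pod (path=None falls back to the SDK's default token location).
credentials = ServiceAccountTokenVolumeCredentials(path=None)

client = Client(
    host="http://ml-pipeline-ui.kubeflow.svc.cluster.local",  # illustrative endpoint
    credentials=credentials,
)
print(client.list_experiments(page_size=5))
```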