docs: Proposal for a TargetNodeDirectory in Tractus-X. #1556

# Proposal for a TargetNodeDirectory in Tractus-X

## Decision

A new service that will contain the Connector URLs of each partner a member wants offers from, acting as the **TargetNodeDirectory for the partner's FederatedCatalog**.


## Rationale

While considering further work on the Federated Catalog, this decision determines how the TargetNodeDirectory is provided.
From the [documentation](https://github.com/eclipse-edc/FederatedCatalog/blob/e733355c6991ff633ee009bd5f35ced61e941da6/docs/developer/architecture/federated-catalog.architecture.md):
> The Federated Catalog requires a list of Target Catalog Nodes, so it knows which endpoints to crawl. This list is provided by the TargetNodeDirectory. During the preparation of a crawl run, the ExecutionManager queries that directory and obtains the list of TCNs.
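
For orientation, the contract the crawler works against can be sketched roughly as below. This is only a sketch assuming the upstream SPI shape; the exact package, class, and method names may differ between FederatedCatalog versions.

```java
import java.util.List;

// Sketch of the directory the ExecutionManager queries before a crawl run.
// Names and signatures are assumptions based on the linked documentation.
public interface TargetNodeDirectory {

    // Returns all Target Catalog Nodes that should be crawled.
    List<TargetNode> getAll();

    // A Target Catalog Node: a partner Connector endpoint to crawl.
    record TargetNode(String name, String id, String targetUrl, List<String> supportedProtocols) {
    }
}
```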

To address this, the goal is to create an independent service that exposes an API for retrieving and storing the Connector URLs a partner chooses. This new service would be called TargetNodeDirectoryService, and each member would be able to host it themselves.

Users will input, through the API of the TargetNodeDirectoryService, the URLs of the connectors whose catalogs they want.
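
As an illustration of what that API could look like, the following Jakarta REST resource sketches a bulk-save endpoint and a read endpoint. The class name, path, and the in-memory store are hypothetical choices for this sketch, not part of the decision:

```java
import jakarta.ws.rs.Consumes;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.POST;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical resource of the TargetNodeDirectoryService; names and paths are illustrative.
@Path("/target-nodes")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public class TargetNodeDirectoryResource {

    // Simplest possible in-memory store; a database-backed store could be used instead.
    private final Set<String> connectorUrls = ConcurrentHashMap.newKeySet();

    // Saves a list of Connector URLs in bulk.
    @POST
    public void addAll(List<String> urls) {
        connectorUrls.addAll(urls);
    }

    // Returns all stored Connector URLs, e.g. for the Federated Catalog to crawl.
    @GET
    public List<String> getAll() {
        return List.copyOf(connectorUrls);
    }
}
```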

This solution allows each member to choose precisely the Target Catalog Nodes that interest them, resulting in fewer network calls and lower latency. Additionally, each member controls how to host and manage this new service. Service changes do not affect other parties (unless the API contract changes), and the service can be scaled independently.

Another solution was also considered:

- A file in an S3 bucket (or a different cloud provider's equivalent)
  - This solution was discarded because a single file shared by all members, instead of each partner holding only the data it needs, does not fulfil the requirement, and it would tie the setup to a proprietary cloud provider tool, making it harder to sustain in the long run.

## Approach

The user obtains the Connector URLs (through the Discovery Service, for example) and stores them in the new service through its API. The API allows saving a list of Connector URLs in bulk, and the service is responsible for persisting them (in memory or in a database). These can later be retrieved and crawled by the Federated Catalog.
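
To tie the two together, the Federated Catalog's TargetNodeDirectory could be implemented to resolve the node list from the new service at crawl time. The sketch below assumes the hypothetical endpoint above returns a plain JSON array of Connector URLs and reuses the interface shape sketched earlier; it is not an agreed implementation:

```java
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;

// Hypothetical TargetNodeDirectory that queries the TargetNodeDirectoryService on each crawl run.
public class RemoteTargetNodeDirectory implements TargetNodeDirectory {

    private final HttpClient httpClient = HttpClient.newHttpClient();
    private final ObjectMapper objectMapper = new ObjectMapper();
    private final String serviceUrl; // e.g. "http://target-node-directory:8080/target-nodes" (illustrative)

    public RemoteTargetNodeDirectory(String serviceUrl) {
        this.serviceUrl = serviceUrl;
    }

    @Override
    public List<TargetNode> getAll() {
        try {
            var request = HttpRequest.newBuilder(URI.create(serviceUrl)).GET().build();
            var response = httpClient.send(request, HttpResponse.BodyHandlers.ofString());
            // Assumes the service returns a JSON array of Connector URLs.
            List<String> urls = objectMapper.readValue(response.body(), new TypeReference<>() {});
            return urls.stream()
                    // Protocol identifier is an assumption; use whatever the crawler expects.
                    .map(url -> new TargetNode(url, url, url, List.of("dataspace-protocol-http")))
                    .toList();
        } catch (Exception e) {
            throw new RuntimeException("Could not fetch target nodes from " + serviceUrl, e);
        }
    }
}
```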
> **Contributor:** is this operation meant to be done manually?

> **Contributor (Author):** Yes. With the goal of removing the Discovery Service dependency from this TargetNodeDirectoryService, the responsibility of obtaining and saving the Connector URLs lies with the user, as suggested here.

This solution improves on the default approach of keeping the data in a static file, since a dynamic approach avoids downtime when a change is required.

Finally, considering service deployment, a new chart can be created just for this new service (similar to the existing ones), with its usage decided solely by the member. Likewise, a Dockerfile should exist to ease this approach (giving the user the option of running it in a container or as a simple `jar`).

Limitations of this solution are that each partner must have the Connector URLs beforehand and must deal with the overhead of running a new service, especially one with a persistence store.