### Job
When a user wants to archive, retrieve or publish a dataset, SciCat must interact with the facility's storage and archiving system. This is done by creating a `Job` to track the interaction. The exact actions taken for each Job are often customized for each facility. A common action is to forward the new Job entry to a message broker, e.g. RabbitMQ, from where it can be picked up by any program willing to react to the Job. For example, at PSI the RabbitMQ queue is emptied by a Node-RED process, which reads the Job information from the queue. Alternatively, other messaging solutions such as Apache Kafka can be used. In this way the site-specific logic to handle Jobs is kept outside the core of the SciCat system, giving a greater degree of flexibility. The external systems should ideally respond with updates to the Job model when the job is finished. As the job progresses, they can make PUT/PATCH API calls back that update the `Job Status` and, when necessary, also the individual status of the involved datasets by updating the DatasetLifecycle information.
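As a hedged illustration of the callback half of this flow, the sketch below builds (but does not send) such a PATCH request. The endpoint path, the `jobStatusMessage` field name, and the token handling are assumptions; consult your SciCat version's Job schema and API documentation for the exact names.

```python
import json
import urllib.request


def build_job_update(base_url, job_id, token, status_message):
    """Build (but do not send) a PATCH request that reports job progress
    back to SciCat.

    The endpoint path and the "jobStatusMessage" field name are
    illustrative assumptions, not the definitive SciCat schema.
    """
    body = json.dumps({"jobStatusMessage": status_message}).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/api/v3/Jobs/{job_id}",
        data=body,
        method="PATCH",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
    )
```

Sending the request with any HTTP client would then update the Job; the same pattern applies to updating the DatasetLifecycle information of the involved datasets.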
The data model and its API endpoints are described [here](https://scicatproject.github.io/api/#operation/Job.create).
| Create Job Auth Value | Endpoint Authorization | Endpoint Authorization Explanation | Create Authorization | Create Authorization Explanation |
|---|---|---|---|---|
|_#all_|_#all_| any user can access this endpoint, both anonymous and authenticated |_#all_| Any user can create this instance of the job |
|_#datasetPublic_|_#all_| any user can access this endpoint, both anonymous and authenticated |_#datasetPublic_| the job instance will be created only if all the datasets listed are __public__|
|_#authenticated_|_#user_| any valid user can access the endpoint, independently of their groups |_#user_| any valid user can create this instance of the job |
|_#datasetAccess_|_#user_| any valid user can access this endpoint, independently of their groups |_#datasetAccess_| the job instance will be created only if the user has access to all the datasets listed |
|_#datasetOwner_|_#user_| any valid user can access this endpoint, independently of their groups |_#datasetOwner_| the job instance will be created only if the user is part of all the datasets' owner groups |
|__*@GROUP*__|__*GROUP*__| only users that belong to the specified group can access the endpoint |__*GROUP*__| the job instance will be created only if the user belongs to the group specified |
|__*USER*__|__*USER*__| only the specified user can access the endpoint |__*USER*__| the job instance can be created only by the user indicated |
__IMPORTANT__: use option _#all_ carefully, as it allows anybody to create a new job. It is mostly used for debugging and testing.
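A hypothetical job configuration fragment combining the values above might look as follows. The field names and structure are illustrative only; the exact schema depends on your SciCat version and deployment.

```json
{
  "jobs": [
    {
      "jobType": "archive",
      "create": { "auth": "#datasetOwner" },
      "update": { "auth": "#jobOwnerGroup" }
    }
  ]
}
```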
#### Job Update Authorization Table
The _JobUpdateConfiguration_ authorization permissions are configured directly in the __*update*__ section of the job configuration.
Any positive match will result in the user acquiring _JobUpdate_ endpoint authorization, which applies to the jobs endpoint `PATCH:Jobs/id`.
| Update Job Auth Value | Endpoint Authorization | Endpoint Authorization Explanation | Update Authorization | Update Authorization Explanation |
|---|---|---|---|---|
|_#all_|_#all_| any user can access this endpoint, both anonymous and authenticated |_#all_| Any user can update this job instance |
|_#jobOwnerUser_|_#user_| any authenticated user can access the endpoint |_#jobOwnerUser_| only the user that is listed in field _ownerUser_ can perform the update |
|_#jobOwnerGroup_|_#user_| any authenticated user can access the endpoint |_#jobOwnerGroup_| any user that belongs to the group listed in field _ownerGroup_ can perform the update |
|__*@GROUP*__|__*GROUP*__| only users that belong to the specified group can access the endpoint |__*GROUP*__| the job can be updated only by users who belong to the group specified |
|__*USER*__|__*USER*__| only the specified user can access the endpoint |__*USER*__| the job can be updated only by the user indicated |
__IMPORTANT__: use option _#all_ carefully, as it allows anybody to update the job. It is mostly used for debugging and testing.
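The update rules above can be sketched as a small predicate. This is a simplified sketch only: the auth values come from the table, but real SciCat also enforces the endpoint authorization and admin-group overrides, which are omitted here.

```python
def may_update_job(auth, username, user_groups,
                   owner_user=None, owner_group=None):
    """Evaluate one update-authorization value from the table above.

    Simplified sketch: endpoint authorization and admin-group
    overrides are intentionally left out.
    """
    if auth == "#all":               # anyone, including anonymous users
        return True
    if auth == "#jobOwnerUser":      # only the job's ownerUser
        return username == owner_user
    if auth == "#jobOwnerGroup":     # any member of the job's ownerGroup
        return owner_group is not None and owner_group in user_groups
    if auth.startswith("@"):         # @GROUP: members of the named group
        return auth[1:] in user_groups
    return auth == username          # a literal USER value
```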
#### Job Authorization priority
The endpoint authorization is the most permissive authorization across all the jobs defined.
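This "most permissive wins" rule can be sketched as follows. The permissiveness ordering used here, _#all_ over _#user_ over _@GROUP_ over a literal _USER_, is an assumption drawn from the authorization tables above.

```python
def most_permissive(endpoint_auths):
    """Return the most permissive endpoint authorization among all
    the job definitions; the ordering is an assumption drawn from
    the authorization tables above."""
    def rank(auth):
        if auth == "#all":           # anonymous and authenticated users
            return 0
        if auth == "#user":          # any authenticated user
            return 1
        if auth.startswith("@"):     # members of a specific group
            return 2
        return 3                     # a single named user
    return min(endpoint_auths, key=rank)
```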
The priority between job create and update authorization is as follows: