
Draining contexts by criteria #253

Open
vkatsuba opened this issue Nov 9, 2020 · 14 comments
Labels
discussion ⏰ Discussion, feature 🚀 Feature Request

Comments

@vkatsuba
Contributor

vkatsuba commented Nov 9, 2020

Feature Request

  • Create an API to delete contexts by criteria (a request sketch follows the parameter table below):
    • Delete N contexts by criteria - /api/v1/contexts/count?apn=xyw&imsi=some_data
    • Delete all contexts by criteria - /api/v1/contexts?apn=xyw&imsi=some_data
  • For backward compatibility, we should keep the old behavior if no filter criteria are included in the URI.
Name         Type  Description
apn          ...   ...
imsi         ...   ...
mccmnc       ...   ...
gtp_version  ...   ...
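
For illustration only, a call to the proposed delete-all-by-criteria endpoint could look like the sketch below, using Erlang's built-in httpc client. The endpoint is only proposed here, and host and port are made-up placeholders:

```erlang
%% Sketch only: exercises the *proposed* /api/v1/contexts endpoint,
%% which does not exist yet. Host and port are placeholders.
ok = application:ensure_started(inets),
{ok, {{_Version, Status, _Reason}, _Headers, Body}} =
    httpc:request(delete,
                  {"http://localhost:8080/api/v1/contexts?apn=xyw&imsi=some_data", []},
                  [], []),
io:format("HTTP ~p: ~s~n", [Status, Body]).
```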
@fholzhauser
Contributor

I think we should include filters beyond the context record. A very useful one would be the UPF where the context is active (e.g. when upgrading a UPF).

@RoadRunnr
Member

Most of the context information is totally unsuitable for a draining selection. Only the version, the APN and maybe the MCC/MNC of the IMSI would make sense.

There are also things missing that are not part of the context, e.g. UPF instance, IP pool, SGW and/or PGW, things from the ULI (RAI, TAI, ...).

Collecting this as an idea might make sense. An implementation has IMHO to be delayed until the stateless work has made some progress. The move to an external storage for the session state will have a major impact on how sessions can be selected.

@mgumz
Contributor

mgumz commented Nov 10, 2020

apn, imsi (range), gtp-version, mccmnc ✓

Use case: to phase out one of n attached UPFs, one must be able to pick the sessions associated with it. Yes, that information is not directly in the context data structure, but it is available in the pfcp-ctx associated with the session.

@vkatsuba please remove the attributes that are not that relevant right now from the table.

@RoadRunnr why not have a first implementation which traverses the sessions, finds a match in the context data for the attributes and kills it? It does not have to be the optimal algorithm in the first place … and yes, if the session state is stored externally there won't be the need to traverse all session contexts. But I see this as an optimisation step.
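
A minimal sketch of such a first, deliberately non-optimal pass, assuming hypothetical helpers all_contexts/0 and terminate_context/1 and contexts represented as plain maps (ergw's real internals differ):

```erlang
-module(drain_sketch).
-export([drain/1]).

%% Naive first pass: walk every session, terminate the ones that match.
%% all_contexts/0 and terminate_context/1 are hypothetical placeholders,
%% not actual ergw API functions; contexts are plain maps here for
%% illustration (e.g. #{apn => <<"internet">>, gtp_version => v2}).
drain(Criteria) ->
    Matches = [Ctx || Ctx <- all_contexts(), matches(Ctx, Criteria)],
    lists:foreach(fun terminate_context/1, Matches),
    {ok, length(Matches)}.

%% A context matches when every requested field carries the requested value.
matches(Ctx, Criteria) ->
    maps:fold(fun(Key, Value, Acc) ->
                      Acc andalso maps:get(Key, Ctx, undefined) =:= Value
              end,
              true, Criteria).

all_contexts() -> [].             %% placeholder: fetch from the registry
terminate_context(_Ctx) -> ok.    %% placeholder: trigger session deletion
```

A call like drain_sketch:drain(#{apn => <<"internet">>, gtp_version => v2}) would then terminate every session whose context carries exactly those values.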

@vkatsuba
Contributor Author

vkatsuba commented Nov 10, 2020

@mgumz, @RoadRunnr the table was updated.

@RoadRunnr
Member

> @RoadRunnr why not have a first implementation which traverses the sessions, finds a match in the context data for the attributes and kills it? It does not have to be the optimal algorithm in the first place … and yes, if the session state is stored externally there won't be the need to traverse all session contexts. But I see this as an optimisation step.

Because the data will be in the UDSF, the only way to traverse that data will be a full load of all of it into the process. I'm certain that will work well with > 100k sessions.

@vkatsuba
Contributor Author

> Because the data will be in the UDSF, the only way to traverse that data will be a full load of all of it into the process. I'm certain that will work well with > 100k sessions.

Not sure, but it looks like we do the same when using /api/v1/contexts/count. Maybe we can build part of the logic on criteria from the context record? E.g. apn and imsi would be required fields, and, if for some reason that does not give us enough data to filter on, we could apply a second filter using the context pids that are provided in the context record.
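
If I understand the idea correctly, a rough sketch of that two-stage filtering could look like this; record_fields/1 and pid_info/1 are hypothetical accessors, not existing ergw functions, and contexts are again plain maps for illustration:

```erlang
-module(two_stage_filter_sketch).
-export([filter/3]).

%% Stage 1 narrows by required context-record fields (e.g. apn, imsi);
%% stage 2 refines the survivors using per-pid information.
%% record_fields/1 and pid_info/1 are hypothetical accessors.
filter(Contexts, RequiredCriteria, PidCriteria) ->
    Stage1 = [C || C <- Contexts, matches(record_fields(C), RequiredCriteria)],
    [C || C <- Stage1, matches(pid_info(C), PidCriteria)].

matches(Fields, Criteria) ->
    maps:fold(fun(K, V, Acc) ->
                      Acc andalso maps:get(K, Fields, undefined) =:= V
              end,
              true, Criteria).

record_fields(Ctx) -> maps:get(record, Ctx, #{}).   %% placeholder
pid_info(Ctx) -> maps:get(pid_info, Ctx, #{}).      %% placeholder
```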

@RoadRunnr
Member

> > Because the data will be in the UDSF, the only way to traverse that data will be a full load of all of it into the process. I'm certain that will work well with > 100k sessions.
>
> Not sure, but it looks like we do the same when using /api/v1/contexts/count. Maybe we can build part of the logic on criteria from the context record? E.g. apn and imsi would be required fields, and, if for some reason that does not give us enough data to filter on, we could apply a second filter using the context pids that are provided in the context record.

With the move to UDSF storage, that function has to be rewritten. And I'm not sure that we can retain that functionality at all.

@vkatsuba
Contributor Author

@RoadRunnr, @mgumz, @fholzhauser how should we proceed with this ticket, and what should the main plan be?

@fholzhauser
Contributor

Indeed, doing this atomically is really difficult (local or UDSF). It is also not practical, as draining should normally be controlled/slow(ish) to avoid excessive signalling load. To iterate through the contexts "slowly", we'd need to stop the node from accepting new requests with the same draining conditions before we start the iteration (e.g. mark a UPF or PGW to be skipped in node selection if that's a condition). In that case I think the mechanism could also be implemented with UDSF. And I believe we might actually need this with UDSF too, as long as the connectivity (SGW/PGW/UPF/AAA) is bound to particular ERGW nodes.
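
A rough sketch of such a controlled, slow(ish) drain, assuming the node has already been excluded from node selection for the matching criteria; terminate_context/1 is again a hypothetical placeholder:

```erlang
-module(paced_drain_sketch).
-export([drain/2]).

%% Controlled drain: terminate matching sessions one at a time with a
%% delay in between, keeping the signalling load bounded. This assumes
%% the node has already been excluded from node selection for the same
%% criteria, so no new matching sessions appear while draining.
%% terminate_context/1 is a hypothetical placeholder.
drain([], _DelayMs) ->
    ok;
drain([Ctx | Rest], DelayMs) ->
    terminate_context(Ctx),
    timer:sleep(DelayMs),
    drain(Rest, DelayMs).

terminate_context(_Ctx) -> ok.   %% placeholder: send the session delete
```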

@vkatsuba
Contributor Author

From the current discussion it looks like, to implement this task, we first of all need an implementation of the UDSF storage, and can then pick up draining contexts by criteria.

@fholzhauser
Contributor

Actually, in my opinion the implementation could be started already, as outlined above, and adapted to the UDSF solution later.

@RoadRunnr
Member

> Actually, in my opinion the implementation could be started already, as outlined above, and adapted to the UDSF solution later.

I don't want this API stuff to get in the way of the stateless changes. So while you could start with it, it cannot be merged until the Nudsf and cluster solution is in place.
It is highly likely that adapting the API code to those changes will be as much work as implementing it in the first place.

@fholzhauser
Contributor

That is a valid argument indeed.

@vkatsuba
Contributor Author

A ticket for the UDSF storage was created: #260. It looks like we need to come back to this discussion after the UDSF storage is implemented.
