net: rps: fix cpu unplug
softnet_data.input_pkt_queue is protected by a spinlock that we must
hold when transferring packets from the victim queue to an active one.
This is because other cpus could still be trying to enqueue packets
into the victim queue.
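
For illustration only, a minimal sketch of the locking difference this
refers to (the drain helper below is hypothetical; skb_dequeue() and
__skb_dequeue() are the real helpers, used in the diff further down):

	/* Hypothetical helper sketching why the locked dequeue is needed. */
	static void drain_offline_backlog(struct softnet_data *oldsd)
	{
		struct sk_buff *skb;

		/* skb_dequeue() takes input_pkt_queue.lock internally
		 * (spin_lock_irqsave), so it is safe while other cpus may
		 * still be enqueueing packets to the victim queue.
		 * __skb_dequeue() assumes the caller already holds the lock.
		 */
		while ((skb = skb_dequeue(&oldsd->input_pkt_queue))) {
			netif_rx(skb);
			input_queue_head_incr(oldsd);
		}
	}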

A second problem is that when we transfer the NAPI poll_list from the
victim to the current cpu, we absolutely need to special-case the percpu
backlog, because we do not want to add complex locking to protect
process_queue: only the owner cpu is allowed to manipulate it, unless
the cpu is offline.
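
As background (a sketch, not part of this patch): each cpu's
softnet_data embeds a "backlog" napi_struct whose poll callback is
process_backlog(), set up at boot roughly as below, which is why
comparing napi->poll against process_backlog in the loop further down
identifies the percpu backlog:

	/* Rough sketch of the per-cpu backlog setup done in net_dev_init(). */
	struct softnet_data *sd = &per_cpu(softnet_data, cpu);

	skb_queue_head_init(&sd->input_pkt_queue);
	skb_queue_head_init(&sd->process_queue);
	INIT_LIST_HEAD(&sd->poll_list);
	sd->backlog.poll = process_backlog;	/* only ever run by the owning cpu */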

Based on an initial patch from Prasad Sodagudi &
Subash Abhinov Kasiviswanathan.

This version is better because we do not slow down packet processing,
only make migration safer.

Change-Id: I6468cc74779b126ce0565f8bd6cc39514b14eb38
Reported-by: Prasad Sodagudi <[email protected]>
Reported-by: Subash Abhinov Kasiviswanathan <[email protected]>
Signed-off-by: Eric Dumazet <[email protected]>
Cc: Tom Herbert <[email protected]>
Git-commit: ac64da0b83d82abe62f78b3d0e21cca31aea24fa
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
Signed-off-by: Trilok Soni <[email protected]>
Signed-off-by: franciscofranco <[email protected]>
Eric Dumazet authored and franciscofranco committed Oct 6, 2016
1 parent 10e7f4d commit b0b6c31
Showing 1 changed file with 15 additions and 5 deletions.
diff --git a/net/core/dev.c b/net/core/dev.c
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6009,10 +6009,20 @@ static int dev_cpu_callback(struct notifier_block *nfb,
 		oldsd->output_queue = NULL;
 		oldsd->output_queue_tailp = &oldsd->output_queue;
 	}
-	/* Append NAPI poll list from offline CPU. */
-	if (!list_empty(&oldsd->poll_list)) {
-		list_splice_init(&oldsd->poll_list, &sd->poll_list);
-		raise_softirq_irqoff(NET_RX_SOFTIRQ);
+	/* Append NAPI poll list from offline CPU, with one exception :
+	 * process_backlog() must be called by cpu owning percpu backlog.
+	 * We properly handle process_queue & input_pkt_queue later.
+	 */
+	while (!list_empty(&oldsd->poll_list)) {
+		struct napi_struct *napi = list_first_entry(&oldsd->poll_list,
+							    struct napi_struct,
+							    poll_list);
+
+		list_del_init(&napi->poll_list);
+		if (napi->poll == process_backlog)
+			napi->state = 0;
+		else
+			____napi_schedule(sd, napi);
 	}
 
 	raise_softirq_irqoff(NET_TX_SOFTIRQ);
@@ -6023,7 +6033,7 @@ static int dev_cpu_callback(struct notifier_block *nfb,
 		netif_rx(skb);
 		input_queue_head_incr(oldsd);
 	}
-	while ((skb = __skb_dequeue(&oldsd->input_pkt_queue))) {
+	while ((skb = skb_dequeue(&oldsd->input_pkt_queue))) {
 		netif_rx(skb);
 		input_queue_head_incr(oldsd);
 	}
