net: add skb_data_unref() helper

Similar to skb_unref(), add skb_data_unref() to save an expensive
atomic operation (and cache line dirtying) when the last reference
on shinfo->dataref is released.

I saw this opportunity on hosts with RAW sockets accidentally
bound to UDP protocol, forcing an skb_clone() on all received packets.

These RAW sockets had their receive queue full, so all cloned
packets were immediately dropped.

When UDP recvmsg() later consumes the original skb, skb_release_data()
hits atomic_sub_return() quite badly, because skb->cloned
has been set permanently.

Note that this patch helps TCP TX performance, because
the TCP stack also uses (fast) clones.

This means that at least one of the two packets (the main skb or
its clone) will no longer have to perform this atomic operation
in skb_release_data().

Signed-off-by: Eric Dumazet <edumazet@google.com>
Link: https://lore.kernel.org/r/20240307123446.2302230-1-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -1237,6 +1237,24 @@ static inline bool skb_unref(struct sk_buff *skb)
 	return true;
 }
 
+static inline bool skb_data_unref(const struct sk_buff *skb,
+				  struct skb_shared_info *shinfo)
+{
+	int bias;
+
+	if (!skb->cloned)
+		return true;
+
+	bias = skb->nohdr ? (1 << SKB_DATAREF_SHIFT) + 1 : 1;
+
+	if (atomic_read(&shinfo->dataref) == bias)
+		smp_rmb();
+	else if (atomic_sub_return(bias, &shinfo->dataref))
+		return false;
+
+	return true;
+}
+
 void __fix_address
 kfree_skb_reason(struct sk_buff *skb, enum skb_drop_reason reason);

--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -1115,9 +1115,7 @@ static void skb_release_data(struct sk_buff *skb, enum skb_drop_reason reason,
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	int i;
 
-	if (skb->cloned &&
-	    atomic_sub_return(skb->nohdr ? (1 << SKB_DATAREF_SHIFT) + 1 : 1,
-			      &shinfo->dataref))
+	if (!skb_data_unref(skb, shinfo))
 		goto exit;
 
 	if (skb_zcopy(skb)) {