// SPDX-License-Identifier: GPL-2.0-only
/*
 * Copyright 2002-2005, Instant802 Networks, Inc.
 * Copyright 2005-2006, Devicescape Software, Inc.
 * Copyright 2006-2007 Jiri Benc <jbenc@suse.cz>
 * Copyright 2007-2010 Johannes Berg <johannes@sipsolutions.net>
 * Copyright 2013-2014 Intel Mobile Communications GmbH
 * Copyright(c) 2015 - 2017 Intel Deutschland GmbH
 * Copyright (C) 2018-2023 Intel Corporation
 */

#include <linux/jiffies.h>
#include <linux/slab.h>
#include <linux/kernel.h>
#include <linux/skbuff.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/rcupdate.h>
#include <linux/export.h>
#include <linux/kcov.h>
#include <linux/bitops.h>
#include <net/mac80211.h>
#include <net/ieee80211_radiotap.h>
#include <asm/unaligned.h>

#include "ieee80211_i.h"
#include "driver-ops.h"
#include "led.h"
#include "mesh.h"
#include "wep.h"
#include "wpa.h"
#include "tkip.h"
#include "wme.h"
#include "rate.h"

/*
 * monitor mode reception
 *
 * This function cleans up the SKB, i.e. it removes all the stuff
 * only useful for monitoring.
 */
static struct sk_buff *ieee80211_clean_skb(struct sk_buff *skb,
					   unsigned int present_fcs_len,
					   unsigned int rtap_space)
{
	struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);
	struct ieee80211_hdr *hdr;
	unsigned int hdrlen;
	__le16 fc;

	if (present_fcs_len)
		__pskb_trim(skb, skb->len - present_fcs_len);
	pskb_pull(skb, rtap_space);

	/* After pulling radiotap header, clear all flags that indicate
	 * info in skb->data.
	 */
	status->flag &= ~(RX_FLAG_RADIOTAP_TLV_AT_END |
			  RX_FLAG_RADIOTAP_LSIG |
			  RX_FLAG_RADIOTAP_HE_MU |
			  RX_FLAG_RADIOTAP_HE);

	hdr = (void *)skb->data;
	fc = hdr->frame_control;

	/*
	 * Remove the HT-Control field (if present) on management
	 * frames after we've sent the frame to monitoring. We
	 * (currently) don't need it, and don't properly parse
	 * frames with it present, due to the assumption of a
	 * fixed management header length.
	 */
	if (likely(!ieee80211_is_mgmt(fc) || !ieee80211_has_order(fc)))
		return skb;

	hdrlen = ieee80211_hdrlen(fc);
	hdr->frame_control &= ~cpu_to_le16(IEEE80211_FCTL_ORDER);

	if (!pskb_may_pull(skb, hdrlen)) {
		dev_kfree_skb(skb);
		return NULL;
	}

	memmove(skb->data + IEEE80211_HT_CTL_LEN, skb->data,
		hdrlen - IEEE80211_HT_CTL_LEN);
	pskb_pull(skb, IEEE80211_HT_CTL_LEN);

	return skb;
}

static inline bool should_drop_frame(struct sk_buff *skb, int present_fcs_len,
				     unsigned int rtap_space)
{
	struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);
	struct ieee80211_hdr *hdr;

	hdr = (void *)(skb->data + rtap_space);

	if (status->flag & (RX_FLAG_FAILED_FCS_CRC |
			    RX_FLAG_FAILED_PLCP_CRC |
			    RX_FLAG_ONLY_MONITOR |
			    RX_FLAG_NO_PSDU))
		return true;

	if (unlikely(skb->len < 16 + present_fcs_len + rtap_space))
		return true;

	if (ieee80211_is_ctl(hdr->frame_control) &&
	    !ieee80211_is_pspoll(hdr->frame_control) &&
	    !ieee80211_is_back_req(hdr->frame_control))
		return true;

	return false;
}

static int
ieee80211_rx_radiotap_hdrlen(struct ieee80211_local *local,
			     struct ieee80211_rx_status *status,
			     struct sk_buff *skb)
{
	int len;

	/* always present fields */
	len = sizeof(struct ieee80211_radiotap_header) + 8;

	/* allocate extra bitmaps */
	if (status->chains)
		len += 4 * hweight8(status->chains);

	if (ieee80211_have_rx_timestamp(status)) {
		len = ALIGN(len, 8);
		len += 8;
	}
	if (ieee80211_hw_check(&local->hw, SIGNAL_DBM))
		len += 1;

	/* antenna field, if we don't have per-chain info */
	if (!status->chains)
		len += 1;

	/* padding for RX_FLAGS if necessary */
	len = ALIGN(len, 2);

	if (status->encoding == RX_ENC_HT) /* HT info */
		len += 3;

	if (status->flag & RX_FLAG_AMPDU_DETAILS) {
		len = ALIGN(len, 4);
		len += 8;
	}

	if (status->encoding == RX_ENC_VHT) {
		len = ALIGN(len, 2);
		len += 12;
	}

	if (local->hw.radiotap_timestamp.units_pos >= 0) {
		len = ALIGN(len, 8);
		len += 12;
	}

	if (status->encoding == RX_ENC_HE &&
	    status->flag & RX_FLAG_RADIOTAP_HE) {
		len = ALIGN(len, 2);
		len += 12;
		BUILD_BUG_ON(sizeof(struct ieee80211_radiotap_he) != 12);
	}

	if (status->encoding == RX_ENC_HE &&
	    status->flag & RX_FLAG_RADIOTAP_HE_MU) {
		len = ALIGN(len, 2);
		len += 12;
		BUILD_BUG_ON(sizeof(struct ieee80211_radiotap_he_mu) != 12);
	}

	if (status->flag & RX_FLAG_NO_PSDU)
		len += 1;

	if (status->flag & RX_FLAG_RADIOTAP_LSIG) {
		len = ALIGN(len, 2);
		len += 4;
		BUILD_BUG_ON(sizeof(struct ieee80211_radiotap_lsig) != 4);
	}

	if (status->chains) {
		/* antenna and antenna signal fields */
		len += 2 * hweight8(status->chains);
	}

	if (status->flag & RX_FLAG_RADIOTAP_TLV_AT_END) {
		int tlv_offset = 0;

		/*
		 * The position to look at depends on the existence (or non-
		 * existence) of other elements, so take that into account...
		 */
		if (status->flag & RX_FLAG_RADIOTAP_HE)
			tlv_offset +=
				sizeof(struct ieee80211_radiotap_he);
		if (status->flag & RX_FLAG_RADIOTAP_HE_MU)
			tlv_offset +=
				sizeof(struct ieee80211_radiotap_he_mu);
		if (status->flag & RX_FLAG_RADIOTAP_LSIG)
			tlv_offset +=
				sizeof(struct ieee80211_radiotap_lsig);

		/* ensure 4 byte alignment for TLV */
		len = ALIGN(len, 4);

		/* TLVs until the mac header */
		len += skb_mac_header(skb) - &skb->data[tlv_offset];
	}

	return len;
}

static void __ieee80211_queue_skb_to_iface(struct ieee80211_sub_if_data *sdata,
					   int link_id,
					   struct sta_info *sta,
					   struct sk_buff *skb)
{
	struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);

	if (link_id >= 0) {
		status->link_valid = 1;
		status->link_id = link_id;
	} else {
		status->link_valid = 0;
	}

	skb_queue_tail(&sdata->skb_queue, skb);
	wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work);
	if (sta)
		sta->deflink.rx_stats.packets++;
}

static void ieee80211_queue_skb_to_iface(struct ieee80211_sub_if_data *sdata,
					 int link_id,
					 struct sta_info *sta,
					 struct sk_buff *skb)
{
	skb->protocol = 0;
	__ieee80211_queue_skb_to_iface(sdata, link_id, sta, skb);
}

static void ieee80211_handle_mu_mimo_mon(struct ieee80211_sub_if_data *sdata,
					 struct sk_buff *skb,
					 int rtap_space)
{
	struct {
		struct ieee80211_hdr_3addr hdr;
		u8 category;
		u8 action_code;
	} __packed __aligned(2) action;

	if (!sdata)
		return;

	BUILD_BUG_ON(sizeof(action) != IEEE80211_MIN_ACTION_SIZE + 1);

	if (skb->len < rtap_space + sizeof(action) +
		       VHT_MUMIMO_GROUPS_DATA_LEN)
		return;

	if (!is_valid_ether_addr(sdata->u.mntr.mu_follow_addr))
		return;

	skb_copy_bits(skb, rtap_space, &action, sizeof(action));

	if (!ieee80211_is_action(action.hdr.frame_control))
		return;

	if (action.category != WLAN_CATEGORY_VHT)
		return;

	if (action.action_code != WLAN_VHT_ACTION_GROUPID_MGMT)
		return;

	if (!ether_addr_equal(action.hdr.addr1, sdata->u.mntr.mu_follow_addr))
		return;

	skb = skb_copy(skb, GFP_ATOMIC);
	if (!skb)
		return;

	ieee80211_queue_skb_to_iface(sdata, -1, NULL, skb);
}

/*
 * ieee80211_add_rx_radiotap_header - add radiotap header
 *
 * add a radiotap header containing all the fields which the hardware provided.
 */
static void
ieee80211_add_rx_radiotap_header(struct ieee80211_local *local,
				 struct sk_buff *skb,
				 struct ieee80211_rate *rate,
				 int rtap_len, bool has_fcs)
{
	struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);
	struct ieee80211_radiotap_header *rthdr;
	unsigned char *pos;
	__le32 *it_present;
	u32 it_present_val;
	u16 rx_flags = 0;
	u16 channel_flags = 0;
	u32 tlvs_len = 0;
	int mpdulen, chain;
	unsigned long chains = status->chains;
	struct ieee80211_radiotap_he he = {};
	struct ieee80211_radiotap_he_mu he_mu = {};
	struct ieee80211_radiotap_lsig lsig = {};

	if (status->flag & RX_FLAG_RADIOTAP_HE) {
		he = *(struct ieee80211_radiotap_he *)skb->data;
		skb_pull(skb, sizeof(he));
		WARN_ON_ONCE(status->encoding != RX_ENC_HE);
	}

	if (status->flag & RX_FLAG_RADIOTAP_HE_MU) {
		he_mu = *(struct ieee80211_radiotap_he_mu *)skb->data;
		skb_pull(skb, sizeof(he_mu));
	}

	if (status->flag & RX_FLAG_RADIOTAP_LSIG) {
		lsig = *(struct ieee80211_radiotap_lsig *)skb->data;
		skb_pull(skb, sizeof(lsig));
	}

	if (status->flag & RX_FLAG_RADIOTAP_TLV_AT_END) {
		/* data is pointer at tlv all other info was pulled off */
		tlvs_len = skb_mac_header(skb) - skb->data;
	}

	mpdulen = skb->len;
	if (!(has_fcs && ieee80211_hw_check(&local->hw, RX_INCLUDES_FCS)))
		mpdulen += FCS_LEN;

	rthdr = skb_push(skb, rtap_len - tlvs_len);
	memset(rthdr, 0, rtap_len - tlvs_len);
	it_present = &rthdr->it_present;

	/* radiotap header, set always present flags */
	rthdr->it_len = cpu_to_le16(rtap_len);
	it_present_val = BIT(IEEE80211_RADIOTAP_FLAGS) |
			 BIT(IEEE80211_RADIOTAP_CHANNEL) |
			 BIT(IEEE80211_RADIOTAP_RX_FLAGS);

	if (!status->chains)
		it_present_val |= BIT(IEEE80211_RADIOTAP_ANTENNA);

	for_each_set_bit(chain, &chains, IEEE80211_MAX_CHAINS) {
		it_present_val |=
			BIT(IEEE80211_RADIOTAP_EXT) |
			BIT(IEEE80211_RADIOTAP_RADIOTAP_NAMESPACE);
		put_unaligned_le32(it_present_val, it_present);
		it_present++;
		it_present_val = BIT(IEEE80211_RADIOTAP_ANTENNA) |
				 BIT(IEEE80211_RADIOTAP_DBM_ANTSIGNAL);
	}

	if (status->flag & RX_FLAG_RADIOTAP_TLV_AT_END)
		it_present_val |= BIT(IEEE80211_RADIOTAP_TLV);

	put_unaligned_le32(it_present_val, it_present);

	/* This references through an offset into it_optional[] rather
	 * than via it_present otherwise later uses of pos will cause
	 * the compiler to think we have walked past the end of the
	 * struct member.
	 */
	pos = (void *)&rthdr->it_optional[it_present + 1 - rthdr->it_optional];

	/* the order of the following fields is important */

	/* IEEE80211_RADIOTAP_TSFT */
	if (ieee80211_have_rx_timestamp(status)) {
		/* padding */
		while ((pos - (u8 *)rthdr) & 7)
			*pos++ = 0;
		put_unaligned_le64(
			ieee80211_calculate_rx_timestamp(local, status,
							 mpdulen, 0),
			pos);
		rthdr->it_present |= cpu_to_le32(BIT(IEEE80211_RADIOTAP_TSFT));
		pos += 8;
	}

	/* IEEE80211_RADIOTAP_FLAGS */
	if (has_fcs && ieee80211_hw_check(&local->hw, RX_INCLUDES_FCS))
		*pos |= IEEE80211_RADIOTAP_F_FCS;
	if (status->flag & (RX_FLAG_FAILED_FCS_CRC | RX_FLAG_FAILED_PLCP_CRC))
		*pos |= IEEE80211_RADIOTAP_F_BADFCS;
	if (status->enc_flags & RX_ENC_FLAG_SHORTPRE)
		*pos |= IEEE80211_RADIOTAP_F_SHORTPRE;
	pos++;

	/* IEEE80211_RADIOTAP_RATE */
	if (!rate || status->encoding != RX_ENC_LEGACY) {
		/*
		 * Without rate information don't add it. If we have,
		 * MCS information is a separate field in radiotap,
		 * added below. The byte here is needed as padding
		 * for the channel though, so initialise it to 0.
		 */
		*pos = 0;
	} else {
		int shift = 0;
		rthdr->it_present |= cpu_to_le32(BIT(IEEE80211_RADIOTAP_RATE));
		if (status->bw == RATE_INFO_BW_10)
			shift = 1;
		else if (status->bw == RATE_INFO_BW_5)
			shift = 2;
		*pos = DIV_ROUND_UP(rate->bitrate, 5 * (1 << shift));
	}
	pos++;

	/* IEEE80211_RADIOTAP_CHANNEL */
	/* TODO: frequency offset in KHz */
	put_unaligned_le16(status->freq, pos);
	pos += 2;
	if (status->bw == RATE_INFO_BW_10)
		channel_flags |= IEEE80211_CHAN_HALF;
	else if (status->bw == RATE_INFO_BW_5)
		channel_flags |= IEEE80211_CHAN_QUARTER;

	if (status->band == NL80211_BAND_5GHZ ||
	    status->band == NL80211_BAND_6GHZ)
		channel_flags |= IEEE80211_CHAN_OFDM | IEEE80211_CHAN_5GHZ;
	else if (status->encoding != RX_ENC_LEGACY)
		channel_flags |= IEEE80211_CHAN_DYN | IEEE80211_CHAN_2GHZ;
	else if (rate && rate->flags & IEEE80211_RATE_ERP_G)
		channel_flags |= IEEE80211_CHAN_OFDM | IEEE80211_CHAN_2GHZ;
	else if (rate)
		channel_flags |= IEEE80211_CHAN_CCK | IEEE80211_CHAN_2GHZ;
	else
		channel_flags |= IEEE80211_CHAN_2GHZ;
	put_unaligned_le16(channel_flags, pos);
	pos += 2;

	/* IEEE80211_RADIOTAP_DBM_ANTSIGNAL */
	if (ieee80211_hw_check(&local->hw, SIGNAL_DBM) &&
	    !(status->flag & RX_FLAG_NO_SIGNAL_VAL)) {
		*pos = status->signal;
		rthdr->it_present |=
			cpu_to_le32(BIT(IEEE80211_RADIOTAP_DBM_ANTSIGNAL));
		pos++;
	}

	/* IEEE80211_RADIOTAP_LOCK_QUALITY is missing */

	if (!status->chains) {
		/* IEEE80211_RADIOTAP_ANTENNA */
		*pos = status->antenna;
		pos++;
	}

	/* IEEE80211_RADIOTAP_DB_ANTNOISE is not used */

	/* IEEE80211_RADIOTAP_RX_FLAGS */
	/* ensure 2 byte alignment for the 2 byte field as required */
	if ((pos - (u8 *)rthdr) & 1)
		*pos++ = 0;
	if (status->flag & RX_FLAG_FAILED_PLCP_CRC)
		rx_flags |= IEEE80211_RADIOTAP_F_RX_BADPLCP;
	put_unaligned_le16(rx_flags, pos);
	pos += 2;

	if (status->encoding == RX_ENC_HT) {
		unsigned int stbc;

		rthdr->it_present |= cpu_to_le32(BIT(IEEE80211_RADIOTAP_MCS));
		*pos = local->hw.radiotap_mcs_details;
		if (status->enc_flags & RX_ENC_FLAG_HT_GF)
			*pos |= IEEE80211_RADIOTAP_MCS_HAVE_FMT;
		if (status->enc_flags & RX_ENC_FLAG_LDPC)
			*pos |= IEEE80211_RADIOTAP_MCS_HAVE_FEC;
		pos++;
		*pos = 0;
		if (status->enc_flags & RX_ENC_FLAG_SHORT_GI)
			*pos |= IEEE80211_RADIOTAP_MCS_SGI;
		if (status->bw == RATE_INFO_BW_40)
			*pos |= IEEE80211_RADIOTAP_MCS_BW_40;
		if (status->enc_flags & RX_ENC_FLAG_HT_GF)
			*pos |= IEEE80211_RADIOTAP_MCS_FMT_GF;
		if (status->enc_flags & RX_ENC_FLAG_LDPC)
			*pos |= IEEE80211_RADIOTAP_MCS_FEC_LDPC;
		stbc = (status->enc_flags & RX_ENC_FLAG_STBC_MASK) >> RX_ENC_FLAG_STBC_SHIFT;
		*pos |= stbc << IEEE80211_RADIOTAP_MCS_STBC_SHIFT;
		pos++;
		*pos++ = status->rate_idx;
	}

	if (status->flag & RX_FLAG_AMPDU_DETAILS) {
		u16 flags = 0;

		/* ensure 4 byte alignment */
		while ((pos - (u8 *)rthdr) & 3)
			pos++;
		rthdr->it_present |=
			cpu_to_le32(BIT(IEEE80211_RADIOTAP_AMPDU_STATUS));
		put_unaligned_le32(status->ampdu_reference, pos);
		pos += 4;
		if (status->flag & RX_FLAG_AMPDU_LAST_KNOWN)
			flags |= IEEE80211_RADIOTAP_AMPDU_LAST_KNOWN;
		if (status->flag & RX_FLAG_AMPDU_IS_LAST)
			flags |= IEEE80211_RADIOTAP_AMPDU_IS_LAST;
		if (status->flag & RX_FLAG_AMPDU_DELIM_CRC_ERROR)
			flags |= IEEE80211_RADIOTAP_AMPDU_DELIM_CRC_ERR;
		if (status->flag & RX_FLAG_AMPDU_DELIM_CRC_KNOWN)
			flags |= IEEE80211_RADIOTAP_AMPDU_DELIM_CRC_KNOWN;
		if (status->flag & RX_FLAG_AMPDU_EOF_BIT_KNOWN)
			flags |= IEEE80211_RADIOTAP_AMPDU_EOF_KNOWN;
		if (status->flag & RX_FLAG_AMPDU_EOF_BIT)
			flags |= IEEE80211_RADIOTAP_AMPDU_EOF;
		put_unaligned_le16(flags, pos);
		pos += 2;
		if (status->flag & RX_FLAG_AMPDU_DELIM_CRC_KNOWN)
			*pos++ = status->ampdu_delimiter_crc;
		else
			*pos++ = 0;
		*pos++ = 0;
	}

	if (status->encoding == RX_ENC_VHT) {
		u16 known = local->hw.radiotap_vht_details;

		rthdr->it_present |= cpu_to_le32(BIT(IEEE80211_RADIOTAP_VHT));
		put_unaligned_le16(known, pos);
		pos += 2;
		/* flags */
		if (status->enc_flags & RX_ENC_FLAG_SHORT_GI)
			*pos |= IEEE80211_RADIOTAP_VHT_FLAG_SGI;
		/* in VHT, STBC is binary */
		if (status->enc_flags & RX_ENC_FLAG_STBC_MASK)
			*pos |= IEEE80211_RADIOTAP_VHT_FLAG_STBC;
		if (status->enc_flags & RX_ENC_FLAG_BF)
			*pos |= IEEE80211_RADIOTAP_VHT_FLAG_BEAMFORMED;
		pos++;
		/* bandwidth */
		switch (status->bw) {
		case RATE_INFO_BW_80:
			*pos++ = 4;
			break;
		case RATE_INFO_BW_160:
			*pos++ = 11;
			break;
		case RATE_INFO_BW_40:
			*pos++ = 1;
			break;
		default:
			*pos++ = 0;
		}
		/* MCS/NSS */
		*pos = (status->rate_idx << 4) | status->nss;
		pos += 4;
		/* coding field */
		if (status->enc_flags & RX_ENC_FLAG_LDPC)
			*pos |= IEEE80211_RADIOTAP_CODING_LDPC_USER0;
		pos++;
		/* group ID */
		pos++;
		/* partial_aid */
		pos += 2;
	}

	if (local->hw.radiotap_timestamp.units_pos >= 0) {
		u16 accuracy = 0;
		u8 flags;
		u64 ts;

		rthdr->it_present |=
			cpu_to_le32(BIT(IEEE80211_RADIOTAP_TIMESTAMP));

		/* ensure 8 byte alignment */
		while ((pos - (u8 *)rthdr) & 7)
			pos++;

		if (status->flag & RX_FLAG_MACTIME_IS_RTAP_TS64) {
			flags = IEEE80211_RADIOTAP_TIMESTAMP_FLAG_64BIT;
			ts = status->mactime;
		} else {
			flags = IEEE80211_RADIOTAP_TIMESTAMP_FLAG_32BIT;
			ts = status->device_timestamp;
		}

		put_unaligned_le64(ts, pos);
		pos += sizeof(u64);

		if (local->hw.radiotap_timestamp.accuracy >= 0) {
			accuracy = local->hw.radiotap_timestamp.accuracy;
			flags |= IEEE80211_RADIOTAP_TIMESTAMP_FLAG_ACCURACY;
		}
		put_unaligned_le16(accuracy, pos);
		pos += sizeof(u16);

		*pos++ = local->hw.radiotap_timestamp.units_pos;
		*pos++ = flags;
	}

	if (status->encoding == RX_ENC_HE &&
	    status->flag & RX_FLAG_RADIOTAP_HE) {
#define HE_PREP(f, val)	le16_encode_bits(val, IEEE80211_RADIOTAP_HE_##f)

		if (status->enc_flags & RX_ENC_FLAG_STBC_MASK) {
			he.data6 |= HE_PREP(DATA6_NSTS,
					    FIELD_GET(RX_ENC_FLAG_STBC_MASK,
						      status->enc_flags));
			he.data3 |= HE_PREP(DATA3_STBC, 1);
		} else {
			he.data6 |= HE_PREP(DATA6_NSTS, status->nss);
		}

#define CHECK_GI(s) \
	BUILD_BUG_ON(IEEE80211_RADIOTAP_HE_DATA5_GI_##s != \
		     (int)NL80211_RATE_INFO_HE_GI_##s)

		CHECK_GI(0_8);
		CHECK_GI(1_6);
		CHECK_GI(3_2);

		he.data3 |= HE_PREP(DATA3_DATA_MCS, status->rate_idx);
		he.data3 |= HE_PREP(DATA3_DATA_DCM, status->he_dcm);
		he.data3 |= HE_PREP(DATA3_CODING,
				    !!(status->enc_flags & RX_ENC_FLAG_LDPC));

		he.data5 |= HE_PREP(DATA5_GI, status->he_gi);

		switch (status->bw) {
		case RATE_INFO_BW_20:
			he.data5 |= HE_PREP(DATA5_DATA_BW_RU_ALLOC,
					    IEEE80211_RADIOTAP_HE_DATA5_DATA_BW_RU_ALLOC_20MHZ);
			break;
		case RATE_INFO_BW_40:
			he.data5 |= HE_PREP(DATA5_DATA_BW_RU_ALLOC,
					    IEEE80211_RADIOTAP_HE_DATA5_DATA_BW_RU_ALLOC_40MHZ);
			break;
		case RATE_INFO_BW_80:
			he.data5 |= HE_PREP(DATA5_DATA_BW_RU_ALLOC,
					    IEEE80211_RADIOTAP_HE_DATA5_DATA_BW_RU_ALLOC_80MHZ);
			break;
		case RATE_INFO_BW_160:
			he.data5 |= HE_PREP(DATA5_DATA_BW_RU_ALLOC,
					    IEEE80211_RADIOTAP_HE_DATA5_DATA_BW_RU_ALLOC_160MHZ);
			break;
		case RATE_INFO_BW_HE_RU:
#define CHECK_RU_ALLOC(s) \
	BUILD_BUG_ON(IEEE80211_RADIOTAP_HE_DATA5_DATA_BW_RU_ALLOC_##s##T != \
		     NL80211_RATE_INFO_HE_RU_ALLOC_##s + 4)

			CHECK_RU_ALLOC(26);
			CHECK_RU_ALLOC(52);
			CHECK_RU_ALLOC(106);
			CHECK_RU_ALLOC(242);
			CHECK_RU_ALLOC(484);
			CHECK_RU_ALLOC(996);
			CHECK_RU_ALLOC(2x996);

			he.data5 |= HE_PREP(DATA5_DATA_BW_RU_ALLOC,
					    status->he_ru + 4);
			break;
		default:
			WARN_ONCE(1, "Invalid SU BW %d\n", status->bw);
		}

		/* ensure 2 byte alignment */
		while ((pos - (u8 *)rthdr) & 1)
			pos++;
		rthdr->it_present |= cpu_to_le32(BIT(IEEE80211_RADIOTAP_HE));
		memcpy(pos, &he, sizeof(he));
		pos += sizeof(he);
	}

	if (status->encoding == RX_ENC_HE &&
	    status->flag & RX_FLAG_RADIOTAP_HE_MU) {
		/* ensure 2 byte alignment */
		while ((pos - (u8 *)rthdr) & 1)
			pos++;
		rthdr->it_present |= cpu_to_le32(BIT(IEEE80211_RADIOTAP_HE_MU));
		memcpy(pos, &he_mu, sizeof(he_mu));
		pos += sizeof(he_mu);
	}

	if (status->flag & RX_FLAG_NO_PSDU) {
		rthdr->it_present |=
			cpu_to_le32(BIT(IEEE80211_RADIOTAP_ZERO_LEN_PSDU));
		*pos++ = status->zero_length_psdu_type;
	}

	if (status->flag & RX_FLAG_RADIOTAP_LSIG) {
		/* ensure 2 byte alignment */
		while ((pos - (u8 *)rthdr) & 1)
			pos++;
		rthdr->it_present |= cpu_to_le32(BIT(IEEE80211_RADIOTAP_LSIG));
		memcpy(pos, &lsig, sizeof(lsig));
		pos += sizeof(lsig);
	}

	for_each_set_bit(chain, &chains, IEEE80211_MAX_CHAINS) {
		*pos++ = status->chain_signal[chain];
		*pos++ = chain;
	}
}

static struct sk_buff *
ieee80211_make_monitor_skb(struct ieee80211_local *local,
			   struct sk_buff **origskb,
			   struct ieee80211_rate *rate,
			   int rtap_space, bool use_origskb)
{
	struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(*origskb);
	int rt_hdrlen, needed_headroom;
	struct sk_buff *skb;

	/* room for the radiotap header based on driver features */
	rt_hdrlen = ieee80211_rx_radiotap_hdrlen(local, status, *origskb);
	needed_headroom = rt_hdrlen - rtap_space;

	if (use_origskb) {
		/* only need to expand headroom if necessary */
		skb = *origskb;
		*origskb = NULL;

		/*
		 * This shouldn't trigger often because most devices have an
		 * RX header they pull before we get here, and that should
		 * be big enough for our radiotap information. We should
		 * probably export the length to drivers so that we can have
		 * them allocate enough headroom to start with.
		 */
		if (skb_headroom(skb) < needed_headroom &&
		    pskb_expand_head(skb, needed_headroom, 0, GFP_ATOMIC)) {
			dev_kfree_skb(skb);
			return NULL;
		}
	} else {
		/*
		 * Need to make a copy and possibly remove radiotap header
		 * and FCS from the original.
		 */
		skb = skb_copy_expand(*origskb, needed_headroom + NET_SKB_PAD,
				      0, GFP_ATOMIC);

		if (!skb)
			return NULL;
	}

	/* prepend radiotap information */
	ieee80211_add_rx_radiotap_header(local, skb, rate, rt_hdrlen, true);

	skb_reset_mac_header(skb);
	skb->ip_summed = CHECKSUM_UNNECESSARY;
	skb->pkt_type = PACKET_OTHERHOST;
	skb->protocol = htons(ETH_P_802_2);

	return skb;
}

/*
 * This function copies a received frame to all monitor interfaces and
 * returns a cleaned-up SKB that no longer includes the FCS nor the
 * radiotap header the driver might have added.
 */
static struct sk_buff *
ieee80211_rx_monitor(struct ieee80211_local *local, struct sk_buff *origskb,
		     struct ieee80211_rate *rate)
{
	struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(origskb);
	struct ieee80211_sub_if_data *sdata;
	struct sk_buff *monskb = NULL;
	int present_fcs_len = 0;
	unsigned int rtap_space = 0;
	struct ieee80211_sub_if_data *monitor_sdata =
		rcu_dereference(local->monitor_sdata);
	bool only_monitor = false;
	unsigned int min_head_len;

	if (WARN_ON_ONCE(status->flag & RX_FLAG_RADIOTAP_TLV_AT_END &&
			 !skb_mac_header_was_set(origskb))) {
		/* with this skb no way to know where frame payload starts */
		dev_kfree_skb(origskb);
		return NULL;
	}

	if (status->flag & RX_FLAG_RADIOTAP_HE)
		rtap_space += sizeof(struct ieee80211_radiotap_he);

	if (status->flag & RX_FLAG_RADIOTAP_HE_MU)
		rtap_space += sizeof(struct ieee80211_radiotap_he_mu);

	if (status->flag & RX_FLAG_RADIOTAP_LSIG)
		rtap_space += sizeof(struct ieee80211_radiotap_lsig);

	if (status->flag & RX_FLAG_RADIOTAP_TLV_AT_END)
		rtap_space += skb_mac_header(origskb) - &origskb->data[rtap_space];

	min_head_len = rtap_space;

	/*
	 * First, we may need to make a copy of the skb because
	 * (1) we need to modify it for radiotap (if not present), and
	 * (2) the other RX handlers will modify the skb we got.
	 *
	 * We don't need to, of course, if we aren't going to return
	 * the SKB because it has a bad FCS/PLCP checksum.
	 */

	if (!(status->flag & RX_FLAG_NO_PSDU)) {
		if (ieee80211_hw_check(&local->hw, RX_INCLUDES_FCS)) {
			if (unlikely(origskb->len <= FCS_LEN + rtap_space)) {
				/* driver bug */
				WARN_ON(1);
				dev_kfree_skb(origskb);
				return NULL;
			}
			present_fcs_len = FCS_LEN;
		}

		/* also consider the hdr->frame_control */
		min_head_len += 2;
	}

	/* ensure that the expected data elements are in skb head */
	if (!pskb_may_pull(origskb, min_head_len)) {
		dev_kfree_skb(origskb);
		return NULL;
	}

	only_monitor = should_drop_frame(origskb, present_fcs_len, rtap_space);

	if (!local->monitors || (status->flag & RX_FLAG_SKIP_MONITOR)) {
		if (only_monitor) {
			dev_kfree_skb(origskb);
			return NULL;
		}

		return ieee80211_clean_skb(origskb, present_fcs_len,
					   rtap_space);
	}

	ieee80211_handle_mu_mimo_mon(monitor_sdata, origskb, rtap_space);

	list_for_each_entry_rcu(sdata, &local->mon_list, u.mntr.list) {
		bool last_monitor = list_is_last(&sdata->u.mntr.list,
						 &local->mon_list);

		if (!monskb)
			monskb = ieee80211_make_monitor_skb(local, &origskb,
							    rate, rtap_space,
							    only_monitor &&
							    last_monitor);

		if (monskb) {
			struct sk_buff *skb;

			if (last_monitor) {
				skb = monskb;
				monskb = NULL;
			} else {
				skb = skb_clone(monskb, GFP_ATOMIC);
			}

			if (skb) {
				skb->dev = sdata->dev;
				dev_sw_netstats_rx_add(skb->dev, skb->len);
				netif_receive_skb(skb);
			}
		}

		if (last_monitor)
			break;
	}

	/* this happens if last_monitor was erroneously false */
	dev_kfree_skb(monskb);

	/* ditto */
	if (!origskb)
		return NULL;

	return ieee80211_clean_skb(origskb, present_fcs_len, rtap_space);
}

static void ieee80211_parse_qos(struct ieee80211_rx_data *rx)
{
	struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)rx->skb->data;
	struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(rx->skb);
	int tid, seqno_idx, security_idx;

	/* does the frame have a qos control field? */
	if (ieee80211_is_data_qos(hdr->frame_control)) {
		u8 *qc = ieee80211_get_qos_ctl(hdr);
		/* frame has qos control */
		tid = *qc & IEEE80211_QOS_CTL_TID_MASK;
		if (*qc & IEEE80211_QOS_CTL_A_MSDU_PRESENT)
			status->rx_flags |= IEEE80211_RX_AMSDU;

		seqno_idx = tid;
		security_idx = tid;
	} else {
		/*
		 * IEEE 802.11-2007, 7.1.3.4.1 ("Sequence Number field"):
		 *
		 * Sequence numbers for management frames, QoS data
		 * frames with a broadcast/multicast address in the
		 * Address 1 field, and all non-QoS data frames sent
		 * by QoS STAs are assigned using an additional single
		 * modulo-4096 counter, [...]
		 *
		 * We also use that counter for non-QoS STAs.
		 */
		seqno_idx = IEEE80211_NUM_TIDS;
		security_idx = 0;
		if (ieee80211_is_mgmt(hdr->frame_control))
			security_idx = IEEE80211_NUM_TIDS;
		tid = 0;
	}

	rx->seqno_idx = seqno_idx;
	rx->security_idx = security_idx;
	/* Set skb->priority to 1d tag if highest order bit of TID is not set.
	 * For now, set skb->priority to 0 for other cases. */
	rx->skb->priority = (tid > 7) ? 0 : tid;
}

/**
 * DOC: Packet alignment
 *
 * Drivers always need to pass packets that are aligned to two-byte boundaries
 * to the stack.
 *
 * Additionally, they should, if possible, align the payload data in a way that
 * guarantees that the contained IP header is aligned to a four-byte
 * boundary. In the case of regular frames, this simply means aligning the
 * payload to a four-byte boundary (because either the IP header is directly
 * contained, or IV/RFC1042 headers that have a length divisible by four are
 * in front of it). If the payload data is not properly aligned and the
 * architecture doesn't support efficient unaligned operations, mac80211
 * will align the data.
 *
 * With A-MSDU frames, however, the payload data address must yield two modulo
 * four because there are 14-byte 802.3 headers within the A-MSDU frames that
 * push the IP header further back to a multiple of four again. Thankfully, the
 * specs were sane enough this time around to require padding each A-MSDU
 * subframe to a length that is a multiple of four.
 *
 * Padding like Atheros hardware adds which is between the 802.11 header and
 * the payload is not supported; the driver is required to move the 802.11
 * header to be directly in front of the payload in that case.
 */
static void ieee80211_verify_alignment(struct ieee80211_rx_data *rx)
{
#ifdef CONFIG_MAC80211_VERBOSE_DEBUG
	WARN_ON_ONCE((unsigned long)rx->skb->data & 1);
#endif
}
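
/*
 * Illustrative note, not part of the original file: a driver that copies a
 * received frame into a freshly allocated skb commonly uses skb_reserve()
 * to place the 802.11 header at an offset that satisfies the alignment
 * rules documented above, for example skb_reserve(skb, 2) when a zero
 * offset would leave the header on an odd address. The exact offset is
 * driver-specific; mac80211 itself only checks the mandatory two-byte
 * alignment in ieee80211_verify_alignment() above.
 */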

/* rx handlers */

static int ieee80211_is_unicast_robust_mgmt_frame(struct sk_buff *skb)
{
	struct ieee80211_hdr *hdr = (struct ieee80211_hdr *) skb->data;

	if (is_multicast_ether_addr(hdr->addr1))
		return 0;

	return ieee80211_is_robust_mgmt_frame(skb);
}

static int ieee80211_is_multicast_robust_mgmt_frame(struct sk_buff *skb)
{
	struct ieee80211_hdr *hdr = (struct ieee80211_hdr *) skb->data;

	if (!is_multicast_ether_addr(hdr->addr1))
		return 0;

	return ieee80211_is_robust_mgmt_frame(skb);
}

/* Get the BIP key index from MMIE; return -1 if this is not a BIP frame */
static int ieee80211_get_mmie_keyidx(struct sk_buff *skb)
{
	struct ieee80211_mgmt *hdr = (struct ieee80211_mgmt *) skb->data;
	struct ieee80211_mmie *mmie;
	struct ieee80211_mmie_16 *mmie16;

	if (skb->len < 24 + sizeof(*mmie) || !is_multicast_ether_addr(hdr->da))
		return -1;

	if (!ieee80211_is_robust_mgmt_frame(skb) &&
	    !ieee80211_is_beacon(hdr->frame_control))
		return -1; /* not a robust management frame */

	mmie = (struct ieee80211_mmie *)
		(skb->data + skb->len - sizeof(*mmie));
	if (mmie->element_id == WLAN_EID_MMIE &&
	    mmie->length == sizeof(*mmie) - 2)
		return le16_to_cpu(mmie->key_id);

	mmie16 = (struct ieee80211_mmie_16 *)
		(skb->data + skb->len - sizeof(*mmie16));
	if (skb->len >= 24 + sizeof(*mmie16) &&
	    mmie16->element_id == WLAN_EID_MMIE &&
	    mmie16->length == sizeof(*mmie16) - 2)
		return le16_to_cpu(mmie16->key_id);

	return -1;
}

static int ieee80211_get_keyid(struct sk_buff *skb)
{
	struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
	__le16 fc = hdr->frame_control;
	int hdrlen = ieee80211_hdrlen(fc);
	u8 keyid;

	/* WEP, TKIP, CCMP and GCMP */
	if (unlikely(skb->len < hdrlen + IEEE80211_WEP_IV_LEN))
		return -EINVAL;

	skb_copy_bits(skb, hdrlen + 3, &keyid, 1);

	keyid >>= 6;

	return keyid;
}
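
/*
 * Note added for clarity, not part of the original file: for WEP, TKIP,
 * CCMP and GCMP the key identifier is carried in the two most significant
 * bits of the fourth IV/PN byte following the 802.11 header, which is why
 * ieee80211_get_keyid() above reads the byte at hdrlen + 3 and shifts it
 * right by six.
 */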
|
|
|
|
|
2012-10-25 22:09:11 +00:00
|
|
|
static ieee80211_rx_result ieee80211_rx_mesh_check(struct ieee80211_rx_data *rx)
|
2008-02-23 14:17:10 +00:00
|
|
|
{
|
2008-07-02 23:30:51 +00:00
|
|
|
struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)rx->skb->data;
|
2009-11-25 16:46:19 +00:00
|
|
|
char *dev_addr = rx->sdata->vif.addr;
|
2008-02-25 15:24:38 +00:00
|
|
|
|
2008-07-02 23:30:51 +00:00
|
|
|
if (ieee80211_is_data(hdr->frame_control)) {
|
2009-08-10 19:15:48 +00:00
|
|
|
if (is_multicast_ether_addr(hdr->addr1)) {
|
|
|
|
if (ieee80211_has_tods(hdr->frame_control) ||
|
2012-10-25 22:09:11 +00:00
|
|
|
!ieee80211_has_fromds(hdr->frame_control))
|
2009-08-10 19:15:48 +00:00
|
|
|
return RX_DROP_MONITOR;
|
mac80211: Convert compare_ether_addr to ether_addr_equal
Use the new bool function ether_addr_equal to add
some clarity and reduce the likelihood for misuse
of compare_ether_addr for sorting.
Done via cocci script:
$ cat compare_ether_addr.cocci
@@
expression a,b;
@@
- !compare_ether_addr(a, b)
+ ether_addr_equal(a, b)
@@
expression a,b;
@@
- compare_ether_addr(a, b)
+ !ether_addr_equal(a, b)
@@
expression a,b;
@@
- !ether_addr_equal(a, b) == 0
+ ether_addr_equal(a, b)
@@
expression a,b;
@@
- !ether_addr_equal(a, b) != 0
+ !ether_addr_equal(a, b)
@@
expression a,b;
@@
- ether_addr_equal(a, b) == 0
+ !ether_addr_equal(a, b)
@@
expression a,b;
@@
- ether_addr_equal(a, b) != 0
+ ether_addr_equal(a, b)
@@
expression a,b;
@@
- !!ether_addr_equal(a, b)
+ ether_addr_equal(a, b)
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-05-08 18:56:52 +00:00
|
|
|
if (ether_addr_equal(hdr->addr3, dev_addr))
|
2009-08-10 19:15:48 +00:00
|
|
|
return RX_DROP_MONITOR;
|
|
|
|
} else {
|
|
|
|
if (!ieee80211_has_a4(hdr->frame_control))
|
|
|
|
return RX_DROP_MONITOR;
|
2012-05-08 18:56:52 +00:00
|
|
|
if (ether_addr_equal(hdr->addr4, dev_addr))
|
2009-08-10 19:15:48 +00:00
|
|
|
return RX_DROP_MONITOR;
|
|
|
|
}
|
2008-02-23 14:17:10 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/* If there is not an established peer link and this is not a peer link
|
|
|
|
* establishment frame, beacon or probe, drop the frame.
|
|
|
|
*/
|
|
|
|
|
2011-05-13 17:45:43 +00:00
|
|
|
if (!rx->sta || sta_plink_state(rx->sta) != NL80211_PLINK_ESTAB) {
|
2008-02-23 14:17:10 +00:00
|
|
|
struct ieee80211_mgmt *mgmt;
|
2008-02-25 15:24:38 +00:00
|
|
|
|
2008-07-02 23:30:51 +00:00
|
|
|
if (!ieee80211_is_mgmt(hdr->frame_control))
|
2008-02-23 14:17:10 +00:00
|
|
|
return RX_DROP_MONITOR;
|
|
|
|
|
2008-07-02 23:30:51 +00:00
|
|
|
if (ieee80211_is_action(hdr->frame_control)) {
|
2011-05-03 23:57:09 +00:00
|
|
|
u8 category;
|
2012-10-25 22:36:40 +00:00
|
|
|
|
|
|
|
/* make sure category field is present */
|
|
|
|
if (rx->skb->len < IEEE80211_MIN_ACTION_SIZE)
|
|
|
|
return RX_DROP_MONITOR;
|
|
|
|
|
2008-02-23 14:17:10 +00:00
|
|
|
mgmt = (struct ieee80211_mgmt *)hdr;
|
2011-05-03 23:57:09 +00:00
|
|
|
category = mgmt->u.action.category;
|
|
|
|
if (category != WLAN_CATEGORY_MESH_ACTION &&
|
2012-10-25 22:09:11 +00:00
|
|
|
category != WLAN_CATEGORY_SELF_PROTECTED)
|
2008-02-23 14:17:10 +00:00
|
|
|
return RX_DROP_MONITOR;
|
|
|
|
return RX_CONTINUE;
|
|
|
|
}
|
|
|
|
|
2008-07-02 23:30:51 +00:00
|
|
|
if (ieee80211_is_probe_req(hdr->frame_control) ||
|
|
|
|
ieee80211_is_probe_resp(hdr->frame_control) ||
|
2011-04-07 22:08:31 +00:00
|
|
|
ieee80211_is_beacon(hdr->frame_control) ||
|
|
|
|
ieee80211_is_auth(hdr->frame_control))
|
2008-07-02 23:30:51 +00:00
|
|
|
return RX_CONTINUE;
|
|
|
|
|
|
|
|
return RX_DROP_MONITOR;
|
|
|
|
}
|
|
|
|
|
2008-02-23 14:17:19 +00:00
|
|
|
return RX_CONTINUE;
|
|
|
|
}
|
2008-02-23 14:17:10 +00:00
|
|
|
|
2016-01-28 14:19:24 +00:00
|
|
|
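/*
 * A reorder slot is ready for release when the driver marked it as
 * filtered, or when it holds a complete (A-)MSDU, i.e. the last queued
 * skb does not carry RX_FLAG_AMSDU_MORE.
 */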
static inline bool ieee80211_rx_reorder_ready(struct tid_ampdu_rx *tid_agg_rx,
|
|
|
|
int index)
|
|
|
|
{
|
|
|
|
struct sk_buff_head *frames = &tid_agg_rx->reorder_buf[index];
|
|
|
|
struct sk_buff *tail = skb_peek_tail(frames);
|
|
|
|
struct ieee80211_rx_status *status;
|
|
|
|
|
2023-08-18 01:40:04 +00:00
|
|
|
if (tid_agg_rx->reorder_buf_filtered &&
|
|
|
|
tid_agg_rx->reorder_buf_filtered & BIT_ULL(index))
|
2016-01-28 14:19:25 +00:00
|
|
|
return true;
|
|
|
|
|
2016-01-28 14:19:24 +00:00
|
|
|
if (!tail)
|
|
|
|
return false;
|
|
|
|
|
|
|
|
status = IEEE80211_SKB_RXCB(tail);
|
|
|
|
if (status->flag & RX_FLAG_AMSDU_MORE)
|
|
|
|
return false;
|
|
|
|
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
2012-06-22 10:48:38 +00:00
|
|
|
static void ieee80211_release_reorder_frame(struct ieee80211_sub_if_data *sdata,
|
2009-11-25 16:46:16 +00:00
|
|
|
struct tid_ampdu_rx *tid_agg_rx,
|
2013-02-04 17:44:44 +00:00
|
|
|
int index,
|
|
|
|
struct sk_buff_head *frames)
|
2009-11-25 16:46:16 +00:00
|
|
|
{
|
2014-07-16 10:09:31 +00:00
|
|
|
struct sk_buff_head *skb_list = &tid_agg_rx->reorder_buf[index];
|
|
|
|
struct sk_buff *skb;
|
2010-12-27 22:21:26 +00:00
|
|
|
struct ieee80211_rx_status *status;
|
2009-11-25 16:46:16 +00:00
|
|
|
|
2010-11-29 10:09:16 +00:00
|
|
|
lockdep_assert_held(&tid_agg_rx->reorder_lock);
|
|
|
|
|
2014-07-16 10:09:31 +00:00
|
|
|
if (skb_queue_empty(skb_list))
|
2009-11-25 16:46:16 +00:00
|
|
|
goto no_frame;
|
|
|
|
|
2016-01-28 14:19:24 +00:00
|
|
|
if (!ieee80211_rx_reorder_ready(tid_agg_rx, index)) {
|
2014-07-16 10:09:31 +00:00
|
|
|
__skb_queue_purge(skb_list);
|
|
|
|
goto no_frame;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* release frames from the reorder ring buffer */
|
2009-11-25 16:46:16 +00:00
|
|
|
tid_agg_rx->stored_mpdu_num--;
|
2014-07-16 10:09:31 +00:00
|
|
|
while ((skb = __skb_dequeue(skb_list))) {
|
|
|
|
status = IEEE80211_SKB_RXCB(skb);
|
|
|
|
status->rx_flags |= IEEE80211_RX_DEFERRED_RELEASE;
|
|
|
|
__skb_queue_tail(frames, skb);
|
|
|
|
}
|
2009-11-25 16:46:16 +00:00
|
|
|
|
|
|
|
no_frame:
|
2023-08-18 01:40:04 +00:00
|
|
|
if (tid_agg_rx->reorder_buf_filtered)
|
|
|
|
tid_agg_rx->reorder_buf_filtered &= ~BIT_ULL(index);
|
2013-02-15 18:25:00 +00:00
|
|
|
tid_agg_rx->head_seq_num = ieee80211_sn_inc(tid_agg_rx->head_seq_num);
|
2009-11-25 16:46:16 +00:00
|
|
|
}
|
|
|
|
|
2012-06-22 10:48:38 +00:00
|
|
|
static void ieee80211_release_reorder_frames(struct ieee80211_sub_if_data *sdata,
|
2009-11-25 16:46:16 +00:00
|
|
|
struct tid_ampdu_rx *tid_agg_rx,
|
2013-02-04 17:44:44 +00:00
|
|
|
u16 head_seq_num,
|
|
|
|
struct sk_buff_head *frames)
|
2009-11-25 16:46:16 +00:00
|
|
|
{
|
|
|
|
int index;
|
|
|
|
|
2010-11-29 10:09:16 +00:00
|
|
|
lockdep_assert_held(&tid_agg_rx->reorder_lock);
|
|
|
|
|
2013-02-15 18:25:00 +00:00
|
|
|
while (ieee80211_sn_less(tid_agg_rx->head_seq_num, head_seq_num)) {
|
2013-10-24 13:53:32 +00:00
|
|
|
index = tid_agg_rx->head_seq_num % tid_agg_rx->buf_size;
|
2013-02-04 17:44:44 +00:00
|
|
|
ieee80211_release_reorder_frame(sdata, tid_agg_rx, index,
|
|
|
|
frames);
|
2009-11-25 16:46:16 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Timeout (in jiffies) for skb's that are waiting in the RX reorder buffer. If
|
|
|
|
* the skb was added to the buffer longer than this time ago, the earlier
|
|
|
|
* frames that have not yet been received are assumed to be lost and the skb
|
|
|
|
* can be released for processing. This may also release other skb's from the
|
|
|
|
* reorder buffer if there are no additional gaps between the frames.
|
2010-08-04 23:36:41 +00:00
|
|
|
*
|
|
|
|
* Callers must hold tid_agg_rx->reorder_lock.
|
2009-11-25 16:46:16 +00:00
|
|
|
*/
|
|
|
|
#define HT_RX_REORDER_BUF_TIMEOUT (HZ / 10)
|
|
|
|
|
2012-06-22 10:48:38 +00:00
|
|
|
static void ieee80211_sta_reorder_release(struct ieee80211_sub_if_data *sdata,
|
2013-02-04 17:44:44 +00:00
|
|
|
struct tid_ampdu_rx *tid_agg_rx,
|
|
|
|
struct sk_buff_head *frames)
|
2010-08-04 23:36:04 +00:00
|
|
|
{
|
2014-07-16 10:09:31 +00:00
|
|
|
int index, i, j;
|
2010-08-04 23:36:04 +00:00
|
|
|
|
2010-11-29 10:09:16 +00:00
|
|
|
lockdep_assert_held(&tid_agg_rx->reorder_lock);
|
|
|
|
|
2010-08-04 23:36:04 +00:00
|
|
|
/* release the buffer until next missing frame */
|
2013-10-24 13:53:32 +00:00
|
|
|
index = tid_agg_rx->head_seq_num % tid_agg_rx->buf_size;
|
2016-01-28 14:19:24 +00:00
|
|
|
if (!ieee80211_rx_reorder_ready(tid_agg_rx, index) &&
|
2012-02-01 16:48:09 +00:00
|
|
|
tid_agg_rx->stored_mpdu_num) {
|
2010-08-04 23:36:04 +00:00
|
|
|
/*
|
|
|
|
* No buffers ready to be released, but check whether any
|
|
|
|
* frames in the reorder buffer have timed out.
|
|
|
|
*/
|
|
|
|
int skipped = 1;
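		/* The slot at 'index' is the first hole; count further holes
		 * while scanning for a stored frame that has waited longer
		 * than the reorder timeout.
		 */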
|
|
|
|
for (j = (index + 1) % tid_agg_rx->buf_size; j != index;
|
|
|
|
j = (j + 1) % tid_agg_rx->buf_size) {
|
2016-01-28 14:19:24 +00:00
|
|
|
if (!ieee80211_rx_reorder_ready(tid_agg_rx, j)) {
|
2010-08-04 23:36:04 +00:00
|
|
|
skipped++;
|
|
|
|
continue;
|
|
|
|
}
|
2011-03-24 23:01:48 +00:00
|
|
|
if (skipped &&
|
|
|
|
!time_after(jiffies, tid_agg_rx->reorder_time[j] +
|
2010-08-04 23:36:04 +00:00
|
|
|
HT_RX_REORDER_BUF_TIMEOUT))
|
2010-08-04 23:36:41 +00:00
|
|
|
goto set_release_timer;
|
2010-08-04 23:36:04 +00:00
|
|
|
|
2014-07-16 10:09:31 +00:00
|
|
|
/* don't leave incomplete A-MSDUs around */
|
|
|
|
for (i = (index + 1) % tid_agg_rx->buf_size; i != j;
|
|
|
|
i = (i + 1) % tid_agg_rx->buf_size)
|
|
|
|
__skb_queue_purge(&tid_agg_rx->reorder_buf[i]);
|
|
|
|
|
2012-06-22 09:29:50 +00:00
|
|
|
ht_dbg_ratelimited(sdata,
|
|
|
|
"release an RX reorder frame due to timeout on earlier frames\n");
|
2013-02-04 17:44:44 +00:00
|
|
|
ieee80211_release_reorder_frame(sdata, tid_agg_rx, j,
|
|
|
|
frames);
|
2010-08-04 23:36:04 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Increment the head seq# also for the skipped slots.
|
|
|
|
*/
|
|
|
|
tid_agg_rx->head_seq_num =
|
2013-02-15 18:25:00 +00:00
|
|
|
(tid_agg_rx->head_seq_num +
|
|
|
|
skipped) & IEEE80211_SN_MASK;
|
2010-08-04 23:36:04 +00:00
|
|
|
skipped = 0;
|
|
|
|
}
|
2016-01-28 14:19:24 +00:00
|
|
|
} else while (ieee80211_rx_reorder_ready(tid_agg_rx, index)) {
|
2013-02-04 17:44:44 +00:00
|
|
|
ieee80211_release_reorder_frame(sdata, tid_agg_rx, index,
|
|
|
|
frames);
|
2013-10-24 13:53:32 +00:00
|
|
|
index = tid_agg_rx->head_seq_num % tid_agg_rx->buf_size;
|
2010-08-04 23:36:04 +00:00
|
|
|
}
|
2010-08-04 23:36:41 +00:00
|
|
|
|
|
|
|
if (tid_agg_rx->stored_mpdu_num) {
|
2013-10-24 13:53:32 +00:00
|
|
|
j = index = tid_agg_rx->head_seq_num % tid_agg_rx->buf_size;
|
2010-08-04 23:36:41 +00:00
|
|
|
|
|
|
|
for (; j != (index - 1) % tid_agg_rx->buf_size;
|
|
|
|
j = (j + 1) % tid_agg_rx->buf_size) {
|
2016-01-28 14:19:24 +00:00
|
|
|
if (ieee80211_rx_reorder_ready(tid_agg_rx, j))
|
2010-08-04 23:36:41 +00:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
set_release_timer:
|
|
|
|
|
2015-04-01 12:20:42 +00:00
|
|
|
if (!tid_agg_rx->removed)
|
|
|
|
mod_timer(&tid_agg_rx->reorder_timer,
|
|
|
|
tid_agg_rx->reorder_time[j] + 1 +
|
|
|
|
HT_RX_REORDER_BUF_TIMEOUT);
|
2010-08-04 23:36:41 +00:00
|
|
|
} else {
|
|
|
|
del_timer(&tid_agg_rx->reorder_timer);
|
|
|
|
}
|
2010-08-04 23:36:04 +00:00
|
|
|
}
|
|
|
|
|
2009-11-25 16:46:16 +00:00
|
|
|
/*
|
|
|
|
* As this function belongs to the RX path it must be under
|
|
|
|
* rcu_read_lock protection. It returns false if the frame
|
|
|
|
* can be processed immediately, true if it was consumed.
|
|
|
|
*/
|
2012-06-22 10:48:38 +00:00
|
|
|
static bool ieee80211_sta_manage_reorder_buf(struct ieee80211_sub_if_data *sdata,
|
2009-11-25 16:46:16 +00:00
|
|
|
struct tid_ampdu_rx *tid_agg_rx,
|
2013-02-04 17:44:44 +00:00
|
|
|
struct sk_buff *skb,
|
|
|
|
struct sk_buff_head *frames)
|
2009-11-25 16:46:16 +00:00
|
|
|
{
|
|
|
|
struct ieee80211_hdr *hdr = (struct ieee80211_hdr *) skb->data;
|
2014-07-16 10:09:31 +00:00
|
|
|
struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);
|
2009-11-25 16:46:16 +00:00
|
|
|
u16 sc = le16_to_cpu(hdr->seq_ctrl);
|
|
|
|
u16 mpdu_seq_num = (sc & IEEE80211_SCTL_SEQ) >> 4;
|
|
|
|
u16 head_seq_num, buf_size;
|
|
|
|
int index;
|
2010-08-04 23:36:41 +00:00
|
|
|
bool ret = true;
|
2009-11-25 16:46:16 +00:00
|
|
|
|
2010-11-29 10:09:16 +00:00
|
|
|
spin_lock(&tid_agg_rx->reorder_lock);
|
|
|
|
|
2014-09-02 12:05:10 +00:00
|
|
|
/*
|
|
|
|
* Offloaded BA sessions have no known starting sequence number so pick
|
|
|
|
* one from the first Rx'ed frame for this tid after BA was started.
|
|
|
|
*/
|
|
|
|
if (unlikely(tid_agg_rx->auto_seq)) {
|
|
|
|
tid_agg_rx->auto_seq = false;
|
|
|
|
tid_agg_rx->ssn = mpdu_seq_num;
|
|
|
|
tid_agg_rx->head_seq_num = mpdu_seq_num;
|
|
|
|
}
|
|
|
|
|
2009-11-25 16:46:16 +00:00
|
|
|
buf_size = tid_agg_rx->buf_size;
|
|
|
|
head_seq_num = tid_agg_rx->head_seq_num;
|
|
|
|
|
2017-02-06 13:28:42 +00:00
|
|
|
/*
|
|
|
|
* If the current MPDU's SN is smaller than the SSN, it shouldn't
|
|
|
|
* be reordered.
|
|
|
|
*/
|
|
|
|
if (unlikely(!tid_agg_rx->started)) {
|
|
|
|
if (ieee80211_sn_less(mpdu_seq_num, head_seq_num)) {
|
|
|
|
ret = false;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
tid_agg_rx->started = true;
|
|
|
|
}
|
|
|
|
|
2009-11-25 16:46:16 +00:00
|
|
|
/* frame with out of date sequence number */
|
2013-02-15 18:25:00 +00:00
|
|
|
if (ieee80211_sn_less(mpdu_seq_num, head_seq_num)) {
|
2009-11-25 16:46:16 +00:00
|
|
|
dev_kfree_skb(skb);
|
2010-08-04 23:36:41 +00:00
|
|
|
goto out;
|
2009-11-25 16:46:16 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If the frame's sequence number exceeds our buffering window
|
|
|
|
* size release some previous frames to make room for this one.
|
|
|
|
*/
|
2013-02-15 18:25:00 +00:00
|
|
|
if (!ieee80211_sn_less(mpdu_seq_num, head_seq_num + buf_size)) {
|
|
|
|
head_seq_num = ieee80211_sn_inc(
|
|
|
|
ieee80211_sn_sub(mpdu_seq_num, buf_size));
|
2009-11-25 16:46:16 +00:00
|
|
|
/* release stored frames up to new head to stack */
|
2012-06-22 10:48:38 +00:00
|
|
|
ieee80211_release_reorder_frames(sdata, tid_agg_rx,
|
2013-02-04 17:44:44 +00:00
|
|
|
head_seq_num, frames);
|
2009-11-25 16:46:16 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/* Now the new frame is always in the range of the reordering buffer */
|
|
|
|
|
2013-10-24 13:53:32 +00:00
|
|
|
index = mpdu_seq_num % tid_agg_rx->buf_size;
|
2009-11-25 16:46:16 +00:00
|
|
|
|
|
|
|
/* check if we already stored this frame */
|
2016-01-28 14:19:24 +00:00
|
|
|
if (ieee80211_rx_reorder_ready(tid_agg_rx, index)) {
|
2009-11-25 16:46:16 +00:00
|
|
|
dev_kfree_skb(skb);
|
2010-08-04 23:36:41 +00:00
|
|
|
goto out;
|
2009-11-25 16:46:16 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If the current MPDU is in the right order and nothing else
|
|
|
|
* is stored we can process it directly, no need to buffer it.
|
2011-03-15 22:17:01 +00:00
|
|
|
* If it is first but there's something stored, we may be able
|
|
|
|
* to release frames after this one.
|
2009-11-25 16:46:16 +00:00
|
|
|
*/
|
|
|
|
if (mpdu_seq_num == tid_agg_rx->head_seq_num &&
|
|
|
|
tid_agg_rx->stored_mpdu_num == 0) {
|
2014-07-16 10:09:31 +00:00
|
|
|
if (!(status->flag & RX_FLAG_AMSDU_MORE))
|
|
|
|
tid_agg_rx->head_seq_num =
|
|
|
|
ieee80211_sn_inc(tid_agg_rx->head_seq_num);
|
2010-08-04 23:36:41 +00:00
|
|
|
ret = false;
|
|
|
|
goto out;
|
2009-11-25 16:46:16 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/* put the frame in the reordering buffer */
|
2014-07-16 10:09:31 +00:00
|
|
|
__skb_queue_tail(&tid_agg_rx->reorder_buf[index], skb);
|
|
|
|
if (!(status->flag & RX_FLAG_AMSDU_MORE)) {
|
|
|
|
tid_agg_rx->reorder_time[index] = jiffies;
|
|
|
|
tid_agg_rx->stored_mpdu_num++;
|
|
|
|
ieee80211_sta_reorder_release(sdata, tid_agg_rx, frames);
|
|
|
|
}
|
2009-11-25 16:46:16 +00:00
|
|
|
|
2010-08-04 23:36:41 +00:00
|
|
|
out:
|
|
|
|
spin_unlock(&tid_agg_rx->reorder_lock);
|
|
|
|
return ret;
|
2009-11-25 16:46:16 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Reorder MPDUs from A-MPDUs, keeping them in the reorder buffer.
|
|
|
|
* Frames that do not need buffering are appended to the frames queue.
|
|
|
|
*/
|
2013-02-04 17:44:44 +00:00
|
|
|
static void ieee80211_rx_reorder_ampdu(struct ieee80211_rx_data *rx,
|
|
|
|
struct sk_buff_head *frames)
|
2009-11-25 16:46:16 +00:00
|
|
|
{
|
2009-11-25 16:46:17 +00:00
|
|
|
struct sk_buff *skb = rx->skb;
|
2009-11-25 16:46:16 +00:00
|
|
|
struct ieee80211_hdr *hdr = (struct ieee80211_hdr *) skb->data;
|
2009-11-25 16:46:17 +00:00
|
|
|
struct sta_info *sta = rx->sta;
|
2009-11-25 16:46:16 +00:00
|
|
|
struct tid_ampdu_rx *tid_agg_rx;
|
|
|
|
u16 sc;
|
2011-11-04 04:11:11 +00:00
|
|
|
u8 tid, ack_policy;
|
2009-11-25 16:46:16 +00:00
|
|
|
|
2013-11-20 10:28:27 +00:00
|
|
|
if (!ieee80211_is_data_qos(hdr->frame_control) ||
|
|
|
|
is_multicast_ether_addr(hdr->addr1))
|
2009-11-25 16:46:17 +00:00
|
|
|
goto dont_reorder;
|
2009-11-25 16:46:16 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* filter the QoS data rx stream according to
|
|
|
|
* STA/TID and check if this STA/TID is on aggregation
|
|
|
|
*/
|
|
|
|
|
|
|
|
if (!sta)
|
2009-11-25 16:46:17 +00:00
|
|
|
goto dont_reorder;
|
2009-11-25 16:46:16 +00:00
|
|
|
|
2011-11-04 04:11:11 +00:00
|
|
|
ack_policy = *ieee80211_get_qos_ctl(hdr) &
|
|
|
|
IEEE80211_QOS_CTL_ACK_POLICY_MASK;
|
2018-02-19 12:48:40 +00:00
|
|
|
tid = ieee80211_get_tid(hdr);
|
2009-11-25 16:46:16 +00:00
|
|
|
|
2010-06-10 08:21:38 +00:00
|
|
|
tid_agg_rx = rcu_dereference(sta->ampdu_mlme.tid_rx[tid]);
|
2016-08-29 20:25:18 +00:00
|
|
|
if (!tid_agg_rx) {
|
|
|
|
if (ack_policy == IEEE80211_QOS_CTL_ACK_POLICY_BLOCKACK &&
|
|
|
|
!test_bit(tid, rx->sta->ampdu_mlme.agg_session_valid) &&
|
|
|
|
!test_and_set_bit(tid, rx->sta->ampdu_mlme.unexpected_agg))
|
|
|
|
ieee80211_send_delba(rx->sdata, rx->sta->sta.addr, tid,
|
|
|
|
WLAN_BACK_RECIPIENT,
|
|
|
|
WLAN_REASON_QSTA_REQUIRE_SETUP);
|
2010-06-10 08:21:38 +00:00
|
|
|
goto dont_reorder;
|
2016-08-29 20:25:18 +00:00
|
|
|
}
|
2009-11-25 16:46:16 +00:00
|
|
|
|
|
|
|
/* qos null data frames are excluded */
|
|
|
|
if (unlikely(hdr->frame_control & cpu_to_le16(IEEE80211_STYPE_NULLFUNC)))
|
2010-06-10 08:21:38 +00:00
|
|
|
goto dont_reorder;
|
2009-11-25 16:46:16 +00:00
|
|
|
|
2011-11-04 04:11:11 +00:00
|
|
|
/* not part of a BA session */
|
2022-04-20 10:50:38 +00:00
|
|
|
if (ack_policy == IEEE80211_QOS_CTL_ACK_POLICY_NOACK)
|
2011-11-04 04:11:11 +00:00
|
|
|
goto dont_reorder;
|
|
|
|
|
2009-11-25 16:46:16 +00:00
|
|
|
/* new, potentially un-ordered, ampdu frame - process it */
|
|
|
|
|
|
|
|
/* reset session timer */
|
|
|
|
if (tid_agg_rx->timeout)
|
2012-03-18 21:58:06 +00:00
|
|
|
tid_agg_rx->last_rx = jiffies;
|
2009-11-25 16:46:16 +00:00
|
|
|
|
|
|
|
/* if this mpdu is fragmented - terminate rx aggregation session */
|
|
|
|
sc = le16_to_cpu(hdr->seq_ctrl);
|
|
|
|
if (sc & IEEE80211_SCTL_FRAG) {
|
2022-08-17 19:57:19 +00:00
|
|
|
ieee80211_queue_skb_to_iface(rx->sdata, rx->link_id, NULL, skb);
|
2009-11-25 16:46:17 +00:00
|
|
|
return;
|
2009-11-25 16:46:16 +00:00
|
|
|
}
|
|
|
|
|
2010-06-10 08:21:38 +00:00
|
|
|
/*
|
|
|
|
* No locking needed -- we will only ever process one
|
|
|
|
* RX packet at a time, and thus own tid_agg_rx. All
|
|
|
|
* other code manipulating it needs to (and does) make
|
|
|
|
* sure that we cannot get to it any more before doing
|
|
|
|
* anything with it.
|
|
|
|
*/
|
2013-02-04 17:44:44 +00:00
|
|
|
if (ieee80211_sta_manage_reorder_buf(rx->sdata, tid_agg_rx, skb,
|
|
|
|
frames))
|
2009-11-25 16:46:17 +00:00
|
|
|
return;
|
|
|
|
|
|
|
|
dont_reorder:
|
2013-02-04 17:44:44 +00:00
|
|
|
__skb_queue_tail(frames, skb);
|
2009-11-25 16:46:16 +00:00
|
|
|
}
|
2008-02-23 14:17:10 +00:00
|
|
|
|
2008-06-30 13:10:45 +00:00
|
|
|
static ieee80211_rx_result debug_noinline
|
2014-11-11 15:49:25 +00:00
|
|
|
ieee80211_rx_h_check_dup(struct ieee80211_rx_data *rx)
|
2007-07-27 13:43:22 +00:00
|
|
|
{
|
2008-07-02 23:30:51 +00:00
|
|
|
struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)rx->skb->data;
|
2010-09-24 10:38:25 +00:00
|
|
|
struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(rx->skb);
|
2007-07-27 13:43:22 +00:00
|
|
|
|
2015-12-08 14:04:33 +00:00
|
|
|
if (status->flag & RX_FLAG_DUP_VALIDATED)
|
|
|
|
return RX_CONTINUE;
|
|
|
|
|
2013-07-11 20:33:26 +00:00
|
|
|
/*
|
|
|
|
* Drop duplicate 802.11 retransmissions
|
|
|
|
* (IEEE 802.11-2012: 9.3.2.10 "Duplicate detection and recovery")
|
|
|
|
*/
|
2014-11-11 15:49:25 +00:00
|
|
|
|
|
|
|
if (rx->skb->len < 24)
|
|
|
|
return RX_CONTINUE;
|
|
|
|
|
|
|
|
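	/* Sequence-number based duplicate detection only applies to
	 * unicast data and management frames.
	 */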
if (ieee80211_is_ctl(hdr->frame_control) ||
|
2020-01-14 05:59:40 +00:00
|
|
|
ieee80211_is_any_nullfunc(hdr->frame_control) ||
|
2014-11-11 15:49:25 +00:00
|
|
|
is_multicast_ether_addr(hdr->addr1))
|
|
|
|
return RX_CONTINUE;
|
|
|
|
|
2015-10-14 16:27:07 +00:00
|
|
|
if (!rx->sta)
|
|
|
|
return RX_CONTINUE;
|
|
|
|
|
|
|
|
if (unlikely(ieee80211_has_retry(hdr->frame_control) &&
|
|
|
|
rx->sta->last_seq_ctrl[rx->seqno_idx] == hdr->seq_ctrl)) {
|
|
|
|
I802_DEBUG_INC(rx->local->dot11FrameDuplicateCount);
|
2022-09-02 14:12:40 +00:00
|
|
|
rx->link_sta->rx_stats.num_duplicates++;
|
2023-09-25 15:25:09 +00:00
|
|
|
return RX_DROP_U_DUP;
|
2015-10-14 16:27:07 +00:00
|
|
|
} else if (!(status->flag & RX_FLAG_AMSDU_MORE)) {
|
|
|
|
rx->sta->last_seq_ctrl[rx->seqno_idx] = hdr->seq_ctrl;
|
2007-07-27 13:43:22 +00:00
|
|
|
}
|
|
|
|
|
2014-11-11 15:49:25 +00:00
|
|
|
return RX_CONTINUE;
|
|
|
|
}
|
|
|
|
|
|
|
|
static ieee80211_rx_result debug_noinline
|
|
|
|
ieee80211_rx_h_check(struct ieee80211_rx_data *rx)
|
|
|
|
{
|
|
|
|
struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)rx->skb->data;
|
|
|
|
|
2007-07-27 13:43:22 +00:00
|
|
|
/* Drop disallowed frame classes based on STA auth/assoc state;
|
|
|
|
* IEEE 802.11, Chap 5.5.
|
|
|
|
*
|
2008-09-10 22:01:56 +00:00
|
|
|
* mac80211 filters only based on association state, i.e. it drops
|
|
|
|
* Class 3 frames from not associated stations. hostapd sends
|
2007-07-27 13:43:22 +00:00
|
|
|
* deauth/disassoc frames when needed. In addition, hostapd is
|
|
|
|
* responsible for filtering on both auth and assoc states.
|
|
|
|
*/
|
2008-02-23 14:17:10 +00:00
|
|
|
|
2008-02-23 14:17:19 +00:00
|
|
|
if (ieee80211_vif_is_mesh(&rx->sdata->vif))
|
2008-02-23 14:17:10 +00:00
|
|
|
return ieee80211_rx_mesh_check(rx);
|
|
|
|
|
2008-07-02 23:30:51 +00:00
|
|
|
if (unlikely((ieee80211_is_data(hdr->frame_control) ||
|
|
|
|
ieee80211_is_pspoll(hdr->frame_control)) &&
|
2008-09-10 22:01:58 +00:00
|
|
|
rx->sdata->vif.type != NL80211_IFTYPE_ADHOC &&
|
2014-11-03 09:33:19 +00:00
|
|
|
rx->sdata->vif.type != NL80211_IFTYPE_OCB &&
|
2011-09-29 14:04:36 +00:00
|
|
|
(!rx->sta || !test_sta_flag(rx->sta, WLAN_STA_ASSOC)))) {
|
2012-01-20 12:55:24 +00:00
|
|
|
/*
|
|
|
|
* accept port control frames from the AP even when it's not
|
|
|
|
* yet marked ASSOC to prevent a race where we don't set the
|
|
|
|
* assoc bit quickly enough before it sends the first frame
|
|
|
|
*/
|
|
|
|
if (rx->sta && rx->sdata->vif.type == NL80211_IFTYPE_STATION &&
|
2011-08-17 12:18:15 +00:00
|
|
|
ieee80211_is_data_present(hdr->frame_control)) {
|
2012-10-25 22:41:23 +00:00
|
|
|
unsigned int hdrlen;
|
|
|
|
__be16 ethertype;
|
|
|
|
|
|
|
|
hdrlen = ieee80211_hdrlen(hdr->frame_control);
|
|
|
|
|
|
|
|
if (rx->skb->len < hdrlen + 8)
|
|
|
|
return RX_DROP_MONITOR;
|
|
|
|
|
|
|
|
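			/* The ethertype follows the 6 bytes of LLC/SNAP
			 * encapsulation after the 802.11 header.
			 */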
skb_copy_bits(rx->skb, hdrlen + 6, &ethertype, 2);
|
|
|
|
if (ethertype == rx->sdata->control_port_protocol)
|
2011-08-17 12:18:15 +00:00
|
|
|
return RX_CONTINUE;
|
|
|
|
}
|
2011-11-04 10:18:13 +00:00
|
|
|
|
|
|
|
if (rx->sdata->vif.type == NL80211_IFTYPE_AP &&
|
|
|
|
cfg80211_rx_spurious_frame(rx->sdata->dev,
|
|
|
|
hdr->addr2,
|
|
|
|
GFP_ATOMIC))
|
2023-09-25 15:25:09 +00:00
|
|
|
return RX_DROP_U_SPURIOUS;
|
2011-11-04 10:18:13 +00:00
|
|
|
|
2008-01-31 18:48:21 +00:00
|
|
|
return RX_DROP_MONITOR;
|
2011-08-17 12:18:15 +00:00
|
|
|
}
|
2007-07-27 13:43:22 +00:00
|
|
|
|
2008-01-31 18:48:20 +00:00
|
|
|
return RX_CONTINUE;
|
2007-07-27 13:43:22 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
|
2009-02-10 15:09:31 +00:00
|
|
|
static ieee80211_rx_result debug_noinline
|
|
|
|
ieee80211_rx_h_check_more_data(struct ieee80211_rx_data *rx)
|
|
|
|
{
|
|
|
|
struct ieee80211_local *local;
|
|
|
|
struct ieee80211_hdr *hdr;
|
|
|
|
struct sk_buff *skb;
|
|
|
|
|
|
|
|
local = rx->local;
|
|
|
|
skb = rx->skb;
|
|
|
|
hdr = (struct ieee80211_hdr *) skb->data;
|
|
|
|
|
|
|
|
if (!local->pspolling)
|
|
|
|
return RX_CONTINUE;
|
|
|
|
|
|
|
|
if (!ieee80211_has_fromds(hdr->frame_control))
|
|
|
|
/* this is not from AP */
|
|
|
|
return RX_CONTINUE;
|
|
|
|
|
|
|
|
if (!ieee80211_is_data(hdr->frame_control))
|
|
|
|
return RX_CONTINUE;
|
|
|
|
|
|
|
|
if (!ieee80211_has_moredata(hdr->frame_control)) {
|
|
|
|
/* AP has no more frames buffered for us */
|
|
|
|
local->pspolling = false;
|
|
|
|
return RX_CONTINUE;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* more data bit is set, let's request a new frame from the AP */
|
|
|
|
ieee80211_send_pspoll(local, rx->sdata);
|
|
|
|
|
|
|
|
return RX_CONTINUE;
|
|
|
|
}
|
|
|
|
|
2012-10-10 19:39:50 +00:00
|
|
|
static void sta_ps_start(struct sta_info *sta)
|
2007-07-27 13:43:22 +00:00
|
|
|
{
|
2008-09-16 12:18:59 +00:00
|
|
|
struct ieee80211_sub_if_data *sdata = sta->sdata;
|
2008-11-29 23:48:41 +00:00
|
|
|
struct ieee80211_local *local = sdata->local;
|
2012-10-10 19:39:50 +00:00
|
|
|
struct ps_data *ps;
|
2015-03-27 20:30:37 +00:00
|
|
|
int tid;
|
2007-10-04 00:59:30 +00:00
|
|
|
|
2012-10-10 19:39:50 +00:00
|
|
|
if (sta->sdata->vif.type == NL80211_IFTYPE_AP ||
|
|
|
|
sta->sdata->vif.type == NL80211_IFTYPE_AP_VLAN)
|
|
|
|
ps = &sdata->bss->ps;
|
|
|
|
else
|
|
|
|
return;
|
|
|
|
|
|
|
|
atomic_inc(&ps->num_sta_ps);
|
2011-09-29 14:04:36 +00:00
|
|
|
set_sta_flag(sta, WLAN_STA_PS_STA);
|
2015-06-02 19:39:54 +00:00
|
|
|
if (!ieee80211_hw_check(&local->hw, AP_LINK_PS))
|
2011-01-31 20:29:13 +00:00
|
|
|
drv_sta_notify(local, sdata, STA_NOTIFY_SLEEP, &sta->sta);
|
2012-06-22 09:29:50 +00:00
|
|
|
ps_dbg(sdata, "STA %pM aid %d enters power save mode\n",
|
|
|
|
sta->sta.addr, sta->sta.aid);
|
2015-03-27 20:30:37 +00:00
|
|
|
|
2015-03-21 14:25:43 +00:00
|
|
|
ieee80211_clear_fast_xmit(sta);
|
|
|
|
|
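	/* Pull the STA's TXQs off the active scheduling list and note which
	 * TIDs still have frames queued in the driver.
	 */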
2018-08-31 08:31:08 +00:00
|
|
|
for (tid = 0; tid < IEEE80211_NUM_TIDS; tid++) {
|
2019-03-19 11:00:13 +00:00
|
|
|
struct ieee80211_txq *txq = sta->sta.txq[tid];
|
2022-06-25 21:24:05 +00:00
|
|
|
struct txq_info *txqi = to_txq_info(txq);
|
2019-03-19 11:00:13 +00:00
|
|
|
|
2022-06-25 21:24:05 +00:00
|
|
|
spin_lock(&local->active_txq_lock[txq->ac]);
|
|
|
|
if (!list_empty(&txqi->schedule_order))
|
|
|
|
list_del_init(&txqi->schedule_order);
|
|
|
|
spin_unlock(&local->active_txq_lock[txq->ac]);
|
2019-03-19 11:00:13 +00:00
|
|
|
|
|
|
|
if (txq_has_queue(txq))
|
2015-03-27 20:30:37 +00:00
|
|
|
set_bit(tid, &sta->txq_buffered_tids);
|
|
|
|
else
|
|
|
|
clear_bit(tid, &sta->txq_buffered_tids);
|
|
|
|
}
|
2007-07-27 13:43:22 +00:00
|
|
|
}
|
|
|
|
|
2012-10-10 19:39:50 +00:00
|
|
|
static void sta_ps_end(struct sta_info *sta)
|
2007-07-27 13:43:22 +00:00
|
|
|
{
|
2012-06-22 09:29:50 +00:00
|
|
|
ps_dbg(sta->sdata, "STA %pM aid %d exits power save mode\n",
|
|
|
|
sta->sta.addr, sta->sta.aid);
|
2008-02-20 10:21:35 +00:00
|
|
|
|
2011-09-29 14:04:36 +00:00
|
|
|
if (test_sta_flag(sta, WLAN_STA_PS_DRIVER)) {
|
2014-02-20 10:19:58 +00:00
|
|
|
/*
|
|
|
|
* Clear the flag only if the other one is still set
|
|
|
|
* so that the TX path won't start TX'ing new frames
|
|
|
|
* directly ... In the case that the driver flag isn't
|
|
|
|
* set ieee80211_sta_ps_deliver_wakeup() will clear it.
|
|
|
|
*/
|
|
|
|
clear_sta_flag(sta, WLAN_STA_PS_STA);
|
2012-06-22 09:29:50 +00:00
|
|
|
ps_dbg(sta->sdata, "STA %pM aid %d driver-ps-blocked\n",
|
|
|
|
sta->sta.addr, sta->sta.aid);
|
2009-11-06 10:35:50 +00:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2014-05-27 14:32:27 +00:00
|
|
|
set_sta_flag(sta, WLAN_STA_PS_DELIVER);
|
|
|
|
clear_sta_flag(sta, WLAN_STA_PS_STA);
|
2009-11-06 10:35:50 +00:00
|
|
|
ieee80211_sta_ps_deliver_wakeup(sta);
|
2007-07-27 13:43:22 +00:00
|
|
|
}
|
|
|
|
|
2015-06-16 14:16:38 +00:00
|
|
|
int ieee80211_sta_ps_transition(struct ieee80211_sta *pubsta, bool start)
|
2011-01-31 20:29:13 +00:00
|
|
|
{
|
2015-06-16 14:16:38 +00:00
|
|
|
struct sta_info *sta = container_of(pubsta, struct sta_info, sta);
|
2011-01-31 20:29:13 +00:00
|
|
|
bool in_ps;
|
|
|
|
|
2015-06-16 14:16:38 +00:00
|
|
|
WARN_ON(!ieee80211_hw_check(&sta->local->hw, AP_LINK_PS));
|
2011-01-31 20:29:13 +00:00
|
|
|
|
|
|
|
/* Don't let the same PS state be set twice */
|
2015-06-16 14:16:38 +00:00
|
|
|
in_ps = test_sta_flag(sta, WLAN_STA_PS_STA);
|
2011-01-31 20:29:13 +00:00
|
|
|
if ((start && in_ps) || (!start && !in_ps))
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
if (start)
|
2015-06-16 14:16:38 +00:00
|
|
|
sta_ps_start(sta);
|
2011-01-31 20:29:13 +00:00
|
|
|
else
|
2015-06-16 14:16:38 +00:00
|
|
|
sta_ps_end(sta);
|
2011-01-31 20:29:13 +00:00
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(ieee80211_sta_ps_transition);
|
|
|
|
|
2016-05-03 13:58:00 +00:00
|
|
|
void ieee80211_sta_pspoll(struct ieee80211_sta *pubsta)
|
|
|
|
{
|
|
|
|
struct sta_info *sta = container_of(pubsta, struct sta_info, sta);
|
|
|
|
|
|
|
|
if (test_sta_flag(sta, WLAN_STA_SP))
|
|
|
|
return;
|
|
|
|
|
|
|
|
if (!test_sta_flag(sta, WLAN_STA_PS_DRIVER))
|
|
|
|
ieee80211_sta_ps_deliver_poll_response(sta);
|
|
|
|
else
|
|
|
|
set_sta_flag(sta, WLAN_STA_PSPOLL);
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(ieee80211_sta_pspoll);
|
|
|
|
|
|
|
|
void ieee80211_sta_uapsd_trigger(struct ieee80211_sta *pubsta, u8 tid)
|
|
|
|
{
|
|
|
|
struct sta_info *sta = container_of(pubsta, struct sta_info, sta);
|
2017-01-24 15:42:10 +00:00
|
|
|
int ac = ieee80211_ac_from_tid(tid);
|
2016-05-03 13:58:00 +00:00
|
|
|
|
|
|
|
/*
|
2016-10-18 20:12:10 +00:00
|
|
|
* If this AC is not trigger-enabled do nothing unless the
|
|
|
|
* driver is calling us after it already checked.
|
2016-05-03 13:58:00 +00:00
|
|
|
*
|
|
|
|
* NB: This could/should check a separate bitmap of trigger-
|
|
|
|
* enabled queues, but for now we only implement uAPSD w/o
|
|
|
|
* TSPEC changes to the ACs, so they're always the same.
|
|
|
|
*/
|
2016-10-18 20:12:12 +00:00
|
|
|
if (!(sta->sta.uapsd_queues & ieee80211_ac_to_qos_mask[ac]) &&
|
|
|
|
tid != IEEE80211_NUM_TIDS)
|
2016-05-03 13:58:00 +00:00
|
|
|
return;
|
|
|
|
|
|
|
|
/* if we are in a service period, do nothing */
|
|
|
|
if (test_sta_flag(sta, WLAN_STA_SP))
|
|
|
|
return;
|
|
|
|
|
|
|
|
if (!test_sta_flag(sta, WLAN_STA_PS_DRIVER))
|
|
|
|
ieee80211_sta_ps_deliver_uapsd(sta);
|
|
|
|
else
|
|
|
|
set_sta_flag(sta, WLAN_STA_UAPSD);
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL(ieee80211_sta_uapsd_trigger);
|
|
|
|
|
2011-09-29 14:04:33 +00:00
|
|
|
static ieee80211_rx_result debug_noinline
|
|
|
|
ieee80211_rx_h_uapsd_and_pspoll(struct ieee80211_rx_data *rx)
|
|
|
|
{
|
|
|
|
struct ieee80211_sub_if_data *sdata = rx->sdata;
|
|
|
|
struct ieee80211_hdr *hdr = (void *)rx->skb->data;
|
|
|
|
struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(rx->skb);
|
|
|
|
|
2015-04-22 12:48:34 +00:00
|
|
|
if (!rx->sta)
|
2011-09-29 14:04:33 +00:00
|
|
|
return RX_CONTINUE;
|
|
|
|
|
|
|
|
if (sdata->vif.type != NL80211_IFTYPE_AP &&
|
|
|
|
sdata->vif.type != NL80211_IFTYPE_AP_VLAN)
|
|
|
|
return RX_CONTINUE;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* The device handles station powersave, so don't do anything about
|
|
|
|
* uAPSD and PS-Poll frames (the latter shouldn't even come up from
|
|
|
|
* it to mac80211 since they're handled.)
|
|
|
|
*/
|
2015-06-02 19:39:54 +00:00
|
|
|
if (ieee80211_hw_check(&sdata->local->hw, AP_LINK_PS))
|
2011-09-29 14:04:33 +00:00
|
|
|
return RX_CONTINUE;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Don't do anything if the station isn't already asleep. In
|
|
|
|
* the uAPSD case, the station will probably be marked asleep,
|
|
|
|
* in the PS-Poll case the station must be confused ...
|
|
|
|
*/
|
2011-09-29 14:04:36 +00:00
|
|
|
if (!test_sta_flag(rx->sta, WLAN_STA_PS_STA))
|
2011-09-29 14:04:33 +00:00
|
|
|
return RX_CONTINUE;
|
|
|
|
|
|
|
|
if (unlikely(ieee80211_is_pspoll(hdr->frame_control))) {
|
2016-05-03 13:58:00 +00:00
|
|
|
ieee80211_sta_pspoll(&rx->sta->sta);
|
2011-09-29 14:04:33 +00:00
|
|
|
|
|
|
|
/* Free PS Poll skb here instead of returning RX_DROP that would
|
|
|
|
* count as a dropped frame. */
|
|
|
|
dev_kfree_skb(rx->skb);
|
|
|
|
|
|
|
|
return RX_QUEUED;
|
|
|
|
} else if (!ieee80211_has_morefrags(hdr->frame_control) &&
|
|
|
|
!(status->rx_flags & IEEE80211_RX_DEFERRED_RELEASE) &&
|
|
|
|
ieee80211_has_pm(hdr->frame_control) &&
|
|
|
|
(ieee80211_is_data_qos(hdr->frame_control) ||
|
|
|
|
ieee80211_is_qos_nullfunc(hdr->frame_control))) {
|
2018-02-19 12:48:40 +00:00
|
|
|
u8 tid = ieee80211_get_tid(hdr);
|
2011-09-29 14:04:33 +00:00
|
|
|
|
2016-05-03 13:58:00 +00:00
|
|
|
ieee80211_sta_uapsd_trigger(&rx->sta->sta, tid);
|
2011-09-29 14:04:33 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
return RX_CONTINUE;
|
|
|
|
}
|
|
|
|
|
2008-06-30 13:10:45 +00:00
|
|
|
static ieee80211_rx_result debug_noinline
|
2008-02-25 15:27:43 +00:00
|
|
|
ieee80211_rx_h_sta_process(struct ieee80211_rx_data *rx)
|
2007-07-27 13:43:22 +00:00
|
|
|
{
|
|
|
|
struct sta_info *sta = rx->sta;
|
2022-09-02 14:12:40 +00:00
|
|
|
struct link_sta_info *link_sta = rx->link_sta;
|
2009-11-16 12:58:20 +00:00
|
|
|
struct sk_buff *skb = rx->skb;
|
|
|
|
struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);
|
|
|
|
struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
|
2013-04-22 14:29:31 +00:00
|
|
|
int i;
|
2007-07-27 13:43:22 +00:00
|
|
|
|
2022-09-02 14:12:40 +00:00
|
|
|
if (!sta || !link_sta)
|
2008-01-31 18:48:20 +00:00
|
|
|
return RX_CONTINUE;
|
2007-07-27 13:43:22 +00:00
|
|
|
|
2009-07-10 13:29:03 +00:00
|
|
|
/*
|
|
|
|
* Update last_rx only for IBSS packets which are for the current
|
2012-11-25 22:13:42 +00:00
|
|
|
* BSSID and for station already AUTHORIZED to avoid keeping the
|
|
|
|
* current IBSS network alive in cases where other STAs start
|
|
|
|
* using different BSSID. This will also give the station another
|
|
|
|
* chance to restart the authentication/authorization in case
|
|
|
|
* something went wrong the first time.
|
2009-07-10 13:29:03 +00:00
|
|
|
*/
|
2008-09-10 22:01:58 +00:00
|
|
|
if (rx->sdata->vif.type == NL80211_IFTYPE_ADHOC) {
|
2007-12-25 15:00:36 +00:00
|
|
|
u8 *bssid = ieee80211_get_bssid(hdr, rx->skb->len,
|
2008-09-10 22:01:58 +00:00
|
|
|
NL80211_IFTYPE_ADHOC);
|
2012-11-25 22:13:42 +00:00
|
|
|
if (ether_addr_equal(bssid, rx->sdata->u.ibss.bssid) &&
|
|
|
|
test_sta_flag(sta, WLAN_STA_AUTHORIZED)) {
|
2022-09-02 14:12:40 +00:00
|
|
|
link_sta->rx_stats.last_rx = jiffies;
|
2023-06-08 13:36:05 +00:00
|
|
|
if (ieee80211_is_data_present(hdr->frame_control) &&
|
2016-03-31 17:02:08 +00:00
|
|
|
!is_multicast_ether_addr(hdr->addr1))
|
2022-09-02 14:12:40 +00:00
|
|
|
link_sta->rx_stats.last_rate =
|
2016-03-31 17:02:08 +00:00
|
|
|
sta_stats_encode_rate(status);
|
2011-02-27 21:08:01 +00:00
|
|
|
}
|
2014-11-03 09:33:19 +00:00
|
|
|
} else if (rx->sdata->vif.type == NL80211_IFTYPE_OCB) {
|
2022-09-02 14:12:40 +00:00
|
|
|
link_sta->rx_stats.last_rx = jiffies;
|
2020-09-22 02:28:14 +00:00
|
|
|
} else if (!ieee80211_is_s1g_beacon(hdr->frame_control) &&
|
2020-12-09 03:06:29 +00:00
|
|
|
!is_multicast_ether_addr(hdr->addr1)) {
|
2009-07-10 13:29:03 +00:00
|
|
|
/*
|
2008-02-23 14:17:10 +00:00
|
|
|
* Mesh beacons will update last_rx if they are found to
|
|
|
|
* match the current local configuration when processed.
|
2007-07-27 13:43:22 +00:00
|
|
|
*/
|
2022-09-02 14:12:40 +00:00
|
|
|
link_sta->rx_stats.last_rx = jiffies;
|
2023-06-08 13:36:05 +00:00
|
|
|
if (ieee80211_is_data_present(hdr->frame_control))
|
2022-09-02 14:12:40 +00:00
|
|
|
link_sta->rx_stats.last_rate = sta_stats_encode_rate(status);
|
2007-07-27 13:43:22 +00:00
|
|
|
}
|
|
|
|
|
2022-09-02 14:12:40 +00:00
|
|
|
link_sta->rx_stats.fragments++;
|
2016-03-31 17:02:09 +00:00
|
|
|
|
2022-09-02 14:12:40 +00:00
|
|
|
u64_stats_update_begin(&link_sta->rx_stats.syncp);
|
|
|
|
link_sta->rx_stats.bytes += rx->skb->len;
|
|
|
|
u64_stats_update_end(&link_sta->rx_stats.syncp);
|
2016-03-31 17:02:09 +00:00
|
|
|
|
2012-03-01 17:00:07 +00:00
|
|
|
if (!(status->flag & RX_FLAG_NO_SIGNAL_VAL)) {
|
2022-09-02 14:12:40 +00:00
|
|
|
link_sta->rx_stats.last_signal = status->signal;
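		/* signal is a negative dBm value; its magnitude is fed to
		 * the EWMA helpers, which operate on unsigned values.
		 */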
|
|
|
|
ewma_signal_add(&link_sta->rx_stats_avg.signal,
|
2022-04-04 15:41:23 +00:00
|
|
|
-status->signal);
|
2012-03-01 17:00:07 +00:00
|
|
|
}
|
2007-07-27 13:43:22 +00:00
|
|
|
|
2013-04-22 14:29:31 +00:00
|
|
|
if (status->chains) {
|
2022-09-02 14:12:40 +00:00
|
|
|
link_sta->rx_stats.chains = status->chains;
|
2013-04-22 14:29:31 +00:00
|
|
|
for (i = 0; i < ARRAY_SIZE(status->chain_signal); i++) {
|
|
|
|
int signal = status->chain_signal[i];
|
|
|
|
|
|
|
|
if (!(status->chains & BIT(i)))
|
|
|
|
continue;
|
|
|
|
|
2022-09-02 14:12:40 +00:00
|
|
|
link_sta->rx_stats.chain_signal_last[i] = signal;
|
|
|
|
ewma_signal_add(&link_sta->rx_stats_avg.chain_signal[i],
|
2015-10-16 15:54:47 +00:00
|
|
|
-signal);
|
2013-04-22 14:29:31 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2020-09-22 02:28:14 +00:00
|
|
|
if (ieee80211_is_s1g_beacon(hdr->frame_control))
|
|
|
|
return RX_CONTINUE;
|
|
|
|
|
2008-11-26 14:02:58 +00:00
|
|
|
/*
|
|
|
|
* Change STA power saving mode only at the end of a frame
|
2017-10-29 09:51:07 +00:00
|
|
|
* exchange sequence, and only for a data or management
|
|
|
|
* frame as specified in IEEE 802.11-2016 11.2.3.2
|
2008-11-26 14:02:58 +00:00
|
|
|
*/
|
2015-06-02 19:39:54 +00:00
|
|
|
if (!ieee80211_hw_check(&sta->local->hw, AP_LINK_PS) &&
|
2011-01-31 20:29:13 +00:00
|
|
|
!ieee80211_has_morefrags(hdr->frame_control) &&
|
2018-08-20 10:56:07 +00:00
|
|
|
!is_multicast_ether_addr(hdr->addr1) &&
|
2017-10-29 09:51:07 +00:00
|
|
|
(ieee80211_is_mgmt(hdr->frame_control) ||
|
|
|
|
ieee80211_is_data(hdr->frame_control)) &&
|
2010-12-27 22:21:26 +00:00
|
|
|
!(status->rx_flags & IEEE80211_RX_DEFERRED_RELEASE) &&
|
2008-09-10 22:01:58 +00:00
|
|
|
(rx->sdata->vif.type == NL80211_IFTYPE_AP ||
|
2017-10-29 09:51:07 +00:00
|
|
|
rx->sdata->vif.type == NL80211_IFTYPE_AP_VLAN)) {
|
2011-09-29 14:04:36 +00:00
|
|
|
if (test_sta_flag(sta, WLAN_STA_PS_STA)) {
|
2014-01-24 13:41:44 +00:00
|
|
|
if (!ieee80211_has_pm(hdr->frame_control))
|
2012-10-10 19:39:50 +00:00
|
|
|
sta_ps_end(sta);
|
2008-11-26 14:02:58 +00:00
|
|
|
} else {
|
|
|
|
if (ieee80211_has_pm(hdr->frame_control))
|
2012-10-10 19:39:50 +00:00
|
|
|
sta_ps_start(sta);
|
2008-11-26 14:02:58 +00:00
|
|
|
}
|
2007-07-27 13:43:22 +00:00
|
|
|
}
|
|
|
|
|
2013-01-30 17:14:08 +00:00
|
|
|
/* mesh power save support */
|
|
|
|
if (ieee80211_vif_is_mesh(&rx->sdata->vif))
|
|
|
|
ieee80211_mps_rx_h_sta_process(sta, hdr);
|
|
|
|
|
2009-10-30 11:55:03 +00:00
|
|
|
/*
|
|
|
|
* Drop (qos-)data::nullfunc frames silently, since they
|
|
|
|
* are used only to control station power saving mode.
|
|
|
|
*/
|
2020-01-14 05:59:40 +00:00
|
|
|
if (ieee80211_is_any_nullfunc(hdr->frame_control)) {
|
2007-07-27 13:43:22 +00:00
|
|
|
I802_DEBUG_INC(rx->local->rx_handlers_drop_nullfunc);
|
2010-01-08 17:06:26 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* If we receive a 4-addr nullfunc frame from a STA
|
2011-11-04 10:18:20 +00:00
|
|
|
* that was not moved to a 4-addr STA vlan yet, send
|
|
|
|
* the event to userspace; for older hostapd, drop
|
|
|
|
* the frame to the monitor interface.
|
2010-01-08 17:06:26 +00:00
|
|
|
*/
|
|
|
|
if (ieee80211_has_a4(hdr->frame_control) &&
|
|
|
|
(rx->sdata->vif.type == NL80211_IFTYPE_AP ||
|
|
|
|
(rx->sdata->vif.type == NL80211_IFTYPE_AP_VLAN &&
|
2011-11-04 10:18:20 +00:00
|
|
|
!rx->sdata->u.vlan.sta))) {
|
|
|
|
if (!test_and_set_sta_flag(sta, WLAN_STA_4ADDR_EVENT))
|
|
|
|
cfg80211_rx_unexpected_4addr_frame(
|
|
|
|
rx->sdata->dev, sta->sta.addr,
|
|
|
|
GFP_ATOMIC);
|
2023-04-19 12:52:54 +00:00
|
|
|
return RX_DROP_M_UNEXPECTED_4ADDR_FRAME;
|
2011-11-04 10:18:20 +00:00
|
|
|
}
|
2009-10-30 11:55:03 +00:00
|
|
|
/*
|
|
|
|
* Update counter and free packet here to avoid
|
|
|
|
* counting this as a dropped packet.
|
|
|
|
*/
|
2022-09-02 14:12:40 +00:00
|
|
|
link_sta->rx_stats.packets++;
|
2007-07-27 13:43:22 +00:00
|
|
|
dev_kfree_skb(rx->skb);
|
2008-01-31 18:48:20 +00:00
|
|
|
return RX_QUEUED;
|
2007-07-27 13:43:22 +00:00
|
|
|
}
|
|
|
|
|
2008-01-31 18:48:20 +00:00
|
|
|
return RX_CONTINUE;
|
2007-07-27 13:43:22 +00:00
|
|
|
} /* ieee80211_rx_h_sta_process */
|
|
|
|
|
2020-02-22 13:25:47 +00:00
|
|
|
static struct ieee80211_key *
|
|
|
|
ieee80211_rx_get_bigtk(struct ieee80211_rx_data *rx, int idx)
|
|
|
|
{
|
|
|
|
struct ieee80211_key *key = NULL;
|
|
|
|
int idx2;
|
|
|
|
|
|
|
|
/* Make sure key gets set if either BIGTK key index is set so that
|
|
|
|
* ieee80211_drop_unencrypted_mgmt() can properly drop both unprotected
|
|
|
|
* Beacon frames and Beacon frames that claim to use another BIGTK key
|
|
|
|
* index (i.e., a key that we do not have).
|
|
|
|
*/
|
|
|
|
|
|
|
|
if (idx < 0) {
|
|
|
|
idx = NUM_DEFAULT_KEYS + NUM_DEFAULT_MGMT_KEYS;
|
|
|
|
idx2 = idx + 1;
|
|
|
|
} else {
|
|
|
|
if (idx == NUM_DEFAULT_KEYS + NUM_DEFAULT_MGMT_KEYS)
|
|
|
|
idx2 = idx + 1;
|
|
|
|
else
|
|
|
|
idx2 = idx - 1;
|
|
|
|
}
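	/* BIGTK key indices are 6 and 7; idx2 is simply the other one. */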
|
|
|
|
|
2022-08-17 09:17:01 +00:00
|
|
|
if (rx->link_sta)
|
|
|
|
key = rcu_dereference(rx->link_sta->gtk[idx]);
|
2020-02-22 13:25:47 +00:00
|
|
|
if (!key)
|
2022-08-17 09:17:01 +00:00
|
|
|
key = rcu_dereference(rx->link->gtk[idx]);
|
|
|
|
if (!key && rx->link_sta)
|
|
|
|
key = rcu_dereference(rx->link_sta->gtk[idx2]);
|
2020-02-22 13:25:47 +00:00
|
|
|
if (!key)
|
2022-08-17 09:17:01 +00:00
|
|
|
key = rcu_dereference(rx->link->gtk[idx2]);
|
2020-02-22 13:25:47 +00:00
|
|
|
|
|
|
|
return key;
|
|
|
|
}
|
|
|
|
|
2013-08-14 13:29:46 +00:00
|
|
|
static ieee80211_rx_result debug_noinline
|
|
|
|
ieee80211_rx_h_decrypt(struct ieee80211_rx_data *rx)
|
|
|
|
{
|
|
|
|
struct sk_buff *skb = rx->skb;
|
|
|
|
struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);
|
|
|
|
struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
|
|
|
|
int keyidx;
|
2023-09-25 15:25:09 +00:00
|
|
|
ieee80211_rx_result result = RX_DROP_U_DECRYPT_FAIL;
|
2013-08-14 13:29:46 +00:00
|
|
|
struct ieee80211_key *sta_ptk = NULL;
|
2019-03-19 20:34:08 +00:00
|
|
|
struct ieee80211_key *ptk_idx = NULL;
|
2013-08-14 13:29:46 +00:00
|
|
|
int mmie_keyidx = -1;
|
|
|
|
__le16 fc;
|
|
|
|
|
2020-09-22 02:28:14 +00:00
|
|
|
if (ieee80211_is_ext(hdr->frame_control))
|
|
|
|
return RX_CONTINUE;
|
|
|
|
|
2013-08-14 13:29:46 +00:00
|
|
|
/*
|
|
|
|
* Key selection 101
|
|
|
|
*
|
2020-02-22 13:25:47 +00:00
|
|
|
* There are five types of keys:
|
2013-08-14 13:29:46 +00:00
|
|
|
* - GTK (group keys)
|
|
|
|
* - IGTK (group keys for management frames)
|
2020-02-22 13:25:47 +00:00
|
|
|
* - BIGTK (group keys for Beacon frames)
|
2013-08-14 13:29:46 +00:00
|
|
|
* - PTK (pairwise keys)
|
|
|
|
* - STK (station-to-station pairwise keys)
|
|
|
|
*
|
|
|
|
* When selecting a key, we have to distinguish between multicast
|
|
|
|
* (including broadcast) and unicast frames, the latter can only
|
2020-02-22 13:25:47 +00:00
|
|
|
* use PTKs and STKs while the former always use GTKs, IGTKs, and
|
|
|
|
* BIGTKs. Unless, of course, actual WEP keys ("pre-RSNA") are used,
|
|
|
|
* then unicast frames can also use key indices like GTKs. Hence, if we
|
2013-08-14 13:29:46 +00:00
|
|
|
* don't have a PTK/STK we check the key index for a WEP key.
|
|
|
|
*
|
|
|
|
* Note that in a regular BSS, multicast frames are sent by the
|
|
|
|
* AP only, associated stations unicast the frame to the AP first
|
|
|
|
* which then multicasts it on their behalf.
|
|
|
|
*
|
|
|
|
* There is also a slight problem in IBSS mode: GTKs are negotiated
|
|
|
|
* with each station, that is something we don't currently handle.
|
|
|
|
* The spec seems to expect that one negotiates the same key with
|
|
|
|
* every station but there's no such requirement; VLANs could be
|
|
|
|
* possible.
|
|
|
|
*/
|
|
|
|
|
|
|
|
/* start without a key */
|
|
|
|
rx->key = NULL;
|
2013-03-24 12:23:27 +00:00
|
|
|
fc = hdr->frame_control;
|
2013-08-14 13:29:46 +00:00
|
|
|
|
2013-03-24 12:23:27 +00:00
|
|
|
if (rx->sta) {
|
|
|
|
int keyid = rx->sta->ptk_idx;
|
2019-03-19 20:34:08 +00:00
|
|
|
sta_ptk = rcu_dereference(rx->sta->ptk[keyid]);
|
2013-08-14 13:29:46 +00:00
|
|
|
|
2021-11-01 02:46:57 +00:00
|
|
|
if (ieee80211_has_protected(fc) &&
|
|
|
|
!(status->flag & RX_FLAG_IV_STRIPPED)) {
|
2022-02-09 12:14:26 +00:00
|
|
|
keyid = ieee80211_get_keyid(rx->skb);
|
2019-03-19 20:34:08 +00:00
|
|
|
|
2013-03-24 12:23:27 +00:00
|
|
|
if (unlikely(keyid < 0))
|
2023-09-25 15:25:09 +00:00
|
|
|
return RX_DROP_U_NO_KEY_ID;
|
2019-03-19 20:34:08 +00:00
|
|
|
|
|
|
|
ptk_idx = rcu_dereference(rx->sta->ptk[keyid]);
|
2013-03-24 12:23:27 +00:00
|
|
|
}
|
|
|
|
}
|
2013-08-14 13:29:46 +00:00
|
|
|
|
|
|
|
if (!ieee80211_has_protected(fc))
|
|
|
|
mmie_keyidx = ieee80211_get_mmie_keyidx(rx->skb);
|
|
|
|
|
|
|
|
if (!is_multicast_ether_addr(hdr->addr1) && sta_ptk) {
|
2019-03-19 20:34:08 +00:00
|
|
|
rx->key = ptk_idx ? ptk_idx : sta_ptk;
|
2013-08-14 13:29:46 +00:00
|
|
|
if ((status->flag & RX_FLAG_DECRYPTED) &&
|
|
|
|
(status->flag & RX_FLAG_IV_STRIPPED))
|
|
|
|
return RX_CONTINUE;
|
|
|
|
/* Skip decryption if the frame is not protected. */
|
|
|
|
if (!ieee80211_has_protected(fc))
|
|
|
|
return RX_CONTINUE;
|
2020-02-22 13:25:47 +00:00
|
|
|
} else if (mmie_keyidx >= 0 && ieee80211_is_beacon(fc)) {
|
|
|
|
/* Broadcast/multicast robust management frame / BIP */
|
|
|
|
if ((status->flag & RX_FLAG_DECRYPTED) &&
|
|
|
|
(status->flag & RX_FLAG_IV_STRIPPED))
|
|
|
|
return RX_CONTINUE;
|
|
|
|
|
|
|
|
if (mmie_keyidx < NUM_DEFAULT_KEYS + NUM_DEFAULT_MGMT_KEYS ||
|
|
|
|
mmie_keyidx >= NUM_DEFAULT_KEYS + NUM_DEFAULT_MGMT_KEYS +
|
2022-10-05 19:24:10 +00:00
|
|
|
NUM_DEFAULT_BEACON_KEYS) {
|
|
|
|
if (rx->sdata->dev)
|
|
|
|
cfg80211_rx_unprot_mlme_mgmt(rx->sdata->dev,
|
|
|
|
skb->data,
|
|
|
|
skb->len);
|
2023-04-19 12:52:54 +00:00
|
|
|
return RX_DROP_M_BAD_BCN_KEYIDX;
|
2020-04-01 14:25:48 +00:00
|
|
|
}
|
2020-02-22 13:25:47 +00:00
|
|
|
|
|
|
|
rx->key = ieee80211_rx_get_bigtk(rx, mmie_keyidx);
|
|
|
|
if (!rx->key)
|
|
|
|
return RX_CONTINUE; /* Beacon protection not in use */
|
2013-08-14 13:29:46 +00:00
|
|
|
} else if (mmie_keyidx >= 0) {
|
|
|
|
/* Broadcast/multicast robust management frame / BIP */
|
|
|
|
if ((status->flag & RX_FLAG_DECRYPTED) &&
|
|
|
|
(status->flag & RX_FLAG_IV_STRIPPED))
|
|
|
|
return RX_CONTINUE;
|
|
|
|
|
|
|
|
if (mmie_keyidx < NUM_DEFAULT_KEYS ||
|
|
|
|
mmie_keyidx >= NUM_DEFAULT_KEYS + NUM_DEFAULT_MGMT_KEYS)
|
2023-04-19 12:52:54 +00:00
|
|
|
return RX_DROP_M_BAD_MGMT_KEYIDX; /* unexpected BIP keyidx */
|
2022-08-17 09:17:01 +00:00
|
|
|
if (rx->link_sta) {
|
2016-06-22 10:55:20 +00:00
|
|
|
if (ieee80211_is_group_privacy_action(skb) &&
|
|
|
|
test_sta_flag(rx->sta, WLAN_STA_MFP))
|
|
|
|
return RX_DROP_MONITOR;
|
|
|
|
|
2022-08-17 09:17:01 +00:00
|
|
|
rx->key = rcu_dereference(rx->link_sta->gtk[mmie_keyidx]);
|
2016-06-22 10:55:20 +00:00
|
|
|
}
|
2013-08-14 13:29:46 +00:00
|
		if (!rx->key)
			rx->key = rcu_dereference(rx->link->gtk[mmie_keyidx]);
	} else if (!ieee80211_has_protected(fc)) {
		/*
		 * The frame was not protected, so skip decryption. However, we
		 * need to set rx->key if there is a key that could have been
		 * used so that the frame may be dropped if encryption would
		 * have been expected.
		 */
		struct ieee80211_key *key = NULL;
		int i;

		if (ieee80211_is_beacon(fc)) {
			key = ieee80211_rx_get_bigtk(rx, -1);
		} else if (ieee80211_is_mgmt(fc) &&
			   is_multicast_ether_addr(hdr->addr1)) {
			key = rcu_dereference(rx->link->default_mgmt_key);
		} else {
			if (rx->link_sta) {
				for (i = 0; i < NUM_DEFAULT_KEYS; i++) {
					key = rcu_dereference(rx->link_sta->gtk[i]);
					if (key)
						break;
				}
			}
			if (!key) {
				for (i = 0; i < NUM_DEFAULT_KEYS; i++) {
					key = rcu_dereference(rx->link->gtk[i]);
					if (key)
						break;
				}
			}
		}
		if (key)
			rx->key = key;
		return RX_CONTINUE;
	} else {
		/*
		 * The device doesn't give us the IV so we won't be
		 * able to look up the key. That's ok though, we
		 * don't need to decrypt the frame, we just won't
		 * be able to keep statistics accurate.
		 * Except for key threshold notifications, should
		 * we somehow allow the driver to tell us which key
		 * the hardware used if this flag is set?
		 */
		if ((status->flag & RX_FLAG_DECRYPTED) &&
		    (status->flag & RX_FLAG_IV_STRIPPED))
			return RX_CONTINUE;

		keyidx = ieee80211_get_keyid(rx->skb);

		if (unlikely(keyidx < 0))
			return RX_DROP_U_NO_KEY_ID;

		/* check per-station GTK first, if multicast packet */
		if (is_multicast_ether_addr(hdr->addr1) && rx->link_sta)
			rx->key = rcu_dereference(rx->link_sta->gtk[keyidx]);

		/* if not found, try default key */
		if (!rx->key) {
			if (is_multicast_ether_addr(hdr->addr1))
				rx->key = rcu_dereference(rx->link->gtk[keyidx]);
			if (!rx->key)
				rx->key = rcu_dereference(rx->sdata->keys[keyidx]);

			/*
			 * RSNA-protected unicast frames should always be
			 * sent with pairwise or station-to-station keys,
			 * but for WEP we allow using a key index as well.
			 */
			if (rx->key &&
			    rx->key->conf.cipher != WLAN_CIPHER_SUITE_WEP40 &&
			    rx->key->conf.cipher != WLAN_CIPHER_SUITE_WEP104 &&
			    !is_multicast_ether_addr(hdr->addr1))
				rx->key = NULL;
		}
	}

	if (rx->key) {
		if (unlikely(rx->key->flags & KEY_FLAG_TAINTED))
			return RX_DROP_MONITOR;

		/* TODO: add threshold stuff again */
	} else {
		return RX_DROP_MONITOR;
	}

	switch (rx->key->conf.cipher) {
	case WLAN_CIPHER_SUITE_WEP40:
	case WLAN_CIPHER_SUITE_WEP104:
		result = ieee80211_crypto_wep_decrypt(rx);
		break;
	case WLAN_CIPHER_SUITE_TKIP:
		result = ieee80211_crypto_tkip_decrypt(rx);
		break;
	case WLAN_CIPHER_SUITE_CCMP:
		result = ieee80211_crypto_ccmp_decrypt(
			rx, IEEE80211_CCMP_MIC_LEN);
		break;
	case WLAN_CIPHER_SUITE_CCMP_256:
		result = ieee80211_crypto_ccmp_decrypt(
			rx, IEEE80211_CCMP_256_MIC_LEN);
		break;
	case WLAN_CIPHER_SUITE_AES_CMAC:
		result = ieee80211_crypto_aes_cmac_decrypt(rx);
		break;
	case WLAN_CIPHER_SUITE_BIP_CMAC_256:
		result = ieee80211_crypto_aes_cmac_256_decrypt(rx);
		break;
	case WLAN_CIPHER_SUITE_BIP_GMAC_128:
	case WLAN_CIPHER_SUITE_BIP_GMAC_256:
		result = ieee80211_crypto_aes_gmac_decrypt(rx);
		break;
	case WLAN_CIPHER_SUITE_GCMP:
	case WLAN_CIPHER_SUITE_GCMP_256:
		result = ieee80211_crypto_gcmp_decrypt(rx);
		break;
	default:
		result = RX_DROP_U_BAD_CIPHER;
	}

	/* the hdr variable is invalid after the decrypt handlers */

	/* either the frame has been decrypted or will be dropped */
	status->flag |= RX_FLAG_DECRYPTED;

	if (unlikely(ieee80211_is_beacon(fc) && RX_RES_IS_UNUSABLE(result) &&
		     rx->sdata->dev))
		cfg80211_rx_unprot_mlme_mgmt(rx->sdata->dev,
					     skb->data, skb->len);

	return result;
}
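
/*
 * Editor's illustrative sketch (not part of mac80211): the switch above
 * dispatches on the configured cipher suite and, for CCMP, passes the
 * expected MIC length explicitly (8 bytes for CCMP-128, 16 for CCMP-256).
 * The standalone program below models that mapping with locally defined
 * stand-in names; the enum and helper are assumptions for illustration
 * only and are not taken from the kernel headers.
 */
#if 0
#include <stdio.h>

/* illustrative stand-ins for the WLAN_CIPHER_SUITE_* identifiers */
enum demo_cipher {
	DEMO_CCMP,	/* CCMP-128: 8-byte MIC */
	DEMO_CCMP_256,	/* CCMP-256: 16-byte MIC */
	DEMO_GCMP,	/* GCMP-128: 16-byte MIC */
	DEMO_GCMP_256,	/* GCMP-256: 16-byte MIC */
};

static int demo_mic_len(enum demo_cipher cipher)
{
	switch (cipher) {
	case DEMO_CCMP:
		return 8;
	case DEMO_CCMP_256:
	case DEMO_GCMP:
	case DEMO_GCMP_256:
		return 16;
	}
	return -1;	/* unknown cipher: caller would drop the frame */
}

int main(void)
{
	printf("CCMP-128 MIC: %d bytes\n", demo_mic_len(DEMO_CCMP));
	printf("CCMP-256 MIC: %d bytes\n", demo_mic_len(DEMO_CCMP_256));
	return 0;
}
#endif
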
void ieee80211_init_frag_cache(struct ieee80211_fragment_cache *cache)
{
	int i;

	for (i = 0; i < ARRAY_SIZE(cache->entries); i++)
		skb_queue_head_init(&cache->entries[i].skb_list);
}

void ieee80211_destroy_frag_cache(struct ieee80211_fragment_cache *cache)
{
	int i;

	for (i = 0; i < ARRAY_SIZE(cache->entries); i++)
		__skb_queue_purge(&cache->entries[i].skb_list);
}

static inline struct ieee80211_fragment_entry *
ieee80211_reassemble_add(struct ieee80211_fragment_cache *cache,
			 unsigned int frag, unsigned int seq, int rx_queue,
			 struct sk_buff **skb)
{
	struct ieee80211_fragment_entry *entry;

	entry = &cache->entries[cache->next++];
	if (cache->next >= IEEE80211_FRAGMENT_MAX)
		cache->next = 0;

	__skb_queue_purge(&entry->skb_list);

	__skb_queue_tail(&entry->skb_list, *skb); /* no need for locking */
	*skb = NULL;
	entry->first_frag_time = jiffies;
	entry->seq = seq;
	entry->rx_queue = rx_queue;
	entry->last_frag = frag;
	entry->check_sequential_pn = false;
	entry->extra_len = 0;

	return entry;
}
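
/*
 * Editor's illustrative sketch (not part of mac80211): the fragment cache
 * above is a small ring buffer -- ieee80211_reassemble_add() always claims
 * the slot at cache->next, wrapping at IEEE80211_FRAGMENT_MAX, so starting
 * a new reassembly silently evicts the oldest in-progress entry.  The
 * standalone model below (hypothetical names, fixed size of 4) shows that
 * eviction order.
 */
#if 0
#include <stdio.h>

#define DEMO_FRAGMENT_MAX 4

struct demo_entry {
	unsigned int seq;
	int in_use;
};

struct demo_cache {
	struct demo_entry entries[DEMO_FRAGMENT_MAX];
	unsigned int next;
};

/* claim the next slot, purging whatever reassembly it still held */
static struct demo_entry *demo_reassemble_add(struct demo_cache *cache,
					      unsigned int seq)
{
	struct demo_entry *entry = &cache->entries[cache->next++];

	if (cache->next >= DEMO_FRAGMENT_MAX)
		cache->next = 0;

	entry->in_use = 1;	/* stands in for __skb_queue_purge() + refill */
	entry->seq = seq;
	return entry;
}

int main(void)
{
	struct demo_cache cache = {};
	unsigned int seq;

	/* the 5th first-fragment reuses slot 0, evicting seq 100 */
	for (seq = 100; seq < 105; seq++)
		demo_reassemble_add(&cache, seq);

	printf("slot 0 now holds seq %u\n", cache.entries[0].seq); /* 104 */
	return 0;
}
#endif
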
static inline struct ieee80211_fragment_entry *
ieee80211_reassemble_find(struct ieee80211_fragment_cache *cache,
			  unsigned int frag, unsigned int seq,
			  int rx_queue, struct ieee80211_hdr *hdr)
{
	struct ieee80211_fragment_entry *entry;
	int i, idx;

	idx = cache->next;
	for (i = 0; i < IEEE80211_FRAGMENT_MAX; i++) {
		struct ieee80211_hdr *f_hdr;
		struct sk_buff *f_skb;

		idx--;
		if (idx < 0)
			idx = IEEE80211_FRAGMENT_MAX - 1;

		entry = &cache->entries[idx];
		if (skb_queue_empty(&entry->skb_list) || entry->seq != seq ||
		    entry->rx_queue != rx_queue ||
		    entry->last_frag + 1 != frag)
			continue;

		f_skb = __skb_peek(&entry->skb_list);
		f_hdr = (struct ieee80211_hdr *) f_skb->data;

		/*
		 * Check ftype and addresses are equal, else check next fragment
		 */
		if (((hdr->frame_control ^ f_hdr->frame_control) &
		     cpu_to_le16(IEEE80211_FCTL_FTYPE)) ||
		    !ether_addr_equal(hdr->addr1, f_hdr->addr1) ||
		    !ether_addr_equal(hdr->addr2, f_hdr->addr2))
			continue;

		if (time_after(jiffies, entry->first_frag_time + 2 * HZ)) {
			__skb_queue_purge(&entry->skb_list);
			continue;
		}
		return entry;
	}

	return NULL;
}

static bool requires_sequential_pn(struct ieee80211_rx_data *rx, __le16 fc)
{
	return rx->key &&
		(rx->key->conf.cipher == WLAN_CIPHER_SUITE_CCMP ||
		 rx->key->conf.cipher == WLAN_CIPHER_SUITE_CCMP_256 ||
		 rx->key->conf.cipher == WLAN_CIPHER_SUITE_GCMP ||
		 rx->key->conf.cipher == WLAN_CIPHER_SUITE_GCMP_256) &&
		ieee80211_has_protected(fc);
}

static ieee80211_rx_result debug_noinline
ieee80211_rx_h_defragment(struct ieee80211_rx_data *rx)
{
	struct ieee80211_fragment_cache *cache = &rx->sdata->frags;
	struct ieee80211_hdr *hdr;
	u16 sc;
	__le16 fc;
	unsigned int frag, seq;
	struct ieee80211_fragment_entry *entry;
	struct sk_buff *skb;
	struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(rx->skb);

	hdr = (struct ieee80211_hdr *)rx->skb->data;
	fc = hdr->frame_control;

	if (ieee80211_is_ctl(fc) || ieee80211_is_ext(fc))
		return RX_CONTINUE;

	sc = le16_to_cpu(hdr->seq_ctrl);
	frag = sc & IEEE80211_SCTL_FRAG;

	if (rx->sta)
		cache = &rx->sta->frags;

	if (likely(!ieee80211_has_morefrags(fc) && frag == 0))
		goto out;

	if (is_multicast_ether_addr(hdr->addr1))
		return RX_DROP_MONITOR;

	I802_DEBUG_INC(rx->local->rx_handlers_fragments);

	if (skb_linearize(rx->skb))
		return RX_DROP_U_OOM;

	/*
	 * skb_linearize() might change the skb->data and
	 * previously cached variables (in this case, hdr) need to
	 * be refreshed with the new data.
	 */
	hdr = (struct ieee80211_hdr *)rx->skb->data;
	seq = (sc & IEEE80211_SCTL_SEQ) >> 4;

	if (frag == 0) {
		/* This is the first fragment of a new frame. */
		entry = ieee80211_reassemble_add(cache, frag, seq,
						 rx->seqno_idx, &(rx->skb));
		if (requires_sequential_pn(rx, fc)) {
			int queue = rx->security_idx;

			/* Store CCMP/GCMP PN so that we can verify that the
			 * next fragment has a sequential PN value.
			 */
			entry->check_sequential_pn = true;
			entry->is_protected = true;
			entry->key_color = rx->key->color;
			memcpy(entry->last_pn,
			       rx->key->u.ccmp.rx_pn[queue],
			       IEEE80211_CCMP_PN_LEN);
			BUILD_BUG_ON(offsetof(struct ieee80211_key,
					      u.ccmp.rx_pn) !=
				     offsetof(struct ieee80211_key,
					      u.gcmp.rx_pn));
			BUILD_BUG_ON(sizeof(rx->key->u.ccmp.rx_pn[queue]) !=
				     sizeof(rx->key->u.gcmp.rx_pn[queue]));
			BUILD_BUG_ON(IEEE80211_CCMP_PN_LEN !=
				     IEEE80211_GCMP_PN_LEN);
		} else if (rx->key &&
			   (ieee80211_has_protected(fc) ||
			    (status->flag & RX_FLAG_DECRYPTED))) {
			entry->is_protected = true;
			entry->key_color = rx->key->color;
		}
		return RX_QUEUED;
	}

	/* This is a fragment for a frame that should already be pending in
	 * fragment cache. Add this fragment to the end of the pending entry.
	 */
	entry = ieee80211_reassemble_find(cache, frag, seq,
					  rx->seqno_idx, hdr);
	if (!entry) {
		I802_DEBUG_INC(rx->local->rx_handlers_drop_defrag);
		return RX_DROP_MONITOR;
	}

	/* "The receiver shall discard MSDUs and MMPDUs whose constituent
	 *  MPDU PN values are not incrementing in steps of 1."
	 * see IEEE P802.11-REVmc/D5.0, 12.5.3.4.4, item d (for CCMP)
	 * and IEEE P802.11-REVmc/D5.0, 12.5.5.4.4, item d (for GCMP)
	 */
	if (entry->check_sequential_pn) {
		int i;
		u8 pn[IEEE80211_CCMP_PN_LEN], *rpn;

		if (!requires_sequential_pn(rx, fc))
			return RX_DROP_U_NONSEQ_PN;

		/* Prevent mixed key and fragment cache attacks */
		if (entry->key_color != rx->key->color)
			return RX_DROP_U_BAD_KEY_COLOR;

		memcpy(pn, entry->last_pn, IEEE80211_CCMP_PN_LEN);
		for (i = IEEE80211_CCMP_PN_LEN - 1; i >= 0; i--) {
			pn[i]++;
			if (pn[i])
				break;
		}

		rpn = rx->ccm_gcm.pn;
		if (memcmp(pn, rpn, IEEE80211_CCMP_PN_LEN))
			return RX_DROP_U_REPLAY;
		memcpy(entry->last_pn, pn, IEEE80211_CCMP_PN_LEN);
	} else if (entry->is_protected &&
		   (!rx->key ||
		    (!ieee80211_has_protected(fc) &&
		     !(status->flag & RX_FLAG_DECRYPTED)) ||
		    rx->key->color != entry->key_color)) {
		/* Drop this as a mixed key or fragment cache attack, even
		 * if for TKIP Michael MIC should protect us, and WEP is a
		 * lost cause anyway.
		 */
		return RX_DROP_U_EXPECT_DEFRAG_PROT;
	} else if (entry->is_protected && rx->key &&
		   entry->key_color != rx->key->color &&
		   (status->flag & RX_FLAG_DECRYPTED)) {
		return RX_DROP_U_BAD_KEY_COLOR;
	}

	skb_pull(rx->skb, ieee80211_hdrlen(fc));
	__skb_queue_tail(&entry->skb_list, rx->skb);
	entry->last_frag = frag;
	entry->extra_len += rx->skb->len;
	if (ieee80211_has_morefrags(fc)) {
		rx->skb = NULL;
		return RX_QUEUED;
	}

	rx->skb = __skb_dequeue(&entry->skb_list);
	if (skb_tailroom(rx->skb) < entry->extra_len) {
		I802_DEBUG_INC(rx->local->rx_expand_skb_head_defrag);
		if (unlikely(pskb_expand_head(rx->skb, 0, entry->extra_len,
					      GFP_ATOMIC))) {
			I802_DEBUG_INC(rx->local->rx_handlers_drop_defrag);
			__skb_queue_purge(&entry->skb_list);
			return RX_DROP_U_OOM;
		}
	}
	while ((skb = __skb_dequeue(&entry->skb_list))) {
		skb_put_data(rx->skb, skb->data, skb->len);
		dev_kfree_skb(skb);
	}

 out:
	ieee80211_led_rx(rx->local);
	if (rx->sta)
		rx->link_sta->rx_stats.packets++;
	return RX_CONTINUE;
}
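
/*
 * Editor's illustrative sketch (not part of mac80211): the defragmentation
 * handler above only accepts a follow-up fragment when its CCMP/GCMP PN is
 * exactly last_pn + 1, computed byte-wise on the big-endian 6-byte PN with
 * carry.  The standalone helper below reproduces that increment-and-compare
 * step; the names and the memcmp-based check mirror the code above but are
 * an illustration, not the kernel implementation.
 */
#if 0
#include <stdio.h>
#include <string.h>

#define DEMO_PN_LEN 6

/* expected = last + 1, propagating the carry from the least significant
 * (last) byte towards the most significant one */
static void demo_next_pn(unsigned char *expected, const unsigned char *last)
{
	int i;

	memcpy(expected, last, DEMO_PN_LEN);
	for (i = DEMO_PN_LEN - 1; i >= 0; i--) {
		expected[i]++;
		if (expected[i])	/* stop once a byte did not wrap to 0 */
			break;
	}
}

int main(void)
{
	const unsigned char last[DEMO_PN_LEN] = { 0, 0, 0, 0, 0x01, 0xff };
	const unsigned char rx[DEMO_PN_LEN]   = { 0, 0, 0, 0, 0x02, 0x00 };
	unsigned char expected[DEMO_PN_LEN];

	demo_next_pn(expected, last);
	/* carry: ...01 ff + 1 == ...02 00, so this fragment is in sequence */
	printf("%s\n", memcmp(expected, rx, DEMO_PN_LEN) ? "replay/drop"
							 : "sequential");
	return 0;
}
#endif
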
static int ieee80211_802_1x_port_control(struct ieee80211_rx_data *rx)
{
	if (unlikely(!rx->sta || !test_sta_flag(rx->sta, WLAN_STA_AUTHORIZED)))
		return -EACCES;

	return 0;
}

static int ieee80211_drop_unencrypted(struct ieee80211_rx_data *rx, __le16 fc)
{
	struct sk_buff *skb = rx->skb;
	struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);

	/*
	 * Pass through unencrypted frames if the hardware has
	 * decrypted them already.
	 */
	if (status->flag & RX_FLAG_DECRYPTED)
		return 0;

	/* Drop unencrypted frames if key is set. */
	if (unlikely(!ieee80211_has_protected(fc) &&
		     !ieee80211_is_any_nullfunc(fc) &&
		     ieee80211_is_data(fc) && rx->key))
		return -EACCES;

	return 0;
}

static ieee80211_rx_result
ieee80211_drop_unencrypted_mgmt(struct ieee80211_rx_data *rx)
{
	struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(rx->skb);
	struct ieee80211_mgmt *mgmt = (void *)rx->skb->data;
	__le16 fc = mgmt->frame_control;

	/*
	 * Pass through unencrypted frames if the hardware has
	 * decrypted them already.
	 */
	if (status->flag & RX_FLAG_DECRYPTED)
		return RX_CONTINUE;

	/* drop unicast protected dual (that wasn't protected) */
	if (ieee80211_is_action(fc) &&
	    mgmt->u.action.category == WLAN_CATEGORY_PROTECTED_DUAL_OF_ACTION)
		return RX_DROP_U_UNPROT_DUAL;

	if (rx->sta && test_sta_flag(rx->sta, WLAN_STA_MFP)) {
		if (unlikely(!ieee80211_has_protected(fc) &&
			     ieee80211_is_unicast_robust_mgmt_frame(rx->skb))) {
			if (ieee80211_is_deauth(fc) ||
			    ieee80211_is_disassoc(fc)) {
				/*
				 * Permit unprotected deauth/disassoc frames
				 * during 4-way-HS (key is installed after HS).
				 */
				if (!rx->key)
					return RX_CONTINUE;

				cfg80211_rx_unprot_mlme_mgmt(rx->sdata->dev,
							     rx->skb->data,
							     rx->skb->len);
			}
			return RX_DROP_U_UNPROT_UCAST_MGMT;
		}
		/* BIP does not use Protected field, so need to check MMIE */
		if (unlikely(ieee80211_is_multicast_robust_mgmt_frame(rx->skb) &&
			     ieee80211_get_mmie_keyidx(rx->skb) < 0)) {
			if (ieee80211_is_deauth(fc) ||
			    ieee80211_is_disassoc(fc))
				cfg80211_rx_unprot_mlme_mgmt(rx->sdata->dev,
							     rx->skb->data,
							     rx->skb->len);
			return RX_DROP_U_UNPROT_MCAST_MGMT;
		}
		if (unlikely(ieee80211_is_beacon(fc) && rx->key &&
			     ieee80211_get_mmie_keyidx(rx->skb) < 0)) {
			cfg80211_rx_unprot_mlme_mgmt(rx->sdata->dev,
						     rx->skb->data,
						     rx->skb->len);
			return RX_DROP_U_UNPROT_BEACON;
		}
		/*
		 * When using MFP, Action frames are not allowed prior to
		 * having configured keys.
		 */
		if (unlikely(ieee80211_is_action(fc) && !rx->key &&
			     ieee80211_is_robust_mgmt_frame(rx->skb)))
			return RX_DROP_U_UNPROT_ACTION;

		/* drop unicast public action frames when using MFP */
		if (is_unicast_ether_addr(mgmt->da) &&
		    ieee80211_is_protected_dual_of_public_action(rx->skb))
			return RX_DROP_U_UNPROT_UNICAST_PUB_ACTION;
	}

	/*
	 * Drop robust action frames before assoc regardless of MFP state,
	 * after assoc we also have decided on MFP or not.
	 */
	if (ieee80211_is_action(fc) &&
	    ieee80211_is_robust_mgmt_frame(rx->skb) &&
	    (!rx->sta || !test_sta_flag(rx->sta, WLAN_STA_ASSOC)))
		return RX_DROP_U_UNPROT_ROBUST_ACTION;

	return RX_CONTINUE;
}

static ieee80211_rx_result
__ieee80211_data_to_8023(struct ieee80211_rx_data *rx, bool *port_control)
{
	struct ieee80211_sub_if_data *sdata = rx->sdata;
	struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)rx->skb->data;
	bool check_port_control = false;
	struct ethhdr *ehdr;
	int ret;

	*port_control = false;
	if (ieee80211_has_a4(hdr->frame_control) &&
	    sdata->vif.type == NL80211_IFTYPE_AP_VLAN && !sdata->u.vlan.sta)
		return RX_DROP_U_UNEXPECTED_VLAN_4ADDR;

	if (sdata->vif.type == NL80211_IFTYPE_STATION &&
	    !!sdata->u.mgd.use_4addr != !!ieee80211_has_a4(hdr->frame_control)) {
		if (!sdata->u.mgd.use_4addr)
			return RX_DROP_U_UNEXPECTED_STA_4ADDR;
		else if (!ether_addr_equal(hdr->addr1, sdata->vif.addr))
			check_port_control = true;
	}

	if (is_multicast_ether_addr(hdr->addr1) &&
	    sdata->vif.type == NL80211_IFTYPE_AP_VLAN && sdata->u.vlan.sta)
		return RX_DROP_U_UNEXPECTED_VLAN_MCAST;

	ret = ieee80211_data_to_8023(rx->skb, sdata->vif.addr, sdata->vif.type);
	if (ret < 0)
		return RX_DROP_U_INVALID_8023;

	ehdr = (struct ethhdr *) rx->skb->data;
	if (ehdr->h_proto == rx->sdata->control_port_protocol)
		*port_control = true;
	else if (check_port_control)
		return RX_DROP_U_NOT_PORT_CONTROL;

	return RX_CONTINUE;
}

bool ieee80211_is_our_addr(struct ieee80211_sub_if_data *sdata,
			   const u8 *addr, int *out_link_id)
{
	unsigned int link_id;

	/* non-MLO, or MLD address replaced by hardware */
	if (ether_addr_equal(sdata->vif.addr, addr))
		return true;

	if (!ieee80211_vif_is_mld(&sdata->vif))
		return false;

	for (link_id = 0; link_id < ARRAY_SIZE(sdata->vif.link_conf); link_id++) {
		struct ieee80211_bss_conf *conf;

		conf = rcu_dereference(sdata->vif.link_conf[link_id]);

		if (!conf)
			continue;
		if (ether_addr_equal(conf->addr, addr)) {
			if (out_link_id)
				*out_link_id = link_id;
			return true;
		}
	}

	return false;
}
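
/*
 * Editor's illustrative sketch (not part of mac80211): for an MLD,
 * ieee80211_is_our_addr() above accepts either the MLD address or any
 * affiliated link address and, in the latter case, reports which link
 * matched.  The standalone model below uses plain arrays instead of the
 * RCU-protected link_conf pointers; the names and the two-link setup are
 * hypothetical.
 */
#if 0
#include <stdio.h>
#include <string.h>

#define DEMO_ETH_ALEN 6
#define DEMO_MAX_LINKS 2

struct demo_vif {
	unsigned char mld_addr[DEMO_ETH_ALEN];
	unsigned char link_addr[DEMO_MAX_LINKS][DEMO_ETH_ALEN];
};

static int demo_is_our_addr(const struct demo_vif *vif,
			    const unsigned char *addr, int *out_link_id)
{
	int link_id;

	if (!memcmp(vif->mld_addr, addr, DEMO_ETH_ALEN))
		return 1;	/* MLD address itself, no specific link */

	for (link_id = 0; link_id < DEMO_MAX_LINKS; link_id++) {
		if (!memcmp(vif->link_addr[link_id], addr, DEMO_ETH_ALEN)) {
			if (out_link_id)
				*out_link_id = link_id;
			return 1;
		}
	}
	return 0;
}

int main(void)
{
	struct demo_vif vif = {
		.mld_addr  = { 0x02, 0, 0, 0, 0, 0x10 },
		.link_addr = { { 0x02, 0, 0, 0, 0, 0x11 },
			       { 0x02, 0, 0, 0, 0, 0x12 } },
	};
	int link_id = -1;

	/* a frame addressed to the second link address resolves to link 1 */
	if (demo_is_our_addr(&vif, vif.link_addr[1], &link_id))
		printf("matched, link_id=%d\n", link_id);
	return 0;
}
#endif
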
/*
 * requires that rx->skb is a frame with ethernet header
 */
static bool ieee80211_frame_allowed(struct ieee80211_rx_data *rx, __le16 fc)
{
	static const u8 pae_group_addr[ETH_ALEN] __aligned(2)
		= { 0x01, 0x80, 0xC2, 0x00, 0x00, 0x03 };
	struct ethhdr *ehdr = (struct ethhdr *) rx->skb->data;

	/*
	 * Allow EAPOL frames to us/the PAE group address regardless of
	 * whether the frame was encrypted or not, and always disallow
	 * all other destination addresses for them.
	 */
	if (unlikely(ehdr->h_proto == rx->sdata->control_port_protocol))
		return ieee80211_is_our_addr(rx->sdata, ehdr->h_dest, NULL) ||
		       ether_addr_equal(ehdr->h_dest, pae_group_addr);

	if (ieee80211_802_1x_port_control(rx) ||
	    ieee80211_drop_unencrypted(rx, fc))
		return false;

	return true;
}
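
/*
 * Editor's illustrative sketch (not part of mac80211): per the comment in
 * ieee80211_frame_allowed() above, frames carrying the control port
 * protocol (normally EAPOL, ethertype 0x888e) are only accepted when
 * addressed to one of our own addresses or to the 802.1X PAE group address
 * 01:80:c2:00:00:03, regardless of encryption.  Standalone model below;
 * the helper names are hypothetical.
 */
#if 0
#include <stdio.h>
#include <string.h>

#define DEMO_ETH_ALEN 6

static const unsigned char demo_pae_group_addr[DEMO_ETH_ALEN] = {
	0x01, 0x80, 0xc2, 0x00, 0x00, 0x03
};

/* accept an EAPOL frame only for "us" or the PAE group address */
static int demo_eapol_dest_allowed(const unsigned char *dest,
				   const unsigned char *our_addr)
{
	return !memcmp(dest, our_addr, DEMO_ETH_ALEN) ||
	       !memcmp(dest, demo_pae_group_addr, DEMO_ETH_ALEN);
}

int main(void)
{
	const unsigned char us[DEMO_ETH_ALEN]    = { 0x02, 0, 0, 0, 0, 1 };
	const unsigned char other[DEMO_ETH_ALEN] = { 0x02, 0, 0, 0, 0, 2 };

	printf("to us: %d\n", demo_eapol_dest_allowed(us, us));	    /* 1 */
	printf("to PAE group: %d\n",
	       demo_eapol_dest_allowed(demo_pae_group_addr, us));	    /* 1 */
	printf("to another STA: %d\n", demo_eapol_dest_allowed(other, us)); /* 0 */
	return 0;
}
#endif
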
static void ieee80211_deliver_skb_to_local_stack(struct sk_buff *skb,
						 struct ieee80211_rx_data *rx)
{
	struct ieee80211_sub_if_data *sdata = rx->sdata;
	struct net_device *dev = sdata->dev;

	if (unlikely((skb->protocol == sdata->control_port_protocol ||
		      (skb->protocol == cpu_to_be16(ETH_P_PREAUTH) &&
		       !sdata->control_port_no_preauth)) &&
		     sdata->control_port_over_nl80211)) {
		struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);
		bool noencrypt = !(status->flag & RX_FLAG_DECRYPTED);

		cfg80211_rx_control_port(dev, skb, noencrypt, rx->link_id);
		dev_kfree_skb(skb);
	} else {
		struct ethhdr *ehdr = (void *)skb_mac_header(skb);

		memset(skb->cb, 0, sizeof(skb->cb));

		/*
		 * 802.1X over 802.11 requires that the authenticator address
		 * be used for EAPOL frames. However, 802.1X allows the use of
		 * the PAE group address instead. If the interface is part of
		 * a bridge and we pass the frame with the PAE group address,
		 * then the bridge will forward it to the network (even if the
		 * client was not associated yet), which isn't supposed to
		 * happen.
		 * To avoid that, rewrite the destination address to our own
		 * address, so that the authenticator (e.g. hostapd) will see
		 * the frame, but bridge won't forward it anywhere else. Note
		 * that due to earlier filtering, the only other address can
		 * be the PAE group address, unless the hardware allowed them
		 * through in 802.3 offloaded mode.
		 */
		if (unlikely(skb->protocol == sdata->control_port_protocol &&
			     !ether_addr_equal(ehdr->h_dest, sdata->vif.addr)))
			ether_addr_copy(ehdr->h_dest, sdata->vif.addr);

		/* deliver to local stack */
		if (rx->list)
			list_add_tail(&skb->list, rx->list);
		else
			netif_receive_skb(skb);
	}
}

/*
 * requires that rx->skb is a frame with ethernet header
 */
static void
ieee80211_deliver_skb(struct ieee80211_rx_data *rx)
{
	struct ieee80211_sub_if_data *sdata = rx->sdata;
	struct net_device *dev = sdata->dev;
	struct sk_buff *skb, *xmit_skb;
	struct ethhdr *ehdr = (struct ethhdr *) rx->skb->data;
	struct sta_info *dsta;

	skb = rx->skb;
	xmit_skb = NULL;

	dev_sw_netstats_rx_add(dev, skb->len);

	if (rx->sta) {
		/* The seqno index has the same property as needed
		 * for the rx_msdu field, i.e. it is IEEE80211_NUM_TIDS
		 * for non-QoS-data frames. Here we know it's a data
		 * frame, so count MSDUs.
		 */
		u64_stats_update_begin(&rx->link_sta->rx_stats.syncp);
		rx->link_sta->rx_stats.msdu[rx->seqno_idx]++;
		u64_stats_update_end(&rx->link_sta->rx_stats.syncp);
	}

	if ((sdata->vif.type == NL80211_IFTYPE_AP ||
	     sdata->vif.type == NL80211_IFTYPE_AP_VLAN) &&
	    !(sdata->flags & IEEE80211_SDATA_DONT_BRIDGE_PACKETS) &&
	    ehdr->h_proto != rx->sdata->control_port_protocol &&
	    (sdata->vif.type != NL80211_IFTYPE_AP_VLAN || !sdata->u.vlan.sta)) {
		if (is_multicast_ether_addr(ehdr->h_dest) &&
		    ieee80211_vif_get_num_mcast_if(sdata) != 0) {
			/*
			 * send multicast frames both to higher layers in
			 * local net stack and back to the wireless medium
			 */
			xmit_skb = skb_copy(skb, GFP_ATOMIC);
			if (!xmit_skb)
				net_info_ratelimited("%s: failed to clone multicast frame\n",
						     dev->name);
		} else if (!is_multicast_ether_addr(ehdr->h_dest) &&
			   !ether_addr_equal(ehdr->h_dest, ehdr->h_source)) {
			dsta = sta_info_get(sdata, ehdr->h_dest);
			if (dsta) {
				/*
				 * The destination station is associated to
				 * this AP (in this VLAN), so send the frame
				 * directly to it and do not pass it to local
				 * net stack.
				 */
				xmit_skb = skb;
				skb = NULL;
			}
		}
	}

#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
	if (skb) {
		/* 'align' will only take the values 0 or 2 here since all
		 * frames are required to be aligned to 2-byte boundaries
		 * when being passed to mac80211; the code here works just
		 * as well if that isn't true, but mac80211 assumes it can
		 * access fields as 2-byte aligned (e.g. for ether_addr_equal)
		 */
		int align;

		align = (unsigned long)(skb->data + sizeof(struct ethhdr)) & 3;
		if (align) {
			if (WARN_ON(skb_headroom(skb) < 3)) {
				dev_kfree_skb(skb);
				skb = NULL;
			} else {
				u8 *data = skb->data;
				size_t len = skb_headlen(skb);
				skb->data -= align;
				memmove(skb->data, data, len);
				skb_set_tail_pointer(skb, len);
			}
		}
	}
#endif

	if (skb) {
		skb->protocol = eth_type_trans(skb, dev);
		ieee80211_deliver_skb_to_local_stack(skb, rx);
	}

	if (xmit_skb) {
		/*
		 * Send to wireless media and increase priority by 256 to
		 * keep the received priority instead of reclassifying
		 * the frame (see cfg80211_classify8021d).
		 */
		xmit_skb->priority += 256;
		xmit_skb->protocol = htons(ETH_P_802_3);
		skb_reset_network_header(xmit_skb);
		skb_reset_mac_header(xmit_skb);
		dev_queue_xmit(xmit_skb);
	}
}
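
/*
 * Editor's illustrative sketch (not part of mac80211): for AP/AP_VLAN
 * interfaces with bridging enabled, ieee80211_deliver_skb() above makes a
 * three-way choice -- multicast frames are delivered up the stack *and*
 * copied back to the wireless medium, unicast frames whose destination is
 * an associated station are reflected back only, and everything else goes
 * to the local stack.  The standalone decision model below is hypothetical
 * and ignores the EAPOL and DONT_BRIDGE exceptions handled in the real code.
 */
#if 0
#include <stdio.h>

enum demo_action {
	DEMO_DELIVER_UP,		/* local stack only */
	DEMO_DELIVER_UP_AND_XMIT,	/* local stack + back to medium */
	DEMO_XMIT_ONLY,			/* back to the wireless medium only */
};

static enum demo_action demo_ap_bridge(int is_multicast, int dest_is_assoc_sta)
{
	if (is_multicast)
		return DEMO_DELIVER_UP_AND_XMIT;
	if (dest_is_assoc_sta)
		return DEMO_XMIT_ONLY;
	return DEMO_DELIVER_UP;
}

int main(void)
{
	printf("mcast        -> %d\n", demo_ap_bridge(1, 0)); /* up + xmit */
	printf("assoc sta    -> %d\n", demo_ap_bridge(0, 1)); /* xmit only */
	printf("off-BSS dest -> %d\n", demo_ap_bridge(0, 0)); /* up only   */
	return 0;
}
#endif
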
#ifdef CONFIG_MAC80211_MESH
static bool
ieee80211_rx_mesh_fast_forward(struct ieee80211_sub_if_data *sdata,
			       struct sk_buff *skb, int hdrlen)
{
	struct ieee80211_if_mesh *ifmsh = &sdata->u.mesh;
	struct ieee80211_mesh_fast_tx *entry = NULL;
	struct ieee80211s_hdr *mesh_hdr;
	struct tid_ampdu_tx *tid_tx;
	struct sta_info *sta;
	struct ethhdr eth;
	u8 tid;

	mesh_hdr = (struct ieee80211s_hdr *)(skb->data + sizeof(eth));
	if ((mesh_hdr->flags & MESH_FLAGS_AE) == MESH_FLAGS_AE_A5_A6)
		entry = mesh_fast_tx_get(sdata, mesh_hdr->eaddr1);
	else if (!(mesh_hdr->flags & MESH_FLAGS_AE))
		entry = mesh_fast_tx_get(sdata, skb->data);
	if (!entry)
		return false;

	sta = rcu_dereference(entry->mpath->next_hop);
	if (!sta)
		return false;

	if (skb_linearize(skb))
		return false;

	tid = skb->priority & IEEE80211_QOS_CTL_TAG1D_MASK;
	tid_tx = rcu_dereference(sta->ampdu_mlme.tid_tx[tid]);
	if (tid_tx) {
		if (!test_bit(HT_AGG_STATE_OPERATIONAL, &tid_tx->state))
			return false;

		if (tid_tx->timeout)
			tid_tx->last_tx = jiffies;
	}

	ieee80211_aggr_check(sdata, sta, skb);

	if (ieee80211_get_8023_tunnel_proto(skb->data + hdrlen,
					    &skb->protocol))
		hdrlen += ETH_ALEN;
	else
		skb->protocol = htons(skb->len - hdrlen);
	skb_set_network_header(skb, hdrlen + 2);

	skb->dev = sdata->dev;
	memcpy(&eth, skb->data, ETH_HLEN - 2);
	skb_pull(skb, 2);
	__ieee80211_xmit_fast(sdata, sta, &entry->fast_tx, skb, tid_tx,
			      eth.h_dest, eth.h_source);
	IEEE80211_IFSTA_MESH_CTR_INC(ifmsh, fwded_unicast);
	IEEE80211_IFSTA_MESH_CTR_INC(ifmsh, fwded_frames);

	return true;
}
#endif
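
/*
 * Editor's illustrative sketch (not part of mac80211): the fast-forward
 * path above keys its cache lookup on different addresses depending on the
 * mesh "address extension" (AE) flags -- the outer destination when no AE
 * field is present, eaddr1 (the proxied destination) for the A5/A6 mode,
 * and no fast path at all otherwise.  Standalone model below; the names
 * and the locally defined flag values are assumptions for illustration.
 */
#if 0
#include <stdio.h>

#define DEMO_FLAGS_AE		0x3	/* illustrative mask, see 802.11s */
#define DEMO_FLAGS_AE_A5_A6	0x2	/* illustrative value */

/* returns which address would key the fast-tx lookup, or NULL for none */
static const char *demo_fast_tx_key(unsigned char ae_flags)
{
	if ((ae_flags & DEMO_FLAGS_AE) == DEMO_FLAGS_AE_A5_A6)
		return "mesh_hdr->eaddr1 (proxied destination)";
	if (!(ae_flags & DEMO_FLAGS_AE))
		return "outer ethernet destination";
	return NULL;	/* e.g. A4 mode: take the slow forwarding path */
}

int main(void)
{
	printf("no AE:    %s\n", demo_fast_tx_key(0));
	printf("A5/A6 AE: %s\n", demo_fast_tx_key(DEMO_FLAGS_AE_A5_A6));
	printf("other AE: %s\n", demo_fast_tx_key(0x1) ? "fast" : "slow path");
	return 0;
}
#endif
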
static ieee80211_rx_result
ieee80211_rx_mesh_data(struct ieee80211_sub_if_data *sdata, struct sta_info *sta,
		       struct sk_buff *skb)
{
#ifdef CONFIG_MAC80211_MESH
	struct ieee80211_if_mesh *ifmsh = &sdata->u.mesh;
	struct ieee80211_local *local = sdata->local;
	uint16_t fc = IEEE80211_FTYPE_DATA | IEEE80211_STYPE_QOS_DATA;
	struct ieee80211_hdr hdr = {
		.frame_control = cpu_to_le16(fc)
	};
	struct ieee80211_hdr *fwd_hdr;
	struct ieee80211s_hdr *mesh_hdr;
	struct ieee80211_tx_info *info;
	struct sk_buff *fwd_skb;
	struct ethhdr *eth;
	bool multicast;
	int tailroom = 0;
	int hdrlen, mesh_hdrlen;
	u8 *qos;

	if (!ieee80211_vif_is_mesh(&sdata->vif))
		return RX_CONTINUE;

	if (!pskb_may_pull(skb, sizeof(*eth) + 6))
		return RX_DROP_MONITOR;

	mesh_hdr = (struct ieee80211s_hdr *)(skb->data + sizeof(*eth));
	mesh_hdrlen = ieee80211_get_mesh_hdrlen(mesh_hdr);

	if (!pskb_may_pull(skb, sizeof(*eth) + mesh_hdrlen))
		return RX_DROP_MONITOR;

	eth = (struct ethhdr *)skb->data;
	multicast = is_multicast_ether_addr(eth->h_dest);

	mesh_hdr = (struct ieee80211s_hdr *)(eth + 1);
	if (!mesh_hdr->ttl)
		return RX_DROP_MONITOR;

	/* frame is in RMC, don't forward */
	if (is_multicast_ether_addr(eth->h_dest) &&
	    mesh_rmc_check(sdata, eth->h_source, mesh_hdr))
		return RX_DROP_MONITOR;

	/* forward packet */
	if (sdata->crypto_tx_tailroom_needed_cnt)
		tailroom = IEEE80211_ENCRYPT_TAILROOM;

	if (mesh_hdr->flags & MESH_FLAGS_AE) {
		struct mesh_path *mppath;
		char *proxied_addr;
		bool update = false;

		if (multicast)
			proxied_addr = mesh_hdr->eaddr1;
		else if ((mesh_hdr->flags & MESH_FLAGS_AE) == MESH_FLAGS_AE_A5_A6)
			/* has_a4 already checked in ieee80211_rx_mesh_check */
			proxied_addr = mesh_hdr->eaddr2;
		else
			return RX_DROP_MONITOR;

		rcu_read_lock();
		mppath = mpp_path_lookup(sdata, proxied_addr);
		if (!mppath) {
			mpp_path_add(sdata, proxied_addr, eth->h_source);
		} else {
			spin_lock_bh(&mppath->state_lock);
			if (!ether_addr_equal(mppath->mpp, eth->h_source)) {
				memcpy(mppath->mpp, eth->h_source, ETH_ALEN);
				update = true;
			}
			mppath->exp_time = jiffies;
			spin_unlock_bh(&mppath->state_lock);
		}

		/* flush fast xmit cache if the address path changed */
		if (update)
			mesh_fast_tx_flush_addr(sdata, proxied_addr);

		rcu_read_unlock();
	}

	/* Frame has reached destination. Don't forward */
	if (ether_addr_equal(sdata->vif.addr, eth->h_dest))
		goto rx_accept;

	if (!--mesh_hdr->ttl) {
		if (multicast)
			goto rx_accept;

		IEEE80211_IFSTA_MESH_CTR_INC(ifmsh, dropped_frames_ttl);
		return RX_DROP_MONITOR;
	}

	if (!ifmsh->mshcfg.dot11MeshForwarding) {
		if (is_multicast_ether_addr(eth->h_dest))
			goto rx_accept;

		return RX_DROP_MONITOR;
	}

	skb_set_queue_mapping(skb, ieee802_1d_to_ac[skb->priority]);

	if (!multicast &&
	    ieee80211_rx_mesh_fast_forward(sdata, skb, mesh_hdrlen))
		return RX_QUEUED;

	ieee80211_fill_mesh_addresses(&hdr, &hdr.frame_control,
				      eth->h_dest, eth->h_source);
	hdrlen = ieee80211_hdrlen(hdr.frame_control);
	if (multicast) {
		int extra_head = sizeof(struct ieee80211_hdr) - sizeof(*eth);

		fwd_skb = skb_copy_expand(skb, local->tx_headroom + extra_head +
					       IEEE80211_ENCRYPT_HEADROOM,
					  tailroom, GFP_ATOMIC);
		if (!fwd_skb)
			goto rx_accept;
	} else {
		fwd_skb = skb;
		skb = NULL;

		if (skb_cow_head(fwd_skb, hdrlen - sizeof(struct ethhdr)))
			return RX_DROP_U_OOM;

		if (skb_linearize(fwd_skb))
			return RX_DROP_U_OOM;
	}

	fwd_hdr = skb_push(fwd_skb, hdrlen - sizeof(struct ethhdr));
	memcpy(fwd_hdr, &hdr, hdrlen - 2);
	qos = ieee80211_get_qos_ctl(fwd_hdr);
	qos[0] = qos[1] = 0;

	skb_reset_mac_header(fwd_skb);
	hdrlen += mesh_hdrlen;
	if (ieee80211_get_8023_tunnel_proto(fwd_skb->data + hdrlen,
					    &fwd_skb->protocol))
		hdrlen += ETH_ALEN;
	else
		fwd_skb->protocol = htons(fwd_skb->len - hdrlen);
	skb_set_network_header(fwd_skb, hdrlen + 2);

	info = IEEE80211_SKB_CB(fwd_skb);
	memset(info, 0, sizeof(*info));
	info->control.flags |= IEEE80211_TX_INTCFL_NEED_TXPROCESSING;
	info->control.vif = &sdata->vif;
	info->control.jiffies = jiffies;
	fwd_skb->dev = sdata->dev;
	if (multicast) {
		IEEE80211_IFSTA_MESH_CTR_INC(ifmsh, fwded_mcast);
		memcpy(fwd_hdr->addr2, sdata->vif.addr, ETH_ALEN);
		/* update power mode indication when forwarding */
		ieee80211_mps_set_frame_flags(sdata, NULL, fwd_hdr);
	} else if (!mesh_nexthop_lookup(sdata, fwd_skb)) {
		/* mesh power mode flags updated in mesh_nexthop_lookup */
		IEEE80211_IFSTA_MESH_CTR_INC(ifmsh, fwded_unicast);
	} else {
		/* unable to resolve next hop */
		if (sta)
			mesh_path_error_tx(sdata, ifmsh->mshcfg.element_ttl,
					   hdr.addr3, 0,
					   WLAN_REASON_MESH_PATH_NOFORWARD,
					   sta->sta.addr);
		IEEE80211_IFSTA_MESH_CTR_INC(ifmsh, dropped_frames_no_route);
		kfree_skb(fwd_skb);
		goto rx_accept;
	}

	IEEE80211_IFSTA_MESH_CTR_INC(ifmsh, fwded_frames);
|
|
|
|
ieee80211_add_pending_skb(local, fwd_skb);
|
|
|
|
|
|
|
|
rx_accept:
|
|
|
|
if (!skb)
|
|
|
|
return RX_QUEUED;
|
|
|
|
|
|
|
|
ieee80211_strip_8023_mesh_hdr(skb);
|
|
|
|
#endif
|
|
|
|
|
|
|
|
return RX_CONTINUE;
|
|
|
|
}
|
|
|
|
|
2008-06-30 13:10:45 +00:00
|
|
|
static ieee80211_rx_result debug_noinline
|
2018-02-27 12:03:07 +00:00
|
|
|
__ieee80211_rx_h_amsdu(struct ieee80211_rx_data *rx, u8 data_offset)
|
2007-11-26 14:14:33 +00:00
|
|
|
{
|
2009-11-16 12:58:20 +00:00
|
|
|
struct net_device *dev = rx->sdata->dev;
|
2009-12-01 02:18:37 +00:00
|
|
|
struct sk_buff *skb = rx->skb;
|
2008-07-16 01:44:13 +00:00
|
|
|
struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
|
|
|
|
__le16 fc = hdr->frame_control;
|
2009-12-01 02:18:37 +00:00
|
|
|
struct sk_buff_head frame_list;
|
2023-03-30 09:00:00 +00:00
|
|
|
ieee80211_rx_result res;
|
2016-10-05 13:29:49 +00:00
|
|
|
struct ethhdr ethhdr;
|
2016-10-05 14:42:06 +00:00
|
|
|
const u8 *check_da = ethhdr.h_dest, *check_sa = ethhdr.h_source;
|
2007-11-26 14:14:33 +00:00
|
|
|
|
2016-10-05 08:14:42 +00:00
|
|
|
if (unlikely(ieee80211_has_a4(hdr->frame_control))) {
|
2016-10-05 14:42:06 +00:00
|
|
|
check_da = NULL;
|
|
|
|
check_sa = NULL;
|
|
|
|
} else switch (rx->sdata->vif.type) {
|
|
|
|
case NL80211_IFTYPE_AP:
|
|
|
|
case NL80211_IFTYPE_AP_VLAN:
|
|
|
|
check_da = NULL;
|
|
|
|
break;
|
|
|
|
case NL80211_IFTYPE_STATION:
|
|
|
|
if (!rx->sta ||
|
|
|
|
!test_sta_flag(rx->sta, WLAN_STA_TDLS_PEER))
|
|
|
|
check_sa = NULL;
|
|
|
|
break;
|
|
|
|
case NL80211_IFTYPE_MESH_POINT:
|
|
|
|
check_sa = NULL;
|
2023-02-13 10:08:54 +00:00
|
|
|
check_da = NULL;
|
2016-10-05 14:42:06 +00:00
|
|
|
break;
|
|
|
|
default:
|
|
|
|
break;
|
2016-10-05 08:14:42 +00:00
|
|
|
}
|
2007-11-26 14:14:33 +00:00
|
|
|
|
2009-12-01 02:18:37 +00:00
|
|
|
skb->dev = dev;
|
|
|
|
__skb_queue_head_init(&frame_list);
|
2007-11-26 14:14:33 +00:00
|
|
|
|
2016-10-05 13:29:49 +00:00
|
|
|
if (ieee80211_data_to_8023_exthdr(skb, &ethhdr,
|
|
|
|
rx->sdata->vif.addr,
|
2018-02-27 12:03:07 +00:00
|
|
|
rx->sdata->vif.type,
|
2021-05-11 18:02:44 +00:00
|
|
|
data_offset, true))
|
2023-09-25 15:25:09 +00:00
|
|
|
return RX_DROP_U_BAD_AMSDU;
|
2016-10-05 13:29:49 +00:00
|
|
|
|
2023-03-30 09:00:01 +00:00
|
|
|
if (rx->sta->amsdu_mesh_control < 0) {
|
2023-03-14 09:59:56 +00:00
|
|
|
s8 valid = -1;
|
|
|
|
int i;
|
|
|
|
|
|
|
|
for (i = 0; i <= 2; i++) {
|
|
|
|
if (!ieee80211_is_valid_amsdu(skb, i))
|
|
|
|
continue;
|
|
|
|
|
|
|
|
if (valid >= 0) {
|
|
|
|
/* ambiguous */
|
|
|
|
valid = -1;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
valid = i;
|
|
|
|
}
|
2023-02-13 10:08:55 +00:00
|
|
|
|
2023-03-14 09:59:56 +00:00
|
|
|
rx->sta->amsdu_mesh_control = valid;
|
2023-02-13 10:08:55 +00:00
|
|
|
}
|
|
|
|
|
2009-12-01 02:18:37 +00:00
|
|
|
ieee80211_amsdu_to_8023s(skb, &frame_list, dev->dev_addr,
|
|
|
|
rx->sdata->vif.type,
|
2016-10-05 14:17:01 +00:00
|
|
|
rx->local->hw.extra_tx_headroom,
|
2023-02-13 10:08:55 +00:00
|
|
|
check_da, check_sa,
|
|
|
|
rx->sta->amsdu_mesh_control);
|
2007-11-26 14:14:33 +00:00
|
|
|
|
2009-12-01 02:18:37 +00:00
|
|
|
while (!skb_queue_empty(&frame_list)) {
|
|
|
|
rx->skb = __skb_dequeue(&frame_list);
|
2007-11-26 14:14:33 +00:00
|
|
|
|
2023-02-13 10:08:54 +00:00
|
|
|
res = ieee80211_rx_mesh_data(rx->sdata, rx->sta, rx->skb);
|
|
|
|
switch (res) {
|
|
|
|
case RX_QUEUED:
|
2007-12-19 00:31:22 +00:00
|
|
|
continue;
|
2023-02-13 10:08:54 +00:00
|
|
|
case RX_CONTINUE:
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
goto free;
|
2007-12-19 00:31:22 +00:00
|
|
|
}
|
2007-11-26 14:14:33 +00:00
|
|
|
|
2023-02-13 10:08:54 +00:00
|
|
|
if (!ieee80211_frame_allowed(rx, fc))
|
|
|
|
goto free;
|
|
|
|
|
2007-11-26 14:14:33 +00:00
|
|
|
ieee80211_deliver_skb(rx);
|
2023-02-13 10:08:54 +00:00
|
|
|
continue;
|
|
|
|
|
|
|
|
free:
|
|
|
|
dev_kfree_skb(rx->skb);
|
2007-11-26 14:14:33 +00:00
|
|
|
}
|
|
|
|
|
2008-01-31 18:48:20 +00:00
|
|
|
return RX_QUEUED;
|
2007-11-26 14:14:33 +00:00
|
|
|
}
|
|
|
|
|
2018-02-27 12:03:07 +00:00
|
|
|
static ieee80211_rx_result debug_noinline
|
|
|
|
ieee80211_rx_h_amsdu(struct ieee80211_rx_data *rx)
|
|
|
|
{
|
|
|
|
struct sk_buff *skb = rx->skb;
|
|
|
|
struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);
|
|
|
|
struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
|
|
|
|
__le16 fc = hdr->frame_control;
|
|
|
|
|
|
|
|
if (!(status->rx_flags & IEEE80211_RX_AMSDU))
|
|
|
|
return RX_CONTINUE;
|
|
|
|
|
|
|
|
if (unlikely(!ieee80211_is_data(fc)))
|
|
|
|
return RX_CONTINUE;
|
|
|
|
|
|
|
|
if (unlikely(!ieee80211_is_data_present(fc)))
|
|
|
|
return RX_DROP_MONITOR;
|
|
|
|
|
|
|
|
if (unlikely(ieee80211_has_a4(hdr->frame_control))) {
|
|
|
|
switch (rx->sdata->vif.type) {
|
|
|
|
case NL80211_IFTYPE_AP_VLAN:
|
|
|
|
if (!rx->sdata->u.vlan.sta)
|
2023-09-25 15:25:09 +00:00
|
|
|
return RX_DROP_U_BAD_4ADDR;
|
2018-02-27 12:03:07 +00:00
|
|
|
break;
|
|
|
|
case NL80211_IFTYPE_STATION:
|
|
|
|
if (!rx->sdata->u.mgd.use_4addr)
|
2023-09-25 15:25:09 +00:00
|
|
|
return RX_DROP_U_BAD_4ADDR;
|
2018-02-27 12:03:07 +00:00
|
|
|
break;
|
2023-02-13 10:08:54 +00:00
|
|
|
case NL80211_IFTYPE_MESH_POINT:
|
|
|
|
break;
|
2018-02-27 12:03:07 +00:00
|
|
|
default:
|
2023-09-25 15:25:09 +00:00
|
|
|
return RX_DROP_U_BAD_4ADDR;
|
2018-02-27 12:03:07 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2023-03-30 09:00:01 +00:00
|
|
|
if (is_multicast_ether_addr(hdr->addr1) || !rx->sta)
|
2023-09-25 15:25:09 +00:00
|
|
|
return RX_DROP_U_BAD_AMSDU;
|
2018-02-27 12:03:07 +00:00
|
|
|
|
2021-05-11 18:02:46 +00:00
|
|
|
if (rx->key) {
|
|
|
|
/*
|
|
|
|
* We should not receive A-MSDUs on pre-HT connections,
|
|
|
|
* and HT connections cannot use old ciphers. Thus drop
|
|
|
|
* them, as in those cases we couldn't even have SPP
|
|
|
|
* A-MSDUs or such.
|
|
|
|
*/
|
|
|
|
switch (rx->key->conf.cipher) {
|
|
|
|
case WLAN_CIPHER_SUITE_WEP40:
|
|
|
|
case WLAN_CIPHER_SUITE_WEP104:
|
|
|
|
case WLAN_CIPHER_SUITE_TKIP:
|
2023-09-25 15:25:09 +00:00
|
|
|
return RX_DROP_U_BAD_AMSDU_CIPHER;
|
2021-05-11 18:02:46 +00:00
|
|
|
default:
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2018-02-27 12:03:07 +00:00
|
|
|
return __ieee80211_rx_h_amsdu(rx, 0);
|
|
|
|
}
|
|
|
|
|
2008-06-30 13:10:45 +00:00
|
|
|
static ieee80211_rx_result debug_noinline
|
2008-02-25 15:27:43 +00:00
|
|
|
ieee80211_rx_h_data(struct ieee80211_rx_data *rx)
|
2007-11-22 17:49:12 +00:00
|
|
|
{
|
2009-11-16 12:58:20 +00:00
|
|
|
struct ieee80211_sub_if_data *sdata = rx->sdata;
|
2010-02-08 12:17:01 +00:00
|
|
|
struct ieee80211_local *local = rx->local;
|
2009-11-16 12:58:20 +00:00
|
|
|
struct net_device *dev = sdata->dev;
|
2008-07-16 01:44:13 +00:00
|
|
|
struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)rx->skb->data;
|
|
|
|
__le16 fc = hdr->frame_control;
|
2023-03-30 09:00:00 +00:00
|
|
|
ieee80211_rx_result res;
|
2011-04-12 17:15:22 +00:00
|
|
|
bool port_control;
|
2007-11-22 17:49:12 +00:00
|
|
|
|
2008-07-16 01:44:13 +00:00
|
|
|
if (unlikely(!ieee80211_is_data(hdr->frame_control)))
|
2008-01-31 18:48:20 +00:00
|
|
|
return RX_CONTINUE;
|
2007-11-22 17:49:12 +00:00
|
|
|
|
2008-07-16 01:44:13 +00:00
|
|
|
if (unlikely(!ieee80211_is_data_present(hdr->frame_control)))
|
2008-01-31 18:48:21 +00:00
|
|
|
return RX_DROP_MONITOR;
|
2007-11-22 17:49:12 +00:00
|
|
|
|
2009-11-10 19:10:05 +00:00
|
|
|
/*
|
2011-11-04 10:18:20 +00:00
|
|
|
* Send unexpected-4addr-frame event to hostapd. For older versions,
|
|
|
|
* also drop the frame to cooked monitor interfaces.
|
2009-11-10 19:10:05 +00:00
|
|
|
*/
|
|
|
|
if (ieee80211_has_a4(hdr->frame_control) &&
|
2011-11-04 10:18:20 +00:00
|
|
|
sdata->vif.type == NL80211_IFTYPE_AP) {
|
|
|
|
if (rx->sta &&
|
|
|
|
!test_and_set_sta_flag(rx->sta, WLAN_STA_4ADDR_EVENT))
|
|
|
|
cfg80211_rx_unexpected_4addr_frame(
|
|
|
|
rx->sdata->dev, rx->sta->sta.addr, GFP_ATOMIC);
|
2009-11-10 19:10:05 +00:00
|
|
|
return RX_DROP_MONITOR;
|
2011-11-04 10:18:20 +00:00
|
|
|
}
|
2009-11-10 19:10:05 +00:00
|
|
|
|
2023-09-25 15:25:11 +00:00
|
|
|
res = __ieee80211_data_to_8023(rx, &port_control);
|
|
|
|
if (unlikely(res != RX_CONTINUE))
|
|
|
|
return res;
|
2007-11-22 17:49:12 +00:00
|
|
|
|
2023-02-13 10:08:54 +00:00
|
|
|
res = ieee80211_rx_mesh_data(rx->sdata, rx->sta, rx->skb);
|
|
|
|
if (res != RX_CONTINUE)
|
|
|
|
return res;
|
|
|
|
|
2008-07-16 01:44:13 +00:00
|
|
|
if (!ieee80211_frame_allowed(rx, fc))
|
2008-01-31 18:48:21 +00:00
|
|
|
return RX_DROP_MONITOR;
|
2007-12-19 00:31:22 +00:00
|
|
|
|
2014-11-09 16:50:20 +00:00
|
|
|
/* directly handle TDLS channel switch requests/responses */
|
|
|
|
if (unlikely(((struct ethhdr *)rx->skb->data)->h_proto ==
|
|
|
|
cpu_to_be16(ETH_P_TDLS))) {
|
|
|
|
struct ieee80211_tdls_data *tf = (void *)rx->skb->data;
|
|
|
|
|
|
|
|
if (pskb_may_pull(rx->skb,
|
|
|
|
offsetof(struct ieee80211_tdls_data, u)) &&
|
|
|
|
tf->payload_type == WLAN_TDLS_SNAP_RFTYPE &&
|
|
|
|
tf->category == WLAN_CATEGORY_TDLS &&
|
|
|
|
(tf->action_code == WLAN_TDLS_CHANNEL_SWITCH_REQUEST ||
|
|
|
|
tf->action_code == WLAN_TDLS_CHANNEL_SWITCH_RESPONSE)) {
|
2021-05-17 21:07:56 +00:00
|
|
|
rx->skb->protocol = cpu_to_be16(ETH_P_TDLS);
|
2022-08-17 19:57:19 +00:00
|
|
|
__ieee80211_queue_skb_to_iface(sdata, rx->link_id,
|
|
|
|
rx->sta, rx->skb);
|
2014-11-09 16:50:20 +00:00
|
|
|
return RX_QUEUED;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2011-04-12 17:15:22 +00:00
|
|
|
if (rx->sdata->vif.type == NL80211_IFTYPE_AP_VLAN &&
|
|
|
|
unlikely(port_control) && sdata->bss) {
|
|
|
|
sdata = container_of(sdata->bss, struct ieee80211_sub_if_data,
|
|
|
|
u.ap);
|
|
|
|
dev = sdata->dev;
|
|
|
|
rx->sdata = sdata;
|
|
|
|
}
|
|
|
|
|
2007-11-22 17:49:12 +00:00
|
|
|
rx->skb->dev = dev;
|
|
|
|
|
2016-03-17 13:02:52 +00:00
|
|
|
if (!ieee80211_hw_check(&local->hw, SUPPORTS_DYNAMIC_PS) &&
|
|
|
|
local->ps_sdata && local->hw.conf.dynamic_ps_timeout > 0 &&
|
2011-02-02 17:27:53 +00:00
|
|
|
!is_multicast_ether_addr(
|
|
|
|
((struct ethhdr *)rx->skb->data)->h_dest) &&
|
|
|
|
(!local->scanning &&
|
2016-03-17 13:02:52 +00:00
|
|
|
!test_bit(SDATA_STATE_OFFCHANNEL, &sdata->state)))
|
|
|
|
mod_timer(&local->dynamic_ps_timer, jiffies +
|
|
|
|
msecs_to_jiffies(local->hw.conf.dynamic_ps_timeout));
|
2010-02-08 12:17:01 +00:00
|
|
|
|
2007-11-22 17:49:12 +00:00
|
|
|
ieee80211_deliver_skb(rx);
|
2007-07-27 13:43:22 +00:00
|
|
|
|
2008-01-31 18:48:20 +00:00
|
|
|
return RX_QUEUED;
|
2007-07-27 13:43:22 +00:00
|
|
|
}
|
|
|
|
|
2008-06-30 13:10:45 +00:00
|
|
|
static ieee80211_rx_result debug_noinline
|
2013-02-04 17:44:44 +00:00
|
|
|
ieee80211_rx_h_ctrl(struct ieee80211_rx_data *rx, struct sk_buff_head *frames)
|
2007-12-25 15:00:36 +00:00
|
|
|
{
|
|
|
|
struct sk_buff *skb = rx->skb;
|
2008-07-02 23:30:51 +00:00
|
|
|
struct ieee80211_bar *bar = (struct ieee80211_bar *)skb->data;
|
2007-12-25 15:00:36 +00:00
|
|
|
struct tid_ampdu_rx *tid_agg_rx;
|
|
|
|
u16 start_seq_num;
|
|
|
|
u16 tid;
|
|
|
|
|
2008-07-02 23:30:51 +00:00
|
|
|
if (likely(!ieee80211_is_ctl(bar->frame_control)))
|
2008-01-31 18:48:20 +00:00
|
|
|
return RX_CONTINUE;
|
2007-12-25 15:00:36 +00:00
|
|
|
|
2008-07-02 23:30:51 +00:00
|
|
|
if (ieee80211_is_back_req(bar->frame_control)) {
|
2010-05-30 12:52:58 +00:00
|
|
|
struct {
|
|
|
|
__le16 control, start_seq_num;
|
|
|
|
} __packed bar_data;
|
2015-04-20 19:53:37 +00:00
|
|
|
struct ieee80211_event event = {
|
|
|
|
.type = BAR_RX_EVENT,
|
|
|
|
};
|
2010-05-30 12:52:58 +00:00
|
|
|
|
2007-12-25 15:00:36 +00:00
|
|
|
if (!rx->sta)
|
2009-11-16 11:00:40 +00:00
|
|
|
return RX_DROP_MONITOR;
|
2010-05-30 12:52:58 +00:00
|
|
|
|
|
|
|
if (skb_copy_bits(skb, offsetof(struct ieee80211_bar, control),
|
|
|
|
&bar_data, sizeof(bar_data)))
|
|
|
|
return RX_DROP_MONITOR;
|
|
|
|
|
|
|
|
tid = le16_to_cpu(bar_data.control) >> 12;
|
2010-06-10 08:21:38 +00:00
|
|
|
|
2016-08-29 20:25:19 +00:00
|
|
|
if (!test_bit(tid, rx->sta->ampdu_mlme.agg_session_valid) &&
|
|
|
|
!test_and_set_bit(tid, rx->sta->ampdu_mlme.unexpected_agg))
|
|
|
|
ieee80211_send_delba(rx->sdata, rx->sta->sta.addr, tid,
|
|
|
|
WLAN_BACK_RECIPIENT,
|
|
|
|
WLAN_REASON_QSTA_REQUIRE_SETUP);
|
|
|
|
|
2010-06-10 08:21:38 +00:00
|
|
|
tid_agg_rx = rcu_dereference(rx->sta->ampdu_mlme.tid_rx[tid]);
|
|
|
|
if (!tid_agg_rx)
|
2009-11-16 11:00:40 +00:00
|
|
|
return RX_DROP_MONITOR;
|
2007-12-25 15:00:36 +00:00
|
|
|
|
2010-05-30 12:52:58 +00:00
|
|
|
start_seq_num = le16_to_cpu(bar_data.start_seq_num) >> 4;
|
2015-04-20 19:53:37 +00:00
|
|
|
event.u.ba.tid = tid;
|
|
|
|
event.u.ba.ssn = start_seq_num;
|
|
|
|
event.u.ba.sta = &rx->sta->sta;
|
2007-12-25 15:00:36 +00:00
|
|
|
|
|
|
|
/* reset session timer */
|
2009-02-10 20:25:45 +00:00
|
|
|
if (tid_agg_rx->timeout)
|
|
|
|
mod_timer(&tid_agg_rx->session_timer,
|
|
|
|
TU_TO_EXP_TIME(tid_agg_rx->timeout));
|
2007-12-25 15:00:36 +00:00
|
|
|
|
2010-11-29 10:09:16 +00:00
|
|
|
spin_lock(&tid_agg_rx->reorder_lock);
|
2009-11-16 11:00:40 +00:00
|
|
|
/* release stored frames up to start of BAR */
|
2012-06-22 10:48:38 +00:00
|
|
|
ieee80211_release_reorder_frames(rx->sdata, tid_agg_rx,
|
2013-02-04 17:44:44 +00:00
|
|
|
start_seq_num, frames);
|
2010-11-29 10:09:16 +00:00
|
|
|
spin_unlock(&tid_agg_rx->reorder_lock);
|
|
|
|
|
2015-04-20 19:53:37 +00:00
|
|
|
drv_event_callback(rx->local, rx->sdata, &event);
|
|
|
|
|
2009-11-16 11:00:40 +00:00
|
|
|
kfree_skb(skb);
|
|
|
|
return RX_QUEUED;
|
2007-12-25 15:00:36 +00:00
|
|
|
}
|
|
|
|
|
2010-05-30 12:53:43 +00:00
|
|
|
/*
|
|
|
|
* After this point, we only want management frames,
|
|
|
|
* so we can drop all remaining control frames to
|
|
|
|
* cooked monitor interfaces.
|
|
|
|
*/
|
|
|
|
return RX_DROP_MONITOR;
|
2007-12-25 15:00:36 +00:00
|
|
|
}
|
|
|
|
|
2009-01-10 09:46:53 +00:00
|
|
|
static void ieee80211_process_sa_query_req(struct ieee80211_sub_if_data *sdata,
|
|
|
|
struct ieee80211_mgmt *mgmt,
|
|
|
|
size_t len)
|
2009-01-08 11:32:06 +00:00
|
|
|
{
|
|
|
|
struct ieee80211_local *local = sdata->local;
|
|
|
|
struct sk_buff *skb;
|
|
|
|
struct ieee80211_mgmt *resp;
|
|
|
|
|
mac80211: Convert compare_ether_addr to ether_addr_equal
Use the new bool function ether_addr_equal to add
some clarity and reduce the likelihood of misuse
of compare_ether_addr for sorting.
Done via cocci script:
$ cat compare_ether_addr.cocci
@@
expression a,b;
@@
- !compare_ether_addr(a, b)
+ ether_addr_equal(a, b)
@@
expression a,b;
@@
- compare_ether_addr(a, b)
+ !ether_addr_equal(a, b)
@@
expression a,b;
@@
- !ether_addr_equal(a, b) == 0
+ ether_addr_equal(a, b)
@@
expression a,b;
@@
- !ether_addr_equal(a, b) != 0
+ !ether_addr_equal(a, b)
@@
expression a,b;
@@
- ether_addr_equal(a, b) == 0
+ !ether_addr_equal(a, b)
@@
expression a,b;
@@
- ether_addr_equal(a, b) != 0
+ ether_addr_equal(a, b)
@@
expression a,b;
@@
- !!ether_addr_equal(a, b)
+ ether_addr_equal(a, b)
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-05-08 18:56:52 +00:00
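As a concrete before/after for the cocci conversion above (a hypothetical
call site, not a line from this file; handle_frame() is a placeholder):
/* before: compare_ether_addr() is memcmp-style, 0 means "equal" */
if (compare_ether_addr(hdr->addr1, sdata->vif.addr) == 0)
	handle_frame();

/* after: ether_addr_equal() returns true when the addresses match */
if (ether_addr_equal(hdr->addr1, sdata->vif.addr))
	handle_frame();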
|
|
|
if (!ether_addr_equal(mgmt->da, sdata->vif.addr)) {
|
2009-01-08 11:32:06 +00:00
|
|
|
/* Not to own unicast address */
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2022-05-16 13:00:15 +00:00
|
|
|
if (!ether_addr_equal(mgmt->sa, sdata->deflink.u.mgd.bssid) ||
|
|
|
|
!ether_addr_equal(mgmt->bssid, sdata->deflink.u.mgd.bssid)) {
|
2009-07-07 01:45:17 +00:00
|
|
|
/* Not from the current AP or not associated yet. */
|
2009-01-08 11:32:06 +00:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (len < 24 + 1 + sizeof(resp->u.action.u.sa_query)) {
|
|
|
|
/* Too short SA Query request frame */
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
skb = dev_alloc_skb(sizeof(*resp) + local->hw.extra_tx_headroom);
|
|
|
|
if (skb == NULL)
|
|
|
|
return;
|
|
|
|
|
|
|
|
skb_reserve(skb, local->hw.extra_tx_headroom);
|
networking: convert many more places to skb_put_zero()
There were many places that my previous spatch didn't find,
as pointed out by yuan linyu in various patches.
The following spatch found many more and also removes the
now unnecessary casts:
@@
identifier p, p2;
expression len;
expression skb;
type t, t2;
@@
(
-p = skb_put(skb, len);
+p = skb_put_zero(skb, len);
|
-p = (t)skb_put(skb, len);
+p = skb_put_zero(skb, len);
)
... when != p
(
p2 = (t2)p;
-memset(p2, 0, len);
|
-memset(p, 0, len);
)
@@
type t, t2;
identifier p, p2;
expression skb;
@@
t *p;
...
(
-p = skb_put(skb, sizeof(t));
+p = skb_put_zero(skb, sizeof(t));
|
-p = (t *)skb_put(skb, sizeof(t));
+p = skb_put_zero(skb, sizeof(t));
)
... when != p
(
p2 = (t2)p;
-memset(p2, 0, sizeof(*p));
|
-memset(p, 0, sizeof(*p));
)
@@
expression skb, len;
@@
-memset(skb_put(skb, len), 0, len);
+skb_put_zero(skb, len);
Apply it to the tree (with one manual fixup to keep the
comment in vxlan.c, which spatch removed).
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-16 12:29:19 +00:00
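In plain C terms, the spatch above turns a put-plus-memset pair into a single
call; the SA Query response allocation just below is one of the converted
sites. Illustrative snippet:
/* before */
resp = skb_put(skb, 24);
memset(resp, 0, 24);

/* after */
resp = skb_put_zero(skb, 24);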
|
|
|
resp = skb_put_zero(skb, 24);
|
2009-01-08 11:32:06 +00:00
|
|
|
memcpy(resp->da, mgmt->sa, ETH_ALEN);
|
2009-11-25 16:46:19 +00:00
|
|
|
memcpy(resp->sa, sdata->vif.addr, ETH_ALEN);
|
2022-05-16 13:00:15 +00:00
|
|
|
memcpy(resp->bssid, sdata->deflink.u.mgd.bssid, ETH_ALEN);
|
2009-01-08 11:32:06 +00:00
|
|
|
resp->frame_control = cpu_to_le16(IEEE80211_FTYPE_MGMT |
|
|
|
|
IEEE80211_STYPE_ACTION);
|
|
|
|
skb_put(skb, 1 + sizeof(resp->u.action.u.sa_query));
|
|
|
|
resp->u.action.category = WLAN_CATEGORY_SA_QUERY;
|
|
|
|
resp->u.action.u.sa_query.action = WLAN_ACTION_SA_QUERY_RESPONSE;
|
|
|
|
memcpy(resp->u.action.u.sa_query.trans_id,
|
|
|
|
mgmt->u.action.u.sa_query.trans_id,
|
|
|
|
WLAN_SA_QUERY_TR_ID_LEN);
|
|
|
|
|
2009-11-18 17:42:05 +00:00
|
|
|
ieee80211_tx_skb(sdata, skb);
|
2009-01-08 11:32:06 +00:00
|
|
|
}
|
|
|
|
|
2022-03-25 10:42:41 +00:00
|
|
|
static void
|
|
|
|
ieee80211_rx_check_bss_color_collision(struct ieee80211_rx_data *rx)
|
|
|
|
{
|
|
|
|
struct ieee80211_mgmt *mgmt = (void *)rx->skb->data;
|
|
|
|
const struct element *ie;
|
|
|
|
size_t baselen;
|
|
|
|
|
|
|
|
if (!wiphy_ext_feature_isset(rx->local->hw.wiphy,
|
|
|
|
NL80211_EXT_FEATURE_BSS_COLOR))
|
|
|
|
return;
|
|
|
|
|
|
|
|
if (ieee80211_hw_check(&rx->local->hw, DETECTS_COLOR_COLLISION))
|
|
|
|
return;
|
|
|
|
|
wifi: mac80211: move some future per-link data to bss_conf
To add MLD, the bss_conf structure will later be reused for per-link
information, so move some things that are per-link into it now.
Most transformations were done with the following spatch:
@@
expression sdata;
identifier var = { chanctx_conf, mu_mimo_owner, csa_active, color_change_active, color_change_color };
@@
-sdata->vif.var
+sdata->vif.bss_conf.var
@@
struct ieee80211_vif *vif;
identifier var = { chanctx_conf, mu_mimo_owner, csa_active, color_change_active, color_change_color };
@@
-vif->var
+vif->bss_conf.var
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2022-05-10 11:26:44 +00:00
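As an illustrative instance of the spatch above (the csa_active check a few
lines below is one such converted site):
/* before */
if (sdata->vif.csa_active)
	return;

/* after */
if (sdata->vif.bss_conf.csa_active)
	return;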
|
|
|
if (rx->sdata->vif.bss_conf.csa_active)
|
2022-03-25 10:42:41 +00:00
|
|
|
return;
|
|
|
|
|
|
|
|
baselen = mgmt->u.beacon.variable - rx->skb->data;
|
|
|
|
if (baselen > rx->skb->len)
|
|
|
|
return;
|
|
|
|
|
|
|
|
ie = cfg80211_find_ext_elem(WLAN_EID_EXT_HE_OPERATION,
|
|
|
|
mgmt->u.beacon.variable,
|
|
|
|
rx->skb->len - baselen);
|
|
|
|
if (ie && ie->datalen >= sizeof(struct ieee80211_he_operation) &&
|
|
|
|
ie->datalen >= ieee80211_he_oper_size(ie->data + 1)) {
|
|
|
|
struct ieee80211_bss_conf *bss_conf = &rx->sdata->vif.bss_conf;
|
|
|
|
const struct ieee80211_he_operation *he_oper;
|
|
|
|
u8 color;
|
|
|
|
|
|
|
|
he_oper = (void *)(ie->data + 1);
|
|
|
|
if (le32_get_bits(he_oper->he_oper_params,
|
|
|
|
IEEE80211_HE_OPERATION_BSS_COLOR_DISABLED))
|
|
|
|
return;
|
|
|
|
|
|
|
|
color = le32_get_bits(he_oper->he_oper_params,
|
|
|
|
IEEE80211_HE_OPERATION_BSS_COLOR_MASK);
|
|
|
|
if (color == bss_conf->he_bss_color.color)
|
2023-01-19 13:57:11 +00:00
|
|
|
ieee80211_obss_color_collision_notify(&rx->sdata->vif,
|
|
|
|
BIT_ULL(color),
|
|
|
|
GFP_ATOMIC);
|
2022-03-25 10:42:41 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2010-08-12 13:38:38 +00:00
|
|
|
static ieee80211_rx_result debug_noinline
|
|
|
|
ieee80211_rx_h_mgmt_check(struct ieee80211_rx_data *rx)
|
|
|
|
{
|
|
|
|
struct ieee80211_mgmt *mgmt = (struct ieee80211_mgmt *) rx->skb->data;
|
2010-09-24 10:38:25 +00:00
|
|
|
struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(rx->skb);
|
2010-08-12 13:38:38 +00:00
|
|
|
|
2020-09-22 02:28:14 +00:00
|
|
|
if (ieee80211_is_s1g_beacon(mgmt->frame_control))
|
|
|
|
return RX_CONTINUE;
|
|
|
|
|
2010-08-12 13:38:38 +00:00
|
|
|
/*
|
|
|
|
* From here on, look only at management frames.
|
|
|
|
* Data and control frames are already handled,
|
|
|
|
* and unknown (reserved) frames are useless.
|
|
|
|
*/
|
|
|
|
if (rx->skb->len < 24)
|
|
|
|
return RX_DROP_MONITOR;
|
|
|
|
|
|
|
|
if (!ieee80211_is_mgmt(mgmt->frame_control))
|
|
|
|
return RX_DROP_MONITOR;
|
|
|
|
|
2023-06-19 13:26:46 +00:00
|
|
|
/* drop too small action frames */
|
|
|
|
if (ieee80211_is_action(mgmt->frame_control) &&
|
|
|
|
rx->skb->len < IEEE80211_MIN_ACTION_SIZE)
|
2023-09-25 15:25:09 +00:00
|
|
|
return RX_DROP_U_RUNT_ACTION;
|
2023-06-19 13:26:46 +00:00
|
|
|
|
2011-11-04 10:18:18 +00:00
|
|
|
if (rx->sdata->vif.type == NL80211_IFTYPE_AP &&
|
|
|
|
ieee80211_is_beacon(mgmt->frame_control) &&
|
|
|
|
!(rx->flags & IEEE80211_RX_BEACON_REPORTED)) {
|
2012-03-05 21:18:41 +00:00
|
|
|
int sig = 0;
|
|
|
|
|
2022-03-25 10:42:41 +00:00
|
|
|
/* sw bss color collision detection */
|
|
|
|
ieee80211_rx_check_bss_color_collision(rx);
|
|
|
|
|
2018-03-14 16:58:34 +00:00
|
|
|
if (ieee80211_hw_check(&rx->local->hw, SIGNAL_DBM) &&
|
|
|
|
!(status->flag & RX_FLAG_NO_SIGNAL_VAL))
|
2012-03-05 21:18:41 +00:00
|
|
|
sig = status->signal;
|
|
|
|
|
2020-04-30 17:25:50 +00:00
|
|
|
cfg80211_report_obss_beacon_khz(rx->local->hw.wiphy,
|
|
|
|
rx->skb->data, rx->skb->len,
|
|
|
|
ieee80211_rx_status_to_khz(status),
|
|
|
|
sig);
|
2011-11-04 10:18:18 +00:00
|
|
|
rx->flags |= IEEE80211_RX_BEACON_REPORTED;
|
|
|
|
}
|
|
|
|
|
2023-09-25 15:25:10 +00:00
|
|
|
return ieee80211_drop_unencrypted_mgmt(rx);
|
2010-08-12 13:38:38 +00:00
|
|
|
}
|
|
|
|
|
2021-08-23 18:02:39 +00:00
|
|
|
static bool
|
|
|
|
ieee80211_process_rx_twt_action(struct ieee80211_rx_data *rx)
|
|
|
|
{
|
|
|
|
struct ieee80211_mgmt *mgmt = (struct ieee80211_mgmt *)rx->skb->data;
|
|
|
|
struct ieee80211_sub_if_data *sdata = rx->sdata;
|
|
|
|
|
|
|
|
/* TWT actions are only supported in AP for the moment */
|
|
|
|
if (sdata->vif.type != NL80211_IFTYPE_AP)
|
|
|
|
return false;
|
|
|
|
|
|
|
|
if (!rx->local->ops->add_twt_setup)
|
|
|
|
return false;
|
|
|
|
|
2021-08-27 08:07:19 +00:00
|
|
|
if (!sdata->vif.bss_conf.twt_responder)
|
2021-08-23 18:02:39 +00:00
|
|
|
return false;
|
|
|
|
|
|
|
|
if (!rx->sta)
|
|
|
|
return false;
|
|
|
|
|
|
|
|
switch (mgmt->u.action.u.s1g.action_code) {
|
|
|
|
case WLAN_S1G_TWT_SETUP: {
|
|
|
|
struct ieee80211_twt_setup *twt;
|
|
|
|
|
|
|
|
if (rx->skb->len < IEEE80211_MIN_ACTION_SIZE +
|
|
|
|
1 + /* action code */
|
|
|
|
sizeof(struct ieee80211_twt_setup) +
|
|
|
|
2 /* TWT req_type agrt */)
|
|
|
|
break;
|
|
|
|
|
|
|
|
twt = (void *)mgmt->u.action.u.s1g.variable;
|
|
|
|
if (twt->element_id != WLAN_EID_S1G_TWT)
|
|
|
|
break;
|
|
|
|
|
|
|
|
if (rx->skb->len < IEEE80211_MIN_ACTION_SIZE +
|
|
|
|
4 + /* action code + token + tlv */
|
|
|
|
twt->length)
|
|
|
|
break;
|
|
|
|
|
|
|
|
return true; /* queue the frame */
|
|
|
|
}
|
|
|
|
case WLAN_S1G_TWT_TEARDOWN:
|
|
|
|
if (rx->skb->len < IEEE80211_MIN_ACTION_SIZE + 2)
|
|
|
|
break;
|
|
|
|
|
|
|
|
return true; /* queue the frame */
|
|
|
|
default:
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
|
2008-09-09 12:42:50 +00:00
|
|
|
static ieee80211_rx_result debug_noinline
|
|
|
|
ieee80211_rx_h_action(struct ieee80211_rx_data *rx)
|
|
|
|
{
|
|
|
|
struct ieee80211_local *local = rx->local;
|
2009-11-16 12:58:20 +00:00
|
|
|
struct ieee80211_sub_if_data *sdata = rx->sdata;
|
2008-09-09 12:42:50 +00:00
|
|
|
struct ieee80211_mgmt *mgmt = (struct ieee80211_mgmt *) rx->skb->data;
|
2010-09-24 10:38:25 +00:00
|
|
|
struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(rx->skb);
|
2008-09-09 12:42:50 +00:00
|
|
|
int len = rx->skb->len;
|
|
|
|
|
|
|
|
if (!ieee80211_is_action(mgmt->frame_control))
|
|
|
|
return RX_CONTINUE;
|
|
|
|
|
2012-12-05 23:04:26 +00:00
|
|
|
if (!rx->sta && mgmt->u.action.category != WLAN_CATEGORY_PUBLIC &&
|
2013-08-28 11:41:31 +00:00
|
|
|
mgmt->u.action.category != WLAN_CATEGORY_SELF_PROTECTED &&
|
|
|
|
mgmt->u.action.category != WLAN_CATEGORY_SPECTRUM_MGMT)
|
2023-09-25 15:25:09 +00:00
|
|
|
return RX_DROP_U_ACTION_UNKNOWN_SRC;
|
2008-09-09 12:42:50 +00:00
|
|
|
|
|
|
|
switch (mgmt->u.action.category) {
|
2011-12-16 14:28:57 +00:00
|
|
|
case WLAN_CATEGORY_HT:
|
|
|
|
/* reject HT action frames from stations not supporting HT */
|
2022-09-02 14:12:40 +00:00
|
|
|
if (!rx->link_sta->pub->ht_cap.ht_supported)
|
2011-12-16 14:28:57 +00:00
|
|
|
goto invalid;
|
|
|
|
|
|
|
|
if (sdata->vif.type != NL80211_IFTYPE_STATION &&
|
|
|
|
sdata->vif.type != NL80211_IFTYPE_MESH_POINT &&
|
|
|
|
sdata->vif.type != NL80211_IFTYPE_AP_VLAN &&
|
|
|
|
sdata->vif.type != NL80211_IFTYPE_AP &&
|
|
|
|
sdata->vif.type != NL80211_IFTYPE_ADHOC)
|
|
|
|
break;
|
|
|
|
|
2012-12-28 11:12:10 +00:00
|
|
|
/* verify action & smps_control/chanwidth are present */
|
2011-12-16 14:28:57 +00:00
|
|
|
if (len < IEEE80211_MIN_ACTION_SIZE + 2)
|
|
|
|
goto invalid;
|
|
|
|
|
|
|
|
switch (mgmt->u.action.u.ht_smps.action) {
|
|
|
|
case WLAN_HT_ACTION_SMPS: {
|
|
|
|
struct ieee80211_supported_band *sband;
|
2013-02-12 13:21:00 +00:00
|
|
|
enum ieee80211_smps_mode smps_mode;
|
2018-01-31 10:54:50 +00:00
|
|
|
struct sta_opmode_info sta_opmode = {};
|
2011-12-16 14:28:57 +00:00
|
|
|
|
2020-01-31 11:12:53 +00:00
|
|
|
if (sdata->vif.type != NL80211_IFTYPE_AP &&
|
|
|
|
sdata->vif.type != NL80211_IFTYPE_AP_VLAN)
|
|
|
|
goto handled;
|
|
|
|
|
2011-12-16 14:28:57 +00:00
|
|
|
/* convert to HT capability */
|
|
|
|
switch (mgmt->u.action.u.ht_smps.smps_control) {
|
|
|
|
case WLAN_HT_SMPS_CONTROL_DISABLED:
|
2013-02-12 13:21:00 +00:00
|
|
|
smps_mode = IEEE80211_SMPS_OFF;
|
2011-12-16 14:28:57 +00:00
|
|
|
break;
|
|
|
|
case WLAN_HT_SMPS_CONTROL_STATIC:
|
2013-02-12 13:21:00 +00:00
|
|
|
smps_mode = IEEE80211_SMPS_STATIC;
|
2011-12-16 14:28:57 +00:00
|
|
|
break;
|
|
|
|
case WLAN_HT_SMPS_CONTROL_DYNAMIC:
|
2013-02-12 13:21:00 +00:00
|
|
|
smps_mode = IEEE80211_SMPS_DYNAMIC;
|
2011-12-16 14:28:57 +00:00
|
|
|
break;
|
|
|
|
default:
|
|
|
|
goto invalid;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* if no change do nothing */
|
2022-09-02 14:12:41 +00:00
|
|
|
if (rx->link_sta->pub->smps_mode == smps_mode)
|
2011-12-16 14:28:57 +00:00
|
|
|
goto handled;
|
2022-09-02 14:12:41 +00:00
|
|
|
rx->link_sta->pub->smps_mode = smps_mode;
|
2018-03-27 13:46:16 +00:00
|
|
|
sta_opmode.smps_mode =
|
|
|
|
ieee80211_smps_mode_to_smps_mode(smps_mode);
|
2018-01-31 10:54:50 +00:00
|
|
|
sta_opmode.changed = STA_OPMODE_SMPS_MODE_CHANGED;
|
2011-12-16 14:28:57 +00:00
|
|
|
|
|
|
|
sband = rx->local->hw.wiphy->bands[status->band];
|
|
|
|
|
2022-05-30 16:35:23 +00:00
|
|
|
rate_control_rate_update(local, sband, rx->sta, 0,
|
2012-03-28 08:58:37 +00:00
|
|
|
IEEE80211_RC_SMPS_CHANGED);
|
2018-01-31 10:54:50 +00:00
|
|
|
cfg80211_sta_opmode_change_notify(sdata->dev,
|
|
|
|
rx->sta->addr,
|
|
|
|
&sta_opmode,
|
2018-10-23 03:24:44 +00:00
|
|
|
GFP_ATOMIC);
|
2011-12-16 14:28:57 +00:00
|
|
|
goto handled;
|
|
|
|
}
|
2012-12-28 11:12:10 +00:00
|
|
|
case WLAN_HT_ACTION_NOTIFY_CHANWIDTH: {
|
|
|
|
struct ieee80211_supported_band *sband;
|
|
|
|
u8 chanwidth = mgmt->u.action.u.ht_notify_cw.chanwidth;
|
2014-12-14 09:05:51 +00:00
|
|
|
enum ieee80211_sta_rx_bandwidth max_bw, new_bw;
|
2018-01-31 10:54:50 +00:00
|
|
|
struct sta_opmode_info sta_opmode = {};
|
2012-12-28 11:12:10 +00:00
|
|
|
|
|
|
|
/* If it doesn't support 40 MHz it can't change ... */
|
2022-09-02 14:12:40 +00:00
|
|
|
if (!(rx->link_sta->pub->ht_cap.cap &
|
2013-02-07 10:47:44 +00:00
|
|
|
IEEE80211_HT_CAP_SUP_WIDTH_20_40))
|
2012-12-28 11:12:10 +00:00
|
|
|
goto handled;
|
|
|
|
|
2013-02-07 10:47:44 +00:00
|
|
|
if (chanwidth == IEEE80211_HT_CHANWIDTH_20MHZ)
|
2014-12-14 09:05:51 +00:00
|
|
|
max_bw = IEEE80211_STA_RX_BW_20;
|
2013-02-07 10:47:44 +00:00
|
|
|
else
|
2022-09-02 14:12:40 +00:00
|
|
|
max_bw = ieee80211_sta_cap_rx_bw(rx->link_sta);
|
2014-12-14 09:05:51 +00:00
|
|
|
|
|
|
|
/* set cur_max_bandwidth and recalc sta bw */
|
2022-09-02 14:12:40 +00:00
|
|
|
rx->link_sta->cur_max_bandwidth = max_bw;
|
|
|
|
new_bw = ieee80211_sta_cur_vht_bw(rx->link_sta);
|
2012-12-28 11:12:10 +00:00
|
|
|
|
2022-09-02 14:12:40 +00:00
|
|
|
if (rx->link_sta->pub->bandwidth == new_bw)
|
2012-12-28 11:12:10 +00:00
|
|
|
goto handled;
|
|
|
|
|
2022-09-02 14:12:40 +00:00
|
|
|
rx->link_sta->pub->bandwidth = new_bw;
|
2012-12-28 11:12:10 +00:00
|
|
|
sband = rx->local->hw.wiphy->bands[status->band];
|
2018-03-27 13:46:17 +00:00
|
|
|
sta_opmode.bw =
|
2022-09-02 14:12:40 +00:00
|
|
|
ieee80211_sta_rx_bw_to_chan_width(rx->link_sta);
|
2018-01-31 10:54:50 +00:00
|
|
|
sta_opmode.changed = STA_OPMODE_MAX_BW_CHANGED;
|
2012-12-28 11:12:10 +00:00
|
|
|
|
2022-05-30 16:35:23 +00:00
|
|
|
rate_control_rate_update(local, sband, rx->sta, 0,
|
2012-12-28 11:12:10 +00:00
|
|
|
IEEE80211_RC_BW_CHANGED);
|
2018-01-31 10:54:50 +00:00
|
|
|
cfg80211_sta_opmode_change_notify(sdata->dev,
|
|
|
|
rx->sta->addr,
|
|
|
|
&sta_opmode,
|
2018-10-23 03:24:44 +00:00
|
|
|
GFP_ATOMIC);
|
2012-12-28 11:12:10 +00:00
|
|
|
goto handled;
|
|
|
|
}
|
2011-12-16 14:28:57 +00:00
|
|
|
default:
|
|
|
|
goto invalid;
|
|
|
|
}
|
|
|
|
|
2012-12-27 17:55:36 +00:00
|
|
|
break;
|
2013-03-26 14:17:18 +00:00
|
|
|
case WLAN_CATEGORY_PUBLIC:
|
|
|
|
if (len < IEEE80211_MIN_ACTION_SIZE + 1)
|
|
|
|
goto invalid;
|
|
|
|
if (sdata->vif.type != NL80211_IFTYPE_STATION)
|
|
|
|
break;
|
|
|
|
if (!rx->sta)
|
|
|
|
break;
|
2022-05-16 13:00:15 +00:00
|
|
|
if (!ether_addr_equal(mgmt->bssid, sdata->deflink.u.mgd.bssid))
|
2013-03-26 14:17:18 +00:00
|
|
|
break;
|
|
|
|
if (mgmt->u.action.u.ext_chan_switch.action_code !=
|
|
|
|
WLAN_PUB_ACTION_EXT_CHANSW_ANN)
|
|
|
|
break;
|
|
|
|
if (len < offsetof(struct ieee80211_mgmt,
|
|
|
|
u.action.u.ext_chan_switch.variable))
|
|
|
|
goto invalid;
|
|
|
|
goto queue;
|
2012-12-27 17:55:36 +00:00
|
|
|
case WLAN_CATEGORY_VHT:
|
|
|
|
if (sdata->vif.type != NL80211_IFTYPE_STATION &&
|
|
|
|
sdata->vif.type != NL80211_IFTYPE_MESH_POINT &&
|
|
|
|
sdata->vif.type != NL80211_IFTYPE_AP_VLAN &&
|
|
|
|
sdata->vif.type != NL80211_IFTYPE_AP &&
|
|
|
|
sdata->vif.type != NL80211_IFTYPE_ADHOC)
|
|
|
|
break;
|
|
|
|
|
|
|
|
/* verify action code is present */
|
|
|
|
if (len < IEEE80211_MIN_ACTION_SIZE + 1)
|
|
|
|
goto invalid;
|
|
|
|
|
|
|
|
switch (mgmt->u.action.u.vht_opmode_notif.action_code) {
|
|
|
|
case WLAN_VHT_ACTION_OPMODE_NOTIF: {
|
|
|
|
/* verify opmode is present */
|
|
|
|
if (len < IEEE80211_MIN_ACTION_SIZE + 2)
|
|
|
|
goto invalid;
|
2016-10-20 06:52:50 +00:00
|
|
|
goto queue;
|
2012-12-27 17:55:36 +00:00
|
|
|
}
|
2015-12-08 14:04:31 +00:00
|
|
|
case WLAN_VHT_ACTION_GROUPID_MGMT: {
|
|
|
|
if (len < IEEE80211_MIN_ACTION_SIZE + 25)
|
|
|
|
goto invalid;
|
|
|
|
goto queue;
|
|
|
|
}
|
2012-12-27 17:55:36 +00:00
|
|
|
default:
|
|
|
|
break;
|
|
|
|
}
|
2011-12-16 14:28:57 +00:00
|
|
|
break;
|
2008-09-09 12:42:50 +00:00
|
|
|
case WLAN_CATEGORY_BACK:
|
2009-02-10 20:25:47 +00:00
|
|
|
if (sdata->vif.type != NL80211_IFTYPE_STATION &&
|
2011-10-26 21:47:29 +00:00
|
|
|
sdata->vif.type != NL80211_IFTYPE_MESH_POINT &&
|
2009-02-10 20:25:47 +00:00
|
|
|
sdata->vif.type != NL80211_IFTYPE_AP_VLAN &&
|
2011-11-30 15:56:34 +00:00
|
|
|
sdata->vif.type != NL80211_IFTYPE_AP &&
|
|
|
|
sdata->vif.type != NL80211_IFTYPE_ADHOC)
|
2010-02-15 10:46:39 +00:00
|
|
|
break;
|
2009-02-10 20:25:47 +00:00
|
|
|
|
2010-02-15 10:53:10 +00:00
|
|
|
/* verify action_code is present */
|
|
|
|
if (len < IEEE80211_MIN_ACTION_SIZE + 1)
|
|
|
|
break;
|
|
|
|
|
2008-09-09 12:42:50 +00:00
|
|
|
switch (mgmt->u.action.u.addba_req.action_code) {
|
|
|
|
case WLAN_ACTION_ADDBA_REQ:
|
|
|
|
if (len < (IEEE80211_MIN_ACTION_SIZE +
|
|
|
|
sizeof(mgmt->u.action.u.addba_req)))
|
2010-06-10 08:21:35 +00:00
|
|
|
goto invalid;
|
|
|
|
break;
|
2008-09-09 12:42:50 +00:00
|
|
|
case WLAN_ACTION_ADDBA_RESP:
|
|
|
|
if (len < (IEEE80211_MIN_ACTION_SIZE +
|
|
|
|
sizeof(mgmt->u.action.u.addba_resp)))
|
2010-06-10 08:21:35 +00:00
|
|
|
goto invalid;
|
|
|
|
break;
|
2008-09-09 12:42:50 +00:00
|
|
|
case WLAN_ACTION_DELBA:
|
|
|
|
if (len < (IEEE80211_MIN_ACTION_SIZE +
|
|
|
|
sizeof(mgmt->u.action.u.delba)))
|
2010-06-10 08:21:35 +00:00
|
|
|
goto invalid;
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
goto invalid;
|
2008-09-09 12:42:50 +00:00
|
|
|
}
|
2010-06-10 08:21:35 +00:00
|
|
|
|
2010-06-10 08:21:51 +00:00
|
|
|
goto queue;
|
2008-09-09 12:49:03 +00:00
|
|
|
case WLAN_CATEGORY_SPECTRUM_MGMT:
|
2010-02-15 10:53:10 +00:00
|
|
|
/* verify action_code is present */
|
|
|
|
if (len < IEEE80211_MIN_ACTION_SIZE + 1)
|
|
|
|
break;
|
|
|
|
|
2008-09-09 12:49:03 +00:00
|
|
|
switch (mgmt->u.action.u.measurement.action_code) {
|
|
|
|
case WLAN_ACTION_SPCT_MSR_REQ:
|
2016-04-12 13:56:15 +00:00
|
|
|
if (status->band != NL80211_BAND_5GHZ)
|
2013-08-28 11:41:31 +00:00
|
|
|
break;
|
|
|
|
|
2008-09-09 12:49:03 +00:00
|
|
|
if (len < (IEEE80211_MIN_ACTION_SIZE +
|
|
|
|
sizeof(mgmt->u.action.u.measurement)))
|
2010-02-15 10:46:39 +00:00
|
|
|
break;
|
2013-08-28 11:41:31 +00:00
|
|
|
|
|
|
|
if (sdata->vif.type != NL80211_IFTYPE_STATION)
|
|
|
|
break;
|
|
|
|
|
2008-09-09 12:49:03 +00:00
|
|
|
ieee80211_process_measurement_req(sdata, mgmt, len);
|
2010-02-15 10:46:39 +00:00
|
|
|
goto handled;
|
2013-08-28 11:41:31 +00:00
|
|
|
case WLAN_ACTION_SPCT_CHL_SWITCH: {
|
|
|
|
u8 *bssid;
|
|
|
|
if (len < (IEEE80211_MIN_ACTION_SIZE +
|
|
|
|
sizeof(mgmt->u.action.u.chan_switch)))
|
2010-02-15 10:46:39 +00:00
|
|
|
break;
|
2009-05-15 09:52:31 +00:00
|
|
|
|
2013-08-28 11:41:31 +00:00
|
|
|
if (sdata->vif.type != NL80211_IFTYPE_STATION &&
|
2013-10-17 22:55:02 +00:00
|
|
|
sdata->vif.type != NL80211_IFTYPE_ADHOC &&
|
|
|
|
sdata->vif.type != NL80211_IFTYPE_MESH_POINT)
|
2013-08-28 11:41:31 +00:00
|
|
|
break;
|
|
|
|
|
|
|
|
if (sdata->vif.type == NL80211_IFTYPE_STATION)
|
2022-05-16 13:00:15 +00:00
|
|
|
bssid = sdata->deflink.u.mgd.bssid;
|
2013-08-28 11:41:31 +00:00
|
|
|
else if (sdata->vif.type == NL80211_IFTYPE_ADHOC)
|
|
|
|
bssid = sdata->u.ibss.bssid;
|
2013-10-17 22:55:02 +00:00
|
|
|
else if (sdata->vif.type == NL80211_IFTYPE_MESH_POINT)
|
|
|
|
bssid = mgmt->sa;
|
2013-08-28 11:41:31 +00:00
|
|
|
else
|
|
|
|
break;
|
|
|
|
|
|
|
|
if (!ether_addr_equal(mgmt->bssid, bssid))
|
2010-02-15 10:46:39 +00:00
|
|
|
break;
|
2009-01-06 03:58:37 +00:00
|
|
|
|
2010-06-10 08:21:51 +00:00
|
|
|
goto queue;
|
2013-08-28 11:41:31 +00:00
|
|
|
}
|
2008-09-09 12:49:03 +00:00
|
|
|
}
|
|
|
|
break;
|
2011-08-13 03:01:00 +00:00
|
|
|
case WLAN_CATEGORY_SELF_PROTECTED:
|
2012-10-25 22:36:40 +00:00
|
|
|
if (len < (IEEE80211_MIN_ACTION_SIZE +
|
|
|
|
sizeof(mgmt->u.action.u.self_prot.action_code)))
|
|
|
|
break;
|
|
|
|
|
2011-08-13 03:01:00 +00:00
|
|
|
switch (mgmt->u.action.u.self_prot.action_code) {
|
|
|
|
case WLAN_SP_MESH_PEERING_OPEN:
|
|
|
|
case WLAN_SP_MESH_PEERING_CLOSE:
|
|
|
|
case WLAN_SP_MESH_PEERING_CONFIRM:
|
|
|
|
if (!ieee80211_vif_is_mesh(&sdata->vif))
|
|
|
|
goto invalid;
|
2013-03-04 21:06:12 +00:00
|
|
|
if (sdata->u.mesh.user_mpm)
|
2011-08-13 03:01:00 +00:00
|
|
|
/* userspace handles this frame */
|
|
|
|
break;
|
|
|
|
goto queue;
|
|
|
|
case WLAN_SP_MGK_INFORM:
|
|
|
|
case WLAN_SP_MGK_ACK:
|
|
|
|
if (!ieee80211_vif_is_mesh(&sdata->vif))
|
|
|
|
goto invalid;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
break;
|
2011-05-03 23:57:09 +00:00
|
|
|
case WLAN_CATEGORY_MESH_ACTION:
|
2012-10-25 22:36:40 +00:00
|
|
|
if (len < (IEEE80211_MIN_ACTION_SIZE +
|
|
|
|
sizeof(mgmt->u.action.u.mesh_action.action_code)))
|
|
|
|
break;
|
|
|
|
|
2010-06-10 08:21:34 +00:00
|
|
|
if (!ieee80211_vif_is_mesh(&sdata->vif))
|
|
|
|
break;
|
2011-08-12 02:35:15 +00:00
|
|
|
if (mesh_action_is_path_sel(mgmt) &&
|
2012-10-25 22:09:11 +00:00
|
|
|
!mesh_path_sel_is_hwmp(sdata))
|
2010-12-17 01:37:50 +00:00
|
|
|
break;
|
|
|
|
goto queue;
|
2021-08-23 18:02:39 +00:00
|
|
|
case WLAN_CATEGORY_S1G:
|
2023-08-15 15:51:05 +00:00
|
|
|
if (len < offsetofend(typeof(*mgmt),
|
|
|
|
u.action.u.s1g.action_code))
|
|
|
|
break;
|
|
|
|
|
2021-08-23 18:02:39 +00:00
|
|
|
switch (mgmt->u.action.u.s1g.action_code) {
|
|
|
|
case WLAN_S1G_TWT_SETUP:
|
|
|
|
case WLAN_S1G_TWT_TEARDOWN:
|
|
|
|
if (ieee80211_process_rx_twt_action(rx))
|
|
|
|
goto queue;
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
break;
|
2010-02-15 10:46:39 +00:00
|
|
|
}
|
2010-02-15 10:53:10 +00:00
|
|
|
|
2010-08-12 13:38:38 +00:00
|
|
|
return RX_CONTINUE;
|
|
|
|
|
2010-06-10 08:21:35 +00:00
|
|
|
invalid:
|
2010-09-24 10:38:25 +00:00
|
|
|
status->rx_flags |= IEEE80211_RX_MALFORMED_ACTION_FRM;
|
2010-08-12 13:38:38 +00:00
|
|
|
/* will return in the next handlers */
|
|
|
|
return RX_CONTINUE;
|
|
|
|
|
|
|
|
handled:
|
|
|
|
if (rx->sta)
|
2022-09-02 14:12:40 +00:00
|
|
|
rx->link_sta->rx_stats.packets++;
|
2010-08-12 13:38:38 +00:00
|
|
|
dev_kfree_skb(rx->skb);
|
|
|
|
return RX_QUEUED;
|
|
|
|
|
|
|
|
queue:
|
2022-08-17 19:57:19 +00:00
|
|
|
ieee80211_queue_skb_to_iface(sdata, rx->link_id, rx->sta, rx->skb);
|
2010-08-12 13:38:38 +00:00
|
|
|
return RX_QUEUED;
|
|
|
|
}
|
|
|
|
|
|
|
|
static ieee80211_rx_result debug_noinline
|
|
|
|
ieee80211_rx_h_userspace_mgmt(struct ieee80211_rx_data *rx)
|
|
|
|
{
|
2010-09-24 10:38:25 +00:00
|
|
|
struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(rx->skb);
|
2022-01-26 09:15:31 +00:00
|
|
|
struct cfg80211_rx_info info = {
|
|
|
|
.freq = ieee80211_rx_status_to_khz(status),
|
|
|
|
.buf = rx->skb->data,
|
2022-07-18 08:42:19 +00:00
|
|
|
.len = rx->skb->len,
|
|
|
|
.link_id = rx->link_id,
|
|
|
|
.have_link_id = rx->link_id >= 0,
|
2022-01-26 09:15:31 +00:00
|
|
|
};
|
2010-08-12 13:38:38 +00:00
|
|
|
|
|
|
|
/* skip known-bad action frames and return them in the next handler */
|
2010-09-24 10:38:25 +00:00
|
|
|
if (status->rx_flags & IEEE80211_RX_MALFORMED_ACTION_FRM)
|
2010-08-12 13:38:38 +00:00
|
|
|
return RX_CONTINUE;
|
2010-01-07 19:23:53 +00:00
|
|
|
|
2010-02-15 10:53:10 +00:00
|
|
|
/*
|
|
|
|
* Getting here means the kernel doesn't know how to handle
|
|
|
|
* it, but maybe userspace does ... include returned frames
|
|
|
|
* so userspace can register for those to know whether ones
|
|
|
|
* it transmitted were processed or returned.
|
|
|
|
*/
|
|
|
|
|
2018-03-14 16:58:34 +00:00
|
|
|
if (ieee80211_hw_check(&rx->local->hw, SIGNAL_DBM) &&
|
|
|
|
!(status->flag & RX_FLAG_NO_SIGNAL_VAL))
|
2022-01-26 09:15:31 +00:00
|
|
|
info.sig_dbm = status->signal;
|
|
|
|
|
|
|
|
if (ieee80211_is_timing_measurement(rx->skb) ||
|
|
|
|
ieee80211_is_ftm(rx->skb)) {
|
|
|
|
info.rx_tstamp = ktime_to_ns(skb_hwtstamps(rx->skb)->hwtstamp);
|
|
|
|
info.ack_tstamp = ktime_to_ns(status->ack_tx_hwtstamp);
|
|
|
|
}
|
2012-03-05 21:18:41 +00:00
|
|
|
|
2022-01-26 09:15:31 +00:00
|
|
|
if (cfg80211_rx_mgmt_ext(&rx->sdata->wdev, &info)) {
|
2010-08-12 13:38:38 +00:00
|
|
|
if (rx->sta)
|
2022-09-02 14:12:40 +00:00
|
|
|
rx->link_sta->rx_stats.packets++;
|
2010-08-12 13:38:38 +00:00
|
|
|
dev_kfree_skb(rx->skb);
|
|
|
|
return RX_QUEUED;
|
|
|
|
}
|
|
|
|
|
|
|
|
return RX_CONTINUE;
|
|
|
|
}
|
|
|
|
|
2020-05-26 08:31:33 +00:00
|
|
|
static ieee80211_rx_result debug_noinline
|
|
|
|
ieee80211_rx_h_action_post_userspace(struct ieee80211_rx_data *rx)
|
|
|
|
{
|
|
|
|
struct ieee80211_sub_if_data *sdata = rx->sdata;
|
|
|
|
struct ieee80211_mgmt *mgmt = (struct ieee80211_mgmt *) rx->skb->data;
|
|
|
|
int len = rx->skb->len;
|
|
|
|
|
|
|
|
if (!ieee80211_is_action(mgmt->frame_control))
|
|
|
|
return RX_CONTINUE;
|
|
|
|
|
|
|
|
switch (mgmt->u.action.category) {
|
|
|
|
case WLAN_CATEGORY_SA_QUERY:
|
|
|
|
if (len < (IEEE80211_MIN_ACTION_SIZE +
|
|
|
|
sizeof(mgmt->u.action.u.sa_query)))
|
|
|
|
break;
|
|
|
|
|
|
|
|
switch (mgmt->u.action.u.sa_query.action) {
|
|
|
|
case WLAN_ACTION_SA_QUERY_REQUEST:
|
|
|
|
if (sdata->vif.type != NL80211_IFTYPE_STATION)
|
|
|
|
break;
|
|
|
|
ieee80211_process_sa_query_req(sdata, mgmt, len);
|
|
|
|
goto handled;
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
return RX_CONTINUE;
|
|
|
|
|
|
|
|
handled:
|
|
|
|
if (rx->sta)
|
2022-09-02 14:12:40 +00:00
|
|
|
rx->link_sta->rx_stats.packets++;
|
2020-05-26 08:31:33 +00:00
|
|
|
dev_kfree_skb(rx->skb);
|
|
|
|
return RX_QUEUED;
|
|
|
|
}
|
|
|
|
|
2010-08-12 13:38:38 +00:00
|
|
|
static ieee80211_rx_result debug_noinline
|
|
|
|
ieee80211_rx_h_action_return(struct ieee80211_rx_data *rx)
|
|
|
|
{
|
|
|
|
struct ieee80211_local *local = rx->local;
|
|
|
|
struct ieee80211_mgmt *mgmt = (struct ieee80211_mgmt *) rx->skb->data;
|
|
|
|
struct sk_buff *nskb;
|
|
|
|
struct ieee80211_sub_if_data *sdata = rx->sdata;
|
2010-09-24 10:38:25 +00:00
|
|
|
struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(rx->skb);
|
2010-08-12 13:38:38 +00:00
|
|
|
|
|
|
|
if (!ieee80211_is_action(mgmt->frame_control))
|
|
|
|
return RX_CONTINUE;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* For AP mode, hostapd is responsible for handling any action
|
|
|
|
* frames that we didn't handle, including returning unknown
|
|
|
|
* ones. For all other modes we will return them to the sender,
|
|
|
|
* setting the 0x80 bit in the action category, as required by
|
2012-06-27 13:38:56 +00:00
|
|
|
* 802.11-2012 9.24.4.
|
2010-08-12 13:38:38 +00:00
|
|
|
* Newer versions of hostapd shall also use the management frame
|
|
|
|
* registration mechanisms, but older ones still use cooked
|
|
|
|
* monitor interfaces so push all frames there.
|
|
|
|
*/
|
2010-09-24 10:38:25 +00:00
|
|
|
if (!(status->rx_flags & IEEE80211_RX_MALFORMED_ACTION_FRM) &&
|
2010-08-12 13:38:38 +00:00
|
|
|
(sdata->vif.type == NL80211_IFTYPE_AP ||
|
|
|
|
sdata->vif.type == NL80211_IFTYPE_AP_VLAN))
|
|
|
|
return RX_DROP_MONITOR;
|
2010-02-15 10:53:10 +00:00
|
|
|
|
2012-06-27 13:38:56 +00:00
|
|
|
if (is_multicast_ether_addr(mgmt->da))
|
|
|
|
return RX_DROP_MONITOR;
|
|
|
|
|
2010-02-15 10:46:39 +00:00
|
|
|
/* do not return rejected action frames */
|
|
|
|
if (mgmt->u.action.category & 0x80)
|
2023-09-25 15:25:09 +00:00
|
|
|
return RX_DROP_U_REJECTED_ACTION_RESPONSE;
|
2010-02-15 10:46:39 +00:00
|
|
|
|
|
|
|
nskb = skb_copy_expand(rx->skb, local->hw.extra_tx_headroom, 0,
|
|
|
|
GFP_ATOMIC);
|
|
|
|
if (nskb) {
|
2010-06-24 15:13:56 +00:00
|
|
|
struct ieee80211_mgmt *nmgmt = (void *)nskb->data;
|
2010-02-15 10:46:39 +00:00
|
|
|
|
2010-06-24 15:13:56 +00:00
|
|
|
nmgmt->u.action.category |= 0x80;
|
|
|
|
memcpy(nmgmt->da, nmgmt->sa, ETH_ALEN);
|
|
|
|
memcpy(nmgmt->sa, rx->sdata->vif.addr, ETH_ALEN);
|
2010-02-15 10:46:39 +00:00
|
|
|
|
|
|
|
memset(nskb->cb, 0, sizeof(nskb->cb));
|
|
|
|
|
2013-03-07 12:22:05 +00:00
|
|
|
if (rx->sdata->vif.type == NL80211_IFTYPE_P2P_DEVICE) {
|
|
|
|
struct ieee80211_tx_info *info = IEEE80211_SKB_CB(nskb);
|
|
|
|
|
|
|
|
info->flags = IEEE80211_TX_CTL_TX_OFFCHAN |
|
|
|
|
IEEE80211_TX_INTFL_OFFCHAN_TX_OK |
|
|
|
|
IEEE80211_TX_CTL_NO_CCK_RATE;
|
2015-06-02 19:39:54 +00:00
|
|
|
if (ieee80211_hw_check(&local->hw, QUEUE_CONTROL))
|
2013-03-07 12:22:05 +00:00
|
|
|
info->hw_queue =
|
|
|
|
local->hw.offchannel_tx_hw_queue;
|
|
|
|
}
|
|
|
|
|
2022-07-18 19:36:08 +00:00
|
|
|
__ieee80211_tx_skb_tid_band(rx->sdata, nskb, 7, -1,
|
2020-07-23 10:01:52 +00:00
|
|
|
status->band);
|
2008-09-09 12:42:50 +00:00
|
|
|
}
|
2008-09-09 12:49:03 +00:00
|
|
|
dev_kfree_skb(rx->skb);
|
|
|
|
return RX_QUEUED;
|
2008-09-09 12:42:50 +00:00
|
|
|
}
|
|
|
|
|
2020-09-22 02:28:14 +00:00
|
|
|
static ieee80211_rx_result debug_noinline
|
|
|
|
ieee80211_rx_h_ext(struct ieee80211_rx_data *rx)
|
|
|
|
{
|
|
|
|
struct ieee80211_sub_if_data *sdata = rx->sdata;
|
|
|
|
struct ieee80211_hdr *hdr = (void *)rx->skb->data;
|
|
|
|
|
|
|
|
if (!ieee80211_is_ext(hdr->frame_control))
|
|
|
|
return RX_CONTINUE;
|
|
|
|
|
|
|
|
if (sdata->vif.type != NL80211_IFTYPE_STATION)
|
|
|
|
return RX_DROP_MONITOR;
|
|
|
|
|
|
|
|
/* for now only beacons are ext, so queue them */
|
2022-08-17 19:57:19 +00:00
|
|
|
ieee80211_queue_skb_to_iface(sdata, rx->link_id, rx->sta, rx->skb);
|
2020-09-22 02:28:14 +00:00
|
|
|
|
|
|
|
return RX_QUEUED;
|
|
|
|
}
|
|
|
|
|
2008-06-30 13:10:45 +00:00
|
|
|
static ieee80211_rx_result debug_noinline
|
2008-02-25 15:27:43 +00:00
|
|
|
ieee80211_rx_h_mgmt(struct ieee80211_rx_data *rx)
|
2007-07-27 13:43:22 +00:00
|
|
|
{
|
2009-11-16 12:58:20 +00:00
|
|
|
struct ieee80211_sub_if_data *sdata = rx->sdata;
|
2010-06-10 08:21:34 +00:00
|
|
|
struct ieee80211_mgmt *mgmt = (void *)rx->skb->data;
|
|
|
|
__le16 stype;
|
2007-07-27 13:43:22 +00:00
|
|
|
|
2010-06-10 08:21:34 +00:00
|
|
|
stype = mgmt->frame_control & cpu_to_le16(IEEE80211_FCTL_STYPE);
|
2008-09-10 22:01:49 +00:00
|
|
|
|
2010-06-10 08:21:34 +00:00
|
|
|
if (!ieee80211_vif_is_mesh(&sdata->vif) &&
|
|
|
|
sdata->vif.type != NL80211_IFTYPE_ADHOC &&
|
2014-11-03 09:33:19 +00:00
|
|
|
sdata->vif.type != NL80211_IFTYPE_OCB &&
|
2010-06-10 08:21:34 +00:00
|
|
|
sdata->vif.type != NL80211_IFTYPE_STATION)
|
|
|
|
return RX_DROP_MONITOR;
|
2008-09-10 22:01:49 +00:00
|
|
|
|
2010-06-10 08:21:34 +00:00
|
|
|
switch (stype) {
|
2012-01-20 12:55:27 +00:00
|
|
|
case cpu_to_le16(IEEE80211_STYPE_AUTH):
|
2010-06-10 08:21:34 +00:00
|
|
|
case cpu_to_le16(IEEE80211_STYPE_BEACON):
|
|
|
|
case cpu_to_le16(IEEE80211_STYPE_PROBE_RESP):
|
|
|
|
/* process for all: mesh, mlme, ibss */
|
|
|
|
break;
|
2019-10-04 12:37:05 +00:00
|
|
|
case cpu_to_le16(IEEE80211_STYPE_DEAUTH):
|
|
|
|
if (is_multicast_ether_addr(mgmt->da) &&
|
|
|
|
!is_broadcast_ether_addr(mgmt->da))
|
|
|
|
return RX_DROP_MONITOR;
|
|
|
|
|
|
|
|
/* process only for station/IBSS */
|
|
|
|
if (sdata->vif.type != NL80211_IFTYPE_STATION &&
|
|
|
|
sdata->vif.type != NL80211_IFTYPE_ADHOC)
|
|
|
|
return RX_DROP_MONITOR;
|
|
|
|
break;
|
2012-01-20 12:55:27 +00:00
|
|
|
case cpu_to_le16(IEEE80211_STYPE_ASSOC_RESP):
|
|
|
|
case cpu_to_le16(IEEE80211_STYPE_REASSOC_RESP):
|
2010-06-10 08:21:34 +00:00
|
|
|
case cpu_to_le16(IEEE80211_STYPE_DISASSOC):
|
2010-11-29 19:53:23 +00:00
|
|
|
if (is_multicast_ether_addr(mgmt->da) &&
|
|
|
|
!is_broadcast_ether_addr(mgmt->da))
|
|
|
|
return RX_DROP_MONITOR;
|
|
|
|
|
2010-06-10 08:21:34 +00:00
|
|
|
/* process only for station */
|
|
|
|
if (sdata->vif.type != NL80211_IFTYPE_STATION)
|
|
|
|
return RX_DROP_MONITOR;
|
|
|
|
break;
|
|
|
|
case cpu_to_le16(IEEE80211_STYPE_PROBE_REQ):
|
2013-02-14 19:20:14 +00:00
|
|
|
/* process only for ibss and mesh */
|
|
|
|
if (sdata->vif.type != NL80211_IFTYPE_ADHOC &&
|
|
|
|
sdata->vif.type != NL80211_IFTYPE_MESH_POINT)
|
2010-06-10 08:21:34 +00:00
|
|
|
return RX_DROP_MONITOR;
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
return RX_DROP_MONITOR;
|
|
|
|
}
|
2007-09-28 12:02:09 +00:00
|
|
|
|
2022-08-17 19:57:19 +00:00
|
|
|
ieee80211_queue_skb_to_iface(sdata, rx->link_id, rx->sta, rx->skb);
|
2009-02-15 11:44:28 +00:00
|
|
|
|
2010-06-10 08:21:34 +00:00
|
|
|
return RX_QUEUED;
|
2007-07-27 13:43:22 +00:00
|
|
|
}
|
|
|
|
|
2009-11-16 12:58:21 +00:00
|
|
|
static void ieee80211_rx_cooked_monitor(struct ieee80211_rx_data *rx,
|
2023-04-19 12:52:54 +00:00
|
|
|
struct ieee80211_rate *rate,
|
|
|
|
ieee80211_rx_result reason)
|
2008-01-31 18:48:27 +00:00
|
|
|
{
|
|
|
|
struct ieee80211_sub_if_data *sdata;
|
|
|
|
struct ieee80211_local *local = rx->local;
|
|
|
|
struct sk_buff *skb = rx->skb, *skb2;
|
|
|
|
struct net_device *prev_dev = NULL;
|
2009-11-16 12:58:20 +00:00
|
|
|
struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);
|
2012-03-02 12:18:19 +00:00
|
|
|
int needed_headroom;
|
2008-01-31 18:48:27 +00:00
|
|
|
|
2010-09-24 10:38:25 +00:00
|
|
|
/*
|
|
|
|
* If cooked monitor has been processed already, then
|
|
|
|
* don't do it again. If not, set the flag.
|
|
|
|
*/
|
|
|
|
if (rx->flags & IEEE80211_RX_CMNTR)
|
2010-09-24 19:52:49 +00:00
|
|
|
goto out_free_skb;
|
2010-09-24 10:38:25 +00:00
|
|
|
rx->flags |= IEEE80211_RX_CMNTR;
|
2010-09-24 19:52:49 +00:00
|
|
|
|
2011-10-21 08:22:22 +00:00
|
|
|
/* If there are no cooked monitor interfaces, just free the SKB */
|
|
|
|
if (!local->cooked_mntrs)
|
|
|
|
goto out_free_skb;
|
|
|
|
|
2012-03-02 12:18:19 +00:00
|
|
|
/* room for the radiotap header based on driver features */
|
2014-11-06 21:56:36 +00:00
|
|
|
needed_headroom = ieee80211_rx_radiotap_hdrlen(local, status, skb);
|
2008-01-31 18:48:27 +00:00
|
|
|
|
2012-03-02 12:18:19 +00:00
|
|
|
if (skb_headroom(skb) < needed_headroom &&
|
|
|
|
pskb_expand_head(skb, needed_headroom, 0, GFP_ATOMIC))
|
|
|
|
goto out_free_skb;
|
2008-01-31 18:48:27 +00:00
|
|
|
|
2012-03-02 12:18:19 +00:00
|
|
|
/* prepend radiotap information */
|
2012-04-16 12:56:48 +00:00
|
|
|
ieee80211_add_rx_radiotap_header(local, skb, rate, needed_headroom,
|
|
|
|
false);
|
2008-01-31 18:48:27 +00:00
|
|
|
|
2016-03-03 01:16:56 +00:00
|
|
|
skb_reset_mac_header(skb);
|
2008-01-31 18:48:27 +00:00
|
|
|
skb->ip_summed = CHECKSUM_UNNECESSARY;
|
|
|
|
skb->pkt_type = PACKET_OTHERHOST;
|
|
|
|
skb->protocol = htons(ETH_P_802_2);
|
|
|
|
|
|
|
|
list_for_each_entry_rcu(sdata, &local->interfaces, list) {
|
2009-12-23 12:15:31 +00:00
|
|
|
if (!ieee80211_sdata_running(sdata))
|
2008-01-31 18:48:27 +00:00
|
|
|
continue;
|
|
|
|
|
2008-09-10 22:01:58 +00:00
|
|
|
if (sdata->vif.type != NL80211_IFTYPE_MONITOR ||
|
2016-08-29 20:25:15 +00:00
|
|
|
!(sdata->u.mntr.flags & MONITOR_FLAG_COOK_FRAMES))
|
2008-01-31 18:48:27 +00:00
|
|
|
continue;
|
|
|
|
|
|
|
|
if (prev_dev) {
|
|
|
|
skb2 = skb_clone(skb, GFP_ATOMIC);
|
|
|
|
if (skb2) {
|
|
|
|
skb2->dev = prev_dev;
|
2010-06-24 18:25:56 +00:00
|
|
|
netif_receive_skb(skb2);
|
2008-01-31 18:48:27 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
prev_dev = sdata->dev;
|
2020-11-13 21:46:24 +00:00
|
|
|
dev_sw_netstats_rx_add(sdata->dev, skb->len);
|
2008-01-31 18:48:27 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
if (prev_dev) {
|
|
|
|
skb->dev = prev_dev;
|
2010-06-24 18:25:56 +00:00
|
|
|
netif_receive_skb(skb);
|
2010-09-24 10:38:25 +00:00
|
|
|
return;
|
|
|
|
}
|
2008-01-31 18:48:27 +00:00
|
|
|
|
|
|
|
out_free_skb:
|
2023-04-19 12:52:54 +00:00
|
|
|
kfree_skb_reason(skb, (__force u32)reason);
|
2008-01-31 18:48:27 +00:00
|
|
|
}
|
|
|
|
|
2010-08-04 23:36:04 +00:00
|
|
|
static void ieee80211_rx_handlers_result(struct ieee80211_rx_data *rx,
|
|
|
|
ieee80211_rx_result res)
|
|
|
|
{
|
2023-04-19 12:52:54 +00:00
|
|
|
struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(rx->skb);
|
|
|
|
struct ieee80211_supported_band *sband;
|
|
|
|
struct ieee80211_rate *rate = NULL;
|
2010-08-04 23:36:04 +00:00
|
|
|
|
2023-04-19 12:52:54 +00:00
|
|
|
if (res == RX_QUEUED) {
|
|
|
|
I802_DEBUG_INC(rx->sdata->local->rx_handlers_queued);
|
|
|
|
return;
|
|
|
|
}
|
2010-08-04 23:36:04 +00:00
|
|
|
|
2023-04-19 12:52:54 +00:00
|
|
|
if (res != RX_CONTINUE) {
|
2010-08-04 23:36:04 +00:00
|
|
|
I802_DEBUG_INC(rx->sdata->local->rx_handlers_drop);
|
|
|
|
if (rx->sta)
|
2022-09-02 14:12:40 +00:00
|
|
|
rx->link_sta->rx_stats.dropped++;
|
2010-08-04 23:36:04 +00:00
|
|
|
}
|
2023-04-19 12:52:54 +00:00
|
|
|
|
|
|
|
if (u32_get_bits((__force u32)res, SKB_DROP_REASON_SUBSYS_MASK) ==
|
|
|
|
SKB_DROP_REASON_SUBSYS_MAC80211_UNUSABLE) {
|
|
|
|
kfree_skb_reason(rx->skb, (__force u32)res);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
sband = rx->local->hw.wiphy->bands[status->band];
|
|
|
|
if (status->encoding == RX_ENC_LEGACY)
|
|
|
|
rate = &sband->bitrates[status->rate_idx];
|
|
|
|
|
|
|
|
ieee80211_rx_cooked_monitor(rx, rate, res);
|
2010-08-04 23:36:04 +00:00
|
|
|
}
|
2007-07-27 13:43:22 +00:00
|
|
|
|
2013-02-04 17:44:44 +00:00
|
|
|
static void ieee80211_rx_handlers(struct ieee80211_rx_data *rx,
				  struct sk_buff_head *frames)
{
	ieee80211_rx_result res = RX_DROP_MONITOR;
	struct sk_buff *skb;

#define CALL_RXH(rxh)			\
	do {				\
		res = rxh(rx);		\
		if (res != RX_CONTINUE)	\
			goto rxh_next;	\
	} while (0)

	/* Lock here to avoid hitting all of the data used in the RX
	 * path (e.g. key data, station data, ...) concurrently when
	 * a frame is released from the reorder buffer due to timeout
	 * from the timer, potentially concurrently with RX from the
	 * driver.
	 */
	spin_lock_bh(&rx->local->rx_path_lock);

	while ((skb = __skb_dequeue(frames))) {
		/*
		 * all the other fields are valid across frames
		 * that belong to an aMPDU since they are on the
		 * same TID from the same station
		 */
		rx->skb = skb;

		if (WARN_ON_ONCE(!rx->link))
			goto rxh_next;

		CALL_RXH(ieee80211_rx_h_check_more_data);
		CALL_RXH(ieee80211_rx_h_uapsd_and_pspoll);
		CALL_RXH(ieee80211_rx_h_sta_process);
		CALL_RXH(ieee80211_rx_h_decrypt);
		CALL_RXH(ieee80211_rx_h_defragment);
		CALL_RXH(ieee80211_rx_h_michael_mic_verify);
		/* must be after MMIC verify so header is counted in MPDU mic */
		CALL_RXH(ieee80211_rx_h_amsdu);
		CALL_RXH(ieee80211_rx_h_data);

		/* special treatment -- needs the queue */
		res = ieee80211_rx_h_ctrl(rx, frames);
		if (res != RX_CONTINUE)
			goto rxh_next;

		CALL_RXH(ieee80211_rx_h_mgmt_check);
		CALL_RXH(ieee80211_rx_h_action);
		CALL_RXH(ieee80211_rx_h_userspace_mgmt);
		CALL_RXH(ieee80211_rx_h_action_post_userspace);
		CALL_RXH(ieee80211_rx_h_action_return);
		CALL_RXH(ieee80211_rx_h_ext);
		CALL_RXH(ieee80211_rx_h_mgmt);

 rxh_next:
		ieee80211_rx_handlers_result(rx, res);

#undef CALL_RXH
	}

	spin_unlock_bh(&rx->local->rx_path_lock);
}

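/*
 * Illustrative note (added for clarity, not part of the handler chain): each
 * CALL_RXH() invocation above expands to the pattern below, so the first
 * handler that returns anything other than RX_CONTINUE short-circuits the
 * remaining handlers for that frame, e.g.:
 *
 *	res = ieee80211_rx_h_decrypt(rx);
 *	if (res != RX_CONTINUE)
 *		goto rxh_next;
 */
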
static void ieee80211_invoke_rx_handlers(struct ieee80211_rx_data *rx)
{
	struct sk_buff_head reorder_release;
	ieee80211_rx_result res = RX_DROP_MONITOR;

	__skb_queue_head_init(&reorder_release);

#define CALL_RXH(rxh)			\
	do {				\
		res = rxh(rx);		\
		if (res != RX_CONTINUE)	\
			goto rxh_next;	\
	} while (0)

	CALL_RXH(ieee80211_rx_h_check_dup);
	CALL_RXH(ieee80211_rx_h_check);

	ieee80211_rx_reorder_ampdu(rx, &reorder_release);

	ieee80211_rx_handlers(rx, &reorder_release);
	return;

 rxh_next:
	ieee80211_rx_handlers_result(rx, res);

#undef CALL_RXH
}

static bool
ieee80211_rx_is_valid_sta_link_id(struct ieee80211_sta *sta, u8 link_id)
{
	return !!(sta->valid_links & BIT(link_id));
}

static bool ieee80211_rx_data_set_link(struct ieee80211_rx_data *rx,
				       u8 link_id)
{
	rx->link_id = link_id;
	rx->link = rcu_dereference(rx->sdata->link[link_id]);

	if (!rx->sta)
		return rx->link;

	if (!ieee80211_rx_is_valid_sta_link_id(&rx->sta->sta, link_id))
		return false;

	rx->link_sta = rcu_dereference(rx->sta->link[link_id]);

	return rx->link && rx->link_sta;
}

static bool ieee80211_rx_data_set_sta(struct ieee80211_rx_data *rx,
				      struct sta_info *sta, int link_id)
{
	rx->link_id = link_id;
	rx->sta = sta;

	if (sta) {
		rx->local = sta->sdata->local;
		if (!rx->sdata)
			rx->sdata = sta->sdata;
		rx->link_sta = &sta->deflink;
	} else {
		rx->link_sta = NULL;
	}

	if (link_id < 0)
		rx->link = &rx->sdata->deflink;
	else if (!ieee80211_rx_data_set_link(rx, link_id))
		return false;

	return true;
}

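/*
 * Illustrative example (values hypothetical): sta->valid_links is a bitmap
 * of link IDs, so ieee80211_rx_is_valid_sta_link_id() simply tests the
 * corresponding bit. With valid_links == 0x5 (binary 101), link IDs 0 and 2
 * are valid:
 *
 *	valid_links & BIT(0)	-> true
 *	valid_links & BIT(1)	-> false
 *	valid_links & BIT(2)	-> true
 */
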
/*
 * This function makes calls into the RX path, therefore
 * it has to be invoked under RCU read lock.
 */
void ieee80211_release_reorder_timeout(struct sta_info *sta, int tid)
{
	struct sk_buff_head frames;
	struct ieee80211_rx_data rx = {
		/* This is OK -- must be QoS data frame */
		.security_idx = tid,
		.seqno_idx = tid,
	};
	struct tid_ampdu_rx *tid_agg_rx;
	int link_id = -1;

	/* FIXME: statistics won't be right with this */
	if (sta->sta.valid_links)
		link_id = ffs(sta->sta.valid_links) - 1;

	if (!ieee80211_rx_data_set_sta(&rx, sta, link_id))
		return;

	tid_agg_rx = rcu_dereference(sta->ampdu_mlme.tid_rx[tid]);
	if (!tid_agg_rx)
		return;

	__skb_queue_head_init(&frames);

	spin_lock(&tid_agg_rx->reorder_lock);
	ieee80211_sta_reorder_release(sta->sdata, tid_agg_rx, &frames);
	spin_unlock(&tid_agg_rx->reorder_lock);

	if (!skb_queue_empty(&frames)) {
		struct ieee80211_event event = {
			.type = BA_FRAME_TIMEOUT,
			.u.ba.tid = tid,
			.u.ba.sta = &sta->sta,
		};
		drv_event_callback(rx.local, rx.sdata, &event);
	}

	ieee80211_rx_handlers(&rx, &frames);
}

void ieee80211_mark_rx_ba_filtered_frames(struct ieee80211_sta *pubsta, u8 tid,
					  u16 ssn, u64 filtered,
					  u16 received_mpdus)
{
	struct ieee80211_local *local;
	struct sta_info *sta;
	struct tid_ampdu_rx *tid_agg_rx;
	struct sk_buff_head frames;
	struct ieee80211_rx_data rx = {
		/* This is OK -- must be QoS data frame */
		.security_idx = tid,
		.seqno_idx = tid,
	};
	int i, diff;

	if (WARN_ON(!pubsta || tid >= IEEE80211_NUM_TIDS))
		return;

	__skb_queue_head_init(&frames);

	sta = container_of(pubsta, struct sta_info, sta);

	local = sta->sdata->local;
	WARN_ONCE(local->hw.max_rx_aggregation_subframes > 64,
		  "RX BA marker can't support max_rx_aggregation_subframes %u > 64\n",
		  local->hw.max_rx_aggregation_subframes);

	if (!ieee80211_rx_data_set_sta(&rx, sta, -1))
		return;

	rcu_read_lock();
	tid_agg_rx = rcu_dereference(sta->ampdu_mlme.tid_rx[tid]);
	if (!tid_agg_rx)
		goto out;

	spin_lock_bh(&tid_agg_rx->reorder_lock);

	if (received_mpdus >= IEEE80211_SN_MODULO >> 1) {
		int release;

		/* release all frames in the reorder buffer */
		release = (tid_agg_rx->head_seq_num + tid_agg_rx->buf_size) %
			IEEE80211_SN_MODULO;
		ieee80211_release_reorder_frames(sta->sdata, tid_agg_rx,
						 release, &frames);
		/* update ssn to match received ssn */
		tid_agg_rx->head_seq_num = ssn;
	} else {
		ieee80211_release_reorder_frames(sta->sdata, tid_agg_rx, ssn,
						 &frames);
	}

	/* handle the case that received ssn is behind the mac ssn.
	 * it can be tid_agg_rx->buf_size behind and still be valid */
	diff = (tid_agg_rx->head_seq_num - ssn) & IEEE80211_SN_MASK;
	if (diff >= tid_agg_rx->buf_size) {
		tid_agg_rx->reorder_buf_filtered = 0;
		goto release;
	}
	filtered = filtered >> diff;
	ssn += diff;

	/* update bitmap */
	for (i = 0; i < tid_agg_rx->buf_size; i++) {
		int index = (ssn + i) % tid_agg_rx->buf_size;

		tid_agg_rx->reorder_buf_filtered &= ~BIT_ULL(index);
		if (filtered & BIT_ULL(i))
			tid_agg_rx->reorder_buf_filtered |= BIT_ULL(index);
	}

	/* now process also frames that the filter marking released */
	ieee80211_sta_reorder_release(sta->sdata, tid_agg_rx, &frames);

 release:
	spin_unlock_bh(&tid_agg_rx->reorder_lock);

	ieee80211_rx_handlers(&rx, &frames);

 out:
	rcu_read_unlock();
}
EXPORT_SYMBOL(ieee80211_mark_rx_ba_filtered_frames);

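/*
 * Hedged usage sketch (not taken from any in-tree driver; all values below
 * are made up for illustration): a driver whose firmware filters some MPDUs
 * of an RX BA session can report the filtered bitmap so mac80211 keeps the
 * reorder buffer consistent:
 *
 *	u64 filtered = BIT_ULL(0) | BIT_ULL(3);	// MPDUs 0 and 3 filtered
 *
 *	ieee80211_mark_rx_ba_filtered_frames(pubsta, tid, ssn,
 *					     filtered, received_mpdus);
 */
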
/* main receive path */

static inline int ieee80211_bssid_match(const u8 *raddr, const u8 *addr)
{
	return ether_addr_equal(raddr, addr) ||
	       is_broadcast_ether_addr(raddr);
}

static bool ieee80211_accept_frame(struct ieee80211_rx_data *rx)
{
	struct ieee80211_sub_if_data *sdata = rx->sdata;
	struct sk_buff *skb = rx->skb;
	struct ieee80211_hdr *hdr = (void *)skb->data;
	struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);
	u8 *bssid = ieee80211_get_bssid(hdr, skb->len, sdata->vif.type);
	bool multicast = is_multicast_ether_addr(hdr->addr1) ||
			 ieee80211_is_s1g_beacon(hdr->frame_control);

	switch (sdata->vif.type) {
	case NL80211_IFTYPE_STATION:
		if (!bssid && !sdata->u.mgd.use_4addr)
			return false;
		if (ieee80211_is_first_frag(hdr->seq_ctrl) &&
		    ieee80211_is_robust_mgmt_frame(skb) && !rx->sta)
			return false;
		if (multicast)
			return true;
		return ieee80211_is_our_addr(sdata, hdr->addr1, &rx->link_id);
	case NL80211_IFTYPE_ADHOC:
		if (!bssid)
			return false;
		if (ether_addr_equal(sdata->vif.addr, hdr->addr2) ||
		    ether_addr_equal(sdata->u.ibss.bssid, hdr->addr2) ||
		    !is_valid_ether_addr(hdr->addr2))
			return false;
		if (ieee80211_is_beacon(hdr->frame_control))
			return true;
		if (!ieee80211_bssid_match(bssid, sdata->u.ibss.bssid))
			return false;
		if (!multicast &&
		    !ether_addr_equal(sdata->vif.addr, hdr->addr1))
			return false;
		if (!rx->sta) {
			int rate_idx;
			if (status->encoding != RX_ENC_LEGACY)
				rate_idx = 0; /* TODO: HT/VHT rates */
			else
				rate_idx = status->rate_idx;
			ieee80211_ibss_rx_no_sta(sdata, bssid, hdr->addr2,
						 BIT(rate_idx));
		}
		return true;
	case NL80211_IFTYPE_OCB:
		if (!bssid)
			return false;
		if (!ieee80211_is_data_present(hdr->frame_control))
			return false;
		if (!is_broadcast_ether_addr(bssid))
			return false;
		if (!multicast &&
		    !ether_addr_equal(sdata->dev->dev_addr, hdr->addr1))
			return false;
		if (!rx->sta) {
			int rate_idx;
			if (status->encoding != RX_ENC_LEGACY)
				rate_idx = 0; /* TODO: HT rates */
			else
				rate_idx = status->rate_idx;
			ieee80211_ocb_rx_no_sta(sdata, bssid, hdr->addr2,
						BIT(rate_idx));
		}
		return true;
	case NL80211_IFTYPE_MESH_POINT:
		if (ether_addr_equal(sdata->vif.addr, hdr->addr2))
			return false;
		if (multicast)
			return true;
		return ether_addr_equal(sdata->vif.addr, hdr->addr1);
	case NL80211_IFTYPE_AP_VLAN:
	case NL80211_IFTYPE_AP:
		if (!bssid)
			return ieee80211_is_our_addr(sdata, hdr->addr1,
						     &rx->link_id);

		if (!is_broadcast_ether_addr(bssid) &&
		    !ieee80211_is_our_addr(sdata, bssid, NULL)) {
			/*
			 * Accept public action frames even when the
			 * BSSID doesn't match, this is used for P2P
			 * and location updates. Note that mac80211
			 * itself never looks at these frames.
			 */
			if (!multicast &&
			    !ieee80211_is_our_addr(sdata, hdr->addr1,
						   &rx->link_id))
				return false;
			if (ieee80211_is_public_action(hdr, skb->len))
				return true;
			return ieee80211_is_beacon(hdr->frame_control);
		}

		if (!ieee80211_has_tods(hdr->frame_control)) {
			/* ignore data frames to TDLS-peers */
			if (ieee80211_is_data(hdr->frame_control))
				return false;
			/* ignore action frames to TDLS-peers */
			if (ieee80211_is_action(hdr->frame_control) &&
			    !is_broadcast_ether_addr(bssid) &&
			    !ether_addr_equal(bssid, hdr->addr1))
				return false;
		}

		/*
		 * 802.11-2016 Table 9-26 says that for data frames, A1 must be
		 * the BSSID - we've checked that already but may have accepted
		 * the wildcard (ff:ff:ff:ff:ff:ff).
		 *
		 * It also says:
		 *	The BSSID of the Data frame is determined as follows:
		 *	a) If the STA is contained within an AP or is associated
		 *	   with an AP, the BSSID is the address currently in use
		 *	   by the STA contained in the AP.
		 *
		 * So we should not accept data frames with an address that's
		 * multicast.
		 *
		 * Accepting it also opens a security problem because stations
		 * could encrypt it with the GTK and inject traffic that way.
		 */
		if (ieee80211_is_data(hdr->frame_control) && multicast)
			return false;

		return true;
	case NL80211_IFTYPE_P2P_DEVICE:
		return ieee80211_is_public_action(hdr, skb->len) ||
		       ieee80211_is_probe_req(hdr->frame_control) ||
		       ieee80211_is_probe_resp(hdr->frame_control) ||
		       ieee80211_is_beacon(hdr->frame_control);
	case NL80211_IFTYPE_NAN:
		/* Currently no frames on NAN interface are allowed */
		return false;
	default:
		break;
	}

	WARN_ON_ONCE(1);
	return false;
}

void ieee80211_check_fast_rx(struct sta_info *sta)
{
	struct ieee80211_sub_if_data *sdata = sta->sdata;
	struct ieee80211_local *local = sdata->local;
	struct ieee80211_key *key;
	struct ieee80211_fast_rx fastrx = {
		.dev = sdata->dev,
		.vif_type = sdata->vif.type,
		.control_port_protocol = sdata->control_port_protocol,
	}, *old, *new = NULL;
	u32 offload_flags;
	bool set_offload = false;
	bool assign = false;
	bool offload;

	/* use sparse to check that we don't return without updating */
	__acquire(check_fast_rx);

	BUILD_BUG_ON(sizeof(fastrx.rfc1042_hdr) != sizeof(rfc1042_header));
	BUILD_BUG_ON(sizeof(fastrx.rfc1042_hdr) != ETH_ALEN);
	ether_addr_copy(fastrx.rfc1042_hdr, rfc1042_header);
	ether_addr_copy(fastrx.vif_addr, sdata->vif.addr);

	fastrx.uses_rss = ieee80211_hw_check(&local->hw, USES_RSS);

	/* fast-rx doesn't do reordering */
	if (ieee80211_hw_check(&local->hw, AMPDU_AGGREGATION) &&
	    !ieee80211_hw_check(&local->hw, SUPPORTS_REORDERING_BUFFER))
		goto clear;

	switch (sdata->vif.type) {
	case NL80211_IFTYPE_STATION:
		if (sta->sta.tdls) {
			fastrx.da_offs = offsetof(struct ieee80211_hdr, addr1);
			fastrx.sa_offs = offsetof(struct ieee80211_hdr, addr2);
			fastrx.expected_ds_bits = 0;
		} else {
			fastrx.da_offs = offsetof(struct ieee80211_hdr, addr1);
			fastrx.sa_offs = offsetof(struct ieee80211_hdr, addr3);
			fastrx.expected_ds_bits =
				cpu_to_le16(IEEE80211_FCTL_FROMDS);
		}

		if (sdata->u.mgd.use_4addr && !sta->sta.tdls) {
			fastrx.expected_ds_bits |=
				cpu_to_le16(IEEE80211_FCTL_TODS);
			fastrx.da_offs = offsetof(struct ieee80211_hdr, addr3);
			fastrx.sa_offs = offsetof(struct ieee80211_hdr, addr4);
		}

		if (!sdata->u.mgd.powersave)
			break;

		/* software powersave is a huge mess, avoid all of it */
		if (ieee80211_hw_check(&local->hw, PS_NULLFUNC_STACK))
			goto clear;
		if (ieee80211_hw_check(&local->hw, SUPPORTS_PS) &&
		    !ieee80211_hw_check(&local->hw, SUPPORTS_DYNAMIC_PS))
			goto clear;
		break;
	case NL80211_IFTYPE_AP_VLAN:
	case NL80211_IFTYPE_AP:
		/* parallel-rx requires this, at least with calls to
		 * ieee80211_sta_ps_transition()
		 */
		if (!ieee80211_hw_check(&local->hw, AP_LINK_PS))
			goto clear;
		fastrx.da_offs = offsetof(struct ieee80211_hdr, addr3);
		fastrx.sa_offs = offsetof(struct ieee80211_hdr, addr2);
		fastrx.expected_ds_bits = cpu_to_le16(IEEE80211_FCTL_TODS);

		fastrx.internal_forward =
			!(sdata->flags & IEEE80211_SDATA_DONT_BRIDGE_PACKETS) &&
			(sdata->vif.type != NL80211_IFTYPE_AP_VLAN ||
			 !sdata->u.vlan.sta);

		if (sdata->vif.type == NL80211_IFTYPE_AP_VLAN &&
		    sdata->u.vlan.sta) {
			fastrx.expected_ds_bits |=
				cpu_to_le16(IEEE80211_FCTL_FROMDS);
			fastrx.sa_offs = offsetof(struct ieee80211_hdr, addr4);
			fastrx.internal_forward = 0;
		}

		break;
	case NL80211_IFTYPE_MESH_POINT:
		fastrx.expected_ds_bits = cpu_to_le16(IEEE80211_FCTL_FROMDS |
						      IEEE80211_FCTL_TODS);
		fastrx.da_offs = offsetof(struct ieee80211_hdr, addr3);
		fastrx.sa_offs = offsetof(struct ieee80211_hdr, addr4);
		break;
	default:
		goto clear;
	}

	if (!test_sta_flag(sta, WLAN_STA_AUTHORIZED))
		goto clear;

	rcu_read_lock();
	key = rcu_dereference(sta->ptk[sta->ptk_idx]);
	if (!key)
		key = rcu_dereference(sdata->default_unicast_key);
	if (key) {
		switch (key->conf.cipher) {
		case WLAN_CIPHER_SUITE_TKIP:
			/* we don't want to deal with MMIC in fast-rx */
			goto clear_rcu;
		case WLAN_CIPHER_SUITE_CCMP:
		case WLAN_CIPHER_SUITE_CCMP_256:
		case WLAN_CIPHER_SUITE_GCMP:
		case WLAN_CIPHER_SUITE_GCMP_256:
			break;
		default:
			/* We also don't want to deal with
			 * WEP or cipher scheme.
			 */
			goto clear_rcu;
		}

		fastrx.key = true;
		fastrx.icv_len = key->conf.icv_len;
	}

	assign = true;
 clear_rcu:
	rcu_read_unlock();
 clear:
	__release(check_fast_rx);

	if (assign)
		new = kmemdup(&fastrx, sizeof(fastrx), GFP_KERNEL);

	offload_flags = get_bss_sdata(sdata)->vif.offload_flags;
	offload = offload_flags & IEEE80211_OFFLOAD_DECAP_ENABLED;

	if (assign && offload)
		set_offload = !test_and_set_sta_flag(sta, WLAN_STA_DECAP_OFFLOAD);
	else
		set_offload = test_and_clear_sta_flag(sta, WLAN_STA_DECAP_OFFLOAD);

	if (set_offload)
		drv_sta_set_decap_offload(local, sdata, &sta->sta, assign);

	spin_lock_bh(&sta->lock);
	old = rcu_dereference_protected(sta->fast_rx, true);
	rcu_assign_pointer(sta->fast_rx, new);
	spin_unlock_bh(&sta->lock);

	if (old)
		kfree_rcu(old, rcu_head);
}

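/*
 * Illustration of the expected_ds_bits setup above (derived from the code,
 * added for clarity): for a non-TDLS, non-4addr station interface fast-rx
 * only matches frames with FromDS=1/ToDS=0, i.e. frames coming from the AP:
 *
 *	fastrx.expected_ds_bits == cpu_to_le16(IEEE80211_FCTL_FROMDS)
 *
 * ieee80211_invoke_fast_rx() later compares
 * (frame_control & (IEEE80211_FCTL_FROMDS | IEEE80211_FCTL_TODS)) against
 * this value, so ToDS or 4-address frames fall back to the regular RX path.
 */
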
void ieee80211_clear_fast_rx(struct sta_info *sta)
{
	struct ieee80211_fast_rx *old;

	spin_lock_bh(&sta->lock);
	old = rcu_dereference_protected(sta->fast_rx, true);
	RCU_INIT_POINTER(sta->fast_rx, NULL);
	spin_unlock_bh(&sta->lock);

	if (old)
		kfree_rcu(old, rcu_head);
}

void __ieee80211_check_fast_rx_iface(struct ieee80211_sub_if_data *sdata)
{
	struct ieee80211_local *local = sdata->local;
	struct sta_info *sta;

	lockdep_assert_wiphy(local->hw.wiphy);

	list_for_each_entry(sta, &local->sta_list, list) {
		if (sdata != sta->sdata &&
		    (!sta->sdata->bss || sta->sdata->bss != sdata->bss))
			continue;
		ieee80211_check_fast_rx(sta);
	}
}

void ieee80211_check_fast_rx_iface(struct ieee80211_sub_if_data *sdata)
{
	struct ieee80211_local *local = sdata->local;

	lockdep_assert_wiphy(local->hw.wiphy);

	__ieee80211_check_fast_rx_iface(sdata);
}

static void ieee80211_rx_8023(struct ieee80211_rx_data *rx,
			      struct ieee80211_fast_rx *fast_rx,
			      int orig_len)
{
	struct ieee80211_sta_rx_stats *stats;
	struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(rx->skb);
	struct sta_info *sta = rx->sta;
	struct link_sta_info *link_sta;
	struct sk_buff *skb = rx->skb;
	void *sa = skb->data + ETH_ALEN;
	void *da = skb->data;

	if (rx->link_id >= 0) {
		link_sta = rcu_dereference(sta->link[rx->link_id]);
		if (WARN_ON_ONCE(!link_sta)) {
			dev_kfree_skb(rx->skb);
			return;
		}
	} else {
		link_sta = &sta->deflink;
	}

	stats = &link_sta->rx_stats;
	if (fast_rx->uses_rss)
		stats = this_cpu_ptr(link_sta->pcpu_rx_stats);

	/* statistics part of ieee80211_rx_h_sta_process() */
	if (!(status->flag & RX_FLAG_NO_SIGNAL_VAL)) {
		stats->last_signal = status->signal;
		if (!fast_rx->uses_rss)
			ewma_signal_add(&link_sta->rx_stats_avg.signal,
					-status->signal);
	}

	if (status->chains) {
		int i;

		stats->chains = status->chains;
		for (i = 0; i < ARRAY_SIZE(status->chain_signal); i++) {
			int signal = status->chain_signal[i];

			if (!(status->chains & BIT(i)))
				continue;

			stats->chain_signal_last[i] = signal;
			if (!fast_rx->uses_rss)
				ewma_signal_add(&link_sta->rx_stats_avg.chain_signal[i],
						-signal);
		}
	}
	/* end of statistics */

	stats->last_rx = jiffies;
	stats->last_rate = sta_stats_encode_rate(status);

	stats->fragments++;
	stats->packets++;

	skb->dev = fast_rx->dev;

	dev_sw_netstats_rx_add(fast_rx->dev, skb->len);

	/* The seqno index has the same property as needed
	 * for the rx_msdu field, i.e. it is IEEE80211_NUM_TIDS
	 * for non-QoS-data frames. Here we know it's a data
	 * frame, so count MSDUs.
	 */
	u64_stats_update_begin(&stats->syncp);
	stats->msdu[rx->seqno_idx]++;
	stats->bytes += orig_len;
	u64_stats_update_end(&stats->syncp);

	if (fast_rx->internal_forward) {
		struct sk_buff *xmit_skb = NULL;
		if (is_multicast_ether_addr(da)) {
			xmit_skb = skb_copy(skb, GFP_ATOMIC);
		} else if (!ether_addr_equal(da, sa) &&
			   sta_info_get(rx->sdata, da)) {
			xmit_skb = skb;
			skb = NULL;
		}

		if (xmit_skb) {
			/*
			 * Send to wireless media and increase priority by 256
			 * to keep the received priority instead of
			 * reclassifying the frame (see cfg80211_classify8021d).
			 */
			xmit_skb->priority += 256;
			xmit_skb->protocol = htons(ETH_P_802_3);
			skb_reset_network_header(xmit_skb);
			skb_reset_mac_header(xmit_skb);
			dev_queue_xmit(xmit_skb);
		}

		if (!skb)
			return;
	}

	/* deliver to local stack */
	skb->protocol = eth_type_trans(skb, fast_rx->dev);
	ieee80211_deliver_skb_to_local_stack(skb, rx);
}

static bool ieee80211_invoke_fast_rx(struct ieee80211_rx_data *rx,
				     struct ieee80211_fast_rx *fast_rx)
{
	struct sk_buff *skb = rx->skb;
	struct ieee80211_hdr *hdr = (void *)skb->data;
	struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);
	static ieee80211_rx_result res;
	int orig_len = skb->len;
	int hdrlen = ieee80211_hdrlen(hdr->frame_control);
	int snap_offs = hdrlen;
	struct {
		u8 snap[sizeof(rfc1042_header)];
		__be16 proto;
	} *payload __aligned(2);
	struct {
		u8 da[ETH_ALEN];
		u8 sa[ETH_ALEN];
	} addrs __aligned(2);
	struct ieee80211_sta_rx_stats *stats;

	/* for parallel-rx, we need to have DUP_VALIDATED, otherwise we write
	 * to a common data structure; drivers can implement that per queue
	 * but we don't have that information in mac80211
	 */
	if (!(status->flag & RX_FLAG_DUP_VALIDATED))
		return false;

#define FAST_RX_CRYPT_FLAGS	(RX_FLAG_PN_VALIDATED | RX_FLAG_DECRYPTED)

	/* If using encryption, we also need to have:
	 *  - PN_VALIDATED: similar, but the implementation is tricky
	 *  - DECRYPTED: necessary for PN_VALIDATED
	 */
	if (fast_rx->key &&
	    (status->flag & FAST_RX_CRYPT_FLAGS) != FAST_RX_CRYPT_FLAGS)
		return false;

	if (unlikely(!ieee80211_is_data_present(hdr->frame_control)))
		return false;

	if (unlikely(ieee80211_is_frag(hdr)))
		return false;

	/* Since our interface address cannot be multicast, this
	 * implicitly also rejects multicast frames without the
	 * explicit check.
	 *
	 * We shouldn't get any *data* frames not addressed to us
	 * (AP mode will accept multicast *management* frames), but
	 * punting here will make it go through the full checks in
	 * ieee80211_accept_frame().
	 */
	if (!ether_addr_equal(fast_rx->vif_addr, hdr->addr1))
		return false;

	if ((hdr->frame_control & cpu_to_le16(IEEE80211_FCTL_FROMDS |
					      IEEE80211_FCTL_TODS)) !=
	    fast_rx->expected_ds_bits)
		return false;

	/* assign the key to drop unencrypted frames (later)
	 * and strip the IV/MIC if necessary
	 */
	if (fast_rx->key && !(status->flag & RX_FLAG_IV_STRIPPED)) {
		/* GCMP header length is the same */
		snap_offs += IEEE80211_CCMP_HDR_LEN;
	}

	if (!ieee80211_vif_is_mesh(&rx->sdata->vif) &&
	    !(status->rx_flags & IEEE80211_RX_AMSDU)) {
		if (!pskb_may_pull(skb, snap_offs + sizeof(*payload)))
			return false;

		payload = (void *)(skb->data + snap_offs);

		if (!ether_addr_equal(payload->snap, fast_rx->rfc1042_hdr))
			return false;

		/* Don't handle these here since they require special code.
		 * Accept AARP and IPX even though they should come with a
		 * bridge-tunnel header - but if we get them this way then
		 * there's little point in discarding them.
		 */
		if (unlikely(payload->proto == cpu_to_be16(ETH_P_TDLS) ||
			     payload->proto == fast_rx->control_port_protocol))
			return false;
	}

	/* after this point, don't punt to the slowpath! */

	if (rx->key && !(status->flag & RX_FLAG_MIC_STRIPPED) &&
	    pskb_trim(skb, skb->len - fast_rx->icv_len))
		goto drop;

	if (rx->key && !ieee80211_has_protected(hdr->frame_control))
		goto drop;

	if (status->rx_flags & IEEE80211_RX_AMSDU) {
		if (__ieee80211_rx_h_amsdu(rx, snap_offs - hdrlen) !=
		    RX_QUEUED)
			goto drop;

		return true;
	}

	/* do the header conversion - first grab the addresses */
	ether_addr_copy(addrs.da, skb->data + fast_rx->da_offs);
	ether_addr_copy(addrs.sa, skb->data + fast_rx->sa_offs);
	if (ieee80211_vif_is_mesh(&rx->sdata->vif)) {
		skb_pull(skb, snap_offs - 2);
		put_unaligned_be16(skb->len - 2, skb->data);
	} else {
		skb_postpull_rcsum(skb, skb->data + snap_offs,
				   sizeof(rfc1042_header) + 2);

		/* remove the SNAP but leave the ethertype */
		skb_pull(skb, snap_offs + sizeof(rfc1042_header));
	}
	/* push the addresses in front */
	memcpy(skb_push(skb, sizeof(addrs)), &addrs, sizeof(addrs));

	res = ieee80211_rx_mesh_data(rx->sdata, rx->sta, rx->skb);
	switch (res) {
	case RX_QUEUED:
		return true;
	case RX_CONTINUE:
		break;
	default:
		goto drop;
	}

	ieee80211_rx_8023(rx, fast_rx, orig_len);

	return true;
 drop:
	dev_kfree_skb(skb);

	if (fast_rx->uses_rss)
		stats = this_cpu_ptr(rx->link_sta->pcpu_rx_stats);
	else
		stats = &rx->link_sta->rx_stats;

	stats->dropped++;
	return true;
}

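/*
 * Sketch of the header conversion done by ieee80211_invoke_fast_rx() above
 * for the non-mesh, non-A-MSDU case (added for clarity): the 802.11 header
 * and the RFC 1042 SNAP header are stripped while the ethertype is kept,
 * and DA/SA are pushed in front, yielding a regular Ethernet frame:
 *
 *	[802.11 hdr][SNAP][ethertype][payload]
 *		-> [DA][SA][ethertype][payload]
 */
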
/*
 * This function returns whether or not the SKB
 * was destined for RX processing or not, which,
 * if consume is true, is equivalent to whether
 * or not the skb was consumed.
 */
static bool ieee80211_prepare_and_rx_handle(struct ieee80211_rx_data *rx,
					    struct sk_buff *skb, bool consume)
{
	struct ieee80211_local *local = rx->local;
	struct ieee80211_sub_if_data *sdata = rx->sdata;
	struct ieee80211_hdr *hdr = (void *)skb->data;
	struct link_sta_info *link_sta = rx->link_sta;
	struct ieee80211_link_data *link = rx->link;

	rx->skb = skb;

	/* See if we can do fast-rx; if we have to copy we already lost,
	 * so punt in that case. We should never have to deliver a data
	 * frame to multiple interfaces anyway.
	 *
	 * We skip the ieee80211_accept_frame() call and do the necessary
	 * checking inside ieee80211_invoke_fast_rx().
	 */
	if (consume && rx->sta) {
		struct ieee80211_fast_rx *fast_rx;

		fast_rx = rcu_dereference(rx->sta->fast_rx);
		if (fast_rx && ieee80211_invoke_fast_rx(rx, fast_rx))
			return true;
	}

	if (!ieee80211_accept_frame(rx))
		return false;

	if (!consume) {
		struct skb_shared_hwtstamps *shwt;

		rx->skb = skb_copy(skb, GFP_ATOMIC);
		if (!rx->skb) {
			if (net_ratelimit())
				wiphy_debug(local->hw.wiphy,
					"failed to copy skb for %s\n",
					sdata->name);
			return true;
		}

		/* skb_copy() does not copy the hw timestamps, so copy it
		 * explicitly
		 */
		shwt = skb_hwtstamps(rx->skb);
		shwt->hwtstamp = skb_hwtstamps(skb)->hwtstamp;

		/* Update the hdr pointer to the new skb for translation below */
		hdr = (struct ieee80211_hdr *)rx->skb->data;
	}

	if (unlikely(rx->sta && rx->sta->sta.mlo) &&
	    is_unicast_ether_addr(hdr->addr1) &&
	    !ieee80211_is_probe_resp(hdr->frame_control) &&
	    !ieee80211_is_beacon(hdr->frame_control)) {
		/* translate to MLD addresses */
		if (ether_addr_equal(link->conf->addr, hdr->addr1))
			ether_addr_copy(hdr->addr1, rx->sdata->vif.addr);
		if (ether_addr_equal(link_sta->addr, hdr->addr2))
			ether_addr_copy(hdr->addr2, rx->sta->addr);
		/* translate A3 only if it's the BSSID */
		if (!ieee80211_has_tods(hdr->frame_control) &&
		    !ieee80211_has_fromds(hdr->frame_control)) {
			if (ether_addr_equal(link_sta->addr, hdr->addr3))
				ether_addr_copy(hdr->addr3, rx->sta->addr);
			else if (ether_addr_equal(link->conf->addr, hdr->addr3))
				ether_addr_copy(hdr->addr3, rx->sdata->vif.addr);
		}
		/* not needed for A4 since it can only carry the SA */
	}

	ieee80211_invoke_rx_handlers(rx);
	return true;
}

static void __ieee80211_rx_handle_8023(struct ieee80211_hw *hw,
				       struct ieee80211_sta *pubsta,
				       struct sk_buff *skb,
				       struct list_head *list)
{
	struct ieee80211_local *local = hw_to_local(hw);
	struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);
	struct ieee80211_fast_rx *fast_rx;
	struct ieee80211_rx_data rx;
	struct sta_info *sta;
	int link_id = -1;

	memset(&rx, 0, sizeof(rx));
	rx.skb = skb;
	rx.local = local;
	rx.list = list;
	rx.link_id = -1;

	I802_DEBUG_INC(local->dot11ReceivedFragmentCount);

	/* drop frame if too short for header */
	if (skb->len < sizeof(struct ethhdr))
		goto drop;

	if (!pubsta)
		goto drop;

	if (status->link_valid)
		link_id = status->link_id;

	/*
	 * TODO: Should the frame be dropped if the right link_id is not
	 * available? Or may be it is fine in the current form to proceed with
	 * the frame processing because with frame being in 802.3 format,
	 * link_id is used only for stats purpose and updating the stats on
	 * the deflink is fine?
	 */
	sta = container_of(pubsta, struct sta_info, sta);
	if (!ieee80211_rx_data_set_sta(&rx, sta, link_id))
		goto drop;

	fast_rx = rcu_dereference(rx.sta->fast_rx);
	if (!fast_rx)
		goto drop;

	ieee80211_rx_8023(&rx, fast_rx, skb->len);
	return;

drop:
	dev_kfree_skb(skb);
}

static bool ieee80211_rx_for_interface(struct ieee80211_rx_data *rx,
				       struct sk_buff *skb, bool consume)
{
	struct link_sta_info *link_sta;
	struct ieee80211_hdr *hdr = (void *)skb->data;
	struct sta_info *sta;
	int link_id = -1;

	/*
	 * Look up link station first, in case there's a
	 * chance that they might have a link address that
	 * is identical to the MLD address, that way we'll
	 * have the link information if needed.
	 */
	link_sta = link_sta_info_get_bss(rx->sdata, hdr->addr2);
	if (link_sta) {
		sta = link_sta->sta;
		link_id = link_sta->link_id;
	} else {
		struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);

		sta = sta_info_get_bss(rx->sdata, hdr->addr2);
		if (status->link_valid)
			link_id = status->link_id;
	}

	if (!ieee80211_rx_data_set_sta(rx, sta, link_id))
		return false;

	return ieee80211_prepare_and_rx_handle(rx, skb, consume);
}

/*
 * This is the actual Rx frames handler. as it belongs to Rx path it must
 * be called with rcu_read_lock protection.
 */
static void __ieee80211_rx_handle_packet(struct ieee80211_hw *hw,
					 struct ieee80211_sta *pubsta,
					 struct sk_buff *skb,
					 struct list_head *list)
{
	struct ieee80211_local *local = hw_to_local(hw);
	struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);
	struct ieee80211_sub_if_data *sdata;
	struct ieee80211_hdr *hdr;
	__le16 fc;
	struct ieee80211_rx_data rx;
	struct ieee80211_sub_if_data *prev;
	struct rhlist_head *tmp;
	int err = 0;

	fc = ((struct ieee80211_hdr *)skb->data)->frame_control;
	memset(&rx, 0, sizeof(rx));
	rx.skb = skb;
	rx.local = local;
	rx.list = list;
	rx.link_id = -1;

	if (ieee80211_is_data(fc) || ieee80211_is_mgmt(fc))
		I802_DEBUG_INC(local->dot11ReceivedFragmentCount);

	if (ieee80211_is_mgmt(fc)) {
		/* drop frame if too short for header */
		if (skb->len < ieee80211_hdrlen(fc))
			err = -ENOBUFS;
		else
			err = skb_linearize(skb);
	} else {
		err = !pskb_may_pull(skb, ieee80211_hdrlen(fc));
	}

	if (err) {
		dev_kfree_skb(skb);
		return;
	}

	hdr = (struct ieee80211_hdr *)skb->data;
	ieee80211_parse_qos(&rx);
	ieee80211_verify_alignment(&rx);

	if (unlikely(ieee80211_is_probe_resp(hdr->frame_control) ||
		     ieee80211_is_beacon(hdr->frame_control) ||
		     ieee80211_is_s1g_beacon(hdr->frame_control)))
		ieee80211_scan_rx(local, skb);

	if (ieee80211_is_data(fc)) {
		struct sta_info *sta, *prev_sta;
		int link_id = -1;

		if (status->link_valid)
			link_id = status->link_id;

		if (pubsta) {
			sta = container_of(pubsta, struct sta_info, sta);
			if (!ieee80211_rx_data_set_sta(&rx, sta, link_id))
				goto out;

			/*
			 * In MLO connection, fetch the link_id using addr2
			 * when the driver does not pass link_id in status.
			 * When the address translation is already performed by
			 * driver/hw, the valid link_id must be passed in
			 * status.
			 */

			if (!status->link_valid && pubsta->mlo) {
				struct ieee80211_hdr *hdr = (void *)skb->data;
				struct link_sta_info *link_sta;

				link_sta = link_sta_info_get_bss(rx.sdata,
								 hdr->addr2);
				if (!link_sta)
					goto out;

				ieee80211_rx_data_set_link(&rx, link_sta->link_id);
			}

			if (ieee80211_prepare_and_rx_handle(&rx, skb, true))
				return;
			goto out;
		}

		prev_sta = NULL;

		for_each_sta_info(local, hdr->addr2, sta, tmp) {
			if (!prev_sta) {
				prev_sta = sta;
				continue;
			}

			rx.sdata = prev_sta->sdata;
			if (!ieee80211_rx_data_set_sta(&rx, prev_sta, link_id))
				goto out;

			if (!status->link_valid && prev_sta->sta.mlo)
				continue;

			ieee80211_prepare_and_rx_handle(&rx, skb, false);

			prev_sta = sta;
		}

		if (prev_sta) {
			rx.sdata = prev_sta->sdata;
			if (!ieee80211_rx_data_set_sta(&rx, prev_sta, link_id))
				goto out;

			if (!status->link_valid && prev_sta->sta.mlo)
				goto out;

			if (ieee80211_prepare_and_rx_handle(&rx, skb, true))
				return;
			goto out;
		}
	}

	prev = NULL;

	list_for_each_entry_rcu(sdata, &local->interfaces, list) {
		if (!ieee80211_sdata_running(sdata))
			continue;

		if (sdata->vif.type == NL80211_IFTYPE_MONITOR ||
		    sdata->vif.type == NL80211_IFTYPE_AP_VLAN)
			continue;

		/*
		 * frame is destined for this interface, but if it's
		 * not also for the previous one we handle that after
		 * the loop to avoid copying the SKB once too much
		 */

		if (!prev) {
			prev = sdata;
			continue;
		}

		rx.sdata = prev;
		ieee80211_rx_for_interface(&rx, skb, false);

		prev = sdata;
	}

	if (prev) {
		rx.sdata = prev;

		if (ieee80211_rx_for_interface(&rx, skb, true))
			return;
	}

 out:
	dev_kfree_skb(skb);
}

/*
 * This is the receive path handler. It is called by a low level driver when an
 * 802.11 MPDU is received from the hardware.
 */
void ieee80211_rx_list(struct ieee80211_hw *hw, struct ieee80211_sta *pubsta,
		       struct sk_buff *skb, struct list_head *list)
{
	struct ieee80211_local *local = hw_to_local(hw);
	struct ieee80211_rate *rate = NULL;
	struct ieee80211_supported_band *sband;
	struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);
	struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;

	WARN_ON_ONCE(softirq_count() == 0);

	if (WARN_ON(status->band >= NUM_NL80211_BANDS))
		goto drop;

	sband = local->hw.wiphy->bands[status->band];
	if (WARN_ON(!sband))
		goto drop;

	/*
	 * If we're suspending, it is possible although not too likely
	 * that we'd be receiving frames after having already partially
	 * quiesced the stack. We can't process such frames then since
	 * that might, for example, cause stations to be added or other
	 * driver callbacks be invoked.
	 */
	if (unlikely(local->quiescing || local->suspended))
		goto drop;

	/* We might be during a HW reconfig, prevent Rx for the same reason */
	if (unlikely(local->in_reconfig))
		goto drop;

	/*
	 * The same happens when we're not even started,
	 * but that's worth a warning.
	 */
	if (WARN_ON(!local->started))
		goto drop;

	if (likely(!(status->flag & RX_FLAG_FAILED_PLCP_CRC))) {
		/*
		 * Validate the rate, unless a PLCP error means that
		 * we probably can't have a valid rate here anyway.
		 */

		switch (status->encoding) {
		case RX_ENC_HT:
			/*
			 * rate_idx is MCS index, which can be [0-76]
			 * as documented on:
			 *
			 * https://wireless.wiki.kernel.org/en/developers/Documentation/ieee80211/802.11n
			 *
			 * Anything else would be some sort of driver or
			 * hardware error. The driver should catch hardware
			 * errors.
			 */
			if (WARN(status->rate_idx > 76,
				 "Rate marked as an HT rate but passed "
				 "status->rate_idx is not "
				 "an MCS index [0-76]: %d (0x%02x)\n",
				 status->rate_idx,
				 status->rate_idx))
				goto drop;
			break;
		case RX_ENC_VHT:
			if (WARN_ONCE(status->rate_idx > 11 ||
				      !status->nss ||
				      status->nss > 8,
				      "Rate marked as a VHT rate but data is invalid: MCS: %d, NSS: %d\n",
				      status->rate_idx, status->nss))
				goto drop;
			break;
		case RX_ENC_HE:
			if (WARN_ONCE(status->rate_idx > 11 ||
				      !status->nss ||
				      status->nss > 8,
				      "Rate marked as an HE rate but data is invalid: MCS: %d, NSS: %d\n",
				      status->rate_idx, status->nss))
				goto drop;
			break;
		case RX_ENC_EHT:
			if (WARN_ONCE(status->rate_idx > 15 ||
				      !status->nss ||
				      status->nss > 8 ||
				      status->eht.gi > NL80211_RATE_INFO_EHT_GI_3_2,
				      "Rate marked as an EHT rate but data is invalid: MCS:%d, NSS:%d, GI:%d\n",
				      status->rate_idx, status->nss, status->eht.gi))
				goto drop;
			break;
		default:
			WARN_ON_ONCE(1);
			fallthrough;
		case RX_ENC_LEGACY:
			if (WARN_ON(status->rate_idx >= sband->n_bitrates))
				goto drop;
			rate = &sband->bitrates[status->rate_idx];
		}
	}

	if (WARN_ON_ONCE(status->link_id >= IEEE80211_LINK_UNSPECIFIED))
		goto drop;

	status->rx_flags = 0;

	kcov_remote_start_common(skb_get_kcov_handle(skb));

	/*
	 * Frames with failed FCS/PLCP checksum are not returned,
	 * all other frames are returned without radiotap header
	 * if it was previously present.
	 * Also, frames with less than 16 bytes are dropped.
	 */
	if (!(status->flag & RX_FLAG_8023))
		skb = ieee80211_rx_monitor(local, skb, rate);
	if (skb) {
		if ((status->flag & RX_FLAG_8023) ||
		    ieee80211_is_data_present(hdr->frame_control))
			ieee80211_tpt_led_trig_rx(local, skb->len);

		if (status->flag & RX_FLAG_8023)
			__ieee80211_rx_handle_8023(hw, pubsta, skb, list);
		else
			__ieee80211_rx_handle_packet(hw, pubsta, skb, list);
	}

	kcov_remote_stop();

	return;
 drop:
	kfree_skb(skb);
}
EXPORT_SYMBOL(ieee80211_rx_list);

void ieee80211_rx_napi(struct ieee80211_hw *hw, struct ieee80211_sta *pubsta,
		       struct sk_buff *skb, struct napi_struct *napi)
{
	struct sk_buff *tmp;
	LIST_HEAD(list);

	/*
	 * key references and virtual interfaces are protected using RCU
	 * and this requires that we are in a read-side RCU section during
	 * receive processing
	 */
	rcu_read_lock();
	ieee80211_rx_list(hw, pubsta, skb, &list);
	rcu_read_unlock();

	if (!napi) {
		netif_receive_skb_list(&list);
		return;
	}

	list_for_each_entry_safe(skb, tmp, &list, list) {
		skb_list_del_init(skb);
		napi_gro_receive(napi, skb);
	}
}
EXPORT_SYMBOL(ieee80211_rx_napi);

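/*
 * Hedged usage sketch (driver names and fields are hypothetical, not taken
 * from any in-tree driver): a low-level driver typically hands a fully
 * received MPDU to mac80211 from its NAPI poll routine, after filling in
 * struct ieee80211_rx_status via IEEE80211_SKB_RXCB():
 *
 *	struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);
 *
 *	memset(status, 0, sizeof(*status));
 *	status->band = NL80211_BAND_2GHZ;
 *	status->freq = 2412;
 *	status->rate_idx = 0;
 *	ieee80211_rx_napi(hw, NULL, skb, napi);
 */
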
/* This is a version of the rx handler that can be called from hard irq
 * context. Post the skb on the queue and schedule the tasklet */
void ieee80211_rx_irqsafe(struct ieee80211_hw *hw, struct sk_buff *skb)
{
	struct ieee80211_local *local = hw_to_local(hw);

	BUILD_BUG_ON(sizeof(struct ieee80211_rx_status) > sizeof(skb->cb));

	skb->pkt_type = IEEE80211_RX_MSG;
	skb_queue_tail(&local->skb_queue, skb);
	tasklet_schedule(&local->tasklet);
}
EXPORT_SYMBOL(ieee80211_rx_irqsafe);
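
/*
 * Hedged usage sketch (names prefixed "mydrv_" are hypothetical, for
 * illustration only): unlike ieee80211_rx_napi()/ieee80211_rx_list(),
 * ieee80211_rx_irqsafe() may be called from hard interrupt context, and
 * mac80211 defers the actual processing to its tasklet:
 *
 *	static irqreturn_t mydrv_isr(int irq, void *data)
 *	{
 *		struct mydrv_priv *priv = data;
 *		struct sk_buff *skb = mydrv_fetch_frame(priv);
 *
 *		if (skb)
 *			ieee80211_rx_irqsafe(priv->hw, skb);
 *		return IRQ_HANDLED;
 *	}
 */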