Merge tag 'wireless-next-2024-07-11' of git://git.kernel.org/pub/scm/linux/kernel/git/wireless/wireless-next

Kalle Valo says:

====================
wireless-next patches for v6.11

Most likely the last "new features" pull request for v6.11, with
changes both in the stack and in drivers. The big thing is the multiple
radios per wiphy feature, which makes it possible to better advertise
radio capabilities to user space. mt76 enabled MLO and iwlwifi
re-enabled MLO; ath12k and rtw89 Wi-Fi 6 devices got WoWLAN support.

Major changes:

cfg80211/mac80211
 * remove DEAUTH_NEED_MGD_TX_PREP flag
 * multiple radios per wiphy support

mac80211_hwsim
 * multi-radio wiphy support

ath12k
 * DebugFS support for datapath statistics
 * WCN7850: support for WoW (Wake on WLAN)
 * WCN7850: device-tree bindings

ath11k
 * QCA6390: device-tree bindings

iwlwifi
 * mvm: re-enable Multi-Link Operation (MLO)
 * aggregation (A-MSDU) optimisations

rtw89
 * preparation for RTL8852BE-VT support
 * WoWLAN support for Wi-Fi 6 chips
 * 36-bit PCI DMA support

mt76
 * mt7925 Multi-Link Operation (MLO) support

* tag 'wireless-next-2024-07-11' of git://git.kernel.org/pub/scm/linux/kernel/git/wireless/wireless-next: (204 commits)
  wifi: mac80211: fix AP chandef capturing in CSA
  wifi: iwlwifi: correctly reference TSO page information
  wifi: mt76: mt792x: fix scheduler interference in drv own process
  wifi: mt76: mt7925: enabling MLO when the firmware supports it
  wifi: mt76: mt7925: remove the unused mt7925_mcu_set_chan_info
  wifi: mt76: mt7925: update mt7925_mac_link_bss_add for MLO
  wifi: mt76: mt7925: update mt7925_mcu_bss_basic_tlv for MLO
  wifi: mt76: mt7925: update mt7925_mcu_set_timing for MLO
  wifi: mt76: mt7925: update mt7925_mcu_sta_phy_tlv for MLO
  wifi: mt76: mt7925: update mt7925_mcu_sta_rate_ctrl_tlv for MLO
  wifi: mt76: mt7925: add mt7925_mcu_sta_eht_mld_tlv for MLO
  wifi: mt76: mt7925: update mt7925_mcu_sta_update for MLO
  wifi: mt76: mt7925: update mt7925_mcu_add_bss_info for MLO
  wifi: mt76: mt7925: update mt7925_mcu_bss_mld_tlv for MLO
  wifi: mt76: mt7925: update mt7925_mcu_sta_mld_tlv for MLO
  wifi: mt76: mt7925: add mt7925_[assign,unassign]_vif_chanctx
  wifi: mt76: add def_wcid to struct mt76_wcid
  wifi: mt76: mt7925: report link information in rx status
  wifi: mt76: mt7925: update rate index according to link id
  wifi: mt76: mt7925: add link handling in the mt7925_ipv6_addr_change
  ...
====================

Link: https://patch.msgid.link/20240711102353.0C849C116B1@smtp.kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Commit 80ab5445da, Jakub Kicinski, 2024-07-11 17:22:04 -07:00
162 changed files with 16665 additions and 3674 deletions


@ -17,6 +17,7 @@ description: |
properties:
compatible:
enum:
- pci17cb,1101 # QCA6390
- pci17cb,1103 # WCN6855
reg:
@ -28,10 +29,55 @@ properties:
string to uniquely identify variant of the calibration data for designs
with colliding bus and device ids
vddrfacmn-supply:
description: VDD_RFA_CMN supply regulator handle
vddaon-supply:
description: VDD_AON supply regulator handle
vddwlcx-supply:
description: VDD_WL_CX supply regulator handle
vddwlmx-supply:
description: VDD_WL_MX supply regulator handle
vddrfa0p8-supply:
description: VDD_RFA_0P8 supply regulator handle
vddrfa1p2-supply:
description: VDD_RFA_1P2 supply regulator handle
vddrfa1p7-supply:
description: VDD_RFA_1P7 supply regulator handle
vddpcie0p9-supply:
description: VDD_PCIE_0P9 supply regulator handle
vddpcie1p8-supply:
description: VDD_PCIE_1P8 supply regulator handle
required:
- compatible
- reg
allOf:
- if:
properties:
compatible:
contains:
const: pci17cb,1101
then:
required:
- vddrfacmn-supply
- vddaon-supply
- vddwlcx-supply
- vddwlmx-supply
- vddrfa0p8-supply
- vddrfa1p2-supply
- vddrfa1p7-supply
- vddpcie0p9-supply
- vddpcie1p8-supply
additionalProperties: false
examples:


@ -0,0 +1,99 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
# Copyright (c) 2024 Linaro Limited
%YAML 1.2
---
$id: http://devicetree.org/schemas/net/wireless/qcom,ath12k.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Qualcomm Technologies ath12k wireless devices (PCIe)
maintainers:
- Jeff Johnson <quic_jjohnson@quicinc.com>
- Kalle Valo <kvalo@kernel.org>
description:
Qualcomm Technologies IEEE 802.11be PCIe devices.
properties:
compatible:
enum:
- pci17cb,1107 # WCN7850
reg:
maxItems: 1
vddaon-supply:
description: VDD_AON supply regulator handle
vddwlcx-supply:
description: VDD_WLCX supply regulator handle
vddwlmx-supply:
description: VDD_WLMX supply regulator handle
vddrfacmn-supply:
description: VDD_RFA_CMN supply regulator handle
vddrfa0p8-supply:
description: VDD_RFA_0P8 supply regulator handle
vddrfa1p2-supply:
description: VDD_RFA_1P2 supply regulator handle
vddrfa1p8-supply:
description: VDD_RFA_1P8 supply regulator handle
vddpcie0p9-supply:
description: VDD_PCIE_0P9 supply regulator handle
vddpcie1p8-supply:
description: VDD_PCIE_1P8 supply regulator handle
required:
- compatible
- reg
- vddaon-supply
- vddwlcx-supply
- vddwlmx-supply
- vddrfacmn-supply
- vddrfa0p8-supply
- vddrfa1p2-supply
- vddrfa1p8-supply
- vddpcie0p9-supply
- vddpcie1p8-supply
additionalProperties: false
examples:
- |
#include <dt-bindings/clock/qcom,rpmh.h>
#include <dt-bindings/gpio/gpio.h>
pcie {
#address-cells = <3>;
#size-cells = <2>;
pcie@0 {
device_type = "pci";
reg = <0x0 0x0 0x0 0x0 0x0>;
#address-cells = <3>;
#size-cells = <2>;
ranges;
bus-range = <0x01 0xff>;
wifi@0 {
compatible = "pci17cb,1107";
reg = <0x10000 0x0 0x0 0x0 0x0>;
vddaon-supply = <&vreg_pmu_aon_0p59>;
vddwlcx-supply = <&vreg_pmu_wlcx_0p8>;
vddwlmx-supply = <&vreg_pmu_wlmx_0p85>;
vddrfacmn-supply = <&vreg_pmu_rfa_cmn>;
vddrfa0p8-supply = <&vreg_pmu_rfa_0p8>;
vddrfa1p2-supply = <&vreg_pmu_rfa_1p2>;
vddrfa1p8-supply = <&vreg_pmu_rfa_1p8>;
vddpcie0p9-supply = <&vreg_pmu_pcie_0p9>;
vddpcie1p8-supply = <&vreg_pmu_pcie_1p8>;
};
};
};


@ -1877,8 +1877,7 @@ static void ath11k_dp_rx_h_csum_offload(struct ath11k *ar, struct sk_buff *msdu)
CHECKSUM_NONE : CHECKSUM_UNNECESSARY;
}
static int ath11k_dp_rx_crypto_mic_len(struct ath11k *ar,
enum hal_encrypt_type enctype)
int ath11k_dp_rx_crypto_mic_len(struct ath11k *ar, enum hal_encrypt_type enctype)
{
switch (enctype) {
case HAL_ENCRYPT_TYPE_OPEN:


@ -1,6 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
* Copyright (c) 2024 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef ATH11K_DP_RX_H
#define ATH11K_DP_RX_H
@ -95,4 +96,6 @@ int ath11k_peer_rx_frag_setup(struct ath11k *ar, const u8 *peer_mac, int vdev_id
int ath11k_dp_rx_pktlog_start(struct ath11k_base *ab);
int ath11k_dp_rx_pktlog_stop(struct ath11k_base *ab, bool stop_timer);
int ath11k_dp_rx_crypto_mic_len(struct ath11k *ar, enum hal_encrypt_type enctype);
#endif /* ATH11K_DP_RX_H */


@ -353,8 +353,12 @@ ath11k_dp_tx_htt_tx_complete_buf(struct ath11k_base *ab,
if (ts->acked) {
if (!(info->flags & IEEE80211_TX_CTL_NO_ACK)) {
info->flags |= IEEE80211_TX_STAT_ACK;
info->status.ack_signal = ATH11K_DEFAULT_NOISE_FLOOR +
ts->ack_rssi;
info->status.ack_signal = ts->ack_rssi;
if (!test_bit(WMI_TLV_SERVICE_HW_DB2DBM_CONVERSION_SUPPORT,
ab->wmi_ab.svc_map))
info->status.ack_signal += ATH11K_DEFAULT_NOISE_FLOOR;
info->status.flags |=
IEEE80211_TX_STATUS_ACK_SIGNAL_VALID;
} else {
@ -584,8 +588,12 @@ static void ath11k_dp_tx_complete_msdu(struct ath11k *ar,
if (ts->status == HAL_WBM_TQM_REL_REASON_FRAME_ACKED &&
!(info->flags & IEEE80211_TX_CTL_NO_ACK)) {
info->flags |= IEEE80211_TX_STAT_ACK;
info->status.ack_signal = ATH11K_DEFAULT_NOISE_FLOOR +
ts->ack_rssi;
info->status.ack_signal = ts->ack_rssi;
if (!test_bit(WMI_TLV_SERVICE_HW_DB2DBM_CONVERSION_SUPPORT,
ab->wmi_ab.svc_map))
info->status.ack_signal += ATH11K_DEFAULT_NOISE_FLOOR;
info->status.flags |= IEEE80211_TX_STATUS_ACK_SIGNAL_VALID;
}


@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
* Copyright (c) 2021, 2023 Qualcomm Innovation Center, Inc. All rights reserved.
* Copyright (c) 2021, 2023-2024 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef ATH11K_DP_TX_H
@ -13,7 +13,7 @@
struct ath11k_dp_htt_wbm_tx_status {
u32 msdu_id;
bool acked;
int ack_rssi;
s8 ack_rssi;
u16 peer_id;
};


@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
* Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved.
* Copyright (c) 2022, 2024 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef ATH11K_HAL_TX_H
@ -54,7 +54,7 @@ struct hal_tx_info {
struct hal_tx_status {
enum hal_wbm_rel_src_module buf_rel_source;
enum hal_wbm_tqm_rel_reason status;
u8 ack_rssi;
s8 ack_rssi;
u32 flags; /* %HAL_TX_STATUS_FLAGS_ */
u32 ppdu_id;
u8 try_cnt;


@ -4229,6 +4229,7 @@ static int ath11k_install_key(struct ath11k_vif *arvif,
switch (key->cipher) {
case WLAN_CIPHER_SUITE_CCMP:
case WLAN_CIPHER_SUITE_CCMP_256:
arg.key_cipher = WMI_CIPHER_AES_CCM;
/* TODO: Re-check if flag is valid */
key->flags |= IEEE80211_KEY_FLAG_GENERATE_IV_MGMT;
@ -4238,12 +4239,10 @@ static int ath11k_install_key(struct ath11k_vif *arvif,
arg.key_txmic_len = 8;
arg.key_rxmic_len = 8;
break;
case WLAN_CIPHER_SUITE_CCMP_256:
arg.key_cipher = WMI_CIPHER_AES_CCM;
break;
case WLAN_CIPHER_SUITE_GCMP:
case WLAN_CIPHER_SUITE_GCMP_256:
arg.key_cipher = WMI_CIPHER_AES_GCM;
key->flags |= IEEE80211_KEY_FLAG_GENERATE_IV_MGMT;
break;
default:
ath11k_warn(ar->ab, "cipher %d is not supported\n", key->cipher);
@ -5903,7 +5902,10 @@ static int ath11k_mac_mgmt_tx_wmi(struct ath11k *ar, struct ath11k_vif *arvif,
{
struct ath11k_base *ab = ar->ab;
struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
struct ath11k_skb_cb *skb_cb = ATH11K_SKB_CB(skb);
struct ieee80211_tx_info *info;
enum hal_encrypt_type enctype;
unsigned int mic_len;
dma_addr_t paddr;
int buf_id;
int ret;
@ -5927,7 +5929,12 @@ static int ath11k_mac_mgmt_tx_wmi(struct ath11k *ar, struct ath11k_vif *arvif,
ieee80211_is_deauth(hdr->frame_control) ||
ieee80211_is_disassoc(hdr->frame_control)) &&
ieee80211_has_protected(hdr->frame_control)) {
skb_put(skb, IEEE80211_CCMP_MIC_LEN);
if (!(skb_cb->flags & ATH11K_SKB_CIPHER_SET))
ath11k_warn(ab, "WMI management tx frame without ATH11K_SKB_CIPHER_SET");
enctype = ath11k_dp_tx_get_encrypt_type(skb_cb->cipher);
mic_len = ath11k_dp_rx_crypto_mic_len(ar, enctype);
skb_put(skb, mic_len);
}
}
@ -8977,8 +8984,11 @@ static void ath11k_mac_op_sta_statistics(struct ieee80211_hw *hw,
sinfo->filled |= BIT_ULL(NL80211_STA_INFO_SIGNAL);
}
sinfo->signal_avg = ewma_avg_rssi_read(&arsta->avg_rssi) +
ATH11K_DEFAULT_NOISE_FLOOR;
sinfo->signal_avg = ewma_avg_rssi_read(&arsta->avg_rssi);
if (!db2dbm)
sinfo->signal_avg += ATH11K_DEFAULT_NOISE_FLOOR;
sinfo->filled |= BIT_ULL(NL80211_STA_INFO_SIGNAL_AVG);
}
@ -9020,7 +9030,12 @@ static void ath11k_mac_op_ipv6_changed(struct ieee80211_hw *hw,
offload = &arvif->arp_ns_offload;
count = 0;
/* Note: read_lock_bh() calls rcu_read_lock() */
/* The _ipv6_changed() is called with RCU lock already held in
* atomic_notifier_call_chain(), so we don't need to call
* rcu_read_lock() again here. But note that with CONFIG_PREEMPT_RT
* enabled, read_lock_bh() also calls rcu_read_lock(). This is OK
* because RCU read critical section is allowed to get nested.
*/
read_lock_bh(&idev->lock);
memset(offload->ipv6_addr, 0, sizeof(offload->ipv6_addr));


@ -2859,7 +2859,7 @@ int ath11k_qmi_firmware_start(struct ath11k_base *ab,
int ath11k_qmi_fwreset_from_cold_boot(struct ath11k_base *ab)
{
int timeout;
long time_left;
if (!ath11k_core_coldboot_cal_support(ab) ||
ab->hw_params.cbcal_restart_fw == 0)
@ -2867,11 +2867,11 @@ int ath11k_qmi_fwreset_from_cold_boot(struct ath11k_base *ab)
ath11k_dbg(ab, ATH11K_DBG_QMI, "wait for cold boot done\n");
timeout = wait_event_timeout(ab->qmi.cold_boot_waitq,
(ab->qmi.cal_done == 1),
ATH11K_COLD_BOOT_FW_RESET_DELAY);
time_left = wait_event_timeout(ab->qmi.cold_boot_waitq,
(ab->qmi.cal_done == 1),
ATH11K_COLD_BOOT_FW_RESET_DELAY);
if (timeout <= 0) {
if (time_left <= 0) {
ath11k_warn(ab, "Coldboot Calibration timed out\n");
return -ETIMEDOUT;
}
@ -2886,7 +2886,7 @@ EXPORT_SYMBOL(ath11k_qmi_fwreset_from_cold_boot);
static int ath11k_qmi_process_coldboot_calibration(struct ath11k_base *ab)
{
int timeout;
long time_left;
int ret;
ret = ath11k_qmi_wlanfw_mode_send(ab, ATH11K_FIRMWARE_MODE_COLD_BOOT);
@ -2897,10 +2897,10 @@ static int ath11k_qmi_process_coldboot_calibration(struct ath11k_base *ab)
ath11k_dbg(ab, ATH11K_DBG_QMI, "Coldboot calibration wait started\n");
timeout = wait_event_timeout(ab->qmi.cold_boot_waitq,
(ab->qmi.cal_done == 1),
ATH11K_COLD_BOOT_FW_RESET_DELAY);
if (timeout <= 0) {
time_left = wait_event_timeout(ab->qmi.cold_boot_waitq,
(ab->qmi.cal_done == 1),
ATH11K_COLD_BOOT_FW_RESET_DELAY);
if (time_left <= 0) {
ath11k_warn(ab, "coldboot calibration timed out\n");
return 0;
}


@ -23,9 +23,10 @@ ath12k-y += core.o \
fw.o \
p2p.o
ath12k-$(CONFIG_ATH12K_DEBUGFS) += debugfs.o
ath12k-$(CONFIG_ATH12K_DEBUGFS) += debugfs.o debugfs_htt_stats.o
ath12k-$(CONFIG_ACPI) += acpi.o
ath12k-$(CONFIG_ATH12K_TRACING) += trace.o
ath12k-$(CONFIG_PM) += wow.o
# for tracing framework to find trace.h
CFLAGS_trace.o := -I$(src)


@ -391,4 +391,6 @@ void ath12k_acpi_stop(struct ath12k_base *ab)
acpi_remove_notify_handler(ACPI_HANDLE(ab->dev),
ACPI_DEVICE_NOTIFY,
ath12k_acpi_dsm_notify);
memset(&ab->acpi, 0, sizeof(ab->acpi));
}


@ -16,6 +16,7 @@
#include "hif.h"
#include "fw.h"
#include "debugfs.h"
#include "wow.h"
unsigned int ath12k_debug_mask;
module_param_named(debug_mask, ath12k_debug_mask, uint, 0644);
@ -42,13 +43,37 @@ static int ath12k_core_rfkill_config(struct ath12k_base *ab)
return ret;
}
/* Check if we need to continue with suspend/resume operation.
* Return:
* a negative value: an error happened; don't continue.
* 0: no error but don't continue.
* positive value: no error and do continue.
*/
static int ath12k_core_continue_suspend_resume(struct ath12k_base *ab)
{
struct ath12k *ar;
if (!ab->hw_params->supports_suspend)
return -EOPNOTSUPP;
/* So far, single_pdev_only chips have supports_suspend set to true,
* so pass 0 as a dummy pdev_id here.
*/
ar = ab->pdevs[0].ar;
if (!ar || !ar->ah || ar->ah->state != ATH12K_HW_STATE_OFF)
return 0;
return 1;
}
int ath12k_core_suspend(struct ath12k_base *ab)
{
struct ath12k *ar;
int ret, i;
if (!ab->hw_params->supports_suspend)
return -EOPNOTSUPP;
ret = ath12k_core_continue_suspend_resume(ab);
if (ret <= 0)
return ret;
for (i = 0; i < ab->num_radios; i++) {
ar = ab->pdevs[i].ar;
@ -80,8 +105,13 @@ EXPORT_SYMBOL(ath12k_core_suspend);
int ath12k_core_suspend_late(struct ath12k_base *ab)
{
if (!ab->hw_params->supports_suspend)
return -EOPNOTSUPP;
int ret;
ret = ath12k_core_continue_suspend_resume(ab);
if (ret <= 0)
return ret;
ath12k_acpi_stop(ab);
ath12k_hif_irq_disable(ab);
ath12k_hif_ce_irq_disable(ab);
@ -96,8 +126,9 @@ int ath12k_core_resume_early(struct ath12k_base *ab)
{
int ret;
if (!ab->hw_params->supports_suspend)
return -EOPNOTSUPP;
ret = ath12k_core_continue_suspend_resume(ab);
if (ret <= 0)
return ret;
reinit_completion(&ab->restart_completed);
ret = ath12k_hif_power_up(ab);
@ -111,9 +142,11 @@ EXPORT_SYMBOL(ath12k_core_resume_early);
int ath12k_core_resume(struct ath12k_base *ab)
{
long time_left;
int ret;
if (!ab->hw_params->supports_suspend)
return -EOPNOTSUPP;
ret = ath12k_core_continue_suspend_resume(ab);
if (ret <= 0)
return ret;
time_left = wait_for_completion_timeout(&ab->restart_completed,
ATH12K_RESET_TIMEOUT_HZ);
@ -1059,8 +1092,6 @@ static void ath12k_core_post_reconfigure_recovery(struct ath12k_base *ab)
mutex_unlock(&ar->conf_mutex);
}
/* Restart after all the link/radio halt */
ieee80211_restart_hw(ah->hw);
break;
case ATH12K_HW_STATE_OFF:
ath12k_warn(ab,
@ -1087,7 +1118,8 @@ static void ath12k_core_post_reconfigure_recovery(struct ath12k_base *ab)
static void ath12k_core_restart(struct work_struct *work)
{
struct ath12k_base *ab = container_of(work, struct ath12k_base, restart_work);
int ret;
struct ath12k_hw *ah;
int ret, i;
ret = ath12k_core_reconfigure_on_crash(ab);
if (ret) {
@ -1095,8 +1127,12 @@ static void ath12k_core_restart(struct work_struct *work)
return;
}
if (ab->is_reset)
complete_all(&ab->reconfigure_complete);
if (ab->is_reset) {
for (i = 0; i < ab->num_hw; i++) {
ah = ab->ah[i];
ieee80211_restart_hw(ah->hw);
}
}
complete(&ab->restart_completed);
}
@ -1150,20 +1186,14 @@ static void ath12k_core_reset(struct work_struct *work)
ath12k_dbg(ab, ATH12K_DBG_BOOT, "reset starting\n");
ab->is_reset = true;
atomic_set(&ab->recovery_start_count, 0);
reinit_completion(&ab->recovery_start);
atomic_set(&ab->recovery_count, 0);
ath12k_core_pre_reconfigure_recovery(ab);
reinit_completion(&ab->reconfigure_complete);
ath12k_core_post_reconfigure_recovery(ab);
ath12k_dbg(ab, ATH12K_DBG_BOOT, "waiting recovery start...\n");
time_left = wait_for_completion_timeout(&ab->recovery_start,
ATH12K_RECOVER_START_TIMEOUT_HZ);
ath12k_hif_irq_disable(ab);
ath12k_hif_ce_irq_disable(ab);
@ -1275,8 +1305,6 @@ struct ath12k_base *ath12k_core_alloc(struct device *dev, size_t priv_size,
mutex_init(&ab->core_lock);
spin_lock_init(&ab->base_lock);
init_completion(&ab->reset_complete);
init_completion(&ab->reconfigure_complete);
init_completion(&ab->recovery_start);
INIT_LIST_HEAD(&ab->peers);
init_waitqueue_head(&ab->peer_mapping_wq);
@ -1288,6 +1316,7 @@ struct ath12k_base *ath12k_core_alloc(struct device *dev, size_t priv_size,
timer_setup(&ab->rx_replenish_retry, ath12k_ce_rx_replenish_retry, 0);
init_completion(&ab->htc_suspend);
init_completion(&ab->restart_completed);
init_completion(&ab->wow.wakeup_completed);
ab->dev = dev;
ab->hif.bus = bus;


@ -28,6 +28,8 @@
#include "dbring.h"
#include "fw.h"
#include "acpi.h"
#include "wow.h"
#include "debugfs_htt_stats.h"
#define SM(_v, _f) (((_v) << _f##_LSB) & _f##_MASK)
@ -229,6 +231,13 @@ struct ath12k_vif_cache {
u32 bss_conf_changed;
};
struct ath12k_rekey_data {
u8 kck[NL80211_KCK_LEN];
u8 kek[NL80211_KEK_LEN];
u64 replay_ctr;
bool enable_offload;
};
struct ath12k_vif {
u32 vdev_id;
enum wmi_vdev_type vdev_type;
@ -285,6 +294,7 @@ struct ath12k_vif {
u32 punct_bitmap;
bool ps;
struct ath12k_vif_cache *cache;
struct ath12k_rekey_data rekey_data;
};
struct ath12k_vif_iter {
@ -472,8 +482,17 @@ struct ath12k_fw_stats {
struct list_head bcn;
};
struct ath12k_dbg_htt_stats {
enum ath12k_dbg_htt_ext_stats_type type;
u32 cfg_param[4];
u8 reset;
struct debug_htt_stats_req *stats_req;
};
struct ath12k_debug {
struct dentry *debugfs_pdev;
struct dentry *debugfs_pdev_symlink;
struct ath12k_dbg_htt_stats htt_stats;
};
struct ath12k_per_peer_tx_stats {
@ -606,6 +625,9 @@ struct ath12k {
struct work_struct wmi_mgmt_tx_work;
struct sk_buff_head wmi_mgmt_tx_queue;
struct ath12k_wow wow;
struct completion target_suspend;
bool target_suspend_ack;
struct ath12k_per_peer_tx_stats peer_tx_stats;
struct list_head ppdu_stats_info;
u32 ppdu_stat_list_depth;
@ -625,12 +647,12 @@ struct ath12k {
u32 freq_low;
u32 freq_high;
bool nlo_enabled;
};
struct ath12k_hw {
struct ieee80211_hw *hw;
struct ath12k_base *ab;
/* Protect the write operation of the hardware state ath12k_hw::state
* between hardware start<=>reconfigure<=>stop transitions.
*/
@ -766,6 +788,11 @@ struct ath12k_base {
const struct ath12k_hif_ops *ops;
} hif;
struct {
struct completion wakeup_completed;
u32 wmi_conf_rx_decap_mode;
} wow;
struct ath12k_ce ce;
struct timer_list rx_replenish_retry;
struct ath12k_hal hal;
@ -848,11 +875,8 @@ struct ath12k_base {
struct work_struct reset_work;
atomic_t reset_count;
atomic_t recovery_count;
atomic_t recovery_start_count;
bool is_reset;
struct completion reset_complete;
struct completion reconfigure_complete;
struct completion recovery_start;
/* continuous recovery fail count */
atomic_t fail_cont_count;
unsigned long reset_fail_timeout;


@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
* Copyright (c) 2021-2022, 2024 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef _ATH12K_DEBUG_H_
@ -25,6 +25,7 @@ enum ath12k_debug_mask {
ATH12K_DBG_PCI = 0x00001000,
ATH12K_DBG_DP_TX = 0x00002000,
ATH12K_DBG_DP_RX = 0x00004000,
ATH12K_DBG_WOW = 0x00008000,
ATH12K_DBG_ANY = 0xffffffff,
};


@ -6,6 +6,7 @@
#include "core.h"
#include "debugfs.h"
#include "debugfs_htt_stats.h"
static ssize_t ath12k_write_simulate_radar(struct file *file,
const char __user *user_buf,
@ -80,11 +81,27 @@ void ath12k_debugfs_register(struct ath12k *ar)
/* Create a symlink under ieee80211/phy* */
scnprintf(buf, sizeof(buf), "../../ath12k/%pd2", ar->debug.debugfs_pdev);
debugfs_create_symlink("ath12k", hw->wiphy->debugfsdir, buf);
ar->debug.debugfs_pdev_symlink = debugfs_create_symlink("ath12k",
hw->wiphy->debugfsdir,
buf);
if (ar->mac.sbands[NL80211_BAND_5GHZ].channels) {
debugfs_create_file("dfs_simulate_radar", 0200,
ar->debug.debugfs_pdev, ar,
&fops_simulate_radar);
}
ath12k_debugfs_htt_stats_register(ar);
}
void ath12k_debugfs_unregister(struct ath12k *ar)
{
if (!ar->debug.debugfs_pdev)
return;
/* Remove symlink under ieee80211/phy* */
debugfs_remove(ar->debug.debugfs_pdev_symlink);
debugfs_remove_recursive(ar->debug.debugfs_pdev);
ar->debug.debugfs_pdev_symlink = NULL;
ar->debug.debugfs_pdev = NULL;
}


@ -11,7 +11,7 @@
void ath12k_debugfs_soc_create(struct ath12k_base *ab);
void ath12k_debugfs_soc_destroy(struct ath12k_base *ab);
void ath12k_debugfs_register(struct ath12k *ar);
void ath12k_debugfs_unregister(struct ath12k *ar);
#else
static inline void ath12k_debugfs_soc_create(struct ath12k_base *ab)
{
@ -25,6 +25,10 @@ static inline void ath12k_debugfs_register(struct ath12k *ar)
{
}
static inline void ath12k_debugfs_unregister(struct ath12k *ar)
{
}
#endif /* CONFIG_ATH12K_DEBUGFS */
#endif /* _ATH12K_DEBUGFS_H_ */

(file diff suppressed because it is too large)


@ -0,0 +1,567 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef DEBUG_HTT_STATS_H
#define DEBUG_HTT_STATS_H
#define ATH12K_HTT_STATS_BUF_SIZE (1024 * 512)
#define ATH12K_HTT_STATS_COOKIE_LSB GENMASK_ULL(31, 0)
#define ATH12K_HTT_STATS_COOKIE_MSB GENMASK_ULL(63, 32)
#define ATH12K_HTT_STATS_MAGIC_VALUE 0xF0F0F0F0
#define ATH12K_HTT_STATS_SUBTYPE_MAX 16
#define ATH12K_HTT_MAX_STRING_LEN 256
#define ATH12K_HTT_STATS_RESET_BITMAP32_OFFSET(_idx) ((_idx) & 0x1f)
#define ATH12K_HTT_STATS_RESET_BITMAP64_OFFSET(_idx) ((_idx) & 0x3f)
#define ATH12K_HTT_STATS_RESET_BITMAP32_BIT(_idx) (1 << \
ATH12K_HTT_STATS_RESET_BITMAP32_OFFSET(_idx))
#define ATH12K_HTT_STATS_RESET_BITMAP64_BIT(_idx) (1 << \
ATH12K_HTT_STATS_RESET_BITMAP64_OFFSET(_idx))
void ath12k_debugfs_htt_stats_register(struct ath12k *ar);
#ifdef CONFIG_ATH12K_DEBUGFS
void ath12k_debugfs_htt_ext_stats_handler(struct ath12k_base *ab,
struct sk_buff *skb);
#else /* CONFIG_ATH12K_DEBUGFS */
static inline void ath12k_debugfs_htt_ext_stats_handler(struct ath12k_base *ab,
struct sk_buff *skb)
{
}
#endif
/**
* DOC: target -> host extended statistics upload
*
* The following field definitions describe the format of the HTT
* target to host stats upload confirmation message.
* The message contains a cookie echoed from the HTT host->target stats
* upload request, which identifies which request the confirmation is
* for. A single stats request can span multiple HTT stats indications
* due to the HTT message size limitation, so every HTT ext stats
* indication carries tag-length-value stats information elements.
* The tag-length header for each HTT stats IND message also includes a
* status field, to indicate whether the request for the stat type in
* question was fully met, partially met, unable to be met, or invalid
* (if the stat type in question is disabled in the target).
* A Done bit set to 1 indicates the end of the stats info elements.
*
*
* |31 16|15 12|11|10 8|7 5|4 0|
* |--------------------------------------------------------------|
* | reserved | msg type |
* |--------------------------------------------------------------|
* | cookie LSBs |
* |--------------------------------------------------------------|
* | cookie MSBs |
* |--------------------------------------------------------------|
* | stats entry length | rsvd | D| S | stat type |
* |--------------------------------------------------------------|
* | type-specific stats info |
* | (see debugfs_htt_stats.h) |
* |--------------------------------------------------------------|
* Header fields:
* - MSG_TYPE
* Bits 7:0
* Purpose: Identifies this as an extended statistics upload confirmation
* message.
* Value: 0x1c
* - COOKIE_LSBS
* Bits 31:0
* Purpose: Provide a mechanism to match a target->host stats confirmation
* message with its preceding host->target stats request message.
* Value: LSBs of the opaque cookie specified by the host-side requestor
* - COOKIE_MSBS
* Bits 31:0
* Purpose: Provide a mechanism to match a target->host stats confirmation
* message with its preceding host->target stats request message.
* Value: MSBs of the opaque cookie specified by the host-side requestor
*
* Stats Information Element tag-length header fields:
* - STAT_TYPE
* Bits 7:0
* Purpose: identifies the type of statistics info held in the
* following information element
* Value: ath12k_dbg_htt_ext_stats_type
* - STATUS
* Bits 10:8
* Purpose: indicate whether the requested stats are present
* Value:
* 0 -> The requested stats have been delivered in full
* 1 -> The requested stats have been delivered in part
* 2 -> The requested stats could not be delivered (error case)
* 3 -> The requested stat type is not recognized (invalid)
* - DONE
* Bits 11
* Purpose:
* Indicates the completion of the stats entry, this will be the last
* stats conf HTT segment for the requested stats type.
* Value:
* 0 -> the stats retrieval is ongoing
* 1 -> the stats retrieval is complete
* - LENGTH
* Bits 31:16
* Purpose: indicate the stats information size
* Value: This field specifies the number of bytes of stats information
* that follows the element tag-length header.
* It is expected but not required that this length is a multiple of
* 4 bytes.
*/
#define ATH12K_HTT_T2H_EXT_STATS_INFO1_DONE BIT(11)
#define ATH12K_HTT_T2H_EXT_STATS_INFO1_LENGTH GENMASK(31, 16)
struct ath12k_htt_extd_stats_msg {
__le32 info0;
__le64 cookie;
__le32 info1;
u8 data[];
} __packed;
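/* Illustrative-only sketch, not part of this patch: decoding the
 * confirmation header above with the masks it defines. The cookie
 * layout (magic value in the LSBs, pdev id in the MSBs) is an
 * assumption made for this sketch.
 */
static inline void
ath12k_htt_demo_decode_stats_conf(const struct ath12k_htt_extd_stats_msg *msg)
{
	u64 cookie = le64_to_cpu(msg->cookie);
	u32 pdev_id;
	u16 len;
	bool done;

	/* a confirmation is matched to its request via the echoed cookie */
	if (u64_get_bits(cookie, ATH12K_HTT_STATS_COOKIE_LSB) !=
	    ATH12K_HTT_STATS_MAGIC_VALUE)
		return;

	pdev_id = u64_get_bits(cookie, ATH12K_HTT_STATS_COOKIE_MSB);
	len = le32_get_bits(msg->info1, ATH12K_HTT_T2H_EXT_STATS_INFO1_LENGTH);
	done = le32_get_bits(msg->info1, ATH12K_HTT_T2H_EXT_STATS_INFO1_DONE);

	/* msg->data now holds 'len' bytes of TLV stats elements for
	 * pdev 'pdev_id'; 'done' set means the final segment arrived.
	 */
	(void)pdev_id; (void)len; (void)done;
}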
/* htt_dbg_ext_stats_type */
enum ath12k_dbg_htt_ext_stats_type {
ATH12K_DBG_HTT_EXT_STATS_RESET = 0,
ATH12K_DBG_HTT_EXT_STATS_PDEV_TX = 1,
ATH12K_DBG_HTT_EXT_STATS_PDEV_TX_SCHED = 4,
ATH12K_DBG_HTT_EXT_STATS_PDEV_ERROR = 5,
ATH12K_DBG_HTT_EXT_STATS_PDEV_TQM = 6,
/* keep this last */
ATH12K_DBG_HTT_NUM_EXT_STATS,
};
enum ath12k_dbg_htt_tlv_tag {
HTT_STATS_TX_PDEV_CMN_TAG = 0,
HTT_STATS_TX_PDEV_UNDERRUN_TAG = 1,
HTT_STATS_TX_PDEV_SIFS_TAG = 2,
HTT_STATS_TX_PDEV_FLUSH_TAG = 3,
HTT_STATS_TX_TQM_GEN_MPDU_TAG = 11,
HTT_STATS_TX_TQM_LIST_MPDU_TAG = 12,
HTT_STATS_TX_TQM_LIST_MPDU_CNT_TAG = 13,
HTT_STATS_TX_TQM_CMN_TAG = 14,
HTT_STATS_TX_TQM_PDEV_TAG = 15,
HTT_STATS_TX_PDEV_SCHEDULER_TXQ_STATS_TAG = 36,
HTT_STATS_TX_SCHED_CMN_TAG = 37,
HTT_STATS_SCHED_TXQ_CMD_POSTED_TAG = 39,
HTT_STATS_TX_TQM_ERROR_STATS_TAG = 43,
HTT_STATS_SCHED_TXQ_CMD_REAPED_TAG = 44,
HTT_STATS_HW_INTR_MISC_TAG = 54,
HTT_STATS_HW_PDEV_ERRS_TAG = 56,
HTT_STATS_WHAL_TX_TAG = 66,
HTT_STATS_TX_PDEV_SIFS_HIST_TAG = 67,
HTT_STATS_SCHED_TXQ_SCHED_ORDER_SU_TAG = 86,
HTT_STATS_SCHED_TXQ_SCHED_INELIGIBILITY_TAG = 87,
HTT_STATS_HW_WAR_TAG = 89,
HTT_STATS_SCHED_TXQ_SUPERCYCLE_TRIGGER_TAG = 100,
HTT_STATS_PDEV_CTRL_PATH_TX_STATS_TAG = 102,
HTT_STATS_MU_PPDU_DIST_TAG = 129,
HTT_STATS_MAX_TAG,
};
#define ATH12K_HTT_STATS_MAC_ID GENMASK(7, 0)
#define ATH12K_HTT_TX_PDEV_MAX_SIFS_BURST_STATS 9
#define ATH12K_HTT_TX_PDEV_MAX_FLUSH_REASON_STATS 150
/* MU MIMO distribution stats is a 2-dimensional array
* with dimension one denoting stats for nr4[0] or nr8[1]
*/
#define ATH12K_HTT_STATS_NUM_NR_BINS 2
#define ATH12K_HTT_STATS_MAX_NUM_MU_PPDU_PER_BURST 10
#define ATH12K_HTT_TX_PDEV_MAX_SIFS_BURST_HIST_STATS 10
#define ATH12K_HTT_STATS_MAX_NUM_SCHED_STATUS 9
#define ATH12K_HTT_STATS_NUM_SCHED_STATUS_WORDS \
(ATH12K_HTT_STATS_NUM_NR_BINS * ATH12K_HTT_STATS_MAX_NUM_SCHED_STATUS)
#define ATH12K_HTT_STATS_MU_PPDU_PER_BURST_WORDS \
(ATH12K_HTT_STATS_NUM_NR_BINS * ATH12K_HTT_STATS_MAX_NUM_MU_PPDU_PER_BURST)
enum ath12k_htt_tx_pdev_underrun_enum {
HTT_STATS_TX_PDEV_NO_DATA_UNDERRUN = 0,
HTT_STATS_TX_PDEV_DATA_UNDERRUN_BETWEEN_MPDU = 1,
HTT_STATS_TX_PDEV_DATA_UNDERRUN_WITHIN_MPDU = 2,
HTT_TX_PDEV_MAX_URRN_STATS = 3,
};
enum ath12k_htt_stats_reset_cfg_param_alloc_pos {
ATH12K_HTT_STATS_RESET_PARAM_CFG_32_BYTES = 1,
ATH12K_HTT_STATS_RESET_PARAM_CFG_64_BYTES,
ATH12K_HTT_STATS_RESET_PARAM_CFG_128_BYTES,
};
struct debug_htt_stats_req {
bool done;
bool override_cfg_param;
u8 pdev_id;
enum ath12k_dbg_htt_ext_stats_type type;
u32 cfg_param[4];
u8 peer_addr[ETH_ALEN];
struct completion htt_stats_rcvd;
u32 buf_len;
u8 buf[];
};
struct ath12k_htt_tx_pdev_stats_cmn_tlv {
__le32 mac_id__word;
__le32 hw_queued;
__le32 hw_reaped;
__le32 underrun;
__le32 hw_paused;
__le32 hw_flush;
__le32 hw_filt;
__le32 tx_abort;
__le32 mpdu_requed;
__le32 tx_xretry;
__le32 data_rc;
__le32 mpdu_dropped_xretry;
__le32 illgl_rate_phy_err;
__le32 cont_xretry;
__le32 tx_timeout;
__le32 pdev_resets;
__le32 phy_underrun;
__le32 txop_ovf;
__le32 seq_posted;
__le32 seq_failed_queueing;
__le32 seq_completed;
__le32 seq_restarted;
__le32 mu_seq_posted;
__le32 seq_switch_hw_paused;
__le32 next_seq_posted_dsr;
__le32 seq_posted_isr;
__le32 seq_ctrl_cached;
__le32 mpdu_count_tqm;
__le32 msdu_count_tqm;
__le32 mpdu_removed_tqm;
__le32 msdu_removed_tqm;
__le32 mpdus_sw_flush;
__le32 mpdus_hw_filter;
__le32 mpdus_truncated;
__le32 mpdus_ack_failed;
__le32 mpdus_expired;
__le32 mpdus_seq_hw_retry;
__le32 ack_tlv_proc;
__le32 coex_abort_mpdu_cnt_valid;
__le32 coex_abort_mpdu_cnt;
__le32 num_total_ppdus_tried_ota;
__le32 num_data_ppdus_tried_ota;
__le32 local_ctrl_mgmt_enqued;
__le32 local_ctrl_mgmt_freed;
__le32 local_data_enqued;
__le32 local_data_freed;
__le32 mpdu_tried;
__le32 isr_wait_seq_posted;
__le32 tx_active_dur_us_low;
__le32 tx_active_dur_us_high;
__le32 remove_mpdus_max_retries;
__le32 comp_delivered;
__le32 ppdu_ok;
__le32 self_triggers;
__le32 tx_time_dur_data;
__le32 seq_qdepth_repost_stop;
__le32 mu_seq_min_msdu_repost_stop;
__le32 seq_min_msdu_repost_stop;
__le32 seq_txop_repost_stop;
__le32 next_seq_cancel;
__le32 fes_offsets_err_cnt;
__le32 num_mu_peer_blacklisted;
__le32 mu_ofdma_seq_posted;
__le32 ul_mumimo_seq_posted;
__le32 ul_ofdma_seq_posted;
__le32 thermal_suspend_cnt;
__le32 dfs_suspend_cnt;
__le32 tx_abort_suspend_cnt;
__le32 tgt_specific_opaque_txq_suspend_info;
__le32 last_suspend_reason;
} __packed;
struct ath12k_htt_tx_pdev_stats_urrn_tlv {
DECLARE_FLEX_ARRAY(__le32, urrn_stats);
} __packed;
struct ath12k_htt_tx_pdev_stats_flush_tlv {
DECLARE_FLEX_ARRAY(__le32, flush_errs);
} __packed;
struct ath12k_htt_tx_pdev_stats_phy_err_tlv {
DECLARE_FLEX_ARRAY(__le32, phy_errs);
} __packed;
struct ath12k_htt_tx_pdev_stats_sifs_tlv {
DECLARE_FLEX_ARRAY(__le32, sifs_status);
} __packed;
struct ath12k_htt_pdev_ctrl_path_tx_stats_tlv {
__le32 fw_tx_mgmt_subtype[ATH12K_HTT_STATS_SUBTYPE_MAX];
} __packed;
struct ath12k_htt_tx_pdev_stats_sifs_hist_tlv {
DECLARE_FLEX_ARRAY(__le32, sifs_hist_status);
} __packed;
enum ath12k_htt_stats_hw_mode {
ATH12K_HTT_STATS_HWMODE_AC = 0,
ATH12K_HTT_STATS_HWMODE_AX = 1,
ATH12K_HTT_STATS_HWMODE_BE = 2,
};
struct ath12k_htt_tx_pdev_mu_ppdu_dist_stats_tlv {
__le32 hw_mode;
__le32 num_seq_term_status[ATH12K_HTT_STATS_NUM_SCHED_STATUS_WORDS];
__le32 num_ppdu_cmpl_per_burst[ATH12K_HTT_STATS_MU_PPDU_PER_BURST_WORDS];
__le32 num_seq_posted[ATH12K_HTT_STATS_NUM_NR_BINS];
__le32 num_ppdu_posted_per_burst[ATH12K_HTT_STATS_MU_PPDU_PER_BURST_WORDS];
} __packed;
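/* Illustrative-only helper, not part of this patch: the per-NR-bin
 * arrays above are flattened 2-D arrays, so element [bin][burst] lives
 * at bin * ATH12K_HTT_STATS_MAX_NUM_MU_PPDU_PER_BURST + burst, with
 * bin 0 covering NR4 and bin 1 covering NR8 per the comment above.
 */
static inline u32
ath12k_htt_demo_mu_ppdu_cmpl(const struct ath12k_htt_tx_pdev_mu_ppdu_dist_stats_tlv *tlv,
			     int bin, int burst)
{
	int idx = bin * ATH12K_HTT_STATS_MAX_NUM_MU_PPDU_PER_BURST + burst;

	return le32_to_cpu(tlv->num_ppdu_cmpl_per_burst[idx]);
}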
#define ATH12K_HTT_TX_PDEV_STATS_SCHED_PER_TXQ_MAC_ID GENMASK(7, 0)
#define ATH12K_HTT_TX_PDEV_STATS_SCHED_PER_TXQ_ID GENMASK(15, 8)
#define ATH12K_HTT_TX_PDEV_NUM_SCHED_ORDER_LOG 20
struct ath12k_htt_stats_tx_sched_cmn_tlv {
__le32 mac_id__word;
__le32 current_timestamp;
} __packed;
struct ath12k_htt_tx_pdev_stats_sched_per_txq_tlv {
__le32 mac_id__word;
__le32 sched_policy;
__le32 last_sched_cmd_posted_timestamp;
__le32 last_sched_cmd_compl_timestamp;
__le32 sched_2_tac_lwm_count;
__le32 sched_2_tac_ring_full;
__le32 sched_cmd_post_failure;
__le32 num_active_tids;
__le32 num_ps_schedules;
__le32 sched_cmds_pending;
__le32 num_tid_register;
__le32 num_tid_unregister;
__le32 num_qstats_queried;
__le32 qstats_update_pending;
__le32 last_qstats_query_timestamp;
__le32 num_tqm_cmdq_full;
__le32 num_de_sched_algo_trigger;
__le32 num_rt_sched_algo_trigger;
__le32 num_tqm_sched_algo_trigger;
__le32 notify_sched;
__le32 dur_based_sendn_term;
__le32 su_notify2_sched;
__le32 su_optimal_queued_msdus_sched;
__le32 su_delay_timeout_sched;
__le32 su_min_txtime_sched_delay;
__le32 su_no_delay;
__le32 num_supercycles;
__le32 num_subcycles_with_sort;
__le32 num_subcycles_no_sort;
} __packed;
struct ath12k_htt_sched_txq_cmd_posted_tlv {
DECLARE_FLEX_ARRAY(__le32, sched_cmd_posted);
} __packed;
struct ath12k_htt_sched_txq_cmd_reaped_tlv {
DECLARE_FLEX_ARRAY(__le32, sched_cmd_reaped);
} __packed;
struct ath12k_htt_sched_txq_sched_order_su_tlv {
DECLARE_FLEX_ARRAY(__le32, sched_order_su);
} __packed;
struct ath12k_htt_sched_txq_sched_ineligibility_tlv {
DECLARE_FLEX_ARRAY(__le32, sched_ineligibility);
} __packed;
enum ath12k_htt_sched_txq_supercycle_triggers_tlv_enum {
ATH12K_HTT_SCHED_SUPERCYCLE_TRIGGER_NONE = 0,
ATH12K_HTT_SCHED_SUPERCYCLE_TRIGGER_FORCED,
ATH12K_HTT_SCHED_SUPERCYCLE_TRIGGER_LESS_NUM_TIDQ_ENTRIES,
ATH12K_HTT_SCHED_SUPERCYCLE_TRIGGER_LESS_NUM_ACTIVE_TIDS,
ATH12K_HTT_SCHED_SUPERCYCLE_TRIGGER_MAX_ITR_REACHED,
ATH12K_HTT_SCHED_SUPERCYCLE_TRIGGER_DUR_THRESHOLD_REACHED,
ATH12K_HTT_SCHED_SUPERCYCLE_TRIGGER_TWT_TRIGGER,
ATH12K_HTT_SCHED_SUPERCYCLE_TRIGGER_MAX,
};
struct ath12k_htt_sched_txq_supercycle_triggers_tlv {
DECLARE_FLEX_ARRAY(__le32, supercycle_triggers);
} __packed;
struct ath12k_htt_hw_stats_pdev_errs_tlv {
__le32 mac_id__word;
__le32 tx_abort;
__le32 tx_abort_fail_count;
__le32 rx_abort;
__le32 rx_abort_fail_count;
__le32 warm_reset;
__le32 cold_reset;
__le32 tx_flush;
__le32 tx_glb_reset;
__le32 tx_txq_reset;
__le32 rx_timeout_reset;
__le32 mac_cold_reset_restore_cal;
__le32 mac_cold_reset;
__le32 mac_warm_reset;
__le32 mac_only_reset;
__le32 phy_warm_reset;
__le32 phy_warm_reset_ucode_trig;
__le32 mac_warm_reset_restore_cal;
__le32 mac_sfm_reset;
__le32 phy_warm_reset_m3_ssr;
__le32 phy_warm_reset_reason_phy_m3;
__le32 phy_warm_reset_reason_tx_hw_stuck;
__le32 phy_warm_reset_reason_num_rx_frame_stuck;
__le32 phy_warm_reset_reason_wal_rx_rec_rx_busy;
__le32 phy_warm_reset_reason_wal_rx_rec_mac_hng;
__le32 phy_warm_reset_reason_mac_conv_phy_reset;
__le32 wal_rx_recovery_rst_mac_hang_cnt;
__le32 wal_rx_recovery_rst_known_sig_cnt;
__le32 wal_rx_recovery_rst_no_rx_cnt;
__le32 wal_rx_recovery_rst_no_rx_consec_cnt;
__le32 wal_rx_recovery_rst_rx_busy_cnt;
__le32 wal_rx_recovery_rst_phy_mac_hang_cnt;
__le32 rx_flush_cnt;
__le32 phy_warm_reset_reason_tx_exp_cca_stuck;
__le32 phy_warm_reset_reason_tx_consec_flsh_war;
__le32 phy_warm_reset_reason_tx_hwsch_reset_war;
__le32 phy_warm_reset_reason_hwsch_cca_wdog_war;
__le32 fw_rx_rings_reset;
__le32 rx_dest_drain_rx_descs_leak_prevented;
__le32 rx_dest_drain_rx_descs_saved_cnt;
__le32 rx_dest_drain_rxdma2reo_leak_detected;
__le32 rx_dest_drain_rxdma2fw_leak_detected;
__le32 rx_dest_drain_rxdma2wbm_leak_detected;
__le32 rx_dest_drain_rxdma1_2sw_leak_detected;
__le32 rx_dest_drain_rx_drain_ok_mac_idle;
__le32 rx_dest_drain_ok_mac_not_idle;
__le32 rx_dest_drain_prerequisite_invld;
__le32 rx_dest_drain_skip_non_lmac_reset;
__le32 rx_dest_drain_hw_fifo_notempty_post_wait;
} __packed;
#define ATH12K_HTT_STATS_MAX_HW_INTR_NAME_LEN 8
struct ath12k_htt_hw_stats_intr_misc_tlv {
u8 hw_intr_name[ATH12K_HTT_STATS_MAX_HW_INTR_NAME_LEN];
__le32 mask;
__le32 count;
} __packed;
struct ath12k_htt_hw_stats_whal_tx_tlv {
__le32 mac_id__word;
__le32 last_unpause_ppdu_id;
__le32 hwsch_unpause_wait_tqm_write;
__le32 hwsch_dummy_tlv_skipped;
__le32 hwsch_misaligned_offset_received;
__le32 hwsch_reset_count;
__le32 hwsch_dev_reset_war;
__le32 hwsch_delayed_pause;
__le32 hwsch_long_delayed_pause;
__le32 sch_rx_ppdu_no_response;
__le32 sch_selfgen_response;
__le32 sch_rx_sifs_resp_trigger;
} __packed;
struct ath12k_htt_hw_war_stats_tlv {
__le32 mac_id__word;
DECLARE_FLEX_ARRAY(__le32, hw_wars);
} __packed;
struct ath12k_htt_tx_tqm_cmn_stats_tlv {
__le32 mac_id__word;
__le32 max_cmdq_id;
__le32 list_mpdu_cnt_hist_intvl;
__le32 add_msdu;
__le32 q_empty;
__le32 q_not_empty;
__le32 drop_notification;
__le32 desc_threshold;
__le32 hwsch_tqm_invalid_status;
__le32 missed_tqm_gen_mpdus;
__le32 tqm_active_tids;
__le32 tqm_inactive_tids;
__le32 tqm_active_msduq_flows;
__le32 msduq_timestamp_updates;
__le32 msduq_updates_mpdu_head_info_cmd;
__le32 msduq_updates_emp_to_nonemp_status;
__le32 get_mpdu_head_info_cmds_by_query;
__le32 get_mpdu_head_info_cmds_by_tac;
__le32 gen_mpdu_cmds_by_query;
__le32 high_prio_q_not_empty;
} __packed;
struct ath12k_htt_tx_tqm_error_stats_tlv {
__le32 q_empty_failure;
__le32 q_not_empty_failure;
__le32 add_msdu_failure;
__le32 tqm_cache_ctl_err;
__le32 tqm_soft_reset;
__le32 tqm_reset_num_in_use_link_descs;
__le32 tqm_reset_num_lost_link_descs;
__le32 tqm_reset_num_lost_host_tx_buf_cnt;
__le32 tqm_reset_num_in_use_internal_tqm;
__le32 tqm_reset_num_in_use_idle_link_rng;
__le32 tqm_reset_time_to_tqm_hang_delta_ms;
__le32 tqm_reset_recovery_time_ms;
__le32 tqm_reset_num_peers_hdl;
__le32 tqm_reset_cumm_dirty_hw_mpduq_cnt;
__le32 tqm_reset_cumm_dirty_hw_msduq_proc;
__le32 tqm_reset_flush_cache_cmd_su_cnt;
__le32 tqm_reset_flush_cache_cmd_other_cnt;
__le32 tqm_reset_flush_cache_cmd_trig_type;
__le32 tqm_reset_flush_cache_cmd_trig_cfg;
__le32 tqm_reset_flush_cmd_skp_status_null;
} __packed;
struct ath12k_htt_tx_tqm_gen_mpdu_stats_tlv {
DECLARE_FLEX_ARRAY(__le32, gen_mpdu_end_reason);
} __packed;
#define ATH12K_HTT_TX_TQM_MAX_LIST_MPDU_END_REASON 16
#define ATH12K_HTT_TX_TQM_MAX_LIST_MPDU_CNT_HISTOGRAM_BINS 16
struct ath12k_htt_tx_tqm_list_mpdu_stats_tlv {
DECLARE_FLEX_ARRAY(__le32, list_mpdu_end_reason);
} __packed;
struct ath12k_htt_tx_tqm_list_mpdu_cnt_tlv {
DECLARE_FLEX_ARRAY(__le32, list_mpdu_cnt_hist);
} __packed;
struct ath12k_htt_tx_tqm_pdev_stats_tlv {
__le32 msdu_count;
__le32 mpdu_count;
__le32 remove_msdu;
__le32 remove_mpdu;
__le32 remove_msdu_ttl;
__le32 send_bar;
__le32 bar_sync;
__le32 notify_mpdu;
__le32 sync_cmd;
__le32 write_cmd;
__le32 hwsch_trigger;
__le32 ack_tlv_proc;
__le32 gen_mpdu_cmd;
__le32 gen_list_cmd;
__le32 remove_mpdu_cmd;
__le32 remove_mpdu_tried_cmd;
__le32 mpdu_queue_stats_cmd;
__le32 mpdu_head_info_cmd;
__le32 msdu_flow_stats_cmd;
__le32 remove_msdu_cmd;
__le32 remove_msdu_ttl_cmd;
__le32 flush_cache_cmd;
__le32 update_mpduq_cmd;
__le32 enqueue;
__le32 enqueue_notify;
__le32 notify_mpdu_at_head;
__le32 notify_mpdu_state_valid;
__le32 sched_udp_notify1;
__le32 sched_udp_notify2;
__le32 sched_nonudp_notify1;
__le32 sched_nonudp_notify2;
} __packed;
#endif


@ -333,6 +333,7 @@ struct ath12k_dp {
struct dp_srng reo_except_ring;
struct dp_srng reo_cmd_ring;
struct dp_srng reo_status_ring;
enum ath12k_peer_metadata_version peer_metadata_ver;
struct dp_srng reo_dst_ring[DP_REO_DST_RING_MAX];
struct dp_tx_ring tx_ring[DP_TCL_NUM_RING_MAX];
struct hal_wbm_idle_scatter_list scatter_list[DP_IDLE_SCATTER_BUFS_MAX];


@ -17,6 +17,7 @@
#include "dp_tx.h"
#include "peer.h"
#include "dp_mon.h"
#include "debugfs_htt_stats.h"
#define ATH12K_DP_RX_FRAGMENT_TIMEOUT_MS (2 * HZ)
@ -1268,10 +1269,10 @@ static int ath12k_htt_tlv_ppdu_stats_parse(struct ath12k_base *ab,
return 0;
}
static int ath12k_dp_htt_tlv_iter(struct ath12k_base *ab, const void *ptr, size_t len,
int (*iter)(struct ath12k_base *ar, u16 tag, u16 len,
const void *ptr, void *data),
void *data)
int ath12k_dp_htt_tlv_iter(struct ath12k_base *ab, const void *ptr, size_t len,
int (*iter)(struct ath12k_base *ar, u16 tag, u16 len,
const void *ptr, void *data),
void *data)
{
const struct htt_tlv *tlv;
const void *begin = ptr;
@ -1741,6 +1742,7 @@ void ath12k_dp_htt_htc_t2h_msg_handler(struct ath12k_base *ab,
ath12k_htt_pull_ppdu_stats(ab, skb);
break;
case HTT_T2H_MSG_TYPE_EXT_STATS_CONF:
ath12k_debugfs_htt_ext_stats_handler(ab, skb);
break;
case HTT_T2H_MSG_TYPE_MLO_TIMESTAMP_OFFSET_IND:
ath12k_htt_mlo_offset_event_handler(ab, skb);
@ -2583,6 +2585,29 @@ static void ath12k_dp_rx_process_received_packets(struct ath12k_base *ab,
rcu_read_unlock();
}
static u16 ath12k_dp_rx_get_peer_id(struct ath12k_base *ab,
enum ath12k_peer_metadata_version ver,
__le32 peer_metadata)
{
switch (ver) {
default:
ath12k_warn(ab, "Unknown peer metadata version: %d", ver);
fallthrough;
case ATH12K_PEER_METADATA_V0:
return le32_get_bits(peer_metadata,
RX_MPDU_DESC_META_DATA_V0_PEER_ID);
case ATH12K_PEER_METADATA_V1:
return le32_get_bits(peer_metadata,
RX_MPDU_DESC_META_DATA_V1_PEER_ID);
case ATH12K_PEER_METADATA_V1A:
return le32_get_bits(peer_metadata,
RX_MPDU_DESC_META_DATA_V1A_PEER_ID);
case ATH12K_PEER_METADATA_V1B:
return le32_get_bits(peer_metadata,
RX_MPDU_DESC_META_DATA_V1B_PEER_ID);
}
}
int ath12k_dp_rx_process(struct ath12k_base *ab, int ring_id,
struct napi_struct *napi, int budget)
{
@ -2611,6 +2636,8 @@ int ath12k_dp_rx_process(struct ath12k_base *ab, int ring_id,
ath12k_hal_srng_access_begin(ab, srng);
while ((desc = ath12k_hal_srng_dst_get_next_entry(ab, srng))) {
struct rx_mpdu_desc *mpdu_info;
struct rx_msdu_desc *msdu_info;
enum hal_reo_dest_ring_push_reason push_reason;
u32 cookie;
@ -2658,16 +2685,19 @@ int ath12k_dp_rx_process(struct ath12k_base *ab, int ring_id,
continue;
}
rxcb->is_first_msdu = !!(le32_to_cpu(desc->rx_msdu_info.info0) &
msdu_info = &desc->rx_msdu_info;
mpdu_info = &desc->rx_mpdu_info;
rxcb->is_first_msdu = !!(le32_to_cpu(msdu_info->info0) &
RX_MSDU_DESC_INFO0_FIRST_MSDU_IN_MPDU);
rxcb->is_last_msdu = !!(le32_to_cpu(desc->rx_msdu_info.info0) &
rxcb->is_last_msdu = !!(le32_to_cpu(msdu_info->info0) &
RX_MSDU_DESC_INFO0_LAST_MSDU_IN_MPDU);
rxcb->is_continuation = !!(le32_to_cpu(desc->rx_msdu_info.info0) &
rxcb->is_continuation = !!(le32_to_cpu(msdu_info->info0) &
RX_MSDU_DESC_INFO0_MSDU_CONTINUATION);
rxcb->mac_id = mac_id;
rxcb->peer_id = le32_get_bits(desc->rx_mpdu_info.peer_meta_data,
RX_MPDU_DESC_META_DATA_PEER_ID);
rxcb->tid = le32_get_bits(desc->rx_mpdu_info.info0,
rxcb->peer_id = ath12k_dp_rx_get_peer_id(ab, dp->peer_metadata_ver,
mpdu_info->peer_meta_data);
rxcb->tid = le32_get_bits(mpdu_info->info0,
RX_MPDU_DESC_INFO0_TID);
__skb_queue_tail(&msdu_list, msdu);
@ -3402,7 +3432,7 @@ int ath12k_dp_rx_process_err(struct ath12k_base *ab, struct napi_struct *napi,
struct ath12k *ar;
dma_addr_t paddr;
bool is_frag;
bool drop = false;
bool drop;
int pdev_id;
tot_n_bufs_reaped = 0;
@ -3420,7 +3450,9 @@ int ath12k_dp_rx_process_err(struct ath12k_base *ab, struct napi_struct *napi,
while (budget &&
(reo_desc = ath12k_hal_srng_dst_get_next_entry(ab, srng))) {
drop = false;
ab->soc_stats.err_ring_pkts++;
ret = ath12k_hal_desc_reo_parse_err(ab, reo_desc, &paddr,
&desc_bank);
if (ret) {


@ -139,4 +139,8 @@ ath12k_dp_rx_h_find_peer(struct ath12k_base *ab, struct sk_buff *msdu);
int ath12k_dp_rxdma_ring_sel_config_qcn9274(struct ath12k_base *ab);
int ath12k_dp_rxdma_ring_sel_config_wcn7850(struct ath12k_base *ab);
int ath12k_dp_htt_tlv_iter(struct ath12k_base *ab, const void *ptr, size_t len,
int (*iter)(struct ath12k_base *ar, u16 tag, u16 len,
const void *ptr, void *data),
void *data);
#endif /* ATH12K_DP_RX_H */
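A hypothetical callback for the now-exported ath12k_dp_htt_tlv_iter() (a sketch under the declaration above, not code from this series) might pull one of the TLVs defined in debugfs_htt_stats.h out of a stats payload, for example the common TX pdev stats:

static int demo_htt_tlv_cb(struct ath12k_base *ab, u16 tag, u16 len,
			   const void *ptr, void *data)
{
	const struct ath12k_htt_tx_pdev_stats_cmn_tlv *cmn;

	if (tag != HTT_STATS_TX_PDEV_CMN_TAG)
		return 0;	/* not the TLV we want, keep going */

	if (len < sizeof(*cmn))
		return -EINVAL;	/* malformed element */

	cmn = ptr;
	ath12k_dbg(ab, ATH12K_DBG_DP_TX, "hw_queued %u hw_reaped %u\n",
		   le32_to_cpu(cmn->hw_queued), le32_to_cpu(cmn->hw_reaped));

	return 0;
}

/* usage: ath12k_dp_htt_tlv_iter(ab, payload, payload_len,
 *				 demo_htt_tlv_cb, NULL);
 */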


@ -1086,6 +1086,7 @@ ath12k_dp_tx_htt_h2t_ext_stats_req(struct ath12k *ar, u8 type,
struct htt_ext_stats_cfg_cmd *cmd;
int len = sizeof(*cmd);
int ret;
u32 pdev_id;
skb = ath12k_htc_alloc_skb(ab, len);
if (!skb)
@ -1097,7 +1098,8 @@ ath12k_dp_tx_htt_h2t_ext_stats_req(struct ath12k *ar, u8 type,
memset(cmd, 0, sizeof(*cmd));
cmd->hdr.msg_type = HTT_H2T_MSG_TYPE_EXT_STATS_CFG;
cmd->hdr.pdev_mask = 1 << ar->pdev->pdev_id;
pdev_id = ath12k_mac_get_target_pdev_id(ar);
cmd->hdr.pdev_mask = 1 << pdev_id;
cmd->hdr.stats_type = type;
cmd->cfg_param0 = cpu_to_le32(cfg_params->cfg0);


@ -597,8 +597,30 @@ struct hal_tlv_64_hdr {
#define RX_MPDU_DESC_INFO0_MPDU_QOS_CTRL_VALID BIT(27)
#define RX_MPDU_DESC_INFO0_TID GENMASK(31, 28)
/* TODO revisit after meta data is concluded */
#define RX_MPDU_DESC_META_DATA_PEER_ID GENMASK(15, 0)
/* Peer Metadata classification */
/* Version 0 */
#define RX_MPDU_DESC_META_DATA_V0_PEER_ID GENMASK(15, 0)
#define RX_MPDU_DESC_META_DATA_V0_VDEV_ID GENMASK(23, 16)
/* Version 1 */
#define RX_MPDU_DESC_META_DATA_V1_PEER_ID GENMASK(13, 0)
#define RX_MPDU_DESC_META_DATA_V1_LOGICAL_LINK_ID GENMASK(15, 14)
#define RX_MPDU_DESC_META_DATA_V1_VDEV_ID GENMASK(23, 16)
#define RX_MPDU_DESC_META_DATA_V1_LMAC_ID GENMASK(25, 24)
#define RX_MPDU_DESC_META_DATA_V1_DEVICE_ID GENMASK(28, 26)
/* Version 1A */
#define RX_MPDU_DESC_META_DATA_V1A_PEER_ID GENMASK(13, 0)
#define RX_MPDU_DESC_META_DATA_V1A_VDEV_ID GENMASK(21, 14)
#define RX_MPDU_DESC_META_DATA_V1A_LOGICAL_LINK_ID GENMASK(25, 22)
#define RX_MPDU_DESC_META_DATA_V1A_DEVICE_ID GENMASK(28, 26)
/* Version 1B */
#define RX_MPDU_DESC_META_DATA_V1B_PEER_ID GENMASK(13, 0)
#define RX_MPDU_DESC_META_DATA_V1B_VDEV_ID GENMASK(21, 14)
#define RX_MPDU_DESC_META_DATA_V1B_HW_LINK_ID GENMASK(25, 22)
#define RX_MPDU_DESC_META_DATA_V1B_DEVICE_ID GENMASK(28, 26)
struct rx_mpdu_desc {
__le32 info0; /* %RX_MPDU_DESC_INFO */


@ -244,6 +244,11 @@ static void ath12k_htc_suspend_complete(struct ath12k_base *ab, bool ack)
complete(&ab->htc_suspend);
}
static void ath12k_htc_wakeup_from_suspend(struct ath12k_base *ab)
{
ath12k_dbg(ab, ATH12K_DBG_BOOT, "boot wakeup from suspend is received\n");
}
void ath12k_htc_rx_completion_handler(struct ath12k_base *ab,
struct sk_buff *skb)
{
@ -349,6 +354,7 @@ void ath12k_htc_rx_completion_handler(struct ath12k_base *ab,
ath12k_htc_suspend_complete(ab, false);
break;
case ATH12K_HTC_MSG_WAKEUP_FROM_SUSPEND_ID:
ath12k_htc_wakeup_from_suspend(ab);
break;
default:
ath12k_warn(ab, "ignoring unsolicited htc ep0 event %u\n",


@ -78,8 +78,6 @@
#define TARGET_NUM_WDS_ENTRIES 32
#define TARGET_DMA_BURST_SIZE 1
#define TARGET_RX_BATCHMODE 1
#define TARGET_RX_PEER_METADATA_VER_V1A 2
#define TARGET_RX_PEER_METADATA_VER_V1B 3
#define TARGET_EMA_MAX_PROFILE_PERIOD 8
#define ATH12K_HW_DEFAULT_QUEUE 0


@ -6,6 +6,7 @@
#include <net/mac80211.h>
#include <linux/etherdevice.h>
#include "mac.h"
#include "core.h"
#include "debug.h"
@ -15,6 +16,8 @@
#include "dp_rx.h"
#include "peer.h"
#include "debugfs.h"
#include "hif.h"
#include "wow.h"
#define CHAN2G(_channel, _freq, _flags) { \
.band = NL80211_BAND_2GHZ, \
@ -670,6 +673,82 @@ static struct ath12k *ath12k_get_ar_by_vif(struct ieee80211_hw *hw,
return NULL;
}
static struct ath12k_vif *ath12k_mac_get_vif_up(struct ath12k *ar)
{
struct ath12k_vif *arvif;
lockdep_assert_held(&ar->conf_mutex);
list_for_each_entry(arvif, &ar->arvifs, list) {
if (arvif->is_up)
return arvif;
}
return NULL;
}
static bool ath12k_mac_band_match(enum nl80211_band band1, enum WMI_HOST_WLAN_BAND band2)
{
switch (band1) {
case NL80211_BAND_2GHZ:
if (band2 & WMI_HOST_WLAN_2G_CAP)
return true;
break;
case NL80211_BAND_5GHZ:
case NL80211_BAND_6GHZ:
if (band2 & WMI_HOST_WLAN_5G_CAP)
return true;
break;
default:
return false;
}
return false;
}
static u8 ath12k_mac_get_target_pdev_id_from_vif(struct ath12k_vif *arvif)
{
struct ath12k *ar = arvif->ar;
struct ath12k_base *ab = ar->ab;
struct ieee80211_vif *vif = arvif->vif;
struct cfg80211_chan_def def;
enum nl80211_band band;
u8 pdev_id = ab->fw_pdev[0].pdev_id;
int i;
if (WARN_ON(ath12k_mac_vif_chan(vif, &def)))
return pdev_id;
band = def.chan->band;
for (i = 0; i < ab->fw_pdev_count; i++) {
if (ath12k_mac_band_match(band, ab->fw_pdev[i].supported_bands))
return ab->fw_pdev[i].pdev_id;
}
return pdev_id;
}
u8 ath12k_mac_get_target_pdev_id(struct ath12k *ar)
{
struct ath12k_vif *arvif;
struct ath12k_base *ab = ar->ab;
if (!ab->hw_params->single_pdev_only)
return ar->pdev->pdev_id;
arvif = ath12k_mac_get_vif_up(ar);
/* fw_pdev array has pdev ids derived from phy capability
* service ready event (pdev_and_hw_link_ids).
* If no vif is active, return default first index.
*/
if (!arvif)
return ar->ab->fw_pdev[0].pdev_id;
/* If active vif is found, return the pdev id matching chandef band */
return ath12k_mac_get_target_pdev_id_from_vif(arvif);
}
static void ath12k_pdev_caps_update(struct ath12k *ar)
{
struct ath12k_base *ab = ar->ab;
@ -2050,7 +2129,9 @@ static void ath12k_peer_assoc_h_he(struct ath12k *ar,
{
const struct ieee80211_sta_he_cap *he_cap = &sta->deflink.he_cap;
int i;
u8 ampdu_factor, rx_mcs_80, rx_mcs_160, max_nss;
u8 ampdu_factor, max_nss;
u8 rx_mcs_80 = IEEE80211_HE_MCS_NOT_SUPPORTED;
u8 rx_mcs_160 = IEEE80211_HE_MCS_NOT_SUPPORTED;
u16 mcs_160_map, mcs_80_map;
bool support_160;
u16 v;
@ -2255,9 +2336,6 @@ static int ath12k_get_smps_from_capa(const struct ieee80211_sta_ht_cap *ht_cap,
const struct ieee80211_he_6ghz_capa *he_6ghz_capa,
int *smps)
{
if (!ht_cap->ht_supported && !he_6ghz_capa->capa)
return -EOPNOTSUPP;
if (ht_cap->ht_supported)
*smps = u16_get_bits(ht_cap->cap, IEEE80211_HT_CAP_SM_PS);
else
@ -2277,6 +2355,9 @@ static void ath12k_peer_assoc_h_smps(struct ieee80211_sta *sta,
const struct ieee80211_sta_ht_cap *ht_cap = &sta->deflink.ht_cap;
int smps;
if (!ht_cap->ht_supported && !he_6ghz_capa->capa)
return;
if (ath12k_get_smps_from_capa(ht_cap, he_6ghz_capa, &smps))
return;
@ -2756,6 +2837,9 @@ static int ath12k_setup_peer_smps(struct ath12k *ar, struct ath12k_vif *arvif,
{
int smps, ret = 0;
if (!ht_cap->ht_supported && !he_6ghz_capa)
return 0;
ret = ath12k_get_smps_from_capa(ht_cap, he_6ghz_capa, &smps);
if (ret < 0)
return ret;
@ -2834,6 +2918,7 @@ static void ath12k_bss_assoc(struct ath12k *ar,
}
arvif->is_up = true;
arvif->rekey_data.enable_offload = false;
ath12k_dbg(ar->ab, ATH12K_DBG_MAC,
"mac vdev %d up (associated) bssid %pM aid %d\n",
@ -2881,6 +2966,8 @@ static void ath12k_bss_disassoc(struct ath12k *ar,
arvif->is_up = false;
memset(&arvif->rekey_data, 0, sizeof(arvif->rekey_data));
cancel_delayed_work(&arvif->connection_loss_work);
}
@ -3372,7 +3459,7 @@ static void ath12k_mac_op_bss_info_changed(struct ieee80211_hw *hw,
static struct ath12k*
ath12k_mac_select_scan_device(struct ieee80211_hw *hw,
struct ieee80211_vif *vif,
struct ieee80211_scan_request *req)
u32 center_freq)
{
struct ath12k_hw *ah = hw->priv;
enum nl80211_band band;
@ -3389,9 +3476,9 @@ ath12k_mac_select_scan_device(struct ieee80211_hw *hw,
* split the hw request and perform multiple scans
*/
if (req->req.channels[0]->center_freq < ATH12K_MIN_5G_FREQ)
if (center_freq < ATH12K_MIN_5G_FREQ)
band = NL80211_BAND_2GHZ;
else if (req->req.channels[0]->center_freq < ATH12K_MIN_6G_FREQ)
else if (center_freq < ATH12K_MIN_6G_FREQ)
band = NL80211_BAND_5GHZ;
else
band = NL80211_BAND_6GHZ;
@ -3591,7 +3678,7 @@ static int ath12k_mac_op_hw_scan(struct ieee80211_hw *hw,
/* Since the targeted scan device could depend on the frequency
* requested in the hw_req, select the corresponding radio
*/
ar = ath12k_mac_select_scan_device(hw, vif, hw_req);
ar = ath12k_mac_select_scan_device(hw, vif, hw_req->req.channels[0]->center_freq);
if (!ar)
return -EINVAL;
@ -4087,6 +4174,11 @@ static int ath12k_station_assoc(struct ath12k *ar,
ath12k_peer_assoc_prepare(ar, vif, sta, &peer_arg, reassoc);
if (peer_arg.peer_nss < 1) {
ath12k_warn(ar->ab,
"invalid peer NSS %d\n", peer_arg.peer_nss);
return -EINVAL;
}
ret = ath12k_wmi_send_peer_assoc_cmd(ar, &peer_arg);
if (ret) {
ath12k_warn(ar->ab, "failed to run peer assoc for STA %pM vdev %i: %d\n",
@ -5834,28 +5926,6 @@ static int ath12k_mac_config_mon_status_default(struct ath12k *ar, bool enable)
/* TODO: Need to support new monitor mode */
}
static void ath12k_mac_wait_reconfigure(struct ath12k_base *ab)
{
int recovery_start_count;
if (!ab->is_reset)
return;
recovery_start_count = atomic_inc_return(&ab->recovery_start_count);
ath12k_dbg(ab, ATH12K_DBG_MAC, "recovery start count %d\n", recovery_start_count);
if (recovery_start_count == ab->num_radios) {
complete(&ab->recovery_start);
ath12k_dbg(ab, ATH12K_DBG_MAC, "recovery started success\n");
}
ath12k_dbg(ab, ATH12K_DBG_MAC, "waiting reconfigure...\n");
wait_for_completion_timeout(&ab->reconfigure_complete,
ATH12K_RECONFIGURE_TIMEOUT_HZ);
}
static int ath12k_mac_start(struct ath12k *ar)
{
struct ath12k_hw *ah = ar->ah;
@ -5987,7 +6057,6 @@ static int ath12k_mac_op_start(struct ieee80211_hw *hw)
break;
case ATH12K_HW_STATE_RESTARTING:
ah->state = ATH12K_HW_STATE_RESTARTED;
ath12k_mac_wait_reconfigure(ah->ab);
break;
case ATH12K_HW_STATE_RESTARTED:
case ATH12K_HW_STATE_WEDGED:
@ -8314,10 +8383,6 @@ static int ath12k_mac_op_get_survey(struct ieee80211_hw *hw, int idx,
if (!sband)
sband = hw->wiphy->bands[NL80211_BAND_6GHZ];
if (!sband || idx >= sband->n_channels) {
idx -= sband->n_channels;
sband = NULL;
}
if (!sband || idx >= sband->n_channels)
return -ENOENT;
@ -8416,12 +8481,68 @@ static int ath12k_mac_op_remain_on_channel(struct ieee80211_hw *hw,
struct ath12k_vif *arvif = ath12k_vif_to_arvif(vif);
struct ath12k_hw *ah = ath12k_hw_to_ah(hw);
struct ath12k_wmi_scan_req_arg arg;
struct ath12k *ar;
struct ath12k *ar, *prev_ar;
u32 scan_time_msec;
bool create = true;
int ret;
ar = ath12k_ah_to_ar(ah, 0);
if (ah->num_radio == 1) {
WARN_ON(!arvif->is_created);
ar = ath12k_ah_to_ar(ah, 0);
goto scan;
}
ar = ath12k_mac_select_scan_device(hw, vif, chan->center_freq);
if (!ar)
return -EINVAL;
/* If the vif is already assigned to a specific vdev of an ar,
* check whether it's already started; vdevs which are started
* are not allowed to switch to a new radio.
* If the vdev is not started, but was earlier created on a
* different ar, delete that vdev and create a new one. We don't
* delete at the scan stop as an optimization to avoid redundant
* delete-create vdevs for the same ar, in case the request is
* always on the same band for the vif.
*/
if (arvif->is_created) {
if (WARN_ON(!arvif->ar))
return -EINVAL;
if (ar != arvif->ar && arvif->is_started)
return -EBUSY;
if (ar != arvif->ar) {
/* backup the previously used ar ptr, since the vdev delete
* would assign the arvif->ar to NULL after the call
*/
prev_ar = arvif->ar;
mutex_lock(&prev_ar->conf_mutex);
ret = ath12k_mac_vdev_delete(prev_ar, vif);
mutex_unlock(&prev_ar->conf_mutex);
if (ret) {
ath12k_warn(prev_ar->ab,
"unable to delete scan vdev for roc: %d\n",
ret);
return ret;
}
} else {
create = false;
}
}
if (create) {
mutex_lock(&ar->conf_mutex);
ret = ath12k_mac_vdev_create(ar, vif);
mutex_unlock(&ar->conf_mutex);
if (ret) {
ath12k_warn(ar->ab, "unable to create scan vdev for roc: %d\n",
ret);
return -EINVAL;
}
}
scan:
mutex_lock(&ar->conf_mutex);
spin_lock_bh(&ar->data_lock);
@@ -8503,6 +8624,40 @@ static int ath12k_mac_op_remain_on_channel(struct ieee80211_hw *hw,
return ret;
}
static void ath12k_mac_op_set_rekey_data(struct ieee80211_hw *hw,
struct ieee80211_vif *vif,
struct cfg80211_gtk_rekey_data *data)
{
struct ath12k_vif *arvif = ath12k_vif_to_arvif(vif);
struct ath12k_rekey_data *rekey_data = &arvif->rekey_data;
struct ath12k_hw *ah = ath12k_hw_to_ah(hw);
struct ath12k *ar = ath12k_ah_to_ar(ah, 0);
ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "mac set rekey data vdev %d\n",
arvif->vdev_id);
mutex_lock(&ar->conf_mutex);
memcpy(rekey_data->kck, data->kck, NL80211_KCK_LEN);
memcpy(rekey_data->kek, data->kek, NL80211_KEK_LEN);
/* The supplicant provides the replay counter in big endian,
* while the firmware expects it in little endian.
*/
rekey_data->replay_ctr = get_unaligned_be64(data->replay_ctr);
arvif->rekey_data.enable_offload = true;
ath12k_dbg_dump(ar->ab, ATH12K_DBG_MAC, "kck", NULL,
rekey_data->kck, NL80211_KCK_LEN);
ath12k_dbg_dump(ar->ab, ATH12K_DBG_MAC, "kek", NULL,
rekey_data->kek, NL80211_KEK_LEN);
ath12k_dbg_dump(ar->ab, ATH12K_DBG_MAC, "replay ctr", NULL,
&rekey_data->replay_ctr, sizeof(rekey_data->replay_ctr));
mutex_unlock(&ar->conf_mutex);
}
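For illustration, a minimal sketch of the byte-order contract described in the comment above; the helper names are real kernel ones, but these example_ functions are hypothetical and not part of this series:

/* Illustrative round trip only: the supplicant hands over the replay
 * counter as a big-endian byte stream, the driver keeps it host-endian,
 * and converts it back to big endian when notifying mac80211 (see
 * ath12k_wmi_gtk_offload_status_event() in the wmi.c diff below).
 */
static u64 example_replay_ctr_from_supplicant(const u8 *replay_ctr)
{
	/* supplicant -> host endianness */
	return get_unaligned_be64(replay_ctr);
}

static __be64 example_replay_ctr_to_supplicant(u64 host_ctr)
{
	/* host -> big endian, before ieee80211_gtk_rekey_notify() */
	return cpu_to_be64(host_ctr);
}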
static const struct ieee80211_ops ath12k_ops = {
.tx = ath12k_mac_op_tx,
.wake_tx_queue = ieee80211_handle_wake_tx_queue,
@@ -8518,6 +8673,7 @@ static const struct ieee80211_ops ath12k_ops = {
.hw_scan = ath12k_mac_op_hw_scan,
.cancel_hw_scan = ath12k_mac_op_cancel_hw_scan,
.set_key = ath12k_mac_op_set_key,
.set_rekey_data = ath12k_mac_op_set_rekey_data,
.sta_state = ath12k_mac_op_sta_state,
.sta_set_txpwr = ath12k_mac_op_sta_set_txpwr,
.sta_rc_update = ath12k_mac_op_sta_rc_update,
@@ -8539,6 +8695,12 @@ static const struct ieee80211_ops ath12k_ops = {
.sta_statistics = ath12k_mac_op_sta_statistics,
.remain_on_channel = ath12k_mac_op_remain_on_channel,
.cancel_remain_on_channel = ath12k_mac_op_cancel_remain_on_channel,
#ifdef CONFIG_PM
.suspend = ath12k_wow_op_suspend,
.resume = ath12k_wow_op_resume,
.set_wakeup = ath12k_wow_op_set_wakeup,
#endif
};
static void ath12k_mac_update_ch_list(struct ath12k *ar,
@@ -8836,8 +8998,10 @@ static void ath12k_mac_hw_unregister(struct ath12k_hw *ah)
struct ath12k *ar;
int i;
for_each_ar(ah, ar, i)
for_each_ar(ah, ar, i) {
cancel_work_sync(&ar->regd_update_work);
ath12k_debugfs_unregister(ar);
}
ieee80211_unregister_hw(hw);
@@ -8901,6 +9065,7 @@ static int ath12k_mac_hw_register(struct ath12k_hw *ah)
u32 ht_cap = U32_MAX, antennas_rx = 0, antennas_tx = 0;
bool is_6ghz = false, is_raw_mode = false, is_monitor_disable = false;
u8 *mac_addr = NULL;
u8 mbssid_max_interfaces = 0;
wiphy->max_ap_assoc_sta = 0;
@@ -8944,6 +9109,8 @@ static int ath12k_mac_hw_register(struct ath12k_hw *ah)
mac_addr = ar->mac_addr;
else
mac_addr = ab->mac_addr;
mbssid_max_interfaces += TARGET_NUM_VDEVS;
}
wiphy->available_antennas_rx = antennas_rx;
@@ -9036,7 +9203,7 @@ static int ath12k_mac_hw_register(struct ath12k_hw *ah)
wiphy->iftype_ext_capab = ath12k_iftypes_ext_capa;
wiphy->num_iftype_ext_capab = ARRAY_SIZE(ath12k_iftypes_ext_capa);
wiphy->mbssid_max_interfaces = TARGET_NUM_VDEVS;
wiphy->mbssid_max_interfaces = mbssid_max_interfaces;
wiphy->ema_max_profile_periodicity = TARGET_EMA_MAX_PROFILE_PERIOD;
if (is_6ghz) {
@@ -9056,6 +9223,24 @@ static int ath12k_mac_hw_register(struct ath12k_hw *ah)
ieee80211_hw_set(hw, SUPPORT_FAST_XMIT);
}
if (test_bit(WMI_TLV_SERVICE_NLO, ar->wmi->wmi_ab->svc_map)) {
wiphy->max_sched_scan_ssids = WMI_PNO_MAX_SUPP_NETWORKS;
wiphy->max_match_sets = WMI_PNO_MAX_SUPP_NETWORKS;
wiphy->max_sched_scan_ie_len = WMI_PNO_MAX_IE_LENGTH;
wiphy->max_sched_scan_plans = WMI_PNO_MAX_SCHED_SCAN_PLANS;
wiphy->max_sched_scan_plan_interval =
WMI_PNO_MAX_SCHED_SCAN_PLAN_INT;
wiphy->max_sched_scan_plan_iterations =
WMI_PNO_MAX_SCHED_SCAN_PLAN_ITRNS;
wiphy->features |= NL80211_FEATURE_ND_RANDOM_MAC_ADDR;
}
ret = ath12k_wow_init(ar);
if (ret) {
ath12k_warn(ar->ab, "failed to init wow: %d\n", ret);
goto err_free_if_combs;
}
ret = ieee80211_register_hw(hw);
if (ret) {
ath12k_err(ab, "ieee80211 registration failed: %d\n", ret);
@@ -9077,13 +9262,16 @@ static int ath12k_mac_hw_register(struct ath12k_hw *ah)
ath12k_err(ar->ab, "ath12k regd update failed: %d\n", ret);
goto err_unregister_hw;
}
}
ath12k_debugfs_register(ar);
ath12k_debugfs_register(ar);
}
return 0;
err_unregister_hw:
for_each_ar(ah, ar, i)
ath12k_debugfs_unregister(ar);
ieee80211_unregister_hw(hw);
err_free_if_combs:
@@ -9216,7 +9404,6 @@ static struct ath12k_hw *ath12k_mac_hw_allocate(struct ath12k_base *ab,
ah = ath12k_hw_to_ah(hw);
ah->hw = hw;
ah->ab = ab;
ah->num_radio = num_pdev_map;
mutex_init(&ah->hw_mutex);
@@ -9307,3 +9494,34 @@ int ath12k_mac_allocate(struct ath12k_base *ab)
return ret;
}
int ath12k_mac_vif_set_keepalive(struct ath12k_vif *arvif,
enum wmi_sta_keepalive_method method,
u32 interval)
{
struct wmi_sta_keepalive_arg arg = {};
struct ath12k *ar = arvif->ar;
int ret;
lockdep_assert_held(&ar->conf_mutex);
if (arvif->vdev_type != WMI_VDEV_TYPE_STA)
return 0;
if (!test_bit(WMI_TLV_SERVICE_STA_KEEP_ALIVE, ar->ab->wmi_ab.svc_map))
return 0;
arg.vdev_id = arvif->vdev_id;
arg.enabled = 1;
arg.method = method;
arg.interval = interval;
ret = ath12k_wmi_sta_keepalive(ar, &arg);
if (ret) {
ath12k_warn(ar->ab, "failed to set keepalive on vdev %i: %d\n",
arvif->vdev_id, ret);
return ret;
}
return 0;
}
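A hypothetical caller for the helper above (the example_ wrapper is not part of this series): enable NULL-frame keepalive with the default interval, holding ar->conf_mutex as the lockdep assertion requires:

static int example_enable_keepalive(struct ath12k *ar,
				    struct ath12k_vif *arvif)
{
	int ret;

	mutex_lock(&ar->conf_mutex);
	ret = ath12k_mac_vif_set_keepalive(arvif,
					   WMI_STA_KEEPALIVE_METHOD_NULL_FRAME,
					   WMI_STA_KEEPALIVE_INTERVAL_DEFAULT);
	mutex_unlock(&ar->conf_mutex);

	return ret;
}

The method and interval constants come from the wmi.h hunk further down in this diff.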


@@ -9,6 +9,7 @@
#include <net/mac80211.h>
#include <net/cfg80211.h>
#include "wmi.h"
struct ath12k;
struct ath12k_base;
@@ -81,5 +82,9 @@ int ath12k_mac_rfkill_config(struct ath12k *ar);
int ath12k_mac_wait_tx_complete(struct ath12k *ar);
void ath12k_mac_handle_beacon(struct ath12k *ar, struct sk_buff *skb);
void ath12k_mac_handle_beacon_miss(struct ath12k *ar, u32 vdev_id);
int ath12k_mac_vif_set_keepalive(struct ath12k_vif *arvif,
enum wmi_sta_keepalive_method method,
u32 interval);
u8 ath12k_mac_get_target_pdev_id(struct ath12k *ar);
#endif


@@ -233,7 +233,7 @@ void ath12k_wmi_init_qcn9274(struct ath12k_base *ab,
config->beacon_tx_offload_max_vdev += config->ema_max_vap_cnt;
if (test_bit(WMI_TLV_SERVICE_PEER_METADATA_V1A_V1B_SUPPORT, ab->wmi_ab.svc_map))
config->dp_peer_meta_data_ver = TARGET_RX_PEER_METADATA_VER_V1B;
config->peer_metadata_ver = ATH12K_PEER_METADATA_V1B;
}
void ath12k_wmi_init_wcn7850(struct ath12k_base *ab,
@@ -3498,7 +3498,7 @@ ath12k_wmi_copy_resource_config(struct ath12k_wmi_resource_config_params *wmi_cf
wmi_cfg->sched_params = cpu_to_le32(tg_cfg->sched_params);
wmi_cfg->twt_ap_pdev_count = cpu_to_le32(tg_cfg->twt_ap_pdev_count);
wmi_cfg->twt_ap_sta_count = cpu_to_le32(tg_cfg->twt_ap_sta_count);
wmi_cfg->flags2 = le32_encode_bits(tg_cfg->dp_peer_meta_data_ver,
wmi_cfg->flags2 = le32_encode_bits(tg_cfg->peer_metadata_ver,
WMI_RSRC_CFG_FLAGS2_RX_PEER_METADATA_VERSION);
wmi_cfg->host_service_flags = cpu_to_le32(tg_cfg->is_reg_cc_ext_event_supported <<
WMI_RSRC_CFG_HOST_SVC_FLAG_REG_CC_EXT_SUPPORT_BIT);
@@ -3717,6 +3717,7 @@ int ath12k_wmi_cmd_init(struct ath12k_base *ab)
arg.res_cfg.is_reg_cc_ext_event_supported = true;
ab->hw_params->wmi_init(ab, &arg.res_cfg);
ab->wow.wmi_conf_rx_decap_mode = arg.res_cfg.rx_decap_mode;
arg.num_mem_chunks = wmi_ab->num_mem_chunks;
arg.hw_mode_id = wmi_ab->preferred_hw_mode;
@@ -3728,6 +3729,8 @@ int ath12k_wmi_cmd_init(struct ath12k_base *ab)
arg.num_band_to_mac = ab->num_radios;
ath12k_fill_band_to_mac_param(ab, arg.band_to_mac);
ab->dp.peer_metadata_ver = arg.res_cfg.peer_metadata_ver;
return ath12k_init_cmd_send(&wmi_ab->wmi[0], &arg);
}
@@ -7030,6 +7033,116 @@ static void ath12k_wmi_twt_disable_event(struct ath12k_base *ab,
kfree(tb);
}
static int ath12k_wmi_wow_wakeup_host_parse(struct ath12k_base *ab,
u16 tag, u16 len,
const void *ptr, void *data)
{
const struct wmi_wow_ev_pg_fault_param *pf_param;
const struct wmi_wow_ev_param *param;
struct wmi_wow_ev_arg *arg = data;
int pf_len;
switch (tag) {
case WMI_TAG_WOW_EVENT_INFO:
param = ptr;
arg->wake_reason = le32_to_cpu(param->wake_reason);
ath12k_dbg(ab, ATH12K_DBG_WMI, "wow wakeup host reason %d %s\n",
arg->wake_reason, wow_reason(arg->wake_reason));
break;
case WMI_TAG_ARRAY_BYTE:
if (arg && arg->wake_reason == WOW_REASON_PAGE_FAULT) {
pf_param = ptr;
pf_len = le32_to_cpu(pf_param->len);
if (pf_len > len - sizeof(pf_len) ||
pf_len < 0) {
ath12k_warn(ab, "invalid wo reason page fault buffer len %d\n",
pf_len);
return -EINVAL;
}
ath12k_dbg(ab, ATH12K_DBG_WMI, "wow_reason_page_fault len %d\n",
pf_len);
ath12k_dbg_dump(ab, ATH12K_DBG_WMI,
"wow_reason_page_fault packet present",
"wow_pg_fault ",
pf_param->data,
pf_len);
}
break;
default:
break;
}
return 0;
}
static void ath12k_wmi_event_wow_wakeup_host(struct ath12k_base *ab, struct sk_buff *skb)
{
struct wmi_wow_ev_arg arg = { };
int ret;
ret = ath12k_wmi_tlv_iter(ab, skb->data, skb->len,
ath12k_wmi_wow_wakeup_host_parse,
&arg);
if (ret) {
ath12k_warn(ab, "failed to parse wmi wow wakeup host event tlv: %d\n",
ret);
return;
}
complete(&ab->wow.wakeup_completed);
}
static void ath12k_wmi_gtk_offload_status_event(struct ath12k_base *ab,
struct sk_buff *skb)
{
const struct wmi_gtk_offload_status_event *ev;
struct ath12k_vif *arvif;
__be64 replay_ctr_be;
u64 replay_ctr;
const void **tb;
int ret;
tb = ath12k_wmi_tlv_parse_alloc(ab, skb, GFP_ATOMIC);
if (IS_ERR(tb)) {
ret = PTR_ERR(tb);
ath12k_warn(ab, "failed to parse tlv: %d\n", ret);
return;
}
ev = tb[WMI_TAG_GTK_OFFLOAD_STATUS_EVENT];
if (!ev) {
ath12k_warn(ab, "failed to fetch gtk offload status ev");
kfree(tb);
return;
}
rcu_read_lock();
arvif = ath12k_mac_get_arvif_by_vdev_id(ab, le32_to_cpu(ev->vdev_id));
if (!arvif) {
rcu_read_unlock();
ath12k_warn(ab, "failed to get arvif for vdev_id:%d\n",
le32_to_cpu(ev->vdev_id));
kfree(tb);
return;
}
replay_ctr = le64_to_cpu(ev->replay_ctr);
arvif->rekey_data.replay_ctr = replay_ctr;
ath12k_dbg(ab, ATH12K_DBG_WMI, "wmi gtk offload event refresh_cnt %d replay_ctr %llu\n",
le32_to_cpu(ev->refresh_cnt), replay_ctr);
/* supplicant expects big-endian replay counter */
replay_ctr_be = cpu_to_be64(replay_ctr);
ieee80211_gtk_rekey_notify(arvif->vif, arvif->bssid,
(void *)&replay_ctr_be, GFP_ATOMIC);
rcu_read_unlock();
kfree(tb);
}
static void ath12k_wmi_op_rx(struct ath12k_base *ab, struct sk_buff *skb)
{
struct wmi_cmd_hdr *cmd_hdr;
@@ -7150,6 +7263,12 @@ static void ath12k_wmi_op_rx(struct ath12k_base *ab, struct sk_buff *skb)
case WMI_DIAG_EVENTID:
ath12k_wmi_diag_event(ab, skb);
break;
case WMI_WOW_WAKEUP_HOST_EVENTID:
ath12k_wmi_event_wow_wakeup_host(ab, skb);
break;
case WMI_GTK_OFFLOAD_STATUS_EVENTID:
ath12k_wmi_gtk_offload_status_event(ab, skb);
break;
/* TODO: Add remaining events */
default:
ath12k_dbg(ab, ATH12K_DBG_WMI, "Unknown eventid: 0x%x\n", id);
@@ -7359,3 +7478,608 @@ void ath12k_wmi_detach(struct ath12k_base *ab)
ath12k_wmi_free_dbring_caps(ab);
}
int ath12k_wmi_hw_data_filter_cmd(struct ath12k *ar, struct wmi_hw_data_filter_arg *arg)
{
struct wmi_hw_data_filter_cmd *cmd;
struct sk_buff *skb;
int len;
len = sizeof(*cmd);
skb = ath12k_wmi_alloc_skb(ar->wmi->wmi_ab, len);
if (!skb)
return -ENOMEM;
cmd = (struct wmi_hw_data_filter_cmd *)skb->data;
cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_HW_DATA_FILTER_CMD,
sizeof(*cmd));
cmd->vdev_id = cpu_to_le32(arg->vdev_id);
cmd->enable = cpu_to_le32(arg->enable ? 1 : 0);
/* Set all modes in case of disable */
if (arg->enable)
cmd->hw_filter_bitmap = cpu_to_le32(arg->hw_filter_bitmap);
else
cmd->hw_filter_bitmap = cpu_to_le32((u32)~0U);
ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
"wmi hw data filter enable %d filter_bitmap 0x%x\n",
arg->enable, arg->hw_filter_bitmap);
return ath12k_wmi_cmd_send(ar->wmi, skb, WMI_HW_DATA_FILTER_CMDID);
}
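A hypothetical usage sketch (the example_ wrapper is an assumption), using the filter types defined in the wmi.h hunk further down: drop non-ARP broadcast and non-ICMPv6 multicast in hardware while suspended:

static int example_enable_hw_filter(struct ath12k *ar,
				    struct ath12k_vif *arvif)
{
	struct wmi_hw_data_filter_arg arg = {
		.vdev_id = arvif->vdev_id,
		.enable = true,
		.hw_filter_bitmap = WMI_HW_DATA_FILTER_DROP_NON_ARP_BC |
				    WMI_HW_DATA_FILTER_DROP_NON_ICMPV6_MC,
	};

	return ath12k_wmi_hw_data_filter_cmd(ar, &arg);
}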
int ath12k_wmi_wow_host_wakeup_ind(struct ath12k *ar)
{
struct wmi_wow_host_wakeup_cmd *cmd;
struct sk_buff *skb;
size_t len;
len = sizeof(*cmd);
skb = ath12k_wmi_alloc_skb(ar->wmi->wmi_ab, len);
if (!skb)
return -ENOMEM;
cmd = (struct wmi_wow_host_wakeup_cmd *)skb->data;
cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_WOW_HOSTWAKEUP_FROM_SLEEP_CMD,
sizeof(*cmd));
ath12k_dbg(ar->ab, ATH12K_DBG_WMI, "wmi tlv wow host wakeup ind\n");
return ath12k_wmi_cmd_send(ar->wmi, skb, WMI_WOW_HOSTWAKEUP_FROM_SLEEP_CMDID);
}
int ath12k_wmi_wow_enable(struct ath12k *ar)
{
struct wmi_wow_enable_cmd *cmd;
struct sk_buff *skb;
int len;
len = sizeof(*cmd);
skb = ath12k_wmi_alloc_skb(ar->wmi->wmi_ab, len);
if (!skb)
return -ENOMEM;
cmd = (struct wmi_wow_enable_cmd *)skb->data;
cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_WOW_ENABLE_CMD,
sizeof(*cmd));
cmd->enable = cpu_to_le32(1);
cmd->pause_iface_config = cpu_to_le32(WOW_IFACE_PAUSE_ENABLED);
ath12k_dbg(ar->ab, ATH12K_DBG_WMI, "wmi tlv wow enable\n");
return ath12k_wmi_cmd_send(ar->wmi, skb, WMI_WOW_ENABLE_CMDID);
}
int ath12k_wmi_wow_add_wakeup_event(struct ath12k *ar, u32 vdev_id,
enum wmi_wow_wakeup_event event,
u32 enable)
{
struct wmi_wow_add_del_event_cmd *cmd;
struct sk_buff *skb;
size_t len;
len = sizeof(*cmd);
skb = ath12k_wmi_alloc_skb(ar->wmi->wmi_ab, len);
if (!skb)
return -ENOMEM;
cmd = (struct wmi_wow_add_del_event_cmd *)skb->data;
cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_WOW_ADD_DEL_EVT_CMD,
sizeof(*cmd));
cmd->vdev_id = cpu_to_le32(vdev_id);
cmd->is_add = cpu_to_le32(enable);
cmd->event_bitmap = cpu_to_le32((1 << event));
ath12k_dbg(ar->ab, ATH12K_DBG_WMI, "wmi tlv wow add wakeup event %s enable %d vdev_id %d\n",
wow_wakeup_event(event), enable, vdev_id);
return ath12k_wmi_cmd_send(ar->wmi, skb, WMI_WOW_ENABLE_DISABLE_WAKE_EVENT_CMDID);
}
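For example, a hypothetical caller could arm the magic-packet wakeup for a vdev before enabling WoW (the event values appear in the wmi.h hunk below):

static int example_arm_magic_packet(struct ath12k *ar, u32 vdev_id)
{
	/* enable = 1 adds the event to the vdev's wakeup bitmap */
	return ath12k_wmi_wow_add_wakeup_event(ar, vdev_id,
					       WOW_MAGIC_PKT_RECVD_EVENT, 1);
}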
int ath12k_wmi_wow_add_pattern(struct ath12k *ar, u32 vdev_id, u32 pattern_id,
const u8 *pattern, const u8 *mask,
int pattern_len, int pattern_offset)
{
struct wmi_wow_add_pattern_cmd *cmd;
struct wmi_wow_bitmap_pattern_params *bitmap;
struct wmi_tlv *tlv;
struct sk_buff *skb;
void *ptr;
size_t len;
len = sizeof(*cmd) +
sizeof(*tlv) + /* array struct */
sizeof(*bitmap) + /* bitmap */
sizeof(*tlv) + /* empty ipv4 sync */
sizeof(*tlv) + /* empty ipv6 sync */
sizeof(*tlv) + /* empty magic */
sizeof(*tlv) + /* empty info timeout */
sizeof(*tlv) + sizeof(u32); /* ratelimit interval */
skb = ath12k_wmi_alloc_skb(ar->wmi->wmi_ab, len);
if (!skb)
return -ENOMEM;
/* cmd */
ptr = skb->data;
cmd = ptr;
cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_WOW_ADD_PATTERN_CMD,
sizeof(*cmd));
cmd->vdev_id = cpu_to_le32(vdev_id);
cmd->pattern_id = cpu_to_le32(pattern_id);
cmd->pattern_type = cpu_to_le32(WOW_BITMAP_PATTERN);
ptr += sizeof(*cmd);
/* bitmap */
tlv = ptr;
tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_STRUCT, sizeof(*bitmap));
ptr += sizeof(*tlv);
bitmap = ptr;
bitmap->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_WOW_BITMAP_PATTERN_T,
sizeof(*bitmap));
memcpy(bitmap->patternbuf, pattern, pattern_len);
memcpy(bitmap->bitmaskbuf, mask, pattern_len);
bitmap->pattern_offset = cpu_to_le32(pattern_offset);
bitmap->pattern_len = cpu_to_le32(pattern_len);
bitmap->bitmask_len = cpu_to_le32(pattern_len);
bitmap->pattern_id = cpu_to_le32(pattern_id);
ptr += sizeof(*bitmap);
/* ipv4 sync */
tlv = ptr;
tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_STRUCT, 0);
ptr += sizeof(*tlv);
/* ipv6 sync */
tlv = ptr;
tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_STRUCT, 0);
ptr += sizeof(*tlv);
/* magic */
tlv = ptr;
tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_STRUCT, 0);
ptr += sizeof(*tlv);
/* pattern info timeout */
tlv = ptr;
tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_UINT32, 0);
ptr += sizeof(*tlv);
/* ratelimit interval */
tlv = ptr;
tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_UINT32, sizeof(u32));
ath12k_dbg(ar->ab, ATH12K_DBG_WMI, "wmi tlv wow add pattern vdev_id %d pattern_id %d pattern_offset %d pattern_len %d\n",
vdev_id, pattern_id, pattern_offset, pattern_len);
ath12k_dbg_dump(ar->ab, ATH12K_DBG_WMI, NULL, "wow pattern: ",
bitmap->patternbuf, pattern_len);
ath12k_dbg_dump(ar->ab, ATH12K_DBG_WMI, NULL, "wow bitmask: ",
bitmap->bitmaskbuf, pattern_len);
return ath12k_wmi_cmd_send(ar->wmi, skb, WMI_WOW_ADD_WAKE_PATTERN_CMDID);
}
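A hypothetical caller with made-up pattern and mask bytes; pattern_len must stay within WOW_DEFAULT_BITMAP_PATTERN_SIZE (148, see the wmi.h hunk below):

static int example_add_wake_pattern(struct ath12k *ar, u32 vdev_id)
{
	/* illustrative values only: match 4 bytes at frame offset 0 */
	static const u8 pattern[] = { 0xde, 0xad, 0xbe, 0xef };
	static const u8 mask[] = { 0xff, 0xff, 0xff, 0xff };

	return ath12k_wmi_wow_add_pattern(ar, vdev_id, 1 /* pattern_id */,
					  pattern, mask, sizeof(pattern),
					  0 /* pattern_offset */);
}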
int ath12k_wmi_wow_del_pattern(struct ath12k *ar, u32 vdev_id, u32 pattern_id)
{
struct wmi_wow_del_pattern_cmd *cmd;
struct sk_buff *skb;
size_t len;
len = sizeof(*cmd);
skb = ath12k_wmi_alloc_skb(ar->wmi->wmi_ab, len);
if (!skb)
return -ENOMEM;
cmd = (struct wmi_wow_del_pattern_cmd *)skb->data;
cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_WOW_DEL_PATTERN_CMD,
sizeof(*cmd));
cmd->vdev_id = cpu_to_le32(vdev_id);
cmd->pattern_id = cpu_to_le32(pattern_id);
cmd->pattern_type = cpu_to_le32(WOW_BITMAP_PATTERN);
ath12k_dbg(ar->ab, ATH12K_DBG_WMI, "wmi tlv wow del pattern vdev_id %d pattern_id %d\n",
vdev_id, pattern_id);
return ath12k_wmi_cmd_send(ar->wmi, skb, WMI_WOW_DEL_WAKE_PATTERN_CMDID);
}
static struct sk_buff *
ath12k_wmi_op_gen_config_pno_start(struct ath12k *ar, u32 vdev_id,
struct wmi_pno_scan_req_arg *pno)
{
struct nlo_configured_params *nlo_list;
size_t len, nlo_list_len, channel_list_len;
struct wmi_wow_nlo_config_cmd *cmd;
__le32 *channel_list;
struct wmi_tlv *tlv;
struct sk_buff *skb;
void *ptr;
u32 i;
len = sizeof(*cmd) +
sizeof(*tlv) +
/* TLV placeholder for array of structures
* nlo_configured_params(nlo_list)
*/
sizeof(*tlv);
/* TLV placeholder for array of uint32 channel_list */
channel_list_len = sizeof(u32) * pno->a_networks[0].channel_count;
len += channel_list_len;
nlo_list_len = sizeof(*nlo_list) * pno->uc_networks_count;
len += nlo_list_len;
skb = ath12k_wmi_alloc_skb(ar->wmi->wmi_ab, len);
if (!skb)
return ERR_PTR(-ENOMEM);
ptr = skb->data;
cmd = ptr;
cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_NLO_CONFIG_CMD, sizeof(*cmd));
cmd->vdev_id = cpu_to_le32(pno->vdev_id);
cmd->flags = cpu_to_le32(WMI_NLO_CONFIG_START | WMI_NLO_CONFIG_SSID_HIDE_EN);
/* current FW does not support min-max range for dwell time */
cmd->active_dwell_time = cpu_to_le32(pno->active_max_time);
cmd->passive_dwell_time = cpu_to_le32(pno->passive_max_time);
if (pno->do_passive_scan)
cmd->flags |= cpu_to_le32(WMI_NLO_CONFIG_SCAN_PASSIVE);
cmd->fast_scan_period = cpu_to_le32(pno->fast_scan_period);
cmd->slow_scan_period = cpu_to_le32(pno->slow_scan_period);
cmd->fast_scan_max_cycles = cpu_to_le32(pno->fast_scan_max_cycles);
cmd->delay_start_time = cpu_to_le32(pno->delay_start_time);
if (pno->enable_pno_scan_randomization) {
cmd->flags |= cpu_to_le32(WMI_NLO_CONFIG_SPOOFED_MAC_IN_PROBE_REQ |
WMI_NLO_CONFIG_RANDOM_SEQ_NO_IN_PROBE_REQ);
ether_addr_copy(cmd->mac_addr.addr, pno->mac_addr);
ether_addr_copy(cmd->mac_mask.addr, pno->mac_addr_mask);
}
ptr += sizeof(*cmd);
/* nlo_configured_params(nlo_list) */
cmd->no_of_ssids = cpu_to_le32(pno->uc_networks_count);
tlv = ptr;
tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_STRUCT, nlo_list_len);
ptr += sizeof(*tlv);
nlo_list = ptr;
for (i = 0; i < pno->uc_networks_count; i++) {
tlv = (struct wmi_tlv *)(&nlo_list[i].tlv_header);
tlv->header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_ARRAY_BYTE,
sizeof(*nlo_list));
nlo_list[i].ssid.valid = cpu_to_le32(1);
nlo_list[i].ssid.ssid.ssid_len =
cpu_to_le32(pno->a_networks[i].ssid.ssid_len);
memcpy(nlo_list[i].ssid.ssid.ssid,
pno->a_networks[i].ssid.ssid,
le32_to_cpu(nlo_list[i].ssid.ssid.ssid_len));
if (pno->a_networks[i].rssi_threshold &&
pno->a_networks[i].rssi_threshold > -300) {
nlo_list[i].rssi_cond.valid = cpu_to_le32(1);
nlo_list[i].rssi_cond.rssi =
cpu_to_le32(pno->a_networks[i].rssi_threshold);
}
nlo_list[i].bcast_nw_type.valid = cpu_to_le32(1);
nlo_list[i].bcast_nw_type.bcast_nw_type =
cpu_to_le32(pno->a_networks[i].bcast_nw_type);
}
ptr += nlo_list_len;
cmd->num_of_channels = cpu_to_le32(pno->a_networks[0].channel_count);
tlv = ptr;
tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_UINT32, channel_list_len);
ptr += sizeof(*tlv);
channel_list = ptr;
for (i = 0; i < pno->a_networks[0].channel_count; i++)
channel_list[i] = cpu_to_le32(pno->a_networks[0].channels[i]);
ath12k_dbg(ar->ab, ATH12K_DBG_WMI, "wmi tlv start pno config vdev_id %d\n",
vdev_id);
return skb;
}
static struct sk_buff *ath12k_wmi_op_gen_config_pno_stop(struct ath12k *ar,
u32 vdev_id)
{
struct wmi_wow_nlo_config_cmd *cmd;
struct sk_buff *skb;
size_t len;
len = sizeof(*cmd);
skb = ath12k_wmi_alloc_skb(ar->wmi->wmi_ab, len);
if (!skb)
return ERR_PTR(-ENOMEM);
cmd = (struct wmi_wow_nlo_config_cmd *)skb->data;
cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_NLO_CONFIG_CMD, len);
cmd->vdev_id = cpu_to_le32(vdev_id);
cmd->flags = cpu_to_le32(WMI_NLO_CONFIG_STOP);
ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
"wmi tlv stop pno config vdev_id %d\n", vdev_id);
return skb;
}
int ath12k_wmi_wow_config_pno(struct ath12k *ar, u32 vdev_id,
struct wmi_pno_scan_req_arg *pno_scan)
{
struct sk_buff *skb;
if (pno_scan->enable)
skb = ath12k_wmi_op_gen_config_pno_start(ar, vdev_id, pno_scan);
else
skb = ath12k_wmi_op_gen_config_pno_stop(ar, vdev_id);
if (IS_ERR_OR_NULL(skb))
return -ENOMEM;
return ath12k_wmi_cmd_send(ar->wmi, skb, WMI_NETWORK_LIST_OFFLOAD_CONFIG_CMDID);
}
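A minimal, hypothetical single-network PNO request; the field values are purely illustrative and the real call sites live in the suppressed WoW file diff below:

static int example_start_pno(struct ath12k *ar, u32 vdev_id)
{
	struct wmi_pno_scan_req_arg *pno;
	int ret;

	/* heap-allocate: the struct embeds WMI_PNO_MAX_SUPP_NETWORKS entries */
	pno = kzalloc(sizeof(*pno), GFP_KERNEL);
	if (!pno)
		return -ENOMEM;

	pno->enable = 1;
	pno->vdev_id = vdev_id;
	pno->uc_networks_count = 1;
	pno->a_networks[0].bcast_nw_type = BCAST_NORMAL;
	pno->a_networks[0].channel_count = 1;
	pno->a_networks[0].channels[0] = WMI_PNO_24GHZ_DEFAULT_CH; /* illustrative */

	ret = ath12k_wmi_wow_config_pno(ar, vdev_id, pno);
	kfree(pno);

	return ret;
}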
static void ath12k_wmi_fill_ns_offload(struct ath12k *ar,
struct wmi_arp_ns_offload_arg *offload,
void **ptr,
bool enable,
bool ext)
{
struct wmi_ns_offload_params *ns;
struct wmi_tlv *tlv;
void *buf_ptr = *ptr;
u32 ns_cnt, ns_ext_tuples;
int i, max_offloads;
ns_cnt = offload->ipv6_count;
tlv = buf_ptr;
if (ext) {
ns_ext_tuples = offload->ipv6_count - WMI_MAX_NS_OFFLOADS;
tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_STRUCT,
ns_ext_tuples * sizeof(*ns));
i = WMI_MAX_NS_OFFLOADS;
max_offloads = offload->ipv6_count;
} else {
tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_STRUCT,
WMI_MAX_NS_OFFLOADS * sizeof(*ns));
i = 0;
max_offloads = WMI_MAX_NS_OFFLOADS;
}
buf_ptr += sizeof(*tlv);
for (; i < max_offloads; i++) {
ns = buf_ptr;
ns->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_NS_OFFLOAD_TUPLE,
sizeof(*ns));
if (enable) {
if (i < ns_cnt)
ns->flags |= cpu_to_le32(WMI_NSOL_FLAGS_VALID);
memcpy(ns->target_ipaddr[0], offload->ipv6_addr[i], 16);
memcpy(ns->solicitation_ipaddr, offload->self_ipv6_addr[i], 16);
if (offload->ipv6_type[i])
ns->flags |= cpu_to_le32(WMI_NSOL_FLAGS_IS_IPV6_ANYCAST);
memcpy(ns->target_mac.addr, offload->mac_addr, ETH_ALEN);
if (!is_zero_ether_addr(ns->target_mac.addr))
ns->flags |= cpu_to_le32(WMI_NSOL_FLAGS_MAC_VALID);
ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
"wmi index %d ns_solicited %pI6 target %pI6",
i, ns->solicitation_ipaddr,
ns->target_ipaddr[0]);
}
buf_ptr += sizeof(*ns);
}
*ptr = buf_ptr;
}
static void ath12k_wmi_fill_arp_offload(struct ath12k *ar,
struct wmi_arp_ns_offload_arg *offload,
void **ptr,
bool enable)
{
struct wmi_arp_offload_params *arp;
struct wmi_tlv *tlv;
void *buf_ptr = *ptr;
int i;
/* fill arp tuple */
tlv = buf_ptr;
tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_STRUCT,
WMI_MAX_ARP_OFFLOADS * sizeof(*arp));
buf_ptr += sizeof(*tlv);
for (i = 0; i < WMI_MAX_ARP_OFFLOADS; i++) {
arp = buf_ptr;
arp->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_ARP_OFFLOAD_TUPLE,
sizeof(*arp));
if (enable && i < offload->ipv4_count) {
/* Copy the target ip addr and flags */
arp->flags = cpu_to_le32(WMI_ARPOL_FLAGS_VALID);
memcpy(arp->target_ipaddr, offload->ipv4_addr[i], 4);
ath12k_dbg(ar->ab, ATH12K_DBG_WMI, "wmi arp offload address %pI4",
arp->target_ipaddr);
}
buf_ptr += sizeof(*arp);
}
*ptr = buf_ptr;
}
int ath12k_wmi_arp_ns_offload(struct ath12k *ar,
struct ath12k_vif *arvif,
struct wmi_arp_ns_offload_arg *offload,
bool enable)
{
struct wmi_set_arp_ns_offload_cmd *cmd;
struct wmi_tlv *tlv;
struct sk_buff *skb;
void *buf_ptr;
size_t len;
u8 ns_cnt, ns_ext_tuples = 0;
ns_cnt = offload->ipv6_count;
len = sizeof(*cmd) +
sizeof(*tlv) +
WMI_MAX_NS_OFFLOADS * sizeof(struct wmi_ns_offload_params) +
sizeof(*tlv) +
WMI_MAX_ARP_OFFLOADS * sizeof(struct wmi_arp_offload_params);
if (ns_cnt > WMI_MAX_NS_OFFLOADS) {
ns_ext_tuples = ns_cnt - WMI_MAX_NS_OFFLOADS;
len += sizeof(*tlv) +
ns_ext_tuples * sizeof(struct wmi_ns_offload_params);
}
skb = ath12k_wmi_alloc_skb(ar->wmi->wmi_ab, len);
if (!skb)
return -ENOMEM;
buf_ptr = skb->data;
cmd = buf_ptr;
cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_SET_ARP_NS_OFFLOAD_CMD,
sizeof(*cmd));
cmd->flags = cpu_to_le32(0);
cmd->vdev_id = cpu_to_le32(arvif->vdev_id);
cmd->num_ns_ext_tuples = cpu_to_le32(ns_ext_tuples);
buf_ptr += sizeof(*cmd);
ath12k_wmi_fill_ns_offload(ar, offload, &buf_ptr, enable, 0);
ath12k_wmi_fill_arp_offload(ar, offload, &buf_ptr, enable);
if (ns_ext_tuples)
ath12k_wmi_fill_ns_offload(ar, offload, &buf_ptr, enable, 1);
return ath12k_wmi_cmd_send(ar->wmi, skb, WMI_SET_ARP_NS_OFFLOAD_CMDID);
}
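A hypothetical enable-path sketch; sourcing the IPv4 address from mac80211's vif->cfg.arp_addr_list is an assumption here, not something this diff shows:

static int example_enable_arp_offload(struct ath12k *ar,
				      struct ath12k_vif *arvif,
				      struct ieee80211_vif *vif)
{
	struct wmi_arp_ns_offload_arg *offload;
	int ret;

	offload = kzalloc(sizeof(*offload), GFP_KERNEL);
	if (!offload)
		return -ENOMEM;

	/* offload replies for a single IPv4 address owned by this vif */
	offload->ipv4_count = 1;
	memcpy(offload->ipv4_addr[0], &vif->cfg.arp_addr_list[0], 4);
	ether_addr_copy(offload->mac_addr, vif->addr);

	ret = ath12k_wmi_arp_ns_offload(ar, arvif, offload, true);
	kfree(offload);

	return ret;
}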
int ath12k_wmi_gtk_rekey_offload(struct ath12k *ar,
struct ath12k_vif *arvif, bool enable)
{
struct ath12k_rekey_data *rekey_data = &arvif->rekey_data;
struct wmi_gtk_rekey_offload_cmd *cmd;
struct sk_buff *skb;
__le64 replay_ctr;
int len;
len = sizeof(*cmd);
skb = ath12k_wmi_alloc_skb(ar->wmi->wmi_ab, len);
if (!skb)
return -ENOMEM;
cmd = (struct wmi_gtk_rekey_offload_cmd *)skb->data;
cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_GTK_OFFLOAD_CMD, sizeof(*cmd));
cmd->vdev_id = cpu_to_le32(arvif->vdev_id);
if (enable) {
cmd->flags = cpu_to_le32(GTK_OFFLOAD_ENABLE_OPCODE);
/* the lengths in rekey_data and cmd are equal */
memcpy(cmd->kck, rekey_data->kck, sizeof(cmd->kck));
memcpy(cmd->kek, rekey_data->kek, sizeof(cmd->kek));
replay_ctr = cpu_to_le64(rekey_data->replay_ctr);
memcpy(cmd->replay_ctr, &replay_ctr,
sizeof(replay_ctr));
} else {
cmd->flags = cpu_to_le32(GTK_OFFLOAD_DISABLE_OPCODE);
}
ath12k_dbg(ar->ab, ATH12K_DBG_WMI, "offload gtk rekey vdev: %d %d\n",
arvif->vdev_id, enable);
return ath12k_wmi_cmd_send(ar->wmi, skb, WMI_GTK_OFFLOAD_CMDID);
}
int ath12k_wmi_gtk_rekey_getinfo(struct ath12k *ar,
struct ath12k_vif *arvif)
{
struct wmi_gtk_rekey_offload_cmd *cmd;
struct sk_buff *skb;
int len;
len = sizeof(*cmd);
skb = ath12k_wmi_alloc_skb(ar->wmi->wmi_ab, len);
if (!skb)
return -ENOMEM;
cmd = (struct wmi_gtk_rekey_offload_cmd *)skb->data;
cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_GTK_OFFLOAD_CMD, sizeof(*cmd));
cmd->vdev_id = cpu_to_le32(arvif->vdev_id);
cmd->flags = cpu_to_le32(GTK_OFFLOAD_REQUEST_STATUS_OPCODE);
ath12k_dbg(ar->ab, ATH12K_DBG_WMI, "get gtk rekey vdev_id: %d\n",
arvif->vdev_id);
return ath12k_wmi_cmd_send(ar->wmi, skb, WMI_GTK_OFFLOAD_CMDID);
}
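A sketch of how the two helpers above are presumably paired across a suspend/resume cycle; the actual call sites are in the suppressed WoW file diff below:

static int example_gtk_rekey_cycle(struct ath12k *ar,
				   struct ath12k_vif *arvif)
{
	int ret;

	/* suspend: hand the cached KCK/KEK/replay counter to firmware */
	ret = ath12k_wmi_gtk_rekey_offload(ar, arvif, true);
	if (ret)
		return ret;

	/* resume: query firmware; the WMI_GTK_OFFLOAD_STATUS_EVENTID
	 * handler above then calls ieee80211_gtk_rekey_notify()
	 */
	return ath12k_wmi_gtk_rekey_getinfo(ar, arvif);
}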
int ath12k_wmi_sta_keepalive(struct ath12k *ar,
const struct wmi_sta_keepalive_arg *arg)
{
struct wmi_sta_keepalive_arp_resp_params *arp;
struct ath12k_wmi_pdev *wmi = ar->wmi;
struct wmi_sta_keepalive_cmd *cmd;
struct sk_buff *skb;
size_t len;
len = sizeof(*cmd) + sizeof(*arp);
skb = ath12k_wmi_alloc_skb(wmi->wmi_ab, len);
if (!skb)
return -ENOMEM;
cmd = (struct wmi_sta_keepalive_cmd *)skb->data;
cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_STA_KEEPALIVE_CMD, sizeof(*cmd));
cmd->vdev_id = cpu_to_le32(arg->vdev_id);
cmd->enabled = cpu_to_le32(arg->enabled);
cmd->interval = cpu_to_le32(arg->interval);
cmd->method = cpu_to_le32(arg->method);
arp = (struct wmi_sta_keepalive_arp_resp_params *)(cmd + 1);
arp->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_STA_KEEPALVE_ARP_RESPONSE,
sizeof(*arp));
if (arg->method == WMI_STA_KEEPALIVE_METHOD_UNSOLICITED_ARP_RESPONSE ||
arg->method == WMI_STA_KEEPALIVE_METHOD_GRATUITOUS_ARP_REQUEST) {
arp->src_ip4_addr = cpu_to_le32(arg->src_ip4_addr);
arp->dest_ip4_addr = cpu_to_le32(arg->dest_ip4_addr);
ether_addr_copy(arp->dest_mac_addr.addr, arg->dest_mac_addr);
}
ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
"wmi sta keepalive vdev %d enabled %d method %d interval %d\n",
arg->vdev_id, arg->enabled, arg->method, arg->interval);
return ath12k_wmi_cmd_send(wmi, skb, WMI_STA_KEEPALIVE_CMDID);
}


@@ -24,6 +24,7 @@
struct ath12k_base;
struct ath12k;
struct ath12k_vif;
/* There is no signed version of __le32, so for a temporary solution come
* up with our own version. The idea is from fs/ntfs/endian.h.
@@ -2293,6 +2294,13 @@ struct ath12k_wmi_host_mem_chunk_arg {
u32 req_id;
};
enum ath12k_peer_metadata_version {
ATH12K_PEER_METADATA_V0,
ATH12K_PEER_METADATA_V1,
ATH12K_PEER_METADATA_V1A,
ATH12K_PEER_METADATA_V1B
};
struct ath12k_wmi_resource_config_arg {
u32 num_vdevs;
u32 num_peers;
@@ -2355,10 +2363,10 @@ struct ath12k_wmi_resource_config_arg {
u32 sched_params;
u32 twt_ap_pdev_count;
u32 twt_ap_sta_count;
bool is_reg_cc_ext_event_supported;
u8 dp_peer_meta_data_ver;
enum ath12k_peer_metadata_version peer_metadata_ver;
u32 ema_max_vap_cnt;
u32 ema_max_profile_period;
bool is_reg_cc_ext_event_supported;
};
struct ath12k_wmi_init_cmd_arg {
@@ -4902,6 +4910,556 @@ struct wmi_twt_disable_event {
__le32 status;
} __packed;
/* WOW structures */
enum wmi_wow_wakeup_event {
WOW_BMISS_EVENT = 0,
WOW_BETTER_AP_EVENT,
WOW_DEAUTH_RECVD_EVENT,
WOW_MAGIC_PKT_RECVD_EVENT,
WOW_GTK_ERR_EVENT,
WOW_FOURWAY_HSHAKE_EVENT,
WOW_EAPOL_RECVD_EVENT,
WOW_NLO_DETECTED_EVENT,
WOW_DISASSOC_RECVD_EVENT,
WOW_PATTERN_MATCH_EVENT,
WOW_CSA_IE_EVENT,
WOW_PROBE_REQ_WPS_IE_EVENT,
WOW_AUTH_REQ_EVENT,
WOW_ASSOC_REQ_EVENT,
WOW_HTT_EVENT,
WOW_RA_MATCH_EVENT,
WOW_HOST_AUTO_SHUTDOWN_EVENT,
WOW_IOAC_MAGIC_EVENT,
WOW_IOAC_SHORT_EVENT,
WOW_IOAC_EXTEND_EVENT,
WOW_IOAC_TIMER_EVENT,
WOW_DFS_PHYERR_RADAR_EVENT,
WOW_BEACON_EVENT,
WOW_CLIENT_KICKOUT_EVENT,
WOW_EVENT_MAX,
};
enum wmi_wow_interface_cfg {
WOW_IFACE_PAUSE_ENABLED,
WOW_IFACE_PAUSE_DISABLED
};
#define C2S(x) case x: return #x
static inline const char *wow_wakeup_event(enum wmi_wow_wakeup_event ev)
{
switch (ev) {
C2S(WOW_BMISS_EVENT);
C2S(WOW_BETTER_AP_EVENT);
C2S(WOW_DEAUTH_RECVD_EVENT);
C2S(WOW_MAGIC_PKT_RECVD_EVENT);
C2S(WOW_GTK_ERR_EVENT);
C2S(WOW_FOURWAY_HSHAKE_EVENT);
C2S(WOW_EAPOL_RECVD_EVENT);
C2S(WOW_NLO_DETECTED_EVENT);
C2S(WOW_DISASSOC_RECVD_EVENT);
C2S(WOW_PATTERN_MATCH_EVENT);
C2S(WOW_CSA_IE_EVENT);
C2S(WOW_PROBE_REQ_WPS_IE_EVENT);
C2S(WOW_AUTH_REQ_EVENT);
C2S(WOW_ASSOC_REQ_EVENT);
C2S(WOW_HTT_EVENT);
C2S(WOW_RA_MATCH_EVENT);
C2S(WOW_HOST_AUTO_SHUTDOWN_EVENT);
C2S(WOW_IOAC_MAGIC_EVENT);
C2S(WOW_IOAC_SHORT_EVENT);
C2S(WOW_IOAC_EXTEND_EVENT);
C2S(WOW_IOAC_TIMER_EVENT);
C2S(WOW_DFS_PHYERR_RADAR_EVENT);
C2S(WOW_BEACON_EVENT);
C2S(WOW_CLIENT_KICKOUT_EVENT);
C2S(WOW_EVENT_MAX);
default:
return NULL;
}
}
enum wmi_wow_wake_reason {
WOW_REASON_UNSPECIFIED = -1,
WOW_REASON_NLOD = 0,
WOW_REASON_AP_ASSOC_LOST,
WOW_REASON_LOW_RSSI,
WOW_REASON_DEAUTH_RECVD,
WOW_REASON_DISASSOC_RECVD,
WOW_REASON_GTK_HS_ERR,
WOW_REASON_EAP_REQ,
WOW_REASON_FOURWAY_HS_RECV,
WOW_REASON_TIMER_INTR_RECV,
WOW_REASON_PATTERN_MATCH_FOUND,
WOW_REASON_RECV_MAGIC_PATTERN,
WOW_REASON_P2P_DISC,
WOW_REASON_WLAN_HB,
WOW_REASON_CSA_EVENT,
WOW_REASON_PROBE_REQ_WPS_IE_RECV,
WOW_REASON_AUTH_REQ_RECV,
WOW_REASON_ASSOC_REQ_RECV,
WOW_REASON_HTT_EVENT,
WOW_REASON_RA_MATCH,
WOW_REASON_HOST_AUTO_SHUTDOWN,
WOW_REASON_IOAC_MAGIC_EVENT,
WOW_REASON_IOAC_SHORT_EVENT,
WOW_REASON_IOAC_EXTEND_EVENT,
WOW_REASON_IOAC_TIMER_EVENT,
WOW_REASON_ROAM_HO,
WOW_REASON_DFS_PHYERR_RADADR_EVENT,
WOW_REASON_BEACON_RECV,
WOW_REASON_CLIENT_KICKOUT_EVENT,
WOW_REASON_PAGE_FAULT = 0x3a,
WOW_REASON_DEBUG_TEST = 0xFF,
};
static inline const char *wow_reason(enum wmi_wow_wake_reason reason)
{
switch (reason) {
C2S(WOW_REASON_UNSPECIFIED);
C2S(WOW_REASON_NLOD);
C2S(WOW_REASON_AP_ASSOC_LOST);
C2S(WOW_REASON_LOW_RSSI);
C2S(WOW_REASON_DEAUTH_RECVD);
C2S(WOW_REASON_DISASSOC_RECVD);
C2S(WOW_REASON_GTK_HS_ERR);
C2S(WOW_REASON_EAP_REQ);
C2S(WOW_REASON_FOURWAY_HS_RECV);
C2S(WOW_REASON_TIMER_INTR_RECV);
C2S(WOW_REASON_PATTERN_MATCH_FOUND);
C2S(WOW_REASON_RECV_MAGIC_PATTERN);
C2S(WOW_REASON_P2P_DISC);
C2S(WOW_REASON_WLAN_HB);
C2S(WOW_REASON_CSA_EVENT);
C2S(WOW_REASON_PROBE_REQ_WPS_IE_RECV);
C2S(WOW_REASON_AUTH_REQ_RECV);
C2S(WOW_REASON_ASSOC_REQ_RECV);
C2S(WOW_REASON_HTT_EVENT);
C2S(WOW_REASON_RA_MATCH);
C2S(WOW_REASON_HOST_AUTO_SHUTDOWN);
C2S(WOW_REASON_IOAC_MAGIC_EVENT);
C2S(WOW_REASON_IOAC_SHORT_EVENT);
C2S(WOW_REASON_IOAC_EXTEND_EVENT);
C2S(WOW_REASON_IOAC_TIMER_EVENT);
C2S(WOW_REASON_ROAM_HO);
C2S(WOW_REASON_DFS_PHYERR_RADADR_EVENT);
C2S(WOW_REASON_BEACON_RECV);
C2S(WOW_REASON_CLIENT_KICKOUT_EVENT);
C2S(WOW_REASON_PAGE_FAULT);
C2S(WOW_REASON_DEBUG_TEST);
default:
return NULL;
}
}
#undef C2S
#define WOW_DEFAULT_BITMAP_PATTERN_SIZE 148
#define WOW_DEFAULT_BITMASK_SIZE 148
#define WOW_MIN_PATTERN_SIZE 1
#define WOW_MAX_PATTERN_SIZE 148
#define WOW_MAX_PKT_OFFSET 128
#define WOW_HDR_LEN (sizeof(struct ieee80211_hdr_3addr) + \
sizeof(struct rfc1042_hdr))
#define WOW_MAX_REDUCE (WOW_HDR_LEN - sizeof(struct ethhdr) - \
offsetof(struct ieee80211_hdr_3addr, addr1))
struct wmi_wow_bitmap_pattern_params {
__le32 tlv_header;
u8 patternbuf[WOW_DEFAULT_BITMAP_PATTERN_SIZE];
u8 bitmaskbuf[WOW_DEFAULT_BITMASK_SIZE];
__le32 pattern_offset;
__le32 pattern_len;
__le32 bitmask_len;
__le32 pattern_id;
} __packed;
struct wmi_wow_add_pattern_cmd {
__le32 tlv_header;
__le32 vdev_id;
__le32 pattern_id;
__le32 pattern_type;
} __packed;
struct wmi_wow_del_pattern_cmd {
__le32 tlv_header;
__le32 vdev_id;
__le32 pattern_id;
__le32 pattern_type;
} __packed;
enum wmi_tlv_pattern_type {
WOW_PATTERN_MIN = 0,
WOW_BITMAP_PATTERN = WOW_PATTERN_MIN,
WOW_IPV4_SYNC_PATTERN,
WOW_IPV6_SYNC_PATTERN,
WOW_WILD_CARD_PATTERN,
WOW_TIMER_PATTERN,
WOW_MAGIC_PATTERN,
WOW_IPV6_RA_PATTERN,
WOW_IOAC_PKT_PATTERN,
WOW_IOAC_TMR_PATTERN,
WOW_PATTERN_MAX
};
struct wmi_wow_add_del_event_cmd {
__le32 tlv_header;
__le32 vdev_id;
__le32 is_add;
__le32 event_bitmap;
} __packed;
struct wmi_wow_enable_cmd {
__le32 tlv_header;
__le32 enable;
__le32 pause_iface_config;
__le32 flags;
} __packed;
struct wmi_wow_host_wakeup_cmd {
__le32 tlv_header;
__le32 reserved;
} __packed;
struct wmi_wow_ev_param {
__le32 vdev_id;
__le32 flag;
__le32 wake_reason;
__le32 data_len;
} __packed;
struct wmi_wow_ev_pg_fault_param {
__le32 len;
u8 data[];
} __packed;
struct wmi_wow_ev_arg {
enum wmi_wow_wake_reason wake_reason;
};
#define WMI_PNO_MAX_SCHED_SCAN_PLANS 2
#define WMI_PNO_MAX_SCHED_SCAN_PLAN_INT 7200
#define WMI_PNO_MAX_SCHED_SCAN_PLAN_ITRNS 100
#define WMI_PNO_MAX_NETW_CHANNELS 26
#define WMI_PNO_MAX_NETW_CHANNELS_EX 60
#define WMI_PNO_MAX_SUPP_NETWORKS WLAN_SCAN_PARAMS_MAX_SSID
#define WMI_PNO_MAX_IE_LENGTH WLAN_SCAN_PARAMS_MAX_IE_LEN
/* size based on dot11 declaration without extra IEs, as we will not carry those for PNO */
#define WMI_PNO_MAX_PB_REQ_SIZE 450
#define WMI_PNO_24GHZ_DEFAULT_CH 1
#define WMI_PNO_5GHZ_DEFAULT_CH 36
#define WMI_ACTIVE_MAX_CHANNEL_TIME 40
#define WMI_PASSIVE_MAX_CHANNEL_TIME 110
/* SSID broadcast type */
enum wmi_ssid_bcast_type {
BCAST_UNKNOWN = 0,
BCAST_NORMAL = 1,
BCAST_HIDDEN = 2,
};
#define WMI_NLO_MAX_SSIDS 16
#define WMI_NLO_MAX_CHAN 48
#define WMI_NLO_CONFIG_STOP BIT(0)
#define WMI_NLO_CONFIG_START BIT(1)
#define WMI_NLO_CONFIG_RESET BIT(2)
#define WMI_NLO_CONFIG_SLOW_SCAN BIT(4)
#define WMI_NLO_CONFIG_FAST_SCAN BIT(5)
#define WMI_NLO_CONFIG_SSID_HIDE_EN BIT(6)
/* This bit is used to indicate if EPNO or supplicant PNO is enabled.
* Only one of them can be enabled at a given time
*/
#define WMI_NLO_CONFIG_ENLO BIT(7)
#define WMI_NLO_CONFIG_SCAN_PASSIVE BIT(8)
#define WMI_NLO_CONFIG_ENLO_RESET BIT(9)
#define WMI_NLO_CONFIG_SPOOFED_MAC_IN_PROBE_REQ BIT(10)
#define WMI_NLO_CONFIG_RANDOM_SEQ_NO_IN_PROBE_REQ BIT(11)
#define WMI_NLO_CONFIG_ENABLE_IE_WHITELIST_IN_PROBE_REQ BIT(12)
#define WMI_NLO_CONFIG_ENABLE_CNLO_RSSI_CONFIG BIT(13)
struct wmi_nlo_ssid_params {
__le32 valid;
struct ath12k_wmi_ssid_params ssid;
} __packed;
struct wmi_nlo_enc_params {
__le32 valid;
__le32 enc_type;
} __packed;
struct wmi_nlo_auth_params {
__le32 valid;
__le32 auth_type;
} __packed;
struct wmi_nlo_bcast_nw_params {
__le32 valid;
__le32 bcast_nw_type;
} __packed;
struct wmi_nlo_rssi_params {
__le32 valid;
__le32 rssi;
} __packed;
struct nlo_configured_params {
/* TLV tag and len */
__le32 tlv_header;
struct wmi_nlo_ssid_params ssid;
struct wmi_nlo_enc_params enc_type;
struct wmi_nlo_auth_params auth_type;
struct wmi_nlo_rssi_params rssi_cond;
/* indicates if the SSID is hidden or not */
struct wmi_nlo_bcast_nw_params bcast_nw_type;
} __packed;
struct wmi_network_type_arg {
struct cfg80211_ssid ssid;
u32 authentication;
u32 encryption;
u32 bcast_nw_type;
u8 channel_count;
u16 channels[WMI_PNO_MAX_NETW_CHANNELS_EX];
s32 rssi_threshold;
};
struct wmi_pno_scan_req_arg {
u8 enable;
u8 vdev_id;
u8 uc_networks_count;
struct wmi_network_type_arg a_networks[WMI_PNO_MAX_SUPP_NETWORKS];
u32 fast_scan_period;
u32 slow_scan_period;
u8 fast_scan_max_cycles;
bool do_passive_scan;
u32 delay_start_time;
u32 active_min_time;
u32 active_max_time;
u32 passive_min_time;
u32 passive_max_time;
/* mac address randomization attributes */
u32 enable_pno_scan_randomization;
u8 mac_addr[ETH_ALEN];
u8 mac_addr_mask[ETH_ALEN];
};
struct wmi_wow_nlo_config_cmd {
__le32 tlv_header;
__le32 flags;
__le32 vdev_id;
__le32 fast_scan_max_cycles;
__le32 active_dwell_time;
__le32 passive_dwell_time;
__le32 probe_bundle_size;
/* ART = IRT */
__le32 rest_time;
/* max value that can be reached after scan_backoff_multiplier */
__le32 max_rest_time;
__le32 scan_backoff_multiplier;
__le32 fast_scan_period;
/* specific to windows */
__le32 slow_scan_period;
__le32 no_of_ssids;
__le32 num_of_channels;
/* NLO scan start delay time in milliseconds */
__le32 delay_start_time;
/* MAC Address to use in Probe Req as SA */
struct ath12k_wmi_mac_addr_params mac_addr;
/* Mask on which MAC has to be randomized */
struct ath12k_wmi_mac_addr_params mac_mask;
/* IE bitmap to use in Probe Req */
__le32 ie_bitmap[8];
/* Number of vendor OUIs. In the TLV vendor_oui[] */
__le32 num_vendor_oui;
/* Number of connected NLO band preferences */
__le32 num_cnlo_band_pref;
/* The TLVs will follow.
* nlo_configured_params nlo_list[];
* u32 channel_list[num_of_channels];
*/
} __packed;
/* Definition of HW data filtering */
enum hw_data_filter_type {
WMI_HW_DATA_FILTER_DROP_NON_ARP_BC = BIT(0),
WMI_HW_DATA_FILTER_DROP_NON_ICMPV6_MC = BIT(1),
};
struct wmi_hw_data_filter_cmd {
__le32 tlv_header;
__le32 vdev_id;
__le32 enable;
__le32 hw_filter_bitmap;
} __packed;
struct wmi_hw_data_filter_arg {
u32 vdev_id;
bool enable;
u32 hw_filter_bitmap;
};
#define WMI_IPV6_UC_TYPE 0
#define WMI_IPV6_AC_TYPE 1
#define WMI_IPV6_MAX_COUNT 16
#define WMI_IPV4_MAX_COUNT 2
struct wmi_arp_ns_offload_arg {
u8 ipv4_addr[WMI_IPV4_MAX_COUNT][4];
u32 ipv4_count;
u32 ipv6_count;
u8 ipv6_addr[WMI_IPV6_MAX_COUNT][16];
u8 self_ipv6_addr[WMI_IPV6_MAX_COUNT][16];
u8 ipv6_type[WMI_IPV6_MAX_COUNT];
bool ipv6_valid[WMI_IPV6_MAX_COUNT];
u8 mac_addr[ETH_ALEN];
};
#define WMI_MAX_NS_OFFLOADS 2
#define WMI_MAX_ARP_OFFLOADS 2
#define WMI_ARPOL_FLAGS_VALID BIT(0)
#define WMI_ARPOL_FLAGS_MAC_VALID BIT(1)
#define WMI_ARPOL_FLAGS_REMOTE_IP_VALID BIT(2)
struct wmi_arp_offload_params {
__le32 tlv_header;
__le32 flags;
u8 target_ipaddr[4];
u8 remote_ipaddr[4];
struct ath12k_wmi_mac_addr_params target_mac;
} __packed;
#define WMI_NSOL_FLAGS_VALID BIT(0)
#define WMI_NSOL_FLAGS_MAC_VALID BIT(1)
#define WMI_NSOL_FLAGS_REMOTE_IP_VALID BIT(2)
#define WMI_NSOL_FLAGS_IS_IPV6_ANYCAST BIT(3)
#define WMI_NSOL_MAX_TARGET_IPS 2
struct wmi_ns_offload_params {
__le32 tlv_header;
__le32 flags;
u8 target_ipaddr[WMI_NSOL_MAX_TARGET_IPS][16];
u8 solicitation_ipaddr[16];
u8 remote_ipaddr[16];
struct ath12k_wmi_mac_addr_params target_mac;
} __packed;
struct wmi_set_arp_ns_offload_cmd {
__le32 tlv_header;
__le32 flags;
__le32 vdev_id;
__le32 num_ns_ext_tuples;
/* The TLVs follow:
* wmi_ns_offload_params ns[WMI_MAX_NS_OFFLOADS];
* wmi_arp_offload_params arp[WMI_MAX_ARP_OFFLOADS];
* wmi_ns_offload_params ns_ext[num_ns_ext_tuples];
*/
} __packed;
#define GTK_OFFLOAD_OPCODE_MASK 0xFF000000
#define GTK_OFFLOAD_ENABLE_OPCODE 0x01000000
#define GTK_OFFLOAD_DISABLE_OPCODE 0x02000000
#define GTK_OFFLOAD_REQUEST_STATUS_OPCODE 0x04000000
#define GTK_OFFLOAD_KEK_BYTES 16
#define GTK_OFFLOAD_KCK_BYTES 16
#define GTK_REPLAY_COUNTER_BYTES 8
#define WMI_MAX_KEY_LEN 32
#define IGTK_PN_SIZE 6
struct wmi_gtk_offload_status_event {
__le32 vdev_id;
__le32 flags;
__le32 refresh_cnt;
__le64 replay_ctr;
u8 igtk_key_index;
u8 igtk_key_length;
u8 igtk_key_rsc[IGTK_PN_SIZE];
u8 igtk_key[WMI_MAX_KEY_LEN];
u8 gtk_key_index;
u8 gtk_key_length;
u8 gtk_key_rsc[GTK_REPLAY_COUNTER_BYTES];
u8 gtk_key[WMI_MAX_KEY_LEN];
} __packed;
struct wmi_gtk_rekey_offload_cmd {
__le32 tlv_header;
__le32 vdev_id;
__le32 flags;
u8 kek[GTK_OFFLOAD_KEK_BYTES];
u8 kck[GTK_OFFLOAD_KCK_BYTES];
u8 replay_ctr[GTK_REPLAY_COUNTER_BYTES];
} __packed;
struct wmi_sta_keepalive_cmd {
__le32 tlv_header;
__le32 vdev_id;
__le32 enabled;
/* WMI_STA_KEEPALIVE_METHOD_ */
__le32 method;
/* in seconds */
__le32 interval;
/* following this structure is the TLV for struct
* wmi_sta_keepalive_arp_resp_params
*/
} __packed;
struct wmi_sta_keepalive_arp_resp_params {
__le32 tlv_header;
__le32 src_ip4_addr;
__le32 dest_ip4_addr;
struct ath12k_wmi_mac_addr_params dest_mac_addr;
} __packed;
struct wmi_sta_keepalive_arg {
u32 vdev_id;
u32 enabled;
u32 method;
u32 interval;
u32 src_ip4_addr;
u32 dest_ip4_addr;
const u8 dest_mac_addr[ETH_ALEN];
};
enum wmi_sta_keepalive_method {
WMI_STA_KEEPALIVE_METHOD_NULL_FRAME = 1,
WMI_STA_KEEPALIVE_METHOD_UNSOLICITED_ARP_RESPONSE = 2,
WMI_STA_KEEPALIVE_METHOD_ETHERNET_LOOPBACK = 3,
WMI_STA_KEEPALIVE_METHOD_GRATUITOUS_ARP_REQUEST = 4,
WMI_STA_KEEPALIVE_METHOD_MGMT_VENDOR_ACTION = 5,
};
#define WMI_STA_KEEPALIVE_INTERVAL_DEFAULT 30
#define WMI_STA_KEEPALIVE_INTERVAL_DISABLE 0
void ath12k_wmi_init_qcn9274(struct ath12k_base *ab,
struct ath12k_wmi_resource_config_arg *config);
void ath12k_wmi_init_wcn7850(struct ath12k_base *ab,
@@ -5054,4 +5612,28 @@ ath12k_wmi_mac_phy_get_hw_link_id(const struct ath12k_wmi_mac_phy_caps_params *p
WMI_CAPS_PARAMS_HW_LINK_ID);
}
int ath12k_wmi_wow_host_wakeup_ind(struct ath12k *ar);
int ath12k_wmi_wow_enable(struct ath12k *ar);
int ath12k_wmi_wow_del_pattern(struct ath12k *ar, u32 vdev_id, u32 pattern_id);
int ath12k_wmi_wow_add_pattern(struct ath12k *ar, u32 vdev_id, u32 pattern_id,
const u8 *pattern, const u8 *mask,
int pattern_len, int pattern_offset);
int ath12k_wmi_wow_add_wakeup_event(struct ath12k *ar, u32 vdev_id,
enum wmi_wow_wakeup_event event,
u32 enable);
int ath12k_wmi_wow_config_pno(struct ath12k *ar, u32 vdev_id,
struct wmi_pno_scan_req_arg *pno_scan);
int ath12k_wmi_hw_data_filter_cmd(struct ath12k *ar,
struct wmi_hw_data_filter_arg *arg);
int ath12k_wmi_arp_ns_offload(struct ath12k *ar,
struct ath12k_vif *arvif,
struct wmi_arp_ns_offload_arg *offload,
bool enable);
int ath12k_wmi_gtk_rekey_offload(struct ath12k *ar,
struct ath12k_vif *arvif, bool enable);
int ath12k_wmi_gtk_rekey_getinfo(struct ath12k *ar,
struct ath12k_vif *arvif);
int ath12k_wmi_sta_keepalive(struct ath12k *ar,
const struct wmi_sta_keepalive_arg *arg);
#endif

File diff suppressed because it is too large.


@@ -0,0 +1,62 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2020 The Linux Foundation. All rights reserved.
* Copyright (c) 2022, 2024 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef ATH12K_WOW_H
#define ATH12K_WOW_H
#define ATH12K_WOW_RETRY_NUM 10
#define ATH12K_WOW_RETRY_WAIT_MS 200
#define ATH12K_WOW_PATTERNS 22
struct ath12k_wow {
u32 max_num_patterns;
struct completion wakeup_completed;
struct wiphy_wowlan_support wowlan_support;
};
struct ath12k_pkt_pattern {
u8 pattern[WOW_MAX_PATTERN_SIZE];
u8 bytemask[WOW_MAX_PATTERN_SIZE];
int pattern_len;
int pkt_offset;
};
struct rfc1042_hdr {
u8 llc_dsap;
u8 llc_ssap;
u8 llc_ctrl;
u8 snap_oui[3];
__be16 eth_type;
} __packed;
#ifdef CONFIG_PM
int ath12k_wow_init(struct ath12k *ar);
int ath12k_wow_op_suspend(struct ieee80211_hw *hw,
struct cfg80211_wowlan *wowlan);
int ath12k_wow_op_resume(struct ieee80211_hw *hw);
void ath12k_wow_op_set_wakeup(struct ieee80211_hw *hw, bool enabled);
int ath12k_wow_enable(struct ath12k *ar);
int ath12k_wow_wakeup(struct ath12k *ar);
#else
static inline int ath12k_wow_init(struct ath12k *ar)
{
return 0;
}
static inline int ath12k_wow_enable(struct ath12k *ar)
{
return 0;
}
static inline int ath12k_wow_wakeup(struct ath12k *ar)
{
return 0;
}
#endif /* CONFIG_PM */
#endif /* ATH12K_WOW_H */


@@ -180,11 +180,10 @@ static struct libipw_txb *libipw_alloc_txb(int nr_frags, int txb_size,
struct libipw_txb *txb;
int i;
txb = kmalloc(struct_size(txb, fragments, nr_frags), gfp_mask);
txb = kzalloc(struct_size(txb, fragments, nr_frags), gfp_mask);
if (!txb)
return NULL;
memset(txb, 0, sizeof(struct libipw_txb));
txb->nr_frags = nr_frags;
txb->frag_size = txb_size;


@@ -147,32 +147,34 @@ struct iwl_fw_ini_region_internal_buffer {
* Configures parameters for region data collection
*
* @hdr: debug header
* @id: region id. Max id is &IWL_FW_INI_MAX_REGION_ID
* @id: region id. Max id is %IWL_FW_INI_MAX_REGION_ID
* @type: region type. One of &enum iwl_fw_ini_region_type
* @sub_type: region sub type
* @sub_type_ver: region sub type version
* @reserved: not in use
* @name: region name
* @dev_addr: device address configuration. Used by
* &IWL_FW_INI_REGION_DEVICE_MEMORY, &IWL_FW_INI_REGION_PERIPHERY_MAC,
* &IWL_FW_INI_REGION_PERIPHERY_PHY, &IWL_FW_INI_REGION_PERIPHERY_AUX,
* &IWL_FW_INI_REGION_PAGING, &IWL_FW_INI_REGION_CSR,
* &IWL_FW_INI_REGION_DRAM_IMR and &IWL_FW_INI_REGION_PCI_IOSF_CONFIG
* &IWL_FW_INI_REGION_DBGI_SRAM, &FW_TLV_DEBUG_REGION_TYPE_DBGI_SRAM,
* &IWL_FW_INI_REGION_PERIPHERY_SNPS_DPHYIP,
* %IWL_FW_INI_REGION_DEVICE_MEMORY, %IWL_FW_INI_REGION_PERIPHERY_MAC,
* %IWL_FW_INI_REGION_PERIPHERY_PHY, %IWL_FW_INI_REGION_PERIPHERY_AUX,
* %IWL_FW_INI_REGION_PAGING, %IWL_FW_INI_REGION_CSR,
* %IWL_FW_INI_REGION_DRAM_IMR and %IWL_FW_INI_REGION_PCI_IOSF_CONFIG
* %IWL_FW_INI_REGION_DBGI_SRAM, %FW_TLV_DEBUG_REGION_TYPE_DBGI_SRAM,
* %IWL_FW_INI_REGION_PERIPHERY_SNPS_DPHYIP,
* @dev_addr_range: device address range configuration. Used by
* &IWL_FW_INI_REGION_PERIPHERY_MAC_RANGE and
* &IWL_FW_INI_REGION_PERIPHERY_PHY_RANGE
* @fifos: fifos configuration. Used by &IWL_FW_INI_REGION_TXF and
* &IWL_FW_INI_REGION_RXF
* %IWL_FW_INI_REGION_PERIPHERY_MAC_RANGE and
* %IWL_FW_INI_REGION_PERIPHERY_PHY_RANGE
* @fifos: fifos configuration. Used by %IWL_FW_INI_REGION_TXF and
* %IWL_FW_INI_REGION_RXF
* @err_table: error table configuration. Used by
* IWL_FW_INI_REGION_LMAC_ERROR_TABLE and
* IWL_FW_INI_REGION_UMAC_ERROR_TABLE
* %IWL_FW_INI_REGION_LMAC_ERROR_TABLE and
* %IWL_FW_INI_REGION_UMAC_ERROR_TABLE
* @internal_buffer: internal monitor buffer configuration. Used by
* &IWL_FW_INI_REGION_INTERNAL_BUFFER
* %IWL_FW_INI_REGION_INTERNAL_BUFFER
* @special_mem: special device memory region, used by
* %IWL_FW_INI_REGION_SPECIAL_DEVICE_MEMORY
* @dram_alloc_id: dram allocation id. One of &enum iwl_fw_ini_allocation_id.
* Used by &IWL_FW_INI_REGION_DRAM_BUFFER
* @tlv_mask: tlv collection mask. Used by &IWL_FW_INI_REGION_TLV
* Used by %IWL_FW_INI_REGION_DRAM_BUFFER
* @tlv_mask: tlv collection mask. Used by %IWL_FW_INI_REGION_TLV
* @addrs: array of addresses attached to the end of the region tlv
*/
struct iwl_fw_ini_region_tlv {


@@ -310,6 +310,13 @@ struct iwl_ac_qos {
* @filter_flags: combination of &enum iwl_mac_filter_flags
* @qos_flags: from &enum iwl_mac_qos_flags
* @ac: one iwl_mac_qos configuration for each AC
* @ap: AP specific config data, see &struct iwl_mac_data_ap
* @go: GO specific config data, see &struct iwl_mac_data_go
* @sta: BSS client specific config data, see &struct iwl_mac_data_sta
* @p2p_sta: P2P client specific config data, see &struct iwl_mac_data_p2p_sta
* @p2p_dev: P2P-device specific config data, see &struct iwl_mac_data_p2p_dev
* @pibss: Pseudo-IBSS specific data, unused; see struct iwl_mac_data_pibss
* @ibss: IBSS specific config data, see &struct iwl_mac_data_ibss
*/
struct iwl_mac_ctx_cmd {
/* COMMON_INDEX_HDR_API_S_VER_1 */


@@ -704,6 +704,8 @@ struct iwl_lari_config_change_cmd_v10 {
* Each bit represents a country or region, and a band to activate
* according to the BIOS definitions.
* For LARI cmd version 11 - bits 0:4 are supported.
* For LARI cmd version 12 - bits 0:6 are supported and bits 7:31 are
* reserved. No need to mask out the reserved bits.
* @force_disable_channels_bitmap: Bitmap of disabled bands/channels.
* Each bit represents a set of channels in a specific band that should be
* disabled
@@ -731,9 +733,11 @@ struct iwl_lari_config_change_cmd {
__le32 oem_11be_allow_bitmap;
} __packed;
/* LARI_CHANGE_CONF_CMD_S_VER_11 */
/* LARI_CHANGE_CONF_CMD_S_VER_12 */
/* Activate UNII-1 (5.2GHz) for World Wide */
#define ACTIVATE_5G2_IN_WW_MASK BIT(4)
#define ACTIVATE_5G2_IN_WW_MASK BIT(4)
#define CHAN_STATE_ACTIVE_BITMAP_CMD_V11 0x1F
/**
* struct iwl_pnvm_init_complete_ntfy - PNVM initialization complete


@@ -144,6 +144,7 @@ struct iwl_phy_context_cmd_v1 {
* @rxchain_info: ???
* @sbb_bandwidth: 0 disabled, 1 - 40Mhz ... 4 - 320MHz
* @sbb_ctrl_channel_loc: location of the control channel
* @puncture_mask: bitmap of punctured subchannels
* @dsp_cfg_flags: set to 0
* @reserved: reserved to align to 64 bit
*/


@@ -569,9 +569,12 @@ enum iwl_ppag_flags {
* @v2: version 2
* version 3, 4, 5 and 6 are the same structure as v2,
* but has a different format of the flags bitmap
* @flags: values from &enum iwl_ppag_flags
* @gain: table of antenna gain values per chain and sub-band
* @reserved: reserved
* @v1.flags: values from &enum iwl_ppag_flags
* @v1.gain: table of antenna gain values per chain and sub-band
* @v1.reserved: reserved
* @v2.flags: values from &enum iwl_ppag_flags
* @v2.gain: table of antenna gain values per chain and sub-band
* @v2.reserved: reserved
*/
union iwl_ppag_table_cmd {
struct {


@@ -710,7 +710,15 @@ struct iwl_rx_mpdu_desc {
__le32 reorder_data;
union {
/**
* @v1: version 1 of the remaining RX descriptor,
* see &struct iwl_rx_mpdu_desc_v1
*/
struct iwl_rx_mpdu_desc_v1 v1;
/**
* @v3: version 3 of the remaining RX descriptor,
* see &struct iwl_rx_mpdu_desc_v3
*/
struct iwl_rx_mpdu_desc_v3 v3;
};
} __packed; /* RX_MPDU_RES_START_API_S_VER_3,


@@ -497,6 +497,7 @@ static size_t iwl_get_lari_config_cmd_size(u8 cmd_ver)
size_t cmd_size;
switch (cmd_ver) {
case 12:
case 11:
cmd_size = sizeof(struct iwl_lari_config_change_cmd);
break;
@@ -563,6 +564,9 @@ int iwl_fill_lari_config(struct iwl_fw_runtime *fwrt,
if (!ret) {
if (cmd_ver < 8)
value &= ~ACTIVATE_5G2_IN_WW_MASK;
if (cmd_ver < 12)
value &= CHAN_STATE_ACTIVE_BITMAP_CMD_V11;
cmd->chan_state_active_bitmap = cpu_to_le32(value);
}


@ -1240,12 +1240,6 @@ iwl_dbg_tlv_tp_trigger(struct iwl_fw_runtime *fwrt, bool sync,
}
fwrt->trans->dbg.restart_required = false;
IWL_DEBUG_FW(fwrt, "WRT: tp %d, reset_fw %d\n",
tp, dump_data.trig->reset_fw);
IWL_DEBUG_FW(fwrt,
"WRT: restart_required %d, last_tp_resetfw %d\n",
fwrt->trans->dbg.restart_required,
fwrt->trans->dbg.last_tp_resetfw);
if (fwrt->trans->trans_cfg->device_family ==
IWL_DEVICE_FAMILY_9000) {
@@ -1255,22 +1249,17 @@
IWL_FW_INI_RESET_FW_MODE_STOP_FW_ONLY) {
fwrt->trans->dbg.restart_required = false;
fwrt->trans->dbg.last_tp_resetfw = 0xFF;
IWL_DEBUG_FW(fwrt, "WRT: FW_ASSERT due to reset_fw_mode-no restart\n");
} else if (le32_to_cpu(dump_data.trig->reset_fw) ==
IWL_FW_INI_RESET_FW_MODE_STOP_AND_RELOAD_FW) {
IWL_DEBUG_FW(fwrt, "WRT: stop and reload firmware\n");
fwrt->trans->dbg.restart_required = true;
} else if (le32_to_cpu(dump_data.trig->reset_fw) ==
IWL_FW_INI_RESET_FW_MODE_STOP_FW_ONLY) {
IWL_DEBUG_FW(fwrt,
"WRT: stop only and no reload firmware\n");
fwrt->trans->dbg.restart_required = false;
fwrt->trans->dbg.last_tp_resetfw =
le32_to_cpu(dump_data.trig->reset_fw);
} else if (le32_to_cpu(dump_data.trig->reset_fw) ==
IWL_FW_INI_RESET_FW_MODE_NOTHING) {
IWL_DEBUG_FW(fwrt,
"WRT: nothing need to be done after debug collection\n");
/* nothing */
} else {
IWL_ERR(fwrt, "WRT: wrong resetfw %d\n",
le32_to_cpu(dump_data.trig->reset_fw));


@@ -120,6 +120,7 @@ enum CMD_MODE {
CMD_BLOCK_TXQS = BIT(3),
CMD_SEND_IN_D3 = BIT(4),
};
#define CMD_MODE_BITS 5
#define DEF_CMD_PAYLOAD_SIZE 320
@@ -712,7 +713,9 @@ struct iwl_dma_ptr {
struct iwl_cmd_meta {
/* only for SYNC commands, iff the reply skb is wanted */
struct iwl_host_cmd *source;
u32 flags;
u32 flags: CMD_MODE_BITS;
/* sg_offset is valid if it is non-zero */
u32 sg_offset: PAGE_SHIFT;
u32 tbs;
};
@@ -749,6 +752,7 @@ struct iwl_pcie_first_tb_buf {
* @first_tb_dma: DMA address for the first_tb_bufs start
* @entries: transmit entries (driver state)
* @lock: queue lock
* @reclaim_lock: reclaim lock
* @stuck_timer: timer that fires if queue gets stuck
* @trans: pointer back to transport (for timer)
* @need_update: indicates need to update read/write index
@@ -791,6 +795,8 @@ struct iwl_txq {
struct iwl_pcie_txq_entry *entries;
/* lock for syncing changes on the queue */
spinlock_t lock;
/* lock to prevent concurrent reclaim */
spinlock_t reclaim_lock;
unsigned long frozen_expiry_remainder;
struct timer_list stuck_timer;
struct iwl_trans *trans;


@@ -11,6 +11,7 @@
HOW(BLOCKED_TPT) \
HOW(BLOCKED_FW) \
HOW(BLOCKED_NON_BSS) \
HOW(BLOCKED_ROC) \
HOW(EXIT_MISSED_BEACON) \
HOW(EXIT_LOW_RSSI) \
HOW(EXIT_COEX) \
@@ -1033,15 +1034,17 @@ void iwl_mvm_block_esr(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
if (WARN_ON(!(reason & IWL_MVM_BLOCK_ESR_REASONS)))
return;
if (!(mvmvif->esr_disable_reason & reason)) {
IWL_DEBUG_INFO(mvm,
"Blocking EMLSR mode. reason = %s (0x%x)\n",
iwl_get_esr_state_string(reason), reason);
iwl_mvm_print_esr_state(mvm, mvmvif->esr_disable_reason);
}
if (mvmvif->esr_disable_reason & reason)
return;
IWL_DEBUG_INFO(mvm,
"Blocking EMLSR mode. reason = %s (0x%x)\n",
iwl_get_esr_state_string(reason), reason);
mvmvif->esr_disable_reason |= reason;
iwl_mvm_print_esr_state(mvm, mvmvif->esr_disable_reason);
iwl_mvm_exit_esr(mvm, vif, reason, link_to_keep);
}


@@ -59,13 +59,13 @@ static const struct ieee80211_iface_combination iwl_mvm_iface_combinations[] = {
.num_different_channels = 2,
.max_interfaces = 3,
.limits = iwl_mvm_limits,
.n_limits = ARRAY_SIZE(iwl_mvm_limits) - 1,
.n_limits = ARRAY_SIZE(iwl_mvm_limits),
},
{
.num_different_channels = 1,
.max_interfaces = 3,
.limits = iwl_mvm_limits_ap,
.n_limits = ARRAY_SIZE(iwl_mvm_limits_ap) - 1,
.n_limits = ARRAY_SIZE(iwl_mvm_limits_ap),
},
};
@@ -372,7 +372,7 @@ int iwl_mvm_mac_setup_register(struct iwl_mvm *mvm)
if (mvm->mld_api_is_used && mvm->nvm_data->sku_cap_11be_enable &&
!iwlwifi_mod_params.disable_11ax &&
!iwlwifi_mod_params.disable_11be) {
hw->wiphy->flags |= WIPHY_FLAG_DISABLE_WEXT;
hw->wiphy->flags |= WIPHY_FLAG_SUPPORTS_MLO;
/* we handle this already earlier, but need it for MLO */
ieee80211_hw_set(hw, HANDLES_QUIET_CSA);
}
@@ -383,12 +383,6 @@ int iwl_mvm_mac_setup_register(struct iwl_mvm *mvm)
if (!mvm->mld_api_is_used)
ieee80211_hw_set(hw, TIMING_BEACON_ONLY);
/* We should probably have this, but mac80211
* currently doesn't support it for MLO.
*/
if (!(hw->wiphy->flags & WIPHY_FLAG_SUPPORTS_MLO))
ieee80211_hw_set(hw, DEAUTH_NEED_MGD_TX_PREP);
/*
* On older devices, enabling TX A-MSDU occasionally leads to
* something getting messed up, the command read from the FIFO
@@ -2855,6 +2849,8 @@ static void iwl_mvm_bss_info_changed_station(struct iwl_mvm *mvm,
if (changes & BSS_CHANGED_ASSOC) {
if (vif->cfg.assoc) {
mvmvif->session_prot_connection_loss = false;
/* clear statistics to get clean beacon counter */
iwl_mvm_request_statistics(mvm, true);
for_each_mvm_vif_valid_link(mvmvif, i)
@@ -4270,8 +4266,12 @@ void iwl_mvm_mac_mgd_prepare_tx(struct ieee80211_hw *hw,
struct ieee80211_vif *vif,
struct ieee80211_prep_tx_info *info)
{
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
if (info->was_assoc && !mvmvif->session_prot_connection_loss)
return;
guard(mvm)(mvm);
iwl_mvm_protect_assoc(mvm, vif, info->duration, info->link_id);
}


@@ -873,6 +873,8 @@ static void iwl_mvm_mld_vif_cfg_changed_station(struct iwl_mvm *mvm,
if (changes & BSS_CHANGED_ASSOC) {
if (vif->cfg.assoc) {
mvmvif->session_prot_connection_loss = false;
/* clear statistics to get clean beacon counter */
iwl_mvm_request_statistics(mvm, true);
iwl_mvm_sf_update(mvm, vif, false);


@@ -450,6 +450,38 @@ struct iwl_mvm_esr_exit {
* @unblock_esr_tpt_wk: work for unblocking EMLSR when tpt is high enough.
* @roc_activity: currently running ROC activity for this vif (or
* ROC_NUM_ACTIVITIES if no activity is running).
* @session_prot_connection_loss: the connection was lost due to session
* protection ending without receiving a beacon, so we need to now
* protect the deauth separately
* @ap_early_keys: The firmware cannot install keys before stations etc.,
* but higher layers work differently, so we store the keys here for
* later installation.
* @ap_sta: pointer to the AP STA data structure
* @csa_count: CSA counter (old CSA implementation w/o firmware)
* @csa_misbehave: CSA AP misbehaviour flag (old implementation)
* @csa_target_freq: CSA target channel frequency (old implementation)
* @csa_work: CSA work (old implementation)
* @dbgfs_bf: beamforming debugfs data
* @dbgfs_dir: debugfs directory for this vif
* @dbgfs_pm: power management debugfs data
* @dbgfs_quota_min: debugfs value for minimal quota
* @dbgfs_slink: debugfs symlink for this interface
* @ftm_unprotected: unprotected FTM debugfs override
* @hs_time_event_data: hotspot/AUX ROC time event data
* @mac_pwr_cmd: debugfs override for MAC power command
* @target_ipv6_addrs: IPv6 addresses on this interface for offload
* @num_target_ipv6_addrs: number of @target_ipv6_addrs
* @tentative_addrs: bitmap of tentative IPv6 addresses in @target_ipv6_addrs
* @rekey_data: rekeying data for WoWLAN GTK rekey offload
* @seqno: storage for seqno for older firmware D0/D3 transition
* @seqno_valid: indicates @seqno is valid
* @time_event_data: session protection time event data
* @tsf_id: the TSF resource ID assigned in firmware (for firmware needing that)
* @tx_key_idx: WEP transmit key index for D3
* @uapsd_misbehaving_ap_addr: MLD address/BSSID of U-APSD misbehaving AP, to
* not use U-APSD on reconnection
* @uapsd_nonagg_detected_wk: worker for handling detection of no aggregation
* in U-APSD
*/
struct iwl_mvm_vif {
struct iwl_mvm *mvm;
@ -463,6 +495,7 @@ struct iwl_mvm_vif {
bool pm_enabled;
bool monitor_active;
bool esr_active;
bool session_prot_connection_loss;
u8 low_latency: 6;
u8 low_latency_actual: 1;
@ -735,24 +768,20 @@ struct iwl_mvm_tcm {
* struct iwl_mvm_reorder_buffer - per ra/tid/queue reorder buffer
* @head_sn: reorder window head sn
* @num_stored: number of mpdus stored in the buffer
* @buf_size: the reorder buffer size as set by the last addba request
* @queue: queue of this reorder buffer
* @last_amsdu: track last A-MSDU SN for duplication detection
* @last_sub_index: track A-MSDU sub-frame index for duplication detection
* @valid: reordering is valid for this queue
* @lock: protect reorder buffer internal state
* @mvm: mvm pointer, needed for frame timer context
*/
struct iwl_mvm_reorder_buffer {
u16 head_sn;
u16 num_stored;
u16 buf_size;
int queue;
u16 last_amsdu;
u8 last_sub_index;
bool valid;
spinlock_t lock;
struct iwl_mvm *mvm;
} ____cacheline_aligned_in_smp;
/**
@ -774,6 +803,7 @@ __aligned(roundup_pow_of_two(sizeof(struct sk_buff_head)))
* @tid: tid of the session
* @baid: baid of the session
* @timeout: the timeout set in the addba request
* @buf_size: the reorder buffer size as set by the last addba request
* @entries_per_queue: # of buffers per queue, this actually gets
* aligned up to avoid cache line sharing between queues
* @last_rx: last rx jiffies, updated only if timeout passed from last update
@ -790,13 +820,14 @@ struct iwl_mvm_baid_data {
u8 tid;
u8 baid;
u16 timeout;
u16 buf_size;
u16 entries_per_queue;
unsigned long last_rx;
struct timer_list session_timer;
struct iwl_mvm_baid_data __rcu **rcu_ptr;
struct iwl_mvm *mvm;
struct iwl_mvm_reorder_buffer reorder_buf[IWL_MAX_RX_HW_QUEUES];
struct iwl_mvm_reorder_buf_entry entries[];
struct iwl_mvm_reorder_buf_entry entries[] ____cacheline_aligned_in_smp;
};
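
The effect of this change is that the reorder window size lives once in the shared BAID data instead of once per RX queue. A tiny illustrative sketch with simplified types and made-up sizes:

#include <stdio.h>

#define NUM_QUEUES 4

struct reorder_buffer {
	unsigned short head_sn;		/* per queue; no buf_size anymore */
};

struct baid_data {
	unsigned short buf_size;	/* shared, set once at ADDBA time */
	struct reorder_buffer reorder_buf[NUM_QUEUES];
};

int main(void)
{
	struct baid_data ba = { .buf_size = 64 };
	unsigned short sn = 1000;

	/* every queue indexes its entries with the shared window size */
	printf("slot for SN %u = %u\n", sn, sn % ba.buf_size);
	return 0;
}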
static inline struct iwl_mvm_baid_data *

@ -566,7 +566,7 @@ static void iwl_mvm_release_frames(struct iwl_mvm *mvm,
lockdep_assert_held(&reorder_buf->lock);
while (ieee80211_sn_less(ssn, nssn)) {
int index = ssn % reorder_buf->buf_size;
int index = ssn % baid_data->buf_size;
struct sk_buff_head *skb_list = &entries[index].frames;
struct sk_buff *skb;
@ -617,7 +617,7 @@ static void iwl_mvm_del_ba(struct iwl_mvm *mvm, int queue,
spin_lock_bh(&reorder_buf->lock);
iwl_mvm_release_frames(mvm, sta, NULL, ba_data, reorder_buf,
ieee80211_sn_add(reorder_buf->head_sn,
reorder_buf->buf_size));
ba_data->buf_size));
spin_unlock_bh(&reorder_buf->lock);
out:
@ -839,7 +839,7 @@ static bool iwl_mvm_reorder(struct iwl_mvm *mvm,
}
/* put in reorder buffer */
index = sn % buffer->buf_size;
index = sn % baid_data->buf_size;
__skb_queue_tail(&entries[index].frames, skb);
buffer->num_stored++;

@ -2743,7 +2743,7 @@ static void iwl_mvm_free_reorder(struct iwl_mvm *mvm,
*/
WARN_ON(1);
for (j = 0; j < reorder_buf->buf_size; j++)
for (j = 0; j < data->buf_size; j++)
__skb_queue_purge(&entries[j].frames);
spin_unlock_bh(&reorder_buf->lock);
@ -2752,7 +2752,7 @@ static void iwl_mvm_free_reorder(struct iwl_mvm *mvm,
static void iwl_mvm_init_reorder_buffer(struct iwl_mvm *mvm,
struct iwl_mvm_baid_data *data,
u16 ssn, u16 buf_size)
u16 ssn)
{
int i;
@ -2765,12 +2765,10 @@ static void iwl_mvm_init_reorder_buffer(struct iwl_mvm *mvm,
reorder_buf->num_stored = 0;
reorder_buf->head_sn = ssn;
reorder_buf->buf_size = buf_size;
spin_lock_init(&reorder_buf->lock);
reorder_buf->mvm = mvm;
reorder_buf->queue = i;
reorder_buf->valid = false;
for (j = 0; j < reorder_buf->buf_size; j++)
for (j = 0; j < data->buf_size; j++)
__skb_queue_head_init(&entries[j].frames);
}
}
@ -2979,13 +2977,14 @@ int iwl_mvm_sta_rx_agg(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
baid_data->mvm = mvm;
baid_data->tid = tid;
baid_data->sta_mask = iwl_mvm_sta_fw_id_mask(mvm, sta, -1);
baid_data->buf_size = buf_size;
mvm_sta->tid_to_baid[tid] = baid;
if (timeout)
mod_timer(&baid_data->session_timer,
TU_TO_EXP_TIME(timeout * 2));
iwl_mvm_init_reorder_buffer(mvm, baid_data, ssn, buf_size);
iwl_mvm_init_reorder_buffer(mvm, baid_data, ssn);
/*
* protect the BA data with RCU to cover a case where our
* internal RX sync mechanism will timeout (not that it's

@ -222,6 +222,8 @@ static bool iwl_mvm_te_check_disconnect(struct iwl_mvm *mvm,
iwl_dbg_tlv_time_point(&mvm->fwrt,
IWL_FW_INI_TIME_POINT_ASSOC_FAILED,
NULL);
mvmvif->session_prot_connection_loss = true;
}
iwl_mvm_connection_loss(mvm, vif, errmsg);
@ -1055,12 +1057,21 @@ void iwl_mvm_roc_duration_and_delay(struct ieee80211_vif *vif,
u32 *duration_tu,
u32 *delay)
{
u32 dtim_interval = vif->bss_conf.dtim_period *
vif->bss_conf.beacon_int;
struct ieee80211_bss_conf *link_conf;
unsigned int link_id;
u32 dtim_interval = 0;
*delay = AUX_ROC_MIN_DELAY;
*duration_tu = MSEC_TO_TU(duration_ms);
rcu_read_lock();
for_each_vif_active_link(vif, link_conf, link_id) {
dtim_interval =
max_t(u32, dtim_interval,
link_conf->dtim_period * link_conf->beacon_int);
}
rcu_read_unlock();
/*
* If we are associated we want the delay time to be at least one
* dtim interval so that the FW can wait until after the DTIM and
@ -1069,8 +1080,10 @@ void iwl_mvm_roc_duration_and_delay(struct ieee80211_vif *vif,
* Since we want to use almost a whole dtim interval we would also
* like the delay to be for 2-3 dtim intervals, in case there are
* other time events with higher priority.
* dtim_interval should never be 0, it can be 1 if we don't know it
* (we haven't heard any beacon yet).
*/
if (vif->cfg.assoc) {
if (vif->cfg.assoc && !WARN_ON(!dtim_interval)) {
*delay = min_t(u32, dtim_interval * 3, AUX_ROC_MAX_DELAY);
/* We cannot remain off-channel longer than the DTIM interval */
if (dtim_interval <= *duration_tu) {

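The delay computation can be tried out in isolation. A sketch with made-up beacon parameters and a hypothetical maximum-delay cap (the real AUX_ROC_MAX_DELAY value is not shown in this hunk):

#include <stdio.h>

struct link_conf {
	unsigned int dtim_period, beacon_int;	/* in TU */
};

int main(void)
{
	/* two active links of an MLO vif, values made up */
	struct link_conf links[] = { { 2, 100 }, { 3, 100 } };
	unsigned int dtim_interval = 0, delay;
	const unsigned int max_delay = 1200;	/* hypothetical cap */

	for (unsigned int i = 0; i < 2; i++) {
		unsigned int v = links[i].dtim_period * links[i].beacon_int;

		if (v > dtim_interval)
			dtim_interval = v;	/* max over active links */
	}

	/* aim for up to 3 DTIM intervals of delay, capped */
	delay = dtim_interval * 3;
	if (delay > max_delay)
		delay = max_delay;
	printf("dtim_interval=%u TU, delay=%u TU\n", dtim_interval, delay);
	return 0;
}
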
@ -603,6 +603,22 @@ struct iwl_tso_hdr_page {
u8 *pos;
};
/*
* Note that we put this struct *last* in the page. By doing that, we ensure
* that no TB referencing this page can trigger the 32-bit boundary hardware
* bug.
*/
struct iwl_tso_page_info {
dma_addr_t dma_addr;
struct page *next;
refcount_t use_count;
};
#define IWL_TSO_PAGE_DATA_SIZE (PAGE_SIZE - sizeof(struct iwl_tso_page_info))
#define IWL_TSO_PAGE_INFO(addr) \
((struct iwl_tso_page_info *)(((unsigned long)addr & PAGE_MASK) + \
IWL_TSO_PAGE_DATA_SIZE))
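
The macro is pure pointer arithmetic: round any address within the page down to the page start and add the data size. A stand-alone demonstration with a fixed 4 KiB page and simplified stand-ins for dma_addr_t and refcount_t:

#include <stdint.h>
#include <stdio.h>

#define MY_PAGE_SIZE	4096UL
#define MY_PAGE_MASK	(~(MY_PAGE_SIZE - 1))

struct tso_page_info {
	uint64_t dma_addr;
	void *next;
	int use_count;		/* stands in for refcount_t */
};

#define DATA_SIZE (MY_PAGE_SIZE - sizeof(struct tso_page_info))
#define PAGE_INFO(addr) \
	((struct tso_page_info *)(((uintptr_t)(addr) & MY_PAGE_MASK) + DATA_SIZE))

int main(void)
{
	uintptr_t page = 0x10000;		/* pretend page start */
	uintptr_t in_page = page + 1234;	/* any address inside it */

	/* the same info struct is found from any offset within the page */
	printf("info at %#lx and %#lx\n",
	       (unsigned long)(uintptr_t)PAGE_INFO(in_page),
	       (unsigned long)(uintptr_t)PAGE_INFO(page));
	return 0;
}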
int iwl_pcie_tx_init(struct iwl_trans *trans);
void iwl_pcie_tx_start(struct iwl_trans *trans, u32 scd_base_addr);
int iwl_pcie_tx_stop(struct iwl_trans *trans);
@ -623,9 +639,23 @@ void iwl_trans_pcie_tx_reset(struct iwl_trans *trans);
int iwl_pcie_txq_alloc(struct iwl_trans *trans, struct iwl_txq *txq,
int slots_num, bool cmd_queue);
struct iwl_tso_hdr_page *iwl_pcie_get_page_hdr(struct iwl_trans *trans,
size_t len, struct sk_buff *skb);
void iwl_pcie_free_tso_page(struct iwl_trans *trans, struct sk_buff *skb);
dma_addr_t iwl_pcie_get_sgt_tb_phys(struct sg_table *sgt, void *addr);
struct sg_table *iwl_pcie_prep_tso(struct iwl_trans *trans, struct sk_buff *skb,
struct iwl_cmd_meta *cmd_meta,
u8 **hdr, unsigned int hdr_room);
void iwl_pcie_free_tso_pages(struct iwl_trans *trans, struct sk_buff *skb,
struct iwl_cmd_meta *cmd_meta);
static inline dma_addr_t iwl_pcie_get_tso_page_phys(void *addr)
{
dma_addr_t res;
res = IWL_TSO_PAGE_INFO(addr)->dma_addr;
res += (unsigned long)addr & ~PAGE_MASK;
return res;
}
static inline dma_addr_t
iwl_txq_get_first_tb_dma(struct iwl_txq *txq, int idx)

@ -19,8 +19,10 @@ static struct page *get_workaround_page(struct iwl_trans *trans,
struct sk_buff *skb)
{
struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
struct iwl_tso_page_info *info;
struct page **page_ptr;
struct page *ret;
dma_addr_t phys;
page_ptr = (void *)((u8 *)skb->cb + trans_pcie->txqs.page_offs);
@ -28,8 +30,22 @@ static struct page *get_workaround_page(struct iwl_trans *trans,
if (!ret)
return NULL;
info = IWL_TSO_PAGE_INFO(page_address(ret));
/* Create a DMA mapping for the page */
phys = dma_map_page_attrs(trans->dev, ret, 0, PAGE_SIZE,
DMA_TO_DEVICE, DMA_ATTR_SKIP_CPU_SYNC);
if (unlikely(dma_mapping_error(trans->dev, phys))) {
__free_page(ret);
return NULL;
}
/* Store physical address and set use count */
info->dma_addr = phys;
refcount_set(&info->use_count, 1);
/* set the chaining pointer to the previous page if there */
*(void **)((u8 *)page_address(ret) + PAGE_SIZE - sizeof(void *)) = *page_ptr;
info->next = *page_ptr;
*page_ptr = ret;
return ret;
@ -45,7 +61,8 @@ static int iwl_txq_gen2_set_tb_with_wa(struct iwl_trans *trans,
struct sk_buff *skb,
struct iwl_tfh_tfd *tfd,
dma_addr_t phys, void *virt,
u16 len, struct iwl_cmd_meta *meta)
u16 len, struct iwl_cmd_meta *meta,
bool unmap)
{
dma_addr_t oldphys = phys;
struct page *page;
@ -76,7 +93,7 @@ static int iwl_txq_gen2_set_tb_with_wa(struct iwl_trans *trans,
* a new mapping for it so the device will not fail.
*/
if (WARN_ON(len > PAGE_SIZE - sizeof(void *))) {
if (WARN_ON(len > IWL_TSO_PAGE_DATA_SIZE)) {
ret = -ENOBUFS;
goto unmap;
}
@ -89,10 +106,27 @@ static int iwl_txq_gen2_set_tb_with_wa(struct iwl_trans *trans,
memcpy(page_address(page), virt, len);
phys = dma_map_single(trans->dev, page_address(page), len,
DMA_TO_DEVICE);
if (unlikely(dma_mapping_error(trans->dev, phys)))
return -ENOMEM;
/*
* This is a bit odd, but performance does not matter here; what
* matters are the expectations of the calling code and the TB cleanup
* function.
*
* As such, if unmap is set, then create another mapping for the TB
* entry as it will be unmapped later. On the other hand, if it is not
* set, then the TB entry will not be unmapped and instead we simply
* reference and sync the mapping that get_workaround_page() created.
*/
if (unmap) {
phys = dma_map_single(trans->dev, page_address(page), len,
DMA_TO_DEVICE);
if (unlikely(dma_mapping_error(trans->dev, phys)))
return -ENOMEM;
} else {
phys = iwl_pcie_get_tso_page_phys(page_address(page));
dma_sync_single_for_device(trans->dev, phys, len,
DMA_TO_DEVICE);
}
ret = iwl_txq_gen2_set_tb(trans, tfd, phys, len);
if (ret < 0) {
/* unmap the new allocation as single */
@ -100,6 +134,7 @@ static int iwl_txq_gen2_set_tb_with_wa(struct iwl_trans *trans,
meta = NULL;
goto unmap;
}
IWL_DEBUG_TX(trans,
"TB bug workaround: copied %d bytes from 0x%llx to 0x%llx\n",
len, (unsigned long long)oldphys,
@ -107,6 +142,9 @@ static int iwl_txq_gen2_set_tb_with_wa(struct iwl_trans *trans,
ret = 0;
unmap:
if (!unmap)
goto trace;
if (meta)
dma_unmap_page(trans->dev, oldphys, len, DMA_TO_DEVICE);
else
@ -119,7 +157,9 @@ static int iwl_txq_gen2_set_tb_with_wa(struct iwl_trans *trans,
static int iwl_txq_gen2_build_amsdu(struct iwl_trans *trans,
struct sk_buff *skb,
struct iwl_tfh_tfd *tfd, int start_len,
struct iwl_tfh_tfd *tfd,
struct iwl_cmd_meta *out_meta,
int start_len,
u8 hdr_len,
struct iwl_device_tx_cmd *dev_cmd)
{
@ -128,9 +168,10 @@ static int iwl_txq_gen2_build_amsdu(struct iwl_trans *trans,
struct ieee80211_hdr *hdr = (void *)skb->data;
unsigned int snap_ip_tcp_hdrlen, ip_hdrlen, total_len, hdr_room;
unsigned int mss = skb_shinfo(skb)->gso_size;
dma_addr_t start_hdr_phys;
u16 length, amsdu_pad;
u8 *start_hdr;
struct iwl_tso_hdr_page *hdr_page;
struct sg_table *sgt;
struct tso_t tso;
trace_iwlwifi_dev_tx(trans->dev, skb, tfd, sizeof(*tfd),
@ -146,11 +187,11 @@ static int iwl_txq_gen2_build_amsdu(struct iwl_trans *trans,
(3 + snap_ip_tcp_hdrlen + sizeof(struct ethhdr));
/* Our device supports 9 segments at most, it will fit in 1 page */
hdr_page = iwl_pcie_get_page_hdr(trans, hdr_room, skb);
if (!hdr_page)
sgt = iwl_pcie_prep_tso(trans, skb, out_meta, &start_hdr, hdr_room);
if (!sgt)
return -ENOMEM;
start_hdr = hdr_page->pos;
start_hdr_phys = iwl_pcie_get_tso_page_phys(start_hdr);
/*
* Pull the ieee80211 header to be able to use TSO core,
@ -172,36 +213,34 @@ static int iwl_txq_gen2_build_amsdu(struct iwl_trans *trans,
unsigned int data_left = min_t(unsigned int, mss, total_len);
unsigned int tb_len;
dma_addr_t tb_phys;
u8 *subf_hdrs_start = hdr_page->pos;
u8 *pos_hdr = start_hdr;
total_len -= data_left;
memset(hdr_page->pos, 0, amsdu_pad);
hdr_page->pos += amsdu_pad;
memset(pos_hdr, 0, amsdu_pad);
pos_hdr += amsdu_pad;
amsdu_pad = (4 - (sizeof(struct ethhdr) + snap_ip_tcp_hdrlen +
data_left)) & 0x3;
ether_addr_copy(hdr_page->pos, ieee80211_get_DA(hdr));
hdr_page->pos += ETH_ALEN;
ether_addr_copy(hdr_page->pos, ieee80211_get_SA(hdr));
hdr_page->pos += ETH_ALEN;
ether_addr_copy(pos_hdr, ieee80211_get_DA(hdr));
pos_hdr += ETH_ALEN;
ether_addr_copy(pos_hdr, ieee80211_get_SA(hdr));
pos_hdr += ETH_ALEN;
length = snap_ip_tcp_hdrlen + data_left;
*((__be16 *)hdr_page->pos) = cpu_to_be16(length);
hdr_page->pos += sizeof(length);
*((__be16 *)pos_hdr) = cpu_to_be16(length);
pos_hdr += sizeof(length);
/*
* This will copy the SNAP as well which will be considered
* as MAC header.
*/
tso_build_hdr(skb, hdr_page->pos, &tso, data_left, !total_len);
tso_build_hdr(skb, pos_hdr, &tso, data_left, !total_len);
hdr_page->pos += snap_ip_tcp_hdrlen;
pos_hdr += snap_ip_tcp_hdrlen;
tb_len = pos_hdr - start_hdr;
tb_phys = iwl_pcie_get_tso_page_phys(start_hdr);
tb_len = hdr_page->pos - start_hdr;
tb_phys = dma_map_single(trans->dev, start_hdr,
tb_len, DMA_TO_DEVICE);
if (unlikely(dma_mapping_error(trans->dev, tb_phys)))
goto out_err;
/*
* No need for _with_wa, this is from the TSO page and
* we leave some space at the end of it so can't hit
@ -211,21 +250,24 @@ static int iwl_txq_gen2_build_amsdu(struct iwl_trans *trans,
trace_iwlwifi_dev_tx_tb(trans->dev, skb, start_hdr,
tb_phys, tb_len);
/* add this subframe's headers' length to the tx_cmd */
le16_add_cpu(&tx_cmd->len, hdr_page->pos - subf_hdrs_start);
le16_add_cpu(&tx_cmd->len, tb_len);
/* prepare the start_hdr for the next subframe */
start_hdr = hdr_page->pos;
start_hdr = pos_hdr;
/* put the payload */
while (data_left) {
int ret;
tb_len = min_t(unsigned int, tso.size, data_left);
tb_phys = dma_map_single(trans->dev, tso.data,
tb_len, DMA_TO_DEVICE);
tb_phys = iwl_pcie_get_sgt_tb_phys(sgt, tso.data);
/* Not a real mapping error, use direct comparison */
if (unlikely(tb_phys == DMA_MAPPING_ERROR))
goto out_err;
ret = iwl_txq_gen2_set_tb_with_wa(trans, skb, tfd,
tb_phys, tso.data,
tb_len, NULL);
tb_len, NULL, false);
if (ret)
goto out_err;
@ -234,6 +276,9 @@ static int iwl_txq_gen2_build_amsdu(struct iwl_trans *trans,
}
}
dma_sync_single_for_device(trans->dev, start_hdr_phys, hdr_room,
DMA_TO_DEVICE);
/* re-add the WiFi header */
skb_push(skb, hdr_len);
@ -290,8 +335,8 @@ iwl_tfh_tfd *iwl_txq_gen2_build_tx_amsdu(struct iwl_trans *trans,
*/
iwl_txq_gen2_set_tb(trans, tfd, tb_phys, len);
if (iwl_txq_gen2_build_amsdu(trans, skb, tfd, len + IWL_FIRST_TB_SIZE,
hdr_len, dev_cmd))
if (iwl_txq_gen2_build_amsdu(trans, skb, tfd, out_meta,
len + IWL_FIRST_TB_SIZE, hdr_len, dev_cmd))
goto out_err;
/* building the A-MSDU might have changed this data, memcpy it now */
@ -323,7 +368,7 @@ static int iwl_txq_gen2_tx_add_frags(struct iwl_trans *trans,
fragsz, DMA_TO_DEVICE);
ret = iwl_txq_gen2_set_tb_with_wa(trans, skb, tfd, tb_phys,
skb_frag_address(frag),
fragsz, out_meta);
fragsz, out_meta, true);
if (ret)
return ret;
}
@ -397,7 +442,7 @@ iwl_tfh_tfd *iwl_txq_gen2_build_tx(struct iwl_trans *trans,
tb2_len, DMA_TO_DEVICE);
ret = iwl_txq_gen2_set_tb_with_wa(trans, skb, tfd, tb_phys,
skb->data + hdr_len, tb2_len,
NULL);
NULL, true);
if (ret)
goto out_err;
}
@ -412,7 +457,8 @@ iwl_tfh_tfd *iwl_txq_gen2_build_tx(struct iwl_trans *trans,
skb_headlen(frag), DMA_TO_DEVICE);
ret = iwl_txq_gen2_set_tb_with_wa(trans, skb, tfd, tb_phys,
frag->data,
skb_headlen(frag), NULL);
skb_headlen(frag), NULL,
true);
if (ret)
goto out_err;
if (iwl_txq_gen2_tx_add_frags(trans, frag, tfd, out_meta))
@ -607,6 +653,10 @@ void iwl_txq_gen2_tfd_unmap(struct iwl_trans *trans,
return;
}
/* TB1 is mapped directly, the rest is the TSO page and SG list. */
if (meta->sg_offset)
num_tbs = 2;
/* first TB is never freed - it's the bidirectional DMA data */
for (i = 1; i < num_tbs; i++) {
if (meta->tbs & BIT(i))
@ -722,7 +772,7 @@ int iwl_txq_gen2_tx(struct iwl_trans *trans, struct sk_buff *skb,
/* Set up first empty entry in queue's array of Tx/cmd buffers */
out_meta = &txq->entries[idx].meta;
out_meta->flags = 0;
memset(out_meta, 0, sizeof(*out_meta));
tfd = iwl_txq_gen2_build_tfd(trans, txq, dev_cmd, skb, out_meta);
if (!tfd) {
@ -771,17 +821,19 @@ static void iwl_txq_gen2_unmap(struct iwl_trans *trans, int txq_id)
struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
struct iwl_txq *txq = trans_pcie->txqs.txq[txq_id];
spin_lock_bh(&txq->lock);
spin_lock_bh(&txq->reclaim_lock);
spin_lock(&txq->lock);
while (txq->write_ptr != txq->read_ptr) {
IWL_DEBUG_TX_REPLY(trans, "Q %d Free %d\n",
txq_id, txq->read_ptr);
if (txq_id != trans_pcie->txqs.cmd.q_id) {
int idx = iwl_txq_get_cmd_index(txq, txq->read_ptr);
struct iwl_cmd_meta *cmd_meta = &txq->entries[idx].meta;
struct sk_buff *skb = txq->entries[idx].skb;
if (!WARN_ON_ONCE(!skb))
iwl_pcie_free_tso_page(trans, skb);
iwl_pcie_free_tso_pages(trans, skb, cmd_meta);
}
iwl_txq_gen2_free_tfd(trans, txq);
txq->read_ptr = iwl_txq_inc_wrap(trans, txq->read_ptr);
@ -793,7 +845,8 @@ static void iwl_txq_gen2_unmap(struct iwl_trans *trans, int txq_id)
iwl_op_mode_free_skb(trans->op_mode, skb);
}
spin_unlock_bh(&txq->lock);
spin_unlock(&txq->lock);
spin_unlock_bh(&txq->reclaim_lock);
/* just in case - this queue may have been stopped */
iwl_trans_pcie_wake_queue(trans, txq);
@ -1250,7 +1303,7 @@ int iwl_pcie_gen2_enqueue_hcmd(struct iwl_trans *trans,
out_cmd = txq->entries[idx].cmd;
out_meta = &txq->entries[idx].meta;
/* re-initialize to NULL */
/* re-initialize, this also marks the SG list as unused */
memset(out_meta, 0, sizeof(*out_meta));
if (cmd->flags & CMD_WANT_SKB)
out_meta->source = cmd;

@ -209,7 +209,22 @@ static void iwl_pcie_clear_cmd_in_flight(struct iwl_trans *trans)
spin_unlock(&trans_pcie->reg_lock);
}
void iwl_pcie_free_tso_page(struct iwl_trans *trans, struct sk_buff *skb)
static void iwl_pcie_free_and_unmap_tso_page(struct iwl_trans *trans,
struct page *page)
{
struct iwl_tso_page_info *info = IWL_TSO_PAGE_INFO(page_address(page));
/* Decrease internal use count and unmap/free page if needed */
if (refcount_dec_and_test(&info->use_count)) {
dma_unmap_page(trans->dev, info->dma_addr, PAGE_SIZE,
DMA_TO_DEVICE);
__free_page(page);
}
}
void iwl_pcie_free_tso_pages(struct iwl_trans *trans, struct sk_buff *skb,
struct iwl_cmd_meta *cmd_meta)
{
struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
struct page **page_ptr;
@ -220,11 +235,23 @@ void iwl_pcie_free_tso_page(struct iwl_trans *trans, struct sk_buff *skb)
*page_ptr = NULL;
while (next) {
struct iwl_tso_page_info *info;
struct page *tmp = next;
next = *(void **)((u8 *)page_address(next) + PAGE_SIZE -
sizeof(void *));
__free_page(tmp);
info = IWL_TSO_PAGE_INFO(page_address(next));
next = info->next;
/* Unmap the scatter gather list that is on the last page */
if (!next && cmd_meta->sg_offset) {
struct sg_table *sgt;
sgt = (void *)((u8 *)page_address(tmp) +
cmd_meta->sg_offset);
dma_unmap_sgtable(trans->dev, sgt, DMA_TO_DEVICE, 0);
}
iwl_pcie_free_and_unmap_tso_page(trans, tmp);
}
}
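
The page lifetime is now reference counted: the per-CPU allocator cache holds one reference and every SKB that borrows room on the page takes another, so the DMA unmap and free happen exactly once, when the last user drops its reference. A simplified single-threaded sketch, with a plain int instead of refcount_t and printf instead of the unmap/free calls:

#include <stdio.h>

struct page_info {
	int use_count;
};

/* drop one reference; unmap and free only when the last user is gone */
static void put_tso_page(struct page_info *info)
{
	if (--info->use_count == 0)
		printf("dma_unmap_page + __free_page\n");
	else
		printf("still %d user(s), page kept\n", info->use_count);
}

int main(void)
{
	struct page_info info = { .use_count = 1 };	/* allocator ref */

	info.use_count++;	/* per-SKB reference taken at alloc time */
	put_tso_page(&info);	/* SKB freed: page survives for allocator */
	put_tso_page(&info);	/* allocator moves on: unmapped + freed */
	return 0;
}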
@ -276,6 +303,10 @@ static void iwl_txq_gen1_tfd_unmap(struct iwl_trans *trans,
return;
}
/* TB1 is mapped directly, the rest is the TSO page and SG list. */
if (meta->sg_offset)
num_tbs = 2;
/* first TB is never freed - it's the bidirectional DMA data */
for (i = 1; i < num_tbs; i++) {
@ -302,20 +333,21 @@ static void iwl_txq_gen1_tfd_unmap(struct iwl_trans *trans,
* iwl_txq_free_tfd - Free all chunks referenced by TFD [txq->q.read_ptr]
* @trans: transport private data
* @txq: tx queue
* @read_ptr: the TXQ read_ptr to free
*
* Does NOT advance any TFD circular buffer read/write indexes
* Does NOT free the TFD itself (which is within circular buffer)
*/
static void iwl_txq_free_tfd(struct iwl_trans *trans, struct iwl_txq *txq)
static void iwl_txq_free_tfd(struct iwl_trans *trans, struct iwl_txq *txq,
int read_ptr)
{
/* rd_ptr is bounded by TFD_QUEUE_SIZE_MAX and
* idx is bounded by n_window
*/
int rd_ptr = txq->read_ptr;
int idx = iwl_txq_get_cmd_index(txq, rd_ptr);
int idx = iwl_txq_get_cmd_index(txq, read_ptr);
struct sk_buff *skb;
lockdep_assert_held(&txq->lock);
lockdep_assert_held(&txq->reclaim_lock);
if (!txq->entries)
return;
@ -325,10 +357,10 @@ static void iwl_txq_free_tfd(struct iwl_trans *trans, struct iwl_txq *txq)
*/
if (trans->trans_cfg->gen2)
iwl_txq_gen2_tfd_unmap(trans, &txq->entries[idx].meta,
iwl_txq_get_tfd(trans, txq, rd_ptr));
iwl_txq_get_tfd(trans, txq, read_ptr));
else
iwl_txq_gen1_tfd_unmap(trans, &txq->entries[idx].meta,
txq, rd_ptr);
txq, read_ptr);
/* free SKB */
skb = txq->entries[idx].skb;
@ -356,20 +388,23 @@ static void iwl_pcie_txq_unmap(struct iwl_trans *trans, int txq_id)
return;
}
spin_lock_bh(&txq->lock);
spin_lock_bh(&txq->reclaim_lock);
spin_lock(&txq->lock);
while (txq->write_ptr != txq->read_ptr) {
IWL_DEBUG_TX_REPLY(trans, "Q %d Free %d\n",
txq_id, txq->read_ptr);
if (txq_id != trans_pcie->txqs.cmd.q_id) {
struct sk_buff *skb = txq->entries[txq->read_ptr].skb;
struct iwl_cmd_meta *cmd_meta =
&txq->entries[txq->read_ptr].meta;
if (WARN_ON_ONCE(!skb))
continue;
iwl_pcie_free_tso_page(trans, skb);
iwl_pcie_free_tso_pages(trans, skb, cmd_meta);
}
iwl_txq_free_tfd(trans, txq);
iwl_txq_free_tfd(trans, txq, txq->read_ptr);
txq->read_ptr = iwl_txq_inc_wrap(trans, txq->read_ptr);
if (txq->read_ptr == txq->write_ptr &&
@ -383,7 +418,8 @@ static void iwl_pcie_txq_unmap(struct iwl_trans *trans, int txq_id)
iwl_op_mode_free_skb(trans->op_mode, skb);
}
spin_unlock_bh(&txq->lock);
spin_unlock(&txq->lock);
spin_unlock_bh(&txq->reclaim_lock);
/* just in case - this queue may have been stopped */
iwl_trans_pcie_wake_queue(trans, txq);
@ -888,6 +924,7 @@ int iwl_txq_init(struct iwl_trans *trans, struct iwl_txq *txq,
return ret;
spin_lock_init(&txq->lock);
spin_lock_init(&txq->reclaim_lock);
if (cmd_queue) {
static struct lock_class_key iwl_txq_cmd_queue_lock_class;
@ -1022,11 +1059,12 @@ static void iwl_txq_progress(struct iwl_txq *txq)
mod_timer(&txq->stuck_timer, jiffies + txq->wd_timeout);
}
static inline bool iwl_txq_used(const struct iwl_txq *q, int i)
static inline bool iwl_txq_used(const struct iwl_txq *q, int i,
int read_ptr, int write_ptr)
{
int index = iwl_txq_get_cmd_index(q, i);
int r = iwl_txq_get_cmd_index(q, q->read_ptr);
int w = iwl_txq_get_cmd_index(q, q->write_ptr);
int r = iwl_txq_get_cmd_index(q, read_ptr);
int w = iwl_txq_get_cmd_index(q, write_ptr);
return w >= r ?
(index >= r && index < w) :
@ -1053,7 +1091,7 @@ static void iwl_pcie_cmdq_reclaim(struct iwl_trans *trans, int txq_id, int idx)
r = iwl_txq_get_cmd_index(txq, txq->read_ptr);
if (idx >= trans->trans_cfg->base_params->max_tfd_queue_size ||
(!iwl_txq_used(txq, idx))) {
(!iwl_txq_used(txq, idx, txq->read_ptr, txq->write_ptr))) {
WARN_ONCE(test_bit(txq_id, trans_pcie->txqs.queue_used),
"%s: Read index for DMA queue txq id (%d), index %d is out of range [0-%d] %d %d.\n",
__func__, txq_id, idx,
@ -1420,7 +1458,8 @@ int iwl_pcie_enqueue_hcmd(struct iwl_trans *trans,
out_cmd = txq->entries[idx].cmd;
out_meta = &txq->entries[idx].meta;
memset(out_meta, 0, sizeof(*out_meta)); /* re-initialize to NULL */
/* re-initialize, this also marks the SG list as unused */
memset(out_meta, 0, sizeof(*out_meta));
if (cmd->flags & CMD_WANT_SKB)
out_meta->source = cmd;
@ -1702,12 +1741,15 @@ static int iwl_fill_data_tbs(struct iwl_trans *trans, struct sk_buff *skb,
}
#ifdef CONFIG_INET
struct iwl_tso_hdr_page *iwl_pcie_get_page_hdr(struct iwl_trans *trans,
size_t len, struct sk_buff *skb)
static void *iwl_pcie_get_page_hdr(struct iwl_trans *trans,
size_t len, struct sk_buff *skb)
{
struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
struct iwl_tso_hdr_page *p = this_cpu_ptr(trans_pcie->txqs.tso_hdr_page);
struct iwl_tso_page_info *info;
struct page **page_ptr;
dma_addr_t phys;
void *ret;
page_ptr = (void *)((u8 *)skb->cb + trans_pcie->txqs.page_offs);
@ -1727,24 +1769,124 @@ struct iwl_tso_hdr_page *iwl_pcie_get_page_hdr(struct iwl_trans *trans,
*
* (see also get_workaround_page() in tx-gen2.c)
*/
if (p->pos + len < (u8 *)page_address(p->page) + PAGE_SIZE -
sizeof(void *))
if (((unsigned long)p->pos & ~PAGE_MASK) + len < IWL_TSO_PAGE_DATA_SIZE) {
info = IWL_TSO_PAGE_INFO(page_address(p->page));
goto out;
}
/* We don't have enough room on this page, get a new one. */
__free_page(p->page);
iwl_pcie_free_and_unmap_tso_page(trans, p->page);
alloc:
p->page = alloc_page(GFP_ATOMIC);
if (!p->page)
return NULL;
p->pos = page_address(p->page);
info = IWL_TSO_PAGE_INFO(page_address(p->page));
/* set the chaining pointer to NULL */
*(void **)((u8 *)page_address(p->page) + PAGE_SIZE - sizeof(void *)) = NULL;
info->next = NULL;
/* Create a DMA mapping for the page */
phys = dma_map_page_attrs(trans->dev, p->page, 0, PAGE_SIZE,
DMA_TO_DEVICE, DMA_ATTR_SKIP_CPU_SYNC);
if (unlikely(dma_mapping_error(trans->dev, phys))) {
__free_page(p->page);
p->page = NULL;
return NULL;
}
/* Store physical address and set use count */
info->dma_addr = phys;
refcount_set(&info->use_count, 1);
out:
*page_ptr = p->page;
get_page(p->page);
return p;
/* Return an internal reference for the caller */
refcount_inc(&info->use_count);
ret = p->pos;
p->pos += len;
return ret;
}
/**
* iwl_pcie_get_sgt_tb_phys - Find TB address in mapped SG list
* @sgt: scatter gather table
* @addr: Virtual address
*
* Find the scatter gather entry that contains the given virtual address and
* return the corresponding physical address for the TB entry.
*
* Returns: Address for TB entry
*/
dma_addr_t iwl_pcie_get_sgt_tb_phys(struct sg_table *sgt, void *addr)
{
struct scatterlist *sg;
int i;
for_each_sgtable_dma_sg(sgt, sg, i) {
if (addr >= sg_virt(sg) &&
(u8 *)addr < (u8 *)sg_virt(sg) + sg_dma_len(sg))
return sg_dma_address(sg) +
((unsigned long)addr - (unsigned long)sg_virt(sg));
}
WARN_ON_ONCE(1);
return DMA_MAPPING_ERROR;
}
/**
* iwl_pcie_prep_tso - Prepare TSO page and SKB for sending
* @trans: transport private data
* @skb: the SKB to map
* @cmd_meta: command meta to store the scatter list information for unmapping
* @hdr: output argument for TSO headers
* @hdr_room: requested length for TSO headers
*
* Allocate space for a scatter gather list and TSO headers and map the SKB
* using the scatter gather list. The SKB is unmapped again when the page is
* freed at the end of the operation.
*
* Returns: newly allocated and mapped scatter gather table with list
*/
struct sg_table *iwl_pcie_prep_tso(struct iwl_trans *trans, struct sk_buff *skb,
struct iwl_cmd_meta *cmd_meta,
u8 **hdr, unsigned int hdr_room)
{
struct sg_table *sgt;
if (WARN_ON_ONCE(skb_has_frag_list(skb)))
return NULL;
*hdr = iwl_pcie_get_page_hdr(trans,
hdr_room + __alignof__(struct sg_table) +
sizeof(struct sg_table) +
(skb_shinfo(skb)->nr_frags + 1) *
sizeof(struct scatterlist),
skb);
if (!*hdr)
return NULL;
sgt = (void *)PTR_ALIGN(*hdr + hdr_room, __alignof__(struct sg_table));
sgt->sgl = (void *)(sgt + 1);
sg_init_table(sgt->sgl, skb_shinfo(skb)->nr_frags + 1);
sgt->orig_nents = skb_to_sgvec(skb, sgt->sgl, 0, skb->len);
if (WARN_ON_ONCE(sgt->orig_nents <= 0))
return NULL;
/* And map the entire SKB */
if (dma_map_sgtable(trans->dev, sgt, DMA_TO_DEVICE, 0) < 0)
return NULL;
/* Store non-zero (i.e. valid) offset for unmapping */
cmd_meta->sg_offset = (unsigned long) sgt & ~PAGE_MASK;
return sgt;
}
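
The single allocation taken from the TSO page is laid out as headers first, then an aligned struct sg_table, then the scatterlist entries. A sketch of the size and placement arithmetic with made-up numbers (C11 for _Alignof):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct sg_table { void *sgl; int orig_nents; };
struct scatterlist { uint64_t dma_address; unsigned int length; };

int main(void)
{
	unsigned int hdr_room = 320, nr_frags = 3;	/* made-up values */
	size_t need = hdr_room + _Alignof(struct sg_table) +
		      sizeof(struct sg_table) +
		      (nr_frags + 1) * sizeof(struct scatterlist);
	uintptr_t hdr = 0x1000;				/* pretend start */
	uintptr_t sgt = (hdr + hdr_room + _Alignof(struct sg_table) - 1) &
			~(uintptr_t)(_Alignof(struct sg_table) - 1);

	printf("request %zu bytes; headers at %#lx, sg_table at %#lx\n",
	       need, (unsigned long)hdr, (unsigned long)sgt);
	return 0;
}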
static int iwl_fill_data_tbs_amsdu(struct iwl_trans *trans, struct sk_buff *skb,
@ -1759,8 +1901,9 @@ static int iwl_fill_data_tbs_amsdu(struct iwl_trans *trans, struct sk_buff *skb,
unsigned int snap_ip_tcp_hdrlen, ip_hdrlen, total_len, hdr_room;
unsigned int mss = skb_shinfo(skb)->gso_size;
u16 length, iv_len, amsdu_pad;
u8 *start_hdr;
struct iwl_tso_hdr_page *hdr_page;
dma_addr_t start_hdr_phys;
u8 *start_hdr, *pos_hdr;
struct sg_table *sgt;
struct tso_t tso;
/* if the packet is protected, then it must be CCMP or GCMP */
@ -1783,13 +1926,14 @@ static int iwl_fill_data_tbs_amsdu(struct iwl_trans *trans, struct sk_buff *skb,
(3 + snap_ip_tcp_hdrlen + sizeof(struct ethhdr)) + iv_len;
/* Our device supports 9 segments at most, it will fit in 1 page */
hdr_page = iwl_pcie_get_page_hdr(trans, hdr_room, skb);
if (!hdr_page)
sgt = iwl_pcie_prep_tso(trans, skb, out_meta, &start_hdr, hdr_room);
if (!sgt)
return -ENOMEM;
start_hdr = hdr_page->pos;
memcpy(hdr_page->pos, skb->data + hdr_len, iv_len);
hdr_page->pos += iv_len;
start_hdr_phys = iwl_pcie_get_tso_page_phys(start_hdr);
pos_hdr = start_hdr;
memcpy(pos_hdr, skb->data + hdr_len, iv_len);
pos_hdr += iv_len;
/*
* Pull the ieee80211 header + IV to be able to use TSO core,
@ -1812,45 +1956,43 @@ static int iwl_fill_data_tbs_amsdu(struct iwl_trans *trans, struct sk_buff *skb,
min_t(unsigned int, mss, total_len);
unsigned int hdr_tb_len;
dma_addr_t hdr_tb_phys;
u8 *subf_hdrs_start = hdr_page->pos;
u8 *subf_hdrs_start = pos_hdr;
total_len -= data_left;
memset(hdr_page->pos, 0, amsdu_pad);
hdr_page->pos += amsdu_pad;
memset(pos_hdr, 0, amsdu_pad);
pos_hdr += amsdu_pad;
amsdu_pad = (4 - (sizeof(struct ethhdr) + snap_ip_tcp_hdrlen +
data_left)) & 0x3;
ether_addr_copy(hdr_page->pos, ieee80211_get_DA(hdr));
hdr_page->pos += ETH_ALEN;
ether_addr_copy(hdr_page->pos, ieee80211_get_SA(hdr));
hdr_page->pos += ETH_ALEN;
ether_addr_copy(pos_hdr, ieee80211_get_DA(hdr));
pos_hdr += ETH_ALEN;
ether_addr_copy(pos_hdr, ieee80211_get_SA(hdr));
pos_hdr += ETH_ALEN;
length = snap_ip_tcp_hdrlen + data_left;
*((__be16 *)hdr_page->pos) = cpu_to_be16(length);
hdr_page->pos += sizeof(length);
*((__be16 *)pos_hdr) = cpu_to_be16(length);
pos_hdr += sizeof(length);
/*
* This will copy the SNAP as well which will be considered
* as MAC header.
*/
tso_build_hdr(skb, hdr_page->pos, &tso, data_left, !total_len);
tso_build_hdr(skb, pos_hdr, &tso, data_left, !total_len);
hdr_page->pos += snap_ip_tcp_hdrlen;
pos_hdr += snap_ip_tcp_hdrlen;
hdr_tb_len = pos_hdr - start_hdr;
hdr_tb_phys = iwl_pcie_get_tso_page_phys(start_hdr);
hdr_tb_len = hdr_page->pos - start_hdr;
hdr_tb_phys = dma_map_single(trans->dev, start_hdr,
hdr_tb_len, DMA_TO_DEVICE);
if (unlikely(dma_mapping_error(trans->dev, hdr_tb_phys)))
return -EINVAL;
iwl_pcie_txq_build_tfd(trans, txq, hdr_tb_phys,
hdr_tb_len, false);
trace_iwlwifi_dev_tx_tb(trans->dev, skb, start_hdr,
hdr_tb_phys, hdr_tb_len);
/* add this subframe's headers' length to the tx_cmd */
le16_add_cpu(&tx_cmd->len, hdr_page->pos - subf_hdrs_start);
le16_add_cpu(&tx_cmd->len, pos_hdr - subf_hdrs_start);
/* prepare the start_hdr for the next subframe */
start_hdr = hdr_page->pos;
start_hdr = pos_hdr;
/* put the payload */
while (data_left) {
@ -1858,9 +2000,9 @@ static int iwl_fill_data_tbs_amsdu(struct iwl_trans *trans, struct sk_buff *skb,
data_left);
dma_addr_t tb_phys;
tb_phys = dma_map_single(trans->dev, tso.data,
size, DMA_TO_DEVICE);
if (unlikely(dma_mapping_error(trans->dev, tb_phys)))
tb_phys = iwl_pcie_get_sgt_tb_phys(sgt, tso.data);
/* Not a real mapping error, use direct comparison */
if (unlikely(tb_phys == DMA_MAPPING_ERROR))
return -EINVAL;
iwl_pcie_txq_build_tfd(trans, txq, tb_phys,
@ -1873,6 +2015,9 @@ static int iwl_fill_data_tbs_amsdu(struct iwl_trans *trans, struct sk_buff *skb,
}
}
dma_sync_single_for_device(trans->dev, start_hdr_phys, hdr_room,
DMA_TO_DEVICE);
/* re-add the WiFi header and IV */
skb_push(skb, hdr_len + iv_len);
@ -2027,7 +2172,7 @@ int iwl_trans_pcie_tx(struct iwl_trans *trans, struct sk_buff *skb,
/* Set up first empty entry in queue's array of Tx/cmd buffers */
out_meta = &txq->entries[txq->write_ptr].meta;
out_meta->flags = 0;
memset(out_meta, 0, sizeof(*out_meta));
/*
* The second TB (tb1) points to the remainder of the TX command
@ -2144,12 +2289,12 @@ int iwl_trans_pcie_tx(struct iwl_trans *trans, struct sk_buff *skb,
}
static void iwl_txq_gen1_inval_byte_cnt_tbl(struct iwl_trans *trans,
struct iwl_txq *txq)
struct iwl_txq *txq,
int read_ptr)
{
struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
struct iwlagn_scd_bc_tbl *scd_bc_tbl = trans_pcie->txqs.scd_bc_tbls.addr;
int txq_id = txq->id;
int read_ptr = txq->read_ptr;
u8 sta_id = 0;
__le16 bc_ent;
struct iwl_device_tx_cmd *dev_cmd = txq->entries[read_ptr].cmd;
@ -2176,6 +2321,7 @@ void iwl_pcie_reclaim(struct iwl_trans *trans, int txq_id, int ssn,
struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
struct iwl_txq *txq = trans_pcie->txqs.txq[txq_id];
int tfd_num, read_ptr, last_to_free;
int txq_read_ptr, txq_write_ptr;
/* This function is not meant to release cmd queue*/
if (WARN_ON(txq_id == trans_pcie->txqs.cmd.q_id))
@ -2186,8 +2332,14 @@ void iwl_pcie_reclaim(struct iwl_trans *trans, int txq_id, int ssn,
tfd_num = iwl_txq_get_cmd_index(txq, ssn);
spin_lock_bh(&txq->lock);
read_ptr = iwl_txq_get_cmd_index(txq, txq->read_ptr);
spin_lock_bh(&txq->reclaim_lock);
spin_lock(&txq->lock);
txq_read_ptr = txq->read_ptr;
txq_write_ptr = txq->write_ptr;
spin_unlock(&txq->lock);
read_ptr = iwl_txq_get_cmd_index(txq, txq_read_ptr);
if (!test_bit(txq_id, trans_pcie->txqs.queue_used)) {
IWL_DEBUG_TX_QUEUES(trans, "Q %d inactive - ignoring idx %d\n",
@ -2199,19 +2351,19 @@ void iwl_pcie_reclaim(struct iwl_trans *trans, int txq_id, int ssn,
goto out;
IWL_DEBUG_TX_REPLY(trans, "[Q %d] %d (%d) -> %d (%d)\n",
txq_id, read_ptr, txq->read_ptr, tfd_num, ssn);
txq_id, read_ptr, txq_read_ptr, tfd_num, ssn);
/* Since we free until index _not_ inclusive, the one before index is
* the last we will free. This one must be used
*/
last_to_free = iwl_txq_dec_wrap(trans, tfd_num);
if (!iwl_txq_used(txq, last_to_free)) {
if (!iwl_txq_used(txq, last_to_free, txq_read_ptr, txq_write_ptr)) {
IWL_ERR(trans,
"%s: Read index for txq id (%d), last_to_free %d is out of range [0-%d] %d %d.\n",
__func__, txq_id, last_to_free,
trans->trans_cfg->base_params->max_tfd_queue_size,
txq->write_ptr, txq->read_ptr);
txq_write_ptr, txq_read_ptr);
iwl_op_mode_time_point(trans->op_mode,
IWL_FW_INI_TIME_POINT_FAKE_TX,
@ -2224,26 +2376,31 @@ void iwl_pcie_reclaim(struct iwl_trans *trans, int txq_id, int ssn,
for (;
read_ptr != tfd_num;
txq->read_ptr = iwl_txq_inc_wrap(trans, txq->read_ptr),
read_ptr = iwl_txq_get_cmd_index(txq, txq->read_ptr)) {
txq_read_ptr = iwl_txq_inc_wrap(trans, txq_read_ptr),
read_ptr = iwl_txq_get_cmd_index(txq, txq_read_ptr)) {
struct iwl_cmd_meta *cmd_meta = &txq->entries[read_ptr].meta;
struct sk_buff *skb = txq->entries[read_ptr].skb;
if (WARN_ONCE(!skb, "no SKB at %d (%d) on queue %d\n",
read_ptr, txq->read_ptr, txq_id))
read_ptr, txq_read_ptr, txq_id))
continue;
iwl_pcie_free_tso_page(trans, skb);
iwl_pcie_free_tso_pages(trans, skb, cmd_meta);
__skb_queue_tail(skbs, skb);
txq->entries[read_ptr].skb = NULL;
if (!trans->trans_cfg->gen2)
iwl_txq_gen1_inval_byte_cnt_tbl(trans, txq);
iwl_txq_gen1_inval_byte_cnt_tbl(trans, txq,
txq_read_ptr);
iwl_txq_free_tfd(trans, txq);
iwl_txq_free_tfd(trans, txq, txq_read_ptr);
}
spin_lock(&txq->lock);
txq->read_ptr = txq_read_ptr;
iwl_txq_progress(txq);
if (iwl_txq_space(trans, txq) > txq->low_mark &&
@ -2265,13 +2422,12 @@ void iwl_pcie_reclaim(struct iwl_trans *trans, int txq_id, int ssn,
txq->overflow_tx = true;
/*
* This is tricky: we are in reclaim path which is non
* re-entrant, so noone will try to take the access the
* txq data from that path. We stopped tx, so we can't
* have tx as well. Bottom line, we can unlock and re-lock
* later.
* This is tricky: we are in reclaim path and are holding
* reclaim_lock, so no one will try to access the txq data
* from that path. We stopped tx, so we can't have tx as well.
* Bottom line, we can unlock and re-lock later.
*/
spin_unlock_bh(&txq->lock);
spin_unlock(&txq->lock);
while ((skb = __skb_dequeue(&overflow_skbs))) {
struct iwl_device_tx_cmd *dev_cmd_ptr;
@ -2290,12 +2446,13 @@ void iwl_pcie_reclaim(struct iwl_trans *trans, int txq_id, int ssn,
if (iwl_txq_space(trans, txq) > txq->low_mark)
iwl_trans_pcie_wake_queue(trans, txq);
spin_lock_bh(&txq->lock);
spin_lock(&txq->lock);
txq->overflow_tx = false;
}
spin_unlock(&txq->lock);
out:
spin_unlock_bh(&txq->lock);
spin_unlock_bh(&txq->reclaim_lock);
}
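
The reclaim path now nests two locks: reclaim_lock is held for the whole operation and protects the read pointer, while the queue lock is only taken briefly to snapshot and later publish the pointers, so TFDs are freed without blocking TX. A user-space sketch of just the ordering, using pthread mutexes (BH disabling and spinlock semantics are not modeled):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t reclaim_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t queue_lock   = PTHREAD_MUTEX_INITIALIZER;

int main(void)
{
	int read_ptr, write_ptr;

	pthread_mutex_lock(&reclaim_lock);	/* outer, whole reclaim */

	pthread_mutex_lock(&queue_lock);	/* snapshot the pointers */
	read_ptr = 5;
	write_ptr = 9;
	pthread_mutex_unlock(&queue_lock);	/* free TFDs without it */

	printf("reclaiming %d..%d\n", read_ptr, write_ptr);

	pthread_mutex_lock(&queue_lock);	/* publish new read ptr */
	read_ptr = write_ptr;
	pthread_mutex_unlock(&queue_lock);

	pthread_mutex_unlock(&reclaim_lock);
	return 0;
}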
/* Set wr_ptr of specific device and txq */

@ -1287,6 +1287,9 @@ mwifiex_get_priv_by_id(struct mwifiex_adapter *adapter,
for (i = 0; i < adapter->priv_num; i++) {
if (adapter->priv[i]) {
if (adapter->priv[i]->bss_mode == NL80211_IFTYPE_UNSPECIFIED)
continue;
if ((adapter->priv[i]->bss_num == bss_num) &&
(adapter->priv[i]->bss_type == bss_type))
break;

@ -1125,6 +1125,11 @@ mt76_rx_convert(struct mt76_dev *dev, struct sk_buff *skb,
memcpy(status->chain_signal, mstat.chain_signal,
sizeof(mstat.chain_signal));
if (mstat.wcid) {
status->link_valid = mstat.wcid->link_valid;
status->link_id = mstat.wcid->link_id;
}
*sta = wcid_to_sta(mstat.wcid);
*hw = mt76_phy_hw(dev, mstat.phy_idx);
}

@ -349,6 +349,8 @@ struct mt76_wcid {
u8 sta:1;
u8 amsdu:1;
u8 phy_idx:2;
u8 link_id:4;
bool link_valid;
u8 rx_check_pn;
u8 rx_key_pn[IEEE80211_NUM_TIDS + 1][6];
@ -366,6 +368,8 @@ struct mt76_wcid {
struct mt76_sta_stats stats;
struct list_head poll_list;
struct mt76_wcid *def_wcid;
};
struct mt76_txq {
@ -1081,6 +1085,7 @@ bool ____mt76_poll_msec(struct mt76_dev *dev, u32 offset, u32 mask, u32 val,
void mt76_mmio_init(struct mt76_dev *dev, void __iomem *regs);
void mt76_pci_disable_aspm(struct pci_dev *pdev);
bool mt76_pci_aspm_supported(struct pci_dev *pdev);
static inline u16 mt76_chip(struct mt76_dev *dev)
{
@ -1256,6 +1261,9 @@ wcid_to_sta(struct mt76_wcid *wcid)
if (!wcid || !wcid->sta)
return NULL;
if (wcid->def_wcid)
ptr = wcid->def_wcid;
return container_of(ptr, struct ieee80211_sta, drv_priv);
}
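
The def_wcid indirection lets a per-link wcid resolve to the station that owns the default link. A self-contained sketch with simplified structures (the real code goes through ieee80211_sta::drv_priv and the embedded driver sta):

#include <stddef.h>
#include <stdio.h>

struct wcid {
	int sta;
	struct wcid *def_wcid;
};

struct sta {
	char name[8];
	struct wcid drv_priv;	/* default-link wcid embedded here */
};

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

static struct sta *wcid_to_sta(struct wcid *wcid)
{
	struct wcid *ptr = wcid;

	if (!wcid || !wcid->sta)
		return NULL;
	/* a per-link wcid is not embedded in the sta; follow def_wcid */
	if (wcid->def_wcid)
		ptr = wcid->def_wcid;
	return container_of(ptr, struct sta, drv_priv);
}

int main(void)
{
	struct sta s = { .name = "sta0", .drv_priv = { .sta = 1 } };
	struct wcid link = { .sta = 1, .def_wcid = &s.drv_priv };

	printf("%s\n", wcid_to_sta(&link)->name);	/* via def_wcid */
	return 0;
}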

@ -842,6 +842,7 @@ mt7615_mcu_wtbl_sta_add(struct mt7615_phy *phy, struct ieee80211_vif *vif,
{
struct mt7615_vif *mvif = (struct mt7615_vif *)vif->drv_priv;
struct sk_buff *skb, *sskb, *wskb = NULL;
struct ieee80211_link_sta *link_sta;
struct mt7615_dev *dev = phy->dev;
struct wtbl_req_hdr *wtbl_hdr;
struct mt7615_sta *msta;
@ -849,6 +850,7 @@ mt7615_mcu_wtbl_sta_add(struct mt7615_phy *phy, struct ieee80211_vif *vif,
int cmd, err;
msta = sta ? (struct mt7615_sta *)sta->drv_priv : &mvif->sta;
link_sta = sta ? &sta->deflink : NULL;
sskb = mt76_connac_mcu_alloc_sta_req(&dev->mt76, &mvif->mt76,
&msta->wcid);
@ -861,8 +863,8 @@ mt7615_mcu_wtbl_sta_add(struct mt7615_phy *phy, struct ieee80211_vif *vif,
else
mvif->sta_added = true;
}
mt76_connac_mcu_sta_basic_tlv(&dev->mt76, sskb, vif, sta, enable,
new_entry);
mt76_connac_mcu_sta_basic_tlv(&dev->mt76, sskb, vif, link_sta,
enable, new_entry);
if (enable && sta)
mt76_connac_mcu_sta_tlv(phy->mt76, sskb, sta, vif, 0,
MT76_STA_INFO_STATE_ASSOC);
@ -1109,8 +1111,8 @@ mt7615_mcu_uni_add_dev(struct mt7615_phy *phy, struct ieee80211_vif *vif,
{
struct mt7615_vif *mvif = (struct mt7615_vif *)vif->drv_priv;
return mt76_connac_mcu_uni_add_dev(phy->mt76, vif, &mvif->sta.wcid,
enable);
return mt76_connac_mcu_uni_add_dev(phy->mt76, &vif->bss_conf,
&mvif->sta.wcid, enable);
}
static int

@ -370,7 +370,7 @@ EXPORT_SYMBOL_GPL(mt76_connac_mcu_bss_omac_tlv);
void mt76_connac_mcu_sta_basic_tlv(struct mt76_dev *dev, struct sk_buff *skb,
struct ieee80211_vif *vif,
struct ieee80211_sta *sta,
struct ieee80211_link_sta *link_sta,
bool enable, bool newly)
{
struct sta_rec_basic *basic;
@ -390,7 +390,7 @@ void mt76_connac_mcu_sta_basic_tlv(struct mt76_dev *dev, struct sk_buff *skb,
basic->conn_state = CONN_STATE_DISCONNECT;
}
if (!sta) {
if (!link_sta) {
basic->conn_type = cpu_to_le32(CONNECTION_INFRA_BC);
if (vif->type == NL80211_IFTYPE_STATION &&
@ -411,7 +411,7 @@ void mt76_connac_mcu_sta_basic_tlv(struct mt76_dev *dev, struct sk_buff *skb,
else
conn_type = CONNECTION_INFRA_STA;
basic->conn_type = cpu_to_le32(conn_type);
basic->aid = cpu_to_le16(sta->aid);
basic->aid = cpu_to_le16(link_sta->sta->aid);
break;
case NL80211_IFTYPE_STATION:
if (vif->p2p && !is_mt7921(dev))
@ -423,15 +423,15 @@ void mt76_connac_mcu_sta_basic_tlv(struct mt76_dev *dev, struct sk_buff *skb,
break;
case NL80211_IFTYPE_ADHOC:
basic->conn_type = cpu_to_le32(CONNECTION_IBSS_ADHOC);
basic->aid = cpu_to_le16(sta->aid);
basic->aid = cpu_to_le16(link_sta->sta->aid);
break;
default:
WARN_ON(1);
break;
}
memcpy(basic->peer_addr, sta->addr, ETH_ALEN);
basic->qos = sta->wme;
memcpy(basic->peer_addr, link_sta->addr, ETH_ALEN);
basic->qos = link_sta->sta->wme;
}
EXPORT_SYMBOL_GPL(mt76_connac_mcu_sta_basic_tlv);
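
Existing non-MLO callers keep working by handing in &sta->deflink, while NULL still selects the broadcast/group entry. A minimal illustration of that calling convention (simplified types, not the driver API):

#include <stddef.h>
#include <stdio.h>

struct link_sta { unsigned int aid; };
struct sta      { struct link_sta deflink; };

/* non-MLO-aware callers pass the default link; NULL means broadcast */
static void sta_basic(struct link_sta *link_sta)
{
	if (!link_sta) {
		printf("broadcast entry, no AID\n");
		return;
	}
	printf("station entry, aid=%u\n", link_sta->aid);
}

int main(void)
{
	struct sta s = { .deflink = { .aid = 7 } };

	sta_basic(&s.deflink);	/* what mt7615/mt7915 now pass */
	sta_basic(NULL);	/* no station: broadcast/group entry */
	return 0;
}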
@ -793,7 +793,8 @@ EXPORT_SYMBOL_GPL(mt76_connac_mcu_sta_he_tlv_v2);
u8
mt76_connac_get_phy_mode_v2(struct mt76_phy *mphy, struct ieee80211_vif *vif,
enum nl80211_band band, struct ieee80211_sta *sta)
enum nl80211_band band,
struct ieee80211_link_sta *link_sta)
{
struct ieee80211_sta_ht_cap *ht_cap;
struct ieee80211_sta_vht_cap *vht_cap;
@ -801,11 +802,11 @@ mt76_connac_get_phy_mode_v2(struct mt76_phy *mphy, struct ieee80211_vif *vif,
const struct ieee80211_sta_eht_cap *eht_cap;
u8 mode = 0;
if (sta) {
ht_cap = &sta->deflink.ht_cap;
vht_cap = &sta->deflink.vht_cap;
he_cap = &sta->deflink.he_cap;
eht_cap = &sta->deflink.eht_cap;
if (link_sta) {
ht_cap = &link_sta->ht_cap;
vht_cap = &link_sta->vht_cap;
he_cap = &link_sta->he_cap;
eht_cap = &link_sta->eht_cap;
} else {
struct ieee80211_supported_band *sband;
@ -911,7 +912,8 @@ void mt76_connac_mcu_sta_tlv(struct mt76_phy *mphy, struct sk_buff *skb,
tlv = mt76_connac_mcu_add_tlv(skb, STA_REC_PHY, sizeof(*phy));
phy = (struct sta_rec_phy *)tlv;
phy->phy_type = mt76_connac_get_phy_mode_v2(mphy, vif, band, sta);
phy->phy_type = mt76_connac_get_phy_mode_v2(mphy, vif, band,
&sta->deflink);
phy->basic_rate = cpu_to_le16((u16)vif->bss_conf.basic_rates);
phy->rcpi = rcpi;
phy->ampdu = FIELD_PREP(IEEE80211_HT_AMPDU_PARM_FACTOR,
@ -1044,6 +1046,7 @@ int mt76_connac_mcu_sta_cmd(struct mt76_phy *phy,
struct mt76_sta_cmd_info *info)
{
struct mt76_vif *mvif = (struct mt76_vif *)info->vif->drv_priv;
struct ieee80211_link_sta *link_sta;
struct mt76_dev *dev = phy->dev;
struct wtbl_req_hdr *wtbl_hdr;
struct tlv *sta_wtbl;
@ -1053,9 +1056,11 @@ int mt76_connac_mcu_sta_cmd(struct mt76_phy *phy,
if (IS_ERR(skb))
return PTR_ERR(skb);
link_sta = info->sta ? &info->sta->deflink : NULL;
if (info->sta || !info->offload_fw)
mt76_connac_mcu_sta_basic_tlv(dev, skb, info->vif, info->sta,
info->enable, info->newly);
mt76_connac_mcu_sta_basic_tlv(dev, skb, info->vif,
link_sta, info->enable,
info->newly);
if (info->sta && info->enable)
mt76_connac_mcu_sta_tlv(phy, skb, info->sta,
info->vif, info->rcpi,
@ -1132,11 +1137,11 @@ void mt76_connac_mcu_wtbl_ba_tlv(struct mt76_dev *dev, struct sk_buff *skb,
EXPORT_SYMBOL_GPL(mt76_connac_mcu_wtbl_ba_tlv);
int mt76_connac_mcu_uni_add_dev(struct mt76_phy *phy,
struct ieee80211_vif *vif,
struct ieee80211_bss_conf *bss_conf,
struct mt76_wcid *wcid,
bool enable)
{
struct mt76_vif *mvif = (struct mt76_vif *)vif->drv_priv;
struct mt76_vif *mvif = (struct mt76_vif *)bss_conf->vif->drv_priv;
struct mt76_dev *dev = phy->dev;
struct {
struct {
@ -1148,7 +1153,7 @@ int mt76_connac_mcu_uni_add_dev(struct mt76_phy *phy,
__le16 tag;
__le16 len;
u8 active;
u8 pad;
u8 link_idx; /* not link_id */
u8 omac_addr[ETH_ALEN];
} __packed tlv;
} dev_req = {
@ -1160,6 +1165,7 @@ int mt76_connac_mcu_uni_add_dev(struct mt76_phy *phy,
.tag = cpu_to_le16(DEV_INFO_ACTIVE),
.len = cpu_to_le16(sizeof(struct req_tlv)),
.active = enable,
.link_idx = mvif->idx,
},
};
struct {
@ -1182,12 +1188,13 @@ int mt76_connac_mcu_uni_add_dev(struct mt76_phy *phy,
.bmc_tx_wlan_idx = cpu_to_le16(wcid->idx),
.sta_idx = cpu_to_le16(wcid->idx),
.conn_state = 1,
.link_idx = mvif->idx,
},
};
int err, idx, cmd, len;
void *data;
switch (vif->type) {
switch (bss_conf->vif->type) {
case NL80211_IFTYPE_MESH_POINT:
case NL80211_IFTYPE_MONITOR:
case NL80211_IFTYPE_AP:
@ -1207,7 +1214,7 @@ int mt76_connac_mcu_uni_add_dev(struct mt76_phy *phy,
idx = mvif->omac_idx > EXT_BSSID_START ? HW_BSSID_0 : mvif->omac_idx;
basic_req.basic.hw_bss_idx = idx;
memcpy(dev_req.tlv.omac_addr, vif->addr, ETH_ALEN);
memcpy(dev_req.tlv.omac_addr, bss_conf->addr, ETH_ALEN);
cmd = enable ? MCU_UNI_CMD(DEV_INFO_UPDATE) : MCU_UNI_CMD(BSS_INFO_UPDATE);
data = enable ? (void *)&dev_req : (void *)&basic_req;
@ -1305,7 +1312,8 @@ int mt76_connac_mcu_sta_ba(struct mt76_dev *dev, struct mt76_vif *mvif,
EXPORT_SYMBOL_GPL(mt76_connac_mcu_sta_ba);
u8 mt76_connac_get_phy_mode(struct mt76_phy *phy, struct ieee80211_vif *vif,
enum nl80211_band band, struct ieee80211_sta *sta)
enum nl80211_band band,
struct ieee80211_link_sta *link_sta)
{
struct mt76_dev *dev = phy->dev;
const struct ieee80211_sta_he_cap *he_cap;
@ -1316,10 +1324,10 @@ u8 mt76_connac_get_phy_mode(struct mt76_phy *phy, struct ieee80211_vif *vif,
if (is_connac_v1(dev))
return 0x38;
if (sta) {
ht_cap = &sta->deflink.ht_cap;
vht_cap = &sta->deflink.vht_cap;
he_cap = &sta->deflink.he_cap;
if (link_sta) {
ht_cap = &link_sta->ht_cap;
vht_cap = &link_sta->vht_cap;
he_cap = &link_sta->he_cap;
} else {
struct ieee80211_supported_band *sband;

@ -545,6 +545,13 @@ struct sta_rec_muru {
} mimo_ul;
} __packed;
struct sta_rec_remove {
__le16 tag;
__le16 len;
u8 action;
u8 pad[3];
} __packed;
struct sta_phy {
u8 type;
u8 flag;
@ -813,7 +820,10 @@ enum {
STA_REC_HE_6G = 0x17,
STA_REC_HE_V2 = 0x19,
STA_REC_MLD = 0x20,
STA_REC_EHT_MLD = 0x21,
STA_REC_EHT = 0x22,
STA_REC_MLD_OFF = 0x23,
STA_REC_REMOVE = 0x25,
STA_REC_PN_INFO = 0x26,
STA_REC_KEY_V3 = 0x27,
STA_REC_HDRT = 0x28,
@ -1392,6 +1402,7 @@ enum {
MT_NIC_CAP_WFDMA_REALLOC,
MT_NIC_CAP_6G,
MT_NIC_CAP_CHIP_CAP = 0x20,
MT_NIC_CAP_EML_CAP = 0x22,
};
#define UNI_WOW_DETECT_TYPE_MAGIC BIT(0)
@ -1443,7 +1454,7 @@ struct mt76_connac_bss_basic_tlv {
__le16 sta_idx;
__le16 nonht_basic_phy;
u8 phymode_ext; /* bit(0) AX_6G */
u8 pad[1];
u8 link_idx;
} __packed;
struct mt76_connac_bss_qos_tlv {
@ -1733,7 +1744,10 @@ enum mt76_sta_info_state {
};
struct mt76_sta_cmd_info {
struct ieee80211_sta *sta;
union {
struct ieee80211_sta *sta;
struct ieee80211_link_sta *link_sta;
};
struct mt76_wcid *wcid;
struct ieee80211_vif *vif;
@ -1883,8 +1897,8 @@ int mt76_connac_mcu_set_channel_domain(struct mt76_phy *phy);
int mt76_connac_mcu_set_vif_ps(struct mt76_dev *dev, struct ieee80211_vif *vif);
void mt76_connac_mcu_sta_basic_tlv(struct mt76_dev *dev, struct sk_buff *skb,
struct ieee80211_vif *vif,
struct ieee80211_sta *sta, bool enable,
bool newly);
struct ieee80211_link_sta *link_sta,
bool enable, bool newly);
void mt76_connac_mcu_wtbl_generic_tlv(struct mt76_dev *dev, struct sk_buff *skb,
struct ieee80211_vif *vif,
struct ieee80211_sta *sta, void *sta_wtbl,
@ -1898,7 +1912,8 @@ int mt76_connac_mcu_sta_update_hdr_trans(struct mt76_dev *dev,
struct mt76_wcid *wcid, int cmd);
void mt76_connac_mcu_sta_he_tlv_v2(struct sk_buff *skb, struct ieee80211_sta *sta);
u8 mt76_connac_get_phy_mode_v2(struct mt76_phy *mphy, struct ieee80211_vif *vif,
enum nl80211_band band, struct ieee80211_sta *sta);
enum nl80211_band band,
struct ieee80211_link_sta *link_sta);
int mt76_connac_mcu_wtbl_update_hdr_trans(struct mt76_dev *dev,
struct ieee80211_vif *vif,
struct ieee80211_sta *sta);
@ -1917,7 +1932,7 @@ void mt76_connac_mcu_sta_ba_tlv(struct sk_buff *skb,
struct ieee80211_ampdu_params *params,
bool enable, bool tx);
int mt76_connac_mcu_uni_add_dev(struct mt76_phy *phy,
struct ieee80211_vif *vif,
struct ieee80211_bss_conf *bss_conf,
struct mt76_wcid *wcid,
bool enable);
int mt76_connac_mcu_sta_ba(struct mt76_dev *dev, struct mt76_vif *mvif,
@ -1992,7 +2007,8 @@ mt76_connac_get_he_phy_cap(struct mt76_phy *phy, struct ieee80211_vif *vif);
const struct ieee80211_sta_eht_cap *
mt76_connac_get_eht_phy_cap(struct mt76_phy *phy, struct ieee80211_vif *vif);
u8 mt76_connac_get_phy_mode(struct mt76_phy *phy, struct ieee80211_vif *vif,
enum nl80211_band band, struct ieee80211_sta *sta);
enum nl80211_band band,
struct ieee80211_link_sta *sta);
u8 mt76_connac_get_phy_mode_ext(struct mt76_phy *phy, struct ieee80211_vif *vif,
enum nl80211_band band);

@ -1503,7 +1503,7 @@ mt7915_mcu_sta_rate_ctrl_tlv(struct sk_buff *skb, struct mt7915_dev *dev,
ra->valid = true;
ra->auto_rate = true;
ra->phy_mode = mt76_connac_get_phy_mode(mphy, vif, band, sta);
ra->phy_mode = mt76_connac_get_phy_mode(mphy, vif, band, &sta->deflink);
ra->channel = chandef->chan->hw_value;
ra->bw = sta->deflink.bandwidth;
ra->phy.bw = sta->deflink.bandwidth;
@ -1656,11 +1656,13 @@ int mt7915_mcu_add_sta(struct mt7915_dev *dev, struct ieee80211_vif *vif,
struct ieee80211_sta *sta, bool enable)
{
struct mt7915_vif *mvif = (struct mt7915_vif *)vif->drv_priv;
struct ieee80211_link_sta *link_sta;
struct mt7915_sta *msta;
struct sk_buff *skb;
int ret;
msta = sta ? (struct mt7915_sta *)sta->drv_priv : &mvif->sta;
link_sta = sta ? &sta->deflink : NULL;
skb = mt76_connac_mcu_alloc_sta_req(&dev->mt76, &mvif->mt76,
&msta->wcid);
@ -1668,7 +1670,7 @@ int mt7915_mcu_add_sta(struct mt7915_dev *dev, struct ieee80211_vif *vif,
return PTR_ERR(skb);
/* starec basic */
mt76_connac_mcu_sta_basic_tlv(&dev->mt76, skb, vif, sta, enable,
mt76_connac_mcu_sta_basic_tlv(&dev->mt76, skb, vif, link_sta, enable,
!rcu_access_pointer(dev->mt76.wcid[msta->wcid.idx]));
if (!enable)
goto out;

@ -39,6 +39,7 @@ static void mt7921_mac_sta_poll(struct mt792x_dev *dev)
};
struct ieee80211_sta *sta;
struct mt792x_sta *msta;
struct mt792x_link_sta *mlink;
u32 tx_time[IEEE80211_NUM_ACS], rx_time[IEEE80211_NUM_ACS];
LIST_HEAD(sta_poll_list);
struct rate_info *rate;
@ -60,23 +61,25 @@ static void mt7921_mac_sta_poll(struct mt792x_dev *dev)
spin_unlock_bh(&dev->mt76.sta_poll_lock);
break;
}
msta = list_first_entry(&sta_poll_list,
struct mt792x_sta, wcid.poll_list);
list_del_init(&msta->wcid.poll_list);
mlink = list_first_entry(&sta_poll_list,
struct mt792x_link_sta,
wcid.poll_list);
msta = container_of(mlink, struct mt792x_sta, deflink);
list_del_init(&mlink->wcid.poll_list);
spin_unlock_bh(&dev->mt76.sta_poll_lock);
idx = msta->wcid.idx;
idx = mlink->wcid.idx;
addr = mt7921_mac_wtbl_lmac_addr(idx, MT_WTBL_AC0_CTT_OFFSET);
for (i = 0; i < IEEE80211_NUM_ACS; i++) {
u32 tx_last = msta->airtime_ac[i];
u32 rx_last = msta->airtime_ac[i + 4];
u32 tx_last = mlink->airtime_ac[i];
u32 rx_last = mlink->airtime_ac[i + 4];
msta->airtime_ac[i] = mt76_rr(dev, addr);
msta->airtime_ac[i + 4] = mt76_rr(dev, addr + 4);
mlink->airtime_ac[i] = mt76_rr(dev, addr);
mlink->airtime_ac[i + 4] = mt76_rr(dev, addr + 4);
tx_time[i] = msta->airtime_ac[i] - tx_last;
rx_time[i] = msta->airtime_ac[i + 4] - rx_last;
tx_time[i] = mlink->airtime_ac[i] - tx_last;
rx_time[i] = mlink->airtime_ac[i + 4] - rx_last;
if ((tx_last | rx_last) & BIT(30))
clear = true;
@ -87,10 +90,10 @@ static void mt7921_mac_sta_poll(struct mt792x_dev *dev)
if (clear) {
mt7921_mac_wtbl_update(dev, idx,
MT_WTBL_UPDATE_ADM_COUNT_CLEAR);
memset(msta->airtime_ac, 0, sizeof(msta->airtime_ac));
memset(mlink->airtime_ac, 0, sizeof(mlink->airtime_ac));
}
if (!msta->wcid.sta)
if (!mlink->wcid.sta)
continue;
sta = container_of((void *)msta, struct ieee80211_sta,
@ -113,7 +116,7 @@ static void mt7921_mac_sta_poll(struct mt792x_dev *dev)
* we need to make sure that flags match so polling GI
* from per-sta counters directly.
*/
rate = &msta->wcid.rate;
rate = &mlink->wcid.rate;
addr = mt7921_mac_wtbl_lmac_addr(idx,
MT_WTBL_TXRX_CAP_RATE_OFFSET);
val = mt76_rr(dev, addr);
@ -154,10 +157,10 @@ static void mt7921_mac_sta_poll(struct mt792x_dev *dev)
rssi[2] = to_rssi(GENMASK(23, 16), val);
rssi[3] = to_rssi(GENMASK(31, 24), val);
msta->ack_signal =
mlink->ack_signal =
mt76_rx_signal(msta->vif->phy->mt76->antenna_mask, rssi);
ewma_avg_signal_add(&msta->avg_ack_signal, -msta->ack_signal);
ewma_avg_signal_add(&mlink->avg_ack_signal, -mlink->ack_signal);
}
}
@ -180,6 +183,7 @@ mt7921_mac_fill_rx(struct mt792x_dev *dev, struct sk_buff *skb)
u32 rxd3 = le32_to_cpu(rxd[3]);
u32 rxd4 = le32_to_cpu(rxd[4]);
struct mt792x_sta *msta = NULL;
struct mt792x_link_sta *mlink;
u16 seq_ctrl = 0;
__le16 fc = 0;
u8 mode = 0;
@ -210,10 +214,11 @@ mt7921_mac_fill_rx(struct mt792x_dev *dev, struct sk_buff *skb)
status->wcid = mt792x_rx_get_wcid(dev, idx, unicast);
if (status->wcid) {
msta = container_of(status->wcid, struct mt792x_sta, wcid);
mlink = container_of(status->wcid, struct mt792x_link_sta, wcid);
msta = container_of(mlink, struct mt792x_sta, deflink);
spin_lock_bh(&dev->mt76.sta_poll_lock);
if (list_empty(&msta->wcid.poll_list))
list_add_tail(&msta->wcid.poll_list,
if (list_empty(&mlink->wcid.poll_list))
list_add_tail(&mlink->wcid.poll_list,
&dev->mt76.sta_poll_list);
spin_unlock_bh(&dev->mt76.sta_poll_lock);
}
@ -444,7 +449,7 @@ mt7921_mac_fill_rx(struct mt792x_dev *dev, struct sk_buff *skb)
void mt7921_mac_add_txs(struct mt792x_dev *dev, void *data)
{
struct mt792x_sta *msta = NULL;
struct mt792x_link_sta *mlink;
struct mt76_wcid *wcid;
__le32 *txs_data = data;
u16 wcidx;
@ -468,15 +473,15 @@ void mt7921_mac_add_txs(struct mt792x_dev *dev, void *data)
if (!wcid)
goto out;
msta = container_of(wcid, struct mt792x_sta, wcid);
mlink = container_of(wcid, struct mt792x_link_sta, wcid);
mt76_connac2_mac_add_txs_skb(&dev->mt76, wcid, pid, txs_data);
if (!wcid->sta)
goto out;
spin_lock_bh(&dev->mt76.sta_poll_lock);
if (list_empty(&msta->wcid.poll_list))
list_add_tail(&msta->wcid.poll_list, &dev->mt76.sta_poll_list);
if (list_empty(&mlink->wcid.poll_list))
list_add_tail(&mlink->wcid.poll_list, &dev->mt76.sta_poll_list);
spin_unlock_bh(&dev->mt76.sta_poll_lock);
out:
@ -513,7 +518,7 @@ static void mt7921_mac_tx_free(struct mt792x_dev *dev, void *data, int len)
* 1'b0: msdu_id with the same 'wcid pair' as above.
*/
if (info & MT_TX_FREE_PAIR) {
struct mt792x_sta *msta;
struct mt792x_link_sta *mlink;
u16 idx;
count++;
@ -523,10 +528,10 @@ static void mt7921_mac_tx_free(struct mt792x_dev *dev, void *data, int len)
if (!sta)
continue;
msta = container_of(wcid, struct mt792x_sta, wcid);
mlink = container_of(wcid, struct mt792x_link_sta, wcid);
spin_lock_bh(&mdev->sta_poll_lock);
if (list_empty(&msta->wcid.poll_list))
list_add_tail(&msta->wcid.poll_list,
if (list_empty(&mlink->wcid.poll_list))
list_add_tail(&mlink->wcid.poll_list,
&mdev->sta_poll_list);
spin_unlock_bh(&mdev->sta_poll_lock);
continue;
@ -641,11 +646,12 @@ mt7921_vif_connect_iter(void *priv, u8 *mac,
if (vif->type == NL80211_IFTYPE_STATION)
ieee80211_disconnect(vif, true);
mt76_connac_mcu_uni_add_dev(&dev->mphy, vif, &mvif->sta.wcid, true);
mt76_connac_mcu_uni_add_dev(&dev->mphy, &vif->bss_conf,
&mvif->sta.deflink.wcid, true);
mt7921_mcu_set_tx(dev, vif);
if (vif->type == NL80211_IFTYPE_AP) {
mt76_connac_mcu_uni_add_bss(dev->phy.mt76, vif, &mvif->sta.wcid,
mt76_connac_mcu_uni_add_bss(dev->phy.mt76, vif, &mvif->sta.deflink.wcid,
true, NULL);
mt7921_mcu_sta_update(dev, NULL, vif, true,
MT76_STA_INFO_STATE_NONE);
@ -786,9 +792,9 @@ int mt7921_usb_sdio_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
if (sta) {
struct mt792x_sta *msta = (struct mt792x_sta *)sta->drv_priv;
if (time_after(jiffies, msta->last_txs + HZ / 4)) {
if (time_after(jiffies, msta->deflink.last_txs + HZ / 4)) {
info->flags |= IEEE80211_TX_CTL_REQ_TX_STATUS;
msta->last_txs = jiffies;
msta->deflink.last_txs = jiffies;
}
}

@ -295,40 +295,40 @@ mt7921_add_interface(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
mt792x_mutex_acquire(dev);
mvif->mt76.idx = __ffs64(~dev->mt76.vif_mask);
if (mvif->mt76.idx >= MT792x_MAX_INTERFACES) {
mvif->bss_conf.mt76.idx = __ffs64(~dev->mt76.vif_mask);
if (mvif->bss_conf.mt76.idx >= MT792x_MAX_INTERFACES) {
ret = -ENOSPC;
goto out;
}
mvif->mt76.omac_idx = mvif->mt76.idx;
mvif->bss_conf.mt76.omac_idx = mvif->bss_conf.mt76.idx;
mvif->phy = phy;
mvif->mt76.band_idx = 0;
mvif->mt76.wmm_idx = mvif->mt76.idx % MT76_CONNAC_MAX_WMM_SETS;
mvif->bss_conf.mt76.band_idx = 0;
mvif->bss_conf.mt76.wmm_idx = mvif->bss_conf.mt76.idx % MT76_CONNAC_MAX_WMM_SETS;
ret = mt76_connac_mcu_uni_add_dev(&dev->mphy, vif, &mvif->sta.wcid,
true);
ret = mt76_connac_mcu_uni_add_dev(&dev->mphy, &vif->bss_conf,
&mvif->sta.deflink.wcid, true);
if (ret)
goto out;
dev->mt76.vif_mask |= BIT_ULL(mvif->mt76.idx);
phy->omac_mask |= BIT_ULL(mvif->mt76.omac_idx);
dev->mt76.vif_mask |= BIT_ULL(mvif->bss_conf.mt76.idx);
phy->omac_mask |= BIT_ULL(mvif->bss_conf.mt76.omac_idx);
idx = MT792x_WTBL_RESERVED - mvif->mt76.idx;
idx = MT792x_WTBL_RESERVED - mvif->bss_conf.mt76.idx;
INIT_LIST_HEAD(&mvif->sta.wcid.poll_list);
mvif->sta.wcid.idx = idx;
mvif->sta.wcid.phy_idx = mvif->mt76.band_idx;
mvif->sta.wcid.hw_key_idx = -1;
mvif->sta.wcid.tx_info |= MT_WCID_TX_INFO_SET;
mt76_wcid_init(&mvif->sta.wcid);
INIT_LIST_HEAD(&mvif->sta.deflink.wcid.poll_list);
mvif->sta.deflink.wcid.idx = idx;
mvif->sta.deflink.wcid.phy_idx = mvif->bss_conf.mt76.band_idx;
mvif->sta.deflink.wcid.hw_key_idx = -1;
mvif->sta.deflink.wcid.tx_info |= MT_WCID_TX_INFO_SET;
mt76_wcid_init(&mvif->sta.deflink.wcid);
mt7921_mac_wtbl_update(dev, idx,
MT_WTBL_UPDATE_ADM_COUNT_CLEAR);
ewma_rssi_init(&mvif->rssi);
ewma_rssi_init(&mvif->bss_conf.rssi);
rcu_assign_pointer(dev->mt76.wcid[idx], &mvif->sta.wcid);
rcu_assign_pointer(dev->mt76.wcid[idx], &mvif->sta.deflink.wcid);
if (vif->txq) {
mtxq = (struct mt76_txq *)vif->txq->drv_priv;
mtxq->wcid = idx;
@ -494,7 +494,7 @@ static int mt7921_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
struct mt792x_sta *msta = sta ? (struct mt792x_sta *)sta->drv_priv :
&mvif->sta;
struct mt76_wcid *wcid = &msta->wcid;
struct mt76_wcid *wcid = &msta->deflink.wcid;
u8 *wcid_keyidx = &wcid->hw_key_idx;
int idx = key->keyidx, err = 0;
@ -541,18 +541,18 @@ static int mt7921_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
}
mt76_wcid_key_setup(&dev->mt76, wcid, key);
err = mt76_connac_mcu_add_key(&dev->mt76, vif, &msta->bip,
err = mt76_connac_mcu_add_key(&dev->mt76, vif, &msta->deflink.bip,
key, MCU_UNI_CMD(STA_REC_UPDATE),
&msta->wcid, cmd);
&msta->deflink.wcid, cmd);
if (err)
goto out;
if (key->cipher == WLAN_CIPHER_SUITE_WEP104 ||
key->cipher == WLAN_CIPHER_SUITE_WEP40)
err = mt76_connac_mcu_add_key(&dev->mt76, vif,
&mvif->wep_sta->bip,
&mvif->wep_sta->deflink.bip,
key, MCU_UNI_CMD(STA_REC_UPDATE),
&mvif->wep_sta->wcid, cmd);
&mvif->wep_sta->deflink.wcid, cmd);
out:
mt792x_mutex_release(dev);
@ -718,7 +718,7 @@ static void mt7921_bss_info_changed(struct ieee80211_hw *hw,
if (changed & BSS_CHANGED_ARP_FILTER) {
struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
mt76_connac_mcu_update_arp_filter(&dev->mt76, &mvif->mt76,
mt76_connac_mcu_update_arp_filter(&dev->mt76, &mvif->bss_conf.mt76,
info);
}
@ -799,13 +799,13 @@ int mt7921_mac_sta_add(struct mt76_dev *mdev, struct ieee80211_vif *vif,
if (idx < 0)
return -ENOSPC;
INIT_LIST_HEAD(&msta->wcid.poll_list);
INIT_LIST_HEAD(&msta->deflink.wcid.poll_list);
msta->vif = mvif;
msta->wcid.sta = 1;
msta->wcid.idx = idx;
msta->wcid.phy_idx = mvif->mt76.band_idx;
msta->wcid.tx_info |= MT_WCID_TX_INFO_SET;
msta->last_txs = jiffies;
msta->deflink.wcid.sta = 1;
msta->deflink.wcid.idx = idx;
msta->deflink.wcid.phy_idx = mvif->bss_conf.mt76.band_idx;
msta->deflink.wcid.tx_info |= MT_WCID_TX_INFO_SET;
msta->deflink.last_txs = jiffies;
ret = mt76_connac_pm_wake(&dev->mphy, &dev->pm);
if (ret)
@ -840,14 +840,14 @@ void mt7921_mac_sta_assoc(struct mt76_dev *mdev, struct ieee80211_vif *vif,
mt792x_mutex_acquire(dev);
if (vif->type == NL80211_IFTYPE_STATION && !sta->tdls)
mt76_connac_mcu_uni_add_bss(&dev->mphy, vif, &mvif->sta.wcid,
true, mvif->mt76.ctx);
mt76_connac_mcu_uni_add_bss(&dev->mphy, vif, &mvif->sta.deflink.wcid,
true, mvif->bss_conf.mt76.ctx);
ewma_avg_signal_init(&msta->avg_ack_signal);
ewma_avg_signal_init(&msta->deflink.avg_ack_signal);
mt7921_mac_wtbl_update(dev, msta->wcid.idx,
mt7921_mac_wtbl_update(dev, msta->deflink.wcid.idx,
MT_WTBL_UPDATE_ADM_COUNT_CLEAR);
memset(msta->airtime_ac, 0, sizeof(msta->airtime_ac));
memset(msta->deflink.airtime_ac, 0, sizeof(msta->deflink.airtime_ac));
mt7921_mcu_sta_update(dev, sta, vif, true, MT76_STA_INFO_STATE_ASSOC);
@ -861,27 +861,27 @@ void mt7921_mac_sta_remove(struct mt76_dev *mdev, struct ieee80211_vif *vif,
struct mt792x_dev *dev = container_of(mdev, struct mt792x_dev, mt76);
struct mt792x_sta *msta = (struct mt792x_sta *)sta->drv_priv;
mt76_connac_free_pending_tx_skbs(&dev->pm, &msta->wcid);
mt76_connac_free_pending_tx_skbs(&dev->pm, &msta->deflink.wcid);
mt76_connac_pm_wake(&dev->mphy, &dev->pm);
mt7921_mcu_sta_update(dev, sta, vif, false, MT76_STA_INFO_STATE_NONE);
mt7921_mac_wtbl_update(dev, msta->wcid.idx,
mt7921_mac_wtbl_update(dev, msta->deflink.wcid.idx,
MT_WTBL_UPDATE_ADM_COUNT_CLEAR);
if (vif->type == NL80211_IFTYPE_STATION) {
struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
mvif->wep_sta = NULL;
ewma_rssi_init(&mvif->rssi);
ewma_rssi_init(&mvif->bss_conf.rssi);
if (!sta->tdls)
mt76_connac_mcu_uni_add_bss(&dev->mphy, vif,
&mvif->sta.wcid, false,
mvif->mt76.ctx);
&mvif->sta.deflink.wcid, false,
mvif->bss_conf.mt76.ctx);
}
spin_lock_bh(&dev->mt76.sta_poll_lock);
if (!list_empty(&msta->wcid.poll_list))
list_del_init(&msta->wcid.poll_list);
if (!list_empty(&msta->deflink.wcid.poll_list))
list_del_init(&msta->deflink.wcid.poll_list);
spin_unlock_bh(&dev->mt76.sta_poll_lock);
mt7921_regd_set_6ghz_power_type(vif, false);
@ -923,12 +923,12 @@ mt7921_ampdu_action(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
mt792x_mutex_acquire(dev);
switch (action) {
case IEEE80211_AMPDU_RX_START:
mt76_rx_aggr_start(&dev->mt76, &msta->wcid, tid, ssn,
mt76_rx_aggr_start(&dev->mt76, &msta->deflink.wcid, tid, ssn,
params->buf_size);
mt7921_mcu_uni_rx_ba(dev, params, true);
break;
case IEEE80211_AMPDU_RX_STOP:
mt76_rx_aggr_stop(&dev->mt76, &msta->wcid, tid);
mt76_rx_aggr_stop(&dev->mt76, &msta->deflink.wcid, tid);
mt7921_mcu_uni_rx_ba(dev, params, false);
break;
case IEEE80211_AMPDU_TX_OPERATIONAL:
@ -939,16 +939,16 @@ mt7921_ampdu_action(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
case IEEE80211_AMPDU_TX_STOP_FLUSH:
case IEEE80211_AMPDU_TX_STOP_FLUSH_CONT:
mtxq->aggr = false;
clear_bit(tid, &msta->wcid.ampdu_state);
clear_bit(tid, &msta->deflink.wcid.ampdu_state);
mt7921_mcu_uni_tx_ba(dev, params, false);
break;
case IEEE80211_AMPDU_TX_START:
set_bit(tid, &msta->wcid.ampdu_state);
set_bit(tid, &msta->deflink.wcid.ampdu_state);
ret = IEEE80211_AMPDU_TX_START_IMMEDIATE;
break;
case IEEE80211_AMPDU_TX_STOP_CONT:
mtxq->aggr = false;
clear_bit(tid, &msta->wcid.ampdu_state);
clear_bit(tid, &msta->deflink.wcid.ampdu_state);
mt7921_mcu_uni_tx_ba(dev, params, false);
ieee80211_stop_tx_ba_cb_irqsafe(vif, sta->addr, tid);
break;
@ -1166,11 +1166,11 @@ static void mt7921_sta_set_decap_offload(struct ieee80211_hw *hw,
mt792x_mutex_acquire(dev);
if (enabled)
set_bit(MT_WCID_FLAG_HDR_TRANS, &msta->wcid.flags);
set_bit(MT_WCID_FLAG_HDR_TRANS, &msta->deflink.wcid.flags);
else
clear_bit(MT_WCID_FLAG_HDR_TRANS, &msta->wcid.flags);
clear_bit(MT_WCID_FLAG_HDR_TRANS, &msta->deflink.wcid.flags);
mt76_connac_mcu_sta_update_hdr_trans(&dev->mt76, vif, &msta->wcid,
mt76_connac_mcu_sta_update_hdr_trans(&dev->mt76, vif, &msta->deflink.wcid,
MCU_UNI_CMD(STA_REC_UPDATE));
mt792x_mutex_release(dev);
@ -1196,7 +1196,7 @@ static void mt7921_ipv6_addr_change(struct ieee80211_hw *hw,
struct mt76_connac_arpns_tlv arpns;
} req_hdr = {
.hdr = {
.bss_idx = mvif->mt76.idx,
.bss_idx = mvif->bss_conf.mt76.idx,
},
.arpns = {
.tag = cpu_to_le16(UNI_OFFLOAD_OFFLOAD_ND),
@ -1294,8 +1294,8 @@ mt7921_start_ap(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
mt792x_mutex_acquire(dev);
err = mt76_connac_mcu_uni_add_bss(phy->mt76, vif, &mvif->sta.wcid,
true, mvif->mt76.ctx);
err = mt76_connac_mcu_uni_add_bss(phy->mt76, vif, &mvif->sta.deflink.wcid,
true, mvif->bss_conf.mt76.ctx);
if (err)
goto out;
@ -1326,8 +1326,8 @@ mt7921_stop_ap(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
if (err)
goto out;
mt76_connac_mcu_uni_add_bss(phy->mt76, vif, &mvif->sta.wcid, false,
mvif->mt76.ctx);
mt76_connac_mcu_uni_add_bss(phy->mt76, vif, &mvif->sta.deflink.wcid, false,
mvif->bss_conf.mt76.ctx);
out:
mt792x_mutex_release(dev);
@ -1346,32 +1346,27 @@ mt7921_remove_chanctx(struct ieee80211_hw *hw,
{
}
static void mt7921_ctx_iter(void *priv, u8 *mac,
struct ieee80211_vif *vif)
{
struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
struct ieee80211_chanctx_conf *ctx = priv;
if (ctx != mvif->mt76.ctx)
return;
if (vif->type == NL80211_IFTYPE_MONITOR)
mt7921_mcu_config_sniffer(mvif, ctx);
else
mt76_connac_mcu_uni_set_chctx(mvif->phy->mt76, &mvif->mt76, ctx);
}
static void
mt7921_change_chanctx(struct ieee80211_hw *hw,
struct ieee80211_chanctx_conf *ctx,
u32 changed)
{
struct mt792x_chanctx *mctx = (struct mt792x_chanctx *)ctx->drv_priv;
struct mt792x_phy *phy = mt792x_hw_phy(hw);
struct ieee80211_vif *vif;
struct mt792x_vif *mvif;
if (!mctx->bss_conf)
return;
mvif = container_of(mctx->bss_conf, struct mt792x_vif, bss_conf);
vif = container_of((void *)mvif, struct ieee80211_vif, drv_priv);
mt792x_mutex_acquire(phy->dev);
ieee80211_iterate_active_interfaces(phy->mt76->hw,
IEEE80211_IFACE_ITER_ACTIVE,
mt7921_ctx_iter, ctx);
if (vif->type == NL80211_IFTYPE_MONITOR)
mt7921_mcu_config_sniffer(mvif, ctx);
else
mt76_connac_mcu_uni_set_chctx(mvif->phy->mt76, &mvif->bss_conf.mt76, ctx);
mt792x_mutex_release(phy->dev);
}
@ -1385,7 +1380,7 @@ static void mt7921_mgd_prepare_tx(struct ieee80211_hw *hw,
jiffies_to_msecs(HZ);
mt792x_mutex_acquire(dev);
mt7921_set_roc(mvif->phy, mvif, mvif->mt76.ctx->def.chan, duration,
mt7921_set_roc(mvif->phy, mvif, mvif->bss_conf.mt76.ctx->def.chan, duration,
MT7921_ROC_REQ_JOIN);
mt792x_mutex_release(dev);
}


@ -105,7 +105,7 @@ mt7921_mcu_set_ipv6_ns_filter(struct mt76_dev *dev,
struct mt76_connac_arpns_tlv arpns;
} req = {
.hdr = {
.bss_idx = mvif->mt76.idx,
.bss_idx = mvif->bss_conf.mt76.idx,
},
.arpns = {
.tag = cpu_to_le16(UNI_OFFLOAD_OFFLOAD_ND),
@ -260,7 +260,7 @@ mt7921_mcu_rssi_monitor_iter(void *priv, u8 *mac,
struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
struct mt76_connac_rssi_notify_event *event = priv;
enum nl80211_cqm_rssi_threshold_event nl_event;
s32 rssi = le32_to_cpu(event->rssi[mvif->mt76.idx]);
s32 rssi = le32_to_cpu(event->rssi[mvif->bss_conf.mt76.idx]);
if (!rssi)
return;
@ -386,9 +386,9 @@ int mt7921_mcu_uni_tx_ba(struct mt792x_dev *dev,
struct mt792x_sta *msta = (struct mt792x_sta *)params->sta->drv_priv;
if (enable && !params->amsdu)
msta->wcid.amsdu = false;
msta->deflink.wcid.amsdu = false;
return mt76_connac_mcu_sta_ba(&dev->mt76, &msta->vif->mt76, params,
return mt76_connac_mcu_sta_ba(&dev->mt76, &msta->vif->bss_conf.mt76, params,
MCU_UNI_CMD(STA_REC_UPDATE),
enable, true);
}
@ -399,7 +399,7 @@ int mt7921_mcu_uni_rx_ba(struct mt792x_dev *dev,
{
struct mt792x_sta *msta = (struct mt792x_sta *)params->sta->drv_priv;
return mt76_connac_mcu_sta_ba(&dev->mt76, &msta->vif->mt76, params,
return mt76_connac_mcu_sta_ba(&dev->mt76, &msta->vif->bss_conf.mt76, params,
MCU_UNI_CMD(STA_REC_UPDATE),
enable, false);
}
@ -678,9 +678,9 @@ int mt7921_mcu_set_tx(struct mt792x_dev *dev, struct ieee80211_vif *vif)
u8 wmm_idx;
u8 pad;
} __packed req = {
.bss_idx = mvif->mt76.idx,
.bss_idx = mvif->bss_conf.mt76.idx,
.qos = vif->bss_conf.qos,
.wmm_idx = mvif->mt76.wmm_idx,
.wmm_idx = mvif->bss_conf.mt76.wmm_idx,
};
struct mu_edca {
u8 cw_min;
@ -701,15 +701,15 @@ int mt7921_mcu_set_tx(struct mt792x_dev *dev, struct ieee80211_vif *vif)
struct mu_edca edca[IEEE80211_NUM_ACS];
u8 pad3[32];
} __packed req_mu = {
.bss_idx = mvif->mt76.idx,
.bss_idx = mvif->bss_conf.mt76.idx,
.qos = vif->bss_conf.qos,
.wmm_idx = mvif->mt76.wmm_idx,
.wmm_idx = mvif->bss_conf.mt76.wmm_idx,
};
static const int to_aci[] = { 1, 0, 2, 3 };
int ac, ret;
for (ac = 0; ac < IEEE80211_NUM_ACS; ac++) {
struct ieee80211_tx_queue_params *q = &mvif->queue_params[ac];
struct ieee80211_tx_queue_params *q = &mvif->bss_conf.queue_params[ac];
struct edca *e = &req.edca[to_aci[ac]];
e->aifs = cpu_to_le16(q->aifs);
@ -738,10 +738,10 @@ int mt7921_mcu_set_tx(struct mt792x_dev *dev, struct ieee80211_vif *vif)
struct ieee80211_he_mu_edca_param_ac_rec *q;
struct mu_edca *e;
if (!mvif->queue_params[ac].mu_edca)
if (!mvif->bss_conf.queue_params[ac].mu_edca)
break;
q = &mvif->queue_params[ac].mu_edca_param_rec;
q = &mvif->bss_conf.queue_params[ac].mu_edca_param_rec;
e = &(req_mu.edca[to_aci[ac]]);
e->cw_min = q->ecw_min_max & 0xf;
@ -790,7 +790,7 @@ int mt7921_mcu_set_roc(struct mt792x_phy *phy, struct mt792x_vif *vif,
.tokenid = token_id,
.reqtype = type,
.maxinterval = cpu_to_le32(duration),
.bss_idx = vif->mt76.idx,
.bss_idx = vif->bss_conf.mt76.idx,
.control_channel = chan->hw_value,
.bw = CMD_CBW_20MHZ,
.bw_from_ap = CMD_CBW_20MHZ,
@ -842,7 +842,7 @@ int mt7921_mcu_abort_roc(struct mt792x_phy *phy, struct mt792x_vif *vif,
.tag = cpu_to_le16(UNI_ROC_ABORT),
.len = cpu_to_le16(sizeof(struct roc_abort_tlv)),
.tokenid = token_id,
.bss_idx = vif->mt76.idx,
.bss_idx = vif->bss_conf.mt76.idx,
.dbdcband = 0xff, /* auto*/
},
};
@ -947,7 +947,7 @@ int mt7921_mcu_uni_bss_ps(struct mt792x_dev *dev, struct ieee80211_vif *vif)
} __packed ps;
} __packed ps_req = {
.hdr = {
.bss_idx = mvif->mt76.idx,
.bss_idx = mvif->bss_conf.mt76.idx,
},
.ps = {
.tag = cpu_to_le16(UNI_BSS_INFO_PS),
@ -982,7 +982,7 @@ mt7921_mcu_uni_bss_bcnft(struct mt792x_dev *dev, struct ieee80211_vif *vif,
} __packed bcnft;
} __packed bcnft_req = {
.hdr = {
.bss_idx = mvif->mt76.idx,
.bss_idx = mvif->bss_conf.mt76.idx,
},
.bcnft = {
.tag = cpu_to_le16(UNI_BSS_INFO_BCNFT),
@ -1015,7 +1015,7 @@ mt7921_mcu_set_bss_pm(struct mt792x_dev *dev, struct ieee80211_vif *vif,
u8 bmc_triggered_ac;
u8 pad;
} req = {
.bss_idx = mvif->mt76.idx,
.bss_idx = mvif->bss_conf.mt76.idx,
.aid = cpu_to_le16(vif->cfg.aid),
.dtim_period = vif->bss_conf.dtim_period,
.bcn_interval = cpu_to_le16(vif->bss_conf.beacon_int),
@ -1024,7 +1024,7 @@ mt7921_mcu_set_bss_pm(struct mt792x_dev *dev, struct ieee80211_vif *vif,
u8 bss_idx;
u8 pad[3];
} req_hdr = {
.bss_idx = mvif->mt76.idx,
.bss_idx = mvif->bss_conf.mt76.idx,
};
int err;
@ -1042,7 +1042,7 @@ int mt7921_mcu_sta_update(struct mt792x_dev *dev, struct ieee80211_sta *sta,
enum mt76_sta_info_state state)
{
struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
int rssi = -ewma_rssi_read(&mvif->rssi);
int rssi = -ewma_rssi_read(&mvif->bss_conf.rssi);
struct mt76_sta_cmd_info info = {
.sta = sta,
.vif = vif,
@ -1055,7 +1055,7 @@ int mt7921_mcu_sta_update(struct mt792x_dev *dev, struct ieee80211_sta *sta,
struct mt792x_sta *msta;
msta = sta ? (struct mt792x_sta *)sta->drv_priv : NULL;
info.wcid = msta ? &msta->wcid : &mvif->sta.wcid;
info.wcid = msta ? &msta->deflink.wcid : &mvif->sta.deflink.wcid;
info.newly = msta ? state != MT76_STA_INFO_STATE_ASSOC : true;
return mt76_connac_mcu_sta_cmd(&dev->mphy, &info);
@ -1190,7 +1190,7 @@ int mt7921_mcu_config_sniffer(struct mt792x_vif *vif,
} __packed tlv;
} __packed req = {
.hdr = {
.band_idx = vif->mt76.band_idx,
.band_idx = vif->bss_conf.mt76.band_idx,
},
.tlv = {
.tag = cpu_to_le16(1),
@ -1251,7 +1251,7 @@ mt7921_mcu_uni_add_beacon_offload(struct mt792x_dev *dev,
} __packed beacon_tlv;
} req = {
.hdr = {
.bss_idx = mvif->mt76.idx,
.bss_idx = mvif->bss_conf.mt76.idx,
},
.beacon_tlv = {
.tag = cpu_to_le16(UNI_BSS_INFO_BCN_CONTENT),
@ -1460,7 +1460,7 @@ int mt7921_mcu_set_rssimonitor(struct mt792x_dev *dev, struct ieee80211_vif *vif
.enable = vif->cfg.assoc,
.cqm_rssi_high = vif->bss_conf.cqm_rssi_thold + vif->bss_conf.cqm_rssi_hyst,
.cqm_rssi_low = vif->bss_conf.cqm_rssi_thold - vif->bss_conf.cqm_rssi_hyst,
.bss_idx = mvif->mt76.idx,
.bss_idx = mvif->bss_conf.mt76.idx,
};
return mt76_mcu_send_msg(&dev->mt76, MCU_CE_CMD(RSSI_MONITOR),


@ -339,6 +339,9 @@ static int mt7921_pci_probe(struct pci_dev *pdev,
bus_ops->rmw = mt7921_rmw;
dev->mt76.bus = bus_ops;
if (!mt7921_disable_aspm && mt76_pci_aspm_supported(pdev))
dev->aspm_supported = true;
ret = mt792xe_mcu_fw_pmctrl(dev);
if (ret)
goto err_free_dev;


@ -34,9 +34,9 @@ int mt7921e_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
if (sta) {
struct mt792x_sta *msta = (struct mt792x_sta *)sta->drv_priv;
if (time_after(jiffies, msta->last_txs + HZ / 4)) {
if (time_after(jiffies, msta->deflink.last_txs + HZ / 4)) {
info->flags |= IEEE80211_TX_CTL_REQ_TX_STATUS;
msta->last_txs = jiffies;
msta->deflink.last_txs = jiffies;
}
}


@ -179,6 +179,12 @@ static void mt7925_init_work(struct work_struct *work)
mt76_set_stream_caps(&dev->mphy, true);
mt7925_set_stream_he_eht_caps(&dev->phy);
ret = mt7925_init_mlo_caps(&dev->phy);
if (ret) {
dev_err(dev->mt76.dev, "MLO init failed\n");
return;
}
ret = mt76_register_device(&dev->mt76, true, mt76_rates,
ARRAY_SIZE(mt76_rates));
if (ret) {


@ -28,6 +28,7 @@ static void mt7925_mac_sta_poll(struct mt792x_dev *dev)
};
struct ieee80211_sta *sta;
struct mt792x_sta *msta;
struct mt792x_link_sta *mlink;
u32 tx_time[IEEE80211_NUM_ACS], rx_time[IEEE80211_NUM_ACS];
LIST_HEAD(sta_poll_list);
struct rate_info *rate;
@ -46,24 +47,25 @@ static void mt7925_mac_sta_poll(struct mt792x_dev *dev)
if (list_empty(&sta_poll_list))
break;
msta = list_first_entry(&sta_poll_list,
struct mt792x_sta, wcid.poll_list);
mlink = list_first_entry(&sta_poll_list,
struct mt792x_link_sta, wcid.poll_list);
msta = container_of(mlink, struct mt792x_sta, deflink);
spin_lock_bh(&dev->mt76.sta_poll_lock);
list_del_init(&msta->wcid.poll_list);
list_del_init(&mlink->wcid.poll_list);
spin_unlock_bh(&dev->mt76.sta_poll_lock);
idx = msta->wcid.idx;
idx = mlink->wcid.idx;
addr = mt7925_mac_wtbl_lmac_addr(dev, idx, MT_WTBL_AC0_CTT_OFFSET);
for (i = 0; i < IEEE80211_NUM_ACS; i++) {
u32 tx_last = msta->airtime_ac[i];
u32 rx_last = msta->airtime_ac[i + 4];
u32 tx_last = mlink->airtime_ac[i];
u32 rx_last = mlink->airtime_ac[i + 4];
msta->airtime_ac[i] = mt76_rr(dev, addr);
msta->airtime_ac[i + 4] = mt76_rr(dev, addr + 4);
mlink->airtime_ac[i] = mt76_rr(dev, addr);
mlink->airtime_ac[i + 4] = mt76_rr(dev, addr + 4);
tx_time[i] = msta->airtime_ac[i] - tx_last;
rx_time[i] = msta->airtime_ac[i + 4] - rx_last;
tx_time[i] = mlink->airtime_ac[i] - tx_last;
rx_time[i] = mlink->airtime_ac[i + 4] - rx_last;
if ((tx_last | rx_last) & BIT(30))
clear = true;
@ -74,10 +76,10 @@ static void mt7925_mac_sta_poll(struct mt792x_dev *dev)
if (clear) {
mt7925_mac_wtbl_update(dev, idx,
MT_WTBL_UPDATE_ADM_COUNT_CLEAR);
memset(msta->airtime_ac, 0, sizeof(msta->airtime_ac));
memset(mlink->airtime_ac, 0, sizeof(mlink->airtime_ac));
}
if (!msta->wcid.sta)
if (!mlink->wcid.sta)
continue;
sta = container_of((void *)msta, struct ieee80211_sta,
@ -100,7 +102,7 @@ static void mt7925_mac_sta_poll(struct mt792x_dev *dev)
* we need to make sure that flags match so polling GI
* from per-sta counters directly.
*/
rate = &msta->wcid.rate;
rate = &mlink->wcid.rate;
switch (rate->bw) {
case RATE_INFO_BW_160:
@ -144,10 +146,10 @@ static void mt7925_mac_sta_poll(struct mt792x_dev *dev)
rssi[2] = to_rssi(GENMASK(23, 16), val);
rssi[3] = to_rssi(GENMASK(31, 14), val);
msta->ack_signal =
mlink->ack_signal =
mt76_rx_signal(msta->vif->phy->mt76->antenna_mask, rssi);
ewma_avg_signal_add(&msta->avg_ack_signal, -msta->ack_signal);
ewma_avg_signal_add(&mlink->avg_ack_signal, -mlink->ack_signal);
}
}
@ -365,7 +367,7 @@ mt7925_mac_fill_rx(struct mt792x_dev *dev, struct sk_buff *skb)
u32 rxd2 = le32_to_cpu(rxd[2]);
u32 rxd3 = le32_to_cpu(rxd[3]);
u32 rxd4 = le32_to_cpu(rxd[4]);
struct mt792x_sta *msta = NULL;
struct mt792x_link_sta *mlink;
u8 mode = 0; /* , band_idx; */
u16 seq_ctrl = 0;
__le16 fc = 0;
@ -393,10 +395,10 @@ mt7925_mac_fill_rx(struct mt792x_dev *dev, struct sk_buff *skb)
status->wcid = mt792x_rx_get_wcid(dev, idx, unicast);
if (status->wcid) {
msta = container_of(status->wcid, struct mt792x_sta, wcid);
mlink = container_of(status->wcid, struct mt792x_link_sta, wcid);
spin_lock_bh(&dev->mt76.sta_poll_lock);
if (list_empty(&msta->wcid.poll_list))
list_add_tail(&msta->wcid.poll_list,
if (list_empty(&mlink->wcid.poll_list))
list_add_tail(&mlink->wcid.poll_list,
&dev->mt76.sta_poll_list);
spin_unlock_bh(&dev->mt76.sta_poll_lock);
}
@ -738,8 +740,12 @@ mt7925_mac_write_txwi(struct mt76_dev *dev, __le32 *txwi,
BSS_CHANGED_BEACON_ENABLED));
bool inband_disc = !!(changed & (BSS_CHANGED_UNSOL_BCAST_PROBE_RESP |
BSS_CHANGED_FILS_DISCOVERY));
struct mt792x_bss_conf *mconf;
mconf = vif ? mt792x_vif_to_link((struct mt792x_vif *)vif->drv_priv,
wcid->link_id) : NULL;
mvif = mconf ? (struct mt76_vif *)&mconf->mt76 : NULL;
mvif = vif ? (struct mt76_vif *)vif->drv_priv : NULL;
if (mvif) {
omac_idx = mvif->omac_idx;
wmm_idx = mvif->wmm_idx;
@ -800,8 +806,10 @@ mt7925_mac_write_txwi(struct mt76_dev *dev, __le32 *txwi,
txwi[5] = cpu_to_le32(val);
val = MT_TXD6_DIS_MAT | MT_TXD6_DAS |
FIELD_PREP(MT_TXD6_MSDU_CNT, 1);
val = MT_TXD6_DAS | FIELD_PREP(MT_TXD6_MSDU_CNT, 1);
if (!ieee80211_vif_is_mld(vif) ||
(q_idx >= MT_LMAC_ALTX0 && q_idx <= MT_LMAC_BCN0))
val |= MT_TXD6_DIS_MAT;
txwi[6] = cpu_to_le32(val);
txwi[7] = 0;
@ -831,27 +839,53 @@ mt7925_mac_write_txwi(struct mt76_dev *dev, __le32 *txwi,
}
EXPORT_SYMBOL_GPL(mt7925_mac_write_txwi);
static void mt7925_tx_check_aggr(struct ieee80211_sta *sta, __le32 *txwi)
static void mt7925_tx_check_aggr(struct ieee80211_sta *sta, struct sk_buff *skb,
struct mt76_wcid *wcid)
{
struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
struct ieee80211_link_sta *link_sta;
struct mt792x_link_sta *mlink;
struct mt792x_sta *msta;
bool is_8023;
u16 fc, tid;
u32 val;
if (!sta || !(sta->deflink.ht_cap.ht_supported || sta->deflink.he_cap.has_he))
link_sta = rcu_dereference(sta->link[wcid->link_id]);
if (!link_sta)
return;
tid = le32_get_bits(txwi[1], MT_TXD1_TID);
if (tid >= 6) /* skip VO queue */
if (!sta || !(link_sta->ht_cap.ht_supported || link_sta->he_cap.has_he))
return;
val = le32_to_cpu(txwi[2]);
fc = FIELD_GET(MT_TXD2_FRAME_TYPE, val) << 2 |
FIELD_GET(MT_TXD2_SUB_TYPE, val) << 4;
tid = skb->priority & IEEE80211_QOS_CTL_TID_MASK;
is_8023 = info->flags & IEEE80211_TX_CTL_HW_80211_ENCAP;
if (is_8023) {
fc = IEEE80211_FTYPE_DATA |
(sta->wme ? IEEE80211_STYPE_QOS_DATA :
IEEE80211_STYPE_DATA);
} else {
/* No need to get precise TID for Action/Management Frame,
* since it will not meet the following Frame Control
* condition anyway.
*/
struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
fc = le16_to_cpu(hdr->frame_control) &
(IEEE80211_FCTL_FTYPE | IEEE80211_FCTL_STYPE);
}
if (unlikely(fc != (IEEE80211_FTYPE_DATA | IEEE80211_STYPE_QOS_DATA)))
return;
msta = (struct mt792x_sta *)sta->drv_priv;
if (!test_and_set_bit(tid, &msta->wcid.ampdu_state))
if (sta->mlo && msta->deflink_id != IEEE80211_LINK_UNSPECIFIED)
mlink = rcu_dereference(msta->link[msta->deflink_id]);
else
mlink = &msta->deflink;
if (!test_and_set_bit(tid, &mlink->wcid.ampdu_state))
ieee80211_start_tx_ba_session(sta, tid, 0);
}
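
With hardware-encapsulated (802.3) frames there is no 802.11 header in the skb, so the rewritten mt7925_tx_check_aggr() above derives the frame-control word instead of reading it from the txwi: assumed (QoS) data for encap frames, read from the header otherwise, and only QoS data frames may start a BA session. A standalone sketch of that test; the constants are copied from linux/ieee80211.h, the helper itself is illustrative:

#include <stdint.h>
#include <stdio.h>

#define IEEE80211_FCTL_FTYPE     0x000c
#define IEEE80211_FCTL_STYPE     0x00f0
#define IEEE80211_FTYPE_DATA     0x0008
#define IEEE80211_STYPE_QOS_DATA 0x0080

/* Returns 1 when a frame-control word describes a QoS data frame and is
 * therefore a candidate for starting an A-MPDU (BA) session.
 */
static int is_ampdu_candidate(uint16_t frame_control)
{
	uint16_t fc = frame_control &
		      (IEEE80211_FCTL_FTYPE | IEEE80211_FCTL_STYPE);

	return fc == (IEEE80211_FTYPE_DATA | IEEE80211_STYPE_QOS_DATA);
}

int main(void)
{
	printf("QoS data:   %d\n", is_ampdu_candidate(0x0088)); /* 1 */
	printf("management: %d\n", is_ampdu_candidate(0x0000)); /* 0 */
	return 0;
}
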
@ -991,7 +1025,7 @@ mt7925_mac_add_txs_skb(struct mt792x_dev *dev, struct mt76_wcid *wcid,
void mt7925_mac_add_txs(struct mt792x_dev *dev, void *data)
{
struct mt792x_sta *msta = NULL;
struct mt792x_link_sta *mlink = NULL;
struct mt76_wcid *wcid;
__le32 *txs_data = data;
u16 wcidx;
@ -1015,15 +1049,15 @@ void mt7925_mac_add_txs(struct mt792x_dev *dev, void *data)
if (!wcid)
goto out;
msta = container_of(wcid, struct mt792x_sta, wcid);
mlink = container_of(wcid, struct mt792x_link_sta, wcid);
mt7925_mac_add_txs_skb(dev, wcid, pid, txs_data);
if (!wcid->sta)
goto out;
spin_lock_bh(&dev->mt76.sta_poll_lock);
if (list_empty(&msta->wcid.poll_list))
list_add_tail(&msta->wcid.poll_list, &dev->mt76.sta_poll_list);
if (list_empty(&mlink->wcid.poll_list))
list_add_tail(&mlink->wcid.poll_list, &dev->mt76.sta_poll_list);
spin_unlock_bh(&dev->mt76.sta_poll_lock);
out:
@ -1031,7 +1065,7 @@ void mt7925_mac_add_txs(struct mt792x_dev *dev, void *data)
}
void mt7925_txwi_free(struct mt792x_dev *dev, struct mt76_txwi_cache *t,
struct ieee80211_sta *sta, bool clear_status,
struct ieee80211_sta *sta, struct mt76_wcid *wcid,
struct list_head *free_list)
{
struct mt76_dev *mdev = &dev->mt76;
@ -1044,10 +1078,8 @@ void mt7925_txwi_free(struct mt792x_dev *dev, struct mt76_txwi_cache *t,
txwi = (__le32 *)mt76_get_txwi_ptr(mdev, t);
if (sta) {
struct mt76_wcid *wcid = (struct mt76_wcid *)sta->drv_priv;
if (likely(t->skb->protocol != cpu_to_be16(ETH_P_PAE)))
mt7925_tx_check_aggr(sta, txwi);
mt7925_tx_check_aggr(sta, t->skb, wcid);
wcid_idx = wcid->idx;
} else {
@ -1094,7 +1126,7 @@ mt7925_mac_tx_free(struct mt792x_dev *dev, void *data, int len)
*/
info = le32_to_cpu(*cur_info);
if (info & MT_TXFREE_INFO_PAIR) {
struct mt792x_sta *msta;
struct mt792x_link_sta *mlink;
u16 idx;
idx = FIELD_GET(MT_TXFREE_INFO_WLAN_ID, info);
@ -1103,10 +1135,10 @@ mt7925_mac_tx_free(struct mt792x_dev *dev, void *data, int len)
if (!sta)
continue;
msta = container_of(wcid, struct mt792x_sta, wcid);
mlink = container_of(wcid, struct mt792x_link_sta, wcid);
spin_lock_bh(&mdev->sta_poll_lock);
if (list_empty(&msta->wcid.poll_list))
list_add_tail(&msta->wcid.poll_list,
if (list_empty(&mlink->wcid.poll_list))
list_add_tail(&mlink->wcid.poll_list,
&mdev->sta_poll_list);
spin_unlock_bh(&mdev->sta_poll_lock);
continue;
@ -1132,7 +1164,7 @@ mt7925_mac_tx_free(struct mt792x_dev *dev, void *data, int len)
if (!txwi)
continue;
mt7925_txwi_free(dev, txwi, sta, 0, &free_list);
mt7925_txwi_free(dev, txwi, sta, wcid, &free_list);
}
}
@ -1235,17 +1267,26 @@ mt7925_vif_connect_iter(void *priv, u8 *mac,
struct ieee80211_vif *vif)
{
struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
unsigned long valid = ieee80211_vif_is_mld(vif) ?
mvif->valid_links : BIT(0);
struct mt792x_dev *dev = mvif->phy->dev;
struct ieee80211_hw *hw = mt76_hw(dev);
struct ieee80211_bss_conf *bss_conf;
int i;
if (vif->type == NL80211_IFTYPE_STATION)
ieee80211_disconnect(vif, true);
mt76_connac_mcu_uni_add_dev(&dev->mphy, vif, &mvif->sta.wcid, true);
mt7925_mcu_set_tx(dev, vif);
for_each_set_bit(i, &valid, IEEE80211_MLD_MAX_NUM_LINKS) {
bss_conf = mt792x_vif_to_bss_conf(vif, i);
mt76_connac_mcu_uni_add_dev(&dev->mphy, bss_conf,
&mvif->sta.deflink.wcid, true);
mt7925_mcu_set_tx(dev, bss_conf);
}
if (vif->type == NL80211_IFTYPE_AP) {
mt76_connac_mcu_uni_add_bss(dev->phy.mt76, vif, &mvif->sta.wcid,
mt76_connac_mcu_uni_add_bss(dev->phy.mt76, vif, &mvif->sta.deflink.wcid,
true, NULL);
mt7925_mcu_sta_update(dev, NULL, vif, true,
MT76_STA_INFO_STATE_NONE);
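
mt7925_vif_connect_iter() now reprograms the firmware once per active link, walking the vif's link bitmap (which collapses to BIT(0) for a non-MLD vif). A trivial userspace sketch of that walk, with for_each_set_bit() open-coded and the 15-link cap of IEEE80211_MLD_MAX_NUM_LINKS taken as an assumption from mac80211:

#include <stdio.h>

#define MLD_MAX_NUM_LINKS 15

int main(void)
{
	unsigned long valid = 0x5; /* links 0 and 2; 0x1 for a non-MLD vif */
	int i;

	for (i = 0; i < MLD_MAX_NUM_LINKS; i++) {
		if (!(valid & (1UL << i)))
			continue;
		printf("re-add dev and set tx params for link %d\n", i);
	}
	return 0;
}
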
@ -1380,9 +1421,9 @@ int mt7925_usb_sdio_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
if (sta) {
struct mt792x_sta *msta = (struct mt792x_sta *)sta->drv_priv;
if (time_after(jiffies, msta->last_txs + HZ / 4)) {
if (time_after(jiffies, msta->deflink.last_txs + HZ / 4)) {
info->flags |= IEEE80211_TX_CTL_REQ_TX_STATUS;
msta->last_txs = jiffies;
msta->deflink.last_txs = jiffies;
}
}
@ -1417,7 +1458,7 @@ void mt7925_usb_sdio_tx_complete_skb(struct mt76_dev *mdev,
sta = wcid_to_sta(wcid);
if (sta && likely(e->skb->protocol != cpu_to_be16(ETH_P_PAE)))
mt7925_tx_check_aggr(sta, txwi);
mt76_connac2_tx_check_aggr(sta, txwi);
skb_pull(e->skb, headroom);
mt76_tx_complete_skb(mdev, e->wcid, e->skb);

File diff suppressed because it is too large

File diff suppressed because it is too large


@ -366,7 +366,10 @@ struct bss_mld_tlv {
u8 mac_addr[ETH_ALEN];
u8 remap_idx;
u8 link_id;
u8 __rsv[2];
u8 eml_enable;
u8 max_link_num;
u8 hybrid_mode;
u8 __rsv[3];
} __packed;
struct sta_rec_ba_uni {
@ -440,6 +443,17 @@ struct sta_rec_mld {
} __packed link[2];
} __packed;
struct sta_rec_eht_mld {
__le16 tag;
__le16 len;
u8 nsep;
u8 mld_type;
u8 __rsv1[1];
u8 str_cap[3];
u8 eml_cap[3];
u8 __rsv2[3];
} __packed;
struct bss_ifs_time_tlv {
__le16 tag;
__le16 len;
@ -456,6 +470,21 @@ struct bss_ifs_time_tlv {
__le16 eifs_cck_time;
} __packed;
struct bss_rlm_tlv {
__le16 tag;
__le16 len;
u8 control_channel;
u8 center_chan;
u8 center_chan2;
u8 bw;
u8 tx_streams;
u8 rx_streams;
u8 ht_op_info;
u8 sco;
u8 band;
u8 pad[3];
} __packed;
#define MT7925_STA_UPDATE_MAX_SIZE (sizeof(struct sta_req_hdr) + \
sizeof(struct sta_rec_basic) + \
sizeof(struct sta_rec_bf) + \
@ -474,7 +503,8 @@ struct bss_ifs_time_tlv {
sizeof(struct sta_rec_eht) + \
sizeof(struct sta_rec_hdr_trans) + \
sizeof(struct sta_rec_mld) + \
sizeof(struct tlv))
sizeof(struct tlv) * 2 + \
sizeof(struct sta_rec_remove))
#define MT7925_BSS_UPDATE_MAX_SIZE (sizeof(struct bss_req_hdr) + \
sizeof(struct mt76_connac_bss_basic_tlv) + \
@ -484,6 +514,7 @@ struct bss_ifs_time_tlv {
sizeof(struct bss_info_uni_he) + \
sizeof(struct bss_info_uni_bss_color) + \
sizeof(struct bss_ifs_time_tlv) + \
sizeof(struct bss_rlm_tlv) + \
sizeof(struct tlv))
#define MT_CONNAC3_SKU_POWER_LIMIT 449
@ -538,6 +569,26 @@ struct mt7925_wow_pattern_tlv {
u8 rsv[7];
} __packed;
struct roc_acquire_tlv {
__le16 tag;
__le16 len;
u8 bss_idx;
u8 tokenid;
u8 control_channel;
u8 sco;
u8 band;
u8 bw;
u8 center_chan;
u8 center_chan2;
u8 bw_from_ap;
u8 center_chan_from_ap;
u8 center_chan2_from_ap;
u8 reqtype;
__le32 maxinterval;
u8 dbdcband;
u8 rsv[3];
} __packed;
static inline enum connac3_mcu_cipher_type
mt7925_mcu_get_cipher(int cipher)
{
@ -578,18 +629,18 @@ int mt7925_mcu_sched_scan_enable(struct mt76_phy *phy,
bool enable);
int mt7925_mcu_add_bss_info(struct mt792x_phy *phy,
struct ieee80211_chanctx_conf *ctx,
struct ieee80211_vif *vif,
struct ieee80211_sta *sta,
struct ieee80211_bss_conf *link_conf,
struct ieee80211_link_sta *link_sta,
int enable);
int mt7925_mcu_set_timing(struct mt792x_phy *phy,
struct ieee80211_vif *vif);
struct ieee80211_bss_conf *link_conf);
int mt7925_mcu_set_deep_sleep(struct mt792x_dev *dev, bool enable);
int mt7925_mcu_set_channel_domain(struct mt76_phy *phy);
int mt7925_mcu_set_radio_en(struct mt792x_phy *phy, bool enable);
int mt7925_mcu_set_chctx(struct mt76_phy *phy, struct mt76_vif *mvif,
struct ieee80211_bss_conf *link_conf,
struct ieee80211_chanctx_conf *ctx);
int mt7925_mcu_set_rate_txpower(struct mt76_phy *phy);
int mt7925_mcu_update_arp_filter(struct mt76_dev *dev,
struct mt76_vif *vif,
struct ieee80211_bss_conf *info);
struct ieee80211_bss_conf *link_conf);
#endif


@ -30,17 +30,22 @@
enum {
UNI_ROC_ACQUIRE,
UNI_ROC_ABORT,
UNI_ROC_SUB_LINK = 3,
UNI_ROC_NUM
};
enum mt7925_roc_req {
MT7925_ROC_REQ_JOIN,
MT7925_ROC_REQ_ROC,
MT7925_ROC_REQ_SUB_LINK,
MT7925_ROC_REQ_MLSR_AG = 10,
MT7925_ROC_REQ_MLSR_AA,
MT7925_ROC_REQ_NUM
};
enum {
UNI_EVENT_ROC_GRANT = 0,
UNI_EVENT_ROC_GRANT_SUB_LINK = 4,
UNI_EVENT_ROC_TAG_NUM
};
@ -192,13 +197,15 @@ int __mt7925_start(struct mt792x_phy *phy);
int mt7925_register_device(struct mt792x_dev *dev);
void mt7925_unregister_device(struct mt792x_dev *dev);
int mt7925_run_firmware(struct mt792x_dev *dev);
int mt7925_mcu_set_bss_pm(struct mt792x_dev *dev, struct ieee80211_vif *vif,
int mt7925_mcu_set_bss_pm(struct mt792x_dev *dev,
struct ieee80211_bss_conf *link_conf,
bool enable);
int mt7925_mcu_sta_update(struct mt792x_dev *dev, struct ieee80211_sta *sta,
int mt7925_mcu_sta_update(struct mt792x_dev *dev,
struct ieee80211_link_sta *link_sta,
struct ieee80211_vif *vif, bool enable,
enum mt76_sta_info_state state);
int mt7925_mcu_set_chan_info(struct mt792x_phy *phy, u16 tag);
int mt7925_mcu_set_tx(struct mt792x_dev *dev, struct ieee80211_vif *vif);
int mt7925_mcu_set_tx(struct mt792x_dev *dev, struct ieee80211_bss_conf *bss_conf);
int mt7925_mcu_set_eeprom(struct mt792x_dev *dev);
int mt7925_mcu_get_rx_rate(struct mt792x_phy *phy, struct ieee80211_vif *vif,
struct ieee80211_sta *sta, struct rate_info *rate);
@ -228,6 +235,7 @@ void mt7925_queue_rx_skb(struct mt76_dev *mdev, enum mt76_rxq_id q,
struct sk_buff *skb, u32 *info);
void mt7925_stats_work(struct work_struct *work);
void mt7925_set_stream_he_eht_caps(struct mt792x_phy *phy);
int mt7925_init_mlo_caps(struct mt792x_phy *phy);
int mt7925_init_debugfs(struct mt792x_dev *dev);
int mt7925_mcu_set_beacon_filter(struct mt792x_dev *dev,
@ -241,7 +249,8 @@ int mt7925_mcu_uni_rx_ba(struct mt792x_dev *dev,
bool enable);
void mt7925_scan_work(struct work_struct *work);
void mt7925_roc_work(struct work_struct *work);
int mt7925_mcu_uni_bss_ps(struct mt792x_dev *dev, struct ieee80211_vif *vif);
int mt7925_mcu_uni_bss_ps(struct mt792x_dev *dev,
struct ieee80211_bss_conf *link_conf);
void mt7925_coredump_work(struct work_struct *work);
int mt7925_get_txpwr_info(struct mt792x_dev *dev, u8 band_idx,
struct mt7925_txpwr *txpwr);
@ -252,7 +261,7 @@ void mt7925_mac_write_txwi(struct mt76_dev *dev, __le32 *txwi,
struct ieee80211_key_conf *key, int pid,
enum mt76_txq_id qid, u32 changed);
void mt7925_txwi_free(struct mt792x_dev *dev, struct mt76_txwi_cache *t,
struct ieee80211_sta *sta, bool clear_status,
struct ieee80211_sta *sta, struct mt76_wcid *wcid,
struct list_head *free_list);
int mt7925_mcu_parse_response(struct mt76_dev *mdev, int cmd,
struct sk_buff *skb, int seq);
@ -291,20 +300,24 @@ int mt7925_set_tx_sar_pwr(struct ieee80211_hw *hw,
int mt7925_mcu_regval(struct mt792x_dev *dev, u32 regidx, u32 *val, bool set);
int mt7925_mcu_set_clc(struct mt792x_dev *dev, u8 *alpha2,
enum environment_cap env_cap);
int mt7925_mcu_set_roc(struct mt792x_phy *phy, struct mt792x_vif *vif,
int mt7925_mcu_set_mlo_roc(struct mt792x_bss_conf *mconf, u16 sel_links,
int duration, u8 token_id);
int mt7925_mcu_set_roc(struct mt792x_phy *phy, struct mt792x_bss_conf *mconf,
struct ieee80211_channel *chan, int duration,
enum mt7925_roc_req type, u8 token_id);
int mt7925_mcu_abort_roc(struct mt792x_phy *phy, struct mt792x_vif *vif,
int mt7925_mcu_abort_roc(struct mt792x_phy *phy, struct mt792x_bss_conf *mconf,
u8 token_id);
int mt7925_mcu_fill_message(struct mt76_dev *mdev, struct sk_buff *skb,
int cmd, int *wait_seq);
int mt7925_mcu_add_key(struct mt76_dev *dev, struct ieee80211_vif *vif,
struct mt76_connac_sta_key_conf *sta_key_conf,
struct ieee80211_key_conf *key, int mcu_cmd,
struct mt76_wcid *wcid, enum set_key_cmd cmd);
struct mt76_wcid *wcid, enum set_key_cmd cmd,
struct mt792x_sta *msta);
int mt7925_mcu_set_rts_thresh(struct mt792x_phy *phy, u32 val);
int mt7925_mcu_wtbl_update_hdr_trans(struct mt792x_dev *dev,
struct ieee80211_vif *vif,
struct ieee80211_sta *sta);
struct ieee80211_sta *sta,
int link_id);
#endif


@ -373,6 +373,9 @@ static int mt7925_pci_probe(struct pci_dev *pdev,
bus_ops->rmw = mt7925_rmw;
dev->mt76.bus = bus_ops;
if (!mt7925_disable_aspm && mt76_pci_aspm_supported(pdev))
dev->aspm_supported = true;
ret = __mt792x_mcu_fw_pmctrl(dev);
if (ret)
goto err_free_dev;


@ -34,9 +34,9 @@ int mt7925e_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
if (sta) {
struct mt792x_sta *msta = (struct mt792x_sta *)sta->drv_priv;
if (time_after(jiffies, msta->last_txs + HZ / 4)) {
if (time_after(jiffies, msta->deflink.last_txs + HZ / 4)) {
info->flags |= IEEE80211_TX_CTL_REQ_TX_STATUS;
msta->last_txs = jiffies;
msta->deflink.last_txs = jiffies;
}
}
@ -60,7 +60,7 @@ void mt7925_tx_token_put(struct mt792x_dev *dev)
spin_lock_bh(&dev->mt76.token_lock);
idr_for_each_entry(&dev->mt76.token, txwi, id) {
mt7925_txwi_free(dev, txwi, NULL, false, NULL);
mt7925_txwi_free(dev, txwi, NULL, NULL, NULL);
dev->mt76.token_count--;
}
spin_unlock_bh(&dev->mt76.token_lock);


@ -27,6 +27,7 @@
#define MT792x_CHIP_CAP_CLC_EVT_EN BIT(0)
#define MT792x_CHIP_CAP_RSSI_NOTIFY_EVT_EN BIT(1)
#define MT792x_CHIP_CAP_MLO_EVT_EN BIT(2)
/* NOTE: used to map mt76_rates. idx may change if firmware expands table */
#define MT792x_BASIC_RATES_TBL 11
@ -81,11 +82,9 @@ enum mt792x_reg_power_type {
DECLARE_EWMA(avg_signal, 10, 8)
struct mt792x_sta {
struct mt792x_link_sta {
struct mt76_wcid wcid; /* must be first */
struct mt792x_vif *vif;
u32 airtime_ac[8];
int ack_signal;
@ -94,21 +93,46 @@ struct mt792x_sta {
unsigned long last_txs;
struct mt76_connac_sta_key_conf bip;
struct mt792x_sta *sta;
struct ieee80211_link_sta *pri_link;
};
struct mt792x_sta {
struct mt792x_link_sta deflink; /* must be first */
struct mt792x_link_sta __rcu *link[IEEE80211_MLD_MAX_NUM_LINKS];
struct mt792x_vif *vif;
u16 valid_links;
u8 deflink_id;
};
DECLARE_EWMA(rssi, 10, 8);
struct mt792x_vif {
struct mt792x_chanctx {
struct mt792x_bss_conf *bss_conf;
};
struct mt792x_bss_conf {
struct mt76_vif mt76; /* must be first */
struct mt792x_vif *vif;
struct ewma_rssi rssi;
struct ieee80211_tx_queue_params queue_params[IEEE80211_NUM_ACS];
unsigned int link_id;
};
struct mt792x_vif {
struct mt792x_bss_conf bss_conf; /* must be first */
struct mt792x_bss_conf __rcu *link_conf[IEEE80211_MLD_MAX_NUM_LINKS];
struct mt792x_sta sta;
struct mt792x_sta *wep_sta;
struct mt792x_phy *phy;
struct ewma_rssi rssi;
struct ieee80211_tx_queue_params queue_params[IEEE80211_NUM_ACS];
u16 valid_links;
u8 deflink_id;
};
struct mt792x_phy {
@ -140,6 +164,7 @@ struct mt792x_phy {
#endif
void *clc[MT792x_CLC_MAX_NUM];
u64 chip_cap;
u16 eml_cap;
struct work_struct roc_work;
struct timer_list roc_timer;
@ -190,6 +215,7 @@ struct mt792x_dev {
bool fw_assert:1;
bool has_eht:1;
bool regd_in_progress:1;
bool aspm_supported:1;
wait_queue_head_t wait;
struct work_struct init_work;
@ -211,6 +237,66 @@ struct mt792x_dev {
u32 backup_l2;
};
static inline struct mt792x_bss_conf *
mt792x_vif_to_link(struct mt792x_vif *mvif, u8 link_id)
{
struct ieee80211_vif *vif;
vif = container_of((void *)mvif, struct ieee80211_vif, drv_priv);
if (!ieee80211_vif_is_mld(vif) ||
link_id >= IEEE80211_LINK_UNSPECIFIED)
return &mvif->bss_conf;
return rcu_dereference_protected(mvif->link_conf[link_id],
lockdep_is_held(&mvif->phy->dev->mt76.mutex));
}
static inline struct mt792x_link_sta *
mt792x_sta_to_link(struct mt792x_sta *msta, u8 link_id)
{
struct ieee80211_vif *vif;
vif = container_of((void *)msta->vif, struct ieee80211_vif, drv_priv);
if (!ieee80211_vif_is_mld(vif) ||
link_id >= IEEE80211_LINK_UNSPECIFIED)
return &msta->deflink;
return rcu_dereference_protected(msta->link[link_id],
lockdep_is_held(&msta->vif->phy->dev->mt76.mutex));
}
static inline struct mt792x_bss_conf *
mt792x_link_conf_to_mconf(struct ieee80211_bss_conf *link_conf)
{
struct ieee80211_vif *vif = link_conf->vif;
struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
return mt792x_vif_to_link(mvif, link_conf->link_id);
}
static inline struct ieee80211_bss_conf *
mt792x_vif_to_bss_conf(struct ieee80211_vif *vif, unsigned int link_id)
{
if (!ieee80211_vif_is_mld(vif) ||
link_id >= IEEE80211_LINK_UNSPECIFIED)
return &vif->bss_conf;
return link_conf_dereference_protected(vif, link_id);
}
static inline struct ieee80211_link_sta *
mt792x_sta_to_link_sta(struct ieee80211_vif *vif, struct ieee80211_sta *sta,
unsigned int link_id)
{
if (!ieee80211_vif_is_mld(vif) ||
link_id >= IEEE80211_LINK_UNSPECIFIED)
return &sta->deflink;
return link_sta_dereference_protected(sta, link_id);
}
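
These helpers, plus the invariants that wcid is the first member of struct mt792x_link_sta and deflink the first member of struct mt792x_sta, are what make the container_of() conversions in the earlier mac.c hunks valid: a bare wcid now resolves to its owning link, and the default link resolves to the station. A self-contained sketch of that two-step walk, with trimmed stand-in structs and a local container_of():

#include <assert.h>
#include <stddef.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct wcid { int idx; };
struct link_sta { struct wcid wcid; };   /* must be first */
struct sta { struct link_sta deflink; }; /* must be first */

int main(void)
{
	struct sta s = { .deflink.wcid.idx = 7 };
	struct wcid *w = &s.deflink.wcid;

	/* Step 1: recover the owning link from the wcid... */
	struct link_sta *mlink = container_of(w, struct link_sta, wcid);
	/* Step 2: ...then the station from its default link. */
	struct sta *msta = container_of(mlink, struct sta, deflink);

	assert(msta == &s && mlink->wcid.idx == 7);
	return 0;
}
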
static inline struct mt792x_dev *
mt792x_hw_dev(struct ieee80211_hw *hw)
{
@ -325,6 +411,9 @@ mt792x_get_mac80211_ops(struct device *dev,
int mt792x_init_wcid(struct mt792x_dev *dev);
int mt792x_mcu_drv_pmctrl(struct mt792x_dev *dev);
int mt792x_mcu_fw_pmctrl(struct mt792x_dev *dev);
void mt792x_mac_link_bss_remove(struct mt792x_dev *dev,
struct mt792x_bss_conf *mconf,
struct mt792x_link_sta *mlink);
static inline char *mt792x_ram_name(struct mt792x_dev *dev)
{


@ -59,20 +59,42 @@ void mt792x_tx(struct ieee80211_hw *hw, struct ieee80211_tx_control *control,
struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
struct ieee80211_vif *vif = info->control.vif;
struct mt76_wcid *wcid = &dev->mt76.global_wcid;
u8 link_id;
int qid;
if (control->sta) {
struct mt792x_link_sta *mlink;
struct mt792x_sta *sta;
link_id = u32_get_bits(info->control.flags,
IEEE80211_TX_CTRL_MLO_LINK);
sta = (struct mt792x_sta *)control->sta->drv_priv;
wcid = &sta->wcid;
mlink = mt792x_sta_to_link(sta, link_id);
wcid = &mlink->wcid;
}
if (vif && !control->sta) {
struct mt792x_vif *mvif;
mvif = (struct mt792x_vif *)vif->drv_priv;
wcid = &mvif->sta.wcid;
wcid = &mvif->sta.deflink.wcid;
}
if (vif && control->sta && ieee80211_vif_is_mld(vif)) {
struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
struct ieee80211_link_sta *link_sta;
struct ieee80211_bss_conf *conf;
link_id = wcid->link_id;
rcu_read_lock();
conf = rcu_dereference(vif->link_conf[link_id]);
memcpy(hdr->addr2, conf->addr, ETH_ALEN);
link_sta = rcu_dereference(control->sta->link[link_id]);
memcpy(hdr->addr1, link_sta->addr, ETH_ALEN);
if (vif->type == NL80211_IFTYPE_STATION)
memcpy(hdr->addr3, conf->bssid, ETH_ALEN);
rcu_read_unlock();
}
if (mt76_connac_pm_ref(mphy, &dev->pm)) {
@ -113,31 +135,47 @@ void mt792x_stop(struct ieee80211_hw *hw, bool suspend)
}
EXPORT_SYMBOL_GPL(mt792x_stop);
void mt792x_mac_link_bss_remove(struct mt792x_dev *dev,
struct mt792x_bss_conf *mconf,
struct mt792x_link_sta *mlink)
{
struct ieee80211_vif *vif = container_of((void *)mconf->vif,
struct ieee80211_vif, drv_priv);
struct ieee80211_bss_conf *link_conf;
int idx = mlink->wcid.idx;
link_conf = mt792x_vif_to_bss_conf(vif, mconf->link_id);
mt76_connac_free_pending_tx_skbs(&dev->pm, &mlink->wcid);
mt76_connac_mcu_uni_add_dev(&dev->mphy, link_conf, &mlink->wcid, false);
rcu_assign_pointer(dev->mt76.wcid[idx], NULL);
dev->mt76.vif_mask &= ~BIT_ULL(mconf->mt76.idx);
mconf->vif->phy->omac_mask &= ~BIT_ULL(mconf->mt76.omac_idx);
spin_lock_bh(&dev->mt76.sta_poll_lock);
if (!list_empty(&mlink->wcid.poll_list))
list_del_init(&mlink->wcid.poll_list);
spin_unlock_bh(&dev->mt76.sta_poll_lock);
mt76_wcid_cleanup(&dev->mt76, &mlink->wcid);
}
EXPORT_SYMBOL_GPL(mt792x_mac_link_bss_remove);
void mt792x_remove_interface(struct ieee80211_hw *hw,
struct ieee80211_vif *vif)
{
struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
struct mt792x_sta *msta = &mvif->sta;
struct mt792x_dev *dev = mt792x_hw_dev(hw);
struct mt792x_phy *phy = mt792x_hw_phy(hw);
int idx = msta->wcid.idx;
struct mt792x_bss_conf *mconf;
mt792x_mutex_acquire(dev);
mt76_connac_free_pending_tx_skbs(&dev->pm, &msta->wcid);
mt76_connac_mcu_uni_add_dev(&dev->mphy, vif, &mvif->sta.wcid, false);
rcu_assign_pointer(dev->mt76.wcid[idx], NULL);
mconf = mt792x_link_conf_to_mconf(&vif->bss_conf);
mt792x_mac_link_bss_remove(dev, mconf, &mvif->sta.deflink);
dev->mt76.vif_mask &= ~BIT_ULL(mvif->mt76.idx);
phy->omac_mask &= ~BIT_ULL(mvif->mt76.omac_idx);
mt792x_mutex_release(dev);
spin_lock_bh(&dev->mt76.sta_poll_lock);
if (!list_empty(&msta->wcid.poll_list))
list_del_init(&msta->wcid.poll_list);
spin_unlock_bh(&dev->mt76.sta_poll_lock);
mt76_wcid_cleanup(&dev->mt76, &msta->wcid);
}
EXPORT_SYMBOL_GPL(mt792x_remove_interface);
@ -149,7 +187,7 @@ int mt792x_conf_tx(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
/* no need to update right away, we'll get BSS_CHANGED_QOS */
queue = mt76_connac_lmac_mapping(queue);
mvif->queue_params[queue] = *params;
mvif->bss_conf.queue_params[queue] = *params;
return 0;
}
@ -178,7 +216,7 @@ u64 mt792x_get_tsf(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
{
struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
struct mt792x_dev *dev = mt792x_hw_dev(hw);
u8 omac_idx = mvif->mt76.omac_idx;
u8 omac_idx = mvif->bss_conf.mt76.omac_idx;
union {
u64 t64;
u32 t32[2];
@ -204,7 +242,7 @@ void mt792x_set_tsf(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
{
struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
struct mt792x_dev *dev = mt792x_hw_dev(hw);
u8 omac_idx = mvif->mt76.omac_idx;
u8 omac_idx = mvif->bss_conf.mt76.omac_idx;
union {
u64 t64;
u32 t32[2];
@ -261,11 +299,13 @@ int mt792x_assign_vif_chanctx(struct ieee80211_hw *hw,
struct ieee80211_bss_conf *link_conf,
struct ieee80211_chanctx_conf *ctx)
{
struct mt792x_chanctx *mctx = (struct mt792x_chanctx *)ctx->drv_priv;
struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
struct mt792x_dev *dev = mt792x_hw_dev(hw);
mutex_lock(&dev->mt76.mutex);
mvif->mt76.ctx = ctx;
mvif->bss_conf.mt76.ctx = ctx;
mctx->bss_conf = &mvif->bss_conf;
mutex_unlock(&dev->mt76.mutex);
return 0;
@ -277,11 +317,13 @@ void mt792x_unassign_vif_chanctx(struct ieee80211_hw *hw,
struct ieee80211_bss_conf *link_conf,
struct ieee80211_chanctx_conf *ctx)
{
struct mt792x_chanctx *mctx = (struct mt792x_chanctx *)ctx->drv_priv;
struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
struct mt792x_dev *dev = mt792x_hw_dev(hw);
mutex_lock(&dev->mt76.mutex);
mvif->mt76.ctx = NULL;
mctx->bss_conf = NULL;
mvif->bss_conf.mt76.ctx = NULL;
mutex_unlock(&dev->mt76.mutex);
}
EXPORT_SYMBOL_GPL(mt792x_unassign_vif_chanctx);
@ -405,10 +447,10 @@ mt792x_ethtool_worker(void *wi_data, struct ieee80211_sta *sta)
struct mt792x_sta *msta = (struct mt792x_sta *)sta->drv_priv;
struct mt76_ethtool_worker_info *wi = wi_data;
if (msta->vif->mt76.idx != wi->idx)
if (msta->vif->bss_conf.mt76.idx != wi->idx)
return;
mt76_ethtool_worker(wi, &msta->wcid.stats, true);
mt76_ethtool_worker(wi, &msta->deflink.wcid.stats, true);
}
void mt792x_get_et_stats(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
@ -421,7 +463,7 @@ void mt792x_get_et_stats(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
struct mt76_mib_stats *mib = &phy->mib;
struct mt76_ethtool_worker_info wi = {
.data = data,
.idx = mvif->mt76.idx,
.idx = mvif->bss_conf.mt76.idx,
};
int i, ei = 0;
@ -487,7 +529,7 @@ void mt792x_sta_statistics(struct ieee80211_hw *hw,
struct station_info *sinfo)
{
struct mt792x_sta *msta = (struct mt792x_sta *)sta->drv_priv;
struct rate_info *txrate = &msta->wcid.rate;
struct rate_info *txrate = &msta->deflink.wcid.rate;
if (!txrate->legacy && !txrate->flags)
return;
@ -502,19 +544,19 @@ void mt792x_sta_statistics(struct ieee80211_hw *hw,
sinfo->txrate.he_dcm = txrate->he_dcm;
sinfo->txrate.he_ru_alloc = txrate->he_ru_alloc;
}
sinfo->tx_failed = msta->wcid.stats.tx_failed;
sinfo->tx_failed = msta->deflink.wcid.stats.tx_failed;
sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_FAILED);
sinfo->tx_retries = msta->wcid.stats.tx_retries;
sinfo->tx_retries = msta->deflink.wcid.stats.tx_retries;
sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_RETRIES);
sinfo->txrate.flags = txrate->flags;
sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_BITRATE);
sinfo->ack_signal = (s8)msta->ack_signal;
sinfo->ack_signal = (s8)msta->deflink.ack_signal;
sinfo->filled |= BIT_ULL(NL80211_STA_INFO_ACK_SIGNAL);
sinfo->avg_ack_signal = -(s8)ewma_avg_signal_read(&msta->avg_ack_signal);
sinfo->avg_ack_signal = -(s8)ewma_avg_signal_read(&msta->deflink.avg_ack_signal);
sinfo->filled |= BIT_ULL(NL80211_STA_INFO_ACK_SIGNAL_AVG);
}
EXPORT_SYMBOL_GPL(mt792x_sta_statistics);
@ -556,6 +598,7 @@ int mt792x_init_wiphy(struct ieee80211_hw *hw)
hw->sta_data_size = sizeof(struct mt792x_sta);
hw->vif_data_size = sizeof(struct mt792x_vif);
hw->chanctx_data_size = sizeof(struct mt792x_chanctx);
if (dev->fw_features & MT792x_FW_CAP_CNM) {
wiphy->flags |= WIPHY_FLAG_HAS_REMAIN_ON_CHANNEL;
@ -766,6 +809,10 @@ int __mt792xe_mcu_drv_pmctrl(struct mt792x_dev *dev)
for (i = 0; i < MT792x_DRV_OWN_RETRY_COUNT; i++) {
mt76_wr(dev, MT_CONN_ON_LPCTL, PCIE_LPCR_HOST_CLR_OWN);
if (dev->aspm_supported)
usleep_range(2000, 3000);
if (mt76_poll_msec_tick(dev, MT_CONN_ON_LPCTL,
PCIE_LPCR_HOST_OWN_SYNC, 0, 50, 1))
break;


@ -138,6 +138,7 @@ EXPORT_SYMBOL_GPL(mt792x_mac_update_mib_stats);
struct mt76_wcid *mt792x_rx_get_wcid(struct mt792x_dev *dev, u16 idx,
bool unicast)
{
struct mt792x_link_sta *link;
struct mt792x_sta *sta;
struct mt76_wcid *wcid;
@ -151,11 +152,12 @@ struct mt76_wcid *mt792x_rx_get_wcid(struct mt792x_dev *dev, u16 idx,
if (!wcid->sta)
return NULL;
sta = container_of(wcid, struct mt792x_sta, wcid);
link = container_of(wcid, struct mt792x_link_sta, wcid);
sta = container_of(link, struct mt792x_sta, deflink);
if (!sta->vif)
return NULL;
return &sta->vif->sta.wcid;
return &sta->vif->sta.deflink.wcid;
}
EXPORT_SYMBOL_GPL(mt792x_rx_get_wcid);
@ -173,7 +175,7 @@ mt792x_mac_rssi_iter(void *priv, u8 *mac, struct ieee80211_vif *vif)
if (!ether_addr_equal(vif->addr, hdr->addr1))
return;
ewma_rssi_add(&mvif->rssi, -status->signal);
ewma_rssi_add(&mvif->bss_conf.rssi, -status->signal);
}
void mt792x_mac_assoc_rssi(struct mt792x_dev *dev, struct sk_buff *skb)


@ -2002,7 +2002,7 @@ mt7996_mcu_sta_rate_ctrl_tlv(struct sk_buff *skb, struct mt7996_dev *dev,
ra->valid = true;
ra->auto_rate = true;
ra->phy_mode = mt76_connac_get_phy_mode(mphy, vif, band, sta);
ra->phy_mode = mt76_connac_get_phy_mode(mphy, vif, band, &sta->deflink);
ra->channel = chandef->chan->hw_value;
ra->bw = (sta->deflink.bandwidth == IEEE80211_STA_RX_BW_320) ?
CMD_CBW_320MHZ : sta->deflink.bandwidth;
@ -2157,11 +2157,13 @@ int mt7996_mcu_add_sta(struct mt7996_dev *dev, struct ieee80211_vif *vif,
struct ieee80211_sta *sta, bool enable, bool newly)
{
struct mt7996_vif *mvif = (struct mt7996_vif *)vif->drv_priv;
struct ieee80211_link_sta *link_sta;
struct mt7996_sta *msta;
struct sk_buff *skb;
int ret;
msta = sta ? (struct mt7996_sta *)sta->drv_priv : &mvif->sta;
link_sta = sta ? &sta->deflink : NULL;
skb = __mt76_connac_mcu_alloc_sta_req(&dev->mt76, &mvif->mt76,
&msta->wcid,
@ -2170,7 +2172,8 @@ int mt7996_mcu_add_sta(struct mt7996_dev *dev, struct ieee80211_vif *vif,
return PTR_ERR(skb);
/* starec basic */
mt76_connac_mcu_sta_basic_tlv(&dev->mt76, skb, vif, sta, enable, newly);
mt76_connac_mcu_sta_basic_tlv(&dev->mt76, skb, vif, link_sta,
enable, newly);
if (!enable)
goto out;


@ -45,3 +45,26 @@ void mt76_pci_disable_aspm(struct pci_dev *pdev)
aspm_conf);
}
EXPORT_SYMBOL_GPL(mt76_pci_disable_aspm);
bool mt76_pci_aspm_supported(struct pci_dev *pdev)
{
struct pci_dev *parent = pdev->bus->self;
u16 aspm_conf, parent_aspm_conf = 0;
bool result = true;
pcie_capability_read_word(pdev, PCI_EXP_LNKCTL, &aspm_conf);
aspm_conf &= PCI_EXP_LNKCTL_ASPMC;
if (parent) {
pcie_capability_read_word(parent, PCI_EXP_LNKCTL,
&parent_aspm_conf);
parent_aspm_conf &= PCI_EXP_LNKCTL_ASPMC;
}
if (!aspm_conf && (!parent || !parent_aspm_conf)) {
/* aspm already disabled */
result = false;
}
return result;
}
EXPORT_SYMBOL_GPL(mt76_pci_aspm_supported);
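
Put differently, the helper treats ASPM as potentially active unless both the endpoint and its upstream bridge have the LNKCTL ASPM-control field cleared; the mt7921/mt7925 probe hunks above use it to decide whether the extra wakeup delay in the drv_pmctrl path is needed. A condensed restatement of that decision, for illustration only:

#include <assert.h>
#include <stdbool.h>

static bool aspm_possibly_active(unsigned int ep_aspmc, bool has_parent,
				 unsigned int parent_aspmc)
{
	return !(ep_aspmc == 0 && (!has_parent || parent_aspmc == 0));
}

int main(void)
{
	assert(!aspm_possibly_active(0, true, 0));  /* both disabled */
	assert(aspm_possibly_active(1, true, 0));   /* endpoint enabled */
	assert(aspm_possibly_active(0, true, 2));   /* bridge enabled */
	assert(!aspm_possibly_active(0, false, 0)); /* no parent, disabled */
	return 0;
}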


@ -697,9 +697,14 @@ static void rtl8188fu_init_statistics(struct rtl8xxxu_priv *priv)
rtl8xxxu_write32(priv, REG_OFDM0_FA_RSTC, val32);
}
#define TX_POWER_INDEX_MAX 0x3F
#define TX_POWER_INDEX_DEFAULT_CCK 0x22
#define TX_POWER_INDEX_DEFAULT_HT40 0x27
static int rtl8188fu_parse_efuse(struct rtl8xxxu_priv *priv)
{
struct rtl8188fu_efuse *efuse = &priv->efuse_wifi.efuse8188fu;
int i;
if (efuse->rtl_id != cpu_to_le16(0x8129))
return -EINVAL;
@ -713,6 +718,16 @@ static int rtl8188fu_parse_efuse(struct rtl8xxxu_priv *priv)
efuse->tx_power_index_A.ht40_base,
sizeof(efuse->tx_power_index_A.ht40_base));
for (i = 0; i < ARRAY_SIZE(priv->cck_tx_power_index_A); i++) {
if (priv->cck_tx_power_index_A[i] > TX_POWER_INDEX_MAX)
priv->cck_tx_power_index_A[i] = TX_POWER_INDEX_DEFAULT_CCK;
}
for (i = 0; i < ARRAY_SIZE(priv->ht40_1s_tx_power_index_A); i++) {
if (priv->ht40_1s_tx_power_index_A[i] > TX_POWER_INDEX_MAX)
priv->ht40_1s_tx_power_index_A[i] = TX_POWER_INDEX_DEFAULT_HT40;
}
priv->ofdm_tx_power_diff[0].a = efuse->tx_power_index_A.ht20_ofdm_1s_diff.a;
priv->ht20_tx_power_diff[0].a = efuse->tx_power_index_A.ht20_ofdm_1s_diff.b;


@ -6679,7 +6679,6 @@ static void rtl8xxxu_switch_ports(struct rtl8xxxu_priv *priv)
u8 macid[ETH_ALEN], bssid[ETH_ALEN], macid_1[ETH_ALEN], bssid_1[ETH_ALEN];
u8 msr, bcn_ctrl, bcn_ctrl_1, atimwnd[2], atimwnd_1[2];
struct rtl8xxxu_vif *rtlvif;
struct ieee80211_vif *vif;
u8 tsftr[8], tsftr_1[8];
int i;
@ -6744,10 +6743,7 @@ static void rtl8xxxu_switch_ports(struct rtl8xxxu_priv *priv)
/* write bcn ctl */
rtl8xxxu_write8(priv, REG_BEACON_CTRL, bcn_ctrl_1);
rtl8xxxu_write8(priv, REG_BEACON_CTRL_1, bcn_ctrl);
vif = priv->vifs[0];
priv->vifs[0] = priv->vifs[1];
priv->vifs[1] = vif;
swap(priv->vifs[0], priv->vifs[1]);
/* priv->vifs[0] is NULL here, based on how this function is currently
* called from rtl8xxxu_add_interface().
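
The three-statement pointer exchange is collapsed into the kernel's swap() macro (from linux/minmax.h); a local equivalent behaves the same way:

#include <assert.h>
#include <stddef.h>

#define swap(a, b) \
	do { typeof(a) __tmp = (a); (a) = (b); (b) = __tmp; } while (0)

int main(void)
{
	void *vifs[2] = { (void *)0x1, NULL };

	swap(vifs[0], vifs[1]);
	assert(vifs[0] == NULL && vifs[1] == (void *)0x1);
	return 0;
}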


@ -27,7 +27,7 @@ static void rtl88e_init_aspm_vars(struct ieee80211_hw *hw)
* 2 - Enable ASPM with Clock Req,
* 3 - Alwyas Enable ASPM with Clock Req,
* 4 - Always Enable ASPM without Clock Req.
* set defult to RTL8192CE:3 RTL8192E:2
* set default to RTL8192CE:3 RTL8192E:2
*/
rtlpci->const_pci_aspm = 3;


@ -31,7 +31,7 @@ static void rtl92c_init_aspm_vars(struct ieee80211_hw *hw)
* 2 - Enable ASPM with Clock Req,
* 3 - Alwyas Enable ASPM with Clock Req,
* 4 - Always Enable ASPM without Clock Req.
* set defult to RTL8192CE:3 RTL8192E:2
* set default to RTL8192CE:3 RTL8192E:2
* */
rtlpci->const_pci_aspm = 3;


@ -32,7 +32,7 @@ static void rtl92d_init_aspm_vars(struct ieee80211_hw *hw)
* 2 - Enable ASPM with Clock Req,
* 3 - Alwyas Enable ASPM with Clock Req,
* 4 - Always Enable ASPM without Clock Req.
* set defult to RTL8192CE:3 RTL8192E:2
* set default to RTL8192CE:3 RTL8192E:2
* */
rtlpci->const_pci_aspm = 3;


@ -31,7 +31,7 @@ static void rtl92ee_init_aspm_vars(struct ieee80211_hw *hw)
* 2 - Enable ASPM with Clock Req,
* 3 - Alwyas Enable ASPM with Clock Req,
* 4 - Always Enable ASPM without Clock Req.
* set defult to RTL8192CE:3 RTL8192E:2
* set default to RTL8192CE:3 RTL8192E:2
*/
rtlpci->const_pci_aspm = 3;


@ -27,7 +27,7 @@ static void rtl92s_init_aspm_vars(struct ieee80211_hw *hw)
* 2 - Enable ASPM with Clock Req,
* 3 - Alwyas Enable ASPM with Clock Req,
* 4 - Always Enable ASPM without Clock Req.
* set defult to RTL8192CE:3 RTL8192E:2
* set default to RTL8192CE:3 RTL8192E:2
* */
rtlpci->const_pci_aspm = 2;


@ -33,7 +33,7 @@ static void rtl8723e_init_aspm_vars(struct ieee80211_hw *hw)
* 2 - Enable ASPM with Clock Req,
* 3 - Alwyas Enable ASPM with Clock Req,
* 4 - Always Enable ASPM without Clock Req.
* set defult to RTL8192CE:3 RTL8192E:2
* set default to RTL8192CE:3 RTL8192E:2
*/
rtlpci->const_pci_aspm = 3;


@ -32,7 +32,7 @@ static void rtl8723be_init_aspm_vars(struct ieee80211_hw *hw)
* 2 - Enable ASPM with Clock Req,
* 3 - Alwyas Enable ASPM with Clock Req,
* 4 - Always Enable ASPM without Clock Req.
* set defult to RTL8192CE:3 RTL8192E:2
* set default to RTL8192CE:3 RTL8192E:2
*/
rtlpci->const_pci_aspm = 3;


@ -30,7 +30,7 @@ static void rtl8821ae_init_aspm_vars(struct ieee80211_hw *hw)
* 2 - Enable ASPM with Clock Req,
* 3 - Alwyas Enable ASPM with Clock Req,
* 4 - Always Enable ASPM without Clock Req.
* set defult to RTL8192CE:3 RTL8192E:2
* set default to RTL8192CE:3 RTL8192E:2
*/
rtlpci->const_pci_aspm = 3;


@ -1201,6 +1201,15 @@ static int __priority_queue_cfg(struct rtw_dev *rtwdev,
rtw_write16(rtwdev, REG_FIFOPAGE_CTRL_2 + 2, fifo->rsvd_boundary);
rtw_write16(rtwdev, REG_BCNQ1_BDNY_V1, fifo->rsvd_boundary);
rtw_write32(rtwdev, REG_RXFF_BNDY, chip->rxff_size - C2H_PKT_BUF - 1);
if (rtwdev->hci.type == RTW_HCI_TYPE_USB) {
rtw_write8_mask(rtwdev, REG_AUTO_LLT_V1, BIT_MASK_BLK_DESC_NUM,
chip->usb_tx_agg_desc_num);
rtw_write8(rtwdev, REG_AUTO_LLT_V1 + 3, chip->usb_tx_agg_desc_num);
rtw_write8_set(rtwdev, REG_TXDMA_OFFSET_CHK + 1, BIT(1));
}
rtw_write8_set(rtwdev, REG_AUTO_LLT_V1, BIT_AUTO_INIT_LLT_V1);
if (!check_hw_ready(rtwdev, REG_AUTO_LLT_V1, BIT_AUTO_INIT_LLT_V1, 0))


@ -1197,6 +1197,8 @@ struct rtw_chip_info {
u16 fw_fifo_addr[RTW_FW_FIFO_MAX];
const struct rtw_fwcd_segs *fwcd_segs;
u8 usb_tx_agg_desc_num;
u8 default_1ss_tx_path;
bool path_div_supported;


@ -270,6 +270,7 @@
#define BIT_MASK_BCN_HEAD_1_V1 0xfff
#define REG_AUTO_LLT_V1 0x0208
#define BIT_AUTO_INIT_LLT_V1 BIT(0)
#define BIT_MASK_BLK_DESC_NUM GENMASK(7, 4)
#define REG_DWBCN0_CTRL 0x0208
#define BIT_BCN_VALID BIT(16)
#define REG_TXDMA_OFFSET_CHK 0x020C


@ -2013,6 +2013,7 @@ const struct rtw_chip_info rtw8703b_hw_spec = {
.tx_stbc = false,
.max_power_index = 0x3f,
.ampdu_density = IEEE80211_HT_MPDU_DENSITY_16,
.usb_tx_agg_desc_num = 1, /* Not sure if this chip has USB interface */
.path_div_supported = false,
.ht_supported = true,


@ -2171,6 +2171,7 @@ const struct rtw_chip_info rtw8723d_hw_spec = {
.band = RTW_BAND_2G,
.page_size = TX_PAGE_SIZE,
.dig_min = 0x20,
.usb_tx_agg_desc_num = 1,
.ht_supported = true,
.vht_supported = false,
.lps_deep_mode_supported = 0,


@ -2008,6 +2008,7 @@ const struct rtw_chip_info rtw8821c_hw_spec = {
.band = RTW_BAND_2G | RTW_BAND_5G,
.page_size = TX_PAGE_SIZE,
.dig_min = 0x1c,
.usb_tx_agg_desc_num = 3,
.ht_supported = true,
.vht_supported = true,
.lps_deep_mode_supported = BIT(LPS_DEEP_MODE_LCLK),


@ -2548,6 +2548,7 @@ const struct rtw_chip_info rtw8822b_hw_spec = {
.band = RTW_BAND_2G | RTW_BAND_5G,
.page_size = TX_PAGE_SIZE,
.dig_min = 0x1c,
.usb_tx_agg_desc_num = 3,
.ht_supported = true,
.vht_supported = true,
.lps_deep_mode_supported = BIT(LPS_DEEP_MODE_LCLK),


@ -5366,6 +5366,7 @@ const struct rtw_chip_info rtw8822c_hw_spec = {
.band = RTW_BAND_2G | RTW_BAND_5G,
.page_size = TX_PAGE_SIZE,
.dig_min = 0x20,
.usb_tx_agg_desc_num = 3,
.default_1ss_tx_path = BB_PATH_A,
.path_div_supported = true,
.ht_supported = true,


@ -379,7 +379,9 @@ static bool rtw_usb_tx_agg_skb(struct rtw_usb *rtwusb, struct sk_buff_head *list
skb_iter = skb_peek(list);
if (skb_iter && skb_iter->len + skb_head->len <= RTW_USB_MAX_XMITBUF_SZ)
if (skb_iter &&
skb_iter->len + skb_head->len <= RTW_USB_MAX_XMITBUF_SZ &&
agg_num < rtwdev->chip->usb_tx_agg_desc_num)
__skb_unlink(skb_iter, list);
else
skb_iter = NULL;
@ -740,7 +742,6 @@ static struct rtw_hci_ops rtw_usb_ops = {
static int rtw_usb_init_rx(struct rtw_dev *rtwdev)
{
struct rtw_usb *rtwusb = rtw_get_usb_priv(rtwdev);
int i;
rtwusb->rxwq = create_singlethread_workqueue("rtw88_usb: rx wq");
if (!rtwusb->rxwq) {
@ -752,13 +753,19 @@ static int rtw_usb_init_rx(struct rtw_dev *rtwdev)
INIT_WORK(&rtwusb->rx_work, rtw_usb_rx_handler);
return 0;
}
static void rtw_usb_setup_rx(struct rtw_dev *rtwdev)
{
struct rtw_usb *rtwusb = rtw_get_usb_priv(rtwdev);
int i;
for (i = 0; i < RTW_USB_RXCB_NUM; i++) {
struct rx_usb_ctrl_block *rxcb = &rtwusb->rx_cb[i];
rtw_usb_rx_resubmit(rtwusb, rxcb);
}
return 0;
}
static void rtw_usb_deinit_rx(struct rtw_dev *rtwdev)
@ -895,6 +902,8 @@ int rtw_usb_probe(struct usb_interface *intf, const struct usb_device_id *id)
goto err_destroy_rxwq;
}
rtw_usb_setup_rx(rtwdev);
return 0;
err_destroy_rxwq:


@ -22,6 +22,9 @@ config RTW89_8851B
config RTW89_8852A
tristate
config RTW89_8852B_COMMON
tristate
config RTW89_8852B
tristate
@ -59,6 +62,7 @@ config RTW89_8852BE
select RTW89_CORE
select RTW89_PCI
select RTW89_8852B
select RTW89_8852B_COMMON
help
Select this option will enable support for 8852BE chipset


@ -17,7 +17,8 @@ rtw89_core-y += core.o \
ps.o \
chan.o \
ser.o \
acpi.o
acpi.o \
util.o
rtw89_core-$(CONFIG_PM) += wow.o
@ -39,6 +40,9 @@ rtw89_8852a-objs := rtw8852a.o \
obj-$(CONFIG_RTW89_8852AE) += rtw89_8852ae.o
rtw89_8852ae-objs := rtw8852ae.o
obj-$(CONFIG_RTW89_8852B_COMMON) += rtw89_8852b_common.o
rtw89_8852b_common-objs := rtw8852b_common.o
obj-$(CONFIG_RTW89_8852B) += rtw89_8852b.o
rtw89_8852b-objs := rtw8852b.o \
rtw8852b_table.o \


@ -523,6 +523,7 @@ static u8 rtw89_get_addr_cam_entry_size(struct rtw89_dev *rtwdev)
case RTL8852A:
case RTL8852B:
case RTL8851B:
case RTL8852BT:
return ADDR_CAM_ENT_SIZE;
default:
return ADDR_CAM_ENT_SHORT_SIZE;


@ -141,6 +141,28 @@ bool rtw89_assign_entity_chan(struct rtw89_dev *rtwdev,
return band_changed;
}
int rtw89_iterate_entity_chan(struct rtw89_dev *rtwdev,
int (*iterator)(const struct rtw89_chan *chan,
void *data),
void *data)
{
struct rtw89_hal *hal = &rtwdev->hal;
const struct rtw89_chan *chan;
int ret;
u8 idx;
lockdep_assert_held(&rtwdev->mutex);
for_each_set_bit(idx, hal->entity_map, NUM_OF_RTW89_SUB_ENTITY) {
chan = rtw89_chan_get(rtwdev, idx);
ret = iterator(chan, data);
if (ret)
return ret;
}
return 0;
}
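
A hypothetical caller of the new iterator, purely for illustration: count the active sub-entities sitting on 6 GHz. The band_type field of struct rtw89_chan is an assumption here; the callback contract (a non-zero return stops the walk) and the requirement to hold rtwdev->mutex come from the iterator above.

static int count_6ghz_iter(const struct rtw89_chan *chan, void *data)
{
	int *n = data;

	if (chan->band_type == RTW89_BAND_6G)
		(*n)++;
	return 0; /* keep iterating */
}

/* Must be called with rtwdev->mutex held, like any user of the iterator. */
static int rtw89_count_6ghz_chans(struct rtw89_dev *rtwdev)
{
	int n = 0;

	rtw89_iterate_entity_chan(rtwdev, count_6ghz_iter, &n);
	return n;
}
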
static void __rtw89_config_entity_chandef(struct rtw89_dev *rtwdev,
enum rtw89_sub_entity_idx idx,
const struct cfg80211_chan_def *chandef,


@ -78,6 +78,10 @@ void rtw89_chan_create(struct rtw89_chan *chan, u8 center_chan, u8 primary_chan,
bool rtw89_assign_entity_chan(struct rtw89_dev *rtwdev,
enum rtw89_sub_entity_idx idx,
const struct rtw89_chan *new);
int rtw89_iterate_entity_chan(struct rtw89_dev *rtwdev,
int (*iterator)(const struct rtw89_chan *chan,
void *data),
void *data);
void rtw89_config_entity_chandef(struct rtw89_dev *rtwdev,
enum rtw89_sub_entity_idx idx,
const struct cfg80211_chan_def *chandef);

Some files were not shown because too many files have changed in this diff