Merge tag 'wireless-next-2024-05-08' of git://git.kernel.org/pub/scm/linux/kernel/git/wireless/wireless-next

Kalle Valo says:

====================
wireless-next patches for v6.10

The third, and most likely the last, "new features" pull request for
v6.10, with changes in both the stack and drivers. In ath12k and rtw89
we disabled Wireless Extensions, just like with iwlwifi earlier. Wi-Fi
7 devices will no longer support Wireless Extensions (WEXT), so if
someone is still using the legacy WEXT interface it's time to switch
to nl80211 now!

We merged wireless into wireless-next as we decided not to send a
wireless pull request for v6.9 this late in the cycle. Also, an
immutable branch from the MHI subsystem was merged to get ath11k and
ath12k hibernation working.

Major changes:

mac80211/cfg80211
 * handle color change per link

mt76
 * mt7921 LED control
 * mt7925 EHT radiotap support
 * mt7920e PCI support

ath12k
 * debugfs support
 * dfs_simulate_radar debugfs file
 * disable Wireless Extensions
 * suspend and hibernation support
 * ACPI support
 * refactoring in preparation for multi-link support

ath11k
 * support hibernation (required changes in qrtr and MHI subsystems)
 * ieee80211-freq-limit Device Tree property support

ath10k
 * firmware-name Device Tree property support

rtw89
 * complete features of the new Wi-Fi 7 chip 8922AE, including BT coexistence
   and WoWLAN
 * use BIOS ACPI settings to set TX power and channels
 * disable Wireless Extensions on Wi-Fi 7 devices

iwlwifi
 * block_esr debugfs file
 * support firmware API 90 again (it was reverted earlier)
 * provide channel survey information for Automatic Channel Selection (ACS)

* tag 'wireless-next-2024-05-08' of git://git.kernel.org/pub/scm/linux/kernel/git/wireless/wireless-next: (214 commits)
  wifi: mwl8k: initialize cmd->addr[] properly
  wifi: iwlwifi: Ensure prph_mac dump includes all addresses
  wifi: iwlwifi: mvm: don't request statistics in restart
  wifi: iwlwifi: mvm: exit EMLSR if secondary link is not used
  wifi: iwlwifi: mvm: add beacon template version 14
  wifi: iwlwifi: mvm: align UATS naming with firmware
  wifi: iwlwifi: Force SCU_ACTIVE for specific platforms
  wifi: iwlwifi: mvm: record and return channel survey information
  wifi: iwlwifi: mvm: add the firmware API for channel survey
  wifi: iwlwifi: mvm: Fix race in scan completion
  wifi: iwlwifi: mvm: Add a print for invalid link pair due to bandwidth
  wifi: iwlwifi: mvm: add a debugfs for reading EMLSR blocking reasons
  wifi: iwlwifi: mvm: Add active EMLSR blocking reasons prints
  wifi: iwlwifi: bump FW API to 90 for BZ/SC devices
  wifi: iwlwifi: mvm: fix primary link setting
  wifi: iwlwifi: mvm: use already determined cmd_id
  wifi: iwlwifi: mvm: don't reset link selection during restart
  wifi: iwlwifi: Print EMLSR states name
  wifi: iwlwifi: mvm: Block EMLSR when a p2p/softAP vif is active
  wifi: iwlwifi: mvm: fix typo in debug print
  ...
====================

Link: https://lore.kernel.org/r/20240508120726.85A10C113CC@smtp.kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

@ -73,6 +73,12 @@ properties:
- sky85703-11
- sky85803
firmware-name:
maxItems: 1
description:
If present, a board or platform specific string used to lookup firmware
files for the device.
wifi-firmware:
type: object
additionalProperties: false


@ -59,6 +59,8 @@ properties:
minItems: 1
maxItems: 2
ieee80211-freq-limit: true
wifi-firmware:
type: object
description: |
@ -88,6 +90,7 @@ required:
additionalProperties: false
allOf:
- $ref: ieee80211.yaml#
- if:
properties:
compatible:


@ -80,6 +80,7 @@ enum dev_st_transition {
DEV_ST_TRANSITION_FP,
DEV_ST_TRANSITION_SYS_ERR,
DEV_ST_TRANSITION_DISABLE,
DEV_ST_TRANSITION_DISABLE_DESTROY_DEVICE,
DEV_ST_TRANSITION_MAX,
};
@ -90,7 +91,8 @@ enum dev_st_transition {
dev_st_trans(MISSION_MODE, "MISSION MODE") \
dev_st_trans(FP, "FLASH PROGRAMMER") \
dev_st_trans(SYS_ERR, "SYS ERROR") \
dev_st_trans_end(DISABLE, "DISABLE")
dev_st_trans(DISABLE, "DISABLE") \
dev_st_trans_end(DISABLE_DESTROY_DEVICE, "DISABLE (DESTROY DEVICE)")
extern const char * const dev_state_tran_str[DEV_ST_TRANSITION_MAX];
#define TO_DEV_STATE_TRANS_STR(state) (((state) >= DEV_ST_TRANSITION_MAX) ? \


@ -468,7 +468,8 @@ error_mission_mode:
}
/* Handle shutdown transitions */
static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl)
static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl,
bool destroy_device)
{
enum mhi_pm_state cur_state;
struct mhi_event *mhi_event;
@ -530,8 +531,16 @@ skip_mhi_reset:
dev_dbg(dev, "Waiting for all pending threads to complete\n");
wake_up_all(&mhi_cntrl->state_event);
dev_dbg(dev, "Reset all active channels and remove MHI devices\n");
device_for_each_child(&mhi_cntrl->mhi_dev->dev, NULL, mhi_destroy_device);
/*
* Only destroy the 'struct device' for channels if indicated by the
* 'destroy_device' flag. Because, during system suspend or hibernation
* state, there is no need to destroy the 'struct device' as the endpoint
* device would still be physically attached to the machine.
*/
if (destroy_device) {
dev_dbg(dev, "Reset all active channels and remove MHI devices\n");
device_for_each_child(&mhi_cntrl->mhi_dev->dev, NULL, mhi_destroy_device);
}
mutex_lock(&mhi_cntrl->pm_mutex);
@ -821,7 +830,10 @@ void mhi_pm_st_worker(struct work_struct *work)
mhi_pm_sys_error_transition(mhi_cntrl);
break;
case DEV_ST_TRANSITION_DISABLE:
mhi_pm_disable_transition(mhi_cntrl);
mhi_pm_disable_transition(mhi_cntrl, false);
break;
case DEV_ST_TRANSITION_DISABLE_DESTROY_DEVICE:
mhi_pm_disable_transition(mhi_cntrl, true);
break;
default:
break;
@ -1175,7 +1187,8 @@ error_exit:
}
EXPORT_SYMBOL_GPL(mhi_async_power_up);
void mhi_power_down(struct mhi_controller *mhi_cntrl, bool graceful)
static void __mhi_power_down(struct mhi_controller *mhi_cntrl, bool graceful,
bool destroy_device)
{
enum mhi_pm_state cur_state, transition_state;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
@ -1211,15 +1224,32 @@ void mhi_power_down(struct mhi_controller *mhi_cntrl, bool graceful)
write_unlock_irq(&mhi_cntrl->pm_lock);
mutex_unlock(&mhi_cntrl->pm_mutex);
mhi_queue_state_transition(mhi_cntrl, DEV_ST_TRANSITION_DISABLE);
if (destroy_device)
mhi_queue_state_transition(mhi_cntrl,
DEV_ST_TRANSITION_DISABLE_DESTROY_DEVICE);
else
mhi_queue_state_transition(mhi_cntrl,
DEV_ST_TRANSITION_DISABLE);
/* Wait for shutdown to complete */
flush_work(&mhi_cntrl->st_worker);
disable_irq(mhi_cntrl->irq[0]);
}
void mhi_power_down(struct mhi_controller *mhi_cntrl, bool graceful)
{
__mhi_power_down(mhi_cntrl, graceful, true);
}
EXPORT_SYMBOL_GPL(mhi_power_down);
void mhi_power_down_keep_dev(struct mhi_controller *mhi_cntrl,
bool graceful)
{
__mhi_power_down(mhi_cntrl, graceful, false);
}
EXPORT_SYMBOL_GPL(mhi_power_down_keep_dev);
int mhi_sync_power_up(struct mhi_controller *mhi_cntrl)
{
int ret = mhi_async_power_up(mhi_cntrl);

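The hunks above add mhi_power_down_keep_dev() so an MHI client driver can
power the device down without destroying the child devices, which is what
suspend and hibernation paths want. A minimal usage sketch, mirroring the
ath11k change later in this series (the helper name is illustrative):

#include <linux/mhi.h>

/* Illustrative helper, not from this diff: on suspend/hibernation the
 * endpoint stays physically attached, so keep the child 'struct device'
 * instances; on a real shutdown, tear everything down.
 */
static void example_mhi_stop(struct mhi_controller *mhi_ctrl, bool is_suspend)
{
	if (is_suspend)
		mhi_power_down_keep_dev(mhi_ctrl, true);
	else
		mhi_power_down(mhi_ctrl, true);

	mhi_unprepare_after_power_down(mhi_ctrl);
}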

@ -1594,6 +1594,20 @@ static int ar5523_probe(struct usb_interface *intf,
struct ar5523 *ar;
int error = -ENOMEM;
static const u8 bulk_ep_addr[] = {
AR5523_CMD_TX_PIPE | USB_DIR_OUT,
AR5523_DATA_TX_PIPE | USB_DIR_OUT,
AR5523_CMD_RX_PIPE | USB_DIR_IN,
AR5523_DATA_RX_PIPE | USB_DIR_IN,
0};
if (!usb_check_bulk_endpoints(intf, bulk_ep_addr)) {
dev_err(&dev->dev,
"Could not find all expected endpoints\n");
error = -ENODEV;
goto out;
}
/*
* Load firmware if the device requires it. This will return
* -ENXIO on success and we'll get called back afer the usb


@ -171,8 +171,10 @@ struct ath_common {
unsigned int clockrate;
spinlock_t cc_lock;
struct ath_cycle_counters cc_ani;
struct ath_cycle_counters cc_survey;
struct_group(cc,
struct ath_cycle_counters cc_ani;
struct ath_cycle_counters cc_survey;
);
struct ath_regulatory regulatory;
struct ath_regulatory reg_world_copy;

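Wrapping the two cycle counters in struct_group() (from <linux/stddef.h>)
gives them an addressable, fixed-size member, so both can be cleared in a
single bounds-checked memset(). A sketch of the pattern this enables (the
helper function is hypothetical):

#include <linux/string.h>

static void ath_clear_cycle_counters(struct ath_common *common)
{
	/* 'cc' is the struct_group() declared above; one memset covers
	 * both cc_ani and cc_survey without tripping FORTIFY_SOURCE.
	 */
	memset(&common->cc, 0, sizeof(common->cc));
}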

@ -75,7 +75,6 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.cal_data_len = 2116,
.fw = {
.dir = QCA988X_HW_2_0_FW_DIR,
.board = QCA988X_HW_2_0_BOARD_DATA_FILE,
.board_size = QCA988X_BOARD_DATA_SZ,
.board_ext_size = QCA988X_BOARD_EXT_DATA_SZ,
},
@ -116,7 +115,6 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.cal_data_len = 2116,
.fw = {
.dir = QCA988X_HW_2_0_FW_DIR,
.board = QCA988X_HW_2_0_BOARD_DATA_FILE,
.board_size = QCA988X_BOARD_DATA_SZ,
.board_ext_size = QCA988X_BOARD_EXT_DATA_SZ,
},
@ -158,7 +156,6 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.cal_data_len = 2116,
.fw = {
.dir = QCA9887_HW_1_0_FW_DIR,
.board = QCA9887_HW_1_0_BOARD_DATA_FILE,
.board_size = QCA9887_BOARD_DATA_SZ,
.board_ext_size = QCA9887_BOARD_EXT_DATA_SZ,
},
@ -199,7 +196,6 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.cal_data_len = 0,
.fw = {
.dir = QCA6174_HW_3_0_FW_DIR,
.board = QCA6174_HW_3_0_BOARD_DATA_FILE,
.board_size = QCA6174_BOARD_DATA_SZ,
.board_ext_size = QCA6174_BOARD_EXT_DATA_SZ,
},
@ -236,7 +232,6 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.cal_data_len = 8124,
.fw = {
.dir = QCA6174_HW_2_1_FW_DIR,
.board = QCA6174_HW_2_1_BOARD_DATA_FILE,
.board_size = QCA6174_BOARD_DATA_SZ,
.board_ext_size = QCA6174_BOARD_EXT_DATA_SZ,
},
@ -277,7 +272,6 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.cal_data_len = 8124,
.fw = {
.dir = QCA6174_HW_2_1_FW_DIR,
.board = QCA6174_HW_2_1_BOARD_DATA_FILE,
.board_size = QCA6174_BOARD_DATA_SZ,
.board_ext_size = QCA6174_BOARD_EXT_DATA_SZ,
},
@ -318,7 +312,6 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.cal_data_len = 8124,
.fw = {
.dir = QCA6174_HW_3_0_FW_DIR,
.board = QCA6174_HW_3_0_BOARD_DATA_FILE,
.board_size = QCA6174_BOARD_DATA_SZ,
.board_ext_size = QCA6174_BOARD_EXT_DATA_SZ,
},
@ -360,7 +353,6 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.fw = {
/* uses same binaries as hw3.0 */
.dir = QCA6174_HW_3_0_FW_DIR,
.board = QCA6174_HW_3_0_BOARD_DATA_FILE,
.board_size = QCA6174_BOARD_DATA_SZ,
.board_ext_size = QCA6174_BOARD_EXT_DATA_SZ,
},
@ -409,7 +401,6 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.cal_data_len = 12064,
.fw = {
.dir = QCA99X0_HW_2_0_FW_DIR,
.board = QCA99X0_HW_2_0_BOARD_DATA_FILE,
.board_size = QCA99X0_BOARD_DATA_SZ,
.board_ext_size = QCA99X0_BOARD_EXT_DATA_SZ,
},
@ -457,8 +448,6 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.cal_data_len = 12064,
.fw = {
.dir = QCA9984_HW_1_0_FW_DIR,
.board = QCA9984_HW_1_0_BOARD_DATA_FILE,
.eboard = QCA9984_HW_1_0_EBOARD_DATA_FILE,
.board_size = QCA99X0_BOARD_DATA_SZ,
.board_ext_size = QCA99X0_BOARD_EXT_DATA_SZ,
.ext_board_size = QCA99X0_EXT_BOARD_DATA_SZ,
@ -510,7 +499,6 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.cal_data_len = 12064,
.fw = {
.dir = QCA9888_HW_2_0_FW_DIR,
.board = QCA9888_HW_2_0_BOARD_DATA_FILE,
.board_size = QCA99X0_BOARD_DATA_SZ,
.board_ext_size = QCA99X0_BOARD_EXT_DATA_SZ,
},
@ -556,7 +544,6 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.cal_data_len = 8124,
.fw = {
.dir = QCA9377_HW_1_0_FW_DIR,
.board = QCA9377_HW_1_0_BOARD_DATA_FILE,
.board_size = QCA9377_BOARD_DATA_SZ,
.board_ext_size = QCA9377_BOARD_EXT_DATA_SZ,
},
@ -597,7 +584,6 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.cal_data_len = 8124,
.fw = {
.dir = QCA9377_HW_1_0_FW_DIR,
.board = QCA9377_HW_1_0_BOARD_DATA_FILE,
.board_size = QCA9377_BOARD_DATA_SZ,
.board_ext_size = QCA9377_BOARD_EXT_DATA_SZ,
},
@ -640,7 +626,6 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.cal_data_len = 8124,
.fw = {
.dir = QCA9377_HW_1_0_FW_DIR,
.board = QCA9377_HW_1_0_BOARD_DATA_FILE,
.board_size = QCA9377_BOARD_DATA_SZ,
.board_ext_size = QCA9377_BOARD_EXT_DATA_SZ,
},
@ -680,7 +665,6 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.cal_data_len = 12064,
.fw = {
.dir = QCA4019_HW_1_0_FW_DIR,
.board = QCA4019_HW_1_0_BOARD_DATA_FILE,
.board_size = QCA4019_BOARD_DATA_SZ,
.board_ext_size = QCA4019_BOARD_EXT_DATA_SZ,
},
@ -720,6 +704,8 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.max_spatial_stream = 4,
.fw = {
.dir = WCN3990_HW_1_0_FW_DIR,
.board_size = WCN3990_BOARD_DATA_SZ,
.board_ext_size = WCN3990_BOARD_EXT_DATA_SZ,
},
.sw_decrypt_mcast_mgmt = true,
.rx_desc_ops = &wcn3990_rx_desc_ops,
@ -942,11 +928,20 @@ static const struct firmware *ath10k_fetch_fw_file(struct ath10k *ar,
if (dir == NULL)
dir = ".";
if (ar->board_name) {
snprintf(filename, sizeof(filename), "%s/%s/%s",
dir, ar->board_name, file);
ret = firmware_request_nowarn(&fw, filename, ar->dev);
ath10k_dbg(ar, ATH10K_DBG_BOOT, "boot fw request '%s': %d\n",
filename, ret);
if (!ret)
return fw;
}
snprintf(filename, sizeof(filename), "%s/%s", dir, file);
ret = firmware_request_nowarn(&fw, filename, ar->dev);
ath10k_dbg(ar, ATH10K_DBG_BOOT, "boot fw request '%s': %d\n",
filename, ret);
if (ret)
return ERR_PTR(ret);
@ -1288,11 +1283,6 @@ static int ath10k_core_fetch_board_data_api_1(struct ath10k *ar, int bd_ie_type)
char boardname[100];
if (bd_ie_type == ATH10K_BD_IE_BOARD) {
if (!ar->hw_params.fw.board) {
ath10k_err(ar, "failed to find board file fw entry\n");
return -EINVAL;
}
scnprintf(boardname, sizeof(boardname), "board-%s-%s.bin",
ath10k_bus_str(ar->hif.bus), dev_name(ar->dev));
@ -1302,7 +1292,7 @@ static int ath10k_core_fetch_board_data_api_1(struct ath10k *ar, int bd_ie_type)
if (IS_ERR(ar->normal_mode_fw.board)) {
fw = ath10k_fetch_fw_file(ar,
ar->hw_params.fw.dir,
ar->hw_params.fw.board);
ATH10K_BOARD_DATA_FILE);
ar->normal_mode_fw.board = fw;
}
@ -1312,13 +1302,8 @@ static int ath10k_core_fetch_board_data_api_1(struct ath10k *ar, int bd_ie_type)
ar->normal_mode_fw.board_data = ar->normal_mode_fw.board->data;
ar->normal_mode_fw.board_len = ar->normal_mode_fw.board->size;
} else if (bd_ie_type == ATH10K_BD_IE_BOARD_EXT) {
if (!ar->hw_params.fw.eboard) {
ath10k_err(ar, "failed to find eboard file fw entry\n");
return -EINVAL;
}
fw = ath10k_fetch_fw_file(ar, ar->hw_params.fw.dir,
ar->hw_params.fw.eboard);
ATH10K_EBOARD_DATA_FILE);
ar->normal_mode_fw.ext_board = fw;
if (IS_ERR(ar->normal_mode_fw.ext_board))
return PTR_ERR(ar->normal_mode_fw.ext_board);


@ -1081,6 +1081,8 @@ struct ath10k {
*/
const struct ath10k_fw_components *running_fw;
const char *board_name;
const struct firmware *pre_cal_file;
const struct firmware *cal_file;


@ -439,7 +439,7 @@ ath10k_dbg_sta_write_peer_debug_trigger(struct file *file,
}
out:
mutex_unlock(&ar->conf_mutex);
return count;
return ret ?: count;
}
static const struct file_operations fops_peer_debug_trigger = {

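The changed return statement uses the GCC/Clang 'a ?: b' extension, which
yields 'a' unless it evaluates to zero; the effect is that a write error in
the peer debug trigger is now propagated to userspace instead of being
swallowed. Spelled out in plain C:

	/* Equivalent of 'return ret ?: count;' */
	return ret ? ret : count;	/* error code if set, else bytes consumed */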

@ -39,14 +39,12 @@ enum ath10k_bus {
#define QCA988X_HW_2_0_VERSION 0x4100016c
#define QCA988X_HW_2_0_CHIP_ID_REV 0x2
#define QCA988X_HW_2_0_FW_DIR ATH10K_FW_DIR "/QCA988X/hw2.0"
#define QCA988X_HW_2_0_BOARD_DATA_FILE "board.bin"
#define QCA988X_HW_2_0_PATCH_LOAD_ADDR 0x1234
/* QCA9887 1.0 definitions */
#define QCA9887_HW_1_0_VERSION 0x4100016d
#define QCA9887_HW_1_0_CHIP_ID_REV 0
#define QCA9887_HW_1_0_FW_DIR ATH10K_FW_DIR "/QCA9887/hw1.0"
#define QCA9887_HW_1_0_BOARD_DATA_FILE "board.bin"
#define QCA9887_HW_1_0_PATCH_LOAD_ADDR 0x1234
/* QCA6174 target BMI version signatures */
@ -85,11 +83,9 @@ enum qca9377_chip_id_rev {
};
#define QCA6174_HW_2_1_FW_DIR ATH10K_FW_DIR "/QCA6174/hw2.1"
#define QCA6174_HW_2_1_BOARD_DATA_FILE "board.bin"
#define QCA6174_HW_2_1_PATCH_LOAD_ADDR 0x1234
#define QCA6174_HW_3_0_FW_DIR ATH10K_FW_DIR "/QCA6174/hw3.0"
#define QCA6174_HW_3_0_BOARD_DATA_FILE "board.bin"
#define QCA6174_HW_3_0_PATCH_LOAD_ADDR 0x1234
/* QCA99X0 1.0 definitions (unsupported) */
@ -99,7 +95,6 @@ enum qca9377_chip_id_rev {
#define QCA99X0_HW_2_0_DEV_VERSION 0x01000000
#define QCA99X0_HW_2_0_CHIP_ID_REV 0x1
#define QCA99X0_HW_2_0_FW_DIR ATH10K_FW_DIR "/QCA99X0/hw2.0"
#define QCA99X0_HW_2_0_BOARD_DATA_FILE "board.bin"
#define QCA99X0_HW_2_0_PATCH_LOAD_ADDR 0x1234
/* QCA9984 1.0 defines */
@ -107,8 +102,6 @@ enum qca9377_chip_id_rev {
#define QCA9984_HW_DEV_TYPE 0xa
#define QCA9984_HW_1_0_CHIP_ID_REV 0x0
#define QCA9984_HW_1_0_FW_DIR ATH10K_FW_DIR "/QCA9984/hw1.0"
#define QCA9984_HW_1_0_BOARD_DATA_FILE "board.bin"
#define QCA9984_HW_1_0_EBOARD_DATA_FILE "eboard.bin"
#define QCA9984_HW_1_0_PATCH_LOAD_ADDR 0x1234
/* QCA9888 2.0 defines */
@ -116,18 +109,15 @@ enum qca9377_chip_id_rev {
#define QCA9888_HW_DEV_TYPE 0xc
#define QCA9888_HW_2_0_CHIP_ID_REV 0x0
#define QCA9888_HW_2_0_FW_DIR ATH10K_FW_DIR "/QCA9888/hw2.0"
#define QCA9888_HW_2_0_BOARD_DATA_FILE "board.bin"
#define QCA9888_HW_2_0_PATCH_LOAD_ADDR 0x1234
/* QCA9377 1.0 definitions */
#define QCA9377_HW_1_0_FW_DIR ATH10K_FW_DIR "/QCA9377/hw1.0"
#define QCA9377_HW_1_0_BOARD_DATA_FILE "board.bin"
#define QCA9377_HW_1_0_PATCH_LOAD_ADDR 0x1234
/* QCA4019 1.0 definitions */
#define QCA4019_HW_1_0_DEV_VERSION 0x01000000
#define QCA4019_HW_1_0_FW_DIR ATH10K_FW_DIR "/QCA4019/hw1.0"
#define QCA4019_HW_1_0_BOARD_DATA_FILE "board.bin"
#define QCA4019_HW_1_0_PATCH_LOAD_ADDR 0x1234
/* WCN3990 1.0 definitions */
@ -159,7 +149,9 @@ enum qca9377_chip_id_rev {
#define ATH10K_FIRMWARE_MAGIC "QCA-ATH10K"
#define ATH10K_BOARD_MAGIC "QCA-ATH10K-BOARD"
#define ATH10K_BOARD_DATA_FILE "board.bin"
#define ATH10K_BOARD_API2_FILE "board-2.bin"
#define ATH10K_EBOARD_DATA_FILE "eboard.bin"
#define REG_DUMP_COUNT_QCA988X 60
@ -553,9 +545,7 @@ struct ath10k_hw_params {
struct ath10k_hw_params_fw {
const char *dir;
const char *board;
size_t board_size;
const char *eboard;
size_t ext_board_size;
size_t board_ext_size;
} fw;


@ -3826,28 +3826,28 @@ MODULE_FIRMWARE(QCA988X_HW_2_0_FW_DIR "/" ATH10K_FW_API2_FILE);
MODULE_FIRMWARE(QCA988X_HW_2_0_FW_DIR "/" ATH10K_FW_API3_FILE);
MODULE_FIRMWARE(QCA988X_HW_2_0_FW_DIR "/" ATH10K_FW_API4_FILE);
MODULE_FIRMWARE(QCA988X_HW_2_0_FW_DIR "/" ATH10K_FW_API5_FILE);
MODULE_FIRMWARE(QCA988X_HW_2_0_FW_DIR "/" QCA988X_HW_2_0_BOARD_DATA_FILE);
MODULE_FIRMWARE(QCA988X_HW_2_0_FW_DIR "/" ATH10K_BOARD_DATA_FILE);
MODULE_FIRMWARE(QCA988X_HW_2_0_FW_DIR "/" ATH10K_BOARD_API2_FILE);
/* QCA9887 1.0 firmware files */
MODULE_FIRMWARE(QCA9887_HW_1_0_FW_DIR "/" ATH10K_FW_API5_FILE);
MODULE_FIRMWARE(QCA9887_HW_1_0_FW_DIR "/" QCA9887_HW_1_0_BOARD_DATA_FILE);
MODULE_FIRMWARE(QCA9887_HW_1_0_FW_DIR "/" ATH10K_BOARD_DATA_FILE);
MODULE_FIRMWARE(QCA9887_HW_1_0_FW_DIR "/" ATH10K_BOARD_API2_FILE);
/* QCA6174 2.1 firmware files */
MODULE_FIRMWARE(QCA6174_HW_2_1_FW_DIR "/" ATH10K_FW_API4_FILE);
MODULE_FIRMWARE(QCA6174_HW_2_1_FW_DIR "/" ATH10K_FW_API5_FILE);
MODULE_FIRMWARE(QCA6174_HW_2_1_FW_DIR "/" QCA6174_HW_2_1_BOARD_DATA_FILE);
MODULE_FIRMWARE(QCA6174_HW_2_1_FW_DIR "/" ATH10K_BOARD_DATA_FILE);
MODULE_FIRMWARE(QCA6174_HW_2_1_FW_DIR "/" ATH10K_BOARD_API2_FILE);
/* QCA6174 3.1 firmware files */
MODULE_FIRMWARE(QCA6174_HW_3_0_FW_DIR "/" ATH10K_FW_API4_FILE);
MODULE_FIRMWARE(QCA6174_HW_3_0_FW_DIR "/" ATH10K_FW_API5_FILE);
MODULE_FIRMWARE(QCA6174_HW_3_0_FW_DIR "/" ATH10K_FW_API6_FILE);
MODULE_FIRMWARE(QCA6174_HW_3_0_FW_DIR "/" QCA6174_HW_3_0_BOARD_DATA_FILE);
MODULE_FIRMWARE(QCA6174_HW_3_0_FW_DIR "/" ATH10K_BOARD_DATA_FILE);
MODULE_FIRMWARE(QCA6174_HW_3_0_FW_DIR "/" ATH10K_BOARD_API2_FILE);
/* QCA9377 1.0 firmware files */
MODULE_FIRMWARE(QCA9377_HW_1_0_FW_DIR "/" ATH10K_FW_API6_FILE);
MODULE_FIRMWARE(QCA9377_HW_1_0_FW_DIR "/" ATH10K_FW_API5_FILE);
MODULE_FIRMWARE(QCA9377_HW_1_0_FW_DIR "/" QCA9377_HW_1_0_BOARD_DATA_FILE);
MODULE_FIRMWARE(QCA9377_HW_1_0_FW_DIR "/" ATH10K_BOARD_DATA_FILE);


@ -1338,6 +1338,9 @@ static void ath10k_snoc_quirks_init(struct ath10k *ar)
struct ath10k_snoc *ar_snoc = ath10k_snoc_priv(ar);
struct device *dev = &ar_snoc->dev->dev;
/* ignore errors, keep NULL if there is no property */
of_property_read_string(dev->of_node, "firmware-name", &ar->board_name);
if (of_property_read_bool(dev->of_node, "qcom,snoc-host-cap-8bit-quirk"))
set_bit(ATH10K_SNOC_FLAG_8BIT_HOST_CAP_QUIRK, &ar_snoc->flags);
}
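
ath10k_snoc_quirks_init() now also reads the optional firmware-name property
into ar->board_name, which ath10k_fetch_fw_file() (above) uses as an extra
subdirectory when requesting board files. A hypothetical device tree fragment
exercising it (the board string is illustrative; the compatible comes from
the real binding):

/*
 *	wifi@18800000 {
 *		compatible = "qcom,wcn3990-wifi";
 *		firmware-name = "example-board";
 *	};
 *
 * With this, board.bin is first requested from
 * "ath10k/WCN3990/hw1.0/example-board/" and then from the usual
 * "ath10k/WCN3990/hw1.0/" as a fallback.
 */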


@ -491,4 +491,7 @@ struct host_interest {
#define QCA4019_BOARD_DATA_SZ 12064
#define QCA4019_BOARD_EXT_DATA_SZ 0
#define WCN3990_BOARD_DATA_SZ 26328
#define WCN3990_BOARD_EXT_DATA_SZ 0
#endif /* __TARGADDRS_H__ */


@ -1,7 +1,7 @@
// SPDX-License-Identifier: BSD-3-Clause-Clear
/*
* Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
* Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved.
* Copyright (c) 2022-2024 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#include <linux/module.h>
@ -413,7 +413,7 @@ static int ath11k_ahb_power_up(struct ath11k_base *ab)
return ret;
}
static void ath11k_ahb_power_down(struct ath11k_base *ab)
static void ath11k_ahb_power_down(struct ath11k_base *ab, bool is_suspend)
{
struct ath11k_ahb *ab_ahb = ath11k_ahb_priv(ab);
@ -1261,7 +1261,7 @@ static void ath11k_ahb_remove(struct platform_device *pdev)
struct ath11k_base *ab = platform_get_drvdata(pdev);
if (test_bit(ATH11K_FLAG_QMI_FAIL, &ab->dev_flags)) {
ath11k_ahb_power_down(ab);
ath11k_ahb_power_down(ab, false);
ath11k_debugfs_soc_destroy(ab);
ath11k_qmi_deinit_service(ab);
goto qmi_fail;


@ -906,12 +906,6 @@ int ath11k_core_suspend(struct ath11k_base *ab)
return ret;
}
ret = ath11k_wow_enable(ab);
if (ret) {
ath11k_warn(ab, "failed to enable wow during suspend: %d\n", ret);
return ret;
}
ret = ath11k_dp_rx_pktlog_stop(ab, false);
if (ret) {
ath11k_warn(ab, "failed to stop dp rx pktlog during suspend: %d\n",
@ -922,20 +916,49 @@ int ath11k_core_suspend(struct ath11k_base *ab)
ath11k_ce_stop_shadow_timers(ab);
ath11k_dp_stop_shadow_timers(ab);
ath11k_hif_irq_disable(ab);
ath11k_hif_ce_irq_disable(ab);
ret = ath11k_hif_suspend(ab);
if (ret) {
ath11k_warn(ab, "failed to suspend hif: %d\n", ret);
return ret;
}
/* PM framework skips suspend_late/resume_early callbacks
* if other devices report errors in their suspend callbacks.
* However ath11k_core_resume() would still be called because
* here we return success thus kernel put us on dpm_suspended_list.
* Since we won't go through a power down/up cycle, there is
* no chance to call complete(&ab->restart_completed) in
* ath11k_core_restart(), making ath11k_core_resume() timeout.
* So call it here to avoid this issue. This also works in case
* no error happens thus suspend_late/resume_early get called,
* because it will be reinitialized in ath11k_core_resume_early().
*/
complete(&ab->restart_completed);
return 0;
}
EXPORT_SYMBOL(ath11k_core_suspend);
int ath11k_core_resume(struct ath11k_base *ab)
int ath11k_core_suspend_late(struct ath11k_base *ab)
{
struct ath11k_pdev *pdev;
struct ath11k *ar;
if (!ab->hw_params.supports_suspend)
return -EOPNOTSUPP;
/* so far single_pdev_only chips have supports_suspend as true
* and only the first pdev is valid.
*/
pdev = ath11k_core_get_single_pdev(ab);
ar = pdev->ar;
if (!ar || ar->state != ATH11K_STATE_OFF)
return 0;
ath11k_hif_irq_disable(ab);
ath11k_hif_ce_irq_disable(ab);
ath11k_hif_power_down(ab, true);
return 0;
}
EXPORT_SYMBOL(ath11k_core_suspend_late);
int ath11k_core_resume_early(struct ath11k_base *ab)
{
int ret;
struct ath11k_pdev *pdev;
@ -944,7 +967,7 @@ int ath11k_core_resume(struct ath11k_base *ab)
if (!ab->hw_params.supports_suspend)
return -EOPNOTSUPP;
/* so far signle_pdev_only chips have supports_suspend as true
/* so far single_pdev_only chips have supports_suspend as true
* and only the first pdev is valid.
*/
pdev = ath11k_core_get_single_pdev(ab);
@ -952,29 +975,46 @@ int ath11k_core_resume(struct ath11k_base *ab)
if (!ar || ar->state != ATH11K_STATE_OFF)
return 0;
ret = ath11k_hif_resume(ab);
if (ret) {
ath11k_warn(ab, "failed to resume hif during resume: %d\n", ret);
return ret;
}
reinit_completion(&ab->restart_completed);
ret = ath11k_hif_power_up(ab);
if (ret)
ath11k_warn(ab, "failed to power up hif during resume: %d\n", ret);
ath11k_hif_ce_irq_enable(ab);
ath11k_hif_irq_enable(ab);
return ret;
}
EXPORT_SYMBOL(ath11k_core_resume_early);
int ath11k_core_resume(struct ath11k_base *ab)
{
int ret;
struct ath11k_pdev *pdev;
struct ath11k *ar;
long time_left;
if (!ab->hw_params.supports_suspend)
return -EOPNOTSUPP;
/* so far single_pdev_only chips have supports_suspend as true
* and only the first pdev is valid.
*/
pdev = ath11k_core_get_single_pdev(ab);
ar = pdev->ar;
if (!ar || ar->state != ATH11K_STATE_OFF)
return 0;
time_left = wait_for_completion_timeout(&ab->restart_completed,
ATH11K_RESET_TIMEOUT_HZ);
if (time_left == 0) {
ath11k_warn(ab, "timeout while waiting for restart complete");
return -ETIMEDOUT;
}
ret = ath11k_dp_rx_pktlog_start(ab);
if (ret) {
if (ret)
ath11k_warn(ab, "failed to start rx pktlog during resume: %d\n",
ret);
return ret;
}
ret = ath11k_wow_wakeup(ab);
if (ret) {
ath11k_warn(ab, "failed to wakeup wow during resume: %d\n", ret);
return ret;
}
return 0;
return ret;
}
EXPORT_SYMBOL(ath11k_core_resume);
@ -2072,6 +2112,8 @@ static void ath11k_core_restart(struct work_struct *work)
if (!ab->is_reset)
ath11k_core_post_reconfigure_recovery(ab);
complete(&ab->restart_completed);
}
static void ath11k_core_reset(struct work_struct *work)
@ -2141,7 +2183,7 @@ static void ath11k_core_reset(struct work_struct *work)
ath11k_hif_irq_disable(ab);
ath11k_hif_ce_irq_disable(ab);
ath11k_hif_power_down(ab);
ath11k_hif_power_down(ab, false);
ath11k_hif_power_up(ab);
ath11k_dbg(ab, ATH11K_DBG_BOOT, "reset started\n");
@ -2214,7 +2256,7 @@ void ath11k_core_deinit(struct ath11k_base *ab)
mutex_unlock(&ab->core_lock);
ath11k_hif_power_down(ab);
ath11k_hif_power_down(ab, false);
ath11k_mac_destroy(ab);
ath11k_core_soc_destroy(ab);
ath11k_fw_destroy(ab);
@ -2267,6 +2309,7 @@ struct ath11k_base *ath11k_core_alloc(struct device *dev, size_t priv_size,
timer_setup(&ab->rx_replenish_retry, ath11k_ce_rx_replenish_retry, 0);
init_completion(&ab->htc_suspend);
init_completion(&ab->wow.wakeup_completed);
init_completion(&ab->restart_completed);
ab->dev = dev;
ab->hif.bus = bus;

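The restart_completed completion added here closes a resume race:
ath11k_core_resume() can run without a preceding power-down/up cycle, so
suspend() completes it eagerly and resume_early() re-arms it before powering
the hardware back up. The handshake in miniature, condensed from the hunks
above (sketch only):

/*
 * suspend():        complete(&ab->restart_completed);
 * suspend_late():   disable IRQs, ath11k_hif_power_down(ab, true);
 * resume_early():   reinit_completion(&ab->restart_completed);
 *                   ath11k_hif_power_up(ab);
 * restart worker:   complete(&ab->restart_completed);
 * resume():         wait_for_completion_timeout(&ab->restart_completed,
 *                                               ATH11K_RESET_TIMEOUT_HZ);
 */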

@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved.
* Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef ATH11K_CORE_H
@ -1033,6 +1033,8 @@ struct ath11k_base {
DECLARE_BITMAP(fw_features, ATH11K_FW_FEATURE_COUNT);
} fw;
struct completion restart_completed;
#ifdef CONFIG_NL80211_TESTMODE
struct {
u32 data_pos;
@ -1232,8 +1234,10 @@ void ath11k_core_free_bdf(struct ath11k_base *ab, struct ath11k_board_data *bd);
int ath11k_core_check_dt(struct ath11k_base *ath11k);
int ath11k_core_check_smbios(struct ath11k_base *ab);
void ath11k_core_halt(struct ath11k *ar);
int ath11k_core_resume_early(struct ath11k_base *ab);
int ath11k_core_resume(struct ath11k_base *ab);
int ath11k_core_suspend(struct ath11k_base *ab);
int ath11k_core_suspend_late(struct ath11k_base *ab);
void ath11k_core_pre_reconfigure_recovery(struct ath11k_base *ab);
bool ath11k_core_coldboot_cal_support(struct ath11k_base *ab);


@ -664,7 +664,7 @@ struct hal_srng_config {
};
/**
* enum hal_rx_buf_return_buf_manager
* enum hal_rx_buf_return_buf_manager - manager for returned rx buffers
*
* @HAL_RX_BUF_RBM_WBM_IDLE_BUF_LIST: Buffer returned to WBM idle buffer list
* @HAL_RX_BUF_RBM_WBM_IDLE_DESC_LIST: Descriptor returned to WBM idle


@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2019-2020 The Linux Foundation. All rights reserved.
* Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved.
* Copyright (c) 2022-2024 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef _HIF_H_
@ -18,7 +18,7 @@ struct ath11k_hif_ops {
int (*start)(struct ath11k_base *ab);
void (*stop)(struct ath11k_base *ab);
int (*power_up)(struct ath11k_base *ab);
void (*power_down)(struct ath11k_base *ab);
void (*power_down)(struct ath11k_base *ab, bool is_suspend);
int (*suspend)(struct ath11k_base *ab);
int (*resume)(struct ath11k_base *ab);
int (*map_service_to_pipe)(struct ath11k_base *ab, u16 service_id,
@ -67,12 +67,18 @@ static inline void ath11k_hif_irq_disable(struct ath11k_base *ab)
static inline int ath11k_hif_power_up(struct ath11k_base *ab)
{
if (!ab->hif.ops->power_up)
return -EOPNOTSUPP;
return ab->hif.ops->power_up(ab);
}
static inline void ath11k_hif_power_down(struct ath11k_base *ab)
static inline void ath11k_hif_power_down(struct ath11k_base *ab, bool is_suspend)
{
ab->hif.ops->power_down(ab);
if (!ab->hif.ops->power_down)
return;
ab->hif.ops->power_down(ab, is_suspend);
}
static inline int ath11k_hif_suspend(struct ath11k_base *ab)


@ -1659,7 +1659,7 @@ void ath11k_mac_bcn_tx_event(struct ath11k_vif *arvif)
if (vif->bss_conf.color_change_active &&
ieee80211_beacon_cntdwn_is_complete(vif, 0)) {
arvif->bcca_zero_sent = true;
ieee80211_color_change_finish(vif);
ieee80211_color_change_finish(vif, 0);
return;
}
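
Both mac80211 color-change entry points gained a link ID so BSS color changes
can be handled per link; non-MLO drivers such as ath11k pass 0 for the
default link. Prototypes as implied by the calls in this series (the exact
parameter types are an assumption here):

void ieee80211_color_change_finish(struct ieee80211_vif *vif, u8 link_id);
void ieee80211_obss_color_collision_notify(struct ieee80211_vif *vif,
					   u64 color_bitmap, u8 link_id);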
@ -10126,6 +10126,7 @@ static int __ath11k_mac_register(struct ath11k *ar)
if (ret)
goto err;
wiphy_read_of_freq_limits(ar->hw->wiphy);
ath11k_mac_setup_ht_vht_cap(ar, cap, &ht_cap);
ath11k_mac_setup_he_cap(ar, cap);


@ -453,9 +453,17 @@ int ath11k_mhi_start(struct ath11k_pci *ab_pci)
return 0;
}
void ath11k_mhi_stop(struct ath11k_pci *ab_pci)
void ath11k_mhi_stop(struct ath11k_pci *ab_pci, bool is_suspend)
{
mhi_power_down(ab_pci->mhi_ctrl, true);
/* During suspend we need to use mhi_power_down_keep_dev()
* workaround, otherwise ath11k_core_resume() will timeout
* during resume.
*/
if (is_suspend)
mhi_power_down_keep_dev(ab_pci->mhi_ctrl, true);
else
mhi_power_down(ab_pci->mhi_ctrl, true);
mhi_unprepare_after_power_down(ab_pci->mhi_ctrl);
}


@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2020 The Linux Foundation. All rights reserved.
* Copyright (c) 2022 Qualcomm Innovation Center, Inc. All rights reserved.
* Copyright (c) 2022, 2024 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef _ATH11K_MHI_H
#define _ATH11K_MHI_H
@ -18,7 +18,7 @@
#define MHICTRL_RESET_MASK 0x2
int ath11k_mhi_start(struct ath11k_pci *ar_pci);
void ath11k_mhi_stop(struct ath11k_pci *ar_pci);
void ath11k_mhi_stop(struct ath11k_pci *ar_pci, bool is_suspend);
int ath11k_mhi_register(struct ath11k_pci *ar_pci);
void ath11k_mhi_unregister(struct ath11k_pci *ar_pci);
void ath11k_mhi_set_mhictrl_reset(struct ath11k_base *ab);
@ -26,5 +26,4 @@ void ath11k_mhi_clear_vector(struct ath11k_base *ab);
int ath11k_mhi_suspend(struct ath11k_pci *ar_pci);
int ath11k_mhi_resume(struct ath11k_pci *ar_pci);
#endif


@ -638,7 +638,7 @@ static int ath11k_pci_power_up(struct ath11k_base *ab)
return 0;
}
static void ath11k_pci_power_down(struct ath11k_base *ab)
static void ath11k_pci_power_down(struct ath11k_base *ab, bool is_suspend)
{
struct ath11k_pci *ab_pci = ath11k_pci_priv(ab);
@ -649,7 +649,7 @@ static void ath11k_pci_power_down(struct ath11k_base *ab)
ath11k_pci_msi_disable(ab_pci);
ath11k_mhi_stop(ab_pci);
ath11k_mhi_stop(ab_pci, is_suspend);
clear_bit(ATH11K_FLAG_DEVICE_INIT_DONE, &ab->dev_flags);
ath11k_pci_sw_reset(ab_pci->ab, false);
}
@ -970,7 +970,7 @@ static void ath11k_pci_remove(struct pci_dev *pdev)
ath11k_pci_set_irq_affinity_hint(ab_pci, NULL);
if (test_bit(ATH11K_FLAG_QMI_FAIL, &ab->dev_flags)) {
ath11k_pci_power_down(ab);
ath11k_pci_power_down(ab, false);
ath11k_debugfs_soc_destroy(ab);
ath11k_qmi_deinit_service(ab);
goto qmi_fail;
@ -998,7 +998,7 @@ static void ath11k_pci_shutdown(struct pci_dev *pdev)
struct ath11k_pci *ab_pci = ath11k_pci_priv(ab);
ath11k_pci_set_irq_affinity_hint(ab_pci, NULL);
ath11k_pci_power_down(ab);
ath11k_pci_power_down(ab, false);
}
static __maybe_unused int ath11k_pci_pm_suspend(struct device *dev)
@ -1035,9 +1035,39 @@ static __maybe_unused int ath11k_pci_pm_resume(struct device *dev)
return ret;
}
static SIMPLE_DEV_PM_OPS(ath11k_pci_pm_ops,
ath11k_pci_pm_suspend,
ath11k_pci_pm_resume);
static __maybe_unused int ath11k_pci_pm_suspend_late(struct device *dev)
{
struct ath11k_base *ab = dev_get_drvdata(dev);
int ret;
ret = ath11k_core_suspend_late(ab);
if (ret)
ath11k_warn(ab, "failed to late suspend core: %d\n", ret);
/* Similar to ath11k_pci_pm_suspend(), we return success here
* even error happens, to allow system suspend/hibernation survive.
*/
return 0;
}
static __maybe_unused int ath11k_pci_pm_resume_early(struct device *dev)
{
struct ath11k_base *ab = dev_get_drvdata(dev);
int ret;
ret = ath11k_core_resume_early(ab);
if (ret)
ath11k_warn(ab, "failed to early resume core: %d\n", ret);
return ret;
}
static const struct dev_pm_ops __maybe_unused ath11k_pci_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(ath11k_pci_pm_suspend,
ath11k_pci_pm_resume)
SET_LATE_SYSTEM_SLEEP_PM_OPS(ath11k_pci_pm_suspend_late,
ath11k_pci_pm_resume_early)
};
static struct pci_driver ath11k_pci_driver = {
.name = "ath11k_pci",


@ -2877,7 +2877,7 @@ int ath11k_qmi_fwreset_from_cold_boot(struct ath11k_base *ab)
}
/* reset the firmware */
ath11k_hif_power_down(ab);
ath11k_hif_power_down(ab, false);
ath11k_hif_power_up(ab);
ath11k_dbg(ab, ATH11K_DBG_QMI, "exit wait for cold boot done\n");
return 0;


@ -4064,7 +4064,8 @@ ath11k_wmi_obss_color_collision_event(struct ath11k_base *ab, struct sk_buff *sk
switch (ev->evt_type) {
case WMI_BSS_COLOR_COLLISION_DETECTION:
ieee80211_obss_color_collision_notify(arvif->vif, ev->obss_color_bitmap);
ieee80211_obss_color_collision_notify(arvif->vif, ev->obss_color_bitmap,
0);
ath11k_dbg(ab, ATH11K_DBG_WMI,
"OBSS color collision detected vdev:%d, event:%d, bitmap:%08llx\n",
ev->vdev_id, ev->evt_type, ev->obss_color_bitmap);
@ -8650,30 +8651,27 @@ exit:
kfree(tb);
}
static int ath11k_wmi_p2p_noa_event(struct ath11k_base *ab,
struct sk_buff *skb)
static void ath11k_wmi_p2p_noa_event(struct ath11k_base *ab,
struct sk_buff *skb)
{
const void **tb;
const struct wmi_p2p_noa_event *ev;
const struct ath11k_wmi_p2p_noa_info *noa;
struct ath11k *ar;
int ret, vdev_id;
int vdev_id;
u8 noa_descriptors;
tb = ath11k_wmi_tlv_parse_alloc(ab, skb, GFP_ATOMIC);
if (IS_ERR(tb)) {
ret = PTR_ERR(tb);
ath11k_warn(ab, "failed to parse tlv: %d\n", ret);
return ret;
ath11k_warn(ab, "failed to parse tlv: %ld\n", PTR_ERR(tb));
return;
}
ev = tb[WMI_TAG_P2P_NOA_EVENT];
noa = tb[WMI_TAG_P2P_NOA_INFO];
if (!ev || !noa) {
ret = -EPROTO;
if (!ev || !noa)
goto out;
}
vdev_id = ev->vdev_id;
noa_descriptors = u32_get_bits(noa->noa_attr,
@ -8682,7 +8680,6 @@ static int ath11k_wmi_p2p_noa_event(struct ath11k_base *ab,
if (noa_descriptors > WMI_P2P_MAX_NOA_DESCRIPTORS) {
ath11k_warn(ab, "invalid descriptor num %d in P2P NoA event\n",
noa_descriptors);
return -EINVAL;
goto out;
}
@ -8695,7 +8692,6 @@ static int ath11k_wmi_p2p_noa_event(struct ath11k_base *ab,
if (!ar) {
ath11k_warn(ab, "invalid vdev id %d in P2P NoA event\n",
vdev_id);
ret = -EINVAL;
goto unlock;
}
@ -8705,7 +8701,6 @@ unlock:
rcu_read_unlock();
out:
kfree(tb);
return 0;
}
static void ath11k_wmi_tlv_op_rx(struct ath11k_base *ab, struct sk_buff *skb)


@ -24,6 +24,15 @@ config ATH12K_DEBUG
If unsure, say Y to make it easier to debug problems. But if
you want optimal performance choose N.
config ATH12K_DEBUGFS
bool "QTI ath12k debugfs support"
depends on ATH12K && MAC80211_DEBUGFS
help
Enable ath12k debugfs support
If unsure, say Y to make it easier to debug problems. But if
you want optimal performance choose N.
config ATH12K_TRACING
bool "ath12k tracing support"
depends on ATH12K && EVENT_TRACING


@ -23,6 +23,8 @@ ath12k-y += core.o \
fw.o \
p2p.o
ath12k-$(CONFIG_ATH12K_DEBUGFS) += debugfs.o
ath12k-$(CONFIG_ACPI) += acpi.o
ath12k-$(CONFIG_ATH12K_TRACING) += trace.o
# for tracing framework to find trace.h


@ -0,0 +1,394 @@
// SPDX-License-Identifier: BSD-3-Clause-Clear
/*
* Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#include "core.h"
#include "acpi.h"
#include "debug.h"
static int ath12k_acpi_dsm_get_data(struct ath12k_base *ab, int func)
{
union acpi_object *obj;
acpi_handle root_handle;
int ret;
root_handle = ACPI_HANDLE(ab->dev);
if (!root_handle) {
ath12k_dbg(ab, ATH12K_DBG_BOOT, "invalid acpi handler\n");
return -EOPNOTSUPP;
}
obj = acpi_evaluate_dsm(root_handle, ab->hw_params->acpi_guid, 0, func,
NULL);
if (!obj) {
ath12k_dbg(ab, ATH12K_DBG_BOOT, "acpi_evaluate_dsm() failed\n");
return -ENOENT;
}
if (obj->type == ACPI_TYPE_INTEGER) {
ab->acpi.func_bit = obj->integer.value;
} else if (obj->type == ACPI_TYPE_BUFFER) {
switch (func) {
case ATH12K_ACPI_DSM_FUNC_TAS_CFG:
if (obj->buffer.length != ATH12K_ACPI_DSM_TAS_CFG_SIZE) {
ath12k_warn(ab, "invalid ACPI DSM TAS config size: %d\n",
obj->buffer.length);
ret = -EINVAL;
goto out;
}
memcpy(&ab->acpi.tas_cfg, obj->buffer.pointer,
obj->buffer.length);
break;
case ATH12K_ACPI_DSM_FUNC_TAS_DATA:
if (obj->buffer.length != ATH12K_ACPI_DSM_TAS_DATA_SIZE) {
ath12k_warn(ab, "invalid ACPI DSM TAS data size: %d\n",
obj->buffer.length);
ret = -EINVAL;
goto out;
}
memcpy(&ab->acpi.tas_sar_power_table, obj->buffer.pointer,
obj->buffer.length);
break;
case ATH12K_ACPI_DSM_FUNC_BIOS_SAR:
if (obj->buffer.length != ATH12K_ACPI_DSM_BIOS_SAR_DATA_SIZE) {
ath12k_warn(ab, "invalid ACPI BIOS SAR data size: %d\n",
obj->buffer.length);
ret = -EINVAL;
goto out;
}
memcpy(&ab->acpi.bios_sar_data, obj->buffer.pointer,
obj->buffer.length);
break;
case ATH12K_ACPI_DSM_FUNC_GEO_OFFSET:
if (obj->buffer.length != ATH12K_ACPI_DSM_GEO_OFFSET_DATA_SIZE) {
ath12k_warn(ab, "invalid ACPI GEO OFFSET data size: %d\n",
obj->buffer.length);
ret = -EINVAL;
goto out;
}
memcpy(&ab->acpi.geo_offset_data, obj->buffer.pointer,
obj->buffer.length);
break;
case ATH12K_ACPI_DSM_FUNC_INDEX_CCA:
if (obj->buffer.length != ATH12K_ACPI_DSM_CCA_DATA_SIZE) {
ath12k_warn(ab, "invalid ACPI DSM CCA data size: %d\n",
obj->buffer.length);
ret = -EINVAL;
goto out;
}
memcpy(&ab->acpi.cca_data, obj->buffer.pointer,
obj->buffer.length);
break;
case ATH12K_ACPI_DSM_FUNC_INDEX_BAND_EDGE:
if (obj->buffer.length != ATH12K_ACPI_DSM_BAND_EDGE_DATA_SIZE) {
ath12k_warn(ab, "invalid ACPI DSM band edge data size: %d\n",
obj->buffer.length);
ret = -EINVAL;
goto out;
}
memcpy(&ab->acpi.band_edge_power, obj->buffer.pointer,
obj->buffer.length);
break;
}
} else {
ath12k_warn(ab, "ACPI DSM method returned an unsupported object type: %d\n",
obj->type);
ret = -EINVAL;
goto out;
}
ret = 0;
out:
ACPI_FREE(obj);
return ret;
}
static int ath12k_acpi_set_power_limit(struct ath12k_base *ab)
{
const u8 *tas_sar_power_table = ab->acpi.tas_sar_power_table;
int ret;
if (tas_sar_power_table[0] != ATH12K_ACPI_TAS_DATA_VERSION ||
tas_sar_power_table[1] != ATH12K_ACPI_TAS_DATA_ENABLE) {
ath12k_warn(ab, "latest ACPI TAS data is invalid\n");
return -EINVAL;
}
ret = ath12k_wmi_set_bios_cmd(ab, WMI_BIOS_PARAM_TAS_DATA_TYPE,
tas_sar_power_table,
ATH12K_ACPI_DSM_TAS_DATA_SIZE);
if (ret) {
ath12k_warn(ab, "failed to send ACPI TAS data table: %d\n", ret);
return ret;
}
return ret;
}
static int ath12k_acpi_set_bios_sar_power(struct ath12k_base *ab)
{
int ret;
if (ab->acpi.bios_sar_data[0] != ATH12K_ACPI_POWER_LIMIT_VERSION ||
ab->acpi.bios_sar_data[1] != ATH12K_ACPI_POWER_LIMIT_ENABLE_FLAG) {
ath12k_warn(ab, "invalid latest ACPI BIOS SAR data\n");
return -EINVAL;
}
ret = ath12k_wmi_set_bios_sar_cmd(ab, ab->acpi.bios_sar_data);
if (ret) {
ath12k_warn(ab, "failed to set ACPI BIOS SAR table: %d\n", ret);
return ret;
}
return 0;
}
static void ath12k_acpi_dsm_notify(acpi_handle handle, u32 event, void *data)
{
int ret;
struct ath12k_base *ab = data;
if (event == ATH12K_ACPI_NOTIFY_EVENT) {
ath12k_warn(ab, "unknown acpi notify %u\n", event);
return;
}
if (!ab->acpi.acpi_tas_enable) {
ath12k_dbg(ab, ATH12K_DBG_BOOT, "acpi_tas_enable is false\n");
return;
}
ret = ath12k_acpi_dsm_get_data(ab, ATH12K_ACPI_DSM_FUNC_TAS_DATA);
if (ret) {
ath12k_warn(ab, "failed to update ACPI TAS data table: %d\n", ret);
return;
}
ret = ath12k_acpi_set_power_limit(ab);
if (ret) {
ath12k_warn(ab, "failed to set ACPI TAS power limit data: %d", ret);
return;
}
if (!ab->acpi.acpi_bios_sar_enable)
return;
ret = ath12k_acpi_dsm_get_data(ab, ATH12K_ACPI_DSM_FUNC_BIOS_SAR);
if (ret) {
ath12k_warn(ab, "failed to update BIOS SAR: %d\n", ret);
return;
}
ret = ath12k_acpi_set_bios_sar_power(ab);
if (ret) {
ath12k_warn(ab, "failed to set BIOS SAR power limit: %d\n", ret);
return;
}
}
static int ath12k_acpi_set_bios_sar_params(struct ath12k_base *ab)
{
int ret;
ret = ath12k_wmi_set_bios_sar_cmd(ab, ab->acpi.bios_sar_data);
if (ret) {
ath12k_warn(ab, "failed to set ACPI BIOS SAR table: %d\n", ret);
return ret;
}
ret = ath12k_wmi_set_bios_geo_cmd(ab, ab->acpi.geo_offset_data);
if (ret) {
ath12k_warn(ab, "failed to set ACPI BIOS GEO table: %d\n", ret);
return ret;
}
return 0;
}
static int ath12k_acpi_set_tas_params(struct ath12k_base *ab)
{
int ret;
ret = ath12k_wmi_set_bios_cmd(ab, WMI_BIOS_PARAM_TAS_CONFIG_TYPE,
ab->acpi.tas_cfg,
ATH12K_ACPI_DSM_TAS_CFG_SIZE);
if (ret) {
ath12k_warn(ab, "failed to send ACPI TAS config table parameter: %d\n",
ret);
return ret;
}
ret = ath12k_wmi_set_bios_cmd(ab, WMI_BIOS_PARAM_TAS_DATA_TYPE,
ab->acpi.tas_sar_power_table,
ATH12K_ACPI_DSM_TAS_DATA_SIZE);
if (ret) {
ath12k_warn(ab, "failed to send ACPI TAS data table parameter: %d\n",
ret);
return ret;
}
return 0;
}
int ath12k_acpi_start(struct ath12k_base *ab)
{
acpi_status status;
u8 *buf;
int ret;
if (!ab->hw_params->acpi_guid)
/* not supported with this hardware */
return 0;
ab->acpi.acpi_tas_enable = false;
ret = ath12k_acpi_dsm_get_data(ab, ATH12K_ACPI_DSM_FUNC_SUPPORT_FUNCS);
if (ret) {
ath12k_dbg(ab, ATH12K_DBG_BOOT, "failed to get ACPI DSM data: %d\n", ret);
return ret;
}
if (ATH12K_ACPI_FUNC_BIT_VALID(ab->acpi, ATH12K_ACPI_FUNC_BIT_TAS_CFG)) {
ret = ath12k_acpi_dsm_get_data(ab, ATH12K_ACPI_DSM_FUNC_TAS_CFG);
if (ret) {
ath12k_warn(ab, "failed to get ACPI TAS config table: %d\n", ret);
return ret;
}
}
if (ATH12K_ACPI_FUNC_BIT_VALID(ab->acpi, ATH12K_ACPI_FUNC_BIT_TAS_DATA)) {
ret = ath12k_acpi_dsm_get_data(ab, ATH12K_ACPI_DSM_FUNC_TAS_DATA);
if (ret) {
ath12k_warn(ab, "failed to get ACPI TAS data table: %d\n", ret);
return ret;
}
if (ATH12K_ACPI_FUNC_BIT_VALID(ab->acpi, ATH12K_ACPI_FUNC_BIT_TAS_CFG) &&
ab->acpi.tas_sar_power_table[0] == ATH12K_ACPI_TAS_DATA_VERSION &&
ab->acpi.tas_sar_power_table[1] == ATH12K_ACPI_TAS_DATA_ENABLE)
ab->acpi.acpi_tas_enable = true;
}
if (ATH12K_ACPI_FUNC_BIT_VALID(ab->acpi, ATH12K_ACPI_FUNC_BIT_BIOS_SAR)) {
ret = ath12k_acpi_dsm_get_data(ab, ATH12K_ACPI_DSM_FUNC_BIOS_SAR);
if (ret) {
ath12k_warn(ab, "failed to get ACPI bios sar data: %d\n", ret);
return ret;
}
}
if (ATH12K_ACPI_FUNC_BIT_VALID(ab->acpi, ATH12K_ACPI_FUNC_BIT_GEO_OFFSET)) {
ret = ath12k_acpi_dsm_get_data(ab, ATH12K_ACPI_DSM_FUNC_GEO_OFFSET);
if (ret) {
ath12k_warn(ab, "failed to get ACPI geo offset data: %d\n", ret);
return ret;
}
if (ATH12K_ACPI_FUNC_BIT_VALID(ab->acpi, ATH12K_ACPI_FUNC_BIT_BIOS_SAR) &&
ab->acpi.bios_sar_data[0] == ATH12K_ACPI_POWER_LIMIT_VERSION &&
ab->acpi.bios_sar_data[1] == ATH12K_ACPI_POWER_LIMIT_ENABLE_FLAG &&
!ab->acpi.acpi_tas_enable)
ab->acpi.acpi_bios_sar_enable = true;
}
if (ab->acpi.acpi_tas_enable) {
ret = ath12k_acpi_set_tas_params(ab);
if (ret) {
ath12k_warn(ab, "failed to send ACPI parameters: %d\n", ret);
return ret;
}
}
if (ab->acpi.acpi_bios_sar_enable) {
ret = ath12k_acpi_set_bios_sar_params(ab);
if (ret)
return ret;
}
if (ATH12K_ACPI_FUNC_BIT_VALID(ab->acpi, ATH12K_ACPI_FUNC_BIT_CCA)) {
ret = ath12k_acpi_dsm_get_data(ab, ATH12K_ACPI_DSM_FUNC_INDEX_CCA);
if (ret) {
ath12k_warn(ab, "failed to get ACPI DSM CCA threshold configuration: %d\n",
ret);
return ret;
}
if (ab->acpi.cca_data[0] == ATH12K_ACPI_CCA_THR_VERSION &&
ab->acpi.cca_data[ATH12K_ACPI_CCA_THR_OFFSET_DATA_OFFSET] ==
ATH12K_ACPI_CCA_THR_ENABLE_FLAG) {
buf = ab->acpi.cca_data + ATH12K_ACPI_CCA_THR_OFFSET_DATA_OFFSET;
ret = ath12k_wmi_set_bios_cmd(ab,
WMI_BIOS_PARAM_CCA_THRESHOLD_TYPE,
buf,
ATH12K_ACPI_CCA_THR_OFFSET_LEN);
if (ret) {
ath12k_warn(ab, "failed to set ACPI DSM CCA threshold: %d\n",
ret);
return ret;
}
}
}
if (ATH12K_ACPI_FUNC_BIT_VALID(ab->acpi,
ATH12K_ACPI_FUNC_BIT_BAND_EDGE_CHAN_POWER)) {
ret = ath12k_acpi_dsm_get_data(ab, ATH12K_ACPI_DSM_FUNC_INDEX_BAND_EDGE);
if (ret) {
ath12k_warn(ab, "failed to get ACPI DSM band edge channel power: %d\n",
ret);
return ret;
}
if (ab->acpi.band_edge_power[0] == ATH12K_ACPI_BAND_EDGE_VERSION &&
ab->acpi.band_edge_power[1] == ATH12K_ACPI_BAND_EDGE_ENABLE_FLAG) {
ret = ath12k_wmi_set_bios_cmd(ab,
WMI_BIOS_PARAM_TYPE_BANDEDGE,
ab->acpi.band_edge_power,
sizeof(ab->acpi.band_edge_power));
if (ret) {
ath12k_warn(ab,
"failed to set ACPI DSM band edge channel power: %d\n",
ret);
return ret;
}
}
}
status = acpi_install_notify_handler(ACPI_HANDLE(ab->dev),
ACPI_DEVICE_NOTIFY,
ath12k_acpi_dsm_notify, ab);
if (ACPI_FAILURE(status)) {
ath12k_warn(ab, "failed to install DSM notify callback: %d\n", status);
return -EIO;
}
ab->acpi.started = true;
return 0;
}
void ath12k_acpi_stop(struct ath12k_base *ab)
{
if (!ab->acpi.started)
return;
acpi_remove_notify_handler(ACPI_HANDLE(ab->dev),
ACPI_DEVICE_NOTIFY,
ath12k_acpi_dsm_notify);
}


@ -0,0 +1,76 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef ATH12K_ACPI_H
#define ATH12K_ACPI_H
#include <linux/acpi.h>
#define ATH12K_ACPI_DSM_FUNC_SUPPORT_FUNCS 0
#define ATH12K_ACPI_DSM_FUNC_BIOS_SAR 4
#define ATH12K_ACPI_DSM_FUNC_GEO_OFFSET 5
#define ATH12K_ACPI_DSM_FUNC_INDEX_CCA 6
#define ATH12K_ACPI_DSM_FUNC_TAS_CFG 8
#define ATH12K_ACPI_DSM_FUNC_TAS_DATA 9
#define ATH12K_ACPI_DSM_FUNC_INDEX_BAND_EDGE 10
#define ATH12K_ACPI_FUNC_BIT_BIOS_SAR BIT(3)
#define ATH12K_ACPI_FUNC_BIT_GEO_OFFSET BIT(4)
#define ATH12K_ACPI_FUNC_BIT_CCA BIT(5)
#define ATH12K_ACPI_FUNC_BIT_TAS_CFG BIT(7)
#define ATH12K_ACPI_FUNC_BIT_TAS_DATA BIT(8)
#define ATH12K_ACPI_FUNC_BIT_BAND_EDGE_CHAN_POWER BIT(9)
#define ATH12K_ACPI_NOTIFY_EVENT 0x86
#define ATH12K_ACPI_FUNC_BIT_VALID(_acdata, _func) (((_acdata).func_bit) & (_func))
#define ATH12K_ACPI_TAS_DATA_VERSION 0x1
#define ATH12K_ACPI_TAS_DATA_ENABLE 0x1
#define ATH12K_ACPI_POWER_LIMIT_VERSION 0x1
#define ATH12K_ACPI_POWER_LIMIT_ENABLE_FLAG 0x1
#define ATH12K_ACPI_CCA_THR_VERSION 0x1
#define ATH12K_ACPI_CCA_THR_ENABLE_FLAG 0x1
#define ATH12K_ACPI_BAND_EDGE_VERSION 0x1
#define ATH12K_ACPI_BAND_EDGE_ENABLE_FLAG 0x1
#define ATH12K_ACPI_GEO_OFFSET_DATA_OFFSET 1
#define ATH12K_ACPI_DBS_BACKOFF_DATA_OFFSET 2
#define ATH12K_ACPI_CCA_THR_OFFSET_DATA_OFFSET 5
#define ATH12K_ACPI_BIOS_SAR_DBS_BACKOFF_LEN 10
#define ATH12K_ACPI_POWER_LIMIT_DATA_OFFSET 12
#define ATH12K_ACPI_BIOS_SAR_GEO_OFFSET_LEN 18
#define ATH12K_ACPI_BIOS_SAR_TABLE_LEN 22
#define ATH12K_ACPI_CCA_THR_OFFSET_LEN 36
#define ATH12K_ACPI_DSM_TAS_DATA_SIZE 69
#define ATH12K_ACPI_DSM_BAND_EDGE_DATA_SIZE 100
#define ATH12K_ACPI_DSM_TAS_CFG_SIZE 108
#define ATH12K_ACPI_DSM_GEO_OFFSET_DATA_SIZE (ATH12K_ACPI_GEO_OFFSET_DATA_OFFSET + \
ATH12K_ACPI_BIOS_SAR_GEO_OFFSET_LEN)
#define ATH12K_ACPI_DSM_BIOS_SAR_DATA_SIZE (ATH12K_ACPI_POWER_LIMIT_DATA_OFFSET + \
ATH12K_ACPI_BIOS_SAR_TABLE_LEN)
#define ATH12K_ACPI_DSM_CCA_DATA_SIZE (ATH12K_ACPI_CCA_THR_OFFSET_DATA_OFFSET + \
ATH12K_ACPI_CCA_THR_OFFSET_LEN)
#ifdef CONFIG_ACPI
int ath12k_acpi_start(struct ath12k_base *ab);
void ath12k_acpi_stop(struct ath12k_base *ab);
#else
static inline int ath12k_acpi_start(struct ath12k_base *ab)
{
return 0;
}
static inline void ath12k_acpi_stop(struct ath12k_base *ab)
{
}
#endif /* CONFIG_ACPI */
#endif /* ATH12K_ACPI_H */
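
Note the relationship encoded by these constants: DSM function N is
advertised through bit N - 1 of the function-0 bitmask cached in
ab->acpi.func_bit (e.g. TAS config, function 8, is BIT(7)), which
ATH12K_ACPI_FUNC_BIT_VALID() then tests. A hypothetical macro making the
mapping explicit:

/* Sketch only; ath12k spells each bit out as a named constant above. */
#define ATH12K_ACPI_FUNC_BIT(func)	BIT((func) - 1)

/* e.g. ATH12K_ACPI_FUNC_BIT(ATH12K_ACPI_DSM_FUNC_TAS_CFG) == BIT(7)
 *	== ATH12K_ACPI_FUNC_BIT_TAS_CFG
 */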


@ -15,6 +15,7 @@
#include "debug.h"
#include "hif.h"
#include "fw.h"
#include "debugfs.h"
unsigned int ath12k_debug_mask;
module_param_named(debug_mask, ath12k_debug_mask, uint, 0644);
@ -43,67 +44,90 @@ static int ath12k_core_rfkill_config(struct ath12k_base *ab)
int ath12k_core_suspend(struct ath12k_base *ab)
{
int ret;
struct ath12k *ar;
int ret, i;
if (!ab->hw_params->supports_suspend)
return -EOPNOTSUPP;
/* TODO: there can frames in queues so for now add delay as a hack.
* Need to implement to handle and remove this delay.
rcu_read_lock();
for (i = 0; i < ab->num_radios; i++) {
ar = ath12k_mac_get_ar_by_pdev_id(ab, i);
if (!ar)
continue;
ret = ath12k_mac_wait_tx_complete(ar);
if (ret) {
ath12k_warn(ab, "failed to wait tx complete: %d\n", ret);
rcu_read_unlock();
return ret;
}
}
rcu_read_unlock();
/* PM framework skips suspend_late/resume_early callbacks
* if other devices report errors in their suspend callbacks.
* However ath12k_core_resume() would still be called because
* here we return success thus kernel put us on dpm_suspended_list.
* Since we won't go through a power down/up cycle, there is
* no chance to call complete(&ab->restart_completed) in
* ath12k_core_restart(), making ath12k_core_resume() timeout.
* So call it here to avoid this issue. This also works in case
* no error happens thus suspend_late/resume_early get called,
* because it will be reinitialized in ath12k_core_resume_early().
*/
msleep(500);
complete(&ab->restart_completed);
ret = ath12k_dp_rx_pktlog_stop(ab, true);
if (ret) {
ath12k_warn(ab, "failed to stop dp rx (and timer) pktlog during suspend: %d\n",
ret);
return ret;
}
return 0;
}
EXPORT_SYMBOL(ath12k_core_suspend);
ret = ath12k_dp_rx_pktlog_stop(ab, false);
if (ret) {
ath12k_warn(ab, "failed to stop dp rx pktlog during suspend: %d\n",
ret);
return ret;
}
int ath12k_core_suspend_late(struct ath12k_base *ab)
{
if (!ab->hw_params->supports_suspend)
return -EOPNOTSUPP;
ath12k_hif_irq_disable(ab);
ath12k_hif_ce_irq_disable(ab);
ret = ath12k_hif_suspend(ab);
if (ret) {
ath12k_warn(ab, "failed to suspend hif: %d\n", ret);
return ret;
}
ath12k_hif_power_down(ab, true);
return 0;
}
EXPORT_SYMBOL(ath12k_core_suspend_late);
int ath12k_core_resume(struct ath12k_base *ab)
int ath12k_core_resume_early(struct ath12k_base *ab)
{
int ret;
if (!ab->hw_params->supports_suspend)
return -EOPNOTSUPP;
ret = ath12k_hif_resume(ab);
if (ret) {
ath12k_warn(ab, "failed to resume hif during resume: %d\n", ret);
return ret;
}
reinit_completion(&ab->restart_completed);
ret = ath12k_hif_power_up(ab);
if (ret)
ath12k_warn(ab, "failed to power up hif during resume: %d\n", ret);
ath12k_hif_ce_irq_enable(ab);
ath12k_hif_irq_enable(ab);
return ret;
}
EXPORT_SYMBOL(ath12k_core_resume_early);
ret = ath12k_dp_rx_pktlog_start(ab);
if (ret) {
ath12k_warn(ab, "failed to start rx pktlog during resume: %d\n",
ret);
return ret;
int ath12k_core_resume(struct ath12k_base *ab)
{
long time_left;
if (!ab->hw_params->supports_suspend)
return -EOPNOTSUPP;
time_left = wait_for_completion_timeout(&ab->restart_completed,
ATH12K_RESET_TIMEOUT_HZ);
if (time_left == 0) {
ath12k_warn(ab, "timeout while waiting for restart complete");
return -ETIMEDOUT;
}
return 0;
}
EXPORT_SYMBOL(ath12k_core_resume);
static int __ath12k_core_create_board_name(struct ath12k_base *ab, char *name,
size_t name_len, bool with_variant,
@ -542,6 +566,8 @@ static void ath12k_core_stop(struct ath12k_base *ab)
if (!test_bit(ATH12K_FLAG_CRASH_FLUSH, &ab->dev_flags))
ath12k_qmi_firmware_stop(ab);
ath12k_acpi_stop(ab);
ath12k_hif_stop(ab);
ath12k_wmi_detach(ab);
ath12k_dp_rx_pdev_reo_cleanup(ab);
@ -628,6 +654,8 @@ static int ath12k_core_soc_create(struct ath12k_base *ab)
return ret;
}
ath12k_debugfs_soc_create(ab);
ret = ath12k_hif_power_up(ab);
if (ret) {
ath12k_err(ab, "failed to power up :%d\n", ret);
@ -637,6 +665,7 @@ static int ath12k_core_soc_create(struct ath12k_base *ab)
return 0;
err_qmi_deinit:
ath12k_debugfs_soc_destroy(ab);
ath12k_qmi_deinit_service(ab);
return ret;
}
@ -645,6 +674,7 @@ static void ath12k_core_soc_destroy(struct ath12k_base *ab)
{
ath12k_dp_free(ab);
ath12k_reg_free(ab);
ath12k_debugfs_soc_destroy(ab);
ath12k_qmi_deinit_service(ab);
}
@ -779,6 +809,11 @@ static int ath12k_core_start(struct ath12k_base *ab,
goto err_reo_cleanup;
}
ret = ath12k_acpi_start(ab);
if (ret)
/* ACPI is optional so continue in case of an error */
ath12k_dbg(ab, ATH12K_DBG_BOOT, "acpi failed: %d\n", ret);
return 0;
err_reo_cleanup:
@ -874,9 +909,8 @@ static int ath12k_core_reconfigure_on_crash(struct ath12k_base *ab)
int ret;
mutex_lock(&ab->core_lock);
ath12k_hif_irq_disable(ab);
ath12k_dp_pdev_free(ab);
ath12k_hif_stop(ab);
ath12k_ce_cleanup_pipes(ab);
ath12k_wmi_detach(ab);
ath12k_dp_rx_pdev_reo_cleanup(ab);
mutex_unlock(&ab->core_lock);
@ -1052,9 +1086,6 @@ static void ath12k_core_restart(struct work_struct *work)
struct ath12k_base *ab = container_of(work, struct ath12k_base, restart_work);
int ret;
if (!ab->is_reset)
ath12k_core_pre_reconfigure_recovery(ab);
ret = ath12k_core_reconfigure_on_crash(ab);
if (ret) {
ath12k_err(ab, "failed to reconfigure driver on crash recovery\n");
@ -1064,8 +1095,7 @@ static void ath12k_core_restart(struct work_struct *work)
if (ab->is_reset)
complete_all(&ab->reconfigure_complete);
if (!ab->is_reset)
ath12k_core_post_reconfigure_recovery(ab);
complete(&ab->restart_completed);
}
static void ath12k_core_reset(struct work_struct *work)
@ -1131,8 +1161,10 @@ static void ath12k_core_reset(struct work_struct *work)
time_left = wait_for_completion_timeout(&ab->recovery_start,
ATH12K_RECOVER_START_TIMEOUT_HZ);
ath12k_hif_power_down(ab);
ath12k_qmi_free_resource(ab);
ath12k_hif_irq_disable(ab);
ath12k_hif_ce_irq_disable(ab);
ath12k_hif_power_down(ab, false);
ath12k_hif_power_up(ab);
ath12k_dbg(ab, ATH12K_DBG_BOOT, "reset started\n");
@ -1175,7 +1207,7 @@ void ath12k_core_deinit(struct ath12k_base *ab)
mutex_unlock(&ab->core_lock);
ath12k_hif_power_down(ab);
ath12k_hif_power_down(ab, false);
ath12k_mac_destroy(ab);
ath12k_core_soc_destroy(ab);
ath12k_fw_unmap(ab);
@ -1223,11 +1255,12 @@ struct ath12k_base *ath12k_core_alloc(struct device *dev, size_t priv_size,
timer_setup(&ab->rx_replenish_retry, ath12k_ce_rx_replenish_retry, 0);
init_completion(&ab->htc_suspend);
init_completion(&ab->restart_completed);
ab->dev = dev;
ab->hif.bus = bus;
ab->qmi.num_radios = U8_MAX;
ab->slo_capable = true;
ab->mlo_capable_flags = ATH12K_INTRA_DEVICE_MLO_SUPPORT;
return ab;

View File

@ -26,6 +26,7 @@
#include "reg.h"
#include "dbring.h"
#include "fw.h"
#include "acpi.h"
#define SM(_v, _f) (((_v) << _f##_LSB) & _f##_MASK)
@ -46,6 +47,7 @@
#define ATH12K_SMBIOS_BDF_EXT_MAGIC "BDF_"
#define ATH12K_INVALID_HW_MAC_ID 0xFF
#define ATH12K_CONNECTION_LOSS_HZ (3 * HZ)
#define ATH12K_RX_RATE_TABLE_NUM 320
#define ATH12K_RX_RATE_TABLE_11AX_NUM 576
@ -214,6 +216,24 @@ enum ath12k_monitor_flags {
ATH12K_FLAG_MONITOR_ENABLED,
};
struct ath12k_tx_conf {
bool changed;
u16 ac;
struct ieee80211_tx_queue_params tx_queue_params;
};
struct ath12k_key_conf {
bool changed;
enum set_key_cmd cmd;
struct ieee80211_key_conf *key;
};
struct ath12k_vif_cache {
struct ath12k_tx_conf tx_conf;
struct ath12k_key_conf key_conf;
u32 bss_conf_changed;
};
struct ath12k_vif {
u32 vdev_id;
enum wmi_vdev_type vdev_type;
@ -251,11 +271,13 @@ struct ath12k_vif {
} ap;
} u;
bool is_created;
bool is_started;
bool is_up;
u32 aid;
u8 bssid[ETH_ALEN];
struct cfg80211_bitrate_mask bitrate_mask;
struct delayed_work connection_loss_work;
int num_legacy_stations;
int rtscts_prot_mode;
int txpower;
@ -267,10 +289,12 @@ struct ath12k_vif {
u8 vdev_stats_id;
u32 punct_bitmap;
bool ps;
struct ath12k_vif_cache *cache;
};
struct ath12k_vif_iter {
u32 vdev_id;
struct ath12k *ar;
struct ath12k_vif *arvif;
};
@ -453,6 +477,10 @@ struct ath12k_fw_stats {
struct list_head bcn;
};
struct ath12k_debug {
struct dentry *debugfs_pdev;
};
struct ath12k_per_peer_tx_stats {
u32 succ_bytes;
u32 retry_bytes;
@ -592,16 +620,24 @@ struct ath12k {
struct ath12k_per_peer_tx_stats cached_stats;
u32 last_ppdu_id;
u32 cached_ppdu_id;
#ifdef CONFIG_ATH12K_DEBUGFS
struct ath12k_debug debug;
#endif
bool dfs_block_radar_events;
bool monitor_conf_enabled;
bool monitor_vdev_created;
bool monitor_started;
int monitor_vdev_id;
u32 freq_low;
u32 freq_high;
};
struct ath12k_hw {
struct ieee80211_hw *hw;
bool regd_updated;
bool use_6ghz_regd;
u8 num_radio;
struct ath12k radio[] __aligned(sizeof(void *));
@ -688,6 +724,21 @@ struct ath12k_soc_dp_stats {
struct ath12k_soc_dp_tx_err_stats tx_err;
};
/**
* enum ath12k_link_capable_flags - link capable flags
*
* Single/Multi link capability information
*
* @ATH12K_INTRA_DEVICE_MLO_SUPPORT: SLO/MLO formed between radios where all
* the links (radios) are present within a single device.
* @ATH12K_INTER_DEVICE_MLO_SUPPORT: SLO/MLO formed between radios where the
* links (radios) are present across devices.
*/
enum ath12k_link_capable_flags {
ATH12K_INTRA_DEVICE_MLO_SUPPORT = BIT(0),
ATH12K_INTER_DEVICE_MLO_SUPPORT = BIT(1),
};
/* Master structure to hold the hw data which may be used in core module */
struct ath12k_base {
enum ath12k_hw_rev hw_rev;
@ -782,6 +833,9 @@ struct ath12k_base {
/* Current DFS Regulatory */
enum ath12k_dfs_region dfs_region;
struct ath12k_soc_dp_stats soc_stats;
#ifdef CONFIG_ATH12K_DEBUGFS
struct dentry *debugfs_soc;
#endif
unsigned long dev_flags;
struct completion driver_recovery;
@ -843,10 +897,31 @@ struct ath12k_base {
const struct hal_rx_ops *hal_rx_ops;
/* slo_capable denotes if the single/multi link operation
* is supported within the same chip (SoC).
/* mlo_capable_flags denotes the single/multi link operation
* capabilities of the Device.
*
* See enum ath12k_link_capable_flags
*/
bool slo_capable;
u8 mlo_capable_flags;
struct completion restart_completed;
#ifdef CONFIG_ACPI
struct {
bool started;
u32 func_bit;
bool acpi_tas_enable;
bool acpi_bios_sar_enable;
u8 tas_cfg[ATH12K_ACPI_DSM_TAS_CFG_SIZE];
u8 tas_sar_power_table[ATH12K_ACPI_DSM_TAS_DATA_SIZE];
u8 bios_sar_data[ATH12K_ACPI_DSM_BIOS_SAR_DATA_SIZE];
u8 geo_offset_data[ATH12K_ACPI_DSM_GEO_OFFSET_DATA_SIZE];
u8 cca_data[ATH12K_ACPI_DSM_CCA_DATA_SIZE];
u8 band_edge_power[ATH12K_ACPI_DSM_BAND_EDGE_DATA_SIZE];
} acpi;
#endif /* CONFIG_ACPI */
/* must be last */
u8 drv_priv[] __aligned(sizeof(void *));
@ -874,8 +949,10 @@ int ath12k_core_fetch_regdb(struct ath12k_base *ab, struct ath12k_board_data *bd
int ath12k_core_check_dt(struct ath12k_base *ath12k);
int ath12k_core_check_smbios(struct ath12k_base *ab);
void ath12k_core_halt(struct ath12k *ar);
int ath12k_core_resume_early(struct ath12k_base *ab);
int ath12k_core_resume(struct ath12k_base *ab);
int ath12k_core_suspend(struct ath12k_base *ab);
int ath12k_core_suspend_late(struct ath12k_base *ab);
const struct firmware *ath12k_core_firmware_request(struct ath12k_base *ab,
const char *filename);
@ -951,13 +1028,21 @@ static inline struct ath12k_hw *ath12k_hw_to_ah(struct ieee80211_hw *hw)
return hw->priv;
}
static inline struct ath12k *ath12k_ah_to_ar(struct ath12k_hw *ah)
static inline struct ath12k *ath12k_ah_to_ar(struct ath12k_hw *ah, u8 hw_link_id)
{
return ah->radio;
if (WARN(hw_link_id >= ah->num_radio,
"bad hw link id %d, so switch to default link\n", hw_link_id))
hw_link_id = 0;
return &ah->radio[hw_link_id];
}
static inline struct ieee80211_hw *ath12k_ar_to_hw(struct ath12k *ar)
{
return ar->ah->hw;
}
#define for_each_ar(ah, ar, index) \
for ((index) = 0; ((index) < (ah)->num_radio && \
((ar) = &(ah)->radio[(index)])); (index)++)
#endif /* _CORE_H_ */
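
The hw_link_id parameter added to ath12k_ah_to_ar() and the new for_each_ar() iterator are the plumbing for multi-link support: one ath12k_hw can now front several radios. A minimal sketch of how a caller might walk them (the function name is illustrative, not from this diff):

static void ath12k_example_stop_monitors(struct ath12k_hw *ah)
{
	struct ath12k *ar;
	int i;

	/* visit every radio backing this ieee80211_hw */
	for_each_ar(ah, ar, i)
		ar->monitor_started = false;

	/* an out-of-range link ID WARNs and falls back to radio 0 */
	ar = ath12k_ah_to_ar(ah, 0);
}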


@ -0,0 +1,90 @@
// SPDX-License-Identifier: BSD-3-Clause-Clear
/*
* Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#include "core.h"
#include "debugfs.h"
static ssize_t ath12k_write_simulate_radar(struct file *file,
const char __user *user_buf,
size_t count, loff_t *ppos)
{
struct ath12k *ar = file->private_data;
int ret;
mutex_lock(&ar->conf_mutex);
ret = ath12k_wmi_simulate_radar(ar);
if (ret)
goto exit;
ret = count;
exit:
mutex_unlock(&ar->conf_mutex);
return ret;
}
static const struct file_operations fops_simulate_radar = {
.write = ath12k_write_simulate_radar,
.open = simple_open
};
void ath12k_debugfs_soc_create(struct ath12k_base *ab)
{
bool dput_needed;
char soc_name[64] = { 0 };
struct dentry *debugfs_ath12k;
debugfs_ath12k = debugfs_lookup("ath12k", NULL);
if (debugfs_ath12k) {
/* a dentry from lookup() needs dput() once we are done with it */
dput_needed = true;
} else {
debugfs_ath12k = debugfs_create_dir("ath12k", NULL);
if (IS_ERR_OR_NULL(debugfs_ath12k))
return;
dput_needed = false;
}
scnprintf(soc_name, sizeof(soc_name), "%s-%s", ath12k_bus_str(ab->hif.bus),
dev_name(ab->dev));
ab->debugfs_soc = debugfs_create_dir(soc_name, debugfs_ath12k);
if (dput_needed)
dput(debugfs_ath12k);
}
void ath12k_debugfs_soc_destroy(struct ath12k_base *ab)
{
debugfs_remove_recursive(ab->debugfs_soc);
ab->debugfs_soc = NULL;
/* We are not removing the ath12k directory on purpose, even if it
* would be empty. This simplifies the directory handling and it's
* a minor cosmetic issue to leave an empty ath12k directory in
* debugfs.
*/
}
void ath12k_debugfs_register(struct ath12k *ar)
{
struct ath12k_base *ab = ar->ab;
struct ieee80211_hw *hw = ar->ah->hw;
char pdev_name[5];
char buf[100] = {0};
scnprintf(pdev_name, sizeof(pdev_name), "%s%d", "mac", ar->pdev_idx);
ar->debug.debugfs_pdev = debugfs_create_dir(pdev_name, ab->debugfs_soc);
/* Create a symlink under ieee80211/phy* */
scnprintf(buf, sizeof(buf), "../../ath12k/%pd2", ar->debug.debugfs_pdev);
debugfs_create_symlink("ath12k", hw->wiphy->debugfsdir, buf);
if (ar->mac.sbands[NL80211_BAND_5GHZ].channels) {
debugfs_create_file("dfs_simulate_radar", 0200,
ar->debug.debugfs_pdev, ar,
&fops_simulate_radar);
}
}
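
For reference, with a hypothetical PCI device at 0000:06:00.0 the two helpers above would produce roughly the following entries (device and phy names are assumptions, not taken from this diff):

/* resulting debugfs layout (hypothetical names):
 *
 *   /sys/kernel/debug/ath12k/pci-0000:06:00.0/mac0/dfs_simulate_radar
 *   /sys/kernel/debug/ieee80211/phy0/ath12k
 *       -> ../../ath12k/pci-0000:06:00.0/mac0
 */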


@ -0,0 +1,30 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef _ATH12K_DEBUGFS_H_
#define _ATH12K_DEBUGFS_H_
#ifdef CONFIG_ATH12K_DEBUGFS
void ath12k_debugfs_soc_create(struct ath12k_base *ab);
void ath12k_debugfs_soc_destroy(struct ath12k_base *ab);
void ath12k_debugfs_register(struct ath12k *ar);
#else
static inline void ath12k_debugfs_soc_create(struct ath12k_base *ab)
{
}
static inline void ath12k_debugfs_soc_destroy(struct ath12k_base *ab)
{
}
static inline void ath12k_debugfs_register(struct ath12k *ar)
{
}
#endif /* CONFIG_ATH12K_DEBUGFS */
#endif /* _ATH12K_DEBUGFS_H_ */


@ -14,6 +14,11 @@
#include "peer.h"
#include "dp_mon.h"
enum ath12k_dp_desc_type {
ATH12K_DP_TX_DESC,
ATH12K_DP_RX_DESC,
};
static void ath12k_dp_htt_htc_tx_complete(struct ath12k_base *ab,
struct sk_buff *skb)
{
@ -1150,7 +1155,9 @@ static void ath12k_dp_cc_cleanup(struct ath12k_base *ab)
struct ath12k_rx_desc_info *desc_info;
struct ath12k_tx_desc_info *tx_desc_info, *tmp1;
struct ath12k_dp *dp = &ab->dp;
struct ath12k_skb_cb *skb_cb;
struct sk_buff *skb;
struct ath12k *ar;
int i, j;
u32 pool_id, tx_spt_page;
@ -1201,6 +1208,11 @@ static void ath12k_dp_cc_cleanup(struct ath12k_base *ab)
if (!skb)
continue;
skb_cb = ATH12K_SKB_CB(skb);
ar = skb_cb->ar;
if (atomic_dec_and_test(&ar->dp.num_tx_pending))
wake_up(&ar->dp.tx_empty_waitq);
dma_unmap_single(ab->dev, ATH12K_SKB_CB(skb)->paddr,
skb->len, DMA_TO_DEVICE);
dev_kfree_skb_any(skb);
@ -1344,12 +1356,16 @@ struct ath12k_rx_desc_info *ath12k_dp_get_rx_desc(struct ath12k_base *ab,
u32 cookie)
{
struct ath12k_rx_desc_info **desc_addr_ptr;
u16 ppt_idx, spt_idx;
u16 start_ppt_idx, end_ppt_idx, ppt_idx, spt_idx;
ppt_idx = u32_get_bits(cookie, ATH12K_DP_CC_COOKIE_PPT);
spt_idx = u32_get_bits(cookie, ATH12k_DP_CC_COOKIE_SPT);
spt_idx = u32_get_bits(cookie, ATH12K_DP_CC_COOKIE_SPT);
if (ppt_idx > ATH12K_NUM_RX_SPT_PAGES ||
start_ppt_idx = ATH12K_RX_SPT_PAGE_OFFSET;
end_ppt_idx = start_ppt_idx + ATH12K_NUM_RX_SPT_PAGES;
if (ppt_idx < start_ppt_idx ||
ppt_idx >= end_ppt_idx ||
spt_idx > ATH12K_MAX_SPT_ENTRIES)
return NULL;
@ -1362,13 +1378,17 @@ struct ath12k_tx_desc_info *ath12k_dp_get_tx_desc(struct ath12k_base *ab,
u32 cookie)
{
struct ath12k_tx_desc_info **desc_addr_ptr;
u16 ppt_idx, spt_idx;
u16 start_ppt_idx, end_ppt_idx, ppt_idx, spt_idx;
ppt_idx = u32_get_bits(cookie, ATH12K_DP_CC_COOKIE_PPT);
spt_idx = u32_get_bits(cookie, ATH12k_DP_CC_COOKIE_SPT);
spt_idx = u32_get_bits(cookie, ATH12K_DP_CC_COOKIE_SPT);
if (ppt_idx < ATH12K_NUM_RX_SPT_PAGES ||
ppt_idx > ab->dp.num_spt_pages ||
start_ppt_idx = ATH12K_TX_SPT_PAGE_OFFSET;
end_ppt_idx = start_ppt_idx +
(ATH12K_TX_SPT_PAGES_PER_POOL * ATH12K_HW_MAX_QUEUES);
if (ppt_idx < start_ppt_idx ||
ppt_idx >= end_ppt_idx ||
spt_idx > ATH12K_MAX_SPT_ENTRIES)
return NULL;
@ -1397,15 +1417,16 @@ static int ath12k_dp_cc_desc_init(struct ath12k_base *ab)
return -ENOMEM;
}
ppt_idx = ATH12K_RX_SPT_PAGE_OFFSET + i;
dp->spt_info->rxbaddr[i] = &rx_descs[0];
for (j = 0; j < ATH12K_MAX_SPT_ENTRIES; j++) {
rx_descs[j].cookie = ath12k_dp_cc_cookie_gen(i, j);
rx_descs[j].cookie = ath12k_dp_cc_cookie_gen(ppt_idx, j);
rx_descs[j].magic = ATH12K_DP_RX_DESC_MAGIC;
list_add_tail(&rx_descs[j].list, &dp->rx_desc_free_list);
/* Update descriptor VA in SPT */
rx_desc_addr = ath12k_dp_cc_get_desc_addr_ptr(ab, i, j);
rx_desc_addr = ath12k_dp_cc_get_desc_addr_ptr(ab, ppt_idx, j);
*rx_desc_addr = &rx_descs[j];
}
}
@ -1425,10 +1446,11 @@ static int ath12k_dp_cc_desc_init(struct ath12k_base *ab)
}
tx_spt_page = i + pool_id * ATH12K_TX_SPT_PAGES_PER_POOL;
ppt_idx = ATH12K_TX_SPT_PAGE_OFFSET + tx_spt_page;
dp->spt_info->txbaddr[tx_spt_page] = &tx_descs[0];
for (j = 0; j < ATH12K_MAX_SPT_ENTRIES; j++) {
ppt_idx = ATH12K_NUM_RX_SPT_PAGES + tx_spt_page;
tx_descs[j].desc_id = ath12k_dp_cc_cookie_gen(ppt_idx, j);
tx_descs[j].pool_id = pool_id;
list_add_tail(&tx_descs[j].list,
@ -1445,11 +1467,41 @@ static int ath12k_dp_cc_desc_init(struct ath12k_base *ab)
return 0;
}
static int ath12k_dp_cmem_init(struct ath12k_base *ab,
struct ath12k_dp *dp,
enum ath12k_dp_desc_type type)
{
u32 cmem_base;
int i, start, end;
cmem_base = ab->qmi.dev_mem[ATH12K_QMI_DEVMEM_CMEM_INDEX].start;
switch (type) {
case ATH12K_DP_TX_DESC:
start = ATH12K_TX_SPT_PAGE_OFFSET;
end = start + ATH12K_NUM_TX_SPT_PAGES;
break;
case ATH12K_DP_RX_DESC:
start = ATH12K_RX_SPT_PAGE_OFFSET;
end = start + ATH12K_NUM_RX_SPT_PAGES;
break;
default:
ath12k_err(ab, "invalid descriptor type %d in cmem init\n", type);
return -EINVAL;
}
/* Write to PPT in CMEM */
for (i = start; i < end; i++)
ath12k_hif_write32(ab, cmem_base + ATH12K_PPT_ADDR_OFFSET(i),
dp->spt_info[i].paddr >> ATH12K_SPT_4K_ALIGN_OFFSET);
return 0;
}
static int ath12k_dp_cc_init(struct ath12k_base *ab)
{
struct ath12k_dp *dp = &ab->dp;
int i, ret = 0;
u32 cmem_base;
INIT_LIST_HEAD(&dp->rx_desc_free_list);
spin_lock_init(&dp->rx_desc_lock);
@ -1472,8 +1524,6 @@ static int ath12k_dp_cc_init(struct ath12k_base *ab)
return -ENOMEM;
}
cmem_base = ab->qmi.dev_mem[ATH12K_QMI_DEVMEM_CMEM_INDEX].start;
for (i = 0; i < dp->num_spt_pages; i++) {
dp->spt_info[i].vaddr = dma_alloc_coherent(ab->dev,
ATH12K_PAGE_SIZE,
@ -1490,10 +1540,18 @@ static int ath12k_dp_cc_init(struct ath12k_base *ab)
ret = -EINVAL;
goto free;
}
}
/* Write to PPT in CMEM */
ath12k_hif_write32(ab, cmem_base + ATH12K_PPT_ADDR_OFFSET(i),
dp->spt_info[i].paddr >> ATH12K_SPT_4K_ALIGN_OFFSET);
ret = ath12k_dp_cmem_init(ab, dp, ATH12K_DP_TX_DESC);
if (ret) {
ath12k_warn(ab, "HW CC Tx cmem init failed %d", ret);
goto free;
}
ret = ath12k_dp_cmem_init(ab, dp, ATH12K_DP_RX_DESC);
if (ret) {
ath12k_warn(ab, "HW CC Rx cmem init failed %d", ret);
goto free;
}
ret = ath12k_dp_cc_desc_init(ab);


@ -223,6 +223,9 @@ struct ath12k_pdev_dp {
#define ATH12K_NUM_TX_SPT_PAGES (ATH12K_TX_SPT_PAGES_PER_POOL * ATH12K_HW_MAX_QUEUES)
#define ATH12K_NUM_SPT_PAGES (ATH12K_NUM_RX_SPT_PAGES + ATH12K_NUM_TX_SPT_PAGES)
#define ATH12K_TX_SPT_PAGE_OFFSET 0
#define ATH12K_RX_SPT_PAGE_OFFSET ATH12K_NUM_TX_SPT_PAGES
/* The SPT pages are divided for RX and TX, first block for RX
* and remaining for TX
*/
@ -245,7 +248,7 @@ struct ath12k_pdev_dp {
#define ATH12K_CC_SPT_MSB 8
#define ATH12K_CC_PPT_MSB 19
#define ATH12K_CC_PPT_SHIFT 9
#define ATH12k_DP_CC_COOKIE_SPT GENMASK(8, 0)
#define ATH12K_DP_CC_COOKIE_SPT GENMASK(8, 0)
#define ATH12K_DP_CC_COOKIE_PPT GENMASK(19, 9)
#define DP_REO_QREF_NUM GENMASK(31, 16)
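
With the TX SPT pages now placed first and the RX pages after them, a converted cookie carries an absolute PPT page index in bits 19:9 and an SPT entry index in bits 8:0, so the RX and TX lookup paths in dp.c can validate against disjoint PPT ranges. A small worked sketch using the masks above (variable names illustrative):

	/* page 12 of the RX range, entry 7 */
	u16 ppt_idx = ATH12K_RX_SPT_PAGE_OFFSET + 12;
	u32 cookie = u32_encode_bits(ppt_idx, ATH12K_DP_CC_COOKIE_PPT) |
		     u32_encode_bits(7, ATH12K_DP_CC_COOKIE_SPT);

	/* decode side, as in ath12k_dp_get_rx_desc(); valid iff
	 * ATH12K_RX_SPT_PAGE_OFFSET <= ppt_idx <
	 * ATH12K_RX_SPT_PAGE_OFFSET + ATH12K_NUM_RX_SPT_PAGES
	 */
	ppt_idx = u32_get_bits(cookie, ATH12K_DP_CC_COOKIE_PPT);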


@ -944,7 +944,7 @@ ath12k_dp_mon_rx_merg_msdus(struct ath12k *ar,
goto err_merge_fail;
ath12k_dbg(ab, ATH12K_DBG_DATA,
"mpdu_buf %pK mpdu_buf->len %u",
"mpdu_buf %p mpdu_buf->len %u",
prev_buf, prev_buf->len);
} else {
ath12k_dbg(ab, ATH12K_DBG_DATA,
@ -958,7 +958,7 @@ ath12k_dp_mon_rx_merg_msdus(struct ath12k *ar,
err_merge_fail:
if (mpdu_buf && decap_format != DP_RX_DECAP_TYPE_RAW) {
ath12k_dbg(ab, ATH12K_DBG_DATA,
"err_merge_fail mpdu_buf %pK", mpdu_buf);
"err_merge_fail mpdu_buf %p", mpdu_buf);
/* Free the head buffer */
dev_kfree_skb_any(mpdu_buf);
}
@ -1092,7 +1092,7 @@ static void ath12k_dp_mon_rx_deliver_msdu(struct ath12k *ar, struct napi_struct
spin_unlock_bh(&ar->ab->base_lock);
ath12k_dbg(ar->ab, ATH12K_DBG_DATA,
"rx skb %pK len %u peer %pM %u %s %s%s%s%s%s%s%s%s %srate_idx %u vht_nss %u freq %u band %u flag 0x%x fcs-err %i mic-err %i amsdu-more %i\n",
"rx skb %p len %u peer %pM %u %s %s%s%s%s%s%s%s%s %srate_idx %u vht_nss %u freq %u band %u flag 0x%x fcs-err %i mic-err %i amsdu-more %i\n",
msdu,
msdu->len,
peer ? peer->addr : NULL,


@ -239,26 +239,12 @@ static inline u8 ath12k_dp_rx_get_msdu_src_link(struct ath12k_base *ab,
return ab->hal_rx_ops->rx_desc_get_msdu_src_link_id(desc);
}
static int ath12k_dp_purge_mon_ring(struct ath12k_base *ab)
static void ath12k_dp_clean_up_skb_list(struct sk_buff_head *skb_list)
{
int i, reaped = 0;
unsigned long timeout = jiffies + msecs_to_jiffies(DP_MON_PURGE_TIMEOUT_MS);
struct sk_buff *skb;
do {
for (i = 0; i < ab->hw_params->num_rxmda_per_pdev; i++)
reaped += ath12k_dp_mon_process_ring(ab, i, NULL,
DP_MON_SERVICE_BUDGET,
ATH12K_DP_RX_MONITOR_MODE);
/* nothing more to reap */
if (reaped < DP_MON_SERVICE_BUDGET)
return 0;
} while (time_before(jiffies, timeout));
ath12k_warn(ab, "dp mon ring purge timeout");
return -ETIMEDOUT;
while ((skb = __skb_dequeue(skb_list)))
dev_kfree_skb_any(skb);
}
static size_t ath12k_dp_list_cut_nodes(struct list_head *list,
@ -2459,7 +2445,7 @@ static void ath12k_dp_rx_deliver_msdu(struct ath12k *ar, struct napi_struct *nap
spin_unlock_bh(&ab->base_lock);
ath12k_dbg(ab, ATH12K_DBG_DATA,
"rx skb %pK len %u peer %pM %d %s sn %u %s%s%s%s%s%s%s%s%s rate_idx %u vht_nss %u freq %u band %u flag 0x%x fcs-err %i mic-err %i amsdu-more %i\n",
"rx skb %p len %u peer %pM %d %s sn %u %s%s%s%s%s%s%s%s%s rate_idx %u vht_nss %u freq %u band %u flag 0x%x fcs-err %i mic-err %i amsdu-more %i\n",
msdu,
msdu->len,
peer ? peer->addr : NULL,
@ -3742,19 +3728,20 @@ int ath12k_dp_rx_process_wbm_err(struct ath12k_base *ab,
struct hal_rx_wbm_rel_info err_info;
struct hal_srng *srng;
struct sk_buff *msdu;
struct sk_buff_head msdu_list;
struct sk_buff_head msdu_list, scatter_msdu_list;
struct ath12k_skb_rxcb *rxcb;
void *rx_desc;
u8 mac_id;
int num_buffs_reaped = 0;
struct ath12k_rx_desc_info *desc_info;
int ret, pdev_id;
struct hal_rx_desc *msdu_data;
__skb_queue_head_init(&msdu_list);
__skb_queue_head_init(&scatter_msdu_list);
srng = &ab->hal.srng_list[dp->rx_rel_ring.ring_id];
rx_ring = &dp->rx_refill_buf_ring;
spin_lock_bh(&srng->lock);
ath12k_hal_srng_access_begin(ab, srng);
@ -3807,17 +3794,53 @@ int ath12k_dp_rx_process_wbm_err(struct ath12k_base *ab,
continue;
}
msdu_data = (struct hal_rx_desc *)msdu->data;
rxcb->err_rel_src = err_info.err_rel_src;
rxcb->err_code = err_info.err_code;
rxcb->rx_desc = (struct hal_rx_desc *)msdu->data;
__skb_queue_tail(&msdu_list, msdu);
rxcb->is_first_msdu = err_info.first_msdu;
rxcb->is_last_msdu = err_info.last_msdu;
rxcb->is_continuation = err_info.continuation;
rxcb->rx_desc = msdu_data;
if (err_info.continuation) {
__skb_queue_tail(&scatter_msdu_list, msdu);
continue;
}
mac_id = ath12k_dp_rx_get_msdu_src_link(ab,
msdu_data);
if (mac_id >= MAX_RADIOS) {
dev_kfree_skb_any(msdu);
/* In case the continuation bit was set
* in the previous record, clean up scatter_msdu_list
*/
ath12k_dp_clean_up_skb_list(&scatter_msdu_list);
continue;
}
if (!skb_queue_empty(&scatter_msdu_list)) {
struct sk_buff *msdu;
skb_queue_walk(&scatter_msdu_list, msdu) {
rxcb = ATH12K_SKB_RXCB(msdu);
rxcb->mac_id = mac_id;
}
skb_queue_splice_tail_init(&scatter_msdu_list,
&msdu_list);
}
rxcb = ATH12K_SKB_RXCB(msdu);
rxcb->mac_id = mac_id;
__skb_queue_tail(&msdu_list, msdu);
}
/* In case the continuation bit was set in the
* last record, clean up scatter_msdu_list
*/
ath12k_dp_clean_up_skb_list(&scatter_msdu_list);
ath12k_hal_srng_access_end(ab, srng);
spin_unlock_bh(&srng->lock);
@ -3830,8 +3853,9 @@ int ath12k_dp_rx_process_wbm_err(struct ath12k_base *ab,
rcu_read_lock();
while ((msdu = __skb_dequeue(&msdu_list))) {
mac_id = ath12k_dp_rx_get_msdu_src_link(ab,
(struct hal_rx_desc *)msdu->data);
rxcb = ATH12K_SKB_RXCB(msdu);
mac_id = rxcb->mac_id;
pdev_id = ath12k_hw_mac_id_to_pdev_id(ab->hw_params, mac_id);
ar = ab->pdevs[pdev_id].ar;
@ -4264,29 +4288,3 @@ int ath12k_dp_rx_pdev_mon_attach(struct ath12k *ar)
return 0;
}
int ath12k_dp_rx_pktlog_start(struct ath12k_base *ab)
{
/* start reap timer */
mod_timer(&ab->mon_reap_timer,
jiffies + msecs_to_jiffies(ATH12K_MON_TIMER_INTERVAL));
return 0;
}
int ath12k_dp_rx_pktlog_stop(struct ath12k_base *ab, bool stop_timer)
{
int ret;
if (stop_timer)
del_timer_sync(&ab->mon_reap_timer);
/* reap all the monitor related rings */
ret = ath12k_dp_purge_mon_ring(ab);
if (ret) {
ath12k_warn(ab, "failed to purge dp mon ring: %d\n", ret);
return ret;
}
return 0;
}
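
The scatter handling added to ath12k_dp_rx_process_wbm_err() leans on the standard sk_buff_head helpers: fragments with the continuation bit set are parked on a private queue, and when the final fragment arrives the whole run is tagged with its source link and spliced onto the delivery list. A condensed restatement of that pattern (variables illustrative):

	struct sk_buff_head scatter, deliver;
	struct sk_buff *skb;
	u8 mac_id = 0;	/* from ath12k_dp_rx_get_msdu_src_link() */

	__skb_queue_head_init(&scatter);
	__skb_queue_head_init(&deliver);

	/* per continuation fragment: __skb_queue_tail(&scatter, frag); */

	/* on the final fragment: */
	skb_queue_walk(&scatter, skb)
		ATH12K_SKB_RXCB(skb)->mac_id = mac_id;
	skb_queue_splice_tail_init(&scatter, &deliver);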


@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved.
* Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef ATH12K_DP_RX_H
#define ATH12K_DP_RX_H
@ -123,8 +123,6 @@ int ath12k_dp_rx_bufs_replenish(struct ath12k_base *ab,
int ath12k_dp_rx_pdev_mon_attach(struct ath12k *ar);
int ath12k_dp_rx_peer_frag_setup(struct ath12k *ar, const u8 *peer_mac, int vdev_id);
int ath12k_dp_rx_pktlog_start(struct ath12k_base *ab);
int ath12k_dp_rx_pktlog_stop(struct ath12k_base *ab, bool stop_timer);
u8 ath12k_dp_rx_h_l3pad(struct ath12k_base *ab,
struct hal_rx_desc *desc);
struct ath12k_peer *


@ -767,7 +767,7 @@ struct hal_srng_config {
};
/**
* enum hal_rx_buf_return_buf_manager
* enum hal_rx_buf_return_buf_manager - manager for returned rx buffers
*
* @HAL_RX_BUF_RBM_WBM_IDLE_BUF_LIST: Buffer returned to WBM idle buffer list
* @HAL_RX_BUF_RBM_WBM_CHIP0_IDLE_DESC_LIST: Descriptor returned to WBM idle


@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2019-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved.
* Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef ATH12K_HIF_H
@ -17,7 +17,7 @@ struct ath12k_hif_ops {
int (*start)(struct ath12k_base *ab);
void (*stop)(struct ath12k_base *ab);
int (*power_up)(struct ath12k_base *ab);
void (*power_down)(struct ath12k_base *ab);
void (*power_down)(struct ath12k_base *ab, bool is_suspend);
int (*suspend)(struct ath12k_base *ab);
int (*resume)(struct ath12k_base *ab);
int (*map_service_to_pipe)(struct ath12k_base *ab, u16 service_id,
@ -133,12 +133,18 @@ static inline void ath12k_hif_write32(struct ath12k_base *ab, u32 address,
static inline int ath12k_hif_power_up(struct ath12k_base *ab)
{
if (!ab->hif.ops->power_up)
return -EOPNOTSUPP;
return ab->hif.ops->power_up(ab);
}
static inline void ath12k_hif_power_down(struct ath12k_base *ab)
static inline void ath12k_hif_power_down(struct ath12k_base *ab, bool is_suspend)
{
ab->hif.ops->power_down(ab);
if (!ab->hif.ops->power_down)
return;
ab->hif.ops->power_down(ab, is_suspend);
}
#endif /* ATH12K_HIF_H */
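
The new is_suspend flag separates two power-down flavours; pci.c forwards it to ath12k_mhi_stop(), which picks between mhi_power_down_keep_dev() and mhi_power_down(). A sketch of the two call paths (not the driver's actual function bodies):

static void example_power_transitions(struct ath12k_base *ab)
{
	/* system sleep: keep the MHI device registered so resume
	 * does not have to re-enumerate it
	 */
	ath12k_hif_power_down(ab, true);

	/* unload/shutdown/recovery: full power-off */
	ath12k_hif_power_down(ab, false);
}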


@ -1,7 +1,7 @@
// SPDX-License-Identifier: BSD-3-Clause-Clear
/*
* Copyright (c) 2018-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
* Copyright (c) 2021-2022, 2024 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#include <linux/skbuff.h>
#include <linux/ctype.h>
@ -358,7 +358,7 @@ void ath12k_htc_rx_completion_handler(struct ath12k_base *ab,
goto out;
}
ath12k_dbg(ab, ATH12K_DBG_HTC, "htc rx completion ep %d skb %pK\n",
ath12k_dbg(ab, ATH12K_DBG_HTC, "htc rx completion ep %d skb %p\n",
eid, skb);
ep->ep_ops.ep_rx_complete(ab, skb);


@ -15,6 +15,10 @@
#include "mhi.h"
#include "dp_rx.h"
static const guid_t wcn7850_uuid = GUID_INIT(0xf634f534, 0x6147, 0x11ec,
0x90, 0xd6, 0x02, 0x42,
0xac, 0x12, 0x00, 0x03);
static u8 ath12k_hw_qcn9274_mac_from_pdev_id(int pdev_idx)
{
return pdev_idx;
@ -920,6 +924,8 @@ static const struct ath12k_hw_params ath12k_hw_params[] = {
.otp_board_id_register = QCN9274_QFPROM_RAW_RFA_PDET_ROW13_LSB,
.supports_sta_ps = false,
.acpi_guid = NULL,
},
{
.name = "wcn7850 hw2.0",
@ -964,7 +970,7 @@ static const struct ath12k_hw_params ath12k_hw_params[] = {
.idle_ps = true,
.download_calib = false,
.supports_suspend = false,
.supports_suspend = true,
.tcl_ring_retry = false,
.reoq_lut_support = false,
.supports_shadow_regs = true,
@ -993,6 +999,8 @@ static const struct ath12k_hw_params ath12k_hw_params[] = {
.otp_board_id_register = 0,
.supports_sta_ps = true,
.acpi_guid = &wcn7850_uuid,
},
{
.name = "qcn9274 hw2.0",
@ -1061,6 +1069,8 @@ static const struct ath12k_hw_params ath12k_hw_params[] = {
.otp_board_id_register = QCN9274_QFPROM_RAW_RFA_PDET_ROW13_LSB,
.supports_sta_ps = false,
.acpi_guid = NULL,
},
};
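
The new acpi_guid field identifies the _DSM interface that the ACPI support added in this series (acpi.c, diff not shown here) evaluates for the TAS/SAR/CCA tables. A hedged sketch of how such a GUID is typically consumed with the kernel's ACPI helpers; the function index and parsing here are assumptions, not the driver's actual acpi.c code:

#include <linux/acpi.h>

static int example_read_dsm(struct ath12k_base *ab)
{
	union acpi_object *obj;

	/* revision 0, hypothetical function index 1, no arguments */
	obj = acpi_evaluate_dsm(ACPI_HANDLE(ab->dev),
				ab->hw_params->acpi_guid, 0, 1, NULL);
	if (!obj)
		return -EIO;

	/* ... copy obj->buffer.pointer into ab->acpi.* here ... */

	ACPI_FREE(obj);
	return 0;
}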


@ -8,6 +8,7 @@
#define ATH12K_HW_H
#include <linux/mhi.h>
#include <linux/uuid.h>
#include "wmi.h"
#include "hal.h"
@ -80,6 +81,7 @@
#define TARGET_RX_PEER_METADATA_VER_V1A 2
#define TARGET_RX_PEER_METADATA_VER_V1B 3
#define ATH12K_HW_DEFAULT_QUEUE 0
#define ATH12K_HW_MAX_QUEUES 4
#define ATH12K_QUEUE_LEN 4096
@ -211,6 +213,8 @@ struct ath12k_hw_params {
u32 otp_board_id_register;
bool supports_sta_ps;
const guid_t *acpi_guid;
};
struct ath12k_hw_ops {

File diff suppressed because it is too large


@ -78,4 +78,8 @@ enum ath12k_supported_bw ath12k_mac_mac80211_bw_to_ath12k_bw(enum rate_info_bw b
enum hal_encrypt_type ath12k_dp_tx_get_encrypt_type(u32 cipher);
int ath12k_mac_rfkill_enable_radio(struct ath12k *ar, bool enable);
int ath12k_mac_rfkill_config(struct ath12k *ar);
int ath12k_mac_wait_tx_complete(struct ath12k *ar);
void ath12k_mac_handle_beacon(struct ath12k *ar, struct sk_buff *skb);
void ath12k_mac_handle_beacon_miss(struct ath12k *ar, u32 vdev_id);
#endif


@ -18,34 +18,6 @@
#define OTP_VALID_DUALMAC_BOARD_ID_MASK 0x1000
static const struct mhi_channel_config ath12k_mhi_channels_qcn9274[] = {
{
.num = 0,
.name = "LOOPBACK",
.num_elements = 32,
.event_ring = 1,
.dir = DMA_TO_DEVICE,
.ee_mask = 0x4,
.pollcfg = 0,
.doorbell = MHI_DB_BRST_DISABLE,
.lpm_notify = false,
.offload_channel = false,
.doorbell_mode_switch = false,
.auto_queue = false,
},
{
.num = 1,
.name = "LOOPBACK",
.num_elements = 32,
.event_ring = 1,
.dir = DMA_FROM_DEVICE,
.ee_mask = 0x4,
.pollcfg = 0,
.doorbell = MHI_DB_BRST_DISABLE,
.lpm_notify = false,
.offload_channel = false,
.doorbell_mode_switch = false,
.auto_queue = false,
},
{
.num = 20,
.name = "IPCR",
@ -111,34 +83,6 @@ const struct mhi_controller_config ath12k_mhi_config_qcn9274 = {
};
static const struct mhi_channel_config ath12k_mhi_channels_wcn7850[] = {
{
.num = 0,
.name = "LOOPBACK",
.num_elements = 32,
.event_ring = 0,
.dir = DMA_TO_DEVICE,
.ee_mask = 0x4,
.pollcfg = 0,
.doorbell = MHI_DB_BRST_DISABLE,
.lpm_notify = false,
.offload_channel = false,
.doorbell_mode_switch = false,
.auto_queue = false,
},
{
.num = 1,
.name = "LOOPBACK",
.num_elements = 32,
.event_ring = 0,
.dir = DMA_FROM_DEVICE,
.ee_mask = 0x4,
.pollcfg = 0,
.doorbell = MHI_DB_BRST_DISABLE,
.lpm_notify = false,
.offload_channel = false,
.doorbell_mode_switch = false,
.auto_queue = false,
},
{
.num = 20,
.name = "IPCR",
@ -196,7 +140,7 @@ const struct mhi_controller_config ath12k_mhi_config_wcn7850 = {
.max_channels = 128,
.timeout_ms = 2000,
.use_bounce_buf = false,
.buf_len = 0,
.buf_len = 8192,
.num_channels = ARRAY_SIZE(ath12k_mhi_channels_wcn7850),
.ch_cfg = ath12k_mhi_channels_wcn7850,
.num_events = ARRAY_SIZE(ath12k_mhi_events_wcn7850),
@ -385,7 +329,6 @@ int ath12k_mhi_register(struct ath12k_pci *ab_pci)
"failed to read board id\n");
} else if (board_id & OTP_VALID_DUALMAC_BOARD_ID_MASK) {
dualmac = true;
ab->slo_capable = false;
ath12k_dbg(ab, ATH12K_DBG_BOOT,
"dualmac fw selected for board id: %x\n", board_id);
}
@ -470,6 +413,8 @@ static char *ath12k_mhi_state_to_str(enum ath12k_mhi_state mhi_state)
return "POWER_ON";
case ATH12K_MHI_POWER_OFF:
return "POWER_OFF";
case ATH12K_MHI_POWER_OFF_KEEP_DEV:
return "POWER_OFF_KEEP_DEV";
case ATH12K_MHI_FORCE_POWER_OFF:
return "FORCE_POWER_OFF";
case ATH12K_MHI_SUSPEND:
@ -501,6 +446,7 @@ static void ath12k_mhi_set_state_bit(struct ath12k_pci *ab_pci,
set_bit(ATH12K_MHI_POWER_ON, &ab_pci->mhi_state);
break;
case ATH12K_MHI_POWER_OFF:
case ATH12K_MHI_POWER_OFF_KEEP_DEV:
case ATH12K_MHI_FORCE_POWER_OFF:
clear_bit(ATH12K_MHI_POWER_ON, &ab_pci->mhi_state);
clear_bit(ATH12K_MHI_TRIGGER_RDDM, &ab_pci->mhi_state);
@ -544,6 +490,7 @@ static int ath12k_mhi_check_state_bit(struct ath12k_pci *ab_pci,
return 0;
break;
case ATH12K_MHI_POWER_OFF:
case ATH12K_MHI_POWER_OFF_KEEP_DEV:
case ATH12K_MHI_SUSPEND:
if (test_bit(ATH12K_MHI_POWER_ON, &ab_pci->mhi_state) &&
!test_bit(ATH12K_MHI_SUSPEND, &ab_pci->mhi_state))
@ -594,12 +541,27 @@ static int ath12k_mhi_set_state(struct ath12k_pci *ab_pci,
ret = 0;
break;
case ATH12K_MHI_POWER_ON:
ret = mhi_async_power_up(ab_pci->mhi_ctrl);
/* In case of resume, QRTR's resume_early() is called
* right after ath12k's resume_early(). Since QRTR requires
* the MHI mission mode state when preparing IPCR channels
* (see ee_mask of that channel), we need to use the 'sync'
* version here to make sure MHI is in that state when we
* return. Otherwise QRTR might resume before that state is
* reached, and as a result fail.
*
* The 'sync' version works for the non-resume (normal power
* on) case as well.
*/
ret = mhi_sync_power_up(ab_pci->mhi_ctrl);
break;
case ATH12K_MHI_POWER_OFF:
mhi_power_down(ab_pci->mhi_ctrl, true);
ret = 0;
break;
case ATH12K_MHI_POWER_OFF_KEEP_DEV:
mhi_power_down_keep_dev(ab_pci->mhi_ctrl, true);
ret = 0;
break;
case ATH12K_MHI_FORCE_POWER_OFF:
mhi_power_down(ab_pci->mhi_ctrl, false);
ret = 0;
@ -653,9 +615,17 @@ out:
return ret;
}
void ath12k_mhi_stop(struct ath12k_pci *ab_pci)
void ath12k_mhi_stop(struct ath12k_pci *ab_pci, bool is_suspend)
{
ath12k_mhi_set_state(ab_pci, ATH12K_MHI_POWER_OFF);
/* During suspend we need to use the mhi_power_down_keep_dev()
* workaround, otherwise ath12k_core_resume() will time out
* during resume.
*/
if (is_suspend)
ath12k_mhi_set_state(ab_pci, ATH12K_MHI_POWER_OFF_KEEP_DEV);
else
ath12k_mhi_set_state(ab_pci, ATH12K_MHI_POWER_OFF);
ath12k_mhi_set_state(ab_pci, ATH12K_MHI_DEINIT);
}


@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause-Clear */
/*
* Copyright (c) 2020-2021 The Linux Foundation. All rights reserved.
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
* Copyright (c) 2021-2022, 2024 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#ifndef _ATH12K_MHI_H
#define _ATH12K_MHI_H
@ -22,6 +22,7 @@ enum ath12k_mhi_state {
ATH12K_MHI_DEINIT,
ATH12K_MHI_POWER_ON,
ATH12K_MHI_POWER_OFF,
ATH12K_MHI_POWER_OFF_KEEP_DEV,
ATH12K_MHI_FORCE_POWER_OFF,
ATH12K_MHI_SUSPEND,
ATH12K_MHI_RESUME,
@ -34,7 +35,7 @@ extern const struct mhi_controller_config ath12k_mhi_config_qcn9274;
extern const struct mhi_controller_config ath12k_mhi_config_wcn7850;
int ath12k_mhi_start(struct ath12k_pci *ar_pci);
void ath12k_mhi_stop(struct ath12k_pci *ar_pci);
void ath12k_mhi_stop(struct ath12k_pci *ar_pci, bool is_suspend);
int ath12k_mhi_register(struct ath12k_pci *ar_pci);
void ath12k_mhi_unregister(struct ath12k_pci *ar_pci);
void ath12k_mhi_set_mhictrl_reset(struct ath12k_base *ab);


@ -121,7 +121,7 @@ static void ath12k_p2p_noa_update_vdev_iter(void *data, u8 *mac,
struct ath12k_vif *arvif = ath12k_vif_to_arvif(vif);
struct ath12k_p2p_noa_arg *arg = data;
if (arvif->vdev_id != arg->vdev_id)
if (arvif->ar != arg->ar || arvif->vdev_id != arg->vdev_id)
return;
ath12k_p2p_noa_update(arvif, arg->noa);
@ -132,6 +132,7 @@ void ath12k_p2p_noa_update_by_vdev_id(struct ath12k *ar, u32 vdev_id,
{
struct ath12k_p2p_noa_arg arg = {
.vdev_id = vdev_id,
.ar = ar,
.noa = noa,
};


@ -12,6 +12,7 @@ struct ath12k_wmi_p2p_noa_info;
struct ath12k_p2p_noa_arg {
u32 vdev_id;
struct ath12k *ar;
const struct ath12k_wmi_p2p_noa_info *noa;
};


@ -872,7 +872,7 @@ static int ath12k_pci_claim(struct ath12k_pci *ab_pci, struct pci_dev *pdev)
goto release_region;
}
ath12k_dbg(ab, ATH12K_DBG_BOOT, "boot pci_mem 0x%pK\n", ab->mem);
ath12k_dbg(ab, ATH12K_DBG_BOOT, "boot pci_mem 0x%p\n", ab->mem);
return 0;
release_region:
@ -1271,7 +1271,7 @@ int ath12k_pci_power_up(struct ath12k_base *ab)
return 0;
}
void ath12k_pci_power_down(struct ath12k_base *ab)
void ath12k_pci_power_down(struct ath12k_base *ab, bool is_suspend)
{
struct ath12k_pci *ab_pci = ath12k_pci_priv(ab);
@ -1280,7 +1280,7 @@ void ath12k_pci_power_down(struct ath12k_base *ab)
ath12k_pci_force_wake(ab_pci->ab);
ath12k_pci_msi_disable(ab_pci);
ath12k_mhi_stop(ab_pci);
ath12k_mhi_stop(ab_pci, is_suspend);
clear_bit(ATH12K_PCI_FLAG_INIT_DONE, &ab_pci->flags);
ath12k_pci_sw_reset(ab_pci->ab, false);
}
@ -1503,7 +1503,7 @@ static void ath12k_pci_remove(struct pci_dev *pdev)
ath12k_pci_set_irq_affinity_hint(ab_pci, NULL);
if (test_bit(ATH12K_FLAG_QMI_FAIL, &ab->dev_flags)) {
ath12k_pci_power_down(ab);
ath12k_pci_power_down(ab, false);
ath12k_qmi_deinit_service(ab);
goto qmi_fail;
}
@ -1531,7 +1531,7 @@ static void ath12k_pci_shutdown(struct pci_dev *pdev)
struct ath12k_pci *ab_pci = ath12k_pci_priv(ab);
ath12k_pci_set_irq_affinity_hint(ab_pci, NULL);
ath12k_pci_power_down(ab);
ath12k_pci_power_down(ab, false);
}
static __maybe_unused int ath12k_pci_pm_suspend(struct device *dev)
@ -1558,9 +1558,36 @@ static __maybe_unused int ath12k_pci_pm_resume(struct device *dev)
return ret;
}
static SIMPLE_DEV_PM_OPS(ath12k_pci_pm_ops,
ath12k_pci_pm_suspend,
ath12k_pci_pm_resume);
static __maybe_unused int ath12k_pci_pm_suspend_late(struct device *dev)
{
struct ath12k_base *ab = dev_get_drvdata(dev);
int ret;
ret = ath12k_core_suspend_late(ab);
if (ret)
ath12k_warn(ab, "failed to late suspend core: %d\n", ret);
return ret;
}
static __maybe_unused int ath12k_pci_pm_resume_early(struct device *dev)
{
struct ath12k_base *ab = dev_get_drvdata(dev);
int ret;
ret = ath12k_core_resume_early(ab);
if (ret)
ath12k_warn(ab, "failed to early resume core: %d\n", ret);
return ret;
}
static const struct dev_pm_ops __maybe_unused ath12k_pci_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(ath12k_pci_pm_suspend,
ath12k_pci_pm_resume)
SET_LATE_SYSTEM_SLEEP_PM_OPS(ath12k_pci_pm_suspend_late,
ath12k_pci_pm_resume_early)
};
static struct pci_driver ath12k_pci_driver = {
.name = "ath12k_pci",


@ -143,5 +143,5 @@ int ath12k_pci_hif_resume(struct ath12k_base *ab);
void ath12k_pci_stop(struct ath12k_base *ab);
int ath12k_pci_start(struct ath12k_base *ab);
int ath12k_pci_power_up(struct ath12k_base *ab);
void ath12k_pci_power_down(struct ath12k_base *ab);
void ath12k_pci_power_down(struct ath12k_base *ab, bool is_suspend);
#endif /* ATH12K_PCI_H */


@ -582,6 +582,24 @@ static const struct qmi_elem_info qmi_wlanfw_phy_cap_resp_msg_v01_ei[] = {
.offset = offsetof(struct qmi_wlanfw_phy_cap_resp_msg_v01,
board_id),
},
{
.data_type = QMI_OPT_FLAG,
.elem_len = 1,
.elem_size = sizeof(u8),
.array_type = NO_ARRAY,
.tlv_type = 0x13,
.offset = offsetof(struct qmi_wlanfw_phy_cap_resp_msg_v01,
single_chip_mlo_support_valid),
},
{
.data_type = QMI_UNSIGNED_1_BYTE,
.elem_len = 1,
.elem_size = sizeof(u8),
.array_type = NO_ARRAY,
.tlv_type = 0x13,
.offset = offsetof(struct qmi_wlanfw_phy_cap_resp_msg_v01,
single_chip_mlo_support),
},
{
.data_type = QMI_EOTI,
.array_type = NO_ARRAY,
@ -2005,7 +2023,15 @@ static void ath12k_host_cap_parse_mlo(struct ath12k_base *ab,
u8 hw_link_id = 0;
int i;
if (!(ab->mlo_capable_flags & ATH12K_INTRA_DEVICE_MLO_SUPPORT)) {
ath12k_dbg(ab, ATH12K_DBG_QMI,
"intra device MLO is disabled hence skip QMI MLO cap");
return;
}
if (!ab->qmi.num_radios || ab->qmi.num_radios == U8_MAX) {
ab->mlo_capable_flags = 0;
ath12k_dbg(ab, ATH12K_DBG_QMI,
"skip QMI MLO cap due to invalid num_radio %d\n",
ab->qmi.num_radios);
@ -2124,9 +2150,6 @@ static void ath12k_qmi_phy_cap_send(struct ath12k_base *ab)
struct qmi_txn txn;
int ret;
if (!ab->slo_capable)
goto out;
ret = qmi_txn_init(&ab->qmi.handle, &txn,
qmi_wlanfw_phy_cap_resp_msg_v01_ei, &resp);
if (ret < 0)
@ -2151,6 +2174,13 @@ static void ath12k_qmi_phy_cap_send(struct ath12k_base *ab)
goto out;
}
if (resp.single_chip_mlo_support_valid) {
if (resp.single_chip_mlo_support)
ab->mlo_capable_flags |= ATH12K_INTRA_DEVICE_MLO_SUPPORT;
else
ab->mlo_capable_flags &= ~ATH12K_INTRA_DEVICE_MLO_SUPPORT;
}
if (!resp.num_phy_valid) {
ret = -ENODATA;
goto out;
@ -2158,9 +2188,11 @@ static void ath12k_qmi_phy_cap_send(struct ath12k_base *ab)
ab->qmi.num_radios = resp.num_phy;
ath12k_dbg(ab, ATH12K_DBG_QMI, "phy capability resp valid %d num_phy %d valid %d board_id %d\n",
ath12k_dbg(ab, ATH12K_DBG_QMI,
"phy capability resp valid %d num_phy %d valid %d board_id %d valid %d single_chip_mlo_support %d\n",
resp.num_phy_valid, resp.num_phy,
resp.board_id_valid, resp.board_id);
resp.board_id_valid, resp.board_id,
resp.single_chip_mlo_support_valid, resp.single_chip_mlo_support);
return;
@ -2325,8 +2357,9 @@ static void ath12k_qmi_free_target_mem_chunk(struct ath12k_base *ab)
for (i = 0; i < ab->qmi.mem_seg_count; i++) {
if (!ab->qmi.target_mem[i].v.addr)
continue;
dma_free_coherent(ab->dev,
ab->qmi.target_mem[i].size,
ab->qmi.target_mem[i].prev_size,
ab->qmi.target_mem[i].v.addr,
ab->qmi.target_mem[i].paddr);
ab->qmi.target_mem[i].v.addr = NULL;
@ -2352,6 +2385,20 @@ static int ath12k_qmi_alloc_target_mem_chunk(struct ath12k_base *ab)
case M3_DUMP_REGION_TYPE:
case PAGEABLE_MEM_REGION_TYPE:
case CALDB_MEM_REGION_TYPE:
/* Firmware reloads in recovery/resume.
* In such cases, no need to allocate memory for FW again.
*/
if (chunk->v.addr) {
if (chunk->prev_type == chunk->type &&
chunk->prev_size == chunk->size)
goto this_chunk_done;
/* cannot reuse the existing chunk */
dma_free_coherent(ab->dev, chunk->prev_size,
chunk->v.addr, chunk->paddr);
chunk->v.addr = NULL;
}
chunk->v.addr = dma_alloc_coherent(ab->dev,
chunk->size,
&chunk->paddr,
@ -2370,6 +2417,10 @@ static int ath12k_qmi_alloc_target_mem_chunk(struct ath12k_base *ab)
chunk->type, chunk->size);
return -ENOMEM;
}
chunk->prev_type = chunk->type;
chunk->prev_size = chunk->size;
this_chunk_done:
break;
default:
ath12k_warn(ab, "memory type %u not supported\n",
@ -2666,6 +2717,19 @@ out:
return ret;
}
static void ath12k_qmi_m3_free(struct ath12k_base *ab)
{
struct m3_mem_region *m3_mem = &ab->qmi.m3_mem;
if (!m3_mem->vaddr)
return;
dma_free_coherent(ab->dev, m3_mem->size,
m3_mem->vaddr, m3_mem->paddr);
m3_mem->vaddr = NULL;
m3_mem->size = 0;
}
static int ath12k_qmi_m3_load(struct ath12k_base *ab)
{
struct m3_mem_region *m3_mem = &ab->qmi.m3_mem;
@ -2675,10 +2739,6 @@ static int ath12k_qmi_m3_load(struct ath12k_base *ab)
size_t m3_len;
int ret;
if (m3_mem->vaddr)
/* m3 firmware buffer is already available in the DMA buffer */
return 0;
if (ab->fw.m3_data && ab->fw.m3_len > 0) {
/* firmware-N.bin had a m3 firmware file so use that */
m3_data = ab->fw.m3_data;
@ -2700,6 +2760,15 @@ static int ath12k_qmi_m3_load(struct ath12k_base *ab)
m3_len = fw->size;
}
/* In recovery/resume cases, M3 buffer is not freed, try to reuse that */
if (m3_mem->vaddr) {
if (m3_mem->size >= m3_len)
goto skip_m3_alloc;
/* Old buffer is too small, free and reallocate */
ath12k_qmi_m3_free(ab);
}
m3_mem->vaddr = dma_alloc_coherent(ab->dev,
m3_len, &m3_mem->paddr,
GFP_KERNEL);
@ -2710,6 +2779,7 @@ static int ath12k_qmi_m3_load(struct ath12k_base *ab)
goto out;
}
skip_m3_alloc:
memcpy(m3_mem->vaddr, m3_data, m3_len);
m3_mem->size = m3_len;
@ -2721,19 +2791,6 @@ out:
return ret;
}
static void ath12k_qmi_m3_free(struct ath12k_base *ab)
{
struct m3_mem_region *m3_mem = &ab->qmi.m3_mem;
if (!m3_mem->vaddr)
return;
dma_free_coherent(ab->dev, m3_mem->size,
m3_mem->vaddr, m3_mem->paddr);
m3_mem->vaddr = NULL;
m3_mem->size = 0;
}
static int ath12k_qmi_wlanfw_m3_info_send(struct ath12k_base *ab)
{
struct m3_mem_region *m3_mem = &ab->qmi.m3_mem;
@ -3178,6 +3235,9 @@ static const struct qmi_msg_handler ath12k_qmi_msg_handlers[] = {
.decoded_size = sizeof(struct qmi_wlanfw_fw_ready_ind_msg_v01),
.fn = ath12k_qmi_msg_fw_ready_cb,
},
/* end of list */
{},
};
static int ath12k_qmi_ops_new_server(struct qmi_handle *qmi_hdl,
@ -3261,7 +3321,8 @@ static void ath12k_qmi_driver_event_work(struct work_struct *work)
case ATH12K_QMI_EVENT_FW_READY:
clear_bit(ATH12K_FLAG_QMI_FAIL, &ab->dev_flags);
if (test_bit(ATH12K_FLAG_REGISTERED, &ab->dev_flags)) {
ath12k_hal_dump_srng_stats(ab);
if (ab->is_reset)
ath12k_hal_dump_srng_stats(ab);
queue_work(ab->workqueue, &ab->restart_work);
break;
}


@ -96,6 +96,8 @@ struct ath12k_qmi_event_msg {
struct target_mem_chunk {
u32 size;
u32 type;
u32 prev_size;
u32 prev_type;
dma_addr_t paddr;
union {
void __iomem *ioaddr;
@ -265,6 +267,8 @@ struct qmi_wlanfw_phy_cap_resp_msg_v01 {
u8 num_phy;
u8 board_id_valid;
u32 board_id;
u8 single_chip_mlo_support_valid;
u8 single_chip_mlo_support;
};
#define QMI_WLANFW_IND_REGISTER_REQ_MSG_V01_MAX_LEN 54


@ -49,8 +49,8 @@ ath12k_reg_notifier(struct wiphy *wiphy, struct regulatory_request *request)
struct ieee80211_hw *hw = wiphy_to_ieee80211_hw(wiphy);
struct ath12k_wmi_init_country_arg arg;
struct ath12k_hw *ah = ath12k_hw_to_ah(hw);
struct ath12k *ar = ath12k_ah_to_ar(ah);
int ret;
struct ath12k *ar = ath12k_ah_to_ar(ah, 0);
int ret, i;
ath12k_dbg(ar->ab, ATH12K_DBG_REG,
"Regulatory Notification received for %s\n", wiphy_name(wiphy));
@ -85,10 +85,16 @@ ath12k_reg_notifier(struct wiphy *wiphy, struct regulatory_request *request)
memcpy(&arg.cc_info.alpha2, request->alpha2, 2);
arg.cc_info.alpha2[2] = 0;
ret = ath12k_wmi_send_init_country_cmd(ar, &arg);
if (ret)
ath12k_warn(ar->ab,
"INIT Country code set to fw failed : %d\n", ret);
/* Allow fresh updates to wiphy regd */
ah->regd_updated = false;
/* Send the reg change request to all the radios */
for_each_ar(ah, ar, i) {
ret = ath12k_wmi_send_init_country_cmd(ar, &arg);
if (ret)
ath12k_warn(ar->ab,
"INIT Country code set to fw failed : %d\n", ret);
}
}
int ath12k_reg_update_chan_list(struct ath12k *ar)
@ -202,10 +208,32 @@ int ath12k_regd_update(struct ath12k *ar, bool init)
{
struct ieee80211_hw *hw = ath12k_ar_to_hw(ar);
struct ieee80211_regdomain *regd, *regd_copy = NULL;
struct ath12k_hw *ah = ar->ah;
int ret, regd_len, pdev_id;
struct ath12k_base *ab;
int i;
ab = ar->ab;
/* If one of the radios within ah has already updated the regd for
* the wiphy, then avoid setting regd again
*/
if (ah->regd_updated)
return 0;
/* Firmware provides reg rules which are similar for the 2 GHz and
* 5 GHz pdevs, but the 6 GHz pdev has a superset of all rules,
* including rules for all bands; we prefer the 6 GHz pdev's rules
* for setting up the wiphy regd.
* If a 6 GHz pdev is part of the ath12k_hw, wait for that pdev,
* else pick the first pdev which calls this function and use its
* regd to update the global hw regd.
* The regd_updated flag set at the end will not allow any further
* updates.
*/
if (ah->use_6ghz_regd && !ar->supports_6ghz)
return 0;
pdev_id = ar->pdev_idx;
spin_lock_bh(&ab->base_lock);
@ -258,10 +286,17 @@ int ath12k_regd_update(struct ath12k *ar, bool init)
if (ret)
goto err;
if (ar->state == ATH12K_STATE_ON) {
ret = ath12k_reg_update_chan_list(ar);
if (ret)
goto err;
ah->regd_updated = true;
/* Apply the new regd to all the radios; this is expected to happen
* only once since we check ah->regd_updated above and allow the
* update only once.
*/
for_each_ar(ah, ar, i) {
if (ar->state == ATH12K_STATE_ON) {
ab = ar->ab;
ret = ath12k_reg_update_chan_list(ar);
if (ret)
goto err;
}
}
return 0;


@ -858,20 +858,20 @@ int ath12k_wmi_vdev_create(struct ath12k *ar, u8 *macaddr,
len = sizeof(*txrx_streams);
txrx_streams->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_VDEV_TXRX_STREAMS,
len);
txrx_streams->band = WMI_TPC_CHAINMASK_CONFIG_BAND_2G;
txrx_streams->band = cpu_to_le32(WMI_TPC_CHAINMASK_CONFIG_BAND_2G);
txrx_streams->supported_tx_streams =
args->chains[NL80211_BAND_2GHZ].tx;
cpu_to_le32(args->chains[NL80211_BAND_2GHZ].tx);
txrx_streams->supported_rx_streams =
args->chains[NL80211_BAND_2GHZ].rx;
cpu_to_le32(args->chains[NL80211_BAND_2GHZ].rx);
txrx_streams++;
txrx_streams->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_VDEV_TXRX_STREAMS,
len);
txrx_streams->band = WMI_TPC_CHAINMASK_CONFIG_BAND_5G;
txrx_streams->band = cpu_to_le32(WMI_TPC_CHAINMASK_CONFIG_BAND_5G);
txrx_streams->supported_tx_streams =
args->chains[NL80211_BAND_5GHZ].tx;
cpu_to_le32(args->chains[NL80211_BAND_5GHZ].tx);
txrx_streams->supported_rx_streams =
args->chains[NL80211_BAND_5GHZ].rx;
cpu_to_le32(args->chains[NL80211_BAND_5GHZ].rx);
ath12k_dbg(ar->ab, ATH12K_DBG_WMI,
"WMI vdev create: id %d type %d subtype %d macaddr %pM pdevid %d\n",
@ -2723,6 +2723,149 @@ int ath12k_wmi_send_dfs_phyerr_offload_enable_cmd(struct ath12k *ar,
return ret;
}
int ath12k_wmi_set_bios_cmd(struct ath12k_base *ab, u32 param_id,
const u8 *buf, size_t buf_len)
{
struct ath12k_wmi_base *wmi_ab = &ab->wmi_ab;
struct wmi_pdev_set_bios_interface_cmd *cmd;
struct wmi_tlv *tlv;
struct sk_buff *skb;
u8 *ptr;
u32 len, len_aligned;
int ret;
len_aligned = roundup(buf_len, sizeof(u32));
len = sizeof(*cmd) + TLV_HDR_SIZE + len_aligned;
skb = ath12k_wmi_alloc_skb(wmi_ab, len);
if (!skb)
return -ENOMEM;
cmd = (struct wmi_pdev_set_bios_interface_cmd *)skb->data;
cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_PDEV_SET_BIOS_INTERFACE_CMD,
sizeof(*cmd));
cmd->pdev_id = cpu_to_le32(WMI_PDEV_ID_SOC);
cmd->param_type_id = cpu_to_le32(param_id);
cmd->length = cpu_to_le32(buf_len);
ptr = skb->data + sizeof(*cmd);
tlv = (struct wmi_tlv *)ptr;
tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_BYTE, len_aligned);
ptr += TLV_HDR_SIZE;
memcpy(ptr, buf, buf_len);
ret = ath12k_wmi_cmd_send(&wmi_ab->wmi[0],
skb,
WMI_PDEV_SET_BIOS_INTERFACE_CMDID);
if (ret) {
ath12k_warn(ab,
"failed to send WMI_PDEV_SET_BIOS_INTERFACE_CMDID parameter id %d: %d\n",
param_id, ret);
dev_kfree_skb(skb);
}
return 0;
}
int ath12k_wmi_set_bios_sar_cmd(struct ath12k_base *ab, const u8 *psar_table)
{
struct ath12k_wmi_base *wmi_ab = &ab->wmi_ab;
struct wmi_pdev_set_bios_sar_table_cmd *cmd;
struct wmi_tlv *tlv;
struct sk_buff *skb;
int ret;
u8 *buf_ptr;
u32 len, sar_table_len_aligned, sar_dbs_backoff_len_aligned;
const u8 *psar_value = psar_table + ATH12K_ACPI_POWER_LIMIT_DATA_OFFSET;
const u8 *pdbs_value = psar_table + ATH12K_ACPI_DBS_BACKOFF_DATA_OFFSET;
sar_table_len_aligned = roundup(ATH12K_ACPI_BIOS_SAR_TABLE_LEN, sizeof(u32));
sar_dbs_backoff_len_aligned = roundup(ATH12K_ACPI_BIOS_SAR_DBS_BACKOFF_LEN,
sizeof(u32));
len = sizeof(*cmd) + TLV_HDR_SIZE + sar_table_len_aligned +
TLV_HDR_SIZE + sar_dbs_backoff_len_aligned;
skb = ath12k_wmi_alloc_skb(wmi_ab, len);
if (!skb)
return -ENOMEM;
cmd = (struct wmi_pdev_set_bios_sar_table_cmd *)skb->data;
cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_PDEV_SET_BIOS_SAR_TABLE_CMD,
sizeof(*cmd));
cmd->pdev_id = cpu_to_le32(WMI_PDEV_ID_SOC);
cmd->sar_len = cpu_to_le32(ATH12K_ACPI_BIOS_SAR_TABLE_LEN);
cmd->dbs_backoff_len = cpu_to_le32(ATH12K_ACPI_BIOS_SAR_DBS_BACKOFF_LEN);
buf_ptr = skb->data + sizeof(*cmd);
tlv = (struct wmi_tlv *)buf_ptr;
tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_BYTE,
sar_table_len_aligned);
buf_ptr += TLV_HDR_SIZE;
memcpy(buf_ptr, psar_value, ATH12K_ACPI_BIOS_SAR_TABLE_LEN);
buf_ptr += sar_table_len_aligned;
tlv = (struct wmi_tlv *)buf_ptr;
tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_BYTE,
sar_dbs_backoff_len_aligned);
buf_ptr += TLV_HDR_SIZE;
memcpy(buf_ptr, pdbs_value, ATH12K_ACPI_BIOS_SAR_DBS_BACKOFF_LEN);
ret = ath12k_wmi_cmd_send(&wmi_ab->wmi[0],
skb,
WMI_PDEV_SET_BIOS_SAR_TABLE_CMDID);
if (ret) {
ath12k_warn(ab,
"failed to send WMI_PDEV_SET_BIOS_INTERFACE_CMDID %d\n",
ret);
dev_kfree_skb(skb);
}
return ret;
}
int ath12k_wmi_set_bios_geo_cmd(struct ath12k_base *ab, const u8 *pgeo_table)
{
struct ath12k_wmi_base *wmi_ab = &ab->wmi_ab;
struct wmi_pdev_set_bios_geo_table_cmd *cmd;
struct wmi_tlv *tlv;
struct sk_buff *skb;
int ret;
u8 *buf_ptr;
u32 len, sar_geo_len_aligned;
const u8 *pgeo_value = pgeo_table + ATH12K_ACPI_GEO_OFFSET_DATA_OFFSET;
sar_geo_len_aligned = roundup(ATH12K_ACPI_BIOS_SAR_GEO_OFFSET_LEN, sizeof(u32));
len = sizeof(*cmd) + TLV_HDR_SIZE + sar_geo_len_aligned;
skb = ath12k_wmi_alloc_skb(wmi_ab, len);
if (!skb)
return -ENOMEM;
cmd = (struct wmi_pdev_set_bios_geo_table_cmd *)skb->data;
cmd->tlv_header = ath12k_wmi_tlv_cmd_hdr(WMI_TAG_PDEV_SET_BIOS_GEO_TABLE_CMD,
sizeof(*cmd));
cmd->pdev_id = cpu_to_le32(WMI_PDEV_ID_SOC);
cmd->geo_len = cpu_to_le32(ATH12K_ACPI_BIOS_SAR_GEO_OFFSET_LEN);
buf_ptr = skb->data + sizeof(*cmd);
tlv = (struct wmi_tlv *)buf_ptr;
tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_BYTE, sar_geo_len_aligned);
buf_ptr += TLV_HDR_SIZE;
memcpy(buf_ptr, pgeo_value, ATH12K_ACPI_BIOS_SAR_GEO_OFFSET_LEN);
ret = ath12k_wmi_cmd_send(&wmi_ab->wmi[0],
skb,
WMI_PDEV_SET_BIOS_GEO_TABLE_CMDID);
if (ret) {
ath12k_warn(ab,
"failed to send WMI_PDEV_SET_BIOS_GEO_TABLE_CMDID %d\n",
ret);
dev_kfree_skb(skb);
}
return ret;
}
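
All three BIOS commands above share one TLV layout: the fixed command struct, then WMI_TAG_ARRAY_BYTE TLV(s) whose payload is padded up to a 4-byte boundary while the command's own length field keeps the unpadded size. Worked numbers for ath12k_wmi_set_bios_cmd() with a hypothetical 10-byte buffer:

	/* buf_len = 10 */
	len_aligned = roundup(10, sizeof(u32));			/* 12 */
	len = sizeof(*cmd) + TLV_HDR_SIZE + len_aligned;	/* skb size */
	cmd->length = cpu_to_le32(10);				/* unpadded */
	tlv->header = ath12k_wmi_tlv_hdr(WMI_TAG_ARRAY_BYTE, len_aligned);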
int ath12k_wmi_delba_send(struct ath12k *ar, u32 vdev_id, const u8 *mac,
u32 tid, u32 initiator, u32 reason)
{
@ -3324,7 +3467,8 @@ ath12k_wmi_copy_resource_config(struct ath12k_wmi_resource_config_params *wmi_cf
wmi_cfg->bpf_instruction_size = cpu_to_le32(tg_cfg->bpf_instruction_size);
wmi_cfg->max_bssid_rx_filters = cpu_to_le32(tg_cfg->max_bssid_rx_filters);
wmi_cfg->use_pdev_id = cpu_to_le32(tg_cfg->use_pdev_id);
wmi_cfg->flag1 = cpu_to_le32(tg_cfg->atf_config);
wmi_cfg->flag1 = cpu_to_le32(tg_cfg->atf_config |
WMI_RSRC_CFG_FLAG1_BSS_CHANNEL_INFO_64);
wmi_cfg->peer_map_unmap_version = cpu_to_le32(tg_cfg->peer_map_unmap_version);
wmi_cfg->sched_params = cpu_to_le32(tg_cfg->sched_params);
wmi_cfg->twt_ap_pdev_count = cpu_to_le32(tg_cfg->twt_ap_pdev_count);
@ -4041,6 +4185,7 @@ static void ath12k_wmi_free_dbring_caps(struct ath12k_base *ab)
{
kfree(ab->db_caps);
ab->db_caps = NULL;
ab->num_db_cap = 0;
}
static int ath12k_wmi_dma_ring_caps(struct ath12k_base *ab,
@ -5927,13 +6072,11 @@ static void ath12k_mgmt_rx_event(struct ath12k_base *ab, struct sk_buff *skb)
}
}
/* TODO: Pending handle beacon implementation
*if (ieee80211_is_beacon(hdr->frame_control))
* ath12k_mac_handle_beacon(ar, skb);
*/
if (ieee80211_is_beacon(hdr->frame_control))
ath12k_mac_handle_beacon(ar, skb);
ath12k_dbg(ab, ATH12K_DBG_MGMT,
"event mgmt rx skb %pK len %d ftype %02x stype %02x\n",
"event mgmt rx skb %p len %d ftype %02x stype %02x\n",
skb, skb->len,
fc & IEEE80211_FCTL_FTYPE, fc & IEEE80211_FCTL_STYPE);
@ -6137,42 +6280,44 @@ static void ath12k_roam_event(struct ath12k_base *ab, struct sk_buff *skb)
{
struct wmi_roam_event roam_ev = {};
struct ath12k *ar;
u32 vdev_id;
u8 roam_reason;
if (ath12k_pull_roam_ev(ab, skb, &roam_ev) != 0) {
ath12k_warn(ab, "failed to extract roam event");
return;
}
vdev_id = le32_to_cpu(roam_ev.vdev_id);
roam_reason = u32_get_bits(le32_to_cpu(roam_ev.reason),
WMI_ROAM_REASON_MASK);
ath12k_dbg(ab, ATH12K_DBG_WMI,
"wmi roam event vdev %u reason 0x%08x rssi %d\n",
roam_ev.vdev_id, roam_ev.reason, roam_ev.rssi);
"wmi roam event vdev %u reason %d rssi %d\n",
vdev_id, roam_reason, roam_ev.rssi);
rcu_read_lock();
ar = ath12k_mac_get_ar_by_vdev_id(ab, le32_to_cpu(roam_ev.vdev_id));
ar = ath12k_mac_get_ar_by_vdev_id(ab, vdev_id);
if (!ar) {
ath12k_warn(ab, "invalid vdev id in roam ev %d",
roam_ev.vdev_id);
ath12k_warn(ab, "invalid vdev id in roam ev %d", vdev_id);
rcu_read_unlock();
return;
}
if (le32_to_cpu(roam_ev.reason) >= WMI_ROAM_REASON_MAX)
if (roam_reason >= WMI_ROAM_REASON_MAX)
ath12k_warn(ab, "ignoring unknown roam event reason %d on vdev %i\n",
roam_ev.reason, roam_ev.vdev_id);
roam_reason, vdev_id);
switch (le32_to_cpu(roam_ev.reason)) {
switch (roam_reason) {
case WMI_ROAM_REASON_BEACON_MISS:
/* TODO: Pending beacon miss and connection_loss_work
* implementation
* ath12k_mac_handle_beacon_miss(ar, vdev_id);
*/
ath12k_mac_handle_beacon_miss(ar, vdev_id);
break;
case WMI_ROAM_REASON_BETTER_AP:
case WMI_ROAM_REASON_LOW_RSSI:
case WMI_ROAM_REASON_SUITABLE_AP_FOUND:
case WMI_ROAM_REASON_HO_FAILED:
ath12k_warn(ab, "ignoring not implemented roam event reason %d on vdev %i\n",
roam_ev.reason, roam_ev.vdev_id);
roam_reason, vdev_id);
break;
}
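
The roam event's reason field is now decoded as a bitfield: bits 3:0 carry the roam reason and bits 5:4 the subnet status. A worked decode with a hypothetical raw value of 0x12:

	u32 raw = 0x12;
	u8 reason = u32_get_bits(raw, WMI_ROAM_REASON_MASK);
	/* reason == 0x2 == WMI_ROAM_REASON_BEACON_MISS */
	u8 subnet = u32_get_bits(raw, WMI_ROAM_SUBNET_STATUS_MASK);
	/* subnet == 0x1 */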


@ -353,6 +353,9 @@ enum wmi_tlv_cmd_id {
WMI_PDEV_DMA_RING_CFG_REQ_CMDID,
WMI_PDEV_HE_TB_ACTION_FRM_CMDID,
WMI_PDEV_PKTLOG_FILTER_CMDID,
WMI_PDEV_SET_BIOS_SAR_TABLE_CMDID = 0x4044,
WMI_PDEV_SET_BIOS_GEO_TABLE_CMDID = 0x4045,
WMI_PDEV_SET_BIOS_INTERFACE_CMDID = 0x404A,
WMI_VDEV_CREATE_CMDID = WMI_TLV_CMD(WMI_GRP_VDEV),
WMI_VDEV_DELETE_CMDID,
WMI_VDEV_START_REQUEST_CMDID,
@ -1925,6 +1928,9 @@ enum wmi_tlv_tag {
WMI_TAG_REGULATORY_RULE_EXT_STRUCT = 0x3A9,
WMI_TAG_REG_CHAN_LIST_CC_EXT_EVENT,
WMI_TAG_EHT_RATE_SET = 0x3C4,
WMI_TAG_PDEV_SET_BIOS_SAR_TABLE_CMD = 0x3D8,
WMI_TAG_PDEV_SET_BIOS_GEO_TABLE_CMD = 0x3D9,
WMI_TAG_PDEV_SET_BIOS_INTERFACE_CMD = 0x3FB,
WMI_TAG_MAX
};
@ -2195,8 +2201,11 @@ enum wmi_peer_param {
WMI_PEER_SET_MAX_TX_RATE = 17,
WMI_PEER_SET_MIN_TX_RATE = 18,
WMI_PEER_SET_DEFAULT_ROUTING = 19,
WMI_PEER_CHWIDTH_PUNCTURE_20MHZ_BITMAP = 39,
};
#define WMI_PEER_PUNCTURE_BITMAP GENMASK(23, 8)
enum wmi_slot_time {
WMI_VDEV_SLOT_TIME_LONG = 1,
WMI_VDEV_SLOT_TIME_SHORT = 2,
@ -2400,6 +2409,7 @@ struct wmi_init_cmd {
#define WMI_RSRC_CFG_HOST_SVC_FLAG_REG_CC_EXT_SUPPORT_BIT 4
#define WMI_RSRC_CFG_FLAGS2_RX_PEER_METADATA_VERSION GENMASK(5, 4)
#define WMI_RSRC_CFG_FLAG1_BSS_CHANNEL_INFO_64 BIT(5)
struct ath12k_wmi_resource_config_params {
__le32 tlv_header;
@ -2600,6 +2610,19 @@ struct ath12k_wmi_soc_hal_reg_caps_params {
__le32 num_phy;
} __packed;
enum wmi_channel_width {
WMI_CHAN_WIDTH_20 = 0,
WMI_CHAN_WIDTH_40 = 1,
WMI_CHAN_WIDTH_80 = 2,
WMI_CHAN_WIDTH_160 = 3,
WMI_CHAN_WIDTH_80P80 = 4,
WMI_CHAN_WIDTH_5 = 5,
WMI_CHAN_WIDTH_10 = 6,
WMI_CHAN_WIDTH_165 = 7,
WMI_CHAN_WIDTH_160P160 = 8,
WMI_CHAN_WIDTH_320 = 9,
};
#define WMI_MAX_EHTCAP_MAC_SIZE 2
#define WMI_MAX_EHTCAP_PHY_SIZE 3
#define WMI_MAX_EHTCAP_RATE_SET 3
@ -2724,9 +2747,9 @@ struct wmi_vdev_create_cmd {
struct ath12k_wmi_vdev_txrx_streams_params {
__le32 tlv_header;
u32 band;
u32 supported_tx_streams;
u32 supported_rx_streams;
__le32 band;
__le32 supported_tx_streams;
__le32 supported_rx_streams;
} __packed;
struct wmi_vdev_delete_cmd {
@ -4182,6 +4205,9 @@ struct wmi_peer_sta_kickout_event {
struct ath12k_wmi_mac_addr_params peer_macaddr;
} __packed;
#define WMI_ROAM_REASON_MASK GENMASK(3, 0)
#define WMI_ROAM_SUBNET_STATUS_MASK GENMASK(5, 4)
enum wmi_roam_reason {
WMI_ROAM_REASON_BETTER_AP = 1,
WMI_ROAM_REASON_BEACON_MISS = 2,
@ -4774,6 +4800,37 @@ struct ath12k_wmi_base {
struct ath12k_wmi_target_cap_arg *targ_cap;
};
struct wmi_pdev_set_bios_interface_cmd {
__le32 tlv_header;
__le32 pdev_id;
__le32 param_type_id;
__le32 length;
} __packed;
enum wmi_bios_param_type {
WMI_BIOS_PARAM_CCA_THRESHOLD_TYPE = 0,
WMI_BIOS_PARAM_TAS_CONFIG_TYPE = 1,
WMI_BIOS_PARAM_TAS_DATA_TYPE = 2,
/* bandedge control power */
WMI_BIOS_PARAM_TYPE_BANDEDGE = 3,
WMI_BIOS_PARAM_TYPE_MAX,
};
struct wmi_pdev_set_bios_sar_table_cmd {
__le32 tlv_header;
__le32 pdev_id;
__le32 sar_len;
__le32 dbs_backoff_len;
} __packed;
struct wmi_pdev_set_bios_geo_table_cmd {
__le32 tlv_header;
__le32 pdev_id;
__le32 geo_len;
} __packed;
#define ATH12K_FW_STATS_BUF_SIZE (1024 * 1024)
enum wmi_sys_cap_info_flags {
@ -4932,6 +4989,10 @@ int ath12k_wmi_probe_resp_tmpl(struct ath12k *ar, u32 vdev_id,
struct sk_buff *tmpl);
int ath12k_wmi_set_hw_mode(struct ath12k_base *ab,
enum wmi_host_hw_mode_config_type mode);
int ath12k_wmi_set_bios_cmd(struct ath12k_base *ab, u32 param_id,
const u8 *buf, size_t buf_len);
int ath12k_wmi_set_bios_sar_cmd(struct ath12k_base *ab, const u8 *psar_table);
int ath12k_wmi_set_bios_geo_cmd(struct ath12k_base *ab, const u8 *pgeo_table);
static inline u32
ath12k_wmi_caps_ext_get_pdev_id(const struct ath12k_wmi_caps_ext_params *param)


@ -1427,25 +1427,7 @@ static struct sdio_driver ath6kl_sdio_driver = {
.remove = ath6kl_sdio_remove,
.drv.pm = ATH6KL_SDIO_PM_OPS,
};
static int __init ath6kl_sdio_init(void)
{
int ret;
ret = sdio_register_driver(&ath6kl_sdio_driver);
if (ret)
ath6kl_err("sdio driver registration failed: %d\n", ret);
return ret;
}
static void __exit ath6kl_sdio_exit(void)
{
sdio_unregister_driver(&ath6kl_sdio_driver);
}
module_init(ath6kl_sdio_init);
module_exit(ath6kl_sdio_exit);
module_sdio_driver(ath6kl_sdio_driver);
MODULE_AUTHOR("Atheros Communications, Inc.");
MODULE_DESCRIPTION("Driver support for Atheros AR600x SDIO devices");
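
module_sdio_driver() generates the same register/unregister boilerplate that is removed above, only without the custom error message; it expands to roughly the equivalent of:

static int __init ath6kl_sdio_driver_init(void)
{
	return sdio_register_driver(&ath6kl_sdio_driver);
}
module_init(ath6kl_sdio_driver_init);

static void __exit ath6kl_sdio_driver_exit(void)
{
	sdio_unregister_driver(&ath6kl_sdio_driver);
}
module_exit(ath6kl_sdio_driver_exit);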


@ -135,8 +135,7 @@ void ath9k_ps_wakeup(struct ath_softc *sc)
if (power_mode != ATH9K_PM_AWAKE) {
spin_lock(&common->cc_lock);
ath_hw_cycle_counters_update(common);
memset(&common->cc_survey, 0, sizeof(common->cc_survey));
memset(&common->cc_ani, 0, sizeof(common->cc_ani));
memset(&common->cc, 0, sizeof(common->cc));
spin_unlock(&common->cc_lock);
}


@ -280,7 +280,8 @@ static void carl9170_tx_release(struct kref *ref)
* carl9170_tx_fill_rateinfo() has filled the rate information
* before we get to this point.
*/
memset_after(&txinfo->status, 0, rates);
memset(&txinfo->pad, 0, sizeof(txinfo->pad));
memset(&txinfo->rate_driver_data, 0, sizeof(txinfo->rate_driver_data));
if (atomic_read(&ar->tx_total_queued))
ar->tx_schedule = true;


@ -1069,6 +1069,38 @@ static int carl9170_usb_probe(struct usb_interface *intf,
ar->usb_ep_cmd_is_bulk = true;
}
/* Verify that all expected endpoints are present */
if (ar->usb_ep_cmd_is_bulk) {
u8 bulk_ep_addr[] = {
AR9170_USB_EP_RX | USB_DIR_IN,
AR9170_USB_EP_TX | USB_DIR_OUT,
AR9170_USB_EP_CMD | USB_DIR_OUT,
0};
u8 int_ep_addr[] = {
AR9170_USB_EP_IRQ | USB_DIR_IN,
0};
if (!usb_check_bulk_endpoints(intf, bulk_ep_addr) ||
!usb_check_int_endpoints(intf, int_ep_addr))
err = -ENODEV;
} else {
u8 bulk_ep_addr[] = {
AR9170_USB_EP_RX | USB_DIR_IN,
AR9170_USB_EP_TX | USB_DIR_OUT,
0};
u8 int_ep_addr[] = {
AR9170_USB_EP_IRQ | USB_DIR_IN,
AR9170_USB_EP_CMD | USB_DIR_OUT,
0};
if (!usb_check_bulk_endpoints(intf, bulk_ep_addr) ||
!usb_check_int_endpoints(intf, int_ep_addr))
err = -ENODEV;
}
if (err) {
carl9170_free(ar);
return err;
}
usb_set_intfdata(intf, ar);
SET_IEEE80211_DEV(ar->hw, &intf->dev);


@ -892,10 +892,8 @@ static int wil_cfg80211_scan(struct wiphy *wiphy,
struct wil6210_priv *wil = wiphy_to_wil(wiphy);
struct wireless_dev *wdev = request->wdev;
struct wil6210_vif *vif = wdev_to_vif(wil, wdev);
struct {
struct wmi_start_scan_cmd cmd;
u16 chnl[4];
} __packed cmd;
DEFINE_FLEX(struct wmi_start_scan_cmd, cmd,
channel_list, num_channels, 4);
uint i, n;
int rc;
@ -977,9 +975,8 @@ static int wil_cfg80211_scan(struct wiphy *wiphy,
vif->scan_request = request;
mod_timer(&vif->scan_timer, jiffies + WIL6210_SCAN_TO);
memset(&cmd, 0, sizeof(cmd));
cmd.cmd.scan_type = WMI_ACTIVE_SCAN;
cmd.cmd.num_channels = 0;
cmd->scan_type = WMI_ACTIVE_SCAN;
cmd->num_channels = 0;
n = min(request->n_channels, 4U);
for (i = 0; i < n; i++) {
int ch = request->channels[i]->hw_value;
@ -991,7 +988,8 @@ static int wil_cfg80211_scan(struct wiphy *wiphy,
continue;
}
/* 0-based channel indexes */
cmd.cmd.channel_list[cmd.cmd.num_channels++].channel = ch - 1;
cmd->num_channels++;
cmd->channel_list[cmd->num_channels - 1].channel = ch - 1;
wil_dbg_misc(wil, "Scan for ch %d : %d MHz\n", ch,
request->channels[i]->center_freq);
}
@ -1007,16 +1005,15 @@ static int wil_cfg80211_scan(struct wiphy *wiphy,
if (rc)
goto out_restore;
if (wil->discovery_mode && cmd.cmd.scan_type == WMI_ACTIVE_SCAN) {
cmd.cmd.discovery_mode = 1;
if (wil->discovery_mode && cmd->scan_type == WMI_ACTIVE_SCAN) {
cmd->discovery_mode = 1;
wil_dbg_misc(wil, "active scan with discovery_mode=1\n");
}
if (vif->mid == 0)
wil->radio_wdev = wdev;
rc = wmi_send(wil, WMI_START_SCAN_CMDID, vif->mid,
&cmd, sizeof(cmd.cmd) +
cmd.cmd.num_channels * sizeof(cmd.cmd.channel_list[0]));
cmd, struct_size(cmd, channel_list, cmd->num_channels));
out_restore:
if (rc) {
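
[Editor's note: the conversion above swaps the open-coded "struct plus trailing
array" on-stack pattern for DEFINE_FLEX() from <linux/overflow.h>, which
declares a zero-initialized flex-array struct with compile-time-bounded
capacity and presets the counter member. A minimal sketch with a made-up
command struct:]

#include <linux/overflow.h>
#include <linux/types.h>

struct example_scan_cmd {
	u8 scan_type;
	u8 num_channels;
	struct {
		u8 channel;
		u8 reserved;
	} channel_list[] __counted_by(num_channels);
} __packed;

static void example_build_cmd(void)
{
	/*
	 * On-stack object with room for 4 channel_list entries;
	 * 'cmd' is a pointer and the macro presets
	 * cmd->num_channels = 4 (the capacity).
	 */
	DEFINE_FLEX(struct example_scan_cmd, cmd,
		    channel_list, num_channels, 4);

	/* Fill incrementally, keeping the __counted_by counter valid. */
	cmd->num_channels = 0;
	cmd->num_channels++;
	cmd->channel_list[cmd->num_channels - 1].channel = 36;

	/* Bytes actually used, e.g. for the WMI payload length: */
	(void)struct_size(cmd, channel_list, cmd->num_channels);
	/* Full size of the on-stack object, as wmi_call() below uses: */
	(void)__struct_size(cmd);
}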


@ -4014,28 +4014,23 @@ int wmi_set_cqm_rssi_config(struct wil6210_priv *wil,
struct net_device *ndev = wil->main_ndev;
struct wil6210_vif *vif = ndev_to_vif(ndev);
int rc;
struct {
struct wmi_set_link_monitor_cmd cmd;
s8 rssi_thold;
} __packed cmd = {
.cmd = {
.rssi_hyst = rssi_hyst,
.rssi_thresholds_list_size = 1,
},
.rssi_thold = rssi_thold,
};
struct {
struct wmi_cmd_hdr hdr;
struct wmi_set_link_monitor_event evt;
} __packed reply = {
.evt = {.status = WMI_FW_STATUS_FAILURE},
};
DEFINE_FLEX(struct wmi_set_link_monitor_cmd, cmd,
rssi_thresholds_list, rssi_thresholds_list_size, 1);
cmd->rssi_hyst = rssi_hyst;
cmd->rssi_thresholds_list[0] = rssi_thold;
if (rssi_thold > S8_MAX || rssi_thold < S8_MIN || rssi_hyst > U8_MAX)
return -EINVAL;
rc = wmi_call(wil, WMI_SET_LINK_MONITOR_CMDID, vif->mid, &cmd,
sizeof(cmd), WMI_SET_LINK_MONITOR_EVENTID,
rc = wmi_call(wil, WMI_SET_LINK_MONITOR_CMDID, vif->mid, cmd,
__struct_size(cmd), WMI_SET_LINK_MONITOR_EVENTID,
&reply, sizeof(reply), WIL_WMI_CALL_GENERAL_TO_MS);
if (rc) {
wil_err(wil, "WMI_SET_LINK_MONITOR_CMDID failed, rc %d\n", rc);


@ -474,7 +474,7 @@ struct wmi_start_scan_cmd {
struct {
u8 channel;
u8 reserved;
} channel_list[];
} channel_list[] __counted_by(num_channels);
} __packed;
#define WMI_MAX_PNO_SSID_NUM (16)
@ -3320,7 +3320,7 @@ struct wmi_set_link_monitor_cmd {
u8 rssi_hyst;
u8 reserved[12];
u8 rssi_thresholds_list_size;
s8 rssi_thresholds_list[];
s8 rssi_thresholds_list[] __counted_by(rssi_thresholds_list_size);
} __packed;
/* wmi_link_monitor_event_type */
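[Editor's note: __counted_by() ties a flexible array to its element counter so
FORTIFY_SOURCE and UBSAN_BOUNDS can bounds-check accesses at run time; the
annotations above enable that for the two wil6210 WMI structs. A generic
sketch of the allocation side:]

#include <linux/slab.h>
#include <linux/overflow.h>
#include <linux/types.h>

struct example_thresholds {
	u8 count;
	/* The compiler now knows this array holds 'count' elements. */
	s8 values[] __counted_by(count);
};

static struct example_thresholds *example_alloc(u8 n)
{
	struct example_thresholds *t;

	t = kzalloc(struct_size(t, values, n), GFP_KERNEL);
	if (!t)
		return NULL;

	/* The counter must be set before the array is indexed. */
	t->count = n;
	return t;
}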


@ -117,13 +117,6 @@ struct bootrom_id_le {
__le32 boardrev; /* Board revision */
};
struct brcmf_usb_image {
struct list_head list;
s8 *fwname;
u8 *image;
int image_len;
};
struct brcmf_usbdev_info {
struct brcmf_usbdev bus_pub; /* MUST BE FIRST */
spinlock_t qlock;


@ -143,12 +143,6 @@ struct ampdu_info {
struct brcms_fifo_info fifo_tb[NUM_FFPLD_FIFO];
};
/* used for flushing ampdu packets */
struct cb_del_ampdu_pars {
struct ieee80211_sta *sta;
u16 tid;
};
static void brcms_c_scb_ampdu_update_max_txlen(struct ampdu_info *ampdu, u8 dur)
{
u32 rate, mcs;


@ -10,7 +10,7 @@
#include "fw/api/txq.h"
/* Highest firmware API version supported */
#define IWL_BZ_UCODE_API_MAX 89
#define IWL_BZ_UCODE_API_MAX 90
/* Lowest firmware API version supported */
#define IWL_BZ_UCODE_API_MIN 80


@ -10,7 +10,7 @@
#include "fw/api/txq.h"
/* Highest firmware API version supported */
#define IWL_SC_UCODE_API_MAX 89
#define IWL_SC_UCODE_API_MAX 90
/* Lowest firmware API version supported */
#define IWL_SC_UCODE_API_MIN 82


@ -1,5 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
* Copyright (C) 2024 Intel Corporation
* Copyright (C) 2012-2014, 2018-2022 Intel Corporation
* Copyright (C) 2013-2015 Intel Mobile Communications GmbH
* Copyright (C) 2016-2017 Intel Deutschland GmbH
@ -89,6 +90,12 @@ enum iwl_data_path_subcmd_ids {
*/
SEC_KEY_CMD = 0x18,
/**
* @ESR_MODE_NOTIF: notification to recommend/force a wanted esr mode,
* uses &struct iwl_mvm_esr_mode_notif
*/
ESR_MODE_NOTIF = 0xF3,
/**
* @MONITOR_NOTIF: Datapath monitoring notification, using
* &struct iwl_datapath_monitor_notif


@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
* Copyright (C) 2012-2014, 2018-2019, 2021-2023 Intel Corporation
* Copyright (C) 2012-2014, 2018-2019, 2021-2024 Intel Corporation
* Copyright (C) 2013-2015 Intel Mobile Communications GmbH
* Copyright (C) 2016-2017 Intel Deutschland GmbH
*/
@ -642,4 +642,25 @@ struct iwl_mvm_sta_disable_tx_cmd {
__le32 disable;
} __packed; /* STA_DISABLE_TX_API_S_VER_1 */
/**
* enum iwl_mvm_fw_esr_recommendation - FW recommendation code
* @ESR_RECOMMEND_LEAVE: recommendation to leave esr
* @ESR_FORCE_LEAVE: force exiting esr
* @ESR_RECOMMEND_ENTER: recommendation to enter esr
*/
enum iwl_mvm_fw_esr_recommendation {
ESR_RECOMMEND_LEAVE,
ESR_FORCE_LEAVE,
ESR_RECOMMEND_ENTER,
}; /* ESR_MODE_RECOMMENDATION_CODE_API_E_VER_1 */
/**
* struct iwl_mvm_esr_mode_notif - FW's recommendation/force for esr mode
*
* @action: the action to apply on esr state. See &iwl_mvm_fw_esr_recommendation
*/
struct iwl_mvm_esr_mode_notif {
__le32 action;
} __packed; /* ESR_MODE_RECOMMENDATION_NTFY_API_S_VER_1 */
#endif /* __iwl_fw_api_mac_cfg_h__ */


@ -46,9 +46,9 @@ enum iwl_regulatory_and_nvm_subcmd_ids {
SAR_OFFSET_MAPPING_TABLE_CMD = 0x4,
/**
* @UATS_TABLE_CMD: &struct iwl_uats_table_cmd
* @MCC_ALLOWED_AP_TYPE_CMD: &struct iwl_mcc_allowed_ap_type_cmd
*/
UATS_TABLE_CMD = 0x5,
MCC_ALLOWED_AP_TYPE_CMD = 0x5,
/**
* @PNVM_INIT_COMPLETE_NTFY: &struct iwl_pnvm_init_complete_ntfy
@ -701,13 +701,13 @@ struct iwl_pnvm_init_complete_ntfy {
#define UATS_TABLE_COL_SIZE 13
/**
* struct iwl_uats_table_cmd - struct for UATS_TABLE_CMD
* struct iwl_mcc_allowed_ap_type_cmd - struct for MCC_ALLOWED_AP_TYPE_CMD
* @offset_map: mapping an MCC to the allowed UHB AP type support (UATS)
* @reserved: reserved
*/
struct iwl_uats_table_cmd {
struct iwl_mcc_allowed_ap_type_cmd {
u8 offset_map[UATS_TABLE_ROW_SIZE][UATS_TABLE_COL_SIZE];
__le16 reserved;
} __packed; /* UATS_TABLE_CMD_S_VER_1 */
} __packed; /* MCC_ALLOWED_AP_TYPE_CMD_API_S_VER_1 */
#endif /* __iwl_fw_api_nvm_reg_h__ */


@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
* Copyright (C) 2012-2014, 2018-2023 Intel Corporation
* Copyright (C) 2012-2014, 2018-2024 Intel Corporation
* Copyright (C) 2013-2015 Intel Mobile Communications GmbH
* Copyright (C) 2016-2017 Intel Deutschland GmbH
*/
@ -13,6 +13,10 @@
* enum iwl_scan_subcmd_ids - scan commands
*/
enum iwl_scan_subcmd_ids {
/**
* @CHANNEL_SURVEY_NOTIF: &struct iwl_umac_scan_channel_survey_notif
*/
CHANNEL_SURVEY_NOTIF = 0xFB,
/**
* @OFFLOAD_MATCH_INFO_NOTIF: &struct iwl_scan_offload_match_info
*/
@ -62,6 +66,8 @@ struct iwl_ssid_ie {
#define IWL_FAST_SCHED_SCAN_ITERATIONS 3
#define IWL_MAX_SCHED_SCAN_PLANS 2
#define IWL_MAX_NUM_NOISE_RESULTS 22
enum scan_framework_client {
SCAN_CLIENT_SCHED_SCAN = BIT(0),
SCAN_CLIENT_NETDETECT = BIT(1),
@ -642,10 +648,13 @@ enum iwl_umac_scan_general_flags {
* notification per channel or not.
* @IWL_UMAC_SCAN_GEN_FLAGS2_ALLOW_CHNL_REORDER: Whether to allow channel
* reorder optimization or not.
* @IWL_UMAC_SCAN_GEN_FLAGS2_COLLECT_CHANNEL_STATS: Enable channel statistics
* collection when #IWL_UMAC_SCAN_GEN_FLAGS_V2_FORCE_PASSIVE is set.
*/
enum iwl_umac_scan_general_flags2 {
IWL_UMAC_SCAN_GEN_FLAGS2_NOTIF_PER_CHNL = BIT(0),
IWL_UMAC_SCAN_GEN_FLAGS2_ALLOW_CHNL_REORDER = BIT(1),
IWL_UMAC_SCAN_GEN_FLAGS2_COLLECT_CHANNEL_STATS = BIT(3),
};
/**
@ -1258,4 +1267,26 @@ struct iwl_umac_scan_iter_complete_notif {
struct iwl_scan_results_notif results[];
} __packed; /* SCAN_ITER_COMPLETE_NTF_UMAC_API_S_VER_2 */
/**
* struct iwl_umac_scan_channel_survey_notif - data for survey
* @channel: the channel scanned
* @band: band of channel
* @noise: noise floor measurements in negative dBm; 0xff marks an invalid entry
* @reserved: for future use and alignment
* @active_time: time in ms the radio was turned on (on the channel)
* @busy_time: time in ms the channel was sensed busy, 0 for a clean channel
* @tx_time: time the radio spent transmitting data
* @rx_time: time the radio spent receiving data
*/
struct iwl_umac_scan_channel_survey_notif {
__le32 channel;
__le32 band;
u8 noise[IWL_MAX_NUM_NOISE_RESULTS];
u8 reserved[2];
__le32 active_time;
__le32 busy_time;
__le32 tx_time;
__le32 rx_time;
} __packed; /* SCAN_CHANNEL_SURVEY_NTF_API_S_VER_1 */
#endif /* __iwl_fw_api_scan_h__ */
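[Editor's note: the mvm.h hunk further below declares
iwl_mvm_average_dbm_values() for condensing the @noise array into a single
survey value. Purely as an illustration of the "negative dBm, 0xff = invalid"
encoding, and not the real iwlwifi implementation (which may well average in
the linear power domain), a naive arithmetic mean looks like:]

#include <linux/types.h>
#include <linux/limits.h>

#define EX_MAX_NOISE_RESULTS	22	/* mirrors IWL_MAX_NUM_NOISE_RESULTS */
#define EX_NOISE_INVALID	0xff

static s8 example_average_noise(const u8 noise[EX_MAX_NOISE_RESULTS])
{
	unsigned int sum = 0, n = 0, i;

	for (i = 0; i < EX_MAX_NOISE_RESULTS; i++) {
		if (noise[i] == EX_NOISE_INVALID)
			continue;
		sum += noise[i];	/* stored as negative dBm */
		n++;
	}

	if (!n)
		return S8_MIN;	/* no valid samples */

	/*
	 * Realistic noise floors (tens of dBm) fit in s8; negate
	 * because the samples are stored as negative dBm.
	 */
	return -(int)(sum / n);
}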


@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
* Copyright (C) 2012-2014, 2018-2023 Intel Corporation
* Copyright (C) 2012-2014, 2018-2024 Intel Corporation
* Copyright (C) 2016-2017 Intel Deutschland GmbH
*/
#ifndef __iwl_fw_api_tx_h__
@ -793,7 +793,8 @@ enum iwl_mac_beacon_flags {
* @reserved: reserved
* @link_id: the firmware id of the link that will use this beacon
* @tim_idx: the offset of the tim IE in the beacon
* @tim_size: the length of the tim IE
* @tim_size: the length of the tim IE (version < 14)
* @btwt_offset: offset to the broadcast TWT IE if present (version >= 14)
* @ecsa_offset: offset to the ECSA IE if present
* @csa_offset: offset to the CSA IE if present
* @frame: the template of the beacon frame
@ -805,14 +806,18 @@ struct iwl_mac_beacon_cmd {
__le32 reserved;
__le32 link_id;
__le32 tim_idx;
__le32 tim_size;
union {
__le32 tim_size;
__le32 btwt_offset;
};
__le32 ecsa_offset;
__le32 csa_offset;
struct ieee80211_hdr frame[];
} __packed; /* BEACON_TEMPLATE_CMD_API_S_VER_10,
* BEACON_TEMPLATE_CMD_API_S_VER_11,
* BEACON_TEMPLATE_CMD_API_S_VER_12,
* BEACON_TEMPLATE_CMD_API_S_VER_13
* BEACON_TEMPLATE_CMD_API_S_VER_13,
* BEACON_TEMPLATE_CMD_API_S_VER_14
*/
struct iwl_beacon_notif {


@ -1026,17 +1026,12 @@ static int iwl_dump_ini_prph_mac_iter_common(struct iwl_fw_runtime *fwrt,
{
struct iwl_fw_ini_error_dump_range *range = range_ptr;
__le32 *val = range->data;
u32 prph_val;
int i;
range->internal_base_addr = cpu_to_le32(addr);
range->range_data_size = size;
for (i = 0; i < le32_to_cpu(size); i += 4) {
prph_val = iwl_read_prph(fwrt->trans, addr + i);
if (iwl_trans_is_hw_error_value(prph_val))
return -EBUSY;
*val++ = cpu_to_le32(prph_val);
}
for (i = 0; i < le32_to_cpu(size); i += 4)
*val++ = cpu_to_le32(iwl_read_prph(fwrt->trans, addr + i));
return sizeof(*range) + le32_to_cpu(range->range_data_size);
}


@ -182,7 +182,7 @@ struct iwl_fw_runtime {
u8 ppag_ver;
struct iwl_sar_offset_mapping_cmd sgom_table;
bool sgom_enabled;
struct iwl_uats_table_cmd uats_table;
struct iwl_mcc_allowed_ap_type_cmd uats_table;
u8 uefi_tables_lock_status;
bool uats_enabled;
};


@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
* Copyright (C) 2018, 2020-2023 Intel Corporation
* Copyright (C) 2018, 2020-2024 Intel Corporation
*/
#ifndef __iwl_context_info_file_gen3_h__
#define __iwl_context_info_file_gen3_h__
@ -56,6 +56,8 @@ enum iwl_prph_scratch_mtr_format {
* @IWL_PRPH_SCRATCH_RB_SIZE_EXT_8K: 8kB RB size
* @IWL_PRPH_SCRATCH_RB_SIZE_EXT_12K: 12kB RB size
* @IWL_PRPH_SCRATCH_RB_SIZE_EXT_16K: 16kB RB size
* @IWL_PRPH_SCRATCH_SCU_FORCE_ACTIVE: Indicate fw to set SCU_FORCE_ACTIVE
* upon reset.
*/
enum iwl_prph_scratch_flags {
IWL_PRPH_SCRATCH_IMR_DEBUG_EN = BIT(1),
@ -71,6 +73,7 @@ enum iwl_prph_scratch_flags {
IWL_PRPH_SCRATCH_RB_SIZE_EXT_8K = 8 << 20,
IWL_PRPH_SCRATCH_RB_SIZE_EXT_12K = 9 << 20,
IWL_PRPH_SCRATCH_RB_SIZE_EXT_16K = 10 << 20,
IWL_PRPH_SCRATCH_SCU_FORCE_ACTIVE = BIT(29),
};
/*


@ -1478,7 +1478,6 @@ static void iwl_req_fw_callback(const struct firmware *ucode_raw, void *context)
size_t trigger_tlv_sz[FW_DBG_TRIGGER_MAX];
u32 api_ver;
int i;
bool load_module = false;
bool usniffer_images = false;
bool failure = true;
@ -1726,19 +1725,12 @@ static void iwl_req_fw_callback(const struct firmware *ucode_raw, void *context)
goto out_unbind;
}
} else {
load_module = true;
request_module_nowait("%s", op->name);
}
mutex_unlock(&iwlwifi_opmode_table_mtx);
complete(&drv->request_firmware_complete);
/*
* Load the module last so we don't block anything
* else from proceeding if the module fails to load
* or hangs loading.
*/
if (load_module)
request_module("%s", op->name);
failure = false;
goto free;
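[Editor's note: the hunk trades the deferred synchronous request_module() call
(and its load_module bookkeeping) for request_module_nowait(), which only
schedules the usermode helper and returns, so it is safe to issue while
iwlwifi_opmode_table_mtx is still held. The API difference in a nutshell:]

#include <linux/kmod.h>
#include <linux/printk.h>

static void example_load_opmode(const char *name)
{
	/*
	 * request_module("%s", name) would block until modprobe
	 * finishes (module init included); the _nowait variant just
	 * queues the request, so a slow or hung module load can no
	 * longer stall this path.
	 */
	int ret = request_module_nowait("%s", name);

	if (ret)
		pr_warn("scheduling load of %s failed: %d\n", name, ret);
}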


@ -257,38 +257,28 @@ static void iwl_mvm_bt_coex_tcm_based_ci(struct iwl_mvm *mvm,
* This function receives the LB link id and checks if eSR should be
* enabled or disabled (due to BT coex)
*/
static bool
bool
iwl_mvm_bt_coex_calculate_esr_mode(struct iwl_mvm *mvm,
struct ieee80211_vif *vif,
int link_id)
s32 link_rssi,
bool primary)
{
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
struct iwl_mvm_vif_link_info *link_info = mvmvif->link[link_id];
bool have_wifi_loss_rate =
iwl_fw_lookup_notif_ver(mvm->fw, LEGACY_GROUP,
BT_PROFILE_NOTIFICATION, 0) > 4;
s8 link_rssi = 0;
u8 wifi_loss_rate;
lockdep_assert_held(&mvm->mutex);
if (mvm->last_bt_notif.wifi_loss_low_rssi == BT_OFF)
return true;
/* If LB link is the primary one we should always disable eSR */
if (link_id == iwl_mvm_get_primary_link(vif))
if (primary)
return false;
/* The feature is not supported */
if (!have_wifi_loss_rate)
return true;
/*
* We might not have a link_info when checking whether we can
* (re)enable eSR - the LB link might not exist yet
*/
if (link_info)
link_rssi = (s8)link_info->beacon_stats.avg_signal;
/*
* In case we don't know the RSSI - take the lower wifi loss,
@ -298,7 +288,7 @@ iwl_mvm_bt_coex_calculate_esr_mode(struct iwl_mvm *mvm,
if (!link_rssi)
wifi_loss_rate = mvm->last_bt_notif.wifi_loss_mid_high_rssi;
else if (!(mvmvif->esr_disable_reason & IWL_MVM_ESR_BLOCKED_COEX))
else if (mvmvif->esr_active)
/* RSSI needs to get really low to disable eSR... */
wifi_loss_rate =
link_rssi <= -IWL_MVM_BT_COEX_DISABLE_ESR_THRESH ?
@ -318,20 +308,20 @@ void iwl_mvm_bt_coex_update_link_esr(struct iwl_mvm *mvm,
struct ieee80211_vif *vif,
int link_id)
{
bool enable;
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
struct iwl_mvm_vif_link_info *link = mvmvif->link[link_id];
if (!ieee80211_vif_is_mld(vif) ||
!iwl_mvm_vif_from_mac80211(vif)->authorized)
!iwl_mvm_vif_from_mac80211(vif)->authorized ||
WARN_ON(!link))
return;
enable = iwl_mvm_bt_coex_calculate_esr_mode(mvm, vif, link_id);
if (enable)
iwl_mvm_unblock_esr(mvm, vif, IWL_MVM_ESR_BLOCKED_COEX);
else
if (!iwl_mvm_bt_coex_calculate_esr_mode(mvm, vif,
(s8)link->beacon_stats.avg_signal,
link_id == iwl_mvm_get_primary_link(vif)))
/* In case we decided to exit eSR - stay with the primary */
iwl_mvm_block_esr(mvm, vif, IWL_MVM_ESR_BLOCKED_COEX,
iwl_mvm_get_primary_link(vif));
iwl_mvm_exit_esr(mvm, vif, IWL_MVM_ESR_EXIT_COEX,
iwl_mvm_get_primary_link(vif));
}
static void iwl_mvm_bt_notif_per_link(struct iwl_mvm *mvm,
@ -515,10 +505,6 @@ static void iwl_mvm_bt_notif_iterator(void *_data, u8 *mac,
return;
}
/* When BT is off this will be 0 */
if (data->notif->wifi_loss_low_rssi == BT_OFF)
iwl_mvm_unblock_esr(mvm, vif, IWL_MVM_ESR_BLOCKED_COEX);
for (link_id = 0; link_id < IEEE80211_MLD_MAX_NUM_LINKS; link_id++)
iwl_mvm_bt_notif_per_link(mvm, vif, data, link_id);
}


@ -14,6 +14,8 @@
#define IWL_MVM_BT_COEX_DISABLE_ESR_THRESH 69
#define IWL_MVM_BT_COEX_ENABLE_ESR_THRESH 63
#define IWL_MVM_BT_COEX_WIFI_LOSS_THRESH 0
#define IWL_MVM_TRIGGER_LINK_SEL_TIME_SEC 30
#define IWL_MVM_TPT_COUNT_WINDOW_SEC 5
#define IWL_MVM_DEFAULT_PS_TX_DATA_TIMEOUT (100 * USEC_PER_MSEC)
#define IWL_MVM_DEFAULT_PS_RX_DATA_TIMEOUT (100 * USEC_PER_MSEC)
@ -134,4 +136,5 @@
#define IWL_MVM_HIGH_RSSI_THRESH_160MHZ -58
#define IWL_MVM_LOW_RSSI_THRESH_160MHZ -61
#define IWL_MVM_ENTER_ESR_TPT_THRESH 400
#endif /* __MVM_CONSTANTS_H */


@ -1243,7 +1243,7 @@ static int __iwl_mvm_suspend(struct ieee80211_hw *hw,
.data[0] = &d3_cfg_cmd_data,
.len[0] = sizeof(d3_cfg_cmd_data),
};
int ret, primary_link;
int ret;
int len __maybe_unused;
bool unified_image = fw_has_capa(&mvm->fw->ucode_capa,
IWL_UCODE_TLV_CAPA_CNSLDTD_D3_D0_IMG);
@ -1261,18 +1261,11 @@ static int __iwl_mvm_suspend(struct ieee80211_hw *hw,
if (IS_ERR_OR_NULL(vif))
return 1;
primary_link = iwl_mvm_get_primary_link(vif);
/* leave ESR immediately, not only async with iwl_mvm_block_esr() */
if (ieee80211_vif_is_mld(vif)) {
ret = ieee80211_set_active_links(vif, BIT(primary_link));
if (ret)
return ret;
}
ret = iwl_mvm_block_esr_sync(mvm, vif, IWL_MVM_ESR_BLOCKED_WOWLAN);
if (ret)
return ret;
mutex_lock(&mvm->mutex);
/* only additionally block for consistency and to avoid concurrency */
iwl_mvm_block_esr(mvm, vif, IWL_MVM_ESR_BLOCKED_WOWLAN, primary_link);
set_bit(IWL_MVM_STATUS_IN_D3, &mvm->status);
@ -1280,7 +1273,7 @@ static int __iwl_mvm_suspend(struct ieee80211_hw *hw,
mvmvif = iwl_mvm_vif_from_mac80211(vif);
mvm_link = mvmvif->link[primary_link];
mvm_link = mvmvif->link[iwl_mvm_get_primary_link(vif)];
if (WARN_ON_ONCE(!mvm_link)) {
ret = -EINVAL;
goto out_noreset;


@ -712,31 +712,7 @@ static ssize_t iwl_dbgfs_int_mlo_scan_write(struct ieee80211_vif *vif,
if (!action) {
ret = iwl_mvm_scan_stop(mvm, IWL_MVM_SCAN_INT_MLO, false);
} else if (action == 1) {
struct ieee80211_channel *channels[IEEE80211_MLD_MAX_NUM_LINKS];
unsigned long usable_links = ieee80211_vif_usable_links(vif);
size_t n_channels = 0;
u8 link_id;
rcu_read_lock();
for_each_set_bit(link_id, &usable_links,
IEEE80211_MLD_MAX_NUM_LINKS) {
struct ieee80211_bss_conf *link_conf =
rcu_dereference(vif->link_conf[link_id]);
if (WARN_ON_ONCE(!link_conf))
continue;
channels[n_channels++] = link_conf->chanreq.oper.chan;
}
rcu_read_unlock();
if (n_channels)
ret = iwl_mvm_int_mlo_scan_start(mvm, vif, channels,
n_channels);
else
ret = -EINVAL;
ret = iwl_mvm_int_mlo_scan(mvm, vif);
} else {
ret = -EINVAL;
}
@ -746,6 +722,66 @@ static ssize_t iwl_dbgfs_int_mlo_scan_write(struct ieee80211_vif *vif,
return ret ?: count;
}
static ssize_t iwl_dbgfs_esr_disable_reason_read(struct file *file,
char __user *user_buf,
size_t count, loff_t *ppos)
{
struct ieee80211_vif *vif = file->private_data;
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
struct iwl_mvm *mvm = mvmvif->mvm;
unsigned long esr_mask;
char *buf;
int bufsz, pos, i;
ssize_t rv;
mutex_lock(&mvm->mutex);
esr_mask = mvmvif->esr_disable_reason;
mutex_unlock(&mvm->mutex);
bufsz = hweight32(esr_mask) * 32 + 40;
buf = kmalloc(bufsz, GFP_KERNEL);
if (!buf)
return -ENOMEM;
pos = scnprintf(buf, bufsz, "EMLSR state: '0x%lx'\nreasons:\n",
esr_mask);
for_each_set_bit(i, &esr_mask, BITS_PER_LONG)
pos += scnprintf(buf + pos, bufsz - pos, " - %s\n",
iwl_get_esr_state_string(BIT(i)));
rv = simple_read_from_buffer(user_buf, count, ppos, buf, pos);
kfree(buf);
return rv;
}
static ssize_t iwl_dbgfs_esr_disable_reason_write(struct ieee80211_vif *vif,
char *buf, size_t count,
loff_t *ppos)
{
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
struct iwl_mvm *mvm = mvmvif->mvm;
u32 reason;
u8 block;
int ret;
ret = sscanf(buf, "%u %hhu", &reason, &block);
if (ret < 0)
return ret;
if (hweight16(reason) != 1 || !(reason & IWL_MVM_BLOCK_ESR_REASONS))
return -EINVAL;
mutex_lock(&mvm->mutex);
if (block)
iwl_mvm_block_esr(mvm, vif, reason,
iwl_mvm_get_primary_link(vif));
else
iwl_mvm_unblock_esr(mvm, vif, reason);
mutex_unlock(&mvm->mutex);
return count;
}
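[Editor's note: given the "%u %hhu" parse in the write handler above, the new
per-vif file should take a single blocking-reason bit plus a block/unblock
flag: writing something like "2 1" would set the WOWLAN block bit and "2 0"
would clear it again (bit values per the reworked enum iwl_mvm_esr_state later
in this series), while reading the file lists the currently active blocking
reasons by name.]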
#define MVM_DEBUGFS_WRITE_FILE_OPS(name, bufsz) \
_MVM_DEBUGFS_WRITE_FILE_OPS(name, bufsz, struct ieee80211_vif)
#define MVM_DEBUGFS_READ_WRITE_FILE_OPS(name, bufsz) \
@ -766,6 +802,7 @@ MVM_DEBUGFS_READ_WRITE_FILE_OPS(rx_phyinfo, 10);
MVM_DEBUGFS_READ_WRITE_FILE_OPS(quota_min, 32);
MVM_DEBUGFS_READ_FILE_OPS(os_device_timediff);
MVM_DEBUGFS_WRITE_FILE_OPS(int_mlo_scan, 32);
MVM_DEBUGFS_READ_WRITE_FILE_OPS(esr_disable_reason, 32);
void iwl_mvm_vif_add_debugfs(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
{
@ -796,6 +833,7 @@ void iwl_mvm_vif_add_debugfs(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
debugfs_create_bool("ftm_unprotected", 0200, mvmvif->dbgfs_dir,
&mvmvif->ftm_unprotected);
MVM_DEBUGFS_ADD_FILE_VIF(int_mlo_scan, mvmvif->dbgfs_dir, 0200);
MVM_DEBUGFS_ADD_FILE_VIF(esr_disable_reason, mvmvif->dbgfs_dir, 0600);
if (vif->type == NL80211_IFTYPE_STATION && !vif->p2p &&
mvmvif == mvm->bf_allowed_vif)


@ -494,7 +494,7 @@ static void iwl_mvm_uats_init(struct iwl_mvm *mvm)
int ret;
struct iwl_host_cmd cmd = {
.id = WIDE_ID(REGULATORY_AND_NVM_GROUP,
UATS_TABLE_CMD),
MCC_ALLOWED_AP_TYPE_CMD),
.flags = 0,
.data[0] = &mvm->fwrt.uats_table,
.len[0] = sizeof(mvm->fwrt.uats_table),
@ -516,7 +516,7 @@ static void iwl_mvm_uats_init(struct iwl_mvm *mvm)
IWL_FW_CMD_VER_UNKNOWN);
if (cmd_ver != 1) {
IWL_DEBUG_RADIO(mvm,
"UATS_TABLE_CMD ver %d not supported\n",
"MCC_ALLOWED_AP_TYPE_CMD ver %d not supported\n",
cmd_ver);
return;
}
@ -529,9 +529,10 @@ static void iwl_mvm_uats_init(struct iwl_mvm *mvm)
ret = iwl_mvm_send_cmd(mvm, &cmd);
if (ret < 0)
IWL_ERR(mvm, "failed to send UATS_TABLE_CMD (%d)\n", ret);
IWL_ERR(mvm, "failed to send MCC_ALLOWED_AP_TYPE_CMD (%d)\n",
ret);
else
IWL_DEBUG_RADIO(mvm, "UATS_TABLE_CMD sent to FW\n");
IWL_DEBUG_RADIO(mvm, "MCC_ALLOWED_AP_TYPE_CMD sent to FW\n");
}
static int iwl_mvm_sgom_init(struct iwl_mvm *mvm)


@ -5,6 +5,48 @@
#include "mvm.h"
#include "time-event.h"
#define HANDLE_ESR_REASONS(HOW) \
HOW(BLOCKED_PREVENTION) \
HOW(BLOCKED_WOWLAN) \
HOW(BLOCKED_TPT) \
HOW(BLOCKED_FW) \
HOW(BLOCKED_NON_BSS) \
HOW(EXIT_MISSED_BEACON) \
HOW(EXIT_LOW_RSSI) \
HOW(EXIT_COEX) \
HOW(EXIT_BANDWIDTH) \
HOW(EXIT_CSA) \
HOW(EXIT_LINK_USAGE)
static const char *const iwl_mvm_esr_states_names[] = {
#define NAME_ENTRY(x) [ilog2(IWL_MVM_ESR_##x)] = #x,
HANDLE_ESR_REASONS(NAME_ENTRY)
};
const char *iwl_get_esr_state_string(enum iwl_mvm_esr_state state)
{
int offs = ilog2(state);
if (offs >= ARRAY_SIZE(iwl_mvm_esr_states_names) ||
!iwl_mvm_esr_states_names[offs])
return "UNKNOWN";
return iwl_mvm_esr_states_names[offs];
}
static void iwl_mvm_print_esr_state(struct iwl_mvm *mvm, u32 mask)
{
#define NAME_FMT(x) "%s"
#define NAME_PR(x) (mask & IWL_MVM_ESR_##x) ? "[" #x "]" : "",
IWL_DEBUG_INFO(mvm,
"EMLSR state = " HANDLE_ESR_REASONS(NAME_FMT)
" (0x%x)\n",
HANDLE_ESR_REASONS(NAME_PR)
mask);
#undef NAME_FMT
#undef NAME_PR
}
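[Editor's note: HANDLE_ESR_REASONS() above is an X-macro: the reason list is
written once and expanded twice, first into a bit-index-to-name table and then
into a "%s" format/argument pair per flag. A self-contained sketch of the same
trick:]

#include <linux/log2.h>
#include <linux/printk.h>
#include <linux/types.h>

/* One list, expanded multiple times. */
#define EXAMPLE_FLAGS(HOW)	\
	HOW(ALPHA)		\
	HOW(BETA)		\
	HOW(GAMMA)

enum example_flag {
	EXAMPLE_ALPHA	= 0x1,
	EXAMPLE_BETA	= 0x2,
	EXAMPLE_GAMMA	= 0x4,
};

/* Expansion 1: bit index -> name. */
static const char *const example_flag_names[] = {
#define NAME_ENTRY(x) [ilog2(EXAMPLE_##x)] = #x,
	EXAMPLE_FLAGS(NAME_ENTRY)
#undef NAME_ENTRY
};

static const char *example_flag_name(enum example_flag f)
{
	return example_flag_names[ilog2(f)];
}

/* Expansion 2: one "%s" plus one argument per flag in the mask. */
static void example_print_flags(u32 mask)
{
#define NAME_FMT(x) "%s"
#define NAME_PR(x) (mask & EXAMPLE_##x) ? "[" #x "]" : "",
	pr_info("flags = " EXAMPLE_FLAGS(NAME_FMT) " (0x%x)\n",
		EXAMPLE_FLAGS(NAME_PR) mask);
#undef NAME_FMT
#undef NAME_PR
}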
static u32 iwl_mvm_get_free_fw_link_id(struct iwl_mvm *mvm,
struct iwl_mvm_vif *mvm_vif)
{
@ -108,6 +150,65 @@ int iwl_mvm_add_link(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
return iwl_mvm_link_cmd_send(mvm, &cmd, FW_CTXT_ACTION_ADD);
}
struct iwl_mvm_esr_iter_data {
struct ieee80211_vif *vif;
unsigned int link_id;
bool lift_block;
};
static void iwl_mvm_esr_vif_iterator(void *_data, u8 *mac,
struct ieee80211_vif *vif)
{
struct iwl_mvm_esr_iter_data *data = _data;
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
int link_id;
if (ieee80211_vif_type_p2p(vif) == NL80211_IFTYPE_STATION)
return;
for_each_mvm_vif_valid_link(mvmvif, link_id) {
struct iwl_mvm_vif_link_info *link_info =
mvmvif->link[link_id];
if (vif == data->vif && link_id == data->link_id)
continue;
if (link_info->active)
data->lift_block = false;
}
}
int iwl_mvm_esr_non_bss_link(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
unsigned int link_id, bool active)
{
/* An active link of a non-station vif blocks EMLSR. Upon activation
* block EMLSR on the bss vif. Upon deactivation, check if this link
* was the last active non-station link and, if so, unblock the bss vif.
*/
struct ieee80211_vif *bss_vif = iwl_mvm_get_bss_vif(mvm);
struct iwl_mvm_esr_iter_data data = {
.vif = vif,
.link_id = link_id,
.lift_block = true,
};
if (IS_ERR_OR_NULL(bss_vif))
return 0;
if (active)
return iwl_mvm_block_esr_sync(mvm, bss_vif,
IWL_MVM_ESR_BLOCKED_NON_BSS);
ieee80211_iterate_active_interfaces(mvm->hw,
IEEE80211_IFACE_ITER_NORMAL,
iwl_mvm_esr_vif_iterator, &data);
if (data.lift_block) {
mutex_lock(&mvm->mutex);
iwl_mvm_unblock_esr(mvm, bss_vif, IWL_MVM_ESR_BLOCKED_NON_BSS);
mutex_unlock(&mvm->mutex);
}
return 0;
}
int iwl_mvm_link_changed(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
struct ieee80211_bss_conf *link_conf,
u32 changes, bool active)
@ -591,28 +692,41 @@ s8 iwl_mvm_get_esr_rssi_thresh(struct iwl_mvm *mvm,
}
static u32
iwl_mvm_esr_disallowed_with_link(struct ieee80211_vif *vif,
const struct iwl_mvm_link_sel_data *link)
iwl_mvm_esr_disallowed_with_link(struct iwl_mvm *mvm,
struct ieee80211_vif *vif,
const struct iwl_mvm_link_sel_data *link,
bool primary)
{
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
struct iwl_mvm *mvm = mvmvif->mvm;
struct wiphy *wiphy = mvm->hw->wiphy;
struct ieee80211_bss_conf *conf;
enum iwl_mvm_esr_state ret = 0;
s8 thresh;
conf = wiphy_dereference(wiphy, vif->link_conf[link->link_id]);
if (WARN_ON_ONCE(!conf))
return false;
/* BT Coex effects eSR mode only if one of the links is on LB */
if (link->chandef->chan->band == NL80211_BAND_2GHZ &&
mvmvif->esr_disable_reason & IWL_MVM_ESR_BLOCKED_COEX)
ret |= IWL_MVM_ESR_BLOCKED_COEX;
(!iwl_mvm_bt_coex_calculate_esr_mode(mvm, vif, link->signal,
primary)))
ret |= IWL_MVM_ESR_EXIT_COEX;
thresh = iwl_mvm_get_esr_rssi_thresh(mvm, link->chandef,
false);
if (link->signal < thresh)
ret |= IWL_MVM_ESR_EXIT_LOW_RSSI;
if (ret)
if (conf->csa_active)
ret |= IWL_MVM_ESR_EXIT_CSA;
if (ret) {
IWL_DEBUG_INFO(mvm,
"Link %d is not allowed for esr. Reason: 0x%x\n",
link->link_id, ret);
"Link %d is not allowed for esr\n",
link->link_id);
iwl_mvm_print_esr_state(mvm, ret);
}
return ret;
}
@ -621,13 +735,30 @@ bool iwl_mvm_mld_valid_link_pair(struct ieee80211_vif *vif,
const struct iwl_mvm_link_sel_data *a,
const struct iwl_mvm_link_sel_data *b)
{
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
struct iwl_mvm *mvm = mvmvif->mvm;
enum iwl_mvm_esr_state ret = 0;
/* Per-link considerations */
if (iwl_mvm_esr_disallowed_with_link(vif, a) ||
iwl_mvm_esr_disallowed_with_link(vif, b))
if (iwl_mvm_esr_disallowed_with_link(mvm, vif, a, true) ||
iwl_mvm_esr_disallowed_with_link(mvm, vif, b, false))
return false;
/* Per-combination considerations */
return a->chandef->chan->band != b->chandef->chan->band;
if (a->chandef->width != b->chandef->width ||
!(a->chandef->chan->band == NL80211_BAND_6GHZ &&
b->chandef->chan->band == NL80211_BAND_5GHZ))
ret |= IWL_MVM_ESR_EXIT_BANDWIDTH;
if (ret) {
IWL_DEBUG_INFO(mvm,
"Links %d and %d are not a valid pair for EMLSR\n",
a->link_id, b->link_id);
iwl_mvm_print_esr_state(mvm, ret);
return false;
}
return true;
}
EXPORT_SYMBOL_IF_IWLWIFI_KUNIT(iwl_mvm_mld_valid_link_pair);
@ -700,9 +831,9 @@ void iwl_mvm_select_links(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
primary_link = best_link->link_id;
new_active_links = BIT(best_link->link_id);
/* eSR is not supported/allowed, or only one usable link */
if (max_active_links == 1 || !iwl_mvm_esr_allowed_on_vif(mvm, vif) ||
n_data == 1)
/* eSR is not supported/blocked, or only one usable link */
if (max_active_links == 1 || !iwl_mvm_vif_has_esr_cap(mvm, vif) ||
mvmvif->esr_disable_reason || n_data == 1)
goto set_active;
for (u8 a = 0; a < n_data; a++)
@ -724,8 +855,8 @@ void iwl_mvm_select_links(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
if (hweight16(new_active_links) <= 1)
goto set_active;
/* prefer single link over marginal eSR improvement */
if (best_link->grade * 110 / 100 >= max_esr_grade) {
/* For equal grade - prefer EMLSR */
if (best_link->grade > max_esr_grade) {
primary_link = best_link->link_id;
new_active_links = BIT(best_link->link_id);
}
@ -784,49 +915,58 @@ u8 iwl_mvm_get_other_link(struct ieee80211_vif *vif, u8 link_id)
#define IWL_MVM_ESR_PREVENT_SHORT (HZ * 300)
#define IWL_MVM_ESR_PREVENT_LONG (HZ * 600)
static void iwl_mvm_recalc_esr_prevention(struct iwl_mvm *mvm,
struct iwl_mvm_vif *mvmvif,
enum iwl_mvm_esr_state reason)
static bool iwl_mvm_check_esr_prevention(struct iwl_mvm *mvm,
struct iwl_mvm_vif *mvmvif,
enum iwl_mvm_esr_state reason)
{
unsigned long now = jiffies;
bool timeout_expired = time_after(jiffies,
mvmvif->last_esr_exit.ts +
IWL_MVM_PREVENT_ESR_TIMEOUT);
unsigned long delay;
bool timeout_expired =
time_after(now, mvmvif->last_esr_exit.ts +
IWL_MVM_PREVENT_ESR_TIMEOUT);
if (WARN_ON(!(IWL_MVM_ESR_PREVENT_REASONS & reason)))
return;
lockdep_assert_held(&mvm->mutex);
mvmvif->last_esr_exit.ts = now;
/* Only handle reasons that can cause prevention */
if (!(reason & IWL_MVM_ESR_PREVENT_REASONS))
return false;
if (timeout_expired ||
mvmvif->last_esr_exit.reason != reason) {
mvmvif->last_esr_exit.reason = reason;
/*
* Reset the counter if more than 400 seconds have passed between one
* exit and the other, or if we exited due to a different reason.
* Will also reset the counter after the long prevention is done.
*/
if (timeout_expired || mvmvif->last_esr_exit.reason != reason) {
mvmvif->exit_same_reason_count = 1;
return;
return false;
}
mvmvif->exit_same_reason_count++;
if (WARN_ON(mvmvif->exit_same_reason_count < 2 ||
mvmvif->exit_same_reason_count > 3))
return;
return false;
mvmvif->esr_disable_reason |= IWL_MVM_ESR_BLOCKED_PREVENTION;
/*
* For the second exit, use a short prevention, and for the third one,
* use a long prevention.
*/
delay = mvmvif->exit_same_reason_count == 2 ?
IWL_MVM_ESR_PREVENT_SHORT :
IWL_MVM_ESR_PREVENT_LONG;
IWL_DEBUG_INFO(mvm,
"Preventing EMLSR for %ld seconds due to %u exits with the reason 0x%x\n",
delay / HZ, mvmvif->exit_same_reason_count, reason);
"Preventing EMLSR for %ld seconds due to %u exits with the reason = %s (0x%x)\n",
delay / HZ, mvmvif->exit_same_reason_count,
iwl_get_esr_state_string(reason), reason);
wiphy_delayed_work_queue(mvm->hw->wiphy,
&mvmvif->prevent_esr_done_wk, delay);
return true;
}
#define IWL_MVM_TRIGGER_LINK_SEL_TIME (IWL_MVM_TRIGGER_LINK_SEL_TIME_SEC * HZ)
/* API to exit eSR mode */
void iwl_mvm_exit_esr(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
enum iwl_mvm_esr_state reason,
@ -834,6 +974,7 @@ void iwl_mvm_exit_esr(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
{
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
u16 new_active_links;
bool prevented;
lockdep_assert_held(&mvm->mutex);
@ -849,13 +990,31 @@ void iwl_mvm_exit_esr(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
new_active_links = BIT(link_to_keep);
IWL_DEBUG_INFO(mvm,
"Exiting EMLSR. Reason = 0x%x. Current active links=0x%x, new active links = 0x%x\n",
reason, vif->active_links, new_active_links);
"Exiting EMLSR. reason = %s (0x%x). Current active links=0x%x, new active links = 0x%x\n",
iwl_get_esr_state_string(reason), reason,
vif->active_links, new_active_links);
ieee80211_set_active_links_async(vif, new_active_links);
if (IWL_MVM_ESR_PREVENT_REASONS & reason)
iwl_mvm_recalc_esr_prevention(mvm, mvmvif, reason);
/* Prevent EMLSR if needed */
prevented = iwl_mvm_check_esr_prevention(mvm, mvmvif, reason);
/* Remember why and when we exited EMLSR */
mvmvif->last_esr_exit.ts = jiffies;
mvmvif->last_esr_exit.reason = reason;
/*
* If EMLSR is prevented now - don't try to get back to EMLSR.
* If we exited due to a blocking event, we will try to get back to
* EMLSR when the corresponding unblocking event will happen.
*/
if (prevented || reason & IWL_MVM_BLOCK_ESR_REASONS)
return;
/* If EMLSR is not blocked - try enabling it again in 30 seconds */
wiphy_delayed_work_queue(mvm->hw->wiphy,
&mvmvif->mlo_int_scan_wk,
round_jiffies_relative(IWL_MVM_TRIGGER_LINK_SEL_TIME));
}
void iwl_mvm_block_esr(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
@ -870,15 +1029,81 @@ void iwl_mvm_block_esr(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
if (WARN_ON(!(reason & IWL_MVM_BLOCK_ESR_REASONS)))
return;
if (!(mvmvif->esr_disable_reason & reason))
IWL_DEBUG_INFO(mvm, "Blocking EMSLR mode. reason = 0x%x\n",
reason);
if (!(mvmvif->esr_disable_reason & reason)) {
IWL_DEBUG_INFO(mvm,
"Blocking EMLSR mode. reason = %s (0x%x)\n",
iwl_get_esr_state_string(reason), reason);
iwl_mvm_print_esr_state(mvm, mvmvif->esr_disable_reason);
}
mvmvif->esr_disable_reason |= reason;
iwl_mvm_exit_esr(mvm, vif, reason, link_to_keep);
}
int iwl_mvm_block_esr_sync(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
enum iwl_mvm_esr_state reason)
{
int primary_link = iwl_mvm_get_primary_link(vif);
int ret;
if (!IWL_MVM_AUTO_EML_ENABLE || !ieee80211_vif_is_mld(vif))
return 0;
/* This should be called only with blocking reasons */
if (WARN_ON(!(reason & IWL_MVM_BLOCK_ESR_REASONS)))
return 0;
/* leave ESR immediately, not only async with iwl_mvm_block_esr() */
ret = ieee80211_set_active_links(vif, BIT(primary_link));
if (ret)
return ret;
mutex_lock(&mvm->mutex);
/* only additionally block for consistency and to avoid concurrency */
iwl_mvm_block_esr(mvm, vif, reason, primary_link);
mutex_unlock(&mvm->mutex);
return 0;
}
static void iwl_mvm_esr_unblocked(struct iwl_mvm *mvm,
struct ieee80211_vif *vif)
{
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
bool need_new_sel = time_after(jiffies, mvmvif->last_esr_exit.ts +
IWL_MVM_TRIGGER_LINK_SEL_TIME);
lockdep_assert_held(&mvm->mutex);
if (!ieee80211_vif_is_mld(vif) || !mvmvif->authorized ||
mvmvif->esr_active)
return;
IWL_DEBUG_INFO(mvm, "EMLSR is unblocked\n");
/*
* If EMLSR was blocked for more than 30 seconds, or the last link
* selection decided to not enter EMLSR, trigger a new scan.
*/
if (need_new_sel || hweight16(mvmvif->link_selection_res) < 2) {
IWL_DEBUG_INFO(mvm, "Trigger MLO scan\n");
wiphy_delayed_work_queue(mvm->hw->wiphy,
&mvmvif->mlo_int_scan_wk, 0);
/*
* If EMLSR was blocked for less than 30 seconds, and the last link
* selection decided to use EMLSR, activate EMLSR using the previous
* link selection result.
*/
} else {
IWL_DEBUG_INFO(mvm,
"Use the latest link selection result: 0x%x\n",
mvmvif->link_selection_res);
ieee80211_set_active_links_async(vif,
mvmvif->link_selection_res);
}
}
void iwl_mvm_unblock_esr(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
enum iwl_mvm_esr_state reason)
{
@ -890,9 +1115,17 @@ void iwl_mvm_unblock_esr(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
if (WARN_ON(!(reason & IWL_MVM_BLOCK_ESR_REASONS)))
return;
if (mvmvif->esr_disable_reason & reason)
IWL_DEBUG_INFO(mvm, "Unblocking EMSLR mode. reason = 0x%x\n",
reason);
/* No Change */
if (!(mvmvif->esr_disable_reason & reason))
return;
mvmvif->esr_disable_reason &= ~reason;
IWL_DEBUG_INFO(mvm,
"Unblocking EMLSR mode. reason = %s (0x%x)\n",
iwl_get_esr_state_string(reason), reason);
iwl_mvm_print_esr_state(mvm, mvmvif->esr_disable_reason);
if (!mvmvif->esr_disable_reason)
iwl_mvm_esr_unblocked(mvm, vif);
}


@ -1163,6 +1163,13 @@ static int iwl_mvm_mac_ctxt_send_beacon_v9(struct iwl_mvm *mvm,
WLAN_EID_EXT_CHANSWITCH_ANN,
beacon->len));
if (vif->type == NL80211_IFTYPE_AP &&
iwl_fw_lookup_cmd_ver(mvm->fw, BEACON_TEMPLATE_CMD, 0) >= 14)
beacon_cmd.btwt_offset =
cpu_to_le32(iwl_mvm_find_ie_offset(beacon->data,
WLAN_EID_S1G_TWT,
beacon->len));
return iwl_mvm_mac_ctxt_send_beacon_cmd(mvm, beacon, &beacon_cmd,
sizeof(beacon_cmd));
}


@ -1350,6 +1350,7 @@ void iwl_mvm_mac_stop(struct ieee80211_hw *hw)
iwl_mvm_scan_stop(mvm, IWL_MVM_SCAN_INT_MLO, false);
mutex_unlock(&mvm->mutex);
wiphy_work_cancel(mvm->hw->wiphy, &mvm->trig_link_selection_wk);
wiphy_work_flush(mvm->hw->wiphy, &mvm->async_handlers_wiphy_wk);
flush_work(&mvm->async_handlers_wk);
flush_work(&mvm->add_stream_wk);
@ -1625,6 +1626,32 @@ static void iwl_mvm_prevent_esr_done_wk(struct wiphy *wiphy,
mutex_unlock(&mvm->mutex);
}
static void iwl_mvm_mlo_int_scan_wk(struct wiphy *wiphy, struct wiphy_work *wk)
{
struct iwl_mvm_vif *mvmvif = container_of(wk, struct iwl_mvm_vif,
mlo_int_scan_wk.work);
struct ieee80211_vif *vif =
container_of((void *)mvmvif, struct ieee80211_vif, drv_priv);
mutex_lock(&mvmvif->mvm->mutex);
iwl_mvm_int_mlo_scan(mvmvif->mvm, vif);
mutex_unlock(&mvmvif->mvm->mutex);
}
static void iwl_mvm_unblock_esr_tpt(struct wiphy *wiphy, struct wiphy_work *wk)
{
struct iwl_mvm_vif *mvmvif =
container_of(wk, struct iwl_mvm_vif, unblock_esr_tpt_wk);
struct iwl_mvm *mvm = mvmvif->mvm;
struct ieee80211_vif *vif = iwl_mvm_get_bss_vif(mvm);
mutex_lock(&mvm->mutex);
iwl_mvm_unblock_esr(mvm, vif, IWL_MVM_ESR_BLOCKED_TPT);
mutex_unlock(&mvm->mutex);
}
void iwl_mvm_mac_init_mvmvif(struct iwl_mvm *mvm, struct iwl_mvm_vif *mvmvif)
{
lockdep_assert_held(&mvm->mutex);
@ -1637,6 +1664,12 @@ void iwl_mvm_mac_init_mvmvif(struct iwl_mvm *mvm, struct iwl_mvm_vif *mvmvif)
wiphy_delayed_work_init(&mvmvif->prevent_esr_done_wk,
iwl_mvm_prevent_esr_done_wk);
wiphy_delayed_work_init(&mvmvif->mlo_int_scan_wk,
iwl_mvm_mlo_int_scan_wk);
wiphy_work_init(&mvmvif->unblock_esr_tpt_wk,
iwl_mvm_unblock_esr_tpt);
}
static int iwl_mvm_mac_add_interface(struct ieee80211_hw *hw,
@ -1783,6 +1816,11 @@ void iwl_mvm_prepare_mac_removal(struct iwl_mvm *mvm,
wiphy_delayed_work_cancel(mvm->hw->wiphy,
&mvmvif->prevent_esr_done_wk);
wiphy_delayed_work_cancel(mvm->hw->wiphy,
&mvmvif->mlo_int_scan_wk);
wiphy_work_cancel(mvm->hw->wiphy, &mvmvif->unblock_esr_tpt_wk);
cancel_delayed_work_sync(&mvmvif->csa_work);
}
@ -3877,24 +3915,6 @@ out:
return callbacks->update_sta(mvm, vif, sta);
}
static void iwl_mvm_bt_coex_update_vif_esr(struct iwl_mvm *mvm,
struct ieee80211_vif *vif)
{
unsigned long usable_links = ieee80211_vif_usable_links(vif);
u8 link_id;
for_each_set_bit(link_id, &usable_links, IEEE80211_MLD_MAX_NUM_LINKS) {
struct ieee80211_bss_conf *link_conf =
link_conf_dereference_protected(vif, link_id);
if (WARN_ON_ONCE(!link_conf))
return;
if (link_conf->chanreq.oper.chan->band == NL80211_BAND_2GHZ)
iwl_mvm_bt_coex_update_link_esr(mvm, vif, link_id);
}
}
static int
iwl_mvm_sta_state_assoc_to_authorized(struct iwl_mvm *mvm,
struct ieee80211_vif *vif,
@ -3918,9 +3938,12 @@ iwl_mvm_sta_state_assoc_to_authorized(struct iwl_mvm *mvm,
WARN_ON(iwl_mvm_enable_beacon_filter(mvm, vif));
mvmvif->authorized = 1;
mvmvif->link_selection_res = 0;
mvmvif->link_selection_primary =
vif->active_links ? __ffs(vif->active_links) : 0;
if (!test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status)) {
mvmvif->link_selection_res = vif->active_links;
mvmvif->link_selection_primary =
vif->active_links ? __ffs(vif->active_links) : 0;
}
callbacks->mac_ctxt_changed(mvm, vif, false);
iwl_mvm_mei_host_associated(mvm, vif, mvm_sta);
@ -3928,8 +3951,10 @@ iwl_mvm_sta_state_assoc_to_authorized(struct iwl_mvm *mvm,
memset(&mvmvif->last_esr_exit, 0,
sizeof(mvmvif->last_esr_exit));
/* Calculate eSR mode due to BT coex */
iwl_mvm_bt_coex_update_vif_esr(mvm, vif);
iwl_mvm_block_esr(mvm, vif, IWL_MVM_ESR_BLOCKED_TPT, 0);
/* Block until the FW notif arrives */
iwl_mvm_block_esr(mvm, vif, IWL_MVM_ESR_BLOCKED_FW, 0);
/* when client is authorized (AP station marked as such),
* try to enable the best link(s).
@ -3988,6 +4013,11 @@ iwl_mvm_sta_state_authorized_to_assoc(struct iwl_mvm *mvm,
wiphy_delayed_work_cancel(mvm->hw->wiphy,
&mvmvif->prevent_esr_done_wk);
wiphy_delayed_work_cancel(mvm->hw->wiphy,
&mvmvif->mlo_int_scan_wk);
wiphy_work_cancel(mvm->hw->wiphy, &mvmvif->unblock_esr_tpt_wk);
/* No need for the periodic statistics anymore */
if (ieee80211_vif_is_mld(vif) && mvmvif->esr_active)
iwl_mvm_request_periodic_system_statistics(mvm, false);
@ -4804,6 +4834,10 @@ int iwl_mvm_roc_common(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
*/
flush_work(&mvm->roc_done_wk);
ret = iwl_mvm_esr_non_bss_link(mvm, vif, 0, true);
if (ret)
return ret;
mutex_lock(&mvm->mutex);
switch (vif->type) {
@ -4847,9 +4881,7 @@ int iwl_mvm_cancel_roc(struct ieee80211_hw *hw,
IWL_DEBUG_MAC80211(mvm, "enter\n");
mutex_lock(&mvm->mutex);
iwl_mvm_stop_roc(mvm, vif);
mutex_unlock(&mvm->mutex);
IWL_DEBUG_MAC80211(mvm, "leave\n");
return 0;
@ -5544,17 +5576,16 @@ static void iwl_mvm_csa_block_txqs(void *data, struct ieee80211_sta *sta)
}
#define IWL_MAX_CSA_BLOCK_TX 1500
int iwl_mvm_pre_channel_switch(struct ieee80211_hw *hw,
int iwl_mvm_pre_channel_switch(struct iwl_mvm *mvm,
struct ieee80211_vif *vif,
struct ieee80211_channel_switch *chsw)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
struct ieee80211_vif *csa_vif;
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
struct iwl_mvm_txq *mvmtxq;
int ret;
mutex_lock(&mvm->mutex);
lockdep_assert_held(&mvm->mutex);
mvmvif->csa_failed = false;
mvmvif->csa_blocks_tx = false;
@ -5572,25 +5603,19 @@ int iwl_mvm_pre_channel_switch(struct ieee80211_hw *hw,
rcu_dereference_protected(mvm->csa_vif,
lockdep_is_held(&mvm->mutex));
if (WARN_ONCE(csa_vif && csa_vif->bss_conf.csa_active,
"Another CSA is already in progress")) {
ret = -EBUSY;
goto out_unlock;
}
"Another CSA is already in progress"))
return -EBUSY;
/* we still didn't unblock tx. prevent new CS meanwhile */
if (rcu_dereference_protected(mvm->csa_tx_blocked_vif,
lockdep_is_held(&mvm->mutex))) {
ret = -EBUSY;
goto out_unlock;
}
lockdep_is_held(&mvm->mutex)))
return -EBUSY;
rcu_assign_pointer(mvm->csa_vif, vif);
if (WARN_ONCE(mvmvif->csa_countdown,
"Previous CSA countdown didn't complete")) {
ret = -EBUSY;
goto out_unlock;
}
"Previous CSA countdown didn't complete"))
return -EBUSY;
mvmvif->csa_target_freq = chsw->chandef.chan->center_freq;
@ -5624,10 +5649,8 @@ int iwl_mvm_pre_channel_switch(struct ieee80211_hw *hw,
* we don't know the dtim period. In this case, the firmware can't
* track the beacons.
*/
if (!vif->cfg.assoc || !vif->bss_conf.dtim_period) {
ret = -EBUSY;
goto out_unlock;
}
if (!vif->cfg.assoc || !vif->bss_conf.dtim_period)
return -EBUSY;
if (chsw->delay > IWL_MAX_CSA_BLOCK_TX &&
hweight16(vif->valid_links) <= 1)
@ -5649,7 +5672,7 @@ int iwl_mvm_pre_channel_switch(struct ieee80211_hw *hw,
IWL_UCODE_TLV_CAPA_CHANNEL_SWITCH_CMD)) {
ret = iwl_mvm_old_pre_chan_sw_sta(mvm, vif, chsw);
if (ret)
goto out_unlock;
return ret;
} else {
iwl_mvm_schedule_client_csa(mvm, vif, chsw);
}
@ -5665,12 +5688,23 @@ int iwl_mvm_pre_channel_switch(struct ieee80211_hw *hw,
ret = iwl_mvm_power_update_ps(mvm);
if (ret)
goto out_unlock;
return ret;
/* we won't be on this channel any longer */
iwl_mvm_teardown_tdls_peers(mvm);
out_unlock:
return ret;
}
static int iwl_mvm_mac_pre_channel_switch(struct ieee80211_hw *hw,
struct ieee80211_vif *vif,
struct ieee80211_channel_switch *chsw)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
int ret;
mutex_lock(&mvm->mutex);
ret = iwl_mvm_pre_channel_switch(mvm, vif, chsw);
mutex_unlock(&mvm->mutex);
return ret;
@ -5865,6 +5899,65 @@ void iwl_mvm_mac_flush_sta(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
mutex_unlock(&mvm->mutex);
}
static int iwl_mvm_mac_get_acs_survey(struct iwl_mvm *mvm, int idx,
struct survey_info *survey)
{
int chan_idx;
enum nl80211_band band;
int ret;
mutex_lock(&mvm->mutex);
if (!mvm->acs_survey) {
ret = -ENOENT;
goto out;
}
/* Find and return the next entry that has a non-zero active time */
for (band = 0; band < NUM_NL80211_BANDS; band++) {
struct ieee80211_supported_band *sband =
mvm->hw->wiphy->bands[band];
if (!sband)
continue;
for (chan_idx = 0; chan_idx < sband->n_channels; chan_idx++) {
struct iwl_mvm_acs_survey_channel *info =
&mvm->acs_survey->bands[band][chan_idx];
if (!info->time)
continue;
/* Found (the next) channel to report */
survey->channel = &sband->channels[chan_idx];
survey->filled = SURVEY_INFO_TIME |
SURVEY_INFO_TIME_BUSY |
SURVEY_INFO_TIME_RX |
SURVEY_INFO_TIME_TX;
survey->time = info->time;
survey->time_busy = info->time_busy;
survey->time_rx = info->time_rx;
survey->time_tx = info->time_tx;
survey->noise = info->noise;
if (survey->noise < 0)
survey->filled |= SURVEY_INFO_NOISE_DBM;
/* Clear time so that channel is only reported once */
info->time = 0;
ret = 0;
goto out;
}
}
ret = -ENOENT;
out:
mutex_unlock(&mvm->mutex);
return ret;
}
int iwl_mvm_mac_get_survey(struct ieee80211_hw *hw, int idx,
struct survey_info *survey)
{
@ -5877,14 +5970,18 @@ int iwl_mvm_mac_get_survey(struct ieee80211_hw *hw, int idx,
memset(survey, 0, sizeof(*survey));
/* only support global statistics right now */
if (idx != 0)
return -ENOENT;
if (!fw_has_capa(&mvm->fw->ucode_capa,
IWL_UCODE_TLV_CAPA_RADIO_BEACON_STATS))
return -ENOENT;
/*
* Return the beacon stats at index zero and pass on following indices
* to the function returning the full survey, most likely for ACS
* (Automatic Channel Selection).
*/
if (idx > 0)
return iwl_mvm_mac_get_acs_survey(mvm, idx - 1, survey);
mutex_lock(&mvm->mutex);
if (iwl_mvm_firmware_running(mvm)) {
@ -6456,7 +6553,7 @@ const struct ieee80211_ops iwl_mvm_hw_ops = {
.set_tim = iwl_mvm_set_tim,
.channel_switch = iwl_mvm_channel_switch,
.pre_channel_switch = iwl_mvm_pre_channel_switch,
.pre_channel_switch = iwl_mvm_mac_pre_channel_switch,
.post_channel_switch = iwl_mvm_post_channel_switch,
.abort_channel_switch = iwl_mvm_abort_channel_switch,
.channel_switch_rx_beacon = iwl_mvm_channel_switch_rx_beacon,


@ -207,6 +207,30 @@ static unsigned int iwl_mvm_mld_count_active_links(struct iwl_mvm_vif *mvmvif)
return n_active;
}
static void iwl_mvm_restart_mpdu_count(struct iwl_mvm *mvm,
struct iwl_mvm_vif *mvmvif)
{
struct ieee80211_sta *ap_sta = mvmvif->ap_sta;
struct iwl_mvm_sta *mvmsta;
lockdep_assert_held(&mvm->mutex);
if (!ap_sta)
return;
mvmsta = iwl_mvm_sta_from_mac80211(ap_sta);
if (!mvmsta->mpdu_counters)
return;
for (int q = 0; q < mvm->trans->num_rx_queues; q++) {
spin_lock_bh(&mvmsta->mpdu_counters[q].lock);
memset(mvmsta->mpdu_counters[q].per_link, 0,
sizeof(mvmsta->mpdu_counters[q].per_link));
mvmsta->mpdu_counters[q].window_start = jiffies;
spin_unlock_bh(&mvmsta->mpdu_counters[q].lock);
}
}
static int iwl_mvm_esr_mode_active(struct iwl_mvm *mvm,
struct ieee80211_vif *vif)
{
@ -243,6 +267,13 @@ static int iwl_mvm_esr_mode_active(struct iwl_mvm *mvm,
/* Needed for tracking RSSI */
iwl_mvm_request_periodic_system_statistics(mvm, true);
/*
* Restart the MPDU counters and the counting window, so when the
* statistics arrive (which is where we look at the counters) we
* will be at the end of the window.
*/
iwl_mvm_restart_mpdu_count(mvm, mvmvif);
return ret;
}
@ -360,6 +391,18 @@ static int iwl_mvm_mld_assign_vif_chanctx(struct ieee80211_hw *hw,
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
int ret;
/* update EMLSR mode */
if (ieee80211_vif_type_p2p(vif) != NL80211_IFTYPE_STATION) {
ret = iwl_mvm_esr_non_bss_link(mvm, vif, link_conf->link_id,
true);
/*
* Don't activate this link if failed to exit EMLSR in
* the BSS interface
*/
if (ret)
return ret;
}
mutex_lock(&mvm->mutex);
ret = __iwl_mvm_mld_assign_vif_chanctx(mvm, vif, link_conf, ctx, false);
mutex_unlock(&mvm->mutex);
@ -412,6 +455,9 @@ static int iwl_mvm_esr_mode_inactive(struct iwl_mvm *mvm,
iwl_mvm_request_periodic_system_statistics(mvm, false);
/* Start a new counting window */
iwl_mvm_restart_mpdu_count(mvm, mvmvif);
return ret;
}
@ -480,6 +526,10 @@ static void iwl_mvm_mld_unassign_vif_chanctx(struct ieee80211_hw *hw,
iwl_mvm_add_link(mvm, vif, link_conf);
}
mutex_unlock(&mvm->mutex);
/* update EMLSR mode */
if (ieee80211_vif_type_p2p(vif) != NL80211_IFTYPE_STATION)
iwl_mvm_esr_non_bss_link(mvm, vif, link_conf->link_id, false);
}
static void
@ -659,6 +709,25 @@ static int iwl_mvm_mld_mac_sta_state(struct ieee80211_hw *hw,
&callbacks);
}
static bool iwl_mvm_esr_bw_criteria(struct iwl_mvm *mvm,
struct ieee80211_vif *vif,
struct ieee80211_bss_conf *link_conf)
{
struct ieee80211_bss_conf *other_link;
int link_id;
/* Exit EMLSR if links don't have equal bandwidths */
for_each_vif_active_link(vif, other_link, link_id) {
if (link_id == link_conf->link_id)
continue;
if (link_conf->chanreq.oper.width ==
other_link->chanreq.oper.width)
return true;
}
return false;
}
static void
iwl_mvm_mld_link_info_changed_station(struct iwl_mvm *mvm,
struct ieee80211_vif *vif,
@ -688,6 +757,14 @@ iwl_mvm_mld_link_info_changed_station(struct iwl_mvm *mvm,
link_changes |= LINK_CONTEXT_MODIFY_HE_PARAMS;
}
if ((changes & BSS_CHANGED_BANDWIDTH) &&
ieee80211_vif_link_active(vif, link_conf->link_id) &&
mvmvif->esr_active &&
!iwl_mvm_esr_bw_criteria(mvm, vif, link_conf))
iwl_mvm_exit_esr(mvm, vif,
IWL_MVM_ESR_EXIT_BANDWIDTH,
iwl_mvm_get_primary_link(vif));
/* if associated, maybe puncturing changed - we'll check later */
if (vif->cfg.assoc)
link_changes |= LINK_CONTEXT_MODIFY_EHT_PARAMS;
@ -845,6 +922,11 @@ static void iwl_mvm_mld_vif_cfg_changed_station(struct iwl_mvm *mvm,
if (ret)
IWL_ERR(mvm, "failed to update power mode\n");
}
if (changes & (BSS_CHANGED_MLD_VALID_LINKS | BSS_CHANGED_MLD_TTLM) &&
ieee80211_vif_is_mld(vif) && mvmvif->authorized)
wiphy_delayed_work_queue(mvm->hw->wiphy,
&mvmvif->mlo_int_scan_wk, 0);
}
static void
@ -1130,7 +1212,7 @@ iwl_mvm_mld_change_vif_links(struct ieee80211_hw *hw,
* Ensure we always have a valid primary_link, the real
* decision happens later when PHY is activated.
*/
mvmvif->primary_link = BIT(__ffs(new_links));
mvmvif->primary_link = __ffs(new_links);
}
out_err:
@ -1159,10 +1241,8 @@ iwl_mvm_mld_change_sta_links(struct ieee80211_hw *hw,
return ret;
}
bool iwl_mvm_esr_allowed_on_vif(struct iwl_mvm *mvm,
struct ieee80211_vif *vif)
bool iwl_mvm_vif_has_esr_cap(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
{
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
const struct wiphy_iftype_ext_capab *ext_capa;
lockdep_assert_held(&mvm->mutex);
@ -1176,11 +1256,8 @@ bool iwl_mvm_esr_allowed_on_vif(struct iwl_mvm *mvm,
ext_capa = cfg80211_get_iftype_ext_capa(mvm->hw->wiphy,
ieee80211_vif_type_p2p(vif));
if (!ext_capa ||
!(ext_capa->eml_capabilities & IEEE80211_EML_CAP_EMLSR_SUPP))
return false;
return !(mvmvif->esr_disable_reason & ~IWL_MVM_ESR_BLOCKED_COEX);
return (ext_capa &&
(ext_capa->eml_capabilities & IEEE80211_EML_CAP_EMLSR_SUPP));
}
static bool iwl_mvm_mld_can_activate_links(struct ieee80211_hw *hw,
@ -1204,7 +1281,7 @@ static bool iwl_mvm_mld_can_activate_links(struct ieee80211_hw *hw,
/* If it is an eSR device, check that we can enter eSR */
ret = iwl_mvm_is_esr_supported(mvm->fwrt.trans) &&
iwl_mvm_esr_allowed_on_vif(mvm, vif);
iwl_mvm_vif_has_esr_cap(mvm, vif);
unlock:
mutex_unlock(&mvm->mutex);
@ -1229,6 +1306,45 @@ iwl_mvm_mld_can_neg_ttlm(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
return NEG_TTLM_RES_ACCEPT;
}
static int
iwl_mvm_mld_mac_pre_channel_switch(struct ieee80211_hw *hw,
struct ieee80211_vif *vif,
struct ieee80211_channel_switch *chsw)
{
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
int ret;
mutex_lock(&mvm->mutex);
if (mvmvif->esr_active) {
u8 primary = iwl_mvm_get_primary_link(vif);
int selected;
/* prefer primary unless quiet CSA on it */
if (chsw->link_id == primary && chsw->block_tx)
selected = iwl_mvm_get_other_link(vif, primary);
else
selected = primary;
iwl_mvm_exit_esr(mvm, vif, IWL_MVM_ESR_EXIT_CSA, selected);
mutex_unlock(&mvm->mutex);
/*
* If we've not kept the link active that's doing the CSA
* then we don't need to do anything else, just return.
*/
if (selected != chsw->link_id)
return 0;
mutex_lock(&mvm->mutex);
}
ret = iwl_mvm_pre_channel_switch(mvm, vif, chsw);
mutex_unlock(&mvm->mutex);
return ret;
}
const struct ieee80211_ops iwl_mvm_mld_hw_ops = {
.tx = iwl_mvm_mac_tx,
.wake_tx_queue = iwl_mvm_mac_wake_tx_queue,
@ -1282,7 +1398,7 @@ const struct ieee80211_ops iwl_mvm_mld_hw_ops = {
.tx_last_beacon = iwl_mvm_tx_last_beacon,
.channel_switch = iwl_mvm_channel_switch,
.pre_channel_switch = iwl_mvm_pre_channel_switch,
.pre_channel_switch = iwl_mvm_mld_mac_pre_channel_switch,
.post_channel_switch = iwl_mvm_post_channel_switch,
.abort_channel_switch = iwl_mvm_abort_channel_switch,
.channel_switch_rx_beacon = iwl_mvm_channel_switch_rx_beacon,


@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
/*
* Copyright (C) 2022-2023 Intel Corporation
* Copyright (C) 2022-2024 Intel Corporation
*/
#include "mvm.h"
#include "time-sync.h"
@ -723,7 +723,6 @@ int iwl_mvm_mld_add_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
iwl_mvm_mld_set_ap_sta_id(sta, mvm_vif->link[link_id],
mvm_link_sta);
}
return 0;
err:
@ -849,6 +848,8 @@ int iwl_mvm_mld_rm_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
iwl_mvm_mld_free_sta_link(mvm, mvm_sta, mvm_link_sta,
link_id, stay_in_fw);
}
kfree(mvm_sta->mpdu_counters);
mvm_sta->mpdu_counters = NULL;
return ret;
}


@ -353,24 +353,42 @@ struct iwl_mvm_vif_link_info {
* For the blocking reasons - use iwl_mvm_(un)block_esr(), and for the exit
* reasons - use iwl_mvm_exit_esr().
*
* @IWL_MVM_ESR_BLOCKED_COEX: COEX is preventing the enablement of EMLSR
* Note: new reasons shall be added to HANDLE_ESR_REASONS as well (for logs)
*
* @IWL_MVM_ESR_BLOCKED_PREVENTION: Prevent EMLSR to avoid entering and exiting
* in a loop.
* @IWL_MVM_ESR_BLOCKED_WOWLAN: WOWLAN is preventing the enablement of EMLSR
* @IWL_MVM_ESR_BLOCKED_TPT: block EMLSR when there is not enough traffic
* @IWL_MVM_ESR_BLOCKED_FW: FW did not recommend entering EMLSR, or forced an
* exit from EMLSR
* @IWL_MVM_ESR_BLOCKED_NON_BSS: an active non-BSS link is preventing EMLSR
* @IWL_MVM_ESR_EXIT_MISSED_BEACON: exited EMLSR due to missed beacons
* @IWL_MVM_ESR_EXIT_LOW_RSSI: link is deactivated/not allowed for EMLSR
* due to low RSSI.
* @IWL_MVM_ESR_EXIT_COEX: link is deactivated/not allowed for EMLSR
* due to BT Coex.
* @IWL_MVM_ESR_EXIT_BANDWIDTH: bandwidths of the primary and secondary links
* are preventing the enablement of EMLSR
* @IWL_MVM_ESR_EXIT_CSA: CSA happened, so exit EMLSR
* @IWL_MVM_ESR_EXIT_LINK_USAGE: Exit EMLSR due to low tpt on secondary link
*/
enum iwl_mvm_esr_state {
IWL_MVM_ESR_BLOCKED_COEX = 0x1,
IWL_MVM_ESR_BLOCKED_PREVENTION = 0x2,
IWL_MVM_ESR_BLOCKED_WOWLAN = 0x4,
IWL_MVM_ESR_BLOCKED_PREVENTION = 0x1,
IWL_MVM_ESR_BLOCKED_WOWLAN = 0x2,
IWL_MVM_ESR_BLOCKED_TPT = 0x4,
IWL_MVM_ESR_BLOCKED_FW = 0x8,
IWL_MVM_ESR_BLOCKED_NON_BSS = 0x10,
IWL_MVM_ESR_EXIT_MISSED_BEACON = 0x10000,
IWL_MVM_ESR_EXIT_LOW_RSSI = 0x20000,
IWL_MVM_ESR_EXIT_COEX = 0x40000,
IWL_MVM_ESR_EXIT_BANDWIDTH = 0x80000,
IWL_MVM_ESR_EXIT_CSA = 0x100000,
IWL_MVM_ESR_EXIT_LINK_USAGE = 0x200000,
};
#define IWL_MVM_BLOCK_ESR_REASONS 0xffff
const char *iwl_get_esr_state_string(enum iwl_mvm_esr_state state);
/**
* struct iwl_mvm_esr_exit - details of the last exit from EMLSR mode.
* @reason: The reason for the last exit from EMLSR.
@ -428,6 +446,8 @@ struct iwl_mvm_esr_exit {
* @last_esr_exit::reason, only counting exits due to
* &IWL_MVM_ESR_PREVENT_REASONS.
* @prevent_esr_done_wk: work that should be done when esr prevention ends.
* @mlo_int_scan_wk: work for the internal MLO scan.
* @unblock_esr_tpt_wk: work for unblocking EMLSR when tpt is high enough.
*/
struct iwl_mvm_vif {
struct iwl_mvm *mvm;
@ -524,6 +544,8 @@ struct iwl_mvm_vif {
struct iwl_mvm_esr_exit last_esr_exit;
u8 exit_same_reason_count;
struct wiphy_delayed_work prevent_esr_done_wk;
struct wiphy_delayed_work mlo_int_scan_wk;
struct wiphy_work unblock_esr_tpt_wk;
struct iwl_mvm_vif_link_info deflink;
struct iwl_mvm_vif_link_info *link[IEEE80211_MLD_MAX_NUM_LINKS];
@ -898,6 +920,35 @@ struct iwl_mei_scan_filter {
struct work_struct scan_work;
};
/**
* struct iwl_mvm_acs_survey_channel - per-channel survey information
*
* Stripped down version of &struct survey_info.
*
* @time: time in ms the radio was on the channel
* @time_busy: time in ms the channel was sensed busy
* @time_tx: time in ms spent transmitting data
* @time_rx: time in ms spent receiving data
* @noise: channel noise in dBm
*/
struct iwl_mvm_acs_survey_channel {
u32 time;
u32 time_busy;
u32 time_tx;
u32 time_rx;
s8 noise;
};
struct iwl_mvm_acs_survey {
struct iwl_mvm_acs_survey_channel *bands[NUM_NL80211_BANDS];
/* Overall number of channels */
int n_channels;
/* Storage space for per-channel information follows */
struct iwl_mvm_acs_survey_channel channels[] __counted_by(n_channels);
};
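[Editor's note: the survey storage above is meant to be a single allocation:
struct_size() sizes the trailing channels[] array and the per-band pointers
are then sliced out of it. A hypothetical allocator showing the intended
layout; the real iwlwifi allocation code is not part of this excerpt:]

#include <linux/slab.h>
#include <linux/overflow.h>
#include <net/cfg80211.h>	/* NUM_NL80211_BANDS */

static struct iwl_mvm_acs_survey *
example_alloc_acs_survey(const int n_per_band[NUM_NL80211_BANDS])
{
	struct iwl_mvm_acs_survey *s;
	int band, total = 0, off = 0;

	for (band = 0; band < NUM_NL80211_BANDS; band++)
		total += n_per_band[band];

	/* One allocation covering the header plus all channel entries. */
	s = kzalloc(struct_size(s, channels, total), GFP_KERNEL);
	if (!s)
		return NULL;

	/* Set the __counted_by counter before touching channels[]. */
	s->n_channels = total;

	for (band = 0; band < NUM_NL80211_BANDS; band++) {
		if (!n_per_band[band])
			continue;	/* band absent, pointer stays NULL */
		s->bands[band] = &s->channels[off];
		off += n_per_band[band];
	}

	return s;
}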
struct iwl_mvm {
/* for logger access */
struct device *dev;
@ -917,6 +968,8 @@ struct iwl_mvm {
/* For async rx handlers that require the wiphy lock */
struct wiphy_work async_handlers_wiphy_wk;
struct wiphy_work trig_link_selection_wk;
struct work_struct roc_done_wk;
unsigned long init_status;
@ -1265,6 +1318,8 @@ struct iwl_mvm {
struct iwl_mei_scan_filter mei_scan_filter;
struct iwl_mvm_acs_survey *acs_survey;
bool statistics_clear;
};
@ -2011,6 +2066,8 @@ unsigned int iwl_mvm_get_link_grade(struct ieee80211_bss_conf *link_conf);
bool iwl_mvm_mld_valid_link_pair(struct ieee80211_vif *vif,
const struct iwl_mvm_link_sel_data *a,
const struct iwl_mvm_link_sel_data *b);
s8 iwl_mvm_average_dbm_values(const struct iwl_umac_scan_channel_survey_notif *notif);
#endif
/* AP and IBSS */
@ -2088,13 +2145,13 @@ int iwl_mvm_reg_scan_start(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
struct ieee80211_scan_ies *ies);
size_t iwl_mvm_scan_size(struct iwl_mvm *mvm);
int iwl_mvm_scan_stop(struct iwl_mvm *mvm, int type, bool notify);
int iwl_mvm_int_mlo_scan_start(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
struct ieee80211_channel **channels,
size_t n_channels);
int iwl_mvm_max_scan_ie_len(struct iwl_mvm *mvm);
void iwl_mvm_report_scan_aborted(struct iwl_mvm *mvm);
void iwl_mvm_scan_timeout_wk(struct work_struct *work);
int iwl_mvm_int_mlo_scan(struct iwl_mvm *mvm, struct ieee80211_vif *vif);
void iwl_mvm_rx_channel_survey_notif(struct iwl_mvm *mvm,
struct iwl_rx_cmd_buffer *rxb);
/* Scheduled scan */
void iwl_mvm_rx_lmac_scan_complete_notif(struct iwl_mvm *mvm,
@ -2221,9 +2278,6 @@ bool iwl_mvm_bt_coex_is_tpc_allowed(struct iwl_mvm *mvm,
u8 iwl_mvm_bt_coex_get_single_ant_msk(struct iwl_mvm *mvm, u8 enabled_ants);
u8 iwl_mvm_bt_coex_tx_prio(struct iwl_mvm *mvm, struct ieee80211_hdr *hdr,
struct ieee80211_tx_info *info, u8 ac);
void iwl_mvm_bt_coex_update_link_esr(struct iwl_mvm *mvm,
struct ieee80211_vif *vif,
int link_id);
/* beacon filtering */
#ifdef CONFIG_IWLWIFI_DEBUGFS
@@ -2811,7 +2865,7 @@ void iwl_mvm_change_chanctx(struct ieee80211_hw *hw,
int iwl_mvm_tx_last_beacon(struct ieee80211_hw *hw);
void iwl_mvm_channel_switch(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
struct ieee80211_channel_switch *chsw);
int iwl_mvm_pre_channel_switch(struct ieee80211_hw *hw,
int iwl_mvm_pre_channel_switch(struct iwl_mvm *mvm,
struct ieee80211_vif *vif,
struct ieee80211_channel_switch *chsw);
void iwl_mvm_abort_channel_switch(struct ieee80211_hw *hw,
@@ -2875,11 +2929,12 @@ int iwl_mvm_roc_add_cmd(struct iwl_mvm *mvm,
int duration, u32 activity);
/* EMLSR */
bool iwl_mvm_esr_allowed_on_vif(struct iwl_mvm *mvm,
struct ieee80211_vif *vif);
bool iwl_mvm_vif_has_esr_cap(struct iwl_mvm *mvm, struct ieee80211_vif *vif);
void iwl_mvm_block_esr(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
enum iwl_mvm_esr_state reason,
u8 link_to_keep);
int iwl_mvm_block_esr_sync(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
enum iwl_mvm_esr_state reason);
void iwl_mvm_unblock_esr(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
enum iwl_mvm_esr_state reason);
void iwl_mvm_exit_esr(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
@@ -2888,5 +2943,14 @@ void iwl_mvm_exit_esr(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
s8 iwl_mvm_get_esr_rssi_thresh(struct iwl_mvm *mvm,
const struct cfg80211_chan_def *chandef,
bool low);
void iwl_mvm_bt_coex_update_link_esr(struct iwl_mvm *mvm,
struct ieee80211_vif *vif,
int link_id);
bool
iwl_mvm_bt_coex_calculate_esr_mode(struct iwl_mvm *mvm,
struct ieee80211_vif *vif,
s32 link_rssi,
bool primary);
int iwl_mvm_esr_non_bss_link(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
unsigned int link_id, bool active);
#endif /* __IWL_MVM_H__ */

@@ -145,6 +145,24 @@ static void iwl_mvm_nic_config(struct iwl_op_mode *op_mode)
~APMG_PS_CTRL_EARLY_PWR_OFF_RESET_DIS);
}
static void iwl_mvm_rx_esr_mode_notif(struct iwl_mvm *mvm,
struct iwl_rx_cmd_buffer *rxb)
{
struct iwl_rx_packet *pkt = rxb_addr(rxb);
struct iwl_mvm_esr_mode_notif *notif = (void *)pkt->data;
struct ieee80211_vif *vif = iwl_mvm_get_bss_vif(mvm);
/* The FW recommendation only applies to entering EMLSR */
if (!vif || iwl_mvm_vif_from_mac80211(vif)->esr_active)
return;
if (le32_to_cpu(notif->action) == ESR_RECOMMEND_ENTER)
iwl_mvm_unblock_esr(mvm, vif, IWL_MVM_ESR_BLOCKED_FW);
else
iwl_mvm_block_esr(mvm, vif, IWL_MVM_ESR_BLOCKED_FW,
iwl_mvm_get_primary_link(vif));
}
static void iwl_mvm_rx_monitor_notif(struct iwl_mvm *mvm,
struct iwl_rx_cmd_buffer *rxb)
{
@@ -365,7 +383,7 @@ static const struct iwl_rx_handlers iwl_mvm_rx_handlers[] = {
iwl_mvm_rx_scan_match_found,
RX_HANDLER_SYNC),
RX_HANDLER(SCAN_COMPLETE_UMAC, iwl_mvm_rx_umac_scan_complete_notif,
RX_HANDLER_ASYNC_LOCKED_WIPHY,
RX_HANDLER_ASYNC_LOCKED,
struct iwl_umac_scan_complete),
RX_HANDLER(SCAN_ITERATION_COMPLETE_UMAC,
iwl_mvm_rx_umac_scan_iter_complete_notif, RX_HANDLER_SYNC,
@@ -425,6 +443,12 @@ static const struct iwl_rx_handlers iwl_mvm_rx_handlers[] = {
iwl_mvm_channel_switch_error_notif,
RX_HANDLER_ASYNC_UNLOCKED,
struct iwl_channel_switch_error_notif),
RX_HANDLER_GRP(DATA_PATH_GROUP, ESR_MODE_NOTIF,
iwl_mvm_rx_esr_mode_notif,
RX_HANDLER_ASYNC_LOCKED_WIPHY,
struct iwl_mvm_esr_mode_notif),
RX_HANDLER_GRP(DATA_PATH_GROUP, MONITOR_NOTIF,
iwl_mvm_rx_monitor_notif, RX_HANDLER_ASYNC_LOCKED,
struct iwl_datapath_monitor_notif),
@@ -449,6 +473,9 @@ static const struct iwl_rx_handlers iwl_mvm_rx_handlers[] = {
RX_HANDLER_GRP(MAC_CONF_GROUP, ROC_NOTIF,
iwl_mvm_rx_roc_notif, RX_HANDLER_SYNC,
struct iwl_roc_notif),
RX_HANDLER_GRP(SCAN_GROUP, CHANNEL_SURVEY_NOTIF,
iwl_mvm_rx_channel_survey_notif, RX_HANDLER_ASYNC_LOCKED,
struct iwl_umac_scan_channel_survey_notif),
};
#undef RX_HANDLER
#undef RX_HANDLER_GRP
@@ -607,6 +634,7 @@ static const struct iwl_hcmd_names iwl_mvm_data_path_names[] = {
HCMD_NAME(CHEST_COLLECTOR_FILTER_CONFIG_CMD),
HCMD_NAME(SCD_QUEUE_CONFIG_CMD),
HCMD_NAME(SEC_KEY_CMD),
HCMD_NAME(ESR_MODE_NOTIF),
HCMD_NAME(MONITOR_NOTIF),
HCMD_NAME(THERMAL_DUAL_CHAIN_REQUEST),
HCMD_NAME(STA_PM_NOTIF),
@@ -626,6 +654,7 @@ static const struct iwl_hcmd_names iwl_mvm_statistics_names[] = {
* Access is done through binary search
*/
static const struct iwl_hcmd_names iwl_mvm_scan_names[] = {
HCMD_NAME(CHANNEL_SURVEY_NOTIF),
HCMD_NAME(OFFLOAD_MATCH_INFO_NOTIF),
};
@@ -1146,6 +1175,27 @@ static const struct iwl_mei_ops mei_ops = {
.nic_stolen = iwl_mvm_mei_nic_stolen,
};
static void iwl_mvm_find_link_selection_vif(void *_data, u8 *mac,
struct ieee80211_vif *vif)
{
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
if (ieee80211_vif_is_mld(vif) && mvmvif->authorized)
iwl_mvm_select_links(mvmvif->mvm, vif);
}
static void iwl_mvm_trig_link_selection(struct wiphy *wiphy,
struct wiphy_work *wk)
{
struct iwl_mvm *mvm =
container_of(wk, struct iwl_mvm, trig_link_selection_wk);
ieee80211_iterate_active_interfaces(mvm->hw,
IEEE80211_IFACE_ITER_NORMAL,
iwl_mvm_find_link_selection_vif,
NULL);
}
static struct iwl_op_mode *
iwl_op_mode_mvm_start(struct iwl_trans *trans, const struct iwl_cfg *cfg,
const struct iwl_fw *fw, struct dentry *dbgfs_dir)
@@ -1277,6 +1327,10 @@ iwl_op_mode_mvm_start(struct iwl_trans *trans, const struct iwl_cfg *cfg,
wiphy_work_init(&mvm->async_handlers_wiphy_wk,
iwl_mvm_async_handlers_wiphy_wk);
wiphy_work_init(&mvm->trig_link_selection_wk,
iwl_mvm_trig_link_selection);
init_waitqueue_head(&mvm->rx_sync_waitq);
mvm->queue_sync_state = 0;
@@ -1531,6 +1585,7 @@ static void iwl_op_mode_mvm_stop(struct iwl_op_mode *op_mode)
kfree(mvm->temp_nvm_data);
for (i = 0; i < NVM_MAX_NUM_SECTIONS; i++)
kfree(mvm->nvm_sections[i].data);
kfree(mvm->acs_survey);
cancel_delayed_work_sync(&mvm->tcm.work);

@@ -654,10 +654,7 @@ void iwl_mvm_rs_fw_rate_init(struct iwl_mvm *mvm,
*/
sta->deflink.agg.max_amsdu_len = max_amsdu_len;
cmd_ver = iwl_fw_lookup_cmd_ver(mvm->fw,
WIDE_ID(DATA_PATH_GROUP,
TLC_MNG_CONFIG_CMD),
0);
cmd_ver = iwl_fw_lookup_cmd_ver(mvm->fw, cmd_id, 0);
IWL_DEBUG_RATE(mvm, "TLC CONFIG CMD, sta_id=%d, max_ch_width=%d, mode=%d\n",
cfg_cmd.sta_id, cfg_cmd.max_ch_width, cfg_cmd.mode);
IWL_DEBUG_RATE(mvm, "TLC CONFIG CMD, chains=0x%X, ch_wid_supp=%d, flags=0x%X\n",
@@ -693,9 +690,7 @@ void iwl_mvm_rs_fw_rate_init(struct iwl_mvm *mvm,
u16 cmd_size = sizeof(cfg_cmd_v3);
/* In old versions of the API the struct is 4 bytes smaller */
if (iwl_fw_lookup_cmd_ver(mvm->fw,
WIDE_ID(DATA_PATH_GROUP,
TLC_MNG_CONFIG_CMD), 0) < 3)
if (iwl_fw_lookup_cmd_ver(mvm->fw, cmd_id, 0) < 3)
cmd_size -= 4;
ret = iwl_mvm_send_cmd_pdu(mvm, cmd_id, CMD_ASYNC, cmd_size,

@@ -951,6 +951,89 @@ iwl_mvm_stat_iterator_all_links(struct iwl_mvm *mvm,
}
}
#define SEC_LINK_MIN_PERC 10
#define SEC_LINK_MIN_TX 3000
#define SEC_LINK_MIN_RX 400
static void iwl_mvm_update_esr_mode_tpt(struct iwl_mvm *mvm)
{
struct ieee80211_vif *bss_vif = iwl_mvm_get_bss_vif(mvm);
struct iwl_mvm_vif *mvmvif;
struct iwl_mvm_sta *mvmsta;
unsigned long total_tx = 0, total_rx = 0;
unsigned long sec_link_tx = 0, sec_link_rx = 0;
u8 sec_link_tx_perc, sec_link_rx_perc;
u8 sec_link;
lockdep_assert_held(&mvm->mutex);
if (!bss_vif)
return;
mvmvif = iwl_mvm_vif_from_mac80211(bss_vif);
if (!mvmvif->esr_active || !mvmvif->ap_sta)
return;
mvmsta = iwl_mvm_sta_from_mac80211(mvmvif->ap_sta);
/* We only count for the AP sta in a MLO connection */
if (!mvmsta->mpdu_counters)
return;
/* Get the FW ID of the secondary link */
sec_link = iwl_mvm_get_other_link(bss_vif,
iwl_mvm_get_primary_link(bss_vif));
if (WARN_ON(!mvmvif->link[sec_link]))
return;
sec_link = mvmvif->link[sec_link]->fw_link_id;
/* Sum up RX and TX MPDUs from the different queues/links */
for (int q = 0; q < mvm->trans->num_rx_queues; q++) {
spin_lock_bh(&mvmsta->mpdu_counters[q].lock);
/* Link IDs that don't exist will contain 0 */
for (int link = 0; link < IWL_MVM_FW_MAX_LINK_ID; link++) {
total_tx += mvmsta->mpdu_counters[q].per_link[link].tx;
total_rx += mvmsta->mpdu_counters[q].per_link[link].rx;
}
sec_link_tx += mvmsta->mpdu_counters[q].per_link[sec_link].tx;
sec_link_rx += mvmsta->mpdu_counters[q].per_link[sec_link].rx;
/*
* In EMLSR we have statistics every 5 seconds, so we can reset
* the counters upon every statistics notification.
*/
memset(mvmsta->mpdu_counters[q].per_link, 0,
sizeof(mvmsta->mpdu_counters[q].per_link));
spin_unlock_bh(&mvmsta->mpdu_counters[q].lock);
}
/* If we don't have enough MPDUs - exit EMLSR */
if (total_tx < IWL_MVM_ENTER_ESR_TPT_THRESH &&
total_rx < IWL_MVM_ENTER_ESR_TPT_THRESH) {
iwl_mvm_block_esr(mvm, bss_vif, IWL_MVM_ESR_BLOCKED_TPT,
iwl_mvm_get_primary_link(bss_vif));
return;
}
/* Calculate the percentage of the secondary link TX/RX */
sec_link_tx_perc = total_tx ? sec_link_tx * 100 / total_tx : 0;
sec_link_rx_perc = total_rx ? sec_link_rx * 100 / total_rx : 0;
/*
* The secondary link's TX/RX percentage is checked only if the total
* TX/RX count exceeds the required minimum. In addition, RX is checked
* only if the TX check failed.
*/
if ((total_tx > SEC_LINK_MIN_TX &&
sec_link_tx_perc < SEC_LINK_MIN_PERC) ||
(total_rx > SEC_LINK_MIN_RX &&
sec_link_rx_perc < SEC_LINK_MIN_PERC))
iwl_mvm_exit_esr(mvm, bss_vif, IWL_MVM_ESR_EXIT_LINK_USAGE,
iwl_mvm_get_primary_link(bss_vif));
}
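To make the exit condition concrete (illustrative numbers, not taken from the patch): with total_tx = 5000 and sec_link_tx = 300 MPDUs in one statistics window, the secondary link carried 6% of the TX traffic; since total_tx exceeds SEC_LINK_MIN_TX (3000) and 6% is below SEC_LINK_MIN_PERC (10%), the driver leaves EMLSR with reason IWL_MVM_ESR_EXIT_LINK_USAGE. Had total_tx been only 2000, the TX percentage would not be evaluated at all, and only the RX side (minimum 400 MPDUs) could trigger the exit.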
void iwl_mvm_handle_rx_system_oper_stats(struct iwl_mvm *mvm,
struct iwl_rx_cmd_buffer *rxb)
{
@@ -978,6 +1061,8 @@ void iwl_mvm_handle_rx_system_oper_stats(struct iwl_mvm *mvm,
ieee80211_iterate_stations_atomic(mvm->hw, iwl_mvm_stats_energy_iter,
average_energy);
iwl_mvm_handle_per_phy_stats(mvm, stats->per_phy);
iwl_mvm_update_esr_mode_tpt(mvm);
}
void iwl_mvm_handle_rx_system_oper_part1_stats(struct iwl_mvm *mvm,

@@ -2035,6 +2035,7 @@ void iwl_mvm_rx_mpdu_mq(struct iwl_mvm *mvm, struct napi_struct *napi,
struct ieee80211_link_sta *link_sta = NULL;
struct sk_buff *skb;
u8 crypt_len = 0;
u8 sta_id = le32_get_bits(desc->status, IWL_RX_MPDU_STATUS_STA_ID);
size_t desc_size;
struct iwl_mvm_rx_phy_data phy_data = {};
u32 format;
@@ -2183,13 +2184,11 @@ void iwl_mvm_rx_mpdu_mq(struct iwl_mvm *mvm, struct napi_struct *napi,
rcu_read_lock();
if (desc->status & cpu_to_le32(IWL_RX_MPDU_STATUS_SRC_STA_FOUND)) {
u8 id = le32_get_bits(desc->status, IWL_RX_MPDU_STATUS_STA_ID);
if (!WARN_ON_ONCE(id >= mvm->fw->ucode_capa.num_stations)) {
sta = rcu_dereference(mvm->fw_id_to_mac_id[id]);
if (!WARN_ON_ONCE(sta_id >= mvm->fw->ucode_capa.num_stations)) {
sta = rcu_dereference(mvm->fw_id_to_mac_id[sta_id]);
if (IS_ERR(sta))
sta = NULL;
link_sta = rcu_dereference(mvm->fw_id_to_link_sta[id]);
link_sta = rcu_dereference(mvm->fw_id_to_link_sta[sta_id]);
if (sta && sta->valid_links && link_sta) {
rx_status->link_valid = 1;
@@ -2310,6 +2309,16 @@ void iwl_mvm_rx_mpdu_mq(struct iwl_mvm *mvm, struct napi_struct *napi,
iwl_mvm_agg_rx_received(mvm, reorder_data, baid);
}
if (ieee80211_is_data(hdr->frame_control)) {
u8 sub_frame_idx = desc->amsdu_info &
IWL_RX_MPDU_AMSDU_SUBFRAME_IDX_MASK;
/* 0 means not an A-MSDU, and 1 means a new A-MSDU */
if (!sub_frame_idx || sub_frame_idx == 1)
iwl_mvm_count_mpdu(mvmsta, sta_id, 1, false,
queue);
}
}
/* management stuff on default queue */

@@ -226,6 +226,14 @@ iwl_mvm_scan_type _iwl_mvm_get_scan_type(struct iwl_mvm *mvm,
.global_cnt = 0,
};
/*
* A scanning AP interface probably wants to generate a survey to do
* ACS (automatic channel selection).
* Force a non-fragmented scan in that case.
*/
if (vif && ieee80211_vif_type_p2p(vif) == NL80211_IFTYPE_AP)
return IWL_SCAN_TYPE_WILD;
ieee80211_iterate_active_interfaces_atomic(mvm->hw,
IEEE80211_IFACE_ITER_NORMAL,
iwl_mvm_scan_iterator,
@@ -852,11 +860,13 @@ static inline bool iwl_mvm_scan_use_ebs(struct iwl_mvm *mvm,
* 4. it's not a p2p find operation.
* 5. we are not in low latency mode,
* or if fragmented ebs is supported by the FW
* 6. the VIF is not an AP interface (scan wants survey results)
*/
return ((capa->flags & IWL_UCODE_TLV_FLAGS_EBS_SUPPORT) &&
mvm->last_ebs_successful && IWL_MVM_ENABLE_EBS &&
vif->type != NL80211_IFTYPE_P2P_DEVICE &&
(!low_latency || iwl_mvm_is_frag_ebs_supported(mvm)));
(!low_latency || iwl_mvm_is_frag_ebs_supported(mvm)) &&
ieee80211_vif_type_p2p(vif) != NL80211_IFTYPE_AP);
}
static inline bool iwl_mvm_is_regular_scan(struct iwl_mvm_scan_params *params)
@@ -2124,7 +2134,8 @@ static u16 iwl_mvm_scan_umac_flags_v2(struct iwl_mvm *mvm,
static u8 iwl_mvm_scan_umac_flags2(struct iwl_mvm *mvm,
struct iwl_mvm_scan_params *params,
struct ieee80211_vif *vif, int type)
struct ieee80211_vif *vif, int type,
u16 gen_flags)
{
u8 flags = 0;
@@ -2144,6 +2155,13 @@ static u8 iwl_mvm_scan_umac_flags2(struct iwl_mvm *mvm,
IWL_UCODE_TLV_CAPA_SCAN_DONT_TOGGLE_ANT))
flags |= IWL_UMAC_SCAN_GEN_PARAMS_FLAGS2_DONT_TOGGLE_ANT;
/* Passive and AP interface -> ACS (automatic channel selection) */
if (gen_flags & IWL_UMAC_SCAN_GEN_FLAGS_V2_FORCE_PASSIVE &&
ieee80211_vif_type_p2p(vif) == NL80211_IFTYPE_AP &&
iwl_fw_lookup_notif_ver(mvm->fw, SCAN_GROUP, CHANNEL_SURVEY_NOTIF,
0) >= 1)
flags |= IWL_UMAC_SCAN_GEN_FLAGS2_COLLECT_CHANNEL_STATS;
return flags;
}
@@ -2513,7 +2531,8 @@ static int iwl_mvm_scan_umac_v14_and_above(struct iwl_mvm *mvm,
gen_flags = iwl_mvm_scan_umac_flags_v2(mvm, params, vif, type);
if (version >= 15)
gen_flags2 = iwl_mvm_scan_umac_flags2(mvm, params, vif, type);
gen_flags2 = iwl_mvm_scan_umac_flags2(mvm, params, vif, type,
gen_flags);
else
gen_flags2 = 0;
@@ -3178,23 +3197,6 @@ int iwl_mvm_sched_scan_start(struct iwl_mvm *mvm,
return ret;
}
static void iwl_mvm_find_link_selection_vif(void *_data, u8 *mac,
struct ieee80211_vif *vif)
{
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
if (ieee80211_vif_is_mld(vif) && mvmvif->authorized)
iwl_mvm_select_links(mvmvif->mvm, vif);
}
static void iwl_mvm_post_scan_link_selection(struct iwl_mvm *mvm)
{
ieee80211_iterate_active_interfaces(mvm->hw,
IEEE80211_IFACE_ITER_NORMAL,
iwl_mvm_find_link_selection_vif,
NULL);
}
void iwl_mvm_rx_umac_scan_complete_notif(struct iwl_mvm *mvm,
struct iwl_rx_cmd_buffer *rxb)
{
@@ -3202,9 +3204,25 @@ void iwl_mvm_rx_umac_scan_complete_notif(struct iwl_mvm *mvm,
struct iwl_umac_scan_complete *notif = (void *)pkt->data;
u32 uid = __le32_to_cpu(notif->uid);
bool aborted = (notif->status == IWL_SCAN_OFFLOAD_ABORTED);
bool select_links = false;
mvm->mei_scan_filter.is_mei_limited_scan = false;
IWL_DEBUG_SCAN(mvm,
"Scan completed: uid=%u type=%u, status=%s, EBS=%s\n",
uid, mvm->scan_uid_status[uid],
notif->status == IWL_SCAN_OFFLOAD_COMPLETED ?
"completed" : "aborted",
iwl_mvm_ebs_status_str(notif->ebs_status));
IWL_DEBUG_SCAN(mvm, "Scan completed: scan_status=0x%x\n",
mvm->scan_status);
IWL_DEBUG_SCAN(mvm,
"Scan completed: line=%u, iter=%u, elapsed time=%u\n",
notif->last_schedule, notif->last_iter,
__le32_to_cpu(notif->time_from_last_iter));
if (WARN_ON(!(mvm->scan_uid_status[uid] & mvm->scan_status)))
return;
@@ -3235,19 +3253,17 @@ void iwl_mvm_rx_umac_scan_complete_notif(struct iwl_mvm *mvm,
mvm->sched_scan_pass_all = SCHED_SCAN_PASS_ALL_DISABLED;
} else if (mvm->scan_uid_status[uid] == IWL_MVM_SCAN_INT_MLO) {
IWL_DEBUG_SCAN(mvm, "Internal MLO scan completed\n");
/*
* Other scan types won't necessarily scan the MLD links' channels.
* Therefore, only select links after successful internal scan.
*/
select_links = notif->status == IWL_SCAN_OFFLOAD_COMPLETED;
}
mvm->scan_status &= ~mvm->scan_uid_status[uid];
IWL_DEBUG_SCAN(mvm,
"Scan completed, uid %u type %u, status %s, EBS status %s\n",
uid, mvm->scan_uid_status[uid],
notif->status == IWL_SCAN_OFFLOAD_COMPLETED ?
"completed" : "aborted",
iwl_mvm_ebs_status_str(notif->ebs_status));
IWL_DEBUG_SCAN(mvm,
"Last line %d, Last iteration %d, Time from last iteration %d\n",
notif->last_schedule, notif->last_iter,
__le32_to_cpu(notif->time_from_last_iter));
IWL_DEBUG_SCAN(mvm, "Scan completed: after update: scan_status=0x%x\n",
mvm->scan_status);
if (notif->ebs_status != IWL_SCAN_EBS_SUCCESS &&
notif->ebs_status != IWL_SCAN_EBS_INACTIVE)
@@ -3255,8 +3271,8 @@ void iwl_mvm_rx_umac_scan_complete_notif(struct iwl_mvm *mvm,
mvm->scan_uid_status[uid] = 0;
if (notif->status == IWL_SCAN_OFFLOAD_COMPLETED)
iwl_mvm_post_scan_link_selection(mvm);
if (select_links)
wiphy_work_queue(mvm->hw->wiphy, &mvm->trig_link_selection_wk);
}
void iwl_mvm_rx_umac_scan_iter_complete_notif(struct iwl_mvm *mvm,
@@ -3481,6 +3497,10 @@ int iwl_mvm_scan_stop(struct iwl_mvm *mvm, int type, bool notify)
{
int ret;
IWL_DEBUG_SCAN(mvm,
"Request to stop scan: type=0x%x, status=0x%x\n",
type, mvm->scan_status);
if (!(mvm->scan_status & type))
return 0;
@@ -3492,6 +3512,9 @@ int iwl_mvm_scan_stop(struct iwl_mvm *mvm, int type, bool notify)
ret = iwl_mvm_scan_stop_wait(mvm, type);
if (!ret)
mvm->scan_status |= type << IWL_MVM_SCAN_STOPPING_SHIFT;
else
IWL_DEBUG_SCAN(mvm, "Failed to stop scan\n");
out:
/* Clear the scan status so the next scan requests will
* succeed and mark the scan as stopping, so that the Rx
@@ -3517,9 +3540,10 @@ out:
return ret;
}
int iwl_mvm_int_mlo_scan_start(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
struct ieee80211_channel **channels,
size_t n_channels)
static int iwl_mvm_int_mlo_scan_start(struct iwl_mvm *mvm,
struct ieee80211_vif *vif,
struct ieee80211_channel **channels,
size_t n_channels)
{
struct cfg80211_scan_request *req = NULL;
struct ieee80211_scan_ies ies = {};
@@ -3563,3 +3587,228 @@ int iwl_mvm_int_mlo_scan_start(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
IWL_DEBUG_SCAN(mvm, "Internal MLO scan: ret=%d\n", ret);
return ret;
}
int iwl_mvm_int_mlo_scan(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
{
struct ieee80211_channel *channels[IEEE80211_MLD_MAX_NUM_LINKS];
unsigned long usable_links = ieee80211_vif_usable_links(vif);
size_t n_channels = 0;
u8 link_id;
lockdep_assert_held(&mvm->mutex);
if (mvm->scan_status & IWL_MVM_SCAN_INT_MLO) {
IWL_DEBUG_SCAN(mvm, "Internal MLO scan is already running\n");
return -EBUSY;
}
rcu_read_lock();
for_each_set_bit(link_id, &usable_links, IEEE80211_MLD_MAX_NUM_LINKS) {
struct ieee80211_bss_conf *link_conf =
rcu_dereference(vif->link_conf[link_id]);
if (WARN_ON_ONCE(!link_conf))
continue;
channels[n_channels++] = link_conf->chanreq.oper.chan;
}
rcu_read_unlock();
if (!n_channels)
return -EINVAL;
return iwl_mvm_int_mlo_scan_start(mvm, vif, channels, n_channels);
}
static int iwl_mvm_chanidx_from_phy(struct iwl_mvm *mvm,
enum nl80211_band band,
u16 phy_chan_num)
{
struct ieee80211_supported_band *sband = mvm->hw->wiphy->bands[band];
int chan_idx;
if (WARN_ON_ONCE(!sband))
return -EINVAL;
for (chan_idx = 0; chan_idx < sband->n_channels; chan_idx++) {
struct ieee80211_channel *channel = &sband->channels[chan_idx];
if (channel->hw_value == phy_chan_num)
return chan_idx;
}
return -EINVAL;
}
static u32 iwl_mvm_div_by_db(u32 value, u8 db)
{
/*
* 2^32 * 10**(-i / 10) for i = [1, 10], skipping 0 and simply stopping
* at 10 dB and looping instead of using a much larger table.
*
* Using 64 bit math is overkill, but means the helper does not require
* a limit on the input range.
*/
static const u32 db_to_val[] = {
0xcb59185e, 0xa1866ba8, 0x804dce7a, 0x65ea59fe, 0x50f44d89,
0x404de61f, 0x331426af, 0x2892c18b, 0x203a7e5b, 0x1999999a,
};
while (value && db > 0) {
u8 change = min_t(u8, db, ARRAY_SIZE(db_to_val));
value = (((u64)value) * db_to_val[change - 1]) >> 32;
db -= change;
}
return value;
}
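A quick sanity check of the helper, worked by hand rather than taken from the patch: iwl_mvm_div_by_db(0x10000, 3) multiplies by db_to_val[2] = 0x804dce7a (2^32 * 10^(-0.3)) and returns 0x804d, about 0.5012 in 16.16 format. For db = 13 the loop first applies the full 10 dB entry 0x1999999a (yielding 0x1999, about 0.1) and then the 3 dB entry, returning roughly 0x0cd4, i.e. 0.0501 = 10^(-1.3).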
VISIBLE_IF_IWLWIFI_KUNIT s8
iwl_mvm_average_dbm_values(const struct iwl_umac_scan_channel_survey_notif *notif)
{
s8 average_magnitude;
u32 average_factor;
s8 sum_magnitude = -128;
u32 sum_factor = 0;
int i, count = 0;
/*
* To properly average the decibel values (signal values given in dBm)
* we need to do the math in linear space. Doing a linear average of
* dB (dBm) values is a bit annoying though due to the large range of
* at least -10 to -110 dBm that will not fit into a 32 bit integer.
*
* A 64 bit integer should be sufficient, but then we still have the
* problem that there are no directly usable utility functions
* available.
*
* So, let's not deal with that and instead do much of the calculation
* with a 16.16 fixed point integer along with a base in dBm. 16.16 bit
* gives us plenty of head-room for adding up a few values and even
* doing some math on it. And the tail should be accurate enough too
* (1/2^16 is somewhere around -48 dB, so effectively zero).
*
* i.e. the real value of sum is:
* sum = sum_factor / 2^16 * 10^(sum_magnitude / 10) mW
*
* However, that does mean we need to be able to bring two values to
* a common base, so we need a helper for that.
*
* Note that this function takes an input with unsigned negative dBm
* values but returns a signed dBm (i.e. a negative value).
*/
for (i = 0; i < ARRAY_SIZE(notif->noise); i++) {
s8 val_magnitude;
u32 val_factor;
if (notif->noise[i] == 0xff)
continue;
val_factor = 0x10000;
val_magnitude = -notif->noise[i];
if (val_magnitude <= sum_magnitude) {
u8 div_db = sum_magnitude - val_magnitude;
val_factor = iwl_mvm_div_by_db(val_factor, div_db);
val_magnitude = sum_magnitude;
} else {
u8 div_db = val_magnitude - sum_magnitude;
sum_factor = iwl_mvm_div_by_db(sum_factor, div_db);
sum_magnitude = val_magnitude;
}
sum_factor += val_factor;
count++;
}
/* No valid noise measurement, return a very high noise level */
if (count == 0)
return 0;
average_magnitude = sum_magnitude;
average_factor = sum_factor / count;
/*
* average_factor will be a number smaller than 1.0 (0x10000) at this
* point. What we need to do now is to adjust average_magnitude so that
* average_factor is between -0.5 dB and 0.5 dB.
*
* Just do -1 dB steps and find the first i for which
* 0x10000 * 10^(-(0.5 + i) / 10) = div_by_db(0xe429, i)
* is smaller than average_factor, i.e. the point where average_factor
* is within 0.5 dB of -i dB.
*/
for (i = 0; average_factor < iwl_mvm_div_by_db(0xe429, i); i++) {
/* nothing */
}
return average_magnitude - i;
}
EXPORT_SYMBOL_IF_IWLWIFI_KUNIT(iwl_mvm_average_dbm_values);
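Worked example (by hand): averaging -75 dBm and -80 dBm. In linear space this is (10^(-7.5) + 10^(-8.0)) / 2 ≈ 2.08e-8 mW, about -76.8 dBm. In the fixed-point flow above, the -80 dBm sample is scaled 5 dB down onto the -75 dBm base (div_by_db(0x10000, 5) = 0x50f4), giving sum_factor = 0x150f4 and average_factor = 0xa87a ≈ 0.658; the final loop stops at i = 2, so the function returns -75 - 2 = -77, the nearest whole dB.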
void iwl_mvm_rx_channel_survey_notif(struct iwl_mvm *mvm,
struct iwl_rx_cmd_buffer *rxb)
{
struct iwl_rx_packet *pkt = rxb_addr(rxb);
const struct iwl_umac_scan_channel_survey_notif *notif =
(void *)pkt->data;
struct iwl_mvm_acs_survey_channel *info;
enum nl80211_band band;
int chan_idx;
lockdep_assert_held(&mvm->mutex);
if (!mvm->acs_survey) {
size_t n_channels = 0;
for (band = 0; band < NUM_NL80211_BANDS; band++) {
if (!mvm->hw->wiphy->bands[band])
continue;
n_channels += mvm->hw->wiphy->bands[band]->n_channels;
}
mvm->acs_survey = kzalloc(struct_size(mvm->acs_survey,
channels, n_channels),
GFP_KERNEL);
if (!mvm->acs_survey)
return;
mvm->acs_survey->n_channels = n_channels;
n_channels = 0;
for (band = 0; band < NUM_NL80211_BANDS; band++) {
if (!mvm->hw->wiphy->bands[band])
continue;
mvm->acs_survey->bands[band] =
&mvm->acs_survey->channels[n_channels];
n_channels += mvm->hw->wiphy->bands[band]->n_channels;
}
}
band = iwl_mvm_nl80211_band_from_phy(le32_to_cpu(notif->band));
chan_idx = iwl_mvm_chanidx_from_phy(mvm, band,
le32_to_cpu(notif->channel));
if (WARN_ON_ONCE(chan_idx < 0))
return;
IWL_DEBUG_SCAN(mvm, "channel survey received for freq %d\n",
mvm->hw->wiphy->bands[band]->channels[chan_idx].center_freq);
info = &mvm->acs_survey->bands[band][chan_idx];
/* Times are all in ms */
info->time = le32_to_cpu(notif->active_time);
info->time_busy = le32_to_cpu(notif->busy_time);
info->time_rx = le32_to_cpu(notif->rx_time);
info->time_tx = le32_to_cpu(notif->tx_time);
info->noise = iwl_mvm_average_dbm_values(notif);
}
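For illustration (the channel counts here are made up): if the wiphy exposes 14 channels on 2.4 GHz and 32 on 5 GHz, the allocation above yields n_channels = 46 and sets up

bands[NL80211_BAND_2GHZ] = &channels[0];
bands[NL80211_BAND_5GHZ] = &channels[14];

so the lookup info = &mvm->acs_survey->bands[band][chan_idx] always lands inside the single contiguous flexible array.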

@@ -1847,6 +1847,18 @@ int iwl_mvm_sta_init(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
iwl_mvm_toggle_tx_ant(mvm, &mvm_sta->tx_ant);
/* MPDUs are counted only when EMLSR is possible */
if (vif->type == NL80211_IFTYPE_STATION && !vif->p2p &&
!sta->tdls && ieee80211_vif_is_mld(vif)) {
mvm_sta->mpdu_counters =
kcalloc(mvm->trans->num_rx_queues,
sizeof(*mvm_sta->mpdu_counters),
GFP_KERNEL);
if (mvm_sta->mpdu_counters)
for (int q = 0; q < mvm->trans->num_rx_queues; q++)
spin_lock_init(&mvm_sta->mpdu_counters[q].lock);
}
return 0;
}
@@ -4392,3 +4404,77 @@ void iwl_mvm_cancel_channel_switch(struct iwl_mvm *mvm,
if (ret)
IWL_ERR(mvm, "Failed to cancel the channel switch\n");
}
static int iwl_mvm_fw_sta_id_to_fw_link_id(struct iwl_mvm_vif *mvmvif,
u8 fw_sta_id)
{
struct ieee80211_link_sta *link_sta =
rcu_dereference(mvmvif->mvm->fw_id_to_link_sta[fw_sta_id]);
struct iwl_mvm_vif_link_info *link;
if (WARN_ON_ONCE(!link_sta))
return -EINVAL;
link = mvmvif->link[link_sta->link_id];
if (WARN_ON_ONCE(!link))
return -EINVAL;
return link->fw_link_id;
}
#define IWL_MVM_TPT_COUNT_WINDOW (IWL_MVM_TPT_COUNT_WINDOW_SEC * HZ)
void iwl_mvm_count_mpdu(struct iwl_mvm_sta *mvm_sta, u8 fw_sta_id, u32 count,
bool tx, int queue)
{
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(mvm_sta->vif);
struct iwl_mvm_tpt_counter *queue_counter;
struct iwl_mvm_mpdu_counter *link_counter;
u32 total_mpdus = 0;
int fw_link_id;
/* Count only for a BSS sta, and only when EMLSR is possible */
if (!mvm_sta->mpdu_counters)
return;
/* Map sta id to link id */
fw_link_id = iwl_mvm_fw_sta_id_to_fw_link_id(mvmvif, fw_sta_id);
if (fw_link_id < 0)
return;
queue_counter = &mvm_sta->mpdu_counters[queue];
link_counter = &queue_counter->per_link[fw_link_id];
spin_lock_bh(&queue_counter->lock);
if (tx)
link_counter->tx += count;
else
link_counter->rx += count;
/*
* When not in EMLSR, the window and the decision to enter EMLSR are
* handled during counting; when in EMLSR, they are handled in the
* statistics flow.
*/
if (mvmvif->esr_active)
goto out;
if (time_is_before_jiffies(queue_counter->window_start +
IWL_MVM_TPT_COUNT_WINDOW)) {
memset(queue_counter->per_link, 0,
sizeof(queue_counter->per_link));
queue_counter->window_start = jiffies;
}
for (int i = 0; i < IWL_MVM_FW_MAX_LINK_ID; i++)
total_mpdus += tx ? queue_counter->per_link[i].tx :
queue_counter->per_link[i].rx;
if (total_mpdus > IWL_MVM_ENTER_ESR_TPT_THRESH)
wiphy_work_queue(mvmvif->mvm->hw->wiphy,
&mvmvif->unblock_esr_tpt_wk);
out:
spin_unlock_bh(&queue_counter->lock);
}

@@ -347,6 +347,24 @@ struct iwl_mvm_link_sta {
u8 avg_energy;
};
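/**
* struct iwl_mvm_mpdu_counter - per-link TX/RX MPDU counters
*
* @tx: the number of TX MPDUs counted in the current window
* @rx: the number of RX MPDUs counted in the current window
*/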
struct iwl_mvm_mpdu_counter {
u32 tx;
u32 rx;
};
/**
* struct iwl_mvm_tpt_counter - per-queue MPDU counter
*
* @lock: Needed to protect the counters when modified from statistics.
* @per_link: per-link counters.
* @window_start: timestamp of the counting-window start
*/
struct iwl_mvm_tpt_counter {
spinlock_t lock;
struct iwl_mvm_mpdu_counter per_link[IWL_MVM_FW_MAX_LINK_ID];
unsigned long window_start;
} ____cacheline_aligned_in_smp;
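The counters are ____cacheline_aligned_in_smp because each RX queue updates its own counter, typically from a different CPU; giving every per-queue counter (and its spinlock) its own cache line avoids false sharing between queues.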
/**
* struct iwl_mvm_sta - representation of a station in the driver
* @vif: the interface the station belongs to
@@ -394,6 +412,7 @@ struct iwl_mvm_link_sta {
* @link: per link sta entries. For non-MLO only link[0] holds data. For MLO,
* link[0] points to deflink and link[link_id] is allocated when new link
* sta is added.
* @mpdu_counters: RX/TX MPDUs counters for each queue.
*
* When mac80211 creates a station it reserves some space (hw->sta_data_size)
* in the structure for use by driver. This structure is placed in that
@@ -433,6 +452,8 @@ struct iwl_mvm_sta {
struct iwl_mvm_link_sta deflink;
struct iwl_mvm_link_sta __rcu *link[IEEE80211_MLD_MAX_NUM_LINKS];
struct iwl_mvm_tpt_counter *mpdu_counters;
};
u16 iwl_mvm_tid_queued(struct iwl_mvm *mvm, struct iwl_mvm_tid_data *tid_data);
@@ -514,6 +535,9 @@ void iwl_mvm_update_tkip_key(struct iwl_mvm *mvm,
void iwl_mvm_rx_eosp_notif(struct iwl_mvm *mvm,
struct iwl_rx_cmd_buffer *rxb);
void iwl_mvm_count_mpdu(struct iwl_mvm_sta *mvm_sta, u8 fw_sta_id, u32 count,
bool tx, int queue);
/* AMPDU */
int iwl_mvm_sta_rx_agg(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
int tid, u16 ssn, bool start, u16 buf_size, u16 timeout);

@@ -1,3 +1,3 @@
iwlmvm-tests-y += module.o links.o
iwlmvm-tests-y += module.o links.o scan.o
obj-$(CONFIG_IWLWIFI_KUNIT_TESTS) += iwlmvm-tests.o

@@ -10,6 +10,14 @@
MODULE_IMPORT_NS(EXPORTED_FOR_KUNIT_TESTING);
static struct wiphy wiphy = {
.mtx = __MUTEX_INITIALIZER(wiphy.mtx),
};
static struct ieee80211_hw hw = {
.wiphy = &wiphy,
};
static struct ieee80211_channel chan_5ghz = {
.band = NL80211_BAND_5GHZ,
};
@@ -37,7 +45,23 @@ static struct cfg80211_bss bss = {};
static struct ieee80211_bss_conf link_conf = {.bss = &bss};
static struct iwl_mvm mvm = {};
static const struct iwl_fw_cmd_version entry = {
.group = LEGACY_GROUP,
.cmd = BT_PROFILE_NOTIFICATION,
.notif_ver = 4
};
static struct iwl_fw fw = {
.ucode_capa = {
.n_cmd_versions = 1,
.cmd_versions = &entry,
},
};
static struct iwl_mvm mvm = {
.hw = &hw,
.fw = &fw,
};
static const struct link_grading_case {
const char *desc;
@@ -217,30 +241,31 @@ kunit_test_suite(link_grading);
static const struct valid_link_pair_case {
const char *desc;
u32 esr_disable_reason;
bool bt;
struct ieee80211_channel *chan_a;
struct ieee80211_channel *chan_b;
enum nl80211_chan_width cw_a;
enum nl80211_chan_width cw_b;
s32 sig_a;
s32 sig_b;
bool csa_a;
bool valid;
} valid_link_pair_cases[] = {
{
.desc = "HB + UHB, valid.",
.chan_a = &chan_5ghz,
.chan_b = &chan_6ghz,
.chan_a = &chan_6ghz,
.chan_b = &chan_5ghz,
.valid = true,
},
{
.desc = "LB + HB, no BT.",
.chan_a = &chan_2ghz,
.chan_b = &chan_5ghz,
.valid = true,
.valid = false,
},
{
.desc = "LB + HB, with BT.",
.esr_disable_reason = 0x1,
.bt = true,
.chan_a = &chan_2ghz,
.chan_b = &chan_5ghz,
.valid = false,
@@ -260,77 +285,79 @@ static const struct valid_link_pair_case {
.valid = false,
},
{
.desc = "RSSI: LB, 20 MHz, high",
.chan_a = &chan_2ghz,
.desc = "RSSI: UHB, 20 MHz, high",
.chan_a = &chan_6ghz,
.cw_a = NL80211_CHAN_WIDTH_20,
.sig_a = -66,
.chan_b = &chan_5ghz,
.cw_b = NL80211_CHAN_WIDTH_20,
.valid = true,
},
{
.desc = "RSSI: LB, 40 MHz, low",
.chan_a = &chan_2ghz,
.desc = "RSSI: UHB, 40 MHz, low",
.chan_a = &chan_6ghz,
.cw_a = NL80211_CHAN_WIDTH_40,
.sig_a = -65,
.chan_b = &chan_5ghz,
.cw_b = NL80211_CHAN_WIDTH_40,
.valid = false,
},
{
.desc = "RSSI: LB, 40 MHz, high",
.chan_a = &chan_2ghz,
.desc = "RSSI: UHB, 40 MHz, high",
.chan_a = &chan_6ghz,
.cw_a = NL80211_CHAN_WIDTH_40,
.sig_a = -63,
.chan_b = &chan_5ghz,
.cw_b = NL80211_CHAN_WIDTH_40,
.valid = true,
},
{
.desc = "RSSI: HB, 80 MHz, low",
.chan_a = &chan_5ghz,
.desc = "RSSI: UHB, 80 MHz, low",
.chan_a = &chan_6ghz,
.cw_a = NL80211_CHAN_WIDTH_80,
.sig_a = -62,
.chan_b = &chan_2ghz,
.chan_b = &chan_5ghz,
.cw_b = NL80211_CHAN_WIDTH_80,
.valid = false,
},
{
.desc = "RSSI: HB, 80 MHz, high",
.chan_a = &chan_5ghz,
.desc = "RSSI: UHB, 80 MHz, high",
.chan_a = &chan_6ghz,
.cw_a = NL80211_CHAN_WIDTH_80,
.sig_a = -60,
.chan_b = &chan_2ghz,
.chan_b = &chan_5ghz,
.cw_b = NL80211_CHAN_WIDTH_80,
.valid = true,
},
{
.desc = "RSSI: HB, 160 MHz, low",
.chan_a = &chan_5ghz,
.desc = "RSSI: UHB, 160 MHz, low",
.chan_a = &chan_6ghz,
.cw_a = NL80211_CHAN_WIDTH_160,
.sig_a = -59,
.chan_b = &chan_2ghz,
.chan_b = &chan_5ghz,
.cw_b = NL80211_CHAN_WIDTH_160,
.valid = false,
},
{
.desc = "RSSI: HB, 160 MHz, high",
.chan_a = &chan_5ghz,
.chan_a = &chan_6ghz,
.cw_a = NL80211_CHAN_WIDTH_160,
.sig_a = -5,
.chan_b = &chan_2ghz,
.valid = true,
},
{
.desc = "RSSI: UHB, 320 MHz, low",
.chan_a = &chan_6ghz,
.cw_a = NL80211_CHAN_WIDTH_320,
.sig_a = -68,
.chan_b = &chan_6ghz,
.valid = false,
},
{
.desc = "RSSI: UHB, 320 MHz, high",
.chan_a = &chan_6ghz,
.cw_a = NL80211_CHAN_WIDTH_320,
.sig_a = -66,
.chan_b = &chan_5ghz,
.cw_b = NL80211_CHAN_WIDTH_160,
.valid = true,
},
{
.desc = "CSA active",
.chan_a = &chan_6ghz,
.cw_a = NL80211_CHAN_WIDTH_160,
.sig_a = -5,
.chan_b = &chan_5ghz,
.cw_b = NL80211_CHAN_WIDTH_160,
.valid = false,
/* same as previous entry with valid=true except for CSA */
.csa_a = true,
},
};
KUNIT_ARRAY_PARAM_DESC(valid_link_pair, valid_link_pair_cases, desc)
@@ -354,6 +381,7 @@ static void test_valid_link_pair(struct kunit *test)
.link_id = 5,
.signal = params->sig_b,
};
struct ieee80211_bss_conf *conf;
bool result;
KUNIT_ASSERT_NOT_NULL(test, vif);
@@ -370,10 +398,23 @@
#endif
mvm.trans = trans;
mvmvif->esr_disable_reason = params->esr_disable_reason;
mvm.last_bt_notif.wifi_loss_low_rssi = params->bt;
mvmvif->mvm = &mvm;
conf = kunit_kzalloc(test, sizeof(*vif->link_conf[0]), GFP_KERNEL);
KUNIT_ASSERT_NOT_NULL(test, conf);
conf->chanreq.oper = chandef_a;
conf->csa_active = params->csa_a;
vif->link_conf[link_a.link_id] = (void __rcu *)conf;
conf = kunit_kzalloc(test, sizeof(*vif->link_conf[0]), GFP_KERNEL);
KUNIT_ASSERT_NOT_NULL(test, conf);
conf->chanreq.oper = chandef_b;
vif->link_conf[link_b.link_id] = (void __rcu *)conf;
wiphy_lock(&wiphy);
result = iwl_mvm_mld_valid_link_pair(vif, &link_a, &link_b);
wiphy_unlock(&wiphy);
KUNIT_EXPECT_EQ(test, result, params->valid);

@@ -0,0 +1,110 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* KUnit tests for channel helper functions
*
* Copyright (C) 2024 Intel Corporation
*/
#include <net/mac80211.h>
#include "../mvm.h"
#include <kunit/test.h>
MODULE_IMPORT_NS(EXPORTED_FOR_KUNIT_TESTING);
static const struct acs_average_db_case {
const char *desc;
u8 neg_dbm[22];
s8 result;
} acs_average_db_cases[] = {
{
.desc = "Smallest possible value, all filled",
.neg_dbm = {
128, 128, 128, 128, 128, 128, 128, 128, 128, 128,
128, 128, 128, 128, 128, 128, 128, 128, 128, 128,
128, 128
},
.result = -128,
},
{
.desc = "Biggest possible value, all filled",
.neg_dbm = {
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0,
},
.result = 0,
},
{
.desc = "Smallest possible value, partial filled",
.neg_dbm = {
128, 128, 128, 128, 128, 128, 128, 128, 128, 128,
0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff,
},
.result = -128,
},
{
.desc = "Biggest possible value, partial filled",
.neg_dbm = {
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff,
},
.result = 0,
},
{
.desc = "Adding -80dBm to -75dBm until it is still rounded to -79dBm",
.neg_dbm = {
75, 80, 80, 80, 80, 80, 80, 80, 80, 80,
80, 80, 80, 80, 80, 80, 80, 0xff, 0xff, 0xff,
0xff, 0xff,
},
.result = -79,
},
{
.desc = "Adding -80dBm to -75dBm until it is just rounded to -80dBm",
.neg_dbm = {
75, 80, 80, 80, 80, 80, 80, 80, 80, 80,
80, 80, 80, 80, 80, 80, 80, 80, 0xff, 0xff,
0xff, 0xff,
},
.result = -80,
},
};
KUNIT_ARRAY_PARAM_DESC(acs_average_db, acs_average_db_cases, desc)
static void test_acs_average_db(struct kunit *test)
{
const struct acs_average_db_case *params = test->param_value;
struct iwl_umac_scan_channel_survey_notif notif;
int i;
/* Test the values in the given order */
for (i = 0; i < ARRAY_SIZE(params->neg_dbm); i++)
notif.noise[i] = params->neg_dbm[i];
KUNIT_ASSERT_EQ(test,
iwl_mvm_average_dbm_values(&notif),
params->result);
/* Test in reverse order */
for (i = 0; i < ARRAY_SIZE(params->neg_dbm); i++)
notif.noise[ARRAY_SIZE(params->neg_dbm) - i - 1] =
params->neg_dbm[i];
KUNIT_ASSERT_EQ(test,
iwl_mvm_average_dbm_values(&notif),
params->result);
}
static struct kunit_case acs_average_db_case[] = {
KUNIT_CASE_PARAM(test_acs_average_db, acs_average_db_gen_params),
{}
};
static struct kunit_suite acs_average_db = {
.name = "iwlmvm-acs-average-db",
.test_cases = acs_average_db_case,
};
kunit_test_suite(acs_average_db);
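Assuming a kernel configuration with CONFIG_IWLWIFI_KUNIT_TESTS=y (the option referenced in the tests Makefile above), one way to exercise these cases is the in-tree KUnit runner, e.g. ./tools/testing/kunit/kunit.py run 'iwlmvm-acs-average-db', using the suite name defined above.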

@@ -47,6 +47,10 @@ void iwl_mvm_te_clear_data(struct iwl_mvm *mvm,
static void iwl_mvm_cleanup_roc(struct iwl_mvm *mvm)
{
struct ieee80211_vif *vif = mvm->p2p_device_vif;
lockdep_assert_held(&mvm->mutex);
/*
* Clear the ROC_RUNNING status bit.
* This will cause the TX path to drop offchannel transmissions.
@@ -70,9 +74,7 @@ static void iwl_mvm_cleanup_roc(struct iwl_mvm *mvm)
* not really racy.
*/
if (!WARN_ON(!mvm->p2p_device_vif)) {
struct ieee80211_vif *vif = mvm->p2p_device_vif;
if (!WARN_ON(!vif)) {
mvmvif = iwl_mvm_vif_from_mac80211(vif);
iwl_mvm_flush_sta(mvm, mvmvif->deflink.bcast_sta.sta_id,
mvmvif->deflink.bcast_sta.tfd_queue_msk);
@@ -106,6 +108,7 @@ static void iwl_mvm_cleanup_roc(struct iwl_mvm *mvm)
if (mvm->mld_api_is_used) {
iwl_mvm_mld_rm_aux_sta(mvm);
mutex_unlock(&mvm->mutex);
return;
}
@@ -115,6 +118,10 @@ static void iwl_mvm_cleanup_roc(struct iwl_mvm *mvm)
if (iwl_mvm_has_new_station_api(mvm->fw))
iwl_mvm_rm_aux_sta(mvm);
}
mutex_unlock(&mvm->mutex);
if (vif)
iwl_mvm_esr_non_bss_link(mvm, vif, 0, false);
}
void iwl_mvm_roc_done_wk(struct work_struct *wk)
@@ -122,8 +129,8 @@ void iwl_mvm_roc_done_wk(struct work_struct *wk)
struct iwl_mvm *mvm = container_of(wk, struct iwl_mvm, roc_done_wk);
mutex_lock(&mvm->mutex);
/* Mutex is released inside */
iwl_mvm_cleanup_roc(mvm);
mutex_unlock(&mvm->mutex);
}
static void iwl_mvm_roc_finished(struct iwl_mvm *mvm)
@@ -1220,6 +1227,8 @@ void iwl_mvm_stop_roc(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
struct iwl_mvm_vif *mvmvif;
struct iwl_mvm_time_event_data *te_data;
mutex_lock(&mvm->mutex);
if (fw_has_capa(&mvm->fw->ucode_capa,
IWL_UCODE_TLV_CAPA_SESSION_PROT_CMD)) {
mvmvif = iwl_mvm_vif_from_mac80211(vif);
@@ -1263,6 +1272,8 @@ cleanup_roc:
set_bit(vif->type == NL80211_IFTYPE_P2P_DEVICE ?
IWL_MVM_STATUS_ROC_RUNNING : IWL_MVM_STATUS_ROC_AUX_RUNNING,
&mvm->status);
/* Mutex is released inside this function */
iwl_mvm_cleanup_roc(mvm);
}

@@ -1870,6 +1870,7 @@ static void iwl_mvm_rx_tx_cmd_single(struct iwl_mvm *mvm,
IWL_DEBUG_TX_REPLY(mvm,
"Next reclaimed packet:%d\n",
next_reclaimed);
iwl_mvm_count_mpdu(mvmsta, sta_id, 1, true, 0);
} else {
IWL_DEBUG_TX_REPLY(mvm,
"NDP - don't update next_reclaimed\n");
@@ -2247,9 +2248,13 @@ void iwl_mvm_rx_ba_notif(struct iwl_mvm *mvm, struct iwl_rx_cmd_buffer *rxb)
le32_to_cpu(ba_res->tx_rate), false);
}
if (mvmsta)
if (mvmsta) {
iwl_mvm_tx_airtime(mvm, mvmsta,
le32_to_cpu(ba_res->wireless_time));
iwl_mvm_count_mpdu(mvmsta, sta_id,
le16_to_cpu(ba_res->txed), true, 0);
}
rcu_read_unlock();
return;
}

@@ -435,6 +435,13 @@ int iwl_mvm_request_statistics(struct iwl_mvm *mvm, bool clear)
IWL_FW_CMD_VER_UNKNOWN);
int ret;
/*
* Don't request statistics during restart: right after a restart they
* won't contain any useful information, and clearing isn't needed either
*/
if (test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status))
return 0;
if (cmd_ver != IWL_FW_CMD_VER_UNKNOWN)
return iwl_mvm_request_system_statistics(mvm, clear, cmd_ver);

@@ -1,13 +1,34 @@
// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
/*
* Copyright (C) 2018-2023 Intel Corporation
* Copyright (C) 2018-2024 Intel Corporation
*/
#include <linux/dmi.h>
#include "iwl-trans.h"
#include "iwl-fh.h"
#include "iwl-context-info-gen3.h"
#include "internal.h"
#include "iwl-prph.h"
static const struct dmi_system_id dmi_force_scu_active_approved_list[] = {
{ .ident = "DELL",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
},
},
{ .ident = "DELL",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Alienware"),
},
},
/* keep last */
{}
};
static bool iwl_is_force_scu_active_approved(void)
{
return !!dmi_check_system(dmi_force_scu_active_approved_list);
}
static void
iwl_pcie_ctxt_info_dbg_enable(struct iwl_trans *trans,
struct iwl_prph_scratch_hwm_cfg *dbg_cfg,
@@ -128,6 +149,14 @@ int iwl_pcie_ctxt_info_gen3_init(struct iwl_trans *trans,
if (trans->trans_cfg->imr_enabled)
control_flags |= IWL_PRPH_SCRATCH_IMR_DEBUG_EN;
if (CSR_HW_REV_TYPE(trans->hw_rev) == IWL_CFG_MAC_TYPE_GL &&
iwl_is_force_scu_active_approved()) {
control_flags |= IWL_PRPH_SCRATCH_SCU_FORCE_ACTIVE;
IWL_DEBUG_FW(trans,
"Context Info: Set SCU_FORCE_ACTIVE (0x%x) in control_flags\n",
IWL_PRPH_SCRATCH_SCU_FORCE_ACTIVE);
}
/* initialize RX default queue */
prph_sc_ctrl->rbd_cfg.free_rbd_addr =
cpu_to_le64(trans_pcie->rxq->bd_dma);

Some files were not shown because too many files have changed in this diff Show More