RDMA/mlx5: Support TX port affinity for VF drivers in LAG mode

The mlx5 VF driver doesn't set QP tx port affinity because it doesn't know
whether LAG is active, since "lag_active" is valid only for PF interfaces.
As a result, VF interfaces end up using only one of the LAG ports, which
hurts performance.

Add a lag_tx_port_affinity CAP bit; when it is set and
"num_lag_ports > 1", the driver always assigns QP tx affinity, regardless
of the LAG state.

Link: https://lore.kernel.org/r/20200527055014.355093-1-leon@kernel.org
Signed-off-by: Mark Zhang <markz@mellanox.com>
Reviewed-by: Maor Gottlieb <maorg@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Author: Mark Zhang, 2020-05-27 08:50:14 +03:00 (committed by Jason Gunthorpe)
Commit: 802dcc7fc5, parent: e0cca8b456
3 changed files with 10 additions and 2 deletions
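For orientation before the diffs: the condition the patch introduces boils
down to a single predicate. The snippet below is only a restating sketch of
that check (the real helper, mlx5_ib_lag_should_assign_affinity(), is added
to mlx5_ib.h in the second hunk); the function name used here is
illustrative.

/*
 * Sketch of the new decision: assign QP tx port affinity when LAG is
 * reported active (PF case), or when firmware exposes the
 * lag_tx_port_affinity capability together with more than one lag port
 * (VF case, where lag_active is not visible to the driver).
 */
static inline bool should_assign_tx_affinity(struct mlx5_ib_dev *dev)
{
	return dev->lag_active ||
	       (MLX5_CAP_GEN(dev->mdev, num_lag_ports) > 1 &&
		MLX5_CAP_GEN(dev->mdev, lag_tx_port_affinity));
}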

@@ -1972,7 +1972,7 @@ static int mlx5_ib_alloc_ucontext(struct ib_ucontext *uctx,
 	context->lib_caps = req.lib_caps;
 	print_lib_caps(dev, context->lib_caps);
 
-	if (dev->lag_active) {
+	if (mlx5_ib_lag_should_assign_affinity(dev)) {
 		u8 port = mlx5_core_native_port_num(dev->mdev) - 1;
 
 		atomic_set(&context->tx_port_affinity,

@@ -1550,4 +1550,11 @@ static inline bool mlx5_ib_can_use_umr(struct mlx5_ib_dev *dev,
 int mlx5_ib_enable_driver(struct ib_device *dev);
 int mlx5_ib_test_wc(struct mlx5_ib_dev *dev);
 
+static inline bool mlx5_ib_lag_should_assign_affinity(struct mlx5_ib_dev *dev)
+{
+	return dev->lag_active ||
+		(MLX5_CAP_GEN(dev->mdev, num_lag_ports) > 1 &&
+		 MLX5_CAP_GEN(dev->mdev, lag_tx_port_affinity));
+}
+
 #endif /* MLX5_IB_H */
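A hedged usage sketch of the helper above, assuming the mlx5_ib.h
definitions are in scope: once the helper returns true, a caller can spread
QPs across the lag ports with a simple atomic round-robin counter. The
tx_port_affinity counter parameter and the "return 0 means no explicit
affinity" convention are assumptions for illustration, not code introduced
by this patch.

/*
 * Illustrative only: round-robin tx port selection gated by the new
 * helper. The counter is assumed to live per device (or per ucontext).
 */
static unsigned int pick_tx_affinity(struct mlx5_ib_dev *dev,
				     atomic_t *tx_port_affinity)
{
	if (!mlx5_ib_lag_should_assign_affinity(dev))
		return 0;

	/* 1-based port number, cycling over the available lag ports */
	return (unsigned int)atomic_add_return(1, tx_port_affinity) %
		       MLX5_CAP_GEN(dev->mdev, num_lag_ports) + 1;
}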

@@ -3653,7 +3653,8 @@ static unsigned int get_tx_affinity(struct ib_qp *qp,
 	struct mlx5_ib_qp_base *qp_base;
 	unsigned int tx_affinity;
 
-	if (!(dev->lag_active && qp_supports_affinity(qp)))
+	if (!(mlx5_ib_lag_should_assign_affinity(dev) &&
+	      qp_supports_affinity(qp)))
 		return 0;
 
 	if (mqp->flags & MLX5_IB_QP_CREATE_SQPN_QP1)