RDMA/rtrs: RDMA_RXE requires more WRs

When using rdma_rxe, post_one_recv() returns -ENOMEM because the receive
queue is full.  Increase the number of receive WRs so the queue is sized
large enough for all devices.

Link: https://lore.kernel.org/r/20210614090337.29557-4-jinpu.wang@ionos.com
Signed-off-by: Md Haris Iqbal <haris.iqbal@cloud.ionos.com>
Signed-off-by: Jack Wang <jinpu.wang@cloud.ionos.com>
Signed-off-by: Gioh Kim <gi-oh.kim@ionos.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Author:    Md Haris Iqbal, 2021-06-14 11:03:35 +02:00
Committer: Jason Gunthorpe
Commit:    b012f0ad53 (parent 0509ebfa33)
2 files changed, 5 insertions(+), 4 deletions(-)
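
The -ENOMEM described above is the receive queue running out of WR slots:
once as many receives are outstanding as were requested at QP creation, the
next post is rejected.  A minimal sketch of that posting path in terms of
the generic verbs API (the iu buffer and con are illustrative placeholders,
not the exact rtrs structures):

	struct ib_recv_wr wr = {
		.wr_cqe  = &iu->cqe,	/* completion handler for this receive */
		.sg_list = &iu->sge,	/* single buffer to receive into */
		.num_sge = 1,
	};
	const struct ib_recv_wr *bad_wr;
	int err;

	err = ib_post_recv(con->qp, &wr, &bad_wr);
	if (err)
		/* -ENOMEM here means the receive queue is already full,
		 * i.e. max_recv_wr was too small for the traffic pattern.
		 */
		return err;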

--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c

@@ -1579,10 +1579,11 @@ static int create_con_cq_qp(struct rtrs_clt_con *con)
 	lockdep_assert_held(&con->con_mutex);
 	if (con->c.cid == 0) {
 		/*
-		 * One completion for each receive and two for each send
-		 * (send request + registration)
+		 * Two (request + registration) completion for send
+		 * Two for recv if always_invalidate is set on server
+		 * or one for recv.
 		 * + 2 for drain and heartbeat
-		 * in case qp gets into error state
+		 * in case qp gets into error state.
 		 */
 		max_send_wr = SERVICE_CON_QUEUE_DEPTH * 2 + 2;
 		max_recv_wr = SERVICE_CON_QUEUE_DEPTH * 2 + 2;
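
Worked out with a concrete queue depth, the accounting in the updated
comment gives the following (SERVICE_CON_QUEUE_DEPTH == 512 is assumed here
purely for illustration; the real value may differ):

	/* Illustrative numbers only, assuming SERVICE_CON_QUEUE_DEPTH == 512 */
	max_send_wr = 512 * 2 + 2;	/* 1026: request + registration per send, + 2 for drain and heartbeat */
	max_recv_wr = 512 * 2 + 2;	/* 1026: up to two per recv with always_invalidate, + 2 for drain and heartbeat */
	cq_size     = max_send_wr + max_recv_wr;	/* 2052, as computed in the server hunk below */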

--- a/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+++ b/drivers/infiniband/ulp/rtrs/rtrs-srv.c

@@ -1656,7 +1656,7 @@ static int create_con(struct rtrs_srv_sess *sess,
 		 * + 2 for drain and heartbeat
 		 */
 		max_send_wr = SERVICE_CON_QUEUE_DEPTH * 2 + 2;
-		max_recv_wr = SERVICE_CON_QUEUE_DEPTH + 2;
+		max_recv_wr = SERVICE_CON_QUEUE_DEPTH * 2 + 2;
 		cq_size = max_send_wr + max_recv_wr;
 	} else {
 		/*
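
Both limits are ultimately handed to the device when the connection's QP is
created.  A rough sketch of where they end up, using the generic
RDMA-CM/verbs API rather than the exact rtrs wrapper (cq, pd and cm_id are
placeholders):

	struct ib_qp_init_attr init_attr = {
		.qp_type = IB_QPT_RC,
		.send_cq = cq,
		.recv_cq = cq,
		.cap = {
			.max_send_wr = max_send_wr,
			/*
			 * Receive queue capacity requested from the device.
			 * The server previously asked for only
			 * SERVICE_CON_QUEUE_DEPTH + 2 here, which rdma_rxe
			 * exhausted; it now matches the send side.
			 */
			.max_recv_wr = max_recv_wr,
		},
	};
	int err = rdma_create_qp(cm_id, pd, &init_attr);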