cifs: use the least loaded channel for sending requests

Until now, we have used a simple round-robin approach to
distribute requests among the channels. This does not
work well if the channels consume requests at different
speeds, even when their advertised speeds are the same.

This change allows the client to pick the channel with
the least number of requests currently in flight. This
disregards the advertised link speed and selects a
channel based on its current load.

When all the channels are equally loaded, fall back to
the old round-robin method.

Signed-off-by: Shyam Prasad N <sprasad@microsoft.com>
Reviewed-by: Paulo Alcantara (SUSE) <pc@cjr.nz>
Signed-off-by: Steve French <stfrench@microsoft.com>
commit ea90708d3c (parent e7388b8a1a)
Author: Shyam Prasad N
Date: 2022-12-19 05:40:44 +00:00
Committer: Steve French

1 changed file with 29 additions and 4 deletions

@@ -1007,15 +1007,40 @@ cifs_cancelled_callback(struct mid_q_entry *mid)
 struct TCP_Server_Info *cifs_pick_channel(struct cifs_ses *ses)
 {
 	uint index = 0;
+	unsigned int min_in_flight = UINT_MAX, max_in_flight = 0;
+	struct TCP_Server_Info *server = NULL;
+	int i;
 
 	if (!ses)
 		return NULL;
 
-	/* round robin */
-	index = (uint)atomic_inc_return(&ses->chan_seq);
-
 	spin_lock(&ses->chan_lock);
-	index %= ses->chan_count;
+	for (i = 0; i < ses->chan_count; i++) {
+		server = ses->chans[i].server;
+		if (!server)
+			continue;
+
+		/*
+		 * strictly speaking, we should pick up req_lock to read
+		 * server->in_flight. But it shouldn't matter much here if we
+		 * race while reading this data. The worst that can happen is
+		 * that we could use a channel that's not least loaded. Avoiding
+		 * taking the lock could help reduce wait time, which is
+		 * important for this function
+		 */
+		if (server->in_flight < min_in_flight) {
+			min_in_flight = server->in_flight;
+			index = i;
+		}
+		if (server->in_flight > max_in_flight)
+			max_in_flight = server->in_flight;
+	}
+
+	/* if all channels are equally loaded, fall back to round-robin */
+	if (min_in_flight == max_in_flight) {
+		index = (uint)atomic_inc_return(&ses->chan_seq);
+		index %= ses->chan_count;
+	}
 	spin_unlock(&ses->chan_lock);
 
 	return ses->chans[index].server;
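
For readers who want to experiment with the selection policy outside the kernel, below is a minimal, self-contained user-space sketch of the same idea. The struct, array and function names (model_chan, chans, pick_channel) and the sample in-flight counts are made up for illustration only; they are not the kernel's cifs types, and locking is omitted since the model is single-threaded.

/* Minimal user-space model of "least loaded channel, round-robin on ties". */
#include <limits.h>
#include <stdio.h>

#define MODEL_CHAN_COUNT 4

struct model_chan {
	unsigned int in_flight;		/* requests currently outstanding on this channel */
};

/* sample loads; channel 1 is the least loaded */
static struct model_chan chans[MODEL_CHAN_COUNT] = {
	{ .in_flight = 3 }, { .in_flight = 1 }, { .in_flight = 3 }, { .in_flight = 3 },
};

static unsigned int chan_seq;		/* round-robin counter, used only on ties */

static unsigned int pick_channel(unsigned int chan_count)
{
	unsigned int min_in_flight = UINT_MAX, max_in_flight = 0;
	unsigned int index = 0, i;

	/* scan all channels, remembering the least and most loaded */
	for (i = 0; i < chan_count; i++) {
		if (chans[i].in_flight < min_in_flight) {
			min_in_flight = chans[i].in_flight;
			index = i;
		}
		if (chans[i].in_flight > max_in_flight)
			max_in_flight = chans[i].in_flight;
	}

	/* all channels equally loaded: fall back to plain round-robin */
	if (min_in_flight == max_in_flight)
		index = chan_seq++ % chan_count;

	return index;
}

int main(void)
{
	printf("picked channel %u\n", pick_channel(MODEL_CHAN_COUNT));
	return 0;
}

With the sample loads above, the sketch prints channel 1; if all four counters were equal, the round-robin counter would take over, mirroring the fallback in the patch.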