SCSI misc on 20230902

Updates to the usual drivers (ufs, lpfc, qla2xxx, mpi3mr, libsas) and
 the usual minor updates and bug fixes but no significant core changes.
 
 Signed-off-by: James E.J. Bottomley <jejb@linux.ibm.com>
 -----BEGIN PGP SIGNATURE-----
 
 iJwEABMIAEQWIQTnYEDbdso9F2cI+arnQslM7pishQUCZPLlcCYcamFtZXMuYm90
 dG9tbGV5QGhhbnNlbnBhcnRuZXJzaGlwLmNvbQAKCRDnQslM7pishXH4AP9Mm8Hk
 ICCySnXJ/VcxrAU2ekUlPxGT8CNT2WqRhIqKegEAu0jagPyNcTjX+oBCNVOM33zU
 p5uU6IEHDA3d36qu04w=
 =8gks
 -----END PGP SIGNATURE-----

Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi

Pull SCSI updates from James Bottomley:
 "Updates to the usual drivers (ufs, lpfc, qla2xxx, mpi3mr, libsas) and
  the usual minor updates and bug fixes but no significant core changes"

* tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (116 commits)
  scsi: storvsc: Handle additional SRB status values
  scsi: libsas: Delete sas_ata_task.retry_count
  scsi: libsas: Delete sas_ata_task.stp_affil_pol
  scsi: libsas: Delete sas_ata_task.set_affil_pol
  scsi: libsas: Delete sas_ssp_task.task_prio
  scsi: libsas: Delete sas_ssp_task.enable_first_burst
  scsi: libsas: Delete sas_ssp_task.retry_count
  scsi: libsas: Delete struct scsi_core
  scsi: libsas: Delete enum sas_phy_type
  scsi: libsas: Delete enum sas_class
  scsi: libsas: Delete sas_ha_struct.lldd_module
  scsi: target: Fix write perf due to unneeded throttling
  scsi: lpfc: Do not abuse UUID APIs and LPFC_COMPRESS_VMID_SIZE
  scsi: pm8001: Remove unused declarations
  scsi: fcoe: Fix potential deadlock on &fip->ctlr_lock
  scsi: elx: sli4: Remove code duplication
  scsi: bfa: Replace one-element array with flexible-array member in struct fc_rscn_pl_s
  scsi: qla2xxx: Remove unused declarations
  scsi: pmcraid: Use pci_dev_id() to simplify the code
  scsi: pm80xx: Set RETFIS when requested by libsas
  ...
Linus Torvalds 2023-09-02 12:02:41 -07:00
commit b89b029377
121 changed files with 1876 additions and 4559 deletions

@ -1437,180 +1437,6 @@ Description:
If avail_wb_buff < wb_flush_threshold, it indicates that the WriteBooster buffer needs to
be flushed, otherwise it is not necessary.
What: /sys/bus/platform/drivers/ufshcd/*/device_descriptor/hpb_version
What: /sys/bus/platform/devices/*.ufs/device_descriptor/hpb_version
Date: June 2021
Contact: Daejun Park <daejun7.park@samsung.com>
Description: This entry shows the HPB specification version.
The full information about the descriptor can be found in the UFS
HPB (Host Performance Booster) Extension specifications.
Example: version 1.2.3 = 0123h
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/device_descriptor/hpb_control
What: /sys/bus/platform/devices/*.ufs/device_descriptor/hpb_control
Date: June 2021
Contact: Daejun Park <daejun7.park@samsung.com>
Description: This entry shows an indication of the HPB control mode.
00h: Host control mode
01h: Device control mode
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/hpb_region_size
What: /sys/bus/platform/devices/*.ufs/geometry_descriptor/hpb_region_size
Date: June 2021
Contact: Daejun Park <daejun7.park@samsung.com>
Description: This entry shows the bHPBRegionSize which can be calculated
as in the following (in bytes):
HPB Region size = 512B * 2^bHPBRegionSize
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/hpb_number_lu
What: /sys/bus/platform/devices/*.ufs/geometry_descriptor/hpb_number_lu
Date: June 2021
Contact: Daejun Park <daejun7.park@samsung.com>
Description: This entry shows the maximum number of HPB LU supported by
the device.
00h: HPB is not supported by the device.
01h ~ 20h: Maximum number of HPB LU supported by the device
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/hpb_subregion_size
What: /sys/bus/platform/devices/*.ufs/geometry_descriptor/hpb_subregion_size
Date: June 2021
Contact: Daejun Park <daejun7.park@samsung.com>
Description: This entry shows the bHPBSubRegionSize, which can be
calculated as in the following (in bytes) and shall be a multiple of
logical block size:
HPB Sub-Region size = 512B x 2^bHPBSubRegionSize
bHPBSubRegionSize shall not exceed bHPBRegionSize.
The file is read only.
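As a quick illustration of the two size formulas above - both are just 512 bytes shifted left by the descriptor exponent - here is a minimal userspace sketch with invented exponent values:
#include <stdio.h>
int main(void)
{
	unsigned int bHPBRegionSize = 13;	/* example value, not from the spec */
	unsigned int bHPBSubRegionSize = 11;	/* shall not exceed bHPBRegionSize */
	unsigned long long region = 512ULL << bHPBRegionSize;
	unsigned long long subregion = 512ULL << bHPBSubRegionSize;
	printf("region %llu bytes, sub-region %llu bytes\n", region, subregion);
	return 0;
}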
What: /sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/hpb_max_active_regions
What: /sys/bus/platform/devices/*.ufs/geometry_descriptor/hpb_max_active_regions
Date: June 2021
Contact: Daejun Park <daejun7.park@samsung.com>
Description: This entry shows the maximum number of active HPB regions that
are supported by the device.
The file is read only.
What: /sys/class/scsi_device/*/device/unit_descriptor/hpb_lu_max_active_regions
Date: June 2021
Contact: Daejun Park <daejun7.park@samsung.com>
Description: This entry shows the maximum number of HPB regions assigned to
the HPB logical unit.
The file is read only.
What: /sys/class/scsi_device/*/device/unit_descriptor/hpb_pinned_region_start_offset
Date: June 2021
Contact: Daejun Park <daejun7.park@samsung.com>
Description: This entry shows the start offset of HPB pinned region.
The file is read only.
What: /sys/class/scsi_device/*/device/unit_descriptor/hpb_number_pinned_regions
Date: June 2021
Contact: Daejun Park <daejun7.park@samsung.com>
Description: This entry shows the number of HPB pinned regions assigned to
the HPB logical unit.
The file is read only.
What: /sys/class/scsi_device/*/device/hpb_stats/hit_cnt
Date: June 2021
Contact: Daejun Park <daejun7.park@samsung.com>
Description: This entry shows the number of reads that changed to HPB read.
The file is read only.
What: /sys/class/scsi_device/*/device/hpb_stats/miss_cnt
Date: June 2021
Contact: Daejun Park <daejun7.park@samsung.com>
Description: This entry shows the number of reads that cannot be changed to
HPB read.
The file is read only.
What: /sys/class/scsi_device/*/device/hpb_stats/rcmd_noti_cnt
Date: June 2021
Contact: Daejun Park <daejun7.park@samsung.com>
Description: This entry shows the number of response UPIUs that have
recommendations for activating sub-regions and/or inactivating regions.
The file is read only.
What: /sys/class/scsi_device/*/device/hpb_stats/rcmd_active_cnt
Date: June 2021
Contact: Daejun Park <daejun7.park@samsung.com>
Description: For the HPB device control mode, this entry shows the number of
active sub-regions recommended by response UPIUs. For the HPB host control
mode, this entry shows the number of active sub-regions recommended by the
HPB host control mode heuristic algorithm.
The file is read only.
What: /sys/class/scsi_device/*/device/hpb_stats/rcmd_inactive_cnt
Date: June 2021
Contact: Daejun Park <daejun7.park@samsung.com>
Description: For the HPB device control mode, this entry shows the number of
inactive regions recommended by response UPIUs. For the HPB host control
mode, this entry shows the number of inactive regions recommended by the
HPB host control mode heuristic algorithm.
The file is read only.
What: /sys/class/scsi_device/*/device/hpb_stats/map_req_cnt
Date: June 2021
Contact: Daejun Park <daejun7.park@samsung.com>
Description: This entry shows the number of read buffer commands for
activating sub-regions recommended by response UPIUs.
The file is read only.
What: /sys/class/scsi_device/*/device/hpb_params/requeue_timeout_ms
Date: June 2021
Contact: Daejun Park <daejun7.park@samsung.com>
Description: This entry shows the requeue timeout threshold for write buffer
command in ms. The value can be changed by writing an integer to
this entry.
What: /sys/bus/platform/drivers/ufshcd/*/attributes/max_data_size_hpb_single_cmd
What: /sys/bus/platform/devices/*.ufs/attributes/max_data_size_hpb_single_cmd
Date: June 2021
Contact: Daejun Park <daejun7.park@samsung.com>
Description: This entry shows the maximum HPB data size for using a single HPB
command.
=== ========
00h 4KB
01h 8KB
02h 12KB
...
FFh 1024KB
=== ========
The file is read only.
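The encoding above is linear: the limit is 4KB times (value + 1), so 00h gives 4KB and FFh gives 256 x 4KB = 1024KB. A one-line decode, as a hedged sketch (the helper name is invented):
static inline unsigned int hpb_single_cmd_max_kb(unsigned char raw)
{
	return ((unsigned int)raw + 1) * 4;	/* 00h -> 4KB ... FFh -> 1024KB */
}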
What: /sys/bus/platform/drivers/ufshcd/*/flags/hpb_enable
What: /sys/bus/platform/devices/*.ufs/flags/hpb_enable
Date: June 2021
Contact: Daejun Park <daejun7.park@samsung.com>
Description: This entry shows the status of HPB.
== ============================
0 HPB is not enabled.
1 HPB is enabled.
== ============================
The file is read only.
Contact: Daniil Lunev <dlunev@chromium.org>
What: /sys/bus/platform/drivers/ufshcd/*/capabilities/
What: /sys/bus/platform/devices/*.ufs/capabilities/
@ -1648,76 +1474,3 @@ Description: Indicates status of Write Booster.
The file is read only.
What: /sys/class/scsi_device/*/device/hpb_param_sysfs/activation_thld
Date: February 2021
Contact: Avri Altman <avri.altman@wdc.com>
Description: In host control mode, reads are the major source of activation
trials. Once this threshold has been met, the region is added to the
"to-be-activated" list. Since we reset the read counter upon
write, this includes sending an rb command updating the region
ppn as well.
What: /sys/class/scsi_device/*/device/hpb_param_sysfs/normalization_factor
Date: February 2021
Contact: Avri Altman <avri.altman@wdc.com>
Description: In host control mode, we think of the regions as "buckets".
Those buckets are being filled with reads, and emptied on write.
We use entries_per_srgn - the number of blocks in a subregion - as
our bucket size. This applies because HPB1.0 only handles
single-block reads. Once the bucket size is crossed, we trigger
a normalization work - not only to avoid overflow, but mainly
because we want to keep those counters normalized, as we are
using those reads as a comparative score, to make various decisions.
The normalization divides (shifts right) the read counter by
the normalization_factor. If during consecutive normalizations
an active region has exhausted its reads, it is inactivated.
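A toy model of that normalization step, with names assumed for illustration rather than taken from the driver:
/* Shift every region's read counter right by normalization_factor; a
 * region whose score hits zero across consecutive normalizations is a
 * candidate for inactivation. */
static void demo_normalize_reads(unsigned int *read_cnt, unsigned int nregions,
				 unsigned int normalization_factor)
{
	unsigned int i;

	for (i = 0; i < nregions; i++)
		read_cnt[i] >>= normalization_factor;
}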
What: /sys/class/scsi_device/*/device/hpb_param_sysfs/eviction_thld_enter
Date: February 2021
Contact: Avri Altman <avri.altman@wdc.com>
Description: Region deactivation is often due to the fact that eviction took
place: a region becomes active at the expense of another. This
happens when the max-active-regions limit has been crossed.
In host mode, eviction is considered an extreme measure. We
want to verify that the entering region has enough reads, and
the exiting region has far fewer reads. eviction_thld_enter is
the minimum number of reads that a region must have in order to
be considered a candidate for evicting another region.
What: /sys/class/scsi_device/*/device/hpb_param_sysfs/eviction_thld_exit
Date: February 2021
Contact: Avri Altman <avri.altman@wdc.com>
Description: Same as above for the exiting region. A region is considered to
be a candidate for eviction only if it has fewer reads than
eviction_thld_exit.
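Taken together, the two thresholds gate eviction roughly as in this illustrative sketch (the parameter names mirror the attributes above; the function itself is invented):
static bool demo_may_evict(unsigned int entering_reads,
			   unsigned int exiting_reads,
			   unsigned int eviction_thld_enter,
			   unsigned int eviction_thld_exit)
{
	return entering_reads >= eviction_thld_enter &&
	       exiting_reads < eviction_thld_exit;
}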
What: /sys/class/scsi_device/*/device/hpb_param_sysfs/read_timeout_ms
Date: February 2021
Contact: Avri Altman <avri.altman@wdc.com>
Description: In order not to hang on to "cold" regions, we inactivate
a region that has had no READ access for a predefined amount of
time - read_timeout_ms. If read_timeout_ms has expired and the
region is dirty, it is unlikely that we can make any use of
HPB-reading it, so we inactivate it. Still, deactivation has
its overhead, and we may still benefit from HPB-reading this
region if it is clean - see read_timeout_expiries.
What: /sys/class/scsi_device/*/device/hpb_param_sysfs/read_timeout_expiries
Date: February 2021
Contact: Avri Altman <avri.altman@wdc.com>
Description: If the region read timeout has expired, but the region is clean,
just re-wind its timer for another spin. Do that as long as it
is clean and has not exhausted its read_timeout_expiries threshold.
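A toy version of that policy, assuming invented struct fields and an invented inactivation helper:
#include <linux/jiffies.h>
#include <linux/types.h>

struct demo_region {
	bool dirty;
	unsigned int expiries_left;	/* read_timeout_expiries budget */
	unsigned long deadline;		/* in jiffies */
};

static void demo_inactivate_region(struct demo_region *r);	/* hypothetical */

static void demo_read_timeout(struct demo_region *r, unsigned int read_timeout_ms)
{
	if (!r->dirty && r->expiries_left) {
		r->expiries_left--;
		/* clean: re-wind the timer for another spin */
		r->deadline = jiffies + msecs_to_jiffies(read_timeout_ms);
		return;
	}
	demo_inactivate_region(r);	/* dirty, or expiries budget exhausted */
}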
What: /sys/class/scsi_device/*/device/hpb_param_sysfs/timeout_polling_interval_ms
Date: February 2021
Contact: Avri Altman <avri.altman@wdc.com>
Description: The frequency with which the delayed worker that checks the
read_timeouts is awakened.
What: /sys/class/scsi_device/*/device/hpb_param_sysfs/inflight_map_req
Date: February 2021
Contact: Avri Altman <avri.altman@wdc.com>
Description: In host control mode the host is the originator of map requests.
To avoid flooding the device with map requests, use a simple throttling
mechanism that limits the number of inflight map requests.

@ -1190,11 +1190,11 @@ Members of interest:
- pointer to scsi_device object that this command is
associated with.
resid
- an LLD should set this signed integer to the requested
- an LLD should set this unsigned integer to the requested
transfer length (i.e. 'request_bufflen') less the number
of bytes that are actually transferred. 'resid' is
preset to 0 so an LLD can ignore it if it cannot detect
underruns (overruns should be rare). If possible an LLD
underruns (overruns should not be reported). An LLD
should set 'resid' prior to invoking 'done'. The most
interesting case is data transfers from a SCSI target
device (e.g. READs) that underrun.
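A hedged sketch of what that means in a driver's completion path; the per-command transfer count is hypothetical, while scsi_bufflen(), scsi_set_resid() and scsi_done() are the real mid-layer helpers. The SRP hunk below applies the same rule: with resid now unsigned, only underruns are reported.
#include <scsi/scsi_cmnd.h>

static void demo_lld_complete_read(struct scsi_cmnd *cmd, unsigned int transferred)
{
	unsigned int requested = scsi_bufflen(cmd);

	if (transferred < requested)
		scsi_set_resid(cmd, requested - transferred);	/* underrun */
	/* overruns are no longer reported via a (negative) resid */

	scsi_done(cmd);
}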

@ -1979,12 +1979,8 @@ static void srp_process_rsp(struct srp_rdma_ch *ch, struct srp_rsp *rsp)
if (unlikely(rsp->flags & SRP_RSP_FLAG_DIUNDER))
scsi_set_resid(scmnd, be32_to_cpu(rsp->data_in_res_cnt));
else if (unlikely(rsp->flags & SRP_RSP_FLAG_DIOVER))
scsi_set_resid(scmnd, -be32_to_cpu(rsp->data_in_res_cnt));
else if (unlikely(rsp->flags & SRP_RSP_FLAG_DOUNDER))
scsi_set_resid(scmnd, be32_to_cpu(rsp->data_out_res_cnt));
else if (unlikely(rsp->flags & SRP_RSP_FLAG_DOOVER))
scsi_set_resid(scmnd, -be32_to_cpu(rsp->data_out_res_cnt));
srp_free_req(ch, req, scmnd,
be32_to_cpu(rsp->req_lim_delta));

@ -836,7 +836,7 @@ config SCSI_IMM
config SCSI_IZIP_EPP16
bool "ppa/imm option - Use slow (but safe) EPP-16"
depends on SCSI_PPA || SCSI_IMM
depends on SCSI_IMM
help
EPP (Enhanced Parallel Port) is a standard for parallel ports which
allows them to act as expansion buses that can handle up to 64

@ -61,23 +61,11 @@ $(OUTDIR)/aicdb.h:
clean:
rm -f $(clean-files)
# Create a dependency chain in generated files
# to avoid concurrent invocations of the single
# rule that builds them all.
$(OUTDIR)/aicasm_gram.c: $(OUTDIR)/aicasm_gram.h
$(OUTDIR)/aicasm_gram.c $(OUTDIR)/aicasm_gram.h: aicasm_gram.y
$(YACC) $(YFLAGS) -b $(<:.y=) $<
mv $(<:.y=).tab.c $(OUTDIR)/$(<:.y=.c)
mv $(<:.y=).tab.h $(OUTDIR)/$(<:.y=.h)
$(YACC) $(YFLAGS) -b $(<:.y=) $< -o $(OUTDIR)/$(<:.y=.c)
# Create a dependency chain in generated files
# to avoid concurrent invocations of the single
# rule that builds them all.
$(OUTDIR)/aicasm_macro_gram.c: $(OUTDIR)/aicasm_macro_gram.h
$(OUTDIR)/aicasm_macro_gram.c $(OUTDIR)/aicasm_macro_gram.h: aicasm_macro_gram.y
$(YACC) $(YFLAGS) -b $(<:.y=) -p mm $<
mv $(<:.y=).tab.c $(OUTDIR)/$(<:.y=.c)
mv $(<:.y=).tab.h $(OUTDIR)/$(<:.y=.h)
$(YACC) $(YFLAGS) -b $(<:.y=) -p mm $< -o $(OUTDIR)/$(<:.y=.c)
$(OUTDIR)/aicasm_scan.c: aicasm_scan.l
$(LEX) $(LFLAGS) -o $@ $<

@ -52,6 +52,7 @@
#include <stdlib.h>
#include <string.h>
#include <sysexits.h>
#include <ctype.h>
#include "aicasm_symbol.h"
#include "aicasm.h"

@ -28,7 +28,7 @@ static int asd_get_user_sas_addr(struct asd_ha_struct *asd_ha)
if (asd_ha->hw_prof.sas_addr[0])
return 0;
return sas_request_addr(asd_ha->sas_ha.core.shost,
return sas_request_addr(asd_ha->sas_ha.shost,
asd_ha->hw_prof.sas_addr);
}
@ -72,10 +72,8 @@ static int asd_init_phy(struct asd_phy *phy)
struct asd_sas_phy *sas_phy = &phy->sas_phy;
sas_phy->enabled = 1;
sas_phy->class = SAS;
sas_phy->iproto = SAS_PROTOCOL_ALL;
sas_phy->tproto = 0;
sas_phy->type = PHY_TYPE_PHYSICAL;
sas_phy->role = PHY_ROLE_INITIATOR;
sas_phy->oob_mode = OOB_NOT_CONNECTED;
sas_phy->linkrate = SAS_LINK_RATE_UNKNOWN;

@ -667,7 +667,6 @@ static int asd_register_sas_ha(struct asd_ha_struct *asd_ha)
}
asd_ha->sas_ha.sas_ha_name = (char *) asd_ha->name;
asd_ha->sas_ha.lldd_module = THIS_MODULE;
asd_ha->sas_ha.sas_addr = &asd_ha->hw_prof.sas_addr[0];
for (i = 0; i < ASD_MAX_PHYS; i++) {
@ -688,8 +687,8 @@ static int asd_unregister_sas_ha(struct asd_ha_struct *asd_ha)
err = sas_unregister_ha(&asd_ha->sas_ha);
sas_remove_host(asd_ha->sas_ha.core.shost);
scsi_host_put(asd_ha->sas_ha.core.shost);
sas_remove_host(asd_ha->sas_ha.shost);
scsi_host_put(asd_ha->sas_ha.shost);
kfree(asd_ha->sas_ha.sas_phy);
kfree(asd_ha->sas_ha.sas_port);
@ -739,7 +738,7 @@ static int asd_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
asd_printk("found %s, device %s\n", asd_ha->name, pci_name(dev));
SHOST_TO_SAS_HA(shost) = &asd_ha->sas_ha;
asd_ha->sas_ha.core.shost = shost;
asd_ha->sas_ha.shost = shost;
shost->transportt = aic94xx_transport_template;
shost->max_id = ~0;
shost->max_lun = ~0;

@ -388,14 +388,9 @@ static int asd_build_ata_ascb(struct asd_ascb *ascb, struct sas_task *task,
flags |= data_dir_flags[task->data_dir];
scb->ata_task.ata_flags = flags;
scb->ata_task.retry_count = task->ata_task.retry_count;
scb->ata_task.retry_count = 0;
flags = 0;
if (task->ata_task.set_affil_pol)
flags |= SET_AFFIL_POLICY;
if (task->ata_task.stp_affil_pol)
flags |= STP_AFFIL_POLICY;
scb->ata_task.flags = flags;
scb->ata_task.flags = 0;
}
ascb->tasklet_complete = asd_task_tasklet_complete;
@ -485,9 +480,6 @@ static int asd_build_ssp_ascb(struct asd_ascb *ascb, struct sas_task *task,
scb->ssp_task.ssp_frame.tptt = cpu_to_be16(0xFFFF);
memcpy(scb->ssp_task.ssp_cmd.lun, task->ssp_task.LUN, 8);
if (task->ssp_task.enable_first_burst)
scb->ssp_task.ssp_cmd.efb_prio_attr |= EFB_MASK;
scb->ssp_task.ssp_cmd.efb_prio_attr |= (task->ssp_task.task_prio << 3);
scb->ssp_task.ssp_cmd.efb_prio_attr |= (task->ssp_task.task_attr & 7);
memcpy(scb->ssp_task.ssp_cmd.cdb, task->ssp_task.cmd->cmnd,
task->ssp_task.cmd->cmd_len);

@ -1715,14 +1715,14 @@ static void arcmsr_shutdown(struct pci_dev *pdev)
arcmsr_flush_adapter_cache(acb);
}
static int arcmsr_module_init(void)
static int __init arcmsr_module_init(void)
{
int error = 0;
error = pci_register_driver(&arcmsr_pci_driver);
return error;
}
static void arcmsr_module_exit(void)
static void __exit arcmsr_module_exit(void)
{
pci_unregister_driver(&arcmsr_pci_driver);
}

@ -450,6 +450,10 @@ int beiscsi_iface_set_param(struct Scsi_Host *shost,
}
nla_for_each_attr(attrib, data, dt_len, rm_len) {
/* ignore nla_type as it is never used */
if (nla_len(attrib) < sizeof(*iface_param))
return -EINVAL;
iface_param = nla_data(attrib);
if (iface_param->param_type != ISCSI_NET_PARAM)

@ -800,7 +800,7 @@ struct fc_rscn_pl_s {
u8 command;
u8 pagelen;
__be16 payldlen;
struct fc_rscn_event_s event[1];
struct fc_rscn_event_s event[];
};
/*

@ -1051,7 +1051,7 @@ fc_rscn_build(struct fchs_s *fchs, struct fc_rscn_pl_s *rscn,
rscn->event[0].format = FC_RSCN_FORMAT_PORTID;
rscn->event[0].portid = s_id;
return sizeof(struct fc_rscn_pl_s);
return struct_size(rscn, event, 1);
}
u16
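The two bfa hunks above are one pattern: once event[1] becomes the flexible-array member event[], sizeof() no longer accounts for any trailing elements, so the length math moves to struct_size(), which also saturates on multiplication overflow. A minimal sketch with stand-in names (fc_rscn_event_s assumed declared as in the header above):
#include <linux/overflow.h>	/* struct_size() */
#include <linux/types.h>

struct demo_rscn_pl {
	u8 command;
	u8 pagelen;
	__be16 payldlen;
	struct fc_rscn_event_s event[];	/* was: event[1] */
};

static size_t demo_rscn_len(struct demo_rscn_pl *rscn, unsigned int nevents)
{
	/* sizeof(*rscn) + nevents * sizeof(rscn->event[0]), overflow-safe */
	return struct_size(rscn, event, nevents);
}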

@ -2317,12 +2317,8 @@ sli_xmit_bls_rsp64_wqe(struct sli4 *sli, void *buf,
SLI4_GENERIC_CONTEXT_VPI << SLI4_BLS_RSP_WQE_CT_SHFT;
bls->context_tag = cpu_to_le16(params->vpi);
if (params->s_id != U32_MAX)
bls->local_n_port_id_dword |=
cpu_to_le32(params->s_id & 0x00ffffff);
else
bls->local_n_port_id_dword |=
cpu_to_le32(params->s_id & 0x00ffffff);
bls->local_n_port_id_dword |=
cpu_to_le32(params->s_id & 0x00ffffff);
dw_ridflags = (dw_ridflags & ~SLI4_BLS_RSP_RID) |
(params->d_id & SLI4_BLS_RSP_RID);

@ -319,16 +319,17 @@ static void fcoe_ctlr_announce(struct fcoe_ctlr *fip)
{
struct fcoe_fcf *sel;
struct fcoe_fcf *fcf;
unsigned long flags;
mutex_lock(&fip->ctlr_mutex);
spin_lock_bh(&fip->ctlr_lock);
spin_lock_irqsave(&fip->ctlr_lock, flags);
kfree_skb(fip->flogi_req);
fip->flogi_req = NULL;
list_for_each_entry(fcf, &fip->fcfs, list)
fcf->flogi_sent = 0;
spin_unlock_bh(&fip->ctlr_lock);
spin_unlock_irqrestore(&fip->ctlr_lock, flags);
sel = fip->sel_fcf;
if (sel && ether_addr_equal(sel->fcf_mac, fip->dest_addr))
@ -699,6 +700,7 @@ int fcoe_ctlr_els_send(struct fcoe_ctlr *fip, struct fc_lport *lport,
{
struct fc_frame *fp;
struct fc_frame_header *fh;
unsigned long flags;
u16 old_xid;
u8 op;
u8 mac[ETH_ALEN];
@ -732,11 +734,11 @@ int fcoe_ctlr_els_send(struct fcoe_ctlr *fip, struct fc_lport *lport,
op = FIP_DT_FLOGI;
if (fip->mode == FIP_MODE_VN2VN)
break;
spin_lock_bh(&fip->ctlr_lock);
spin_lock_irqsave(&fip->ctlr_lock, flags);
kfree_skb(fip->flogi_req);
fip->flogi_req = skb;
fip->flogi_req_send = 1;
spin_unlock_bh(&fip->ctlr_lock);
spin_unlock_irqrestore(&fip->ctlr_lock, flags);
schedule_work(&fip->timer_work);
return -EINPROGRESS;
case ELS_FDISC:
@ -1705,10 +1707,11 @@ static int fcoe_ctlr_flogi_send_locked(struct fcoe_ctlr *fip)
static int fcoe_ctlr_flogi_retry(struct fcoe_ctlr *fip)
{
struct fcoe_fcf *fcf;
unsigned long flags;
int error;
mutex_lock(&fip->ctlr_mutex);
spin_lock_bh(&fip->ctlr_lock);
spin_lock_irqsave(&fip->ctlr_lock, flags);
LIBFCOE_FIP_DBG(fip, "re-sending FLOGI - reselect\n");
fcf = fcoe_ctlr_select(fip);
if (!fcf || fcf->flogi_sent) {
@ -1719,7 +1722,7 @@ static int fcoe_ctlr_flogi_retry(struct fcoe_ctlr *fip)
fcoe_ctlr_solicit(fip, NULL);
error = fcoe_ctlr_flogi_send_locked(fip);
}
spin_unlock_bh(&fip->ctlr_lock);
spin_unlock_irqrestore(&fip->ctlr_lock, flags);
mutex_unlock(&fip->ctlr_mutex);
return error;
}
@ -1736,8 +1739,9 @@ static int fcoe_ctlr_flogi_retry(struct fcoe_ctlr *fip)
static void fcoe_ctlr_flogi_send(struct fcoe_ctlr *fip)
{
struct fcoe_fcf *fcf;
unsigned long flags;
spin_lock_bh(&fip->ctlr_lock);
spin_lock_irqsave(&fip->ctlr_lock, flags);
fcf = fip->sel_fcf;
if (!fcf || !fip->flogi_req_send)
goto unlock;
@ -1764,7 +1768,7 @@ static void fcoe_ctlr_flogi_send(struct fcoe_ctlr *fip)
} else /* XXX */
LIBFCOE_FIP_DBG(fip, "No FCF selected - defer send\n");
unlock:
spin_unlock_bh(&fip->ctlr_lock);
spin_unlock_irqrestore(&fip->ctlr_lock, flags);
}
/**
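Every fcoe hunk above follows the same pattern: fip->ctlr_lock can now be contended from interrupt context, where spin_lock_bh() (which only masks softirqs) is not enough to prevent a deadlock, so each acquisition becomes spin_lock_irqsave(), masking local interrupts and saving the previous state in flags. Reduced to its essentials with a demo lock rather than the driver's:
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(demo_lock);

static void demo_touch_shared_state(void)
{
	unsigned long flags;

	/* before: spin_lock_bh(&demo_lock) - softirqs masked, hardirqs not */
	spin_lock_irqsave(&demo_lock, flags);
	/* ... state also reachable from an interrupt path ... */
	spin_unlock_irqrestore(&demo_lock, flags);
}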

@ -50,11 +50,6 @@ static irqreturn_t gvp11_intr(int irq, void *data)
static int gvp11_xfer_mask = 0;
void gvp11_setup(char *str, int *ints)
{
gvp11_xfer_mask = ints[1];
}
static int dma_setup(struct scsi_cmnd *cmd, int dir_in)
{
struct scsi_pointer *scsi_pointer = WD33C93_scsi_pointer(cmd);

@ -1018,10 +1018,8 @@ static void hisi_sas_phy_init(struct hisi_hba *hisi_hba, int phy_no)
phy->minimum_linkrate = SAS_LINK_RATE_1_5_GBPS;
phy->maximum_linkrate = hisi_hba->hw->phy_get_max_linkrate();
sas_phy->enabled = (phy_no < hisi_hba->n_phy) ? 1 : 0;
sas_phy->class = SAS;
sas_phy->iproto = SAS_PROTOCOL_ALL;
sas_phy->tproto = 0;
sas_phy->type = PHY_TYPE_PHYSICAL;
sas_phy->role = PHY_ROLE_INITIATOR;
sas_phy->oob_mode = OOB_NOT_CONNECTED;
sas_phy->linkrate = SAS_LINK_RATE_UNKNOWN;
@ -1065,23 +1063,18 @@ EXPORT_SYMBOL_GPL(hisi_sas_phy_enable);
static void hisi_sas_port_notify_formed(struct asd_sas_phy *sas_phy)
{
struct sas_ha_struct *sas_ha = sas_phy->ha;
struct hisi_hba *hisi_hba = sas_ha->lldd_ha;
struct hisi_sas_phy *phy = sas_phy->lldd_phy;
struct asd_sas_port *sas_port = sas_phy->port;
struct hisi_sas_port *port;
unsigned long flags;
if (!sas_port)
return;
port = to_hisi_sas_port(sas_port);
spin_lock_irqsave(&hisi_hba->lock, flags);
port->port_attached = 1;
port->id = phy->port_id;
phy->port = port;
sas_port->lldd_port = port;
spin_unlock_irqrestore(&hisi_hba->lock, flags);
}
static void hisi_sas_do_release_task(struct hisi_hba *hisi_hba, struct sas_task *task,
@ -2519,10 +2512,9 @@ int hisi_sas_probe(struct platform_device *pdev,
sha->sas_ha_name = DRV_NAME;
sha->dev = hisi_hba->dev;
sha->lldd_module = THIS_MODULE;
sha->sas_addr = &hisi_hba->sas_addr[0];
sha->num_phys = hisi_hba->n_phy;
sha->core.shost = hisi_hba->shost;
sha->shost = hisi_hba->shost;
for (i = 0; i < hisi_hba->n_phy; i++) {
sha->sas_phy[i] = &hisi_hba->phy[i].sas_phy;
@ -2564,12 +2556,12 @@ void hisi_sas_remove(struct platform_device *pdev)
{
struct sas_ha_struct *sha = platform_get_drvdata(pdev);
struct hisi_hba *hisi_hba = sha->lldd_ha;
struct Scsi_Host *shost = sha->core.shost;
struct Scsi_Host *shost = sha->shost;
del_timer_sync(&hisi_hba->timer);
sas_unregister_ha(sha);
sas_remove_host(sha->core.shost);
sas_remove_host(shost);
hisi_sas_free(hisi_hba);
scsi_host_put(shost);

@ -960,7 +960,7 @@ static void prep_ssp_v1_hw(struct hisi_hba *hisi_hba,
struct scsi_cmnd *scsi_cmnd = ssp_task->cmd;
struct sas_tmf_task *tmf = slot->tmf;
int has_data = 0, priority = !!tmf;
u8 *buf_cmd, fburst = 0;
u8 *buf_cmd;
u32 dw1, dw2;
/* create header */
@ -1018,16 +1018,11 @@ static void prep_ssp_v1_hw(struct hisi_hba *hisi_hba,
buf_cmd = hisi_sas_cmd_hdr_addr_mem(slot) +
sizeof(struct ssp_frame_hdr);
if (task->ssp_task.enable_first_burst) {
fburst = (1 << 7);
dw2 |= 1 << CMD_HDR_FIRST_BURST_OFF;
}
hdr->dw2 = cpu_to_le32(dw2);
memcpy(buf_cmd, &task->ssp_task.LUN, 8);
if (!tmf) {
buf_cmd[9] = fburst | task->ssp_task.task_attr |
(task->ssp_task.task_prio << 3);
buf_cmd[9] = task->ssp_task.task_attr;
memcpy(buf_cmd + 12, task->ssp_task.cmd->cmnd,
task->ssp_task.cmd->cmd_len);
} else {

@ -1798,8 +1798,7 @@ static void prep_ssp_v2_hw(struct hisi_hba *hisi_hba,
memcpy(buf_cmd, &task->ssp_task.LUN, 8);
if (!tmf) {
buf_cmd[9] = task->ssp_task.task_attr |
(task->ssp_task.task_prio << 3);
buf_cmd[9] = task->ssp_task.task_attr;
memcpy(buf_cmd + 12, task->ssp_task.cmd->cmnd,
task->ssp_task.cmd->cmd_len);
} else {
@ -2026,6 +2025,11 @@ static void slot_err_v2_hw(struct hisi_hba *hisi_hba,
u16 dma_tx_err_type = le16_to_cpu(err_record->dma_tx_err_type);
u16 sipc_rx_err_type = le16_to_cpu(err_record->sipc_rx_err_type);
u32 dma_rx_err_type = le32_to_cpu(err_record->dma_rx_err_type);
struct hisi_sas_complete_v2_hdr *complete_queue =
hisi_hba->complete_hdr[slot->cmplt_queue];
struct hisi_sas_complete_v2_hdr *complete_hdr =
&complete_queue[slot->cmplt_queue_slot];
u32 dw0 = le32_to_cpu(complete_hdr->dw0);
int error = -1;
if (err_phase == 1) {
@ -2310,7 +2314,8 @@ static void slot_err_v2_hw(struct hisi_hba *hisi_hba,
break;
}
}
hisi_sas_sata_done(task, slot);
if (dw0 & CMPLT_HDR_RSPNS_XFRD_MSK)
hisi_sas_sata_done(task, slot);
}
break;
default:
@ -2443,7 +2448,8 @@ static void slot_complete_v2_hw(struct hisi_hba *hisi_hba,
case SAS_PROTOCOL_SATA | SAS_PROTOCOL_STP:
{
ts->stat = SAS_SAM_STAT_GOOD;
hisi_sas_sata_done(task, slot);
if (dw0 & CMPLT_HDR_RSPNS_XFRD_MSK)
hisi_sas_sata_done(task, slot);
break;
}
default:

@ -1326,7 +1326,7 @@ static void prep_ssp_v3_hw(struct hisi_hba *hisi_hba,
memcpy(buf_cmd, &task->ssp_task.LUN, 8);
if (!tmf) {
buf_cmd[9] = ssp_task->task_attr | (ssp_task->task_prio << 3);
buf_cmd[9] = ssp_task->task_attr;
memcpy(buf_cmd + 12, scsi_cmnd->cmnd, scsi_cmnd->cmd_len);
} else {
buf_cmd[10] = tmf->tmf;
@ -2257,7 +2257,8 @@ slot_err_v3_hw(struct hisi_hba *hisi_hba, struct sas_task *task,
ts->stat = SAS_OPEN_REJECT;
ts->open_rej_reason = SAS_OREJ_RSVD_RETRY;
}
hisi_sas_sata_done(task, slot);
if (dw0 & CMPLT_HDR_RSPNS_XFRD_MSK)
hisi_sas_sata_done(task, slot);
break;
case SAS_PROTOCOL_SMP:
ts->stat = SAS_SAM_STAT_CHECK_CONDITION;
@ -2384,7 +2385,8 @@ static void slot_complete_v3_hw(struct hisi_hba *hisi_hba,
case SAS_PROTOCOL_STP:
case SAS_PROTOCOL_SATA | SAS_PROTOCOL_STP:
ts->stat = SAS_SAM_STAT_GOOD;
hisi_sas_sata_done(task, slot);
if (dw0 & CMPLT_HDR_RSPNS_XFRD_MSK)
hisi_sas_sata_done(task, slot);
break;
default:
ts->stat = SAS_SAM_STAT_CHECK_CONDITION;
@ -3104,21 +3106,25 @@ static const struct hisi_sas_debugfs_reg debugfs_ras_reg = {
static void debugfs_snapshot_prepare_v3_hw(struct hisi_hba *hisi_hba)
{
set_bit(HISI_SAS_REJECT_CMD_BIT, &hisi_hba->flags);
hisi_sas_write32(hisi_hba, DLVRY_QUEUE_ENABLE, 0);
struct Scsi_Host *shost = hisi_hba->shost;
scsi_block_requests(shost);
wait_cmds_complete_timeout_v3_hw(hisi_hba, 100, 5000);
set_bit(HISI_SAS_REJECT_CMD_BIT, &hisi_hba->flags);
hisi_sas_sync_cqs(hisi_hba);
hisi_sas_write32(hisi_hba, DLVRY_QUEUE_ENABLE, 0);
}
static void debugfs_snapshot_restore_v3_hw(struct hisi_hba *hisi_hba)
{
struct Scsi_Host *shost = hisi_hba->shost;
hisi_sas_write32(hisi_hba, DLVRY_QUEUE_ENABLE,
(u32)((1ULL << hisi_hba->queue_count) - 1));
clear_bit(HISI_SAS_REJECT_CMD_BIT, &hisi_hba->flags);
scsi_unblock_requests(shost);
}
static void read_iost_itct_cache_v3_hw(struct hisi_hba *hisi_hba,
@ -4576,7 +4582,7 @@ static int debugfs_fifo_data_v3_hw_show(struct seq_file *s, void *p)
debugfs_read_fifo_data_v3_hw(phy);
debugfs_show_row_32_v3_hw(s, 0, HISI_SAS_FIFO_DATA_DW_SIZE * 4,
phy->fifo.rd_data);
(__le32 *)phy->fifo.rd_data);
return 0;
}
@ -4950,7 +4956,7 @@ hisi_sas_v3_probe(struct pci_dev *pdev, const struct pci_device_id *id)
sha->sas_phy = arr_phy;
sha->sas_port = arr_port;
sha->core.shost = shost;
sha->shost = shost;
sha->lldd_ha = hisi_hba;
shost->transportt = hisi_sas_stt;
@ -4967,7 +4973,6 @@ hisi_sas_v3_probe(struct pci_dev *pdev, const struct pci_device_id *id)
sha->sas_ha_name = DRV_NAME;
sha->dev = dev;
sha->lldd_module = THIS_MODULE;
sha->sas_addr = &hisi_hba->sas_addr[0];
sha->num_phys = hisi_hba->n_phy;
@ -5055,14 +5060,14 @@ static void hisi_sas_v3_remove(struct pci_dev *pdev)
struct device *dev = &pdev->dev;
struct sas_ha_struct *sha = dev_get_drvdata(dev);
struct hisi_hba *hisi_hba = sha->lldd_ha;
struct Scsi_Host *shost = sha->core.shost;
struct Scsi_Host *shost = sha->shost;
pm_runtime_get_noresume(dev);
del_timer_sync(&hisi_hba->timer);
sas_unregister_ha(sha);
flush_workqueue(hisi_hba->wq);
sas_remove_host(sha->core.shost);
sas_remove_host(shost);
hisi_sas_v3_destroy_irqs(pdev, hisi_hba);
hisi_sas_free(hisi_hba);

@ -537,7 +537,7 @@ EXPORT_SYMBOL(scsi_host_alloc);
static int __scsi_host_match(struct device *dev, const void *data)
{
struct Scsi_Host *p;
const unsigned short *hostnum = data;
const unsigned int *hostnum = data;
p = class_to_shost(dev);
return p->host_no == *hostnum;
@ -554,7 +554,7 @@ static int __scsi_host_match(struct device *dev, const void *data)
* that scsi_host_get() took. The put_device() below dropped
* the reference from class_find_device().
**/
struct Scsi_Host *scsi_host_lookup(unsigned short hostnum)
struct Scsi_Host *scsi_host_lookup(unsigned int hostnum)
{
struct device *cdev;
struct Scsi_Host *shost = NULL;

@ -306,7 +306,7 @@ static inline struct isci_pci_info *to_pci_info(struct pci_dev *pdev)
static inline struct Scsi_Host *to_shost(struct isci_host *ihost)
{
return ihost->sas_ha.core.shost;
return ihost->sas_ha.shost;
}
#define for_each_isci_host(id, ihost, pdev) \

@ -250,7 +250,6 @@ static int isci_register_sas_ha(struct isci_host *isci_host)
return -ENOMEM;
sas_ha->sas_ha_name = DRV_NAME;
sas_ha->lldd_module = THIS_MODULE;
sas_ha->sas_addr = &isci_host->phys[0].sas_addr[0];
for (i = 0; i < SCI_MAX_PHYS; i++) {
@ -264,9 +263,7 @@ static int isci_register_sas_ha(struct isci_host *isci_host)
sas_ha->strict_wide_ports = 1;
sas_register_ha(sas_ha);
return 0;
return sas_register_ha(sas_ha);
}
static void isci_unregister(struct isci_host *isci_host)
@ -575,7 +572,7 @@ static struct isci_host *isci_host_alloc(struct pci_dev *pdev, int id)
goto err_shost;
SHOST_TO_SAS_HA(shost) = &ihost->sas_ha;
ihost->sas_ha.core.shost = shost;
ihost->sas_ha.shost = shost;
shost->transportt = isci_transport_template;
shost->max_id = ~0;
@ -730,7 +727,7 @@ static int isci_resume(struct device *dev)
sas_prep_resume_ha(&ihost->sas_ha);
isci_host_init(ihost);
isci_host_start(ihost->sas_ha.core.shost);
isci_host_start(ihost->sas_ha.shost);
wait_for_start(ihost);
sas_resume_ha(&ihost->sas_ha);

@ -1404,10 +1404,8 @@ void isci_phy_init(struct isci_phy *iphy, struct isci_host *ihost, int index)
iphy->sas_phy.ha = &ihost->sas_ha;
iphy->sas_phy.lldd_phy = iphy;
iphy->sas_phy.enabled = 1;
iphy->sas_phy.class = SAS;
iphy->sas_phy.iproto = SAS_PROTOCOL_ALL;
iphy->sas_phy.tproto = 0;
iphy->sas_phy.type = PHY_TYPE_PHYSICAL;
iphy->sas_phy.role = PHY_ROLE_INITIATOR;
iphy->sas_phy.oob_mode = OOB_NOT_CONNECTED;
iphy->sas_phy.linkrate = SAS_LINK_RATE_UNKNOWN;

@ -180,7 +180,7 @@ static void sci_io_request_build_ssp_command_iu(struct isci_request *ireq)
cmd_iu->_r_a = 0;
cmd_iu->_r_b = 0;
cmd_iu->en_fburst = 0; /* unsupported */
cmd_iu->task_prio = task->ssp_task.task_prio;
cmd_iu->task_prio = 0;
cmd_iu->task_attr = task->ssp_task.task_attr;
cmd_iu->_r_c = 0;

@ -162,7 +162,7 @@ static unsigned int sas_ata_qc_issue(struct ata_queued_cmd *qc)
struct ata_port *ap = qc->ap;
struct domain_device *dev = ap->private_data;
struct sas_ha_struct *sas_ha = dev->port->ha;
struct Scsi_Host *host = sas_ha->core.shost;
struct Scsi_Host *host = sas_ha->shost;
struct sas_internal *i = to_sas_internal(host->transportt);
/* TODO: we should try to remove that unlock */
@ -201,12 +201,14 @@ static unsigned int sas_ata_qc_issue(struct ata_queued_cmd *qc)
task->data_dir = qc->dma_dir;
}
task->scatter = qc->sg;
task->ata_task.retry_count = 1;
qc->lldd_task = task;
task->ata_task.use_ncq = ata_is_ncq(qc->tf.protocol);
task->ata_task.dma_xfer = ata_is_dma(qc->tf.protocol);
if (qc->flags & ATA_QCFLAG_RESULT_TF)
task->ata_task.return_fis_on_success = 1;
if (qc->scsicmd)
ASSIGN_SAS_TASK(qc->scsicmd, task);
@ -235,7 +237,7 @@ static void sas_ata_qc_fill_rtf(struct ata_queued_cmd *qc)
static struct sas_internal *dev_to_sas_internal(struct domain_device *dev)
{
return to_sas_internal(dev->port->ha->core.shost->transportt);
return to_sas_internal(dev->port->ha->shost->transportt);
}
static int sas_get_ata_command_set(struct domain_device *dev)
@ -584,7 +586,7 @@ static struct ata_port_info sata_port_info = {
int sas_ata_init(struct domain_device *found_dev)
{
struct sas_ha_struct *ha = found_dev->port->ha;
struct Scsi_Host *shost = ha->core.shost;
struct Scsi_Host *shost = ha->shost;
struct ata_host *ata_host;
struct ata_port *ap;
int rc;
@ -822,7 +824,7 @@ static void async_sas_ata_eh(void *data, async_cookie_t cookie)
struct sas_ha_struct *ha = dev->port->ha;
sas_ata_printk(KERN_DEBUG, dev, "dev error handler\n");
ata_scsi_port_error_handler(ha->core.shost, ap);
ata_scsi_port_error_handler(ha->shost, ap);
sas_put_device(dev);
}

@ -170,7 +170,7 @@ int sas_notify_lldd_dev_found(struct domain_device *dev)
{
int res = 0;
struct sas_ha_struct *sas_ha = dev->port->ha;
struct Scsi_Host *shost = sas_ha->core.shost;
struct Scsi_Host *shost = sas_ha->shost;
struct sas_internal *i = to_sas_internal(shost->transportt);
if (!i->dft->lldd_dev_found)
@ -192,7 +192,7 @@ int sas_notify_lldd_dev_found(struct domain_device *dev)
void sas_notify_lldd_dev_gone(struct domain_device *dev)
{
struct sas_ha_struct *sas_ha = dev->port->ha;
struct Scsi_Host *shost = sas_ha->core.shost;
struct Scsi_Host *shost = sas_ha->shost;
struct sas_internal *i = to_sas_internal(shost->transportt);
if (!i->dft->lldd_dev_gone)
@ -234,7 +234,7 @@ static void sas_suspend_devices(struct work_struct *work)
struct domain_device *dev;
struct sas_discovery_event *ev = to_sas_discovery_event(work);
struct asd_sas_port *port = ev->port;
struct Scsi_Host *shost = port->ha->core.shost;
struct Scsi_Host *shost = port->ha->shost;
struct sas_internal *si = to_sas_internal(shost->transportt);
clear_bit(DISCE_SUSPEND, &port->disc.pending);
@ -373,7 +373,7 @@ static bool sas_abort_cmd(struct request *req, void *data)
static void sas_abort_device_scsi_cmds(struct domain_device *dev)
{
struct sas_ha_struct *sas_ha = dev->port->ha;
struct Scsi_Host *shost = sas_ha->core.shost;
struct Scsi_Host *shost = sas_ha->shost;
if (dev_is_expander(dev->dev_type))
return;

@ -37,7 +37,7 @@ static int smp_execute_task_sg(struct domain_device *dev,
int res, retry;
struct sas_task *task = NULL;
struct sas_internal *i =
to_sas_internal(dev->port->ha->core.shost->transportt);
to_sas_internal(dev->port->ha->shost->transportt);
struct sas_ha_struct *ha = dev->port->ha;
pm_runtime_get_sync(ha->dev);

@ -114,7 +114,7 @@ static int sas_host_smp_write_gpio(struct sas_ha_struct *sas_ha, u8 *resp_data,
u8 reg_type, u8 reg_index, u8 reg_count,
u8 *req_data)
{
struct sas_internal *i = to_sas_internal(sas_ha->core.shost->transportt);
struct sas_internal *i = to_sas_internal(sas_ha->shost->transportt);
int written;
if (i->dft->lldd_write_gpio == NULL) {
@ -182,7 +182,7 @@ static void sas_phy_control(struct sas_ha_struct *sas_ha, u8 phy_id,
enum sas_linkrate max, u8 *resp_data)
{
struct sas_internal *i =
to_sas_internal(sas_ha->core.shost->transportt);
to_sas_internal(sas_ha->shost->transportt);
struct sas_phy_linkrates rates;
struct asd_sas_phy *asd_phy;

@ -183,7 +183,7 @@ static int sas_get_linkerrors(struct sas_phy *phy)
struct sas_ha_struct *sas_ha = SHOST_TO_SAS_HA(shost);
struct asd_sas_phy *asd_phy = sas_ha->sas_phy[phy->number];
struct sas_internal *i =
to_sas_internal(sas_ha->core.shost->transportt);
to_sas_internal(sas_ha->shost->transportt);
return i->dft->lldd_control_phy(asd_phy, PHY_FUNC_GET_EVENTS, NULL);
}
@ -232,7 +232,7 @@ static int transport_sas_phy_reset(struct sas_phy *phy, int hard_reset)
struct sas_ha_struct *sas_ha = SHOST_TO_SAS_HA(shost);
struct asd_sas_phy *asd_phy = sas_ha->sas_phy[phy->number];
struct sas_internal *i =
to_sas_internal(sas_ha->core.shost->transportt);
to_sas_internal(sas_ha->shost->transportt);
if (!hard_reset && sas_try_ata_reset(asd_phy) == 0)
return 0;
@ -266,7 +266,7 @@ int sas_phy_enable(struct sas_phy *phy, int enable)
struct sas_ha_struct *sas_ha = SHOST_TO_SAS_HA(shost);
struct asd_sas_phy *asd_phy = sas_ha->sas_phy[phy->number];
struct sas_internal *i =
to_sas_internal(sas_ha->core.shost->transportt);
to_sas_internal(sas_ha->shost->transportt);
if (enable)
ret = transport_sas_phy_reset(phy, 0);
@ -303,7 +303,7 @@ int sas_phy_reset(struct sas_phy *phy, int hard_reset)
struct sas_ha_struct *sas_ha = SHOST_TO_SAS_HA(shost);
struct asd_sas_phy *asd_phy = sas_ha->sas_phy[phy->number];
struct sas_internal *i =
to_sas_internal(sas_ha->core.shost->transportt);
to_sas_internal(sas_ha->shost->transportt);
ret = i->dft->lldd_control_phy(asd_phy, reset_type, NULL);
} else {
@ -339,7 +339,7 @@ int sas_set_phy_speed(struct sas_phy *phy,
struct sas_ha_struct *sas_ha = SHOST_TO_SAS_HA(shost);
struct asd_sas_phy *asd_phy = sas_ha->sas_phy[phy->number];
struct sas_internal *i =
to_sas_internal(sas_ha->core.shost->transportt);
to_sas_internal(sas_ha->shost->transportt);
ret = i->dft->lldd_control_phy(asd_phy, PHY_FUNC_SET_LINK_RATE,
rates);
@ -438,7 +438,7 @@ static void _sas_resume_ha(struct sas_ha_struct *ha, bool drain)
/* all phys are back up or timed out, turn on i/o so we can
* flush out disks that did not return
*/
scsi_unblock_requests(ha->core.shost);
scsi_unblock_requests(ha->shost);
if (drain)
sas_drain_work(ha);
clear_bit(SAS_HA_RESUMING, &ha->state);
@ -468,7 +468,7 @@ void sas_suspend_ha(struct sas_ha_struct *ha)
int i;
sas_disable_events(ha);
scsi_block_requests(ha->core.shost);
scsi_block_requests(ha->shost);
for (i = 0; i < ha->num_phys; i++) {
struct asd_sas_port *port = ha->sas_port[i];
@ -641,7 +641,7 @@ struct asd_sas_event *sas_alloc_event(struct asd_sas_phy *phy,
struct asd_sas_event *event;
struct sas_ha_struct *sas_ha = phy->ha;
struct sas_internal *i =
to_sas_internal(sas_ha->core.shost->transportt);
to_sas_internal(sas_ha->shost->transportt);
event = kmem_cache_zalloc(sas_event_cache, gfp_flags);
if (!event)

@ -41,13 +41,7 @@ struct sas_phy_data {
void sas_scsi_recover_host(struct Scsi_Host *shost);
int sas_show_class(enum sas_class class, char *buf);
int sas_show_proto(enum sas_protocol proto, char *buf);
int sas_show_linkrate(enum sas_linkrate linkrate, char *buf);
int sas_show_oob_mode(enum sas_oob_mode oob_mode, char *buf);
int sas_register_phys(struct sas_ha_struct *sas_ha);
void sas_unregister_phys(struct sas_ha_struct *sas_ha);
struct asd_sas_event *sas_alloc_event(struct asd_sas_phy *phy, gfp_t gfp_flags);
void sas_free_event(struct asd_sas_event *event);
@ -91,7 +85,6 @@ int sas_get_report_phy_sata(struct domain_device *dev, int phy_id,
int sas_get_phy_attached_dev(struct domain_device *dev, int phy_id,
u8 *sas_addr, enum sas_device_type *type);
int sas_try_ata_reset(struct asd_sas_phy *phy);
void sas_hae_reset(struct work_struct *work);
void sas_free_device(struct kref *kref);
void sas_destruct_devices(struct asd_sas_port *port);

@ -38,7 +38,7 @@ static void sas_phye_oob_error(struct work_struct *work)
struct sas_ha_struct *sas_ha = phy->ha;
struct asd_sas_port *port = phy->port;
struct sas_internal *i =
to_sas_internal(sas_ha->core.shost->transportt);
to_sas_internal(sas_ha->shost->transportt);
sas_deform_port(phy, 1);
@ -66,7 +66,7 @@ static void sas_phye_spinup_hold(struct work_struct *work)
struct asd_sas_phy *phy = ev->phy;
struct sas_ha_struct *sas_ha = phy->ha;
struct sas_internal *i =
to_sas_internal(sas_ha->core.shost->transportt);
to_sas_internal(sas_ha->shost->transportt);
phy->error = 0;
i->dft->lldd_control_phy(phy, PHY_FUNC_RELEASE_SPINUP_HOLD, NULL);
@ -95,7 +95,7 @@ static void sas_phye_shutdown(struct work_struct *work)
struct asd_sas_phy *phy = ev->phy;
struct sas_ha_struct *sas_ha = phy->ha;
struct sas_internal *i =
to_sas_internal(sas_ha->core.shost->transportt);
to_sas_internal(sas_ha->shost->transportt);
if (phy->enabled) {
int ret;
@ -131,7 +131,7 @@ int sas_register_phys(struct sas_ha_struct *sas_ha)
spin_lock_init(&phy->sas_prim_lock);
phy->frame_rcvd_size = 0;
phy->phy = sas_phy_alloc(&sas_ha->core.shost->shost_gendev, i);
phy->phy = sas_phy_alloc(&sas_ha->shost->shost_gendev, i);
if (!phy->phy)
return -ENOMEM;

@ -28,7 +28,7 @@ static void sas_resume_port(struct asd_sas_phy *phy)
struct domain_device *dev, *n;
struct asd_sas_port *port = phy->port;
struct sas_ha_struct *sas_ha = phy->ha;
struct sas_internal *si = to_sas_internal(sas_ha->core.shost->transportt);
struct sas_internal *si = to_sas_internal(sas_ha->shost->transportt);
if (si->dft->lldd_port_formed)
si->dft->lldd_port_formed(phy);
@ -83,7 +83,6 @@ static void sas_form_port_add_phy(struct asd_sas_port *port,
memcpy(port->sas_addr, phy->sas_addr, SAS_ADDR_SIZE);
if (*(u64 *)port->attached_sas_addr == 0) {
port->class = phy->class;
memcpy(port->attached_sas_addr, phy->attached_sas_addr,
SAS_ADDR_SIZE);
port->iproto = phy->iproto;
@ -109,7 +108,7 @@ static void sas_form_port(struct asd_sas_phy *phy)
struct asd_sas_port *port = phy->port;
struct domain_device *port_dev = NULL;
struct sas_internal *si =
to_sas_internal(sas_ha->core.shost->transportt);
to_sas_internal(sas_ha->shost->transportt);
unsigned long flags;
if (port) {
@ -212,7 +211,7 @@ void sas_deform_port(struct asd_sas_phy *phy, int gone)
struct sas_ha_struct *sas_ha = phy->ha;
struct asd_sas_port *port = phy->port;
struct sas_internal *si =
to_sas_internal(sas_ha->core.shost->transportt);
to_sas_internal(sas_ha->shost->transportt);
struct domain_device *dev;
unsigned long flags;
@ -249,7 +248,6 @@ void sas_deform_port(struct asd_sas_phy *phy, int gone)
INIT_LIST_HEAD(&port->phy_list);
memset(port->sas_addr, 0, SAS_ADDR_SIZE);
memset(port->attached_sas_addr, 0, SAS_ADDR_SIZE);
port->class = 0;
port->iproto = 0;
port->tproto = 0;
port->oob_mode = 0;

@ -142,7 +142,6 @@ static struct sas_task *sas_create_task(struct scsi_cmnd *cmd,
task->dev = dev;
task->task_proto = task->dev->tproto; /* BUG_ON(!SSP) */
task->ssp_task.retry_count = 1;
int_to_scsilun(cmd->device->lun, &lun);
memcpy(task->ssp_task.LUN, &lun.scsi_lun, 8);
task->ssp_task.task_attr = TASK_ATTR_SIMPLE;
@ -279,7 +278,7 @@ static enum task_disposition sas_scsi_find_task(struct sas_task *task)
unsigned long flags;
int i, res;
struct sas_internal *si =
to_sas_internal(task->dev->port->ha->core.shost->transportt);
to_sas_internal(task->dev->port->ha->shost->transportt);
for (i = 0; i < 5; i++) {
pr_notice("%s: aborting task 0x%p\n", __func__, task);
@ -327,7 +326,7 @@ static int sas_recover_lu(struct domain_device *dev, struct scsi_cmnd *cmd)
int res = TMF_RESP_FUNC_FAILED;
struct scsi_lun lun;
struct sas_internal *i =
to_sas_internal(dev->port->ha->core.shost->transportt);
to_sas_internal(dev->port->ha->shost->transportt);
int_to_scsilun(cmd->device->lun, &lun);
@ -355,7 +354,7 @@ static int sas_recover_I_T(struct domain_device *dev)
{
int res = TMF_RESP_FUNC_FAILED;
struct sas_internal *i =
to_sas_internal(dev->port->ha->core.shost->transportt);
to_sas_internal(dev->port->ha->shost->transportt);
pr_notice("I_T nexus reset for dev %016llx\n",
SAS_ADDR(dev->sas_addr));
@ -410,7 +409,7 @@ static void sas_wait_eh(struct domain_device *dev)
spin_unlock_irq(&ha->lock);
/* make sure SCSI EH is complete */
if (scsi_host_in_recovery(ha->core.shost)) {
if (scsi_host_in_recovery(ha->shost)) {
msleep(10);
goto retry;
}
@ -440,7 +439,7 @@ static int sas_queue_reset(struct domain_device *dev, int reset_type,
set_bit(SAS_DEV_EH_PENDING, &dev->state);
set_bit(reset_type, &dev->state);
int_to_scsilun(lun, &dev->ssp_dev.reset_lun);
scsi_schedule_eh(ha->core.shost);
scsi_schedule_eh(ha->shost);
}
spin_unlock_irq(&ha->lock);
@ -925,7 +924,7 @@ static int sas_execute_internal_abort(struct domain_device *device,
unsigned int qid, void *data)
{
struct sas_ha_struct *ha = device->port->ha;
struct sas_internal *i = to_sas_internal(ha->core.shost->transportt);
struct sas_internal *i = to_sas_internal(ha->shost->transportt);
struct sas_task *task = NULL;
int res, retry;
@ -1015,7 +1014,7 @@ int sas_execute_tmf(struct domain_device *device, void *parameter,
{
struct sas_task *task;
struct sas_internal *i =
to_sas_internal(device->port->ha->core.shost->transportt);
to_sas_internal(device->port->ha->shost->transportt);
int res, retry;
for (retry = 0; retry < TASK_RETRY; retry++) {

@ -309,7 +309,6 @@ struct lpfc_hba;
#define LPFC_VMID_TIMER 300 /* timer interval in seconds */
#define LPFC_MAX_VMID_SIZE 256
#define LPFC_COMPRESS_VMID_SIZE 16
union lpfc_vmid_io_tag {
u32 app_id; /* App Id vmid */
@ -667,7 +666,7 @@ struct lpfc_vport {
uint32_t cfg_first_burst_size;
uint32_t dev_loss_tmo_changed;
/* VMID parameters */
u8 lpfc_vmid_host_uuid[LPFC_COMPRESS_VMID_SIZE];
u8 lpfc_vmid_host_uuid[16];
u32 max_vmid; /* maximum VMIDs allowed per port */
u32 cur_vmid_cnt; /* Current VMID count */
#define LPFC_MIN_VMID 4
@ -872,6 +871,7 @@ enum lpfc_irq_chann_mode {
enum lpfc_hba_bit_flags {
FABRIC_COMANDS_BLOCKED,
HBA_PCI_ERR,
MBX_TMO_ERR,
};
struct lpfc_hba {
@ -1708,6 +1708,25 @@ lpfc_next_online_cpu(const struct cpumask *mask, unsigned int start)
return cpu_it;
}
/**
* lpfc_next_present_cpu - Finds next present CPU after n
* @n: the cpu prior to search
*
* Note: If no next present cpu, then fallback to first present cpu.
*
**/
static inline unsigned int lpfc_next_present_cpu(int n)
{
unsigned int cpu;
cpu = cpumask_next(n, cpu_present_mask);
if (cpu >= nr_cpu_ids)
cpu = cpumask_first(cpu_present_mask);
return cpu;
}
/**
* lpfc_sli4_mod_hba_eq_delay - update EQ delay
* @phba: Pointer to HBA context object.

@ -2127,11 +2127,12 @@ lpfc_get_hba_info(struct lpfc_hba *phba,
uint32_t *mrpi, uint32_t *arpi,
uint32_t *mvpi, uint32_t *avpi)
{
struct lpfc_mbx_read_config *rd_config;
LPFC_MBOXQ_t *pmboxq;
MAILBOX_t *pmb;
int rc = 0;
uint32_t max_vpi;
struct lpfc_sli4_hba *sli4_hba;
struct lpfc_max_cfg_param *max_cfg_param;
u16 rsrc_ext_cnt, rsrc_ext_size, max_vpi;
/*
* prevent udev from issuing mailbox commands until the port is
@ -2167,31 +2168,65 @@ lpfc_get_hba_info(struct lpfc_hba *phba,
}
if (phba->sli_rev == LPFC_SLI_REV4) {
rd_config = &pmboxq->u.mqe.un.rd_config;
if (mrpi)
*mrpi = bf_get(lpfc_mbx_rd_conf_rpi_count, rd_config);
if (arpi)
*arpi = bf_get(lpfc_mbx_rd_conf_rpi_count, rd_config) -
phba->sli4_hba.max_cfg_param.rpi_used;
if (mxri)
*mxri = bf_get(lpfc_mbx_rd_conf_xri_count, rd_config);
if (axri)
*axri = bf_get(lpfc_mbx_rd_conf_xri_count, rd_config) -
phba->sli4_hba.max_cfg_param.xri_used;
sli4_hba = &phba->sli4_hba;
max_cfg_param = &sli4_hba->max_cfg_param;
/* Account for differences with SLI-3. Get vpi count from
* mailbox data and subtract one for max vpi value.
*/
max_vpi = (bf_get(lpfc_mbx_rd_conf_vpi_count, rd_config) > 0) ?
(bf_get(lpfc_mbx_rd_conf_vpi_count, rd_config) - 1) : 0;
/* Normally, extents are not used */
if (!phba->sli4_hba.extents_in_use) {
if (mrpi)
*mrpi = max_cfg_param->max_rpi;
if (mxri)
*mxri = max_cfg_param->max_xri;
if (mvpi) {
max_vpi = max_cfg_param->max_vpi;
/* Limit the max we support */
if (max_vpi > LPFC_MAX_VPI)
max_vpi = LPFC_MAX_VPI;
if (mvpi)
*mvpi = max_vpi;
if (avpi)
*avpi = max_vpi - phba->sli4_hba.max_cfg_param.vpi_used;
/* Limit the max we support */
if (max_vpi > LPFC_MAX_VPI)
max_vpi = LPFC_MAX_VPI;
*mvpi = max_vpi;
}
} else { /* Extents in use */
if (mrpi) {
if (lpfc_sli4_get_avail_extnt_rsrc(phba,
LPFC_RSC_TYPE_FCOE_RPI,
&rsrc_ext_cnt,
&rsrc_ext_size)) {
rc = 0;
goto free_pmboxq;
}
*mrpi = rsrc_ext_cnt * rsrc_ext_size;
}
if (mxri) {
if (lpfc_sli4_get_avail_extnt_rsrc(phba,
LPFC_RSC_TYPE_FCOE_XRI,
&rsrc_ext_cnt,
&rsrc_ext_size)) {
rc = 0;
goto free_pmboxq;
}
*mxri = rsrc_ext_cnt * rsrc_ext_size;
}
if (mvpi) {
if (lpfc_sli4_get_avail_extnt_rsrc(phba,
LPFC_RSC_TYPE_FCOE_VPI,
&rsrc_ext_cnt,
&rsrc_ext_size)) {
rc = 0;
goto free_pmboxq;
}
max_vpi = rsrc_ext_cnt * rsrc_ext_size;
/* Limit the max we support */
if (max_vpi > LPFC_MAX_VPI)
max_vpi = LPFC_MAX_VPI;
*mvpi = max_vpi;
}
}
} else {
if (mrpi)
*mrpi = pmb->un.varRdConfig.max_rpi;
@ -2212,8 +2247,12 @@ lpfc_get_hba_info(struct lpfc_hba *phba,
}
}
/* Success */
rc = 1;
free_pmboxq:
mempool_free(pmboxq, phba->mbox_mem_pool);
return 1;
return rc;
}
/**
@ -2265,10 +2304,19 @@ lpfc_used_rpi_show(struct device *dev, struct device_attribute *attr,
struct Scsi_Host *shost = class_to_shost(dev);
struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
struct lpfc_hba *phba = vport->phba;
uint32_t cnt, acnt;
struct lpfc_sli4_hba *sli4_hba;
struct lpfc_max_cfg_param *max_cfg_param;
u32 cnt = 0, acnt = 0;
if (lpfc_get_hba_info(phba, NULL, NULL, &cnt, &acnt, NULL, NULL))
return scnprintf(buf, PAGE_SIZE, "%d\n", (cnt - acnt));
if (phba->sli_rev == LPFC_SLI_REV4) {
sli4_hba = &phba->sli4_hba;
max_cfg_param = &sli4_hba->max_cfg_param;
return scnprintf(buf, PAGE_SIZE, "%d\n",
max_cfg_param->rpi_used);
} else {
if (lpfc_get_hba_info(phba, NULL, NULL, &cnt, &acnt, NULL, NULL))
return scnprintf(buf, PAGE_SIZE, "%d\n", (cnt - acnt));
}
return scnprintf(buf, PAGE_SIZE, "Unknown\n");
}
@ -2321,10 +2369,19 @@ lpfc_used_xri_show(struct device *dev, struct device_attribute *attr,
struct Scsi_Host *shost = class_to_shost(dev);
struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
struct lpfc_hba *phba = vport->phba;
uint32_t cnt, acnt;
struct lpfc_sli4_hba *sli4_hba;
struct lpfc_max_cfg_param *max_cfg_param;
u32 cnt = 0, acnt = 0;
if (lpfc_get_hba_info(phba, &cnt, &acnt, NULL, NULL, NULL, NULL))
return scnprintf(buf, PAGE_SIZE, "%d\n", (cnt - acnt));
if (phba->sli_rev == LPFC_SLI_REV4) {
sli4_hba = &phba->sli4_hba;
max_cfg_param = &sli4_hba->max_cfg_param;
return scnprintf(buf, PAGE_SIZE, "%d\n",
max_cfg_param->xri_used);
} else {
if (lpfc_get_hba_info(phba, &cnt, &acnt, NULL, NULL, NULL, NULL))
return scnprintf(buf, PAGE_SIZE, "%d\n", (cnt - acnt));
}
return scnprintf(buf, PAGE_SIZE, "Unknown\n");
}
@ -2377,10 +2434,19 @@ lpfc_used_vpi_show(struct device *dev, struct device_attribute *attr,
struct Scsi_Host *shost = class_to_shost(dev);
struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
struct lpfc_hba *phba = vport->phba;
uint32_t cnt, acnt;
struct lpfc_sli4_hba *sli4_hba;
struct lpfc_max_cfg_param *max_cfg_param;
u32 cnt = 0, acnt = 0;
if (lpfc_get_hba_info(phba, NULL, NULL, NULL, NULL, &cnt, &acnt))
return scnprintf(buf, PAGE_SIZE, "%d\n", (cnt - acnt));
if (phba->sli_rev == LPFC_SLI_REV4) {
sli4_hba = &phba->sli4_hba;
max_cfg_param = &sli4_hba->max_cfg_param;
return scnprintf(buf, PAGE_SIZE, "%d\n",
max_cfg_param->vpi_used);
} else {
if (lpfc_get_hba_info(phba, NULL, NULL, NULL, NULL, &cnt, &acnt))
return scnprintf(buf, PAGE_SIZE, "%d\n", (cnt - acnt));
}
return scnprintf(buf, PAGE_SIZE, "Unknown\n");
}

@ -1557,7 +1557,8 @@ lpfc_cmpl_ct_cmd_gft_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
ndlp->nlp_fc4_type |= NLP_FC4_FCP;
if (fc4_data_1 & LPFC_FC4_TYPE_BITMASK)
ndlp->nlp_fc4_type |= NLP_FC4_NVME;
lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
lpfc_printf_vlog(vport, KERN_INFO,
LOG_DISCOVERY | LOG_NODE,
"3064 Setting ndlp x%px, DID x%06x "
"with FC4 x%08x, Data: x%08x x%08x "
"%d\n",
@ -1568,14 +1569,21 @@ lpfc_cmpl_ct_cmd_gft_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
if (ndlp->nlp_state == NLP_STE_REG_LOGIN_ISSUE &&
ndlp->nlp_fc4_type) {
ndlp->nlp_prev_state = NLP_STE_REG_LOGIN_ISSUE;
lpfc_nlp_set_state(vport, ndlp,
NLP_STE_PRLI_ISSUE);
lpfc_issue_els_prli(vport, ndlp, 0);
/* This is a fabric topology so if discovery
* started with an unsolicited PLOGI, don't
* send a PRLI. Targets don't issue PLOGI or
* PRLI when acting as a target. Likely this is
* an initiator function.
*/
if (!(ndlp->nlp_flag & NLP_RCV_PLOGI)) {
lpfc_nlp_set_state(vport, ndlp,
NLP_STE_PRLI_ISSUE);
lpfc_issue_els_prli(vport, ndlp, 0);
}
} else if (!ndlp->nlp_fc4_type) {
/* If fc4 type is still unknown, then LOGO */
lpfc_printf_vlog(vport, KERN_INFO,
LOG_DISCOVERY,
LOG_DISCOVERY | LOG_NODE,
"6443 Sending LOGO ndlp x%px,"
"DID x%06x with fc4_type: "
"x%08x, state: %d\n",

@ -1041,7 +1041,7 @@ stop_rr_fcf_flogi:
!(ndlp->fc4_xpt_flags & SCSI_XPT_REGD))
lpfc_nlp_put(ndlp);
lpfc_printf_vlog(vport, KERN_WARNING, LOG_TRACE_EVENT,
lpfc_printf_vlog(vport, KERN_WARNING, LOG_ELS,
"0150 FLOGI failure Status:x%x/x%x "
"xri x%x TMO:x%x refcnt %d\n",
ulp_status, ulp_word4, cmdiocb->sli4_xritag,
@ -1091,7 +1091,6 @@ stop_rr_fcf_flogi:
if (!lpfc_error_lost_link(vport, ulp_status, ulp_word4))
lpfc_issue_reg_vfi(vport);
lpfc_nlp_put(ndlp);
goto out;
}
goto flogifail;
@ -1332,7 +1331,8 @@ lpfc_issue_els_flogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
if (phba->cfg_vmid_priority_tagging) {
sp->cmn.priority_tagging = 1;
/* lpfc_vmid_host_uuid is combination of wwpn and wwnn */
if (uuid_is_null((uuid_t *)vport->lpfc_vmid_host_uuid)) {
if (!memchr_inv(vport->lpfc_vmid_host_uuid, 0,
sizeof(vport->lpfc_vmid_host_uuid))) {
memcpy(vport->lpfc_vmid_host_uuid, phba->wwpn,
sizeof(phba->wwpn));
memcpy(&vport->lpfc_vmid_host_uuid[8], phba->wwnn,
@ -2377,10 +2377,10 @@ lpfc_cmpl_els_prli(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
/* PRLI failed */
lpfc_printf_vlog(vport, mode, loglevel,
"2754 PRLI failure DID:%06X Status:x%x/x%x, "
"data: x%x x%x\n",
"data: x%x x%x x%x\n",
ndlp->nlp_DID, ulp_status,
ulp_word4, ndlp->nlp_state,
ndlp->fc4_prli_sent);
ndlp->fc4_prli_sent, ndlp->nlp_flag);
/* Do not call DSM for lpfc_els_abort'ed ELS cmds */
if (!lpfc_error_lost_link(vport, ulp_status, ulp_word4))
@ -2391,14 +2391,16 @@ lpfc_cmpl_els_prli(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
* mismatch typically caused by an RSCN. Skip any
* processing to allow recovery.
*/
if (ndlp->nlp_state >= NLP_STE_PLOGI_ISSUE &&
ndlp->nlp_state <= NLP_STE_REG_LOGIN_ISSUE) {
if ((ndlp->nlp_state >= NLP_STE_PLOGI_ISSUE &&
ndlp->nlp_state <= NLP_STE_REG_LOGIN_ISSUE) ||
(ndlp->nlp_state == NLP_STE_NPR_NODE &&
ndlp->nlp_flag & NLP_DELAY_TMO)) {
lpfc_printf_vlog(vport, KERN_WARNING, LOG_NODE,
"2784 PRLI cmpl: state mismatch "
"2784 PRLI cmpl: Allow Node recovery "
"DID x%06x nstate x%x nflag x%x\n",
ndlp->nlp_DID, ndlp->nlp_state,
ndlp->nlp_flag);
goto out;
goto out;
}
/*
@ -6166,11 +6168,25 @@ lpfc_els_rsp_prli_acc(struct lpfc_vport *vport, struct lpfc_iocbq *oldiocb,
npr->TaskRetryIdReq = 1;
}
npr->acceptRspCode = PRLI_REQ_EXECUTED;
npr->estabImagePair = 1;
/* Set image pair for complementary pairs only. */
if (ndlp->nlp_type & NLP_FCP_TARGET)
npr->estabImagePair = 1;
else
npr->estabImagePair = 0;
npr->readXferRdyDis = 1;
npr->ConfmComplAllowed = 1;
npr->prliType = PRLI_FCP_TYPE;
npr->initiatorFunc = 1;
/* Xmit PRLI ACC response tag <ulpIoTag> */
lpfc_printf_vlog(vport, KERN_INFO,
LOG_ELS | LOG_NODE | LOG_DISCOVERY,
"6014 FCP issue PRLI ACC imgpair %d "
"retry %d task %d\n",
npr->estabImagePair,
npr->Retry, npr->TaskRetryIdReq);
} else if (prli_fc4_req == PRLI_NVME_TYPE) {
/* Respond with an NVME PRLI Type */
npr_nvme = (struct lpfc_nvme_prli *) pcmd;
@ -9588,11 +9604,13 @@ void
lpfc_els_flush_cmd(struct lpfc_vport *vport)
{
LIST_HEAD(abort_list);
LIST_HEAD(cancel_list);
struct lpfc_hba *phba = vport->phba;
struct lpfc_sli_ring *pring;
struct lpfc_iocbq *tmp_iocb, *piocb;
u32 ulp_command;
unsigned long iflags = 0;
bool mbx_tmo_err;
lpfc_fabric_abort_vport(vport);
@ -9614,15 +9632,16 @@ lpfc_els_flush_cmd(struct lpfc_vport *vport)
if (phba->sli_rev == LPFC_SLI_REV4)
spin_lock(&pring->ring_lock);
mbx_tmo_err = test_bit(MBX_TMO_ERR, &phba->bit_flags);
/* First we need to issue aborts to outstanding cmds on txcmpl */
list_for_each_entry_safe(piocb, tmp_iocb, &pring->txcmplq, list) {
if (piocb->cmd_flag & LPFC_IO_LIBDFC)
if (piocb->cmd_flag & LPFC_IO_LIBDFC && !mbx_tmo_err)
continue;
if (piocb->vport != vport)
continue;
if (piocb->cmd_flag & LPFC_DRIVER_ABORTED)
if (piocb->cmd_flag & LPFC_DRIVER_ABORTED && !mbx_tmo_err)
continue;
/* On the ELS ring we can have ELS_REQUESTs or
@ -9641,8 +9660,8 @@ lpfc_els_flush_cmd(struct lpfc_vport *vport)
*/
if (phba->link_state == LPFC_LINK_DOWN)
piocb->cmd_cmpl = lpfc_cmpl_els_link_down;
}
if (ulp_command == CMD_GEN_REQUEST64_CR)
} else if (ulp_command == CMD_GEN_REQUEST64_CR ||
mbx_tmo_err)
list_add_tail(&piocb->dlist, &abort_list);
}
@ -9654,11 +9673,19 @@ lpfc_els_flush_cmd(struct lpfc_vport *vport)
list_for_each_entry_safe(piocb, tmp_iocb, &abort_list, dlist) {
spin_lock_irqsave(&phba->hbalock, iflags);
list_del_init(&piocb->dlist);
lpfc_sli_issue_abort_iotag(phba, pring, piocb, NULL);
if (mbx_tmo_err)
list_move_tail(&piocb->list, &cancel_list);
else
lpfc_sli_issue_abort_iotag(phba, pring, piocb, NULL);
spin_unlock_irqrestore(&phba->hbalock, iflags);
}
/* Make sure HBA is alive */
lpfc_issue_hb_tmo(phba);
if (!list_empty(&cancel_list))
lpfc_sli_cancel_iocbs(phba, &cancel_list, IOSTAT_LOCAL_REJECT,
IOERR_SLI_ABORTED);
else
/* Make sure HBA is alive */
lpfc_issue_hb_tmo(phba);
if (!list_empty(&abort_list))
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
@ -12331,9 +12358,10 @@ lpfc_vmid_uvem(struct lpfc_vport *vport,
elsiocb->vmid_tag.vmid_context = vmid_context;
pcmd = (u8 *)elsiocb->cmd_dmabuf->virt;
if (uuid_is_null((uuid_t *)vport->lpfc_vmid_host_uuid))
if (!memchr_inv(vport->lpfc_vmid_host_uuid, 0,
sizeof(vport->lpfc_vmid_host_uuid)))
memcpy(vport->lpfc_vmid_host_uuid, vmid->host_vmid,
LPFC_COMPRESS_VMID_SIZE);
sizeof(vport->lpfc_vmid_host_uuid));
*((u32 *)(pcmd)) = ELS_CMD_UVEM;
len = (u32 *)(pcmd + 4);
@ -12343,13 +12371,13 @@ lpfc_vmid_uvem(struct lpfc_vport *vport,
vem_id_desc->tag = be32_to_cpu(VEM_ID_DESC_TAG);
vem_id_desc->length = be32_to_cpu(LPFC_UVEM_VEM_ID_DESC_SIZE);
memcpy(vem_id_desc->vem_id, vport->lpfc_vmid_host_uuid,
LPFC_COMPRESS_VMID_SIZE);
sizeof(vem_id_desc->vem_id));
inst_desc = (struct instantiated_ve_desc *)(pcmd + 32);
inst_desc->tag = be32_to_cpu(INSTANTIATED_VE_DESC_TAG);
inst_desc->length = be32_to_cpu(LPFC_UVEM_VE_MAP_DESC_SIZE);
memcpy(inst_desc->global_vem_id, vmid->host_vmid,
LPFC_COMPRESS_VMID_SIZE);
sizeof(inst_desc->global_vem_id));
bf_set(lpfc_instantiated_nport_id, inst_desc, vport->fc_myDID);
bf_set(lpfc_instantiated_local_id, inst_desc,

View File

@ -169,29 +169,44 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
lpfc_printf_vlog(ndlp->vport, KERN_INFO, LOG_NODE,
"3181 dev_loss_callbk x%06x, rport x%px flg x%x "
"load_flag x%x refcnt %d state %d xpt x%x\n",
"load_flag x%x refcnt %u state %d xpt x%x\n",
ndlp->nlp_DID, ndlp->rport, ndlp->nlp_flag,
vport->load_flag, kref_read(&ndlp->kref),
ndlp->nlp_state, ndlp->fc4_xpt_flags);
/* Don't schedule a worker thread event if the vport is going down.
* The teardown process cleans up the node via lpfc_drop_node.
*/
/* Don't schedule a worker thread event if the vport is going down. */
if (vport->load_flag & FC_UNLOADING) {
((struct lpfc_rport_data *)rport->dd_data)->pnode = NULL;
spin_lock_irqsave(&ndlp->lock, iflags);
ndlp->rport = NULL;
ndlp->fc4_xpt_flags &= ~SCSI_XPT_REGD;
/* clear the NLP_XPT_REGD if the node is not registered
* with nvme-fc
/* The scsi_transport is done with the rport so lpfc cannot
* call to unregister. Remove the scsi transport reference
* and clean up the SCSI transport node details.
*/
if (ndlp->fc4_xpt_flags == NLP_XPT_REGD)
ndlp->fc4_xpt_flags &= ~NLP_XPT_REGD;
if (ndlp->fc4_xpt_flags & (NLP_XPT_REGD | SCSI_XPT_REGD)) {
ndlp->fc4_xpt_flags &= ~SCSI_XPT_REGD;
/* Remove the node reference from remote_port_add now.
* The driver will not call remote_port_delete.
/* NVME transport-registered rports need the
* NLP_XPT_REGD flag to complete an unregister.
*/
if (!(ndlp->fc4_xpt_flags & NVME_XPT_REGD))
ndlp->fc4_xpt_flags &= ~NLP_XPT_REGD;
spin_unlock_irqrestore(&ndlp->lock, iflags);
lpfc_nlp_put(ndlp);
spin_lock_irqsave(&ndlp->lock, iflags);
}
/* Only 1 thread can drop the initial node reference. If
* another thread has set NLP_DROPPED, this thread is done.
*/
lpfc_nlp_put(ndlp);
if (!(ndlp->nlp_flag & NLP_DROPPED)) {
ndlp->nlp_flag |= NLP_DROPPED;
spin_unlock_irqrestore(&ndlp->lock, iflags);
lpfc_nlp_put(ndlp);
spin_lock_irqsave(&ndlp->lock, iflags);
}
spin_unlock_irqrestore(&ndlp->lock, iflags);
return;
}
@ -4686,7 +4701,8 @@ lpfc_nlp_unreg_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
spin_lock_irqsave(&ndlp->lock, iflags);
if (!(ndlp->fc4_xpt_flags & NLP_XPT_REGD)) {
spin_unlock_irqrestore(&ndlp->lock, iflags);
lpfc_printf_vlog(vport, KERN_INFO, LOG_SLI,
lpfc_printf_vlog(vport, KERN_INFO,
LOG_ELS | LOG_NODE | LOG_DISCOVERY,
"0999 %s Not regd: ndlp x%px rport x%px DID "
"x%x FLG x%x XPT x%x\n",
__func__, ndlp, ndlp->rport, ndlp->nlp_DID,
@ -4702,9 +4718,10 @@ lpfc_nlp_unreg_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
vport->phba->nport_event_cnt++;
lpfc_unregister_remote_port(ndlp);
} else if (!ndlp->rport) {
lpfc_printf_vlog(vport, KERN_INFO, LOG_SLI,
lpfc_printf_vlog(vport, KERN_INFO,
LOG_ELS | LOG_NODE | LOG_DISCOVERY,
"1999 %s NDLP in devloss x%px DID x%x FLG x%x"
" XPT x%x refcnt %d\n",
" XPT x%x refcnt %u\n",
__func__, ndlp, ndlp->nlp_DID, ndlp->nlp_flag,
ndlp->fc4_xpt_flags,
kref_read(&ndlp->kref));
@ -4954,22 +4971,29 @@ lpfc_drop_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
{
/*
* Use of lpfc_drop_node and UNUSED list: lpfc_drop_node should
* be used if we wish to issue the "last" lpfc_nlp_put() to remove
* the ndlp from the vport. The ndlp is marked as UNUSED on the list
* until ALL other outstanding threads have completed. We check
* that the ndlp is not already in the UNUSED state before we proceed.
* be used when lpfc wants to issue the "last" lpfc_nlp_put() to
* release the ndlp from the vport when conditions are correct.
*/
if (ndlp->nlp_state == NLP_STE_UNUSED_NODE)
return;
lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNUSED_NODE);
ndlp->nlp_flag |= NLP_DROPPED;
if (vport->phba->sli_rev == LPFC_SLI_REV4) {
lpfc_cleanup_vports_rrqs(vport, ndlp);
lpfc_unreg_rpi(vport, ndlp);
}
lpfc_nlp_put(ndlp);
return;
/* NLP_DROPPED means another thread already removed the initial
* reference from lpfc_nlp_init. If set, don't drop it again and
* introduce an imbalance.
*/
spin_lock_irq(&ndlp->lock);
if (!(ndlp->nlp_flag & NLP_DROPPED)) {
ndlp->nlp_flag |= NLP_DROPPED;
spin_unlock_irq(&ndlp->lock);
lpfc_nlp_put(ndlp);
return;
}
spin_unlock_irq(&ndlp->lock);
}
/*
@ -5757,8 +5781,11 @@ lpfc_setup_disc_node(struct lpfc_vport *vport, uint32_t did)
(NLP_FCP_TARGET | NLP_NVME_TARGET)))
return NULL;
lpfc_disc_state_machine(vport, ndlp, NULL,
NLP_EVT_DEVICE_RECOVERY);
if (ndlp->nlp_state > NLP_STE_UNUSED_NODE &&
ndlp->nlp_state < NLP_STE_PRLI_ISSUE) {
lpfc_disc_state_machine(vport, ndlp, NULL,
NLP_EVT_DEVICE_RECOVERY);
}
spin_lock_irq(&ndlp->lock);
ndlp->nlp_flag |= NLP_NPR_2B_DISC;

View File

@ -764,6 +764,8 @@ typedef struct _PRLI { /* Structure is in Big Endian format */
#define PRLI_PREDEF_CONFIG 0x5
#define PRLI_PARTIAL_SUCCESS 0x6
#define PRLI_INVALID_PAGE_CNT 0x7
#define PRLI_INV_SRV_PARM 0x8
uint8_t word0Reserved3; /* FC Parm Word 0, bit 0:7 */
uint32_t origProcAssoc; /* FC Parm Word 1, bit 0:31 */

View File

@ -2123,7 +2123,7 @@ lpfc_handle_eratt_s4(struct lpfc_hba *phba)
en_rn_msg = false;
} else if (reg_err1 == SLIPORT_ERR1_REG_ERR_CODE_2 &&
reg_err2 == SLIPORT_ERR2_REG_FORCED_DUMP)
lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
"3144 Port Down: Debug Dump\n");
else if (reg_err1 == SLIPORT_ERR1_REG_ERR_CODE_2 &&
reg_err2 == SLIPORT_ERR2_REG_FUNC_PROVISON)
@ -7550,6 +7550,8 @@ lpfc_disable_pci_dev(struct lpfc_hba *phba)
void
lpfc_reset_hba(struct lpfc_hba *phba)
{
int rc = 0;
/* If resets are disabled then set error state and return. */
if (!phba->cfg_enable_hba_reset) {
phba->link_state = LPFC_HBA_ERROR;
@ -7560,13 +7562,25 @@ lpfc_reset_hba(struct lpfc_hba *phba)
if (phba->sli.sli_flag & LPFC_SLI_ACTIVE) {
lpfc_offline_prep(phba, LPFC_MBX_WAIT);
} else {
if (test_bit(MBX_TMO_ERR, &phba->bit_flags)) {
/* Perform a PCI function reset to start from clean */
rc = lpfc_pci_function_reset(phba);
lpfc_els_flush_all_cmd(phba);
}
lpfc_offline_prep(phba, LPFC_MBX_NO_WAIT);
lpfc_sli_flush_io_rings(phba);
}
lpfc_offline(phba);
lpfc_sli_brdrestart(phba);
lpfc_online(phba);
lpfc_unblock_mgmt_io(phba);
clear_bit(MBX_TMO_ERR, &phba->bit_flags);
if (unlikely(rc)) {
lpfc_printf_log(phba, KERN_ERR, LOG_SLI,
"8888 PCI function reset failed rc %x\n",
rc);
} else {
lpfc_sli_brdrestart(phba);
lpfc_online(phba);
lpfc_unblock_mgmt_io(phba);
}
}
/**
@ -12498,10 +12512,7 @@ lpfc_cpu_affinity_check(struct lpfc_hba *phba, int vectors)
(new_cpup->eq != LPFC_VECTOR_MAP_EMPTY) &&
(new_cpup->phys_id == cpup->phys_id))
goto found_same;
new_cpu = cpumask_next(
new_cpu, cpu_present_mask);
if (new_cpu >= nr_cpu_ids)
new_cpu = first_cpu;
new_cpu = lpfc_next_present_cpu(new_cpu);
}
/* At this point, we leave the CPU as unassigned */
continue;
@ -12513,9 +12524,7 @@ found_same:
* chance of having multiple unassigned CPU entries
* selecting the same IRQ.
*/
start_cpu = cpumask_next(new_cpu, cpu_present_mask);
if (start_cpu >= nr_cpu_ids)
start_cpu = first_cpu;
start_cpu = lpfc_next_present_cpu(new_cpu);
lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
"3337 Set Affinity: CPU %d "
@ -12548,10 +12557,7 @@ found_same:
if (!(new_cpup->flag & LPFC_CPU_MAP_UNASSIGN) &&
(new_cpup->eq != LPFC_VECTOR_MAP_EMPTY))
goto found_any;
new_cpu = cpumask_next(
new_cpu, cpu_present_mask);
if (new_cpu >= nr_cpu_ids)
new_cpu = first_cpu;
new_cpu = lpfc_next_present_cpu(new_cpu);
}
/* We should never leave an entry unassigned */
lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
@ -12567,9 +12573,7 @@ found_any:
* chance of having multiple unassigned CPU entries
* selecting the same IRQ.
*/
start_cpu = cpumask_next(new_cpu, cpu_present_mask);
if (start_cpu >= nr_cpu_ids)
start_cpu = first_cpu;
start_cpu = lpfc_next_present_cpu(new_cpu);
lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
"3338 Set Affinity: CPU %d "
@ -12640,9 +12644,7 @@ found_any:
new_cpup->core_id == cpup->core_id) {
goto found_hdwq;
}
new_cpu = cpumask_next(new_cpu, cpu_present_mask);
if (new_cpu >= nr_cpu_ids)
new_cpu = first_cpu;
new_cpu = lpfc_next_present_cpu(new_cpu);
}
/* If we can't match both phys_id and core_id,
@ -12654,10 +12656,7 @@ found_any:
if (new_cpup->hdwq != LPFC_VECTOR_MAP_EMPTY &&
new_cpup->phys_id == cpup->phys_id)
goto found_hdwq;
new_cpu = cpumask_next(new_cpu, cpu_present_mask);
if (new_cpu >= nr_cpu_ids)
new_cpu = first_cpu;
new_cpu = lpfc_next_present_cpu(new_cpu);
}
/* Otherwise just round robin on cfg_hdw_queue */
@ -12666,9 +12665,7 @@ found_any:
goto logit;
found_hdwq:
/* We found an available entry, copy the IRQ info */
start_cpu = cpumask_next(new_cpu, cpu_present_mask);
if (start_cpu >= nr_cpu_ids)
start_cpu = first_cpu;
start_cpu = lpfc_next_present_cpu(new_cpu);
cpup->hdwq = new_cpup->hdwq;
logit:
lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
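The affinity-mapping hunks above all replace the same open-coded pattern, cpumask_next() followed by a manual wrap to the first present CPU, with lpfc_next_present_cpu(). Judging from the code it replaces, the helper presumably looks like this (a sketch, not the verbatim lpfc definition):

#include <linux/cpumask.h>

/* Next present CPU after @cpu, wrapping back to the first present
 * CPU at the end of the mask -- the same wraparound each deleted
 * open-coded loop performed inline.
 */
static inline unsigned int lpfc_next_present_cpu(unsigned int cpu)
{
	cpu = cpumask_next(cpu, cpu_present_mask);
	if (cpu >= nr_cpu_ids)
		cpu = cpumask_first(cpu_present_mask);
	return cpu;
}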

View File

@ -1,7 +1,7 @@
/*******************************************************************
* This file is part of the Emulex Linux Device Driver for *
* Fibre Channel Host Bus Adapters. *
* Copyright (C) 2017-2022 Broadcom. All Rights Reserved. The term *
* Copyright (C) 2017-2023 Broadcom. All Rights Reserved. The term *
* Broadcom refers to Broadcom Inc. and/or its subsidiaries. *
* Copyright (C) 2004-2016 Emulex. All rights reserved. *
* EMULEX and SLI are trademarks of Emulex. *
@ -879,23 +879,34 @@ lpfc_rcv_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
spin_unlock_irq(shost->host_lock);
lpfc_retry_pport_discovery(phba);
}
} else if ((!(ndlp->nlp_type & NLP_FABRIC) &&
((ndlp->nlp_type & NLP_FCP_TARGET) ||
(ndlp->nlp_type & NLP_NVME_TARGET) ||
(vport->fc_flag & FC_PT2PT))) ||
(ndlp->nlp_state == NLP_STE_ADISC_ISSUE)) {
/* Only try to re-login if this is NOT a Fabric Node
* AND the remote NPORT is a FCP/NVME Target or we
* are in pt2pt mode. NLP_STE_ADISC_ISSUE is a special
* case for LOGO as a response to ADISC behavior.
*/
mod_timer(&ndlp->nlp_delayfunc,
jiffies + msecs_to_jiffies(1000 * 1));
spin_lock_irq(&ndlp->lock);
ndlp->nlp_flag |= NLP_DELAY_TMO;
spin_unlock_irq(&ndlp->lock);
} else {
lpfc_printf_vlog(vport, KERN_INFO,
LOG_NODE | LOG_ELS | LOG_DISCOVERY,
"3203 LOGO recover nport x%06x state x%x "
"ntype x%x fc_flag x%x\n",
ndlp->nlp_DID, ndlp->nlp_state,
ndlp->nlp_type, vport->fc_flag);
ndlp->nlp_last_elscmd = ELS_CMD_PLOGI;
/* Special cases for rports that recover post LOGO. */
if ((!(ndlp->nlp_type == NLP_FABRIC) &&
(ndlp->nlp_type & (NLP_FCP_TARGET | NLP_NVME_TARGET) ||
vport->fc_flag & FC_PT2PT)) ||
(ndlp->nlp_state >= NLP_STE_ADISC_ISSUE ||
ndlp->nlp_state <= NLP_STE_PRLI_ISSUE)) {
mod_timer(&ndlp->nlp_delayfunc,
jiffies + msecs_to_jiffies(1000 * 1));
spin_lock_irq(&ndlp->lock);
ndlp->nlp_flag |= NLP_DELAY_TMO;
spin_unlock_irq(&ndlp->lock);
ndlp->nlp_last_elscmd = ELS_CMD_PLOGI;
lpfc_printf_vlog(vport, KERN_INFO,
LOG_NODE | LOG_ELS | LOG_DISCOVERY,
"3204 Start nlpdelay on DID x%06x "
"nflag x%x lastels x%x ref cnt %u",
ndlp->nlp_DID, ndlp->nlp_flag,
ndlp->nlp_last_elscmd,
kref_read(&ndlp->kref));
}
}
out:
/* Unregister from backend, could have been skipped due to ADISC */
@ -1854,7 +1865,6 @@ lpfc_rcv_logo_reglogin_issue(struct lpfc_vport *vport,
struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg;
LPFC_MBOXQ_t *mb;
LPFC_MBOXQ_t *nextmb;
struct lpfc_nodelist *ns_ndlp;
cmdiocb = (struct lpfc_iocbq *) arg;
@ -1882,13 +1892,6 @@ lpfc_rcv_logo_reglogin_issue(struct lpfc_vport *vport,
}
spin_unlock_irq(&phba->hbalock);
/* software abort if any GID_FT is outstanding */
if (vport->cfg_enable_fc4_type != LPFC_ENABLE_FCP) {
ns_ndlp = lpfc_findnode_did(vport, NameServer_DID);
if (ns_ndlp)
lpfc_els_abort(phba, ns_ndlp);
}
lpfc_rcv_logo(vport, ndlp, cmdiocb, ELS_CMD_LOGO);
return ndlp->nlp_state;
}
@ -2148,6 +2151,7 @@ lpfc_cmpl_prli_prli_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
struct lpfc_nvme_prli *nvpr;
void *temp_ptr;
u32 ulp_status;
bool acc_imode_sps = false;
cmdiocb = (struct lpfc_iocbq *) arg;
rspiocb = cmdiocb->rsp_iocb;
@ -2182,22 +2186,32 @@ lpfc_cmpl_prli_prli_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
goto out_err;
}
if (npr && (npr->acceptRspCode == PRLI_REQ_EXECUTED) &&
(npr->prliType == PRLI_FCP_TYPE)) {
lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_DISC,
"6028 FCP NPR PRLI Cmpl Init %d Target %d\n",
npr->initiatorFunc,
npr->targetFunc);
if (npr->initiatorFunc)
ndlp->nlp_type |= NLP_FCP_INITIATOR;
if (npr->targetFunc) {
ndlp->nlp_type |= NLP_FCP_TARGET;
if (npr->writeXferRdyDis)
ndlp->nlp_flag |= NLP_FIRSTBURST;
}
if (npr->Retry)
ndlp->nlp_fcp_info |= NLP_FCP_2_DEVICE;
if (npr && npr->prliType == PRLI_FCP_TYPE) {
lpfc_printf_vlog(vport, KERN_INFO,
LOG_ELS | LOG_NODE | LOG_DISCOVERY,
"6028 FCP NPR PRLI Cmpl Init %d Target %d "
"EIP %d AccCode x%x\n",
npr->initiatorFunc, npr->targetFunc,
npr->estabImagePair, npr->acceptRspCode);
if (npr->acceptRspCode == PRLI_INV_SRV_PARM) {
/* Strict initiators don't establish an image pair. */
if (npr->initiatorFunc && !npr->targetFunc &&
!npr->estabImagePair)
acc_imode_sps = true;
}
if (npr->acceptRspCode == PRLI_REQ_EXECUTED || acc_imode_sps) {
if (npr->initiatorFunc)
ndlp->nlp_type |= NLP_FCP_INITIATOR;
if (npr->targetFunc) {
ndlp->nlp_type |= NLP_FCP_TARGET;
if (npr->writeXferRdyDis)
ndlp->nlp_flag |= NLP_FIRSTBURST;
}
if (npr->Retry)
ndlp->nlp_fcp_info |= NLP_FCP_2_DEVICE;
}
} else if (nvpr &&
(bf_get_be32(prli_acc_rsp_code, nvpr) ==
PRLI_REQ_EXECUTED) &&

View File

@ -1864,7 +1864,6 @@ lpfc_nvme_fcp_abort(struct nvme_fc_local_port *pnvme_lport,
struct lpfc_nvme_fcpreq_priv *freqpriv;
unsigned long flags;
int ret_val;
struct nvme_fc_cmd_iu *cp;
/* Validate pointers. LLDD fault handling with transport does
* have timing races.
@ -1988,16 +1987,10 @@ lpfc_nvme_fcp_abort(struct nvme_fc_local_port *pnvme_lport,
return;
}
/*
* Get Command Id from cmd to plug into response. This
* code is not needed in the next NVME Transport drop.
*/
cp = (struct nvme_fc_cmd_iu *)lpfc_nbuf->nvmeCmd->cmdaddr;
lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_ABTS,
"6138 Transport Abort NVME Request Issued for "
"ox_id x%x nvme opcode x%x nvme cmd_id x%x\n",
nvmereq_wqe->sli4_xritag, cp->sqe.common.opcode,
cp->sqe.common.command_id);
"ox_id x%x\n",
nvmereq_wqe->sli4_xritag);
return;
out_unlock:
@ -2510,8 +2503,9 @@ lpfc_nvme_register_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
lpfc_printf_vlog(vport, KERN_ERR,
LOG_TRACE_EVENT,
"6031 RemotePort Registration failed "
"err: %d, DID x%06x\n",
ret, ndlp->nlp_DID);
"err: %d, DID x%06x ref %u\n",
ret, ndlp->nlp_DID, kref_read(&ndlp->kref));
lpfc_nlp_put(ndlp);
}
return ret;

View File

@ -1620,10 +1620,7 @@ lpfc_nvmet_setup_io_context(struct lpfc_hba *phba)
cpu = cpumask_first(cpu_present_mask);
continue;
}
cpu = cpumask_next(cpu, cpu_present_mask);
if (cpu == nr_cpu_ids)
cpu = cpumask_first(cpu_present_mask);
cpu = lpfc_next_present_cpu(cpu);
}
for_each_present_cpu(i) {

View File

@ -3935,6 +3935,8 @@ void lpfc_poll_eratt(struct timer_list *t)
uint64_t sli_intr, cnt;
phba = from_timer(phba, t, eratt_poll);
if (!(phba->hba_flag & HBA_SETUP))
return;
/* Here we will also keep track of interrupts per sec of the hba */
sli_intr = phba->sli.slistat.sli_intr;
@ -7693,7 +7695,9 @@ lpfc_sli4_repost_sgl_list(struct lpfc_hba *phba,
spin_unlock_irq(&phba->hbalock);
} else {
lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
"3161 Failure to post sgl to port.\n");
"3161 Failure to post sgl to port,status %x "
"blkcnt %d totalcnt %d postcnt %d\n",
status, block_cnt, total_cnt, post_cnt);
return -EIO;
}
@ -8478,6 +8482,7 @@ lpfc_sli4_hba_setup(struct lpfc_hba *phba)
spin_unlock_irq(&phba->hbalock);
}
}
phba->hba_flag &= ~HBA_SETUP;
lpfc_sli4_dip(phba);
@ -9282,6 +9287,7 @@ lpfc_mbox_timeout_handler(struct lpfc_hba *phba)
* would get IOCB_ERROR from lpfc_sli_issue_iocb, allowing
* it to fail all outstanding SCSI IO.
*/
set_bit(MBX_TMO_ERR, &phba->bit_flags);
spin_lock_irq(&phba->pport->work_port_lock);
phba->pport->work_port_events &= ~WORKER_MBOX_TMO;
spin_unlock_irq(&phba->pport->work_port_lock);

View File

@ -20,7 +20,7 @@
* included with this package. *
*******************************************************************/
#define LPFC_DRIVER_VERSION "14.2.0.13"
#define LPFC_DRIVER_VERSION "14.2.0.14"
#define LPFC_DRIVER_NAME "lpfc"
/* Used for SLI 2/3 */

View File

@ -438,7 +438,7 @@ megaraid_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
// set up PCI related soft state and other pre-known parameters
adapter->unique_id = pdev->bus->number << 8 | pdev->devfn;
adapter->unique_id = pci_dev_id(pdev);
adapter->irq = pdev->irq;
adapter->pdev = pdev;
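pci_dev_id() yields the same 16-bit bus/devfn value as the shift it replaces here and in the megasas, mvumi, and pmcraid hunks below. Its definition in include/linux/pci.h is essentially the following (sketch):

/* Bus number in the high byte, devfn in the low byte -- identical to
 * the (pdev->bus->number << 8) | pdev->devfn expression it replaces.
 */
static inline u16 pci_dev_id(struct pci_dev *dev)
{
	return PCI_DEVID(dev->bus->number, dev->devfn);
}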

View File

@ -7518,7 +7518,7 @@ static int megasas_probe_one(struct pci_dev *pdev,
*/
instance->pdev = pdev;
instance->host = host;
instance->unique_id = pdev->bus->number << 8 | pdev->devfn;
instance->unique_id = pci_dev_id(pdev);
instance->init_id = MEGASAS_DEFAULT_INIT_ID;
megasas_set_adapter_type(instance);

View File

@ -1482,7 +1482,7 @@ struct mpi3_security_page0 {
#define MPI3_SECURITY1_KEY_RECORD_MAX 1
#endif
#ifndef MPI3_SECURITY1_PAD_MAX
#define MPI3_SECURITY1_PAD_MAX 1
#define MPI3_SECURITY1_PAD_MAX 4
#endif
union mpi3_security1_key_data {
__le32 dword[128];

View File

@ -600,6 +600,7 @@ struct mpi3_event_data_pcie_error_threshold {
__le16 threshold_count;
__le16 attached_dev_handle;
__le16 reserved12;
__le32 reserved14;
};
#define MPI3_EVENT_PCI_ERROR_RC_THRESHOLD_EXCEEDED (0x00)

View File

@ -18,7 +18,7 @@ union mpi3_version_union {
#define MPI3_VERSION_MAJOR (3)
#define MPI3_VERSION_MINOR (0)
#define MPI3_VERSION_UNIT (27)
#define MPI3_VERSION_UNIT (28)
#define MPI3_VERSION_DEV (0)
#define MPI3_DEVHANDLE_INVALID (0xffff)
struct mpi3_sysif_oper_queue_indexes {

View File

@ -55,8 +55,8 @@ extern struct list_head mrioc_list;
extern int prot_mask;
extern atomic64_t event_counter;
#define MPI3MR_DRIVER_VERSION "8.4.1.0.0"
#define MPI3MR_DRIVER_RELDATE "16-March-2023"
#define MPI3MR_DRIVER_VERSION "8.5.0.0.0"
#define MPI3MR_DRIVER_RELDATE "24-July-2023"
#define MPI3MR_DRIVER_NAME "mpi3mr"
#define MPI3MR_DRIVER_LICENSE "GPL"
@ -66,11 +66,12 @@ extern atomic64_t event_counter;
#define MPI3MR_NAME_LENGTH 32
#define IOCNAME "%s: "
#define MPI3MR_MAX_SECTORS 2048
#define MPI3MR_DEFAULT_MAX_IO_SIZE (1 * 1024 * 1024)
/* Definitions for internal SGL and Chain SGL buffers */
#define MPI3MR_PAGE_SIZE_4K 4096
#define MPI3MR_SG_DEPTH (MPI3MR_PAGE_SIZE_4K / sizeof(struct mpi3_sge_common))
#define MPI3MR_DEFAULT_SGL_ENTRIES 256
#define MPI3MR_MAX_SGL_ENTRIES 2048
/* Definitions for MAX values for shost */
#define MPI3MR_MAX_CMDS_LUN 128
@ -206,6 +207,9 @@ extern atomic64_t event_counter;
*/
#define MPI3MR_MAX_APP_XFER_SECTORS (2048 + 512)
#define MPI3MR_WRITE_SAME_MAX_LEN_256_BLKS 256
#define MPI3MR_WRITE_SAME_MAX_LEN_2048_BLKS 2048
/**
* struct mpi3mr_nvme_pt_sge - Structure to store SGEs for NVMe
* Encapsulated commands.
@ -323,6 +327,7 @@ struct mpi3mr_ioc_facts {
u16 max_perids;
u16 max_pds;
u16 max_sasexpanders;
u32 max_data_length;
u16 max_sasinitiators;
u16 max_enclosures;
u16 max_pcie_switches;
@ -676,6 +681,7 @@ enum mpi3mr_dev_state {
* @io_unit_port: IO Unit port ID
* @non_stl: Is this device not to be attached with SAS TL
* @io_throttle_enabled: I/O throttling needed or not
* @wslen: Write same max length
* @q_depth: Device specific Queue Depth
* @wwid: World wide ID
* @enclosure_logical_id: Enclosure logical identifier
@ -698,6 +704,7 @@ struct mpi3mr_tgt_dev {
u8 io_unit_port;
u8 non_stl;
u8 io_throttle_enabled;
u16 wslen;
u16 q_depth;
u64 wwid;
u64 enclosure_logical_id;
@ -751,6 +758,8 @@ static inline void mpi3mr_tgtdev_put(struct mpi3mr_tgt_dev *s)
* @dev_removed: Device removed in the Firmware
* @dev_removedelay: Device is waiting to be removed in FW
* @dev_type: Device type
* @dev_nvme_dif: Device is NVMe DIF enabled
* @wslen: Write same max length
* @io_throttle_enabled: I/O throttling needed or not
* @io_divert: Flag indicates io divert is on or off for the dev
* @throttle_group: Pointer to throttle group info
@ -767,6 +776,8 @@ struct mpi3mr_stgt_priv_data {
u8 dev_removed;
u8 dev_removedelay;
u8 dev_type;
u8 dev_nvme_dif;
u16 wslen;
u8 io_throttle_enabled;
u8 io_divert;
struct mpi3mr_throttle_group_info *throttle_group;
@ -782,12 +793,14 @@ struct mpi3mr_stgt_priv_data {
* @ncq_prio_enable: NCQ priority enable for SATA device
* @pend_count: Counter to track pending I/Os during error
* handling
* @wslen: Write same max length
*/
struct mpi3mr_sdev_priv_data {
struct mpi3mr_stgt_priv_data *tgt_priv_data;
u32 lun_id;
u8 ncq_prio_enable;
u32 pend_count;
u16 wslen;
};
/**
@ -959,6 +972,7 @@ struct scmd_priv {
* @stop_drv_processing: Stop all command processing
* @device_refresh_on: Don't process the events until devices are refreshed
* @max_host_ios: Maximum host I/O count
* @max_sgl_entries: Max SGL entries per I/O
* @chain_buf_count: Chain buffer count
* @chain_buf_pool: Chain buffer pool
* @chain_sgl_list: Chain SGL list
@ -1129,6 +1143,7 @@ struct mpi3mr_ioc {
u16 max_host_ios;
spinlock_t tgtdev_lock;
struct list_head tgtdev_list;
u16 max_sgl_entries;
u32 chain_buf_count;
struct dma_pool *chain_buf_pool;

View File

@ -1163,6 +1163,12 @@ mpi3mr_revalidate_factsdata(struct mpi3mr_ioc *mrioc)
return -EPERM;
}
if (mrioc->shost->max_sectors != (mrioc->facts.max_data_length / 512))
ioc_err(mrioc, "Warning: The maximum data transfer length\n"
"\tchanged after reset: previous(%d), new(%d),\n"
"the driver cannot change this at run time\n",
mrioc->shost->max_sectors * 512, mrioc->facts.max_data_length);
if ((mrioc->sas_transport_enabled) && (mrioc->facts.ioc_capabilities &
MPI3_IOCFACTS_CAPABILITY_MULTIPATH_ENABLED))
ioc_err(mrioc,
@ -2343,8 +2349,8 @@ static int mpi3mr_sync_timestamp(struct mpi3mr_ioc *mrioc)
ioc_err(mrioc, "Issue IOUCTL time_stamp: command timed out\n");
mrioc->init_cmds.is_waiting = 0;
if (!(mrioc->init_cmds.state & MPI3MR_CMD_RESET))
mpi3mr_soft_reset_handler(mrioc,
MPI3MR_RESET_FROM_TSU_TIMEOUT, 1);
mpi3mr_check_rh_fault_ioc(mrioc,
MPI3MR_RESET_FROM_TSU_TIMEOUT);
retval = -1;
goto out_unlock;
}
@ -2856,6 +2862,7 @@ static void mpi3mr_process_factsdata(struct mpi3mr_ioc *mrioc,
le16_to_cpu(facts_data->max_pcie_switches);
mrioc->facts.max_sasexpanders =
le16_to_cpu(facts_data->max_sas_expanders);
mrioc->facts.max_data_length = le16_to_cpu(facts_data->max_data_length);
mrioc->facts.max_sasinitiators =
le16_to_cpu(facts_data->max_sas_initiators);
mrioc->facts.max_enclosures = le16_to_cpu(facts_data->max_enclosures);
@ -2893,13 +2900,18 @@ static void mpi3mr_process_factsdata(struct mpi3mr_ioc *mrioc,
mrioc->facts.io_throttle_high =
le16_to_cpu(facts_data->io_throttle_high);
if (mrioc->facts.max_data_length ==
MPI3_IOCFACTS_MAX_DATA_LENGTH_NOT_REPORTED)
mrioc->facts.max_data_length = MPI3MR_DEFAULT_MAX_IO_SIZE;
else
mrioc->facts.max_data_length *= MPI3MR_PAGE_SIZE_4K;
/* Store in 512b block count */
if (mrioc->facts.io_throttle_data_length)
mrioc->io_throttle_data_length =
(mrioc->facts.io_throttle_data_length * 2 * 4);
else
/* set the length to 1MB + 1K to disable throttle */
mrioc->io_throttle_data_length = MPI3MR_MAX_SECTORS + 2;
mrioc->io_throttle_data_length = (mrioc->facts.max_data_length / 512) + 2;
mrioc->io_throttle_high = (mrioc->facts.io_throttle_high * 2 * 1024);
mrioc->io_throttle_low = (mrioc->facts.io_throttle_low * 2 * 1024);
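The multipliers are unit conversions into 512-byte blocks: the IOC reports io_throttle_data_length in 4 KB units (so * 2 * 4, since 4096 / 512 = 8) and the throttle watermarks in MB (so * 2 * 1024, since 1 MB / 512 = 2048); this reading is inferred from the conversion factors, not stated in the diff. A standalone check of the arithmetic with illustrative firmware values:

#include <stdio.h>

int main(void)
{
	unsigned int fw_len_4k = 256; /* illustrative: 1 MB reported in 4 KB units */
	unsigned int fw_high_mb = 8;  /* illustrative: 8 MB high watermark */

	/* 4 KB units -> 512-byte blocks: 4096 / 512 = 8 = 2 * 4 */
	printf("throttle len:  %u blocks\n", fw_len_4k * 2 * 4);     /* 2048 */

	/* MB -> 512-byte blocks: 1048576 / 512 = 2048 = 2 * 1024 */
	printf("throttle high: %u blocks\n", fw_high_mb * 2 * 1024); /* 16384 */
	return 0;
}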
@ -2914,9 +2926,9 @@ static void mpi3mr_process_factsdata(struct mpi3mr_ioc *mrioc,
ioc_info(mrioc, "SGEModMask 0x%x SGEModVal 0x%x SGEModShift 0x%x ",
mrioc->facts.sge_mod_mask, mrioc->facts.sge_mod_value,
mrioc->facts.sge_mod_shift);
ioc_info(mrioc, "DMA mask %d InitialPE status 0x%x\n",
ioc_info(mrioc, "DMA mask %d InitialPE status 0x%x max_data_len (%d)\n",
mrioc->facts.dma_mask, (facts_flags &
MPI3_IOCFACTS_FLAGS_INITIAL_PORT_ENABLE_MASK));
MPI3_IOCFACTS_FLAGS_INITIAL_PORT_ENABLE_MASK), mrioc->facts.max_data_length);
ioc_info(mrioc,
"max_dev_per_throttle_group(%d), max_throttle_groups(%d)\n",
mrioc->facts.max_dev_per_tg, mrioc->facts.max_io_throttle_group);
@ -3359,8 +3371,8 @@ int mpi3mr_process_event_ack(struct mpi3mr_ioc *mrioc, u8 event,
if (!(mrioc->init_cmds.state & MPI3MR_CMD_COMPLETE)) {
ioc_err(mrioc, "Issue EvtNotify: command timed out\n");
if (!(mrioc->init_cmds.state & MPI3MR_CMD_RESET))
mpi3mr_soft_reset_handler(mrioc,
MPI3MR_RESET_FROM_EVTACK_TIMEOUT, 1);
mpi3mr_check_rh_fault_ioc(mrioc,
MPI3MR_RESET_FROM_EVTACK_TIMEOUT);
retval = -1;
goto out_unlock;
}
@ -3414,7 +3426,14 @@ static int mpi3mr_alloc_chain_bufs(struct mpi3mr_ioc *mrioc)
if (!mrioc->chain_sgl_list)
goto out_failed;
sz = MPI3MR_PAGE_SIZE_4K;
if (mrioc->max_sgl_entries > (mrioc->facts.max_data_length /
MPI3MR_PAGE_SIZE_4K))
mrioc->max_sgl_entries = mrioc->facts.max_data_length /
MPI3MR_PAGE_SIZE_4K;
sz = mrioc->max_sgl_entries * sizeof(struct mpi3_sge_common);
ioc_info(mrioc, "number of sgl entries=%d chain buffer size=%dKB\n",
mrioc->max_sgl_entries, sz/1024);
mrioc->chain_buf_pool = dma_pool_create("chain_buf pool",
&mrioc->pdev->dev, sz, 16, 0);
if (!mrioc->chain_buf_pool) {
@ -3813,7 +3832,7 @@ retry_init:
}
mrioc->max_host_ios = mrioc->facts.max_reqs - MPI3MR_INTERNAL_CMDS_RESVD;
mrioc->shost->max_sectors = mrioc->facts.max_data_length / 512;
mrioc->num_io_throttle_group = mrioc->facts.max_io_throttle_group;
atomic_set(&mrioc->pend_large_data_sz, 0);

View File

@ -33,6 +33,12 @@ static int logging_level;
module_param(logging_level, int, 0);
MODULE_PARM_DESC(logging_level,
" bits for enabling additional logging info (default=0)");
static int max_sgl_entries = MPI3MR_DEFAULT_SGL_ENTRIES;
module_param(max_sgl_entries, int, 0444);
MODULE_PARM_DESC(max_sgl_entries,
"Preferred max number of SG entries to be used for a single I/O\n"
"The actual value will be determined by the driver\n"
"(Minimum=256, Maximum=2048, default=256)");
/* Forward declarations*/
static void mpi3mr_send_event_ack(struct mpi3mr_ioc *mrioc, u8 event,
@ -424,6 +430,7 @@ void mpi3mr_invalidate_devhandles(struct mpi3mr_ioc *mrioc)
tgt_priv->io_throttle_enabled = 0;
tgt_priv->io_divert = 0;
tgt_priv->throttle_group = NULL;
tgt_priv->wslen = 0;
if (tgtdev->host_exposed)
atomic_set(&tgt_priv->block_io, 1);
}
@ -1034,6 +1041,19 @@ mpi3mr_update_sdev(struct scsi_device *sdev, void *data)
void mpi3mr_rfresh_tgtdevs(struct mpi3mr_ioc *mrioc)
{
struct mpi3mr_tgt_dev *tgtdev, *tgtdev_next;
struct mpi3mr_stgt_priv_data *tgt_priv;
dprint_reset(mrioc, "refresh target devices: check for removals\n");
list_for_each_entry_safe(tgtdev, tgtdev_next, &mrioc->tgtdev_list,
list) {
if ((tgtdev->dev_handle == MPI3MR_INVALID_DEV_HANDLE) &&
tgtdev->host_exposed && tgtdev->starget &&
tgtdev->starget->hostdata) {
tgt_priv = tgtdev->starget->hostdata;
tgt_priv->dev_removed = 1;
atomic_set(&tgt_priv->block_io, 0);
}
}
list_for_each_entry_safe(tgtdev, tgtdev_next, &mrioc->tgtdev_list,
list) {
@ -1102,6 +1122,18 @@ static void mpi3mr_update_tgtdev(struct mpi3mr_ioc *mrioc,
tgtdev->io_throttle_enabled =
(flags & MPI3_DEVICE0_FLAGS_IO_THROTTLING_REQUIRED) ? 1 : 0;
switch (flags & MPI3_DEVICE0_FLAGS_MAX_WRITE_SAME_MASK) {
case MPI3_DEVICE0_FLAGS_MAX_WRITE_SAME_256_LB:
tgtdev->wslen = MPI3MR_WRITE_SAME_MAX_LEN_256_BLKS;
break;
case MPI3_DEVICE0_FLAGS_MAX_WRITE_SAME_2048_LB:
tgtdev->wslen = MPI3MR_WRITE_SAME_MAX_LEN_2048_BLKS;
break;
case MPI3_DEVICE0_FLAGS_MAX_WRITE_SAME_NO_LIMIT:
default:
tgtdev->wslen = 0;
break;
}
if (tgtdev->starget && tgtdev->starget->hostdata) {
scsi_tgt_priv_data = (struct mpi3mr_stgt_priv_data *)
@ -1113,6 +1145,7 @@ static void mpi3mr_update_tgtdev(struct mpi3mr_ioc *mrioc,
tgtdev->io_throttle_enabled;
if (is_added == true)
atomic_set(&scsi_tgt_priv_data->block_io, 0);
scsi_tgt_priv_data->wslen = tgtdev->wslen;
}
switch (dev_pg0->access_status) {
@ -3413,7 +3446,7 @@ static int mpi3mr_prepare_sg_scmd(struct mpi3mr_ioc *mrioc,
scsi_bufflen(scmd));
return -ENOMEM;
}
if (sges_left > MPI3MR_SG_DEPTH) {
if (sges_left > mrioc->max_sgl_entries) {
sdev_printk(KERN_ERR, scmd->device,
"scsi_dma_map returned unsupported sge count %d!\n",
sges_left);
@ -3933,6 +3966,48 @@ void mpi3mr_wait_for_host_io(struct mpi3mr_ioc *mrioc, u32 timeout)
mpi3mr_get_fw_pending_ios(mrioc));
}
/**
* mpi3mr_setup_divert_ws - Setup Divert IO flag for write same
* @mrioc: Adapter instance reference
* @scmd: SCSI command reference
* @scsiio_req: MPI3 SCSI IO request
* @scsiio_flags: Pointer to MPI3 SCSI IO Flags
* @wslen: write same max length
*
* Reads the unmap and ndob bits and the number of blocks from the
* WRITE SAME SCSI I/O and, based on those values, sets the divert
* I/O flag and the reason for diverting the I/O to firmware.
*
* Return: Nothing
*/
static inline void mpi3mr_setup_divert_ws(struct mpi3mr_ioc *mrioc,
struct scsi_cmnd *scmd, struct mpi3_scsi_io_request *scsiio_req,
u32 *scsiio_flags, u16 wslen)
{
u8 unmap = 0, ndob = 0;
u8 opcode = scmd->cmnd[0];
u32 num_blocks = 0;
u16 sa = (scmd->cmnd[8] << 8) | (scmd->cmnd[9]);
if (opcode == WRITE_SAME_16) {
unmap = scmd->cmnd[1] & 0x08;
ndob = scmd->cmnd[1] & 0x01;
num_blocks = get_unaligned_be32(scmd->cmnd + 10);
} else if ((opcode == VARIABLE_LENGTH_CMD) && (sa == WRITE_SAME_32)) {
unmap = scmd->cmnd[10] & 0x08;
ndob = scmd->cmnd[10] & 0x01;
num_blocks = get_unaligned_be32(scmd->cmnd + 28);
} else
return;
if ((unmap) && (ndob) && (num_blocks > wslen)) {
scsiio_req->msg_flags |=
MPI3_SCSIIO_MSGFLAGS_DIVERT_TO_FIRMWARE;
*scsiio_flags |=
MPI3_SCSIIO_FLAGS_DIVERT_REASON_WRITE_SAME_TOO_LARGE;
}
}
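The divert decision is made directly from the CDB bytes: for WRITE SAME(16), opcode 0x93, UNMAP is bit 3 and NDOB is bit 0 of byte 1, and the block count sits big-endian in bytes 10-13 (bytes 28-31 for the 32-byte variant). A standalone sketch decoding an illustrative CDB:

#include <stdio.h>
#include <stdint.h>

/* Big-endian 32-bit load, mirroring the kernel's get_unaligned_be32() */
static uint32_t be32_load(const uint8_t *p)
{
	return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
	       ((uint32_t)p[2] << 8) | p[3];
}

int main(void)
{
	/* Illustrative WRITE SAME(16) CDB: UNMAP and NDOB set, 4096 blocks */
	uint8_t cdb[16] = { 0x93, 0x09 };

	cdb[12] = 0x10;	/* bytes 10-13 hold the big-endian block count */

	printf("unmap=%d ndob=%d blocks=%u\n",
	       !!(cdb[1] & 0x08), !!(cdb[1] & 0x01), be32_load(&cdb[10]));
	/* With wslen == 2048, 4096 > 2048, so this I/O would be diverted. */
	return 0;
}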
/**
* mpi3mr_eh_host_reset - Host reset error handling callback
* @scmd: SCSI command reference
@ -4430,7 +4505,6 @@ static int mpi3mr_target_alloc(struct scsi_target *starget)
unsigned long flags;
int retval = 0;
struct sas_rphy *rphy = NULL;
bool update_stgt_priv_data = false;
scsi_tgt_priv_data = kzalloc(sizeof(*scsi_tgt_priv_data), GFP_KERNEL);
if (!scsi_tgt_priv_data)
@ -4439,39 +4513,50 @@ static int mpi3mr_target_alloc(struct scsi_target *starget)
starget->hostdata = scsi_tgt_priv_data;
spin_lock_irqsave(&mrioc->tgtdev_lock, flags);
if (starget->channel == mrioc->scsi_device_channel) {
tgt_dev = __mpi3mr_get_tgtdev_by_perst_id(mrioc, starget->id);
if (tgt_dev && !tgt_dev->is_hidden)
update_stgt_priv_data = true;
else
if (tgt_dev && !tgt_dev->is_hidden) {
scsi_tgt_priv_data->starget = starget;
scsi_tgt_priv_data->dev_handle = tgt_dev->dev_handle;
scsi_tgt_priv_data->perst_id = tgt_dev->perst_id;
scsi_tgt_priv_data->dev_type = tgt_dev->dev_type;
scsi_tgt_priv_data->tgt_dev = tgt_dev;
tgt_dev->starget = starget;
atomic_set(&scsi_tgt_priv_data->block_io, 0);
retval = 0;
if ((tgt_dev->dev_type == MPI3_DEVICE_DEVFORM_PCIE) &&
((tgt_dev->dev_spec.pcie_inf.dev_info &
MPI3_DEVICE0_PCIE_DEVICE_INFO_TYPE_MASK) ==
MPI3_DEVICE0_PCIE_DEVICE_INFO_TYPE_NVME_DEVICE) &&
((tgt_dev->dev_spec.pcie_inf.dev_info &
MPI3_DEVICE0_PCIE_DEVICE_INFO_PITYPE_MASK) !=
MPI3_DEVICE0_PCIE_DEVICE_INFO_PITYPE_0))
scsi_tgt_priv_data->dev_nvme_dif = 1;
scsi_tgt_priv_data->io_throttle_enabled = tgt_dev->io_throttle_enabled;
scsi_tgt_priv_data->wslen = tgt_dev->wslen;
if (tgt_dev->dev_type == MPI3_DEVICE_DEVFORM_VD)
scsi_tgt_priv_data->throttle_group = tgt_dev->dev_spec.vd_inf.tg;
} else
retval = -ENXIO;
} else if (mrioc->sas_transport_enabled && !starget->channel) {
rphy = dev_to_rphy(starget->dev.parent);
tgt_dev = __mpi3mr_get_tgtdev_by_addr_and_rphy(mrioc,
rphy->identify.sas_address, rphy);
if (tgt_dev && !tgt_dev->is_hidden && !tgt_dev->non_stl &&
(tgt_dev->dev_type == MPI3_DEVICE_DEVFORM_SAS_SATA))
update_stgt_priv_data = true;
else
(tgt_dev->dev_type == MPI3_DEVICE_DEVFORM_SAS_SATA)) {
scsi_tgt_priv_data->starget = starget;
scsi_tgt_priv_data->dev_handle = tgt_dev->dev_handle;
scsi_tgt_priv_data->perst_id = tgt_dev->perst_id;
scsi_tgt_priv_data->dev_type = tgt_dev->dev_type;
scsi_tgt_priv_data->tgt_dev = tgt_dev;
scsi_tgt_priv_data->io_throttle_enabled = tgt_dev->io_throttle_enabled;
scsi_tgt_priv_data->wslen = tgt_dev->wslen;
tgt_dev->starget = starget;
atomic_set(&scsi_tgt_priv_data->block_io, 0);
retval = 0;
} else
retval = -ENXIO;
}
if (update_stgt_priv_data) {
scsi_tgt_priv_data->starget = starget;
scsi_tgt_priv_data->dev_handle = tgt_dev->dev_handle;
scsi_tgt_priv_data->perst_id = tgt_dev->perst_id;
scsi_tgt_priv_data->dev_type = tgt_dev->dev_type;
scsi_tgt_priv_data->tgt_dev = tgt_dev;
tgt_dev->starget = starget;
atomic_set(&scsi_tgt_priv_data->block_io, 0);
retval = 0;
scsi_tgt_priv_data->io_throttle_enabled =
tgt_dev->io_throttle_enabled;
if (tgt_dev->dev_type == MPI3_DEVICE_DEVFORM_VD)
scsi_tgt_priv_data->throttle_group =
tgt_dev->dev_spec.vd_inf.tg;
}
spin_unlock_irqrestore(&mrioc->tgtdev_lock, flags);
return retval;
@ -4732,6 +4817,10 @@ static int mpi3mr_qcmd(struct Scsi_Host *shost,
mpi3mr_setup_eedp(mrioc, scmd, scsiio_req);
if (stgt_priv_data->wslen)
mpi3mr_setup_divert_ws(mrioc, scmd, scsiio_req, &scsiio_flags,
stgt_priv_data->wslen);
memcpy(scsiio_req->cdb.cdb32, scmd->cmnd, scmd->cmd_len);
scsiio_req->data_length = cpu_to_le32(scsi_bufflen(scmd));
scsiio_req->dev_handle = cpu_to_le16(dev_handle);
@ -4818,10 +4907,10 @@ static const struct scsi_host_template mpi3mr_driver_template = {
.no_write_same = 1,
.can_queue = 1,
.this_id = -1,
.sg_tablesize = MPI3MR_SG_DEPTH,
.sg_tablesize = MPI3MR_DEFAULT_SGL_ENTRIES,
/* max xfer supported is 1M (2K in 512 byte sized sectors)
*/
.max_sectors = 2048,
.max_sectors = (MPI3MR_DEFAULT_MAX_IO_SIZE / 512),
.cmd_per_lun = MPI3MR_MAX_CMDS_LUN,
.max_segment_size = 0xffffffff,
.track_queue_depth = 1,
@ -5004,6 +5093,16 @@ mpi3mr_probe(struct pci_dev *pdev, const struct pci_device_id *id)
mrioc->pdev = pdev;
mrioc->stop_bsgs = 1;
mrioc->max_sgl_entries = max_sgl_entries;
if (max_sgl_entries > MPI3MR_MAX_SGL_ENTRIES)
mrioc->max_sgl_entries = MPI3MR_MAX_SGL_ENTRIES;
else if (max_sgl_entries < MPI3MR_DEFAULT_SGL_ENTRIES)
mrioc->max_sgl_entries = MPI3MR_DEFAULT_SGL_ENTRIES;
else {
mrioc->max_sgl_entries /= MPI3MR_DEFAULT_SGL_ENTRIES;
mrioc->max_sgl_entries *= MPI3MR_DEFAULT_SGL_ENTRIES;
}
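Taken together this clamps the module parameter to [256, 2048] and rounds in-range values down to a multiple of MPI3MR_DEFAULT_SGL_ENTRIES via the divide-then-multiply pair. A standalone check with illustrative requests:

#include <stdio.h>

#define DFLT 256
#define MAX  2048

static int clamp_sgl(int req)
{
	if (req > MAX)
		return MAX;
	if (req < DFLT)
		return DFLT;
	return (req / DFLT) * DFLT;	/* round down to a multiple of 256 */
}

int main(void)
{
	printf("%d %d %d\n", clamp_sgl(100), clamp_sgl(1000), clamp_sgl(4096));
	/* prints: 256 768 2048 */
	return 0;
}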
/* init shost parameters */
shost->max_cmd_len = MPI3MR_MAX_CDB_LENGTH;
shost->max_lun = -1;
@ -5068,7 +5167,7 @@ mpi3mr_probe(struct pci_dev *pdev, const struct pci_device_id *id)
shost->nr_maps = 3;
shost->can_queue = mrioc->max_host_ios;
shost->sg_tablesize = MPI3MR_SG_DEPTH;
shost->sg_tablesize = mrioc->max_sgl_entries;
shost->max_id = mrioc->facts.max_perids + 1;
retval = scsi_add_host(shost, &pdev->dev);

View File

@ -84,10 +84,8 @@ static void mvs_phy_init(struct mvs_info *mvi, int phy_id)
phy->port = NULL;
timer_setup(&phy->timer, NULL, 0);
sas_phy->enabled = (phy_id < mvi->chip->n_phy) ? 1 : 0;
sas_phy->class = SAS;
sas_phy->iproto = SAS_PROTOCOL_ALL;
sas_phy->tproto = 0;
sas_phy->type = PHY_TYPE_PHYSICAL;
sas_phy->role = PHY_ROLE_INITIATOR;
sas_phy->oob_mode = OOB_NOT_CONNECTED;
sas_phy->linkrate = SAS_LINK_RATE_UNKNOWN;
@ -416,7 +414,7 @@ static int mvs_prep_sas_ha_init(struct Scsi_Host *shost,
sha->sas_phy = arr_phy;
sha->sas_port = arr_port;
sha->core.shost = shost;
sha->shost = shost;
sha->lldd_ha = kzalloc(sizeof(struct mvs_prv_info), GFP_KERNEL);
if (!sha->lldd_ha)
@ -458,7 +456,6 @@ static void mvs_post_sas_ha_init(struct Scsi_Host *shost,
sha->sas_ha_name = DRV_NAME;
sha->dev = mvi->dev;
sha->lldd_module = THIS_MODULE;
sha->sas_addr = &mvi->sas_addr[0];
sha->num_phys = nr_core * chip_info->n_phy;
@ -473,7 +470,7 @@ static void mvs_post_sas_ha_init(struct Scsi_Host *shost,
shost->sg_tablesize = min_t(u16, SG_ALL, MVS_MAX_SG);
shost->can_queue = can_queue;
mvi->shost->cmd_per_lun = MVS_QUEUE_SIZE;
sha->core.shost = mvi->shost;
sha->shost = mvi->shost;
}
static void mvs_init_sas_add(struct mvs_info *mvi)

View File

@ -564,7 +564,7 @@ static int mvs_task_prep_ssp(struct mvs_info *mvi,
void *buf_prd;
struct ssp_frame_hdr *ssp_hdr;
void *buf_tmp;
u8 *buf_cmd, *buf_oaf, fburst = 0;
u8 *buf_cmd, *buf_oaf;
dma_addr_t buf_tmp_dma;
u32 flags;
u32 resp_len, req_len, i, tag = tei->tag;
@ -582,10 +582,6 @@ static int mvs_task_prep_ssp(struct mvs_info *mvi,
(phy_mask << TXQ_PHY_SHIFT));
flags = MCH_RETRY;
if (task->ssp_task.enable_first_burst) {
flags |= MCH_FBURST;
fburst = (1 << 7);
}
if (is_tmf)
flags |= (MCH_SSP_FR_TASK << MCH_SSP_FR_TYPE_SHIFT);
else
@ -667,8 +663,7 @@ static int mvs_task_prep_ssp(struct mvs_info *mvi,
memcpy(buf_cmd, &task->ssp_task.LUN, 8);
if (ssp_hdr->frame_type != SSP_TASK) {
buf_cmd[9] = fburst | task->ssp_task.task_attr |
(task->ssp_task.task_prio << 3);
buf_cmd[9] = task->ssp_task.task_attr;
memcpy(buf_cmd + 12, task->ssp_task.cmd->cmnd,
task->ssp_task.cmd->cmd_len);
} else{

View File

@ -2490,7 +2490,7 @@ static int mvumi_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
mhba->pdev = pdev;
mhba->shost = host;
mhba->unique_id = pdev->bus->number << 8 | pdev->devfn;
mhba->unique_id = pci_dev_id(pdev);
ret = mvumi_init_fw(mhba);
if (ret)

View File

@ -4053,9 +4053,6 @@ static int pm8001_chip_ssp_io_req(struct pm8001_hba_info *pm8001_ha,
ssp_cmd.data_len = cpu_to_le32(task->total_xfer_len);
ssp_cmd.device_id = cpu_to_le32(pm8001_dev->device_id);
ssp_cmd.tag = cpu_to_le32(tag);
if (task->ssp_task.enable_first_burst)
ssp_cmd.ssp_iu.efb_prio_attr |= 0x80;
ssp_cmd.ssp_iu.efb_prio_attr |= (task->ssp_task.task_prio << 3);
ssp_cmd.ssp_iu.efb_prio_attr |= (task->ssp_task.task_attr & 7);
memcpy(ssp_cmd.ssp_iu.cdb, task->ssp_task.cmd->cmnd,
task->ssp_task.cmd->cmd_len);
@ -4095,7 +4092,7 @@ static int pm8001_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
u32 hdr_tag, ncg_tag = 0;
u64 phys_addr;
u32 ATAP = 0x0;
u32 dir;
u32 dir, retfis = 0;
u32 opc = OPC_INB_SATA_HOST_OPSTART;
memset(&sata_cmd, 0, sizeof(sata_cmd));
@ -4124,8 +4121,11 @@ static int pm8001_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
sata_cmd.tag = cpu_to_le32(tag);
sata_cmd.device_id = cpu_to_le32(pm8001_ha_dev->device_id);
sata_cmd.data_len = cpu_to_le32(task->total_xfer_len);
sata_cmd.ncqtag_atap_dir_m =
cpu_to_le32(((ncg_tag & 0xff)<<16)|((ATAP & 0x3f) << 10) | dir);
if (task->ata_task.return_fis_on_success)
retfis = 1;
sata_cmd.retfis_ncqtag_atap_dir_m =
cpu_to_le32((retfis << 24) | ((ncg_tag & 0xff) << 16) |
((ATAP & 0x3f) << 10) | dir);
sata_cmd.sata_fis = task->ata_task.fis;
if (likely(!task->ata_task.device_control_reg_update))
sata_cmd.sata_fis.flags |= 0x80;/* C=1: update ATA cmd reg */
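The renamed dword gains the RETFIS flag in bit 24 alongside the existing NCQ tag, ATAP, and direction fields; the pm80xx hunks below pack it identically (plus the 0x20 encryption bit on the DIF path). A host-endian sketch of the packing, bit positions taken from the shifts in the code, argument values illustrative:

#include <stdio.h>
#include <stdint.h>

/* Host-endian view of the dword handed to cpu_to_le32():
 *   bit 24     RETFIS (return FIS on success)
 *   bits 16-23 NCQ tag
 *   bits 10-15 ATAP
 *   bits 0-9   direction bits (and, in the DIF variant, dad in
 *              bits 0-1 with 0x20 as the encryption bit)
 */
static uint32_t pack_sata_dword(uint32_t retfis, uint32_t ncq_tag,
				uint32_t atap, uint32_t dir)
{
	return (retfis << 24) | ((ncq_tag & 0xff) << 16) |
	       ((atap & 0x3f) << 10) | dir;
}

int main(void)
{
	/* Illustrative values only; dir comes from the driver's DIR_* setup */
	printf("0x%08x\n", pack_sata_dword(1, 5, 0x0a, 0x100));
	return 0;
}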

View File

@ -515,7 +515,7 @@ struct sata_start_req {
__le32 tag;
__le32 device_id;
__le32 data_len;
__le32 ncqtag_atap_dir_m;
__le32 retfis_ncqtag_atap_dir_m;
struct host_to_dev_fis sata_fis;
u32 reserved1;
u32 reserved2;

View File

@ -162,10 +162,8 @@ static void pm8001_phy_init(struct pm8001_hba_info *pm8001_ha, int phy_id)
phy->minimum_linkrate = SAS_LINK_RATE_1_5_GBPS;
phy->maximum_linkrate = SAS_LINK_RATE_6_0_GBPS;
sas_phy->enabled = (phy_id < pm8001_ha->chip->n_phy) ? 1 : 0;
sas_phy->class = SAS;
sas_phy->iproto = SAS_PROTOCOL_ALL;
sas_phy->tproto = 0;
sas_phy->type = PHY_TYPE_PHYSICAL;
sas_phy->role = PHY_ROLE_INITIATOR;
sas_phy->oob_mode = OOB_NOT_CONNECTED;
sas_phy->linkrate = SAS_LINK_RATE_UNKNOWN;
@ -654,10 +652,9 @@ static void pm8001_post_sas_ha_init(struct Scsi_Host *shost,
sha->sas_ha_name = DRV_NAME;
sha->dev = pm8001_ha->dev;
sha->strict_wide_ports = 1;
sha->lldd_module = THIS_MODULE;
sha->sas_addr = &pm8001_ha->sas_addr[0];
sha->num_phys = chip_info->n_phy;
sha->core.shost = shost;
sha->shost = shost;
}
/**

View File

@ -702,8 +702,6 @@ int pm8001_mpi_fw_flash_update_resp(struct pm8001_hba_info *pm8001_ha,
void *piomb);
int pm8001_mpi_general_event(struct pm8001_hba_info *pm8001_ha, void *piomb);
int pm8001_mpi_task_abort_resp(struct pm8001_hba_info *pm8001_ha, void *piomb);
struct sas_task *pm8001_alloc_task(void);
void pm8001_free_task(struct sas_task *task);
void pm8001_tag_free(struct pm8001_hba_info *pm8001_ha, u32 tag);
struct pm8001_device *pm8001_find_dev(struct pm8001_hba_info *pm8001_ha,
u32 device_id);

View File

@ -4316,9 +4316,6 @@ static int pm80xx_chip_ssp_io_req(struct pm8001_hba_info *pm8001_ha,
ssp_cmd.data_len = cpu_to_le32(task->total_xfer_len);
ssp_cmd.device_id = cpu_to_le32(pm8001_dev->device_id);
ssp_cmd.tag = cpu_to_le32(tag);
if (task->ssp_task.enable_first_burst)
ssp_cmd.ssp_iu.efb_prio_attr = 0x80;
ssp_cmd.ssp_iu.efb_prio_attr |= (task->ssp_task.task_prio << 3);
ssp_cmd.ssp_iu.efb_prio_attr |= (task->ssp_task.task_attr & 7);
memcpy(ssp_cmd.ssp_iu.cdb, task->ssp_task.cmd->cmnd,
task->ssp_task.cmd->cmd_len);
@ -4457,7 +4454,7 @@ static int pm80xx_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
u64 phys_addr, end_addr;
u32 end_addr_high, end_addr_low;
u32 ATAP = 0x0;
u32 dir;
u32 dir, retfis = 0;
u32 opc = OPC_INB_SATA_HOST_OPSTART;
memset(&sata_cmd, 0, sizeof(sata_cmd));
@ -4487,7 +4484,8 @@ static int pm80xx_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
sata_cmd.tag = cpu_to_le32(tag);
sata_cmd.device_id = cpu_to_le32(pm8001_ha_dev->device_id);
sata_cmd.data_len = cpu_to_le32(task->total_xfer_len);
if (task->ata_task.return_fis_on_success)
retfis = 1;
sata_cmd.sata_fis = task->ata_task.fis;
if (likely(!task->ata_task.device_control_reg_update))
sata_cmd.sata_fis.flags |= 0x80;/* C=1: update ATA cmd reg */
@ -4500,12 +4498,10 @@ static int pm80xx_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
"Encryption enabled.Sending Encrypt SATA cmd 0x%x\n",
sata_cmd.sata_fis.command);
opc = OPC_INB_SATA_DIF_ENC_IO;
/* set encryption bit */
sata_cmd.ncqtag_atap_dir_m_dad =
cpu_to_le32(((ncg_tag & 0xff)<<16)|
((ATAP & 0x3f) << 10) | 0x20 | dir);
/* dad (bit 0-1) is 0 */
/* set encryption bit; dad (bits 0-1) is 0 */
sata_cmd.retfis_ncqtag_atap_dir_m_dad =
cpu_to_le32((retfis << 24) | ((ncg_tag & 0xff) << 16) |
((ATAP & 0x3f) << 10) | 0x20 | dir);
/* fill in PRD (scatter/gather) table, if any */
if (task->num_scatter > 1) {
pm8001_chip_make_sg(task->scatter,
@ -4568,11 +4564,10 @@ static int pm80xx_chip_sata_req(struct pm8001_hba_info *pm8001_ha,
pm8001_dbg(pm8001_ha, IO,
"Sending Normal SATA command 0x%x inb %x\n",
sata_cmd.sata_fis.command, q_index);
/* dad (bit 0-1) is 0 */
sata_cmd.ncqtag_atap_dir_m_dad =
cpu_to_le32(((ncg_tag & 0xff)<<16) |
((ATAP & 0x3f) << 10) | dir);
/* dad (bits 0-1) is 0 */
sata_cmd.retfis_ncqtag_atap_dir_m_dad =
cpu_to_le32((retfis << 24) | ((ncg_tag & 0xff) << 16) |
((ATAP & 0x3f) << 10) | dir);
/* fill in PRD (scatter/gather) table, if any */
if (task->num_scatter > 1) {
pm8001_chip_make_sg(task->scatter,

View File

@ -731,7 +731,7 @@ struct sata_start_req {
__le32 tag;
__le32 device_id;
__le32 data_len;
__le32 ncqtag_atap_dir_m_dad;
__le32 retfis_ncqtag_atap_dir_m_dad;
struct host_to_dev_fis sata_fis;
u32 reserved1;
u32 reserved2; /* dword 11. rsvd for normal I/O. */

View File

@ -3584,8 +3584,7 @@ static ssize_t pmcraid_show_adapter_id(
struct Scsi_Host *shost = class_to_shost(dev);
struct pmcraid_instance *pinstance =
(struct pmcraid_instance *)shost->hostdata;
u32 adapter_id = (pinstance->pdev->bus->number << 8) |
pinstance->pdev->devfn;
u32 adapter_id = pci_dev_id(pinstance->pdev);
u32 aen_group = pmcraid_event_family.id;
return snprintf(buf, PAGE_SIZE,

View File

@ -45,6 +45,11 @@ typedef struct {
#include "ppa.h"
static unsigned int mode = PPA_AUTODETECT;
module_param(mode, uint, 0644);
MODULE_PARM_DESC(mode, "Transfer mode (0 = Autodetect, 1 = SPP 4-bit, "
"2 = SPP 8-bit, 3 = EPP 8-bit, 4 = EPP 16-bit, 5 = EPP 32-bit");
static struct scsi_pointer *ppa_scsi_pointer(struct scsi_cmnd *cmd)
{
return scsi_cmd_priv(cmd);
@ -157,7 +162,7 @@ static int ppa_show_info(struct seq_file *m, struct Scsi_Host *host)
return 0;
}
static int device_check(ppa_struct *dev);
static int device_check(ppa_struct *dev, bool autodetect);
#if PPA_DEBUG > 0
#define ppa_fail(x,y) printk("ppa: ppa_fail(%i) from %s at line %d\n",\
@ -302,13 +307,10 @@ static int ppa_out(ppa_struct *dev, char *buffer, int len)
case PPA_EPP_8:
epp_reset(ppb);
w_ctr(ppb, 0x4);
#ifdef CONFIG_SCSI_IZIP_EPP16
if (!(((long) buffer | len) & 0x01))
outsw(ppb + 4, buffer, len >> 1);
#else
if (!(((long) buffer | len) & 0x03))
if (dev->mode == PPA_EPP_32 && !(((long) buffer | len) & 0x03))
outsl(ppb + 4, buffer, len >> 2);
#endif
else if (dev->mode == PPA_EPP_16 && !(((long) buffer | len) & 0x01))
outsw(ppb + 4, buffer, len >> 1);
else
outsb(ppb + 4, buffer, len);
w_ctr(ppb, 0xc);
@ -355,13 +357,10 @@ static int ppa_in(ppa_struct *dev, char *buffer, int len)
case PPA_EPP_8:
epp_reset(ppb);
w_ctr(ppb, 0x24);
#ifdef CONFIG_SCSI_IZIP_EPP16
if (!(((long) buffer | len) & 0x01))
insw(ppb + 4, buffer, len >> 1);
#else
if (!(((long) buffer | len) & 0x03))
if (dev->mode == PPA_EPP_32 && !(((long) buffer | len) & 0x03))
insl(ppb + 4, buffer, len >> 2);
#endif
else if (dev->mode == PPA_EPP_16 && !(((long) buffer | len) & 0x01))
insw(ppb + 4, buffer, len >> 1);
else
insb(ppb + 4, buffer, len);
w_ctr(ppb, 0x2c);
@ -469,6 +468,27 @@ static int ppa_init(ppa_struct *dev)
{
int retv;
unsigned short ppb = dev->base;
bool autodetect = dev->mode == PPA_AUTODETECT;
if (autodetect) {
int modes = dev->dev->port->modes;
int ppb_hi = dev->dev->port->base_hi;
/* Mode detection works up the chain of speed
* This avoids a nasty if-then-else-if-... tree
*/
dev->mode = PPA_NIBBLE;
if (modes & PARPORT_MODE_TRISTATE)
dev->mode = PPA_PS2;
if (modes & PARPORT_MODE_ECP) {
w_ecr(ppb_hi, 0x20);
dev->mode = PPA_PS2;
}
if ((modes & PARPORT_MODE_EPP) && (modes & PARPORT_MODE_ECP))
w_ecr(ppb_hi, 0x80);
}
ppa_disconnect(dev);
ppa_connect(dev, CONNECT_NORMAL);
@ -492,7 +512,7 @@ static int ppa_init(ppa_struct *dev)
if (retv)
return -EIO;
return device_check(dev);
return device_check(dev, autodetect);
}
static inline int ppa_send_command(struct scsi_cmnd *cmd)
@ -637,7 +657,7 @@ static void ppa_interrupt(struct work_struct *work)
case DID_OK:
break;
case DID_NO_CONNECT:
printk(KERN_DEBUG "ppa: no device at SCSI ID %i\n", cmd->device->target);
printk(KERN_DEBUG "ppa: no device at SCSI ID %i\n", scmd_id(cmd));
break;
case DID_BUS_BUSY:
printk(KERN_DEBUG "ppa: BUS BUSY - EPP timeout detected\n");
@ -883,7 +903,7 @@ static int ppa_reset(struct scsi_cmnd *cmd)
return SUCCESS;
}
static int device_check(ppa_struct *dev)
static int device_check(ppa_struct *dev, bool autodetect)
{
/* This routine looks for a device and then attempts to use EPP
to send a command. If all goes as planned then EPP is available. */
@ -895,8 +915,8 @@ static int device_check(ppa_struct *dev)
old_mode = dev->mode;
for (loop = 0; loop < 8; loop++) {
/* Attempt to use EPP for Test Unit Ready */
if ((ppb & 0x0007) == 0x0000)
dev->mode = PPA_EPP_32;
if (autodetect && (ppb & 0x0007) == 0x0000)
dev->mode = PPA_EPP_8;
second_pass:
ppa_connect(dev, CONNECT_EPP_MAYBE);
@ -924,7 +944,7 @@ second_pass:
udelay(1000);
ppa_disconnect(dev);
udelay(1000);
if (dev->mode == PPA_EPP_32) {
if (dev->mode != old_mode) {
dev->mode = old_mode;
goto second_pass;
}
@ -947,7 +967,7 @@ second_pass:
udelay(1000);
ppa_disconnect(dev);
udelay(1000);
if (dev->mode == PPA_EPP_32) {
if (dev->mode != old_mode) {
dev->mode = old_mode;
goto second_pass;
}
@ -1026,7 +1046,6 @@ static int __ppa_attach(struct parport *pb)
DEFINE_WAIT(wait);
ppa_struct *dev, *temp;
int ports;
int modes, ppb, ppb_hi;
int err = -ENOMEM;
struct pardev_cb ppa_cb;
@ -1034,7 +1053,7 @@ static int __ppa_attach(struct parport *pb)
if (!dev)
return -ENOMEM;
dev->base = -1;
dev->mode = PPA_AUTODETECT;
dev->mode = mode < PPA_UNKNOWN ? mode : PPA_AUTODETECT;
dev->recon_tmo = PPA_RECON_TMO;
init_waitqueue_head(&waiting);
temp = find_parent();
@ -1069,25 +1088,8 @@ static int __ppa_attach(struct parport *pb)
}
dev->waiting = NULL;
finish_wait(&waiting, &wait);
ppb = dev->base = dev->dev->port->base;
ppb_hi = dev->dev->port->base_hi;
w_ctr(ppb, 0x0c);
modes = dev->dev->port->modes;
/* Mode detection works up the chain of speed
* This avoids a nasty if-then-else-if-... tree
*/
dev->mode = PPA_NIBBLE;
if (modes & PARPORT_MODE_TRISTATE)
dev->mode = PPA_PS2;
if (modes & PARPORT_MODE_ECP) {
w_ecr(ppb_hi, 0x20);
dev->mode = PPA_PS2;
}
if ((modes & PARPORT_MODE_EPP) && (modes & PARPORT_MODE_ECP))
w_ecr(ppb_hi, 0x80);
dev->base = dev->dev->port->base;
w_ctr(dev->base, 0x0c);
/* Done configuration */

View File

@ -107,11 +107,7 @@ static char *PPA_MODE_STRING[] =
"PS/2",
"EPP 8 bit",
"EPP 16 bit",
#ifdef CONFIG_SCSI_IZIP_EPP16
"EPP 16 bit",
#else
"EPP 32 bit",
#endif
"Unknown"};
/* other options */

View File

@ -59,6 +59,8 @@ extern uint qedf_debug;
#define QEDF_LOG_NOTICE 0x40000000 /* Notice logs */
#define QEDF_LOG_WARN 0x80000000 /* Warning logs */
#define QEDF_DEBUGFS_LOG_LEN (2 * PAGE_SIZE)
/* Debug context structure */
struct qedf_dbg_ctx {
unsigned int host_no;

View File

@ -8,6 +8,7 @@
#include <linux/uaccess.h>
#include <linux/debugfs.h>
#include <linux/module.h>
#include <linux/vmalloc.h>
#include "qedf.h"
#include "qedf_dbg.h"
@ -98,7 +99,9 @@ static ssize_t
qedf_dbg_fp_int_cmd_read(struct file *filp, char __user *buffer, size_t count,
loff_t *ppos)
{
ssize_t ret;
size_t cnt = 0;
char *cbuf;
int id;
struct qedf_fastpath *fp = NULL;
struct qedf_dbg_ctx *qedf_dbg =
@ -108,19 +111,25 @@ qedf_dbg_fp_int_cmd_read(struct file *filp, char __user *buffer, size_t count,
QEDF_INFO(qedf_dbg, QEDF_LOG_DEBUGFS, "entered\n");
cnt = sprintf(buffer, "\nFastpath I/O completions\n\n");
cbuf = vmalloc(QEDF_DEBUGFS_LOG_LEN);
if (!cbuf)
return 0;
cnt += scnprintf(cbuf + cnt, QEDF_DEBUGFS_LOG_LEN - cnt, "\nFastpath I/O completions\n\n");
for (id = 0; id < qedf->num_queues; id++) {
fp = &(qedf->fp_array[id]);
if (fp->sb_id == QEDF_SB_ID_NULL)
continue;
cnt += sprintf((buffer + cnt), "#%d: %lu\n", id,
fp->completions);
cnt += scnprintf(cbuf + cnt, QEDF_DEBUGFS_LOG_LEN - cnt,
"#%d: %lu\n", id, fp->completions);
}
cnt = min_t(int, count, cnt - *ppos);
*ppos += cnt;
return cnt;
ret = simple_read_from_buffer(buffer, count, ppos, cbuf, cnt);
vfree(cbuf);
return ret;
}
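All three qedf debugfs read handlers move from sprintf()ing straight into the user-supplied buffer, with no copy_to_user() and no bounds check, to formatting into a kernel buffer and letting simple_read_from_buffer() handle the offset bookkeeping and user copy. A minimal sketch of the resulting pattern (handler name and the mask value are illustrative):

#include <linux/debugfs.h>
#include <linux/fs.h>
#include <linux/kernel.h>

static ssize_t example_dbg_read(struct file *filp, char __user *buffer,
				size_t count, loff_t *ppos)
{
	char cbuf[32];
	int cnt;

	/* Format into a bounded kernel buffer first... */
	cnt = scnprintf(cbuf, sizeof(cbuf), "debug mask = 0x%x\n", 0u);

	/* ...then let the helper clamp to @count, advance *ppos, and
	 * copy_to_user(), returning the number of bytes copied.
	 */
	return simple_read_from_buffer(buffer, count, ppos, cbuf, cnt);
}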
static ssize_t
@ -138,15 +147,14 @@ qedf_dbg_debug_cmd_read(struct file *filp, char __user *buffer, size_t count,
loff_t *ppos)
{
int cnt;
char cbuf[32];
struct qedf_dbg_ctx *qedf_dbg =
(struct qedf_dbg_ctx *)filp->private_data;
QEDF_INFO(qedf_dbg, QEDF_LOG_DEBUGFS, "debug mask=0x%x\n", qedf_debug);
cnt = sprintf(buffer, "debug mask = 0x%x\n", qedf_debug);
cnt = scnprintf(cbuf, sizeof(cbuf), "debug mask = 0x%x\n", qedf_debug);
cnt = min_t(int, count, cnt - *ppos);
*ppos += cnt;
return cnt;
return simple_read_from_buffer(buffer, count, ppos, cbuf, cnt);
}
static ssize_t
@ -185,18 +193,17 @@ qedf_dbg_stop_io_on_error_cmd_read(struct file *filp, char __user *buffer,
size_t count, loff_t *ppos)
{
int cnt;
char cbuf[7];
struct qedf_dbg_ctx *qedf_dbg =
(struct qedf_dbg_ctx *)filp->private_data;
struct qedf_ctx *qedf = container_of(qedf_dbg,
struct qedf_ctx, dbg_ctx);
QEDF_INFO(qedf_dbg, QEDF_LOG_DEBUGFS, "entered\n");
cnt = sprintf(buffer, "%s\n",
cnt = scnprintf(cbuf, sizeof(cbuf), "%s\n",
qedf->stop_io_on_error ? "true" : "false");
cnt = min_t(int, count, cnt - *ppos);
*ppos += cnt;
return cnt;
return simple_read_from_buffer(buffer, count, ppos, cbuf, cnt);
}
static ssize_t

View File

@ -466,6 +466,7 @@ static inline be_id_t port_id_to_be_id(port_id_t port_id)
}
struct tmf_arg {
struct list_head tmf_elem;
struct qla_qpair *qpair;
struct fc_port *fcport;
struct scsi_qla_host *vha;
@ -2541,7 +2542,6 @@ enum rscn_addr_format {
typedef struct fc_port {
struct list_head list;
struct scsi_qla_host *vha;
struct list_head tmf_pending;
unsigned int conf_compl_supported:1;
unsigned int deleted:2;
@ -2562,9 +2562,6 @@ typedef struct fc_port {
unsigned int do_prli_nvme:1;
uint8_t nvme_flag;
uint8_t active_tmf;
#define MAX_ACTIVE_TMF 8
uint8_t node_name[WWN_SIZE];
uint8_t port_name[WWN_SIZE];
port_id_t d_id;
@ -4656,6 +4653,8 @@ struct qla_hw_data {
uint32_t flt_region_aux_img_status_sec;
};
uint8_t active_image;
uint8_t active_tmf;
#define MAX_ACTIVE_TMF 8
/* Needed for BEACON */
uint16_t beacon_blink_led;
@ -4670,6 +4669,8 @@ struct qla_hw_data {
struct qla_msix_entry *msix_entries;
struct list_head tmf_pending;
struct list_head tmf_active;
struct list_head vp_list; /* list of VP */
unsigned long vp_idx_map[(MAX_MULTI_ID_FABRIC / 8) /
sizeof(unsigned long)];

View File

@ -48,8 +48,6 @@ extern int qla24xx_els_dcmd2_iocb(scsi_qla_host_t *, int, fc_port_t *, bool);
extern void qla2x00_els_dcmd2_free(scsi_qla_host_t *vha,
struct els_plogi *els_plogi);
extern void qla2x00_update_fcports(scsi_qla_host_t *);
extern int qla2x00_abort_isp(scsi_qla_host_t *);
extern void qla2x00_abort_isp_cleanup(scsi_qla_host_t *);
extern void qla2x00_quiesce_io(scsi_qla_host_t *);
@ -143,6 +141,7 @@ void qla_edif_sess_down(struct scsi_qla_host *vha, struct fc_port *sess);
void qla_edif_clear_appdata(struct scsi_qla_host *vha,
struct fc_port *fcport);
const char *sc_to_str(uint16_t cmd);
void qla_adjust_iocb_limit(scsi_qla_host_t *vha);
/*
* Global Data in qla_os.c source file.
@ -205,8 +204,6 @@ extern int qla2x00_post_async_logout_work(struct scsi_qla_host *, fc_port_t *,
uint16_t *);
extern int qla2x00_post_async_adisc_work(struct scsi_qla_host *, fc_port_t *,
uint16_t *);
extern int qla2x00_post_async_adisc_done_work(struct scsi_qla_host *,
fc_port_t *, uint16_t *);
extern int qla2x00_set_exlogins_buffer(struct scsi_qla_host *);
extern void qla2x00_free_exlogin_buffer(struct qla_hw_data *);
extern int qla2x00_set_exchoffld_buffer(struct scsi_qla_host *);
@ -216,7 +213,6 @@ extern int qla81xx_restart_mpi_firmware(scsi_qla_host_t *);
extern struct scsi_qla_host *qla2x00_create_host(const struct scsi_host_template *,
struct qla_hw_data *);
extern void qla2x00_free_host(struct scsi_qla_host *);
extern void qla2x00_relogin(struct scsi_qla_host *);
extern void qla2x00_do_work(struct scsi_qla_host *);
extern void qla2x00_free_fcports(struct scsi_qla_host *);
@ -238,13 +234,10 @@ extern int __qla83xx_clear_drv_presence(scsi_qla_host_t *vha);
extern int qla2x00_post_uevent_work(struct scsi_qla_host *, u32);
extern void qla2x00_disable_board_on_pci_error(struct work_struct *);
extern void qla_eeh_work(struct work_struct *);
extern void qla2x00_sp_compl(srb_t *sp, int);
extern void qla2xxx_qpair_sp_free_dma(srb_t *sp);
extern void qla2xxx_qpair_sp_compl(srb_t *sp, int);
extern void qla24xx_sched_upd_fcport(fc_port_t *);
void qla2x00_handle_login_done_event(struct scsi_qla_host *, fc_port_t *,
uint16_t *);
int qla24xx_post_gnl_work(struct scsi_qla_host *, fc_port_t *);
int qla24xx_post_relogin_work(struct scsi_qla_host *vha);
void qla2x00_wait_for_sess_deletion(scsi_qla_host_t *);
@ -728,7 +721,6 @@ int qla24xx_post_gpsc_work(struct scsi_qla_host *, fc_port_t *);
int qla24xx_async_gpsc(scsi_qla_host_t *, fc_port_t *);
void qla24xx_handle_gpsc_event(scsi_qla_host_t *, struct event_arg *);
int qla2x00_mgmt_svr_login(scsi_qla_host_t *);
void qla24xx_handle_gffid_event(scsi_qla_host_t *vha, struct event_arg *ea);
int qla24xx_async_gffid(scsi_qla_host_t *vha, fc_port_t *fcport, bool);
int qla24xx_async_gpnft(scsi_qla_host_t *, u8, srb_t *);
void qla24xx_async_gpnft_done(scsi_qla_host_t *, srb_t *);
@ -851,7 +843,6 @@ extern void qla2x00_start_iocbs(struct scsi_qla_host *, struct req_que *);
/* Interrupt related */
extern irqreturn_t qla82xx_intr_handler(int, void *);
extern irqreturn_t qla82xx_msi_handler(int, void *);
extern irqreturn_t qla82xx_msix_default(int, void *);
extern irqreturn_t qla82xx_msix_rsp_q(int, void *);
extern void qla82xx_enable_intrs(struct qla_hw_data *);


@ -508,6 +508,7 @@ static
void qla24xx_handle_adisc_event(scsi_qla_host_t *vha, struct event_arg *ea)
{
struct fc_port *fcport = ea->fcport;
unsigned long flags;
ql_dbg(ql_dbg_disc, vha, 0x20d2,
"%s %8phC DS %d LS %d rc %d login %d|%d rscn %d|%d lid %d\n",
@ -522,9 +523,15 @@ void qla24xx_handle_adisc_event(scsi_qla_host_t *vha, struct event_arg *ea)
ql_dbg(ql_dbg_disc, vha, 0x2066,
"%s %8phC: adisc fail: post delete\n",
__func__, ea->fcport->port_name);
spin_lock_irqsave(&vha->work_lock, flags);
/* deleted = 0 & logout_on_delete = force fw cleanup */
fcport->deleted = 0;
if (fcport->deleted == QLA_SESS_DELETED)
fcport->deleted = 0;
fcport->logout_on_delete = 1;
spin_unlock_irqrestore(&vha->work_lock, flags);
qlt_schedule_sess_for_deletion(ea->fcport);
return;
}
@ -1134,7 +1141,7 @@ int qla24xx_async_gnl(struct scsi_qla_host *vha, fc_port_t *fcport)
u16 *mb;
if (!vha->flags.online || (fcport->flags & FCF_ASYNC_SENT))
return rval;
goto done;
ql_dbg(ql_dbg_disc, vha, 0x20d9,
"Async-gnlist WWPN %8phC \n", fcport->port_name);
@ -1188,8 +1195,9 @@ int qla24xx_async_gnl(struct scsi_qla_host *vha, fc_port_t *fcport)
done_free_sp:
/* ref: INIT */
kref_put(&sp->cmd_kref, qla2x00_sp_release);
fcport->flags &= ~(FCF_ASYNC_SENT);
done:
fcport->flags &= ~(FCF_ASYNC_ACTIVE | FCF_ASYNC_SENT);
fcport->flags &= ~(FCF_ASYNC_ACTIVE);
return rval;
}
@ -1446,7 +1454,6 @@ void __qla24xx_handle_gpdb_event(scsi_qla_host_t *vha, struct event_arg *ea)
spin_lock_irqsave(&vha->hw->tgt.sess_lock, flags);
ea->fcport->login_gen++;
ea->fcport->deleted = 0;
ea->fcport->logout_on_delete = 1;
if (!ea->fcport->login_succ && !IS_SW_RESV_ADDR(ea->fcport->d_id)) {
@ -1996,12 +2003,11 @@ qla2x00_tmf_iocb_timeout(void *data)
int rc, h;
unsigned long flags;
if (sp->type == SRB_MARKER) {
complete(&tmf->u.tmf.comp);
return;
}
if (sp->type == SRB_MARKER)
rc = QLA_FUNCTION_FAILED;
else
rc = qla24xx_async_abort_cmd(sp, false);
rc = qla24xx_async_abort_cmd(sp, false);
if (rc) {
spin_lock_irqsave(sp->qpair->qp_lock_ptr, flags);
for (h = 1; h < sp->qpair->req->num_outstanding_cmds; h++) {
@ -2032,10 +2038,14 @@ static void qla_marker_sp_done(srb_t *sp, int res)
complete(&tmf->u.tmf.comp);
}
#define START_SP_W_RETRIES(_sp, _rval) \
#define START_SP_W_RETRIES(_sp, _rval, _chip_gen, _login_gen) \
{\
int cnt = 5; \
do { \
if (_chip_gen != sp->vha->hw->chip_reset || _login_gen != sp->fcport->login_gen) {\
_rval = EINVAL; \
break; \
} \
_rval = qla2x00_start_sp(_sp); \
if (_rval == EAGAIN) \
msleep(1); \
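
The reworked macro snapshots the chip-reset and login generation counters before submitting and refuses to retry once either counter moves, so a retried SRB can never land on a stale session. A minimal userspace sketch of that guard-and-retry pattern (names are hypothetical, not the driver's API):

    #include <stdio.h>

    #define MAX_TRIES 5

    /* Hypothetical generation counters, bumped by reset/relogin paths. */
    static unsigned int chip_gen, login_gen;

    /* Stand-in for qla2x00_start_sp(): 0 on success, 1 for "try again". */
    static int start_request(void) { return 0; }

    /* Retry the submit, but only while the world still matches the
     * generations the caller sampled before queuing. */
    static int submit_with_guard(unsigned int chip_snap, unsigned int login_snap)
    {
        int tries = MAX_TRIES;

        while (tries--) {
            if (chip_snap != chip_gen || login_snap != login_gen)
                return -1;   /* disruption: bail out instead of retrying */
            if (start_request() == 0)
                return 0;    /* submitted */
            /* transient failure (EAGAIN in the driver): loop and retry */
        }
        return -1;
    }

    int main(void)
    {
        unsigned int c = chip_gen, l = login_gen;

        printf("submit: %d\n", submit_with_guard(c, l));
        return 0;
    }
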
@ -2058,6 +2068,7 @@ qla26xx_marker(struct tmf_arg *arg)
srb_t *sp;
int rval = QLA_FUNCTION_FAILED;
fc_port_t *fcport = arg->fcport;
u32 chip_gen, login_gen;
if (TMF_NOT_READY(arg->fcport)) {
ql_dbg(ql_dbg_taskm, vha, 0x8039,
@ -2067,6 +2078,9 @@ qla26xx_marker(struct tmf_arg *arg)
return QLA_SUSPENDED;
}
chip_gen = vha->hw->chip_reset;
login_gen = fcport->login_gen;
/* ref: INIT */
sp = qla2xxx_get_qpair_sp(vha, arg->qpair, fcport, GFP_KERNEL);
if (!sp)
@ -2084,7 +2098,7 @@ qla26xx_marker(struct tmf_arg *arg)
tm_iocb->u.tmf.loop_id = fcport->loop_id;
tm_iocb->u.tmf.vp_index = vha->vp_idx;
START_SP_W_RETRIES(sp, rval);
START_SP_W_RETRIES(sp, rval, chip_gen, login_gen);
ql_dbg(ql_dbg_taskm, vha, 0x8006,
"Async-marker hdl=%x loop-id=%x portid=%06x modifier=%x lun=%lld qp=%d rval %d.\n",
@ -2123,6 +2137,17 @@ static void qla2x00_tmf_sp_done(srb_t *sp, int res)
complete(&tmf->u.tmf.comp);
}
static int qla_tmf_wait(struct tmf_arg *arg)
{
/* Only two types of error handling reach here: LUN or target reset. */
if (arg->flags & (TCF_LUN_RESET | TCF_ABORT_TASK_SET | TCF_CLEAR_TASK_SET))
return qla2x00_eh_wait_for_pending_commands(arg->vha,
arg->fcport->d_id.b24, arg->lun, WAIT_LUN);
else
return qla2x00_eh_wait_for_pending_commands(arg->vha,
arg->fcport->d_id.b24, arg->lun, WAIT_TARGET);
}
static int
__qla2x00_async_tm_cmd(struct tmf_arg *arg)
{
@ -2130,8 +2155,9 @@ __qla2x00_async_tm_cmd(struct tmf_arg *arg)
struct srb_iocb *tm_iocb;
srb_t *sp;
int rval = QLA_FUNCTION_FAILED;
fc_port_t *fcport = arg->fcport;
u32 chip_gen, login_gen;
u64 jif;
if (TMF_NOT_READY(arg->fcport)) {
ql_dbg(ql_dbg_taskm, vha, 0x8032,
@ -2141,6 +2167,9 @@ __qla2x00_async_tm_cmd(struct tmf_arg *arg)
return QLA_SUSPENDED;
}
chip_gen = vha->hw->chip_reset;
login_gen = fcport->login_gen;
/* ref: INIT */
sp = qla2xxx_get_qpair_sp(vha, arg->qpair, fcport, GFP_KERNEL);
if (!sp)
@ -2158,7 +2187,7 @@ __qla2x00_async_tm_cmd(struct tmf_arg *arg)
tm_iocb->u.tmf.flags = arg->flags;
tm_iocb->u.tmf.lun = arg->lun;
START_SP_W_RETRIES(sp, rval);
START_SP_W_RETRIES(sp, rval, chip_gen, login_gen);
ql_dbg(ql_dbg_taskm, vha, 0x802f,
"Async-tmf hdl=%x loop-id=%x portid=%06x ctrl=%x lun=%lld qp=%d rval=%x.\n",
@ -2176,8 +2205,24 @@ __qla2x00_async_tm_cmd(struct tmf_arg *arg)
"TM IOCB failed (%x).\n", rval);
}
if (!test_bit(UNLOADING, &vha->dpc_flags) && !IS_QLAFX00(vha->hw))
rval = qla26xx_marker(arg);
if (!test_bit(UNLOADING, &vha->dpc_flags) && !IS_QLAFX00(vha->hw)) {
jif = jiffies;
if (qla_tmf_wait(arg)) {
ql_log(ql_log_info, vha, 0x803e,
"Waited %u ms Nexus=%ld:%06x:%llu.\n",
jiffies_to_msecs(jiffies - jif), vha->host_no,
fcport->d_id.b24, arg->lun);
}
if (chip_gen == vha->hw->chip_reset && login_gen == fcport->login_gen) {
rval = qla26xx_marker(arg);
} else {
ql_log(ql_log_info, vha, 0x803e,
"Skip Marker due to disruption. Nexus=%ld:%06x:%llu.\n",
vha->host_no, fcport->d_id.b24, arg->lun);
rval = QLA_FUNCTION_FAILED;
}
}
done_free_sp:
/* ref: INIT */
@ -2186,30 +2231,42 @@ done:
return rval;
}
static void qla_put_tmf(fc_port_t *fcport)
static void qla_put_tmf(struct tmf_arg *arg)
{
struct scsi_qla_host *vha = fcport->vha;
struct scsi_qla_host *vha = arg->vha;
struct qla_hw_data *ha = vha->hw;
unsigned long flags;
spin_lock_irqsave(&ha->tgt.sess_lock, flags);
fcport->active_tmf--;
ha->active_tmf--;
list_del(&arg->tmf_elem);
spin_unlock_irqrestore(&ha->tgt.sess_lock, flags);
}
static
int qla_get_tmf(fc_port_t *fcport)
int qla_get_tmf(struct tmf_arg *arg)
{
struct scsi_qla_host *vha = fcport->vha;
struct scsi_qla_host *vha = arg->vha;
struct qla_hw_data *ha = vha->hw;
unsigned long flags;
fc_port_t *fcport = arg->fcport;
int rc = 0;
LIST_HEAD(tmf_elem);
struct tmf_arg *t;
spin_lock_irqsave(&ha->tgt.sess_lock, flags);
list_add_tail(&tmf_elem, &fcport->tmf_pending);
list_for_each_entry(t, &ha->tmf_active, tmf_elem) {
if (t->fcport == arg->fcport && t->lun == arg->lun) {
/* reject duplicate TMF */
ql_log(ql_log_warn, vha, 0x802c,
"found duplicate TMF. Nexus=%ld:%06x:%llu.\n",
vha->host_no, fcport->d_id.b24, arg->lun);
spin_unlock_irqrestore(&ha->tgt.sess_lock, flags);
return -EINVAL;
}
}
while (fcport->active_tmf >= MAX_ACTIVE_TMF) {
list_add_tail(&arg->tmf_elem, &ha->tmf_pending);
while (ha->active_tmf >= MAX_ACTIVE_TMF) {
spin_unlock_irqrestore(&ha->tgt.sess_lock, flags);
msleep(1);
@ -2221,15 +2278,17 @@ int qla_get_tmf(fc_port_t *fcport)
rc = EIO;
break;
}
if (fcport->active_tmf < MAX_ACTIVE_TMF &&
list_is_first(&tmf_elem, &fcport->tmf_pending))
if (ha->active_tmf < MAX_ACTIVE_TMF &&
list_is_first(&arg->tmf_elem, &ha->tmf_pending))
break;
}
list_del(&tmf_elem);
list_del(&arg->tmf_elem);
if (!rc)
fcport->active_tmf++;
if (!rc) {
ha->active_tmf++;
list_add_tail(&arg->tmf_elem, &ha->tmf_active);
}
spin_unlock_irqrestore(&ha->tgt.sess_lock, flags);
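
TMF accounting moves from the fc_port up to the adapter: one ha->tmf_active list plus an ha->tmf_pending FIFO now bound the whole HBA to MAX_ACTIVE_TMF concurrent task-management commands, and a second TMF aimed at the same nexus is bounced as a duplicate. A stripped-down pthread model of that admission scheme (hypothetical names; the driver's spinlock and msleep(1) poll become a mutex and sched_yield() here):

    #include <pthread.h>
    #include <sched.h>
    #include <stdbool.h>

    #define MAX_ACTIVE 8    /* mirrors MAX_ACTIVE_TMF */

    struct tmf {
        int target, lun;
        struct tmf *next;
    };

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static struct tmf *active;      /* models ha->tmf_active */
    static int active_cnt;

    /* Admit a TMF: reject a duplicate nexus, then wait out the cap. */
    static bool tmf_get(struct tmf *t)
    {
        pthread_mutex_lock(&lock);
        for (struct tmf *a = active; a; a = a->next) {
            if (a->target == t->target && a->lun == t->lun) {
                pthread_mutex_unlock(&lock);
                return false;       /* duplicate TMF: caller fails fast */
            }
        }
        while (active_cnt >= MAX_ACTIVE) {
            pthread_mutex_unlock(&lock);
            sched_yield();          /* driver sleeps 1 ms per poll */
            pthread_mutex_lock(&lock);
        }
        t->next = active;
        active = t;
        active_cnt++;
        pthread_mutex_unlock(&lock);
        return true;
    }

    /* Release a slot once the TMF completes. */
    static void tmf_put(struct tmf *t)
    {
        pthread_mutex_lock(&lock);
        for (struct tmf **p = &active; *p; p = &(*p)->next) {
            if (*p == t) {
                *p = t->next;
                active_cnt--;
                break;
            }
        }
        pthread_mutex_unlock(&lock);
    }

    int main(void)
    {
        struct tmf t = { .target = 1, .lun = 0 };

        if (tmf_get(&t))
            tmf_put(&t);
        return 0;
    }

The real driver additionally queues waiters on tmf_pending and only admits the list head, which keeps wakeups FIFO-fair; the sketch omits that detail.
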
@ -2241,9 +2300,8 @@ qla2x00_async_tm_cmd(fc_port_t *fcport, uint32_t flags, uint64_t lun,
uint32_t tag)
{
struct scsi_qla_host *vha = fcport->vha;
struct qla_qpair *qpair;
struct tmf_arg a;
int i, rval = QLA_SUCCESS;
int rval = QLA_SUCCESS;
if (TMF_NOT_READY(fcport))
return QLA_SUSPENDED;
@ -2251,47 +2309,22 @@ qla2x00_async_tm_cmd(fc_port_t *fcport, uint32_t flags, uint64_t lun,
a.vha = fcport->vha;
a.fcport = fcport;
a.lun = lun;
a.flags = flags;
INIT_LIST_HEAD(&a.tmf_elem);
if (flags & (TCF_LUN_RESET|TCF_ABORT_TASK_SET|TCF_CLEAR_TASK_SET|TCF_CLEAR_ACA)) {
a.modifier = MK_SYNC_ID_LUN;
if (qla_get_tmf(fcport))
return QLA_FUNCTION_FAILED;
} else {
a.modifier = MK_SYNC_ID;
}
if (vha->hw->mqenable) {
for (i = 0; i < vha->hw->num_qpairs; i++) {
qpair = vha->hw->queue_pair_map[i];
if (!qpair)
continue;
if (TMF_NOT_READY(fcport)) {
ql_log(ql_log_warn, vha, 0x8026,
"Unable to send TM due to disruption.\n");
rval = QLA_SUSPENDED;
break;
}
a.qpair = qpair;
a.flags = flags|TCF_NOTMCMD_TO_TARGET;
rval = __qla2x00_async_tm_cmd(&a);
if (rval)
break;
}
}
if (rval)
goto bailout;
if (qla_get_tmf(&a))
return QLA_FUNCTION_FAILED;
a.qpair = vha->hw->base_qpair;
a.flags = flags;
rval = __qla2x00_async_tm_cmd(&a);
bailout:
if (a.modifier == MK_SYNC_ID_LUN)
qla_put_tmf(fcport);
qla_put_tmf(&a);
return rval;
}
@ -4147,41 +4180,55 @@ out:
return ha->flags.lr_detected;
}
void qla_init_iocb_limit(scsi_qla_host_t *vha)
static void __qla_adjust_iocb_limit(struct qla_qpair *qpair)
{
u16 i, num_qps;
u32 limit;
struct qla_hw_data *ha = vha->hw;
u8 num_qps;
u16 limit;
struct qla_hw_data *ha = qpair->vha->hw;
num_qps = ha->num_qpairs + 1;
limit = (ha->orig_fw_iocb_count * QLA_IOCB_PCT_LIMIT) / 100;
ha->base_qpair->fwres.iocbs_total = ha->orig_fw_iocb_count;
ha->base_qpair->fwres.iocbs_limit = limit;
ha->base_qpair->fwres.iocbs_qp_limit = limit / num_qps;
ha->base_qpair->fwres.iocbs_used = 0;
qpair->fwres.iocbs_total = ha->orig_fw_iocb_count;
qpair->fwres.iocbs_limit = limit;
qpair->fwres.iocbs_qp_limit = limit / num_qps;
ha->base_qpair->fwres.exch_total = ha->orig_fw_xcb_count;
ha->base_qpair->fwres.exch_limit = (ha->orig_fw_xcb_count *
QLA_IOCB_PCT_LIMIT) / 100;
qpair->fwres.exch_total = ha->orig_fw_xcb_count;
qpair->fwres.exch_limit = (ha->orig_fw_xcb_count *
QLA_IOCB_PCT_LIMIT) / 100;
}
void qla_init_iocb_limit(scsi_qla_host_t *vha)
{
u8 i;
struct qla_hw_data *ha = vha->hw;
__qla_adjust_iocb_limit(ha->base_qpair);
ha->base_qpair->fwres.iocbs_used = 0;
ha->base_qpair->fwres.exch_used = 0;
for (i = 0; i < ha->max_qpairs; i++) {
if (ha->queue_pair_map[i]) {
ha->queue_pair_map[i]->fwres.iocbs_total =
ha->orig_fw_iocb_count;
ha->queue_pair_map[i]->fwres.iocbs_limit = limit;
ha->queue_pair_map[i]->fwres.iocbs_qp_limit =
limit / num_qps;
__qla_adjust_iocb_limit(ha->queue_pair_map[i]);
ha->queue_pair_map[i]->fwres.iocbs_used = 0;
ha->queue_pair_map[i]->fwres.exch_total = ha->orig_fw_xcb_count;
ha->queue_pair_map[i]->fwres.exch_limit =
(ha->orig_fw_xcb_count * QLA_IOCB_PCT_LIMIT) / 100;
ha->queue_pair_map[i]->fwres.exch_used = 0;
}
}
}
void qla_adjust_iocb_limit(scsi_qla_host_t *vha)
{
u8 i;
struct qla_hw_data *ha = vha->hw;
__qla_adjust_iocb_limit(ha->base_qpair);
for (i = 0; i < ha->max_qpairs; i++) {
if (ha->queue_pair_map[i])
__qla_adjust_iocb_limit(ha->queue_pair_map[i]);
}
}
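
Factoring the math into __qla_adjust_iocb_limit() lets the NVMe path recompute limits when a qpair is added later. The arithmetic itself is simple: reserve a percentage of the firmware IOCB count, then split that evenly across the base queue pair plus every extra one. Purely for illustration, assuming a hypothetical 85% cap and 2048 firmware IOCBs shared by three extra queue pairs (the driver's actual QLA_IOCB_PCT_LIMIT may differ):

    #include <stdio.h>

    #define PCT_LIMIT 85U   /* illustrative; not the driver's constant */

    static void adjust_limits(unsigned int fw_iocbs, unsigned int extra_qps)
    {
        unsigned int num_qps = extra_qps + 1;           /* + base qpair */
        unsigned int limit = fw_iocbs * PCT_LIMIT / 100;

        printf("total=%u limit=%u per-qp=%u\n",
               fw_iocbs, limit, limit / num_qps);
    }

    int main(void)
    {
        adjust_limits(2048, 3);   /* -> total=2048 limit=1740 per-qp=435 */
        return 0;
    }
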
/**
* qla2x00_setup_chip() - Load and start RISC firmware.
* @vha: HA context
@ -4777,15 +4824,16 @@ qla2x00_init_rings(scsi_qla_host_t *vha)
if (ha->flags.edif_enabled)
mid_init_cb->init_cb.frame_payload_size = cpu_to_le16(ELS_MAX_PAYLOAD);
QLA_FW_STARTED(ha);
rval = qla2x00_init_firmware(vha, ha->init_cb_size);
next_check:
if (rval) {
QLA_FW_STOPPED(ha);
ql_log(ql_log_fatal, vha, 0x00d2,
"Init Firmware **** FAILED ****.\n");
} else {
ql_dbg(ql_dbg_init, vha, 0x00d3,
"Init Firmware -- success.\n");
QLA_FW_STARTED(ha);
vha->u_ql2xexchoffld = vha->u_ql2xiniexchg = 0;
}
@ -5506,7 +5554,6 @@ qla2x00_alloc_fcport(scsi_qla_host_t *vha, gfp_t flags)
INIT_WORK(&fcport->reg_work, qla_register_fcport_fn);
INIT_LIST_HEAD(&fcport->gnl_entry);
INIT_LIST_HEAD(&fcport->list);
INIT_LIST_HEAD(&fcport->tmf_pending);
INIT_LIST_HEAD(&fcport->sess_cmd_list);
spin_lock_init(&fcport->sess_cmd_lock);
@ -6090,6 +6137,8 @@ qla2x00_reg_remote_port(scsi_qla_host_t *vha, fc_port_t *fcport)
void
qla2x00_update_fcport(scsi_qla_host_t *vha, fc_port_t *fcport)
{
unsigned long flags;
if (IS_SW_RESV_ADDR(fcport->d_id))
return;
@ -6099,7 +6148,11 @@ qla2x00_update_fcport(scsi_qla_host_t *vha, fc_port_t *fcport)
qla2x00_set_fcport_disc_state(fcport, DSC_UPD_FCPORT);
fcport->login_retry = vha->hw->login_retry_count;
fcport->flags &= ~(FCF_LOGIN_NEEDED | FCF_ASYNC_SENT);
spin_lock_irqsave(&vha->work_lock, flags);
fcport->deleted = 0;
spin_unlock_irqrestore(&vha->work_lock, flags);
if (vha->hw->current_topology == ISP_CFG_NL)
fcport->logout_on_delete = 0;
else


@ -3882,6 +3882,7 @@ qla_marker_iocb(srb_t *sp, struct mrk_entry_24xx *mrk)
{
mrk->entry_type = MARKER_TYPE;
mrk->modifier = sp->u.iocb_cmd.u.tmf.modifier;
mrk->handle = make_handle(sp->qpair->req->id, sp->handle);
if (sp->u.iocb_cmd.u.tmf.modifier != MK_SYNC_ALL) {
mrk->nport_handle = cpu_to_le16(sp->u.iocb_cmd.u.tmf.loop_id);
int_to_scsilun(sp->u.iocb_cmd.u.tmf.lun, (struct scsi_lun *)&mrk->lun);


@ -1121,8 +1121,12 @@ qla2x00_async_event(scsi_qla_host_t *vha, struct rsp_que *rsp, uint16_t *mb)
unsigned long flags;
fc_port_t *fcport = NULL;
if (!vha->hw->flags.fw_started)
if (!vha->hw->flags.fw_started) {
ql_log(ql_log_warn, vha, 0x50ff,
"Dropping AEN - %04x %04x %04x %04x.\n",
mb[0], mb[1], mb[2], mb[3]);
return;
}
/* Setup to process RIO completion. */
handle_cnt = 0;
@ -2539,7 +2543,6 @@ qla24xx_tm_iocb_entry(scsi_qla_host_t *vha, struct req_que *req, void *tsk)
case CS_PORT_BUSY:
case CS_INCOMPLETE:
case CS_PORT_UNAVAILABLE:
case CS_TIMEOUT:
case CS_RESET:
if (atomic_read(&fcport->state) == FCS_ONLINE) {
ql_dbg(ql_dbg_disc, fcport->vha, 0x3021,


@ -2213,6 +2213,9 @@ qla2x00_get_firmware_state(scsi_qla_host_t *vha, uint16_t *states)
ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x1054,
"Entered %s.\n", __func__);
if (!ha->flags.fw_started)
return QLA_FUNCTION_FAILED;
mcp->mb[0] = MBC_GET_FIRMWARE_STATE;
mcp->out_mb = MBX_0;
if (IS_FWI2_CAPABLE(vha->hw))


@ -132,6 +132,7 @@ static int qla_nvme_alloc_queue(struct nvme_fc_local_port *lport,
"Failed to allocate qpair\n");
return -EINVAL;
}
qla_adjust_iocb_limit(vha);
}
*handle = qpair;
@ -667,7 +668,7 @@ static int qla_nvme_post_cmd(struct nvme_fc_local_port *lport,
rval = qla2x00_start_nvme_mq(sp);
if (rval != QLA_SUCCESS) {
ql_log(ql_log_warn, vha, 0x212d,
ql_dbg(ql_dbg_io + ql_dbg_verbose, vha, 0x212d,
"qla2x00_start_nvme_mq failed = %d\n", rval);
sp->priv = NULL;
priv->sp = NULL;


@ -1488,8 +1488,9 @@ qla2xxx_eh_device_reset(struct scsi_cmnd *cmd)
goto eh_reset_failed;
}
err = 3;
if (qla2x00_eh_wait_for_pending_commands(vha, sdev->id,
sdev->lun, WAIT_LUN) != QLA_SUCCESS) {
if (qla2x00_eh_wait_for_pending_commands(vha, fcport->d_id.b24,
cmd->device->lun,
WAIT_LUN) != QLA_SUCCESS) {
ql_log(ql_log_warn, vha, 0x800d,
"wait for pending cmds failed for cmd=%p.\n", cmd);
goto eh_reset_failed;
@ -1555,8 +1556,8 @@ qla2xxx_eh_target_reset(struct scsi_cmnd *cmd)
goto eh_reset_failed;
}
err = 3;
if (qla2x00_eh_wait_for_pending_commands(vha, sdev->id,
0, WAIT_TARGET) != QLA_SUCCESS) {
if (qla2x00_eh_wait_for_pending_commands(vha, fcport->d_id.b24, 0,
WAIT_TARGET) != QLA_SUCCESS) {
ql_log(ql_log_warn, vha, 0x800d,
"wait for pending cmds failed for cmd=%p.\n", cmd);
goto eh_reset_failed;
@ -3009,6 +3010,8 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
atomic_set(&ha->num_pend_mbx_stage3, 0);
atomic_set(&ha->zio_threshold, DEFAULT_ZIO_THRESHOLD);
ha->last_zio_threshold = DEFAULT_ZIO_THRESHOLD;
INIT_LIST_HEAD(&ha->tmf_pending);
INIT_LIST_HEAD(&ha->tmf_active);
/* Assign ISP specific operations. */
if (IS_QLA2100(ha)) {


@ -1068,10 +1068,6 @@ void qlt_free_session_done(struct work_struct *work)
(struct imm_ntfy_from_isp *)sess->iocb, SRB_NACK_LOGO);
}
spin_lock_irqsave(&vha->work_lock, flags);
sess->flags &= ~FCF_ASYNC_SENT;
spin_unlock_irqrestore(&vha->work_lock, flags);
spin_lock_irqsave(&ha->tgt.sess_lock, flags);
if (sess->se_sess) {
sess->se_sess = NULL;
@ -1081,7 +1077,6 @@ void qlt_free_session_done(struct work_struct *work)
qla2x00_set_fcport_disc_state(sess, DSC_DELETED);
sess->fw_login_state = DSC_LS_PORT_UNAVAIL;
sess->deleted = QLA_SESS_DELETED;
if (sess->login_succ && !IS_SW_RESV_ADDR(sess->d_id)) {
vha->fcport_count--;
@ -1133,10 +1128,15 @@ void qlt_free_session_done(struct work_struct *work)
sess->explicit_logout = 0;
spin_unlock_irqrestore(&ha->tgt.sess_lock, flags);
sess->free_pending = 0;
qla2x00_dfs_remove_rport(vha, sess);
spin_lock_irqsave(&vha->work_lock, flags);
sess->flags &= ~FCF_ASYNC_SENT;
sess->deleted = QLA_SESS_DELETED;
sess->free_pending = 0;
spin_unlock_irqrestore(&vha->work_lock, flags);
ql_dbg(ql_dbg_disc, vha, 0xf001,
"Unregistration of sess %p %8phC finished fcp_cnt %d\n",
sess, sess->port_name, vha->fcport_count);
@ -1185,12 +1185,12 @@ void qlt_unreg_sess(struct fc_port *sess)
* management from being sent.
*/
sess->flags |= FCF_ASYNC_SENT;
sess->deleted = QLA_SESS_DELETION_IN_PROGRESS;
spin_unlock_irqrestore(&sess->vha->work_lock, flags);
if (sess->se_sess)
vha->hw->tgt.tgt_ops->clear_nacl_from_fcport_map(sess);
sess->deleted = QLA_SESS_DELETION_IN_PROGRESS;
qla2x00_set_fcport_disc_state(sess, DSC_DELETE_PEND);
sess->last_rscn_gen = sess->rscn_gen;
sess->last_login_gen = sess->login_gen;


@ -6,9 +6,9 @@
/*
* Driver version
*/
#define QLA2XXX_VERSION "10.02.08.400-k"
#define QLA2XXX_VERSION "10.02.08.500-k"
#define QLA_DRIVER_MAJOR_VER 10
#define QLA_DRIVER_MINOR_VER 2
#define QLA_DRIVER_PATCH_VER 8
#define QLA_DRIVER_BETA_VER 400
#define QLA_DRIVER_BETA_VER 500


@ -968,6 +968,11 @@ static int qla4xxx_set_chap_entry(struct Scsi_Host *shost, void *data, int len)
memset(&chap_rec, 0, sizeof(chap_rec));
nla_for_each_attr(attr, data, len, rem) {
if (nla_len(attr) < sizeof(*param_info)) {
rc = -EINVAL;
goto exit_set_chap;
}
param_info = nla_data(attr);
switch (param_info->param) {
@ -2750,6 +2755,11 @@ qla4xxx_iface_set_param(struct Scsi_Host *shost, void *data, uint32_t len)
}
nla_for_each_attr(attr, data, len, rem) {
if (nla_len(attr) < sizeof(*iface_param)) {
rval = -EINVAL;
goto exit_init_fw_cb;
}
iface_param = nla_data(attr);
if (iface_param->param_type == ISCSI_NET_PARAM) {
@ -8104,6 +8114,11 @@ qla4xxx_sysfs_ddb_set_param(struct iscsi_bus_flash_session *fnode_sess,
memset((void *)&chap_tbl, 0, sizeof(chap_tbl));
nla_for_each_attr(attr, data, len, rem) {
if (nla_len(attr) < sizeof(*fnode_param)) {
rc = -EINVAL;
goto exit_set_param;
}
fnode_param = nla_data(attr);
switch (fnode_param->param) {


@ -28,7 +28,7 @@
#include <linux/jiffies.h>
#include <linux/dma-mapping.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/firmware.h>
#include <linux/pgtable.h>
@ -843,7 +843,7 @@ static int qpti_map_queues(struct qlogicpti *qpti)
return 0;
}
const char *qlogicpti_info(struct Scsi_Host *host)
static const char *qlogicpti_info(struct Scsi_Host *host)
{
static char buf[80];
struct qlogicpti *qpti = (struct qlogicpti *) host->hostdata;


@ -103,7 +103,6 @@ bool scsi_noretry_cmd(struct scsi_cmnd *scmd);
void scsi_eh_done(struct scsi_cmnd *scmd);
/* scsi_lib.c */
extern int scsi_maybe_unblock_host(struct scsi_device *sdev);
extern void scsi_device_unbusy(struct scsi_device *sdev, struct scsi_cmnd *cmd);
extern void scsi_queue_insert(struct scsi_cmnd *cmd, int reason);
extern void scsi_io_completion(struct scsi_cmnd *, unsigned int);
@ -155,7 +154,6 @@ extern int scsi_sysfs_add_host(struct Scsi_Host *);
extern int scsi_sysfs_register(void);
extern void scsi_sysfs_unregister(void);
extern void scsi_sysfs_device_initialize(struct scsi_device *);
extern int scsi_sysfs_target_initialize(struct scsi_device *);
extern struct scsi_transport_template blank_transport_template;
extern void __scsi_remove_device(struct scsi_device *);


@ -3014,14 +3014,15 @@ iscsi_if_destroy_conn(struct iscsi_transport *transport, struct iscsi_uevent *ev
}
static int
iscsi_if_set_param(struct iscsi_transport *transport, struct iscsi_uevent *ev)
iscsi_if_set_param(struct iscsi_transport *transport, struct iscsi_uevent *ev, u32 rlen)
{
char *data = (char*)ev + sizeof(*ev);
struct iscsi_cls_conn *conn;
struct iscsi_cls_session *session;
int err = 0, value = 0, state;
if (ev->u.set_param.len > PAGE_SIZE)
if (ev->u.set_param.len > rlen ||
ev->u.set_param.len > PAGE_SIZE)
return -EINVAL;
session = iscsi_session_lookup(ev->u.set_param.sid);
@ -3029,6 +3030,10 @@ iscsi_if_set_param(struct iscsi_transport *transport, struct iscsi_uevent *ev)
if (!conn || !session)
return -EINVAL;
/* data is treated as a NUL-terminated string, so check its length */
if (strlen(data) > ev->u.set_param.len)
return -EINVAL;
switch (ev->u.set_param.param) {
case ISCSI_PARAM_SESS_RECOVERY_TMO:
sscanf(data, "%d", &value);
@ -3118,7 +3123,7 @@ put_ep:
static int
iscsi_if_transport_ep(struct iscsi_transport *transport,
struct iscsi_uevent *ev, int msg_type)
struct iscsi_uevent *ev, int msg_type, u32 rlen)
{
struct iscsi_endpoint *ep;
int rc = 0;
@ -3126,7 +3131,10 @@ iscsi_if_transport_ep(struct iscsi_transport *transport,
switch (msg_type) {
case ISCSI_UEVENT_TRANSPORT_EP_CONNECT_THROUGH_HOST:
case ISCSI_UEVENT_TRANSPORT_EP_CONNECT:
rc = iscsi_if_ep_connect(transport, ev, msg_type);
if (rlen < sizeof(struct sockaddr))
rc = -EINVAL;
else
rc = iscsi_if_ep_connect(transport, ev, msg_type);
break;
case ISCSI_UEVENT_TRANSPORT_EP_POLL:
if (!transport->ep_poll)
@ -3150,12 +3158,15 @@ iscsi_if_transport_ep(struct iscsi_transport *transport,
static int
iscsi_tgt_dscvr(struct iscsi_transport *transport,
struct iscsi_uevent *ev)
struct iscsi_uevent *ev, u32 rlen)
{
struct Scsi_Host *shost;
struct sockaddr *dst_addr;
int err;
if (rlen < sizeof(*dst_addr))
return -EINVAL;
if (!transport->tgt_dscvr)
return -EINVAL;
@ -3176,7 +3187,7 @@ iscsi_tgt_dscvr(struct iscsi_transport *transport,
static int
iscsi_set_host_param(struct iscsi_transport *transport,
struct iscsi_uevent *ev)
struct iscsi_uevent *ev, u32 rlen)
{
char *data = (char*)ev + sizeof(*ev);
struct Scsi_Host *shost;
@ -3185,7 +3196,8 @@ iscsi_set_host_param(struct iscsi_transport *transport,
if (!transport->set_host_param)
return -ENOSYS;
if (ev->u.set_host_param.len > PAGE_SIZE)
if (ev->u.set_host_param.len > rlen ||
ev->u.set_host_param.len > PAGE_SIZE)
return -EINVAL;
shost = scsi_host_lookup(ev->u.set_host_param.host_no);
@ -3195,6 +3207,10 @@ iscsi_set_host_param(struct iscsi_transport *transport,
return -ENODEV;
}
/* see similar check in iscsi_if_set_param() */
if (strlen(data) > ev->u.set_host_param.len)
return -EINVAL;
err = transport->set_host_param(shost, ev->u.set_host_param.param,
data, ev->u.set_host_param.len);
scsi_host_put(shost);
@ -3202,12 +3218,15 @@ iscsi_set_host_param(struct iscsi_transport *transport,
}
static int
iscsi_set_path(struct iscsi_transport *transport, struct iscsi_uevent *ev)
iscsi_set_path(struct iscsi_transport *transport, struct iscsi_uevent *ev, u32 rlen)
{
struct Scsi_Host *shost;
struct iscsi_path *params;
int err;
if (rlen < sizeof(*params))
return -EINVAL;
if (!transport->set_path)
return -ENOSYS;
@ -3267,12 +3286,15 @@ iscsi_set_iface_params(struct iscsi_transport *transport,
}
static int
iscsi_send_ping(struct iscsi_transport *transport, struct iscsi_uevent *ev)
iscsi_send_ping(struct iscsi_transport *transport, struct iscsi_uevent *ev, u32 rlen)
{
struct Scsi_Host *shost;
struct sockaddr *dst_addr;
int err;
if (rlen < sizeof(*dst_addr))
return -EINVAL;
if (!transport->send_ping)
return -ENOSYS;
@ -3770,13 +3792,12 @@ exit_host_stats:
}
static int iscsi_if_transport_conn(struct iscsi_transport *transport,
struct nlmsghdr *nlh)
struct nlmsghdr *nlh, u32 pdu_len)
{
struct iscsi_uevent *ev = nlmsg_data(nlh);
struct iscsi_cls_session *session;
struct iscsi_cls_conn *conn = NULL;
struct iscsi_endpoint *ep;
uint32_t pdu_len;
int err = 0;
switch (nlh->nlmsg_type) {
@ -3861,8 +3882,6 @@ static int iscsi_if_transport_conn(struct iscsi_transport *transport,
break;
case ISCSI_UEVENT_SEND_PDU:
pdu_len = nlh->nlmsg_len - sizeof(*nlh) - sizeof(*ev);
if ((ev->u.send_pdu.hdr_size > pdu_len) ||
(ev->u.send_pdu.data_size > (pdu_len - ev->u.send_pdu.hdr_size))) {
err = -EINVAL;
@ -3892,6 +3911,7 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
struct iscsi_internal *priv;
struct iscsi_cls_session *session;
struct iscsi_endpoint *ep = NULL;
u32 rlen;
if (!netlink_capable(skb, CAP_SYS_ADMIN))
return -EPERM;
@ -3911,6 +3931,13 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
portid = NETLINK_CB(skb).portid;
/*
 * Even though the remaining payload may not be an nlattr (it may be
 * an address or something else), calculate its length once here to
 * simplify the length checks that follow.
 */
rlen = nlmsg_attrlen(nlh, sizeof(*ev));
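
Every netlink handler now receives rlen, the payload that actually remains after the iscsi_uevent header, and validates its own expectations against it before dereferencing anything: fixed-size structures need rlen >= sizeof(...), and user-declared lengths such as set_param.len must fit inside rlen. A self-contained sketch of that arithmetic with simplified stand-in structures (not the real netlink types):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Simplified stand-ins for struct nlmsghdr / struct iscsi_uevent. */
    struct msg_hdr { uint32_t msg_len; };       /* header + payload */
    struct uevent  { uint32_t param_len; };

    /* Payload remaining after both headers; models nlmsg_attrlen(). */
    static uint32_t remaining_len(const struct msg_hdr *h)
    {
        return h->msg_len - sizeof(*h) - sizeof(struct uevent);
    }

    /* set_param-style check: the declared length must fit the payload,
     * and the string must be NUL-terminated within that length. */
    static int check_param(const struct msg_hdr *h, const struct uevent *ev,
                           const char *data)
    {
        uint32_t rlen = remaining_len(h);

        if (ev->param_len > rlen)
            return -1;      /* caller declared more than it sent */
        if (strnlen(data, ev->param_len + 1) > ev->param_len)
            return -1;      /* no terminator inside the declared length */
        return 0;
    }

    int main(void)
    {
        struct msg_hdr h = { sizeof(h) + sizeof(struct uevent) + 8 };
        struct uevent ev = { 8 };

        printf("%d\n", check_param(&h, &ev, "HdrDig"));  /* 0: valid */
        return 0;
    }
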
switch (nlh->nlmsg_type) {
case ISCSI_UEVENT_CREATE_SESSION:
err = iscsi_if_create_session(priv, ep, ev,
@ -3967,7 +3994,7 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
err = -EINVAL;
break;
case ISCSI_UEVENT_SET_PARAM:
err = iscsi_if_set_param(transport, ev);
err = iscsi_if_set_param(transport, ev, rlen);
break;
case ISCSI_UEVENT_CREATE_CONN:
case ISCSI_UEVENT_DESTROY_CONN:
@ -3975,7 +4002,7 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
case ISCSI_UEVENT_START_CONN:
case ISCSI_UEVENT_BIND_CONN:
case ISCSI_UEVENT_SEND_PDU:
err = iscsi_if_transport_conn(transport, nlh);
err = iscsi_if_transport_conn(transport, nlh, rlen);
break;
case ISCSI_UEVENT_GET_STATS:
err = iscsi_if_get_stats(transport, nlh);
@ -3984,23 +4011,22 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
case ISCSI_UEVENT_TRANSPORT_EP_POLL:
case ISCSI_UEVENT_TRANSPORT_EP_DISCONNECT:
case ISCSI_UEVENT_TRANSPORT_EP_CONNECT_THROUGH_HOST:
err = iscsi_if_transport_ep(transport, ev, nlh->nlmsg_type);
err = iscsi_if_transport_ep(transport, ev, nlh->nlmsg_type, rlen);
break;
case ISCSI_UEVENT_TGT_DSCVR:
err = iscsi_tgt_dscvr(transport, ev);
err = iscsi_tgt_dscvr(transport, ev, rlen);
break;
case ISCSI_UEVENT_SET_HOST_PARAM:
err = iscsi_set_host_param(transport, ev);
err = iscsi_set_host_param(transport, ev, rlen);
break;
case ISCSI_UEVENT_PATH_UPDATE:
err = iscsi_set_path(transport, ev);
err = iscsi_set_path(transport, ev, rlen);
break;
case ISCSI_UEVENT_SET_IFACE_PARAMS:
err = iscsi_set_iface_params(transport, ev,
nlmsg_attrlen(nlh, sizeof(*ev)));
err = iscsi_set_iface_params(transport, ev, rlen);
break;
case ISCSI_UEVENT_PING:
err = iscsi_send_ping(transport, ev);
err = iscsi_send_ping(transport, ev, rlen);
break;
case ISCSI_UEVENT_GET_CHAP:
err = iscsi_get_chap(transport, nlh);
@ -4009,13 +4035,10 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
err = iscsi_delete_chap(transport, ev);
break;
case ISCSI_UEVENT_SET_FLASHNODE_PARAMS:
err = iscsi_set_flashnode_param(transport, ev,
nlmsg_attrlen(nlh,
sizeof(*ev)));
err = iscsi_set_flashnode_param(transport, ev, rlen);
break;
case ISCSI_UEVENT_NEW_FLASHNODE:
err = iscsi_new_flashnode(transport, ev,
nlmsg_attrlen(nlh, sizeof(*ev)));
err = iscsi_new_flashnode(transport, ev, rlen);
break;
case ISCSI_UEVENT_DEL_FLASHNODE:
err = iscsi_del_flashnode(transport, ev);
@ -4030,8 +4053,7 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, uint32_t *group)
err = iscsi_logout_flashnode_sid(transport, ev);
break;
case ISCSI_UEVENT_SET_CHAP:
err = iscsi_set_chap(transport, ev,
nlmsg_attrlen(nlh, sizeof(*ev)));
err = iscsi_set_chap(transport, ev, rlen);
break;
case ISCSI_UEVENT_GET_HOST_STATS:
err = iscsi_get_host_stats(transport, nlh);


@ -316,6 +316,9 @@ enum storvsc_request_type {
#define SRB_STATUS_ABORTED 0x02
#define SRB_STATUS_ERROR 0x04
#define SRB_STATUS_INVALID_REQUEST 0x06
#define SRB_STATUS_TIMEOUT 0x09
#define SRB_STATUS_SELECTION_TIMEOUT 0x0A
#define SRB_STATUS_BUS_RESET 0x0E
#define SRB_STATUS_DATA_OVERRUN 0x12
#define SRB_STATUS_INVALID_LUN 0x20
#define SRB_STATUS_INTERNAL_ERROR 0x30
@ -981,6 +984,10 @@ static void storvsc_handle_error(struct vmscsi_request *vm_srb,
case SRB_STATUS_ABORTED:
case SRB_STATUS_INVALID_REQUEST:
case SRB_STATUS_INTERNAL_ERROR:
case SRB_STATUS_TIMEOUT:
case SRB_STATUS_SELECTION_TIMEOUT:
case SRB_STATUS_BUS_RESET:
case SRB_STATUS_DATA_OVERRUN:
if (vm_srb->srb_status & SRB_STATUS_AUTOSENSE_VALID) {
/* Check for capacity change */
if ((asc == 0x2a) && (ascq == 0x9)) {


@ -12,7 +12,8 @@
#include <linux/init.h>
#include <linux/dma-mapping.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/gfp.h>
#include <asm/irq.h>


@ -45,9 +45,9 @@ static ssize_t lio_target_np_driver_show(struct config_item *item, char *page,
tpg_np_new = iscsit_tpg_locate_child_np(tpg_np, type);
if (tpg_np_new)
rb = sprintf(page, "1\n");
rb = sysfs_emit(page, "1\n");
else
rb = sprintf(page, "0\n");
rb = sysfs_emit(page, "0\n");
return rb;
}
@ -282,7 +282,7 @@ static ssize_t iscsi_nacl_attrib_##name##_show(struct config_item *item,\
{ \
struct se_node_acl *se_nacl = attrib_to_nacl(item); \
struct iscsi_node_acl *nacl = to_iscsi_nacl(se_nacl); \
return sprintf(page, "%u\n", nacl->node_attrib.name); \
return sysfs_emit(page, "%u\n", nacl->node_attrib.name); \
} \
\
static ssize_t iscsi_nacl_attrib_##name##_store(struct config_item *item,\
@ -320,7 +320,7 @@ static ssize_t iscsi_nacl_attrib_authentication_show(struct config_item *item,
struct se_node_acl *se_nacl = attrib_to_nacl(item);
struct iscsi_node_acl *nacl = to_iscsi_nacl(se_nacl);
return sprintf(page, "%d\n", nacl->node_attrib.authentication);
return sysfs_emit(page, "%d\n", nacl->node_attrib.authentication);
}
static ssize_t iscsi_nacl_attrib_authentication_store(struct config_item *item,
@ -533,102 +533,102 @@ static ssize_t lio_target_nacl_info_show(struct config_item *item, char *page)
spin_lock_bh(&se_nacl->nacl_sess_lock);
se_sess = se_nacl->nacl_sess;
if (!se_sess) {
rb += sprintf(page+rb, "No active iSCSI Session for Initiator"
rb += sysfs_emit_at(page, rb, "No active iSCSI Session for Initiator"
" Endpoint: %s\n", se_nacl->initiatorname);
} else {
sess = se_sess->fabric_sess_ptr;
rb += sprintf(page+rb, "InitiatorName: %s\n",
rb += sysfs_emit_at(page, rb, "InitiatorName: %s\n",
sess->sess_ops->InitiatorName);
rb += sprintf(page+rb, "InitiatorAlias: %s\n",
rb += sysfs_emit_at(page, rb, "InitiatorAlias: %s\n",
sess->sess_ops->InitiatorAlias);
rb += sprintf(page+rb,
rb += sysfs_emit_at(page, rb,
"LIO Session ID: %u ISID: 0x%6ph TSIH: %hu ",
sess->sid, sess->isid, sess->tsih);
rb += sprintf(page+rb, "SessionType: %s\n",
rb += sysfs_emit_at(page, rb, "SessionType: %s\n",
(sess->sess_ops->SessionType) ?
"Discovery" : "Normal");
rb += sprintf(page+rb, "Session State: ");
rb += sysfs_emit_at(page, rb, "Session State: ");
switch (sess->session_state) {
case TARG_SESS_STATE_FREE:
rb += sprintf(page+rb, "TARG_SESS_FREE\n");
rb += sysfs_emit_at(page, rb, "TARG_SESS_FREE\n");
break;
case TARG_SESS_STATE_ACTIVE:
rb += sprintf(page+rb, "TARG_SESS_STATE_ACTIVE\n");
rb += sysfs_emit_at(page, rb, "TARG_SESS_STATE_ACTIVE\n");
break;
case TARG_SESS_STATE_LOGGED_IN:
rb += sprintf(page+rb, "TARG_SESS_STATE_LOGGED_IN\n");
rb += sysfs_emit_at(page, rb, "TARG_SESS_STATE_LOGGED_IN\n");
break;
case TARG_SESS_STATE_FAILED:
rb += sprintf(page+rb, "TARG_SESS_STATE_FAILED\n");
rb += sysfs_emit_at(page, rb, "TARG_SESS_STATE_FAILED\n");
break;
case TARG_SESS_STATE_IN_CONTINUE:
rb += sprintf(page+rb, "TARG_SESS_STATE_IN_CONTINUE\n");
rb += sysfs_emit_at(page, rb, "TARG_SESS_STATE_IN_CONTINUE\n");
break;
default:
rb += sprintf(page+rb, "ERROR: Unknown Session"
rb += sysfs_emit_at(page, rb, "ERROR: Unknown Session"
" State!\n");
break;
}
rb += sprintf(page+rb, "---------------------[iSCSI Session"
rb += sysfs_emit_at(page, rb, "---------------------[iSCSI Session"
" Values]-----------------------\n");
rb += sprintf(page+rb, " CmdSN/WR : CmdSN/WC : ExpCmdSN"
rb += sysfs_emit_at(page, rb, " CmdSN/WR : CmdSN/WC : ExpCmdSN"
" : MaxCmdSN : ITT : TTT\n");
max_cmd_sn = (u32) atomic_read(&sess->max_cmd_sn);
rb += sprintf(page+rb, " 0x%08x 0x%08x 0x%08x 0x%08x"
rb += sysfs_emit_at(page, rb, " 0x%08x 0x%08x 0x%08x 0x%08x"
" 0x%08x 0x%08x\n",
sess->cmdsn_window,
(max_cmd_sn - sess->exp_cmd_sn) + 1,
sess->exp_cmd_sn, max_cmd_sn,
sess->init_task_tag, sess->targ_xfer_tag);
rb += sprintf(page+rb, "----------------------[iSCSI"
rb += sysfs_emit_at(page, rb, "----------------------[iSCSI"
" Connections]-------------------------\n");
spin_lock(&sess->conn_lock);
list_for_each_entry(conn, &sess->sess_conn_list, conn_list) {
rb += sprintf(page+rb, "CID: %hu Connection"
rb += sysfs_emit_at(page, rb, "CID: %hu Connection"
" State: ", conn->cid);
switch (conn->conn_state) {
case TARG_CONN_STATE_FREE:
rb += sprintf(page+rb,
rb += sysfs_emit_at(page, rb,
"TARG_CONN_STATE_FREE\n");
break;
case TARG_CONN_STATE_XPT_UP:
rb += sprintf(page+rb,
rb += sysfs_emit_at(page, rb,
"TARG_CONN_STATE_XPT_UP\n");
break;
case TARG_CONN_STATE_IN_LOGIN:
rb += sprintf(page+rb,
rb += sysfs_emit_at(page, rb,
"TARG_CONN_STATE_IN_LOGIN\n");
break;
case TARG_CONN_STATE_LOGGED_IN:
rb += sprintf(page+rb,
rb += sysfs_emit_at(page, rb,
"TARG_CONN_STATE_LOGGED_IN\n");
break;
case TARG_CONN_STATE_IN_LOGOUT:
rb += sprintf(page+rb,
rb += sysfs_emit_at(page, rb,
"TARG_CONN_STATE_IN_LOGOUT\n");
break;
case TARG_CONN_STATE_LOGOUT_REQUESTED:
rb += sprintf(page+rb,
rb += sysfs_emit_at(page, rb,
"TARG_CONN_STATE_LOGOUT_REQUESTED\n");
break;
case TARG_CONN_STATE_CLEANUP_WAIT:
rb += sprintf(page+rb,
rb += sysfs_emit_at(page, rb,
"TARG_CONN_STATE_CLEANUP_WAIT\n");
break;
default:
rb += sprintf(page+rb,
rb += sysfs_emit_at(page, rb,
"ERROR: Unknown Connection State!\n");
break;
}
rb += sprintf(page+rb, " Address %pISc %s", &conn->login_sockaddr,
rb += sysfs_emit_at(page, rb, " Address %pISc %s", &conn->login_sockaddr,
(conn->network_transport == ISCSI_TCP) ?
"TCP" : "SCTP");
rb += sprintf(page+rb, " StatSN: 0x%08x\n",
rb += sysfs_emit_at(page, rb, " StatSN: 0x%08x\n",
conn->stat_sn);
}
spin_unlock(&sess->conn_lock);
@ -641,7 +641,7 @@ static ssize_t lio_target_nacl_info_show(struct config_item *item, char *page)
static ssize_t lio_target_nacl_cmdsn_depth_show(struct config_item *item,
char *page)
{
return sprintf(page, "%u\n", acl_to_nacl(item)->queue_depth);
return sysfs_emit(page, "%u\n", acl_to_nacl(item)->queue_depth);
}
static ssize_t lio_target_nacl_cmdsn_depth_store(struct config_item *item,
@ -750,7 +750,7 @@ static ssize_t iscsi_tpg_attrib_##name##_show(struct config_item *item, \
if (iscsit_get_tpg(tpg) < 0) \
return -EINVAL; \
\
rb = sprintf(page, "%u\n", tpg->tpg_attrib.name); \
rb = sysfs_emit(page, "%u\n", tpg->tpg_attrib.name); \
iscsit_put_tpg(tpg); \
return rb; \
} \
@ -783,7 +783,6 @@ CONFIGFS_ATTR(iscsi_tpg_attrib_, name)
DEF_TPG_ATTRIB(authentication);
DEF_TPG_ATTRIB(login_timeout);
DEF_TPG_ATTRIB(netif_timeout);
DEF_TPG_ATTRIB(generate_node_acls);
DEF_TPG_ATTRIB(default_cmdsn_depth);
DEF_TPG_ATTRIB(cache_dynamic_acls);
@ -799,7 +798,6 @@ DEF_TPG_ATTRIB(login_keys_workaround);
static struct configfs_attribute *lio_target_tpg_attrib_attrs[] = {
&iscsi_tpg_attrib_attr_authentication,
&iscsi_tpg_attrib_attr_login_timeout,
&iscsi_tpg_attrib_attr_netif_timeout,
&iscsi_tpg_attrib_attr_generate_node_acls,
&iscsi_tpg_attrib_attr_default_cmdsn_depth,
&iscsi_tpg_attrib_attr_cache_dynamic_acls,
@ -1138,7 +1136,7 @@ static void lio_target_tiqn_deltpg(struct se_portal_group *se_tpg)
static ssize_t lio_target_wwn_lio_version_show(struct config_item *item,
char *page)
{
return sprintf(page, "Datera Inc. iSCSI Target "ISCSIT_VERSION"\n");
return sysfs_emit(page, "Datera Inc. iSCSI Target %s\n", ISCSIT_VERSION);
}
CONFIGFS_ATTR_RO(lio_target_wwn_, lio_version);
@ -1146,7 +1144,7 @@ CONFIGFS_ATTR_RO(lio_target_wwn_, lio_version);
static ssize_t lio_target_wwn_cpus_allowed_list_show(
struct config_item *item, char *page)
{
return sprintf(page, "%*pbl\n",
return sysfs_emit(page, "%*pbl\n",
cpumask_pr_args(iscsit_global->allowed_cpumask));
}
@ -1283,7 +1281,7 @@ static ssize_t iscsi_disc_enforce_discovery_auth_show(struct config_item *item,
{
struct iscsi_node_auth *discovery_auth = &iscsit_global->discovery_acl.node_auth;
return sprintf(page, "%d\n", discovery_auth->enforce_discovery_auth);
return sysfs_emit(page, "%d\n", discovery_auth->enforce_discovery_auth);
}
static ssize_t iscsi_disc_enforce_discovery_auth_store(struct config_item *item,
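
The sprintf()-to-sysfs_emit()/sysfs_emit_at() conversion is mechanical, but the point is bounds safety: sysfs and configfs show routines write into exactly one page, and the emit helpers clamp every append to PAGE_SIZE instead of trusting the format string. A userspace model of the bounded-append contract (this is the idea, not the kernel implementation):

    #include <stdarg.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096

    /* Models sysfs_emit_at(): append at offset 'at', never past a page. */
    static int emit_at(char *page, int at, const char *fmt, ...)
    {
        va_list ap;
        int len;

        if (at < 0 || at >= PAGE_SIZE)
            return 0;
        va_start(ap, fmt);
        len = vsnprintf(page + at, PAGE_SIZE - at, fmt, ap);
        va_end(ap);
        return (len >= 0 && len < PAGE_SIZE - at) ? len : 0;
    }

    int main(void)
    {
        char page[PAGE_SIZE];
        int rb = 0;

        rb += emit_at(page, rb, "InitiatorName: %s\n", "iqn.1994-05.demo");
        rb += emit_at(page, rb, "Session State: %s\n", "LOGGED_IN");
        fputs(page, stdout);
        return 0;
    }
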


@ -211,7 +211,6 @@ static void iscsit_set_default_tpg_attribs(struct iscsi_portal_group *tpg)
a->authentication = TA_AUTHENTICATION;
a->login_timeout = TA_LOGIN_TIMEOUT;
a->netif_timeout = TA_NETIF_TIMEOUT;
a->default_cmdsn_depth = TA_DEFAULT_CMDSN_DEPTH;
a->generate_node_acls = TA_GENERATE_NODE_ACLS;
a->cache_dynamic_acls = TA_CACHE_DYNAMIC_ACLS;
@ -666,31 +665,6 @@ int iscsit_ta_login_timeout(
return 0;
}
int iscsit_ta_netif_timeout(
struct iscsi_portal_group *tpg,
u32 netif_timeout)
{
struct iscsi_tpg_attrib *a = &tpg->tpg_attrib;
if (netif_timeout > TA_NETIF_TIMEOUT_MAX) {
pr_err("Requested Network Interface Timeout %u larger"
" than maximum %u\n", netif_timeout,
TA_NETIF_TIMEOUT_MAX);
return -EINVAL;
} else if (netif_timeout < TA_NETIF_TIMEOUT_MIN) {
pr_err("Requested Network Interface Timeout %u smaller"
" than minimum %u\n", netif_timeout,
TA_NETIF_TIMEOUT_MIN);
return -EINVAL;
}
a->netif_timeout = netif_timeout;
pr_debug("Set Network Interface Timeout to %u for"
" Target Portal Group %hu\n", a->netif_timeout, tpg->tpgt);
return 0;
}
int iscsit_ta_generate_node_acls(
struct iscsi_portal_group *tpg,
u32 flag)


@ -38,7 +38,6 @@ extern int iscsit_tpg_del_network_portal(struct iscsi_portal_group *,
struct iscsi_tpg_np *);
extern int iscsit_ta_authentication(struct iscsi_portal_group *, u32);
extern int iscsit_ta_login_timeout(struct iscsi_portal_group *, u32);
extern int iscsit_ta_netif_timeout(struct iscsi_portal_group *, u32);
extern int iscsit_ta_generate_node_acls(struct iscsi_portal_group *, u32);
extern int iscsit_ta_default_cmdsn_depth(struct iscsi_portal_group *, u32);
extern int iscsit_ta_cache_dynamic_acls(struct iscsi_portal_group *, u32);


@ -739,11 +739,16 @@ iblock_execute_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
if (data_direction == DMA_TO_DEVICE) {
struct iblock_dev *ib_dev = IBLOCK_DEV(dev);
/*
* Set bits to indicate WRITE_ODIRECT so we are not throttled
* by WBT.
*/
opf = REQ_OP_WRITE | REQ_SYNC | REQ_IDLE;
/*
* Force writethrough using REQ_FUA if a volatile write cache
* is not enabled, or if initiator set the Force Unit Access bit.
*/
opf = REQ_OP_WRITE;
miter_dir = SG_MITER_TO_SG;
if (bdev_fua(ib_dev->ibd_bd)) {
if (cmd->se_cmd_flags & SCF_FUA)
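
The write path drops the REQ_SYNC | REQ_IDLE tagging (the hint that previously interacted with writeback throttling) and issues a plain write, forcing durability only through REQ_FUA. Per the comment above, FUA is applied when the device supports it and either the initiator set the Force Unit Access bit or no volatile write cache is enabled; the tail of the hunk is truncated here, so the following sketch reconstructs that condition from the comment (illustrative flag values, not the kernel's):

    #include <stdbool.h>
    #include <stdio.h>

    #define REQ_OP_WRITE 0x01u      /* illustrative bit values */
    #define REQ_FUA      0x02u

    /* Force writethrough only when the device honors FUA and either the
     * initiator asked for it or write-back caching is off. */
    static unsigned int write_opf(bool dev_fua, bool scf_fua, bool wc_enabled)
    {
        unsigned int opf = REQ_OP_WRITE;

        if (dev_fua && (scf_fua || !wc_enabled))
            opf |= REQ_FUA;
        return opf;
    }

    int main(void)
    {
        printf("fua write: %#x\n", write_opf(true, true, true));    /* 0x3 */
        printf("plain write: %#x\n", write_opf(true, false, true)); /* 0x1 */
        return 0;
    }
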


@ -35,14 +35,6 @@ config SCSI_UFS_CRYPTO
capabilities of the UFS device (if present) to perform crypto
operations on data being transferred to/from the device.
config SCSI_UFS_HPB
bool "Support UFS Host Performance Booster"
help
The UFS HPB feature improves random read performance. It caches
L2P (logical to physical) map of UFS to host DRAM. The driver uses HPB
read command by piggybacking physical page number for bypassing FTL (flash
translation layer)'s L2P address translation.
config SCSI_UFS_FAULT_INJECTION
bool "UFS Fault Injection Support"
depends on FAULT_INJECTION


@ -5,6 +5,5 @@ ufshcd-core-y += ufshcd.o ufs-sysfs.o ufs-mcq.o
ufshcd-core-$(CONFIG_DEBUG_FS) += ufs-debugfs.o
ufshcd-core-$(CONFIG_SCSI_UFS_BSG) += ufs_bsg.o
ufshcd-core-$(CONFIG_SCSI_UFS_CRYPTO) += ufshcd-crypto.o
ufshcd-core-$(CONFIG_SCSI_UFS_HPB) += ufshpb.o
ufshcd-core-$(CONFIG_SCSI_UFS_FAULT_INJECTION) += ufs-fault-injection.o
ufshcd-core-$(CONFIG_SCSI_UFS_HWMON) += ufs-hwmon.o


@ -127,7 +127,8 @@ static int ufs_hwmon_write(struct device *dev, enum hwmon_sensor_types type, u32
return err;
}
static umode_t ufs_hwmon_is_visible(const void *_data, enum hwmon_sensor_types type, u32 attr,
static umode_t ufs_hwmon_is_visible(const void *data,
enum hwmon_sensor_types type, u32 attr,
int channel)
{
if (type != hwmon_temp)


@ -97,6 +97,7 @@ void ufshcd_mcq_config_mac(struct ufs_hba *hba, u32 max_active_cmds)
val |= FIELD_PREP(MCQ_CFG_MAC_MASK, max_active_cmds);
ufshcd_writel(hba, val, REG_UFS_MCQ_CFG);
}
EXPORT_SYMBOL_GPL(ufshcd_mcq_config_mac);
/**
* ufshcd_mcq_req_to_hwq - find the hardware queue on which the
@ -104,7 +105,7 @@ void ufshcd_mcq_config_mac(struct ufs_hba *hba, u32 max_active_cmds)
* @hba: per adapter instance
* @req: pointer to the request to be issued
*
* Returns the hardware queue instance on which the request would
* Return: the hardware queue instance on which the request would
* be queued.
*/
struct ufs_hw_queue *ufshcd_mcq_req_to_hwq(struct ufs_hba *hba,
@ -120,7 +121,7 @@ struct ufs_hw_queue *ufshcd_mcq_req_to_hwq(struct ufs_hba *hba,
* ufshcd_mcq_decide_queue_depth - decide the queue depth
* @hba: per adapter instance
*
* Returns queue-depth on success, non-zero on error
* Return: queue-depth on success, non-zero on error
*
* MAC - Max. Active Command of the Host Controller (HC)
* HC wouldn't send more than this commands to the device.
@ -245,6 +246,7 @@ u32 ufshcd_mcq_read_cqis(struct ufs_hba *hba, int i)
{
return readl(mcq_opr_base(hba, OPR_CQIS, i) + REG_CQIS);
}
EXPORT_SYMBOL_GPL(ufshcd_mcq_read_cqis);
void ufshcd_mcq_write_cqis(struct ufs_hba *hba, u32 val, int i)
{
@ -388,6 +390,7 @@ void ufshcd_mcq_make_queues_operational(struct ufs_hba *hba)
MCQ_CFG_n(REG_SQATTR, i));
}
}
EXPORT_SYMBOL_GPL(ufshcd_mcq_make_queues_operational);
void ufshcd_mcq_enable_esi(struct ufs_hba *hba)
{
@ -487,10 +490,10 @@ static int ufshcd_mcq_sq_start(struct ufs_hba *hba, struct ufs_hw_queue *hwq)
/**
* ufshcd_mcq_sq_cleanup - Clean up submission queue resources
* associated with the pending command.
* @hba - per adapter instance.
* @task_tag - The command's task tag.
* @hba: per adapter instance.
* @task_tag: The command's task tag.
*
* Returns 0 for success; error code otherwise.
* Return: 0 for success; error code otherwise.
*/
int ufshcd_mcq_sq_cleanup(struct ufs_hba *hba, int task_tag)
{
@ -551,16 +554,11 @@ unlock:
* Write the sqe's Command Type to 0xF. The host controller will not
* fetch any sqe with Command Type = 0xF.
*
* @utrd - UTP Transfer Request Descriptor to be nullified.
* @utrd: UTP Transfer Request Descriptor to be nullified.
*/
static void ufshcd_mcq_nullify_sqe(struct utp_transfer_req_desc *utrd)
{
u32 dword_0;
dword_0 = le32_to_cpu(utrd->header.dword_0);
dword_0 &= ~UPIU_COMMAND_TYPE_MASK;
dword_0 |= FIELD_PREP(UPIU_COMMAND_TYPE_MASK, 0xF);
utrd->header.dword_0 = cpu_to_le32(dword_0);
utrd->header.command_type = 0xf;
}
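
Nullifying the SQE shrinks from three mask-and-shift steps on a raw little-endian dword to one assignment because the request header is now described as a struct with named fields. A standalone sketch showing the two forms agree on a little-endian build (the 4-bit field position is illustrative, not the exact UFSHCI layout):

    #include <assert.h>
    #include <stdint.h>

    #define CMD_TYPE_SHIFT 28
    #define CMD_TYPE_MASK  (0xfu << CMD_TYPE_SHIFT)    /* illustrative */

    /* Old style: open-coded mask-and-shift on the raw dword. */
    static uint32_t set_type_masked(uint32_t dword0, uint32_t type)
    {
        return (dword0 & ~CMD_TYPE_MASK) | (type << CMD_TYPE_SHIFT);
    }

    /* New style: the same field, named. Bitfield layout is
     * implementation-defined; the kernel header pins it per-endianness. */
    struct req_header {
        uint32_t reserved     : 28;
        uint32_t command_type : 4;
    };

    int main(void)
    {
        union { struct req_header h; uint32_t d; } u = { .d = 0x12345678u };

        u.h.command_type = 0xf;
        assert(u.d == set_type_masked(0x12345678u, 0xfu));
        return 0;
    }
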
/**
@ -568,11 +566,11 @@ static void ufshcd_mcq_nullify_sqe(struct utp_transfer_req_desc *utrd)
* If the command is in the submission queue and not issued to the device yet,
* nullify the sqe so the host controller will skip fetching the sqe.
*
* @hba - per adapter instance.
* @hwq - Hardware Queue to be searched.
* @task_tag - The command's task tag.
* @hba: per adapter instance.
* @hwq: Hardware Queue to be searched.
* @task_tag: The command's task tag.
*
* Returns true if the SQE containing the command is present in the SQ
* Return: true if the SQE containing the command is present in the SQ
* (not fetched by the controller); returns false if the SQE is not in the SQ.
*/
static bool ufshcd_mcq_sqe_search(struct ufs_hba *hba,
@ -621,9 +619,9 @@ out:
/**
* ufshcd_mcq_abort - Abort the command in MCQ.
* @cmd - The command to be aborted.
* @cmd: The command to be aborted.
*
* Returns SUCCESS or FAILED error codes
* Return: SUCCESS or FAILED error codes
*/
int ufshcd_mcq_abort(struct scsi_cmnd *cmd)
{


@ -718,8 +718,6 @@ UFS_DEVICE_DESC_PARAM(device_version, _DEV_VER, 2);
UFS_DEVICE_DESC_PARAM(number_of_secure_wpa, _NUM_SEC_WPA, 1);
UFS_DEVICE_DESC_PARAM(psa_max_data_size, _PSA_MAX_DATA, 4);
UFS_DEVICE_DESC_PARAM(psa_state_timeout, _PSA_TMT, 1);
UFS_DEVICE_DESC_PARAM(hpb_version, _HPB_VER, 2);
UFS_DEVICE_DESC_PARAM(hpb_control, _HPB_CONTROL, 1);
UFS_DEVICE_DESC_PARAM(ext_feature_sup, _EXT_UFS_FEATURE_SUP, 4);
UFS_DEVICE_DESC_PARAM(wb_presv_us_en, _WB_PRESRV_USRSPC_EN, 1);
UFS_DEVICE_DESC_PARAM(wb_type, _WB_TYPE, 1);
@ -752,8 +750,6 @@ static struct attribute *ufs_sysfs_device_descriptor[] = {
&dev_attr_number_of_secure_wpa.attr,
&dev_attr_psa_max_data_size.attr,
&dev_attr_psa_state_timeout.attr,
&dev_attr_hpb_version.attr,
&dev_attr_hpb_control.attr,
&dev_attr_ext_feature_sup.attr,
&dev_attr_wb_presv_us_en.attr,
&dev_attr_wb_type.attr,
@ -827,10 +823,6 @@ UFS_GEOMETRY_DESC_PARAM(enh4_memory_max_alloc_units,
_ENM4_MAX_NUM_UNITS, 4);
UFS_GEOMETRY_DESC_PARAM(enh4_memory_capacity_adjustment_factor,
_ENM4_CAP_ADJ_FCTR, 2);
UFS_GEOMETRY_DESC_PARAM(hpb_region_size, _HPB_REGION_SIZE, 1);
UFS_GEOMETRY_DESC_PARAM(hpb_number_lu, _HPB_NUMBER_LU, 1);
UFS_GEOMETRY_DESC_PARAM(hpb_subregion_size, _HPB_SUBREGION_SIZE, 1);
UFS_GEOMETRY_DESC_PARAM(hpb_max_active_regions, _HPB_MAX_ACTIVE_REGS, 2);
UFS_GEOMETRY_DESC_PARAM(wb_max_alloc_units, _WB_MAX_ALLOC_UNITS, 4);
UFS_GEOMETRY_DESC_PARAM(wb_max_wb_luns, _WB_MAX_WB_LUNS, 1);
UFS_GEOMETRY_DESC_PARAM(wb_buff_cap_adj, _WB_BUFF_CAP_ADJ, 1);
@ -868,10 +860,6 @@ static struct attribute *ufs_sysfs_geometry_descriptor[] = {
&dev_attr_enh3_memory_capacity_adjustment_factor.attr,
&dev_attr_enh4_memory_max_alloc_units.attr,
&dev_attr_enh4_memory_capacity_adjustment_factor.attr,
&dev_attr_hpb_region_size.attr,
&dev_attr_hpb_number_lu.attr,
&dev_attr_hpb_subregion_size.attr,
&dev_attr_hpb_max_active_regions.attr,
&dev_attr_wb_max_alloc_units.attr,
&dev_attr_wb_max_wb_luns.attr,
&dev_attr_wb_buff_cap_adj.attr,
@ -1132,7 +1120,6 @@ UFS_FLAG(disable_fw_update, _PERMANENTLY_DISABLE_FW_UPDATE);
UFS_FLAG(wb_enable, _WB_EN);
UFS_FLAG(wb_flush_en, _WB_BUFF_FLUSH_EN);
UFS_FLAG(wb_flush_during_h8, _WB_BUFF_FLUSH_DURING_HIBERN8);
UFS_FLAG(hpb_enable, _HPB_EN);
static struct attribute *ufs_sysfs_device_flags[] = {
&dev_attr_device_init.attr,
@ -1146,7 +1133,6 @@ static struct attribute *ufs_sysfs_device_flags[] = {
&dev_attr_wb_enable.attr,
&dev_attr_wb_flush_en.attr,
&dev_attr_wb_flush_during_h8.attr,
&dev_attr_hpb_enable.attr,
NULL,
};
@ -1193,7 +1179,6 @@ out: \
static DEVICE_ATTR_RO(_name)
UFS_ATTRIBUTE(boot_lun_enabled, _BOOT_LU_EN);
UFS_ATTRIBUTE(max_data_size_hpb_single_cmd, _MAX_HPB_SINGLE_CMD);
UFS_ATTRIBUTE(current_power_mode, _POWER_MODE);
UFS_ATTRIBUTE(active_icc_level, _ACTIVE_ICC_LVL);
UFS_ATTRIBUTE(ooo_data_enabled, _OOO_DATA_EN);
@ -1217,7 +1202,6 @@ UFS_ATTRIBUTE(wb_cur_buf, _CURR_WB_BUFF_SIZE);
static struct attribute *ufs_sysfs_attributes[] = {
&dev_attr_boot_lun_enabled.attr,
&dev_attr_max_data_size_hpb_single_cmd.attr,
&dev_attr_current_power_mode.attr,
&dev_attr_active_icc_level.attr,
&dev_attr_ooo_data_enabled.attr,
@ -1291,9 +1275,6 @@ UFS_UNIT_DESC_PARAM(provisioning_type, _PROVISIONING_TYPE, 1);
UFS_UNIT_DESC_PARAM(physical_memory_resourse_count, _PHY_MEM_RSRC_CNT, 8);
UFS_UNIT_DESC_PARAM(context_capabilities, _CTX_CAPABILITIES, 2);
UFS_UNIT_DESC_PARAM(large_unit_granularity, _LARGE_UNIT_SIZE_M1, 1);
UFS_UNIT_DESC_PARAM(hpb_lu_max_active_regions, _HPB_LU_MAX_ACTIVE_RGNS, 2);
UFS_UNIT_DESC_PARAM(hpb_pinned_region_start_offset, _HPB_PIN_RGN_START_OFF, 2);
UFS_UNIT_DESC_PARAM(hpb_number_pinned_regions, _HPB_NUM_PIN_RGNS, 2);
UFS_UNIT_DESC_PARAM(wb_buf_alloc_units, _WB_BUF_ALLOC_UNITS, 4);
static struct attribute *ufs_sysfs_unit_descriptor[] = {
@ -1311,9 +1292,6 @@ static struct attribute *ufs_sysfs_unit_descriptor[] = {
&dev_attr_physical_memory_resourse_count.attr,
&dev_attr_context_capabilities.attr,
&dev_attr_large_unit_granularity.attr,
&dev_attr_hpb_lu_max_active_regions.attr,
&dev_attr_hpb_pinned_region_start_offset.attr,
&dev_attr_hpb_number_pinned_regions.attr,
&dev_attr_wb_buf_alloc_units.attr,
NULL,
};


@ -232,6 +232,8 @@ static inline void ufs_bsg_node_release(struct device *dev)
* @hba: per adapter object
*
* Called during initial loading of the driver, and before scsi_scan_host.
*
* Returns: 0 (success).
*/
int ufs_bsg_probe(struct ufs_hba *hba)
{


@ -26,15 +26,15 @@ static inline void ufshcd_prepare_lrbp_crypto(struct request *rq,
}
static inline void
ufshcd_prepare_req_desc_hdr_crypto(struct ufshcd_lrb *lrbp, u32 *dword_0,
u32 *dword_1, u32 *dword_3)
ufshcd_prepare_req_desc_hdr_crypto(struct ufshcd_lrb *lrbp,
struct request_desc_header *h)
{
if (lrbp->crypto_key_slot >= 0) {
*dword_0 |= UTP_REQ_DESC_CRYPTO_ENABLE_CMD;
*dword_0 |= lrbp->crypto_key_slot;
*dword_1 = lower_32_bits(lrbp->data_unit_num);
*dword_3 = upper_32_bits(lrbp->data_unit_num);
}
if (lrbp->crypto_key_slot < 0)
return;
h->enable_crypto = 1;
h->cci = lrbp->crypto_key_slot;
h->dunl = cpu_to_le32(lower_32_bits(lrbp->data_unit_num));
h->dunu = cpu_to_le32(upper_32_bits(lrbp->data_unit_num));
}
bool ufshcd_crypto_enable(struct ufs_hba *hba);
@ -51,8 +51,8 @@ static inline void ufshcd_prepare_lrbp_crypto(struct request *rq,
struct ufshcd_lrb *lrbp) { }
static inline void
ufshcd_prepare_req_desc_hdr_crypto(struct ufshcd_lrb *lrbp, u32 *dword_0,
u32 *dword_1, u32 *dword_3) { }
ufshcd_prepare_req_desc_hdr_crypto(struct ufshcd_lrb *lrbp,
struct request_desc_header *h) { }
static inline bool ufshcd_crypto_enable(struct ufs_hba *hba)
{


@ -93,7 +93,7 @@ int ufshcd_send_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd);
int ufshcd_exec_raw_upiu_cmd(struct ufs_hba *hba,
struct utp_upiu_req *req_upiu,
struct utp_upiu_req *rsp_upiu,
int msgcode,
enum upiu_request_transaction msgcode,
u8 *desc_buff, int *buff_len,
enum query_opcode desc_op);
@ -294,7 +294,7 @@ extern const struct ufs_pm_lvl_states ufs_pm_lvl_states[];
* ufshcd_scsi_to_upiu_lun - maps scsi LUN to UPIU LUN
* @scsi_lun: scsi LUN id
*
* Returns UPIU LUN id
* Return: UPIU LUN id
*/
static inline u8 ufshcd_scsi_to_upiu_lun(unsigned int scsi_lun)
{

File diff suppressed because it is too large.

File diff suppressed because it is too large.


@ -1,318 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Universal Flash Storage Host Performance Booster
*
* Copyright (C) 2017-2021 Samsung Electronics Co., Ltd.
*
* Authors:
* Yongmyung Lee <ymhungry.lee@samsung.com>
* Jinyoung Choi <j-young.choi@samsung.com>
*/
#ifndef _UFSHPB_H_
#define _UFSHPB_H_
/* hpb response UPIU macro */
#define HPB_RSP_NONE 0x0
#define HPB_RSP_REQ_REGION_UPDATE 0x1
#define HPB_RSP_DEV_RESET 0x2
#define MAX_ACTIVE_NUM 2
#define MAX_INACTIVE_NUM 2
#define DEV_DATA_SEG_LEN 0x14
#define DEV_SENSE_SEG_LEN 0x12
#define DEV_DES_TYPE 0x80
#define DEV_ADDITIONAL_LEN 0x10
/* hpb map & entries macro */
#define HPB_RGN_SIZE_UNIT 512
#define HPB_ENTRY_BLOCK_SIZE SZ_4K
#define HPB_ENTRY_SIZE 0x8
#define PINNED_NOT_SET U32_MAX
/* hpb support chunk size */
#define HPB_LEGACY_CHUNK_HIGH 1
#define HPB_MULTI_CHUNK_HIGH 255
/* hpb vendor defined opcode */
#define UFSHPB_READ 0xF8
#define UFSHPB_READ_BUFFER 0xF9
#define UFSHPB_READ_BUFFER_ID 0x01
#define UFSHPB_WRITE_BUFFER 0xFA
#define UFSHPB_WRITE_BUFFER_INACT_SINGLE_ID 0x01
#define UFSHPB_WRITE_BUFFER_PREFETCH_ID 0x02
#define UFSHPB_WRITE_BUFFER_INACT_ALL_ID 0x03
#define HPB_WRITE_BUFFER_CMD_LENGTH 10
#define MAX_HPB_READ_ID 0x7F
#define HPB_READ_BUFFER_CMD_LENGTH 10
#define LU_ENABLED_HPB_FUNC 0x02
#define HPB_RESET_REQ_RETRIES 10
#define HPB_MAP_REQ_RETRIES 5
#define HPB_REQUEUE_TIME_MS 0
#define HPB_SUPPORT_VERSION 0x200
#define HPB_SUPPORT_LEGACY_VERSION 0x100
enum UFSHPB_MODE {
HPB_HOST_CONTROL,
HPB_DEVICE_CONTROL,
};
enum UFSHPB_STATE {
HPB_INIT,
HPB_PRESENT,
HPB_SUSPEND,
HPB_FAILED,
HPB_RESET,
};
enum HPB_RGN_STATE {
HPB_RGN_INACTIVE,
HPB_RGN_ACTIVE,
/* pinned regions are always active */
HPB_RGN_PINNED,
};
enum HPB_SRGN_STATE {
HPB_SRGN_UNUSED,
HPB_SRGN_INVALID,
HPB_SRGN_VALID,
HPB_SRGN_ISSUED,
};
/**
* struct ufshpb_lu_info - UFSHPB logical unit related info
* @num_blocks: the number of logical blocks
* @pinned_start: the start region number of pinned region
* @num_pinned: the number of pinned regions
* @max_active_rgns: maximum number of active regions
*/
struct ufshpb_lu_info {
int num_blocks;
int pinned_start;
int num_pinned;
int max_active_rgns;
};
struct ufshpb_map_ctx {
struct page **m_page;
unsigned long *ppn_dirty;
};
struct ufshpb_subregion {
struct ufshpb_map_ctx *mctx;
enum HPB_SRGN_STATE srgn_state;
int rgn_idx;
int srgn_idx;
bool is_last;
/* subregion reads - for host mode */
unsigned int reads;
/* below information is used by rsp_list */
struct list_head list_act_srgn;
};
struct ufshpb_region {
struct ufshpb_lu *hpb;
struct ufshpb_subregion *srgn_tbl;
enum HPB_RGN_STATE rgn_state;
int rgn_idx;
int srgn_cnt;
/* below information is used by rsp_list */
struct list_head list_inact_rgn;
/* below information is used by lru */
struct list_head list_lru_rgn;
unsigned long rgn_flags;
#define RGN_FLAG_DIRTY 0
#define RGN_FLAG_UPDATE 1
/* region reads - for host mode */
spinlock_t rgn_lock;
unsigned int reads;
/* region "cold" timer - for host mode */
ktime_t read_timeout;
unsigned int read_timeout_expiries;
struct list_head list_expired_rgn;
};
#define for_each_sub_region(rgn, i, srgn) \
for ((i) = 0; \
((i) < (rgn)->srgn_cnt) && ((srgn) = &(rgn)->srgn_tbl[i]); \
(i)++)
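
The removed header's iterator is a common kernel macro shape: the for-condition both bounds the index and, via &&, loads the cursor before each pass. The same pattern in a tiny standalone form (hypothetical names):

    #include <stdio.h>

    struct item { int val; };

    /* Condition checks the bound, then (&&) assigns the cursor; taking
     * an element's address is never 0, so the assignment is always true. */
    #define for_each_item(tbl, n, i, it) \
        for ((i) = 0; (i) < (n) && ((it) = &(tbl)[i]); (i)++)

    int main(void)
    {
        struct item items[3] = { {1}, {2}, {3} };
        struct item *it;
        int i;

        for_each_item(items, 3, i, it)
            printf("%d\n", it->val);
        return 0;
    }
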
/**
* struct ufshpb_req - HPB related request structure (write/read buffer)
* @req: block layer request structure
* @bio: bio for this request
* @hpb: ufshpb_lu structure that related to
* @list_req: ufshpb_req mempool list
* @sense: store its sense data
* @mctx: L2P map information
* @rgn_idx: target region index
* @srgn_idx: target sub-region index
* @lun: target logical unit number
* @m_page: L2P map information data for pre-request
* @len: length of host-side cached L2P map in m_page
* @lpn: start LPN of L2P map in m_page
*/
struct ufshpb_req {
struct request *req;
struct bio *bio;
struct ufshpb_lu *hpb;
struct list_head list_req;
union {
struct {
struct ufshpb_map_ctx *mctx;
unsigned int rgn_idx;
unsigned int srgn_idx;
unsigned int lun;
} rb;
struct {
struct page *m_page;
unsigned int len;
unsigned long lpn;
} wb;
};
};
struct victim_select_info {
struct list_head lh_lru_rgn; /* LRU list of regions */
int max_lru_active_cnt; /* supported hpb #region - pinned #region */
atomic_t active_cnt;
};
/**
* ufshpb_params - ufs hpb parameters
* @requeue_timeout_ms - requeue threshold of wb command (0x2)
* @activation_thld - min reads [IOs] to activate/update a region
* @normalization_factor - shift right the region's reads
* @eviction_thld_enter - min reads [IOs] for the entering region in eviction
* @eviction_thld_exit - max reads [IOs] for the exiting region in eviction
* @read_timeout_ms - timeout [ms] from the last read IO to the region
* @read_timeout_expiries - number of allowed timeout expiries
* @timeout_polling_interval_ms - frequency in which timeouts are checked
* @inflight_map_req - number of inflight map requests
*/
struct ufshpb_params {
unsigned int requeue_timeout_ms;
unsigned int activation_thld;
unsigned int normalization_factor;
unsigned int eviction_thld_enter;
unsigned int eviction_thld_exit;
unsigned int read_timeout_ms;
unsigned int read_timeout_expiries;
unsigned int timeout_polling_interval_ms;
unsigned int inflight_map_req;
};
struct ufshpb_stats {
u64 hit_cnt;
u64 miss_cnt;
u64 rcmd_noti_cnt;
u64 rcmd_active_cnt;
u64 rcmd_inactive_cnt;
u64 map_req_cnt;
u64 pre_req_cnt;
u64 umap_req_cnt;
};
struct ufshpb_lu {
int lun;
struct scsi_device *sdev_ufs_lu;
spinlock_t rgn_state_lock; /* for protect rgn/srgn state */
struct ufshpb_region *rgn_tbl;
atomic_t hpb_state;
spinlock_t rsp_list_lock;
struct list_head lh_act_srgn; /* hold rsp_list_lock */
struct list_head lh_inact_rgn; /* hold rsp_list_lock */
/* pre request information */
struct ufshpb_req *pre_req;
int num_inflight_pre_req;
int throttle_pre_req;
int num_inflight_map_req; /* hold param_lock */
spinlock_t param_lock;
struct list_head lh_pre_req_free;
int pre_req_max_tr_len;
/* cached L2P map management worker */
struct work_struct map_work;
/* for selecting victim */
struct victim_select_info lru_info;
struct work_struct ufshpb_normalization_work;
struct delayed_work ufshpb_read_to_work;
unsigned long work_data_bits;
#define TIMEOUT_WORK_RUNNING 0
/* pinned region information */
u32 lu_pinned_start;
u32 lu_pinned_end;
/* HPB related configuration */
u32 rgns_per_lu;
u32 srgns_per_lu;
u32 last_srgn_entries;
int srgns_per_rgn;
u32 srgn_mem_size;
u32 entries_per_rgn_mask;
u32 entries_per_rgn_shift;
u32 entries_per_srgn;
u32 entries_per_srgn_mask;
u32 entries_per_srgn_shift;
u32 pages_per_srgn;
bool is_hcm;
struct ufshpb_stats stats;
struct ufshpb_params params;
struct kmem_cache *map_req_cache;
struct kmem_cache *m_page_cache;
struct list_head list_hpb_lu;
};
struct ufs_hba;
struct ufshcd_lrb;
#ifndef CONFIG_SCSI_UFS_HPB
static int ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) { return 0; }
static void ufshpb_rsp_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp) {}
static void ufshpb_resume(struct ufs_hba *hba) {}
static void ufshpb_suspend(struct ufs_hba *hba) {}
static void ufshpb_toggle_state(struct ufs_hba *hba, enum UFSHPB_STATE src, enum UFSHPB_STATE dest) {}
static void ufshpb_init(struct ufs_hba *hba) {}
static void ufshpb_init_hpb_lu(struct ufs_hba *hba, struct scsi_device *sdev) {}
static void ufshpb_destroy_lu(struct ufs_hba *hba, struct scsi_device *sdev) {}
static void ufshpb_remove(struct ufs_hba *hba) {}
static bool ufshpb_is_allowed(struct ufs_hba *hba) { return false; }
static void ufshpb_get_geo_info(struct ufs_hba *hba, u8 *geo_buf) {}
static void ufshpb_get_dev_info(struct ufs_hba *hba, u8 *desc_buf) {}
static bool ufshpb_is_legacy(struct ufs_hba *hba) { return false; }
#else
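/* HPB enabled: real implementations are provided by the HPB core. */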
int ufshpb_prep(struct ufs_hba *hba, struct ufshcd_lrb *lrbp);
void ufshpb_rsp_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp);
void ufshpb_resume(struct ufs_hba *hba);
void ufshpb_suspend(struct ufs_hba *hba);
void ufshpb_toggle_state(struct ufs_hba *hba, enum UFSHPB_STATE src, enum UFSHPB_STATE dest);
void ufshpb_init(struct ufs_hba *hba);
void ufshpb_init_hpb_lu(struct ufs_hba *hba, struct scsi_device *sdev);
void ufshpb_destroy_lu(struct ufs_hba *hba, struct scsi_device *sdev);
void ufshpb_remove(struct ufs_hba *hba);
bool ufshpb_is_allowed(struct ufs_hba *hba);
void ufshpb_get_geo_info(struct ufs_hba *hba, u8 *geo_buf);
void ufshpb_get_dev_info(struct ufs_hba *hba, u8 *desc_buf);
bool ufshpb_is_legacy(struct ufs_hba *hba);
extern struct attribute_group ufs_sysfs_hpb_stat_group;
extern struct attribute_group ufs_sysfs_hpb_param_group;
#endif
#endif /* End of Header */


@ -101,11 +101,10 @@ static void cdns_ufs_set_l4_attr(struct ufs_hba *hba)
}
/**
* cdns_ufs_set_hclkdiv()
* Sets HCLKDIV register value based on the core_clk
* cdns_ufs_set_hclkdiv() - set HCLKDIV register value based on the core_clk.
* @hba: host controller instance
*
* Return zero for success and non-zero for failure
* Return: zero for success and non-zero for failure.
*/
static int cdns_ufs_set_hclkdiv(struct ufs_hba *hba)
{
@ -143,12 +142,11 @@ static int cdns_ufs_set_hclkdiv(struct ufs_hba *hba)
}
/**
* cdns_ufs_hce_enable_notify()
* Called before and after HCE enable bit is set.
* cdns_ufs_hce_enable_notify() - set HCLKDIV register
* @hba: host controller instance
* @status: notify stage (pre, post change)
*
* Return zero for success and non-zero for failure
* Return: zero for success and non-zero for failure.
*/
static int cdns_ufs_hce_enable_notify(struct ufs_hba *hba,
enum ufs_notify_change_status status)
@ -160,12 +158,10 @@ static int cdns_ufs_hce_enable_notify(struct ufs_hba *hba,
}
/**
* cdns_ufs_hibern8_notify()
* Called around hibern8 enter/exit.
* cdns_ufs_hibern8_notify() - save and restore L4 attributes.
* @hba: host controller instance
* @cmd: UIC Command
* @status: notify stage (pre, post change)
*
*/
static void cdns_ufs_hibern8_notify(struct ufs_hba *hba, enum uic_cmd_dme cmd,
enum ufs_notify_change_status status)
@ -177,12 +173,11 @@ static void cdns_ufs_hibern8_notify(struct ufs_hba *hba, enum uic_cmd_dme cmd,
}
/**
* cdns_ufs_link_startup_notify()
* Called before and after Link startup is carried out.
* cdns_ufs_link_startup_notify() - handle link startup.
* @hba: host controller instance
* @status: notify stage (pre, post change)
*
* Return zero for success and non-zero for failure
* Return: zero for success and non-zero for failure.
*/
static int cdns_ufs_link_startup_notify(struct ufs_hba *hba,
enum ufs_notify_change_status status)
@ -212,7 +207,7 @@ static int cdns_ufs_link_startup_notify(struct ufs_hba *hba,
* cdns_ufs_init - performs additional ufs initialization
* @hba: host controller instance
*
* Returns status of initialization
* Return: status of initialization.
*/
static int cdns_ufs_init(struct ufs_hba *hba)
{
@ -235,7 +230,7 @@ static int cdns_ufs_init(struct ufs_hba *hba)
* cdns_ufs_m31_16nm_phy_initialization - performs m31 phy initialization
* @hba: host controller instance
*
* Always returns 0
* Return: 0 (success).
*/
static int cdns_ufs_m31_16nm_phy_initialization(struct ufs_hba *hba)
{
@ -284,7 +279,7 @@ MODULE_DEVICE_TABLE(of, cdns_ufs_of_match);
* cdns_ufs_pltfrm_probe - probe routine of the driver
* @pdev: pointer to platform device handle
*
* Return zero for success and non-zero for failure
* Return: zero for success and non-zero for failure.
*/
static int cdns_ufs_pltfrm_probe(struct platform_device *pdev)
{
@ -308,7 +303,7 @@ static int cdns_ufs_pltfrm_probe(struct platform_device *pdev)
* cdns_ufs_pltfrm_remove - removes the ufs driver
* @pdev: pointer to platform device handle
*
* Always returns 0
* Return: 0 (success).
*/
static int cdns_ufs_pltfrm_remove(struct platform_device *pdev)
{

Some files were not shown because too many files have changed in this diff.