SCSI misc on 20240514

Updates to the usual drivers (ufs, lpfc, qla2xxx, mpi3mr, libsas).
 The major update (which causes a conflict with block, see below) is
 Christoph removing the queue limits and their associated block
 helpers.  The remaining patches are assorted minor fixes and
 deprecated function updates plus a bit of constification.
 
 Signed-off-by: James E.J. Bottomley <James.Bottomley@HansenPartnership.com>
 -----BEGIN PGP SIGNATURE-----
 
 iJwEABMIAEQWIQTnYEDbdso9F2cI+arnQslM7pishQUCZkOnWyYcamFtZXMuYm90
 dG9tbGV5QGhhbnNlbnBhcnRuZXJzaGlwLmNvbQAKCRDnQslM7pishYe7AP93XRN/
 xnccJbSTTUL4FFGobq2CYXv58Na+FM/b/+/kEAD+PNi0LmHDdDTOaFUblMd9l4lj
 mpvYLRvJ6ifnHX6WXAg=
 =PVnL
 -----END PGP SIGNATURE-----

Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi

Pull SCSI updates from James Bottomley:
 "Updates to the usual drivers (ufs, lpfc, qla2xxx, mpi3mr, libsas).

  The major update (which causes a conflict with block, see below) is
  Christoph removing the queue limits and their associated block
  helpers.

  The remaining patches are assorted minor fixes and deprecated function
  updates plus a bit of constification"

* tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (141 commits)
  scsi: mpi3mr: Sanitise num_phys
  scsi: lpfc: Copyright updates for 14.4.0.2 patches
  scsi: lpfc: Update lpfc version to 14.4.0.2
  scsi: lpfc: Add support for 32 byte CDBs
  scsi: lpfc: Change lpfc_hba hba_flag member into a bitmask
  scsi: lpfc: Introduce rrq_list_lock to protect active_rrq_list
  scsi: lpfc: Clear deferred RSCN processing flag when driver is unloading
  scsi: lpfc: Update logging of protection type for T10 DIF I/O
  scsi: lpfc: Change default logging level for unsolicited CT MIB commands
  scsi: target: Remove unused list 'device_list'
  scsi: iscsi: Remove unused list 'connlist_err'
  scsi: ufs: exynos: Add support for Tensor gs101 SoC
  scsi: ufs: exynos: Add some pa_dbg_ register offsets into drvdata
  scsi: ufs: exynos: Allow max frequencies up to 267Mhz
  scsi: ufs: exynos: Add EXYNOS_UFS_OPT_TIMER_TICK_SELECT option
  scsi: ufs: exynos: Add EXYNOS_UFS_OPT_UFSPR_SECURE option
  scsi: ufs: dt-bindings: exynos: Add gs101 compatible
  scsi: qla2xxx: Fix debugfs output for fw_resource_count
  scsi: qedf: Ensure the copied buf is NUL terminated
  scsi: bfa: Ensure the copied buf is NUL terminated
  ...
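
For context, the queue-limits rework called out in the message above converts SCSI low-level drivers from poking the request queue with the now-removed blk_queue_*() helpers to filling in a queue_limits structure that the midlayer hands to the newer ->device_configure() host hook. The following is only an illustrative sketch, not code from this series; the "foo" driver name and the numeric values are made up.

/* Old style: ->slave_configure() adjusts the live request queue. */
static int foo_slave_configure(struct scsi_device *sdev)
{
	blk_queue_max_hw_sectors(sdev->request_queue, 1024);
	blk_queue_dma_alignment(sdev->request_queue, 511);
	return 0;
}

/* New style: ->device_configure() fills in the limits passed to it and the
 * block layer applies them when the queue is (re)configured. */
static int foo_device_configure(struct scsi_device *sdev,
				struct queue_limits *lim)
{
	lim->max_hw_sectors = 1024;
	lim->dma_alignment = 511;
	return 0;
}
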
Commit 113d1dd9c8 by Linus Torvalds, 2024-05-14 18:25:53 -07:00
154 changed files with 2179 additions and 2048 deletions


@@ -12,12 +12,10 @@ maintainers:
 description: |
   Each Samsung UFS host controller instance should have its own node.
 
-allOf:
-  - $ref: ufs-common.yaml
-
 properties:
   compatible:
     enum:
+      - google,gs101-ufs
       - samsung,exynos7-ufs
       - samsung,exynosautov9-ufs
       - samsung,exynosautov9-ufs-vh
@@ -38,14 +36,24 @@ properties:
       - const: ufsp
 
   clocks:
+    minItems: 2
     items:
       - description: ufs link core clock
       - description: unipro main clock
+      - description: fmp clock
+      - description: ufs aclk clock
+      - description: ufs pclk clock
+      - description: sysreg clock
 
   clock-names:
+    minItems: 2
     items:
       - const: core_clk
       - const: sclk_unipro_main
+      - const: fmp
+      - const: aclk
+      - const: pclk
+      - const: sysreg
 
   phys:
     maxItems: 1
@@ -72,6 +80,30 @@ required:
   - clocks
   - clock-names
 
+allOf:
+  - $ref: ufs-common.yaml
+  - if:
+      properties:
+        compatible:
+          contains:
+            const: google,gs101-ufs
+    then:
+      properties:
+        clocks:
+          minItems: 6
+        clock-names:
+          minItems: 6
+    else:
+      properties:
+        clocks:
+          maxItems: 2
+        clock-names:
+          maxItems: 2
+
 unevaluatedProperties: false
 
 examples:
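
The gs101 entry above grows the clock list from two to six. As a hedged C illustration only (this is not the actual ufs-exynos driver code; the function, array and variable names are hypothetical), a consumer could request the four extra clocks by the names the binding defines using the clk_bulk API:

#include <linux/clk.h>
#include <linux/device.h>

static const char * const gs101_extra_clks[] = { "fmp", "aclk", "pclk", "sysreg" };

/* Request the gs101-only clocks; other Exynos parts keep just core_clk and
 * sclk_unipro_main, so this would be skipped for them. */
static int gs101_ufs_get_extra_clks(struct device *dev, struct clk_bulk_data *clks)
{
	int i;

	for (i = 0; i < ARRAY_SIZE(gs101_extra_clks); i++)
		clks[i].id = gs101_extra_clks[i];

	return devm_clk_bulk_get(dev, ARRAY_SIZE(gs101_extra_clks), clks);
}
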


@@ -20,7 +20,7 @@ Although the old parallel (fast/wide/ultra) SCSI bus has largely fallen
 out of use, the SCSI command set is more widely used than ever to
 communicate with devices over a number of different busses.
 
-The `SCSI protocol <http://www.t10.org/scsi-3.htm>`__ is a big-endian
+The `SCSI protocol <https://www.t10.org/scsi-3.htm>`__ is a big-endian
 peer-to-peer packet based protocol. SCSI commands are 6, 10, 12, or 16
 bytes long, often followed by an associated data payload.
@@ -28,8 +28,7 @@ SCSI commands can be transported over just about any kind of bus, and
 are the default protocol for storage devices attached to USB, SATA, SAS,
 Fibre Channel, FireWire, and ATAPI devices. SCSI packets are also
 commonly exchanged over Infiniband,
-`I2O <http://i2o.shadowconnect.com/faq.php>`__, TCP/IP
-(`iSCSI <https://en.wikipedia.org/wiki/ISCSI>`__), even `Parallel
+TCP/IP (`iSCSI <https://en.wikipedia.org/wiki/ISCSI>`__), even `Parallel
 ports <http://cyberelk.net/tim/parport/parscsi.html>`__.
 
 Design of the Linux SCSI subsystem
@@ -170,9 +169,9 @@ drivers/scsi/scsi_netlink.c
 Infrastructure to provide async events from transports to userspace via
 netlink, using a single NETLINK_SCSITRANSPORT protocol for all
-transports. See `the original patch
-submission <http://marc.info/?l=linux-scsi&m=115507374832500&w=2>`__ for
-more details.
+transports. See `the original patch submission
+<https://lore.kernel.org/linux-scsi/1155070439.6275.5.camel@localhost.localdomain/>`__
+for more details.
 
 .. kernel-doc:: drivers/scsi/scsi_netlink.c
    :internal:
@@ -328,11 +327,11 @@ the ordinary is seen.
 To be more realistic, the simulated devices have the transport
 attributes of SAS disks.
 
-For documentation see http://sg.danny.cz/sg/sdebug26.html
+For documentation see http://sg.danny.cz/sg/scsi_debug.html
 
 todo
 ~~~~
 
 Parallel (fast/wide/ultra) SCSI, USB, SATA, SAS, Fibre Channel,
-FireWire, ATAPI devices, Infiniband, I2O, Parallel ports,
+FireWire, ATAPI devices, Infiniband, Parallel ports,
 netlink...


@@ -42,18 +42,18 @@ This version of the document roughly matches linux kernel version 2.6.8 .
 Documentation
 =============
 There is a SCSI documentation directory within the kernel source tree,
-typically Documentation/scsi . Most documents are in plain
-(i.e. ASCII) text. This file is named scsi_mid_low_api.txt and can be
+typically Documentation/scsi . Most documents are in reStructuredText
+format. This file is named scsi_mid_low_api.rst and can be
 found in that directory. A more recent copy of this document may be found
-at http://web.archive.org/web/20070107183357rn_1/sg.torque.net/scsi/.
-Many LLDs are documented there (e.g. aic7xxx.txt). The SCSI mid-level is
-briefly described in scsi.txt which contains a url to a document
-describing the SCSI subsystem in the lk 2.4 series. Two upper level
-drivers have documents in that directory: st.txt (SCSI tape driver) and
-scsi-generic.txt (for the sg driver).
+at https://docs.kernel.org/scsi/scsi_mid_low_api.html. Many LLDs are
+documented in Documentation/scsi (e.g. aic7xxx.rst). The SCSI mid-level is
+briefly described in scsi.rst which contains a URL to a document describing
+the SCSI subsystem in the Linux Kernel 2.4 series. Two upper level
+drivers have documents in that directory: st.rst (SCSI tape driver) and
+scsi-generic.rst (for the sg driver).
 
-Some documentation (or urls) for LLDs may be found in the C source code
-or in the same directory as the C source code. For example to find a url
+Some documentation (or URLs) for LLDs may be found in the C source code
+or in the same directory as the C source code. For example to find a URL
 about the USB mass storage driver see the
 /usr/src/linux/drivers/usb/storage directory.


@@ -5828,10 +5828,9 @@ F: include/uapi/misc/cxl.h
 CXLFLASH (IBM Coherent Accelerator Processor Interface CAPI Flash) SCSI DRIVER
 M: Manoj N. Kumar <manoj@linux.ibm.com>
-M: Matthew R. Ochs <mrochs@linux.ibm.com>
 M: Uma Krishnan <ukrishn@linux.ibm.com>
 L: linux-scsi@vger.kernel.org
-S: Supported
+S: Obsolete
 F: Documentation/arch/powerpc/cxlflash.rst
 F: drivers/scsi/cxlflash/
 F: include/uapi/scsi/cxlflash_ioctl.h


@ -282,72 +282,6 @@ int queue_limits_set(struct request_queue *q, struct queue_limits *lim)
} }
EXPORT_SYMBOL_GPL(queue_limits_set); EXPORT_SYMBOL_GPL(queue_limits_set);
/**
* blk_queue_bounce_limit - set bounce buffer limit for queue
* @q: the request queue for the device
* @bounce: bounce limit to enforce
*
* Description:
* Force bouncing for ISA DMA ranges or highmem.
*
* DEPRECATED, don't use in new code.
**/
void blk_queue_bounce_limit(struct request_queue *q, enum blk_bounce bounce)
{
q->limits.bounce = bounce;
}
EXPORT_SYMBOL(blk_queue_bounce_limit);
/**
* blk_queue_max_hw_sectors - set max sectors for a request for this queue
* @q: the request queue for the device
* @max_hw_sectors: max hardware sectors in the usual 512b unit
*
* Description:
* Enables a low level driver to set a hard upper limit,
* max_hw_sectors, on the size of requests. max_hw_sectors is set by
* the device driver based upon the capabilities of the I/O
* controller.
*
* max_dev_sectors is a hard limit imposed by the storage device for
* READ/WRITE requests. It is set by the disk driver.
*
* max_sectors is a soft limit imposed by the block layer for
* filesystem type requests. This value can be overridden on a
* per-device basis in /sys/block/<device>/queue/max_sectors_kb.
* The soft limit can not exceed max_hw_sectors.
**/
void blk_queue_max_hw_sectors(struct request_queue *q, unsigned int max_hw_sectors)
{
struct queue_limits *limits = &q->limits;
unsigned int max_sectors;
if ((max_hw_sectors << 9) < PAGE_SIZE) {
max_hw_sectors = 1 << (PAGE_SHIFT - 9);
pr_info("%s: set to minimum %u\n", __func__, max_hw_sectors);
}
max_hw_sectors = round_down(max_hw_sectors,
limits->logical_block_size >> SECTOR_SHIFT);
limits->max_hw_sectors = max_hw_sectors;
max_sectors = min_not_zero(max_hw_sectors, limits->max_dev_sectors);
if (limits->max_user_sectors)
max_sectors = min(max_sectors, limits->max_user_sectors);
else
max_sectors = min(max_sectors, BLK_DEF_MAX_SECTORS_CAP);
max_sectors = round_down(max_sectors,
limits->logical_block_size >> SECTOR_SHIFT);
limits->max_sectors = max_sectors;
if (!q->disk)
return;
q->disk->bdi->io_pages = max_sectors >> (PAGE_SHIFT - 9);
}
EXPORT_SYMBOL(blk_queue_max_hw_sectors);
/** /**
* blk_queue_chunk_sectors - set size of the chunk for this queue * blk_queue_chunk_sectors - set size of the chunk for this queue
* @q: the request queue for the device * @q: the request queue for the device
@ -442,65 +376,6 @@ void blk_queue_max_zone_append_sectors(struct request_queue *q,
} }
EXPORT_SYMBOL_GPL(blk_queue_max_zone_append_sectors); EXPORT_SYMBOL_GPL(blk_queue_max_zone_append_sectors);
/**
* blk_queue_max_segments - set max hw segments for a request for this queue
* @q: the request queue for the device
* @max_segments: max number of segments
*
* Description:
* Enables a low level driver to set an upper limit on the number of
* hw data segments in a request.
**/
void blk_queue_max_segments(struct request_queue *q, unsigned short max_segments)
{
if (!max_segments) {
max_segments = 1;
pr_info("%s: set to minimum %u\n", __func__, max_segments);
}
q->limits.max_segments = max_segments;
}
EXPORT_SYMBOL(blk_queue_max_segments);
/**
* blk_queue_max_discard_segments - set max segments for discard requests
* @q: the request queue for the device
* @max_segments: max number of segments
*
* Description:
* Enables a low level driver to set an upper limit on the number of
* segments in a discard request.
**/
void blk_queue_max_discard_segments(struct request_queue *q,
unsigned short max_segments)
{
q->limits.max_discard_segments = max_segments;
}
EXPORT_SYMBOL_GPL(blk_queue_max_discard_segments);
/**
* blk_queue_max_segment_size - set max segment size for blk_rq_map_sg
* @q: the request queue for the device
* @max_size: max size of segment in bytes
*
* Description:
* Enables a low level driver to set an upper limit on the size of a
* coalesced segment
**/
void blk_queue_max_segment_size(struct request_queue *q, unsigned int max_size)
{
if (max_size < PAGE_SIZE) {
max_size = PAGE_SIZE;
pr_info("%s: set to minimum %u\n", __func__, max_size);
}
/* see blk_queue_virt_boundary() for the explanation */
WARN_ON_ONCE(q->limits.virt_boundary_mask);
q->limits.max_segment_size = max_size;
}
EXPORT_SYMBOL(blk_queue_max_segment_size);
/** /**
* blk_queue_logical_block_size - set logical block size for the queue * blk_queue_logical_block_size - set logical block size for the queue
* @q: the request queue for the device * @q: the request queue for the device
@ -667,29 +542,6 @@ void blk_limits_io_opt(struct queue_limits *limits, unsigned int opt)
} }
EXPORT_SYMBOL(blk_limits_io_opt); EXPORT_SYMBOL(blk_limits_io_opt);
/**
* blk_queue_io_opt - set optimal request size for the queue
* @q: the request queue for the device
* @opt: optimal request size in bytes
*
* Description:
* Storage devices may report an optimal I/O size, which is the
* device's preferred unit for sustained I/O. This is rarely reported
* for disk drives. For RAID arrays it is usually the stripe width or
* the internal track size. A properly aligned multiple of
* optimal_io_size is the preferred request size for workloads where
* sustained throughput is desired.
*/
void blk_queue_io_opt(struct request_queue *q, unsigned int opt)
{
blk_limits_io_opt(&q->limits, opt);
if (!q->disk)
return;
q->disk->bdi->ra_pages =
max(queue_io_opt(q) * 2 / PAGE_SIZE, VM_READAHEAD_PAGES);
}
EXPORT_SYMBOL(blk_queue_io_opt);
static int queue_limit_alignment_offset(const struct queue_limits *lim, static int queue_limit_alignment_offset(const struct queue_limits *lim,
sector_t sector) sector_t sector)
{ {
@ -939,81 +791,6 @@ void blk_queue_update_dma_pad(struct request_queue *q, unsigned int mask)
} }
EXPORT_SYMBOL(blk_queue_update_dma_pad); EXPORT_SYMBOL(blk_queue_update_dma_pad);
/**
* blk_queue_segment_boundary - set boundary rules for segment merging
* @q: the request queue for the device
* @mask: the memory boundary mask
**/
void blk_queue_segment_boundary(struct request_queue *q, unsigned long mask)
{
if (mask < PAGE_SIZE - 1) {
mask = PAGE_SIZE - 1;
pr_info("%s: set to minimum %lx\n", __func__, mask);
}
q->limits.seg_boundary_mask = mask;
}
EXPORT_SYMBOL(blk_queue_segment_boundary);
/**
* blk_queue_virt_boundary - set boundary rules for bio merging
* @q: the request queue for the device
* @mask: the memory boundary mask
**/
void blk_queue_virt_boundary(struct request_queue *q, unsigned long mask)
{
q->limits.virt_boundary_mask = mask;
/*
* Devices that require a virtual boundary do not support scatter/gather
* I/O natively, but instead require a descriptor list entry for each
* page (which might not be idential to the Linux PAGE_SIZE). Because
* of that they are not limited by our notion of "segment size".
*/
if (mask)
q->limits.max_segment_size = UINT_MAX;
}
EXPORT_SYMBOL(blk_queue_virt_boundary);
/**
* blk_queue_dma_alignment - set dma length and memory alignment
* @q: the request queue for the device
* @mask: alignment mask
*
* description:
* set required memory and length alignment for direct dma transactions.
* this is used when building direct io requests for the queue.
*
**/
void blk_queue_dma_alignment(struct request_queue *q, int mask)
{
q->limits.dma_alignment = mask;
}
EXPORT_SYMBOL(blk_queue_dma_alignment);
/**
* blk_queue_update_dma_alignment - update dma length and memory alignment
* @q: the request queue for the device
* @mask: alignment mask
*
* description:
* update required memory and length alignment for direct dma transactions.
* If the requested alignment is larger than the current alignment, then
* the current queue alignment is updated to the new value, otherwise it
* is left alone. The design of this is to allow multiple objects
* (driver, device, transport etc) to set their respective
* alignments without having them interfere.
*
**/
void blk_queue_update_dma_alignment(struct request_queue *q, int mask)
{
BUG_ON(mask > PAGE_SIZE);
if (mask > q->limits.dma_alignment)
q->limits.dma_alignment = mask;
}
EXPORT_SYMBOL(blk_queue_update_dma_alignment);
/** /**
* blk_set_queue_depth - tell the block layer about the device queue depth * blk_set_queue_depth - tell the block layer about the device queue depth
* @q: the request queue for the device * @q: the request queue for the device
@ -1051,28 +828,6 @@ void blk_queue_write_cache(struct request_queue *q, bool wc, bool fua)
} }
EXPORT_SYMBOL_GPL(blk_queue_write_cache); EXPORT_SYMBOL_GPL(blk_queue_write_cache);
/**
* blk_queue_can_use_dma_map_merging - configure queue for merging segments.
* @q: the request queue for the device
* @dev: the device pointer for dma
*
* Tell the block layer about merging the segments by dma map of @q.
*/
bool blk_queue_can_use_dma_map_merging(struct request_queue *q,
struct device *dev)
{
unsigned long boundary = dma_get_merge_boundary(dev);
if (!boundary)
return false;
/* No need to update max_segment_size. see blk_queue_virt_boundary() */
blk_queue_virt_boundary(q, boundary);
return true;
}
EXPORT_SYMBOL_GPL(blk_queue_can_use_dma_map_merging);
/** /**
* disk_set_zoned - inidicate a zoned device * disk_set_zoned - inidicate a zoned device
* @disk: gendisk to configure * @disk: gendisk to configure


@@ -354,12 +354,14 @@ static const struct blk_mq_ops bsg_mq_ops = {
  * bsg_setup_queue - Create and add the bsg hooks so we can receive requests
  * @dev: device to attach bsg device to
  * @name: device to give bsg device
+ * @lim: queue limits for the bsg queue
  * @job_fn: bsg job handler
  * @timeout: timeout handler function pointer
  * @dd_job_size: size of LLD data needed for each job
  */
 struct request_queue *bsg_setup_queue(struct device *dev, const char *name,
-		bsg_job_fn *job_fn, bsg_timeout_fn *timeout, int dd_job_size)
+		struct queue_limits *lim, bsg_job_fn *job_fn,
+		bsg_timeout_fn *timeout, int dd_job_size)
 {
 	struct bsg_set *bset;
 	struct blk_mq_tag_set *set;
@@ -383,7 +385,7 @@ struct request_queue *bsg_setup_queue(struct device *dev, const char *name,
 	if (blk_mq_alloc_tag_set(set))
 		goto out_tag_set;
 
-	q = blk_mq_alloc_queue(set, NULL, NULL);
+	q = blk_mq_alloc_queue(set, lim, NULL);
 	if (IS_ERR(q)) {
 		ret = PTR_ERR(q);
 		goto out_queue;
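
With the signature change above, callers of bsg_setup_queue() hand their queue limits to the block layer at allocation time instead of fixing them up on the returned queue afterwards. A hedged caller sketch, not taken from this series (the foo_ names and the segment count are hypothetical):

#include <linux/blkdev.h>
#include <linux/bsg-lib.h>

static int foo_bsg_job_fn(struct bsg_job *job);	/* hypothetical job handler */

static struct request_queue *foo_alloc_bsg_queue(struct device *dev)
{
	struct queue_limits lim = {
		.max_segments = 128,	/* made-up limit, applied at queue setup */
	};

	return bsg_setup_queue(dev, dev_name(dev), &lim, foo_bsg_job_fn,
			       NULL, 0);
}
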


@@ -397,7 +397,7 @@ extern const struct attribute_group *ahci_sdev_groups[];
 	.sdev_groups = ahci_sdev_groups, \
 	.change_queue_depth = ata_scsi_change_queue_depth, \
 	.tag_alloc_policy = BLK_TAG_ALLOC_RR, \
-	.slave_configure = ata_scsi_slave_config
+	.device_configure = ata_scsi_device_configure
 
 extern struct ata_port_operations ahci_ops;
 extern struct ata_port_operations ahci_platform_ops;


@ -848,80 +848,143 @@ DEVICE_ATTR(link_power_management_policy, S_IRUGO | S_IWUSR,
ata_scsi_lpm_show, ata_scsi_lpm_store); ata_scsi_lpm_show, ata_scsi_lpm_store);
EXPORT_SYMBOL_GPL(dev_attr_link_power_management_policy); EXPORT_SYMBOL_GPL(dev_attr_link_power_management_policy);
/**
* ata_ncq_prio_supported - Check if device supports NCQ Priority
* @ap: ATA port of the target device
* @sdev: SCSI device
* @supported: Address of a boolean to store the result
*
* Helper to check if device supports NCQ Priority feature.
*
* Context: Any context. Takes and releases @ap->lock.
*
* Return:
* * %0 - OK. Status is stored into @supported
* * %-ENODEV - Failed to find the ATA device
*/
int ata_ncq_prio_supported(struct ata_port *ap, struct scsi_device *sdev,
bool *supported)
{
struct ata_device *dev;
unsigned long flags;
int rc = 0;
spin_lock_irqsave(ap->lock, flags);
dev = ata_scsi_find_dev(ap, sdev);
if (!dev)
rc = -ENODEV;
else
*supported = dev->flags & ATA_DFLAG_NCQ_PRIO;
spin_unlock_irqrestore(ap->lock, flags);
return rc;
}
EXPORT_SYMBOL_GPL(ata_ncq_prio_supported);
static ssize_t ata_ncq_prio_supported_show(struct device *device, static ssize_t ata_ncq_prio_supported_show(struct device *device,
struct device_attribute *attr, struct device_attribute *attr,
char *buf) char *buf)
{ {
struct scsi_device *sdev = to_scsi_device(device); struct scsi_device *sdev = to_scsi_device(device);
struct ata_port *ap = ata_shost_to_port(sdev->host); struct ata_port *ap = ata_shost_to_port(sdev->host);
struct ata_device *dev; bool supported;
bool ncq_prio_supported; int rc;
int rc = 0;
spin_lock_irq(ap->lock); rc = ata_ncq_prio_supported(ap, sdev, &supported);
dev = ata_scsi_find_dev(ap, sdev); if (rc)
if (!dev) return rc;
rc = -ENODEV;
else
ncq_prio_supported = dev->flags & ATA_DFLAG_NCQ_PRIO;
spin_unlock_irq(ap->lock);
return rc ? rc : sysfs_emit(buf, "%u\n", ncq_prio_supported); return sysfs_emit(buf, "%d\n", supported);
} }
DEVICE_ATTR(ncq_prio_supported, S_IRUGO, ata_ncq_prio_supported_show, NULL); DEVICE_ATTR(ncq_prio_supported, S_IRUGO, ata_ncq_prio_supported_show, NULL);
EXPORT_SYMBOL_GPL(dev_attr_ncq_prio_supported); EXPORT_SYMBOL_GPL(dev_attr_ncq_prio_supported);
/**
* ata_ncq_prio_enabled - Check if NCQ Priority is enabled
* @ap: ATA port of the target device
* @sdev: SCSI device
* @enabled: Address of a boolean to store the result
*
* Helper to check if NCQ Priority feature is enabled.
*
* Context: Any context. Takes and releases @ap->lock.
*
* Return:
* * %0 - OK. Status is stored into @enabled
* * %-ENODEV - Failed to find the ATA device
*/
int ata_ncq_prio_enabled(struct ata_port *ap, struct scsi_device *sdev,
bool *enabled)
{
struct ata_device *dev;
unsigned long flags;
int rc = 0;
spin_lock_irqsave(ap->lock, flags);
dev = ata_scsi_find_dev(ap, sdev);
if (!dev)
rc = -ENODEV;
else
*enabled = dev->flags & ATA_DFLAG_NCQ_PRIO_ENABLED;
spin_unlock_irqrestore(ap->lock, flags);
return rc;
}
EXPORT_SYMBOL_GPL(ata_ncq_prio_enabled);
static ssize_t ata_ncq_prio_enable_show(struct device *device, static ssize_t ata_ncq_prio_enable_show(struct device *device,
struct device_attribute *attr, struct device_attribute *attr,
char *buf) char *buf)
{ {
struct scsi_device *sdev = to_scsi_device(device); struct scsi_device *sdev = to_scsi_device(device);
struct ata_port *ap = ata_shost_to_port(sdev->host); struct ata_port *ap = ata_shost_to_port(sdev->host);
struct ata_device *dev; bool enabled;
bool ncq_prio_enable; int rc;
int rc = 0;
spin_lock_irq(ap->lock); rc = ata_ncq_prio_enabled(ap, sdev, &enabled);
dev = ata_scsi_find_dev(ap, sdev);
if (!dev)
rc = -ENODEV;
else
ncq_prio_enable = dev->flags & ATA_DFLAG_NCQ_PRIO_ENABLED;
spin_unlock_irq(ap->lock);
return rc ? rc : sysfs_emit(buf, "%u\n", ncq_prio_enable);
}
static ssize_t ata_ncq_prio_enable_store(struct device *device,
struct device_attribute *attr,
const char *buf, size_t len)
{
struct scsi_device *sdev = to_scsi_device(device);
struct ata_port *ap;
struct ata_device *dev;
long int input;
int rc = 0;
rc = kstrtol(buf, 10, &input);
if (rc) if (rc)
return rc; return rc;
if ((input < 0) || (input > 1))
return -EINVAL;
ap = ata_shost_to_port(sdev->host); return sysfs_emit(buf, "%d\n", enabled);
}
/**
* ata_ncq_prio_enable - Enable/disable NCQ Priority
* @ap: ATA port of the target device
* @sdev: SCSI device
* @enable: true - enable NCQ Priority, false - disable NCQ Priority
*
* Helper to enable/disable NCQ Priority feature.
*
* Context: Any context. Takes and releases @ap->lock.
*
* Return:
* * %0 - OK. Status is stored into @enabled
* * %-ENODEV - Failed to find the ATA device
* * %-EINVAL - NCQ Priority is not supported or CDL is enabled
*/
int ata_ncq_prio_enable(struct ata_port *ap, struct scsi_device *sdev,
bool enable)
{
struct ata_device *dev;
unsigned long flags;
int rc = 0;
spin_lock_irqsave(ap->lock, flags);
dev = ata_scsi_find_dev(ap, sdev); dev = ata_scsi_find_dev(ap, sdev);
if (unlikely(!dev)) if (!dev) {
return -ENODEV; rc = -ENODEV;
goto unlock;
spin_lock_irq(ap->lock); }
if (!(dev->flags & ATA_DFLAG_NCQ_PRIO)) { if (!(dev->flags & ATA_DFLAG_NCQ_PRIO)) {
rc = -EINVAL; rc = -EINVAL;
goto unlock; goto unlock;
} }
if (input) { if (enable) {
if (dev->flags & ATA_DFLAG_CDL_ENABLED) { if (dev->flags & ATA_DFLAG_CDL_ENABLED) {
ata_dev_err(dev, ata_dev_err(dev,
"CDL must be disabled to enable NCQ priority\n"); "CDL must be disabled to enable NCQ priority\n");
@ -934,9 +997,30 @@ static ssize_t ata_ncq_prio_enable_store(struct device *device,
} }
unlock: unlock:
spin_unlock_irq(ap->lock); spin_unlock_irqrestore(ap->lock, flags);
return rc ? rc : len; return rc;
}
EXPORT_SYMBOL_GPL(ata_ncq_prio_enable);
static ssize_t ata_ncq_prio_enable_store(struct device *device,
struct device_attribute *attr,
const char *buf, size_t len)
{
struct scsi_device *sdev = to_scsi_device(device);
struct ata_port *ap = ata_shost_to_port(sdev->host);
bool enable;
int rc;
rc = kstrtobool(buf, &enable);
if (rc)
return rc;
rc = ata_ncq_prio_enable(ap, sdev, enable);
if (rc)
return rc;
return len;
} }
DEVICE_ATTR(ncq_prio_enable, S_IRUGO | S_IWUSR, DEVICE_ATTR(ncq_prio_enable, S_IRUGO | S_IWUSR,
@ -1170,21 +1254,24 @@ void ata_sas_tport_delete(struct ata_port *ap)
EXPORT_SYMBOL_GPL(ata_sas_tport_delete); EXPORT_SYMBOL_GPL(ata_sas_tport_delete);
/** /**
* ata_sas_slave_configure - Default slave_config routine for libata devices * ata_sas_device_configure - Default device_configure routine for libata
* devices
* @sdev: SCSI device to configure * @sdev: SCSI device to configure
* @lim: queue limits
* @ap: ATA port to which SCSI device is attached * @ap: ATA port to which SCSI device is attached
* *
* RETURNS: * RETURNS:
* Zero. * Zero.
*/ */
int ata_sas_slave_configure(struct scsi_device *sdev, struct ata_port *ap) int ata_sas_device_configure(struct scsi_device *sdev, struct queue_limits *lim,
struct ata_port *ap)
{ {
ata_scsi_sdev_config(sdev); ata_scsi_sdev_config(sdev);
return ata_scsi_dev_config(sdev, ap->link.device); return ata_scsi_dev_config(sdev, lim, ap->link.device);
} }
EXPORT_SYMBOL_GPL(ata_sas_slave_configure); EXPORT_SYMBOL_GPL(ata_sas_device_configure);
/** /**
* ata_sas_queuecmd - Issue SCSI cdb to libata-managed device * ata_sas_queuecmd - Issue SCSI cdb to libata-managed device
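
The hunks above also export ata_ncq_prio_supported(), ata_ncq_prio_enabled() and ata_ncq_prio_enable() so that glue code outside libata's own sysfs attributes can reuse the locking and lookup they encapsulate. A hedged usage sketch only (the foo_ function is hypothetical; the helper calls follow the kernel-doc shown above):

#include <linux/errno.h>
#include <linux/libata.h>

static int foo_try_enable_ncq_prio(struct ata_port *ap, struct scsi_device *sdev)
{
	bool supported;
	int rc;

	rc = ata_ncq_prio_supported(ap, sdev, &supported);
	if (rc)
		return rc;
	if (!supported)
		return -EOPNOTSUPP;

	/* Rejected with -EINVAL by the helper if CDL is currently enabled. */
	return ata_ncq_prio_enable(ap, sdev, true);
}
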


@@ -1021,7 +1021,8 @@ bool ata_scsi_dma_need_drain(struct request *rq)
 }
 EXPORT_SYMBOL_GPL(ata_scsi_dma_need_drain);
 
-int ata_scsi_dev_config(struct scsi_device *sdev, struct ata_device *dev)
+int ata_scsi_dev_config(struct scsi_device *sdev, struct queue_limits *lim,
+		struct ata_device *dev)
 {
 	struct request_queue *q = sdev->request_queue;
 	int depth = 1;
@@ -1031,7 +1032,7 @@ int ata_scsi_dev_config(struct scsi_device *sdev, struct ata_device *dev)
 	/* configure max sectors */
 	dev->max_sectors = min(dev->max_sectors, sdev->host->max_sectors);
-	blk_queue_max_hw_sectors(q, dev->max_sectors);
+	lim->max_hw_sectors = dev->max_sectors;
 
 	if (dev->class == ATA_DEV_ATAPI) {
 		sdev->sector_size = ATA_SECT_SIZE;
@@ -1040,7 +1041,7 @@ int ata_scsi_dev_config(struct scsi_device *sdev, struct ata_device *dev)
 		blk_queue_update_dma_pad(q, ATA_DMA_PAD_SZ - 1);
 
 		/* make room for appending the drain */
-		blk_queue_max_segments(q, queue_max_segments(q) - 1);
+		lim->max_segments--;
 
 		sdev->dma_drain_len = ATAPI_MAX_DRAIN;
 		sdev->dma_drain_buf = kmalloc(sdev->dma_drain_len, GFP_NOIO);
@@ -1077,7 +1078,7 @@ int ata_scsi_dev_config(struct scsi_device *sdev, struct ata_device *dev)
 			"sector_size=%u > PAGE_SIZE, PIO may malfunction\n",
 			sdev->sector_size);
 
-		blk_queue_update_dma_alignment(q, sdev->sector_size - 1);
+		lim->dma_alignment = sdev->sector_size - 1;
 
 	if (dev->flags & ATA_DFLAG_AN)
 		set_bit(SDEV_EVT_MEDIA_CHANGE, sdev->supported_events);
@@ -1131,8 +1132,9 @@ int ata_scsi_slave_alloc(struct scsi_device *sdev)
 EXPORT_SYMBOL_GPL(ata_scsi_slave_alloc);
 
 /**
- *	ata_scsi_slave_config - Set SCSI device attributes
+ *	ata_scsi_device_configure - Set SCSI device attributes
  *	@sdev: SCSI device to examine
+ *	@lim: queue limits
  *
  *	This is called before we actually start reading
  *	and writing to the device, to configure certain
@@ -1142,17 +1144,18 @@ EXPORT_SYMBOL_GPL(ata_scsi_slave_alloc);
  *	Defined by SCSI layer.  We don't really care.
  */
-int ata_scsi_slave_config(struct scsi_device *sdev)
+int ata_scsi_device_configure(struct scsi_device *sdev,
+		struct queue_limits *lim)
 {
 	struct ata_port *ap = ata_shost_to_port(sdev->host);
 	struct ata_device *dev = __ata_scsi_find_dev(ap, sdev);
 
 	if (dev)
-		return ata_scsi_dev_config(sdev, dev);
+		return ata_scsi_dev_config(sdev, lim, dev);
 
 	return 0;
 }
-EXPORT_SYMBOL_GPL(ata_scsi_slave_config);
+EXPORT_SYMBOL_GPL(ata_scsi_device_configure);
 
 /**
  *	ata_scsi_slave_destroy - SCSI device is about to be destroyed


@@ -131,7 +131,8 @@ extern void ata_scsi_dev_rescan(struct work_struct *work);
 extern int ata_scsi_user_scan(struct Scsi_Host *shost, unsigned int channel,
 			      unsigned int id, u64 lun);
 void ata_scsi_sdev_config(struct scsi_device *sdev);
-int ata_scsi_dev_config(struct scsi_device *sdev, struct ata_device *dev);
+int ata_scsi_dev_config(struct scsi_device *sdev, struct queue_limits *lim,
+			struct ata_device *dev);
 int __ata_scsi_queuecmd(struct scsi_cmnd *scmd, struct ata_device *dev);
 
 /* libata-eh.c */


@@ -796,7 +796,8 @@ static void pata_macio_reset_hw(struct pata_macio_priv *priv, int resume)
 /* Hook the standard slave config to fixup some HW related alignment
  * restrictions
  */
-static int pata_macio_slave_config(struct scsi_device *sdev)
+static int pata_macio_device_configure(struct scsi_device *sdev,
+		struct queue_limits *lim)
 {
 	struct ata_port *ap = ata_shost_to_port(sdev->host);
 	struct pata_macio_priv *priv = ap->private_data;
@@ -805,7 +806,7 @@ static int pata_macio_slave_config(struct scsi_device *sdev)
 	int rc;
 
 	/* First call original */
-	rc = ata_scsi_slave_config(sdev);
+	rc = ata_scsi_device_configure(sdev, lim);
 	if (rc)
 		return rc;
 
@@ -814,7 +815,7 @@ static int pata_macio_slave_config(struct scsi_device *sdev)
 	/* OHare has issues with non cache aligned DMA on some chipsets */
 	if (priv->kind == controller_ohare) {
-		blk_queue_update_dma_alignment(sdev->request_queue, 31);
+		lim->dma_alignment = 31;
 		blk_queue_update_dma_pad(sdev->request_queue, 31);
 
 		/* Tell the world about it */
@@ -829,7 +830,7 @@ static int pata_macio_slave_config(struct scsi_device *sdev)
 	/* Shasta and K2 seem to have "issues" with reads ... */
 	if (priv->kind == controller_sh_ata6 || priv->kind == controller_k2_ata6) {
 		/* Allright these are bad, apply restrictions */
-		blk_queue_update_dma_alignment(sdev->request_queue, 15);
+		lim->dma_alignment = 15;
 		blk_queue_update_dma_pad(sdev->request_queue, 15);
 
 		/* We enable MWI and hack cache line size directly here, this
@@ -918,7 +919,7 @@ static const struct scsi_host_template pata_macio_sht = {
 	 * use 64K minus 256
 	 */
 	.max_segment_size = MAX_DBDMA_SEG,
-	.slave_configure = pata_macio_slave_config,
+	.device_configure = pata_macio_device_configure,
 	.sdev_groups = ata_common_sdev_groups,
 	.can_queue = ATA_DEF_QUEUE,
 	.tag_alloc_policy = BLK_TAG_ALLOC_RR,


@@ -673,7 +673,7 @@ static const struct scsi_host_template mv6_sht = {
 	.sdev_groups = ata_ncq_sdev_groups,
 	.change_queue_depth = ata_scsi_change_queue_depth,
 	.tag_alloc_policy = BLK_TAG_ALLOC_RR,
-	.slave_configure = ata_scsi_slave_config
+	.device_configure = ata_scsi_device_configure
 };
 
 static struct ata_port_operations mv5_ops = {


@ -296,7 +296,8 @@ static void nv_nf2_freeze(struct ata_port *ap);
static void nv_nf2_thaw(struct ata_port *ap); static void nv_nf2_thaw(struct ata_port *ap);
static void nv_ck804_freeze(struct ata_port *ap); static void nv_ck804_freeze(struct ata_port *ap);
static void nv_ck804_thaw(struct ata_port *ap); static void nv_ck804_thaw(struct ata_port *ap);
static int nv_adma_slave_config(struct scsi_device *sdev); static int nv_adma_device_configure(struct scsi_device *sdev,
struct queue_limits *lim);
static int nv_adma_check_atapi_dma(struct ata_queued_cmd *qc); static int nv_adma_check_atapi_dma(struct ata_queued_cmd *qc);
static enum ata_completion_errors nv_adma_qc_prep(struct ata_queued_cmd *qc); static enum ata_completion_errors nv_adma_qc_prep(struct ata_queued_cmd *qc);
static unsigned int nv_adma_qc_issue(struct ata_queued_cmd *qc); static unsigned int nv_adma_qc_issue(struct ata_queued_cmd *qc);
@ -318,7 +319,8 @@ static void nv_adma_tf_read(struct ata_port *ap, struct ata_taskfile *tf);
static void nv_mcp55_thaw(struct ata_port *ap); static void nv_mcp55_thaw(struct ata_port *ap);
static void nv_mcp55_freeze(struct ata_port *ap); static void nv_mcp55_freeze(struct ata_port *ap);
static void nv_swncq_error_handler(struct ata_port *ap); static void nv_swncq_error_handler(struct ata_port *ap);
static int nv_swncq_slave_config(struct scsi_device *sdev); static int nv_swncq_device_configure(struct scsi_device *sdev,
struct queue_limits *lim);
static int nv_swncq_port_start(struct ata_port *ap); static int nv_swncq_port_start(struct ata_port *ap);
static enum ata_completion_errors nv_swncq_qc_prep(struct ata_queued_cmd *qc); static enum ata_completion_errors nv_swncq_qc_prep(struct ata_queued_cmd *qc);
static void nv_swncq_fill_sg(struct ata_queued_cmd *qc); static void nv_swncq_fill_sg(struct ata_queued_cmd *qc);
@ -380,7 +382,7 @@ static const struct scsi_host_template nv_adma_sht = {
.can_queue = NV_ADMA_MAX_CPBS, .can_queue = NV_ADMA_MAX_CPBS,
.sg_tablesize = NV_ADMA_SGTBL_TOTAL_LEN, .sg_tablesize = NV_ADMA_SGTBL_TOTAL_LEN,
.dma_boundary = NV_ADMA_DMA_BOUNDARY, .dma_boundary = NV_ADMA_DMA_BOUNDARY,
.slave_configure = nv_adma_slave_config, .device_configure = nv_adma_device_configure,
.sdev_groups = ata_ncq_sdev_groups, .sdev_groups = ata_ncq_sdev_groups,
.change_queue_depth = ata_scsi_change_queue_depth, .change_queue_depth = ata_scsi_change_queue_depth,
.tag_alloc_policy = BLK_TAG_ALLOC_RR, .tag_alloc_policy = BLK_TAG_ALLOC_RR,
@ -391,7 +393,7 @@ static const struct scsi_host_template nv_swncq_sht = {
.can_queue = ATA_MAX_QUEUE - 1, .can_queue = ATA_MAX_QUEUE - 1,
.sg_tablesize = LIBATA_MAX_PRD, .sg_tablesize = LIBATA_MAX_PRD,
.dma_boundary = ATA_DMA_BOUNDARY, .dma_boundary = ATA_DMA_BOUNDARY,
.slave_configure = nv_swncq_slave_config, .device_configure = nv_swncq_device_configure,
.sdev_groups = ata_ncq_sdev_groups, .sdev_groups = ata_ncq_sdev_groups,
.change_queue_depth = ata_scsi_change_queue_depth, .change_queue_depth = ata_scsi_change_queue_depth,
.tag_alloc_policy = BLK_TAG_ALLOC_RR, .tag_alloc_policy = BLK_TAG_ALLOC_RR,
@ -661,7 +663,8 @@ static void nv_adma_mode(struct ata_port *ap)
pp->flags &= ~NV_ADMA_PORT_REGISTER_MODE; pp->flags &= ~NV_ADMA_PORT_REGISTER_MODE;
} }
static int nv_adma_slave_config(struct scsi_device *sdev) static int nv_adma_device_configure(struct scsi_device *sdev,
struct queue_limits *lim)
{ {
struct ata_port *ap = ata_shost_to_port(sdev->host); struct ata_port *ap = ata_shost_to_port(sdev->host);
struct nv_adma_port_priv *pp = ap->private_data; struct nv_adma_port_priv *pp = ap->private_data;
@ -673,7 +676,7 @@ static int nv_adma_slave_config(struct scsi_device *sdev)
int adma_enable; int adma_enable;
u32 current_reg, new_reg, config_mask; u32 current_reg, new_reg, config_mask;
rc = ata_scsi_slave_config(sdev); rc = ata_scsi_device_configure(sdev, lim);
if (sdev->id >= ATA_MAX_DEVICES || sdev->channel || sdev->lun) if (sdev->id >= ATA_MAX_DEVICES || sdev->channel || sdev->lun)
/* Not a proper libata device, ignore */ /* Not a proper libata device, ignore */
@ -740,8 +743,8 @@ static int nv_adma_slave_config(struct scsi_device *sdev)
rc = dma_set_mask(&pdev->dev, pp->adma_dma_mask); rc = dma_set_mask(&pdev->dev, pp->adma_dma_mask);
} }
blk_queue_segment_boundary(sdev->request_queue, segment_boundary); lim->seg_boundary_mask = segment_boundary;
blk_queue_max_segments(sdev->request_queue, sg_tablesize); lim->max_segments = sg_tablesize;
ata_port_info(ap, ata_port_info(ap,
"DMA mask 0x%llX, segment boundary 0x%lX, hw segs %hu\n", "DMA mask 0x%llX, segment boundary 0x%lX, hw segs %hu\n",
(unsigned long long)*ap->host->dev->dma_mask, (unsigned long long)*ap->host->dev->dma_mask,
@ -1868,7 +1871,8 @@ static void nv_swncq_host_init(struct ata_host *host)
writel(~0x0, mmio + NV_INT_STATUS_MCP55); writel(~0x0, mmio + NV_INT_STATUS_MCP55);
} }
static int nv_swncq_slave_config(struct scsi_device *sdev) static int nv_swncq_device_configure(struct scsi_device *sdev,
struct queue_limits *lim)
{ {
struct ata_port *ap = ata_shost_to_port(sdev->host); struct ata_port *ap = ata_shost_to_port(sdev->host);
struct pci_dev *pdev = to_pci_dev(ap->host->dev); struct pci_dev *pdev = to_pci_dev(ap->host->dev);
@ -1878,7 +1882,7 @@ static int nv_swncq_slave_config(struct scsi_device *sdev)
u8 check_maxtor = 0; u8 check_maxtor = 0;
unsigned char model_num[ATA_ID_PROD_LEN + 1]; unsigned char model_num[ATA_ID_PROD_LEN + 1];
rc = ata_scsi_slave_config(sdev); rc = ata_scsi_device_configure(sdev, lim);
if (sdev->id >= ATA_MAX_DEVICES || sdev->channel || sdev->lun) if (sdev->id >= ATA_MAX_DEVICES || sdev->channel || sdev->lun)
/* Not a proper libata device, ignore */ /* Not a proper libata device, ignore */
return rc; return rc;


@@ -381,7 +381,7 @@ static const struct scsi_host_template sil24_sht = {
 	.tag_alloc_policy = BLK_TAG_ALLOC_FIFO,
 	.sdev_groups = ata_ncq_sdev_groups,
 	.change_queue_depth = ata_scsi_change_queue_depth,
-	.slave_configure = ata_scsi_slave_config
+	.device_configure = ata_scsi_device_configure
 };
 
 static struct ata_port_operations sil24_ops = {


@@ -1500,19 +1500,14 @@ static int sbp2_scsi_slave_alloc(struct scsi_device *sdev)
 	sdev->allow_restart = 1;
 
-	/*
-	 * SBP-2 does not require any alignment, but we set it anyway
-	 * for compatibility with earlier versions of this driver.
-	 */
-	blk_queue_update_dma_alignment(sdev->request_queue, 4 - 1);
-
 	if (lu->tgt->workarounds & SBP2_WORKAROUND_INQUIRY_36)
 		sdev->inquiry_len = 36;
 
 	return 0;
 }
 
-static int sbp2_scsi_slave_configure(struct scsi_device *sdev)
+static int sbp2_scsi_device_configure(struct scsi_device *sdev,
+				      struct queue_limits *lim)
 {
 	struct sbp2_logical_unit *lu = sdev->hostdata;
 
@@ -1538,7 +1533,7 @@ static int sbp2_scsi_slave_configure(struct scsi_device *sdev)
 		sdev->start_stop_pwr_cond = 1;
 
 	if (lu->tgt->workarounds & SBP2_WORKAROUND_128K_MAX_TRANS)
-		blk_queue_max_hw_sectors(sdev->request_queue, 128 * 1024 / 512);
+		lim->max_hw_sectors = 128 * 1024 / 512;
 
 	return 0;
 }
@@ -1596,7 +1591,7 @@ static const struct scsi_host_template scsi_driver_template = {
 	.proc_name = "sbp2",
 	.queuecommand = sbp2_scsi_queuecommand,
 	.slave_alloc = sbp2_scsi_slave_alloc,
-	.slave_configure = sbp2_scsi_slave_configure,
+	.device_configure = sbp2_scsi_device_configure,
 	.eh_abort_handler = sbp2_scsi_abort,
 	.this_id = -1,
 	.sg_tablesize = SG_ALL,


@@ -129,6 +129,7 @@ static const struct scsi_host_template mptfc_driver_template = {
 	.sg_tablesize = MPT_SCSI_SG_DEPTH,
 	.max_sectors = 8192,
 	.cmd_per_lun = 7,
+	.dma_alignment = 511,
 	.shost_groups = mptscsih_host_attr_groups,
 };


@@ -2020,6 +2020,7 @@ static const struct scsi_host_template mptsas_driver_template = {
 	.sg_tablesize = MPT_SCSI_SG_DEPTH,
 	.max_sectors = 8192,
 	.cmd_per_lun = 7,
+	.dma_alignment = 511,
 	.shost_groups = mptscsih_host_attr_groups,
 	.no_write_same = 1,
 };


@@ -2438,8 +2438,6 @@ mptscsih_slave_configure(struct scsi_device *sdev)
 		 "tagged %d, simple %d\n",
 		 ioc->name,sdev->tagged_supported, sdev->simple_tags));
 
-	blk_queue_dma_alignment (sdev->request_queue, 512 - 1);
-
 	return 0;
 }


@@ -843,6 +843,7 @@ static const struct scsi_host_template mptspi_driver_template = {
 	.sg_tablesize = MPT_SCSI_SG_DEPTH,
 	.max_sectors = 8192,
 	.cmd_per_lun = 7,
+	.dma_alignment = 511,
 	.shost_groups = mptscsih_host_attr_groups,
 };
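
The three Fusion MPT templates above now declare the 512-byte requirement statically via the host template instead of calling blk_queue_dma_alignment() from slave_configure (see the mptscsih hunk). Purely as an illustration, assuming a hypothetical "foo" driver with an abbreviated template, the field flows into the queue limits and means data buffers must be aligned to dma_alignment + 1 bytes:

#include <linux/module.h>
#include <scsi/scsi_host.h>

static int foo_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *cmd);	/* hypothetical */

static const struct scsi_host_template foo_template = {
	.module		= THIS_MODULE,
	.name		= "foo",
	.queuecommand	= foo_queuecommand,
	.this_id	= -1,
	.dma_alignment	= 511,	/* request 512-byte aligned buffers */
};
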


@@ -1351,7 +1351,7 @@ static int qed_slowpath_start(struct qed_dev *cdev,
 				      (params->drv_rev << 8) |
 				      (params->drv_eng);
 		strscpy(drv_version.name, params->name,
-			MCP_DRV_VER_STR_SIZE - 4);
+			sizeof(drv_version.name));
 		rc = qed_mcp_send_drv_version(hwfn, hwfn->p_main_ptt,
 					      &drv_version);
 		if (rc) {


@@ -4561,9 +4561,9 @@ static struct dasd_ccw_req *dasd_eckd_build_cp_tpm_track(
 	len_to_track_end = 0;
 	/*
 	 * A tidaw can address 4k of memory, but must not cross page boundaries
-	 * We can let the block layer handle this by setting
-	 * blk_queue_segment_boundary to page boundaries and
-	 * blk_max_segment_size to page size when setting up the request queue.
+	 * We can let the block layer handle this by setting seg_boundary_mask
+	 * to page boundaries and max_segment_size to page size when setting up
+	 * the request queue.
 	 * For write requests, a TIDAW must not cross track boundaries, because
 	 * we have to set the CBC flag on the last tidaw for each track.
 	 */
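
The reworded comment refers to the queue_limits fields rather than the old helper names. A hedged sketch of the constraint it describes, not the actual dasd queue setup code (the function name is made up):

#include <linux/blkdev.h>

/* Keep every segment within one page and at most 4k, so a single TIDAW can
 * describe it without crossing a page boundary. */
static void foo_apply_tidaw_limits(struct queue_limits *lim)
{
	lim->seg_boundary_mask = PAGE_SIZE - 1;
	lim->max_segment_size = PAGE_SIZE;
}
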


@@ -2631,7 +2631,6 @@ static void FPT_sres(u32 port, unsigned char p_card,
 	WRW_HARPOON((port + hp_fiforead), (unsigned short)0x00);
 
 	our_target = (unsigned char)(RD_HARPOON(port + hp_select_id) >> 4);
-	currTar_Info = &FPT_sccbMgrTbl[p_card][our_target];
 
 	msgRetryCount = 0;
 	do {


@ -53,7 +53,7 @@ config SCSI_ESP_PIO
config SCSI_NETLINK config SCSI_NETLINK
bool bool
default n default n
depends on NET depends on NET
config SCSI_PROC_FS config SCSI_PROC_FS
@ -327,7 +327,7 @@ config ISCSI_TCP
config ISCSI_BOOT_SYSFS config ISCSI_BOOT_SYSFS
tristate "iSCSI Boot Sysfs Interface" tristate "iSCSI Boot Sysfs Interface"
default n default n
help help
This option enables support for exposing iSCSI boot information This option enables support for exposing iSCSI boot information
via sysfs to userspace. If you wish to export this information, via sysfs to userspace. If you wish to export this information,


@@ -295,7 +295,13 @@ static void __exit amiga_a3000_scsi_remove(struct platform_device *pdev)
 	release_mem_region(res->start, resource_size(res));
 }
 
-static struct platform_driver amiga_a3000_scsi_driver = {
+/*
+ * amiga_a3000_scsi_remove() lives in .exit.text. For drivers registered via
+ * module_platform_driver_probe() this is ok because they cannot get unbound at
+ * runtime. So mark the driver struct with __refdata to prevent modpost
+ * triggering a section mismatch warning.
+ */
+static struct platform_driver amiga_a3000_scsi_driver __refdata = {
 	.remove_new = __exit_p(amiga_a3000_scsi_remove),
 	.driver = {
 		.name = "amiga-a3000-scsi",


@@ -108,7 +108,13 @@ static void __exit amiga_a4000t_scsi_remove(struct platform_device *pdev)
 	release_mem_region(res->start, resource_size(res));
 }
 
-static struct platform_driver amiga_a4000t_scsi_driver = {
+/*
+ * amiga_a4000t_scsi_remove() lives in .exit.text. For drivers registered via
+ * module_platform_driver_probe() this is ok because they cannot get unbound at
+ * runtime. So mark the driver struct with __refdata to prevent modpost
+ * triggering a section mismatch warning.
+ */
+static struct platform_driver amiga_a4000t_scsi_driver __refdata = {
 	.remove_new = __exit_p(amiga_a4000t_scsi_remove),
 	.driver = {
 		.name = "amiga-a4000t-scsi",


@@ -746,6 +746,7 @@ struct Scsi_Host *aha152x_probe_one(struct aha152x_setup *setup)
 	/* need to have host registered before triggering any interrupt */
 	list_add_tail(&HOSTDATA(shpnt)->host_list, &aha152x_host_list);
 
+	shpnt->no_highmem = true;
 	shpnt->io_port = setup->io_port;
 	shpnt->n_io_port = IO_RANGE;
 	shpnt->irq = setup->irq;
@@ -2940,12 +2941,6 @@ static int aha152x_show_info(struct seq_file *m, struct Scsi_Host *shpnt)
 	return 0;
 }
 
-static int aha152x_adjust_queue(struct scsi_device *device)
-{
-	blk_queue_bounce_limit(device->request_queue, BLK_BOUNCE_HIGH);
-	return 0;
-}
-
 static const struct scsi_host_template aha152x_driver_template = {
 	.module = THIS_MODULE,
 	.name = AHA152X_REVID,
@@ -2961,7 +2956,6 @@ static const struct scsi_host_template aha152x_driver_template = {
 	.this_id = 7,
 	.sg_tablesize = SG_ALL,
 	.dma_boundary = PAGE_SIZE - 1,
-	.slave_alloc = aha152x_adjust_queue,
 	.cmd_size = sizeof(struct aha152x_cmd_priv),
 };
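
The aha152x change above replaces the per-device BLK_BOUNCE_HIGH bounce limit with the host-wide no_highmem flag set at probe time. A minimal hedged sketch of that pattern (hypothetical foo_ helper, not the driver's actual probe code):

#include <scsi/scsi_host.h>

static void foo_mark_no_highmem(struct Scsi_Host *shost)
{
	/* Tells the midlayer to bounce highmem pages for this host's queues. */
	shost->no_highmem = true;
}
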


@ -8,79 +8,80 @@ config SCSI_AIC79XX
depends on PCI && HAS_IOPORT && SCSI depends on PCI && HAS_IOPORT && SCSI
select SCSI_SPI_ATTRS select SCSI_SPI_ATTRS
help help
This driver supports all of Adaptec's Ultra 320 PCI-X This driver supports all of Adaptec's Ultra 320 PCI-X
based SCSI controllers. based SCSI controllers.
config AIC79XX_CMDS_PER_DEVICE config AIC79XX_CMDS_PER_DEVICE
int "Maximum number of TCQ commands per device" int "Maximum number of TCQ commands per device"
depends on SCSI_AIC79XX depends on SCSI_AIC79XX
default "32" default "32"
help help
Specify the number of commands you would like to allocate per SCSI Specify the number of commands you would like to allocate per SCSI
device when Tagged Command Queueing (TCQ) is enabled on that device. device when Tagged Command Queueing (TCQ) is enabled on that device.
This is an upper bound value for the number of tagged transactions This is an upper bound value for the number of tagged transactions
to be used for any device. The aic7xxx driver will automatically to be used for any device. The aic7xxx driver will automatically
vary this number based on device behavior. For devices with a vary this number based on device behavior. For devices with a
fixed maximum, the driver will eventually lock to this maximum fixed maximum, the driver will eventually lock to this maximum
and display a console message indicating this value. and display a console message indicating this value.
Due to resource allocation issues in the Linux SCSI mid-layer, using Due to resource allocation issues in the Linux SCSI mid-layer, using
a high number of commands per device may result in memory allocation a high number of commands per device may result in memory allocation
failures when many devices are attached to the system. For this reason, failures when many devices are attached to the system. For this
the default is set to 32. Higher values may result in higher performance reason, the default is set to 32. Higher values may result in higher
on some devices. The upper bound is 253. 0 disables tagged queueing. performance on some devices. The upper bound is 253. 0 disables
tagged queueing.
Per device tag depth can be controlled via the kernel command line Per device tag depth can be controlled via the kernel command line
"tag_info" option. See Documentation/scsi/aic79xx.rst for details. "tag_info" option. See Documentation/scsi/aic79xx.rst for details.
config AIC79XX_RESET_DELAY_MS config AIC79XX_RESET_DELAY_MS
int "Initial bus reset delay in milli-seconds" int "Initial bus reset delay in milli-seconds"
depends on SCSI_AIC79XX depends on SCSI_AIC79XX
default "5000" default "5000"
help help
The number of milliseconds to delay after an initial bus reset. The number of milliseconds to delay after an initial bus reset.
The bus settle delay following all error recovery actions is The bus settle delay following all error recovery actions is
dictated by the SCSI layer and is not affected by this value. dictated by the SCSI layer and is not affected by this value.
Default: 5000 (5 seconds) Default: 5000 (5 seconds)
config AIC79XX_BUILD_FIRMWARE config AIC79XX_BUILD_FIRMWARE
bool "Build Adapter Firmware with Kernel Build" bool "Build Adapter Firmware with Kernel Build"
depends on SCSI_AIC79XX && !PREVENT_FIRMWARE_BUILD depends on SCSI_AIC79XX && !PREVENT_FIRMWARE_BUILD
help help
This option should only be enabled if you are modifying the firmware This option should only be enabled if you are modifying the firmware
source to the aic79xx driver and wish to have the generated firmware source to the aic79xx driver and wish to have the generated firmware
include files updated during a normal kernel build. The assembler include files updated during a normal kernel build. The assembler
for the firmware requires lex and yacc or their equivalents, as well for the firmware requires lex and yacc or their equivalents, as well
as the db v1 library. You may have to install additional packages as the db v1 library. You may have to install additional packages
or modify the assembler Makefile or the files it includes if your or modify the assembler Makefile or the files it includes if your
build environment is different than that of the author. build environment is different than that of the author.
config AIC79XX_DEBUG_ENABLE config AIC79XX_DEBUG_ENABLE
bool "Compile in Debugging Code" bool "Compile in Debugging Code"
depends on SCSI_AIC79XX depends on SCSI_AIC79XX
default y default y
help help
Compile in aic79xx debugging code that can be useful in diagnosing Compile in aic79xx debugging code that can be useful in diagnosing
driver errors. driver errors.
config AIC79XX_DEBUG_MASK config AIC79XX_DEBUG_MASK
int "Debug code enable mask (16383 for all debugging)" int "Debug code enable mask (16383 for all debugging)"
depends on SCSI_AIC79XX depends on SCSI_AIC79XX
default "0" default "0"
help help
Bit mask of debug options that is only valid if the Bit mask of debug options that is only valid if the
CONFIG_AIC79XX_DEBUG_ENABLE option is enabled. The bits in this mask CONFIG_AIC79XX_DEBUG_ENABLE option is enabled. The bits in this mask
are defined in the drivers/scsi/aic7xxx/aic79xx.h - search for the are defined in the drivers/scsi/aic7xxx/aic79xx.h - search for the
variable ahd_debug in that file to find them. variable ahd_debug in that file to find them.
config AIC79XX_REG_PRETTY_PRINT config AIC79XX_REG_PRETTY_PRINT
bool "Decode registers during diagnostics" bool "Decode registers during diagnostics"
depends on SCSI_AIC79XX depends on SCSI_AIC79XX
default y default y
help help
Compile in register value tables for the output of expanded register Compile in register value tables for the output of expanded register
contents in diagnostics. This make it much easier to understand debug contents in diagnostics. This make it much easier to understand debug
output without having to refer to a data book and/or the aic7xxx.reg output without having to refer to a data book and/or the aic7xxx.reg
file. file.


@ -8,84 +8,85 @@ config SCSI_AIC7XXX
depends on (PCI || EISA) && HAS_IOPORT && SCSI depends on (PCI || EISA) && HAS_IOPORT && SCSI
select SCSI_SPI_ATTRS select SCSI_SPI_ATTRS
help help
This driver supports all of Adaptec's Fast through Ultra 160 PCI This driver supports all of Adaptec's Fast through Ultra 160 PCI
based SCSI controllers as well as the aic7770 based EISA and VLB based SCSI controllers as well as the aic7770 based EISA and VLB
SCSI controllers (the 274x and 284x series). For AAA and ARO based SCSI controllers (the 274x and 284x series). For AAA and ARO based
configurations, only SCSI functionality is provided. configurations, only SCSI functionality is provided.
To compile this driver as a module, choose M here: the To compile this driver as a module, choose M here: the
module will be called aic7xxx. module will be called aic7xxx.
config AIC7XXX_CMDS_PER_DEVICE
	int "Maximum number of TCQ commands per device"
	depends on SCSI_AIC7XXX
	default "32"
	help
	  Specify the number of commands you would like to allocate per SCSI
	  device when Tagged Command Queueing (TCQ) is enabled on that device.

	  This is an upper bound value for the number of tagged transactions
	  to be used for any device. The aic7xxx driver will automatically
	  vary this number based on device behavior. For devices with a
	  fixed maximum, the driver will eventually lock to this maximum
	  and display a console message indicating this value.

	  Due to resource allocation issues in the Linux SCSI mid-layer, using
	  a high number of commands per device may result in memory allocation
-	  failures when many devices are attached to the system. For this reason,
-	  the default is set to 32. Higher values may result in higher performance
-	  on some devices. The upper bound is 253. 0 disables tagged queueing.
+	  failures when many devices are attached to the system. For this
+	  reason, the default is set to 32. Higher values may result in higher
+	  performance on some devices. The upper bound is 253. 0 disables tagged
+	  queueing.

	  Per device tag depth can be controlled via the kernel command line
	  "tag_info" option. See Documentation/scsi/aic7xxx.rst for details.
config AIC7XXX_RESET_DELAY_MS config AIC7XXX_RESET_DELAY_MS
int "Initial bus reset delay in milli-seconds" int "Initial bus reset delay in milli-seconds"
depends on SCSI_AIC7XXX depends on SCSI_AIC7XXX
default "5000" default "5000"
help help
The number of milliseconds to delay after an initial bus reset. The number of milliseconds to delay after an initial bus reset.
The bus settle delay following all error recovery actions is The bus settle delay following all error recovery actions is
dictated by the SCSI layer and is not affected by this value. dictated by the SCSI layer and is not affected by this value.
Default: 5000 (5 seconds) Default: 5000 (5 seconds)
config AIC7XXX_BUILD_FIRMWARE config AIC7XXX_BUILD_FIRMWARE
bool "Build Adapter Firmware with Kernel Build" bool "Build Adapter Firmware with Kernel Build"
depends on SCSI_AIC7XXX && !PREVENT_FIRMWARE_BUILD depends on SCSI_AIC7XXX && !PREVENT_FIRMWARE_BUILD
help help
This option should only be enabled if you are modifying the firmware This option should only be enabled if you are modifying the firmware
source to the aic7xxx driver and wish to have the generated firmware source to the aic7xxx driver and wish to have the generated firmware
include files updated during a normal kernel build. The assembler include files updated during a normal kernel build. The assembler
for the firmware requires lex and yacc or their equivalents, as well for the firmware requires lex and yacc or their equivalents, as well
as the db v1 library. You may have to install additional packages as the db v1 library. You may have to install additional packages
or modify the assembler Makefile or the files it includes if your or modify the assembler Makefile or the files it includes if your
build environment is different than that of the author. build environment is different than that of the author.
config AIC7XXX_DEBUG_ENABLE config AIC7XXX_DEBUG_ENABLE
bool "Compile in Debugging Code" bool "Compile in Debugging Code"
depends on SCSI_AIC7XXX depends on SCSI_AIC7XXX
default y default y
help help
Compile in aic7xxx debugging code that can be useful in diagnosing Compile in aic7xxx debugging code that can be useful in diagnosing
driver errors. driver errors.
config AIC7XXX_DEBUG_MASK config AIC7XXX_DEBUG_MASK
int "Debug code enable mask (2047 for all debugging)" int "Debug code enable mask (2047 for all debugging)"
depends on SCSI_AIC7XXX depends on SCSI_AIC7XXX
default "0" default "0"
help help
Bit mask of debug options that is only valid if the Bit mask of debug options that is only valid if the
CONFIG_AIC7XXX_DEBUG_ENABLE option is enabled. The bits in this mask CONFIG_AIC7XXX_DEBUG_ENABLE option is enabled. The bits in this mask
are defined in the drivers/scsi/aic7xxx/aic7xxx.h - search for the are defined in the drivers/scsi/aic7xxx/aic7xxx.h - search for the
variable ahc_debug in that file to find them. variable ahc_debug in that file to find them.
config AIC7XXX_REG_PRETTY_PRINT config AIC7XXX_REG_PRETTY_PRINT
bool "Decode registers during diagnostics" bool "Decode registers during diagnostics"
depends on SCSI_AIC7XXX depends on SCSI_AIC7XXX
default y default y
help help
Compile in register value tables for the output of expanded register Compile in register value tables for the output of expanded register
contents in diagnostics. This make it much easier to understand debug contents in diagnostics. This make it much easier to understand debug
output without having to refer to a data book and/or the aic7xxx.reg output without having to refer to a data book and/or the aic7xxx.reg
file. file.

View File

@ -14,6 +14,7 @@
#include <linux/firmware.h> #include <linux/firmware.h>
#include <linux/slab.h> #include <linux/slab.h>
#include <scsi/sas_ata.h>
#include <scsi/scsi_host.h> #include <scsi/scsi_host.h>
#include "aic94xx.h" #include "aic94xx.h"
@ -24,6 +25,7 @@
/* The format is "version.release.patchlevel" */ /* The format is "version.release.patchlevel" */
#define ASD_DRIVER_VERSION "1.0.3" #define ASD_DRIVER_VERSION "1.0.3"
#define DRV_NAME "aic94xx"
static int use_msi = 0; static int use_msi = 0;
module_param_named(use_msi, use_msi, int, S_IRUGO); module_param_named(use_msi, use_msi, int, S_IRUGO);
@ -34,32 +36,16 @@ MODULE_PARM_DESC(use_msi, "\n"
static struct scsi_transport_template *aic94xx_transport_template; static struct scsi_transport_template *aic94xx_transport_template;
static int asd_scan_finished(struct Scsi_Host *, unsigned long); static int asd_scan_finished(struct Scsi_Host *, unsigned long);
static void asd_scan_start(struct Scsi_Host *); static void asd_scan_start(struct Scsi_Host *);
static const struct attribute_group *asd_sdev_groups[];
static const struct scsi_host_template aic94xx_sht = {
-	.module			= THIS_MODULE,
-	/* .name is initialized */
-	.name			= "aic94xx",
-	.queuecommand		= sas_queuecommand,
-	.dma_need_drain		= ata_scsi_dma_need_drain,
-	.target_alloc		= sas_target_alloc,
-	.slave_configure	= sas_slave_configure,
+	LIBSAS_SHT_BASE
	.scan_finished		= asd_scan_finished,
	.scan_start		= asd_scan_start,
-	.change_queue_depth	= sas_change_queue_depth,
-	.bios_param		= sas_bios_param,
	.can_queue		= 1,
-	.this_id		= -1,
	.sg_tablesize		= SG_ALL,
-	.max_sectors		= SCSI_DEFAULT_MAX_SECTORS,
-	.eh_device_reset_handler = sas_eh_device_reset_handler,
-	.eh_target_reset_handler = sas_eh_target_reset_handler,
-	.slave_alloc		= sas_slave_alloc,
-	.target_destroy		= sas_target_destroy,
-	.ioctl			= sas_ioctl,
-#ifdef CONFIG_COMPAT
-	.compat_ioctl		= sas_ioctl,
-#endif
	.track_queue_depth	= 1,
+	.sdev_groups		= asd_sdev_groups,
};
static int asd_map_memio(struct asd_ha_struct *asd_ha) static int asd_map_memio(struct asd_ha_struct *asd_ha)
@ -951,6 +937,11 @@ static void asd_remove_driver_attrs(struct device_driver *driver)
driver_remove_file(driver, &driver_attr_version); driver_remove_file(driver, &driver_attr_version);
} }
static const struct attribute_group *asd_sdev_groups[] = {
&sas_ata_sdev_attr_group,
NULL
};
static struct sas_domain_function_template aic94xx_transport_functions = { static struct sas_domain_function_template aic94xx_transport_functions = {
.lldd_dev_found = asd_dev_found, .lldd_dev_found = asd_dev_found,
.lldd_dev_gone = asd_dev_gone, .lldd_dev_gone = asd_dev_gone,
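The aic94xx hunks above show the pattern repeated across the libsas drivers in this series: the boilerplate sas_* callbacks now come from the LIBSAS_SHT_BASE macro added to <scsi/libsas.h>, and each LLDD's host template only lists what actually differs, plus an sdev_groups table referencing the exported sas_ata_sdev_attr_group (see the libsas hunk later in this diff). A minimal sketch with a hypothetical "demo" LLDD, not taken from any driver here:

#include <scsi/libsas.h>
#include <scsi/scsi_host.h>

static int demo_scan_finished(struct Scsi_Host *shost, unsigned long time)
{
	return 1;	/* report discovery as complete */
}

static void demo_scan_start(struct Scsi_Host *shost)
{
}

static const struct attribute_group *demo_sdev_groups[] = {
	&sas_ata_sdev_attr_group,	/* NCQ priority attributes for SATA devices */
	NULL
};

static const struct scsi_host_template demo_sht = {
	LIBSAS_SHT_BASE
	.scan_finished		= demo_scan_finished,
	.scan_start		= demo_scan_start,
	.can_queue		= 1,
	.sg_tablesize		= SG_ALL,
	.track_queue_depth	= 1,
	.sdev_groups		= demo_sdev_groups,
};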

View File

@@ -878,7 +878,13 @@ static void __exit atari_scsi_remove(struct platform_device *pdev)
	atari_stram_free(atari_dma_buffer);
}

-static struct platform_driver atari_scsi_driver = {
+/*
+ * atari_scsi_remove() lives in .exit.text. For drivers registered via
+ * module_platform_driver_probe() this is ok because they cannot get unbound at
+ * runtime. So mark the driver struct with __refdata to prevent modpost
+ * triggering a section mismatch warning.
+ */
+static struct platform_driver atari_scsi_driver __refdata = {
	.remove_new	= __exit_p(atari_scsi_remove),
	.driver = {
		.name	= DRV_MODULE_NAME,
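For reference, a stripped-down sketch of the pattern the new comment describes, using a hypothetical "demo" driver: probe() stays in .init.text and remove() in .exit.text, which is only safe because module_platform_driver_probe() never allows the device to be unbound and re-probed at runtime, so __refdata is used to silence modpost rather than moving remove() out of .exit.text.

#include <linux/init.h>
#include <linux/module.h>
#include <linux/platform_device.h>

static int __init demo_probe(struct platform_device *pdev)
{
	return 0;
}

static void __exit demo_remove(struct platform_device *pdev)
{
}

/* __refdata suppresses the section mismatch warning for the .exit.text ref */
static struct platform_driver demo_driver __refdata = {
	.remove_new = __exit_p(demo_remove),
	.driver = {
		.name = "demo",
	},
};
module_platform_driver_probe(demo_driver, demo_probe);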

View File

@@ -250,7 +250,7 @@ bfad_debugfs_write_regrd(struct file *file, const char __user *buf,
	unsigned long flags;
	void *kern_buf;

-	kern_buf = memdup_user(buf, nbytes);
+	kern_buf = memdup_user_nul(buf, nbytes);
	if (IS_ERR(kern_buf))
		return PTR_ERR(kern_buf);

@@ -317,7 +317,7 @@ bfad_debugfs_write_regwr(struct file *file, const char __user *buf,
	unsigned long flags;
	void *kern_buf;

-	kern_buf = memdup_user(buf, nbytes);
+	kern_buf = memdup_user_nul(buf, nbytes);
	if (IS_ERR(kern_buf))
		return PTR_ERR(kern_buf);
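The bfa and qedf "copied buf is NUL terminated" fixes in this pull are all the same bug class: a debugfs write handler copies the user buffer and then parses it as a C string. memdup_user() copies exactly nbytes with no terminator, so the string parser can read past the end of the allocation; memdup_user_nul() appends the '\0'. A hedged sketch of the pattern, with a hypothetical handler rather than the bfa code itself:

#include <linux/err.h>
#include <linux/fs.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/string.h>

static ssize_t demo_debugfs_write(struct file *file, const char __user *buf,
				  size_t nbytes, loff_t *ppos)
{
	unsigned int addr, len;
	char *kern_buf;

	kern_buf = memdup_user_nul(buf, nbytes);	/* NUL-terminated copy */
	if (IS_ERR(kern_buf))
		return PTR_ERR(kern_buf);

	/* safe: sscanf() now stops at the terminator we are guaranteed */
	if (sscanf(kern_buf, "%x:%x", &addr, &len) != 2) {
		kfree(kern_buf);
		return -EINVAL;
	}

	kfree(kern_buf);
	return nbytes;
}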

View File

@@ -128,10 +128,8 @@ retry_ofld:
			BNX2FC_TGT_DBG(tgt, "ctx_alloc_failure, "
				"retry ofld..%d\n", i++);
			msleep_interruptible(1000);
-			if (i > 3) {
-				i = 0;
+			if (i > 3)
				goto ofld_err;
-			}
			goto retry_ofld;
		}
		goto ofld_err;

View File

@@ -1185,9 +1185,6 @@ static struct pci_error_handlers csio_err_handler = {
static struct pci_driver csio_pci_driver = {
	.name		= KBUILD_MODNAME,
-	.driver		= {
-		.owner	= THIS_MODULE,
-	},
	.id_table	= csio_pci_tbl,
	.probe		= csio_probe_one,
	.remove		= csio_remove_one,
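The removed .driver.owner assignment is redundant because pci_register_driver() already records the owning module for the driver core. Paraphrased from include/linux/pci.h (the exact wrapper, not this commit):

/* pci_register_driver() is a macro that supplies THIS_MODULE itself */
#define pci_register_driver(driver)	\
	__pci_register_driver(driver, THIS_MODULE, KBUILD_MODNAME)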

View File

@@ -216,7 +216,7 @@ void cxlflash_term_global_luns(void)
/**
 * cxlflash_manage_lun() - handles LUN management activities
 * @sdev:	SCSI device associated with LUN.
- * @manage:	Manage ioctl data structure.
+ * @arg:	Manage ioctl data structure.
 *
 * This routine is used to notify the driver about a LUN's WWID and associate
 * SCSI devices (sdev) with a global LUN instance. Additionally it serves to
@@ -224,9 +224,9 @@ void cxlflash_term_global_luns(void)
 *
 * Return: 0 on success, -errno on failure
 */
-int cxlflash_manage_lun(struct scsi_device *sdev,
-			struct dk_cxlflash_manage_lun *manage)
+int cxlflash_manage_lun(struct scsi_device *sdev, void *arg)
{
+	struct dk_cxlflash_manage_lun *manage = arg;
	struct cxlflash_cfg *cfg = shost_priv(sdev->host);
	struct device *dev = &cfg->dev->dev;
	struct llun_info *lli = NULL;

View File

@ -3280,13 +3280,13 @@ static char *decode_hioctl(unsigned int cmd)
/** /**
* cxlflash_lun_provision() - host LUN provisioning handler * cxlflash_lun_provision() - host LUN provisioning handler
* @cfg: Internal structure associated with the host. * @cfg: Internal structure associated with the host.
* @lunprov: Kernel copy of userspace ioctl data structure. * @arg: Kernel copy of userspace ioctl data structure.
* *
* Return: 0 on success, -errno on failure * Return: 0 on success, -errno on failure
*/ */
static int cxlflash_lun_provision(struct cxlflash_cfg *cfg, static int cxlflash_lun_provision(struct cxlflash_cfg *cfg, void *arg)
struct ht_cxlflash_lun_provision *lunprov)
{ {
struct ht_cxlflash_lun_provision *lunprov = arg;
struct afu *afu = cfg->afu; struct afu *afu = cfg->afu;
struct device *dev = &cfg->dev->dev; struct device *dev = &cfg->dev->dev;
struct sisl_ioarcb rcb; struct sisl_ioarcb rcb;
@ -3371,16 +3371,16 @@ out:
/** /**
* cxlflash_afu_debug() - host AFU debug handler * cxlflash_afu_debug() - host AFU debug handler
* @cfg: Internal structure associated with the host. * @cfg: Internal structure associated with the host.
* @afu_dbg: Kernel copy of userspace ioctl data structure. * @arg: Kernel copy of userspace ioctl data structure.
* *
* For debug requests requiring a data buffer, always provide an aligned * For debug requests requiring a data buffer, always provide an aligned
* (cache line) buffer to the AFU to appease any alignment requirements. * (cache line) buffer to the AFU to appease any alignment requirements.
* *
* Return: 0 on success, -errno on failure * Return: 0 on success, -errno on failure
*/ */
static int cxlflash_afu_debug(struct cxlflash_cfg *cfg, static int cxlflash_afu_debug(struct cxlflash_cfg *cfg, void *arg)
struct ht_cxlflash_afu_debug *afu_dbg)
{ {
struct ht_cxlflash_afu_debug *afu_dbg = arg;
struct afu *afu = cfg->afu; struct afu *afu = cfg->afu;
struct device *dev = &cfg->dev->dev; struct device *dev = &cfg->dev->dev;
struct sisl_ioarcb rcb; struct sisl_ioarcb rcb;
@ -3494,10 +3494,8 @@ static long cxlflash_chr_ioctl(struct file *file, unsigned int cmd,
size_t size; size_t size;
hioctl ioctl; hioctl ioctl;
} ioctl_tbl[] = { /* NOTE: order matters here */ } ioctl_tbl[] = { /* NOTE: order matters here */
{ sizeof(struct ht_cxlflash_lun_provision), { sizeof(struct ht_cxlflash_lun_provision), cxlflash_lun_provision },
(hioctl)cxlflash_lun_provision }, { sizeof(struct ht_cxlflash_afu_debug), cxlflash_afu_debug },
{ sizeof(struct ht_cxlflash_afu_debug),
(hioctl)cxlflash_afu_debug },
}; };
/* Hold read semaphore so we can drain if needed */ /* Hold read semaphore so we can drain if needed */

View File

@ -729,8 +729,7 @@ out:
return rc; return rc;
} }
int cxlflash_disk_release(struct scsi_device *sdev, int cxlflash_disk_release(struct scsi_device *sdev, void *release)
struct dk_cxlflash_release *release)
{ {
return _cxlflash_disk_release(sdev, NULL, release); return _cxlflash_disk_release(sdev, NULL, release);
} }
@ -955,8 +954,7 @@ out:
return rc; return rc;
} }
static int cxlflash_disk_detach(struct scsi_device *sdev, static int cxlflash_disk_detach(struct scsi_device *sdev, void *detach)
struct dk_cxlflash_detach *detach)
{ {
return _cxlflash_disk_detach(sdev, NULL, detach); return _cxlflash_disk_detach(sdev, NULL, detach);
} }
@ -1305,7 +1303,7 @@ retry:
/** /**
* cxlflash_disk_attach() - attach a LUN to a context * cxlflash_disk_attach() - attach a LUN to a context
* @sdev: SCSI device associated with LUN. * @sdev: SCSI device associated with LUN.
* @attach: Attach ioctl data structure. * @arg: Attach ioctl data structure.
* *
* Creates a context and attaches LUN to it. A LUN can only be attached * Creates a context and attaches LUN to it. A LUN can only be attached
* one time to a context (subsequent attaches for the same context/LUN pair * one time to a context (subsequent attaches for the same context/LUN pair
@ -1314,9 +1312,9 @@ retry:
* *
* Return: 0 on success, -errno on failure * Return: 0 on success, -errno on failure
*/ */
static int cxlflash_disk_attach(struct scsi_device *sdev, static int cxlflash_disk_attach(struct scsi_device *sdev, void *arg)
struct dk_cxlflash_attach *attach)
{ {
struct dk_cxlflash_attach *attach = arg;
struct cxlflash_cfg *cfg = shost_priv(sdev->host); struct cxlflash_cfg *cfg = shost_priv(sdev->host);
struct device *dev = &cfg->dev->dev; struct device *dev = &cfg->dev->dev;
struct afu *afu = cfg->afu; struct afu *afu = cfg->afu;
@ -1621,7 +1619,7 @@ err1:
/** /**
* cxlflash_afu_recover() - initiates AFU recovery * cxlflash_afu_recover() - initiates AFU recovery
* @sdev: SCSI device associated with LUN. * @sdev: SCSI device associated with LUN.
* @recover: Recover ioctl data structure. * @arg: Recover ioctl data structure.
* *
* Only a single recovery is allowed at a time to avoid exhausting CXL * Only a single recovery is allowed at a time to avoid exhausting CXL
* resources (leading to recovery failure) in the event that we're up * resources (leading to recovery failure) in the event that we're up
@ -1648,9 +1646,9 @@ err1:
* *
* Return: 0 on success, -errno on failure * Return: 0 on success, -errno on failure
*/ */
static int cxlflash_afu_recover(struct scsi_device *sdev, static int cxlflash_afu_recover(struct scsi_device *sdev, void *arg)
struct dk_cxlflash_recover_afu *recover)
{ {
struct dk_cxlflash_recover_afu *recover = arg;
struct cxlflash_cfg *cfg = shost_priv(sdev->host); struct cxlflash_cfg *cfg = shost_priv(sdev->host);
struct device *dev = &cfg->dev->dev; struct device *dev = &cfg->dev->dev;
struct llun_info *lli = sdev->hostdata; struct llun_info *lli = sdev->hostdata;
@ -1829,13 +1827,13 @@ out:
/** /**
* cxlflash_disk_verify() - verifies a LUN is the same and handle size changes * cxlflash_disk_verify() - verifies a LUN is the same and handle size changes
* @sdev: SCSI device associated with LUN. * @sdev: SCSI device associated with LUN.
* @verify: Verify ioctl data structure. * @arg: Verify ioctl data structure.
* *
* Return: 0 on success, -errno on failure * Return: 0 on success, -errno on failure
*/ */
static int cxlflash_disk_verify(struct scsi_device *sdev, static int cxlflash_disk_verify(struct scsi_device *sdev, void *arg)
struct dk_cxlflash_verify *verify)
{ {
struct dk_cxlflash_verify *verify = arg;
int rc = 0; int rc = 0;
struct ctx_info *ctxi = NULL; struct ctx_info *ctxi = NULL;
struct cxlflash_cfg *cfg = shost_priv(sdev->host); struct cxlflash_cfg *cfg = shost_priv(sdev->host);
@ -2111,16 +2109,16 @@ int cxlflash_ioctl(struct scsi_device *sdev, unsigned int cmd, void __user *arg)
size_t size; size_t size;
sioctl ioctl; sioctl ioctl;
} ioctl_tbl[] = { /* NOTE: order matters here */ } ioctl_tbl[] = { /* NOTE: order matters here */
-	{sizeof(struct dk_cxlflash_attach), (sioctl)cxlflash_disk_attach},
+	{sizeof(struct dk_cxlflash_attach), cxlflash_disk_attach},
	{sizeof(struct dk_cxlflash_udirect), cxlflash_disk_direct_open},
-	{sizeof(struct dk_cxlflash_release), (sioctl)cxlflash_disk_release},
-	{sizeof(struct dk_cxlflash_detach), (sioctl)cxlflash_disk_detach},
-	{sizeof(struct dk_cxlflash_verify), (sioctl)cxlflash_disk_verify},
-	{sizeof(struct dk_cxlflash_recover_afu), (sioctl)cxlflash_afu_recover},
-	{sizeof(struct dk_cxlflash_manage_lun), (sioctl)cxlflash_manage_lun},
+	{sizeof(struct dk_cxlflash_release), cxlflash_disk_release},
+	{sizeof(struct dk_cxlflash_detach), cxlflash_disk_detach},
+	{sizeof(struct dk_cxlflash_verify), cxlflash_disk_verify},
+	{sizeof(struct dk_cxlflash_recover_afu), cxlflash_afu_recover},
+	{sizeof(struct dk_cxlflash_manage_lun), cxlflash_manage_lun},
	{sizeof(struct dk_cxlflash_uvirtual), cxlflash_disk_virtual_open},
-	{sizeof(struct dk_cxlflash_resize), (sioctl)cxlflash_vlun_resize},
-	{sizeof(struct dk_cxlflash_clone), (sioctl)cxlflash_disk_clone},
+	{sizeof(struct dk_cxlflash_resize), cxlflash_vlun_resize},
+	{sizeof(struct dk_cxlflash_clone), cxlflash_disk_clone},
}; };
/* Hold read semaphore so we can drain if needed */ /* Hold read semaphore so we can drain if needed */
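The point of the void * conversions running through these cxlflash hunks is that every handler now shares the table's function type, so the dispatch tables no longer need function-pointer casts; calling through a cast function pointer of a different type is undefined behaviour and is rejected by kCFI. A hedged sketch of the resulting shape, with hypothetical demo_* names rather than the cxlflash structures:

#include <linux/errno.h>
#include <linux/types.h>
#include <scsi/scsi_device.h>

typedef int (*demo_ioctl_fn)(struct scsi_device *sdev, void *arg);

struct demo_attach_args { u64 flags; };		/* hypothetical payloads */
struct demo_detach_args { u64 ctx_id; };

static int demo_attach(struct scsi_device *sdev, void *arg)
{
	struct demo_attach_args *attach = arg;	/* cast happens inside the handler */

	return attach->flags ? -EINVAL : 0;
}

static int demo_detach(struct scsi_device *sdev, void *arg)
{
	struct demo_detach_args *detach = arg;

	return detach->ctx_id ? 0 : -EINVAL;
}

/* Every entry already has the right type: no function pointer casts needed. */
static const struct {
	size_t size;
	demo_ioctl_fn fn;
} demo_ioctl_tbl[] = {
	{ sizeof(struct demo_attach_args), demo_attach },
	{ sizeof(struct demo_detach_args), demo_detach },
};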

View File

@ -114,18 +114,16 @@ struct cxlflash_global {
struct page *err_page; /* One page of all 0xF for error notification */ struct page *err_page; /* One page of all 0xF for error notification */
}; };
int cxlflash_vlun_resize(struct scsi_device *sdev, int cxlflash_vlun_resize(struct scsi_device *sdev, void *resize);
struct dk_cxlflash_resize *resize);
int _cxlflash_vlun_resize(struct scsi_device *sdev, struct ctx_info *ctxi, int _cxlflash_vlun_resize(struct scsi_device *sdev, struct ctx_info *ctxi,
struct dk_cxlflash_resize *resize); struct dk_cxlflash_resize *resize);
int cxlflash_disk_release(struct scsi_device *sdev, int cxlflash_disk_release(struct scsi_device *sdev,
struct dk_cxlflash_release *release); void *release);
int _cxlflash_disk_release(struct scsi_device *sdev, struct ctx_info *ctxi, int _cxlflash_disk_release(struct scsi_device *sdev, struct ctx_info *ctxi,
struct dk_cxlflash_release *release); struct dk_cxlflash_release *release);
int cxlflash_disk_clone(struct scsi_device *sdev, int cxlflash_disk_clone(struct scsi_device *sdev, void *arg);
struct dk_cxlflash_clone *clone);
int cxlflash_disk_virtual_open(struct scsi_device *sdev, void *arg); int cxlflash_disk_virtual_open(struct scsi_device *sdev, void *arg);
@ -145,8 +143,7 @@ void rhte_checkin(struct ctx_info *ctxi, struct sisl_rht_entry *rhte);
void cxlflash_ba_terminate(struct ba_lun *ba_lun); void cxlflash_ba_terminate(struct ba_lun *ba_lun);
int cxlflash_manage_lun(struct scsi_device *sdev, int cxlflash_manage_lun(struct scsi_device *sdev, void *manage);
struct dk_cxlflash_manage_lun *manage);
int check_state(struct cxlflash_cfg *cfg); int check_state(struct cxlflash_cfg *cfg);

View File

@ -819,8 +819,7 @@ out:
return rc; return rc;
} }
int cxlflash_vlun_resize(struct scsi_device *sdev, int cxlflash_vlun_resize(struct scsi_device *sdev, void *resize)
struct dk_cxlflash_resize *resize)
{ {
return _cxlflash_vlun_resize(sdev, NULL, resize); return _cxlflash_vlun_resize(sdev, NULL, resize);
} }
@ -1178,7 +1177,7 @@ err:
/** /**
* cxlflash_disk_clone() - clone a context by making snapshot of another * cxlflash_disk_clone() - clone a context by making snapshot of another
* @sdev: SCSI device associated with LUN owning virtual LUN. * @sdev: SCSI device associated with LUN owning virtual LUN.
* @clone: Clone ioctl data structure. * @arg: Clone ioctl data structure.
* *
* This routine effectively performs cxlflash_disk_open operation for each * This routine effectively performs cxlflash_disk_open operation for each
* in-use virtual resource in the source context. Note that the destination * in-use virtual resource in the source context. Note that the destination
@ -1187,9 +1186,9 @@ err:
* *
* Return: 0 on success, -errno on failure * Return: 0 on success, -errno on failure
*/ */
int cxlflash_disk_clone(struct scsi_device *sdev, int cxlflash_disk_clone(struct scsi_device *sdev, void *arg)
struct dk_cxlflash_clone *clone)
{ {
struct dk_cxlflash_clone *clone = arg;
struct cxlflash_cfg *cfg = shost_priv(sdev->host); struct cxlflash_cfg *cfg = shost_priv(sdev->host);
struct device *dev = &cfg->dev->dev; struct device *dev = &cfg->dev->dev;
struct llun_info *lli = sdev->hostdata; struct llun_info *lli = sdev->hostdata;

View File

@@ -643,7 +643,8 @@ extern int hisi_sas_probe(struct platform_device *pdev,
			  const struct hisi_sas_hw *ops);
extern void hisi_sas_remove(struct platform_device *pdev);

-extern int hisi_sas_slave_configure(struct scsi_device *sdev);
+int hisi_sas_device_configure(struct scsi_device *sdev,
+			      struct queue_limits *lim);
extern int hisi_sas_slave_alloc(struct scsi_device *sdev);
extern int hisi_sas_scan_finished(struct Scsi_Host *shost, unsigned long time);
extern void hisi_sas_scan_start(struct Scsi_Host *shost);

View File

@@ -868,10 +868,11 @@ err_out:
	return rc;
}

-int hisi_sas_slave_configure(struct scsi_device *sdev)
+int hisi_sas_device_configure(struct scsi_device *sdev,
+			      struct queue_limits *lim)
{
	struct domain_device *dev = sdev_to_domain_dev(sdev);
-	int ret = sas_slave_configure(sdev);
+	int ret = sas_device_configure(sdev, lim);

	if (ret)
		return ret;
@@ -880,7 +881,7 @@ int hisi_sas_slave_configure(struct scsi_device *sdev)
	return 0;
}
-EXPORT_SYMBOL_GPL(hisi_sas_slave_configure);
+EXPORT_SYMBOL_GPL(hisi_sas_device_configure);

void hisi_sas_scan_start(struct Scsi_Host *shost)
{

View File

@ -1735,28 +1735,12 @@ static struct attribute *host_v1_hw_attrs[] = {
ATTRIBUTE_GROUPS(host_v1_hw); ATTRIBUTE_GROUPS(host_v1_hw);
static const struct scsi_host_template sht_v1_hw = { static const struct scsi_host_template sht_v1_hw = {
.name = DRV_NAME, LIBSAS_SHT_BASE_NO_SLAVE_INIT
.proc_name = DRV_NAME, .device_configure = hisi_sas_device_configure,
.module = THIS_MODULE,
.queuecommand = sas_queuecommand,
.dma_need_drain = ata_scsi_dma_need_drain,
.target_alloc = sas_target_alloc,
.slave_configure = hisi_sas_slave_configure,
.scan_finished = hisi_sas_scan_finished, .scan_finished = hisi_sas_scan_finished,
.scan_start = hisi_sas_scan_start, .scan_start = hisi_sas_scan_start,
.change_queue_depth = sas_change_queue_depth,
.bios_param = sas_bios_param,
.this_id = -1,
.sg_tablesize = HISI_SAS_SGE_PAGE_CNT, .sg_tablesize = HISI_SAS_SGE_PAGE_CNT,
.max_sectors = SCSI_DEFAULT_MAX_SECTORS,
.eh_device_reset_handler = sas_eh_device_reset_handler,
.eh_target_reset_handler = sas_eh_target_reset_handler,
.slave_alloc = hisi_sas_slave_alloc, .slave_alloc = hisi_sas_slave_alloc,
.target_destroy = sas_target_destroy,
.ioctl = sas_ioctl,
#ifdef CONFIG_COMPAT
.compat_ioctl = sas_ioctl,
#endif
.shost_groups = host_v1_hw_groups, .shost_groups = host_v1_hw_groups,
.host_reset = hisi_sas_host_reset, .host_reset = hisi_sas_host_reset,
}; };

View File

@ -3544,6 +3544,11 @@ static struct attribute *host_v2_hw_attrs[] = {
ATTRIBUTE_GROUPS(host_v2_hw); ATTRIBUTE_GROUPS(host_v2_hw);
static const struct attribute_group *sdev_groups_v2_hw[] = {
&sas_ata_sdev_attr_group,
NULL
};
static void map_queues_v2_hw(struct Scsi_Host *shost) static void map_queues_v2_hw(struct Scsi_Host *shost)
{ {
struct hisi_hba *hisi_hba = shost_priv(shost); struct hisi_hba *hisi_hba = shost_priv(shost);
@ -3562,29 +3567,14 @@ static void map_queues_v2_hw(struct Scsi_Host *shost)
} }
static const struct scsi_host_template sht_v2_hw = { static const struct scsi_host_template sht_v2_hw = {
.name = DRV_NAME, LIBSAS_SHT_BASE_NO_SLAVE_INIT
.proc_name = DRV_NAME, .device_configure = hisi_sas_device_configure,
.module = THIS_MODULE,
.queuecommand = sas_queuecommand,
.dma_need_drain = ata_scsi_dma_need_drain,
.target_alloc = sas_target_alloc,
.slave_configure = hisi_sas_slave_configure,
.scan_finished = hisi_sas_scan_finished, .scan_finished = hisi_sas_scan_finished,
.scan_start = hisi_sas_scan_start, .scan_start = hisi_sas_scan_start,
.change_queue_depth = sas_change_queue_depth,
.bios_param = sas_bios_param,
.this_id = -1,
.sg_tablesize = HISI_SAS_SGE_PAGE_CNT, .sg_tablesize = HISI_SAS_SGE_PAGE_CNT,
.max_sectors = SCSI_DEFAULT_MAX_SECTORS,
.eh_device_reset_handler = sas_eh_device_reset_handler,
.eh_target_reset_handler = sas_eh_target_reset_handler,
.slave_alloc = hisi_sas_slave_alloc, .slave_alloc = hisi_sas_slave_alloc,
.target_destroy = sas_target_destroy,
.ioctl = sas_ioctl,
#ifdef CONFIG_COMPAT
.compat_ioctl = sas_ioctl,
#endif
.shost_groups = host_v2_hw_groups, .shost_groups = host_v2_hw_groups,
.sdev_groups = sdev_groups_v2_hw,
.host_reset = hisi_sas_host_reset, .host_reset = hisi_sas_host_reset,
.map_queues = map_queues_v2_hw, .map_queues = map_queues_v2_hw,
.host_tagset = 1, .host_tagset = 1,

View File

@ -2902,11 +2902,12 @@ static ssize_t iopoll_q_cnt_v3_hw_show(struct device *dev,
} }
static DEVICE_ATTR_RO(iopoll_q_cnt_v3_hw); static DEVICE_ATTR_RO(iopoll_q_cnt_v3_hw);
static int slave_configure_v3_hw(struct scsi_device *sdev) static int device_configure_v3_hw(struct scsi_device *sdev,
struct queue_limits *lim)
{ {
struct Scsi_Host *shost = dev_to_shost(&sdev->sdev_gendev); struct Scsi_Host *shost = dev_to_shost(&sdev->sdev_gendev);
struct hisi_hba *hisi_hba = shost_priv(shost); struct hisi_hba *hisi_hba = shost_priv(shost);
int ret = hisi_sas_slave_configure(sdev); int ret = hisi_sas_device_configure(sdev, lim);
struct device *dev = hisi_hba->dev; struct device *dev = hisi_hba->dev;
if (ret) if (ret)
@ -2937,6 +2938,11 @@ static struct attribute *host_v3_hw_attrs[] = {
ATTRIBUTE_GROUPS(host_v3_hw); ATTRIBUTE_GROUPS(host_v3_hw);
static const struct attribute_group *sdev_groups_v3_hw[] = {
&sas_ata_sdev_attr_group,
NULL
};
#define HISI_SAS_DEBUGFS_REG(x) {#x, x} #define HISI_SAS_DEBUGFS_REG(x) {#x, x}
struct hisi_sas_debugfs_reg_lu { struct hisi_sas_debugfs_reg_lu {
@ -3323,31 +3329,16 @@ static void hisi_sas_map_queues(struct Scsi_Host *shost)
} }
static const struct scsi_host_template sht_v3_hw = { static const struct scsi_host_template sht_v3_hw = {
.name = DRV_NAME, LIBSAS_SHT_BASE_NO_SLAVE_INIT
.proc_name = DRV_NAME, .device_configure = device_configure_v3_hw,
.module = THIS_MODULE,
.queuecommand = sas_queuecommand,
.dma_need_drain = ata_scsi_dma_need_drain,
.target_alloc = sas_target_alloc,
.slave_configure = slave_configure_v3_hw,
.scan_finished = hisi_sas_scan_finished, .scan_finished = hisi_sas_scan_finished,
.scan_start = hisi_sas_scan_start, .scan_start = hisi_sas_scan_start,
.map_queues = hisi_sas_map_queues, .map_queues = hisi_sas_map_queues,
.change_queue_depth = sas_change_queue_depth,
.bios_param = sas_bios_param,
.this_id = -1,
.sg_tablesize = HISI_SAS_SGE_PAGE_CNT, .sg_tablesize = HISI_SAS_SGE_PAGE_CNT,
.sg_prot_tablesize = HISI_SAS_SGE_PAGE_CNT, .sg_prot_tablesize = HISI_SAS_SGE_PAGE_CNT,
.max_sectors = SCSI_DEFAULT_MAX_SECTORS,
.eh_device_reset_handler = sas_eh_device_reset_handler,
.eh_target_reset_handler = sas_eh_target_reset_handler,
.slave_alloc = hisi_sas_slave_alloc, .slave_alloc = hisi_sas_slave_alloc,
.target_destroy = sas_target_destroy,
.ioctl = sas_ioctl,
#ifdef CONFIG_COMPAT
.compat_ioctl = sas_ioctl,
#endif
.shost_groups = host_v3_hw_groups, .shost_groups = host_v3_hw_groups,
.sdev_groups = sdev_groups_v3_hw,
.tag_alloc_policy = BLK_TAG_ALLOC_RR, .tag_alloc_policy = BLK_TAG_ALLOC_RR,
.host_reset = hisi_sas_host_reset, .host_reset = hisi_sas_host_reset,
.host_tagset = 1, .host_tagset = 1,

View File

@@ -479,6 +479,12 @@ struct Scsi_Host *scsi_host_alloc(const struct scsi_host_template *sht, int privsize)
	else
		shost->max_segment_size = BLK_MAX_SEGMENT_SIZE;

+	/* 32-byte (dword) is a common minimum for HBAs. */
+	if (sht->dma_alignment)
+		shost->dma_alignment = sht->dma_alignment;
+	else
+		shost->dma_alignment = 3;
+
	/*
	 * assume a 4GB boundary, if not set
	 */
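With this hunk a driver can state its DMA alignment once in the host template and scsi_host_alloc() propagates it; 3 (a 4-byte alignment mask) remains the default. A template value of 0 is indistinguishable from "unset", which is why iscsi_tcp, later in this diff, assigns shost->dma_alignment directly after allocation instead. A hypothetical example of the template route:

#include <scsi/scsi_host.h>

/* Hypothetical driver that needs 512-byte aligned data buffers. */
static const struct scsi_host_template demo_sht = {
	.name		= "demo",
	.this_id	= -1,
	.dma_alignment	= 511,	/* alignment mask, i.e. 512-byte alignment */
};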

View File

@@ -5850,7 +5850,7 @@ static int hpsa_scsi_host_alloc(struct ctlr_info *h)
{
	struct Scsi_Host *sh;

-	sh = scsi_host_alloc(&hpsa_driver_template, sizeof(struct ctlr_info));
+	sh = scsi_host_alloc(&hpsa_driver_template, sizeof(struct ctlr_info *));
	if (sh == NULL) {
		dev_err(&h->pdev->dev, "scsi_host_alloc failed\n");
		return -ENOMEM;
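The second argument of scsi_host_alloc() is the size of the per-host private area. hpsa keeps its controller state in a separately allocated ctlr_info and only parks a pointer to it in that area, so sizeof(struct ctlr_info *) is all that is needed; the old sizeof(struct ctlr_info) merely over-allocated. A rough illustration with hypothetical names, not the hpsa code itself:

#include <scsi/scsi_host.h>

struct demo_ctlr {
	int ndevices;			/* stands in for struct ctlr_info */
};

static struct Scsi_Host *demo_host_alloc(struct demo_ctlr *h,
					 const struct scsi_host_template *sht)
{
	struct Scsi_Host *sh;

	/* the private area only needs to hold a pointer back to the controller */
	sh = scsi_host_alloc(sht, sizeof(struct demo_ctlr *));
	if (!sh)
		return NULL;

	*(struct demo_ctlr **)sh->hostdata = h;
	return sh;
}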

View File

@@ -1151,11 +1151,11 @@ static struct attribute *hptiop_host_attrs[] = {
ATTRIBUTE_GROUPS(hptiop_host);

-static int hptiop_slave_config(struct scsi_device *sdev)
+static int hptiop_device_configure(struct scsi_device *sdev,
+				   struct queue_limits *lim)
{
	if (sdev->type == TYPE_TAPE)
-		blk_queue_max_hw_sectors(sdev->request_queue, 8192);
+		lim->max_hw_sectors = 8192;

	return 0;
}
@@ -1168,7 +1168,7 @@ static const struct scsi_host_template driver_template = {
	.emulated		= 0,
	.proc_name		= driver_name,
	.shost_groups		= hptiop_host_groups,
-	.slave_configure	= hptiop_slave_config,
+	.device_configure	= hptiop_device_configure,
	.this_id		= -1,
	.change_queue_depth	= hptiop_adjust_disk_queue_depth,
	.cmd_size		= sizeof(struct hpt_cmd_priv),
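This hptiop hunk is the smallest example of the queue-limits conversion mentioned in the merge description: instead of poking sdev->request_queue with blk_queue_*() helpers from .slave_configure, drivers implement .device_configure and fill in the queue_limits that the midlayer commits when it sets up the queue. A stripped-down sketch with a hypothetical driver that sets several limits at once:

#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>

static int demo_device_configure(struct scsi_device *sdev,
				 struct queue_limits *lim)
{
	lim->max_hw_sectors = 1024;		/* was blk_queue_max_hw_sectors() */
	lim->max_segments = 128;		/* was blk_queue_max_segments() */
	lim->seg_boundary_mask = 0xffffffff;	/* was blk_queue_segment_boundary() */

	return 0;
}

static const struct scsi_host_template demo_sht = {
	.name			= "demo",
	.device_configure	= demo_device_configure,
};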

View File

@ -5541,8 +5541,6 @@ static void ibmvfc_tgt_add_rport(struct ibmvfc_target *tgt)
rport->supported_classes |= FC_COS_CLASS2; rport->supported_classes |= FC_COS_CLASS2;
if (be32_to_cpu(tgt->service_parms.class3_parms[0]) & 0x80000000) if (be32_to_cpu(tgt->service_parms.class3_parms[0]) & 0x80000000)
rport->supported_classes |= FC_COS_CLASS3; rport->supported_classes |= FC_COS_CLASS3;
if (rport->rqst_q)
blk_queue_max_segments(rport->rqst_q, 1);
} else } else
tgt_dbg(tgt, "rport add failed\n"); tgt_dbg(tgt, "rport add failed\n");
spin_unlock_irqrestore(vhost->host->host_lock, flags); spin_unlock_irqrestore(vhost->host->host_lock, flags);
@ -6391,8 +6389,6 @@ static int ibmvfc_probe(struct vio_dev *vdev, const struct vio_device_id *id)
ibmvfc_init_sub_crqs(vhost); ibmvfc_init_sub_crqs(vhost);
if (shost_to_fc_host(shost)->rqst_q)
blk_queue_max_segments(shost_to_fc_host(shost)->rqst_q, 1);
dev_set_drvdata(dev, vhost); dev_set_drvdata(dev, vhost);
spin_lock(&ibmvfc_driver_lock); spin_lock(&ibmvfc_driver_lock);
list_add_tail(&vhost->queue, &ibmvfc_head); list_add_tail(&vhost->queue, &ibmvfc_head);
@ -6547,6 +6543,7 @@ static struct fc_function_template ibmvfc_transport_functions = {
.get_starget_port_id = ibmvfc_get_starget_port_id, .get_starget_port_id = ibmvfc_get_starget_port_id,
.show_starget_port_id = 1, .show_starget_port_id = 1,
.max_bsg_segments = 1,
.bsg_request = ibmvfc_bsg_request, .bsg_request = ibmvfc_bsg_request,
.bsg_timeout = ibmvfc_bsg_timeout, .bsg_timeout = ibmvfc_bsg_timeout,
}; };

View File

@ -1100,16 +1100,6 @@ static int device_check(imm_struct *dev, bool autodetect)
return -ENODEV; return -ENODEV;
} }
/*
* imm cannot deal with highmem, so this causes all IO pages for this host
* to reside in low memory (hence mapped)
*/
static int imm_adjust_queue(struct scsi_device *device)
{
blk_queue_bounce_limit(device->request_queue, BLK_BOUNCE_HIGH);
return 0;
}
static const struct scsi_host_template imm_template = { static const struct scsi_host_template imm_template = {
.module = THIS_MODULE, .module = THIS_MODULE,
.proc_name = "imm", .proc_name = "imm",
@ -1123,7 +1113,6 @@ static const struct scsi_host_template imm_template = {
.this_id = 7, .this_id = 7,
.sg_tablesize = SG_ALL, .sg_tablesize = SG_ALL,
.can_queue = 1, .can_queue = 1,
.slave_alloc = imm_adjust_queue,
.cmd_size = sizeof(struct scsi_pointer), .cmd_size = sizeof(struct scsi_pointer),
}; };
@ -1235,6 +1224,7 @@ static int __imm_attach(struct parport *pb)
host = scsi_host_alloc(&imm_template, sizeof(imm_struct *)); host = scsi_host_alloc(&imm_template, sizeof(imm_struct *));
if (!host) if (!host)
goto out1; goto out1;
host->no_highmem = true;
host->io_port = pb->base; host->io_port = pb->base;
host->n_io_port = ports; host->n_io_port = ports;
host->dma_channel = -1; host->dma_channel = -1;

View File

@ -4769,15 +4769,17 @@ static void ipr_slave_destroy(struct scsi_device *sdev)
} }
/** /**
* ipr_slave_configure - Configure a SCSI device * ipr_device_configure - Configure a SCSI device
* @sdev: scsi device struct * @sdev: scsi device struct
* @lim: queue limits
* *
* This function configures the specified scsi device. * This function configures the specified scsi device.
* *
* Return value: * Return value:
* 0 on success * 0 on success
**/ **/
static int ipr_slave_configure(struct scsi_device *sdev) static int ipr_device_configure(struct scsi_device *sdev,
struct queue_limits *lim)
{ {
struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *) sdev->host->hostdata; struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *) sdev->host->hostdata;
struct ipr_resource_entry *res; struct ipr_resource_entry *res;
@ -4798,7 +4800,7 @@ static int ipr_slave_configure(struct scsi_device *sdev)
sdev->no_report_opcodes = 1; sdev->no_report_opcodes = 1;
blk_queue_rq_timeout(sdev->request_queue, blk_queue_rq_timeout(sdev->request_queue,
IPR_VSET_RW_TIMEOUT); IPR_VSET_RW_TIMEOUT);
blk_queue_max_hw_sectors(sdev->request_queue, IPR_VSET_MAX_SECTORS); lim->max_hw_sectors = IPR_VSET_MAX_SECTORS;
} }
spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags); spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
@ -6397,7 +6399,7 @@ static const struct scsi_host_template driver_template = {
.eh_device_reset_handler = ipr_eh_dev_reset, .eh_device_reset_handler = ipr_eh_dev_reset,
.eh_host_reset_handler = ipr_eh_host_reset, .eh_host_reset_handler = ipr_eh_host_reset,
.slave_alloc = ipr_slave_alloc, .slave_alloc = ipr_slave_alloc,
.slave_configure = ipr_slave_configure, .device_configure = ipr_device_configure,
.slave_destroy = ipr_slave_destroy, .slave_destroy = ipr_slave_destroy,
.scan_finished = ipr_scan_finished, .scan_finished = ipr_scan_finished,
.target_destroy = ipr_target_destroy, .target_destroy = ipr_target_destroy,

View File

@ -149,33 +149,20 @@ static struct attribute *isci_host_attrs[] = {
ATTRIBUTE_GROUPS(isci_host); ATTRIBUTE_GROUPS(isci_host);
static const struct scsi_host_template isci_sht = { static const struct attribute_group *isci_sdev_groups[] = {
&sas_ata_sdev_attr_group,
NULL
};
.module = THIS_MODULE, static const struct scsi_host_template isci_sht = {
.name = DRV_NAME, LIBSAS_SHT_BASE
.proc_name = DRV_NAME,
.queuecommand = sas_queuecommand,
.dma_need_drain = ata_scsi_dma_need_drain,
.target_alloc = sas_target_alloc,
.slave_configure = sas_slave_configure,
.scan_finished = isci_host_scan_finished, .scan_finished = isci_host_scan_finished,
.scan_start = isci_host_start, .scan_start = isci_host_start,
.change_queue_depth = sas_change_queue_depth,
.bios_param = sas_bios_param,
.can_queue = ISCI_CAN_QUEUE_VAL, .can_queue = ISCI_CAN_QUEUE_VAL,
.this_id = -1,
.sg_tablesize = SG_ALL, .sg_tablesize = SG_ALL,
.max_sectors = SCSI_DEFAULT_MAX_SECTORS, .eh_abort_handler = sas_eh_abort_handler,
.eh_abort_handler = sas_eh_abort_handler,
.eh_device_reset_handler = sas_eh_device_reset_handler,
.eh_target_reset_handler = sas_eh_target_reset_handler,
.slave_alloc = sas_slave_alloc,
.target_destroy = sas_target_destroy,
.ioctl = sas_ioctl,
#ifdef CONFIG_COMPAT
.compat_ioctl = sas_ioctl,
#endif
.shost_groups = isci_host_groups, .shost_groups = isci_host_groups,
.sdev_groups = isci_sdev_groups,
.track_queue_depth = 1, .track_queue_depth = 1,
}; };

View File

@@ -943,6 +943,7 @@ iscsi_sw_tcp_session_create(struct iscsi_endpoint *ep, uint16_t cmds_max,
	shost->max_id = 0;
	shost->max_channel = 0;
	shost->max_cmd_len = SCSI_MAX_VARLEN_CDB_SIZE;
+	shost->dma_alignment = 0;

	rc = iscsi_host_get_max_scsi_cmds(shost, cmds_max);
	if (rc < 0)
@@ -1065,7 +1066,6 @@ static int iscsi_sw_tcp_slave_configure(struct scsi_device *sdev)
	if (conn->datadgst_en)
		blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES,
				   sdev->request_queue);
-	blk_queue_dma_alignment(sdev->request_queue, 0);
	return 0;
}

View File

@ -964,3 +964,87 @@ int sas_execute_ata_cmd(struct domain_device *device, u8 *fis, int force_phy_id)
force_phy_id, &tmf_task); force_phy_id, &tmf_task);
} }
EXPORT_SYMBOL_GPL(sas_execute_ata_cmd); EXPORT_SYMBOL_GPL(sas_execute_ata_cmd);
static ssize_t sas_ncq_prio_supported_show(struct device *device,
struct device_attribute *attr,
char *buf)
{
struct scsi_device *sdev = to_scsi_device(device);
struct domain_device *ddev = sdev_to_domain_dev(sdev);
bool supported;
int rc;
rc = ata_ncq_prio_supported(ddev->sata_dev.ap, sdev, &supported);
if (rc)
return rc;
return sysfs_emit(buf, "%d\n", supported);
}
static struct device_attribute dev_attr_sas_ncq_prio_supported =
__ATTR(ncq_prio_supported, S_IRUGO, sas_ncq_prio_supported_show, NULL);
static ssize_t sas_ncq_prio_enable_show(struct device *device,
struct device_attribute *attr,
char *buf)
{
struct scsi_device *sdev = to_scsi_device(device);
struct domain_device *ddev = sdev_to_domain_dev(sdev);
bool enabled;
int rc;
rc = ata_ncq_prio_enabled(ddev->sata_dev.ap, sdev, &enabled);
if (rc)
return rc;
return sysfs_emit(buf, "%d\n", enabled);
}
static ssize_t sas_ncq_prio_enable_store(struct device *device,
struct device_attribute *attr,
const char *buf, size_t len)
{
struct scsi_device *sdev = to_scsi_device(device);
struct domain_device *ddev = sdev_to_domain_dev(sdev);
bool enable;
int rc;
rc = kstrtobool(buf, &enable);
if (rc)
return rc;
rc = ata_ncq_prio_enable(ddev->sata_dev.ap, sdev, enable);
if (rc)
return rc;
return len;
}
static struct device_attribute dev_attr_sas_ncq_prio_enable =
__ATTR(ncq_prio_enable, S_IRUGO | S_IWUSR,
sas_ncq_prio_enable_show, sas_ncq_prio_enable_store);
static struct attribute *sas_ata_sdev_attrs[] = {
&dev_attr_sas_ncq_prio_supported.attr,
&dev_attr_sas_ncq_prio_enable.attr,
NULL
};
static umode_t sas_ata_attr_is_visible(struct kobject *kobj,
struct attribute *attr, int i)
{
struct device *dev = kobj_to_dev(kobj);
struct scsi_device *sdev = to_scsi_device(dev);
struct domain_device *ddev = sdev_to_domain_dev(sdev);
if (!dev_is_sata(ddev))
return 0;
return attr->mode;
}
const struct attribute_group sas_ata_sdev_attr_group = {
.attrs = sas_ata_sdev_attrs,
.is_visible = sas_ata_attr_is_visible,
};
EXPORT_SYMBOL_GPL(sas_ata_sdev_attr_group);
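These attributes give SATA disks behind libsas HBAs the same ncq_prio_supported / ncq_prio_enable knobs that libata already exposes for AHCI-attached disks; the is_visible callback hides them on non-SATA targets. A small userspace example, assuming the sysfs path follows the existing libata convention (adjust the disk name for the system in question):

#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/sys/block/sda/device/ncq_prio_enable", "w");

	if (!f) {
		perror("ncq_prio_enable");
		return 1;
	}
	fputs("1\n", f);	/* the store handler parses this with kstrtobool() */
	return fclose(f) ? 1 : 0;
}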

View File

@ -26,6 +26,28 @@ static int sas_configure_phy(struct domain_device *dev, int phy_id,
u8 *sas_addr, int include); u8 *sas_addr, int include);
static int sas_disable_routing(struct domain_device *dev, u8 *sas_addr); static int sas_disable_routing(struct domain_device *dev, u8 *sas_addr);
static void sas_port_add_ex_phy(struct sas_port *port, struct ex_phy *ex_phy)
{
sas_port_add_phy(port, ex_phy->phy);
ex_phy->port = port;
ex_phy->phy_state = PHY_DEVICE_DISCOVERED;
}
static void sas_ex_add_parent_port(struct domain_device *dev, int phy_id)
{
struct expander_device *ex = &dev->ex_dev;
struct ex_phy *ex_phy = &ex->ex_phy[phy_id];
if (!ex->parent_port) {
ex->parent_port = sas_port_alloc(&dev->rphy->dev, phy_id);
/* FIXME: error handling */
BUG_ON(!ex->parent_port);
BUG_ON(sas_port_add(ex->parent_port));
sas_port_mark_backlink(ex->parent_port);
}
sas_port_add_ex_phy(ex->parent_port, ex_phy);
}
/* ---------- SMP task management ---------- */ /* ---------- SMP task management ---------- */
/* Give it some long enough timeout. In seconds. */ /* Give it some long enough timeout. In seconds. */
@ -239,8 +261,7 @@ static void sas_set_ex_phy(struct domain_device *dev, int phy_id,
/* help some expanders that fail to zero sas_address in the 'no /* help some expanders that fail to zero sas_address in the 'no
* device' case * device' case
*/ */
if (phy->attached_dev_type == SAS_PHY_UNUSED || if (phy->attached_dev_type == SAS_PHY_UNUSED)
phy->linkrate < SAS_LINK_RATE_1_5_GBPS)
memset(phy->attached_sas_addr, 0, SAS_ADDR_SIZE); memset(phy->attached_sas_addr, 0, SAS_ADDR_SIZE);
else else
memcpy(phy->attached_sas_addr, dr->attached_sas_addr, SAS_ADDR_SIZE); memcpy(phy->attached_sas_addr, dr->attached_sas_addr, SAS_ADDR_SIZE);
@ -857,9 +878,7 @@ static bool sas_ex_join_wide_port(struct domain_device *parent, int phy_id)
if (!memcmp(phy->attached_sas_addr, ephy->attached_sas_addr, if (!memcmp(phy->attached_sas_addr, ephy->attached_sas_addr,
SAS_ADDR_SIZE) && ephy->port) { SAS_ADDR_SIZE) && ephy->port) {
sas_port_add_phy(ephy->port, phy->phy); sas_port_add_ex_phy(ephy->port, phy);
phy->port = ephy->port;
phy->phy_state = PHY_DEVICE_DISCOVERED;
return true; return true;
} }
} }
@ -963,11 +982,11 @@ static int sas_ex_discover_dev(struct domain_device *dev, int phy_id)
/* Parent and domain coherency */ /* Parent and domain coherency */
if (!dev->parent && sas_phy_match_port_addr(dev->port, ex_phy)) { if (!dev->parent && sas_phy_match_port_addr(dev->port, ex_phy)) {
sas_add_parent_port(dev, phy_id); sas_ex_add_parent_port(dev, phy_id);
return 0; return 0;
} }
if (dev->parent && sas_phy_match_dev_addr(dev->parent, ex_phy)) { if (dev->parent && sas_phy_match_dev_addr(dev->parent, ex_phy)) {
sas_add_parent_port(dev, phy_id); sas_ex_add_parent_port(dev, phy_id);
if (ex_phy->routing_attr == TABLE_ROUTING) if (ex_phy->routing_attr == TABLE_ROUTING)
sas_configure_phy(dev, phy_id, dev->port->sas_addr, 1); sas_configure_phy(dev, phy_id, dev->port->sas_addr, 1);
return 0; return 0;
@ -1849,9 +1868,12 @@ static void sas_unregister_devs_sas_addr(struct domain_device *parent,
if (phy->port) { if (phy->port) {
sas_port_delete_phy(phy->port, phy->phy); sas_port_delete_phy(phy->port, phy->phy);
sas_device_set_phy(found, phy->port); sas_device_set_phy(found, phy->port);
if (phy->port->num_phys == 0) if (phy->port->num_phys == 0) {
list_add_tail(&phy->port->del_list, list_add_tail(&phy->port->del_list,
&parent->port->sas_port_del_list); &parent->port->sas_port_del_list);
if (ex_dev->parent_port == phy->port)
ex_dev->parent_port = NULL;
}
phy->port = NULL; phy->port = NULL;
} }
} }

View File

@ -189,21 +189,6 @@ static inline void sas_phy_set_target(struct asd_sas_phy *p, struct domain_devic
} }
} }
static inline void sas_add_parent_port(struct domain_device *dev, int phy_id)
{
struct expander_device *ex = &dev->ex_dev;
struct ex_phy *ex_phy = &ex->ex_phy[phy_id];
if (!ex->parent_port) {
ex->parent_port = sas_port_alloc(&dev->rphy->dev, phy_id);
/* FIXME: error handling */
BUG_ON(!ex->parent_port);
BUG_ON(sas_port_add(ex->parent_port));
sas_port_mark_backlink(ex->parent_port);
}
sas_port_add_phy(ex->parent_port, ex_phy->phy);
}
static inline struct domain_device *sas_alloc_device(void) static inline struct domain_device *sas_alloc_device(void)
{ {
struct domain_device *dev = kzalloc(sizeof(*dev), GFP_KERNEL); struct domain_device *dev = kzalloc(sizeof(*dev), GFP_KERNEL);

View File

@ -804,14 +804,15 @@ EXPORT_SYMBOL_GPL(sas_target_alloc);
#define SAS_DEF_QD 256 #define SAS_DEF_QD 256
int sas_slave_configure(struct scsi_device *scsi_dev) int sas_device_configure(struct scsi_device *scsi_dev,
struct queue_limits *lim)
{ {
struct domain_device *dev = sdev_to_domain_dev(scsi_dev); struct domain_device *dev = sdev_to_domain_dev(scsi_dev);
BUG_ON(dev->rphy->identify.device_type != SAS_END_DEVICE); BUG_ON(dev->rphy->identify.device_type != SAS_END_DEVICE);
if (dev_is_sata(dev)) { if (dev_is_sata(dev)) {
ata_sas_slave_configure(scsi_dev, dev->sata_dev.ap); ata_sas_device_configure(scsi_dev, lim, dev->sata_dev.ap);
return 0; return 0;
} }
@ -829,7 +830,7 @@ int sas_slave_configure(struct scsi_device *scsi_dev)
return 0; return 0;
} }
EXPORT_SYMBOL_GPL(sas_slave_configure); EXPORT_SYMBOL_GPL(sas_device_configure);
int sas_change_queue_depth(struct scsi_device *sdev, int depth) int sas_change_queue_depth(struct scsi_device *sdev, int depth)
{ {

View File

@ -393,6 +393,37 @@ enum hba_state {
LPFC_HBA_ERROR = -1 LPFC_HBA_ERROR = -1
}; };
enum lpfc_hba_flag { /* hba generic flags */
HBA_ERATT_HANDLED = 0, /* This flag is set when eratt handled */
DEFER_ERATT = 1, /* Deferred error attn in progress */
HBA_FCOE_MODE = 2, /* HBA function in FCoE Mode */
HBA_SP_QUEUE_EVT = 3, /* Slow-path qevt posted to worker thread*/
HBA_POST_RECEIVE_BUFFER = 4, /* Rcv buffers need to be posted */
HBA_PERSISTENT_TOPO = 5, /* Persistent topology support in hba */
ELS_XRI_ABORT_EVENT = 6, /* ELS_XRI abort event was queued */
ASYNC_EVENT = 7,
LINK_DISABLED = 8, /* Link disabled by user */
FCF_TS_INPROG = 9, /* FCF table scan in progress */
FCF_RR_INPROG = 10, /* FCF roundrobin flogi in progress */
HBA_FIP_SUPPORT = 11, /* FIP support in HBA */
HBA_DEVLOSS_TMO = 13, /* HBA in devloss timeout */
HBA_RRQ_ACTIVE = 14, /* process the rrq active list */
HBA_IOQ_FLUSH = 15, /* I/O queues being flushed */
HBA_RECOVERABLE_UE = 17, /* FW supports recoverable UE */
HBA_FORCED_LINK_SPEED = 18, /*
* Firmware supports Forced Link
* Speed capability
*/
HBA_FLOGI_ISSUED = 20, /* FLOGI was issued */
HBA_DEFER_FLOGI = 23, /* Defer FLOGI till read_sparm cmpl */
HBA_SETUP = 24, /* HBA setup completed */
HBA_NEEDS_CFG_PORT = 25, /* SLI3: CONFIG_PORT mbox needed */
HBA_HBEAT_INP = 26, /* mbox HBEAT is in progress */
HBA_HBEAT_TMO = 27, /* HBEAT initiated after timeout */
HBA_FLOGI_OUTSTANDING = 28, /* FLOGI is outstanding */
HBA_RHBA_CMPL = 29, /* RHBA FDMI cmd is successful */
};
struct lpfc_trunk_link_state { struct lpfc_trunk_link_state {
enum hba_state state; enum hba_state state;
uint8_t fault; uint8_t fault;
@ -1007,35 +1038,7 @@ struct lpfc_hba {
#define LS_CT_VEN_RPA 0x20 /* Vendor RPA sent to switch */ #define LS_CT_VEN_RPA 0x20 /* Vendor RPA sent to switch */
#define LS_EXTERNAL_LOOPBACK 0x40 /* External loopback plug inserted */ #define LS_EXTERNAL_LOOPBACK 0x40 /* External loopback plug inserted */
-	uint32_t hba_flag;	/* hba generic flags */
+	unsigned long hba_flag;	/* hba generic flags */
#define HBA_ERATT_HANDLED 0x1 /* This flag is set when eratt handled */
#define DEFER_ERATT 0x2 /* Deferred error attention in progress */
#define HBA_FCOE_MODE 0x4 /* HBA function in FCoE Mode */
#define HBA_SP_QUEUE_EVT 0x8 /* Slow-path qevt posted to worker thread*/
#define HBA_POST_RECEIVE_BUFFER 0x10 /* Rcv buffers need to be posted */
#define HBA_PERSISTENT_TOPO 0x20 /* Persistent topology support in hba */
#define ELS_XRI_ABORT_EVENT 0x40 /* ELS_XRI abort event was queued */
#define ASYNC_EVENT 0x80
#define LINK_DISABLED 0x100 /* Link disabled by user */
#define FCF_TS_INPROG 0x200 /* FCF table scan in progress */
#define FCF_RR_INPROG 0x400 /* FCF roundrobin flogi in progress */
#define HBA_FIP_SUPPORT 0x800 /* FIP support in HBA */
#define HBA_DEVLOSS_TMO 0x2000 /* HBA in devloss timeout */
#define HBA_RRQ_ACTIVE 0x4000 /* process the rrq active list */
#define HBA_IOQ_FLUSH 0x8000 /* FCP/NVME I/O queues being flushed */
#define HBA_RECOVERABLE_UE 0x20000 /* Firmware supports recoverable UE */
#define HBA_FORCED_LINK_SPEED 0x40000 /*
* Firmware supports Forced Link Speed
* capability
*/
#define HBA_FLOGI_ISSUED 0x100000 /* FLOGI was issued */
#define HBA_DEFER_FLOGI 0x800000 /* Defer FLOGI till read_sparm cmpl */
#define HBA_SETUP 0x1000000 /* Signifies HBA setup is completed */
#define HBA_NEEDS_CFG_PORT 0x2000000 /* SLI3 - needs a CONFIG_PORT mbox */
#define HBA_HBEAT_INP 0x4000000 /* mbox HBEAT is in progress */
#define HBA_HBEAT_TMO 0x8000000 /* HBEAT initiated after timeout */
#define HBA_FLOGI_OUTSTANDING 0x10000000 /* FLOGI is outstanding */
#define HBA_RHBA_CMPL 0x20000000 /* RHBA FDMI command is successful */
struct completion *fw_dump_cmpl; /* cmpl event tracker for fw_dump */ struct completion *fw_dump_cmpl; /* cmpl event tracker for fw_dump */
uint32_t fcp_ring_in_use; /* When polling test if intr-hndlr active*/ uint32_t fcp_ring_in_use; /* When polling test if intr-hndlr active*/
@ -1284,6 +1287,7 @@ struct lpfc_hba {
uint32_t total_scsi_bufs; uint32_t total_scsi_bufs;
struct list_head lpfc_iocb_list; struct list_head lpfc_iocb_list;
uint32_t total_iocbq_bufs; uint32_t total_iocbq_bufs;
spinlock_t rrq_list_lock; /* lock for active_rrq_list */
struct list_head active_rrq_list; struct list_head active_rrq_list;
spinlock_t hbalock; spinlock_t hbalock;
struct work_struct unblock_request_work; /* SCSI layer unblock IOs */ struct work_struct unblock_request_work; /* SCSI layer unblock IOs */
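The change from a u32 of #define masks to an enum of bit numbers on an unsigned long lets lpfc update hba_flag with set_bit()/clear_bit()/test_bit(), which are atomic read-modify-write operations, so flag updates from different contexts no longer need external locking the way open-coded |= and &= on a u32 do. Roughly, with a hypothetical demo structure rather than lpfc_hba:

#include <linux/bitops.h>
#include <linux/types.h>

enum demo_flag {				/* bit numbers, not masks */
	DEMO_LINK_UP	= 0,
	DEMO_RESETTING	= 1,
};

struct demo_hba {
	unsigned long flags;
};

static void demo_link_event(struct demo_hba *hba, bool up)
{
	if (up)
		set_bit(DEMO_LINK_UP, &hba->flags);	/* atomic, no lock needed */
	else
		clear_bit(DEMO_LINK_UP, &hba->flags);

	if (test_bit(DEMO_RESETTING, &hba->flags))
		clear_bit(DEMO_RESETTING, &hba->flags);
}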

View File

@ -322,7 +322,7 @@ lpfc_enable_fip_show(struct device *dev, struct device_attribute *attr,
struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata; struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
struct lpfc_hba *phba = vport->phba; struct lpfc_hba *phba = vport->phba;
if (phba->hba_flag & HBA_FIP_SUPPORT) if (test_bit(HBA_FIP_SUPPORT, &phba->hba_flag))
return scnprintf(buf, PAGE_SIZE, "1\n"); return scnprintf(buf, PAGE_SIZE, "1\n");
else else
return scnprintf(buf, PAGE_SIZE, "0\n"); return scnprintf(buf, PAGE_SIZE, "0\n");
@ -1049,7 +1049,7 @@ lpfc_link_state_show(struct device *dev, struct device_attribute *attr,
case LPFC_INIT_MBX_CMDS: case LPFC_INIT_MBX_CMDS:
case LPFC_LINK_DOWN: case LPFC_LINK_DOWN:
case LPFC_HBA_ERROR: case LPFC_HBA_ERROR:
if (phba->hba_flag & LINK_DISABLED) if (test_bit(LINK_DISABLED, &phba->hba_flag))
len += scnprintf(buf + len, PAGE_SIZE-len, len += scnprintf(buf + len, PAGE_SIZE-len,
"Link Down - User disabled\n"); "Link Down - User disabled\n");
else else
@ -1292,7 +1292,7 @@ lpfc_issue_lip(struct Scsi_Host *shost)
* it doesn't make any sense to allow issue_lip * it doesn't make any sense to allow issue_lip
*/ */
if (test_bit(FC_OFFLINE_MODE, &vport->fc_flag) || if (test_bit(FC_OFFLINE_MODE, &vport->fc_flag) ||
(phba->hba_flag & LINK_DISABLED) || test_bit(LINK_DISABLED, &phba->hba_flag) ||
(phba->sli.sli_flag & LPFC_BLOCK_MGMT_IO)) (phba->sli.sli_flag & LPFC_BLOCK_MGMT_IO))
return -EPERM; return -EPERM;
@ -3635,7 +3635,8 @@ lpfc_pt_show(struct device *dev, struct device_attribute *attr, char *buf)
struct lpfc_hba *phba = ((struct lpfc_vport *)shost->hostdata)->phba; struct lpfc_hba *phba = ((struct lpfc_vport *)shost->hostdata)->phba;
return scnprintf(buf, PAGE_SIZE, "%d\n", return scnprintf(buf, PAGE_SIZE, "%d\n",
(phba->hba_flag & HBA_PERSISTENT_TOPO) ? 1 : 0); test_bit(HBA_PERSISTENT_TOPO,
&phba->hba_flag) ? 1 : 0);
} }
static DEVICE_ATTR(pt, 0444, static DEVICE_ATTR(pt, 0444,
lpfc_pt_show, NULL); lpfc_pt_show, NULL);
@ -4205,8 +4206,8 @@ lpfc_topology_store(struct device *dev, struct device_attribute *attr,
&phba->sli4_hba.sli_intf); &phba->sli4_hba.sli_intf);
if_type = bf_get(lpfc_sli_intf_if_type, if_type = bf_get(lpfc_sli_intf_if_type,
&phba->sli4_hba.sli_intf); &phba->sli4_hba.sli_intf);
if ((phba->hba_flag & HBA_PERSISTENT_TOPO || if ((test_bit(HBA_PERSISTENT_TOPO, &phba->hba_flag) ||
(!phba->sli4_hba.pc_sli4_params.pls && (!phba->sli4_hba.pc_sli4_params.pls &&
(sli_family == LPFC_SLI_INTF_FAMILY_G6 || (sli_family == LPFC_SLI_INTF_FAMILY_G6 ||
if_type == LPFC_SLI_INTF_IF_TYPE_6))) && if_type == LPFC_SLI_INTF_IF_TYPE_6))) &&
val == 4) { val == 4) {
@ -4309,7 +4310,7 @@ lpfc_link_speed_store(struct device *dev, struct device_attribute *attr,
if_type = bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf); if_type = bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf);
if (if_type >= LPFC_SLI_INTF_IF_TYPE_2 && if (if_type >= LPFC_SLI_INTF_IF_TYPE_2 &&
phba->hba_flag & HBA_FORCED_LINK_SPEED) test_bit(HBA_FORCED_LINK_SPEED, &phba->hba_flag))
return -EPERM; return -EPERM;
if (!strncmp(buf, "nolip ", strlen("nolip "))) { if (!strncmp(buf, "nolip ", strlen("nolip "))) {
@ -6497,7 +6498,8 @@ lpfc_get_host_speed(struct Scsi_Host *shost)
struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata; struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata;
struct lpfc_hba *phba = vport->phba; struct lpfc_hba *phba = vport->phba;
if ((lpfc_is_link_up(phba)) && (!(phba->hba_flag & HBA_FCOE_MODE))) { if ((lpfc_is_link_up(phba)) &&
!test_bit(HBA_FCOE_MODE, &phba->hba_flag)) {
switch(phba->fc_linkspeed) { switch(phba->fc_linkspeed) {
case LPFC_LINK_SPEED_1GHZ: case LPFC_LINK_SPEED_1GHZ:
fc_host_speed(shost) = FC_PORTSPEED_1GBIT; fc_host_speed(shost) = FC_PORTSPEED_1GBIT;
@ -6533,7 +6535,8 @@ lpfc_get_host_speed(struct Scsi_Host *shost)
fc_host_speed(shost) = FC_PORTSPEED_UNKNOWN; fc_host_speed(shost) = FC_PORTSPEED_UNKNOWN;
break; break;
} }
} else if (lpfc_is_link_up(phba) && (phba->hba_flag & HBA_FCOE_MODE)) { } else if (lpfc_is_link_up(phba) &&
test_bit(HBA_FCOE_MODE, &phba->hba_flag)) {
switch (phba->fc_linkspeed) { switch (phba->fc_linkspeed) {
case LPFC_ASYNC_LINK_SPEED_1GBPS: case LPFC_ASYNC_LINK_SPEED_1GBPS:
fc_host_speed(shost) = FC_PORTSPEED_1GBIT; fc_host_speed(shost) = FC_PORTSPEED_1GBIT;
@ -6718,7 +6721,7 @@ lpfc_get_stats(struct Scsi_Host *shost)
hs->invalid_crc_count -= lso->invalid_crc_count; hs->invalid_crc_count -= lso->invalid_crc_count;
hs->error_frames -= lso->error_frames; hs->error_frames -= lso->error_frames;
if (phba->hba_flag & HBA_FCOE_MODE) { if (test_bit(HBA_FCOE_MODE, &phba->hba_flag)) {
hs->lip_count = -1; hs->lip_count = -1;
hs->nos_count = (phba->link_events >> 1); hs->nos_count = (phba->link_events >> 1);
hs->nos_count -= lso->link_events; hs->nos_count -= lso->link_events;
@ -6816,7 +6819,7 @@ lpfc_reset_stats(struct Scsi_Host *shost)
lso->invalid_tx_word_count = pmb->un.varRdLnk.invalidXmitWord; lso->invalid_tx_word_count = pmb->un.varRdLnk.invalidXmitWord;
lso->invalid_crc_count = pmb->un.varRdLnk.crcCnt; lso->invalid_crc_count = pmb->un.varRdLnk.crcCnt;
lso->error_frames = pmb->un.varRdLnk.crcCnt; lso->error_frames = pmb->un.varRdLnk.crcCnt;
if (phba->hba_flag & HBA_FCOE_MODE) if (test_bit(HBA_FCOE_MODE, &phba->hba_flag))
lso->link_events = (phba->link_events >> 1); lso->link_events = (phba->link_events >> 1);
else else
lso->link_events = (phba->fc_eventTag >> 1); lso->link_events = (phba->fc_eventTag >> 1);
@ -7161,11 +7164,11 @@ lpfc_get_hba_function_mode(struct lpfc_hba *phba)
case PCI_DEVICE_ID_ZEPHYR_DCSP: case PCI_DEVICE_ID_ZEPHYR_DCSP:
case PCI_DEVICE_ID_TIGERSHARK: case PCI_DEVICE_ID_TIGERSHARK:
case PCI_DEVICE_ID_TOMCAT: case PCI_DEVICE_ID_TOMCAT:
phba->hba_flag |= HBA_FCOE_MODE; set_bit(HBA_FCOE_MODE, &phba->hba_flag);
break; break;
default: default:
/* for others, clear the flag */ /* for others, clear the flag */
phba->hba_flag &= ~HBA_FCOE_MODE; clear_bit(HBA_FCOE_MODE, &phba->hba_flag);
} }
} }
@ -7236,7 +7239,7 @@ lpfc_get_cfgparam(struct lpfc_hba *phba)
lpfc_get_hba_function_mode(phba); lpfc_get_hba_function_mode(phba);
/* BlockGuard allowed for FC only. */ /* BlockGuard allowed for FC only. */
if (phba->cfg_enable_bg && phba->hba_flag & HBA_FCOE_MODE) { if (phba->cfg_enable_bg && test_bit(HBA_FCOE_MODE, &phba->hba_flag)) {
lpfc_printf_log(phba, KERN_INFO, LOG_INIT, lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
"0581 BlockGuard feature not supported\n"); "0581 BlockGuard feature not supported\n");
/* If set, clear the BlockGuard support param */ /* If set, clear the BlockGuard support param */
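The hunks above, like most of the lpfc changes in this series, stop treating phba->hba_flag as a mask word poked at under a lock and instead drive it with the kernel's atomic bit helpers on an unsigned long; hba_flag is correspondingly printed with %lx rather than %x in the logging hunks. Below is a minimal sketch of the before/after pattern, not lpfc source: the structure, flag name, and bit number are placeholders, and only set_bit()/clear_bit()/test_bit()/test_and_clear_bit() from <linux/bitops.h> are assumed.

#include <linux/bitops.h>

#define EXAMPLE_FLAG_BIT 0	/* hypothetical bit number; lpfc defines its own flags */

struct example_hba {
	unsigned long hba_flag;	/* was a plain mask word updated under a spinlock */
};

static void example_flag_usage(struct example_hba *hba)
{
	/* was: hba->hba_flag |= EXAMPLE_FLAG;  (inside spin_lock_irq/spin_unlock_irq) */
	set_bit(EXAMPLE_FLAG_BIT, &hba->hba_flag);

	/* was: if (hba->hba_flag & EXAMPLE_FLAG) */
	if (test_bit(EXAMPLE_FLAG_BIT, &hba->hba_flag)) {
		/* was: hba->hba_flag &= ~EXAMPLE_FLAG; */
		clear_bit(EXAMPLE_FLAG_BIT, &hba->hba_flag);
	}

	/* one atomic test-and-clear replaces a lock/test/clear sequence */
	if (test_and_clear_bit(EXAMPLE_FLAG_BIT, &hba->hba_flag)) {
		/* flag was set; act on the event */
	}
}

The same pattern covers test_and_set_bit() where the old code tested a flag and then set it under the same lock, as in the devloss-timeout hunk further down.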


@ -5002,7 +5002,8 @@ lpfc_forced_link_speed(struct bsg_job *job)
goto job_error; goto job_error;
} }
forced_reply->supported = (phba->hba_flag & HBA_FORCED_LINK_SPEED) forced_reply->supported = test_bit(HBA_FORCED_LINK_SPEED,
&phba->hba_flag)
? LPFC_FORCED_LINK_SPEED_SUPPORTED ? LPFC_FORCED_LINK_SPEED_SUPPORTED
: LPFC_FORCED_LINK_SPEED_NOT_SUPPORTED; : LPFC_FORCED_LINK_SPEED_NOT_SUPPORTED;
job_error: job_error:


@ -291,9 +291,9 @@ lpfc_ct_handle_mibreq(struct lpfc_hba *phba, struct lpfc_iocbq *ctiocbq)
did = bf_get(els_rsp64_sid, &ctiocbq->wqe.xmit_els_rsp); did = bf_get(els_rsp64_sid, &ctiocbq->wqe.xmit_els_rsp);
if (ulp_status) { if (ulp_status) {
lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, lpfc_vlog_msg(vport, KERN_WARNING, LOG_ELS,
"6438 Unsol CT: status:x%x/x%x did : x%x\n", "6438 Unsol CT: status:x%x/x%x did : x%x\n",
ulp_status, ulp_word4, did); ulp_status, ulp_word4, did);
return; return;
} }
@ -303,17 +303,17 @@ lpfc_ct_handle_mibreq(struct lpfc_hba *phba, struct lpfc_iocbq *ctiocbq)
ndlp = lpfc_findnode_did(vport, did); ndlp = lpfc_findnode_did(vport, did);
if (!ndlp) { if (!ndlp) {
lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, lpfc_vlog_msg(vport, KERN_WARNING, LOG_ELS,
"6439 Unsol CT: NDLP Not Found for DID : x%x", "6439 Unsol CT: NDLP Not Found for DID : x%x",
did); did);
return; return;
} }
ct_req = (struct lpfc_sli_ct_request *)ctiocbq->cmd_dmabuf->virt; ct_req = (struct lpfc_sli_ct_request *)ctiocbq->cmd_dmabuf->virt;
mi_cmd = be16_to_cpu(ct_req->CommandResponse.bits.CmdRsp); mi_cmd = be16_to_cpu(ct_req->CommandResponse.bits.CmdRsp);
lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, lpfc_vlog_msg(vport, KERN_WARNING, LOG_ELS,
"6442 : MI Cmd : x%x Not Supported\n", mi_cmd); "6442 MI Cmd : x%x Not Supported\n", mi_cmd);
lpfc_ct_reject_event(ndlp, ct_req, lpfc_ct_reject_event(ndlp, ct_req,
bf_get(wqe_ctxt_tag, bf_get(wqe_ctxt_tag,
&ctiocbq->wqe.xmit_els_rsp.wqe_com), &ctiocbq->wqe.xmit_els_rsp.wqe_com),
@ -2173,7 +2173,7 @@ lpfc_fdmi_rprt_defer(struct lpfc_hba *phba, uint32_t mask)
struct lpfc_nodelist *ndlp; struct lpfc_nodelist *ndlp;
int i; int i;
phba->hba_flag |= HBA_RHBA_CMPL; set_bit(HBA_RHBA_CMPL, &phba->hba_flag);
vports = lpfc_create_vport_work_array(phba); vports = lpfc_create_vport_work_array(phba);
if (vports) { if (vports) {
for (i = 0; i <= phba->max_vports && vports[i] != NULL; i++) { for (i = 0; i <= phba->max_vports && vports[i] != NULL; i++) {
@ -2368,7 +2368,7 @@ lpfc_cmpl_ct_disc_fdmi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
* for the physical port completes successfully. * for the physical port completes successfully.
* We may have to defer the RPRT accordingly. * We may have to defer the RPRT accordingly.
*/ */
if (phba->hba_flag & HBA_RHBA_CMPL) { if (test_bit(HBA_RHBA_CMPL, &phba->hba_flag)) {
lpfc_fdmi_cmd(vport, ndlp, SLI_MGMT_RPRT, 0); lpfc_fdmi_cmd(vport, ndlp, SLI_MGMT_RPRT, 0);
} else { } else {
lpfc_printf_vlog(vport, KERN_INFO, lpfc_printf_vlog(vport, KERN_INFO,
@ -2785,7 +2785,7 @@ lpfc_fdmi_port_attr_support_speed(struct lpfc_vport *vport, void *attr)
u32 tcfg; u32 tcfg;
u8 i, cnt; u8 i, cnt;
if (!(phba->hba_flag & HBA_FCOE_MODE)) { if (!test_bit(HBA_FCOE_MODE, &phba->hba_flag)) {
cnt = 0; cnt = 0;
if (phba->sli_rev == LPFC_SLI_REV4) { if (phba->sli_rev == LPFC_SLI_REV4) {
tcfg = phba->sli4_hba.conf_trunk; tcfg = phba->sli4_hba.conf_trunk;
@ -2859,7 +2859,7 @@ lpfc_fdmi_port_attr_speed(struct lpfc_vport *vport, void *attr)
struct lpfc_hba *phba = vport->phba; struct lpfc_hba *phba = vport->phba;
u32 speeds = 0; u32 speeds = 0;
if (!(phba->hba_flag & HBA_FCOE_MODE)) { if (!test_bit(HBA_FCOE_MODE, &phba->hba_flag)) {
switch (phba->fc_linkspeed) { switch (phba->fc_linkspeed) {
case LPFC_LINK_SPEED_1GHZ: case LPFC_LINK_SPEED_1GHZ:
speeds = HBA_PORTSPEED_1GFC; speeds = HBA_PORTSPEED_1GFC;


@ -189,11 +189,11 @@ lpfc_prep_els_iocb(struct lpfc_vport *vport, u8 expect_rsp,
* If this command is for fabric controller and HBA running * If this command is for fabric controller and HBA running
* in FIP mode send FLOGI, FDISC and LOGO as FIP frames. * in FIP mode send FLOGI, FDISC and LOGO as FIP frames.
*/ */
if ((did == Fabric_DID) && if (did == Fabric_DID &&
(phba->hba_flag & HBA_FIP_SUPPORT) && test_bit(HBA_FIP_SUPPORT, &phba->hba_flag) &&
((elscmd == ELS_CMD_FLOGI) || (elscmd == ELS_CMD_FLOGI ||
(elscmd == ELS_CMD_FDISC) || elscmd == ELS_CMD_FDISC ||
(elscmd == ELS_CMD_LOGO))) elscmd == ELS_CMD_LOGO))
switch (elscmd) { switch (elscmd) {
case ELS_CMD_FLOGI: case ELS_CMD_FLOGI:
elsiocb->cmd_flag |= elsiocb->cmd_flag |=
@ -965,7 +965,7 @@ lpfc_cmpl_els_flogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
* In case of FIP mode, perform roundrobin FCF failover * In case of FIP mode, perform roundrobin FCF failover
* due to new FCF discovery * due to new FCF discovery
*/ */
if ((phba->hba_flag & HBA_FIP_SUPPORT) && if (test_bit(HBA_FIP_SUPPORT, &phba->hba_flag) &&
(phba->fcf.fcf_flag & FCF_DISCOVERY)) { (phba->fcf.fcf_flag & FCF_DISCOVERY)) {
if (phba->link_state < LPFC_LINK_UP) if (phba->link_state < LPFC_LINK_UP)
goto stop_rr_fcf_flogi; goto stop_rr_fcf_flogi;
@ -999,7 +999,7 @@ stop_rr_fcf_flogi:
IOERR_LOOP_OPEN_FAILURE))) IOERR_LOOP_OPEN_FAILURE)))
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT, lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
"2858 FLOGI failure Status:x%x/x%x TMO" "2858 FLOGI failure Status:x%x/x%x TMO"
":x%x Data x%x x%x\n", ":x%x Data x%lx x%x\n",
ulp_status, ulp_word4, tmo, ulp_status, ulp_word4, tmo,
phba->hba_flag, phba->fcf.fcf_flag); phba->hba_flag, phba->fcf.fcf_flag);
@ -1119,7 +1119,7 @@ stop_rr_fcf_flogi:
if (sp->cmn.fPort) if (sp->cmn.fPort)
rc = lpfc_cmpl_els_flogi_fabric(vport, ndlp, sp, rc = lpfc_cmpl_els_flogi_fabric(vport, ndlp, sp,
ulp_word4); ulp_word4);
else if (!(phba->hba_flag & HBA_FCOE_MODE)) else if (!test_bit(HBA_FCOE_MODE, &phba->hba_flag))
rc = lpfc_cmpl_els_flogi_nport(vport, ndlp, sp); rc = lpfc_cmpl_els_flogi_nport(vport, ndlp, sp);
else { else {
lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT, lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
@ -1149,14 +1149,15 @@ stop_rr_fcf_flogi:
lpfc_nlp_put(ndlp); lpfc_nlp_put(ndlp);
spin_lock_irq(&phba->hbalock); spin_lock_irq(&phba->hbalock);
phba->fcf.fcf_flag &= ~FCF_DISCOVERY; phba->fcf.fcf_flag &= ~FCF_DISCOVERY;
phba->hba_flag &= ~(FCF_RR_INPROG | HBA_DEVLOSS_TMO);
spin_unlock_irq(&phba->hbalock); spin_unlock_irq(&phba->hbalock);
clear_bit(FCF_RR_INPROG, &phba->hba_flag);
clear_bit(HBA_DEVLOSS_TMO, &phba->hba_flag);
phba->fcf.fcf_redisc_attempted = 0; /* reset */ phba->fcf.fcf_redisc_attempted = 0; /* reset */
goto out; goto out;
} }
if (!rc) { if (!rc) {
/* Mark the FCF discovery process done */ /* Mark the FCF discovery process done */
if (phba->hba_flag & HBA_FIP_SUPPORT) if (test_bit(HBA_FIP_SUPPORT, &phba->hba_flag))
lpfc_printf_vlog(vport, KERN_INFO, LOG_FIP | lpfc_printf_vlog(vport, KERN_INFO, LOG_FIP |
LOG_ELS, LOG_ELS,
"2769 FLOGI to FCF (x%x) " "2769 FLOGI to FCF (x%x) "
@ -1164,8 +1165,9 @@ stop_rr_fcf_flogi:
phba->fcf.current_rec.fcf_indx); phba->fcf.current_rec.fcf_indx);
spin_lock_irq(&phba->hbalock); spin_lock_irq(&phba->hbalock);
phba->fcf.fcf_flag &= ~FCF_DISCOVERY; phba->fcf.fcf_flag &= ~FCF_DISCOVERY;
phba->hba_flag &= ~(FCF_RR_INPROG | HBA_DEVLOSS_TMO);
spin_unlock_irq(&phba->hbalock); spin_unlock_irq(&phba->hbalock);
clear_bit(FCF_RR_INPROG, &phba->hba_flag);
clear_bit(HBA_DEVLOSS_TMO, &phba->hba_flag);
phba->fcf.fcf_redisc_attempted = 0; /* reset */ phba->fcf.fcf_redisc_attempted = 0; /* reset */
goto out; goto out;
} }
@ -1202,7 +1204,7 @@ flogifail:
} }
out: out:
if (!flogi_in_retry) if (!flogi_in_retry)
phba->hba_flag &= ~HBA_FLOGI_OUTSTANDING; clear_bit(HBA_FLOGI_OUTSTANDING, &phba->hba_flag);
lpfc_els_free_iocb(phba, cmdiocb); lpfc_els_free_iocb(phba, cmdiocb);
lpfc_nlp_put(ndlp); lpfc_nlp_put(ndlp);
@ -1372,11 +1374,13 @@ lpfc_issue_els_flogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
} }
/* Avoid race with FLOGI completion and hba_flags. */ /* Avoid race with FLOGI completion and hba_flags. */
phba->hba_flag |= (HBA_FLOGI_ISSUED | HBA_FLOGI_OUTSTANDING); set_bit(HBA_FLOGI_ISSUED, &phba->hba_flag);
set_bit(HBA_FLOGI_OUTSTANDING, &phba->hba_flag);
rc = lpfc_issue_fabric_iocb(phba, elsiocb); rc = lpfc_issue_fabric_iocb(phba, elsiocb);
if (rc == IOCB_ERROR) { if (rc == IOCB_ERROR) {
phba->hba_flag &= ~(HBA_FLOGI_ISSUED | HBA_FLOGI_OUTSTANDING); clear_bit(HBA_FLOGI_ISSUED, &phba->hba_flag);
clear_bit(HBA_FLOGI_OUTSTANDING, &phba->hba_flag);
lpfc_els_free_iocb(phba, elsiocb); lpfc_els_free_iocb(phba, elsiocb);
lpfc_nlp_put(ndlp); lpfc_nlp_put(ndlp);
return 1; return 1;
@ -1413,7 +1417,7 @@ lpfc_issue_els_flogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
"3354 Xmit deferred FLOGI ACC: rx_id: x%x," "3354 Xmit deferred FLOGI ACC: rx_id: x%x,"
" ox_id: x%x, hba_flag x%x\n", " ox_id: x%x, hba_flag x%lx\n",
phba->defer_flogi_acc_rx_id, phba->defer_flogi_acc_rx_id,
phba->defer_flogi_acc_ox_id, phba->hba_flag); phba->defer_flogi_acc_ox_id, phba->hba_flag);
@ -7415,7 +7419,8 @@ lpfc_els_rcv_rdp(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
goto error; goto error;
} }
if (phba->sli_rev < LPFC_SLI_REV4 || (phba->hba_flag & HBA_FCOE_MODE)) { if (phba->sli_rev < LPFC_SLI_REV4 ||
test_bit(HBA_FCOE_MODE, &phba->hba_flag)) {
rjt_err = LSRJT_UNABLE_TPC; rjt_err = LSRJT_UNABLE_TPC;
rjt_expl = LSEXP_REQ_UNSUPPORTED; rjt_expl = LSEXP_REQ_UNSUPPORTED;
goto error; goto error;
@ -7738,7 +7743,7 @@ lpfc_els_rcv_lcb(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
} }
if (phba->sli_rev < LPFC_SLI_REV4 || if (phba->sli_rev < LPFC_SLI_REV4 ||
phba->hba_flag & HBA_FCOE_MODE || test_bit(HBA_FCOE_MODE, &phba->hba_flag) ||
(bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf) < (bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf) <
LPFC_SLI_INTF_IF_TYPE_2)) { LPFC_SLI_INTF_IF_TYPE_2)) {
rjt_err = LSRJT_CMD_UNSUPPORTED; rjt_err = LSRJT_CMD_UNSUPPORTED;
@ -8443,7 +8448,7 @@ lpfc_els_rcv_flogi(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
memcpy(&phba->fc_fabparam, sp, sizeof(struct serv_parm)); memcpy(&phba->fc_fabparam, sp, sizeof(struct serv_parm));
/* Defer ACC response until AFTER we issue a FLOGI */ /* Defer ACC response until AFTER we issue a FLOGI */
if (!(phba->hba_flag & HBA_FLOGI_ISSUED)) { if (!test_bit(HBA_FLOGI_ISSUED, &phba->hba_flag)) {
phba->defer_flogi_acc_rx_id = bf_get(wqe_ctxt_tag, phba->defer_flogi_acc_rx_id = bf_get(wqe_ctxt_tag,
&wqe->xmit_els_rsp.wqe_com); &wqe->xmit_els_rsp.wqe_com);
phba->defer_flogi_acc_ox_id = bf_get(wqe_rcvoxid, phba->defer_flogi_acc_ox_id = bf_get(wqe_rcvoxid,
@ -8453,7 +8458,7 @@ lpfc_els_rcv_flogi(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb,
lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
"3344 Deferring FLOGI ACC: rx_id: x%x," "3344 Deferring FLOGI ACC: rx_id: x%x,"
" ox_id: x%x, hba_flag x%x\n", " ox_id: x%x, hba_flag x%lx\n",
phba->defer_flogi_acc_rx_id, phba->defer_flogi_acc_rx_id,
phba->defer_flogi_acc_ox_id, phba->hba_flag); phba->defer_flogi_acc_ox_id, phba->hba_flag);


@ -487,7 +487,8 @@ lpfc_dev_loss_tmo_handler(struct lpfc_nodelist *ndlp)
recovering = true; recovering = true;
} else { } else {
/* Physical port path. */ /* Physical port path. */
if (phba->hba_flag & HBA_FLOGI_OUTSTANDING) if (test_bit(HBA_FLOGI_OUTSTANDING,
&phba->hba_flag))
recovering = true; recovering = true;
} }
break; break;
@ -652,14 +653,15 @@ lpfc_sli4_post_dev_loss_tmo_handler(struct lpfc_hba *phba, int fcf_inuse,
if (!fcf_inuse) if (!fcf_inuse)
return; return;
if ((phba->hba_flag & HBA_FIP_SUPPORT) && !lpfc_fcf_inuse(phba)) { if (test_bit(HBA_FIP_SUPPORT, &phba->hba_flag) &&
!lpfc_fcf_inuse(phba)) {
spin_lock_irq(&phba->hbalock); spin_lock_irq(&phba->hbalock);
if (phba->fcf.fcf_flag & FCF_DISCOVERY) { if (phba->fcf.fcf_flag & FCF_DISCOVERY) {
if (phba->hba_flag & HBA_DEVLOSS_TMO) { if (test_and_set_bit(HBA_DEVLOSS_TMO,
&phba->hba_flag)) {
spin_unlock_irq(&phba->hbalock); spin_unlock_irq(&phba->hbalock);
return; return;
} }
phba->hba_flag |= HBA_DEVLOSS_TMO;
lpfc_printf_log(phba, KERN_INFO, LOG_FIP, lpfc_printf_log(phba, KERN_INFO, LOG_FIP,
"2847 Last remote node (x%x) using " "2847 Last remote node (x%x) using "
"FCF devloss tmo\n", nlp_did); "FCF devloss tmo\n", nlp_did);
@ -671,8 +673,9 @@ lpfc_sli4_post_dev_loss_tmo_handler(struct lpfc_hba *phba, int fcf_inuse,
"in progress\n"); "in progress\n");
return; return;
} }
if (!(phba->hba_flag & (FCF_TS_INPROG | FCF_RR_INPROG))) { spin_unlock_irq(&phba->hbalock);
spin_unlock_irq(&phba->hbalock); if (!test_bit(FCF_TS_INPROG, &phba->hba_flag) &&
!test_bit(FCF_RR_INPROG, &phba->hba_flag)) {
lpfc_printf_log(phba, KERN_INFO, LOG_FIP, lpfc_printf_log(phba, KERN_INFO, LOG_FIP,
"2869 Devloss tmo to idle FIP engine, " "2869 Devloss tmo to idle FIP engine, "
"unreg in-use FCF and rescan.\n"); "unreg in-use FCF and rescan.\n");
@ -680,11 +683,10 @@ lpfc_sli4_post_dev_loss_tmo_handler(struct lpfc_hba *phba, int fcf_inuse,
lpfc_unregister_fcf_rescan(phba); lpfc_unregister_fcf_rescan(phba);
return; return;
} }
spin_unlock_irq(&phba->hbalock); if (test_bit(FCF_TS_INPROG, &phba->hba_flag))
if (phba->hba_flag & FCF_TS_INPROG)
lpfc_printf_log(phba, KERN_INFO, LOG_FIP, lpfc_printf_log(phba, KERN_INFO, LOG_FIP,
"2870 FCF table scan in progress\n"); "2870 FCF table scan in progress\n");
if (phba->hba_flag & FCF_RR_INPROG) if (test_bit(FCF_RR_INPROG, &phba->hba_flag))
lpfc_printf_log(phba, KERN_INFO, LOG_FIP, lpfc_printf_log(phba, KERN_INFO, LOG_FIP,
"2871 FLOGI roundrobin FCF failover " "2871 FLOGI roundrobin FCF failover "
"in progress\n"); "in progress\n");
@ -978,18 +980,15 @@ lpfc_work_done(struct lpfc_hba *phba)
/* Process SLI4 events */ /* Process SLI4 events */
if (phba->pci_dev_grp == LPFC_PCI_DEV_OC) { if (phba->pci_dev_grp == LPFC_PCI_DEV_OC) {
if (phba->hba_flag & HBA_RRQ_ACTIVE) if (test_bit(HBA_RRQ_ACTIVE, &phba->hba_flag))
lpfc_handle_rrq_active(phba); lpfc_handle_rrq_active(phba);
if (phba->hba_flag & ELS_XRI_ABORT_EVENT) if (test_bit(ELS_XRI_ABORT_EVENT, &phba->hba_flag))
lpfc_sli4_els_xri_abort_event_proc(phba); lpfc_sli4_els_xri_abort_event_proc(phba);
if (phba->hba_flag & ASYNC_EVENT) if (test_bit(ASYNC_EVENT, &phba->hba_flag))
lpfc_sli4_async_event_proc(phba); lpfc_sli4_async_event_proc(phba);
if (phba->hba_flag & HBA_POST_RECEIVE_BUFFER) { if (test_and_clear_bit(HBA_POST_RECEIVE_BUFFER,
spin_lock_irq(&phba->hbalock); &phba->hba_flag))
phba->hba_flag &= ~HBA_POST_RECEIVE_BUFFER;
spin_unlock_irq(&phba->hbalock);
lpfc_sli_hbqbuf_add_hbqs(phba, LPFC_ELS_HBQ); lpfc_sli_hbqbuf_add_hbqs(phba, LPFC_ELS_HBQ);
}
if (phba->fcf.fcf_flag & FCF_REDISC_EVT) if (phba->fcf.fcf_flag & FCF_REDISC_EVT)
lpfc_sli4_fcf_redisc_event_proc(phba); lpfc_sli4_fcf_redisc_event_proc(phba);
} }
@ -1035,11 +1034,11 @@ lpfc_work_done(struct lpfc_hba *phba)
status >>= (4*LPFC_ELS_RING); status >>= (4*LPFC_ELS_RING);
if (pring && (status & HA_RXMASK || if (pring && (status & HA_RXMASK ||
pring->flag & LPFC_DEFERRED_RING_EVENT || pring->flag & LPFC_DEFERRED_RING_EVENT ||
phba->hba_flag & HBA_SP_QUEUE_EVT)) { test_bit(HBA_SP_QUEUE_EVT, &phba->hba_flag))) {
if (pring->flag & LPFC_STOP_IOCB_EVENT) { if (pring->flag & LPFC_STOP_IOCB_EVENT) {
pring->flag |= LPFC_DEFERRED_RING_EVENT; pring->flag |= LPFC_DEFERRED_RING_EVENT;
/* Preserve legacy behavior. */ /* Preserve legacy behavior. */
if (!(phba->hba_flag & HBA_SP_QUEUE_EVT)) if (!test_bit(HBA_SP_QUEUE_EVT, &phba->hba_flag))
set_bit(LPFC_DATA_READY, &phba->data_flags); set_bit(LPFC_DATA_READY, &phba->data_flags);
} else { } else {
/* Driver could have abort request completed in queue /* Driver could have abort request completed in queue
@ -1420,7 +1419,8 @@ lpfc_linkup(struct lpfc_hba *phba)
spin_unlock_irq(shost->host_lock); spin_unlock_irq(shost->host_lock);
/* reinitialize initial HBA flag */ /* reinitialize initial HBA flag */
phba->hba_flag &= ~(HBA_FLOGI_ISSUED | HBA_RHBA_CMPL); clear_bit(HBA_FLOGI_ISSUED, &phba->hba_flag);
clear_bit(HBA_RHBA_CMPL, &phba->hba_flag);
return 0; return 0;
} }
@ -1505,7 +1505,7 @@ lpfc_mbx_cmpl_local_config_link(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
/* don't perform discovery for SLI4 loopback diagnostic test */ /* don't perform discovery for SLI4 loopback diagnostic test */
if ((phba->sli_rev == LPFC_SLI_REV4) && if ((phba->sli_rev == LPFC_SLI_REV4) &&
!(phba->hba_flag & HBA_FCOE_MODE) && !test_bit(HBA_FCOE_MODE, &phba->hba_flag) &&
(phba->link_flag & LS_LOOPBACK_MODE)) (phba->link_flag & LS_LOOPBACK_MODE))
return; return;
@ -1548,7 +1548,7 @@ lpfc_mbx_cmpl_local_config_link(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
goto sparam_out; goto sparam_out;
} }
phba->hba_flag |= HBA_DEFER_FLOGI; set_bit(HBA_DEFER_FLOGI, &phba->hba_flag);
} else { } else {
lpfc_initial_flogi(vport); lpfc_initial_flogi(vport);
} }
@ -1617,27 +1617,23 @@ lpfc_mbx_cmpl_reg_fcfi(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
spin_unlock_irq(&phba->hbalock); spin_unlock_irq(&phba->hbalock);
/* If there is a pending FCoE event, restart FCF table scan. */ /* If there is a pending FCoE event, restart FCF table scan. */
if ((!(phba->hba_flag & FCF_RR_INPROG)) && if (!test_bit(FCF_RR_INPROG, &phba->hba_flag) &&
lpfc_check_pending_fcoe_event(phba, LPFC_UNREG_FCF)) lpfc_check_pending_fcoe_event(phba, LPFC_UNREG_FCF))
goto fail_out; goto fail_out;
/* Mark successful completion of FCF table scan */ /* Mark successful completion of FCF table scan */
spin_lock_irq(&phba->hbalock); spin_lock_irq(&phba->hbalock);
phba->fcf.fcf_flag |= (FCF_SCAN_DONE | FCF_IN_USE); phba->fcf.fcf_flag |= (FCF_SCAN_DONE | FCF_IN_USE);
phba->hba_flag &= ~FCF_TS_INPROG;
if (vport->port_state != LPFC_FLOGI) {
phba->hba_flag |= FCF_RR_INPROG;
spin_unlock_irq(&phba->hbalock);
lpfc_issue_init_vfi(vport);
goto out;
}
spin_unlock_irq(&phba->hbalock); spin_unlock_irq(&phba->hbalock);
clear_bit(FCF_TS_INPROG, &phba->hba_flag);
if (vport->port_state != LPFC_FLOGI) {
set_bit(FCF_RR_INPROG, &phba->hba_flag);
lpfc_issue_init_vfi(vport);
}
goto out; goto out;
fail_out: fail_out:
spin_lock_irq(&phba->hbalock); clear_bit(FCF_RR_INPROG, &phba->hba_flag);
phba->hba_flag &= ~FCF_RR_INPROG;
spin_unlock_irq(&phba->hbalock);
out: out:
mempool_free(mboxq, phba->mbox_mem_pool); mempool_free(mboxq, phba->mbox_mem_pool);
} }
@ -1867,32 +1863,31 @@ lpfc_register_fcf(struct lpfc_hba *phba)
spin_lock_irq(&phba->hbalock); spin_lock_irq(&phba->hbalock);
/* If the FCF is not available do nothing. */ /* If the FCF is not available do nothing. */
if (!(phba->fcf.fcf_flag & FCF_AVAILABLE)) { if (!(phba->fcf.fcf_flag & FCF_AVAILABLE)) {
phba->hba_flag &= ~(FCF_TS_INPROG | FCF_RR_INPROG);
spin_unlock_irq(&phba->hbalock); spin_unlock_irq(&phba->hbalock);
clear_bit(FCF_TS_INPROG, &phba->hba_flag);
clear_bit(FCF_RR_INPROG, &phba->hba_flag);
return; return;
} }
/* The FCF is already registered, start discovery */ /* The FCF is already registered, start discovery */
if (phba->fcf.fcf_flag & FCF_REGISTERED) { if (phba->fcf.fcf_flag & FCF_REGISTERED) {
phba->fcf.fcf_flag |= (FCF_SCAN_DONE | FCF_IN_USE); phba->fcf.fcf_flag |= (FCF_SCAN_DONE | FCF_IN_USE);
phba->hba_flag &= ~FCF_TS_INPROG; spin_unlock_irq(&phba->hbalock);
clear_bit(FCF_TS_INPROG, &phba->hba_flag);
if (phba->pport->port_state != LPFC_FLOGI && if (phba->pport->port_state != LPFC_FLOGI &&
test_bit(FC_FABRIC, &phba->pport->fc_flag)) { test_bit(FC_FABRIC, &phba->pport->fc_flag)) {
phba->hba_flag |= FCF_RR_INPROG; set_bit(FCF_RR_INPROG, &phba->hba_flag);
spin_unlock_irq(&phba->hbalock);
lpfc_initial_flogi(phba->pport); lpfc_initial_flogi(phba->pport);
return; return;
} }
spin_unlock_irq(&phba->hbalock);
return; return;
} }
spin_unlock_irq(&phba->hbalock); spin_unlock_irq(&phba->hbalock);
fcf_mbxq = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL); fcf_mbxq = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
if (!fcf_mbxq) { if (!fcf_mbxq) {
spin_lock_irq(&phba->hbalock); clear_bit(FCF_TS_INPROG, &phba->hba_flag);
phba->hba_flag &= ~(FCF_TS_INPROG | FCF_RR_INPROG); clear_bit(FCF_RR_INPROG, &phba->hba_flag);
spin_unlock_irq(&phba->hbalock);
return; return;
} }
@ -1901,9 +1896,8 @@ lpfc_register_fcf(struct lpfc_hba *phba)
fcf_mbxq->mbox_cmpl = lpfc_mbx_cmpl_reg_fcfi; fcf_mbxq->mbox_cmpl = lpfc_mbx_cmpl_reg_fcfi;
rc = lpfc_sli_issue_mbox(phba, fcf_mbxq, MBX_NOWAIT); rc = lpfc_sli_issue_mbox(phba, fcf_mbxq, MBX_NOWAIT);
if (rc == MBX_NOT_FINISHED) { if (rc == MBX_NOT_FINISHED) {
spin_lock_irq(&phba->hbalock); clear_bit(FCF_TS_INPROG, &phba->hba_flag);
phba->hba_flag &= ~(FCF_TS_INPROG | FCF_RR_INPROG); clear_bit(FCF_RR_INPROG, &phba->hba_flag);
spin_unlock_irq(&phba->hbalock);
mempool_free(fcf_mbxq, phba->mbox_mem_pool); mempool_free(fcf_mbxq, phba->mbox_mem_pool);
} }
@ -1956,7 +1950,7 @@ lpfc_match_fcf_conn_list(struct lpfc_hba *phba,
bf_get(lpfc_fcf_record_fcf_sol, new_fcf_record)) bf_get(lpfc_fcf_record_fcf_sol, new_fcf_record))
return 0; return 0;
if (!(phba->hba_flag & HBA_FIP_SUPPORT)) { if (!test_bit(HBA_FIP_SUPPORT, &phba->hba_flag)) {
*boot_flag = 0; *boot_flag = 0;
*addr_mode = bf_get(lpfc_fcf_record_mac_addr_prov, *addr_mode = bf_get(lpfc_fcf_record_mac_addr_prov,
new_fcf_record); new_fcf_record);
@ -2151,8 +2145,9 @@ lpfc_check_pending_fcoe_event(struct lpfc_hba *phba, uint8_t unreg_fcf)
lpfc_printf_log(phba, KERN_INFO, LOG_FIP | LOG_DISCOVERY, lpfc_printf_log(phba, KERN_INFO, LOG_FIP | LOG_DISCOVERY,
"2833 Stop FCF discovery process due to link " "2833 Stop FCF discovery process due to link "
"state change (x%x)\n", phba->link_state); "state change (x%x)\n", phba->link_state);
clear_bit(FCF_TS_INPROG, &phba->hba_flag);
clear_bit(FCF_RR_INPROG, &phba->hba_flag);
spin_lock_irq(&phba->hbalock); spin_lock_irq(&phba->hbalock);
phba->hba_flag &= ~(FCF_TS_INPROG | FCF_RR_INPROG);
phba->fcf.fcf_flag &= ~(FCF_REDISC_FOV | FCF_DISCOVERY); phba->fcf.fcf_flag &= ~(FCF_REDISC_FOV | FCF_DISCOVERY);
spin_unlock_irq(&phba->hbalock); spin_unlock_irq(&phba->hbalock);
} }
@ -2380,9 +2375,7 @@ int lpfc_sli4_fcf_rr_next_proc(struct lpfc_vport *vport, uint16_t fcf_index)
int rc; int rc;
if (fcf_index == LPFC_FCOE_FCF_NEXT_NONE) { if (fcf_index == LPFC_FCOE_FCF_NEXT_NONE) {
spin_lock_irq(&phba->hbalock); if (test_bit(HBA_DEVLOSS_TMO, &phba->hba_flag)) {
if (phba->hba_flag & HBA_DEVLOSS_TMO) {
spin_unlock_irq(&phba->hbalock);
lpfc_printf_log(phba, KERN_INFO, LOG_FIP, lpfc_printf_log(phba, KERN_INFO, LOG_FIP,
"2872 Devloss tmo with no eligible " "2872 Devloss tmo with no eligible "
"FCF, unregister in-use FCF (x%x) " "FCF, unregister in-use FCF (x%x) "
@ -2392,8 +2385,9 @@ int lpfc_sli4_fcf_rr_next_proc(struct lpfc_vport *vport, uint16_t fcf_index)
goto stop_flogi_current_fcf; goto stop_flogi_current_fcf;
} }
/* Mark the end to FLOGI roundrobin failover */ /* Mark the end to FLOGI roundrobin failover */
phba->hba_flag &= ~FCF_RR_INPROG; clear_bit(FCF_RR_INPROG, &phba->hba_flag);
/* Allow action to new fcf asynchronous event */ /* Allow action to new fcf asynchronous event */
spin_lock_irq(&phba->hbalock);
phba->fcf.fcf_flag &= ~(FCF_AVAILABLE | FCF_SCAN_DONE); phba->fcf.fcf_flag &= ~(FCF_AVAILABLE | FCF_SCAN_DONE);
spin_unlock_irq(&phba->hbalock); spin_unlock_irq(&phba->hbalock);
lpfc_printf_log(phba, KERN_INFO, LOG_FIP, lpfc_printf_log(phba, KERN_INFO, LOG_FIP,
@ -2630,9 +2624,7 @@ lpfc_mbx_cmpl_fcf_scan_read_fcf_rec(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
"2765 Mailbox command READ_FCF_RECORD " "2765 Mailbox command READ_FCF_RECORD "
"failed to retrieve a FCF record.\n"); "failed to retrieve a FCF record.\n");
/* Let next new FCF event trigger fast failover */ /* Let next new FCF event trigger fast failover */
spin_lock_irq(&phba->hbalock); clear_bit(FCF_TS_INPROG, &phba->hba_flag);
phba->hba_flag &= ~FCF_TS_INPROG;
spin_unlock_irq(&phba->hbalock);
lpfc_sli4_mbox_cmd_free(phba, mboxq); lpfc_sli4_mbox_cmd_free(phba, mboxq);
return; return;
} }
@ -2873,10 +2865,10 @@ read_next_fcf:
phba->fcoe_eventtag_at_fcf_scan, phba->fcoe_eventtag_at_fcf_scan,
bf_get(lpfc_fcf_record_fcf_index, bf_get(lpfc_fcf_record_fcf_index,
new_fcf_record)); new_fcf_record));
spin_lock_irq(&phba->hbalock); if (test_bit(HBA_DEVLOSS_TMO,
if (phba->hba_flag & HBA_DEVLOSS_TMO) { &phba->hba_flag)) {
phba->hba_flag &= ~FCF_TS_INPROG; clear_bit(FCF_TS_INPROG,
spin_unlock_irq(&phba->hbalock); &phba->hba_flag);
/* Unregister in-use FCF and rescan */ /* Unregister in-use FCF and rescan */
lpfc_printf_log(phba, KERN_INFO, lpfc_printf_log(phba, KERN_INFO,
LOG_FIP, LOG_FIP,
@ -2889,8 +2881,7 @@ read_next_fcf:
/* /*
* Let next new FCF event trigger fast failover * Let next new FCF event trigger fast failover
*/ */
phba->hba_flag &= ~FCF_TS_INPROG; clear_bit(FCF_TS_INPROG, &phba->hba_flag);
spin_unlock_irq(&phba->hbalock);
return; return;
} }
/* /*
@ -2996,8 +2987,8 @@ lpfc_mbx_cmpl_fcf_rr_read_fcf_rec(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
if (phba->link_state < LPFC_LINK_UP) { if (phba->link_state < LPFC_LINK_UP) {
spin_lock_irq(&phba->hbalock); spin_lock_irq(&phba->hbalock);
phba->fcf.fcf_flag &= ~FCF_DISCOVERY; phba->fcf.fcf_flag &= ~FCF_DISCOVERY;
phba->hba_flag &= ~FCF_RR_INPROG;
spin_unlock_irq(&phba->hbalock); spin_unlock_irq(&phba->hbalock);
clear_bit(FCF_RR_INPROG, &phba->hba_flag);
goto out; goto out;
} }
@ -3008,7 +2999,7 @@ lpfc_mbx_cmpl_fcf_rr_read_fcf_rec(struct lpfc_hba *phba, LPFC_MBOXQ_t *mboxq)
lpfc_printf_log(phba, KERN_WARNING, LOG_FIP, lpfc_printf_log(phba, KERN_WARNING, LOG_FIP,
"2766 Mailbox command READ_FCF_RECORD " "2766 Mailbox command READ_FCF_RECORD "
"failed to retrieve a FCF record. " "failed to retrieve a FCF record. "
"hba_flg x%x fcf_flg x%x\n", phba->hba_flag, "hba_flg x%lx fcf_flg x%x\n", phba->hba_flag,
phba->fcf.fcf_flag); phba->fcf.fcf_flag);
lpfc_unregister_fcf_rescan(phba); lpfc_unregister_fcf_rescan(phba);
goto out; goto out;
@ -3471,9 +3462,9 @@ lpfc_mbx_cmpl_read_sparam(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
/* Check if sending the FLOGI is being deferred to after we get /* Check if sending the FLOGI is being deferred to after we get
* up to date CSPs from MBX_READ_SPARAM. * up to date CSPs from MBX_READ_SPARAM.
*/ */
if (phba->hba_flag & HBA_DEFER_FLOGI) { if (test_bit(HBA_DEFER_FLOGI, &phba->hba_flag)) {
lpfc_initial_flogi(vport); lpfc_initial_flogi(vport);
phba->hba_flag &= ~HBA_DEFER_FLOGI; clear_bit(HBA_DEFER_FLOGI, &phba->hba_flag);
} }
return; return;
@ -3495,7 +3486,7 @@ lpfc_mbx_process_link_up(struct lpfc_hba *phba, struct lpfc_mbx_read_top *la)
spin_lock_irqsave(&phba->hbalock, iflags); spin_lock_irqsave(&phba->hbalock, iflags);
phba->fc_linkspeed = bf_get(lpfc_mbx_read_top_link_spd, la); phba->fc_linkspeed = bf_get(lpfc_mbx_read_top_link_spd, la);
if (!(phba->hba_flag & HBA_FCOE_MODE)) { if (!test_bit(HBA_FCOE_MODE, &phba->hba_flag)) {
switch (bf_get(lpfc_mbx_read_top_link_spd, la)) { switch (bf_get(lpfc_mbx_read_top_link_spd, la)) {
case LPFC_LINK_SPEED_1GHZ: case LPFC_LINK_SPEED_1GHZ:
case LPFC_LINK_SPEED_2GHZ: case LPFC_LINK_SPEED_2GHZ:
@ -3611,7 +3602,7 @@ lpfc_mbx_process_link_up(struct lpfc_hba *phba, struct lpfc_mbx_read_top *la)
goto out; goto out;
} }
if (!(phba->hba_flag & HBA_FCOE_MODE)) { if (!test_bit(HBA_FCOE_MODE, &phba->hba_flag)) {
cfglink_mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL); cfglink_mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
if (!cfglink_mbox) if (!cfglink_mbox)
goto out; goto out;
@ -3631,7 +3622,7 @@ lpfc_mbx_process_link_up(struct lpfc_hba *phba, struct lpfc_mbx_read_top *la)
* is phase 1 implementation that support FCF index 0 and driver * is phase 1 implementation that support FCF index 0 and driver
* defaults. * defaults.
*/ */
if (!(phba->hba_flag & HBA_FIP_SUPPORT)) { if (!test_bit(HBA_FIP_SUPPORT, &phba->hba_flag)) {
fcf_record = kzalloc(sizeof(struct fcf_record), fcf_record = kzalloc(sizeof(struct fcf_record),
GFP_KERNEL); GFP_KERNEL);
if (unlikely(!fcf_record)) { if (unlikely(!fcf_record)) {
@ -3661,12 +3652,10 @@ lpfc_mbx_process_link_up(struct lpfc_hba *phba, struct lpfc_mbx_read_top *la)
* The driver is expected to do FIP/FCF. Call the port * The driver is expected to do FIP/FCF. Call the port
* and get the FCF Table. * and get the FCF Table.
*/ */
spin_lock_irqsave(&phba->hbalock, iflags); if (test_bit(FCF_TS_INPROG, &phba->hba_flag))
if (phba->hba_flag & FCF_TS_INPROG) {
spin_unlock_irqrestore(&phba->hbalock, iflags);
return; return;
}
/* This is the initial FCF discovery scan */ /* This is the initial FCF discovery scan */
spin_lock_irqsave(&phba->hbalock, iflags);
phba->fcf.fcf_flag |= FCF_INIT_DISC; phba->fcf.fcf_flag |= FCF_INIT_DISC;
spin_unlock_irqrestore(&phba->hbalock, iflags); spin_unlock_irqrestore(&phba->hbalock, iflags);
lpfc_printf_log(phba, KERN_INFO, LOG_FIP | LOG_DISCOVERY, lpfc_printf_log(phba, KERN_INFO, LOG_FIP | LOG_DISCOVERY,
@ -6997,11 +6986,11 @@ lpfc_unregister_unused_fcf(struct lpfc_hba *phba)
* registered, do nothing. * registered, do nothing.
*/ */
spin_lock_irq(&phba->hbalock); spin_lock_irq(&phba->hbalock);
if (!(phba->hba_flag & HBA_FCOE_MODE) || if (!test_bit(HBA_FCOE_MODE, &phba->hba_flag) ||
!(phba->fcf.fcf_flag & FCF_REGISTERED) || !(phba->fcf.fcf_flag & FCF_REGISTERED) ||
!(phba->hba_flag & HBA_FIP_SUPPORT) || !test_bit(HBA_FIP_SUPPORT, &phba->hba_flag) ||
(phba->fcf.fcf_flag & FCF_DISCOVERY) || (phba->fcf.fcf_flag & FCF_DISCOVERY) ||
(phba->pport->port_state == LPFC_FLOGI)) { phba->pport->port_state == LPFC_FLOGI) {
spin_unlock_irq(&phba->hbalock); spin_unlock_irq(&phba->hbalock);
return; return;
} }


@ -2146,6 +2146,14 @@ struct sli4_sge { /* SLI-4 */
uint32_t sge_len; uint32_t sge_len;
}; };
struct sli4_sge_le {
__le32 addr_hi;
__le32 addr_lo;
__le32 word2;
__le32 sge_len;
};
struct sli4_hybrid_sgl { struct sli4_hybrid_sgl {
struct list_head list_node; struct list_head list_node;
struct sli4_sge *dma_sgl; struct sli4_sge *dma_sgl;
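The struct sli4_sge_le added above mirrors struct sli4_sge with explicitly little-endian (__le32) members, which lets byte-order conversions be expressed (and sparse-checked) at the point an SGE is written. The snippet below is only a plausible illustration of that idea, not code from this series; it reuses the layout from the hunk and assumes nothing beyond the standard cpu_to_le32()/upper_32_bits()/lower_32_bits() helpers.

#include <linux/types.h>
#include <linux/kernel.h>
#include <asm/byteorder.h>

struct sli4_sge_le {			/* same layout as the struct added above */
	__le32 addr_hi;
	__le32 addr_lo;
	__le32 word2;
	__le32 sge_len;
};

static void example_fill_sge(struct sli4_sge_le *sge, u64 dma_addr, u32 len)
{
	sge->addr_hi = cpu_to_le32(upper_32_bits(dma_addr));
	sge->addr_lo = cpu_to_le32(lower_32_bits(dma_addr));
	sge->sge_len = cpu_to_le32(len);
}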


@ -567,7 +567,7 @@ lpfc_config_port_post(struct lpfc_hba *phba)
spin_lock_irq(&phba->hbalock); spin_lock_irq(&phba->hbalock);
/* Initialize ERATT handling flag */ /* Initialize ERATT handling flag */
phba->hba_flag &= ~HBA_ERATT_HANDLED; clear_bit(HBA_ERATT_HANDLED, &phba->hba_flag);
/* Enable appropriate host interrupts */ /* Enable appropriate host interrupts */
if (lpfc_readl(phba->HCregaddr, &status)) { if (lpfc_readl(phba->HCregaddr, &status)) {
@ -599,13 +599,14 @@ lpfc_config_port_post(struct lpfc_hba *phba)
/* Set up heart beat (HB) timer */ /* Set up heart beat (HB) timer */
mod_timer(&phba->hb_tmofunc, mod_timer(&phba->hb_tmofunc,
jiffies + msecs_to_jiffies(1000 * LPFC_HB_MBOX_INTERVAL)); jiffies + msecs_to_jiffies(1000 * LPFC_HB_MBOX_INTERVAL));
phba->hba_flag &= ~(HBA_HBEAT_INP | HBA_HBEAT_TMO); clear_bit(HBA_HBEAT_INP, &phba->hba_flag);
clear_bit(HBA_HBEAT_TMO, &phba->hba_flag);
phba->last_completion_time = jiffies; phba->last_completion_time = jiffies;
/* Set up error attention (ERATT) polling timer */ /* Set up error attention (ERATT) polling timer */
mod_timer(&phba->eratt_poll, mod_timer(&phba->eratt_poll,
jiffies + msecs_to_jiffies(1000 * phba->eratt_poll_interval)); jiffies + msecs_to_jiffies(1000 * phba->eratt_poll_interval));
if (phba->hba_flag & LINK_DISABLED) { if (test_bit(LINK_DISABLED, &phba->hba_flag)) {
lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT, lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
"2598 Adapter Link is disabled.\n"); "2598 Adapter Link is disabled.\n");
lpfc_down_link(phba, pmb); lpfc_down_link(phba, pmb);
@ -925,9 +926,7 @@ lpfc_sli4_free_sp_events(struct lpfc_hba *phba)
struct hbq_dmabuf *dmabuf; struct hbq_dmabuf *dmabuf;
struct lpfc_cq_event *cq_event; struct lpfc_cq_event *cq_event;
spin_lock_irq(&phba->hbalock); clear_bit(HBA_SP_QUEUE_EVT, &phba->hba_flag);
phba->hba_flag &= ~HBA_SP_QUEUE_EVT;
spin_unlock_irq(&phba->hbalock);
while (!list_empty(&phba->sli4_hba.sp_queue_event)) { while (!list_empty(&phba->sli4_hba.sp_queue_event)) {
/* Get the response iocb from the head of work queue */ /* Get the response iocb from the head of work queue */
@ -1228,18 +1227,15 @@ static void
lpfc_rrq_timeout(struct timer_list *t) lpfc_rrq_timeout(struct timer_list *t)
{ {
struct lpfc_hba *phba; struct lpfc_hba *phba;
unsigned long iflag;
phba = from_timer(phba, t, rrq_tmr); phba = from_timer(phba, t, rrq_tmr);
spin_lock_irqsave(&phba->pport->work_port_lock, iflag); if (test_bit(FC_UNLOADING, &phba->pport->load_flag)) {
if (!test_bit(FC_UNLOADING, &phba->pport->load_flag)) clear_bit(HBA_RRQ_ACTIVE, &phba->hba_flag);
phba->hba_flag |= HBA_RRQ_ACTIVE; return;
else }
phba->hba_flag &= ~HBA_RRQ_ACTIVE;
spin_unlock_irqrestore(&phba->pport->work_port_lock, iflag);
if (!test_bit(FC_UNLOADING, &phba->pport->load_flag)) set_bit(HBA_RRQ_ACTIVE, &phba->hba_flag);
lpfc_worker_wake_up(phba); lpfc_worker_wake_up(phba);
} }
/** /**
@ -1261,11 +1257,8 @@ lpfc_rrq_timeout(struct timer_list *t)
static void static void
lpfc_hb_mbox_cmpl(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmboxq) lpfc_hb_mbox_cmpl(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmboxq)
{ {
unsigned long drvr_flag; clear_bit(HBA_HBEAT_INP, &phba->hba_flag);
clear_bit(HBA_HBEAT_TMO, &phba->hba_flag);
spin_lock_irqsave(&phba->hbalock, drvr_flag);
phba->hba_flag &= ~(HBA_HBEAT_INP | HBA_HBEAT_TMO);
spin_unlock_irqrestore(&phba->hbalock, drvr_flag);
/* Check and reset heart-beat timer if necessary */ /* Check and reset heart-beat timer if necessary */
mempool_free(pmboxq, phba->mbox_mem_pool); mempool_free(pmboxq, phba->mbox_mem_pool);
@ -1457,7 +1450,7 @@ lpfc_issue_hb_mbox(struct lpfc_hba *phba)
int retval; int retval;
/* Is a Heartbeat mbox already in progress */ /* Is a Heartbeat mbox already in progress */
if (phba->hba_flag & HBA_HBEAT_INP) if (test_bit(HBA_HBEAT_INP, &phba->hba_flag))
return 0; return 0;
pmboxq = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL); pmboxq = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
@ -1473,7 +1466,7 @@ lpfc_issue_hb_mbox(struct lpfc_hba *phba)
mempool_free(pmboxq, phba->mbox_mem_pool); mempool_free(pmboxq, phba->mbox_mem_pool);
return -ENXIO; return -ENXIO;
} }
phba->hba_flag |= HBA_HBEAT_INP; set_bit(HBA_HBEAT_INP, &phba->hba_flag);
return 0; return 0;
} }
@ -1493,7 +1486,7 @@ lpfc_issue_hb_tmo(struct lpfc_hba *phba)
{ {
if (phba->cfg_enable_hba_heartbeat) if (phba->cfg_enable_hba_heartbeat)
return; return;
phba->hba_flag |= HBA_HBEAT_TMO; set_bit(HBA_HBEAT_TMO, &phba->hba_flag);
} }
/** /**
@ -1565,7 +1558,7 @@ lpfc_hb_timeout_handler(struct lpfc_hba *phba)
msecs_to_jiffies(1000 * LPFC_HB_MBOX_INTERVAL), msecs_to_jiffies(1000 * LPFC_HB_MBOX_INTERVAL),
jiffies)) { jiffies)) {
spin_unlock_irq(&phba->pport->work_port_lock); spin_unlock_irq(&phba->pport->work_port_lock);
if (phba->hba_flag & HBA_HBEAT_INP) if (test_bit(HBA_HBEAT_INP, &phba->hba_flag))
tmo = (1000 * LPFC_HB_MBOX_TIMEOUT); tmo = (1000 * LPFC_HB_MBOX_TIMEOUT);
else else
tmo = (1000 * LPFC_HB_MBOX_INTERVAL); tmo = (1000 * LPFC_HB_MBOX_INTERVAL);
@ -1574,7 +1567,7 @@ lpfc_hb_timeout_handler(struct lpfc_hba *phba)
spin_unlock_irq(&phba->pport->work_port_lock); spin_unlock_irq(&phba->pport->work_port_lock);
/* Check if a MBX_HEARTBEAT is already in progress */ /* Check if a MBX_HEARTBEAT is already in progress */
if (phba->hba_flag & HBA_HBEAT_INP) { if (test_bit(HBA_HBEAT_INP, &phba->hba_flag)) {
/* /*
* If heart beat timeout called with HBA_HBEAT_INP set * If heart beat timeout called with HBA_HBEAT_INP set
* we need to give the hb mailbox cmd a chance to * we need to give the hb mailbox cmd a chance to
@ -1611,7 +1604,7 @@ lpfc_hb_timeout_handler(struct lpfc_hba *phba)
} }
} else { } else {
/* Check to see if we want to force a MBX_HEARTBEAT */ /* Check to see if we want to force a MBX_HEARTBEAT */
if (phba->hba_flag & HBA_HBEAT_TMO) { if (test_bit(HBA_HBEAT_TMO, &phba->hba_flag)) {
retval = lpfc_issue_hb_mbox(phba); retval = lpfc_issue_hb_mbox(phba);
if (retval) if (retval)
tmo = (1000 * LPFC_HB_MBOX_INTERVAL); tmo = (1000 * LPFC_HB_MBOX_INTERVAL);
@ -1699,9 +1692,7 @@ lpfc_handle_deferred_eratt(struct lpfc_hba *phba)
* since we cannot communicate with the pci card anyway. * since we cannot communicate with the pci card anyway.
*/ */
if (pci_channel_offline(phba->pcidev)) { if (pci_channel_offline(phba->pcidev)) {
spin_lock_irq(&phba->hbalock); clear_bit(DEFER_ERATT, &phba->hba_flag);
phba->hba_flag &= ~DEFER_ERATT;
spin_unlock_irq(&phba->hbalock);
return; return;
} }
@ -1752,9 +1743,7 @@ lpfc_handle_deferred_eratt(struct lpfc_hba *phba)
if (!phba->work_hs && !test_bit(FC_UNLOADING, &phba->pport->load_flag)) if (!phba->work_hs && !test_bit(FC_UNLOADING, &phba->pport->load_flag))
phba->work_hs = old_host_status & ~HS_FFER1; phba->work_hs = old_host_status & ~HS_FFER1;
spin_lock_irq(&phba->hbalock); clear_bit(DEFER_ERATT, &phba->hba_flag);
phba->hba_flag &= ~DEFER_ERATT;
spin_unlock_irq(&phba->hbalock);
phba->work_status[0] = readl(phba->MBslimaddr + 0xa8); phba->work_status[0] = readl(phba->MBslimaddr + 0xa8);
phba->work_status[1] = readl(phba->MBslimaddr + 0xac); phba->work_status[1] = readl(phba->MBslimaddr + 0xac);
} }
@ -1798,9 +1787,7 @@ lpfc_handle_eratt_s3(struct lpfc_hba *phba)
* since we cannot communicate with the pci card anyway. * since we cannot communicate with the pci card anyway.
*/ */
if (pci_channel_offline(phba->pcidev)) { if (pci_channel_offline(phba->pcidev)) {
spin_lock_irq(&phba->hbalock); clear_bit(DEFER_ERATT, &phba->hba_flag);
phba->hba_flag &= ~DEFER_ERATT;
spin_unlock_irq(&phba->hbalock);
return; return;
} }
@ -1811,7 +1798,7 @@ lpfc_handle_eratt_s3(struct lpfc_hba *phba)
/* Send an internal error event to mgmt application */ /* Send an internal error event to mgmt application */
lpfc_board_errevt_to_mgmt(phba); lpfc_board_errevt_to_mgmt(phba);
if (phba->hba_flag & DEFER_ERATT) if (test_bit(DEFER_ERATT, &phba->hba_flag))
lpfc_handle_deferred_eratt(phba); lpfc_handle_deferred_eratt(phba);
if ((phba->work_hs & HS_FFER6) || (phba->work_hs & HS_FFER8)) { if ((phba->work_hs & HS_FFER6) || (phba->work_hs & HS_FFER8)) {
@ -2026,7 +2013,7 @@ lpfc_handle_eratt_s4(struct lpfc_hba *phba)
/* consider PCI bus read error as pci_channel_offline */ /* consider PCI bus read error as pci_channel_offline */
if (pci_rd_rc1 == -EIO && pci_rd_rc2 == -EIO) if (pci_rd_rc1 == -EIO && pci_rd_rc2 == -EIO)
return; return;
if (!(phba->hba_flag & HBA_RECOVERABLE_UE)) { if (!test_bit(HBA_RECOVERABLE_UE, &phba->hba_flag)) {
lpfc_sli4_offline_eratt(phba); lpfc_sli4_offline_eratt(phba);
return; return;
} }
@ -3319,9 +3306,10 @@ lpfc_stop_hba_timers(struct lpfc_hba *phba)
del_timer_sync(&phba->hb_tmofunc); del_timer_sync(&phba->hb_tmofunc);
if (phba->sli_rev == LPFC_SLI_REV4) { if (phba->sli_rev == LPFC_SLI_REV4) {
del_timer_sync(&phba->rrq_tmr); del_timer_sync(&phba->rrq_tmr);
phba->hba_flag &= ~HBA_RRQ_ACTIVE; clear_bit(HBA_RRQ_ACTIVE, &phba->hba_flag);
} }
phba->hba_flag &= ~(HBA_HBEAT_INP | HBA_HBEAT_TMO); clear_bit(HBA_HBEAT_INP, &phba->hba_flag);
clear_bit(HBA_HBEAT_TMO, &phba->hba_flag);
switch (phba->pci_dev_grp) { switch (phba->pci_dev_grp) {
case LPFC_PCI_DEV_LP: case LPFC_PCI_DEV_LP:
@ -4785,7 +4773,10 @@ lpfc_create_port(struct lpfc_hba *phba, int instance, struct device *dev)
shost->max_id = LPFC_MAX_TARGET; shost->max_id = LPFC_MAX_TARGET;
shost->max_lun = vport->cfg_max_luns; shost->max_lun = vport->cfg_max_luns;
shost->this_id = -1; shost->this_id = -1;
shost->max_cmd_len = 16; if (phba->sli_rev == LPFC_SLI_REV4)
shost->max_cmd_len = LPFC_FCP_CDB_LEN_32;
else
shost->max_cmd_len = LPFC_FCP_CDB_LEN;
if (phba->sli_rev == LPFC_SLI_REV4) { if (phba->sli_rev == LPFC_SLI_REV4) {
if (!phba->cfg_fcp_mq_threshold || if (!phba->cfg_fcp_mq_threshold ||
@ -4976,7 +4967,7 @@ static void lpfc_host_supported_speeds_set(struct Scsi_Host *shost)
* Avoid reporting supported link speed for FCoE as it can't be * Avoid reporting supported link speed for FCoE as it can't be
* controlled via FCoE. * controlled via FCoE.
*/ */
if (phba->hba_flag & HBA_FCOE_MODE) if (test_bit(HBA_FCOE_MODE, &phba->hba_flag))
return; return;
if (phba->lmt & LMT_256Gb) if (phba->lmt & LMT_256Gb)
@ -5490,7 +5481,7 @@ lpfc_sli4_async_link_evt(struct lpfc_hba *phba,
* For FC Mode: issue the READ_TOPOLOGY mailbox command to fetch * For FC Mode: issue the READ_TOPOLOGY mailbox command to fetch
* topology info. Note: Optional for non FC-AL ports. * topology info. Note: Optional for non FC-AL ports.
*/ */
if (!(phba->hba_flag & HBA_FCOE_MODE)) { if (!test_bit(HBA_FCOE_MODE, &phba->hba_flag)) {
rc = lpfc_sli_issue_mbox(phba, pmb, MBX_NOWAIT); rc = lpfc_sli_issue_mbox(phba, pmb, MBX_NOWAIT);
if (rc == MBX_NOT_FINISHED) if (rc == MBX_NOT_FINISHED)
goto out_free_pmb; goto out_free_pmb;
@ -6025,7 +6016,7 @@ lpfc_cmf_timer(struct hrtimer *timer)
*/ */
if (phba->cmf_active_mode == LPFC_CFG_MANAGED && if (phba->cmf_active_mode == LPFC_CFG_MANAGED &&
phba->link_state != LPFC_LINK_DOWN && phba->link_state != LPFC_LINK_DOWN &&
phba->hba_flag & HBA_SETUP) { test_bit(HBA_SETUP, &phba->hba_flag)) {
mbpi = phba->cmf_last_sync_bw; mbpi = phba->cmf_last_sync_bw;
phba->cmf_last_sync_bw = 0; phba->cmf_last_sync_bw = 0;
extra = 0; extra = 0;
@ -6778,11 +6769,9 @@ lpfc_sli4_async_fip_evt(struct lpfc_hba *phba,
} }
/* If the FCF discovery is in progress, do nothing. */ /* If the FCF discovery is in progress, do nothing. */
spin_lock_irq(&phba->hbalock); if (test_bit(FCF_TS_INPROG, &phba->hba_flag))
if (phba->hba_flag & FCF_TS_INPROG) {
spin_unlock_irq(&phba->hbalock);
break; break;
} spin_lock_irq(&phba->hbalock);
/* If fast FCF failover rescan event is pending, do nothing */ /* If fast FCF failover rescan event is pending, do nothing */
if (phba->fcf.fcf_flag & (FCF_REDISC_EVT | FCF_REDISC_PEND)) { if (phba->fcf.fcf_flag & (FCF_REDISC_EVT | FCF_REDISC_PEND)) {
spin_unlock_irq(&phba->hbalock); spin_unlock_irq(&phba->hbalock);
@ -7321,9 +7310,7 @@ void lpfc_sli4_async_event_proc(struct lpfc_hba *phba)
unsigned long iflags; unsigned long iflags;
/* First, declare the async event has been handled */ /* First, declare the async event has been handled */
spin_lock_irqsave(&phba->hbalock, iflags); clear_bit(ASYNC_EVENT, &phba->hba_flag);
phba->hba_flag &= ~ASYNC_EVENT;
spin_unlock_irqrestore(&phba->hbalock, iflags);
/* Now, handle all the async events */ /* Now, handle all the async events */
spin_lock_irqsave(&phba->sli4_hba.asynce_list_lock, iflags); spin_lock_irqsave(&phba->sli4_hba.asynce_list_lock, iflags);
@ -8247,7 +8234,7 @@ lpfc_sli4_driver_resource_setup(struct lpfc_hba *phba)
* our max amount and we need to limit lpfc_sg_seg_cnt * our max amount and we need to limit lpfc_sg_seg_cnt
* to minimize the risk of running out. * to minimize the risk of running out.
*/ */
phba->cfg_sg_dma_buf_size = sizeof(struct fcp_cmnd) + phba->cfg_sg_dma_buf_size = sizeof(struct fcp_cmnd32) +
sizeof(struct fcp_rsp) + max_buf_size; sizeof(struct fcp_rsp) + max_buf_size;
/* Total SGEs for scsi_sg_list and scsi_sg_prot_list */ /* Total SGEs for scsi_sg_list and scsi_sg_prot_list */
@ -8269,7 +8256,7 @@ lpfc_sli4_driver_resource_setup(struct lpfc_hba *phba)
* the FCP rsp, a SGE for each, and a SGE for up to * the FCP rsp, a SGE for each, and a SGE for up to
* cfg_sg_seg_cnt data segments. * cfg_sg_seg_cnt data segments.
*/ */
phba->cfg_sg_dma_buf_size = sizeof(struct fcp_cmnd) + phba->cfg_sg_dma_buf_size = sizeof(struct fcp_cmnd32) +
sizeof(struct fcp_rsp) + sizeof(struct fcp_rsp) +
((phba->cfg_sg_seg_cnt + extra) * ((phba->cfg_sg_seg_cnt + extra) *
sizeof(struct sli4_sge)); sizeof(struct sli4_sge));
@ -8332,7 +8319,7 @@ lpfc_sli4_driver_resource_setup(struct lpfc_hba *phba)
phba->lpfc_cmd_rsp_buf_pool = phba->lpfc_cmd_rsp_buf_pool =
dma_pool_create("lpfc_cmd_rsp_buf_pool", dma_pool_create("lpfc_cmd_rsp_buf_pool",
&phba->pcidev->dev, &phba->pcidev->dev,
sizeof(struct fcp_cmnd) + sizeof(struct fcp_cmnd32) +
sizeof(struct fcp_rsp), sizeof(struct fcp_rsp),
i, 0); i, 0);
if (!phba->lpfc_cmd_rsp_buf_pool) { if (!phba->lpfc_cmd_rsp_buf_pool) {
@ -9869,41 +9856,38 @@ lpfc_map_topology(struct lpfc_hba *phba, struct lpfc_mbx_read_config *rd_config)
return; return;
} }
/* FW supports persistent topology - override module parameter value */ /* FW supports persistent topology - override module parameter value */
phba->hba_flag |= HBA_PERSISTENT_TOPO; set_bit(HBA_PERSISTENT_TOPO, &phba->hba_flag);
/* if ASIC_GEN_NUM >= 0xC) */ /* if ASIC_GEN_NUM >= 0xC) */
if ((bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf) == if ((bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf) ==
LPFC_SLI_INTF_IF_TYPE_6) || LPFC_SLI_INTF_IF_TYPE_6) ||
(bf_get(lpfc_sli_intf_sli_family, &phba->sli4_hba.sli_intf) == (bf_get(lpfc_sli_intf_sli_family, &phba->sli4_hba.sli_intf) ==
LPFC_SLI_INTF_FAMILY_G6)) { LPFC_SLI_INTF_FAMILY_G6)) {
if (!tf) { if (!tf)
phba->cfg_topology = ((pt == LINK_FLAGS_LOOP) phba->cfg_topology = ((pt == LINK_FLAGS_LOOP)
? FLAGS_TOPOLOGY_MODE_LOOP ? FLAGS_TOPOLOGY_MODE_LOOP
: FLAGS_TOPOLOGY_MODE_PT_PT); : FLAGS_TOPOLOGY_MODE_PT_PT);
} else { else
phba->hba_flag &= ~HBA_PERSISTENT_TOPO; clear_bit(HBA_PERSISTENT_TOPO, &phba->hba_flag);
}
} else { /* G5 */ } else { /* G5 */
if (tf) { if (tf)
/* If topology failover set - pt is '0' or '1' */ /* If topology failover set - pt is '0' or '1' */
phba->cfg_topology = (pt ? FLAGS_TOPOLOGY_MODE_PT_LOOP : phba->cfg_topology = (pt ? FLAGS_TOPOLOGY_MODE_PT_LOOP :
FLAGS_TOPOLOGY_MODE_LOOP_PT); FLAGS_TOPOLOGY_MODE_LOOP_PT);
} else { else
phba->cfg_topology = ((pt == LINK_FLAGS_P2P) phba->cfg_topology = ((pt == LINK_FLAGS_P2P)
? FLAGS_TOPOLOGY_MODE_PT_PT ? FLAGS_TOPOLOGY_MODE_PT_PT
: FLAGS_TOPOLOGY_MODE_LOOP); : FLAGS_TOPOLOGY_MODE_LOOP);
}
} }
if (phba->hba_flag & HBA_PERSISTENT_TOPO) { if (test_bit(HBA_PERSISTENT_TOPO, &phba->hba_flag))
lpfc_printf_log(phba, KERN_INFO, LOG_SLI, lpfc_printf_log(phba, KERN_INFO, LOG_SLI,
"2020 Using persistent topology value [%s]", "2020 Using persistent topology value [%s]",
lpfc_topo_to_str[phba->cfg_topology]); lpfc_topo_to_str[phba->cfg_topology]);
} else { else
lpfc_printf_log(phba, KERN_WARNING, LOG_SLI, lpfc_printf_log(phba, KERN_WARNING, LOG_SLI,
"2021 Invalid topology values from FW " "2021 Invalid topology values from FW "
"Using driver parameter defined value [%s]", "Using driver parameter defined value [%s]",
lpfc_topo_to_str[phba->cfg_topology]); lpfc_topo_to_str[phba->cfg_topology]);
}
} }
/** /**
@ -10146,7 +10130,7 @@ lpfc_sli4_read_config(struct lpfc_hba *phba)
forced_link_speed = forced_link_speed =
bf_get(lpfc_mbx_rd_conf_link_speed, rd_config); bf_get(lpfc_mbx_rd_conf_link_speed, rd_config);
if (forced_link_speed) { if (forced_link_speed) {
phba->hba_flag |= HBA_FORCED_LINK_SPEED; set_bit(HBA_FORCED_LINK_SPEED, &phba->hba_flag);
switch (forced_link_speed) { switch (forced_link_speed) {
case LINK_SPEED_1G: case LINK_SPEED_1G:
@ -12241,7 +12225,7 @@ lpfc_sli_enable_intr(struct lpfc_hba *phba, uint32_t cfg_mode)
retval = lpfc_sli_config_port(phba, LPFC_SLI_REV3); retval = lpfc_sli_config_port(phba, LPFC_SLI_REV3);
if (retval) if (retval)
return intr_mode; return intr_mode;
phba->hba_flag &= ~HBA_NEEDS_CFG_PORT; clear_bit(HBA_NEEDS_CFG_PORT, &phba->hba_flag);
if (cfg_mode == 2) { if (cfg_mode == 2) {
/* Now, try to enable MSI-X interrupt mode */ /* Now, try to enable MSI-X interrupt mode */
@ -14812,6 +14796,7 @@ lpfc_pci_probe_one_s4(struct pci_dev *pdev, const struct pci_device_id *pid)
goto out_unset_pci_mem_s4; goto out_unset_pci_mem_s4;
} }
spin_lock_init(&phba->rrq_list_lock);
INIT_LIST_HEAD(&phba->active_rrq_list); INIT_LIST_HEAD(&phba->active_rrq_list);
INIT_LIST_HEAD(&phba->fcf.fcf_pri_list); INIT_LIST_HEAD(&phba->fcf.fcf_pri_list);
@ -15528,7 +15513,7 @@ lpfc_io_error_detected(struct pci_dev *pdev, pci_channel_state_t state)
pci_ers_result_t rc = PCI_ERS_RESULT_DISCONNECT; pci_ers_result_t rc = PCI_ERS_RESULT_DISCONNECT;
if (phba->link_state == LPFC_HBA_ERROR && if (phba->link_state == LPFC_HBA_ERROR &&
phba->hba_flag & HBA_IOQ_FLUSH) test_bit(HBA_IOQ_FLUSH, &phba->hba_flag))
return PCI_ERS_RESULT_NEED_RESET; return PCI_ERS_RESULT_NEED_RESET;
switch (phba->pci_dev_grp) { switch (phba->pci_dev_grp) {


@ -47,6 +47,18 @@
#include "lpfc_debugfs.h" #include "lpfc_debugfs.h"
/* Called to clear RSCN discovery flags when driver is unloading. */
static bool
lpfc_check_unload_and_clr_rscn(unsigned long *fc_flag)
{
/* If unloading, then clear the FC_RSCN_DEFERRED flag */
if (test_bit(FC_UNLOADING, fc_flag)) {
clear_bit(FC_RSCN_DEFERRED, fc_flag);
return false;
}
return test_bit(FC_RSCN_DEFERRED, fc_flag);
}
/* Called to verify a rcv'ed ADISC was intended for us. */ /* Called to verify a rcv'ed ADISC was intended for us. */
static int static int
lpfc_check_adisc(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, lpfc_check_adisc(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
@ -213,8 +225,10 @@ void
lpfc_els_abort(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp) lpfc_els_abort(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp)
{ {
LIST_HEAD(abort_list); LIST_HEAD(abort_list);
LIST_HEAD(drv_cmpl_list);
struct lpfc_sli_ring *pring; struct lpfc_sli_ring *pring;
struct lpfc_iocbq *iocb, *next_iocb; struct lpfc_iocbq *iocb, *next_iocb;
int retval = 0;
pring = lpfc_phba_elsring(phba); pring = lpfc_phba_elsring(phba);
@ -250,11 +264,20 @@ lpfc_els_abort(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp)
/* Abort the targeted IOs and remove them from the abort list. */ /* Abort the targeted IOs and remove them from the abort list. */
list_for_each_entry_safe(iocb, next_iocb, &abort_list, dlist) { list_for_each_entry_safe(iocb, next_iocb, &abort_list, dlist) {
spin_lock_irq(&phba->hbalock); spin_lock_irq(&phba->hbalock);
list_del_init(&iocb->dlist); list_del_init(&iocb->dlist);
lpfc_sli_issue_abort_iotag(phba, pring, iocb, NULL); retval = lpfc_sli_issue_abort_iotag(phba, pring, iocb, NULL);
spin_unlock_irq(&phba->hbalock); spin_unlock_irq(&phba->hbalock);
if (retval && test_bit(FC_UNLOADING, &phba->pport->load_flag)) {
list_del_init(&iocb->list);
list_add_tail(&iocb->list, &drv_cmpl_list);
}
} }
lpfc_sli_cancel_iocbs(phba, &drv_cmpl_list, IOSTAT_LOCAL_REJECT,
IOERR_SLI_ABORTED);
/* Make sure HBA is alive */ /* Make sure HBA is alive */
lpfc_issue_hb_tmo(phba); lpfc_issue_hb_tmo(phba);
@ -481,7 +504,7 @@ lpfc_rcv_plogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
* must have ACCed the remote NPorts FLOGI to us * must have ACCed the remote NPorts FLOGI to us
* to make it here. * to make it here.
*/ */
if (phba->hba_flag & HBA_FLOGI_OUTSTANDING) if (test_bit(HBA_FLOGI_OUTSTANDING, &phba->hba_flag))
lpfc_els_abort_flogi(phba); lpfc_els_abort_flogi(phba);
ed_tov = be32_to_cpu(sp->cmn.e_d_tov); ed_tov = be32_to_cpu(sp->cmn.e_d_tov);
@ -1604,10 +1627,8 @@ lpfc_device_recov_plogi_issue(struct lpfc_vport *vport,
{ {
struct lpfc_hba *phba = vport->phba; struct lpfc_hba *phba = vport->phba;
/* Don't do anything that will mess up processing of the /* Don't do anything that disrupts the RSCN unless lpfc is unloading. */
* previous RSCN. if (lpfc_check_unload_and_clr_rscn(&vport->fc_flag))
*/
if (test_bit(FC_RSCN_DEFERRED, &vport->fc_flag))
return ndlp->nlp_state; return ndlp->nlp_state;
/* software abort outstanding PLOGI */ /* software abort outstanding PLOGI */
@ -1790,10 +1811,8 @@ lpfc_device_recov_adisc_issue(struct lpfc_vport *vport,
{ {
struct lpfc_hba *phba = vport->phba; struct lpfc_hba *phba = vport->phba;
/* Don't do anything that will mess up processing of the /* Don't do anything that disrupts the RSCN unless lpfc is unloading. */
* previous RSCN. if (lpfc_check_unload_and_clr_rscn(&vport->fc_flag))
*/
if (test_bit(FC_RSCN_DEFERRED, &vport->fc_flag))
return ndlp->nlp_state; return ndlp->nlp_state;
/* software abort outstanding ADISC */ /* software abort outstanding ADISC */
@ -2059,10 +2078,8 @@ lpfc_device_recov_reglogin_issue(struct lpfc_vport *vport,
void *arg, void *arg,
uint32_t evt) uint32_t evt)
{ {
/* Don't do anything that will mess up processing of the /* Don't do anything that disrupts the RSCN unless lpfc is unloading. */
* previous RSCN. if (lpfc_check_unload_and_clr_rscn(&vport->fc_flag))
*/
if (test_bit(FC_RSCN_DEFERRED, &vport->fc_flag))
return ndlp->nlp_state; return ndlp->nlp_state;
ndlp->nlp_prev_state = NLP_STE_REG_LOGIN_ISSUE; ndlp->nlp_prev_state = NLP_STE_REG_LOGIN_ISSUE;
@ -2375,10 +2392,8 @@ lpfc_device_recov_prli_issue(struct lpfc_vport *vport,
{ {
struct lpfc_hba *phba = vport->phba; struct lpfc_hba *phba = vport->phba;
/* Don't do anything that will mess up processing of the /* Don't do anything that disrupts the RSCN unless lpfc is unloading. */
* previous RSCN. if (lpfc_check_unload_and_clr_rscn(&vport->fc_flag))
*/
if (test_bit(FC_RSCN_DEFERRED, &vport->fc_flag))
return ndlp->nlp_state; return ndlp->nlp_state;
/* software abort outstanding PRLI */ /* software abort outstanding PRLI */
@ -2894,10 +2909,8 @@ static uint32_t
lpfc_device_recov_npr_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, lpfc_device_recov_npr_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
void *arg, uint32_t evt) void *arg, uint32_t evt)
{ {
/* Don't do anything that will mess up processing of the /* Don't do anything that disrupts the RSCN unless lpfc is unloading. */
* previous RSCN. if (lpfc_check_unload_and_clr_rscn(&vport->fc_flag))
*/
if (test_bit(FC_RSCN_DEFERRED, &vport->fc_flag))
return ndlp->nlp_state; return ndlp->nlp_state;
lpfc_cancel_retry_delay_tmo(vport, ndlp); lpfc_cancel_retry_delay_tmo(vport, ndlp);
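The device-recovery hunks above replace the open-coded FC_RSCN_DEFERRED test with the new lpfc_check_unload_and_clr_rscn() helper, which additionally clears the deferred-RSCN bit (and lets recovery proceed) when the driver is unloading. A minimal sketch of the resulting call pattern follows; the helper and field names come from the hunks above, while the surrounding handler is a placeholder that assumes the lpfc headers.

static uint32_t example_device_recov(struct lpfc_vport *vport,
				     struct lpfc_nodelist *ndlp)
{
	/* Don't do anything that disrupts the RSCN unless lpfc is unloading. */
	if (lpfc_check_unload_and_clr_rscn(&vport->fc_flag))
		return ndlp->nlp_state;

	/* ...state-specific abort/cleanup work goes here... */
	return ndlp->nlp_state;
}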


@ -95,7 +95,7 @@ lpfc_nvme_create_queue(struct nvme_fc_local_port *pnvme_lport,
vport = lport->vport; vport = lport->vport;
if (!vport || test_bit(FC_UNLOADING, &vport->load_flag) || if (!vport || test_bit(FC_UNLOADING, &vport->load_flag) ||
vport->phba->hba_flag & HBA_IOQ_FLUSH) test_bit(HBA_IOQ_FLUSH, &vport->phba->hba_flag))
return -ENODEV; return -ENODEV;
qhandle = kzalloc(sizeof(struct lpfc_nvme_qhandle), GFP_KERNEL); qhandle = kzalloc(sizeof(struct lpfc_nvme_qhandle), GFP_KERNEL);
@ -272,7 +272,7 @@ lpfc_nvme_handle_lsreq(struct lpfc_hba *phba,
remoteport = lpfc_rport->remoteport; remoteport = lpfc_rport->remoteport;
if (!vport->localport || if (!vport->localport ||
vport->phba->hba_flag & HBA_IOQ_FLUSH) test_bit(HBA_IOQ_FLUSH, &vport->phba->hba_flag))
return -EINVAL; return -EINVAL;
lport = vport->localport->private; lport = vport->localport->private;
@ -569,7 +569,7 @@ __lpfc_nvme_ls_req(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
ndlp->nlp_DID, ntype, nstate); ndlp->nlp_DID, ntype, nstate);
return -ENODEV; return -ENODEV;
} }
if (vport->phba->hba_flag & HBA_IOQ_FLUSH) if (test_bit(HBA_IOQ_FLUSH, &vport->phba->hba_flag))
return -ENODEV; return -ENODEV;
if (!vport->phba->sli4_hba.nvmels_wq) if (!vport->phba->sli4_hba.nvmels_wq)
@ -675,7 +675,7 @@ lpfc_nvme_ls_req(struct nvme_fc_local_port *pnvme_lport,
vport = lport->vport; vport = lport->vport;
if (test_bit(FC_UNLOADING, &vport->load_flag) || if (test_bit(FC_UNLOADING, &vport->load_flag) ||
vport->phba->hba_flag & HBA_IOQ_FLUSH) test_bit(HBA_IOQ_FLUSH, &vport->phba->hba_flag))
return -ENODEV; return -ENODEV;
atomic_inc(&lport->fc4NvmeLsRequests); atomic_inc(&lport->fc4NvmeLsRequests);
@ -1568,7 +1568,7 @@ lpfc_nvme_fcp_io_submit(struct nvme_fc_local_port *pnvme_lport,
phba = vport->phba; phba = vport->phba;
if ((unlikely(test_bit(FC_UNLOADING, &vport->load_flag))) || if ((unlikely(test_bit(FC_UNLOADING, &vport->load_flag))) ||
phba->hba_flag & HBA_IOQ_FLUSH) { test_bit(HBA_IOQ_FLUSH, &phba->hba_flag)) {
lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_IOERR, lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_IOERR,
"6124 Fail IO, Driver unload\n"); "6124 Fail IO, Driver unload\n");
atomic_inc(&lport->xmt_fcp_err); atomic_inc(&lport->xmt_fcp_err);
@@ -1909,24 +1909,19 @@ lpfc_nvme_fcp_abort(struct nvme_fc_local_port *pnvme_lport,
 		return;
 	}

-	/* Guard against IO completion being called at same time */
-	spin_lock_irqsave(&lpfc_nbuf->buf_lock, flags);
-
-	/* If the hba is getting reset, this flag is set. It is
-	 * cleared when the reset is complete and rings reestablished.
-	 */
-	spin_lock(&phba->hbalock);
 	/* driver queued commands are in process of being flushed */
-	if (phba->hba_flag & HBA_IOQ_FLUSH) {
-		spin_unlock(&phba->hbalock);
-		spin_unlock_irqrestore(&lpfc_nbuf->buf_lock, flags);
+	if (test_bit(HBA_IOQ_FLUSH, &phba->hba_flag)) {
 		lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT,
 				 "6139 Driver in reset cleanup - flushing "
-				 "NVME Req now. hba_flag x%x\n",
+				 "NVME Req now. hba_flag x%lx\n",
 				 phba->hba_flag);
 		return;
 	}

+	/* Guard against IO completion being called at same time */
+	spin_lock_irqsave(&lpfc_nbuf->buf_lock, flags);
+	spin_lock(&phba->hbalock);
 	nvmereq_wqe = &lpfc_nbuf->cur_iocbq;

 	/*

View File
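
The abort path above shows a side benefit of hba_flag becoming an unsigned long bitmap: the flush check can run lock-free, so the early-exit branch no longer has to take and then unwind two spinlocks. A hedged sketch of the resulting shape (the wrapper name is assumed for illustration, not driver code):

	static int nvme_abort_prologue(struct lpfc_hba *phba,
				       struct lpfc_io_buf *lpfc_nbuf)
	{
		unsigned long flags;

		/* Atomic bit test: no lock is needed just to read the flag. */
		if (test_bit(HBA_IOQ_FLUSH, &phba->hba_flag))
			return -ENODEV;	/* queues are being flushed, bail early */

		/* Only the real work pays for the locks. */
		spin_lock_irqsave(&lpfc_nbuf->buf_lock, flags);
		spin_lock(&phba->hbalock);
		/* ... build and ring the abort WQE here ... */
		spin_unlock(&phba->hbalock);
		spin_unlock_irqrestore(&lpfc_nbuf->buf_lock, flags);
		return 0;
	}

The format-string change from x%x to x%lx in the same hunk follows directly from hba_flag now being an unsigned long.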

@@ -1811,7 +1811,9 @@ lpfc_sli4_nvmet_xri_aborted(struct lpfc_hba *phba,
 			ctxp->flag &= ~LPFC_NVME_XBUSY;
 			spin_unlock_irqrestore(&ctxp->ctxlock, iflag);

+			spin_lock_irqsave(&phba->rrq_list_lock, iflag);
 			rrq_empty = list_empty(&phba->active_rrq_list);
+			spin_unlock_irqrestore(&phba->rrq_list_lock, iflag);
 			ndlp = lpfc_findnode_did(phba->pport, ctxp->sid);
 			if (ndlp &&
 			    (ndlp->nlp_state == NLP_STE_UNMAPPED_NODE ||
@@ -3393,14 +3395,12 @@ lpfc_nvmet_sol_fcp_issue_abort(struct lpfc_hba *phba,
 	/* If the hba is getting reset, this flag is set. It is
 	 * cleared when the reset is complete and rings reestablished.
 	 */
-	spin_lock_irqsave(&phba->hbalock, flags);
 	/* driver queued commands are in process of being flushed */
-	if (phba->hba_flag & HBA_IOQ_FLUSH) {
-		spin_unlock_irqrestore(&phba->hbalock, flags);
+	if (test_bit(HBA_IOQ_FLUSH, &phba->hba_flag)) {
 		atomic_inc(&tgtp->xmt_abort_rsp_error);
 		lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
 				"6163 Driver in reset cleanup - flushing "
-				"NVME Req now. hba_flag x%x oxid x%x\n",
+				"NVME Req now. hba_flag x%lx oxid x%x\n",
 				phba->hba_flag, ctxp->oxid);
 		lpfc_sli_release_iocbq(phba, abts_wqeq);
 		spin_lock_irqsave(&ctxp->ctxlock, flags);
@@ -3409,6 +3409,7 @@ lpfc_nvmet_sol_fcp_issue_abort(struct lpfc_hba *phba,
 		return 0;
 	}

+	spin_lock_irqsave(&phba->hbalock, flags);
 	/* Outstanding abort is in progress */
 	if (abts_wqeq->cmd_flag & LPFC_DRIVER_ABORTED) {
 		spin_unlock_irqrestore(&phba->hbalock, flags);

View File
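
Both the aborted-XRI hunk above and the matching SCSI-path hunk that follows now take a new, dedicated rrq_list_lock around active_rrq_list instead of piggybacking on the coarse hbalock. A minimal sketch of the intended usage (helper name is mine; the real list handling carries more state):

	/* Sketch: additions to and scans of the RRQ list use the narrow lock. */
	static void rrq_add(struct lpfc_hba *phba, struct lpfc_node_rrq *rrq,
			    bool *was_empty)
	{
		unsigned long iflag;

		spin_lock_irqsave(&phba->rrq_list_lock, iflag);
		*was_empty = list_empty(&phba->active_rrq_list);
		list_add_tail(&rrq->list, &phba->active_rrq_list);
		spin_unlock_irqrestore(&phba->rrq_list_lock, iflag);
	}

Paths that only touch the RRQ list then stop contending with everything else serialized by hbalock.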

@@ -474,9 +474,11 @@ lpfc_sli4_io_xri_aborted(struct lpfc_hba *phba,
 			ndlp = psb->rdata->pnode;
 		else
 			ndlp = NULL;

-		rrq_empty = list_empty(&phba->active_rrq_list);
 		spin_unlock_irqrestore(&phba->hbalock, iflag);
+		spin_lock_irqsave(&phba->rrq_list_lock, iflag);
+		rrq_empty = list_empty(&phba->active_rrq_list);
+		spin_unlock_irqrestore(&phba->rrq_list_lock, iflag);
 		if (ndlp && !offline) {
 			lpfc_set_rrq_active(phba, ndlp,
 					    psb->cur_iocbq.sli4_lxritag, rxid, 1);
@@ -598,7 +600,7 @@ lpfc_get_scsi_buf_s4(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp,
 {
 	struct lpfc_io_buf *lpfc_cmd;
 	struct lpfc_sli4_hdw_queue *qp;
-	struct sli4_sge *sgl;
+	struct sli4_sge_le *sgl;
 	dma_addr_t pdma_phys_fcp_rsp;
 	dma_addr_t pdma_phys_fcp_cmd;
 	uint32_t cpu, idx;
@@ -649,23 +651,23 @@ lpfc_get_scsi_buf_s4(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp,
 	 * The balance are sg list bdes. Initialize the
 	 * first two and leave the rest for queuecommand.
 	 */
-	sgl = (struct sli4_sge *)lpfc_cmd->dma_sgl;
+	sgl = (struct sli4_sge_le *)lpfc_cmd->dma_sgl;
 	pdma_phys_fcp_cmd = tmp->fcp_cmd_rsp_dma_handle;
 	sgl->addr_hi = cpu_to_le32(putPaddrHigh(pdma_phys_fcp_cmd));
 	sgl->addr_lo = cpu_to_le32(putPaddrLow(pdma_phys_fcp_cmd));
-	sgl->word2 = le32_to_cpu(sgl->word2);
-	bf_set(lpfc_sli4_sge_last, sgl, 0);
-	sgl->word2 = cpu_to_le32(sgl->word2);
-	sgl->sge_len = cpu_to_le32(sizeof(struct fcp_cmnd));
+	bf_set_le32(lpfc_sli4_sge_last, sgl, 0);
+	if (cmnd && cmnd->cmd_len > LPFC_FCP_CDB_LEN)
+		sgl->sge_len = cpu_to_le32(sizeof(struct fcp_cmnd32));
+	else
+		sgl->sge_len = cpu_to_le32(sizeof(struct fcp_cmnd));
 	sgl++;

 	/* Setup the physical region for the FCP RSP */
-	pdma_phys_fcp_rsp = pdma_phys_fcp_cmd + sizeof(struct fcp_cmnd);
+	pdma_phys_fcp_rsp = pdma_phys_fcp_cmd + sizeof(struct fcp_cmnd32);
 	sgl->addr_hi = cpu_to_le32(putPaddrHigh(pdma_phys_fcp_rsp));
 	sgl->addr_lo = cpu_to_le32(putPaddrLow(pdma_phys_fcp_rsp));
-	sgl->word2 = le32_to_cpu(sgl->word2);
-	bf_set(lpfc_sli4_sge_last, sgl, 1);
-	sgl->word2 = cpu_to_le32(sgl->word2);
+	bf_set_le32(lpfc_sli4_sge_last, sgl, 1);
 	sgl->sge_len = cpu_to_le32(sizeof(struct fcp_rsp));

 	if (lpfc_ndlp_check_qdepth(phba, ndlp)) {
@ -2606,7 +2608,7 @@ lpfc_bg_scsi_prep_dma_buf_s3(struct lpfc_hba *phba,
iocb_cmd->ulpLe = 1; iocb_cmd->ulpLe = 1;
fcpdl = lpfc_bg_scsi_adjust_dl(phba, lpfc_cmd); fcpdl = lpfc_bg_scsi_adjust_dl(phba, lpfc_cmd);
fcp_cmnd->fcpDl = be32_to_cpu(fcpdl); fcp_cmnd->fcpDl = cpu_to_be32(fcpdl);
/* /*
* Due to difference in data length between DIF/non-DIF paths, * Due to difference in data length between DIF/non-DIF paths,
@ -3223,14 +3225,18 @@ lpfc_scsi_prep_dma_buf_s4(struct lpfc_hba *phba, struct lpfc_io_buf *lpfc_cmd)
* explicitly reinitialized. * explicitly reinitialized.
* all iocb memory resources are reused. * all iocb memory resources are reused.
*/ */
fcp_cmnd->fcpDl = cpu_to_be32(scsi_bufflen(scsi_cmnd)); if (scsi_cmnd->cmd_len > LPFC_FCP_CDB_LEN)
((struct fcp_cmnd32 *)fcp_cmnd)->fcpDl =
cpu_to_be32(scsi_bufflen(scsi_cmnd));
else
fcp_cmnd->fcpDl = cpu_to_be32(scsi_bufflen(scsi_cmnd));
/* Set first-burst provided it was successfully negotiated */ /* Set first-burst provided it was successfully negotiated */
if (!(phba->hba_flag & HBA_FCOE_MODE) && if (!test_bit(HBA_FCOE_MODE, &phba->hba_flag) &&
vport->cfg_first_burst_size && vport->cfg_first_burst_size &&
scsi_cmnd->sc_data_direction == DMA_TO_DEVICE) { scsi_cmnd->sc_data_direction == DMA_TO_DEVICE) {
u32 init_len, total_len; u32 init_len, total_len;
total_len = be32_to_cpu(fcp_cmnd->fcpDl); total_len = scsi_bufflen(scsi_cmnd);
init_len = min(total_len, vport->cfg_first_burst_size); init_len = min(total_len, vport->cfg_first_burst_size);
/* Word 4 & 5 */ /* Word 4 & 5 */
@ -3418,15 +3424,18 @@ lpfc_bg_scsi_prep_dma_buf_s4(struct lpfc_hba *phba,
} }
fcpdl = lpfc_bg_scsi_adjust_dl(phba, lpfc_cmd); fcpdl = lpfc_bg_scsi_adjust_dl(phba, lpfc_cmd);
fcp_cmnd->fcpDl = be32_to_cpu(fcpdl); if (lpfc_cmd->pCmd->cmd_len > LPFC_FCP_CDB_LEN)
((struct fcp_cmnd32 *)fcp_cmnd)->fcpDl = cpu_to_be32(fcpdl);
else
fcp_cmnd->fcpDl = cpu_to_be32(fcpdl);
/* Set first-burst provided it was successfully negotiated */ /* Set first-burst provided it was successfully negotiated */
if (!(phba->hba_flag & HBA_FCOE_MODE) && if (!test_bit(HBA_FCOE_MODE, &phba->hba_flag) &&
vport->cfg_first_burst_size && vport->cfg_first_burst_size &&
scsi_cmnd->sc_data_direction == DMA_TO_DEVICE) { scsi_cmnd->sc_data_direction == DMA_TO_DEVICE) {
u32 init_len, total_len; u32 init_len, total_len;
total_len = be32_to_cpu(fcp_cmnd->fcpDl); total_len = fcpdl;
init_len = min(total_len, vport->cfg_first_burst_size); init_len = min(total_len, vport->cfg_first_burst_size);
/* Word 4 & 5 */ /* Word 4 & 5 */
@ -3434,8 +3443,7 @@ lpfc_bg_scsi_prep_dma_buf_s4(struct lpfc_hba *phba,
wqe->fcp_iwrite.total_xfer_len = total_len; wqe->fcp_iwrite.total_xfer_len = total_len;
} else { } else {
/* Word 4 */ /* Word 4 */
wqe->fcp_iwrite.total_xfer_len = wqe->fcp_iwrite.total_xfer_len = fcpdl;
be32_to_cpu(fcp_cmnd->fcpDl);
} }
/* /*
@@ -3892,7 +3900,10 @@ lpfc_handle_fcp_err(struct lpfc_vport *vport, struct lpfc_io_buf *lpfc_cmd,
 				 fcprsp->rspInfo3);

 	scsi_set_resid(cmnd, 0);
-	fcpDl = be32_to_cpu(fcpcmd->fcpDl);
+	if (cmnd->cmd_len > LPFC_FCP_CDB_LEN)
+		fcpDl = be32_to_cpu(((struct fcp_cmnd32 *)fcpcmd)->fcpDl);
+	else
+		fcpDl = be32_to_cpu(fcpcmd->fcpDl);
 	if (resp_info & RESID_UNDER) {
 		scsi_set_resid(cmnd, be32_to_cpu(fcprsp->rspResId));

@@ -4721,6 +4732,14 @@ static int lpfc_scsi_prep_cmnd_buf_s4(struct lpfc_vport *vport,
 			bf_set(wqe_iod, &wqe->fcp_iread.wqe_com,
 			       LPFC_WQE_IOD_NONE);
 		}
+
+		/* Additional fcp cdb length field calculation.
+		 * LPFC_FCP_CDB_LEN_32 - normal 16 byte cdb length,
+		 * then divide by 4 for the word count.
+		 * shift 2 because of the RDDATA/WRDATA.
+		 */
+		if (scsi_cmnd->cmd_len > LPFC_FCP_CDB_LEN)
+			fcp_cmnd->fcpCntl3 |= 4 << 2;
 	} else {
 		/* From the icmnd template, initialize words 4 - 11 */
 		memcpy(&wqe->words[4], &lpfc_icmnd_cmd_template.words[4],
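
The fcpCntl3 update above encodes the FCP "additional CDB length" for 32-byte CDBs: the 16 extra bytes are expressed as four 4-byte words and placed above the RDDATA/WRDATA bits, which is where the constant 4 << 2 comes from. A small sketch of the same arithmetic for an arbitrary CDB size (the helper name is illustrative only):

	static u8 fcp_additional_cdb_bits(unsigned int cmd_len)
	{
		if (cmd_len <= LPFC_FCP_CDB_LEN)	/* classic 16-byte CDB */
			return 0;
		/* extra bytes -> 4-byte words, shifted past the two R/W bits */
		return ((cmd_len - LPFC_FCP_CDB_LEN) / 4) << 2;
	}

For cmd_len == LPFC_FCP_CDB_LEN_32 this evaluates to 4 << 2, matching the value OR'd into fcpCntl3 above.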
@ -4741,7 +4760,7 @@ static int lpfc_scsi_prep_cmnd_buf_s4(struct lpfc_vport *vport,
/* Word 3 */ /* Word 3 */
bf_set(payload_offset_len, &wqe->fcp_icmd, bf_set(payload_offset_len, &wqe->fcp_icmd,
sizeof(struct fcp_cmnd) + sizeof(struct fcp_rsp)); sizeof(struct fcp_cmnd32) + sizeof(struct fcp_rsp));
/* Word 6 */ /* Word 6 */
bf_set(wqe_ctxt_tag, &wqe->generic.wqe_com, bf_set(wqe_ctxt_tag, &wqe->generic.wqe_com,
@ -4796,7 +4815,7 @@ lpfc_scsi_prep_cmnd(struct lpfc_vport *vport, struct lpfc_io_buf *lpfc_cmd,
int_to_scsilun(lpfc_cmd->pCmd->device->lun, int_to_scsilun(lpfc_cmd->pCmd->device->lun,
&lpfc_cmd->fcp_cmnd->fcp_lun); &lpfc_cmd->fcp_cmnd->fcp_lun);
ptr = &fcp_cmnd->fcpCdb[0]; ptr = &((struct fcp_cmnd32 *)fcp_cmnd)->fcpCdb[0];
memcpy(ptr, scsi_cmnd->cmnd, scsi_cmnd->cmd_len); memcpy(ptr, scsi_cmnd->cmnd, scsi_cmnd->cmd_len);
if (scsi_cmnd->cmd_len < LPFC_FCP_CDB_LEN) { if (scsi_cmnd->cmd_len < LPFC_FCP_CDB_LEN) {
ptr += scsi_cmnd->cmd_len; ptr += scsi_cmnd->cmd_len;
@ -5041,7 +5060,7 @@ lpfc_check_pci_resettable(struct lpfc_hba *phba)
/* Check for valid Emulex Device ID */ /* Check for valid Emulex Device ID */
if (phba->sli_rev != LPFC_SLI_REV4 || if (phba->sli_rev != LPFC_SLI_REV4 ||
phba->hba_flag & HBA_FCOE_MODE) { test_bit(HBA_FCOE_MODE, &phba->hba_flag)) {
lpfc_printf_log(phba, KERN_INFO, LOG_INIT, lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
"8347 Incapable PCI reset device: " "8347 Incapable PCI reset device: "
"0x%04x\n", ptr->device); "0x%04x\n", ptr->device);
@ -5327,7 +5346,7 @@ lpfc_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *cmnd)
cmnd->cmnd[0], cmnd->cmnd[0],
scsi_prot_ref_tag(cmnd), scsi_prot_ref_tag(cmnd),
scsi_logical_block_count(cmnd), scsi_logical_block_count(cmnd),
(cmnd->cmnd[1]>>5)); scsi_get_prot_type(cmnd));
} }
err = lpfc_bg_scsi_prep_dma_buf(phba, lpfc_cmd); err = lpfc_bg_scsi_prep_dma_buf(phba, lpfc_cmd);
} else { } else {
@ -5518,7 +5537,7 @@ lpfc_abort_handler(struct scsi_cmnd *cmnd)
spin_lock(&phba->hbalock); spin_lock(&phba->hbalock);
/* driver queued commands are in process of being flushed */ /* driver queued commands are in process of being flushed */
if (phba->hba_flag & HBA_IOQ_FLUSH) { if (test_bit(HBA_IOQ_FLUSH, &phba->hba_flag)) {
lpfc_printf_vlog(vport, KERN_WARNING, LOG_FCP, lpfc_printf_vlog(vport, KERN_WARNING, LOG_FCP,
"3168 SCSI Layer abort requested I/O has been " "3168 SCSI Layer abort requested I/O has been "
"flushed by LLD.\n"); "flushed by LLD.\n");

View File

@@ -1,7 +1,7 @@
 /*******************************************************************
  * This file is part of the Emulex Linux Device Driver for         *
  * Fibre Channel Host Bus Adapters.                                *
- * Copyright (C) 2017-2022 Broadcom. All Rights Reserved. The term *
+ * Copyright (C) 2017-2024 Broadcom. All Rights Reserved. The term *
  * Broadcom refers to Broadcom Inc and/or its subsidiaries.        *
  * Copyright (C) 2004-2016 Emulex.  All rights reserved.           *
  * EMULEX and SLI are trademarks of Emulex.                        *
@@ -24,6 +24,7 @@
 struct lpfc_hba;

 #define LPFC_FCP_CDB_LEN	16
+#define LPFC_FCP_CDB_LEN_32	32

 #define list_remove_head(list, entry, type, member)	\
 	do { \
@@ -99,17 +100,11 @@ struct fcp_rsp {
 #define SNSCOD_BADCMD	0x20	/* sense code is byte 13 ([12]) */
 };

-struct fcp_cmnd {
-	struct scsi_lun  fcp_lun;
-	uint8_t fcpCntl0;	/* FCP_CNTL byte 0 (reserved) */
-	uint8_t fcpCntl1;	/* FCP_CNTL byte 1 task codes */
 #define  SIMPLE_Q        0x00
 #define  HEAD_OF_Q       0x01
 #define  ORDERED_Q       0x02
 #define  ACA_Q           0x04
 #define  UNTAGGED        0x05
-	uint8_t fcpCntl2;	/* FCP_CTL byte 2 task management codes */
 #define  FCP_ABORT_TASK_SET   0x02	/* Bit 1 */
 #define  FCP_CLEAR_TASK_SET   0x04	/* bit 2 */
 #define  FCP_BUS_RESET        0x08	/* bit 3 */
@@ -117,12 +112,31 @@ struct fcp_cmnd {
 #define  FCP_TARGET_RESET     0x20	/* bit 5 */
 #define  FCP_CLEAR_ACA        0x40	/* bit 6 */
 #define  FCP_TERMINATE_TASK   0x80	/* bit 7 */
-	uint8_t fcpCntl3;
 #define  WRITE_DATA      0x01	/* Bit 0 */
 #define  READ_DATA       0x02	/* Bit 1 */

+struct fcp_cmnd {
+	struct scsi_lun  fcp_lun;
+	uint8_t fcpCntl0;	/* FCP_CNTL byte 0 (reserved) */
+	uint8_t fcpCntl1;	/* FCP_CNTL byte 1 task codes */
+	uint8_t fcpCntl2;	/* FCP_CTL byte 2 task management codes */
+	uint8_t fcpCntl3;
 	uint8_t fcpCdb[LPFC_FCP_CDB_LEN]; /* SRB cdb field is copied here */
-	uint32_t fcpDl;		/* Total transfer length */
+	__be32 fcpDl;		/* Total transfer length */
+};

+struct fcp_cmnd32 {
+	struct scsi_lun  fcp_lun;
+	uint8_t fcpCntl0;	/* FCP_CNTL byte 0 (reserved) */
+	uint8_t fcpCntl1;	/* FCP_CNTL byte 1 task codes */
+	uint8_t fcpCntl2;	/* FCP_CTL byte 2 task management codes */
+	uint8_t fcpCntl3;
+	uint8_t fcpCdb[LPFC_FCP_CDB_LEN_32]; /* SRB cdb field is copied here */
+	__be32 fcpDl;		/* Total transfer length */
 };
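
With the task-code and task-management #defines hoisted out of the structure, the header can carry both a 16-byte-CDB and a 32-byte-CDB payload layout. The call sites shown earlier select between them on scsi_cmnd->cmd_len; a compact sketch of that selection (the helper name is assumed, not driver code):

	static size_t lpfc_fcp_cmnd_len(const struct scsi_cmnd *sc)
	{
		/* Larger CDBs need the fcp_cmnd32 layout and its bigger fcpCdb[]. */
		return sc->cmd_len > LPFC_FCP_CDB_LEN ?
			sizeof(struct fcp_cmnd32) : sizeof(struct fcp_cmnd);
	}

Typing fcpDl as __be32 also helps sparse flag endian mix-ups such as the fcpDl = be32_to_cpu(fcpdl) assignment corrected in lpfc_bg_scsi_prep_dma_buf_s3 above.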

View File

@ -1024,9 +1024,9 @@ lpfc_handle_rrq_active(struct lpfc_hba *phba)
unsigned long iflags; unsigned long iflags;
LIST_HEAD(send_rrq); LIST_HEAD(send_rrq);
spin_lock_irqsave(&phba->hbalock, iflags); clear_bit(HBA_RRQ_ACTIVE, &phba->hba_flag);
phba->hba_flag &= ~HBA_RRQ_ACTIVE;
next_time = jiffies + msecs_to_jiffies(1000 * (phba->fc_ratov + 1)); next_time = jiffies + msecs_to_jiffies(1000 * (phba->fc_ratov + 1));
spin_lock_irqsave(&phba->rrq_list_lock, iflags);
list_for_each_entry_safe(rrq, nextrrq, list_for_each_entry_safe(rrq, nextrrq,
&phba->active_rrq_list, list) { &phba->active_rrq_list, list) {
if (time_after(jiffies, rrq->rrq_stop_time)) if (time_after(jiffies, rrq->rrq_stop_time))
@ -1034,7 +1034,7 @@ lpfc_handle_rrq_active(struct lpfc_hba *phba)
else if (time_before(rrq->rrq_stop_time, next_time)) else if (time_before(rrq->rrq_stop_time, next_time))
next_time = rrq->rrq_stop_time; next_time = rrq->rrq_stop_time;
} }
spin_unlock_irqrestore(&phba->hbalock, iflags); spin_unlock_irqrestore(&phba->rrq_list_lock, iflags);
if ((!list_empty(&phba->active_rrq_list)) && if ((!list_empty(&phba->active_rrq_list)) &&
(!test_bit(FC_UNLOADING, &phba->pport->load_flag))) (!test_bit(FC_UNLOADING, &phba->pport->load_flag)))
mod_timer(&phba->rrq_tmr, next_time); mod_timer(&phba->rrq_tmr, next_time);
@ -1072,16 +1072,16 @@ lpfc_get_active_rrq(struct lpfc_vport *vport, uint16_t xri, uint32_t did)
if (phba->sli_rev != LPFC_SLI_REV4) if (phba->sli_rev != LPFC_SLI_REV4)
return NULL; return NULL;
spin_lock_irqsave(&phba->hbalock, iflags); spin_lock_irqsave(&phba->rrq_list_lock, iflags);
list_for_each_entry_safe(rrq, nextrrq, &phba->active_rrq_list, list) { list_for_each_entry_safe(rrq, nextrrq, &phba->active_rrq_list, list) {
if (rrq->vport == vport && rrq->xritag == xri && if (rrq->vport == vport && rrq->xritag == xri &&
rrq->nlp_DID == did){ rrq->nlp_DID == did){
list_del(&rrq->list); list_del(&rrq->list);
spin_unlock_irqrestore(&phba->hbalock, iflags); spin_unlock_irqrestore(&phba->rrq_list_lock, iflags);
return rrq; return rrq;
} }
} }
spin_unlock_irqrestore(&phba->hbalock, iflags); spin_unlock_irqrestore(&phba->rrq_list_lock, iflags);
return NULL; return NULL;
} }
@ -1109,7 +1109,7 @@ lpfc_cleanup_vports_rrqs(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
lpfc_sli4_vport_delete_els_xri_aborted(vport); lpfc_sli4_vport_delete_els_xri_aborted(vport);
lpfc_sli4_vport_delete_fcp_xri_aborted(vport); lpfc_sli4_vport_delete_fcp_xri_aborted(vport);
} }
spin_lock_irqsave(&phba->hbalock, iflags); spin_lock_irqsave(&phba->rrq_list_lock, iflags);
list_for_each_entry_safe(rrq, nextrrq, &phba->active_rrq_list, list) { list_for_each_entry_safe(rrq, nextrrq, &phba->active_rrq_list, list) {
if (rrq->vport != vport) if (rrq->vport != vport)
continue; continue;
@ -1118,7 +1118,7 @@ lpfc_cleanup_vports_rrqs(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp)
list_move(&rrq->list, &rrq_list); list_move(&rrq->list, &rrq_list);
} }
spin_unlock_irqrestore(&phba->hbalock, iflags); spin_unlock_irqrestore(&phba->rrq_list_lock, iflags);
list_for_each_entry_safe(rrq, nextrrq, &rrq_list, list) { list_for_each_entry_safe(rrq, nextrrq, &rrq_list, list) {
list_del(&rrq->list); list_del(&rrq->list);
@ -1179,12 +1179,12 @@ lpfc_set_rrq_active(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp,
if (!phba->cfg_enable_rrq) if (!phba->cfg_enable_rrq)
return -EINVAL; return -EINVAL;
spin_lock_irqsave(&phba->hbalock, iflags);
if (test_bit(FC_UNLOADING, &phba->pport->load_flag)) { if (test_bit(FC_UNLOADING, &phba->pport->load_flag)) {
phba->hba_flag &= ~HBA_RRQ_ACTIVE; clear_bit(HBA_RRQ_ACTIVE, &phba->hba_flag);
goto out; goto outnl;
} }
spin_lock_irqsave(&phba->hbalock, iflags);
if (ndlp->vport && test_bit(FC_UNLOADING, &ndlp->vport->load_flag)) if (ndlp->vport && test_bit(FC_UNLOADING, &ndlp->vport->load_flag))
goto out; goto out;
@ -1213,16 +1213,18 @@ lpfc_set_rrq_active(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp,
rrq->nlp_DID = ndlp->nlp_DID; rrq->nlp_DID = ndlp->nlp_DID;
rrq->vport = ndlp->vport; rrq->vport = ndlp->vport;
rrq->rxid = rxid; rrq->rxid = rxid;
spin_lock_irqsave(&phba->hbalock, iflags);
spin_lock_irqsave(&phba->rrq_list_lock, iflags);
empty = list_empty(&phba->active_rrq_list); empty = list_empty(&phba->active_rrq_list);
list_add_tail(&rrq->list, &phba->active_rrq_list); list_add_tail(&rrq->list, &phba->active_rrq_list);
phba->hba_flag |= HBA_RRQ_ACTIVE; spin_unlock_irqrestore(&phba->rrq_list_lock, iflags);
spin_unlock_irqrestore(&phba->hbalock, iflags); set_bit(HBA_RRQ_ACTIVE, &phba->hba_flag);
if (empty) if (empty)
lpfc_worker_wake_up(phba); lpfc_worker_wake_up(phba);
return 0; return 0;
out: out:
spin_unlock_irqrestore(&phba->hbalock, iflags); spin_unlock_irqrestore(&phba->hbalock, iflags);
outnl:
lpfc_printf_log(phba, KERN_INFO, LOG_SLI, lpfc_printf_log(phba, KERN_INFO, LOG_SLI,
"2921 Can't set rrq active xri:0x%x rxid:0x%x" "2921 Can't set rrq active xri:0x%x rxid:0x%x"
" DID:0x%x Send:%d\n", " DID:0x%x Send:%d\n",
@ -3937,7 +3939,7 @@ void lpfc_poll_eratt(struct timer_list *t)
uint64_t sli_intr, cnt; uint64_t sli_intr, cnt;
phba = from_timer(phba, t, eratt_poll); phba = from_timer(phba, t, eratt_poll);
if (!(phba->hba_flag & HBA_SETUP)) if (!test_bit(HBA_SETUP, &phba->hba_flag))
return; return;
if (test_bit(FC_UNLOADING, &phba->pport->load_flag)) if (test_bit(FC_UNLOADING, &phba->pport->load_flag))
@ -4522,9 +4524,7 @@ lpfc_sli_handle_slow_ring_event_s4(struct lpfc_hba *phba,
unsigned long iflag; unsigned long iflag;
int count = 0; int count = 0;
spin_lock_irqsave(&phba->hbalock, iflag); clear_bit(HBA_SP_QUEUE_EVT, &phba->hba_flag);
phba->hba_flag &= ~HBA_SP_QUEUE_EVT;
spin_unlock_irqrestore(&phba->hbalock, iflag);
while (!list_empty(&phba->sli4_hba.sp_queue_event)) { while (!list_empty(&phba->sli4_hba.sp_queue_event)) {
/* Get the response iocb from the head of work queue */ /* Get the response iocb from the head of work queue */
spin_lock_irqsave(&phba->hbalock, iflag); spin_lock_irqsave(&phba->hbalock, iflag);
@ -4681,10 +4681,8 @@ lpfc_sli_flush_io_rings(struct lpfc_hba *phba)
uint32_t i; uint32_t i;
struct lpfc_iocbq *piocb, *next_iocb; struct lpfc_iocbq *piocb, *next_iocb;
spin_lock_irq(&phba->hbalock);
/* Indicate the I/O queues are flushed */ /* Indicate the I/O queues are flushed */
phba->hba_flag |= HBA_IOQ_FLUSH; set_bit(HBA_IOQ_FLUSH, &phba->hba_flag);
spin_unlock_irq(&phba->hbalock);
/* Look on all the FCP Rings for the iotag */ /* Look on all the FCP Rings for the iotag */
if (phba->sli_rev >= LPFC_SLI_REV4) { if (phba->sli_rev >= LPFC_SLI_REV4) {
@ -4762,7 +4760,7 @@ lpfc_sli_brdready_s3(struct lpfc_hba *phba, uint32_t mask)
if (lpfc_readl(phba->HSregaddr, &status)) if (lpfc_readl(phba->HSregaddr, &status))
return 1; return 1;
phba->hba_flag |= HBA_NEEDS_CFG_PORT; set_bit(HBA_NEEDS_CFG_PORT, &phba->hba_flag);
/* /*
* Check status register every 100ms for 5 retries, then every * Check status register every 100ms for 5 retries, then every
@ -4841,7 +4839,7 @@ lpfc_sli_brdready_s4(struct lpfc_hba *phba, uint32_t mask)
} else } else
phba->sli4_hba.intr_enable = 0; phba->sli4_hba.intr_enable = 0;
phba->hba_flag &= ~HBA_SETUP; clear_bit(HBA_SETUP, &phba->hba_flag);
return retval; return retval;
} }
@ -5093,7 +5091,7 @@ lpfc_sli_brdreset(struct lpfc_hba *phba)
/* perform board reset */ /* perform board reset */
phba->fc_eventTag = 0; phba->fc_eventTag = 0;
phba->link_events = 0; phba->link_events = 0;
phba->hba_flag |= HBA_NEEDS_CFG_PORT; set_bit(HBA_NEEDS_CFG_PORT, &phba->hba_flag);
if (phba->pport) { if (phba->pport) {
phba->pport->fc_myDID = 0; phba->pport->fc_myDID = 0;
phba->pport->fc_prevDID = 0; phba->pport->fc_prevDID = 0;
@ -5153,7 +5151,7 @@ lpfc_sli4_brdreset(struct lpfc_hba *phba)
/* Reset HBA */ /* Reset HBA */
lpfc_printf_log(phba, KERN_INFO, LOG_SLI, lpfc_printf_log(phba, KERN_INFO, LOG_SLI,
"0295 Reset HBA Data: x%x x%x x%x\n", "0295 Reset HBA Data: x%x x%x x%lx\n",
phba->pport->port_state, psli->sli_flag, phba->pport->port_state, psli->sli_flag,
phba->hba_flag); phba->hba_flag);
@ -5162,7 +5160,7 @@ lpfc_sli4_brdreset(struct lpfc_hba *phba)
phba->link_events = 0; phba->link_events = 0;
phba->pport->fc_myDID = 0; phba->pport->fc_myDID = 0;
phba->pport->fc_prevDID = 0; phba->pport->fc_prevDID = 0;
phba->hba_flag &= ~HBA_SETUP; clear_bit(HBA_SETUP, &phba->hba_flag);
spin_lock_irq(&phba->hbalock); spin_lock_irq(&phba->hbalock);
psli->sli_flag &= ~(LPFC_PROCESS_LA); psli->sli_flag &= ~(LPFC_PROCESS_LA);
@ -5406,7 +5404,7 @@ lpfc_sli_chipset_init(struct lpfc_hba *phba)
return -EIO; return -EIO;
} }
phba->hba_flag |= HBA_NEEDS_CFG_PORT; set_bit(HBA_NEEDS_CFG_PORT, &phba->hba_flag);
/* Clear all interrupt enable conditions */ /* Clear all interrupt enable conditions */
writel(0, phba->HCregaddr); writel(0, phba->HCregaddr);
@ -5708,11 +5706,11 @@ lpfc_sli_hba_setup(struct lpfc_hba *phba)
int longs; int longs;
/* Enable ISR already does config_port because of config_msi mbx */ /* Enable ISR already does config_port because of config_msi mbx */
if (phba->hba_flag & HBA_NEEDS_CFG_PORT) { if (test_bit(HBA_NEEDS_CFG_PORT, &phba->hba_flag)) {
rc = lpfc_sli_config_port(phba, LPFC_SLI_REV3); rc = lpfc_sli_config_port(phba, LPFC_SLI_REV3);
if (rc) if (rc)
return -EIO; return -EIO;
phba->hba_flag &= ~HBA_NEEDS_CFG_PORT; clear_bit(HBA_NEEDS_CFG_PORT, &phba->hba_flag);
} }
phba->fcp_embed_io = 0; /* SLI4 FC support only */ phba->fcp_embed_io = 0; /* SLI4 FC support only */
@ -7759,7 +7757,7 @@ lpfc_set_host_data(struct lpfc_hba *phba, LPFC_MBOXQ_t *mbox)
snprintf(mbox->u.mqe.un.set_host_data.un.data, snprintf(mbox->u.mqe.un.set_host_data.un.data,
LPFC_HOST_OS_DRIVER_VERSION_SIZE, LPFC_HOST_OS_DRIVER_VERSION_SIZE,
"Linux %s v"LPFC_DRIVER_VERSION, "Linux %s v"LPFC_DRIVER_VERSION,
(phba->hba_flag & HBA_FCOE_MODE) ? "FCoE" : "FC"); test_bit(HBA_FCOE_MODE, &phba->hba_flag) ? "FCoE" : "FC");
} }
int int
@ -8487,7 +8485,7 @@ lpfc_sli4_hba_setup(struct lpfc_hba *phba)
spin_unlock_irq(&phba->hbalock); spin_unlock_irq(&phba->hbalock);
} }
} }
phba->hba_flag &= ~HBA_SETUP; clear_bit(HBA_SETUP, &phba->hba_flag);
lpfc_sli4_dip(phba); lpfc_sli4_dip(phba);
@ -8516,25 +8514,26 @@ lpfc_sli4_hba_setup(struct lpfc_hba *phba)
mqe = &mboxq->u.mqe; mqe = &mboxq->u.mqe;
phba->sli_rev = bf_get(lpfc_mbx_rd_rev_sli_lvl, &mqe->un.read_rev); phba->sli_rev = bf_get(lpfc_mbx_rd_rev_sli_lvl, &mqe->un.read_rev);
if (bf_get(lpfc_mbx_rd_rev_fcoe, &mqe->un.read_rev)) { if (bf_get(lpfc_mbx_rd_rev_fcoe, &mqe->un.read_rev)) {
phba->hba_flag |= HBA_FCOE_MODE; set_bit(HBA_FCOE_MODE, &phba->hba_flag);
phba->fcp_embed_io = 0; /* SLI4 FC support only */ phba->fcp_embed_io = 0; /* SLI4 FC support only */
} else { } else {
phba->hba_flag &= ~HBA_FCOE_MODE; clear_bit(HBA_FCOE_MODE, &phba->hba_flag);
} }
if (bf_get(lpfc_mbx_rd_rev_cee_ver, &mqe->un.read_rev) == if (bf_get(lpfc_mbx_rd_rev_cee_ver, &mqe->un.read_rev) ==
LPFC_DCBX_CEE_MODE) LPFC_DCBX_CEE_MODE)
phba->hba_flag |= HBA_FIP_SUPPORT; set_bit(HBA_FIP_SUPPORT, &phba->hba_flag);
else else
phba->hba_flag &= ~HBA_FIP_SUPPORT; clear_bit(HBA_FIP_SUPPORT, &phba->hba_flag);
phba->hba_flag &= ~HBA_IOQ_FLUSH; clear_bit(HBA_IOQ_FLUSH, &phba->hba_flag);
if (phba->sli_rev != LPFC_SLI_REV4) { if (phba->sli_rev != LPFC_SLI_REV4) {
lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT, lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
"0376 READ_REV Error. SLI Level %d " "0376 READ_REV Error. SLI Level %d "
"FCoE enabled %d\n", "FCoE enabled %d\n",
phba->sli_rev, phba->hba_flag & HBA_FCOE_MODE); phba->sli_rev,
test_bit(HBA_FCOE_MODE, &phba->hba_flag) ? 1 : 0);
rc = -EIO; rc = -EIO;
kfree(vpd); kfree(vpd);
goto out_free_mbox; goto out_free_mbox;
@ -8549,7 +8548,7 @@ lpfc_sli4_hba_setup(struct lpfc_hba *phba)
* to read FCoE param config regions, only read parameters if the * to read FCoE param config regions, only read parameters if the
* board is FCoE * board is FCoE
*/ */
if (phba->hba_flag & HBA_FCOE_MODE && if (test_bit(HBA_FCOE_MODE, &phba->hba_flag) &&
lpfc_sli4_read_fcoe_params(phba)) lpfc_sli4_read_fcoe_params(phba))
lpfc_printf_log(phba, KERN_WARNING, LOG_MBOX | LOG_INIT, lpfc_printf_log(phba, KERN_WARNING, LOG_MBOX | LOG_INIT,
"2570 Failed to read FCoE parameters\n"); "2570 Failed to read FCoE parameters\n");
@ -8626,7 +8625,7 @@ lpfc_sli4_hba_setup(struct lpfc_hba *phba)
lpfc_set_features(phba, mboxq, LPFC_SET_UE_RECOVERY); lpfc_set_features(phba, mboxq, LPFC_SET_UE_RECOVERY);
rc = lpfc_sli_issue_mbox(phba, mboxq, MBX_POLL); rc = lpfc_sli_issue_mbox(phba, mboxq, MBX_POLL);
if (rc == MBX_SUCCESS) { if (rc == MBX_SUCCESS) {
phba->hba_flag |= HBA_RECOVERABLE_UE; set_bit(HBA_RECOVERABLE_UE, &phba->hba_flag);
/* Set 1Sec interval to detect UE */ /* Set 1Sec interval to detect UE */
phba->eratt_poll_interval = 1; phba->eratt_poll_interval = 1;
phba->sli4_hba.ue_to_sr = bf_get( phba->sli4_hba.ue_to_sr = bf_get(
@ -8677,7 +8676,7 @@ lpfc_sli4_hba_setup(struct lpfc_hba *phba)
} }
/* Performance Hints are ONLY for FCoE */ /* Performance Hints are ONLY for FCoE */
if (phba->hba_flag & HBA_FCOE_MODE) { if (test_bit(HBA_FCOE_MODE, &phba->hba_flag)) {
if (bf_get(lpfc_mbx_rq_ftr_rsp_perfh, &mqe->un.req_ftrs)) if (bf_get(lpfc_mbx_rq_ftr_rsp_perfh, &mqe->un.req_ftrs))
phba->sli3_options |= LPFC_SLI4_PERFH_ENABLED; phba->sli3_options |= LPFC_SLI4_PERFH_ENABLED;
else else
@ -8936,7 +8935,7 @@ lpfc_sli4_hba_setup(struct lpfc_hba *phba)
} }
lpfc_sli4_node_prep(phba); lpfc_sli4_node_prep(phba);
if (!(phba->hba_flag & HBA_FCOE_MODE)) { if (!test_bit(HBA_FCOE_MODE, &phba->hba_flag)) {
if ((phba->nvmet_support == 0) || (phba->cfg_nvmet_mrq == 1)) { if ((phba->nvmet_support == 0) || (phba->cfg_nvmet_mrq == 1)) {
/* /*
* The FC Port needs to register FCFI (index 0) * The FC Port needs to register FCFI (index 0)
@ -9012,7 +9011,8 @@ lpfc_sli4_hba_setup(struct lpfc_hba *phba)
/* Start heart beat timer */ /* Start heart beat timer */
mod_timer(&phba->hb_tmofunc, mod_timer(&phba->hb_tmofunc,
jiffies + msecs_to_jiffies(1000 * LPFC_HB_MBOX_INTERVAL)); jiffies + msecs_to_jiffies(1000 * LPFC_HB_MBOX_INTERVAL));
phba->hba_flag &= ~(HBA_HBEAT_INP | HBA_HBEAT_TMO); clear_bit(HBA_HBEAT_INP, &phba->hba_flag);
clear_bit(HBA_HBEAT_TMO, &phba->hba_flag);
phba->last_completion_time = jiffies; phba->last_completion_time = jiffies;
/* start eq_delay heartbeat */ /* start eq_delay heartbeat */
@ -9054,8 +9054,8 @@ lpfc_sli4_hba_setup(struct lpfc_hba *phba)
/* Setup CMF after HBA is initialized */ /* Setup CMF after HBA is initialized */
lpfc_cmf_setup(phba); lpfc_cmf_setup(phba);
if (!(phba->hba_flag & HBA_FCOE_MODE) && if (!test_bit(HBA_FCOE_MODE, &phba->hba_flag) &&
(phba->hba_flag & LINK_DISABLED)) { test_bit(LINK_DISABLED, &phba->hba_flag)) {
lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT, lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
"3103 Adapter Link is disabled.\n"); "3103 Adapter Link is disabled.\n");
lpfc_down_link(phba, mboxq); lpfc_down_link(phba, mboxq);
@ -9079,7 +9079,7 @@ lpfc_sli4_hba_setup(struct lpfc_hba *phba)
/* Enable RAS FW log support */ /* Enable RAS FW log support */
lpfc_sli4_ras_setup(phba); lpfc_sli4_ras_setup(phba);
phba->hba_flag |= HBA_SETUP; set_bit(HBA_SETUP, &phba->hba_flag);
return rc; return rc;
out_io_buff_free: out_io_buff_free:
@ -9383,7 +9383,7 @@ lpfc_sli_issue_mbox_s3(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmbox,
} }
/* If HBA has a deferred error attention, fail the iocb. */ /* If HBA has a deferred error attention, fail the iocb. */
if (unlikely(phba->hba_flag & DEFER_ERATT)) { if (unlikely(test_bit(DEFER_ERATT, &phba->hba_flag))) {
spin_unlock_irqrestore(&phba->hbalock, drvr_flag); spin_unlock_irqrestore(&phba->hbalock, drvr_flag);
goto out_not_finished; goto out_not_finished;
} }
@ -10447,7 +10447,7 @@ __lpfc_sli_issue_iocb_s3(struct lpfc_hba *phba, uint32_t ring_number,
return IOCB_ERROR; return IOCB_ERROR;
/* If HBA has a deferred error attention, fail the iocb. */ /* If HBA has a deferred error attention, fail the iocb. */
if (unlikely(phba->hba_flag & DEFER_ERATT)) if (unlikely(test_bit(DEFER_ERATT, &phba->hba_flag)))
return IOCB_ERROR; return IOCB_ERROR;
/* /*
@@ -10595,18 +10595,18 @@ lpfc_prep_embed_io(struct lpfc_hba *phba, struct lpfc_io_buf *lpfc_cmd)
 						 BUFF_TYPE_BDE_IMMED;
 		wqe->generic.bde.tus.f.bdeSize = sgl->sge_len;
 		wqe->generic.bde.addrHigh = 0;
-		wqe->generic.bde.addrLow = 88;	/* Word 22 */
+		wqe->generic.bde.addrLow = 72;	/* Word 18 */

 		bf_set(wqe_wqes, &wqe->fcp_iwrite.wqe_com, 1);
 		bf_set(wqe_dbde, &wqe->fcp_iwrite.wqe_com, 0);

-		/* Word 22-29 FCP CMND Payload */
-		ptr = &wqe->words[22];
-		memcpy(ptr, fcp_cmnd, sizeof(struct fcp_cmnd));
+		/* Word 18-29 FCP CMND Payload */
+		ptr = &wqe->words[18];
+		memcpy(ptr, fcp_cmnd, sgl->sge_len);
 	} else {
 		/* Word 0-2 - Inline BDE */
 		wqe->generic.bde.tus.f.bdeFlags = BUFF_TYPE_BDE_64;

-		wqe->generic.bde.tus.f.bdeSize = sizeof(struct fcp_cmnd);
+		wqe->generic.bde.tus.f.bdeSize = sgl->sge_len;
 		wqe->generic.bde.addrHigh = sgl->addr_hi;
 		wqe->generic.bde.addrLow = sgl->addr_lo;
@ -12361,10 +12361,10 @@ lpfc_ignore_els_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
/* ELS cmd tag <ulpIoTag> completes */ /* ELS cmd tag <ulpIoTag> completes */
lpfc_printf_log(phba, KERN_INFO, LOG_ELS, lpfc_printf_log(phba, KERN_INFO, LOG_ELS,
"0139 Ignoring ELS cmd code x%x completion Data: " "0139 Ignoring ELS cmd code x%x ref cnt x%x Data: "
"x%x x%x x%x x%px\n", "x%x x%x x%x x%px\n",
ulp_command, ulp_status, ulp_word4, iotag, ulp_command, kref_read(&cmdiocb->ndlp->kref),
cmdiocb->ndlp); ulp_status, ulp_word4, iotag, cmdiocb->ndlp);
/* /*
* Deref the ndlp after free_iocb. sli_release_iocb will access the ndlp * Deref the ndlp after free_iocb. sli_release_iocb will access the ndlp
* if exchange is busy. * if exchange is busy.
@ -12460,7 +12460,9 @@ lpfc_sli_issue_abort_iotag(struct lpfc_hba *phba, struct lpfc_sli_ring *pring,
} }
} }
if (phba->link_state < LPFC_LINK_UP || /* Just close the exchange under certain conditions. */
if (test_bit(FC_UNLOADING, &vport->load_flag) ||
phba->link_state < LPFC_LINK_UP ||
(phba->sli_rev == LPFC_SLI_REV4 && (phba->sli_rev == LPFC_SLI_REV4 &&
phba->sli4_hba.link_state.status == LPFC_FC_LA_TYPE_LINK_DOWN) || phba->sli4_hba.link_state.status == LPFC_FC_LA_TYPE_LINK_DOWN) ||
(phba->link_flag & LS_EXTERNAL_LOOPBACK)) (phba->link_flag & LS_EXTERNAL_LOOPBACK))
@ -12507,10 +12509,10 @@ abort_iotag_exit:
lpfc_printf_vlog(vport, KERN_INFO, LOG_SLI, lpfc_printf_vlog(vport, KERN_INFO, LOG_SLI,
"0339 Abort IO XRI x%x, Original iotag x%x, " "0339 Abort IO XRI x%x, Original iotag x%x, "
"abort tag x%x Cmdjob : x%px Abortjob : x%px " "abort tag x%x Cmdjob : x%px Abortjob : x%px "
"retval x%x\n", "retval x%x : IA %d\n",
ulp_context, (phba->sli_rev == LPFC_SLI_REV4) ? ulp_context, (phba->sli_rev == LPFC_SLI_REV4) ?
cmdiocb->iotag : iotag, iotag, cmdiocb, abtsiocbp, cmdiocb->iotag : iotag, iotag, cmdiocb, abtsiocbp,
retval); retval, ia);
if (retval) { if (retval) {
cmdiocb->cmd_flag &= ~LPFC_DRIVER_ABORTED; cmdiocb->cmd_flag &= ~LPFC_DRIVER_ABORTED;
__lpfc_sli_release_iocbq(phba, abtsiocbp); __lpfc_sli_release_iocbq(phba, abtsiocbp);
@ -12775,7 +12777,7 @@ lpfc_sli_abort_iocb(struct lpfc_vport *vport, u16 tgt_id, u64 lun_id,
int i; int i;
/* all I/Os are in process of being flushed */ /* all I/Os are in process of being flushed */
if (phba->hba_flag & HBA_IOQ_FLUSH) if (test_bit(HBA_IOQ_FLUSH, &phba->hba_flag))
return errcnt; return errcnt;
for (i = 1; i <= phba->sli.last_iotag; i++) { for (i = 1; i <= phba->sli.last_iotag; i++) {
@ -12845,15 +12847,13 @@ lpfc_sli_abort_taskmgmt(struct lpfc_vport *vport, struct lpfc_sli_ring *pring,
u16 ulp_context, iotag, cqid = LPFC_WQE_CQ_ID_DEFAULT; u16 ulp_context, iotag, cqid = LPFC_WQE_CQ_ID_DEFAULT;
bool ia; bool ia;
spin_lock_irqsave(&phba->hbalock, iflags);
/* all I/Os are in process of being flushed */ /* all I/Os are in process of being flushed */
if (phba->hba_flag & HBA_IOQ_FLUSH) { if (test_bit(HBA_IOQ_FLUSH, &phba->hba_flag))
spin_unlock_irqrestore(&phba->hbalock, iflags);
return 0; return 0;
}
sum = 0; sum = 0;
spin_lock_irqsave(&phba->hbalock, iflags);
for (i = 1; i <= phba->sli.last_iotag; i++) { for (i = 1; i <= phba->sli.last_iotag; i++) {
iocbq = phba->sli.iocbq_lookup[i]; iocbq = phba->sli.iocbq_lookup[i];
@ -13385,7 +13385,7 @@ lpfc_sli_eratt_read(struct lpfc_hba *phba)
if ((HS_FFER1 & phba->work_hs) && if ((HS_FFER1 & phba->work_hs) &&
((HS_FFER2 | HS_FFER3 | HS_FFER4 | HS_FFER5 | ((HS_FFER2 | HS_FFER3 | HS_FFER4 | HS_FFER5 |
HS_FFER6 | HS_FFER7 | HS_FFER8) & phba->work_hs)) { HS_FFER6 | HS_FFER7 | HS_FFER8) & phba->work_hs)) {
phba->hba_flag |= DEFER_ERATT; set_bit(DEFER_ERATT, &phba->hba_flag);
/* Clear all interrupt enable conditions */ /* Clear all interrupt enable conditions */
writel(0, phba->HCregaddr); writel(0, phba->HCregaddr);
readl(phba->HCregaddr); readl(phba->HCregaddr);
@ -13394,7 +13394,7 @@ lpfc_sli_eratt_read(struct lpfc_hba *phba)
/* Set the driver HA work bitmap */ /* Set the driver HA work bitmap */
phba->work_ha |= HA_ERATT; phba->work_ha |= HA_ERATT;
/* Indicate polling handles this ERATT */ /* Indicate polling handles this ERATT */
phba->hba_flag |= HBA_ERATT_HANDLED; set_bit(HBA_ERATT_HANDLED, &phba->hba_flag);
return 1; return 1;
} }
return 0; return 0;
@ -13405,7 +13405,7 @@ unplug_err:
/* Set the driver HA work bitmap */ /* Set the driver HA work bitmap */
phba->work_ha |= HA_ERATT; phba->work_ha |= HA_ERATT;
/* Indicate polling handles this ERATT */ /* Indicate polling handles this ERATT */
phba->hba_flag |= HBA_ERATT_HANDLED; set_bit(HBA_ERATT_HANDLED, &phba->hba_flag);
return 1; return 1;
} }
@ -13441,7 +13441,7 @@ lpfc_sli4_eratt_read(struct lpfc_hba *phba)
&uerr_sta_hi)) { &uerr_sta_hi)) {
phba->work_hs |= UNPLUG_ERR; phba->work_hs |= UNPLUG_ERR;
phba->work_ha |= HA_ERATT; phba->work_ha |= HA_ERATT;
phba->hba_flag |= HBA_ERATT_HANDLED; set_bit(HBA_ERATT_HANDLED, &phba->hba_flag);
return 1; return 1;
} }
if ((~phba->sli4_hba.ue_mask_lo & uerr_sta_lo) || if ((~phba->sli4_hba.ue_mask_lo & uerr_sta_lo) ||
@ -13457,7 +13457,7 @@ lpfc_sli4_eratt_read(struct lpfc_hba *phba)
phba->work_status[0] = uerr_sta_lo; phba->work_status[0] = uerr_sta_lo;
phba->work_status[1] = uerr_sta_hi; phba->work_status[1] = uerr_sta_hi;
phba->work_ha |= HA_ERATT; phba->work_ha |= HA_ERATT;
phba->hba_flag |= HBA_ERATT_HANDLED; set_bit(HBA_ERATT_HANDLED, &phba->hba_flag);
return 1; return 1;
} }
break; break;
@ -13469,7 +13469,7 @@ lpfc_sli4_eratt_read(struct lpfc_hba *phba)
&portsmphr)){ &portsmphr)){
phba->work_hs |= UNPLUG_ERR; phba->work_hs |= UNPLUG_ERR;
phba->work_ha |= HA_ERATT; phba->work_ha |= HA_ERATT;
phba->hba_flag |= HBA_ERATT_HANDLED; set_bit(HBA_ERATT_HANDLED, &phba->hba_flag);
return 1; return 1;
} }
if (bf_get(lpfc_sliport_status_err, &portstat_reg)) { if (bf_get(lpfc_sliport_status_err, &portstat_reg)) {
@ -13492,7 +13492,7 @@ lpfc_sli4_eratt_read(struct lpfc_hba *phba)
phba->work_status[0], phba->work_status[0],
phba->work_status[1]); phba->work_status[1]);
phba->work_ha |= HA_ERATT; phba->work_ha |= HA_ERATT;
phba->hba_flag |= HBA_ERATT_HANDLED; set_bit(HBA_ERATT_HANDLED, &phba->hba_flag);
return 1; return 1;
} }
break; break;
@ -13529,22 +13529,18 @@ lpfc_sli_check_eratt(struct lpfc_hba *phba)
return 0; return 0;
/* Check if interrupt handler handles this ERATT */ /* Check if interrupt handler handles this ERATT */
spin_lock_irq(&phba->hbalock); if (test_bit(HBA_ERATT_HANDLED, &phba->hba_flag))
if (phba->hba_flag & HBA_ERATT_HANDLED) {
/* Interrupt handler has handled ERATT */ /* Interrupt handler has handled ERATT */
spin_unlock_irq(&phba->hbalock);
return 0; return 0;
}
/* /*
* If there is deferred error attention, do not check for error * If there is deferred error attention, do not check for error
* attention * attention
*/ */
if (unlikely(phba->hba_flag & DEFER_ERATT)) { if (unlikely(test_bit(DEFER_ERATT, &phba->hba_flag)))
spin_unlock_irq(&phba->hbalock);
return 0; return 0;
}
spin_lock_irq(&phba->hbalock);
/* If PCI channel is offline, don't process it */ /* If PCI channel is offline, don't process it */
if (unlikely(pci_channel_offline(phba->pcidev))) { if (unlikely(pci_channel_offline(phba->pcidev))) {
spin_unlock_irq(&phba->hbalock); spin_unlock_irq(&phba->hbalock);
@ -13666,19 +13662,17 @@ lpfc_sli_sp_intr_handler(int irq, void *dev_id)
ha_copy &= ~HA_ERATT; ha_copy &= ~HA_ERATT;
/* Check the need for handling ERATT in interrupt handler */ /* Check the need for handling ERATT in interrupt handler */
if (ha_copy & HA_ERATT) { if (ha_copy & HA_ERATT) {
if (phba->hba_flag & HBA_ERATT_HANDLED) if (test_and_set_bit(HBA_ERATT_HANDLED,
&phba->hba_flag))
/* ERATT polling has handled ERATT */ /* ERATT polling has handled ERATT */
ha_copy &= ~HA_ERATT; ha_copy &= ~HA_ERATT;
else
/* Indicate interrupt handler handles ERATT */
phba->hba_flag |= HBA_ERATT_HANDLED;
} }
/* /*
* If there is deferred error attention, do not check for any * If there is deferred error attention, do not check for any
* interrupt. * interrupt.
*/ */
if (unlikely(phba->hba_flag & DEFER_ERATT)) { if (unlikely(test_bit(DEFER_ERATT, &phba->hba_flag))) {
spin_unlock_irqrestore(&phba->hbalock, iflag); spin_unlock_irqrestore(&phba->hbalock, iflag);
return IRQ_NONE; return IRQ_NONE;
} }
@ -13774,7 +13768,7 @@ lpfc_sli_sp_intr_handler(int irq, void *dev_id)
((HS_FFER2 | HS_FFER3 | HS_FFER4 | HS_FFER5 | ((HS_FFER2 | HS_FFER3 | HS_FFER4 | HS_FFER5 |
HS_FFER6 | HS_FFER7 | HS_FFER8) & HS_FFER6 | HS_FFER7 | HS_FFER8) &
phba->work_hs)) { phba->work_hs)) {
phba->hba_flag |= DEFER_ERATT; set_bit(DEFER_ERATT, &phba->hba_flag);
/* Clear all interrupt enable conditions */ /* Clear all interrupt enable conditions */
writel(0, phba->HCregaddr); writel(0, phba->HCregaddr);
readl(phba->HCregaddr); readl(phba->HCregaddr);
@ -13961,16 +13955,16 @@ lpfc_sli_fp_intr_handler(int irq, void *dev_id)
/* Need to read HA REG for FCP ring and other ring events */ /* Need to read HA REG for FCP ring and other ring events */
if (lpfc_readl(phba->HAregaddr, &ha_copy)) if (lpfc_readl(phba->HAregaddr, &ha_copy))
return IRQ_HANDLED; return IRQ_HANDLED;
/* Clear up only attention source related to fast-path */
spin_lock_irqsave(&phba->hbalock, iflag);
/* /*
* If there is deferred error attention, do not check for * If there is deferred error attention, do not check for
* any interrupt. * any interrupt.
*/ */
if (unlikely(phba->hba_flag & DEFER_ERATT)) { if (unlikely(test_bit(DEFER_ERATT, &phba->hba_flag)))
spin_unlock_irqrestore(&phba->hbalock, iflag);
return IRQ_NONE; return IRQ_NONE;
}
/* Clear up only attention source related to fast-path */
spin_lock_irqsave(&phba->hbalock, iflag);
writel((ha_copy & (HA_R0_CLR_MSK | HA_R1_CLR_MSK)), writel((ha_copy & (HA_R0_CLR_MSK | HA_R1_CLR_MSK)),
phba->HAregaddr); phba->HAregaddr);
readl(phba->HAregaddr); /* flush */ readl(phba->HAregaddr); /* flush */
@@ -14053,18 +14047,15 @@ lpfc_sli_intr_handler(int irq, void *dev_id)
 			spin_unlock(&phba->hbalock);
 			return IRQ_NONE;
 		} else if (phba->ha_copy & HA_ERATT) {
-			if (phba->hba_flag & HBA_ERATT_HANDLED)
+			if (test_and_set_bit(HBA_ERATT_HANDLED, &phba->hba_flag))
 				/* ERATT polling has handled ERATT */
 				phba->ha_copy &= ~HA_ERATT;
-			else
-				/* Indicate interrupt handler handles ERATT */
-				phba->hba_flag |= HBA_ERATT_HANDLED;
 		}

 		/*
 		 * If there is deferred error attention, do not check for any interrupt.
 		 */
-		if (unlikely(phba->hba_flag & DEFER_ERATT)) {
+		if (unlikely(test_bit(DEFER_ERATT, &phba->hba_flag))) {
 			spin_unlock(&phba->hbalock);
 			return IRQ_NONE;
 		}
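
The dominant pattern in this file is the one visible in the two hunks above: hba_flag is now an unsigned long bitmap, so lock-protected |= / &= sequences collapse into set_bit()/clear_bit(), and the check-then-claim dance becomes a single test_and_set_bit(). A hedged sketch of the claim step (the wrapper name is mine, not the driver's):

	/* Returns true if this context is the one that gets to handle ERATT. */
	static bool claim_eratt(struct lpfc_hba *phba)
	{
		return !test_and_set_bit(HBA_ERATT_HANDLED, &phba->hba_flag);
	}

Because the bit operations are atomic, the remaining hbalock critical sections shrink to just the work that really needs them.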
@ -14135,9 +14126,7 @@ void lpfc_sli4_els_xri_abort_event_proc(struct lpfc_hba *phba)
unsigned long iflags; unsigned long iflags;
/* First, declare the els xri abort event has been handled */ /* First, declare the els xri abort event has been handled */
spin_lock_irqsave(&phba->hbalock, iflags); clear_bit(ELS_XRI_ABORT_EVENT, &phba->hba_flag);
phba->hba_flag &= ~ELS_XRI_ABORT_EVENT;
spin_unlock_irqrestore(&phba->hbalock, iflags);
/* Now, handle all the els xri abort events */ /* Now, handle all the els xri abort events */
spin_lock_irqsave(&phba->sli4_hba.els_xri_abrt_list_lock, iflags); spin_lock_irqsave(&phba->sli4_hba.els_xri_abrt_list_lock, iflags);
@ -14263,9 +14252,7 @@ lpfc_sli4_sp_handle_async_event(struct lpfc_hba *phba, struct lpfc_mcqe *mcqe)
spin_unlock_irqrestore(&phba->sli4_hba.asynce_list_lock, iflags); spin_unlock_irqrestore(&phba->sli4_hba.asynce_list_lock, iflags);
/* Set the async event flag */ /* Set the async event flag */
spin_lock_irqsave(&phba->hbalock, iflags); set_bit(ASYNC_EVENT, &phba->hba_flag);
phba->hba_flag |= ASYNC_EVENT;
spin_unlock_irqrestore(&phba->hbalock, iflags);
return true; return true;
} }
@ -14505,8 +14492,8 @@ lpfc_sli4_sp_handle_els_wcqe(struct lpfc_hba *phba, struct lpfc_queue *cq,
spin_lock_irqsave(&phba->hbalock, iflags); spin_lock_irqsave(&phba->hbalock, iflags);
list_add_tail(&irspiocbq->cq_event.list, list_add_tail(&irspiocbq->cq_event.list,
&phba->sli4_hba.sp_queue_event); &phba->sli4_hba.sp_queue_event);
phba->hba_flag |= HBA_SP_QUEUE_EVT;
spin_unlock_irqrestore(&phba->hbalock, iflags); spin_unlock_irqrestore(&phba->hbalock, iflags);
set_bit(HBA_SP_QUEUE_EVT, &phba->hba_flag);
return true; return true;
} }
@ -14580,7 +14567,7 @@ lpfc_sli4_sp_handle_abort_xri_wcqe(struct lpfc_hba *phba,
list_add_tail(&cq_event->list, list_add_tail(&cq_event->list,
&phba->sli4_hba.sp_els_xri_aborted_work_queue); &phba->sli4_hba.sp_els_xri_aborted_work_queue);
/* Set the els xri abort event flag */ /* Set the els xri abort event flag */
phba->hba_flag |= ELS_XRI_ABORT_EVENT; set_bit(ELS_XRI_ABORT_EVENT, &phba->hba_flag);
spin_unlock_irqrestore(&phba->sli4_hba.els_xri_abrt_list_lock, spin_unlock_irqrestore(&phba->sli4_hba.els_xri_abrt_list_lock,
iflags); iflags);
workposted = true; workposted = true;
@ -14667,9 +14654,9 @@ lpfc_sli4_sp_handle_rcqe(struct lpfc_hba *phba, struct lpfc_rcqe *rcqe)
/* save off the frame for the work thread to process */ /* save off the frame for the work thread to process */
list_add_tail(&dma_buf->cq_event.list, list_add_tail(&dma_buf->cq_event.list,
&phba->sli4_hba.sp_queue_event); &phba->sli4_hba.sp_queue_event);
/* Frame received */
phba->hba_flag |= HBA_SP_QUEUE_EVT;
spin_unlock_irqrestore(&phba->hbalock, iflags); spin_unlock_irqrestore(&phba->hbalock, iflags);
/* Frame received */
set_bit(HBA_SP_QUEUE_EVT, &phba->hba_flag);
workposted = true; workposted = true;
break; break;
case FC_STATUS_INSUFF_BUF_FRM_DISC: case FC_STATUS_INSUFF_BUF_FRM_DISC:
@ -14689,9 +14676,7 @@ lpfc_sli4_sp_handle_rcqe(struct lpfc_hba *phba, struct lpfc_rcqe *rcqe)
case FC_STATUS_INSUFF_BUF_NEED_BUF: case FC_STATUS_INSUFF_BUF_NEED_BUF:
hrq->RQ_no_posted_buf++; hrq->RQ_no_posted_buf++;
/* Post more buffers if possible */ /* Post more buffers if possible */
spin_lock_irqsave(&phba->hbalock, iflags); set_bit(HBA_POST_RECEIVE_BUFFER, &phba->hba_flag);
phba->hba_flag |= HBA_POST_RECEIVE_BUFFER;
spin_unlock_irqrestore(&phba->hbalock, iflags);
workposted = true; workposted = true;
break; break;
case FC_STATUS_RQ_DMA_FAILURE: case FC_STATUS_RQ_DMA_FAILURE:
@ -19349,8 +19334,8 @@ lpfc_sli4_handle_mds_loopback(struct lpfc_vport *vport,
spin_lock_irqsave(&phba->hbalock, iflags); spin_lock_irqsave(&phba->hbalock, iflags);
list_add_tail(&dmabuf->cq_event.list, list_add_tail(&dmabuf->cq_event.list,
&phba->sli4_hba.sp_queue_event); &phba->sli4_hba.sp_queue_event);
phba->hba_flag |= HBA_SP_QUEUE_EVT;
spin_unlock_irqrestore(&phba->hbalock, iflags); spin_unlock_irqrestore(&phba->hbalock, iflags);
set_bit(HBA_SP_QUEUE_EVT, &phba->hba_flag);
lpfc_worker_wake_up(phba); lpfc_worker_wake_up(phba);
return; return;
} }
@ -20102,9 +20087,7 @@ lpfc_sli4_fcf_scan_read_fcf_rec(struct lpfc_hba *phba, uint16_t fcf_index)
mboxq->vport = phba->pport; mboxq->vport = phba->pport;
mboxq->mbox_cmpl = lpfc_mbx_cmpl_fcf_scan_read_fcf_rec; mboxq->mbox_cmpl = lpfc_mbx_cmpl_fcf_scan_read_fcf_rec;
spin_lock_irq(&phba->hbalock); set_bit(FCF_TS_INPROG, &phba->hba_flag);
phba->hba_flag |= FCF_TS_INPROG;
spin_unlock_irq(&phba->hbalock);
rc = lpfc_sli_issue_mbox(phba, mboxq, MBX_NOWAIT); rc = lpfc_sli_issue_mbox(phba, mboxq, MBX_NOWAIT);
if (rc == MBX_NOT_FINISHED) if (rc == MBX_NOT_FINISHED)
@ -20120,9 +20103,7 @@ fail_fcf_scan:
if (mboxq) if (mboxq)
lpfc_sli4_mbox_cmd_free(phba, mboxq); lpfc_sli4_mbox_cmd_free(phba, mboxq);
/* FCF scan failed, clear FCF_TS_INPROG flag */ /* FCF scan failed, clear FCF_TS_INPROG flag */
spin_lock_irq(&phba->hbalock); clear_bit(FCF_TS_INPROG, &phba->hba_flag);
phba->hba_flag &= ~FCF_TS_INPROG;
spin_unlock_irq(&phba->hbalock);
} }
return error; return error;
} }
@ -20779,7 +20760,7 @@ lpfc_sli_read_link_ste(struct lpfc_hba *phba)
/* This HBA contains PORT_STE configured */ /* This HBA contains PORT_STE configured */
if (!rgn23_data[offset + 2]) if (!rgn23_data[offset + 2])
phba->hba_flag |= LINK_DISABLED; set_bit(LINK_DISABLED, &phba->hba_flag);
goto out; goto out;
} }
@ -22488,7 +22469,7 @@ lpfc_get_cmd_rsp_buf_per_hdwq(struct lpfc_hba *phba,
} }
tmp->fcp_rsp = (struct fcp_rsp *)((uint8_t *)tmp->fcp_cmnd + tmp->fcp_rsp = (struct fcp_rsp *)((uint8_t *)tmp->fcp_cmnd +
sizeof(struct fcp_cmnd)); sizeof(struct fcp_cmnd32));
spin_lock_irqsave(&hdwq->hdwq_lock, iflags); spin_lock_irqsave(&hdwq->hdwq_lock, iflags);
list_add_tail(&tmp->list_node, &lpfc_buf->dma_cmd_rsp_list); list_add_tail(&tmp->list_node, &lpfc_buf->dma_cmd_rsp_list);
@ -22593,12 +22574,13 @@ lpfc_sli_prep_wqe(struct lpfc_hba *phba, struct lpfc_iocbq *job)
u8 cmnd; u8 cmnd;
u32 *pcmd; u32 *pcmd;
u32 if_type = 0; u32 if_type = 0;
u32 fip, abort_tag; u32 abort_tag;
bool fip;
struct lpfc_nodelist *ndlp = NULL; struct lpfc_nodelist *ndlp = NULL;
union lpfc_wqe128 *wqe = &job->wqe; union lpfc_wqe128 *wqe = &job->wqe;
u8 command_type = ELS_COMMAND_NON_FIP; u8 command_type = ELS_COMMAND_NON_FIP;
fip = phba->hba_flag & HBA_FIP_SUPPORT; fip = test_bit(HBA_FIP_SUPPORT, &phba->hba_flag);
/* The fcp commands will set command type */ /* The fcp commands will set command type */
if (job->cmd_flag & LPFC_IO_FCP) if (job->cmd_flag & LPFC_IO_FCP)
command_type = FCP_COMMAND; command_type = FCP_COMMAND;

View File

@ -20,7 +20,7 @@
* included with this package. * * included with this package. *
*******************************************************************/ *******************************************************************/
#define LPFC_DRIVER_VERSION "14.4.0.1" #define LPFC_DRIVER_VERSION "14.4.0.2"
#define LPFC_DRIVER_NAME "lpfc" #define LPFC_DRIVER_NAME "lpfc"
/* Used for SLI 2/3 */ /* Used for SLI 2/3 */

View File

@@ -534,7 +534,13 @@ static void __exit mac_scsi_remove(struct platform_device *pdev)
 	scsi_host_put(instance);
 }

-static struct platform_driver mac_scsi_driver = {
+/*
+ * mac_scsi_remove() lives in .exit.text. For drivers registered via
+ * module_platform_driver_probe() this is ok because they cannot get unbound at
+ * runtime. So mark the driver struct with __refdata to prevent modpost
+ * triggering a section mismatch warning.
+ */
+static struct platform_driver mac_scsi_driver __refdata = {
 	.remove_new = __exit_p(mac_scsi_remove),
 	.driver = {
 		.name = DRV_MODULE_NAME,

View File
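
The mac_scsi change above is the usual recipe for platform drivers registered with module_platform_driver_probe(): the remove callback may stay in .exit.text, and __refdata tells modpost that the reference from the driver struct is intentional. A generic, self-contained illustration of the pattern (all names are placeholders, not mac_scsi's):

	static int __init foo_probe(struct platform_device *pdev)
	{
		return 0;	/* claim the device */
	}

	static void __exit foo_remove(struct platform_device *pdev)
	{
		/* teardown only ever runs at module exit */
	}

	/* __refdata: the .exit.text reference is fine because probe-only
	 * drivers cannot be unbound at runtime. */
	static struct platform_driver foo_driver __refdata = {
		.remove_new = __exit_p(foo_remove),
		.driver = {
			.name = "foo",
		},
	};
	module_platform_driver_probe(foo_driver, foo_probe);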

@ -3,85 +3,84 @@ config MEGARAID_NEWGEN
bool "LSI Logic New Generation RAID Device Drivers" bool "LSI Logic New Generation RAID Device Drivers"
depends on PCI && HAS_IOPORT && SCSI depends on PCI && HAS_IOPORT && SCSI
help help
LSI Logic RAID Device Drivers LSI Logic RAID Device Drivers
config MEGARAID_MM config MEGARAID_MM
tristate "LSI Logic Management Module (New Driver)" tristate "LSI Logic Management Module (New Driver)"
depends on PCI && HAS_IOPORT && SCSI && MEGARAID_NEWGEN depends on PCI && HAS_IOPORT && SCSI && MEGARAID_NEWGEN
help help
Management Module provides ioctl, sysfs support for LSI Logic Management Module provides ioctl, sysfs support for LSI Logic
RAID controllers. RAID controllers.
To compile this driver as a module, choose M here: the To compile this driver as a module, choose M here: the
module will be called megaraid_mm module will be called megaraid_mm
config MEGARAID_MAILBOX config MEGARAID_MAILBOX
tristate "LSI Logic MegaRAID Driver (New Driver)" tristate "LSI Logic MegaRAID Driver (New Driver)"
depends on PCI && SCSI && MEGARAID_MM depends on PCI && SCSI && MEGARAID_MM
help help
List of supported controllers List of supported controllers
OEM Product Name VID :DID :SVID:SSID OEM Product Name VID :DID :SVID:SSID
--- ------------ ---- ---- ---- ---- --- ------------ ---- ---- ---- ----
Dell PERC3/QC 101E:1960:1028:0471 Dell PERC3/QC 101E:1960:1028:0471
Dell PERC3/DC 101E:1960:1028:0493 Dell PERC3/DC 101E:1960:1028:0493
Dell PERC3/SC 101E:1960:1028:0475 Dell PERC3/SC 101E:1960:1028:0475
Dell PERC3/Di 1028:000E:1028:0123 Dell PERC3/Di 1028:000E:1028:0123
Dell PERC4/SC 1000:1960:1028:0520 Dell PERC4/SC 1000:1960:1028:0520
Dell PERC4/DC 1000:1960:1028:0518 Dell PERC4/DC 1000:1960:1028:0518
Dell PERC4/QC 1000:0407:1028:0531 Dell PERC4/QC 1000:0407:1028:0531
Dell PERC4/Di 1028:000F:1028:014A Dell PERC4/Di 1028:000F:1028:014A
Dell PERC 4e/Si 1028:0013:1028:016c Dell PERC 4e/Si 1028:0013:1028:016c
Dell PERC 4e/Di 1028:0013:1028:016d Dell PERC 4e/Di 1028:0013:1028:016d
Dell PERC 4e/Di 1028:0013:1028:016e Dell PERC 4e/Di 1028:0013:1028:016e
Dell PERC 4e/Di 1028:0013:1028:016f Dell PERC 4e/Di 1028:0013:1028:016f
Dell PERC 4e/Di 1028:0013:1028:0170 Dell PERC 4e/Di 1028:0013:1028:0170
Dell PERC 4e/DC 1000:0408:1028:0002 Dell PERC 4e/DC 1000:0408:1028:0002
Dell PERC 4e/SC 1000:0408:1028:0001 Dell PERC 4e/SC 1000:0408:1028:0001
LSI MegaRAID SCSI 320-0 1000:1960:1000:A520 LSI MegaRAID SCSI 320-0 1000:1960:1000:A520
LSI MegaRAID SCSI 320-1 1000:1960:1000:0520 LSI MegaRAID SCSI 320-1 1000:1960:1000:0520
LSI MegaRAID SCSI 320-2 1000:1960:1000:0518 LSI MegaRAID SCSI 320-2 1000:1960:1000:0518
LSI MegaRAID SCSI 320-0X 1000:0407:1000:0530 LSI MegaRAID SCSI 320-0X 1000:0407:1000:0530
LSI MegaRAID SCSI 320-2X 1000:0407:1000:0532 LSI MegaRAID SCSI 320-2X 1000:0407:1000:0532
LSI MegaRAID SCSI 320-4X 1000:0407:1000:0531 LSI MegaRAID SCSI 320-4X 1000:0407:1000:0531
LSI MegaRAID SCSI 320-1E 1000:0408:1000:0001 LSI MegaRAID SCSI 320-1E 1000:0408:1000:0001
LSI MegaRAID SCSI 320-2E 1000:0408:1000:0002 LSI MegaRAID SCSI 320-2E 1000:0408:1000:0002
LSI MegaRAID SATA 150-4 1000:1960:1000:4523 LSI MegaRAID SATA 150-4 1000:1960:1000:4523
LSI MegaRAID SATA 150-6 1000:1960:1000:0523 LSI MegaRAID SATA 150-6 1000:1960:1000:0523
LSI MegaRAID SATA 300-4X 1000:0409:1000:3004 LSI MegaRAID SATA 300-4X 1000:0409:1000:3004
LSI MegaRAID SATA 300-8X 1000:0409:1000:3008 LSI MegaRAID SATA 300-8X 1000:0409:1000:3008
INTEL RAID Controller SRCU42X 1000:0407:8086:0532 INTEL RAID Controller SRCU42X 1000:0407:8086:0532
INTEL RAID Controller SRCS16 1000:1960:8086:0523 INTEL RAID Controller SRCS16 1000:1960:8086:0523
INTEL RAID Controller SRCU42E 1000:0408:8086:0002 INTEL RAID Controller SRCU42E 1000:0408:8086:0002
INTEL RAID Controller SRCZCRX 1000:0407:8086:0530 INTEL RAID Controller SRCZCRX 1000:0407:8086:0530
INTEL RAID Controller SRCS28X 1000:0409:8086:3008 INTEL RAID Controller SRCS28X 1000:0409:8086:3008
INTEL RAID Controller SROMBU42E 1000:0408:8086:3431 INTEL RAID Controller SROMBU42E 1000:0408:8086:3431
INTEL RAID Controller SROMBU42E 1000:0408:8086:3499 INTEL RAID Controller SROMBU42E 1000:0408:8086:3499
INTEL RAID Controller SRCU51L 1000:1960:8086:0520 INTEL RAID Controller SRCU51L 1000:1960:8086:0520
FSC MegaRAID PCI Express ROMB 1000:0408:1734:1065 FSC MegaRAID PCI Express ROMB 1000:0408:1734:1065
ACER MegaRAID ROMB-2E 1000:0408:1025:004D ACER MegaRAID ROMB-2E 1000:0408:1025:004D
NEC MegaRAID PCI Express ROMB 1000:0408:1033:8287 NEC MegaRAID PCI Express ROMB 1000:0408:1033:8287
To compile this driver as a module, choose M here: the To compile this driver as a module, choose M here: the
module will be called megaraid_mbox module will be called megaraid_mbox
config MEGARAID_LEGACY config MEGARAID_LEGACY
tristate "LSI Logic Legacy MegaRAID Driver" tristate "LSI Logic Legacy MegaRAID Driver"
depends on PCI && HAS_IOPORT && SCSI depends on PCI && HAS_IOPORT && SCSI
help help
This driver supports the LSI MegaRAID 418, 428, 438, 466, 762, 490 This driver supports the LSI MegaRAID 418, 428, 438, 466, 762, 490
and 467 SCSI host adapters. This driver also supports all U320 and 467 SCSI host adapters. This driver also supports all U320
RAID controllers. RAID controllers.
To compile this driver as a module, choose M here: the To compile this driver as a module, choose M here: the
module will be called megaraid module will be called megaraid
config MEGARAID_SAS config MEGARAID_SAS
tristate "LSI Logic MegaRAID SAS RAID Module" tristate "LSI Logic MegaRAID SAS RAID Module"
depends on PCI && SCSI depends on PCI && SCSI
select IRQ_POLL select IRQ_POLL
help help
Module for LSI Logic's SAS based RAID controllers. Module for LSI Logic's SAS based RAID controllers.
To compile this driver as a module, choose 'm' here. To compile this driver as a module, choose 'm' here.
Module will be called megaraid_sas Module will be called megaraid_sas

View File

@ -2701,7 +2701,7 @@ int megasas_get_ctrl_info(struct megasas_instance *instance);
int int
megasas_sync_pd_seq_num(struct megasas_instance *instance, bool pend); megasas_sync_pd_seq_num(struct megasas_instance *instance, bool pend);
void megasas_set_dynamic_target_properties(struct scsi_device *sdev, void megasas_set_dynamic_target_properties(struct scsi_device *sdev,
bool is_target_prop); struct queue_limits *lim, bool is_target_prop);
int megasas_get_target_prop(struct megasas_instance *instance, int megasas_get_target_prop(struct megasas_instance *instance,
struct scsi_device *sdev); struct scsi_device *sdev);
void megasas_get_snapdump_properties(struct megasas_instance *instance); void megasas_get_snapdump_properties(struct megasas_instance *instance);

View File

@ -1888,7 +1888,7 @@ static struct megasas_instance *megasas_lookup_instance(u16 host_no)
* Returns void * Returns void
*/ */
void megasas_set_dynamic_target_properties(struct scsi_device *sdev, void megasas_set_dynamic_target_properties(struct scsi_device *sdev,
bool is_target_prop) struct queue_limits *lim, bool is_target_prop)
{ {
u16 pd_index = 0, ld; u16 pd_index = 0, ld;
u32 device_id; u32 device_id;
@ -1915,8 +1915,10 @@ void megasas_set_dynamic_target_properties(struct scsi_device *sdev,
return; return;
raid = MR_LdRaidGet(ld, local_map_ptr); raid = MR_LdRaidGet(ld, local_map_ptr);
if (raid->capability.ldPiMode == MR_PROT_INFO_TYPE_CONTROLLER) if (raid->capability.ldPiMode == MR_PROT_INFO_TYPE_CONTROLLER) {
blk_queue_update_dma_alignment(sdev->request_queue, 0x7); if (lim)
lim->dma_alignment = 0x7;
}
mr_device_priv_data->is_tm_capable = mr_device_priv_data->is_tm_capable =
raid->capability.tmCapable; raid->capability.tmCapable;
@ -1967,7 +1969,8 @@ void megasas_set_dynamic_target_properties(struct scsi_device *sdev,
* *
*/ */
static inline void static inline void
megasas_set_nvme_device_properties(struct scsi_device *sdev, u32 max_io_size) megasas_set_nvme_device_properties(struct scsi_device *sdev,
struct queue_limits *lim, u32 max_io_size)
{ {
struct megasas_instance *instance; struct megasas_instance *instance;
u32 mr_nvme_pg_size; u32 mr_nvme_pg_size;
@ -1976,10 +1979,10 @@ megasas_set_nvme_device_properties(struct scsi_device *sdev, u32 max_io_size)
mr_nvme_pg_size = max_t(u32, instance->nvme_page_size, mr_nvme_pg_size = max_t(u32, instance->nvme_page_size,
MR_DEFAULT_NVME_PAGE_SIZE); MR_DEFAULT_NVME_PAGE_SIZE);
blk_queue_max_hw_sectors(sdev->request_queue, (max_io_size / 512)); lim->max_hw_sectors = max_io_size / 512;
lim->virt_boundary_mask = mr_nvme_pg_size - 1;
blk_queue_flag_set(QUEUE_FLAG_NOMERGES, sdev->request_queue); blk_queue_flag_set(QUEUE_FLAG_NOMERGES, sdev->request_queue);
blk_queue_virt_boundary(sdev->request_queue, mr_nvme_pg_size - 1);
} }
/* /*
@ -2041,7 +2044,7 @@ static void megasas_set_fw_assisted_qd(struct scsi_device *sdev,
* @is_target_prop true, if fw provided target properties. * @is_target_prop true, if fw provided target properties.
*/ */
static void megasas_set_static_target_properties(struct scsi_device *sdev, static void megasas_set_static_target_properties(struct scsi_device *sdev,
bool is_target_prop) struct queue_limits *lim, bool is_target_prop)
{ {
u32 max_io_size_kb = MR_DEFAULT_NVME_MDTS_KB; u32 max_io_size_kb = MR_DEFAULT_NVME_MDTS_KB;
struct megasas_instance *instance; struct megasas_instance *instance;
@ -2060,13 +2063,15 @@ static void megasas_set_static_target_properties(struct scsi_device *sdev,
max_io_size_kb = le32_to_cpu(instance->tgt_prop->max_io_size_kb); max_io_size_kb = le32_to_cpu(instance->tgt_prop->max_io_size_kb);
if (instance->nvme_page_size && max_io_size_kb) if (instance->nvme_page_size && max_io_size_kb)
megasas_set_nvme_device_properties(sdev, (max_io_size_kb << 10)); megasas_set_nvme_device_properties(sdev, lim,
max_io_size_kb << 10);
megasas_set_fw_assisted_qd(sdev, is_target_prop); megasas_set_fw_assisted_qd(sdev, is_target_prop);
} }
static int megasas_slave_configure(struct scsi_device *sdev) static int megasas_device_configure(struct scsi_device *sdev,
struct queue_limits *lim)
{ {
u16 pd_index = 0; u16 pd_index = 0;
struct megasas_instance *instance; struct megasas_instance *instance;
@ -2096,10 +2101,10 @@ static int megasas_slave_configure(struct scsi_device *sdev)
ret_target_prop = megasas_get_target_prop(instance, sdev); ret_target_prop = megasas_get_target_prop(instance, sdev);
is_target_prop = (ret_target_prop == DCMD_SUCCESS) ? true : false; is_target_prop = (ret_target_prop == DCMD_SUCCESS) ? true : false;
megasas_set_static_target_properties(sdev, is_target_prop); megasas_set_static_target_properties(sdev, lim, is_target_prop);
/* This sdev property may change post OCR */ /* This sdev property may change post OCR */
megasas_set_dynamic_target_properties(sdev, is_target_prop); megasas_set_dynamic_target_properties(sdev, lim, is_target_prop);
mutex_unlock(&instance->reset_mutex); mutex_unlock(&instance->reset_mutex);
@ -3507,7 +3512,7 @@ static const struct scsi_host_template megasas_template = {
.module = THIS_MODULE, .module = THIS_MODULE,
.name = "Avago SAS based MegaRAID driver", .name = "Avago SAS based MegaRAID driver",
.proc_name = "megaraid_sas", .proc_name = "megaraid_sas",
.slave_configure = megasas_slave_configure, .device_configure = megasas_device_configure,
.slave_alloc = megasas_slave_alloc, .slave_alloc = megasas_slave_alloc,
.slave_destroy = megasas_slave_destroy, .slave_destroy = megasas_slave_destroy,
.queuecommand = megasas_queue_command, .queuecommand = megasas_queue_command,

View File

@ -5119,7 +5119,8 @@ int megasas_reset_fusion(struct Scsi_Host *shost, int reason)
ret_target_prop = megasas_get_target_prop(instance, sdev); ret_target_prop = megasas_get_target_prop(instance, sdev);
is_target_prop = (ret_target_prop == DCMD_SUCCESS) ? true : false; is_target_prop = (ret_target_prop == DCMD_SUCCESS) ? true : false;
megasas_set_dynamic_target_properties(sdev, is_target_prop); megasas_set_dynamic_target_properties(sdev, NULL,
is_target_prop);
} }
status_reg = instance->instancet->read_fw_status_reg status_reg = instance->instancet->read_fw_status_reg

View File

@ -309,6 +309,7 @@ struct mpi3_man6_gpio_entry {
#define MPI3_MAN6_GPIO_EXTINT_PARAM1_FLAGS_SOURCE_GENERIC (0x00) #define MPI3_MAN6_GPIO_EXTINT_PARAM1_FLAGS_SOURCE_GENERIC (0x00)
#define MPI3_MAN6_GPIO_EXTINT_PARAM1_FLAGS_SOURCE_CABLE_MGMT (0x10) #define MPI3_MAN6_GPIO_EXTINT_PARAM1_FLAGS_SOURCE_CABLE_MGMT (0x10)
#define MPI3_MAN6_GPIO_EXTINT_PARAM1_FLAGS_SOURCE_ACTIVE_CABLE_OVERCURRENT (0x20) #define MPI3_MAN6_GPIO_EXTINT_PARAM1_FLAGS_SOURCE_ACTIVE_CABLE_OVERCURRENT (0x20)
#define MPI3_MAN6_GPIO_EXTINT_PARAM1_FLAGS_ACK_REQUIRED (0x02)
#define MPI3_MAN6_GPIO_EXTINT_PARAM1_FLAGS_TRIGGER_MASK (0x01) #define MPI3_MAN6_GPIO_EXTINT_PARAM1_FLAGS_TRIGGER_MASK (0x01)
#define MPI3_MAN6_GPIO_EXTINT_PARAM1_FLAGS_TRIGGER_EDGE (0x00) #define MPI3_MAN6_GPIO_EXTINT_PARAM1_FLAGS_TRIGGER_EDGE (0x00)
#define MPI3_MAN6_GPIO_EXTINT_PARAM1_FLAGS_TRIGGER_LEVEL (0x01) #define MPI3_MAN6_GPIO_EXTINT_PARAM1_FLAGS_TRIGGER_LEVEL (0x01)
@ -1315,6 +1316,8 @@ struct mpi3_driver_page0 {
__le32 reserved18; __le32 reserved18;
}; };
#define MPI3_DRIVER0_PAGEVERSION (0x00) #define MPI3_DRIVER0_PAGEVERSION (0x00)
#define MPI3_DRIVER0_BSDOPTS_DEVICEEXPOSURE_DISABLE (0x00000020)
#define MPI3_DRIVER0_BSDOPTS_WRITECACHE_DISABLE (0x00000010)
#define MPI3_DRIVER0_BSDOPTS_HEADLESS_MODE_ENABLE (0x00000008) #define MPI3_DRIVER0_BSDOPTS_HEADLESS_MODE_ENABLE (0x00000008)
#define MPI3_DRIVER0_BSDOPTS_DIS_HII_CONFIG_UTIL (0x00000004) #define MPI3_DRIVER0_BSDOPTS_DIS_HII_CONFIG_UTIL (0x00000004)
#define MPI3_DRIVER0_BSDOPTS_REGISTRATION_MASK (0x00000003) #define MPI3_DRIVER0_BSDOPTS_REGISTRATION_MASK (0x00000003)

View File

@ -198,16 +198,17 @@ struct mpi3_supported_devices_data {
struct mpi3_supported_device supported_device[MPI3_SUPPORTED_DEVICE_MAX]; struct mpi3_supported_device supported_device[MPI3_SUPPORTED_DEVICE_MAX];
}; };
#ifndef MPI3_ENCRYPTED_HASH_MAX #ifndef MPI3_PUBLIC_KEY_MAX
#define MPI3_ENCRYPTED_HASH_MAX (1) #define MPI3_PUBLIC_KEY_MAX (1)
#endif #endif
struct mpi3_encrypted_hash_entry { struct mpi3_encrypted_hash_entry {
u8 hash_image_type; u8 hash_image_type;
u8 hash_algorithm; u8 hash_algorithm;
u8 encryption_algorithm; u8 encryption_algorithm;
u8 reserved03; u8 reserved03;
__le32 reserved04; __le16 public_key_size;
__le32 encrypted_hash[MPI3_ENCRYPTED_HASH_MAX]; __le16 signature_size;
__le32 public_key[MPI3_PUBLIC_KEY_MAX];
}; };
#define MPI3_HASH_IMAGE_TYPE_KEY_WITH_SIGNATURE (0x03) #define MPI3_HASH_IMAGE_TYPE_KEY_WITH_SIGNATURE (0x03)
@ -228,17 +229,6 @@ struct mpi3_encrypted_hash_entry {
#define MPI3_ENCRYPTION_ALGORITHM_RSA2048 (0x04) #define MPI3_ENCRYPTION_ALGORITHM_RSA2048 (0x04)
#define MPI3_ENCRYPTION_ALGORITHM_RSA4096 (0x05) #define MPI3_ENCRYPTION_ALGORITHM_RSA4096 (0x05)
#define MPI3_ENCRYPTION_ALGORITHM_RSA3072 (0x06) #define MPI3_ENCRYPTION_ALGORITHM_RSA3072 (0x06)
#ifndef MPI3_PUBLIC_KEY_MAX
#define MPI3_PUBLIC_KEY_MAX (1)
#endif
struct mpi3_encrypted_key_with_hash_entry {
u8 hash_image_type;
u8 hash_algorithm;
u8 encryption_algorithm;
u8 reserved03;
__le32 reserved04;
__le32 public_key[MPI3_PUBLIC_KEY_MAX];
};
#ifndef MPI3_ENCRYPTED_HASH_ENTRY_MAX #ifndef MPI3_ENCRYPTED_HASH_ENTRY_MAX
#define MPI3_ENCRYPTED_HASH_ENTRY_MAX (1) #define MPI3_ENCRYPTED_HASH_ENTRY_MAX (1)

View File

@ -27,7 +27,7 @@ struct mpi3_ioc_init_request {
__le64 sense_buffer_free_queue_address; __le64 sense_buffer_free_queue_address;
__le64 driver_information_address; __le64 driver_information_address;
}; };
#define MPI3_IOCINIT_MSGFLAGS_WRITESAMEDIVERT_SUPPORTED (0x08)
#define MPI3_IOCINIT_MSGFLAGS_SCSIIOSTATUSREPLY_SUPPORTED (0x04) #define MPI3_IOCINIT_MSGFLAGS_SCSIIOSTATUSREPLY_SUPPORTED (0x04)
#define MPI3_IOCINIT_MSGFLAGS_HOSTMETADATA_MASK (0x03) #define MPI3_IOCINIT_MSGFLAGS_HOSTMETADATA_MASK (0x03)
#define MPI3_IOCINIT_MSGFLAGS_HOSTMETADATA_NOT_USED (0x00) #define MPI3_IOCINIT_MSGFLAGS_HOSTMETADATA_NOT_USED (0x00)
@ -101,6 +101,8 @@ struct mpi3_ioc_facts_data {
__le16 max_io_throttle_group; __le16 max_io_throttle_group;
__le16 io_throttle_low; __le16 io_throttle_low;
__le16 io_throttle_high; __le16 io_throttle_high;
__le32 diag_fdl_size;
__le32 diag_tty_size;
}; };
#define MPI3_IOCFACTS_CAPABILITY_NON_SUPERVISOR_MASK (0x80000000) #define MPI3_IOCFACTS_CAPABILITY_NON_SUPERVISOR_MASK (0x80000000)
#define MPI3_IOCFACTS_CAPABILITY_SUPERVISOR_IOC (0x00000000) #define MPI3_IOCFACTS_CAPABILITY_SUPERVISOR_IOC (0x00000000)
@ -108,13 +110,13 @@ struct mpi3_ioc_facts_data {
#define MPI3_IOCFACTS_CAPABILITY_INT_COALESCE_MASK (0x00000600) #define MPI3_IOCFACTS_CAPABILITY_INT_COALESCE_MASK (0x00000600)
#define MPI3_IOCFACTS_CAPABILITY_INT_COALESCE_FIXED_THRESHOLD (0x00000000) #define MPI3_IOCFACTS_CAPABILITY_INT_COALESCE_FIXED_THRESHOLD (0x00000000)
#define MPI3_IOCFACTS_CAPABILITY_INT_COALESCE_OUTSTANDING_IO (0x00000200) #define MPI3_IOCFACTS_CAPABILITY_INT_COALESCE_OUTSTANDING_IO (0x00000200)
#define MPI3_IOCFACTS_CAPABILITY_COMPLETE_RESET_CAPABLE (0x00000100) #define MPI3_IOCFACTS_CAPABILITY_COMPLETE_RESET_SUPPORTED (0x00000100)
#define MPI3_IOCFACTS_CAPABILITY_SEG_DIAG_TRACE_ENABLED (0x00000080) #define MPI3_IOCFACTS_CAPABILITY_SEG_DIAG_TRACE_SUPPORTED (0x00000080)
#define MPI3_IOCFACTS_CAPABILITY_SEG_DIAG_FW_ENABLED (0x00000040) #define MPI3_IOCFACTS_CAPABILITY_SEG_DIAG_FW_SUPPORTED (0x00000040)
#define MPI3_IOCFACTS_CAPABILITY_SEG_DIAG_DRIVER_ENABLED (0x00000020) #define MPI3_IOCFACTS_CAPABILITY_SEG_DIAG_DRIVER_SUPPORTED (0x00000020)
#define MPI3_IOCFACTS_CAPABILITY_ADVANCED_HOST_PD_ENABLED (0x00000010) #define MPI3_IOCFACTS_CAPABILITY_ADVANCED_HOST_PD_SUPPORTED (0x00000010)
#define MPI3_IOCFACTS_CAPABILITY_RAID_CAPABLE (0x00000008) #define MPI3_IOCFACTS_CAPABILITY_RAID_SUPPORTED (0x00000008)
#define MPI3_IOCFACTS_CAPABILITY_MULTIPATH_ENABLED (0x00000002) #define MPI3_IOCFACTS_CAPABILITY_MULTIPATH_SUPPORTED (0x00000002)
#define MPI3_IOCFACTS_CAPABILITY_COALESCE_CTRL_SUPPORTED (0x00000001) #define MPI3_IOCFACTS_CAPABILITY_COALESCE_CTRL_SUPPORTED (0x00000001)
#define MPI3_IOCFACTS_PID_TYPE_MASK (0xf000) #define MPI3_IOCFACTS_PID_TYPE_MASK (0xf000)
#define MPI3_IOCFACTS_PID_TYPE_SHIFT (12) #define MPI3_IOCFACTS_PID_TYPE_SHIFT (12)
@ -159,6 +161,8 @@ struct mpi3_ioc_facts_data {
#define MPI3_IOCFACTS_FLAGS_PERSONALITY_RAID_DDR (0x00000002) #define MPI3_IOCFACTS_FLAGS_PERSONALITY_RAID_DDR (0x00000002)
#define MPI3_IOCFACTS_IO_THROTTLE_DATA_LENGTH_NOT_REQUIRED (0x0000) #define MPI3_IOCFACTS_IO_THROTTLE_DATA_LENGTH_NOT_REQUIRED (0x0000)
#define MPI3_IOCFACTS_MAX_IO_THROTTLE_GROUP_NOT_REQUIRED (0x0000) #define MPI3_IOCFACTS_MAX_IO_THROTTLE_GROUP_NOT_REQUIRED (0x0000)
#define MPI3_IOCFACTS_DIAGFDLSIZE_NOT_SUPPORTED (0x00000000)
#define MPI3_IOCFACTS_DIAGTTYSIZE_NOT_SUPPORTED (0x00000000)
struct mpi3_mgmt_passthrough_request { struct mpi3_mgmt_passthrough_request {
__le16 host_tag; __le16 host_tag;
u8 ioc_use_only02; u8 ioc_use_only02;

View File

@ -18,7 +18,7 @@ union mpi3_version_union {
#define MPI3_VERSION_MAJOR (3) #define MPI3_VERSION_MAJOR (3)
#define MPI3_VERSION_MINOR (0) #define MPI3_VERSION_MINOR (0)
#define MPI3_VERSION_UNIT (28) #define MPI3_VERSION_UNIT (31)
#define MPI3_VERSION_DEV (0) #define MPI3_VERSION_DEV (0)
#define MPI3_DEVHANDLE_INVALID (0xffff) #define MPI3_DEVHANDLE_INVALID (0xffff)
struct mpi3_sysif_oper_queue_indexes { struct mpi3_sysif_oper_queue_indexes {

View File

@ -55,15 +55,15 @@ extern struct list_head mrioc_list;
extern int prot_mask; extern int prot_mask;
extern atomic64_t event_counter; extern atomic64_t event_counter;
#define MPI3MR_DRIVER_VERSION "8.5.1.0.0" #define MPI3MR_DRIVER_VERSION "8.8.1.0.50"
#define MPI3MR_DRIVER_RELDATE "5-December-2023" #define MPI3MR_DRIVER_RELDATE "5-March-2024"
#define MPI3MR_DRIVER_NAME "mpi3mr" #define MPI3MR_DRIVER_NAME "mpi3mr"
#define MPI3MR_DRIVER_LICENSE "GPL" #define MPI3MR_DRIVER_LICENSE "GPL"
#define MPI3MR_DRIVER_AUTHOR "Broadcom Inc. <mpi3mr-linuxdrv.pdl@broadcom.com>" #define MPI3MR_DRIVER_AUTHOR "Broadcom Inc. <mpi3mr-linuxdrv.pdl@broadcom.com>"
#define MPI3MR_DRIVER_DESC "MPI3 Storage Controller Device Driver" #define MPI3MR_DRIVER_DESC "MPI3 Storage Controller Device Driver"
#define MPI3MR_NAME_LENGTH 32 #define MPI3MR_NAME_LENGTH 64
#define IOCNAME "%s: " #define IOCNAME "%s: "
#define MPI3MR_DEFAULT_MAX_IO_SIZE (1 * 1024 * 1024) #define MPI3MR_DEFAULT_MAX_IO_SIZE (1 * 1024 * 1024)
@ -294,6 +294,10 @@ enum mpi3mr_reset_reason {
MPI3MR_RESET_FROM_SAS_TRANSPORT_TIMEOUT = 30, MPI3MR_RESET_FROM_SAS_TRANSPORT_TIMEOUT = 30,
}; };
#define MPI3MR_RESET_REASON_OSTYPE_LINUX 1
#define MPI3MR_RESET_REASON_OSTYPE_SHIFT 28
#define MPI3MR_RESET_REASON_IOCNUM_SHIFT 20
/* Queue type definitions */ /* Queue type definitions */
enum queue_type { enum queue_type {
MPI3MR_DEFAULT_QUEUE = 0, MPI3MR_DEFAULT_QUEUE = 0,
@ -1142,7 +1146,7 @@ struct mpi3mr_ioc {
spinlock_t fwevt_lock; spinlock_t fwevt_lock;
struct list_head fwevt_list; struct list_head fwevt_list;
char watchdog_work_q_name[20]; char watchdog_work_q_name[50];
struct workqueue_struct *watchdog_work_q; struct workqueue_struct *watchdog_work_q;
struct delayed_work watchdog_work; struct delayed_work watchdog_work;
spinlock_t watchdog_lock; spinlock_t watchdog_lock;
@ -1336,7 +1340,7 @@ void mpi3mr_start_watchdog(struct mpi3mr_ioc *mrioc);
void mpi3mr_stop_watchdog(struct mpi3mr_ioc *mrioc); void mpi3mr_stop_watchdog(struct mpi3mr_ioc *mrioc);
int mpi3mr_soft_reset_handler(struct mpi3mr_ioc *mrioc, int mpi3mr_soft_reset_handler(struct mpi3mr_ioc *mrioc,
u32 reset_reason, u8 snapdump); u16 reset_reason, u8 snapdump);
void mpi3mr_ioc_disable_intr(struct mpi3mr_ioc *mrioc); void mpi3mr_ioc_disable_intr(struct mpi3mr_ioc *mrioc);
void mpi3mr_ioc_enable_intr(struct mpi3mr_ioc *mrioc); void mpi3mr_ioc_enable_intr(struct mpi3mr_ioc *mrioc);
@ -1348,7 +1352,6 @@ void mpi3mr_wait_for_host_io(struct mpi3mr_ioc *mrioc, u32 timeout);
void mpi3mr_cleanup_fwevt_list(struct mpi3mr_ioc *mrioc); void mpi3mr_cleanup_fwevt_list(struct mpi3mr_ioc *mrioc);
void mpi3mr_flush_host_io(struct mpi3mr_ioc *mrioc); void mpi3mr_flush_host_io(struct mpi3mr_ioc *mrioc);
void mpi3mr_invalidate_devhandles(struct mpi3mr_ioc *mrioc); void mpi3mr_invalidate_devhandles(struct mpi3mr_ioc *mrioc);
void mpi3mr_rfresh_tgtdevs(struct mpi3mr_ioc *mrioc);
void mpi3mr_flush_delayed_cmd_lists(struct mpi3mr_ioc *mrioc); void mpi3mr_flush_delayed_cmd_lists(struct mpi3mr_ioc *mrioc);
void mpi3mr_check_rh_fault_ioc(struct mpi3mr_ioc *mrioc, u32 reason_code); void mpi3mr_check_rh_fault_ioc(struct mpi3mr_ioc *mrioc, u32 reason_code);
void mpi3mr_print_fault_info(struct mpi3mr_ioc *mrioc); void mpi3mr_print_fault_info(struct mpi3mr_ioc *mrioc);

View File

@ -1598,26 +1598,33 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job)
rval = -EAGAIN; rval = -EAGAIN;
if (mrioc->bsg_cmds.state & MPI3MR_CMD_RESET) if (mrioc->bsg_cmds.state & MPI3MR_CMD_RESET)
goto out_unlock; goto out_unlock;
dprint_bsg_err(mrioc, if (((mpi_header->function != MPI3_FUNCTION_SCSI_IO) &&
"%s: bsg request timedout after %d seconds\n", __func__, (mpi_header->function != MPI3_FUNCTION_NVME_ENCAPSULATED))
karg->timeout); || (mrioc->logging_level & MPI3_DEBUG_BSG_ERROR)) {
if (mrioc->logging_level & MPI3_DEBUG_BSG_ERROR) { ioc_info(mrioc, "%s: bsg request timedout after %d seconds\n",
dprint_dump(mpi_req, MPI3MR_ADMIN_REQ_FRAME_SZ, __func__, karg->timeout);
if (!(mrioc->logging_level & MPI3_DEBUG_BSG_INFO)) {
dprint_dump(mpi_req, MPI3MR_ADMIN_REQ_FRAME_SZ,
"bsg_mpi3_req"); "bsg_mpi3_req");
if (mpi_header->function == if (mpi_header->function ==
MPI3_BSG_FUNCTION_MGMT_PASSTHROUGH) { MPI3_FUNCTION_MGMT_PASSTHROUGH) {
drv_buf_iter = &drv_bufs[0]; drv_buf_iter = &drv_bufs[0];
dprint_dump(drv_buf_iter->kern_buf, dprint_dump(drv_buf_iter->kern_buf,
rmc_size, "mpi3_mgmt_req"); rmc_size, "mpi3_mgmt_req");
}
} }
} }
if ((mpi_header->function == MPI3_BSG_FUNCTION_NVME_ENCAPSULATED) || if ((mpi_header->function == MPI3_BSG_FUNCTION_NVME_ENCAPSULATED) ||
(mpi_header->function == MPI3_BSG_FUNCTION_SCSI_IO)) (mpi_header->function == MPI3_BSG_FUNCTION_SCSI_IO)) {
dprint_bsg_err(mrioc, "%s: bsg request timedout after %d seconds,\n"
"issuing target reset to (0x%04x)\n", __func__,
karg->timeout, mpi_header->function_dependent);
mpi3mr_issue_tm(mrioc, mpi3mr_issue_tm(mrioc,
MPI3_SCSITASKMGMT_TASKTYPE_TARGET_RESET, MPI3_SCSITASKMGMT_TASKTYPE_TARGET_RESET,
mpi_header->function_dependent, 0, mpi_header->function_dependent, 0,
MPI3MR_HOSTTAG_BLK_TMS, MPI3MR_RESETTM_TIMEOUT, MPI3MR_HOSTTAG_BLK_TMS, MPI3MR_RESETTM_TIMEOUT,
&mrioc->host_tm_cmds, &resp_code, NULL); &mrioc->host_tm_cmds, &resp_code, NULL);
}
if (!(mrioc->bsg_cmds.state & MPI3MR_CMD_COMPLETE) && if (!(mrioc->bsg_cmds.state & MPI3MR_CMD_COMPLETE) &&
!(mrioc->bsg_cmds.state & MPI3MR_CMD_RESET)) !(mrioc->bsg_cmds.state & MPI3MR_CMD_RESET))
mpi3mr_soft_reset_handler(mrioc, mpi3mr_soft_reset_handler(mrioc,
@ -1838,6 +1845,10 @@ void mpi3mr_bsg_init(struct mpi3mr_ioc *mrioc)
{ {
struct device *bsg_dev = &mrioc->bsg_dev; struct device *bsg_dev = &mrioc->bsg_dev;
struct device *parent = &mrioc->shost->shost_gendev; struct device *parent = &mrioc->shost->shost_gendev;
struct queue_limits lim = {
.max_hw_sectors = MPI3MR_MAX_APP_XFER_SECTORS,
.max_segments = MPI3MR_MAX_APP_XFER_SEGMENTS,
};
device_initialize(bsg_dev); device_initialize(bsg_dev);
@ -1853,20 +1864,14 @@ void mpi3mr_bsg_init(struct mpi3mr_ioc *mrioc)
return; return;
} }
mrioc->bsg_queue = bsg_setup_queue(bsg_dev, dev_name(bsg_dev), mrioc->bsg_queue = bsg_setup_queue(bsg_dev, dev_name(bsg_dev), &lim,
mpi3mr_bsg_request, NULL, 0); mpi3mr_bsg_request, NULL, 0);
if (IS_ERR(mrioc->bsg_queue)) { if (IS_ERR(mrioc->bsg_queue)) {
ioc_err(mrioc, "%s: bsg registration failed\n", ioc_err(mrioc, "%s: bsg registration failed\n",
dev_name(bsg_dev)); dev_name(bsg_dev));
device_del(bsg_dev); device_del(bsg_dev);
put_device(bsg_dev); put_device(bsg_dev);
return;
} }
blk_queue_max_segments(mrioc->bsg_queue, MPI3MR_MAX_APP_XFER_SEGMENTS);
blk_queue_max_hw_sectors(mrioc->bsg_queue, MPI3MR_MAX_APP_XFER_SECTORS);
return;
} }
/** /**

View File

@ -11,7 +11,7 @@
#include <linux/io-64-nonatomic-lo-hi.h> #include <linux/io-64-nonatomic-lo-hi.h>
static int static int
mpi3mr_issue_reset(struct mpi3mr_ioc *mrioc, u16 reset_type, u32 reset_reason); mpi3mr_issue_reset(struct mpi3mr_ioc *mrioc, u16 reset_type, u16 reset_reason);
static int mpi3mr_setup_admin_qpair(struct mpi3mr_ioc *mrioc); static int mpi3mr_setup_admin_qpair(struct mpi3mr_ioc *mrioc);
static void mpi3mr_process_factsdata(struct mpi3mr_ioc *mrioc, static void mpi3mr_process_factsdata(struct mpi3mr_ioc *mrioc,
struct mpi3_ioc_facts_data *facts_data); struct mpi3_ioc_facts_data *facts_data);
@ -1195,7 +1195,7 @@ static inline void mpi3mr_clear_reset_history(struct mpi3mr_ioc *mrioc)
static int mpi3mr_issue_and_process_mur(struct mpi3mr_ioc *mrioc, static int mpi3mr_issue_and_process_mur(struct mpi3mr_ioc *mrioc,
u32 reset_reason) u32 reset_reason)
{ {
u32 ioc_config, timeout, ioc_status; u32 ioc_config, timeout, ioc_status, scratch_pad0;
int retval = -1; int retval = -1;
ioc_info(mrioc, "Issuing Message unit Reset(MUR)\n"); ioc_info(mrioc, "Issuing Message unit Reset(MUR)\n");
@ -1204,7 +1204,11 @@ static int mpi3mr_issue_and_process_mur(struct mpi3mr_ioc *mrioc,
return retval; return retval;
} }
mpi3mr_clear_reset_history(mrioc); mpi3mr_clear_reset_history(mrioc);
writel(reset_reason, &mrioc->sysif_regs->scratchpad[0]); scratch_pad0 = ((MPI3MR_RESET_REASON_OSTYPE_LINUX <<
MPI3MR_RESET_REASON_OSTYPE_SHIFT) |
(mrioc->facts.ioc_num <<
MPI3MR_RESET_REASON_IOCNUM_SHIFT) | reset_reason);
writel(scratch_pad0, &mrioc->sysif_regs->scratchpad[0]);
ioc_config = readl(&mrioc->sysif_regs->ioc_configuration); ioc_config = readl(&mrioc->sysif_regs->ioc_configuration);
ioc_config &= ~MPI3_SYSIF_IOC_CONFIG_ENABLE_IOC; ioc_config &= ~MPI3_SYSIF_IOC_CONFIG_ENABLE_IOC;
writel(ioc_config, &mrioc->sysif_regs->ioc_configuration); writel(ioc_config, &mrioc->sysif_regs->ioc_configuration);
@ -1276,7 +1280,7 @@ mpi3mr_revalidate_factsdata(struct mpi3mr_ioc *mrioc)
mrioc->shost->max_sectors * 512, mrioc->facts.max_data_length); mrioc->shost->max_sectors * 512, mrioc->facts.max_data_length);
if ((mrioc->sas_transport_enabled) && (mrioc->facts.ioc_capabilities & if ((mrioc->sas_transport_enabled) && (mrioc->facts.ioc_capabilities &
MPI3_IOCFACTS_CAPABILITY_MULTIPATH_ENABLED)) MPI3_IOCFACTS_CAPABILITY_MULTIPATH_SUPPORTED))
ioc_err(mrioc, ioc_err(mrioc,
"critical error: multipath capability is enabled at the\n" "critical error: multipath capability is enabled at the\n"
"\tcontroller while sas transport support is enabled at the\n" "\tcontroller while sas transport support is enabled at the\n"
@ -1520,11 +1524,11 @@ static inline void mpi3mr_set_diagsave(struct mpi3mr_ioc *mrioc)
* Return: 0 on success, non-zero on failure. * Return: 0 on success, non-zero on failure.
*/ */
static int mpi3mr_issue_reset(struct mpi3mr_ioc *mrioc, u16 reset_type, static int mpi3mr_issue_reset(struct mpi3mr_ioc *mrioc, u16 reset_type,
u32 reset_reason) u16 reset_reason)
{ {
int retval = -1; int retval = -1;
u8 unlock_retry_count = 0; u8 unlock_retry_count = 0;
u32 host_diagnostic, ioc_status, ioc_config; u32 host_diagnostic, ioc_status, ioc_config, scratch_pad0;
u32 timeout = MPI3MR_RESET_ACK_TIMEOUT * 10; u32 timeout = MPI3MR_RESET_ACK_TIMEOUT * 10;
if ((reset_type != MPI3_SYSIF_HOST_DIAG_RESET_ACTION_SOFT_RESET) && if ((reset_type != MPI3_SYSIF_HOST_DIAG_RESET_ACTION_SOFT_RESET) &&
@ -1576,6 +1580,9 @@ static int mpi3mr_issue_reset(struct mpi3mr_ioc *mrioc, u16 reset_type,
unlock_retry_count, host_diagnostic); unlock_retry_count, host_diagnostic);
} while (!(host_diagnostic & MPI3_SYSIF_HOST_DIAG_DIAG_WRITE_ENABLE)); } while (!(host_diagnostic & MPI3_SYSIF_HOST_DIAG_DIAG_WRITE_ENABLE));
scratch_pad0 = ((MPI3MR_RESET_REASON_OSTYPE_LINUX <<
MPI3MR_RESET_REASON_OSTYPE_SHIFT) | (mrioc->facts.ioc_num <<
MPI3MR_RESET_REASON_IOCNUM_SHIFT) | reset_reason);
writel(reset_reason, &mrioc->sysif_regs->scratchpad[0]); writel(reset_reason, &mrioc->sysif_regs->scratchpad[0]);
writel(host_diagnostic | reset_type, writel(host_diagnostic | reset_type,
&mrioc->sysif_regs->host_diagnostic); &mrioc->sysif_regs->host_diagnostic);
@ -2581,7 +2588,7 @@ static void mpi3mr_watchdog_work(struct work_struct *work)
unsigned long flags; unsigned long flags;
enum mpi3mr_iocstate ioc_state; enum mpi3mr_iocstate ioc_state;
u32 fault, host_diagnostic, ioc_status; u32 fault, host_diagnostic, ioc_status;
u32 reset_reason = MPI3MR_RESET_FROM_FAULT_WATCH; u16 reset_reason = MPI3MR_RESET_FROM_FAULT_WATCH;
if (mrioc->reset_in_progress) if (mrioc->reset_in_progress)
return; return;
@ -3302,6 +3309,8 @@ static int mpi3mr_issue_iocinit(struct mpi3mr_ioc *mrioc)
iocinit_req.msg_flags |= iocinit_req.msg_flags |=
MPI3_IOCINIT_MSGFLAGS_SCSIIOSTATUSREPLY_SUPPORTED; MPI3_IOCINIT_MSGFLAGS_SCSIIOSTATUSREPLY_SUPPORTED;
iocinit_req.msg_flags |=
MPI3_IOCINIT_MSGFLAGS_WRITESAMEDIVERT_SUPPORTED;
init_completion(&mrioc->init_cmds.done); init_completion(&mrioc->init_cmds.done);
retval = mpi3mr_admin_request_post(mrioc, &iocinit_req, retval = mpi3mr_admin_request_post(mrioc, &iocinit_req,
@ -3668,15 +3677,15 @@ static const struct {
u32 capability; u32 capability;
char *name; char *name;
} mpi3mr_capabilities[] = { } mpi3mr_capabilities[] = {
{ MPI3_IOCFACTS_CAPABILITY_RAID_CAPABLE, "RAID" }, { MPI3_IOCFACTS_CAPABILITY_RAID_SUPPORTED, "RAID" },
{ MPI3_IOCFACTS_CAPABILITY_MULTIPATH_ENABLED, "MultiPath" }, { MPI3_IOCFACTS_CAPABILITY_MULTIPATH_SUPPORTED, "MultiPath" },
}; };
/** /**
* mpi3mr_print_ioc_info - Display controller information * mpi3mr_print_ioc_info - Display controller information
* @mrioc: Adapter instance reference * @mrioc: Adapter instance reference
* *
* Display controller personalit, capability, supported * Display controller personality, capability, supported
* protocols etc. * protocols etc.
* *
* Return: Nothing * Return: Nothing
@ -3685,20 +3694,20 @@ static void
mpi3mr_print_ioc_info(struct mpi3mr_ioc *mrioc) mpi3mr_print_ioc_info(struct mpi3mr_ioc *mrioc)
{ {
int i = 0, bytes_written = 0; int i = 0, bytes_written = 0;
char personality[16]; const char *personality;
char protocol[50] = {0}; char protocol[50] = {0};
char capabilities[100] = {0}; char capabilities[100] = {0};
struct mpi3mr_compimg_ver *fwver = &mrioc->facts.fw_ver; struct mpi3mr_compimg_ver *fwver = &mrioc->facts.fw_ver;
switch (mrioc->facts.personality) { switch (mrioc->facts.personality) {
case MPI3_IOCFACTS_FLAGS_PERSONALITY_EHBA: case MPI3_IOCFACTS_FLAGS_PERSONALITY_EHBA:
strncpy(personality, "Enhanced HBA", sizeof(personality)); personality = "Enhanced HBA";
break; break;
case MPI3_IOCFACTS_FLAGS_PERSONALITY_RAID_DDR: case MPI3_IOCFACTS_FLAGS_PERSONALITY_RAID_DDR:
strncpy(personality, "RAID", sizeof(personality)); personality = "RAID";
break; break;
default: default:
strncpy(personality, "Unknown", sizeof(personality)); personality = "Unknown";
break; break;
} }
@ -3951,7 +3960,7 @@ retry_init:
MPI3MR_HOST_IOS_KDUMP); MPI3MR_HOST_IOS_KDUMP);
if (!(mrioc->facts.ioc_capabilities & if (!(mrioc->facts.ioc_capabilities &
MPI3_IOCFACTS_CAPABILITY_MULTIPATH_ENABLED)) { MPI3_IOCFACTS_CAPABILITY_MULTIPATH_SUPPORTED)) {
mrioc->sas_transport_enabled = 1; mrioc->sas_transport_enabled = 1;
mrioc->scsi_device_channel = 1; mrioc->scsi_device_channel = 1;
mrioc->shost->max_channel = 1; mrioc->shost->max_channel = 1;
@ -4966,7 +4975,7 @@ cleanup_drv_cmd:
* Return: 0 on success, non-zero on failure. * Return: 0 on success, non-zero on failure.
*/ */
int mpi3mr_soft_reset_handler(struct mpi3mr_ioc *mrioc, int mpi3mr_soft_reset_handler(struct mpi3mr_ioc *mrioc,
u32 reset_reason, u8 snapdump) u16 reset_reason, u8 snapdump)
{ {
int retval = 0, i; int retval = 0, i;
unsigned long flags; unsigned long flags;
@ -5102,6 +5111,7 @@ out:
mrioc->device_refresh_on = 0; mrioc->device_refresh_on = 0;
mrioc->unrecoverable = 1; mrioc->unrecoverable = 1;
mrioc->reset_in_progress = 0; mrioc->reset_in_progress = 0;
mrioc->stop_bsgs = 0;
retval = -1; retval = -1;
mpi3mr_flush_cmds_for_unrecovered_controller(mrioc); mpi3mr_flush_cmds_for_unrecovered_controller(mrioc);
} }

View File

@ -986,6 +986,25 @@ static int mpi3mr_change_queue_depth(struct scsi_device *sdev,
return retval; return retval;
} }
static void mpi3mr_configure_nvme_dev(struct mpi3mr_tgt_dev *tgt_dev,
struct queue_limits *lim)
{
u8 pgsz = tgt_dev->dev_spec.pcie_inf.pgsz ? : MPI3MR_DEFAULT_PGSZEXP;
lim->max_hw_sectors = tgt_dev->dev_spec.pcie_inf.mdts / 512;
lim->virt_boundary_mask = (1 << pgsz) - 1;
}
static void mpi3mr_configure_tgt_dev(struct mpi3mr_tgt_dev *tgt_dev,
struct queue_limits *lim)
{
if (tgt_dev->dev_type == MPI3_DEVICE_DEVFORM_PCIE &&
(tgt_dev->dev_spec.pcie_inf.dev_info &
MPI3_DEVICE0_PCIE_DEVICE_INFO_TYPE_MASK) ==
MPI3_DEVICE0_PCIE_DEVICE_INFO_TYPE_NVME_DEVICE)
mpi3mr_configure_nvme_dev(tgt_dev, lim);
}
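The two helpers above only fill in fields of a caller-provided struct queue_limits; committing them is the caller's job. For a device whose queue is already registered (the mpi3mr_update_sdev() path just below, run after a controller reset), the limits are snapshotted, modified and committed in one step. A minimal sketch of that snapshot-and-commit pattern follows; the helper name my_refresh_nvme_limits and the mdts/pgsz_exp parameters are made up for illustration.

/* Sketch only: runtime queue_limits update on an existing SCSI device.
 * my_refresh_nvme_limits(), mdts and pgsz_exp are illustrative, not driver API. */
#include <linux/blkdev.h>
#include <scsi/scsi_device.h>

static void my_refresh_nvme_limits(struct scsi_device *sdev, u32 mdts, u8 pgsz_exp)
{
	struct queue_limits lim;

	lim = queue_limits_start_update(sdev->request_queue);
	lim.max_hw_sectors = mdts / 512;		/* block layer sectors are 512 bytes */
	lim.virt_boundary_mask = (1U << pgsz_exp) - 1;
	if (queue_limits_commit_update(sdev->request_queue, &lim))
		sdev_printk(KERN_WARNING, sdev, "queue limit update failed\n");
}

Unlike the removed blk_queue_max_hw_sectors()/blk_queue_virt_boundary() calls, the commit validates and applies the whole limit set at once, which is why the driver above only warns if it fails.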
/** /**
* mpi3mr_update_sdev - Update SCSI device information * mpi3mr_update_sdev - Update SCSI device information
* @sdev: SCSI device reference * @sdev: SCSI device reference
@ -1001,35 +1020,21 @@ static void
mpi3mr_update_sdev(struct scsi_device *sdev, void *data) mpi3mr_update_sdev(struct scsi_device *sdev, void *data)
{ {
struct mpi3mr_tgt_dev *tgtdev; struct mpi3mr_tgt_dev *tgtdev;
struct queue_limits lim;
tgtdev = (struct mpi3mr_tgt_dev *)data; tgtdev = (struct mpi3mr_tgt_dev *)data;
if (!tgtdev) if (!tgtdev)
return; return;
mpi3mr_change_queue_depth(sdev, tgtdev->q_depth); mpi3mr_change_queue_depth(sdev, tgtdev->q_depth);
switch (tgtdev->dev_type) {
case MPI3_DEVICE_DEVFORM_PCIE: lim = queue_limits_start_update(sdev->request_queue);
/*The block layer hw sector size = 512*/ mpi3mr_configure_tgt_dev(tgtdev, &lim);
if ((tgtdev->dev_spec.pcie_inf.dev_info & WARN_ON_ONCE(queue_limits_commit_update(sdev->request_queue, &lim));
MPI3_DEVICE0_PCIE_DEVICE_INFO_TYPE_MASK) ==
MPI3_DEVICE0_PCIE_DEVICE_INFO_TYPE_NVME_DEVICE) {
blk_queue_max_hw_sectors(sdev->request_queue,
tgtdev->dev_spec.pcie_inf.mdts / 512);
if (tgtdev->dev_spec.pcie_inf.pgsz == 0)
blk_queue_virt_boundary(sdev->request_queue,
((1 << MPI3MR_DEFAULT_PGSZEXP) - 1));
else
blk_queue_virt_boundary(sdev->request_queue,
((1 << tgtdev->dev_spec.pcie_inf.pgsz) - 1));
}
break;
default:
break;
}
} }
/** /**
* mpi3mr_rfresh_tgtdevs - Refresh target device exposure * mpi3mr_refresh_tgtdevs - Refresh target device exposure
* @mrioc: Adapter instance reference * @mrioc: Adapter instance reference
* *
* This is executed post controller reset to identify any * This is executed post controller reset to identify any
@ -1038,8 +1043,7 @@ mpi3mr_update_sdev(struct scsi_device *sdev, void *data)
* *
* Return: Nothing. * Return: Nothing.
*/ */
static void mpi3mr_refresh_tgtdevs(struct mpi3mr_ioc *mrioc)
void mpi3mr_rfresh_tgtdevs(struct mpi3mr_ioc *mrioc)
{ {
struct mpi3mr_tgt_dev *tgtdev, *tgtdev_next; struct mpi3mr_tgt_dev *tgtdev, *tgtdev_next;
struct mpi3mr_stgt_priv_data *tgt_priv; struct mpi3mr_stgt_priv_data *tgt_priv;
@ -1047,8 +1051,8 @@ void mpi3mr_rfresh_tgtdevs(struct mpi3mr_ioc *mrioc)
dprint_reset(mrioc, "refresh target devices: check for removals\n"); dprint_reset(mrioc, "refresh target devices: check for removals\n");
list_for_each_entry_safe(tgtdev, tgtdev_next, &mrioc->tgtdev_list, list_for_each_entry_safe(tgtdev, tgtdev_next, &mrioc->tgtdev_list,
list) { list) {
if ((tgtdev->dev_handle == MPI3MR_INVALID_DEV_HANDLE) && if (((tgtdev->dev_handle == MPI3MR_INVALID_DEV_HANDLE) ||
tgtdev->is_hidden && tgtdev->is_hidden) &&
tgtdev->host_exposed && tgtdev->starget && tgtdev->host_exposed && tgtdev->starget &&
tgtdev->starget->hostdata) { tgtdev->starget->hostdata) {
tgt_priv = tgtdev->starget->hostdata; tgt_priv = tgtdev->starget->hostdata;
@ -2010,7 +2014,7 @@ static void mpi3mr_fwevt_bh(struct mpi3mr_ioc *mrioc,
mpi3mr_refresh_sas_ports(mrioc); mpi3mr_refresh_sas_ports(mrioc);
mpi3mr_refresh_expanders(mrioc); mpi3mr_refresh_expanders(mrioc);
} }
mpi3mr_rfresh_tgtdevs(mrioc); mpi3mr_refresh_tgtdevs(mrioc);
ioc_info(mrioc, ioc_info(mrioc,
"scan for non responding and newly added devices after soft reset completed\n"); "scan for non responding and newly added devices after soft reset completed\n");
break; break;
@ -4393,15 +4397,17 @@ static void mpi3mr_target_destroy(struct scsi_target *starget)
} }
/** /**
* mpi3mr_slave_configure - Slave configure callback handler * mpi3mr_device_configure - Slave configure callback handler
* @sdev: SCSI device reference * @sdev: SCSI device reference
* @lim: queue limits
* *
* Configure queue depth, max hardware sectors and virt boundary * Configure queue depth, max hardware sectors and virt boundary
* as required * as required
* *
* Return: 0 always. * Return: 0 always.
*/ */
static int mpi3mr_slave_configure(struct scsi_device *sdev) static int mpi3mr_device_configure(struct scsi_device *sdev,
struct queue_limits *lim)
{ {
struct scsi_target *starget; struct scsi_target *starget;
struct Scsi_Host *shost; struct Scsi_Host *shost;
@ -4432,28 +4438,8 @@ static int mpi3mr_slave_configure(struct scsi_device *sdev)
sdev->eh_timeout = MPI3MR_EH_SCMD_TIMEOUT; sdev->eh_timeout = MPI3MR_EH_SCMD_TIMEOUT;
blk_queue_rq_timeout(sdev->request_queue, MPI3MR_SCMD_TIMEOUT); blk_queue_rq_timeout(sdev->request_queue, MPI3MR_SCMD_TIMEOUT);
switch (tgt_dev->dev_type) { mpi3mr_configure_tgt_dev(tgt_dev, lim);
case MPI3_DEVICE_DEVFORM_PCIE:
/*The block layer hw sector size = 512*/
if ((tgt_dev->dev_spec.pcie_inf.dev_info &
MPI3_DEVICE0_PCIE_DEVICE_INFO_TYPE_MASK) ==
MPI3_DEVICE0_PCIE_DEVICE_INFO_TYPE_NVME_DEVICE) {
blk_queue_max_hw_sectors(sdev->request_queue,
tgt_dev->dev_spec.pcie_inf.mdts / 512);
if (tgt_dev->dev_spec.pcie_inf.pgsz == 0)
blk_queue_virt_boundary(sdev->request_queue,
((1 << MPI3MR_DEFAULT_PGSZEXP) - 1));
else
blk_queue_virt_boundary(sdev->request_queue,
((1 << tgt_dev->dev_spec.pcie_inf.pgsz) - 1));
}
break;
default:
break;
}
mpi3mr_tgtdev_put(tgt_dev); mpi3mr_tgtdev_put(tgt_dev);
return retval; return retval;
} }
@ -4895,7 +4881,7 @@ static int mpi3mr_qcmd(struct Scsi_Host *shost,
MPI3_SCSIIO_MSGFLAGS_DIVERT_TO_FIRMWARE; MPI3_SCSIIO_MSGFLAGS_DIVERT_TO_FIRMWARE;
scsiio_flags |= MPI3_SCSIIO_FLAGS_DIVERT_REASON_IO_THROTTLING; scsiio_flags |= MPI3_SCSIIO_FLAGS_DIVERT_REASON_IO_THROTTLING;
} }
scsiio_req->flags = cpu_to_le32(scsiio_flags); scsiio_req->flags |= cpu_to_le32(scsiio_flags);
if (mpi3mr_op_request_post(mrioc, op_req_q, if (mpi3mr_op_request_post(mrioc, op_req_q,
scmd_priv_data->mpi3mr_scsiio_req)) { scmd_priv_data->mpi3mr_scsiio_req)) {
@ -4921,7 +4907,7 @@ static const struct scsi_host_template mpi3mr_driver_template = {
.queuecommand = mpi3mr_qcmd, .queuecommand = mpi3mr_qcmd,
.target_alloc = mpi3mr_target_alloc, .target_alloc = mpi3mr_target_alloc,
.slave_alloc = mpi3mr_slave_alloc, .slave_alloc = mpi3mr_slave_alloc,
.slave_configure = mpi3mr_slave_configure, .device_configure = mpi3mr_device_configure,
.target_destroy = mpi3mr_target_destroy, .target_destroy = mpi3mr_target_destroy,
.slave_destroy = mpi3mr_slave_destroy, .slave_destroy = mpi3mr_slave_destroy,
.scan_finished = mpi3mr_scan_finished, .scan_finished = mpi3mr_scan_finished,
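As a reference for the conversion pattern used throughout this series, here is a minimal sketch of a host template wired to the new ->device_configure() callback: the midlayer passes in a struct queue_limits for the driver to fill, and a non-zero return causes the device to be ignored. All my_* names and MY_MAX_XFER_SECTORS are hypothetical.

/* Sketch only: the ->device_configure() shape the drivers above are converted to. */
#include <linux/module.h>
#include <linux/blkdev.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>

#define MY_MAX_XFER_SECTORS	2048	/* hypothetical 1 MiB cap, in 512-byte sectors */

static int my_device_configure(struct scsi_device *sdev, struct queue_limits *lim)
{
	/* Fill in limits here instead of calling blk_queue_*() helpers. */
	lim->max_hw_sectors = MY_MAX_XFER_SECTORS;
	lim->dma_alignment = 0x7;	/* e.g. for T10 PI capable hardware, as in megasas/qla2xxx above */
	return 0;			/* non-zero: the device is ignored */
}

static const struct scsi_host_template my_template = {
	.module			= THIS_MODULE,
	.name			= "my_driver",
	.device_configure	= my_device_configure,
	/* remaining mandatory callbacks omitted in this sketch */
};

The .slave_configure -> .device_configure template hunks above are exactly this wiring.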

View File

@ -1351,11 +1351,21 @@ static struct mpi3mr_sas_port *mpi3mr_sas_port_add(struct mpi3mr_ioc *mrioc,
mpi3mr_sas_port_sanity_check(mrioc, mr_sas_node, mpi3mr_sas_port_sanity_check(mrioc, mr_sas_node,
mr_sas_port->remote_identify.sas_address, hba_port); mr_sas_port->remote_identify.sas_address, hba_port);
if (mr_sas_node->num_phys > sizeof(mr_sas_port->phy_mask) * 8)
ioc_info(mrioc, "max port count %u could be too high\n",
mr_sas_node->num_phys);
for (i = 0; i < mr_sas_node->num_phys; i++) { for (i = 0; i < mr_sas_node->num_phys; i++) {
if ((mr_sas_node->phy[i].remote_identify.sas_address != if ((mr_sas_node->phy[i].remote_identify.sas_address !=
mr_sas_port->remote_identify.sas_address) || mr_sas_port->remote_identify.sas_address) ||
(mr_sas_node->phy[i].hba_port != hba_port)) (mr_sas_node->phy[i].hba_port != hba_port))
continue; continue;
if (i > sizeof(mr_sas_port->phy_mask) * 8) {
ioc_warn(mrioc, "skipping port %u, max allowed value is %lu\n",
i, sizeof(mr_sas_port->phy_mask) * 8);
goto out_fail;
}
list_add_tail(&mr_sas_node->phy[i].port_siblings, list_add_tail(&mr_sas_node->phy[i].port_siblings,
&mr_sas_port->phy_list); &mr_sas_port->phy_list);
mr_sas_port->num_phys++; mr_sas_port->num_phys++;

View File

@ -4774,7 +4774,7 @@ _base_display_ioc_capabilities(struct MPT3SAS_ADAPTER *ioc)
char desc[17] = {0}; char desc[17] = {0};
u32 iounit_pg1_flags; u32 iounit_pg1_flags;
strncpy(desc, ioc->manu_pg0.ChipName, 16); strscpy(desc, ioc->manu_pg0.ChipName, sizeof(desc));
ioc_info(ioc, "%s: FWVersion(%02d.%02d.%02d.%02d), ChipRevision(0x%02x)\n", ioc_info(ioc, "%s: FWVersion(%02d.%02d.%02d.%02d), ChipRevision(0x%02x)\n",
desc, desc,
(ioc->facts.FWVersion.Word & 0xFF000000) >> 24, (ioc->facts.FWVersion.Word & 0xFF000000) >> 24,

View File

@ -2497,14 +2497,15 @@ _scsih_enable_tlr(struct MPT3SAS_ADAPTER *ioc, struct scsi_device *sdev)
} }
/** /**
* scsih_slave_configure - device configure routine. * scsih_device_configure - device configure routine.
* @sdev: scsi device struct * @sdev: scsi device struct
* @lim: queue limits
* *
* Return: 0 if ok. Any other return is assumed to be an error and * Return: 0 if ok. Any other return is assumed to be an error and
* the device is ignored. * the device is ignored.
*/ */
static int static int
scsih_slave_configure(struct scsi_device *sdev) scsih_device_configure(struct scsi_device *sdev, struct queue_limits *lim)
{ {
struct Scsi_Host *shost = sdev->host; struct Scsi_Host *shost = sdev->host;
struct MPT3SAS_ADAPTER *ioc = shost_priv(shost); struct MPT3SAS_ADAPTER *ioc = shost_priv(shost);
@ -2609,8 +2610,7 @@ scsih_slave_configure(struct scsi_device *sdev)
raid_device->num_pds, ds); raid_device->num_pds, ds);
if (shost->max_sectors > MPT3SAS_RAID_MAX_SECTORS) { if (shost->max_sectors > MPT3SAS_RAID_MAX_SECTORS) {
blk_queue_max_hw_sectors(sdev->request_queue, lim->max_hw_sectors = MPT3SAS_RAID_MAX_SECTORS;
MPT3SAS_RAID_MAX_SECTORS);
sdev_printk(KERN_INFO, sdev, sdev_printk(KERN_INFO, sdev,
"Set queue's max_sector to: %u\n", "Set queue's max_sector to: %u\n",
MPT3SAS_RAID_MAX_SECTORS); MPT3SAS_RAID_MAX_SECTORS);
@ -2675,8 +2675,7 @@ scsih_slave_configure(struct scsi_device *sdev)
pcie_device->connector_name); pcie_device->connector_name);
if (pcie_device->nvme_mdts) if (pcie_device->nvme_mdts)
blk_queue_max_hw_sectors(sdev->request_queue, lim->max_hw_sectors = pcie_device->nvme_mdts / 512;
pcie_device->nvme_mdts/512);
pcie_device_put(pcie_device); pcie_device_put(pcie_device);
spin_unlock_irqrestore(&ioc->pcie_device_lock, flags); spin_unlock_irqrestore(&ioc->pcie_device_lock, flags);
@ -2687,8 +2686,7 @@ scsih_slave_configure(struct scsi_device *sdev)
**/ **/
blk_queue_flag_set(QUEUE_FLAG_NOMERGES, blk_queue_flag_set(QUEUE_FLAG_NOMERGES,
sdev->request_queue); sdev->request_queue);
blk_queue_virt_boundary(sdev->request_queue, lim->virt_boundary_mask = ioc->page_size - 1;
ioc->page_size - 1);
return 0; return 0;
} }
@ -11914,7 +11912,7 @@ static const struct scsi_host_template mpt2sas_driver_template = {
.queuecommand = scsih_qcmd, .queuecommand = scsih_qcmd,
.target_alloc = scsih_target_alloc, .target_alloc = scsih_target_alloc,
.slave_alloc = scsih_slave_alloc, .slave_alloc = scsih_slave_alloc,
.slave_configure = scsih_slave_configure, .device_configure = scsih_device_configure,
.target_destroy = scsih_target_destroy, .target_destroy = scsih_target_destroy,
.slave_destroy = scsih_slave_destroy, .slave_destroy = scsih_slave_destroy,
.scan_finished = scsih_scan_finished, .scan_finished = scsih_scan_finished,
@ -11952,7 +11950,7 @@ static const struct scsi_host_template mpt3sas_driver_template = {
.queuecommand = scsih_qcmd, .queuecommand = scsih_qcmd,
.target_alloc = scsih_target_alloc, .target_alloc = scsih_target_alloc,
.slave_alloc = scsih_slave_alloc, .slave_alloc = scsih_slave_alloc,
.slave_configure = scsih_slave_configure, .device_configure = scsih_device_configure,
.target_destroy = scsih_target_destroy, .target_destroy = scsih_target_destroy,
.slave_destroy = scsih_slave_destroy, .slave_destroy = scsih_slave_destroy,
.scan_finished = scsih_scan_finished, .scan_finished = scsih_scan_finished,

View File

@ -458,17 +458,17 @@ _transport_expander_report_manufacture(struct MPT3SAS_ADAPTER *ioc,
goto out; goto out;
manufacture_reply = data_out + sizeof(struct rep_manu_request); manufacture_reply = data_out + sizeof(struct rep_manu_request);
strncpy(edev->vendor_id, manufacture_reply->vendor_id, strscpy(edev->vendor_id, manufacture_reply->vendor_id,
SAS_EXPANDER_VENDOR_ID_LEN); sizeof(edev->vendor_id));
strncpy(edev->product_id, manufacture_reply->product_id, strscpy(edev->product_id, manufacture_reply->product_id,
SAS_EXPANDER_PRODUCT_ID_LEN); sizeof(edev->product_id));
strncpy(edev->product_rev, manufacture_reply->product_rev, strscpy(edev->product_rev, manufacture_reply->product_rev,
SAS_EXPANDER_PRODUCT_REV_LEN); sizeof(edev->product_rev));
edev->level = manufacture_reply->sas_format & 1; edev->level = manufacture_reply->sas_format & 1;
if (edev->level) { if (edev->level) {
strncpy(edev->component_vendor_id, strscpy(edev->component_vendor_id,
manufacture_reply->component_vendor_id, manufacture_reply->component_vendor_id,
SAS_EXPANDER_COMPONENT_VENDOR_ID_LEN); sizeof(edev->component_vendor_id));
tmp = (u8 *)&manufacture_reply->component_id; tmp = (u8 *)&manufacture_reply->component_id;
edev->component_id = tmp[0] << 8 | tmp[1]; edev->component_id = tmp[0] << 8 | tmp[1];
edev->component_revision_id = edev->component_revision_id =

View File

@ -26,33 +26,18 @@ static const struct mvs_chip_info mvs_chips[] = {
}; };
static const struct attribute_group *mvst_host_groups[]; static const struct attribute_group *mvst_host_groups[];
static const struct attribute_group *mvst_sdev_groups[];
#define SOC_SAS_NUM 2 #define SOC_SAS_NUM 2
static const struct scsi_host_template mvs_sht = { static const struct scsi_host_template mvs_sht = {
.module = THIS_MODULE, LIBSAS_SHT_BASE
.name = DRV_NAME,
.queuecommand = sas_queuecommand,
.dma_need_drain = ata_scsi_dma_need_drain,
.target_alloc = sas_target_alloc,
.slave_configure = sas_slave_configure,
.scan_finished = mvs_scan_finished, .scan_finished = mvs_scan_finished,
.scan_start = mvs_scan_start, .scan_start = mvs_scan_start,
.change_queue_depth = sas_change_queue_depth,
.bios_param = sas_bios_param,
.can_queue = 1, .can_queue = 1,
.this_id = -1,
.sg_tablesize = SG_ALL, .sg_tablesize = SG_ALL,
.max_sectors = SCSI_DEFAULT_MAX_SECTORS,
.eh_device_reset_handler = sas_eh_device_reset_handler,
.eh_target_reset_handler = sas_eh_target_reset_handler,
.slave_alloc = sas_slave_alloc,
.target_destroy = sas_target_destroy,
.ioctl = sas_ioctl,
#ifdef CONFIG_COMPAT
.compat_ioctl = sas_ioctl,
#endif
.shost_groups = mvst_host_groups, .shost_groups = mvst_host_groups,
.sdev_groups = mvst_sdev_groups,
.track_queue_depth = 1, .track_queue_depth = 1,
}; };
@ -779,6 +764,11 @@ static struct attribute *mvst_host_attrs[] = {
ATTRIBUTE_GROUPS(mvst_host); ATTRIBUTE_GROUPS(mvst_host);
static const struct attribute_group *mvst_sdev_groups[] = {
&sas_ata_sdev_attr_group,
NULL
};
module_init(mvs_init); module_init(mvs_init);
module_exit(mvs_exit); module_exit(mvs_exit);

View File

@ -1039,3 +1039,8 @@ const struct attribute_group *pm8001_host_groups[] = {
&pm8001_host_attr_group, &pm8001_host_attr_group,
NULL NULL
}; };
const struct attribute_group *pm8001_sdev_groups[] = {
&sas_ata_sdev_attr_group,
NULL
};

View File

@ -110,30 +110,13 @@ static void pm8001_map_queues(struct Scsi_Host *shost)
* The main structure which LLDD must register for scsi core. * The main structure which LLDD must register for scsi core.
*/ */
static const struct scsi_host_template pm8001_sht = { static const struct scsi_host_template pm8001_sht = {
.module = THIS_MODULE, LIBSAS_SHT_BASE
.name = DRV_NAME,
.proc_name = DRV_NAME,
.queuecommand = sas_queuecommand,
.dma_need_drain = ata_scsi_dma_need_drain,
.target_alloc = sas_target_alloc,
.slave_configure = sas_slave_configure,
.scan_finished = pm8001_scan_finished, .scan_finished = pm8001_scan_finished,
.scan_start = pm8001_scan_start, .scan_start = pm8001_scan_start,
.change_queue_depth = sas_change_queue_depth,
.bios_param = sas_bios_param,
.can_queue = 1, .can_queue = 1,
.this_id = -1,
.sg_tablesize = PM8001_MAX_DMA_SG, .sg_tablesize = PM8001_MAX_DMA_SG,
.max_sectors = SCSI_DEFAULT_MAX_SECTORS,
.eh_device_reset_handler = sas_eh_device_reset_handler,
.eh_target_reset_handler = sas_eh_target_reset_handler,
.slave_alloc = sas_slave_alloc,
.target_destroy = sas_target_destroy,
.ioctl = sas_ioctl,
#ifdef CONFIG_COMPAT
.compat_ioctl = sas_ioctl,
#endif
.shost_groups = pm8001_host_groups, .shost_groups = pm8001_host_groups,
.sdev_groups = pm8001_sdev_groups,
.track_queue_depth = 1, .track_queue_depth = 1,
.cmd_per_lun = 32, .cmd_per_lun = 32,
.map_queues = pm8001_map_queues, .map_queues = pm8001_map_queues,
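Both the mvsas and pm8001 conversions above collapse the boilerplate libsas callbacks into the LIBSAS_SHT_BASE macro and hang the shared sas_ata_sdev_attr_group off ->sdev_groups. A minimal sketch of a libsas host template written that way is below; DRV_NAME and the my_* identifiers are placeholders, and the assumption (suggested by the removed .module/.name lines above) is that LIBSAS_SHT_BASE supplies those fields from THIS_MODULE and DRV_NAME.

/* Sketch only: a libsas LLDD host template built on LIBSAS_SHT_BASE. */
#define DRV_NAME "my_sas"		/* assumed to be picked up by LIBSAS_SHT_BASE */

#include <linux/module.h>
#include <scsi/scsi_host.h>
#include <scsi/libsas.h>
#include <scsi/sas_ata.h>

static const struct attribute_group *my_sdev_groups[] = {
	&sas_ata_sdev_attr_group,	/* common libata sdev attributes, as in the hunks above */
	NULL
};

static const struct scsi_host_template my_sas_sht = {
	LIBSAS_SHT_BASE			/* common libsas callbacks, name, module */
	.can_queue		= 1,
	.sg_tablesize		= SG_ALL,
	.track_queue_depth	= 1,
	.sdev_groups		= my_sdev_groups,
};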

View File

@ -717,6 +717,7 @@ int pm80xx_fatal_errors(struct pm8001_hba_info *pm8001_ha);
void pm8001_free_dev(struct pm8001_device *pm8001_dev); void pm8001_free_dev(struct pm8001_device *pm8001_dev);
/* ctl shared API */ /* ctl shared API */
extern const struct attribute_group *pm8001_host_groups[]; extern const struct attribute_group *pm8001_host_groups[];
extern const struct attribute_group *pm8001_sdev_groups[];
#define PM8001_INVALID_TAG ((u32)-1) #define PM8001_INVALID_TAG ((u32)-1)

View File

@ -197,8 +197,9 @@ static int pmcraid_slave_alloc(struct scsi_device *scsi_dev)
} }
/** /**
* pmcraid_slave_configure - Configures a SCSI device * pmcraid_device_configure - Configures a SCSI device
* @scsi_dev: scsi device struct * @scsi_dev: scsi device struct
* @lim: queue limits
* *
* This function is executed by SCSI mid layer just after a device is first * This function is executed by SCSI mid layer just after a device is first
* scanned (i.e. it has responded to an INQUIRY). For VSET resources, the * scanned (i.e. it has responded to an INQUIRY). For VSET resources, the
@ -209,7 +210,8 @@ static int pmcraid_slave_alloc(struct scsi_device *scsi_dev)
* Return value: * Return value:
* 0 on success * 0 on success
*/ */
static int pmcraid_slave_configure(struct scsi_device *scsi_dev) static int pmcraid_device_configure(struct scsi_device *scsi_dev,
struct queue_limits *lim)
{ {
struct pmcraid_resource_entry *res = scsi_dev->hostdata; struct pmcraid_resource_entry *res = scsi_dev->hostdata;
@ -233,8 +235,7 @@ static int pmcraid_slave_configure(struct scsi_device *scsi_dev)
scsi_dev->allow_restart = 1; scsi_dev->allow_restart = 1;
blk_queue_rq_timeout(scsi_dev->request_queue, blk_queue_rq_timeout(scsi_dev->request_queue,
PMCRAID_VSET_IO_TIMEOUT); PMCRAID_VSET_IO_TIMEOUT);
blk_queue_max_hw_sectors(scsi_dev->request_queue, lim->max_hw_sectors = PMCRAID_VSET_MAX_SECTORS;
PMCRAID_VSET_MAX_SECTORS);
} }
/* /*
@ -3668,7 +3669,7 @@ static const struct scsi_host_template pmcraid_host_template = {
.eh_host_reset_handler = pmcraid_eh_host_reset_handler, .eh_host_reset_handler = pmcraid_eh_host_reset_handler,
.slave_alloc = pmcraid_slave_alloc, .slave_alloc = pmcraid_slave_alloc,
.slave_configure = pmcraid_slave_configure, .device_configure = pmcraid_device_configure,
.slave_destroy = pmcraid_slave_destroy, .slave_destroy = pmcraid_slave_destroy,
.change_queue_depth = pmcraid_change_queue_depth, .change_queue_depth = pmcraid_change_queue_depth,
.can_queue = PMCRAID_MAX_IO_CMD, .can_queue = PMCRAID_MAX_IO_CMD,
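The pmcraid conversion above also shows where the split lies: transfer-size limits now go back to the midlayer through the queue_limits argument, while per-device settings that are not limits (the request timeout, allow_restart) are still applied directly. A minimal sketch of that mix, with hypothetical MY_* constants standing in for the PMCRAID ones:

/* Sketch only: limits via *lim, non-limit queue attributes set directly. */
#include <linux/blkdev.h>
#include <linux/jiffies.h>
#include <scsi/scsi_device.h>

#define MY_VSET_MAX_SECTORS	512		/* hypothetical transfer cap */
#define MY_VSET_IO_TIMEOUT	(60 * HZ)	/* hypothetical request timeout */

static int my_vset_configure(struct scsi_device *sdev, struct queue_limits *lim)
{
	lim->max_hw_sectors = MY_VSET_MAX_SECTORS;	/* applied by the midlayer on return */
	blk_queue_rq_timeout(sdev->request_queue, MY_VSET_IO_TIMEOUT);
	sdev->allow_restart = 1;
	return 0;
}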

View File

@ -986,12 +986,6 @@ second_pass:
return -ENODEV; return -ENODEV;
} }
static int ppa_adjust_queue(struct scsi_device *device)
{
blk_queue_bounce_limit(device->request_queue, BLK_BOUNCE_HIGH);
return 0;
}
static const struct scsi_host_template ppa_template = { static const struct scsi_host_template ppa_template = {
.module = THIS_MODULE, .module = THIS_MODULE,
.proc_name = "ppa", .proc_name = "ppa",
@ -1005,7 +999,6 @@ static const struct scsi_host_template ppa_template = {
.this_id = -1, .this_id = -1,
.sg_tablesize = SG_ALL, .sg_tablesize = SG_ALL,
.can_queue = 1, .can_queue = 1,
.slave_alloc = ppa_adjust_queue,
.cmd_size = sizeof(struct scsi_pointer), .cmd_size = sizeof(struct scsi_pointer),
}; };
@ -1111,6 +1104,7 @@ static int __ppa_attach(struct parport *pb)
host = scsi_host_alloc(&ppa_template, sizeof(ppa_struct *)); host = scsi_host_alloc(&ppa_template, sizeof(ppa_struct *));
if (!host) if (!host)
goto out1; goto out1;
host->no_highmem = true;
host->io_port = pb->base; host->io_port = pb->base;
host->n_io_port = ports; host->n_io_port = ports;
host->dma_channel = -1; host->dma_channel = -1;

View File

@ -170,7 +170,7 @@ qedf_dbg_debug_cmd_write(struct file *filp, const char __user *buffer,
if (!count || *ppos) if (!count || *ppos)
return 0; return 0;
kern_buf = memdup_user(buffer, count); kern_buf = memdup_user_nul(buffer, count);
if (IS_ERR(kern_buf)) if (IS_ERR(kern_buf))
return PTR_ERR(kern_buf); return PTR_ERR(kern_buf);

View File

@ -2324,9 +2324,6 @@ static int qedf_execute_tmf(struct qedf_rport *fcport, u64 tm_lun,
io_req->fcport = fcport; io_req->fcport = fcport;
io_req->cmd_type = QEDF_TASK_MGMT_CMD; io_req->cmd_type = QEDF_TASK_MGMT_CMD;
/* Record which cpu this request is associated with */
io_req->cpu = smp_processor_id();
/* Set TM flags */ /* Set TM flags */
io_req->io_req_flags = QEDF_READ; io_req->io_req_flags = QEDF_READ;
io_req->data_xfer_len = 0; io_req->data_xfer_len = 0;
@ -2349,6 +2346,9 @@ static int qedf_execute_tmf(struct qedf_rport *fcport, u64 tm_lun,
spin_lock_irqsave(&fcport->rport_lock, flags); spin_lock_irqsave(&fcport->rport_lock, flags);
/* Record which cpu this request is associated with */
io_req->cpu = smp_processor_id();
sqe_idx = qedf_get_sqe_idx(fcport); sqe_idx = qedf_get_sqe_idx(fcport);
sqe = &fcport->sq[sqe_idx]; sqe = &fcport->sq[sqe_idx];
memset(sqe, 0, sizeof(struct fcoe_wqe)); memset(sqe, 0, sizeof(struct fcoe_wqe));

View File

@ -3468,7 +3468,7 @@ retry_probe:
slowpath_params.drv_minor = QEDF_DRIVER_MINOR_VER; slowpath_params.drv_minor = QEDF_DRIVER_MINOR_VER;
slowpath_params.drv_rev = QEDF_DRIVER_REV_VER; slowpath_params.drv_rev = QEDF_DRIVER_REV_VER;
slowpath_params.drv_eng = QEDF_DRIVER_ENG_VER; slowpath_params.drv_eng = QEDF_DRIVER_ENG_VER;
strncpy(slowpath_params.name, "qedf", QED_DRV_VER_STR_SIZE); strscpy(slowpath_params.name, "qedf", sizeof(slowpath_params.name));
rc = qed_ops->common->slowpath_start(qedf->cdev, &slowpath_params); rc = qed_ops->common->slowpath_start(qedf->cdev, &slowpath_params);
if (rc) { if (rc) {
QEDF_ERR(&(qedf->dbg_ctx), "Cannot start slowpath.\n"); QEDF_ERR(&(qedf->dbg_ctx), "Cannot start slowpath.\n");

View File

@ -120,15 +120,11 @@ static ssize_t
qedi_dbg_do_not_recover_cmd_read(struct file *filp, char __user *buffer, qedi_dbg_do_not_recover_cmd_read(struct file *filp, char __user *buffer,
size_t count, loff_t *ppos) size_t count, loff_t *ppos)
{ {
size_t cnt = 0; char buf[64];
int len;
if (*ppos) len = sprintf(buf, "do_not_recover=%d\n", qedi_do_not_recover);
return 0; return simple_read_from_buffer(buffer, count, ppos, buf, len);
cnt = sprintf(buffer, "do_not_recover=%d\n", qedi_do_not_recover);
cnt = min_t(int, count, cnt - *ppos);
*ppos += cnt;
return cnt;
} }
static int static int

View File

@ -7,29 +7,29 @@ config SCSI_QLA_FC
select FW_LOADER select FW_LOADER
select BTREE select BTREE
help help
This qla2xxx driver supports all QLogic Fibre Channel This qla2xxx driver supports all QLogic Fibre Channel
PCI and PCIe host adapters. PCI and PCIe host adapters.
By default, firmware for the ISP parts will be loaded By default, firmware for the ISP parts will be loaded
via the Firmware Loader interface. via the Firmware Loader interface.
ISP Firmware Filename ISP Firmware Filename
---------- ----------------- ---------- -----------------
21xx ql2100_fw.bin 21xx ql2100_fw.bin
22xx ql2200_fw.bin 22xx ql2200_fw.bin
2300, 2312, 6312 ql2300_fw.bin 2300, 2312, 6312 ql2300_fw.bin
2322, 6322 ql2322_fw.bin 2322, 6322 ql2322_fw.bin
24xx, 54xx ql2400_fw.bin 24xx, 54xx ql2400_fw.bin
25xx ql2500_fw.bin 25xx ql2500_fw.bin
Upon request, the driver caches the firmware image until Upon request, the driver caches the firmware image until
the driver is unloaded. the driver is unloaded.
Firmware images can be retrieved from: Firmware images can be retrieved from:
http://ldriver.qlogic.com/firmware/ http://ldriver.qlogic.com/firmware/
They are also included in the linux-firmware tree as well. They are also included in the linux-firmware tree as well.
config TCM_QLA2XXX config TCM_QLA2XXX
tristate "TCM_QLA2XXX fabric module for QLogic 24xx+ series target mode HBAs" tristate "TCM_QLA2XXX fabric module for QLogic 24xx+ series target mode HBAs"
@ -38,13 +38,15 @@ config TCM_QLA2XXX
select BTREE select BTREE
default n default n
help help
Say Y here to enable the TCM_QLA2XXX fabric module for QLogic 24xx+ series target mode HBAs Say Y here to enable the TCM_QLA2XXX fabric module for QLogic 24xx+
series target mode HBAs.
if TCM_QLA2XXX if TCM_QLA2XXX
config TCM_QLA2XXX_DEBUG config TCM_QLA2XXX_DEBUG
bool "TCM_QLA2XXX fabric module DEBUG mode for QLogic 24xx+ series target mode HBAs" bool "TCM_QLA2XXX fabric module DEBUG mode for QLogic 24xx+ series target mode HBAs"
default n default n
help help
Say Y here to enable the TCM_QLA2XXX fabric module DEBUG for QLogic 24xx+ series target mode HBAs Say Y here to enable the TCM_QLA2XXX fabric module DEBUG for
This will include code to enable the SCSI command jammer QLogic 24xx+ series target mode HBAs.
This will include code to enable the SCSI command jammer.
endif endif

View File

@ -274,7 +274,7 @@ qla_dfs_fw_resource_cnt_show(struct seq_file *s, void *unused)
seq_printf(s, "Driver: estimate iocb used [%d] high water limit [%d]\n", seq_printf(s, "Driver: estimate iocb used [%d] high water limit [%d]\n",
iocbs_used, ha->base_qpair->fwres.iocbs_limit); iocbs_used, ha->base_qpair->fwres.iocbs_limit);
seq_printf(s, "estimate exchange used[%d] high water limit [%d] n", seq_printf(s, "estimate exchange used[%d] high water limit [%d]\n",
exch_used, ha->base_qpair->fwres.exch_limit); exch_used, ha->base_qpair->fwres.exch_limit);
if (ql2xenforce_iocb_limit == 2) { if (ql2xenforce_iocb_limit == 2) {

View File

@ -1957,9 +1957,6 @@ qla2xxx_slave_configure(struct scsi_device *sdev)
scsi_qla_host_t *vha = shost_priv(sdev->host); scsi_qla_host_t *vha = shost_priv(sdev->host);
struct req_que *req = vha->req; struct req_que *req = vha->req;
if (IS_T10_PI_CAPABLE(vha->hw))
blk_queue_update_dma_alignment(sdev->request_queue, 0x7);
scsi_change_queue_depth(sdev, req->max_q_depth); scsi_change_queue_depth(sdev, req->max_q_depth);
return 0; return 0;
} }
@ -3575,6 +3572,9 @@ skip_dpc:
QLA_SG_ALL : 128; QLA_SG_ALL : 128;
} }
if (IS_T10_PI_CAPABLE(base_vha->hw))
host->dma_alignment = 0x7;
ret = scsi_add_host(host, &pdev->dev); ret = scsi_add_host(host, &pdev->dev);
if (ret) if (ret)
goto probe_failed; goto probe_failed;
@ -8156,9 +8156,6 @@ MODULE_DEVICE_TABLE(pci, qla2xxx_pci_tbl);
static struct pci_driver qla2xxx_pci_driver = { static struct pci_driver qla2xxx_pci_driver = {
.name = QLA2XXX_DRIVER_NAME, .name = QLA2XXX_DRIVER_NAME,
.driver = {
.owner = THIS_MODULE,
},
.id_table = qla2xxx_pci_tbl, .id_table = qla2xxx_pci_tbl,
.probe = qla2x00_probe_one, .probe = qla2x00_probe_one,
.remove = qla2x00_remove_one, .remove = qla2x00_remove_one,

Some files were not shown because too many files have changed in this diff.