for-5.11/drivers-2020-12-14

-----BEGIN PGP SIGNATURE-----
 
 iQJEBAABCAAuFiEEwPw5LcreJtl1+l5K99NY+ylx4KYFAl/XgdYQHGF4Ym9lQGtl
 cm5lbC5kawAKCRD301j7KXHgpjTBD/4me2TNvGOogbcL0b1leAotndJ7spI/IcFM
 NUMNy3pOGuRBcRjwle85xq44puAjlNkZE2LLatem5sT7ZvS+8lPNnOIoTYgfaCjt
 PhKx2sKlLumVm3BwymYAPcPtke4fikGG15Mwu5nX1oOehmyGrjObGAr3Lo6gexCT
 tQoCOczVqaTsV+iTXrLlmgEgs07J9Tm93uh2cNR8Jgroxb8ivuWeUq4YgbV4kWk+
 Y8XvOyVE/yba0vQf5/hHtWuVoC6RdELnqZ6NCkcP/EicdBecwk1GMJAej1S3zPS1
 0BT7GSFTpm3YUHcygD6LRmRg4I/BmWDTDtMi84+jLat6VvSG1HwIm//qHiCJh3ku
 SlvFZENIWAv5LP92x2vlR5Lt7uE3GK2V/5Pxt2fekyzCth6mzu+hLH4CBPQ3xgyd
 E1JqIQ/ilbXstp+EYoivV5x8yltZQnKEZRopws0EOqj1LsmDPj9XT1wzE9RnB0o+
 PWu/DNhQFhhcmP7Z8uLgPiKIVpyGs+vjxiJLlTtGDFTCy6M5JbcgzGkEkSmnybxH
 7lSanjpLt1dWj85FBMc6fNtJkv2rBPfb4+j0d1kZ45Dzcr4umirGIh7wtCHcgc83
 brmXSt29hlKHseSHMMuNWK8haXcgAE7gq9tD8GZ/kzM7+vkmLLxHJa22Qhq5rp4w
 URPeaBaQJw==
 =ayp2
 -----END PGP SIGNATURE-----

Merge tag 'for-5.11/drivers-2020-12-14' of git://git.kernel.dk/linux-block

Pull block driver updates from Jens Axboe:
 "Nothing major in here:

   - NVMe pull request from Christoph:
        - nvmet passthrough improvements (Chaitanya Kulkarni)
        - fcloop error injection support (James Smart)
        - read-only support for zoned namespaces without Zone Append
          (Javier González)
        - improve some error messages (Minwoo Im)
        - reject I/O to offline fabrics namespaces (Victor Gladkov)
        - PCI queue allocation cleanups (Niklas Schnelle)
        - remove an unused allocation in nvmet (Amit Engel)
        - a Kconfig spelling fix (Colin Ian King)
        - nvme_req_qid simplification (Baolin Wang)

   - MD pull request from Song:
        - Fix race condition in md_ioctl() (Dae R. Jeong)
        - Initialize read_slot properly for raid10 (Kevin Vigor)
        - Code cleanup (Pankaj Gupta)
        - md-cluster resync/reshape fix (Zhao Heming)

   - Move null_blk into its own directory (Damien Le Moal)

   - null_blk zone and discard improvements (Damien Le Moal)

   - bcache race fix (Dongsheng Yang)

   - Set of rnbd fixes/improvements (Gioh Kim, Guoqing Jiang, Jack Wang,
     Lutz Pogrell, Md Haris Iqbal)

   - lightnvm NULL pointer deref fix (tangzhenhao)

   - sr in_interrupt() removal (Sebastian Andrzej Siewior)

   - FC endpoint security support for s390/dasd (Jan Höppner, Sebastian
     Ott, Vineeth Vijayan). From the s390 arch guys, arch bits included
     as it made it easier for them to funnel the feature through the
     block driver tree.

   - Follow-up fixes (Colin Ian King)"

* tag 'for-5.11/drivers-2020-12-14' of git://git.kernel.dk/linux-block: (64 commits)
  block: drop dead assignments in loop_init()
  sr: Remove in_interrupt() usage in sr_init_command().
  sr: Switch the sector size back to 2048 if sr_read_sector() changed it.
  cdrom: Reset sector_size back it is not 2048.
  drivers/lightnvm: fix a null-ptr-deref bug in pblk-core.c
  null_blk: Move driver into its own directory
  null_blk: Allow controlling max_hw_sectors limit
  null_blk: discard zones on reset
  null_blk: cleanup discard handling
  null_blk: Improve implicit zone close
  null_blk: improve zone locking
  block: Align max_hw_sectors to logical blocksize
  null_blk: Fail zone append to conventional zones
  null_blk: Fix zone size initialization
  bcache: fix race between setting bdev state to none and new write request direct to backing
  block/rnbd: fix a null pointer dereference on dev->blk_symlink_name
  block/rnbd-clt: Dynamically alloc buffer for pathname & blk_symlink_name
  block/rnbd: call kobject_put in the failure path
  Documentation/ABI/rnbd-srv: add document for force_close
  block/rnbd-srv: close a mapped device from server side.
  ...
commit 69f637c335
Author: Linus Torvalds
Date:   2020-12-16 13:09:32 -08:00

62 changed files with 1395 additions and 485 deletions


@ -66,7 +66,7 @@ Description: Expected format is the following::
The rnbd_server prepends the <device_path> received from client
with <dev_search_path> and tries to open the
<dev_search_path>/<device_path> block device. On success,
a /dev/rnbd<N> device file, a /sys/block/rnbd_client/rnbd<N>/
a /dev/rnbd<N> device file, a /sys/block/rnbd<N>/
directory and an entry in /sys/class/rnbd-client/ctl/devices
will be created.
@ -95,12 +95,12 @@ Description: Expected format is the following::
---------------------------------
After mapping, the device file can be found by:
o The symlink /sys/class/rnbd-client/ctl/devices/<device_id>
o The symlink /sys/class/rnbd-client/ctl/devices/<device_id>@<session_name>
points to /sys/block/<dev-name>. The last part of the symlink destination
is the same as the device name. By extracting the last part of the
path the path to the device /dev/<dev-name> can be built.
* /dev/block/$(cat /sys/class/rnbd-client/ctl/devices/<device_id>/dev)
* /dev/block/$(cat /sys/class/rnbd-client/ctl/devices/<device_id>@<session_name>/dev)
How to find the <device_id> of the device is described on the next
section.
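
For illustration only (not part of this diff): a minimal user-space sketch, in C, of the lookup described above. The "dev1@session1" entry name is a made-up example and error handling is trimmed.

#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* Hypothetical <device_id>@<session_name> entry under the ctl directory. */
	const char *link = "/sys/class/rnbd-client/ctl/devices/dev1@session1";
	char target[PATH_MAX];
	ssize_t n = readlink(link, target, sizeof(target) - 1);
	char *name;

	if (n < 0)
		return 1;
	target[n] = '\0';

	/* The symlink points to /sys/block/<dev-name>; the last path
	 * component is therefore the device name, e.g. "rnbd0". */
	name = strrchr(target, '/');
	printf("/dev/%s\n", name ? name + 1 : target);
	return 0;
}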
@ -110,7 +110,7 @@ Date: Feb 2020
KernelVersion: 5.7
Contact: Jack Wang <jinpu.wang@cloud.ionos.com> Danil Kipnis <danil.kipnis@cloud.ionos.com>
Description: For each device mapped on the client a new symbolic link is created as
/sys/class/rnbd-client/ctl/devices/<device_id>, which points
/sys/class/rnbd-client/ctl/devices/<device_id>@<session_name>, which points
to the block device created by rnbd (/sys/block/rnbd<N>/).
The <device_id> of each device is created as follows:


@ -48,3 +48,11 @@ Date: Feb 2020
KernelVersion: 5.7
Contact: Jack Wang <jinpu.wang@cloud.ionos.com> Danil Kipnis <danil.kipnis@cloud.ionos.com>
Description: Contains the device access mode: ro, rw or migration.
What: /sys/class/rnbd-server/ctl/devices/<device_name>/sessions/<session-name>/force_close
Date: Nov 2020
KernelVersion: 5.10
Contact: Jack Wang <jinpu.wang@cloud.ionos.com> Danil Kipnis <danil.kipnis@cloud.ionos.com>
Description: Write "1" to the file to close the device on the server side. Please
note that the client-side device will not be closed; read or
write to the device will get -ENOTCONN.
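
As a sketch of the interface described above (the DEV and SESS path components are placeholders, not taken from the diff), the attribute can be driven from user space as follows; echoing "1" into the file from a shell is equivalent.

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	/* DEV and SESS stand in for a real <device_name> and <session-name>. */
	const char *path =
		"/sys/class/rnbd-server/ctl/devices/DEV/sessions/SESS/force_close";
	int fd = open(path, O_WRONLY);

	if (fd < 0)
		return 1;
	/* Writing "1" asks the server to force-close the mapped device. */
	if (write(fd, "1", 1) != 1) {
		close(fd);
		return 1;
	}
	return close(fd) ? 1 : 0;
}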


@ -104,6 +104,8 @@ struct ccw_device {
was successfully verified. */
#define PE_PATHGROUP_ESTABLISHED 0x4 /* A pathgroup was reset and had
to be established again. */
#define PE_PATH_FCES_EVENT 0x8 /* The FCES Status of a path has
* changed. */
/*
* Possible CIO actions triggered by the unit check handler.


@ -373,5 +373,6 @@ int chsc_sstpc(void *page, unsigned int op, u16 ctrl, u64 *clock_delta);
int chsc_sstpi(void *page, void *result, size_t size);
int chsc_stzi(void *page, void *result, size_t size);
int chsc_sgib(u32 origin);
int chsc_scud(u16 cu, u64 *esm, u8 *esm_valid);
#endif


@ -157,10 +157,16 @@ void blk_queue_max_hw_sectors(struct request_queue *q, unsigned int max_hw_secto
__func__, max_hw_sectors);
}
max_hw_sectors = round_down(max_hw_sectors,
limits->logical_block_size >> SECTOR_SHIFT);
limits->max_hw_sectors = max_hw_sectors;
max_sectors = min_not_zero(max_hw_sectors, limits->max_dev_sectors);
max_sectors = min_t(unsigned int, max_sectors, BLK_DEF_MAX_SECTORS);
max_sectors = round_down(max_sectors,
limits->logical_block_size >> SECTOR_SHIFT);
limits->max_sectors = max_sectors;
q->backing_dev_info->io_pages = max_sectors >> (PAGE_SHIFT - 9);
}
EXPORT_SYMBOL(blk_queue_max_hw_sectors);
@ -321,13 +327,20 @@ EXPORT_SYMBOL(blk_queue_max_segment_size);
**/
void blk_queue_logical_block_size(struct request_queue *q, unsigned int size)
{
q->limits.logical_block_size = size;
struct queue_limits *limits = &q->limits;
if (q->limits.physical_block_size < size)
q->limits.physical_block_size = size;
limits->logical_block_size = size;
if (q->limits.io_min < q->limits.physical_block_size)
q->limits.io_min = q->limits.physical_block_size;
if (limits->physical_block_size < size)
limits->physical_block_size = size;
if (limits->io_min < limits->physical_block_size)
limits->io_min = limits->physical_block_size;
limits->max_hw_sectors =
round_down(limits->max_hw_sectors, size >> SECTOR_SHIFT);
limits->max_sectors =
round_down(limits->max_sectors, size >> SECTOR_SHIFT);
}
EXPORT_SYMBOL(blk_queue_logical_block_size);
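
To make the effect of the new rounding concrete, here is a small standalone sketch; it is not kernel code (round_down() and SECTOR_SHIFT are re-defined locally) and the example values are made up.

#include <stdio.h>

#define SECTOR_SHIFT 9
/* Same result as the kernel helper for power-of-two alignment. */
#define round_down(x, y) ((x) & ~((y) - 1))

int main(void)
{
	unsigned int logical_block_size = 4096;	/* bytes */
	unsigned int max_hw_sectors = 255;	/* in 512B sectors */
	unsigned int lbs_sectors = logical_block_size >> SECTOR_SHIFT; /* 8 */

	/* 255 is not a multiple of 8, so the limit drops to 248 sectors,
	 * i.e. a whole number of 4096B logical blocks. */
	printf("%u\n", round_down(max_hw_sectors, lbs_sectors));
	return 0;
}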


@ -90,18 +90,6 @@ static inline bool bvec_gap_to_prev(struct request_queue *q,
return __bvec_gap_to_prev(q, bprv, offset);
}
static inline void blk_rq_bio_prep(struct request *rq, struct bio *bio,
unsigned int nr_segs)
{
rq->nr_phys_segments = nr_segs;
rq->__data_len = bio->bi_iter.bi_size;
rq->bio = rq->biotail = bio;
rq->ioprio = bio_prio(bio);
if (bio->bi_disk)
rq->rq_disk = bio->bi_disk;
}
#ifdef CONFIG_BLK_DEV_INTEGRITY
void blk_flush_integrity(void);
bool __bio_integrity_endio(struct bio *);


@ -16,13 +16,7 @@ menuconfig BLK_DEV
if BLK_DEV
config BLK_DEV_NULL_BLK
tristate "Null test block driver"
select CONFIGFS_FS
config BLK_DEV_NULL_BLK_FAULT_INJECTION
bool "Support fault injection for Null test block driver"
depends on BLK_DEV_NULL_BLK && FAULT_INJECTION
source "drivers/block/null_blk/Kconfig"
config BLK_DEV_FD
tristate "Normal floppy disk support"


@ -41,12 +41,7 @@ obj-$(CONFIG_BLK_DEV_RSXX) += rsxx/
obj-$(CONFIG_ZRAM) += zram/
obj-$(CONFIG_BLK_DEV_RNBD) += rnbd/
obj-$(CONFIG_BLK_DEV_NULL_BLK) += null_blk.o
null_blk-objs := null_blk_main.o
ifeq ($(CONFIG_BLK_DEV_ZONED), y)
null_blk-$(CONFIG_TRACING) += null_blk_trace.o
endif
null_blk-$(CONFIG_BLK_DEV_ZONED) += null_blk_zoned.o
obj-$(CONFIG_BLK_DEV_NULL_BLK) += null_blk/
skd-y := skd_main.o
swim_mod-y := swim.o swim_asm.o


@ -2304,7 +2304,6 @@ MODULE_ALIAS("devname:loop-control");
static int __init loop_init(void)
{
int i, nr;
unsigned long range;
struct loop_device *lo;
int err;
@ -2341,13 +2340,10 @@ static int __init loop_init(void)
* /dev/loop-control interface, or be instantiated by accessing
* a 'dead' device node.
*/
if (max_loop) {
if (max_loop)
nr = max_loop;
range = max_loop << part_shift;
} else {
else
nr = CONFIG_BLK_DEV_LOOP_MIN_COUNT;
range = 1UL << MINORBITS;
}
err = misc_register(&loop_misc);
if (err < 0)


@ -0,0 +1,12 @@
# SPDX-License-Identifier: GPL-2.0
#
# Null block device driver configuration
#
config BLK_DEV_NULL_BLK
tristate "Null test block driver"
select CONFIGFS_FS
config BLK_DEV_NULL_BLK_FAULT_INJECTION
bool "Support fault injection for Null test block driver"
depends on BLK_DEV_NULL_BLK && FAULT_INJECTION


@ -0,0 +1,11 @@
# SPDX-License-Identifier: GPL-2.0
# needed for trace events
ccflags-y += -I$(src)
obj-$(CONFIG_BLK_DEV_NULL_BLK) += null_blk.o
null_blk-objs := main.o
ifeq ($(CONFIG_BLK_DEV_ZONED), y)
null_blk-$(CONFIG_TRACING) += trace.o
endif
null_blk-$(CONFIG_BLK_DEV_ZONED) += zoned.o


@ -152,6 +152,10 @@ static int g_bs = 512;
module_param_named(bs, g_bs, int, 0444);
MODULE_PARM_DESC(bs, "Block size (in bytes)");
static int g_max_sectors;
module_param_named(max_sectors, g_max_sectors, int, 0444);
MODULE_PARM_DESC(max_sectors, "Maximum size of a command (in 512B sectors)");
static unsigned int nr_devices = 1;
module_param(nr_devices, uint, 0444);
MODULE_PARM_DESC(nr_devices, "Number of devices to register");
@ -346,6 +350,7 @@ NULLB_DEVICE_ATTR(submit_queues, uint, nullb_apply_submit_queues);
NULLB_DEVICE_ATTR(home_node, uint, NULL);
NULLB_DEVICE_ATTR(queue_mode, uint, NULL);
NULLB_DEVICE_ATTR(blocksize, uint, NULL);
NULLB_DEVICE_ATTR(max_sectors, uint, NULL);
NULLB_DEVICE_ATTR(irqmode, uint, NULL);
NULLB_DEVICE_ATTR(hw_queue_depth, uint, NULL);
NULLB_DEVICE_ATTR(index, uint, NULL);
@ -463,6 +468,7 @@ static struct configfs_attribute *nullb_device_attrs[] = {
&nullb_device_attr_home_node,
&nullb_device_attr_queue_mode,
&nullb_device_attr_blocksize,
&nullb_device_attr_max_sectors,
&nullb_device_attr_irqmode,
&nullb_device_attr_hw_queue_depth,
&nullb_device_attr_index,
@ -533,7 +539,7 @@ nullb_group_drop_item(struct config_group *group, struct config_item *item)
static ssize_t memb_group_features_show(struct config_item *item, char *page)
{
return snprintf(page, PAGE_SIZE,
"memory_backed,discard,bandwidth,cache,badblocks,zoned,zone_size,zone_capacity,zone_nr_conv,zone_max_open,zone_max_active\n");
"memory_backed,discard,bandwidth,cache,badblocks,zoned,zone_size,zone_capacity,zone_nr_conv,zone_max_open,zone_max_active,blocksize,max_sectors\n");
}
CONFIGFS_ATTR_RO(memb_group_, features);
@ -588,6 +594,7 @@ static struct nullb_device *null_alloc_dev(void)
dev->home_node = g_home_node;
dev->queue_mode = g_queue_mode;
dev->blocksize = g_bs;
dev->max_sectors = g_max_sectors;
dev->irqmode = g_irqmode;
dev->hw_queue_depth = g_hw_queue_depth;
dev->blocking = g_blocking;
@ -1076,13 +1083,16 @@ static void nullb_fill_pattern(struct nullb *nullb, struct page *page,
kunmap_atomic(dst);
}
static void null_handle_discard(struct nullb *nullb, sector_t sector, size_t n)
blk_status_t null_handle_discard(struct nullb_device *dev,
sector_t sector, sector_t nr_sectors)
{
struct nullb *nullb = dev->nullb;
size_t n = nr_sectors << SECTOR_SHIFT;
size_t temp;
spin_lock_irq(&nullb->lock);
while (n > 0) {
temp = min_t(size_t, n, nullb->dev->blocksize);
temp = min_t(size_t, n, dev->blocksize);
null_free_sector(nullb, sector, false);
if (null_cache_active(nullb))
null_free_sector(nullb, sector, true);
@ -1090,6 +1100,8 @@ static void null_handle_discard(struct nullb *nullb, sector_t sector, size_t n)
n -= temp;
}
spin_unlock_irq(&nullb->lock);
return BLK_STS_OK;
}
static int null_handle_flush(struct nullb *nullb)
@ -1149,17 +1161,10 @@ static int null_handle_rq(struct nullb_cmd *cmd)
struct nullb *nullb = cmd->nq->dev->nullb;
int err;
unsigned int len;
sector_t sector;
sector_t sector = blk_rq_pos(rq);
struct req_iterator iter;
struct bio_vec bvec;
sector = blk_rq_pos(rq);
if (req_op(rq) == REQ_OP_DISCARD) {
null_handle_discard(nullb, sector, blk_rq_bytes(rq));
return 0;
}
spin_lock_irq(&nullb->lock);
rq_for_each_segment(bvec, rq, iter) {
len = bvec.bv_len;
@ -1183,18 +1188,10 @@ static int null_handle_bio(struct nullb_cmd *cmd)
struct nullb *nullb = cmd->nq->dev->nullb;
int err;
unsigned int len;
sector_t sector;
sector_t sector = bio->bi_iter.bi_sector;
struct bio_vec bvec;
struct bvec_iter iter;
sector = bio->bi_iter.bi_sector;
if (bio_op(bio) == REQ_OP_DISCARD) {
null_handle_discard(nullb, sector,
bio_sectors(bio) << SECTOR_SHIFT);
return 0;
}
spin_lock_irq(&nullb->lock);
bio_for_each_segment(bvec, bio, iter) {
len = bvec.bv_len;
@ -1263,11 +1260,16 @@ static inline blk_status_t null_handle_badblocks(struct nullb_cmd *cmd,
}
static inline blk_status_t null_handle_memory_backed(struct nullb_cmd *cmd,
enum req_opf op)
enum req_opf op,
sector_t sector,
sector_t nr_sectors)
{
struct nullb_device *dev = cmd->nq->dev;
int err;
if (op == REQ_OP_DISCARD)
return null_handle_discard(dev, sector, nr_sectors);
if (dev->queue_mode == NULL_Q_BIO)
err = null_handle_bio(cmd);
else
@ -1343,7 +1345,7 @@ blk_status_t null_process_cmd(struct nullb_cmd *cmd,
}
if (dev->memory_backed)
return null_handle_memory_backed(cmd, op);
return null_handle_memory_backed(cmd, op, sector, nr_sectors);
return BLK_STS_OK;
}
@ -1589,6 +1591,12 @@ static void null_config_discard(struct nullb *nullb)
if (nullb->dev->discard == false)
return;
if (!nullb->dev->memory_backed) {
nullb->dev->discard = false;
pr_info("discard option is ignored without memory backing\n");
return;
}
if (nullb->dev->zoned) {
nullb->dev->discard = false;
pr_info("discard option is ignored in zoned mode\n");
@ -1866,6 +1874,11 @@ static int null_add_dev(struct nullb_device *dev)
blk_queue_logical_block_size(nullb->q, dev->blocksize);
blk_queue_physical_block_size(nullb->q, dev->blocksize);
if (!dev->max_sectors)
dev->max_sectors = queue_max_hw_sectors(nullb->q);
dev->max_sectors = min_t(unsigned int, dev->max_sectors,
BLK_DEF_MAX_SECTORS);
blk_queue_max_hw_sectors(nullb->q, dev->max_sectors);
null_config_discard(nullb);
@ -1909,6 +1922,12 @@ static int __init null_init(void)
g_bs = PAGE_SIZE;
}
if (g_max_sectors > BLK_DEF_MAX_SECTORS) {
pr_warn("invalid max sectors\n");
pr_warn("defaults max sectors to %u\n", BLK_DEF_MAX_SECTORS);
g_max_sectors = BLK_DEF_MAX_SECTORS;
}
if (g_home_node != NUMA_NO_NODE && g_home_node >= nr_online_nodes) {
pr_err("invalid home_node value\n");
g_home_node = NUMA_NO_NODE;


@ -12,6 +12,8 @@
#include <linux/configfs.h>
#include <linux/badblocks.h>
#include <linux/fault-inject.h>
#include <linux/spinlock.h>
#include <linux/mutex.h>
struct nullb_cmd {
struct request *rq;
@ -32,6 +34,26 @@ struct nullb_queue {
struct nullb_cmd *cmds;
};
struct nullb_zone {
/*
* Zone lock to prevent concurrent modification of a zone write
* pointer position and condition: with memory backing, a write
* command execution may sleep on memory allocation. For this case,
* use mutex as the zone lock. Otherwise, use the spinlock for
* locking the zone.
*/
union {
spinlock_t spinlock;
struct mutex mutex;
};
enum blk_zone_type type;
enum blk_zone_cond cond;
sector_t start;
sector_t wp;
unsigned int len;
unsigned int capacity;
};
struct nullb_device {
struct nullb *nullb;
struct config_item item;
@ -45,10 +67,11 @@ struct nullb_device {
unsigned int nr_zones_imp_open;
unsigned int nr_zones_exp_open;
unsigned int nr_zones_closed;
struct blk_zone *zones;
unsigned int imp_close_zone_no;
struct nullb_zone *zones;
sector_t zone_size_sects;
spinlock_t zone_lock;
unsigned long *zone_locks;
bool need_zone_res_mgmt;
spinlock_t zone_res_lock;
unsigned long size; /* device size in MB */
unsigned long completion_nsec; /* time in ns to complete a request */
@ -62,6 +85,7 @@ struct nullb_device {
unsigned int home_node; /* home node for the device */
unsigned int queue_mode; /* block interface */
unsigned int blocksize; /* block size */
unsigned int max_sectors; /* Max sectors per command */
unsigned int irqmode; /* IRQ completion handler */
unsigned int hw_queue_depth; /* queue depth */
unsigned int index; /* index of the disk, only valid with a disk */
@ -93,6 +117,8 @@ struct nullb {
char disk_name[DISK_NAME_LEN];
};
blk_status_t null_handle_discard(struct nullb_device *dev, sector_t sector,
sector_t nr_sectors);
blk_status_t null_process_cmd(struct nullb_cmd *cmd,
enum req_opf op, sector_t sector,
unsigned int nr_sectors);


@ -4,7 +4,7 @@
*
* Copyright (C) 2020 Western Digital Corporation or its affiliates.
*/
#include "null_blk_trace.h"
#include "trace.h"
/*
* Helper to use for all null_blk traces to extract disk name.


@ -73,7 +73,7 @@ TRACE_EVENT(nullb_report_zones,
#undef TRACE_INCLUDE_PATH
#define TRACE_INCLUDE_PATH .
#undef TRACE_INCLUDE_FILE
#define TRACE_INCLUDE_FILE null_blk_trace
#define TRACE_INCLUDE_FILE trace
/* This part must be outside protection */
#include <trace/define_trace.h>


@ -4,19 +4,58 @@
#include "null_blk.h"
#define CREATE_TRACE_POINTS
#include "null_blk_trace.h"
#include "trace.h"
/* zone_size in MBs to sectors. */
#define ZONE_SIZE_SHIFT 11
#define MB_TO_SECTS(mb) (((sector_t)mb * SZ_1M) >> SECTOR_SHIFT)
static inline unsigned int null_zone_no(struct nullb_device *dev, sector_t sect)
{
return sect >> ilog2(dev->zone_size_sects);
}
static inline void null_lock_zone_res(struct nullb_device *dev)
{
if (dev->need_zone_res_mgmt)
spin_lock_irq(&dev->zone_res_lock);
}
static inline void null_unlock_zone_res(struct nullb_device *dev)
{
if (dev->need_zone_res_mgmt)
spin_unlock_irq(&dev->zone_res_lock);
}
static inline void null_init_zone_lock(struct nullb_device *dev,
struct nullb_zone *zone)
{
if (!dev->memory_backed)
spin_lock_init(&zone->spinlock);
else
mutex_init(&zone->mutex);
}
static inline void null_lock_zone(struct nullb_device *dev,
struct nullb_zone *zone)
{
if (!dev->memory_backed)
spin_lock_irq(&zone->spinlock);
else
mutex_lock(&zone->mutex);
}
static inline void null_unlock_zone(struct nullb_device *dev,
struct nullb_zone *zone)
{
if (!dev->memory_backed)
spin_unlock_irq(&zone->spinlock);
else
mutex_unlock(&zone->mutex);
}
int null_init_zoned_dev(struct nullb_device *dev, struct request_queue *q)
{
sector_t dev_size = (sector_t)dev->size * 1024 * 1024;
sector_t dev_capacity_sects, zone_capacity_sects;
struct nullb_zone *zone;
sector_t sector = 0;
unsigned int i;
@ -38,29 +77,19 @@ int null_init_zoned_dev(struct nullb_device *dev, struct request_queue *q)
return -EINVAL;
}
dev->zone_size_sects = dev->zone_size << ZONE_SIZE_SHIFT;
dev->nr_zones = dev_size >>
(SECTOR_SHIFT + ilog2(dev->zone_size_sects));
dev->zones = kvmalloc_array(dev->nr_zones, sizeof(struct blk_zone),
GFP_KERNEL | __GFP_ZERO);
zone_capacity_sects = MB_TO_SECTS(dev->zone_capacity);
dev_capacity_sects = MB_TO_SECTS(dev->size);
dev->zone_size_sects = MB_TO_SECTS(dev->zone_size);
dev->nr_zones = dev_capacity_sects >> ilog2(dev->zone_size_sects);
if (dev_capacity_sects & (dev->zone_size_sects - 1))
dev->nr_zones++;
dev->zones = kvmalloc_array(dev->nr_zones, sizeof(struct nullb_zone),
GFP_KERNEL | __GFP_ZERO);
if (!dev->zones)
return -ENOMEM;
/*
* With memory backing, the zone_lock spinlock needs to be temporarily
* released to avoid scheduling in atomic context. To guarantee zone
* information protection, use a bitmap to lock zones with
* wait_on_bit_lock_io(). Sleeping on the lock is OK as memory backing
* implies that the queue is marked with BLK_MQ_F_BLOCKING.
*/
spin_lock_init(&dev->zone_lock);
if (dev->memory_backed) {
dev->zone_locks = bitmap_zalloc(dev->nr_zones, GFP_KERNEL);
if (!dev->zone_locks) {
kvfree(dev->zones);
return -ENOMEM;
}
}
spin_lock_init(&dev->zone_res_lock);
if (dev->zone_nr_conv >= dev->nr_zones) {
dev->zone_nr_conv = dev->nr_zones - 1;
@ -83,10 +112,13 @@ int null_init_zoned_dev(struct nullb_device *dev, struct request_queue *q)
dev->zone_max_open = 0;
pr_info("zone_max_open limit disabled, limit >= zone count\n");
}
dev->need_zone_res_mgmt = dev->zone_max_active || dev->zone_max_open;
dev->imp_close_zone_no = dev->zone_nr_conv;
for (i = 0; i < dev->zone_nr_conv; i++) {
struct blk_zone *zone = &dev->zones[i];
zone = &dev->zones[i];
null_init_zone_lock(dev, zone);
zone->start = sector;
zone->len = dev->zone_size_sects;
zone->capacity = zone->len;
@ -98,11 +130,16 @@ int null_init_zoned_dev(struct nullb_device *dev, struct request_queue *q)
}
for (i = dev->zone_nr_conv; i < dev->nr_zones; i++) {
struct blk_zone *zone = &dev->zones[i];
zone = &dev->zones[i];
null_init_zone_lock(dev, zone);
zone->start = zone->wp = sector;
zone->len = dev->zone_size_sects;
zone->capacity = dev->zone_capacity << ZONE_SIZE_SHIFT;
if (zone->start + dev->zone_size_sects > dev_capacity_sects)
zone->len = dev_capacity_sects - zone->start;
else
zone->len = dev->zone_size_sects;
zone->capacity =
min_t(sector_t, zone->len, zone_capacity_sects);
zone->type = BLK_ZONE_TYPE_SEQWRITE_REQ;
zone->cond = BLK_ZONE_COND_EMPTY;
@ -140,32 +177,17 @@ int null_register_zoned_dev(struct nullb *nullb)
void null_free_zoned_dev(struct nullb_device *dev)
{
bitmap_free(dev->zone_locks);
kvfree(dev->zones);
}
static inline void null_lock_zone(struct nullb_device *dev, unsigned int zno)
{
if (dev->memory_backed)
wait_on_bit_lock_io(dev->zone_locks, zno, TASK_UNINTERRUPTIBLE);
spin_lock_irq(&dev->zone_lock);
}
static inline void null_unlock_zone(struct nullb_device *dev, unsigned int zno)
{
spin_unlock_irq(&dev->zone_lock);
if (dev->memory_backed)
clear_and_wake_up_bit(zno, dev->zone_locks);
}
int null_report_zones(struct gendisk *disk, sector_t sector,
unsigned int nr_zones, report_zones_cb cb, void *data)
{
struct nullb *nullb = disk->private_data;
struct nullb_device *dev = nullb->dev;
unsigned int first_zone, i, zno;
struct blk_zone zone;
unsigned int first_zone, i;
struct nullb_zone *zone;
struct blk_zone blkz;
int error;
first_zone = null_zone_no(dev, sector);
@ -175,19 +197,25 @@ int null_report_zones(struct gendisk *disk, sector_t sector,
nr_zones = min(nr_zones, dev->nr_zones - first_zone);
trace_nullb_report_zones(nullb, nr_zones);
zno = first_zone;
for (i = 0; i < nr_zones; i++, zno++) {
memset(&blkz, 0, sizeof(struct blk_zone));
zone = &dev->zones[first_zone];
for (i = 0; i < nr_zones; i++, zone++) {
/*
* Stacked DM target drivers will remap the zone information by
* modifying the zone information passed to the report callback.
* So use a local copy to avoid corruption of the device zone
* array.
*/
null_lock_zone(dev, zno);
memcpy(&zone, &dev->zones[zno], sizeof(struct blk_zone));
null_unlock_zone(dev, zno);
null_lock_zone(dev, zone);
blkz.start = zone->start;
blkz.len = zone->len;
blkz.wp = zone->wp;
blkz.type = zone->type;
blkz.cond = zone->cond;
blkz.capacity = zone->capacity;
null_unlock_zone(dev, zone);
error = cb(&zone, i, data);
error = cb(&blkz, i, data);
if (error)
return error;
}
@ -203,7 +231,7 @@ size_t null_zone_valid_read_len(struct nullb *nullb,
sector_t sector, unsigned int len)
{
struct nullb_device *dev = nullb->dev;
struct blk_zone *zone = &dev->zones[null_zone_no(dev, sector)];
struct nullb_zone *zone = &dev->zones[null_zone_no(dev, sector)];
unsigned int nr_sectors = len >> SECTOR_SHIFT;
/* Read must be below the write pointer position */
@ -217,11 +245,9 @@ size_t null_zone_valid_read_len(struct nullb *nullb,
return (zone->wp - sector) << SECTOR_SHIFT;
}
static blk_status_t null_close_zone(struct nullb_device *dev, struct blk_zone *zone)
static blk_status_t __null_close_zone(struct nullb_device *dev,
struct nullb_zone *zone)
{
if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL)
return BLK_STS_IOERR;
switch (zone->cond) {
case BLK_ZONE_COND_CLOSED:
/* close operation on closed is not an error */
@ -248,13 +274,24 @@ static blk_status_t null_close_zone(struct nullb_device *dev, struct blk_zone *z
return BLK_STS_OK;
}
static void null_close_first_imp_zone(struct nullb_device *dev)
static void null_close_imp_open_zone(struct nullb_device *dev)
{
unsigned int i;
struct nullb_zone *zone;
unsigned int zno, i;
zno = dev->imp_close_zone_no;
if (zno >= dev->nr_zones)
zno = dev->zone_nr_conv;
for (i = dev->zone_nr_conv; i < dev->nr_zones; i++) {
if (dev->zones[i].cond == BLK_ZONE_COND_IMP_OPEN) {
null_close_zone(dev, &dev->zones[i]);
zone = &dev->zones[zno];
zno++;
if (zno >= dev->nr_zones)
zno = dev->zone_nr_conv;
if (zone->cond == BLK_ZONE_COND_IMP_OPEN) {
__null_close_zone(dev, zone);
dev->imp_close_zone_no = zno;
return;
}
}
@ -282,7 +319,7 @@ static blk_status_t null_check_open(struct nullb_device *dev)
if (dev->nr_zones_imp_open) {
if (null_check_active(dev) == BLK_STS_OK) {
null_close_first_imp_zone(dev);
null_close_imp_open_zone(dev);
return BLK_STS_OK;
}
}
@ -303,7 +340,8 @@ static blk_status_t null_check_open(struct nullb_device *dev)
* it is not certain that closing an implicit open zone will allow a new zone
* to be opened, since we might already be at the active limit capacity.
*/
static blk_status_t null_check_zone_resources(struct nullb_device *dev, struct blk_zone *zone)
static blk_status_t null_check_zone_resources(struct nullb_device *dev,
struct nullb_zone *zone)
{
blk_status_t ret;
@ -327,34 +365,23 @@ static blk_status_t null_zone_write(struct nullb_cmd *cmd, sector_t sector,
{
struct nullb_device *dev = cmd->nq->dev;
unsigned int zno = null_zone_no(dev, sector);
struct blk_zone *zone = &dev->zones[zno];
struct nullb_zone *zone = &dev->zones[zno];
blk_status_t ret;
trace_nullb_zone_op(cmd, zno, zone->cond);
if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL)
if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL) {
if (append)
return BLK_STS_IOERR;
return null_process_cmd(cmd, REQ_OP_WRITE, sector, nr_sectors);
}
null_lock_zone(dev, zno);
null_lock_zone(dev, zone);
switch (zone->cond) {
case BLK_ZONE_COND_FULL:
if (zone->cond == BLK_ZONE_COND_FULL) {
/* Cannot write to a full zone */
ret = BLK_STS_IOERR;
goto unlock;
case BLK_ZONE_COND_EMPTY:
case BLK_ZONE_COND_CLOSED:
ret = null_check_zone_resources(dev, zone);
if (ret != BLK_STS_OK)
goto unlock;
break;
case BLK_ZONE_COND_IMP_OPEN:
case BLK_ZONE_COND_EXP_OPEN:
break;
default:
/* Invalid zone condition */
ret = BLK_STS_IOERR;
goto unlock;
}
/*
@ -379,60 +406,69 @@ static blk_status_t null_zone_write(struct nullb_cmd *cmd, sector_t sector,
goto unlock;
}
if (zone->cond == BLK_ZONE_COND_CLOSED) {
dev->nr_zones_closed--;
dev->nr_zones_imp_open++;
} else if (zone->cond == BLK_ZONE_COND_EMPTY) {
dev->nr_zones_imp_open++;
if (zone->cond == BLK_ZONE_COND_CLOSED ||
zone->cond == BLK_ZONE_COND_EMPTY) {
null_lock_zone_res(dev);
ret = null_check_zone_resources(dev, zone);
if (ret != BLK_STS_OK) {
null_unlock_zone_res(dev);
goto unlock;
}
if (zone->cond == BLK_ZONE_COND_CLOSED) {
dev->nr_zones_closed--;
dev->nr_zones_imp_open++;
} else if (zone->cond == BLK_ZONE_COND_EMPTY) {
dev->nr_zones_imp_open++;
}
if (zone->cond != BLK_ZONE_COND_EXP_OPEN)
zone->cond = BLK_ZONE_COND_IMP_OPEN;
null_unlock_zone_res(dev);
}
if (zone->cond != BLK_ZONE_COND_EXP_OPEN)
zone->cond = BLK_ZONE_COND_IMP_OPEN;
/*
* Memory backing allocation may sleep: release the zone_lock spinlock
* to avoid scheduling in atomic context. Zone operation atomicity is
* still guaranteed through the zone_locks bitmap.
*/
if (dev->memory_backed)
spin_unlock_irq(&dev->zone_lock);
ret = null_process_cmd(cmd, REQ_OP_WRITE, sector, nr_sectors);
if (dev->memory_backed)
spin_lock_irq(&dev->zone_lock);
if (ret != BLK_STS_OK)
goto unlock;
zone->wp += nr_sectors;
if (zone->wp == zone->start + zone->capacity) {
null_lock_zone_res(dev);
if (zone->cond == BLK_ZONE_COND_EXP_OPEN)
dev->nr_zones_exp_open--;
else if (zone->cond == BLK_ZONE_COND_IMP_OPEN)
dev->nr_zones_imp_open--;
zone->cond = BLK_ZONE_COND_FULL;
null_unlock_zone_res(dev);
}
ret = BLK_STS_OK;
unlock:
null_unlock_zone(dev, zno);
null_unlock_zone(dev, zone);
return ret;
}
static blk_status_t null_open_zone(struct nullb_device *dev, struct blk_zone *zone)
static blk_status_t null_open_zone(struct nullb_device *dev,
struct nullb_zone *zone)
{
blk_status_t ret;
blk_status_t ret = BLK_STS_OK;
if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL)
return BLK_STS_IOERR;
null_lock_zone_res(dev);
switch (zone->cond) {
case BLK_ZONE_COND_EXP_OPEN:
/* open operation on exp open is not an error */
return BLK_STS_OK;
goto unlock;
case BLK_ZONE_COND_EMPTY:
ret = null_check_zone_resources(dev, zone);
if (ret != BLK_STS_OK)
return ret;
goto unlock;
break;
case BLK_ZONE_COND_IMP_OPEN:
dev->nr_zones_imp_open--;
@ -440,35 +476,57 @@ static blk_status_t null_open_zone(struct nullb_device *dev, struct blk_zone *zo
case BLK_ZONE_COND_CLOSED:
ret = null_check_zone_resources(dev, zone);
if (ret != BLK_STS_OK)
return ret;
goto unlock;
dev->nr_zones_closed--;
break;
case BLK_ZONE_COND_FULL:
default:
return BLK_STS_IOERR;
ret = BLK_STS_IOERR;
goto unlock;
}
zone->cond = BLK_ZONE_COND_EXP_OPEN;
dev->nr_zones_exp_open++;
return BLK_STS_OK;
unlock:
null_unlock_zone_res(dev);
return ret;
}
static blk_status_t null_finish_zone(struct nullb_device *dev, struct blk_zone *zone)
static blk_status_t null_close_zone(struct nullb_device *dev,
struct nullb_zone *zone)
{
blk_status_t ret;
if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL)
return BLK_STS_IOERR;
null_lock_zone_res(dev);
ret = __null_close_zone(dev, zone);
null_unlock_zone_res(dev);
return ret;
}
static blk_status_t null_finish_zone(struct nullb_device *dev,
struct nullb_zone *zone)
{
blk_status_t ret = BLK_STS_OK;
if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL)
return BLK_STS_IOERR;
null_lock_zone_res(dev);
switch (zone->cond) {
case BLK_ZONE_COND_FULL:
/* finish operation on full is not an error */
return BLK_STS_OK;
goto unlock;
case BLK_ZONE_COND_EMPTY:
ret = null_check_zone_resources(dev, zone);
if (ret != BLK_STS_OK)
return ret;
goto unlock;
break;
case BLK_ZONE_COND_IMP_OPEN:
dev->nr_zones_imp_open--;
@ -479,27 +537,35 @@ static blk_status_t null_finish_zone(struct nullb_device *dev, struct blk_zone *
case BLK_ZONE_COND_CLOSED:
ret = null_check_zone_resources(dev, zone);
if (ret != BLK_STS_OK)
return ret;
goto unlock;
dev->nr_zones_closed--;
break;
default:
return BLK_STS_IOERR;
ret = BLK_STS_IOERR;
goto unlock;
}
zone->cond = BLK_ZONE_COND_FULL;
zone->wp = zone->start + zone->len;
return BLK_STS_OK;
unlock:
null_unlock_zone_res(dev);
return ret;
}
static blk_status_t null_reset_zone(struct nullb_device *dev, struct blk_zone *zone)
static blk_status_t null_reset_zone(struct nullb_device *dev,
struct nullb_zone *zone)
{
if (zone->type == BLK_ZONE_TYPE_CONVENTIONAL)
return BLK_STS_IOERR;
null_lock_zone_res(dev);
switch (zone->cond) {
case BLK_ZONE_COND_EMPTY:
/* reset operation on empty is not an error */
null_unlock_zone_res(dev);
return BLK_STS_OK;
case BLK_ZONE_COND_IMP_OPEN:
dev->nr_zones_imp_open--;
@ -513,12 +579,18 @@ static blk_status_t null_reset_zone(struct nullb_device *dev, struct blk_zone *z
case BLK_ZONE_COND_FULL:
break;
default:
null_unlock_zone_res(dev);
return BLK_STS_IOERR;
}
zone->cond = BLK_ZONE_COND_EMPTY;
zone->wp = zone->start;
null_unlock_zone_res(dev);
if (dev->memory_backed)
return null_handle_discard(dev, zone->start, zone->len);
return BLK_STS_OK;
}
@ -527,19 +599,19 @@ static blk_status_t null_zone_mgmt(struct nullb_cmd *cmd, enum req_opf op,
{
struct nullb_device *dev = cmd->nq->dev;
unsigned int zone_no;
struct blk_zone *zone;
struct nullb_zone *zone;
blk_status_t ret;
size_t i;
if (op == REQ_OP_ZONE_RESET_ALL) {
for (i = dev->zone_nr_conv; i < dev->nr_zones; i++) {
null_lock_zone(dev, i);
zone = &dev->zones[i];
null_lock_zone(dev, zone);
if (zone->cond != BLK_ZONE_COND_EMPTY) {
null_reset_zone(dev, zone);
trace_nullb_zone_op(cmd, i, zone->cond);
}
null_unlock_zone(dev, i);
null_unlock_zone(dev, zone);
}
return BLK_STS_OK;
}
@ -547,7 +619,7 @@ static blk_status_t null_zone_mgmt(struct nullb_cmd *cmd, enum req_opf op,
zone_no = null_zone_no(dev, sector);
zone = &dev->zones[zone_no];
null_lock_zone(dev, zone_no);
null_lock_zone(dev, zone);
switch (op) {
case REQ_OP_ZONE_RESET:
@ -570,7 +642,7 @@ static blk_status_t null_zone_mgmt(struct nullb_cmd *cmd, enum req_opf op,
if (ret == BLK_STS_OK)
trace_nullb_zone_op(cmd, zone_no, zone->cond);
null_unlock_zone(dev, zone_no);
null_unlock_zone(dev, zone);
return ret;
}
@ -578,29 +650,28 @@ static blk_status_t null_zone_mgmt(struct nullb_cmd *cmd, enum req_opf op,
blk_status_t null_process_zoned_cmd(struct nullb_cmd *cmd, enum req_opf op,
sector_t sector, sector_t nr_sectors)
{
struct nullb_device *dev = cmd->nq->dev;
unsigned int zno = null_zone_no(dev, sector);
struct nullb_device *dev;
struct nullb_zone *zone;
blk_status_t sts;
switch (op) {
case REQ_OP_WRITE:
sts = null_zone_write(cmd, sector, nr_sectors, false);
break;
return null_zone_write(cmd, sector, nr_sectors, false);
case REQ_OP_ZONE_APPEND:
sts = null_zone_write(cmd, sector, nr_sectors, true);
break;
return null_zone_write(cmd, sector, nr_sectors, true);
case REQ_OP_ZONE_RESET:
case REQ_OP_ZONE_RESET_ALL:
case REQ_OP_ZONE_OPEN:
case REQ_OP_ZONE_CLOSE:
case REQ_OP_ZONE_FINISH:
sts = null_zone_mgmt(cmd, op, sector);
break;
return null_zone_mgmt(cmd, op, sector);
default:
null_lock_zone(dev, zno);
sts = null_process_cmd(cmd, op, sector, nr_sectors);
null_unlock_zone(dev, zno);
}
dev = cmd->nq->dev;
zone = &dev->zones[null_zone_no(dev, sector)];
return sts;
null_lock_zone(dev, zone);
sts = null_process_cmd(cmd, op, sector, nr_sectors);
null_unlock_zone(dev, zone);
return sts;
}
}


@ -37,7 +37,6 @@ enum {
};
static const unsigned int rnbd_opt_mandatory[] = {
RNBD_OPT_PATH,
RNBD_OPT_DEV_PATH,
RNBD_OPT_SESSNAME,
};
@ -435,6 +434,7 @@ void rnbd_clt_remove_dev_symlink(struct rnbd_clt_dev *dev)
*/
if (strlen(dev->blk_symlink_name) && try_module_get(THIS_MODULE)) {
sysfs_remove_link(rnbd_devs_kobj, dev->blk_symlink_name);
kfree(dev->blk_symlink_name);
module_put(THIS_MODULE);
}
}
@ -451,9 +451,11 @@ static int rnbd_clt_add_dev_kobj(struct rnbd_clt_dev *dev)
ret = kobject_init_and_add(&dev->kobj, &rnbd_dev_ktype, gd_kobj, "%s",
"rnbd");
if (ret)
if (ret) {
rnbd_clt_err(dev, "Failed to create device sysfs dir, err: %d\n",
ret);
kobject_put(&dev->kobj);
}
return ret;
}
@ -481,16 +483,27 @@ static int rnbd_clt_get_path_name(struct rnbd_clt_dev *dev, char *buf,
if (ret >= len)
return -ENAMETOOLONG;
ret = snprintf(buf, len, "%s@%s", buf, dev->sess->sessname);
if (ret >= len)
return -ENAMETOOLONG;
return 0;
}
static int rnbd_clt_add_dev_symlink(struct rnbd_clt_dev *dev)
{
struct kobject *gd_kobj = &disk_to_dev(dev->gd)->kobj;
int ret;
int ret, len;
len = strlen(dev->pathname) + strlen(dev->sess->sessname) + 2;
dev->blk_symlink_name = kzalloc(len, GFP_KERNEL);
if (!dev->blk_symlink_name) {
rnbd_clt_err(dev, "Failed to allocate memory for blk_symlink_name\n");
return -ENOMEM;
}
ret = rnbd_clt_get_path_name(dev, dev->blk_symlink_name,
sizeof(dev->blk_symlink_name));
len);
if (ret) {
rnbd_clt_err(dev, "Failed to get /sys/block symlink path, err: %d\n",
ret);


@ -59,6 +59,7 @@ static void rnbd_clt_put_dev(struct rnbd_clt_dev *dev)
ida_simple_remove(&index_ida, dev->clt_device_id);
mutex_unlock(&ida_lock);
kfree(dev->hw_queues);
kfree(dev->pathname);
rnbd_clt_put_sess(dev->sess);
mutex_destroy(&dev->lock);
kfree(dev);
@ -1192,6 +1193,12 @@ find_and_get_or_create_sess(const char *sessname,
else if (!first)
return sess;
if (!path_cnt) {
pr_err("Session %s not found, and path parameter not given", sessname);
err = -ENXIO;
goto put_sess;
}
rtrs_ops = (struct rtrs_clt_ops) {
.priv = sess,
.link_ev = rnbd_clt_link_ev,
@ -1380,10 +1387,17 @@ static struct rnbd_clt_dev *init_dev(struct rnbd_clt_session *sess,
pathname, sess->sessname, ret);
goto out_queues;
}
dev->pathname = kzalloc(strlen(pathname) + 1, GFP_KERNEL);
if (!dev->pathname) {
ret = -ENOMEM;
goto out_queues;
}
strlcpy(dev->pathname, pathname, strlen(pathname) + 1);
dev->clt_device_id = ret;
dev->sess = sess;
dev->access_mode = access_mode;
strlcpy(dev->pathname, pathname, sizeof(dev->pathname));
mutex_init(&dev->lock);
refcount_set(&dev->refcount, 1);
dev->dev_state = DEV_STATE_INIT;
@ -1403,17 +1417,20 @@ out_alloc:
return ERR_PTR(ret);
}
static bool __exists_dev(const char *pathname)
static bool __exists_dev(const char *pathname, const char *sessname)
{
struct rnbd_clt_session *sess;
struct rnbd_clt_dev *dev;
bool found = false;
list_for_each_entry(sess, &sess_list, list) {
if (sessname && strncmp(sess->sessname, sessname,
sizeof(sess->sessname)))
continue;
mutex_lock(&sess->lock);
list_for_each_entry(dev, &sess->devs_list, list) {
if (!strncmp(dev->pathname, pathname,
sizeof(dev->pathname))) {
if (strlen(dev->pathname) == strlen(pathname) &&
!strcmp(dev->pathname, pathname)) {
found = true;
break;
}
@ -1426,12 +1443,12 @@ static bool __exists_dev(const char *pathname)
return found;
}
static bool exists_devpath(const char *pathname)
static bool exists_devpath(const char *pathname, const char *sessname)
{
bool found;
mutex_lock(&sess_lock);
found = __exists_dev(pathname);
found = __exists_dev(pathname, sessname);
mutex_unlock(&sess_lock);
return found;
@ -1444,7 +1461,7 @@ static bool insert_dev_if_not_exists_devpath(const char *pathname,
bool found;
mutex_lock(&sess_lock);
found = __exists_dev(pathname);
found = __exists_dev(pathname, sess->sessname);
if (!found) {
mutex_lock(&sess->lock);
list_add_tail(&dev->list, &sess->devs_list);
@ -1474,7 +1491,7 @@ struct rnbd_clt_dev *rnbd_clt_map_device(const char *sessname,
struct rnbd_clt_dev *dev;
int ret;
if (exists_devpath(pathname))
if (unlikely(exists_devpath(pathname, sessname)))
return ERR_PTR(-EEXIST);
sess = find_and_get_or_create_sess(sessname, paths, path_cnt, port_nr);


@ -108,7 +108,7 @@ struct rnbd_clt_dev {
u32 clt_device_id;
struct mutex lock;
enum rnbd_clt_dev_state dev_state;
char pathname[NAME_MAX];
char *pathname;
enum rnbd_access_mode access_mode;
bool read_only;
bool rotational;
@ -126,7 +126,7 @@ struct rnbd_clt_dev {
struct list_head list;
struct gendisk *gd;
struct kobject kobj;
char blk_symlink_name[NAME_MAX];
char *blk_symlink_name;
refcount_t refcount;
struct work_struct unmap_on_rmmod_work;
};


@ -47,13 +47,17 @@ int rnbd_srv_create_dev_sysfs(struct rnbd_srv_dev *dev,
ret = kobject_init_and_add(&dev->dev_kobj, &dev_ktype,
rnbd_devs_kobj, dev_name);
if (ret)
if (ret) {
kobject_put(&dev->dev_kobj);
return ret;
}
dev->dev_sessions_kobj = kobject_create_and_add("sessions",
&dev->dev_kobj);
if (!dev->dev_sessions_kobj)
goto put_dev_kobj;
if (!dev->dev_sessions_kobj) {
ret = -ENOMEM;
goto free_dev_kobj;
}
bdev_kobj = &disk_to_dev(bdev->bd_disk)->kobj;
ret = sysfs_create_link(&dev->dev_kobj, bdev_kobj, "block_dev");
@ -64,7 +68,8 @@ int rnbd_srv_create_dev_sysfs(struct rnbd_srv_dev *dev,
put_sess_kobj:
kobject_put(dev->dev_sessions_kobj);
put_dev_kobj:
free_dev_kobj:
kobject_del(&dev->dev_kobj);
kobject_put(&dev->dev_kobj);
return ret;
}
@ -120,10 +125,46 @@ static ssize_t mapping_path_show(struct kobject *kobj,
static struct kobj_attribute rnbd_srv_dev_session_mapping_path_attr =
__ATTR_RO(mapping_path);
static ssize_t rnbd_srv_dev_session_force_close_show(struct kobject *kobj,
struct kobj_attribute *attr, char *page)
{
return scnprintf(page, PAGE_SIZE, "Usage: echo 1 > %s\n",
attr->attr.name);
}
static ssize_t rnbd_srv_dev_session_force_close_store(struct kobject *kobj,
struct kobj_attribute *attr,
const char *buf, size_t count)
{
struct rnbd_srv_sess_dev *sess_dev;
sess_dev = container_of(kobj, struct rnbd_srv_sess_dev, kobj);
if (!sysfs_streq(buf, "1")) {
rnbd_srv_err(sess_dev, "%s: invalid value: '%s'\n",
attr->attr.name, buf);
return -EINVAL;
}
rnbd_srv_info(sess_dev, "force close requested\n");
/* first remove sysfs itself to avoid deadlock */
sysfs_remove_file_self(&sess_dev->kobj, &attr->attr);
rnbd_srv_sess_dev_force_close(sess_dev);
return count;
}
static struct kobj_attribute rnbd_srv_dev_session_force_close_attr =
__ATTR(force_close, 0644,
rnbd_srv_dev_session_force_close_show,
rnbd_srv_dev_session_force_close_store);
static struct attribute *rnbd_srv_default_dev_sessions_attrs[] = {
&rnbd_srv_dev_session_access_mode_attr.attr,
&rnbd_srv_dev_session_ro_attr.attr,
&rnbd_srv_dev_session_mapping_path_attr.attr,
&rnbd_srv_dev_session_force_close_attr.attr,
NULL,
};
@ -145,7 +186,7 @@ static void rnbd_srv_sess_dev_release(struct kobject *kobj)
struct rnbd_srv_sess_dev *sess_dev;
sess_dev = container_of(kobj, struct rnbd_srv_sess_dev, kobj);
rnbd_destroy_sess_dev(sess_dev);
rnbd_destroy_sess_dev(sess_dev, sess_dev->keep_id);
}
static struct kobj_type rnbd_srv_sess_dev_ktype = {
@ -160,18 +201,17 @@ int rnbd_srv_create_dev_session_sysfs(struct rnbd_srv_sess_dev *sess_dev)
ret = kobject_init_and_add(&sess_dev->kobj, &rnbd_srv_sess_dev_ktype,
sess_dev->dev->dev_sessions_kobj, "%s",
sess_dev->sess->sessname);
if (ret)
if (ret) {
kobject_put(&sess_dev->kobj);
return ret;
}
ret = sysfs_create_group(&sess_dev->kobj,
&rnbd_srv_default_dev_session_attr_group);
if (ret)
goto err;
return 0;
err:
kobject_put(&sess_dev->kobj);
if (ret) {
kobject_del(&sess_dev->kobj);
kobject_put(&sess_dev->kobj);
}
return ret;
}


@ -212,12 +212,20 @@ static void rnbd_put_srv_dev(struct rnbd_srv_dev *dev)
kref_put(&dev->kref, destroy_device_cb);
}
void rnbd_destroy_sess_dev(struct rnbd_srv_sess_dev *sess_dev)
void rnbd_destroy_sess_dev(struct rnbd_srv_sess_dev *sess_dev, bool keep_id)
{
DECLARE_COMPLETION_ONSTACK(dc);
xa_erase(&sess_dev->sess->index_idr, sess_dev->device_id);
if (keep_id)
/* free the resources for the id but don't */
/* allow to re-use the id itself because it */
/* is still used by the client */
xa_cmpxchg(&sess_dev->sess->index_idr, sess_dev->device_id,
sess_dev, NULL, 0);
else
xa_erase(&sess_dev->sess->index_idr, sess_dev->device_id);
synchronize_rcu();
sess_dev->destroy_comp = &dc;
rnbd_put_sess_dev(sess_dev);
wait_for_completion(&dc); /* wait for inflights to drop to zero */
@ -328,6 +336,13 @@ static int rnbd_srv_link_ev(struct rtrs_srv *rtrs,
}
}
void rnbd_srv_sess_dev_force_close(struct rnbd_srv_sess_dev *sess_dev)
{
rnbd_srv_destroy_dev_session_sysfs(sess_dev);
sess_dev->keep_id = true;
}
static int process_msg_close(struct rtrs_srv *rtrs,
struct rnbd_srv_session *srv_sess,
void *data, size_t datalen, const void *usr,


@ -56,6 +56,7 @@ struct rnbd_srv_sess_dev {
struct rnbd_srv_dev *dev;
struct kobject kobj;
u32 device_id;
bool keep_id;
fmode_t open_flags;
struct kref kref;
struct completion *destroy_comp;
@ -63,6 +64,7 @@ struct rnbd_srv_sess_dev {
enum rnbd_access_mode access_mode;
};
void rnbd_srv_sess_dev_force_close(struct rnbd_srv_sess_dev *sess_dev);
/* rnbd-srv-sysfs.c */
int rnbd_srv_create_dev_sysfs(struct rnbd_srv_dev *dev,
@ -73,6 +75,6 @@ int rnbd_srv_create_dev_session_sysfs(struct rnbd_srv_sess_dev *sess_dev);
void rnbd_srv_destroy_dev_session_sysfs(struct rnbd_srv_sess_dev *sess_dev);
int rnbd_srv_create_sysfs_files(void);
void rnbd_srv_destroy_sysfs_files(void);
void rnbd_destroy_sess_dev(struct rnbd_srv_sess_dev *sess_dev);
void rnbd_destroy_sess_dev(struct rnbd_srv_sess_dev *sess_dev, bool keep_id);
#endif /* RNBD_SRV_H */


@ -2996,13 +2996,15 @@ static noinline int mmc_ioctl_cdrom_read_data(struct cdrom_device_info *cdi,
* SCSI-II devices are not required to support
* READ_CD, so let's try switching block size
*/
/* FIXME: switch back again... */
ret = cdrom_switch_blocksize(cdi, blocksize);
if (ret)
goto out;
if (blocksize != CD_FRAMESIZE) {
ret = cdrom_switch_blocksize(cdi, blocksize);
if (ret)
goto out;
}
cgc->sshdr = NULL;
ret = cdrom_read_cd(cdi, cgc, lba, blocksize, 1);
ret |= cdrom_switch_blocksize(cdi, blocksize);
if (blocksize != CD_FRAMESIZE)
ret |= cdrom_switch_blocksize(cdi, CD_FRAMESIZE);
}
if (!ret && copy_to_user(arg, cgc->buffer, blocksize))
ret = -EFAULT;


@ -1869,6 +1869,10 @@ void pblk_gen_run_ws(struct pblk *pblk, struct pblk_line *line, void *priv,
struct pblk_line_ws *line_ws;
line_ws = mempool_alloc(&pblk->gen_ws_pool, gfp_mask);
if (!line_ws) {
pblk_err(pblk, "pblk: could not allocate memory\n");
return;
}
line_ws->pblk = pblk;
line_ws->line = line;


@ -1114,9 +1114,6 @@ static void cancel_writeback_rate_update_dwork(struct cached_dev *dc)
static void cached_dev_detach_finish(struct work_struct *w)
{
struct cached_dev *dc = container_of(w, struct cached_dev, detach);
struct closure cl;
closure_init_stack(&cl);
BUG_ON(!test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags));
BUG_ON(refcount_read(&dc->count));
@ -1130,12 +1127,6 @@ static void cached_dev_detach_finish(struct work_struct *w)
dc->writeback_thread = NULL;
}
memset(&dc->sb.set_uuid, 0, 16);
SET_BDEV_STATE(&dc->sb, BDEV_STATE_NONE);
bch_write_bdev_super(dc, &cl);
closure_sync(&cl);
mutex_lock(&bch_register_lock);
calc_cached_dev_sectors(dc->disk.c);


@ -705,6 +705,15 @@ static int bch_writeback_thread(void *arg)
* bch_cached_dev_detach().
*/
if (test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags)) {
struct closure cl;
closure_init_stack(&cl);
memset(&dc->sb.set_uuid, 0, 16);
SET_BDEV_STATE(&dc->sb, BDEV_STATE_NONE);
bch_write_bdev_super(dc, &cl);
closure_sync(&cl);
up_write(&dc->writeback_lock);
break;
}


@ -663,9 +663,27 @@ out:
* Takes the lock on the TOKEN lock resource so no other
* node can communicate while the operation is underway.
*/
static int lock_token(struct md_cluster_info *cinfo, bool mddev_locked)
static int lock_token(struct md_cluster_info *cinfo)
{
int error, set_bit = 0;
int error;
error = dlm_lock_sync(cinfo->token_lockres, DLM_LOCK_EX);
if (error) {
pr_err("md-cluster(%s:%d): failed to get EX on TOKEN (%d)\n",
__func__, __LINE__, error);
} else {
/* Lock the receive sequence */
mutex_lock(&cinfo->recv_mutex);
}
return error;
}
/* lock_comm()
* Sets the MD_CLUSTER_SEND_LOCK bit to lock the send channel.
*/
static int lock_comm(struct md_cluster_info *cinfo, bool mddev_locked)
{
int rv, set_bit = 0;
struct mddev *mddev = cinfo->mddev;
/*
@ -676,34 +694,19 @@ static int lock_token(struct md_cluster_info *cinfo, bool mddev_locked)
*/
if (mddev_locked && !test_bit(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD,
&cinfo->state)) {
error = test_and_set_bit_lock(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD,
rv = test_and_set_bit_lock(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD,
&cinfo->state);
WARN_ON_ONCE(error);
WARN_ON_ONCE(rv);
md_wakeup_thread(mddev->thread);
set_bit = 1;
}
error = dlm_lock_sync(cinfo->token_lockres, DLM_LOCK_EX);
if (set_bit)
clear_bit_unlock(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD, &cinfo->state);
if (error)
pr_err("md-cluster(%s:%d): failed to get EX on TOKEN (%d)\n",
__func__, __LINE__, error);
/* Lock the receive sequence */
mutex_lock(&cinfo->recv_mutex);
return error;
}
/* lock_comm()
* Sets the MD_CLUSTER_SEND_LOCK bit to lock the send channel.
*/
static int lock_comm(struct md_cluster_info *cinfo, bool mddev_locked)
{
wait_event(cinfo->wait,
!test_and_set_bit(MD_CLUSTER_SEND_LOCK, &cinfo->state));
return lock_token(cinfo, mddev_locked);
rv = lock_token(cinfo);
if (set_bit)
clear_bit_unlock(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD, &cinfo->state);
return rv;
}
static void unlock_comm(struct md_cluster_info *cinfo)
@ -783,9 +786,11 @@ static int sendmsg(struct md_cluster_info *cinfo, struct cluster_msg *cmsg,
{
int ret;
lock_comm(cinfo, mddev_locked);
ret = __sendmsg(cinfo, cmsg);
unlock_comm(cinfo);
ret = lock_comm(cinfo, mddev_locked);
if (!ret) {
ret = __sendmsg(cinfo, cmsg);
unlock_comm(cinfo);
}
return ret;
}
@ -1060,7 +1065,7 @@ static int metadata_update_start(struct mddev *mddev)
return 0;
}
ret = lock_token(cinfo, 1);
ret = lock_token(cinfo);
clear_bit_unlock(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD, &cinfo->state);
return ret;
}
@ -1254,7 +1259,10 @@ static void update_size(struct mddev *mddev, sector_t old_dev_sectors)
int raid_slot = -1;
md_update_sb(mddev, 1);
lock_comm(cinfo, 1);
if (lock_comm(cinfo, 1)) {
pr_err("%s: lock_comm failed\n", __func__);
return;
}
memset(&cmsg, 0, sizeof(cmsg));
cmsg.type = cpu_to_le32(METADATA_UPDATED);
@ -1403,7 +1411,8 @@ static int add_new_disk(struct mddev *mddev, struct md_rdev *rdev)
cmsg.type = cpu_to_le32(NEWDISK);
memcpy(cmsg.uuid, uuid, 16);
cmsg.raid_slot = cpu_to_le32(rdev->desc_nr);
lock_comm(cinfo, 1);
if (lock_comm(cinfo, 1))
return -EAGAIN;
ret = __sendmsg(cinfo, &cmsg);
if (ret) {
unlock_comm(cinfo);


@ -639,7 +639,7 @@ static void md_submit_flush_data(struct work_struct *ws)
* could wait for this and below md_handle_request could wait for those
* bios because of suspend check
*/
mddev->last_flush = mddev->start_flush;
mddev->prev_flush_start = mddev->start_flush;
mddev->flush_bio = NULL;
wake_up(&mddev->sb_wait);
@ -660,13 +660,17 @@ static void md_submit_flush_data(struct work_struct *ws)
*/
bool md_flush_request(struct mddev *mddev, struct bio *bio)
{
ktime_t start = ktime_get_boottime();
ktime_t req_start = ktime_get_boottime();
spin_lock_irq(&mddev->lock);
/* flush requests wait until ongoing flush completes,
* hence coalescing all the pending requests.
*/
wait_event_lock_irq(mddev->sb_wait,
!mddev->flush_bio ||
ktime_after(mddev->last_flush, start),
ktime_before(req_start, mddev->prev_flush_start),
mddev->lock);
if (!ktime_after(mddev->last_flush, start)) {
/* new request after previous flush is completed */
if (ktime_after(req_start, mddev->prev_flush_start)) {
WARN_ON(mddev->flush_bio);
mddev->flush_bio = bio;
bio = NULL;
@ -6944,8 +6948,10 @@ static int hot_remove_disk(struct mddev *mddev, dev_t dev)
goto busy;
kick_rdev:
if (mddev_is_clustered(mddev))
md_cluster_ops->remove_disk(mddev, rdev);
if (mddev_is_clustered(mddev)) {
if (md_cluster_ops->remove_disk(mddev, rdev))
goto busy;
}
md_kick_rdev_from_array(rdev);
set_bit(MD_SB_CHANGE_DEVS, &mddev->sb_flags);
@ -7274,6 +7280,7 @@ static int update_raid_disks(struct mddev *mddev, int raid_disks)
return -EINVAL;
if (mddev->sync_thread ||
test_bit(MD_RECOVERY_RUNNING, &mddev->recovery) ||
test_bit(MD_RESYNCING_REMOTE, &mddev->recovery) ||
mddev->reshape_position != MaxSector)
return -EBUSY;
@ -7584,8 +7591,11 @@ static int md_ioctl(struct block_device *bdev, fmode_t mode,
err = -EBUSY;
goto out;
}
WARN_ON_ONCE(test_bit(MD_CLOSING, &mddev->flags));
set_bit(MD_CLOSING, &mddev->flags);
if (test_and_set_bit(MD_CLOSING, &mddev->flags)) {
mutex_unlock(&mddev->open_mutex);
err = -EBUSY;
goto out;
}
did_set_md_closing = true;
mutex_unlock(&mddev->open_mutex);
sync_blockdev(bdev);
@ -9634,8 +9644,11 @@ static void check_sb_changes(struct mddev *mddev, struct md_rdev *rdev)
}
}
if (mddev->raid_disks != le32_to_cpu(sb->raid_disks))
update_raid_disks(mddev, le32_to_cpu(sb->raid_disks));
if (mddev->raid_disks != le32_to_cpu(sb->raid_disks)) {
ret = update_raid_disks(mddev, le32_to_cpu(sb->raid_disks));
if (ret)
pr_warn("md: updating array disks failed. %d\n", ret);
}
/*
* Since mddev->delta_disks has already updated in update_raid_disks,


@ -495,9 +495,9 @@ struct mddev {
*/
struct bio *flush_bio;
atomic_t flush_pending;
ktime_t start_flush, last_flush; /* last_flush is when the last completed
* flush was started.
*/
ktime_t start_flush, prev_flush_start; /* prev_flush_start is when the previous completed
* flush was started.
*/
struct work_struct flush_work;
struct work_struct event_work; /* used by dm to report failure event */
mempool_t *serial_info_pool;


@ -1128,7 +1128,7 @@ static void raid10_read_request(struct mddev *mddev, struct bio *bio,
struct md_rdev *err_rdev = NULL;
gfp_t gfp = GFP_NOIO;
if (r10_bio->devs[slot].rdev) {
if (slot >= 0 && r10_bio->devs[slot].rdev) {
/*
* This is an error retry, but we cannot
* safely dereference the rdev in the r10_bio,
@ -1491,6 +1491,7 @@ static void __make_request(struct mddev *mddev, struct bio *bio, int sectors)
r10_bio->mddev = mddev;
r10_bio->sector = bio->bi_iter.bi_sector;
r10_bio->state = 0;
r10_bio->read_slot = -1;
memset(r10_bio->devs, 0, sizeof(r10_bio->devs[0]) * conf->copies);
if (bio_data_dir(bio) == READ)


@ -85,7 +85,7 @@ static LIST_HEAD(nvme_subsystems);
static DEFINE_MUTEX(nvme_subsystems_lock);
static DEFINE_IDA(nvme_instance_ida);
static dev_t nvme_chr_devt;
static dev_t nvme_ctrl_base_chr_devt;
static struct class *nvme_class;
static struct class *nvme_subsys_class;
@ -137,6 +137,38 @@ int nvme_try_sched_reset(struct nvme_ctrl *ctrl)
}
EXPORT_SYMBOL_GPL(nvme_try_sched_reset);
static void nvme_failfast_work(struct work_struct *work)
{
struct nvme_ctrl *ctrl = container_of(to_delayed_work(work),
struct nvme_ctrl, failfast_work);
if (ctrl->state != NVME_CTRL_CONNECTING)
return;
set_bit(NVME_CTRL_FAILFAST_EXPIRED, &ctrl->flags);
dev_info(ctrl->device, "failfast expired\n");
nvme_kick_requeue_lists(ctrl);
}
static inline void nvme_start_failfast_work(struct nvme_ctrl *ctrl)
{
if (!ctrl->opts || ctrl->opts->fast_io_fail_tmo == -1)
return;
schedule_delayed_work(&ctrl->failfast_work,
ctrl->opts->fast_io_fail_tmo * HZ);
}
static inline void nvme_stop_failfast_work(struct nvme_ctrl *ctrl)
{
if (!ctrl->opts)
return;
cancel_delayed_work_sync(&ctrl->failfast_work);
clear_bit(NVME_CTRL_FAILFAST_EXPIRED, &ctrl->flags);
}
int nvme_reset_ctrl(struct nvme_ctrl *ctrl)
{
if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING))
@ -422,8 +454,17 @@ bool nvme_change_ctrl_state(struct nvme_ctrl *ctrl,
}
spin_unlock_irqrestore(&ctrl->lock, flags);
if (changed && ctrl->state == NVME_CTRL_LIVE)
if (!changed)
return false;
if (ctrl->state == NVME_CTRL_LIVE) {
if (old_state == NVME_CTRL_CONNECTING)
nvme_stop_failfast_work(ctrl);
nvme_kick_requeue_lists(ctrl);
} else if (ctrl->state == NVME_CTRL_CONNECTING &&
old_state == NVME_CTRL_RESETTING) {
nvme_start_failfast_work(ctrl);
}
return changed;
}
EXPORT_SYMBOL_GPL(nvme_change_ctrl_state);
@ -507,29 +548,49 @@ static inline void nvme_clear_nvme_request(struct request *req)
}
}
struct request *nvme_alloc_request(struct request_queue *q,
struct nvme_command *cmd, blk_mq_req_flags_t flags, int qid)
static inline unsigned int nvme_req_op(struct nvme_command *cmd)
{
unsigned op = nvme_is_write(cmd) ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN;
struct request *req;
return nvme_is_write(cmd) ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN;
}
if (qid == NVME_QID_ANY) {
req = blk_mq_alloc_request(q, op, flags);
} else {
req = blk_mq_alloc_request_hctx(q, op, flags,
qid ? qid - 1 : 0);
}
if (IS_ERR(req))
return req;
static inline void nvme_init_request(struct request *req,
struct nvme_command *cmd)
{
if (req->q->queuedata)
req->timeout = NVME_IO_TIMEOUT;
else /* no queuedata implies admin queue */
req->timeout = NVME_ADMIN_TIMEOUT;
req->cmd_flags |= REQ_FAILFAST_DRIVER;
nvme_clear_nvme_request(req);
nvme_req(req)->cmd = cmd;
}
struct request *nvme_alloc_request(struct request_queue *q,
struct nvme_command *cmd, blk_mq_req_flags_t flags)
{
struct request *req;
req = blk_mq_alloc_request(q, nvme_req_op(cmd), flags);
if (!IS_ERR(req))
nvme_init_request(req, cmd);
return req;
}
EXPORT_SYMBOL_GPL(nvme_alloc_request);
struct request *nvme_alloc_request_qid(struct request_queue *q,
struct nvme_command *cmd, blk_mq_req_flags_t flags, int qid)
{
struct request *req;
req = blk_mq_alloc_request_hctx(q, nvme_req_op(cmd), flags,
qid ? qid - 1 : 0);
if (!IS_ERR(req))
nvme_init_request(req, cmd);
return req;
}
EXPORT_SYMBOL_GPL(nvme_alloc_request_qid);
static int nvme_toggle_streams(struct nvme_ctrl *ctrl, bool enable)
{
struct nvme_command c;
@ -886,11 +947,15 @@ int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
struct request *req;
int ret;
req = nvme_alloc_request(q, cmd, flags, qid);
if (qid == NVME_QID_ANY)
req = nvme_alloc_request(q, cmd, flags);
else
req = nvme_alloc_request_qid(q, cmd, flags, qid);
if (IS_ERR(req))
return PTR_ERR(req);
req->timeout = timeout ? timeout : ADMIN_TIMEOUT;
if (timeout)
req->timeout = timeout;
if (buffer && bufflen) {
ret = blk_rq_map_kern(q, req, buffer, bufflen, GFP_KERNEL);
@ -1056,11 +1121,12 @@ static int nvme_submit_user_cmd(struct request_queue *q,
void *meta = NULL;
int ret;
req = nvme_alloc_request(q, cmd, 0, NVME_QID_ANY);
req = nvme_alloc_request(q, cmd, 0);
if (IS_ERR(req))
return PTR_ERR(req);
req->timeout = timeout ? timeout : ADMIN_TIMEOUT;
if (timeout)
req->timeout = timeout;
nvme_req(req)->flags |= NVME_REQ_USERCMD;
if (ubuffer && bufflen) {
@ -1130,8 +1196,8 @@ static int nvme_keep_alive(struct nvme_ctrl *ctrl)
{
struct request *rq;
rq = nvme_alloc_request(ctrl->admin_q, &ctrl->ka_cmd, BLK_MQ_REQ_RESERVED,
NVME_QID_ANY);
rq = nvme_alloc_request(ctrl->admin_q, &ctrl->ka_cmd,
BLK_MQ_REQ_RESERVED);
if (IS_ERR(rq))
return PTR_ERR(rq);
@ -1291,7 +1357,8 @@ static int nvme_identify_ns_descs(struct nvme_ctrl *ctrl, unsigned nsid,
NVME_IDENTIFY_DATA_SIZE);
if (status) {
dev_warn(ctrl->device,
"Identify Descriptors failed (%d)\n", status);
"Identify Descriptors failed (nsid=%u, status=0x%x)\n",
nsid, status);
goto free_data;
}
@ -2047,7 +2114,8 @@ static void nvme_update_disk_info(struct gendisk *disk,
nvme_config_discard(disk, ns);
nvme_config_write_zeroes(disk, ns);
if (id->nsattr & NVME_NS_ATTR_RO)
if ((id->nsattr & NVME_NS_ATTR_RO) ||
test_bit(NVME_NS_FORCE_RO, &ns->flags))
set_disk_ro(disk, true);
}
@ -2249,13 +2317,13 @@ int nvme_sec_submit(void *data, u16 spsp, u8 secp, void *buffer, size_t len,
cmd.common.cdw10 = cpu_to_le32(((u32)secp) << 24 | ((u32)spsp) << 8);
cmd.common.cdw11 = cpu_to_le32(len);
return __nvme_submit_sync_cmd(ctrl->admin_q, &cmd, NULL, buffer, len,
ADMIN_TIMEOUT, NVME_QID_ANY, 1, 0, false);
return __nvme_submit_sync_cmd(ctrl->admin_q, &cmd, NULL, buffer, len, 0,
NVME_QID_ANY, 1, 0, false);
}
EXPORT_SYMBOL_GPL(nvme_sec_submit);
#endif /* CONFIG_BLK_SED_OPAL */
static const struct block_device_operations nvme_fops = {
static const struct block_device_operations nvme_bdev_ops = {
.owner = THIS_MODULE,
.ioctl = nvme_ioctl,
.compat_ioctl = nvme_compat_ioctl,
@ -3262,7 +3330,7 @@ static inline struct nvme_ns_head *dev_to_ns_head(struct device *dev)
{
struct gendisk *disk = dev_to_disk(dev);
if (disk->fops == &nvme_fops)
if (disk->fops == &nvme_bdev_ops)
return nvme_get_ns_from_dev(dev)->head;
else
return disk->private_data;
@ -3371,7 +3439,7 @@ static umode_t nvme_ns_id_attrs_are_visible(struct kobject *kobj,
}
#ifdef CONFIG_NVME_MULTIPATH
if (a == &dev_attr_ana_grpid.attr || a == &dev_attr_ana_state.attr) {
if (dev_to_disk(dev)->fops != &nvme_fops) /* per-path attr */
if (dev_to_disk(dev)->fops != &nvme_bdev_ops) /* per-path attr */
return 0;
if (!nvme_ctrl_use_ana(nvme_get_ns_from_dev(dev)->ctrl))
return 0;
@ -3792,7 +3860,7 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid,
struct gendisk *disk;
struct nvme_id_ns *id;
char disk_name[DISK_NAME_LEN];
int node = ctrl->numa_node, flags = GENHD_FL_EXT_DEVT, ret;
int node = ctrl->numa_node, flags = GENHD_FL_EXT_DEVT;
if (nvme_identify_ns(ctrl, nsid, ids, &id))
return;
@ -3816,8 +3884,7 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid,
ns->ctrl = ctrl;
kref_init(&ns->kref);
ret = nvme_init_ns_head(ns, nsid, ids, id->nmic & NVME_NS_NMIC_SHARED);
if (ret)
if (nvme_init_ns_head(ns, nsid, ids, id->nmic & NVME_NS_NMIC_SHARED))
goto out_free_queue;
nvme_set_disk_name(disk_name, ns, ctrl, &flags);
@ -3825,7 +3892,7 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid,
if (!disk)
goto out_unlink_ns;
disk->fops = &nvme_fops;
disk->fops = &nvme_bdev_ops;
disk->private_data = ns;
disk->queue = ns->queue;
disk->flags = flags;
@ -3836,8 +3903,7 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid,
goto out_put_disk;
if ((ctrl->quirks & NVME_QUIRK_LIGHTNVM) && id->vs[0] == 0x1) {
ret = nvme_nvm_register(ns, disk_name, node);
if (ret) {
if (nvme_nvm_register(ns, disk_name, node)) {
dev_warn(ctrl->device, "LightNVM init failure\n");
goto out_put_disk;
}
@ -4028,8 +4094,11 @@ static int nvme_scan_ns_list(struct nvme_ctrl *ctrl)
ret = nvme_submit_sync_cmd(ctrl->admin_q, &cmd, ns_list,
NVME_IDENTIFY_DATA_SIZE);
if (ret)
if (ret) {
dev_warn(ctrl->device,
"Identify NS List failed (status=0x%x)\n", ret);
goto free;
}
for (i = 0; i < nr_entries; i++) {
u32 nsid = le32_to_cpu(ns_list[i]);
@ -4332,6 +4401,7 @@ void nvme_stop_ctrl(struct nvme_ctrl *ctrl)
{
nvme_mpath_stop(ctrl);
nvme_stop_keep_alive(ctrl);
nvme_stop_failfast_work(ctrl);
flush_work(&ctrl->async_event_work);
cancel_work_sync(&ctrl->fw_act_work);
}
@ -4409,6 +4479,7 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
int ret;
ctrl->state = NVME_CTRL_NEW;
clear_bit(NVME_CTRL_FAILFAST_EXPIRED, &ctrl->flags);
spin_lock_init(&ctrl->lock);
mutex_init(&ctrl->scan_lock);
INIT_LIST_HEAD(&ctrl->namespaces);
@ -4425,6 +4496,7 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
init_waitqueue_head(&ctrl->state_wq);
INIT_DELAYED_WORK(&ctrl->ka_work, nvme_keep_alive_work);
INIT_DELAYED_WORK(&ctrl->failfast_work, nvme_failfast_work);
memset(&ctrl->ka_cmd, 0, sizeof(ctrl->ka_cmd));
ctrl->ka_cmd.common.opcode = nvme_admin_keep_alive;
@ -4443,7 +4515,8 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
device_initialize(&ctrl->ctrl_device);
ctrl->device = &ctrl->ctrl_device;
ctrl->device->devt = MKDEV(MAJOR(nvme_chr_devt), ctrl->instance);
ctrl->device->devt = MKDEV(MAJOR(nvme_ctrl_base_chr_devt),
ctrl->instance);
ctrl->device->class = nvme_class;
ctrl->device->parent = ctrl->dev;
ctrl->device->groups = nvme_dev_attr_groups;
@ -4652,7 +4725,8 @@ static int __init nvme_core_init(void)
if (!nvme_delete_wq)
goto destroy_reset_wq;
result = alloc_chrdev_region(&nvme_chr_devt, 0, NVME_MINORS, "nvme");
result = alloc_chrdev_region(&nvme_ctrl_base_chr_devt, 0,
NVME_MINORS, "nvme");
if (result < 0)
goto destroy_delete_wq;
@ -4673,7 +4747,7 @@ static int __init nvme_core_init(void)
destroy_class:
class_destroy(nvme_class);
unregister_chrdev:
unregister_chrdev_region(nvme_chr_devt, NVME_MINORS);
unregister_chrdev_region(nvme_ctrl_base_chr_devt, NVME_MINORS);
destroy_delete_wq:
destroy_workqueue(nvme_delete_wq);
destroy_reset_wq:
@ -4688,7 +4762,7 @@ static void __exit nvme_core_exit(void)
{
class_destroy(nvme_subsys_class);
class_destroy(nvme_class);
unregister_chrdev_region(nvme_chr_devt, NVME_MINORS);
unregister_chrdev_region(nvme_ctrl_base_chr_devt, NVME_MINORS);
destroy_workqueue(nvme_delete_wq);
destroy_workqueue(nvme_reset_wq);
destroy_workqueue(nvme_wq);


@ -549,6 +549,7 @@ blk_status_t nvmf_fail_nonready_command(struct nvme_ctrl *ctrl,
{
if (ctrl->state != NVME_CTRL_DELETING_NOIO &&
ctrl->state != NVME_CTRL_DEAD &&
!test_bit(NVME_CTRL_FAILFAST_EXPIRED, &ctrl->flags) &&
!blk_noretry_request(rq) && !(rq->cmd_flags & REQ_NVME_MPATH))
return BLK_STS_RESOURCE;
@ -615,6 +616,7 @@ static const match_table_t opt_tokens = {
{ NVMF_OPT_NR_WRITE_QUEUES, "nr_write_queues=%d" },
{ NVMF_OPT_NR_POLL_QUEUES, "nr_poll_queues=%d" },
{ NVMF_OPT_TOS, "tos=%d" },
{ NVMF_OPT_FAIL_FAST_TMO, "fast_io_fail_tmo=%d" },
{ NVMF_OPT_ERR, NULL }
};
@ -634,6 +636,7 @@ static int nvmf_parse_options(struct nvmf_ctrl_options *opts,
opts->reconnect_delay = NVMF_DEF_RECONNECT_DELAY;
opts->kato = NVME_DEFAULT_KATO;
opts->duplicate_connect = false;
opts->fast_io_fail_tmo = NVMF_DEF_FAIL_FAST_TMO;
opts->hdr_digest = false;
opts->data_digest = false;
opts->tos = -1; /* < 0 == use transport default */
@ -754,6 +757,17 @@ static int nvmf_parse_options(struct nvmf_ctrl_options *opts,
pr_warn("ctrl_loss_tmo < 0 will reconnect forever\n");
ctrl_loss_tmo = token;
break;
case NVMF_OPT_FAIL_FAST_TMO:
if (match_int(args, &token)) {
ret = -EINVAL;
goto out;
}
if (token >= 0)
pr_warn("I/O fail on reconnect controller after %d sec\n",
token);
opts->fast_io_fail_tmo = token;
break;
case NVMF_OPT_HOSTNQN:
if (opts->host) {
pr_err("hostnqn already user-assigned: %s\n",
@ -884,11 +898,15 @@ static int nvmf_parse_options(struct nvmf_ctrl_options *opts,
opts->nr_poll_queues = 0;
opts->duplicate_connect = true;
}
if (ctrl_loss_tmo < 0)
if (ctrl_loss_tmo < 0) {
opts->max_reconnects = -1;
else
} else {
opts->max_reconnects = DIV_ROUND_UP(ctrl_loss_tmo,
opts->reconnect_delay);
if (ctrl_loss_tmo < opts->fast_io_fail_tmo)
pr_warn("failfast tmo (%d) larger than controller loss tmo (%d)\n",
opts->fast_io_fail_tmo, ctrl_loss_tmo);
}
if (!opts->host) {
kref_get(&nvmf_default_host->ref);
@ -988,7 +1006,8 @@ EXPORT_SYMBOL_GPL(nvmf_free_options);
#define NVMF_ALLOWED_OPTS (NVMF_OPT_QUEUE_SIZE | NVMF_OPT_NR_IO_QUEUES | \
NVMF_OPT_KATO | NVMF_OPT_HOSTNQN | \
NVMF_OPT_HOST_ID | NVMF_OPT_DUP_CONNECT |\
NVMF_OPT_DISABLE_SQFLOW)
NVMF_OPT_DISABLE_SQFLOW |\
NVMF_OPT_FAIL_FAST_TMO)
static struct nvme_ctrl *
nvmf_create_ctrl(struct device *dev, const char *buf)
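
The fabrics hunks above introduce the fast_io_fail_tmo connect option: it defaults to -1 (disabled), is given in seconds, and once a controller has been stuck in the CONNECTING state for that long the NVME_CTRL_FAILFAST_EXPIRED flag lets pending and new I/O fail immediately instead of being requeued until ctrl_loss_tmo expires. A minimal userspace sketch of passing the option through the fabrics control device follows; the transport, address and subsystem NQN are placeholders, and in practice nvme-cli builds the same kind of key=value string.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* Placeholder transport/address/NQN; fast_io_fail_tmo is the point here. */
        const char *opts =
            "transport=tcp,traddr=192.168.1.10,trsvcid=4420,"
            "nqn=nqn.2020-12.io.example:testsubsys,fast_io_fail_tmo=30";
        int fd = open("/dev/nvme-fabrics", O_RDWR);

        if (fd < 0) {
            perror("open /dev/nvme-fabrics");
            return 1;
        }
        /* The string is parsed by nvmf_parse_options(), as in the hunk above. */
        if (write(fd, opts, strlen(opts)) < 0)
            perror("connect");
        close(fd);
        return 0;
    }

If fast_io_fail_tmo is set larger than ctrl_loss_tmo the connection still works, but the kernel now warns about it, since the controller is likely to be removed before the failfast timer ever fires.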


@ -15,6 +15,8 @@
#define NVMF_DEF_RECONNECT_DELAY 10
/* default to 600 seconds of reconnect attempts before giving up */
#define NVMF_DEF_CTRL_LOSS_TMO 600
/* default is -1: the fail fast mechanism is disabled */
#define NVMF_DEF_FAIL_FAST_TMO -1
/*
* Define a host as seen by the target. We allocate one at boot, but also
@ -56,6 +58,7 @@ enum {
NVMF_OPT_NR_WRITE_QUEUES = 1 << 17,
NVMF_OPT_NR_POLL_QUEUES = 1 << 18,
NVMF_OPT_TOS = 1 << 19,
NVMF_OPT_FAIL_FAST_TMO = 1 << 20,
};
/**
@ -89,6 +92,7 @@ enum {
* @nr_write_queues: number of queues for write I/O
* @nr_poll_queues: number of queues for polling I/O
* @tos: type of service
* @fast_io_fail_tmo: Fast I/O fail timeout in seconds
*/
struct nvmf_ctrl_options {
unsigned mask;
@ -111,6 +115,7 @@ struct nvmf_ctrl_options {
unsigned int nr_write_queues;
unsigned int nr_poll_queues;
int tos;
int fast_io_fail_tmo;
};
/*


@ -3479,7 +3479,7 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts,
ctrl->lport->ops->fcprqst_priv_sz);
ctrl->admin_tag_set.driver_data = ctrl;
ctrl->admin_tag_set.nr_hw_queues = 1;
ctrl->admin_tag_set.timeout = ADMIN_TIMEOUT;
ctrl->admin_tag_set.timeout = NVME_ADMIN_TIMEOUT;
ctrl->admin_tag_set.flags = BLK_MQ_F_NO_SCHED;
ret = blk_mq_alloc_tag_set(&ctrl->admin_tag_set);


@ -653,7 +653,7 @@ static struct request *nvme_nvm_alloc_request(struct request_queue *q,
nvme_nvm_rqtocmd(rqd, ns, cmd);
rq = nvme_alloc_request(q, (struct nvme_command *)cmd, 0, NVME_QID_ANY);
rq = nvme_alloc_request(q, (struct nvme_command *)cmd, 0);
if (IS_ERR(rq))
return rq;
@ -767,14 +767,14 @@ static int nvme_nvm_submit_user_cmd(struct request_queue *q,
DECLARE_COMPLETION_ONSTACK(wait);
int ret = 0;
rq = nvme_alloc_request(q, (struct nvme_command *)vcmd, 0,
NVME_QID_ANY);
rq = nvme_alloc_request(q, (struct nvme_command *)vcmd, 0);
if (IS_ERR(rq)) {
ret = -ENOMEM;
goto err_cmd;
}
rq->timeout = timeout ? timeout : ADMIN_TIMEOUT;
if (timeout)
rq->timeout = timeout;
if (ppa_buf && ppa_len) {
ppa_list = dma_pool_alloc(dev->dma_pool, GFP_KERNEL, &ppa_dma);


@ -279,6 +279,8 @@ static bool nvme_available_path(struct nvme_ns_head *head)
struct nvme_ns *ns;
list_for_each_entry_rcu(ns, &head->list, siblings) {
if (test_bit(NVME_CTRL_FAILFAST_EXPIRED, &ns->ctrl->flags))
continue;
switch (ns->ctrl->state) {
case NVME_CTRL_LIVE:
case NVME_CTRL_RESETTING:


@ -24,7 +24,7 @@ extern unsigned int nvme_io_timeout;
#define NVME_IO_TIMEOUT (nvme_io_timeout * HZ)
extern unsigned int admin_timeout;
#define ADMIN_TIMEOUT (admin_timeout * HZ)
#define NVME_ADMIN_TIMEOUT (admin_timeout * HZ)
#define NVME_DEFAULT_KATO 5
#define NVME_KATO_GRACE 10
@ -178,7 +178,8 @@ static inline u16 nvme_req_qid(struct request *req)
{
if (!req->q->queuedata)
return 0;
return blk_mq_unique_tag_to_hwq(blk_mq_unique_tag(req)) + 1;
return req->mq_hctx->queue_num + 1;
}
/* The below value is the specific amount of delay needed before checking
@ -298,6 +299,7 @@ struct nvme_ctrl {
struct work_struct scan_work;
struct work_struct async_event_work;
struct delayed_work ka_work;
struct delayed_work failfast_work;
struct nvme_command ka_cmd;
struct work_struct fw_act_work;
unsigned long events;
@ -331,6 +333,8 @@ struct nvme_ctrl {
u16 icdoff;
u16 maxcmd;
int nr_reconnects;
unsigned long flags;
#define NVME_CTRL_FAILFAST_EXPIRED 0
struct nvmf_ctrl_options *opts;
struct page *discard_page;
@ -442,6 +446,7 @@ struct nvme_ns {
#define NVME_NS_REMOVING 0
#define NVME_NS_DEAD 1
#define NVME_NS_ANA_PENDING 2
#define NVME_NS_FORCE_RO 3
struct nvme_fault_inject fault_inject;
@ -604,6 +609,8 @@ void nvme_start_freeze(struct nvme_ctrl *ctrl);
#define NVME_QID_ANY -1
struct request *nvme_alloc_request(struct request_queue *q,
struct nvme_command *cmd, blk_mq_req_flags_t flags);
struct request *nvme_alloc_request_qid(struct request_queue *q,
struct nvme_command *cmd, blk_mq_req_flags_t flags, int qid);
void nvme_cleanup_cmd(struct request *req);
blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req,


@ -1319,13 +1319,12 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved)
req->tag, nvmeq->qid);
abort_req = nvme_alloc_request(dev->ctrl.admin_q, &cmd,
BLK_MQ_REQ_NOWAIT, NVME_QID_ANY);
BLK_MQ_REQ_NOWAIT);
if (IS_ERR(abort_req)) {
atomic_inc(&dev->ctrl.abort_limit);
return BLK_EH_RESET_TIMER;
}
abort_req->timeout = ADMIN_TIMEOUT;
abort_req->end_io_data = NULL;
blk_execute_rq_nowait(abort_req->q, NULL, abort_req, 0, abort_endio);
@ -1622,7 +1621,7 @@ static int nvme_alloc_admin_tags(struct nvme_dev *dev)
dev->admin_tagset.nr_hw_queues = 1;
dev->admin_tagset.queue_depth = NVME_AQ_MQ_TAG_DEPTH;
dev->admin_tagset.timeout = ADMIN_TIMEOUT;
dev->admin_tagset.timeout = NVME_ADMIN_TIMEOUT;
dev->admin_tagset.numa_node = dev->ctrl.numa_node;
dev->admin_tagset.cmd_size = sizeof(struct nvme_iod);
dev->admin_tagset.flags = BLK_MQ_F_NO_SCHED;
@ -2104,6 +2103,12 @@ static void nvme_disable_io_queues(struct nvme_dev *dev)
static unsigned int nvme_max_io_queues(struct nvme_dev *dev)
{
/*
* If tags are shared with admin queue (Apple bug), then
* make sure we only use one IO queue.
*/
if (dev->ctrl.quirks & NVME_QUIRK_SHARED_TAGS)
return 1;
return num_possible_cpus() + dev->nr_write_queues + dev->nr_poll_queues;
}
@ -2122,16 +2127,7 @@ static int nvme_setup_io_queues(struct nvme_dev *dev)
dev->nr_write_queues = write_queues;
dev->nr_poll_queues = poll_queues;
/*
* If tags are shared with admin queue (Apple bug), then
* make sure we only use one IO queue.
*/
if (dev->ctrl.quirks & NVME_QUIRK_SHARED_TAGS)
nr_io_queues = 1;
else
nr_io_queues = min(nvme_max_io_queues(dev),
dev->nr_allocated_queues - 1);
nr_io_queues = dev->nr_allocated_queues - 1;
result = nvme_set_queue_count(&dev->ctrl, &nr_io_queues);
if (result < 0)
return result;
@ -2234,11 +2230,10 @@ static int nvme_delete_queue(struct nvme_queue *nvmeq, u8 opcode)
cmd.delete_queue.opcode = opcode;
cmd.delete_queue.qid = cpu_to_le16(nvmeq->qid);
req = nvme_alloc_request(q, &cmd, BLK_MQ_REQ_NOWAIT, NVME_QID_ANY);
req = nvme_alloc_request(q, &cmd, BLK_MQ_REQ_NOWAIT);
if (IS_ERR(req))
return PTR_ERR(req);
req->timeout = ADMIN_TIMEOUT;
req->end_io_data = nvmeq;
init_completion(&nvmeq->delete_done);
@ -2254,7 +2249,7 @@ static bool __nvme_disable_io_queues(struct nvme_dev *dev, u8 opcode)
unsigned long timeout;
retry:
timeout = ADMIN_TIMEOUT;
timeout = NVME_ADMIN_TIMEOUT;
while (nr_queues > 0) {
if (nvme_delete_queue(&dev->queues[nr_queues], opcode))
break;


@ -797,7 +797,7 @@ static struct blk_mq_tag_set *nvme_rdma_alloc_tagset(struct nvme_ctrl *nctrl,
NVME_RDMA_DATA_SGL_SIZE;
set->driver_data = ctrl;
set->nr_hw_queues = 1;
set->timeout = ADMIN_TIMEOUT;
set->timeout = NVME_ADMIN_TIMEOUT;
set->flags = BLK_MQ_F_NO_SCHED;
} else {
set = &ctrl->tag_set;


@ -1568,7 +1568,7 @@ static struct blk_mq_tag_set *nvme_tcp_alloc_tagset(struct nvme_ctrl *nctrl,
set->cmd_size = sizeof(struct nvme_tcp_request);
set->driver_data = ctrl;
set->nr_hw_queues = 1;
set->timeout = ADMIN_TIMEOUT;
set->timeout = NVME_ADMIN_TIMEOUT;
} else {
set = &ctrl->tag_set;
memset(set, 0, sizeof(*set));


@ -55,12 +55,17 @@ int nvme_update_zone_info(struct nvme_ns *ns, unsigned lbaf)
int status;
/* Driver requires zone append support */
if (!(le32_to_cpu(log->iocs[nvme_cmd_zone_append]) &
if ((le32_to_cpu(log->iocs[nvme_cmd_zone_append]) &
NVME_CMD_EFFECTS_CSUPP)) {
if (test_and_clear_bit(NVME_NS_FORCE_RO, &ns->flags))
dev_warn(ns->ctrl->device,
"Zone Append supported for zoned namespace:%d. Remove read-only mode\n",
ns->head->ns_id);
} else {
set_bit(NVME_NS_FORCE_RO, &ns->flags);
dev_warn(ns->ctrl->device,
"append not supported for zoned namespace:%d\n",
ns->head->ns_id);
return -EINVAL;
"Zone Append not supported for zoned namespace:%d. Forcing to read-only mode\n",
ns->head->ns_id);
}
/* Lazily query controller append limit for the first zoned namespace */
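
Together with the NVME_NS_FORCE_RO handling added to nvme_update_disk_info() further up, the change above stops rejecting zoned namespaces that lack Zone Append support and surfaces them as read-only block devices instead (and clears the forced read-only state again if a later scan finds the command supported). A small sketch of checking that state from userspace with the standard BLKROGET ioctl; the device name is a placeholder.

    #include <fcntl.h>
    #include <linux/fs.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        /* Pass the namespace block device, e.g. /dev/nvme0n1 (placeholder). */
        const char *dev = argc > 1 ? argv[1] : "/dev/nvme0n1";
        int ro = 0;
        int fd = open(dev, O_RDONLY);

        if (fd < 0) {
            perror(dev);
            return 1;
        }
        if (ioctl(fd, BLKROGET, &ro) == 0)
            printf("%s is %s\n", dev, ro ? "read-only" : "read-write");
        else
            perror("BLKROGET");
        close(fd);
        return 0;
    }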


@ -24,7 +24,7 @@ config NVME_TARGET_PASSTHRU
This enables target side NVMe passthru controller support for the
NVMe Over Fabrics protocol. It allows for hosts to manage and
directly access an actual NVMe controller residing on the target
side, incuding executing Vendor Unique Commands.
side, including executing Vendor Unique Commands.
If unsure, say N.


@ -736,9 +736,49 @@ static ssize_t nvmet_passthru_enable_store(struct config_item *item,
}
CONFIGFS_ATTR(nvmet_passthru_, enable);
static ssize_t nvmet_passthru_admin_timeout_show(struct config_item *item,
char *page)
{
return sprintf(page, "%u\n", to_subsys(item->ci_parent)->admin_timeout);
}
static ssize_t nvmet_passthru_admin_timeout_store(struct config_item *item,
const char *page, size_t count)
{
struct nvmet_subsys *subsys = to_subsys(item->ci_parent);
unsigned int timeout;
if (kstrtouint(page, 0, &timeout))
return -EINVAL;
subsys->admin_timeout = timeout;
return count;
}
CONFIGFS_ATTR(nvmet_passthru_, admin_timeout);
static ssize_t nvmet_passthru_io_timeout_show(struct config_item *item,
char *page)
{
return sprintf(page, "%u\n", to_subsys(item->ci_parent)->io_timeout);
}
static ssize_t nvmet_passthru_io_timeout_store(struct config_item *item,
const char *page, size_t count)
{
struct nvmet_subsys *subsys = to_subsys(item->ci_parent);
unsigned int timeout;
if (kstrtouint(page, 0, &timeout))
return -EINVAL;
subsys->io_timeout = timeout;
return count;
}
CONFIGFS_ATTR(nvmet_passthru_, io_timeout);
static struct configfs_attribute *nvmet_passthru_attrs[] = {
&nvmet_passthru_attr_device_path,
&nvmet_passthru_attr_enable,
&nvmet_passthru_attr_admin_timeout,
&nvmet_passthru_attr_io_timeout,
NULL,
};
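
The new admin_timeout and io_timeout attributes let a target administrator bound how long passthru admin and I/O commands may stay outstanding on the backing controller; the values are copied into rq->timeout in the passthru.c hunk further down, and 0 keeps the default. A minimal sketch of setting them, assuming configfs is mounted at /sys/kernel/config and using a placeholder subsystem name; the numeric values are arbitrary.

    #include <stdio.h>

    static int write_attr(const char *path, const char *val)
    {
        FILE *f = fopen(path, "w");

        if (!f) {
            perror(path);
            return -1;
        }
        fputs(val, f);
        return fclose(f);
    }

    int main(void)
    {
        /* Placeholder subsystem name; adjust for the local nvmet setup. */
        const char *base =
            "/sys/kernel/config/nvmet/subsystems/testnqn/passthru";
        char path[256];

        snprintf(path, sizeof(path), "%s/admin_timeout", base);
        write_attr(path, "60");
        snprintf(path, sizeof(path), "%s/io_timeout", base);
        write_attr(path, "30");
        return 0;
    }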


@ -757,8 +757,6 @@ void nvmet_cq_setup(struct nvmet_ctrl *ctrl, struct nvmet_cq *cq,
{
cq->qid = qid;
cq->size = size;
ctrl->cqs[qid] = cq;
}
void nvmet_sq_setup(struct nvmet_ctrl *ctrl, struct nvmet_sq *sq,
@ -1344,20 +1342,14 @@ u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
if (!ctrl->changed_ns_list)
goto out_free_ctrl;
ctrl->cqs = kcalloc(subsys->max_qid + 1,
sizeof(struct nvmet_cq *),
GFP_KERNEL);
if (!ctrl->cqs)
goto out_free_changed_ns_list;
ctrl->sqs = kcalloc(subsys->max_qid + 1,
sizeof(struct nvmet_sq *),
GFP_KERNEL);
if (!ctrl->sqs)
goto out_free_cqs;
goto out_free_changed_ns_list;
if (subsys->cntlid_min > subsys->cntlid_max)
goto out_free_cqs;
goto out_free_changed_ns_list;
ret = ida_simple_get(&cntlid_ida,
subsys->cntlid_min, subsys->cntlid_max,
@ -1395,8 +1387,6 @@ u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
out_free_sqs:
kfree(ctrl->sqs);
out_free_cqs:
kfree(ctrl->cqs);
out_free_changed_ns_list:
kfree(ctrl->changed_ns_list);
out_free_ctrl:
@ -1426,7 +1416,6 @@ static void nvmet_ctrl_free(struct kref *ref)
nvmet_async_events_free(ctrl);
kfree(ctrl->sqs);
kfree(ctrl->cqs);
kfree(ctrl->changed_ns_list);
kfree(ctrl);


@ -69,6 +69,7 @@ void nvmet_subsys_disc_changed(struct nvmet_subsys *subsys,
struct nvmet_port *port;
struct nvmet_subsys_link *s;
lockdep_assert_held(&nvmet_config_sem);
nvmet_genctr++;
list_for_each_entry(port, nvmet_ports, global_entry)


@ -564,6 +564,50 @@ fcloop_call_host_done(struct nvmefc_fcp_req *fcpreq,
fcloop_tfcp_req_put(tfcp_req);
}
static bool drop_fabric_opcode;
#define DROP_OPCODE_MASK 0x00FF
/* fabrics opcode will have a bit set above 1st byte */
static int drop_opcode = -1;
static int drop_instance;
static int drop_amount;
static int drop_current_cnt;
/*
* Routine to parse io and determine if the io is to be dropped.
* Returns:
* 0 if io is not obstructed
* 1 if io was dropped
*/
static int check_for_drop(struct fcloop_fcpreq *tfcp_req)
{
struct nvmefc_fcp_req *fcpreq = tfcp_req->fcpreq;
struct nvme_fc_cmd_iu *cmdiu = fcpreq->cmdaddr;
struct nvme_command *sqe = &cmdiu->sqe;
if (drop_opcode == -1)
return 0;
pr_info("%s: seq opcd x%02x fctype x%02x: drop F %s op x%02x "
"inst %d start %d amt %d\n",
__func__, sqe->common.opcode, sqe->fabrics.fctype,
drop_fabric_opcode ? "y" : "n",
drop_opcode, drop_current_cnt, drop_instance, drop_amount);
if ((drop_fabric_opcode &&
(sqe->common.opcode != nvme_fabrics_command ||
sqe->fabrics.fctype != drop_opcode)) ||
(!drop_fabric_opcode && sqe->common.opcode != drop_opcode))
return 0;
if (++drop_current_cnt >= drop_instance) {
if (drop_current_cnt >= drop_instance + drop_amount)
drop_opcode = -1;
return 1;
}
return 0;
}
static void
fcloop_fcp_recv_work(struct work_struct *work)
{
@ -590,10 +634,14 @@ fcloop_fcp_recv_work(struct work_struct *work)
if (unlikely(aborted))
ret = -ECANCELED;
else
ret = nvmet_fc_rcv_fcp_req(tfcp_req->tport->targetport,
else {
if (likely(!check_for_drop(tfcp_req)))
ret = nvmet_fc_rcv_fcp_req(tfcp_req->tport->targetport,
&tfcp_req->tgt_fcp_req,
fcpreq->cmdaddr, fcpreq->cmdlen);
else
pr_info("%s: dropped command ********\n", __func__);
}
if (ret)
fcloop_call_host_done(fcpreq, tfcp_req, ret);
@ -1449,6 +1497,33 @@ fcloop_delete_target_port(struct device *dev, struct device_attribute *attr,
return ret ? ret : count;
}
static ssize_t
fcloop_set_cmd_drop(struct device *dev, struct device_attribute *attr,
const char *buf, size_t count)
{
int opcode, starting, amount;
if (sscanf(buf, "%x:%d:%d", &opcode, &starting, &amount) != 3)
return -EBADRQC;
drop_current_cnt = 0;
drop_fabric_opcode = (opcode & ~DROP_OPCODE_MASK) ? true : false;
drop_opcode = (opcode & DROP_OPCODE_MASK);
drop_instance = starting;
/* the check to drop routine uses instance + count to know when
* to end. Thus, if dropping 1 instance, count should be 0.
* so subtract 1 from the count.
*/
drop_amount = amount - 1;
pr_info("%s: DROP: Starting at instance %d of%s opcode x%x drop +%d "
"instances\n",
__func__, drop_instance, drop_fabric_opcode ? " fabric" : "",
drop_opcode, drop_amount);
return count;
}
static DEVICE_ATTR(add_local_port, 0200, NULL, fcloop_create_local_port);
static DEVICE_ATTR(del_local_port, 0200, NULL, fcloop_delete_local_port);
@ -1456,6 +1531,7 @@ static DEVICE_ATTR(add_remote_port, 0200, NULL, fcloop_create_remote_port);
static DEVICE_ATTR(del_remote_port, 0200, NULL, fcloop_delete_remote_port);
static DEVICE_ATTR(add_target_port, 0200, NULL, fcloop_create_target_port);
static DEVICE_ATTR(del_target_port, 0200, NULL, fcloop_delete_target_port);
static DEVICE_ATTR(set_cmd_drop, 0200, NULL, fcloop_set_cmd_drop);
static struct attribute *fcloop_dev_attrs[] = {
&dev_attr_add_local_port.attr,
@ -1464,6 +1540,7 @@ static struct attribute *fcloop_dev_attrs[] = {
&dev_attr_del_remote_port.attr,
&dev_attr_add_target_port.attr,
&dev_attr_del_target_port.attr,
&dev_attr_set_cmd_drop.attr,
NULL
};
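
The new set_cmd_drop attribute arms fcloop's error injection: the string is parsed as opcode:starting:amount with sscanf("%x:%d:%d"), an opcode value with bits set above the low byte selects a fabrics command type instead of an NVMe opcode, and drop_amount is stored as amount - 1, so writing 1:3:2 drops exactly two Write commands, the 3rd and 4th matching ones. A usage sketch follows; the sysfs path assumes the fcloop control device is registered as "ctl" under the fcloop class, so adjust it to the local setup.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /*
         * Drop the 3rd and 4th occurrence of NVMe Write (opcode 0x1):
         * opcode=1, starting instance=3, amount=2.
         * The attribute location is an assumption, not taken from the patch.
         */
        const char *cmd = "1:3:2";
        int fd = open("/sys/class/fcloop/ctl/set_cmd_drop", O_WRONLY);

        if (fd < 0) {
            perror("open set_cmd_drop");
            return 1;
        }
        if (write(fd, cmd, strlen(cmd)) < 0)
            perror("write");
        close(fd);
        return 0;
    }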


@ -355,7 +355,7 @@ static int nvme_loop_configure_admin_queue(struct nvme_loop_ctrl *ctrl)
NVME_INLINE_SG_CNT * sizeof(struct scatterlist);
ctrl->admin_tag_set.driver_data = ctrl;
ctrl->admin_tag_set.nr_hw_queues = 1;
ctrl->admin_tag_set.timeout = ADMIN_TIMEOUT;
ctrl->admin_tag_set.timeout = NVME_ADMIN_TIMEOUT;
ctrl->admin_tag_set.flags = BLK_MQ_F_NO_SCHED;
ctrl->queues[0].ctrl = ctrl;


@ -164,7 +164,6 @@ static inline struct nvmet_port *ana_groups_to_port(
struct nvmet_ctrl {
struct nvmet_subsys *subsys;
struct nvmet_cq **cqs;
struct nvmet_sq **sqs;
bool cmd_seen;
@ -249,6 +248,8 @@ struct nvmet_subsys {
struct nvme_ctrl *passthru_ctrl;
char *passthru_ctrl_path;
struct config_group passthru_group;
unsigned int admin_timeout;
unsigned int io_timeout;
#endif /* CONFIG_NVME_TARGET_PASSTHRU */
};
@ -330,6 +331,7 @@ struct nvmet_req {
struct work_struct work;
} f;
struct {
struct bio inline_bio;
struct request *rq;
struct work_struct work;
bool use_workqueue;


@ -188,35 +188,31 @@ static void nvmet_passthru_req_done(struct request *rq,
static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
{
struct scatterlist *sg;
int op_flags = 0;
struct bio *bio;
int i, ret;
int i;
if (req->sg_cnt > BIO_MAX_PAGES)
return -EINVAL;
if (req->cmd->common.opcode == nvme_cmd_flush)
op_flags = REQ_FUA;
else if (nvme_is_write(req->cmd))
op_flags = REQ_SYNC | REQ_IDLE;
bio = bio_alloc(GFP_KERNEL, req->sg_cnt);
bio->bi_end_io = bio_put;
bio->bi_opf = req_op(rq) | op_flags;
if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
bio = &req->p.inline_bio;
bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
} else {
bio = bio_alloc(GFP_KERNEL, min(req->sg_cnt, BIO_MAX_PAGES));
bio->bi_end_io = bio_put;
}
bio->bi_opf = req_op(rq);
for_each_sg(req->sg, sg, req->sg_cnt, i) {
if (bio_add_pc_page(rq->q, bio, sg_page(sg), sg->length,
sg->offset) < sg->length) {
bio_put(bio);
if (bio != &req->p.inline_bio)
bio_put(bio);
return -EINVAL;
}
}
ret = blk_rq_append_bio(rq, &bio);
if (unlikely(ret)) {
bio_put(bio);
return ret;
}
blk_rq_bio_prep(rq, bio, req->sg_cnt);
return 0;
}
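
The nvmet_passthru_map_sg() rework above skips a per-command bio allocation for small transfers by reusing the bio embedded in the request and only calling bio_alloc() for payloads above NVMET_MAX_INLINE_DATA_LEN, with the error path careful not to bio_put() the embedded bio. The same idea, reduced to a standalone illustration with invented names and sizes:

    #include <stdio.h>
    #include <stdlib.h>

    #define INLINE_LEN 64   /* stand-in for NVMET_MAX_INLINE_DATA_LEN */

    struct fake_req {
        char inline_buf[INLINE_LEN];
        char *buf;          /* points at inline_buf or a heap allocation */
        size_t len;
    };

    static int req_prep(struct fake_req *req, size_t len)
    {
        req->len = len;
        if (len <= INLINE_LEN) {
            req->buf = req->inline_buf;   /* fast path: no allocation */
            return 0;
        }
        req->buf = malloc(len);
        return req->buf ? 0 : -1;
    }

    static void req_finish(struct fake_req *req)
    {
        if (req->buf != req->inline_buf)  /* mirrors the bio_put() guard */
            free(req->buf);
    }

    int main(void)
    {
        struct fake_req small, big;

        if (req_prep(&small, 16) == 0)
            printf("16 bytes: embedded buffer, no allocation\n");
        if (req_prep(&big, 4096) == 0)
            printf("4096 bytes: heap buffer\n");
        req_finish(&big);
        req_finish(&small);
        return 0;
    }
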
@ -227,6 +223,7 @@ static void nvmet_passthru_execute_cmd(struct nvmet_req *req)
struct request_queue *q = ctrl->admin_q;
struct nvme_ns *ns = NULL;
struct request *rq = NULL;
unsigned int timeout;
u32 effects;
u16 status;
int ret;
@ -242,14 +239,20 @@ static void nvmet_passthru_execute_cmd(struct nvmet_req *req)
}
q = ns->queue;
timeout = req->sq->ctrl->subsys->io_timeout;
} else {
timeout = req->sq->ctrl->subsys->admin_timeout;
}
rq = nvme_alloc_request(q, req->cmd, 0, NVME_QID_ANY);
rq = nvme_alloc_request(q, req->cmd, 0);
if (IS_ERR(rq)) {
status = NVME_SC_INTERNAL;
goto out_put_ns;
}
if (timeout)
rq->timeout = timeout;
if (req->sg_cnt) {
ret = nvmet_passthru_map_sg(req, rq);
if (unlikely(ret)) {


@ -2084,19 +2084,24 @@ static void __dasd_device_start_head(struct dasd_device *device)
static void __dasd_device_check_path_events(struct dasd_device *device)
{
__u8 tbvpm, fcsecpm;
int rc;
if (!dasd_path_get_tbvpm(device))
tbvpm = dasd_path_get_tbvpm(device);
fcsecpm = dasd_path_get_fcsecpm(device);
if (!tbvpm && !fcsecpm)
return;
if (device->stopped & ~(DASD_STOPPED_DC_WAIT))
return;
rc = device->discipline->verify_path(device,
dasd_path_get_tbvpm(device));
if (rc)
rc = device->discipline->pe_handler(device, tbvpm, fcsecpm);
if (rc) {
dasd_device_set_timer(device, 50);
else
} else {
dasd_path_clear_all_verify(device);
dasd_path_clear_all_fcsec(device);
}
};
/*
@ -3448,8 +3453,7 @@ static void dasd_generic_auto_online(void *data, async_cookie_t cookie)
* Initial attempt at a probe function. this can be simplified once
* the other detection code is gone.
*/
int dasd_generic_probe(struct ccw_device *cdev,
struct dasd_discipline *discipline)
int dasd_generic_probe(struct ccw_device *cdev)
{
int ret;
@ -3847,6 +3851,10 @@ void dasd_generic_path_event(struct ccw_device *cdev, int *path_event)
if (device->discipline->kick_validate)
device->discipline->kick_validate(device);
}
if (path_event[chp] & PE_PATH_FCES_EVENT) {
dasd_path_fcsec_update(device, chp);
dasd_schedule_device_bh(device);
}
}
hpfpm = dasd_path_get_hpfpm(device);
ifccpm = dasd_path_get_ifccpm(device);


@ -576,6 +576,11 @@ dasd_create_device(struct ccw_device *cdev)
dev_set_drvdata(&cdev->dev, device);
spin_unlock_irqrestore(get_ccwdev_lock(cdev), flags);
device->paths_info = kset_create_and_add("paths_info", NULL,
&device->cdev->dev.kobj);
if (!device->paths_info)
dev_warn(&cdev->dev, "Could not create paths_info kset\n");
return device;
}
@ -622,6 +627,9 @@ dasd_delete_device(struct dasd_device *device)
wait_event(dasd_delete_wq, atomic_read(&device->ref_count) == 0);
dasd_generic_free_discipline(device);
kset_unregister(device->paths_info);
/* Disconnect dasd_device structure from ccw_device structure. */
cdev = device->cdev;
device->cdev = NULL;
@ -1641,6 +1649,39 @@ dasd_path_interval_store(struct device *dev, struct device_attribute *attr,
static DEVICE_ATTR(path_interval, 0644, dasd_path_interval_show,
dasd_path_interval_store);
static ssize_t
dasd_device_fcs_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct dasd_device *device;
int fc_sec;
int rc;
device = dasd_device_from_cdev(to_ccwdev(dev));
if (IS_ERR(device))
return -ENODEV;
fc_sec = dasd_path_get_fcs_device(device);
if (fc_sec == -EINVAL)
rc = snprintf(buf, PAGE_SIZE, "Inconsistent\n");
else
rc = snprintf(buf, PAGE_SIZE, "%s\n", dasd_path_get_fcs_str(fc_sec));
dasd_put_device(device);
return rc;
}
static DEVICE_ATTR(fc_security, 0444, dasd_device_fcs_show, NULL);
static ssize_t
dasd_path_fcs_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
{
struct dasd_path *path = to_dasd_path(kobj);
unsigned int fc_sec = path->fc_security;
return snprintf(buf, PAGE_SIZE, "%s\n", dasd_path_get_fcs_str(fc_sec));
}
static struct kobj_attribute path_fcs_attribute =
__ATTR(fc_security, 0444, dasd_path_fcs_show, NULL);
#define DASD_DEFINE_ATTR(_name, _func) \
static ssize_t dasd_##_name##_show(struct device *dev, \
@ -1697,6 +1738,7 @@ static struct attribute * dasd_attrs[] = {
&dev_attr_path_reset.attr,
&dev_attr_hpf.attr,
&dev_attr_ese.attr,
&dev_attr_fc_security.attr,
NULL,
};
@ -1777,6 +1819,73 @@ dasd_set_feature(struct ccw_device *cdev, int feature, int flag)
}
EXPORT_SYMBOL(dasd_set_feature);
static struct attribute *paths_info_attrs[] = {
&path_fcs_attribute.attr,
NULL,
};
static struct kobj_type path_attr_type = {
.release = dasd_path_release,
.default_attrs = paths_info_attrs,
.sysfs_ops = &kobj_sysfs_ops,
};
static void dasd_path_init_kobj(struct dasd_device *device, int chp)
{
device->path[chp].kobj.kset = device->paths_info;
kobject_init(&device->path[chp].kobj, &path_attr_type);
}
void dasd_path_create_kobj(struct dasd_device *device, int chp)
{
int rc;
if (test_bit(DASD_FLAG_OFFLINE, &device->flags))
return;
if (!device->paths_info) {
dev_warn(&device->cdev->dev, "Unable to create paths objects\n");
return;
}
if (device->path[chp].in_sysfs)
return;
if (!device->path[chp].conf_data)
return;
dasd_path_init_kobj(device, chp);
rc = kobject_add(&device->path[chp].kobj, NULL, "%x.%02x",
device->path[chp].cssid, device->path[chp].chpid);
if (rc)
kobject_put(&device->path[chp].kobj);
device->path[chp].in_sysfs = true;
}
EXPORT_SYMBOL(dasd_path_create_kobj);
void dasd_path_create_kobjects(struct dasd_device *device)
{
u8 lpm, opm;
opm = dasd_path_get_opm(device);
for (lpm = 0x80; lpm; lpm >>= 1) {
if (!(lpm & opm))
continue;
dasd_path_create_kobj(device, pathmask_to_pos(lpm));
}
}
EXPORT_SYMBOL(dasd_path_create_kobjects);
/*
* As we keep kobjects for the lifetime of a device, this function must not be
* called anywhere but in the context of offlining a device.
*/
void dasd_path_remove_kobj(struct dasd_device *device, int chp)
{
if (device->path[chp].in_sysfs) {
kobject_put(&device->path[chp].kobj);
device->path[chp].in_sysfs = false;
}
}
EXPORT_SYMBOL(dasd_path_remove_kobj);
int dasd_add_sysfs_files(struct ccw_device *cdev)
{
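
The attributes added above expose the FC Endpoint Security state twice: a device-level fc_security file on the DASD ccw device, which reports "Inconsistent" when the operational paths disagree, and a per-path fc_security file inside the new paths_info kset, whose entries are named cssid.chpid. A read sketch; the bus ID 0.0.4711 and the path directory 0.10 are placeholders for a real configuration.

    #include <stdio.h>

    /* Print one sysfs attribute. */
    static void show(const char *path)
    {
        char buf[64];
        FILE *f = fopen(path, "r");

        if (!f) {
            perror(path);
            return;
        }
        if (fgets(buf, sizeof(buf), f))
            printf("%s: %s", path, buf);
        fclose(f);
    }

    int main(void)
    {
        /* Placeholder bus ID and channel-path directory. */
        show("/sys/bus/ccw/devices/0.0.4711/fc_security");
        show("/sys/bus/ccw/devices/0.0.4711/paths_info/0.10/fc_security");
        return 0;
    }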


@ -103,7 +103,7 @@ struct ext_pool_exhaust_work_data {
};
/* definitions for the path verification worker */
struct path_verification_work_data {
struct pe_handler_work_data {
struct work_struct worker;
struct dasd_device *device;
struct dasd_ccw_req cqr;
@ -111,9 +111,10 @@ struct path_verification_work_data {
__u8 rcd_buffer[DASD_ECKD_RCD_DATA_SIZE];
int isglobal;
__u8 tbvpm;
__u8 fcsecpm;
};
static struct path_verification_work_data *path_verification_worker;
static DEFINE_MUTEX(dasd_path_verification_mutex);
static struct pe_handler_work_data *pe_handler_worker;
static DEFINE_MUTEX(dasd_pe_handler_mutex);
struct check_attention_work_data {
struct work_struct worker;
@ -143,7 +144,7 @@ dasd_eckd_probe (struct ccw_device *cdev)
"ccw-device options");
return ret;
}
ret = dasd_generic_probe(cdev, &dasd_eckd_discipline);
ret = dasd_generic_probe(cdev);
return ret;
}
@ -1000,6 +1001,27 @@ static unsigned char dasd_eckd_path_access(void *conf_data, int conf_len)
return 0;
}
static void dasd_eckd_store_conf_data(struct dasd_device *device,
struct dasd_conf_data *conf_data, int chp)
{
struct channel_path_desc_fmt0 *chp_desc;
struct subchannel_id sch_id;
ccw_device_get_schid(device->cdev, &sch_id);
/*
* path handling and read_conf allocate data
* free it before replacing the pointer
*/
kfree(device->path[chp].conf_data);
device->path[chp].conf_data = conf_data;
device->path[chp].cssid = sch_id.cssid;
device->path[chp].ssid = sch_id.ssid;
chp_desc = ccw_device_get_chp_desc(device->cdev, chp);
if (chp_desc)
device->path[chp].chpid = chp_desc->chpid;
kfree(chp_desc);
}
static void dasd_eckd_clear_conf_data(struct dasd_device *device)
{
struct dasd_eckd_private *private = device->private;
@ -1013,9 +1035,33 @@ static void dasd_eckd_clear_conf_data(struct dasd_device *device)
device->path[i].cssid = 0;
device->path[i].ssid = 0;
device->path[i].chpid = 0;
dasd_path_notoper(device, i);
dasd_path_remove_kobj(device, i);
}
}
static void dasd_eckd_read_fc_security(struct dasd_device *device)
{
struct dasd_eckd_private *private = device->private;
u8 esm_valid;
u8 esm[8];
int chp;
int rc;
rc = chsc_scud(private->uid.ssid, (u64 *)esm, &esm_valid);
if (rc) {
for (chp = 0; chp < 8; chp++)
device->path[chp].fc_security = 0;
return;
}
for (chp = 0; chp < 8; chp++) {
if (esm_valid & (0x80 >> chp))
device->path[chp].fc_security = esm[chp];
else
device->path[chp].fc_security = 0;
}
}
static int dasd_eckd_read_conf(struct dasd_device *device)
{
@ -1026,12 +1072,9 @@ static int dasd_eckd_read_conf(struct dasd_device *device)
struct dasd_eckd_private *private, path_private;
struct dasd_uid *uid;
char print_path_uid[60], print_device_uid[60];
struct channel_path_desc_fmt0 *chp_desc;
struct subchannel_id sch_id;
private = device->private;
opm = ccw_device_get_path_mask(device->cdev);
ccw_device_get_schid(device->cdev, &sch_id);
conf_data_saved = 0;
path_err = 0;
/* get configuration data per operational path */
@ -1066,15 +1109,6 @@ static int dasd_eckd_read_conf(struct dasd_device *device)
kfree(conf_data);
continue;
}
pos = pathmask_to_pos(lpm);
/* store per path conf_data */
device->path[pos].conf_data = conf_data;
device->path[pos].cssid = sch_id.cssid;
device->path[pos].ssid = sch_id.ssid;
chp_desc = ccw_device_get_chp_desc(device->cdev, pos);
if (chp_desc)
device->path[pos].chpid = chp_desc->chpid;
kfree(chp_desc);
/*
* build device UID that other path data
* can be compared to it
@ -1132,18 +1166,13 @@ static int dasd_eckd_read_conf(struct dasd_device *device)
dasd_path_add_cablepm(device, lpm);
continue;
}
pos = pathmask_to_pos(lpm);
/* store per path conf_data */
device->path[pos].conf_data = conf_data;
device->path[pos].cssid = sch_id.cssid;
device->path[pos].ssid = sch_id.ssid;
chp_desc = ccw_device_get_chp_desc(device->cdev, pos);
if (chp_desc)
device->path[pos].chpid = chp_desc->chpid;
kfree(chp_desc);
path_private.conf_data = NULL;
path_private.conf_len = 0;
}
pos = pathmask_to_pos(lpm);
dasd_eckd_store_conf_data(device, conf_data, pos);
switch (dasd_eckd_path_access(conf_data, conf_len)) {
case 0x02:
dasd_path_add_nppm(device, lpm);
@ -1160,6 +1189,8 @@ static int dasd_eckd_read_conf(struct dasd_device *device)
}
}
dasd_eckd_read_fc_security(device);
return path_err;
}
@ -1219,7 +1250,7 @@ static int verify_fcx_max_data(struct dasd_device *device, __u8 lpm)
}
static int rebuild_device_uid(struct dasd_device *device,
struct path_verification_work_data *data)
struct pe_handler_work_data *data)
{
struct dasd_eckd_private *private = device->private;
__u8 lpm, opm = dasd_path_get_opm(device);
@ -1257,31 +1288,18 @@ static int rebuild_device_uid(struct dasd_device *device,
return rc;
}
static void do_path_verification_work(struct work_struct *work)
static void dasd_eckd_path_available_action(struct dasd_device *device,
struct pe_handler_work_data *data)
{
struct path_verification_work_data *data;
struct dasd_device *device;
struct dasd_eckd_private path_private;
struct dasd_uid *uid;
__u8 path_rcd_buf[DASD_ECKD_RCD_DATA_SIZE];
__u8 lpm, opm, npm, ppm, epm, hpfpm, cablepm;
struct dasd_conf_data *conf_data;
unsigned long flags;
char print_uid[60];
int rc;
int rc, pos;
data = container_of(work, struct path_verification_work_data, worker);
device = data->device;
/* delay path verification until device was resumed */
if (test_bit(DASD_FLAG_SUSPENDED, &device->flags)) {
schedule_work(work);
return;
}
/* check if path verification already running and delay if so */
if (test_and_set_bit(DASD_FLAG_PATH_VERIFY, &device->flags)) {
schedule_work(work);
return;
}
opm = 0;
npm = 0;
ppm = 0;
@ -1397,6 +1415,14 @@ static void do_path_verification_work(struct work_struct *work)
}
}
conf_data = kzalloc(DASD_ECKD_RCD_DATA_SIZE, GFP_KERNEL);
if (conf_data) {
memcpy(conf_data, data->rcd_buffer,
DASD_ECKD_RCD_DATA_SIZE);
}
pos = pathmask_to_pos(lpm);
dasd_eckd_store_conf_data(device, conf_data, pos);
/*
* There is a small chance that a path is lost again between
* above path verification and the following modification of
@ -1417,34 +1443,65 @@ static void do_path_verification_work(struct work_struct *work)
dasd_path_add_cablepm(device, cablepm);
dasd_path_add_nohpfpm(device, hpfpm);
spin_unlock_irqrestore(get_ccwdev_lock(device->cdev), flags);
dasd_path_create_kobj(device, pos);
}
}
static void do_pe_handler_work(struct work_struct *work)
{
struct pe_handler_work_data *data;
struct dasd_device *device;
data = container_of(work, struct pe_handler_work_data, worker);
device = data->device;
/* delay path verification until device was resumed */
if (test_bit(DASD_FLAG_SUSPENDED, &device->flags)) {
schedule_work(work);
return;
}
/* check if path verification already running and delay if so */
if (test_and_set_bit(DASD_FLAG_PATH_VERIFY, &device->flags)) {
schedule_work(work);
return;
}
if (data->tbvpm)
dasd_eckd_path_available_action(device, data);
if (data->fcsecpm)
dasd_eckd_read_fc_security(device);
clear_bit(DASD_FLAG_PATH_VERIFY, &device->flags);
dasd_put_device(device);
if (data->isglobal)
mutex_unlock(&dasd_path_verification_mutex);
mutex_unlock(&dasd_pe_handler_mutex);
else
kfree(data);
}
static int dasd_eckd_verify_path(struct dasd_device *device, __u8 lpm)
static int dasd_eckd_pe_handler(struct dasd_device *device,
__u8 tbvpm, __u8 fcsecpm)
{
struct path_verification_work_data *data;
struct pe_handler_work_data *data;
data = kmalloc(sizeof(*data), GFP_ATOMIC | GFP_DMA);
if (!data) {
if (mutex_trylock(&dasd_path_verification_mutex)) {
data = path_verification_worker;
if (mutex_trylock(&dasd_pe_handler_mutex)) {
data = pe_handler_worker;
data->isglobal = 1;
} else
} else {
return -ENOMEM;
}
} else {
memset(data, 0, sizeof(*data));
data->isglobal = 0;
}
INIT_WORK(&data->worker, do_path_verification_work);
INIT_WORK(&data->worker, do_pe_handler_work);
dasd_get_device(device);
data->device = device;
data->tbvpm = lpm;
data->tbvpm = tbvpm;
data->fcsecpm = fcsecpm;
schedule_work(&data->worker);
return 0;
}
@ -2056,6 +2113,8 @@ dasd_eckd_check_characteristics(struct dasd_device *device)
if (rc)
goto out_err3;
dasd_path_create_kobjects(device);
/* Read Feature Codes */
dasd_eckd_read_features(device);
@ -6590,7 +6649,7 @@ static struct dasd_discipline dasd_eckd_discipline = {
.check_device = dasd_eckd_check_characteristics,
.uncheck_device = dasd_eckd_uncheck_device,
.do_analysis = dasd_eckd_do_analysis,
.verify_path = dasd_eckd_verify_path,
.pe_handler = dasd_eckd_pe_handler,
.basic_to_ready = dasd_eckd_basic_to_ready,
.online_to_ready = dasd_eckd_online_to_ready,
.basic_to_known = dasd_eckd_basic_to_known,
@ -6649,16 +6708,16 @@ dasd_eckd_init(void)
GFP_KERNEL | GFP_DMA);
if (!dasd_vol_info_req)
return -ENOMEM;
path_verification_worker = kmalloc(sizeof(*path_verification_worker),
GFP_KERNEL | GFP_DMA);
if (!path_verification_worker) {
pe_handler_worker = kmalloc(sizeof(*pe_handler_worker),
GFP_KERNEL | GFP_DMA);
if (!pe_handler_worker) {
kfree(dasd_reserve_req);
kfree(dasd_vol_info_req);
return -ENOMEM;
}
rawpadpage = (void *)__get_free_page(GFP_KERNEL);
if (!rawpadpage) {
kfree(path_verification_worker);
kfree(pe_handler_worker);
kfree(dasd_reserve_req);
kfree(dasd_vol_info_req);
return -ENOMEM;
@ -6667,7 +6726,7 @@ dasd_eckd_init(void)
if (!ret)
wait_for_device_probe();
else {
kfree(path_verification_worker);
kfree(pe_handler_worker);
kfree(dasd_reserve_req);
kfree(dasd_vol_info_req);
free_page((unsigned long)rawpadpage);
@ -6679,7 +6738,7 @@ static void __exit
dasd_eckd_cleanup(void)
{
ccw_driver_unregister(&dasd_eckd_driver);
kfree(path_verification_worker);
kfree(pe_handler_worker);
kfree(dasd_reserve_req);
free_page((unsigned long)rawpadpage);
}


@ -58,7 +58,7 @@ static struct ccw_driver dasd_fba_driver; /* see below */
static int
dasd_fba_probe(struct ccw_device *cdev)
{
return dasd_generic_probe(cdev, &dasd_fba_discipline);
return dasd_generic_probe(cdev);
}
static int


@ -298,6 +298,7 @@ struct dasd_discipline {
* configuration.
*/
int (*verify_path)(struct dasd_device *, __u8);
int (*pe_handler)(struct dasd_device *, __u8, __u8);
/*
* Last things to do when a device is set online, and first things
@ -418,10 +419,40 @@ extern struct dasd_discipline *dasd_diag_discipline_pointer;
#define DASD_PATH_NOHPF 6
#define DASD_PATH_CUIR 7
#define DASD_PATH_IFCC 8
#define DASD_PATH_FCSEC 9
#define DASD_THRHLD_MAX 4294967295U
#define DASD_INTERVAL_MAX 4294967295U
/* FC Endpoint Security Capabilities */
#define DASD_FC_SECURITY_UNSUP 0
#define DASD_FC_SECURITY_AUTH 1
#define DASD_FC_SECURITY_ENC_FCSP2 2
#define DASD_FC_SECURITY_ENC_ERAS 3
#define DASD_FC_SECURITY_ENC_STR "Encryption"
static const struct {
u8 value;
char *name;
} dasd_path_fcs_mnemonics[] = {
{ DASD_FC_SECURITY_UNSUP, "Unsupported" },
{ DASD_FC_SECURITY_AUTH, "Authentication" },
{ DASD_FC_SECURITY_ENC_FCSP2, DASD_FC_SECURITY_ENC_STR },
{ DASD_FC_SECURITY_ENC_ERAS, DASD_FC_SECURITY_ENC_STR },
};
static inline char *dasd_path_get_fcs_str(int val)
{
int i;
for (i = 0; i < ARRAY_SIZE(dasd_path_fcs_mnemonics); i++) {
if (dasd_path_fcs_mnemonics[i].value == val)
return dasd_path_fcs_mnemonics[i].name;
}
return dasd_path_fcs_mnemonics[0].name;
}
struct dasd_path {
unsigned long flags;
u8 cssid;
@ -430,8 +461,18 @@ struct dasd_path {
struct dasd_conf_data *conf_data;
atomic_t error_count;
unsigned long errorclk;
u8 fc_security;
struct kobject kobj;
bool in_sysfs;
};
#define to_dasd_path(path) container_of(path, struct dasd_path, kobj)
static inline void dasd_path_release(struct kobject *kobj)
{
/* Memory for the dasd_path kobject is freed when dasd_free_device() is called */
}
struct dasd_profile_info {
/* legacy part of profile data, as in dasd_profile_info_t */
@ -542,6 +583,7 @@ struct dasd_device {
struct dentry *hosts_dentry;
struct dasd_profile profile;
struct dasd_format_entry format_entry;
struct kset *paths_info;
};
struct dasd_block {
@ -766,7 +808,7 @@ void dasd_block_set_timer(struct dasd_block *, int);
void dasd_block_clear_timer(struct dasd_block *);
int dasd_cancel_req(struct dasd_ccw_req *);
int dasd_flush_device_queue(struct dasd_device *);
int dasd_generic_probe (struct ccw_device *, struct dasd_discipline *);
int dasd_generic_probe(struct ccw_device *);
void dasd_generic_free_discipline(struct dasd_device *);
void dasd_generic_remove (struct ccw_device *cdev);
int dasd_generic_set_online(struct ccw_device *, struct dasd_discipline *);
@ -814,6 +856,9 @@ int dasd_set_feature(struct ccw_device *, int, int);
int dasd_add_sysfs_files(struct ccw_device *);
void dasd_remove_sysfs_files(struct ccw_device *);
void dasd_path_create_kobj(struct dasd_device *, int);
void dasd_path_create_kobjects(struct dasd_device *);
void dasd_path_remove_kobj(struct dasd_device *, int);
struct dasd_device *dasd_device_from_cdev(struct ccw_device *);
struct dasd_device *dasd_device_from_cdev_locked(struct ccw_device *);
@ -912,6 +957,29 @@ static inline void dasd_path_clear_all_verify(struct dasd_device *device)
dasd_path_clear_verify(device, chp);
}
static inline void dasd_path_fcsec(struct dasd_device *device, int chp)
{
__set_bit(DASD_PATH_FCSEC, &device->path[chp].flags);
}
static inline void dasd_path_clear_fcsec(struct dasd_device *device, int chp)
{
__clear_bit(DASD_PATH_FCSEC, &device->path[chp].flags);
}
static inline int dasd_path_need_fcsec(struct dasd_device *device, int chp)
{
return test_bit(DASD_PATH_FCSEC, &device->path[chp].flags);
}
static inline void dasd_path_clear_all_fcsec(struct dasd_device *device)
{
int chp;
for (chp = 0; chp < 8; chp++)
dasd_path_clear_fcsec(device, chp);
}
static inline void dasd_path_operational(struct dasd_device *device, int chp)
{
__set_bit(DASD_PATH_OPERATIONAL, &device->path[chp].flags);
@ -1037,6 +1105,17 @@ static inline __u8 dasd_path_get_tbvpm(struct dasd_device *device)
return tbvpm;
}
static inline int dasd_path_get_fcsecpm(struct dasd_device *device)
{
int chp;
for (chp = 0; chp < 8; chp++)
if (dasd_path_need_fcsec(device, chp))
return 1;
return 0;
}
static inline __u8 dasd_path_get_nppm(struct dasd_device *device)
{
int chp;
@ -1104,6 +1183,31 @@ static inline __u8 dasd_path_get_hpfpm(struct dasd_device *device)
return hpfpm;
}
static inline u8 dasd_path_get_fcs_path(struct dasd_device *device, int chp)
{
return device->path[chp].fc_security;
}
static inline int dasd_path_get_fcs_device(struct dasd_device *device)
{
u8 fc_sec = 0;
int chp;
for (chp = 0; chp < 8; chp++) {
if (device->opm & (0x80 >> chp)) {
fc_sec = device->path[chp].fc_security;
break;
}
}
for (; chp < 8; chp++) {
if (device->opm & (0x80 >> chp))
if (device->path[chp].fc_security != fc_sec)
return -EINVAL;
}
return fc_sec;
}
/*
* add functions for path masks
* the existing path mask will be extended by the given path mask
@ -1269,6 +1373,11 @@ static inline void dasd_path_notoper(struct dasd_device *device, int chp)
dasd_path_clear_nonpreferred(device, chp);
}
static inline void dasd_path_fcsec_update(struct dasd_device *device, int chp)
{
dasd_path_fcsec(device, chp);
}
/*
* remove all paths from normal operation
*/


@ -384,6 +384,20 @@ static ssize_t chp_chid_external_show(struct device *dev,
}
static DEVICE_ATTR(chid_external, 0444, chp_chid_external_show, NULL);
static ssize_t chp_esc_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct channel_path *chp = to_channelpath(dev);
ssize_t rc;
mutex_lock(&chp->lock);
rc = sprintf(buf, "%x\n", chp->desc_fmt1.esc);
mutex_unlock(&chp->lock);
return rc;
}
static DEVICE_ATTR(esc, 0444, chp_esc_show, NULL);
static ssize_t util_string_read(struct file *filp, struct kobject *kobj,
struct bin_attribute *attr, char *buf,
loff_t off, size_t count)
@ -414,6 +428,7 @@ static struct attribute *chp_attrs[] = {
&dev_attr_shared.attr,
&dev_attr_chid.attr,
&dev_attr_chid_external.attr,
&dev_attr_esc.attr,
NULL,
};
static struct attribute_group chp_attr_group = {


@ -23,6 +23,7 @@
#define CHP_OFFLINE 1
#define CHP_VARY_ON 2
#define CHP_VARY_OFF 3
#define CHP_FCES_EVENT 4
struct chp_link {
struct chp_id chpid;


@ -37,6 +37,9 @@ static void *sei_page;
static void *chsc_page;
static DEFINE_SPINLOCK(chsc_page_lock);
#define SEI_VF_FLA 0xc0 /* VF flag for Full Link Address */
#define SEI_RS_CHPID 0x4 /* 4 in RS field indicates CHPID */
/**
* chsc_error_from_response() - convert a chsc response to an error
* @response: chsc response code
@ -287,6 +290,15 @@ static void s390_process_res_acc(struct chp_link *link)
css_schedule_reprobe();
}
static int process_fces_event(struct subchannel *sch, void *data)
{
spin_lock_irq(sch->lock);
if (sch->driver && sch->driver->chp_event)
sch->driver->chp_event(sch, data, CHP_FCES_EVENT);
spin_unlock_irq(sch->lock);
return 0;
}
struct chsc_sei_nt0_area {
u8 flags;
u8 vf; /* validity flags */
@ -364,6 +376,16 @@ static char *store_ebcdic(char *dest, const char *src, unsigned long len,
return dest + len;
}
static void chsc_link_from_sei(struct chp_link *link,
struct chsc_sei_nt0_area *sei_area)
{
if ((sei_area->vf & SEI_VF_FLA) != 0) {
link->fla = sei_area->fla;
link->fla_mask = ((sei_area->vf & SEI_VF_FLA) == SEI_VF_FLA) ?
0xffff : 0xff00;
}
}
/* Format node ID and parameters for output in LIR log message. */
static void format_node_data(char *params, char *id, struct node_descriptor *nd)
{
@ -453,15 +475,7 @@ static void chsc_process_sei_res_acc(struct chsc_sei_nt0_area *sei_area)
}
memset(&link, 0, sizeof(struct chp_link));
link.chpid = chpid;
if ((sei_area->vf & 0xc0) != 0) {
link.fla = sei_area->fla;
if ((sei_area->vf & 0xc0) == 0xc0)
/* full link address */
link.fla_mask = 0xffff;
else
/* link address */
link.fla_mask = 0xff00;
}
chsc_link_from_sei(&link, sei_area);
s390_process_res_acc(&link);
}
@ -570,6 +584,33 @@ static void chsc_process_sei_ap_cfg_chg(struct chsc_sei_nt0_area *sei_area)
ap_bus_cfg_chg();
}
static void chsc_process_sei_fces_event(struct chsc_sei_nt0_area *sei_area)
{
struct chp_link link;
struct chp_id chpid;
struct channel_path *chp;
CIO_CRW_EVENT(4,
"chsc: FCES status notification (rs=%02x, rs_id=%04x, FCES-status=%x)\n",
sei_area->rs, sei_area->rsid, sei_area->ccdf[0]);
if (sei_area->rs != SEI_RS_CHPID)
return;
chp_id_init(&chpid);
chpid.id = sei_area->rsid;
/* Ignore the event on unknown/invalid chp */
chp = chpid_to_chp(chpid);
if (!chp)
return;
memset(&link, 0, sizeof(struct chp_link));
link.chpid = chpid;
chsc_link_from_sei(&link, sei_area);
for_each_subchannel_staged(process_fces_event, NULL, &link);
}
static void chsc_process_sei_nt2(struct chsc_sei_nt2_area *sei_area)
{
switch (sei_area->cc) {
@ -611,6 +652,9 @@ static void chsc_process_sei_nt0(struct chsc_sei_nt0_area *sei_area)
case 14: /* scm available notification */
chsc_process_sei_scm_avail(sei_area);
break;
case 15: /* FCES event notification */
chsc_process_sei_fces_event(sei_area);
break;
default: /* other stuff */
CIO_CRW_EVENT(2, "chsc: sei nt0 unhandled cc=%d\n",
sei_area->cc);
@ -1428,3 +1472,86 @@ int chsc_sgib(u32 origin)
return ret;
}
EXPORT_SYMBOL_GPL(chsc_sgib);
#define SCUD_REQ_LEN 0x10 /* SCUD request block length */
#define SCUD_REQ_CMD 0x4b /* SCUD Command Code */
struct chse_cudb {
u16 flags:8;
u16 chp_valid:8;
u16 cu;
u32 esm_valid:8;
u32:24;
u8 chpid[8];
u32:32;
u32:32;
u8 esm[8];
u32 efla[8];
} __packed;
struct chsc_scud {
struct chsc_header request;
u16:4;
u16 fmt:4;
u16 cssid:8;
u16 first_cu;
u16:16;
u16 last_cu;
u32:32;
struct chsc_header response;
u16:4;
u16 fmt_resp:4;
u32:24;
struct chse_cudb cudb[];
} __packed;
/**
* chsc_scud() - Store control-unit description.
* @cu: number of the control-unit
* @esm: 8 1-byte endpoint security mode values
* @esm_valid: validity mask for @esm
*
* Interface to retrieve information about the endpoint security
* modes for up to 8 paths of a control unit.
*
* Returns 0 on success.
*/
int chsc_scud(u16 cu, u64 *esm, u8 *esm_valid)
{
struct chsc_scud *scud = chsc_page;
int ret;
spin_lock_irq(&chsc_page_lock);
memset(chsc_page, 0, PAGE_SIZE);
scud->request.length = SCUD_REQ_LEN;
scud->request.code = SCUD_REQ_CMD;
scud->fmt = 0;
scud->cssid = 0;
scud->first_cu = cu;
scud->last_cu = cu;
ret = chsc(scud);
if (!ret)
ret = chsc_error_from_response(scud->response.code);
if (!ret && (scud->response.length <= 8 || scud->fmt_resp != 0
|| !(scud->cudb[0].flags & 0x80)
|| scud->cudb[0].cu != cu)) {
CIO_MSG_EVENT(2, "chsc: scud failed rc=%04x, L2=%04x "
"FMT=%04x, cudb.flags=%02x, cudb.cu=%04x",
scud->response.code, scud->response.length,
scud->fmt_resp, scud->cudb[0].flags, scud->cudb[0].cu);
ret = -EINVAL;
}
if (ret)
goto out;
memcpy(esm, scud->cudb[0].esm, sizeof(*esm));
*esm_valid = scud->cudb[0].esm_valid;
out:
spin_unlock_irq(&chsc_page_lock);
return ret;
}
EXPORT_SYMBOL_GPL(chsc_scud);
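
chsc_scud() hands back eight per-path endpoint security mode bytes plus an esm_valid mask in which bit 0x80 corresponds to channel path position 0, the same left-to-right path-mask convention (opm & (0x80 >> chp)) used throughout the DASD code. A standalone sketch of unpacking that pair the way dasd_eckd_read_fc_security() does further up, with made-up values:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /*
         * Pretend chsc_scud() already filled these in; the values are
         * invented. Bit (0x80 >> chp) of esm_valid tells whether esm[chp]
         * carries a usable endpoint security mode.
         */
        uint8_t esm[8] = { 1, 0, 2, 0, 0, 0, 0, 3 };
        uint8_t esm_valid = 0xa1;   /* paths 0, 2 and 7 are valid */
        int chp;

        for (chp = 0; chp < 8; chp++) {
            if (esm_valid & (0x80 >> chp))
                printf("path %d: fc_security=%d\n", chp, esm[chp]);
            else
                printf("path %d: no data, treated as unsupported\n", chp);
        }
        return 0;
    }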


@ -27,7 +27,8 @@ struct channel_path_desc_fmt1 {
u8 lsn;
u8 desc;
u8 chpid;
u32:24;
u32:16;
u8 esc;
u8 chpp;
u32 unused[2];
u16 chid;


@ -1156,7 +1156,8 @@ static int io_subchannel_chp_event(struct subchannel *sch,
struct chp_link *link, int event)
{
struct ccw_device *cdev = sch_get_cdev(sch);
int mask;
int mask, chpid, valid_bit;
int path_event[8];
mask = chp_ssd_get_mask(&sch->ssd_info, link);
if (!mask)
@ -1191,6 +1192,18 @@ static int io_subchannel_chp_event(struct subchannel *sch,
cdev->private->path_new_mask |= mask;
io_subchannel_verify(sch);
break;
case CHP_FCES_EVENT:
/* Forward Endpoint Security event */
for (chpid = 0, valid_bit = 0x80; chpid < 8; chpid++,
valid_bit >>= 1) {
if (mask & valid_bit)
path_event[chpid] = PE_PATH_FCES_EVENT;
else
path_event[chpid] = PE_NONE;
}
if (cdev)
cdev->drv->path_event(cdev, path_event);
break;
}
return 0;
}


@ -416,19 +416,7 @@ static blk_status_t sr_init_command(struct scsi_cmnd *SCpnt)
goto out;
}
/*
* we do lazy blocksize switching (when reading XA sectors,
* see CDROMREADMODE2 ioctl)
*/
s_size = cd->device->sector_size;
if (s_size > 2048) {
if (!in_interrupt())
sr_set_blocklength(cd, 2048);
else
scmd_printk(KERN_INFO, SCpnt,
"can't switch blocksize: in interrupt\n");
}
if (s_size != 512 && s_size != 1024 && s_size != 2048) {
scmd_printk(KERN_ERR, SCpnt, "bad sector size %d\n", s_size);
goto out;
@ -701,11 +689,6 @@ error_out:
static void sr_release(struct cdrom_device_info *cdi)
{
struct scsi_cd *cd = cdi->handle;
if (cd->device->sector_size > 2048)
sr_set_blocklength(cd, 2048);
}
static int sr_probe(struct device *dev)


@ -549,6 +549,8 @@ static int sr_read_sector(Scsi_CD *cd, int lba, int blksize, unsigned char *dest
cgc.timeout = IOCTL_TIMEOUT;
rc = sr_do_ioctl(cd, &cgc);
if (blksize != CD_FRAMESIZE)
rc |= sr_set_blocklength(cd, CD_FRAMESIZE);
return rc;
}


@ -594,6 +594,18 @@ static inline void blk_mq_cleanup_rq(struct request *rq)
rq->q->mq_ops->cleanup_rq(rq);
}
static inline void blk_rq_bio_prep(struct request *rq, struct bio *bio,
unsigned int nr_segs)
{
rq->nr_phys_segments = nr_segs;
rq->__data_len = bio->bi_iter.bi_size;
rq->bio = rq->biotail = bio;
rq->ioprio = bio_prio(bio);
if (bio->bi_disk)
rq->rq_disk = bio->bi_disk;
}
blk_qc_t blk_mq_submit_bio(struct bio *bio);
void blk_mq_hctx_set_fq_lock_class(struct blk_mq_hw_ctx *hctx,
struct lock_class_key *key);