Merge tag 's390-6.9-2' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux

Pull more s390 updates from Heiko Carstens:

 - Various virtual vs physical address usage fixes

 - Add new bitwise types and helper functions, and use them in
   s390-specific drivers and code to make it easier to find virtual vs
   physical address usage bugs.

   Right now virtual and physical addresses are identical for s390,
   except for module, vmalloc, and similar areas. This will be changed,
   hopefully with the next merge window, so that e.g. the kernel image
   and modules will be located close to each other, allowing for direct
   branches and also for some other simplifications.

   As a prerequisite, all misuses of virtual and physical addresses
   must be fixed. As it turned out, people are so used to virtual and
   physical addresses being identical that new bugs were added even to
   code that had already been fixed. To avoid merging yet more code
   with such bugs, add and use new bitwise types so that sparse can
   find these usage bugs (a minimal sketch follows this list).

   Most likely the new types can go away again after some time.

 - Provide a simple ARCH_HAS_DEBUG_VIRTUAL implementation

 - Fix kprobe branch handling: if an out-of-line single-stepped
   relative branch instruction has a target address within a certain
   address area in the entry code, the program check handler may
   incorrectly execute cleanup code as if KVM code had been executed,
   leading to crashes

 - Fix reference counting of zcrypt card objects

 - Various other small fixes and cleanups
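
   A minimal standalone sketch of the sparse mechanism mentioned above
   (illustration only, not kernel code; the demo_ names are hypothetical
   stand-ins for the helpers added in asm/dma-types.h further down).
   It can be checked directly with sparse, e.g. "sparse sparse_demo.c":

	/* sparse_demo.c */
	typedef unsigned int u32;

	#ifdef __CHECKER__			/* defined by sparse */
	#define __bitwise	__attribute__((bitwise))
	#define __force		__attribute__((force))
	#else
	#define __bitwise
	#define __force
	#endif

	/* 31-bit absolute address, mirroring asm/dma-types.h */
	typedef u32 __bitwise dma32_t;

	struct ccw {
		dma32_t cda;	/* channel data address */
	};

	/* stand-in for virt_to_dma32(); real code converts virt to phys */
	static dma32_t demo_virt_to_dma32(void *ptr)
	{
		return (__force dma32_t)(unsigned long)ptr;
	}

	void fill_ccw(struct ccw *ccw, void *buf)
	{
		ccw->cda = (u32)(unsigned long)buf;	/* sparse: incorrect type in assignment */
		ccw->cda = demo_virt_to_dma32(buf);	/* clean: conversion is explicit */
	}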

* tag 's390-6.9-2' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (41 commits)
  s390/entry: compare gmap asce to determine guest/host fault
  s390/entry: remove OUTSIDE macro
  s390/entry: add CIF_SIE flag and remove sie64a() address check
  s390/cio: use while (i--) pattern to clean up
  s390/raw3270: make class3270 constant
  s390/raw3270: improve raw3270_init() readability
  s390/tape: make tape_class constant
  s390/vmlogrdr: make vmlogrdr_class constant
  s390/vmur: make vmur_class constant
  s390/zcrypt: make zcrypt_class constant
  s390/mm: provide simple ARCH_HAS_DEBUG_VIRTUAL support
  s390/vfio_ccw_cp: use new address translation helpers
  s390/iucv: use new address translation helpers
  s390/ctcm: use new address translation helpers
  s390/lcs: use new address translation helpers
  s390/qeth: use new address translation helpers
  s390/zfcp: use new address translation helpers
  s390/tape: fix virtual vs physical address confusion
  s390/3270: use new address translation helpers
  s390/3215: use new address translation helpers
  ...
Linus Torvalds, 2024-03-19 11:38:27 -07:00
commit f9c035492f
67 changed files with 771 additions and 553 deletions

View File

@ -63,6 +63,7 @@ config S390
select ARCH_ENABLE_MEMORY_HOTREMOVE
select ARCH_ENABLE_SPLIT_PMD_PTLOCK if PGTABLE_LEVELS > 2
select ARCH_HAS_CURRENT_STACK_POINTER
select ARCH_HAS_DEBUG_VIRTUAL
select ARCH_HAS_DEBUG_VM_PGTABLE
select ARCH_HAS_DEBUG_WX
select ARCH_HAS_DEVMEM_IS_ALLOWED

View File

@ -29,6 +29,7 @@ KBUILD_AFLAGS_DECOMPRESSOR += $(if $(CONFIG_DEBUG_INFO),$(aflags_dwarf))
endif
KBUILD_CFLAGS_DECOMPRESSOR := $(CLANG_FLAGS) -m64 -O2 -mpacked-stack
KBUILD_CFLAGS_DECOMPRESSOR += -DDISABLE_BRANCH_PROFILING -D__NO_FORTIFY
KBUILD_CFLAGS_DECOMPRESSOR += -D__DECOMPRESSOR
KBUILD_CFLAGS_DECOMPRESSOR += -fno-delete-null-pointer-checks -msoft-float -mbackchain
KBUILD_CFLAGS_DECOMPRESSOR += -fno-asynchronous-unwind-tables
KBUILD_CFLAGS_DECOMPRESSOR += -ffreestanding

View File

@ -810,6 +810,7 @@ CONFIG_DEBUG_OBJECTS_PERCPU_COUNTER=y
CONFIG_DEBUG_STACK_USAGE=y
CONFIG_DEBUG_VM=y
CONFIG_DEBUG_VM_PGFLAGS=y
CONFIG_DEBUG_VIRTUAL=y
CONFIG_DEBUG_MEMORY_INIT=y
CONFIG_MEMORY_NOTIFIER_ERROR_INJECT=m
CONFIG_DEBUG_PER_CPU_MAPS=y

View File

@ -217,7 +217,8 @@ extern void ccw_device_destroy_console(struct ccw_device *);
extern int ccw_device_enable_console(struct ccw_device *);
extern void ccw_device_wait_idle(struct ccw_device *);
extern void *ccw_device_dma_zalloc(struct ccw_device *cdev, size_t size);
extern void *ccw_device_dma_zalloc(struct ccw_device *cdev, size_t size,
dma32_t *dma_handle);
extern void ccw_device_dma_free(struct ccw_device *cdev,
void *cpu_addr, size_t size);

View File

@ -7,6 +7,7 @@
#include <linux/bitops.h>
#include <linux/genalloc.h>
#include <asm/dma-types.h>
#include <asm/types.h>
#include <asm/tpi.h>
@ -32,7 +33,7 @@ struct ccw1 {
__u8 cmd_code;
__u8 flags;
__u16 count;
__u32 cda;
dma32_t cda;
} __attribute__ ((packed,aligned(8)));
/**
@ -152,8 +153,8 @@ struct sublog {
struct esw0 {
struct sublog sublog;
struct erw erw;
__u32 faddr[2];
__u32 saddr;
dma32_t faddr[2];
dma32_t saddr;
} __attribute__ ((packed));
/**
@ -364,6 +365,8 @@ extern struct device *cio_get_dma_css_dev(void);
void *cio_gp_dma_zalloc(struct gen_pool *gp_dma, struct device *dma_dev,
size_t size);
void *__cio_gp_dma_zalloc(struct gen_pool *gp_dma, struct device *dma_dev,
size_t size, dma32_t *dma_handle);
void cio_gp_dma_free(struct gen_pool *gp_dma, void *cpu_addr, size_t size);
void cio_gp_dma_destroy(struct gen_pool *gp_dma, struct device *dma_dev);
struct gen_pool *cio_gp_dma_create(struct device *dma_dev, int nr_pages);

View File

@ -0,0 +1,103 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_S390_DMA_TYPES_H_
#define _ASM_S390_DMA_TYPES_H_
#include <linux/types.h>
#include <linux/io.h>
/*
* typedef dma32_t
* Contains a 31 bit absolute address to a DMA capable piece of storage.
*
* For CIO, DMA addresses are always absolute addresses. These addresses tend
* to be used in architectured memory blocks (like ORB, IDAW, MIDAW). Under
* certain circumstances 31 bit wide addresses must be used because the
* address must fit in 31 bits.
*
* This type is to be used when such fields can be modelled as 32 bit wide.
*/
typedef u32 __bitwise dma32_t;
/*
* typedef dma64_t
* Contains a 64 bit absolute address to a DMA capable piece of storage.
*
* For CIO, DMA addresses are always absolute addresses. These addresses tend
* to be used in architectured memory blocks (like ORB, IDAW, MIDAW).
*
* This type is to be used to model such 64 bit wide fields.
*/
typedef u64 __bitwise dma64_t;
/*
* Although DMA addresses should be obtained using the DMA API, in cases when
* it is known that the first argument holds a virtual address that points to
* DMA-able 31 bit addressable storage, then this function can be safely used.
*/
static inline dma32_t virt_to_dma32(void *ptr)
{
return (__force dma32_t)__pa32(ptr);
}
static inline void *dma32_to_virt(dma32_t addr)
{
return __va((__force unsigned long)addr);
}
static inline dma32_t u32_to_dma32(u32 addr)
{
return (__force dma32_t)addr;
}
static inline u32 dma32_to_u32(dma32_t addr)
{
return (__force u32)addr;
}
static inline dma32_t dma32_add(dma32_t a, u32 b)
{
return (__force dma32_t)((__force u32)a + b);
}
static inline dma32_t dma32_and(dma32_t a, u32 b)
{
return (__force dma32_t)((__force u32)a & b);
}
/*
* Although DMA addresses should be obtained using the DMA API, in cases when
* it is known that the first argument holds a virtual address that points to
* DMA-able storage, then this function can be safely used.
*/
static inline dma64_t virt_to_dma64(void *ptr)
{
return (__force dma64_t)__pa(ptr);
}
static inline void *dma64_to_virt(dma64_t addr)
{
return __va((__force unsigned long)addr);
}
static inline dma64_t u64_to_dma64(u64 addr)
{
return (__force dma64_t)addr;
}
static inline u64 dma64_to_u64(dma64_t addr)
{
return (__force u64)addr;
}
static inline dma64_t dma64_add(dma64_t a, u64 b)
{
return (__force dma64_t)((__force u64)a + b);
}
static inline dma64_t dma64_and(dma64_t a, u64 b)
{
return (__force dma64_t)((__force u64)a & b);
}
#endif /* _ASM_S390_DMA_TYPES_H_ */
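
As a usage illustration, a hedged sketch of how driver code can program a
CCW data address with the helpers above instead of raw casts. The function
name and command code are hypothetical; only struct ccw1 and
virt_to_dma32() come from the headers shown in this diff:

	#include <linux/types.h>
	#include <asm/cio.h>
	#include <asm/dma-types.h>

	/* illustrative only; buf must be 31-bit addressable, DMA-able storage */
	static void ccw_prepare_read(struct ccw1 *ccw, void *buf, u16 len)
	{
		ccw->cmd_code = 0x02;		/* hypothetical read command */
		ccw->flags = 0;
		ccw->count = len;
		ccw->cda = virt_to_dma32(buf);	/* typed dma32_t, sparse-checked */
	}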

View File

@ -5,6 +5,7 @@
#include <linux/types.h>
#include <linux/device.h>
#include <linux/blk_types.h>
#include <asm/dma-types.h>
struct arqb {
u64 data;
@ -45,7 +46,7 @@ struct msb {
u16:12;
u16 bs:4;
u32 blk_count;
u64 data_addr;
dma64_t data_addr;
u64 scm_addr;
u64:64;
} __packed;
@ -54,7 +55,7 @@ struct aidaw {
u8 flags;
u32 :24;
u32 :32;
u64 data_addr;
dma64_t data_addr;
} __packed;
#define MSB_OC_CLEAR 0

View File

@ -10,6 +10,7 @@
#define _ASM_S390_FCX_H
#include <linux/types.h>
#include <asm/dma-types.h>
#define TCW_FORMAT_DEFAULT 0
#define TCW_TIDAW_FORMAT_DEFAULT 0
@ -43,16 +44,16 @@ struct tcw {
u32 r:1;
u32 w:1;
u32 :16;
u64 output;
u64 input;
u64 tsb;
u64 tccb;
dma64_t output;
dma64_t input;
dma64_t tsb;
dma64_t tccb;
u32 output_count;
u32 input_count;
u32 :32;
u32 :32;
u32 :32;
u32 intrg;
dma32_t intrg;
} __attribute__ ((packed, aligned(64)));
#define TIDAW_FLAGS_LAST (1 << (7 - 0))
@ -73,7 +74,7 @@ struct tidaw {
u32 flags:8;
u32 :24;
u32 count;
u64 addr;
dma64_t addr;
} __attribute__ ((packed, aligned(16)));
/**

View File

@ -1,5 +1,5 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
/*
* Author(s)......: Holger Smolinski <Holger.Smolinski@de.ibm.com>
* Martin Schwidefsky <schwidefsky@de.ibm.com>
* Bugreports.to..: <Linux390@de.ibm.com>
@ -17,32 +17,37 @@
#include <linux/err.h>
#include <linux/types.h>
#include <linux/slab.h>
#include <asm/cio.h>
#include <linux/uaccess.h>
#include <asm/dma-types.h>
#include <asm/cio.h>
#define IDA_SIZE_LOG 12 /* 11 for 2k , 12 for 4k */
#define IDA_BLOCK_SIZE (1L<<IDA_SIZE_LOG)
#define IDA_SIZE_SHIFT 12
#define IDA_BLOCK_SIZE (1UL << IDA_SIZE_SHIFT)
#define IDA_2K_SIZE_LOG 11
#define IDA_2K_BLOCK_SIZE (1L << IDA_2K_SIZE_LOG)
#define IDA_2K_SIZE_SHIFT 11
#define IDA_2K_BLOCK_SIZE (1UL << IDA_2K_SIZE_SHIFT)
/*
* Test if an address/length pair needs an idal list.
*/
static inline int
idal_is_needed(void *vaddr, unsigned int length)
static inline bool idal_is_needed(void *vaddr, unsigned int length)
{
return ((__pa(vaddr) + length - 1) >> 31) != 0;
}
dma64_t paddr = virt_to_dma64(vaddr);
return (((__force unsigned long)(paddr) + length - 1) >> 31) != 0;
}
/*
* Return the number of idal words needed for an address/length pair.
*/
static inline unsigned int idal_nr_words(void *vaddr, unsigned int length)
{
return ((__pa(vaddr) & (IDA_BLOCK_SIZE-1)) + length +
(IDA_BLOCK_SIZE-1)) >> IDA_SIZE_LOG;
unsigned int cidaw;
cidaw = (unsigned long)vaddr & (IDA_BLOCK_SIZE - 1);
cidaw += length + IDA_BLOCK_SIZE - 1;
cidaw >>= IDA_SIZE_SHIFT;
return cidaw;
}
/*
@ -50,26 +55,27 @@ static inline unsigned int idal_nr_words(void *vaddr, unsigned int length)
*/
static inline unsigned int idal_2k_nr_words(void *vaddr, unsigned int length)
{
return ((__pa(vaddr) & (IDA_2K_BLOCK_SIZE - 1)) + length +
(IDA_2K_BLOCK_SIZE - 1)) >> IDA_2K_SIZE_LOG;
unsigned int cidaw;
cidaw = (unsigned long)vaddr & (IDA_2K_BLOCK_SIZE - 1);
cidaw += length + IDA_2K_BLOCK_SIZE - 1;
cidaw >>= IDA_2K_SIZE_SHIFT;
return cidaw;
}
/*
* Create the list of idal words for an address/length pair.
*/
static inline unsigned long *idal_create_words(unsigned long *idaws,
void *vaddr, unsigned int length)
static inline dma64_t *idal_create_words(dma64_t *idaws, void *vaddr, unsigned int length)
{
unsigned long paddr;
dma64_t paddr = virt_to_dma64(vaddr);
unsigned int cidaw;
paddr = __pa(vaddr);
cidaw = ((paddr & (IDA_BLOCK_SIZE-1)) + length +
(IDA_BLOCK_SIZE-1)) >> IDA_SIZE_LOG;
*idaws++ = paddr;
paddr &= -IDA_BLOCK_SIZE;
cidaw = idal_nr_words(vaddr, length);
paddr = dma64_and(paddr, -IDA_BLOCK_SIZE);
while (--cidaw > 0) {
paddr += IDA_BLOCK_SIZE;
paddr = dma64_add(paddr, IDA_BLOCK_SIZE);
*idaws++ = paddr;
}
return idaws;
@ -79,36 +85,33 @@ static inline unsigned long *idal_create_words(unsigned long *idaws,
* Sets the address of the data in CCW.
* If necessary it allocates an IDAL and sets the appropriate flags.
*/
static inline int
set_normalized_cda(struct ccw1 * ccw, void *vaddr)
static inline int set_normalized_cda(struct ccw1 *ccw, void *vaddr)
{
unsigned int nridaws;
unsigned long *idal;
dma64_t *idal;
if (ccw->flags & CCW_FLAG_IDA)
return -EINVAL;
nridaws = idal_nr_words(vaddr, ccw->count);
if (nridaws > 0) {
idal = kmalloc(nridaws * sizeof(unsigned long),
GFP_ATOMIC | GFP_DMA );
if (idal == NULL)
idal = kcalloc(nridaws, sizeof(*idal), GFP_ATOMIC | GFP_DMA);
if (!idal)
return -ENOMEM;
idal_create_words(idal, vaddr, ccw->count);
ccw->flags |= CCW_FLAG_IDA;
vaddr = idal;
}
ccw->cda = (__u32)(unsigned long) vaddr;
ccw->cda = virt_to_dma32(vaddr);
return 0;
}
/*
* Releases any allocated IDAL related to the CCW.
*/
static inline void
clear_normalized_cda(struct ccw1 * ccw)
static inline void clear_normalized_cda(struct ccw1 *ccw)
{
if (ccw->flags & CCW_FLAG_IDA) {
kfree((void *)(unsigned long) ccw->cda);
kfree(dma32_to_virt(ccw->cda));
ccw->flags &= ~CCW_FLAG_IDA;
}
ccw->cda = 0;
@ -120,125 +123,138 @@ clear_normalized_cda(struct ccw1 * ccw)
struct idal_buffer {
size_t size;
size_t page_order;
void *data[];
dma64_t data[];
};
/*
* Allocate an idal buffer
*/
static inline struct idal_buffer *
idal_buffer_alloc(size_t size, int page_order)
static inline struct idal_buffer *idal_buffer_alloc(size_t size, int page_order)
{
struct idal_buffer *ib;
int nr_chunks, nr_ptrs, i;
struct idal_buffer *ib;
void *vaddr;
nr_ptrs = (size + IDA_BLOCK_SIZE - 1) >> IDA_SIZE_LOG;
nr_chunks = (4096 << page_order) >> IDA_SIZE_LOG;
nr_ptrs = (size + IDA_BLOCK_SIZE - 1) >> IDA_SIZE_SHIFT;
nr_chunks = (PAGE_SIZE << page_order) >> IDA_SIZE_SHIFT;
ib = kmalloc(struct_size(ib, data, nr_ptrs), GFP_DMA | GFP_KERNEL);
if (ib == NULL)
if (!ib)
return ERR_PTR(-ENOMEM);
ib->size = size;
ib->page_order = page_order;
for (i = 0; i < nr_ptrs; i++) {
if ((i & (nr_chunks - 1)) != 0) {
ib->data[i] = ib->data[i-1] + IDA_BLOCK_SIZE;
if (i & (nr_chunks - 1)) {
ib->data[i] = dma64_add(ib->data[i - 1], IDA_BLOCK_SIZE);
continue;
}
ib->data[i] = (void *)
__get_free_pages(GFP_KERNEL, page_order);
if (ib->data[i] != NULL)
continue;
// Not enough memory
while (i >= nr_chunks) {
i -= nr_chunks;
free_pages((unsigned long) ib->data[i],
ib->page_order);
}
kfree(ib);
return ERR_PTR(-ENOMEM);
vaddr = (void *)__get_free_pages(GFP_KERNEL, page_order);
if (!vaddr)
goto error;
ib->data[i] = virt_to_dma64(vaddr);
}
return ib;
error:
while (i >= nr_chunks) {
i -= nr_chunks;
vaddr = dma64_to_virt(ib->data[i]);
free_pages((unsigned long)vaddr, ib->page_order);
}
kfree(ib);
return ERR_PTR(-ENOMEM);
}
/*
* Free an idal buffer.
*/
static inline void
idal_buffer_free(struct idal_buffer *ib)
static inline void idal_buffer_free(struct idal_buffer *ib)
{
int nr_chunks, nr_ptrs, i;
void *vaddr;
nr_ptrs = (ib->size + IDA_BLOCK_SIZE - 1) >> IDA_SIZE_LOG;
nr_chunks = (4096 << ib->page_order) >> IDA_SIZE_LOG;
for (i = 0; i < nr_ptrs; i += nr_chunks)
free_pages((unsigned long) ib->data[i], ib->page_order);
nr_ptrs = (ib->size + IDA_BLOCK_SIZE - 1) >> IDA_SIZE_SHIFT;
nr_chunks = (PAGE_SIZE << ib->page_order) >> IDA_SIZE_SHIFT;
for (i = 0; i < nr_ptrs; i += nr_chunks) {
vaddr = dma64_to_virt(ib->data[i]);
free_pages((unsigned long)vaddr, ib->page_order);
}
kfree(ib);
}
/*
* Test if a idal list is really needed.
*/
static inline int
__idal_buffer_is_needed(struct idal_buffer *ib)
static inline bool __idal_buffer_is_needed(struct idal_buffer *ib)
{
return ib->size > (4096ul << ib->page_order) ||
idal_is_needed(ib->data[0], ib->size);
if (ib->size > (PAGE_SIZE << ib->page_order))
return true;
return idal_is_needed(dma64_to_virt(ib->data[0]), ib->size);
}
/*
* Set channel data address to idal buffer.
*/
static inline void
idal_buffer_set_cda(struct idal_buffer *ib, struct ccw1 *ccw)
static inline void idal_buffer_set_cda(struct idal_buffer *ib, struct ccw1 *ccw)
{
void *vaddr;
if (__idal_buffer_is_needed(ib)) {
// setup idals;
ccw->cda = (u32)(addr_t) ib->data;
/* Setup idals */
ccw->cda = virt_to_dma32(ib->data);
ccw->flags |= CCW_FLAG_IDA;
} else
// we do not need idals - use direct addressing
ccw->cda = (u32)(addr_t) ib->data[0];
} else {
/*
* No idals needed - use direct addressing. Convert from
* dma64_t to virt and then to dma32_t only because of type
* checking. The physical address is known to be below 2GB.
*/
vaddr = dma64_to_virt(ib->data[0]);
ccw->cda = virt_to_dma32(vaddr);
}
ccw->count = ib->size;
}
/*
* Copy count bytes from an idal buffer to user memory
*/
static inline size_t
idal_buffer_to_user(struct idal_buffer *ib, void __user *to, size_t count)
static inline size_t idal_buffer_to_user(struct idal_buffer *ib, void __user *to, size_t count)
{
size_t left;
void *vaddr;
int i;
BUG_ON(count > ib->size);
for (i = 0; count > IDA_BLOCK_SIZE; i++) {
left = copy_to_user(to, ib->data[i], IDA_BLOCK_SIZE);
vaddr = dma64_to_virt(ib->data[i]);
left = copy_to_user(to, vaddr, IDA_BLOCK_SIZE);
if (left)
return left + count - IDA_BLOCK_SIZE;
to = (void __user *) to + IDA_BLOCK_SIZE;
to = (void __user *)to + IDA_BLOCK_SIZE;
count -= IDA_BLOCK_SIZE;
}
return copy_to_user(to, ib->data[i], count);
vaddr = dma64_to_virt(ib->data[i]);
return copy_to_user(to, vaddr, count);
}
/*
* Copy count bytes from user memory to an idal buffer
*/
static inline size_t
idal_buffer_from_user(struct idal_buffer *ib, const void __user *from, size_t count)
static inline size_t idal_buffer_from_user(struct idal_buffer *ib, const void __user *from, size_t count)
{
size_t left;
void *vaddr;
int i;
BUG_ON(count > ib->size);
for (i = 0; count > IDA_BLOCK_SIZE; i++) {
left = copy_from_user(ib->data[i], from, IDA_BLOCK_SIZE);
vaddr = dma64_to_virt(ib->data[i]);
left = copy_from_user(vaddr, from, IDA_BLOCK_SIZE);
if (left)
return left + count - IDA_BLOCK_SIZE;
from = (void __user *) from + IDA_BLOCK_SIZE;
from = (void __user *)from + IDA_BLOCK_SIZE;
count -= IDA_BLOCK_SIZE;
}
return copy_from_user(ib->data[i], from, count);
vaddr = dma64_to_virt(ib->data[i]);
return copy_from_user(vaddr, from, count);
}
#endif
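
To make the intended calling sequence concrete, a sketch under assumed
driver context (start_io() and its surrounding error handling are
hypothetical; the helpers are the ones reworked above):

	#include <linux/types.h>
	#include <asm/cio.h>
	#include <asm/idals.h>

	static int start_io(struct ccw1 *ccw, void *buf, u16 len)
	{
		int rc;

		ccw->count = len;			/* used by the helper */
		rc = set_normalized_cda(ccw, buf);	/* builds an IDAL if required */
		if (rc)
			return rc;			/* -EINVAL or -ENOMEM */
		/* ... submit the channel program and wait for completion ... */
		clear_normalized_cda(ccw);		/* frees the IDAL, zeroes cda */
		return 0;
	}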

View File

@ -181,9 +181,35 @@ int arch_make_page_accessible(struct page *page);
#define __PAGE_OFFSET 0x0UL
#define PAGE_OFFSET 0x0UL
#define __pa(x) ((unsigned long)(x))
#define __pa_nodebug(x) ((unsigned long)(x))
#ifdef __DECOMPRESSOR
#define __pa(x) __pa_nodebug(x)
#define __pa32(x) __pa(x)
#define __va(x) ((void *)(unsigned long)(x))
#else /* __DECOMPRESSOR */
#ifdef CONFIG_DEBUG_VIRTUAL
unsigned long __phys_addr(unsigned long x, bool is_31bit);
#else /* CONFIG_DEBUG_VIRTUAL */
static inline unsigned long __phys_addr(unsigned long x, bool is_31bit)
{
return __pa_nodebug(x);
}
#endif /* CONFIG_DEBUG_VIRTUAL */
#define __pa(x) __phys_addr((unsigned long)(x), false)
#define __pa32(x) __phys_addr((unsigned long)(x), true)
#define __va(x) ((void *)(unsigned long)(x))
#endif /* __DECOMPRESSOR */
#define phys_to_pfn(phys) ((phys) >> PAGE_SHIFT)
#define pfn_to_phys(pfn) ((pfn) << PAGE_SHIFT)
@ -205,7 +231,7 @@ static inline unsigned long virt_to_pfn(const void *kaddr)
#define virt_to_page(kaddr) pfn_to_page(virt_to_pfn(kaddr))
#define page_to_virt(page) pfn_to_virt(page_to_pfn(page))
#define virt_addr_valid(kaddr) pfn_valid(virt_to_pfn(kaddr))
#define virt_addr_valid(kaddr) pfn_valid(phys_to_pfn(__pa_nodebug(kaddr)))
#define VM_DATA_DEFAULT_FLAGS VM_DATA_FLAGS_NON_EXEC

View File

@ -14,11 +14,13 @@
#include <linux/bits.h>
#define CIF_SIE 0 /* CPU needs SIE exit cleanup */
#define CIF_NOHZ_DELAY 2 /* delay HZ disable for a tick */
#define CIF_ENABLED_WAIT 5 /* in enabled wait state */
#define CIF_MCCK_GUEST 6 /* machine check happening in guest */
#define CIF_DEDICATED_CPU 7 /* this CPU is dedicated */
#define _CIF_SIE BIT(CIF_SIE)
#define _CIF_NOHZ_DELAY BIT(CIF_NOHZ_DELAY)
#define _CIF_ENABLED_WAIT BIT(CIF_ENABLED_WAIT)
#define _CIF_MCCK_GUEST BIT(CIF_MCCK_GUEST)

View File

@ -14,13 +14,11 @@
#define PIF_SYSCALL 0 /* inside a system call */
#define PIF_EXECVE_PGSTE_RESTART 1 /* restart execve for PGSTE binaries */
#define PIF_SYSCALL_RET_SET 2 /* return value was set via ptrace */
#define PIF_GUEST_FAULT 3 /* indicates program check in sie64a */
#define PIF_FTRACE_FULL_REGS 4 /* all register contents valid (ftrace) */
#define _PIF_SYSCALL BIT(PIF_SYSCALL)
#define _PIF_EXECVE_PGSTE_RESTART BIT(PIF_EXECVE_PGSTE_RESTART)
#define _PIF_SYSCALL_RET_SET BIT(PIF_SYSCALL_RET_SET)
#define _PIF_GUEST_FAULT BIT(PIF_GUEST_FAULT)
#define _PIF_FTRACE_FULL_REGS BIT(PIF_FTRACE_FULL_REGS)
#define PSW32_MASK_PER _AC(0x40000000, UL)

View File

@ -9,8 +9,9 @@
#define __QDIO_H__
#include <linux/interrupt.h>
#include <asm/cio.h>
#include <asm/dma-types.h>
#include <asm/ccwdev.h>
#include <asm/cio.h>
/* only use 4 queues to save some cachelines */
#define QDIO_MAX_QUEUES_PER_IRQ 4
@ -34,9 +35,9 @@
* @dkey: access key for SLSB
*/
struct qdesfmt0 {
u64 sliba;
u64 sla;
u64 slsba;
dma64_t sliba;
dma64_t sla;
dma64_t slsba;
u32 : 32;
u32 akey : 4;
u32 bkey : 4;
@ -74,7 +75,7 @@ struct qdr {
/* private: */
u32 res[9];
/* public: */
u64 qiba;
dma64_t qiba;
u32 : 32;
u32 qkey : 4;
u32 : 28;
@ -146,7 +147,7 @@ struct qaob {
u8 flags;
u16 cbtbs;
u8 sb_count;
u64 sba[QDIO_MAX_ELEMENTS_PER_BUFFER];
dma64_t sba[QDIO_MAX_ELEMENTS_PER_BUFFER];
u16 dcount[QDIO_MAX_ELEMENTS_PER_BUFFER];
u64 user0;
u64 res4[2];
@ -208,7 +209,7 @@ struct qdio_buffer_element {
u8 scount;
u8 sflags;
u32 length;
u64 addr;
dma64_t addr;
} __attribute__ ((packed, aligned(16)));
/**
@ -224,7 +225,7 @@ struct qdio_buffer {
* @sbal: absolute SBAL address
*/
struct sl_element {
u64 sbal;
dma64_t sbal;
} __attribute__ ((packed));
/**

View File

@ -11,6 +11,7 @@
#include <linux/types.h>
#include <asm/css_chars.h>
#include <asm/dma-types.h>
#include <asm/cio.h>
/**
@ -53,7 +54,7 @@ struct cmd_scsw {
__u32 fctl : 3;
__u32 actl : 7;
__u32 stctl : 5;
__u32 cpa;
dma32_t cpa;
__u32 dstat : 8;
__u32 cstat : 8;
__u32 count : 16;
@ -93,7 +94,7 @@ struct tm_scsw {
u32 fctl:3;
u32 actl:7;
u32 stctl:5;
u32 tcw;
dma32_t tcw;
u32 dstat:8;
u32 cstat:8;
u32 fcxs:8;
@ -125,7 +126,7 @@ struct eadm_scsw {
u32 fctl:3;
u32 actl:7;
u32 stctl:5;
u32 aob;
dma32_t aob;
u32 dstat:8;
u32 cstat:8;
u32:16;

View File

@ -119,33 +119,11 @@ _LPP_OFFSET = __LC_LPP
.endm
#if IS_ENABLED(CONFIG_KVM)
/*
* The OUTSIDE macro jumps to the provided label in case the value
* in the provided register is outside of the provided range. The
* macro is useful for checking whether a PSW stored in a register
* pair points inside or outside of a block of instructions.
* @reg: register to check
* @start: start of the range
* @end: end of the range
* @outside_label: jump here if @reg is outside of [@start..@end)
*/
.macro OUTSIDE reg,start,end,outside_label
lgr %r14,\reg
larl %r13,\start
slgr %r14,%r13
clgfrl %r14,.Lrange_size\@
jhe \outside_label
.section .rodata, "a"
.balign 4
.Lrange_size\@:
.long \end - \start
.previous
.endm
.macro SIEEXIT
lg %r9,__SF_SIE_CONTROL(%r15) # get control block pointer
.macro SIEEXIT sie_control
lg %r9,\sie_control # get control block pointer
ni __SIE_PROG0C+3(%r9),0xfe # no longer in SIE
lctlg %c1,%c1,__LC_KERNEL_ASCE # load primary asce
ni __LC_CPU_FLAGS+7,255-_CIF_SIE
larl %r9,sie_exit # skip forward to sie_exit
.endm
#endif
@ -214,6 +192,7 @@ SYM_FUNC_START(__sie64a)
lg %r14,__LC_GMAP # get gmap pointer
ltgr %r14,%r14
jz .Lsie_gmap
oi __LC_CPU_FLAGS+7,_CIF_SIE
lctlg %c1,%c1,__GMAP_ASCE(%r14) # load primary asce
.Lsie_gmap:
lg %r14,__SF_SIE_CONTROL(%r15) # get control block pointer
@ -234,7 +213,7 @@ SYM_FUNC_START(__sie64a)
lg %r14,__SF_SIE_CONTROL(%r15) # get control block pointer
ni __SIE_PROG0C+3(%r14),0xfe # no longer in SIE
lctlg %c1,%c1,__LC_KERNEL_ASCE # load primary asce
.Lsie_done:
ni __LC_CPU_FLAGS+7,255-_CIF_SIE
# some program checks are suppressing. C code (e.g. do_protection_exception)
# will rewind the PSW by the ILC, which is often 4 bytes in case of SIE. There
# are some corner cases (e.g. runtime instrumentation) where ILC is unpredictable.
@ -337,20 +316,13 @@ SYM_CODE_START(pgm_check_handler)
stpt __LC_SYS_ENTER_TIMER
BPOFF
stmg %r8,%r15,__LC_SAVE_AREA_SYNC
lghi %r10,0
lgr %r10,%r15
lmg %r8,%r9,__LC_PGM_OLD_PSW
tmhh %r8,0x0001 # coming from user space?
jno .Lpgm_skip_asce
lctlg %c1,%c1,__LC_KERNEL_ASCE
j 3f # -> fault in user space
.Lpgm_skip_asce:
#if IS_ENABLED(CONFIG_KVM)
# cleanup critical section for program checks in __sie64a
OUTSIDE %r9,.Lsie_gmap,.Lsie_done,1f
BPENTER __SF_SIE_FLAGS(%r15),_TIF_ISOLATE_BP_GUEST
SIEEXIT
lghi %r10,_PIF_GUEST_FAULT
#endif
1: tmhh %r8,0x4000 # PER bit set in old PSW ?
jnz 2f # -> enabled, can't be a double fault
tm __LC_PGM_ILC+3,0x80 # check for per exception
@ -361,13 +333,20 @@ SYM_CODE_START(pgm_check_handler)
CHECK_VMAP_STACK __LC_SAVE_AREA_SYNC,4f
3: lg %r15,__LC_KERNEL_STACK
4: la %r11,STACK_FRAME_OVERHEAD(%r15)
stg %r10,__PT_FLAGS(%r11)
xc __PT_FLAGS(8,%r11),__PT_FLAGS(%r11)
xc __SF_BACKCHAIN(8,%r15),__SF_BACKCHAIN(%r15)
stmg %r0,%r7,__PT_R0(%r11)
mvc __PT_R8(64,%r11),__LC_SAVE_AREA_SYNC
mvc __PT_LAST_BREAK(8,%r11),__LC_PGM_LAST_BREAK
stmg %r8,%r9,__PT_PSW(%r11)
stctg %c1,%c1,__PT_CR1(%r11)
#if IS_ENABLED(CONFIG_KVM)
lg %r12,__LC_GMAP
clc __GMAP_ASCE(8,%r12), __PT_CR1(%r11)
jne 5f
BPENTER __SF_SIE_FLAGS(%r10),_TIF_ISOLATE_BP_GUEST
SIEEXIT __SF_SIE_CONTROL(%r10)
#endif
5: stmg %r8,%r9,__PT_PSW(%r11)
# clear user controlled registers to prevent speculative use
xgr %r0,%r0
xgr %r1,%r1
@ -416,9 +395,10 @@ SYM_CODE_START(\name)
tmhh %r8,0x0001 # interrupting from user ?
jnz 1f
#if IS_ENABLED(CONFIG_KVM)
OUTSIDE %r9,.Lsie_gmap,.Lsie_done,0f
TSTMSK __LC_CPU_FLAGS,_CIF_SIE
jz 0f
BPENTER __SF_SIE_FLAGS(%r15),_TIF_ISOLATE_BP_GUEST
SIEEXIT
SIEEXIT __SF_SIE_CONTROL(%r15)
#endif
0: CHECK_STACK __LC_SAVE_AREA_ASYNC
aghi %r15,-(STACK_FRAME_OVERHEAD + __PT_SIZE)
@ -513,11 +493,20 @@ SYM_CODE_START(mcck_int_handler)
TSTMSK __LC_MCCK_CODE,MCCK_CODE_PSW_IA_VALID
jno .Lmcck_panic
#if IS_ENABLED(CONFIG_KVM)
OUTSIDE %r9,.Lsie_gmap,.Lsie_done,.Lmcck_user
OUTSIDE %r9,.Lsie_entry,.Lsie_leave,4f
TSTMSK __LC_CPU_FLAGS,_CIF_SIE
jz .Lmcck_user
# Need to compare the address instead of a CIF_SIE* flag.
# Otherwise there would be a race between setting the flag
# and entering SIE (or leaving and clearing the flag). This
# would cause machine checks targeted at the guest to be
# handled by the host.
larl %r14,.Lsie_entry
clgrjl %r9,%r14, 4f
larl %r14,.Lsie_leave
clgrjhe %r9,%r14, 4f
oi __LC_CPU_FLAGS+7, _CIF_MCCK_GUEST
4: BPENTER __SF_SIE_FLAGS(%r15),_TIF_ISOLATE_BP_GUEST
SIEEXIT
SIEEXIT __SF_SIE_CONTROL(%r15)
#endif
.Lmcck_user:
lg %r15,__LC_MCCK_STACK

View File

@ -397,7 +397,7 @@ static void service_level_vm_print(struct seq_file *m,
{
char *query_buffer, *str;
query_buffer = kmalloc(1024, GFP_KERNEL | GFP_DMA);
query_buffer = kmalloc(1024, GFP_KERNEL);
if (!query_buffer)
return;
cpcmd("QUERY CPLEVEL", query_buffer, 1024, NULL);

View File

@ -210,13 +210,13 @@ void vtime_flush(struct task_struct *tsk)
virt_timer_expire();
steal = S390_lowcore.steal_timer;
avg_steal = S390_lowcore.avg_steal_timer / 2;
avg_steal = S390_lowcore.avg_steal_timer;
if ((s64) steal > 0) {
S390_lowcore.steal_timer = 0;
account_steal_time(cputime_to_nsecs(steal));
avg_steal += steal;
}
S390_lowcore.avg_steal_timer = avg_steal;
S390_lowcore.avg_steal_timer = avg_steal / 2;
}
static u64 vtime_delta(void)

View File

@ -7,6 +7,7 @@ obj-y := init.o fault.o extmem.o mmap.o vmem.o maccess.o
obj-y += page-states.o pageattr.o pgtable.o pgalloc.o extable.o
obj-$(CONFIG_CMM) += cmm.o
obj-$(CONFIG_DEBUG_VIRTUAL) += physaddr.o
obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o
obj-$(CONFIG_PTDUMP_CORE) += dump_pagetables.o
obj-$(CONFIG_PGSTE) += gmap.o

View File

@ -67,13 +67,15 @@ early_initcall(fault_init);
static enum fault_type get_fault_type(struct pt_regs *regs)
{
union teid teid = { .val = regs->int_parm_long };
struct gmap *gmap;
if (likely(teid.as == PSW_BITS_AS_PRIMARY)) {
if (user_mode(regs))
return USER_FAULT;
if (!IS_ENABLED(CONFIG_PGSTE))
return KERNEL_FAULT;
if (test_pt_regs_flag(regs, PIF_GUEST_FAULT))
gmap = (struct gmap *)S390_lowcore.gmap;
if (regs->cr1 == gmap->asce)
return GMAP_FAULT;
return KERNEL_FAULT;
}

arch/s390/mm/physaddr.c (new file, 15 lines)
View File

@ -0,0 +1,15 @@
// SPDX-License-Identifier: GPL-2.0
#include <linux/mmdebug.h>
#include <linux/export.h>
#include <linux/mm.h>
#include <asm/page.h>
unsigned long __phys_addr(unsigned long x, bool is_31bit)
{
VIRTUAL_BUG_ON(is_vmalloc_or_module_addr((void *)(x)));
x = __pa_nodebug(x);
if (is_31bit)
VIRTUAL_BUG_ON(x >> 31);
return x;
}
EXPORT_SYMBOL(__phys_addr);
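
What the new check buys, shown as a hypothetical debugging snippet (not
part of the series): with CONFIG_DEBUG_VIRTUAL=y, feeding a vmalloc
address to __pa() now trips VIRTUAL_BUG_ON() instead of silently
yielding a bogus physical address.

	#include <linux/vmalloc.h>
	#include <linux/mm.h>
	#include <asm/page.h>

	/* deliberately wrong usage; never do this in real code */
	static void debug_virtual_demo(void)
	{
		void *v = vmalloc(PAGE_SIZE);

		if (!v)
			return;
		(void)__pa(v);	/* triggers VIRTUAL_BUG_ON(): vmalloc address */
		vfree(v);
	}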

View File

@ -3976,7 +3976,7 @@ static struct dasd_ccw_req *dasd_generic_build_rdc(struct dasd_device *device,
ccw = cqr->cpaddr;
ccw->cmd_code = CCW_CMD_RDC;
ccw->cda = (__u32)virt_to_phys(cqr->data);
ccw->cda = virt_to_dma32(cqr->data);
ccw->flags = 0;
ccw->count = rdc_buffer_size;
cqr->startdev = device;
@ -4020,7 +4020,7 @@ char *dasd_get_sense(struct irb *irb)
if (scsw_is_tm(&irb->scsw) && (irb->scsw.tm.fcxs == 0x01)) {
if (irb->scsw.tm.tcw)
tsb = tcw_get_tsb(phys_to_virt(irb->scsw.tm.tcw));
tsb = tcw_get_tsb(dma32_to_virt(irb->scsw.tm.tcw));
if (tsb && tsb->length == 64 && tsb->flags)
switch (tsb->flags & 0x07) {
case 1: /* tsa_iostat */

View File

@ -216,7 +216,7 @@ dasd_3990_erp_DCTL(struct dasd_ccw_req * erp, char modifier)
memset(ccw, 0, sizeof(struct ccw1));
ccw->cmd_code = CCW_CMD_DCTL;
ccw->count = 4;
ccw->cda = (__u32)virt_to_phys(DCTL_data);
ccw->cda = virt_to_dma32(DCTL_data);
dctl_cqr->flags = erp->flags;
dctl_cqr->function = dasd_3990_erp_DCTL;
dctl_cqr->refers = erp;
@ -1589,7 +1589,7 @@ dasd_3990_erp_action_1B_32(struct dasd_ccw_req * default_erp, char *sense)
{
struct dasd_device *device = default_erp->startdev;
__u32 cpa = 0;
dma32_t cpa = 0;
struct dasd_ccw_req *cqr;
struct dasd_ccw_req *erp;
struct DE_eckd_data *DE_data;
@ -1693,7 +1693,7 @@ dasd_3990_erp_action_1B_32(struct dasd_ccw_req * default_erp, char *sense)
ccw->cmd_code = DASD_ECKD_CCW_DEFINE_EXTENT;
ccw->flags = CCW_FLAG_CC;
ccw->count = 16;
ccw->cda = (__u32)virt_to_phys(DE_data);
ccw->cda = virt_to_dma32(DE_data);
/* create LO ccw */
ccw++;
@ -1701,7 +1701,7 @@ dasd_3990_erp_action_1B_32(struct dasd_ccw_req * default_erp, char *sense)
ccw->cmd_code = DASD_ECKD_CCW_LOCATE_RECORD;
ccw->flags = CCW_FLAG_CC;
ccw->count = 16;
ccw->cda = (__u32)virt_to_phys(LO_data);
ccw->cda = virt_to_dma32(LO_data);
/* TIC to the failed ccw */
ccw++;
@ -1747,7 +1747,7 @@ dasd_3990_update_1B(struct dasd_ccw_req * previous_erp, char *sense)
{
struct dasd_device *device = previous_erp->startdev;
__u32 cpa = 0;
dma32_t cpa = 0;
struct dasd_ccw_req *cqr;
struct dasd_ccw_req *erp;
char *LO_data; /* struct LO_eckd_data */
@ -2386,7 +2386,7 @@ static struct dasd_ccw_req *dasd_3990_erp_add_erp(struct dasd_ccw_req *cqr)
tcw = erp->cpaddr;
tsb = (struct tsb *) &tcw[1];
*tcw = *((struct tcw *)cqr->cpaddr);
tcw->tsb = virt_to_phys(tsb);
tcw->tsb = virt_to_dma64(tsb);
} else if (ccw->cmd_code == DASD_ECKD_CCW_PSF) {
/* PSF cannot be chained from NOOP/TIC */
erp->cpaddr = cqr->cpaddr;
@ -2397,7 +2397,7 @@ static struct dasd_ccw_req *dasd_3990_erp_add_erp(struct dasd_ccw_req *cqr)
ccw->flags = CCW_FLAG_CC;
ccw++;
ccw->cmd_code = CCW_CMD_TIC;
ccw->cda = (__u32)virt_to_phys(cqr->cpaddr);
ccw->cda = virt_to_dma32(cqr->cpaddr);
}
erp->flags = cqr->flags;

View File

@ -435,7 +435,7 @@ static int read_unit_address_configuration(struct dasd_device *device,
ccw->cmd_code = DASD_ECKD_CCW_PSF;
ccw->count = sizeof(struct dasd_psf_prssd_data);
ccw->flags |= CCW_FLAG_CC;
ccw->cda = (__u32)virt_to_phys(prssdp);
ccw->cda = virt_to_dma32(prssdp);
/* Read Subsystem Data - feature codes */
memset(lcu->uac, 0, sizeof(*(lcu->uac)));
@ -443,7 +443,7 @@ static int read_unit_address_configuration(struct dasd_device *device,
ccw++;
ccw->cmd_code = DASD_ECKD_CCW_RSSD;
ccw->count = sizeof(*(lcu->uac));
ccw->cda = (__u32)virt_to_phys(lcu->uac);
ccw->cda = virt_to_dma32(lcu->uac);
cqr->buildclk = get_tod_clock();
cqr->status = DASD_CQR_FILLED;
@ -739,7 +739,7 @@ static int reset_summary_unit_check(struct alias_lcu *lcu,
ccw->cmd_code = DASD_ECKD_CCW_RSCK;
ccw->flags = CCW_FLAG_SLI;
ccw->count = 16;
ccw->cda = (__u32)virt_to_phys(cqr->data);
ccw->cda = virt_to_dma32(cqr->data);
((char *)cqr->data)[0] = reason;
clear_bit(DASD_CQR_FLAGS_USE_ERP, &cqr->flags);

View File

@ -283,7 +283,7 @@ define_extent(struct ccw1 *ccw, struct DE_eckd_data *data, unsigned int trk,
ccw->cmd_code = DASD_ECKD_CCW_DEFINE_EXTENT;
ccw->flags = 0;
ccw->count = 16;
ccw->cda = (__u32)virt_to_phys(data);
ccw->cda = virt_to_dma32(data);
}
memset(data, 0, sizeof(struct DE_eckd_data));
@ -393,7 +393,7 @@ static void locate_record_ext(struct ccw1 *ccw, struct LRE_eckd_data *data,
ccw->count = 22;
else
ccw->count = 20;
ccw->cda = (__u32)virt_to_phys(data);
ccw->cda = virt_to_dma32(data);
}
memset(data, 0, sizeof(*data));
@ -539,11 +539,11 @@ static int prefix_LRE(struct ccw1 *ccw, struct PFX_eckd_data *pfxdata,
ccw->flags = 0;
if (cmd == DASD_ECKD_CCW_WRITE_FULL_TRACK) {
ccw->count = sizeof(*pfxdata) + 2;
ccw->cda = (__u32)virt_to_phys(pfxdata);
ccw->cda = virt_to_dma32(pfxdata);
memset(pfxdata, 0, sizeof(*pfxdata) + 2);
} else {
ccw->count = sizeof(*pfxdata);
ccw->cda = (__u32)virt_to_phys(pfxdata);
ccw->cda = virt_to_dma32(pfxdata);
memset(pfxdata, 0, sizeof(*pfxdata));
}
@ -610,7 +610,7 @@ locate_record(struct ccw1 *ccw, struct LO_eckd_data *data, unsigned int trk,
ccw->cmd_code = DASD_ECKD_CCW_LOCATE_RECORD;
ccw->flags = 0;
ccw->count = 16;
ccw->cda = (__u32)virt_to_phys(data);
ccw->cda = virt_to_dma32(data);
memset(data, 0, sizeof(struct LO_eckd_data));
sector = 0;
@ -825,7 +825,7 @@ static void dasd_eckd_fill_rcd_cqr(struct dasd_device *device,
ccw = cqr->cpaddr;
ccw->cmd_code = DASD_ECKD_CCW_RCD;
ccw->flags = 0;
ccw->cda = (__u32)virt_to_phys(rcd_buffer);
ccw->cda = virt_to_dma32(rcd_buffer);
ccw->count = DASD_ECKD_RCD_DATA_SIZE;
cqr->magic = DASD_ECKD_MAGIC;
@ -853,7 +853,7 @@ static void read_conf_cb(struct dasd_ccw_req *cqr, void *data)
if (cqr->status != DASD_CQR_DONE) {
ccw = cqr->cpaddr;
rcd_buffer = phys_to_virt(ccw->cda);
rcd_buffer = dma32_to_virt(ccw->cda);
memset(rcd_buffer, 0, sizeof(*rcd_buffer));
rcd_buffer[0] = 0xE5;
@ -1534,7 +1534,7 @@ static int dasd_eckd_read_features(struct dasd_device *device)
ccw->cmd_code = DASD_ECKD_CCW_PSF;
ccw->count = sizeof(struct dasd_psf_prssd_data);
ccw->flags |= CCW_FLAG_CC;
ccw->cda = (__u32)virt_to_phys(prssdp);
ccw->cda = virt_to_dma32(prssdp);
/* Read Subsystem Data - feature codes */
features = (struct dasd_rssd_features *) (prssdp + 1);
@ -1543,7 +1543,7 @@ static int dasd_eckd_read_features(struct dasd_device *device)
ccw++;
ccw->cmd_code = DASD_ECKD_CCW_RSSD;
ccw->count = sizeof(struct dasd_rssd_features);
ccw->cda = (__u32)virt_to_phys(features);
ccw->cda = virt_to_dma32(features);
cqr->buildclk = get_tod_clock();
cqr->status = DASD_CQR_FILLED;
@ -1603,7 +1603,7 @@ static int dasd_eckd_read_vol_info(struct dasd_device *device)
ccw->cmd_code = DASD_ECKD_CCW_PSF;
ccw->count = sizeof(*prssdp);
ccw->flags |= CCW_FLAG_CC;
ccw->cda = (__u32)virt_to_phys(prssdp);
ccw->cda = virt_to_dma32(prssdp);
/* Read Subsystem Data - Volume Storage Query */
vsq = (struct dasd_rssd_vsq *)(prssdp + 1);
@ -1613,7 +1613,7 @@ static int dasd_eckd_read_vol_info(struct dasd_device *device)
ccw->cmd_code = DASD_ECKD_CCW_RSSD;
ccw->count = sizeof(*vsq);
ccw->flags |= CCW_FLAG_SLI;
ccw->cda = (__u32)virt_to_phys(vsq);
ccw->cda = virt_to_dma32(vsq);
cqr->buildclk = get_tod_clock();
cqr->status = DASD_CQR_FILLED;
@ -1788,7 +1788,7 @@ static int dasd_eckd_read_ext_pool_info(struct dasd_device *device)
ccw->cmd_code = DASD_ECKD_CCW_PSF;
ccw->count = sizeof(*prssdp);
ccw->flags |= CCW_FLAG_CC;
ccw->cda = (__u32)virt_to_phys(prssdp);
ccw->cda = virt_to_dma32(prssdp);
lcq = (struct dasd_rssd_lcq *)(prssdp + 1);
memset(lcq, 0, sizeof(*lcq));
@ -1797,7 +1797,7 @@ static int dasd_eckd_read_ext_pool_info(struct dasd_device *device)
ccw->cmd_code = DASD_ECKD_CCW_RSSD;
ccw->count = sizeof(*lcq);
ccw->flags |= CCW_FLAG_SLI;
ccw->cda = (__u32)virt_to_phys(lcq);
ccw->cda = virt_to_dma32(lcq);
cqr->buildclk = get_tod_clock();
cqr->status = DASD_CQR_FILLED;
@ -1894,7 +1894,7 @@ static struct dasd_ccw_req *dasd_eckd_build_psf_ssc(struct dasd_device *device,
}
ccw = cqr->cpaddr;
ccw->cmd_code = DASD_ECKD_CCW_PSF;
ccw->cda = (__u32)virt_to_phys(psf_ssc_data);
ccw->cda = virt_to_dma32(psf_ssc_data);
ccw->count = 66;
cqr->startdev = device;
@ -2250,7 +2250,7 @@ dasd_eckd_analysis_ccw(struct dasd_device *device)
ccw->cmd_code = DASD_ECKD_CCW_READ_COUNT;
ccw->flags = 0;
ccw->count = 8;
ccw->cda = (__u32)virt_to_phys(count_data);
ccw->cda = virt_to_dma32(count_data);
ccw++;
count_data++;
}
@ -2264,7 +2264,7 @@ dasd_eckd_analysis_ccw(struct dasd_device *device)
ccw->cmd_code = DASD_ECKD_CCW_READ_COUNT;
ccw->flags = 0;
ccw->count = 8;
ccw->cda = (__u32)virt_to_phys(count_data);
ccw->cda = virt_to_dma32(count_data);
cqr->block = NULL;
cqr->startdev = device;
@ -2635,7 +2635,7 @@ dasd_eckd_build_check(struct dasd_device *base, struct format_data_t *fdata,
ccw->cmd_code = DASD_ECKD_CCW_READ_COUNT;
ccw->flags = CCW_FLAG_SLI;
ccw->count = 8;
ccw->cda = (__u32)virt_to_phys(fmt_buffer);
ccw->cda = virt_to_dma32(fmt_buffer);
ccw++;
fmt_buffer++;
}
@ -2845,7 +2845,7 @@ dasd_eckd_build_format(struct dasd_device *base, struct dasd_device *startdev,
ccw->cmd_code = DASD_ECKD_CCW_WRITE_RECORD_ZERO;
ccw->flags = CCW_FLAG_SLI;
ccw->count = 8;
ccw->cda = (__u32)virt_to_phys(ect);
ccw->cda = virt_to_dma32(ect);
ccw++;
}
if ((intensity & ~0x08) & 0x04) { /* erase track */
@ -2860,7 +2860,7 @@ dasd_eckd_build_format(struct dasd_device *base, struct dasd_device *startdev,
ccw->cmd_code = DASD_ECKD_CCW_WRITE_CKD;
ccw->flags = CCW_FLAG_SLI;
ccw->count = 8;
ccw->cda = (__u32)virt_to_phys(ect);
ccw->cda = virt_to_dma32(ect);
} else { /* write remaining records */
for (i = 0; i < rpt; i++) {
ect = (struct eckd_count *) data;
@ -2895,7 +2895,7 @@ dasd_eckd_build_format(struct dasd_device *base, struct dasd_device *startdev,
DASD_ECKD_CCW_WRITE_CKD_MT;
ccw->flags = CCW_FLAG_SLI;
ccw->count = 8;
ccw->cda = (__u32)virt_to_phys(ect);
ccw->cda = virt_to_dma32(ect);
ccw++;
}
}
@ -3836,7 +3836,7 @@ dasd_eckd_dso_ras(struct dasd_device *device, struct dasd_block *block,
}
ccw = cqr->cpaddr;
ccw->cda = (__u32)virt_to_phys(cqr->data);
ccw->cda = virt_to_dma32(cqr->data);
ccw->cmd_code = DASD_ECKD_CCW_DSO;
ccw->count = size;
@ -3961,7 +3961,7 @@ static struct dasd_ccw_req *dasd_eckd_build_cp_cmd_single(
unsigned int blksize)
{
struct dasd_eckd_private *private;
unsigned long *idaws;
dma64_t *idaws;
struct LO_eckd_data *LO_data;
struct dasd_ccw_req *cqr;
struct ccw1 *ccw;
@ -4039,8 +4039,7 @@ static struct dasd_ccw_req *dasd_eckd_build_cp_cmd_single(
dasd_sfree_request(cqr, startdev);
return ERR_PTR(-EAGAIN);
}
idaws = (unsigned long *) (cqr->data +
sizeof(struct PFX_eckd_data));
idaws = (dma64_t *)(cqr->data + sizeof(struct PFX_eckd_data));
} else {
if (define_extent(ccw++, cqr->data, first_trk,
last_trk, cmd, basedev, 0) == -EAGAIN) {
@ -4050,8 +4049,7 @@ static struct dasd_ccw_req *dasd_eckd_build_cp_cmd_single(
dasd_sfree_request(cqr, startdev);
return ERR_PTR(-EAGAIN);
}
idaws = (unsigned long *) (cqr->data +
sizeof(struct DE_eckd_data));
idaws = (dma64_t *)(cqr->data + sizeof(struct DE_eckd_data));
}
/* Build locate_record+read/write/ccws. */
LO_data = (struct LO_eckd_data *) (idaws + cidaw);
@ -4105,11 +4103,11 @@ static struct dasd_ccw_req *dasd_eckd_build_cp_cmd_single(
ccw->cmd_code = rcmd;
ccw->count = count;
if (idal_is_needed(dst, blksize)) {
ccw->cda = (__u32)virt_to_phys(idaws);
ccw->cda = virt_to_dma32(idaws);
ccw->flags = CCW_FLAG_IDA;
idaws = idal_create_words(idaws, dst, blksize);
} else {
ccw->cda = (__u32)virt_to_phys(dst);
ccw->cda = virt_to_dma32(dst);
ccw->flags = 0;
}
ccw++;
@ -4152,7 +4150,7 @@ static struct dasd_ccw_req *dasd_eckd_build_cp_cmd_track(
unsigned int blk_per_trk,
unsigned int blksize)
{
unsigned long *idaws;
dma64_t *idaws;
struct dasd_ccw_req *cqr;
struct ccw1 *ccw;
struct req_iterator iter;
@ -4222,7 +4220,7 @@ static struct dasd_ccw_req *dasd_eckd_build_cp_cmd_track(
* (or 2K blocks on 31-bit)
* - the scope of a ccw and it's idal ends with the track boundaries
*/
idaws = (unsigned long *) (cqr->data + sizeof(struct PFX_eckd_data));
idaws = (dma64_t *)(cqr->data + sizeof(struct PFX_eckd_data));
recid = first_rec;
new_track = 1;
end_idaw = 0;
@ -4243,7 +4241,7 @@ static struct dasd_ccw_req *dasd_eckd_build_cp_cmd_track(
ccw[-1].flags |= CCW_FLAG_CC;
ccw->cmd_code = cmd;
ccw->count = len_to_track_end;
ccw->cda = (__u32)virt_to_phys(idaws);
ccw->cda = virt_to_dma32(idaws);
ccw->flags = CCW_FLAG_IDA;
ccw++;
recid += count;
@ -4259,7 +4257,7 @@ static struct dasd_ccw_req *dasd_eckd_build_cp_cmd_track(
* idaw ends
*/
if (!idaw_dst) {
if ((__u32)virt_to_phys(dst) & (IDA_BLOCK_SIZE - 1)) {
if ((unsigned long)(dst) & (IDA_BLOCK_SIZE - 1)) {
dasd_sfree_request(cqr, startdev);
return ERR_PTR(-ERANGE);
} else
@ -4279,7 +4277,7 @@ static struct dasd_ccw_req *dasd_eckd_build_cp_cmd_track(
* idal_create_words will handle cases where idaw_len
* is larger then IDA_BLOCK_SIZE
*/
if (!((__u32)virt_to_phys(idaw_dst + idaw_len) & (IDA_BLOCK_SIZE - 1)))
if (!((unsigned long)(idaw_dst + idaw_len) & (IDA_BLOCK_SIZE - 1)))
end_idaw = 1;
/* We also need to end the idaw at track end */
if (!len_to_track_end) {
@ -4738,11 +4736,11 @@ static struct dasd_ccw_req *dasd_eckd_build_cp_raw(struct dasd_device *startdev,
struct req_iterator iter;
struct dasd_ccw_req *cqr;
unsigned int trkcount;
unsigned long *idaws;
unsigned int size;
unsigned char cmd;
struct bio_vec bv;
struct ccw1 *ccw;
dma64_t *idaws;
int use_prefix;
void *data;
char *dst;
@ -4823,7 +4821,7 @@ static struct dasd_ccw_req *dasd_eckd_build_cp_raw(struct dasd_device *startdev,
trkcount, cmd, basedev, 0, 0);
}
idaws = (unsigned long *)(cqr->data + size);
idaws = (dma64_t *)(cqr->data + size);
len_to_track_end = 0;
if (start_padding_sectors) {
ccw[-1].flags |= CCW_FLAG_CC;
@ -4832,7 +4830,7 @@ static struct dasd_ccw_req *dasd_eckd_build_cp_raw(struct dasd_device *startdev,
ccw->count = 57326;
/* 64k map to one track */
len_to_track_end = 65536 - start_padding_sectors * 512;
ccw->cda = (__u32)virt_to_phys(idaws);
ccw->cda = virt_to_dma32(idaws);
ccw->flags |= CCW_FLAG_IDA;
ccw->flags |= CCW_FLAG_SLI;
ccw++;
@ -4851,7 +4849,7 @@ static struct dasd_ccw_req *dasd_eckd_build_cp_raw(struct dasd_device *startdev,
ccw->count = 57326;
/* 64k map to one track */
len_to_track_end = 65536;
ccw->cda = (__u32)virt_to_phys(idaws);
ccw->cda = virt_to_dma32(idaws);
ccw->flags |= CCW_FLAG_IDA;
ccw->flags |= CCW_FLAG_SLI;
ccw++;
@ -4908,9 +4906,9 @@ dasd_eckd_free_cp(struct dasd_ccw_req *cqr, struct request *req)
ccw++;
if (dst) {
if (ccw->flags & CCW_FLAG_IDA)
cda = *((char **)phys_to_virt(ccw->cda));
cda = *((char **)dma32_to_virt(ccw->cda));
else
cda = phys_to_virt(ccw->cda);
cda = dma32_to_virt(ccw->cda);
if (dst != cda) {
if (rq_data_dir(req) == READ)
memcpy(dst, cda, bv.bv_len);
@ -5060,7 +5058,7 @@ dasd_eckd_release(struct dasd_device *device)
ccw->cmd_code = DASD_ECKD_CCW_RELEASE;
ccw->flags |= CCW_FLAG_SLI;
ccw->count = 32;
ccw->cda = (__u32)virt_to_phys(cqr->data);
ccw->cda = virt_to_dma32(cqr->data);
cqr->startdev = device;
cqr->memdev = device;
clear_bit(DASD_CQR_FLAGS_USE_ERP, &cqr->flags);
@ -5115,7 +5113,7 @@ dasd_eckd_reserve(struct dasd_device *device)
ccw->cmd_code = DASD_ECKD_CCW_RESERVE;
ccw->flags |= CCW_FLAG_SLI;
ccw->count = 32;
ccw->cda = (__u32)virt_to_phys(cqr->data);
ccw->cda = virt_to_dma32(cqr->data);
cqr->startdev = device;
cqr->memdev = device;
clear_bit(DASD_CQR_FLAGS_USE_ERP, &cqr->flags);
@ -5169,7 +5167,7 @@ dasd_eckd_steal_lock(struct dasd_device *device)
ccw->cmd_code = DASD_ECKD_CCW_SLCK;
ccw->flags |= CCW_FLAG_SLI;
ccw->count = 32;
ccw->cda = (__u32)virt_to_phys(cqr->data);
ccw->cda = virt_to_dma32(cqr->data);
cqr->startdev = device;
cqr->memdev = device;
clear_bit(DASD_CQR_FLAGS_USE_ERP, &cqr->flags);
@ -5230,7 +5228,7 @@ static int dasd_eckd_snid(struct dasd_device *device,
ccw->cmd_code = DASD_ECKD_CCW_SNID;
ccw->flags |= CCW_FLAG_SLI;
ccw->count = 12;
ccw->cda = (__u32)virt_to_phys(cqr->data);
ccw->cda = virt_to_dma32(cqr->data);
cqr->startdev = device;
cqr->memdev = device;
clear_bit(DASD_CQR_FLAGS_USE_ERP, &cqr->flags);
@ -5297,7 +5295,7 @@ dasd_eckd_performance(struct dasd_device *device, void __user *argp)
ccw->cmd_code = DASD_ECKD_CCW_PSF;
ccw->count = sizeof(struct dasd_psf_prssd_data);
ccw->flags |= CCW_FLAG_CC;
ccw->cda = (__u32)virt_to_phys(prssdp);
ccw->cda = virt_to_dma32(prssdp);
/* Read Subsystem Data - Performance Statistics */
stats = (struct dasd_rssd_perf_stats_t *) (prssdp + 1);
@ -5306,7 +5304,7 @@ dasd_eckd_performance(struct dasd_device *device, void __user *argp)
ccw++;
ccw->cmd_code = DASD_ECKD_CCW_RSSD;
ccw->count = sizeof(struct dasd_rssd_perf_stats_t);
ccw->cda = (__u32)virt_to_phys(stats);
ccw->cda = virt_to_dma32(stats);
cqr->buildclk = get_tod_clock();
cqr->status = DASD_CQR_FILLED;
@ -5450,7 +5448,7 @@ static int dasd_symm_io(struct dasd_device *device, void __user *argp)
ccw->cmd_code = DASD_ECKD_CCW_PSF;
ccw->count = usrparm.psf_data_len;
ccw->flags |= CCW_FLAG_CC;
ccw->cda = (__u32)virt_to_phys(psf_data);
ccw->cda = virt_to_dma32(psf_data);
ccw++;
@ -5458,7 +5456,7 @@ static int dasd_symm_io(struct dasd_device *device, void __user *argp)
ccw->cmd_code = DASD_ECKD_CCW_RSSD;
ccw->count = usrparm.rssd_result_len;
ccw->flags = CCW_FLAG_SLI ;
ccw->cda = (__u32)virt_to_phys(rssd_result);
ccw->cda = virt_to_dma32(rssd_result);
rc = dasd_sleep_on(cqr);
if (rc)
@ -5527,9 +5525,9 @@ dasd_eckd_dump_ccw_range(struct dasd_device *device, struct ccw1 *from,
/* get pointer to data (consider IDALs) */
if (from->flags & CCW_FLAG_IDA)
datap = (char *)*((addr_t *)phys_to_virt(from->cda));
datap = (char *)*((addr_t *)dma32_to_virt(from->cda));
else
datap = phys_to_virt(from->cda);
datap = dma32_to_virt(from->cda);
/* dump data (max 128 bytes) */
for (count = 0; count < from->count && count < 128; count++) {
@ -5598,7 +5596,7 @@ static void dasd_eckd_dump_sense_ccw(struct dasd_device *device,
scsw_dstat(&irb->scsw), scsw_cstat(&irb->scsw),
req ? req->intrc : 0);
len += sprintf(page + len, "Failing CCW: %px\n",
phys_to_virt(irb->scsw.cmd.cpa));
dma32_to_virt(irb->scsw.cmd.cpa));
if (irb->esw.esw0.erw.cons) {
for (sl = 0; sl < 4; sl++) {
len += sprintf(page + len, "Sense(hex) %2d-%2d:",
@ -5641,7 +5639,7 @@ static void dasd_eckd_dump_sense_ccw(struct dasd_device *device,
/* print failing CCW area (maximum 4) */
/* scsw->cda is either valid or zero */
from = ++to;
fail = phys_to_virt(irb->scsw.cmd.cpa); /* failing CCW */
fail = dma32_to_virt(irb->scsw.cmd.cpa); /* failing CCW */
if (from < fail - 2) {
from = fail - 2; /* there is a gap - print header */
dev_err(dev, "......\n");
@ -5691,12 +5689,12 @@ static void dasd_eckd_dump_sense_tcw(struct dasd_device *device,
(irb->scsw.tm.ifob << 7) | irb->scsw.tm.sesq,
req ? req->intrc : 0);
len += sprintf(page + len, "Failing TCW: %px\n",
phys_to_virt(irb->scsw.tm.tcw));
dma32_to_virt(irb->scsw.tm.tcw));
tsb = NULL;
sense = NULL;
if (irb->scsw.tm.tcw && (irb->scsw.tm.fcxs & 0x01))
tsb = tcw_get_tsb(phys_to_virt(irb->scsw.tm.tcw));
tsb = tcw_get_tsb(dma32_to_virt(irb->scsw.tm.tcw));
if (tsb) {
len += sprintf(page + len, "tsb->length %d\n", tsb->length);
@ -5906,7 +5904,7 @@ retry:
ccw->count = sizeof(struct dasd_psf_prssd_data);
ccw->flags |= CCW_FLAG_CC;
ccw->flags |= CCW_FLAG_SLI;
ccw->cda = (__u32)virt_to_phys(prssdp);
ccw->cda = virt_to_dma32(prssdp);
/* Read Subsystem Data - message buffer */
message_buf = (struct dasd_rssd_messages *) (prssdp + 1);
@ -5916,7 +5914,7 @@ retry:
ccw->cmd_code = DASD_ECKD_CCW_RSSD;
ccw->count = sizeof(struct dasd_rssd_messages);
ccw->flags |= CCW_FLAG_SLI;
ccw->cda = (__u32)virt_to_phys(message_buf);
ccw->cda = virt_to_dma32(message_buf);
cqr->buildclk = get_tod_clock();
cqr->status = DASD_CQR_FILLED;
@ -5997,14 +5995,14 @@ static int dasd_eckd_query_host_access(struct dasd_device *device,
ccw->count = sizeof(struct dasd_psf_prssd_data);
ccw->flags |= CCW_FLAG_CC;
ccw->flags |= CCW_FLAG_SLI;
ccw->cda = (__u32)virt_to_phys(prssdp);
ccw->cda = virt_to_dma32(prssdp);
/* Read Subsystem Data - query host access */
ccw++;
ccw->cmd_code = DASD_ECKD_CCW_RSSD;
ccw->count = sizeof(struct dasd_psf_query_host_access);
ccw->flags |= CCW_FLAG_SLI;
ccw->cda = (__u32)virt_to_phys(host_access);
ccw->cda = virt_to_dma32(host_access);
cqr->buildclk = get_tod_clock();
cqr->status = DASD_CQR_FILLED;
@ -6239,14 +6237,14 @@ static int dasd_eckd_query_pprc_status(struct dasd_device *device,
ccw->count = sizeof(struct dasd_psf_prssd_data);
ccw->flags |= CCW_FLAG_CC;
ccw->flags |= CCW_FLAG_SLI;
ccw->cda = (__u32)(addr_t)prssdp;
ccw->cda = virt_to_dma32(prssdp);
/* Read Subsystem Data - query host access */
ccw++;
ccw->cmd_code = DASD_ECKD_CCW_RSSD;
ccw->count = sizeof(*pprc_data);
ccw->flags |= CCW_FLAG_SLI;
ccw->cda = (__u32)(addr_t)pprc_data;
ccw->cda = virt_to_dma32(pprc_data);
cqr->buildclk = get_tod_clock();
cqr->status = DASD_CQR_FILLED;
@ -6340,7 +6338,7 @@ dasd_eckd_psf_cuir_response(struct dasd_device *device, int response,
psf_cuir->ssid = device->path[pos].ssid;
ccw = cqr->cpaddr;
ccw->cmd_code = DASD_ECKD_CCW_PSF;
ccw->cda = (__u32)virt_to_phys(psf_cuir);
ccw->cda = virt_to_dma32(psf_cuir);
ccw->flags = CCW_FLAG_SLI;
ccw->count = sizeof(struct dasd_psf_cuir_response);

View File

@ -485,7 +485,7 @@ int dasd_eer_enable(struct dasd_device *device)
ccw->cmd_code = DASD_ECKD_CCW_SNSS;
ccw->count = SNSS_DATA_SIZE;
ccw->flags = 0;
ccw->cda = (__u32)virt_to_phys(cqr->data);
ccw->cda = virt_to_dma32(cqr->data);
cqr->buildclk = get_tod_clock();
cqr->status = DASD_CQR_FILLED;

View File

@ -78,7 +78,7 @@ define_extent(struct ccw1 * ccw, struct DE_fba_data *data, int rw,
ccw->cmd_code = DASD_FBA_CCW_DEFINE_EXTENT;
ccw->flags = 0;
ccw->count = 16;
ccw->cda = (__u32)virt_to_phys(data);
ccw->cda = virt_to_dma32(data);
memset(data, 0, sizeof (struct DE_fba_data));
if (rw == WRITE)
(data->mask).perm = 0x0;
@ -98,7 +98,7 @@ locate_record(struct ccw1 * ccw, struct LO_fba_data *data, int rw,
ccw->cmd_code = DASD_FBA_CCW_LOCATE;
ccw->flags = 0;
ccw->count = 8;
ccw->cda = (__u32)virt_to_phys(data);
ccw->cda = virt_to_dma32(data);
memset(data, 0, sizeof (struct LO_fba_data));
if (rw == WRITE)
data->operation.cmd = 0x5;
@ -257,7 +257,7 @@ static void ccw_write_zero(struct ccw1 *ccw, int count)
ccw->cmd_code = DASD_FBA_CCW_WRITE;
ccw->flags |= CCW_FLAG_SLI;
ccw->count = count;
ccw->cda = (__u32)virt_to_phys(dasd_fba_zero_page);
ccw->cda = virt_to_dma32(dasd_fba_zero_page);
}
/*
@ -427,7 +427,7 @@ static struct dasd_ccw_req *dasd_fba_build_cp_regular(
struct request *req)
{
struct dasd_fba_private *private = block->base->private;
unsigned long *idaws;
dma64_t *idaws;
struct LO_fba_data *LO_data;
struct dasd_ccw_req *cqr;
struct ccw1 *ccw;
@ -487,7 +487,7 @@ static struct dasd_ccw_req *dasd_fba_build_cp_regular(
define_extent(ccw++, cqr->data, rq_data_dir(req),
block->bp_block, blk_rq_pos(req), blk_rq_sectors(req));
/* Build locate_record + read/write ccws. */
idaws = (unsigned long *) (cqr->data + sizeof(struct DE_fba_data));
idaws = (dma64_t *)(cqr->data + sizeof(struct DE_fba_data));
LO_data = (struct LO_fba_data *) (idaws + cidaw);
/* Locate record for all blocks for smart devices. */
if (private->rdc_data.mode.bits.data_chain != 0) {
@ -523,11 +523,11 @@ static struct dasd_ccw_req *dasd_fba_build_cp_regular(
ccw->cmd_code = cmd;
ccw->count = block->bp_block;
if (idal_is_needed(dst, blksize)) {
ccw->cda = (__u32)virt_to_phys(idaws);
ccw->cda = virt_to_dma32(idaws);
ccw->flags = CCW_FLAG_IDA;
idaws = idal_create_words(idaws, dst, blksize);
} else {
ccw->cda = (__u32)virt_to_phys(dst);
ccw->cda = virt_to_dma32(dst);
ccw->flags = 0;
}
ccw++;
@ -585,9 +585,9 @@ dasd_fba_free_cp(struct dasd_ccw_req *cqr, struct request *req)
ccw++;
if (dst) {
if (ccw->flags & CCW_FLAG_IDA)
cda = *((char **)phys_to_virt(ccw->cda));
cda = *((char **)dma32_to_virt(ccw->cda));
else
cda = phys_to_virt(ccw->cda);
cda = dma32_to_virt(ccw->cda);
if (dst != cda) {
if (rq_data_dir(req) == READ)
memcpy(dst, cda, bv.bv_len);
@ -672,7 +672,7 @@ dasd_fba_dump_sense(struct dasd_device *device, struct dasd_ccw_req * req,
len += sprintf(page + len, "in req: %px CS: 0x%02X DS: 0x%02X\n",
req, irb->scsw.cmd.cstat, irb->scsw.cmd.dstat);
len += sprintf(page + len, "Failing CCW: %px\n",
(void *) (addr_t) irb->scsw.cmd.cpa);
(void *)(u64)dma32_to_u32(irb->scsw.cmd.cpa));
if (irb->esw.esw0.erw.cons) {
for (sl = 0; sl < 4; sl++) {
len += sprintf(page + len, "Sense(hex) %2d-%2d:",
@ -701,7 +701,7 @@ dasd_fba_dump_sense(struct dasd_device *device, struct dasd_ccw_req * req,
for (count = 0; count < 32 && count < act->count;
count += sizeof(int))
len += sprintf(page + len, " %08X",
((int *) (addr_t) act->cda)
((int *)dma32_to_virt(act->cda))
[(count>>2)]);
len += sprintf(page + len, "\n");
act++;
@ -710,18 +710,18 @@ dasd_fba_dump_sense(struct dasd_device *device, struct dasd_ccw_req * req,
/* print failing CCW area */
len = 0;
if (act < ((struct ccw1 *)(addr_t) irb->scsw.cmd.cpa) - 2) {
act = ((struct ccw1 *)(addr_t) irb->scsw.cmd.cpa) - 2;
if (act < ((struct ccw1 *)dma32_to_virt(irb->scsw.cmd.cpa)) - 2) {
act = ((struct ccw1 *)dma32_to_virt(irb->scsw.cmd.cpa)) - 2;
len += sprintf(page + len, "......\n");
}
end = min((struct ccw1 *)(addr_t) irb->scsw.cmd.cpa + 2, last);
end = min((struct ccw1 *)dma32_to_virt(irb->scsw.cmd.cpa) + 2, last);
while (act <= end) {
len += sprintf(page + len, "CCW %px: %08X %08X DAT:",
act, ((int *) act)[0], ((int *) act)[1]);
for (count = 0; count < 32 && count < act->count;
count += sizeof(int))
len += sprintf(page + len, " %08X",
((int *) (addr_t) act->cda)
((int *)dma32_to_virt(act->cda))
[(count>>2)]);
len += sprintf(page + len, "\n");
act++;
@ -738,7 +738,7 @@ dasd_fba_dump_sense(struct dasd_device *device, struct dasd_ccw_req * req,
for (count = 0; count < 32 && count < act->count;
count += sizeof(int))
len += sprintf(page + len, " %08X",
((int *) (addr_t) act->cda)
((int *)dma32_to_virt(act->cda))
[(count>>2)]);
len += sprintf(page + len, "\n");
act++;

View File

@ -920,7 +920,7 @@ __dcssblk_direct_access(struct dcssblk_dev_info *dev_info, pgoff_t pgoff,
dev_sz = dev_info->end - dev_info->start + 1;
if (kaddr)
*kaddr = (void *) dev_info->start + offset;
*kaddr = __va(dev_info->start + offset);
if (pfn)
*pfn = __pfn_to_pfn_t(PFN_DOWN(dev_info->start + offset),
PFN_DEV|PFN_SPECIAL);

View File

@ -131,7 +131,7 @@ static void scm_request_done(struct scm_request *scmrq)
for (i = 0; i < nr_requests_per_io && scmrq->request[i]; i++) {
msb = &scmrq->aob->msb[i];
aidaw = (u64)phys_to_virt(msb->data_addr);
aidaw = (u64)dma64_to_virt(msb->data_addr);
if ((msb->flags & MSB_FLAG_IDA) && aidaw &&
IS_ALIGNED(aidaw, PAGE_SIZE))
@ -196,12 +196,12 @@ static int scm_request_prepare(struct scm_request *scmrq)
msb->scm_addr = scmdev->address + ((u64) blk_rq_pos(req) << 9);
msb->oc = (rq_data_dir(req) == READ) ? MSB_OC_READ : MSB_OC_WRITE;
msb->flags |= MSB_FLAG_IDA;
msb->data_addr = (u64)virt_to_phys(aidaw);
msb->data_addr = virt_to_dma64(aidaw);
rq_for_each_segment(bv, req, iter) {
WARN_ON(bv.bv_offset);
msb->blk_count += bv.bv_len >> 12;
aidaw->data_addr = virt_to_phys(page_address(bv.bv_page));
aidaw->data_addr = virt_to_dma64(page_address(bv.bv_page));
aidaw++;
}


@ -159,7 +159,7 @@ static void raw3215_mk_read_req(struct raw3215_info *raw)
ccw->cmd_code = 0x0A; /* read inquiry */
ccw->flags = 0x20; /* ignore incorrect length */
ccw->count = 160;
ccw->cda = (__u32)__pa(raw->inbuf);
ccw->cda = virt_to_dma32(raw->inbuf);
}
/*
@ -218,7 +218,7 @@ static void raw3215_mk_write_req(struct raw3215_info *raw)
ccw[-1].flags |= 0x40; /* use command chaining */
ccw->cmd_code = 0x01; /* write, auto carrier return */
ccw->flags = 0x20; /* ignore incorrect length ind. */
ccw->cda = (__u32)__pa(raw->buffer + ix);
ccw->cda = virt_to_dma32(raw->buffer + ix);
count = len;
if (ix + count > RAW3215_BUFFER_SIZE)
count = RAW3215_BUFFER_SIZE - ix;


@ -126,7 +126,7 @@ static int fs3270_activate(struct raw3270_view *view)
raw3270_request_set_cmd(fp->init, TC_EWRITEA);
raw3270_request_set_idal(fp->init, fp->rdbuf);
fp->init->rescnt = 0;
cp = fp->rdbuf->data[0];
cp = dma64_to_virt(fp->rdbuf->data[0]);
if (fp->rdbuf_size == 0) {
/* No saved buffer. Just clear the screen. */
fp->init->ccw.count = 1;
@ -164,7 +164,7 @@ static void fs3270_save_callback(struct raw3270_request *rq, void *data)
fp = (struct fs3270 *)rq->view;
/* Correct idal buffer element 0 address. */
fp->rdbuf->data[0] -= 5;
fp->rdbuf->data[0] = dma64_add(fp->rdbuf->data[0], -5);
fp->rdbuf->size += 5;
/*
@ -202,7 +202,7 @@ static void fs3270_deactivate(struct raw3270_view *view)
* room for the TW_KR/TO_SBA/<address>/<address>/TO_IC sequence
* in the activation command.
*/
fp->rdbuf->data[0] += 5;
fp->rdbuf->data[0] = dma64_add(fp->rdbuf->data[0], 5);
fp->rdbuf->size -= 5;
raw3270_request_set_idal(fp->init, fp->rdbuf);
fp->init->rescnt = 0;
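
Since the bitwise types do not support plain arithmetic, the +/- 5
adjustments of the idal buffer address above go through dma64_add(). A
negative offset works because the addition wraps in unsigned
arithmetic; the helper is presumably no more than (sketch):

	static inline dma64_t dma64_add(dma64_t a, u64 b)
	{
		return (__force dma64_t)((__force u64)a + b);
	}

dma32_add() and dma32_and(), used further down for IDAW list
fabrication, follow the same pattern on u32.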
@ -521,13 +521,13 @@ static const struct file_operations fs3270_fops = {
static void fs3270_create_cb(int minor)
{
__register_chrdev(IBM_FS3270_MAJOR, minor, 1, "tub", &fs3270_fops);
device_create(class3270, NULL, MKDEV(IBM_FS3270_MAJOR, minor),
device_create(&class3270, NULL, MKDEV(IBM_FS3270_MAJOR, minor),
NULL, "3270/tub%d", minor);
}
static void fs3270_destroy_cb(int minor)
{
device_destroy(class3270, MKDEV(IBM_FS3270_MAJOR, minor));
device_destroy(&class3270, MKDEV(IBM_FS3270_MAJOR, minor));
__unregister_chrdev(IBM_FS3270_MAJOR, minor, 1, "tub");
}
@ -546,7 +546,7 @@ static int __init fs3270_init(void)
rc = __register_chrdev(IBM_FS3270_MAJOR, 0, 1, "fs3270", &fs3270_fops);
if (rc)
return rc;
device_create(class3270, NULL, MKDEV(IBM_FS3270_MAJOR, 0),
device_create(&class3270, NULL, MKDEV(IBM_FS3270_MAJOR, 0),
NULL, "3270/tub");
raw3270_register_notifier(&fs3270_notifier);
return 0;
@ -555,7 +555,7 @@ static int __init fs3270_init(void)
static void __exit fs3270_exit(void)
{
raw3270_unregister_notifier(&fs3270_notifier);
device_destroy(class3270, MKDEV(IBM_FS3270_MAJOR, 0));
device_destroy(&class3270, MKDEV(IBM_FS3270_MAJOR, 0));
__unregister_chrdev(IBM_FS3270_MAJOR, 0, 1, "fs3270");
}


@ -29,7 +29,9 @@
#include <linux/device.h>
#include <linux/mutex.h>
struct class *class3270;
const struct class class3270 = {
.name = "3270",
};
EXPORT_SYMBOL(class3270);
/* The main 3270 data structure. */
@ -160,7 +162,7 @@ struct raw3270_request *raw3270_request_alloc(size_t size)
/*
* Setup ccw.
*/
rq->ccw.cda = __pa(rq->buffer);
rq->ccw.cda = virt_to_dma32(rq->buffer);
rq->ccw.flags = CCW_FLAG_SLI;
return rq;
@ -186,7 +188,7 @@ int raw3270_request_reset(struct raw3270_request *rq)
return -EBUSY;
rq->ccw.cmd_code = 0;
rq->ccw.count = 0;
rq->ccw.cda = __pa(rq->buffer);
rq->ccw.cda = virt_to_dma32(rq->buffer);
rq->ccw.flags = CCW_FLAG_SLI;
rq->rescnt = 0;
rq->rc = 0;
@ -221,7 +223,7 @@ EXPORT_SYMBOL(raw3270_request_add_data);
*/
void raw3270_request_set_data(struct raw3270_request *rq, void *data, size_t size)
{
rq->ccw.cda = __pa(data);
rq->ccw.cda = virt_to_dma32(data);
rq->ccw.count = size;
}
EXPORT_SYMBOL(raw3270_request_set_data);
@ -231,7 +233,7 @@ EXPORT_SYMBOL(raw3270_request_set_data);
*/
void raw3270_request_set_idal(struct raw3270_request *rq, struct idal_buffer *ib)
{
rq->ccw.cda = __pa(ib->data);
rq->ccw.cda = virt_to_dma32(ib->data);
rq->ccw.count = ib->size;
rq->ccw.flags |= CCW_FLAG_IDA;
}
@ -577,7 +579,7 @@ static void raw3270_read_modified(struct raw3270 *rp)
rp->init_readmod.ccw.cmd_code = TC_READMOD;
rp->init_readmod.ccw.flags = CCW_FLAG_SLI;
rp->init_readmod.ccw.count = sizeof(rp->init_data);
rp->init_readmod.ccw.cda = (__u32)__pa(rp->init_data);
rp->init_readmod.ccw.cda = virt_to_dma32(rp->init_data);
rp->init_readmod.callback = raw3270_read_modified_cb;
rp->init_readmod.callback_data = rp->init_data;
rp->state = RAW3270_STATE_READMOD;
@ -597,7 +599,7 @@ static void raw3270_writesf_readpart(struct raw3270 *rp)
rp->init_readpart.ccw.cmd_code = TC_WRITESF;
rp->init_readpart.ccw.flags = CCW_FLAG_SLI;
rp->init_readpart.ccw.count = sizeof(wbuf);
rp->init_readpart.ccw.cda = (__u32)__pa(&rp->init_data);
rp->init_readpart.ccw.cda = virt_to_dma32(&rp->init_data);
rp->state = RAW3270_STATE_W4ATTN;
raw3270_start_irq(&rp->init_view, &rp->init_readpart);
}
@ -635,7 +637,7 @@ static int __raw3270_reset_device(struct raw3270 *rp)
rp->init_reset.ccw.cmd_code = TC_EWRITEA;
rp->init_reset.ccw.flags = CCW_FLAG_SLI;
rp->init_reset.ccw.count = 1;
rp->init_reset.ccw.cda = (__u32)__pa(rp->init_data);
rp->init_reset.ccw.cda = virt_to_dma32(rp->init_data);
rp->init_reset.callback = raw3270_reset_device_cb;
rc = __raw3270_start(rp, &rp->init_view, &rp->init_reset);
if (rc == 0 && rp->state == RAW3270_STATE_INIT)
@ -1316,23 +1318,25 @@ static int raw3270_init(void)
return 0;
raw3270_registered = 1;
rc = ccw_driver_register(&raw3270_ccw_driver);
if (rc == 0) {
/* Create attributes for early (= console) device. */
mutex_lock(&raw3270_mutex);
class3270 = class_create("3270");
list_for_each_entry(rp, &raw3270_devices, list) {
get_device(&rp->cdev->dev);
raw3270_create_attributes(rp);
}
mutex_unlock(&raw3270_mutex);
if (rc)
return rc;
rc = class_register(&class3270);
if (rc)
return rc;
/* Create attributes for early (= console) device. */
mutex_lock(&raw3270_mutex);
list_for_each_entry(rp, &raw3270_devices, list) {
get_device(&rp->cdev->dev);
raw3270_create_attributes(rp);
}
return rc;
mutex_unlock(&raw3270_mutex);
return 0;
}
static void raw3270_exit(void)
{
ccw_driver_unregister(&raw3270_ccw_driver);
class_destroy(class3270);
class_unregister(&class3270);
}
MODULE_LICENSE("GPL");
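
The class3270 change follows the driver core's move away from
heap-allocated classes: instead of a class_create() call that can
return ERR_PTR(), the class is a compile-time constant that is
registered and unregistered. The same pattern repeats below for
tape_class, vmlogrdr_class, vmur_class and zcrypt_class; roughly
("foo" is a placeholder name):

	/* old: pointer to a dynamically allocated class */
	static struct class *foo_class;

	foo_class = class_create("foo");	/* ERR_PTR() on error */
	...
	class_destroy(foo_class);

	/* new: statically defined, can live in read-only memory */
	static const struct class foo_class = {
		.name = "foo",
	};

	rc = class_register(&foo_class);	/* 0 or -errno */
	...
	class_unregister(&foo_class);

device_create(), device_destroy() and friends then simply take
&foo_class instead of the pointer variable.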


@ -14,7 +14,7 @@
struct raw3270;
struct raw3270_view;
extern struct class *class3270;
extern const struct class class3270;
/* 3270 CCW request */
struct raw3270_request {


@ -305,7 +305,9 @@ tape_ccw_cc(struct ccw1 *ccw, __u8 cmd_code, __u16 memsize, void *cda)
ccw->cmd_code = cmd_code;
ccw->flags = CCW_FLAG_CC;
ccw->count = memsize;
ccw->cda = (__u32)(addr_t) cda;
ccw->cda = 0;
if (cda)
ccw->cda = virt_to_dma32(cda);
return ccw + 1;
}
@ -315,7 +317,9 @@ tape_ccw_end(struct ccw1 *ccw, __u8 cmd_code, __u16 memsize, void *cda)
ccw->cmd_code = cmd_code;
ccw->flags = 0;
ccw->count = memsize;
ccw->cda = (__u32)(addr_t) cda;
ccw->cda = 0;
if (cda)
ccw->cda = virt_to_dma32(cda);
return ccw + 1;
}
@ -325,7 +329,7 @@ tape_ccw_cmd(struct ccw1 *ccw, __u8 cmd_code)
ccw->cmd_code = cmd_code;
ccw->flags = 0;
ccw->count = 0;
ccw->cda = (__u32)(addr_t) &ccw->cmd_code;
ccw->cda = virt_to_dma32(&ccw->cmd_code);
return ccw + 1;
}
@ -336,7 +340,7 @@ tape_ccw_repeat(struct ccw1 *ccw, __u8 cmd_code, int count)
ccw->cmd_code = cmd_code;
ccw->flags = CCW_FLAG_CC;
ccw->count = 0;
ccw->cda = (__u32)(addr_t) &ccw->cmd_code;
ccw->cda = virt_to_dma32(&ccw->cmd_code);
ccw++;
}
return ccw;


@ -22,7 +22,9 @@ MODULE_DESCRIPTION(
);
MODULE_LICENSE("GPL");
static struct class *tape_class;
static const struct class tape_class = {
.name = "tape390",
};
/*
* Register a tape device and return a pointer to the cdev structure.
@ -74,7 +76,7 @@ struct tape_class_device *register_tape_dev(
if (rc)
goto fail_with_cdev;
tcd->class_device = device_create(tape_class, device,
tcd->class_device = device_create(&tape_class, device,
tcd->char_device->dev, NULL,
"%s", tcd->device_name);
rc = PTR_ERR_OR_ZERO(tcd->class_device);
@ -91,7 +93,7 @@ struct tape_class_device *register_tape_dev(
return tcd;
fail_with_class_device:
device_destroy(tape_class, tcd->char_device->dev);
device_destroy(&tape_class, tcd->char_device->dev);
fail_with_cdev:
cdev_del(tcd->char_device);
@ -107,7 +109,7 @@ void unregister_tape_dev(struct device *device, struct tape_class_device *tcd)
{
if (tcd != NULL && !IS_ERR(tcd)) {
sysfs_remove_link(&device->kobj, tcd->mode_name);
device_destroy(tape_class, tcd->char_device->dev);
device_destroy(&tape_class, tcd->char_device->dev);
cdev_del(tcd->char_device);
kfree(tcd);
}
@ -117,15 +119,12 @@ EXPORT_SYMBOL(unregister_tape_dev);
static int __init tape_init(void)
{
tape_class = class_create("tape390");
return 0;
return class_register(&tape_class);
}
static void __exit tape_exit(void)
{
class_destroy(tape_class);
tape_class = NULL;
class_unregister(&tape_class);
}
postcore_initcall(tape_init);


@ -679,7 +679,9 @@ static const struct attribute_group *vmlogrdr_attr_groups[] = {
NULL,
};
static struct class *vmlogrdr_class;
static const struct class vmlogrdr_class = {
.name = "vmlogrdr_class",
};
static struct device_driver vmlogrdr_driver = {
.name = "vmlogrdr",
.bus = &iucv_bus,
@ -699,12 +701,9 @@ static int vmlogrdr_register_driver(void)
if (ret)
goto out_iucv;
vmlogrdr_class = class_create("vmlogrdr");
if (IS_ERR(vmlogrdr_class)) {
ret = PTR_ERR(vmlogrdr_class);
vmlogrdr_class = NULL;
ret = class_register(&vmlogrdr_class);
if (ret)
goto out_driver;
}
return 0;
out_driver:
@ -718,8 +717,7 @@ out:
static void vmlogrdr_unregister_driver(void)
{
class_destroy(vmlogrdr_class);
vmlogrdr_class = NULL;
class_unregister(&vmlogrdr_class);
driver_unregister(&vmlogrdr_driver);
iucv_unregister(&vmlogrdr_iucv_handler, 1);
}
@ -754,7 +752,7 @@ static int vmlogrdr_register_device(struct vmlogrdr_priv_t *priv)
return ret;
}
priv->class_device = device_create(vmlogrdr_class, dev,
priv->class_device = device_create(&vmlogrdr_class, dev,
MKDEV(vmlogrdr_major,
priv->minor_num),
priv, "%s", dev_name(dev));
@ -771,7 +769,7 @@ static int vmlogrdr_register_device(struct vmlogrdr_priv_t *priv)
static int vmlogrdr_unregister_device(struct vmlogrdr_priv_t *priv)
{
device_destroy(vmlogrdr_class, MKDEV(vmlogrdr_major, priv->minor_num));
device_destroy(&vmlogrdr_class, MKDEV(vmlogrdr_major, priv->minor_num));
if (priv->device != NULL) {
device_unregister(priv->device);
priv->device=NULL;


@ -48,7 +48,9 @@ MODULE_DESCRIPTION("s390 z/VM virtual unit record device driver");
MODULE_LICENSE("GPL");
static dev_t ur_first_dev_maj_min;
static struct class *vmur_class;
static const struct class vmur_class = {
.name = "vmur",
};
static struct debug_info *vmur_dbf;
/* We put the device's record length (for writes) in the driver_info field */
@ -195,7 +197,7 @@ static void free_chan_prog(struct ccw1 *cpa)
struct ccw1 *ptr = cpa;
while (ptr->cda) {
kfree(phys_to_virt(ptr->cda));
kfree(dma32_to_virt(ptr->cda));
ptr++;
}
kfree(cpa);
@ -237,7 +239,7 @@ static struct ccw1 *alloc_chan_prog(const char __user *ubuf, int rec_count,
free_chan_prog(cpa);
return ERR_PTR(-ENOMEM);
}
cpa[i].cda = (u32)virt_to_phys(kbuf);
cpa[i].cda = virt_to_dma32(kbuf);
if (copy_from_user(kbuf, ubuf, reclen)) {
free_chan_prog(cpa);
return ERR_PTR(-EFAULT);
@ -912,7 +914,7 @@ static int ur_set_online(struct ccw_device *cdev)
goto fail_free_cdev;
}
urd->device = device_create(vmur_class, &cdev->dev,
urd->device = device_create(&vmur_class, &cdev->dev,
urd->char_device->dev, NULL, "%s", node_id);
if (IS_ERR(urd->device)) {
rc = PTR_ERR(urd->device);
@ -958,7 +960,7 @@ static int ur_set_offline_force(struct ccw_device *cdev, int force)
/* Work not run yet - need to release reference here */
urdev_put(urd);
}
device_destroy(vmur_class, urd->char_device->dev);
device_destroy(&vmur_class, urd->char_device->dev);
cdev_del(urd->char_device);
urd->char_device = NULL;
rc = 0;
@ -1022,11 +1024,9 @@ static int __init ur_init(void)
debug_set_level(vmur_dbf, 6);
vmur_class = class_create("vmur");
if (IS_ERR(vmur_class)) {
rc = PTR_ERR(vmur_class);
rc = class_register(&vmur_class);
if (rc)
goto fail_free_dbf;
}
rc = ccw_driver_register(&ur_driver);
if (rc)
@ -1046,7 +1046,7 @@ static int __init ur_init(void)
fail_unregister_driver:
ccw_driver_unregister(&ur_driver);
fail_class_destroy:
class_destroy(vmur_class);
class_unregister(&vmur_class);
fail_free_dbf:
debug_unregister(vmur_dbf);
return rc;
@ -1056,7 +1056,7 @@ static void __exit ur_exit(void)
{
unregister_chrdev_region(ur_first_dev_maj_min, NUM_MINORS);
ccw_driver_unregister(&ur_driver);
class_destroy(vmur_class);
class_unregister(&vmur_class);
debug_unregister(vmur_dbf);
pr_info("%s unloaded.\n", ur_banner);
}


@ -240,7 +240,7 @@ static int __ccwgroup_create_symlinks(struct ccwgroup_device *gdev)
rc = sysfs_create_link(&gdev->cdev[i]->dev.kobj,
&gdev->dev.kobj, "group_device");
if (rc) {
for (--i; i >= 0; i--)
while (i--)
sysfs_remove_link(&gdev->cdev[i]->dev.kobj,
"group_device");
return rc;
@ -251,7 +251,7 @@ static int __ccwgroup_create_symlinks(struct ccwgroup_device *gdev)
rc = sysfs_create_link(&gdev->dev.kobj,
&gdev->cdev[i]->dev.kobj, str);
if (rc) {
for (--i; i >= 0; i--) {
while (i--) {
sprintf(str, "cdev%d", i);
sysfs_remove_link(&gdev->dev.kobj, str);
}
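
The two unwind loops above are converted from "for (--i; i >= 0; i--)"
to the equivalent "while (i--)", which reads as "undo everything before
the step that failed" and, unlike the for variant, also works when i is
unsigned. Minimal illustration with made-up do_step()/undo_step()
helpers (the same conversion appears again in chsc_add_cmg_attr()
below):

	int i, rc = 0;

	for (i = 0; i < n; i++) {
		rc = do_step(i);	/* hypothetical setup step */
		if (rc)
			break;
	}
	if (rc) {
		/*
		 * i is the index of the step that failed; while (i--)
		 * tests the old value before decrementing, so the body
		 * sees i-1, i-2, ..., 0 - exactly the steps that
		 * succeeded.
		 */
		while (i--)
			undo_step(i);	/* hypothetical cleanup */
	}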


@ -191,7 +191,7 @@ EXPORT_SYMBOL_GPL(chsc_ssqd);
* Returns 0 on success.
*/
int chsc_sadc(struct subchannel_id schid, struct chsc_scssc_area *scssc,
u64 summary_indicator_addr, u64 subchannel_indicator_addr, u8 isc)
dma64_t summary_indicator_addr, dma64_t subchannel_indicator_addr, u8 isc)
{
memset(scssc, 0, sizeof(*scssc));
scssc->request.length = 0x0fe0;
@ -844,7 +844,7 @@ chsc_add_cmg_attr(struct channel_subsystem *css)
}
return ret;
cleanup:
for (--i; i >= 0; i--) {
while (i--) {
if (!css->chps[i])
continue;
chp_remove_cmg_attr(css->chps[i]);
@ -861,9 +861,9 @@ int __chsc_do_secm(struct channel_subsystem *css, int enable)
u32 key : 4;
u32 : 28;
u32 zeroes1;
u32 cub_addr1;
dma32_t cub_addr1;
u32 zeroes2;
u32 cub_addr2;
dma32_t cub_addr2;
u32 reserved[13];
struct chsc_header response;
u32 status : 8;
@ -881,8 +881,8 @@ int __chsc_do_secm(struct channel_subsystem *css, int enable)
secm_area->request.code = 0x0016;
secm_area->key = PAGE_DEFAULT_KEY >> 4;
secm_area->cub_addr1 = virt_to_phys(css->cub_addr1);
secm_area->cub_addr2 = virt_to_phys(css->cub_addr2);
secm_area->cub_addr1 = virt_to_dma32(css->cub_addr1);
secm_area->cub_addr2 = virt_to_dma32(css->cub_addr2);
secm_area->operation_code = enable ? 0 : 1;


@ -91,8 +91,8 @@ struct chsc_scssc_area {
u16:16;
u32:32;
u32:32;
u64 summary_indicator_addr;
u64 subchannel_indicator_addr;
dma64_t summary_indicator_addr;
dma64_t subchannel_indicator_addr;
u32 ks:4;
u32 kc:4;
u32:21;
@ -164,7 +164,7 @@ void chsc_chp_offline(struct chp_id chpid);
int chsc_get_channel_measurement_chars(struct channel_path *chp);
int chsc_ssqd(struct subchannel_id schid, struct chsc_ssqd_area *ssqd);
int chsc_sadc(struct subchannel_id schid, struct chsc_scssc_area *scssc,
u64 summary_indicator_addr, u64 subchannel_indicator_addr,
dma64_t summary_indicator_addr, dma64_t subchannel_indicator_addr,
u8 isc);
int chsc_sgib(u32 origin);
int chsc_error_from_response(int response);


@ -148,7 +148,7 @@ cio_start_key (struct subchannel *sch, /* subchannel structure */
orb->cmd.i2k = 0;
orb->cmd.key = key >> 4;
/* issue "Start Subchannel" */
orb->cmd.cpa = (u32)virt_to_phys(cpa);
orb->cmd.cpa = virt_to_dma32(cpa);
ccode = ssch(sch->schid, orb);
/* process condition code */
@ -717,7 +717,7 @@ int cio_tm_start_key(struct subchannel *sch, struct tcw *tcw, u8 lpm, u8 key)
orb->tm.key = key >> 4;
orb->tm.b = 1;
orb->tm.lpm = lpm ? lpm : sch->lpm;
orb->tm.tcw = (u32)virt_to_phys(tcw);
orb->tm.tcw = virt_to_dma32(tcw);
cc = ssch(sch->schid, orb);
switch (cc) {
case 0:


@ -1114,26 +1114,33 @@ static int cio_dma_pool_init(void)
return 0;
}
void *cio_gp_dma_zalloc(struct gen_pool *gp_dma, struct device *dma_dev,
size_t size)
void *__cio_gp_dma_zalloc(struct gen_pool *gp_dma, struct device *dma_dev,
size_t size, dma32_t *dma_handle)
{
dma_addr_t dma_addr;
unsigned long addr;
size_t chunk_size;
void *addr;
if (!gp_dma)
return NULL;
addr = gen_pool_alloc(gp_dma, size);
addr = gen_pool_dma_alloc(gp_dma, size, &dma_addr);
while (!addr) {
chunk_size = round_up(size, PAGE_SIZE);
addr = (unsigned long) dma_alloc_coherent(dma_dev,
chunk_size, &dma_addr, CIO_DMA_GFP);
addr = dma_alloc_coherent(dma_dev, chunk_size, &dma_addr, CIO_DMA_GFP);
if (!addr)
return NULL;
gen_pool_add_virt(gp_dma, addr, dma_addr, chunk_size, -1);
addr = gen_pool_alloc(gp_dma, size);
gen_pool_add_virt(gp_dma, (unsigned long)addr, dma_addr, chunk_size, -1);
addr = gen_pool_dma_alloc(gp_dma, size, dma_handle ? &dma_addr : NULL);
}
return (void *) addr;
if (dma_handle)
*dma_handle = (__force dma32_t)dma_addr;
return addr;
}
void *cio_gp_dma_zalloc(struct gen_pool *gp_dma, struct device *dma_dev,
size_t size)
{
return __cio_gp_dma_zalloc(gp_dma, dma_dev, size, NULL);
}
void cio_gp_dma_free(struct gen_pool *gp_dma, void *cpu_addr, size_t size)


@ -64,13 +64,13 @@ static void ccw_timeout_log(struct ccw_device *cdev)
printk(KERN_WARNING "cio: orb indicates transport mode\n");
printk(KERN_WARNING "cio: last tcw:\n");
print_hex_dump(KERN_WARNING, "cio: ", DUMP_PREFIX_NONE, 16, 1,
phys_to_virt(orb->tm.tcw),
dma32_to_virt(orb->tm.tcw),
sizeof(struct tcw), 0);
} else {
printk(KERN_WARNING "cio: orb indicates command mode\n");
if ((void *)(addr_t)orb->cmd.cpa ==
if (dma32_to_virt(orb->cmd.cpa) ==
&private->dma_area->sense_ccw ||
(void *)(addr_t)orb->cmd.cpa ==
dma32_to_virt(orb->cmd.cpa) ==
cdev->private->dma_area->iccws)
printk(KERN_WARNING "cio: last channel program "
"(intern):\n");
@ -78,7 +78,7 @@ static void ccw_timeout_log(struct ccw_device *cdev)
printk(KERN_WARNING "cio: last channel program:\n");
print_hex_dump(KERN_WARNING, "cio: ", DUMP_PREFIX_NONE, 16, 1,
phys_to_virt(orb->cmd.cpa),
dma32_to_virt(orb->cmd.cpa),
sizeof(struct ccw1), 0);
}
printk(KERN_WARNING "cio: ccw device state: %d\n",


@ -210,7 +210,7 @@ void ccw_device_sense_id_start(struct ccw_device *cdev)
snsid_init(cdev);
/* Channel program setup. */
cp->cmd_code = CCW_CMD_SENSE_ID;
cp->cda = (u32)virt_to_phys(&cdev->private->dma_area->senseid);
cp->cda = virt_to_dma32(&cdev->private->dma_area->senseid);
cp->count = sizeof(struct senseid);
cp->flags = CCW_FLAG_SLI;
/* Request setup. */


@ -823,13 +823,14 @@ EXPORT_SYMBOL_GPL(ccw_device_get_chid);
 * the subchannel's dma pool. Maximal size of allocation supported
* is PAGE_SIZE.
*/
void *ccw_device_dma_zalloc(struct ccw_device *cdev, size_t size)
void *ccw_device_dma_zalloc(struct ccw_device *cdev, size_t size,
dma32_t *dma_handle)
{
void *addr;
if (!get_device(&cdev->dev))
return NULL;
addr = cio_gp_dma_zalloc(cdev->private->dma_pool, &cdev->dev, size);
addr = __cio_gp_dma_zalloc(cdev->private->dma_pool, &cdev->dev, size, dma_handle);
if (IS_ERR_OR_NULL(addr))
put_device(&cdev->dev);
return addr;
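
gen_pool_dma_alloc() hands back the DMA address that the pool recorded
via gen_pool_add_virt(), so callers no longer need to derive it with
virt_to_phys() after the fact. A caller that needs the handle now gets
it in one step; a sketch ("block" is a placeholder for the caller's
DMA structure):

	dma32_t cda;
	void *block;

	block = ccw_device_dma_zalloc(cdev, sizeof(*block), &cda);
	if (!block)
		return -ENOMEM;
	ccw->cda = cda;		/* no address translation needed */

Passing NULL as dma_handle keeps the old behaviour for callers that
only want the CPU address.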


@ -141,7 +141,7 @@ static void spid_build_cp(struct ccw_device *cdev, u8 fn)
pgid->inf.fc = fn;
cp->cmd_code = CCW_CMD_SET_PGID;
cp->cda = (u32)virt_to_phys(pgid);
cp->cda = virt_to_dma32(pgid);
cp->count = sizeof(*pgid);
cp->flags = CCW_FLAG_SLI;
req->cp = cp;
@ -442,7 +442,7 @@ static void snid_build_cp(struct ccw_device *cdev)
/* Channel program setup. */
cp->cmd_code = CCW_CMD_SENSE_PGID;
cp->cda = (u32)virt_to_phys(&cdev->private->dma_area->pgid[i]);
cp->cda = virt_to_dma32(&cdev->private->dma_area->pgid[i]);
cp->count = sizeof(struct pgid);
cp->flags = CCW_FLAG_SLI;
req->cp = cp;
@ -632,11 +632,11 @@ static void stlck_build_cp(struct ccw_device *cdev, void *buf1, void *buf2)
struct ccw1 *cp = cdev->private->dma_area->iccws;
cp[0].cmd_code = CCW_CMD_STLCK;
cp[0].cda = (u32)virt_to_phys(buf1);
cp[0].cda = virt_to_dma32(buf1);
cp[0].count = 32;
cp[0].flags = CCW_FLAG_CC;
cp[1].cmd_code = CCW_CMD_RELEASE;
cp[1].cda = (u32)virt_to_phys(buf2);
cp[1].cda = virt_to_dma32(buf2);
cp[1].count = 32;
cp[1].flags = 0;
req->cp = cp;


@ -332,7 +332,7 @@ ccw_device_do_sense(struct ccw_device *cdev, struct irb *irb)
*/
sense_ccw = &to_io_private(sch)->dma_area->sense_ccw;
sense_ccw->cmd_code = CCW_CMD_BASIC_SENSE;
sense_ccw->cda = virt_to_phys(cdev->private->dma_area->irb.ecw);
sense_ccw->cda = virt_to_dma32(cdev->private->dma_area->irb.ecw);
sense_ccw->count = SENSE_MAX_COUNT;
sense_ccw->flags = CCW_FLAG_SLI;


@ -63,7 +63,7 @@ static int eadm_subchannel_start(struct subchannel *sch, struct aob *aob)
int cc;
orb_init(orb);
orb->eadm.aob = (u32)virt_to_phys(aob);
orb->eadm.aob = virt_to_dma32(aob);
orb->eadm.intparm = (u32)virt_to_phys(sch);
orb->eadm.key = PAGE_DEFAULT_KEY >> 4;
@ -147,7 +147,7 @@ static void eadm_subchannel_irq(struct subchannel *sch)
css_sched_sch_todo(sch, SCH_TODO_EVAL);
return;
}
scm_irq_handler(phys_to_virt(scsw->aob), error);
scm_irq_handler(dma32_to_virt(scsw->aob), error);
private->state = EADM_IDLE;
if (private->completion)


@ -25,7 +25,7 @@
*/
struct tcw *tcw_get_intrg(struct tcw *tcw)
{
return phys_to_virt(tcw->intrg);
return dma32_to_virt(tcw->intrg);
}
EXPORT_SYMBOL(tcw_get_intrg);
@ -40,9 +40,9 @@ EXPORT_SYMBOL(tcw_get_intrg);
void *tcw_get_data(struct tcw *tcw)
{
if (tcw->r)
return phys_to_virt(tcw->input);
return dma64_to_virt(tcw->input);
if (tcw->w)
return phys_to_virt(tcw->output);
return dma64_to_virt(tcw->output);
return NULL;
}
EXPORT_SYMBOL(tcw_get_data);
@ -55,7 +55,7 @@ EXPORT_SYMBOL(tcw_get_data);
*/
struct tccb *tcw_get_tccb(struct tcw *tcw)
{
return phys_to_virt(tcw->tccb);
return dma64_to_virt(tcw->tccb);
}
EXPORT_SYMBOL(tcw_get_tccb);
@ -67,7 +67,7 @@ EXPORT_SYMBOL(tcw_get_tccb);
*/
struct tsb *tcw_get_tsb(struct tcw *tcw)
{
return phys_to_virt(tcw->tsb);
return dma64_to_virt(tcw->tsb);
}
EXPORT_SYMBOL(tcw_get_tsb);
@ -190,7 +190,7 @@ EXPORT_SYMBOL(tcw_finalize);
*/
void tcw_set_intrg(struct tcw *tcw, struct tcw *intrg_tcw)
{
tcw->intrg = (u32)virt_to_phys(intrg_tcw);
tcw->intrg = virt_to_dma32(intrg_tcw);
}
EXPORT_SYMBOL(tcw_set_intrg);
@ -208,11 +208,11 @@ EXPORT_SYMBOL(tcw_set_intrg);
void tcw_set_data(struct tcw *tcw, void *data, int use_tidal)
{
if (tcw->r) {
tcw->input = virt_to_phys(data);
tcw->input = virt_to_dma64(data);
if (use_tidal)
tcw->flags |= TCW_FLAGS_INPUT_TIDA;
} else if (tcw->w) {
tcw->output = virt_to_phys(data);
tcw->output = virt_to_dma64(data);
if (use_tidal)
tcw->flags |= TCW_FLAGS_OUTPUT_TIDA;
}
@ -228,7 +228,7 @@ EXPORT_SYMBOL(tcw_set_data);
*/
void tcw_set_tccb(struct tcw *tcw, struct tccb *tccb)
{
tcw->tccb = virt_to_phys(tccb);
tcw->tccb = virt_to_dma64(tccb);
}
EXPORT_SYMBOL(tcw_set_tccb);
@ -241,7 +241,7 @@ EXPORT_SYMBOL(tcw_set_tccb);
*/
void tcw_set_tsb(struct tcw *tcw, struct tsb *tsb)
{
tcw->tsb = virt_to_phys(tsb);
tcw->tsb = virt_to_dma64(tsb);
}
EXPORT_SYMBOL(tcw_set_tsb);
@ -346,7 +346,7 @@ struct tidaw *tcw_add_tidaw(struct tcw *tcw, int num_tidaws, u8 flags,
memset(tidaw, 0, sizeof(struct tidaw));
tidaw->flags = flags;
tidaw->count = count;
tidaw->addr = virt_to_phys(addr);
tidaw->addr = virt_to_dma64(addr);
return tidaw;
}
EXPORT_SYMBOL(tcw_add_tidaw);


@ -12,6 +12,9 @@
#ifndef S390_ORB_H
#define S390_ORB_H
#include <linux/types.h>
#include <asm/dma-types.h>
/*
* Command-mode operation request block
*/
@ -34,7 +37,7 @@ struct cmd_orb {
u32 ils:1; /* incorrect length */
u32 zero:6; /* reserved zeros */
u32 orbx:1; /* ORB extension control */
u32 cpa; /* channel program address */
dma32_t cpa; /* channel program address */
} __packed __aligned(4);
/*
@ -49,7 +52,7 @@ struct tm_orb {
u32 lpm:8;
u32:7;
u32 x:1;
u32 tcw;
dma32_t tcw;
u32 prio:8;
u32:8;
u32 rsvpgm:8;
@ -71,7 +74,7 @@ struct eadm_orb {
u32 compat2:1;
u32:21;
u32 x:1;
u32 aob;
dma32_t aob;
u32 css_prio:8;
u32:8;
u32 scm_prio:8;


@ -82,7 +82,7 @@ static inline int do_siga_input(unsigned long schid, unsigned long mask,
*/
static inline int do_siga_output(unsigned long schid, unsigned long mask,
unsigned int *bb, unsigned long fc,
unsigned long aob)
dma64_t aob)
{
int cc;
@ -321,7 +321,7 @@ static inline int qdio_siga_sync_q(struct qdio_q *q)
}
static int qdio_siga_output(struct qdio_q *q, unsigned int count,
unsigned int *busy_bit, unsigned long aob)
unsigned int *busy_bit, dma64_t aob)
{
unsigned long schid = *((u32 *) &q->irq_ptr->schid);
unsigned int fc = QDIO_SIGA_WRITE;
@ -628,7 +628,7 @@ int qdio_inspect_output_queue(struct ccw_device *cdev, unsigned int nr,
EXPORT_SYMBOL_GPL(qdio_inspect_output_queue);
static int qdio_kick_outbound_q(struct qdio_q *q, unsigned int count,
unsigned long aob)
dma64_t aob)
{
int retries = 0, cc;
unsigned int busy_bit;
@ -1070,7 +1070,7 @@ int qdio_establish(struct ccw_device *cdev,
irq_ptr->ccw->cmd_code = ciw->cmd;
irq_ptr->ccw->flags = CCW_FLAG_SLI;
irq_ptr->ccw->count = ciw->count;
irq_ptr->ccw->cda = (u32) virt_to_phys(irq_ptr->qdr);
irq_ptr->ccw->cda = virt_to_dma32(irq_ptr->qdr);
spin_lock_irq(get_ccwdev_lock(cdev));
ccw_device_set_options_mask(cdev, 0);
@ -1263,9 +1263,9 @@ static int handle_outbound(struct qdio_q *q, unsigned int bufnr, unsigned int co
qperf_inc(q, outbound_queue_full);
if (queue_type(q) == QDIO_IQDIO_QFMT) {
unsigned long phys_aob = aob ? virt_to_phys(aob) : 0;
dma64_t phys_aob = aob ? virt_to_dma64(aob) : 0;
WARN_ON_ONCE(!IS_ALIGNED(phys_aob, 256));
WARN_ON_ONCE(!IS_ALIGNED(dma64_to_u64(phys_aob), 256));
rc = qdio_kick_outbound_q(q, count, phys_aob);
} else if (qdio_need_siga_sync(q->irq_ptr)) {
rc = qdio_sync_output_queue(q);
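
IS_ALIGNED() and other generic macros operate on plain integers, so
the bitwise value is explicitly unwrapped first. dma64_to_u64() and
its dma32_to_u32() counterpart are presumably plain __force casts
(sketch):

	static inline u64 dma64_to_u64(dma64_t addr)
	{
		return (__force u64)addr;
	}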


@ -179,7 +179,7 @@ static void setup_storage_lists(struct qdio_q *q, struct qdio_irq *irq_ptr,
/* fill in sl */
for (j = 0; j < QDIO_MAX_BUFFERS_PER_Q; j++)
q->sl->element[j].sbal = virt_to_phys(q->sbal[j]);
q->sl->element[j].sbal = virt_to_dma64(q->sbal[j]);
}
static void setup_queues(struct qdio_irq *irq_ptr,
@ -291,9 +291,9 @@ void qdio_setup_ssqd_info(struct qdio_irq *irq_ptr)
static void qdio_fill_qdr_desc(struct qdesfmt0 *desc, struct qdio_q *queue)
{
desc->sliba = virt_to_phys(queue->slib);
desc->sla = virt_to_phys(queue->sl);
desc->slsba = virt_to_phys(&queue->slsb);
desc->sliba = virt_to_dma64(queue->slib);
desc->sla = virt_to_dma64(queue->sl);
desc->slsba = virt_to_dma64(&queue->slsb);
desc->akey = PAGE_DEFAULT_KEY >> 4;
desc->bkey = PAGE_DEFAULT_KEY >> 4;
@ -315,7 +315,7 @@ static void setup_qdr(struct qdio_irq *irq_ptr,
irq_ptr->qdr->oqdcnt = qdio_init->no_output_qs;
irq_ptr->qdr->iqdsz = sizeof(struct qdesfmt0) / 4; /* size in words */
irq_ptr->qdr->oqdsz = sizeof(struct qdesfmt0) / 4;
irq_ptr->qdr->qiba = virt_to_phys(&irq_ptr->qib);
irq_ptr->qdr->qiba = virt_to_dma64(&irq_ptr->qib);
irq_ptr->qdr->qkey = PAGE_DEFAULT_KEY >> 4;
for (i = 0; i < qdio_init->no_input_qs; i++)


@ -137,15 +137,15 @@ static struct airq_struct tiqdio_airq = {
static int set_subchannel_ind(struct qdio_irq *irq_ptr, int reset)
{
struct chsc_scssc_area *scssc = (void *)irq_ptr->chsc_page;
u64 summary_indicator_addr, subchannel_indicator_addr;
dma64_t summary_indicator_addr, subchannel_indicator_addr;
int rc;
if (reset) {
summary_indicator_addr = 0;
subchannel_indicator_addr = 0;
} else {
summary_indicator_addr = virt_to_phys(tiqdio_airq.lsi_ptr);
subchannel_indicator_addr = virt_to_phys(irq_ptr->dsci);
summary_indicator_addr = virt_to_dma64(tiqdio_airq.lsi_ptr);
subchannel_indicator_addr = virt_to_dma64(irq_ptr->dsci);
}
rc = chsc_sadc(irq_ptr->schid, scssc, summary_indicator_addr,


@ -190,7 +190,7 @@ static bool page_array_iova_pinned(struct page_array *pa, u64 iova, u64 length)
}
/* Create the list of IDAL words for a page_array. */
static inline void page_array_idal_create_words(struct page_array *pa,
unsigned long *idaws)
dma64_t *idaws)
{
int i;
@ -203,10 +203,10 @@ static inline void page_array_idal_create_words(struct page_array *pa,
*/
for (i = 0; i < pa->pa_nr; i++) {
idaws[i] = page_to_phys(pa->pa_page[i]);
idaws[i] = virt_to_dma64(page_to_virt(pa->pa_page[i]));
/* Incorporate any offset from each starting address */
idaws[i] += pa->pa_iova[i] & (PAGE_SIZE - 1);
idaws[i] = dma64_add(idaws[i], pa->pa_iova[i] & ~PAGE_MASK);
}
}
@ -227,7 +227,7 @@ static void convert_ccw0_to_ccw1(struct ccw1 *source, unsigned long len)
pccw1->flags = ccw0.flags;
pccw1->count = ccw0.count;
}
pccw1->cda = ccw0.cda;
pccw1->cda = u32_to_dma32(ccw0.cda);
pccw1++;
}
}
@ -299,11 +299,12 @@ static inline int ccw_does_data_transfer(struct ccw1 *ccw)
*
* Returns 1 if yes, 0 if no.
*/
static inline int is_cpa_within_range(u32 cpa, u32 head, int len)
static inline int is_cpa_within_range(dma32_t cpa, u32 head, int len)
{
u32 tail = head + (len - 1) * sizeof(struct ccw1);
u32 gcpa = dma32_to_u32(cpa);
return (head <= cpa && cpa <= tail);
return head <= gcpa && gcpa <= tail;
}
static inline int is_tic_within_range(struct ccw1 *ccw, u32 head, int len)
@ -356,7 +357,7 @@ static void ccwchain_cda_free(struct ccwchain *chain, int idx)
if (ccw_is_tic(ccw))
return;
kfree(phys_to_virt(ccw->cda));
kfree(dma32_to_virt(ccw->cda));
}
/**
@ -417,15 +418,17 @@ static int tic_target_chain_exists(struct ccw1 *tic, struct channel_program *cp)
static int ccwchain_loop_tic(struct ccwchain *chain,
struct channel_program *cp);
static int ccwchain_handle_ccw(u32 cda, struct channel_program *cp)
static int ccwchain_handle_ccw(dma32_t cda, struct channel_program *cp)
{
struct vfio_device *vdev =
&container_of(cp, struct vfio_ccw_private, cp)->vdev;
struct ccwchain *chain;
int len, ret;
u32 gcda;
gcda = dma32_to_u32(cda);
/* Copy 2K (the most we support today) of possible CCWs */
ret = vfio_dma_rw(vdev, cda, cp->guest_cp, CCWCHAIN_LEN_MAX * sizeof(struct ccw1), false);
ret = vfio_dma_rw(vdev, gcda, cp->guest_cp, CCWCHAIN_LEN_MAX * sizeof(struct ccw1), false);
if (ret)
return ret;
@ -434,7 +437,7 @@ static int ccwchain_handle_ccw(u32 cda, struct channel_program *cp)
convert_ccw0_to_ccw1(cp->guest_cp, CCWCHAIN_LEN_MAX);
/* Count the CCWs in the current chain */
len = ccwchain_calc_length(cda, cp);
len = ccwchain_calc_length(gcda, cp);
if (len < 0)
return len;
@ -444,7 +447,7 @@ static int ccwchain_handle_ccw(u32 cda, struct channel_program *cp)
return -ENOMEM;
chain->ch_len = len;
chain->ch_iova = cda;
chain->ch_iova = gcda;
/* Copy the actual CCWs into the new chain */
memcpy(chain->ch_ccw, cp->guest_cp, len * sizeof(struct ccw1));
@ -487,13 +490,13 @@ static int ccwchain_fetch_tic(struct ccw1 *ccw,
struct channel_program *cp)
{
struct ccwchain *iter;
u32 ccw_head;
u32 cda, ccw_head;
list_for_each_entry(iter, &cp->ccwchain_list, next) {
ccw_head = iter->ch_iova;
if (is_cpa_within_range(ccw->cda, ccw_head, iter->ch_len)) {
ccw->cda = (__u32) (addr_t) (((char *)iter->ch_ccw) +
(ccw->cda - ccw_head));
cda = (u64)iter->ch_ccw + dma32_to_u32(ccw->cda) - ccw_head;
ccw->cda = u32_to_dma32(cda);
return 0;
}
}
@ -501,14 +504,12 @@ static int ccwchain_fetch_tic(struct ccw1 *ccw,
return -EFAULT;
}
static unsigned long *get_guest_idal(struct ccw1 *ccw,
struct channel_program *cp,
int idaw_nr)
static dma64_t *get_guest_idal(struct ccw1 *ccw, struct channel_program *cp, int idaw_nr)
{
struct vfio_device *vdev =
&container_of(cp, struct vfio_ccw_private, cp)->vdev;
unsigned long *idaws;
unsigned int *idaws_f1;
dma64_t *idaws;
dma32_t *idaws_f1;
int idal_len = idaw_nr * sizeof(*idaws);
int idaw_size = idal_is_2k(cp) ? PAGE_SIZE / 2 : PAGE_SIZE;
int idaw_mask = ~(idaw_size - 1);
@ -520,7 +521,7 @@ static unsigned long *get_guest_idal(struct ccw1 *ccw,
if (ccw_is_idal(ccw)) {
/* Copy IDAL from guest */
ret = vfio_dma_rw(vdev, ccw->cda, idaws, idal_len, false);
ret = vfio_dma_rw(vdev, dma32_to_u32(ccw->cda), idaws, idal_len, false);
if (ret) {
kfree(idaws);
return ERR_PTR(ret);
@ -528,14 +529,18 @@ static unsigned long *get_guest_idal(struct ccw1 *ccw,
} else {
/* Fabricate an IDAL based off CCW data address */
if (cp->orb.cmd.c64) {
idaws[0] = ccw->cda;
for (i = 1; i < idaw_nr; i++)
idaws[i] = (idaws[i - 1] + idaw_size) & idaw_mask;
idaws[0] = u64_to_dma64(dma32_to_u32(ccw->cda));
for (i = 1; i < idaw_nr; i++) {
idaws[i] = dma64_add(idaws[i - 1], idaw_size);
idaws[i] = dma64_and(idaws[i], idaw_mask);
}
} else {
idaws_f1 = (unsigned int *)idaws;
idaws_f1 = (dma32_t *)idaws;
idaws_f1[0] = ccw->cda;
for (i = 1; i < idaw_nr; i++)
idaws_f1[i] = (idaws_f1[i - 1] + idaw_size) & idaw_mask;
for (i = 1; i < idaw_nr; i++) {
idaws_f1[i] = dma32_add(idaws_f1[i - 1], idaw_size);
idaws_f1[i] = dma32_and(idaws_f1[i], idaw_mask);
}
}
}
@ -572,7 +577,7 @@ static int ccw_count_idaws(struct ccw1 *ccw,
if (ccw_is_idal(ccw)) {
/* Read first IDAW to check its starting address. */
/* All subsequent IDAWs will be 2K- or 4K-aligned. */
ret = vfio_dma_rw(vdev, ccw->cda, &iova, size, false);
ret = vfio_dma_rw(vdev, dma32_to_u32(ccw->cda), &iova, size, false);
if (ret)
return ret;
@ -583,7 +588,7 @@ static int ccw_count_idaws(struct ccw1 *ccw,
if (!cp->orb.cmd.c64)
iova = iova >> 32;
} else {
iova = ccw->cda;
iova = dma32_to_u32(ccw->cda);
}
/* Format-1 IDAWs operate on 2K each */
@ -604,8 +609,8 @@ static int ccwchain_fetch_ccw(struct ccw1 *ccw,
{
struct vfio_device *vdev =
&container_of(cp, struct vfio_ccw_private, cp)->vdev;
unsigned long *idaws;
unsigned int *idaws_f1;
dma64_t *idaws;
dma32_t *idaws_f1;
int ret;
int idaw_nr;
int i;
@ -636,12 +641,12 @@ static int ccwchain_fetch_ccw(struct ccw1 *ccw,
* Copy guest IDAWs into page_array, in case the memory they
* occupy is not contiguous.
*/
idaws_f1 = (unsigned int *)idaws;
idaws_f1 = (dma32_t *)idaws;
for (i = 0; i < idaw_nr; i++) {
if (cp->orb.cmd.c64)
pa->pa_iova[i] = idaws[i];
pa->pa_iova[i] = dma64_to_u64(idaws[i]);
else
pa->pa_iova[i] = idaws_f1[i];
pa->pa_iova[i] = dma32_to_u32(idaws_f1[i]);
}
if (ccw_does_data_transfer(ccw)) {
@ -652,7 +657,7 @@ static int ccwchain_fetch_ccw(struct ccw1 *ccw,
pa->pa_nr = 0;
}
ccw->cda = (__u32) virt_to_phys(idaws);
ccw->cda = virt_to_dma32(idaws);
ccw->flags |= CCW_FLAG_IDA;
/* Populate the IDAL with pinned/translated addresses from page */
@ -874,7 +879,7 @@ union orb *cp_get_orb(struct channel_program *cp, struct subchannel *sch)
chain = list_first_entry(&cp->ccwchain_list, struct ccwchain, next);
cpa = chain->ch_ccw;
orb->cmd.cpa = (__u32)virt_to_phys(cpa);
orb->cmd.cpa = virt_to_dma32(cpa);
return orb;
}
@ -896,7 +901,7 @@ union orb *cp_get_orb(struct channel_program *cp, struct subchannel *sch)
void cp_update_scsw(struct channel_program *cp, union scsw *scsw)
{
struct ccwchain *chain;
u32 cpa = scsw->cmd.cpa;
dma32_t cpa = scsw->cmd.cpa;
u32 ccw_head;
if (!cp->initialized)
@ -919,9 +924,10 @@ void cp_update_scsw(struct channel_program *cp, union scsw *scsw)
* (cpa - ccw_head) is the offset value of the host
* physical ccw to its chain head.
* Adding this value to the guest physical ccw chain
* head gets us the guest cpa.
* head gets us the guest cpa:
* cpa = chain->ch_iova + (cpa - ccw_head)
*/
cpa = chain->ch_iova + (cpa - ccw_head);
cpa = dma32_add(cpa, chain->ch_iova - ccw_head);
break;
}
}


@ -378,7 +378,7 @@ static void fsm_open(struct vfio_ccw_private *private,
spin_lock_irq(&sch->lock);
sch->isc = VFIO_CCW_ISC;
ret = cio_enable_subchannel(sch, (u32)(unsigned long)sch);
ret = cio_enable_subchannel(sch, (u32)virt_to_phys(sch));
if (ret)
goto err_unlock;


@ -107,7 +107,11 @@ EXPORT_SYMBOL(zcrypt_msgtype);
struct zcdn_device;
static struct class *zcrypt_class;
static void zcdn_device_release(struct device *dev);
static const struct class zcrypt_class = {
.name = ZCRYPT_NAME,
.dev_release = zcdn_device_release,
};
static dev_t zcrypt_devt;
static struct cdev zcrypt_cdev;
@ -130,7 +134,7 @@ static int zcdn_destroy(const char *name);
*/
static inline struct zcdn_device *find_zcdndev_by_name(const char *name)
{
struct device *dev = class_find_device_by_name(zcrypt_class, name);
struct device *dev = class_find_device_by_name(&zcrypt_class, name);
return dev ? to_zcdn_dev(dev) : NULL;
}
@ -142,7 +146,7 @@ static inline struct zcdn_device *find_zcdndev_by_name(const char *name)
*/
static inline struct zcdn_device *find_zcdndev_by_devt(dev_t devt)
{
struct device *dev = class_find_device_by_devt(zcrypt_class, devt);
struct device *dev = class_find_device_by_devt(&zcrypt_class, devt);
return dev ? to_zcdn_dev(dev) : NULL;
}
@ -396,7 +400,7 @@ static int zcdn_create(const char *name)
goto unlockout;
}
zcdndev->device.release = zcdn_device_release;
zcdndev->device.class = zcrypt_class;
zcdndev->device.class = &zcrypt_class;
zcdndev->device.devt = devt;
zcdndev->device.groups = zcdn_dev_attr_groups;
if (name[0])
@ -573,6 +577,7 @@ static inline struct zcrypt_queue *zcrypt_pick_queue(struct zcrypt_card *zc,
{
if (!zq || !try_module_get(zq->queue->ap_dev.device.driver->owner))
return NULL;
zcrypt_card_get(zc);
zcrypt_queue_get(zq);
get_device(&zq->queue->ap_dev.device);
atomic_add(weight, &zc->load);
@ -592,6 +597,7 @@ static inline void zcrypt_drop_queue(struct zcrypt_card *zc,
atomic_sub(weight, &zq->load);
put_device(&zq->queue->ap_dev.device);
zcrypt_queue_put(zq);
zcrypt_card_put(zc);
module_put(mod);
}
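
The new zcrypt_card_get()/zcrypt_card_put() pair makes
zcrypt_pick_queue() hold a reference on the card for as long as it
holds one on the queue, taken and released in mirrored order:

	/* pick: card first, then queue */
	zcrypt_card_get(zc);
	zcrypt_queue_get(zq);
	...
	/* drop: queue first, then card */
	zcrypt_queue_put(zq);
	zcrypt_card_put(zc);

Presumably, without the card reference the card object could go away
while a request picked from one of its queues was still in flight.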
@ -2075,12 +2081,9 @@ static int __init zcdn_init(void)
int rc;
/* create a new class 'zcrypt' */
zcrypt_class = class_create(ZCRYPT_NAME);
if (IS_ERR(zcrypt_class)) {
rc = PTR_ERR(zcrypt_class);
goto out_class_create_failed;
}
zcrypt_class->dev_release = zcdn_device_release;
rc = class_register(&zcrypt_class);
if (rc)
goto out_class_register_failed;
/* alloc device minor range */
rc = alloc_chrdev_region(&zcrypt_devt,
@ -2096,35 +2099,35 @@ static int __init zcdn_init(void)
goto out_cdev_add_failed;
/* need some class specific sysfs attributes */
rc = class_create_file(zcrypt_class, &class_attr_zcdn_create);
rc = class_create_file(&zcrypt_class, &class_attr_zcdn_create);
if (rc)
goto out_class_create_file_1_failed;
rc = class_create_file(zcrypt_class, &class_attr_zcdn_destroy);
rc = class_create_file(&zcrypt_class, &class_attr_zcdn_destroy);
if (rc)
goto out_class_create_file_2_failed;
return 0;
out_class_create_file_2_failed:
class_remove_file(zcrypt_class, &class_attr_zcdn_create);
class_remove_file(&zcrypt_class, &class_attr_zcdn_create);
out_class_create_file_1_failed:
cdev_del(&zcrypt_cdev);
out_cdev_add_failed:
unregister_chrdev_region(zcrypt_devt, ZCRYPT_MAX_MINOR_NODES);
out_alloc_chrdev_failed:
class_destroy(zcrypt_class);
out_class_create_failed:
class_unregister(&zcrypt_class);
out_class_register_failed:
return rc;
}
static void zcdn_exit(void)
{
class_remove_file(zcrypt_class, &class_attr_zcdn_create);
class_remove_file(zcrypt_class, &class_attr_zcdn_destroy);
class_remove_file(&zcrypt_class, &class_attr_zcdn_create);
class_remove_file(&zcrypt_class, &class_attr_zcdn_destroy);
zcdn_destroy_all();
cdev_del(&zcrypt_cdev);
unregister_chrdev_region(zcrypt_devt, ZCRYPT_MAX_MINOR_NODES);
class_destroy(zcrypt_class);
class_unregister(&zcrypt_class);
}
/*


@ -1325,7 +1325,7 @@ static void ctcmpc_chx_txdone(fsm_instance *fi, int event, void *arg)
clear_normalized_cda(&ch->ccw[1]);
CTCM_PR_DBGDATA("ccwcda=0x%p data=0x%p\n",
(void *)(unsigned long)ch->ccw[1].cda,
(void *)(u64)dma32_to_u32(ch->ccw[1].cda),
ch->trans_skb->data);
ch->ccw[1].count = ch->max_bufsize;
@ -1340,7 +1340,7 @@ static void ctcmpc_chx_txdone(fsm_instance *fi, int event, void *arg)
}
CTCM_PR_DBGDATA("ccwcda=0x%p data=0x%p\n",
(void *)(unsigned long)ch->ccw[1].cda,
(void *)(u64)dma32_to_u32(ch->ccw[1].cda),
ch->trans_skb->data);
ch->ccw[1].count = ch->trans_skb->len;


@ -1389,7 +1389,7 @@ static int add_channel(struct ccw_device *cdev, enum ctcm_channel_types type,
ch->ccw[15].cmd_code = CCW_CMD_WRITE;
ch->ccw[15].flags = CCW_FLAG_SLI | CCW_FLAG_CC;
ch->ccw[15].count = TH_HEADER_LENGTH;
ch->ccw[15].cda = virt_to_phys(ch->discontact_th);
ch->ccw[15].cda = virt_to_dma32(ch->discontact_th);
ch->ccw[16].cmd_code = CCW_CMD_NOOP;
ch->ccw[16].flags = CCW_FLAG_SLI;


@ -1708,57 +1708,57 @@ static void mpc_action_side_xid(fsm_instance *fsm, void *arg, int side)
ch->ccw[9].cmd_code = CCW_CMD_WRITE;
ch->ccw[9].flags = CCW_FLAG_SLI | CCW_FLAG_CC;
ch->ccw[9].count = TH_HEADER_LENGTH;
ch->ccw[9].cda = virt_to_phys(ch->xid_th);
ch->ccw[9].cda = virt_to_dma32(ch->xid_th);
if (ch->xid == NULL)
goto done;
ch->ccw[10].cmd_code = CCW_CMD_WRITE;
ch->ccw[10].flags = CCW_FLAG_SLI | CCW_FLAG_CC;
ch->ccw[10].count = XID2_LENGTH;
ch->ccw[10].cda = virt_to_phys(ch->xid);
ch->ccw[10].cda = virt_to_dma32(ch->xid);
ch->ccw[11].cmd_code = CCW_CMD_READ;
ch->ccw[11].flags = CCW_FLAG_SLI | CCW_FLAG_CC;
ch->ccw[11].count = TH_HEADER_LENGTH;
ch->ccw[11].cda = virt_to_phys(ch->rcvd_xid_th);
ch->ccw[11].cda = virt_to_dma32(ch->rcvd_xid_th);
ch->ccw[12].cmd_code = CCW_CMD_READ;
ch->ccw[12].flags = CCW_FLAG_SLI | CCW_FLAG_CC;
ch->ccw[12].count = XID2_LENGTH;
ch->ccw[12].cda = virt_to_phys(ch->rcvd_xid);
ch->ccw[12].cda = virt_to_dma32(ch->rcvd_xid);
ch->ccw[13].cmd_code = CCW_CMD_READ;
ch->ccw[13].cda = virt_to_phys(ch->rcvd_xid_id);
ch->ccw[13].cda = virt_to_dma32(ch->rcvd_xid_id);
} else { /* side == YSIDE : mpc_action_yside_xid */
ch->ccw[9].cmd_code = CCW_CMD_READ;
ch->ccw[9].flags = CCW_FLAG_SLI | CCW_FLAG_CC;
ch->ccw[9].count = TH_HEADER_LENGTH;
ch->ccw[9].cda = virt_to_phys(ch->rcvd_xid_th);
ch->ccw[9].cda = virt_to_dma32(ch->rcvd_xid_th);
ch->ccw[10].cmd_code = CCW_CMD_READ;
ch->ccw[10].flags = CCW_FLAG_SLI | CCW_FLAG_CC;
ch->ccw[10].count = XID2_LENGTH;
ch->ccw[10].cda = virt_to_phys(ch->rcvd_xid);
ch->ccw[10].cda = virt_to_dma32(ch->rcvd_xid);
if (ch->xid_th == NULL)
goto done;
ch->ccw[11].cmd_code = CCW_CMD_WRITE;
ch->ccw[11].flags = CCW_FLAG_SLI | CCW_FLAG_CC;
ch->ccw[11].count = TH_HEADER_LENGTH;
ch->ccw[11].cda = virt_to_phys(ch->xid_th);
ch->ccw[11].cda = virt_to_dma32(ch->xid_th);
if (ch->xid == NULL)
goto done;
ch->ccw[12].cmd_code = CCW_CMD_WRITE;
ch->ccw[12].flags = CCW_FLAG_SLI | CCW_FLAG_CC;
ch->ccw[12].count = XID2_LENGTH;
ch->ccw[12].cda = virt_to_phys(ch->xid);
ch->ccw[12].cda = virt_to_dma32(ch->xid);
if (ch->xid_id == NULL)
goto done;
ch->ccw[13].cmd_code = CCW_CMD_WRITE;
ch->ccw[13].cda = virt_to_phys(ch->xid_id);
ch->ccw[13].cda = virt_to_dma32(ch->xid_id);
}
ch->ccw[13].flags = CCW_FLAG_SLI | CCW_FLAG_CC;


@ -218,7 +218,7 @@ lcs_setup_read_ccws(struct lcs_card *card)
* we do not need to do set_normalized_cda.
*/
card->read.ccws[cnt].cda =
(__u32)virt_to_phys(card->read.iob[cnt].data);
virt_to_dma32(card->read.iob[cnt].data);
((struct lcs_header *)
card->read.iob[cnt].data)->offset = LCS_ILLEGAL_OFFSET;
card->read.iob[cnt].callback = lcs_get_frames_cb;
@ -230,8 +230,7 @@ lcs_setup_read_ccws(struct lcs_card *card)
card->read.ccws[LCS_NUM_BUFFS - 1].flags |= CCW_FLAG_SUSPEND;
/* Last ccw is a tic (transfer in channel). */
card->read.ccws[LCS_NUM_BUFFS].cmd_code = LCS_CCW_TRANSFER;
card->read.ccws[LCS_NUM_BUFFS].cda =
(__u32)virt_to_phys(card->read.ccws);
card->read.ccws[LCS_NUM_BUFFS].cda = virt_to_dma32(card->read.ccws);
/* Set initial state of the read channel. */
card->read.state = LCS_CH_STATE_INIT;
@ -273,12 +272,11 @@ lcs_setup_write_ccws(struct lcs_card *card)
* we do not need to do set_normalized_cda.
*/
card->write.ccws[cnt].cda =
(__u32)virt_to_phys(card->write.iob[cnt].data);
virt_to_dma32(card->write.iob[cnt].data);
}
/* Last ccw is a tic (transfer in channel). */
card->write.ccws[LCS_NUM_BUFFS].cmd_code = LCS_CCW_TRANSFER;
card->write.ccws[LCS_NUM_BUFFS].cda =
(__u32)virt_to_phys(card->write.ccws);
card->write.ccws[LCS_NUM_BUFFS].cda = virt_to_dma32(card->write.ccws);
/* Set initial state of the write channel. */
card->read.state = LCS_CH_STATE_INIT;
@ -1399,7 +1397,7 @@ lcs_irq(struct ccw_device *cdev, unsigned long intparm, struct irb *irb)
if ((channel->state != LCS_CH_STATE_INIT) &&
(irb->scsw.cmd.fctl & SCSW_FCTL_START_FUNC) &&
(irb->scsw.cmd.cpa != 0)) {
index = (struct ccw1 *) __va((addr_t) irb->scsw.cmd.cpa)
index = (struct ccw1 *)dma32_to_virt(irb->scsw.cmd.cpa)
- channel->ccws;
if ((irb->scsw.cmd.actl & SCSW_ACTL_SUSPENDED) ||
(irb->scsw.cmd.cstat & SCHN_STAT_PCI))


@ -426,7 +426,7 @@ static void qeth_setup_ccw(struct ccw1 *ccw, u8 cmd_code, u8 flags, u32 len,
ccw->cmd_code = cmd_code;
ccw->flags = flags | CCW_FLAG_SLI;
ccw->count = len;
ccw->cda = (__u32)virt_to_phys(data);
ccw->cda = virt_to_dma32(data);
}
static int __qeth_issue_next_read(struct qeth_card *card)
@ -1359,7 +1359,7 @@ static void qeth_clear_output_buffer(struct qeth_qdio_out_q *queue,
qeth_tx_complete_buf(queue, buf, error, budget);
for (i = 0; i < queue->max_elements; ++i) {
void *data = phys_to_virt(buf->buffer->element[i].addr);
void *data = dma64_to_virt(buf->buffer->element[i].addr);
if (__test_and_clear_bit(i, buf->from_kmem_cache) && data)
kmem_cache_free(qeth_core_header_cache, data);
@ -1404,7 +1404,7 @@ static void qeth_tx_complete_pending_bufs(struct qeth_card *card,
for (i = 0;
i < aob->sb_count && i < queue->max_elements;
i++) {
void *data = phys_to_virt(aob->sba[i]);
void *data = dma64_to_virt(aob->sba[i]);
if (test_bit(i, buf->from_kmem_cache) && data)
kmem_cache_free(qeth_core_header_cache,
@ -2918,8 +2918,8 @@ static int qeth_init_input_buffer(struct qeth_card *card,
*/
for (i = 0; i < QETH_MAX_BUFFER_ELEMENTS(card); ++i) {
buf->buffer->element[i].length = PAGE_SIZE;
buf->buffer->element[i].addr =
page_to_phys(pool_entry->elements[i]);
buf->buffer->element[i].addr = u64_to_dma64(
page_to_phys(pool_entry->elements[i]));
if (i == QETH_MAX_BUFFER_ELEMENTS(card) - 1)
buf->buffer->element[i].eflags = SBAL_EFLAGS_LAST_ENTRY;
else
@ -3765,9 +3765,9 @@ static void qeth_qdio_cq_handler(struct qeth_card *card, unsigned int qdio_err,
while ((e < QDIO_MAX_ELEMENTS_PER_BUFFER) &&
buffer->element[e].addr) {
unsigned long phys_aob_addr = buffer->element[e].addr;
dma64_t phys_aob_addr = buffer->element[e].addr;
qeth_qdio_handle_aob(card, phys_to_virt(phys_aob_addr));
qeth_qdio_handle_aob(card, dma64_to_virt(phys_aob_addr));
++e;
}
qeth_scrub_qdio_buffer(buffer, QDIO_MAX_ELEMENTS_PER_BUFFER);
@ -4042,7 +4042,7 @@ static unsigned int qeth_fill_buffer(struct qeth_qdio_out_buffer *buf,
if (hd_len) {
is_first_elem = false;
buffer->element[element].addr = virt_to_phys(hdr);
buffer->element[element].addr = virt_to_dma64(hdr);
buffer->element[element].length = hd_len;
buffer->element[element].eflags = SBAL_EFLAGS_FIRST_FRAG;
@ -4063,7 +4063,7 @@ static unsigned int qeth_fill_buffer(struct qeth_qdio_out_buffer *buf,
elem_length = min_t(unsigned int, length,
PAGE_SIZE - offset_in_page(data));
buffer->element[element].addr = virt_to_phys(data);
buffer->element[element].addr = virt_to_dma64(data);
buffer->element[element].length = elem_length;
length -= elem_length;
if (is_first_elem) {
@ -4093,7 +4093,7 @@ static unsigned int qeth_fill_buffer(struct qeth_qdio_out_buffer *buf,
elem_length = min_t(unsigned int, length,
PAGE_SIZE - offset_in_page(data));
buffer->element[element].addr = virt_to_phys(data);
buffer->element[element].addr = virt_to_dma64(data);
buffer->element[element].length = elem_length;
buffer->element[element].eflags =
SBAL_EFLAGS_MIDDLE_FRAG;
@ -5569,7 +5569,7 @@ next_packet:
offset = 0;
}
hdr = phys_to_virt(element->addr) + offset;
hdr = dma64_to_virt(element->addr) + offset;
offset += sizeof(*hdr);
skb = NULL;
@ -5661,7 +5661,7 @@ use_skb:
walk_packet:
while (skb_len) {
int data_len = min(skb_len, (int)(element->length - offset));
char *data = phys_to_virt(element->addr) + offset;
char *data = dma64_to_virt(element->addr) + offset;
skb_len -= data_len;
offset += data_len;


@ -2742,7 +2742,7 @@ void zfcp_fsf_reqid_check(struct zfcp_qdio *qdio, int sbal_idx)
for (idx = 0; idx < QDIO_MAX_ELEMENTS_PER_BUFFER; idx++) {
sbale = &sbal->element[idx];
req_id = sbale->addr;
req_id = dma64_to_u64(sbale->addr);
fsf_req = zfcp_reqlist_find_rm(adapter->req_list, req_id);
if (!fsf_req) {


@ -125,7 +125,7 @@ static void zfcp_qdio_int_resp(struct ccw_device *cdev, unsigned int qdio_err,
memset(pl, 0,
ZFCP_QDIO_MAX_SBALS_PER_REQ * sizeof(void *));
sbale = qdio->res_q[idx]->element;
req_id = sbale->addr;
req_id = dma64_to_u64(sbale->addr);
scount = min(sbale->scount + 1,
ZFCP_QDIO_MAX_SBALS_PER_REQ + 1);
/* incl. signaling SBAL */
@ -256,7 +256,7 @@ int zfcp_qdio_sbals_from_sg(struct zfcp_qdio *qdio, struct zfcp_qdio_req *q_req,
q_req->sbal_number);
return -EINVAL;
}
sbale->addr = sg_phys(sg);
sbale->addr = u64_to_dma64(sg_phys(sg));
sbale->length = sg->length;
}
return 0;


@ -129,14 +129,14 @@ void zfcp_qdio_req_init(struct zfcp_qdio *qdio, struct zfcp_qdio_req *q_req,
% QDIO_MAX_BUFFERS_PER_Q;
sbale = zfcp_qdio_sbale_req(qdio, q_req);
sbale->addr = req_id;
sbale->addr = u64_to_dma64(req_id);
sbale->eflags = 0;
sbale->sflags = SBAL_SFLAGS0_COMMAND | sbtype;
if (unlikely(!data))
return;
sbale++;
sbale->addr = virt_to_phys(data);
sbale->addr = virt_to_dma64(data);
sbale->length = len;
}
@ -159,7 +159,7 @@ void zfcp_qdio_fill_next(struct zfcp_qdio *qdio, struct zfcp_qdio_req *q_req,
BUG_ON(q_req->sbale_curr == qdio->max_sbale_per_sbal - 1);
q_req->sbale_curr++;
sbale = zfcp_qdio_sbale_curr(qdio, q_req);
sbale->addr = virt_to_phys(data);
sbale->addr = virt_to_dma64(data);
sbale->length = len;
}


@ -72,6 +72,7 @@ struct virtio_ccw_device {
unsigned int config_ready;
void *airq_info;
struct vcdev_dma_area *dma_area;
dma32_t dma_area_addr;
};
static inline unsigned long *indicators(struct virtio_ccw_device *vcdev)
@ -84,20 +85,50 @@ static inline unsigned long *indicators2(struct virtio_ccw_device *vcdev)
return &vcdev->dma_area->indicators2;
}
/* Spec stipulates a 64 bit address */
static inline dma64_t indicators_dma(struct virtio_ccw_device *vcdev)
{
u64 dma_area_addr = dma32_to_u32(vcdev->dma_area_addr);
return dma64_add(u64_to_dma64(dma_area_addr),
offsetof(struct vcdev_dma_area, indicators));
}
/* Spec stipulates a 64 bit address */
static inline dma64_t indicators2_dma(struct virtio_ccw_device *vcdev)
{
u64 dma_area_addr = dma32_to_u32(vcdev->dma_area_addr);
return dma64_add(u64_to_dma64(dma_area_addr),
offsetof(struct vcdev_dma_area, indicators2));
}
static inline dma32_t config_block_dma(struct virtio_ccw_device *vcdev)
{
return dma32_add(vcdev->dma_area_addr,
offsetof(struct vcdev_dma_area, config_block));
}
static inline dma32_t status_dma(struct virtio_ccw_device *vcdev)
{
return dma32_add(vcdev->dma_area_addr,
offsetof(struct vcdev_dma_area, status));
}
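
Because the whole vcdev_dma_area is allocated once and its 31-bit
handle kept in vcdev->dma_area_addr, the DMA address of each member
can be derived arithmetically instead of translating a kernel pointer;
the general shape ("member" is a placeholder):

	dma32_t member_dma = dma32_add(vcdev->dma_area_addr,
				       offsetof(struct vcdev_dma_area, member));

For the indicators, which the spec wants as 64-bit addresses, the
32-bit handle is first widened via dma32_to_u32()/u64_to_dma64() and
then offset with dma64_add().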
struct vq_info_block_legacy {
__u64 queue;
dma64_t queue;
__u32 align;
__u16 index;
__u16 num;
} __packed;
struct vq_info_block {
__u64 desc;
dma64_t desc;
__u32 res0;
__u16 index;
__u16 num;
__u64 avail;
__u64 used;
dma64_t avail;
dma64_t used;
} __packed;
struct virtio_feature_desc {
@ -106,8 +137,8 @@ struct virtio_feature_desc {
} __packed;
struct virtio_thinint_area {
unsigned long summary_indicator;
unsigned long indicator;
dma64_t summary_indicator;
dma64_t indicator;
u64 bit_nr;
u8 isc;
} __packed;
@ -123,6 +154,7 @@ struct virtio_rev_info {
struct virtio_ccw_vq_info {
struct virtqueue *vq;
dma32_t info_block_addr;
int num;
union {
struct vq_info_block s;
@ -156,6 +188,11 @@ static inline u8 *get_summary_indicator(struct airq_info *info)
return summary_indicators + info->summary_indicator_idx;
}
static inline dma64_t get_summary_indicator_dma(struct airq_info *info)
{
return virt_to_dma64(get_summary_indicator(info));
}
#define CCW_CMD_SET_VQ 0x13
#define CCW_CMD_VDEV_RESET 0x33
#define CCW_CMD_SET_IND 0x43
@ -260,12 +297,12 @@ static struct airq_info *new_airq_info(int index)
return info;
}
static unsigned long get_airq_indicator(struct virtqueue *vqs[], int nvqs,
u64 *first, void **airq_info)
static unsigned long *get_airq_indicator(struct virtqueue *vqs[], int nvqs,
u64 *first, void **airq_info)
{
int i, j;
struct airq_info *info;
unsigned long indicator_addr = 0;
unsigned long *indicator_addr = NULL;
unsigned long bit, flags;
for (i = 0; i < MAX_AIRQ_AREAS && !indicator_addr; i++) {
@ -275,7 +312,7 @@ static unsigned long get_airq_indicator(struct virtqueue *vqs[], int nvqs,
info = airq_areas[i];
mutex_unlock(&airq_areas_lock);
if (!info)
return 0;
return NULL;
write_lock_irqsave(&info->lock, flags);
bit = airq_iv_alloc(info->aiv, nvqs);
if (bit == -1UL) {
@ -285,7 +322,7 @@ static unsigned long get_airq_indicator(struct virtqueue *vqs[], int nvqs,
}
*first = bit;
*airq_info = info;
indicator_addr = (unsigned long)info->aiv->vector;
indicator_addr = info->aiv->vector;
for (j = 0; j < nvqs; j++) {
airq_iv_set_ptr(info->aiv, bit + j,
(unsigned long)vqs[j]);
@ -348,31 +385,31 @@ static void virtio_ccw_drop_indicator(struct virtio_ccw_device *vcdev,
struct ccw1 *ccw)
{
int ret;
unsigned long *indicatorp = NULL;
struct virtio_thinint_area *thinint_area = NULL;
struct airq_info *airq_info = vcdev->airq_info;
dma64_t *indicatorp = NULL;
if (vcdev->is_thinint) {
thinint_area = ccw_device_dma_zalloc(vcdev->cdev,
sizeof(*thinint_area));
sizeof(*thinint_area),
&ccw->cda);
if (!thinint_area)
return;
thinint_area->summary_indicator =
(unsigned long) get_summary_indicator(airq_info);
get_summary_indicator_dma(airq_info);
thinint_area->isc = VIRTIO_AIRQ_ISC;
ccw->cmd_code = CCW_CMD_SET_IND_ADAPTER;
ccw->count = sizeof(*thinint_area);
ccw->cda = (__u32)virt_to_phys(thinint_area);
} else {
/* payload is the address of the indicators */
indicatorp = ccw_device_dma_zalloc(vcdev->cdev,
sizeof(indicators(vcdev)));
sizeof(*indicatorp),
&ccw->cda);
if (!indicatorp)
return;
*indicatorp = 0;
ccw->cmd_code = CCW_CMD_SET_IND;
ccw->count = sizeof(indicators(vcdev));
ccw->cda = (__u32)virt_to_phys(indicatorp);
ccw->count = sizeof(*indicatorp);
}
/* Deregister indicators from host. */
*indicators(vcdev) = 0;
@ -386,7 +423,7 @@ static void virtio_ccw_drop_indicator(struct virtio_ccw_device *vcdev,
"Failed to deregister indicators (%d)\n", ret);
else if (vcdev->is_thinint)
virtio_ccw_drop_indicators(vcdev);
ccw_device_dma_free(vcdev->cdev, indicatorp, sizeof(indicators(vcdev)));
ccw_device_dma_free(vcdev->cdev, indicatorp, sizeof(*indicatorp));
ccw_device_dma_free(vcdev->cdev, thinint_area, sizeof(*thinint_area));
}
@ -426,7 +463,7 @@ static int virtio_ccw_read_vq_conf(struct virtio_ccw_device *vcdev,
ccw->cmd_code = CCW_CMD_READ_VQ_CONF;
ccw->flags = 0;
ccw->count = sizeof(struct vq_config_block);
ccw->cda = (__u32)virt_to_phys(&vcdev->dma_area->config_block);
ccw->cda = config_block_dma(vcdev);
ret = ccw_io_helper(vcdev, ccw, VIRTIO_CCW_DOING_READ_VQ_CONF);
if (ret)
return ret;
@ -463,7 +500,7 @@ static void virtio_ccw_del_vq(struct virtqueue *vq, struct ccw1 *ccw)
}
ccw->cmd_code = CCW_CMD_SET_VQ;
ccw->flags = 0;
ccw->cda = (__u32)virt_to_phys(info->info_block);
ccw->cda = info->info_block_addr;
ret = ccw_io_helper(vcdev, ccw,
VIRTIO_CCW_DOING_SET_VQ | index);
/*
@ -486,7 +523,7 @@ static void virtio_ccw_del_vqs(struct virtio_device *vdev)
struct ccw1 *ccw;
struct virtio_ccw_device *vcdev = to_vc_device(vdev);
ccw = ccw_device_dma_zalloc(vcdev->cdev, sizeof(*ccw));
ccw = ccw_device_dma_zalloc(vcdev->cdev, sizeof(*ccw), NULL);
if (!ccw)
return;
@ -525,7 +562,8 @@ static struct virtqueue *virtio_ccw_setup_vq(struct virtio_device *vdev,
goto out_err;
}
info->info_block = ccw_device_dma_zalloc(vcdev->cdev,
sizeof(*info->info_block));
sizeof(*info->info_block),
&info->info_block_addr);
if (!info->info_block) {
dev_warn(&vcdev->cdev->dev, "no info block\n");
err = -ENOMEM;
@ -556,22 +594,22 @@ static struct virtqueue *virtio_ccw_setup_vq(struct virtio_device *vdev,
/* Register it with the host. */
queue = virtqueue_get_desc_addr(vq);
if (vcdev->revision == 0) {
info->info_block->l.queue = queue;
info->info_block->l.queue = u64_to_dma64(queue);
info->info_block->l.align = KVM_VIRTIO_CCW_RING_ALIGN;
info->info_block->l.index = i;
info->info_block->l.num = info->num;
ccw->count = sizeof(info->info_block->l);
} else {
info->info_block->s.desc = queue;
info->info_block->s.desc = u64_to_dma64(queue);
info->info_block->s.index = i;
info->info_block->s.num = info->num;
info->info_block->s.avail = (__u64)virtqueue_get_avail_addr(vq);
info->info_block->s.used = (__u64)virtqueue_get_used_addr(vq);
info->info_block->s.avail = u64_to_dma64(virtqueue_get_avail_addr(vq));
info->info_block->s.used = u64_to_dma64(virtqueue_get_used_addr(vq));
ccw->count = sizeof(info->info_block->s);
}
ccw->cmd_code = CCW_CMD_SET_VQ;
ccw->flags = 0;
ccw->cda = (__u32)virt_to_phys(info->info_block);
ccw->cda = info->info_block_addr;
err = ccw_io_helper(vcdev, ccw, VIRTIO_CCW_DOING_SET_VQ | i);
if (err) {
dev_warn(&vcdev->cdev->dev, "SET_VQ failed\n");
@@ -605,11 +643,12 @@ static int virtio_ccw_register_adapter_ind(struct virtio_ccw_device *vcdev,
 {
 	int ret;
 	struct virtio_thinint_area *thinint_area = NULL;
-	unsigned long indicator_addr;
+	unsigned long *indicator_addr;
 	struct airq_info *info;
 
 	thinint_area = ccw_device_dma_zalloc(vcdev->cdev,
-					     sizeof(*thinint_area));
+					     sizeof(*thinint_area),
+					     &ccw->cda);
 	if (!thinint_area) {
 		ret = -ENOMEM;
 		goto out;
@@ -622,15 +661,13 @@ static int virtio_ccw_register_adapter_ind(struct virtio_ccw_device *vcdev,
 		ret = -ENOSPC;
 		goto out;
 	}
-	thinint_area->indicator = virt_to_phys((void *)indicator_addr);
+	thinint_area->indicator = virt_to_dma64(indicator_addr);
 	info = vcdev->airq_info;
-	thinint_area->summary_indicator =
-		virt_to_phys(get_summary_indicator(info));
+	thinint_area->summary_indicator = get_summary_indicator_dma(info);
 	thinint_area->isc = VIRTIO_AIRQ_ISC;
 	ccw->cmd_code = CCW_CMD_SET_IND_ADAPTER;
 	ccw->flags = CCW_FLAG_SLI;
 	ccw->count = sizeof(*thinint_area);
-	ccw->cda = (__u32)virt_to_phys(thinint_area);
 	ret = ccw_io_helper(vcdev, ccw, VIRTIO_CCW_DOING_SET_IND_ADAPTER);
 	if (ret) {
 		if (ret == -EOPNOTSUPP) {
@@ -658,11 +695,11 @@ static int virtio_ccw_find_vqs(struct virtio_device *vdev, unsigned nvqs,
 			       struct irq_affinity *desc)
 {
 	struct virtio_ccw_device *vcdev = to_vc_device(vdev);
-	unsigned long *indicatorp = NULL;
+	dma64_t *indicatorp = NULL;
 	int ret, i, queue_idx = 0;
 	struct ccw1 *ccw;
 
-	ccw = ccw_device_dma_zalloc(vcdev->cdev, sizeof(*ccw));
+	ccw = ccw_device_dma_zalloc(vcdev->cdev, sizeof(*ccw), NULL);
 	if (!ccw)
 		return -ENOMEM;
@@ -687,10 +724,11 @@ static int virtio_ccw_find_vqs(struct virtio_device *vdev, unsigned nvqs,
 	 * the address of the indicators.
 	 */
 	indicatorp = ccw_device_dma_zalloc(vcdev->cdev,
-					   sizeof(indicators(vcdev)));
+					   sizeof(*indicatorp),
+					   &ccw->cda);
 	if (!indicatorp)
 		goto out;
-	*indicatorp = (unsigned long) indicators(vcdev);
+	*indicatorp = indicators_dma(vcdev);
 	if (vcdev->is_thinint) {
 		ret = virtio_ccw_register_adapter_ind(vcdev, vqs, nvqs, ccw);
 		if (ret)
@@ -702,32 +740,30 @@ static int virtio_ccw_find_vqs(struct virtio_device *vdev, unsigned nvqs,
 		*indicators(vcdev) = 0;
 		ccw->cmd_code = CCW_CMD_SET_IND;
 		ccw->flags = 0;
-		ccw->count = sizeof(indicators(vcdev));
-		ccw->cda = (__u32)virt_to_phys(indicatorp);
+		ccw->count = sizeof(*indicatorp);
 		ret = ccw_io_helper(vcdev, ccw, VIRTIO_CCW_DOING_SET_IND);
 		if (ret)
 			goto out;
 	}
 
 	/* Register indicators2 with host for config changes */
-	*indicatorp = (unsigned long) indicators2(vcdev);
+	*indicatorp = indicators2_dma(vcdev);
 	*indicators2(vcdev) = 0;
 	ccw->cmd_code = CCW_CMD_SET_CONF_IND;
 	ccw->flags = 0;
-	ccw->count = sizeof(indicators2(vcdev));
-	ccw->cda = (__u32)virt_to_phys(indicatorp);
+	ccw->count = sizeof(*indicatorp);
 	ret = ccw_io_helper(vcdev, ccw, VIRTIO_CCW_DOING_SET_CONF_IND);
 	if (ret)
 		goto out;
 
 	if (indicatorp)
 		ccw_device_dma_free(vcdev->cdev, indicatorp,
-				    sizeof(indicators(vcdev)));
+				    sizeof(*indicatorp));
 	ccw_device_dma_free(vcdev->cdev, ccw, sizeof(*ccw));
 	return 0;
 out:
 	if (indicatorp)
 		ccw_device_dma_free(vcdev->cdev, indicatorp,
-				    sizeof(indicators(vcdev)));
+				    sizeof(*indicatorp));
 	ccw_device_dma_free(vcdev->cdev, ccw, sizeof(*ccw));
 	virtio_ccw_del_vqs(vdev);
 	return ret;
@@ -738,7 +774,7 @@ static void virtio_ccw_reset(struct virtio_device *vdev)
 	struct virtio_ccw_device *vcdev = to_vc_device(vdev);
 	struct ccw1 *ccw;
 
-	ccw = ccw_device_dma_zalloc(vcdev->cdev, sizeof(*ccw));
+	ccw = ccw_device_dma_zalloc(vcdev->cdev, sizeof(*ccw), NULL);
 	if (!ccw)
 		return;
@@ -762,11 +798,12 @@ static u64 virtio_ccw_get_features(struct virtio_device *vdev)
 	u64 rc;
 	struct ccw1 *ccw;
 
-	ccw = ccw_device_dma_zalloc(vcdev->cdev, sizeof(*ccw));
+	ccw = ccw_device_dma_zalloc(vcdev->cdev, sizeof(*ccw), NULL);
 	if (!ccw)
 		return 0;
 
-	features = ccw_device_dma_zalloc(vcdev->cdev, sizeof(*features));
+	features = ccw_device_dma_zalloc(vcdev->cdev, sizeof(*features),
+					 &ccw->cda);
 	if (!features) {
 		rc = 0;
 		goto out_free;
@@ -776,7 +813,6 @@ static u64 virtio_ccw_get_features(struct virtio_device *vdev)
 	ccw->cmd_code = CCW_CMD_READ_FEAT;
 	ccw->flags = 0;
 	ccw->count = sizeof(*features);
-	ccw->cda = (__u32)virt_to_phys(features);
 	ret = ccw_io_helper(vcdev, ccw, VIRTIO_CCW_DOING_READ_FEAT);
 	if (ret) {
 		rc = 0;
@@ -793,7 +829,6 @@ static u64 virtio_ccw_get_features(struct virtio_device *vdev)
 	ccw->cmd_code = CCW_CMD_READ_FEAT;
 	ccw->flags = 0;
 	ccw->count = sizeof(*features);
-	ccw->cda = (__u32)virt_to_phys(features);
 	ret = ccw_io_helper(vcdev, ccw, VIRTIO_CCW_DOING_READ_FEAT);
 	if (ret == 0)
 		rc |= (u64)le32_to_cpu(features->features) << 32;
@@ -825,11 +860,12 @@ static int virtio_ccw_finalize_features(struct virtio_device *vdev)
 		return -EINVAL;
 	}
 
-	ccw = ccw_device_dma_zalloc(vcdev->cdev, sizeof(*ccw));
+	ccw = ccw_device_dma_zalloc(vcdev->cdev, sizeof(*ccw), NULL);
 	if (!ccw)
 		return -ENOMEM;
 
-	features = ccw_device_dma_zalloc(vcdev->cdev, sizeof(*features));
+	features = ccw_device_dma_zalloc(vcdev->cdev, sizeof(*features),
+					 &ccw->cda);
 	if (!features) {
 		ret = -ENOMEM;
 		goto out_free;
@@ -846,7 +882,6 @@ static int virtio_ccw_finalize_features(struct virtio_device *vdev)
 	ccw->cmd_code = CCW_CMD_WRITE_FEAT;
 	ccw->flags = 0;
 	ccw->count = sizeof(*features);
-	ccw->cda = (__u32)virt_to_phys(features);
 	ret = ccw_io_helper(vcdev, ccw, VIRTIO_CCW_DOING_WRITE_FEAT);
 	if (ret)
 		goto out_free;
@@ -860,7 +895,6 @@ static int virtio_ccw_finalize_features(struct virtio_device *vdev)
 	ccw->cmd_code = CCW_CMD_WRITE_FEAT;
 	ccw->flags = 0;
 	ccw->count = sizeof(*features);
-	ccw->cda = (__u32)virt_to_phys(features);
 	ret = ccw_io_helper(vcdev, ccw, VIRTIO_CCW_DOING_WRITE_FEAT);
 
 out_free:
@@ -879,12 +913,13 @@ static void virtio_ccw_get_config(struct virtio_device *vdev,
 	void *config_area;
 	unsigned long flags;
 
-	ccw = ccw_device_dma_zalloc(vcdev->cdev, sizeof(*ccw));
+	ccw = ccw_device_dma_zalloc(vcdev->cdev, sizeof(*ccw), NULL);
 	if (!ccw)
 		return;
 
 	config_area = ccw_device_dma_zalloc(vcdev->cdev,
-					    VIRTIO_CCW_CONFIG_SIZE);
+					    VIRTIO_CCW_CONFIG_SIZE,
+					    &ccw->cda);
 	if (!config_area)
 		goto out_free;
@@ -892,7 +927,6 @@ static void virtio_ccw_get_config(struct virtio_device *vdev,
 	ccw->cmd_code = CCW_CMD_READ_CONF;
 	ccw->flags = 0;
 	ccw->count = offset + len;
-	ccw->cda = (__u32)virt_to_phys(config_area);
 	ret = ccw_io_helper(vcdev, ccw, VIRTIO_CCW_DOING_READ_CONFIG);
 	if (ret)
 		goto out_free;
@@ -919,12 +953,13 @@ static void virtio_ccw_set_config(struct virtio_device *vdev,
 	void *config_area;
 	unsigned long flags;
 
-	ccw = ccw_device_dma_zalloc(vcdev->cdev, sizeof(*ccw));
+	ccw = ccw_device_dma_zalloc(vcdev->cdev, sizeof(*ccw), NULL);
 	if (!ccw)
 		return;
 
 	config_area = ccw_device_dma_zalloc(vcdev->cdev,
-					    VIRTIO_CCW_CONFIG_SIZE);
+					    VIRTIO_CCW_CONFIG_SIZE,
+					    &ccw->cda);
 	if (!config_area)
 		goto out_free;
@@ -939,7 +974,6 @@ static void virtio_ccw_set_config(struct virtio_device *vdev,
 	ccw->cmd_code = CCW_CMD_WRITE_CONF;
 	ccw->flags = 0;
 	ccw->count = offset + len;
-	ccw->cda = (__u32)virt_to_phys(config_area);
 	ccw_io_helper(vcdev, ccw, VIRTIO_CCW_DOING_WRITE_CONFIG);
 
 out_free:
@@ -956,14 +990,14 @@ static u8 virtio_ccw_get_status(struct virtio_device *vdev)
 	if (vcdev->revision < 2)
 		return vcdev->dma_area->status;
 
-	ccw = ccw_device_dma_zalloc(vcdev->cdev, sizeof(*ccw));
+	ccw = ccw_device_dma_zalloc(vcdev->cdev, sizeof(*ccw), NULL);
 	if (!ccw)
 		return old_status;
 
 	ccw->cmd_code = CCW_CMD_READ_STATUS;
 	ccw->flags = 0;
 	ccw->count = sizeof(vcdev->dma_area->status);
-	ccw->cda = (__u32)virt_to_phys(&vcdev->dma_area->status);
+	ccw->cda = status_dma(vcdev);
 	ccw_io_helper(vcdev, ccw, VIRTIO_CCW_DOING_READ_STATUS);
 	/*
 	 * If the channel program failed (should only happen if the device
@@ -983,7 +1017,7 @@ static void virtio_ccw_set_status(struct virtio_device *vdev, u8 status)
 	struct ccw1 *ccw;
 	int ret;
 
-	ccw = ccw_device_dma_zalloc(vcdev->cdev, sizeof(*ccw));
+	ccw = ccw_device_dma_zalloc(vcdev->cdev, sizeof(*ccw), NULL);
 	if (!ccw)
 		return;
@@ -992,11 +1026,11 @@ static void virtio_ccw_set_status(struct virtio_device *vdev, u8 status)
 	ccw->cmd_code = CCW_CMD_WRITE_STATUS;
 	ccw->flags = 0;
 	ccw->count = sizeof(status);
-	ccw->cda = (__u32)virt_to_phys(&vcdev->dma_area->status);
 	/* We use ssch for setting the status which is a serializing
 	 * instruction that guarantees the memory writes have
 	 * completed before ssch.
 	 */
+	ccw->cda = status_dma(vcdev);
 	ret = ccw_io_helper(vcdev, ccw, VIRTIO_CCW_DOING_WRITE_STATUS);
 	/* Write failed? We assume status is unchanged. */
 	if (ret)
@@ -1278,10 +1312,10 @@ static int virtio_ccw_set_transport_rev(struct virtio_ccw_device *vcdev)
 	struct ccw1 *ccw;
 	int ret;
 
-	ccw = ccw_device_dma_zalloc(vcdev->cdev, sizeof(*ccw));
+	ccw = ccw_device_dma_zalloc(vcdev->cdev, sizeof(*ccw), NULL);
 	if (!ccw)
 		return -ENOMEM;
-	rev = ccw_device_dma_zalloc(vcdev->cdev, sizeof(*rev));
+	rev = ccw_device_dma_zalloc(vcdev->cdev, sizeof(*rev), &ccw->cda);
 	if (!rev) {
 		ccw_device_dma_free(vcdev->cdev, ccw, sizeof(*ccw));
 		return -ENOMEM;
@@ -1291,7 +1325,6 @@ static int virtio_ccw_set_transport_rev(struct virtio_ccw_device *vcdev)
 	ccw->cmd_code = CCW_CMD_SET_VIRTIO_REV;
 	ccw->flags = 0;
 	ccw->count = sizeof(*rev);
-	ccw->cda = (__u32)virt_to_phys(rev);
 
 	vcdev->revision = VIRTIO_CCW_REV_MAX;
 	do {
@@ -1333,7 +1366,8 @@ static int virtio_ccw_online(struct ccw_device *cdev)
 	vcdev->vdev.dev.parent = &cdev->dev;
 	vcdev->cdev = cdev;
 	vcdev->dma_area = ccw_device_dma_zalloc(vcdev->cdev,
-						sizeof(*vcdev->dma_area),
+						sizeof(*vcdev->dma_area),
+						&vcdev->dma_area_addr);
 	if (!vcdev->dma_area) {
 		ret = -ENOMEM;
 		goto out_free;
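
All of the virtio_ccw hunks above follow the same two ideas: ccw_device_dma_zalloc() now hands the caller the DMA address of the allocation through a third parameter (&ccw->cda where the channel program needs it, NULL otherwise), and every address handed to the channel subsystem carries one of the new bitwise types (dma32_t, dma64_t) instead of a plain integer, so a missed conversion becomes a sparse-visible type error instead of a silent bug. The following standalone userspace sketch shows the bitwise-type half of that; the __CHECKER__ guards mirror the kernel's, but virt_to_dma32() here is a simplified stand-in that assumes an identity virtual-to-DMA mapping, not the real s390 helper:

#include <stdint.h>
#include <stdio.h>

#ifdef __CHECKER__			/* defined while sparse parses the file */
#define __bitwise	__attribute__((bitwise))
#define __force		__attribute__((force))
#else
#define __bitwise
#define __force
#endif

typedef uint32_t __bitwise dma32_t;	/* stand-in for asm/dma-types.h */

/* simplified helper, assumes an identity virtual-to-DMA mapping */
static inline dma32_t virt_to_dma32(void *ptr)
{
	return (__force dma32_t)(uint32_t)(uintptr_t)ptr;
}

int main(void)
{
	uint32_t indicator = 0;
	dma32_t cda;

	cda = virt_to_dma32(&indicator);	/* ok: explicit, checked conversion */
	/* cda = (uint32_t)(uintptr_t)&indicator; */
	/* ^ sparse: incorrect type in assignment (different base types) */
	printf("cda=0x%x\n", (__force uint32_t)cda);
	return 0;
}

A plain compile is unaffected because the attributes expand to nothing outside sparse; only "sparse file.c" flags the raw assignment.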

--- a/include/net/iucv/iucv.h
+++ b/include/net/iucv/iucv.h
@@ -30,6 +30,7 @@
 #include <linux/types.h>
 #include <linux/slab.h>
+#include <asm/dma-types.h>
 #include <asm/debug.h>
 
 /*
@@ -76,7 +77,7 @@
  * and iucv_message_reply if IUCV_IPBUFLST or IUCV_IPANSLST are used.
  */
 struct iucv_array {
-	u32 address;
+	dma32_t address;
 	u32 length;
 } __attribute__ ((aligned (8)));
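
With address typed as dma32_t, every producer of an iucv_array entry is forced through a typed conversion, which is exactly what the af_iucv hunks below do with virt_to_dma32(). A minimal sketch of filling such an entry list, with the struct layout taken from the hunk above and the same assumed identity-mapping helper as in the previous example:

#include <stdint.h>
#include <stdio.h>

#ifdef __CHECKER__
#define __bitwise	__attribute__((bitwise))
#define __force		__attribute__((force))
#else
#define __bitwise
#define __force
#endif

typedef uint32_t __bitwise dma32_t;

struct iucv_array {
	dma32_t address;
	uint32_t length;
} __attribute__ ((aligned (8)));

static inline dma32_t virt_to_dma32(void *ptr)
{
	return (__force dma32_t)(uint32_t)(uintptr_t)ptr;	/* assumed identity mapping */
}

int main(void)
{
	char head[16] = "linear head";
	char frag[32] = "first fragment";
	struct iucv_array iba[2];

	/* entry 0 describes the linear part, entry 1 one fragment */
	iba[0].address = virt_to_dma32(head);
	iba[0].length = sizeof(head);
	iba[1].address = virt_to_dma32(frag);
	iba[1].length = sizeof(frag);
	printf("described %u bytes in %zu entries\n",
	       iba[0].length + iba[1].length, sizeof(iba) / sizeof(iba[0]));
	return 0;
}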

--- a/net/iucv/af_iucv.c
+++ b/net/iucv/af_iucv.c
@@ -1060,12 +1060,12 @@ static int iucv_sock_sendmsg(struct socket *sock, struct msghdr *msg,
 		int i;
 
 		/* skip iucv_array lying in the headroom */
-		iba[0].address = (u32)virt_to_phys(skb->data);
+		iba[0].address = virt_to_dma32(skb->data);
 		iba[0].length = (u32)skb_headlen(skb);
 		for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
 			skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
 
-			iba[i + 1].address = (u32)virt_to_phys(skb_frag_address(frag));
+			iba[i + 1].address = virt_to_dma32(skb_frag_address(frag));
 			iba[i + 1].length = (u32)skb_frag_size(frag);
 		}
 		err = pr_iucv->message_send(iucv->path, &txmsg,
@@ -1161,12 +1161,12 @@ static void iucv_process_message(struct sock *sk, struct sk_buff *skb,
 		struct iucv_array *iba = (struct iucv_array *)skb->head;
 		int i;
 
-		iba[0].address = (u32)virt_to_phys(skb->data);
+		iba[0].address = virt_to_dma32(skb->data);
 		iba[0].length = (u32)skb_headlen(skb);
 		for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
 			skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
 
-			iba[i + 1].address = (u32)virt_to_phys(skb_frag_address(frag));
+			iba[i + 1].address = virt_to_dma32(skb_frag_address(frag));
 			iba[i + 1].length = (u32)skb_frag_size(frag);
 		}
 		rc = pr_iucv->message_receive(path, msg,

--- a/net/iucv/iucv.c
+++ b/net/iucv/iucv.c
@@ -210,7 +210,7 @@ struct iucv_cmd_dpl {
 	u8 iprmmsg[8];
 	u32 ipsrccls;
 	u32 ipmsgtag;
-	u32 ipbfadr2;
+	dma32_t ipbfadr2;
 	u32 ipbfln2f;
 	u32 res;
 } __attribute__ ((packed,aligned(8)));
@@ -226,11 +226,11 @@ struct iucv_cmd_db {
 	u8 iprcode;
 	u32 ipmsgid;
 	u32 iptrgcls;
-	u32 ipbfadr1;
+	dma32_t ipbfadr1;
 	u32 ipbfln1f;
 	u32 ipsrccls;
 	u32 ipmsgtag;
-	u32 ipbfadr2;
+	dma32_t ipbfadr2;
 	u32 ipbfln2f;
 	u32 res;
 } __attribute__ ((packed,aligned(8)));
@@ -432,7 +432,7 @@ static void iucv_declare_cpu(void *data)
 	/* Declare interrupt buffer. */
 	parm = iucv_param_irq[cpu];
 	memset(parm, 0, sizeof(union iucv_param));
-	parm->db.ipbfadr1 = virt_to_phys(iucv_irq_data[cpu]);
+	parm->db.ipbfadr1 = virt_to_dma32(iucv_irq_data[cpu]);
 	rc = iucv_call_b2f0(IUCV_DECLARE_BUFFER, parm);
 	if (rc) {
 		char *err = "Unknown";
@@ -1081,8 +1081,7 @@ static int iucv_message_receive_iprmdata(struct iucv_path *path,
 	size = (size < 8) ? size : 8;
 	for (array = buffer; size > 0; array++) {
 		copy = min_t(size_t, size, array->length);
-		memcpy((u8 *)(addr_t) array->address,
-		       rmmsg, copy);
+		memcpy(dma32_to_virt(array->address), rmmsg, copy);
 		rmmsg += copy;
 		size -= copy;
 	}
@@ -1124,7 +1123,7 @@ int __iucv_message_receive(struct iucv_path *path, struct iucv_message *msg,
 	parm = iucv_param[smp_processor_id()];
 	memset(parm, 0, sizeof(union iucv_param));
-	parm->db.ipbfadr1 = (u32)virt_to_phys(buffer);
+	parm->db.ipbfadr1 = virt_to_dma32(buffer);
 	parm->db.ipbfln1f = (u32) size;
 	parm->db.ipmsgid = msg->id;
 	parm->db.ippathid = path->pathid;
@@ -1242,7 +1241,7 @@ int iucv_message_reply(struct iucv_path *path, struct iucv_message *msg,
 		parm->dpl.iptrgcls = msg->class;
 		memcpy(parm->dpl.iprmmsg, reply, min_t(size_t, size, 8));
 	} else {
-		parm->db.ipbfadr1 = (u32)virt_to_phys(reply);
+		parm->db.ipbfadr1 = virt_to_dma32(reply);
 		parm->db.ipbfln1f = (u32) size;
 		parm->db.ippathid = path->pathid;
 		parm->db.ipflags1 = flags;
@@ -1294,7 +1293,7 @@ int __iucv_message_send(struct iucv_path *path, struct iucv_message *msg,
 		parm->dpl.ipmsgtag = msg->tag;
 		memcpy(parm->dpl.iprmmsg, buffer, 8);
 	} else {
-		parm->db.ipbfadr1 = (u32)virt_to_phys(buffer);
+		parm->db.ipbfadr1 = virt_to_dma32(buffer);
 		parm->db.ipbfln1f = (u32) size;
 		parm->db.ippathid = path->pathid;
 		parm->db.ipflags1 = flags | IUCV_IPNORPY;
@@ -1379,7 +1378,7 @@ int iucv_message_send2way(struct iucv_path *path, struct iucv_message *msg,
 		parm->dpl.iptrgcls = msg->class;
 		parm->dpl.ipsrccls = srccls;
 		parm->dpl.ipmsgtag = msg->tag;
-		parm->dpl.ipbfadr2 = (u32)virt_to_phys(answer);
+		parm->dpl.ipbfadr2 = virt_to_dma32(answer);
 		parm->dpl.ipbfln2f = (u32) asize;
 		memcpy(parm->dpl.iprmmsg, buffer, 8);
 	} else {
@@ -1388,9 +1387,9 @@ int iucv_message_send2way(struct iucv_path *path, struct iucv_message *msg,
 		parm->db.iptrgcls = msg->class;
 		parm->db.ipsrccls = srccls;
 		parm->db.ipmsgtag = msg->tag;
-		parm->db.ipbfadr1 = (u32)virt_to_phys(buffer);
+		parm->db.ipbfadr1 = virt_to_dma32(buffer);
 		parm->db.ipbfln1f = (u32) size;
-		parm->db.ipbfadr2 = (u32)virt_to_phys(answer);
+		parm->db.ipbfadr2 = virt_to_dma32(answer);
 		parm->db.ipbfln2f = (u32) asize;
 	}
 	rc = iucv_call_b2f0(IUCV_SEND, parm);
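
The receive path needs the inverse conversion: iucv_message_receive_iprmdata() above turns the stored DMA address back into a pointer with dma32_to_virt() before copying. A matching round-trip sketch, using a 64-bit dma64_t so the example also runs on a 64-bit host; the helpers are again simplified identity-mapping stand-ins mirroring virt_to_dma32()/dma32_to_virt() from the hunks, not the s390 implementation:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#ifdef __CHECKER__
#define __bitwise	__attribute__((bitwise))
#define __force		__attribute__((force))
#else
#define __bitwise
#define __force
#endif

typedef uint64_t __bitwise dma64_t;

static inline dma64_t virt_to_dma64(void *ptr)
{
	return (__force dma64_t)(uintptr_t)ptr;	/* assumed identity mapping */
}

static inline void *dma64_to_virt(dma64_t addr)
{
	return (void *)(uintptr_t)(__force uint64_t)addr;
}

int main(void)
{
	char rmmsg[8] = "payload";
	char buf[8] = "";
	dma64_t address = virt_to_dma64(buf);	/* what would sit in the array entry */

	/* mirrors memcpy(dma32_to_virt(array->address), rmmsg, copy) above */
	memcpy(dma64_to_virt(address), rmmsg, sizeof(rmmsg));
	puts(buf);
	return 0;
}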