linux-stable/include/linux/intel-svm.h
Lu Baolu ea661ad6e1 iommu/vt-d: Size Page Request Queue to avoid overflow condition
PRQ overflow may cause I/O throughput congestion, resulting in unnecessary
degradation of I/O performance. Appropriately increasing the length of the
PRQ can greatly reduce the occurrence of PRQ overflow. The maximum number
of page requests that a PCIe device can generate in parallel is statically
defined in the Outstanding Page Request Capacity field of the PCIe ATS
configuration space.
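
In Linux, this capacity register is exposed through the PRI (Page Request
Interface) extended capability. A minimal sketch of reading it per device
(hypothetical helper name, not code from this commit):

#include <linux/pci.h>

/* Hypothetical helper: read a device's Outstanding Page Request
 * Capacity from its PRI extended capability, or 0 if PRI is absent. */
static u32 read_prq_capacity(struct pci_dev *pdev)
{
	u32 max_requests = 0;
	int pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PRI);

	if (pos)
		pci_read_config_dword(pdev, pos + PCI_PRI_MAX_REQ,
				      &max_requests);
	return max_requests;
}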

The new PRQ length is calculated by summing the values of the Outstanding
Page Request Capacity register across all Page Request-capable devices on
a real PR-capable platform (Intel Sapphire Rapids). The result is rounded
up to the next power of 2.

The PRQ length is also doubled, because the VT-d IOMMU driver only updates
the Page Request Queue Head Register (PQH_REG) after processing the entire
queue.
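
Putting the steps together, a sketch of the sizing arithmetic (hypothetical
function; the commit itself bakes the final result into the static
PRQ_ORDER below rather than computing it at runtime):

#include <linux/log2.h>

/* Hypothetical sketch of the sizing described above: takes the sum of
 * all devices' Outstanding Page Request Capacity values, returns the
 * queue size in bytes. */
static unsigned long prq_size_bytes(unsigned long total_outstanding)
{
	/* Each VT-d page request descriptor occupies 32 bytes; round
	 * the total up to the next power of two. */
	unsigned long bytes = roundup_pow_of_two(total_outstanding * 32);

	/* Double the queue, since PQH_REG is only advanced after the
	 * whole queue has been processed. */
	return bytes * 2;
}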

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Link: https://lore.kernel.org/r/20220421113558.3504874-1-baolu.lu@linux.intel.com
Link: https://lore.kernel.org/r/20220510023407.2759143-5-baolu.lu@linux.intel.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2022-05-13 15:14:56 +02:00


/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright © 2015 Intel Corporation.
*
* Authors: David Woodhouse <David.Woodhouse@intel.com>
*/
#ifndef __INTEL_SVM_H__
#define __INTEL_SVM_H__
/* Page Request Queue depth */
#define PRQ_ORDER 4	/* 2^4 pages, i.e. a 64KiB queue */
#define PRQ_RING_MASK ((0x1000 << PRQ_ORDER) - 0x20)	/* head/tail wrap mask; each descriptor is 0x20 (32) bytes */
#define PRQ_DEPTH ((0x1000 << PRQ_ORDER) >> 5)	/* queue bytes / 32-byte descriptor size */
/*
* The SVM_FLAG_SUPERVISOR_MODE flag requests a PASID which can be used only
* for access to kernel addresses. No IOTLB flushes are automatically done
* for kernel mappings; it is valid only for access to the kernel's static
* 1:1 mapping of physical memory — not to vmalloc or even module mappings.
* A future API addition may permit the use of such ranges, by means of an
* explicit IOTLB flush call (akin to the DMA API's unmap method).
*
* It is unlikely that we will ever hook into flush_tlb_kernel_range() to
* do such IOTLB flushes automatically.
*/
#define SVM_FLAG_SUPERVISOR_MODE BIT(0)
#endif /* __INTEL_SVM_H__ */
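
For context, in this kernel era the flag was consumed through the drvdata
argument of iommu_sva_bind_device(). A hedged sketch of a consumer, assuming
the 5.18-era three-argument signature and a hypothetical function name:

#include <linux/err.h>
#include <linux/iommu.h>
#include <linux/intel-svm.h>

/* Hypothetical sketch: request a supervisor PASID for DMA to kernel
 * addresses, assuming iommu_sva_bind_device(dev, mm, drvdata). */
static int bind_supervisor_pasid(struct device *dev, u32 *pasid)
{
	unsigned int flags = SVM_FLAG_SUPERVISOR_MODE;
	struct iommu_sva *sva;

	/* mm is NULL: the PASID targets the kernel's 1:1 mapping only. */
	sva = iommu_sva_bind_device(dev, NULL, &flags);
	if (IS_ERR(sva))
		return PTR_ERR(sva);

	*pasid = iommu_sva_get_pasid(sva);
	return 0;
}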