iommu: Fix a boundary issue to avoid performance drop

[ Upstream commit 3431c3f660 ]

After the change in commit 862c3715de ("iommu: Switch gather->end to
the inclusive end"), performance drops from 1600+K IOPS to 1200K on our
Kunpeng ARM64 platform.
We find that the range [start1, end1) is actually adjacent to the range
[end1, end2), but after the change it is treated as disjoint, so more
TLB syncs are issued and more time is spent on them.
So fix the boundary check to avoid the performance drop.

Fixes: 862c3715de ("iommu: Switch gather->end to the inclusive end")
Signed-off-by: Xiang Chen <chenxiang66@hisilicon.com>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/1616643504-120688-1-git-send-email-chenxiang66@hisilicon.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
1 file changed, 1 insertion(+), 1 deletion(-)

@@ -544,7 +544,7 @@ static inline void iommu_iotlb_gather_add_page(struct iommu_domain *domain,
 	 * structure can be rewritten.
 	 */
 	if (gather->pgsize != size ||
-	    end < gather->start || start > gather->end) {
+	    end + 1 < gather->start || start > gather->end + 1) {
 		if (gather->pgsize)
 			iommu_iotlb_sync(domain, gather);
 		gather->pgsize = size;
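
To see why the extra "+ 1" matters once gather->end holds an inclusive
address, here is a minimal standalone sketch (plain userspace C, not
kernel code; the struct and function names are made up for
illustration). A page that begins right after the gathered range should
extend that range instead of forcing an extra TLB sync:

/*
 * Standalone illustration only; gather->end is an inclusive address,
 * as in the patched iommu_iotlb_gather_add_page().
 */
#include <stdbool.h>
#include <stdio.h>

struct gather {
	unsigned long start;
	unsigned long end;	/* inclusive */
};

/* Old check: a page adjacent to the gathered range looks disjoint. */
static bool disjoint_old(const struct gather *g, unsigned long start,
			 unsigned long end)
{
	return end < g->start || start > g->end;
}

/* Fixed check: a page touching either boundary can still be merged. */
static bool disjoint_fixed(const struct gather *g, unsigned long start,
			   unsigned long end)
{
	return end + 1 < g->start || start > g->end + 1;
}

int main(void)
{
	/* Gathered range covers one 4K page: [0x1000, 0x1fff]. */
	struct gather g = { .start = 0x1000, .end = 0x1fff };

	/* The very next page: [0x2000, 0x2fff]. */
	unsigned long start = 0x2000, end = 0x2fff;

	/* Prints "sync": the old check forces an extra TLB sync. */
	printf("old:   %s\n", disjoint_old(&g, start, end) ? "sync" : "merge");

	/* Prints "merge": the fixed check extends the gathered range. */
	printf("fixed: %s\n", disjoint_fixed(&g, start, end) ? "sync" : "merge");

	return 0;
}

With the old check, every contiguous page gathered in order triggers a
separate sync, which matches the reported IOPS drop; with the fixed
check, contiguous pages keep extending the same gather range and a
single sync covers them.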