Reinstate some of "swiotlb: rework "fix info leak with DMA_FROM_DEVICE""

Halil Pasic points out [1] that, rather than the full revert of that
commit (the revert in bddac7c1e0), a partial revert that only reverts
the problematic case but still keeps some of the cleanups is probably
better.

And that partial revert [2] had already been verified by Oleksandr
Natalenko to also fix the issue [3]; I had just missed that in the long
discussion.

So let's reinstate the cleanups from commit aa6f8dcbab ("swiotlb:
rework "fix info leak with DMA_FROM_DEVICE""), and effectively only
revert the part that caused problems.

Link: https://lore.kernel.org/all/20220328013731.017ae3e3.pasic@linux.ibm.com/ [1]
Link: https://lore.kernel.org/all/20220324055732.GB12078@lst.de/ [2]
Link: https://lore.kernel.org/all/4386660.LvFx2qVVIh@natalenko.name/ [3]
Suggested-by: Halil Pasic <pasic@linux.ibm.com>
Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Linus Torvalds, 2022-03-28 11:37:05 -07:00
commit 901c7280ca (parent ae085d7f93)
3 changed files with 8 additions and 20 deletions

Documentation/core-api/dma-attributes.rst

@@ -130,11 +130,3 @@ accesses to DMA buffers in both privileged "supervisor" and unprivileged
 subsystem that the buffer is fully accessible at the elevated privilege
 level (and ideally inaccessible or at least read-only at the
 lesser-privileged levels).
-
-DMA_ATTR_OVERWRITE
-------------------
-
-This is a hint to the DMA-mapping subsystem that the device is expected to
-overwrite the entire mapped size, thus the caller does not require any of the
-previous buffer contents to be preserved. This allows bounce-buffering
-implementations to optimise DMA_FROM_DEVICE transfers.
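
For context, here is a minimal sketch (not part of this commit) of how a driver would have requested that optimisation before the revert. The helper name, `buf`, and `len` are hypothetical; the assumption is a receive buffer the device is guaranteed to fill completely. With DMA_ATTR_OVERWRITE removed again, such a caller would simply pass 0 as the attributes.

#include <linux/dma-mapping.h>

/*
 * Hypothetical pre-revert driver helper: map a buffer the device will
 * overwrite in full.  DMA_ATTR_OVERWRITE hinted to swiotlb that it could
 * skip the initial CPU-to-bounce-buffer copy for this DMA_FROM_DEVICE
 * mapping.  After this commit the attribute no longer exists, so the
 * last argument would simply be 0.
 */
static dma_addr_t example_map_full_rx(struct device *dev, void *buf,
				      size_t len)
{
	return dma_map_single_attrs(dev, buf, len, DMA_FROM_DEVICE,
				    DMA_ATTR_OVERWRITE);
}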

include/linux/dma-mapping.h

@@ -61,14 +61,6 @@
  */
 #define DMA_ATTR_PRIVILEGED		(1UL << 9)
 
-/*
- * This is a hint to the DMA-mapping subsystem that the device is expected
- * to overwrite the entire mapped size, thus the caller does not require any
- * of the previous buffer contents to be preserved. This allows
- * bounce-buffering implementations to optimise DMA_FROM_DEVICE transfers.
- */
-#define DMA_ATTR_OVERWRITE		(1UL << 10)
-
 /*
  * A dma_addr_t can hold any valid DMA or bus address for the platform. It can
  * be given to a device to use as a DMA source or target. It is specific to a

kernel/dma/swiotlb.c

@@ -627,10 +627,14 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 	for (i = 0; i < nr_slots(alloc_size + offset); i++)
 		mem->slots[index + i].orig_addr = slot_addr(orig_addr, i);
 	tlb_addr = slot_addr(mem->start, index) + offset;
-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-	    (!(attrs & DMA_ATTR_OVERWRITE) || dir == DMA_TO_DEVICE ||
-	    dir == DMA_BIDIRECTIONAL))
-		swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_TO_DEVICE);
+	/*
+	 * When dir == DMA_FROM_DEVICE we could omit the copy from the orig
+	 * to the tlb buffer, if we knew for sure the device will
+	 * overwrite the entire current content. But we don't. Thus
+	 * unconditional bounce may prevent leaking swiotlb content (i.e.
+	 * kernel memory) to user-space.
+	 */
+	swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_TO_DEVICE);
 	return tlb_addr;
 }
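
The comment above carries the whole argument, so here is a minimal, hypothetical driver sequence to make the leak scenario concrete (the function and variable names are illustrative, not from the kernel tree): if the device writes fewer than `len` bytes, the untouched remainder of the bounce slot is still copied back into `buf` on unmap, so that slot must hold the caller's own data rather than stale swiotlb contents.

#include <linux/dma-mapping.h>

/* Hypothetical read path: map, let the device DMA into the buffer, unmap. */
static int example_read_from_device(struct device *dev, void *buf, size_t len)
{
	dma_addr_t handle;

	handle = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, handle))
		return -ENOMEM;

	/* ... program the device to DMA into 'handle'; it may complete
	 * after writing fewer than 'len' bytes ... */

	dma_unmap_single(dev, handle, len, DMA_FROM_DEVICE);
	/*
	 * With swiotlb in use, unmap copies the whole bounce slot back
	 * into 'buf'.  Because swiotlb_tbl_map_single() now bounces the
	 * original buffer into the slot unconditionally at map time, the
	 * bytes the device did not write come back as the caller's own
	 * original data instead of stale swiotlb (i.e. kernel) memory.
	 */
	return 0;
}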