mm: add an explicit smp_wmb() to UFFDIO_CONTINUE
Users of UFFDIO_CONTINUE may reasonably assume that a write memory barrier is included as part of UFFDIO_CONTINUE.  That is, a user may believe that all writes it has done to a page that it is now UFFDIO_CONTINUE'ing are guaranteed to be visible to anyone subsequently reading the page through the newly mapped virtual memory region.

Today, such a user happens to be correct.  mmget_not_zero(), for example, is called as part of UFFDIO_CONTINUE (and comes before any PTE updates), and it implicitly gives us a write barrier.

To be resilient against future changes, include an explicit smp_wmb().  While we're at it, optimize the smp_wmb() that is already incidentally present for the HugeTLB case.

Merely making a syscall does not generally imply the memory ordering constraints that we need (including on x86).

Link: https://lkml.kernel.org/r/20240307010250.3847179-1-jthoughton@google.com
Signed-off-by: James Houghton <jthoughton@google.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Cc: Axel Rasmussen <axelrasmussen@google.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
commit b14d1671dd
parent b555895c31

 mm/hugetlb.c     | 17 +++++++++++++----
 mm/userfaultfd.c |  9 +++++++++
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6780,11 +6780,20 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 	}
 
 	/*
-	 * The memory barrier inside __folio_mark_uptodate makes sure that
-	 * preceding stores to the page contents become visible before
-	 * the set_pte_at() write.
+	 * If we just allocated a new page, we need a memory barrier to ensure
+	 * that preceding stores to the page become visible before the
+	 * set_pte_at() write. The memory barrier inside __folio_mark_uptodate
+	 * is what we need.
+	 *
+	 * In the case where we have not allocated a new page (is_continue),
+	 * the page must already be uptodate. UFFDIO_CONTINUE already includes
+	 * an earlier smp_wmb() to ensure that prior stores will be visible
+	 * before the set_pte_at() write.
 	 */
-	__folio_mark_uptodate(folio);
+	if (!is_continue)
+		__folio_mark_uptodate(folio);
+	else
+		WARN_ON_ONCE(!folio_test_uptodate(folio));
 
 	/* Add shared, newly allocated pages to the page cache. */
 	if (vm_shared && !is_continue) {
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -845,6 +845,15 @@ ssize_t mfill_atomic_zeropage(struct userfaultfd_ctx *ctx,
 ssize_t mfill_atomic_continue(struct userfaultfd_ctx *ctx, unsigned long start,
 			      unsigned long len, uffd_flags_t flags)
 {
+	/*
+	 * A caller might reasonably assume that UFFDIO_CONTINUE contains an
+	 * smp_wmb() to ensure that any writes to the about-to-be-mapped page by
+	 * the thread doing the UFFDIO_CONTINUE are guaranteed to be visible to
+	 * subsequent loads from the page through the newly mapped address range.
+	 */
+	smp_wmb();
+
 	return mfill_atomic(ctx, start, 0, len,
 			    uffd_flags_set_mode(flags, MFILL_ATOMIC_CONTINUE));
 }