Tetsuo Handa 90c4e02bae mm/page_alloc: fix potential deadlock on zonelist_update_seq seqlock
commit 1007843a91 upstream.

syzbot is reporting a circular locking dependency which involves the
zonelist_update_seq seqlock [1], because this lock is checked even by
memory allocation requests which do not need to be retried.

One deadlock scenario is kmalloc(GFP_ATOMIC) from an interrupt handler.

  CPU0
  ----
  __build_all_zonelists() {
    write_seqlock(&zonelist_update_seq); // makes zonelist_update_seq.seqcount odd
    // e.g. timer interrupt handler runs at this moment
      some_timer_func() {
        kmalloc(GFP_ATOMIC) {
          __alloc_pages_slowpath() {
            read_seqbegin(&zonelist_update_seq) {
              // spins forever because zonelist_update_seq.seqcount is odd
            }
          }
        }
      }
    // e.g. timer interrupt handler finishes
    write_sequnlock(&zonelist_update_seq); // makes zonelist_update_seq.seqcount even
  }

This deadlock scenario could easily be eliminated by not calling
read_seqbegin(&zonelist_update_seq) for !__GFP_DIRECT_RECLAIM allocation
requests, because retrying applies only to __GFP_DIRECT_RECLAIM allocation
requests.  But Michal Hocko is not sure whether we should go with this
approach.

Another deadlock scenario which syzbot is reporting is a race between
kmalloc(GFP_ATOMIC) from tty_insert_flip_string_and_push_buffer() with
port->lock held and printk() from __build_all_zonelists() with
zonelist_update_seq held.

  CPU0                                   CPU1
  ----                                   ----
  pty_write() {
    tty_insert_flip_string_and_push_buffer() {
                                         __build_all_zonelists() {
                                           write_seqlock(&zonelist_update_seq);
                                           build_zonelists() {
                                             printk() {
                                               vprintk() {
                                                 vprintk_default() {
                                                   vprintk_emit() {
                                                     console_unlock() {
                                                       console_flush_all() {
                                                         console_emit_next_record() {
                                                           con->write() = serial8250_console_write() {
      spin_lock_irqsave(&port->lock, flags);
      tty_insert_flip_string() {
        tty_insert_flip_string_fixed_flag() {
          __tty_buffer_request_room() {
            tty_buffer_alloc() {
              kmalloc(GFP_ATOMIC | __GFP_NOWARN) {
                __alloc_pages_slowpath() {
                  zonelist_iter_begin() {
                    read_seqbegin(&zonelist_update_seq); // spins forever because zonelist_update_seq.seqcount is odd
                                                             spin_lock_irqsave(&port->lock, flags); // spins forever because port->lock is held
                    }
                  }
                }
              }
            }
          }
        }
      }
      spin_unlock_irqrestore(&port->lock, flags);
                                                             // message is printed to console
                                                             spin_unlock_irqrestore(&port->lock, flags);
                                                           }
                                                         }
                                                       }
                                                     }
                                                   }
                                                 }
                                               }
                                             }
                                           }
                                           write_sequnlock(&zonelist_update_seq);
                                         }
    }
  }

This deadlock scenario can be eliminated by

  preventing interrupt context from calling kmalloc(GFP_ATOMIC)

and

  preventing printk() from calling console_flush_all()

while zonelist_update_seq.seqcount is odd.

Since Petr Mladek thinks that __build_all_zonelists() can become a
candidate for deferring printk() [2], let's address this problem by

  disabling local interrupts in order to avoid kmalloc(GFP_ATOMIC)

and

  disabling synchronous printk() in order to avoid console_flush_all().
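
A minimal sketch of what this means for the writer side of
__build_all_zonelists() is shown below (abridged and reconstructed from the
description above rather than quoted from the diff;
local_irq_save()/local_irq_restore() and
printk_deferred_enter()/printk_deferred_leave() are existing kernel
primitives).

  static void __build_all_zonelists(void *data)
  {
          unsigned long flags;
          ...
          /*
           * Disable this CPU's interrupts before taking the seqlock so that
           * no interrupt handler can enter kmalloc(GFP_ATOMIC) and spin on
           * read_seqbegin(&zonelist_update_seq) while the seqcount is odd.
           */
          local_irq_save(flags);
          /*
           * Defer this CPU's printk() output before taking the seqlock so
           * that console_flush_all() (and therefore port->lock) cannot be
           * reached while zonelist_update_seq.seqcount is odd.
           */
          printk_deferred_enter();
          write_seqlock(&zonelist_update_seq);
          ...
          write_sequnlock(&zonelist_update_seq);
          printk_deferred_leave();
          local_irq_restore(flags);
  }

This keeps the window during which zonelist_update_seq.seqcount is odd as
short as possible and guarantees that neither an interrupt handler nor
console_flush_all() can run on this CPU while the seqcount is odd.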

As a side effect of minimizing the duration for which
zonelist_update_seq.seqcount stays odd by disabling synchronous printk(),
latency at read_seqbegin(&zonelist_update_seq) is also reduced for both
!__GFP_DIRECT_RECLAIM and __GFP_DIRECT_RECLAIM allocation requests.
Although, from a lockdep perspective, not calling
read_seqbegin(&zonelist_update_seq) from interrupt context at all (i.e. not
recording the unnecessary locking dependency) would still be preferable,
even if kmalloc(GFP_ATOMIC) is no longer allowed to run inside the
write_seqlock(&zonelist_update_seq)/write_sequnlock(&zonelist_update_seq)
section.

Link: https://lkml.kernel.org/r/8796b95c-3da3-5885-fddd-6ef55f30e4d3@I-love.SAKURA.ne.jp
Fixes: 3d36424b3b ("mm/page_alloc: fix race condition between build_all_zonelists and page allocation")
Link: https://lkml.kernel.org/r/ZCrs+1cDqPWTDFNM@alley [2]
Reported-by: syzbot <syzbot+223c7461c58c58a4cb10@syzkaller.appspotmail.com>
Link: https://syzkaller.appspot.com/bug?extid=223c7461c58c58a4cb10 [1]
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Petr Mladek <pmladek@suse.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
Cc: John Ogness <john.ogness@linutronix.de>
Cc: Patrick Daly <quic_pdaly@quicinc.com>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>