Mirror of https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
Synced 2024-09-12 21:57:43 +00:00

Commit f2b79c0d79:
With 2M PMD-level mapping, we require 32 struct pages and a single vmemmap
page can contain 1024 struct pages (PAGE_SIZE/sizeof(struct page)). Hence
with 64K page size, we don't use vmemmap deduplication for PMD-level mapping.

[aneesh.kumar@linux.ibm.com: ppc64: don't include radix headers if CONFIG_PPC_RADIX_MMU=n]
Link: https://lkml.kernel.org/r/87zg3jw8km.fsf@linux.ibm.com
Link: https://lkml.kernel.org/r/20230724190759.483013-12-aneesh.kumar@linux.ibm.com
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Joao Martins <joao.m.martins@oracle.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
damon
active_mm.rst
arch_pgtable_helpers.rst
balance.rst
bootmem.rst
free_page_reporting.rst
frontswap.rst
highmem.rst
hmm.rst
hugetlbfs_reserv.rst
hwpoison.rst
index.rst
ksm.rst
memory-model.rst
mmu_notifier.rst
multigen_lru.rst
numa.rst
oom.rst
overcommit-accounting.rst
page_allocation.rst
page_cache.rst
page_frags.rst
page_migration.rst
page_owner.rst
page_reclaim.rst
page_table_check.rst
page_tables.rst
physical_memory.rst
process_addrs.rst
remap_file_pages.rst
shmfs.rst
slab.rst
slub.rst
split_page_table_lock.rst
swap.rst
transhuge.rst
unevictable-lru.rst
vmalloc.rst
vmalloced-kernel-stacks.rst
vmemmap_dedup.rst
z3fold.rst
zsmalloc.rst