linux-stable/arch
Ard Biesheuvel 97d6786e06 arm64: mm: account for hotplug memory when randomizing the linear region
As a hardening measure, we currently randomize the placement of
physical memory inside the linear region when KASLR is in effect.
Since the random offset at which to place the available physical
memory inside the linear region is chosen early at boot, it is
based on the memblock description of memory, which does not cover
memory that may be hotplugged later. The consequence is that the
randomization offset may be chosen such that memory hotplugged
above memblock_end_of_DRAM() later on is pushed beyond the end of
the linear region, where it cannot be accessed.
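
For illustration, the randomization window was previously sized from the
memblock-described span only; a simplified C sketch modelled on the logic
in arch/arm64/mm/init.c (not the verbatim kernel code):

	/* Old sizing (sketch): only memory known to memblock at boot is
	 * accounted for, so memory hotplugged later is ignored. */
	u64 range = linear_region_size -
		    (memblock_end_of_DRAM() - memblock_start_of_DRAM());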

So let's limit this randomization of the linear region to ensure
that this can no longer happen, by sizing the randomization window
based on the CPU's addressable PA range instead. As it is guaranteed
that no hotpluggable memory will ever appear outside of that range,
we can safely place this PA-range-sized window anywhere in the
linear region.
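
As an illustration of the fixed approach, here is a simplified sketch
modelled on arch/arm64/mm/init.c; helper and macro names follow the arm64
kernel of that time, but this is not the verbatim patch:

	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
		extern u16 memstart_offset_seed;
		u64 mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
		int parange = cpuid_feature_extract_unsigned_field(
					mmfr0, ID_AA64MMFR0_PARANGE_SHIFT);
		/* Size the window by what the CPU can address, not by what
		 * memblock happens to describe at boot. */
		s64 range = linear_region_size -
			    BIT(id_aa64mmfr0_parange_to_phys_shift(parange));

		if (memstart_offset_seed > 0 &&
		    range >= (s64)ARM64_MEMSTART_ALIGN) {
			range /= ARM64_MEMSTART_ALIGN;
			/* Shift the start of the linear mapping down by a
			 * random multiple of ARM64_MEMSTART_ALIGN within
			 * that window. */
			memstart_addr -= ARM64_MEMSTART_ALIGN *
					 ((range * memstart_offset_seed) >> 16);
		}
	}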

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Steven Price <steven.price@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/20201014081857.3288-1-ardb@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-11-10 18:43:25 +00:00
alpha arch-cleanup-2020-10-22 2020-10-23 10:06:38 -07:00
arc ARC: [plat-hsdk] Remap CCMs super early in asm boot trampoline 2020-11-02 11:45:09 -08:00
arm ARM, xtensa: highmem: avoid clobbering non-page aligned memory reservations 2020-11-04 10:42:57 +02:00
arm64 arm64: mm: account for hotplug memory when randomizing the linear region 2020-11-10 18:43:25 +00:00
c6x arch-cleanup-2020-10-22 2020-10-23 10:06:38 -07:00
csky treewide: Convert macro and uses of __section(foo) to __section("foo") 2020-10-25 14:51:49 -07:00
h8300 arch-cleanup-2020-10-22 2020-10-23 10:06:38 -07:00
hexagon arch-cleanup-2020-10-22 2020-10-23 10:06:38 -07:00
ia64 treewide: Convert macro and uses of __section(foo) to __section("foo") 2020-10-25 14:51:49 -07:00
m68k arch-cleanup-2020-10-22 2020-10-23 10:06:38 -07:00
microblaze treewide: Convert macro and uses of __section(foo) to __section("foo") 2020-10-25 14:51:49 -07:00
mips treewide: Convert macro and uses of __section(foo) to __section("foo") 2020-10-25 14:51:49 -07:00
nds32 arch-cleanup-2020-10-22 2020-10-23 10:06:38 -07:00
nios2 arch-cleanup-2020-10-22 2020-10-23 10:06:38 -07:00
openrisc arch-cleanup-2020-10-22 2020-10-23 10:06:38 -07:00
parisc treewide: Convert macro and uses of __section(foo) to __section("foo") 2020-10-25 14:51:49 -07:00
powerpc powerpc/numa: Fix build when CONFIG_NUMA=n 2020-11-06 14:16:19 +11:00
riscv RISC-V: Fix the VDSO symbol generaton for binutils-2.35+ 2020-11-06 00:03:48 -08:00
s390 s390/pci: fix hot-plug of PCI function missing bus 2020-11-03 15:12:16 +01:00
sh treewide: Convert macro and uses of __section(foo) to __section("foo") 2020-10-25 14:51:49 -07:00
sparc treewide: Convert macro and uses of __section(foo) to __section("foo") 2020-10-25 14:51:49 -07:00
um arch/um: partially revert the conversion to __section() macro 2020-10-26 15:39:37 -07:00
x86 A set of x86 fixes: 2020-11-08 10:09:36 -08:00
xtensa ARM, xtensa: highmem: avoid clobbering non-page aligned memory reservations 2020-11-04 10:42:57 +02:00
.gitignore
Kconfig Merge branch 'work.set_fs' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs 2020-10-22 09:59:21 -07:00