linux-stable/arch/x86/entry
Kirill A. Shutemov 1b8b1aa90c x86/mm: Fix VDSO and VVAR placement on 5-level paging machines
Yingcong has noticed that on 5-level paging machines, VDSO and VVAR
VMAs are placed above the 47-bit border:

8000001a9000-8000001ad000 r--p 00000000 00:00 0                          [vvar]
8000001ad000-8000001af000 r-xp 00000000 00:00 0                          [vdso]

This might confuse users who are not aware of 5-level paging and expect
all userspace addresses to be under the 47-bit border.
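
For reference, the 47-bit border is 1ULL << 47 == 0x800000000000, and both start
addresses in the excerpt above lie beyond it. A trivial userspace check (the vvar
start is copied from the maps excerpt; everything else is purely illustrative):

    #include <stdio.h>

    int main(void)
    {
            unsigned long long border = 1ULL << 47;        /* 0x800000000000 */
            unsigned long long vvar   = 0x8000001a9000ULL; /* [vvar] start from the excerpt */

            printf("vvar start is %s the 47-bit border\n",
                   vvar >= border ? "above" : "below");
            return 0;
    }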

So far the problem has only been triggered with ASLR disabled, although it
may also occur with ASLR enabled if the layout is randomized in just the
right way.

The problem happens due to the custom placement of these VMAs in the VDSO
code: vdso_addr() tries to place them above the stack and checks the
result against TASK_SIZE_MAX, which is wrong. TASK_SIZE_MAX is the
56-bit border on 5-level paging machines. Use DEFAULT_MAP_WINDOW
instead.
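
To illustrate why the clamp limit matters, here is a minimal userspace model of the
rounding-and-clamping step described above. The DEFAULT_MAP_WINDOW and 5-level
TASK_SIZE_MAX values, the PMD-sized rounding, and the stack-top address are
assumptions made for the sake of the example; it mirrors the logic, not the literal
vdso_addr() code:

    #include <stdio.h>

    #define PAGE_SIZE 0x1000ULL
    #define PMD_SIZE  0x200000ULL

    /* Assumed x86-64 values: 47-bit map window and 56-bit (LA57) TASK_SIZE_MAX. */
    #define MAP_WINDOW_47      ((1ULL << 47) - PAGE_SIZE)
    #define TASK_SIZE_MAX_LA57 ((1ULL << 56) - PAGE_SIZE)

    /* Round the lowest possible end address up to a PMD boundary, then clamp
     * it against the given limit, in the spirit of vdso_addr(). */
    static unsigned long long place_end(unsigned long long stack_top,
                                        unsigned long long len,
                                        unsigned long long limit)
    {
            unsigned long long end = (stack_top + len + PMD_SIZE - 1) & ~(PMD_SIZE - 1);

            if (end >= limit)
                    end = limit;
            return end;
    }

    int main(void)
    {
            /* Made-up stack top sitting just below the 47-bit border. */
            unsigned long long stack_top = 0x7ffffffde000ULL;
            unsigned long long len = 4 * PAGE_SIZE;

            printf("47-bit border:               %#llx\n", 1ULL << 47);
            printf("end clamped by 56-bit limit: %#llx\n",
                   place_end(stack_top, len, TASK_SIZE_MAX_LA57));
            printf("end clamped by 47-bit limit: %#llx\n",
                   place_end(stack_top, len, MAP_WINDOW_47));
            return 0;
    }

With the 56-bit limit, the candidate end address rounds up past the 47-bit border and
is never pulled back; with the 47-bit window it is clamped below the border, which is
the behavioural difference the fix is after.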

Fixes: b569bab78d ("x86/mm: Prepare to expose larger address space to userspace")
Reported-by: Yingcong Wu <yingcong.wu@intel.com>
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/all/20230803151609.22141-1-kirill.shutemov%40linux.intel.com
2023-08-09 13:38:48 -07:00
syscalls cachestat: implement cachestat syscall 2023-06-09 16:25:16 -07:00
vdso x86/mm: Fix VDSO and VVAR placement on 5-level paging machines 2023-08-09 13:38:48 -07:00
vsyscall x86: Allow atomic MM_CONTEXT flags setting 2023-03-16 13:08:39 -07:00
calling.h x86/retbleed: Add fine grained Kconfig knobs 2022-06-29 17:43:41 +02:00
common.c X86 entry code related updates: 2021-06-29 12:44:51 -07:00
entry.S x86/bugs: Add retbleed=ibpb 2022-06-27 10:34:00 +02:00
entry_32.S x86: Rewrite ret_from_fork() in C 2023-07-10 09:52:25 +02:00
entry_64.S x86: Fix kthread unwind 2023-07-20 23:03:50 +02:00
entry_64_compat.S - Add the call depth tracking mitigation for Retbleed which has 2022-12-14 15:03:00 -08:00
Makefile x86/entry: Build thunk_$(BITS) only if CONFIG_PREEMPTION=y 2022-08-04 12:23:50 +02:00
syscall_32.c x86/syscalls: Stop filling syscall arrays with *_sys_ni_syscall 2021-05-20 15:03:59 +02:00
syscall_64.c x86/syscalls: Stop filling syscall arrays with *_sys_ni_syscall 2021-05-20 15:03:59 +02:00
syscall_x32.c x86/syscalls: Stop filling syscall arrays with *_sys_ni_syscall 2021-05-20 15:03:59 +02:00
thunk_32.S x86/entry: Build thunk_$(BITS) only if CONFIG_PREEMPTION=y 2022-08-04 12:23:50 +02:00
thunk_64.S x86/entry: Move thunk restore code into thunk functions 2023-06-07 09:54:45 -07:00