linux-stable/arch/arm64/lib
Robin Murphy 295cf15623 arm64: Avoid premature usercopy failure
Al reminds us that the usercopy API must only return complete failure
if absolutely nothing could be copied. Currently, if userspace does
something silly like giving us an unaligned pointer to Device memory,
or a size which overruns MTE tag bounds, we may fail to honour that
requirement when faulting on a multi-byte access even though a smaller
access could have succeeded.

Add a mitigation to the fixup routines to fall back to a single-byte
copy if we faulted on a larger access before anything has been written
to the destination, to guarantee making *some* forward progress. We
needn't be too concerned about the overall performance since this should
only occur when callers are doing something a bit dodgy in the first
place. Particularly broken userspace might still be able to trick
generic_perform_write() into an infinite loop by targeting write() at
an mmap() of some read-only device register where the fault-in load
succeeds but any store synchronously aborts such that copy_to_user() is
genuinely unable to make progress, but, well, don't do that...
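The fallback described above can be sketched in plain C. This is a simplified model, not the actual arm64 assembly fixup: the hypothetical `store_bytes()` helper and the `fault_at` bound stand in for the MMU/MTE fault behaviour that the real code handles via exception fixup entries, and `copy_with_fallback()` mimics the usercopy convention of returning the number of bytes *not* copied.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical fault model: any access that touches offset 'fault_at'
 * or beyond faults as a whole, with nothing written (think an MTE tag
 * bounds overrun or an unaligned Device-memory access). */
static size_t fault_at;

static int store_bytes(uint8_t *dst, const uint8_t *src, size_t off, size_t n)
{
	if (off + n > fault_at)
		return -1;	/* the whole access faults, nothing is written */
	memcpy(dst + off, src + off, n);
	return 0;
}

/* Returns the number of bytes NOT copied, like copy_to_user(). */
static size_t copy_with_fallback(uint8_t *dst, const uint8_t *src, size_t len)
{
	size_t off = 0;

	/* Fast path: wide (8-byte) accesses. */
	while (len - off >= 8 && store_bytes(dst, src, off, 8) == 0)
		off += 8;

	/*
	 * Fixup path: a wide access faulted (or a short tail remains).
	 * Retry with single bytes so that a fault on a multi-byte access
	 * cannot hide bytes that were individually copyable - we only
	 * report total failure if not even one byte could be written.
	 */
	while (off < len && store_bytes(dst, src, off, 1) == 0)
		off += 1;

	return len - off;
}
```

With `fault_at = 13` and a 16-byte copy, the wide loop stops after 8 bytes (the second 8-byte store would overrun the bound), but the byte loop still advances to offset 13, so the caller sees 13 bytes of progress rather than a premature stall at 8.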

Cc: stable@vger.kernel.org
Reported-by: Chen Huang <chenhuang5@huawei.com>
Suggested-by: Al Viro <viro@zeniv.linux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/dc03d5c675731a1f24a62417dba5429ad744234e.1626098433.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2021-07-15 17:29:14 +01:00
clear_page.S arm64: lib: Annotate {clear, copy}_page() as position-independent 2021-03-19 12:01:19 +00:00
clear_user.S arm64: Rewrite __arch_clear_user() 2021-06-01 18:34:38 +01:00
copy_from_user.S arm64: Avoid premature usercopy failure 2021-07-15 17:29:14 +01:00
copy_in_user.S arm64: Avoid premature usercopy failure 2021-07-15 17:29:14 +01:00
copy_page.S arm64: lib: Annotate {clear, copy}_page() as position-independent 2021-03-19 12:01:19 +00:00
copy_template.S treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 234 2019-06-19 17:09:07 +02:00
copy_to_user.S arm64: Avoid premature usercopy failure 2021-07-15 17:29:14 +01:00
crc32.S arm64: lib: Consistently enable crc32 extension 2020-04-28 14:36:32 +01:00
csum.c arm64: csum: Disable KASAN for do_csum() 2020-04-15 21:36:41 +01:00
delay.c treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 234 2019-06-19 17:09:07 +02:00
error-inject.c arm64: Add support for function error injection 2019-08-07 13:53:09 +01:00
insn.c arm64: insn: Add SVE instruction class 2021-05-27 17:38:30 +01:00
kasan_sw_tags.S kasan: arm64: support specialized outlined tag mismatch checks 2021-05-26 23:31:26 +01:00
Makefile Merge branch 'for-next/kasan' into for-next/core 2021-06-24 14:04:00 +01:00
memchr.S arm64: Better optimised memchr() 2021-06-01 18:34:38 +01:00
memcmp.S arm64: update string routine copyrights and URLs 2021-06-02 17:58:26 +01:00
memcpy.S arm64: update string routine copyrights and URLs 2021-06-02 17:58:26 +01:00
memset.S arm64: Change .weak to SYM_FUNC_START_WEAK_PI for arch/arm64/lib/mem*.S 2020-10-30 08:32:31 +00:00
mte.S arm64: mte: handle tags zeroing at page allocation time 2021-06-04 19:32:21 +01:00
strchr.S arm64: lib: Use modern annotations for assembly functions 2020-01-08 12:23:02 +00:00
strcmp.S arm64: update string routine copyrights and URLs 2021-06-02 17:58:26 +01:00
strlen.S arm64: fix strlen() with CONFIG_KASAN_HW_TAGS 2021-07-12 13:36:22 +01:00
strncmp.S arm64: update string routine copyrights and URLs 2021-06-02 17:58:26 +01:00
strnlen.S arm64: lib: Use modern annotations for assembly functions 2020-01-08 12:23:02 +00:00
strrchr.S arm64: lib: Use modern annotations for assembly functions 2020-01-08 12:23:02 +00:00
tishift.S arm64: lib: Use modern annotations for assembly functions 2020-01-08 12:23:02 +00:00
uaccess_flushcache.c arm64: Rename arm64-internal cache maintenance functions 2021-05-25 19:27:49 +01:00
xor-neon.c treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500 2019-06-19 17:09:55 +02:00