From 01a7eb3e20994701700631ec30462087c4ecf142 Mon Sep 17 00:00:00 2001
From: "Matthew Wilcox (Oracle)"
Date: Fri, 18 Aug 2023 21:06:29 +0100
Subject: [PATCH] mm: fix clean_record_shared_mapping_range kernel-doc

Turn the a), b) into an unordered ReST list and remove the unnecessary
'Note:' prefix.

Link: https://lkml.kernel.org/r/20230818200630.2719595-4-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Randy Dunlap
Acked-by: Mike Rapoport (IBM)
Signed-off-by: Andrew Morton
---
 mm/mapping_dirty_helpers.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/mm/mapping_dirty_helpers.c b/mm/mapping_dirty_helpers.c
index a26dd8bcfcdb..2f8829b3541a 100644
--- a/mm/mapping_dirty_helpers.c
+++ b/mm/mapping_dirty_helpers.c
@@ -288,13 +288,14 @@ EXPORT_SYMBOL_GPL(wp_shared_mapping_range);
  * @end: Pointer to the number of the last set bit in @bitmap.
  * none set. The value is modified as new bits are set by the function.
  *
- * Note: When this function returns there is no guarantee that a CPU has
+ * When this function returns there is no guarantee that a CPU has
  * not already dirtied new ptes. However it will not clean any ptes not
  * reported in the bitmap. The guarantees are as follows:
- * a) All ptes dirty when the function starts executing will end up recorded
- *    in the bitmap.
- * b) All ptes dirtied after that will either remain dirty, be recorded in the
- *    bitmap or both.
+ *
+ * * All ptes dirty when the function starts executing will end up recorded
+ *   in the bitmap.
+ * * All ptes dirtied after that will either remain dirty, be recorded in the
+ *   bitmap or both.
  *
  * If a caller needs to make sure all dirty ptes are picked up and none
  * additional are added, it first needs to write-protect the address-space
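
A minimal caller sketch of the usage the kernel-doc above describes: write-protect
the range with wp_shared_mapping_range() first, then record the dirty ptes with
clean_record_shared_mapping_range(). Only those two exported helpers are taken from
mapping_dirty_helpers.c; the wrapper name example_collect_dirty() and the start/end
initialization convention are illustrative assumptions, not part of this patch.

#include <linux/fs.h>
#include <linux/mm.h>

/*
 * Illustrative wrapper (hypothetical name): collect the dirty pages of a
 * shared mapping range into @bitmap, whose bit 0 corresponds to
 * @first_index.
 */
static unsigned long example_collect_dirty(struct address_space *mapping,
					   pgoff_t first_index, pgoff_t nr,
					   unsigned long *bitmap)
{
	/*
	 * Span of set bits in @bitmap; initialized here to an "empty"
	 * range (assumed convention: start > end means no bits set yet).
	 */
	pgoff_t start = nr;
	pgoff_t end = 0;

	/*
	 * Write-protect the range first, as the comment above requires
	 * when the caller must not miss any dirtying.
	 */
	wp_shared_mapping_range(mapping, first_index, nr);

	/*
	 * Now pick up the dirty bits. Per the guarantees in the comment:
	 * every pte dirty at this point ends up recorded in @bitmap, and
	 * anything dirtied concurrently either stays dirty, is recorded,
	 * or both.
	 */
	return clean_record_shared_mapping_range(mapping, first_index, nr,
						 first_index, bitmap,
						 &start, &end);
}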