Merge branch 'pm-sleep'

Merge system-wide power management updates for 6.8-rc1:

 - Fix possible deadlocks in the core system-wide PM code that occur if
   device-handling functions cannot be executed asynchronously during
   resume from system-wide suspend (Rafael J. Wysocki).

 - Clean up unnecessary local variable initializations in multiple
   places in the hibernation code (Wang chaodong, Li zeming).

 - Adjust core hibernation code to avoid missing wakeup events that
   occur after saving an image to persistent storage (Chris Feng).

 - Update hibernation code to enforce correct ordering during image
   compression and decompression (Hongchen Zhang).

 - Use kmap_local_page() instead of kmap_atomic() in copy_data_page()
   during hibernation and restore (Chen Haonan).

 - Adjust documentation and code comments to reflect recent task freezer
   changes (Kevin Hao).

 - Repair excess function parameter description warning in the
   hibernation image-saving code (Randy Dunlap).

* pm-sleep:
  PM: sleep: Fix possible deadlocks in core system-wide PM code
  async: Introduce async_schedule_dev_nocall()
  async: Split async_schedule_node_domain()
  PM: hibernate: Repair excess function parameter description warning
  PM: sleep: Remove obsolete comment from unlock_system_sleep()
  Documentation: PM: Adjust freezing-of-tasks.rst to the freezer changes
  PM: hibernate: Use kmap_local_page() in copy_data_page()
  PM: hibernate: Enforce ordering during image compression/decompression
  PM: hibernate: Avoid missing wakeup events during hibernation
  PM: hibernate: Do not initialize error in snapshot_write_next()
  PM: hibernate: Do not initialize error in swap_write_page()
  PM: hibernate: Drop unnecessary local variable initialization
Rafael J. Wysocki 2024-01-08 13:42:48 +01:00
commit f1e5e46397
9 changed files with 217 additions and 184 deletions

File: Documentation/power/freezing-of-tasks.rst

@ -14,27 +14,28 @@ architectures).
II. How does it work?
=====================
There are three per-task flags used for that, PF_NOFREEZE, PF_FROZEN
and PF_FREEZER_SKIP (the last one is auxiliary). The tasks that have
PF_NOFREEZE unset (all user space processes and some kernel threads) are
regarded as 'freezable' and treated in a special way before the system enters a
suspend state as well as before a hibernation image is created (in what follows
we only consider hibernation, but the description also applies to suspend).
There is one per-task flag (PF_NOFREEZE) and three per-task states
(TASK_FROZEN, TASK_FREEZABLE and __TASK_FREEZABLE_UNSAFE) used for that.
The tasks that have PF_NOFREEZE unset (all user space tasks and some kernel
threads) are regarded as 'freezable' and treated in a special way before the
system enters a sleep state as well as before a hibernation image is created
(hibernation is directly covered by what follows, but the description applies
to system-wide suspend too).
Namely, as the first step of the hibernation procedure the function
freeze_processes() (defined in kernel/power/process.c) is called. A system-wide
variable system_freezing_cnt (as opposed to a per-task flag) is used to indicate
whether the system is to undergo a freezing operation. And freeze_processes()
sets this variable. After this, it executes try_to_freeze_tasks() that sends a
fake signal to all user space processes, and wakes up all the kernel threads.
All freezable tasks must react to that by calling try_to_freeze(), which
results in a call to __refrigerator() (defined in kernel/freezer.c), which sets
the task's PF_FROZEN flag, changes its state to TASK_UNINTERRUPTIBLE and makes
it loop until PF_FROZEN is cleared for it. Then, we say that the task is
'frozen' and therefore the set of functions handling this mechanism is referred
to as 'the freezer' (these functions are defined in kernel/power/process.c,
kernel/freezer.c & include/linux/freezer.h). User space processes are generally
frozen before kernel threads.
static key freezer_active (as opposed to a per-task flag or state) is used to
indicate whether the system is to undergo a freezing operation. And
freeze_processes() sets this static key. After this, it executes
try_to_freeze_tasks() that sends a fake signal to all user space processes, and
wakes up all the kernel threads. All freezable tasks must react to that by
calling try_to_freeze(), which results in a call to __refrigerator() (defined
in kernel/freezer.c), which changes the task's state to TASK_FROZEN, and makes
it loop until it is woken by an explicit TASK_FROZEN wakeup. Then, that task
is regarded as 'frozen' and so the set of functions handling this mechanism is
referred to as 'the freezer' (these functions are defined in
kernel/power/process.c, kernel/freezer.c & include/linux/freezer.h). User space
tasks are generally frozen before kernel threads.
__refrigerator() must not be called directly. Instead, use the
try_to_freeze() function (defined in include/linux/freezer.h), that checks
@ -43,31 +44,40 @@ if the task is to be frozen and makes the task enter __refrigerator().
For user space processes try_to_freeze() is called automatically from the
signal-handling code, but the freezable kernel threads need to call it
explicitly in suitable places or use the wait_event_freezable() or
wait_event_freezable_timeout() macros (defined in include/linux/freezer.h)
that combine interruptible sleep with checking if the task is to be frozen and
calling try_to_freeze(). The main loop of a freezable kernel thread may look
wait_event_freezable_timeout() macros (defined in include/linux/wait.h)
that put the task to sleep (TASK_INTERRUPTIBLE) or freeze it (TASK_FROZEN) if
freezer_active is set. The main loop of a freezable kernel thread may look
like the following one::
set_freezable();
do {
hub_events();
wait_event_freezable(khubd_wait,
!list_empty(&hub_event_list) ||
kthread_should_stop());
} while (!kthread_should_stop() || !list_empty(&hub_event_list));
(from drivers/usb/core/hub.c::hub_thread()).
while (true) {
struct task_struct *tsk = NULL;
If a freezable kernel thread fails to call try_to_freeze() after the freezer has
initiated a freezing operation, the freezing of tasks will fail and the entire
hibernation operation will be cancelled. For this reason, freezable kernel
threads must call try_to_freeze() somewhere or use one of the
wait_event_freezable(oom_reaper_wait, oom_reaper_list != NULL);
spin_lock_irq(&oom_reaper_lock);
if (oom_reaper_list != NULL) {
tsk = oom_reaper_list;
oom_reaper_list = tsk->oom_reaper_list;
}
spin_unlock_irq(&oom_reaper_lock);
if (tsk)
oom_reap_task(tsk);
}
(from mm/oom_kill.c::oom_reaper()).
If a freezable kernel thread is not put to the frozen state after the freezer
has initiated a freezing operation, the freezing of tasks will fail and the
entire system-wide transition will be cancelled. For this reason, freezable
kernel threads must call try_to_freeze() somewhere or use one of the
wait_event_freezable() and wait_event_freezable_timeout() macros.
After the system memory state has been restored from a hibernation image and
devices have been reinitialized, the function thaw_processes() is called in
order to clear the PF_FROZEN flag for each frozen task. Then, the tasks that
have been frozen leave __refrigerator() and continue running.
order to wake up each frozen task. Then, the tasks that have been frozen leave
__refrigerator() and continue running.
Rationale behind the functions dealing with freezing and thawing of tasks
@ -96,7 +106,8 @@ III. Which kernel threads are freezable?
Kernel threads are not freezable by default. However, a kernel thread may clear
PF_NOFREEZE for itself by calling set_freezable() (the resetting of PF_NOFREEZE
directly is not allowed). From this point it is regarded as freezable
and must call try_to_freeze() in a suitable place.
and must call try_to_freeze() or variants of wait_event_freezable() in a
suitable place.
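
As an editorial aside (not part of the patch), the pattern the updated document
describes can be condensed into a minimal freezable kernel thread; the names
example_thread, example_wait and example_list are invented for illustration::

    #include <linux/freezer.h>
    #include <linux/kthread.h>
    #include <linux/list.h>
    #include <linux/wait.h>

    static DECLARE_WAIT_QUEUE_HEAD(example_wait);
    static LIST_HEAD(example_list);

    /* Minimal freezable kernel thread following the documented pattern. */
    static int example_thread(void *unused)
    {
            set_freezable();        /* clear PF_NOFREEZE: opt in to freezing */

            while (!kthread_should_stop()) {
                    /*
                     * Sleep interruptibly; once freezer_active is set, the
                     * task is frozen here instead of having to call
                     * try_to_freeze() explicitly.
                     */
                    wait_event_freezable(example_wait,
                                         !list_empty(&example_list) ||
                                         kthread_should_stop());

                    /* ... drain example_list and handle the queued work ... */
            }

            return 0;
    }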
IV. Why do we do that?
======================

File: drivers/base/power/main.c

@ -579,7 +579,7 @@ bool dev_pm_skip_resume(struct device *dev)
}
/**
* device_resume_noirq - Execute a "noirq resume" callback for given device.
* __device_resume_noirq - Execute a "noirq resume" callback for given device.
* @dev: Device to handle.
* @state: PM transition of the system being carried out.
* @async: If true, the device is being resumed asynchronously.
@ -587,7 +587,7 @@ bool dev_pm_skip_resume(struct device *dev)
* The driver of @dev will not receive interrupts while this function is being
* executed.
*/
static int device_resume_noirq(struct device *dev, pm_message_t state, bool async)
static void __device_resume_noirq(struct device *dev, pm_message_t state, bool async)
{
pm_callback_t callback = NULL;
const char *info = NULL;
@ -655,7 +655,13 @@ static int device_resume_noirq(struct device *dev, pm_message_t state, bool asyn
Out:
complete_all(&dev->power.completion);
TRACE_RESUME(error);
return error;
if (error) {
suspend_stats.failed_resume_noirq++;
dpm_save_failed_step(SUSPEND_RESUME_NOIRQ);
dpm_save_failed_dev(dev_name(dev));
pm_dev_err(dev, state, async ? " async noirq" : " noirq", error);
}
}
static bool is_async(struct device *dev)
@ -668,11 +674,15 @@ static bool dpm_async_fn(struct device *dev, async_func_t func)
{
reinit_completion(&dev->power.completion);
if (is_async(dev)) {
get_device(dev);
async_schedule_dev(func, dev);
if (!is_async(dev))
return false;
get_device(dev);
if (async_schedule_dev_nocall(func, dev))
return true;
}
put_device(dev);
return false;
}
@ -680,15 +690,19 @@ static bool dpm_async_fn(struct device *dev, async_func_t func)
static void async_resume_noirq(void *data, async_cookie_t cookie)
{
struct device *dev = data;
int error;
error = device_resume_noirq(dev, pm_transition, true);
if (error)
pm_dev_err(dev, pm_transition, " async", error);
__device_resume_noirq(dev, pm_transition, true);
put_device(dev);
}
static void device_resume_noirq(struct device *dev)
{
if (dpm_async_fn(dev, async_resume_noirq))
return;
__device_resume_noirq(dev, pm_transition, false);
}
static void dpm_noirq_resume_devices(pm_message_t state)
{
struct device *dev;
@ -698,14 +712,6 @@ static void dpm_noirq_resume_devices(pm_message_t state)
mutex_lock(&dpm_list_mtx);
pm_transition = state;
/*
* Advanced the async threads upfront,
* in case the starting of async threads is
* delayed by non-async resuming devices.
*/
list_for_each_entry(dev, &dpm_noirq_list, power.entry)
dpm_async_fn(dev, async_resume_noirq);
while (!list_empty(&dpm_noirq_list)) {
dev = to_device(dpm_noirq_list.next);
get_device(dev);
@ -713,17 +719,7 @@ static void dpm_noirq_resume_devices(pm_message_t state)
mutex_unlock(&dpm_list_mtx);
if (!is_async(dev)) {
int error;
error = device_resume_noirq(dev, state, false);
if (error) {
suspend_stats.failed_resume_noirq++;
dpm_save_failed_step(SUSPEND_RESUME_NOIRQ);
dpm_save_failed_dev(dev_name(dev));
pm_dev_err(dev, state, " noirq", error);
}
}
device_resume_noirq(dev);
put_device(dev);
@ -751,14 +747,14 @@ void dpm_resume_noirq(pm_message_t state)
}
/**
* device_resume_early - Execute an "early resume" callback for given device.
* __device_resume_early - Execute an "early resume" callback for given device.
* @dev: Device to handle.
* @state: PM transition of the system being carried out.
* @async: If true, the device is being resumed asynchronously.
*
* Runtime PM is disabled for @dev while this function is being executed.
*/
static int device_resume_early(struct device *dev, pm_message_t state, bool async)
static void __device_resume_early(struct device *dev, pm_message_t state, bool async)
{
pm_callback_t callback = NULL;
const char *info = NULL;
@ -811,21 +807,31 @@ static int device_resume_early(struct device *dev, pm_message_t state, bool asyn
pm_runtime_enable(dev);
complete_all(&dev->power.completion);
return error;
if (error) {
suspend_stats.failed_resume_early++;
dpm_save_failed_step(SUSPEND_RESUME_EARLY);
dpm_save_failed_dev(dev_name(dev));
pm_dev_err(dev, state, async ? " async early" : " early", error);
}
}
static void async_resume_early(void *data, async_cookie_t cookie)
{
struct device *dev = data;
int error;
error = device_resume_early(dev, pm_transition, true);
if (error)
pm_dev_err(dev, pm_transition, " async", error);
__device_resume_early(dev, pm_transition, true);
put_device(dev);
}
static void device_resume_early(struct device *dev)
{
if (dpm_async_fn(dev, async_resume_early))
return;
__device_resume_early(dev, pm_transition, false);
}
/**
* dpm_resume_early - Execute "early resume" callbacks for all devices.
* @state: PM transition of the system being carried out.
@ -839,14 +845,6 @@ void dpm_resume_early(pm_message_t state)
mutex_lock(&dpm_list_mtx);
pm_transition = state;
/*
* Advanced the async threads upfront,
* in case the starting of async threads is
* delayed by non-async resuming devices.
*/
list_for_each_entry(dev, &dpm_late_early_list, power.entry)
dpm_async_fn(dev, async_resume_early);
while (!list_empty(&dpm_late_early_list)) {
dev = to_device(dpm_late_early_list.next);
get_device(dev);
@ -854,17 +852,7 @@ void dpm_resume_early(pm_message_t state)
mutex_unlock(&dpm_list_mtx);
if (!is_async(dev)) {
int error;
error = device_resume_early(dev, state, false);
if (error) {
suspend_stats.failed_resume_early++;
dpm_save_failed_step(SUSPEND_RESUME_EARLY);
dpm_save_failed_dev(dev_name(dev));
pm_dev_err(dev, state, " early", error);
}
}
device_resume_early(dev);
put_device(dev);
@ -888,12 +876,12 @@ void dpm_resume_start(pm_message_t state)
EXPORT_SYMBOL_GPL(dpm_resume_start);
/**
* device_resume - Execute "resume" callbacks for given device.
* __device_resume - Execute "resume" callbacks for given device.
* @dev: Device to handle.
* @state: PM transition of the system being carried out.
* @async: If true, the device is being resumed asynchronously.
*/
static int device_resume(struct device *dev, pm_message_t state, bool async)
static void __device_resume(struct device *dev, pm_message_t state, bool async)
{
pm_callback_t callback = NULL;
const char *info = NULL;
@ -975,20 +963,30 @@ static int device_resume(struct device *dev, pm_message_t state, bool async)
TRACE_RESUME(error);
return error;
if (error) {
suspend_stats.failed_resume++;
dpm_save_failed_step(SUSPEND_RESUME);
dpm_save_failed_dev(dev_name(dev));
pm_dev_err(dev, state, async ? " async" : "", error);
}
}
static void async_resume(void *data, async_cookie_t cookie)
{
struct device *dev = data;
int error;
error = device_resume(dev, pm_transition, true);
if (error)
pm_dev_err(dev, pm_transition, " async", error);
__device_resume(dev, pm_transition, true);
put_device(dev);
}
static void device_resume(struct device *dev)
{
if (dpm_async_fn(dev, async_resume))
return;
__device_resume(dev, pm_transition, false);
}
/**
* dpm_resume - Execute "resume" callbacks for non-sysdev devices.
* @state: PM transition of the system being carried out.
@ -1008,27 +1006,17 @@ void dpm_resume(pm_message_t state)
pm_transition = state;
async_error = 0;
list_for_each_entry(dev, &dpm_suspended_list, power.entry)
dpm_async_fn(dev, async_resume);
while (!list_empty(&dpm_suspended_list)) {
dev = to_device(dpm_suspended_list.next);
get_device(dev);
if (!is_async(dev)) {
int error;
mutex_unlock(&dpm_list_mtx);
mutex_unlock(&dpm_list_mtx);
error = device_resume(dev, state, false);
if (error) {
suspend_stats.failed_resume++;
dpm_save_failed_step(SUSPEND_RESUME);
dpm_save_failed_dev(dev_name(dev));
pm_dev_err(dev, state, "", error);
}
device_resume(dev);
mutex_lock(&dpm_list_mtx);
mutex_lock(&dpm_list_mtx);
}
if (!list_empty(&dev->power.entry))
list_move_tail(&dev->power.entry, &dpm_prepared_list);
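
Taken together, the hunks above give each resume phase a void __device_resume*()
worker plus a small wrapper that either queues the worker through dpm_async_fn()
or runs it synchronously in list order, with the failure statistics now recorded
inside the worker itself. A reduced, editor-only sketch of that dispatch shape,
reusing the file's own pm_transition and dpm_async_fn() and with example_* as
stand-in names rather than the kernel's actual symbols:

    /* Editor's sketch only; not the real __device_resume*() helpers. */
    static void __example_resume(struct device *dev, pm_message_t state, bool async)
    {
            /* run the PM callback and record any failure internally */
    }

    static void async_example_resume(void *data, async_cookie_t cookie)
    {
            struct device *dev = data;

            __example_resume(dev, pm_transition, true);
            put_device(dev);        /* reference taken by dpm_async_fn() */
    }

    static void example_resume(struct device *dev)
    {
            if (dpm_async_fn(dev, async_example_resume))
                    return;         /* queued for asynchronous execution */

            __example_resume(dev, pm_transition, false);
    }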

File: include/linux/async.h

@ -90,6 +90,8 @@ async_schedule_dev(async_func_t func, struct device *dev)
return async_schedule_node(func, dev, dev_to_node(dev));
}
bool async_schedule_dev_nocall(async_func_t func, struct device *dev);
/**
* async_schedule_dev_domain - A device specific version of async_schedule_domain
* @func: function to execute asynchronously

File: kernel/async.c

@ -145,6 +145,39 @@ static void async_run_entry_fn(struct work_struct *work)
wake_up(&async_done);
}
static async_cookie_t __async_schedule_node_domain(async_func_t func,
void *data, int node,
struct async_domain *domain,
struct async_entry *entry)
{
async_cookie_t newcookie;
unsigned long flags;
INIT_LIST_HEAD(&entry->domain_list);
INIT_LIST_HEAD(&entry->global_list);
INIT_WORK(&entry->work, async_run_entry_fn);
entry->func = func;
entry->data = data;
entry->domain = domain;
spin_lock_irqsave(&async_lock, flags);
/* allocate cookie and queue */
newcookie = entry->cookie = next_cookie++;
list_add_tail(&entry->domain_list, &domain->pending);
if (domain->registered)
list_add_tail(&entry->global_list, &async_global_pending);
atomic_inc(&entry_count);
spin_unlock_irqrestore(&async_lock, flags);
/* schedule for execution */
queue_work_node(node, system_unbound_wq, &entry->work);
return newcookie;
}
/**
* async_schedule_node_domain - NUMA specific version of async_schedule_domain
* @func: function to execute asynchronously
@ -186,29 +219,8 @@ async_cookie_t async_schedule_node_domain(async_func_t func, void *data,
func(data, newcookie);
return newcookie;
}
INIT_LIST_HEAD(&entry->domain_list);
INIT_LIST_HEAD(&entry->global_list);
INIT_WORK(&entry->work, async_run_entry_fn);
entry->func = func;
entry->data = data;
entry->domain = domain;
spin_lock_irqsave(&async_lock, flags);
/* allocate cookie and queue */
newcookie = entry->cookie = next_cookie++;
list_add_tail(&entry->domain_list, &domain->pending);
if (domain->registered)
list_add_tail(&entry->global_list, &async_global_pending);
atomic_inc(&entry_count);
spin_unlock_irqrestore(&async_lock, flags);
/* schedule for execution */
queue_work_node(node, system_unbound_wq, &entry->work);
return newcookie;
return __async_schedule_node_domain(func, data, node, domain, entry);
}
EXPORT_SYMBOL_GPL(async_schedule_node_domain);
@ -231,6 +243,35 @@ async_cookie_t async_schedule_node(async_func_t func, void *data, int node)
}
EXPORT_SYMBOL_GPL(async_schedule_node);
/**
* async_schedule_dev_nocall - A simplified variant of async_schedule_dev()
* @func: function to execute asynchronously
* @dev: device argument to be passed to function
*
* @dev is used as both the argument for the function and to provide NUMA
* context for where to run the function.
*
* If the asynchronous execution of @func is scheduled successfully, return
* true. Otherwise, do nothing and return false, unlike async_schedule_dev()
* that will run the function synchronously then.
*/
bool async_schedule_dev_nocall(async_func_t func, struct device *dev)
{
struct async_entry *entry;
entry = kzalloc(sizeof(struct async_entry), GFP_KERNEL);
/* Give up if there is no memory or too much work. */
if (!entry || atomic_read(&entry_count) > MAX_WORK) {
kfree(entry);
return false;
}
__async_schedule_node_domain(func, dev, dev_to_node(dev),
&async_dfl_domain, entry);
return true;
}
/**
* async_synchronize_full - synchronize all asynchronous function calls
*
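
The kernel-doc above makes the contract explicit: async_schedule_dev_nocall()
returns true only when the callback was actually queued and, unlike
async_schedule_dev(), never runs it synchronously on failure, so the caller
keeps control of the fallback path. A caller-side sketch with invented names
(example_do_work, example_async_cb, example_kick), intended as illustration
rather than kernel code:

    #include <linux/async.h>
    #include <linux/device.h>

    static void example_do_work(struct device *dev)
    {
            /* ... the actual device handling ... */
    }

    static void example_async_cb(void *data, async_cookie_t cookie)
    {
            struct device *dev = data;

            example_do_work(dev);
            put_device(dev);        /* drop the reference taken before scheduling */
    }

    static void example_kick(struct device *dev)
    {
            get_device(dev);        /* keep the device alive for the async callback */

            if (async_schedule_dev_nocall(example_async_cb, dev))
                    return;         /* queued; the callback drops the reference */

            put_device(dev);        /* not queued: drop the reference here ... */
            example_do_work(dev);   /* ... and fall back to synchronous handling */
    }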

File: kernel/power/hibernate.c

@ -642,9 +642,9 @@ int hibernation_platform_enter(void)
*/
static void power_down(void)
{
#ifdef CONFIG_SUSPEND
int error;
#ifdef CONFIG_SUSPEND
if (hibernation_mode == HIBERNATION_SUSPEND) {
error = suspend_devices_and_enter(mem_sleep_current);
if (error) {
@ -667,7 +667,13 @@ static void power_down(void)
kernel_restart(NULL);
break;
case HIBERNATION_PLATFORM:
hibernation_platform_enter();
error = hibernation_platform_enter();
if (error == -EAGAIN || error == -EBUSY) {
swsusp_unmark();
events_check_enabled = false;
pr_info("Wakeup event detected during hibernation, rolling back.\n");
return;
}
fallthrough;
case HIBERNATION_SHUTDOWN:
if (kernel_can_power_off())

File: kernel/power/main.c

@ -60,22 +60,6 @@ EXPORT_SYMBOL_GPL(lock_system_sleep);
void unlock_system_sleep(unsigned int flags)
{
/*
* Don't use freezer_count() because we don't want the call to
* try_to_freeze() here.
*
* Reason:
* Fundamentally, we just don't need it, because freezing condition
* doesn't come into effect until we release the
* system_transition_mutex lock, since the freezer always works with
* system_transition_mutex held.
*
* More importantly, in the case of hibernation,
* unlock_system_sleep() gets called in snapshot_read() and
* snapshot_write() when the freezing condition is still in effect.
* Which means, if we use try_to_freeze() here, it would make them
* enter the refrigerator, thus causing hibernation to lockup.
*/
if (!(flags & PF_NOFREEZE))
current->flags &= ~PF_NOFREEZE;
mutex_unlock(&system_transition_mutex);
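
For context, unlock_system_sleep() pairs with lock_system_sleep(), which sets
PF_NOFREEZE, takes system_transition_mutex and returns the caller's previous
flag state so it can be restored here. A minimal usage sketch with a
hypothetical caller:

    #include <linux/suspend.h>

    /* Hypothetical caller that must not race with a sleep transition. */
    static void example_update_config(void)
    {
            unsigned int flags;

            flags = lock_system_sleep();

            /* ... modify state that suspend/hibernation code also reads ... */

            unlock_system_sleep(flags);
    }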

File: kernel/power/power.h

@ -175,6 +175,8 @@ extern int swsusp_write(unsigned int flags);
void swsusp_close(void);
#ifdef CONFIG_SUSPEND
extern int swsusp_unmark(void);
#else
static inline int swsusp_unmark(void) { return 0; }
#endif
struct __kernel_old_timeval;

File: kernel/power/snapshot.c

@ -1119,7 +1119,7 @@ static void mark_nosave_pages(struct memory_bitmap *bm)
int create_basic_memory_bitmaps(void)
{
struct memory_bitmap *bm1, *bm2;
int error = 0;
int error;
if (forbidden_pages_map && free_pages_map)
return 0;
@ -1487,11 +1487,11 @@ static bool copy_data_page(unsigned long dst_pfn, unsigned long src_pfn)
s_page = pfn_to_page(src_pfn);
d_page = pfn_to_page(dst_pfn);
if (PageHighMem(s_page)) {
src = kmap_atomic(s_page);
dst = kmap_atomic(d_page);
src = kmap_local_page(s_page);
dst = kmap_local_page(d_page);
zeros_only = do_copy_page(dst, src);
kunmap_atomic(dst);
kunmap_atomic(src);
kunmap_local(dst);
kunmap_local(src);
} else {
if (PageHighMem(d_page)) {
/*
@ -1499,9 +1499,9 @@ static bool copy_data_page(unsigned long dst_pfn, unsigned long src_pfn)
* data modified by kmap_atomic()
*/
zeros_only = safe_copy_page(buffer, s_page);
dst = kmap_atomic(d_page);
dst = kmap_local_page(d_page);
copy_page(dst, buffer);
kunmap_atomic(dst);
kunmap_local(dst);
} else {
zeros_only = safe_copy_page(page_address(d_page), s_page);
}
@ -2778,7 +2778,7 @@ static void *get_buffer(struct memory_bitmap *bm, struct chain_allocator *ca)
int snapshot_write_next(struct snapshot_handle *handle)
{
static struct chain_allocator ca;
int error = 0;
int error;
next:
/* Check if we have already loaded the entire image */
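
The kmap_atomic() to kmap_local_page() conversion above is mechanical:
kmap_local_page() creates a thread-local mapping and, unlike kmap_atomic(),
does not disable page faults or preemption, and the copy path here does not
rely on those side effects. A reduced sketch of the highmem copy with an
invented helper name:

    #include <linux/highmem.h>
    #include <linux/mm.h>

    /* Editor's sketch: copy one page to another when both may be in highmem. */
    static void example_copy_page(struct page *dst_page, struct page *src_page)
    {
            void *src = kmap_local_page(src_page);
            void *dst = kmap_local_page(dst_page);

            copy_page(dst, src);

            /* unmap in reverse order of mapping, as in the hunk above */
            kunmap_local(dst);
            kunmap_local(src);
    }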

File: kernel/power/swap.c

@ -451,7 +451,7 @@ static int get_swap_writer(struct swap_map_handle *handle)
static int swap_write_page(struct swap_map_handle *handle, void *buf,
struct hib_bio_batch *hb)
{
int error = 0;
int error;
sector_t offset;
if (!handle->cur)
@ -606,11 +606,11 @@ static int crc32_threadfn(void *data)
unsigned i;
while (1) {
wait_event(d->go, atomic_read(&d->ready) ||
wait_event(d->go, atomic_read_acquire(&d->ready) ||
kthread_should_stop());
if (kthread_should_stop()) {
d->thr = NULL;
atomic_set(&d->stop, 1);
atomic_set_release(&d->stop, 1);
wake_up(&d->done);
break;
}
@ -619,7 +619,7 @@ static int crc32_threadfn(void *data)
for (i = 0; i < d->run_threads; i++)
*d->crc32 = crc32_le(*d->crc32,
d->unc[i], *d->unc_len[i]);
atomic_set(&d->stop, 1);
atomic_set_release(&d->stop, 1);
wake_up(&d->done);
}
return 0;
@ -649,12 +649,12 @@ static int lzo_compress_threadfn(void *data)
struct cmp_data *d = data;
while (1) {
wait_event(d->go, atomic_read(&d->ready) ||
wait_event(d->go, atomic_read_acquire(&d->ready) ||
kthread_should_stop());
if (kthread_should_stop()) {
d->thr = NULL;
d->ret = -1;
atomic_set(&d->stop, 1);
atomic_set_release(&d->stop, 1);
wake_up(&d->done);
break;
}
@ -663,7 +663,7 @@ static int lzo_compress_threadfn(void *data)
d->ret = lzo1x_1_compress(d->unc, d->unc_len,
d->cmp + LZO_HEADER, &d->cmp_len,
d->wrk);
atomic_set(&d->stop, 1);
atomic_set_release(&d->stop, 1);
wake_up(&d->done);
}
return 0;
@ -798,7 +798,7 @@ static int save_image_lzo(struct swap_map_handle *handle,
data[thr].unc_len = off;
atomic_set(&data[thr].ready, 1);
atomic_set_release(&data[thr].ready, 1);
wake_up(&data[thr].go);
}
@ -806,12 +806,12 @@ static int save_image_lzo(struct swap_map_handle *handle,
break;
crc->run_threads = thr;
atomic_set(&crc->ready, 1);
atomic_set_release(&crc->ready, 1);
wake_up(&crc->go);
for (run_threads = thr, thr = 0; thr < run_threads; thr++) {
wait_event(data[thr].done,
atomic_read(&data[thr].stop));
atomic_read_acquire(&data[thr].stop));
atomic_set(&data[thr].stop, 0);
ret = data[thr].ret;
@ -850,7 +850,7 @@ static int save_image_lzo(struct swap_map_handle *handle,
}
}
wait_event(crc->done, atomic_read(&crc->stop));
wait_event(crc->done, atomic_read_acquire(&crc->stop));
atomic_set(&crc->stop, 0);
}
@ -1132,12 +1132,12 @@ static int lzo_decompress_threadfn(void *data)
struct dec_data *d = data;
while (1) {
wait_event(d->go, atomic_read(&d->ready) ||
wait_event(d->go, atomic_read_acquire(&d->ready) ||
kthread_should_stop());
if (kthread_should_stop()) {
d->thr = NULL;
d->ret = -1;
atomic_set(&d->stop, 1);
atomic_set_release(&d->stop, 1);
wake_up(&d->done);
break;
}
@ -1150,7 +1150,7 @@ static int lzo_decompress_threadfn(void *data)
flush_icache_range((unsigned long)d->unc,
(unsigned long)d->unc + d->unc_len);
atomic_set(&d->stop, 1);
atomic_set_release(&d->stop, 1);
wake_up(&d->done);
}
return 0;
@ -1335,7 +1335,7 @@ static int load_image_lzo(struct swap_map_handle *handle,
}
if (crc->run_threads) {
wait_event(crc->done, atomic_read(&crc->stop));
wait_event(crc->done, atomic_read_acquire(&crc->stop));
atomic_set(&crc->stop, 0);
crc->run_threads = 0;
}
@ -1371,7 +1371,7 @@ static int load_image_lzo(struct swap_map_handle *handle,
pg = 0;
}
atomic_set(&data[thr].ready, 1);
atomic_set_release(&data[thr].ready, 1);
wake_up(&data[thr].go);
}
@ -1390,7 +1390,7 @@ static int load_image_lzo(struct swap_map_handle *handle,
for (run_threads = thr, thr = 0; thr < run_threads; thr++) {
wait_event(data[thr].done,
atomic_read(&data[thr].stop));
atomic_read_acquire(&data[thr].stop));
atomic_set(&data[thr].stop, 0);
ret = data[thr].ret;
@ -1421,7 +1421,7 @@ static int load_image_lzo(struct swap_map_handle *handle,
ret = snapshot_write_next(snapshot);
if (ret <= 0) {
crc->run_threads = thr + 1;
atomic_set(&crc->ready, 1);
atomic_set_release(&crc->ready, 1);
wake_up(&crc->go);
goto out_finish;
}
@ -1429,13 +1429,13 @@ static int load_image_lzo(struct swap_map_handle *handle,
}
crc->run_threads = thr;
atomic_set(&crc->ready, 1);
atomic_set_release(&crc->ready, 1);
wake_up(&crc->go);
}
out_finish:
if (crc->run_threads) {
wait_event(crc->done, atomic_read(&crc->stop));
wait_event(crc->done, atomic_read_acquire(&crc->stop));
atomic_set(&crc->stop, 0);
}
stop = ktime_get();
@ -1566,7 +1566,6 @@ int swsusp_check(bool exclusive)
/**
* swsusp_close - close resume device.
* @exclusive: Close the resume device which is exclusively opened.
*/
void swsusp_close(void)
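
A closing note on the ordering changes in this file: each atomic_set_release()
on a ->ready or ->stop flag now pairs with an atomic_read_acquire() in the
thread waiting on it, so the buffers and lengths written before the flag is set
are guaranteed to be visible once the flag is observed. A reduced sketch of
that producer/consumer pairing with invented names (struct example_work and the
example_* helpers are not kernel symbols):

    #include <linux/atomic.h>
    #include <linux/types.h>
    #include <linux/wait.h>

    struct example_work {
            wait_queue_head_t go;
            atomic_t ready;
            unsigned char *buf;
            size_t len;
    };

    static void example_init(struct example_work *w)
    {
            init_waitqueue_head(&w->go);
            atomic_set(&w->ready, 0);
    }

    /* Producer: publish the buffer, then set the flag with release semantics. */
    static void example_submit(struct example_work *w, unsigned char *buf, size_t len)
    {
            w->buf = buf;
            w->len = len;
            atomic_set_release(&w->ready, 1);   /* orders the stores above before the flag */
            wake_up(&w->go);
    }

    /* Consumer: the acquire read pairs with the release store above. */
    static void example_worker(struct example_work *w)
    {
            wait_event(w->go, atomic_read_acquire(&w->ready));
            atomic_set(&w->ready, 0);

            /* w->buf and w->len are guaranteed to be up to date here */
    }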