License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner, Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- the file had no licensing information in it,
- the file was a */uapi/* one with no licensing information in it,
- the file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to licenses
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied to
a file was done in a spreadsheet of side-by-side results of the output of
two independent scanners (ScanCode & Windriver) producing SPDX tag:value
files, created by Philippe Ombredanne. Philippe prepared the base worksheet
and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- The file already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier                              # files
---------------------------------------------------|-------
GPL-2.0                                                11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note", otherwise it was "GPL-2.0" (an illustrative tag of the
former kind is sketched after this list). The results of that were:
SPDX license identifier                              # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note                          930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier                              # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note                          270
GPL-2.0+ WITH Linux-syscall-note                         169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)       21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)       17
LGPL-2.1+ WITH Linux-syscall-note                         15
GPL-1.0+ WITH Linux-syscall-note                          14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)       5
LGPL-2.0+ WITH Linux-syscall-note                          4
LGPL-2.1 WITH Linux-syscall-note                           3
((GPL-2.0 WITH Linux-syscall-note) OR MIT)                 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT)                1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
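For illustration only (this line is not quoted from the patch itself), a
*/uapi/* header tagged per the second patch, as referenced above, would
start with:

/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */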
In total, over 70 hours of logged manual review was done on the
spreadsheet by Kate, Philippe and Thomas to determine the SPDX license
identifiers to apply to the source files, with confirmation in some cases
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were any new insights.
The Windriver scanner is based in part on an older version of FOSSology,
so they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files, Thomas did random spot checks
in about 15,000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors; they have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours doing a detailed manual inspection
and review of the 12,461 files patched in the initial version of this
series, with:
- a full ScanCode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
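For reference, and assuming the kernel's usual SPDX comment conventions
rather than quoting the script's exact output, the two comment types look
like this:

/* In a .c source file, the tag is the first line, in C++ comment style: */
// SPDX-License-Identifier: GPL-2.0

/* In a .h header file, the tag uses a C-style comment: */
/* SPDX-License-Identifier: GPL-2.0 */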
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-01 14:07:57 +00:00
// SPDX-License-Identifier: GPL-2.0
/*
 * Basic Node interface support
 */

#include <linux/module.h>
#include <linux/init.h>
#include <linux/mm.h>
#include <linux/memory.h>
#include <linux/vmstat.h>
#include <linux/notifier.h>
#include <linux/node.h>
#include <linux/hugetlb.h>
#include <linux/compaction.h>
#include <linux/cpumask.h>
#include <linux/topology.h>
#include <linux/nodemask.h>
[PATCH] node hotplug: register cpu: remove node struct
With Goto-san's patch, we can add new pgdat/node at runtime. I'm now
considering node-hot-add with cpu + memory on ACPI.
I found that an ACPI container, which describes a node, can evaluate cpu before
memory. This means cpu-hot-add can occur before memory hot-add.
For the most part, cpu-hot-add doesn't depend on node hot-add. But register_cpu(),
which creates the symbolic link from node to cpu, requires that the node be
onlined before register_cpu() is called. When a node is onlined, its pgdat should
be there.
This patch-set holds off creating the symbolic link from node to cpu
until the node is onlined.
This removes the node arguments from register_cpu().
Currently, register_cpu() requires 'struct node' as its argument. But the array
of struct node is now unified in drivers/base/node.c (by Goto's node hotplug
patch), so we can get the struct node in a generic way and this argument is
no longer necessary.
This patch also guarantees that a cpu is added under a node only when the
node is onlined. This is necessary for the node-hot-add vs. cpu-hot-add
patch following this one.
Moreover, register_cpu() calculates cpu->node_id with cpu_to_node() without
regard to its 'struct node *root' argument, so this patch removes that
argument. Callers of register_cpu()/unregister_cpu(), whose arguments are
changed by the register-cpu-remove-node-struct patch, are also modified.
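A rough sketch of the interface change described above (these prototypes are
reconstructed for illustration and are not part of this message; argument
names are illustrative):

/* before: the caller passed the node device explicitly */
int register_cpu(struct cpu *cpu, int num, struct node *root);

/* after: the node is obtained generically, e.g. via cpu_to_node() */
int register_cpu(struct cpu *cpu, int num);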
[Brice.Goglin@ens-lyon.org: fix it]
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Dave Hansen <haveblue@us.ibm.com>
Signed-off-by: Brice Goglin <Brice.Goglin@ens-lyon.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-06-27 09:53:41 +00:00
#include <linux/cpu.h>
#include <linux/device.h>
#include <linux/pm_runtime.h>
#include <linux/swap.h>
#include <linux/slab.h>

static struct bus_type node_subsys = {
	.name = "node",
	.dev_name = "node",
};

static ssize_t node_read_cpumap(struct device *dev, bool list, char *buf)
{
	ssize_t n;
	cpumask_var_t mask;
	struct node *node_dev = to_node(dev);

	/* 2008/04/07: buf currently PAGE_SIZE, need 9 chars per 32 bits. */
	BUILD_BUG_ON((NR_CPUS/32 * 9) > (PAGE_SIZE-1));

	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
		return 0;

	cpumask_and(mask, cpumask_of_node(node_dev->dev.id), cpu_online_mask);
	n = cpumap_print_to_pagebuf(list, buf, mask);
	free_cpumask_var(mask);

	return n;
}

static inline ssize_t cpumap_show(struct device *dev,
				  struct device_attribute *attr,
				  char *buf)
{
	return node_read_cpumap(dev, false, buf);
}

static DEVICE_ATTR_RO(cpumap);

static inline ssize_t cpulist_show(struct device *dev,
				   struct device_attribute *attr,
				   char *buf)
{
	return node_read_cpumap(dev, true, buf);
}

static DEVICE_ATTR_RO(cpulist);

/**
 * struct node_access_nodes - Access class device to hold user visible
 *			      relationships to other nodes.
 * @dev:	Device for this memory access class
 * @list_node:	List element in the node's access list
 * @access:	The access class rank
 * @hmem_attrs:	Heterogeneous memory performance attributes
 */
struct node_access_nodes {
	struct device		dev;
	struct list_head	list_node;
	unsigned		access;
#ifdef CONFIG_HMEM_REPORTING
	struct node_hmem_attrs	hmem_attrs;
#endif
};
#define to_access_nodes(dev) container_of(dev, struct node_access_nodes, dev)

static struct attribute *node_init_access_node_attrs[] = {
	NULL,
};

static struct attribute *node_targ_access_node_attrs[] = {
	NULL,
};

static const struct attribute_group initiators = {
	.name = "initiators",
	.attrs = node_init_access_node_attrs,
};

static const struct attribute_group targets = {
	.name = "targets",
	.attrs = node_targ_access_node_attrs,
};

static const struct attribute_group *node_access_node_groups[] = {
	&initiators,
	&targets,
	NULL,
};

static void node_remove_accesses(struct node *node)
{
	struct node_access_nodes *c, *cnext;

	list_for_each_entry_safe(c, cnext, &node->access_list, list_node) {
		list_del(&c->list_node);
		device_unregister(&c->dev);
	}
}

static void node_access_release(struct device *dev)
{
	kfree(to_access_nodes(dev));
}

static struct node_access_nodes *node_init_node_access(struct node *node,
						       unsigned access)
{
	struct node_access_nodes *access_node;
	struct device *dev;

	list_for_each_entry(access_node, &node->access_list, list_node)
		if (access_node->access == access)
			return access_node;

	access_node = kzalloc(sizeof(*access_node), GFP_KERNEL);
	if (!access_node)
		return NULL;

	access_node->access = access;
	dev = &access_node->dev;
	dev->parent = &node->dev;
	dev->release = node_access_release;
	dev->groups = node_access_node_groups;
	if (dev_set_name(dev, "access%u", access))
		goto free;

	if (device_register(dev))
		goto free_name;

	pm_runtime_no_callbacks(dev);
	list_add_tail(&access_node->list_node, &node->access_list);
	return access_node;
free_name:
	kfree_const(dev->kobj.name);
free:
	kfree(access_node);
	return NULL;
}
#ifdef CONFIG_HMEM_REPORTING
#define ACCESS_ATTR(name)						\
static ssize_t name##_show(struct device *dev,				\
			   struct device_attribute *attr,		\
			   char *buf)					\
{									\
	return sysfs_emit(buf, "%u\n",					\
			  to_access_nodes(dev)->hmem_attrs.name);	\
}									\
static DEVICE_ATTR_RO(name);

ACCESS_ATTR(read_bandwidth)
ACCESS_ATTR(read_latency)
ACCESS_ATTR(write_bandwidth)
ACCESS_ATTR(write_latency)

static struct attribute *access_attrs[] = {
	&dev_attr_read_bandwidth.attr,
	&dev_attr_read_latency.attr,
	&dev_attr_write_bandwidth.attr,
	&dev_attr_write_latency.attr,
	NULL,
};

/**
 * node_set_perf_attrs - Set the performance values for given access class
 * @nid: Node identifier to be set
 * @hmem_attrs: Heterogeneous memory performance attributes
 * @access: The access class for the given attributes
 */
void node_set_perf_attrs(unsigned int nid, struct node_hmem_attrs *hmem_attrs,
			 unsigned access)
{
	struct node_access_nodes *c;
	struct node *node;
	int i;

	if (WARN_ON_ONCE(!node_online(nid)))
		return;

	node = node_devices[nid];
	c = node_init_node_access(node, access);
	if (!c)
		return;

	c->hmem_attrs = *hmem_attrs;
	for (i = 0; access_attrs[i] != NULL; i++) {
		if (sysfs_add_file_to_group(&c->dev.kobj, access_attrs[i],
					    "initiators")) {
			pr_info("failed to add performance attribute to node %d\n",
				nid);
			break;
		}
	}
}
/**
 * struct node_cache_info - Internal tracking for memory node caches
 * @dev:	Device representing the cache level
 * @node:	List element for tracking in the node
 * @cache_attrs: Attributes for this cache level
 */
struct node_cache_info {
	struct device dev;
	struct list_head node;
	struct node_cache_attrs cache_attrs;
};
#define to_cache_info(device) container_of(device, struct node_cache_info, dev)

#define CACHE_ATTR(name, fmt)						\
static ssize_t name##_show(struct device *dev,				\
			   struct device_attribute *attr,		\
			   char *buf)					\
{									\
	return sysfs_emit(buf, fmt "\n",				\
			  to_cache_info(dev)->cache_attrs.name);	\
}									\
DEVICE_ATTR_RO(name);

CACHE_ATTR(size, "%llu")
CACHE_ATTR(line_size, "%u")
CACHE_ATTR(indexing, "%u")
CACHE_ATTR(write_policy, "%u")

static struct attribute *cache_attrs[] = {
	&dev_attr_indexing.attr,
	&dev_attr_size.attr,
	&dev_attr_line_size.attr,
	&dev_attr_write_policy.attr,
	NULL,
};
ATTRIBUTE_GROUPS(cache);

static void node_cache_release(struct device *dev)
{
	kfree(dev);
}

static void node_cacheinfo_release(struct device *dev)
{
	struct node_cache_info *info = to_cache_info(dev);
	kfree(info);
}

static void node_init_cache_dev(struct node *node)
{
	struct device *dev;

	dev = kzalloc(sizeof(*dev), GFP_KERNEL);
	if (!dev)
		return;

	dev->parent = &node->dev;
	dev->release = node_cache_release;
	if (dev_set_name(dev, "memory_side_cache"))
		goto free_dev;

	if (device_register(dev))
		goto free_name;

	pm_runtime_no_callbacks(dev);
	node->cache_dev = dev;
	return;
free_name:
	kfree_const(dev->kobj.name);
free_dev:
	kfree(dev);
}
/**
 * node_add_cache() - add cache attribute to a memory node
 * @nid: Node identifier that has new cache attributes
 * @cache_attrs: Attributes for the cache being added
 */
void node_add_cache(unsigned int nid, struct node_cache_attrs *cache_attrs)
{
	struct node_cache_info *info;
	struct device *dev;
	struct node *node;

	if (!node_online(nid) || !node_devices[nid])
		return;

	node = node_devices[nid];
	list_for_each_entry(info, &node->cache_attrs, node) {
		if (info->cache_attrs.level == cache_attrs->level) {
			dev_warn(&node->dev,
				"attempt to add duplicate cache level:%d\n",
				cache_attrs->level);
			return;
		}
	}

	if (!node->cache_dev)
		node_init_cache_dev(node);
	if (!node->cache_dev)
		return;

	info = kzalloc(sizeof(*info), GFP_KERNEL);
	if (!info)
		return;

	dev = &info->dev;
	dev->parent = node->cache_dev;
	dev->release = node_cacheinfo_release;
	dev->groups = cache_groups;
	if (dev_set_name(dev, "index%d", cache_attrs->level))
		goto free_cache;

	info->cache_attrs = *cache_attrs;
	if (device_register(dev)) {
		dev_warn(&node->dev, "failed to add cache level:%d\n",
			 cache_attrs->level);
		goto free_name;
	}
	pm_runtime_no_callbacks(dev);
	list_add_tail(&info->node, &node->cache_attrs);
	return;
free_name:
	kfree_const(dev->kobj.name);
free_cache:
	kfree(info);
}

static void node_remove_caches(struct node *node)
{
	struct node_cache_info *info, *next;

	if (!node->cache_dev)
		return;

	list_for_each_entry_safe(info, next, &node->cache_attrs, node) {
		list_del(&info->node);
		device_unregister(&info->dev);
	}
	device_unregister(node->cache_dev);
}

static void node_init_caches(unsigned int nid)
{
	INIT_LIST_HEAD(&node_devices[nid]->cache_attrs);
}
#else
static void node_init_caches(unsigned int nid) { }
static void node_remove_caches(struct node *node) { }
#endif
#define K(x) ((x) << (PAGE_SHIFT - 10))
static ssize_t node_read_meminfo(struct device *dev,
				 struct device_attribute *attr, char *buf)
{
	int len = 0;
	int nid = dev->id;
	struct pglist_data *pgdat = NODE_DATA(nid);
	struct sysinfo i;
	unsigned long sreclaimable, sunreclaimable;

	si_meminfo_node(&i, nid);
	sreclaimable = node_page_state_pages(pgdat, NR_SLAB_RECLAIMABLE_B);
	sunreclaimable = node_page_state_pages(pgdat, NR_SLAB_UNRECLAIMABLE_B);
	len = sysfs_emit_at(buf, len,
			    "Node %d MemTotal:       %8lu kB\n"
			    "Node %d MemFree:        %8lu kB\n"
			    "Node %d MemUsed:        %8lu kB\n"
			    "Node %d Active:         %8lu kB\n"
			    "Node %d Inactive:       %8lu kB\n"
			    "Node %d Active(anon):   %8lu kB\n"
			    "Node %d Inactive(anon): %8lu kB\n"
			    "Node %d Active(file):   %8lu kB\n"
			    "Node %d Inactive(file): %8lu kB\n"
			    "Node %d Unevictable:    %8lu kB\n"
			    "Node %d Mlocked:        %8lu kB\n",
			    nid, K(i.totalram),
			    nid, K(i.freeram),
			    nid, K(i.totalram - i.freeram),
			    nid, K(node_page_state(pgdat, NR_ACTIVE_ANON) +
				   node_page_state(pgdat, NR_ACTIVE_FILE)),
			    nid, K(node_page_state(pgdat, NR_INACTIVE_ANON) +
				   node_page_state(pgdat, NR_INACTIVE_FILE)),
			    nid, K(node_page_state(pgdat, NR_ACTIVE_ANON)),
			    nid, K(node_page_state(pgdat, NR_INACTIVE_ANON)),
			    nid, K(node_page_state(pgdat, NR_ACTIVE_FILE)),
			    nid, K(node_page_state(pgdat, NR_INACTIVE_FILE)),
			    nid, K(node_page_state(pgdat, NR_UNEVICTABLE)),
			    nid, K(sum_zone_node_page_state(nid, NR_MLOCK)));

#ifdef CONFIG_HIGHMEM
	len += sysfs_emit_at(buf, len,
			     "Node %d HighTotal:      %8lu kB\n"
			     "Node %d HighFree:       %8lu kB\n"
			     "Node %d LowTotal:       %8lu kB\n"
			     "Node %d LowFree:        %8lu kB\n",
			     nid, K(i.totalhigh),
			     nid, K(i.freehigh),
			     nid, K(i.totalram - i.totalhigh),
			     nid, K(i.freeram - i.freehigh));
#endif
	len += sysfs_emit_at(buf, len,
			     "Node %d Dirty:          %8lu kB\n"
			     "Node %d Writeback:      %8lu kB\n"
			     "Node %d FilePages:      %8lu kB\n"
			     "Node %d Mapped:         %8lu kB\n"
			     "Node %d AnonPages:      %8lu kB\n"
			     "Node %d Shmem:          %8lu kB\n"
			     "Node %d KernelStack:    %8lu kB\n"
#ifdef CONFIG_SHADOW_CALL_STACK
			     "Node %d ShadowCallStack:%8lu kB\n"
#endif
			     "Node %d PageTables:     %8lu kB\n"
			     "Node %d NFS_Unstable:   %8lu kB\n"
			     "Node %d Bounce:         %8lu kB\n"
			     "Node %d WritebackTmp:   %8lu kB\n"
			     "Node %d KReclaimable:   %8lu kB\n"
			     "Node %d Slab:           %8lu kB\n"
			     "Node %d SReclaimable:   %8lu kB\n"
			     "Node %d SUnreclaim:     %8lu kB\n"
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
			     "Node %d AnonHugePages:  %8lu kB\n"
			     "Node %d ShmemHugePages: %8lu kB\n"
			     "Node %d ShmemPmdMapped: %8lu kB\n"
			     "Node %d FileHugePages:  %8lu kB\n"
			     "Node %d FilePmdMapped:  %8lu kB\n"
#endif
			     ,
			     nid, K(node_page_state(pgdat, NR_FILE_DIRTY)),
			     nid, K(node_page_state(pgdat, NR_WRITEBACK)),
			     nid, K(node_page_state(pgdat, NR_FILE_PAGES)),
			     nid, K(node_page_state(pgdat, NR_FILE_MAPPED)),
			     nid, K(node_page_state(pgdat, NR_ANON_MAPPED)),
			     nid, K(i.sharedram),
			     nid, node_page_state(pgdat, NR_KERNEL_STACK_KB),
#ifdef CONFIG_SHADOW_CALL_STACK
			     nid, node_page_state(pgdat, NR_KERNEL_SCS_KB),
#endif
			     nid, K(sum_zone_node_page_state(nid, NR_PAGETABLE)),
			     nid, 0UL,
			     nid, K(sum_zone_node_page_state(nid, NR_BOUNCE)),
			     nid, K(node_page_state(pgdat, NR_WRITEBACK_TEMP)),
			     nid, K(sreclaimable +
				    node_page_state(pgdat, NR_KERNEL_MISC_RECLAIMABLE)),
			     nid, K(sreclaimable + sunreclaimable),
			     nid, K(sreclaimable),
			     nid, K(sunreclaimable)
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
			     ,
			     nid, K(node_page_state(pgdat, NR_ANON_THPS) *
				    HPAGE_PMD_NR),
			     nid, K(node_page_state(pgdat, NR_SHMEM_THPS) *
				    HPAGE_PMD_NR),
			     nid, K(node_page_state(pgdat, NR_SHMEM_PMDMAPPED) *
				    HPAGE_PMD_NR),
			     nid, K(node_page_state(pgdat, NR_FILE_THPS) *
				    HPAGE_PMD_NR),
			     nid, K(node_page_state(pgdat, NR_FILE_PMDMAPPED) *
				    HPAGE_PMD_NR)
#endif
			     );
	len += hugetlb_report_node_meminfo(nid, buf + len);
	return len;
}

#undef K
static DEVICE_ATTR(meminfo, 0444, node_read_meminfo, NULL);

static ssize_t node_read_numastat(struct device *dev,
				  struct device_attribute *attr, char *buf)
{
drivers core: Use sysfs_emit and sysfs_emit_at for show(device *...) functions
Convert the various sprintf family calls in sysfs device show functions
to sysfs_emit and sysfs_emit_at for PAGE_SIZE buffer safety.
Done with:
$ spatch -sp-file sysfs_emit_dev.cocci --in-place --max-width=80 .
And cocci script:
$ cat sysfs_emit_dev.cocci
@@
identifier d_show;
identifier dev, attr, buf;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
return
- sprintf(buf,
+ sysfs_emit(buf,
...);
...>
}
@@
identifier d_show;
identifier dev, attr, buf;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
return
- snprintf(buf, PAGE_SIZE,
+ sysfs_emit(buf,
...);
...>
}
@@
identifier d_show;
identifier dev, attr, buf;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
return
- scnprintf(buf, PAGE_SIZE,
+ sysfs_emit(buf,
...);
...>
}
@@
identifier d_show;
identifier dev, attr, buf;
expression chr;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
return
- strcpy(buf, chr);
+ sysfs_emit(buf, chr);
...>
}
@@
identifier d_show;
identifier dev, attr, buf;
identifier len;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
len =
- sprintf(buf,
+ sysfs_emit(buf,
...);
...>
return len;
}
@@
identifier d_show;
identifier dev, attr, buf;
identifier len;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
len =
- snprintf(buf, PAGE_SIZE,
+ sysfs_emit(buf,
...);
...>
return len;
}
@@
identifier d_show;
identifier dev, attr, buf;
identifier len;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
len =
- scnprintf(buf, PAGE_SIZE,
+ sysfs_emit(buf,
...);
...>
return len;
}
@@
identifier d_show;
identifier dev, attr, buf;
identifier len;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
<...
- len += scnprintf(buf + len, PAGE_SIZE - len,
+ len += sysfs_emit_at(buf, len,
...);
...>
return len;
}
@@
identifier d_show;
identifier dev, attr, buf;
expression chr;
@@
ssize_t d_show(struct device *dev, struct device_attribute *attr, char *buf)
{
...
- strcpy(buf, chr);
- return strlen(buf);
+ return sysfs_emit(buf, chr);
}
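As a minimal sketch of what such a conversion looks like in practice (the
function and variable names here are made up for illustration, not taken
from the tree):

static ssize_t example_show(struct device *dev, struct device_attribute *attr,
			    char *buf)
{
	/* was: return snprintf(buf, PAGE_SIZE, "%d\n", example_value); */
	return sysfs_emit(buf, "%d\n", example_value);
}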
Signed-off-by: Joe Perches <joe@perches.com>
Link: https://lore.kernel.org/r/3d033c33056d88bbe34d4ddb62afd05ee166ab9a.1600285923.git.joe@perches.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-09-16 20:40:39 +00:00
	return sysfs_emit(buf,
			  "numa_hit %lu\n"
			  "numa_miss %lu\n"
			  "numa_foreign %lu\n"
			  "interleave_hit %lu\n"
			  "local_node %lu\n"
			  "other_node %lu\n",
			  sum_zone_numa_state(dev->id, NUMA_HIT),
			  sum_zone_numa_state(dev->id, NUMA_MISS),
			  sum_zone_numa_state(dev->id, NUMA_FOREIGN),
			  sum_zone_numa_state(dev->id, NUMA_INTERLEAVE_HIT),
			  sum_zone_numa_state(dev->id, NUMA_LOCAL),
			  sum_zone_numa_state(dev->id, NUMA_OTHER));
}
static DEVICE_ATTR(numastat, 0444, node_read_numastat, NULL);

static ssize_t node_read_vmstat(struct device *dev,
				struct device_attribute *attr, char *buf)
{
	int nid = dev->id;
mm, vmstat: add infrastructure for per-node vmstats
Patchset: "Move LRU page reclaim from zones to nodes v9"
This series moves LRUs from the zones to the node. While this is a
current rebase, the test results were based on mmotm as of June 23rd.
Conceptually, this series is simple but there are a lot of details.
Some of the broad motivations for this are;
1. The residency of a page partially depends on what zone the page was
allocated from. This is partially combatted by the fair zone allocation
policy but that is a partial solution that introduces overhead in the
page allocator paths.
2. Currently, reclaim on node 0 behaves slightly differently to node 1. For
example, direct reclaim scans in zonelist order and reclaims even if
the zone is over the high watermark regardless of the age of pages
in that LRU. Kswapd on the other hand starts reclaim on the highest
unbalanced zone. A difference in the distribution of file/anon pages due
to when they were allocated can result in a difference in aging. While
the fair zone allocation policy mitigates some of the problems here, the
page reclaim results on a multi-zone node will always be different from
those on a single-zone node.
3. kswapd and the page allocator scan zones in the opposite order to
avoid interfering with each other but it's sensitive to timing. This
mitigates the page allocator using pages that were allocated very recently
in the ideal case but it's sensitive to timing. When kswapd is allocating
from lower zones then it's great but during the rebalancing of the highest
zone, the page allocator and kswapd interfere with each other. It's worse
if the highest zone is small and difficult to balance.
4. slab shrinkers are node-based which makes it harder to identify the exact
relationship between slab reclaim and LRU reclaim.
The reason we have zone-based reclaim is that we used to have
large highmem zones in common configurations and it was necessary
to quickly find ZONE_NORMAL pages for reclaim. Today, this is much
less of a concern as machines with lots of memory will (or should) use
64-bit kernels. Combinations of 32-bit hardware and 64-bit hardware are
rare. Machines that do use highmem should have relatively lower highmem:lowmem
ratios than we worried about in the past.
Conceptually, moving to node LRUs should be easier to understand. The
page allocator plays fewer tricks to game reclaim and reclaim behaves
similarly on all nodes.
The series has been tested on a 16 core UMA machine and a 2-socket 48
core NUMA machine. The UMA results are presented in most cases as the NUMA
machine behaved similarly.
pagealloc
---------
This is a microbenchmark that shows the benefit of removing the fair zone
allocation policy. It was tested up to order-4 but only orders 0 and 1 are
shown as the other orders were comparable.
4.7.0-rc4 4.7.0-rc4
mmotm-20160623 nodelru-v9
Min total-odr0-1 490.00 ( 0.00%) 457.00 ( 6.73%)
Min total-odr0-2 347.00 ( 0.00%) 329.00 ( 5.19%)
Min total-odr0-4 288.00 ( 0.00%) 273.00 ( 5.21%)
Min total-odr0-8 251.00 ( 0.00%) 239.00 ( 4.78%)
Min total-odr0-16 234.00 ( 0.00%) 222.00 ( 5.13%)
Min total-odr0-32 223.00 ( 0.00%) 211.00 ( 5.38%)
Min total-odr0-64 217.00 ( 0.00%) 208.00 ( 4.15%)
Min total-odr0-128 214.00 ( 0.00%) 204.00 ( 4.67%)
Min total-odr0-256 250.00 ( 0.00%) 230.00 ( 8.00%)
Min total-odr0-512 271.00 ( 0.00%) 269.00 ( 0.74%)
Min total-odr0-1024 291.00 ( 0.00%) 282.00 ( 3.09%)
Min total-odr0-2048 303.00 ( 0.00%) 296.00 ( 2.31%)
Min total-odr0-4096 311.00 ( 0.00%) 309.00 ( 0.64%)
Min total-odr0-8192 316.00 ( 0.00%) 314.00 ( 0.63%)
Min total-odr0-16384 317.00 ( 0.00%) 315.00 ( 0.63%)
Min total-odr1-1 742.00 ( 0.00%) 712.00 ( 4.04%)
Min total-odr1-2 562.00 ( 0.00%) 530.00 ( 5.69%)
Min total-odr1-4 457.00 ( 0.00%) 433.00 ( 5.25%)
Min total-odr1-8 411.00 ( 0.00%) 381.00 ( 7.30%)
Min total-odr1-16 381.00 ( 0.00%) 356.00 ( 6.56%)
Min total-odr1-32 372.00 ( 0.00%) 346.00 ( 6.99%)
Min total-odr1-64 372.00 ( 0.00%) 343.00 ( 7.80%)
Min total-odr1-128 375.00 ( 0.00%) 351.00 ( 6.40%)
Min total-odr1-256 379.00 ( 0.00%) 351.00 ( 7.39%)
Min total-odr1-512 385.00 ( 0.00%) 355.00 ( 7.79%)
Min total-odr1-1024 386.00 ( 0.00%) 358.00 ( 7.25%)
Min total-odr1-2048 390.00 ( 0.00%) 362.00 ( 7.18%)
Min total-odr1-4096 390.00 ( 0.00%) 362.00 ( 7.18%)
Min total-odr1-8192 388.00 ( 0.00%) 363.00 ( 6.44%)
This shows a steady improvement throughout. The primary benefit is from
reduced system CPU usage which is obvious from the overall times;
4.7.0-rc4 4.7.0-rc4
mmotm-20160623 nodelru-v8
User 189.19 191.80
System 2604.45 2533.56
Elapsed 2855.30 2786.39
The vmstats also showed that the fair zone allocation policy was definitely
removed as can be seen here;
4.7.0-rc3 4.7.0-rc3
mmotm-20160623 nodelru-v8
DMA32 allocs 28794729769 0
Normal allocs 48432501431 77227309877
Movable allocs 0 0
tiobench on ext4
----------------
tiobench is a benchmark that artificially benefits if old pages remain resident
while new pages get reclaimed. The fair zone allocation policy mitigates this
problem so pages age fairly. While the benchmark has problems, it is important
that tiobench performance remains constant as it implies that page aging
problems that the fair zone allocation policy fixes are not re-introduced.
4.7.0-rc4 4.7.0-rc4
mmotm-20160623 nodelru-v9
Min PotentialReadSpeed 89.65 ( 0.00%) 90.21 ( 0.62%)
Min SeqRead-MB/sec-1 82.68 ( 0.00%) 82.01 ( -0.81%)
Min SeqRead-MB/sec-2 72.76 ( 0.00%) 72.07 ( -0.95%)
Min SeqRead-MB/sec-4 75.13 ( 0.00%) 74.92 ( -0.28%)
Min SeqRead-MB/sec-8 64.91 ( 0.00%) 65.19 ( 0.43%)
Min SeqRead-MB/sec-16 62.24 ( 0.00%) 62.22 ( -0.03%)
Min RandRead-MB/sec-1 0.88 ( 0.00%) 0.88 ( 0.00%)
Min RandRead-MB/sec-2 0.95 ( 0.00%) 0.92 ( -3.16%)
Min RandRead-MB/sec-4 1.43 ( 0.00%) 1.34 ( -6.29%)
Min RandRead-MB/sec-8 1.61 ( 0.00%) 1.60 ( -0.62%)
Min RandRead-MB/sec-16 1.80 ( 0.00%) 1.90 ( 5.56%)
Min SeqWrite-MB/sec-1 76.41 ( 0.00%) 76.85 ( 0.58%)
Min SeqWrite-MB/sec-2 74.11 ( 0.00%) 73.54 ( -0.77%)
Min SeqWrite-MB/sec-4 80.05 ( 0.00%) 80.13 ( 0.10%)
Min SeqWrite-MB/sec-8 72.88 ( 0.00%) 73.20 ( 0.44%)
Min SeqWrite-MB/sec-16 75.91 ( 0.00%) 76.44 ( 0.70%)
Min RandWrite-MB/sec-1 1.18 ( 0.00%) 1.14 ( -3.39%)
Min RandWrite-MB/sec-2 1.02 ( 0.00%) 1.03 ( 0.98%)
Min RandWrite-MB/sec-4 1.05 ( 0.00%) 0.98 ( -6.67%)
Min RandWrite-MB/sec-8 0.89 ( 0.00%) 0.92 ( 3.37%)
Min RandWrite-MB/sec-16 0.92 ( 0.00%) 0.93 ( 1.09%)
4.7.0-rc4 4.7.0-rc4
mmotm-20160623 approx-v9
User 645.72 525.90
System 403.85 331.75
Elapsed 6795.36 6783.67
This shows that the series has little or no impact on tiobench, which is
desirable and a reduction in system CPU usage. It indicates that the fair
zone allocation policy was removed in a manner that didn't reintroduce
one class of page aging bug. There were only minor differences in overall
reclaim activity
4.7.0-rc4 4.7.0-rc4
mmotm-20160623 nodelru-v8
Minor Faults 645838 647465
Major Faults 573 640
Swap Ins 0 0
Swap Outs 0 0
DMA allocs 0 0
DMA32 allocs 46041453 44190646
Normal allocs 78053072 79887245
Movable allocs 0 0
Allocation stalls 24 67
Stall zone DMA 0 0
Stall zone DMA32 0 0
Stall zone Normal 0 2
Stall zone HighMem 0 0
Stall zone Movable 0 65
Direct pages scanned 10969 30609
Kswapd pages scanned 93375144 93492094
Kswapd pages reclaimed 93372243 93489370
Direct pages reclaimed 10969 30609
Kswapd efficiency 99% 99%
Kswapd velocity 13741.015 13781.934
Direct efficiency 100% 100%
Direct velocity 1.614 4.512
Percentage direct scans 0% 0%
kswapd activity was roughly comparable. There were differences in direct
reclaim activity but negligible in the context of the overall workload
(velocity of 4 pages per second with the patches applied, 1.6 pages per
second in the baseline kernel).
pgbench read-only large configuration on ext4
---------------------------------------------
pgbench is a database benchmark that can be sensitive to page reclaim
decisions. This also checks if removing the fair zone allocation policy
is safe
pgbench Transactions
4.7.0-rc4 4.7.0-rc4
mmotm-20160623 nodelru-v8
Hmean 1 188.26 ( 0.00%) 189.78 ( 0.81%)
Hmean 5 330.66 ( 0.00%) 328.69 ( -0.59%)
Hmean 12 370.32 ( 0.00%) 380.72 ( 2.81%)
Hmean 21 368.89 ( 0.00%) 369.00 ( 0.03%)
Hmean 30 382.14 ( 0.00%) 360.89 ( -5.56%)
Hmean 32 428.87 ( 0.00%) 432.96 ( 0.95%)
Negligible differences again. As with tiobench, overall reclaim activity
was comparable.
bonnie++ on ext4
----------------
No interesting performance difference, negligible differences on reclaim
stats.
paralleldd on ext4
------------------
This workload uses varying numbers of dd instances to read large amounts of
data from disk.
4.7.0-rc3 4.7.0-rc3
mmotm-20160623 nodelru-v9
Amean Elapsd-1 186.04 ( 0.00%) 189.41 ( -1.82%)
Amean Elapsd-3 192.27 ( 0.00%) 191.38 ( 0.46%)
Amean Elapsd-5 185.21 ( 0.00%) 182.75 ( 1.33%)
Amean Elapsd-7 183.71 ( 0.00%) 182.11 ( 0.87%)
Amean Elapsd-12 180.96 ( 0.00%) 181.58 ( -0.35%)
Amean Elapsd-16 181.36 ( 0.00%) 183.72 ( -1.30%)
4.7.0-rc4 4.7.0-rc4
mmotm-20160623 nodelru-v9
User 1548.01 1552.44
System 8609.71 8515.08
Elapsed 3587.10 3594.54
There is little or no change in performance but some drop in system CPU usage.
4.7.0-rc3 4.7.0-rc3
mmotm-20160623 nodelru-v9
Minor Faults 362662 367360
Major Faults 1204 1143
Swap Ins 22 0
Swap Outs 2855 1029
DMA allocs 0 0
DMA32 allocs 31409797 28837521
Normal allocs 46611853 49231282
Movable allocs 0 0
Direct pages scanned 0 0
Kswapd pages scanned 40845270 40869088
Kswapd pages reclaimed 40830976 40855294
Direct pages reclaimed 0 0
Kswapd efficiency 99% 99%
Kswapd velocity 11386.711 11369.769
Direct efficiency 100% 100%
Direct velocity 0.000 0.000
Percentage direct scans 0% 0%
Page writes by reclaim 2855 1029
Page writes file 0 0
Page writes anon 2855 1029
Page reclaim immediate 771 1628
Sector Reads 293312636 293536360
Sector Writes 18213568 18186480
Page rescued immediate 0 0
Slabs scanned 128257 132747
Direct inode steals 181 56
Kswapd inode steals 59 1131
It basically shows that kswapd was active at roughly the same rate in
both kernels. There was also comparable slab scanning activity and direct
reclaim was avoided in both cases. There appears to be a large difference
in numbers of inodes reclaimed but the workload has few active inodes and
is likely a timing artifact.
stutter
-------
stutter simulates a simple workload. One part uses a lot of anonymous
memory, a second measures mmap latency and a third copies a large file.
The primary metric is checking for mmap latency.
stutter
4.7.0-rc4 4.7.0-rc4
mmotm-20160623 nodelru-v8
Min mmap 16.6283 ( 0.00%) 13.4258 ( 19.26%)
1st-qrtle mmap 54.7570 ( 0.00%) 34.9121 ( 36.24%)
2nd-qrtle mmap 57.3163 ( 0.00%) 46.1147 ( 19.54%)
3rd-qrtle mmap 58.9976 ( 0.00%) 47.1882 ( 20.02%)
Max-90% mmap 59.7433 ( 0.00%) 47.4453 ( 20.58%)
Max-93% mmap 60.1298 ( 0.00%) 47.6037 ( 20.83%)
Max-95% mmap 73.4112 ( 0.00%) 82.8719 (-12.89%)
Max-99% mmap 92.8542 ( 0.00%) 88.8870 ( 4.27%)
Max mmap 1440.6569 ( 0.00%) 121.4201 ( 91.57%)
Mean mmap 59.3493 ( 0.00%) 42.2991 ( 28.73%)
Best99%Mean mmap 57.2121 ( 0.00%) 41.8207 ( 26.90%)
Best95%Mean mmap 55.9113 ( 0.00%) 39.9620 ( 28.53%)
Best90%Mean mmap 55.6199 ( 0.00%) 39.3124 ( 29.32%)
Best50%Mean mmap 53.2183 ( 0.00%) 33.1307 ( 37.75%)
Best10%Mean mmap 45.9842 ( 0.00%) 20.4040 ( 55.63%)
Best5%Mean mmap 43.2256 ( 0.00%) 17.9654 ( 58.44%)
Best1%Mean mmap 32.9388 ( 0.00%) 16.6875 ( 49.34%)
This shows a number of improvements with the worst-case outlier greatly
improved.
Some of the vmstats are interesting
4.7.0-rc4 4.7.0-rc4
mmotm-20160623 nodelru-v8
Swap Ins 163 502
Swap Outs 0 0
DMA allocs 0 0
DMA32 allocs 618719206 1381662383
Normal allocs 891235743 564138421
Movable allocs 0 0
Allocation stalls 2603 1
Direct pages scanned 216787 2
Kswapd pages scanned 50719775 41778378
Kswapd pages reclaimed 41541765 41777639
Direct pages reclaimed 209159 0
Kswapd efficiency 81% 99%
Kswapd velocity 16859.554 14329.059
Direct efficiency 96% 0%
Direct velocity 72.061 0.001
Percentage direct scans 0% 0%
Page writes by reclaim 6215049 0
Page writes file 6215049 0
Page writes anon 0 0
Page reclaim immediate 70673 90
Sector Reads 81940800 81680456
Sector Writes 100158984 98816036
Page rescued immediate 0 0
Slabs scanned 1366954 22683
While this is not guaranteed in all cases, this particular test showed
a large reduction in direct reclaim activity. It's also worth noting
that no page writes were issued from reclaim context.
This series is not without its hazards. There are at least three areas
that I'm concerned with even though I could not reproduce any problems in
those areas.
1. Reclaim/compaction is going to be affected because the amount of reclaim is
no longer targeted at a specific zone. Compaction works on a per-zone basis
so there is no guarantee that reclaiming a few THPs' worth of pages will
have a positive impact on compaction success rates.
2. The Slab/LRU reclaim ratio is affected because the frequency the shrinkers
are called is now different. This may or may not be a problem but if it
is, it'll be because shrinkers are not called enough and some balancing
is required.
3. The anon/file reclaim ratio may be affected. Pages about to be dirtied are
distributed between zones and the fair zone allocation policy used to do
something very similar for anon. The distribution is now different but not
necessarily in any way that matters but it's still worth bearing in mind.
VM statistic counters for reclaim decisions are zone-based. If the kernel
is to reclaim on a per-node basis then we need to track per-node
statistics but there is no infrastructure for that. The most notable
change is that the old node_page_state is renamed to
sum_zone_node_page_state. The new node_page_state takes a pglist_data and
uses per-node stats but none exist yet. There is some renaming such as
vm_stat to vm_zone_stat and the addition of vm_node_stat and the renaming
of mod_state to mod_zone_state. Otherwise, this is mostly a mechanical
patch with no functional change. There is a lot of similarity between the
node and zone helpers which is unfortunate but there was no obvious way of
reusing the code and maintaining type safety.
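As a small illustration of the renamed helpers described above (these two
lines reuse counters that appear in drivers/base/node.c and are not part of
the original message), the zone-summing helper takes a node id while the
per-node helper takes the pglist_data:

	/* sums the per-zone counters across all zones of the node */
	unsigned long mlocked = sum_zone_node_page_state(nid, NR_MLOCK);
	/* reads a true per-node counter from the node's pglist_data */
	unsigned long mapped = node_page_state(NODE_DATA(nid), NR_FILE_MAPPED);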
Link: http://lkml.kernel.org/r/1467970510-21195-2-git-send-email-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Rik van Riel <riel@surriel.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Hillf Danton <hillf.zj@alibaba-inc.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Minchan Kim <minchan@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-28 22:45:24 +00:00
	struct pglist_data *pgdat = NODE_DATA(nid);
	int i;
	int len = 0;

	for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++)
		len += sysfs_emit_at(buf, len, "%s %lu\n",
				     zone_stat_name(i),
				     sum_zone_node_page_state(nid, i));
mm, vmstat: add infrastructure for per-node vmstats
Patchset: "Move LRU page reclaim from zones to nodes v9"
This series moves LRUs from the zones to the node. While this is a
current rebase, the test results were based on mmotm as of June 23rd.
Conceptually, this series is simple but there are a lot of details.
Some of the broad motivations for this are;
1. The residency of a page partially depends on what zone the page was
allocated from. This is partially combatted by the fair zone allocation
policy but that is a partial solution that introduces overhead in the
page allocator paths.
2. Currently, reclaim on node 0 behaves slightly different to node 1. For
example, direct reclaim scans in zonelist order and reclaims even if
the zone is over the high watermark regardless of the age of pages
in that LRU. Kswapd on the other hand starts reclaim on the highest
unbalanced zone. A difference in distribution of file/anon pages due
to when they were allocated results can result in a difference in
again. While the fair zone allocation policy mitigates some of the
problems here, the page reclaim results on a multi-zone node will
always be different to a single-zone node.
it was scheduled on as a result.
3. kswapd and the page allocator scan zones in the opposite order to
avoid interfering with each other but it's sensitive to timing. This
mitigates the page allocator using pages that were allocated very recently
in the ideal case but it's sensitive to timing. When kswapd is allocating
from lower zones then it's great but during the rebalancing of the highest
zone, the page allocator and kswapd interfere with each other. It's worse
if the highest zone is small and difficult to balance.
4. slab shrinkers are node-based which makes it harder to identify the exact
relationship between slab reclaim and LRU reclaim.
The reason we have zone-based reclaim is that we used to have
large highmem zones in common configurations and it was necessary
to quickly find ZONE_NORMAL pages for reclaim. Today, this is much
less of a concern as machines with lots of memory will (or should) use
64-bit kernels. Combinations of 32-bit hardware and 64-bit hardware are
rare. Machines that do use highmem should have relatively low highmem:lowmem
ratios than we worried about in the past.
Conceptually, moving to node LRUs should be easier to understand. The
page allocator plays fewer tricks to game reclaim and reclaim behaves
similarly on all nodes.
The series has been tested on a 16 core UMA machine and a 2-socket 48
core NUMA machine. The UMA results are presented in most cases as the NUMA
machine behaved similarly.
pagealloc
---------
This is a microbenchmark that shows the benefit of removing the fair zone
allocation policy. It was tested uip to order-4 but only orders 0 and 1 are
shown as the other orders were comparable.
4.7.0-rc4 4.7.0-rc4
mmotm-20160623 nodelru-v9
Min total-odr0-1 490.00 ( 0.00%) 457.00 ( 6.73%)
Min total-odr0-2 347.00 ( 0.00%) 329.00 ( 5.19%)
Min total-odr0-4 288.00 ( 0.00%) 273.00 ( 5.21%)
Min total-odr0-8 251.00 ( 0.00%) 239.00 ( 4.78%)
Min total-odr0-16 234.00 ( 0.00%) 222.00 ( 5.13%)
Min total-odr0-32 223.00 ( 0.00%) 211.00 ( 5.38%)
Min total-odr0-64 217.00 ( 0.00%) 208.00 ( 4.15%)
Min total-odr0-128 214.00 ( 0.00%) 204.00 ( 4.67%)
Min total-odr0-256 250.00 ( 0.00%) 230.00 ( 8.00%)
Min total-odr0-512 271.00 ( 0.00%) 269.00 ( 0.74%)
Min total-odr0-1024 291.00 ( 0.00%) 282.00 ( 3.09%)
Min total-odr0-2048 303.00 ( 0.00%) 296.00 ( 2.31%)
Min total-odr0-4096 311.00 ( 0.00%) 309.00 ( 0.64%)
Min total-odr0-8192 316.00 ( 0.00%) 314.00 ( 0.63%)
Min total-odr0-16384 317.00 ( 0.00%) 315.00 ( 0.63%)
Min total-odr1-1 742.00 ( 0.00%) 712.00 ( 4.04%)
Min total-odr1-2 562.00 ( 0.00%) 530.00 ( 5.69%)
Min total-odr1-4 457.00 ( 0.00%) 433.00 ( 5.25%)
Min total-odr1-8 411.00 ( 0.00%) 381.00 ( 7.30%)
Min total-odr1-16 381.00 ( 0.00%) 356.00 ( 6.56%)
Min total-odr1-32 372.00 ( 0.00%) 346.00 ( 6.99%)
Min total-odr1-64 372.00 ( 0.00%) 343.00 ( 7.80%)
Min total-odr1-128 375.00 ( 0.00%) 351.00 ( 6.40%)
Min total-odr1-256 379.00 ( 0.00%) 351.00 ( 7.39%)
Min total-odr1-512 385.00 ( 0.00%) 355.00 ( 7.79%)
Min total-odr1-1024 386.00 ( 0.00%) 358.00 ( 7.25%)
Min total-odr1-2048 390.00 ( 0.00%) 362.00 ( 7.18%)
Min total-odr1-4096 390.00 ( 0.00%) 362.00 ( 7.18%)
Min total-odr1-8192 388.00 ( 0.00%) 363.00 ( 6.44%)
This shows a steady improvement throughout. The primary benefit is from
reduced system CPU usage which is obvious from the overall times;
4.7.0-rc4 4.7.0-rc4
mmotm-20160623nodelru-v8
User 189.19 191.80
System 2604.45 2533.56
Elapsed 2855.30 2786.39
The vmstats also showed that the fair zone allocation policy was definitely
removed as can be seen here;
4.7.0-rc3 4.7.0-rc3
mmotm-20160623 nodelru-v8
DMA32 allocs 28794729769 0
Normal allocs 48432501431 77227309877
Movable allocs 0 0
tiobench on ext4
----------------
tiobench is a benchmark that artifically benefits if old pages remain resident
while new pages get reclaimed. The fair zone allocation policy mitigates this
problem so pages age fairly. While the benchmark has problems, it is important
that tiobench performance remains constant as it implies that page aging
problems that the fair zone allocation policy fixes are not re-introduced.
4.7.0-rc4 4.7.0-rc4
mmotm-20160623 nodelru-v9
Min PotentialReadSpeed 89.65 ( 0.00%) 90.21 ( 0.62%)
Min SeqRead-MB/sec-1 82.68 ( 0.00%) 82.01 ( -0.81%)
Min SeqRead-MB/sec-2 72.76 ( 0.00%) 72.07 ( -0.95%)
Min SeqRead-MB/sec-4 75.13 ( 0.00%) 74.92 ( -0.28%)
Min SeqRead-MB/sec-8 64.91 ( 0.00%) 65.19 ( 0.43%)
Min SeqRead-MB/sec-16 62.24 ( 0.00%) 62.22 ( -0.03%)
Min RandRead-MB/sec-1 0.88 ( 0.00%) 0.88 ( 0.00%)
Min RandRead-MB/sec-2 0.95 ( 0.00%) 0.92 ( -3.16%)
Min RandRead-MB/sec-4 1.43 ( 0.00%) 1.34 ( -6.29%)
Min RandRead-MB/sec-8 1.61 ( 0.00%) 1.60 ( -0.62%)
Min RandRead-MB/sec-16 1.80 ( 0.00%) 1.90 ( 5.56%)
Min SeqWrite-MB/sec-1 76.41 ( 0.00%) 76.85 ( 0.58%)
Min SeqWrite-MB/sec-2 74.11 ( 0.00%) 73.54 ( -0.77%)
Min SeqWrite-MB/sec-4 80.05 ( 0.00%) 80.13 ( 0.10%)
Min SeqWrite-MB/sec-8 72.88 ( 0.00%) 73.20 ( 0.44%)
Min SeqWrite-MB/sec-16 75.91 ( 0.00%) 76.44 ( 0.70%)
Min RandWrite-MB/sec-1 1.18 ( 0.00%) 1.14 ( -3.39%)
Min RandWrite-MB/sec-2 1.02 ( 0.00%) 1.03 ( 0.98%)
Min RandWrite-MB/sec-4 1.05 ( 0.00%) 0.98 ( -6.67%)
Min RandWrite-MB/sec-8 0.89 ( 0.00%) 0.92 ( 3.37%)
Min RandWrite-MB/sec-16 0.92 ( 0.00%) 0.93 ( 1.09%)
4.7.0-rc4 4.7.0-rc4
mmotm-20160623 approx-v9
User 645.72 525.90
System 403.85 331.75
Elapsed 6795.36 6783.67
This shows that the series has little or not impact on tiobench which is
desirable and a reduction in system CPU usage. It indicates that the fair
zone allocation policy was removed in a manner that didn't reintroduce
one class of page aging bug. There were only minor differences in overall
reclaim activity
4.7.0-rc4 4.7.0-rc4
mmotm-20160623nodelru-v8
Minor Faults 645838 647465
Major Faults 573 640
Swap Ins 0 0
Swap Outs 0 0
DMA allocs 0 0
DMA32 allocs 46041453 44190646
Normal allocs 78053072 79887245
Movable allocs 0 0
Allocation stalls 24 67
Stall zone DMA 0 0
Stall zone DMA32 0 0
Stall zone Normal 0 2
Stall zone HighMem 0 0
Stall zone Movable 0 65
Direct pages scanned 10969 30609
Kswapd pages scanned 93375144 93492094
Kswapd pages reclaimed 93372243 93489370
Direct pages reclaimed 10969 30609
Kswapd efficiency 99% 99%
Kswapd velocity 13741.015 13781.934
Direct efficiency 100% 100%
Direct velocity 1.614 4.512
Percentage direct scans 0% 0%
kswapd activity was roughly comparable. There were differences in direct
reclaim activity but negligible in the context of the overall workload
(velocity of 4 pages per second with the patches applied, 1.6 pages per
second in the baseline kernel).
pgbench read-only large configuration on ext4
---------------------------------------------
pgbench is a database benchmark that can be sensitive to page reclaim
decisions. This also checks if removing the fair zone allocation policy
is safe
pgbench Transactions
4.7.0-rc4 4.7.0-rc4
mmotm-20160623 nodelru-v8
Hmean 1 188.26 ( 0.00%) 189.78 ( 0.81%)
Hmean 5 330.66 ( 0.00%) 328.69 ( -0.59%)
Hmean 12 370.32 ( 0.00%) 380.72 ( 2.81%)
Hmean 21 368.89 ( 0.00%) 369.00 ( 0.03%)
Hmean 30 382.14 ( 0.00%) 360.89 ( -5.56%)
Hmean 32 428.87 ( 0.00%) 432.96 ( 0.95%)
Negligible differences again. As with tiobench, overall reclaim activity
was comparable.
bonnie++ on ext4
----------------
No interesting performance difference, negligible differences on reclaim
stats.
paralleldd on ext4
------------------
This workload uses varying numbers of dd instances to read large amounts of
data from disk.
4.7.0-rc3 4.7.0-rc3
mmotm-20160623 nodelru-v9
Amean Elapsd-1 186.04 ( 0.00%) 189.41 ( -1.82%)
Amean Elapsd-3 192.27 ( 0.00%) 191.38 ( 0.46%)
Amean Elapsd-5 185.21 ( 0.00%) 182.75 ( 1.33%)
Amean Elapsd-7 183.71 ( 0.00%) 182.11 ( 0.87%)
Amean Elapsd-12 180.96 ( 0.00%) 181.58 ( -0.35%)
Amean Elapsd-16 181.36 ( 0.00%) 183.72 ( -1.30%)
4.7.0-rc4 4.7.0-rc4
mmotm-20160623 nodelru-v9
User 1548.01 1552.44
System 8609.71 8515.08
Elapsed 3587.10 3594.54
There is little or no change in performance but some drop in system CPU usage.
4.7.0-rc3 4.7.0-rc3
mmotm-20160623 nodelru-v9
Minor Faults 362662 367360
Major Faults 1204 1143
Swap Ins 22 0
Swap Outs 2855 1029
DMA allocs 0 0
DMA32 allocs 31409797 28837521
Normal allocs 46611853 49231282
Movable allocs 0 0
Direct pages scanned 0 0
Kswapd pages scanned 40845270 40869088
Kswapd pages reclaimed 40830976 40855294
Direct pages reclaimed 0 0
Kswapd efficiency 99% 99%
Kswapd velocity 11386.711 11369.769
Direct efficiency 100% 100%
Direct velocity 0.000 0.000
Percentage direct scans 0% 0%
Page writes by reclaim 2855 1029
Page writes file 0 0
Page writes anon 2855 1029
Page reclaim immediate 771 1628
Sector Reads 293312636 293536360
Sector Writes 18213568 18186480
Page rescued immediate 0 0
Slabs scanned 128257 132747
Direct inode steals 181 56
Kswapd inode steals 59 1131
It basically shows that kswapd was active at roughly the same rate in
both kernels. There was also comparable slab scanning activity and direct
reclaim was avoided in both cases. There appears to be a large difference
in numbers of inodes reclaimed but the workload has few active inodes and
is likely a timing artifact.
stutter
-------
stutter simulates a simple workload. One part uses a lot of anonymous
memory, a second measures mmap latency and a third copies a large file.
The primary metric is checking for mmap latency.
stutter
4.7.0-rc4 4.7.0-rc4
mmotm-20160623 nodelru-v8
Min mmap 16.6283 ( 0.00%) 13.4258 ( 19.26%)
1st-qrtle mmap 54.7570 ( 0.00%) 34.9121 ( 36.24%)
2nd-qrtle mmap 57.3163 ( 0.00%) 46.1147 ( 19.54%)
3rd-qrtle mmap 58.9976 ( 0.00%) 47.1882 ( 20.02%)
Max-90% mmap 59.7433 ( 0.00%) 47.4453 ( 20.58%)
Max-93% mmap 60.1298 ( 0.00%) 47.6037 ( 20.83%)
Max-95% mmap 73.4112 ( 0.00%) 82.8719 (-12.89%)
Max-99% mmap 92.8542 ( 0.00%) 88.8870 ( 4.27%)
Max mmap 1440.6569 ( 0.00%) 121.4201 ( 91.57%)
Mean mmap 59.3493 ( 0.00%) 42.2991 ( 28.73%)
Best99%Mean mmap 57.2121 ( 0.00%) 41.8207 ( 26.90%)
Best95%Mean mmap 55.9113 ( 0.00%) 39.9620 ( 28.53%)
Best90%Mean mmap 55.6199 ( 0.00%) 39.3124 ( 29.32%)
Best50%Mean mmap 53.2183 ( 0.00%) 33.1307 ( 37.75%)
Best10%Mean mmap 45.9842 ( 0.00%) 20.4040 ( 55.63%)
Best5%Mean mmap 43.2256 ( 0.00%) 17.9654 ( 58.44%)
Best1%Mean mmap 32.9388 ( 0.00%) 16.6875 ( 49.34%)
This shows a number of improvements with the worst-case outlier greatly
improved.
Some of the vmstats are interesting
4.7.0-rc4 4.7.0-rc4
mmotm-20160623nodelru-v8
Swap Ins 163 502
Swap Outs 0 0
DMA allocs 0 0
DMA32 allocs 618719206 1381662383
Normal allocs 891235743 564138421
Movable allocs 0 0
Allocation stalls 2603 1
Direct pages scanned 216787 2
Kswapd pages scanned 50719775 41778378
Kswapd pages reclaimed 41541765 41777639
Direct pages reclaimed 209159 0
Kswapd efficiency 81% 99%
Kswapd velocity 16859.554 14329.059
Direct efficiency 96% 0%
Direct velocity 72.061 0.001
Percentage direct scans 0% 0%
Page writes by reclaim 6215049 0
Page writes file 6215049 0
Page writes anon 0 0
Page reclaim immediate 70673 90
Sector Reads 81940800 81680456
Sector Writes 100158984 98816036
Page rescued immediate 0 0
Slabs scanned 1366954 22683
While this is not guaranteed in all cases, this particular test showed
a large reduction in direct reclaim activity. It's also worth noting
that no page writes were issued from reclaim context.
This series is not without its hazards. There are at least three areas
that I'm concerned with, even though I could not reproduce any problems in
those areas.
1. Reclaim/compaction is going to be affected because the amount of reclaim is
   no longer targeted at a specific zone. Compaction works on a per-zone basis,
   so there is no guarantee that reclaiming a few THPs' worth of pages will
   have a positive impact on compaction success rates.
2. The Slab/LRU reclaim ratio is affected because the frequency at which the
   shrinkers are called is now different. This may or may not be a problem, but
   if it is, it will be because shrinkers are not called enough and some
   balancing is required.
3. The anon/file reclaim ratio may be affected. Pages about to be dirtied are
   distributed between zones, and the fair zone allocation policy used to do
   something very similar for anon. The distribution is now different, but not
   necessarily in a way that matters; it is still worth bearing in mind.
VM statistic counters for reclaim decisions are zone-based. If the kernel
is to reclaim on a per-node basis then we need to track per-node
statistics but there is no infrastructure for that. The most notable
change is that the old node_page_state is renamed to
sum_zone_node_page_state. The new node_page_state takes a pglist_data and
uses per-node stats but none exist yet. There is some renaming such as
vm_stat to vm_zone_stat and the addition of vm_node_stat and the renaming
of mod_state to mod_zone_state. Otherwise, this is mostly a mechanical
patch with no functional change. There is a lot of similarity between the
node and zone helpers which is unfortunate but there was no obvious way of
reusing the code and maintaining type safety.
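As a rough illustration of the split described above, a caller that wants a
node-wide figure can either sum the zone counters belonging to that node or,
once per-node counters exist, read them straight from the pglist_data. The
sketch below only assumes the helper names given in this paragraph; the stat
items used are examples, not a statement about which items live where.

	/* Illustrative only: contrasting the two helpers named above. */
	static void report_node_stats(int nid)
	{
		pg_data_t *pgdat = NODE_DATA(nid);

		/* zone-backed: sum the per-zone counters belonging to this node */
		unsigned long free = sum_zone_node_page_state(nid, NR_FREE_PAGES);

		/* node-backed: read a per-node counter from the pglist_data */
		unsigned long file = node_page_state(pgdat, NR_FILE_PAGES);

		pr_info("node%d: free=%lu file=%lu\n", nid, free, file);
	}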
Link: http://lkml.kernel.org/r/1467970510-21195-2-git-send-email-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Rik van Riel <riel@surriel.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Hillf Danton <hillf.zj@alibaba-inc.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Minchan Kim <minchan@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-07-28 22:45:24 +00:00
mm: change the call sites of numa statistics items
Patch series "Separate NUMA statistics from zone statistics", v2.
Each page allocation updates a set of per-zone statistics with a call to
zone_statistics(). As discussed at the 2017 MM summit, these are a
substantial source of overhead in the page allocator and are very rarely
consumed. The overhead comes from cache bouncing when the zone (NUMA
associated) counters are updated in parallel during multi-threaded page
allocation (pointed out by Dave Hansen).
A link to the MM summit slides:
http://people.netfilter.org/hawk/presentations/MM-summit2017/MM-summit2017-JesperBrouer.pdf
To mitigate this overhead, this patchset separates NUMA statistics from the
zone statistics framework and raises the NUMA counter threshold to a fixed
size of MAX_U16 - 2, since a small threshold greatly increases how often the
local per-cpu counters are folded into the global counter (suggested by
Ying Huang). The rationale is that these statistics counters don't need to
be read often, unlike other VM counters, so it's not a problem to use a
large threshold and make readers more expensive.
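A minimal sketch of the per-cpu delta plus large-threshold scheme described
here; the type and function names below are invented for illustration and are
not the kernel's actual implementation.

	#define NUMA_STATS_THRESHOLD	(U16_MAX - 2)	/* readers pay, hot path doesn't */

	struct numa_pcpu_stat {
		u16 diff;		/* per-cpu pending delta */
	};

	static void inc_numa_stat(struct numa_pcpu_stat *pcp, atomic_long_t *global)
	{
		u16 v = ++pcp->diff;

		/*
		 * Fold into the shared counter only when the large threshold is
		 * hit, so the common case touches only a cpu-local cacheline.
		 */
		if (unlikely(v > NUMA_STATS_THRESHOLD)) {
			atomic_long_add(v, global);
			pcp->diff = 0;
		}
	}

A reader that wants an exact value then has to add the global counter and every
cpu's pending diff, which is exactly the cost the paragraph above accepts.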
With this patchset, we see a 31.3% drop in CPU cycles (537 --> 369, see
below) per single page allocation and reclaim on Jesper's page_bench03
benchmark. Meanwhile, this patchset keeps the same style of virtual memory
statistics with little end-user-visible effect (the NUMA stats are only
moved to appear after the zone page stats; see the first patch for details).
I ran an experiment of concurrent single page allocation and reclaim using
Jesper's page_bench03 benchmark on a 2-socket Broadwell-based server (88
processors with 126G of memory) with different sizes of the pcp counter
threshold.
Benchmark provided by Jesper D. Brouer (loop count increased to 10000000):
https://github.com/netoptimizer/prototype-kernel/tree/master/kernel/mm/bench
Threshold CPU cycles Throughput(88 threads)
32 799 241760478
64 640 301628829
125 537 358906028 <==> system by default
256 468 412397590
512 428 450550704
4096 399 482520943
20000 394 489009617
30000 395 488017817
65533 369(-31.3%) 521661345(+45.3%) <==> with this patchset
N/A 342(-36.3%) 562900157(+56.8%) <==> disable zone_statistics
This patch (of 3):
In this patch, the NUMA statistics are separated from the zone statistics
framework, and all the call sites of NUMA stats are changed to use
numa-stats-specific functions. There is no functional change except that
the NUMA stats are shown after the zone page stats when users *read* the
zone info.
E.g. cat /proc/zoneinfo
***Base*** ***With this patch***
nr_free_pages 3976 nr_free_pages 3976
nr_zone_inactive_anon 0 nr_zone_inactive_anon 0
nr_zone_active_anon 0 nr_zone_active_anon 0
nr_zone_inactive_file 0 nr_zone_inactive_file 0
nr_zone_active_file 0 nr_zone_active_file 0
nr_zone_unevictable 0 nr_zone_unevictable 0
nr_zone_write_pending 0 nr_zone_write_pending 0
nr_mlock 0 nr_mlock 0
nr_page_table_pages 0 nr_page_table_pages 0
nr_kernel_stack 0 nr_kernel_stack 0
nr_bounce 0 nr_bounce 0
nr_zspages 0 nr_zspages 0
numa_hit 0 *nr_free_cma 0*
numa_miss 0 numa_hit 0
numa_foreign 0 numa_miss 0
numa_interleave 0 numa_foreign 0
numa_local 0 numa_interleave 0
numa_other 0 numa_local 0
*nr_free_cma 0* numa_other 0
... ...
vm stats threshold: 10 vm stats threshold: 10
... ...
The next patch updates the numa stats counter size and threshold.
[akpm@linux-foundation.org: coding-style fixes]
Link: http://lkml.kernel.org/r/1503568801-21305-2-git-send-email-kemi.wang@intel.com
Signed-off-by: Kemi Wang <kemi.wang@intel.com>
Reported-by: Jesper Dangaard Brouer <brouer@redhat.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Andi Kleen <andi.kleen@intel.com>
Cc: Ying Huang <ying.huang@intel.com>
Cc: Aaron Lu <aaron.lu@intel.com>
Cc: Tim Chen <tim.c.chen@intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-09-08 23:12:48 +00:00
#ifdef CONFIG_NUMA
	for (i = 0; i < NR_VM_NUMA_STAT_ITEMS; i++)
		len += sysfs_emit_at(buf, len, "%s %lu\n",
				     numa_stat_name(i),
				     sum_zone_numa_state(nid, i));
#endif

	for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
		len += sysfs_emit_at(buf, len, "%s %lu\n",
				     node_stat_name(i),
				     node_page_state_pages(pgdat, i));

	return len;
}
static DEVICE_ATTR(vmstat, 0444, node_read_vmstat, NULL);

static ssize_t node_read_distance(struct device *dev,
				  struct device_attribute *attr, char *buf)
{
	int nid = dev->id;
	int len = 0;
	int i;

	/*
	 * buf is currently PAGE_SIZE in length and each node needs 4 chars
	 * at the most (distance + space or newline).
	 */
	BUILD_BUG_ON(MAX_NUMNODES * 4 > PAGE_SIZE);

	for_each_online_node(i) {
		len += sysfs_emit_at(buf, len, "%s%d",
				     i ? " " : "", node_distance(nid, i));
	}

	len += sysfs_emit_at(buf, len, "\n");
	return len;
}
static DEVICE_ATTR(distance, 0444, node_read_distance, NULL);

static struct attribute *node_dev_attrs[] = {
	&dev_attr_cpumap.attr,
	&dev_attr_cpulist.attr,
	&dev_attr_meminfo.attr,
	&dev_attr_numastat.attr,
	&dev_attr_distance.attr,
	&dev_attr_vmstat.attr,
	NULL
};
ATTRIBUTE_GROUPS(node_dev);

#ifdef CONFIG_HUGETLBFS
/*
 * hugetlbfs per node attributes registration interface:
 * When/if hugetlb[fs] subsystem initializes [sometime after this module],
 * it will register its per node attributes for all online nodes with
 * memory. It will also call register_hugetlbfs_with_node(), below, to
 * register its attribute registration functions with this node driver.
 * Once these hooks have been initialized, the node driver will call into
 * the hugetlb module to [un]register attributes for hot-plugged nodes.
 */
static node_registration_func_t __hugetlb_register_node;
static node_registration_func_t __hugetlb_unregister_node;

static inline bool hugetlb_register_node(struct node *node)
{
	if (__hugetlb_register_node &&
			node_state(node->dev.id, N_MEMORY)) {
		__hugetlb_register_node(node);
		return true;
	}
	return false;
}

static inline void hugetlb_unregister_node(struct node *node)
{
	if (__hugetlb_unregister_node)
		__hugetlb_unregister_node(node);
}

void register_hugetlbfs_with_node(node_registration_func_t doregister,
				  node_registration_func_t unregister)
{
	__hugetlb_register_node = doregister;
	__hugetlb_unregister_node = unregister;
}
#else
static inline void hugetlb_register_node(struct node *node) {}

static inline void hugetlb_unregister_node(struct node *node) {}
#endif
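
/*
 * Illustrative client of the hook interface above (hugetlbfs plays this role
 * in practice).  Everything below is a made-up example: only
 * register_hugetlbfs_with_node() and node_registration_func_t come from the
 * code above.
 */
static void example_hstate_register_node(struct node *node)
{
	/* create this node's hstate sysfs attributes */
}

static void example_hstate_unregister_node(struct node *node)
{
	/* tear those attributes back down */
}

static int __init example_hstate_sysfs_init(void)
{
	register_hugetlbfs_with_node(example_hstate_register_node,
				     example_hstate_unregister_node);
	return 0;
}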

static void node_device_release(struct device *dev)
{
	struct node *node = to_node(dev);

#if defined(CONFIG_MEMORY_HOTPLUG_SPARSE) && defined(CONFIG_HUGETLBFS)
	/*
	 * We schedule the work only when a memory section is
	 * onlined/offlined on this node. When we come here,
	 * all the memory on this node has been offlined,
	 * so we won't enqueue new work to this work.
	 *
	 * The work is using node->node_work, so we should
	 * flush work before freeing the memory.
	 */
	flush_work(&node->node_work);
#endif
	kfree(node);
}

/*
 * register_node - Setup a sysfs device for a node.
 * @num - Node number to use when creating the device.
 *
 * Initialize and register the node device.
 */
static int register_node(struct node *node, int num)
{
	int error;

	node->dev.id = num;
	node->dev.bus = &node_subsys;
	node->dev.release = node_device_release;
	node->dev.groups = node_dev_groups;
	error = device_register(&node->dev);

	if (error)
		put_device(&node->dev);
	else {
		hugetlb_register_node(node);

		compaction_register_node(node);
	}

	return error;
}

/**
 * unregister_node - unregister a node device
 * @node: node going away
 *
 * Unregisters a node device @node. All the devices on the node must be
 * unregistered before calling this function.
 */
void unregister_node(struct node *node)
{
	hugetlb_unregister_node(node);	/* no-op, if memoryless node */
	node_remove_accesses(node);
	node_remove_caches(node);
	device_unregister(&node->dev);
}

struct node *node_devices[MAX_NUMNODES];

[PATCH] node hotplug: register cpu: remove node struct
With Goto-san's patch, we can add a new pgdat/node at runtime. I'm now
considering node hot-add with cpu + memory on ACPI.
I found that the ACPI container, which describes a node, could evaluate cpu
before memory. This means cpu hot-add occurs before memory hot-add.
For the most part, cpu hot-add doesn't depend on node hot-add. But
register_cpu(), which creates the symbolic link from node to cpu, requires
that the node be onlined before register_cpu() is called. When a node is
onlined, its pgdat should be there.
This patch set holds off creating the symbolic link from node to cpu
until the node is onlined.
This removes the node argument from register_cpu().
Currently, register_cpu() requires 'struct node' as its argument, but the
array of struct node is now unified in drivers/base/node.c (by Goto's node
hotplug patch). We can get the struct node in a generic way, so this
argument is no longer necessary.
This patch also guarantees that a cpu is added under a node only when the
node is onlined. It is necessary for the node-hot-add vs. cpu-hot-add patch
following this.
Moreover, register_cpu() calculates cpu->node_id with cpu_to_node() without
regard to its 'struct node *root' argument. This patch removes it.
Also modify callers of register_cpu()/unregister_cpu(), whose arguments are
changed by the register-cpu-remove-node-struct patch.
[Brice.Goglin@ens-lyon.org: fix it]
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Dave Hansen <haveblue@us.ibm.com>
Signed-off-by: Brice Goglin <Brice.Goglin@ens-lyon.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-06-27 09:53:41 +00:00

/*
 * register cpu under node
 */
int register_cpu_under_node(unsigned int cpu, unsigned int nid)
{
	int ret;
	struct device *obj;

	if (!node_online(nid))
		return 0;

	obj = get_cpu_device(cpu);
	if (!obj)
		return 0;

	ret = sysfs_create_link(&node_devices[nid]->dev.kobj,
				&obj->kobj,
				kobject_name(&obj->kobj));
	if (ret)
		return ret;

	return sysfs_create_link(&obj->kobj,
				 &node_devices[nid]->dev.kobj,
				 kobject_name(&node_devices[nid]->dev.kobj));
}

/**
 * register_memory_node_under_compute_node - link memory node to its compute
 *					      node for a given access class.
 * @mem_nid:	Memory node number
 * @cpu_nid:	Cpu node number
 * @access:	Access class to register
 *
 * Description:
 *	For use with platforms that may have separate memory and compute nodes.
 *	This function will export node relationships linking which memory
 *	initiator nodes can access memory targets at a given ranked access
 *	class.
 */
int register_memory_node_under_compute_node(unsigned int mem_nid,
					     unsigned int cpu_nid,
					     unsigned access)
{
	struct node *init_node, *targ_node;
	struct node_access_nodes *initiator, *target;
	int ret;

	if (!node_online(cpu_nid) || !node_online(mem_nid))
		return -ENODEV;

	init_node = node_devices[cpu_nid];
	targ_node = node_devices[mem_nid];
	initiator = node_init_node_access(init_node, access);
	target = node_init_node_access(targ_node, access);
	if (!initiator || !target)
		return -ENOMEM;

	ret = sysfs_add_link_to_group(&initiator->dev.kobj, "targets",
				      &targ_node->dev.kobj,
				      dev_name(&targ_node->dev));
	if (ret)
		return ret;

	ret = sysfs_add_link_to_group(&target->dev.kobj, "initiators",
				      &init_node->dev.kobj,
				      dev_name(&init_node->dev));
	if (ret)
		goto err;

	return 0;
err:
	sysfs_remove_link_from_group(&initiator->dev.kobj, "targets",
				     dev_name(&targ_node->dev));
	return ret;
}
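
/*
 * Hypothetical caller of the helper above, sketching how a firmware parser
 * might publish that cpu node 0 is an access-class-0 initiator for memory
 * node 1.  Nothing below exists in this file; it is an illustration only.
 */
static void example_publish_initiator_link(void)
{
	int rc = register_memory_node_under_compute_node(1, 0, 0);

	if (rc)
		pr_warn("node 1 <-> node 0 access link failed: %d\n", rc);
}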

int unregister_cpu_under_node(unsigned int cpu, unsigned int nid)
{
	struct device *obj;

	if (!node_online(nid))
		return 0;

	obj = get_cpu_device(cpu);
	if (!obj)
		return 0;

	sysfs_remove_link(&node_devices[nid]->dev.kobj,
			  kobject_name(&obj->kobj));
	sysfs_remove_link(&obj->kobj,
			  kobject_name(&node_devices[nid]->dev.kobj));

	return 0;
}

#ifdef CONFIG_MEMORY_HOTPLUG_SPARSE
static int __ref get_nid_for_pfn(unsigned long pfn)
{
	if (!pfn_valid_within(pfn))
		return -1;
#ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
	if (system_state < SYSTEM_RUNNING)
		return early_pfn_to_nid(pfn);
#endif
	return pfn_to_nid(pfn);
}

/* register memory section under specified node if it spans that node */
static int register_mem_sect_under_node(struct memory_block *mem_blk,
					 void *arg)
{
	unsigned long memory_block_pfns = memory_block_size_bytes() / PAGE_SIZE;
	unsigned long start_pfn = section_nr_to_pfn(mem_blk->start_section_nr);
	unsigned long end_pfn = start_pfn + memory_block_pfns - 1;
	int ret, nid = *(int *)arg;
	unsigned long pfn;

	for (pfn = start_pfn; pfn <= end_pfn; pfn++) {
		int page_nid;

		/*
		 * memory block could have several absent sections from start.
		 * skip pfn range from absent section
		 */
		if (!pfn_in_present_section(pfn)) {
			pfn = round_down(pfn + PAGES_PER_SECTION,
					 PAGES_PER_SECTION) - 1;
			continue;
		}

		/*
		 * We need to check if page belongs to nid only for the boot
		 * case, during hotplug we know that all pages in the memory
		 * block belong to the same node.
		 */
		if (system_state == SYSTEM_BOOTING) {
			page_nid = get_nid_for_pfn(pfn);
			if (page_nid < 0)
				continue;
			if (page_nid != nid)
				continue;
		}

		/*
		 * If this memory block spans multiple nodes, we only indicate
		 * the last processed node.
		 */
		mem_blk->nid = nid;

		ret = sysfs_create_link_nowarn(&node_devices[nid]->dev.kobj,
					       &mem_blk->dev.kobj,
					       kobject_name(&mem_blk->dev.kobj));
		if (ret)
			return ret;

		return sysfs_create_link_nowarn(&mem_blk->dev.kobj,
						&node_devices[nid]->dev.kobj,
						kobject_name(&node_devices[nid]->dev.kobj));
	}
	/* mem section does not span the specified node */
	return 0;
}

/*
 * Unregister a memory block device under the node it spans. Memory blocks
 * with multiple nodes cannot be offlined and therefore also never be removed.
 */
void unregister_memory_block_under_nodes(struct memory_block *mem_blk)
{
	if (mem_blk->nid == NUMA_NO_NODE)
		return;

	sysfs_remove_link(&node_devices[mem_blk->nid]->dev.kobj,
			  kobject_name(&mem_blk->dev.kobj));
	sysfs_remove_link(&mem_blk->dev.kobj,
			  kobject_name(&node_devices[mem_blk->nid]->dev.kobj));
}

int link_mem_sections(int nid, unsigned long start_pfn, unsigned long end_pfn)
{
	return walk_memory_blocks(PFN_PHYS(start_pfn),
				  PFN_PHYS(end_pfn - start_pfn), (void *)&nid,
				  register_mem_sect_under_node);
}

#ifdef CONFIG_HUGETLBFS
/*
 * Handle per node hstate attribute [un]registration on transitions
 * to/from memoryless state.
 */
static void node_hugetlb_work(struct work_struct *work)
{
	struct node *node = container_of(work, struct node, node_work);

	/*
	 * We only get here when a node transitions to/from memoryless state.
	 * We can detect which transition occurred by examining whether the
	 * node has memory now. hugetlb_register_node() already checks this
	 * so we try to register the attributes. If that fails, then the
	 * node has transitioned to memoryless, try to unregister the
	 * attributes.
	 */
	if (!hugetlb_register_node(node))
		hugetlb_unregister_node(node);
}

static void init_node_hugetlb_work(int nid)
{
	INIT_WORK(&node_devices[nid]->node_work, node_hugetlb_work);
}

static int node_memory_callback(struct notifier_block *self,
				unsigned long action, void *arg)
{
	struct memory_notify *mnb = arg;
	int nid = mnb->status_change_nid;

	switch (action) {
	case MEM_ONLINE:
	case MEM_OFFLINE:
		/*
		 * offload per node hstate [un]registration to a work thread
		 * when transitioning to/from memoryless state.
		 */
		if (nid != NUMA_NO_NODE)
			schedule_work(&node_devices[nid]->node_work);
		break;

	case MEM_GOING_ONLINE:
	case MEM_GOING_OFFLINE:
	case MEM_CANCEL_ONLINE:
	case MEM_CANCEL_OFFLINE:
	default:
		break;
	}

	return NOTIFY_OK;
}
#endif	/* CONFIG_HUGETLBFS */
#endif /* CONFIG_MEMORY_HOTPLUG_SPARSE */

#if !defined(CONFIG_MEMORY_HOTPLUG_SPARSE) || \
	!defined(CONFIG_HUGETLBFS)
static inline int node_memory_callback(struct notifier_block *self,
				unsigned long action, void *arg)
{
	return NOTIFY_OK;
}

static void init_node_hugetlb_work(int nid) { }

#endif

int __register_one_node(int nid)
{
	int error;
	int cpu;

	node_devices[nid] = kzalloc(sizeof(struct node), GFP_KERNEL);
	if (!node_devices[nid])
		return -ENOMEM;

	error = register_node(node_devices[nid], nid);

	/* link cpu under this node */
	for_each_present_cpu(cpu) {
		if (cpu_to_node(cpu) == nid)
			register_cpu_under_node(cpu, nid);
	}

	INIT_LIST_HEAD(&node_devices[nid]->access_list);
	/* initialize work queue for memory hot plug */
	init_node_hugetlb_work(nid);
	node_init_caches(nid);

	return error;
}

void unregister_one_node(int nid)
{
	if (!node_devices[nid])
		return;

	unregister_node(node_devices[nid]);
	node_devices[nid] = NULL;
}

/*
 * node states attributes
 */

struct node_attr {
	struct device_attribute attr;
	enum node_states state;
};

static ssize_t show_node_state(struct device *dev,
			       struct device_attribute *attr, char *buf)
{
	struct node_attr *na = container_of(attr, struct node_attr, attr);

	return sysfs_emit(buf, "%*pbl\n",
			  nodemask_pr_args(&node_states[na->state]));
}

#define _NODE_ATTR(name, state) \
	{ __ATTR(name, 0444, show_node_state, NULL), state }

static struct node_attr node_state_attr[] = {
	[N_POSSIBLE] = _NODE_ATTR(possible, N_POSSIBLE),
	[N_ONLINE] = _NODE_ATTR(online, N_ONLINE),
	[N_NORMAL_MEMORY] = _NODE_ATTR(has_normal_memory, N_NORMAL_MEMORY),
#ifdef CONFIG_HIGHMEM
	[N_HIGH_MEMORY] = _NODE_ATTR(has_high_memory, N_HIGH_MEMORY),
#endif
	[N_MEMORY] = _NODE_ATTR(has_memory, N_MEMORY),
	[N_CPU] = _NODE_ATTR(has_cpu, N_CPU),
};

static struct attribute *node_state_attrs[] = {
	&node_state_attr[N_POSSIBLE].attr.attr,
	&node_state_attr[N_ONLINE].attr.attr,
	&node_state_attr[N_NORMAL_MEMORY].attr.attr,
#ifdef CONFIG_HIGHMEM
	&node_state_attr[N_HIGH_MEMORY].attr.attr,
#endif
	&node_state_attr[N_MEMORY].attr.attr,
	&node_state_attr[N_CPU].attr.attr,
	NULL
};

static struct attribute_group memory_root_attr_group = {
	.attrs = node_state_attrs,
};

static const struct attribute_group *cpu_root_attr_groups[] = {
	&memory_root_attr_group,
	NULL,
};

#define NODE_CALLBACK_PRI	2	/* lower than SLAB */
static int __init register_node_type(void)
{
	int ret;

	BUILD_BUG_ON(ARRAY_SIZE(node_state_attr) != NR_NODE_STATES);
	BUILD_BUG_ON(ARRAY_SIZE(node_state_attrs)-1 != NR_NODE_STATES);

	ret = subsys_system_register(&node_subsys, cpu_root_attr_groups);
	if (!ret) {
		static struct notifier_block node_memory_callback_nb = {
			.notifier_call = node_memory_callback,
			.priority = NODE_CALLBACK_PRI,
		};
		register_hotmemory_notifier(&node_memory_callback_nb);
	}

	/*
	 * Note: we're not going to unregister the node class if we fail
	 * to register the node state class attribute files.
	 */
	return ret;
}
postcore_initcall(register_node_type);